Digital Ethics by Futurist Gerd Leonhard

Why AI ethics should be top of mind for business leaders (2020-05-25)
“As artificial intelligence continues to shape societies and change economies around the world, business leaders need to be aware of the power of the technology and their responsibility for its ethical deployment, implementation and management.”

Why AI ethics should be top of mind for business leaders
via Instapaper
Amazon was built for the pandemic—and will likely emerge from it stronger than ever (2020-05-23)
“Well before the pandemic, Amazon was using its digital might to increasingly insinuate itself into our lives. Some 150 million Prime members—a number that grew by 50 million in less than two years—order clothing, staples, and electronics from the e-commerce giant, watch original Prime Video movies and TV shows, and listen to music on Amazon’s streaming media channels. Even consumers who don’t actively use Amazon’s website to shop spend much of their digital lives using services like Netflix that run on Amazon’s ubiquitous AWS servers.

Now the pandemic has accelerated these trends more than anyone could have imagined—and America’s increased reliance on Amazon’s services is likely to stick.”

Amazon was built for the pandemic—and will likely emerge from it stronger than ever
via Instapaper
Opinion | Telecommuting is not the future (2020-05-23)
“A company could even save a few bucks on office space!

In behavioral psychology, there’s a concept called the recency bias — we recall with most immediacy the recent past. Right now, keeping workers safe from the coronavirus and other illnesses is a primary consideration for employers. And if you read articles predicting the future of telework, you’ll see that people are assuming it will be the same going forward.

But mercifully, this is unlikely to be true. And this is when the employer will likely remember that money spent on real estate is often money well spent.”

Opinion | Telecommuting is not the future
via Instapaper
The world's 2,153 billionaires have more wealth than 4.6 billion people combined, Oxfam says (2020-02-15)
“Other suggestions made by Oxfam to help mitigate inequality included investing in national care systems, challenging sexism, introducing laws to protect carers’ rights, and ending extreme wealth.

“Extreme wealth is a sign of a failing economic system,” the report said. “Governments must take steps to radically reduce the gap between the rich and the rest of society and prioritize the wellbeing of all citizens over unsustainable growth and profit.”

The call for a tax overhaul reinforces the charity’s message ahead of last year’s WEF summit, when Oxfam urged governments to hike tax rates for corporations and society’s richest to reduce wealth disparity.”

The world's 2,153 billionaires have more wealth than 4.6 billion people combined, Oxfam says
via Instapaper
How Big Tech Manipulates Academia to Avoid Regulation (2019-12-28)
“Meanwhile, corporations have tried to shift the discussion to focus on voluntary “ethical principles,” “responsible practices,” and technical adjustments or “safeguards” framed in terms of “bias” and “fairness” (e.g., requiring or encouraging police to adopt “unbiased” or “fair” facial recognition). In January 2018, Microsoft published its “ethical principles” for AI, starting with “fairness.” In May, Facebook announced its “commitment to the ethical development and deployment of AI” and a tool to “search for bias” called “Fairness Flow.” In June, Google published its “responsible practices” for AI research and development. In September, IBM announced a tool called “AI Fairness 360,” designed to “check for unwanted bias in datasets and machine learning models.” In January 2019, Facebook granted $7.5 million for the creation of an AI ethics center in Munich, Germany. In March, Amazon co-sponsored a $20 million program on “fairness in AI” with the U.S. National Science Foundation. In April, Google canceled its AI ethics council after backlash over the selection of Kay Coles James, the vocally anti-trans president of the right-wing Heritage Foundation. These corporate initiatives frequently cited academic research that Ito had supported, at least partially, through the MIT-Harvard fund.”

How Big Tech Manipulates Academia to Avoid Regulation
via Instapaper
Opinion | Our Brains Are No Match for Our Technology (2019-12-07)
“Content algorithms would continue to drive us down rabbit holes toward extremism and conspiracy theories, since automating recommendations is cheaper than paying human editors to decide what’s worth our time. And radical content, incubated in insular online communities, would continue to inspire mass shootings.

By influencing two billion brains in these ways, today’s social media holds the pen of world history: The forces it has unleashed will affect future elections and even our ability to tell fact from fiction, increasing the divisions within society.”

Opinion | Our Brains Are No Match for Our Technology
via Instapaper

Singapore wants widespread AI use in smart nation drive | ZDNet (2019-11-19)
“Domestically, our private and public sectors will use AI decisively to generate economic gains and improve lives. Internationally, Singapore will be recognised as a global hub in innovating, piloting, test-bedding, deploying and scaling AI solutions for impact,” said the SNDGO, which is part of the Prime Minister's Office.

To kick off its efforts, the government identified five national projects that focused on key industry challenges, including intelligent freight planning in transport and logistics, chronic disease prediction and management in healthcare, and border clearance operations in national safety and security. These form part of nine sectors that have been earmarked for heightened deployment as AI is expected to generate high social and economic value for Singapore. These verticals include manufacturing, finance, cybersecurity, and government.”

Singapore wants widespread AI use in smart nation drive | ZDNet
via Instapaper

Regulating Technology Firms in the 21st Century - Yang2020 - Andrew Yang for President (2019-11-18)
“Big Tech companies are the winners of the 21st century economy. They’ve amassed too much power, largely profiting from our personal data, and unaccountable responsibility—we have reached a point where the government needs to step in. And we’re starting to take notice, with about 50% of US adults favoring more regulation on tech firms. These companies themselves are asking for regulation (until you propose specifics).

Unfortunately, our government is unequipped to handle it. We dissolved the Office of Technology Assessment in 1995. Recent hearings with tech CEOs like Mark Zuckerberg exposed the lack of basic understanding of technology by members of our Congress.

Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that renders them more quasi-sovereign states than conventional companies. They’re making decisions on rights that government usually makes, like speech and safety. Their business models are predicated on keeping people engaged, driven by algorithms that are supercharged by technology to predict our behavior, such as artificial intelligence and machine learning, and that feed off of our data, creating an increasing asymmetry of power without any accountability.”

Regulating Technology Firms in the 21st Century - Yang2020 - Andrew Yang for President
via Instapaper
Our Tech and Our Markets Have an Anti-Human Agenda (2019-11-06)
“Engineers at our leading tech firms and universities tend to see human beings as the problem and technology as the solution.

When they are not developing interfaces to control us, they are building intelligence to replace us. Any of these technologies could be steered toward extending our human capabilities and collective power. Instead, they are deployed in concert with the demands of a marketplace, political sphere, and power structure that depend on human isolation and predictability in order to operate.”

Our Tech and Our Markets Have an Anti-Human Agenda
via Instapaper

Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads (2019-10-30)
“We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.””

Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads
via Instapaper
We need an economic model that works for people and the planet (2019-10-19)
“This is the good news. Hearts and minds are changing. An increasing number of millennials, business leaders and women in particular are calling for a new kind of market: a sustainable market, an inclusive, equitable, green and profitable market where sustainable principles drive growth, generating long-term value through the integration and balance of natural, social, human and financial capital.”

We need an economic model that works for people and the planet
via Instapaper
Opinion | Marc Benioff: We Need a New Capitalism (2019-10-19)
“But capitalism as it has been practiced in recent decades — with its obsession on maximizing profits for shareholders — has also led to horrifying inequality. Globally, the 26 richest people in the world now have as much wealth as the poorest 3.8 billion people, and the relentless spewing of carbon emissions is pushing the planet toward catastrophic climate change. In the United States, income inequality has reached its highest level in at least 50 years, with the top 0.1 percent — people like me — owning roughly 20 percent of the wealth while many Americans cannot afford to pay for a $400 emergency. It’s no wonder that support for capitalism has dropped, especially among young people.”

Opinion | Marc Benioff: We Need a New Capitalism
via Instapaper

AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell - Future of Life Institute (2019-10-09)
“And so a machine should be intelligent if its actions achieve its goals. And then of course we have to supply the goals in the form of reward functions or cost functions or logical goal statements. And that works up to a point. It works when machines are stupid. And if you provide the wrong objective, then you can reset them and fix the objective and hope that this time what the machine does is actually beneficial to you. But if machines are more intelligent than humans, then giving them the wrong objective would basically be setting up a kind of a chess match between humanity and a machine that has an objective that’s at cross purposes with our own. And we wouldn’t win that chess match.”

AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell - Future of Life Institute
via Instapaper
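Russell's point about wrong objectives can be made concrete with a toy sketch (my own hypothetical illustration, not from the podcast): an optimizer maximizes the objective we wrote down, not the one we meant.

```python
# Toy illustration of objective misspecification: the optimizer picks
# whatever scores highest on the stated objective, regardless of what
# the designer actually cared about.

def true_utility(rooms_cleaned, vases_broken):
    # What we actually care about: cleanliness, but not at any cost.
    return rooms_cleaned - 10 * vases_broken

def proxy_reward(rooms_cleaned, vases_broken):
    # What we told the machine: only cleanliness counts.
    return rooms_cleaned

# Candidate policies: (rooms cleaned, vases broken along the way).
policies = {
    "careful": (3, 0),
    "reckless": (5, 4),  # cleans more rooms by smashing through vases
}

best_for_proxy = max(policies, key=lambda p: proxy_reward(*policies[p]))
best_for_us = max(policies, key=lambda p: true_utility(*policies[p]))

print(best_for_proxy)  # the machine prefers "reckless"
print(best_for_us)     # we would have wanted "careful"
```

While the machine is weak, we can reset it and patch the reward function; Russell's warning is about the regime where we no longer get that second chance.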

WeWork and the Great Unicorn Delusion (2019-09-22)
“Since going public, Uber’s valuation has fallen nearly 50 percent. The company is on pace to lose more than $8 billion this year, due to onetime payouts to Uber employees and mounting quarterly losses. And that was before California codified a court ruling that could force the company to reclassify its workforce as full-time employees, something with the potential to transform its domestic business.”

WeWork and the Great Unicorn Delusion
via Instapaper
New surveillance tech means you'll never be anonymous again (2019-09-22)
“In the US, San Francisco, Somerville and Oakland recently banned the use of facial recognition by law enforcement and government agencies, while Portland is talking about forbidding the use of facial recognition entirely, including by private businesses. A coalition of 30 civil society organisations, representing over 15 million members combined, is calling for a federal ban on the use of facial recognition by US law enforcement.”

New surveillance tech means you'll never be anonymous again
via Instapaper

Silicon Valley's Secret Philosophers Should Share Their Work (2019-08-29)
“There is a growing pattern of tech luminaries posing as open to concerns and then swiftly dismissing them. Yuval Noah Harari, the influential author of Sapiens and Homo Deus and a historian concerned about technology’s capacity to harm humanity’s future, has captured the attention of many Silicon Valley grandees. Yet, in his recent discussion with Mark Zuckerberg, when Harari openly worried that authoritarian forms of government become more likely as data collection gets concentrated in the hands of a few, Zuckerberg replied that he is “more optimistic about democracy.” Throughout the conversation, Zucker­berg seemed unable or unwilling to take Harari’s questions about Facebook’s negative impact on the world seriously. Similarly, Twitter cofounder Jack Dorsey is very public about his love of Eastern philosophy and meditation practices as ways of leading a more reflective, focused life, but is quick to brush aside the idea that Twitter has design features that hijack people’s attention and get them to spend time aimlessly cruising the platform. The gap between preaching and practicing in Silicon Valley isn’t promising.”

Silicon Valley's Secret Philosophers Should Share Their Work
via Instapaper

The Glimmer of a Climate New World Order (2019-08-26)
“Saturday at the Atlantic, Franklin Foer proposed that meaningful action to combat warming may require that the bedrock principle of national sovereignty be retired, such that leaders like Bolsonaro (or, for that matter, Trump) won’t be able to operate with impunity on climate issues which, despite playing out within those nations’ borders, impact the rest of the world as well (often more so, since impacts are distributed unequally). “If there were a functioning global community, it would be wrestling with how to more aggressively save the Amazon, and acknowledging that the battle against climate change demands not only new international cooperation but, perhaps, the weakening of traditional concepts of the nation-state,” he wrote. “The case for territorial incursion in the Amazon is far stronger than the justifications for most war.””

The Glimmer of a Climate New World Order
via Instapaper
Facebook and the grand challenge of digital ethics (2019-08-24)
“Facebook achieved this dominance by combining social media, mobile, cloud and big data technology. Its phenomenal rise to power happened on the back of emerging technologies, not individually but together. Cloud-enabled big data and mobile helped deliver influence through social media, all made possible by the internet and the world wide web. It’s a classic example of explosive growth on the back of tech-driven innovation that taps into an unmet customer need.

‘What 2.7bn people see and interpret as truth daily will be “governed” by a single for-profit company’

Facebook already has a bigger daily impact on the lives of some people than their government. In some respects, it has just as much influence.

Now, what 2.7bn people see and interpret as truth daily – and the approximately $40bn that firms spend in advertising each year – will be ‘governed’ by a single for-profit company. Compounding this concern, consider that Facebook, through preferred stock, is entirely controlled by one person.”

Facebook and the grand challenge of digital ethics
via Instapaper

The new Facebook reality (2019-08-24)

Elon Musk’s ‘Brain Chip’ Could Be Suicide of the Mind, Says Scientist (2019-08-16)
“Musk argued that such devices will help humans deal with the so-called AI apocalypse, a scenario in which artificial intelligence outpaces human intelligence and takes control of the planet away from the human species. “Even in a benign AI scenario, we will be left behind,” Musk warned. “But with a brain-machine interface, we can actually go along for the ride. And we can have the option of merging with AI. This is extremely important.”

However, some members of the science community warn that such a device could actually lead to human beings’ self-destruction before the “AI apocalypse” even comes along.

In an op-ed for The Financial Times on Tuesday, cognitive psychologist and philosopher Susan Schneider said merging human brains with AI would be “suicide for the human mind.”

“The philosophical obstacles are as pressing as the technological ones,” wrote Schneider, who chairs the Library of Congress and directs the AI, Mind and Society Group at the University of Connecticut.

To illustrate this point, she brought up a hypothetical scenario inspired by Australian science fiction writer Greg Egan: Imagine as soon as you are born, an AI device called the “jewel” is inserted in your brain which constantly monitors your brain’s activity in order to learn how to mimic your thoughts and behaviors. By the time you are an adult, the device has perfectly “backed up” your brain and can think and behave just like you. Then, you have your original brain surgically removed and let the “jewel” be your “new brain.””

Elon Musk’s ‘Brain Chip’ Could Be Suicide of the Mind, Says Scientist
via Instapaper

China has started a grand experiment in AI education. It could reshape how the world learns. (2019-08-05)
“As machines become better at rote tasks, humans will need to focus on the skills that remain unique to them: creativity, collaboration, communication, and problem-solving. They will also need to adapt quickly as more and more skills fall prey to automation. This means the 21st-century classroom should bring out the strengths and interests of each person, rather than impart a canonical set of knowledge more suited for the industrial age.

AI, in theory, could make this easier. It could take over certain rote tasks in the classroom, freeing teachers up to pay more attention to each student. Hypotheses differ about what that might look like. Perhaps AI will teach certain kinds of knowledge while humans teach others; perhaps it will help teachers keep track of student performance or give students more control over how they learn. Regardless, the ultimate goal is deeply personalized teaching.”

China has started a grand experiment in AI education. It could reshape how the world learns.
via Instapaper

Food Abundance and Unintended Consequences (2019-06-12)
“What potential unintended consequences emerge as we move towards food abundance? The Future Today Institute describes a scenario where high-tech local microfarms upend the status quo for supply chains built around conventional agriculture and supermarkets. They envision a possible future where the shift impacts everyone from merchants and importers to truck drivers and UPC code sticker providers. Food shortage driven by extreme weather is also likely to drive a migration from impacted regions to countries like the U.S. and Europe, creating a humanitarian crisis. As stated by FTI:

That’s why planning for this plant future is vital to ensure that their plant factories arrive with opportunity rather than civil and economic unrest.”

Food Abundance and Unintended Consequences
via Instapaper

Will AI shatter human exceptionalism? (2019-06-12)
“From an evolutionary perspective, this is preposterous. The fact that humans are different from other animals is a distinction of degree, not of kind. Once we properly orient ourselves on the evolutionary tree, it becomes clear that we can learn more about ourselves by focusing on our similarities with other animals than by perpetuating the myth that we’re categorically unique.

Peter Clarke, “Transhumanism and the Death of Human Exceptionalism” at Areo”

Will AI shatter human exceptionalism?
via Instapaper
Soul Downloading… Please wait. Syntax vs semantics (2019-06-09)
“J. Searle argues that a machine will never have a mind or consciousness, because its “understanding” is only ever simulated. The logic of computers follows a purely formal structure (syntax), which orders symbols according to clear rules and hence only emulates understanding. Humans, by contrast, with a mind and consciousness, are able to attribute meaning and content to words and language (semantics).”

Soul Downloading… Please wait.
via Instapaper

'Black Mirror' Isn't Surprising Anymore. We're Screwed (2019-06-07)
“In other words, show creator Charlie Brooker and executive producer Annabel Jones in all likelihood plucked Bauer's vision quest not from the headlines but from their own brains—only to have reality outpace what would otherwise be a pitch-perfect lampoon of tech-founder sanctimony. Such is the burden of Black Mirror. More than seven years after it first debuted, the sci-fi anthology can still make you laugh (sometimes), unnerve you (many more times), and even disappoint you (more on that in a bit). It just may no longer surprise you.”

'Black Mirror' Isn't Surprising Anymore. We're Screwed
via Instapaper

Imagining New Institutions for the Internet Age – OneZero (2019-05-30)
“In a world awash in information, the curator is king. Behind each digital throne is an algorithm, a specialized artificial intelligence that is powered by data. More data means better machine learning which attracts more talent that build better products that attract more users that generate more data. Rinse, repeat. This positive feedback loop means that A.I. tends toward centralization. Centralization means monopoly and monopoly means power. That’s why companies like Google and Facebook post annual revenues that dwarf the gross domestic product of some countries.”

Imagining New Institutions for the Internet Age – OneZero
via Instapaper
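The "rinse, repeat" feedback loop the author describes can be caricatured numerically. This is my own toy model, not from the article: assume each round the incumbent's data advantage lets it pull over a small share of a rival's users, proportional to its current user share.

```python
# Toy model of the data -> better product -> more users -> more data loop,
# showing how even a modest initial lead compounds toward near-monopoly.

def step(users, rival_users, pull=0.05):
    # The bigger player's data advantage (its user share) attracts a
    # fraction of the rival's users each round.
    migration = pull * (users / (users + rival_users)) * rival_users
    return users + migration, rival_users - migration

a, b = 60.0, 40.0  # incumbent starts with a 60/40 lead in a fixed market
for _ in range(200):
    a, b = step(a, b)

print(a, b)  # the gap widens every round; the rival is squeezed out
```

The specific numbers (a 5% pull rate, 200 rounds) are arbitrary; the point is only that the loop is self-reinforcing, which is why the author argues A.I. tends toward centralization.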

Meet me in Bucharest June 10!! (2019-05-30)
Technology, humanity, society and ethics: A look at the next 10 years

Understanding the future and developing foresight is becoming mission-critical. Join us for this groundbreaking session where Gerd will introduce the most important things we must know about the future, today, such as the decline of the oil and fossil-fuel economy, the end of routine work (and why that's not the end of work), the newly emerging opportunities caused by industry convergence, automation vs globalization, tomorrow’s ethics, a new economic system, the future of Europe and much more.
The future is better than we think - we just need to govern it wisely.

Keynote Speaker: Gerd Leonhard - Futurist | Author | Speaker | CEO - TheFuturesAgency
Guest Speaker: Peter Vander Auwera - Content Curator Digital Ethics | Speaker | Sensemaker

Amazon Is Working on a Device That Can Read Human Emotions (2019-05-27)
“The notion of building machines that can understand human emotions has long been a staple of science fiction, from stories by Isaac Asimov to Star Trek’s android Data. Amid advances in machine learning and voice and image recognition, the concept has recently marched toward reality. Companies including Microsoft Corp., Alphabet Inc.’s Google and IBM Corp., among a host of other firms, are developing technologies designed to derive emotional states from images, audio data and other inputs. Amazon has discussed publicly its desire to build a more lifelike voice assistant.

The technology could help the company gain insights for potential health products or be used to better target advertising or product recommendations. The concept is likely to add fuel to the debate about the amount and type of personal data scooped up by technology giants, which already collect reams of information about their customers. Earlier this year, Bloomberg reported that Amazon has a team listening to and annotating audio clips captured by the company’s Echo line of voice-activated speakers.”

Amazon Is Working on a Device That Can Read Human Emotions
via Instapaper

Is Surveillance the Future of Service? (2019-05-24)
“If that’s not Orwellian enough for you, consider that technology giant Adobe recently launched a cloud-based platform that, by using a variety of data points and technologies, identifies individual shoppers in real-time as they enter a store, portraying them as moving dots on a store map. It then allows store management to click on and receive a full profile of each individual, including spending patterns, marital status, age range, city of residence and more. From there, each individual consumer can be micro-targeted with specific offers and promotions to suit their known purchasing patterns.

Still not dystopian enough? Then take a visit to an Amazon Go store, the first of which opened in Seattle in 2018. From the moment you scan your mobile device on entry to crossing the threshold on exit, every movement and interaction you have with the store is monitored in real time. Make no mistake: Amazon is a data company first and foremost and is now bringing the same level of surveillance to physical stores that has allowed it to become the online behemoth it is today. In fact, according to a 2014 patent filing, the company intends to use its growing vortex of customer data to begin what it calls “anticipatory shipping,” a complex predictive analytics and logistics system that will enable Amazon to accurately ship us products before we even know we wanted or needed them.

And if all this weren’t enough, in his description of his company’s “store of the future” or “augmented retail” initiative, Farfetch founder José Neves describes a world where individual shoppers are “recognised as [they] come into the store, which is either via beacons or via a wallet like your Apple Wallet, scanning in like you would with a boarding pass for a flight.” Then there is what Neves refers to as the “offline cookie”: a technology that automatically adds products to your wish list on your app as you touch them in the store, without having to scan anything.”

So, how do humans feel about becoming “offline cookies”?

Is Surveillance the Future of Service?
via Instapaper

Don’t let industry write the rules for AI (2019-05-21)
“Companies’ input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems impact society or on how we evaluate the effect morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data for it to be kept accountable.”

Don’t let industry write the rules for AI
via Instapaper