tag:digitalethics.net,2013:/posts Digital Ethics by FuturistGerd 2019-11-19T16:24:15Z Digital Ethics by Futurist Gerd Leonhard tag:digitalethics.net,2013:Post/1479708 2019-11-19T16:24:15Z 2019-11-19T16:24:15Z Singapore wants widespread AI use in smart nation drive | ZDNet
“Domestically, our private and public sectors will use AI decisively to generate economic gains and improve lives. Internationally, Singapore will be recognised as a global hub in innovating, piloting, test-bedding, deploying and scaling AI solutions for impact," said the SNDGO, which is part of the Prime Minister's Office.

To kick off its efforts, the government identified five national projects that focused on key industry challenges, including intelligent freight planning in transport and logistics, chronic disease prediction and management in healthcare, and border clearance operations in national safety and security. These form part of nine sectors that have been earmarked for heightened deployment as AI is expected to generate high social and economic value for Singapore. These verticals include manufacturing, finance, cybersecurity, and government.”

Singapore wants widespread AI use in smart nation drive | ZDNet
https://www.zdnet.com/article/singapore-wants-widespread-ai-use-in-smart-nation-drive/
via Instapaper


tag:digitalethics.net,2013:Post/1479110 2019-11-18T11:10:30Z 2019-11-18T11:10:30Z Regulating Technology Firms in the 21st Century - Yang2020 - Andrew Yang for President
“Big Tech companies are the winners of the 21st century economy. They’ve amassed too much power and too little accountability, largely profiting from our personal data—we have reached a point where the government needs to step in. And we’re starting to take notice, with about 50% of US adults favoring more regulation of tech firms. These companies themselves are asking for regulation (until you propose specifics).

Unfortunately, our government is unequipped to handle it. We dissolved the Office of Technology Assessment in 1995. Recent hearings with tech CEOs like Mark Zuckerberg exposed the lack of basic understanding of technology by members of our Congress.

Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that render them more like quasi-sovereign states than conventional companies. They’re making decisions on rights that government usually makes, like speech and safety. Their business models are predicated on keeping people engaged, driven by algorithms, supercharged by technologies such as artificial intelligence and machine learning, that predict our behavior and feed off of our data, creating an increasing asymmetry of power without any accountability.”

Regulating Technology Firms in the 21st Century - Yang2020 - Andrew Yang for President
https://www.yang2020.com/blog/regulating-technology-firms-in-the-21st-century/
via Instapaper
tag:digitalethics.net,2013:Post/1474521 2019-11-06T15:49:37Z 2019-11-06T15:49:38Z Our Tech and Our Markets Have an Anti-Human Agenda
“Engineers at our leading tech firms and universities tend to see human beings as the problem and technology as the solution.

When they are not developing interfaces to control us, they are building intelligence to replace us. Any of these technologies could be steered toward extending our human capabilities and collective power. Instead, they are deployed in concert with the demands of a marketplace, political sphere, and power structure that depend on human isolation and predictability in order to operate.”

Our Tech and Our Markets Have an Anti-Human Agenda
https://medium.com/team-human/our-tech-and-our-markets-have-an-anti-human-agenda-be21d4db767c
via Instapaper


tag:digitalethics.net,2013:Post/1471801 2019-10-30T11:14:34Z 2019-10-30T11:14:34Z Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads
“We are proud to work for a place that enables that expression, and we believe it is imperative to evolve as societies change. As Chris Cox said, “We know the effects of social media are not neutral, and its history has not yet been written.””

Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads
https://www.nytimes.com/2019/10/28/technology/facebook-mark-zuckerberg-letter.html
via Instapaper
tag:digitalethics.net,2013:Post/1467895 2019-10-19T21:53:15Z 2019-10-19T21:53:15Z We need an economic model that works for people and the planet
“This is the good news. Hearts and minds are changing. An increasing number of millennials, business leaders and women in particular are calling for a new kind of market: a sustainable market, an inclusive, equitable, green and profitable market where sustainable principles drive growth, generating long-term value through the integration and balance of natural, social, human and financial capital.”

We need an economic model that works for people and the planet
https://www.weforum.org/agenda/2019/09/how-to-make-markets-more-sustainable/
via Instapaper
tag:digitalethics.net,2013:Post/1467679 2019-10-19T10:38:14Z 2019-10-19T10:38:14Z Opinion | Marc Benioff: We Need a New Capitalism
“But capitalism as it has been practiced in recent decades — with its obsession on maximizing profits for shareholders — has also led to horrifying inequality. Globally, the 26 richest people in the world now have as much wealth as the poorest 3.8 billion people, and the relentless spewing of carbon emissions is pushing the planet toward catastrophic climate change. In the United States, income inequality has reached its highest level in at least 50 years, with the top 0.1 percent — people like me — owning roughly 20 percent of the wealth while many Americans cannot afford to pay for a $400 emergency. It’s no wonder that support for capitalism has dropped, especially among young people.”

Opinion | Marc Benioff: We Need a New Capitalism
https://www.nytimes.com/2019/10/14/opinion/benioff-salesforce-capitalism.html
via Instapaper



tag:digitalethics.net,2013:Post/1464293 2019-10-09T18:09:10Z 2019-10-09T18:12:22Z AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell - Future of Life Institute
“And so a machine should be intelligent if its actions achieve its goals. And then of course we have to supply the goals in the form of reward functions or cost functions or logical goal statements. And that works up to a point. It works when machines are stupid. And if you provide the wrong objective, then you can reset them and fix the objective and hope that this time what the machine does is actually beneficial to you. But if machines are more intelligent than humans, then giving them the wrong objective would basically be setting up a kind of a chess match between humanity and a machine that has an objective that’s at cross purposes with our own. And we wouldn’t win that chess match.”
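
To make Russell's mis-specification point concrete, here is a minimal sketch (my own illustration, not from the podcast or from Russell's book): an agent optimises a supplied reward function that counts only cells cleaned, while the cell layout, the function names and the "vase" damage left out of the objective are all invented for the example.

```python
# Toy illustration (hypothetical, not from the podcast): a mis-specified objective.
# The designer rewards only "cells cleaned" and forgets to penalise breaking
# vases, so a more capable optimiser scores higher while doing more damage.

import itertools

CELLS = ["dusty", "vase", "dusty", "dusty", "vase"]  # what the room contains

def reward(actions):
    """The supplied objective: +1 per 'clean' action. No term for damage."""
    return sum(1 for a in actions if a == "clean")

def damage(actions, cells=CELLS):
    """What we actually care about but never encoded in the objective."""
    return sum(1 for a, c in zip(actions, cells) if a == "clean" and c == "vase")

def best_plan(horizon):
    """A 'more intelligent' agent searches more plans for the highest reward."""
    return max(itertools.product(["clean", "skip"], repeat=horizon), key=reward)

for horizon in (2, 5):
    plan = best_plan(horizon)
    print(f"horizon={horizon}  reward={reward(plan)}  vases broken={damage(plan)}")
# As capability (the search horizon) grows, the stated reward rises -- and so
# does the unmeasured damage, because the objective never mentioned it.
```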

AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell - Future of Life Institute
https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/
via Instapaper

tag:digitalethics.net,2013:Post/1458129 2019-09-22T10:33:43Z 2019-09-22T10:33:44Z WeWork and the Great Unicorn Delusion
“Since going public, Uber’s valuation has fallen nearly 50 percent. The company is on pace to lose more than $8 billion this year, due to onetime payouts to Uber employees and mounting quarterly losses. And that was before California codified a court ruling that could force the company to reclassify its workforce as full-time employees, something with the potential to transform its domestic business.”

WeWork and the Great Unicorn Delusion
https://www.theatlantic.com/ideas/archive/2019/09/unicorn-delusion/598465/
via Instapaper
tag:digitalethics.net,2013:Post/1458113 2019-09-22T08:58:33Z 2019-09-22T08:58:34Z New surveillance tech means you'll never be anonymous again
“In the US, San Francisco, Somerville and Oakland recently banned the use of facial recognition by law enforcement and government agencies, while Portland is talking about forbidding the use of facial recognition entirely, including by private businesses. A coalition of 30 civil society organisations, representing over 15 million members combined, is calling for a federal ban on the use of facial recognition by US law enforcement.”

New surveillance tech means you'll never be anonymous again
https://www.wired.co.uk/article/surveillance-technology-biometrics
via Instapaper

tag:digitalethics.net,2013:Post/1449741 2019-08-29T14:47:02Z 2019-08-29T14:47:03Z Silicon Valley's Secret Philosophers Should Share Their Work
“There is a growing pattern of tech luminaries posing as open to concerns and then swiftly dismissing them. Yuval Noah Harari, the influential author of Sapiens and Homo Deus and a historian concerned about technology’s capacity to harm humanity’s future, has captured the attention of many Silicon Valley grandees. Yet, in his recent discussion with Mark Zuckerberg, when Harari openly worried that authoritarian forms of government become more likely as data collection gets concentrated in the hands of a few, Zuckerberg replied that he is “more optimistic about democracy.” Throughout the conversation, Zuckerberg seemed unable or unwilling to take Harari’s questions about Facebook’s negative impact on the world seriously. Similarly, Twitter cofounder Jack Dorsey is very public about his love of Eastern philosophy and meditation practices as ways of leading a more reflective, focused life, but is quick to brush aside the idea that Twitter has design features that hijack people’s attention and get them to spend time aimlessly cruising the platform. The gap between preaching and practicing in Silicon Valley isn’t promising.”

Silicon Valley's Secret Philosophers Should Share Their Work
https://www.wired.com/story/silicon-valleys-secret-philosophers-should-share-their-work/
via Instapaper


tag:digitalethics.net,2013:Post/1448869 2019-08-26T21:06:11Z 2019-08-26T21:06:11Z The Glimmer of a Climate New World Order
“Saturday at the Atlantic, Franklin Foer proposed that meaningful action to combat warming may require that the bedrock principle of national sovereignty be retired, such that leaders like Bolsonaro (or, for that matter, Trump) won’t be able to operate with impunity on climate issues which, despite playing out within those nations’ borders, impact the rest of the world as well (often more so, since impacts are distributed unequally). “If there were a functioning global community, it would be wrestling with how to more aggressively save the Amazon, and acknowledging that the battle against climate change demands not only new international cooperation but, perhaps, the weakening of traditional concepts of the nation-state,” he wrote. “The case for territorial incursion in the Amazon is far stronger than the justifications for most war.””

The Glimmer of a Climate New World Order
http://nymag.com/intelligencer/2019/08/climate-at-the-g-7-glimmers-of-a-new-world-order.html
via Instapaper
tag:digitalethics.net,2013:Post/1448022 2019-08-24T09:57:22Z 2019-08-24T09:57:22Z Facebook and the grand challenge of digital ethics
“Facebook achieved this dominance by combining social media, mobile, cloud and big data technology. Its phenomenal rise to power happened on the back of emerging technologies, not individually but together. Cloud-enabled big data and mobile helped deliver influence through social media, all made possible by the internet and the world wide web. It’s a classic example of explosive growth on the back of tech-driven innovation that taps into an unmet customer need.

Facebook already has a bigger daily impact on the lives of some people than their government. In some respects, it has just as much influence.

Now, what 2.7bn people see and interpret as truth daily – and the approximately $40bn that firms spend in advertising each year – will be ‘governed’ by a single for-profit company. Compounding this concern, consider that Facebook, through preferred stock, is entirely controlled by one person.”

Facebook and the grand challenge of digital ethics
https://www.siliconrepublic.com/companies/facebook-media-currency-digital-ethics
via Instapaper

tag:digitalethics.net,2013:Post/1445389 2019-08-16T13:14:13Z 2019-08-16T13:14:13Z Elon Musk’s ‘Brain Chip’ Could Be Suicide of the Mind, Says Scientist
“Musk argued that such devices will help humans deal with the so-called AI apocalypse, a scenario in which artificial intelligence outpaces human intelligence and takes control of the planet away from the human species. “Even in a benign AI scenario, we will be left behind,” Musk warned. “But with a brain-machine interface, we can actually go along for the ride. And we can have the option of merging with AI. This is extremely important.”

However, some members of the science community warn that such a device could actually lead to human beings’ self-destruction before the “AI apocalypse” even comes along.

In an op-ed for The Financial Times on Tuesday, cognitive psychologist and philosopher Susan Schneider said merging human brains with AI would be “suicide for the human mind.”

“The philosophical obstacles are as pressing as the technological ones,” wrote Schneider, who holds a chair at the Library of Congress and directs the AI, Mind and Society Group at the University of Connecticut.

To illustrate this point, she brought up a hypothetical scenario inspired by Australian science fiction writer Greg Egan: Imagine that, as soon as you are born, an AI device called the “jewel” is inserted in your brain, where it constantly monitors your brain’s activity in order to learn how to mimic your thoughts and behaviors. By the time you are an adult, the device has perfectly “backed up” your brain and can think and behave just like you. Then, you have your original brain surgically removed and let the “jewel” be your “new brain.””

Elon Musk’s ‘Brain Chip’ Could Be Suicide of the Mind, Says Scientist
https://observer.com/2019/08/elon-musk-neuralink-ai-brain-chip-danger-psychologist/
via Instapaper


tag:digitalethics.net,2013:Post/1441414 2019-08-05T18:01:28Z 2019-08-05T18:01:29Z China has started a grand experiment in AI education. It could reshape how the world learns.
“As machines become better at rote tasks, humans will need to focus on the skills that remain unique to them: creativity, collaboration, communication, and problem-solving. They will also need to adapt quickly as more and more skills fall prey to automation. This means the 21st-century classroom should bring out the strengths and interests of each person, rather than impart a canonical set of knowledge more suited for the industrial age.

AI, in theory, could make this easier. It could take over certain rote tasks in the classroom, freeing teachers up to pay more attention to each student. Hypotheses differ about what that might look like. Perhaps AI will teach certain kinds of knowledge while humans teach others; perhaps it will help teachers keep track of student performance or give students more control over how they learn. Regardless, the ultimate goal is deeply personalized teaching.”

China has started a grand experiment in AI education. It could reshape how the world learns.
https://www.technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/
via Instapaper

tag:digitalethics.net,2013:Post/1419493 2019-06-12T13:52:28Z 2019-06-12T13:52:29Z Food Abundance and Unintended Consequences
“What potential unintended consequences emerge as we move towards food abundance? The Future Today Institute describes a scenario where high-tech local microfarms upend the status quo for supply chains built around conventional agriculture and supermarkets. They envision a possible future where the shift impacts everyone from merchants and importers to truck drivers and UPC code sticker providers. Food shortages driven by extreme weather are also likely to drive migration from impacted regions to countries like the U.S. and Europe, creating a humanitarian crisis. As stated by FTI:

That’s why planning for this plant future is vital to ensure that their plant factories arrive with opportunity rather than civil and economic unrest.”

Food Abundance and Unintended Consequences
https://frankdiana.net/2019/06/12/food-abundance-and-unintended-consequences/
via Instapaper

tag:digitalethics.net,2013:Post/1419491 2019-06-12T13:51:54Z 2019-06-12T13:51:54Z Will AI shatter human exceptionalism?
“From an evolutionary perspective, this is preposterous. The fact that humans are different from other animals is a distinction of degree, not of kind. Once we properly orient ourselves on the evolutionary tree, it becomes clear that we can learn more about ourselves by focusing on our similarities with other animals than by perpetuating the myth that we’re categorically unique.

Peter Clarke, “Transhumanism and the Death of Human Exceptionalism” at Areo”

Will AI shatter human exceptionalism?
https://mindmatters.ai/2019/03/will-ai-shatter-human-exceptionalism/
via Instapaper
tag:digitalethics.net,2013:Post/1418202 2019-06-09T09:52:32Z 2019-06-09T09:52:32Z Soul Downloading… Please wait. Syntax vs semantics
“J. Searle argues that a machine will never have a mind or consciousness, because its “understanding” is only ever simulated. The logic of computers follows a purely formal structure (syntax), which orders symbols according to clear rules and hence only emulates understanding. Humans, in contrast, have a mind and consciousness and are able to attribute meaning and content to words and language (semantics).”
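
To illustrate the syntax/semantics distinction, here is a toy Chinese-Room-style sketch (my own, not from the article): the responder produces superficially sensible replies purely by formal symbol lookup, and the rule table and phrases are invented for the example.

```python
# Hypothetical Chinese-Room-style responder: pure syntax, no semantics.
# It matches surface patterns and swaps in canned symbol strings; nothing in
# the program represents what any of the words mean.

RULES = {
    "how are you": "I am fine, thank you.",
    "what is your name": "My name is Room.",
    "do you understand me": "Of course I understand you.",
}

def reply(utterance: str) -> str:
    """Look up a rule-matched string; fall back to echoing the input."""
    key = utterance.lower().strip(" ?!.")
    return RULES.get(key, f"Tell me more about {utterance.strip(' ?!.')}.")

if __name__ == "__main__":
    for q in ["Do you understand me?", "What is consciousness?"]:
        print(q, "->", reply(q))
# The output can look like understanding (semantics) while the process is only
# symbol manipulation under formal rules (syntax) -- which is Searle's point.
```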

Soul Downloading… Please wait.
https://www.sovereignmagazine.co.uk/2019/06/03/soul-downloading-please-wait/
via Instapaper

tag:digitalethics.net,2013:Post/1417413 2019-06-07T07:32:10Z 2019-06-07T07:32:10Z 'Black Mirror' Isn't Surprising Anymore. We're Screwed
“In other words, show creator Charlie Brooker and executive producer Annabel Jones in all likelihood plucked Bauer's vision quest not from the headlines but from their own brains—only to have reality outpace what would otherwise be a pitch-perfect lampoon of tech-founder sanctimony. Such is the burden of Black Mirror. More than seven years after it first debuted, the sci-fi anthology can still make you laugh (sometimes), unnerve you (many more times), and even disappoint you (more on that in a bit). It just may no longer surprise you.”

'Black Mirror' Isn't Surprising Anymore. We're Screwed
https://www.wired.com/story/black-mirror-season-3-review/
via Instapaper

tag:digitalethics.net,2013:Post/1414773 2019-05-30T16:58:06Z 2019-05-30T16:58:07Z Imagining New Institutions for the Internet Age – OneZero
“In a world awash in information, the curator is king. Behind each digital throne is an algorithm, a specialized artificial intelligence that is powered by data. More data means better machine learning which attracts more talent that build better products that attract more users that generate more data. Rinse, repeat. This positive feedback loop means that A.I. tends toward centralization. Centralization means monopoly and monopoly means power. That’s why companies like Google and Facebook post annual revenues that dwarf the gross domestic product of some countries.”
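
A rough way to see why that loop centralises, as a toy simulation of my own (not from the article): two hypothetical platforms follow the same rule, users drift toward whichever one has more accumulated data, and new data accrues in proportion to users; every number and name is an arbitrary assumption.

```python
# Toy data-network-effect loop (illustrative assumptions only): more data ->
# better product -> more users -> more data. Parameters are arbitrary.

def simulate(users_a=1.2e6, users_b=1.0e6, steps=20, churn=0.10):
    data_a, data_b = users_a, users_b          # data accrues with usage
    for _ in range(steps):
        # Users drift toward whichever platform has accumulated more data
        # (standing in for "better machine learning" and "better products").
        if data_a >= data_b:
            moved = churn * users_b
            users_a, users_b = users_a + moved, users_b - moved
        else:
            moved = churn * users_a
            users_a, users_b = users_a - moved, users_b + moved
        data_a += users_a                      # each step generates new data
        data_b += users_b
    return users_a / (users_a + users_b)

print(f"Platform A's user share after 20 steps: {simulate():.2f}")
# A modest initial lead compounds into a dominant share: the centralising
# "rinse, repeat" dynamic the quote describes.
```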

Imagining New Institutions for the Internet Age – OneZero
https://onezero.medium.com/imagining-new-institutions-for-the-internet-age-bf17212063db
via Instapaper

tag:digitalethics.net,2013:Post/1414684 2019-05-30T13:09:03Z 2019-05-30T13:09:03Z Meet me in Bucharest June 10!!
Technology, humanity, society and ethics: A look at the next 10 years

Understanding the future and developing foresight is becoming mission-critical. Join us for this groundbreaking session where Gerd will introduce the most important things we must know about the future, today, such as the decline of the oil and fossil fuel economy, the end of routine work (and why that's not the end of work), the newly emerging opportunities created by industry convergence, automation vs globalization, tomorrow’s ethics, a new economic system, the future of Europe and much more.
The future is better than we think - we just need to govern it wisely.

Keynote Speaker: Gerd Leonhard - Futurist | Author | Speaker | CEO - TheFuturesAgency
Guest Speaker: Peter Vander Auwera - Content Curator Digital Ethics | Speaker | Sensemaker

tag:digitalethics.net,2013:Post/1413621 2019-05-27T11:13:24Z 2019-05-27T11:13:25Z Amazon Is Working on a Device That Can Read Human Emotions
“The notion of building machines that can understand human emotions has long been a staple of science fiction, from stories by Isaac Asimov to Star Trek’s android Data. Amid advances in machine learning and voice and image recognition, the concept has recently marched toward reality. Companies including Microsoft Corp., Alphabet Inc.’s Google and IBM Corp., among a host of other firms, are developing technologies designed to derive emotional states from images, audio data and other inputs. Amazon has discussed publicly its desire to build a more lifelike voice assistant.

The technology could help the company gain insights for potential health products or be used to better target advertising or product recommendations. The concept is likely to add fuel to the debate about the amount and type of personal data scooped up by technology giants, which already collect reams of information about their customers. Earlier this year, Bloomberg reported that Amazon has a team listening to and annotating audio clips captured by the company’s Echo line of voice-activated speakers.”

Amazon Is Working on a Device That Can Read Human Emotions
https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions
via Instapaper



tag:digitalethics.net,2013:Post/1412665 2019-05-24T17:14:28Z 2019-05-24T17:14:30Z Is Surveillance the Future of Service?
“If that’s not Orwellian enough for you, consider that technology giant Adobe recently launched a cloud-based platform that, by using a variety of data points and technologies, identifies individual shoppers in real-time as they enter a store, portraying them as moving dots on a store map. It then allows store management to click on and receive a full profile of each individual, including spending patterns, marital status, age range, city of residence and more. From there, each individual consumer can be micro-targeted with specific offers and promotions to suit their known purchasing patterns.

Still not dystopian enough? Then take a visit to an Amazon Go store, the first of which opened in Seattle in 2018. From the moment you scan your mobile device on entry to crossing the threshold on exit, every movement and interaction you have with the store is monitored in real time. Make no mistake: Amazon is a data company first and foremost and is now bringing the same level of surveillance to physical stores that has allowed it to become the online behemoth it is today. In fact, according to a 2014 patent filing, the company intends to use its growing vortex of customer data to begin what it calls “anticipatory shipping,” a complex predictive analytics and logistics system that will enable Amazon to accurately ship us products before we even know we wanted or needed them.

And if all this weren’t enough, in his description of his company’s “store of the future” or “augmented retail” initiative, Farfetch founder José Neves describes a world where individual shoppers are “recognised as [they] come into the store, which is either via beacons or via a wallet like your Apple Wallet, scanning in like you would with a boarding pass for a flight." Then, there is what Neves refers to as the "offline cookie, a technology that automatically adds products to your wish list on your app as you touch them in the store, without having to scan anything.”

So, how do humans feel about becoming “offline cookies?”

Is Surveillance the Future of Service?
https://www.businessoffashion.com/articles/opinion/is-surveillance-the-future-of-service
via Instapaper



tag:digitalethics.net,2013:Post/1411372 2019-05-21T16:09:57Z 2019-05-21T16:10:02Z Don’t let industry write the rules for AI
“Companies’ input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems impact society or on how we evaluate the effect morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data for it to be kept accountable.”

Don’t let industry write the rules for AI
http://www.nature.com/articles/d41586-019-01413-1
via Instapaper

tag:digitalethics.net,2013:Post/1410951 2019-05-20T16:32:23Z 2019-05-20T16:32:29Z Social Media Are Ruining Political Discourse
“A presence on Twitter has become almost a job requirement for columnists and pundits. YouTube can also be a valuable educational resource with videos of political roundtables, academic conferences, lectures, and interviews. But the flow-oriented design of these media inhibits extended debate. When the liberal economist Paul Krugman tweeted a critique of the inconsistency of Republican policies on interest rates, for example, most of the more than 100 replies were simply derisive comments about Republican hypocrisy—posts created to derive pleasure from online riposte rather than advocacy for a particular position.

By contrast, blog posts and articles in online newspapers and magazines are not flow media; they are digital extensions of the kind of political writing that characterized printed newspapers and journals in the 19th and 20th centuries. There might be an opportunity for the readers to comment at the end of the article, but their responses do not contribute to flow and engagement in the same way. Even formal news and commentary often decays into flow fodder, such as when people post gut-feel responses to social media about articles they haven’t even read, based on the headline alone.

The politics of flow now poses a serious challenge to the earlier tradition of political debate. Some pundits have interpreted Trump’s populism as a realignment of the traditional political narratives of the left and the right. In both his presidential campaign and his presidency, Trump showed how easy it was to break both narratives into incendiary fragments that could be reshuffled into a variety of combinations. From the left he took opposition to international trade agreements and economic globalism; from the right, hostility to social programs and the federal bureaucracy (“drain the swamp”).”

Social Media Are Ruining Political Discourse
https://www.theatlantic.com/technology/archive/2019/05/why-social-media-ruining-political-discourse/589108/?_hsenc=p2ANqtz-82144uBghSiGsOclpXytsnfx5Tlp906M_u1MaQEZnigt8tqgpaBa3-bcNJIuL37kqtaIDQ37Z78zbVZu9tGb--n5CPeNgKlogF764EeSc0pGRmWGI&_hsmi=72835581&utm_campaign=the_download.unpaid.engagement&utm_content=72835581&utm_medium=email&utm_source=hs_email
via Instapaper

tag:digitalethics.net,2013:Post/1408521 2019-05-13T11:36:02Z 2019-05-13T11:36:03Z Opinion | It’s Time to Break Up Facebook
“But it’s his very humanity that makes his unchecked power so problematic.

Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.”

Opinion | It’s Time to Break Up Facebook
https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html
via Instapaper

tag:digitalethics.net,2013:Post/1407778 2019-05-11T08:31:49Z 2019-05-11T08:31:49Z The End of Privacy Rursus Lege
“Cybersecurity is right up there with wealth inequality and global warming as impactful and dangerous for corporations and big banks in the years going forwards. The Internet has been totally hacked by data-sharing companies that monetized our innocence on social media.”

The End of Privacy Rursus Lege
https://medium.com/artificial-intelligence-network/the-end-of-privacy-rursus-lege-b134fce96bb5
via Instapaper

tag:digitalethics.net,2013:Post/1407541 2019-05-10T16:30:29Z 2019-05-10T16:30:29Z Forget about artificial intelligence, extended intelligence is the future (joi ito)
“Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines – not artificial intelligence but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.

We must question and adapt our own purpose and sensibilities as observers and designers within systems for a much more humble approach: humility over control.”

Forget about artificial intelligence, extended intelligence is the future
https://www.wired.co.uk/article/artificial-intelligence-extended-intelligence
via Instapaper




tag:digitalethics.net,2013:Post/1407002 2019-05-09T08:36:22Z 2019-05-09T08:36:22Z Forget about artificial intelligence, extended intelligence is the future (Joi Ito)
“They have found a perfect partner in digital computation, a seemingly knowable, controllable, machine-based system of thinking and creating that is rapidly increasing in its ability to harness and process complexity and, in the process, bestowing wealth and power on those who have mastered it.

In Silicon Valley, the combination of groupthink and the financial success of this cult of technology has created a feedback loop, lacking in self-regulation (although #techwontbuild, #metoo and #timesup are forcing some reflection).”

Forget about artificial intelligence, extended intelligence is the future
https://www.wired.co.uk/article/artificial-intelligence-extended-intelligence
via Instapaper

tag:digitalethics.net,2013:Post/1405471 2019-05-05T11:56:48Z 2019-05-05T11:56:48Z Will Artificial Intelligence Enhance or Hack Humanity?
“Also when it’s used to enhance you, the question is, who decides what is a good enhancement and what is a bad enhancement? So our immediate fallback position is to fall back on the traditional humanist ideas, that the customer is always right, the customers will choose the enhancement. Or the voter is always right, the voters will vote, there will be a political decision about the enhancement. Or if it feels good, do it. We’ll just follow our heart, we’ll just listen to ourselves. None of this works when there is a technology to hack humans on a large scale. You can’t trust your feelings, or the voters, or the customers on that. The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated. So how do you decide what to enhance if, and this is a very deep ethical and philosophical question—again that philosophers have been debating for thousands of years—what is good? What are the good qualities we need to enhance?”

Will Artificial Intelligence Enhance or Hack Humanity?
https://www.wired.com/story/will-artificial-intelligence-enhance-hack-humanity/
via Instapaper
