Digital Ethics by Futurist Gerd Leonhard

Facebook and the grand challenge of digital ethics (2019-08-24)
“Facebook achieved this dominance by combining social media, mobile, cloud and big data technology. Its phenomenal rise to power happened on the back of emerging technologies, not individually but together. Cloud-enabled big data and mobile helped deliver influence through social media, all made possible by the internet and the world wide web. It’s a classic example of explosive growth on the back of tech-driven innovation that taps into an unmet customer need.

‘What 2.7bn people see and interpret as truth daily will be “governed” by a single for-profit company’
– BRIAN HOPKINS

Facebook already has a bigger daily impact on the lives of some people than their government. In some respects, it has just as much influence.

Now, what 2.7bn people see and interpret as truth daily – and the approximately $40bn that firms spend in advertising each year – will be ‘governed’ by a single for-profit company. Compounding this concern, consider that Facebook, through preferred stock, is entirely controlled by one person.”
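A quick, illustrative calculation of how one person can control a public company of this size: shares that carry extra votes let a founder hold a minority of the equity but a majority of the votes. The cap table below is made of invented round numbers, not Facebook's actual figures; it only shows the arithmetic.

```python
# Illustrative only: how a dual-class share structure can give a founder
# majority voting control with a minority economic stake.
# The numbers below are assumptions, not Facebook's actual cap table.

class ShareClass:
    def __init__(self, name, shares_outstanding, votes_per_share, founder_shares):
        self.name = name
        self.shares_outstanding = shares_outstanding
        self.votes_per_share = votes_per_share
        self.founder_shares = founder_shares

def voting_power(share_classes):
    total_votes = sum(c.shares_outstanding * c.votes_per_share for c in share_classes)
    founder_votes = sum(c.founder_shares * c.votes_per_share for c in share_classes)
    total_shares = sum(c.shares_outstanding for c in share_classes)
    founder_shares = sum(c.founder_shares for c in share_classes)
    return founder_shares / total_shares, founder_votes / total_votes

# Hypothetical cap table: class A = 1 vote per share, class B = 10 votes per share.
classes = [
    ShareClass("A", shares_outstanding=2_400, votes_per_share=1,  founder_shares=10),
    ShareClass("B", shares_outstanding=440,   votes_per_share=10, founder_shares=370),
]

economic, voting = voting_power(classes)
print(f"Founder economic stake: {economic:.0%}")   # ~13% of all shares
print(f"Founder voting power:   {voting:.0%}")     # ~55% of all votes
```

Whoever holds a voting majority effectively decides board composition and any shareholder vote on their own, which is the structural point the quoted passage is making.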

Facebook and the grand challenge of digital ethics
https://www.siliconrepublic.com/companies/facebook-media-currency-digital-ethics
via Instapaper

The new Facebook reality (2019-08-24)




Elon Musk’s ‘Brain Chip’ Could Be Suicide of the Mind, Says Scientist (2019-08-16)
“Musk argued that such devices will help humans deal with the so-called AI apocalypse, a scenario in which artificial intelligence outpaces human intelligence and takes control of the planet away from the human species. “Even in a benign AI scenario, we will be left behind,” Musk warned. “But with a brain-machine interface, we can actually go along for the ride. And we can have the option of merging with AI. This is extremely important.”

However, some members of the science community warn that such a device could actually lead to human beings’ self-destruction before the “AI apocalypse” even comes along.

In an op-ed for The Financial Times on Tuesday, cognitive psychologist and philosopher Susan Schneider said merging human brains with AI would be “suicide for the human mind.”

“The philosophical obstacles are as pressing as the technological ones,” wrote Schneider, who holds a chair at the Library of Congress and directs the AI, Mind and Society Group at the University of Connecticut.

To illustrate this point, she brought up a hypothetical scenario inspired by Australian science fiction writer Greg Egan: Imagine as soon as you are born, an AI device called the “jewel” is inserted in your brain which constantly monitors your brain’s activity in order to learn how to mimic your thoughts and behaviors. By the time you are an adult, the device has perfectly “backed up” your brain and can think and behave just like you. Then, you have your original brain surgically removed and let the “jewel” be your “new brain.””

Elon Musk’s ‘Brain Chip’ Could Be Suicide of the Mind, Says Scientist
https://observer.com/2019/08/elon-musk-neuralink-ai-brain-chip-danger-psychologist/
via Instapaper


China has started a grand experiment in AI education. It could reshape how the world learns. (2019-08-05)
“As machines become better at rote tasks, humans will need to focus on the skills that remain unique to them: creativity, collaboration, communication, and problem-solving. They will also need to adapt quickly as more and more skills fall prey to automation. This means the 21st-century classroom should bring out the strengths and interests of each person, rather than impart a canonical set of knowledge more suited for the industrial age.

AI, in theory, could make this easier. It could take over certain rote tasks in the classroom, freeing teachers up to pay more attention to each student. Hypotheses differ about what that might look like. Perhaps AI will teach certain kinds of knowledge while humans teach others; perhaps it will help teachers keep track of student performance or give students more control over how they learn. Regardless, the ultimate goal is deeply personalized teaching.”
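As a concrete, deliberately tiny sketch of the “keep track of student performance” idea, here is a toy mastery tracker that always serves the skill a student is currently weakest in. The skills, the update rule and the learning-rate constant are all assumptions for illustration; no real tutoring product is being described.

```python
# Toy adaptive-practice loop: track a per-skill mastery estimate and always
# serve the skill the student is currently weakest in. Purely illustrative.

LEARNING_RATE = 0.3  # how strongly one answer moves the mastery estimate

def update_mastery(mastery, skill, correct):
    """Nudge the mastery estimate for one skill toward 1 (correct) or 0 (wrong)."""
    target = 1.0 if correct else 0.0
    mastery[skill] += LEARNING_RATE * (target - mastery[skill])

def next_skill(mastery):
    """Pick the skill with the lowest current mastery estimate."""
    return min(mastery, key=mastery.get)

mastery = {"fractions": 0.5, "decimals": 0.5, "percentages": 0.5}

# Simulated answers: (skill practised, was the answer correct?)
for skill, correct in [("fractions", True), ("decimals", False),
                       ("percentages", True), ("decimals", False)]:
    update_mastery(mastery, skill, correct)

print(mastery)              # "decimals" ends up with the lowest estimate
print(next_skill(mastery))  # -> "decimals": what the system would serve next
```

The personalisation argument in the article is essentially about doing this kind of bookkeeping at a finer grain, and across far more signals, than a teacher with thirty students can manage by hand.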

China has started a grand experiment in AI education. It could reshape how the world learns.
https://www.technologyreview.com/s/614057/china-squirrel-has-started-a-grand-experiment-in-ai-education-it-could-reshape-how-the/
via Instapaper

Food Abundance and Unintended Consequences (2019-06-12)
“What potential unintended consequences emerge as we move towards food abundance? The Future Today Institute describes a scenario where high-tech local microfarms upend the status quo for supply chains built around conventional agriculture and supermarkets. They envision a possible future where the shift impacts everyone from merchants and importers to truck drivers and UPC code sticker providers. Food shortages driven by extreme weather are also likely to drive migration from impacted regions to places like the U.S. and Europe, creating a humanitarian crisis. As stated by FTI:

That’s why planning for this plant-based future is vital, to ensure that plant factories arrive with opportunity rather than civil and economic unrest.”

Food Abundance and Unintended Consequences
https://frankdiana.net/2019/06/12/food-abundance-and-unintended-consequences/
via Instapaper

Will AI shatter human exceptionalism? (2019-06-12)
“From an evolutionary perspective, this is preposterous. The fact that humans are different from other animals is a distinction of degree, not of kind. Once we properly orient ourselves on the evolutionary tree, it becomes clear that we can learn more about ourselves by focusing on our similarities with other animals than by perpetuating the myth that we’re categorically unique.

Peter Clarke, “Transhumanism and the Death of Human Exceptionalism” at Areo”

Will AI shatter human exceptionalism?
https://mindmatters.ai/2019/03/will-ai-shatter-human-exceptionalism/
via Instapaper
Soul Downloading… Please wait. Syntax vs semantics (2019-06-09)
“John Searle argues that a machine will never have a mind or consciousness, because its “understanding” is only ever simulated. The logic of computers follows a purely formal structure (syntax), which orders symbols according to clear rules and hence only emulates understanding. Humans, by contrast, have a mind and consciousness and are able to attribute meaning and content to words and language (semantics).”
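Searle’s distinction is easy to make concrete: the toy program below carries on a “conversation” by pure symbol matching, with nothing in it that could be called understanding. It is a sketch in the spirit of his Chinese Room argument, not evidence for either side of the debate.

```python
# A purely syntactic "conversation": symbols in, symbols out, by fixed rules.
# Nothing in this program attaches meaning to any of the words it emits.

RULES = {
    "how are you": "I am fine, thank you.",
    "do you understand me": "Of course I understand you.",
    "what is love": "Love is a profound feeling of attachment.",
}

def reply(utterance: str) -> str:
    key = utterance.lower().strip(" ?!.")
    # Rule lookup: the program matches character strings, it does not grasp them.
    return RULES.get(key, "How interesting. Tell me more.")

for question in ["Do you understand me?", "What is love?", "Why are we here?"]:
    print(question, "->", reply(question))
```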

Soul Downloading… Please wait.
https://www.sovereignmagazine.co.uk/2019/06/03/soul-downloading-please-wait/
via Instapaper

'Black Mirror' Isn't Surprising Anymore. We're Screwed (2019-06-07)
“In other words, show creator Charlie Brooker and executive producer Annabel Jones in all likelihood plucked Bauer's vision quest not from the headlines but from their own brains—only to have reality outpace what would otherwise be a pitch-perfect lampoon of tech-founder sanctimony. Such is the burden of Black Mirror. More than seven years after it first debuted, the sci-fi anthology can still make you laugh (sometimes), unnerve you (many more times), and even disappoint you (more on that in a bit). It just may no longer surprise you.”

'Black Mirror' Isn't Surprising Anymore. We're Screwed
https://www.wired.com/story/black-mirror-season-3-review/
via Instapaper

Imagining New Institutions for the Internet Age – OneZero (2019-05-30)
“In a world awash in information, the curator is king. Behind each digital throne is an algorithm, a specialized artificial intelligence that is powered by data. More data means better machine learning, which attracts more talent that builds better products that attract more users that generate more data. Rinse, repeat. This positive feedback loop means that A.I. tends toward centralization. Centralization means monopoly and monopoly means power. That’s why companies like Google and Facebook post annual revenues that dwarf the gross domestic product of some countries.”
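That feedback loop can be made visible with a toy simulation: two platforms, product quality assumed to grow with accumulated data, and a small share of users drifting toward the better product each step. Every constant and functional form here is an assumption chosen only to show the winner-take-all dynamic, not a model of any real market.

```python
# Toy model of the "more data -> better product -> more users -> more data" loop.
# Two platforms start nearly tied; a small early edge compounds into dominance.

import random

random.seed(1)

users = {"A": 51, "B": 49}        # slightly uneven start
data = {"A": 0.0, "B": 0.0}       # accumulated data per platform

def quality(d):
    # Assumed diminishing-returns link between accumulated data and quality.
    return d ** 0.5

for step in range(200):
    # Each platform collects data in proportion to its current user base.
    for p in users:
        data[p] += users[p]
    # A small fraction of the weaker platform's users re-evaluates and switches.
    qa, qb = quality(data["A"]), quality(data["B"])
    better, worse = ("A", "B") if qa >= qb else ("B", "A")
    switchers = min(max(1, int(0.02 * users[worse])), users[worse])
    if switchers and random.random() < 0.9:   # quality usually wins the switcher
        users[worse] -= switchers
        users[better] += switchers

print(users)  # the initial 51/49 split ends up heavily skewed toward A
```

Even with an almost even start, the platform with the small initial edge compounds it into near-total dominance, which is the centralization tendency the author describes.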

Imagining New Institutions for the Internet Age – OneZero
https://onezero.medium.com/imagining-new-institutions-for-the-internet-age-bf17212063db
via Instapaper

Meet me in Bucharest June 10!! (2019-05-30)
Technology, humanity, society and ethics: A look at the next 10 years

Understanding the future and developing foresight is becoming mission-critical. Join us for this groundbreaking session where Gerd will introduce the most important things we must know about the future today, such as the decline of the oil and fossil-fuel economy, the end of routine work (and why that's not the end of work), the newly emerging opportunities created by industry convergence, automation vs globalization, tomorrow’s ethics, a new economic system, the future of Europe and much more.
The future is better than we think - we just need to govern it wisely.

Keynote Speaker: Gerd Leonhard - Futurist | Author | Speaker | CEO - TheFuturesAgency
Guest Speaker: Peter Vander Auwera - Content Curator Digital Ethics | Speaker | Sensemaker

Amazon Is Working on a Device That Can Read Human Emotions (2019-05-27)
“The notion of building machines that can understand human emotions has long been a staple of science fiction, from stories by Isaac Asimov to Star Trek’s android Data. Amid advances in machine learning and voice and image recognition, the concept has recently marched toward reality. Companies including Microsoft Corp., Alphabet Inc.’s Google and IBM Corp., among a host of other firms, are developing technologies designed to derive emotional states from images, audio data and other inputs. Amazon has discussed publicly its desire to build a more lifelike voice assistant.

The technology could help the company gain insights for potential health products or be used to better target advertising or product recommendations. The concept is likely to add fuel to the debate about the amount and type of personal data scooped up by technology giants, which already collect reams of information about their customers. Earlier this year, Bloomberg reported that Amazon has a team listening to and annotating audio clips captured by the company’s Echo line of voice-activated speakers.”
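For a sense of what “deriving emotional states from audio data and other inputs” means at the mechanical level, here is a deliberately small sketch: a few acoustic-style features feeding an ordinary classifier. The features, labels and training data are synthetic placeholders; this shows the shape of such a pipeline, not how Amazon’s (or anyone’s) product works.

```python
# Sketch of an emotion-from-audio pipeline: acoustic features -> classifier -> label.
# Everything here is synthetic and illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
EMOTIONS = ["calm", "stressed"]

def fake_features(emotion, n):
    """Stand-in for real acoustic features (pitch, energy, speaking rate...)."""
    centre = np.array([0.3, 0.2, 0.1]) if emotion == "calm" else np.array([0.7, 0.8, 0.9])
    return centre + 0.1 * rng.standard_normal((n, 3))

X = np.vstack([fake_features("calm", 50), fake_features("stressed", 50)])
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)

# A "new clip": a feature vector that sits close to the stressed cluster.
clip = np.array([[0.65, 0.75, 0.85]])
print(EMOTIONS[int(clf.predict(clip)[0])])  # -> most likely "stressed"
```

The debate in the article is less about whether such a classifier can be built than about who collects the inputs and what the resulting label is then used for.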

Amazon Is Working on a Device That Can Read Human Emotions
https://www.bloomberg.com/news/articles/2019-05-23/amazon-is-working-on-a-wearable-device-that-reads-human-emotions
via Instapaper



Is Surveillance the Future of Service? (2019-05-24)
“If that’s not Orwellian enough for you, consider that technology giant Adobe recently launched a cloud-based platform that, by using a variety of data points and technologies, identifies individual shoppers in real-time as they enter a store, portraying them as moving dots on a store map. It then allows store management to click on and receive a full profile of each individual, including spending patterns, marital status, age range, city of residence and more. From there, each individual consumer can be micro-targeted with specific offers and promotions to suit their known purchasing patterns.

Still not dystopian enough? Then take a visit to an Amazon Go store, the first of which opened in Seattle in 2018. From the moment you scan your mobile device on entry to crossing the threshold on exit, every movement and interaction you have with the store is monitored in real time. Make no mistake: Amazon is a data company first and foremost and is now bringing the same level of surveillance to physical stores that has allowed it to become the online behemoth it is today. In fact, according to a 2014 patent filing, the company intends to use its growing vortex of customer data to begin what it calls “anticipatory shipping,” a complex predictive analytics and logistics system that will enable Amazon to accurately ship us products before we even know we wanted or needed them.

And if all this weren’t enough, in his description of his company’s “store of the future” or “augmented retail” initiative, Farfetch founder José Neves describes a world where individual shoppers are “recognised as [they] come into the store, which is either via beacons or via a wallet like your Apple Wallet, scanning in like you would with a boarding pass for a flight." Then, there is what Neves refers to as the "offline cookie, a technology that automatically adds products to your wish list on your app as you touch them in the store, without having to scan anything.”

So, how do humans feel about becoming “offline cookies”?”
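Mechanically, the “offline cookie” Neves describes is just an event pipeline: a shelf sensor reports a touch, the nearby shopper is resolved to an identity, and the product lands on their wish list. The sketch below uses invented names and fields throughout; no real retail API is being described.

```python
# Hypothetical "offline cookie" pipeline: shelf-touch event -> identified shopper
# -> product added to that shopper's wish list. All names are invented.

from dataclasses import dataclass, field

@dataclass
class Shopper:
    shopper_id: str
    wish_list: list = field(default_factory=list)

@dataclass
class TouchEvent:
    shelf_sensor_id: str
    product_sku: str
    beacon_id: str          # which shopper's phone/beacon was nearest

def resolve_shopper(beacon_id, directory):
    """Map a beacon (or wallet scan) to a known shopper profile, if any."""
    return directory.get(beacon_id)

def handle_touch(event, directory):
    shopper = resolve_shopper(event.beacon_id, directory)
    if shopper is not None:
        shopper.wish_list.append(event.product_sku)

directory = {"beacon-42": Shopper("s-1001")}
handle_touch(TouchEvent("shelf-7", "SKU-SNEAKER-9", "beacon-42"), directory)
print(directory["beacon-42"].wish_list)   # -> ['SKU-SNEAKER-9']
```

The privacy argument in the piece is really about the resolve_shopper step: whether shoppers know it exists and whether they ever agreed to it.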

Is Surveillance the Future of Service?
https://www.businessoffashion.com/articles/opinion/is-surveillance-the-future-of-service
via Instapaper



Don’t let industry write the rules for AI (2019-05-21)
“Companies’ input in shaping the future of AI is essential, but they cannot retain the power they have gained to frame research on how their systems impact society or on how we evaluate the effect morally. Governments and publicly accountable entities must support independent research, and insist that industry shares enough data for it to be kept accountable.”

Don’t let industry write the rules for AI
http://www.nature.com/articles/d41586-019-01413-1
via Instapaper

Social Media Are Ruining Political Discourse (2019-05-20)
“A presence on Twitter has become almost a job requirement for columnists and pundits. YouTube can also be a valuable educational resource with videos of political roundtables, academic conferences, lectures, and interviews. But the flow-oriented design of these media inhibits extended debate. When the liberal economist Paul Krugman tweeted a critique of the inconsistency of Republican policies on interest rates, for example, most of the more than 100 replies were simply derisive comments about Republican hypocrisy—posts created to derive pleasure from online riposte rather than advocacy for a particular position.

By contrast, blog posts and articles in online newspapers and magazines are not flow media; they are digital extensions of the kind of political writing that characterized printed newspapers and journals in the 19th and 20th centuries. There might be an opportunity for the readers to comment at the end of the article, but their responses do not contribute to flow and engagement in the same way. Even formal news and commentary often decays into flow fodder, such as when people post gut-feel responses to social media about articles they haven’t even read, based on the headline alone.

The politics of flow now poses a serious challenge to the earlier tradition of political debate. Some pundits have interpreted Trump’s populism as a realignment of the traditional political narratives of the left and the right. In both his presidential campaign and his presidency, Trump showed how easy it was to break both narratives into incendiary fragments that could be reshuffled into a variety of combinations. From the left he took opposition to international trade agreements and economic globalism; from the right, hostility to social programs and the federal bureaucracy (“drain the swamp”).”

Social Media Are Ruining Political Discourse
https://www.theatlantic.com/technology/archive/2019/05/why-social-media-ruining-political-discourse/589108/?_hsenc=p2ANqtz-82144uBghSiGsOclpXytsnfx5Tlp906M_u1MaQEZnigt8tqgpaBa3-bcNJIuL37kqtaIDQ37Z78zbVZu9tGb--n5CPeNgKlogF764EeSc0pGRmWGI&_hsmi=72835581&utm_campaign=the_download.unpaid.engagement&utm_content=72835581&utm_medium=email&utm_source=hs_email
via Instapaper

Opinion | It’s Time to Break Up Facebook (2019-05-13)
“But it’s his very humanity that makes his unchecked power so problematic.

Mark’s influence is staggering, far beyond that of anyone else in the private sector or in government. He controls three core communications platforms — Facebook, Instagram and WhatsApp — that billions of people use every day. Facebook’s board works more like an advisory committee than an overseer, because Mark controls around 60 percent of voting shares. Mark alone can decide how to configure Facebook’s algorithms to determine what people see in their News Feeds, what privacy settings they can use and even which messages get delivered. He sets the rules for how to distinguish violent and incendiary speech from the merely offensive, and he can choose to shut down a competitor by acquiring, blocking or copying it.”
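The line about configuring “Facebook’s algorithms to determine what people see in their News Feeds” becomes vivid if you look at the general shape of a feed ranker: a scoring function over posts whose weights someone chooses. The toy below is generic and invented, not Facebook’s model; the point is only that changing a couple of weights reorders everyone’s feed.

```python
# Generic toy feed ranker: score each post as a weighted sum of signals, show
# the ordering, then change the weights and show how the feed reorders.
# Not Facebook's algorithm; an illustration of why the weights are powerful.

posts = [
    {"id": "friend_photo", "closeness": 0.9, "engagement_bait": 0.1, "outrage": 0.1},
    {"id": "viral_rant",   "closeness": 0.2, "engagement_bait": 0.9, "outrage": 0.9},
    {"id": "local_news",   "closeness": 0.4, "engagement_bait": 0.3, "outrage": 0.2},
]

def rank(posts, weights):
    score = lambda p: sum(weights[k] * p[k] for k in weights)
    return [p["id"] for p in sorted(posts, key=score, reverse=True)]

balanced  = {"closeness": 1.0, "engagement_bait": -0.5, "outrage": -0.5}
attention = {"closeness": 0.2, "engagement_bait": 1.0,  "outrage": 1.0}

print(rank(posts, balanced))   # friend_photo ranks first
print(rank(posts, attention))  # viral_rant ranks first
```

One person with write access to those weights decides, at the margin, which of the two feeds billions of people scroll through.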

Opinion | It’s Time to Break Up Facebook
https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html
via Instapaper

The End of Privacy Rursus Lege (2019-05-11)
“Cybersecurity is right up there with wealth inequality and global warming as impactful and dangerous for corporations and big banks in the years going forwards. The Internet has been totally hacked by data-sharing companies that monetized our innocence on social media.”

The End of Privacy Rursus Lege
https://medium.com/artificial-intelligence-network/the-end-of-privacy-rursus-lege-b134fce96bb5
via Instapaper

Forget about artificial intelligence, extended intelligence is the future (Joi Ito) (2019-05-10)
“Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines – not artificial intelligence but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.

We must question and adapt our own purpose and sensibilities as observers and designers within systems for a much more humble approach: humility over control.”

Forget about artificial intelligence, extended intelligence is the future
https://www.wired.co.uk/article/artificial-intelligence-extended-intelligence
via Instapaper




Forget about artificial intelligence, extended intelligence is the future (Joi Ito) (2019-05-09)
“They have found a perfect partner in digital computation, a seemingly knowable, controllable, machine-based system of thinking and creating that is rapidly increasing in its ability to harness and process complexity and, in the process, bestowing wealth and power on those who have mastered it.

In Silicon Valley, the combination of groupthink and the financial success of this cult of technology has created a feedback loop, lacking in self-regulation (although #techwontbuild, #metoo and #timesup are forcing some reflection).”

Forget about artificial intelligence, extended intelligence is the future
https://www.wired.co.uk/article/artificial-intelligence-extended-intelligence
via Instapaper

Will Artificial Intelligence Enhance or Hack Humanity? (2019-05-05)
“Also, when it’s used to enhance you, the question is: who decides what is a good enhancement and what is a bad enhancement? Our immediate fallback position is to fall back on the traditional humanist ideas: that the customer is always right, the customers will choose the enhancement. Or the voter is always right, the voters will vote, there will be a political decision about the enhancement. Or if it feels good, do it: we’ll just follow our heart, we’ll just listen to ourselves. None of this works when there is a technology to hack humans on a large scale. You can’t trust your feelings, or the voters, or the customers on that. The easiest people to manipulate are the people who believe in free will, because they think they cannot be manipulated. So how do you decide what to enhance? This is a very deep ethical and philosophical question, one that philosophers have been debating for thousands of years: what is good? What are the good qualities we need to enhance?”

Will Artificial Intelligence Enhance or Hack Humanity?
https://www.wired.com/story/will-artificial-intelligence-enhance-hack-humanity/
via Instapaper

Michael Dell: Technology must reflect our humanity and our values | ZDNet (2019-05-03)
“A new age of miracles is literally around the corner," Dell said. "We're technologists and we share an awesome responsibility -- it's up to us to ensure that technology reflects our humanity and our values.

"While technology can amplify human genius, it can also amplify human frailty."

Discussing the concept of unconscious bias in AI, Dell said his company is also experimenting internally where its hiring practices are concerned, specifically taking the approach of making sure that bias is not reflected in its technologies.”

Michael Dell: Technology must reflect our humanity and our values | ZDNet
https://www.zdnet.com/article/michael-dell-technology-must-reflect-our-humanity-and-our-values/
via Instapaper





The experience economy is booming, but it must benefit everyone (2019-04-25)
“As the forces of the Fourth Industrial Revolution accelerate, consumers are enjoying the benefits of rapid innovation and new models of consumption, but also struggling to maintain a sense of connection and understanding in our rapidly changing world. In that context, it should be no surprise that experiences, especially transformative ones that educate, inspire and bring people together, are growing in popularity.”

The experience economy is booming, but it must benefit everyone
https://www.weforum.org/agenda/2019/01/the-experience-economy-is-booming-but-it-must-benefit-everyone/
via Instapaper

Human Contact Is Now a Luxury Good: a 5* read via the NYT (2019-03-27)
“Such programs are proliferating. And not just for the elderly.

Life for anyone but the very rich — the physical experience of learning, living and dying — is increasingly mediated by screens.

Not only are screens themselves cheap to make, but they also make things cheaper. Any place that can fit a screen in (classrooms, hospitals, airports, restaurants) can cut costs. And any activity that can happen on a screen becomes cheaper. The texture of life, the tactile experience, is becoming smooth glass.

The rich do not live like this. The rich have grown afraid of screens. They want their children to play with blocks, and tech-free private schools are booming. Humans are more expensive, and rich people are willing and able to pay for them. Conspicuous human interaction — living without a phone for a day, quitting social networks and not answering email — has become a status symbol.

All of this has led to a curious new reality: Human contact is becoming a luxury good.”

Human Contact Is Now a Luxury Good
https://www.nytimes.com/2019/03/23/sunday-review/human-contact-luxury-screens.html
via Instapaper




Coders’ Primal Urge to Kill Inefficiency—Everywhere (2019-03-20)
“It’s one thing to optimize your personal life. But for many programmers, the true narcotic is transforming the world. Scale itself is a joy; it’s mesmerizing to watch your new piece of code suddenly explode in popularity, going from two people to four to eight to the entire globe. You’ve accelerated some aspect of life—how we text or pay bills or share news—and you can see the ripples spread outward.”

Coders’ Primal Urge to Kill Inefficiency—Everywhere
https://www.wired.com/story/coders-efficiency-is-beautiful/
via Instapaper

Humanity is on 'the highway to digital dictatorship', says Yuval Noah Harari (2019-03-19)
“Well, none of this is inevitable,” he says, suddenly eager to sound a note of optimism. “We can do things on the level of global co-operation, such as an agreement against producing autonomous weapons systems. We can do things on the level of individual government, for example laws to govern the ownership and use of data. And we can do things on the level of the individual. Each of us makes daily choices that have some impact. What control do you have over your own data, on your smartphone, for example?””

Humanity is on 'the highway to digital dictatorship', says Yuval Noah Harari
https://www.noted.co.nz/currently/social-issues/yuval-noah-harari-humanity-on-highway-to-digital-dictatorship/
via Instapaper

The Automatic Weapons of Social Media (the dangers of automated business models): a good read (2019-03-19)
“What’s changed my mind is the recalcitrant posture of these companies in the face of overwhelming evidence that their platforms are being intentionally manipulated to undermine our democracy. This is an existential crisis, both for civil society and for the health of the businesses being manipulated. But to date the response from the platforms is the equivalent of politicians’ “hopes and prayers” after a school shooting: Soothing murmurs, evasion of truly hard conversations, and a refusal to acknowledge the core problem: Their automated business models.”

The Automatic Weapons of Social Media
https://medium.com/newco/the-automatic-weapons-of-social-media-3ccce92553ad
via Instapaper

This is How AI is Redefining Love (2019-03-15)
“DNA Dating and VR Dating

By 2020, DNA dating and VR dating will remove the last vestiges of unpredictability from love.

In DNA dating, people will use their DNA, big data and artificial intelligence to create their Perfect Dates with max compatibility. Their Personal Dating Avatars will find, diagnose and transact to make sure your date candidate is not a psycho. Next, there will be sentiment and behavioral analysis, along with a compatibility check for lifestyle, economics, culture, and values.

Then, the Intimacy Diagnostic created with all the above parameters will be evaluated. If the diagnostic is compatible, you meet and take the relationship forward. Else just abort and move on.

On the other hand, VR or Virtual reality dating focuses more on the experience of meeting people in real life.

Ross Dawson, Chairman of the Future Exploration Network, says: “There’s a big gap still between your image or profile-based dating and just meeting someone in real life — we still meet people in real life, but we still have Tinder, eHarmony, and your profile-based matching. So what we need to do is fill the gap in between, because so many times after seeing photos, reading each other’s profiles, etc., it’s a totally different experience when you meet in real life.”

Which brings us to Virtual Reality Dating — a way to be able to connect, to be able to see what it’s like to be with somebody and chat, interact, without actually being physically there. And that’s when you can decide whether to go out on a real date because you have to be sure it’s worth your time and your safety to go out on a real physical date.”
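Strip away the branding and the “compatibility check for lifestyle, economics, culture, and values” is, at its simplest, a weighted similarity score between two profiles plus a cut-off. The sketch below is a toy with invented attributes, weights and threshold; it implies nothing about how real matchmaking systems (let alone DNA-based ones) actually work.

```python
# Toy compatibility score: compare two dating profiles attribute by attribute,
# weight the differences, and apply a cut-off. Attributes, weights and the
# threshold are all invented for illustration.

WEIGHTS = {"lifestyle": 0.3, "economics": 0.2, "culture": 0.2, "values": 0.3}

def compatibility(profile_a, profile_b):
    """Return a 0..1 score; 1 means identical on every weighted attribute."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        # Each attribute is a 0..1 preference value; closer means more compatible.
        score += weight * (1.0 - abs(profile_a[attr] - profile_b[attr]))
    return score

alice = {"lifestyle": 0.8, "economics": 0.6, "culture": 0.7, "values": 0.9}
bob   = {"lifestyle": 0.7, "economics": 0.4, "culture": 0.6, "values": 0.85}

score = compatibility(alice, bob)
print(f"{score:.2f}", "meet" if score >= 0.8 else "move on")
```

The article’s “Intimacy Diagnostic” is essentially this kind of score with more inputs; the ethical questions start with who picks the weights and what data feeds them.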

This is How AI is Redefining Love
https://medium.com/swlh/this-is-how-ai-is-redefining-love-53c78f0f1118
via Instapaper
This is How AI is Redefining Love (Medium) (2019-03-15)
“Algorithms can end up knowing a person better than friends, family or even themselves, and that’s revolutionizing matchmaking”, says Michal Kosinski, a computational psychologist and assistant professor at Stanford University’s Graduate School of Business. “Algorithms can learn from experiences of billions of others, while a typical person can only learn from their own experience and the experience of a relatively small number of friends.””

This is How AI is Redefining Love
https://medium.com/swlh/this-is-how-ai-is-redefining-love-53c78f0f1118
via Instapaper

AI is reinventing the way we invent (2019-03-13)
“New methods of invention with wide applications don’t come by very often, and if our guess is right, AI could dramatically change the cost of doing R&D in many different fields.” Much of innovation involves making predictions based on data. In such tasks, Cockburn adds, “machine learning could be much faster and cheaper by orders of magnitude.””

AI is reinventing the way we invent
https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/
via Instapaper

How I fell out of love with the internet: a brilliant must-read (2019-03-06)
“Find out Facebook let Netflix and Spotify read your private messages. Find out Facebook uses your location data to send you more targeted ads. To steer your body. Look up Facebook’s patents. Find a technique for using passive imaging data to detect your emotions and deliver content. Find a method for generating emojis based on facial analysis. Find a system for tapping your phone and monitoring your TV habits. Facebook will never apologize for any of this. This is their business model, watching you and productizing you and selling you off. Living on the internet feels like living in an empire now — Mark Zuckerberg’s empire.”

How I fell out of love with the internet
https://qz.com/1551620/how-i-fell-out-of-love-with-the-internet/
via Instapaper

AI Ethics and the Human Problem (2019-03-06)
“The Alibaba City Brain project, which is employed by the Chinese retail giant Alibaba in the city of Hangzhou, aims to ‘create a cloud-based system where information about a city, and as a result everyone in it, is stored and used to control the city’. The trial of City Brain has had positive impacts, vastly improving traffic speed in Hangzhou; however, it has also led many to question issues of privacy and surveillance.

Whilst the world is nowhere near AI-controlled cities just yet, the fact that they are in (albeit infantile) development without proper regulation of issues such as privacy is worrying. Even more worrying is this statement from the AI manager at Alibaba: ‘In China, people have less concern with privacy, which allows us to move faster,’ which further demonstrates my earlier point about power.”

AI Ethics and the Human Problem
https://www.iotforall.com/ai-ethics-human-problem/
via Instapaper
