tag:digitalethics.net,2013:/posts Digital Ethics by FuturistGerd 2019-02-21T22:19:02Z Digital Ethics by Futurist Gerd Leonhard tag:digitalethics.net,2013:Post/1376802 2019-02-21T22:10:47Z 2019-02-21T22:19:02Z [singularity] The Troubling Trajectory Of Technological Singularity



“The technology triggered intelligence evolution in machines and the linkages between ideas, innovations and trends have in fact brought us on the doorsteps of singularity. Irrespective of whether we believe that the singularity will happen or not, the very thought raises many concerns and critical security risk uncertainties for the future of humanity. This forces us to begin a conversation with ourselves and with others (individually and collectively) about what we want as a species.”

The Troubling Trajectory Of Technological Singularity
https://www.forbes.com/sites/cognitiveworld/2019/02/10/the-troubling-trajectory-of-technological-singularity/
via Instapaper
]]>
tag:digitalethics.net,2013:Post/1376419 2019-02-20T21:54:53Z 2019-02-20T21:54:54Z AI is incredibly smart, but it will never match human creativity
“Humanity’s safe-haven in the coming years will be exactly that — consciousness. Spontaneous thought, creative thinking, and a desire to challenge the world around us. As long as humans exist there will always be a need to innovate, to solve problems through brilliant ideas. Rather than some society in which all individuals will be allowed to carry out their days creating works of art, the machine revolution will instead lead to a society in which anyone can make a living by dreaming and providing creative input to projects of all kinds. The currency of the future will be thought.”

AI is incredibly smart, but it will never match human creativity
https://thenextweb.com/syndication/2019/01/02/ai-is-incredibly-smart-but-it-will-never-match-human-creativity/
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1375798 2019-02-19T12:03:08Z 2019-02-19T12:03:08Z [digital ethics] Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says



“A massive majority of consumers believe that using their data to personalize ads is unethical. And a further 59% believe that personalization to create tailored newsfeeds -- precisely what Facebook, Twitter, and other social applications do every day -- is unethical.

At least, that's what they say on surveys.”

Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says
https://www.forbes.com/sites/johnkoetsier/2019/02/09/83-of-consumers-believe-personalized-ads-are-morally-wrong-survey-says

]]>
tag:digitalethics.net,2013:Post/1372203 2019-02-09T15:21:46Z 2019-02-09T15:21:47Z Facebook’s provocations of the week – Monday Note describes the Google business model
“Imagine if JPMorgan owned the New York Stock Exchange, was the sole market-maker on its own equity, the exclusive broker for every other equity in the market, ran the entire settlement and clearing system in the market, and basically wouldn’t let anyone see who had bought shares and which share or certificate or number they bought… That is Google’s business model.”

Facebook’s provocations of the week – Monday Note
https://mondaynote.com/facebooks-provocations-of-the-week-9fc6af6de12f
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1372202 2019-02-09T15:21:30Z 2019-02-09T15:21:30Z The Next Privacy War Will Happen in Our Homes – Member Feature Stories – Medium
“[In] October, Amazon showcased Alexa’s newest features, including the ability to detect when someone is whispering and respond at a quieter volume. According to Wired, Amazon also has plans to introduce a home security feature, Alexa Guard, giving the program the ability to listen “for trouble such as broken glass or a smoke alarm when you’re away from home.” A month later, the Telegraph reported that Amazon had patented Alexa software that could one day analyze someone’s voice for signs of illness (like a cough or a sneeze) and respond by offering to order cough drops.”

The Next Privacy War Will Happen in Our Homes – Member Feature Stories – Medium
https://medium.com/s/story/why-the-next-privacy-war-will-be-over-sound-d7b59b1533f3
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1372186 2019-02-09T14:45:58Z 2019-02-09T14:45:58Z Understanding China's AI Strategy
“Jack Ma, the chairman of Alibaba, said explicitly in a speech at the 2019 Davos World Economic Forum that he was concerned that global competition over AI could lead to war.”

Understanding China's AI Strategy
https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy
via Instapaper



]]>
tag:digitalethics.net,2013:Post/1371865 2019-02-08T11:10:54Z 2019-02-08T11:10:55Z What is work?
“Since the dawn of the industrial age, work has become ever more transactional and predictable; the execution of routine, tightly defined tasks. In virtually every large public and private sector organization, that approach holds: thousands of people, each specializing in certain tasks, limited in scope, increasingly standardized and specified, which ultimately contribute to the creation and delivery of predictable products and services to customers and other stakeholders. The problem? Technology can increasingly do that work. Actually, technology should do that work: Machines are more accurate, they don’t get tired or bored, they don’t break for sleep or weekends. If it’s a choice between human or machines to do the kind of work that requires compliance and consistency, machines should win every time.”

What is work?
https://www2.deloitte.com/insights/us/en/focus/technology-and-the-future-of-work/what-is-work.html
via Instapaper



]]>
tag:digitalethics.net,2013:Post/1371822 2019-02-08T06:26:04Z 2019-02-08T06:26:05Z Team Human vs. Team AI
“Artificial intelligence adds another twist. After we launch technologies related to AI and machine learning, they not only shape us, but they also begin to shape themselves. We give them an initial goal, then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an AI program may be processing information or modifying its tactics. The AI isn’t conscious enough to tell us. It’s just trying everything and hanging onto what works for the initial goal, regardless of its other consequences.”

Team Human vs. Team AI
https://www.strategy-business.com/article/Team-Human-vs-Team-AI?gko=4d55d
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1370592 2019-02-05T07:12:34Z 2019-02-05T07:12:35Z Recent events highlight an unpleasant scientific practice: ethics dumping
“Dig deeper, though, and what happened starts to look more intriguing than just the story of a lone maverick having gone off the rails in a place with lax regulation. It may instead be an example of a phenomenon called ethics dumping.

Ethics dumping is the carrying out by researchers from one country (usually rich, and with strict regulations) in another (usually less well off, and with laxer laws) of an experiment that would not be permitted at home, or of one that might be permitted, but in a way that would be frowned on. The most worrisome cases involve medical research, in which health, and possibly lives, are at stake. But other investigations—anthropological ones, for example—may also be carried out in a more cavalier fashion abroad. As science becomes more international the risk of ethics dumping, both intentional and unintentional, has risen. The suggestion in this case is that Dr He was encouraged and assisted in his project by a researcher at an American university.”

Recent events highlight an unpleasant scientific practice: ethics dumping
https://www.economist.com/science-and-technology/2019/02/02/recent-events-highlight-an-unpleasant-scientific-practice-ethics-dumping
via Instapaper




]]>
tag:digitalethics.net,2013:Post/1370246 2019-02-04T11:29:31Z 2019-02-04T11:29:32Z The new elite’s phoney crusade to save the world – without changing anything
“That vast numbers of Americans and others in the west have scarcely benefited from the age is not because of a lack of innovation, but because of social arrangements that fail to turn new stuff into better lives. For example, American scientists make the most important discoveries in medicine and genetics and publish more biomedical research than those of any other country – but the average American’s health remains worse and slower-improving than that of peers in other rich countries, and in some years life expectancy actually declines. American inventors create astonishing new ways to learn thanks to the power of video and the internet, many of them free of charge – but the average US high-school leaver tests more poorly in reading today than in 1992. The country has had a “culinary renaissance”, as one publication puts it, one farmers’ market and Whole Foods store at a time – but it has failed to improve the nutrition of most people, with the incidence of obesity and related conditions rising over time.”

The new elite’s phoney crusade to save the world – without changing anything
http://www.theguardian.com/news/2019/jan/22/the-new-elites-phoney-crusade-to-save-the-world-without-changing-anything
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1369578 2019-02-02T17:31:02Z 2019-02-02T17:31:02Z ‘Merging man and machine doesn’t come without consequences’. Gerd Leonhard comments
“Google’s director of engineering Ray Kurzweil is aligned with Mr Musk on this issue, and regularly enthuses about the possibility of man and machine combining to optimise our skills and extend our lifespans. Mr Leonhard, however, does not share this utopian vision. “I don’t want to be faced with the challenge of becoming a cyborg,” he says. “There are things we’d stop doing. Anything slow and inefficient, we wouldn’t do any longer, and I think that’s dehumanising. Also, it means that the rich can augment themselves and become superhuman, while the unaugmented will become useless in comparison.”

But perhaps we’re getting ahead of ourselves. Current experiments with non-invasive BCIs (ie, not implanted inside the skull) are still limited in their scope, and the technology would have to improve by several orders of magnitude before it could boost our lifespans (or, indeed, end up sowing divisions in society). But work is being done outside the field of EEGs that might speed up that journey. New York company CTRL-labs has produced a wristband that senses electrical pulses in the arm, and according to chief executive Thomas Reardon, has all the capabilities of a cranial implant. “There’s nothing you can do with a chip in your brain that we can’t do better,” he boasted in an interview with The Verge in June. In tests, CTRL-labs have successfully demonstrated the movement of virtual objects by the power of thought, and gaming enthusiasts have been fascinated. Once problems of speed and accuracy have been conquered, it could represent a gaming revolution where controllers are no longer needed, and experiences become fully immersive.

But while he acknowledges that it is the job of scientists and companies to build this kind of advanced technology, Mr Leonhard says that they also have a responsibility for unforeseen side-effects. “If we have a serious uptake in this kind of augmented reality, I believe we’re going to have a lot of issues with health, mental health and attention deficits.” So how far should we go with the convergence of man and machine? “I’m excited about the future,” he says. “But I’m a humanist. I don’t think we should use technology to leave humanity behind us.””

‘Merging man and machine doesn’t come without consequences’
https://www.thenational.ae/arts-culture/comment/merging-man-and-machine-doesn-t-come-without-consequences-1.780792
via Instapaper




]]>
tag:digitalethics.net,2013:Post/1367732 2019-01-27T22:04:57Z 2019-01-27T22:05:28Z GSMA sharpens focus on ethical digitalisation with the launch of 'Digital Declaration'
Hear, hear! Better late than never :))


“Social, technological, political and economic currents are combining to create a perfect storm of disruption across all industries,” said Mats Granryd, director general of the GSMA.

“A new form of responsible leadership is needed to successfully navigate this era. We are on the cusp of the 5G era, which will spark exciting new possibilities for consumers and promises to transform the shape of virtually every business. In the face of this disruption, those that embrace the principles of the Digital Declaration will strive for business success in ways that seek a better future for their consumers and societies. Those that do not change can expect to suffer increasing scrutiny from shareholders, regulators and consumers,” he added.

GSMA sharpens focus on ethical digitalisation with the launch of 'Digital Declaration'
https://www.totaltele.com/501995/GSMA-sharpens-focus-on-ethical-digitalisation-with-the-launch-of-Digital-Declaration
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1367292 2019-01-26T09:00:12Z 2019-01-26T09:00:12Z World Leaders at Davos Call for Global Rules on Tech
“The rapid spread of digital technology in daily life and the implications that has on the future of work and data security will require more international cooperation, not less, Ms. Merkel said. But she acknowledged that nobody knows how to write the rules.

Neither the American nor the Chinese approach would work for Europeans, who place a high value on privacy and social justice, Ms. Merkel said.

“I still have yet to see any global architecture that deals with these questions,” she said.”

World Leaders at Davos Call for Global Rules on Tech
https://www.nytimes.com/2019/01/23/technology/world-economic-forum-data-controls.html
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1365599 2019-01-21T21:40:04Z 2019-01-21T21:40:04Z The World Is Choking on Digital Pollution
“As always, progress has not been without a price. Like the factories of 200 years ago, digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy. We manage what we choose to measure. It is time to name and measure not only the progress the information revolution has brought, but also the harm that has come with it. Until we do, we will never know which costs are worth bearing.

We seem to be caught in an almost daily reckoning with the role of the internet in our society. This past March, Facebook lost $134 billion in market value over a matter of weeks after a scandal involving the misuse of user data by the political consulting firm Cambridge Analytica. In August, several social media companies banned InfoWars, the conspiracy-mongering platform of right-wing commentator Alex Jones. Many applauded this decision, while others cried of a left-wing conspiracy afoot in the C-suites of largely California-based technology companies.”

The World Is Choking on Digital Pollution
https://washingtonmonthly.com/magazine/january-february-march-2019/the-world-is-choking-on-digital-pollution/
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1363498 2019-01-15T06:47:15Z 2019-01-15T06:47:17Z Don’t believe the hype: the media are unwittingly selling us an AI fantasy | John Naughton
“The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.””

Don’t believe the hype: the media are unwittingly selling us an AI fantasy | John Naughton
http://www.theguardian.com/commentisfree/2019/jan/13/dont-believe-the-hype-media-are-selling-us-an-ai-fantasy
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1361586 2019-01-09T09:33:14Z 2019-01-09T09:33:14Z Beware corporate ‘machinewashing’ of AI
“To compound the problem, the baleful effects of AI are often rooted in the very algorithms that drive many tech companies’ profit streams. Economists call such societal costs “negative externalities.” A key component of a negative externality is that the selling or buying of the product itself doesn’t price in the costs borne by others in society as a result of this transaction.

For instance, if people are clicking like crazy on ideologically divisive content served up by personalized algorithms designed to manipulate emotions, it may make both the social media company and the individual user happy in the moment. But it’s inarguably a bad thing for the world. However, those clicks equal money, and when you’re answering to impatient shareholders, greed has an edge over principle.”

Beware corporate ‘machinewashing’ of AI
https://www.bostonglobe.com/opinion/2019/01/07/beware-corporate-machinewashing/IwB1GkAxBlFaOzfo8Wh0IN/story.html
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1359397 2019-01-02T04:41:40Z 2019-01-02T04:41:41Z How Much of the Internet Is Fake?
“What’s gone from the internet, after all, isn’t “truth,” but trust: the sense that the people and things we encounter are what they represent themselves to be. Years of metrics-driven growth, lucrative manipulative systems, and unregulated platform marketplaces, have created an environment where it makes more sense to be fake online — to be disingenuous and cynical, to lie and cheat, to misrepresent and distort — than it does to be real. Fixing that would require cultural and political reform in Silicon Valley and around the world, but it’s our only choice. Otherwise we’ll all end up on the bot internet of fake people, fake clicks, fake sites, and fake computers, where the only real thing is the ads.”

How Much of the Internet Is Fake?
http://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1359387 2019-01-02T03:38:50Z 2019-01-02T03:40:06Z A Conversation with Rudy Rucker
“one of Wolfram's points has been that any natural process can embody universal computation. Once you have universal computation, it seems like in principle, you might be able to get intelligent behavior emerging even if it's not programmed. So then, it's not clear that there's some bright line that separates human intelligence from the rest of the intelligence. I think when we say "artificial intelligence," what we're getting at is the idea that it would be something that we could bring into being, either by designing or probably more likely by evolving it in a laboratory setting.”

Episode 76: A Conversation with Rudy Rucker
https://voicesinai.com/episode/episode-76-a-conversation-with-rudy-rucker/?utm_campaign=Petervan%27s+Delicacies&utm_medium=email&utm_source=Revue+newsletter
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1357426 2018-12-27T01:08:04Z 2018-12-27T01:08:05Z The Anti-Human Religion of Silicon Valley – Douglas Rushkoff – Medium - must-read!!
“The anti-human agenda of technologists might not be so bad — or might never be fully realized — if it didn’t dovetail so neatly with the anti-human agenda of corporate capitalism. Each enables the other, reinforcing an abstract, growth-based scheme of infinite expansion — utterly incompatible with human life or the sustainability of our ecosystem. They both depend on a transcendent climax where the chrysalis of matter is left behind and humanity is reborn as pure consciousness or pure capital.”

The Anti-Human Religion of Silicon Valley – Douglas Rushkoff – Medium
https://medium.com/s/douglas-rushkoff/the-anti-human-religion-of-silicon-valley-ac37d5528683
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1351410 2018-12-08T12:53:10Z 2018-12-08T12:53:11Z Microsoft Warns Washington to Regulate A.I. Before It’s Too Late
“[These are the kinds of cases that] AI Now—a group composed of tech employees from companies including Microsoft and Google, and affiliated with New York University—says exemplify the need for stricter regulation of artificial intelligence. The group’s report, published Thursday, underscores the inherent dangers in using A.I. to do things like amplify surveillance in fields including finance and policing, and argues that accountability and oversight are necessities where this type of nascent technology is concerned. Crucially, they argue, people should be able to opt out of facial-recognition systems altogether. “Mere public notice of their use is not sufficient, and there should be a high threshold for any consent, given the dangers of oppressive and continual mass surveillance,” the organization writes. “These tools are very suspect and based on faulty science,””

Microsoft Warns Washington to Regulate A.I. Before It’s Too Late
https://www.vanityfair.com/news/2018/12/microsoft-warns-washington-to-regulate-ai-before-its-too-late
via Instapaper




]]>
tag:digitalethics.net,2013:Post/1346382 2018-11-22T09:34:46Z 2018-11-22T09:34:48Z One of the fathers of AI is worried about its future - we need more democracy in A.I. research!
“another reason why we need to have more democracy in AI research. It’s that AI research by itself will tend to lead to concentrations of power, money, and researchers. The best students want to go to the best companies. They have much more money, they have much more data. And this is not healthy. Even in a democracy, it’s dangerous to have too much power concentrated in a few hands.”

One of the fathers of AI is worried about its future
https://www.technologyreview.com/s/612434/one-of-the-fathers-of-ai-is-worried-about-its-future/
via Instapaper




]]>
tag:digitalethics.net,2013:Post/1345243 2018-11-19T07:31:29Z 2018-11-19T07:31:30Z The future of artificial intelligence depends on human wisdom - some good AI stats
“Where the physical and intellectual capacities of humans are inherently limited, AI has the potential to add to that reservoir of capacity to improve lives.

Then, of course, there is the money. As a result of AI, it is projected that global GDP would increase by up to 14 percent in 2030, an estimated increase of $15.7 trillion, with the greatest gains to come in China (26 percent increase in GDP) and the U.S. (14 percent increase in GDP). Gartner predicts that by 2020, almost all new software will contain AI elements.

Huge amounts are already being invested by businesses that seek the efficiency gains and outsized accomplishments AI promises. Venture capital investment in AI startups grew 463 percent from 2012-2017. A McKinsey report noted that global demand for data scientists has exceeded supply by over 50 percent in 2018 alone. They are so coveted that some Chinese companies are reportedly hiring senior machine learning researchers with salaries above $500,000. According to Mark Cuban, by 2017 Google had incorporated AI into its business model and generated $9 billion more as a result and Cuban also posited that the world’s first trillionaire would stem from the AI field.”

The future of artificial intelligence depends on human wisdom
https://www.salon.com/2018/11/17/the-future-of-artificial-intelligence-depends-on-human-wisdom/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1344291 2018-11-15T19:58:21Z 2018-11-15T19:58:22Z Automation may take our jobs—but it’ll restore our humanity
“One implication of all this is that for humans to succeed in the AI-powered future, we need to double down on our humanity. Technical skills will no doubt remain important in the future of work, but as AI allows us to automate repetitive tasks across many industries, these will in many cases take a back seat to soft skills. Communication, emotional intelligence, creativity, critical thinking, collaboration, and cognitive flexibility will become the most sought-after abilities. To prepare for that future, we need to emphasize developing higher-order thinking and emotional skills.”

Automation may take our jobs—but it’ll restore our humanity
https://qz.com/1054034/automation-may-take-our-jobs-but-itll-restore-our-humanity/
via Instapaper



]]>
tag:digitalethics.net,2013:Post/1343021 2018-11-12T04:50:18Z 2018-11-12T04:50:19Z Mindset and Heartset - a crucial message!
“The focus on mindset has even deeper roots. If we go back to the Enlightenment from the late 1600’s to the early 1800’s, the key message from the great thinkers of that time was to celebrate the power of the mind and all that it could accomplish. It’s not an accident that this era was also known as the “Age of Reason.” The mind is of course a powerful vehicle for driving amazing insights and accomplishments and should be celebrated. But there’s a risk that we reduce everything to the mind. It’s all about ideas and reason. The body is just a distraction or, at best, something to be nurtured because it holds our mind. Life is so much more complicated than that.”

Mindset and Heartset
https://edgeperspectives.typepad.com/edge_perspectives/2018/11/mindset-and-heartset.html
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1342776 2018-11-11T14:58:57Z 2018-11-11T14:59:00Z Mindset and Heartset - must read by John Hagel
“The key assumption in the room was that it was all about the mind. They assumed that our assumptions and beliefs shape what we feel and what we do. In this view of the world, emotions are a distraction, or at best a second order effect, and it’s ultimately all about our mind.

Expanding our view
I would suggest that we’re a lot more complicated than that. Our emotions aren’t just derivative of our assumptions and beliefs. Emotions shape our perceptions, assumptions, thoughts and beliefs as well. If you try to shape assumptions and beliefs without paying attention to the emotions that already exist, good luck.

We need to move beyond mindset and expand our horizons to address our heartset: what are the emotions that filter how we perceive the world, shape what we believe and influence how we act?”

Mindset and Heartset
https://edgeperspectives.typepad.com/edge_perspectives/2018/11/mindset-and-heartset.html
via Instapaper



]]>
tag:digitalethics.net,2013:Post/1342733 2018-11-11T10:37:46Z 2018-11-11T10:37:46Z Artificial Intelligence Is Not A Technology (Forbes)
“Yet, with centuries of technology advancement and the almost exponential increase of computing resources, data, knowledge, and capabilities, we still have not yet achieved the vision of Artificial General Intelligence (AGI) -- machines that can be an equal counterpart of human ability. We’re not even close. We have devices we can talk to that don’t understand what we’re saying. We have cars that will happily drive straight into a wall if that’s what your GPS instructs it to do. Machines are detecting images but not understanding what they are. And we have amazing machines that can beat world champions at chess and Go and multiplayer games, but can’t answer a question as basic as “how long should I cook a 14 pound turkey?” We’ve mastered computing. We’ve wrangled big data. We’re figuring out learning. We have no idea how to achieve general intelligence.”

Artificial Intelligence Is Not A Technology
https://www.forbes.com/sites/cognitiveworld/2018/11/01/artificial-intelligence-is-not-a-technology/?utm_campaign=f93d7b1204-EMAIL_CAMPAIGN_2018_11_10_05_46&utm_medium=email&utm_source=Cognitive%2BRoundUp&utm_term=0_8baf59472a-f93d7b1204-98985207
via Instapaper



]]>
tag:digitalethics.net,2013:Post/1342449 2018-11-10T11:00:26Z 2018-11-10T11:00:30Z Ignore AI Fear Factor at Your Peril: A Futurist’s Call for 'Digital Ethics'
“In fact, Leonhard cited Gartner’s recent pronouncement that a leading tech topic for 2019 will be “digital ethics,” a focus on compliance, values and respect for individuals' data in response to public concerns about privacy. Leonhard himself defines digital ethics as “the difference between doing whatever technological progress will enable us to do, and putting human happiness and societal flourishing first at all times.””

Ignore AI Fear Factor at Your Peril: A Futurist’s Call for 'Digital Ethics'
https://www.enterprisetech.com/2018/11/03/ignore-the-ai-fear-factor-at-your-peril-a-futurists-call-for-digital-ethics/
via Instapaper



]]>
tag:digitalethics.net,2013:Post/1342448 2018-11-10T10:59:26Z 2018-11-10T10:59:26Z Sundar Pichai of Google: ‘Technology Doesn’t Solve Humanity’s Problems’
“But there’s a deeper thing here, which is: Technology doesn’t solve humanity’s problems. It was always naïve to think so. Technology is an enabler, but humanity has to deal with humanity’s problems. I think we’re both over-reliant on technology as a way to solve things and probably, at this moment, over-indexing on technology as a source of all problems, too.”

Sundar Pichai of Google: ‘Technology Doesn’t Solve Humanity’s Problems’
https://www.nytimes.com/2018/11/08/business/sundar-pichai-google-corner-office.html
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1336527 2018-10-27T13:03:52Z 2018-10-27T13:03:53Z PDF with presentation from ICEE Fest 2018: Technology and Humanity - the next 10 years (Futurist Gerd Leonhard)

]]>
tag:digitalethics.net,2013:Post/1336526 2018-10-27T13:01:24Z 2018-10-27T13:01:25Z Shared presentation: Humans and Technology - Heaven or Hell? (AHRA Aruba 2018, Gerd Leonhard)

]]>