Digital Ethics by Futurist Gerd Leonhard

Human Contact Is Now a Luxury Good – 5* read via the NYT (2019-03-27)
“Such programs are proliferating. And not just for the elderly.

Life for anyone but the very rich — the physical experience of learning, living and dying — is increasingly mediated by screens.

Not only are screens themselves cheap to make, but they also make things cheaper. Any place that can fit a screen in (classrooms, hospitals, airports, restaurants) can cut costs. And any activity that can happen on a screen becomes cheaper. The texture of life, the tactile experience, is becoming smooth glass.

The rich do not live like this. The rich have grown afraid of screens. They want their children to play with blocks, and tech-free private schools are booming. Humans are more expensive, and rich people are willing and able to pay for them. Conspicuous human interaction — living without a phone for a day, quitting social networks and not answering email — has become a status symbol.

All of this has led to a curious new reality: Human contact is becoming a luxury good.”

Human Contact Is Now a Luxury Good
https://www.nytimes.com/2019/03/23/sunday-review/human-contact-luxury-screens.html
via Instapaper




Coders’ Primal Urge to Kill Inefficiency—Everywhere (2019-03-20)
“It’s one thing to optimize your personal life. But for many programmers, the true narcotic is transforming the world. Scale itself is a joy; it’s mesmerizing to watch your new piece of code suddenly explode in popularity, going from two people to four to eight to the entire globe. You’ve accelerated some aspect of life—how we text or pay bills or share news—and you can see the ripples spread outward.”

Coders’ Primal Urge to Kill Inefficiency—Everywhere
https://www.wired.com/story/coders-efficiency-is-beautiful/
via Instapaper

Humanity is on 'the highway to digital dictatorship', says Yuval Noah Harari (2019-03-19)
“Well, none of this is inevitable,” he says, suddenly eager to sound a note of optimism. “We can do things on the level of global co-operation, such as an agreement against producing autonomous weapons systems. We can do things on the level of individual government, for example laws to govern the ownership and use of data. And we can do things on the level of the individual. Each of us makes daily choices that have some impact. What control do you have over your own data, on your smartphone, for example?””

Humanity is on 'the highway to digital dictatorship', says Yuval Noah Harari
https://www.noted.co.nz/currently/social-issues/yuval-noah-harari-humanity-on-highway-to-digital-dictatorship/
via Instapaper

The Automatic Weapons of Social Media (the dangers of automated business models) - good read! (2019-03-19)
“What’s changed my mind is the recalcitrant posture of these companies in the face of overwhelming evidence that their platforms are being intentionally manipulated to undermine our democracy. This is an existential crisis, both for civil society and for the health of the businesses being manipulated. But to date the response from the platforms is the equivalent of politicians’ “hopes and prayers” after a school shooting: Soothing murmurs, evasion of truly hard conversations, and a refusal to acknowledge the core problem: Their automated business models.”

The Automatic Weapons of Social Media
https://medium.com/newco/the-automatic-weapons-of-social-media-3ccce92553ad
via Instapaper

This is How AI is Redefining Love (2019-03-15)
“DNA Dating and VR Dating

By 2020, DNA dating and VR dating will remove the last vestiges of unpredictability from love.

In DNA dating, people will use their DNA, big data and artificial intelligence to create their Perfect Dates with max compatibility. Their Personal Dating Avatars will find, diagnose and transact to make sure your date candidate is not a psycho. Next, there will be sentiment and behavioral analysis, along with a compatibility check for lifestyle, economics, culture, and values.

Then, the Intimacy Diagnostic created with all the above parameters will be evaluated. If the diagnostic is compatible, you meet and take the relationship forward. Else just abort and move on.

On the other hand, VR or Virtual reality dating focuses more on the experience of meeting people in real life.

Ross Dawson, Chairman, Future Exploration Network, says: “There’s a big gap still between your image or profile-based dating and just meeting someone in real life — we still meet people in real life, but we still have Tinder, eHarmony, and your profile-based matching. So what we need to do is fill the gap in between, because so many times after seeing photos, reading each other’s profiles, etc., it’s a totally different experience when you meet in real life.”

Which brings us to Virtual Reality Dating — a way to be able to connect, to be able to see what it’s like to be with somebody and chat, interact, without actually being physically there. And that’s when you can decide whether to go out on a real date because you have to be sure it’s worth your time and your safety to go out on a real physical date.”

This is How AI is Redefining Love
https://medium.com/swlh/this-is-how-ai-is-redefining-love-53c78f0f1118
via Instapaper
This is How AI is Redefining Love (Medium) (2019-03-15)
“Algorithms can end up knowing a person better than friends, family or even themselves, and that’s revolutionizing matchmaking”, says Michal Kosinski, a computational psychologist and assistant professor at Stanford University’s Graduate School of Business. “Algorithms can learn from experiences of billions of others, while a typical person can only learn from their own experience and the experience of a relatively small number of friends.””

This is How AI is Redefining Love
https://medium.com/swlh/this-is-how-ai-is-redefining-love-53c78f0f1118
via Instapaper

AI is reinventing the way we invent (2019-03-13)
“New methods of invention with wide applications don’t come by very often, and if our guess is right, AI could dramatically change the cost of doing R&D in many different fields.” Much of innovation involves making predictions based on data. In such tasks, Cockburn adds, “machine learning could be much faster and cheaper by orders of magnitude.””

AI is reinventing the way we invent
https://www.technologyreview.com/s/612898/ai-is-reinventing-the-way-we-invent/
via Instapaper

How I fell out of love with the internet – brilliant, a must-read (2019-03-06)
“Find out Facebook let Netflix and Spotify read your private messages. Find out Facebook uses your location data to send you more targeted ads. To steer your body. Look up Facebook’s patents. Find a technique for using passive imaging data to detect your emotions and deliver content. Find a method for generating emojis based on facial analysis. Find a system for tapping your phone and monitoring your TV habits. Facebook will never apologize for any of this. This is their business model, watching you and productizing you and selling you off. Living on the internet feels like living in an empire now — Mark Zuckerberg’s empire.”

How I fell out of love with the internet
https://qz.com/1551620/how-i-fell-out-of-love-with-the-internet/
via Instapaper

AI Ethics and the Human Problem (2019-03-06)
“The Alibaba City Brain project, which is employed by the Chinese retail giant Alibaba in the city of Hangzhou, aims to ‘create a cloud-based system where information about a city, and as a result everyone in it, is stored and used to control the city’. This has had positive impacts with the trial of City Brain on traffic vastly improving traffic speed in Hangzhou; however, it has led to many questioning the issue of privacy and surveillance.

Whilst the world is nowhere near AI controlled cities just yet, the fact that they are in (albeit infantile) development without proper regulations over issues such as privacy is worrying. Even more worrying is this statement from the AI manager at Alibaba: ‘In China, people have less concern with privacy, which allows us to move faster.’ Further demonstrating my earlier point of power.”

AI Ethics and the Human Problem
https://www.iotforall.com/ai-ethics-human-problem/
via Instapaper

[artificial intelligence] We analyzed 16,625 papers to figure out where AI is headed next (MIT Tech Review) (2019-03-05)



“The biggest shift we found was a transition away from knowledge-based systems by the early 2000s. These computer programs are based on the idea that you can use rules to encode all human knowledge. In their place, researchers turned to machine learning—the parent category of algorithms that includes deep learning.

Among the top 100 words mentioned, those related to knowledge-based systems—like “logic,” “constraint,” and “rule”—saw the greatest decline. Those related to machine learning—like “data,” “network,” and “performance”—saw the highest growth.”

We analyzed 16,625 papers to figure out where AI is headed next
https://www.technologyreview.com/s/612768/we-analyzed-16625-papers-to-figure-out-where-ai-is-headed-next/
via Instapaper
[digital heresy] This Is Silicon Valley – OneZero (2019-03-03)



“In Silicon Valley, few people find things like climate change important enough to talk about at length, and even fewer find it important enough to work on. It’s not where the money is at. It’s not where “success” is at. And it’s certainly not where the industry is at. Instead, money comes from changing a button from green to blue, from making yet another food delivery app, and from getting more clicks on ads. That’s just how the Valley and the tech industry are set up. As Jeffrey Hammerbacher, a former Facebook executive, told Bloomberg, “The best minds of my generation are thinking about how to make people click ads.”

This is Silicon Valley.”

This Is Silicon Valley – OneZero
https://onezero.medium.com/this-is-silicon-valley-3c4583d6e7c2
via Instapaper


[artificial intelligence] Seeking Ground Rules for A.I. via the NYT (2019-03-02)


“The Recommendations

Transparency: Companies should be transparent about the design, intention and use of their A.I. technology.

Disclosure: Companies should clearly disclose to users what data is being collected and how it is being used.

Privacy: Users should be able to easily opt out of data collection.

Diversity: A.I. technology should be developed by inherently diverse teams.

Bias: Companies should strive to avoid bias in A.I. by drawing on diverse data sets.

Trust: Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.

Accountability: There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.

Collective governance: Companies should work together to self-regulate the industry.

Regulation: Companies should work with regulators to develop appropriate laws to govern the use of A.I.

“Complementarity”: Treat A.I. as a tool for humans to use, not a replacement for human work.”

Seeking Ground Rules for A.I.
https://www.nytimes.com/2019/03/01/business/ethical-ai-recommendations.html
via Instapaper

[technology] Get ready for the age of sensor panic (2019-03-01)



“But after what seems like daily reports about Facebook privacy transgressions, Russian hacking, Chinese industrial espionage, Android malware and all manner of leaks, hacks and privacy-invading blunders, we’ve entered into a new era of public distrust of all things technological.”

Get ready for the age of sensor panic
https://www.computerworld.com/article/3342629/get-ready-for-the-age-of-sensor-panic.html
via Instapaper

[digital heresy] Uber and the Ongoing Erasure of Public Life (2019-02-26)



“Cities struggling to keep subways and buses running are being drained of revenue by tech companies and a reserve army of cars. These cars, in turn, coagulate the arteries of the city, blocking the remaining fleet of buses, causing a downward spiral of decreasing ridership and growing traffic.

Despite all of this, Uber claims to support mass transit. “Everyone agrees on the solution,” a company spokesperson said in an e-mail. “We need tools that help ensure sustainable travel modes like public transportation are prioritized over single occupant vehicles.” The company has regularly portrayed itself as offering “first-mile, last-mile” solutions for transit: carrying you to and from the train station or bus stop. In fact, the evidence of its success in this arena is inconclusive. In some suburbs or city peripheries, where these solutions are most necessary, Uber has become a subsidized alternative to the transit to which it supposedly offers a connection, partnering with municipal and transit agencies to replace their existing bus services.”

Uber and the Ongoing Erasure of Public Life
https://www.newyorker.com/culture/dept-of-design/uber-and-the-ongoing-erasure-of-public-life
via Instapaper
[artificial intelligence] A philosopher argues that an AI can’t be an artist (2019-02-24)




“Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.”

A philosopher argues that an AI can’t be an artist
https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/
via Instapaper
[singularity] The Troubling Trajectory Of Technological Singularity (2019-02-21)



“The technology triggered intelligence evolution in machines and the linkages between ideas, innovations and trends have in fact brought us on the doorsteps of singularity. Irrespective of whether we believe that the singularity will happen or not, the very thought raises many concerns and critical security risk uncertainties for the future of humanity. This forces us to begin a conversation with ourselves and with others (individually and collectively) about what we want as a species.”

The Troubling Trajectory Of Technological Singularity
https://www.forbes.com/sites/cognitiveworld/2019/02/10/the-troubling-trajectory-of-technological-singularity/
via Instapaper
AI is incredibly smart, but it will never match human creativity (2019-02-20)
“Humanity’s safe-haven in the coming years will be exactly that — consciousness. Spontaneous thought, creative thinking, and a desire to challenge the world around us. As long as humans exist there will always be a need to innovate, to solve problems through brilliant ideas. Rather than some society in which all individuals will be allowed to carry out their days creating works of art, the machine revolution will instead lead to a society in which anyone can make a living by dreaming and providing creative input to projects of all kinds. The currency of the future will be thought.

This article was originally published on Alex Wulff's Medium”

AI is incredibly smart, but it will never match human creativity
https://thenextweb.com/syndication/2019/01/02/ai-is-incredibly-smart-but-it-will-never-match-human-creativity/
via Instapaper


[digital ethics] Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says (2019-02-19)



“A massive majority of consumers believe that using their data to personalize ads is unethical. And a further 59% believe that personalization to create tailored newsfeeds -- precisely what Facebook, Twitter, and other social applications do every day -- is unethical.

At least, that's what they say on surveys.”

Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says
https://www.forbes.com/sites/johnkoetsier/2019/02/09/83-of-consumers-believe-personalized-ads-are-morally-wrong-survey-says

Facebook’s provocations of the week – Monday Note describes the Google business model (2019-02-09)
“Imagine if JPMorgan owned the New York Stock Exchange, was the sole market-maker on its own equity, the exclusive broker for every other equity in the market, ran the entire settlement and clearing system in the market, and basically wouldn’t let anyone see who had bought shares and which share or certificate or number they bought… That is Google’s business model.””

Facebook’s provocations of the week – Monday Note
https://mondaynote.com/facebooks-provocations-of-the-week-9fc6af6de12f
via Instapaper

The Next Privacy War Will Happen in Our Homes – Member Feature Stories – Medium (2019-02-09)
“In October, Amazon showcased Alexa’s newest features, including the ability to detect when someone is whispering and respond at a quieter volume. According to Wired, Amazon also has plans to introduce a home security feature, Alexa Guard, giving the program the ability to listen “for trouble such as broken glass or a smoke alarm when you’re away from home.” A month later, the Telegraph reported that Amazon had patented Alexa software that could one day analyze someone’s voice for signs of illness (like a cough or a sneeze) and respond by offering to order cough drops.”

The Next Privacy War Will Happen in Our Homes – Member Feature Stories – Medium
https://medium.com/s/story/why-the-next-privacy-war-will-be-over-sound-d7b59b1533f3
via Instapaper

Understanding China's AI Strategy (2019-02-09)
“Jack Ma, the chairman of Alibaba, said explicitly in a speech at the 2019 Davos World Economic Forum that he was concerned that global competition over AI could lead to war.”

Understanding China's AI Strategy
https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy
via Instapaper



What is work? (2019-02-08)
“Since the dawn of the industrial age, work has become ever more transactional and predictable; the execution of routine, tightly defined tasks. In virtually every large public and private sector organization, that approach holds: thousands of people, each specializing in certain tasks, limited in scope, increasingly standardized and specified, which ultimately contribute to the creation and delivery of predictable products and services to customers and other stakeholders. The problem? Technology can increasingly do that work. Actually, technology should do that work: Machines are more accurate, they don’t get tired or bored, they don’t break for sleep or weekends. If it’s a choice between human or machines to do the kind of work that requires compliance and consistency, machines should win every time.”

What is work?
https://www2.deloitte.com/insights/us/en/focus/technology-and-the-future-of-work/what-is-work.html
via Instapaper



Team Human vs. Team AI (2019-02-08)
“Artificial intelligence adds another twist. After we launch technologies related to AI and machine learning, they not only shape us, but they also begin to shape themselves. We give them an initial goal, then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an AI program may be processing information or modifying its tactics. The AI isn’t conscious enough to tell us. It’s just trying everything and hanging onto what works for the initial goal, regardless of its other consequences.”

Team Human vs. Team AI
https://www.strategy-business.com/article/Team-Human-vs-Team-AI?gko=4d55d
via Instapaper

Recent events highlight an unpleasant scientific practice: ethics dumping (2019-02-05)
“Dig deeper, though, and what happened starts to look more intriguing than just the story of a lone maverick having gone off the rails in a place with lax regulation. It may instead be an example of a phenomenon called ethics dumping.

Ethics dumping is the carrying out by researchers from one country (usually rich, and with strict regulations) in another (usually less well off, and with laxer laws) of an experiment that would not be permitted at home, or of one that might be permitted, but in a way that would be frowned on. The most worrisome cases involve medical research, in which health, and possibly lives, are at stake. But other investigations—anthropological ones, for example—may also be carried out in a more cavalier fashion abroad. As science becomes more international the risk of ethics dumping, both intentional and unintentional, has risen. The suggestion in this case is that Dr He was encouraged and assisted in his project by a researcher at an American university.”

Recent events highlight an unpleasant scientific practice: ethics dumping
https://www.economist.com/science-and-technology/2019/02/02/recent-events-highlight-an-unpleasant-scientific-practice-ethics-dumping
via Instapaper




The new elite’s phoney crusade to save the world – without changing anything (2019-02-04)
“That vast numbers of Americans and others in the west have scarcely benefited from the age is not because of a lack of innovation, but because of social arrangements that fail to turn new stuff into better lives. For example, American scientists make the most important discoveries in medicine and genetics and publish more biomedical research than those of any other country – but the average American’s health remains worse and slower-improving than that of peers in other rich countries, and in some years life expectancy actually declines. American inventors create astonishing new ways to learn thanks to the power of video and the internet, many of them free of charge – but the average US high-school leaver tests more poorly in reading today than in 1992. The country has had a “culinary renaissance”, as one publication puts it, one farmers’ market and Whole Foods store at a time – but it has failed to improve the nutrition of most people, with the incidence of obesity and related conditions rising over time.”

The new elite’s phoney crusade to save the world – without changing anything
http://www.theguardian.com/news/2019/jan/22/the-new-elites-phoney-crusade-to-save-the-world-without-changing-anything
via Instapaper

‘Merging man and machine doesn’t come without consequences’. Gerd Leonhard comments (2019-02-02)
“Google’s director of engineering Ray Kurzweil is aligned with Mr Musk on this issue, and regularly enthuses about the possibility of man and machine combining to optimise our skills and extend our lifespans. Mr Leonhard, however, does not share this utopian vision. “I don’t want to be faced with the challenge of becoming a cyborg,” he says. “There are things we’d stop doing. Anything slow and inefficient, we wouldn’t do any longer, and I think that’s dehumanising. Also, it means that the rich can augment themselves and become superhuman, while the unaugmented will become useless in comparison.”

But perhaps we’re getting ahead of ourselves. Current experiments with non-invasive BCIs (ie, not implanted inside the skull) are still limited in their scope, and the technology would have to improve by several orders of magnitude before it could boost our lifespans (or, indeed, end up sowing divisions in society). But work is being done outside the field of EEGs that might speed up that journey. New York company CTRL-labs has produced a wristband that senses electrical pulses in the arm, and according to chief executive Thomas Reardon, has all the capabilities of a cranial implant. “There’s nothing you can do with a chip in your brain that we can’t do better,” he boasted in an interview with The Verge in June. In tests, CTRL-labs have successfully demonstrated the movement of virtual objects by the power of thought, and gaming enthusiasts have been fascinated. Once problems of speed and accuracy have been conquered, it could represent a gaming revolution where controllers are no longer needed, and experiences become fully immersive.

But while he acknowledges that it is the job of scientists and companies to build this kind of advanced technology, Mr Leonhard says that they also have a responsibility for unforeseen side-effects. “If we have a serious uptake in this kind of augmented reality, I believe we’re going to have a lot of issues with health, mental health and attention deficits.” So how far should we go with the convergence of man and machine? “I’m excited about the future,” he says. “But I’m a humanist. I don’t think we should use technology to leave humanity behind us.””

‘Merging man and machine doesn’t come without consequences’
https://www.thenational.ae/arts-culture/comment/merging-man-and-machine-doesn-t-come-without-consequences-1.780792
via Instapaper




GSMA sharpens focus on ethical digitalisation with the launch of 'Digital Declaration' (2019-01-27)
Hear, hear! Better late than never :))


“Social, technological, political and economic currents are combining to create a perfect storm of disruption across all industries,” said Mats Granryd, director general of the GSMA.

“A new form of responsible leadership is needed to successfully navigate this era. We are on the cusp of the 5G era, which will spark exciting new possibilities for consumers and promises to transform the shape of virtually every business. In the face of this disruption, those that embrace the principles of the Digital Declaration will strive for business success in ways that seek a better future for their consumers and societies. Those that do not change can expect to suffer increasing scrutiny from shareholders, regulators and consumers,” he added.”

GSMA sharpens focus on ethical digitalisation with the launch of 'Digital Declaration'
https://www.totaltele.com/501995/GSMA-sharpens-focus-on-ethical-digitalisation-with-the-launch-of-Digital-Declaration
via Instapaper


World Leaders at Davos Call for Global Rules on Tech (2019-01-26)
“The rapid spread of digital technology in daily life and the implications that has on the future of work and data security will require more international cooperation, not less, Ms. Merkel said. But she acknowledged that nobody knows how to write the rules.

Neither the American nor the Chinese approach would work for Europeans, who place a high value on privacy and social justice, Ms. Merkel said.

“I still have yet to see any global architecture that deals with these questions,” she said.”

World Leaders at Davos Call for Global Rules on Tech
https://www.nytimes.com/2019/01/23/technology/world-economic-forum-data-controls.html
via Instapaper

The World Is Choking on Digital Pollution (2019-01-21)
“As always, progress has not been without a price. Like the factories of 200 years ago, digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy. We manage what we choose to measure. It is time to name and measure not only the progress the information revolution has brought, but also the harm that has come with it. Until we do, we will never know which costs are worth bearing.

We seem to be caught in an almost daily reckoning with the role of the internet in our society. This past March, Facebook lost $134 billion in market value over a matter of weeks after a scandal involving the misuse of user data by the political consulting firm Cambridge Analytica. In August, several social media companies banned InfoWars, the conspiracy-mongering platform of right-wing commentator Alex Jones. Many applauded this decision, while others cried of a left-wing conspiracy afoot in the C-suites of largely California-based technology companies.”

The World Is Choking on Digital Pollution
https://washingtonmonthly.com/magazine/january-february-march-2019/the-world-is-choking-on-digital-pollution/
via Instapaper


Don’t believe the hype: the media are unwittingly selling us an AI fantasy | John Naughton (2019-01-15)
“The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.””

Don’t believe the hype: the media are unwittingly selling us an AI fantasy | John Naughton
http://www.theguardian.com/commentisfree/2019/jan/13/dont-believe-the-hype-media-are-selling-us-an-ai-fantasy
via Instapaper

