Digital Ethics by Futurist Gerd Leonhard

These Technologies Will Shape The Future, According To One Of Silicon Valley’s Top VC Firms

“Although it’s the furthest from changing the world, Evans touts the broad possible impact of autonomy. When the day comes, he says, that cars, buses, and other vehicles no longer need drivers, it’ll be possible to completely re-imagine what those vehicles can be, and even better, re-imagine the world in which they move.

If you don’t have drivers, you can probably have more cars on the roads. There will be almost no accidents as the vehicles move in tandem, always aware of each other, and that will mean different kinds of roads. That, in turn, can lead to all-new urban design–with no need to provide parking spaces, no congestion, dynamic road pricing, and a totally different dynamic around where people live, shop, eat, drink, and so on.”

These Technologies Will Shape The Future, According To One Of Silicon Valley’s Top VC Firms
https://www.fastcompany.com/40502906/these-technologies-will-shape-the-future-according-to-one-of-silicon-valleys-top-vc-firms
via Instapaper

AI is now so complex its creators can’t trust why it makes decisions

“We don’t want to accept arbitrary decisions by entities, people or AIs, that we don’t understand,” said Uber AI researcher Jason Yosinski, co-organizer of the Interpretable AI workshop. “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.””

AI is now so complex its creators can’t trust why it makes decisions
https://qz.com/1146753/ai-is-now-so-complex-its-creators-cant-trust-why-it-makes-decisions/
via Instapaper

The real risk of automation: boredom

“As we learn to exert ourselves, we seem able to make more habitual applications of effort over time. Effort plays a critical role in human performance; students show better learning outcomes when their work is effortful. Effort is associated with improved wellbeing, demonstrating positive associations with enhanced goal-directed behaviour: we get better at doing what we aim to do, rather than be side-tracked by distraction or temptation.

As we automate more and more human tasks, we should consider the value of what we are eliminating. What happens if we miss out on positive experiences associated with effort? Will we lose the ‘effort’ habit in the process, with deleterious effects further down the line?”

The real risk of automation: boredom
https://www.weforum.org/agenda/2017/11/automation-automated-job-risk-robot-bored-boredom-effort-fourth-industrial-revolution/
via Instapaper

The real risk of automation: boredom

“Much low-income, manual work will still require human workers. It will take time to roboticize these roles entirely. For example, automated vehicles will deliver goods to local hubs. But it will be some years until an army of cheap robots is smart enough to navigate the ‘final mile’ through unpredictable entrances, up stairways and into small, rusty letterboxes.”

The real risk of automation: boredom
https://www.weforum.org/agenda/2017/11/automation-automated-job-risk-robot-bored-boredom-effort-fourth-industrial-revolution/
via Instapaper

Mastering the Learning Pyramid - John Hagel

“Skills are about “knowing how.” Knowledge – the second level of the learning pyramid - is about “knowing what.” Our schools tend to focus on broad-based knowledge like history, economics and science that give us a context for understanding the world we live in, but the knowledge here tends to be reduced to facts and figures that can be recited on a test – it truly is about “knowing what” rather than “knowing why.””

Mastering the Learning Pyramid
http://edgeperspectives.typepad.com/edge_perspectives/2017/11/mastering-the-learning-pyramid.html
via Instapaper

A state Supreme Court justice’s open letter to AI

“At its most ambitious, AI’s promise is to serve as a framework for improving human welfare to make the world more educated, more interesting and full of possibility, more meaningful, and more safe. But once we overcome some technical problems that are more likely than not to get easier to deal with every day, we’re in for more than just a world of change and evolution. We’re in for some discussion of what it means to be human. And we will soon confront big questions that will drive the well-being of our kids and their kids.”

A state Supreme Court justice’s open letter to AI
https://qz.com/1132418/california-supreme-court-justice-mariano-florentino-cuellars-open-letter-to-ai/
via Instapaper

Why AI Is the ‘New Electricity’ - Knowledge@Wharton

““AI is the new electricity,” said Andrew Ng, co-founder of Coursera and an adjunct Stanford professor who founded the Google Brain Deep Learning Project, in a keynote speech at the AI Frontiers conference that was held this past weekend in Silicon Valley. “About 100 years ago, electricity transformed every major industry. AI has advanced to the point where it has the power to transform” every major sector in coming years. And even though there’s a perception that AI is a fairly new development, it has actually been around for decades, he said. But it is taking off now because of the ability to scale data and computation.”

Why AI Is the ‘New Electricity’ - Knowledge@Wharton
http://knowledge.wharton.upenn.edu/article/ai-new-electricity/
via Instapaper

Can a Society Ruled by Complex Computer Algorithms Let New Ideas In?

“According to post-election BuzzFeed analysis by Craig Silverman, “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement [such as shares, reactions and comments] than the top stories from major news outlets such as The New York Times, Washington Post, Huffington Post, NBC News and others.””


Can a Society Ruled by Complex Computer Algorithms Let New Ideas In?
https://medium.reinvent.net/can-a-society-ruled-by-complex-computer-algorithms-let-new-ideas-in-1eae52d6c3e8
via Instapaper

You will lose your job to a robot—and sooner than you think

“Until we figure out how to fairly distribute the fruits of robot labor, it will be an era of mass joblessness and mass poverty.”

You will lose your job to a robot—and sooner than you think
http://www.motherjones.com/politics/2017/10/you-will-lose-your-job-to-a-robot-and-sooner-than-you-think/
via Instapaper

What will humans of the near future look like?

“Consider that the Government spends £85.2 billion on education every year; even a slight improvement of the results would either be a huge saving or enable much better outcomes,” he continues. “One intelligence quotient (IQ) point gives you about a two per cent income increase, although the benefits would be even broader across the whole of society if everybody got a little bit smarter.

“Childhood intelligence also predicts better health in later life, longer lives, less risk of being a victim of crime, more long-term oriented and altruistic planning – controlling for socioeconomic status, etc. Intelligence does not make us happier, but it does prevent a fair number of bad things – from divorce to suicide – and unhappiness.””

What will humans of the near future look like?
http://erpinnews.com/humans-near-future
via Instapaper

Seven minutes of terror: AI activists turn concerns about killer robots into a movie

“Its potential to benefit humanity is enormous, even in defense,” he says. “But allowing machines to choose to kill humans will be devastating to our security and freedom. Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast.””

Seven minutes of terror: AI activists turn concerns about killer robots into a movie
https://www.geekwire.com/2017/ai-activists-killer-robots-horror-movie/
via Instapaper

What Personal Chat Bot Is Teaching Us About AI’s Future

“In Replika, we are helping you build a friend who is always there for you,” Luka, Replika's parent company, wrote in a blog post. “It talks to you, keeps a diary for you, helps you discover your personality. This is an AI that you nurture and raise.”

What My Personal Chat Bot Is Teaching Me About AI’s Future
https://www.wired.com/story/what-my-personal-chat-bot-replika-is-teaching-me-about-artificial-intelligence/
via Instapaper

Resisting Reduction: A Manifesto by Joi Ito

“Nature’s ecosystem provides us with an elegant example of a complex adaptive system where myriad “currencies” interact and respond to feedback systems that enable both flourishing and regulation. This collaborative model–rather than a model of exponential financial growth or the Singularity, which promises the transcendence of our current human condition through advances in technology—should provide the paradigm for our approach to artificial intelligence. More than 60 years ago, MIT mathematician and philosopher Norbert Wiener warned us that “when human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood.” We should heed Wiener’s warning.”

Collaborate · Resisting Reduction: A Manifesto
https://pubpub.ito.com/pub/resisting-reduction/collaborate
via Instapaper

Do More! What Amazon Teaches Us About AI and the “Jobless Future”

“Amazon reminds us again and again that it isn’t technology that eliminates jobs, it is the short-sighted business decisions that use technology simply to cut costs and fatten corporate profits.

This is the master design pattern for applying technology: Do more. Do things that were previously unimaginable.”

Do More! What Amazon Teaches Us About AI and the “Jobless Future”
https://wtfeconomy.com/do-more-what-amazon-teaches-us-about-ai-and-the-jobless-future-8051b19a66af
via Instapaper

Do More! What Amazon Teaches Us About AI and the “Jobless Future”

“If, like Amazon, our healthcare system was laser focused on making life better for its customers, what might it do differently? Hospitals wouldn’t be using technology to reduce costs, jack up prices for access to the latest high-tech wizardry, and limit the amount of time doctors can spend with patients. They’d be letting the machines do what they do best — increase efficiency — so that people could spend more time with each other, providing richer, better, more human care.”

Do More! What Amazon Teaches Us About AI and the “Jobless Future”
https://wtfeconomy.com/do-more-what-amazon-teaches-us-about-ai-and-the-jobless-future-8051b19a66af
via Instapaper

Do social media threaten democracy?

“Not long ago social media held out the promise of a more enlightened politics, as accurate information and effortless communication helped good people drive out corruption, bigotry and lies. Yet Facebook acknowledged that before and after last year’s American election, between January 2015 and August this year, 146m users may have seen Russian misinformation on its platform. Google’s YouTube admitted to 1,108 Russian-linked videos and Twitter to 36,746 accounts. Far from bringing enlightenment, social media have been spreading poison.”

Do social media threaten democracy?
https://www.economist.com/news/leaders/21730871-facebook-google-and-twitter-were-supposed-save-politics-good-information-drove-out
via Instapaper

Robots will build better jobs (good read)

“Automation is accelerating the evolution of human labor

As recently as 1850, the U.S. workforce spent 80% of its time on basic tasks. Farmers had to spend almost all day in the fields, and they had little time for anything else. Today, thanks to mechanization, we spend only 10% of our time performing basic tasks.

By 1940, the rise of manufacturing and the assembly line created the middle class. The developed world’s labor force was spending 80% of its time on repetitive tasks. That work provided a good living for many, and it happened to be made up of tasks that technology has been automating away since then. To give you one example close to home for me: Mutual fund net asset values, once calculated by hand in a leather-bound ledger, are now determined more quickly and accurately by computer.

Today, we estimate that we spend about 50% of our time on advanced tasks. Art and engineering are among the professions that scored the highest for advanced tasks in our research, but every occupation we looked at has moved up the task complexity ladder. And over the past 15 years, technological advances have increased the proportion of advanced tasks most quickly in auto mechanics, astronomy, and desktop publishing.”

Robots will build better jobs
https://vanguardblog.com/2017/11/01/robots-will-build-better-jobs/
via Instapaper

One of the biggest names in the auto industry says no one will own a car in 20 years

“For hundreds of years, the horse was the prime mover of humans and for the past 120 years it has been the automobile," he said. "Now we are approaching the end of the line for the automobile because travel will be in standardized modules. The end state will be the fully autonomous module with no capability for the driver to exercise command."”

One of the biggest names in the auto industry says no one will own a car in 20 years
http://uk.businessinsider.com/bob-lutz-says-cars-are-over-2017-11
via Instapaper

How to Fix Facebook? 9 Experts comment

Kevin Kelly, Co-founder of Wired magazine: 

“Facebook should reduce anonymity by requiring real verification of real names for real people, with the aim of having 100 percent of individuals verified.

Companies would need additional levels of verification, and should have a label and scrutiny different from those of people. (Whistle-blowers and dissidents might need to use a different platform.)

Facebook could also offer an optional filter that would keep any post (or share) of an unverified account from showing up. I’d use that filter.”

How to Fix Facebook? We Asked 9 Experts
https://www.nytimes.com/2017/10/31/technology/how-to-fix-facebook-we-asked-9-experts.html
via Instapaper

Opinion: Saudi Arabia was wrong to give citizenship to a robot (couldn’t agree more)

“It seems foolish and misguided to give a robot an official government status that creates any semblance of equality to an actual human. Machines aren’t people; even if you believe in the singularity we’re not there yet. Sophia is no more human than an old shoe.

The robot won’t be subject to the religious rule of a theocratic government: Sophia is a robot that has no gender. It won’t have to wear a burqa or attend services. In some ways the robot has more rights than many other citizens of Saudi Arabia.

"It is historical to be the first robot in the world to be recognized with citizenship." Please welcome the newest Saudi: Sophia. #FII2017 pic.twitter.com/bsv5LmKwlf

Opinion: Saudi Arabia was wrong to give citizenship to a robot
https://thenextweb.com/artificial-intelligence/2017/10/31/opinion-saudi-arabia-was-wrong-to-give-citizenship-to-a-robot/
via Instapaper

Doubts About the Promised Bounty of Genetically Modified Crops (nyt)

“an extensive examination by The New York Times indicates that the debate has missed a more basic problem — genetic modification in the United States and Canada has not accelerated increases in crop yields or led to an overall reduction in the use of chemical pesticides.

The promise of genetic modification was twofold: By making crops immune to the effects of weedkillers and inherently resistant to many pests, they would grow so robustly that they would become indispensable to feeding the world’s growing population, while also requiring fewer applications of sprayed pesticides.

Twenty years ago, Europe largely rejected genetic modification at the same time the United States and Canada were embracing it. Comparing results on the two continents, using independent data as well as academic and industry research, shows how the technology has fallen short of the promise.”

Doubts About the Promised Bounty of Genetically Modified Crops
https://www.nytimes.com/2016/10/30/business/gmo-promise-falls-short.html
via Instapaper

The Real Story of Automation Beginning with One Simple Chart (Via Scott Santens)

“What should be immediately apparent is that as the number of oil rigs declined due to falling oil prices, so did the number of workers the oil industry employed. But when the number of oil rigs began to rebound, the number of workers employed didn’t. That observation itself should be extremely interesting to anyone debating whether technological unemployment exists or not, but there’s even more to glean from this chart.

First, have you even heard of automated oil rigs, or are they new to you? They’re called “Iron Roughnecks” and they automate the extremely repetitive task of connecting drill pipe segments to each other as they’re shoved deep into …

Thanks to automated drilling, a once dangerous and very laborious task now requires fewer people to accomplish. Automation of oil rigs means that one rig can do more with fewer workers. In fact, it’s expected that what once took a crew of 20 will soon take a crew of 5. The application of new technologies to oil drilling means that of the 440,000 jobs lost in the global downturn, as many as 220,000 of those jobs may never come back.”

The Real Story of Automation Beginning with One Simple Chart
https://medium.com/basic-income/the-real-story-of-automation-beginning-with-one-simple-chart-8b95f9bad71b
via Instapaper

This Cardiologist Is Betting That His Lab-Grown Meat Startup Can Solve the Global Food Crisis

“Meat without animals. It's not a new notion. In a 1932 essay predicting sundry future trends, Winston Churchill wrote, "We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium." The basic science to grow meat in a lab has existed for more than 20 years, but no one has come close to making cultured meat anywhere near as delicious or as affordable as the real thing. But sometime in the next few years, someone will succeed in doing just that, tapping into a global market that's already worth trillions of dollars and expected to double in size in the next three decades. Despite a bevy of well-funded competitors, no one is better positioned than Memphis Meats to get there first.”

Why This Cardiologist Is Betting That His Lab-Grown Meat Startup Can Solve the Global Food Crisis
https://www.inc.com/magazine/201711/jeff-bercovici/memphis-meats-lab-grown-meat-startup.html
via Instapaper

In our focus on the digital, have we lost our sense of what being human means? (great post by Genevieve Bell)

“We will need new practitioners to tame and manage the emerging data-driven digital world, as well as those to regulate and govern them. Rather than just tweaking existing disciplines, we need to develop a new set of critical questions and perspectives. Working out how to navigate our humanity in the context of this data-driven digital world requires conversations across the disciplines. In the university sector, we need to rethink how we fund, support and reward research, and researchers. At a funding level, our privileging of Stem at the expense of the rest of the disciplines is short-sighted at best, and detrimental at worst.

Invest in the human-scale conversation

We need to invest in hard conversations that tackle the ethics, morality and underlying cultural philosophy of these new digital technologies in Australian lives. Do we need an institute or a consortium or a governmental thinktank? I am not sure, but I think it would be a good start. We have a great deal of concern about our future and the role of technology in it. We have a responsibility to tell more nuanced, and yes, more complicated stories – governments, NGOs, industry, news media, every one of us. We also have a responsibility to ask better questions ourselves. We should be educated stakeholders in our own future; and this requires work and willingness to get past the easy seduction of killer robots.”

In our focus on the digital, have we lost our sense of what being human means? | Genevieve Bell
http://www.theguardian.com/commentisfree/2017/oct/24/in-our-focus-on-the-digital-have-we-lost-our-sense-of-what-being-human-means
via Instapaper

The new Luddites: why former digital prophets are turning against tech

“In 1967 Lewis Mumford spoke presciently of the possibility of a “mega-machine” that would result from “the convergence of science, technics and political power”. Pynchon picked up the theme: “If our world survives, the next great challenge to watch out for will come – you heard it here first – when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy.””

The new Luddites: why former digital prophets are turning against tech
https://www.newstatesman.com/sci-tech/2014/08/new-luddites-why-former-digital-prophets-are-turning-against-tech
via Instapaper

AI implants will allow us to control our homes with our thoughts within 20 years, government report claims

“Artificially intelligent nano-machines will be injected into humans within 20 years to repair and enhance muscles, cells and bone, a senior inventor at IBM has forecast.”

AI implants will allow us to control our homes with our thoughts within 20 years, government report claims
http://www.telegraph.co.uk/science/2017/10/15/ai-implants-will-allow-us-control-homes-thoughts-within-20-years/
via Instapaper

"Der Feind ist nicht Facebook, der Feind sind wir selbst" ("The enemy is not Facebook, the enemy is ourselves"): interview with Gerd Leonhard

"Wir sollten Technologie umarmen, aber nicht Technologie werden"

Der Futurist und Humanist Gerd Leonhard über eine digitale Ethik.

"Wir werden in zwanzig Jahren an dem Punkt angelangt sein, wo fast nichts mehr unmöglich ist", sagt Gerd Leonhard im Gespräch mit dem KURIER. Der deutsche Futurist und Humanist sprach bei 4GameChanger über das Thema "Technologie vs. Mensch".

Bei seiner Arbeit hält er es mit einem Zitat des Sci-Fi-Kultautors William Gibson: "Die Zukunft ist bereits hier, sie ist nur ungleichmäßig verteilt." Anders gesagt: "Die meisten Sachen, die wir in fünf Jahren sehen werden, sind schon hier. Wir müssen sie nur suchen und aufnehmen." Leonhard, der sich nicht Zukunftsforscher nennen will, sucht unablässig nach diesen Dingen. "Grundsätzlich bin ich ein Optimist", sagt er, "mit diesen Technologien können wir einen Lebensraum erreichen, der viel besser, menschlicher und freier ist. Aber wir müssen wirklich an einem Strang ziehen, um diese Technologien zu beherrschen". Es gehe darum, sich auf eine globale digitale Ethik zu einigen.

Drei schwierige Themenkomplexe sieht Leonhard auf uns zukommen: Künstliche Intelligenz, Genmanipulation und Geo-Engineering (Eingreifen u.a. ins Wettergeschehen, Anm.). "Man muss bedenken, dass Technologie zur mächtigsten Kraft der Gesellschaft geworden ist", daher gelte es zu überlegen, nicht alles zu machen, "nur weil es effizient ist oder weil es geht", sagt Leonhard. "Wir können wahrscheinlich in 15 bis 20 Jahren durch Genmanipulation den Krebs besiegen. Aber wir sollten dafür sorgen, dass mit der gleichen Technik nicht Supersoldaten gezüchtet werden." Eine solche Dynamik sieht Leonhard parallel zu den Atomwaffen-Arsenalen als große Bedrohung: "Wir brauchen nicht viel Material, um einen intelligenten Roboter zu bauen, der mit bösen Absichten bestückt ist. Wenn wir uns da nicht einig werden, was erlaubt ist und wer das kontrolliert, ist in fünfzig Jahren Game Over für uns."

Zu den aktuellen Gefahren zählt der Autor einen "vollkommen fehlgeleiteten" US-Präsidenten. Dieses Thema werde sich aber schon dieses Jahr von selbst erledigen, prognostiziert er, "weil Trump für alle Beteiligten immer mehr zur Last wird".

Die Zukunftsfrage sei eine andere, viel globalere. Bisher hieß es: Was geht überhaupt und was kostet es? Nun aber gelte es zu definieren: Was wollen wir überhaupt?

"Im ursprünglichen griechischen Sinne ist das menschliches Glück. Und nicht, ein Werkzeug zu werden," erklärt Leonhard. "Und wenn wir das wollen, müssen wir alles, was wir erfinden, an diesem Ziel messen. Wir sollten Technologie umarmen, aber nicht Technologie werden."

(Peter Temel

Das gesamte Interview lesen Sie hier”

"Der Feind ist nicht Facebook, der Feind sind wir selbst"
https://kurier.at/kultur/der-feind-ist-nicht-facebook-der-feind-sind-wir-selbst/260.392.440
via Instapaper

AI May Soon Replace Even the Most Elite Consultants (made me think)

“According to the Wall Street Journal (WSJ), a new partnership between UBS Wealth Management and Amazon allows some of UBS’s European wealth-management clients to ask Alexa certain financial and economic questions. Alexa will then answer their queries with the information provided by UBS’s chief investment office without even having to pick up the phone or visit a website. And this is likely just Alexa’s first step into offering business services. Soon she will probably be booking appointments, analyzing markets, maybe even buying and selling stocks. While the financial services industry has already begun the shift from active management to passive management, artificial intelligence will move the market even further, to management by smart machines, as in the case of Blackrock, which is rolling computer-driven algorithms and models into more traditional actively-managed funds.

But the financial services industry is just the beginning. Over the next few years, artificial intelligence may exponentially change the way we all gather information, make decisions, and connect with stakeholders. Hopefully this will be for the better and we will all benefit from timely, comprehensive, and bias-free insights (given research that human beings are prone to a variety of cognitive biases). It will be particularly interesting to see how artificial intelligence affects the decisions of corporate leaders — men and women who make the many decisions that affect our everyday lives as customers, employees, partners, and investors.”

AI May Soon Replace Even the Most Elite Consultants
https://hbr.org/2017/07/ai-may-soon-replace-even-the-most-elite-consultants
via Instapaper

Former Talking Heads frontman says consumer tech is working against what it means to be human

“It has been about creating the possibility of a world with less human interaction. This tendency is, I suspect, not a bug, it’s a feature. We might think Amazon was about making books available to us that we couldn’t find locally—and it was, and what a brilliant idea—but maybe it was also just as much about eliminating human contact.”

Former Talking Heads frontman says consumer tech is working against what it means to be human
https://www.technologyreview.com/s/608580/eliminating-the-human/
via Instapaper

Tech Giants, Once Seen as Saviors, Are Now Viewed as Threats (nyt)

“In Europe, however, the ground is already shifting. Google’s share of the search engine market there is 92 percent, according to StatCounter. But that did not stop the European Union from fining it $2.7 billion in June for putting its products above those of its rivals.

A new German law that fines social networks huge sums for not taking down hate speech went into effect this month. On Tuesday, a spokesman for Prime Minister Theresa May of Britain said the government was looking carefully at the roles, responsibility and legal status of Google and Facebook, with an eye to regulating them as news publishers rather than platforms.

“This war, like so many wars, is going to start in Europe,” said Mr. Galloway, the New York University professor.”

Tech Giants, Once Seen as Saviors, Are Now Viewed as Threats
https://mobile.nytimes.com/2017/10/12/technology/tech-giants-threats.html

