Digital Ethics by Futurist Gerd Leonhard

2017-08-18: Automation may take our jobs—but it’ll restore our humanity (says x.ai CEO)
“Our very human future

One implication of all this is that for humans to succeed in the AI-powered future, we need to double down on our humanity. Technical skills will no doubt remain important in the future of work, but as AI allows us to automate repetitive tasks across many industries, these will in many cases take a back seat to soft skills. Communication, emotional intelligence, creativity, critical thinking, collaboration, and cognitive flexibility will become the most sought-after abilities. To prepare for that future, we need to emphasize developing higher-order thinking and emotional skills.

While our formal education system catches up to the shifting definition of human intelligence, here are three basic ideas for improving your prospects in the future of work.

Learn to tell stories. Machines aren’t very good at storytelling beyond rote reports. Telling engaging and creative stories is essential if you want to collaborate effectively with other humans. It can improve your communications in many ways—from reframing a product feature to a customer to selling a new internal KPI for how you measure success. A workshop from an organization like The Story Studio is a great place to start.
Boost your creativity. A lot of people think creativity can’t be learned; you either have it, or you don’t. But that’s not true. Creativity is a process and you can ignite that process and improve your chances of creative results. For example, taking regular, reflective breaks, going for walks, and making time for unstructured play (yes, even for adults!) have been shown to boost creativity.
Learn how to sell. Selling is an inherently human trait, and it’s an incredibly important one. I’m not just talking about selling products, but also about selling yourself and your ideas, and convincing others to get on board with you. Mastering the basic concepts of sales involves a whole lot of very human qualities: understanding psychology, listening and asking questions, empathizing with others, and finding creative solutions to problems.”

Automation may take our jobs—but it’ll restore our humanity
via Instapaper

2017-08-16: How Technology Might Get Out of Control (about the Nash equilibrium's demise?)
“People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable things like environmental damage. In aiming to find such rules of behavior, we often take inspiration from what game theorists call a Nash equilibrium, named after the mathematician and economist John Nash. In game theory, a Nash equilibrium is a set of strategies that, once discovered by a set of players, provides a stable fixed point at which no one has an incentive to depart from their current strategy.

To reach such an equilibrium, the players need to understand the consequences of their own and others' potential actions. During the Cold War, for example, peace among nuclear powers depended on the understanding that any attack would ensure everyone's destruction. Similarly, from local regulations to international law, negotiations can be seen as a gradual exploration of all possible moves to find a stable framework of rules acceptable to everyone, and giving no one an incentive to cheat – because doing so would leave them worse off.

But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that a pair of scientists -- Dimitri Kusnezov of the National Nuclear Security Administration and Wendell Jones, recently retired from Sandia National Labs -- explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium as an organizing principle may be nearly obsolete.”
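The stability the excerpt describes can be made concrete with the textbook prisoner's dilemma. The sketch below is my own illustration, not from the article, using the conventional payoff values: a strategy pair is a Nash equilibrium when neither player gains by deviating unilaterally.

```python
# Payoffs (row player, column player) for Cooperate/Defect in the
# classic prisoner's dilemma (conventional textbook values).
PAYOFFS = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3,  0),
    ("D", "C"): ( 0, -3),
    ("D", "D"): (-2, -2),
}
STRATEGIES = ["C", "D"]

def is_nash(row, col):
    """True if neither player can improve by unilaterally deviating."""
    row_payoff, col_payoff = PAYOFFS[(row, col)]
    no_row_gain = all(PAYOFFS[(r, col)][0] <= row_payoff for r in STRATEGIES)
    no_col_gain = all(PAYOFFS[(row, c)][1] <= col_payoff for c in STRATEGIES)
    return no_row_gain and no_col_gain

print(is_nash("D", "D"))  # True: mutual defection is the stable fixed point
print(is_nash("C", "C"))  # False: each player is tempted to defect
```

The point of the paper the excerpt discusses is that this kind of check presumes the players can enumerate and evaluate every deviation, which is exactly what breaks down when technology evolves faster than consequences can be imagined.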

How Technology Might Get Out of Control
via Instapaper

2017-08-13: Things to Hang on Your Mental Mug Tree | Edge.org - some great morsels from Rory Sutherland
“There's a rather lovely company in the UK that pays people who are housebound—whether for medical reasons or because they are caregivers—to handwrite envelopes and letters. You could regard this as a very silly thing to do, but in costly signaling theory terms, it makes perfect sense. The open rate of these letters, and the response they generate, is an order of magnitude higher than for laser-printed letters.

Another thing worth bearing in mind is countersignaling, which, unlike signaling, seems to be uniquely human. There aren't cases of peacocks who demonstrate their extraordinary genetic quality by having really shitty tails. What seems to happen with humans is you have multiple parallel status currencies, and quite often you will signal your position on status by adopting none of the status currencies of the class immediately below your own, or by essentially demonstrating zero effort in standard status currencies. An unwashed bass guitarist in a cool rock band, for example, can get away with poor levels of hygiene, which signals: "I'm so sexy by dint of my bass guitar playing skills that I can get away with not making an effort in any of these conventional areas." Sometimes it's done as a positional thing, and sometimes it's done as a pure demonstration of handicap.

Relevance theory [from Dan Sperber and Deirdre Wilson] might be another thing that's interesting. In other words, replacing the “conduit” idea of communication with this idea that we communicate the minimum necessary for the recipient to recreate the message within their own head using context as a very large part of the information. Those interesting new theories of communication, which don't always sit with the Claude Shannon theories, are worth exploring. A very simple manifestation would be jokes which, like IKEA furniture, demand some self-assembly on the part of the recipient.”

Things to Hang on Your Mental Mug Tree | Edge.org
via Instapaper

2017-08-12: The key to jobs in the future is not college but compassion – Read This
“the truth is, only a tiny percentage of people in the post-industrial world will ever end up working in software engineering, biotechnology or advanced manufacturing. Just as the behemoth machines of the industrial revolution made physical strength less necessary for humans, the information revolution frees us to complement, rather than compete with, the technical competence of computers. Many of the most important jobs of the future will require soft skills, not advanced algebra.”

The key to jobs in the future is not college but compassion – Livia Gershon | Aeon Essays
via Instapaper

2017-08-05: This is how Big Oil will die – NewCo Shift (must read)
“It’s 2025, and 800,000 tons of used high strength steel is coming up for auction.

The steel made up the Keystone XL pipeline, finally completed in 2019, two years after the project launched with great fanfare following approval by the Trump administration. The pipeline was built at a cost of about $7 billion, bringing oil from the Canadian tar sands to the US, with a pit stop in the town of Baker, Montana, to pick up US crude from the Bakken formation. At its peak, it carried over 500,000 barrels a day for processing at refineries in Texas and Louisiana.

But in 2025, no one wants the oil.

The Keystone XL will go down as the world’s last great fossil fuels infrastructure project. TransCanada, the pipeline’s operator, charged about $10 per barrel for the transportation services, which means the pipeline extension earned about $5 million per day, or $1.8 billion per year. But after shutting down less than four years into its expected 40 year operational life, it never paid back its costs.”

This is how Big Oil will die – NewCo Shift
via Instapaper

2017-08-04: Have Smartphones Destroyed a Generation? - The Atlantic
“a generation shaped by the smartphone and by the concomitant rise of social media. I call them iGen. Born between 1995 and 2012, members of this generation are growing up with smartphones, have an Instagram account before they start high school, and do not remember a time before the internet. The Millennials grew up with the web as well, but it wasn’t ever-present in their lives, at hand at all times, day and night. iGen’s oldest members were early adolescents when the iPhone was introduced, in 2007, and high-school students when the iPad entered the scene, in 2010. A 2017 survey of more than 5,000 American teens found that three out of four owned an iPhone.

The advent of the smartphone and its cousin the tablet was followed quickly by hand-wringing about the deleterious effects of “screen time.” But the impact of these devices has not been fully appreciated, and goes far beyond the usual concerns about curtailed attention spans. The arrival of the smartphone has radically changed every aspect of teenagers’ lives, from the nature of their social interactions to their mental health. These changes have affected young people in every corner of the nation and in every type of household. The trends appear among teens poor and rich; of every ethnic background; in cities, suburbs, and small towns. Where there are cell towers, there are teens living their lives on their smartphone.

To those of us who fondly recall a more analog adolescence, this may seem foreign and troubling. The aim of generational study, however, is not to succumb to nostalgia for the way things used to be; it’s to understand how they are now. Some generational changes are positive, some are negative, and many are both. More comfortable in their bedrooms than in a car or at a party, today’s teens are physically safer than teens have ever been. They’re markedly less likely to get into a car accident and, having less of a taste for alcohol than their predecessors, are less susceptible to drinking’s attendant ills.

Psychologically, however, they are more vulnerable than Millennials were: Rates of teen depression and suicide have skyrocketed since 2011. It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.”

Have Smartphones Destroyed a Generation? - The Atlantic
via Instapaper

2017-08-03: Human ingenuity will be the genesis for IoT prosperity
“As business leaders, we must think beyond the fiscal bottom line and technological advances in products and services and ask ourselves: how will IoT affect the communities we operate in, and what will our role be in readying society and the workforce for this digital phenomenon that is rapidly proliferating? Technology itself has no ethics. It is only when people apply purpose and innovative thinking beyond revenue and profit that we will be able to reap the collective benefits and security of the digital world.

We explored this topic in depth at the recent IoT World Forum in London, where renowned futurist Gerd Leonhard provided us a stunning window into the ethics of IoT and the critical role of human ingenuity in designing and shepherding its outcomes. (Watch the replay of Gerd’s keynote, moderated by Cisco’s CMO, Karen Walker: “Beyond Business: A Holistic View of the Societal and Human Impact of IoT.”)

As the IoT World Forum team put its agenda together for an influential community of C-suite executives in London, there was a realization that we needed to address this topic, as provocative (and sobering) as it might be. We recognized that we had to acknowledge the “elephant in the room”: that we are in uncharted territory, as we enter into this new era of exponential change together. When we think about what the implications are of a rapid surge in IoT innovation, we must all collectively consider the potential effects on the geopolitical and global economic landscape (in both advanced and developing nations); on global challenges such as wealth inequality, aging populations, healthcare, and the environment; and on the global workforce. Of course, no one has all the answers, but we must be bold in exploring these issues as a global business community. I will explore this in more depth in my next blog, but I will say that we know we need a global unified approach to succeed. No one can go it alone, and a “head in the sand” mentality is not an option.”

Human ingenuity will be the genesis for IoT prosperity
via Instapaper

2017-07-31: Made me think: the End of humanity as we know it's ‘coming in 2045’ and Google is preparing for it
“What is the singularity?

In maths/physics, a singularity is a point at which a function takes an infinite, incomprehensibly large value.
The technological singularity, as it is called, is the moment when artificial intelligence takes off into ‘artificial superintelligence’ and becomes exponentially more intelligent more quickly.

As self-improvement becomes more efficient, it would get quicker and quicker at improvement until the machine became infinitely more intelligent infinitely quickly.”
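The "infinitely more intelligent infinitely quickly" idea has a clean mathematical analogue: super-linear self-improvement produces a finite-time blow-up. A toy model (my assumption, not the article's): if capability I improves at a rate proportional to its own square, dI/dt = I**2 with I(0) = 1, the exact solution is I(t) = 1/(1 - t), which reaches infinity at the finite time t = 1: a mathematical singularity in the sense the quote describes.

```python
# Exact solution of the toy self-improvement model dI/dt = I**2, I(0) = 1.
# Capability diverges as t approaches 1: the "singularity" of the function.
def capability(t):
    return 1.0 / (1.0 - t)

for t in [0.0, 0.5, 0.9, 0.99, 0.999]:
    print(t, capability(t))   # growth accelerates without bound as t -> 1
```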

End of humanity as we know it's ‘coming in 2045’ and Google is preparing for it
via Instapaper

2017-07-30: Martin Seligman: We Aren’t Built to Live in the Moment (why we are all futurists)
“What best distinguishes our species is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. It usually lifts our spirits, but it’s also the source of most depression and anxiety, whether we’re evaluating our own lives or worrying about the nation. Other animals have springtime rituals for educating the young, but only we subject them to “commencement” speeches grandly informing them that today is the first day of the rest of their lives.

A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain, as psychologists and neuroscientists have discovered — rather belatedly, because for the past century most researchers have assumed that we’re prisoners of the past and the present.”

Opinion | We Aren’t Built to Live in the Moment
via Instapaper

2017-07-26: The Real Risks of AI – NewCo Shift
“AGI can be something that we have virtually no understanding or recognition of, but which may have a significant understanding of us if it is given access to the Internet or a significant data repository.

Such a lack of mutual understanding is where all of the real risks reside. This is what we should be talking about when we talk about worst-case scenarios. We erroneously assume that we will be able to recognize AGI as such.”

The Real Risks of AI – NewCo Shift
via Instapaper

2017-07-23: The DeepMind debacle demands dialogue on data - thoughtful read via Nature.com
“Innovations such as artificial intelligence, machine learning and the Internet of Things offer great opportunities, but will falter without a public consensus around the role of data. To develop this, all data collectors and crunchers must be open and transparent. Consider how public confidence in genetic modification was lost in Europe, and how that has set back progress.

Public dialogue can build trust through collaborative efforts. A 14-member Citizens’ Reference Panel on health technologies was convened in Ontario, Canada, in 2009. The Engage2020 programme incorporates societal input in the Horizon2020 stream of European Union science funding.”

The DeepMind debacle demands dialogue on data
via Instapaper

2017-07-23: When Moore’s Law Met AI – Artificial Intelligence and the Future of Computing – Medium
“AI is bigger than Moore’s Law

In a nutshell, this shift by Tesla summarizes the kinds of demands machine learning-like applications are going to make on available processing. It isn’t just autonomous vehicles. It will be our connected devices, on-device inferencing to support personal interfaces, voice interactions and augmented reality.

In addition, our programming modalities are changing. In the pre-machine learning world, a large amount of ‘heavy lifting’ was done by the brains of the software developer. These smart developers had the task of simplifying and representing the world mathematically (as software code), which then got executed in a deterministic and dumb fashion.

In the new world of machine learning, the software developer needs to worry less about translating the detailed abstractions of the world into code. Instead, they build probabilistic models which need to crunch enormous datasets to recommend a best output. What the programmer saves in figuring out a mathematical abstraction they make up for by asking the computer to do many calculations (often billions at a time).

As machine learning creeps across the enterprise, the demand for processing in the firm will increase significantly. What kind of impact will this have on the IT industry, its hardware and software suppliers? How will practices change? What opportunities will this create?”
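The shift the author describes, from hand-coded abstractions to models that trade human insight for raw computation, can be sketched in a few lines. This toy example (my own illustration, not from the article) encodes the Celsius-to-Fahrenheit rule two ways: once as a developer's explicit abstraction, and once by letting gradient descent recover the same rule from data through many cheap calculations.

```python
# Old modality: the developer abstracts the world's rule into code directly.
def fahrenheit_rule(celsius):
    return celsius * 9 / 5 + 32

# New modality: the machine recovers the same rule from example data by
# gradient descent, trading human abstraction for many small computations.
data = [(c, c * 9 / 5 + 32) for c in range(-10, 11)]  # training examples
w, b = 0.0, 0.0                                       # model: f = w * c + b
lr = 1e-3                                             # learning rate
for _ in range(20_000):                               # many cheap updates
    for c, f in data:
        err = (w * c + b) - f
        w -= lr * err * c
        b -= lr * err

print(round(w, 2), round(b, 2))   # learned slope and intercept, ~1.8 and ~32
```

The hand-written rule costs one line of human thought; the learned one costs hundreds of thousands of multiplications, which is exactly the processing demand the excerpt anticipates, scaled up to enormous datasets.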

When Moore’s Law Met AI – Artificial Intelligence and the Future of Computing – Medium
via Instapaper

2017-07-21: What next? How future gazing became big business (Quoting
“Leonhard says he simply focuses on the long term for people who are too busy to do it for themselves. “If you took any executive and freed them up for two weeks and said look to the future, they could probably find out what it is if they set their mind to it. Since they never get time, they use me as a translator.””

What next? How future gazing became big business
via Instapaper

2017-07-16: Tencent and digital ethics
“🎲 Tencent falls 3.7% after limiting the gaming time for children on its top-grossing mobile game. More than 50m people play Honour of Kings daily, generating revenues of $876m for Tencent in Q1. I know at least one mother who felt this voluntary anti-addiction move which wiped $13bn off their market cap was a corporate responsibility of the highest order - even if they had a nudge”

🔮⭐ Saul Klein special; Zebra economy, the rise of Chinafrica, crypto bubble bursting, teleporting ++#122
via Instapaper

2017-07-16: From Inequality to Immortality
“my grim forecast is that a world where such miracles of longevity are confined to billionaires will see socio-political upheaval, the likes of which will make the current hand-wringing and brow-furrowing on the rise of inequality seem quaint in comparison. In the meantime, expect a lot of books and articles and blog posts, targeted at the thought-leader industrial complex, that will at the least, make for stimulating conversation.”

From Inequality to Immortality
via Instapaper

2017-07-15: Is advertising over? What chief marketers are saying about the future of marketing (interruption is dead)
“eMarketer predicts that brands will spend a staggering $34 billion on Facebook this year alone.

But as businesses spend ever more money on advertising – nearly $500 billion in 2016 globally, according to MAGNA – there are clear signs of nervousness among big business and a recognition that ads can be super annoying. YouTube, for example, will pull its 30-second non-skippable ad format next year, because it wants to provide "a better ads experience for users online," according to a statement emailed to CNBC.

In April, Procter and Gamble, one of the world's largest advertisers, blasted the ad industry for overwhelming consumers with advertising. "There's too much crap," said P&G's chief brand officer Marc Pritchard, in a speech to the American Association of Advertising Agencies, in a transcript seen by CNBC.

"We bombard consumers with thousands of ads a day, subject them to endless load times, interrupt them with pop-ups and overpopulate their screens and feeds," he said. Pritchard called for advertisers and agencies to work together to make better content, and said that P&G will be "focusing on fewer and better ideas that last longer."”

Is advertising over? What chief marketers are saying about the future of marketing
via Instapaper

2017-07-15: When Will the Planet Be Too Hot For Humans? Much, Much Sooner Than You Imagine.
“Until recently, permafrost was not a major concern of climate scientists, because, as the name suggests, it was soil that stayed permanently frozen. But Arctic permafrost contains 1.8 trillion tons of carbon, more than twice as much as is currently suspended in the Earth’s atmosphere. When it thaws and is released, that carbon may evaporate as methane, which is 34 times as powerful a greenhouse-gas warming blanket as carbon dioxide when judged on the timescale of a century; when judged on the timescale of two decades, it is 86 times as powerful. In other words, we have, trapped in Arctic permafrost, twice as much carbon as is currently wrecking the atmosphere of the planet, all of it scheduled to be released at a date that keeps getting moved up, partially in the form of a gas that multiplies its warming power 86 times over.”
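The excerpt's arithmetic can be checked directly. The figures below come from the article, except for the atmospheric carbon stock, which is my rough assumption back-derived from the "more than twice as much" comparison:

```python
# Figures from the article; atmospheric_carbon is my rough assumption,
# implied by the article's "more than twice as much" claim.
permafrost_carbon = 1.8e12       # tons of carbon in Arctic permafrost
atmospheric_carbon = 8.5e11      # tons of carbon currently in the atmosphere
gwp_methane_100yr = 34           # methane vs CO2 warming power, 100-year view
gwp_methane_20yr = 86            # methane vs CO2 warming power, 20-year view

print(permafrost_carbon / atmospheric_carbon)   # ~2.1, "more than twice"
print(gwp_methane_20yr / gwp_methane_100yr)     # ~2.5x stronger near-term
```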

When Will the Planet Be Too Hot For Humans? Much, Much Sooner Than You Imagine.
via Instapaper

2017-07-13: A Blueprint for Coexistence with AI | Very thoughtful piece by Kai-Fu Lee
“Our coexistence with artificial intelligence hinges on combining what is humanly unattainable—the hugely scaled narrow AI intelligence that will only get better at any given domain—with what we humans can uniquely offer to one another. And that is love. What makes us human is that we can love.

We are far from understanding the human “heart,” let alone replicating it. But we do know that humans are uniquely able to love and be loved. The moment when we see our newborn babies; the feeling of love at first sight; the warm feeling from friends who listen to us empathetically; the feeling of self-actualization when we help someone in need. Loving and being loved are what makes our lives worthwhile.”

A Blueprint for Coexistence with AI | Backchannel
via Instapaper

2017-07-12: Climate Change: Are We as Doomed as That New York Magazine Article Says? (The Atlantic)
“On the other hand, a strategy for addressing climate change is coming together. The cost of solar and wind energy are plunging worldwide; carmakers are promising to take more of their fleet electric, and the amount of carbon released into the atmosphere from human activity has stabilized over the past three years. Decarbonizing will be an arduous and difficult global project—but technological development and government policy are finally bringing it into the realm of the possible.

But on the other, other hand, the Trump administration is methodically and successfully undermining the substance of American climate policy. It has spread untruths about climate science, abandoned the Paris Agreement, and stricken dozens of climate-focused EPA rules from the law books. Michael Oppenheimer, a Princeton professor who has observed climate diplomacy for 30 years, told me that this is one of the most dispiriting moments he can remember—and that he believes Earth is now doomed to warm by more than two degrees Celsius.

That’s the state of the world right now. There are three ongoing shifts and no easy way to synthesize them. The facts don’t lend themselves to an overwhelming vision. Instead, they suggest that the planet’s economic system is in the middle of a difficult and supremely important political battle with itself. As Brad Plumer, a New York Times climate reporter, tweeted last week: There are “two radically opposed visions of the future; [it’s] not yet clear which one will win out.”

It’s into that morass that this week’s New York magazine walks. In a widely shared article, David Wallace-Wells sketches the bleakest possible scenario for global warming. He warns of a planet so awash in greenhouse gas that Brooklyn’s heat waves will rival Bahrain’s. The breadbaskets of China and the United States will enter a debilitating and everlasting drought, he says. And millions of brains will so lack oxygen that they’ll slip into a carbon-induced confusion.”

Are We as Doomed as That New York Magazine Article Says?
via Instapaper

2017-07-10: Reining in the dastardly algorithms that are trying to control our lives
“The moment we are unable to recognize whether we feel better because of pleasantries arising from the decisions we made ourselves or because of an artificial environment that an algorithm has created, we are in big trouble. Because at that moment, instead of technology working for us by expanding our world, it has exerted its control to narrow it.

Machine learning on the Web potentially manipulates and constricts our worldview. In the real world, though, it manipulates our bodies and physicality, narrowing the boundaries of our world.”

Reining in the dastardly algorithms that are trying to control our lives
via Instapaper

2017-07-07: Ends, Means, and Antitrust (nice Stratechery post on the Google fine)
“This is perhaps the most consequential aspect of this case, and I think the European Commission got it exactly right. Last year in Antitrust and Aggregation I explained why the unique dynamics of the Internet push towards dominant players that look very different from the monopolies of the past:

Aggregation Theory is about how business works in a world with zero distribution costs and zero transaction costs; consumers are attracted to an aggregator through the delivery of a superior experience, which attracts modular suppliers, which improves the experience and thus attracts more consumers, and thus more suppliers in the aforementioned virtuous cycle. It is a phenomenon seen across industries including search (Google and web pages), feeds (Facebook and content), shopping (Amazon and retail goods), video (Netflix/YouTube and content creators), transportation (Uber/Didi and drivers), and lodging (Airbnb and rooms, Booking/Expedia and hotels).

The first key antitrust implication of Aggregation Theory is that, thanks to these virtuous cycles, the big get bigger; indeed, all things being equal the equilibrium state in a market covered by Aggregation Theory is monopoly: one aggregator that has captured all of the consumers and all of the suppliers. This monopoly, though, is a lot different than the monopolies of yesteryear: aggregators aren’t limiting consumer choice by controlling supply (like oil) or distribution (like railroads) or infrastructure (like telephone wires); rather, consumers are self-selecting onto the Aggregator’s platform because it’s a better experience.”

Ends, Means, and Antitrust
via Instapaper

2017-07-06: Rise of the machines: who is the ‘internet of things’ good for (via The Guardian)
“the colonisation of the domestic environment by similarly networked products and services is intended to deliver a very different experience: convenience. The aim of such “smart home” efforts is to short-circuit the process of reflection that stands between having a desire and fulfilling that desire by buying something.”

Rise of the machines: who is the ‘internet of things’ good for?
via Instapaper

2017-07-02: The Rise of the Thought Leader - some critical thoughts by Daniel Drezner
“The case against thought leaders, The Ideas Industry shows, is damning. As Drezner notes, some of the marquee names in thought leadership are distinguished by their facile thinking and transparent servility to the wealthy. The biggest idea in Thomas Friedman’s best-known book, The World Is Flat, is, Drezner summarizes, that “to thrive in the global economy, one needs to be ‘special,’ a unique brand like Michael Jordan.” It is more of a marketing principle than a philosophical insight. But “businessmen adore Friedman’s writings on how technology and globalization transform the global economy,” Drezner explains, because his message reinforces their worldview.”

Interesting points here - can't really decide if this is an astute analysis or partly a kind of jealousy ... or both?

The Rise of the Thought Leader
via Instapaper

2017-06-29: A leading Silicon Valley engineer explains why every tech worker needs a humanities education (the power of philosophy)
“It worries me that so many of the builders of technology today are people who haven’t spent time thinking about these larger questions.” Ruefully—and with some embarrassment at my younger self’s condescending attitude toward the humanities—I now wish that I had strived for a proper liberal arts education. That I’d learned how to think critically about the world we live in and how to engage with it. That I’d absorbed lessons about how to identify and interrogate privilege, power structures, structural inequality, and injustice. That I’d had opportunities to debate my peers and develop informed opinions on philosophy and morality. And even more than all of that, I wish I’d even realized that these were worthwhile thoughts to fill my mind with—that all of my engineering work would be contextualized by such subjects.”

A leading Silicon Valley engineer explains why every tech worker needs a humanities education
via Instapaper

2017-06-29: The Guardian view on the EU’s Google judgment: firm and fair - made me think!
“The breathtaking fine of €2.4bn that the European commission has imposed on Google for exploiting its virtual monopoly of search is shocking and welcome. It shows that there is at least one polity that is prepared to stand up to the giant tech companies and try to bring them under the rule of the law. The individual countries of Europe are not large enough: Denmark, which has just announced the rather gimmicky appointment of an “ambassador to Silicon Valley”, has a GDP only about two-thirds the size of Facebook’s business. But the EU is big enough and strong enough to act. Further judgments and no doubt further fines are expected in two other cases where Google is accused of steering the market towards its own advertising businesses rather than those of its competitors.

The technology of the mobile internet has been a huge blessing for the world. But where it is not in the hands of undemocratic governments, it is controlled today by multinational advertising companies, which is the business that makes both Google and Facebook their almost incredible profits. However benign their intentions, the sheer size and reach of these companies makes them dangerous. This judgment represents one of the few serious attempts to manage these monopolies. It’s a welcome start.”

The Guardian view on the EU’s Google judgment: firm and fair | Editorial
via Instapaper

tag:digitalethics.net,2013:Post/1168181 2017-06-27T17:54:53Z 2017-06-27T17:54:54Z Be Aware, Be Very Aware (Tristan Harris Podcast)
“we should acknowledge that the psychology of our minds works in specific, predictable and persuadable ways with “big holes waiting for things to pop in.” This presents a kind of existential problem because all of us are trapped inside the same psychological architecture and vulnerable to the techniques of persuasion.

“… persuasion is kind of like that. There is something that can subvert my architecture. I can’t close the holes that are in my brain, they are just there for someone to exploit. The best I can do is to become aware of some of them, but then I don’t want to walk around the world being just vigilant all the time of all the ways my buttons are being pressed.”

Tristan Harris says there’s a whole industry dedicated to this “dark art form” that people are not aware of. Consider, for example, that many people, when asked about the rise of big data, are not really all that alarmed that their personal data is out there. The familiar response is: “I’ve got nothing to hide.” But if they realized that this data is used to feed the attention economy and the underlying methods of persuasion that come with it they might be more concerned. What if this dedicated group of engineers develops a type of artificial intelligence that literally knows how to persuade you to do anything?”

Be Aware, Be Very Aware – Slaw
via Instapaper

tag:digitalethics.net,2013:Post/1168129 2017-06-27T15:52:00Z 2017-06-27T15:52:01Z Meet Amazon’s New Echo Show: Alexa Is Watching
“It has this wild new feature called Drop In. Drop In lets you give people permission to automatically connect with your device. Here’s how it works. Let’s say my father has activated Drop In for me on his Echo Show. All I have to do is say, “Alexa, drop in on Dad.” It then turns on the microphone and camera on my father’s device and starts broadcasting that to me. For the first several seconds of the call, my father’s video screen would appear fogged over. But then there he’ll be. And to be clear: This happens even if he doesn’t answer. Unless he declines the call, audibly or by tapping on the screen, it goes through. It just starts. Hello, you look nice today.

Honestly, I haven’t figured out what to think about this yet. But it’s here.”

Meet Amazon’s New Echo Show: Alexa Is Watching
via Instapaper

tag:digitalethics.net,2013:Post/1167410 2017-06-25T18:56:19Z 2017-06-25T18:56:20Z The Real Threat of Artificial Intelligence - 5* read by Kai-Fu Lee
“One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.

Kai-Fu Lee is the chairman and chief executive of Sinovation Ventures, a venture capital firm, and the president of its Artificial Intelligence Institute.”

Opinion | The Real Threat of Artificial Intelligence
via Instapaper

tag:digitalethics.net,2013:Post/1167367 2017-06-25T16:19:52Z 2017-06-25T16:19:53Z Inequality boosted by AI - must read links via Azeem Azhar
“Economic growth has gone hand in hand with rising inequality for more than 9,000 years. Inequalities have only narrowed through war or plague. History offers very little comfort to those in search of peaceful leveling. Is there one? (See also Piketty's 2014 essay arguing that a global, progressive wealth tax is the best solution to spiraling inequality.)

🗜️ The real threat of AI. Kai-Fu Lee: "most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength … [other nations] will essentially become [that] country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit. A.I. is presenting us with an opportunity to rethink economic inequality on a global scale.””

🔮 Uber and leadership; emotions at work; Apple's secrecy; quantum computing; McJobs, and electric planes ++ #119
via Instapaper

tag:digitalethics.net,2013:Post/1166752 2017-06-23T07:28:12Z 2017-06-23T07:28:12Z Is it unethical to design robots to resemble humans? Great story on anthropomorphization
“And so the more we humanize chatbots, virtual assistants, and machines, the more we in turn display human emotions toward them. This is the process of anthropomorphism, whereby inanimate objects are attributed with human emotions, traits, and intentions. When something appears alive, it is in our nature to view it through a human lens. Now that many AIs and conversational bots have the illusion of being self-aware, they therefore trigger emotional responses in their users as if they were human. If the despised printer in Office Space had resembled a human (or a living animal, for that matter), our feelings toward both the object and the violent perpetrators would be altered. That’s why many people cringe when they see Boston Dynamics’ robotic dog getting kicked.”

Is it unethical to design robots to resemble humans?
via Instapaper