tag:digitalethics.net,2013:/posts Digital Ethics by FuturistGerd 2018-10-17T06:56:36Z Digital Ethics by Futurist Gerd Leonhard

tag:digitalethics.net,2013:Post/1333142 2018-10-17T06:56:34Z 2018-10-17T06:56:36Z Gartner picks digital ethics and privacy as a strategic trend for 2019
“But what really stands out in Gartner’s list of developing and/or barely emergent strategic tech trends is digital ethics and privacy — given the concept is not reliant on any particular technology underpinning it, yet is being (essentially) characterized as an emergent property of other already deployed (but unnamed) technologies. So it is actually in play — in a way that others on the list aren’t yet (or aren’t at the same mass scale).

The analyst dubs digital ethics and privacy a “growing concern for individuals, organisations and governments”, writing: “People are increasingly concerned about how their personal information is being used by organisations in both the public and private sector, and the backlash will only increase for organisations that are not proactively addressing these concerns.”

Yes, people are increasingly concerned about privacy. Though ethics and privacy are hardly new concepts (or indeed new discussion topics). So the key point is really the strategic obfuscation of issues that people do in fact care an awful lot about, via the selective and non-transparent application of various behind-the-scenes technologies up to now — as engineers have gone about collecting and using people’s data without telling them how, why and what they’re actually doing with it.”

Gartner picks digital ethics and privacy as a strategic trend for 2019
http://social.techcrunch.com/2018/10/16/gartner-picks-digital-ethics-and-privacy-as-a-strategic-trend-for-2019/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1331876 2018-10-13T12:28:27Z 2018-10-13T12:28:28Z GSMA boss urges strong response to digital pressures - Mobile World Live
“There are fears that digital disruption has the potential for destroying jobs and increasing inequality, and in the process, upending the European social model,” said Granryd.

He continued to state that politicians and regulators alike are “seeking to harness the digital revolution” while ensuring Europe’s citizens are fully benefitting, as democracies become a target of “malign influences misusing the networks that are part of our everyday life.”

“The unrestricted nature of the internet, once seen as its greatest virtue, is now being challenged as its greatest weakness,” the GSMA boss added.”

GSMA boss urges strong response to digital pressures - Mobile World Live
https://www.mobileworldlive.com/featured-content/home-banner/gsma-boss-urges-strong-response-to-digital-pressures/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1331246 2018-10-11T17:21:33Z 2018-10-11T17:21:34Z Tech Workers Now Want to Know: What Are We Building This For?
“Across the technology industry, rank-and-file employees are demanding greater insight into how their companies are deploying the technology that they built. At Google, Amazon, Microsoft and Salesforce, as well as at tech start-ups, engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.”

Tech Workers Now Want to Know: What Are We Building This For?
https://www.nytimes.com/2018/10/07/technology/tech-workers-ask-censorship-surveillance.html
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1331081 2018-10-11T08:00:21Z 2018-10-11T08:00:22Z Universal Basic Income Is Silicon Valley’s Latest Scam
“Uber’s business plan, like that of so many other digital unicorns, is based on extracting all the value from the markets it enters. This ultimately means squeezing employees, customers, and suppliers alike in the name of continued growth. When people eventually become too poor to continue working as drivers or paying for rides, UBI supplies the required cash infusion for the business to keep operating.”

Universal Basic Income Is Silicon Valley’s Latest Scam
https://medium.com/s/powertrip/universal-basic-income-is-silicon-valleys-latest-scam-fd3e130b69a0
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1330466 2018-10-09T05:29:36Z 2018-10-09T05:29:36Z Why Data Needs Humans as Much as Humans Need Data
“The thing that will separate the great agencies and great work from the merely informed and statistically correct is not the quantity of the data, but the quality of the people sifting through, interpreting and applying it—the people who can turn cold, hard facts into insights, ideas and emotions; the people who know how to truly connect with others.”

Why Data Needs Humans as Much as Humans Need Data
https://www.adweek.com/brand-marketing/why-data-needs-humans-as-much-as-humans-need-data/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1328470 2018-10-03T13:24:31Z 2018-10-03T13:24:31Z Newco Shift | Technology, Humanity, and the Existential Test
“the technology industry reveled for decades as a kind of benighted warrior for societal good. As one Senator told me during the Facebook hearings this past summer, “we purposefully didn’t regulate technology, and that was the right thing to do.” But now? He shrugged. Now, maybe it’s time.”

Newco Shift | Technology, Humanity, and the Existential Test
https://shift.newco.co/2018/09/24/technology-humanity-and-the-existential-test/
via Instapaper
]]>
tag:digitalethics.net,2013:Post/1328147 2018-10-02T12:06:42Z 2018-10-02T12:06:43Z A Model for the Future of Education
“For me it’s about passion, curiosity, imagination, critical thinking, and grit.

Passion: You’d be amazed at how many people don’t have a mission in life… A calling… something to jolt them out of bed every morning. The most valuable resource for humanity is the persistent and passionate human mind, so creating a future of passionate kids is so very important. For my 7-year-old boys, I want to support them in finding their passion or purpose… something that is uniquely theirs. In the same way that the Apollo program and Star Trek drove my early love for all things space, and that passion drove me to learn and do.
Curiosity: Curiosity is something innate in kids, yet something lost by most adults during the course of their life. Why? In a world of Google, robots, and AI, raising a kid that is constantly asking questions and running “what if” experiments can be extremely valuable. In an age of machine learning, massive data, and a trillion sensors, it will be the quality of your questions that will be most important.
Imagination: Entrepreneurs and visionaries imagine the world (and the future) they want to live in, and then they create it. Kids happen to be some of the most imaginative humans around… it’s critical that they know how important and liberating imagination can be.
Critical Thinking: In a world flooded with often-conflicting ideas, baseless claims, misleading headlines, negative news, and misinformation, learning the skill of critical thinking helps find the signal in the noise. This principle is perhaps the most difficult to teach kids.
Grit/Persistence: Grit is defined as “passion and perseverance in pursuit of long-term goals,” and it has recently been widely acknowledged as one of the most important predictors of and contributors to success.”

A Model for the Future of Education
https://singularityhub.com/2018/09/12/a-model-for-the-future-of-education-and-the-tech-shaping-it/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1326732 2018-09-28T10:03:11Z 2018-09-28T10:03:11Z Just Don’t Call It Privacy
“In other words, asking companies whose business models revolve around exploiting data-based consumer-influence techniques to explain their privacy policies seems about as useful as asking sharks to hold forth on veganism.

“Congress should not be examining privacy policies,” Marc Rotenberg, the executive director of the Electronic Privacy Information Center, a prominent digital rights nonprofit, told me last week. “They should be examining business practices. They should be examining how these firms collect and use the personal data of customers, of internet users.””

Just Don’t Call It Privacy
https://www.nytimes.com/2018/09/22/sunday-review/privacy-hearing-amazon-google.html
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1326512 2018-09-27T17:12:24Z 2018-09-27T17:12:25Z Just Don’t Call It Privacy
“In a surveillance economy where companies track, analyze and capitalize on our clicks, the issue at hand isn’t privacy. The problem is unfettered data exploitation and its potential deleterious consequences — among them, unequal consumer treatment, financial fraud, identity theft, manipulative marketing and discrimination.”

Just Don’t Call It Privacy
https://www.nytimes.com/2018/09/22/sunday-review/privacy-hearing-amazon-google.html
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1326391 2018-09-27T14:06:26Z 2018-09-27T14:06:27Z Top 9 ethical issues in artificial intelligence
“In 2014, roughly the same revenues were generated by the three biggest companies in Detroit and the three biggest companies in Silicon Valley ... only in Silicon Valley there were 10 times fewer employees.”

Top 9 ethical issues in artificial intelligence
https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1324402 2018-09-22T15:25:15Z 2018-09-22T15:25:17Z Former Head of Google China Foresees an AI Crisis--and Proposes a Solution
“AI will take many single-task, single-domain jobs away. You can argue that humans have abilities that AI does not: We can conceptualize, strategize, create. Whereas today’s AI is just a really smart pattern recognizer that can take in data, optimize, and beat humans at a given task. But how many jobs in the world are simple repetitions of tasks that can be optimized? How many jobs require no creativity, strategizing, conceptualization? Most jobs are repetitive: truck-driving, telemarketing, dishwashing, fruit picking, assembly-line work, and so on. I’m afraid that about 50 percent of jobs in the world are in danger.”

Former Head of Google China Foresees an AI Crisis--and Proposes a Solution
https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/former-head-of-google-china-foresees-an-ai-crisis
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1324348 2018-09-22T11:52:57Z 2018-09-22T11:52:57Z The future of humanity depends on design ethics, says Tim Wu
“To be clear, these false loops are an explicit business strategy. The more you can convince someone that they need to keep checking your site, the more time they’ll spend on your platform–and the more ads they’ll see. It’s the same philosophy that underpins incessant notifications and the infinite scroll you find on many media sites (including our own). “If you were to obey Facebook’s mandate–hey, this friend commented on this, you should comment back, oh, you need to like this–you’d spend 24 hours there and still you’d never close the loops,” Wu says.”

The future of humanity depends on design ethics, says Tim Wu
https://www.fastcompany.com/90239599/the-future-of-humanity-depends-on-design-ethics-says-tim-wu
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1322627 2018-09-17T16:26:21Z 2018-09-17T16:26:39Z Forget the new iPhones: Apple’s best product is now privacy
“In 2018, no issue is more important than user privacy–or the lack of it. We’re tracked by private industry on an unprecedented scale, with major corporations having so much data about us–much of it gleaned without our knowledge–that they can tell when a teenager is pregnant (and inform the teen’s father) or even predict your future actions based on decisions you haven’t made yet. If you want to be part of this world, designed by advertisers and tech giants, you must relinquish your right to privacy. In other words, we live in a commercial surveillance state.”

Forget the new iPhones: Apple’s best product is now privacy
https://www.fastcompany.com/90236195/forget-the-new-iphones-apples-best-product-is-now-privacy
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1321942 2018-09-15T16:00:23Z 2018-09-15T16:00:34Z Senior Google Scientist Resigns Over “Forfeiture of Our Values” in China
““I view our intent to capitulate to censorship and surveillance demands in exchange for access to the Chinese market as a forfeiture of our values and governmental negotiating position across the globe,” he wrote, adding: “There is an all-too-real possibility that other nations will attempt to leverage our actions in China in order to demand our compliance with their security demands.””

Senior Google Scientist Resigns Over “Forfeiture of Our Values” in China
https://theintercept.com/2018/09/13/google-china-search-engine-employee-resigns/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1318178 2018-09-04T06:25:08Z 2018-09-04T06:25:09Z The World Economic Forum warns that AI may destabilize the financial system
“And McWaters says that as AI becomes more widely used in finance, it will be important to consider issues like biased algorithms, which can discriminate against certain groups of people. Financial companies should not be too eager to simply replace staff either, he says. As the study suggests, human skills will remain important even as automation becomes more widespread.”

The World Economic Forum warns that AI may destabilize the financial system
https://www.technologyreview.com/s/611890/the-world-economic-forum-warns-that-ai-may-destabilize-the-financial-system/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1317599 2018-09-02T16:39:42Z 2018-09-02T16:39:42Z Franken-algorithms: the deadly consequences of unpredictable code
“Computers are already vastly superior to us at certain specialized tasks, but the day they rival our general ability is probably some way off – if it ever happens. Human beings may not be best at much, but we’re second-best at an impressive range of things.

Here’s the problem. Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.


Algorithms are beginning to learn from their environments. Illustration: Marco Goran Romano”
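The distinction the article draws — a fixed rule that is written down and auditable versus a rule induced from data — can be sketched in a few lines. This is a toy illustration, not from the article; the "learned" rule here is simply a threshold at the midpoint of two class means, standing in for far more opaque models:

```python
# Toy contrast between a fixed algorithm, whose rule anyone can read
# and predict, and a "learning" one, whose rule depends on whatever
# data it happened to see.

def fixed_rule(x):
    # Explicit, inspectable, predictable: the rule IS the code.
    return x > 10.0

def learned_rule(training_data):
    # Induced from data: the threshold sits at the midpoint of the
    # two class means. Change the training sample and the rule
    # changes silently -- you must probe the model to learn what it
    # now does, which is the article's point about lost certainty.
    lo = [x for x, label in training_data if not label]
    hi = [x for x, label in training_data if label]
    threshold = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2
    return lambda x: x > threshold

classify = learned_rule([(2, False), (4, False), (16, True), (18, True)])
print(classify(9))   # False with this sample; another sample may differ
```

With real machine-learned systems the induced rule is not a single readable threshold but millions of parameters, which is why the author argues such algorithms stop being interrogable in principle.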

Franken-algorithms: the deadly consequences of unpredictable code
http://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1317293 2018-09-01T14:23:33Z 2018-09-01T14:23:38Z Life, the universe and everything, according to futurist Michio Kaku
“The other thing Hawking warned us about was artificial intelligence. What's your opinion?

We've been brainwashed by Hollywood on this. Our most advanced robot has the intelligence of a cockroach. We can't even get a robot to get out of a car, something a five-year-old can do. We're not likely to see a robot that thinks, and looks and talks like you, uses your voice inflections, until the end of this century.”

Life, the universe and everything, according to futurist Michio Kaku
https://www.smh.com.au/technology/life-the-universe-and-everything-according-to-futurist-michio-kaku-20180820-p4zylp.html
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1317084 2018-08-31T22:00:30Z 2018-08-31T22:00:30Z In the Future of Work, Creativity Still Rules - Connected Futures on techvshuman Gerd Leonhard
“Any smart company,” he said, “would not just fire everyone when they can replace them with machines, but move them to more value-adding jobs … If the goal is to spread the power of technology and the benefits across society and create new jobs and new positions and reinvest, then we can do that.”

I don’t think that the end of routine means the end of jobs. I think it just means the end of routine.
- Gerd Leonhard

For the most part, Leonhard fears, we are falling short of that goal. To start, he believes we must move from rote learning to developing the qualities that will separate humans from machines, whether in education or workplace culture.

“We need in the future to focus on the human-only skills,” he stressed. “Mostly emotional intelligence. This is right now a number one desired capability in HR. And we need to teach it to our kids. That includes intelligence to create, to imagine, to tell a story.”

Preparing for these changes, Leonhard believes, is also the smart, competitive thing to do.

“People look at technology,” he explained, “and say wow, instead of having people do this, we can have AI or whatever. And that is a very short-term view because in five or 10 years every single company will have that technology. So, you end up being a commodity. But the thing that makes it a real company that has values and purpose and meaning is the people that work in it.”

The Floating, Fluid, Future Team

As technology change upends the very foundation of how we work, expect teams to be dynamic and fluid, management less hierarchical, and collaboration constant. (Cisco’s Future of Work research supports this conclusion.)

“I believe that the future is about wide-scale collaboration,” Leonhard said, “working on missions rather than for jobs and for necessarily one company.”

For some, that will mean adapting to a gig economy. That won’t be without its pain points, especially if we allow what Leonhard has called “digital Darwinism.””

In the Future of Work, Creativity Still Rules - Connected Futures
https://connectedfutures.cisco.com/article/future-work-creativity-still-rules/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1309532 2018-08-06T06:57:40Z 2018-08-06T06:57:40Z Why liberal arts and the humanities are as important as engineering - Vivek Wadhwa
“With the convergence of medicine, artificial intelligence and sensors, we can create digital doctors that monitor our health and help us prevent disease; with the advances in genomics and gene editing, we have the ability to create plants that are drought resistant and that feed the planet; with robots powered by artificial intelligence, we can build digital companions for the elderly. Nanomaterial advances are enabling a new generation of solar and storage technologies that will make energy affordable and available to all.

Creating solutions such as these requires a knowledge of fields such as biology, education, health sciences and human behavior. Tackling today’s biggest social and technological challenges requires the ability to think critically about their human context, which is something that humanities graduates happen to be best trained to do.

An engineering degree is very valuable, but the sense of empathy that comes from music, arts, literature and psychology provides a big advantage in design. A history major who has studied the Enlightenment or the rise and fall of the Roman Empire gains an insight into the human elements of technology and the importance of its usability.”

Why liberal arts and the humanities are as important as engineering - Vivek Wadhwa
http://wadhwa.com/2018/06/12/liberal-arts-humanities-important-engineering/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1309528 2018-08-06T06:43:33Z 2018-08-06T06:43:34Z Why liberal arts and the humanities are as important as engineering - Vivek Wadhwa
“Steve Jobs touted the importance of liberal arts and humanities at the unveiling of the iPad 2: “It’s in Apple’s DNA that technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the result that makes our heart sing, and nowhere is that more true than in these post-PC devices.” With this focus, he built the most valuable company in the world and set new standards for the technology industry.

Logitech CEO Bracken Darrell, who majored in English, also emphasized this. I recently asked him how he turned his company around and caused its stock price to increase by an astonishing 450 percent over five years. He said that it was through relentlessly focusing on design in every product the company built; that engineering is important but what makes a technology product most successful is its design.

The key to good design is a combination of empathy and knowledge of the arts and humanities. Musicians and artists inherently have the greatest sense of creativity. You can teach artists how to use software and graphics tools; turning engineers into artists is hard.”

Why liberal arts and the humanities are as important as engineering - Vivek Wadhwa
http://wadhwa.com/2018/06/12/liberal-arts-humanities-important-engineering/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1306247 2018-07-25T13:58:40Z 2018-07-25T13:58:41Z Man vs. Machine: The New Ethics of Cybersecurity
“Leonhard’s proposed manifesto focuses on five specific human rights that he believes could be endangered if we don’t have an ethical framework by which to guide us. These are:

1. The right to remain natural, i.e., biological. This means we can be employed, use public services, buy things and function in society without a requirement to deploy technology on or inside our bodies.

2. The right to be inefficient if and where it defines our basic humanities. We must be able to make the choice to be slower than technology and not make efficiency more important than humanity.

3. The right to disconnect. We must retain the right to switch off connectivity, go dark on the network and pause communications, tracking and monitoring.

4. The right to be anonymous. We must retain the option of not being identified and tracked, such as when using a digital application or platform, as long as doing so doesn’t pose a risk to or impose on others.

5. The right to employ or involve people instead of machines. We should not allow companies or employers to be disadvantaged if they choose to use people instead of machines—even if it’s more expensive and less efficient.”

Man vs. Machine: The New Ethics of Cybersecurity
https://www.securityroundtable.org/new-ethics-of-cybersecurity/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1303063 2018-07-14T09:39:07Z 2018-07-14T09:39:08Z Review of Technology vs Humanity: The coming clash between man and machine - Ron Immink -
“As with Peter Diamandis’s 6Ds, he identified a number of effects.

Dependency – Leaving our thinking to software and algorithms because it’s just so much more convenient and fast.
Confusion – Not knowing if it was the intended human who replied to my emails, or her AI assistant. Or even not knowing if I made my own decision or if I was manipulated by my IDA.
Loss of control – Not having a way of knowing if the AI’s anticipation was correct or not, as we could not possibly track the system’s logic or even comprehend the workings of a quantum computing-fueled, machine-learning system. In other words, we would need to either trust it completely or not at all, similar to the dilemma that some airplane pilots are already facing with their autopilot systems.
Abdication – Being tempted to leave more tasks to systems that would handle them for us, whether it is coordinating personal schedules, making appointments, or answering simple emails. Then, of course, it would be very likely that we would simply blame the cloud/bot/AI if something went wrong.
Reminds me of “The Seventh Sense”. Do you think AI will let us know when it has taken over?”

Technology vs Humanity: The coming clash between man and machine - Ron Immink - Bestselling author, speaker, entrepreneur and ecopreneur.
https://www.ronimmink.com/technology-vs-humanity-coming-clash-man-machine/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1303051 2018-07-14T08:41:08Z 2018-07-14T08:41:08Z ‘Find Your Passion’ vs growth mindset
“Dweck, one of the paper’s authors, has previously studied different types of mind-sets as they relate to intelligence. People who have a growth mind-set about their own intelligence tend to be less afraid of failure, according to her research, because they believe smarts are cultivated, not inherent. Interests are related to, but distinct from, abilities, the study authors told me: You can be interested in something but not very good at it. “I’ve been playing guitar for 25 years, but I can’t say that my abilities have gotten that much better in the past 10 years,” O’Keefe said.

Dweck told me that “find your passion” has a laudable history. “Before that, people were saying, ‘Find your genius,’ and that was so intimidating. It implied that only people who were really brilliant at something could succeed,” she said. “‘Find your passion’ felt more democratic. Everybody can have an interest.” But this study suggests that even the idea of finding your “true” interest can intimidate people and keep them from digging further into a field.”

‘Find Your Passion’ Is Awful Advice
https://www.theatlantic.com/science/archive/2018/07/find-your-passion-is-terrible-advice/564932/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1301821 2018-07-10T17:54:39Z 2018-07-10T17:54:40Z Let’s make private data into a public good
“The so-called sharing economy is based on the same idea. Instead of interacting with some kind of institution (like a travel agency), customers interact with each other. The role of a company, then, is not to provide the service but to connect sellers (like someone who owns a car and is willing to drive it) with buyers (someone who needs a ride). These so-called platforms are presented as a radical transformation in the way goods and services are produced, shared, and delivered. But they’re also an easy way for companies to avoid responsibility. When disabled users complain to Uber that their drivers refuse to put wheelchairs in the trunk, Uber says, well, we’re not a taxi company, we’re just a platform. Airbnb is similarly reluctant to take responsibility for the safety of the premises offered on its site, or for racial discrimination against renters by property owners. After all, Airbnb didn’t build the apartments and doesn’t own them—it’s just a platform.”

Let’s make private data into a public good
https://www.technologyreview.com/s/611489/lets-make-private-data-into-a-public-good/
via Instapaper


]]>
tag:digitalethics.net,2013:Post/1300417 2018-07-06T14:10:08Z 2018-07-06T14:10:09Z Google is Reportedly Looking to Take Over Call Centers With Its Duplex AI Assistant
“If Google can find its way into the business of automated phone calls for companies, it could creep its way into a massive profit center. The research firm ResearchAndMarkets projects the cloud-based customer call center market will reach about $21 billion by 2022—up from $6.8 billion in 2017.

Of course, as the market expands, so will the competition, and Google isn’t the only major tech firm that sees opportunity in entering call centers. Last year, Amazon started selling a version of its wildly popular voice assistant Alexa designed specifically for use responding to questions via phone and text. Companies like IBM, Microsoft, and Cisco also already have their foot in the door of this business, per The Information.

While there may be major profits awaiting these firms as they conquer call centers, they also come with a cost: humans will inevitably be bumped from the jobs. Most companies already outsource the work to countries where they can pay paltry wages in order to keep the expenses related to customer service down, and AI would likely cut those costs even more. That would be devastating for a nation like the Philippines, which has an estimated 1.2 million call-center workers according to the Wall Street Journal.”
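For scale, the growth implied by the two market-size figures in the quote ($6.8 billion in 2017 to about $21 billion by 2022, i.e. five compounding years) works out to roughly 25% a year. A quick sketch of the arithmetic:

```python
# Implied compound annual growth rate (CAGR) between the two
# cloud-call-center market figures quoted above.
start, end = 6.8, 21.0   # USD billions, per the ResearchAndMarkets projection
years = 2022 - 2017      # five compounding periods

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")   # roughly 25% per year
```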

Google is Reportedly Looking to Take Over Call Centers With Its Duplex AI Assistant
https://gizmodo.com/google-is-reportedly-looking-to-take-over-call-centers-1827379911
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1296717 2018-06-24T17:26:19Z 2018-06-24T17:26:20Z 41 highlights from Homo Deus by Yuval Harari: “Science is converging on an all-encompassing dogma, which says that organisms are algorithms and life is data processing.”
“For 300 years the world has been dominated by humanism, which sanctifies the life, happiness and power of Homo sapiens. The attempt to gain immortality, bliss and divinity merely takes the long-standing humanist ideals to their logical conclusion.

You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.”

41 highlights from Homo Deus by Yuval Harari: “Science is converging on an all-encompassing dogma, which says that organisms are algorithms and life is data processing.”
http://www.kevinhabits.com/41-highlights-from-homo-deus-by-yuval-harari-science-is-converging-on-an-all-encompassing-dogma-which-says-that-organisms-are-algorithms-and-life-is-data-processing/
via Instapaper
]]>
tag:digitalethics.net,2013:Post/1295516 2018-06-19T14:30:43Z 2018-06-19T14:30:43Z It’s time to rein in the data barons
“Jonathan Taplin, the director emeritus of the Annenberg Innovation Lab at the University of Southern California, argues in Move Fast and Break Things, his book about the power of the internet giants, that rebel artists have long had to deal with “suits” who control distribution of their work. But the rise of companies like Facebook and Amazon has increased the stakes immeasurably. “The concentration of profits in the making of arts and news,” he writes, “has made more than just artists and journalists vulnerable: it has made all those who seek to profit from the free exchange of ideas and culture vulnerable to the power of a small group of …”

It’s time to rein in the data barons
https://www.technologyreview.com/s/611425/its-time-to-rein-in-the-data-barons/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1285481 2018-05-20T07:29:39Z 2018-05-20T07:29:40Z Steven Pinker’s Ideas About Progress Are Fatally Flawed. These Eight Graphs Show Why.
“Taken together, these graphs illustrate ecological overshoot: the fact that, in the pursuit of material progress, our civilization is consuming the earth’s resources faster than they can be replenished. Overshoot is particularly dangerous because of its relatively slow feedback loops: if your checking account balance approaches zero, you know that if you keep writing checks they will bounce. In overshoot, however, it’s as though our civilization keeps taking out bigger and bigger overdrafts to replenish the account, and then we pretend these funds are income and celebrate our continuing “progress.” In the end, of course, the money runs dry and it’s game over.”
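The overdraft analogy can be made concrete with a toy stock-and-flow model (illustrative numbers only, not from the article): a renewable stock shrinks whenever annual consumption exceeds annual regeneration, and because the feedback is slow, nothing looks wrong until the stock runs out.

```python
# Toy ecological-overshoot model: a resource stock with fixed annual
# regeneration, drawn down by consumption that exceeds it.
# All numbers are illustrative, not from the article.

stock = 100.0        # starting resource stock
regen = 3.0          # replenished per year
consumption = 5.0    # consumed per year (overshoot: exceeds regen)

years = 0
while stock > 0:
    stock += regen - consumption   # net -2.0 per year
    years += 1

print(f"stock exhausted after {years} years")  # 50 years at this rate
```

The "progress" of the intervening decades looks fine on any per-year chart, which is the author's point about slow feedback loops.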

Steven Pinker’s Ideas About Progress Are Fatally Flawed. These Eight Graphs Show Why.
https://patternsofmeaning.com/2018/05/17/steven-pinkers-ideas-about-progress-are-fatally-flawed-these-eight-graphs-show-why/
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1285266 2018-05-19T16:44:00Z 2018-05-19T16:44:02Z Google’s Selfish Ledger is an unsettling vision of Silicon Valley social engineering - watch the video !!
“Building on the ledger idea, the middle section of the video presents a conceptual Resolutions by Google system, in which Google prompts users to select a life goal and then guides them toward it in every interaction they have with their phone. The examples, which would “reflect Google’s values as an organization,” include urging you to try a more environmentally friendly option when hailing an Uber or directing you to buy locally grown produce from Safeway.”

Google’s Selfish Ledger is an unsettling vision of Silicon Valley social engineering
https://www.theverge.com/2018/5/17/17344250/google-x-selfish-ledger-video-data-privacy
via Instapaper

]]>
tag:digitalethics.net,2013:Post/1282784 2018-05-12T12:52:09Z 2018-05-12T12:52:10Z Jaron Lanier Q&A: ‘We Won, and We Turned Into Assholes’
“And then when you move out of the tech world, everybody’s struggling. It’s a very strange thing. The numbers show an economy that’s doing well, but the reality is that the way it’s doing well doesn’t give many people a feeling of security or confidence in their futures. It’s like everybody’s working for Uber in one way or another. Everything’s become the gig economy. And we routed it that way, that’s our doing. There’s this strange feeling when you just look outside of the tight circle of Silicon Valley, almost like entering another country, where people are less secure. It’s not a good feeling. I don’t think it’s worth it, I think we’re wrong to want that feeling.”

Jaron Lanier Q&A: ‘We Won, and We Turned Into Assholes’
http://nymag.com/selectall/2018/04/jaron-lanier-interview-on-what-went-wrong-with-the-internet.html
via Instapaper

]]>