[artificial intelligence] A philosopher argues that an AI can’t be an artist

“Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.”

A philosopher argues that an AI can’t be an artist
https://www.technologyreview.com/s/612913/a-philosopher-argues-that-an-ai-can-never-be-an-artist/
via Instapaper

[singularity] The Troubling Trajectory Of Technological Singularity

“The technology triggered intelligence evolution in machines and the linkages between ideas, innovations and trends have in fact brought us on the doorsteps of singularity. Irrespective of whether we believe that the singularity will happen or not, the very thought raises many concerns and critical security risk uncertainties for the future of humanity. This forces us to begin a conversation with ourselves and with others (individually and collectively) about what we want as a species.”

The Troubling Trajectory Of Technological Singularity
https://www.forbes.com/sites/cognitiveworld/2019/02/10/the-troubling-trajectory-of-technological-singularity/
via Instapaper

AI is incredibly smart, but it will never match human creativity

“Humanity’s safe-haven in the coming years will be exactly that — consciousness. Spontaneous thought, creative thinking, and a desire to challenge the world around us. As long as humans exist there will always be a need to innovate, to solve problems through brilliant ideas. Rather than some society in which all individuals will be allowed to carry out their days creating works of art, the machine revolution will instead lead to a society in which anyone can make a living by dreaming and providing creative input to projects of all kinds. The currency of the future will be thought.”

AI is incredibly smart, but it will never match human creativity
https://thenextweb.com/syndication/2019/01/02/ai-is-incredibly-smart-but-it-will-never-match-human-creativity/
via Instapaper

[digital ethics] Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says

“A massive majority of consumers believe that using their data to personalize ads is unethical. And a further 59% believe that personalization to create tailored newsfeeds -- precisely what Facebook, Twitter, and other social applications do every day -- is unethical.

At least, that's what they say on surveys.”

Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says
https://www.forbes.com/sites/johnkoetsier/2019/02/09/83-of-consumers-believe-personalized-ads-are-morally-wrong-survey-says

Facebook’s provocations of the week – Monday Note describes the Google business model

“Imagine if JPMorgan owned the New York Stock Exchange, was the sole market-maker on its own equity, the exclusive broker for every other equity in the market, ran the entire settlement and clearing system in the market, and basically wouldn’t let anyone see who had bought shares and which share or certificate or number they bought… That is Google’s business model.”

Facebook’s provocations of the week – Monday Note
https://mondaynote.com/facebooks-provocations-of-the-week-9fc6af6de12f
via Instapaper

The Next Privacy War Will Happen in Our Homes – Medium

“[In] October, Amazon showcased Alexa’s newest features, including the ability to detect when someone is whispering and respond at a quieter volume. According to Wired, Amazon also has plans to introduce a home security feature, Alexa Guard, giving the program the ability to listen “for trouble such as broken glass or a smoke alarm when you’re away from home.” A month later, the Telegraph reported that Amazon had patented Alexa software that could one day analyze someone’s voice for signs of illness (like a cough or a sneeze) and respond by offering to order cough drops.”

The Next Privacy War Will Happen in Our Homes – Medium
https://medium.com/s/story/why-the-next-privacy-war-will-be-over-sound-d7b59b1533f3
via Instapaper

What is work?

“Since the dawn of the industrial age, work has become ever more transactional and predictable; the execution of routine, tightly defined tasks. In virtually every large public and private sector organization, that approach holds: thousands of people, each specializing in certain tasks, limited in scope, increasingly standardized and specified, which ultimately contribute to the creation and delivery of predictable products and services to customers and other stakeholders. The problem? Technology can increasingly do that work. Actually, technology should do that work: Machines are more accurate, they don’t get tired or bored, they don’t break for sleep or weekends. If it’s a choice between human or machines to do the kind of work that requires compliance and consistency, machines should win every time.”

What is work?
https://www2.deloitte.com/insights/us/en/focus/technology-and-the-future-of-work/what-is-work.html
via Instapaper

Team Human vs. Team AI

“Artificial intelligence adds another twist. After we launch technologies related to AI and machine learning, they not only shape us, but they also begin to shape themselves. We give them an initial goal, then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an AI program may be processing information or modifying its tactics. The AI isn’t conscious enough to tell us. It’s just trying everything and hanging onto what works for the initial goal, regardless of its other consequences.”

Team Human vs. Team AI
https://www.strategy-business.com/article/Team-Human-vs-Team-AI?gko=4d55d
via Instapaper

Recent events highlight an unpleasant scientific practice: ethics dumping

“Dig deeper, though, and what happened starts to look more intriguing than just the story of a lone maverick having gone off the rails in a place with lax regulation. It may instead be an example of a phenomenon called ethics dumping.

Ethics dumping is the carrying out by researchers from one country (usually rich, and with strict regulations) in another (usually less well off, and with laxer laws) of an experiment that would not be permitted at home, or of one that might be permitted, but in a way that would be frowned on. The most worrisome cases involve medical research, in which health, and possibly lives, are at stake. But other investigations—anthropological ones, for example—may also be carried out in a more cavalier fashion abroad. As science becomes more international the risk of ethics dumping, both intentional and unintentional, has risen. The suggestion in this case is that Dr He was encouraged and assisted in his project by a researcher at an American university.”

Recent events highlight an unpleasant scientific practice: ethics dumping
https://www.economist.com/science-and-technology/2019/02/02/recent-events-highlight-an-unpleasant-scientific-practice-ethics-dumping
via Instapaper