[artificial intelligence] We analyzed 16,625 papers to figure out where AI is headed next (MIT Technology Review)

“The biggest shift we found was a transition away from knowledge-based systems by the early 2000s. These computer programs are based on the idea that you can use rules to encode all human knowledge. In their place, researchers turned to machine learning—the parent category of algorithms that includes deep learning.

Among the top 100 words mentioned, those related to knowledge-based systems—like “logic,” “constraint,” and “rule”—saw the greatest decline. Those related to machine learning—like “data,” “network,” and “performance”—saw the highest growth.”

We analyzed 16,625 papers to figure out where AI is headed next
via Instapaper

[digital heresy] This Is Silicon Valley – OneZero

“In Silicon Valley, few people find things like climate change important enough to talk about at length, and even fewer find it important enough to work on. It’s not where the money is at. It’s not where “success” is at. And it’s certainly not where the industry is at. Instead, money comes from changing a button from green to blue, from making yet another food delivery app, and from getting more clicks on ads. That’s just how the Valley and the tech industry are set up. As Jeffrey Hammerbacher, a former Facebook executive, told Bloomberg, “The best minds of my generation are thinking about how to make people click ads.”

This is Silicon Valley.”

This Is Silicon Valley – OneZero
via Instapaper

[artificial intelligence] Seeking Ground Rules for A.I. via The New York Times

“The Recommendations

Transparency Companies should be transparent about the design, intention and use of their A.I. technology.

Disclosure Companies should clearly disclose to users what data is being collected and how it is being used.

Privacy Users should be able to easily opt out of data collection.

Diversity A.I. technology should be developed by inherently diverse teams.

Bias Companies should strive to avoid bias in A.I. by drawing on diverse data sets.

Trust Organizations should have internal processes to self-regulate the misuse of A.I. Have a chief ethics officer, ethics board, etc.

Accountability There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology.

Collective governance Companies should work together to self-regulate the industry.

Regulation Companies should work with regulators to develop appropriate laws to govern the use of A.I.

“Complementarity” Treat A.I. as a tool for humans to use, not a replacement for human work.”

Seeking Ground Rules for A.I.
via Instapaper

[technology] Get ready for the age of sensor panic

“But after what seems like daily reports about Facebook privacy transgressions, Russian hacking, Chinese industrial espionage, Android malware and all manner of leaks, hacks and privacy-invading blunders, we’ve entered into a new era of public distrust of all things technological.”

Get ready for the age of sensor panic
via Instapaper

[digital heresy] Uber and the Ongoing Erasure of Public Life

“Cities struggling to keep subways and buses running are being drained of revenue by tech companies and a reserve army of cars. These cars, in turn, coagulate the arteries of the city, blocking the remaining fleet of buses, causing a downward spiral of decreasing ridership and growing traffic.

Despite all of this, Uber claims to support mass transit. “Everyone agrees on the solution,” a company spokesperson said in an e-mail. “We need tools that help ensure sustainable travel modes like public transportation are prioritized over single occupant vehicles.” The company has regularly portrayed itself as offering “first-mile, last-mile” solutions for transit: carrying you to and from the train station or bus stop. In fact, the evidence of its success in this arena is inconclusive. In some suburbs or city peripheries, where these solutions are most necessary, Uber has become a subsidized alternative to the transit to which it supposedly offers a connection, partnering with municipal and transit agencies to replace their existing bus services.”

Uber and the Ongoing Erasure of Public Life
via Instapaper

[artificial intelligence] A philosopher argues that an AI can’t be an artist

“Claims like Kurzweil’s that machines can reach human-level intelligence assume that to have a human mind is just to have a human brain that follows some set of computational algorithms—a view called computationalism. But though algorithms can have moral implications, they are not themselves moral agents. We can’t count the monkey at a typewriter who accidentally types out Othello as a great creative playwright. If there is greatness in the product, it is only an accident. We may be able to see a machine’s product as great, but if we know that the output is merely the result of some arbitrary act or algorithmic formalism, we cannot accept it as the expression of a vision for human good.

For this reason, it seems to me, nothing but another human being can properly be understood as a genuinely creative artist. Perhaps AI will someday proceed beyond its computationalist formalism, but that would require a leap that is unimaginable at the moment. We wouldn’t just be looking for new algorithms or procedures that simulate human activity; we would be looking for new materials that are the basis of being human.”

A philosopher argues that an AI can’t be an artist
via Instapaper

[singularity] The Troubling Trajectory Of Technological Singularity

“The technology triggered intelligence evolution in machines and the linkages between ideas, innovations and trends have in fact brought us on the doorsteps of singularity. Irrespective of whether we believe that the singularity will happen or not, the very thought raises many concerns and critical security risk uncertainties for the future of humanity. This forces us to begin a conversation with ourselves and with others (individually and collectively) about what we want as a species.”

The Troubling Trajectory Of Technological Singularity
via Instapaper

[artificial intelligence] AI is incredibly smart, but it will never match human creativity

“Humanity’s safe-haven in the coming years will be exactly that — consciousness. Spontaneous thought, creative thinking, and a desire to challenge the world around us. As long as humans exist there will always be a need to innovate, to solve problems through brilliant ideas. Rather than some society in which all individuals will be allowed to carry out their days creating works of art, the machine revolution will instead lead to a society in which anyone can make a living by dreaming and providing creative input to projects of all kinds. The currency of the future will be thought.

This article was originally published on Alex Wulff's Medium”

AI is incredibly smart, but it will never match human creativity
via Instapaper

[digital ethics] Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says

“A massive majority of consumers believe that using their data to personalize ads is unethical. And a further 59% believe that personalization to create tailored newsfeeds -- precisely what Facebook, Twitter, and other social applications do every day -- is unethical.

At least, that's what they say on surveys.”

Only 17% Of Consumers Believe Personalized Ads Are Ethical, Survey Says

Facebook’s provocations of the week – Monday Note describes the Google business model

“Imagine if JPMorgan owned the New York Stock Exchange, was the sole market-maker on its own equity, the exclusive broker for every other equity in the market, ran the entire settlement and clearing system in the market, and basically wouldn’t let anyone see who had bought shares and which share or certificate or number they bought… That is Google’s business model.”

Facebook’s provocations of the week – Monday Note
via Instapaper