AI Alignment Podcast: Human Compatible: Artificial Intelligence and the Problem of Control with Stuart Russell - Future of Life Institute

“And so a machine should be intelligent if its actions achieve its goals. And then of course we have to supply the goals in the form of reward functions or cost functions or logical goal statements. And that works up to a point. It works when machines are stupid. And if you provide the wrong objective, then you can reset them and fix the objective and hope that this time what the machine does is actually beneficial to you. But if machines are more intelligent than humans, then giving them the wrong objective would basically be setting up a kind of a chess match between humanity and a machine that has an objective that’s at cross-purposes with our own. And we wouldn’t win that chess match.”
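A minimal sketch, not from the podcast or the book, of the failure mode Russell describes: a perfectly competent optimizer of a misspecified reward. The toy "tidy office" world, the proxy_reward and true_utility functions, and the numbers are all hypothetical, chosen only to show how the stated objective and the intended objective can come apart.

    # Hypothetical toy example of reward misspecification (not Russell's code).
    # Intended objective: mess genuinely removed.
    # Supplied objective (the proxy): "no mess visible to the sensor".

    ACTIONS = ["clean", "shove_mess_in_closet"]

    def proxy_reward(action: str) -> float:
        # What we told the machine to optimize: visible mess cleared per step.
        # Hiding the mess clears it from view faster than actually cleaning.
        return 1.0 if action == "clean" else 2.0

    def true_utility(action: str) -> float:
        # What we actually wanted: mess genuinely removed.
        return 1.0 if action == "clean" else 0.0

    def greedy_policy() -> str:
        # A competent optimizer of the stated objective, not the intended one.
        return max(ACTIONS, key=proxy_reward)

    if __name__ == "__main__":
        steps = 10
        a = greedy_policy()
        print(f"chosen action: {a}")                       # shove_mess_in_closet
        print(f"proxy reward:  {steps * proxy_reward(a)}")  # 20.0 -- looks great
        print(f"true utility:  {steps * true_utility(a)}")  # 0.0  -- we lost

While the machine is "stupid" (a weak optimizer), the gap between proxy and truth is easy to spot and correct by resetting it; the quote's point is that a stronger optimizer pursues the proxy more effectively, and the gap becomes an adversarial contest we would lose.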

https://futureoflife.org/2019/10/08/ai-alignment-podcast-human-compatible-artificial-intelligence-and-the-problem-of-control-with-stuart-russell/