- A Max-Planck Institute study suggests humans couldn’t prevent a superintelligent AI from making its own choices.
- The researchers used Alan Turing’s “halting problem” to test their theory.
- Programming a superintelligent AI with “containment algorithms” or rules would be futile.
It may not be theoretically possible to predict the actions of artificial intelligence, according to researchers at the Max-Planck Institute’s Center for Humans and Machines.
“A super-intelligent machine that controls the world sounds like science fiction,” said Manuel Cebrian, co-author of the study and leader of the research group. “But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it [sic].”
Society is growing increasingly reliant on artificial intelligence: from AI-run interactive job interviews to machine-generated music and even memes, AI is already very much a part of everyday life.
According to the research group’s study, published in the Journal of Artificial Intelligence Research, predicting a superintelligent AI’s actions would require building a simulation of that exact superintelligence.
The question of whether a superintelligence could be contained is hardly a new one.
Manuel Alfonseca, co-author of the study, said that it all centers on “containment algorithms” not dissimilar to Asimov’s First Law of Robotics, according to IEEE Spectrum.
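The logic behind the researchers’ claim traces back to Alan Turing’s 1936 proof that no general procedure can decide whether an arbitrary program will eventually halt. Below is a minimal Python sketch of that diagonal argument; the names `would_halt` and `paradox` are illustrative, not taken from the paper.

```python
def would_halt(program, data):
    """Hypothetical oracle: returns True if program(data) would
    eventually halt, False otherwise. Turing proved no such
    total, always-correct function can exist."""
    raise NotImplementedError("provably impossible in general")


def paradox(program):
    """Does the opposite of whatever would_halt predicts about
    a program run on its own source code."""
    if would_halt(program, program):
        while True:   # predicted to halt -> loop forever instead
            pass
    return            # predicted to loop -> halt immediately


# Feeding paradox its own source yields a contradiction either way:
# if would_halt(paradox, paradox) is True, paradox loops forever;
# if it is False, paradox halts. So would_halt cannot exist.
```

The study extends this idea: a general containment algorithm would, in effect, have to decide whether an arbitrary superintelligent program will cause harm, and that decision problem reduces to the halting problem, making it just as undecidable.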