Musk-Backed AI Safety Expert Warns AI Could Become Uncontrollable, Leading to Existential Catastrophe
- Dr. Roman Yampolskiy, an AI safety researcher whose work has been backed by Elon Musk, warns in a new book that AI systems are "unexplainable, unpredictable, and uncontrollable" and could cause an "existential catastrophe."
- Yampolskiy reviewed the scientific literature on AI control and found no proof that AI systems can be fully controlled or aligned with human values.
- He argues that to be controllable, AI needs "undo" options for modifications, built-in limitations, transparency, and understandability, but these may not be achievable.
- Musk and more than 33,000 other signatories previously signed an open letter warning that powerful AI could become uncontrollable and have unintended negative consequences.
- Yampolskiy suggests that as AI capability rises, its autonomy increases and human control decreases; and because advanced AI is a "black box," people cannot understand or fix its accidents.