EU Proposes New AI Act to Regulate Technology Over Safety and Trust Concerns
-
The EU's proposed AI Act aims to regulate AI technology over a 3-year implementation period to address concerns over safety and trust. It will ban certain "unacceptable risk" systems, such as social scoring and emotion recognition.
-
The act defines an AI system as an autonomous, adaptable system that generates predictions, recommendations, or decisions from the inputs it receives. It covers technologies such as chatbots and facial recognition, but exempts AI used for military and defence purposes.
-
"High risk" AI systems, such as those used in critical infrastructure and essential services, will face accuracy, risk-assessment, and logging requirements. EU citizens will be able to request explanations of decisions these systems make about them.
-
Generative AI systems such as chatbots will need to comply with copyright law and disclosure rules. Models deemed to pose "systemic risk" face additional testing and monitoring requirements.
-
Penalties range from multimillion-euro fines up to 7% of a company's global turnover for breaches such as deploying banned systems or violating transparency rules. Oversight falls to a new European AI Office.