Government Considers New Measures to Ensure Responsible and Transparent Use of AI
- Tech companies may be asked to label or watermark content generated by AI tools such as ChatGPT, as the government moves to manage the risks of rapidly evolving "high risk" AI.
- An expert advisory group will be set up to develop AI policy and guardrails, and a voluntary "AI safety standard" will help businesses adopt AI responsibly.
- Transparency measures, such as public reporting on the data used to train AI models, are under consideration; labelling of AI-generated content is another option.
- Reviews are underway into the use of generative AI in schools and by government, amid concerns about deepfakes and harmful content.
- "High risk" AI, such as recidivism predictors and self-driving cars, may need stricter regulation, while "low risk" AI such as email filters can continue freely.