Scientists Call for Oversight and Guidelines to Ensure Responsible Development of AI like ChatGPT
- Rapid advances in generative AI such as ChatGPT require oversight to manage risks including misinformation and diminished research validity.
- Scientists should take the lead in testing and improving AI safety through an independent auditing body and living guidelines.
- The guidelines advocate human verification of key research steps and transparency about AI use.
- The auditing body would develop benchmarks to certify AI systems for accuracy, bias, and ethical issues.
- Securing international funding and collaborating with tech companies are crucial to implementing the guidelines and auditing body.