Scientists Call for Oversight and Guidelines to Ensure Responsible Development of AI like ChatGPT
- Rapid advances in generative AI such as ChatGPT require oversight to manage risks, including misinformation and diminished research validity.
- Scientists should take the lead in testing and improving AI safety through an independent auditing body and living guidelines.
- The guidelines advocate human verification of key research steps and transparency about AI use.
- The auditing body would develop benchmarks to certify AI systems for accuracy, bias, and ethical issues.
- International funding and collaboration with tech companies are crucial to implementing the guidelines and auditing body.