San Francisco Issues AI Guidelines for City Workers, Warning Against Biases and Data Privacy Risks
- San Francisco released preliminary AI guidelines for city employees, encouraging disclosure, experimentation, and fact-checking when using the technology.
- The guidelines warn against entering sensitive data into public AI tools, where it could be seen by companies like OpenAI or by the public.
- The rules come as other states and cities encourage or launch pilots for employees to use AI tools like ChatGPT in their work.
- The guidelines note that AI can be useful for tasks like drafting emails and writing code, but warn that the programs can reflect biases in their training data.
- An expert said the rules do not provide enough clarity on when and how employees should use AI, and that more formal training is needed rather than encouragement of unguided experimentation.