Oxford Professor Warns AI Fairness Measures Can Reduce Accuracy and Cause Harm
-
Sandra Wachter is a professor at the University of Oxford researching data ethics, AI, algorithms and regulation. She previously evaluated the ethical implications of data science at The Alan Turing Institute.
-
Her recent work, "The Unfairness of Fair Machine Learning", shows how some "fairness" measures in AI can make systems less accurate overall and cause harm.
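The accuracy cost of such fairness interventions can be illustrated with a toy sketch. This is a hypothetical example on synthetic data, not Wachter's actual analysis: it enforces demographic parity (equal positive-prediction rates across two groups) by raising one group's decision threshold, a form of "levelling down" that reduces overall accuracy. All group names, rates and thresholds below are illustrative assumptions.

```python
import random

random.seed(0)

# Synthetic data: two groups with different base rates of the true label.
def make_group(n, base_rate):
    data = []
    for _ in range(n):
        y = 1 if random.random() < base_rate else 0
        # Noisy score correlated with the label (ranges overlap in [0.3, 0.7]).
        score = y * 0.3 + random.random() * 0.7
        data.append((score, y))
    return data

group_a = make_group(1000, 0.6)  # higher base rate of positives
group_b = make_group(1000, 0.3)  # lower base rate of positives

def predict(data, threshold):
    return [1 if s >= threshold else 0 for s, _ in data]

def accuracy(preds, data):
    return sum(p == y for p, (_, y) in zip(preds, data)) / len(data)

# Unconstrained classifier: one threshold for everyone.
t = 0.5
preds_a, preds_b = predict(group_a, t), predict(group_b, t)
acc_unconstrained = (accuracy(preds_a, group_a) + accuracy(preds_b, group_b)) / 2

# Parity-constrained classifier: raise group A's threshold until its
# positive-prediction rate matches group B's ("levelling down").
rate_b = sum(preds_b) / len(group_b)
t_a = t
while sum(predict(group_a, t_a)) / len(group_a) > rate_b:
    t_a += 0.01
preds_a_fair = predict(group_a, t_a)
acc_fair = (accuracy(preds_a_fair, group_a) + accuracy(preds_b, group_b)) / 2

print(f"unconstrained accuracy:      {acc_unconstrained:.3f}")
print(f"parity-constrained accuracy: {acc_fair:.3f}")
```

On this toy data the parity constraint denies correct positive predictions to the higher-base-rate group without helping the other group at all, so average accuracy falls even though the "fairness" metric is satisfied.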
-
She advises women seeking to enter AI to find allies and work with open-minded people from diverse backgrounds to drive innovation.
-
Key issues raised by AI's evolution include biased training data, opacity, climate impacts, job losses, and intellectual-property violations.
-
Responsible AI requires laws and regulation, just as cars, planes and trains do, to prevent human-rights violations while still enabling innovation.