New Visualization Tool Lets Anyone See Biases and Errors in AI Systems
- AI image recognition is used in critical systems such as health scans and self-driving cars, where errors can have serious consequences.
- Algorithms can exhibit biases when their training data reflects societal biases, so good benchmarks are needed.
- Finding errors is like finding a needle in a haystack, given the huge datasets these algorithms are trained on.
- A new tool transforms an algorithm's data into colored dots, making relationships and errors easy to see.
- The tool is openly available, so anyone can inspect neural-network datasets and find problems.
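The colored-dot idea can be sketched roughly as follows. This is a minimal illustration, not the tool's actual implementation: the function name, labels, and data below are hypothetical. The core trick is that each dataset item becomes a dot, and coloring dots by whether the model's prediction matches the ground-truth label makes clusters of errors stand out visually.

```python
def dot_colors(true_labels, predicted_labels):
    """Assign one color per dataset item: green for a correct
    prediction, red for an error (hypothetical color scheme)."""
    return ["green" if t == p else "red"
            for t, p in zip(true_labels, predicted_labels)]

# Hypothetical example: five items, the model gets two wrong.
true_labels      = ["cat", "dog", "cat", "car", "dog"]
predicted_labels = ["cat", "cat", "cat", "car", "car"]

colors = dot_colors(true_labels, predicted_labels)
# colors == ["green", "red", "green", "green", "red"]
```

In a real visualization, each item would also get 2-D coordinates (for example from a dimensionality-reduction method such as PCA or t-SNE over the network's features) and the colors would be passed to a scatter plot, so red dots that cluster together reveal a systematic bias or labeling problem rather than isolated mistakes.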