Workers Wary of Emotion AI's Privacy Risks and Biased Decisions
- Emotion AI aims to detect emotions through biological signals and computer use, but its scientific validity is questionable and it can enable privacy invasion and bias.
- Workers fear emotion AI could harm their well-being, privacy, and work performance, and could lead to stigma and unjust decisions, with marginalized groups most at risk.
- Despite claimed benefits such as supporting well-being, many workers expect no personal benefit and worry about privacy, incorrect inferences, and job loss.
- Power imbalances may be exacerbated as employers rely on potentially inaccurate emotion readings to make decisions about workers.
- Workers may refuse to work under emotion AI or expend effort masking their emotions, adding emotional labor and distracting them from their actual jobs.