Hannah Zeavin on Empathy

02 Oct, 2021

Is empathy the wrong goal for computational models? BCNM faculty Hannah Zeavin asks whether the machines we build, code, and design can be, or even should be, empathetic. Hannah contributed a guest post to the AI Now Institute's Medium publication for their "AI Lexicon" project, a call for contributions that generate alternative narratives, positionalities, and understandings to the better-known and widely circulated ways of talking about AI.

From the article:

In the contemporary world of AI and machine learning, empathy is too frequently treated as a blanket solution for what ails technology. The logic goes something like: if we can understand the user’s experience more deeply, we can code a semblance of good relations. This drives development in robotics, virtual agents, and chatbots to replace humans for care, companionship, and other forms of automated (and frequently feminized) labor. While Ovetta Sampson of Microsoft Research writes that many design conversations “start with empathy,” she argues that empathy is too frequently misdefined and therefore misdeveloped in technologies, resulting in the ubiquitous spread of “false empathy.”

Read the entire article here!