
Revisited: "Machine Generated Culpability"

16 Feb, 2016


Kate Mattingly (TDPS) is the 2015-2016 History and Theory of New Media liaison.

Below she recaps the 2/11/2016 talk by Ahmed Ghappour on Machine Generated Culpability:

Ahmed Ghappour on Cybersecurity and Imminent Threat

“I think it’s chilling and problematic and we’re all doomed.” This was Ahmed Ghappour’s closing statement on Thursday, February 11, as he wrapped up a lively and multifaceted question-and-answer period.

His talk, entitled “Machine Generated Culpability,” was part of the History and Theory of New Media lecture series and examined how algorithms are used to determine threats, as well as the legal ramifications of such automated decision-making. His examples ranged from the use of drones and the prosecution of child pornography offenders to a bot made by the Swiss art group !Mediengruppe Bitnik that bought drugs, among other things, online using bitcoin.

The underlying goal, in Ghappour’s words, was “to provoke a way to think about the design and use of predictive inferences that looks past traditional benchmarking metrics, towards socio-technical conceptions such as cognitive opacity and human agency that should drive the extent to which we use machines to generate ‘culpability’ within the bounds of the law.” One reason automated analytics have become so significant is that they can process massive amounts of information. As Ghappour said, “We generate so much data it’s impossible for one person to review it,” yet our reliance on these systems raises questions about the legality and significance of the resulting decisions. Today, according to Ghappour, “most data-mining occurs in a legal vacuum.”

To illustrate how machine-generated culpability works, and how it raises constitutional issues under the Fourth Amendment, Ghappour drew a comparison with police dog sniffs. Both involve a kind of “if/then” command, but machine-generated culpability adds the problem of “whyness.” In Ghappour’s model, machine-learning algorithms cannot be sorted into the two simple categories of “algorithm can be used” and “algorithm cannot be used”; a third category is needed, one he defined by the dilemma that “even if you have access to the source code, you still can’t explain how culpability is happening.” Taking this idea one step further, Ghappour recommended that the engineering sector insist on building controls into software that can explain “whyness.” Such a measure would help avoid the racist, sexist, and invasive ways in which algorithms have been employed, and would shift the focus toward ensuring that the right institutions are making decisions about the use of machine-generated culpability.
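To make the “whyness” distinction concrete, here is a minimal, hypothetical Python sketch; it is our illustration, not code from the talk, and the names, thresholds, and signals are invented. The point is only the contrast between a bare if/then determination and one that records a reviewable rationale.

```python
from dataclasses import dataclass, field

# Hypothetical threshold, chosen only for illustration.
SCORE_THRESHOLD = 0.9


@dataclass
class Decision:
    """An automated determination plus the rationale behind it."""
    flagged: bool
    reasons: list[str] = field(default_factory=list)  # the "whyness"


def bare_if_then(threat_score: float) -> bool:
    # Dog-sniff style: a yes/no answer with no explanation attached.
    return threat_score > SCORE_THRESHOLD


def explainable_decision(threat_score: float, signals: dict[str, float]) -> Decision:
    # Same threshold, but each contributing signal is recorded so a reviewer
    # can later ask *why* the system flagged someone.
    flagged = threat_score > SCORE_THRESHOLD
    reasons = [f"{name} contributed {weight:.2f}" for name, weight in signals.items()]
    return Decision(flagged=flagged, reasons=reasons)


if __name__ == "__main__":
    print(bare_if_then(0.95))  # True, but unreviewable
    print(explainable_decision(0.95, {"purchase_pattern": 0.60, "network_activity": 0.35}))
```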

During the question-and-answer session, as graduate students and faculty members asked Ghappour about his research, he said the issue really revolves around due process and the question, “Does due process exist in a fully automated system?” Reflecting on this for a moment, Ghappour, who received his J.D. from NYU, admitted, “I like criminal law because the threshold for due process is at its maximum.”

Check out photos from our event below:

HTNM 16 — Ahmed Ghappour, Machine Generated Culpability