Last week Ahmed Ghappour gave a talk on the complex legal and moral questions that the digital revolution has brought into our society and to which the law has been slow to respond. Ghappour explored these issues in the context of cybersecurity and national security enforcement, where automated decision-making and autonomous operations may be necessary for effective threat-incident prevention.
Kate Mattingly (TDPS) is the 2015-2016 History and Theory of New Media liaison.
Below she recaps the 2/11/2016 talk with Ahmed Ghappour on Machine Generated Culpability:
Ahmed Ghappour on Cybersecurity and Imminent Threat
“I think it’s chilling and problematic and we’re all doomed.” This was the last statement by Ahmed Ghappour on Thursday, February 11 as he concluded a lively and multifaceted question-and-answer period.
His talk, entitled “Machine Generated Culpability,” was part of the History and Theory of New Media lecture series and examined how algorithms are used to determine threats, as well as the legal ramifications of such automated decision-making. His examples ranged from drone targeting to perpetrators of child pornography to a bot made by the Swiss art group !Mediengruppe Bitnik that bought drugs, among other things, online using bitcoin.
The underlying goal, in Ghappour’s words, was “to provoke a way to think about the design and use of predictive inferences that looks past traditional benchmarking metrics, towards socio-technical conceptions such as cognitive opacity and human agency that should drive the extent to which we use machines to generate ‘culpability’ within the bounds of the law.” One reason automated analytics have become so significant is that they offer the potential to process massive amounts of information. As Ghappour said, “We generate so much data it’s impossible for one person to review it,” yet our reliance on these systems raises questions about the legality and significance of their decision-making processes. Today, according to Ghappour, “most data-mining occurs in a legal vacuum.”
To illustrate how machine-generated culpability works, and how it implicates constitutional issues under the Fourth Amendment, Ghappour drew a comparison with police dogs, or “dog sniffing.” Both involve a kind of “if/then” command, but machine-generated culpability adds the issue of “whyness.” In Ghappour’s model, the use of machine-learning algorithms cannot be divided into two simple categories of “algorithm can be used” and “algorithm cannot be used,” but must include a third. Ghappour defined that third category by the dilemma, “Even if you have access to the source code, you still can’t explain how culpability is happening.” Taking this idea one step further, Ghappour recommended that the engineering sector insist on building controls into software that can explain “whyness.” Such a measure would help avoid the racist, sexist, and invasive ways in which algorithms have been employed and shift the focus toward ensuring that the right institutions make decisions about the use of machine-generated culpability.
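The “whyness” distinction can be made concrete with a toy sketch. The following is not from the talk and all names and thresholds in it are hypothetical; it simply contrasts an opaque model that emits a bare verdict with an “if/then” rule set that records a human-readable reason every time a rule fires:

```python
# Illustrative sketch (not from the talk): an opaque score versus a
# classifier that carries its own "whyness" -- a stated reason attached
# to every decision. All feature names and thresholds are hypothetical.

def opaque_flag(features: dict) -> bool:
    """An opaque model: returns a verdict with no explanation."""
    score = (0.7 * features.get("anomalous_logins", 0)
             + 0.3 * features.get("bulk_downloads", 0))
    return score > 0.5

def explainable_flag(features: dict) -> tuple[bool, list[str]]:
    """An 'if/then' rule set that records why each rule fired."""
    reasons = []
    if features.get("anomalous_logins", 0) > 0.6:
        reasons.append("login pattern deviates sharply from baseline")
    if features.get("bulk_downloads", 0) > 0.8:
        reasons.append("unusually large volume of data downloaded")
    return (len(reasons) > 0, reasons)

if __name__ == "__main__":
    case = {"anomalous_logins": 0.9, "bulk_downloads": 0.2}
    print(opaque_flag(case))              # a bare verdict, no "whyness"
    flagged, why = explainable_flag(case) # a verdict plus its reasons
    print(flagged, why)
```

Both functions can flag the same case, but only the second can answer the question Ghappour poses: it shows which rule fired and why, which is exactly the control he suggests engineers build into software.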
During the question-and-answer session, as graduate students and faculty members asked Ghappour about his research, he said the issue really revolves around due process and the question, “does due process exist in a fully automated system?” Reflecting on this for a moment, Ghappour, who received his J.D. from NYU, admitted, “I like criminal law because the threshold for due process is at its maximum.”
What happens when an algorithm is capable of identifying security targets (e.g. for a drone strike or “cyber operation”) with greater accuracy than human analysts? Does it matter if a human analyst or expert cannot articulate the reasons why the target was chosen? Does it matter what the operational purpose of the targeting is? This talk raises these issues in the context of cybersecurity and national security enforcement, where automated decision-making and autonomous operations may be necessary for effective threat-incident prevention. The goal is to provoke a way to think about the design and use of predictive inferences that looks past traditional benchmarking metrics, towards socio-technical conceptions such as cognitive opacity and human agency that should drive the extent to which we use machines to generate “culpability” within the bounds of the law.
Ahmed Ghappour is an acclaimed law professor at UC Hastings whose research focuses on emerging technologies and national security, with an emphasis on the role of cyberspace as a battleground. He directs the Liberty, Security, and Technology Clinic, where he and his students litigate constitutional issues in espionage, counterterrorism, and computer hacking cases.
Ghappour has litigated numerous high-profile cases, most recently representing whistleblower Chelsea Manning, Ross Ulbricht (alleged mastermind of the Silk Road), and journalist Barrett Brown (alleged spokesperson for the hacktivist collective “Anonymous”). In United States v. Moalin, Ghappour lodged the first challenge to NSA collection of telephony metadata in the context of a criminal case. He has also represented Guantanamo Bay detainees and challenged the US Extraordinary Rendition Program. He is a member of the National Security Committee of the National Association of Criminal Defense Lawyers.
In a former life, Ghappour was a diagnostics and system verification engineer at Silicon Graphics, where he wrote programs to discover vulnerabilities in high-performance computer systems.
The History and Theory of New Media Lecture Series brings to campus leading humanities scholars working on issues of media transition and technological emergence. The series promotes new, interdisciplinary approaches to questions about the uses, meanings, causes, and effects of rapid or dramatic shifts in techno-infrastructure, information management, and forms of mediated expression. Presented by the Berkeley Center for New Media, these events are free and open to the public. For more information, visit: http://htnm-berkeley.com/