Summer Research Dispatch: Brian Bartz in Trevor Paglen's Studio

06 Sep, 2019

Each year, the Berkeley Center for New Media is thrilled to offer summer research awards to support our graduates in their cutting-edge work. Below, Brian Bartz describes how he used the funds to train a custom computer vision algorithm in alum Trevor Paglen's studio.

Through the generous funding provided by this summer research grant, I was able to spend July in Berlin working with BCNM alumnus Trevor Paglen and his studio production team on a forthcoming interactive installation artwork. In recent months, Trevor’s studio has been preparing for a number of shows surrounding the ImageNet dataset. This dataset consists of some 14 million photographs and is often used to train the deep neural networks behind common computer-vision tasks, such as recognizing objects in images or performing facial recognition.

Despite its academic roots in Stanford and Princeton’s computer science departments, the ImageNet dataset encodes extreme bias and often produces blatant miscategorizations of people and objects in ways that can prove harmful in real-world applications. Given the increasing proliferation of computer-vision technologies throughout military and state apparatuses, such miscategorizations can entail increased violence, especially when turned onto vulnerable and marginalized populations as a means by which to police and control them.

In Trevor’s studio, we focused on the 1,600 or so categories that ImageNet provides for “types of people.” These run the gamut from everyday and banal to rude or blatantly offensive. By putting these categories directly on display, Trevor hopes to “demonstrate how various kinds of politics propagate through technical systems,” ultimately highlighting the biases and politics that saturate the production of allegedly value-neutral technologies.

Working with Leif Ryge, Trevor’s full-time computer programmer, we trained a custom computer-vision algorithm that looks at an image and categorizes any people in it according to ImageNet’s “types of people” categories. The rest of my time in the studio was spent troubleshooting potential hardware options and writing Python code to run the algorithm in a museum setting, where viewers could interactively see how the ImageNet dataset would perceive them.
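The core of such an interactive display is straightforward: the trained model emits a raw score for each "type of people" category, and the installation surfaces the highest-ranked labels to the viewer. The sketch below illustrates that final ranking step only, in plain Python; the four labels and the example scores are hypothetical stand-ins (the actual installation drew on roughly 1,600 ImageNet categories and a trained neural network), not code from Trevor's studio.

```python
import math

# Hypothetical subset of ImageNet's "types of people" labels;
# the real installation used roughly 1,600 such categories.
PERSON_LABELS = ["ballet dancer", "economist", "flutist", "newscaster"]

def softmax(logits):
    """Convert raw classifier scores into probabilities that sum to 1."""
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def top_categories(logits, labels, k=3):
    """Return the k most probable (label, probability) pairs."""
    probs = softmax(logits)
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Example: raw scores a model might emit for one detected person.
scores = [2.1, 0.3, 1.4, -0.5]
for label, prob in top_categories(scores, PERSON_LABELS):
    print(f"{label}: {prob:.2f}")
```

In a gallery deployment, a loop like this would run on each camera frame, with the model's per-category scores replacing the hand-written example values.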

This experience proved immensely useful and informative to my own art practice; not only did it provide me with many of the technical skills needed to implement my own computer vision algorithms in an art context, but it also gave me insight into how such technology is handled in a high-caliber art studio with serious deadlines and constraints to contend with. Through the many rich conversations I had with the incredible people working in Trevor’s studio, I was able to expand my understanding of the lower-level mechanics of these types of algorithms, and the ways in which institutional bias can reproduce itself through them.

I was also afforded the time to visit many of Berlin’s art institutions and galleries, allowing me to see the work of many artists I have long admired. Because Berlin is a hub for artists contending with the politics of technology and surveillance, my time there proved invaluable as a source of inspiration, research, and new technical skills, which I plan to put to good use this coming year.