Trevor Paglen’s ImageNet Roulette
Alum Trevor Paglen, along with Kate Crawford, created the incredible art piece ImageNet Roulette.
ImageNet Roulette uses the open-source Caffe deep-learning framework (developed at UC Berkeley), trained on the images and labels in the “person” categories (which are currently “down for maintenance”) of the ImageNet dataset, which is typically used for object recognition. Proper nouns were removed from the labels.
When a user uploads a picture, the application first runs a face detector to locate any faces. If it finds any, it sends them to the Caffe model for classification. The application then returns the original image with a bounding box around each detected face and the label the classifier has assigned to it. If no faces are detected, the application sends the entire scene to the Caffe model and returns the image with a label in the upper-left corner.
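A minimal sketch of that detect-then-classify flow, assuming OpenCV in place of the app’s actual stack: the face detector here is a stock Haar cascade, the Caffe model is loaded through OpenCV’s dnn module rather than pycaffe, and the model files and label list (deploy.prototxt, person_categories.caffemodel, LABELS) are hypothetical stand-ins for ImageNet Roulette’s real ones.

```python
# Sketch of the ImageNet Roulette pipeline described above (assumptions noted below).
import cv2
import numpy as np

# Hypothetical label set: one entry per output class of the classifier.
LABELS = ["label_0", "label_1"]

# Stock OpenCV face detector; the real app's detector may differ.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Hypothetical Caffe model files, loaded via OpenCV's dnn module.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "person_categories.caffemodel")

def classify(bgr_crop):
    """Run one image region through the Caffe model; return its top label."""
    blob = cv2.dnn.blobFromImage(cv2.resize(bgr_crop, (224, 224)))
    net.setInput(blob)
    scores = net.forward().flatten()
    return LABELS[int(np.argmax(scores))]

def annotate(image_path):
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Faces found: classify each crop, draw its box and label.
        for (x, y, w, h) in faces:
            label = classify(img[y:y + h, x:x + w])
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(img, label, (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    else:
        # No faces: classify the whole scene, label in the upper-left corner.
        cv2.putText(img, classify(img), (10, 25),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return img
```

Note that nothing in this sketch writes the uploaded image to disk, consistent with the no-storage policy below.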
As we have shown, ImageNet contains a number of problematic, offensive, and bizarre categories. Hence, the results ImageNet Roulette returns often draw on those categories. That is by design: we want to shed light on what happens when technical systems are trained on problematic data. AI classifications of people are rarely made visible to the people being classified. ImageNet Roulette provides a glimpse into that process and shows how things can go wrong.
ImageNet Roulette does not store the photos people upload.
This piece took the internet by storm, with essays in the New York Times, Wired, BBC, Vice, Vice again (!), The Guardian, Artnet, Hyperallergic, the Art Newspaper, Document Journal, Fast Company, Metro, Frieze, Frieze again (!), The Verge, Lifehacker, Business Insider, Jezebel, and Fortune!
From the NYT article:
But for Mr. Paglen, a larger issue looms. The fundamental truth is that A.I. learns from humans — and humans are biased creatures. “The way we classify images is a product of our worldview,” he said. “Any kind of classification system is always going to reflect the values of the person doing the classifying.”