BCNM at UIST 2021

14 Nov, 2021

UIST (ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces. Every year, UIST brings together researchers and practitioners from diverse areas that include traditional graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, and CSCW. The intimate size, the single track, and the comfortable surroundings make this symposium an ideal opportunity to exchange research results and implementation experiences.

Our very own Eric Paulos, Björn Hartmann, Jingyi Li, and Molly Nicholas presented their latest research in this year's program. Check out their work below!

Eric Paulos
Adroid: Augmenting Hands-on Making with a Collaborative Robot
Rundong Tian, Eric Paulos
Adroid enables users to borrow precision and accuracy from a robotic arm when using hand-held tools. When a tool is mounted to the robot, the user can hold and move the tool directly—Adroid measures the user’s applied forces and commands the robot to move in response. Depending on the tool and scenario, Adroid can selectively restrict certain motions. In the resulting interaction, the robot acts like a virtual “jig” which constrains the tool’s motion, augmenting the user’s accuracy, technique, and strength, while not diminishing their agency during open-ended fabrication tasks. We complement these hands-on interactions with projected augmented reality for visual feedback about the state of the system. We show how tools augmented by Adroid can support hands-on making and discuss how it can be configured to support other tasks within and beyond fabrication.
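
The paper itself doesn't ship code, but the core interaction loop is easy to picture: measure the forces the user applies to the mounted tool, mask out the constrained axes, and command the robot along whatever remains. Below is a minimal admittance-control sketch of that "virtual jig" idea; the VirtualJig class, the axis mask, and the gain value are illustrative assumptions, not Adroid's actual implementation.

```python
import numpy as np

# Minimal admittance-control sketch of the "virtual jig" idea.
# The robot and force/torque-sensor APIs are hypothetical stand-ins;
# nothing here is taken from the Adroid codebase.

class VirtualJig:
    def __init__(self, allowed_axes, gain=0.002):
        # allowed_axes: 6-vector mask over (x, y, z, rx, ry, rz);
        # 1 = motion permitted, 0 = motion constrained by the jig.
        self.mask = np.asarray(allowed_axes, dtype=float)
        self.gain = gain  # maps applied force to commanded velocity

    def step(self, wrench):
        # wrench: measured 6-vector of forces/torques the user applies
        # to the mounted tool. Projecting onto the permitted axes means
        # the robot only yields along unconstrained directions.
        velocity = self.gain * self.mask * np.asarray(wrench, dtype=float)
        return velocity  # commanded Cartesian velocity for the arm

# Example: a drilling-style jig that permits plunging along z only.
jig = VirtualJig(allowed_axes=[0, 0, 1, 0, 0, 0])
user_wrench = [3.0, -1.5, 8.0, 0.0, 0.1, 0.0]
print(jig.step(user_wrench))  # -> motion only along the tool's z axis
```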

Björn Hartmann
Weaving Schematics and Code: Interactive Visual Editing for Hardware Description Languages
Richard Lin, Rohit Ramesh, Nikhil Jain, Josephine Koe, Ryan Nuqui, Prabal Dutta, Björn Hartmann
In many engineering disciplines such as circuit board, chip, and mechanical design, a hardware description language (HDL) approach provides important benefits over direct manipulation interfaces by supporting concepts like abstraction and generator meta-programming. While several such HDLs have emerged recently and promised power and flexibility, they also present challenges – especially to designers familiar with current graphical workflows. In this work, we investigate an IDE approach to provide a graphical editor for a board-level circuit design HDL. Unlike GUI builders which convert an entire diagram to code, we instead propose generating equivalent HDL from individual graphical edit actions. By keeping code as the primary design input, we preserve the full power of the underlying HDL, while remaining useful even to advanced users. We discuss our concept, design considerations such as performance, system implementation, and report on the results of an exploratory remote user study with four experienced hardware designers.
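
To make the "graphical edit actions generate equivalent HDL" idea concrete, here is a toy sketch of translating one edit action (connecting two ports in the schematic view) into a line of Python-embedded HDL. ConnectAction, emit_hdl, and the generated connect() call are hypothetical names for exposition; they are not the system's actual API.

```python
from dataclasses import dataclass

# Sketch of the paper's core idea: each graphical edit action is
# translated into an equivalent textual HDL edit, so code remains
# the primary design artifact.

@dataclass
class ConnectAction:
    source_block: str   # e.g. "mcu"
    source_port: str    # e.g. "spi"
    target_block: str   # e.g. "flash"
    target_port: str    # e.g. "spi"

def emit_hdl(action: ConnectAction) -> str:
    # Render the edit as a statement in a hypothetical
    # Python-embedded board-level HDL.
    return (f"self.connect(self.{action.source_block}.{action.source_port}, "
            f"self.{action.target_block}.{action.target_port})")

# Dragging a wire between two ports in the schematic view would
# insert this statement into the design's code body:
print(emit_hdl(ConnectAction("mcu", "spi", "flash", "spi")))
# -> self.connect(self.mcu.spi, self.flash.spi)
```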

Jingyi Li
Automated Accessory Rigs for Layered 2D Character Illustrations
Jingyi Li, Wilmot Li, Sean Follmer, Maneesh Agrawala
Mix-and-match character creation tools enable users to quickly produce 2D character illustrations by combining various predefined accessories, like clothes and hairstyles, which are represented as separate, interchangeable artwork layers. However, these accessory layers are often designed to fit only the default body artwork, so users cannot modify the body without manually updating all the accessory layers as well. To address this issue, we present a method that captures and preserves important relationships between artwork layers so that the predefined accessories adapt with the character’s body. We encode these relationships with four types of constraints that handle common interactions between layers: (1) occlusion, (2) attachment at a point, (3) coincident boundaries, and (4) overlapping regions. A rig is a set of constraints that allow a motion or deformation specified on the body to transfer to the accessory layers. We present an automated algorithm for generating such a rig for each accessory layer, but also allow users to select which constraints to apply to specific accessories. We demonstrate how our system supports a variety of modifications to body shape and pose using artwork from mix-and-match data sets.
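
As a rough illustration of the rig representation, the sketch below encodes the four constraint kinds named above and shows how a point-attachment constraint transfers a body deformation to an accessory layer. The class names and the transfer function are assumptions made for exposition, not the authors' code.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Sketch of a rig as a set of typed constraints between the body
# artwork and an accessory layer. The four kinds mirror the paper's
# categories; everything else here is illustrative.

class ConstraintKind(Enum):
    OCCLUSION = auto()            # layer ordering relative to the body
    POINT_ATTACHMENT = auto()     # accessory pinned to a body point
    COINCIDENT_BOUNDARY = auto()  # shared silhouette edges
    OVERLAPPING_REGION = auto()   # regions that must deform together

@dataclass
class Constraint:
    kind: ConstraintKind
    body_anchor: tuple  # (x, y) point on the default body artwork

def transfer_point_attachment(constraint, body_deformation):
    # A rig lets a deformation specified on the body carry over to
    # the accessory: here, body_deformation maps old body points to
    # new positions, and the accessory simply follows its anchor.
    return body_deformation(constraint.body_anchor)

# Example: the shoulder point moves; a bag strap pinned there follows.
strap_pin = Constraint(ConstraintKind.POINT_ATTACHMENT, (120.0, 40.0))
deform = lambda p: (p[0] + 5.0, p[1] - 2.0)  # toy body deformation
print(transfer_point_attachment(strap_pin, deform))  # -> (125.0, 38.0)
```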

Molly Nicholas
AirConstellations: In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures
Nicolai Marquardt, Nathalie Henry Riche, Christian Holz, Hugo Romat, Michel Pahud, Frederik Brudy, David Ledo, Chunjong Park, Molly Jane Nicholas, Teddy Seyed, Eyal Ofek, Bongshin Lee, William A.S. Buxton, Ken Hinckley
AirConstellations supports a unique semi-fixed style of cross-device interactions via multiple self-spatially-aware armatures to which users can easily attach (or detach) tablets and other devices. In particular, AirConstellations affords highly flexible and dynamic device formations where the users can bring multiple devices together in-air — with 2–5 armatures poseable in 7DoF within the same workspace — to suit the demands of their current task, social situation, app scenario, or mobility needs. This affords an interaction metaphor where relative orientation, proximity, attaching (or detaching) devices, and continuous movement into and out of ad-hoc ensembles can drive context-sensitive interactions. Yet all devices remain self-stable in useful configurations even when released in mid-air.

We explore flexible physical arrangement, feedforward of transition options, and layering of devices in-air across a variety of multi-device app scenarios. These include video conferencing with flexible arrangement of the person-space of multiple remote participants around a shared task-space, layered and tiled device formations with overview+detail and shared-to-personal transitions, and flexible composition of UI panels and tool palettes across devices for productivity applications. A preliminary interview study highlights user reactions to AirConstellations, such as for minimally disruptive device formations, easier physical transitions, and balancing “seeing and being seen” in remote work.
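
One way to picture the spatial sensing behind these interactions: each armature reports a pose, and devices whose armatures come close together can be grouped into an ad-hoc ensemble that triggers context-sensitive behavior. The sketch below assumes a simple 7DoF pose format and a proximity threshold; neither detail comes from the paper.

```python
import math

# Sketch of proximity-driven ensemble detection across
# spatially-aware armatures. Pose format, threshold, and the
# greedy grouping policy are all assumptions for illustration.

def distance(pose_a, pose_b):
    # Each pose is a 7DoF tuple: (x, y, z, qw, qx, qy, qz).
    # Only the position matters for a proximity check.
    return math.dist(pose_a[:3], pose_b[:3])

def detect_ensembles(poses, join_radius=0.25):
    # Greedily group devices whose armatures sit within join_radius
    # (meters) of an existing group member into ad-hoc ensembles.
    ensembles = []
    for device, pose in poses.items():
        for group in ensembles:
            if any(distance(pose, poses[d]) < join_radius for d in group):
                group.append(device)
                break
        else:
            ensembles.append([device])
    return ensembles

poses = {
    "tablet_a": (0.00, 0.0, 0.3, 1, 0, 0, 0),
    "tablet_b": (0.10, 0.0, 0.3, 1, 0, 0, 0),  # near tablet_a: joins it
    "tablet_c": (1.50, 0.5, 0.3, 1, 0, 0, 0),  # far away: its own group
}
print(detect_ensembles(poses))  # -> [['tablet_a', 'tablet_b'], ['tablet_c']]
```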

Learn more about UIST here!