News/Research

BCNM at CHI 2023

19 Apr, 2023

Check out the amazing work from BCNM faculty, students, and alumni at the 2023 ACM CHI Conference on Human Factors in Computing Systems!

Faculty

Why Johnny Can’t Prompt: How Non-AI Experts Try (and Fail) to Design LLM Prompts

J.D. Zamfirescu-Pereira, Richmond Y. Wong, Bjoern Hartmann, Qian Yang

Pre-trained large language models ("LLMs") like GPT-3 can engage in fluent, multi-turn instruction-taking out-of-the-box, making them attractive materials for designing natural language interactions. Using natural language to steer LLM outputs ("prompting") has emerged as an important design technique potentially accessible to non-AI-experts. Crafting effective prompts can be challenging, however, and prompt-based interactions are brittle. Here, we explore whether non-AI-experts can successfully engage in "end-user prompt engineering" using a design probe—a prototype LLM-based chatbot design tool supporting development and systematic evaluation of prompting strategies. Ultimately, our probe participants explored prompt designs opportunistically, not systematically, and struggled in ways echoing end-user programming systems and interactive machine learning systems. Expectations stemming from human-to-human instructional experiences, and a tendency to overgeneralize, were barriers to effective prompt design. These findings have implications for non-AI-expert-facing LLM-based tool design and for improving LLM-and-prompt literacy among programmers and the public, and present opportunities for further research.

https://programs.sigchi.org/chi/2023/index/content/96495

Kaleidoscope: A Reflective Documentation Tool for a User Interface Design Course

Sarah Sterman, Molly Jane Nicholas, Janaki Vivrekar, Jessica R Mindel, Eric Paulos

Documentation can support design work and create opportunities for learning and reflection. We explore how a novel documentation tool for a remote interaction design course provides insight into design process and integrates strategies from expert practice to support studio-style collaboration and reflection. Using Research through Design, we develop and deploy Kaleidoscope, an online tool for documenting design process, in an upper-level HCI class during the COVID-19 pandemic, iteratively developing it in response to student feedback and needs. We discuss key themes from the real-world deployment of Kaleidoscope, including: tensions between documentation and creation; effects of centralizing discussion; privacy and visibility in shared spaces; balancing evidence of achievement with feelings of overwhelm; and the effects of initial perceptions and incentives on tool usage. These successes and challenges provide insights to guide future tools for design documentation and HCI education that scaffold learning process as an equal partner to execution.

https://programs.sigchi.org/chi/2023/index/content/96411

Students

Expressiveness, Cost, and Collectivism: How the Design of Preference Languages Shapes Participation in Algorithmic Decision-Making

Samantha Robertson, Tonya Nguyen, Cathy Hu, Catherine Albiston, Afshin Nikzad, Niloufar Salehi

Emerging methods for participatory algorithm design have proposed collecting and aggregating individual stakeholders’ preferences to create algorithmic systems that account for those stakeholders’ values. Drawing on two years of research across two public school districts in the United States, we study how families and school districts use students’ preferences for schools to meet their goals in the context of algorithmic student assignment systems. We find that the design of the preference language, i.e. the structure in which participants must express their needs and goals to the decision-maker, shapes the opportunities for meaningful participation. We define three properties of preference languages – expressiveness, cost, and collectivism – and discuss how these factors shape who is able to participate, and the extent to which they are able to effectively communicate their needs to the decision-maker. Reflecting on these findings, we offer implications and paths forward for researchers and practitioners who are considering applying a preference-based model for participation in algorithmic decision making.

https://programs.sigchi.org/chi/2023/index/content/96400

Understanding Version Control as Material Interaction with Quickpose

Eric Rawn, Jingyi Li, Eric Paulos, Sarah E. Chasins

Whether a programmer with code or a potter with clay, practitioners engage in an ongoing process of working and reasoning with materials. Existing discussions in HCI have provided rich accounts of these practices and processes, which we synthesize into three themes: (1) reciprocal discovery of goals and materials, (2) local knowledge of materials, and (3) annotation for holistic interpretation. We then apply these design principles generatively to the domain of version control to present Quickpose: a version control system for creative coding. In an in-situ, longitudinal study of Quickpose guided by our themes, we collected usage data, version history, and interviews. Our study explored our participants’ material interaction behaviors and the initial promise of our proposed measures for recognizing these behaviors. Quickpose is an exploration of version control as material interaction, using existing discussions to inform domain-specific concepts, measures, and designs for version control systems.

https://programs.sigchi.org/chi/2023/index/content/95934

Lotio: Lotion-Mediated Interaction with an Electronic Skin-Worn Display

Katherine W Song, Christine Dierk, Szu Ting Tung, Eric Paulos

Skin-based electronics are an emerging genre of interactive technologies. In this paper, we leverage the natural uses of lotions and propose them as mediators for driving novel, low-power, quasi-bistable, and bio-degradable electrochromic displays on the skin and other surfaces. We detail the design, fabrication, and evaluation of one such "Lotion Interface," including how it can be customized using low-cost everyday materials and technologies to trigger various visual and temporal effects – some lasting up to fifteen minutes when unpowered. We characterize different fabrication techniques and lotions to demonstrate various visual effects on a variety of skin types and tones. We highlight the safety of our design for humans and the environment. Finally, we report findings from an exploratory user study and present a range of compelling applications for Lotion Interfaces that expand the on-skin and surface interaction landscapes to include the familiar and often habitual practice of applying lotion.

https://programs.sigchi.org/chi/2023/index/content/96585

Vɪᴍ: Customizable, Decomposable Electrical Energy Storage

Katherine W Song, Eric Paulos

Providing electrical power is essential for nearly all interactive technologies, yet it often remains an afterthought. Some designs handwave power altogether as an "exercise for later." Others hastily string together batteries to meet the system's electrical requirements, enclosing them in whatever box fits. Vɪᴍ is a new approach -- it elevates power as a first-class design element; it frees power from being a series of discrete elements, instead catering to exact requirements; it enables power to take on new, flexible forms; it is fabricated using low-cost, accessible materials and technologies; finally, it advances sustainability by being rechargeable, non-toxic, edible, and compostable. Vɪᴍs are decomposable battery alternatives that rapidly charge and can power small applications for hours. We present Vɪᴍs, detail their characteristics, offer design guidelines for their fabrication, and explore their use in applications spanning prototyping, fashion, and food, including novel systems that are entirely decomposable and edible.

https://programs.sigchi.org/chi/2023/index/content/96609

Alumni

Closer Worlds: Using Generative AI to Facilitate Intimate Conversations

Tiffany Chen, Cassandra Lee, Jessica R Mindel, Neska ElHaouij, Rosalind Picard

Deep emotional intimacy is a foundational aspect of strong relationships. One strategy technologists use to mediate connection is creating games, which offer fun, emotionally-rich spaces for dynamic social interactions. Other researchers have leveraged advancements in generative AI to explore social communication. In this paper we explore how a game using text-to-image models might induce emotionally intimate conversations. We design and test Closer Worlds, an ML-assisted 2-person experience which asks personal questions and creates images in a playful world-building scenario. We explore design principles inspired by facilitation research and assess their effectiveness in a pilot study with 24 participants. We conclude that Closer Worlds elicits self-disclosure behavior, but less than a similar game without the use of generative AI. However, participants enjoy the experience and potential to visually represent shared values. We conclude by discussing future ways to use generative techniques and games to foster circumstances for emotional conversations to emerge.

https://programs.sigchi.org/chi/2023/index/content/98836

AdaCAD: Parametric Design as a New Form of Notation for Complex Weaving

Laura Devendorf, Kathryn Walters, Marianne Fairbanks, Etta Sandry, Emma R Goodwill

Woven textiles are increasingly a medium through which HCI is inventing new technologies. Key challenges in integrating woven textiles in HCI include the high level of textile knowledge required to make effective use of the new possibilities they afford and the need for tools that bridge the concerns of textile designers and concerns of HCI researchers. This paper presents AdaCAD, a parametric design tool for designing woven textile structures. Through our design and evaluation of AdaCAD we found that parametric design helps weavers notate and explain the logics behind the complex structures they generate. We discuss these findings in relation to prior work in integrating craft and/or weaving in HCI, histories of woven notation, and boundary object theory to illuminate further possibilities for collaboration between craftspeople and HCI practitioners.

https://programs.sigchi.org/chi/2023/index/content/95711

Crafting Interactive Circuits on Glazed Ceramic Ware

Clement Zheng, Bo Han, Xin Liu, Laura Devendorf, Hans Tan, Ching Chiuan Yen

Glazed ceramic is a versatile material that we use every day. In this paper, we present a new approach that instruments existing glazed ceramic ware with interactive electronic circuits. We informed this work by collaborating with a ceramics designer and connected his craft practice to our experience in physical computing. From this partnership, we developed a systematic approach that begins with the subtractive fabrication of traces on glazed ceramic surfaces via the resist-blasting technique, followed by applying conductive ink into the inlaid traces. We capture and detail this approach through an annotated flowchart for others to refer to, as well as externalize the material insights we uncovered through ceramic and circuit swatches. We then demonstrate a range of interactive home applications built with this approach. Finally, we reflect on the process we took and discuss the importance of collaborating with craftspeople for material-driven research within HCI.

https://programs.sigchi.org/chi/2023/index/content/96570

Just Do Something: Comparing Self-proposed and Machine-recommended Stress Interventions among Online Workers with Home Sweet Office

Xin Tong, Matthew Louis Mauriello, Marco Antonio Mora-Mendoza, Nina Prabhu, Jane Paik Kim, Pablo E Paredes Castro

Modern stress management techniques have been shown to be effective, particularly when applied systematically and with the supervision of an instructor. However, online workers usually lack sufficient support from therapists and learning resources to self-manage their stress. To better assist these users, we implemented a browser-based application, Home Sweet Office (HSO), to administer a set of stress micro-interventions which mimic existing therapeutic techniques, including somatic, positive psychology, metacognitive, and cognitive behavioral categories. In a four-week field study, we compared random and machine-recommended interventions to interventions that were self-proposed by participants in order to investigate effective content and recommendation methods. Our primary findings suggest that both machine-recommended and self-proposed interventions had significantly higher momentary efficacy than random selection, whereas machine-recommended interventions offer more activity diversity compared to self-proposed interventions. We conclude with reflections on these results, discuss features and mechanisms which might improve efficacy, and suggest areas for future work.

https://programs.sigchi.org/chi/2023/index/content/96322

Michelle Carney

Experiencing Rapid Prototyping of Machine Learning Based Multimedia Applications in Rapsai

Ruofei Du, Na Li, Jing Jin, Michelle Carney, Xiuxiu Yuan, Ram Iyengar, Ping Yu, Adarsh Kowdle, Alex Olwal

We demonstrate Rapsai, a visual programming platform that aims to streamline the rapid and iterative development of end-to-end machine learning (ML)-based multimedia applications. Rapsai features a node-graph editor that enables interactive characterization and visualization of ML model performance, which facilitates the understanding of how the model behaves in different scenarios. Moreover, the platform streamlines end-to-end prototyping by providing interactive data augmentation and model comparison capabilities within a no-coding environment. Our demonstration showcases the versatility of Rapsai through several use cases, including virtual background, visual effects with depth estimation, and audio denoising. The implementation of Rapsai is intended to support ML practitioners in streamlining their workflow, making data-driven decisions, and comprehensively evaluating model behavior with real-world input.

https://programs.sigchi.org/chi/2023/index/content/98875

Rapsai: Accelerating Machine Learning Prototyping of Multimedia Applications through Visual Programming

Ruofei Du, Na Li, Jing Jin, Michelle Carney, Scott Miles, Maria Kleiner, Xiuxiu Yuan, Yinda Zhang, Anuva Kulkarni, Xingyu "Bruce" Liu, Ahmed Sabie, Sergio Orts-Escolano, Abhishek Kar, Ping Yu, Ram Iyengar, Adarsh Kowdle, Alex Olwal

In recent years, there has been a proliferation of multimedia applications that leverage machine learning (ML) for interactive experiences. Prototyping ML-based applications is, however, still challenging, given complex workflows that are not ideal for design and experimentation. To better understand these challenges, we conducted a formative study with seven ML practitioners to gather insights about common ML evaluation workflows. This study helped us derive six design goals, which informed Rapsai. Rapsai features a node-graph editor to facilitate interactive characterization and visualization of ML model performance. Rapsai streamlines end-to-end prototyping with interactive data augmentation and model comparison capabilities in its no-coding environment. Our evaluation of Rapsai in four real-world case studies (N=15) suggests that practitioners can accelerate their workflow, make more informed decisions, analyze strengths and weaknesses, and holistically evaluate model behavior with real-world input.

https://programs.sigchi.org/chi/2023/index/content/95967

Escapement: A Tool for Interactive Prototyping with Video via Sensor-Mediated Abstraction of Time

Molly Jane Nicholas, Nicolai Marquardt, Michel Pahud, Nathalie Riche, Hugo Romat, Christopher Collins, David Ledo, Rohan Kadekodi, Badrish Chandramouli, Ken Hinckley

We present Escapement, a video prototyping tool that introduces a powerful new concept for prototyping screen-based interfaces by flexibly mapping sensor values to dynamic playback control of videos. This recasts the time dimension of video mock-ups as sensor-mediated interaction. This abstraction of time as interaction, which we dub video-escapement prototyping, empowers designers to rapidly explore and viscerally experience direct touch or sensor-mediated interactions across one or more device displays. Our system affords cross-device and bidirectional remote (tele-present) experiences via cloud-based state sharing across multiple devices. This makes Escapement especially potent for exploring multi-device, dual-screen, or remote-work interactions for screen-based applications. We introduce the core concept of sensor-mediated abstraction of time for quickly generating video-based interactive prototypes of screen-based applications, share the results of observations of long-term usage of video-escapement techniques with experienced interaction designers, and articulate design choices for supporting a reflective, iterative, and open-ended creative design process.

https://programs.sigchi.org/chi/2023/index/content/96337