News/Research

Seed Grants Recipient: Celeste Kidd and The Role of Reasoning and Metacognition during Belief Formation in the Internet Era

23 Sep, 2020

This year the Berkeley Center for New Media offered two junior faculty research grants to seed ambitious academic scholarship in new media at Cal. Celeste Kidd was selected for “The Role of Reasoning and Metacognition during Belief Formation in the Internet Era.” Read more about the project below!

Overview

We used our award from the BCNM to conduct behavioral experiments and computational modeling work aimed at understanding why people believe things that aren’t true, despite unprecedented access to information online. We drew on methods from cognitive science, anthropology, ecology, and mathematical modeling to formalize and test theories that can explain why people sometimes believe things that they shouldn’t. We have gained new insights into how this potentially destructive tendency might be overcome with the right interventions, and we will be pursuing this direction of research for some time to come with colleagues Sarah Stolp, Jan Engelmann, Carolyn Baer, Adam Conover, and Andy Smart, who have joined us in this work.

Our work under this award produced foundational pilot data that led to the submission of numerous grant applications (including two funded by the Jacobs Foundation and the Hellman Fellows Fund, and two more under review). Results from this project were presented at international academic venues via invited talks at the Cognitive Science Society, MIT Lincoln Laboratory, the Simons Institute for the Theory of Computing, and Neural Information Processing Systems (NeurIPS). The work was also presented as a World AI Summit Americas keynote and a Tech Talk at X in Mountain View, CA.

We have already communicated a subset of findings from this grant via popular science publications (including VentureBeat, MIT Technology Review, Nature, and Fatherly) and podcasts (Factually! with Adam Conover and TWIML AI), and will continue this public communication work as additional manuscripts resulting from this award are published.

Scientific findings

(1) Human concepts vary widely in the population, yet this diversity is unrecognized by language users.

Summary: Cognitive psychology has developed sophisticated theories of conceptual representation and change, but little work has quantified how much these representations vary between individuals. We asked study participants to rate similarities between the meanings denoted by common words, following classic methods in cognitive psychology, and then used a non-parametric clustering scheme and an ecological estimator to infer how many different meanings of the same word are present in the population at large. We find that typically at least ten to twenty variant meanings exist for even common nouns, but that people are unaware of this variation. Instead, people exhibit a bias to erroneously believe that other people share their particular concepts, pointing to one factor that likely interferes with political and social discourse.
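
To make the estimation step concrete, here is a minimal sketch of how one could go from participants' similarity ratings to an estimate of how many concept variants exist in the broader population. The clustering method (average-linkage hierarchical clustering with a distance cutoff) and the ecological estimator (the Chao1 richness estimator) are stand-ins chosen for illustration, not necessarily the specific non-parametric scheme and estimator used in the study, and the data are simulated.

```python
# Sketch: estimate how many distinct variants of a concept exist in a population.
# Assumes each participant supplies a vector of similarity ratings between a
# target word and a fixed set of probe items. Clustering scheme and estimator
# are illustrative stand-ins, not the authors' exact methods.

import numpy as np
from collections import Counter
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_concept_variants(ratings, threshold):
    """Group participants whose similarity-rating profiles are close."""
    # ratings: (n_participants, n_probe_pairs) array of similarity judgments
    distances = pdist(ratings, metric="euclidean")
    tree = linkage(distances, method="average")
    return fcluster(tree, t=threshold, criterion="distance")

def chao1(cluster_labels):
    """Chao1 richness estimator: observed variants plus a correction for
    variants so rare they were likely missed by the sample."""
    counts = Counter(cluster_labels)
    s_obs = len(counts)                             # variants observed in the sample
    f1 = sum(1 for c in counts.values() if c == 1)  # singleton variants
    f2 = sum(1 for c in counts.values() if c == 2)  # doubleton variants
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulate 200 participants drawn from 15 latent concept variants,
    # each variant defined by a prototype profile over 45 probe pairs.
    true_variants = rng.integers(0, 15, size=200)
    prototypes = rng.normal(size=(15, 45))
    ratings = prototypes[true_variants] + rng.normal(scale=0.3, size=(200, 45))

    labels = cluster_concept_variants(ratings, threshold=4.0)
    print("observed clusters:", len(set(labels)))
    print("estimated variants in population:", round(chao1(labels), 1))
```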

This finding was prepared as a manuscript entitled “Latent diversity in human concepts” and submitted for consideration at Nature. The results also formed the basis of talks given at NeurIPS, Google X, and MIT’s Lincoln Lab.

(2) Certainty is determined by feedback.

Summary: People’s access to truth in the world hinges on their willingness to continue collecting evidence. Once a person is certain, they stop collecting further data. Yet little work had specifically investigated how people become certain about high-level concepts. We conducted an empirical experiment and compared competing models of where certainty might come from, demonstrating that certainty is determined primarily by how well people observe themselves performing, not by idealized statistical inferences from the data they observed.
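
As a rough illustration of the contrast being tested, the sketch below simulates a toy concept-learning loop and computes two candidate certainty signals side by side: the confidence an idealized Bayesian observer would assign to its favored rule given the feedback seen so far, and a leaky running average of recent feedback (the past-performance account). The task, the hypothesis space, and all parameters are invented for illustration and are not the experiment or the models reported in the paper.

```python
# Toy concept-learning loop (illustrative only; not the paper's paradigm).
# Compares two certainty signals on each trial:
#   (a) data-warranted confidence: an idealized Bayesian posterior over rules
#   (b) performance-based certainty: a leaky average of recent feedback
import numpy as np

def simulate_learner(n_trials=30, n_rules=30, agree_prob=0.95,
                     label_noise=0.2, decay=0.85, seed=1):
    rng = np.random.default_rng(seed)
    posterior = np.full(n_rules, 1.0 / n_rules)  # uniform prior over candidate rules
    perf = 0.5                                   # running sense of "how am I doing?"
    ideal_conf, perf_conf = [], []

    for _ in range(n_trials):
        # Rule 0 is the true rule; the others copy its prediction most of the time.
        true_pred = rng.integers(0, 2)
        preds = np.where(rng.random(n_rules) < agree_prob, true_pred, 1 - true_pred)
        preds[0] = true_pred
        # Feedback label follows the true rule, with a little noise.
        true_label = true_pred if rng.random() > label_noise else 1 - true_pred

        # Learner answers with its currently most probable rule, then gets feedback.
        answer = preds[np.argmax(posterior)]
        correct = answer == true_label

        # (a) Idealized statistical inference: update beliefs from the feedback.
        likelihood = np.where(preds == true_label, 1 - label_noise, label_noise)
        posterior = posterior * likelihood
        posterior /= posterior.sum()
        ideal_conf.append(posterior.max())

        # (b) Past-performance account: certainty tracks recent success.
        perf = decay * perf + (1 - decay) * float(correct)
        perf_conf.append(perf)

    return np.array(ideal_conf), np.array(perf_conf)

if __name__ == "__main__":
    ideal, perf = simulate_learner()
    print(f"data-warranted confidence in favored rule: {ideal[-1]:.2f}")
    print(f"performance-based certainty:               {perf[-1]:.2f}")
```

In this toy setup the candidate rules mostly agree with one another, so feedback rarely singles out one rule and data-warranted confidence tends to stay modest, even while the learner keeps being told it is doing well; that kind of dissociation is what makes the two accounts empirically distinguishable.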

Our findings were published as a manuscript entitled “Certainty is primarily determined by past performance during concept learning” in Open Mind. Under this award, we worked to extend these ideas to study children’s metacognitive abilities, navigation of information in the world, and belief formation processes. Data collected under this award were used to compose and submit a grant application with colleague Jan Engelmann to the Jacobs Foundation Young Scholars Program, entitled “Investigating the Effects of Uncertainty on Children’s Learning Decisions,” which was selected for funding. That award, along with a Postdoctoral Fellowship from the Social Sciences and Humanities Research Council of Canada, has allowed us to hire Carolyn Baer, an expert on social and cognitive aspects of exploration and learning in development, as a postdoctoral scholar; she will execute and further develop these experimental ideas beginning in January 2021.

These findings have profound implications for technologies that select the order in which information is presented to users. Thus, over the course of our award, these results were communicated to those who work on machine learning and develop these technologies (for example, at NeurIPS, in a Tech Talk at X, and at the World AI Summit Americas).

(3) Humans form beliefs quickly.

Summary: Humans tend to form high-certainty beliefs with surprisingly little data. Funding from this award allowed us to conduct pilot studies that examined just how quickly humans form beliefs about a topic they know little to nothing about. After we presented participants with YouTube videos on an obscure viewpoint, participants quickly moved from being entirely unsure to adopting the belief themselves with high confidence: after just three 2- to 3-minute videos, 90% of participants endorsed the opinion espoused in the videos. Additionally, watching just a few videos that espoused the same opinion in succession led participants to expect that more people in the population shared that opinion. Overall, beliefs shifted rapidly after only a few minutes of video watching. Since our previous work has shown that beliefs, once strongly held, are hard to shake, this means future intervention work will need to propose simple, fast-acting solutions that work before beliefs become entrenched.

These data served as the foundation for multiple grant submissions, and we anticipate they will be integrated into two manuscripts (an empirical paper and a review article) within the next year, as we expand these ideas and collect additional data to understand what factors influence people’s rapid credulity.

(4) Inaccurate beliefs are ubiquitous.

Summary: Most contemporary proposed solutions to combating misinformation on the internet take “vaccine”-inspired approaches, which envision the problem as stemming from a subset of the population that is either anti-science or more vulnerable than others. Contrary to past evidence that inaccurate beliefs tend to cluster among a small group of “conspiracy theorists,” work funded by this award allowed us to discover that inaccurate beliefs are (1) common, (2) likely unavoidable, and (3) spread throughout the population rather than concentrated in a subset of people. These discoveries are crucial for informing our next stage of work, which looks at targeted interventions that correct particular problematic beliefs in the population rather than taking a more general, science-education-at-large approach.

This award allowed us to conduct pilot studies which showed that fringe beliefs are held by millions of people, and that the majority of people hold at least one fringe belief (68% in our sample), demonstrating that although any single belief may be fringe, such beliefs as a group are ubiquitous. Additionally, our preliminary analysis estimates that about 1-3% of people in the U.S. believe that the Earth is flat, which is higher than previous estimates. Our ongoing work, for which we have submitted grant proposals, explores how holding one type of belief predicts holding another. For example, does believing coronavirus myths predict your beliefs about climate change?
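
As a small illustration of how a prevalence range like “1-3%” can arise from survey data, the sketch below computes a Wilson score interval for a binomial proportion. The respondent counts are invented for illustration; they are not the study’s data, and the Wilson interval is simply one standard way to attach uncertainty to an estimate of this kind.

```python
# Sketch: turning a survey count into an interval estimate for how common a
# belief is. The counts below are hypothetical, not the study's data.
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

if __name__ == "__main__":
    # Hypothetical example: 20 of 1,000 respondents endorse a fringe claim.
    low, high = wilson_interval(successes=20, n=1000)
    print(f"estimated prevalence: 2.0% (95% CI {low:.1%} - {high:.1%})")
```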