Debugging Failure

Project Title: Debugging failure: Fostering youth academic resilience in computer science

The Debugging Failure research project has only been possible thanks to a dedicated partnership with 9 Dots, a non-profit organization that supports computer science education in Los Angeles.

Funding: National Science Foundation: Advancing Informal STEM Learning ($620,514, with 9 Dots and UCLA)
Collaborators: 9 Dots Community Learning Center; Noel Enyedy (UCLA), Francis Steen (UCLA), and David DeLiema (UC Berkeley)

Project Overview

Situated within the Advancing Informal STEM Learning: Research in Service to Practice program, this project seeks to design, implement, and evaluate an intervention aimed at fostering a culture of productive failure practices. The project responds to a broad concern in educational research and practice: Experiences of failure are frequently so negative that students shut down (Holt, 1964), lose agency (Weiner, 1985), and develop low self-efficacy (Bandura, 1982) and learned helplessness (Abramson, Seligman, & Teasdale, 1978). Surrendering too quickly to obstacles is particularly unfortunate, given experimental evidence that initially “getting it wrong” ultimately breeds deep and sustained learning (Kapur, 2008).

In order to learn how students can make the most of productive failure, the proposed project will study how a community of practice attempts to change its handling of learning obstacles. When confronted with failure, students and educators construct narratives about the causes of the error and possible resolutions (Graham, 1991). Because failure stories are socially constructed (Ochs et al., 1992), learning interventions can promote reflection on (and ultimately revisions to) the failure storytelling process. Stories about failure are valuable because they drive motivation (Weiner, 1983) and provide actionable roadmaps for planning learning (Lee & Johnston-Wilder, 2013).

Building on prior research documenting storytelling practices at our field site (DeLiema, 2015), our team now aims to embolden young students’ productive practices of failure storytelling in computer science, a field in which experts practice candid, pervasive, and collaborative discourse around errors (“bugs”). Our team of researchers and practitioners will implement cycles of design-based research with three interventions: setting new norms for encountering and interrogating errors and for practicing expert debugging; leading instructor-education workgroups focused on helping instructors notice the structure of failure stories and rehearse discourse-based responses; and building coding software that gives students metadata on their struggles and provides authentic debugging resources.

These interventions will be implemented in more than 15 multi-week workshops with 5th–8th grade students from non-dominant communities. We will measure variations in debugging activities, reflections on debugging, students’ ideas about grit and growth mindset, and instructors’ struggles and successes with the new curriculum. The empirical results will consist of mixed-methods, micro-longitudinal accounts of how a community of practice works to reform its orientation to failure.

Intellectual Merit. Current approaches to handling failure in school promote steadfast effort (Graham, 1991) and grit (Duckworth, Peterson, Matthews, & Kelly, 2007). Our project extends this approach with the recognition that failure is also an informational opportunity, a critical juncture where students can understand how to direct effort moving forward. Pulling together the domains of narrative analysis (Herman, 2009), metacognitive reflection (Lee & Johnston-Wilder, 2012), and control theories of motivation (Crandall et al., 1965) within the context of authentic computer-science debugging activity (Regehr, 2010), this study develops a theoretical framework that views productive responses to failure as a discipline-specific process of reflecting as a community on how to locate obstacles, how to construct causal theories about why those obstacles emerged, and how to plan productive responses.

Broader Impacts. The product of this research will be knowledge about how learning communities help students develop robust and efficient responses to failure. Because the project takes place at an after-school center that works with young students who are traditionally marginalized in both school and broader society, its results will address the urgent need in the United States both to promote computer science and to understand how schools can help students develop a rigorous art of debugging obstacles to learning in school. The curriculum, empirical knowledge, and theoretical framework that emerge from this research will be disseminated in academic journals, at multiple universities, through open-source software, and in instructor-education workshops with partner schools and after-school programs, and will have the capacity to rewrite the way students think about failure, respond emotionally to failure, and use failure as a vital source of information for how to learn.


The 2019 Debugging Failure Team

Clockwise from lower left: David DeLiema, Kaela Seiersen, Geetanshi Sharma, Zach Ryan, Tiffany Kanamaru, Kirsten Nguyen, and Qiyuan Miao

Selected publications

DeLiema, D., Kwon, Y. A., Chisholm, A., Williams, I., Dahn, M., Flood, V. J., Abrahamson, D., & Steen, F. F. (2023). A multi-dimensional framework for documenting students’ heterogeneous experiences with programming bugs. Cognition and Instruction, 41(2), 158–200. https://doi.org/10.1080/07370008.2022.2118279

Flood, V. J., DeLiema, D., & Abrahamson, D. (2018). Bringing static code to life: The instructional work of animating computer programs with the body. In J. Kay & R. Luckin (Eds.), “Rethinking learning in the digital age: Making the Learning Sciences count,” Proceedings of the 13th International Conference of the Learning Sciences (Vol. 2, pp. 1085–1088). London: International Society of the Learning Sciences.

Flood, V. J., DeLiema, D., Harrer, B. W., & Abrahamson, D. (2018). Enskilment in the digital age: The interactional work of learning to debug. In J. Kay & R. Luckin (Eds.), “Rethinking learning in the digital age: Making the Learning Sciences count,” Proceedings of the 13th International Conference of the Learning Sciences (Vol. 3, pp. 1405–1406). London: International Society of the Learning Sciences.

DeLiema, D., Abrahamson, D., Enyedy, N., Steen, F., Dahn, M., Flood, V. J., Taylor, J., & Lee, L. (2018, April). Measuring debugging: How late elementary and middle school students handle broken code. In D. A.-L. Lui & Y. Kafai (Chairs & Organizers), Measuring making: Methods, tools, and strategies for capturing learning, participation, and engagement in maker activities. Symposium conducted at the annual meeting of the American Educational Research Association, New York City.

Walker-van Aalst, O., DeLiema, D., Flood, V. J., & Abrahamson, D. (2018, June). Peer conversations about refactoring computer code: Negotiating reflective abstraction through narrative, affect, and play. Paper presented at the annual meeting of the Jean Piaget Society, Amsterdam, May 31 – June 2.