
Computing Research News

November 2005     Vol. 17/No. 5


ACM SIGCSE Sponsors First International Computing Education Research (ICER2005) Workshop

By Mark Guzdial, Sally Fincher, and Richard Anderson


The University of Washington, Seattle, hosted the first International Computing Education Research (ICER) workshop on the first weekend of October 2005. Sponsored by ACM SIGCSE, the gathering drew nearly 60 participants from America, Europe, Asia, and Australia to present and discuss research on how people come to understand computing, and how to improve that understanding.

The presentations addressed issues ranging from explaining why some students succeed in their first computing courses, to developing evaluation metrics for student programming environments, to considering how much the choice of paradigm or language really matters in teaching programming.

The challenges facing computing education are enormous. Enrollment in computing courses and majors is declining. The percentages of women and minority students are lower than one would expect or desire. Failure rates in computing courses are shockingly high. All of this is true at a time when we recognize the extent to which our society relies on computing technologies and on the people who know how to design, structure, maintain, and use those technologies.

Computing education, as a research discipline, is the study of how people come to understand computational processes and devices, and of how to improve that understanding. Such understanding matters both for professionals in IT fields and for a technologically literate citizenry, which makes this research critically important for the technology-dependent societies in which we live.

The goal of the ICER workshop was to gather high-quality contributions to the computing education discipline. The peer review process (fewer than half of the submitted papers were accepted) favored papers that combined a theoretical foundation with a strong empirical basis. The papers presented at the workshop drew on a wide range of methods from education, sociology, psychology, and the cognitive and learning sciences to further our understanding of computing education.

The invited keynote speaker, Cindy Atman of the University of Washington, Seattle, kicked off the workshop by presenting more than ten years of empirical data collection and analysis on the stages of developing design expertise in engineering. She watched students, engineering faculty, and expert professional engineers work on the same design problem: a neighborhood playground. From a detailed analysis of each subject’s moment-by-moment “think aloud” commentary, she was able to show how students developed their design expertise over four years of schooling. She also showed how faculty varied in their design activity, from an approach not unlike that of senior engineering students to approaches much more focused on out-of-the-box generation of ideas. Her latest work with expert designers suggests a similar range of design activity, from those who focus on innovative ideas to those who focus on cost and real-world feasibility.

A significant percentage of the ICER attendees had been involved in one of the several multi-institutional, multi-national (MIMN) studies in computing education conducted over the past five years. The point of these studies was to test the generality of findings in a way that no study of a single classroom at a single institution could. Raymond Lister (University of Technology, Sydney, Australia) presented a paper that described the value of this style of research in computing education and traced its history. The MIMN studies have played an important role in drawing new computing education researchers into the community.

The first of the papers, “What really matters in teaching about computing?”, drew on data from one of the recent MIMN studies. Gary Lewandowski (Xavier University) presented a study exploring what students know, don’t know, and aren’t sure whether they know. The researchers found that better-performing students are more certain about what they don’t know. They also found commonality in the terms that most confounded students (typically abstract terms like “state” and “decomposition”) across many kinds of introductory classrooms using a variety of teaching paradigms. Allison Tew (Georgia Institute of Technology) presented work on a similar theme: a study of students from two quite different introductory courses who then took the same follow-up course. While there were significant differences in student understanding at the start of the second course, those differences disappeared by the end of the class, raising the question of just which differences among introductory courses really matter.

One of the most commonly asked questions at ICER was, “What makes students succeed?” Susan Wiedenbeck (Drexel University) presented a model, built using multiple regression, showing the important role that a student’s sense of his or her own ability (self-efficacy) plays in both success and failure. Her model showed that students with low ability but inflated self-efficacy tended not to succeed. A study of learning strategies presented by Desmond Traynor (NUI Maynooth, Ireland) emphasized that critical factors in learning computing include having good learning strategies (e.g., checking one’s own understanding) and finding value in what one is doing.

Powerful insights on “How students think about computing” came from studies employing a wide range of methods. Yifat Ben-David Kolikant (The Hebrew University of Jerusalem) interviewed, tested, and observed students to demonstrate how their sense of “systematic testing” often devolved into “testing every input I could think of.” She found that students hold a notion of “relative correctness” that experts, frankly, considered “wrong.” Anna Eckerdal (Uppsala University, Sweden) used phenomenographic methods to explore what students think it means to use “programming thinking.” Her students’ understanding ranged from what she called “procedure conceptions,” in which they saw programming as a set of rote procedures, to “object conceptions,” in which they demonstrated a deeper understanding of computing concepts. Beth Simon (University of California, San Diego) presented MIMN study data on the strategies students used in trying to solve multiple-choice questions about program code. Simon found that students used a wide range of strategies rather than relying on one or two successful ones, and that this characterization held across the hundreds of students studied.

The question, “How should we teach computing?” was explored in several studies. Chris Hundhausen (Washington State University) used ethnographic field techniques to compare two “studio-based” data structures classes, in which students created visualizations of algorithms in teams and then presented them to the rest of the class. In one class, students used an algorithm animation tool; in the other, students used simple art supplies like transparency sheets, scissors, and markers. He found that the art-supply group tended not to get caught up in details, but also didn’t develop their visualizations in enough detail to describe the algorithm as well as the tool-using students did. Jackie O’Kelly (NUI Maynooth, Ireland) described case studies in which students were found applying software engineering problem-solving strategies to non-programming problems. She is trying to develop problems, outside of computing, that help students build expertise in these strategies for solving computing problems.

Computing education necessarily involves the use of computing tools, and papers at the ICER workshop addressed the question, “How do we evaluate and choose novice programming environments?” Ari Korhonen (Helsinki University of Technology, Finland) presented a taxonomy of algorithm visualization tools that emphasizes “effortlessness”: making it as easy as possible for teachers to create visualizations. Paul Gross (Washington University in St. Louis) reviewed how novice programming environments have been studied. Both papers noted a lack of theory in the design and evaluation of these tools, and little replication of studies that would establish the generality of their findings. The bottom-line conclusion of these papers, among many others at ICER, was that computing education research is a young field: we have a long way to go in developing our theories and methods.

The ICER2005 proceedings will appear shortly in the ACM Digital Library. A CD of the proceedings will be included for all ACM SIGCSE members in the December issue of the Inroads bulletin. Additional information about the workshop is available at http://icer2005.cs.washington.edu/. Planning for the next ICER workshop is already underway.

Mark Guzdial is an Associate Professor in the College of Computing at the Georgia Institute of Technology; Sally Fincher is Head of the Computing Education Research Group at the University of Kent; and Richard Anderson is a Professor in the Department of Computer Science and Engineering at the University of Washington. They were the organizers of the recent ICER 2005 workshop.

