Measures to assess critical thinking in the classroom should be better integrated into the curriculum
There is considerable agreement that critical thinking skills are a desired learning outcome of any postsecondary credential – important in both the personal and professional lives of students and graduates – yet how to measure these skills remains unclear. A new study from the Higher Education Quality Council of Ontario (HEQCO) suggests that measurement tools should be carefully integrated into the curriculum in order to measure critical thinking skills at the course level.
Evaluating Critical Thinking and Problem Solving in Large Classes: Model Eliciting Activities for Critical Thinking Development assessed the development of students’ critical thinking skills (CTS) in a first-year engineering course at Queen’s University. The course focused on developing students’ problem solving, modeling and critical thinking skills by integrating real-world problems, known as model eliciting activities (MEAs), into the curriculum. These problems, which addressed the failure of a cable ferry, the design of a wind turbine and the prevention of heat loss in a home, required students to document not only their solution but also the process by which they arrived at it.
The study used five different instruments to measure students’ CTS at the beginning and end of the course: a multiple-choice test, the Cornell Critical Thinking Test: Level Z (CLZ); an essay-style test, the International Critical Thinking Essay Test (ICTET); performance and analytical tasks from the Collegiate Learning Assessment (CLA); course surveys; and think-aloud exercises in which students verbalized their reasoning while working through a problem or scenario. Each instrument was administered as both a pre- and post-test except the CLA, which was used only as a pre-test. The study took place in the fall semester of the 2012-13 academic year, with 542 students participating.
Integrating MEAs into the classroom improved students’ critical thinking skills, as the problems emulated real-life situations an engineer could face in the field. However, only the course surveys and think-aloud exercises were sensitive enough to detect these gains at the course level; the standardized tests (CLZ, ICTET and CLA) showed no significant improvement between the pre- and post-tests. The authors suggest that the standardized tests may not be sensitive enough to measure critical thinking at the course level, given the short testing window.
Administering the post-tests during the exam period also posed a challenge and reduced student motivation: some students questioned why they were taking the same tests again, while others put little effort into the post-tests.
The authors recommend better aligning the critical thinking teaching framework, the MEAs and the standardized instruments in order to measure critical thinking at the course level. Furthermore, to avoid testing fatigue and waning student motivation, the authors suggest integrating testing into the curriculum to make it indistinguishable from the course experience.
The authors also note other ways to measure CTS, such as scoring student work over the span of an entire credential rather than a single course. They recommend that future research use student work, in addition to standardized instruments, to track skill development longitudinally.
Evaluating Critical Thinking and Problem Solving in Large Classes: Model Eliciting Activities for Critical Thinking Development was written by James Kaupp, Brian Frank and Ann Chen, Queen’s University.