Jill Scott – Data collection on student learning, or why I can’t paint that door

There is a door in my house – maybe you have one of these too – that cannot be painted. In recent renovations, I had to explain to the workmen that the data on the door is too valuable and must be preserved. You may have guessed that the door in question is where we’ve measured the growth of our children over a dozen years or more.

While this door has nothing to do with learning outcomes, it has a lot to do with data collection. Data collection with toddlers is tricky because you have to chase them around and make them stand still while you make a nick on the door to mark their height. It’s tricky again when they’re teenagers, who roll their eyes and groan at having to stand up tall and straight next to the door, yet again.

And despite best efforts to ensure that these data collection moments happen at regular intervals – the better to compare growth spurts between siblings, track gender differences and control for other confounding factors – the timing is inevitably inconsistent, which undermines the researcher’s ability to draw conclusions.

In the same way that I’m curious about my children’s growth, universities and colleges are increasingly interested in measuring the achievement of transferable cognitive skills, such as critical thinking, problem solving and communication. But identifying tools that can accurately measure learning gains is a considerable challenge.

While the Collegiate Learning Assessment is well validated and uses sophisticated predictive algorithms to track specific strands of critical thinking, the per-student cost is high, the test takes ninety minutes and each student must sit at a computer with uninterrupted internet access. The Critical Thinking Assessment Test is less expensive, takes only an hour and requires nothing but a pencil, but it exists in only one version, so students taking it a second time see the very same test. The VALUE Rubrics (Valid Assessment of Learning in Undergraduate Education), developed by the Association of American Colleges and Universities, can be used to assess skills in existing course assignments, but they require lengthy consultations with faculty to align learning outcomes to tasks, and student work must be scored by graders who have received rigorous training to ensure inter-rater reliability.

If chasing toddlers to stand them against the door frame is tough, it is no less difficult to get students to show up and sit a test. Embedding assessment into courses is one way around this. That is not so difficult in first year, when class sizes are large and the number of instructors is small. But upper-year courses tend to be small, and coordinating with a large number of instructors takes a great deal more time.

And if teenagers are less motivated to stand up against a door and be measured – “Again, mom?!” – upper-year students are more savvy about prioritizing their time and less motivated to put maximum effort into an assessment. As a result, it can look as though learning gains taper off – like slouching teenagers – in upper years, even when students are actually demonstrating remarkable critical thinking in some assignments.

From this description of the challenges of data collection, you might conclude that I am not very keen on assessing student achievement. But you would be wrong. In fact, I’m passionate about demonstrating to all stakeholders – students, parents, taxpayers and government – that learners are gaining valuable transferable cognitive skills that enable them not only to succeed in the labour market and pursue meaningful careers, but also to contribute to society as engaged citizens and lifelong learners.

In a world where time is money, I’m equally concerned with identifying sustainable means of assessing outcomes. We cannot be naive about the challenges of developing cost-effective, valid and reliable methods, or of motivating students to show up and put in their best effort. But we have to start somewhere, and now is the right time to invest in figuring this out.

Measuring our children’s growth has been fun, but we’re much more interested in their overall wellbeing once they’ve left the nest. Likewise, measuring learning gains in postsecondary education is the first step toward correlating them with success in the labour market and in life, 5, 10 or 20 years out. To do that, we need to start collecting data now.

Jill Scott is vice provost, teaching and learning, at Queen’s University.

Our opinion is that the opinions expressed by our guest bloggers are their opinions, and not necessarily those of HEQCO.

2 replies on “Jill Scott – Data collection on student learning, or why I can’t paint that door”

Jill, I enjoyed reading your post and agree with the challenges of measuring transferable cognitive skills. As educators we have improved at identifying learning outcomes or objectives, but embedded within them are those same key cognitive skills that can be difficult to really assess. Added to all of these challenges is the digital environment, which brings students who may be comfortable with this era but lack appropriate literacies for behaviour, norms and rules of conduct, not to mention how to use digital tools, etc. This becomes a compounded issue when these digital students cannot evaluate the plethora of digital information and content they access for veracity and reliability. There is much to be understood in the digital era for post-secondary education.

I found this post interesting because I was perhaps naive in my Scholarship of Teaching and Learning work in thinking that students would be delighted to participate in SoTL research! I’ve also had discussions with colleagues about keeping NSSE response rates up. Survey fatigue is a real possibility now, isn’t it? We can gather data, so we do. Your point about being strategic re: these kinds of assessments is thus a good one.
