Author: Dr. Carlos Collares, EBMA
When I first started using computerized adaptive testing in Brazil seven years ago, I had two main goals. First, to increase progress test reliability while decreasing test length, and thus reducing student fatigue. Second, to provide my institution at that time with a cost-effective solution that would also make the assessment experience more enjoyable and meaningful for students – after all, students taking an adaptive test receive an individually customized test, in which an algorithm dynamically adjusts the difficulty of the items according to each test taker's performance.
After moving to Maastricht, I had the honor and the privilege of implementing a computerized adaptive version of the International Progress Test in Mexico (Monterrey Tec) and Finland (University of Helsinki) under the umbrella of the European Board of Medical Assessors. Georgia (David Tvildiani Medical University) has recently run a pilot test with their students. More recently, under the auspices of the Educational Institute of the Faculty of Health, Medicine and Life Sciences, Maastricht University in the Netherlands and Suleiman Al Rajhi University in Saudi Arabia had the opportunity to enjoy this innovative assessment tool. In all institutions, the test received positive evaluations from students and staff, as well as high reliability for all academic years (0.90-0.96) with 50% of the original number of items. Therefore, we can already say computerized adaptive progress testing has been an excellent tool in the assessment of learning.
Having said that, I must share with you a question that changed the game for me. At the latest EBMA Conference, a participant asked me whether adaptive testing is perhaps not aligned with the “assessment drives learning” principle, because ‘it is more like “learning driving assessment”’. I had to agree to disagree, so to speak.
It is true that the probabilistic approach to item difficulty applied by the Rasch model in our adaptive test makes the item selection algorithm choose items closest to a 50% probability of a correct answer, given the provisional ability level of the test taker, which is recalculated after each item. However, it is precisely because of this fact that we can also say the test is aligned with modern learning theories.
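For readers curious about the mechanics, the sketch below illustrates this selection logic in Python. It is a simplified illustration under my own assumptions (a one-parameter logistic Rasch model, maximum-likelihood ability estimation, and a hypothetical simulated item bank), not the engine behind our International Progress Test:

```python
import math
import random

def p_correct(theta, b):
    """Rasch model: probability of answering correctly given ability
    theta and item difficulty b; exactly 0.5 when theta == b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, administered):
    """Select the unused item whose difficulty is closest to the
    provisional ability, i.e. the item nearest a 50% probability of
    a correct answer (the most informative item under Rasch)."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))

def update_theta(theta, seen_difficulties, responses, iterations=10):
    """Re-estimate ability by maximum likelihood (Newton-Raphson),
    clamped to [-4, 4] so all-correct/all-wrong runs stay finite."""
    for _ in range(iterations):
        probs = [p_correct(theta, b) for b in seen_difficulties]
        gradient = sum(r - p for r, p in zip(responses, probs))
        information = sum(p * (1.0 - p) for p in probs)
        theta = max(-4.0, min(4.0, theta + gradient / information))
    return theta

# Simulate a short adaptive session against a hypothetical 200-item bank.
random.seed(42)
difficulties = [random.uniform(-3.0, 3.0) for _ in range(200)]
true_theta = 0.8          # the simulated student's real ability
theta = 0.0               # provisional estimate, recalculated per item
administered, responses = [], []

for _ in range(20):
    i = next_item(theta, difficulties, administered)
    correct = random.random() < p_correct(true_theta, difficulties[i])
    administered.append(i)
    responses.append(1 if correct else 0)
    theta = update_theta(theta, [difficulties[j] for j in administered], responses)

print(f"Estimated ability after 20 items: {theta:.2f} (true ability: {true_theta})")
```

Because, under the Rasch model, an item is most informative exactly where the probability of a correct answer is 50%, choosing the item whose difficulty is closest to the provisional ability amounts to choosing the most informative item – which is why a well-targeted adaptive test can match the reliability of a much longer fixed test.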
Let’s take constructivism, for example. By adjusting the difficulty of the test to the frontier of one’s knowledge, it is possible to affirm that the test represents a rough, imperfect, yet useful approximation of what Vygotsky called the “zone of proximal development”. Isn’t this the area we have to explore if we want learning to happen? Another example could be cognitive load theory: customizing test difficulty prevents, or at least minimizes, both cognitive underload and overload at the same time.
If we choose social cognitive theory, we can find an even more positive rationale. Computerized adaptive progress testing effectively overcomes one of the pitfalls of fixed, paper-based progress tests, namely the low reliability for students in the early academic years. Beginners therefore get a more accurate thermometer of their current knowledge levels, without needing to aggregate information from four tests to achieve high reliability. With more precise scores, students are less likely to have negative mastery experiences caused by unreliable scores, which could harm their self-efficacy.
One piece of feedback I have been receiving from students taking the test makes me particularly happy. After switching to an adaptive progress test, they say they are more motivated than ever to study, as they now perceive their scores to be more accurate representations of their knowledge levels. Even students who failed shared the same perception with me. I understood that they are eager for top-quality feedback, and with this test they now receive rich, precise, interactive online feedback. When the test is coupled with other programmatic assessment elements, such as a mentorship-portfolio programme, the educational utility of the progress test is maximized. Students can make systematic use of the rich feedback they receive, filling their knowledge gaps more quickly. So yes, one can undoubtedly say that computerized adaptive tests are an excellent assessment tool for learning too.
Finally, it is safe to say that the question posed at the EBMA Conference about the alignment of computerized adaptive testing with modern approaches to assessment and learning has opened my eyes to a third goal for computerized adaptive progress tests. It is great how reliable and cost-effective computerized adaptive progress tests are, and they are truly a useful assessment tool for learning. Nevertheless, I now need to demonstrate how a computerized adaptive progress test works as a learning moment in itself – which, in my opinion, is much more valuable than using progress tests merely as measurement moments, as some schools unfortunately still do. If that is the case at your school, I hope this reflection has inspired you to think of the progress test as much more than an assessment tool only.
At EBMA, we want to enable your medical school to experience this innovative, state-of-the-art, cost-effective assessment tool of, for, and as learning. Contact us via info@ebma.eu and schedule a pilot of the computerized adaptive version of the International Progress Test at your university.