Programmatic Assessment: Are we there yet?

By Associate Professor Priya Khanna and Professor Gary Velan, UNSW Medicine & Health

Published 7 May 2024


The year is 1917.

The United States decides to join the Great War. Concerned that ‘feeble-minded’ soldiers might easily fall into enemy traps, the American Psychological Association responded by testing the intelligence of every recruit using the first group intelligence tests: the Army Alpha and Beta tests. The underlying assumption of intelligence testing, including the multitude of aptitude and achievement tests developed since that time, was that such tests measure something (a trait) that is innate, stable and immutable. As trait testing permeated the fundamentals of educational assessment, it led, as educational theorist Dylan Wiliam (2017) noted, to a distinct separation between assessment and learning. Subsequently, the major focus of assessment became merely to evaluate the outcomes of instructional activities rather than to enhance learning.

Fast forward to the 21st century. 

A new approach to assessment sweeps the field of medical education. This movement, termed programmatic assessment and initiated by medical educators in the Netherlands (van der Vleuten & Schuwirth 2005), emphasised the intrinsic link between assessment and learning. Assessment drives learning; therefore, educational programs should focus on assessment for learning rather than solely assessment of learning.

The notion of assessment for learning isn’t new. It can be traced back to the late 1960s, when formative assessments were first proposed as an aid to students’ learning.

The novelty of programmatic assessment was the wake-up call that reliance on ‘objective’ and standardised examinations is neither educationally appropriate nor adequate to capture the attainment of complex competencies required to practise as a professional in rapidly evolving environments.  

Programmatic assessment is a systematic approach wherein the outcomes of a variety of purposefully selected assessment tasks are longitudinally collected, collated and combined to obtain triangulated information about a learner's progress in developing key competency domains and capabilities. The accumulated data not only provides a basis for collective decision making on student progress by faculty (assessment of learning) but also serves to stimulate progress among learners (assessment for learning) (Heeneman et al. 2021).

There are two fundamental assumptions underlying such a holistic approach to assessment. First is the notion that competencies are not innate or stable, but situated, specific and contextual (thereby refuting trait theory), and hence cannot be expressed as a single ‘score’. This underscores the problems inherent in the common practice of breaking down competencies or core outcomes into multiple sub-outcomes at the program or course level, then reassembling them into an aggregate score to make progression decisions. This reductionist approach is problematic because it obscures the essence of what we are assessing and why.

Second, any assessment of performance, by its very nature, cannot be immune to human judgement and subjectivity. Hence, numeric scores need to be supplemented with narrative feedback to strengthen both the richness of the data and the credibility of decisions about student progression.

Programmatic approaches to assessment, apart from being educationally sound, have intuitive appeal. Such approaches are regarded as suitable for reforming assessment systems, not only in medical education but in higher education in general. In its recent report on assessment reform for the age of artificial intelligence (Lodge et al. 2023), TEQSA (the Tertiary Education Quality and Standards Agency) recommended a systemic/programmatic approach to assessment to enable meaningful dialogue between students and educators and to evidence the progression of outcomes over a program of study.

Articulating a solution to a complex problem requires a robust and meticulously laid foundation. Emerging global data from implementations of programmatic assessment highlight the need to understand the complexities of major assessment reforms.

Collaborating with colleagues and fellow enthusiasts for programmatic assessment worldwide, we are trying to unpack the ramifications of such major assessment reforms, encompassing both the benefits and potential pitfalls. So far, our work evaluating large-scale implementations of programmatic assessment (Khanna et al. 2023; Roberts et al. 2022) has highlighted three pivotal mechanisms leading to gaps between its theoretical conceptualisation and its implementation.

First, agnosis:

The tendency to hastily adopt programmatic assessment merely because it is in vogue, whilst remaining uninformed about, or indifferent to, its fundamental assumptions and prerequisites.

Second, anomie:

A state of normlessness and diminished agency, experienced by both students and faculty, when these reforms are implemented without accounting for deeply ingrained prior beliefs regarding assessment.

Lastly, agenesis:

Naïve and simplistic versions of assessment systems that claim to be programmatic, yet remain rooted in the measurement paradigm and incongruent with constructivist notions of learning and developing competence.

Despite the recognition that assessment drives learning, the two concepts share an intricate relationship, tangled within a variety of contrasting paradigms and competing approaches. Perhaps the foundational step in crafting programmatic assessment, or indeed any significant assessment reform, lies in addressing these fundamental tensions.

The starting point should be a deliberate exploration of meta-theoretical questions that clarify the fundamental nature of learning and assessment. Key considerations might include: what conditions are necessary for learning to thrive in a specific program of study; how best to define competencies and capabilities for each program; and what conditions are essential for enabling and sustaining assessment transformation to optimise learning.

By framing these inquiries, we give equal privilege to both the journey of learning and the methods used to capture its attainment, thus paving the way for a more holistic and meaningful approach to developing an assessment system for learning. 

 

***



References 
  • Heeneman, S., de Jong, L. H., Dawson, L. J., Wilkinson, T. J., Ryan, A., Tait, G. R., ... & van der Vleuten, C. P., 2021. Ottawa 2020 consensus statement for programmatic assessment 1. Agreement on the principles. Medical Teacher, 43(10), pp.1139-1148.

  • Khanna, P., Roberts, C., Burgess, A., Lane, S. and Bleasel, J., 2023. Unpacking the impacts of programmatic approach to assessment system in a medical programme using critical realist perspectives. Journal of Critical Realism, pp.1

  • Lodge, J. M., Howard, S., Bearman, M., Dawson, P., & Associates, 2023. Assessment reform for the age of artificial intelligence. Tertiary Education Quality and Standards Agency.

  • Roberts, C., Khanna, P., Bleasel, J., Lane, S., Burgess, A., Charles, K., Howard, R., O'Mara, D., Haq, I. and Rutzou, T., 2022. Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. Medical Education, 56(9), pp.901-914.

  • van der Vleuten, C. P. and Schuwirth, L. W., 2005. Assessing professional competence: from methods to programmes. Medical Education, 39(3), pp.309-317.

  • Wiliam, D., 2017. Learning and assessment: a long and winding road? Taylor & Francis.

A/Prof. Priya Khanna is an Education Focussed academic and UNSW’s Nexus Fellow.

Learn about the Nexus Program overview here.

Prof. Gary Velan is an Education Focussed academic and UNSW’s Scientia Education Academy Fellow.

Learn more about Scientia Education Academy below.

Scientia Education Academy Blog Series

The UNSW Scientia Education Academy (SEA) recognises our most outstanding educators for their leadership and contributions to enriching education, and gives them a platform to showcase and facilitate excellence in teaching at UNSW and beyond. Learn more about UNSW Scientia Education Academy here.

See also

Reforming assessment to counter the rise of ‘Marksism’. Written by Professor Gary Velan.

Who benefits from grading first year? Written by Professor Liz Angstmann.

 
