2 lanes or 6 lanes? It depends on what you are driving: Use of AI in Assessment

By Prof. Alex Steel, Director AI Strategy, Education

Published 12 July 2024

Image: a six-lane highway with cars moving in each lane (created with DALL-E)

In 2023, Danny Liu and Adam Bridgeman caught everybody's imagination with their idea of a 'two-lane' approach to assessment in a world of generative AI. The idea was that in one lane students could not use AI at all, and in the other lane they could use AI in any way they wanted. This was driven by a sense that it was going to be increasingly impossible to detect the use of AI in written assessment. Earlier this year, an alternative model developed by Leon Furze was published, which suggested a five-stage approach to the use of AI in assessment (named the AI Assessment Scale, or AIAS), now collaboratively developed into a second version.

As it happens, at UNSW we had (since February 2023) been recommending a very similar five-category approach to the permitted use of AI in assessment. We revised it in August 2023 and are now launching our third version, which has six options. Given the popularity of the 'two-lane' approach, it is worth exploring our thinking.

The two-lane approach is brilliant at focussing the mind. But, like many binary options, it over-simplifies to make a point.

The main driver of that model is that we can’t prevent cheating with AI, so we either totally control the environment in order to ban it, or we allow open slather. That simplification continues in the descriptions of the two lanes: the no-AI lane is described as ‘assessment of learning’, and the AI lane is described as ‘assessment as learning’. Quite clearly this isn’t correct: there are many instances where an assessment without AI can be assessment as or for learning – the main characteristic is whether the outcomes and feedback from the task are intended to be used in further learning. Similarly, there are likely to be many final competency tasks soon where a student uses AI and where there is only assessment of learning. Two lanes is a great metaphor, but reality is more nuanced.

If we look at these ideas from a student guidance perspective, more lanes are needed. Assessment for learning requires not only adequate feedback to enable ongoing learning, but also adequate instruction about the assessment task so that students can focus their efforts on what is being assessed.

From an educator’s point of view, there are clear pedagogical reasons for asking students to use AI in different and limited ways at different points of their educational journey.

There are genuine fears that if students always rely on AI to generate answers, they may not develop critical skills – skills that come quickest through the hard labour of generating ideas and artefacts from scratch.

Students themselves have been reluctant to use AI for fear of being accused of cheating. More detailed categories can provide clear reassurance of permissible use.

For this reason, at UNSW, we have tried to keep both ideas in our minds at once:

Reshaping assessment to recognise that AI tools are widely available and are likely to be used to complete assessments that are poorly designed for the circumstances (two lanes), while also recognising that there are good reasons to graduate the degree of AI use, and that students want to understand how their assessment is guiding their learning (six lanes).

Our assumption is that our students want to learn, and won’t take impermissible shortcuts if we explain the reasons for the restricted use of AI in particular assessments.

Of course, the onus is on us to make those explanations, and to make the assessments engaging enough or important enough to overcome temptations to take the shortcuts. Part of that mix of motivations is a range of supervised ‘lane one’ assessments at key points in the degree that test knowledge and skill without AI. Another part of that mix involves conversations with students about their submitted assessments that explore their learning process. This dialogic feedback can be a normal aspect of learning, not an interrogation – assessment conversations can both build a learning community and increase the integrity of the process. Doing these at scale is a current challenge, but it is a more palatable challenge than technological surveillance.

So, our approach is to clearly signal to students what the assessment is about.
  • NO ASSISTANCE: This assessment is designed for you to complete without the use of any generative AI.

  • SIMPLE EDITING ASSISTANCE: In completing this assessment, you are permitted to use standard editing and referencing functions in the software you use to complete your assessment.

  • PLANNING/DESIGN ASSISTANCE: You are permitted to use generative AI tools, software or services to generate initial ideas, structures, or outlines.

  • ASSISTANCE WITH ATTRIBUTION: This assessment requires you to write/create a first iteration of your submission yourself. You are then permitted to use generative AI tools, software or services to improve your submission.

  • GENERATIVE AI SOFTWARE-BASED ASSESSMENTS: This assessment is designed for you to use generative AI as part of the assessed learning outcomes.

  • NOT APPLICABLE: Generative AI is not considered to be of assistance to you in completing this assessment.

All of these are valid approaches to assessment for learning. Our course outline system now also prompts the educator to provide further clarification of exactly what sort of use is permitted and suggested for completing the assessment.

The categories are thus the start of a conversation with the student about expectations and the learning process.

Of course, along with this guidance there are detailed warnings about academic misconduct. When expert academic judgment considers there are issues, we continue to use a range of tools to investigate.

For those who prefer the simplicity of the two lanes – assessments can still be set using the AI-based or no-AI categories. The other lanes help to signal the pedagogical intentions of the assessment task. Not all options need to be used across a program, and for higher stakes assessment some options may not be appropriate.

But our first focus is on clear categories of approaches to learning.

Signposts, not just sorting.
