By William Scates Frances and Dr James Bedford, Academic Learning Facilitators, Pro Vice-Chancellor, Education & Student Experience Portfolio
Published 28 August 2023
To attach a document, we click a digital paperclip. To save a document, we click a floppy disk. When designers sought a universal symbol for generative AI, they chose a magic wand. Intending to invoke AI’s transformational capacity, they unintentionally invoked its chaotic and ambiguous potential. When a student’s work can be written by generative AI’s enchanted quill, and potentially graded by the same, the question becomes: “What remains of the university?” We know that academics are concerned by this question; teaching has shown us that it worries students too. AI threatens two of the main reasons students study in the first place: to learn skills and to gain a qualification. If, with a wave of that wand, their skills can be replaced and their qualification rendered irrelevant, that is a problem for both sides of the lectern. We believe it is important to respond to this problem not with an academic-integrity arms race between students and educators, but with a push towards shared ground.
The question of “what remains” inspired us to redesign the gen-ed course GENY0002: Academic Skills Plus. The subject’s broad scope can make it difficult to appeal to cohorts drawn from many different disciplines. To address this, we united students around a shared focus: examining the implications of AI for the future of higher education. The results from this term have far exceeded our expectations and fundamentally changed the way we think about the topic.
We opened the course by addressing the challenges that generative AI poses. We used this conversation to co-design an AI policy for each of the course’s assessment tasks. In small groups, students examined UNSW’s AI assistance guidelines – which range from Level 1 (no assistance) to Level 5 (full assistance with attribution) – and then, as a class, reached a consensus on a responsible level of AI use for each task.
What was striking about this process was that the class needed no prompting to propose reasonable, realistic guidelines aligned with the course’s learning outcomes. The co-designed policy became a student acknowledgement form; submitted alongside each assessment, it gave us insight into each student’s use of AI. Moreover, because the form had been co-created, it helped foster a culture of academic honesty and integrity.
Many early responses to generative AI have prioritised policing tools and punitive measures, but the discussion in that first tutorial convinced us that transparent conversations about process, and genuine investment in the course’s outcomes, offer an alternative approach. While the value of a university degree is often cast in economic terms, our class found more appeal in versions of the course that offered opportunities to think creatively and develop social skills. The students saw the diverse benefits that higher education provides and recognised that misusing AI’s magic wand could deny them that value. Making the case for participating in the course and completing the assessments in terms of students’ own goals for self-development – answering questions of belonging and self, rather than just reaching for a qualification – proved more successful than striving for the holy grail of “uncheatable” assessments (which don’t exist anyway).
An example of this approach was an assessment we designed to focus on process rather than a final product. Students submitted a learning portfolio built from the artefacts of their work, from lecture notes to marginalia, mind-maps and reflections. This alternative to a tutorial participation mark drew students’ attention to their own study habits while creating an assessment that was difficult (although, admittedly, not impossible) to complete using generative AI. The learning portfolios had the unexpected effect of encouraging us to demystify our own research processes as a model to be followed or improved upon. Showing the class how an academic reads, writes and thinks – despite the discomfort we felt in showing students our imperfect notes and bad reading habits – made it clear that many students view the process of academic work with as much perplexity as they do the algorithmic workings of ChatGPT. The portfolios were fascinating for the glimpses they offered into each student’s experience of the course, from their takeaways from lectures to their responses to assessment feedback.
Overall, students were engaged and excited about the prospect of using AI and exploring its limitations and potential in their studies. They were interested in finding ways to use generative AI tools like ChatGPT productively. This genuine enthusiasm, we believe, stems from showing students that we value their thoughts and perspectives (because we do), and it led them to produce thoughtful and engaging responses that will inform our understanding of this topic for years to come.
So, what happens now? While much remains uncertain, what seems apparent is that any productive response to generative AI from universities requires open conversations across the lectern about the purpose of our institutions and the future of our disciplines. It demands an awareness of the current and future utility of these tools for our work and that of our students, along with a recognition of the threat they pose to our livelihoods, to equality, and to the creation of knowledge. Anything less is denying students the experience they’ve dreamed of, and the answer to “What remains of the university?” will be “a puff of purple smoke”.
Special thanks to Rupal Tyagi, learning designer for GENY, for helping to create the assessments.
***