Ethical Use of Gen-AI for Designing Rubrics at UNSW: Nexus Fellows’ Perspectives

By Dr Cherie Lucas (UNSW Medicine & Health), Dr Mark Ian Jones (UNSW Arts, Design and Architecture), Dr Chris Campbell (UNSW Canberra) & Dr Helena Pacitti (UNSW Science)

The authors are all #UNSWNexus Fellows - learn more about the program here.

Published 5 June 2024

[Image: Gen-AI apps on an iPhone]

Since the worldwide launch of ChatGPT in November 2022, educators in higher education have had little choice but to step outside their comfort zones and weigh the benefits and challenges of an ever-growing array of emerging Generative AI (Gen-AI) tools. Beyond the challenge of safeguarding academic integrity, there are myriad scientific and anecdotal reports of inaccurate output and citations from some of these tools, and the space becomes even more confusing once considerations of ethical and responsible use are added. Yet it is obvious that Gen-AI is here to stay! An “if you can’t beat them, join them” attitude is the key to unlocking and embracing the opportunities and benefits it offers educators, as well.

As UNSW educators in the first cohort of #UNSWNexus Fellows, our role as change agents is to lead priority projects across faculties and schools, to share pedagogy and educational best practice through resources and influence, and to encourage uptake of evidence-based and ethical uses of AI and other innovations in education. One such ethical use is the focus of this post: designing rubrics with Gen-AI, a task that has traditionally been time-consuming for educators.

What are Rubrics? 

Rubrics are primarily used by educators as an assessment tool that makes the expectations for grading learning outcomes explicit to students. A true rubric is distinct from other evaluation tools (e.g. checklists, rating scales) because it not only specifies criteria aligned to the assessment’s purpose, but also describes those criteria across a range of performance levels. Rubrics are usually presented in a matrix/table format: the criteria sit in the first column; the performance levels or standards (e.g. High Distinction, Distinction, Credit, Pass, Fail), often with a numerical scoring scale, run across the top row; and performance descriptions (indicators) fill each cell.
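For illustration, a skeleton of that layout might look like the following (the criterion and descriptors here are placeholders, not drawn from any UNSW course):

Criteria | High Distinction | Credit | Fail
Quality of argument | Constructs an original, compellingly evidenced argument | Presents a sound argument with some gaps in evidence | Does not present a coherent argument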

What are the benefits of Rubrics?   

Implementing a rubric with each assessment task can help both educators and students. For educators, the benefits include consistent and efficient grading and a ready tool for enhancing student feedback. For students, rubrics support self-regulated learning [1], self-efficacy [2] and transparency about what they must do to achieve their desired academic performance outcomes [1]. However, rubrics require careful consideration and take time to design effectively for every assessment task. If they are not designed with rigour and in alignment with the Course Learning Outcomes (CLOs), they can impair student learning and compromise the rubric’s inter-rater reliability [3, 4].

Gen-AI and the Design of Rubrics – Where to Start?  

Gen-AI can be a beneficial educational tool for designing rubrics, and this is considered an ethical use of Gen-AI in education, assisting educators to build rigour into the process. The first step is for educators to consider criteria that cover higher-order thinking skills; criteria built on these cognitive processes are likely to be more resilient to AI, reducing the risk of AI vulnerabilities when using AI in rubric design. Such criteria can draw on the higher-order cognitive verbs of Bloom’s taxonomy (1956): for Synthesis, verbs such as create, construct, formulate and predict; for Evaluation, verbs such as argue, judge, prioritise and debate. If you are using the Revised Taxonomy of the Cognitive Domain (2001) [5], Evaluating (e.g. ranking, assessing) and Creating (e.g. composing, actualising) are the two highest cognitive process dimensions.
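For example, a criterion pitched at the Creating dimension might be phrased as follows (a hypothetical example for illustration only): Criterion: “Formulation of recommendations”; High Distinction descriptor: “Formulates an original, well-justified recommendation by integrating and prioritising evidence from multiple sources.”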

The next steps… 

Alignment with the CLOs allows for more robust rubric construction and more transparent assessment design, which in turn assists students in completing the assessment task.

Based on our experience working in this space, we have formulated a series of steps and prompts that help ensure a well-generated rubric. Have your course outline and assessment task instructions handy, and know the cohort you are aiming at (the more detail on diversity and stage in the program, the better!).

Prompt 1. Ask your AI agent to produce a rubric for a [first] year [under] graduate course with a diverse student cohort of [domestic/ non-domestic, %neurodiverse, etc.] based on the CLOs addressed in the assessment task. Be as specific as you can. 

Prompt 2. Enter the assessment description (and steps, etc) from the course outline or assessment brief. Ask for a range of grades between Fail and High Distinction. 

At this stage, the AI tool (AI agent) should produce a first draft rubric for your consideration. Review it and decide whether any of your prompts need refining. You may find that some of the criteria require tweaking, so ask your AI agent to revise. An example may be:

Prompt 3. Please revise the criteria for process to include reflection [or other verbs based on the taxonomy above]. 
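If you prefer to script this conversation rather than type it into a chat window, the same three-prompt sequence can be run programmatically against any chat-style Gen-AI API. The sketch below is a minimal illustration assuming the openai Python package; the model name, cohort details and bracketed placeholders are our assumptions for illustration, not a prescribed workflow.

```python
# Minimal sketch of the three-prompt rubric workflow via the OpenAI
# Python client. Model name and placeholder text are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt 1: course level, cohort and CLOs (replace the placeholders).
messages = [{
    "role": "user",
    "content": (
        "Produce a rubric for a first-year undergraduate course with a "
        "diverse student cohort (e.g. 70% domestic, 30% international, "
        "including neurodiverse students), based on these Course Learning "
        "Outcomes: [paste CLOs here]."
    ),
}]

follow_up_prompts = [
    # Prompt 2: assessment description and the grade range.
    "Here is the assessment description from the course outline: "
    "[paste assessment brief here]. Provide performance levels from "
    "Fail to High Distinction.",
    # Prompt 3: refine individual criteria as needed.
    "Please revise the criteria for process to include reflection.",
]

# Keep the whole exchange in one message list so the agent retains
# context between prompts, just as it would in a chat window.
for prompt in follow_up_prompts:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant",
                     "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": prompt})

final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)  # draft rubric for human review
```

As with the chat-based workflow, the printed draft is a starting point for your critical review, not a finished rubric.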

We would, however, encourage you to cross-reference your Gen-AI agent’s output with outputs from other publicly available Gen-AI tools, knowing full well that, as an educator, you still need to cast a critical eye over the final product. A peer review of the final product will also enhance the rigour of this process.

Review the rubric, and you are good to go!

What other ethical uses of Gen-AI have benefitted the teaching and learning space in your Institution, Faculty or School?

 

***



References 
  1. Panadero E, García-Pérez D, Fernández Ruiz J, Fraile J, Sánchez-Iglesias I, Brown GTL. Feedback and year level effects on university students' self-efficacy and emotions during self-assessment: positive impact of rubrics vs. instructor feedback. Educational Psychology. 2023;43(7):756-779.
  2. Andrade HL, Wang X, Du Y, Akawi RL. Rubric-referenced self-assessment and self-efficacy for writing. The Journal of Educational Research. 2009;102(4):287-302.
  3. Lucas C, Bosnic-Anticevich S, Schneider CR, Bartimote-Aufflick K, McEntee M, Smith L. Inter-rater reliability of a reflective rubric to assess pharmacy students' reflective thinking. Curr Pharm Teach Learn. 2017;9(6):989-995.
  4. Lucas C, Smith L, Lonie JM, Hough M, Rogers K, Mantzourani E. Can a reflective rubric be applied consistently with raters globally? A study across three countries. Curr Pharm Teach Learn. 2019;11(10):987-994.
  5. Krathwohl DR, Anderson LW, Bloom BS. A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives. Complete ed. New York: Longman; 2001.

 

Dr Cherie Lucas, Dr Mark Ian Jones, Dr Chris Campbell & Dr Helena Pacitti are all UNSW Nexus Fellows.  

  • Learn about the #UNSWNexus program from the Nexus Director here.
     
  • UNSW colleagues can also visit the internal info page here (SharePoint).

See other similar UNSW Education Blogs & Articles

What does Copilot’s internal release mean for our teaching? Written by Professor Alex Steel.

Teaching students to value education rather than marks – competency and mastery grading at UNSW. Written by Professor Liz Angstmann, Associate Professor Helen Gibbon and Dr Ben Phipps.

Programmatic Assessment: Are we there yet? Written by Associate Professor Priya Khanna and Professor Gary Velan.

Why Artificial Intelligence matters to us all. Written by Associate Professor Lynn Gribble.

What I learned about Self-Regulated Learning from A/Prof. Lodge of UQ. Written by Dr Helena Pacitti.

 


 
