Learning about constructive alignment (Biggs 1996, 1999) during the 2021 Teaching & Learning Induction at UWC forever changed my view of teaching. The idea seemed simple and obvious, yet it is not always employed when designing courses. I was fortunate to have significant freedom in course design, and so I developed a programme for the third-year practicals and the Computational Physics Honours course heavily guided by constructive alignment.
Constructive alignment inspired me to combine the teaching and assessment components into individual guided and scaffolded practicals. It also made me reflect deeply on what assessment should really mean in my courses. Because I had significant freedom, I needed to develop a set of assessment criteria largely from scratch (with some guidance from the yearbook and past materials, of course).
I asked myself what it should mean for, for example, an Honours student to walk away with 63% in computational physics. What would I, as a prospective postgraduate supervisor, expect that student to be able to do? I also had to take into account the historically disadvantaged backgrounds of many of my students: it would be unfair to compare them to their counterparts in more developed countries. My goal was to rapidly upskill these students, through scaffolding and fairer assessment, so that they could compete with students from more advantaged parts of the world.
I thus split each practical roughly into three parts, with the mark breakdown shown on the right. This means that 40% of the marks lie simply in getting to grips with the basics of programming. I introduce scaffolding by gradually making these programming questions harder, always pushing student development. Coding is the essential foundation upon which everything else rests, but, as I often remind my students, my course is not a programming course.
A further 30% of the marks lie in demonstrating mastery of basic computational physics concepts, through a mixture of written questions and the simplest implementations.
My reasoning is that any student can go from never having programmed to passing this course well, and will leave with sufficient familiarity with computational physics to take on any standard modern physics MSc project (all of which require some level of computational proficiency). A student who already has a strong handle on programming should obtain 70% with relative ease, but to earn a first they must tackle the “challenge questions”: the remaining 30%. These questions are usually significantly harder than the rest (a point I make very clear to the students) and focus on solving more realistic computational physics problems. A student with a first in computational physics can therefore realistically be expected to tackle an MSc project with a strong computational component, and I would expect them to be qualified for a junior data science or even developer position.
I develop a standard rubric for each practical (see an example on the left), always assessing on “levels” of proficiency that leave little ambiguity in marking. I strongly favour criterion-based assessment over norm-based or other forms; this is reflected in my rubrics and aligns well with Section 4.12 (criterion-referenced assessment) of the UWC Assessment Policy.
By working back from the expected outcomes of the course, keeping in mind my students’ experience and backgrounds, I hope that I have designed an assessment philosophy that is both fair and effective.
A recent BBC article highlighted for me the complexity involved in understanding plagiarism at universities. In it, journalists interviewed several Kenyan students who support themselves and their families by “ghostwriting” essays for students at other universities, usually in developed countries. While most of the Kenyan students involved understood that this was ethically wrong, they felt they had few options for financially supporting their own studies. Plagiarism is rife at universities all over the world and threatens the integrity of the academic system.
Because my courses generally do not include written exams (which are usually considered artificial and unfair tests for such practical courses), I take plagiarism particularly seriously. I have taught few classes in which I did not encounter at least one case of plagiarism, so I have become interested in understanding what drives it and how to combat it.
It is easy to assume that the ubiquity of information on the internet, and the ease with which classmates can be contacted, would lead to an increase in plagiarism. Indeed, a review by Newton (2018), which specifically investigated contract cheating such as ghostwriting, found a steady increase since 1990. However, Curtis and Tremayne (2021), in a more specific and homogeneous study spanning 2004 to 2019, actually found an overall decrease in self-reported plagiarism from 2004 to 2014, after which it levelled off. This decrease is likely related to the adoption of tools such as Turnitin, but what I found interesting is that the number of students who could correctly identify specific examples of plagiarism increased over the same period, which may indicate that a better understanding of plagiarism also reduces its incidence.
Realising this, I focus on training students to recognise and avoid plagiarism, giving specific examples in class along with tips on how to steer clear of it. I have also implemented a plagiarism penalty that starts off soft and increases steadily with repeated offences (see below). Using this strategy, together with Turnitin, careful question structure and probes of understanding, I have found that the prevalence of plagiarism decreases quickly as the course progresses.
I also realised that students usually resort to cheating when they feel they lack the skills required to complete an assignment and are under pressure. During 2021, at the height of the pandemic, plagiarism increased dramatically, and some students cheated extensively on a particularly difficult assignment I had set. To understand why, I used the interactive tool AhaSlides to ask them, anonymously, why they had cheated; I include some of the responses here. Their behaviour, while unethical, told me they needed more help than I had given them. This led me to restructure the practicals to improve scaffolding, demonstrate correct problem-solving techniques and bring in a tutor for extra support. While a handful of students continued to resort to unethical behaviour, the majority improved dramatically and passed the course.
Biggs, J. (1996). “Enhancing teaching through constructive alignment.” Higher Education, 32(3), 347–364.
Biggs, J. (1999). “What the student does: Teaching for enhanced learning.” Higher Education Research & Development, 18(1), 57–75.
BBC News (2021). “The Kenyans who are helping the world to cheat.” Accessed October 2022. https://www.bbc.com/news/blogs-trending-58465189
Curtis, G. J. & Tremayne, K. (2021). “Is plagiarism really on the rise? Results from four 5-yearly surveys.” Studies in Higher Education, 46(9), 1816–1826. https://doi.org/10.1080/03075079.2019.1707792
Newton, P. M. (2018). “How common is commercial contract cheating in higher education and is it increasing? A systematic review.” Frontiers in Education. https://doi.org/10.3389/feduc.2018.00067