Published January 27, 2023
Fair warning: this blog post was composed using assistance from artificial intelligence (AI).
Nevertheless, if you have written an email, checked your smart watch for your fitness information, or asked Alexa for today’s weather, then you have also received assistance from an AI. Indeed, we live in blissful ignorance of the many technologies surrounding us and delivering our modern creature comforts. Understanding how they function, or even having a working knowledge of them, is not necessary for us to benefit from their use. For the most part, we know that our grammar is corrected, our heart rate is monitored, and our favorite song is playing from the speaker; all innocuous, nearly automatic processes. The corresponding perception of Alexa or Siri – devices which can record pretty much every conversation taking place around them – is generally one of indifference, to the point where one forgets they are there.
Unlike Alexa or Siri, however, the mere mention of ChatGPT – or Chat Generative Pre-trained Transformer – will, I can almost guarantee, cause readers to recoil with emotions including but not limited to anxiety and anger. It evokes mental images of HAL 9000, the terrifying supercomputer from 2001: A Space Odyssey. Legitimate privacy concerns aside, Alexa and Siri are viewed more favorably as “assistants,” and many see them as a means of simplifying their lives. The risks that come with these platforms are accepted as a minor tradeoff for our convenience. ChatGPT, on the other hand, is enveloped in misunderstandings about its risks and benefits, as well as its potential and limitations. It is in that spirit that I write today, seeking to develop an understanding of what AI brings to academia, both good and bad, and, more importantly, how it can be leveraged to improve our pedagogical practices.
The history of AI spans the twentieth century, but tools as sophisticated as ChatGPT have only come to fruition since the late 1990s. We have traveled a long way since the Turing test of the 1950s, though detecting text generated by ChatGPT does fundamentally hearken back to that milestone. That is, even with the sophistication of AI in 2023, issues still abound concerning its use, not the least of which are ethical and scholarly concerns. For the sake of discussion, our scope is one of pedagogy: how can we ensure that students do not engage in academic dishonesty using artificial intelligence? I posit that the better question is: how can we shape our pedagogy so that students won’t use an AI to cheat, but instead apply it positively, with the approval and direction of their professors?
It is important to understand the current potential and limitations of ChatGPT. The tool’s remarkable ability lies in the fact that it can access knowledge and information arguably more efficiently than a human being can. If you ask the AI to write a cogent argument discussing the causes of the Great Depression, it will scour the information it has previously been fed and reply with what it determines to be the most reasonable response. This illustrates one of the tool’s current limitations: ChatGPT only knows about information, sources, and events from before 2021, and its responses often lack nuance. The information it recalls can also vary in accuracy and in appropriateness to the prompt. In some instances, professors have found that the quality of its responses exceeds that of even the best students in their sections; yet in others, as a recent article reporting on an experiment at the University of Minnesota School of Law indicates, the AI could pass exams, but only marginally. In terms of citations, the work done by ChatGPT is underwhelming: it selects general resources rather than the specialized journals or monographs required by a particular assignment or germane to a specific field or discipline, providing a fairly easy tell.
For faculty and support staff considering how best to move forward in a world where ChatGPT is a reality, I recommend first familiarizing yourselves with the Academic Integrity policy here at the University at Buffalo. The policy will be updated for the upcoming Fall 2023 semester with specific language responding to the use of ChatGPT and similar AI-based tools. In short, students who use this tool in any way that misleads an instructor can currently be charged with an academic integrity violation. While mitigating the use of ChatGPT may appear to be the preferred solution, there are applications where the use of such tools should instead be encouraged in the classroom. I strongly encourage faculty members to take time early in the semester to discuss ChatGPT and AI use in their course, establish guidelines and expectations, and make clear how and why the use of such tools is or is not acceptable.
Considering the use cases for ChatGPT in the classroom will be crucial as we seek to improve the writing skills of our students. Take the fickle nature of peer review: we want students to provide authentic feedback, but there are inherent limitations, not the least of which is a student’s fear of being overly critical of their peers. ChatGPT, however, has no feelings to hurt. Faculty might use the AI to create a hypothetical argument and have their students critique its validity. Similarly, revising an AI-generated assignment would remove that same pressure from peer editing. Lastly, as noted above, the citations selected by ChatGPT tend to be lacking; encourage students to evaluate the quality of the sources the AI provides in support of its argument, and integrate that exercise into a larger lesson on annotated bibliographies. Faculty should also visit the University of Pennsylvania's Center for Teaching and Learning, which has numerous suggestions for using AI as a means to overcome some common pedagogical hang-ups.
There will also be relevant options embedded within the new Learning Management System, Brightspace, which will launch in Fall 2023. While these may not directly address ChatGPT, they can help discourage or otherwise eliminate student AI use in a course. For example, Brightspace allows faculty (and departments) to develop large question pools, randomizing the answer banks and question order. This renders old exams that may be in circulation useless and significantly eases the burden on faculty of creating new variations on questions each semester, a time-intensive process. A more rudimentary option would be to set strict time limits for examinations in conjunction with the Respondus lockdown browser (when testing is remote), making it difficult for students to leave the testing environment and generate an AI response. While some faculty across the country are resorting to exclusively pen-and-paper exams, this is impractical in the era of remote learning and inaccessible for students requiring accommodations; instead, precise word counts and time-limited writing assignments (both easily achieved in Brightspace) can render ChatGPT ineffective.
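To make the idea of randomized question pools concrete, here is a minimal sketch in Python. It is purely illustrative; the question pool and function below are hypothetical, and faculty would configure this behavior through Brightspace's quiz tools rather than in code:

```python
import copy
import random

# A hypothetical question pool; a real pool would hold far more
# questions than any single exam draws.
QUESTION_POOL = [
    {"prompt": "Name one cause of the Great Depression.",
     "choices": ["Bank failures", "The moon landing", "Penicillin"]},
    {"prompt": "In which decade was the Turing test proposed?",
     "choices": ["1930s", "1950s", "1990s"]},
    {"prompt": "What does 'GPT' stand for in ChatGPT?",
     "choices": ["Generative Pre-trained Transformer",
                 "General Purpose Technology",
                 "Graded Placement Test"]},
]

def build_exam(pool, num_questions):
    """Draw a random subset of questions, then shuffle each answer
    bank, so no two students see the same exam in the same order."""
    drawn = random.sample(pool, k=min(num_questions, len(pool)))
    exam = copy.deepcopy(drawn)  # avoid mutating the shared pool
    for question in exam:
        random.shuffle(question["choices"])
    return exam

# Each run (i.e., each student) gets a different draw and ordering,
# which is what makes circulating old exams useless.
for question in build_exam(QUESTION_POOL, 2):
    print(question["prompt"], question["choices"])
```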
What makes the methods above especially appealing, pedagogically, is that they are not only effective responses to ChatGPT; they are sound practices regardless. For example, chunking and scaffolding assignments not only helps students digest and retain material more effectively, it also teaches them to construct their own responses and deters cheating with an AI by making cheating largely unnecessary. Requiring thorough citations and having students refer to external research, while teaching them skills like annotating bibliographies to discern the quality and applicability of sources, will make them better critical thinkers. Creating more precise assessments that invoke authentic, reflective responses referring back to previous class sessions or materials (otherwise unknown to the AI) will ensure meaningful student participation and give the instructor feedback on the efficacy of their teaching. These steps will require reworking preexisting assignments and assessments, but that effort will go a long way toward improving student success rates, with or without AI as a factor.
The use of ChatGPT – and the continued evolution of artificial intelligence – is inevitable. That does not necessarily mean the end of education as we know it. The pandemic forced us back to the drawing board to reach students no longer physically present in our classrooms. Confronted with this new (perceived) threat of artificial intelligence, we will once again reexamine our pedagogical practices. With some deliberate modification of existing assignments, and a nod to traditional, effective teaching methods, let's enter the upcoming academic year confident that our students can succeed working alongside, rather than in competition with, ChatGPT.