UB has no universal policy about student use of artificial intelligence. Instructors have the academic freedom to determine which tools students can and cannot use in meeting course learning objectives. This includes artificial intelligence tools such as ChatGPT.
Because there is no universal UB policy on artificial intelligence tools, instructors need to give students clear guidance about what is and is not allowed in their course overall and/or on each assessment. Just as students are told whether an exam is “open book,” they need to be told whether AI tools are allowed. This should be done in your syllabus and orally in class when discussing each assignment or exam. Some examples of syllabus language from other institutions can be found here.
Whether to allow students to use artificial intelligence on course assessments is best determined by evaluating how that use affects fulfillment of the learning objectives. If an AI tool can be used to complete low-level work that students can already do, for example, it can serve as a head start that propels students toward higher-level thinking. But if use of the AI tool replaces the student thinking and process that you intend to assess, it should be disallowed.
Instructors should make the rationale for their rules around AI as explicit as possible. Students don’t always understand how the assessments instructors design for them lead to fulfillment of learning objectives. As the expert, the instructor should help students identify the ways in which using artificial intelligence can help or hinder progress toward those objectives. The more students understand why a tool is disallowed, the more likely they are to respect that rule.
Some of the common ways instructors can identify use of AI tools include:
The procedure for pursuing this suspicion is the same as in any academic dishonesty case, and instructors should follow the consultative resolution process dictated by the academic integrity policy. There are some additional considerations that may help here, however, including:
Whenever an instructor believes it is “more likely than not” that academic dishonesty occurred, they are obligated to report it to the Office of Academic Integrity (OAI). This standard of evidence is called “preponderance”; instructors do not need the certainty of “beyond a reasonable doubt.” In the case of unauthorized use of AI, evidence meeting the preponderance standard can come in many forms. If you are uncertain about this, contact OAI for guidance.
While unauthorized use of AI on assessments can fall under a number of the violations described in the policy, it is commonly reported to OAI as “falsifying academic materials.” This violation includes “submitting a report, paper, materials, computer data or examination (or any considerable part thereof) prepared by any person or technology (e.g., artificial intelligence) other than the student responsible for the assignment.” Because plagiarism implies violating ownership of ideas or language, and AI cannot hold ownership, these cases are typically not processed as plagiarism under UB’s policy.
There is no foolproof way to do this, but there are some steps you can take to prevent unauthorized AI use on assessments:
Guidance for students on this topic is provided here.
Grammarly was originally designed as a tool to check spelling and grammar. However, Grammarly and other tools like DALL-E and Perplexity now include generative artificial intelligence features that can suggest heavy rewrites of material and/or create new text. Please clarify with your students what is and is not acceptable in your courses. Additional guidance can be found in a human-written Inside Higher Ed article and an AI-generated/human-modified article in Scribe.