Against University-wide policies concerning AI writing assistants

Christopher Potts
4 min read · Feb 18, 2023

For my Winter 2023 Stanford course Introduction to Semantics and Pragmatics, I adopted a pragmatic policy regarding the use of AI writing assistants for coursework. Since then, ChatGPT has become a notional member of our course community. In class, I often write prompts based on the assignment questions and offer color commentary on the model’s output.

Overall, I reckon ChatGPT would be getting a C– in the course. It’s good at working with lambda expressions, unpredictable when reasoning about foundational principles of linguistics, and sometimes gives answers that, if submitted by a student, would suggest the student had not even glanced at the course material.

ChatGPT output. The annotation says “No!!!”, and the text says, “However, it’s important to note that Cresswell’s Most Certain Principle is a principle of meaning, not a principle of truth. The principle states that if two sentences have different meanings, then there must be at least one possible situation in which one is true and the other is false. In other words, it’s a principle that links meaning and truth in the context of possible worlds semantics.”
ChatGPT making a classic mistake. Cresswell’s Most Certain Principle does not say this; it runs in the other direction: if one sentence is true and another false in the same situation, then the two differ in meaning.

A number of students in the class are also taking Ethics, Public Policy, and Technological Change (CS 182), where Assignment 3 is to write a policy memo for Stanford’s leadership concerning how the university should respond to these new technologies, paying particular attention to issues of academic dishonesty. The assignment links to my own course policy (I am flattered!), and I’ve been having lively conversations with students about their draft memos and related issues.

This post is a summary of the thoughts I have had when talking with these students.

An issue of fairness

I feel that I myself have benefitted enormously from getting to conduct my entire professional life in my own native language, and I believe that I would be less successful if I had to do this in another language. AI writing assistants can level the playing field here by automatically mapping a non-native speaker’s English into Standard Literary English. I cannot in good conscience stand in the way of this. If I had to create all my course materials in another language, I would be enormously grateful to have access to technology like this.

Any policy will be arbitrary

Everyone should run a spellcheck program, and many of us rely on autocomplete without worrying that we are somehow cheating. There is no clear dividing line between these tools and ChatGPT. This is easy to see when one considers Google Docs: it autocompletes short phrases now, and it’s a safe bet that it will offer entire paragraphs in the near future. What is the n such that autocompleting n words is okay and n+1 is not? There is no sensible answer to this question. Even if we settled on an n, we would face questions about whether “of” counts the same as “osmosis”. The four-word phrase “out of this world” is worth much less than the single word “artifact” in many contexts in my course. There is no end to the subtle distinctions one can make here.

GPT-3 (davinci-003) prompt with completion. The prompt is “Any policy against AI writing assistants will be arbitrary: Everyone should run a spellcheck program, and many of us rely on autocomplete” and the model continues with “and grammar-correction functions, so it would be hard to draw a line between acceptable and unacceptable tools.”
A sensible overall position statement, though in the end I did not use this formulation in my post.

Informal rules are better

Suppose we did define a precise general policy on where and how AI writing assistants could be used. As an educator, I want my students to be skeptical and contrarian in the face of confident authorities seeking to regulate their behavior without a clear rationale for doing so. Such students are going to think creatively about how to run right up against the policy without violating it. They will discover completely problematic things that are consistent with the policy and perfectly mundane things that violate it. All of this is a huge distraction from my actual goals as an educator. (I grant that exploiting rules and regulations in this way is a valuable, albeit dubious, skill in modern life, but it’s not one I particularly want to teach.)

I would refuse to comply with University-wide policies

I told my students they were free to use ChatGPT when working with lambda expressions. My own spongy brain is not especially good at these symbolic computations. (My actual advice to students was to use Python, since I worry that ChatGPT has the same trouble with symbolic computation.) On the other hand, if I were teaching a class giving students the experience of writing an entire novel, I would forbid all use of these models. But I wouldn’t do that for creative writing in general, because we’ve seen already how these models can contribute to artistic expression.

ChatGPT prompt: “Convert ((lambda f (lambda x (x < f(4)))(lamdba y (1 + y))(2).” The first part of the model’s response is shown: “There is a typo in the original expression: “lamdba” should be “lambda”. Assuming the corrected expression is: ((lambda f (lambda x (x < f(4)))) (lambda y (1 + y))) (2)”
ChatGPT helpfully spots my typo and silently corrects my bracketing mistake. From here, it did A+ work.
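
For readers who want to check this kind of thing mechanically rather than by model, here is a minimal Python sketch of the same computation (the expression is the corrected one from the screenshot; writing it out this way is my own illustration, not ChatGPT’s output):

    # The corrected expression, written directly as Python lambdas:
    # ((lambda f (lambda x (x < f(4)))) (lambda y (1 + y))) (2)
    result = (lambda f: (lambda x: x < f(4)))(lambda y: 1 + y)(2)
    print(result)  # True, since the reduction gives 2 < (1 + 4)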

For my own course, linking AI writing assistants to Stanford’s existing plagiarism policy seems right and good to me, but this is not a recommendation for all classes by any means. To date, all the concerns people have expressed to me about the policy have actually been about plagiarism policies in general, rather than about AI writing assistants in particular. Policies specifically crafted to target AI writing assistants will, though, bring lots of entirely new problems.

Acknowledgements

Thank you to my Linguist 130a/230a students, and to the CS 182 students who’ve visited my office hours! I am confident that you will critically and skeptically review the above positions. Please spellcheck your assignments and use a computer to do lambda conversion.


Christopher Potts

Stanford Professor of Linguistics and, by courtesy, of Computer Science, and member of @stanfordnlp and @StanfordAILab. He/Him/His.