It is now rare to open a newspaper or listen to a news program without some mention of AI. These discussions often have a slightly apocalyptic tone – the Terminator’s Skynet lives! In essence, AI confronts us with a real transformation of the way we do things. And that is both hugely exciting and scary.

Educators, too, are grappling with the implications of AI, particularly the way that ChatGPT (and alternatives such as Bard) upends assumptions about how we work. Evaluating students’ learning by asking them to write essays and exams, staples of secondary and tertiary education, might now become outdated. When I ask students about the use of ChatGPT in their schools, most tell me they know others who use it for everything from basic information gathering to full-on essay writing. No one has acknowledged doing the latter themselves, and I think this reflects that students are as much at sea about this as their teachers.

When ChatGPT writes an essay, it produces content that, in its sophisticated word usage and clarity of argument, is of higher quality than most high school students can deliver. Its use might therefore be irresistible to students feeling the pressure of high expectations, busy lives, and an absence of clear guidance on whether it is appropriate in particular fields of study and for different assignments.

As someone who loves technology but claims no training or insight into its development, I signed up for both OpenAI’s free version of ChatGPT and the more advanced GPT-4 to see what the fuss is about. When I asked it to write commentaries on everything from the risks inherent in AI and obscure historical questions to why the San Jose Sharks played so badly this year (the answer turned out to be a litany of the obvious, from poor goaltending to “simply being outplayed”), it formulated answers in impeccable language, and certainly faster than I could have done.

But there are ultimately three reasons why I would discourage students from using chatbots in their work with me:

  • First, having a chatbot do one’s work is arguably plagiarism: presenting someone else’s work as your own. As Stanford’s newly released policy on the use of any generative AI puts it, “Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person.” To quote ChatGPT directly, plagiarism is wrong because it “violates academic integrity, undermines the value of original work, and is a form of dishonesty.” (Colleges do recognize that there are academic spaces in which professors might well allow or even encourage students to use generative AI, including classes on the development of AI itself.)
  • Second, for work in which we use writing to explain something or set out an argument, ChatGPT undermines learning itself. After all, learning lies not in presenting a final product for a grade, but in the process that precedes it: creatively seeking out sources of information, critically weighing the validity of that information, and, after wrestling with competing ideas and conflicting evidence, using our command of language to construct an argument. Using a tool such as ChatGPT can sidestep that entire process.
  • Finally, as an admission advisor, I don’t think ChatGPT will serve college applicants well in writing their essays. Admission officers look for many things in an essay, from good writing skills to critical thought, but also insight into how an individual student thinks, sees the world, empathizes with others, and might contribute to a campus community. Running one college admission question after another through ChatGPT, I was astonished at the speed and literacy with which it answered. But I was equally struck by how anodyne every answer felt. None of the responses had me in it.

I asked it, for example, to write a short essay on my identity as someone of South African descent, and ChatGPT quickly produced a well-written piece on how I have been shaped by the values of my family and culture and by the political struggles of my country. All quite true – of me and of every other person who asks that question. Since it responds in broad strokes scraped from the experiences of others, it could not add anything about my unique experiences of how and where I grew up, nor could it adequately capture my distinctive voice. This is not a failure of ChatGPT. It is simply the nature of generative AI, and it is what would make such an essay easy to read but ultimately unsuccessful in its task.

In writing a college application essay, the task is not to produce the smoothest essay covering the broadest possible ground. Rather, it is to share something of what makes you uniquely and distinctively you. That can only be done by doing it yourself.