It has been almost a year since OpenAI released ChatGPT online. With all the attention ChatGPT has received and continues to receive, you may wonder where things stand with it. This edition of the APA Blog’s Substack newsletter looks back at past APA coverage of ChatGPT, examining the challenges the technology raises in academia and beyond. It explores the narratives that have come to surround ChatGPT and argues that a reassertion of values is necessary for guiding our response to the technology.
What is ChatGPT?
ChatGPT is essentially an online chatbot, but what makes it different from previous chatbots? Unlike the customer service chatbots you may have encountered, ChatGPT runs on a novel kind of large language model known as a generative pre-trained transformer (GPT). Without getting too technical, this means that ChatGPT has been trained on huge amounts of text to recognize patterns in language and generate responses to prompts. This gives it the ability to answer questions, write poems, solve complicated math problems, and even write computer code.
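For readers curious what this prompt-and-response pattern looks like in practice, here is a minimal sketch of querying a GPT-style model programmatically through OpenAI’s Python library. The model name, library version, and setup details are illustrative assumptions, not a prescription; the ChatGPT website wraps the same basic exchange in a chat interface.

```python
# A minimal sketch of prompting a GPT-style chat model via
# OpenAI's Python library (v1+); assumes an API key is set in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model receives the conversation as a list of messages and
# generates a likely continuation, one token at a time.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Write a haiku about philosophy."},
    ],
)

print(response.choices[0].message.content)
```

The same loop of prompt in, generated text out underlies every use discussed below, from grammar checks to outline generation.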
How did academics respond to ChatGPT?
As Trystan S. Goetze noted in “ChatGPT Reveals What We Value and What We Do Not,” there were three general reactions to ChatGPT when it was first released. Some were fearful that ChatGPT would facilitate academic plagiarism; others were hopeful that they could use the program to automate mindless tasks like writing emails; still others responded with cynicism, joking that they could now offload important but often time-consuming tasks like writing student recommendation letters. As time went on, the philosophical concerns surrounding ChatGPT’s use became more clearly defined. Is automating certain tasks morally acceptable? Is there a difference between using ChatGPT to produce a generic report on a department meeting and using it to provide individualized feedback on student work? When does a student’s use of ChatGPT constitute cheating? With AI-detection tools still unreliable at best, should professors embrace the technology or reject it?
In January, Derek O’Connell offered some practical reflections on how educators were responding to students using ChatGPT for assignments. When O’Connell wrote a follow-up piece in June, student use of ChatGPT was becoming more widespread – a fact I can attest to, as I was working on my Master’s in philosophy at the time. While some students were using ChatGPT primarily as a thesaurus [What is a word that means…] or to simplify their language [Rewrite the following sentence using fewer words:…], others used it to generate examples [What is an example of…] and even to craft thesis statements and outlines [Provide an argument in support of the statement:…].
How has ChatGPT changed the classroom?
The lack of a coordinated national response left many institutions to figure out their own policies. Some institutions, including the London School of Economics where I studied, encouraged each department to come up with its own guidelines for ChatGPT. A brainstorming session with Master’s students and the Philosophy faculty in charge of updating the honor code revealed the difficulties of responding to the range of ways students might use ChatGPT on assignments.
At first, the most logical policy seemed to be to treat ChatGPT the same way one would treat peer collaboration. It is permissible to ask a peer to look over an essay for grammar, so it should likewise be permissible to ask ChatGPT to do so. However, students pointed out that philosophy is often collaborative: you might discuss an argument outline with a peer who gives you feedback for strengthening your argument. Should students, then, also be permitted to use ChatGPT to generate better examples or objections? These sorts of questions probe the implicit values underlying philosophy assessments, values that are often taken for granted. Should assessments push students to theorize and write as independently as possible, or should they teach students how to engage in productive dialogue with others? Is the goal to help students become the best philosophers they can be (whatever that means), or to develop the specific skills needed for a successful academic career? While these goals are of course not mutually exclusive, the difficulty of developing such a policy reveals the need for a clearer understanding of what we value about philosophical education. Developing effective fair-use policies for ChatGPT and future technologies will require that departments understand not only how students use the technology but also how that use either contributes to or circumvents the goals of the assignment, and of a philosophical education more broadly.
What should our response be going forward?
While some measures can be taken to restrict the academic use of ChatGPT, I think it is inevitable that ChatGPT will change the nature of academic work. From automating certain tasks to assisting in idea generation, it is impossible to completely prevent academics from using the technology. Furthermore, ChatGPT may have unexpected upsides. In “AI and Social Justice: The latest technological ‘revolution’ and the Capability Approach,” Alessandra Buccella writes about the radically transformative power of AI, arguing that AI now plays “an active role in determining new conditions of possession and realization of at least some basic capabilities.” She discusses the ways AI is being used to promote life, bodily health, and wildlife and environmental conservation.
Similarly, ChatGPT can be used to improve the conditions supporting educational attainment. The internet has made information more accessible than ever, and someone who is interested in philosophy but lacks access to traditional forms of education now has a wealth of online resources. Used in the right way, ChatGPT can make finding and accessing such resources easier. Someone who has stumbled upon an article on existentialism can ask ChatGPT who the major existentialist theorists are and which of their books to read first. When they encounter a term they do not understand, they can turn to ChatGPT for a quick definition. Yet it should also be noted that because ChatGPT draws on whatever information is available, it is likely to reinforce pre-existing informational inequalities. When asked about notable white male figures, it often provides more information, and more accurate information, than when asked about figures from marginalized backgrounds. This means the technology will likely tend to reinforce dominant ideologies while exacerbating the marginalization of particular schools of thought.
When considering how academia should respond to ChatGPT, we should also recognize the value it can offer as an assistive technology for those with disabilities. ChatGPT has helped some people with ADHD streamline their work processes. Rather than visiting multiple sites and toggling through many tabs to put together a reading list, they can ask ChatGPT to generate a comprehensive list of possible selections, then pare it down into a syllabus of their own. It can also help those with dyslexia by simplifying difficult explanations and reorganizing class material into more digestible forms.
Mich Ciurria argues that academia’s response to ChatGPT has been problematically carceral, and that this has been motivated by academia’s commitment to elitist norms. The use of ChatGPT by disabled and otherwise marginalized students undermines the exclusionary evaluation systems of universities. If anyone can produce well-formed English sentences, grading systems that reward “proper English” can no longer penalize students with learning disabilities, those who learned English as a second language, or those from disadvantaged educational backgrounds who may struggle to employ standard conventions. Yet, Ciurria also warns that these technologies may doubly reinforce the compulsory use of standard English. Just as the technology has the potential to reinforce informational inequalities, it also displays a preference for standard English by providing responses in that form unless explicitly instructed to do otherwise. By further promoting standard English to the exclusion of alternative communicative modes like non-standard vernaculars, feminine rhetorical styles, and dysfluent communication, ChatGPT has the potential to act as a linguistically eugenic technology.
Ciurria’s reflections once again point to the importance of defining and enacting our values in the face of technological progress. Whether the widespread adoption of ChatGPT reinforces exclusionary linguistic norms will depend largely on how the technology is developed going forward. Will we recognize and assert the value of non-standard English forms and take steps to ensure that ChatGPT does not further exacerbate their exclusion? Involving marginalized communities in the training of large language models is a first step toward ensuring that these technologies promote the values we want them to, but more needs to be done. Rather than focusing on punitive responses to ChatGPT, academics should be listening to the testimony of technology users, engaging in serious public discussions about what is valuable about our work, and creating policies that promote these goals.
Goetze reminds us that despite the “quasi-religious” progressive narratives hawked by many industry leaders, the adoption of these technologies is not inevitable. In the face of technological innovation, we must clearly define our values and ensure our use of automation remains consistent with them. Whether AI becomes an equalizing force or further entrenches pre-existing biases and inequalities depends wholly on our response to it. Do we reject ChatGPT and enact punitive measures against those who use it in academic contexts, or do we attempt to guide its development so that it does not exacerbate existing marginalization? Do we accept the proliferation of technologies like Ring that accelerate the panopticization of our homes and communities in exchange for promises of increased security? Do we trade our browsing and location data for content and advertisements better targeted to us? These are questions with significant implications for our future. Rather than allow them to be decided on our behalf, we ought to take an active role in ensuring that technology is used to support the sort of future we want.
More from the APA:
What We’re Reading:
The Jessica Simulation: Love and loss in the age of A.I.
This 2021 article from the San Francisco Chronicle explores Joshua Barbeau’s experience creating a simulation of his late fiancée using a beta version of GPT-3. This version enabled users to train a chatbot on their own data, although each bot had a limited lifespan determined by the number of credits the user purchased. The article raises fascinating questions related to grief, agency after death, and the anthropomorphization of AI.
From the Archive:
Recently Published Book Spotlight: Self-Improvement
In this Recently Published Book Spotlight, I talk with Mark Coeckelbergh, Professor of Philosophy of Media and Technology at the University of Vienna, about his book Self-Improvement: Technologies of the Soul in the Age of Artificial Intelligence (Columbia University Press, 2022). We discuss the role of technology in modern self-improvement culture, the need for a reorientation of values in light of current global crises, and how a new narrative approach may help accomplish this.
News Story of the Month:
How the Newly Updated ChatGPT Reports the Latest News
This report by a journalist at the Reuters Institute explores the latest updates to the paid ChatGPT model, which enable it to search the internet. Researchers asked the new model a range of questions related to recent news. They found that the model was better at providing information on longer-running stories, and that it generally did so without political bias (although it would provide politically biased information when prompted). Notably, the chatbot seems to favor sources in whatever language the conversation is conducted in. However, English appears to be its default, meaning that non-English-language news outlets are often underrepresented in answers, even for news stories in their own countries. This new function will likely improve with time, but it is worth considering how such biases might affect ChatGPT’s outputs, especially as the technology comes into wider use by both businesses and individuals.