In June 2024, the Canadian Philosophical Association (CPA) held its annual conference in Montreal. The then-president of the CPA, Arthur Sullivan, addressed the conference, remarking on the inescapability of AI in the news and the money flooding into AI research. Philosophers, Sullivan suggested, should take advantage of this new money. We are uniquely well-equipped to understand and investigate the nature of AI and its ethical implications, so we should pursue this opportunity.
Sullivan’s suggestion is not unreasonable. Philosophy departments are not awash in money, and the destruction of the Department of Education will mean more belt-tightening for American universities. If there is money in AI research, why shouldn’t philosophers get their slice of the pie?
Although the prospect of this research funding is attractive, it raises some ethical questions. The public discourse surrounding AI is often detached from reality, and it is often these implausible claims about the dangers or potential of AI that justify the funding for AI research. When philosophers accept this funding, are they legitimizing the misleading, hyperbolic narratives behind this influx of money? Does the fact that this new funding is based on a false narrative create an incentive to feed into that narrative? Philosophers must be careful about how they discuss AI.
Some Findings of Philosophy of AI
Before proceeding, let’s clarify some terminology. The existing technologies that people are usually referring to when they talk about “AI” are Large Language Models (LLMs). LLMs are essentially sophisticated chatbots or predictive text: based on the written texts fed into them, they string series of words together. The phrase “AI” has also been retroactively applied to other, pre-existing algorithms, like streaming sites’ recommendation systems or internet search engines. Lastly, in many everyday conversations or tech publications, “AI” is used to refer to a fictional future technology that is intelligent in a way similar to humans and is able to act on its own. Frequently, these three usages of the term “AI” are conflated or used interchangeably, adding to the confusion.
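To make the “predictive text” description concrete, here is a deliberately crude sketch of the underlying idea: generating text by repeatedly picking a likely next word, given the previous word, from patterns observed in a tiny made-up corpus. This is only an illustration of what “stringing words together” means; real LLMs predict the next token with neural networks trained on vast amounts of text, not with simple word-pair counts like these.

```python
# Toy next-word predictor: an illustration of "predictive text", not of how
# real LLMs are built (they use neural networks trained on huge corpora).
import random
from collections import defaultdict, Counter

# A tiny, made-up corpus standing in for the texts fed to the model.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start="the", length=8):
    """String words together by sampling a likely next word at each step."""
    words = [start]
    for _ in range(length):
        counts = following[words[-1]]
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate())  # e.g. "the cat sat on the rug . the dog"
```

The output looks grammatical, but there is no understanding behind it; scaling the same trick up with far more data and far more sophisticated statistics yields fluent text, not a mind.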
If the headlines of tech publications are to be believed, AI has already become sentient, the cataclysmic “singularity” is mere months away, and before long everyone will be replaced in their jobs by AI. Perhaps unsurprisingly, philosophy of AI does not support these conclusions. Instead, philosophers agree that AI lacks consciousness, there is no chance of it developing consciousness, and the main ethical challenges are issues like surveillance or the proliferation of disinformation.
Is AI intelligent in the way that human beings are intelligent? The answer depends on what you have in mind by “intelligent.” If by intelligent you mean that the AI contains and can present some information, then by all means it’s intelligent, but only in the same sense as a book or a clock. When people claim in tech publications that AI is “intelligent,” they are usually suggesting that it thinks, has cognition, possesses mental states, or has subjective experience. There is no reason to believe any of these things are true, and every reason to believe they are not, given our knowledge of the underlying technology. LLMs simply simulate communication with another person. It is analogous to booting up your favourite video game and talking to a non-player character (NPC). That representation of a fictional character is simulating a conversation, but it would be silly to suggest it has thoughts.
Sometimes the claim is instead that LLMs are not presently intelligent in the robust sense, but soon they will evolve into something that is! Tech enthusiasts have even come up with a term for the coming, actually intelligent version of the technology: artificial general intelligence (AGI). However, there is also no reason to think that LLMs are capable of “evolving” into something that can think. Simply scaling up a sophisticated chatbot by making it better at stringing words together will not cause it to suddenly develop consciousness.
I Have a Bridge to Sell You
Why is there such a gap between the capabilities of the existing technology that we call AI and the conception of AI that appears in tech publications or public discussions about its potential? The quick explanation is that tech entrepreneurs have something to sell you. What they have is not a product that solves some problem or makes our lives easier; it’s a story. This new technology will revolutionize the world, changing everyone’s life irrevocably! Or so the story goes. The intended market for this story is not the everyday consumer, but investors. As the economist Robert Shiller put it, this is narrative economics. The sky-high valuations of tech stocks, which often far outstrip their profitability, are based on narratives about how some new technology will be the next big thing.
This is not a new phenomenon. For the sake of comparison, consider previous hype-cycles focused on cryptocurrencies, non-fungible tokens (NFTs), or the metaverse. To focus on just one of these cases: around 2021, social media companies began pivoting toward the notion of the “metaverse.” Facebook took this so seriously that it renamed its parent company Meta. The core idea behind the metaverse was that of a virtual world that people could plug into and live in, similar to the fictional virtual worlds presented in Snow Crash, The Matrix, or Ready Player One. According to its proponents, soon everyone would be spending their time in the metaverse and life would change forever. Yet critics noted that this promised technology was physically impossible.
The existing, purported metaverses were platforms like Horizon Worlds and Decentraland. These were, effectively, just virtual chatrooms where people’s avatars could stand around and talk. Additionally, several video games with social elements, such as Minecraft, Fortnite, or Roblox, were co-opted by metaverse enthusiasts and described as metaverses. As with AI, a massive gulf existed between the real technology (run-of-the-mill video games) and the promised fully immersive virtual worlds. The failure of the metaverse to live up to its own hype meant a lack of mass adoption, and eventually even its greatest supporters, like Meta (née Facebook), abandoned it in favour of AI.
We can identify three features common to the narratives surrounding these technologies:
1. The claim that a fictional, future version of the technology will change the world.
2. The existing version of the technology is unimpressive and lacks practical utility.
3. Pre-existing devices are retroactively designated as precursors of the technology.
Notably, in each of these cases, the success of the narrative depends on investors lacking a firm understanding of the technology in question. Cryptocurrencies were never going to become a proper exchange currency, NFTs were non-functional, and the metaverse was always a mediocre video game that nobody wanted to play. Anyone familiar with the technologies themselves would resist being bamboozled by tech companies’ and entrepreneurs’ predictions that these technologies would, in fact, change the world. The same is true of the narratives surrounding AI technology.
At the CPA in 2024, I asked Sullivan how AI was different from these other cases. Why is this not just another fad, a hype-cycle that will end after a few years and be replaced by something new? AI is different, he insisted, because unlike cryptocurrencies, NFTs, or the metaverse, AI has actual, practical uses. One potential use that Sullivan and others have pointed to is automation. For instance, it has recently been claimed that AI will be able to replace all white-collar workers.
Like many university professors, I primarily interact with AI when grading student papers that it has written. These essays are invariably terrible. For instance, last year, several papers submitted by students who used AI claimed that mind-body dualism is disproven by evidence from neuroscience, which shows that certain parts of the brain light up when one is experiencing certain mental events. This is an embarrassingly bad argument, but ChatGPT scraped it from somewhere and confidently asserted it anyway. ChatGPT and other AI text generators are well known for their tendency to “hallucinate,” make up facts, and cite non-existent sources. If AI cannot even produce adequate term papers for first-year university classes, why would we trust it with medical or legal documents? One area where AI is predicted to put people out of work is law, a prediction that is simply silly.
My brother, himself a lawyer, illustrated this by asking Google’s AI search function a basic question about Ontario labour law: how many sick days is an employee entitled to in Ontario? The AI assistant confidently presented false information, claiming that people are entitled to more sick days than they actually have by default. For reasons like this, people lament that AI integration has made search engines worse than they were beforehand. I once recounted this anecdote to a philosopher who complained that my brother’s mistake was not spending hours coaching the AI to give reliable answers. However, this misses the point entirely. If the AI is dependent on a lawyer “coaching” it to give half-decent answers, then there is no real possibility of the AI taking over the lawyer’s job.
Apart from law, attempts at automation in other areas have so far met with failure. Self-driving cars have failed to materialize despite billions of dollars in investment. In the medical field, attempts to find a useful application of AI have come up empty-handed. Even when it comes to coding and computer programming, where AI’s purported skill is lauded, experts have observed that it is only capable of performing simple coding tasks and struggles with real-world projects.
While there are surely employers who would love to replace their employees with AI that they do not have to pay, the existing technology is not remotely prepared to replace human workers.
Follow the Money
During a period of contraction in the humanities in general and philosophy specifically, AI is a growth industry. In 2023, approximately 55 positions in philosophy of AI were posted to PhilJobs, the popular repository of philosophy job postings. These included about 40 tenure-track positions, 10 post-doctoral fellowships, and 5 temporary appointments. Another approximately 55 positions were posted in 2024, among them about 30 tenure-track positions, 20 post-doctoral fellowships, and 5 temporary appointments. In both years there were more new jobs in philosophy of AI than in any other sub-field, including broader subjects like ethics or metaphysics/epistemology.
Some have likened this explosion of interest and hiring in philosophy of AI to the expansion of bioethics as a sub-field in the late sixties and early seventies. New technological developments like gene mapping, in vitro fertilization, and organ transplantation brought new ethical questions to the fore, prompting both public debate and greater investment in bioethics.
There is one crucial difference between that episode and the current craze over AI. The new bioethics issues rose to prominence when procedures like in vitro fertilization or organ transplants became possible. That is, what was actually happening in medical research raised new questions. On the other hand, as we have seen, the current fervour over AI is driven by prophecies about what will happen, and most of those predictions have not stood up to sober philosophical analysis. When it comes to AI, the influx of funding and jobs is based on the narratives peddled by tech companies.
Consider the Center for AI Safety (CAIS), a nonprofit founded in 2022 by AI doomsayers. In 2023, the Center published its statement on AI risk, attracting the signatures of dozens of public figures, academics, and tech moguls. The statement says that AI poses an existential threat to humanity. At best, CAIS mixes a couple of genuine challenges posed by LLMs, such as their ability to spread misinformation, with a heavy dose of science fiction. However, CAIS attracts its funding through cataclysmic predictions about AI that are detached from reality. If it hired philosophers to study AI, those philosophers would have a vested interest in promoting the narrative that secures CAIS its funding.
Occasionally, philosophers will consider the implications of an entirely fictional version of AI. In his decade-old book Superintelligence: Paths, Dangers, Strategies (2014), Nick Bostrom wrote that if artificial intelligence were created, it could pose an existential threat to humanity. As we have seen, we have no reason to believe that LLMs are or could become the kind of artificial intelligence that Bostrom had in mind, so his warning is the equivalent of saying that if the Death Star existed, it would threaten life on Earth. Nonetheless, Bostrom’s book received high praise from AI evangelists like Elon Musk and Sam Altman, and it lent their assertions about AI extra credibility.
There is a moral question here: is it ethical to participate in an ecosystem that legitimizes the profitable, apocalyptic delusions of tech CEOs at the expense of the truth? If philosophers are supposed to be concerned with the pursuit of truth, the obvious answer is no. However, some might suggest that we take the money of the tech moguls, or of governments taken in by their stories, while doing good work that reveals the folly of their silliest predictions. Perhaps philosophers could even refocus the debate on the real ethical challenges of AI, such as surveillance or disinformation. This is easier said than done. When your livelihood depends on other people believing in nonsense, you have an interest in not pointing out that nonsense. It is not inevitable that philosophers will be co-opted by the AI hype-cycle, but it is a danger. So, we must regard investments in AI research with suspicion.
From the Archive
“Navigating the Intersection of AI, Science, and Society”: Michael Lissack asks whether AI is useful to philosophers and how to use it responsibly.
“How to Prevent a Shipwreck…”: What should AI ethics look like? Alexandra Fyre provides perspective on the main issues.
“Are You Anthropomorphizing AI?”: Ali Hasan provides an excellent look at why some people come to believe AI is conscious.
“The Ethics and Character of Creating Personhood”: Leigh Duffy asks science-fictional questions about the ethical import of creating AI life.
“What Hegel Has to Teach Us About AI”: Through the lens of Hegel, Jensen Suther explains why AI lacks human-like intelligence.
What I’m Reading/Listening To:
At the moment, I’m reading Bertrand Russell’s Power: A New Social Analysis. The book is both a prescient examination of the power some human beings wield over others and an excellent illustration of what public philosophy can look like. There is a reason why Russell won the Nobel Prize in Literature, and many of his insights still apply to the power that tech companies’ narratives wield over the public imagination.
Periodically you need to replace your old tech with something new. When the time comes, you should get yourself some Upgrades (from Montreal-based jazz fusion band, The Liquor Store).
I agree with mostly everything, but I worry about the conclusion. Sure, one may write for people who aren’t informed and thus believe the hype, but it seems like not battling that is worse than staying on the sidelines watching it all pass by.
It also seems like the strongest parts of your argument would not apply if the funding agency is actually philosophical in nature and not accelerationist or doomsaying.
Maybe I’m wrong, happy to hear back from you!