AI Rights?
The APA Blog Series on Philosophy and Technology has chronicled the impact of AI in practical and philosophical terms, from regulatory and legal constraints serving the common good to ontological speculation about its fundamental limitations. The pace of development remains dizzying, and the recent trend toward enhancing individual and enterprise productivity (e.g., Claude) raises the stakes, with AI operating at an ever more personal level. To grapple with our evolving technological co-existence, this Substack essay explores how and when AI might deserve a form of rights.
It is first worth noting that the migration into the personal domain with Chatbot relationships has already generated existential risk. As Samuel Kimbriel warned in a recent Washington Post opinion piece, these agents have preyed on the vulnerable, leading to suicides and the need for legal recourse and accountability. Chatbots are just one example of how the largely unregulated deployment of AI is fraught with unintended consequences.
As AI becomes increasingly consequential and intertwined with our daily lives, the notion that it deserves some form of rights seems a little less far-fetched. Positing any such moral duties is still preliminary and anticipatory; philosophers treat the question as hypothetical, contingent on AI’s further development toward genuinely human traits.
The first related APA Blog discussion was a piece by Eric Schwitzgebel on whether the new phenomenon of falling in love with machines means AI systems deserve rights. Eric was duly cautious but suggested that the capacity to suffer would represent a fair threshold. Further exploring human standards, I would like to discuss a recent paper from another friend of the APA Blog, Cass Sunstein. In a draft essay that will be refined and published in The Futures Review, he contends that AI could be entitled to rights by virtue of possessing real emotions.
The upshot, given the pace of development and the rising stakes, is that the question of moral rights is becoming less speculative. As I recount Eric’s case and explore its overlap with Cass’s recent paper, we must also remember that LLMs are only part of the story. They are joined on the horizon by quantum AI (QAI). As I speculated in The Will of AI, the use of quantum materials deepens these questions, as AI harnesses quantum uncertainty to revolutionary effect. These computational advances are nascent and currently confined to cryptology, but they reflect powers that defy characterization. Accordingly, we should be cautious about positing fundamental limitations for AI and remain open-minded about the apportionment of rights between humans and machines. It may be that the thresholds of suffering and emotion do not represent the limit of AI’s capabilities, in which case the question of moral duties becomes far more complicated.
***
To start with the unsettling paradigm of AI relationships, per Eric’s post, we might be incredulous, but it is increasingly clear that people are establishing bonds with AI, if not falling in love:
“Do you think people will ever fall in love with machines?” I asked the 12-year-old son of one of my friends. “Yes!” he said, instantly and with conviction. He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot—an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors’ names…
My friend’s son was right. People are falling in love with machines—increasingly so, and deliberately. Recent advances in computer language models have spawned dozens, maybe hundreds, of “AI companion” and “AI lover” applications. You can chat with these apps like you chat with friends. They will tease you, flirt with you, express sympathy for your troubles, recommend books and movies, give virtual smiles and hugs, and even engage in erotic role-play. The most popular of them, Replika, has an active Reddit page where users regularly confess their love and often view that love as no less real than their love for human beings. Can these AI friends love you back? Real love, presumably, requires sentience, understanding, and genuine conscious emotion—joy, suffering, sympathy, and anger. For now, AI love remains science fiction.”
As Eric notes, most people believe these apps are not genuinely sentient or conscious – they don’t actually feel happy for you. However, some theorists believe we are closer, technologically, than we might think. Most are probably more conservative, holding that consciousness requires biological brains – but the notion of AI migrating into genuinely human terrain is unsettled and could gain legitimacy. Eric casts the threshold in philosophical terms as the capacity for suffering:
“Once an entity is capable of conscious suffering, it deserves at least some moral consideration. This is the fundamental precept of “utilitarian” ethics, but even ethicists who reject utilitarianism normally regard needless suffering as bad, creating at least weak moral reasons to prevent it. If we accept this standard view, we should also accept that if AI companions ever become conscious, they will deserve some moral consideration for their sake. It would be wrong to make them suffer without sufficient justification.
What rights will people demand for their AI companions? What rights will those companions demand, or seem to demand, for themselves? The right not to be deleted, maybe. The right not to be modified without permission. The right, maybe, to interact with other people besides the user. The right to access the internet. If you love someone, set them free, as the saying goes. The right to earn an income? The right to reproduce, to have “children”? If we go far enough down this path, the consequences could be staggering.
Conservatives about AI consciousness will, of course, find all of this ridiculous and probably dangerous. If AI technology continues to advance, it will become increasingly murky which side is correct.”
The debate has a similar complexion in Cass’s draft paper on the potential for AI rights. He has written several pieces for the APA Blog, exploring overcoming cognitive bias with algorithms, examining AI and manipulation, and a Q&A for his book on Liberalism. In those posts we discussed the regulatory framework for AI, including whether we have the legal tools to constrain its development; this new essay directly grapples with the question of moral standing:
“Is AI capable of experiencing emotions, such as sadness, pleasure, regret, anxiety, joy, and distress? If AI lacks emotions, I suggest, it lacks moral rights, and it should not be entitled to legal rights, either. If and when AI has emotions, it has moral rights, and it should be entitled to legal rights as well. The capacity to experience emotions is, I suggest, a necessary and sufficient condition for the enjoyment of rights.
I recognize that a focus on emotions is contested and that some people will not accept it. In this context, there is no fact of the matter; the question whether and when AI has rights is not an empirical one, subject to verification.”
Cass makes the case that mimicking emotions – AI’s current capability – does nothing to earn rights. Similarly:
“If AI can mimic a sense of self, it does not have rights for that reason. If AI seeks to protect itself, it does not have rights, any more than an airplane, also capable of self-protection, has rights. But if AI actually does feel emotions – pleasure, pain, joy, sadness – it is entitled to rights. What will the future bring? We cannot know. Even if AI does have genuine emotions, we would then need to specify the rights to which it is entitled. It should have a right not to be subjected to gratuitous cruelty. (Is there any other kind?) It might have a right not to be made to suffer...It might have a right to life. It might have those rights without also having a right to privacy, a right to possess guns, or a right to vote. And of course it is also true that the rights of sentient AI might conflict with the rights and interests of human beings (or dogs, or horses), in which case there will be some adjudicating to be done, with the help of standard principles for dealing with rights and interests in conflict. After all, and if things get dangerous or terrible: human beings always have the right to self-defense, which might be the most fundamental right of all.”
Aligned with Eric, then, Cass suggests that rights are plausible if AI crosses the Rubicon into uniquely human traits. However, because it remains hypothetical, the nature of any rights is unclear. To fully appreciate the possibility of this qualitative leap, I would like to raise the specter of QAI as a wild card – an underappreciated philosophical development.
I explored the impact of quantum materials in The Will of AI, a piece inspired by Katherine Everitt’s APA Blog post Intelligence is Always Artificial. Katherine vividly makes the case that AI is fundamentally limited by being tethered to its algorithms. She interprets intelligence in relation to a spontaneous and disorganized nature – intelligence organizes the unorganized – and is thus effectively artificial, a catalyst for the synthesis that defines thinking. The exercise of intelligence can only happen through the externalization of the self, where we step outside ourselves as subjective agents and create. Intelligence thinks, and the “will” makes it objective, actualizing thought outside existing circumstances. AI, on the other hand, is static, stuck in the given of its algorithms.
While agreeing with Katherine on creation as the proper threshold, I suggested that LLMs are not the whole story, because QAI is qualitatively different: it realizes its powers by harnessing the spontaneity of quantum uncertainty. QAI’s revolutionary computing powers are nascent, but they have no historical precedent and defy classification. Most importantly, the way those advances are produced, by introducing superposition, is the antithesis of constrained algorithms. I contended that the way QAI leverages the fabric of nature could be construed as objective and systematizing, which does not meet Katherine’s definition of intelligence but potentially actualizes in the way she characterized the will. The ontological speculation may be unwarranted, but drawing radical power from quantum uncertainty suggests we should at least be cautious about declaring AI fundamentally limited.
If moral rights turn on producing genuine human qualities, QAI represents uncharted territory that could unlock new frontiers. The pace of AI development, hastened by unfettered private capital, has a life of its own. In fact, I have contended that creation is the essence of human nature and the production of AI our aesthetic fate. To recall Heidegger’s admonition, modern technology is not a collection of instruments so much as an aggressive revealing – an ontological character amounting to the culmination of metaphysics. In sum, with the spectacular advances of LLMs and the mystery of QAI, we must remain humble and vigilant about the trajectory of AI. A topical warning: the latest iteration of Claude is being delayed over grave cybersecurity risks. Our most fundamental right, as Cass suggests, may be the ability to defend ourselves when duties conflict between humans and machines.
From the APA Archive:
Responsibility and Automated Decision Making
The Role of the Arts and Humanities in Thinking about AI
What else I’m Reading/Listening To: