Cass Sunstein on Manipulation and AI
Public Philosophy Digest
This month’s APA Blog Substack Newsletter explores a new book on manipulation with friend of the APA Blog, Cass Sunstein. Cass wrote a previous Blog piece on overcoming cognitive bias with algorithms, and his new book tackles the growing phenomenon of manipulation: what it is, why it’s bad, and, most importantly, what to do about it. Cass highlights how manipulation restricts autonomy and dignity, profoundly threatening our capacity to exercise agency. At the same time, the assessment is prescriptive, describing how AI can be harnessed to counteract these dangers in our daily lives and offering guidance to protect consumers, investors, and workers.
I want to extend his review of solutions, opening the aperture on law and politics to explore options for controlling and leveraging digital forces. Drawing on prior Blog posts, I ask Cass whether we need new legal tools or, alternatively, might look to the past, in the form of the classical legal tradition, for answers. To the extent the solution is political, we conclude by discussing prospects for restraint in the modern Liberal state.
To start, I would like to share highlights of Cass’s account of manipulation. As he notes in the preface to the book, it is a close cousin of coercion, but with a unique and pernicious character all its own:
“We live in an era in which manipulation is pervasive, especially online. A central purpose of manipulation is to attract attention…But the ability to manipulate people goes well beyond grabbing attention. Manipulators, often aided by (or consisting of) artificial intelligence, can play on your desires, your hopes, your fears, and your weaknesses…Manipulation is a serious threat to well-functioning markets, which depend on informed choices. Manipulation is also a serious threat to democracy. Perhaps above all, it compromises freedom. It is a threat to our capacity to exercise agency.”
Cass begins with a definition that is oriented around policy or law:
“Manipulation is a form of influence, intended to affect thought or action (or both), and not involving lies or deception, that does not respect its victim’s capacity for reflective and deliberate choice”.
The fundamental problem is a lack of transparency, at a time when information increasingly represents the currency of our age. Manipulation, he notes, is an insult to both autonomy and dignity:
“An act of manipulation does not treat people with respect. From the standpoint of autonomy, the problem is that manipulation can deprive people of agency…The most serious problems arise when people are not given clarity that they are committing themselves to certain terms…”.
To anticipate our Q&A: in chapter 9 of the book, discussing the promise and threat of AI, he explores ways AI can provide critical information for good decision making. He outlines how no form of choice architecture is neutral; choices are unavoidable and require clarity. Nudges, subtle interventions that influence and encourage, are essential to avoiding the biases that can lead us to bad decisions, highlighting AI’s fundamental promise:
“Choice Engines can be defined as mechanisms, typically (but not always) online, by which choosers are given an opportunity to provide some information about themselves and their preferences, and then receive a recommendation, or a set of recommendations, about what they ought to choose”.
Cass goes on to reinforce that Choice Engines, powered by AI and authorized or required by law, can beneficially address information gaps and behavioral biases:
“Choice Engines, powered by artificial intelligence, have considerable potential to improve consumer welfare and also to reduce externalities, but without regulation, we have reason to question whether they will always or generally do that. Those who design Choice Engines may or may not count as fiduciaries, but at a minimum, it makes sense to scrutinize all forms of choice architecture for deception and manipulation, broadly understood.”
“In markets, artificial intelligence provides unprecedented opportunities for manipulating people, and for targeting informational deficits and behavioral biases. Still, a properly regulated system of AI-powered Choice Engines could produce massive benefits. It could provide more personalized assistance, or nudging, than have ever been possible before. It could make life easier to navigate…It could make life less nasty, less brutish, less short – and less hard.”
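To make the definition concrete, here is a minimal sketch of what a Choice Engine’s core loop might look like. This is purely illustrative: the `Option` type, the `recommend` function, the attribute names, and the preference weights are hypothetical inventions for this post, not anything drawn from the book, and a real engine would involve preference elicitation, learning, and disclosure well beyond a weighted sum.

```python
# A minimal, hypothetical sketch of a Choice Engine: the chooser supplies
# preference weights over attributes, and the engine returns a ranked
# recommendation. All names and data here are illustrative only.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    attributes: dict[str, float]  # e.g. {"price": 0.8, "fuel_economy": 0.4}

def recommend(options: list[Option], preferences: dict[str, float],
              top_n: int = 3) -> list[str]:
    """Score each option by the chooser's stated preference weights and
    return the names of the top-ranked choices."""
    def score(opt: Option) -> float:
        # Weighted sum over the attributes the chooser says they care about.
        return sum(w * opt.attributes.get(attr, 0.0)
                   for attr, w in preferences.items())
    ranked = sorted(options, key=score, reverse=True)
    return [opt.name for opt in ranked[:top_n]]

# A chooser who weights fuel economy more heavily than price.
cars = [
    Option("Sedan A", {"price": 0.8, "fuel_economy": 0.4}),
    Option("Hybrid B", {"price": 0.5, "fuel_economy": 0.9}),
    Option("SUV C", {"price": 0.3, "fuel_economy": 0.2}),
]
print(recommend(cars, {"price": 0.3, "fuel_economy": 0.7}))
# -> ['Hybrid B', 'Sedan A', 'SUV C']
```

Even this toy version shows where Cass’s regulatory concern bites: whether the preference weights are honestly elicited, and whether the scoring is transparent or shrouded, is precisely where manipulation can enter.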
Charlie Taben (CT): Cass, thanks so much for your continued engagement with the APA Blog. I would like to frame your solutions in a broader context to explore a range of ways the law can harness the power of AI. To start, before pulling on the thread of your prescriptions, please outline the stakes of managing digital threats and why you wrote the book.
Cass Sunstein (CS): My first goal, extending far beyond digital threats, is to get clear on what manipulation is. This is a sharply disputed issue in philosophical circles, and I have my own preferred definition, developed after many struggles with the issue. My second goal, also extending far beyond digital threats, is to see what is wrong with manipulation. There is a Kantian answer, having to do with human dignity, and a Millian answer, having to do with welfare. I like both, but in the end, I stand with Mill. My third goal is to explore what ought to be done about manipulation. I suggest that we need a right not to be manipulated, and I try to give content to it.
CT: Tapping your legal expertise to explore grounds for changing or expanding the law to counteract manipulation, I would like to draw on a novel claim from a prior Blog piece by Trystan Goetze on responsibility and automated decision-making. It illustrates how we might augment legal tools to address digital harms, for instance to secure the right not to be manipulated. Trystan introduces a broader framework for holding someone responsible, in which expanding the bounds of accountability could secure such a right.
Trystan explores holding the designers of systems responsible for future rather than past events, given the difficulty of pinpointing a responsible agent. He describes how both human and computerized decision-makers can exhibit bias, but with a computerized system there is no clear accountability: it is difficult to attribute blame to a particular person because of the so-called “responsibility gap”:
“While both human and computerized decision-makers can exhibit bias, the victim of a biased decision has options when dealing with another human being…There are a few reasons why attributing responsibility for the decisions taken by a computer system to a particular person may be fraught. Some stem from a typical understanding of moral responsibility in analytic philosophy. On this received view, to be responsible for something, you must (1) have been in control of your actions causing the thing (the control condition), and (2) been in a position to know what you were doing (the epistemic condition)…. Because of the ways in which an automated decision system may be developed, deployed, and operated, it can be contentious, difficult, or even impossible to specify a human individual or group who should be held accountable for the decisions...”
Trystan then argues that perhaps there could be a forward-looking sense of responsibility. Like a parent, you might have to take ownership of the system’s future actions:
“What gives rise to duties to take responsibility in these ways? I claim that there exists a special kind of relationship between the agent and the entity for whose behavior the agent should take responsibility, which I call moral entanglement. This describes cases where aspects of one’s identity are tightly bound with another: parents and children are one example, but similar relationships exist between citizens and states, employees and employers, and even our present selves and past selves. In each of these instances, some aspect of our identity is connected to the other entity such that distinctive duties arise….With this account in mind, the application to computer systems is clear enough. The creators of a computer system stand in a relationship of moral entanglement with it because it is the result of their professional roles as technologists…Similar claims can be made of the users of the system.”
Trystan acknowledges that more details are needed to develop his parental analogy, in which those who bring a system into the world must take responsibility for ensuring it does not harm others; but he highlights the potential of reinterpreting accountability. My question, then, is whether new harms, or a right not to be manipulated, require adapting the law to novel circumstances. Similar to the right to a human decision proposed by another friend of the APA Blog, John Tasioulas, perhaps we need to reenvision what it means to make someone accountable. You were declarative on the use of law in the final sentence of your book – “Manipulation can be a form of theft. Social norms should stand against it. In the most egregious cases, so should law” – but you do not discuss how the law might need to change given distinctly new challenges. Do we need to augment the law or create new precedents because the threats are different, or is the existing framework sufficient?
CS: I would like to keep things simple, or as simple as possible. We should focus on two things: consent and harm. If you are bound by terms that you did not see, because they were hidden, you have been manipulated, and you have been harmed. If you are tricked into agreeing to an arrangement that you did not understand, you have been manipulated, and you have probably been harmed. “Shrouded attributes,” as economists call them, are something to target. This is a short but not-so-short way of saying that we need something new – a right not to be manipulated – and to specify the right, we start with consent and harm, and with clear cases. In terms of who the defendant is: that’s usually pretty easy. If we’re speaking of AI or an algorithm, well, there’s some human actor behind it.
CT: To continue assessing the adequacy of our tools, my next question approaches leveraging the law in a different way: looking backward to draw on the heritage of the classical legal tradition. To be more transparent, it explores the role of religion and how it can be harnessed in the Liberal state to restrain technology and serve the public interest. I have written several APA pieces making the case that the lineage and weight of the Common Good can be a powerful force for crystallizing public sentiment.
In The Common Good and AI, I described attending a fascinating panel with John Tasioulas at the UN. John was a keynote speaker alongside the Pope’s advisor on AI, Rev. Dr. Paolo Benanti. The discussion addressed ethical questions to ensure that machine learning does not inhibit human decision-making or the moral frameworks that have supported the development of humanity since the beginning of civilization. The panel of experts focused on finding common ground between secular humanistic ethicists and their counterparts in the faith-based sphere.
I made the case that the Common Good could fairly be construed as a proxy for the humanist approach to ethical boundaries, and potentially invaluable in framing commercial development to serve the public interest. Indeed, using the example of your esteemed colleague, Adrian Vermeule, I specifically cited his Common Good Constitutionalism as a concrete means to that end, one which conceives of law as “a reasoned ordering to the common good,” promoting the goods of a flourishing political community.
The question, then, is whether there might be solutions drawn from our past. Could Common Good Constitutionalism be a nose under the tent for changing our culture, and therefore our behavior, regulating commerce in intuitive ways to promote norms? Again, your prescriptions were more conventional, such as more aggressive controls that “might take the form of fines and an order to cease and desist, enforced by an independent commission.” However, could an even broader reinterpretation help in leveraging existing tools, such as Common Good Constitutionalism driving enforcement of antitrust laws, grounded in the primacy of the public interest?
CS: If we focus on consent and harm, we can make progress in a way that is agreeable to many different traditions. Suppose that people are tricked into giving a certain amount of money, every month and automatically, to a service that benefits them not at all. Suppose people end up in a contractual arrangement that they were given no chance to understand. I hope that people who differ on fundamental matters can agree that that is a problem. Utilitarians can agree, Kantians can agree, and people with different religious commitments, or no religious commitments, can agree.
CT: To end with the broadest question on solutions, and to highlight a more pessimistic case from another APA Blog essay, on why the internet became an autocracy, I turn to Vili Lehdonvirta, who suggests we have a more systemic and fundamental problem. The digital empires are tantamount to a new form of state, and “regime change” might be our realistic recourse:
“The Internet was supposed to change the structure of society. It was supposed to empower individuals and communities, create a “level playing field”, and “give everyone access to the same information”, according to eBay founder Pierre Omidyar. It was supposed to render obsolete centralized authorities that set up artificial boundaries and compile dossiers on us and be governed by “ethics” instead of “systems erected to impose order,” according to cyberspace visionary John Barlow. It was supposed to get rid of gatekeepers and middlemen. It was supposed to topple autocrats and promote individual liberty over top-down control. This is what technologists and visionaries promised us. But they delivered something very different. They created some of the most powerful gatekeepers in history.”
Vili goes on to make the case that these visionaries created formal institutions and assumed for themselves the role of the central authority they had been trying to abolish:
“The task that Internet visionaries and technologists set for themselves—fostering large-scale market exchange—was the same task that modern state administration has, in many ways, evolved to do. Almost inevitably, then, the technologists converged on analogous solutions: centrally administered bundles of complementary formal institutions that function as infrastructures but also seek efficiencies from central planning. The same forces that once favored the rise of the state now led to the rise of platforms….A lot of what so-called technology companies now do is thus in a certain sense just traditional statecraft…Instead of revolutionizing our social order, they reimplemented it with computer code and customer service agents. Big data is statistics. Blockchain is sortition. Algorithmic decision making is just another word for bureaucracy. After a decade and a half of “moving fast and breaking things,” Mark Zuckerberg realized that Facebook had ended up “more like a government than a traditional company,” as he said in a 2018 Vox interview.
“The reverse is also true: what states traditionally do is in a certain sense just technology. The world’s first databases were ancient Mesopotamian empires’ tax and administrative records. Ancient empires also developed packet switched postal networks, mathematical algorithms, mechanical calculators, and cryptography to govern their holdings….”
In this sense, Vili suggests, the technology companies simply rediscovered what the state was doing all along and, in some cases, overtook it without the constraints of territorial boundaries. The technology companies are states without estates, empires in the cloud. Vili believes it is not unreasonable to intervene and make them accountable, whether by breaking them up or enforcing regulations. Provocatively, however, he thinks it will be unsuccessful:
“…what if users took matters into their own hands? What if the public that had a legitimate interest in controlling essential platform infrastructures was not the public of any particular nation state, but the public that used that infrastructure—the actual users of the platform? In Cloud Empires I tell stories of how merchants and artisans of the platform economy—much like burghers of old—have begun to organize and push back against the platform princes’ excesses. Via strikes, pressure campaigns, and outright hostilities, these citizens of cloud empires have begun to demand political rights for themselves—and in some cases even win them….”
“If the idea of a joint-stock company eventually transforming into a public body with a democratic government seems outlandish to you, consider the following piece of history. The state of Virginia was literally a venture-funded start-up company at first, called the Virginia Company of London. It was founded in London in 1606 by a serial entrepreneur with seed funding from four high-net-worth individuals. The founder raised additional funding through a public share offering, in which hundreds of individual and institutional investors across England bought stakes in the venture. The business plan was similar to Amazon: construct a town in North America, attract artisans and tradesmen from Europe to do business there, and tax them. The town was governed from London by the company’s board of directors, and like any joint-stock company, it was expected to turn a profit for its owners. But these governance arrangements did not last; the artisans and tradesmen began to demand a say for themselves in how the place was being run…. Admittedly, this transformation from company to commonwealth took a couple of hundred years to unfold in Virginia. But the Internet so far has zoomed through history at an amazing speed. And already we can see rebellions brewing in the digital colonies.”
To finish with Vili’s dramatic plea: his case may be overwrought, but the modern state is under duress, and the stakes are high. The power of digital forces is increasingly magnified by the tech sector’s capital accumulation and attendant economic power; the hegemony of private capital knows no bounds. To recall Adrian Vermeule again, the law is ultimately a reflection of the polity, and all of these tools are political in nature. You have been a vital public advocate for Liberalism. To what extent are you concerned about our capacity to face these digital challenges, and what sustains your optimism?
CS: Suppose that you are trying to solve a problem – say, one of your teeth hurts. You might think: What can I do about that problem? Or you might think: Should I be optimistic? I much prefer the first thought. I guess it’s usually good to be optimistic about whether you can get your tooth to stop hurting, but please, focus first on how to address the problem.
Issues about digital forces and the tech sector are of course numerous, and each one has to be approached in its own right (even if many issues are related). In October, I have a book coming out with APS Press, called IMPERFECT ORACLE, which explores what AI can and cannot do. Its focus is on prediction problems, where machine-learning algorithms are terrific under identifiable conditions, and also not so terrific under identifiable conditions. The problem of manipulation is not small, but it is pretty specific.
On technologies as “states” – I am not sure how to think about that. In what respect? What is the definition of states here?
I also have a book coming out in September on liberalism, with the imaginative title, ON LIBERALISM. I almost called it “Big Tent Liberalism” – a bad title, I guess, but a descriptive one. Commitments to freedom and pluralism help define liberalism, big though its tent may be. One impetus for that book, which is closely linked to my book on manipulation, is a sense that some people (mostly on the left but also on the right) have produced a bizarre caricature of something (what? I am not sure!) and called it “liberalism,” even though no liberal would endorse or even recognize it. Another impetus for that book is a sense that commitments to freedom and pluralism are in serious jeopardy. We’ve seen that movie before. It’s a horror movie.
Back to optimism: If your tooth hurts, please try to find a way to get it to stop hurting; try not to think, am I optimistic about my tooth? But I will confess that I am an optimist by nature. Amos Tversky, the great psychologist who helped found behavioral economics, once said something like this: “I’m an optimist, and it’s rational to be an optimist. If you’re a pessimist, you suffer twice!” I aim to follow Tversky.
CT: Cass, your wise counsel, distilling complex issues into practical solutions, is much appreciated. Thank you for your continued engagement with the APA Blog; given your prolific output, I look forward to extending the dialogue in the future.
From the APA Archive:
Smartphones and Meaningfulness
Undermining Autonomy One Swipe at a Time
What else I’m Reading/Listening To: