This month’s APA Blog Substack Newsletter explores the application of moral and ethical boundaries to artificial intelligence, drawing on a panel led by John Tasioulas, a friend of the APA Blog. John is the Inaugural Director of Oxford’s Institute for Ethics in AI and was a keynote speaker, alongside the Pope’s advisor on AI, Rev. Dr. Paolo Benanti, at an event at the UN organized by the Permanent Observer Mission of the Sovereign Order of Malta and co-sponsored by the Permanent Missions of the Republic of San Marino and the Principality of Monaco.
The discussion addressed ethical questions aimed at ensuring that machine learning neither manipulates nor supersedes human decision-making capacity or the moral frameworks that have supported the growth and development of humanity since the beginning of civilization. The panel of experts focused on finding common ground between secular humanistic ethicists and their counterparts in the faith-based sphere, with the objective of providing a more comprehensive view of morality and AI.
The APA Blog has published several pieces with John exploring a humanistic approach to evaluating AI, and, having attended the panel, I want to highlight how the Vatican’s position aligns with the Institute’s vision of advancing the cause of human flourishing.
Further, I will make the case that the Common Good could fairly be construed as a proxy for the humanist approach to ethical boundaries. I suggest that its heritage and weight can be invaluable alongside voices like the Institute for Ethics in AI in framing commercial development to serve the public interest.
To explore how the Vatican’s position aligns with the Institute’s goals discussed in earlier APA pieces, let me start by drawing on a Blog Interview where John described the background and purpose of the Institute:
The Institute has its origins in a £150m donation made by Stephen A. Schwarzman to Oxford University in 2019—the largest single donation received by Oxford since the Renaissance. The purpose of the donation is to house, for the first time in its history, virtually all of Oxford’s humanities departments in a state-of-the-art, purpose-built Humanities Centre. But in addition to the humanities departments, the Schwarzman Centre for the Humanities will also include a Humanities Cultural Programme, which will bring practitioners in music, theatre, poetry, and so on into our midst, and also the Institute for Ethics in AI, which will connect the humanities to the rapid and hugely consequential developments occurring in AI and digital technology. So, the underlying vision is one in which the humanities play a pivotal role in our culture, engaging in a mutually beneficial dialogue with artistic practice on the one hand and science and technology on the other….
The fundamental aim of the Institute is to bring the rigor and the intellectual depth of the humanities to the urgent task of engaging with the wide range of ethical challenges posed by developments in Artificial Intelligence. These challenges range from the fairly specific, such as whether autonomous weapons systems should ever be deployed and, if so, under what conditions, to more fundamental challenges such as the implications of AI systems for humans’ self-conception as possessors of a special kind of dignity in virtue of our capacity for rational autonomy. The Institute is grounded in the idea that philosophy is the central discipline when it comes to ethics, but we also believe that it has to be a humanistic form of philosophy—one enriched by humanities disciplines such as classics, history, and literature. A humanistic approach is imperative, given Anglo-American philosophy’s own unfortunate tendency to lapse into a form of scientism that hampers it in playing the critical role it should be playing in a culture in which scientistic and technocratic modes of thought are already dangerously ascendant. In addition to being enriched by exchanges with other humanities colleagues, the Institute has also forged close connections with computer scientists at Oxford to ensure that our work is disciplined by attentiveness to the real capacities and potentialities of AI technology. Especially here, in a domain rife with hype and fear-mongering, it is important to resist the lure of philosophical speculations that escape the orbit of the feasible.
In a second APA Blog piece, on the importance of the Arts and Humanities in thinking about AI, John expands on his humanistic approach. It begins with understanding the need to make choices:
Perhaps the most fundamental contribution of the arts and humanities is to make vivid the fact that the development of AI is not a matter of destiny, but instead involves successive waves of highly consequential human choices. It’s important to identify the choices, to frame them in the right way, and to raise the question: who gets to make them and how?
This is important because AI, and digital technology generally, has become the latest focus of the historicist myth that social evolution is preordained, that our social world is determined by independent variables over which we, as individuals or societies, are able to exert little control. So we either go with the flow, or go under. As Aristotle put it: ‘No one deliberates about things that are invariable, nor about things that it is impossible for him to do.’…
The humanities are vital to combatting this historicist tendency, which is profoundly disempowering for individuals and democratic publics alike. They can do so by reminding us, for example, of other technological developments that arose the day before yesterday – such as the harnessing of nuclear power – and how their development and deployment were always contingent on human choices, and therefore hostage to systems of value and to power structures that could have been otherwise.
The second and related contribution of the arts and humanities is to frame the ethical questions informing these choices:
Ethics is inescapable because it concerns the ultimate values in which our choices are anchored, whether we realize it or not. These are values that define what it is to have a good life, and what we owe to others, including non-human animals and to nature. Therefore, all forms of ‘regulation’ that might be proposed for AI, whether one’s self-regulation in deciding whether to use a social robot to keep one’s aged mother company, or the content of the social and legal norms that should govern the use of such robots, ultimately implicate choices that reflect ethical judgments about salient values and their prioritization.
The arts and humanities in general, and not just philosophy, engage directly with the question of ethics – the ultimate ends of human life. And, in the context of AI, it is vital for them to fight against a worrying contraction that the notion of ethics is apt to undergo. Thanks in part to the incursion of big tech into the AI ethics space, ‘ethics’ is often interpreted in an unduly diminished way. For example, as a form of soft, self-regulation lacking legal enforceability. Or, even more strangely, it is identified with a narrow sub-set of ethical values.
Rather than adopting the dominant approach to shaping our understanding, reflected in data-driven mindsets that emphasize optimization as the core operation of rationality and prioritize formal and quantitative techniques, John outlines a three-pronged humanistic approach: Pluralism, Procedures, and Participation:
So the kind of ethics we should hope the arts and humanities steer us towards is one that ameliorates and transcends the limitations and distortions of this dominant paradigm derived from science and economics. I think such a humanistic ethic, informed by the arts and humanities, would have at least the following three features (the three Ps):
1. Pluralism – it would emphasize the plurality of values, both in terms of the elements of human wellbeing and the core components of morality. This pluralism calls into question the availability of some optimizing function in determining what is all-things-considered the right thing to do. It also undermines the facile assumption that the key to the ethics of AI will be found in one single master-concept, whether that be trustworthiness or human rights or something else…Admitting the existence of a plurality of values, with their nuanced relations and messy conflicts, heightens the need for choice adverted to previously, and accentuates the question of whose decision will prevail. This sensitive exploration of a plurality of values and their interactions is what the arts and humanities, at their best, do….
2. Procedures not just outcomes – I come now to the second feature of a humanistic approach to ethics, which is the importance of procedures not just outcomes. …to drive home the important point that what we rightly care about is not just the value of the outcomes that AI can deliver, but the processes through which it does so. Take the example of the use of AI in…sentencing of criminals...(where) there is a powerful intuition that being sentenced by the robot judge – even if the sentence is likely to be less biased or more consistent than one rendered by a human counterpart – means sacrificing important values relating to the process of decision. This point is familiar, of course, in relation to such process values as transparency, procedural fairness, explainability. But it goes even deeper, because of the dread many understandably feel in contemplating a dehumanised world in which judgements that bear on our deepest interests and moral standing have, at least as their proximate decision-makers, autonomous machines that do not have a share in human solidarity and cannot be held accountable for their decisions in the way that a human judge can.
3. Participation – the third feature relates to the importance of participation in the process of decision-making with respect to AI, whether participation as an individual or as part of a group of self-governing democratic citizens. At the level of individual wellbeing, this takes the focus away from theories that equate human wellbeing with some end-state, such as pleasure or preference-satisfaction. Such end states could in principle be brought about through a process in which the person who enjoys them is entirely passive, for example, by putting some anti-depressant drug in the water supply. Contrary to this passive view of wellbeing, it would stress, as Alasdair MacIntyre did in After Virtue, that the ‘good life for man is the life spent in seeking for the good life for man’. Or, put slightly differently, that successful engagement with valuable pursuits is at the core of human wellbeing.
I would now like to highlight how the panel discussion, reflected in a detailed note published by the Vatican on the Relationship Between Artificial Intelligence and Human Intelligence (“Note”), reinforces the convergence of the secular and faith-based initiatives. First, I will draw on several statements from the Note highlighting what distinguishes human intelligence and prescriptions for ethical consideration.
Per the Note, a fundamental consideration that grounds the ethical case is that human intelligence is embodied. Consistent with the philosophical and theological tradition, it is our own unique nature that sets us apart:
10. Underlying this and many other perspectives on the subject is the implicit assumption that the term “intelligence” can be used in the same way to refer to both human intelligence and AI. Yet, this does not capture the full scope of the concept. In the case of humans, intelligence is a faculty that pertains to the person in his or her entirety, whereas in the context of AI, “intelligence” is understood functionally, often with the presumption that the activities characteristic of the human mind can be broken down into digitized steps that machines can replicate.
16. Christian thought considers the intellectual faculties of the human person within the framework of an integral anthropology that views the human being as essentially embodied. In the human person, spirit and matter “are not two natures united, but rather their union forms a single nature.” In other words, the soul is not merely the immaterial “part” of the person contained within the body, nor is the body an outer shell housing an intangible “core.” Rather, the entire human person is simultaneously both material and spiritual. This understanding reflects the teaching of Sacred Scripture, which views the human person as a being who lives out relationships with God and others (and thus, an authentically spiritual dimension) within and through this embodied existence. The profound meaning of this condition is further illuminated by the mystery of the Incarnation, through which God himself took on our flesh and “raised it up to a sublime dignity.”
This holistic conception of intelligence leads to an integrated, humanistic form of authenticity, encompassing the full scope of one’s being: spiritual, cognitive, embodied, and relational:
27. This engagement with reality unfolds in various ways, as each person, in his or her multifaceted individuality, seeks to understand the world, relate to others, solve problems, express creativity, and pursue integral well-being through the harmonious interplay of the various dimensions of the person’s intelligence. This involves logical and linguistic abilities but can also encompass other modes of interacting with reality. Consider the work of an artisan, who “must know how to discern, in inert matter, a particular form that others cannot recognize” and bring it forth through insight and practical skill. Indigenous peoples who live close to the earth often possess a profound sense of nature and its cycles. Similarly, a friend who knows the right word to say or a person adept at managing human relationships exemplifies an intelligence that is “the fruit of self-examination, dialogue and generous encounter between persons.” As Pope Francis observes, “in this age of artificial intelligence, we cannot forget that poetry and love are necessary to save our humanity.”
The limits of AI, then, as a technology confined to logical-mathematical frameworks, inform the ethical considerations. Per the Note, AI should not be seen as an artificial form of human intelligence but rather as a product of it. Indeed, I have written Blog pieces that view our essence as that of a creator and construe AI as our aesthetic fate. The notion that AI is our production, per the Note, informs how it should be used to ensure our human flourishing, dignity, and freedom as moral beings – echoing the humanist prescription:
36. Given these considerations, one can ask how AI can be understood within God’s plan. To answer this, it is important to recall that techno-scientific activity is not neutral in character but is a human endeavor that engages the humanistic and cultural dimensions of human creativity.
39. To address these challenges, it is essential to emphasize the importance of moral responsibility grounded in the dignity and vocation of the human person. This guiding principle also applies to questions concerning AI. In this context, the ethical dimension takes on primary importance because it is people who design systems and determine the purposes for which they are used. Between a machine and a human being, only the latter is truly a moral agent—a subject of moral responsibility who exercises freedom in his or her decisions and accepts their consequences. It is not the machine but the human who is in relationship with truth and goodness, guided by a moral conscience that calls the person “to love and to do what is good and to avoid evil,” bearing witness to “the authority of truth in reference to the supreme Good to which the human person is drawn.” Likewise, between a machine and a human, only the human can be sufficiently self-aware to the point of listening and following the voice of conscience, discerning with prudence, and seeking the good that is possible in every situation. In fact, all of this also belongs to the person’s exercise of intelligence.
43. The commitment to ensuring that AI always supports and promotes the supreme value of the dignity of every human being and the fullness of the human vocation serves as a criterion of discernment for developers, owners, operators, and regulators of AI, as well as to its users. It remains valid for every application of the technology at every level of its use.
Finally, in the Note’s concluding passages, the Vatican invokes the principle of the Common Good to guide the use of AI and the promotion of human dignity:
110. As a result, it is crucial to know how to evaluate individual applications of AI in particular contexts to determine whether its use promotes human dignity, the vocation of the human person, and the common good. As with many technologies, the effects of the various uses of AI may not always be predictable from their inception. As these applications and their social impacts become clearer, appropriate responses should be made at all levels of society, following the principle of subsidiarity. Individual users, families, civil society, corporations, institutions, governments, and international organizations should work at their proper levels to ensure that AI is used for the good of all.
The Note’s summary invocation of the Common Good highlights the convergence of the humanist and faith-based approaches. Indeed, its use as a proxy for the humanistic vision can be a foundation for building a consensus to control commercial development – which has largely gone unchecked, reflecting the hegemony of private capital.
Witnessing the panel discussion reinforced for me how the heritage and weight of the Common Good could crystallize public sentiment. Its adoption can be intuitive because of its roots in our historical commercial and legal frameworks. In an earlier Blog Post on the dangers of technology, I explored how combative tools are already embedded in our legal tradition. The classical legal heritage is grounded in the Common Good, and our statutory frameworks already orient toward public welfare. For instance, patent standards formerly turned on beneficial rather than merely practical utility: the measure was not efficiency but service to the public. Another example, topical for technology executives, is antitrust law. Its original intent was also to serve the common interest, not simply to maximize free enterprise (promoting autonomy). This case is best reflected in Adrian Vermeule’s book, Common Good Constitutionalism, which conceives law as “a reasoned ordering to the common good” – promoting the goods of a flourishing political community.
The reason the UN panel discussion was so powerful, then, is that it highlighted the value of an alliance between the faith-based and humanist communities. To continue the dialogue, the Institute for Ethics in AI, together with the APA Blog and the Aspen Institute, is planning to host a conversation/debate on tech accelerationism in Colorado. With speakers drawn from the private sector and government, we aspire to extend the ethical discussion and attract more stakeholders.
From the APA Archive:
Responsibility and Automated Decision Making
Why Machines Will Never Rule the World
What else I’m Reading/Listening To:
The Case for a Philosophical Life