In June 2024, at the international
summit of the G7, a group of seven industrialized nations, the head of
the Roman Catholic Church, Pope Francis, spoke on the ethical dimension of artificial
intelligence. Regarding what the Pope called the “techno-human
condition,” AI-capable machines are yet another manifestation of the human
propensity, present since our species’ inception, to use tools to
mediate our relation to the environment. Although tools can be thought of as an extension
of our arms and legs, it is important to distinguish the human from the machine,
even as we project human characteristics onto some advanced machines, such as
computers. In the film 2001: A Space Odyssey, the computer Hal sounds human, and may
even seem to have human motivations, but any such attributions come to an
abrupt end when Hal is shut down. To say that Hal dies is to commit a basic
category mistake. It would be absurd, for example, to claim that Hal has an
after-life. So too, I submit, is there a category mistake in taking the Pope’s
talk on the ethics of AI as being religious in nature. Just as it
is easy to imprint the human mind on a machine-learning computer, it can be
tempting to superimpose the religious domain onto another domain. The Pope overreached
by arbitrarily draping religious garb over what is actually an ethical matter
in the “techno-human” world.
At its core, the human-machine
distinction regarding decision-making comes down to AI making only “algorithmic
choices,” that is, “technical choices ‘among several possibilities based either
on well-defined criteria or on statistical inferences,’” whereas the human heart
is capable of influencing the choices that we make, and so only we can have
wisdom.[1]
The Pope’s definition of wisdom differs from the one that defines wisdom as the
combination of experience and knowledge. Even if experience enters
the equation as intuition, intuition is still distinct from the proverbial heart. If
statistical inferences do not rise to human intuition, then either conception of human
wisdom suffices to distinguish the human from the (machine-learning) machine. In the
field of the philosophy of mind, the difference between human and “thinking” machine
can be put as the following question: Is understanding merely the manipulation of
symbols according to rules? Do I, who do not know Chinese,
understand the answers I write to questions if I use a chart that indicates
which Chinese characters to write when I am presented with other Chinese characters?
If understanding is more than the mere manipulation of symbols according to
(linguistic) rules, then human experience is all the more different from statistical
inference (and even from machine learning itself), and greater still is the
difference between human emotion, such as empathy or compassion, and AI. To the
extent that ethical judgment is, as David Hume claimed, a sentiment of
disapprobation (i.e., a visceral emotional reaction of disapproval), ethics can
only be on the human side, where the heart resides.
The ethical dimension is salient
in the advantages and (especially) the disadvantages of AI. On the one hand, the
Pope said, AI has potential in regard to the “democratization of access to knowledge,”
the “exponential advancement of scientific research,” and a reduction in “demanding
and arduous work.”[2] According
to Kant, given the absolute value of reason (which assigns value to things),
rational beings should be treated not just as means, but also as ends in
themselves. For all the advantages of the specialization of labor discussed in
Adam Smith’s Wealth of Nations, at some point the rote repetition of the
same small action conflicts with Kant’s ethical imperative. In contrast,
opening up knowledge and advancing it are fully in line both with our rational
nature and with our being regarded as ends in ourselves rather than merely as means
to other ends.
Even so, as the Pope pointed out,
AI could also contribute towards a “greater injustice between advanced and
developing nations or between dominant and oppressed social classes.”[3]
Such a consequence would certainly violate Rawls’s theory of justice, in
which societal systems, whether political, economic, or social, should be
designed in such a way as to disproportionately benefit the people at the bottom
rather than disproportionately harm them.
Public policy enters the picture because
we want government to protect, and perhaps even encourage, the advantages while
minimizing or obviating the downsides. From Rawls’s theory, for instance, legislators
might propose a law making college free for poor students in a society in which
AI has a significant presence. Another law could mandate that the state provide
computers to poor people free of charge. Rather than being technical policy
prescriptions with a semblance of objectivity, these examples stem from the
ethical dimension of AI.
Because the Pope is the head
of a religious institution, it is only natural that he would attempt to
tack on the religious dimension. I submit that in doing so, he overstepped. In
discussing wisdom, for example, his addition of “listening to Sacred Scripture”
in regard to the ancient Greek notion of phronesis (a type of intelligence
concerned with practical action) is not pertinent.[4]
It does not make phronesis religious in nature. Furthermore, he
described algorithms as able only to “examine realities formalized
in numerical terms.”[5] The
metaphysical term realities is out of place here. The Pope’s choice of
the term was no accident. He sourced the mediating role of tools between us and
our environment in our being “inclined to what lies outside of us,” which, and
here is the stretch, in turn rests on our being beings “radically open to
the beyond.”[6]
Such a beyond, as suggested by the word radically, is
transcendent, and thus refers to God as wholly other rather than merely
immanent in Creation. In making this leap from humans needing to relate to the
environment in which we live (e.g., for food) to humans having an aptitude for
transcendence in a religious sense, the Pope jumped to another domain that is
not relevant. For we do not use tools to grow or get food, for example, because
we have an aptitude to transcend the limits of conception, perception, and
emotion to posit or yearn for another world. AI pertains to what the Pope
himself called the “techno-human condition.”[7]
In contrast, the religious domain is sourced in, and thus relates to—as its reference
point—that which transcends that condition. Even though AI represents “a true
cognitive-industrial revolution” that will lead to “complex epochal
transformations,” that revolution and the reverberating transformations emanating
from it are firmly in this world. Even the ethical dimension of AI is likewise
in the human condition; we need not be creatures radically open to beyondness
to be able to address the ethical implications of AI.
I conclude, moreover, that only the religious domain includes the radically transcendent and in fact can be defined or characterized in terms of it. This is not to say that the domain cannot influence or be influenced by other domains; rather, that which makes domains distinct should not be skipped over, lest category mistakes and faulty overreaches occur. Historically, the Church has overreached in superimposing the religious domain on that of natural science (e.g., astronomy). Likewise, the domain of history has been allowed to overreach in serving as a litmus test for religious truth or meaning, as if the latter were valid only pending credible historical evidence of an event or a person.
Several neighborhood children at play who run onto the front yard of an elderly couple’s house and make noise (something I remember from my own childhood) is one thing; it would be quite another, and much worse, were they to start moving the lawn furniture around and demand that the couple keep the chairs in their new positions. Imposing from one’s own domain implies a basic lack of recognition of, or respect for, boundaries, and thus of domains as distinct territories. The ethics house may be next door to the religious house, but the two are distinct. Similarly, AI may sit next to human understanding, but the two are distinct.
That algorithms are “neither objective nor neutral” does not mean that they share in human subjectivity.[8] In fact, the Pope claimed that AI is not even truly generative; it does not “develop new analyses or concepts.”[9] Instead, it “repeats those that it finds, giving them an appealing form.”[10] Whether this is so is beyond my fields of study, but my point is merely that a fundamental qualitative distinction between us and machines exists, and that the ethical dimension, as distinct from the religious, pertains to AI from our side.