Digital technology has contributed to polarization. By reinforcing our own ideas and discarding those of others, algorithms reduce dialogue and, with it, our knowledge of what others think. This is one of the theses of the Franciscan Paolo Benanti in his latest book, The Collapse of Babel, published by Encuentro. But polarization is far from the only risk.
In early January it emerged that Grok, Elon Musk's AI model, was being used to create sexualized images from photos that women had uploaded to the social network X.
Friar Paolo Benanti (Rome, 1973), a moral theologian, is one of the world's leading experts on the ethics of artificial intelligence (AI). He chairs the Italian government's AI working group and has served on the UN's commission of experts on the issue, which makes his voice particularly authoritative on a matter that concerns governments and society alike.
He last spoke in Spain two months ago, at the Fundación Telefónica and at EncuentroMadrid, the annual event organized by Communion and Liberation in Cuatro Vientos (Madrid). There, Benanti spoke on “Artificial intelligence and the fabrication of the eternal”.
Who watches over AI?
- When we talk about artificial intelligence, we are not talking about a single technology, but about a family of algorithms that differ greatly from one another. Some of them are very explainable. It reminds us a little of the first GPS devices: how many times did the GPS tell you to exit on the right and then immediately re-enter on the left? It was intelligent, but we could understand why it was intelligent: the route was shorter. That kind of artificial intelligence does a job the same way a natural intelligence would do it.
Others are a black box. Some of these algorithms may produce much smarter results, but we cannot explain how they reach them.
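To make the contrast concrete, here is a minimal sketch in Python, with an invented street network, of the kind of explainable algorithm behind those first GPS devices: a shortest-path search in which every choice, including the odd “exit right, re-enter left” detour, can be justified arithmetically after the fact.

    import heapq

    # A minimal, hypothetical sketch of an "explainable" routing algorithm in
    # the GPS sense: Dijkstra's shortest-path search. Every choice it makes
    # can be justified afterwards ("this street, because the total is shorter").
    def shortest_path(graph, start, goal):
        queue = [(0.0, start, [start])]  # (distance so far, node, path taken)
        visited = set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node == goal:
                return dist, path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, km in graph.get(node, []):
                if neighbor not in visited:
                    heapq.heappush(queue, (dist + km, neighbor, path + [neighbor]))
        return float("inf"), []

    # Invented street network: node -> [(neighbor, distance in km)]
    roads = {
        "home":         [("exit_right", 1.0), ("straight_on", 2.5)],
        "exit_right":   [("reenter_left", 0.5)],
        "reenter_left": [("office", 1.0)],
        "straight_on":  [("office", 1.5)],
    }

    dist, path = shortest_path(roads, "home", "office")
    print(" -> ".join(path), f"({dist} km)")
    # home -> exit_right -> reenter_left -> office (2.5 km)
    # The odd detour is fully explainable: 1.0 + 0.5 + 1.0 = 2.5 km beats
    # the 2.5 + 1.5 = 4.0 km of going straight on.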
The question is: Can we use all types of algorithms for all types of functions?
This is one of the ethical problems of AI. Imagine you want to use AI to sort coffee beans in a coffee factory. This used to be done by hand, bean by bean, because a single moldy bean spoils the taste of all the others.
Today this sorting is done with a technique called deep learning. But deep learning is not explainable.
The worst that can happen is that we throw away good beans. But perhaps that is cheaper than paying someone to sort them by hand.
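As an illustration only, here is a sketch of what such a bean classifier could look like, using the PyTorch library; the architecture, image size, and class labels are invented. The point is that the verdict emerges from thousands of learned weights, with no step a human can read as a reason.

    import torch
    import torch.nn as nn

    # A minimal sketch, not a production system: a small convolutional
    # network that labels a photo of a coffee bean as "good" or "moldy".
    class BeanClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, 2)  # classes: good, moldy

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = BeanClassifier()
    batch = torch.randn(8, 3, 64, 64)    # 8 fake 64x64 RGB bean photos
    probs = model(batch).softmax(dim=1)  # probability of each class per bean
    print(probs.shape)  # torch.Size([8, 2])
    # Learned weights separate the input pixels from the verdict: the
    # network works, but no line of it says *why* a bean was rejected.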
But that same algorithm can be used in a hospital emergency room to decide which patient is seen first.
You can see that the problem is not the algorithm itself, but where we set it to work within the social structure.
The problem of AI today is no longer a technical question but one of social justice: deciding which functions belong to a human being and which to an algorithm. That requires a multidisciplinary approach.
Now, the interesting thing is that this is precisely the matrix of the social doctrine of the Church. It is why Pope Leo XIV, in his first public address, affirmed that what we Catholics, as such, can offer is the Church's social doctrine, which consists not of answers but of questions: questions that seek to protect the dignity of the human person and of human work.
We are not afraid of change, but we want to be on the side of man.
The second element is that Pope Francis, in the guidelines he wrote for Catholic formation, especially for future priests, spoke of interdisciplinarity and transdisciplinarity. So, again, the challenge is more than technical; it is cultural. This is the frontier on which today's debates are playing out.
Behind the AI
But who is behind this technology?
- The first thing to understand is that this technology changes the way we approach the problem. The whole of the 20th century saw a fracture in scientific rationality. We used to be convinced of a deterministic model.
But think of what happened with subatomic physics, where, because of Heisenberg's uncertainty principle, we cannot know at once where an electron is and how fast it is moving; we had to shift to a probabilistic model. The same happened in astrophysics, where Einstein's work speaks of relativity. From a model of certainty we moved to a model of probability.
If the model is statistical, there is no mind determining each step; there is a machine extracting patterns from the data in front of it.
This makes it very hard to answer whether there is someone behind it or not. We often speak of “bias”, but bias can also be translated as “systematic preference”.
Suppose I want to build an autonomous car. I take all the data on how people drive in Madrid, and the machine sees that there is a systematic preference for stopping at red lights (I'm talking about Madrid, not Rome...). I want that systematic preference to exist.
But the machine could also see, for example, that cars do not stop the same way when a child crosses as when an adult does, and it might learn to brake later for children. Why? Because a child is less visible and drivers see them later. So here the machine has a bias, a prejudice, against children. The same could happen at night with, say, dark-skinned people. Has anyone acted in bad faith by introducing that “prejudice”?
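A toy simulation with invented numbers can make this mechanism visible: if human drivers notice children later, the recorded data already contains the “prejudice”, and any model trained to imitate that data inherits it.

    import numpy as np

    # A toy simulation (all numbers invented) of how a "systematic
    # preference" in driving data becomes a bias in a model trained on it.
    rng = np.random.default_rng(0)

    def simulate_crossings(n, visibility):
        # A pedestrian is noticed at some distance; lower visibility means
        # the human driver sees them later and brakes later.
        seen_at = rng.normal(loc=30 * visibility, scale=5, size=n)  # meters
        return seen_at > 20  # True if the driver braked in time

    adults   = simulate_crossings(10_000, visibility=1.0)   # easy to see
    children = simulate_crossings(10_000, visibility=0.75)  # seen later

    print(f"Braked in time for adults:   {adults.mean():.1%}")
    print(f"Braked in time for children: {children.mean():.1%}")
    # A model trained to imitate this data would faithfully reproduce the
    # gap: it would brake later for children, without anyone intending it.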
There is so much data that no human mind can oversee it all. What's the problem? Silicon Valley tells us it is changing the world, but nobody fully knows what patterns the machine has found in the data.
It is an epistemological problem. And an ethical one. And a legal one. Who is responsible if the car hits the child? The owner? He isn't driving. The manufacturer? The software engineer? It's very complex.
The real problem with artificial intelligence is complexity.
On the other hand, it can save us a lot of money. So there is a tension, and somehow we must regulate it so that decisions are not made purely out of economic interest or out of fear.
AI and work
Could artificial intelligence end up making human labor superfluous?
- An artificial intelligence is not equally capable of all tasks. There is a paradox, formulated by the computer scientist Hans Moravec, which says that it is much easier for a machine to perform a task of high intellectual complexity than one of low complexity. A solar-powered calculator that computes square roots can be bought online for one euro, but a robotic hand that picks up a teaspoon and stirs a coffee costs 150,000 to 200,000 euros. Now apply that to work.
A banker works with numbers; a manual laborer, a metalworker, works with a hammer. This means the first jobs to go will be the best paid ones, and that could generate a social tension that, if not managed politically, could damage the democratic system.
And specifically in a field such as journalism?
- Is the journalist simply someone who turns events into text? If so, we could replace them with a text-generating machine. Or do journalists perform a social function that guarantees a democratic space?
I chair the Italian government's commission studying the impact of AI on journalism and the publishing world, and we have concluded that the journalist plays a fundamental role in democracy. But what makes it possible to have journalists is a publishing industry that can pay them.
Then you have to recognize a problem that was born not with AI but with social networks. Why is it that if you journalists write something you can be taken before a judge, while a social network answers to no one?
Why can an editor be taken to court, while a social network's algorithm that chooses what I read is free of all liability? To this is now added the computer's ability to write. But once again the problem is not the machine's capability; it is economic convenience.
The profession is, by its very nature, essential to the survival of the democratic space.
Years ago, some scientists called for a moratorium on AI to decide what to do with it.
- There is too much, far too much money at stake. There are too many geopolitical interests. The competition between China and the United States is too intense for either to trust the other with such a moratorium.
The past year has changed much of the narrative. We used to speak of science and technology as activities where, if I discover something (think of the Nobel prizes), it is for everybody. Everybody benefits.
But today it is all about the race: if I win, you lose. That makes such an approach impossible.



