Catholics Who Rely on Algorithms

Source: FSSPX News

In the space of a few months, several “Catholic” chatbots based on the ChatGPT model have been created. Their goal is to give instant answers to questions concerning the doctrine of the Church. Questions about their reliability have now come to the forefront.

On November 30, 2022, the world discovered the performance of artificial intelligence (AI) with a message from OpenAI: “You can now use ChatGPT.” Within a few hours, more than a million people had tested this new technology.

It is a technology that looks like yet another Pandora’s box. Thus, in May 2023, Cardinal Willem Eijk, Archbishop of Utrecht in the Netherlands, appealed for the Church to take an official position on AI. Anticipating profound consequences for society, the prelate explained that the Church must actively engage in the field of AI by providing ethical reflection.

That call was answered early by the developers of Magisterium AI, a conversational agent from Longbeard, a company specializing in technology and digital marketing. Based in Rome, Longbeard counts among its clients the Pontifical St. Thomas Aquinas University, the Vatican Observatory, and the Dicastery for Integral Human Development.

The Magisterium AI algorithm is based on a database of 456 official documents, including Sacred Scripture, the Catechism of the Catholic Church, the Code of Canon Law, the General Instruction of the Roman Missal, as well as 90 encyclicals, 7 apostolic constitutions, and 26 apostolic exhortations.

“When I worked for the Archdiocese of Toronto, many people asked me about the faith and the dogmas of the Church. I spent a lot of time in the library so I could give reliable answers. At that time I dreamed that a technology like Magisterium AI existed!” explains Matthew Sanders, one of the creators of the English-language chatbot.

For several months, the language of Molière [French] has also had its own chatbot, which is supposed to answer questions concerning Christian doctrine: the CatéGPT project was unveiled in Geneva at the beginning of 2023.

Then comes the question of the reliability of this type of AI. Even though the answers are generally relevant, and the closed database limits the risk of error, users have noted inaccuracies.

“The difficulty of this system – a difficulty that a theologian does not have – is taking into account the context of the question, which makes it possible to avoid an off-topic answer,” stresses Yannick Liabaud, one of the leaders of the “Church and computer innovation association.”

Specialists are not surprised at the emergence of religious avatars of ChatGPT: “ChatGPT is a very powerful tool for finding information, but it can be used well or badly,” says Steve Bobillier. Asked by the daily newspaper 24H, the philosopher notes that “the risk here is taking the answers as ‘Gospel truth.’ A text, including a religious one, always has an interpretation. AI gives one, and it is not necessarily the most coherent.”

Alexei Grinbaum – member of the National Pilot Committee on Digital Ethics (CNPEN) – offers a more alarming observation: “The human condition is evolving under the influence of talking machines; it is impossible to stop this evolution or to go back. Yet the need to maintain the distinction between man and machine at the level of speech is far from obvious here.”

Moreover, if it is indeed man who created machines capable of “speaking,” or rather of producing sentences, the use of those same machines will in turn directly influence our way of reasoning. Yet these machines express themselves in a “smooth” way, without depth. In other words, they do not think about what they produce. The question is how far they will influence us.