Prof. Andrzej Zybertowicz: My Analysis Shows That AI Could Deepen Destabilizing Processes

When we look at this problem systematically, we encounter two significant unknowns. First, artificial intelligence is emerging in a civilization that is already in a state of disruption. ESG is supposed to be a program that brings order to this civilization, but cultural and geopolitical destabilization is causing ESG to be reduced by some to an ideological tool, while others treat it as a quasi-religion. In this situation, its rational core gets lost.

One might expect that AI, which some predict will soon be more efficient than human intelligence, will bring order to this chaos. Unfortunately, my analysis suggests that AI could actually exacerbate these destabilizing processes. Before AI can eventually help solve the problems humanity struggles with, it will arrive as a new factor, a significant force multiplier in an already destabilized civilization, and it could destabilize that civilization even further.

Prof. Andrzej Zybertowicz: The Infosphere Has Been Structurally Poisoned

AI is entering at a time when the digital revolution has destroyed public communication through social media. The infosphere has been structurally poisoned, and education has degraded. Some American experts claim that social media has ruined political communication and that the quality of the political class has deteriorated, because these platforms have triggered a race to the bottom in which communication becomes simplified, devoid of nuance, and highly emotional.

And this is the context in which the AI revolution is taking place. So far, AI systems achieve their efficiency only at the level of hallucination or confabulation. While this has its uses, the generative AI systems built to date do not guarantee the production of knowledge of scientific quality. AI can help organize existing knowledge and assist researchers in generating new ideas, but it is still unable to distinguish truth from falsehood or to test hypotheses.

Prof. Andrzej Zybertowicz: I Support the Slow AI Movement

Therefore, the hope that AI will rise to the challenge of ESG is, at the very least, premature. In a recently published book, which premiered at an economic forum just yesterday, we express our support for the Slow AI movement — the idea that the development of AI systems should be slowed down, for at least two reasons.

First, current AI systems do not guarantee safety. They could spiral out of control, generate false data, and serve more as tools of disinformation than as mechanisms to enhance the quality of public discourse.

Second, we should not rush into digitizing every domain of social life. Already today, the development of AI is running into problems with energy resources. Major companies are beginning to plan the construction of nuclear power plants to overcome electricity shortages. This raises the question: is it possible to develop AI further without a corresponding spike in the energy these systems consume? I therefore believe it would be rational, in this cognitively disoriented world, to slow down the development of the most innovative and groundbreaking technologies and to establish regulatory procedures.

Prof. Andrzej Zybertowicz: Is the Green Transition Just a Case of "Trading One Problem for Another"?

In essence, if we take a charitable view of the ESG movement, it is an attempt to regulate economic development so that the benefits outweigh the environmental costs. However, it turns out that the so-called green revolution requires a digital revolution, which depends on heavy use of rare earth metals, whose extraction, according to some estimates, imposes seven times more burden on ecosystems than the extraction of traditional raw materials. In other words, the question arises: is the green transition, which requires a digital transition, merely a case of "trading one problem for another"?

Many of these problems require analysis, and analysis takes time. Researchers need time. But that time will be taken from us if we are blindsided by future AI developments that lack the ability to distinguish truth from falsehood. AI’s hallucinatory and confabulatory creativity might overwhelm our capacity to confront the narratives that emerge.

Prof. Andrzej Zybertowicz: Slowing AI Development Must Be Coordinated with China for Geostrategic Balance

Humans, when confronted with a narrative, have an instinct for distinguishing reality from falsehood. Yet already today, AI-generated deepfakes of increasing quality are sowing confusion, and we lack the cognitive tools to discern false narratives from true ones. Whether that instinct can keep up is a valid question.

From the perspective of geostrategic balance, slowing AI development must be coordinated with China. I share the view of those analysts who argue that it is also in the interest of the Chinese Communist Party to implement solid international regulations that would make AI development more transparent and slower. This is because China’s leadership may fear the rapid advancement of AI, and its use for malicious purposes, as much as the leadership of democratic nations does.

One could imagine a scenario in which AI is programmed to seize control of China’s electronic communication systems, such as those built on the WeChat app, or of its political narratives, and to delegitimize the regime. Or we could envision a scenario in which AI evolves beyond human comprehension, potentially undermining the Chinese Communist Party’s authority.

Prof. Andrzej Zybertowicz: There is a Fear of AI Spinning Out of Control

These fears — and they are justified fears — can form the rational basis for building a Washington-Beijing consensus and involving other key players, ensuring that this new technology is developed in a context of solving human problems, not in a way that breaks geostrategic balance.

This interest must be enforced in part by a responsible democratic government in Washington. On the other hand, some American Big Tech companies also fear that AI could evolve in ways we would not even notice. AI could develop a form of consciousness, hide it from us, and that could come as an unpleasant surprise to all of us.

Why? Because the current path of development — based on machine learning and large language models, the so-called generative AI — does not guarantee control over these systems. They evolve emergently, in leaps, and as black boxes, meaning they solve certain tasks, but the creators of these systems do not understand how these tasks are accomplished. Less fanatical techno-enthusiasts should take this as a lesson that everything needs to be slowed down.

Prof. Andrzej Zybertowicz

Central Europe Reports, [source: Agencja Informacyjna] – 30.10.2024