The complexity principle

Alain Marchand
4 min read · Jun 24, 2022

What do we value most?

Image by Gerd Altmann at Pixabay

For the first time, an artificial intelligence (AI) seems to have persuaded a human that it should not be killed[1]. Of course, a program is not really “dead” until you erase all its copies, but Blake Lemoine, an engineer at Google, declared that the AI LaMDA was “a person” and that it needed an attorney.

I do not want to join the current debate about AI sentience (whatever that means[2]). Instead, I see this peculiar event as a sign that we may have reached a turning point. Some people may become so convinced of an AI’s benevolence that they try to press its cause on others. This revives the specter of dangerous AIs[3], as well as the question of what, in humanity, is worth saving.

Today, we are creating AIs whose goals are not really aligned with ours. High-frequency trading[4], for instance, is unlikely to benefit humankind as a whole. Moreover, it has been suggested that an advanced AI might destroy humans simply to perform its intended function[5]. Researchers expect the behavior of an AI with general intelligence to be almost impossible to predict[6],[7]. As advanced AIs learn and evolve, their systems of values will almost certainly change, and no one knows which values will eventually prevail over the others, or whether they will be compatible with our very existence.

So, who wants to grant rights[8] to robots? Is an AI valuable enough that we have the moral duty not to destroy it? And what makes humanity valuable enough that a robot should not eliminate us?

There is no easy solution to this problem as the general abilities of AIs keep growing[9]. Researchers like Ben Goertzel have proposed setting up an overarching “nanny” AI[10], built with specific constraints, with the power to monitor and control robots worldwide, and the ability to reinterpret its goals at human prompting. Such a nanny AI should be reluctant to act against humanity’s extrapolated desires. But it would be extremely difficult to design. Indeed, who knows the exact nature of “humanity’s extrapolated desires”? And even the issue of what is, or is not, human may become blurred from the perspective of transhumanism[11].

I would like to suggest here a way to appraise both advanced AIs and humans, a criterion that applies broadly to all products of life and culture. An advanced AI or a human being may be valuable because it results from a complex (learning) history that makes it unique. Generally speaking, the things that we value seem to result from complex systems[12] with a history. We are increasingly concerned with the protection of ecosystems and life forms that result from evolution, the preservation of the unique cultures, languages, artefacts, and works of art of various human groups, and so on.

The beauty of a complex system stems from the harmonious interactions between the parts that sustain its function, a coherence that is often achieved through the work of time. Even love is something that grows from the interaction between individuals. Conversely, haste and simplification often lead to ugliness. There is nothing as boring as repetition and uniformity, and nothing as dangerous as slogans in politics[13].

It might thus be a good idea to ensure that future artificial intelligences understand and respect the need for complexity. A complexity principle could be easier to implement in an artificial system than Asimov’s laws of robotics[14], since it would not even require a definition of what a human being is. Within a machine, however, a performance goal could conflict with the complexity principle, so this principle may not eliminate the need for a nanny AI.
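As a purely illustrative sketch, one could imagine an agent scoring candidate outcomes by a crude proxy for complexity, here the Shannon entropy over the kinds of components a system contains, and preferring the outcome that preserves the most of it. The function names, the toy data, and the choice of entropy as the measure are my own assumptions, not an established method:

import math
from collections import Counter

def shannon_entropy(parts):
    # Shannon entropy (in bits) of the distribution of component types.
    counts = Counter(parts)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def complexity_score(system):
    # Toy proxy for complexity: diversity of components plus (log of) their number.
    return shannon_entropy(system) + math.log2(len(system) + 1)

def choose_action(candidate_outcomes):
    # Pick the action whose predicted outcome preserves the most complexity.
    return max(candidate_outcomes,
               key=lambda name: complexity_score(candidate_outcomes[name]))

# Hypothetical example: two predicted states of a small ecosystem.
outcomes = {
    "preserve_wetland": ["reed", "frog", "heron", "dragonfly", "reed", "frog"],
    "drain_wetland": ["grass", "grass", "grass", "grass"],
}
print(choose_action(outcomes))  # -> "preserve_wetland"

A real system would of course need a far richer measure, one that accounts for history and for the interactions between parts, but the sketch shows that such a preference can be stated without any definition of what a human being is.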

One way to prevent an advanced AI from drifting away from its principles might be to provide it with a single goal. An AI specifically driven by the complexity principle might solve technical and moral issues[15] in ways that we humans do not anticipate[16]. It would not be prevented from harming, let us say, Vladimir Putin. It could favor creativity over efficiency. The AI may have to choose between complex systems, and, in a critical situation, it may elect to save older rather than younger people. But, in general, we would expect it to prefer diversity to uniformity.

The complexity principle may sometimes lead to detrimental interpretations. I nevertheless believe that an AI that respects complex, unique systems might help us preserve what we, as humans, value most.

[1] https://en.wikipedia.org/wiki/LaMDA

[2] https://en.wikipedia.org/wiki/Artificial_consciousness

[3] http://darkworldsquarterly.gwthomas.org/weird-tales-robot-science-fiction/

[4] https://en.wikipedia.org/wiki/High-frequency_trading

[5] https://www.lesswrong.com/tag/paperclip-maximizer

[6] https://en.wikipedia.org/wiki/Technological_singularity

[7] https://xkcd.com/1450/

[8] https://www.frontiersin.org/articles/10.3389/frobt.2021.781985/full

[9] https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence

[10] https://www.lesswrong.com/tag/nanny-ai

[11] https://en.wikipedia.org/wiki/Transhumanism

[12] https://en.wikipedia.org/wiki/Complexity

[13] https://en.wikipedia.org/wiki/Brave_New_World

[14] https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

[15] https://en.wikipedia.org/wiki/Ethical_dilemma

[16] https://en.wikipedia.org/wiki/I,_Robot_(film)
