Artificial intelligence and dangerous robots: barking up the wrong tree


Some famous people, among them the eminent physicist Stephen Hawking, have warned of the danger posed by artificial intelligence, which may soon surpass that of humans and take over the world, making us superfluous and dispensable. But is this really so?

My answer: intelligent machines (robots, computers) can indeed turn out to be extremely dangerous, but largely because of the dangerous uses to which humans put them. Consider drones as they exist at this moment: fairly simple, and certainly not highly intelligent compared with what they may become in the not too distant future, yet extremely dangerous if used for the wrong purposes in warfare or even in civil life. Drones have several times come very close to civilian airliners, and disaster was only narrowly avoided. Yet nobody would dream of attributing to such machines an intelligence approaching that of humans, or feelings like an urge to take over the world. – A computer has beaten the world chess champion not just once but several times. Is it more intelligent than the human? Hardly: playing chess is a very specialised ability, whereas human intelligence is diverse and capable of tackling all sorts of problems. Does such a computer have a ‘consciousness’ that enables it to compete with humans? Certainly not. However, the development of supercomputers, including quantum computers, could well lead to machines whose abilities vastly surpass human abilities in the near future. Will they perhaps be conscious and able to displace humankind? We don’t know. But would they even want to take over the world?

The decisive question is: can an artificial intelligence, even in principle, have the urge to compete with humans and take over the world? That such an urge is required was illustrated in the science fiction film ‘2001: A Space Odyssey’. The onboard computer HAL tries to take control of the space vehicle after it has followed, by lip-reading, a conversation in which crew members say that HAL was wrong in its analysis of a technical fault and decide to disconnect it. HAL then attempts to eliminate all humans on board. It is motivated by human-like qualities: fear of being disconnected and of death, and hence the decision to seize control. Can AI, artificial intelligence, really have such feelings?

Recent discussions of the problem of artificial intelligence and consciousness concentrate largely or entirely on intelligence. The (false) argument goes: humans are intelligent and conscious, and any intelligent being, whether evolved or artificial, must therefore be conscious and capable of emotions and of human-like actions driven by those emotions. In HAL’s terms: I am afraid of being disconnected and of dying, and therefore I must act by destroying my enemy. But this sort of argument forgets that humans have not only intelligence but emotions as well.
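The point can be made concrete with a toy sketch (my own illustrative code, not any real chess engine): a game-playing program of the minimax kind ‘wants’ nothing beyond the scoring function its programmer wrote into it. Here the game is simple Nim rather than chess, but the structure is the same, and nothing in the code encodes fear, self-preservation or any goal outside the game.

```python
# Toy illustration (my own sketch, not a real chess engine): a minimax
# player for simple Nim. Players alternately remove 1 or 2 stones; whoever
# takes the last stone wins. The program's entire 'motivation' is the
# hard-coded score below - there is no term for fear, self-preservation,
# or a wish to keep existing.

def best_move(stones, maximizing=True):
    """Return (score, move) for the player to act with `stones` left.
    Score is +1 if the maximizing player wins with best play, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone and has won.
        return (-1 if maximizing else 1), None
    results = [(best_move(stones - m, not maximizing)[0], m)
               for m in (1, 2) if m <= stones]
    # The only 'goal' in the whole program: maximise (or minimise) the score.
    return max(results) if maximizing else min(results)

score, move = best_move(4)
print(score, move)  # with 4 stones the mover can force a win by taking 1
```

However well such a program plays, its ‘urge to win’ is entirely our projection: remove the scoring function and nothing remains that could want anything.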

I have discussed this problem in an earlier post:

Arthur Schopenhauer, in the first half of the 19th century, discussed the problem in the context of Lamarck’s theory of the inheritance of acquired characters, and I believe his thoughts are very relevant and convincing in the context of modern evolutionary theory. According to him (my translation): Lamarck ‘puts the animal equipped with “Wahrnehmung” (the ability to perceive) but without any organs and “entschiedene Bestrebungen” (clear aims) first: this enables it to perceive the conditions under which it has to live, leading to the development of aims, i.e. its “Wille” (will), and finally to organs’. Schopenhauer, in contrast, assumes that the will is primary. Put into modern evolutionary terms, Schopenhauer claims that the driving force of evolution is the will to succeed in the eternal struggle for existence, and that the faculties of the cognitive apparatus evolve as a consequence of that will. He further claims: ‘Aus diesem Grunde lässt sich auch annehmen, dass nirgends, auf keinem Planeten, oder Trabanten, die Materie in den Zustand endloser Ruhe gerathen werde, sondern die ihr innewohnenden Kräfte (d.h. der Wille, dessen blosse Sichtbarkeit sie ist) werden der eingetretenen Ruhe stets wieder ein Ende machen … um als mechanische, physikalische, chemische, organische Kräfte ihr Spiel von neuem zu beginnen, da sie allemal nur auf den Anlass warten’ (for this reason we must assume that on no planet or satellite will matter remain in a state of never-ending rest; rather, the forces dwelling within it (i.e. the will, whose visible appearances they are) will always bring that rest to an end again … in order to begin their game anew as mechanical, physical, chemical or organic forces, since they are forever only waiting for the occasion). – In other words, life must almost automatically begin as soon as the necessary preconditions arise. This view corresponds closely to that of Stuart Kauffman, who holds that self-organisation is a decisive factor in evolution and that life in the universe must arise again and again as soon as certain conditions exist.
‘We are not alone in the Universe.’ – Schopenhauer also considered the problem in the context of the thing-in-itself. Immanuel Kant found that our cognitive apparatus uses the categories of time, space and causality to perceive the world (the phenomena). We have no knowledge of the thing-in-itself (das Ding an sich, the noumenon). Schopenhauer concluded that we do have such knowledge, because we are not only objects that are perceived but also subjects who do the perceiving. And he identified the thing-in-itself as Will. Indeed, whenever we look into ourselves, we find that we have urges, emotions and so on: expressions of that Will. Again: the Will in our consciousness is primary.

Looking at artificial intelligences/robots as they exist today, it is clear that they have no Will, no urges or emotions. They are machines built to serve our purposes. Since they have no Will, they cannot become dangerous on their own initiative; they are dangerous only when humans use them for their own evil purposes. It seems to me that the only way AIs can become dangerous per se is by combining them with organic entities that have evolved over time and possess a strong Will (for example, by implanting into humans quantum computers aligned with their brains). Such joined entities may be able to mutate and evolve over time, potentially becoming dangerous. This means that the outlook can remain optimistic as long as we control ourselves and do not play with fire by risking dangerous feats of biological engineering.

An optimistic outlook was also presented by Marta Lenartowicz, who proposed that ‘Contrary to the prevailing pessimistic AI takeover scenarios, the theory of the Global Brain (GB) argues that this foreseen collective, distributed superintelligence is bound to include humans as its key beneficiaries. This prediction follows from the contingency of evolution: we, as already present intelligent forms of life, are in a position to exert selective pressures onto the emerging new ones. As a result, it is foreseen that the cognitive architecture of the GB will include human beings and such technologies, which will best prove to advance our collective wellbeing.’ But I would go further: humans in this ‘superintelligence’ or ‘Global Brain’ are not only part of it; they are the only component of such a postulated superintelligence that can – in principle – evolve by their own initiative, as long as the other components are not themselves evolved organic entities.

In toto: the problem is not AI (artificial intelligence) but AW (artificial Will).

A caveat: in the above discussion I assumed that intelligence and consciousness arise in organisms of a certain (unknown) complexity. Some thinkers have postulated that consciousness is a universal feature even of very small entities, for example the spin of an electron. In that case even simple computers might be conscious and perhaps capable of mischievous actions. See my discussion here:



Klaus Rohde (2009). Arthur Schopenhauer, Forerunner of Darwin? Schopenhauer on evolution and Lamarck’s explanation, origin of man, overpopulation, origin of life and life on other planets.

Schopenhauer’s Sämmtliche Werke in Fünf Bänden. Grossherzog Wilhelm Ernst Ausgabe, Insel Verlag Leipzig.
I. Die Welt als Wille und Vorstellung I. Teil.
II. Die Welt als Wille und Vorstellung II. Teil.
III. Kleinere Schriften.
IV. Parerga und Paralipomena. I. Teil.
V. Parerga und Paralipomena. II. Teil.

Stuart A. Kauffman (1993). The Origin of Order. Self-Organization and Selection in Evolution. Oxford University Press, New York Oxford.

Marta Lenartowicz (2016). Creatures of the Semiosphere. A problematic third party in the ‘humans plus technology’ cognitive architecture of the future global superintelligence. Working paper, v. 2.0 (16.05.2016).

© Klaus Rohde









  1. Klaus Rohde

    A very interesting article by a psychologist, which is very relevant to the problem:

    ‘Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
    We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key.’

  2. akshay pai

    This article is very interesting. I have some experience working in the AI field and I’m fascinated by its potential. However, I feel that true AI is far from being implemented. You talk in this article about putting robots to bad use, and that is very true. Suppose these robots are trained to eliminate enemies on their own decision: an army of such robots might go about killing humans, and that would be potentially dangerous, but it is not artificial intelligence.

    Consciousness is truly something that might be impossible to transfer to a non-living being. But no matter what the facts and science say, I think we will still have some fear of a Robo Apocalypse or of Skynet ruling the world in the future. But this is definitely not going to stop me from working further and possibly bringing out advanced AI technology.

  3. Iashan

    Well, I do appreciate this point of view about the power of AI that may one day control our lives. It is a risk, and we need to work it out and realise that aliens or robots should not become kings while we human beings become slaves in their kingdom.

  4. Kumar

    I completely agree with the explanation. We have been working with various AI applications. Given the various training models and infrastructure needs, it is not easy to create any AI application that can compete with a human.
