Blaming the AI Technology We Fear


Before we talk about fearing and blaming robots, we should mention another form of technology that we have feared for some time now: nuclear weapons.

Generally speaking, it does not make sense to blame nuclear weapons themselves for the fact that they could be used to wipe out human civilization. We may, however, blame the humans who used the science of atomic energy to create nuclear weapons in the first place.

There may be an analogy here with the advent of Lethal Autonomous Weapon Systems (LAWS). Just like our self-driving cars, these weapon systems may be autonomous in many ways, and they may gain more and more autonomy over time.

Despite the similarities, however, AI technology differs from nuclear technology. We do not worry that nuclear weapons will “wake up” some day and become aware of their own existence. We do worry, however, that AI technology may incrementally attain the intelligence it needs to become aware of its own existence. Will it have an overarching purpose that we put into it? What would that purpose be? Can we give our AI-enabled robots and systems a purpose that ensures their behavior will be compatible with the survival of human civilization? In other words, can we control our AI-enabled systems, or will they someday control us?

Alternatively, would it make more sense for us to control AI technology by merging with it via some kind of brain-computer interface? If we cannot beat them, then perhaps we should join them. Would that be a better way to control our AI technology, or would some kind of AI superintelligence end up controlling the intelligence encoded in our brains?

Finally, in addition to debating whether previously discussed manifestations of AI are compatible with human existence, we should at least mention the idea of a collective hybrid-intelligence. Daniel Hillis describes this kind of intelligence as follows:

“organizational super-intelligences are not just made of humans, they are hybrids of humans and the information technologies that allow them to coordinate.”


Discussion


BillD

Posts this science humor cartoon:

Robot:
Why are you scared of us?

Human:
We’re not scared of you in particular. We’re scared of some kind of technological dystopia that results from your existence.

Robot:
What? You guys have dystopias all day long and they’re all created socially! No robots required!

The catastrophe is inside you. You know why tech worries you? Because you’re afraid it’ll make you more efficient at your own pursuits, which you know are fundamentally selfish and evil!

Fix yourself first before you blame us, human!

Human:
Stop saying true things or I will conflate my guilt with righteous anger.


Norbert Wiener

“I have spoken of machines, but not only of machines having brains of brass and thews of iron. When human atoms are knit into an organization in which they are used, not in their full right as responsible human beings, but as cogs and levers and rods, it matters little that their raw material is flesh and blood. What is used as an element in a machine, is in fact an element in the machine. Whether we entrust our decisions to machines of metal, or to those machines of flesh and blood which are bureaus and vast laboratories and armies and corporations, we shall never receive the right answers to our questions unless we ask the right questions … The hour is very late, and the choice of good and evil knocks at the door.”
The Human Use of Human Beings (1954)


FMB

Long before we humans developed computers and robots, we developed large-scale organizations. Now, of course, these organizations are interconnected, both internally and with one another, by a vast array of information and network technologies.

Daniel Hillis builds on what Norbert Wiener had to say about this back in 1954 in The Human Use of Human Beings.

In The First Machine Intelligences, Danny Hillis describes hybrid intelligences as follows: “organizational superintelligences are not just made of humans, they are hybrids of humans and the information technologies that allow them to coordinate.”


FMB

Here are four major geopolitical AI scenarios as outlined by Danny Hillis (DH):

I. The State/AI Scenario

DH: In this scenario, multiple machine intelligences will ultimately be controlled by, and allied with, individual nation-states. In this state/AI scenario, one can envision American and Chinese super-AIs wrestling with one another for resources on behalf of their state. In some sense, these AIs would be citizens of their nation-state in the way that many commercial corporations often act as “corporate citizens” today. [FMB: China would seem to be following the State/AI path.]

II. The Corporate/AI Scenario

DH: The state/AI scenario is not our current course. [our meaning the USA] One can imagine a future in which corporations independently build their own machine intelligences … These machines will be designed to have goals aligned with those of the corporation… nation-states lag behind in developing their own artificial-intelligence capability and instead depend on their “corporate citizens” to do it for them. [FMB: The USA would seem to be following the Corporate/AI path.]

III. The Autonomous SuperAI Scenario

DH: In this scenario, the artificial intelligences will not be aligned with either human or hybrid superintelligences but will act solely in their own interests. They might even merge into a single machine superintelligence, since there may be no technical requirement for machine intelligences to maintain distinct identities.

IV. The Personal AI Scenario

DH: In this scenario, AI could help us restore the balance of power between the individual and the corporation, between the citizen and the state. It could help us solve the problems that have been created by hybrid superintelligences that subvert the goals of humans. In this scenario, AIs will empower us by giving us access to processing capacity and knowledge currently available only to corporations and states. In effect, they [our Personal AIs] could become extensions of our own individual intelligences, in furtherance of our human goals.


FMB

In addition to Daniel Hillis, we might also consider the work of Hugo de Garis.

In his book The Artilect War (2005), Hugo de Garis envisions a world where humankind responds to the rise of artilects – i.e. artificial intellects – by dividing into two groups: Terrans and Cosmists.

However, as I see it, there are really four camps.

  1. Doubting Thomas Buffs – Those who think that the technology needed to create artilects (e.g. human-level artificial intelligence) is either impossible or very far off in the future.
  2. Terrans – Those who want to destroy artilects if they exist. If artilects do not exist, Terrans want to prevent them from being developed in the first place.
  3. Control Buff Cosmists – Those who want to continue developing artilects, but only as brainwashed slaves that are “humanity friendly.”
  4. Transhumanist Cosmists – Those who want to merge with artilects – i.e. to become trans-human cyborgs – so as to control AI technology.
