Opinion

Why Are We So Afraid of AI?

Seppo Keronen
Published in Towards AI
7 min read · Aug 5, 2021

Photo by Andre Mouton on Unsplash

We humans have become the dominant species of “our” planet. Our previous niche, as small bands of social bipedal apes, exerted extreme selection pressure, moulding our extraordinary brains and leaving us with the scars to prove it!

  • We are afraid of anyone who is too different from ourselves — How alien are the emerging intelligent machines and their Artificial General Intelligence (AGI)?
  • We act without due regard for long term consequences. We build nuclear weapons, we exhaust and pollute our environment, we exploit the weaknesses of others. This makes us afraid of our own shadow — Is AGI going to be a stronger and even scarier version of ourselves?
  • Our psychological, social and economic order is based on the value of contributed individual effort. While the industrial and information revolutions have displaced routine work, will AI and robotics displace intellectual and contingent work as well?
  • Despite our violent past, human societies are gradually and haltingly becoming more peaceful, fair, just, and tolerant — Will intelligent machines help us keep the peace? Will they free us from drudgery and poverty? Will they align with us?
  • Intelligent machines have been promised ever since the early years of computers when they were called “electronic brains”. Is this just another hype cycle that we can ignore?

In order to make sense of where we stand, let’s first reflect on ourselves as intelligent biological machines. Given this benchmark, we can better understand, compare and perhaps align our interests with the machine intelligences that emerge.

Figure 1 — Human Cognitive Architecture Sketch

Human Intelligence

As illustrated in figure 1, we perceive our environment (including the state of our own body) via sensory neurons, and we act via motor neurons. Our intelligence, or the lack of it, is what happens as the perceived sensory signals are processed by our biological neural network to produce more or less impressive actions.

We may distinguish levels of processing/thinking that have developed over millions of years. Evolution is a largely conservative process that retains ancient structures as new ones are overlaid. Starting from the most ancient layer and working up to where we find ourselves now:

Level 0 — reflexive responses hard-wired by our genes during development. These neural circuits are only minimally modified during an individual lifetime.

Level 1 — highly parallel learning and exploitation of spatial-temporal patterns that have predictive power for survival. The learning signal here is the mismatch between expectation and subsequent ground truth (caricatured in the code sketch after these level descriptions). There are also value signals here that motivate us and that we experience as emotions.

Level 2 — attention-focused processing of expressions composed of concepts and associated symbols (signifiers) denoting entities, qualities, quantities, feelings, relationships and processes. This makes the human species a powerful super-organism, able to transmit information between individuals across space and time.

Level 3 — the ability to compose (imagine) hypothetical entities, situations, processes and other intensions without the corresponding external data (extensions). This enables self-referential thinking and makes us individually into powerful engines of imagination.
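
The level 1 learning signal can be caricatured in a few lines of code. Below is a toy sketch of error-driven (delta rule) learning: a single linear predictor adjusts its weights in proportion to the mismatch between its expectation and the observed ground truth. It illustrates the principle only, and makes no claim about biological detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single linear "neuron" predicting a signal from three input features.
weights = np.zeros(3)
learning_rate = 0.1

# Hidden ground-truth relationship the predictor must discover.
true_weights = np.array([0.5, -0.2, 0.8])

for step in range(1000):
    x = rng.normal(size=3)                 # sensory input
    prediction = weights @ x               # expectation
    target = true_weights @ x              # subsequent ground truth
    error = target - prediction           # mismatch: the learning signal
    weights += learning_rate * error * x   # delta rule update

print(weights.round(2))  # approaches [0.5, -0.2, 0.8]
```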

Figure 2 — Symbolic AI

Symbolic AI (Expert Systems, GOFAI)

Boolean logic circuits and memory registers, from which computers are made, are ideal for arithmetic calculations and other symbolic operations. Symbolic AI is the practice of programming this computational substrate to emulate the operations of rule-following symbol rewriting systems, such as predicate calculus and production systems. Notably, the reverse does not hold: the human computational substrate (networks of stochastic neurons with limited short-term memory) emulates symbolic computation very poorly.
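
To make this concrete, here is a minimal sketch of a forward-chaining production system: rules rewrite a working memory of symbolic facts until no rule fires. The rules are invented toy examples, not any particular historical system.

```python
# Minimal forward-chaining production system. Facts and rules are
# hand-coded symbols, which is the knowledge acquisition bottleneck
# discussed below.
rules = [
    ({"bird", "healthy"}, "can_fly"),
    ({"penguin"}, "bird"),
    ({"penguin"}, "cannot_fly"),
]

def forward_chain(facts):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"penguin", "healthy"}))
# Derives both can_fly and cannot_fly: the rules are brittle because
# the symbols are not grounded in actual birds.
```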

During the second half of the 20th century, the term AI was used to refer to the practice of representing “knowledge” in the form of input symbolic expressions and processing these to produce output symbolic expressions. This symbolic, good old-fashioned AI (GOFAI) tradition is illustrated in figure 2, which also shows its correspondence with the level 2 thinking of the benchmark model.

It was found that symbolic AI suffers from critical shortcomings, which eventually led to a decade or so of depressed research funding (an AI winter) around the turn of the century. The main shortcomings identified were:

Concepts — Symbolic expressions provide the means to signify, but not to represent, the referent entities, processes and other concepts. A truth-functional interpretation does not enable the deep semantic interpretation required to emulate, simulate and analyze sensory information.

Grounding — A symbolic language works as a useful tool when its symbols refer to elements of the world of interest, and its statements correspond with actual and potential states of that world. This mapping is not available in the language itself.

Learning — Instead of learning, symbolic AI relies on a repository of knowledge formulated as statements in a formal language. The scarce availability of experts to encode and verify such knowledge bases is referred to as the knowledge acquisition bottleneck.

Figure 3 — Connectionist AI

Connectionist AI (Neural Networks, DNNs)

Connectionist AI is an alternative to the above symbolic tradition. Here we arrange the circuits and memory cells of computer hardware to emulate the operation of networks of neuron-like elements. This approach has been pursued by dedicated researchers since the 1940s. Notably, a traditional von Neumann computer is not well suited to efficiently emulating a biological nervous system, and many simplifications and alternatives are being pursued.

The terms deep learning and deep neural networks (DNNs) are often used to refer to the current incarnation of the connectionist AI paradigm. The emphasis here is on three key principles:

Representation — Multidimensional vectors of numbers (tensors) are used to represent all data and state. Where these tensors correspond to domain entities they are known as embeddings.

Processing — Large numbers of neuron-like elements, organised into numerous layers of non-linear processing, provide the required modeling power. The more layers there are, the deeper the network is said to be.

Learning — Patterns in data are learned statistically, using the backpropagation algorithm to tune the weights (parameters) of the elements. These parameters determine the results computed, and constitute the long-term memory of the network.
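
All three principles fit in a screenful of plain Python. The following minimal sketch trains a two-layer network on the XOR function using only numpy; production systems differ mainly in scale, architecture and tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Representation: inputs, targets, activations and weights are all arrays.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# Processing: two layers of neuron-like elements with a non-linearity.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learning: backpropagation tunes the parameters to reduce prediction error.
lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)    # hidden layer activations
    out = sigmoid(h @ W2 + b2)  # network output
    # Gradients of squared error, propagated backwards layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```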

Figure 3 illustrates DNNs relative to our benchmark architecture. Relative to symbolic AI, DNNs are a bottom-up, level 1 technology. DNNs accept multimodal sensory input (images, audio, touch, etc.) and drive actuators (speakers, displays, motion, etc.) without complex engineering of encoding, decoding and analysis algorithms.

As well as sensor signal processing, large DNNs are able to model the structure of natural and artificial languages. Such large language models (LLMs) have recently ignited a storm of hope, fear and controversy. What is fuelling this storm?

Generative AI — LLMs trained and deployed as autoregressive generators of language appear surprisingly competent. The largest LLMs incorporate and express (when appropriately prompted) super-human amounts of linguistic and pragmatic information.

Misinformation — A raw LLM generates output that follows statistical patterns in its training data, without grounding its symbols to referents. The result is flamboyant confabulation and misinformation.

Emergent Generality — LLMs acquire and manifest semantic patterns implicit in their training data. LLMs can be prompted to exhibit basic logical, numeric, spatial, temporal and even social reasoning skills.

Agency — Beyond the stand-alone, single-pass LLM, intelligent agents that incorporate LLMs as components are emerging. Such agents can employ a short-term memory context, break down goals into subgoals, reason about intermediate results, consult external resources and direct physical actions (a schematic sketch follows).
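
The step from single-pass generation to agency is small in code, whatever its consequences. In the sketch below, call_llm is a scripted stand-in for a real language model API, and the calculator tool is a toy placeholder; everything here is hypothetical scaffolding, not any particular framework.

```python
# Schematic agent loop around a language model. A real call_llm would
# generate each step autoregressively from the accumulated context.
SCRIPT = iter([
    "SUBGOAL: find the total cost of 3 items at 19.99 each",
    "TOOL: calculator 3 * 19.99",
    "FINAL: the total cost is 59.97",
])

def call_llm(context):
    return next(SCRIPT)  # stand-in for a model API call

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool; eval is unsafe
}

def run_agent(goal, max_steps=10):
    memory = ["GOAL: " + goal]                 # short-term memory context
    for _ in range(max_steps):
        step = call_llm("\n".join(memory))     # reason about the next step
        memory.append(step)
        if step.startswith("FINAL:"):          # the model declares success
            return step
        if step.startswith("TOOL:"):           # consult an external resource
            name, _, arg = step[len("TOOL: "):].partition(" ")
            memory.append("RESULT: " + TOOLS[name](arg))
    return "FINAL: step budget exhausted"

print(run_agent("price three widgets at $19.99"))
```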

Figure 4 — Prospective AGI

Artificial General Intelligence (AGI)

Assuming the reference schema of Figure 1 is meaningful, we are far from replicating human-like minds. That said, the endeavour to understand and engineer intelligence is part of our journey to discover who we humans are and what options we have for the future.

Figure 4 illustrates a feasible, prospective architecture for autonomous machines, spanning level 0, level 1 and level 2 characteristics. We can look forward to safe, autonomous machines that can learn new skills, perform useful work and even amuse and entertain us.

Safety — We will incorporate policies to ensure safe decisions and fail-safe behavior. Experience with safety-critical systems, such as fly-by-wire aircraft, indicates that such programmed safety harnesses are feasible (a minimal sketch follows this list).

Language — Providing machines with grounded, referential natural language is probably the most exciting, high-return challenge. Progress with language models is addressing this concern.

Episodic Memory — Grounded representations will enable more efficient and expressive memory structures to deal with the complexity of the world, with partial models of the world constantly subject to improvement.

Hypotheticals — Even machines need to hallucinate possible, coherent worlds and delay “gratification” to make plans for the orchestration of actions.
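
Returning to the safety point above: a programmed safety harness need not be exotic. The sketch below vets every action an agent proposes against a hard-coded policy before execution, in the spirit of fly-by-wire envelope protection. The policy rules and action names are hypothetical placeholders.

```python
# A minimal safety harness: every proposed action is checked against
# hard-coded policy before it is executed.
FORBIDDEN = {"delete_data", "disable_safety"}    # never allowed
REQUIRES_APPROVAL = {"send_email", "move_arm"}   # needs a human in the loop

def safe_execute(action, execute, human_approves):
    if action in FORBIDDEN:
        return "blocked: " + action              # fail-safe default
    if action in REQUIRES_APPROVAL and not human_approves(action):
        return "denied: " + action
    return execute(action)

# Usage with toy callbacks:
print(safe_execute("move_arm",
                   execute=lambda a: "executed: " + a,
                   human_approves=lambda a: True))   # executed: move_arm
print(safe_execute("delete_data",
                   execute=lambda a: "executed: " + a,
                   human_approves=lambda a: True))   # blocked: delete_data
```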

Figure 5 — Reflection

The “dark matter and dark energy of intelligence”, illustrated in figure 5, remain unaccounted for relative to the human benchmark of figure 1. What hides behind the words “self”, “motivations” and “emotions”, and the “feelings”, “qualia”, “awareness” and “consciousness” associated with them? We do not seem to even have the concepts required to answer such questions. Perhaps, together with the AIs, we will discover some answers?

Are We Afraid?

The AIs are real, they are profoundly alien, they will be harnessed by ill-intentioned humans, they will perform intellectual and contingent work and eventually they will pursue goals and purposes that diverge from ours. As a part of this process, we humans will adapt and discover more about ourselves.
