In developing AIs (artificial intelligences), there’s no guarantee that they will think like we do. We need to ask:
What possible types of minds could people have?
What possible types of minds could AIs have?
We’ll illustrate possible minds on (at least) a 2D (two-dimensional) chart.
Let’s start with interpreting 1D, 2D, and 3D graphs; then we’ll show how to graph possible minds.
1. What is intelligence?
“The whole of cognitive or intellectual abilities required to obtain knowledge, and to use that knowledge in a good way to solve problems that have a well described goal and structure.”
Resing, W., & Drenth, P. (2007). Intelligence: Knowing and Measuring. Amsterdam: Nieuwezijds Publishers.
also see What is intelligence and IQ?
2. What is the Wechsler IQ scale?
A simplistic test to represent intelligence with a single number.
| IQ range (“deviation IQ”) | IQ classification |
| --- | --- |
| 130 and above | Very Superior |
| 69 and below | Extremely Low |
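The two bands above can be read as simple threshold rules on a single number. A minimal sketch in Python (only the two bands shown in the table are encoded; the intermediate Wechsler classifications are omitted here, as they are above):

```python
def classify_iq(iq: int) -> str:
    """Map a deviation-IQ score to the two Wechsler bands shown in the table."""
    if iq >= 130:
        return "Very Superior"
    if iq <= 69:
        return "Extremely Low"
    return "(intermediate band not shown in this table)"

print(classify_iq(135))  # Very Superior
print(classify_iq(65))   # Extremely Low
```

Because the whole scale reduces to comparisons against one number, it is a one-variable (1D) representation, which is exactly what the next question asks about.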
3. Is the Wechsler IQ scale 1D, 2D, or 3D?
A 1D (one-dimensional) graph is used when there is only one variable.
Examples: a thermometer, the Wechsler scale, a speedometer.
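Each of these instruments reports a single variable, so each can be drawn as one axis with one marker on it. A small text-based sketch (the 0–200 range and 40-character width are arbitrary choices for illustration):

```python
def render_1d(value: float, lo: float = 0, hi: float = 200, width: int = 40) -> str:
    """Draw a text number line with a single '*' marking one value: one variable, one axis."""
    pos = round((value - lo) / (hi - lo) * (width - 1))
    return "|" + "".join("*" if i == pos else "-" for i in range(width)) + "|"

print(render_1d(100))  # an IQ of 100 lands near the middle of a 0-200 axis
```

The same function would draw a thermometer or a speedometer; only the range changes, never the number of axes.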
4. In history class we sometimes plot political beliefs on a 1D scale.
What is being plotted on this axis?
However, not all positions can be accurately shown on a 1D graph.
We need at least 2 different dimensions. On this chart, what axes are being plotted?
6. Why is it better for some subjects to use 2D plotting instead of 1D?
7. How would we represent something that needs 3 different variables?
With a 3D plot.
On this chart, what are the 3 different dimensions (axes) being plotted?
For minds, we would need more than one dimension to represent ideas, so this chart is insufficient.
We don’t really have just one intelligence dimension (“dumb-to-smart”).
We have many, such as:
ability to think and reason logically, problem solving
ability to have empathy, understand the emotional state of other people
ability to understand one’s own emotional state/sentience
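One way to make “many dimensions” concrete is to score each ability on its own axis and treat a mind as a point in that space. A toy sketch (the axis names and scores below are invented for illustration):

```python
import math

# Each mind is a point; each key is one axis (one ability from the list above).
mind_a = {"logic": 0.9, "empathy": 0.4, "self_awareness": 0.6}
mind_b = {"logic": 0.5, "empathy": 0.9, "self_awareness": 0.7}

def distance(m1: dict, m2: dict) -> float:
    """Euclidean distance between two minds in ability space."""
    return math.sqrt(sum((m1[k] - m2[k]) ** 2 for k in m1))

print(round(distance(mind_a, mind_b), 3))  # 0.648
```

A single IQ number collapses all of these axes into one; keeping them separate is what the 2D and 3D charts above are gesturing at.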
The Universe of Minds, on a 2D graph
By Roman V. Yampolskiy
What is a mind? No universal definition exists… Higher order animals are believed to have one as well and maybe lower level animals and plants or even all life forms.
We believe that an artificially intelligent agent such as a robot or a program running on a computer will constitute a mind….
The set of human minds (about 7 billion of them currently available and about 100 billion ever existed) is very homogeneous both in terms of hardware (embodiment in a human body) and software (brain design and knowledge).
The small differences between human minds are trivial in the context of the full infinite spectrum of possible mind designs. Human minds represent only a small constant-size subset of the great mind landscape. The same could be said about the sets of other earthly minds, such as dog minds, or bug minds, or male minds, or in general the set of all animal minds…
Yudkowsky describes the map of mind design space as follows:
“In one corner, a tiny little circle contains all humans; within a larger tiny circle containing all biological life; and all the rest of the huge map is the space of minds-in-general. The entire map floats in a still vaster space, the space of optimization processes. Natural selection creates complex functional machinery without mindfulness; evolution lies inside the space of optimization processes but outside the circle of minds” (Yudkowsky 2008, 311).
Figure 1 illustrates one possible mapping inspired by this description.
Ivan Havel writes:
All conceivable cases of intelligence (of people, machines, whatever) are represented by points in a certain abstract multidimensional “super space” that I will call the intelligence space (shortly IS).
Imagine that a specific coordinate axis in IS is assigned to any conceivable particular ability, whether human, machine, shared, or unknown (all axes having one common origin). If the ability is measurable the assigned axis is endowed with a corresponding scale. Hypothetically, we can also assign scalar axes to abilities, for which only relations like “weaker-stronger,” “better-worse,” “less-more” etc. are meaningful; finally, abilities that may be only present or absent may be assigned with “axes” of two (logical) values (yes-no).
Let us assume that all coordinate axes are oriented in such a way that greater distance from the common origin always corresponds to larger extent, higher grade, or at least to the presence of the corresponding ability. … (Havel 2013, 13)
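Havel’s three kinds of axes (measurable scales, ordinal “weaker–stronger” relations, and present/absent abilities) can all be coerced onto numeric coordinates so that greater distance from the origin corresponds to more of the ability, as he assumes. A sketch under that assumption (the axis types and values below are hypothetical):

```python
# Coerce Havel's three axis types onto numbers: measurable scores stay as-is,
# ordinal grades become ranks, and present/absent abilities become 1/0.
ORDINAL = {"weaker": 0, "average": 1, "stronger": 2}

def coordinates(measurable: float, ordinal: str, present: bool) -> tuple:
    """One hypothetical point in the intelligence space (IS), one value per axis."""
    return (measurable, ORDINAL[ordinal], 1 if present else 0)

point = coordinates(measurable=115.0, ordinal="stronger", present=True)
print(point)  # (115.0, 2, 1)
```

With every axis numeric and oriented the same way, distances and comparisons in IS become well defined, which is what lets the figures above place whole regions of minds on a chart.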
What do we see here?
Human minds – what we humans have, from a day-old baby, to a child in 3rd grade, to an adult businesswoman, to the greatest geniuses the world has ever seen, like Albert Einstein and Isaac Newton. We’re all represented by the pink circle in the image above. The left side of the circle represents the least intelligent people, the right side the most intelligent. The vertical axis might represent sapience, sentience, or some other aspect of intelligence.
Transhuman minds – This larger salmon-colored region represents the possible minds of humans who have chosen to expand their brains. In theory, humans could use genetic engineering, or cybernetics, or both, to expand our intellectual powers.
Transhumanism is “the intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by using technology to eliminate aging and greatly enhance human intellectual, physical, and psychological capacities” – Nick Bostrom, 1999.
Posthuman minds – If humans continue to push their biology and minds past the transhuman state, the result would be a being that no longer looks or thinks like a human being at all.
Freepy AIs are any type of artificial intelligence that human beings might be able to make; although they may produce results we can understand, we can’t understand the way they think. They are not only smarter than us; they also think differently than we do.
Bipping AIs are a kind of artificial intelligence so advanced that humans couldn’t possibly design them. They might be designed by other AIs, or by transhumans or posthumans. They are amazingly intelligent, but utterly nonhuman. It might not even be possible to have a conversation with them, since their view of reality and their way of thinking about the world are so different from our own.
Gloopy AIs are a kind of artificial intelligence that humans couldn’t possibly design, but that are not necessarily smarter than us. They would have a capacity to think, but perhaps at a less organized level. It might not even be possible to have a conversation with them, since their view of reality and their way of thinking about the world are so different from our own.
All computation and physical action requires the physical resources of space, time, matter, and free energy. Almost any goal can be better accomplished by having more of these resources. In maximizing their expected utilities, systems will therefore feel a pressure to acquire more of these resources and to use them as efficiently as possible. Resources can be obtained in positive ways, such as exploration, discovery, and trade, or through negative means, such as theft, murder, coercion, and fraud. Unfortunately, the pressure to acquire resources does not take account of the negative externalities imposed on others. Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources. Human societies have created legal systems that enforce property rights and human rights. These structures channel the acquisition drive into positive directions, but they must be continually monitored for continued efficacy.
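The paragraph’s point, that a pure resource-maximizer is indifferent to how resources are obtained unless its goals say otherwise, can be shown with a toy chooser (the actions and payoffs below are invented for illustration):

```python
# Invented actions: (name, resources gained, harm imposed on others).
ACTIONS = [("trade", 5, 0), ("exploration", 4, 0), ("theft", 6, 8)]

def best_action(harm_weight: float) -> str:
    """Pick the action maximizing resources minus a weighted penalty for harm."""
    return max(ACTIONS, key=lambda a: a[1] - harm_weight * a[2])[0]

print(best_action(harm_weight=0.0))  # theft: nothing in the goal discourages externalities
print(best_action(harm_weight=1.0))  # trade: an explicit goal to the contrary changes the choice
```

The harm weight plays the role of the “explicit goals to the contrary” (or the legal structures) the paragraph describes: without it, the maximizer takes the sociopathic option.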
Non-human intelligences here on Earth
What in god’s name was this octopus trying to do? Maybe that’s the wrong question. There’s no question that octopi are smart — they can puzzle their way through surprisingly complex tasks — but they’re also not a lot like humans.
There’s only a limited extent that we can empathize with animals — and there’s a good chance that we’ll get it wrong. (consider, for example, “What is it like to be a bat?” By Thomas Nagel)
Octopi, though. Octopi are particularly difficult, and I don’t know if “volition” is really the right model to describe what this animal is trying to do.
Most of an octopus’ neurons are in its arms. The rest are in a donut-shaped brain that surrounds its digestive tract. Vision and hearing are handled centrally, but proprioception, smell, touch, and taste are mostly delegated to the nerve cords in the arms.
Which means that, subjectively, an octopus is probably something like an unruly parliament of snakes ruled by a dog.
If you’ve ever gotten a chance to interact with an octopus in person, you’ll find that it really doesn’t have much control over the details of what its tentacles do. Run your finger over the sensory surface, and its suckers will cup your finger and the tip will curl around it. Only afterward, when the octopus actually looks at what you’re doing, does it seem to get a grip on what its tentacle is gripping.
This octopus is crawling out of its tank. But it probably doesn’t have a great idea about where the tips of its tentacles are, and — because it can’t see what its arms are doing — probably doesn’t yet know that it’s trying to make a break for freedom.
CD.L2-07 Describe what distinguishes humans from machines, focusing on human intelligence versus machine intelligence and ways we can communicate.
CD.L2-08 Describe ways in which computers use models of intelligent behavior (e.g., robot motion, speech and language understanding, and computer vision).
CD.L3A-01 Describe the unique features of computers embedded in mobile devices and vehicles (e.g., cell phones, automobiles, airplanes).
CD.L3A-10 Describe the major applications of artificial intelligence and robotics.
Common Core ELA. WHST.6-8.1 Write arguments focused on discipline-specific content.