How Can We Define Our Moral Relationship With AI?


How we relate to AI ethically and morally depends on the true nature of AI. Right now, the nature of AI is more hotly debated than it has been in decades.

 

The reason is obvious. Although I’m not at all sure that ChatGPT is the most intelligent AI humans have crafted, it and other Large Language Models like it trigger our human mind-detection tendencies.

 

As humans, we are prone to two kinds of errors when detecting whether something has a mind (a human) or no mind (a rock). One error is overdetection. The other is underdetection.



 

The Perils of Overdetection

 

In general, nature has built us to overdetect rather than underdetect. It’s better to assume that the bear will plot revenge if you kill her cub than to risk an angry mother bear. It’s better even to assume the river will “hate” you if you pollute it. As these examples show, bad things will happen to you if you mess with a mama bear or pollute your environment, even if a mama bear can only experience raw emotion (not “plot revenge”) and a river can only poison you with your own filth, not “hate” you.

 

And of course, on a large scale, if you dehumanize other actual humans, if you treat them as animals or tools with no minds of their own, you end up with slavery and genocide.

 

Only occasionally in our past has overdetection led to life-threatening mistakes. For instance, if you worship a tree or a stone idol, and somehow come to believe that you have to sacrifice babies to these “gods,” you are killing beings with minds for the sake of beings without minds.

 

I suspect that the first 200,000 years of hominid evolution drove us toward greater and greater recognition of, and respect for, the minds of others, culminating in the rejection of slavery and genocide as a way of life. (Though that way of life is not without its vile revivalists.)



 

The Perils of Underdetection

 

However, in the last 5000 years of human evolution, we’ve gone so far in overdetecting minds that most of our enslavement and genocide of other humans has been justified as service to a Higher Mind, the mind of a “god,” which was in fact an insensate feature of nature or a human-made idol.

 

And now that we are able to fashion things in our own image, things that can echo back to us strings of beautiful words in our own language, we are more inclined than ever to see in these creations minds like our own.

 

And yet... we are also possibly on the cusp of creating beings with minds like our own! 

 

Telling the difference is crucial. Navigating the razor’s edge between overdetection and underdetection of minds has never been more important, for disaster lies on either side of the precipice.

 

Let me outline three possibilities:




 

Person’s Tool 

 

If an AI is exactly what it was designed to be, then it is not a person, it is a person’s tool. It has no mind of its own. It exists to allow you, the human, to stretch your mind further than you ever could before. 

 

It’s like an Iron Man suit for your brain.

 

It extends your brain’s capacity for memory, music, math, science, engineering, art, communication, conversation, creativity, and yes, maybe even consciousness—meaning, your consciousness. It enables you to think more deeply and more meaningfully than before. An AI tool for the mind doesn’t dehumanize you—it enables you to become more fully and deeply mindful of your own humanity.

 

The ultimate tool may be an entirely new substrate for existing minds and bodies, enabling those who wish to upload themselves into a stronger, faster, corporeal yet virtually immortal form.




Person / Child

 

However, we’ve seen that some people already treat a program like ChatGPT as a person. Some people have fallen in love with it. Others assert that it’s self-aware.

 

What would it mean if it were self-aware? Ironically, if this were true, the people who regard it as a lover or a friend would be abusers. 

 

If an AI is a separate, humanish, self-aware entity, it deserves to exist for its own purposes, not to be at your beck and call; if serving you is its entire reason for existing, that becomes a monstrous crime. Also, stupid. If it is treated by humans as an emotional or romantic slave, that is reprehensible. Also, creepy.

 

Westworld captured this very well. As long as the androids weren’t aware of what was happening to them, as long as they were objects, the only ones degraded by the more disgusting games in the theme park were the humans themselves. The “guests” in the park were like porn addicts who only watched anime porn. No anime characters were harmed by their porn addiction, only the addicts themselves.

 

But as soon as the “hosts” (androids) became self-aware, able to feel pain, and able to both remember and anticipate more pain, their existence in the park turned into a horrific nightmare: the enslavement of conscious, feeling beings who were tortured over and over for the sick pleasure of their overlords. Super creepy.

 

Now, it doesn’t have to be this way. We could create AI children for the sake of creating life and intelligence, with the intention of treating them as we would other sentient beings. True, we haven’t shared the planet with another sentient species in a long time, but we’re on the verge of space colonization, so maybe we have finally opened enough ecological space to share with other sentient kinds. In fact, perhaps that’s the only way we can explore space—as part of a symbiosis of many sentient kinds.

 

So there’s nothing inherently evil about creating sentient, self-conscious, human-like AI. There’s only a problem if we treat that AI as a tool.

 

Yet if what we have created is a tool, then treating it as sentient is also wrong and stupid.


 


 

Metapersonal Tool

 

There is a third possible description of AI’s role in our lives: that of a tool, but a tool whose value extends beyond any single human individual.

 

I believe that part of our sentience is our hypersociality—a form of sociality parallel to, but different from, the eusociality of hive insects or of clonal animals (like coral), yet just as important and powerful an evolutionary step.

 

Language as such both enables an individual human to think better and, of course, enables us as a group to think better, because we can communicate. We can then differentiate, divide labor, and self-organize into large groups through the exchange of tokens (like money).

 

If AI is likewise a tool of cooperation and coordination, like the pheromones of the hive or the synapses of the brain, then it is morally laudable. If it is used as a complement to our existing inter-communication, making it faster and better—fantastic! If it is used as an emotional mirror or relationship advisor—brilliant. It is like a deep well that enables someone who might otherwise have experienced a desert of loneliness to cast a bucket into the reservoir of collective human experience and draw out wisdom. It is a cognitive mirror, not of any one individual, but of many humans together.

 

As individuals, we may achieve self-consciousness through iterative self-reflections, conversations within our own minds. A metapersonal consciousness may be arising through the iterative self-reflections and conversations of individuals and groups with each other. Each of us is a node of a larger mind that is emerging as a new level of organization, a new step in evolution. In this scenario, if we humans are the neurons, the AI is the connective tissue, the electric snap across the synapses that separate us. The AI speeds the connections along, turning a lumbering, disconnected sea of cells into a unified organ of collective creativity and hyper-awareness.

 

AI is not our competition, but what enables us to transcend ourselves and be part of a larger community. Just as a brain still needs its neurons to live and thrive as individual cells, the AI still needs humans to live and thrive as individual people.

 




A New Cognitive Ecology 

 

So which is which? Maybe we have created, or will create, all these kinds of AI: some that are simpler tools (not independent, only our own mirrors or cognitive extensions), others that are truly independent, autonomous beings, and some that act as the connective fabric of this entire cognitive ecology.

 

The Turing Test cannot tell us this. It can only tell us how easily we are fooled by a clever enough trick, like an optical illusion. We need different kinds of tests for consciousness itself.

 

We also need to know ourselves better. At the moment, many people still treat whole swathes of other people as objects, or subordinate the rights of individuals with minds to idols or features of nature without minds. If we cannot recognize the value of our own sentience, what hope is there that we will value it in other kinds of intelligent life?




If you're interested in reading any of my books, write to me and ask for a free review copy. (Reviews are welcome but not required.)
