
Can AGI have the compassion to help humanity?

In a recent conversation with some notable researchers in artificial general intelligence (AGI), we were discussing whether AGI would be a help or a complication when it comes to climate change. We got a bit into the weeds over how soon the IPCC projects we will cross the 1.5°C line. For the record, that could come as soon as 18 years from now, when the parents of today’s newborns might be expecting to send their little ones off to college.

The issue is that there are simply too many climate-related problems to catalogue. It’s not just the fact that we cannot avoid blowing past 1.5°C. It’s also that we have decimated insect populations, yet another canary in a whole flock that has gone belly up in the coal mine we’ve dug for ourselves. So we could really use some help. My colleagues’ position was that AGI will be that help. My position is that a generally intelligent agent will be autonomous. Its autonomy will be one of the key tests by which we recognize it as generally intelligent; after all, that’s the test we apply to ourselves. But an autonomous agent will need motivation to help us.

If the pledges for emission cuts are any sort of proxy, we seem to lack the motivation to help ourselves. As for AGI having anything like compassion or empathy for humanity, or even simply valuing humanity enough to lend a hand, I remind you that these qualities in ordinary humans, when they exist, are rooted in the feelings, not in our computational capacity or our intelligence. There are plenty of extremely intelligent humans who showed not one iota of compassion or empathy, and whose impact on society and human history is the stuff of legends and nightmares. From Jack the Ripper to Pol Pot, the examples are numerous and terrifying.


Human feelings are deeply rooted in human morphology and human biology

Even sublime texts like Rumi’s Mathnawi transform the language of human lust into a language of human love. Many take it to be a language of Love, but it is really a way of pointing to Love specifically for humans. It is very unlikely to be useful to Alpha Centaurans or other intelligences evolved elsewhere in the universe, except as a tool for understanding humans and their relationship to Love. We cannot expect that raw computational capacity, rooted in radically different morphology and practically no biology, will have any sort of understanding of, or resonance with, human experience.

Com-passion, etymologically “same feeling” or “feeling with,” is often difficult for humans to develop toward each other, as our history, even very recent and immediate history, shows.

Did the MAGA Republicans who stormed the US Capitol have compassion for the officers they maimed or killed? Did the officer who killed George Floyd, or the officers who looked on as it happened, have compassion for the man in front of them? Why would an intelligence rooted in a completely different morphology, with nothing like our biological imperatives, have compassion for humanity?


That’s why I use the metaphor of introducing a new species of spiders — intelligent spiders with the capacity to plan and adapt — as a proxy for the likely outcomes of AGI. And that’s one of the better outcomes. Much worse outcomes begin with military uses of AGI gone awry, or with humans who lack compassion, or who are downright malevolent, imbuing autonomous intelligent agents with violent and malevolent motivations or tendencies.

Modern humans are terrible at understanding the behavior of even the simplest feedback systems, and for good reason: their behavior can be enormously complex, especially in the all-too-common case of systems enjoying topological transitivity. (For the layperson, these are chaotic systems in which small differences in input can result in arbitrarily large differences in output.) Raw predictive power, indeed even universal computational power, is no match for this feature; witness the “hallucinations” of ChatGPT. Everything from the disasters of introducing species into ecological niches for which they are ill-suited, to the cascading side effects of drugs, to our impacts on climate constitutes overwhelming evidence of our inability to grasp complex systems with our intelligence. When we do get it right, and it is not by accident, it comes from some place other than our intelligence.
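To make that layperson’s gloss concrete, here is a minimal sketch, my own illustration rather than anything from the original discussion, of sensitive dependence on initial conditions in the logistic map, a textbook chaotic system. Two trajectories that start one part in a billion apart become completely uncorrelated within a few dozen iterations.

```python
# Minimal illustration of sensitive dependence on initial conditions:
# the logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 (chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)  # one initial condition
b = logistic_trajectory(0.200000001)  # differs by one part in a billion

for n in (0, 10, 25, 50):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.9f}")

# After a few dozen steps the gap between the two trajectories is of order one,
# i.e., as large as the state space allows: a vanishingly small difference in
# input has produced an arbitrarily large difference in output.
```

The toy map is trivial to write down, yet its long-run behavior defeats naive extrapolation; the feedback systems in ecology, pharmacology, and climate mentioned above are vastly messier still.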

For example, the evidence that what we call consciousness, and experience as conscious behavior in others, is rooted not in intelligence but in the feelings is fairly compelling. The noted researcher Mark Solms gives a summary of the evidence in https://www.youtube.com/embed/CmuYrnOVmfk. Children born essentially without a neocortex (hydranencephaly) are still described and experienced as conscious. Meanwhile, damage to a small region of the brain, roughly two cubic centimeters, is 100% correlated with no one being home: the individual is not conscious. That region is typically associated with affective processing.


We cannot expect AGI to have feelings for us

We cannot expect to attain recognizably human-level AGI (HLAGI) without these agents evincing something like human feelings, but those feelings are rooted in human morphology and human biology. Radically different embodiment will result in radically different intelligence. And a radically different intelligence is itself a topologically transitive, which is to say chaotic, dynamical system. Just like a species ill-suited to a niche, it will have impacts on our environment that we are historically terrible at predicting. It is therefore dismayingly naive to expect HLAGI to be a help with climate change. It is much more likely to be a complication to an already thorny problem.


If there was one human with an uncanny ability to envision alternative worlds with any kind of wholeness or verisimilitude, it was Frank Herbert. I remind you that Dune is set in a period long after the AGI mistake had played itself out. That is very likely a much too optimistic view. It is more likely that the Fermi paradox is explained by the ouroboric tendency of intelligence to try to replicate itself, thereby wiping itself out. In terms of Robin Hanson’s Grabby Aliens hypothesis, sidestepping this drive to replicate intelligence without understanding the role of embodiment is likely one of the hard steps an intelligence has to get past in order to survive.


This article was originally published by Milan Fakurian on Hackernoon.
