Soul and AI

Introduction

I have been doing a deep dive into AI in recent months – to get to the root of a personal sense of deep disquiet. 

The dominant conception of AI is entirely materialistic. The brain is assumed to be the foundation of intelligence. Because that assumption is mistaken, the idea that a computer can be used to replicate human intelligence is profoundly wrong.

AI was conceived of as having a commercial value – as a technology like the many other technologies that have had a huge impact on how we live. As such a technology AI has huge potential for benefit and harm. 

Humans have long been the flexible ‘wetware’ in production systems, replaced by machines when they can do things faster, good enough or better, and cheaper. The Industrial Revolution, which kicked off in 1760, has shaped human life in the context of technology and commerce ever since. 

The human part

Buddhism frames human life in terms of desire. We crave organic experience, not in mere animalistic terms but in ways elaborated by culture and more. The ‘we’ who do the craving is what many call the soul – a form of consciousness that exists and persists independent of the physical realm. The fact of the soul is beyond reasonable dispute, but its nature can excite debate and speculation.

By ‘beyond reasonable dispute’ I mean that the evidence drawn from historic and contemporary experiences and lore rules out any serious informed denial. Of course, if an inquiry is not undertaken at all, or is undertaken in a biased manner, this assertion will be met with scorn.

But anthropology, guided by materialism, has tended to see human life in almost entirely animalistic terms – food-getting, breeding and defending against predation.

This has led to a fantasy perfectly expressed in Star Trek: The Next Generation. Humans on Earth live free from want. Their essential needs are met without the need to work. They are safe from predators for the most part (the ‘Borg’ excepted). Humanity is free to engage in self-improving and self-fulfilling activity. Sounds good.

But reality check here. People who are well off and safe do not necessarily engage in self-enriching activity. These days we have a lot of leisure time available to us. But much of this is squandered on self-indulgence. 

On a biological level our natural instincts remain even when we have safety and abundance. Our needs for community, safety and relationships continue to be active. But we humans have several things going on at once. As well as the biological imperative that drives our organic aspect, we have a soul dimension with its own needs for community, relationship and meaning. It is these that are the foundation of our values of humanity.

Humans are organic beings with a plus factor that we struggle to explain. We use imprecise terms like religion and spirituality – as if it is beyond our rational capacity to treat this factor as an objective matter of fact. There is, in fact, an abundance of evidence to affirm that this plus factor is real.

Thinking in terms of AI without understanding or accepting this is perilous for several reasons. 

The first is that imagining any technology in terms that are not objectively true means that assessments of its risks and benefits cannot be accurate.

The second is that our understanding of intelligence is necessarily limited and may also be wrong in critical respects. 

We once regarded Newtonian physics as the definitive way of understanding reality. Now we have quantum physics, though Newtonian physics still works. Materialism likewise works, but it limits our imagination. It has also led to perilous behaviours that have produced climate change and systemic environmental stresses and collapses.

A theory can function to provide desired benefit while also generating considerable harm, which is discounted in favour of the benefit. We do this a lot. We form beliefs that serve our psychological needs and ignore the harm they may do to others and ourselves. 

This takes us back to Buddhism and the question of why we, as souls, are born into the material world. The Buddhist position, in its simplest form, is that we are driven by desire. The question, then, is what happens when that desire is satisfied or realised? This matters because it shifts the theory of being human from an entirely organic matter to a more metaphysical one, and enables us to ask about the real nature of human intelligence. Is it entirely organic, or is there a more fundamental foundation?

What are we solving with AI?

AI proponents see AI as solving problems that have an economic dimension, not a moral one, despite protestations to the contrary.

The idea that the human mind is produced by the brain, and is therefore replicable by some form of computing, is fundamentally flawed. This must lead us to ask what other assumptions are also false. Building any kind of argument on false assumptions is never a good move – especially if there is an intent to develop intellectual, economic or cultural practices based on them.

Rationalist and materialistic mindsets have dominated our cultural development and have led to problems that have been generated because a ‘solution’ has been non-holistic. We have ‘solved’ one problem while creating another. The challenge isn’t just an intellectual one, but a moral one as well. This is because we can choose which mode of thinking is preferable to us and assert it as the right one when it meets our personal psychological needs. 

How to get things wrong 

In his 1995 book, Faces in the Clouds, anthropologist Stewart Guthrie constructs a new theory of religion from his assumption that spirits do not exist. This is from Amazon’s description of the book:

Guthrie says religion can best be understood as systematic anthropomorphism – that is, the attribution of human characteristics to nonhuman things and events.

Guthrie says that our tendency to find human characteristics in the nonhuman world stems from a deep-seated perceptual strategy: in the face of pervasive (if mostly unconscious) uncertainty about what we see, we bet on the most meaningful interpretation we can. If we are in the woods and see a dark shape that might be a bear or a boulder, for example, it is good policy to think it is a bear. If we are mistaken, we lose little, and if we are right, we gain much. So, Guthrie writes, in scanning the world we always look for what most concerns us – living things, and especially, human ones. Even animals watch for human attributes, as when birds avoid scarecrows. In short, we all follow the principle: better safe than sorry.

The problem with this argument is that hunting also teaches us to distinguish appearances from actualities. The need to identify and fight predators and enemies will also spur us to refine our perception and interpretation of what we see. We might err reflexively and initially, but we can then amend our perception and response. Not always, of course, but mostly. But Guthrie’s theory serves the needs of those who wish to deny spirits exist and so explain belief in them as arising from error. What a pity that a whole intellectual position has been fabricated on the intentional denial of reality. What is that in service of?

There is a fundamental difference in the perceptive capacity of a person raised in a natural environment and one raised in a city. One will see things the other does not and interpret things seen in ways the other will not. A city person will not have their senses tuned to the same sensitivity as a person living in a forest and will not have the accumulated memories that provide wisdom and insight when about in the forest.

Guthrie’s inability to accept the reality of spirits is a personal choice that has led to a culture within the field of anthropology crafted from mistaken ideas and misinterpretations of human capabilities. The book is still used as a textbook, which its price reflects: a Kindle copy will set you back AUD$82.90 (around USD$59). The result is a flat-out wrong interpretation of human capabilities and behaviour – a nuts position for an anthropologist, you’d think.

AI is in a similar situation. Our brains are not the creators of our intelligence or consciousness. Our organic brains process inputs from the material world, but they also process input from the non-material. In our normal state of inhabiting an organic body our brains process input from both domains, though we are mostly unconscious of this. Anyone who has had an out-of-body experience (OOBE) will affirm that complex conscious processes continue without an organic brain. Research into near-death experiences (NDEs) and reincarnation affirms what mystics and shamans have known for millennia: human consciousness is far more complex than the materialistic mind imagines.

We know enough about our organic brains to appreciate that they process a huge amount of input that deals with our organic body and its response to its physical environment. But we don’t know much about its response to its non-physical environment because the intellectual culture that pursues this knowledge asserts that reality ends at the boundary of the material. There is, however, very good neuroscience undertaken with practitioners of spiritual disciplines. Why God Won’t Go Away: Brain Science and the Biology of Belief (2002) by Andrew Newberg, Vince Rause, Eugene G. D’Aquili is a useful book.

AI is fueled by science fiction and fantasy. Spock, in the original Star Trek series, was supposedly an emotionless, super-rational figure who epitomized the ideal of reason devoid of emotion. Data, in Star Trek: The Next Generation, was an android trying to become more human – more emotional. He was the very model of rationality, but he couldn’t understand humans without understanding emotions.

The idea of pure reason as a superior form of consciousness is a fantasy. An emotionless human is a monstrous conception. At the core of both Buddhism and Christianity is the understanding that refining our emotions is what makes us ‘higher beings’. In Buddhism we cease to desire things of the flesh and ego. In Christianity we must master our passions and express love. In both, the intellect is an instrument that helps us do what we value and aspire to.

AI is a misnomer in that it isn’t really intelligence. It is better understood as processing power that we can harness to our benefit or harm – if we can get our emotions, passions and desires under control.

A word on metacognition

Observers of the introduction of AI assert the value of metacognition in preparing for the expansion of AI in our working lives. A source of thought on this that I have a lot of time for is the NeuroLeadership Institute.

Metacognition is what most of us would also call self-awareness and self-reflection. It is a combination of intellectual and emotional effort to develop a higher level of understanding of who and what we are in relation to a technology that might render redundant much of what we value as sources of meaning and purpose. It might also push us out of our propensity for self-indulgence and toward actually living the Star Trek dream of a life in which we have the means and leisure to engage in uplifting activity.

It also might give us the impetus to look at AI as a genuinely helpful technology and not just a way of eliminating jobs. Technology that replaces what humans do should be about making life better for all, not just the rich and those in power – who are not exactly suffering in any case.

Conclusion

AI’s hazard is that it is a potent technology that has arisen from deeply materialistic thinking and is fueled by a science-fiction view of human potential. Its ardent proponents are, from my perspective, overly rationalistic and philosophically and psychologically immature. Most importantly, they are just plain wrong about the nature of human intelligence, and intelligence in general.

They have made something that works and which can deliver genuine benefits if it is thought about in a balanced way. In the meantime, it reeks of risk.

We must also appreciate that perpetuating wrong ideas when the evidence to amend them abounds is perilous. Religions have made the idea of the soul problematic because of their affection for dogma and their appeal to people who want assurance and safety. But to deny the metaphysical just because some have made it a paradise for fools is to create an alternative dogma. Materialism is useful in countering some religious silliness when it demands reason and evidence. But it often goes too far, refusing to accept reason and evidence that is freely available.

AI isn’t inherently the hazard it risks becoming. It’s the psychological, intellectual and ethical culture that propels it that is the real problem. The fact that it is just wrong about consciousness and intelligence should worry us a lot. Can we do anything to sort things out?

Some useful sources

CBC’s Ideas has a useful show that was broadcast on 15 April 2026 entitled Literature vs the AI Industry. You can get the podcast on your regular provider or from the show’s website.

The show mentions three books. I have read them. They are:

  • The AI Con by Emily Bender (a computational linguist) and Alex Hanna (a sociologist who has worked in the tech industry)
  • Empire of AI by Karen Hao (a journalist specializing in AI – described as an ‘AI insider’)  
  • More Everything Forever by Adam Becker (a physicist and a journalist)

Please note that I use Amazon hyperlinks for the above books in the interest of inclusion for people with disabilities who rely on ebooks and audiobooks. If you buy 3D books please support your local independent book seller.
