Travels with Myself

A Journal of Discovery and Transition
Doug Jordan, Author

25.12 The Soul of the New Machine

Artificial Intelligence – AI – or perhaps more accurately, Artificial General Intelligence (to distinguish it from calculators, or machine learning devices), is the term we humans (with natural intelligence) use to describe powerful information processing systems operating on vast arrays of interconnected and internet-connected machines, running large language model software that responds to questions from users, presumably human, and provides thorough, articulate, written or even verbal, answers. 

Whew. That was a mouthful.

So, we’re not talking about calculators here. Nor robots. But what is really going on in that machine with ‘artificial intelligence’?

What exactly is intelligence? My favourite source, The American Heritage Dictionary of the English Language, defines intelligence as, ‘the ability to acquire and apply knowledge and skills’. Seems straightforward enough, except perhaps that word, apply…

Alan Turing’s test of intelligent behaviour seeks to establish whether a computing machine, operating as if it were ‘thinking’, can be distinguished from human intelligence: The Turing Test, originally called the Imitation Game by Alan Turing in 1950, is a test of a machine’s ability to process information and carry on a ‘conversation’ in such a way as to be indistinguishable from a human[1]. Turing, often referred to as the father of modern computing, was confident that such a machine could one day be built and would pass his test of intelligence. He was certainly right about that. 

It is thoroughly evident today that the artificial intelligence demonstrated by digital computer technology operating in an extended internet environment fully meets Alan Turing’s test of ‘intelligence’. In many ways modern AI devices exceed, by an extraordinary margin, the human capacity to carry on a conversation with an interlocutor, receive instructions, search and process information, and provide a coherent written or oral report. And if a picture is worth a thousand words, it can draw too.

However, AI gives rise to big philosophical questions: if machine intelligence is equivalent to human intelligence, does such a machine have equivalence to humans in other respects as well? Is such a machine self-aware? Does it have free will? A conscience? Is an AI device conscious? Does it have a ‘soul’[2]?

Is it appropriate, then, for humans to assign work to such a machine? Does a human have a ‘relationship’ with an AI device? Equal? Superior? Inferior? Does an AI machine need to give consent first? Is the machine capable of prevarication? How much reliance should we place in its answers? Is it ethical to allow students to have AI programs write their thesis papers? Or for consultants, lawyers or interior designers to offer AI-generated advice to clients and charge substantial fees for it? Will AI devices replace accountants, lawyers, even authors(!), altogether? What implications may AI have for human society? 

These are big questions, some of which I’ll leave to my next post. 

But for now, the question I want to explore is: are AI machines ‘conscious’? If a machine is clever enough to pass the Turing Test, do we then ask ourselves the next question: does this amount to consciousness? Is behaving intelligently the equivalent of being conscious? And how would we know? To date, we don’t have the equivalent of a Turing Test for consciousness.

A Short Treatise on Consciousness

So, before we get deeper down the AGI ethical rabbit hole, how about a short overview of consciousness?

Let’s start with some definitions. According to my favourite resource, The American Heritage Dictionary of the English Language, consciousness is:

  1. the state of being aware of and responsive to one’s surroundings
  2. a person’s awareness or perception of something
  3. the mind’s awareness of itself and the world (consciousness emerges from the operations of the brain)

The last definition is the one most germane to this discussion. Dogs and intelligent people (and AI machines?) seem to qualify under the first definition of consciousness, but only people (I’m sorry to say it) appear to be conscious in the sense of consciousness emerging from the operations of the brain. But is that really true? 

The brain of a human being is not only able to perceive the environment around it but also to perceive itself acting in that environment. This is the notion of ‘theory of mind’: the brain perceives its ‘self’ as somehow separate from itself. Some call this the ‘theatre of the mind’: we perceive ourselves behaving in our environment as if watching a movie of ourselves. Dogs don’t appear to have a theory of mind (though they do apparently dream). They have no real sense of time and they don’t seem to plan; they may be capable of manipulating their people, even motivated to do so, but that doesn’t mean they’re actually conscious of their ability.

Theory of Mind and Dualism

People, on the other hand, do have a theory of mind: we have a sense that we (our ‘minds’) are somehow separate from our bodies; that the brain, a very special part of the body, somehow houses the ‘self’. This is the mind/body paradox. Our language betrays this bias: we talk of ‘my’ brain, ‘your’ heart, as if these parts belong to ‘me’ and ‘you’ but are only an extension of ‘us’. The perception we have of our ‘mind’ is a powerful impression. Most of us don’t give it another thought; it just is, and if questioned, we simply dismiss the questioner as a bit of a crackpot. 

I won’t go into a long discourse on the history of thinking about consciousness – and let us agree immediately, despite advances in our knowledge, we still don’t know exactly how it all works – but let’s at least mention the most commonly referenced notions. (In the footnotes I list some of the books on consciousness I’ve found the most intriguing (and at the same time comprehensible, almost[3]).)

René Descartes (1596 – 1650) wasn’t the first philosopher to ponder this question but is the most remembered; he boiled it down to a single sentence: cogito, ergo sum. He even translated it into French: Je pense, donc je suis (well of course he did, he was a Frenchman after all); in English: I think, therefore I am. In his treatise on dualism he was examining the larger concept of reality and existence, but did he really think thinking was the equivalent of consciousness? Well, it appears he did. (Would he say the same thing of computers?) Dualism posits that the mind and body are fundamentally distinct entities. He speculated that consciousness resided in a particular part of the brain, the pineal gland. But if that is so, where in the pineal gland is this mysterious ‘self’? I’m not sure René actually expressed this as a theatre, but his rational model of the ‘mind’ (he was, after all, a mathematician) came to be referred to as the ‘Cartesian Theatre’.

René Descartes and his Cartesian Mind

So, where is the pineal mind in the computer? Does an AGI device have a theatre in which it watches itself?

Neuroscientists and philosophers since Descartes have delved into this mysterious notion of ‘the mind’ and, while there are a number of elaborate theories (many of them having to do with energy and quantum effects) that probably don’t pass the test of Occam’s Razor, most theorists are pretty much convinced that the mind is distributed throughout the brain. At the same time, however, these theorists have also reported that many particular functions of the brain are concentrated in specific areas – speech centres, vision, hearing, motor function – but are these functions part of consciousness or just that, functions? How does the brain make meaning out of those perceptions? The brain seems to have the capacity to store these perceptions elsewhere, but where? As I reported in a previous post, we may know where various brain functions are located, but where are the words stored? In ‘memory’? But where is memory located? It seems that memory is stored all over the place in the brain. Is it memory, then, stimulated by external perception, that creates the impression that we have a ‘mind’, and that we are conscious? 

It seems more obvious where an AGI computer system stores its dictionaries – on a series of magnetic hard drives. But does this computer memory, accessed by its operating system, make it conscious? Does an AGI device have a theory of mind? If the large language models of AGI are ‘intelligent’, can ‘think’ and process information, are they conscious?

Without being prompted by an external environmental cue (a question), does an AGI device self-germinate? Does it wonder? Speculate? Question on its own? If you ask an AGI device what sound a lion makes, it can probably tell you, and demonstrate; but if it hears a random sound in the computer lab, does it ask itself, ‘what was that sound?’

We humans have a theory of mind, but do AGI devices? We ‘feel’ that our minds are somehow different from and separate from our brains. Do AGI devices have the impression that they have a mind somehow different from their electronic components? But how do we know for sure?

Expanded Definition of Consciousness

Beyond the American Heritage Dictionary’s simple definition, philosophers and neuroscientists expand upon what consciousness is by adding a number of other qualities: 

• [self-awareness of] knowledge in general,

• intentionality,

• introspection (and the knowledge it specifically generates), and,

• phenomenal experience, i.e., perceptual experience, such as tastings and seeings (‘qualia’); bodily-sensational experiences, such as those of pains, tickles and itches; imaginative experiences, such as those of one’s own actions or perceptions; and streams of thought, as in the experience of thinking ‘in words’ or ‘in images’[4].

AGI machines may give the impression of intelligence in their answers, but do they actually understand their own answers? Do they have intentionality, introspection, and phenomenal experiences? Do they perceive ‘qualia’ and enjoy them: if a machine could taste broccoli, would it like it?

Increasingly, it is claimed, consciousness cannot be objectively defined because it is an internal, subjective experience or feeling. And we cannot know what the Other is actually experiencing. Can we know what AGI devices are thinking?

Most current philosophers and neuroscientists are convinced a machine cannot be conscious, and I think they are right: AGI machines provide no evidence that they understand their own knowledge, that they have intentionality or introspection, or that they have any sense of the phenomenal. But how can we be sure? 

To my knowledge, even though current versions of Artificial General Intelligence demonstrate prodigious ‘intelligent behaviour’, AGI devices do not appear to have any self-awareness. They have no understanding of what they have just said. An AI machine does not appear to be conscious of what it is doing. (But then, that could be said of many people as well, the writer possibly excepted.)

I used to smile at the expression, ‘I’m pretty sure I am conscious, but I’m not so sure about you’. I’m pretty sure machines aren’t.

Lately, I’m not so sure about me?

Doug Jordan, reporting to you from Kanata 

© Douglas Jordan & AFS Publishing. All rights reserved. No part of these blogs and newsletters may be reproduced without the express permission of the author and/or the publisher, except upon payment of a small royalty, 5¢. 


[1] In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine and tries to identify which participant is the machine; the machine passes if the evaluator cannot reliably tell them apart. The result does not depend on the machine’s ability to answer questions correctly, only on how closely its answers resemble those of a human. 

[2] In Tracy Kidder’s 1981 book, The Soul of a New Machine, the author describes the race by a team of computer engineers to build a new, smaller computer to compete with large mainframe computers; soul in this case refers to the designers of the new machine, not the machine itself. The same might be said of today’s software engineers embedding their souls in AI programs. 

[3] Susan Blackmore, Consciousness: An Introduction

Antonio Damasio, The Feeling of What Happens

Jay Ingram, Theatre of the Mind

Steven Pinker, How the Mind Works

Lisa Feldman Barrett, How Emotions Are Made

[4] https://en.wikipedia.org/wiki/Consciousness
