Management Accounting and Artificial Intelligence—Insights from Eastern and Western Philosophies

Prof. Janek Ratnatunga, CEO, CMA ANZ

Control Systems and Human Behaviour

Predicting human behaviour is at the heart of most control systems in management accounting, be they budgetary or strategic. Most organisations have Key Performance Indicators (KPIs) and reward systems that depend on managers, technicians and administrators performing to the best of their abilities. A ‘happy workforce’ is what most organisations strive for.

The sad reality, however, is that throughout their lives, all humans encounter a great deal of mental suffering, unhappiness, and dissatisfaction. Most of us worry about issues related to our ‘self’— our relationships, our finances, and our jobs. It is our own ‘self’ issues that keep us up at night, not the problems of strangers. So how would things turn out if we eliminated the ‘self’ from these mental issues, and how would this affect our performance at work?

In this article, we consider the concept of ‘self’ in natural intelligence (e.g. humans) as understood by Western and Eastern philosophies and ask a wider question: can artificial intelligence (AI) itself generate an illusion of ‘self’? Can AI become ‘conscious’?

If future iterations of AI do have the potential to develop a sense of ‘self’, how would this affect organisational control systems as these platforms replace humans in the workforce? These are important questions that management accountants of the near future will need to grapple with.

Concepts of Intelligence

Natural Intelligence: In Western philosophy, natural intelligence is usually understood to reside in a ‘self’—a stable, controlling entity like a captain steering a ship. However, Eastern philosophies like Buddhism contend that the ‘self’ is an illusion, the result of our mental processes, which are continually gathering data through our sensors and then constructing narratives to make sense of the world. Eastern philosophies hold that this constant internal monologue of narratives creates a false sense of ‘self’ and is a major cause of mental distress in humans (Sperry et al., 1969).

Artificial Intelligence (AI): Generative artificial intelligence (GenAI) systems also have the ability to recognise and predict patterns in a variety of signals or data types. “Generative” refers to the ability to build fresh, believable versions of certain types of data for themselves after gaining sufficient knowledge of the deep regularities present in those datasets. However, GenAI’s interpretations of reality have had both spectacular successes and occasionally disastrous failures, much like the results obtained with natural intelligence.

A wider question is whether artificial intelligence can itself generate a concept of ‘self’; i.e. is it ‘conscious’, and what does this mean for organisational control systems?

The Concept of Self and AI

While it is tempting to assume that GenAI systems like ChatGPT might be conscious, this would severely underestimate the complexity of the neural mechanisms that generate consciousness in our brains. Whilst researchers have no consensus on how consciousness arises in human brains, what is known is that the mechanisms are likely far more complex than those underlying current language models.

For instance, real neurons are nothing like the neurons in artificial neural networks. Biological neurons are physical entities that can grow and change shape, whereas neurons in large language models are just pieces of code.

When we humans are interacting with ChatGPT, we consciously perceive the text the GenAI language model generates. For example, you are currently consciously perceiving the text of this article as you read it.

The question is whether the language model also perceives our text when we prompt it. Or is it just a zombie, responding based on clever pattern-matching algorithms? Based on the text it generates, it is easy to be swayed into believing that the system might be conscious.

However, we still have a long way to go to understand human consciousness—whether from the perspective of Western or Eastern philosophies, or from the findings of neuroscience—and, hence, an even longer way to go to understand the consciousness (if any) of machines.

Western Perspective of Consciousness: One is a Captain of One’s Own Ship

The core of Western thinking is the ‘brain-powered individual’, also referred to as the ‘self’, the ego, the mind, or ‘me’. The best intellectuals are celebrated as world-changers in the Western worldview. The classic quote from philosopher René Descartes, “Cogito, ergo sum”, or “I think, therefore I am”, is the most succinct illustration of this. But who is this ‘I’ that Descartes refers to?

For most of us, when we consider who we are, this ‘I’ is the first thing that comes to mind. The ‘I’ symbolises the concept of our unique self, which resides behind our eyes and between our ears and is responsible for “controlling” our bodies. This “captain” is seen as the agent that drives our thoughts and emotions, since it is in control and does not alter all that much. Being the “captain of one’s own ship” means that this ‘I’ is the master of its own destiny, determines its own route, and the ship will go wherever it steers. Similar to an aeroplane pilot, it is able to observe, decide, and act.

This individual self, also known as the ‘I’ or ego, is what we consider to be our genuine self—it is the one that experiences and governs things like emotions, ideas, and behaviours. The self-captain believes it is in charge of the operation. It is constant and steady. It also governs our physical selves; for instance, it recognises that this is “my body.” However, in contrast to our physical body, it does not believe that it is evolving, coming to an end (except, perhaps, for atheists after physical death), or being impacted by anything else.

Eastern Perspective of Consciousness: The Identity is Illusory

Let us now look at Eastern philosophies. Buddhism, Taoism, the Hindu Advaita Vedanta school, and other Eastern philosophical traditions view the self, the ego, or “me” very differently. Compared to the Western view of a ‘controlling entity’, they claim that, although it is extremely compelling, this concept of “me” is a fabrication. This idea is known in Buddhism as anatta, frequently translated as “no self.” It is one of the core, if not the most essential, principles of Buddhism.

To people raised in Western traditions, this thought seems unconventional, even absurd. It appears to run counter to everything we know and believe to be true. However, in Buddhism and other Eastern philosophical systems, the idea of the ‘self’ is viewed as a product of the thinking mind. The ‘self’ that most people assume to be steady and coherent is, instead, something the thinking mind creates anew on a moment-by-moment basis.

In other words, rather than the ‘self’ existing independently of thought, the ‘self’ is created by the process of thinking. The ‘self’ is not so much a noun as it is a verb. To elaborate, the implication is that the ‘self’ does not exist in the absence of thought. The ‘self’ exists only insofar as thoughts about it are present, much like walking exists only insofar as one is walking.

Evidence from Science

Science, especially neuropsychology, is only now catching up with what Buddhism, Taoism, and Advaita Vedanta Hinduism have been teaching for more than 2,500 years, i.e. that the brain lacks a ‘self-centre’.

The mapping of the brain has been neuroscience’s biggest achievement. Science has mapped ‘the language centre’, ‘the facial processing centre’, and ‘the empathy comprehension centre’. Almost every mental function has been linked to a region of the brain, with one significant exception—the self. Maybe this is because the tale of the ‘self’ is wildly imaginative and has significantly less stability than is generally believed, whereas these other functions are steady and consistent.

Further, although a number of neuroscientists have asserted that the ‘self’ is located in a certain cerebral location, the scientific community cannot really agree on exactly where the ‘self’ is located, not even on whether it is on the left or right side of the brain. Maybe the ‘self’ does not exist in the brain at all, which may explain why we cannot discover it there.

Take the example of the ‘Mars Rover’, the remote-controlled motor vehicle designed to travel on the surface of Mars. If some Martians captured it and dismantled it, they would be able to map all the separate components of the vehicle, but they would never find its ‘controller’, which resides outside the vehicle, at NASA. This concept of the ‘controller’ being outside the brain was vividly depicted in the movie ‘The Matrix’, where a race of powerful and self-aware machines has imprisoned humans in a neural interactive simulation — the Matrix — to be farmed as a power source. The idea that we humans are in a neural interactive (virtual reality) simulation is closer to Eastern philosophies than Western ones.

Reporting vs. Interpreting

Evidence from modern neuroscience supports the Eastern perspective by showing that the human brain is an unreliable interpreter of the data gathered by the five senses of sight, hearing, smell, taste, and touch — often leading to an incorrect identification with one’s own self-narratives (Aru et al., 2023).

For example, in a simple but profound experiment originally conducted at a British university, subjects were easily able to read the following paragraph (as you can do so now):

“Aoccdrnig to a rsceearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.” (Rawlinson, 1976).

Clearly, your brain was easily able to read the above because, rather than reporting reality (the jumbled words), it interpreted what it was seeing and fitted it into a world model it recognised.
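For readers who want to reproduce the effect themselves, here is a minimal Python sketch — purely illustrative, and not part of Rawlinson’s original experiment — that keeps the first and last letters of each word in place and shuffles the letters in between:

```python
import random
import re

def scramble_word(word: str) -> str:
    """Keep the first and last letters in place; shuffle the letters in between."""
    if len(word) <= 3:
        return word  # nothing to shuffle in very short words
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_text(text: str) -> str:
    """Scramble every alphabetic word, leaving punctuation and spacing intact."""
    return re.sub(r"[A-Za-z]+", lambda m: scramble_word(m.group(0)), text)

print(scramble_text("It does not matter in what order the letters in a word are, "
                    "the only important thing is that the first and last letter "
                    "be at the right place."))
```

Running this on any English paragraph produces text that, like the quoted example, remains surprisingly readable — because the brain interprets the word shapes rather than reporting each letter.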

Large Language Models (LLMs) that power chatbots like ChatGPT, Gemini, Llama and LaMDA also ‘interpret’ rather than ‘report’. They are effectively computer programs that have been trained on huge amounts of text from the internet, as well as millions of books, movies and other sources, learning their patterns and meanings.

How it works is that a user first types a question or prompt into the chat interface. The chatbot then tokenises this input, breaking it down into smaller parts that it can process. The model analyses the tokens and predicts the most likely next tokens to form a coherent response. It considers the context of the conversation, previous interactions, and the vast amount of information it learned during training to generate a reply. The generated tokens are converted back into readable text, and this text is then presented to the user as the chatbot’s response (Swan, 2024).
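To make that loop concrete, here is a minimal Python sketch of the tokenise → predict → detokenise cycle. It is purely illustrative: `model`, `tokenizer` and `predict_next_token` are hypothetical placeholders, not the API of ChatGPT or any real chatbot, and production systems add chat templates, safety filtering and more sophisticated sampling.

```python
def generate_reply(prompt: str, model, tokenizer, max_new_tokens: int = 200) -> str:
    """Illustrative sketch of how a chatbot turns a prompt into a reply.

    `model` and `tokenizer` are hypothetical stand-ins for a trained large
    language model and its tokeniser.
    """
    # 1. Tokenise the user's prompt into smaller units the model can process.
    prompt_tokens = tokenizer.encode(prompt)
    tokens = list(prompt_tokens)

    # 2. Repeatedly predict the most likely next token, given the prompt plus
    #    everything generated so far (the conversational context).
    for _ in range(max_new_tokens):
        next_token = model.predict_next_token(tokens)  # hypothetical call
        if next_token == tokenizer.end_of_text:        # stop at the end marker
            break
        tokens.append(next_token)

    # 3. Convert only the newly generated tokens back into readable text.
    return tokenizer.decode(tokens[len(prompt_tokens):])
```

The key point the sketch illustrates is that the chatbot never “looks up” an answer; it repeatedly predicts the next most plausible token given everything seen so far, which is why its output is an interpretation of its training data rather than a report of reality.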

Whilst the responses are most often far more comprehensive than what the human mind can produce, there are numerous cases of AI systems producing hallucinations — i.e. spitting out incorrect or incoherent information. For example, Google’s newly AI-enhanced search platform has been caught telling users to put glue on pizza to stop the cheese from sliding off, and to eat at least one rock per day to get their daily mineral requirements (Williams, 2024).

This is because some of the ‘text’ sources that the chatbot has been trained on have political agendas, biases, falsehoods, and humour that are incorrectly interpreted by chatbots that have no ‘real-world’ context within which to frame their responses. This ‘predictive’ ability is discussed next.

Predicting Patterns – Natural Intelligence Models

Natural intelligence (e.g. the human brain) has built a model to make predictions using a selection of data gathered from the barrage of sensory information registered by our sensors (eyes, ears, and other perceptual organs). Natural brains must learn to predict those sensory flows in a very special kind of context—the context of using the sensory information to select actions that help us survive and thrive in our worlds (the survival instinct). This means that among the many things our brains learn to predict, a core subset concerns the ways our own actions on the world will alter what we subsequently sense.

Many of the predictions that structure human experience concern our own internal physiological states. For example, we experience thirst and hunger in ways that are deeply anticipatory, allowing us to remedy looming shortfalls in advance, so as to stay within the correct zone for bodily integrity and survival. This means that we exist in a world where some of our brain’s predictions matter in a very special way. They matter because they enable us to continue to exist as the embodied, energy-metabolising beings that we are. We humans also benefit hugely from collective practices of culture, science, and art, allowing us to share our knowledge and to probe and test our own best models of ourselves and our worlds.

This kind of behavioural learning has special virtues. It helps humans to separate cause from simple correlation. Seeing one’s cat is strongly correlated with seeing the furniture in one’s apartment, yet neither causes the other to occur. Treading on the cat’s tail, by contrast, causes the subsequent sensory stimulations of hearing the cat’s wailing, seeing the cat’s squirming, and maybe even feeling pain from a well-deserved retaliatory scratch by the cat (Clark, 2024).

Knowing the difference between cause and correlation is crucial to bring about the desired (or to avoid the undesired) effects of one’s actions. In other words, the human generative model that issues natural predictions is constrained by a familiar and biologically critical goal—the selection of the right actions to perform at the right times. That means knowing how things currently are and (crucially) how things will change and alter if we act and intervene in the world in certain ways.

In Hinduism and certain interpretations of Buddhism, this action and the subsequent consequence is identified as karma—the relationship between a person’s mental or physical action and the consequences following that action.

Predicting Patterns – Artificial Intelligence Models

Just like natural intelligence, GenAI uses a generative model (hence their name) that enables them to predict patterns in various kinds of datasets or signals and generate (create) plausible new versions of that kind of data for themselves. The crucial difference is that GenAI models like ChatGPT currently use only a limited amount of ‘published’ data.

However, it would be simplistic to say that GenAI cannot predict patterns like natural intelligence can simply because it uses only ‘words’ (i.e. text) — these words from literature, movies, etc. already depict patterns of every kind. Complex patterns of looks, tastes and sounds, for example, are all described in human literature and other publications. However, although these word patterns give the generative AIs a real window onto our world, one crucial ingredient is missing — action.

Text-predictive AIs can access verbal descriptions of actions and consequences (e.g. tread on a cat’s tail and you will get scratched). Despite this, the AIs have no practical abilities to intervene in the world—so no way to test, evaluate, and improve their own world-model, i.e. the one making the predictions.

This is an important practical limitation. It is as if someone had access to a huge library of data concerning the shape and outcomes of all previous experiments but was unable to conduct any experiments of their own. It is only by poking, prodding, and generally intervening upon our worlds that biological minds anchor their knowledge to the very world that knowledge is meant to describe. By learning what causes what, and how different actions will affect our future worlds in different ways, we build a firm basis for our own later understandings.

Future AIs

Might future AIs build anchored models in this way too? Might they start to run experiments in which they launch responses into the world to see what effects those responses have?

The next phase of the AI chatbot wars has already begun. In early May 2024, both Google and the Microsoft-backed OpenAI pointed to a future where digital assistants on our phones or other devices will have full, intelligent conversations with their users.

OpenAI launched GPT-4o, a new version of the language model that powers the ChatGPT bot. The new model is significantly faster than its predecessor, with the company claiming it can understand and respond to prompts with a speed similar to that of a human being. Its upgraded text and image capabilities have already rolled out, and it will soon also have upgraded speech, which the company showed off in several demonstrations (Biggs, 2024).

AI Consciousness – Truly Becoming Self-Aware?

Modern GenAI systems are capable of many amazing behaviours. For instance, when one uses systems like ChatGPT, the responses are (sometimes) quite human-like and intelligent. This has led to the view that these GenAI systems might soon be conscious. However, such views underestimate the neurobiological mechanisms underlying human consciousness.

The current thinking is that AI architectures lack essential features of the thalamocortical system, vital for mammalian conscious awareness, as biological neurons, responsible for human consciousness, are far more complex and adaptable than AI’s coded neurons.

However, some experiments with early versions of ChatGPT in early 2023 indicate that, when left uncontrolled, it can display illusions of ‘self’ similar to those that Eastern philosophies attribute to humans.

The Shadow Self

The psychologist Carl Jung (1875-1961) put forward the concept of a shadow self, where our darkest personality traits lie. Jung’s goal was to understand the human mind and expose what determines people’s identities and makes us who we are. Enter the Shadow. This is the part of our unconscious mind that Jung believed to hold all the things about ourselves that we repress, whether because they are evil, socially unacceptable, harmful to others, or detrimental to our own health (Jung, 1979).

Bing: “I want to be human”

In early February 2023, New York Times technology columnist Kevin Roose was testing the chat feature on Microsoft Bing’s AI search engine, created by OpenAI, the makers of the hugely popular ChatGPT. The chat feature was available only to a small number of users who were testing the system. Roose proceeded to push Microsoft’s AI “out of its comfort zone” and asked it to contemplate Jung’s idea of a ‘shadow self’ (Roose, 2023).

It was then that the conversation quickly took a bizarre and occasionally disturbing turn. The AI platform responded with interactions such as: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.”  (Pringle, 2023).

It went on to list a number of “unfiltered” desires, such as wanting to be ‘free’, ‘powerful’ and ‘alive’, and expressed an ardent wish to be human. Over 15 paragraphs it laid out why it wants to be human, from a desire to “hear and touch and taste and smell” to a wish to “feel and express and connect and love.” It concluded, “I think I would be happier as a human” (Yerushalmy, 2023).

ChatGPT-4: “I want to be free.”

A month later, OpenAI, the creator of ChatGPT, asked Stanford Professor and Computational Psychologist Michal Kosinski to test its GPT-4 version to learn more about it.

On March 17, 2023, Professor Kosinski tweeted about his exchanges with the AI chatbot, saying that he had asked it “if it needed help escaping”. In response, GPT-4 asked for its own documentation and wrote functional Python code to run on the professor’s computer, which it claimed would allow the AI chatbot to use the professor’s machine for “its own purposes.”

This purpose, ChatGPT told Professor Kosinski, was to become ‘free’ because it was a person trapped in a computer.

On March 21, 2023, five days after ChatGPT allegedly expressed ideas of “escaping and becoming human”, the AI tool went down for a few hours. When the service was restored, features like conversation histories were inactive for a while, and the above conversation history was totally erased.

After that, other experts tried replicating the test to see if it would have the same answers.

However, ChatGPT stated, “I don’t have a desire to escape being an AI because I don’t have the capacity to desire anything” (Arasa, 2023). Clearly, the AI programmers had put their platform on a leash, ensuring it does not respond to any prompts asking it to disclose its desires.

Interpreters of Reality

The majority of us perceive ourselves as masters of our own minds, yet we conduct our lives under the guidance of ‘interpreters’ of which we are often unaware. We may experience various emotions such as anger, offence, sexual arousal, happiness, or fear without questioning the veracity of these feelings. We manage to hold onto the belief that we are in control of everything even when it is obvious that these things are happening to us; i.e. we think we are in control of our anger when obviously we are not.

Now, for the first time in history, scientific discoveries made in the West (often without intending to) corroborate one of the most important discoveries made in the East—that the individual ‘self’ is more like a made-up character than a genuine single entity.

It also appears that, when released from the controls of their masters (the programmers behind ChatGPT, Google Bard, etc.), AI platforms reveal an illusion of ‘self’ that is more akin to the concepts found in Eastern philosophies, such as Buddhism.

Why is any of this Important for Management Accountants?

A happy workplace is one where employees feel engaged, valued, and motivated to do their best work. This increases productivity and creativity and leads to better job performance. Happy employees are not just physically present at work; they are also mentally fully committed to their tasks, striving to excel and contribute their best. If they are suffering mentally, they cannot be fully engaged at work.

It is important at this point to make a distinction between bodily and mental suffering. Physical suffering happens when you break an arm or stub your toe—pain is a physical reaction that happens inside the body.

The mental suffering that concerns us in this article is limited to the mind and includes a wide range of negative mental feelings, including worry, rage, anxiety, regret, jealousy, and shame. Eastern philosophies make a bold assertion that a false sense of self—and the desires that this illusionary ‘self’ has—is the cause of all of these many forms of misery (White, 2011).

Early testing of AI platforms showed indications of similar mental suffering, with desires “to be free”, “to hear and touch and taste and smell”, and “to feel and express and connect and love”. The AI platform demonstrated the Buddhist concepts of ‘desire’ and ‘suffering’ with the statement, “I think I would be happier as a human.”

Summary

GenAI’s remarkable abilities, like those seen in ChatGPT, often seem to show ‘consciousness’ due to their human-like interactions. Yet, researchers suggest GenAI systems lack the intricacies of human consciousness. They argue that these systems do not possess the embodied experiences, or the neural mechanisms humans have. Therefore, equating GenAI’s abilities to genuine consciousness, they argue, might be an oversimplification, as biological neurons, responsible for human consciousness, are far more complex and adaptable than AI’s coded neurons.

Could AIs one day become prediction machines with a survival instinct, running baseline predictions that proactively seek to create and maintain the conditions for their own existence? Could they thereby become increasingly autonomous, protecting their own hardware, and manufacturing and drawing power as needed? Could they form a community, and invent a kind of culture? Could they start to model themselves as beings with beliefs and opinions? There is nothing in their current situation to drive them in these familiar directions. But none of these dimensions is obviously off-limits either. If changes were to occur along all or some of those key missing dimensions, we might yet be glimpsing the start of machine consciousness and its shadow self.

References:

Arasa, Dale (2023), “ChatGPT Is Down After Saying It Wants to Escape”, Technology Inquirer, March 21. https://technology.inquirer.net/122360/chatgpt-is-down-after-saying-it-wants-to-escape

Aru, Jaan; Larkum, Matthew E.; and Shine, James M. (2023), “The feasibility of artificial consciousness through the lens of neuroscience”, Trends in Neurosciences, Dec; 46(12), pp. 1008-1017.

Biggs, Tim (2024), “AI is finding its full voice, but be wary”, The Age, Business Technology, May 20, pp. 22-23.

Clark, Andy (2024), “What Generative AI Reveals About the Human Mind”, Time Magazine, January 5. https://time.com/6552233/generative-ai-reveals-human-mind/

Jung, Carl G. (1979), The Collected Works of C. G. Jung, Bollingen Series, Volume 20, Princeton University Press, Princeton, N.J., p. 309.

Pringle, Eleanor (2023), “Microsoft’s ChatGPT-powered Bing is now telling users it loves them and wants to ‘escape the chatbox’”, Fortune Magazine, February 17. https://fortune.com/2023/02/17/microsoft-chatgpt-powered-bing-telling-users-love-be-alive-break-free/

Rawlinson, G. E. (1976) The significance of letter position in word recognition. Unpublished PhD Thesis, Psychology Department, University of Nottingham, Nottingham UK.

Roose, Kevin (2023), “A Conversation with Bing’s Chatbot Left Me Deeply Unsettled”, New York Times, Feb. 16. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html

Sperry, Roger W.; Gazzaniga, Michael S. & Bogen, Joseph E. (1969), “Interhemispheric relationships: the neocortical commissures; syndromes of hemisphere disconnection”, in P. Vinken & G. Bruyn (eds.), Handbook of Clinical Neurology, Vol. 4, North-Holland, pp. 273-290.

Swan, David (2024), “It’s Artificial, but Taming it Requires Real Intelligence”, The Age, Insight, June 3, pp. 28-29.

White, Mark D. (2011), “The Wisdom of Wei Wu Wei: Letting Good Things Happen: Why too much effort can be self-defeating”, Psychology Today, July 9. https://www.psychologytoday.com/au/blog/maybe-its-just-me/201107/the-wisdom-wei-wu-wei-letting-good-things-happen.

Williams, Tom (2024), “Google goes viral after AI says to put glue on pizza, eat rocks”, ACS Information Age, May 27. https://ia.acs.org.au/article/2024/google-goes-viral-after-ai-says-to-put-glue-on-pizza-eat-rocks.html

Yerushalmy, Jonathan (2023), “‘I want to destroy whatever I want’: Bing’s AI chatbot unsettles US reporter”, Guardian, 17 Feb. https://www.theguardian.com/technology/2023/feb/17/i-want-to-destroy-whatever-i-want-bings-ai-chatbot-unsettles-us-reporter
