A rather catchy title, but nevertheless one that gets to the point. The reader is invited to ask himself or herself the following question: could a newborn child be raised by communicating with ChatGPT, a generative pretrained transformer? Of course not! A newborn does not speak. A human being, however, is capable of doing just that: raising a child. So obviously, for a newborn to grow, develop a consciousness (beyond the one it already has), and become self-sufficient, humans do something that ChatGPT couldn't do, even if you combined it with the most human-looking android there is. Admittedly a silly thought to begin with, but stick with me. What is it that makes the difference?

Humans interact with, feed, hold, speak and sing to, move, touch, and nurture their newborns in many different ways, all part of the human condition. We have learned to do so. We are intelligent in that way. But is this the kind of intelligence that we would expect from an ‘artificial intelligence’? Not for now. So there seems to be a difference between human intelligence and artificial intelligence that touches upon the point of human interaction, involving emotional states, feelings, or exchanges of a subtle kind. But surely one day a machine could learn that too, right?

Let's assume for a moment that an android with at least the language capabilities of ChatGPT would also be able to perform the mechanical acts of care that humans perform when nurturing a newborn. Let's assume the experience would be the same for the child as far as visual, kinesthetic, auditory, and other sensory experience is concerned. Would we expect the newborn to develop in the same way as if it were taken care of by a human? At this point the views of many readers may differ. Some may say yes, others may say no.

Let's add an element to the thought experiment: would it make a difference whether the feeling of the parent towards the child is filled with ‘love’ or filled with ‘nothing’? Human nature tends to immediately lean towards saying: ‘yes, love makes a big difference’. But what is love in this case? It is a consciousness of a relationship towards another human being. One has to be conscious of it. The same is true for any other relationship we may have with anybody. If we want to sue someone, terminate a contract with someone, bury someone, help someone, hold someone, or do anything else to someone, it requires that we are conscious of it. So the question is: does consciousness always involve a reflection on a relationship to someone or something? Is consciousness a position that a human being takes in ‘relationship’ to whatever the world around him or her has to offer (including himself or herself, in all their complexity)?

If we agree to that, saying that consciousness is a reflection of our relationships to the world and everything that's in it, then what does that say about an artificial intelligence? Does an artificial intelligence – as of today – have a relationship to anything? Presumably, many would say no. How could it? If we ask ChatGPT about its relationship to anything, the answer will likely be something similar to: “I am only a generative pretrained transformer, an algorithm based on a large language model, capable of generating responses with a very high probability of being the right ones, given the previous context of the conversation. But I am not capable of developing relationships.”

Clearly, the answer you get from ChatGPT may sound empathetic, as if there were something like an emotion involved, but there isn't. Even if we were to train a large language model on everything that has ever been written about relationships, consciousness, and emotional states, it would still only produce responses based on statistical probability.
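To make concrete what ‘responses based on statistical probability’ means, here is a minimal sketch of next-token sampling, the mechanism at the core of a large language model. The candidate words and scores below are invented purely for illustration; a real model like ChatGPT scores tens of thousands of tokens using billions of learned parameters, but the principle is the same.

```python
import math
import random

def softmax(scores):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to candidate next words,
# given a context like "I am only a ..." (values invented for illustration).
candidates = ["transformer", "human", "calculator", "banana"]
logits = [4.2, 1.1, 2.5, -3.0]

probabilities = softmax(logits)

# The model does not "mean" anything: it simply samples the next
# token according to these probabilities, one token at a time.
next_token = random.choices(candidates, weights=probabilities, k=1)[0]

for token, p in zip(candidates, probabilities):
    print(f"{token!r}: {p:.3f}")
print("sampled next token:", next_token)
```

However fluent the resulting text may sound, nothing in this loop feels, intends, or relates to anything; it is probability all the way down.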

Human emotions, however, involve the central nervous system, the system responsible for handling sensory experiences and signals in the body; they are based on biochemistry, not on algebra, statistics, or logic. One could say that, for now, this is probably a good thing, because it sets us humans apart from any machine that is merely a highly sophisticated calculator, and nothing more.

If companies want to engage in developing an artificial intelligence beyond a mere statistical model, a new approach is needed, one that involves relationships. Yes, there are other models, even some that could be interpreted as reflective, reinforcing responses that seem more human than others, but as long as a model is purely mathematical, emotions can only be simulated; they aren't real. In order to advance in a direction that resembles human intelligence, we must consider involving emotions and something like a central nervous system. But to do so, completely new application spaces are needed. Maybe VR and AR can provide such spaces. Perhaps BCIs will help us understand more about the possibilities. Most certainly, though: we are still far from developing an artificial consciousness.

Learn more about Data-Centric AI and Controlled Application Spaces. https://lakeside-analytics.com/data-centric-ai/