
No, ChatGPT does not think like us

In the digital age, artificial intelligence has emerged as a groundbreaking “new” technology, redefining human-computer interaction. At the forefront of this revolution is ChatGPT, a powerful AI platform that acts as a virtual conversation partner and, for many users, a de facto search engine. ChatGPT has opened the discussion of AI's possibilities even to those who are unfamiliar with the technology, a discussion often tinged with uncertainty (and a little anxiety) about what we do with a new tool that appears to behave like us, but better.

This prompts the inquiry: does ChatGPT think like a human? Evaluating consciousness in modern machine learning models presents real challenges. That said, chatbots have been claimed to pass the Turing Test, a test of a machine's ability to convincingly mimic human conversation, since 2014. So how much like our brain is AI?

The AI behind ChatGPT is built on large language models (LLMs), which are themselves built on artificial neural networks. Neural networks take their name from the brain, which represents information through networks of interconnected neurons. In a neural network model, neurons are represented as nodes that pass signals to one another, and the connections between nodes carry "weights" indicating their strength, just as some biological neurons are more strongly interconnected than others. It is in these learned weights that the network's information is effectively stored.
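To make the analogy concrete, here is a minimal sketch of the idea in Python. It is a toy, not ChatGPT's actual architecture: three input nodes feed two output nodes through a matrix of weights, and every numerical value is invented for illustration.

```python
import numpy as np

# Three input "neurons" (nodes) carrying signal values.
inputs = np.array([0.5, -0.2, 0.1])

# Weights encode the strength of the connection from each of the
# three input nodes to each of two output nodes. These numbers
# would normally be learned; here they are invented for illustration.
weights = np.array([[ 0.8, -0.3],
                    [ 0.1,  0.7],
                    [-0.5,  0.2]])
bias = np.array([0.05, -0.1])

# Each output node sums its weighted inputs and applies a
# nonlinearity (ReLU), loosely analogous to a neuron "firing".
activations = np.maximum(0, inputs @ weights + bias)
print(activations)  # activity of the two output nodes
```

Stronger weights let more of a node's signal through, which is the whole sense in which the model "resembles" interconnected neurons.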

At the most basic level, this is how the human brain operates. The strength of connection between networks of neurons dictates output, and that strength is largely shaped by your experiences, the “input” you receive from the world, and your prior knowledge. In addition, pattern finding is a beloved hobby of both human and machine-based neural networks (a toy version of such weight adjustment is sketched below). Since AI was created by humans and depends on the human knowledge it is fed, it is unsurprising that the output and linguistic abilities of LLMs give us a feeling of “déjà vu”.
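As a rough illustration of “connection strength shaped by experience”, the toy network above can be given a Hebbian-style update, a brain-inspired learning rule. This is only a sketch of the principle; real LLMs learn their weights quite differently, through gradient descent over enormous text corpora.

```python
import numpy as np

# Toy Hebbian-style update ("cells that fire together wire together"):
# a connection strengthens when the nodes it links are active together.
# All values are illustrative; this is not how LLMs are trained.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(3, 2))

pre = np.array([0.5, 0.2, 0.1])        # activity of the input nodes
post = np.maximum(0, pre @ weights)    # activity of the output nodes

learning_rate = 0.01
weights += learning_rate * np.outer(pre, post)  # strengthen co-active links
```

Repeated exposure to similar inputs gradually reinforces the same connections, which is the loose sense in which both brains and artificial networks “learn from experience”.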

Despite these apparent parallels, the similarities between LLMs such as ChatGPT and the human brain are superficial at best. One should not underestimate the complexity of the processes we engage in every day. Our world is multisensory, and our cognitive processes are in constant dance with our emotions, our memories and our personalities. How we weigh and value information to be stored, remembered, forgotten, used, specified or generalised depends on a long list of human traits that are simply unavailable to computational models in the same way.

Human brains are not simply dependent on the amount of information they receive, but also on the quality and multidimensionality of the experiences through which they gain it. To take a single (albeit enormous) example, what we choose to pay attention to at any given moment is itself dependent on a whole host of psychological and neurological factors. How you feel emotionally, both during and about what you are perceiving, will heavily influence how your brain processes, recalls and uses the information from an experience. While ChatGPT is logical and can simulate conversation, it lacks the complexity and depth of human decision-making, relying on sheer volume of information rather than the intricate processes of the human brain. LLMs may operate on some of the same basic principles as our biological machinery, but they most certainly cannot “think” like us.

Understanding human cognition is an important factor in developing resilient cybersecurity programs. If you would like to learn more about how the competencies at Praxis Security Labs can help you to leverage human factors at your organization, contact us today using the link below.