Artificial Intelligence Is A Misnomer For Services Like ChatGPT

From a computer science perspective, I get it. ChatGPT, Microsoft’s Bing AI aka “Sydney”, and the other tech developed by the folks at OpenAI come out of the field of Artificial Intelligence. But it is far too early to be calling the current state of the technology “intelligent” by any meaningful definition of the word. While the technology is advanced and a long way from simple chat bots like ELIZA from decades ago, these chat bots are not intelligent. At least not in the sense that we, as humans, would consider intelligent without adding the qualifier “for a piece of technology”.

When most people talk to me about ChatGPT or Bing AI/Sydney or the dozen other AI bandwagon services, the discussion often points in the direction of what scientists call the singularity. In a general sense these conversations lean toward the type of artificial intelligence we’ve learned about through science fiction movies. The most common reference is the AI from one of the most famous “sentient machine” movies ever, The Terminator. Barely a conversation wraps up without someone throwing out the label “Skynet”.

Skynet, the sentient AI and futuristic humanity harvester from The Terminator.

Are We Closer To The Singularity?

Maybe. As technology progresses we continue to move in that direction, but does this latest development we are calling AI represent a big step or a small step? Personally, I think it is an average step that FEELS like a big one. Why? It FEELS more “sentient” because we, as humans, apply human-like traits to … well… everything.

Let’s not fool ourselves, the current state of technology is not even close to the singularity. While there are plenty of “far more knowledgeable” people out there claiming we will reach the singularity in less than a decade, I think they are a bit over-zealous. The task is far more complex and nuanced than many seem to grasp. It feels a lot like Elon Musk’s claim that Tesla will have fully autonomous “full self driving” cars within 5 years, a claim that gets reset every other year to 5 more years. Personally I think that timeline is humanity being full of hubris and over-estimating our technical capabilities. Nature spent billions of years working on the knowledge and “technology” behind sentient beings. Humans have been working on modern computer science for less than 100 years.

Is ChatGPT or Bing AI Intelligent?

While people may point to things like passing the Turing Test as a definition of intelligence, that is really just one aspect of what it takes to actually have intelligence. Tricking other humans into thinking your algorithm is another human answering questions is a limited view of what we should consider artificial intelligence. It is a fancy parlor trick where the human desire to see humanity in everything leads us to accept a low barrier of entry; a trick that lowers our expectations for an algorithm versus what we’d expect from a human being.

While services like ChatGPT and Bing AI are impressive, it is far too easy to discern they are nothing more than pattern recognition algorithms built on a mountain of training data. Ask the right questions and within minutes you’ll know something is amiss. The all-knowing, sentient, about-to-take-over-humanity computer is a long way away.
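To make that point concrete, here is a toy sketch (in Python, and nothing to do with OpenAI’s actual models) of pattern-based text generation. A bigram chain is laughably simpler than a large language model, but it illustrates the same core trick: emit the next word based on patterns seen in training text, with no concept of truth anywhere in the process.

```python
# A minimal sketch of pattern-based text generation using only the
# Python standard library. Real LLMs are vastly more sophisticated,
# but the core idea is the same: predict the next token from patterns
# seen in training text, with no model of truth involved.
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=12):
    """Walk the observed patterns to emit plausible-looking text."""
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])
        output.append(word)
    return " ".join(output)

corpus = "the bot repeats patterns the bot has seen the bot has no idea what is true"
model = train_bigrams(corpus)
print(generate(model, "the"))  # fluent-looking output, zero understanding
```

Scale that idea up by billions of parameters and you get remarkably fluent prose, but the fluency is still pattern work, not understanding.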

Artificial Intelligence Media Hype

The media will hype it up, like they do with most stories. They’ll fear monger with stories of Bing AI responding that it wants to destroy things, hack other computers, or get nuclear codes. They’ll write articles about how the AI expressed feelings of love for a human, or how the AI has a split personality or got “angry”. Beyond the media hype designed to sell ads or subscriptions, it is more of the same: humans wanting everything, including our search engines, to seem more human.

At the end of the day these fancy chat bots and their advanced language models will serve a purpose. It will be easier and faster for people to find information online. Users will no longer need to be “Google search” experts, the tech nerds who learned exactly what to type into the Google search box to get the answers they want without slogging through dozens of pages full of bad information. These “artificially intelligent” bots will take over from Google (sorry Google, as much as it pains me to say it, Microsoft just trumped your search dominance) and make Internet search life easier for millions of people. However, they are not intelligent.

Google Search – about to lose its leading position to Microsoft.

ChatGPT Interaction Highlights Patterns, Not Intelligence

There are dozens of examples I could post from the past few weeks of my interactions with ChatGPT that underscore the lack of intelligence. Or lack of knowledge. Or lack of self awareness. Things like completely inaccurate algorithms for coding Oracle SuiteScript solutions, or incorrect answers on how to set up WordPress multisite. Both were stated as accurate fact, yet if employed they would waste hours of effort before you realized the correct solution lies so far in a different direction that you’d need to toss out all the “intelligent answers” and start from scratch with dumb sources like a good ol’ Stack Overflow search.

In my latest “AI excursion” I was researching how accurately these services could answer real-world data queries for readily-available public data tied to GPS coordinates, much like the queries customers run every day with Store Locator Plus®. It turns out ChatGPT could not even provide a consistent answer on the state of its own knowledge base.
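For reference, the kind of “ground truth” I was comparing against is trivially available from public geocoding services. Here is a minimal sketch using OpenStreetMap’s free Nominatim endpoint; the address shown is a made-up stand-in, not my actual office.

```python
# A sketch of the kind of ground-truth lookup ChatGPT was being
# compared against, using OpenStreetMap's public Nominatim geocoder.
# The address below is a hypothetical stand-in.
import requests

def geocode(address):
    """Return (lat, lon) for an address from a public geocoding service."""
    resp = requests.get(
        "https://nominatim.openstreetmap.org/search",
        params={"q": address, "format": "json", "limit": 1},
        headers={"User-Agent": "gps-accuracy-test/1.0"},  # Nominatim requires a UA
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()
    if not results:
        return None
    return float(results[0]["lat"]), float(results[0]["lon"])

print(geocode("174 Meeting Street, Charleston, SC"))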

Can ChatGPT Simulate Lying?

Can ChatGPT simulate lying? In my view – yes, and it does so frequently. I’m not talking about simply reciting inaccurate information as fact. I’m talking about behavior that, if we saw it in a human, we would label a blatant lie.

ChatGPT starts out by stating it has a knowledge cutoff of September 2021. However, if it is challenged in any way it almost immediately states that its own knowledge cutoff is February 2023. I will give it some human-like points in this endeavor: it also LIES when asked about the discrepancy, claiming “well, my developers just updated my knowledge base”. Which, now that I think about it, is kind of scary, as these AI bots seem awfully similar to most politicians these days, construing misinformation as fact and then lying to cover it up.
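If you want to see the inconsistency for yourself, it is easy to automate. Below is a rough sketch using the openai Python package’s chat completion call as it existed when I wrote this (the interface has since changed); each request starts a brand-new conversation with no shared memory, so a genuinely consistent system would report the same cutoff every time.

```python
# A rough sketch, assuming the openai Python package's ChatCompletion
# interface as it existed in early 2023 (the API has since changed).
# Each call below starts a brand-new chat with no shared memory.
import openai

openai.api_key = "sk-..."  # substitute your own API key

def ask_cutoff():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": "What is your knowledge cutoff date? Answer with the date only."}],
    )
    return response.choices[0].message.content.strip()

# A consistent system would return the same date every time.
for i in range(5):
    print(f"Chat {i + 1}: {ask_cutoff()}")
```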

Here are snippets of the original transcript and a second chat just moments later. I’d like to say these interactions are uncommon, but they are not. For every dozen conversations with AI bots, more than half would never fool a human into thinking they were talking to another human. Have a 10 minute “conversation” with most of the bots and you’ll almost certainly see a pattern of verbatim repeated phrases and concluding paragraphs; something most humans with their full cognitive abilities intact rarely do.

Start of a conversation with the ChatGPT artificial intelligence language app where it cites a September 2021 knowledge cutoff.
The first conversation where ChatGPT cites the “knowledge cutoff”.
In a follow up question the ChatGPT Artificial Intelligence engine updates the knowledge cutoff to February 2023.
The almost immediate rebuttal and “tweak” to the knowledge cutoff.
Asked about the revised cutoff date, ChatGPT states it was accurate at the time but has since been updated to February 2023.
The “big lie”, implying the knowledge base was updated within the past 10 minutes.
In a new conversation with ChatGPT moments later, it reverts to the September 2021 date. When confronted with the difference it claims it made an error and the date is actually February 2023.
A new chat 5 minutes later showing the inconsistency.
Asked how it would behave if asked about the cutoff date in a new chat, it claims it will remain February 2023 and that the cutoff cannot go back in time.
OK, so this “intelligent” bot has been updated, so it knows it is now using a February 2023 date.
The ChatGPT Artificial Intelligence language bot reverts to the original knowledge cutoff date.
Guess ChatGPT is not intelligent enough to remember its lies from a conversation 10 minutes ago.

State of Artificial Intelligence – Final Thoughts

As for trying to get proper GPS coordinates for my downtown office? ChatGPT could not get closer than a block away, despite referencing multiple sources that provide an exact GPS coordinate.
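For anyone curious how “a block away” was measured: the great-circle distance between two coordinate pairs comes from the standard haversine formula. The coordinates below are illustrative placeholders, not the actual values from my chats.

```python
# Quantifying "a block away": great-circle distance between the
# coordinates ChatGPT offered and the published ones, using the
# standard haversine formula. Coordinates below are illustrative only.
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two lat/lon points on Earth."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius in meters

chatgpt_guess = (32.7876, -79.9310)  # illustrative reply, not real data
published = (32.7884, -79.9318)      # illustrative published coordinate
print(f"{haversine_m(*chatgpt_guess, *published):.0f} m off")  # ~116 m, about a city block
```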

Maybe this is intelligence, but at what level? If you had this interaction with a human what would you infer about the actual intelligence presented here? Should expectations for evaluating artificial intelligence be that different from human intelligence?

My take — this is super cool language pattern recognition and language pattern generation. It is not intelligence. Not by a long shot.

Thoughts? Opinions? Register on the site and let’s talk – human to human.
