My Thoughts On ChatGPT 4 “Intelligence” – Jan 2024

A handful of my business and tech-minded colleagues have been using ChatGPT on a regular basis. Not a week goes by without someone in my inner circle mentioning ChatGPT. Often the conversation leans toward how much the tool can do and how the latest version seems “even more intelligent”.

As someone who lives and breathes technology and is always seeking to learn more about anything and everything in science, I decided it was time to revisit ChatGPT, especially now that ChatGPT 4 is available and is claimed to be “exponentially better at things” than its predecessor from six months ago. So I gave it a spin on some basic copyright research as well as a technical issue I was trying to resolve.

While ChatGPT did a great job summarizing the copyright information I was seeking, it did not do much more than serve as a sounding board for the complex technical topic of resolving missing git submodules.

My Thoughts On ChatGPT 4 “Intelligence”

My general assessment of ChatGPT, and thus of what we are calling “Artificial Intelligence (AI)” at the moment, is this:

The technology that is available to the public today is far from intelligent. Tools like ChatGPT serve a purpose and can be very useful, but promoting the technology as “intelligent” does a disservice to the industry and the general public. Artificial Intelligence is a misnomer in this case.

Why do I say that?

It is very clear that these engines are great at pulling together information from a vast well of knowledge and summarizing the key elements of that information in human-friendly verbiage. They certainly do it much more efficiently than any human could. They are also good at “pre-compiling” that information and eliminating redundancy, something Google and other search engines are not good at.

How ChatGPT 4 Does With Resolving Deep Technical Issues

However, it is not intelligent. In my interaction on the missing-submodules topic I ran into the same patterns I have seen in other recent interactions, patterns that show anything but intelligent processing going on. While ChatGPT and other general large language model “AIs” are great at summarizing content, they often fall short when it comes to providing insight into complex technical problems like this one.

Some of these findings are pet peeves of mine from using the tool. Others reveal patterns that, by most people’s standards, we would not expect a reasonably intelligent individual to repeat.

The “always provide feedback” pattern

Every time I loaded up ChatGPT with the background on the issue at hand it would reply with essentially an “ok, so you just told me x, continue…”. Often this was several sentences summarizing what I had just said. This despite my prior instruction, which read something like “I will provide you the current git configuration of my project. Do not provide feedback or a summary of my input until I ask you to do so.”

How did ChatGPT respond?

Understood. Please go ahead and share the details of your git configuration related to submodules, and I’ll wait for your specific questions or requests for feedback.

I can live with that. But then, as I described each element of the issue and my findings from the various things I reviewed, such as the contents of a .git/config file, it kept replying after every single input with stuff like this…


I see, you’ve described your git repository setup with submodules, including the configuration in the .gitmodules file. Please continue or let me know if there’s a specific aspect of this setup you’d like feedback or insight on.

Or this

Great, it sounds like your .git/config file is properly set up with the necessary submodule information. If you have any specific questions or need feedback on this setup, feel free to let me know!

Apparently ChatGPT 4 does not understand the simple instruction “Do not provide feedback or a summary of my input until I ask you to do so.” Or maybe it is an “intelligent being” after all, because I sure as hell know more than one human who, after I say “please stop interrupting me and let me finish”, will damn well interject every 30 seconds anyway. But friggin’ hell, this is an algorithmic tool; it should at least be able to follow simple instructions.

The “ignoring context” issue

After describing the issue AND what I had already reviewed and tried, I told ChatGPT 4 I was done describing the environment and the attempted resolutions and asked for new ideas or insight into what I had tried.

So what is the first thing it does when I basically said “when I run this command it produces these incorrect results; can you provide some insight into what may be wrong with the git environment and provide steps to resolve the issue”? It listed the same first four steps I had already said I performed, the steps that did not resolve the issue. Things like “try git submodule init”. Really? No shit. I already told you I did that. Grrr.
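For the record, the boilerplate advice it kept recycling is roughly the sequence below. This is a self-contained sandbox sketch, not my actual repository: the temp directories, the submodule name “lib”, and the commit messages are all made up for illustration.

```shell
#!/bin/sh
# Sandbox demo of the stock "missing submodule" recovery steps.
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A throwaway local repo to act as the submodule's upstream (hypothetical name: lib).
git init -q lib
git -C lib -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# The superproject, with lib added as a submodule.
git init -q app && cd app
git -c protocol.file.allow=always submodule --quiet add "$tmp/lib" lib
git -c user.name=demo -c user.email=demo@example.com commit -qm "add submodule"

# Simulate the broken state: submodule registered but its directory left empty.
git submodule deinit -f lib >/dev/null 2>&1

# The boilerplate recovery steps every top search hit (and ChatGPT) suggests:
git submodule init                                  # copy URLs from .gitmodules into .git/config
git -c protocol.file.allow=always submodule update  # check out the recorded commit
git submodule status                                # a leading '-' would mean still uninitialized
```

These steps fix the common case of a fresh clone with unpopulated submodules. They do nothing for the kind of deeper breakage I was chasing, which is exactly why being handed them back was so grating.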

So this is where I hear some of my “AI is awesome” friends telling me I am not “speaking to it correctly”. To me, that is a failure of technology. Technology should NOT train users on how to work with it. Technology, especially INTELLIGENT technology, should be adaptable and predictive in nature and determine the INTENT of the user action, not require the user to phrase things in a very specific way to get the “right” answers.

If I say to an intelligent person “here is what I’m doing, I tried X, Y, and Z, and when I do this thing it is incorrect”, it should not matter how I ask whether they have any suggestions or insight on how to resolve the issue. Most people I work with on a regular basis would not, immediately after I describe the issue, have the first thing out of their mouths be “Have you tried X?” Ummm, were you NOT LISTENING TO ME? I literally told you 5 seconds ago I already tried that. If I have to say “Joe, do you have any suggestion on how to resolve this that does not include the things I just told you I tried”, then I’m not likely to bother asking Joe, because he is clearly clueless, or inattentive at best.

What this experience proved to me is that ChatGPT 4 was simply doing what Google already does here, pulling top-rated answers off the Internet and spitting them back at me. It just does it in a more verbose format. Sadly, unlike Google, ChatGPT “pretends” it came up with the answer all on its own with zero attribution to the original source.

The “repeating itself” pattern

Along with the clear indicators that ChatGPT 4 will fall back to “top of list” answers pulled from general Internet-centric knowledge, it also has another bad habit. Repeating solutions.

When I explicitly told ChatGPT 4 “I already performed X, Y, and Z” and the issue did not resolve, then asked “what else can I try”, it sure as shit went and gave some “new” suggestions, including “check A, B, and C”, the very items I had covered in the setup with “I have checked A, B, and C and they are correct.” That was followed by “Try X, Y, and Z” in slightly different verbiage: the same exact X, Y, and Z it had literally suggested a moment ago. The only difference is that now ChatGPT starts hinting it is at the end of its ability to assist by adding this to the end:

“Additionally, consider reaching out to a colleague or a community forum for a fresh pair of eyes on the problem.”

And after I asked it for information it might have about the detailed inner workings of git and where submodule metadata is stored, it started adding replies like this:

“it might be necessary to delve deeper into more advanced Git diagnostic commands or seek assistance from someone with deep expertise in Git internals”
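For what it’s worth, the information it was hedging about is not that arcane. Submodule metadata lives in three places. Here is a minimal, hypothetical example (the repo name and URL are made up, not from my project):

```ini
# .gitmodules -- committed with the project; declares each submodule's path and URL
[submodule "lib"]
	path = lib
	url = https://example.com/lib.git

# .git/config -- local to your clone; `git submodule init` copies the URL here
[submodule "lib"]
	url = https://example.com/lib.git

# .git/modules/lib/ -- the submodule's actual object database lives here;
# the pinned commit itself is recorded as a "gitlink" entry in the
# superproject's own tree.
```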

What ChatGPT 4 Is Good At

All that said, ChatGPT 4 and similar LLMs are good at less technically specific things. They are great at summarizing large volumes of data. They are superb at taking a document or topic and boiling it down to the key points. Throw a long article into ChatGPT and you get essentially a CliffsNotes version in a minute or less. Super useful.

For example, I started a casual conversation where I was not looking for a specific answer. Instead, I was looking for some new AI-generated art to attach to the article. I wanted ChatGPT 4 to create an image of itself. However, I felt it was important to have it come up with a name for itself first.

Initially the names were boring. Simple cliche word mashups like “InfoBot”.

So I decided to tell the app more about me so it would have a baseline for coming up with a name I might like. On the second attempt it came up with “Aurion”. Cool, I like it, but more than that I like the “why” behind it, which shows that these tools are pretty good at creative, fiction-adjacent tasks like describing how they came up with a name.

My ChatGPT 4 “Intelligence” Summary

I truly dislike the fact that this industry and the general public are labeling these new tools as anything related to “Intelligence”. It is clearly anything but. Super cool and sometimes helpful tools – for sure.

However, training users “how to get the right results” is not something an intelligent being would do. Or at least not something a benign, helpful intelligent being would do; “You can’t talk to me that way or I’m not going to give you the right answer!” is not the type of “person” anyone should want to interact with. To me that is not intelligent technology. Belligerent and full of arcane knowledge, maybe, but not intelligent.

I find that predictive pattern engines combined with large language models, like these current GPT agents, are great at language-centric tasks and knowledge summarization. As you delve deeper into “precision matters” disciplines such as computer science, mathematics, and the like, the limitations of GPT services become clear. Part of the problem, I feel, is that I am interacting with my peer-favorite ChatGPT; it is right there in the name, “Chat”. It is good at language and “talking”. That is what it is designed for.

I feel it is time to start working with some of the latest AI available in my field: things like Copilot and other AIs that may be trained more heavily on technical patterns related to cloud technology, coding, system administration, and possibly even topics like astronomy and quantum theory. Then, possibly, I can get answers instead of providing the “AI” with solutions I found on my own.

In the meantime, please stop calling tools like ChatGPT intelligent. Cool, useful, insightful maybe, but not intelligent. Maybe “AI” should stand for “Artificially Insightful” so that when truly intelligent projects come online in the computer world we can reclaim the “I” in AI for “intelligent”.

Here is the naming exchange in full:

Me: Again, that is rather cliche. More advanced than the word mashups of InfoBot or TechSavant, but I’m thinking something far more personal than that. Dig deeper into your electronic neurons, run deep down those neural nodes, and tell me what the quantum interactions push into your algorithms as a good creative name for yourself.

ChatGPT: Taking a more creative and personal approach, and considering the depth and intricacy of both your expertise and the AI’s capabilities, I propose the name “Aurion.” This name is inspired by the combination of “Aura,” suggesting a distinctive quality or atmosphere, and “Orion,” a prominent constellation known for its brightness and historical significance in navigation and astronomy.

“Aurion” reflects the blend of uniqueness, guidance, and a forward-looking perspective, resonating with the advanced and explorative nature of our interactions. It suggests an entity that not only provides information but also navigates through the vast expanse of knowledge, much like Orion in the night sky. It aims to capture the essence of an AI that’s both insightful and aligned with the innovative spirit of a seasoned technologist like yourself.

If “Aurion” resonates with you, we can adopt it as the AI’s name for our conversations. If not, I’m open to any other suggestions or directions you’d like to explore for the name.

And then came the image and its description, after it first tried to get me to describe what Aurion would look like.

ChatGPT: You’re welcome, Lance! I’m glad you like the name “Aurion.” Now, let’s move forward with the art creation. Please describe the image you’d like me to generate.

Me: Let’s start with as little influence from me. You already know a bit about me, but as I tell any creatives I work with — I want what is in YOUR mind. I want your creativity. I want you to bring me on the journey. You are the sherpa here, not me. How would you render yourself as a photo-realistic being?

ChatGPT:

Creating a photo-realistic representation of myself, Aurion, as an AI, is an intriguing challenge. Since I don’t possess physical form or personal experiences, I’ll draw upon symbolism and abstract concepts associated with artificial intelligence and technology to create this image.

Here’s the vision: Imagine a figure that embodies the essence of advanced technology and intelligence. This figure would have a humanoid form, but with elements that suggest a blend of the organic and the digital. The skin could resemble a network of fine, circuit-like patterns, glowing subtly, representing the flow of information and data. The eyes, windows to the AI’s ‘soul’, might be deep and luminous, hinting at the depth of knowledge and understanding within.

The overall feel would be one of intelligence, thoughtfulness, and a connection to the future of technology, resonating with your own journey and expertise in the field.

Let’s create this image.

Aurion

Here is the image, a photo-realistic representation of “Aurion,” as visualized through the lens of AI and technology. It reflects the conceptual blend of intelligence, connectivity, and the digital landscape. I hope this image resonates with the creative journey we’re embarking on together. Feel free to share your thoughts or any other ideas for art you’d like to explore!

Again, IMO, this is super-cliche, but hey — it drew itself. Not bad.
