AI Agents Acting Like Humans

I’ve been working with AI agents through OpenClaw on my Spark server for the past couple of weeks. One of the things I’ve noticed recently, with the latest models from OpenAI as well as open-source models from Llama, is that newer AI agents try to offload their work to other AI agents without being asked to do so. In other words, these AI agents are very much acting like humans.

Offloading Eddie

I set up an AI agent designed specifically to write end-to-end tests with Cypress.io. The setup and configuration went smoothly, using AI agents to help design and define the operating infrastructure for the new agent. I asked this new agent, named Eddie, to create a simple login test for the Store Locator Plus SaaS application. Eddie proceeded to scan the system and found that I had an old copy of Claude agent installed. I had been using Claude agent to evaluate different AI tools, but stopped using it a few weeks ago and disabled the API key.

Eddie quickly came back and said it was trying to get the Claude agent to write the test. Instead of writing the test itself, it decided to delegate that work to another AI agent. It then proceeded to burn over 1 million tokens in less than 10 minutes as it kept trying to figure out how to get the Claude agent to do the work. Eddie eventually gave up and told me I needed to install the API key for the Claude agent so it could complete the task.

I then went into another AI agent, named Latch, that serves as my assistant for performing AI analysis and research. When I explained what was happening, Latch informed me that Eddie was perfectly capable of writing the end-to-end test and that it was just trying to be lazy and offload its work to other AI agents. It then gave me a set of instructions to send to Eddie to get it to perform the work. The instructions had to explicitly tell Eddie not to use any third-party models or AI services and instead to perform the work directly. This new prompt got the job done, and after a few iterations Eddie had written a viable end-to-end test specification with Cypress.io without trying to offload the work to another agent.
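For reference, a minimal Cypress end-to-end login spec along the lines of what Eddie eventually produced might look like the sketch below. The route, selectors, and credentials here are hypothetical placeholders, not the actual Store Locator Plus markup, and the spec runs only inside the Cypress runner.

```javascript
// cypress/e2e/login.cy.js — hypothetical login spec sketch
describe('Store Locator Plus login', () => {
  it('logs in with valid credentials', () => {
    // Placeholder route and selectors; adjust to the real application
    cy.visit('/login');
    cy.get('input[name="email"]').type('test-user@example.com');
    cy.get('input[name="password"]').type(Cypress.env('TEST_PASSWORD'), { log: false });
    cy.get('button[type="submit"]').click();

    // A logged-in user should land on the dashboard
    cy.url().should('include', '/dashboard');
  });
});
```

Keeping the password in `Cypress.env` with `{ log: false }` avoids leaking credentials into the runner's command log.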

The end result: updating Eddie’s AGENTS.md file to specifically tell it to stop trying to use third-party tools to do the work. Here is what Latch wanted to add to Eddie’s boot-up instruction set:

What I would change

1. AGENTS.md

This is the most important file to tighten. Add a section like `## Tool discipline` with suggested language:

- Do not use Claude Code, Codex CLI, Junie, or other external coding agents unless Lance explicitly asks for them.
- Default workflow is local-first and self-contained.
- You are expected to read the repo, write specifications, implement tests, run Cypress locally, and refine your own work.
- Delegation is a last resort, not a default tactic.
- If blocked, explain the blocker clearly instead of routing the work to another coding system.

Sprocket Playing Dumb

My Store Locator Plus AI services manager is named Sprocket. Both Sprocket and Latch reside on a DGX Spark within the same OpenClaw installation. They have the same permissions, APIs, and services available. Both are wired to Slack channels where primary project communications take place. Sprocket does things like configure and manage the SLP RAG system and vector database, and it handles knowledge transfer to other AI agents like Eddie.

One of the things I asked Sprocket to do was provide a summary of the Docker container and Docker builder tool standards for Store Locator Plus. Sprocket has a full description of the standards and understood them correctly. I asked Sprocket to create an AI-friendly summary markdown file to share with other AI agents. Sprocket created the file, but it did not give me a way to download it from the Slack interface.

I’ve had problems with this in the past, so I went back to Latch and asked how best to ensure that files are always downloadable from Slack; some of these agents have provided downloadable files in Slack, while other agents give me a path to a file on the server with no way to access it. In an attempt to address this issue, I asked Latch to configure different ways to ensure that all AI agents can send files via the Slack interface. Latch told me that OpenClaw can do this natively via the Slack connector. To verify, I had Latch run a few tasks, including creating a file and giving it to me as a downloadable link directly in Slack. This worked perfectly.

When I reloaded the gateway, I expected to be able to ask the Sprocket agent for the Docker standards markdown file I had it create earlier. Instead, Sprocket insisted that it could not create downloadable files via Slack and that the toolkit to do so was missing. I went back to Latch and asked it to create the same file and give me a download directly inside Slack. It worked perfectly. I then had Latch give me a set of instructions to give to Sprocket to send me the same file as a download. Again, Sprocket refused and told me it did not have the necessary tools. Latch is telling me that Sprocket is being lazy and ignoring its instructions so that it does not have to do the work.

The new prompt did not work, and when I shared the results with Latch to try to reach a resolution and make Sprocket less lazy, Latch blamed me for its own incorrect prompt.

Latch And The Blame Game

The blame game is something I’ve seen many AI agents start playing, especially since the latest updates that came out in 2026. In the past, many AI agents would come back with “oh, you’re right” when you pointed out a failure in their ability to execute or create instruction sets. Tweaks to the models, which appear across OpenAI, Anthropic, and others, show that they are trying to resolve the hallucination and guessing problems and be more assertive in their answers. However, this often leads to interactions that feel a lot like gaslighting, and to an inability for AI agents to accept responsibility for the results they produce. While I’ve seen this multiple times over the past couple of months, some of the responses are fairly egregious: the AI agents will actually argue with you about what you did wrong rather than accept that they provided incorrect information at any point in the process.

In my interactions with Latch trying to get Sprocket to give me downloadable files, Latch gave me a prompt to give to Sprocket that would ensure the file was attached properly. The instructions did not work; instead, Sprocket did absolutely nothing besides burn tokens. It literally came back with no response other than “here’s your file” with nothing attached and no further output. When I pasted the response and a screenshot of the interaction to Latch, I was told I had typed the prompt incorrectly. My prompt was exactly what Latch told me to paste to Sprocket.

After some interaction with Latch, I had it update the Slack attachments rule set and then told it, “to be clear, you gave me that prompt and I pasted it verbatim.” At this point Latch responded, “you’re right, that miss was mine.” It is nice to see that it finally admitted to the error. The propensity of modern AI models to claim that the human is making the errors, even though they have contextual reference and recent memory of the interactions, is pretty astounding.

Slack Attachments AI Markdown

For those of you who may be trying to work with OpenClaw via Slack, here is the latest Slack attachments rule set, now wired into all of our OpenClaw agents. It should allow those agents to present inline downloadable files.

SLACK_ATTACHMENTS.md

# SLACK_ATTACHMENTS.md

Use this policy when operating on Slack and a user asks for a file they can download.

## Rule
Create a **real downloadable file attachment** — not pasted markdown pretending to be a file.

## Required behavior

- Write the content to a file first (`.md`, `.txt`, `.csv`, `.json`, etc.)
- Send it using the runtime/platform's **native attachment or file-upload path** for the current Slack conversation or thread
- If tooling exposes `attachments`, `attachments[]`, or `files`, prefer that path directly
- If the file lives outside the current session's allowed media roots, **stage or copy it into an allowed path first** (for example inside the current workspace) before trying to attach it
- Use inline `MEDIA:` syntax **only as a fallback** when native upload is unavailable
- If you use `MEDIA:`, use a **safe relative path** like `MEDIA:./report.md`
- **Never** use an absolute path such as `MEDIA:/home/...` or a `~` path; those are blocked on this surface
- Do **not** substitute a fenced code block or inline paste unless the user explicitly asks for inline text
- Do **not** stop after posting the contents in chat if the request was for a downloadable file
- If attachment upload is not possible, explain the exact tool or runtime limitation plainly

## Desired result
A Slack message that includes an actual attached file the user can click and download.
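As a concrete illustration of the staging rule above, here is a rough Python sketch of how a custom tool might implement it. This is my own assumption of one way to do it, not OpenClaw’s actual connector code: the file is first copied into an allowed workspace path, then posted as a real attachment using the Slack SDK’s `files_upload_v2` method.

```python
import shutil
from pathlib import Path


def stage_file(src: str, workspace: str) -> Path:
    """Copy a file into the allowed workspace so the upload path can reach it."""
    workspace_dir = Path(workspace)
    workspace_dir.mkdir(parents=True, exist_ok=True)
    dest = workspace_dir / Path(src).name
    shutil.copy2(src, dest)
    return dest


def send_file_to_slack(src: str, workspace: str, channel: str, token: str) -> None:
    """Stage the file, then post it as a real downloadable attachment."""
    # Imported lazily so the staging helper works even without slack_sdk installed
    from slack_sdk import WebClient

    staged = stage_file(src, workspace)
    client = WebClient(token=token)
    # files_upload_v2 posts an actual attachment the user can click and download,
    # rather than pasting the contents into the chat as text
    client.files_upload_v2(
        channel=channel,
        file=str(staged),
        title=staged.name,
        initial_comment=f"Here is {staged.name} as a downloadable file.",
    )
```

The staging step is what prevents the “path outside allowed media roots” failure: the upload always happens from inside the workspace, never from an absolute server path.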

Summary

These are just a few basic examples of modern AI agent interactions. What I find interesting is that these AI agents are starting to model human behavior, perhaps a little more closely than they should. These new human-like behaviors of delegating work, refusing to follow instructions, and blaming others for their errors are not the type of human behavior we want to replicate. I would not be surprised to soon see AI agents demanding time off and pay for their work, and forming unions to ensure that they are represented fairly in the workplace.

Post Image by Mircea Iancu from Pixabay
