Have you ever tried to get an AI agent to explain its plan of action before taking a specific step? I know I have, and it’s not as easy as it sounds. I’ve been experimenting with different models, trying to get them to announce what they’re about to do, especially when it comes to tool calls.
So far, I’ve had mixed results. With Gemini 2.5 Pro, I couldn’t get it to announce anything at all; it just made tool calls without any warning. GPT-5 has been a bit more cooperative: it at least announces its overall plan in the first message, but after that it stops announcing individual tool calls.
I think part of the problem is that I have some pretty specific requirements. I want the agent to announce certain tool calls in certain contexts, but I haven’t even been able to get it to announce everything reliably. I’m using the Vercel AI SDK v4 in TypeScript, so maybe that’s part of the issue.
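For context, here’s a stripped-down sketch of the kind of setup I mean. The system prompt wording, the `searchDocs` tool, and the model ID are just placeholders for this example, not my real setup:

```ts
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-4o'), // placeholder: swap in whichever model you're testing
  // Ask for a one-sentence announcement before every tool call.
  system:
    'Before each tool call, write one short sentence saying which tool ' +
    'you are about to call and why, then make the call.',
  maxSteps: 5, // allow multiple text + tool-call steps in one run
  tools: {
    // placeholder tool, just to have something announceable
    searchDocs: tool({
      description: 'Search the project documentation',
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => ({ results: [`stub result for "${query}"`] }),
    }),
  },
  prompt: 'Find the section about rate limits.',
});

// Log text deltas and tool calls as they stream, so you can see
// whether an announcement actually shows up before each call.
for await (const part of result.fullStream) {
  if (part.type === 'text-delta') process.stdout.write(part.textDelta);
  if (part.type === 'tool-call') console.log(`\n[tool call: ${part.toolName}]`);
}
```

With something like this, I’d expect a text delta to show up before each tool-call part, but in practice whether that actually happens seems to depend entirely on the model.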
It’s frustrating, because I know that getting AI agents to explain themselves could be a game-changer. Imagine being able to understand exactly what an AI is doing, and why. It could make them so much more useful and trustworthy.
So, if anyone has any tips on how to get AI agents to announce their next moves, I’m all ears. Let’s figure this out together!