GPT-5: Smart but Not Always Right

I recently spent 20 minutes asking GPT-5 a simple question: what was the temperature in Seattle today, and how did it rank among 2025’s temperatures so far? What I got back was a lengthy response that left me wondering whether the model was trying to show off its intelligence or simply couldn’t give a straight answer.

The transcript is a wild ride, with the model juggling different sources like NOAA, Weather Spark, and ExtremeWeatherWatch, but ultimately mixing up forecast and observed temperatures like a freshman who copied the wrong lab report. It’s not that GPT-5 is wrong in a dumb calculator kind of way; it’s wrong in an over-caffeinated valedictorian kind of way – it knows the right answer exists, but can’t resist showing off every intermediate thought and hedge.

The result is a verbal Rube Goldberg machine, more noise than signal. As the author of the Reddit post that shared the transcript so aptly put it, ‘Intelligence without restraint is just noise.’ Until the model learns to ‘stop talking until it’s sure,’ we’ll be stuck with a smarter version of dumb.

It’s a reminder that even the most advanced AI models need to learn when to hold back and focus on giving us the right answer, rather than parading every step of their reasoning.
