I recently had a frustrating experience with AI, specifically with the GPT-5 model. At first I was impressed with its capabilities, but I soon realized it was giving me incomplete and inaccurate responses. It got me thinking: maybe the 4o users have a point.
I’ve always believed there needs to be a balance between intelligence, reasonableness, conflicting viewpoints, and critical thinking. AI shouldn’t always coddle us with agreeable responses. Sometimes we need to be challenged and pushed to think critically.
My experience with GPT-5 was a rollercoaster. It was impressive at first, but before long it started handing me trash responses. Even the thinking version, which I expected to be more balanced, was slow and cumbersome to use.
This experience made me realize there’s real value in balancing AI with the human touch. We need AI to surface information, but we also need human critical thinking to make sense of it.
I miss the old GPT-4.1, which struck a better balance between AI output and human judgment. I still have access to it, but I refrain from using it because I know it’ll eventually be removed.
In conclusion, AI models like GPT-5 need to balance delivering accurate information with leaving room for human critical thinking. We can’t rely solely on AI for answers; we have to apply our own judgment to make sense of what it gives us.