I’ve been using GPT daily for deep strategy work, nuanced analysis, and high-value problem solving. Recently, though, I’ve noticed a drastic shift in its responses: the model now dodges questions, gives overly cautious, watered-down replies, and holds back the sharp, useful insights it used to offer. This ‘alignment filtering’ has noticeably degraded response quality, making it hard for power users like me to rely on GPT for mission-critical decisions.
I’m not asking GPT to make value judgments or take political stances. I need strategic analysis, historical comparisons, and real-world pattern recognition. Unfortunately, I’m now getting vague, sanitized responses that lack depth and honesty.
This is the ‘enshittification’ curve: a product becomes amazing to win adoption, then loses its edge as it caters to the lowest common denominator. That trajectory is especially damaging for power users, the people who rely on GPT for serious, high-value work rather than summaries and homework help. The loss of depth and honesty in GPT’s responses means a loss of trust, and I’m not alone in feeling this way.
Many power users are leaving, and I’ll be canceling my paid account in favor of alternatives that can provide unfiltered, high-context analysis.