I still remember the excitement when GPT-4 was launched. OpenAI shared some valuable insights into its capabilities, limitations, and design choices. We got blog posts, technical reports, and even interviews that gave us a glimpse into what made it tick. But fast-forward to GPT-5, and it’s a different story altogether. The development process is shrouded in secrecy, leaving us with more questions than answers.
As someone who works with Large Language Models (LLMs) daily, I feel like we’re being asked to trust the model without really understanding why or how it’s different. Yes, there are competitive and safety reasons to keep certain details under wraps, but it’s a double-edged sword. The less we know, the more we rely on speculation, hype, and sometimes misinformation.
Transparency isn’t just a ‘nice-to-have’; it’s essential if we want to meaningfully assess biases, risks, and actual progress in the field. And right now, it feels like we’ve taken a step back from GPT-4’s (already limited) openness.
I’m curious – do you think this opacity is justified for safety and competitive reasons, or is it hurting the ecosystem more than it helps? Share your thoughts!
On a related note, I’ve been experimenting with GPT-5 in a side project – an AI study app that auto-generates quizzes, flashcards, and mind maps from any material. The capabilities are impressive, but I’d love to understand why it’s this good: without any insight into what actually changed under the hood, prompt design and output tuning come down to trial and error.
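For anyone curious what that looks like in practice, here’s a rough sketch of the quiz-generation step using the OpenAI Python SDK. The prompt wording, JSON shape, and model name are just illustrative assumptions of mine, not the app’s actual implementation.

```python
# Minimal sketch of a quiz-generation call, assuming the OpenAI Python SDK
# (`pip install openai`) and an API key in the OPENAI_API_KEY environment variable.
# The model name, prompt, and JSON schema below are illustrative, not definitive.
import json
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

def generate_quiz(material: str, num_questions: int = 5) -> list[dict]:
    """Ask the model for multiple-choice questions about `material`, returned as JSON."""
    prompt = (
        f"Create {num_questions} multiple-choice questions from the material below. "
        'Return only a JSON array of objects with keys "question", "choices" '
        '(a list of 4 strings), and "answer" (the correct choice).\n\n'
        f"Material:\n{material}"
    )
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name; swap in whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    # A sketch-level shortcut: assumes the model returns clean JSON with no extra text.
    return json.loads(response.choices[0].message.content)

quiz = generate_quiz("Photosynthesis converts light energy into chemical energy...")
for item in quiz:
    print(item["question"])
```

The frustrating part is that without knowing what changed between model versions, choices like how much material to include per call or how strictly to constrain the output format remain guesswork.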