Do you remember the first time you used a large language model (LLM) like GPT-3? It felt like magic, right? The excitement was palpable, and the uncertainty of what the model would generate next was half the fun.
Fast forward to today, and the landscape has changed dramatically. We’ve got GPT-4, Claude, Jamba, Mistral, and many more LLMs that are solid, consistent, and predictable. But as the technology matures, the novelty is wearing off.
For me, the thrill is gone. I’m not getting excited about the latest model upgrade anymore. Instead, I’m more interested in building workflows, designing better systems, and exploring the potential of LLMs as infrastructure.
Don’t get me wrong, this is a good thing. The technology is maturing, and LLMs are becoming an integral part of our workflows. But the focus has shifted from being wowed by the latest model to building agents and orchestration layers that can harness their power.
In many ways, this shift is more exciting than the initial thrill of using an LLM. We’re moving from the ‘wow’ factor to the ‘how’ factor: how can we use these models to create better systems, workflows, and outcomes?
So, what do you think? Are you feeling the same way? Are you more interested in building on top of LLMs than being impressed by their capabilities?