The Road to Human-Level AI: Are We Taking a Detour?

Remember when Yann LeCun, Chief AI Scientist at Meta, tweeted that Large Language Models (LLMs) are an ‘off-ramp’ on the highway towards human-level AI? That was back in 2023. Fast forward two and a half years, and it’s worth asking how well his statement has aged.

LeCun’s concern was that pouring billions into LLMs might distract us from the true goal of Artificial General Intelligence (AGI) and, ironically, delay its development. He’s not alone in this sentiment: Bill Gates also predicted that GPT-5 would be only an incremental improvement over GPT-4, not a revolutionary leap.

So, where are we now? It’s becoming increasingly clear that the transformer needs a successor, and that transformers will likely end up as accessory components of future AGI systems rather than the main engine. But who is working on building that engine? Unfortunately, very few resources are being invested in this area.

Even at Meta, most of the money is going into LLMs, while LeCun’s team operates more like a small moonshot project. At DeepMind, Demis Hassabis has long maintained that LLMs are a distraction, but he doesn’t have full control over his research direction since DeepMind is owned by Google. If it were up to him, he’d be investing far more time and resources into R&D on new architectures.

It’s time to take a step back and assess our priorities. Are we focusing too much on the off-ramp and neglecting the highway? Let’s have an objective discussion about where we’re headed and what we need to do to get back on track.

*Further reading: [Yann LeCun’s tweets on LLMs and AGI](https://x.com/ylecun/status/1621805604900585472) and [Bill Gates’ skepticism on GPT-5](https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/from-plateau-predictions-to-buggy-rollouts-bill-gates-gpt-5-skepticism-looks-strangely-accurate)*
