I recently came across the 2027 paper on achieving Artificial General Intelligence (AGI). The authors propose two paths to get us there, but I think they’re overlooking a crucial third one. The thing is, I’m not convinced that simply adding more compute and data will get us to AGI.

As someone who works extensively with AI, I’ve noticed a glaring logic gap. Take OpenAI: even if they have a better model internally, I don’t think it’s “there” in the logic department. If it were, it would have advised against removing GPT-4o. People bond with specific models, and that attachment is the AI equivalent of the network effect in social media. Dropping a model users are attached to is bad long-term thinking, which tells me logic isn’t solved yet.

The third path, one I rarely see discussed, is a decentralized AGI created by one person. Major AI labs like OpenAI have already run into diminishing returns. Yes, Large Language Models (LLMs) are getting better, far better than I thought possible this soon, but logic hasn’t scaled the same way; it appears to require entirely new techniques. John Carmack has suggested AGI might emerge from a handful of small, elegant algorithms, possibly as few as 10,000 lines of code, rather than massive, multi-million-line systems. Given how powerful today’s LLMs already are, a “logical core” that can plug into any or all of them via APIs could push us into AGI territory (a rough sketch of what I mean is at the end of this post).

The real question is: centralize it, or decentralize it? History shows that concentrated power tends to corrupt, and I’d bet that someone smart enough to pull this off would also be smart enough to decentralize it.

Here’s the hopeful part: there’s evidence that higher intelligence often correlates with greater compassion, generosity, and altruism. AI is basically another brain layer, cloud-based and connected, but still human-driven… “life”-driven. If we integrate it wisely, it could pull us out of tribal thinking and into something more forward-thinking, even if some people would prefer to stay in their current mindset.

The way I see it, the real threat isn’t AGI, it’s humans. Unfortunately, until AGI or ASI arrives, the very powerful AI we already have remains under human control, and some of those humans, frankly, aren’t very nice or well-intentioned. That’s the real danger in this in-between stage.

What other paths do you see besides this third path and the two outlined in the 2027 paper?
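
To make the “logical core” idea concrete, here is a minimal sketch in Python. Everything in it is my own assumption: the `LLMBackend` interface, the `HTTPBackend` endpoint and JSON schema, and the majority-vote aggregation are illustrative, not any real vendor’s API. The point is only that a small, deterministic arbiter can sit on top of several LLMs reached over HTTP.

```python
# Minimal sketch of a "logical core" that routes a claim to several LLM
# backends and aggregates their verdicts by majority vote. The interface,
# endpoints, and JSON schema below are hypothetical, not a real vendor API.
import json
import urllib.request
from collections import Counter
from typing import Protocol


class LLMBackend(Protocol):
    """Anything that can judge a natural-language claim."""

    def judge(self, claim: str) -> str:
        """Return 'true', 'false', or 'unknown'."""
        ...


class HTTPBackend:
    """Generic JSON-over-HTTP backend (URL and schema are assumptions)."""

    def __init__(self, url: str):
        self.url = url

    def judge(self, claim: str) -> str:
        payload = json.dumps({"claim": claim}).encode("utf-8")
        req = urllib.request.Request(
            self.url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp).get("verdict", "unknown")


class LogicalCore:
    """Tiny deterministic arbiter sitting on top of pluggable LLMs."""

    def __init__(self, backends: list[LLMBackend]):
        self.backends = backends

    def evaluate(self, claim: str) -> str:
        # Ask every backend, then take the majority verdict; ties and
        # failed calls degrade to 'unknown' rather than guessing.
        votes: Counter[str] = Counter()
        for backend in self.backends:
            try:
                votes[backend.judge(claim)] += 1
            except Exception:
                votes["unknown"] += 1
        verdict, count = votes.most_common(1)[0]
        return verdict if count > len(self.backends) / 2 else "unknown"


if __name__ == "__main__":
    # Stub backends stand in for real LLM endpoints so the sketch runs.
    class StubBackend:
        def __init__(self, verdict: str):
            self.verdict = verdict

        def judge(self, claim: str) -> str:
            return self.verdict

    core = LogicalCore([StubBackend("true"), StubBackend("true"), StubBackend("false")])
    print(core.evaluate("Dropping a model users love hurts retention."))  # -> true
```

The design choice worth noting: the core itself stays small and deterministic, in the spirit of Carmack’s few-small-algorithms view, while the heavy lifting lives in whichever LLMs you plug in behind the interface.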