I’ve been following the discussions around GPT-5 and the current pace of AI improvement, and I have to say, I’m a bit surprised by the mixed reactions. Some people are underwhelmed by the latest advancements, while others are worried that the trajectory towards Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI) might be delayed.
But I think we’re missing the point. The relationship between marginal increases in intelligence and their impact on society is not as straightforward as it seems. Even if the incremental jump in intelligence between models is less dramatic than we expect, that doesn’t mean the impact won’t be significant.
Take, for example, the difference in intelligence between a goldfish and me. I’d like to think I’m much smarter than a goldfish! And then there’s the difference between me and Einstein, who was undoubtedly a genius. But here’s the thing: despite the huge intelligence gap between the goldfish and me, the difference in our marginal contributions to society is almost zero, while Einstein’s contributions have been immense and lasting. The mapping from intelligence to impact isn’t linear; crossing certain thresholds matters far more than the size of the jump.
Now imagine if we had millions of Einstein-level AIs working 24/7. The potential for new discoveries in science, medicine, and other fields would be staggering. That’s the real promise of AI advancements: it’s not just about the pace of progress, it’s about the impact it can have on humanity.
What do you think? Are we focusing too much on the pace of AI progress, and not enough on the potential impact?