As I dive into advanced medical study, I’ve noticed a significant improvement in GPT-5’s ability to read medical literature. Previously, I had to stop using it because of its tendency to ‘hallucinate’ wildly, producing inaccurate answers. Now that I’m giving it another try, I’m pleased to see it’s doing better. Hallucinations still occur, but less frequently, making its accuracy comparable to Open Evidence.
However, it’s essential to double- and triple-check everything, as GPT-5 still has its limitations. For instance, it often falls back on reading only abstracts and unreliable secondary sources rather than the official full text. And even when I provide the PDF directly, it struggles to read the document accurately.
While there’s still room for improvement, it’s encouraging to see GPT-5 moving in the right direction. As AI technology continues to evolve, I’m excited to see how it can aid in medical research and education.