I’m sure I’m not the only one who’s had a rough time with RunPod lately. I recently spent a whopping 5 hours trying to run vLLM in multi-GPU mode, only to have it either freeze during model init or, occasionally, load just fine, with no way to predict which. It’s like playing a game of GPU roulette.
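For anyone hitting the same thing, this is roughly the setup I mean. A minimal sketch of loading a model with vLLM’s tensor parallelism (the model name and GPU count here are just placeholders, swap in your own); the `LLM(...)` call is the step where the pod would sometimes hang for me:

```python
from vllm import LLM, SamplingParams

# Multi-GPU load: tensor_parallel_size shards the model across the
# visible GPUs on the pod. On a flaky pod this init step can stall
# while the workers try to set up NCCL communication.
llm = LLM(
    model="meta-llama/Llama-2-13b-hf",  # placeholder model
    tensor_parallel_size=2,             # placeholder GPU count
)

# Quick smoke test once the model actually loads.
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
for out in outputs:
    print(out.outputs[0].text)
```

When the init hangs, exporting `NCCL_DEBUG=INFO` before launching can at least show whether the workers are stuck on NCCL setup rather than on downloading or loading weights, which helps separate “my config is wrong” from “this pod’s interconnect is broken.”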
I’m not asking for much: just a reliable way to use the GPUs I’m paying for without the constant frustration. It’s time for RunPod to step up their game and ensure their infrastructure can handle the demands of their users.
Has anyone else experienced similar issues with RunPod? Share your experiences and let’s hope they’re listening.
It’s time for a change, and I hope RunPod takes our feedback seriously.