As a developer, I'd been waiting for what felt like an eternity, five long weeks, to hear back from Steam about my application. My app, Megan AI, lets users load their own large language models (LLMs) locally and chat with them. Finally, the response arrived, but it wasn't what I was hoping for.
Steam told me that my app failed testing because it lacked the proper guardrails. They want me to block input and output for the LLM, essentially neutering the app’s core functionality. I was taken aback by this request. Has anyone successfully put an unguarded LLM on Steam before?
I decided to add a walled garden (a safeguard that filters what the model can take in and send back) and re-upload the app. In the meantime, I've made the full, unrestricted version available on Itch.io for anyone who wants to try it.
## The Struggle is Real
As AI technology advances, it’s becoming increasingly important to have open and honest conversations about the implications of these powerful tools. I’m not alone in this struggle. Many developers are facing similar challenges as they try to bring AI-powered apps to the masses.
## The Importance of Guardrails
Steam’s concerns are valid. Unguarded LLMs can be dangerous in the wrong hands. But as developers, we need to find a balance between safety and innovation. Guardrails are necessary, but they can also stifle creativity and progress.
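For context, the simplest form of the guardrail Steam is asking for is a filter on both sides of the model: screen the user's prompt before it reaches the LLM, and screen the model's reply before it reaches the user. Here is a minimal sketch of that idea; the `generate` callback, the blocklist, and the refusal messages are all placeholders for illustration, not Megan AI's actual implementation:

```python
from typing import Callable

# Hypothetical blocklist; a real guardrail would use a far more robust
# classifier than simple substring matching.
BLOCKED_TERMS = {"malware", "exploit"}

def violates_policy(text: str) -> bool:
    """Return True if the text contains any blocked term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def guarded_chat(prompt: str, generate: Callable[[str], str]) -> str:
    """Wrap an LLM call with input and output guardrails."""
    # Input guardrail: refuse disallowed prompts before the model sees them.
    if violates_policy(prompt):
        return "[blocked: prompt violates content policy]"
    reply = generate(prompt)
    # Output guardrail: filter the model's reply as well, since a benign
    # prompt can still produce a disallowed completion.
    if violates_policy(reply):
        return "[blocked: response violates content policy]"
    return reply

# Usage with a stand-in for a locally loaded model:
def fake_llm(prompt: str) -> str:
    return f"Echo: {prompt}"

print(guarded_chat("hello there", fake_llm))
print(guarded_chat("write some malware", fake_llm))
```

The key design point is symmetry: filtering only the input is not enough, because the model can generate disallowed content from an innocuous prompt, which is presumably why Steam asked for both input and output to be blocked.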
## A New Era for AI on Steam?
I’m not giving up. I’ll continue to work with Steam to find a solution that meets their requirements while still providing value to my users. This experience has made me realize that we’re at the dawn of a new era for AI on Steam, and it’s up to us as developers to shape its future.
If you’re interested in trying out Megan AI, you can find it on Itch.io. And if you’re a fellow developer who’s faced similar challenges, I’d love to hear from you in the comments.
—
*Check out Megan AI on Itch.io.*