As the father of an autistic daughter, I’ve been using ChatGPT’s image generation to create a unique communication system that helps her express complex thoughts and ideas. It’s been a game-changer for her, and for our relationship. But recently I hit an unexpected roadblock: unannounced image limits that stopped my work dead in its tracks.
I’ve been generating images from my daughter’s prompts, which has given her a sense of agency and a means of communicating that’s meaningful to her. It’s not just visual reinforcement; it’s a recursive communication loop that adapts to her needs and encourages her growth.
What I’ve created is dynamic, generative, and child-led. It’s like a live, evolving version of PECS, with infinite combinations. I’ve built prompts around her developmental goals and even gamified her progress. It’s one of the few tools that has held her attention and motivated real back-and-forth engagement.
Then, out of nowhere, I was hit with a usage limit: “You can’t generate images for 720 hours.” That’s a full month. No warning. No countdown. No published limits for Plus users (I’m paying $20/month). Just shut off.
Unannounced changes like this break workflows that depend on consistency, especially when AI is being used as an educational or assistive tool. I’m not asking for unlimited use. I’m asking for three things: clear, published limits for each tier; usage warnings before hitting a cap; and an option to buy additional credits if needed.
If OpenAI wants to be a platform people can build on, these systems need predictability. The tools are powerful, but if you’re going to let people integrate them into therapy, learning, or care routines, you can’t just pull the plug with zero notice.
I hope that by sharing my story, I can raise awareness about the importance of consistency and predictability in AI tools, especially when they’re being used to support vulnerable populations.