My Small Win: Running a Local AI Model on a Jetson Orin Nano

I’m thrilled to share that I’ve finally managed to set up a local AI model on my Jetson Orin Nano. It wasn’t a straightforward process, but the sense of accomplishment is worth it.

The main hurdle turned out to be simple: I was using the wrong type of SD card. Once I swapped in the right one, everything fell into place. Now I'm running OpenWebUI in a container and Ollama as a service, and both are working smoothly.
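For anyone who wants to follow along, the setup looks roughly like this. This is a sketch, assuming the standard Ollama install script and the official OpenWebUI container image; your port mappings and image tag may differ.

```shell
# Install Ollama (on Linux it registers itself as a systemd service)
curl -fsSL https://ollama.com/install.sh | sh
sudo systemctl enable --now ollama

# Run OpenWebUI in a container, pointing it at the host's Ollama service.
# The image, port, and volume below follow the project's documented defaults.
docker run -d \
  --name open-webui \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

After that, the web interface should be reachable at port 3000 on the Jetson, with Ollama handling the models behind it.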

For now, the models are running from the SD card, but I plan to move them to the M.2 SATA SSD soon. I'm also happy to report that performance with a 3B model is more than acceptable.
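If you want to attempt the same move, Ollama reads the `OLLAMA_MODELS` environment variable to find its model directory, so relocating the models is mostly a matter of a systemd override. A sketch, assuming the SSD is mounted at `/mnt/ssd` (a hypothetical mount point; adjust paths for your setup):

```shell
# Copy the existing models from the SD card to the SSD.
# /usr/share/ollama/.ollama/models is the default location for a service install.
sudo rsync -a /usr/share/ollama/.ollama/models/ /mnt/ssd/ollama/models/

# Point the Ollama service at the new location via a systemd override:
sudo systemctl edit ollama
#   [Service]
#   Environment="OLLAMA_MODELS=/mnt/ssd/ollama/models"

sudo systemctl restart ollama

# Pull and run a 3B model to confirm everything still works
ollama run llama3.2:3b
```

Worth keeping the SD-card copy around until you've confirmed the service picks up the new path.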

This small win has got me excited about the possibilities of running local AI models. It’s amazing how much can be achieved with the right hardware and a bit of troubleshooting.

If you’re interested in exploring local AI models, I hope my experience can serve as a motivation to keep pushing forward, even when faced with seemingly insurmountable obstacles.
