Imagine being able to edit videos at lightning speed, without the tedious process of manually selecting b-rolls, memes, and sound effects. What if I told you an AI pipeline can do just that?

I've been experimenting with a small pipeline that breaks raw video into segments, uses Large Language Models (LLMs) to suggest b-rolls, memes, and sound effects for each one, and generates an editing document you can hand over to an editor. The result? Revision cycles shrink dramatically, and it opens the door to full auto-rendering in the next version. As someone who's been playing with this technology, I can attest it's a game-changer for content creators and video editors.

If you're building AI workflows or editing content at scale, I'd love to hear from you, and I'll offer you free access to the beta after a quick feedback call.
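To make the pipeline concrete, here is a minimal sketch of its three stages: segment the video, ask a model for asset suggestions per segment, and emit a structured editing document. Everything here is illustrative: the fixed-length segmenter stands in for real scene detection, and `suggest_assets` is a hypothetical placeholder where the actual LLM call would go.

```python
import json

def segment_video(duration_s, segment_len_s=10):
    """Split a video of duration_s seconds into fixed-length segments.
    A real pipeline would cut on scene or speaker changes instead."""
    segments, start = [], 0
    while start < duration_s:
        end = min(start + segment_len_s, duration_s)
        segments.append({"start": start, "end": end})
        start = end
    return segments

def suggest_assets(transcript_snippet):
    """Hypothetical stand-in for the LLM call: in practice this would
    send the segment's transcript to a model and parse its suggestions."""
    return {"b_roll": f"stock clip related to: {transcript_snippet}",
            "meme": None,
            "sfx": "whoosh"}

def build_edit_doc(duration_s, transcript_by_segment):
    """Combine segments, transcripts, and suggestions into an editing
    document an editor (or a future auto-renderer) can consume."""
    doc = []
    for seg, text in zip(segment_video(duration_s), transcript_by_segment):
        doc.append({**seg, "transcript": text, **suggest_assets(text)})
    return doc

if __name__ == "__main__":
    print(json.dumps(build_edit_doc(25, ["intro", "demo", "outro"]), indent=2))
```

The key design choice is that the output is a plain document (JSON here) rather than a rendered video, which is what makes the editor hand-off, and eventually auto-rendering, possible.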