UX Planet

UX Planet is a one-stop resource for everything related to user experience.


From Idea to Video in 30 Seconds: Combining the Power of Claude.ai with Runway ML Gen-2

Generative AI tools have been getting more and more interesting lately, with an AI ‘arms race’ unfolding in the shadows of the tech world. In this article I explore how an AI assistant like Claude.ai can be combined with the text-to-video generator Runway ML Gen-2 to produce an abstract video from a text description, with minimal human input and in just a few minutes!

What is Runway? — www.runwayml.com/

Runway ML has many incredible features, but its most impressive and talked-about one is its ability to generate short video clips from just a text prompt. While the technology is still early (limited to 4-second videos and 320-character prompts), it gives us a fascinating glimpse into the future of video.

Creating the Prompt — www.claude.ai/

To come up with a compelling text prompt, I enlisted the help of Claude — an AI assistant that directly challenges OpenAI’s ChatGPT.

The first step was to ask Claude to craft a prompt that would capture the essence of a spiritual, abstract animation reminiscent of the surreal work of artist Hilma af Klint and of the artistic evocation of The Holy Mountain.

Claude suggested geometric shapes, fractals, and mandalas floating through space as the visual elements, representing the mystical and transcendental qualities we wanted to evoke. Claude also suggested grounding the imagery with occasional nature scenes to connect to worldly sensations, which I felt came through perfectly in the abstract style of the finished piece.

With Claude, the next step was to iterate on the wording and length of the prompt until we had a 320-character description that acted as the perfect script for the vision.
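Because Gen-2 caps prompts at 320 characters, every word counts during this iteration. As a rough illustration (a standalone helper sketch, not part of either tool — the function name and draft text here are my own), a few lines of Python can check a draft against the limit and, if needed, trim it back to the last whole word:

```python
MAX_PROMPT_CHARS = 320  # Runway ML Gen-2's prompt limit

def fit_prompt(prompt: str, limit: int = MAX_PROMPT_CHARS) -> str:
    """Return the prompt unchanged if it fits the limit,
    otherwise trim it back to the last whole word."""
    prompt = " ".join(prompt.split())  # collapse stray whitespace
    if len(prompt) <= limit:
        return prompt
    # Cut at the limit, then drop any partial trailing word.
    return prompt[:limit].rsplit(" ", 1)[0]

draft = ("Geometric shapes, fractals and mandalas drift through a "
         "luminous void, dissolving into brief glimpses of forests "
         "and oceans before returning to pure abstraction.")
assert len(fit_prompt(draft)) <= MAX_PROMPT_CHARS
```

A character count like this is a quick sanity check before pasting the prompt into Runway, so you aren't surprised by a silent cut-off mid-sentence.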

From Prompt to Video in < 30 Seconds

We then took this prompt over to the Runway ML platform. In the text-to-video generator, we configured the output settings to produce a 4-second, 1080p video, and within seconds received a clip that translated Runway’s interpretation of the prompt into a visual, moving piece.

The entire process, from idea to prompt to video, didn’t even take 5 minutes.

The completed AI-generated video encapsulated the essence of our prompt, with a psychedelic feel rooted not only in physical nature but in a spiritual one, utilising colours that expertly evoked these feelings. I keenly look forward to the day we can produce videos longer than 4 seconds!

As someone with limited skill in video generation or animation, I found the result interestingly close to how I had imagined the prompt might appear. Hopefully in the future we will see larger prompt limits, allowing more detailed descriptions that better encapsulate our ideas.

In Summary

The potential for AI to augment creativity is clear, with smarter assistants enabling us to better communicate our desires, and creative platforms like Runway ML bringing those ideas into reality.

There are a million ways we could go with AI generation, but it’s clear that we’re entering an exciting new era of human-machine collaboration, where our internal visions are manifested into digital reality faster than ever!

🎨 By becoming a Medium Member, you get unlimited article access and support creators like myself!

📧 Check out my UX email newsletter, with digestible knowledge every Wednesday.

🗃️ You can find all my links and resources here at my UX Masterlist.

If you enjoyed my content, follow me — it keeps me writing!



Written by Zacharia C.

Bringing my passion for Behavioural Design, Psychology & UX to influence better experiences — Support me: https://medium.com/@ZachariaCurtis/membership
