I’ve created a fun “Day in the life of a futurist” video using Luma’s text-to-video Dream Machine application.
Here’s how I did it:
- Asked ChatGPT (GPT-4) to create 10 text prompts, each describing a different scene.
- Fed these prompts into Luma, which generated 10 five-second clips.
- Had my fantastic video editor stitch them together, adding captions, music, and branding.
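Steps one and two were done by hand, but the hand-off between them is easy to script. Here’s a minimal sketch of turning ChatGPT’s reply into individual prompts ready to paste into Luma; the `split_scene_prompts` helper is hypothetical, and it assumes ChatGPT returns the scenes as a numbered list, one per line.

```python
import re

def split_scene_prompts(reply: str) -> list[str]:
    """Split a numbered list (e.g. '1. ...' or '2) ...') into individual prompts.

    Hypothetical helper: assumes each scene sits on its own line,
    prefixed by a number and '.' or ')'.
    """
    prompts = []
    for line in reply.splitlines():
        match = re.match(r"\s*\d+[.)]\s+(.*\S)", line)
        if match:
            prompts.append(match.group(1))
    return prompts

# Shortened, made-up example of a ChatGPT reply:
reply = """\
1. A futurist sips coffee while scanning holographic headlines.
2. She dictates notes to a hovering drone assistant.
3. A glass-walled office overlooks a city of vertical farms.
"""
print(split_scene_prompts(reply))  # → list of 3 clean scene descriptions
```

Each string in the resulting list can then be fed to Luma one at a time to generate its five-second clip.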
The clips are quite clichéd and they all look a bit like the film RoboCop (the 2014 remake). They also go by fast, so you might need to pause and rewatch to see how they were created. BUT – today is the worst this technology will ever be!
Here’s why Text-to-Video is worth paying attention to:
- It democratises video creation, making it accessible to anyone, not just professionals with expensive gear. Expect a surge of new creative content from diverse voices.
- It cuts the time and cost of video production, letting businesses, educators, and creators publish faster and more cheaply.
- As the technology improves, text-to-video AI will revolutionise industries like filmmaking, animation, and advertising by enabling rapid prototyping and content creation.
- It represents a leap in multimodal capabilities, pushing the boundaries of what machine learning can do with large datasets of images and videos.
- While not perfect, its ability to create semi-coherent videos from text shows real progress towards AI systems that can understand and generate multiple types of data.
- It raises important questions about the future of creativity, jobs, and intellectual property as AI starts to automate tasks that were once exclusively human.
I’m fascinated to see where the likes of Luma (and, hopefully on general release soon, Sora) go next.