SORA: OpenAI launches new instant text-to-video artificial intelligence tool

Prompt: Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snow capped mountains in the distance, mid afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field.
Credit: OpenAI

Words by Shalet Serrao, ITV News Assistant News Editor

OpenAI has launched a new text-to-video artificial intelligence (AI) model that generates videos of up to a minute in length from user prompts.

In a blog post announcing the release, the company says the model is able to “generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background.” 

The AI tech giant, known for language models like ChatGPT, is the first to create a model that can not only generate 60 seconds' worth of video from text and images, but also achieve a visual complexity that previous models, like Google's Lumiere, have not.

OpenAI also released several videos as examples of how the new technology could be used.

From California during the Gold Rush to a couple walking through snow-covered Tokyo, the possibilities seem endless.

Take the prompt “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.” 

OpenAI has launched a new text-to-video Artificial Intelligence model. Video credit: OpenAI

There are other examples that showcase Sora's flair for storytelling, using simple prompts to create vivid cinematic narratives. In one, Sora was asked to give audiences a tour of a zoo; the final product has multiple shot changes and complex panning motions that aren't stitched together.

While Sora is currently not available to the public, its release has prompted a fresh wave of concern about AI's role in mass disinformation through deepfakes.

Sora is currently only available to a select group of "red teamers" - researchers and creators who are testing the model for areas of "harm" or "risk".

This involves testing whether the model can be made to bypass OpenAI's terms of service, which forbid “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others," according to the blog.

Videos posted by OpenAI's Sam Altman today in response to user prompts included a watermark to show they were AI generated - whether paid users will be able to remove it remains to be seen.

Another concern for creators is copyright - the company has been sued in the past for alleged copyright infringement by its AI tools, like ChatGPT, which draw on online material.

In its blog post, OpenAI says it is involving "a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals."

OpenAI only recently decided to add visible and invisible watermarks to images generated by its AI text-to-image generator DALL-E 3.

While Sora can only generate minute-long clips and isn't yet a real threat to filmmakers, it's up to its creators to prevent a possible misinformation tsunami.
