How to Use Runway Gen-2 to Create AI Videos


By now, you have most likely already dabbled with Stable Diffusion or Midjourney to create some AI-generated images, or used ChatGPT to see what all the fuss is about. But spend just a little time on social media and you'll notice a popular trend that involves neither still images nor text responses.

The current hot thing in the world of machine learning is short movies, so let's jump into something called Runway Gen-2 and see how this latest model can be used to make short videos, and more besides.

How to get started

Runway is a young company, formed just three years ago by Cristobal Valenzuela, Anastasis Germanidis, and Alejandro Matamala while they were working at New York University. The team makes AI tools for media and content creation and manipulation, with Gen-2 being one of its newest and most interesting products.

Where Gen-1 required a source video from which to extrapolate a new movie clip, Gen-2 works entirely from a text prompt or a single still image.

To use it, head to the main website or download the app (iOS only) and create a free account. With this, you'll face quite a few restrictions on what you can do. For example, you start with 125 credits, and every second of Gen-2 video you create uses up 5 of them. You can't add more credits with a Free account, but it's more than enough to get an idea of whether you want to pay for a subscription.
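As a rough sanity check, the Free-tier numbers above work out like this (a plain Python sketch of the arithmetic, not anything from Runway itself):

```python
# Rough credit budget for a Free account, using the figures quoted above.
STARTING_CREDITS = 125
CREDITS_PER_SECOND = 5
CLIP_LENGTH_SECONDS = 4  # standard Gen-2 clip length

seconds_available = STARTING_CREDITS // CREDITS_PER_SECOND
clips_available = seconds_available // CLIP_LENGTH_SECONDS

print(seconds_available)  # 25 seconds of Gen-2 video in total
print(clips_available)    # 6 standard four-second clips
```

In other words, a fresh Free account is good for about six standard clips before the credits run dry.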

Once you have your account, you'll be taken to the Home page. The layout is very self-explanatory, with clearly indicated tutorials, a variety of tools, and the two main showcase features: text-to-video and image-to-video.

The first does for videos what Midjourney does for still images: it interprets a string of words, called a prompt, into a four-second video clip. And just as with image generators, the more descriptive you are, the better the end result will be.

You can even include a reference image to help the neural network focus on exactly what you're looking for, and before you commit to creating a full video, you'll be shown several preview still images. Simply click on the one you like and the system will start generating it.

Your first steps with RunwayML

Like the tradition of getting a computer to say 'Hello World!' in coding, we're starting off with the old AI favorite of 'an astronaut riding a horse'. It's a very short prompt and not very informative, so we shouldn't expect to see anything spectacular.

To enter this, click on the Text to Video image at the top of the Home page, and your browser will forward you to a text field. The prompt field has a limit of 320 characters, so you need to avoid being too verbose with your descriptions.

The furthest button on the bottom left offers generation options, depending on what type of account you have. The ability to upscale the video and remove the watermark isn't available to Free users. However, you can toggle the use of interpolation, which smooths out transitions between frames, and you can adjust the generation seed.

This particular value controls the starting values the neural network uses to generate the image. If you use the same prompt, seed, and other settings every time you run Gen-2 (or any AI image generator, for that matter), you'll always get the same end result. This means that even if you don't think a different prompt will achieve anything better, you've still got plenty of extra variants to explore simply by using a different seed.
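The seed's role can be illustrated with a toy stand-in for the generator. This is purely illustrative (the `toy_generator` function below is hypothetical and has nothing to do with Runway's actual model); it just shows why identical settings reproduce a result while a new seed produces a new variant:

```python
import random

def toy_generator(prompt: str, seed: int) -> list[float]:
    """Stand-in for a generative model: same prompt + seed -> same output."""
    # Seeding the RNG with the prompt and seed makes the run deterministic.
    rng = random.Random(f"{prompt}:{seed}")
    return [round(rng.random(), 3) for _ in range(4)]

a = toy_generator("an astronaut riding a horse", seed=42)
b = toy_generator("an astronaut riding a horse", seed=42)
c = toy_generator("an astronaut riding a horse", seed=43)

print(a == b)  # True  - identical settings reproduce the result exactly
print(a == c)  # False - changing only the seed gives a fresh variant
```

Real diffusion models work the same way in principle: the seed fixes the initial noise, so every other knob being equal, the output is repeatable.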

The Free Preview button forces Gen-2 to create the first frame of four iterations of the prompt. Using this command gave us the following images:

Granted, none of them look much like an astronaut, but the bottom left-hand image was quite cool and seemed to have the best potential to be animated, so we selected that one for the video generation. With all accounts, the request joins a queue of requests for Runway's servers to process, but the more expensive the subscription, the shorter the wait.

Although it was usually less than a couple of minutes for us, it could sometimes take as long as 15 minutes. Depending on where you live and what time of day you send the request in, it could take even longer. But what's the end result like? Considering how poor the prompt was, the generated video clip wasn't too bad.

You can see that the actual animation involves little more than shifting the position of the camera, which is a fairly common trait in how RunwayML creates video clips. The horse and astronaut barely move at all, but that's partly down to the prompt we used.

After each generation, you'll be given the option to rate the result. This is part of the machine learning feedback system, helping to improve the network for future use. And provided you have enough credits, you can always rerun the content creation, tweaking the prompt if necessary, to get a better output.

All content gets stored in your Assets folder, but you can also download anything you make. The standard video format RunwayML uses is MP4, and a four-second, 24 fps clip comes in at around 1.4 MB in size, with the bitrate being a fraction under 2.8 Mbps. You can extend a generated clip by a further four seconds, but this will use up 20 credits, so Free account users should avoid doing this as much as possible.
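Those figures are easy to verify with back-of-envelope arithmetic: bitrate = file size × 8 / duration. A quick Python check (just the math, not any Runway API):

```python
# Check the quoted figures: a ~1.4 MB clip lasting 4 seconds.
size_mb = 1.4     # file size in megabytes
duration_s = 4    # clip length in seconds

# Multiply by 8 to convert megabytes to megabits, then divide by duration.
bitrate_mbps = size_mb * 8 / duration_s
print(bitrate_mbps)  # 2.8 Mbps, matching the "fraction under 2.8 Mbps" above
```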

Getting better results in Gen-2

To make our first attempt look more like an actual astronaut, we used a more detailed prompt, one that directs the neural network to focus on the specific elements we want visible in the video: 'A highly realistic video of an astronaut, wearing a full spacesuit with helmet and oxygen pack, riding a galloping horse. The ground is covered in lush grass and there are mountains and forests in the background. The sun is low on the horizon.'

As you can see, the result is a touch better, but still not great: the rider doesn't look exactly like an astronaut, and the horse's legs appear to come from another universe. So we used the same prompt again, but this time included an image of an astronaut riding a horse (one we made using Stable Diffusion) to see how that would help.

There's an image icon at the side of the prompt field, and clicking it lets you upload a picture; or you can use the Image tab, just to the right of the Text tab. Note that if you use the separate Image to Video system, no prompt or preview is required.

The result this time contains a much better astronaut but was considerably worse in every other way. Shame about the horse!

Why were our results so poor?

Well, to begin with, when was the last time you saw an astronaut, wearing a full spacesuit, riding a horse? Telling RunwayML to create something realistic depends heavily on the neural network having been trained on enough material that accurately covers what you're looking for.

And then there's the prompt itself. Remember that you're effectively trying to make a very short film, so it's important to include terms relevant to cinematography. Adding "in the style of anime" will alter the look considerably, whereas directions such as "use sharp focus" or "strong depth of field" will produce a subtle but useful change.

Gen-2 places more weight on the prompt than on any image provided, but the latter works best when the picture itself is photorealistic, rather than a painting or a cartoon. We ditched the image of the astronaut and just spent time tweaking the description to achieve this beauty of cinema.

Prompt used: an astronaut in a spacesuit and full helmet on a white horse, riding away from the camera, galloping through a modern city, rich and cinematic style, realistic model, subject always in focus, strong depth of field, slow pan of camera, wide angle lens, vivid colors

It's worth remembering that previews are free and won't eat into your credits, so make good use of this feature to work through changes to your prompts, fine-tuning the generation process. Also, don't forget about the generation seed value: adjusting it by just one digit could give you a considerably better outcome.

As you can only use a single image with the prompt, you may need to experiment with a variety of pictures to get the look you're after. RunwayML has its own text-to-image generator (one image costs 5 credits), but it doesn't seem as powerful or feature-rich as Stable Diffusion or Midjourney.

Examples of what can be achieved

Just beneath the text-to-video interface is a collection of clips that Runway presents as examples for inspiration. Some just use a prompt, while others include a specific image. It's worth noting that every time you run a prompt through the system, the results will differ, so you're unlikely to get the same output when you try them.

The secret life of cells

Prompt: Cells dividing seen through a microscope, cinematic, microscopic, high detail, magnified, good exposure, subject in focus, dynamic movement, mesmerizing

Lost on a tropical island

Prompt: Aerial drone shot of a tropical beach in the style of cinematic video, shallow depth of field, subject in focus, dynamic movement

Future cityscape

Prompt: a futuristic looking cityscape with an environmentally conscious design, lush greenery, abundant trees, technology intersecting with nature in the year 2300 in the style of cinematic, cinematography, shallow depth of field, subject in focus, beautiful, filmic

Rolling thunder in the deep Midwest

Prompt: A thunderstorm in the American Midwest in the style of cinematography, beautiful composition, dynamic movement, shallow depth of field, subject in focus, filmic, 8k

The strongest worker/avenger

This is an example of Runway's image-to-video generator. No text prompt is used, just a single source image.

Now, you may have already seen Runway Gen-2 clips on social media or other streaming sites and may be thinking that they're of much higher quality than those shown above. It's worth noting that such creations typically involve multiple compositions, edited together and further improved via video software.

While this may seem like an unfair comparison, the following example (using multiple AI tools) shows just what can be achieved with time, determination, and no small degree of talent.

Impressive and somewhat unsettling in equal measure, the scope of AI video generation is only just being realized. Considering that Runway has been in operation for only a handful of years, predicting how things will look once a decade has passed would be like trying to guess how the latest video games look and play based on a few rounds of Pac-Man.

Getting more from Runway's AI tools

If you're a keen animator or have plenty of experience editing videos, you'll be pleased to see that Runway offers a host of AI tools for manipulating and expanding the basic content the neural network generates. These can be found on the Home page, and clicking the View More Tools option will show them all.

For all these tools, you don't have to use video generated by Gen-1 or Gen-2, as everything works on any material you can upload. However, machine learning is employed by the processes for, say, blurring faces or adding depth of field, so it may not give you absolutely perfect results, but the whole task will be far quicker than doing it all by hand.

If you're serious about exploring the world of AI video generation with Runway Gen-2, you'll want to consider choosing a subscription plan, and this is where things get quite expensive. Unfortunately, there's no way around this, as the cost of AI servers and video file hosting isn't cheap at all, so the more features and options you want, the more you'll be charged.

As of writing, the cheapest plan is $15 per month, but you get a decent amount of starting credits, larger asset hosting, and the ability to upscale videos and remove watermarks. At the professional end of the scale, prices climb a lot higher, but the fees are still cheaper than having to create your own AI neural networks and buy the necessary hardware to train them and run the algorithms.

Spending as little as an hour playing around with Runway Gen-2 shows just why AI content creation has become so popular: in a matter of minutes, anyone can produce image or video clips that could potentially hold up well against material created by hand.

Obviously this hasn't been lost on the media industry, and there are growing concerns about the impact AI could have on jobs and content authenticity, but for now, it's probably best to treat it as something of a fun novelty and just give it a go. You never know, you might just discover a talent for AI media generation and find yourself hooked!
