ByteDance’s next-gen AI model can generate clips based on text, images, audio, and video

ByteDance says its new AI video model can more accurately follow prompts. | Image: ByteDance

Big Tech’s race to leapfrog the latest AI models continues with the launch of ByteDance’s next-gen video generator. In a blog post, ByteDance – the China-based company behind TikTok – says Seedance 2.0 supports prompts that combine text, images, video, and audio.

The company claims it “delivers a substantial leap in generation quality,” with improvements in generating complex scenes with multiple subjects and in following instructions. Users can refine their text prompts by feeding Seedance 2.0 up to nine images, three video clips, and three audio clips.

The model can generate up to 15-second clips with audio, while taking cam …

Read the full story at The Verge.
