ByteDance's Jimeng platform is beta testing the next-generation AI video generation model Seedance 2.0.
On February 9th, it was announced that the Jimeng platform under ByteDance is testing its next-generation AI video generation model, Seedance 2.0. With innovations such as multimodal reference and unified generation and editing, the model has sparked extensive discussion across the AI industry. It reportedly supports the simultaneous upload of up to 12 reference files, including images, videos, and audio, and can faithfully reproduce camera-movement trajectories, action details, and musical atmosphere, generating a 15-second video in roughly 30 seconds. That is more than a tenfold speedup over the previous version, with a markedly lower rate of unusable output, and the industry has described it as “an efficiency revolution in AI video creation”.

The core functional breakthroughs known so far include: “one-click script to short drama”, where a user only needs to import a short-drama script and a reference image for Seedance 2.0 to generate logically coherent moving footage; “novel to short film in seconds”, which converts novel text directly into landscape short films; and “motion capture and combat optimization”.

