Runway, a US-based AI research company specializing in creative software, has launched Act-One, a tool that generates character animations from simple video and voice inputs.
Runway states that Act-One is designed to simplify the animation production process, providing an easier alternative to the complex and resource-heavy techniques typically used in facial animation.
Traditional animation workflows for realistic facial expressions require motion capture equipment, multiple video references, and detailed face rigging—steps that can be costly and time-consuming.
Act-One bypasses these requirements by allowing users to create animated characters directly from a video and voice recording, making it feasible to produce animations with a simple camera setup, according to an official Runway blog post.
The tool supports a range of character styles, from realistic portrayals to stylised designs. Act-One translates facial expressions and subtle movements—such as micro-expressions and eye-line adjustments—from actors onto different character designs, even if the character's proportions differ from the source footage. This capability enables new options in character design without the need for motion capture, as per the company.
Act-One also supports multi-character scenes, allowing a single actor to perform multiple roles. Runway adds that this feature, paired with the tool's high-fidelity output, may suit creators producing dialogue-focused videos without extensive production resources.
According to Runway, Act-One incorporates content moderation measures, including safeguards to prevent the unauthorised generation of public figures and technical checks to verify users' rights to any custom voices they create.
Act-One became available on October 22, 2024, with a phased rollout expected to expand access in the coming weeks, according to Runway's official announcement.
Source: The Daily Star
Bd-pratidin English/ Afia