DreamActor turns a single photo plus a driving video into a new clip. This review runs 5 tests and shows the raw outputs.
Model link: https://wiro.ai/models/bytedance/dreamactor
What this test covers
- 4 same-subject runs (reference image + matching driving video)
- 1 cross run (reference image from set #2 driven by motion from set #1)
- Inputs and outputs are hosted on WordPress (no external CDN links)
How DreamActor behaves in practice
DreamActor follows the motion in the driving video and keeps the look of the reference image. When it works, the face and clothing stay close to the reference while the body pose changes.
This test uses the sample inputs that ship with the DreamActor tool on Wiro. Each run uses one reference image and one driving video, then compares the output video against the source motion.
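The per-run setup can be sketched as a simple payload builder. This is a hypothetical sketch, not the actual Wiro API: the field names `inputImage` and `inputVideo` mirror the labels used in the tests below, and the `model` slug comes from the link above, but the real request shape may differ.

```python
# Hypothetical sketch of a DreamActor run payload. The real Wiro API
# fields may differ; "inputImage" and "inputVideo" simply mirror the
# labels used in this review.

def build_run_payload(reference_image_url: str, driving_video_url: str) -> dict:
    """Pair one reference image with one driving video, as each test below does."""
    return {
        "model": "bytedance/dreamactor",    # model slug from the link above
        "inputImage": reference_image_url,  # identity source: face, clothing
        "inputVideo": driving_video_url,    # motion source: pose over time
    }

# Test 5 ("cross drive") just mixes sets: reference from set #2,
# motion from set #1.
cross = build_run_payload("set2/reference.jpg", "set1/driving.mp4")
print(cross["inputImage"], cross["inputVideo"])
```

The same builder covers all five runs; only the two URLs change between tests.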
Test 1: Portrait motion transfer (set #1)
Reference image (inputImage)

Driving video (inputVideo)
Output video
Test 2: Full-body motion transfer (set #2)
Reference image (inputImage)

Driving video (inputVideo)
Output video
Test 3: Upper-body motion transfer (set #3)
Reference image (inputImage)

Driving video (inputVideo)
Output video
Test 4: Portrait motion transfer (set #4)
Reference image (inputImage)

Driving video (inputVideo)
Output video
Test 5: Cross drive (reference #2 + motion #1)
This run checks identity retention when the driving motion comes from a different set. An initial attempt returned a provider error; the output shown here is from the successful retry.
Reference image (inputImage)

Driving video (inputVideo)
Output video
Quick takeaways
- The sample runs tracked head motion and overall pose changes well.
- Full-body runs look best when the reference subject's scale matches the driving subject's.
- Cross drives can work, but results depend on how closely the body proportions match.