Everything is decided here. A bad shoot cannot be saved in post. Follow this exactly.
Mount on a tripod. Tighten every joint until nothing moves. Turn off all stabilization: IBIS Off, OIS Off, Electronic Stabilization Off. Even image stabilization at rest introduces micro-drift that will make your clean plate mismatch.
Go fully manual: M mode on your camera. Lock these four things:

- ISO — a fixed value, not Auto
- Shutter speed — fixed (see the table below)
- Aperture — fixed
- White balance — a fixed Kelvin value (e.g. 5600K), not "Auto"

If any of these shift between the action shot and the clean plate, your composite will have visible lighting mismatches.
| Setting | Value | Why |
|---|---|---|
| Frame Rate | 60fps or 120fps | More frames = better freeze moment to choose |
| Resolution | 1920×1080 minimum | Higher gives more room to reframe |
| Codec | H.264 or ProRes if available | Avoid over-compressed formats |
| Shutter Speed | 1/125s or faster | Freeze the flour in mid-air cleanly |
Hit record. Let it roll for 5 seconds before the actor enters frame. Have the subject throw the flour. Let the camera roll 5 seconds after. Do at least 5 takes. You need one perfect freeze frame where the flour is spread dramatically mid-air. More takes = better selection.
Do NOT move the camera. Do NOT change exposure. Ask the subject to leave the frame completely. Record 10 seconds of the empty kitchen. This is your "clean plate" — the background without the person. You will use one still frame from this as your 3D projection texture.
Also shoot one high-quality still photo of the same framing with your camera's photo mode at maximum resolution for fSpy. Use the same lens and zoom setting — the clean plate photo's lens distortion needs to match your video frame.
Common pitfall: moving the camera after the action shoot. Even bumping the tripod slightly makes the clean plate useless — it will never align with the action shot. Physically tape the tripod feet to the floor if needed. Treat the tripod as sacred until all clean plate footage is recorded.
Extract a single perfect freeze-frame and cut the subject out with a polygon mask. Export as transparent PNG.
Open DaVinci Resolve. Import your action footage (File → Import Media) and drag it onto a new timeline.
Scrub through the timeline by dragging the playhead. Find the frame where the flour is most dramatically spread. Use the ← → arrow keys to move one frame at a time for precision.
With your playhead on the perfect frame: right-click on the clip in the timeline, select "Change Clip Speed…", and enable "Freeze Frame".
Make sure the playhead is on your freeze-frame clip. Click the Fusion tab at the bottom of the screen. You'll see a simple node graph: MediaIn1 → MediaOut1.
Click once on the MediaIn1 node to select it (it should highlight yellow). Now press Shift + Space to open the tool search. Type Polygon and press Enter. A Polygon1 node appears.
Now connect it: click and drag from the output of Polygon1 to the mask input on MediaIn1. The mask input is the small triangle on the bottom of the node (not the main output arrow — the little triangle underneath). The connection is correct when the node border changes.
Click the Polygon1 node to select it. Look at the viewer above — you should see your frozen frame. In the Inspector panel on the right, make sure the Polygon's controls are active so you can draw directly in the viewer.
Now click point by point around the outline of your subject in the viewer to draw a polygon. Click around the person's silhouette: head, shoulders, arms, torso, and importantly trace around any flour particles you want to keep. Double-click the final point to close the shape.
Look at the result in the viewer. If the background is visible and the person is cut out (wrong), click the Polygon1 node. In the Inspector, find the Invert checkbox and toggle it so the subject is kept and the background is removed.
To see the alpha channel (transparency map): press A in the viewer. White = opaque (your subject). Black = transparent (background). Grey edges = soft transition. You want clean white on the person, clean black outside.
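Pressing A shows the alpha channel because alpha is literally the blend weight of the composite. A minimal sketch of the "over" operator (assuming straight, unpremultiplied alpha; the pixel values are hypothetical):

```python
# Per-pixel "over" compositing: alpha 1.0 = white in the alpha view
# (opaque subject), alpha 0.0 = black (transparent background),
# grey edges = partial blend.

def over(fg, alpha, bg):
    """Composite a foreground pixel over a background pixel."""
    return tuple(round(f * alpha + b * (1 - alpha)) for f, b in zip(fg, bg))

subject = (220, 210, 200)   # a flour-dusted pixel on the person
kitchen = (40, 60, 80)      # the background behind them

print(over(subject, 1.0, kitchen))  # fully opaque: the subject wins
print(over(subject, 0.0, kitchen))  # fully transparent: background wins
print(over(subject, 0.5, kitchen))  # a grey edge: 50/50 blend
```

This is why clean white on the person and clean black outside matters — any stray grey in the matte leaks background into your subject.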
Use the Soft Edge control in the Polygon's Inspector to feather the mask edge slightly so the cutout doesn't look razor-sharp. Press A again to return to the color view.
In the node graph, make sure MediaIn1 is connected to MediaOut1. Now go to the Deliver page to render the frame:

- Format: PNG
- Resolution: 1920×1080
- Export Alpha: checked

Click Add to Render Queue, then Render All. You get a .png file with transparency. Name it subject_cutout.png.
Common pitfall: forgetting to enable "Export Alpha." Without this, your PNG exports as a flat image with a black or white background — no transparency. Every compositing step that follows depends on a real alpha channel. Double-check this before rendering. Also, connect the Polygon to the mask triangle input, not the image input, or Fusion will throw an error.
Tell Blender exactly what lens the real camera used and where it stood, by matching perspective lines in fSpy.
Export one frame from your clean plate footage — use the same frozen frame time, but from the clean plate clip (no person). In DaVinci Resolve, park the playhead on that frame in the Color page, right-click the viewer → Grab Still, then right-click the still in the Gallery → Export, and save it as clean_plate_ref.png.
Launch fSpy. Open clean_plate_ref.png (drag the file into the window, or use File → Open). The image fills the fSpy window. You will see two sets of colored lines: the red lines are for the X-axis vanishing point, the blue lines are for the Y-axis vanishing point (or Z, depending on orientation).
On the right panel in fSpy:

- Set the focal length to your lens's value (e.g. 24mm, 35mm). Check your camera's recording metadata or EXIF data. If unknown, use 35mm as a default guess.
- Set the sensor size to match your camera (APS-C = 23.5 × 15.6mm, Full Frame = 36 × 24mm).

Look at your kitchen image. Find two real-world horizontal parallel lines that recede into the distance. Good examples: the top edge of a counter, the bottom edge of a cabinet, a tile grout line on the floor, the top of the backsplash.
Drag the two red line endpoints so each red line lies exactly along one of these parallel horizontal features. The lines must follow real geometry — drag each endpoint handle to snap to the edge of the counter/cabinet.
Where these two red lines intersect (the vanishing point) is computed automatically. You'll see a dot appear — that is the X-axis vanishing point.
Find two more real-world parallel lines, but this time going in a different direction — typically lines running toward the far wall perpendicularly. Cabinet sides, drawer fronts, tile edges running toward the back. Drag the two blue line endpoints to lie along these edges.
The fSpy solver updates in real-time. Watch the 3D origin axis overlay in the corner — it should look like it's sitting flat on your kitchen floor/countertop, not floating or tilted weirdly.
In the right panel, find the Reference distance setting. Enter a real measurement you know (for example, counter height = 0.9m) so the scene imports into Blender at the correct scale.
Save the project (File → Save As) as kitchen_match.fspy. This file stores the camera data. You will import it directly into Blender in the next phase.
Common pitfall: using lines that are not truly parallel in real life. Perspective lines must come from edges that are actually parallel in 3D space — two edges of the same counter, two rows of the same tile pattern. Using lines from different objects at different heights or angles gives fSpy wrong data, and your 3D camera will be angled incorrectly in Blender. If the axis overlay looks tilted, redo the lines more carefully.
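Under the hood, what fSpy recovers from each pair of lines is their intersection point in image space — the vanishing point. A small sketch of that computation, with hypothetical pixel coordinates for a counter edge and a cabinet edge:

```python
# Two parallel 3D edges converge in the image; their image-space
# intersection is the vanishing point fSpy solves the camera from.

def vanishing_point(l1, l2):
    """Intersect two lines, each defined by a pair of (x, y) endpoints."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        raise ValueError("lines are parallel in the image - no vanishing point")
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Counter top edge and cabinet bottom edge, both receding to the right:
counter = ((100, 700), (900, 620))
cabinet = ((100, 300), (900, 380))
print(vanishing_point(counter, cabinet))  # (2100.0, 500.0) - off-frame right
```

Note the result lands well outside the 1920-pixel frame — that's normal; vanishing points usually do.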
Build a minimal 3D kitchen proxy and project the photo onto it. Animate the sweeping camera orbit.
Download the fSpy Blender add-on from fspy.io — it is a .zip file. In Blender, go to Edit → Preferences → Add-ons → Install…, select the .zip, then tick the checkbox next to "Import-Export: fSpy importer" to enable it.
Open Blender. In the default scene, press A to select everything, then press X → Delete to clear the scene.
Go to File → Import → fSpy (.fspy), select kitchen_match.fspy, and confirm the import.
Two things will appear: a Camera that matches your real lens, and a Background Image that shows your clean plate photo locked to the camera view. Press Numpad 0 to enter Camera View — you should see your kitchen photo exactly as your camera saw it.
In the Output Properties tab, set the resolution to 1920×1080 and the frame rate to 60 fps, matching your footage.

Now add the floor: press Shift + A → Mesh → Plane. Press S → 3 then Enter to scale it to 3× size. Make sure it sits at floor level (Z = 0 — check the sidebar with N). This plane represents your kitchen floor.
Add more primitives for key geometry. For each new object: Shift + A → Mesh → Cube, then shape it:

- Counter: press S → Z → 0.45 so the 2m default cube becomes 0.9m tall (a ~90cm counter, since Blender's default unit is 1m). Press G → Z → 0.45 to lift its centre so the base sits on the floor. Press S → X → 2 to stretch it along the counter direction.
- Back wall: press R → X → 90 → Enter to stand a plane upright. Move it back: G → Y → drag to the back wall position.
- Side wall: the same again, rotated with R → Z → 90 → Enter.

You don't need perfection. These are projection surfaces — rough shapes that approximate the real geometry are sufficient.
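The counter numbers follow from one rule: Blender's default cube is 2m per side and centered at the origin, so scale each axis by half the real dimension and lift the centre by half the height. A small helper, with assumed counter dimensions:

```python
# Convert real-world box dimensions (metres) into the scale and Z
# location for a floor-standing proxy made from Blender's default cube.

DEFAULT_CUBE_SIZE = 2.0  # metres per side

def proxy_transform(width, depth, height):
    """Return (scale_xyz, z_location) for a floor-standing cube proxy."""
    scale = (width / DEFAULT_CUBE_SIZE,
             depth / DEFAULT_CUBE_SIZE,
             height / DEFAULT_CUBE_SIZE)
    z = height / 2.0  # centre at half the height -> base rests at Z = 0
    return scale, z

# A typical kitchen counter: 2.4m long, 0.6m deep, 0.9m tall
scale, z = proxy_transform(2.4, 0.6, 0.9)
print(scale, z)  # (1.2, 0.3, 0.45) 0.45
```

The exact dimensions are assumptions — measure your own kitchen, but eyeballed values are fine for projection surfaces.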
This projects the clean plate photo onto all geometry from the camera's point of view, creating a photorealistic-looking fake 3D kitchen.
Select all your proxy objects: A to select all, then click the fSpy-imported camera to deselect it (hold Shift + click). With all mesh objects selected:
In the Shading workspace, create one material and share it across all proxies. Add an Image Texture node (Shift + A → Texture → Image Texture) and open clean_plate_ref.png in it. Also add: Shift + A → Input → Texture Coordinate and Shift + A → Vector → Mapping. Connect: Texture Coordinate Window output → Mapping Vector input → Image Texture Vector input, then the Image Texture into the shader feeding Material Output Surface.

Then in the Mapping node leave the Type set to Point. This creates a camera-locked projection where every surface shows the kitchen photo exactly as the camera sees it.
First, set your timeline range. In the Timeline, set the Start frame to 1 and the End frame to 120 (2 seconds at 60fps = the full orbit arc).
Create an Empty as the orbit pivot: press Shift + A → Empty → Plain Axes. Move it to where the subject stands and raise it to roughly chest height, Z ≈ 1.6.
Parent the camera to the Empty: click the camera, then Shift + click the Empty (so the Empty is selected last). Press Ctrl + P → Object (Keep Transform).
Keyframe the orbit:

1. Go to frame 1 in the timeline. Click the Empty. Press I → Rotation to keyframe its starting rotation.
2. Go to frame 120. Press R → Z → type -100 → Enter (a 100° sweep). Press I → Rotation again.

Press Space to play back — the camera now orbits 100 degrees around the subject's position over 2 seconds.
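The two keyframes amount to a linear interpolation of the pivot's Z rotation, with the parented camera riding along a circle. A sketch of the math — the orbit radius and pivot height here are illustrative assumptions, not values from the tutorial:

```python
# Linear sweep from 0 deg at frame 1 to -100 deg at frame 120, and the
# resulting world position of a camera parented at a fixed radius.
import math

START_FRAME, END_FRAME = 1, 120
SWEEP_DEG = -100.0

def orbit_angle(frame):
    """Pivot Z rotation (degrees) at a given frame, linearly interpolated."""
    t = (frame - START_FRAME) / (END_FRAME - START_FRAME)
    return SWEEP_DEG * t

def camera_position(frame, pivot=(0.0, 0.0, 1.6), radius=3.0):
    """World position of a camera starting at -Y radius from the pivot."""
    a = math.radians(orbit_angle(frame))
    # rotate the rest offset (0, -radius) around the pivot's Z axis
    x = pivot[0] + radius * math.sin(a)
    y = pivot[1] - radius * math.cos(a)
    return (x, y, pivot[2])

print(orbit_angle(1))         # 0.0 - orbit starts
print(orbit_angle(120))       # -100.0 - orbit ends
print(camera_position(60.5))  # halfway through: -50 degrees around
```

(Blender eases keyframes by default; set the interpolation to Linear in the Graph Editor if you want exactly this behavior.)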
Click the Render Properties icon (the camera) in the Properties panel:

- Render Engine: EEVEE (fastest for this type of shot)
- Samples: 32 (sufficient for a photo-projection shot)
- Film → Transparent: checked

Then in Output Properties:

- Output path: /render/kitchen_
- File Format: PNG, Color: RGBA (the A = alpha)

Go to Render → Render Animation (Ctrl + F12). Blender writes the 120-frame PNG sequence with transparency.
Common pitfall: not enabling "Transparent" in Film settings before rendering. Without it, Blender fills the background with a solid grey/black. Your PNG sequence won't have an alpha channel and you can't composite through it. Also: after the camera orbit begins, the camera projection "slides" slightly because the texture is camera-locked — this is the desired parallax effect, so don't be alarmed when the kitchen photo appears to slightly swim as the camera moves. That is correct and realistic.
Stack the layers, sync timing, then add polish — motion blur, lens flash, and camera shake.
Open DaVinci Resolve. In the Media Pool, import three assets:

- your original action footage
- the Blender PNG render sequence (the orbit)
- subject_cutout.png (your transparent subject)

In the Edit page, stack them on three tracks, V1, V2, V3:

- V1: the Blender 3D kitchen sequence
- V2: the live action footage
- V3: subject_cutout.png. This is the frozen, masked subject that floats over the 3D background.

Your edit should flow like this:

- Frame 1 → X: normal-speed action footage on V2. Subject enters, throws flour.
- Frame X: the cut point. Action footage ends. Blender sequence begins on V1. Subject cutout PNG begins on V3.
- Frame X → X+120: Blender 3D kitchen orbits. Frozen subject on top.

Trim V2 so it ends exactly at frame X. Start V1 and V3 exactly at frame X. The subject on V3 should be scaled and positioned to match where the person was standing in the live footage — use Alt + F (Fit to Frame) and then nudge position with the Transform controls in the Inspector.
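The cut points reduce to simple frame arithmetic. A sketch with a hypothetical freeze frame of X = 300 (yours will be wherever the flour looks best):

```python
# Frame ranges for the three-track stack at 60fps, given the freeze
# frame X. ORBIT_FRAMES matches the 120-frame Blender render.

FPS = 60
ORBIT_FRAMES = 120

def edit_points(freeze_frame):
    """Start/end frames for each track, plus total runtime in seconds."""
    return {
        "action_v2": (1, freeze_frame),                        # live footage
        "orbit_v1": (freeze_frame, freeze_frame + ORBIT_FRAMES),
        "subject_v3": (freeze_frame, freeze_frame + ORBIT_FRAMES),
        "total_seconds": (freeze_frame + ORBIT_FRAMES) / FPS,
    }

points = edit_points(300)
print(points["orbit_v1"])       # (300, 420)
print(points["total_seconds"])  # 7.0
```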
Click the subject_cutout.png clip on V3 and open its Fusion page. If the cutout edge looks harsh against the 3D background, add a slight softening to the matte edge so the subject sits into the scene.
Click the Blender render sequence on V1 and open the Fusion tab. Click MediaIn1, press Shift + Space, type Motion Blur → add the motion blur node between MediaIn1 and MediaOut1.
In its Inspector, set Quality to 4 and Shutter Angle to 180. This smears the background slightly as the camera orbits, matching the motion blur look of real handheld camera moves.
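The 180-degree value isn't arbitrary: shutter angle is just the fraction of the frame interval during which the (virtual) shutter is open. A quick calculator:

```python
# Shutter angle -> exposure window. 360 degrees = the whole frame
# interval; 180 degrees = half of it, the classic "filmic" smear.

def exposure_time(shutter_angle_deg, fps):
    """Seconds of exposure per frame for a given shutter angle."""
    return (shutter_angle_deg / 360.0) / fps

def blur_fraction(shutter_angle_deg):
    """Fraction of one frame's motion that gets smeared into the image."""
    return shutter_angle_deg / 360.0

print(exposure_time(180, 60))  # 1/120 of a second per frame
print(blur_fraction(180))      # 0.5 - half a frame of smear
```

Smaller angles look choppier and more "frozen"; larger angles look dreamier. 180 is the safe default.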
In the Edit page, add a fourth video track, V4. Create a new Solid Color generator (Effects → Generators → Solid Color) and place it on V4 starting at frame X (the freeze point), lasting 6–10 frames.

Set the color to pure white (#FFFFFF). In the clip Inspector, set the composite mode to Add. Keyframe the opacity: at frame X set opacity to 80. At frame X+3 set opacity to 0. This creates a fast white flash that sells the "freeze" transition as if the camera just fired a strobe.
Select all the clips, right-click → New Compound Clip, and name it Final_Composite.
Click this compound clip. Go to its Fusion page and add a Transform node between MediaIn1 and MediaOut1. Right-click the Center X parameter → Animate, then keyframe tiny offsets: +0.003 at frame 1, -0.002 at frame 3, +0.001 at frame 5, and so on in a random pattern. Keep values under 0.005 for subtlety. Repeat for Center Y. This adds an organic handheld camera feel.
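Rather than inventing each offset by hand, the shake values can be generated. A sketch — the seed and two-frame spacing are arbitrary choices of this example, not from the tutorial:

```python
# Generate small pseudo-random Center X / Center Y offsets, one keyframe
# every two frames, clamped within +/-0.005 for subtlety.
import random

def shake_keyframes(n_frames, max_offset=0.005, step=2, seed=7):
    """Return (frame, offset) pairs for one transform axis."""
    rng = random.Random(seed)  # seeded so the shake is repeatable
    return [(f, rng.uniform(-max_offset, max_offset))
            for f in range(1, n_frames + 1, step)]

for frame, dx in shake_keyframes(9):
    print(frame, round(dx, 4))
```

Generate one list for Center X and a second (different seed) for Center Y so the two axes don't move in lockstep.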
Go to the Deliver page and set up the export:

- Format: MP4
- Codec: H.264
- Resolution: 1920×1080
- Quality: Automatic, or a manual bitrate around 20,000 kbps

Click Add to Render Queue, then Render All.
Common pitfall: forgetting to match the subject's position and scale. The frozen subject PNG was exported at full 1920×1080 resolution — it is already the right size and position. Do not scale it unless you are correcting a slight offset. The most common error is accidentally scaling the subject PNG so the person appears to float above the floor in the 3D kitchen. Check at multiple points during the orbit that the subject's feet align with the projected floor surface.