1. Building an OpenCV Camera Tracker
The OpenCV-based tracker detects salient features with SIFT and follows their motion across frames using Lucas–Kanade optical flow.
From the resulting point correspondences, the essential matrix is estimated using the camera intrinsic matrix and decomposed to recover
the per-frame camera pose; as is inherent to monocular tracking, translation is recovered only up to an unknown scale. Rotation and
translation are integrated over time to obtain trajectories in $x$, $y$, and $z$, and Euler angles describing roll, pitch, and yaw are
extracted for each frame, forming a camera path that can be imported into Unreal Engine.
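A minimal sketch of this pipeline is shown below, assuming a calibrated pinhole camera; the intrinsic matrix `K`, the video filename, and the RANSAC parameters are placeholders rather than the study's exact settings.

```python
import cv2
import numpy as np

# Assumed intrinsics for the 4K footage; the focal length is a placeholder
# and should come from calibration in practice.
K = np.array([[2666.0,    0.0, 1920.0],
              [   0.0, 2666.0, 1080.0],
              [   0.0,    0.0,    1.0]])

cap = cv2.VideoCapture("walk_01.mp4")  # placeholder filename
sift = cv2.SIFT_create()

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts_prev = np.float32([k.pt for k in sift.detect(prev_gray, None)]).reshape(-1, 1, 2)

R_total = np.eye(3)           # accumulated rotation
t_total = np.zeros((3, 1))    # accumulated translation (scale is arbitrary)
path = []                     # per-frame (x, y, z, roll, pitch, yaw)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the previous frame's SIFT keypoints with Lucas-Kanade optical flow
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_next = pts_next[status.ravel() == 1]

    # Estimate the essential matrix from the correspondences, then decompose
    # it into the relative rotation and (unit-scale) translation
    E, mask = cv2.findEssentialMat(good_next, good_prev, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    if E is None or E.shape != (3, 3):
        continue
    _, R, t, _ = cv2.recoverPose(E, good_next, good_prev, K, mask=mask)

    # Integrate the per-frame motion into a global pose, then pull out
    # Euler angles (roll, pitch, yaw, in degrees) via RQ decomposition
    t_total = t_total + R_total @ t
    R_total = R @ R_total
    angles, *_ = cv2.RQDecomp3x3(R_total)
    path.append((*t_total.ravel(), *angles))

    prev_gray = gray
    pts_prev = np.float32([k.pt for k in sift.detect(gray, None)]).reshape(-1, 1, 2)

cap.release()
```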
2. Experimental Video Dataset
Five 10‑second videos of a hand‑held walk along a path were recorded at 4K resolution (3840 × 2160). Degraded variants of each video were
then produced by adding Gaussian noise to 25 %, 50 %, and 75 % of pixels and by re‑encoding at lower resolutions (1080p, 720p, 480p). These
variants allowed a systematic study of how noise and resolution influence tracking performance for each system.
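The degradation step can be sketched as follows, under the assumption that "Gaussian noise at p % of pixels" means perturbing a random subset of pixels with zero‑mean Gaussian noise; the noise standard deviation shown here is a guessed value, not the study's.

```python
import cv2
import numpy as np

def add_gaussian_noise(frame, fraction, sigma=25.0, rng=None):
    """Add zero-mean Gaussian noise to a random `fraction` of pixels."""
    rng = rng or np.random.default_rng()
    noisy = frame.astype(np.float32)
    mask = rng.random(frame.shape[:2]) < fraction        # pick the affected pixels
    noise = rng.normal(0.0, sigma, frame.shape).astype(np.float32)
    noisy[mask] += noise[mask]
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Target resolutions for the lower-quality versions
RESOLUTIONS = {"1080p": (1920, 1080), "720p": (1280, 720), "480p": (854, 480)}

def downscale(frame, name):
    return cv2.resize(frame, RESOLUTIONS[name], interpolation=cv2.INTER_AREA)
```

Applied frame by frame with `fraction` set to 0.25, 0.5, and 0.75, this yields the noise and resolution variants of each clip.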
3. Comparative Tracking Systems
In addition to the OpenCV tracker, the study uses:
- Blender – open-source 3D software whose camera tracking tools are based on feature detection and optical flow.
- Adobe After Effects – proprietary compositing and tracking tool widely used in industry.
All three systems estimate camera motion, and the resulting tracks are evaluated quantitatively (mean per‑frame error and its standard deviation)
and qualitatively by inspection in Unreal Engine.
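A sketch of the quantitative comparison, assuming each tracker's positions and a reference path are available as temporally aligned (N, 3) arrays; trajectory alignment and scale normalisation are left out.

```python
import numpy as np

def track_error(estimated, reference):
    """Per-frame Euclidean positional error, with its mean and standard deviation."""
    per_frame = np.linalg.norm(np.asarray(estimated) - np.asarray(reference), axis=1)
    return per_frame.mean(), per_frame.std(), per_frame
```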
4. Integration into Unreal Engine
Camera paths extracted from each tracking system are transformed into Unreal Engine's left‑handed, Z‑up coordinate system. The Unreal Python API
is then used to create camera actors and import the coordinates as keyframes. Virtual scenes are constructed to match the physical environment,
enabling side‑by‑side comparison between the video footage and virtual renders.
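A sketch of the import step using Unreal's editor Python scripting, to be run inside the editor. The axis mapping, the unit conversion, the channel ordering on the transform section, and the `camera_path` placeholder are all assumptions that depend on the tracker's conventions and the engine version (`EditorLevelLibrary` is the UE4‑era entry point and still functions, though newer releases prefer editor subsystems).

```python
import unreal

# Placeholder: per-frame (x, y, z, roll, pitch, yaw) exported by a tracker
camera_path = [(0.0, 0.0, 0.0, 0.0, 0.0, 0.0)]

def to_unreal_location(x, y, z):
    # Assumed mapping from a right-handed, y-up, z-forward frame (metres)
    # to Unreal's left-handed, Z-up frame (centimetres)
    return unreal.Vector(z * 100.0, x * 100.0, y * 100.0)

# Spawn a cine camera in the current level
camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
    unreal.CineCameraActor, unreal.Vector(0, 0, 0), unreal.Rotator(0, 0, 0))

# Create a level sequence, bind the camera, and add a transform track
tools = unreal.AssetToolsHelpers.get_asset_tools()
seq = tools.create_asset("TrackedCamera", "/Game/Tracks",
                         unreal.LevelSequence, unreal.LevelSequenceFactoryNew())
binding = seq.add_possessable(camera)
track = binding.add_track(unreal.MovieScene3DTransformTrack)
section = track.add_section()
section.set_range(0, len(camera_path))

# A default transform section exposes channels in the order
# location x/y/z, rotation x/y/z (roll, pitch, yaw), scale x/y/z
channels = section.get_all_channels()
for frame, (x, y, z, roll, pitch, yaw) in enumerate(camera_path):
    loc = to_unreal_location(x, y, z)
    for channel, value in zip(channels[:6], (loc.x, loc.y, loc.z, roll, pitch, yaw)):
        channel.add_key(unreal.FrameNumber(frame), value)
```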