## Terminology
### Capture Volume
The 3-dimensional region (volume) of space with sufficient camera coverage to support 3D tracking.
### Calibration
The process of determining where each camera sits in 3D space (and its lens characteristics) so that 2D views from multiple cameras can be combined into 3D data.
Link to a section of the 'braindump' video discussing capture volume calibration
### Charuco Board
A printed calibration board that combines a checkerboard pattern with embedded ArUco markers; detecting it in each camera's view is what drives the capture volume calibration.
Link to a section of the 'braindump' video discussing capture volume calibration
### Mediapipe
Google's landmark-tracking library. FreeMoCap uses the Holistic solution (body pose, hands, and face) to detect the 2D skeleton in each video.
https://google.github.io/mediapipe/solutions/holistic
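As a quick illustration of what the MediaPipe Holistic tracker produces, here is a minimal sketch using the legacy `mediapipe.solutions` API documented at the link above. The video path is a placeholder, and this is not FreeMoCap's own tracking code.

```python
# Minimal sketch: run MediaPipe Holistic on one video and read out 2D landmarks.
# The video path and landmark handling are illustrative placeholders.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

cap = cv2.VideoCapture("SynchedVideos/camera_0.mp4")  # hypothetical path
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while True:
        ok, frame_bgr = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images; OpenCV reads BGR
        results = holistic.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Each landmark has normalized x, y (0-1) plus z and a visibility score
            nose = results.pose_landmarks.landmark[mp_holistic.PoseLandmark.NOSE]
            print(f"nose: x={nose.x:.3f}, y={nose.y:.3f}, visibility={nose.visibility:.2f}")
cap.release()
```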
## Processing Stages
This is a brief description of each of the processing 'stages' needed to turn recordings from a bunch of USB webcams into 3D kinematic (i.e. mocap) data of a human subject!
Some parts refer to the folder and function names of the pre-alpha
version of the code, but the concepts are mostly the same in the alpha
version.
- Stage 1 - Record Videos
    - Record raw videos from the attached USB webcams, along with a timestamp for each frame
    - Raw videos saved to `FreeMoCap_Data/[Session Folder]/RawVideos`
- Stage 2 - Synchronize Videos
    - Use the recorded timestamps to re-save the raw videos as synchronized videos (same start and end, same number of frames). See the synchronization sketch after this list.
    - Synchronized videos saved to `FreeMoCap_Data/[Session Folder]/SynchedVideos`
- Stage 3 - Calibrate Capture Volume
    - Use Anipose's Charuco-based calibration method to determine the location of each camera during a recording session and calibrate the capture volume (see the calibration sketch after this list)
    - Calibration info saved to `[sessionID]_calibration.toml` and `[sessionID]_calibration.pickle`
- Stage 4 - Track 2D points in videos and Reconstruct 3D <- This is where the magic happens ✨
    - Apply user-specified tracking algorithms to the synchronized videos (currently supporting MediaPipe, OpenPose, and DeepLabCut) to generate 2D data
        - Saved to the `FreeMoCap_Data/[Session Folder]/DataArrays/` folder (e.g. `mediaPipeData_2d.npy`)
    - Combine the 2D data from each camera with the calibration data from Stage 3 to reconstruct the 3D trajectory of each tracked point (see the triangulation sketch after this list)
        - Saved to the `/DataArrays` folder (e.g. `openPoseSkel_3d.npy`)
    - NOTE - you might think it would make sense to separate the 2D tracking and 3D reconstruction into different stages, but the way the code is currently set up it's cleaner to combine them into the same processing stage ¯\_(ツ)_/¯
- Stage 5 - Use Blender to generate output data files (optional; requires Blender to be installed. Set `freemocap.RunMe(useBlender=True)` to use)
    - Hijack a user-installed version of Blender to format the raw mocap data into a `.blend` file, including the raw data as keyframed empties with a (sloppy, inexpertly) rigged and meshed armature based on the Rigify Human Metarig (see the Blender sketch after this list)
    - Save the `.blend` file to `[Session_Folder]/[Session_ID]/[Session_ID].blend`
    - You can double-click that `.blend` file to open it in Blender
    - For instructions on how to navigate a Blender scene, try this YouTube Tutorial
- Stage 6 - Save Skeleton Animation!
    - Create a Matplotlib-based output animation video (see the animation sketch after this list)
    - Saves the animation video to `[Session Folder]/[SessionID]_animVid.mp4`
    - Note - This part takes for-EVER 😅
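The sketches below illustrate a few of these stages in heavily simplified code. First, the idea behind Stages 1 and 2: each camera records a timestamp for every frame, and synchronization matches frames across cameras by timestamp. The array layout and the nearest-timestamp matching rule here are illustrative assumptions, not FreeMoCap's exact implementation.

```python
# Simplified sketch of timestamp-based synchronization across cameras.
import numpy as np

def synchronize(timestamps_per_camera):
    """timestamps_per_camera: one 1D array of per-frame timestamps (seconds) per camera."""
    # Cameras start and stop at slightly different times, so only keep the
    # time window that every camera covers.
    start = max(ts[0] for ts in timestamps_per_camera)
    end = min(ts[-1] for ts in timestamps_per_camera)

    # Use the camera with the fewest frames inside that window as the reference clock.
    reference = min(
        (ts[(ts >= start) & (ts <= end)] for ts in timestamps_per_camera),
        key=len,
    )

    # For each camera, pick the frame whose timestamp is closest to each reference tick,
    # so every camera ends up with the same number of (approximately simultaneous) frames.
    return [
        np.abs(ts[None, :] - reference[:, None]).argmin(axis=1)
        for ts in timestamps_per_camera
    ]

# Toy example: three 30 fps cameras whose clocks are offset by a few milliseconds
camera_timestamps = [np.arange(0, 10, 1 / 30) + offset for offset in (0.000, 0.013, 0.021)]
matched_indices = synchronize(camera_timestamps)
print([len(frame_indices) for frame_indices in matched_indices])  # identical frame counts
```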
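Stage 3's calibration is handled by aniposelib (the library behind Anipose). The sketch below is adapted from the aniposelib tutorial; the board dimensions, camera names, and video paths are placeholders that you would replace with your own printed Charuco board's measurements and the videos from your session's SynchedVideos folder.

```python
# Sketch of Charuco-based multi-camera calibration with aniposelib.
from aniposelib.boards import CharucoBoard
from aniposelib.cameras import CameraGroup

board = CharucoBoard(
    7, 5,                 # number of squares along x and y
    square_length=6.0,    # physical size of a checker square (e.g. cm)
    marker_length=4.5,    # physical size of an embedded aruco marker (e.g. cm)
    marker_bits=4,
    dict_size=50,
)

camera_names = ["Cam0", "Cam1", "Cam2"]
calibration_videos = [                      # one list of videos per camera
    ["SynchedVideos/Cam0_synced.mp4"],
    ["SynchedVideos/Cam1_synced.mp4"],
    ["SynchedVideos/Cam2_synced.mp4"],
]

# Detect the board in every video, then solve for each camera's intrinsics/extrinsics
camera_group = CameraGroup.from_names(camera_names, fisheye=False)
camera_group.calibrate_videos(calibration_videos, board)

# Comparable to the [sessionID]_calibration.toml that FreeMoCap writes out
camera_group.dump("sessionID_calibration.toml")
```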
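For Stage 4's reconstruction half, the calibrated camera group can triangulate each tracked point from its 2D pixel locations across cameras. This is a hypothetical sketch: the `(n_cameras, n_frames, n_points, 2)` array layout and the file names are assumptions made for illustration, not the exact arrays FreeMoCap writes.

```python
# Sketch: combine per-camera 2D tracks with the Stage 3 calibration to get 3D points.
import numpy as np
from aniposelib.cameras import CameraGroup

camera_group = CameraGroup.load("sessionID_calibration.toml")

# Assumed layout: pixel coordinates from each camera's 2D tracker
data_2d = np.load("DataArrays/mediaPipeData_2d.npy")  # (n_cams, n_frames, n_points, 2)
n_cams, n_frames, n_points, _ = data_2d.shape

# aniposelib triangulates (n_cams, N, 2) point sets, so flatten frames x points
flat_2d = data_2d.reshape(n_cams, n_frames * n_points, 2)
flat_3d = camera_group.triangulate(flat_2d, progress=True)

data_3d = flat_3d.reshape(n_frames, n_points, 3)
np.save("DataArrays/mediaPipeSkel_3d.npy", data_3d)  # illustrative output name
```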
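Stage 5's "keyframed empties" step looks roughly like the following when done by hand with Blender's `bpy` API (run with Blender's bundled Python, e.g. `blender --background --python script.py`). The file paths, unit scaling, and naming are placeholders, and the Rigify-based armature that FreeMoCap also builds is omitted here.

```python
# Toy version of Stage 5: one empty per tracked point, keyframed on every frame.
import bpy
import numpy as np

data_3d = np.load("DataArrays/mediaPipeSkel_3d.npy")  # (n_frames, n_points, 3), assumed mm
n_frames, n_points, _ = data_3d.shape

for point_index in range(n_points):
    # An "empty" is a Blender object with no mesh data (object data = None)
    empty = bpy.data.objects.new(f"trackedPoint_{point_index:03d}", None)
    bpy.context.scene.collection.objects.link(empty)
    for frame_index in range(n_frames):
        empty.location = tuple(data_3d[frame_index, point_index] / 1000.0)  # mm -> m (assumed)
        empty.keyframe_insert(data_path="location", frame=frame_index)

bpy.context.scene.frame_end = n_frames
bpy.ops.wm.save_as_mainfile(filepath="Session_ID.blend")  # placeholder output path
```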
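Stage 6 boils down to a Matplotlib `FuncAnimation` that redraws the 3D skeleton on every frame and encodes the result to mp4, which is a big part of why it is so slow. A stripped-down sketch, with placeholder file names and an assumed 30 fps frame rate (saving to mp4 requires ffmpeg):

```python
# Sketch of a Matplotlib-based skeleton animation video.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

data_3d = np.load("DataArrays/mediaPipeSkel_3d.npy")  # (n_frames, n_points, 3)
mins = np.nanmin(data_3d, axis=(0, 1))
maxs = np.nanmax(data_3d, axis=(0, 1))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

def draw_frame(frame_index):
    # Redrawing the whole scene every frame is simple but slow -- hence "for-EVER"
    ax.cla()
    ax.set(xlim=(mins[0], maxs[0]), ylim=(mins[1], maxs[1]), zlim=(mins[2], maxs[2]))
    points = data_3d[frame_index]
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=10)

anim = FuncAnimation(fig, draw_frame, frames=len(data_3d), interval=1000 / 30)
anim.save("sessionID_animVid.mp4", fps=30)  # needs ffmpeg on the PATH
```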
## Reprojection Error
"Reprojection error" is the distance (in pixels?) between the originally measured point (i.e. the 2d skeleton) and the reconstructed 3d point reprojected back onto the image plane.
The intuition is that if the 3d reconstruction and original 2d track are perfect, then reprojection error will be Zero. If it isn't, then there is some inaccurate in either:
- the original 2d tracks (i.e. bad skeleton detection in one or more cameras),
- in the 3d reconstruction (i.e. bad camera calibration),
- a combination of the two
Click here to follow a conversation about reprojection error on Discord
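As a toy numerical example (with a made-up pinhole camera matrix and points, not FreeMoCap's actual calibration data), reprojecting a triangulated 3D point through one camera and comparing it to where the tracker originally saw that joint gives that camera's reprojection error in pixels:

```python
# Toy reprojection-error calculation for a single camera and a single point.
import numpy as np

def reproject(P, point_3d):
    """Project a 3D point back onto a camera's image plane (pixel coordinates)."""
    homogeneous = P @ np.append(point_3d, 1.0)
    return homogeneous[:2] / homogeneous[2]

# Made-up camera: focal length 1000 px, principal point at (640, 360), identity pose
P = np.array([[1000.0,    0.0, 640.0, 0.0],
              [   0.0, 1000.0, 360.0, 0.0],
              [   0.0,    0.0,   1.0, 0.0]])

measured_2d = np.array([693.0, 408.5])        # where the 2D tracker saw the joint
reconstructed_3d = np.array([0.1, 0.1, 2.0])  # triangulated 3D estimate (meters)

reprojected_2d = reproject(P, reconstructed_3d)        # -> [690, 410]
error_px = np.linalg.norm(measured_2d - reprojected_2d)
print(f"reprojection error = {error_px:.1f} pixels")   # ~3.4 px for this toy case
```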