This wiki is intended for older versions of Motive; please refer to the current documentation for the latest version.

Reconstruction and 2D Mode


Page Scope

This page explains how 3D coordinates are derived from captured camera images through the reconstruction process. A clear understanding of this process will allow you to fully utilize Motive for analyzing and optimizing captured 3D tracking data. Once the basic concept of the reconstruction process has been covered, we will go over some tips and instructions on how to inspect and optimize the configurations for best tracking results.

Reconstruction: Basic Concept

Reconstruction in motion capture is the process of deriving 3D points from 2D coordinates obtained from captured camera images, and the Point Cloud Reconstruction Engine is the core engine that runs this process. When multiple synchronized images are captured, the 2D centroid locations of detected marker reflections are triangulated frame-by-frame to obtain their respective 3D positions within the calibrated capture volume.
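Conceptually, the triangulation step can be viewed as a least-squares intersection of camera rays. The sketch below illustrates that idea only; `triangulate` and `solve3` are hypothetical helpers, not part of Motive or its Point Cloud engine:

```python
import math

def triangulate(rays):
    """Least-squares intersection point of 3D rays.

    Each ray is an (origin, direction) pair; directions need not be unit
    length. Solves sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i,
    which minimizes the sum of squared perpendicular distances to the rays.
    """
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d in rays:
        n = math.sqrt(sum(c * c for c in d))
        d = [c / n for c in d]  # normalize the ray direction
        for r in range(3):
            for c in range(3):
                m = (1.0 if r == c else 0.0) - d[r] * d[c]
                A[r][c] += m
                b[r] += m * o[c]
    return solve3(A, b)

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x
```

With two rays that intersect exactly, the solver recovers the intersection point; with noisy real-world rays, it returns the point closest to all of them, which is where the residual distance discussed later comes in.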

The Reconstruction Settings contain all of the settings and filter options for the reconstruction engine, and you can modify the parameters to optimize the quality of reconstructions. Real-time reconstruction settings are configured from the Live-Reconstruction tab under the Application Settings, and the post-processing reconstruction settings for recorded Takes are configured under corresponding Take properties.

Note that optimal configurations may vary depending on the capture application and environmental conditions. For most common applications, the default settings should work fine, but understanding these settings will still allow you to fully utilize the 3D reconstruction capabilities in Motive. On this page, we will focus on the Reconstruction Settings, the 2D Filter settings, and the Camera Settings, which are the key settings that have a direct effect on the reconstruction outcome.

3D markers reconstructed from captured 2D images.

Camera Settings

Camera settings can be configured under the Devices pane. In general, the overall quality of 3D reconstructions is affected by the quality of captured camera images. Thus, the camera settings, such as camera exposure and IR intensity values, must always be optimized for capturing clear images of tracked markers. The following sections highlight some of the settings that are directly related to 3D reconstruction.

Enable Reconstruction

  • Under the Live-Reconstruction tab in the Application Settings, the box next to Enable Point Cloud Reconstruction must be checked to enable real-time 3D reconstruction.
  • Tracking mode vs. Reference mode: Only the cameras that are configured in the tracking mode (Object or Precision) will contribute to reconstructions. Cameras in the reference mode (MJPEG or Grayscale) will NOT contribute to reconstructions. See Camera Video Types page for more information.
  • If you wish to omit certain cameras from contributing to 3D reconstructions, disable Reconstruction on those cameras from the Devices pane. These cameras will still record the captured 2D frames into the Take file, but their data will not contribute to the real-time reconstruction. This reduces the amount of data to be processed in real-time, which can be beneficial for high-camera-count setups where there is a lot of data to process in Live mode. You will still be able to utilize the 2D frames using the post-processing reconstruction pipeline.
  • Real-time reconstruction enabled under the application settings.
  • One of the cameras is disabled from contributing to real-time reconstruction.

THR Setting

The THR setting is located in the camera properties in Motive. When cameras are set to a tracking mode, only the pixels with brightness values greater than the configured threshold are captured and processed. The pixels brighter than the threshold are referred to as thresholded pixels, and all other pixels that do not meet the brightness threshold are filtered out. Only the clusters of thresholded pixels are then passed through the 2D Object Filter to be potentially considered as marker reflections.
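The thresholding-and-clustering step can be illustrated with a short sketch. `thresholded_clusters` below is a hypothetical helper, not Motive code; it groups pixels brighter than the THR value into connected clusters, which are the candidates that the 2D Object Filter then examines:

```python
def thresholded_clusters(image, thr=200):
    """Group pixels brighter than `thr` into 4-connected clusters.

    `image` is a 2D list of brightness values (0-255). Returns a list
    of clusters, each a list of (row, col) pixel coordinates. The default
    thr=200 mirrors the documented default THR value.
    """
    rows, cols = len(image), len(image[0])
    seen, clusters = set(), []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] > thr and (r, c) not in seen:
                # flood-fill one cluster of thresholded pixels
                stack, cluster = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    cluster.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] > thr
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                clusters.append(cluster)
    return clusters
```

Lowering `thr` makes dimmer pixels pass the filter, which is exactly how a too-low THR setting lets stray reflections through as spurious clusters.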


We do not recommend lowering the THR value (default: 200) for the cameras, since lowering the THR setting can introduce false reconstructions and noise into the data.


To inspect brightness values of the pixels, set the Pixel Inspection to true under the View tab in the Application Settings.

  • Analyzing pixel brightness values using the pixel inspector
  • THR setting under camera properties

Reconstruction Settings

The Reconstruction Settings control the Point Cloud reconstruction in Motive. When a camera system captures multiple synchronized 2D frames, the images are processed through two main filter stages before getting reconstructed into 3D data. The first filter is applied at the camera hardware level and the other at the software level, and both of them are important in deciding which 2D reflections get identified as marker reflections and reconstructed into 3D data. Adjust these settings to optimize 3D data acquisition in both live reconstruction and post-processing reconstruction of captured data.

2D Object Filter


The 2D Object Filter has been migrated over to the Cameras tab under the Application Settings in Motive 2.1.

2D Filter section of the cameras tab in the Application Settings.

When an image frame is captured by a camera, the 2D Object Filter is applied. Based on the sizes and shapes of the detected reflections, this filter determines which of them can be accepted as marker reflections. Parameters for the 2D Object Filter are configured in the Application Settings under the Cameras tab. Please note that the 2D Object Filter settings can be configured in Live mode only. This filter is applied at the hardware level when the 2D frames are first captured; thus, you will not be able to modify these settings on a recorded Take.

Min/Max Thresholded Pixels

The Min/Max Thresholded Pixels settings determine lower and upper boundaries of the size filter. Only reflections with pixel counts within the boundaries will be considered as marker reflections, and any other reflections below or above the defined boundary will be filtered out. Thus, it is important to assign appropriate values to the minimum and maximum thresholded pixel settings.

For example, in a close-up capture application, marker reflections appear bigger in the camera's view. In this case, you may want to raise the maximum threshold value so that reflections with more thresholded pixels are still considered as marker reflections. For common applications, however, the default range should work fine.


Enable Marker Size under the visual aids options in the Camera Preview viewport to inspect which reflections are accepted, or omitted, by the size filter.

Reflections accepted (white) or rejected (red) by the size filter.
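The size filter described above amounts to a range check on cluster pixel counts. Below is a minimal sketch; the function name and default bounds are illustrative, not Motive's actual internals:

```python
def size_filter(clusters, min_pixels=4, max_pixels=200):
    """Keep only pixel clusters whose size falls within the configured
    Min/Max Thresholded Pixels bounds; all other clusters are rejected.
    The default bounds here are illustrative placeholders."""
    return [c for c in clusters if min_pixels <= len(c) <= max_pixels]
```

A close-up setup would raise `max_pixels`, since nearby markers cover more pixels in each camera's view.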


Circularity

In addition to the size filter, the 2D Object Filter also identifies marker reflections based on their shape; specifically, their roundness. It assumes that all marker reflections have circular shapes and filters out all non-circular reflections detected by each camera. The allowable circularity value is defined under the Marker Circularity settings in the Reconstruction pane. The valid range is between 0 and 1, with 0 being completely flat and 1 being perfectly round. Only reflections with circularity values greater than the defined threshold will be considered as marker reflections.


Enable Marker Circularity under the visual aids options in the Camera Preview viewport to inspect which reflections are accepted, or omitted, by the circularity filter.

Reflections accepted (white) or rejected (red) by the circularity filter.
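A circularity score of the kind this filter thresholds can be approximated by comparing a cluster's pixel area to the area of a circle enclosing it. This is a hypothetical proxy metric for illustration, not the metric Motive actually computes:

```python
import math

def circularity(pixels):
    """Rough circularity proxy for a pixel cluster: the ratio of the
    cluster's pixel count to the area of the smallest centroid-centered
    circle enclosing it. Near 1.0 means round; near 0 means elongated."""
    n = len(pixels)
    cy = sum(p[0] for p in pixels) / n
    cx = sum(p[1] for p in pixels) / n
    # enclosing radius, padded by half a pixel for the pixel footprint
    r = max(math.hypot(p[0] - cy, p[1] - cx) for p in pixels) + 0.5
    return min(1.0, n / (math.pi * r * r))
```

An elongated streak scores near 0 and would be rejected at any typical threshold, while a compact round blob scores much closer to 1.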

Object mode vs. Precision Mode

The Object Mode and Precision Mode deliver slightly different data to the host PC. In Object mode, cameras capture the 2D centroid location, size, and roundness of markers and deliver them to the host PC. In Precision mode, cameras capture only the regions of interest around candidate centroids. Each region is then delivered to the host PC for additional processing to determine the centroid location, size, and roundness of the reflections. Read more about Video Types.

Marker Rays

After the 2D Object Filter has been applied, each 2D centroid captured by a camera forms a marker ray, which is a 3D vector ray connecting the detected centroid to a 3D coordinate in the capture volume. When a minimum of 2 rays (defined by the Minimum Rays setting) converge and intersect within the allowable maximum offset distance (defined by the Maximum Residual setting), reconstruction of a 3D marker occurs.

Monitoring marker rays is an efficient way of live-inspecting reconstruction outcomes. To visualize these marker rays, Motive must be live-reconstructing from either live or recorded 2D data. Then, the rays must be enabled for viewing under the visual aids options in the Perspective View pane. The live reconstruction, which will be covered later on this page, occurs in either the Live mode or the 2D mode. There are two different types of marker rays that are visualized in Motive:

Marker Ray Types

There are two different types of marker rays in Motive: tracked rays and untracked rays. By inspecting these marker rays, you can easily find out which cameras are contributing to the reconstruction of a selected marker.

Tracked Ray (Green)

Tracked rays are marker rays that represent detected 2D centroids that are contributing to 3D reconstructions within the volume. Tracked Rays will be visible only when reconstructions are selected from the viewport.

Untracked Ray (Red)

An untracked ray is a marker ray that fails to contribute to a reconstruction of a 3D point. Untracked rays occur when reconstruction requirements, usually the ray count or the maximum residual, are not met.

Software Filter: Reconstruction Settings

Reconstruction settings are located under the Live Reconstruction tab in the Application Settings.

Motive processes marker rays together with the camera calibration to reconstruct the respective 3D coordinates, and here another stage of filtering is applied. When marker rays converge toward a 3D point, the Point Cloud reconstruction engine refers to the reconstruction parameters and determines which sets of converging rays are acceptable to be reconstructed into 3D data; ray sets that do not satisfy the given conditions are discarded. These parameters are defined under the Live-Reconstruction tab in the Application Settings. Some of the commonly modified key parameters are summarized below.

Maximum Residual

Residual offset among the converging rays

The maximum allowable offset distance (in mm) between the converging rays contributing to a 3D reconstruction. This distance is referred to as the residual distance in Motive. When a marker ray converges with a set of other rays at a residual distance larger than the defined maximum value, the ray will not contribute to the reconstruction of the 3D point. Lower this setting if you want 3D markers to be reconstructed only when marker rays are precisely converging onto a 3D point. For larger capture volume setups, increase this value to be more lenient toward reconstructions that have bigger residual offsets.
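The residual test can be sketched as a perpendicular-distance check from the candidate 3D point to each marker ray. Both helpers below are hypothetical illustrations, not Motive code:

```python
import math

def ray_point_distance(origin, direction, point):
    """Perpendicular distance from `point` to the ray (origin, direction)."""
    n = math.sqrt(sum(c * c for c in direction))
    d = [c / n for c in direction]  # unit direction
    v = [point[i] - origin[i] for i in range(3)]
    t = sum(v[i] * d[i] for i in range(3))  # projection onto the ray
    closest = [origin[i] + t * d[i] for i in range(3)]
    return math.sqrt(sum((point[i] - closest[i]) ** 2 for i in range(3)))

def contributing_rays(rays, point, max_residual_mm=2.0):
    """Keep only rays whose residual to `point` is within the maximum.
    The 2.0 mm default is an illustrative placeholder."""
    return [r for r in rays
            if ray_point_distance(r[0], r[1], point) <= max_residual_mm]
```

Combined with the Minimum Ray Count setting described below, a candidate point would be accepted only when the number of contributing rays meets that minimum.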


When you select a marker in the live-reconstruction mode, the respective residual value will be displayed on the status bar. Smaller residual values represent precisely converged tracked rays and are a more accurate representation of 3D coordinates.


When the calibration quality of a camera system is degraded, the residual values of the system will increase.

Minimum Ray Count

This setting sets the minimum number of tracked marker rays required for a 3D point to be reconstructed. In other words, this is the number of calibrated cameras that need to see the marker. Increasing the minimum ray count may prevent extraneous reconstructions, while decreasing it may reduce marker occlusions when too few cameras can see a marker. In general, modifying this setting is recommended only for high-camera-count setups.


More Settings

There are other reconstruction settings that can be adjusted to improve the acquisition of 3D data. For a detailed description of each setting, read through the Application Settings: Live Reconstruction page.

Live Reconstruction

Live-reconstruction enabled.

Live reconstruction is the real-time reconstruction of 3D coordinates directly from captured, or recorded, 2D data. To allow Motive to live-reconstruct, Enable Point Cloud Reconstruction must be enabled under the Live-Reconstruction tab in the Application Settings pane. When Motive is live-reconstructing, you can examine the marker rays from the viewport, inspect the point cloud reconstruction, and optimize the 3D data acquisition.

There are two modes where Motive is reconstructing 3D data in real-time:

  • Live mode (Live 2D data capture)
  • 2D mode (Recorded 2D data)

Live Mode

In the Live Mode, Motive reconstructs from captured 2D frames in real-time, and you can inspect and monitor the marker rays from the perspective view. Any changes to the Point Cloud reconstruction settings will be reflected immediately in the Live mode.


2D Mode

The 2D Mode is used to monitor 2D data in the post-processing of a captured Take. When Motive records a capture, both 2D camera data and reconstructed 3D data are saved into a Take file, and by default, the 3D data is what gets loaded when a recorded Take file is opened.

Recorded 3D data contains only the 3D coordinates that were live-reconstructed at the moment of capture; in other words, this data is completely independent of the 2D data once a capture has been recorded. You can still, however, view and use the recorded 2D data to optimize the Point Cloud parameters and reconstruct a fresh set of 3D data from it. To do so, switch into the 2D Mode in the Data pane. In the 2D Mode, Motive live-reconstructs from the recorded 2D data, so any changes to the reconstruction settings will be reflected, and you will be able to inspect the reconstructions and marker rays from the viewports.

Switching to 2D Mode

Under the Data pane, click the context menu button to access the menu options and check the 2D Mode option.


Once the reconstruction parameters have been optimized, the post-processing reconstruction pipeline needs to be performed on the Take in order to reconstruct a new set of 3D data. Note that the existing 3D data will get overwritten and all of the post-processing edits on it will be discarded.

Post-Processing Reconstruction

The post-processing reconstruction pipeline allows you to convert 2D data from a recorded Take into 3D data. In other words, you can obtain a fresh set of 3D data from recorded 2D camera frames by performing reconstruction on a Take. Also, if any of the Point Cloud reconstruction parameters have been optimized post-capture, the changes will be reflected in the newly obtained 3D data.

  • Performing post-processing reconstruction: Open the Data pane, select the desired Takes, right-click on the Take selection, and use either the Reconstruct pipeline or the Reconstruct and Auto-label pipeline from the context menu.
  • 2D Filter Settings: In Edit mode, 2D filters can still be modified from the tracking group properties in the Devices pane. Modified filter settings will change which 2D data gets processed through the Point Cloud reconstruction engine.
  • Reconstruction Settings: When you re-reconstruct the Take, a new set of 3D data will be reconstructed from the filtered 2D data. In this step, the reconstruction settings defined under the corresponding Take properties in the Properties pane will be used. Note that the reconstruction properties under the Application Settings are for Live capture only.
  • The Reconstruct and Auto-label pipeline will additionally apply the auto-labeling pipeline to the obtained 3D data and label any markers that associate with existing asset (rigid body or skeleton) definitions. The auto-labeling pipeline is explained further on the Labeling page.
  • Performing the Reconstruct and Auto-label pipeline on a selected Take to obtain a new set of 3D data.
  • When applying post-processing reconstruction, the reconstruction settings configured under the Properties pane will be applied.


  • Post-processing reconstruction can be performed either on an entire Take frame range or only within a desired frame range by selecting the range under the Control Deck or in the Graph View pane. When nothing is selected, reconstruction will be applied to all frames.
  • Entire frames of multiple Takes can be selected and processed altogether by selecting desired Takes under the Data Management pane.


  • Reconstructing recorded Takes again, via either the Reconstruct or the Reconstruct and Auto-label pipeline, will completely overwrite the existing 3D data, and any post-processing edits on trajectories and marker labels will be discarded.
  • Also, for Takes involving skeleton assets, if the skeletons are never in well-trackable poses throughout the captured Take, the recorded skeleton marker labels, which were intact during the live capture, may be discarded, and the reconstructed markers may not be auto-labeled again. This is another reason to start a capture with a calibration pose (e.g. a T-pose).

See Also: