NatNet: Data Types
This page provides an overview of the general data structure used in the NatNet software development kit (SDK) and how the library is used to parse received tracking information.
For specific details on each of the data types, please refer to the NatNetTypes.h header file.
When receiving streamed data using the NatNet SDK library, the data descriptions should be received before the tracking data. NatNet data is packaged mainly in two different formats: data descriptions and frame-specific tracking data. With this format, the client application can discover which data are streamed out from the server application before accessing the actual tracking data.
For every asset (e.g. reconstructed markers, rigid bodies, skeletons, force plates) included within a streamed capture session, the descriptions and the tracking data are stored separately. This format allows frame-independent parameters (e.g. name, size, and number) to be stored in instances of the description structs, and frame-dependent values (e.g. position and orientation) to be stored in instances of the frame data structs. When needed, the two packets for an asset can be correlated by referencing its unique identifier values.
- Dataset Descriptions contain descriptions of the motion capture data sets for which frames of motion capture data will be generated. (e.g. sSkeletonDescription, sRigidBodyDescription)
- Frame of Mocap Data contains a single frame of motion capture data for all of the data sets described in the Dataset Descriptions. (e.g. sSkeletonData, sRigidBodyData)
When streaming from Motive, received NatNet data will contain only the assets that are enabled in the Project pane and the asset types that are set to true under Streaming Settings in the Data Streaming pane.
Data Descriptions
To receive data descriptions from a connected server, use the NatNetClient::GetDataDescriptionList method. Calling this function saves a list of the available descriptions in an instance of sDataDescriptions.
The sDataDescriptions structure stores an array of descriptions, one for each asset (MarkerSets, rigid bodies, skeletons, and force plates) involved in a capture, and the necessary information can be parsed from it. The following table lists the main data description structs that are available through the SDK.
Refer to the NatNetTypes.h header file for more information on each data type and members of each description struct.
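As an illustration, the following sketch requests the description list from an already-connected native client and walks it by type. It assumes a connected NatNetClient instance and the struct layout declared in NatNetTypes.h; error handling is abbreviated:

```cpp
#include <cstdio>
#include "NatNetClient.h"
#include "NatNetTypes.h"

// Sketch: retrieve and print the available data descriptions.
void PrintDescriptions(NatNetClient& client)
{
    sDataDescriptions* pDescriptions = nullptr;
    if (client.GetDataDescriptionList(&pDescriptions) != ErrorCode_OK || !pDescriptions)
        return;

    for (int i = 0; i < pDescriptions->nDataDescriptions; ++i)
    {
        const sDataDescription& desc = pDescriptions->arrDataDescriptions[i];
        switch (desc.type)
        {
        case Descriptor_MarkerSet:
            printf("MarkerSet: %s (%d markers)\n",
                   desc.Data.MarkerSetDescription->szName,
                   desc.Data.MarkerSetDescription->nMarkers);
            break;
        case Descriptor_RigidBody:
            printf("RigidBody: %s (ID %d)\n",
                   desc.Data.RigidBodyDescription->szName,
                   desc.Data.RigidBodyDescription->ID);
            break;
        case Descriptor_Skeleton:
            printf("Skeleton: %s (%d bones)\n",
                   desc.Data.SkeletonDescription->szName,
                   desc.Data.SkeletonDescription->nRigidBodies);
            break;
        default:
            break; // force plates, devices, etc. handled similarly
        }
    }
}
```

The description list only changes when assets are added, removed, or renamed, so it is typically requested once after connecting (and again after a model change notification) rather than per frame.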
| Data Type | Native Library | Managed Assembly | Description |
|---|---|---|---|
| Server Description | sServerDescription | ServerDescription | Contains basic network information about the connected server application and the host computer it runs on. Server descriptions are obtained by calling the GetServerDescription method of the NatNetClient class. |
| Data Descriptions | sDataDescriptions | List<DataDescriptor> | Contains an array of data descriptions for each active asset in a capture; basic information about the corresponding asset is stored in each description packet. Data descriptions are obtained by calling the GetDataDescriptions method of the NatNetClient class. The description of each asset type is explained below. |
| MarkerSet Description | sMarkerSetDescription | MarkerSet | Contains the total number of markers in a MarkerSet and each of their labels. Note that rigid body and skeleton assets are included as MarkerSets as well. Also, for every mocap session, there is a special MarkerSet named all, which contains a list of all of the labeled markers in the capture. |
| Rigid Body Description | sRigidBodyDescription | RigidBody | Contains the corresponding rigid body name. Skeleton bones are also considered rigid bodies; in that case, the description also contains the hierarchical relationship between parent and child rigid bodies. |
| Skeleton Description | sSkeletonDescription | Skeleton | Contains the corresponding skeleton asset name, the skeleton ID, and the total number of rigid bodies (bones) in the asset. The skeleton description also contains an array of rigid body descriptions, one for each bone of the corresponding skeleton. |
| Force Plate Description | sForcePlateDescription | ForcePlate | Contains the names and IDs of the plate and its channels, as well as other hardware parameter settings. Refer to the NatNetTypes.h header file for specific details. |
| Device Description | sDeviceDescription | Device | Contains information about data acquisition (NI-DAQ) devices, covering both the DAQ device itself (ID, name, serial number) and its channels (channel count, channel data type, channel names). Refer to the NatNetTypes.h header file for specific details. |
Frame of Mocap Data
As mentioned at the beginning, frame-specific tracking data are stored separately from the data description instances, since these values cannot be known ahead of time or out of band, but only on a per-frame basis. This data is saved into an instance of sFrameOfMocapData for each frame, which contains arrays of frame-specific data structs (e.g. sRigidBodyData, sSkeletonData) for each type of asset included in the capture. The respective frame number, timecode, and streaming latency values are also saved in these packets.
The sFrameOfMocapData packets are obtained by setting up a frame handler function using the NatNetClient::SetFrameReceivedCallback method. In most cases, a frame handler function should be assigned to make sure every frame is promptly processed. Refer to the provided SampleClient project for an example setup.
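A minimal sketch of such a frame handler is shown below. It assumes a constructed NatNetClient and the callback signature declared in NatNetTypes.h; since the handler runs on the SDK's network thread, heavy processing should be deferred elsewhere:

```cpp
#include <cstdio>
#include "NatNetClient.h"
#include "NatNetTypes.h"

// Sketch: a frame handler invoked once per received frame of mocap data.
void NATNET_CALLCONV OnFrameReceived(sFrameOfMocapData* pFrame, void* pUserData)
{
    // Called on a network thread for every frame; keep the work here light.
    printf("Frame %d: %d rigid bodies, %d labeled markers\n",
           pFrame->iFrame, pFrame->nRigidBodies, pFrame->nLabeledMarkers);

    for (int i = 0; i < pFrame->nRigidBodies; ++i)
    {
        const sRigidBodyData& rb = pFrame->RigidBodies[i];
        printf("  RB %d: pos (%.3f, %.3f, %.3f)\n", rb.ID, rb.x, rb.y, rb.z);
    }
}

// Registration, done once after constructing the client:
//   client.SetFrameReceivedCallback(OnFrameReceived, &client);
```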
| Data | Member | Description |
|---|---|---|
| Frame Count | iFrame | Host (server) defined frame number. |
| Labeled Markers | nLabeledMarkers (nMarkers) | The total number of labeled markers in the frame. |
| | LabeledMarkers | A list of ordered, padded, point-cloud-solved, model-filled (where occluded) labeled marker data. The data includes each marker's unique ID, x/y/z position, marker size, and residual value from reconstruction. |
| Unlabeled Markers | nOtherMarkers | The total number of unlabeled markers in the frame. |
| | OtherMarkers | A list of point-cloud-solved 3D positions (x, y, z) for all unlabeled markers in the frame. |
| MarkerSet Data | nMarkerSets | The total number of MarkerSets. |
| | | A collection of MarkerSets (MarkerSet, rigid body, or skeleton) in the frame. The struct includes the name, the number of involved markers, and their corresponding x/y/z locations. |
| Rigid Body Data | nRigidBodies | The total number of rigid body assets, both tracked and untracked, in the frame. |
| | sRigidBodyData (RigidBodyData) | A named segment with a unique ID, position, and orientation data. For skeletons, this represents one of the bone segments of a skeleton asset. |
| Skeleton Data | nSkeletons | The total number of skeleton assets, both tracked and untracked, in the frame. |
| | sSkeletonData (SkeletonData) | A named, hierarchical collection of rigid body data stored in sRigidBodyData structs. |
| Force Plate Data | nForcePlates | The total number of force plates. |
| | ForcePlates | Force plate channel data (Fx, Fy, Fz, Mx, My, Mz). Each channel's data is saved as an instance of sAnalogChannelData, which contains the values measured from the corresponding channel as well as the total number of analog subframes per mocap frame. Force plate data will contain multiple samples per mocap frame, depending upon the force plate acquisition rate; the number of subframes per mocap frame can be queried from each channel's AnalogChannelData instance. |
| Device Data | nDevices | The total number of analog devices in the capture. (e.g. NI-DAQ) |
| | Devices | An array containing data from each analog device channel (e.g. NI-DAQ). Each channel's data is saved as an instance of sAnalogChannelData, which contains the values measured from the corresponding channel as well as the total number of analog subframes per mocap frame. |
| Latency | | (Deprecated) More accurate system latency values can now be derived from the reported timestamp values. For more information, read through the Latency Measurements page. |
| Software Latency | | (Deprecated) More accurate software latency values can now be derived from the reported timestamp values. For more information, read through the Latency Measurements page. |
| Time Information | Timecode | Timing information for the frame. If SMPTE timecode is detected in the system, this time information is also included. See: OptiTrack Timecode |
| | TimecodeSubframe | The subframe value of the timecode. See: OptiTrack Timecode |
| | fTimestamp | Software timestamp value; reports the time since software start. |
| | CameraMidExposureTimestamp | Given in the host's high-resolution ticks, this stores a timestamp of when the cameras expose. The timestamp precisely indicates the center of the exposure window. For more information, refer to the Latency Measurements article. |
| | CameraDataReceivedTimestamp | Given in the host's high-resolution ticks, this stores a timestamp of when Motive receives the camera data. For more information, refer to the Latency Measurements article. |
| | TransmitTimestamp | Given in the host's high-resolution ticks, this stores a timestamp of when the tracking data is fully processed and ready to be streamed out. For more information, refer to the Latency Measurements article. |
- One reconstructed 3D marker can be stored in two different places (e.g. in LabeledMarkers and in a RigidBody) within a frame of mocap data. In those cases, the marker's unique identifier value can be used to correlate the two in the client application if necessary.
- Declarations for these data types are listed in the NatNetTypes.h header file within the SDK. The SampleClient project, included in the \NatNet SDK\Sample folder, illustrates how to retrieve and interpret the data descriptions and frame data.
Most of the NatNet SDK data packets contain ID values. These values are assigned uniquely to individual markers as well as to each asset within a capture, and they can be used to determine which asset a given data packet is associated with. One common use is correlating the data description and frame data packets of an asset.
Decoding Member IDs
For each member object included within a parent model, the unique ID value encodes both the parent model and the member itself. Thus, the ID value of a member object needs to be decoded in order to determine which object, and which parent model, it references.
For example, a skeleton asset is a hierarchical collection of bone rigid bodies, and each bone rigid body has a unique ID that references both the skeleton model it belongs to and the rigid body itself. When analyzing skeleton bones, the ID value must be decoded to extract the bone's rigid body ID; only then can it be used to look up the bone's description.