The AlignPointClouds function uses an iterative algorithm to align two sets of oriented point clouds and calculate the camera's relative pose. This is a generic function that can be used with any sets of overlapping point clouds, independently of a Reconstruction volume.
All images must be the same size and have the same camera parameters.
To find the frame-to-frame relative transformation between two sets of point clouds in the camera local frame of reference (created by DepthFloatFrameToPointCloud), set the observedToReferenceTransform parameter to the identity.
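For example, a frame-to-frame alignment could look like the following sketch. It is only a sketch under stated assumptions: width, height, depthFloatFrame, and previousPointCloud are illustrative names for data you already hold, with the previous frame's point cloud assumed to have been created the same way in the prior iteration.

// Sketch: frame-to-frame tracking; width, height, depthFloatFrame and
// previousPointCloud are illustrative assumptions, not part of this API page.
var currentPointCloud = new FusionPointCloudImageFrame(width, height);
FusionDepthProcessor.DepthFloatFrameToPointCloud(depthFloatFrame, currentPointCloud);

// Both clouds are in the camera-local frame of reference, so start from identity.
Matrix4 frameToFrameTransform = Matrix4.Identity;

bool aligned = FusionDepthProcessor.AlignPointClouds(
    previousPointCloud,                               // reference: previous frame's point cloud
    currentPointCloud,                                // observed: current frame's point cloud
    FusionDepthProcessor.DefaultAlignIterationCount,  // iteration count (must be >= 1)
    null,                                             // delta image not required here
    ref frameToFrameTransform);

// On success, frameToFrameTransform holds the relative camera motion between the two
// frames; on failure it is reset to identity.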
To calculate the frame-to-model pose transformation between point clouds calculated from new depth frames with DepthFloatFrameToPointCloud and point clouds calculated from an existing Reconstruction volume with CalculatePointCloud (e.g. from the previous frame), pass the CalculatePointCloud image as the reference frame, and the current depth frame point cloud from DepthFloatFrameToPointCloud as the observed frame. Set the observedToReferenceTransform to the previous frame's calculated camera pose that was used in the CalculatePointCloud call.
To calculate the pose transformation between new depth frames and an existing Reconstruction volume, pass the previous frame's point cloud from RenderReconstruction as the reference frame, and the current frame point cloud (from DepthFloatFrameToPointCloud) as the observed frame. Set the observedToReferenceTransform parameter to the previous frame's calculated camera pose.
Note that here the current frame point cloud will be in the camera local frame of reference, whereas the raycast points and normals will be in the global/world coordinate system. By passing the observedToReferenceTransform you make the algorithm aware of the transformation between the two coordinate systems.
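As an illustration of the frame-to-model case described above, the following sketch assumes a Reconstruction object named volume and a worldToCameraTransform holding the previous frame's calculated pose; all variable names are illustrative rather than part of this API page.

// Sketch: frame-to-model tracking against an existing Reconstruction volume.
var raycastPointCloud = new FusionPointCloudImageFrame(width, height);
var currentPointCloud = new FusionPointCloudImageFrame(width, height);

// Reference frame: points and normals raycast from the volume at the previous pose
// (global/world coordinate system).
volume.CalculatePointCloud(raycastPointCloud, worldToCameraTransform);

// Observed frame: point cloud from the new depth frame (camera-local coordinates).
FusionDepthProcessor.DepthFloatFrameToPointCloud(depthFloatFrame, currentPointCloud);

// Seed with the previous frame's pose so the algorithm knows how the two
// coordinate systems relate; on success the matrix is updated to the new pose.
Matrix4 pose = worldToCameraTransform;

if (FusionDepthProcessor.AlignPointClouds(
        raycastPointCloud,
        currentPointCloud,
        FusionDepthProcessor.DefaultAlignIterationCount,
        null,
        ref pose))
{
    worldToCameraTransform = pose;
}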
The observedToReferenceTransform pose supplied can also take into account information you may have from other sensors or sensing mechanisms to aid the tracking. To do this, multiply the relative frame-to-frame delta transformation from the other sensing system with the previous frame's pose before passing it to this function. Note that any delta transform used should be in the same coordinate system as that returned by the DepthFloatFrameToPointCloud calculation.
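For instance, a relative delta from another sensor (such as an IMU) could be composed with the previous pose before the call, roughly as in this sketch. Matrix4 composition is not provided by this page, so MultiplyMatrix4 below is a hypothetical helper you would supply; deltaFromOtherSensor, previousPose, referencePointCloud, and observedPointCloud are likewise illustrative names.

// Sketch: seed the pose estimate with a delta transform from another sensing system.
// The delta must be expressed in the same coordinate system as the point clouds
// returned by DepthFloatFrameToPointCloud. MultiplyMatrix4 is a hypothetical helper
// that composes two Matrix4 transforms.
Matrix4 poseGuess = MultiplyMatrix4(deltaFromOtherSensor, previousPose);

bool tracked = FusionDepthProcessor.AlignPointClouds(
    referencePointCloud,
    observedPointCloud,
    FusionDepthProcessor.DefaultAlignIterationCount,
    null,
    ref poseGuess);   // refined pose on success, identity on failure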
Syntax
public static bool AlignPointClouds (
         FusionPointCloudImageFrame referencePointCloudFrame,
         FusionPointCloudImageFrame observedPointCloudFrame,
         int maxAlignIterationCount,
         FusionColorImageFrame deltaFromReferenceFrame,
         ref Matrix4 observedToReferenceTransform
)
Parameters
- referencePointCloudFrame
Type: FusionPointCloudImageFrame
The point cloud frame of the reference camera, or the previous Kinect point cloud frame.

- observedPointCloudFrame
Type: FusionPointCloudImageFrame
The point cloud frame of the observed camera, or the current Kinect frame.

- maxAlignIterationCount
Type: Int32
The maximum number of iterations of the algorithm to run. The minimum value is 1. Using only a small number of iterations will have a faster runtime; however, the algorithm may not converge to the correct transformation.

- deltaFromReferenceFrame
Type: FusionColorImageFrame
Optionally, a pre-allocated color image frame, to be filled with color-coded data from the camera tracking. This may be used as input to additional vision algorithms such as object segmentation. Values vary depending on whether the pixel was a valid pixel used in tracking (inlier) or failed in different tests (outlier). 0xff000000 indicates an invalid input vertex (e.g. from 0 input depth), or one where no correspondences occur between point cloud images. Outlier vertices rejected due to too large a distance between vertices are coded as 0xff008000. Outlier vertices rejected due to too large a difference in normal angle between point clouds are coded as 0xff800000. Inliers are color shaded depending on the residual energy at that point, with more saturated colors indicating more discrepancy between vertices and less saturated colors (i.e. more white) representing less discrepancy, or less information at that pixel. Pass null if this image is not required. One way to inspect these values is shown in the sketch after this list.

- observedToReferenceTransform
Type: Matrix4
A pre-allocated transformation matrix. At entry to the function this should be filled with the best guess for the observed-to-reference transform (usually the last frame's calculated pose). At exit this is filled with the calculated pose, or identity if the calculation failed.
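The following sketch shows one way the delta image could be inspected after a call, for example to count rejected pixels. Here deltaImage is assumed to be the FusionColorImageFrame that was passed as deltaFromReferenceFrame, and the constants mirror the codes listed above.

// Sketch: count tracking outliers in the delta image filled by AlignPointClouds.
int[] pixels = new int[deltaImage.Width * deltaImage.Height];
deltaImage.CopyPixelDataTo(pixels);

const int InvalidVertex   = unchecked((int)0xff000000); // no depth or no correspondence
const int DistanceOutlier = unchecked((int)0xff008000); // rejected: vertex distance too large
const int NormalOutlier   = unchecked((int)0xff800000); // rejected: normal angle too large

int rejected = 0;
foreach (int p in pixels)
{
    if (p == InvalidVertex || p == DistanceOutlier || p == NormalOutlier)
    {
        rejected++;
    }
}
// A high rejected count suggests poor alignment for this frame.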
Return Value
Type: Boolean
Returns true if successful. Returns false if the algorithm encountered a problem aligning the input point clouds and could not calculate a valid transformation; in this case the observedToReferenceTransform parameter is set to identity.
Remarks
This method raises the following exceptions:
| Exception | Raised on |
| --- | --- |
| ArgumentNullException | Thrown when the referencePointCloudFrame or the observedPointCloudFrame parameter is null. |
| ArgumentException | Thrown when the referencePointCloudFrame, observedPointCloudFrame, or deltaFromReferenceFrame parameters are different image sizes. Thrown when the referencePointCloudFrame, observedPointCloudFrame, or deltaFromReferenceFrame parameters have different camera parameters. Thrown when the maxAlignIterationCount parameter is less than 1. |
| OutOfMemoryException | Thrown if a CPU memory allocation failed. |
| InvalidOperationException | Thrown when the Kinect Runtime could not be accessed, the device is not connected, a GPU memory allocation failed, or the call failed for an unknown reason. |
Requirements
Namespace: Microsoft.Kinect.Toolkit.Fusion
Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)
See Also
Reference
FusionDepthProcessor Class
FusionDepthProcessor Members
Microsoft.Kinect.Toolkit.Fusion Namespace