FusionDepthProcessor.DepthFloatFrameToPointCloud Method

Construct an oriented point cloud in the local camera frame of reference from a depth float image frame.

Here we calculate the 3D position of each depth float pixel with the optical center of the camera as the origin. We use a right-handed coordinate system in which, in common with bitmap images that have a top-left origin, +X points to the right, +Y points down, and +Z points forward from the Kinect camera into the scene, as though looking into the scene from behind the camera. Both images must be the same size and have the same camera parameters.
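Conceptually, each valid depth pixel (u, v) with depth z is back-projected through the pinhole camera model into this coordinate frame. The sketch below illustrates that mapping for one pixel; the intrinsic parameters (focalLengthX/Y, principalPointX/Y, in pixel units) and the CameraSpacePoint3 type are illustrative placeholders, not members of this API, which reads the camera parameters from the frames themselves.

// Illustrative back-projection of a single depth pixel into the local camera
// frame described above (+X right, +Y down, +Z forward). All names here are
// hypothetical and exist only to show the geometry.
public struct CameraSpacePoint3
{
    public float X, Y, Z;
}

public static class BackProjectionSketch
{
    public static CameraSpacePoint3 BackProject(
        int u, int v, float depthMeters,
        float focalLengthX, float focalLengthY,
        float principalPointX, float principalPointY)
    {
        return new CameraSpacePoint3
        {
            X = (u - principalPointX) * depthMeters / focalLengthX, // +X to the right
            Y = (v - principalPointY) * depthMeters / focalLengthY, // +Y down
            Z = depthMeters                                         // +Z into the scene
        };
    }
}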

Syntax

public static void DepthFloatFrameToPointCloud (
         FusionFloatImageFrame depthFloatFrame,
         FusionPointCloudImageFrame pointCloudFrame
)

Parameters

depthFloatFrame
         Type: FusionFloatImageFrame
         The depth float frame to convert.

pointCloudFrame
         Type: FusionPointCloudImageFrame
         A pre-allocated point cloud frame, to be filled with 3D points and surface normals relative to the local camera coordinate system. Must be the same size as depthFloatFrame.

Remarks

This method raises the following exceptions:

Exception                   Raised on
ArgumentNullException       Thrown when the depthFloatFrame or the pointCloudFrame parameter is null.
ArgumentException           Thrown when the depthFloatFrame and pointCloudFrame parameters have different image sizes.
OutOfMemoryException        Thrown if a CPU memory allocation failed.
InvalidOperationException   Thrown when the Kinect Runtime could not be accessed, the device is not connected, a GPU memory allocation failed, or the call failed for an unknown reason.
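As a usage sketch (not production code), the snippet below allocates two frames of matching size and converts a depth float frame into an oriented point cloud, handling the exceptions listed above. The 640x480 resolution and the way depthFloatFrame is filled are assumptions made for illustration.

// Minimal usage sketch, assuming a 640x480 depth resolution and that
// depthFloatFrame has already been filled with depth data (for example via
// FusionDepthProcessor.DepthToDepthFloatFrame).
int width = 640;
int height = 480;

using (var depthFloatFrame = new FusionFloatImageFrame(width, height))
using (var pointCloudFrame = new FusionPointCloudImageFrame(width, height))
{
    // ... fill depthFloatFrame with depth data here ...

    try
    {
        FusionDepthProcessor.DepthFloatFrameToPointCloud(
            depthFloatFrame,
            pointCloudFrame);

        // pointCloudFrame now holds 3D points and normals in the local camera frame.
    }
    catch (ArgumentException)
    {
        // A frame was null or the two frames had different image sizes.
    }
    catch (InvalidOperationException)
    {
        // The Kinect Runtime could not be accessed or the call failed.
    }
}

Both frame types implement IDisposable, so wrapping them in using blocks releases their native resources deterministically.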

Requirements

Namespace: Microsoft.Kinect.Toolkit.Fusion

Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)

See Also

Reference

FusionDepthProcessor Class
FusionDepthProcessor Members
Microsoft.Kinect.Toolkit.Fusion Namespace