com.primesense.nite
Class HandTracker

java.lang.Object
  extended by com.primesense.nite.HandTracker

public class HandTracker
extends java.lang.Object

This is the main object of the Hand Tracker algorithm. It (along with UserTracker) is one of the two main classes in NiTE. All NiTE algorithms are accessible through one of these two classes.

HandTracker provides access to all algorithms related to tracking individual hands, as well as detecting gestures in the depth map.

The core of the hand tracking is an algorithm that finds human hands in each frame of the depth map and reports the position of those hands in space. This can be used for simple detection of higher-level gestures and implementation of gesture-based user interfaces. Unlike full-body tracking algorithms, hand-point-based tracking works on users that are sitting and does not require that a full body be visible.

Gesture detection is generally used to initiate hand tracking. It allows detection of gestures in the raw depth map, without requiring hand points (in contrast to higher-level gestures that might be used to implement a UI using hand points). These gestures can be located in space to provide a hint to the hand tracking algorithm on where to start tracking.

The output of the HandTracker occurs one frame at a time. For each input depth frame, a hand tracking frame is output with hand positions, gesture positions, etc. A listener class is provided that allows for event driven reaction to each new frame as it arrives.

Note that creating a HandTracker requires a valid OpenNI 2.0 Device to be initialized in order to provide depth information. See the OpenNI 2.0 documentation for information on connecting a device and starting the stream of depth maps that will drive this algorithm.
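
A minimal usage sketch is shown below. It assumes the NiTE.initialize() entry point, the HandTrackerFrameRef accessors getGestures(), getHands(), and release(), the GestureData methods isComplete() and getCurrentPosition(), the HandData methods isTracking(), getId(), and getPosition(), and the GestureType.WAVE constant; none of these are documented on this page, so consult the corresponding class documentation for the exact signatures.

    import com.primesense.nite.*;

    public class HandTrackerSketch {
        public static void main(String[] args) {
            NiTE.initialize();                                   // assumed NiTE runtime entry point
            HandTracker tracker = HandTracker.create();          // uses any available OpenNI device

            tracker.startGestureDetection(GestureType.WAVE);     // scan the depth map for a "wave" focus gesture

            for (int i = 0; i < 300; ++i) {                      // process a bounded number of frames for the sketch
                HandTrackerFrameRef frame = tracker.readFrame(); // one algorithm frame per input depth frame

                // Completed gestures provide a position hint for starting hand tracking.
                for (GestureData gesture : frame.getGestures()) {
                    if (gesture.isComplete()) {
                        tracker.startHandTracking(gesture.getCurrentPosition());
                    }
                }

                // Report every hand that is currently being tracked.
                for (HandData hand : frame.getHands()) {
                    if (hand.isTracking()) {
                        System.out.println("Hand " + hand.getId() + " at " + hand.getPosition());
                    }
                }

                frame.release();                                 // assumed release of the frame reference
            }

            tracker.destroy();                                   // release the tracker when done
        }
    }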

See Also:
UserTracker, NiTE

Nested Class Summary
static interface HandTracker.NewFrameListener
          This is a listener interface that is used to react to events generated by the HandTracker class.
 
Method Summary
 void addNewFrameListener(HandTracker.NewFrameListener listener)
          Adds a NewFrameListener object to this HandTracker so that it will respond when a new frame is generated.
 Point2D<java.lang.Float> convertDepthCoordinatesToHand(Point3D<java.lang.Integer> point)
           Converts a point from the depth map's "projective" coordinate system to the "real world" coordinates used by hand points.
 Point2D<java.lang.Float> convertHandCoordinatesToDepth(Point3D<java.lang.Float> point)
           Converts a point from the "real world" coordinates used by hand points to the depth map's "projective" coordinate system.
static HandTracker create()
           Creates and initializes an empty Hand Tracker.
static HandTracker create(org.openni.Device device)
           Creates and initializes an empty Hand Tracker.
 void destroy()
          Shuts down the hand tracker and releases all resources used by it.
 long getHandle()
          Getter function for handle of hand tracker.
 float getSmoothingFactor()
          Queries the current hand smoothing factor.
 HandTrackerFrameRef readFrame()
          Gets the next snapshot of the algorithm.
 void removeNewFrameListener(HandTracker.NewFrameListener listener)
          Removes a NewFrameListener object from this HandTracker's list of listeners.
 void setSmoothingFactor(float factor)
          Control the smoothing factor of the hand points.
 void startGestureDetection(GestureType type)
           Start detecting a specific gesture.
 short startHandTracking(Point3D<java.lang.Float> position)
           Starts tracking a hand at a specific point in space.
 void stopGestureDetection(GestureType type)
          Stop detecting a specific gesture.
 void stopHandTracking(short id)
          Commands the algorithm to stop tracking a specific hand.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Method Detail

create

public static HandTracker create(org.openni.Device device)

Creates and initializes an empty Hand Tracker. This function should be the first one called when a new HandTracker object is constructed.

An OpenNI device with depth capabilities is required for this algorithm to work. See the OpenNI 2.0 documentation for more information about using an OpenNI 2.0 compliant hardware device and creating a Device object.

Parameters:
device - Initialized OpenNI 2.0 Device object that provides depth streams.
Returns:
An initialized, empty Hand Tracker.
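
As a sketch, a tracker can be bound to an explicitly opened device. The OpenNI calls used here (OpenNI.initialize(), OpenNI.enumerateDevices(), Device.open()) and NiTE.initialize() are assumed from the OpenNI 2.0 and NiTE Java documentation, not defined on this page:

    org.openni.OpenNI.initialize();                                           // start the OpenNI runtime
    java.util.List<org.openni.DeviceInfo> devices = org.openni.OpenNI.enumerateDevices();
    org.openni.Device device = org.openni.Device.open(devices.get(0).getUri()); // open the first depth device

    NiTE.initialize();                                                        // start the NiTE runtime
    HandTracker tracker = HandTracker.create(device);                         // bind the hand tracker to that device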

create

public static HandTracker create()

Creates and initializes an empty Hand Tracker. This function should be the first one called when a new HandTracker object is constructed.

An OpenNI device with depth capabilities is required for this algorithm to work. See the OpenNI 2.0 documentation for more information about using an OpenNI 2.0 compliant hardware device and creating a Device object.

Returns:
An initialized, empty Hand Tracker.

destroy

public void destroy()
Shuts down the hand tracker and releases all resources used by it. This is the opposite of create(). This function is called automatically by the destructor in the current implementation, but it is good practice to run it manually when the algorithm is no longer required. Running this function more than once is safe -- it simply exits if called on an invalid HandTracker.


getHandle

public long getHandle()
Getter function for the handle of the hand tracker.

Returns:
Handle of the hand tracker.

readFrame

public HandTrackerFrameRef readFrame()
Gets the next snapshot of the algorithm. This causes all data to be generated for the next frame of the algorithm -- algorithm frames correspond to the input depth frames used to generate them.

Returns:
Next frame.

setSmoothingFactor

public void setSmoothingFactor(float factor)
Control the smoothing factor of the hand points. The factor should be between 0 (no smoothing at all) and 1 (no movement at all). Experimenting with this factor should allow you to fine-tune hand tracking performance. Higher values will produce smoother movement of the hand points, but may make the hand points feel less responsive to the user.

Parameters:
factor - The smoothing factor.
See Also:
getSmoothingFactor()
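
For example, on an already created tracker (a sketch; the value 0.5 is only illustrative):

    tracker.setSmoothingFactor(0.5f);              // 0 = raw hand positions, 1 = frozen; tune per application
    float factor = tracker.getSmoothingFactor();   // read back the value currently in effect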

getSmoothingFactor

public float getSmoothingFactor()
Queries the current hand smoothing factor.

Returns:
Current hand smoothing factor.
See Also:
setSmoothingFactor(float factor)

startHandTracking

public short startHandTracking(Point3D<java.lang.Float> position)

Starts tracking a hand at a specific point in space. Use of this function assumes that there actually is a hand in the location given. In general, the hand algorithm is much better at tracking a specific hand as it moves around than it is at finding the hand in the first place.

This function is typically used in conjunction with gesture detection. The position in space of the gesture is used to initiate hand tracking. It is also possible to start hand tracking without a gesture if your application constrains users to place their hands at a certain known point in space. A final possibility is for applications or third-party middleware to implement their own hand 'finding' algorithm, either in depth or from some other information source, and use that data to initialize the hand tracker.

The position in space of the hand point is specified in "real world" coordinates. See OpenNI 2.0 documentation for more information on coordinate systems.

Parameters:
position - Point where hand is known/suspected to exist.
Returns:
ID that will be assigned to the hand once tracking starts.
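
A sketch of the typical gesture-driven start, assuming the HandTrackerFrameRef accessors getGestures() and release() and the GestureData methods isComplete() and getCurrentPosition():

    HandTrackerFrameRef frame = tracker.readFrame();
    for (GestureData gesture : frame.getGestures()) {
        if (gesture.isComplete()) {
            // The completed gesture's position seeds the tracker; keep the ID to stop tracking later.
            short handId = tracker.startHandTracking(gesture.getCurrentPosition());
        }
    }
    frame.release();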

stopHandTracking

public void stopHandTracking(short id)
Commands the algorithm to stop tracking a specific hand. Note that the algorithm may be tracking more than one hand. This function only halts tracking on the single hand specified.

Parameters:
id - ID of the hand to stop tracking.

addNewFrameListener

public void addNewFrameListener(HandTracker.NewFrameListener listener)
Adds a NewFrameListener object to this HandTracker so that it will respond when a new frame is generated.

Parameters:
listener - A listener to add.
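
A sketch of an event-driven client; the callback name onNewFrame(HandTracker) follows the NiTE sample code and is assumed here, as are the HandTrackerFrameRef and HandData accessors:

    class HandFrameHandler implements HandTracker.NewFrameListener {
        @Override
        public void onNewFrame(HandTracker tracker) {
            HandTrackerFrameRef frame = tracker.readFrame();   // fetch the frame that triggered this event
            for (HandData hand : frame.getHands()) {
                if (hand.isTracking()) {
                    System.out.println("Hand " + hand.getId() + " at " + hand.getPosition());
                }
            }
            frame.release();
        }
    }

    // Registration on an existing tracker:
    tracker.addNewFrameListener(new HandFrameHandler());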

removeNewFrameListener

public void removeNewFrameListener(HandTracker.NewFrameListener listener)
Removes a NewFrameListener object from this HandTracker's list of listeners. The listener will no longer respond when a new frame is generated.

Parameters:
listener - A listener to remove.

startGestureDetection

public void startGestureDetection(GestureType type)

Start detecting a specific gesture. This function will cause the algorithm to start scanning the entire field of view for any hand that appears to be performing the gesture specified. Intermediate progress is available to aid in providing feedback to the user.

Gestures are detected from the raw depth map. They don't depend on hand points. They are most useful for determining where a hand is in space to start hand tracking. Unlike handpoints, they do not follow a specific hand, so they will react to a hand anywhere in the room.

If you want to detect user gestures for input purposes, it is often better to use a single "focus" gesture to start hand tracking, and then detect other gestures from the handpoints. This enables an application to focus on a single user, even in a crowded room.

Hand points can also be more computationally efficient. The gesture detection algorithm for any given gesture uses about as much CPU bandwidth as the hand tracker. Adding more gestures, or also running the hand tracker, increases CPU consumption linearly. Finding gestures from hand points, on the other hand, can be done at negligible CPU cost once the hand-point algorithm has run. This means that user interface complexity will scale better in terms of CPU cost.

Parameters:
type - GestureType you wish to detect.
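
A sketch of the "focus gesture" pattern described above (GestureType.WAVE is an assumed constant, and the frame and gesture accessors are assumed as in the earlier sketches):

    tracker.startGestureDetection(GestureType.WAVE);        // scan the whole field of view for a wave

    HandTrackerFrameRef frame = tracker.readFrame();
    for (GestureData gesture : frame.getGestures()) {
        if (gesture.isComplete()) {
            tracker.startHandTracking(gesture.getCurrentPosition());
            tracker.stopGestureDetection(GestureType.WAVE); // further gestures can be derived from hand points
        }
    }
    frame.release();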

stopGestureDetection

public void stopGestureDetection(GestureType type)
Stop detecting a specific gesture. This disables detection of the specified gesture. Doing this when that gesture is no longer required prevents false detection and saves CPU bandwidth.

Parameters:
type - GestureType you would like to stop detecting.

convertHandCoordinatesToDepth

public Point2D<java.lang.Float> convertHandCoordinatesToDepth(Point3D<java.lang.Float> point)

In general, two coordinate systems are used in OpenNI 2.0. These conventions are also followed in NiTE 2.0.

Hand point and gesture positions are provided in "Real World" coordinates, while the native coordinate system of depth maps is the "projective" system. In short, "Real World" coordinates locate objects using a Cartesian coordinate system with the origin at the sensor. "Projective" coordinates measure straight line distance from the sensor (perpendicular to the sensor face), and indicate x/y coordinates using pixels in the image (which is mathematically equivalent to specifying angles). See the OpenNI 2.0 documentation online for more information.

Note that no output is given for the Z coordinate. Z coordinates remain the same when performing the conversion. An input value is still required for Z, since this can affect the x/y output.

This function allows you to convert the coordinates of a hand point or gesture to the native coordinates of a depth map. This is useful if you need to find the hand position on the raw depth map.

Parameters:
point - A point in the "real world" coordinate system.
Returns:
A point in the "projective" coordinate system.
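
For example, to locate a tracked hand on the raw depth image (a sketch; HandData.getPosition(), the frame accessor getHands(), and the Point2D getX()/getY() accessors are assumed):

    for (HandData hand : frame.getHands()) {
        if (hand.isTracking()) {
            // Project the 3D "real world" hand position onto the depth map's pixel grid.
            Point2D<Float> pixel = tracker.convertHandCoordinatesToDepth(hand.getPosition());
            System.out.println("Hand " + hand.getId() + " at pixel (" + pixel.getX() + ", " + pixel.getY() + ")");
        }
    }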

convertDepthCoordinatesToHand

public Point2D<java.lang.Float> convertDepthCoordinatesToHand(Point3D<java.lang.Integer> point)

In general, two coordinate systems are used in OpenNI 2.0. These conventions are also followed in NiTE 2.0.

Hand point and gesture positions are provided in "Real World" coordinates, while the native coordinate system of depth maps is the "projective" system. In short, "Real World" coordinates locate objects using a Cartesian coordinate system with the origin at the sensor. "Projective" coordinates measure straight line distance from the sensor, and indicate x/y coordinates using pixels in the image (which is mathematically equivalent to specifying angles). See the OpenNI 2.0 documentation online for more information.

This function allows you to convert the native depth map coordinates to the system used by the hand points. This might be useful for performing certain types of measurements (eg distance between a hand and an object identified only in the depth map).

Note that no output is given for the Z coordinate. Z coordinates remain the same when performing the conversion. An input value is still required for Z, since this can affect the x/y output.

Parameters:
point - A point in the "projective" coordinate system.
Returns:
A point in the "real world" coordinate system.