java.lang.Object
  com.primesense.nite.HandTracker
public class HandTracker
This is the main object of the Hand Tracker algorithm. It (along with UserTracker) is one of two main classes in NiTE. All NiTE algorithms are accessible through one of these two classes.
HandTracker provides access to all algorithms related to tracking individual hands, as well as detecting gestures in the depth map.
The core of the hand tracking is an algorithm that finds human hands in each frame of the depth map and reports the position of those hands in space. This can be used for simple detection of higher-level gestures and the implementation of gesture-based user interfaces. Unlike full body tracking algorithms, handpoint-based tracking works on users that are sitting and does not require that a full body be visible.
Gesture tracking is generally used to initiate hand tracking. It allows detection of gestures in the raw depth map, without requiring hand points (in contrast to higher-level gestures that might be used to implement a UI using handpoints). These gestures can be located in space to provide a hint to the hand tracking algorithm on where to start tracking.
The output of the HandTracker occurs one frame at a time. For each input depth frame, a hand tracking frame is output with hand positions, gesture positions, etc. A listener class is provided that allows for event-driven reaction to each new frame as it arrives.
Note that creating a HandTracker requires a valid OpenNI 2.0 Device to be initialized in order to provide depth information. See the OpenNI 2.0 documentation for information on connecting a device and starting the stream of depth maps that will drive this algorithm.
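For orientation, a minimal polling sketch is shown below. It is not taken from the NiTE distribution: the no-argument OpenNI.initialize() and NiTE.initialize() calls and the HandTrackerFrameRef.release() method are assumptions based on the standard OpenNI 2.0 and NiTE 2.0 Java wrappers. A real application would normally register a HandTracker.NewFrameListener instead of polling.

    import org.openni.OpenNI;
    import com.primesense.nite.NiTE;
    import com.primesense.nite.HandTracker;
    import com.primesense.nite.HandTrackerFrameRef;

    public class HandTrackerSkeleton {
        public static void main(String[] args) {
            OpenNI.initialize();                        // start OpenNI 2.0, which supplies the depth stream
            NiTE.initialize();                          // start NiTE 2.0
            HandTracker tracker = HandTracker.create(); // attach to any available depth device
            try {
                for (int i = 0; i < 100; i++) {
                    HandTrackerFrameRef frame = tracker.readFrame(); // next algorithm snapshot
                    // ... inspect hand positions and gesture events here ...
                    frame.release();                    // assumed: frame refs must be released
                }
            } finally {
                tracker.destroy();                      // release the tracker's resources
            }
        }
    }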
See Also:
    UserTracker, NiTE
Nested Class Summary

static interface  HandTracker.NewFrameListener
    This is a listener interface that is used to react to events generated by the HandTracker class.
Method Summary

void  addNewFrameListener(HandTracker.NewFrameListener listener)
    Adds a NewFrameListener object to this HandTracker so that it will respond when a new frame is generated.

Point2D<java.lang.Float>  convertDepthCoordinatesToHand(Point3D<java.lang.Integer> point)
    In general, two coordinate systems are used in OpenNI 2.0.

Point2D<java.lang.Float>  convertHandCoordinatesToDepth(Point3D<java.lang.Float> point)
    In general, two coordinate systems are used in OpenNI 2.0.

static HandTracker  create()
    Creates and initializes an empty Hand Tracker.

static HandTracker  create(org.openni.Device device)
    Creates and initializes an empty Hand Tracker.

void  destroy()
    Shuts down the hand tracker and releases all resources used by it.

long  getHandle()
    Getter function for the handle of the hand tracker.

float  getSmoothingFactor()
    Queries the current hand smoothing factor.

HandTrackerFrameRef  readFrame()
    Gets the next snapshot of the algorithm.

void  removeNewFrameListener(HandTracker.NewFrameListener listener)
    Removes a NewFrameListener object from this HandTracker's list of listeners.

void  setSmoothingFactor(float factor)
    Control the smoothing factor of the hand points.

void  startGestureDetection(GestureType type)
    Start detecting a specific gesture.

short  startHandTracking(Point3D<java.lang.Float> position)
    Starts tracking a hand at a specific point in space.

void  stopGestureDetection(GestureType type)
    Stop detecting a specific gesture.

void  stopHandTracking(short id)
    Commands the algorithm to stop tracking a specific hand.
Methods inherited from class java.lang.Object

clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Method Detail
public static HandTracker create(org.openni.Device device)
Creates and initializes an empty Hand Tracker. This function should be the first one called when a new HandTracker object is constructed.
An OpenNI device with depth capabilities is required for this algorithm to work. See the OpenNI 2.0 documentation for more information about using an OpenNI 2.0 compliant hardware device and creating a Device object.
Parameters:
    device - Initialized OpenNI 2.0 Device object that provides depth streams.
public static HandTracker create()
Creates and initializes an empty Hand Tracker. This function should be the first one called when a new HandTracker object is constructed.
An OpenNI device with depth capabilities is required for this algorithm to work. See the OpenNI 2.0 documentation for more information about using an OpenNI 2.0 compliant hardware device and creating a Device object.
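As a sketch of when to prefer the Device overload, the snippet below enumerates attached sensors through the OpenNI 2.0 Java wrapper and passes one explicitly; OpenNI.enumerateDevices(), DeviceInfo.getUri(), Device.open(String) and Device.close() are assumptions based on the standard OpenNI 2.0 Java API, not members of this class.

    import java.util.List;
    import org.openni.Device;
    import org.openni.DeviceInfo;
    import org.openni.OpenNI;
    import com.primesense.nite.HandTracker;
    import com.primesense.nite.NiTE;

    public class ExplicitDeviceExample {
        public static void main(String[] args) {
            OpenNI.initialize();
            NiTE.initialize();
            // Pick a specific sensor rather than letting create() choose one.
            List<DeviceInfo> devices = OpenNI.enumerateDevices();
            Device device = Device.open(devices.get(0).getUri());
            HandTracker tracker = HandTracker.create(device);
            // ... use the tracker, then clean up ...
            tracker.destroy();
            device.close();
        }
    }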
public void destroy()
Shuts down the hand tracker and releases all resources used by it, undoing the work of create(). This function is called automatically by the destructor in the current implementation, but it is good practice to run it manually when the algorithm is no longer required. Running this function more than once is safe -- it simply exits if called on a non-valid HandTracker.
public long getHandle()
Getter function for the handle of the hand tracker.
public HandTrackerFrameRef readFrame()
Gets the next snapshot of the algorithm.
public void setSmoothingFactor(float factor)
Control the smoothing factor of the hand points.
Parameters:
    factor - The smoothing factor.
See Also:
    getSmoothingFactor()

public float getSmoothingFactor()
Queries the current hand smoothing factor.
See Also:
    setSmoothingFactor(float factor)
public short startHandTracking(Point3D<java.lang.Float> position)
Starts tracking a hand at a specific point in space. Use of this function assumes that there actually is a hand in the location given. In general, the hand algorithm is much better at tracking a specific hand as it moves around than it is at finding the hand in the first place.
This function is typically used in conjunction with gesture detection. The position in space of the gesture is used to initiate hand tracking. It is also possible to start hand tracking without a gesture if your application will constrain users to place their hands at a certain known point in space. A final possibility is for applications or third-party middleware to implement their own hand 'finding' algorithm, either in depth or from some other information source, and use that data to initialize the hand tracker.
The position in space of the hand point is specified in "real world" coordinates. See the OpenNI 2.0 documentation for more information on coordinate systems.
Parameters:
    position - Point where the hand is known/suspected to exist.
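The "known point in space" case mentioned above might look like the following sketch, where tracker is a HandTracker created as shown earlier; the coordinates are purely illustrative, and the Point3D constructor is an assumption about the com.primesense.nite.Point3D helper class.

    // Ask the user to hold a hand at a fixed, advertised spot, then track there.
    // "Real world" coordinates, in millimeters (values are illustrative only).
    Point3D<Float> knownSpot = new Point3D<Float>(0.0f, 200.0f, 1500.0f);
    short handId = tracker.startHandTracking(knownSpot);
    // Keep handId to recognize this hand in later frames or to stop tracking it.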
public void stopHandTracking(short id)
Commands the algorithm to stop tracking a specific hand.
Parameters:
    id - Id of the hand to quit tracking.

public void addNewFrameListener(HandTracker.NewFrameListener listener)
Adds a NewFrameListener object to this HandTracker so that it will respond when a new frame is generated.
Parameters:
    listener - A listener to add.

public void removeNewFrameListener(HandTracker.NewFrameListener listener)
Removes a NewFrameListener object from this HandTracker's list of listeners.
Parameters:
    listener - A listener to remove.

public void startGestureDetection(GestureType type)
Start detecting a specific gesture. This function will cause the algorithm to start scanning the entire field of view for any hand that appears to be performing the gesture specified. Intermediate progress is available to aid in providing feedback to the user.
Gestures are detected from the raw depth map. They don't depend on hand points. They are most useful for determining where a hand is in space to start hand tracking. Unlike handpoints, they do not follow a specific hand, so they will react to a hand anywhere in the room.
If you want to detect user gestures for input purposes, it is often better to use a single "focus" gesture to start hand tracking, and then detect other gestures from the handpoints. This enables an application to focus on a single user, even in a crowded room.
Hand points can also be more computationally efficient. The gesture tracking algorithm for any given gesture uses about as much CPU bandwidth as the hand tracker. Adding more gestures or also running the hand tracker increases CPU consumption linearly. Finding gestures from hand points, on the other hand, can be done for negligible CPU cost once the handpoint algorithm has run. This means that user interface complexity will scale better in terms of CPU cost.
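The "focus gesture" pattern described above could be wired up as in this sketch; the onNewFrame callback name and the GestureData accessors (isComplete(), getCurrentPosition()) follow the NiTE 2.0 Java samples and should be treated as assumptions, as should the GestureType.WAVE constant.

    // Scan the whole field of view for a wave, then hand off from the gesture
    // position to a tracked hand point and stop scanning.
    tracker.startGestureDetection(GestureType.WAVE);
    tracker.addNewFrameListener(new HandTracker.NewFrameListener() {
        @Override
        public void onNewFrame(HandTracker t) {
            HandTrackerFrameRef frame = t.readFrame();
            for (GestureData gesture : frame.getGestures()) {
                if (gesture.isComplete()) {
                    // Use the gesture location as the hint for hand tracking.
                    t.startHandTracking(gesture.getCurrentPosition());
                    // Focus on this one user, even in a crowded room.
                    t.stopGestureDetection(GestureType.WAVE);
                }
            }
            frame.release();
        }
    });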
Parameters:
    type - The GestureType you wish to detect.

public void stopGestureDetection(GestureType type)
Stop detecting a specific gesture.
Parameters:
    type - The GestureType you would like to stop detecting.

public Point2D<java.lang.Float> convertHandCoordinatesToDepth(Point3D<java.lang.Float> point)
In general, two coordinate systems are used in OpenNI 2.0. These conventions are also followed in NiTE 2.0.
Hand point and gesture positions are provided in "Real World" coordinates, while the native coordinate system of depth maps is the "projective" system. In short, "Real World" coordinates locate objects using a Cartesian coordinate system with the origin at the sensor. "Projective" coordinates measure straight line distance from the sensor (perpendicular to the sensor face), and indicate x/y coordinates using pixels in the image (which is mathematically equivalent to specifying angles). See the OpenNI 2.0 documentation online for more information.
Note that no output is given for the Z coordinate. Z coordinates remain the same when performing the conversion. An input value is still required for Z, since this can affect the x/y output.
This function allows you to convert the coordinates of a hand point or gesture to the native coordinates of a depth map. This is useful if you need to find the hand position on the raw depth map.
Parameters:
    point - A point in the "real world" coordinate system.
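Locating tracked hands on the raw depth map might then look like this sketch; HandData and its accessors (isTracking(), getPosition(), getId()), frame.getHands(), and the Point2D/Point3D getters are assumptions based on the companion NiTE 2.0 classes.

    // frame is a HandTrackerFrameRef obtained from tracker.readFrame()
    for (HandData hand : frame.getHands()) {
        if (hand.isTracking()) {
            Point3D<Float> world = hand.getPosition();  // "real world" mm
            Point2D<Float> pixel = tracker.convertHandCoordinatesToDepth(world);
            // Z is unchanged; pixel x/y index directly into the depth map.
            System.out.printf("hand %d at pixel (%.0f, %.0f), depth %.0f mm%n",
                    hand.getId(), pixel.getX(), pixel.getY(), world.getZ());
        }
    }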
public Point2D<java.lang.Float> convertDepthCoordinatesToHand(Point3D<java.lang.Integer> point)
In general, two coordinate systems are used in OpenNI 2.0. These conventions are also followed in NiTE 2.0.
Hand point and gesture positions are provided in "Real World" coordinates, while the native coordinate system of depth maps is the "projective" system. In short, "Real World" coordinates locate objects using a Cartesian coordinate system with the origin at the sensor. "Projective" coordinates measure straight line distance from the sensor, and indicate x/y coordinates using pixels in the image (which is mathematically equivalent to specifying angles). See the OpenNI 2.0 documentation online for more information.
This function allows you to convert the native depth map coordinates to the system used by the hand points. This might be useful for performing certain types of measurements (e.g. the distance between a hand and an object identified only in the depth map).
Note that no output is given for the Z coordinate. Z coordinates remain the same when performing the conversion. An input value is still required for Z, since this can affect the x/y output.
Parameters:
    point - A point in the "projective" coordinate system.
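A matching sketch for the reverse direction, e.g. for the measurement use case above; the Point3D constructor and the Point2D getters are again assumptions about the companion helper classes.

    // A pixel of interest on the depth map: (x, y) in pixels, z in millimeters.
    Point3D<Integer> depthPixel = new Point3D<Integer>(320, 240, 1200);
    Point2D<Float> worldXY = tracker.convertDepthCoordinatesToHand(depthPixel);
    // Z passes through unchanged, so the full "real world" point is
    // (worldXY.getX(), worldXY.getY(), 1200), directly comparable with a
    // hand position obtained from HandData.getPosition().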