FUBI, the Full Body Interaction Framework, was designed for recognizing full-body gestures and postures. Recognition happens in real time from the data of an OpenNI-compatible depth sensor, in particular the Microsoft Kinect sensor.
The framework distinguishes between four gesture categories:
· Static postures: a configuration of several joints with no movement.
· Gestures with linear movement: linear movement of several joints with a specific direction and speed.
· Combinations of postures and linear movements: combinations of categories 1 and 2 with specific time constraints.
· Complex gestures: detailed observation of one or more joints over a certain amount of time.
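As a rough illustration of the first category, a static posture boils down to a predicate over a single skeleton frame. The following sketch is not FUBI code; the joint names, the frame layout, and the 50 mm threshold are hypothetical stand-ins for how a "hands above head" posture could be checked on raw joint positions (in millimeters).

```python
# Illustrative sketch (not the FUBI API): a static posture check
# ("hands above head") on one skeleton frame. Joint names, the frame
# layout, and the threshold are hypothetical.

def hands_above_head(joints, min_diff_mm=50):
    """Return True if both hands are at least min_diff_mm above the head."""
    head_y = joints["head"][1]
    return all(joints[j][1] > head_y + min_diff_mm
               for j in ("left_hand", "right_hand"))

# Example frame: {joint_name: (x, y, z)} in millimeters, y pointing up
frame = {
    "head": (0.0, 1600.0, 2000.0),
    "left_hand": (-200.0, 1700.0, 1950.0),
    "right_hand": (210.0, 1720.0, 1980.0),
}
print(hands_above_head(frame))  # True: both hands are >50 mm above the head
```

A linear-movement gesture (category 2) would additionally compare such joint positions across consecutive frames to derive direction and speed.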
What's New in This Release:
· Enhanced the Add..Recognizer(..) functions so that they offer the same functionality as available in XML.
· Added new face tracking rendering
· Fixed bugs in the maxAngleDiff property, the clearRecognizers() function, the OpenCV preprocessor, the face joint names, the updateUsers() function, the updateTrackingData() function, and several more minor ones
· Local positions now have the complete torso transformation (translation and rotation) removed, so they are useful in cases where one wants to look at directions relative only to the body rather than to the world coordinate system
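Removing the torso transformation means translating a joint by the torso position and then undoing the torso rotation. The sketch below is not FUBI code; it assumes the torso orientation is available as a 3x3 rotation matrix, which makes the inverse simply the transpose.

```python
# Illustrative sketch (not FUBI code): converting a world-space joint
# position into torso-local coordinates by removing the torso
# translation and rotation. Assumes the torso orientation is a 3x3
# rotation matrix, whose inverse is its transpose.

def to_torso_local(joint, torso_pos, torso_rot):
    """Translate by -torso_pos, then apply the transposed torso rotation."""
    t = [joint[i] - torso_pos[i] for i in range(3)]
    return [sum(torso_rot[r][c] * t[r] for r in range(3)) for c in range(3)]

# Torso rotated 90 degrees around the vertical (y) axis:
rot_y90 = [[0, 0, 1],
           [0, 1, 0],
           [-1, 0, 0]]
hand_world = [500.0, 1200.0, 2000.0]
torso_world = [0.0, 1000.0, 2000.0]
print(to_torso_local(hand_world, torso_world, rot_y90))  # [0.0, 200.0, 500.0]
```

In the torso-local result, a hand that was 500 mm to the side in world space ends up in front of the (rotated) body, which is exactly what a body-relative direction check needs.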
· Added body measurements usable for joint relation recognizers as alternative to concrete millimeters
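The idea behind body measurements is to phrase a joint relation as a fraction of the user's own proportions instead of an absolute distance, so the same recognizer works for users of different sizes. The sketch below is not FUBI code; the "torso height" definition (neck-to-waist distance) and the 0.5 factor are hypothetical.

```python
# Illustrative sketch (not FUBI code): a joint relation expressed in
# body measurements rather than millimeters. "Torso height" here is
# the neck-to-waist distance; the 0.5 factor is hypothetical.
import math

def dist(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def hand_far_from_torso(hand, torso, neck, waist, factor=0.5):
    """True if the hand is farther from the torso center than
    factor * torso height -- a size-independent threshold."""
    torso_height = dist(neck, waist)
    return dist(hand, torso) > factor * torso_height

neck, waist = (0, 1500, 2000), (0, 1000, 2000)   # torso height = 500 mm
torso = (0, 1250, 2000)
print(hand_far_from_torso((400, 1250, 2000), torso, neck, waist))  # True (400 > 250)
```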
· All printf calls have been replaced by advanced logging functions that can also be deactivated according to the logging level (set in FubiConfig.h)
· The OpenNI2 sensor also approximates face positions for the chin/forehead/nose/ears to make those more usable (Note: they are dependent on the torso orientation, but not on the head orientation [as this is not tracked])