Tensorflow Lite Posenet Demo
by LazyDroid
Estimates the pose of a person using the spatial locations of key body joints
| App Name | Tensorflow Lite Posenet Demo |
| --- | --- |
| Developer | LazyDroid |
| Category | Education |
| Download Size | 21 MB |
| Latest Version | 3.0 |
| Average Rating | 0.00 |
| Rating Count | 0 |
Pose estimation is the task of using an ML model to estimate the pose of a person from an image or a video by estimating the spatial locations of key body joints (keypoints).
Pose estimation refers to computer vision techniques that detect human figures in images and videos, so that one could determine, for example, where someone’s elbow shows up in an image. Note that pose estimation merely estimates where key body joints are; it does not recognize who is in an image or video.
The PoseNet model takes a processed camera image as the input and outputs information about keypoints. The keypoints detected are indexed by a part ID, with a confidence score between 0.0 and 1.0. The confidence score indicates the probability that a keypoint exists in that position.
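The keypoint output described above can be sketched in a few lines of Python. The 17 part names below are the standard PoseNet keypoint set; the `(part_id, score, x, y)` tuple layout is illustrative only and is not the demo app's actual API.

```python
# The 17 keypoints of the standard PoseNet model, ordered by part ID.
PART_NAMES = ["nose", "leftEye", "rightEye", "leftEar", "rightEar",
              "leftShoulder", "rightShoulder", "leftElbow", "rightElbow",
              "leftWrist", "rightWrist", "leftHip", "rightHip",
              "leftKnee", "rightKnee", "leftAnkle", "rightAnkle"]

def confident_keypoints(keypoints, min_score=0.5):
    """Keep only keypoints whose confidence score passes the threshold.

    keypoints: list of (part_id, score, x, y) tuples, with score in [0.0, 1.0]
    (hypothetical layout for illustration). Returns (part_name, score, x, y).
    """
    return [(PART_NAMES[pid], score, x, y)
            for pid, score, x, y in keypoints
            if score >= min_score]

# Example: only the nose passes a 0.5 threshold.
detections = [(0, 0.92, 120.0, 80.0),   # nose, high confidence
              (7, 0.31, 60.0, 200.0)]   # leftElbow, low confidence
print(confident_keypoints(detections, min_score=0.5))
```

Thresholding on the confidence score like this is the usual way to discard keypoints the model is unsure about before drawing a skeleton overlay.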
Performance benchmarks
Performance varies based on your device and output stride (heatmaps and offset vectors). The PoseNet model is image size invariant, which means it can predict pose positions in the same scale as the original image regardless of whether the image is downscaled. This means you can configure the model to have a higher accuracy at the expense of performance.
The output stride determines how much the output is scaled down relative to the input image size. It affects the size of the layers and the model outputs.
The higher the output stride, the smaller the resolution of the layers in the network and of the outputs, and correspondingly the lower their accuracy. In this implementation, the output stride can have values of 8, 16, or 32. In other words, an output stride of 32 will result in the fastest performance but lowest accuracy, while 8 will result in the highest accuracy but slowest performance. The recommended starting value is 16.
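The scaling described above can be made concrete with the formula the PoseNet reference implementation uses: an input of N pixels per side yields an output grid of (N - 1) / output_stride + 1 cells per side. The 257-pixel input below is a common PoseNet input size, assumed here purely for illustration.

```python
def output_resolution(image_size, output_stride):
    """Spatial resolution of the PoseNet heatmap/offset outputs.

    Uses the PoseNet formula (image_size - 1) / output_stride + 1;
    the stride is expected to evenly divide image_size - 1.
    """
    return (image_size - 1) // output_stride + 1

# A 257x257 input at each supported stride: larger strides mean a
# coarser output grid (faster but less accurate), as the text explains.
for stride in (8, 16, 32):
    print(stride, output_resolution(257, stride))
```

For a 257x257 input this gives 33x33, 17x17, and 9x9 output grids for strides 8, 16, and 32 respectively, which is why stride 32 is fastest but least accurate.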
Recent changes:
- Updated Posenet library
- Updated SDK versions
- Latest Posenet version