August 06, 2019 —
Posted by Eileen Mao and Tanjin Prity, Engineering Practicum Interns at Google, Summer 2019
We are excited to release a TensorFlow Lite sample application for human pose estimation on Android using the PoseNet model. PoseNet is a vision model that estimates the pose of a person in an image or video by detecting the positions of key body parts. As an example, the model can estimate the position of…
[Figure: PoseNet App workflow]
The PoseNet library provides estimateSinglePose(), a method that runs the TensorFlow Lite interpreter on a processed RGB bitmap and returns a Person object. This page explains how to interpret PoseNet's inputs and outputs.

// Estimate the body part positions of a single person.
// Pass in a Bitmap and obtain a Person object.
estimateSinglePose(bitmap: Bitmap): Person {...}
The Person class contains the locations of the key body parts along with their associated confidence scores. The confidence score of a person is the average of the confidence scores of all the key points; each key point's score indicates the probability that the key point exists at that position.

// Person class holds a list of key points and an associated confidence score.
class Person {
  var keyPoints: List<KeyPoint> = listOf()
  var score: Float = 0.0f
}
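The averaging described above can be sketched in plain Kotlin (a simplified, non-Android illustration; the helper name is ours, not part of the library):

```kotlin
// A person's overall confidence is the mean of its key points'
// confidence scores; an empty list yields a score of 0.
fun personScore(keyPointScores: List<Float>): Float =
    if (keyPointScores.isEmpty()) 0.0f
    else keyPointScores.sum() / keyPointScores.size
```

For example, key points scored 0.9 and 0.7 give a person score of 0.8.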
Each KeyPoint holds information on the Position of a certain BodyPart and the confidence score of that key point. A list of all the defined key points can be accessed here.

// KeyPoint class holds information about each bodyPart, position, and score.
class KeyPoint {
  var bodyPart: BodyPart = BodyPart.NOSE
  var position: Position = Position()
  var score: Float = 0.0f
}
// Position class contains the x and y coordinates of a key point on the bitmap.
class Position {
  var x: Int = 0
  var y: Int = 0
}

// BodyPart enum holds the names of the seventeen body parts.
enum class BodyPart {
  NOSE,
  LEFT_EYE,
  RIGHT_EYE,
  ...
  RIGHT_ANKLE
}
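To show how these classes fit together, here is a small, hypothetical helper in plain Kotlin (the stand-in classes mirror the shapes above, trimmed of Android dependencies; findKeyPoint is our illustrative name, not a library function):

```kotlin
// Simplified stand-ins mirroring the classes above (illustration only).
enum class BodyPart { NOSE, LEFT_EYE, RIGHT_EYE, RIGHT_ANKLE }

class Position(var x: Int = 0, var y: Int = 0)

class KeyPoint(
    var bodyPart: BodyPart = BodyPart.NOSE,
    var position: Position = Position(),
    var score: Float = 0.0f
)

// Return the key point for the requested body part, or null if absent.
fun findKeyPoint(keyPoints: List<KeyPoint>, part: BodyPart): KeyPoint? =
    keyPoints.firstOrNull { it.bodyPart == part }
```

A caller can then read, say, the nose position via findKeyPoint(person.keyPoints, BodyPart.NOSE)?.position.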
The application's workflow consists of the following steps:

1. Capture the image data from the camera preview and convert it from YUV_420_888 to ARGB_8888 format.
2. Create a Bitmap object to hold the pixels from the RGB format frame data. Crop and scale the Bitmap to the model input size so that it can be passed to the model.
3. Call the estimateSinglePose() function from the PoseNet library to get the Person object.
4. Scale the Bitmap back to the screen size. Draw the new Bitmap on a Canvas object.
5. Use the Person object to draw a skeleton on the canvas. Display the key points with a confidence score above a certain threshold, which by default is 0.5.

A SurfaceView was used for the output display instead of separate View instances for the pose and the camera. SurfaceView takes care of placing the surface on the screen without a delay by getting, locking, and painting on the View canvas.
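The coordinate scaling and confidence filtering in the last two steps can be sketched as follows (plain Kotlin with illustrative names; the real app works on Android Bitmap and Canvas objects, and 257 is assumed here as the model input size):

```kotlin
// A key point's (x, y) on the model input, with its confidence score.
data class ScoredPoint(val x: Int, val y: Int, val score: Float)

// Map a coordinate from the model input size to the screen size.
fun scaleToScreen(v: Int, modelSize: Int, screenSize: Int): Int =
    v * screenSize / modelSize

// Keep only points confident enough to draw (the app's default
// threshold is 0.5), then rescale them to screen coordinates.
fun drawablePoints(
    points: List<ScoredPoint>,
    modelSize: Int, screenW: Int, screenH: Int,
    threshold: Float = 0.5f
): List<ScoredPoint> =
    points.filter { it.score >= threshold }
          .map {
              ScoredPoint(
                  scaleToScreen(it.x, modelSize, screenW),
                  scaleToScreen(it.y, modelSize, screenH),
                  it.score
              )
          }
```

With a 257-pixel model input and a 514-pixel-wide screen, a key point at x = 100 maps to x = 200, and any point scored below 0.5 is dropped before drawing.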