March 09, 2020 —
Posted by Ann Yuan and Andrey Vakunov, Software Engineers at Google
Today we’re excited to release two new packages: facemesh and handpose for tracking key landmarks on faces and hands respectively. This release has been a collaborative effort between the MediaPipe and TensorFlow.js teams within Google Research.
Try the demos live in your browser. The facemesh package finds facial boundaries and landmarks…
Facemesh package | Handpose package
You can get the facemesh package from npm:
import * as facemesh from '@tensorflow-models/facemesh';
or via script tags:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/facemesh"></script>
// Load the MediaPipe facemesh model assets.
const model = await facemesh.load();
// Pass in a video stream to the model to obtain
// an array of detected faces from the MediaPipe graph.
const video = document.querySelector("video");
const faces = await model.estimateFaces(video);
// Each face object contains a `scaledMesh` property,
// which is an array of 468 landmarks.
faces.forEach(face => console.log(face.scaledMesh));
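The snippet above runs a single inference. For continuous tracking you would typically call estimateFaces once per animation frame; here is a minimal sketch, assuming the video element is already streaming from the user's webcam:
// A hypothetical real-time loop: re-run facemesh on every animation frame.
// Assumes `model` and `video` are set up as in the snippet above.
async function renderPrediction() {
  const faces = await model.estimateFaces(video);
  if (faces.length > 0) {
    // Hand the landmarks to your own rendering or analysis code here.
    console.log(`Detected ${faces.length} face(s)`);
  }
  requestAnimationFrame(renderPrediction);
}
renderPrediction();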
The input to estimateFaces can be a video, a static image, or even an ImageData interface for use in Node.js pipelines. Facemesh then returns an array of prediction objects for the faces in the input, which include information about each face (e.g. a confidence score and the locations of 468 landmarks within the face). Here is a sample prediction object:
{
faceInViewConfidence: 1,
boundingBox: {
topLeft: [232.28, 145.26], // [x, y]
bottomRight: [449.75, 308.36],
},
mesh: [
[92.07, 119.49, -17.54], // [x, y, z]
[91.97, 102.52, -30.54],
...
],
scaledMesh: [
[322.32, 297.58, -17.54],
[322.18, 263.95, -30.54],
...
],
annotations: {
silhouette: [
[326.19, 124.72, -3.82],
[351.06, 126.30, -3.00],
...
],
...
}
}
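Because scaledMesh appears to be expressed in the coordinate space of the input (unlike mesh), the landmarks can be drawn directly on top of the source video. Here is a minimal, hypothetical sketch, assuming an overlay canvas with id "overlay" sized to match the video:
// Hypothetical sketch: draw every scaledMesh keypoint as a small dot.
// Assumes a <canvas id="overlay"> with the same width/height as the video.
const canvas = document.getElementById('overlay');
const ctx = canvas.getContext('2d');
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = 'red';
faces.forEach(face => {
  face.scaledMesh.forEach(([x, y]) => {
    ctx.fillRect(x - 1, y - 1, 2, 2); // 2 px square per landmark
  });
});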
Refer to our README for more details about the API.
The handpose package is likewise available from npm:
import * as handpose from '@tensorflow-models/handpose';
or via script tags:
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/handpose"></script>
// Load the MediaPipe handpose model assets.
const model = await handpose.load();
// Pass in a video stream to the model to obtain
// a prediction from the MediaPipe graph.
const video = document.querySelector("video");
const hands = await model.estimateHands(video);
// Each hand object contains a `landmarks` property,
// which is an array of 21 3-D landmarks.
hands.forEach(hand => console.log(hand.landmarks));
As with facemesh, the input to estimateHands can be a video, a static image, or an ImageData interface. The package then returns an array of objects describing hands in the input. Here is a sample prediction object:
{
handInViewConfidence: 1,
boundingBox: {
topLeft: [162.91, -17.42], // [x, y]
bottomRight: [548.56, 368.23],
},
landmarks: [
[472.52, 298.59, 0.00], // [x, y, z]
[412.80, 315.64, -6.18],
...
],
annotations: {
indexFinger: [
[412.80, 315.64, -6.18],
[350.02, 298.38, -7.14],
...
],
...
}
}
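The per-finger annotations and the raw landmarks make simple gesture heuristics easy to prototype. As an illustrative sketch (assuming the conventional MediaPipe hand ordering, in which landmark 4 is the thumb tip and landmark 8 is the index fingertip), a rough pinch detector might look like this:
// Hypothetical pinch check: measure the 2D distance between the thumb tip
// and the index fingertip (assumed to be landmarks 4 and 8 respectively).
function isPinching(landmarks, thresholdPx = 30) {
  const [thumbX, thumbY] = landmarks[4];
  const [indexX, indexY] = landmarks[8];
  return Math.hypot(thumbX - indexX, thumbY - indexY) < thresholdPx;
}

hands.forEach(hand => {
  if (isPinching(hand.landmarks)) {
    console.log('Pinch detected');
  }
});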
Refer to our README for more details about the API.