# MediaPipeFaceMesh

MediaPipeFaceMesh-TFJS uses the TF.js runtime to execute the model and the preprocessing and postprocessing steps.

Please try our live [demo](https://storage.googleapis.com/tfjs-models/demos/face-landmarks-detection/index.html?model=mediapipe_facemesh).
In the runtime-backend dropdown, choose 'tfjs-webgl'.

--------------------------------------------------------------------------------

## Table of Contents

1. [Installation](#installation)
2. [Usage](#usage)

## Installation

To use MediaPipeFaceMesh, you need to first select a runtime (TensorFlow.js or MediaPipe).
This guide is for the TensorFlow.js
runtime. The guide for the MediaPipe runtime can be found
[here](https://github.com/tensorflow/tfjs-models/tree/master/face-landmarks-detection/src/mediapipe).

Via script tags:

```html
<!-- Require the peer dependencies of face-landmarks-detection. -->
<script src="https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>

<!-- You must explicitly require a TF.js backend if you're not using the TF.js union bundle. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-backend-webgl"></script>

<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/face-landmarks-detection"></script>
```
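
When loading the library via script tags, the API is exposed on the global `faceLandmarksDetection` object. A minimal sketch of using it from a script tag (the detector calls are explained in the Usage section below):

```html
<script>
  // Assumes the script tags above have already loaded.
  const model = faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh;
  faceLandmarksDetection.createDetector(model, {runtime: 'tfjs'})
      .then(detector => {
        // Use detector.estimateFaces(...) as shown in the Usage section.
      });
</script>
```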

Via npm:

```sh
yarn add @tensorflow-models/face-landmarks-detection
yarn add @tensorflow/tfjs-core @tensorflow/tfjs-converter
yarn add @tensorflow/tfjs-backend-webgl
yarn add @mediapipe/face_mesh
```

--------------------------------------------------------------------------------

## Usage

If you are using the face-landmarks-detection API via npm, you need to import the libraries first.

### Import the libraries

```javascript
import * as faceLandmarksDetection from '@tensorflow-models/face-landmarks-detection';
import '@tensorflow/tfjs-core';
// Register WebGL backend.
import '@tensorflow/tfjs-backend-webgl';
import '@mediapipe/face_mesh';
```
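
Optionally, before creating a detector, you can explicitly select the backend and wait for it to finish initializing. A minimal sketch using the tfjs-core API:

```javascript
import * as tf from '@tensorflow/tfjs-core';

// Explicitly select the WebGL backend and wait until it is ready.
await tf.setBackend('webgl');
await tf.ready();
```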

### Create a detector

Pass in `faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh` from the
`faceLandmarksDetection.SupportedModels` enum along with a `detectorConfig` to the
`createDetector` method to load and initialize the model.

`detectorConfig` is an object that defines MediaPipeFaceMesh specific configurations for `MediaPipeFaceMeshTfjsModelConfig`:

* *runtime*: Must be set to 'tfjs'.

* *maxFaces*: Defaults to 1. The maximum number of faces that will be detected by the model. The number of returned faces can be less than the maximum (for example, when no faces are present in the input). It is highly recommended to set this value to the expected maximum number of faces, otherwise the model will continue to search for the missing faces, which can slow down performance.

* *refineLandmarks*: Defaults to false. If set to true, refines the landmark coordinates around the eyes and lips, and outputs additional landmarks around the irises.

* *detectorModelUrl*: An optional string that specifies a custom URL for
the detector model. This is useful for areas/countries that don't have access to the model hosted on tf.hub. It also accepts an `io.IOHandler`, which can be used with
[tfjs-react-native](https://github.com/tensorflow/tfjs/tree/master/tfjs-react-native)
to load the model from the app bundle directory using
[bundleResourceIO](https://github.com/tensorflow/tfjs/blob/master/tfjs-react-native/src/bundle_resource_io.ts#L169).

* *landmarkModelUrl*: An optional string that specifies a custom URL for
the landmark model. This is useful for areas/countries that don't have access to the model hosted on tf.hub. It also accepts an `io.IOHandler`, which can be used with
[tfjs-react-native](https://github.com/tensorflow/tfjs/tree/master/tfjs-react-native)
to load the model from the app bundle directory using
[bundleResourceIO](https://github.com/tensorflow/tfjs/blob/master/tfjs-react-native/src/bundle_resource_io.ts#L169).

```javascript
const model = faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh;
const detectorConfig = {
  runtime: 'tfjs',
};
const detector = await faceLandmarksDetection.createDetector(model, detectorConfig);
```
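
The optional fields described above can be combined in the same config. A fuller sketch (the custom model URLs below are hypothetical placeholders for self-hosted copies of the models):

```javascript
const detectorConfig = {
  runtime: 'tfjs',
  maxFaces: 2,           // expect at most two faces in the input
  refineLandmarks: true, // refine eyes/lips and add iris landmarks
  // Hypothetical self-hosted model locations; substitute your own URLs.
  detectorModelUrl: 'https://example.com/face_detector/model.json',
  landmarkModelUrl: 'https://example.com/face_landmarks/model.json',
};
const detector = await faceLandmarksDetection.createDetector(
    faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh, detectorConfig);
```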

### Run inference

Now you can use the detector to detect faces. The `estimateFaces` method
accepts both image and video in many formats, including `tf.Tensor3D`,
`HTMLVideoElement`, `HTMLImageElement`, and `HTMLCanvasElement`. If you want more
options, you can pass in a second `estimationConfig` parameter.

`estimationConfig` is an object that defines MediaPipeFaceMesh specific configurations for `MediaPipeFaceMeshTfjsEstimationConfig`:

* *flipHorizontal*: Optional. Defaults to false. When the image data comes from a camera that mirrors its output (such as a user-facing camera), set this to true so the results are flipped horizontally.

* *staticImageMode*: Optional. Defaults to false. If set to true, face detection
runs on every input image. Otherwise, detection runs once and the model then
simply tracks those landmarks without invoking another detection until it
loses track of any of the faces (ideal for videos).

The following code snippet demonstrates how to run inference:

```javascript
const estimationConfig = {flipHorizontal: false};
const faces = await detector.estimateFaces(image, estimationConfig);
```
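
For video input, a common pattern is to call `estimateFaces` in an animation loop. A minimal sketch, assuming a playing `<video>` element (with a hypothetical id of `video`) fed by a user-facing, mirrored camera:

```javascript
const video = document.getElementById('video'); // hypothetical element id

async function renderLoop() {
  // Flip results horizontally because the front-facing camera is mirrored.
  const faces = await detector.estimateFaces(video, {flipHorizontal: true});
  // ... draw or otherwise consume `faces` here ...
  requestAnimationFrame(renderLoop);
}
renderLoop();
```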

Please refer to the Face API
[README](https://github.com/tensorflow/tfjs-models/blob/master/face-landmarks-detection/README.md#how-to-run-it)
for the structure of the returned `faces` array.
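
Each element of `faces` contains a `keypoints` array (see the linked README for the full structure). A minimal sketch of consuming the result (the `name` field is only present for some keypoints):

```javascript
for (const face of faces) {
  for (const keypoint of face.keypoints) {
    // x and y are in pixel coordinates of the input image;
    // z is a relative depth value.
    console.log(keypoint.x, keypoint.y, keypoint.z, keypoint.name);
  }
}
```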