Commit 5d17611

Merge branch 'dbscan' of https://github.com/asvsfs/ml5-library into dbscan

2 parents: 2ec61c6 + a231fdb

File tree

18 files changed: +1216 −880 lines

18 files changed

+1216
-880
lines changed

.all-contributorsrc

Lines changed: 9 additions & 0 deletions
```diff
@@ -1163,6 +1163,15 @@
         "code",
         "ideas"
       ]
+    },
+    {
+      "login": "RaglandCodes",
+      "name": "Ragland Asir",
+      "avatar_url": "https://avatars3.githubusercontent.com/u/39048764?v=4",
+      "profile": "http://raglandcodes.github.io",
+      "contributions": [
+        "doc"
+      ]
     }
   ],
   "contributorsPerLine": 7,
```

README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -248,6 +248,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
     </tr>
     <tr>
       <td align="center"><a href="https://daleonai.com"><img src="https://avatars1.githubusercontent.com/u/2328571?v=4" width="100px;" alt=""/><br /><sub><b>Dale Markowitz</b></sub></a><br /><a href="https://github.com/ml5js/ml5-library/commits?author=dalequark" title="Code">💻</a> <a href="#ideas-dalequark" title="Ideas, Planning, & Feedback">🤔</a></td>
+      <td align="center"><a href="http://raglandcodes.github.io"><img src="https://avatars3.githubusercontent.com/u/39048764?v=4" width="100px;" alt=""/><br /><sub><b>Ragland Asir</b></sub></a><br /><a href="https://github.com/ml5js/ml5-library/commits?author=RaglandCodes" title="Documentation">📖</a></td>
     </tr>
   </table>
```

Binary file added (44.5 KB) — preview not included.
docs/_sidebar.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -38,6 +38,7 @@
 * [PoseNet](/reference/posenet.md)
 * [BodyPix](/reference/bodypix.md)
 * [UNET](/reference/unet.md)
+* [Facemesh](/reference/facemesh.md)
 * [FaceApi](/reference/face-api.md)
 * [StyleTransfer](/reference/style-transfer.md)
 * [pix2pix](/reference/pix2pix.md)
```

docs/reference/facemesh.md

Lines changed: 186 additions & 0 deletions
(new file)

# Facemesh

<center>
    <img style="display:block; max-height:20rem" alt="A screenshot of a video feed in which a person sits in a chair in a bedroom while green dots are drawn over different locations on their face." src="_media/reference__header-facemesh.jpg">
</center>

## Description

Facemesh is a machine-learning model for facial landmark detection in the browser. It can detect multiple faces at once and provides 468 3D facial landmarks that describe the geometry of each face. Facemesh works best when the faces in view take up a large percentage of the image or video frame; it may struggle with small or distant faces.

The ml5.js Facemesh model is ported from the [TensorFlow.js Facemesh implementation](https://github.com/tensorflow/tfjs-models/tree/master/facemesh#keypoints).

## Quickstart

```js
let predictions = [];
const video = document.getElementById('video');

// Create a new Facemesh instance
const facemesh = ml5.facemesh(video, modelLoaded);

// When the model is loaded
function modelLoaded() {
  console.log('Model Loaded!');
}

// Listen to new 'predict' events
facemesh.on('predict', results => {
  predictions = results;
});
```
## Usage

### Initialize

You can initialize ml5.facemesh with an optional `video`, a configuration `options` object, and a `callback` function.

```js
const facemesh = ml5.facemesh(?video, ?options, ?callback);
```

#### Parameters

* **video**: OPTIONAL. An HTMLVideoElement to run predictions on.
* **options**: OPTIONAL. An object whose properties affect the Facemesh model's accuracy, results, etc. See the available options in [TensorFlow's Facemesh documentation](https://github.com/tensorflow/tfjs-models/tree/master/facemesh#parameters-for-facemeshload).

```js
const options = {
  flipHorizontal: false,    // Whether the video should be flipped horizontally. Defaults to false.
  maxContinuousChecks: 5,   // How many frames to go without running the bounding box detector. Only relevant if maxFaces > 1. Defaults to 5.
  detectionConfidence: 0.9, // Threshold for discarding a prediction. Defaults to 0.9.
  maxFaces: 10,             // The maximum number of faces detected in the input. Set it as low as possible for better performance. Defaults to 10.
  iouThreshold: 0.3,        // A float in [0, 1]: the threshold for deciding whether boxes overlap too much in non-maximum suppression. Defaults to 0.3.
  scoreThreshold: 0.75,     // Threshold for discarding duplicate detections in non-maximum suppression. Defaults to 0.75.
}
```
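Since only the options you want to change need to be supplied, the way a partial options object combines with the defaults can be sketched in plain JavaScript. `FACEMESH_DEFAULTS` and `mergeOptions` below are hypothetical illustrations, not part of the ml5 API:

```javascript
// The documented defaults, restated for illustration only.
const FACEMESH_DEFAULTS = {
  flipHorizontal: false,
  maxContinuousChecks: 5,
  detectionConfidence: 0.9,
  maxFaces: 10,
  iouThreshold: 0.3,
  scoreThreshold: 0.75,
};

// Hypothetical helper: later spreads win, so user keys override defaults.
function mergeOptions(userOptions = {}) {
  return { ...FACEMESH_DEFAULTS, ...userOptions };
}

// e.g. track a single face on a mirrored webcam feed:
const merged = mergeOptions({ maxFaces: 1, flipHorizontal: true });
console.log(merged.maxFaces);            // 1 (overridden)
console.log(merged.detectionConfidence); // 0.9 (default kept)
```

You would then pass such an object as the second argument to `ml5.facemesh`.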
57+
58+
* **callback**: OPTIONAL. A function that is called once the model has loaded.
59+
60+
### Properties
61+
***
62+
#### .video
63+
> *Object*. HTMLVideoElement if given in the constructor. Otherwise it is null.
64+
***
65+
66+
***
67+
#### .config
68+
> *Object*. containing all of the configuration options passed into the model.
69+
***
70+
71+
***
72+
#### .model
73+
> *Object*. The bodyPix model.
74+
***
75+
76+
***
77+
#### .modelReady
78+
> *Boolean*. Truthy value indicating the model has loaded.
79+
***
80+
81+
### Methods

***
#### .predict()
> A function that returns the results of a single face detection prediction.

```js
facemesh.predict(inputMedia, callback);
```

📥 **Inputs**
* **inputMedia**: REQUIRED. An HTML or p5.js image, video, or canvas element to run a single prediction on.

* **callback**: OPTIONAL. A callback function to handle new face detection predictions. For example:

```js
facemesh.predict(inputMedia, results => {
  // do something with the results
  console.log(results);
});
```

📤 **Outputs**

* **Array**: An array of objects describing each detected face. See the [Facemesh keypoints map](https://github.com/tensorflow/tfjs-models/tree/master/facemesh#keypoints) for how the keypoints relate to facial landmarks.

```js
[
  {
    faceInViewConfidence: 1, // The probability of a face being present.
    boundingBox: { // The bounding box surrounding the face.
      topLeft: [232.28, 145.26],
      bottomRight: [449.75, 308.36],
    },
    mesh: [ // The 3D coordinates of each facial landmark.
      [92.07, 119.49, -17.54],
      [91.97, 102.52, -30.54],
      ...
    ],
    scaledMesh: [ // The 3D coordinates of each facial landmark, normalized.
      [322.32, 297.58, -17.54],
      [322.18, 263.95, -30.54]
    ],
    annotations: { // Semantic groupings of the `scaledMesh` coordinates.
      silhouette: [
        [326.19, 124.72, -3.82],
        [351.06, 126.30, -3.00],
        ...
      ],
      ...
    }
  }
]
```
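As a plain-JavaScript sketch of consuming this structure, a small helper can reduce one prediction object to a summary. `summarizeFace` is a hypothetical helper, not part of ml5, and the sample object below truncates `mesh`/`scaledMesh` to two keypoints for brevity:

```javascript
// Hypothetical helper (not part of ml5) summarizing one prediction object
// of the shape documented above.
function summarizeFace(prediction) {
  const [x1, y1] = prediction.boundingBox.topLeft;
  const [x2, y2] = prediction.boundingBox.bottomRight;
  return {
    confidence: prediction.faceInViewConfidence,
    width: x2 - x1,                          // bounding-box width in pixels
    height: y2 - y1,                         // bounding-box height in pixels
    center: [(x1 + x2) / 2, (y1 + y2) / 2],  // bounding-box center
    keypointCount: prediction.scaledMesh.length,
  };
}

// A sample prediction shaped like the documented output:
const sample = {
  faceInViewConfidence: 1,
  boundingBox: { topLeft: [232.28, 145.26], bottomRight: [449.75, 308.36] },
  mesh: [[92.07, 119.49, -17.54], [91.97, 102.52, -30.54]],
  scaledMesh: [[322.32, 297.58, -17.54], [322.18, 263.95, -30.54]],
};

const summary = summarizeFace(sample);
console.log(summary.keypointCount); // 2
```

In a real sketch, each element of the array passed to your callback would be processed this way, with `scaledMesh` holding all of the keypoints.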

***

#### .on('predict', ...)
> An event listener that returns the results whenever a new face detection prediction occurs.

```js
facemesh.on('predict', callback);
```

📥 **Inputs**

* **callback**: REQUIRED. A callback function to handle new face detection predictions. For example:

```js
facemesh.on('predict', results => {
  // do something with the results
  console.log(results);
});
```

📤 **Outputs**

* **Array**: An array of objects describing each detected face, exactly like the output of the `.predict()` method described above. See the [Facemesh keypoints map](https://github.com/tensorflow/tfjs-models/tree/master/facemesh#keypoints) for how the keypoints relate to facial landmarks.

## Examples

**p5.js**
* [Facemesh_Image](https://github.com/ml5js/ml5-library/tree/development/examples/p5js/Facemesh/Facemesh_Image)
* [Facemesh_Webcam](https://github.com/ml5js/ml5-library/tree/development/examples/p5js/Facemesh/Facemesh_Webcam)

**p5 web editor**
* [Facemesh_Image](https://editor.p5js.org/ml5/sketches/Facemesh_Image)
* [Facemesh_Webcam](https://editor.p5js.org/ml5/sketches/Facemesh_Webcam)

## Demo

No demos yet - contribute one today!

## Tutorials

No tutorials yet - contribute one today!

## Acknowledgements

**Contributors**:
* Ported to ml5.js by [Bomani Oseni McClendon](https://bomani.xyz/).

## Source Code

* [/src/Facemesh](https://github.com/ml5js/ml5-library/tree/development/src/Facemesh)
Binary file added (17.4 KB) — preview not included.
Lines changed: 13 additions & 0 deletions

(new file)

```html
<html>
  <head>
    <meta charset="UTF-8" />
    <title>Handpose with Webcam</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/p5.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/addons/p5.dom.min.js"></script>
    <script src="http://localhost:8080/ml5.js" type="text/javascript"></script>
  </head>
  <body>
    <h1>Handpose with single image</h1>
    <script src="sketch.js"></script>
  </body>
</html>
```
Lines changed: 54 additions & 0 deletions

(new file)

```js
let handpose;
let predictions = [];
let img;

// load the image before the main program starts
function preload() {
  img = loadImage("data/hand.jpg");
}

function setup() {
  // Create a canvas that's at least the size of the image.
  createCanvas(400, 350);
  // call modelReady() when it is loaded
  handpose = ml5.handpose(modelReady);

  frameRate(1); // set the frameRate to 1 since we don't need it to be running quickly in this case
}

// when handpose is ready, do the detection
function modelReady() {
  console.log("Model ready!");

  // when the predict function is called, tell
  // handpose what to do with the results.
  // in this case we assign the results to our global
  // predictions variable
  handpose.on("predict", results => {
    predictions = results;
  });

  handpose.predict(img);
}

// draw() will not show anything until predictions are found
function draw() {
  if (predictions.length > 0) {
    image(img, 0, 0, width, height);
    drawKeypoints();
    noLoop(); // stop looping once the predictions are drawn
  }
}

// A function to draw ellipses over the detected keypoints
function drawKeypoints() {
  for (let i = 0; i < predictions.length; i += 1) {
    const prediction = predictions[i];
    for (let j = 0; j < prediction.landmarks.length; j += 1) {
      const keypoint = prediction.landmarks[j];
      fill(0, 255, 0);
      noStroke();
      ellipse(keypoint[0], keypoint[1], 10, 10);
    }
  }
}
```
Lines changed: 11 additions & 17 deletions

```diff
@@ -1,19 +1,13 @@
 <html>
-
-<head>
-    <meta charset="UTF-8">
-    <title>Handpose with Webcam</title>
-    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/p5.min.js"></script>
-    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/addons/p5.dom.min.js"></script>
-    <script src="http://localhost:8080/ml5.js" type="text/javascript"></script>
-
-    <style></style>
-</head>
-
-<script src="sketch.js"></script>
-
-<body>
+  <head>
+    <meta charset="UTF-8" />
+    <title>Handpose with Webcam</title>
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/p5.min.js"></script>
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.9.0/addons/p5.dom.min.js"></script>
+    <script src="http://localhost:8080/ml5.js" type="text/javascript"></script>
+  </head>
+  <body>
   <h1>Handpose with Webcam</h1>
-</body>
-
-</html>
+  <script src="sketch.js"></script>
+</body>
+</html>
```
