
Commit 1745182: Merge pull request #87 from cosanlab/licensing ("Licensing")
2 parents: e21f0c6 + 5e13618


72 files changed: +67,135 -301 lines

.gitignore (+1)

```diff
@@ -14,6 +14,7 @@ feat/tests/data/fex_s20_cv_20170330.txt
 feat/tests/data/cond_pain_2017-03-30_s20_r01.txt
 notebooks/content/*.csv
 notebooks/content/*.mp4
+notebooks/content/dev_*

 #Ignore resources folder.
 resources/
```

LICENSE (+8 -1)

```diff
@@ -1,10 +1,17 @@
 MIT License

-Copyright (c) 2018, Jin Hyun Cheong, TianKang Xie, Sophie Byrne, Nathaniel Hanes, Luke Chang
+Copyright (c) 2018, Jin Hyun Cheong, TianKang Xie, Sophie Byrne, Luke Chang

 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

 The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+FaceBoxes: https://github.com/cleardusk/3DDFA_V2/blob/master/LICENSE
+RetinaFace: https://github.com/biubug6/Pytorch_Retinaface/blob/master/LICENSE.MIT
+MTCNN: https://github.com/ipazc/mtcnn/blob/master/LICENSE
+FacialLandmarks: https://github.com/cunjian/pytorch_face_landmark
+Residual Masking Network: https://github.com/phamquiluan/ResidualMaskingNetwork/issues/18
+JAA-Net: https://github.com/ZhiwenShao/PyTorch-JAANet/issues/19
```

README.md (+7 -3)

````diff
@@ -1,12 +1,14 @@
-# Py-FEAT: Python Facial Expression Analysis Toolbox (FEAT)
+# Py-FEAT: Python Facial Expression Analysis Toolbox
 [![Package versioning](https://img.shields.io/pypi/v/py-feat.svg)](https://pypi.org/project/py-feat/)
 [![Build Status](https://api.travis-ci.org/cosanlab/py-feat.svg?branch=master)](https://travis-ci.org/cosanlab/py-feat/)
 [![Coverage Status](https://coveralls.io/repos/github/cosanlab/py-feat/badge.svg?branch=master)](https://coveralls.io/github/cosanlab/py-feat?branch=master)
+![Python Versions](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-blue)
+[![GitHub license](https://img.shields.io/github/license/cosanlab/py-feat)](https://github.com/cosanlab/py-feat/blob/master/LICENSE)


 Py-FEAT is a suite for facial expressions (FEX) research written in Python. This package includes tools to detect faces, extract emotional facial expressions (e.g., happiness, sadness, anger), facial muscle movements (e.g., action units), and facial landmarks, from videos and images of faces, as well as methods to preprocess, analyze, and visualize FEX data.

-For detailed examples, tutorials, and API please refer to the [Py-FEAT website](https://cosanlab.github.io/feat/).
+For detailed examples, tutorials, and API please refer to the [Py-FEAT website](https://cosanlab.github.io/py-feat/).

 ## Installation
 Option 1: Easy installation for quick use
@@ -41,7 +43,7 @@ out = detector.detect_image("input.png")
 out.plot_detections()
 ```
 ### 3. Preprocessing & analyzing FEX data
-See examples in our [tutorial](https://cosanlab.github.io/py-feat/content/analysis.html#).
+We provide a number of preprocessing and analysis functionalities including baselining, feature extraction such as timeseries descriptors and wavelet decompositions, predictions, regressions, and intersubject correlations. See examples in our [tutorial](https://cosanlab.github.io/py-feat/content/analysis.html#).

 ## Supported Models
 Please respect the usage licenses for each model.
@@ -76,3 +78,5 @@ Emotion detection models
 4. Run the tests again with `pytest tests/` to make sure everything still passes, including your new feature. If you broke something, edit your feature so that it doesn't break existing code.
 5. Create a pull request to the main repository's `master` branch.

+## Licenses
+Py-FEAT is provided under the MIT license. You also need to respect the licenses of each model you are using. Please see the LICENSE file for links to each model's license information.
````

docs/conf.py (+4 -4)

```diff
@@ -64,7 +64,7 @@

 # General information about the project.
 project = u'FEAT'
-copyright = u"2019, Jin Hyun Cheong, TianKang Xie, Sophie Byrne, Nathaniel Hanes, Luke Chang "
+copyright = u"2019, Jin Hyun Cheong, TianKang Xie, Sophie Byrne, Luke Chang "

 # The version info for the project you're documenting, acts as replacement
 # for |version| and |release|, also used in various other places throughout
@@ -218,7 +218,7 @@
 latex_documents = [
     ('index', 'feat.tex',
      u'FEAT Documentation',
-     u'Jin Hyun Cheong, Sophie Byrne, Nathaniel Hanes, Luke Chang ', 'manual'),
+     u'Jin Hyun Cheong, Tiankang Xie, Sophie Byrne, Luke Chang ', 'manual'),
 ]

 # The name of an image file (relative to this directory) to place at
@@ -249,7 +249,7 @@
 man_pages = [
     ('index', 'feat',
      u'FEAT Documentation',
-     [u'Jin Hyun Cheong, Sophie Byrne, Nathaniel Hanes, Luke Chang '], 1)
+     [u'Jin Hyun Cheong, Tiankang Xie, Sophie Byrne, Luke Chang '], 1)
 ]

 # If true, show URL addresses after external links.
@@ -264,7 +264,7 @@
 texinfo_documents = [
     ('index', 'feat',
      u'FEAT Documentation',
-     u'Jin Hyun Cheong, TianKang Xie, Sophie Byrne, Nathaniel Hanes, Luke Chang ',
+     u'Jin Hyun Cheong, TianKang Xie, Sophie Byrne, Luke Chang ',
      'feat',
      'One line description of project.',
      'Miscellaneous'),
```

feat/__init__.py (+1 -1)

```diff
@@ -4,7 +4,7 @@

 from __future__ import absolute_import

-__author__ = """Jin Hyun Cheong, Tiankang Xie, Sophie Byrne, Nathaniel Hanes, Luke Chang """
+__author__ = """Jin Hyun Cheong, Tiankang Xie, Sophie Byrne, Luke Chang """
 __email__ = '[email protected]'
 __all__ = ['detector', 'data','utils','plotting','__version__']
```
feat/plotting.py (+35 -12)
Original file line numberDiff line numberDiff line change
```diff
@@ -12,6 +12,7 @@
 import seaborn as sns
 import matplotlib.colors as colors
 from collections import OrderedDict
+from sklearn.preprocessing import minmax_scale

 __all__ = [
     "draw_lineface",
@@ -662,6 +663,12 @@ def draw_muscles(currx, curry, au=None, ax=None, *args, **kwargs):
         "orb_oris_u": orb_oris_u,
         "orb_oc_l": orb_oc_l,
         "cor_sup_l": cor_sup_l,
+        "pars_palp_l": orb_oc_l_inner,
+        "pars_palp_r": orb_oc_r_inner,
+        "masseter_l_rel": masseter_l,
+        "masseter_r_rel": masseter_r,
+        "temporalis_l_rel": temporalis_l,
+        "temporalis_r_rel": temporalis_r,
     }

     muscle_names = [
@@ -700,6 +707,12 @@ def draw_muscles(currx, curry, au=None, ax=None, *args, **kwargs):
         "orb_oris_l",
         "orb_oris_u",
         "cor_sup_l",
+        "pars_palp_l",
+        "pars_palp_r",
+        "masseter_l_rel",
+        "masseter_r_rel",
+        "temporalis_l_rel",
+        "temporalis_r_rel",
     ]
     todraw = {}
     facet = False
@@ -729,7 +742,6 @@ def draw_muscles(currx, curry, au=None, ax=None, *args, **kwargs):
                 del kwargs[muscle]
     for muscle in todraw.keys():
         if todraw[muscle] == "heatmap":
-            # muscles[muscle].set_color(get_heat(muscle, au, facet))
             muscles[muscle].set_color(get_heat(muscle, au, facet))
         else:
             muscles[muscle].set_color(todraw[muscle])
```
```diff
@@ -783,9 +795,9 @@ def get_heat(muscle, au, log):
     """Function to create heatmap from au vector

     Args:
-        au: vector of action units
-        muscle: string representation of a muscle
-        boolean: whether the action unit values are on a log scale
+        muscle (string): string representation of a muscle
+        au (list): vector of action units
+        log (boolean): whether the action unit values are on a log scale


     Returns:
@@ -828,18 +840,21 @@ def get_heat(muscle, au, log):
         "orb_oc_r_inner": 16,
         "orb_oris_l": 13,
         "orb_oris_u": 13,
+        "pars_palp_l": 19,
+        "pars_palp_r": 19,
+        "masseter_l_rel": 17,
+        "masseter_r_rel": 17,
+        "temporalis_l_rel": 17,
+        "temporalis_r_rel": 17,
     }
     if muscle in aus:
         unit = aus[muscle]
         if log:
             num = int(100 * (1.0 / (1 + 10.0 ** -(au[unit]))))
         else:
-            num = int(au[unit] * 20)
+            num = int(au[unit])
         # set alpha (opacity)
-        if au[unit] == 0:
-            alpha = 0
-        else:
-            alpha = 0.5
+        alpha = au[unit]/100
         # color = colors.to_hex(q[num])
         # return str(color)
         color = colors.to_rgba(q[num], alpha=alpha)
```
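The updated intensity-to-color mapping above can be sketched in isolation. Assuming AU intensities arrive on a 0-100 scale (which `plot_face` now enforces via `minmax_scale`) and a 101-entry color palette `q`, this hypothetical standalone helper mirrors the new `num`/`alpha` arithmetic:

```python
def heat_index_and_alpha(intensity, log=False):
    """Mirror the palette-index and opacity math in get_heat
    (hypothetical helper; `intensity` is assumed to be on a 0-100 scale)."""
    if log:
        # log-scaled AUs pass through a logistic squashing before indexing
        num = int(100 * (1.0 / (1 + 10.0 ** -intensity)))
    else:
        num = int(intensity)  # direct palette index, replacing the old `* 20`
    alpha = intensity / 100  # opacity now tracks intensity (was a fixed 0.5)
    return num, alpha

print(heat_index_and_alpha(0))   # (0, 0.0): neutral AUs stay fully transparent
print(heat_index_and_alpha(75))  # (75, 0.75)
```

Compared with the old code, opacity now varies continuously with intensity rather than jumping from 0 to 0.5 as soon as an AU is nonzero.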
```diff
@@ -852,6 +867,7 @@ def plot_face(
     vectorfield=None,
     muscles=None,
     ax=None,
+    feature_range=False,
     color="k",
     linewidth=1,
     linestyle="-",
@@ -867,6 +883,7 @@ def plot_face(
         vectorfield: (dict) {'target':target_array,'reference':reference_array}
         muscles: (dict) {'muscle': color}
         ax: matplotlib axis handle
+        feature_range (tuple, default: None): If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.
         color: matplotlib color
         linewidth: matplotlib linewidth
         linestyle: matplotlib linestyle
@@ -888,14 +905,16 @@ def plot_face(
             "Don't forget to pass an 'au' vector of len(20), "
             "using neutral as default"
         )
-
-    landmarks = predict(au, model)
+
+    landmarks = predict(au, model, feature_range = feature_range)
     currx, curry = [landmarks[x, :] for x in range(2)]

     if ax is None:
         ax = _create_empty_figure()

     if muscles is not None:
+        # Muscles are always scaled 0 - 100 b/c color palette is 0-100
+        au = minmax_scale(au, feature_range=(0,100))
         if not isinstance(muscles, dict):
             raise ValueError("muscles must be a dictionary ")
         draw_muscles(currx, curry, ax=ax, au=au, **muscles)
```
```diff
@@ -933,12 +952,13 @@ def plot_face(
     return ax


-def predict(au, model=None):
+def predict(au, model=None, feature_range=None):
     """Helper function to predict landmarks from au given a sklearn model

     Args:
         au: vector of action unit intensities
         model: sklearn pls object (uses pretrained model by default)
+        feature_range (tuple, default: None): If a tuple with (min, max), scale input AU intensities to (min, max) before prediction.

     Returns:
         landmarks: Array of landmarks (2,68)
@@ -956,6 +976,9 @@ def predict(au, model=None, feature_range=None):
     if len(au.shape) == 1:
         au = np.reshape(au, (1, -1))

+    if feature_range:
+        au = minmax_scale(au, feature_range=feature_range, axis=1)
+
     landmarks = np.reshape(model.predict(au), (2, 68))
     # landmarks[1, :] = -1 * landmarks[1, :] # this might not generalize to other models
     return landmarks
```
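The `feature_range` rescaling that `predict` now applies is scikit-learn's `minmax_scale` with `axis=1`, i.e. each sample (row) of the AU matrix is independently stretched to the requested range. A minimal NumPy sketch of that same per-row transform (assuming a 2-D `au` array with non-constant rows, as after the `np.reshape` above):

```python
import numpy as np

def minmax_scale_rows(au, feature_range=(0, 1)):
    """Per-row min-max scaling, equivalent to
    sklearn.preprocessing.minmax_scale(au, feature_range, axis=1)
    when every row has distinct min and max."""
    lo, hi = feature_range
    au = np.asarray(au, dtype=float)
    mn = au.min(axis=1, keepdims=True)
    mx = au.max(axis=1, keepdims=True)
    std = (au - mn) / (mx - mn)   # each row squeezed to [0, 1]
    return std * (hi - lo) + lo   # then stretched to [lo, hi]

au = np.array([[0.0, 1.5, 3.0]])
print(minmax_scale_rows(au, (0, 100)))  # [[  0.  50. 100.]]
```

Because the check is `if feature_range:`, passing the `False` default in `plot_face` simply skips the rescaling, while any tuple such as `(0, 1)` (as the updated test exercises) triggers it.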

feat/tests/test_plot.py (+1 -1)

```diff
@@ -89,7 +89,7 @@ def test_plot_face():
     assert_plot_shape(plt.gca())
     plt.close()

-    plot_face(au=au, vectorfield={"reference": predict(au2)})
+    plot_face(au=au, vectorfield={"reference": predict(au2)}, feature_range=(0,1))
     assert_plot_shape(plt.gca())
     plt.close()
```

notebooks/README.md (+1 -1)

````diff
@@ -23,5 +23,5 @@ git commit -m "updated jupyter book"

 5. Upload to gh-pages
 ```
-ghp-import -n -p -f notebooks/_build/html
+ghp-import -n -p -f -c py-feat.org notebooks/_build/html
 ```
````
Six binary files changed (contents not shown).

notebooks/_build/html/_sources/content/dev_plotting.ipynb (+20,971 -48)

Large diff not rendered.

notebooks/_build/html/_sources/content/dev_trainAUvisModel.ipynb (+29 -7)

```diff
@@ -10,11 +10,11 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 10,
+   "execution_count": 9,
    "metadata": {
     "ExecuteTime": {
-     "end_time": "2021-03-27T00:28:05.706631Z",
-     "start_time": "2021-03-27T00:28:03.811433Z"
+     "end_time": "2021-03-29T14:46:54.002864Z",
+     "start_time": "2021-03-29T14:46:51.722215Z"
     }
    },
    "outputs": [
@@ -196,13 +196,13 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
+   "execution_count": 10,
    "metadata": {
     "ExecuteTime": {
-     "end_time": "2021-03-26T23:58:58.227979Z",
-     "start_time": "2021-03-26T23:58:57.624279Z"
+     "end_time": "2021-03-29T14:46:58.671126Z",
+     "start_time": "2021-03-29T14:46:58.081734Z"
     },
-    "scrolled": true
+    "scrolled": false
    },
    "outputs": [
    {
@@ -265,6 +265,28 @@
     "# hf.close()"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Load h5 model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {
+    "ExecuteTime": {
+     "end_time": "2021-03-29T14:46:09.331977Z",
+     "start_time": "2021-03-29T14:46:09.327240Z"
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from feat.utils import load_h5\n",
+    "clf = load_h5('../../feat/resources/pyfeat_aus_to_landmarks.h5')"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {
```

notebooks/_build/html/_sources/content/intro.md (+9 -6)

```diff
@@ -3,6 +3,7 @@ Py-Feat: Python Facial Expression Analysis Toolbox
 [![Package versioning](https://img.shields.io/pypi/v/py-feat.svg)](https://pypi.org/project/py-feat/)
 [![Build Status](https://api.travis-ci.org/cosanlab/py-feat.svg?branch=master)](https://travis-ci.org/cosanlab/py-feat/)
 [![Coverage Status](https://coveralls.io/repos/github/cosanlab/py-feat/badge.svg?branch=master)](https://coveralls.io/github/cosanlab/py-feat?branch=master)
+![Python Versions](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8%20%7C%203.9-blue)
 [![GitHub forks](https://img.shields.io/github/forks/cosanlab/py-feat)](https://github.com/cosanlab/py-feat/network)
 [![GitHub stars](https://img.shields.io/github/stars/cosanlab/py-feat)](https://github.com/cosanlab/py-feat/stargazers)
 [![GitHub license](https://img.shields.io/github/license/cosanlab/py-feat)](https://github.com/cosanlab/py-feat/blob/master/LICENSE)
@@ -57,13 +58,15 @@ from feat import Detector
 ## Available models
 Below is a list of models implemented in Py-Feat and ready to use. The model names are in the titles followed by the reference publications.
 ### Action Unit detection
-- `rf`: Random Forest model trained on Histogram of Oriented Gradients.
-- `svm`: SVM model trained on Histogram of Oriented Gradients.
-- `logistic`: Logistic Classifier model trained on Histogram of Oriented Gradients.
-- `JAANET`: Joint facial action unit detection and face alignment via adaptive attention ([Shao et al., 2020](https://arxiv.org/pdf/2003.08834v1.pdf))
+- `rf`: Random Forest model trained on Histogram of Oriented Gradients extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets
+- `svm`: SVM model trained on Histogram of Oriented Gradients extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets
+- `logistic`: Logistic Classifier model trained on Histogram of Oriented Gradients extracted from BP4D, DISFA, CK+, UNBC-McMaster shoulder pain, and AFF-Wild2 datasets
+- `JAANET`: Joint facial action unit detection and face alignment via adaptive attention trained with BP4D and BP4D+ ([Shao et al., 2020](https://arxiv.org/pdf/2003.08834v1.pdf))
 - `DRML`: Deep region and multi-label learning for facial action unit detection by ([Zhao et al., 2016](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Zhao_Deep_Region_and_CVPR_2016_paper.pdf))
 ### Emotion detection
-- `FeatNet` by Tiankang Xie
+- `rf`: Random Forest model trained on Histogram of Oriented Gradients extracted from ExpW, CK+, and JAFFE datasets
+- `svm`: SVM model trained on Histogram of Oriented Gradients extracted from ExpW, CK+, and JAFFE datasets
+- `fernet`: Deep convolutional network
 - `ResMaskNet`: Facial expression recognition using residual masking network by ([Pham et al., 2020](https://ailb-web.ing.unimore.it/icpr/author/3818))
 ### Face detection
 - `MTCNN`: Multi-task cascaded convolutional networks by ([Zhang et al., 2016](https://arxiv.org/pdf/1604.02878.pdf); [Zhang et al., 2020](https://ieeexplore.ieee.org/document/9239720))
@@ -78,4 +81,4 @@ Below is a list of models implemented in Py-Feat and ready to use. The model nam
 We are excited for people to add new models and features to Py-Feat. Please see the [contribution guides](https://cosanlab.github.io/feat/content/contribute.html).

 ## License
-Py-Feat is under the [MIT license](https://github.com/cosanlab/feat/blob/master/LICENSE) but the provided models may have separate licenses. Please cite the corresponding references for the models you've used.
+Py-FEAT is provided under the [MIT license](https://github.com/cosanlab/py-feat/blob/master/LICENSE). You also need to cite and respect the licenses of each model you are using. Please see the LICENSE file for links to each model's license information.
```
