Azure Computer Vision for Face offers comprehensive capabilities for working with photos that contain faces.
You can visit the Face Portal to test and evaluate the different capabilities.
Part of the Cognitive Services Face APIs is Face Verify, which verifies whether two faces belong to the same person or whether a face belongs to a specific person.
The Contoso Shop Manage app's Biometric Authentication feature uses Face Verify to authenticate a live face capture against a predefined list of employee faces.
To achieve this scenario, some preparation work is needed. The Face APIs can store faces in a secure data store.
In the sections below we will provision and configure the required resources.
First, let's provision a dedicated Cognitive Services resource for the Face APIs.
This can be done easily by heading to the Azure Portal, clicking Create New Service, and then selecting Computer Vision - Face:
After successfully provisioning the service, take note of both the endpoint and the subscription key, as you will need them when accessing the Face APIs.
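If you want to confirm that the endpoint and key work before moving on, a quick sanity check is to call a simple read-only operation such as PersonGroup - List. The snippet below is only an illustrative sketch; the endpoint and key values are placeholders you must replace with the values from your own resource.

// Quick sanity check (illustrative sketch): list person groups using the endpoint and key noted above.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class FaceApiSmokeTest
{
    static async Task Main()
    {
        var endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"; // placeholder
        var key = "<your-face-api-key>";                                           // placeholder

        using var http = new HttpClient();
        // Every Face API call authenticates with this header.
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // PersonGroup - List returns an empty array ([]) on a freshly provisioned resource.
        var response = await http.GetStringAsync($"{endpoint}/face/v1.0/persongroups");
        Console.WriteLine(response);
    }
}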
To authenticate faces, you need to compare them against pre-defined faces.
Face feature (vector) data is stored under a Person Group or a Large Person Group.
A person group is the container of the uploaded person data, including face images and face recognition features. You can think of it as the container that holds the faces for a company or department.
After creation, use PersonGroup Person - Create to add persons to the group, and then call PersonGroup - Train to get the group ready for Face - Identify.
NOTE: The difference between Person Group and Large Person Group is size: a Large Person Group can hold up to 1,000,000 people, while a Person Group can handle up to 10,000 (under an S0-tier subscription).
To have a working faces database, you roughly need to do the following (a minimal REST sketch follows this list):
- Create a new PersonGroup (the company's private database of faces)
- Create a new Person under the PersonGroup (a single person within the company)
- Create Faces for that new Person (you can associate multiple faces with a single person to improve quality)
- Train the Face API on the new PersonGroup data.
- Use both Face - Detect and Face - Verify to compare a detected face in an image against a specific person.
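The following is a minimal sketch of those preparation steps using the Face API v1.0 REST endpoints directly (the same operations the Postman collection described below wraps). The endpoint, key, person name, and image URL are placeholders or example values to replace with your own:

// Minimal sketch of the Face database preparation flow (Face API v1.0 REST).
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class PersonGroupSetup
{
    static async Task Main()
    {
        var endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"; // placeholder
        var key = "<your-face-api-key>";                                           // placeholder
        var personGroupId = "1"; // the default group ID used in this workshop

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // 1. PersonGroup - Create (the company's private database of faces)
        var groupBody = new StringContent("{\"name\": \"contoso-employees\"}", Encoding.UTF8, "application/json");
        await http.PutAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}", groupBody);

        // 2. PersonGroup Person - Create (returns the personId you will verify against later)
        var personBody = new StringContent("{\"name\": \"Mohamed Saif\"}", Encoding.UTF8, "application/json");
        var personResponse = await http.PostAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}/persons", personBody);
        using var person = JsonDocument.Parse(await personResponse.Content.ReadAsStringAsync());
        var personId = person.RootElement.GetProperty("personId").GetString();

        // 3. PersonGroup Person - Add Face (repeat with several images to improve recognition quality)
        var faceBody = new StringContent("{\"url\": \"<URL of an image holding the person's face>\"}", Encoding.UTF8, "application/json");
        await http.PostAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedFaces", faceBody);

        // 4. PersonGroup - Train, then check the training status until it reports "succeeded"
        await http.PostAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}/train", null);
        var status = await http.GetStringAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}/training");
        Console.WriteLine(status);
    }
}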
You can also accomplish all of the above preparation steps by calling the different Face APIs via Postman.
This workshop includes a Face-API collection that can be imported.
The collection is organized into folders, each containing the relevant operations for one part of the Face APIs usage process.
You also need to import the Dev environment variables. They include all the variables used throughout the APIs (like the base URL for your Face API and the key).
NOTE: All Postman API collections used throughout this workshop can be found under Src/Postman-APIs.
The steps to set up the Face Authentication scenario are:
- Create a Person Group (consider this the organization or department; the default ID used is 1)
- Create a Person inside the created Person Group (represents an individual inside the organization)
- Create a Person Face (with a URL to an image that holds the person's face; you can add multiple faces to a person for better recognition)
- Train the system on the newly formed Person Group (you must do this every time you add or update a Person in the Person Group)
- Verify the training status to ensure that the Face APIs are using an up-to-date model.
- Now you can start verifying faces using Face - Detect and Face - Verify (see the sketch below).
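To illustrate the last step, here is a minimal sketch of Face - Detect followed by Face - Verify against a specific person, again using the raw v1.0 REST endpoints. The endpoint, key, personId, and image URL are placeholders, and the sketch assumes exactly one face is present in the image:

// Minimal sketch: detect a face in an image, then verify it against a known person.
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class DetectAndVerify
{
    static async Task Main()
    {
        var endpoint = "https://<your-face-resource>.cognitiveservices.azure.com"; // placeholder
        var key = "<your-face-api-key>";                                           // placeholder
        var personGroupId = "1";
        var personId = "<personId returned by PersonGroup Person - Create>";
        var imageUrl = "<URL of the live capture image>";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // 1. Face - Detect: get a faceId for the face in the image (assumes one face is detected).
        var detectBody = new StringContent($"{{\"url\": \"{imageUrl}\"}}", Encoding.UTF8, "application/json");
        var detectResponse = await http.PostAsync($"{endpoint}/face/v1.0/detect?returnFaceId=true", detectBody);
        using var detected = JsonDocument.Parse(await detectResponse.Content.ReadAsStringAsync());
        var faceId = detected.RootElement[0].GetProperty("faceId").GetString();

        // 2. Face - Verify: check whether that faceId belongs to the given person in the group.
        var verifyJson = $"{{\"faceId\": \"{faceId}\", \"personId\": \"{personId}\", \"personGroupId\": \"{personGroupId}\"}}";
        var verifyBody = new StringContent(verifyJson, Encoding.UTF8, "application/json");
        var verifyResponse = await http.PostAsync($"{endpoint}/face/v1.0/verify", verifyBody);
        using var verdict = JsonDocument.Parse(await verifyResponse.Content.ReadAsStringAsync());

        var isIdentical = verdict.RootElement.GetProperty("isIdentical").GetBoolean();
        var confidence = verdict.RootElement.GetProperty("confidence").GetDouble();
        Console.WriteLine($"isIdentical: {isIdentical}, confidence: {confidence}");
    }
}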
NOTE: Please take note of the generated PersonId, as you need to update it in the sample user created in the CosmosDB users collection. You can do this via the CosmosDB Data Explorer in the Azure Portal.
The sample user is created as part of the API initialization here: MockDataSeeder.cs
Included with this workshop is an Angular web application that provides a GUI to interact with the Face APIs, from setup to verification.
You can access the source code for Face Explorer here.
If you wish to run the Angular app locally on your machine, you need to have the Angular CLI installed. Below are the steps you should perform:
- Node.js must be installed.
- You can then use npm to install the Angular CLI using the following command:
npm install -g @angular/cli
NOTE: You can now open the project inside Visual Studio Code to perform the next steps. Once it is opened, you can launch a new terminal window like the screenshot below:
- The next step is installing all project dependencies using npm (this must be run inside the FaceExplorer-App folder):
npm install
- You can then use the command line or Visual Studio Code to build and run the Angular project (the command must be executed in the FaceExplorer-App folder):
ng serve
- Copy the URL from the terminal and paste it into the browser.
- Add your Face API endpoint and subscription key from Azure to the Face Explorer service configuration here: FaceExplorer-App/src/app/services/face-api-service.service.ts
Endpoint (line 11):
private baseUrl = '<specify Face API base URL here>';
Subscription Key (line 130):
const httpOptions = {
  headers: new HttpHeaders({
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': '<specify Face API key here>'
  })
};
- Save the changes and refresh your browser; everything should be set to go.
Now that the Computer Vision for Face service is provisioned, you can start testing it using the Face APIs in Postman or the Face Explorer app.
Now let's look at how this cognitive service is used to achieve the required business scenario of employee face authentication.
This is the process from the client to the backend:
All backend APIs are encapsulated in a ClientSDK that offers strongly typed access to the cognitive services. Check out the implementation here: CognitivePipeline.ClientSDK.
All ClientSDK services have unit tests associated with them. You can check them out here: ClientSDK.Tests
protected FaceAuthClient clientInstance;

[Test]
public async Task SubmitValidCorrectFace()
{
    string ownerId = Constants.OwnerId;
    string expectedValue = "Mohamed Saif";
    string testFileName = "valid_id.png";
    byte[] doc = TestFilesHelper.GetTestFile(testFileName);
    bool isAsync = false;
    bool isMinimum = true;

    var response = await clientInstance.FaceAuth(ownerId, doc, isAsync, isMinimum);

    IsResultTypeValid(response);
    Assert.IsTrue(response.IsAuthenticationSuccessful, "Authentication successful");
    Assert.AreEqual(response.DetectedFaceName, expectedValue, $"expected result ({expectedValue}) matched");
}
ClientSDK ->
- Calls the services through FaceAuthClient.
API Management Endpoint ->
- The ClientSDK makes a call to the API Management endpoint, passing in the base URL and the access key.
CognitivePipeline.API ->
- The API Management FaceAuth API is connected to FaceAuthController.cs.
CognitivePipeline.BackgroundServices.NewSmartDocReq ->
- NewSmartReq.cs will execute synchronously to retrieve and process the results, based on the requested cognitive instructions passed via InstructionFlag.cs:
public enum InstructionFlag
{
    AnalyzeImage,
    AnalyzeText,
    Thumbnail,
    FaceAuthentication,
    ShelfCompliance
}
- It is worth noting that this function also executes the business logic related to producing business-relevant results.
- For example, CognitivePipelineResultProcessor takes the raw results from the cognitive services and applies business rules and type mapping to return relevant, optimized results (like returning a FaceAuthCard after validating it against the database of users); a simplified sketch of this mapping follows.
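As an illustration of that mapping step, here is a simplified, hypothetical sketch. The real FaceAuthCard and raw result types live in the CognitivePipeline source; the type shapes and the confidence threshold below are assumptions for illustration only.

// Hypothetical, simplified sketch of the business-rule mapping described above.
public class RawVerifyResult             // stand-in for the raw Face - Verify output
{
    public bool IsIdentical { get; set; }
    public double Confidence { get; set; }
}

public class FaceAuthCard                // business-relevant result returned to the client
{
    public bool IsAuthenticationSuccessful { get; set; }
    public string DetectedFaceName { get; set; }
}

public static class FaceAuthResultMapper
{
    // Business rule (illustrative): the face must be verified as identical to the user's
    // stored person and clear a minimum confidence before authentication succeeds.
    public static FaceAuthCard Map(RawVerifyResult raw, string userDisplayName, double minConfidence = 0.6)
    {
        return new FaceAuthCard
        {
            IsAuthenticationSuccessful = raw.IsIdentical && raw.Confidence >= minConfidence,
            DetectedFaceName = userDisplayName
        };
    }
}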
CognitivePipeline.BackgroundServices.NewCognitiveFaceAuth ->
- Face authentication processing happens through a dedicated function, NewCognitiveFaceAuth.
- This function connects to the cognitive services and passes through the API results. This means it can be used with any image, not only employee faces.