Camera API and Face API for Business Central

I had been planning to publish this post for six months, but only the latest 16.1 update of Business Central finally gave me the opportunity.

Old Camera API (before 16.1)

The ability to take pictures from Business Central has been there for a long time. But it was possible only in the mobile app. There are a number of nice blog posts on how you can take a picture in AL using a mobile camera.

from Mike

from Andrew

from Stefano

The weird thing was that there were two pages: CameraInteraction and Camera. Both could take pictures, and it was not very clear which page to use, or how.

Here is an example of how to use the CameraInteraction page.

image
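The screenshot is not reproduced here, but the typical pattern from those posts looked roughly like this sketch (from memory; the exact procedure names and signatures of the old page may differ slightly between versions):

```al
procedure TakePictureOldWay()
var
    CameraInteraction: Page "Camera Interaction";
    PictureInStream: InStream;
begin
    // Configure the camera page before running it
    CameraInteraction.AllowEdit(true);
    CameraInteraction.Quality(100);
    CameraInteraction.EncodingType('JPEG');
    CameraInteraction.RunModal();
    // Retrieve the picture stream after the page closes
    if CameraInteraction.GetPicture(PictureInStream) then begin
        // do something with PictureInStream, e.g. import it into a Media field
    end;
end;
```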

But personally, even with that, I had a problem in 2019 release wave 2 with a double take-picture screen.

New Camera API module (16.1)

Then, in 2020 release wave 1, the new Camera API description page appeared, together with a separate module. I tried it with v16.0, but it didn’t work. And finally, in BC 16.1 it’s there, with two main differences from the previous implementation:

  • supports the Web client, not just the mobile client
  • a new Camera codeunit was introduced

Let’s have a closer look at how to get pictures from Business Central starting from 16.1.

image

Yep, 1 line of code!
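The one-liner in the screenshot boils down to the Camera codeunit’s GetPicture method; a minimal sketch (the surrounding procedure and the Message call are mine, and the exact signature may differ slightly between versions):

```al
procedure TakePicture()
var
    Camera: Codeunit Camera;
    PictureInStream: InStream;
    PictureName: Text;
begin
    // Returns false if the user cancels or no camera is available
    if Camera.GetPicture(PictureInStream, PictureName) then
        Message('Got picture: %1', PictureName);
end;
```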

If you run this, the browser will ask for permission to turn the camera on

image

and open the take-picture page,

image

where, after taking a shot, you can Use, Retake, or Cancel the process.

image

And all that with 1 line of code. Cool, right?

You get the result in an InStream variable. You can save it with the Temp Blob codeunit, in a Media field type, etc., but I was not able to pass the InStream outside of the function where I got it.

That’s why the last line in my function is

image

Temporarily Save the Picture with Persistent Blob

There is a nice way to temporarily save an InStream and then get it back from another function, object, or even session: “Persistent Blob”.

With the power of the CopyFromInStream function, you can save your InStream to the Persistent Blob table and get back a unique id. Later, you can use this id to retrieve the content into an OutStream and use it.

image
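A minimal sketch of that save step, using the Persistent Blob codeunit from the System Application (the procedure itself is my naming):

```al
procedure SavePictureTemporarily(PictureInStream: InStream): BigInteger
var
    PersistentBlob: Codeunit "Persistent Blob";
    BlobKey: BigInteger;
begin
    // Create a new persistent blob record and get its unique key
    BlobKey := PersistentBlob.Create();
    // Store the picture stream under that key
    PersistentBlob.CopyFromInStream(BlobKey, PictureInStream);
    // Pass this key to any other function, object, or session
    exit(BlobKey);
end;
```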

Face API

When we have a picture from the Web client, usually it’s a face. Right? 

So, why not utilize the Microsoft Face API cognitive service to have some fun? For example, add a face-verification permission level for posting documents.

From a bird’s-eye view, the process looks like this:

image
image

Let’s have a closer look at SendPictureToAzureAndVerifyUser(). To verify the user, you need an original photo saved somewhere, an actual photo, and some smart service to compare them.

Original Photo

I simply added a new field to the User Setup

image

together with the page part and the logic to upload a user photo

image
image
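A sketch of that upload logic, assuming the new field on User Setup is a Media field called Photo (both the field name and the procedure name are my assumptions):

```al
procedure UploadUserPhoto(var UserSetup: Record "User Setup")
var
    PictureInStream: InStream;
    FileName: Text;
begin
    // Let the user pick a photo file from the client machine
    if UploadIntoStream('Select a photo', '', '', FileName, PictureInStream) then begin
        // Import the stream into the (assumed) Photo media field
        UserSetup.Photo.ImportStream(PictureInStream, FileName);
        UserSetup.Modify(true);
    end;
end;
```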

Nothing special. I used my official photo for that.

image

Actual Photo

We already know how to take the actual photo snapshot with the new Camera codeunit and temporarily save it with the Persistent Blob codeunit.

Verification Process

The verification process is a bit more complicated than you might think. It’s described here. The idea is that we need to follow this flow:

image

Before being able to verify that two faces belong to the same person, we need to get the ids of these faces from the Azure cognitive service. If the id is empty, no face was detected in the picture.

So, yes: three calls to Azure to verify a face.

  • Get the id of the original face
  • Get the id of the actual face
  • Send the two ids and verify

Face ids are stored in Azure for 24 hours, so you can skip sending the original photo every time and save some cents. 30K calls per month are free.

Detect Face

To detect whether there is a face in the picture and get its id, we will consume the https://.cognitiveservices.azure.com/face/v1.0/detect?returnFaceId=true endpoint.

And as usual, to get a key, you should create a Face resource under your Azure subscription.

image

In AL, to send a photo to the Face API and get the id, we start by retrieving the picture from the Persistent Blob codeunit into an OutStream, and then create an InStream from it using the Temp Blob codeunit.

image
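That retrieval step can be sketched like this (the procedure name is mine; the Temp Blob codeunit backs both streams with the same in-memory blob):

```al
procedure GetSavedPicture(BlobKey: BigInteger; var PictureInStream: InStream)
var
    PersistentBlob: Codeunit "Persistent Blob";
    TempBlob: Codeunit "Temp Blob";
    PictureOutStream: OutStream;
begin
    // Write the saved picture from the Persistent Blob into a Temp Blob
    TempBlob.CreateOutStream(PictureOutStream);
    PersistentBlob.CopyToOutStream(BlobKey, PictureOutStream);
    // Read it back out of the same Temp Blob as an InStream
    TempBlob.CreateInStream(PictureInStream);
end;
```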

By the way, if somebody knows a better way, please let me know.

Then we “place” the InStream into HttpContent and specify the request parameters. You can get the key from your deployed Azure Face resource.

image

and uri

image
image

Let’s now send a request

image
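Putting the pieces together, the detect call can be sketched as follows (ApiKey and DetectUri are assumed to come from setup; the procedure name is mine):

```al
procedure DetectFace(PictureInStream: InStream; ApiKey: Text; DetectUri: Text): Text
var
    Client: HttpClient;
    Content: HttpContent;
    ContentHeaders: HttpHeaders;
    Response: HttpResponseMessage;
    ResponseText: Text;
begin
    // Place the picture stream into the request body
    Content.WriteFrom(PictureInStream);
    // The detect endpoint expects raw binary content
    Content.GetHeaders(ContentHeaders);
    ContentHeaders.Remove('Content-Type');
    ContentHeaders.Add('Content-Type', 'application/octet-stream');
    // Authenticate with the Face resource key
    Client.DefaultRequestHeaders.Add('Ocp-Apim-Subscription-Key', ApiKey);
    if Client.Post(DetectUri, Content, Response) then
        Response.Content.ReadAs(ResponseText);
    exit(ResponseText);
end;
```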

A response will come back in a simple JSON format

"faceId": "49d55c17-e018-4a42-ba7b-8cbbdfae7c6f"}]

And we will deserialize the JSON and get the faceId

image
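The deserialization can be sketched with the built-in JSON types (the procedure name is mine; an empty result means no face was detected):

```al
local procedure GetFaceId(ResponseText: Text): Text
var
    Faces: JsonArray;
    FaceToken: JsonToken;
    FaceIdToken: JsonToken;
begin
    if not Faces.ReadFrom(ResponseText) then
        exit('');
    // An empty array means Azure found no face in the picture
    if not Faces.Get(0, FaceToken) then
        exit('');
    if FaceToken.AsObject().Get('faceId', FaceIdToken) then
        exit(FaceIdToken.AsValue().AsText());
    exit('');
end;
```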

Verify Face

The face verification service will “compare” two faces, and if they are identical (belong to the same person), it will return a boolean response together with a confidence score.

To verify faces we will consume https://.cognitiveservices.azure.com/face/v1.0/verify endpoint

The face verification part is similar to the face detection part, with some differences. 

First, we need to construct the body of the HTTP request, containing the two face ids.

image
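Building that body is straightforward with a JsonObject (the procedure and parameter names are mine; faceId1 and faceId2 are the property names the verify endpoint expects):

```al
local procedure BuildVerifyBody(OriginalFaceId: Text; ActualFaceId: Text): Text
var
    Body: JsonObject;
    BodyText: Text;
begin
    Body.Add('faceId1', OriginalFaceId);
    Body.Add('faceId2', ActualFaceId);
    // Serialize the object into the request body text
    Body.WriteTo(BodyText);
    exit(BodyText);
end;
```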

Then we “place” the body into HttpContent and specify the request parameters

image

and uri

image
image

Let’s now send a request

image
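The verify call looks much like the detect call, except the content type is JSON this time (ApiKey and VerifyUri are again assumed to come from setup):

```al
procedure VerifyFaces(BodyText: Text; ApiKey: Text; VerifyUri: Text): Text
var
    Client: HttpClient;
    Content: HttpContent;
    ContentHeaders: HttpHeaders;
    Response: HttpResponseMessage;
    ResponseText: Text;
begin
    Content.WriteFrom(BodyText);
    // JSON body this time, not application/octet-stream
    Content.GetHeaders(ContentHeaders);
    ContentHeaders.Remove('Content-Type');
    ContentHeaders.Add('Content-Type', 'application/json');
    Client.DefaultRequestHeaders.Add('Ocp-Apim-Subscription-Key', ApiKey);
    if Client.Post(VerifyUri, Content, Response) then
        Response.Content.ReadAs(ResponseText);
    exit(ResponseText);
end;
```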

A response will come back in a simple JSON format

{ "isIdentical": true, "confidence": 0.9 }

And we will deserialize the JSON to find out whether the faces are identical

image
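A sketch of that last deserialization step (the procedure name is mine; you could read the confidence value the same way if you want a stricter threshold):

```al
local procedure IsSamePerson(ResponseText: Text): Boolean
var
    Result: JsonObject;
    JToken: JsonToken;
begin
    if not Result.ReadFrom(ResponseText) then
        exit(false);
    // isIdentical is Azure's own same-person verdict
    if Result.Get('isIdentical', JToken) then
        exit(JToken.AsValue().AsBoolean());
    exit(false);
end;
```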

Allow or reject posting

The last piece in the puzzle is quite simple

image
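That last piece can be as simple as a guard clause before the posting routine (SendPictureToAzureAndVerifyUser is the function discussed above; the error text is mine):

```al
local procedure CheckFaceBeforePosting()
begin
    // Block posting unless the live photo matches the stored user photo
    if not SendPictureToAzureAndVerifyUser() then
        Error('Face verification failed. Posting is not allowed.');
end;
```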

Here is the process in motion

image

Enjoy!


About Me

DMITRY KATSON

A Microsoft MVP, Business Central architect, project manager, blogger and speaker, husband and father of two. With more than 15 years in business, I went from developer to company owner. Having a great team, I still love to get my hands on code and create AI-powered Business Central apps that just work.
