How to propose an image for an action using AI. Part 2: Azure Machine Learning.

In the previous blog post I described the architecture of the iAL extension for Visual Studio Code and its Azure Functions part.

The first version (0.1.0) allows you to get an image for an action.

Just type the name/description of your action, and iAL will find the best image for it, among 1,100+ standard images, using machine learning algorithms.

Today I will cover the main part of the process.

Azure Machine Learning 

The heart of all the magic is the Azure Machine Learning web service Get-NAV-ActionImage.

To create it, we should:

1)    Create a training dataset

2)    Create an experiment

3)    Choose and set up a corresponding predictive model

4)    Train the model with the training dataset

5)    Repeat steps 3 and 4, changing parameters, until you get proper results

6)    Publish the web service

I will not cover steps 2-6, because this is well documented here, in my video Azure Machine Learning Demo Walmart Sales (with English subtitles), and in this example experiment for analysing tweets, which I took as a base for my prediction experiment Get-NAV-ActionImage.
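After step 6, Azure ML Studio (classic) exposes the experiment as a REST endpoint. As a rough illustration (the column name matches our feature column below, but the function and sample values are my own placeholders, not the actual service definition), this is the shape of the request body such a service expects:

```python
import json

def build_request(action_name):
    # Azure ML Studio (classic) request-response body:
    # one input table with our single feature column "Action name".
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": ["Action name"],
                "Values": [[action_name]],
            }
        },
        "GlobalParameters": {},
    }

body = build_request("Post and Print")
print(json.dumps(body, indent=2))
```

The real call is an HTTP POST of this JSON to the service URL (taken from the web service dashboard) with an `Authorization: Bearer <API key>` header; the scored image comes back in the `Results` section of the response.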

None of the predictive models works without a training dataset. This is the main and most time-consuming part of the whole project.

Create Training dataset

The training dataset will teach our model to predict an image based on the name of an action. This is called "training with a teacher" (supervised learning).

The dataset will have

  • one feature – "Action name"
  • one label (the teacher) – "Action image"

I cannot imagine a better teacher than standard NAV =) So, let's create the training dataset from standard NAV 2018 W1.

I use a PowerShell script for this. Thanks to Waldo for this blog post.

First, export all base pages to the text file NAV2018BaseObjects.txt.

Second, run this script.

I also remove Report actions on the fly, because they would corrupt our model: too many different action captions share one image.

# Read the exported objects file line by line
# ($DistriText is a System.IO.StreamReader opened on NAV2018BaseObjects.txt)
$ResultArray = @()
for (;;) {
    $TextLine = $DistriText.ReadLine()
    if ($null -eq $TextLine) { break }

    switch ($true) {
        {$TextLine.Contains("OBJECT ")} { $CurrentObject = $TextLine.TrimStart() }
        {$TextLine.Contains("  CaptionML=ENU=")} { $CurrentENUCaption = (([regex]"CaptionML=ENU=(.+)").Replace($TextLine.TrimStart(), '$1')) -replace '[&;]', '' }
        {$TextLine.Contains("  ApplicationArea=")} { $CurrentAppArea = ([regex]"ApplicationArea=(.+)").Replace($TextLine.Trim(), '$1') -replace '[;]', '' }
        {$TextLine.Contains(" Image=") -and -not [String]::IsNullOrEmpty($CurrentENUCaption)} {
            $CurrentImage = (([regex]"Image=(.+)").Replace($TextLine.TrimStart(), '$1')).Trim("}", ";")

            # Skip Report actions: too many different captions share one image
            if (-not $CurrentImage.Contains("Report")) {
                $MyObject = New-Object System.Object
                $MyObject | Add-Member -MemberType NoteProperty -Name Object -Value $CurrentObject
                $MyObject | Add-Member -MemberType NoteProperty -Name Caption -Value $CurrentENUCaption
                $MyObject | Add-Member -MemberType NoteProperty -Name "Application Area" -Value $CurrentAppArea
                $MyObject | Add-Member -MemberType NoteProperty -Name Image -Value $CurrentImage
                $ResultArray += $MyObject
            }

            $CurrentENUCaption = ''
        }
        Default {}
    }

    Write-Progress -Activity "Reading through objects.." -Status $CurrentObject
}

$ResultArray | ogv


The result is a grid view of actions with their Object, Caption, Application Area, and Image columns.
Third, convert it to a .csv file.
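In PowerShell, replacing the final `ogv` line with `Export-Csv` would do this. As a neutral sketch of the same step (the file name and the sample rows are my assumptions, not the real export), in Python:

```python
import csv

# Hypothetical rows in the shape produced by the PowerShell script above
rows = [
    {"Caption": "Post and Print", "Image": "PostPrint"},
    {"Caption": "Dimensions", "Image": "Dimensions"},
]

# Azure ML expects a header row; keep only the feature and label columns
with open("NAV2018Actions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Caption", "Image"])
    writer.writeheader()
    writer.writerows(rows)
```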


Analyse dataset

It is also interesting to analyse our dataset.

  • The total number of actions is 9,113 (without reports)
  • Top 10 images
  • 298 images are used only once, for one particular action

I've also added an Application Area column to the dataset, to include it in the next iAL release.
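The counts above can be reproduced with a few lines of pandas. This is a sketch on a toy dataset (the column names follow our CSV; the sample values are made up):

```python
import pandas as pd

# Hypothetical mini-dataset; the real CSV has ~9,113 rows
df = pd.DataFrame({
    "Caption": ["Post", "Post and Print", "Dimensions", "Card", "Statistics"],
    "Image":   ["Post", "Post",           "Dimensions", "Card", "Statistics"],
})

counts = df["Image"].value_counts()
top10 = counts.head(10)            # the "Top 10 images" list
used_once = (counts == 1).sum()    # images used for exactly one action
print(top10)
print("Images used only once:", used_once)
```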

The training dataset (a full list of actions and images) is here.

Machine Learning Experiment

The final experiment, which finds an image for an action, is illustrated in the next picture.


To measure prediction accuracy, we split the dataset into two parts: training and test.


The test dataset doesn't contain the teacher column ("Action Image"). The model tries to find an image based on "Action name". Then we compare the found value (Scored Image) to the initial dataset.
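In Azure ML this is done with the Split Data and Score Model modules. As an equivalent sketch outside the designer (this is my own scikit-learn approximation on toy data, not the actual experiment), the same split-train-score flow looks like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Tiny hypothetical sample; the real dataset has ~9,113 rows
captions = ["Post", "Post and Print", "Print", "Dimensions",
            "Dimension Values", "Card", "Customer Card"] * 10
images   = ["Post", "Post", "Print", "Dimensions",
            "Dimensions", "Card", "Card"] * 10

X_train, X_test, y_train, y_test = train_test_split(
    captions, images, test_size=0.3, random_state=42)

# N-gram features over the caption text, like the tweet-analysis experiment
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

scored = model.predict(X_test)     # the "Scored Labels" column
print("Accuracy:", accuracy_score(y_test, scored))
```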

What is interesting is to analyse the differences. Here are some examples.


  • Text_on_action – the standard name of the action
  • Action_image – the standard image of the action
  • Scored Labels – the image proposed by our model

From my point of view, some of the differences are debatable, meaning that some standard actions don't have the correct image =)

For example, the action "Register" should have the image "Register" and not "Confirm"…


The overall accuracy of the model is 90.249%, which is quite a good result.


What’s next

In the next blog post, I will describe how to create the Visual Studio Code extension that proposes an image for an action.
