This article will look into generating a dropdown for enum values in Blazor.
The repository for the source code listed in the article is here:
https://github.com/toreaurstadboss/DallEImageGenerationImgeDemoV4
First off, here is a helper class for enums that uses the InputSelect control. The helper class supports setting the display text for enum options / alternatives via resource files, using the Display attribute.
EnumHelper.cs | C# source code
using DallEImageGenerationImageDemoV4.Models;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Forms;
using System.ComponentModel.DataAnnotations;
using System.Linq.Expressions;
using System.Resources;
namespace DallEImageGenerationImageDemoV4.Utility
{
public static class EnumHelper
{
public static RenderFragment GenerateEnumDropDown<TEnum>(object receiver,
TEnum selectedValue,
Action<TEnum> valueChanged)
where TEnum : Enum
{
Expression<Func<TEnum>> onValueExpression = () => selectedValue;
var onValueChanged = EventCallback.Factory.Create<TEnum>(receiver, valueChanged);
return builder =>
{
// Set the selectedValue to the first enum value if it is not set
if (EqualityComparer<TEnum>.Default.Equals(selectedValue, default))
{
object? firstEnum = Enum.GetValues(typeof(TEnum)).GetValue(0);
if (firstEnum != null)
{
selectedValue = (TEnum)firstEnum;
}
}
builder.OpenComponent<InputSelect<TEnum>>(0);
builder.AddAttribute(1, "Value", selectedValue);
builder.AddAttribute(2, "ValueChanged", onValueChanged);
builder.AddAttribute(3, "ValueExpression", onValueExpression);
builder.AddAttribute(4, "class", "form-select"); // Adding Bootstrap class for styling
builder.AddAttribute(5, "ChildContent", (RenderFragment)(childBuilder =>
{
foreach (var value in Enum.GetValues(typeof(TEnum)))
{
childBuilder.OpenElement(6, "option");
childBuilder.AddAttribute(7, "value", value?.ToString());
childBuilder.AddContent(8, GetEnumOptionDisplayText(value)?.ToString()?.Replace("_", " ")); // Ensure the display text is clean
childBuilder.CloseElement();
}
}));
builder.CloseComponent();
};
}
/// <summary>
/// Retrieves the display text of an enum alternative
/// </summary>
private static string? GetEnumOptionDisplayText<T>(T value)
{
string? result = value!.ToString()!;
var displayAttribute = value
.GetType()
.GetField(value!.ToString()!)
?.GetCustomAttributes(typeof(DisplayAttribute), false)?
.OfType<DisplayAttribute>()
.FirstOrDefault();
if (displayAttribute != null)
{
if (displayAttribute.ResourceType != null && !string.IsNullOrWhiteSpace(displayAttribute.Name))
{
result = new ResourceManager(displayAttribute.ResourceType).GetString(displayAttribute!.Name!);
}
else if (!string.IsNullOrWhiteSpace(displayAttribute.Name))
{
result = displayAttribute.Name;
}
}
return result;
}
}
}
The following razor component shows how to use this helper.
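A minimal sketch of such usage, rendered directly in a .razor file, could look like the following (HomeModel and its ImageSize enum property are hypothetical placeholders, not taken from the repo):

@using DallEImageGenerationImageDemoV4.Utility

@EnumHelper.GenerateEnumDropDown(this, Model.ImageSize, v => Model.ImageSize = v)

@code {
    // Hypothetical model with an enum-typed property; the demo repo uses its own model class
    private HomeModel Model { get; set; } = new();
}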
It would be possible to make a component instead of such a helper method, passing a type parameter for the enum type.
Here, however, a programmatic helper returning a RenderFragment is used. As the code shows, the helper returns a builder delegate that uses the
RenderTreeBuilder to register the render tree: OpenComponent and CloseComponent set up the InputSelect,
AddAttribute adds attributes to it,
and a child builder emits the option values.
Sometimes it is easier to make such a class with a helper method instead of a component. The downside is that it is a more manual process, similar to how MVC uses HtmlHelpers. Whether a component or such a RenderFragment helper is the better option is not clear-cut, but it is a technique many Blazor developers should be aware of.
This article shows how you can add user secrets to a Blazor app, or to other .NET client technologies that support them. User secrets are stored on the individual computer, so you do not have to expose them to others.
They can still be shared between people if you tell them what the secrets are, but they are practical in the many cases where you do not want to expose a secret, such as a password, by checking it into a source
code repository. This is because, as mentioned, the user secrets are saved on the individual computer.
User secrets were added in .NET Core 1.0, released back in 2016, yet not all developers are familiar with them. Inside Visual Studio 2022, you can right-click the project of a solution and choose Manage User Secrets.
When you choose that option, a file called secrets.json is opened. The location of this file is shown if you hover over the file tab; on Windows it is typically %APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json.
Let's first look at how we can set up user secrets inside the startup file of the application. Note the usage of reloadOnChange set to true, and that the user secrets are added as a service that can be injected wrapped inside IOptionsMonitor.
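One way this can look in the Program.cs of a Blazor Server style app is sketched below (ModelSecrets is assumed to be a simple POCO that maps to the contents of secrets.json; the demo may wire this up slightly differently):

var builder = WebApplication.CreateBuilder(args);

// Add user secrets to the configuration; reloadOnChange: true picks up edits to secrets.json at runtime
builder.Configuration.AddUserSecrets<Program>(optional: true, reloadOnChange: true);

// Bind the ModelSecrets section so it can be injected as IOptionsMonitor<ModelSecrets>
builder.Services.Configure<ModelSecrets>(builder.Configuration.GetSection(nameof(ModelSecrets)));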
Note that IOptionsMonitor<ModelSecrets> is injected here, and that in the OnInitializedAsync method the injected value subscribes with the OnChange method; the action callback then sets the value of ModelSecrets and calls InvokeAsync with StateHasChanged. We output the CurrentValue in the razor view of the Blazor app.
Home.razor
@page "/"
<PageTitle>Home</PageTitle>
<h1>Hello, world!</h1>
Welcome to your new app.
Your user secret is:
<div>
@ModelSecrets?.ApiKey
</div>
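The code-behind described above is not listed in the article; a sketch of what it could look like is shown below (the ModelSecrets class with an ApiKey property is inferred from the markup above; the namespace matching Home.razor is omitted in this sketch):

using Microsoft.AspNetCore.Components;
using Microsoft.Extensions.Options;

public class ModelSecrets
{
    public string? ApiKey { get; set; }
}

public partial class Home : ComponentBase
{
    [Inject]
    public IOptionsMonitor<ModelSecrets>? ModelSecretsMonitor { get; set; }

    protected ModelSecrets? ModelSecrets { get; set; }

    protected override Task OnInitializedAsync()
    {
        ModelSecrets = ModelSecretsMonitor?.CurrentValue;
        // React to changes in secrets.json (reloadOnChange: true) and re-render the component
        ModelSecretsMonitor?.OnChange(updated =>
        {
            ModelSecrets = updated;
            InvokeAsync(StateHasChanged);
        });
        return Task.CompletedTask;
    }
}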
I have added a Blazor component which uses an INPUT of type range and additional CSS styling for a more flexible setup of look and feel.
The Blazor component is available on Github in my repo here:
This repository contains a Blazor library with a Slider component that renders an input of type 'range'.
The slider has a default horizontal layout, where the minimum value is shown at the far left of the scale, the values increase along the x-axis towards the right, and the maximum value is shown at the far right. The slider's
x-axis runs along the 'slider track'.
The value of the slider is indicated by the 'slider thumb'.
Below the slider are shown 'tick marks', which are controlled by the Minimum and Maximum values and StepSize.
Note that the supported data types are those that are IConvertible and struct, and the code expects
types that can be converted to double. You can use integers, for example, but also decimals, floats and so on.
In addition, enums can be used, but this only works if your enum has consecutive values, for example
0, 1, 2, 3, 4. The best results are achieved if these consecutive values have the same step size.
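For illustration, an enum with consecutive values could look like the sketch below (Eq5dWalk matches the sample markup further down; the member names here are assumptions, not the repo's exact definition):

public enum Eq5dWalk
{
    NoProblems = 0,
    SlightProblems = 1,
    ModerateProblems = 2,
    SevereProblems = 3,
    Incapable = 4
}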
To start using the Blazor slider, add this using in your .razor file where you want to use the component.
@using BlazorSliderLib
Please note that the slider has been tested using Bootstrap, more specifically this version:
"bootstrap@5.3.3"
Here is sample markup you can add to test out the Blazor slider (3 sliders are rendered using a custom model, and
the updated values are shown in labels below):
<div class="container">
  <div class="row">
    <div class="form-control col-md-4">
      <p><b>EQ5D-5L question 1.</b><br />Mobility. Ability to walk.</p>
      <BlazorSliderLib.Slider T="Eq5dWalk" UseAlternateStyle="AlternateStyle.AlternateStyleInverseColorScale" Title="Ability to walk" ValueChanged="@((e) => UpdateEq5dQ1(e))" MinimumDescription="No Problems = The best ability to walk you can imagine" MaximumDescription="Incapable = The worst ability to walk you can imagine" />
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <p><b>EQ5D-5L question 6.</b><br />We would like to know how good or bad your health is TODAY.</p>
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <BlazorSliderLib.Slider T="int" UseAlternateStyle="AlternateStyle.AlternateStyle" Minimum="0" Maximum="100" @bind-Value="@(Model.Data.Eq5dq6)" Stepsize="5" Title="Your health today" MinimumDescription="0 = The worst health you can imagine" MaximumDescription="100 = The best health you can imagine" />
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <p><b>EQ5D-5L question 6.</b><br />We would like to know how good or bad your health is TODAY. V2 field.</p>
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <BlazorSliderLib.Slider T="int" Minimum="0" Maximum="100" ValueChanged="@((e) => UpdateEq5dq6V2(e))" Stepsize="5" Title="Your health today (v2 field)" MinimumDescription="0 = The worst health you can imagine" MaximumDescription="100 = The best health you can imagine" />
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <p>Value of Model.Data.Eq5dq1</p>
      @Model.Data.Eq5dq1
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <p>Value of Model.Data.Eq5dq6</p>
      @Model.Data.Eq5dq6
    </div>
  </div>
  <div class="row">
    <div class="form-control col-md-4">
      <p>Value of Model.Data.Eq5dq6V2</p>
      @Model.Data.Eq5dq6V2
    </div>
  </div>
</div>
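The markup above references handler methods and a model that are not listed here; a rough sketch of the backing @code block could be as follows (ExampleFormModel is a hypothetical stand-in for the custom model used in the demo):

@code {
    private ExampleFormModel Model { get; set; } = new();

    private void UpdateEq5dQ1(Eq5dWalk value)
    {
        Model.Data.Eq5dq1 = value;
    }

    private void UpdateEq5dq6V2(int value)
    {
        Model.Data.Eq5dq6V2 = value;
    }
}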
The different setup of sliders
The slider is set up either with an alternate style or with the default styling for sliders; that is, the slider uses
an input of type 'range' and the default styling documented on the Mozilla Developer Network (MDN) to render a Blazor slider.
In addition, the alternate style can be set up to use an inverted color range, where higher values get a
reddish color and lower values get a greenish color. The standard alternate style shows greenish colors for higher values.
The following screenshot shows the possible stylings. Note that the default styling is shown in the
slider at the bottom, which renders a bit differently in different browsers. In Chrome, for example, the slider
renders with a bluish color. In Edge Chromium, a grayish color is used for the 'slider track' and 'slider thumb'.
Screenshots showing the sliders:
The following parameters can be used:
Title
Required. The title is shown below the slider component and centered horizontally
along the center of the x-axis which the slider is oriented.
Value
The value of the slider. It can be data bound using the @bind-Value directive attribute, which supports two-way data binding.
You can instead use the ValueChanged event callback, if desired.
Minimum
The minimum value along the slider. It defaults to 0 for numbers. For enums, the lowest value of the enum is chosen (the minimum enum alternative, converted to double internally).
Maximum
The maximum value along the slider. It defaults to 100 for numbers. For enums, the highest value of the enum is chosen (the maximum enum alternative, converted to double internally).
Stepsize
The step size for the slider. It defaults to 5 for numbers and to 1 for enums. (Note that internally the slider works with double values when generating the tick marks.)
ShowTickmarks
Shows tick marks for the slider. It defaults to 'true'. Tick marks are generated from the values of Minimum, Maximum and Stepsize.
MinimumDescription
Shows an additional description for the minimum value, rendered as a small label below the slider.
It is only shown if the value is not empty.
UseAlternateStyle
If the UseAlternateStyle is set to either AlternateStyle and AlternateStyleInverseColorScale,
alternate styling is used.
CSS rules to enable the slider
It is necessary to define a set of CSS rules to make the slider work.
The slider's CSS rules are defined in two different files.
Default CSS rules
`Slider.css`
The default CSS rules are based on the MDN (Mozilla Developer Network) documentation for the input type 'range' control.
Input type 'range' control MDN article:
Additional settings are applied: the width is set to 100% so the slider gets as much horizontal space as possible and can 'stretch'.
There are also basic styles for the tick labels and the datalist; the datalist provides the tick marks for the slider.
The tick marks are generated automatically for the slider.
`SliderAlternate.css`
The alternate CSS rules add further styling, where color encoding is used for the
'slider track': higher values along the track get a more greenish color, while lower values
get reddish colors. It is also possible to use the inverse color encoding, where higher values get a reddish color
and lower values get more greenish colors.
.alternate-style input[type="range"] {
    -webkit-appearance: none; /* Remove default styling */
    width: 100%;
    height: 8px;
    background: #ddd;
    outline: none;
    opacity: 0.7;
    transition: opacity .2s;
}

.alternate-style input[type="range"]:hover {
    opacity: 1;
}

.alternate-style input[type="range"]::-webkit-slider-runnable-track {
    width: 100%;
    height: 8px;
    background: linear-gradient(to left, #A5D6A7, #FFF9C4, #FFCDD2); /* More desaturated gradient color */
    border: none;
    border-radius: 3px;
}

.alternate-style-inverse-colorscale input[type="range"]::-webkit-slider-runnable-track {
    background: linear-gradient(to right, #A5D6A7, #FFF9C4, #FFCDD2) !important; /* More desaturated gradient color, inverted color range */
}

.alternate-style input[type="range"]::-webkit-slider-thumb {
    -webkit-appearance: none; /* Remove default styling */
    appearance: none;
    width: 25px;
    height: 25px;
    background: #2E7D32; /* Even darker green thumb color */
    cursor: pointer;
    border-radius: 50%;
    margin-top: -15px !important; /* Move the thumb up */
}

.alternate-style input[type="range"]::-moz-range-track {
    width: 100%;
    height: 8px;
    background: linear-gradient(to left, #A5D6A7, #FFF9C4, #FFCDD2); /* More desaturated gradient color */
    border: none;
    border-radius: 3px;
}

.alternate-style-inverse-colorscale input[type="range"]::-moz-range-track {
    background: linear-gradient(to right, #A5D6A7, #FFF9C4, #FFCDD2) !important; /* More desaturated gradient color, inverted color range */
}

.alternate-style input[type="range"]::-moz-range-thumb {
    width: 25px;
    height: 25px;
    background: #2E7D32; /* Even darker green thumb color */
    cursor: pointer;
    border-radius: 50%;
    transform: translateY(-15px); /* Move the thumb up */
}
The implementation of the Blazor slider looks like this, in the code-behind file for the Slider:
using Microsoft.AspNetCore.Components;
namespace BlazorSliderLib
{
/// <summary>
/// Slider to be used in Blazor. Uses input type='range' with HTML5 element datalist and custom css to show a slider.
/// To add tick marks, set the <see cref="ShowTickmarks"/> to true (this is default)
/// </summary>
/// <typeparam name="T"></typeparam>
public partial class Slider<T> : ComponentBase where T : struct, IComparable
{
/// <summary>
/// Initial value to set to the slider, data bound so it can also be read out
/// </summary>
[Parameter]
public T Value { get; set; }
public double ValueAsDouble { get; set; }
public double GetValueAsDouble()
{
if (typeof(T).IsEnum)
{
if (_isInitialized)
{
var e = _enumValues.FirstOrDefault(v => Convert.ToDouble(v).Equals(Convert.ToDouble(Value)));
return Convert.ToDouble(Convert.ChangeType(Value, typeof(int)));
}
else
{
return 0;
}
}
else
{
return Convert.ToDouble(Value);
}
}
[Parameter, EditorRequired]
public required string Title { get; set; }
[Parameter]
public string? MinimumDescription { get; set; }
[Parameter]
public string? MaximumDescription { get; set; }
[Parameter]
public double Minimum { get; set; } = typeof(T).IsEnum ? Enum.GetValues(typeof(T)).Cast<int>().Select(e => Convert.ToDouble(e)).Min() : 0.0;
[Parameter]
public double Maximum { get; set; } = typeof(T).IsEnum ? Enum.GetValues(typeof(T)).Cast<int>().Select(e => Convert.ToDouble(e)).Max() : 100.0;
[Parameter]
public double? Stepsize { get; set; } = typeof(T).IsEnum ? 1 : 5.0;
[Parameter]
public bool ShowTickmarks { get; set; } = true;
[Parameter]
public AlternateStyle UseAlternateStyle { get; set; } = AlternateStyle.None;
[Parameter]
public EventCallback<T> ValueChanged { get; set; }
public List<double> Tickmarks { get; set; } = new List<double>();
private List<T> _enumValues { get; set; } = new List<T>();
private bool _isInitialized = false;
private async Task OnValueChanged(ChangeEventArgs e)
{
if (e.Value == null)
{
return;
}
if (typeof(T).IsEnum && e.Value != null)
{
var enumValue = _enumValues.FirstOrDefault(v => Convert.ToDouble(v).Equals(Convert.ToDouble(e.Value)));
if (Enum.TryParse(typeof(T), enumValue.ToString(), out _)) {
Value = enumValue; //check that it was a non-null value set from the slider
}
else
{
return; //if we cannot handle the enum value set, do not process further
}
}
else
{
Value = (T)Convert.ChangeType(e.Value!, typeof(T));
}
ValueAsDouble = GetValueAsDouble();
await ValueChanged.InvokeAsync(Value);
}
private string TickmarksId = "ticksmarks_" + Guid.NewGuid().ToString("N");
protected override async Task OnParametersSetAsync()
{
if (_isInitialized)
{
return; //initialize ONCE
}
if (!typeof(T).IsEnum && Value.CompareTo(0) == 0)
{
Value = (T)Convert.ChangeType((Convert.ToDouble(Maximum) - Convert.ToDouble(Minimum)) / 2, typeof(T));
ValueAsDouble = GetValueAsDouble();
}
if (Maximum.CompareTo(Minimum) < 1)
{
throw new ArgumentException($"The value for parameter 'Maximum' is set to a smaller value than {Minimum}");
}
GenerateTickMarks();
BuildEnumValuesListIfRequired();
_isInitialized = true;
await Task.CompletedTask;
}
private void BuildEnumValuesListIfRequired()
{
if (typeof(T).IsEnum)
{
foreach (var item in Enum.GetValues(typeof(T)))
{
_enumValues.Add((T)item);
}
}
}
private void GenerateTickMarks()
{
Tickmarks.Clear();
if (!ShowTickmarks)
{
return;
}
if (typeof(T).IsEnum)
{
int enumValuesCount = Enum.GetValues(typeof(T)).Length;
double offsetEnum = 0;
double minDoubleValue = Enum.GetValues(typeof(T)).Cast<int>().Select(e => Convert.ToDouble(e)).Min();
double maxDoubleValue = Enum.GetValues(typeof(T)).Cast<int>().Select(e => Convert.ToDouble(e)).Max();
double enumStepSizeCalculated = (maxDoubleValue - minDoubleValue) / enumValuesCount;
foreach (var enumValue in Enum.GetValues(typeof(T)))
{
Tickmarks.Add(offsetEnum);
offsetEnum += Math.Round(enumStepSizeCalculated, 0);
}
return;
}
for (double i = Convert.ToDouble(Minimum); i <= Convert.ToDouble(Maximum); i += Convert.ToDouble(Stepsize))
{
Tickmarks.Add(i);
}
}
}
public enum AlternateStyle
{
/// <summary>
/// No alternate style. Uses the ordinary styling for the slider (browser default of input type 'range')
/// </summary>
None,
/// <summary>
/// Applies alternate style: in addition to the 'slider track', an additional visual hint is shown right below it, with a reddish color for the lowest parts of the scale moving towards yellow and greenish hues for higher values.
/// The alternate style uses a larger 'slider thumb' and an alternate style for the 'slider track'. It gives a more interesting look, especially in Microsoft Edge Chromium.
/// </summary>
AlternateStyle,
/// <summary>
/// Similar in style to the alternate style, but uses the inverse scale for the colors along the slider
/// </summary>
AlternateStyleInverseColorScale
}
}
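The Slider.razor markup itself is not reproduced in this article. A simplified sketch of what it could look like, based on the parameters and fields in the code-behind above, is shown below (the actual markup in the repo will differ in details):

@typeparam T where T : struct, IComparable

<div class="@(UseAlternateStyle == AlternateStyle.AlternateStyle ? "alternate-style" :
              UseAlternateStyle == AlternateStyle.AlternateStyleInverseColorScale ? "alternate-style alternate-style-inverse-colorscale" : "")">
    <input type="range" min="@Minimum" max="@Maximum" step="@Stepsize"
           value="@ValueAsDouble" list="@TickmarksId" @onchange="OnValueChanged" />
    @if (ShowTickmarks)
    {
        <datalist id="@TickmarksId">
            @foreach (var tick in Tickmarks)
            {
                <option value="@tick"></option>
            }
        </datalist>
    }
    <div><b>@Title</b></div>
    @if (!string.IsNullOrWhiteSpace(MinimumDescription)) { <small>@MinimumDescription</small> }
    @if (!string.IsNullOrWhiteSpace(MaximumDescription)) { <small>@MaximumDescription</small> }
</div>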
This article will look at detecting Personally Identifiable Information (PII) using Azure Cognitive Services.
I have created a demo using .NET MAUI Blazor, and the Github repo is here:
https://github.com/toreaurstadboss/PiiDetectionDemo
After creating the Language resource, look up the keys and endpoint for your service.
Using Azure CLI inside Cloud Shell, you can enter this command to find the keys; in Azure, many services have two keys that you can exchange with new keys through regeneration:
az cognitiveservices account keys list --resource-group SomeAzureResourceGroup --name SomeAccountAzureCognitiveServices
This is how you can query the endpoint of the Language resource using Azure CLI:
az cognitiveservices account show --query "properties.endpoint" --resource-group SomeAzureResourceGroup --name SomeAccountAzureCognitiveServices
Next, the demo of this article. Connecting to the PII removal / detection feature of Text Analytics is possible using this Nuget package (REST calls can also be done manually):
- Azure.AI.TextAnalytics version 5.3.0
Here are the other Nuget packages of my demo, included from the .csproj file:
using Azure;
using Azure.AI.TextAnalytics;
namespace PiiDetectionDemo.Util
{
public interface IPiiRemovalTextAnalyticsClientService
{
Task<Response<PiiEntityCollection>> RecognizePiiEntitiesAsync(string? document, string? language);
}
}
namespace PiiDetectionDemo.Util
{
public class PiiRemovalTextAnalyticsClientService : IPiiRemovalTextAnalyticsClientService
{
private TextAnalyticsClient _client;
public PiiRemovalTextAnalyticsClientService()
{
var azureEndpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_ENDPOINT");
var azureKey = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_KEY");
if (string.IsNullOrWhiteSpace(azureEndpoint))
{
throw new ArgumentNullException(nameof(azureEndpoint), "Missing system environment variable: AZURE_COGNITIVE_SERVICE_ENDPOINT");
}
if (string.IsNullOrWhiteSpace(azureKey))
{
throw new ArgumentNullException(nameof(azureKey), "Missing system environment variable: AZURE_COGNITIVE_SERVICE_KEY");
}
_client = new TextAnalyticsClient(new Uri(azureEndpoint), new AzureKeyCredential(azureKey));
}
public async Task<Response<PiiEntityCollection>> RecognizePiiEntitiesAsync(string? document, string? language)
{
var piiEntities = await _client.RecognizePiiEntitiesAsync(document, language);
return piiEntities;
}
}
}
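The code-behind below uses a _piiRemovalTextAnalyticsClientService member that is not shown; for it to be resolvable, the service must be registered in the DI container. A sketch of the registration in MauiProgram.cs (an assumption; the demo may register it with a different lifetime):

builder.Services.AddScoped<IPiiRemovalTextAnalyticsClientService, PiiRemovalTextAnalyticsClientService>();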
The code-behind of the razor component page that provides the UI looks like this:
Home.razor.cs
using Azure;
using Microsoft.AspNetCore.Components;
using PiiDetectionDemo.Models;
using PiiDetectionDemo.Util;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace PiiDetectionDemo.Components.Pages
{
public partial class Home
{
private IndexModel Model = new();
private bool isProcessing = false;
private bool isSearchPerformed = false;
private async Task Submit()
{
isSearchPerformed = false;
isProcessing = true;
try
{
var response = await _piiRemovalTextAnalyticsClientService.RecognizePiiEntitiesAsync(Model.InputText, null);
Model.RedactedText = response?.Value?.RedactedText;
Model.UpdateHtmlRedactedText();
Model.AnalysisResult = response?.Value;
StateHasChanged();
}
catch (Exception ex)
{
await Console.Out.WriteLineAsync(ex.ToString());
}
isProcessing = false;
isSearchPerformed = true;
}
private void removeWhitespace(ChangeEventArgs args)
{
Model.InputText = args.Value?.ToString()?.CleanupAllWhiteSpace();
StateHasChanged();
}
}
}
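The CleanupAllWhiteSpace string extension used above (and in the IndexModel below) is not listed in the article. A minimal sketch of such a helper, assuming it simply collapses whitespace, could be:

using System.Text.RegularExpressions;

namespace PiiDetectionDemo.Util
{
    public static class StringExtensions
    {
        // Collapse all whitespace runs (including newlines) into single spaces (sketch; the repo's implementation may differ)
        public static string? CleanupAllWhiteSpace(this string? input) =>
            string.IsNullOrWhiteSpace(input) ? input : Regex.Replace(input.Trim(), @"\s+", " ");
    }
}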
To get the redacted or censored text, void of any PII that the detection feature was able to find, access the
Value property of type Azure.AI.TextAnalytics.PiiEntityCollection. Inside this object, the string RedactedText contains the censored / redacted text.
The IndexModel looks like this :
using Azure.AI.TextAnalytics;
using Microsoft.AspNetCore.Components;
using PiiDetectionDemo.Util;
using System.ComponentModel.DataAnnotations;
using System.Text;
namespace PiiDetectionDemo.Models
{
public class IndexModel
{
[Required]
public string? InputText { get; set; }
public string? RedactedText { get; set; }
public string? HtmlRedactedText { get; set; }
public MarkupString HtmlRedactedTextMarkupString { get; set; }
public void UpdateHtmlRedactedText()
{
var sb = new StringBuilder(RedactedText);
if (AnalysisResult != null && RedactedText != null)
{
foreach (var piiEntity in AnalysisResult.OrderByDescending(a => a.Offset))
{
sb.Insert(piiEntity.Offset + piiEntity.Length, "</b></span>");
sb.Insert(piiEntity.Offset, $"<span style='background-color:lightgray;border:1px solid black;corner-radius:2px; color:{GetBackgroundColor(piiEntity)}' title='{piiEntity.Category}: {piiEntity.SubCategory} Confidence: {piiEntity.ConfidenceScore} Redacted Text: {piiEntity.Text}'><b>");
}
}
HtmlRedactedText = sb.ToString()?.CleanupAllWhiteSpace();
HtmlRedactedTextMarkupString = new MarkupString(HtmlRedactedText ?? string.Empty);
}
private string GetBackgroundColor(PiiEntity piiEntity)
{
if (piiEntity.Category == PiiEntityCategory.PhoneNumber)
{
return "yellow";
}
if (piiEntity.Category == PiiEntityCategory.Organization)
{
return "orange";
}
if (piiEntity.Category == PiiEntityCategory.Address)
{
return "green";
}
return "gray";
}
public long ExecutionTime { get; set; }
public PiiEntityCollection? AnalysisResult { get; set; }
}
}
This article presents code for extracting health information from arbitrary text using the Azure health information extraction feature in Azure Cognitive Services. This technology uses NLP (natural language processing) combined with AI techniques.
A Github repo exists with the code for a running .NET MAUI Blazor demo in .NET 7 here:
A screenshot from the demo shows how it works below.
The demo uses Azure AI healthcare information extraction to extract entities from the text, such as a person's age, gender, employment and medical history, and conditions such as diagnoses, procedures and so on.
The returned data is shown at the bottom of the demo; the raw data shows that it comes back as JSON, in FHIR format. Since we want the FHIR format, we must use the REST API to get this information.
Azure AI healthcare information extraction also extracts relations, which connect the entities together for a semantic analysis of the text. Links also exist for each entity for further reading.
These point to external systems such as SNOMED CT, with SNOMED codes for each entity.
Let's look at the source code for the demo next.
We define a named HTTP client in the MauiProgram.cs file, which starts the application. We could move the code into an extension method, but the code is kept simple in the demo.
MauiProgram.cs
var azureEndpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_ENDPOINT");
var azureKey = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_KEY");
if (string.IsNullOrWhiteSpace(azureEndpoint))
{
throw new ArgumentNullException(nameof(azureEndpoint), "Missing system environment variable: AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_ENDPOINT");
}
if (string.IsNullOrWhiteSpace(azureKey))
{
throw new ArgumentNullException(nameof(azureKey), "Missing system environment variable: AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_KEY");
}
var azureEndpointHost = new Uri(azureEndpoint);
builder.Services.AddHttpClient("Az", httpClient =>
{
string baseUrl = azureEndpointHost.GetLeftPart(UriPartial.Authority); //https://stackoverflow.com/a/18708268/741368
httpClient.BaseAddress = new Uri(baseUrl);
//httpClient..Add("Content-type", "application/json");
//httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); //ACCEPT header
httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", azureKey);
});
The Content-Type header will instead be specified inside the HttpRequestMessage shown further below, not in this named client. As we see, we must add both the endpoint base URL and the key in the Ocp-Apim-Subscription-Key HTTP header.
Let's next look at how to create a POST request to the language resource endpoint that offers the health text analysis below.
HealthAnalyticsTextClientService.cs
using HealthTextAnalytics.Models;
using System.Diagnostics;
using System.Text;
using System.Text.Json.Nodes;
namespace HealthTextAnalytics.Util
{
public class HealthAnalyticsTextClientService : IHealthAnalyticsTextClientService
{
private readonly IHttpClientFactory _httpClientFactory;
private const int awaitTimeInMs = 500;
private const int maxTimerWait = 10000;
public HealthAnalyticsTextClientService(IHttpClientFactory httpClientFactory)
{
_httpClientFactory = httpClientFactory;
}
public async Task<HealthTextAnalyticsResponse> GetHealthTextAnalytics(string inputText)
{
var client = _httpClientFactory.CreateClient("Az");
string requestBodyRaw = HealthAnalyticsTextHelper.CreateRequest(inputText);
//https://learn.microsoft.com/en-us/azure/ai-services/language-service/text-analytics-for-health/how-to/call-api?tabs=ner
var stopWatch = Stopwatch.StartNew();
HttpRequestMessage request = CreateTextAnalyticsRequest(requestBodyRaw);
var response = await client.SendAsync(request);
var result = new HealthTextAnalyticsResponse();
var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(awaitTimeInMs));
int timeAwaited = 0;
while (await timer.WaitForNextTickAsync())
{
if (response.IsSuccessStatusCode)
{
result.IsSearchPerformed = true;
var operationLocation = response.Headers.First(h => h.Key?.ToLower() == Constants.Constants.HttpHeaderOperationResultAvailable).Value.FirstOrDefault();
var resultFromHealthAnalysis = await client.GetAsync(operationLocation);
JsonNode resultFromService = await resultFromHealthAnalysis.GetJsonFromHttpResponse();
if (resultFromService.GetValue<string>("status") == "succeeded")
{
result.AnalysisResultRawJson = await resultFromHealthAnalysis.Content.ReadAsStringAsync();
result.ExecutionTimeInMilliseconds = stopWatch.ElapsedMilliseconds;
result.Entities.AddRange(HealthAnalyticsTextHelper.GetEntities(result.AnalysisResultRawJson));
result.CategorizedInputText = HealthAnalyticsTextHelper.GetCategorizedInputText(inputText, result.AnalysisResultRawJson);
break;
}
}
timeAwaited += 500;
if (timeAwaited >= maxTimerWait)
{
result.CategorizedInputText = $"ERR: Timeout. Operation to analyze input text using Azure HealthAnalytics language service timed out after waiting for {timeAwaited} ms.";
break;
}
}
return result;
}
private static HttpRequestMessage CreateTextAnalyticsRequest(string requestBodyRaw)
{
var request = new HttpRequestMessage(HttpMethod.Post, Constants.Constants.AnalyzeTextEndpoint);
request.Content = new StringContent(requestBodyRaw, Encoding.UTF8, "application/json"); //CONTENT-TYPE header
return request;
}
}
}
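The IHealthAnalyticsTextClientService interface implemented above is not shown in the article; it boils down to a single method (sketch):

using HealthTextAnalytics.Models;

namespace HealthTextAnalytics.Util
{
    public interface IHealthAnalyticsTextClientService
    {
        Task<HealthTextAnalyticsResponse> GetHealthTextAnalytics(string inputText);
    }
}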
The code uses some helper methods, shown next. As the code above shows, we must poll the Azure service until we get a reply. We poll every 0.5 seconds, up to a maximum of 10 seconds. Typical requests take about 3-4 seconds to process. Longer input texts / 'documents' would need more processing time than 10 seconds, but for this demo it works well.
HealthAnalyticsTextHelper.CreateRequest method
public static string CreateRequest(string inputText)
{
//note - the id 1 here in the request is a 'local id' that must be unique per request. Only one text is supported in the
//request generated, however the service allows multiple documents and ids if necessary. In this demo, we only send in one text at a time.
var request = new
{
analysisInput = new
{
documents = new[]
{
new { text = inputText, id = "1", language = "en" }
}
},
tasks = new[]
{
new { id = "analyze 1", kind = "Healthcare", parameters = new { fhirVersion = "4.0.1" } }
}
};
return JsonSerializer.Serialize(request, new JsonSerializerOptions { WriteIndented = true });
}
To create the body of the POST request we use a template via a new anonymous object, shown above, which is the shape the REST service expects. We could have multiple documents here, that is, input texts; in this demo only one text / document is sent in. Note the use of id="1" and "analyze 1" here.
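Serialized, the request body posted to the service then looks roughly like this (the text value is just a placeholder):

{
  "analysisInput": {
    "documents": [
      { "text": "Patient is a 54-year-old ...", "id": "1", "language": "en" }
    ]
  },
  "tasks": [
    { "id": "analyze 1", "kind": "Healthcare", "parameters": { "fhirVersion": "4.0.1" } }
  ]
}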
We have some helper methods built on System.Text.Json to extract the JSON data from the response.
I have created the domain classes for the service response using the https://json2csharp.com/ website on the initial responses I got from the REST service using Postman. The REST API, that is, the JSON it returns, might change in the future.
In that case, you may want to adjust the domain classes here if the deserialization fails. It seems relatively stable though; I have tested the code for some weeks now.
Finally, the code producing the categorized, colored text had to remove newlines from the input text to get correct indexing of the different entities found in the text; a sketch of such newline removal is shown below.
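The exact helper from the repo is not listed here; a minimal sketch that keeps the string length (and thereby the entity offsets) intact could be:

// Replace newline characters with spaces so entity offsets still line up with the displayed text (sketch)
string normalizedInput = inputText.Replace('\r', ' ').Replace('\n', ' ');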
I have added a demo .NET MAUI Blazor app that uses Image Analysis in Computer Vision in Azure Cognitive Services.
Note that Image Analysis is not available in all Azure data centers. For example, Norway East does not have this feature.
However, the North Europe Azure data center, located in Ireland, does have it.
A Github repo exists for this demo here:
A screen shot for this demo is shown below:
The demo allows you to upload a picture (the supported formats in the demo are .jpeg, .jpg and .png, but the Azure AI Image Analyzer supports a lot of other image formats too).
The demo shows a preview of the selected image and, to the right, an image with bounding boxes around the objects found in the image. A list of tags extracted from the image is also shown. Raw data from the Azure Image Analyzer
service is shown in the text box area below the pictures, with the list of tags to the right.
The demo is written with .NET Maui Blazor and .NET 6.
Let us look at some code for making this demo.
ImageSaveService.cs
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
namespace Ocr.Handwriting.Azure.AI.Services
{
public class ImageSaveService : IImageSaveService
{
public async Task<ImageSaveModel> SaveImage(IBrowserFile browserFile)
{
var buffers = new byte[browserFile.Size];
var bytes = await browserFile.OpenReadStream(maxAllowedSize: 30 * 1024 * 1024).ReadAsync(buffers);
string imageType = browserFile.ContentType;
var basePath = FileSystem.Current.AppDataDirectory;
var imageSaveModel = new ImageSaveModel
{
SavedFilePath = Path.Combine(basePath, $"{Guid.NewGuid().ToString("N")}-{browserFile.Name}"),
PreviewImageUrl = $"data:{imageType};base64,{Convert.ToBase64String(buffers)}",
FilePath = browserFile.Name,
FileSize = bytes / 1024,
};
await File.WriteAllBytesAsync(imageSaveModel.SavedFilePath, buffers);
return imageSaveModel;
}
}
}
//Interface defined inside IImageSaveService.cs shown below
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
namespace Ocr.Handwriting.Azure.AI.Services
{
public interface IImageSaveService
{
Task<ImageSaveModel> SaveImage(IBrowserFile browserFile);
}
}
The ImageSaveService saves the uploaded image from the IBrowserFile and turns the image bytes, read via OpenReadStream, into a base-64 string.
This allows us to preview the uploaded image. The code also saves the image to the AppDataDirectory that MAUI supports - FileSystem.Current.AppDataDirectory.
Let's look at how to call the analysis service itself; it is actually quite straightforward.
ImageAnalyzerService.cs
using Azure;
using Azure.AI.Vision.Common;
using Azure.AI.Vision.ImageAnalysis;
namespace Image.Analyze.Azure.Ai.Lib
{
public class ImageAnalyzerService : IImageAnalyzerService
{
public ImageAnalyzer CreateImageAnalyzer(string imageFile)
{
string key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_KEY");
string endpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_ENDPOINT");
var visionServiceOptions = new VisionServiceOptions(new Uri(endpoint), new AzureKeyCredential(key));
using VisionSource visionSource = CreateVisionSource(imageFile);
var analysisOptions = CreateImageAnalysisOptions();
var analyzer = new ImageAnalyzer(visionServiceOptions, visionSource, analysisOptions);
return analyzer;
}
private static VisionSource CreateVisionSource(string imageFile)
{
using var stream = File.OpenRead(imageFile);
using var reader = new StreamReader(stream);
byte[] imageBuffer;
using (var streamReader = new MemoryStream())
{
stream.CopyTo(streamReader);
imageBuffer = streamReader.ToArray();
}
using var imageSourceBuffer = new ImageSourceBuffer();
imageSourceBuffer.GetWriter().Write(imageBuffer);
return VisionSource.FromImageSourceBuffer(imageSourceBuffer);
}
private static ImageAnalysisOptions CreateImageAnalysisOptions() => new ImageAnalysisOptions
{
Language = "en",
GenderNeutralCaption = false,
Features =
ImageAnalysisFeature.CropSuggestions
| ImageAnalysisFeature.Caption
| ImageAnalysisFeature.DenseCaptions
| ImageAnalysisFeature.Objects
| ImageAnalysisFeature.People
| ImageAnalysisFeature.Text
| ImageAnalysisFeature.Tags
};
}
}
//interface shown below
public interface IImageAnalyzerService
{
ImageAnalyzer CreateImageAnalyzer(string imageFile);
}
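Both services are registered in MauiProgram.cs so they can be injected into the Index page; a sketch of the registration (the lifetimes may differ in the repo):

builder.Services.AddScoped<IImageSaveService, ImageSaveService>();
builder.Services.AddScoped<IImageAnalyzerService, ImageAnalyzerService>();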
We retrieve environment variables here and create an ImageAnalyzer. We create a vision source from the saved picture we uploaded, opening a stream to it using the File.OpenRead method in System.IO.
Since we saved the file in the AppData folder of the .NET MAUI app, we can read this file.
We set up the image analysis options and the vision service options, and then return the image analyzer.
Let's look at the code-behind of the index.razor file that initializes the Image analyzer, and runs the Analyze method of it.
Index.razor.cs
using Azure.AI.Vision.ImageAnalysis;
using Image.Analyze.Azure.Ai.Extensions;
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
using Microsoft.JSInterop;
using System.Text;
namespace Image.Analyze.Azure.Ai.Pages
{
partial class Index
{
private IndexModel Model = new();
//https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40?WT.mc_id=twitter&pivots=programming-language-csharp
private string ImageInfo = string.Empty;
private async Task Submit()
{
if (Model.PreviewImageUrl == null || Model.SavedFilePath == null)
{
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"You must select an image first before running Image Analysis. Supported formats are .jpeg, .jpg and .png", "Ok", "Cancel");
return;
}
using var imageAnalyzer = ImageAnalyzerService.CreateImageAnalyzer(Model.SavedFilePath);
ImageAnalysisResult analysisResult = await imageAnalyzer.AnalyzeAsync();
if (analysisResult.Reason == ImageAnalysisResultReason.Analyzed)
{
Model.ImageAnalysisOutputText = analysisResult.OutputImageAnalysisResult();
Model.Caption = $"{analysisResult.Caption.Content} Confidence: {analysisResult.Caption.Confidence.ToString("F2")}";
Model.Tags = analysisResult.Tags.Select(t => $"{t.Name} (Confidence: {t.Confidence.ToString("F2")})").ToList();
var jsonBboxes = analysisResult.GetBoundingBoxesJson();
await JsRunTime.InvokeVoidAsync("LoadBoundingBoxes", jsonBboxes);
}
else
{
ImageInfo = $"The image analysis did not perform its analysis. Reason: {analysisResult.Reason}";
}
StateHasChanged(); //visual refresh here
}
private async Task CopyTextToClipboard()
{
await Clipboard.SetTextAsync(Model.ImageAnalysisOutputText);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"The copied text was put into the clipboard. Character length: {Model.ImageAnalysisOutputText?.Length}", "Ok", "Cancel");
}
private async Task OnInputFile(InputFileChangeEventArgs args)
{
var imageSaveModel = await ImageSaveService.SaveImage(args.File);
Model = new IndexModel(imageSaveModel);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor ImageAnalyzer app App", $"Wrote file to location : {Model.SavedFilePath} Size is: {Model.FileSize} kB", "Ok", "Cancel");
}
}
}
In the code-behind above we have a submit handler called Submit. There we analyze the image and send the result both to the UI and to a client-side JavaScript method, using IJSRuntime in .NET MAUI Blazor.
Let's look at the two helper methods of ImageAnalysisResult next.
ImageAnalysisResultExtensions.cs
using Azure.AI.Vision.ImageAnalysis;
using System.Text;
namespace Image.Analyze.Azure.Ai.Extensions
{
public static class ImageAnalysisResultExtensions
{
public static string GetBoundingBoxesJson(this ImageAnalysisResult result)
{
var sb = new StringBuilder();
sb.AppendLine(@"[");
int objectIndex = 0;
foreach (var detectedObject in result.Objects)
{
sb.Append($"{{ \"Name\": \"{detectedObject.Name}\", \"Y\": {detectedObject.BoundingBox.Y}, \"X\": {detectedObject.BoundingBox.X}, \"Height\": {detectedObject.BoundingBox.Height}, \"Width\": {detectedObject.BoundingBox.Width}, \"Confidence\": \"{detectedObject.Confidence:0.0000}\" }}");
objectIndex++;
if (objectIndex < result.Objects?.Count)
{
sb.Append($",{Environment.NewLine}");
}
else
{
sb.Append($"{Environment.NewLine}");
}
}
sb.Remove(sb.Length - 2, 1); //remove trailing comma at the end
sb.AppendLine(@"]");
return sb.ToString();
}
public static string OutputImageAnalysisResult(this ImageAnalysisResult result)
{
var sb = new StringBuilder();
if (result.Reason == ImageAnalysisResultReason.Analyzed)
{
sb.AppendLine($" Image height = {result.ImageHeight}");
sb.AppendLine($" Image width = {result.ImageWidth}");
sb.AppendLine($" Model version = {result.ModelVersion}");
if (result.Caption != null)
{
sb.AppendLine(" Caption:");
sb.AppendLine($" \"{result.Caption.Content}\", Confidence {result.Caption.Confidence:0.0000}");
}
if (result.DenseCaptions != null)
{
sb.AppendLine(" Dense Captions:");
foreach (var caption in result.DenseCaptions)
{
sb.AppendLine($" \"{caption.Content}\", Bounding box {caption.BoundingBox}, Confidence {caption.Confidence:0.0000}");
}
}
if (result.Objects != null)
{
sb.AppendLine(" Objects:");
foreach (var detectedObject in result.Objects)
{
sb.AppendLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}");
}
}
if (result.Tags != null)
{
sb.AppendLine($" Tags:");
foreach (var tag in result.Tags)
{
sb.AppendLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}");
}
}
if (result.People != null)
{
sb.AppendLine($" People:");
foreach (var person in result.People)
{
sb.AppendLine($" Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}");
}
}
if (result.CropSuggestions != null)
{
sb.AppendLine($" Crop Suggestions:");
foreach (var cropSuggestion in result.CropSuggestions)
{
sb.AppendLine($" Aspect ratio {cropSuggestion.AspectRatio}: "
+ $"Crop suggestion {cropSuggestion.BoundingBox}");
};
}
if (result.Text != null)
{
sb.AppendLine($" Text:");
foreach (var line in result.Text.Lines)
{
string pointsToString = "{" + string.Join(',', line.BoundingPolygon.Select(point => point.ToString())) + "}";
sb.AppendLine($" Line: '{line.Content}', Bounding polygon {pointsToString}");
foreach (var word in line.Words)
{
pointsToString = "{" + string.Join(',', word.BoundingPolygon.Select(point => point.ToString())) + "}";
sb.AppendLine($" Word: '{word.Content}', Bounding polygon {pointsToString}, Confidence {word.Confidence:0.0000}");
}
}
}
var resultDetails = ImageAnalysisResultDetails.FromResult(result);
sb.AppendLine($" Result details:");
sb.AppendLine($" Image ID = {resultDetails.ImageId}");
sb.AppendLine($" Result ID = {resultDetails.ResultId}");
sb.AppendLine($" Connection URL = {resultDetails.ConnectionUrl}");
sb.AppendLine($" JSON result = {resultDetails.JsonResult}");
}
else
{
var errorDetails = ImageAnalysisErrorDetails.FromResult(result);
sb.AppendLine(" Analysis failed.");
sb.AppendLine($" Error reason : {errorDetails.Reason}");
sb.AppendLine($" Error code : {errorDetails.ErrorCode}");
sb.AppendLine($" Error message: {errorDetails.Message}");
}
return sb.ToString();
}
}
}
Finally, let's look at the client-side JavaScript function that we call with the bounding boxes JSON to draw the boxes. We use an HTML5 canvas to show the picture and the bounding boxes of the objects found in the image.
index.html
<script type="text/javascript">
var colorPalette = ["red", "yellow", "blue", "green", "fuchsia", "moccasin", "purple", "magenta", "aliceblue", "lightyellow", "lightgreen"];
function rescaleCanvas() {
var img = document.getElementById('PreviewImage');
var canvas = document.getElementById('PreviewImageBbox');
canvas.width = img.width;
canvas.height = img.height;
}
function getColor() {
var colorIndex = parseInt(Math.random() * 10);
var color = colorPalette[colorIndex];
return color;
}
function LoadBoundingBoxes(objectDescriptions) {
if (objectDescriptions == null || objectDescriptions == false) {
alert('did not find any objects in image. returning from calling load bounding boxes : ' + objectDescriptions);
return;
}
var objectDesc = JSON.parse(objectDescriptions);
//alert('calling load bounding boxes, starting analysis on clientside js : ' + objectDescriptions);
rescaleCanvas();
var canvas = document.getElementById('PreviewImageBbox');
var img = document.getElementById('PreviewImage');
var ctx = canvas.getContext('2d');
ctx.drawImage(img, img.width, img.height);
ctx.font = "10px Verdana";
for (var i = 0; i < objectDesc.length; i++) {
ctx.beginPath();
ctx.strokeStyle = "black";
ctx.lineWidth = 1;
ctx.fillText(objectDesc[i].Name, objectDesc[i].X + objectDesc[i].Width / 2, objectDesc[i].Y + objectDesc[i].Height / 2);
ctx.fillText("Confidence: " + objectDesc[i].Confidence, objectDesc[i].X + objectDesc[i].Width / 2, 10 + objectDesc[i].Y + objectDesc[i].Height / 2);
}
for (var i = 0; i < objectDesc.length; i++) {
ctx.fillStyle = getColor();
ctx.globalAlpha = 0.2;
ctx.fillRect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
ctx.lineWidth = 3;
ctx.strokeStyle = "blue";
ctx.rect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
ctx.fillStyle = "black";
ctx.fillText("Color: " + getColor(), objectDesc[i].X + objectDesc[i].Width / 2, 20 + objectDesc[i].Y + objectDesc[i].Height / 2);
ctx.stroke();
}
ctx.drawImage(img, 0, 0);
console.log('got these object descriptions:');
console.log(objectDescriptions);
}
</script>
The index.html file in wwwroot is the place where we usually put extra CSS and JS for MAUI Blazor apps and Blazor apps. I have chosen to put the script directly into the index.html file and not in a separate .js file, but extracting it is an option if you want to tidy things up a bit more.
So there you have it: we can relatively easily find objects in images using the Azure image analysis service in Azure Cognitive Services. We can get tags and captions for the image; in the demo the caption is shown above the loaded picture.
The Azure Computer Vision service is really good, since it has been trained on a massive data set and can recognize a lot of different objects for different usages.
As you can see in the source code, I keep the key and endpoint in environment variables that the code expects to exist. Never expose keys and endpoints in your source code.
This article shows how you can use Azure Computer vision in Azure Cognitive Services to perform Optical Character Recognition (OCR).
The Computer vision feature is available by adding a Computer Vision resource in Azure Portal.
I have made a .NET MAUI Blazor app and the Github Repo for it is available here :
https://github.com/toreaurstadboss/Ocr.Handwriting.Azure.AI.Models
Let us first look at the .csproj of the Lib project in this repo.
The following class generates ComputerVision clients that can be used to extract different information from streams and files containing video and images. We are going to focus on
images and extracting text via OCR. Azure Computer Vision can extract handwritten text in addition to regular typed text or text inside images and similar. Azure Computer Vision can also
detect shapes in images and classify objects. This demo only focuses on text extraction from images.
ComputerVisionClientFactory
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
namespace Ocr.Handwriting.Azure.AI.Lib
{
public interface IComputerVisionClientFactory
{
ComputerVisionClient CreateClient();
}
/// <summary>
/// Client factory for Azure Cognitive Services - Computer vision.
/// </summary>
public class ComputerVisionClientFactory : IComputerVisionClientFactory
{
// Add your Computer Vision key and endpoint
static string? _key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_KEY");
static string? _endpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_ENDPOINT");
public ComputerVisionClientFactory() : this(_key, _endpoint)
{
}
public ComputerVisionClientFactory(string? key, string? endpoint)
{
_key = key;
_endpoint = endpoint;
}
public ComputerVisionClient CreateClient()
{
if (_key == null)
{
throw new ArgumentNullException(nameof(_key), "The AZURE_COGNITIVE_SERVICES_VISION_KEY is not set. Set a system-level environment variable or provide this value by calling the overloaded constructor of this class.");
}
if (_endpoint == null)
{
throw new ArgumentNullException(nameof(_endpoint), "The AZURE_COGNITIVE_SERVICES_VISION_ENDPOINT is not set. Set a system-level environment variable or provide this value by calling the overloaded constructor of this class.");
}
var client = Authenticate(_key!, _endpoint!);
return client;
}
public static ComputerVisionClient Authenticate(string key, string endpoint) =>
new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
{
Endpoint = endpoint
};
}
}
The endpoint and key of the Computer Vision resource are set up via system-level environment variables.
Next up, let's look at retrieving OCR text from images. Here we use the ComputerVisionClient. We open up a stream to an image file using File.OpenRead and then call the
ReadInStreamAsync method of the Computer Vision client. The image loaded in the app is selected by the user, previewed, and saved using the MAUI storage lib (inside the AppData folder).
OcrImageService.cs
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using Microsoft.Extensions.Logging;
using System.Diagnostics;
using ReadResult = Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models.ReadResult;
namespace Ocr.Handwriting.Azure.AI.Lib
{
public interface IOcrImageService
{
Task<IList<ReadResult?>?> GetReadResults(string imageFilePath);
Task<string> GetReadResultsText(string imageFilePath);
}
public class OcrImageService : IOcrImageService
{
private readonly IComputerVisionClientFactory _computerVisionClientFactory;
private readonly ILogger<OcrImageService> _logger;
public OcrImageService(IComputerVisionClientFactory computerVisionClientFactory, ILogger<OcrImageService> logger)
{
_computerVisionClientFactory = computerVisionClientFactory;
_logger = logger;
}
private ComputerVisionClient CreateClient() => _computerVisionClientFactory.CreateClient();
public async Task<string> GetReadResultsText(string imageFilePath)
{
var readResults = await GetReadResults(imageFilePath);
var ocrText = ExtractText(readResults?.FirstOrDefault());
return ocrText;
}
public async Task<IList<ReadResult?>?> GetReadResults(string imageFilePath)
{
if (string.IsNullOrWhiteSpace(imageFilePath))
{
return null;
}
try
{
var client = CreateClient();
//Retrieve OCR results
using (FileStream stream = File.OpenRead(imageFilePath))
{
var textHeaders = await client.ReadInStreamAsync(stream);
string operationLocation = textHeaders.OperationLocation;
string operationId = operationLocation[^36..]; //hat operator of C# 8.0 : this slices out the last 36 chars, which contains the guid chars which are 32 hexadecimals chars + four hyphens
ReadOperationResult results;
do
{
results = await client.GetReadResultAsync(Guid.Parse(operationId));
_logger.LogInformation($"Retrieving OCR results for operationId {operationId} for image {imageFilePath}");
}
while (results.Status == OperationStatusCodes.Running || results.Status == OperationStatusCodes.NotStarted);
IList<ReadResult?> result = results.AnalyzeResult.ReadResults;
return result;
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
return null;
}
}
private static string ExtractText(ReadResult? readResult) => string.Join(Environment.NewLine, readResult?.Lines?.Select(l => l.Text) ?? new List<string>());
}
}
Let's look at the MAUI Blazor project in the app.
The MauiProgram.cs looks like this.
MauiProgram.cs
using Ocr.Handwriting.Azure.AI.Data;
using Ocr.Handwriting.Azure.AI.Lib;
using Ocr.Handwriting.Azure.AI.Services;
using TextCopy;
namespace Ocr.Handwriting.Azure.AI;
public static class MauiProgram
{
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.ConfigureFonts(fonts =>
{
fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
});
builder.Services.AddMauiBlazorWebView();
#if DEBUG
builder.Services.AddBlazorWebViewDeveloperTools();
builder.Services.AddLogging();
#endif
builder.Services.AddSingleton<WeatherForecastService>();
builder.Services.AddScoped<IComputerVisionClientFactory, ComputerVisionClientFactory>();
builder.Services.AddScoped<IOcrImageService, OcrImageService>();
builder.Services.AddScoped<IImageSaveService, ImageSaveService>();
builder.Services.InjectClipboard();
return builder.Build();
}
}
We also need some code to preview and save the image an end user chooses. The ImageSaveService, implementing IImageSaveService, looks like this.
ImageSaveService
using Microsoft.AspNetCore.Components.Forms;
using Ocr.Handwriting.Azure.AI.Models;
namespace Ocr.Handwriting.Azure.AI.Services
{
public class ImageSaveService : IImageSaveService
{
public async Task<ImageSaveModel> SaveImage(IBrowserFile browserFile)
{
var buffers = new byte[browserFile.Size];
var bytes = await browserFile.OpenReadStream(maxAllowedSize: 30 * 1024 * 1024).ReadAsync(buffers);
string imageType = browserFile.ContentType;
var basePath = FileSystem.Current.AppDataDirectory;
var imageSaveModel = new ImageSaveModel
{
SavedFilePath = Path.Combine(basePath, $"{Guid.NewGuid().ToString("N")}-{browserFile.Name}"),
PreviewImageUrl = $"data:{imageType};base64,{Convert.ToBase64String(buffers)}",
FilePath = browserFile.Name,
FileSize = bytes / 1024,
};
await File.WriteAllBytesAsync(imageSaveModel.SavedFilePath, buffers);
return imageSaveModel;
}
}
}
Note the use of the maxAllowedSize parameter of the IBrowserFile.OpenReadStream method; this is good practice since IBrowserFile only supports 512 kB by default. I set it to 30 MB in the app to support some high-res images too.
We preview the image as base-64 here and we also save the image. Note the use of FileSystem.Current.AppDataDirectory as the base path; here we use the Nuget package Microsoft.Maui.Storage.
These are the packages used for the MAUI Blazor project of the app.
Ocr.Handwriting.Azure.AI.csproj

The Index.razor page of the app, which provides the UI, looks like this:

Index.razor
@page "/"
@using Ocr.Handwriting.Azure.AI.Models;
@using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
@using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
@using Ocr.Handwriting.Azure.AI.Lib;
@using Ocr.Handwriting.Azure.AI.Services;
@using TextCopy;
@inject IImageSaveService ImageSaveService
@inject IOcrImageService OcrImageService
@inject IClipboard Clipboard
<h1>Azure AI OCR Text recognition</h1>
<EditForm Model="Model" OnValidSubmit="@Submit" style="background-color:aliceblue">
<DataAnnotationsValidator />
<label><b>Select a picture to run OCR</b></label><br />
<InputFile OnChange="@OnInputFile" accept=".jpeg,.jpg,.png" />
<br />
<code class="alert-secondary">Supported file formats: .jpeg, .jpg and .png</code>
<br />
@if (Model.PreviewImageUrl != null) {
<label class="alert-info">Preview of the selected image</label>
<div style="overflow:auto;max-height:300px;max-width:500px">
<img class="flagIcon" src="@Model.PreviewImageUrl" /><br />
</div>
<code class="alert-light">File Size (kB): @Model.FileSize</code>
<br />
<code class="alert-light">File saved location: @Model.SavedFilePath</code>
<br />
<label class="alert-info">Click the button below to start running OCR using Azure AI</label><br />
<br />
<button type="submit">Submit</button> <button style="margin-left:200px" type="button" class="btn-outline-info" @onclick="@CopyTextToClipboard">Copy to clipboard</button>
<br />
<br />
<InputTextArea style="width:1000px;height:300px" readonly="readonly" placeholder="Detected text in the image uploaded" @bind-Value="Model!.OcrOutputText" rows="5"></InputTextArea>
}
</EditForm>
@code {
private IndexModel Model = new();
private async Task OnInputFile(InputFileChangeEventArgs args)
{
var imageSaveModel = await ImageSaveService.SaveImage(args.File);
Model = new IndexModel(imageSaveModel);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor OCR App", $"Wrote file to location : {Model.SavedFilePath} Size is: {Model.FileSize} kB", "Ok", "Cancel");
}
public async Task CopyTextToClipboard()
{
await Clipboard.SetTextAsync(Model.OcrOutputText);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor OCR App", $"The copied text was put into the clipboard. Character length: {Model.OcrOutputText?.Length}", "Ok", "Cancel");
}
private async Task Submit()
{
if (Model.PreviewImageUrl == null || Model.SavedFilePath == null)
{
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor OCR App", $"You must select an image first before running OCR. Supported formats are .jpeg, .jpg and .png", "Ok", "Cancel");
return;
}
Model.OcrOutputText = await OcrImageService.GetReadResultsText(Model.SavedFilePath);
StateHasChanged(); //visual refresh here
}
}
The UI works like this. The user selects an image. As we can see by the 'accept' html attribute, the .jpeg, .jpg and .png extensions are allowed in the file input dialog. When the user selects an image, the image is saved and
previewed in the UI.
By hitting the Submit button, the OCR service in Azure is contacted and text is retrieved and displayed in the text area below, if any text is present in the image. A button allows copying the text into the clipboard.
Here are some screenshots of the app.
This article shows code for building a universal translator using Azure AI Cognitive Services. This includes Azure AI Text Analytics to detect the language of the text input, and
the Azure AI Translation service to translate it.
The Github repo is here :
https://github.com/toreaurstadboss/MultiLingual.Translator
The following NuGet packages are used in the Lib project's csproj file :
We are going to build a cross-platform .NET 6 MAUI Blazor app. First off, we focus on the Razor Library project called 'Lib'. This
project contains the library util code to detect language and to translate text into other languages.
Let us first look at creating the clients needed to detect language and to translate text.
TextAnalyticsFactory.cs
using Azure;
using Azure.AI.TextAnalytics;
using Azure.AI.Translation.Text;
using System;
namespace MultiLingual.Translator.Lib
{
public static class TextAnalyticsClientFactory
{
public static TextAnalyticsClient CreateClient()
{
string? uri = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_ENDPOINT", EnvironmentVariableTarget.Machine);
string? key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_KEY", EnvironmentVariableTarget.Machine);
if (uri == null)
{
thrownew ArgumentNullException(nameof(uri), "Could not get system environment variable named 'AZURE_COGNITIVE_SERVICE_ENDPOINT' Set this variable first.");
}
if (key == null)
{
throw new ArgumentNullException(nameof(key), "Could not get system environment variable named 'AZURE_COGNITIVE_SERVICE_KEY'. Set this variable first.");
}
var client = new TextAnalyticsClient(new Uri(uri!), new AzureKeyCredential(key!));
return client;
}
public static TextTranslationClient CreateTranslateClient()
{
string? keyTranslate = Environment.GetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_KEY", EnvironmentVariableTarget.Machine);
string? regionForTranslationService = Environment.GetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_REGION", EnvironmentVariableTarget.Machine);
if (keyTranslate == null)
{
throw new ArgumentNullException(nameof(keyTranslate), "Could not get system environment variable named 'AZURE_TRANSLATION_SERVICE_KEY'. Set this variable first.");
}
if (regionForTranslationService == null)
{
throw new ArgumentNullException(nameof(regionForTranslationService), "Could not get system environment variable named 'AZURE_TRANSLATION_SERVICE_REGION'. Set this variable first.");
}
var client = new TextTranslationClient(new AzureKeyCredential(keyTranslate!), region: regionForTranslationService);
return client;
}
}
}
The code assumes that there are four environment variables set at the SYSTEM level of your OS.
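If you prefer setting them from code instead of through the OS dialogs, a one-time setup like the following can be used (run elevated on Windows, since the variables are written at machine level; the values shown are placeholders):
// One-time setup of the environment variables the factory expects.
// The values are placeholders - insert your own endpoint, keys and region.
Environment.SetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_ENDPOINT", "https://<your-resource>.cognitiveservices.azure.com/", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_KEY", "<your-text-analytics-key>", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_KEY", "<your-translator-key>", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_REGION", "<your-translator-region>", EnvironmentVariableTarget.Machine);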
Further on, let us look at the code to detect language. This uses a TextAnalyticsClient to detect which language an input text is written in.
DetectLanguageUtil.cs
using Azure.AI.TextAnalytics;
namespace MultiLingual.Translator.Lib
{
public class DetectLanguageUtil : IDetectLanguageUtil
{
private TextAnalyticsClient _client;
public DetectLanguageUtil()
{
_client = TextAnalyticsClientFactory.CreateClient();
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>.
/// </summary>
/// <param name="inputText"></param>
/// <remarks><see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<DetectedLanguage> DetectLanguage(string inputText)
{
DetectedLanguage detectedLanguage = await _client.DetectLanguageAsync(inputText);
return detectedLanguage;
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>. Returns the language name.
/// </summary>
/// <param name="inputText"></param>
/// <remarks><see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<string> DetectLanguageName(string inputText)
{
DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
return detectedLanguage.Name;
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>. Returns the language code.
/// </summary>
/// <param name="inputText"></param>
/// <remarks><see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<string> DetectLanguageIso6391(string inputText)
{
DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
return detectedLanguage.Iso6391Name;
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>. Returns the confidence score.
/// </summary>
/// <param name="inputText"></param>
/// <remarks><see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<double> DetectLanguageConfidenceScore(string inputText)
{
DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
return detectedLanguage.ConfidenceScore;
}
}
}
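A quick usage sketch of the util, for example from a small console app or a test (the values in the comments are what the service typically returns for this input):
using MultiLingual.Translator.Lib;

var detectLanguageUtil = new DetectLanguageUtil();
string name = await detectLanguageUtil.DetectLanguageName("Jeg er fra Norge");                   // typically "Norwegian"
string isoCode = await detectLanguageUtil.DetectLanguageIso6391("Jeg er fra Norge");             // typically "no"
double confidence = await detectLanguageUtil.DetectLanguageConfidenceScore("Jeg er fra Norge");  // e.g. 0.99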
The Iso6391 code is important when it comes to translation, which will be shown soon. But first let us look at the supported languages of Azure AI Translation services.
LanguageCode.cs
There are somewhere between 5,000 and 10,000 languages in the world, and the list above shows that Azure AI Translation services supports about 130 of them, i.e. roughly 1-2 % of the total. Of course, the supported languages include the most widely spoken languages in the world.
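The full listing is in the repository; in essence, LanguageCode is a class with one string constant per supported language, holding the language code that the Translator service expects. An illustrative excerpt is shown below; the member names and exact codes in the repository may differ slightly.
namespace MultiLingual.Translator.Lib.Models
{
    // Illustrative excerpt - the real file lists all of the roughly 130 supported languages.
    public class LanguageCode
    {
        public const string English = "en";
        public const string German = "de";
        public const string French = "fr";
        public const string Spanish = "es";
        public const string Norwegian = "nb";
        // ...and so on for the remaining supported languages
    }
}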
Let us look at the translation util code next.
TranslateUtil.cs
using Azure.AI.Translation.Text;
using MultiLingual.Translator.Lib.Models;
namespace MultiLingual.Translator.Lib
{
public class TranslateUtil : ITranslateUtil
{
private TextTranslationClient _client;
public TranslateUtil()
{
_client = TextAnalyticsClientFactory.CreateTranslateClient();
}
/// <summary>
/// Translates text using Azure AI Translate services.
/// </summary>
/// <param name="targetLanguage"><see cref="LanguageCode" /> contains a list of supported languages</param>
/// <param name="inputText"></param>
/// <param name="sourceLanguage">Pass in null here to auto detect the source language</param>
/// <returns></returns>
public async Task<string?> Translate(string targetLanguage, string inputText, string? sourceLanguage = null)
{
var translationOfText = await _client.TranslateAsync(targetLanguage, inputText, sourceLanguage);
if (translationOfText?.Value == null)
{
return null;
}
var translation = translationOfText.Value.SelectMany(l => l.Translations).Select(l => l.Text)?.ToList();
string? translationText = translation?.FlattenString();
return translationText;
}
}
}
We use a little helper extension method here too :
StringExtensions.cs
using System.Text;
namespace MultiLingual.Translator.Lib
{
public static class StringExtensions
{
/// <summary>
/// Merges a collection of lines into a flattened string, separating each line by a specified line separator.
/// Newline is the default.
/// </summary>
/// <param name="inputLines"></param>
/// <param name="lineSeparator"></param>
/// <returns></returns>
public static string? FlattenString(this IEnumerable<string>? inputLines, string lineSeparator = "\n")
{
if (inputLines == null || !inputLines.Any())
{
returnnull;
}
var flattenedString = inputLines?.Aggregate(new StringBuilder(),
(sb, l) => sb.AppendLine(l + lineSeparator),
sb => sb.ToString().Trim());
return flattenedString;
}
}
}
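For example (illustrative usage):
using MultiLingual.Translator.Lib;

var lines = new List<string> { "Hello", "World" };
string? joinedByNewline = lines.FlattenString();    // lines merged, separated by the default "\n"
string? joinedByPipe = lines.FlattenString(" | ");  // lines merged with a custom separator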
Here are some tests for detecting language :
DetectLanguageUtilTests.cs
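The test class itself is in the repository; a sketch of what such a test can look like, in the same xUnit + FluentAssertions style as the translation tests below (the expected names are examples of what the service typically returns):
using FluentAssertions;
using Xunit;

namespace MultiLingual.Translator.Lib.Test
{
    // Illustrative sketch of a language detection test.
    public class DetectLanguageUtilTests
    {
        private readonly DetectLanguageUtil _detectLanguageUtil = new();

        [Theory]
        [InlineData("Jeg er fra Norge og jeg liker brunost", "Norwegian")]
        [InlineData("Ich bin aus Hamburg und ich liebe bier", "German")]
        public async Task DetectLanguageNameReturnsExpected(string input, string expectedLanguageName)
        {
            string languageName = await _detectLanguageUtil.DetectLanguageName(input);
            languageName.Should().Be(expectedLanguageName);
        }
    }
}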
And here are some translation util tests :
TranslateUtilTests.cs
using FluentAssertions;
using MultiLingual.Translator.Lib.Models;
namespace MultiLingual.Translator.Lib.Test
{
public class TranslateUtilTests
{
private TranslateUtil _translateUtil;
public TranslateUtilTests()
{
_translateUtil = new TranslateUtil();
}
[Theory]
[InlineData("Jeg er fra Norge og jeg liker brunost", "i'm from norway and i like brown cheese", LanguageCode.Norwegian, LanguageCode.English)]
[InlineData("Jeg er fra Norge og jeg liker brunost", "i'm from norway and i like brown cheese", null, LanguageCode.English)] //auto detect language is tested here
[InlineData("Ich bin aus Hamburg und ich liebe bier", "i'm from hamburg and i love beer", LanguageCode.German, LanguageCode.English)]
[InlineData("Ich bin aus Hamburg und ich liebe bier", "i'm from hamburg and i love beer", null, LanguageCode.English)] //Auto detect source language is tested here
[InlineData("tlhIngan maH", "we are klingons", LanguageCode.Klingon, LanguageCode.English)] //Klingon force !publicasync Task TranslationReturnsExpected(string input, string expectedTranslation, string sourceLanguage, string targetLanguage)
{
string? translation = await _translateUtil.Translate(targetLanguage, input, sourceLanguage);
translation.Should().NotBeNull();
translation.Should().BeEquivalentTo(expectedTranslation);
}
}
}
Over to the UI. The app is made with MAUI Blazor.
Here are some models for the app :
NameValue.cs
namespace MultiLingual.Translator.Models
{
public class NameValue
{
public string Name { get; set; }
public string Value { get; set; }
}
}
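The razor page shown next also binds to a LanguageInputModel. That class is in the repository as well; a minimal sketch inferred from the bindings used in Index.razor could look like this (the property types and the validation attribute are assumptions):
using System.ComponentModel.DataAnnotations;

namespace MultiLingual.Translator.Models
{
    // Minimal sketch of the form model, inferred from the bindings in Index.razor.
    public class LanguageInputModel
    {
        [Required]
        public string InputText { get; set; } = string.Empty;

        public string? DetectedLanguageInfo { get; set; }

        public string? DetectedLanguageIso6391 { get; set; }

        public string TargetLanguage { get; set; } = string.Empty;

        public string? TranslatedText { get; set; }
    }
}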
The UI consists of the following razor code, written for the MAUI Blazor app.
Index.razor
@page "/"
@inject ITranslateUtil TransUtil
@inject IDetectLanguageUtil DetectLangUtil
@inject IJSRuntime JS
@using MultiLingual.Translator.Lib;
@using MultiLingual.Translator.Lib.Models;
@using MultiLingual.Translator.Models;
<h1>Azure AI Text Translation</h1>
<EditForm Model="@Model" OnValidSubmit="@Submit"class="form-group" style="background-color:aliceblue;">
<DataAnnotationsValidator />
<ValidationSummary />
<div class="form-group row">
<label for="Model.InputText">Text to translate</label>
<InputTextArea @bind-Value="Model!.InputText" placeholder="Enter text to translate" @ref="inputTextRef" id="textToTranslate" rows="5" />
</div>
<div class="form-group row">
<span>Detected language of text to translate</span>
<InputText class="languageLabelText"readonly="readonly" placeholder="The detected language of the text to translate" @bind-Value="Model!.DetectedLanguageInfo"></InputText>
@if (Model.DetectedLanguageInfo != null){
<img src="@FlagIcon"class="flagIcon" />
}
</div>
<br />
<div class="form-group row">
<span>Translate into language</span>
<InputSelect placeholder="Choose the target language" @bind-Value="Model!.TargetLanguage">
@foreach (var item in LanguageCodes){
<option value="@item.Value">@item.Name</option>
}
</InputSelect>
<br />
@if (Model.TargetLanguage != null){
<img src="@TargetFlagIcon"class="flagIcon" />
}
</div>
<br />
<div class="form-group row">
<span>Translation</span>
<InputTextArea readonly="readonly" placeholder="The translated text target language" @bind-Value="Model!.TranslatedText" rows="5"></InputTextArea>
</div>
<button type="submit"class="submitButton">Submit</button>
</EditForm>
@code {
private Azure.AI.TextAnalytics.TextAnalyticsClient _client;
private InputTextArea inputTextRef;
public LanguageInputModel Model { get; set; } = new();
private string FlagIcon {
get
{
return$"images/flags/png100px/{Model.DetectedLanguageIso6391}.png";
}
}
private string TargetFlagIcon {
get
{
return$"images/flags/png100px/{Model.TargetLanguage}.png";
}
}
private List<NameValue> LanguageCodes = typeof(LanguageCode).GetFields().Select(f => new NameValue {
Name = f.Name,
Value = f.GetValue(f)?.ToString(),
}).OrderBy(f => f.Name).ToList();
private async Task Submit()
{
var detectedLanguage = await DetectLangUtil.DetectLanguage(Model.InputText);
Model.DetectedLanguageInfo = $"{detectedLanguage.Iso6391Name} {detectedLanguage.Name}";
Model.DetectedLanguageIso6391 = detectedLanguage.Iso6391Name;
if (_client == null)
{
_client = TextAnalyticsClientFactory.CreateClient();
}
Model.TranslatedText = await TransUtil.Translate(Model.TargetLanguage, Model.InputText, detectedLanguage.Iso6391Name);
StateHasChanged();
}
protected override async Task OnAfterRenderAsync(bool firstRender)
{
if (firstRender)
{
Model.TargetLanguage = LanguageCode.English;
await JS.InvokeVoidAsync("exampleJsFunctions.focusElement", inputTextRef?.AdditionalAttributes.FirstOrDefault(a => a.Key?.ToLower() == "id").Value);
StateHasChanged();
}
}
}
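For the @inject directives to resolve, the util classes must be registered in the MAUI app's service container. A sketch of the relevant part of MauiProgram.cs is shown below; the exact registrations and lifetimes in the repository may differ.
using Microsoft.Extensions.DependencyInjection;
using MultiLingual.Translator.Lib;

namespace MultiLingual.Translator
{
    public static class MauiProgram
    {
        public static MauiApp CreateMauiApp()
        {
            var builder = MauiApp.CreateBuilder();
            builder.UseMauiApp<App>();
            builder.Services.AddMauiBlazorWebView();

            // Register the Azure AI util classes that Index.razor injects.
            builder.Services.AddSingleton<IDetectLanguageUtil, DetectLanguageUtil>();
            builder.Services.AddSingleton<ITranslateUtil, TranslateUtil>();

            return builder.Build();
        }
    }
}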
Finally, a screenshot of how the app looks :
You enter the text to translate, and after you hit Submit the detected language is shown. You can select the target language to translate the text into; English is selected by default. The Iso6391 code of the selected language is shown as a flag icon, if there is a 1:1 mapping between the Iso6391 code and
the flag icons available in the app. The top flag shows the detected source language in the same way, via its Iso6391 code.