
Thursday, 9 May 2024

Azure Cognitive Synthesized Text To Speech with voice styles

Using Azure Cognitive Services, it is possible to translate text into other languages and also synthesize the text to speech. It is also possible to add voice effects such as a voice style, which adds realism by giving the synthesized voice emotion. The voices are already trained with neural nets, and adding a voice style makes the synthesized speech even more realistic and versatile. The Github repo for this is available here, as a .NET MAUI Blazor client written with .NET 8:

MultiLingual translator DEMO Github repo

Not all the voices supported in Azure Cognitive Services support voice effects. An overview of which voices support voice styles and roles is shown here:

https://learn.microsoft.com/nb-no/azure/ai-services/speech-service/language-support?tabs=tts#voice-styles-and-roles

More and more of the synthetic voices in Azure Cognitive Services are getting voice styles that express emotions. For now, most of the voices with style support are either English (en-US) or Chinese (zh-CN), and a few other languages have a handful of voices supporting styles. This will most likely improve in the future, either as more of these neural net trained voices are trained with voice styles, or if some generic voice style algorithm is achieved that can infer emotions on a generic level, although that still sounds a bit sci-fi.

Azure Cognitive Text-To-Speech Voices with support for emotions / voice styles


Voice | Styles | Roles
de-DE-ConradNeural | cheerful | Not supported
en-GB-SoniaNeural | cheerful, sad | Not supported
en-US-AriaNeural | angry, chat, cheerful, customerservice, empathetic, excited, friendly, hopeful, narration-professional, newscast-casual, newscast-formal, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-DavisNeural | angry, chat, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-GuyNeural | angry, cheerful, excited, friendly, hopeful, newscast, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-JaneNeural | angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-JasonNeural | angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-JennyNeural | angry, assistant, chat, cheerful, customerservice, excited, friendly, hopeful, newscast, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-NancyNeural | angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-SaraNeural | angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, whispering | Not supported
en-US-TonyNeural | angry, cheerful, excited, friendly, hopeful, sad, shouting, terrified, unfriendly, whispering | Not supported
es-MX-JorgeNeural | chat, cheerful | Not supported
fr-FR-DeniseNeural | cheerful, sad | Not supported
fr-FR-HenriNeural | cheerful, sad | Not supported
it-IT-IsabellaNeural | chat, cheerful | Not supported
ja-JP-NanamiNeural | chat, cheerful, customerservice | Not supported
pt-BR-FranciscaNeural | calm | Not supported
zh-CN-XiaohanNeural | affectionate, angry, calm, cheerful, disgruntled, embarrassed, fearful, gentle, sad, serious | Not supported
zh-CN-XiaomengNeural | chat | Not supported
zh-CN-XiaomoNeural | affectionate, angry, calm, cheerful, depressed, disgruntled, embarrassed, envious, fearful, gentle, sad, serious | Boy, Girl, OlderAdultFemale, OlderAdultMale, SeniorFemale, SeniorMale, YoungAdultFemale, YoungAdultMale
zh-CN-XiaoruiNeural | angry, calm, fearful, sad | Not supported
zh-CN-XiaoshuangNeural | chat | Not supported
zh-CN-XiaoxiaoNeural | affectionate, angry, assistant, calm, chat, chat-casual, cheerful, customerservice, disgruntled, fearful, friendly, gentle, lyrical, newscast, poetry-reading, sad, serious, sorry, whisper | Not supported
zh-CN-XiaoyiNeural | affectionate, angry, cheerful, disgruntled, embarrassed, fearful, gentle, sad, serious | Not supported
zh-CN-XiaozhenNeural | angry, cheerful, disgruntled, fearful, sad, serious | Not supported
zh-CN-YunfengNeural | angry, cheerful, depressed, disgruntled, fearful, sad, serious | Not supported
zh-CN-YunhaoNeural | advertisement-upbeat | Not supported
zh-CN-YunjianNeural | angry, cheerful, depressed, disgruntled, documentary-narration, narration-relaxed, sad, serious, sports-commentary, sports-commentary-excited | Not supported
zh-CN-YunxiaNeural | angry, calm, cheerful, fearful, sad | Not supported
zh-CN-YunxiNeural | angry, assistant, chat, cheerful, depressed, disgruntled, embarrassed, fearful, narration-relaxed, newscast, sad, serious | Boy, Narrator, YoungAdultMale
zh-CN-YunyangNeural | customerservice, narration-professional, newscast-casual | Not supported
zh-CN-YunyeNeural | angry, calm, cheerful, disgruntled, embarrassed, fearful, sad, serious | Boy, Girl, OlderAdultFemale, OlderAdultMale, SeniorFemale, SeniorMale, YoungAdultFemale, YoungAdultMale
zh-CN-YunzeNeural | angry, calm, cheerful, depressed, disgruntled, documentary-narration, fearful, sad, serious | OlderAdultMale, SeniorMale

Screenshot from the DEMO showing its user interface. You enter the text to translate at the top, and the language of the text is detected using the text detection functionality of Azure Cognitive Services. You can then select which language to translate the text into; a REST call to Azure Cognitive Services performs the translation, and you can also listen to the speech of the translated text. Selecting a voice style has now been added as well. Use the table above to pick a voice actor that supports the voice style you want to test. As noted, voice styles are still limited to a few languages and a subset of voice actors. If the chosen voice actor does not support the selected emotion or voice style, you will hear the voice in its normal, neutral style.
Let's look at some code for this DEMO too. You can study the Github repo and clone it to test it out yourself. The TextToSpeechUtil class handles much of the logic of creating a voice from text input: it creates the SSML XML contents and performs the REST API call to create the voice file. SSML is the Speech Synthesis Markup Language, a standard documented on Microsoft Learn and adopted by others too, including Google.

https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-synthesis-markup



using Microsoft.Extensions.Configuration;
using MultiLingual.Translator.Lib.Models;
using System;
using System.Security;
using System.Text;
using System.Xml.Linq;

namespace MultiLingual.Translator.Lib
{
    public class TextToSpeechUtil : ITextToSpeechUtil
    {

        public TextToSpeechUtil(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public async Task<TextToSpeechResult> GetSpeechFromText(string text, string language, TextToSpeechLanguage[] actorVoices, 
            string? preferredVoiceActorId, string? preferredVoiceStyle)
        {
            var result = new TextToSpeechResult();

            result.Transcript = GetSpeechTextXml(text, language, actorVoices, preferredVoiceActorId, preferredVoiceStyle, result);
            result.ContentType = _configuration[TextToSpeechSpeechContentType];
            result.OutputFormat = _configuration[TextToSpeechSpeechXMicrosoftOutputFormat];
            result.UserAgent = _configuration[TextToSpeechSpeechUserAgent];
            result.AvailableVoiceActorIds = ResolveAvailableActorVoiceIds(language, actorVoices);
            result.LanguageCode = language;

            string? token = await GetUpdatedToken();

            HttpClient httpClient = GetTextToSpeechWebClient(token);

            string ttsEndpointUrl = _configuration[TextToSpeechSpeechEndpoint];
            var response = await httpClient.PostAsync(ttsEndpointUrl, new StringContent(result.Transcript, Encoding.UTF8, result.ContentType));

            using (var memStream = new MemoryStream()) {
                var responseStream = await response.Content.ReadAsStreamAsync();
                responseStream.CopyTo(memStream);
                result.VoiceData = memStream.ToArray();
            }

            return result;
        }

        private async Task<string?> GetUpdatedToken()
        {
            string? token = _token?.ToNormalString();
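            // Azure Cognitive Services access tokens are short-lived (valid for about 10 minutes),
            // so a new token is fetched once the cached one is older than 8 minutes.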
            if (_lastTimeTokenFetched == null || DateTime.Now.Subtract(_lastTimeTokenFetched.Value).Minutes > 8)
            {
                token = await GetIssuedToken();
            }

            return token;
        }

        private HttpClient GetTextToSpeechWebClient(string? token)
        {
            var httpClient = new HttpClient();
            httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
            httpClient.DefaultRequestHeaders.Add("X-Microsoft-OutputFormat", _configuration[TextToSpeechSpeechXMicrosoftOutputFormat]);
            httpClient.DefaultRequestHeaders.Add("User-Agent", _configuration[TextToSpeechSpeechUserAgent]);
            return httpClient;
        }
       
        public string GetSpeechTextXml(string text, string language, TextToSpeechLanguage[] actorVoices, string? preferredVoiceActorId,
              string? preferredVoiceStyle, TextToSpeechResult result)
        {
            result.VoiceActorId = ResolveVoiceActorId(language, preferredVoiceActorId, actorVoices);
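            // Build the base SSML document; the mstts namespace is declared here so that a
            // voice style (mstts:express-as) can be injected into the markup afterwards if desired.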
            string speechXml = $@"
            <speak version='1.0' xml:lang='en-US' xmlns:mstts='https://www.w3.org/2001/mstts'>
                <voice xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice {result.VoiceActorId}'>
                    <prosody rate='1'>{text}</prosody>
                </voice>
            </speak>";

            speechXml = AddVoiceStyleEffectIfDesired(preferredVoiceStyle, speechXml);

            return speechXml;
        }

        /// <summary>
        /// Adds voice style / expression to the SSML markup for the voice
        /// </summary>
        private static string AddVoiceStyleEffectIfDesired(string? preferredVoiceStyle, string speechXml)
        {
            if (!string.IsNullOrWhiteSpace(preferredVoiceStyle) && preferredVoiceStyle != "normal-neutral")
            {
                var voiceDoc = XDocument.Parse(speechXml); //https://learn.microsoft.com/nb-no/azure/ai-services/speech-service/speech-synthesis-markup-voice#use-speaking-styles-and-roles

                XElement? prosody = voiceDoc.Descendants("prosody").FirstOrDefault();
                if (prosody?.Value != null)
                {
                    // Create the <mstts:express-as> element, for now skip the ':' letter and replace at the end

                    var expressedAsWrappedElement = new XElement("msttsexpress-as",
                        new XAttribute("style", preferredVoiceStyle));
                    expressedAsWrappedElement.Value = prosody!.Value;
                    prosody?.ReplaceWith(expressedAsWrappedElement);
                    speechXml = voiceDoc.ToString().Replace(@"msttsexpress-as", "mstts:express-as");
                }
            }

            return speechXml;
        }

        private List<string> ResolveAvailableActorVoiceIds(string language, TextToSpeechLanguage[] actorVoices)
        {
            if (actorVoices?.Any() == true)
            {
                var voiceActorIds = actorVoices.Where(v => v.LanguageKey == language || v.LanguageKey.Split("-")[0] == language).SelectMany(v => v.VoiceActors).Select(v => v.VoiceId).ToList();
                return voiceActorIds;
            }
            return new List<string>();
        }

        private string ResolveVoiceActorId(string language, string? preferredVoiceActorId, TextToSpeechLanguage[] actorVoices)
        {
            string actorVoiceId = "(en-AU, NatashaNeural)"; //default to a select voice actor id 
            if (actorVoices?.Any() == true)
            {
                var voiceActorsForLanguage = actorVoices.Where(v => v.LanguageKey == language || v.LanguageKey.Split("-")[0] == language).SelectMany(v => v.VoiceActors).Select(v => v.VoiceId).ToList();
                if (voiceActorsForLanguage != null)
                {
                    if (voiceActorsForLanguage.Any() == true)
                    {
                        var resolvedPreferredVoiceActorId = voiceActorsForLanguage.FirstOrDefault(v => v == preferredVoiceActorId);
                        if (!string.IsNullOrWhiteSpace(resolvedPreferredVoiceActorId))
                        {
                            return resolvedPreferredVoiceActorId!;
                        }
                        actorVoiceId = voiceActorsForLanguage.First();
                    }
                }
            }
            return actorVoiceId;
        }

        private async Task<string> GetIssuedToken()
        {
            var httpClient = new HttpClient();
            string? textToSpeechSubscriptionKey = Environment.GetEnvironmentVariable("AZURE_TEXT_SPEECH_SUBSCRIPTION_KEY", EnvironmentVariableTarget.Machine);
            httpClient.DefaultRequestHeaders.Add(OcpApiSubscriptionKeyHeaderName, textToSpeechSubscriptionKey);
            string tokenEndpointUrl = _configuration[TextToSpeechIssueTokenEndpoint];
            var response = await httpClient.PostAsync(tokenEndpointUrl, new StringContent("{}"));
            _token = (await response.Content.ReadAsStringAsync()).ToSecureString();
            _lastTimeTokenFetched = DateTime.Now;
            return _token.ToNormalString();
        }

        public async Task<List<string>> GetVoiceStyles()
        {
            var voiceStyles = new List<string>
            {
                "normal-neutral",
                "advertisement_upbeat",
                "affectionate",
                "angry",
                "assistant",
                "calm",
                "chat",
                "cheerful",
                "customerservice",
                "depressed",
                "disgruntled",
                "documentary-narration",
                "embarrassed",
                "empathetic",
                "envious",
                "excited",
                "fearful",
                "friendly",
                "gentle",
                "hopeful",
                "lyrical",
                "narration-professional",
                "narration-relaxed",
                "newscast",
                "newscast-casual",
                "newscast-formal",
                "poetry-reading",
                "sad",
                "serious",
                "shouting",
                "sports_commentary",
                "sports_commentary_excited",
                "whispering",
                "terrified",
                "unfriendly"
            };
            return await Task.FromResult(voiceStyles);
        }

        private const string OcpApiSubscriptionKeyHeaderName = "Ocp-Apim-Subscription-Key";
        private const string TextToSpeechIssueTokenEndpoint = "TextToSpeechIssueTokenEndpoint";
        private const string TextToSpeechSpeechEndpoint = "TextToSpeechSpeechEndpoint";        
        private const string TextToSpeechSpeechContentType = "TextToSpeechSpeechContentType";
        private const string TextToSpeechSpeechUserAgent = "TextToSpeechSpeechUserAgent";
        private const string TextToSpeechSpeechXMicrosoftOutputFormat = "TextToSpeechSpeechXMicrosoftOutputFormat";

        private readonly IConfiguration _configuration;

        private DateTime? _lastTimeTokenFetched = null;
        private SecureString _token = null;

    }
}
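
To make the usage concrete, here is a small hedged sketch (not code from the repo; the configuration instance, the actorVoices array and the chosen voice actor and style are assumptions for illustration) of calling GetSpeechFromText and saving the returned audio bytes to a file:

// Assumes an IConfiguration instance and a populated TextToSpeechLanguage[] actorVoices array are available.
ITextToSpeechUtil textToSpeech = new TextToSpeechUtil(configuration);

TextToSpeechResult result = await textToSpeech.GetSpeechFromText(
    text: "I listen to Eurovision and cheer for Norway",
    language: "en",
    actorVoices: actorVoices,
    preferredVoiceActorId: "(en-US, JaneNeural)",
    preferredVoiceStyle: "angry");

if (result.VoiceData?.Length > 0)
{
    // The audio container depends on the configured X-Microsoft-OutputFormat value (for example an mp3 format).
    await File.WriteAllBytesAsync("synthesized-speech.mp3", result.VoiceData);
}
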

 
 

The REST call that generates the voice file uses the following setup. TTS endpoint url: https://norwayeast.tts.speech.microsoft.com/cognitiveservices/v1. The transcript (the text to turn into speech) in my test is the following SSML XML document:


<speak version="1.0" xml:lang="en-US" xmlns:mstts="https://www.w3.org/2001/mstts">
  <voice xml:gender="Male" name="Microsoft Server Speech Text to Speech Voice (en-US, JaneNeural)">
    <mstts:express-as style="angry">I listen to Eurovision and cheer for Norway</mstts:express-as>
  </voice>
</speak>


The SSML also contains the mstts extension, which adds features to SSML such as the express-as element, here set to the voice style or emotion "angry". Not all emotions or voice styles are supported by every voice actor in Azure Cognitive Services, but this is the list of voice styles that can be supported; which of them apply varies with the voice actor you choose (and inherently which language).
  • "normal-neutral"
  • "advertisement_upbeat"
  • "affectionate"
  • "angry"
  • "assistant"
  • "calm"
  • "chat"
  • "cheerful"
  • "customerservice"
  • "depressed"
  • "disgruntled"
  • "documentary-narration"
  • "embarrassed"
  • "empathetic"
  • "envious"
  • "excited"
  • "fearful"
  • "friendly"
  • "gentle"
  • "hopeful"
  • "lyrical"
  • "narration-professional"
  • "narration-relaxed"
  • "newscast"
  • "newscast-casual"
  • "newscast-formal"
  • "poetry-reading"
  • "sad"
  • "serious"
  • "shouting"
  • "sports_commentary"
  • "sports_commentary_excited"
  • "whispering"
  • "terrified"
  • "unfriendly"
Microsoft has come a long way from the early work with SAPI - the Microsoft Speech API - and Microsoft SAM around 2000. The realism of synthetic voices more than 20 years ago was rather crude and robotic. Nowadays, the voice actors provided by the Azure cloud computing platform, as shown here, are neural net trained and very realistic, based upon training from real voice actors, and more and more of these voices support emotions or voice styles. The usages of this can be diverse. Text-to-speech synthesis can serve automated answering services and apps in diverse fields such as healthcare, public services, education and more. Making this demo has been fun for me, and it can be used to learn languages: with the voice functionality you can train not only the translation but also the pronunciation.

Saturday, 14 October 2023

Using Image Analysis in Azure AI Cognitive Services

I have added a demo .NET MAUI Blazor app that uses Image Analysis in Computer Vision in Azure Cognitive Services. Note that Image Analysis is not available in all Azure data centers. For example, Norway East does not have this feature, but the North Europe Azure data center (the data center in Ireland) does. A Github repo exists for this demo here:

https://github.com/toreaurstadboss/Image.Analyze.Azure.Ai

A screenshot of the demo is shown below. The demo allows you to upload a picture (supported formats are .jpeg, .jpg and .png, but the Azure AI Image Analyzer supports a lot of other image formats too). The demo shows a preview of the selected image and, to the right, an image with the bounding boxes of the objects found in the image. A list of tags extracted from the image is also shown. Raw data from the Azure Image Analyzer service is shown in the text box area below the pictures, with the list of tags to the right. The demo is written with .NET MAUI Blazor and .NET 6. Let us look at some code for making this demo. ImageSaveService.cs


using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;

namespace Ocr.Handwriting.Azure.AI.Services
{

    public class ImageSaveService : IImageSaveService
    {

        public async Task<ImageSaveModel> SaveImage(IBrowserFile browserFile)
        {
            var buffers = new byte[browserFile.Size];
            var bytes = await browserFile.OpenReadStream(maxAllowedSize: 30 * 1024 * 1024).ReadAsync(buffers);
            string imageType = browserFile.ContentType;

            var basePath = FileSystem.Current.AppDataDirectory;
            var imageSaveModel = new ImageSaveModel
            {
                SavedFilePath = Path.Combine(basePath, $"{Guid.NewGuid().ToString("N")}-{browserFile.Name}"),
                PreviewImageUrl = $"data:{imageType};base64,{Convert.ToBase64String(buffers)}",
                FilePath = browserFile.Name,
                FileSize = bytes / 1024,
            };

            await File.WriteAllBytesAsync(imageSaveModel.SavedFilePath, buffers);

            return imageSaveModel;
        }

    }
}

//Interface defined inside IImageService.cs shown below
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;

namespace Ocr.Handwriting.Azure.AI.Services
{
  
    public interface IImageSaveService
    {

        Task<ImageSaveModel> SaveImage(IBrowserFile browserFile);

    }

}


The ImageSaveService saves the uploaded image from the IBrowserFile and builds a base-64 string from the image bytes, which are read via OpenReadStream on the IBrowserFile. This allows us to preview the uploaded image. The code also saves the image to the AppDataDirectory that MAUI supports - FileSystem.Current.AppDataDirectory. Let's look at how to call the analysis service itself; it is actually quite straightforward. ImageAnalyzerService.cs


using Azure;
using Azure.AI.Vision.Common;
using Azure.AI.Vision.ImageAnalysis;

namespace Image.Analyze.Azure.Ai.Lib
{

    public class ImageAnalyzerService : IImageAnalyzerService
    {

        public ImageAnalyzer CreateImageAnalyzer(string imageFile)
        {
            string key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_KEY");
            string endpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_ENDPOINT");
            var visionServiceOptions = new VisionServiceOptions(new Uri(endpoint), new AzureKeyCredential(key));

            using VisionSource visionSource = CreateVisionSource(imageFile);

            var analysisOptions = CreateImageAnalysisOptions();

            var analyzer = new ImageAnalyzer(visionServiceOptions, visionSource, analysisOptions);
            return analyzer;

        }

        private static VisionSource CreateVisionSource(string imageFile)
        {
            using var stream = File.OpenRead(imageFile);
            byte[] imageBuffer;
            using (var memoryStream = new MemoryStream())
            {
                stream.CopyTo(memoryStream);
                imageBuffer = memoryStream.ToArray();
            }

            using var imageSourceBuffer = new ImageSourceBuffer();
            imageSourceBuffer.GetWriter().Write(imageBuffer);
            return VisionSource.FromImageSourceBuffer(imageSourceBuffer);
        }

        private static ImageAnalysisOptions CreateImageAnalysisOptions() => new ImageAnalysisOptions
        {
            Language = "en",
            GenderNeutralCaption = false,
            Features =
              ImageAnalysisFeature.CropSuggestions
            | ImageAnalysisFeature.Caption
            | ImageAnalysisFeature.DenseCaptions
            | ImageAnalysisFeature.Objects
            | ImageAnalysisFeature.People
            | ImageAnalysisFeature.Text
            | ImageAnalysisFeature.Tags
        };

    }

}

//interface shown below 

 public interface IImageAnalyzerService
 {
     ImageAnalyzer CreateImageAnalyzer(string imageFile);
 }
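
Since the code-behind shown further below gets these services injected, here is a minimal registration sketch (my own assumption, not necessarily how the repo does it) of how IImageSaveService and IImageAnalyzerService could be wired into MAUI's dependency injection in MauiProgram.cs:

using Image.Analyze.Azure.Ai.Lib;
using Ocr.Handwriting.Azure.AI.Services;

public static class MauiProgram
{
    public static MauiApp CreateMauiApp()
    {
        var builder = MauiApp.CreateBuilder();
        builder.UseMauiApp<App>();

        // Host Blazor components inside the MAUI app
        builder.Services.AddMauiBlazorWebView();

        // Register the image save and image analyzer services used by the Index page
        // (the chosen lifetimes here are a guess)
        builder.Services.AddScoped<IImageSaveService, ImageSaveService>();
        builder.Services.AddScoped<IImageAnalyzerService, ImageAnalyzerService>();

        return builder.Build();
    }
}
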



In the ImageAnalyzerService we retrieve the environment variables and create an ImageAnalyzer. We create a vision source from the saved picture we uploaded, opening a stream to it with the File.OpenRead method from System.IO. Since we saved the file in the AppData folder of the .NET MAUI app, we can read this file. We set up the image analysis options and the vision service options, and then return the image analyzer. Let's look at the code-behind of the Index.razor file that initializes the image analyzer and runs its Analyze method. Index.razor.cs
 
 
 using Azure.AI.Vision.ImageAnalysis;
using Image.Analyze.Azure.Ai.Extensions;
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
using Microsoft.JSInterop;
using System.Text;

namespace Image.Analyze.Azure.Ai.Pages
{
    partial class Index
    {

        private IndexModel Model = new();

        //https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40?WT.mc_id=twitter&pivots=programming-language-csharp

        private string ImageInfo = string.Empty;

        private async Task Submit()
        {
            if (Model.PreviewImageUrl == null || Model.SavedFilePath == null)
            {
                await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"You must select an image first before running Image Analysis. Supported formats are .jpeg, .jpg and .png", "Ok", "Cancel");
                return;
            }

            using var imageAnalyzer = ImageAnalyzerService.CreateImageAnalyzer(Model.SavedFilePath);

            ImageAnalysisResult analysisResult = await imageAnalyzer.AnalyzeAsync();

            if (analysisResult.Reason == ImageAnalysisResultReason.Analyzed)
            {
                Model.ImageAnalysisOutputText = analysisResult.OutputImageAnalysisResult();
                Model.Caption = $"{analysisResult.Caption.Content} Confidence: {analysisResult.Caption.Confidence.ToString("F2")}";
                Model.Tags = analysisResult.Tags.Select(t => $"{t.Name} (Confidence: {t.Confidence.ToString("F2")})").ToList();
                var jsonBboxes = analysisResult.GetBoundingBoxesJson();
                await JsRunTime.InvokeVoidAsync("LoadBoundingBoxes", jsonBboxes);
            }
            else
            {
                ImageInfo = $"The image analysis did not perform its analysis. Reason: {analysisResult.Reason}";
            }

            StateHasChanged(); //visual refresh here
        }

        private async Task CopyTextToClipboard()
        {
            await Clipboard.SetTextAsync(Model.ImageAnalysisOutputText);
            await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"The copied text was put into the clipboard. Character length: {Model.ImageAnalysisOutputText?.Length}", "Ok", "Cancel");
        }

        private async Task OnInputFile(InputFileChangeEventArgs args)
        {
            var imageSaveModel = await ImageSaveService.SaveImage(args.File);
            Model = new IndexModel(imageSaveModel);
            await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"Wrote file to location : {Model.SavedFilePath} Size is: {Model.FileSize} kB", "Ok", "Cancel");
        }


    }
}
 
 
In the code-behind above we have a submit handler called Submit. There we analyze the image and send the result both to the UI and to a client-side Javascript method using IJSRuntime in .NET MAUI Blazor. Let's look at the two helper methods of ImageAnalysisResult next. ImageAnalysisResultExtensions.cs
 
 
 using Azure.AI.Vision.ImageAnalysis;
using System.Text;

namespace Image.Analyze.Azure.Ai.Extensions
{
    public static class ImageAnalysisResultExtensions
    {

        public static string GetBoundingBoxesJson(this ImageAnalysisResult result)
        {
            var sb = new StringBuilder();
            sb.AppendLine(@"[");

            int objectIndex = 0;
            foreach (var detectedObject in result.Objects)
            {
                sb.Append($"{{ \"Name\": \"{detectedObject.Name}\", \"Y\": {detectedObject.BoundingBox.Y}, \"X\": {detectedObject.BoundingBox.X}, \"Height\": {detectedObject.BoundingBox.Height}, \"Width\": {detectedObject.BoundingBox.Width}, \"Confidence\": \"{detectedObject.Confidence:0.0000}\" }}");
                objectIndex++;
                if (objectIndex < result.Objects?.Count)
                {
                    sb.Append($",{Environment.NewLine}");
                }
                else
                {
                    sb.Append($"{Environment.NewLine}");
                }
            }
            sb.Remove(sb.Length - 2, 1); //remove trailing comma at the end
            sb.AppendLine(@"]");
            return sb.ToString();
        }

        public static string OutputImageAnalysisResult(this ImageAnalysisResult result)
        {
            var sb = new StringBuilder();

            if (result.Reason == ImageAnalysisResultReason.Analyzed)
            {

                sb.AppendLine($" Image height = {result.ImageHeight}");
                sb.AppendLine($" Image width = {result.ImageWidth}");
                sb.AppendLine($" Model version = {result.ModelVersion}");

                if (result.Caption != null)
                {
                    sb.AppendLine(" Caption:");
                    sb.AppendLine($"   \"{result.Caption.Content}\", Confidence {result.Caption.Confidence:0.0000}");
                }

                if (result.DenseCaptions != null)
                {
                    sb.AppendLine(" Dense Captions:");
                    foreach (var caption in result.DenseCaptions)
                    {
                        sb.AppendLine($"   \"{caption.Content}\", Bounding box {caption.BoundingBox}, Confidence {caption.Confidence:0.0000}");
                    }
                }

                if (result.Objects != null)
                {
                    sb.AppendLine(" Objects:");
                    foreach (var detectedObject in result.Objects)
                    {
                        sb.AppendLine($"   \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}");
                    }
                }

                if (result.Tags != null)
                {
                    sb.AppendLine($" Tags:");
                    foreach (var tag in result.Tags)
                    {
                        sb.AppendLine($"   \"{tag.Name}\", Confidence {tag.Confidence:0.0000}");
                    }
                }

                if (result.People != null)
                {
                    sb.AppendLine($" People:");
                    foreach (var person in result.People)
                    {
                        sb.AppendLine($"   Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}");
                    }
                }

                if (result.CropSuggestions != null)
                {
                    sb.AppendLine($" Crop Suggestions:");
                    foreach (var cropSuggestion in result.CropSuggestions)
                    {
                        sb.AppendLine($"   Aspect ratio {cropSuggestion.AspectRatio}: "
                            + $"Crop suggestion {cropSuggestion.BoundingBox}");
                    };
                }

                if (result.Text != null)
                {
                    sb.AppendLine($" Text:");
                    foreach (var line in result.Text.Lines)
                    {
                        string pointsToString = "{" + string.Join(',', line.BoundingPolygon.Select(point => point.ToString())) + "}";
                        sb.AppendLine($"   Line: '{line.Content}', Bounding polygon {pointsToString}");

                        foreach (var word in line.Words)
                        {
                            pointsToString = "{" + string.Join(',', word.BoundingPolygon.Select(point => point.ToString())) + "}";
                            sb.AppendLine($"     Word: '{word.Content}', Bounding polygon {pointsToString}, Confidence {word.Confidence:0.0000}");
                        }
                    }
                }

                var resultDetails = ImageAnalysisResultDetails.FromResult(result);
                sb.AppendLine($" Result details:");
                sb.AppendLine($"   Image ID = {resultDetails.ImageId}");
                sb.AppendLine($"   Result ID = {resultDetails.ResultId}");
                sb.AppendLine($"   Connection URL = {resultDetails.ConnectionUrl}");
                sb.AppendLine($"   JSON result = {resultDetails.JsonResult}");
            }
            else
            {
                var errorDetails = ImageAnalysisErrorDetails.FromResult(result);
                sb.AppendLine(" Analysis failed.");
                sb.AppendLine($"   Error reason : {errorDetails.Reason}");
                sb.AppendLine($"   Error code : {errorDetails.ErrorCode}");
                sb.AppendLine($"   Error message: {errorDetails.Message}");
            }

            return sb.ToString();
        }

    }
}


  
 
Finally, let's look at the client-side Javascript function that we call with the bounding boxes JSON to draw the boxes. We use a canvas in HTML5 to show the picture and the bounding boxes of the objects found in the image. index.html
 
 
 	<script type="text/javascript">

		var colorPalette = ["red", "yellow", "blue", "green", "fuchsia", "moccasin", "purple", "magenta", "aliceblue", "lightyellow", "lightgreen"];

		function rescaleCanvas() {
			var img = document.getElementById('PreviewImage');
			var canvas = document.getElementById('PreviewImageBbox');
			canvas.width = img.width;
			canvas.height = img.height;
		}

		function getColor() {
			var colorIndex = parseInt(Math.random() * 10);
			var color = colorPalette[colorIndex];
			return color;
		}

		function LoadBoundingBoxes(objectDescriptions) {
			if (objectDescriptions == null || objectDescriptions == false) {
				alert('did not find any objects in image. returning from calling load bounding boxes : ' + objectDescriptions);
				return;
			}

			var objectDesc = JSON.parse(objectDescriptions);
			//alert('calling load bounding boxes, starting analysis on clientside js : ' + objectDescriptions);

			rescaleCanvas();
			var canvas = document.getElementById('PreviewImageBbox');
			var img = document.getElementById('PreviewImage');

			var ctx = canvas.getContext('2d');
			ctx.drawImage(img, img.width, img.height);

			ctx.font = "10px Verdana";

			for (var i = 0; i < objectDesc.length; i++) {
				ctx.beginPath();
				ctx.strokeStyle = "black";
				ctx.lineWidth = 1;
				ctx.fillText(objectDesc[i].Name, objectDesc[i].X + objectDesc[i].Width / 2, objectDesc[i].Y + objectDesc[i].Height / 2);
				ctx.fillText("Confidence: " + objectDesc[i].Confidence, objectDesc[i].X + objectDesc[i].Width / 2, 10 + objectDesc[i].Y + objectDesc[i].Height / 2);
			}

			for (var i = 0; i < objectDesc.length; i++) {
				ctx.fillStyle = getColor();
				ctx.globalAlpha = 0.2;
				ctx.fillRect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
				ctx.lineWidth = 3;
				ctx.strokeStyle = "blue";
				ctx.rect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
				ctx.fillStyle = "black";
				ctx.fillText("Color: " + getColor(), objectDesc[i].X + objectDesc[i].Width / 2, 20 + objectDesc[i].Y + objectDesc[i].Height / 2);

				ctx.stroke();
			}

			ctx.drawImage(img, 0, 0);


			console.log('got these object descriptions:');
			console.log(objectDescriptions);

		}
	</script>

 
  
The index.html file in wwwroot is the place we usually put extra CSS and JS for MAUI Blazor apps and Blazor apps. I have chosen to put the script directly into the index.html file and not in a separate .js file, but that is an option if you want to tidy up a bit more. So there you have it: we can relatively easily find objects in images using the image analysis service in Azure Cognitive Services, and we can get tags and captions for the image. In the demo the caption is shown above the loaded picture. The Azure Computer Vision service is really good since it has a massive training set and can recognize a lot of different objects for different usages. As you can see in the source code, I keep the key and endpoint in environment variables that the code expects to exist. Never expose keys and endpoints in your source code.
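
As a small hedged sketch (an assumption on my side, not code from the repo), a startup check like the following could fail fast if the expected environment variables for the vision service are missing:

// Verifies that the environment variables read by ImageAnalyzerService are present before the app runs.
string? visionKey = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_KEY");
string? visionEndpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_ENDPOINT");
if (string.IsNullOrWhiteSpace(visionKey) || string.IsNullOrWhiteSpace(visionEndpoint))
{
    throw new InvalidOperationException(
        "Set AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_KEY and AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_ENDPOINT before running the app.");
}
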

Tuesday, 19 September 2023

Using Azure AI TextAnalytics and translation service to build a universal translator

This article shows code for building a universal translator using Azure AI Cognitive Services. It uses Azure AI Text Analytics to detect the language of the input text, and Azure AI Translation services to translate it. The Github repo is here:
https://github.com/toreaurstadboss/MultiLingual.Translator
The following Nuget packages are used in the Lib project csproj file :

 <ItemGroup>
    <PackageReference Include="Azure.AI.Translation.Text" Version="1.0.0-beta.1" />
    <PackageReference Include="Microsoft.AspNetCore.Components.Web" Version="6.0.19" />
    <PackageReference Include="Azure.AI.TextAnalytics" Version="5.3.0" />
  </ItemGroup>


We are going to build a .NET 6 cross-platform MAUI Blazor app. First off, we focus on the Razor library project called 'Lib'. This project contains the library util code to detect a language and translate text into other languages. Let us first look at creating the clients needed to detect language and to translate text. TextAnalyticsFactory.cs


using Azure;
using Azure.AI.TextAnalytics;
using Azure.AI.Translation.Text;
using System;

namespace MultiLingual.Translator.Lib
{
    public static class TextAnalyticsClientFactory
    {

        public static TextAnalyticsClient CreateClient()
        {
            string? uri = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_ENDPOINT", EnvironmentVariableTarget.Machine);
            string? key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_KEY", EnvironmentVariableTarget.Machine);
            if (uri == null)
            {
                throw new ArgumentNullException(nameof(uri), "Could not get system environment variable named 'AZURE_COGNITIVE_SERVICE_ENDPOINT' Set this variable first.");
            }
            if (key == null)
            {
                throw new ArgumentNullException(nameof(key), "Could not get system environment variable named 'AZURE_COGNITIVE_SERVICE_KEY' Set this variable first.");
            }
            var client = new TextAnalyticsClient(new Uri(uri!), new AzureKeyCredential(key!));
            return client;
        }

        public static TextTranslationClient CreateTranslateClient()
        {
            string? keyTranslate = Environment.GetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_KEY", EnvironmentVariableTarget.Machine);
            string? regionForTranslationService = Environment.GetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_REGION", EnvironmentVariableTarget.Machine);

            if (keyTranslate == null)
            {
                throw new ArgumentNullException(nameof(keyTranslate), "Could not get system environment variable named 'AZURE_TRANSLATION_SERVICE_KEY' Set this variable first.");
            }
            if (regionForTranslationService == null)
            {
                throw new ArgumentNullException(nameof(regionForTranslationService), "Could not get system environment variable named 'AZURE_TRANSLATION_SERVICE_REGION' Set this variable first.");
            }
            var client = new TextTranslationClient(new AzureKeyCredential(keyTranslate!), region: regionForTranslationService);
            return client;
        }

    }
}
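
The code above assumes that four environment variables exist at the SYSTEM (machine) level of your OS. As a one-off setup sketch (my own assumption, not part of the repo; it must run from an elevated process on Windows, and the placeholder values must be replaced with your own Azure resource values), they could be set like this:

// Writes the four machine-level environment variables that TextAnalyticsClientFactory reads.
Environment.SetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_ENDPOINT",
    "https://<your-text-analytics-resource>.cognitiveservices.azure.com/", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_KEY",
    "<your-text-analytics-key>", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_KEY",
    "<your-translator-key>", EnvironmentVariableTarget.Machine);
Environment.SetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_REGION",
    "<your-translator-region>", EnvironmentVariableTarget.Machine);
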


With these environment variables in place, let us now look at the code to detect language. This uses the TextAnalyticsClient to detect which language an input text is written in. IDetectLanguageUtil.cs


using Azure.AI.TextAnalytics;

namespace MultiLingual.Translator.Lib
{
    public interface IDetectLanguageUtil
    {
        Task<DetectedLanguage> DetectLanguage(string inputText);
        Task<double> DetectLanguageConfidenceScore(string inputText);
        Task<string> DetectLanguageIso6391(string inputText);
        Task<string> DetectLanguageName(string inputText);
    }
}


DetectLanguageUtil.cs


using Azure.AI.TextAnalytics;

namespace MultiLingual.Translator.Lib
{

    public class DetectLanguageUtil : IDetectLanguageUtil
    {

        private TextAnalyticsClient _client;

        public DetectLanguageUtil()
        {
            _client = TextAnalyticsClientFactory.CreateClient();
        }

        /// <summary>
        /// Detects language of the <paramref name="inputText"/>.
        /// </summary>
        /// <param name="inputText"></param>
        /// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
        public async Task<DetectedLanguage> DetectLanguage(string inputText)
        {
            DetectedLanguage detectedLanguage = await _client.DetectLanguageAsync(inputText);
            return detectedLanguage;
        }

        /// <summary>
        /// Detects language of the <paramref name="inputText"/>. Returns the language name.
        /// </summary>
        /// <param name="inputText"></param>
        /// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
        public async Task<string> DetectLanguageName(string inputText)
        {
            DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
            return detectedLanguage.Name;
        }

        /// <summary>
        /// Detects language of the <paramref name="inputText"/>. Returns the language code.
        /// </summary>
        /// <param name="inputText"></param>
        /// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
        public async Task<string> DetectLanguageIso6391(string inputText)
        {
            DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
            return detectedLanguage.Iso6391Name;
        }

        /// <summary>
        /// Detects language of the <paramref name="inputText"/>. Returns the confidence score
        /// </summary>
        /// <param name="inputText"></param>
        /// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
        public async Task<double> DetectLanguageConfidenceScore(string inputText)
        {
            DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
            return detectedLanguage.ConfidenceScore;
        }

    }

}



The Iso6391 code is important when it comes to translation, which will be shown soon. But first let us look at the supported languages of Azure AI Translation services. LanguageCode.cs


namespace MultiLingual.Translator.Lib.Models
{
    /// <summary>
    /// List of supported languages in Azure AI services
    /// https://learn.microsoft.com/en-us/azure/ai-services/translator/language-support
    /// </summary>
    public static class LanguageCode
    {

        public const string Afrikaans = "af";
        public const string Albanian = "sq";
        public const string Amharic = "am";
        public const string Arabic = "ar";
        public const string Armenian = "hy";
        public const string Assamese = "as";
        public const string AzerbaijaniLatin = "az";
        public const string Bangla = "bn";
        public const string Bashkir = "ba";
        public const string Basque = "eu";
        public const string BosnianLatin = "bs";
        public const string Bulgarian = "bg";
        public const string CantoneseTraditional = "yue";
        public const string Catalan = "ca";
        public const string ChineseLiterary = "lzh";
        public const string ChineseSimplified = "zh-Hans";
        public const string ChineseTraditional = "zh-Hant";
        public const string chiShona = "sn";
        public const string Croatian = "hr";
        public const string Czech = "cs";
        public const string Danish = "da";
        public const string Dari = "prs";
        public const string Divehi = "dv";
        public const string Dutch = "nl";
        public const string English = "en";
        public const string Estonian = "et";
        public const string Faroese = "fo";
        public const string Fijian = "fj";
        public const string Filipino = "fil";
        public const string Finnish = "fi";
        public const string French = "fr";
        public const string FrenchCanada = "fr-ca";
        public const string Galician = "gl";
        public const string Georgian = "ka";
        public const string German = "de";
        public const string Greek = "el";
        public const string Gujarati = "gu";
        public const string HaitianCreole = "ht";
        public const string Hausa = "ha";
        public const string Hebrew = "he";
        public const string Hindi = "hi";
        public const string HmongDawLatin = "mww";
        public const string Hungarian = "hu";
        public const string Icelandic = "is";
        public const string Igbo = "ig";
        public const string Indonesian = "id";
        public const string Inuinnaqtun = "ikt";
        public const string Inuktitut = "iu";
        public const string InuktitutLatin = "iu-Latn";
        public const string Irish = "ga";
        public const string Italian = "it";
        public const string Japanese = "ja";
        public const string Kannada = "kn";
        public const string Kazakh = "kk";
        public const string Khmer = "km";
        public const string Kinyarwanda = "rw";
        /// <summary>
        /// Fear my Bat'leth !
        /// </summary>
        public const string Klingon = "tlh-Latn";
        public const string KlingonplqaD = "tlh-Piqd";
        public const string Konkani = "gom";
        public const string Korean = "ko";
        public const string KurdishCentral = "ku";
        public const string KurdishNorthern = "kmr";
        public const string KyrgyzCyrillic = "ky";
        public const string Lao = "lo";
        public const string Latvian = "lv";
        public const string Lithuanian = "lt";
        public const string Lingala = "ln";
        public const string LowerSorbian = "dsb";
        public const string Luganda = "lug";
        public const string Macedonian = "mk";
        public const string Maithili = "mai";
        public const string Malagasy = "mg";
        public const string MalayLatin = "ms";
        public const string Malayalam = "ml";
        public const string Maltese = "mt";
        public const string Maori = "mi";
        public const string Marathi = "mr";
        public const string MongolianCyrillic = "mn-Cyrl";
        public const string MongolianTraditional = "mn-Mong";
        public const string Myanmar = "my";
        public const string Nepali = "ne";
        public const string Norwegian = "nb";
        public const string Nyanja = "nya";
        public const string Odia = "or";
        public const string Pashto = "ps";
        public const string Persian = "fa";
        public const string Polish = "pl";
        public const string PortugueseBrazil = "pt";
        public const string PortuguesePortugal = "pt-pt";
        public const string Punjabi = "pa";
        public const string QueretaroOtomi = "otq";
        public const string Romanian = "ro";
        public const string Rundi = "run";
        public const string Russian = "ru";
        public const string SamoanLatin = "sm";
        public const string SerbianCyrillic = "sr-Cyrl";
        public const string SerbianLatin = "sr-Latn";
        public const string Sesotho = "st";
        public const string SesothosaLeboa = "nso";
        public const string Setswana = "tn";
        public const string Sindhi = "sd";
        public const string Sinhala = "si";
        public const string Slovak = "sk";
        public const string Slovenian = "sl";
        public const string SomaliArabic = "so";
        public const string Spanish = "es";
        public const string SwahiliLatin = "sw";
        public const string Swedish = "sv";
        public const string Tahitian = "ty";
        public const string Tamil = "ta";
        public const string TatarLatin = "tt";
        public const string Telugu = "te";
        public const string Thai = "th";
        public const string Tibetan = "bo";
        public const string Tigrinya = "ti";
        public const string Tongan = "to";
        public const string Turkish = "tr";
        public const string TurkmenLatin = "tk";
        public const string Ukrainian = "uk";
        public const string UpperSorbian = "hsb";
        public const string Urdu = "ur";
        public const string UyghurArabic = "ug";
        public const string UzbekLatin = "uz";
        public const string Vietnamese = "vi";
        public const string Welsh = "cy";
        public const string Xhosa = "xh";
        public const string Yoruba = "yo";
        public const string YucatecMaya = "yua";
        public const string Zulu = "zu";
    }
}


As there are roughly 5,000-10,000 languages in the world, the list above shows that Azure AI translation services support about 130 of them, which is only 1-2 % of the total. Of course, the languages supported by Azure AI include the most spoken languages in the world. Let us look at the translation util code next. ITranslateUtil.cs


namespace MultiLingual.Translator.Lib
{
    public interface ITranslateUtil
    {
        Task<string?> Translate(string targetLanguage, string inputText, string? sourceLanguage = null);
    }
}


TranslateUtil.cs


using Azure.AI.Translation.Text;
using MultiLingual.Translator.Lib.Models;

namespace MultiLingual.Translator.Lib
{

    public class TranslateUtil : ITranslateUtil
    {
        private TextTranslationClient _client;


        public TranslateUtil()
        {
            _client = TextAnalyticsClientFactory.CreateTranslateClient();
        }

        /// <summary>
        /// Translates text using Azure AI Translate services. 
        /// </summary>
        /// <param name="targetLanguage"><see cref="LanguageCode"/> for a list of supported languages</param>
        /// <param name="inputText"></param>
        /// <param name="sourceLanguage">Pass in null here to auto detect the source language</param>
        /// <returns></returns>
        public async Task<string?> Translate(string targetLanguage, string inputText, string? sourceLanguage = null)
        {
            var translationOfText = await _client.TranslateAsync(targetLanguage, inputText, sourceLanguage);
            if (translationOfText?.Value == null)
            {
                return null;
            }
            var translation = translationOfText.Value.SelectMany(l => l.Translations).Select(l => l.Text)?.ToList();
            string? translationText = translation?.FlattenString();
            return translationText;
        }

    }
}


We use a little helper extension method here too : StringExtensions.cs


using System.Text;

namespace MultiLingual.Translator.Lib
{
    public static class StringExtensions
    {

        /// <summary>
        /// Merges a collection of lines into a flattened string separating each line by a specified line separator.
        /// Newline is default.
        /// </summary>
        /// <param name="inputLines"></param>
        /// <param name="lineSeparator"></param>
        /// <returns></returns>
        public static string? FlattenString(this IEnumerable<string>? inputLines, string lineSeparator = "\n")
        {
            if (inputLines == null || !inputLines.Any())
            {
                return null;
            }
            var flattenedString = inputLines?.Aggregate(new StringBuilder(),
                (sb, l) => sb.AppendLine(l + lineSeparator),
                sb => sb.ToString().Trim());

            return flattenedString;
        }

    }
}


Here are some tests for detecting language : DetectLanguageUtilTests.cs

  
using Azure.AI.TextAnalytics;
using FluentAssertions;

namespace MultiLingual.Translator.Lib.Test
{
    public class DetectLanguageUtilTests
    {

        private DetectLanguageUtil _detectLanguageUtil;

        public DetectLanguageUtilTests()
        {
            _detectLanguageUtil = new DetectLanguageUtil();
        }

        [Theory]
        [InlineData("Donde esta la playa", "es", "Spanish")]
        [InlineData("Jeg er fra Trøndelag og jeg liker brunost", "no", "Norwegian")]
        public async Task DetectLanguageDetailsSucceeds(string text, string expectedLanguageIso6391, string expectedLanguageName)
        {
            string? detectedLangIso6391 = await _detectLanguageUtil.DetectLanguageIso6391(text);
            detectedLangIso6391.Should().Be(expectedLanguageIso6391);
            string? detectedLangName = await _detectLanguageUtil.DetectLanguageName(text);
            detectedLangName.Should().Be(expectedLanguageName);
        }

        [Theory]
        [InlineData("Du hast mich", "de", "German")]
        public async Task DetectLanguageSucceeds(string text, string expectedLanguageIso6391, string expectedLanguageName)
        {
            DetectedLanguage detectedLanguage = await _detectLanguageUtil.DetectLanguage(text);
            detectedLanguage.Iso6391Name.Should().Be(expectedLanguageIso6391);            
            detectedLanguage.Name.Should().Be(expectedLanguageName);
        }

    }
}  
  

And here are some translation util tests : TranslateUtilTests.cs


using FluentAssertions;
using MultiLingual.Translator.Lib.Models;

namespace MultiLingual.Translator.Lib.Test
{

    public class TranslateUtilTests
    {

        private TranslateUtil _translateUtil;

        public TranslateUtilTests()
        {
            _translateUtil = new TranslateUtil();                
        }

        [Theory]
        [InlineData("Jeg er fra Norge og jeg liker brunost", "i'm from norway and i like brown cheese", LanguageCode.Norwegian,  LanguageCode.English)]
        [InlineData("Jeg er fra Norge og jeg liker brunost", "i'm from norway and i like brown cheese", null, LanguageCode.English)] //auto detect language is tested here
        [InlineData("Ich bin aus Hamburg und ich liebe bier", "i'm from hamburg and i love beer", LanguageCode.German, LanguageCode.English)]
        [InlineData("Ich bin aus Hamburg und ich liebe bier", "i'm from hamburg and i love beer", null, LanguageCode.English)] //Auto detect source language is tested here
        [InlineData("tlhIngan maH", "we are klingons", LanguageCode.Klingon, LanguageCode.English)] //Klingon force !
        public async Task TranslationReturnsExpected(string input, string expectedTranslation, string sourceLanguage, string targetLanguage)
        {
            string? translation = await _translateUtil.Translate(targetLanguage, input, sourceLanguage);
            translation.Should().NotBeNull();
            translation.Should().BeEquivalentTo(expectedTranslation);
        }

    }
}
  

Over to the UI. The app is made with MAUI Blazor. Here are some models for the app : LanguageInputModel.cs


namespace MultiLingual.Translator.Models
{
    public class LanguageInputModel
    {
        public string InputText { get; set; }

        public string DetectedLanguageInfo { get; set; }

        public string DetectedLanguageIso6391 { get; set; }

        public string TargetLanguage { get; set; }

        public string TranslatedText { get; set; }

    }
}



NameValue.cs


namespace MultiLingual.Translator.Models
{
    public class NameValue
    {
        public string Name { get; set; }
        public string Value { get; set; }
    }
}


The UI consists of the following Razor code, written for the MAUI Blazor app. Index.razor


@page "/"
@inject ITranslateUtil TransUtil
@inject IDetectLanguageUtil DetectLangUtil
@inject IJSRuntime JS

@using MultiLingual.Translator.Lib;
@using MultiLingual.Translator.Lib.Models;
@using MultiLingual.Translator.Models;

<h1>Azure AI Text Translation</h1>

<EditForm Model="@Model" OnValidSubmit="@Submit" class="form-group" style="background-color:aliceblue;">
    <DataAnnotationsValidator />
    <ValidationSummary />

    <div class="form-group row">
        <label for="Model.InputText">Text to translate</label>
        <InputTextArea @bind-Value="Model!.InputText" placeholder="Enter text to translate" @ref="inputTextRef" id="textToTranslate" rows="5" />
    </div>

    <div class="form-group row">
        <span>Detected language of text to translate</span>
        <InputText class="languageLabelText" readonly="readonly" placeholder="The detected language of the text to translate" @bind-Value="Model!.DetectedLanguageInfo"></InputText>
        @if (Model.DetectedLanguageInfo != null){
            <img src="@FlagIcon" class="flagIcon" />
        }
    </div>
    <br />
    
    <div class="form-group row">
        <span>Translate into language</span>
        <InputSelect placeholder="Choose the target language"  @bind-Value="Model!.TargetLanguage">
            @foreach (var item in LanguageCodes){
                <option value="@item.Value">@item.Name</option>
            }
        </InputSelect>
        <br />
          @if (Model.TargetLanguage != null){
            <img src="@TargetFlagIcon" class="flagIcon" />
        }
    </div>
    <br />

    <div class="form-group row">
        <span>Translation</span>
        <InputTextArea readonly="readonly" placeholder="The translated text target language" @bind-Value="Model!.TranslatedText" rows="5"></InputTextArea>
    </div>

    <button type="submit" class="submitButton">Submit</button>

</EditForm>

@code {
    private Azure.AI.TextAnalytics.TextAnalyticsClient _client;

    private InputTextArea inputTextRef;

    public LanguageInputModel Model { get; set; } = new();

    private string FlagIcon {
        get
        {
            return $"images/flags/png100px/{Model.DetectedLanguageIso6391}.png";
        }
    }

    private string TargetFlagIcon {
        get
        {
            return $"images/flags/png100px/{Model.TargetLanguage}.png";
        }
    }

    private List<NameValue> LanguageCodes = typeof(LanguageCode).GetFields().Select(f => new NameValue {
	 Name = f.Name,
	 Value = f.GetValue(f)?.ToString(),
	}).OrderBy(f => f.Name).ToList();


    private async void Submit()
    {
        var detectedLanguage = await DetectLangUtil.DetectLanguage(Model.InputText);
        Model.DetectedLanguageInfo = $"{detectedLanguage.Iso6391Name} {detectedLanguage.Name}";
        Model.DetectedLanguageIso6391 = detectedLanguage.Iso6391Name;
        if (_client == null)
        {
            _client = TextAnalyticsClientFactory.CreateClient();
        }
        Model.TranslatedText = await TransUtil.Translate(Model.TargetLanguage, Model.InputText, detectedLanguage.Iso6391Name);

        StateHasChanged();
    }

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            Model.TargetLanguage = LanguageCode.English;
            await JS.InvokeVoidAsync("exampleJsFunctions.focusElement", inputTextRef?.AdditionalAttributes.FirstOrDefault(a => a.Key?.ToLower() == "id").Value);
            StateHasChanged();
        }
    }

}


Finally, a screenshot of how the app looks: You enter the text to translate, and the detected language is shown after you hit Submit. You can then select which target language to translate the text into; English is selected by default. The Iso6391 code of the selected target language is shown as a flag icon, if there is a 1:1 mapping between the Iso6391 code and the flag icons available in the app. The top flag shows the detected language via its Iso6391 code, again only if there is a 1:1 mapping between that code and the available flag icons.