This article will look into generating dropdowns for enums in Blazor.
The repository for the source code listed in the article is here:
https://github.com/toreaurstadboss/DallEImageGenerationImgeDemoV4
First off, here is a helper class for enums that uses the InputSelect control. The helper class supports setting the display text for enum options / alternatives via resource files using the Display attribute.
EnumHelper.cs | C# source code
using DallEImageGenerationImageDemoV4.Models;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Forms;
using System.ComponentModel.DataAnnotations;
using System.Linq.Expressions;
using System.Resources;
namespace DallEImageGenerationImageDemoV4.Utility
{
    public static class EnumHelper
    {
        public static RenderFragment GenerateEnumDropDown<TEnum>(object receiver,
            TEnum selectedValue,
            Action<TEnum> valueChanged)
            where TEnum : Enum
        {
            Expression<Func<TEnum>> onValueExpression = () => selectedValue;
            var onValueChanged = EventCallback.Factory.Create<TEnum>(receiver, valueChanged);
            return builder =>
            {
                // Set the selectedValue to the first enum value if it is not set
                if (EqualityComparer<TEnum>.Default.Equals(selectedValue, default))
                {
                    object? firstEnum = Enum.GetValues(typeof(TEnum)).GetValue(0);
                    if (firstEnum != null)
                    {
                        selectedValue = (TEnum)firstEnum;
                    }
                }
                builder.OpenComponent<InputSelect<TEnum>>(0);
                builder.AddAttribute(1, "Value", selectedValue);
                builder.AddAttribute(2, "ValueChanged", onValueChanged);
                builder.AddAttribute(3, "ValueExpression", onValueExpression);
                builder.AddAttribute(4, "class", "form-select"); // Adding Bootstrap class for styling
                builder.AddAttribute(5, "ChildContent", (RenderFragment)(childBuilder =>
                {
                    foreach (var value in Enum.GetValues(typeof(TEnum)))
                    {
                        childBuilder.OpenElement(6, "option");
                        childBuilder.AddAttribute(7, "value", value?.ToString());
                        childBuilder.AddContent(8, GetEnumOptionDisplayText(value)?.ToString()?.Replace("_", " ")); // Ensure the display text is clean
                        childBuilder.CloseElement();
                    }
                }));
                builder.CloseComponent();
            };
        }

        /// <summary>
        /// Retrieves the display text of an enum alternative
        /// </summary>
        private static string? GetEnumOptionDisplayText<T>(T value)
        {
            string? result = value!.ToString()!;
            var displayAttribute = value
                .GetType()
                .GetField(value!.ToString()!)
                ?.GetCustomAttributes(typeof(DisplayAttribute), false)
                ?.OfType<DisplayAttribute>()
                .FirstOrDefault();
            if (displayAttribute != null)
            {
                if (displayAttribute.ResourceType != null && !string.IsNullOrWhiteSpace(displayAttribute.Name))
                {
                    result = new ResourceManager(displayAttribute.ResourceType).GetString(displayAttribute!.Name!);
                }
                else if (!string.IsNullOrWhiteSpace(displayAttribute.Name))
                {
                    result = displayAttribute.Name;
                }
            }
            return result;
        }
    }
}
The following razor component shows how to use this helper.
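A minimal sketch of such a razor component is shown below. The enum AnimalMood and its Display names are hypothetical and only illustrate how the helper picks up the Display attribute; note that the InputSelect rendered by the helper must sit inside an EditForm so that it gets an EditContext.
@using System.ComponentModel.DataAnnotations
@using DallEImageGenerationImageDemoV4.Utility

<EditForm Model="@this">
    @(EnumHelper.GenerateEnumDropDown(this, SelectedMood, v => SelectedMood = v))
</EditForm>

<p>Selected: @SelectedMood</p>

@code {
    // Hypothetical enum, only to illustrate the Display attribute support
    public enum AnimalMood
    {
        [Display(Name = "Happy animal")]
        Happy,
        [Display(Name = "Grumpy animal")]
        Grumpy
    }

    private AnimalMood SelectedMood { get; set; } = AnimalMood.Happy;
}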
It would also be possible to make a component instead of such a helper method, passing the enum type as a type parameter.
Here, however, a programmatic helper returning a RenderFragment is used. As the code shows, the returned delegate receives a RenderTreeBuilder, which lets you register the render tree to emit: OpenComponent and CloseComponent open and close the InputSelect, AddAttribute adds its attributes, and a child builder renders the option values.
Sometimes it is simply easier to write such a helper class than a component. The downside is that it is a more manual process, similar to how MVC uses HtmlHelpers. Whether a component or such a RenderFragment helper is the better option is not clear-cut, but it is a technique many Blazor developers should be aware of.
The Speech service uses AI-trained speech to provide natural speech and ease of use. You can just provide text and get it read aloud.
An overview of supported languages in the Speech service is shown here:
You can create a TTS (Text To Speech) service using Azure AI services for this. The Speech service in this demo uses the NuGet library Microsoft.CognitiveServices.Speech.
This repo contains a simple demo of Azure AI speech synthesis using Azure.CognitiveServices.SpeechSynthesis.
It provides a simple way of synthesizing text to speech using Azure AI services. Its usage is shown here:
The code provides a simple builder for creating a SpeechSynthesizer instance.
using Microsoft.CognitiveServices.Speech;
namespace ToreAurstadIT.AzureAIDemo.SpeechSynthesis;
public class Program
{
private static async Task Main(string[] args)
{
Console.WriteLine("Your text to speech input");
string? text = Console.ReadLine();
using (var synthesizer = SpeechSynthesizerBuilder.Instance.WithSubscription().Build())
{
using (var result = await synthesizer.SpeakTextAsync(text))
{
string reasonResult = result.Reason switch
{
ResultReason.SynthesizingAudioCompleted => $"The following text was synthesized successfully: {text}",
_ => $"Result of speech synthesis: {result.Reason}"
};
Console.WriteLine(reasonResult);
}
}
}
}
The builder looks like this:
using Microsoft.CognitiveServices.Speech;
namespace ToreAurstadIT.AzureAIDemo.SpeechSynthesis;
public class SpeechSynthesizerBuilder
{
private string? _subscriptionKey = null;
private string? _subscriptionRegion = null;
public static SpeechSynthesizerBuilder Instance => new SpeechSynthesizerBuilder();
public SpeechSynthesizerBuilder WithSubscription(string? subscriptionKey = null, string? region = null)
{
_subscriptionKey = subscriptionKey ?? Environment.GetEnvironmentVariable("AZURE_AI_SERVICES_SPEECH_KEY", EnvironmentVariableTarget.User);
_subscriptionRegion = region ?? Environment.GetEnvironmentVariable("AZURE_AI_SERVICES_SPEECH_REGION", EnvironmentVariableTarget.User);
return this;
}
public SpeechSynthesizer Build()
{
var config = SpeechConfig.FromSubscription(_subscriptionKey, _subscriptionRegion);
var speechSynthesizer = new SpeechSynthesizer(config);
return speechSynthesizer;
}
}
Note that I observed that the audio could get chopped off at the very end. It might be a temporary issue, but if you encounter it too, you can add an initial pause to avoid this:
string? initialPause = " .... "; // this is added to avoid the text being cut off
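A small sketch of how the pause can be prepended to the input before synthesizing, reusing the builder shown above:
string? text = Console.ReadLine();
using var synthesizer = SpeechSynthesizerBuilder.Instance.WithSubscription().Build();
using var result = await synthesizer.SpeakTextAsync(initialPause + text);
Console.WriteLine($"Result of speech synthesis: {result.Reason}");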
This article will present both code and tips around getting Azure AI Search to utilize additional data sources.
The article builds upon the previous article in the blog:
This code will use OpenAI Chat GPT-4 together with an additional data source. I have tested this using a Storage account in Azure which contains blobs with documents.
First off, create Azure AI services if you do not have this yet.
Then create an Azure AI Search
Choose the location and the Pricing Tier. You can choose the Free (F) pricing tier to test out Azure AI Search. The standard pricing tier comes in at about 250 USD per month, so a word of caution here, as charges might be incurred if you do not choose the Free tier.
Head over to the Azure AI Search service after it is created and note the Url inside the Overview.
Expand the Search management section and choose the following menu options, filling them out in this order:
Data sources
Indexes
Indexers
There are several types of data sources you can add.
Azure Blob Storage
Azure Data Lake Storage Gen2
Azure Cosmos DB
Azure SQL Database
Azure Table Storage
Fabric OneLake files
Upload files to the blob container
I have tested out adding a data source using Azure Blob Storage. I had to create a new storage account; I believe Azure might have changed storage accounts over the years, so for best compatibility, add a brand new storage account. Then choose a blob container inside the Blob storage and hit the Create button.
Head over to your Storage browser inside your storage account, then choose Blob container. You can add a Blob container and then after it is created, click the Upload button.
You can then upload multiple files into the blob container (it is like a folder, which saves your files as blobs).
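If you prefer to script the upload instead of using the portal, a minimal sketch using the Azure.Storage.Blobs NuGet package could look like the following; the connection string variable, container name and file name are placeholders, not values from the repo:
using Azure.Storage.Blobs;

string connectionString = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONNECTION_STRING")!;
var containerClient = new BlobContainerClient(connectionString, "documents");
await containerClient.CreateIfNotExistsAsync();

// Upload a local file as a blob (UploadBlobAsync throws if the blob already exists)
using FileStream fileStream = File.OpenRead("notes.md");
await containerClient.UploadBlobAsync("notes.md", fileStream);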
Setting up the index
After the Blob storage (storage account) is added to the data source, choose the Indexes menu button inside Azure AI search. Click Add index.
After the index is added, choose the button Add field
Add a field named Edit.String, of type Edm.String.
Check the checkboxes Retrievable and Searchable, then click the Save button.
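The index can also be created programmatically with the Azure.Search.Documents SDK instead of via the portal. Here is a sketch under the assumption that a key field plus a searchable, retrievable content field is wanted; the endpoint, key, index and field names are placeholders:
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexClient = new SearchIndexClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    new AzureKeyCredential("<admin-key>"));

var index = new SearchIndex("documents-index")
{
    Fields =
    {
        new SimpleField("id", SearchFieldDataType.String) { IsKey = true },
        new SearchableField("content") // searchable; retrievable since IsHidden defaults to false
    }
};

await indexClient.CreateOrUpdateIndexAsync(index);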
Setting up the indexer
Choose to add an Indexer via button Add indexer
Choose the Index you added
Choose the Data source you added
Select the indexed file name extensions to specify which file types to index. You should probably select text-based files here, such as .md and .markdown files; some binary file types such as .pdf and .docx can also be selected.
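The data source and indexer can likewise be created from code. A sketch assuming the blob container and index from the steps above; the names and the storage connection string are placeholders:
using Azure;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Indexes.Models;

var indexerClient = new SearchIndexerClient(
    new Uri("https://<your-search-service>.search.windows.net"),
    new AzureKeyCredential("<admin-key>"));

// Data source pointing at the blob container that holds the uploaded documents
var dataSource = new SearchIndexerDataSourceConnection(
    "documents-datasource",
    SearchIndexerDataSourceType.AzureBlob,
    "<storage-account-connection-string>",
    new SearchIndexerDataContainer("documents"));
await indexerClient.CreateOrUpdateDataSourceConnectionAsync(dataSource);

// Indexer that crawls the data source and fills the target index
var indexer = new SearchIndexer("documents-indexer", dataSource.Name, "documents-index");
await indexerClient.CreateOrUpdateIndexerAsync(indexer);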
The code for this article is available in the branch:
feature/openai-search-documentsources
To add the data source to our ChatClient instance, we do the following. Please note that this method is subject to change in the Azure AI SDK in the future:
ChatCompletionOptions? chatCompletionOptions = null;
if (dataSources?.Any() == true)
{
chatCompletionOptions = new ChatCompletionOptions();
foreach (var dataSource in dataSources!)
{
#pragma warning disable AOAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
chatCompletionOptions.AddDataSource(new AzureSearchChatDataSource()
{
Endpoint = new Uri(dataSource.endpoint),
IndexName = dataSource.indexname,
Authentication = DataSourceAuthentication.FromApiKey(dataSource.authentication)
});
#pragma warning restore AOAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
}
}
The updated version of the extension class of OpenAI.Chat.ChatClient then looks like this:
ChatClientExtensions.cs
using Azure.AI.OpenAI.Chat;
using OpenAI.Chat;
using System.ClientModel;
using System.Text;
namespace ToreAurstadIT.OpenAIDemo
{
public static class ChatclientExtensions
{
/// <summary>
/// Provides a stream result from the ChatClient service using Azure AI services.
/// </summary>
/// <param name="chatClient">ChatClient instance</param>
/// <param name="message">The message to send and communicate to the ai-model</param>
/// <returns>Streamed chat reply / result. Consume using 'await foreach'</returns>
public static AsyncCollectionResult<StreamingChatCompletionUpdate> GetStreamedReplyAsync(this ChatClient chatClient, string message,
(string endpoint, string indexname, string authentication)[]? dataSources = null)
{
ChatCompletionOptions? chatCompletionOptions = null;
if (dataSources?.Any() == true)
{
chatCompletionOptions = new ChatCompletionOptions();
foreach (var dataSource in dataSources!)
{
#pragma warning disable AOAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
chatCompletionOptions.AddDataSource(new AzureSearchChatDataSource()
{
Endpoint = new Uri(dataSource.endpoint),
IndexName = dataSource.indexname,
Authentication = DataSourceAuthentication.FromApiKey(dataSource.authentication)
});
#pragma warning restore AOAI001 // Type is for evaluation purposes only and is subject to change or removal in future updates. Suppress this diagnostic to proceed.
}
}
return chatClient.CompleteChatStreamingAsync(
[new SystemChatMessage("You are an helpful, wonderful AI assistant"), new UserChatMessage(message)], chatCompletionOptions);
}
public static async Task<string> GetStreamedReplyStringAsync(this ChatClient chatClient, string message, (string endpoint, string indexname, string authentication)[]? dataSources = null, bool outputToConsole = false)
{
var sb = new StringBuilder();
await foreach (var update in GetStreamedReplyAsync(chatClient, message, dataSources))
{
foreach (var textReply in update.ContentUpdate.Select(cu => cu.Text))
{
sb.Append(textReply);
if (outputToConsole)
{
Console.Write(textReply);
}
}
}
return sb.ToString();
}
}
}
The updated code for the demo app then looks like this. I chose to just use tuples here for the endpoint, index name and API key:
ChatGptDemo.cs
using OpenAI.Chat;
using OpenAIDemo;
using System.Diagnostics;
namespace ToreAurstadIT.OpenAIDemo
{
public class ChatGptDemo
{
public async Task<string?> RunChatGptQuery(ChatClient? chatClient, string msg)
{
if (chatClient == null)
{
Console.WriteLine("Sorry, the demo failed. The chatClient did not initialize properly.");
return null;
}
Console.WriteLine("Searching ... Please wait..");
var stopWatch = Stopwatch.StartNew();
var chatDataSources = new[]{
(
SearchEndPoint: Environment.GetEnvironmentVariable("AZURE_SEARCH_AI_ENDPOINT", EnvironmentVariableTarget.User) ?? "N/A",
SearchIndexName: Environment.GetEnvironmentVariable("AZURE_SEARCH_AI_INDEXNAME", EnvironmentVariableTarget.User) ?? "N/A",
SearchApiKey: Environment.GetEnvironmentVariable("AZURE_SEARCH_AI_APIKEY", EnvironmentVariableTarget.User) ?? "N/A"
)
};
string reply = "";
try
{
reply = await chatClient.GetStreamedReplyStringAsync(msg, dataSources: chatDataSources, outputToConsole: true);
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
}
Console.WriteLine($"The operation took: {stopWatch.ElapsedMilliseconds} ms");
Console.WriteLine();
return reply;
}
}
}
The code here expects that three user-specific environment variables exist. Please note that the API key can be found under the menu item Keys in Azure AI Search.
There are two admin keys and multiple query keys. To distribute keys to other users, you of course share the API query key, not the admin key(s).
The screenshot below shows the demo. It is a console application, but it could be a web application or another client:
Please note that the Free tier of Azure AI Search is rather slow and seems to only allow queries at a certain interval, but it will suffice for just testing it out. To really test it out in, for example, an intranet scenario, the standard tier Azure AI Search service is recommended, at about 250 USD per month as noted.
Conclusions
Getting an Azure AI chat service to work in intranet scenarios, using OpenAI Chat GPT-4 together with a custom collection of indexed files, offers a nice way of building up a knowledge base which you can query against. It is a rather convenient way of building an on-premise solution for an intranet AI chat service using Azure cloud services.
This article will show some code for how you can opt in to something called lazy loading in EF Core. This means you do not load all the related data for an entity until you need the data.
Let's look at a simple entity called Customer. We will add two navigational properties, that is, related entities. Without eager loading or lazy loading enabled automatically, EF Core 8 will not populate these navigational properties, which point to the related entities. The fields will be null without an active measure on the loading part.
Let's inspect how to lazy load such navigational properties.
Customer.cs
public class Customer
{
    // more code..

    public Customer()
    {
        AddressCustomers = new HashSet<AddressCustomer>();
    }

    // more code..

    private Customer(ILazyLoader lazyLoader)
    {
        LazyLoader = lazyLoader;
    }

    // Assumed property holding the injected ILazyLoader service (referenced by the private constructor above)
    private ILazyLoader? LazyLoader { get; set; }

    public CustomerRank CustomerRank { get; set; }

    public virtual ICollection<AddressCustomer> AddressCustomers { get; set; }
}
First off, the ILazyLoader service is from Microsoft.EntityFrameworkCore.Infrastructure. It is injected inside the entity, preferably using a private constructor of the entity.
Now you can set up lazy loading for a navigational property like this:
public CustomerRank CustomerRank
{
get => LazyLoader.Load(this, ref _customerRank);
set => _customerRank = value;
}
If it feels a bit unclean to mix entity code with behavioral code, since we inject a service into our domain models or entities, you can use the Fluent API instead while setting up the DbContext.
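A minimal sketch of such Fluent API configuration; the DbContext name is hypothetical, and AutoInclude is the configuration referred to further down:
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext // hypothetical DbContext name
{
    public DbSet<Customer> Customers => Set<Customer>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Configure the CustomerRank navigation to be auto-included when customers are loaded
        modelBuilder.Entity<Customer>()
            .Navigation(c => c.CustomerRank)
            .AutoInclude();
    }
}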
If automatically lazy loading the data (the data will be loaded upon access of the navigational property) seems a bit inflexible, one can also set up loading manually anywhere in the application code, using the Entry method together with either Reference or Collection, and then the Load method.
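A sketch of such explicit loading, assuming a DbContext instance named context with a Customers DbSet:
// Load one customer without its related data
var customer = context.Customers.First();

// Explicitly load a reference navigation (a single related entity)
context.Entry(customer).Reference(c => c.CustomerRank).Load();

// Explicitly load a collection navigation (related entities)
context.Entry(customer).Collection(c => c.AddressCustomers).Load();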
Once more, note that the data is still lazy loaded; its content will only be loaded when you access the particular navigational property pointing to the related data. Also note that if you debug in, say, VS 2022, the data might look like it is automatically loaded, but this is because the debugger loads the contents if it can, and will even do so for lazy loaded navigational fields. If you instead access this navigational property programmatically in your application code and output the data, you will see the data being loaded, but only once it is accessed. For example, if we made the private field _customerRank public (which we should not do, to protect our domain model's data), you could see this while debugging:
// changed a field in Customer.cs to become public for external access:
// public CustomerRank _customerRank;
Console.WriteLine(customer._customerRank);
Console.WriteLine(customer.CustomerRank);
// considering this set up:
public CustomerRank CustomerRank
{
get => LazyLoader.Load(this, ref _customerRank);
set => _customerRank = value;
}
The field _customerRank is initially null; it is when we access the property CustomerRank, which I set up to be AutoInclude, i.e. lazy loaded, that I see the data being loaded.
Using Azure Cognitive Services, it is possible to translate text into other languages and also synthesize the text to speech. It is also possible to add voice effects such as the style of the voice.
This adds more realism by adding emotions to a synthesized voice. The voice is already trained by neural net training and adding voice style makes the synthesized speech even more realistic and multi-purpose.
The GitHub repo for this is available here as a .NET MAUI Blazor client written with .NET 8:
More and more synthetic voices in Azure Cognitive Services get voice styles which express emotions. For now, most of the voices with style support are either English (en-US) or Chinese (zh-CN), and a few other languages have a few voices supporting styles.
This will most likely improve in the future, where these neural net trained voices are trained in more voice styles, or some generic voice style algorithm is achieved that can infer emotions on a generic level, although that still sounds a bit sci-fi.
Azure Cognitive Text-To-Speech voices with support for emotions / voice styles (excerpt; supported styles and roles vary per voice):
Voice styles: angry, calm, cheerful, depressed, disgruntled, documentary-narration, fearful, sad, serious
Roles: OlderAdultMale, SeniorMale
Screenshot from the DEMO showing its user interface. You enter the text to translate at the top, and the language of the text is detected using the Azure Cognitive Services text detection functionality. You can then select which language to translate the text into; a REST call to Azure Cognitive Services translates the text. It is also possible to hear the speech of the text. Now it is also possible to add a voice style. Use the table shown above to select a voice actor that supports a voice style you want to test. As noted, voice styles are still limited to a few languages and voice actors. You will hear the voice actor in a normal mood or voice style if additional emotions or voice styles are not supported.
Let's look at some code for this demo too. You can study the GitHub repo and clone it to test it out yourself.
The TextToSpeechUtil class handles much of the logic of creating voice from text input; it also creates the SSML XML contents and performs the REST API call to create the voice file.
Note that the SSML mentioned here is the Speech Synthesis Markup Language (SSML).
The SSML standard is documented on MSDN; it is a standard adopted by others too, including Google.
<speak version="1.0" xml:lang="en-US" xmlns:mstts="https://www.w3.org/2001/mstts">
  <voice xml:gender="Male" name="Microsoft Server Speech Text to Speech Voice (en-US, JaneNeural)">
    <mstts:express-as style="angry">I listen to Eurovision and cheer for Norway</mstts:express-as>
  </voice>
</speak>
The SSML also contains an extension called the mstts extension language, which adds features to SSML such as express-as, here set to a voice style or emotion of "angry". Not all emotions or voice styles are supported by every voice actor in Azure Cognitive Services.
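To make the mstts part concrete, here is a minimal sketch (not the actual TextToSpeechUtil implementation from the repo) of building such an SSML document in C#; the method and parameter names are hypothetical:
static string BuildSsml(string text, string voiceName, string voiceStyle, string language = "en-US")
{
    return $"""
            <speak version="1.0" xml:lang="{language}" xmlns:mstts="https://www.w3.org/2001/mstts">
              <voice name="{voiceName}">
                <mstts:express-as style="{voiceStyle}">{text}</mstts:express-as>
              </voice>
            </speak>
            """;
}

// Example usage, with the voice actor from the SSML sample above:
// string ssml = BuildSsml("I listen to Eurovision and cheer for Norway",
//     "Microsoft Server Speech Text to Speech Voice (en-US, JaneNeural)", "angry");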
But this is a list of the voice styles that could be supported; which ones are available varies with the voice actor you choose (and inherently which language).
"normal-neutral"
"advertisement_upbeat"
"affectionate"
"angry"
"assistant"
"calm"
"chat"
"cheerful"
"customerservice"
"depressed"
"disgruntled"
"documentary-narration"
"embarrassed"
"empathetic"
"envious"
"excited"
"fearful"
"friendly"
"gentle"
"hopeful"
"lyrical"
"narration-professional"
"narration-relaxed"
"newscast"
"newscast-casual"
"newscast-formal"
"poetry-reading"
"sad"
"serious"
"shouting"
"sports_commentary"
"sports_commentary_excited"
"whispering"
"terrified"
"unfriendly
Microsoft has come a long way from the early work with SAPI (the Microsoft Speech API) and Microsoft SAM around 2000. Synthetic voices more than 20 years ago were rather crude and robotic. Nowadays, the voice actors provided by the Azure cloud computing platform, as shown here, are neural net trained and very realistic, based upon training from real voice actors, and more and more voice actor voices support emotions or voice styles.
The usages of this can be diverse. Text synthesis can serve automated answering services and apps in diverse fields such as healthcare, public services, education and more.
Making this demo has been fun for me, and it can be used to learn languages; with the voice functionality you can train not only on the translation but also on pronunciation.