This article looks into generating dropdowns for enums in Blazor.
The repository for the source code listed in the article is here:
https://github.com/toreaurstadboss/DallEImageGenerationImgeDemoV4
First off, here is a helper class for enums that uses the InputSelect control. The helper class supports setting the display text for enum options / alternatives via resource files using the Display attribute.
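For example, an enum such as ImageQuality (used later in this article) could be annotated like this. The display texts and the EnumResources resource type below are assumptions made for illustration only, not taken from the repository:

using System.ComponentModel.DataAnnotations;

public enum ImageQuality
{
    // Display text given directly via Name
    [Display(Name = "Standard quality")]
    Standard,

    // Display text looked up in a resource file; Name is used as the resource key.
    // EnumResources is an assumed .resx-generated resource class.
    [Display(Name = "ImageQuality_High", ResourceType = typeof(EnumResources))]
    High
}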
EnumHelper.cs | C# source code
using DallEImageGenerationImageDemoV4.Models;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Forms;
using System.ComponentModel.DataAnnotations;
using System.Linq.Expressions;
using System.Resources;
namespace DallEImageGenerationImageDemoV4.Utility
{
public static class EnumHelper
{
public static RenderFragment GenerateEnumDropDown<TEnum>(object receiver,
TEnum selectedValue,
Action<TEnum> valueChanged)
where TEnum : Enum
{
Expression<Func<TEnum>> onValueExpression = () => selectedValue;
var onValueChanged = EventCallback.Factory.Create<TEnum>(receiver, valueChanged);
return builder =>
{
// Set the selectedValue to the first enum value if it is not set
if (EqualityComparer<TEnum>.Default.Equals(selectedValue, default))
{
object? firstEnum = Enum.GetValues(typeof(TEnum)).GetValue(0);
if (firstEnum != null)
{
selectedValue = (TEnum)firstEnum;
}
}
builder.OpenComponent<InputSelect<TEnum>>(0);
builder.AddAttribute(1, "Value", selectedValue);
builder.AddAttribute(2, "ValueChanged", onValueChanged);
builder.AddAttribute(3, "ValueExpression", onValueExpression);
builder.AddAttribute(4, "class", "form-select"); // Adding Bootstrap class for styling
builder.AddAttribute(5, "ChildContent", (RenderFragment)(childBuilder =>
{
foreach (var value in Enum.GetValues(typeof(TEnum)))
{
childBuilder.OpenElement(6, "option");
childBuilder.AddAttribute(7, "value", value?.ToString());
childBuilder.AddContent(8, GetEnumOptionDisplayText(value)?.ToString()?.Replace("_", " ")); // Ensure the display text is clean
childBuilder.CloseElement();
}
}));
builder.CloseComponent();
};
}
/// <summary>
/// Retrieves the display text of an enum alternative
/// </summary>
private static string? GetEnumOptionDisplayText<T>(T value)
{
string? result = value!.ToString()!;
var displayAttribute = value
.GetType()
.GetField(value!.ToString()!)
?.GetCustomAttributes(typeof(DisplayAttribute), false)?
.OfType<DisplayAttribute>()
.FirstOrDefault();
if (displayAttribute != null)
{
if (displayAttribute.ResourceType != null && !string.IsNullOrWhiteSpace(displayAttribute.Name))
{
result = new ResourceManager(displayAttribute.ResourceType).GetString(displayAttribute!.Name!);
}
else if (!string.IsNullOrWhiteSpace(displayAttribute.Name))
{
result = displayAttribute.Name;
}
}
return result;
}
}
}
The following razor component shows how to use this helper.
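Below is a minimal sketch of such a component (the markup and property name are assumptions, not copied from the repository; ImageQuality is the enum used later in this article):

@using DallEImageGenerationImageDemoV4.Utility
@using DallEImageGenerationImageDemoV4.Models

<EditForm Model="this">
    @* The helper returns a RenderFragment, so it can be rendered directly with @ *@
    @EnumHelper.GenerateEnumDropDown(this, SelectedQuality, (ImageQuality value) => SelectedQuality = value)
</EditForm>

<p>Selected quality: @SelectedQuality</p>

@code {
    private ImageQuality SelectedQuality { get; set; }
}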
It would be possible to instead make a component, rather than such a helper method, that takes a type parameter for the enum type.
Here, however, a programmatic helper returns a RenderFragment. As the code shows, the returned delegate receives a RenderTreeBuilder,
which lets you register the render tree to return: OpenComponent and CloseComponent open and close the InputSelect,
AddAttribute adds its attributes,
and a child builder emits the option values.
Sometimes it is easier to make such a helper class instead of a component. The downside is that it is a more manual process, similar to how MVC uses HtmlHelpers. Whether a component or such a RenderFragment helper is the better option is not clear-cut, but it is a technique many developers using Blazor should be aware of.
This article presents code showing how to generate images using DALL-E 3.
The source code presented in this article can be cloned from my Github repo here:
First, let's look at the following extension class that generates the image. The method returning a string will be used.
In this sample code the image is returned with the response format Bytes. The returned BinaryData is then converted into a base-64 string.
A browser can display base-64 encoded images, and the DALL-E 3 AI service
delivers images in PNG format.
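In Blazor markup, such a data URI can be shown directly in an img element, for example (a sketch using the ImageData property of the Home page shown later in this article):

<img src="@ImageData" class="img-fluid" alt="Generated DALL-E 3 image" />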
DallEImageExtensions.cs | C# source code
using OpenAI.Images;
namespace DallEImageGenerationDemo.Utility
{
public static class DallEImageExtensions
{
/// <summary>
/// Generates an image from a description in <paramref name="imagedescription"/>.
/// This uses OpenAI DALL-E 3.
/// </summary>
/// <param name="imageClient"></param>
/// <param name="imagedescription"></param>
/// <param name="options">Send in options for the image generation. If no options are sent, a vivid, high-quality 1024x1024 image in response format bytes will be returned</param>
/// <returns></returns>
public static async Task<GeneratedImage> GenerateDallEImageAsync(this ImageClient imageClient,
string imagedescription, ImageGenerationOptions? options = null)
{
options = options ?? new ImageGenerationOptions
{
Quality = GeneratedImageQuality.High,
Size = GeneratedImageSize.W1024xH1024,
Style = GeneratedImageStyle.Vivid,
};
options.ResponseFormat = GeneratedImageFormat.Bytes;
return await imageClient.GenerateImageAsync(imagedescription, options);
}
/// <summary>
/// Generates an image from a description in <paramref name="imagedescription"/>.
/// This uses OpenAI DALL-E 3. A base-64 string is extracted from the bytes in the image for easy display of the
/// image inside a web application (e.g. Blazor WASM)
/// </summary>
/// <param name="imageClient"></param>
/// <param name="imagedescription"></param>
/// <param name="options">Send in options for the image generation. If no options are sent, a vivid, high-quality 1024x1024 image in response format bytes will be returned</param>
/// <returns></returns>
public static async Task<string> GenerateDallEImageB64StringAsync(this ImageClient imageClient,
string imagedescription, ImageGenerationOptions? options = null)
{
GeneratedImage generatedImage = await GenerateDallEImageAsync(imageClient, imagedescription, options);
string preamble = "data:image/png;base64,";
return $"{preamble}{Convert.ToBase64String(generatedImage.ImageBytes.ToArray())}";
}
}
}
As we can see, a DALL-E 3 image is created using an OpenAI.Images.ImageClient. The ImageClient is set up in Program.cs, where it is registered as a scoped service.
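A minimal sketch of that registration could look like this; the configuration key names and the use of AzureOpenAIClient here are assumptions, not copied from the repository:

Program.cs | C# source code (sketch)

using Azure.AI.OpenAI;
using OpenAI.Images;
using System.ClientModel;

var builder = WebApplication.CreateBuilder(args);

// Register the DALL-E 3 ImageClient as a scoped service (the configuration keys are assumed names)
builder.Services.AddScoped<ImageClient>(sp =>
{
    var config = sp.GetRequiredService<IConfiguration>();
    var azureClient = new AzureOpenAIClient(
        new Uri(config["OpenAI:DallE:Endpoint"]!),
        new ApiKeyCredential(config["OpenAI:DallE:ApiKey"]!));
    return azureClient.GetImageClient("dall-e-3"); // deployment / model name
});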
To generate suggestions for how to create the image, we can use GPT-4. Here is the code for a builder that creates an OpenAI-enabled ChatClient.
OpenAIChatClientBuilder.cs | C# source code
using Azure.AI.OpenAI;
using OpenAI.Chat;
using System.ClientModel;
namespace DallEImageGenerationImageDemoV4.Utility
{
/// <summary>
/// Creates AzureOpenAIClient or ChatClient (default ai model (LLM) is set to "gpt-4")
/// </summary>
public class OpenAIChatClientBuilder(IConfiguration configuration)
{
private string? _endpoint = null;
private ApiKeyCredential? _key = null;
private readonly IConfiguration _configuration = configuration;
/// <summary>
/// Set the endpoint for the Open AI Chat GPT-4 chat client. Defaults to config setting 'OpenAI:ChatGpt4:Endpoint' inside the appsettings.json file
/// </summary>
public OpenAIChatClientBuilder WithEndpoint(string? endpoint = null)
{
_endpoint = endpoint ?? _configuration["OpenAI:ChatGpt4:Endpoint"];
return this;
}
/// <summary>
/// Set the key for the Open AI Chat GPT-4 chat client. Defaults to config setting 'OpenAI:ChatGpt4:ApiKey' inside the appsettings.json file
/// </summary>
public OpenAIChatClientBuilder WithApiKey(string? key = null)
{
string? keyToUse = key ?? _configuration["OpenAI:ChatGpt4:ApiKey"];
if (!string.IsNullOrWhiteSpace(keyToUse))
{
_key = new ApiKeyCredential(keyToUse!);
}
return this;
}
/// <summary>
/// In case the derived AzureOpenAIClient is to be used, use this Build method to get a specific AzureOpenAIClient
/// </summary>
/// <returns></returns>
public AzureOpenAIClient? BuildAzureOpenAIClient() => !string.IsNullOrWhiteSpace(_endpoint) && _key != null ? new AzureOpenAIClient(new Uri(_endpoint), _key) : null;
/// <summary>
/// Returns the ChatClient that is set up to use OpenAI. Default ai model (LLM) will be set to 'gpt-4'.
/// </summary>
/// <returns></returns>
public ChatClient? Build(string aiModel = "gpt-4") => BuildAzureOpenAIClient()?.GetChatClient(aiModel);
}
}
We obtain the builder for the chat client from a factory. This lets us first get hold of the IConfiguration via dependency injection and pass it to the builder of the OpenAI-enabled chat client.
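The factory itself is not listed here; a minimal sketch of what it could look like, based on the IOpenAiChatClientBuilderFactory usage in the Home page further below (the concrete class name, implementation and registration are assumptions), is:

// Assumed shape of the factory; only the interface name and Create() follow from the usage further below.
public interface IOpenAiChatClientBuilderFactory
{
    OpenAIChatClientBuilder Create();
}

public class OpenAiChatClientBuilderFactory(IConfiguration configuration) : IOpenAiChatClientBuilderFactory
{
    // Hands the injected IConfiguration to the builder so the endpoint and API key can be read from appsettings.json
    public OpenAIChatClientBuilder Create() =>
        new OpenAIChatClientBuilder(configuration)
            .WithEndpoint()
            .WithApiKey();
}

// Registered in Program.cs, for example:
// builder.Services.AddSingleton<IOpenAiChatClientBuilderFactory, OpenAiChatClientBuilderFactory>();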
A helper method is also added to get a streamed reply.
OpenAIChatClientExtensions.cs | C# source code
using OpenAI.Chat;
using System.ClientModel;
namespace OpenAIDemo
{
public static class OpenAIChatClientExtensions
{
/// <summary>
/// Provides a stream result from the Chatclient service using AzureAI services.
/// </summary>
/// <param name="chatClient">ChatClient instance</param>
/// <param name="message">The message to send and communicate to the ai-model</param>
/// <param name="systemMessage">Set the system message to instruct the chat response. Defaults to 'You are an helpful, wonderful AI assistant'.</param>
/// <returns>Streamed chat reply / result. Consume using 'await foreach'</returns>
public static async IAsyncEnumerable<string?> GetStreamedReplyStringAsync(this ChatClient chatClient, string message, string? systemMessage = null)
{
await foreach (var update in GetStreamedReplyInnerAsync(chatClient, message, systemMessage))
{
foreach (var textReply in update.ContentUpdate.Select(cu => cu.Text))
{
yield return textReply;
}
}
}
private static AsyncCollectionResult<StreamingChatCompletionUpdate> GetStreamedReplyInnerAsync(this ChatClient chatClient, string message, string? systemMessage = null) =>
chatClient.CompleteChatStreamingAsync(
[new SystemChatMessage(systemMessage ?? "You are an helpful, wonderful AI assistant"), new UserChatMessage(message)]);
}
}
Here is the client-side code in the code-behind file of the page displaying the DALL-E 3 image and the OpenAI GPT-4 chat response.
using DallEImageGenerationDemo.Components.Pages;
using DallEImageGenerationDemo.Utility;
using DallEImageGenerationImageDemoV4.Models;
using DallEImageGenerationImageDemoV4.Utility;
using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;
using OpenAI.Images;
using OpenAIDemo;
namespace DallEImageGenerationImageDemoV4.Pages;
public partial class Home : ComponentBase
{
[Inject]
public required IConfiguration Config { get; set; }
[Inject]
public required IJSRuntime JSRuntime { get; set; }
[Inject]
public required ImageClient DallEImageClient { get; set; }
[Inject]
public required IOpenAiChatClientBuilderFactory OpenAIChatClientFactory { get; set; }
private readonly HomeModel homeModel = new();
private bool IsLoading { get; set; }
private string ImageData { get; set; } = string.Empty;
private const string modelName = "dall-e-3";
protected async Task HandleGenerateText()
{
var openAiChatClient = OpenAIChatClientFactory
.Create()
.Build();
if (openAiChatClient == null)
{
await JSRuntime.InvokeAsync<string>("alert", "Sorry, the OpenAI Chat client did not initiate properly. Cannot generate text.");
return;
}
string description = """
You are specifying instructions for generating a DALL-e-3 image.
Do not always choose Bergen! Also choose among smaller cities, villages and different locations in Norway.
Just generate one image, not a montage. Only provide one suggestion.
The suggestion should be based from this input and randomize what to display:
Suggests a cozy vivid location set in Norway showing outdoor scenery in good weather at different places
and with nice weather aimed to attract tourists. Note - it should also display both urban,
suburban or nature scenery with a variance of which of these three types of locations to show.
It should also include some Norwegian animals and flowers and show people. It should pick random cities and places in Norway to display.
""";
homeModel.Description = string.Empty;
await foreach (var updateContentPart in openAiChatClient.GetStreamedReplyStringAsync(description))
{
homeModel.Description += updateContentPart;
StateHasChanged();
await Task.Delay(20);
}
}
protected async Task HandleValidSubmit()
{
IsLoading = true;
string generatedImageBase64 = await DallEImageClient.GenerateDallEImageB64StringAsync(homeModel.Description!,
new ImageGenerationOptions
{
Quality = MapQuality(homeModel.Quality),
Style = MapStyle(homeModel.Style),
Size = MapSize(homeModel.Size)
});
ImageData = generatedImageBase64;
if (!string.IsNullOrWhiteSpace(ImageData))
{
// Open the modal
await JSRuntime.InvokeVoidAsync("showModal", "imageModal");
}
IsLoading = false;
StateHasChanged();
}
private static GeneratedImageSize MapSize(ImageSize size) => size switch
{
ImageSize.W1024xH1792 => GeneratedImageSize.W1024xH1792,
ImageSize.W1792H1024 => GeneratedImageSize.W1792xH1024,
_ => GeneratedImageSize.W1024xH1024,
};
private static GeneratedImageStyle MapStyle(ImageStyle style) => style switch
{
ImageStyle.Vivid => GeneratedImageStyle.Vivid,
_ => GeneratedImageStyle.Natural
};
private static GeneratedImageQuality MapQuality(ImageQuality quality) => quality switch
{
ImageQuality.High => GeneratedImageQuality.High,
_ => GeneratedImageQuality.Standard
};
}
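The razor markup for the Home page is not listed here; a compact sketch of what it could look like, based on the fields and handlers in the code-behind above (the layout and the modal markup are assumptions), is:

Home.razor | Razor markup (sketch)

@page "/"

<EditForm Model="homeModel" OnValidSubmit="HandleValidSubmit">
    <InputTextArea @bind-Value="homeModel.Description" class="form-control" rows="4" />
    @* Enum dropdowns generated with the EnumHelper shown earlier in this article *@
    @EnumHelper.GenerateEnumDropDown(this, homeModel.Quality, (ImageQuality value) => homeModel.Quality = value)
    @EnumHelper.GenerateEnumDropDown(this, homeModel.Style, (ImageStyle value) => homeModel.Style = value)
    @EnumHelper.GenerateEnumDropDown(this, homeModel.Size, (ImageSize value) => homeModel.Size = value)
    <button type="submit" class="btn btn-primary" disabled="@IsLoading">Generate image</button>
    <button type="button" class="btn btn-secondary" @onclick="HandleGenerateText">Suggest description</button>
</EditForm>

@* Bootstrap 5 modal with id "imageModal", opened via the showModal JS interop call in HandleValidSubmit *@
<div class="modal fade" id="imageModal" tabindex="-1">
    <div class="modal-dialog modal-lg">
        <div class="modal-content">
            <img src="@ImageData" class="img-fluid" alt="Generated DALL-E 3 image" />
        </div>
    </div>
</div>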
Finally, a screenshot of the app!
The app can also be used on a mobile device, as it uses Bootstrap 5 and responsive design.
This article presents a way to extract tags for an image and output them to the console. Azure AI is used, more specifically the ImageAnalysisClient.
The article shows how you can expose the data as an IAsyncEnumerable, so you can consume it with await foreach. I would recommend this approach
for many Azure AI (and similar) services where there is no out-of-the-box support for async enumerables: hide away the details in a helper extension method as shown in this article.
public static async Task ExtractImageTags()
{
string visionApiKey = Environment.GetEnvironmentVariable("VISION_KEY")!;
string visionApiEndpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT")!;
var credentials = new AzureKeyCredential(visionApiKey);
var serviceUri = new Uri(visionApiEndpoint);
var imageAnalysisClient = new ImageAnalysisClient(serviceUri, credentials);
await foreach (var tag in imageAnalysisClient.ExtractImageTagsAsync("Images/Store.png"))
{
Console.WriteLine(tag);
}
}
The code creates an ImageAnalysisClient, defined in the Azure.AI.Vision.ImageAnalysis NuGet package. I use two environment variables here to store the key and endpoint.
Note that not all Azure AI features are available in all regions. If you just want to try out some Azure AI features, you can start with the US East region, as that region
will most likely have all the features you want to test; you can then move to a more local region if you are planning to run more workloads using Azure AI.
We then use an await foreach pattern to extract the image tags asynchronously. This is a custom extension method I created so I can output the tags asynchronously using await foreach and
also specify a wait time between each new tag being output, defaulting to 200 milliseconds here.
The extension method looks like this:
using Azure.AI.Vision.ImageAnalysis;
namespace UseAzureAIServicesFromNET.Vision;
public static class ImageAnalysisClientExtensions
{
/// <summary>
/// Extracts the tags for image at specified path, if existing.
/// The results are returned as async enumerable of strings.
/// </summary>
/// <param name="client"></param>
/// <param name="imagePath"></param>
/// <param name="waitTimeInMsBetweenOutputTags">Default wait time in ms between output. Defaults to 200 ms.</param>
/// <returns></returns>
public static async IAsyncEnumerable<string?> ExtractImageTagsAsync(this ImageAnalysisClient client,
string imagePath, int waitTimeInMsBetweenOutputTags = 200)
{
if (!File.Exists(imagePath))
{
yield return default(string); // just return null if a file is not found at the provided path
yield break; // stop here, otherwise opening the missing file below would throw
}
using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
var analysisResult =
await client.AnalyzeAsync(BinaryData.FromStream(imageStream), VisualFeatures.Tags | VisualFeatures.Caption);
yield return $"Description: {analysisResult.Value.Caption.Text}";
foreach (var tag in analysisResult.Value.Tags.Values)
{
yield return $"Tag: {tag.Name}, Confidence: {tag.Confidence:F2}";
await Task.Delay(waitTimeInMsBetweenOutputTags);
}
}
}
The console output of the tags looks like this:
In addition to tags, we can also output objects in the image in a very similar extension method:
/// <summary>
/// Extracts the objects for image at specified path, if existing.
/// The results are returned as async enumerable of strings.
/// </summary>
/// <param name="client"></param>
/// <param name="imagePath"></param>
/// <param name="waitTimeInMsBetweenOutputTags">Default wait time in ms between output. Defaults to 200 ms.</param>
/// <returns></returns>
public static async IAsyncEnumerable<string?> ExtractImageObjectsAsync(this ImageAnalysisClient client,
string imagePath, int waitTimeInMsBetweenOutputTags = 200)
{
if (!File.Exists(imagePath))
{
yield return default(string); // just return null if a file is not found at the provided path
yield break; // stop here, otherwise opening the missing file below would throw
}
using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
var analysisResult =
await client.AnalyzeAsync(BinaryData.FromStream(imageStream), VisualFeatures.Objects | VisualFeatures.Caption);
yield return $"Description: {analysisResult.Value.Caption.Text}";
foreach (var objectInImage in analysisResult.Value.Objects.Values)
{
yield return $"""
Object tag: {objectInImage.Tags.FirstOrDefault()?.Name} Confidence: {objectInImage.Tags.FirstOrDefault()?.Confidence},
Position (bbox): {objectInImage.BoundingBox}
""";
await Task.Delay(waitTimeInMsBetweenOutputTags);
}
}
The code is nearly identical: we set the VisualFeatures to extract from the image and read out the objects (instead of the tags).
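Consuming the objects method mirrors the tags example above; a small usage sketch (same environment variables and sample image as before, wiring assumed) could be:

public static async Task ExtractImageObjects()
{
    // Requires: using Azure; using Azure.AI.Vision.ImageAnalysis;
    string visionApiKey = Environment.GetEnvironmentVariable("VISION_KEY")!;
    string visionApiEndpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT")!;
    var imageAnalysisClient = new ImageAnalysisClient(new Uri(visionApiEndpoint), new AzureKeyCredential(visionApiKey));
    // Stream out the detected objects with their bounding boxes
    await foreach (var detectedObject in imageAnalysisClient.ExtractImageObjectsAsync("Images/Store.png"))
    {
        Console.WriteLine(detectedObject);
    }
}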
The console output of the objects looks like this: