Tuesday, 22 April 2025

Predicting a variable using regression with ML.NET

This article will look at regression with ML.NET. In the example, the variable "poverty rate", measured as a percentage, is used to predict the number of teenage pregnancies per 1,000 births. The data is fetched from a publicly available CSV file in Jeff Prosise's ML.NET repo on Github:

https://github.com/jeffprosise/ML.NET/blob/master/MLN-SimpleRegression/MLN-SimpleRegression/Data/poverty.csv



In this article, LINQPad 8 will be used. First off, the following two Nuget packages are added:
  • Microsoft.ML
  • Microsoft.ML.Mkl.Components
The following method will plot a scatter graph from the provided MLContext data and add a standard linear trendline, which works well with the single-feature example in this article.

Plotutils.cs



void PlotScatterGraph<T>(MLContext mlContext, IDataView trainData, Func<T, PointItem> pointCreator, string chartTitle) where T : class, new()
{
	//Convert the IDataview to an enumerable collection
	var data = mlContext.Data.CreateEnumerable<T>(trainData, reuseRowObject: false).Select(x => pointCreator(x)).ToList();

	// Calculate trendline (simple linear regression)
	double avgX = data.Average(d => d.X);
	double avgY = data.Average(d => d.Y);
	double slope = data.Sum(d => (d.X - avgX) * (d.Y - avgY)) / data.Sum(d => (d.X - avgX) * (d.X - avgX));
	double intercept = avgY - slope * avgX;
	var trendline = data.Select(d => new { X = d.X, Y = slope * d.X + intercept }).ToList();

	//Plot the scatter graph
	var plot = data.Chart(d => d.X)
		.AddYSeries(d => d.Y, LINQPad.Util.SeriesType.Point, chartTitle)
		.AddYSeries(d => trendline.FirstOrDefault(t => t.X == d.X)?.Y ?? 0, Util.SeriesType.Line, "Trendline")
		.ToWindowsChart();
		
	plot.AntiAliasing = System.Windows.Forms.DataVisualization.Charting.AntiAliasingStyles.All;
	plot.Dump();
}



Let's look at the code for loading the CSV data into the MLContext, and then using the method TrainTestSplit to split the data into training data and test data. Note also the classes Input and Output and the usage of the attributes LoadColumn and ColumnName.

Program.cs



void Main()
{

	string inputFile = Path.Combine(Path.GetDirectoryName(Util.CurrentQueryPath)!, @"Sampledata\poverty2.csv"); //linqpad tech

	var context = new MLContext(seed: 0);

	//Train the model 
	var data = context.Data
		.LoadFromTextFile<Input>(inputFile, hasHeader: true, separatorChar: ';');
	
	// Split data into training and test sets 
	var split = context.Data.TrainTestSplit(data, testFraction: 0.2);
	var trainData = split.TrainSet;
	var testData = split.TestSet;

	var pipeline = context
		.Transforms.NormalizeMinMax("PovertyRate")
		.Append(context.Transforms.Concatenate("Features", "PovertyRate"))
		.Append(context.Regression.Trainers.Ols());

	var model = pipeline.Fit(trainData);
	// Use the model to make a prediction
	var predictor = context.Model.CreatePredictionEngine<Input, Output>(model);
	var input = new Input { PovertyRate = 8.4f };

	var actual = 36.8f;

	var prediction = predictor.Predict(input);
	Console.WriteLine($"Input poverty rate: {input.PovertyRate} . Predicted birth rate per 1000: {prediction.TeenageBirthRate:0.##}");
	Console.WriteLine($"Actual birth rate per 1000: {actual}");

	// Evaluate the regression model 
	var predictions = model.Transform(testData);
	var metrics = context.Regression.Evaluate(predictions);
	Console.WriteLine($"R-squared: {metrics.RSquared:0.##}");
	Console.WriteLine($"Root Mean Squared Error: {metrics.RootMeanSquaredError:0.##}");
	Console.WriteLine($"Mean Absolute Error: {metrics.MeanAbsoluteError:0.##}");
	Console.WriteLine($"Mean Squared Error: {metrics.MeanSquaredError:0.##}");


	PlotScatterGraph<Input>(context, trainData, (Input input) => 
		new PointItem { X = (float) Math.Round(input.PovertyRate, 2), Y = (float) Math.Round(input.TeenageBirthRate, 2) },
		"Poverty rate (%) vs Teenage Pregnancies per 1,000 birth");

}

public class PointItem {
	public float X { get; set; }
	public float Y { get; set; }
}

void PlotScatterGraph<T>(MLContext mlContext, IDataView trainData, Func<T, PointItem> pointCreator, string chartTitle) where T : class, new()
{
	//Convert the IDataview to an enumerable collection
	var data = mlContext.Data.CreateEnumerable<T>(trainData, reuseRowObject: false).Select(x => pointCreator(x)).ToList();

	// Calculate trendline (simple linear regression)
	double avgX = data.Average(d => d.X);
	double avgY = data.Average(d => d.Y);
	double slope = data.Sum(d => (d.X - avgX) * (d.Y - avgY)) / data.Sum(d => (d.X - avgX) * (d.X - avgX));
	double intercept = avgY - slope * avgX;
	var trendline = data.Select(d => new { X = d.X, Y = slope * d.X + intercept }).ToList();

	//Plot the scatter graph
	var plot = data.Chart(d => d.X)
		.AddYSeries(d => d.Y, LINQPad.Util.SeriesType.Point, chartTitle)
		.AddYSeries(d => trendline.FirstOrDefault(t => t.X == d.X)?.Y ?? 0, Util.SeriesType.Line, "Trendline")
		.ToWindowsChart();
		
	plot.AntiAliasing = System.Windows.Forms.DataVisualization.Charting.AntiAliasingStyles.All;
	plot.Dump();
}



public class Input
{

	[LoadColumn(1)]
	public float PovertyRate;

	[LoadColumn(5), ColumnName("Label")]
	public float TeenageBirthRate { get; set; }

}
public class Output
{
	[ColumnName("Score")]
	public float TeenageBirthRate;

}


A pipeline is defined for the machine learning here, consisting of the following:
  • The method NormalizeMinMax transforms the poverty rate into a normalized scale between 0 and 1. The Concatenate method specifies the "Features"; in this case the only feature is the poverty rate column, and from it we want to predict a score, the rate of teenage pregnancies per 1,000 births. Note that the CSV data set contains more columns, but this is a simple regression where only one variable is taken into account.
  • The trainer used to train the machine learning algorithm is Ols, Ordinary Least Squares.
  • The method Fit trains the model using the training data defined by the method TrainTestSplit.
  • The resulting model is used to create a prediction engine.
  • Using the prediction engine, it is possible to predict a value using the Predict method, given one input item. Our prediction engine takes objects of type Input and returns objects of type Output.
  • Using the test data, the method Transform on the model gives us multiple predictions, and it is possible to evaluate the regression from these predictions to check how accurate the regression model is.
  • This evaluation returns, among other metrics, the R-squared. This is a value from 0 to 1.0 describing how much of the total variation in the data the regression explains; the residual is the offset between an actual data point and what the regression model predicts for it.
  • Other metrics such as RMSE and MSE are the root mean squared error and the mean squared error, which are absolute error measures. A sketch of how these metrics relate to the predictions is shown after this list.
  • Using the code above we got a fairly accurate regression model, but more accuracy could be achieved by taking additional factors into account.
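To make the metrics concrete, here is a minimal sketch with hypothetical actual/predicted values showing how MSE, RMSE, MAE and R-squared relate to the predictions; ML.NET computes these internally in Evaluate. LINQ is assumed available, as it is in LINQPad.

double[] actual = { 36.8, 24.1, 41.2, 30.5 };     // hypothetical Label values
double[] predicted = { 35.1, 26.0, 39.5, 31.2 };  // hypothetical Score values

double mse = actual.Zip(predicted, (y, yHat) => Math.Pow(y - yHat, 2)).Average();
double rmse = Math.Sqrt(mse);                     // root of the mean squared error
double mae = actual.Zip(predicted, (y, yHat) => Math.Abs(y - yHat)).Average();

double avgY = actual.Average();
double ssTot = actual.Sum(y => Math.Pow(y - avgY, 2));                           // total variation
double ssRes = actual.Zip(predicted, (y, yHat) => Math.Pow(y - yHat, 2)).Sum();  // residual variation
double rSquared = 1 - ssRes / ssTot;              // fraction of the variation explained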


Output from the LINQPad 8 application shown in this article:

Input poverty rate: 8,4 . Predicted birth rate per 1000: 35,06
Actual birth rate per 1000: 36,8
R-squared: 0,59
Root Mean Squared Error: 8,99
Mean Absolute Error: 8,01
Mean Squared Error: 80,83
Please note that there are some standard column names used for machine learning:

Label: Represents the target variable (the value to predict).

Features: Contains the input features used for training.

Score: Stores the predicted value in regression models.

PredictedLabel: Holds the predicted class in classification models.

Probability: Represents the probability of a predicted class.

FeatureContributions: Shows how much each feature contributes to a prediction.

In the code above, the column names "Label", "Features" and "Score" were used to instruct the regression calculated here by the ML.NET context model. The attribute ColumnName was used together with the Concatenate method.

Tuesday, 15 April 2025

Adding plugins to use for Semantic Kernel

With Microsoft Semantic Kernel, it is possible to consume AI services such as Azure AI and OpenAI with less code, as this framework provides simplification and standardization for consuming these services. A repo on Github with the code shown in this article is provided here :

https://github.com/toreaurstadboss/SemanticKernelPluginDemov4

The demo code is a Blazor Server app. It demonstrates how to use Microsoft Semantic Kernel with plugins. I have decided to provide the Northwind database as the extra data the plugin will use. Via debugging and inspecting the output, I can see that the plugin is successfully called and used. It is also easy to add plugins, which provide additional data to the AI model. This is suitable for AI powered solutions where you want to expose private data to the AI model. For example, when using OpenAI Chat GPT-4, providing a plugin makes it possible to specify which data are to be presented and displayed. It is a convenient way to provide a natural language interface for data reporting, such as listing results from this plugin. A plugin can provide kernel functions, marked using attributes on the methods. Let's first look at the Program.cs file for wiring up the semantic kernel for a Blazor Server demo app.

Program.cs



using Microsoft.EntityFrameworkCore;
using Microsoft.SemanticKernel;
using SemanticKernelPluginDemov4.Models;
using SemanticKernelPluginDemov4.Services;


namespace SemanticKernelPluginDemov4
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var builder = WebApplication.CreateBuilder(args);

            // Add services to the container.
            builder.Services.AddRazorPages();
            builder.Services.AddServerSideBlazor();

            // Add DbContext
            builder.Services.AddDbContextFactory<NorthwindContext>(options =>
                options.UseSqlServer(builder.Configuration.GetConnectionString("DefaultConnection")));

            builder.Services.AddScoped<IOpenAIChatcompletionService, OpenAIChatcompletionService>();

            builder.Services.AddScoped<NorthwindSemanticKernelPlugin>();

            builder.Services.AddScoped(sp =>
            {
                var kernelBuilder = Kernel.CreateBuilder();
                kernelBuilder.AddOpenAIChatCompletion(modelId: builder.Configuration.GetSection("OpenAI").GetValue<string>("ModelId")!,
                    apiKey: builder.Configuration.GetSection("OpenAI").GetValue<string>("ApiKey")!);

                var kernel = kernelBuilder.Build();

                var dbContextFactory = sp.GetRequiredService<IDbContextFactory<NorthwindContext>>();
                var northwindSemanticKernelPlugin = new NorthwindSemanticKernelPlugin(dbContextFactory);
                kernel.ImportPluginFromObject(northwindSemanticKernelPlugin);

                return kernel;
            });

            var app = builder.Build();

            // Configure the HTTP request pipeline.
            if (!app.Environment.IsDevelopment())
            {
                app.UseExceptionHandler("/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            app.UseHttpsRedirection();

            app.UseStaticFiles();

            app.UseRouting();

            app.MapBlazorHub();
            app.MapFallbackToPage("/_Host");

            app.Run();
        }
    }
}



In the code above, note the following:
  • The usage of IDbContextFactory for creating a db context, injected into the plugin. This is a Blazor Server app, where the client and the server share a durable connection over SignalR, so this interface is used to create DbContext instances as needed.
  • The method ImportPluginFromObject imports the plugin into the semantic kernel built here. Note that the kernel is registered as a scoped service, and the plugin is registered as a scoped service as well.
The plugin looks like this.

NorthwindSemanticKernelplugin.cs



using Microsoft.EntityFrameworkCore;
using Microsoft.SemanticKernel;
using SemanticKernelPluginDemov4.Models;
using System.ComponentModel;

namespace SemanticKernelPluginDemov4.Services
{

    public class NorthwindSemanticKernelPlugin
    {
        private readonly IDbContextFactory<NorthwindContext> _northwindContext;

        public NorthwindSemanticKernelPlugin(IDbContextFactory<NorthwindContext> northwindContext)
        {
            _northwindContext = northwindContext;
        }

        [KernelFunction]
        [Description("When asked about the suppliers of Nortwind database, use this method to get all the suppliers. Inform that the data comes from the Semantic Kernel plugin called : NortwindSemanticKernelPlugin")]
        public async Task<List<string>> GetSuppliers()
        {
            using (var dbContext = _northwindContext.CreateDbContext())
            {
                return await dbContext.Suppliers.OrderBy(s => s.CompanyName).Select(s => "Kernel method 'NorthwindSemanticKernelPlugin:GetSuppliers' gave this: " + s.CompanyName).ToListAsync();
            }
        }

        [KernelFunction]
        [Description("When asked about the total sales of a given month in a year, use this method. In case asked for multiple months, call this method again multiple times, adjusting the month and year as provided. The month and year is to be in the range 1-12 for months and for year 1996-1998. Suggest for the user what the valid ranges are in case other values are provided.")]
        public async Task<decimal> GetTotalSalesInMonthAndYear(int month, int year)
        {
            using (var dbContext = _northwindContext.CreateDbContext())
            {
                var sumOfOrders = await (from Order o in dbContext.Orders
                             join OrderDetail od in dbContext.OrderDetails on o.OrderId equals od.OrderId
                             where o.OrderDate.HasValue && (o.OrderDate.Value.Month == month
                             && o.OrderDate.Value.Year == year) 
                             select (od.UnitPrice * od.Quantity) * (1 - (decimal)od.Discount)).SumAsync();

                return sumOfOrders;
            }
        }

    }
}


In the code above, note the attributes used. KernelFunction marks a method that the Semantic Kernel can use. The Description attribute instructs the AI LLM model how to use the method, how to provide parameter values, if any, and when the method is to be called.
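As a side note, an imported kernel function can also be invoked directly, bypassing the LLM, which is handy for verifying that the plugin was registered correctly. A minimal sketch, assuming the Kernel instance built in Program.cs is available as 'kernel':

// Invoke the kernel function by plugin name and function name, without going via the chat model
var result = await kernel.InvokeAsync("NorthwindSemanticKernelPlugin", "GetSuppliers");
foreach (var supplier in result.GetValue<List<string>>() ?? new List<string>())
{
    Console.WriteLine(supplier);
}

Let's look at the OpenAI service next.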

OpenAIChatcompletionService.cs



using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

namespace SemanticKernelPluginDemov4.Services
{

    public class OpenAIChatcompletionService : IOpenAIChatcompletionService
    {
        private readonly Kernel _kernel;

        private IChatCompletionService _chatCompletionService;

        public OpenAIChatcompletionService(Kernel kernel)
        {
            _kernel = kernel;
            _chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
        }

        public async IAsyncEnumerable<string?> RunQuery(string question)
        {
            var chatHistory = new ChatHistory();

            chatHistory.AddSystemMessage("You are a helpful assistant, answering only on questions about Northwind database. In case you got other questions, inform that you only can provide questions about the Northwind database. It is important that only the provided Northwind database functions added to the language model through plugin is used when answering the questions. If no answer is available, inform this.");

            chatHistory.AddUserMessage(question);

            await foreach (var chatUpdate in _chatCompletionService.GetStreamingChatMessageContentsAsync(chatHistory, CreateOpenAIExecutionSettings(), _kernel))
            {
                yield return chatUpdate.Content;
            }
        }

        private OpenAIPromptExecutionSettings? CreateOpenAIExecutionSettings()
        {
            return new OpenAIPromptExecutionSettings
            {
                ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
            };
        }

    }
}



In the code above, the kernel is injected into this service. The kernel was registered in Program.cs as a scoped service, so it can be injected here. The method GetRequiredService is similar to the method with the same name on IServiceProvider used inside Program.cs. Note the use of ToolCallBehavior set to AutoInvokeKernelFunctions, which lets the model invoke the imported kernel functions automatically. Extending the AI powered functionality with plugins requires little extra code with Microsoft Semantic Kernel.
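A minimal usage sketch for consuming the streamed reply, assuming an injected IOpenAIChatcompletionService instance named 'chatService':

// Consume the IAsyncEnumerable<string?> returned by RunQuery chunk by chunk
await foreach (string? chunk in chatService.RunQuery("What were the total sales in July 1996?"))
{
    Console.Write(chunk);
}

A screenshot of the demo is shown below.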

Monday, 31 March 2025

Generating Dall-e-3 images using Microsoft Semantic Kernel

In this demo, Dall-e-3 images are generated from a console app using Microsoft Semantic Kernel. The Semantic Kernel is a library that offers different plugins for different AI services. It is supported for multiple languages: C#, Java and Python. Its goal is to ease the consumption of AI services, build a shared infrastructure for these services, and offer a way to conceptualize and abstract their consumption. It can also be seen as a middleware for the services, offering a framework where consuming AI services becomes a more standardized process. A Github repo has been created with the code for this demo here:

Github repo for this demo
Dall-e-3 image generator with semantic kernel

The demo contains two steps: first building the semantic kernel itself and then the image generation. First off, the .csproj file has package references to the latest, as of March 2025, nuget packages, including Microsoft Semantic Kernel.
DalleImageGeneratorWithSemanticKernel.csproj


<Project Sdk="Microsoft.NET.Sdk"> 

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
    <NoWarn>$(NoWarn);CS8618,IDE0009,CA1051,CA1050,CA1707,CA1054,CA2007,VSTHRD111,CS1591,RCS1110,RCS1243,CA5394,SKEXP0001,SKEXP0010,SKEXP0020,SKEXP0040,SKEXP0050,SKEXP0060,SKEXP0070,SKEXP0101,SKEXP0110</NoWarn>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.SemanticKernel" Version="1.44.0" />
    <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="8.0.1" />
  </ItemGroup>

</Project>


Note that multiple warnings are suppressed via NoWarn, as Semantic Kernel is still open for change in the future and thus flags multiple different warnings. The image generation demo is set up in the class ImageGeneration, shown below. Note how the Kernel object is built up here. Its builder offers many methods to add AI services; in this case an ITextToImageService is added. The modelName used here is "dall-e-3".
ImageGeneration.cs


using DalleImageGeneratorWithSemanticKernel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.TextToImage;
using OpenAI.Images;
using System;
using System.Diagnostics;

namespace UseSemanticKernelFromNET;

public class ImageGeneration
{
    public async Task GenerateBasicImage(string modelName)
    {
        Kernel kernel = Kernel
            .CreateBuilder()
            .AddOpenAITextToImage(modelId:modelName, apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!).Build();

        ITextToImageService imageService = kernel.GetRequiredService<ITextToImageService>();

        Console.WriteLine("##### SEMANTIC KERNEL - IMAGE GENERATOR DALL-E-3 CONSOLE APP #####\n\n");


        string prompt =
           """
            In the humorous image, Vice President JD Vance and his wife are seen stepping out of their plane onto the icy runway of
            Thule Air Base. Just as they set foot on the frozen ground, a bunch of playful polar bears greet them enthusiastically, much like 
            overzealous fans welcoming celebrities. The surprised expressions on their faces are priceless as the couple finds 
            themselves being "chased" by these bundles of fur and excitement. JD Vance, with a mix of amusement and alarm, has one 
            shoe comically left behind in the snow, while his wife, holding onto her hat against the chilly wind, can't suppress a laugh.
            The scene is completed with members of the Air Base
            staff in the background, chuckling and capturing the moment on their phones, adding to the light-heartedness of the unexpected encounter.  
            The plane should carry the AirForce One Colors and read "United States of America". 
         """;

        Console.WriteLine($"\n ### STORY FOR THE IMAGE TO GENERATE WITH DALL-E-3 ### \n{prompt}\n\n");

        Console.WriteLine("\n\nStarting generation of dall-e-3 image...");

        var cts = new CancellationTokenSource();
        var cancellationToken = cts.Token;

        var rotationTask = Task.Run(() => ConsoleUtil.RotateDash(cancellationToken), cts.Token);

        var image = await imageService.GetOpenAIImageContentAsync(prompt,
            kernel: kernel,
            size: (1024, 1024), //for Dall-e-2 images, use: 256x256, 512x512, or 1024x1024. For dalle-3 images, use: 1024x1024, 1792x1024, 1024x1792. 
            style: "vivid",
            quality: "hd", //high
            responseFormat: "b64_json", // bytes
            cancellationToken: cancellationToken);       
        
        cts.Cancel(); //cancel to stop animating the waiting indicator

        var imageTmpFilePng = Path.ChangeExtension(Path.GetTempFileName(), "png");
        image?.FirstOrDefault()?.WriteToFile(imageTmpFilePng);

        Console.WriteLine($"Wrote image to location: {imageTmpFilePng}");

        Process.Start(new ProcessStartInfo
        {
            FileName = "explorer.exe",
            Arguments = imageTmpFilePng,
            UseShellExecute = true
        });

    }

}


A helper extension method has been added for the OpenAI Dall-e-3 image creation. Please note that one should avoid adding too many extension methods on the Semantic Kernel itself, as this defeats the purpose of a standardized way of using it. In this case it is just a helper method to customize the generation of Dall-e-3 (and Dall-e-2) images from OpenAI using the Semantic Kernel. The code is shown below.
TextToImageServiceExtensions.cs


using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
using Microsoft.SemanticKernel.Services;
using Microsoft.SemanticKernel.TextToImage;

namespace UseSemanticKernelFromNET;

public static class TextToImageServiceExtensions
{


    /// <summary>
    /// Generates OpenAI image content asynchronously based on the provided text input and settings.
    /// </summary>
    /// <param name="imageService">The image service used to generate the image content.</param>
    /// <param name="input">The text input used to generate the image.</param>
    /// <param name="kernel">An optional kernel instance for additional processing.</param>
    /// <param name="size">
    /// The desired size of the generated image. For DALL-E 2 images, use: 256x256, 512x512, or 1024x1024. 
    /// For DALL-E 3 images, use: 1024x1024, 1792x1024, or 1024x1792.
    /// </param>
    /// <param name="style">The style of the image. Must be "vivid" or "natural".</param>
    /// <param name="quality">The quality of the image. Must be "standard", "hd", or "high".</param>
    /// <param name="responseFormat">
    /// The format of the response. Must be one of the following: "url", "uri", "b64_json", or "bytes".
    /// </param>
    /// <param name="cancellationToken">A token to monitor for cancellation requests.</param>
    /// <returns>
    /// A task that represents the asynchronous operation. The task result contains a read-only list of 
    /// <see cref="ImageContent"/> objects representing the generated images.
    /// </returns>
    public static Task<IReadOnlyList<ImageContent>> GetOpenAIImageContentAsync(this ITextToImageService imageService,
        TextContent input,
        Kernel? kernel = null,
        (int width, int height) size = default((int, int)), // for Dall-e-2 images, use: 256x256, 512x512, or 1024x1024. For dalle-3 images, use: 1024x1024, 1792x1024, 1024x1792. 
        string style = "vivid",
        string quality = "hd",
        string responseFormat = "b64_json",        
        CancellationToken cancellationToken = default)
    {
        
        string? currentModelId = imageService.GetModelId();

        if (currentModelId != "dall-e-3" && currentModelId != "dall-e-2")
        {
            throw new NotSupportedException("This method is only supported for the DALL-E 2 and DALL-E 3 models.");
        }

        if (size.width == 0 || size.height == 0)
        {
            size = (1024, 1024); //defaulting here to (1024, 1024).
        }

        if (currentModelId == "dall-e-2"){
            var supportedSizes = new[]{
                (256, 256),
                (512, 512),
                (1024, 1024)
            };
            if (!supportedSizes.Contains(size))
            {
                throw new ArgumentException("For DALL-E 2, the size must be one of: 256x256, 512x512, or 1024x1024.");
            }
        }
        else if (currentModelId == "dall-e-3")
        {
            var supportedSizes = new[]{
                (1024, 1024),
                (1792, 1024),
                (1024, 1792)
            };
            if (!supportedSizes.Contains(size))
            {
                throw new ArgumentException("For DALL-E 3, the size must be one of: 256x256, 512x512, or 1024x1024.");
            }
        }

        return imageService.GetImageContentsAsync(
            input,
            new OpenAITextToImageExecutionSettings
                {
                    Size = size,
                    Style = style, //must be "vivid" or "natural"
                    Quality = quality, //must be "standard" or "hd" or "high"
                    ResponseFormat = responseFormat // url or uri or b64_json or bytes
                },
            kernel,
            cancellationToken);

    }
}


Screenshot of this demo, console app running:
The console app will generate the Dall-e-3 image using the OpenAI service, save the image as a PNG file in a temporary location, and then open the image using the Windows default image viewer application. Example image generated:

Saturday, 22 March 2025

Image classification using ML.NET Machine Learning

I added a demo using ML.Net in a Github repo. The demo is available in this repository:

https://github.com/toreaurstadboss/ImageClassificationMLNetBlazorDemo

A screenshot below shows the application running:

ML.Net is Microsoft's machine learning library. Combined with tooling inside VS 2022, it gives an easy way to use machine learning models locally on your CPU or GPU, or hosted in Azure cloud services. The website for ML.Net is available here, with more information and documentation:

https://dotnet.microsoft.com/en-us/apps/ai/ml-dotnet

In the demo above I have trained the model to recognize either horses or moose. These species are both mammals and herbivores and are somewhat similar in appearance. I have trained the machine learning model in this demo with only ten images of each category, and then tested it with ten other images that check whether the model correctly recognizes a horse or a moose. Already with just ten images, it did not miss once; of course, a real world machine learning model would have been trained on tens of thousands of images to handle all edge cases. ML.Net is very easy to run; it can run locally on your own machine, using the CPU or GPU. The GPU must be CUDA compatible, which in practice means an NVIDIA 8-series card or newer. I have such a card on a laptop of mine and have tested it. The following links point to NVIDIA's download pages for the software needed, as of March 2025, to run the ML.Net image classification functionality on GPUs:

Download Cuda 10.1

Cuda 10.1 can be downloaded from here: https://developer.nvidia.com/cuda-10.1-download-archive-base

CuDnn 7.6.4

CuDnn can be downloaded from here: https://developer.nvidia.com/rdp/cudnn-archive

Getting started with image classification using ML.Net

It is easiest to use VS 2022 to add an ML.Net machine learning model. Inside VS 2022, right click your project, choose Add and then Machine Learning Model. In case you do not see this option, hit the start menu and type in Visual Studio Installer. Now, hit the button Modify for your VS installation, choose Individual Components and search for 'ml'. Select the ML.NET Model Builder. There is also a package called ML.NET Model Builder 2022; I chose that as well.
Choosing the scenario
Now, after adding the Machine Learning model, the first page asks for a scenario. I choose Image Classification here, below the Computer Vision scenario category.

Choosing the environment
Then I hit the button Local. In the next step, I select Local (CPU). Note that I have also tested an Nvidia Cuda-compatible graphics card / GPU on another laptop and it worked great; the GPU should be preferred if you have a compatible one and have installed Cuda 10.1 and CuDnn 7.6.4 as shown in the links above.

Hit the button Next Step.
Choosing the Data
It is time to train the machine learning model with data! I have gathered ten sample images each of moose and horses. The data is provided by pointing to a folder where each category of images is gathered in its own subfolder.

Next step is Train
Training the model
Here you can hit the button Train, and hit it again to train more. When you have trained the model enough, you can hit the button Next step. Training the machine learning model will take some time depending on whether you use the CPU or GPU and on the number of input images. Usually it takes a few seconds, not many minutes, to churn through a set of images as shown here, 20 images in total.

Loading up the image data and using the machine learning model

Note that ML.Net demands support for interactive rendering of web apps; pure Blazor WASM apps are not supported. The following file shows how the Blazor server-side app is set up.

Program.cs

using ImageClassificationMLNetBlazorDemo.Components;

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents();

var app = builder.Build();

// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
    app.UseExceptionHandler("/Error", createScopeForErrors: true);
    // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
    app.UseHsts();
}

app.UseHttpsRedirection();

app.UseStaticFiles();
app.UseAntiforgery();

app.MapRazorComponents<App>()
    .AddInteractiveServerRenderMode();

app.Run();


InteractiveServer is set up inside the App.razor using the HeadOutlet.

App.razor


<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <base href="/" />
    <link rel="stylesheet" href="app.css" />
    <link rel="stylesheet" href="lib/bootstrap/css/bootstrap.min.css" />
    <link rel="stylesheet" href="ImageClassificationMLNetBlazorDemo.styles.css" />
    <link rel="icon" type="image/png" href="favicon.png" />

    <HeadOutlet @rendermode="InteractiveServer" />
</head>

<body>
    <Routes @rendermode="InteractiveServer" />
    <script src="_framework/blazor.web.js"></script>
</body>

</html>


The following code-behind of the razor component Home.razor in the demo repo shows how a file is uploaded using the InputFile control in Blazor server-side.

Home.razor.cs


@code {

    private string? _base64ImageSource = null;
    private string? _predictedLabel = "No classification";
    private IOrderedEnumerable<KeyValuePair<string, float>>? _predictedLabels = null;
    private int? _assessedPredictionQuality = null;
    private string? _errorMessage = null;

    private async Task LoadFileAsync(InputFileChangeEventArgs e)
    {
        try
        {
            ResetPrivateFields();

            if (e.File.Size <= 0 || e.File.Size >= 2 * 1024 * 1024)
            {
                _errorMessage = "Sorry, the uploaded image but be between 1 byte and 2 MB!";
                return;
            }

            byte[] imageBytes = await GetImageBytes(e.File);
            _base64ImageSource = GetBase64ImageSourceString(e.File.ContentType, imageBytes);

            PredictImageClassification(imageBytes);

        }
        catch (Exception err)
        {
            Console.WriteLine(err);
        }
    }

    private void ResetPrivateFields()
    {
        _base64ImageSource = null;
        _predictedLabel = null;
        _predictedLabels = null;
        _assessedPredictionQuality = null;
    }

    private int GetAssesPrediction()
    {
        int result = 1;
        if (_predictedLabel != null && _predictedLabels != null)
        {
            foreach (var label in _predictedLabels)
            {
                if (label.Key == _predictedLabel)
                {
                    result = label.Value switch
                    {
                        <= 0.50f => 1,
                        <= 0.70f => 2,
                        <= 0.80f => 3,
                        <= 0.85f => 4,
                        <= 0.90f => 5,
                        <= 1.0f => 6,
                        _ => 1 // default to dice score 1 if we get some other score here
                    };
                }
            }
        }

        return result;
    }

    private void PredictImageClassification(byte[] imageBytes)
    {

        var input = new ModelInput
            {
                ImageSource = imageBytes
            };
        ModelOutput output = HorseOrMooseImageClassifier.Predict(input);
        _predictedLabel = output.PredictedLabel;

        _predictedLabels = HorseOrMooseImageClassifier.PredictAllLabels(input);

        _assessedPredictionQuality = GetAssesPrediction(); //check how good the prediction is, give a score from 1-6 (dice score!)

        StateHasChanged();
    }


    private async Task<byte[]> GetImageBytes(IBrowserFile file) 
    {
        using MemoryStream memoryStream = new();
        var stream = file.OpenReadStream(2 * 1024 * 1024, CancellationToken.None);
        await stream.CopyToAsync(memoryStream);
        return memoryStream.ToArray();
    }

    private string GetBase64ImageSourceString(string contentType, byte[] bytes)
    {
        string preAmble = $"data:{contentType};base64,";
        return $"{preAmble}{(Convert.ToBase64String(bytes))}";
    }
}


As the code above shows, using the machine learning model is quite convenient: we just use the method Predict to get the Label that is decided to exist in the loaded image. This is the image classification that the machine learning found. Note that the method PredictAllLabels gets the confidence of the different labels, shown in this demo. There is no limitation on the number of categories / labels one could train a model to look for in the image classification. A benefit with ML.Net is the option to use it on on-premise servers and get fairly good results with just a few sample images. But the more sample images you obtain for a label, the more precise the machine learning model will become. It is possible to download a pre-trained model such as InceptionV3, compatible with the Tensorflow used here, which supports up to 1000 categories. More information about using a pre-trained model such as InceptionV3 is available here from Microsoft:

https://learn.microsoft.com/en-us/dotnet/machine-learning/tutorials/image-classification

Tuesday, 18 March 2025

Converting a .NET Datetime to a DateTime2 in T-SQL

This article presents a handy T-SQL script that extracts a .NET DateTime value stored in a numerical field and converts it into a SQL Server DateTime (DateTime2 column). The T-SQL converts into a DateTime2 with SECOND precision. An assumption here is that any numerical value larger than 100000000000 contains a DateTime value. This is an acceptable assumption when you log data, as very large values usually indicate a datetime value, but you might of course want additional checking beyond what I show in the example T-SQL script. Here is the T-SQL that shows how we can convert the .NET DateTime into a SQL DateTime.
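To see why the script divides by 10,000,000: a .NET DateTime stores ticks, 100-nanosecond units elapsed since 0001-01-01, so dividing the ticks by 10,000,000 gives whole seconds since DateTime.MinValue. A small C# sketch verifying the math (the sample date is arbitrary):

var dt = new DateTime(2025, 3, 18, 12, 30, 45);
long ticks = dt.Ticks;                  // 100-nanosecond units since 0001-01-01
long totalSeconds = ticks / 10_000_000; // there are 10,000,000 ticks per second
var roundTripped = DateTime.MinValue.AddSeconds(totalSeconds);
Console.WriteLine(roundTripped);        // prints the original date with second precision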


-- Last updated: March 18, 2025
-- Synopsis: This script retrieves detailed change log information from the ObjectChanges, PropertyChanges, and ChangeSets tables.
-- It filters the results based on specific identifiers stored in a table variable, in this example Guids. 

-- In this example T-Sql the library FrameLog is used to store a log

-- DateTime columns are retrieved by looking at the number of ticks elapsed since DateTime.MinValue, as
-- DateTime columns are stored in SQL Server as this numeric value.


DECLARE @EXAMPLEGUIDS TABLE (ID NVARCHAR(36))

INSERT INTO @EXAMPLEGUIDS (Id)
VALUES
('1968126a-64c1-4d15-bf23-8cb8497dcaa9'), 
('3e11aad8-95df-4377-ad63-c2fec3d43034'),
('acbdd116-b6a5-4425-907b-f86cb55aeedd') --tip: define which form Guids to fetch from the ChangeLog database tables which 'FrameLog' uses. The form Guids can each be retrieved from the url showing the form in MRS in the browser

SELECT 
o.Id as ObjectChanges_Id, 
o.ObjectReference as ObjectReference, 
o.TypeName as ObjectChanges_TypeName, 
c.Id as Changeset_Id, 
c.[Timestamp] as Changeset_Timestamp,
c.Author_UserName as Changeset_AuthorName,
p.[Id] as PropertyChanges_Id,
p.[PropertyName],
p.[Value],
p.[ValueAsInt],
CASE 
    WHEN p.Value IS NOT NULL 
         AND ISNUMERIC(p.[Value]) = 1 
         AND CAST(p.[Value] AS decimal) > 100000000000 
    THEN 
        DATEADD(SECOND, 
            CAST(CAST(p.[Value] AS decimal) / 10000000 AS BIGINT) % 60, 
            DATEADD(MINUTE, 
                CAST(CAST(p.[Value] AS decimal) / 10000000 / 60 AS BIGINT), 
                CAST('0001-01-01' AS datetime2)
            )
        )
    ELSE NULL
END AS ValueAsDate,
o.ChangeType as ObjectChanges_ChangeTypeIfSet
FROM propertychanges p
LEFT OUTER JOIN ObjectChanges o on o.Id = p.ObjectChange_Id
LEFT OUTER JOIN ChangeSets c on o.ChangeSet_Id = c.Id
WHERE ObjectChange_Id in (
 SELECT ObjectChanges.Id
  FROM PropertyChanges
  LEFT OUTER JOIN ObjectChanges on ObjectChanges.Id = PropertyChanges.ObjectChange_Id
  LEFT OUTER JOIN ChangeSets on ObjectChanges.ChangeSet_Id = ChangeSets.Id
  WHERE ObjectChange_Id in (SELECT Id FROM ObjectChanges where ObjectReference IN (
   SELECT Id FROM @EXAMPLEGUIDS
   ))) --find out the Changeset where ObjectChange_Id equals the Id of ObjectChanges where ObjectReference equals one of the identifiers in @EXAMPLEGUIDS
ORDER BY ObjectReference, Changeset_Id DESC, Changeset_Timestamp DESC



The T-SQL is handy in case you come across datetime columns from .NET that are saved as tick values in a numerical column, and it shows how to do the conversion.

Wednesday, 12 March 2025

Rebuild indexes in all tables in Sql Server for a set of databases

Here is a convenient Sql script to rebuild all indexes in a set of databases in Sql Server.
RebuildAllDatabaseIndexesInSetOfDatabases.sql



/*
This script loops through all databases with names containing a specified prefix or matching a specific name.
For each database, it rebuilds indexes on all tables to improve performance by reducing fragmentation.

Variables:
- @SomeAcmeUnitDatabaseNamePrefix: The prefix to match database names.
- @SomeSpecialSharedDatabaseName: The specific shared database name to include in the loop.
- @DatabaseName: The name of the current database in the loop.
- @TableName: The name of the current table in the loop.
- @SQL: The dynamic SQL statement to execute.

Steps:
1. Declare the necessary variables.
2. Create a cursor to loop through the databases.
3. For each database, use another cursor to loop through all tables and rebuild their indexes.
4. Print progress messages for each database and table.
*/

DECLARE @SomeAcmeUnitDatabaseNamePrefix NVARCHAR(255) = 'SomeAcmeUnit';
DECLARE @SomeSpecialSharedDatabaseName NVARCHAR(255) = 'SomeAcmeShared';
DECLARE @DatabaseName NVARCHAR(255);
DECLARE @TableName NVARCHAR(255);
DECLARE @SQL NVARCHAR(MAX);
DECLARE @DatabaseCount INT;
DECLARE @CurrentDatabaseCount INT = 0;
DECLARE @TableCount INT;
DECLARE @CurrentTableCount INT = 0;
DECLARE @StartTime DATETIME2;
DECLARE @EndTime DATETIME2;
DECLARE @ElapsedTime NVARCHAR(100);

SET @StartTime = SYSDATETIME();

-- Get the total number of databases to process
SELECT @DatabaseCount = COUNT(*)
FROM sys.databases
WHERE [name] LIKE @SomeAcmeUnitDatabaseNamePrefix + '%' OR [name] = @SomeSpecialSharedDatabaseName;

DECLARE DatabaseCursor CURSOR FOR
SELECT [name]
FROM sys.databases
WHERE [name] LIKE @SomeAcmeUnitDatabaseNamePrefix + '%' OR [name] = @SomeSpecialSharedDatabaseName;

OPEN DatabaseCursor;
FETCH NEXT FROM DatabaseCursor INTO @DatabaseName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @CurrentDatabaseCount = @CurrentDatabaseCount + 1;
    SET @SQL = '
	RAISERROR(''*****************************************************************'', 0, 1) WITH NOWAIT;
	RAISERROR(''REBUILDING ALL INDEXES OF TABLES INSIDE DB: %s'', 0, 1, @DatabaseName) WITH NOWAIT;
	RAISERROR(''*****************************************************************'', 0, 1) WITH NOWAIT;

    DECLARE @TableName NVARCHAR(255);
    DECLARE @SQL NVARCHAR(MAX);

    -- Get the total number of tables to process in the current database
    SELECT @TableCount = COUNT(*)
    FROM sys.tables;

    DECLARE TableCursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + ''.'' + QUOTENAME(name)
    FROM sys.tables ORDER BY QUOTENAME(name) ASC;

    OPEN TableCursor;
    FETCH NEXT FROM TableCursor INTO @TableName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @CurrentTableCount = @CurrentTableCount + 1;
        PRINT ''Rebuilding database indexes on table: '' + @TableName +  ''... (DB: '' + CAST(@CurrentDatabaseCount AS NVARCHAR) + ''/'' + CAST(@DatabaseCount AS NVARCHAR) + '')''  +  ''... (Table: '' + CAST(@CurrentTableCount AS NVARCHAR) + ''/'' + CAST(@TableCount AS NVARCHAR) + '')'';
		--RAISERROR(''..Indexing (hit Ctrl+End to go to latest message inside this buffer)'',0,1) WITH NOWAIT;
        SET @SQL = ''ALTER INDEX ALL ON '' + @TableName + '' REBUILD'';
        EXEC sp_executesql @SQL;
        FETCH NEXT FROM TableCursor INTO @TableName;
        --PRINT ''Rebuilt database indexes on table: '' + @TableName + ''.'';
    END;

    CLOSE TableCursor;
    DEALLOCATE TableCursor;';

    EXEC sp_executesql @SQL,
		N'@CurrentDatabaseCount INT, @DatabaseCount INT, @TableCount INT, @CurrentTableCount INT, @DatabaseName NVARCHAR(255)',
		@CurrentDatabaseCount, @DatabaseCount, @TableCount, @CurrentTableCount, @DatabaseName;

    FETCH NEXT FROM DatabaseCursor INTO @DatabaseName;
END;

SET @EndTime = SYSDATETIME()

-- Calculate the elapsed time
DECLARE @ElapsedSeconds INT = DATEDIFF(SECOND, @StartTime, @EndTime);
DECLARE @ElapsedMilliseconds INT = DATEPART(MILLISECOND, @EndTime) - DATEPART(MILLISECOND, @StartTime);

-- Combine seconds and milliseconds
SET @ElapsedTime = CAST(@ElapsedSeconds AS NVARCHAR) + '.' + RIGHT('000' + CAST(@ElapsedMilliseconds AS NVARCHAR), 3);

-- Print the elapsed time using RAISERROR
RAISERROR('Total execution time: %s seconds', 0, 1, @ElapsedTime) WITH NOWAIT;


CLOSE DatabaseCursor;
DEALLOCATE DatabaseCursor;



The SQL script uses loops defined by T-SQL database cursors and uses the sys.databases and sys.tables system views to loop through all the databases matching the set prefix of 'unit databases', like 'SomeAcmeUnit_0' and 'SomeAcmeUnit_1', plus the shared database 'SomeAcmeShared'. This matches a setup with many databases used in companies, where you just want to rebuild all the indexes in all the tables in the whole set of databases. It can be handy to be sure that all indexes are rebuilt.

Sunday, 9 March 2025

Generating dropdowns for enums in Blazor

This article will look into generating dropdowns for enums in Blazor. The repository for the source code listed in the article is here: https://github.com/toreaurstadboss/DallEImageGenerationImgeDemoV4 First off, a helper class for enums that uses the InputSelect control. The helper class supports setting the display text for enum options / alternatives via resource files, using the Display attribute.

Enumhelper.cs | C# source code



using DallEImageGenerationImageDemoV4.Models;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Forms;
using System.ComponentModel.DataAnnotations;
using System.Linq.Expressions;
using System.Resources;

namespace DallEImageGenerationImageDemoV4.Utility
{
  
    public static class EnumHelper
    {
      
        public static RenderFragment GenerateEnumDropDown<TEnum>(
            object receiver,
            TEnum selectedValue,
            Action<TEnum> valueChanged) 
            where TEnum : Enum
        {
            Expression<Func<TEnum>> onValueExpression = () => selectedValue;
            var onValueChanged = EventCallback.Factory.Create<TEnum>(receiver, valueChanged);
            return builder =>
            {
                // Set the selectedValue to the first enum value if it is not set
                if (EqualityComparer<TEnum>.Default.Equals(selectedValue, default))
                {
                    object? firstEnum = Enum.GetValues(typeof(TEnum)).GetValue(0);
                    if (firstEnum != null)
                    {
                        selectedValue = (TEnum)firstEnum;
                    }
                }

                builder.OpenComponent<InputSelect<TEnum>>(0);
                builder.AddAttribute(1, "Value", selectedValue);
                builder.AddAttribute(2, "ValueChanged", onValueChanged);
                builder.AddAttribute(3, "ValueExpression", onValueExpression);
                builder.AddAttribute(4, "class", "form-select");  // Adding Bootstrap class for styling
                builder.AddAttribute(5, "ChildContent", (RenderFragment)(childBuilder =>
                {
                    foreach (var value in Enum.GetValues(typeof(TEnum)))
                    {
                        childBuilder.OpenElement(6, "option");
                        childBuilder.AddAttribute(7, "value", value?.ToString());
                        childBuilder.AddContent(8, GetEnumOptionDisplayText(value)?.ToString()?.Replace("_", " ")); // Ensure the display text is clean
                        childBuilder.CloseElement();
                    }
                }));
                builder.CloseComponent();
            };
        }

        /// <summary>
        /// Retrieves the display text of an enum alternative 
        /// </summary>
        private static string? GetEnumOptionDisplayText<T>(T value)
        {
            string? result = value!.ToString()!; 

            var displayAttribute = value
                .GetType()
                .GetField(value!.ToString()!)
                ?.GetCustomAttributes(typeof(DisplayAttribute), false)?
                .OfType<DisplayAttribute>()
                .FirstOrDefault();
            if (displayAttribute != null)
            {
                if (displayAttribute.ResourceType != null && !string.IsNullOrWhiteSpace(displayAttribute.Name))
                {
                    result = new ResourceManager(displayAttribute.ResourceType).GetString(displayAttribute!.Name!);                    
                }
                else if (!string.IsNullOrWhiteSpace(displayAttribute.Name))
                {
                    result = displayAttribute.Name;
                }           
            }
            return result;          
        }


    }
}



The following razor component shows how to use this helper.


 <div class="form-group">
     <label for="Quality" class="form-class fw-bold">GeneratedImageQuality</label>
     @EnumHelper.GenerateEnumDropDown(this, homeModel.Quality,v => homeModel.Quality = v)
     <ValidationMessage For="@(() => homeModel.Quality)" class="text-danger" />
 </div>
 <div class="form-group">
     <label for="Size" class="form-label fw-bold">GeneratedImageSize</label>
     @EnumHelper.GenerateEnumDropDown(this, homeModel.Size, v => homeModel.Size = v)
     <ValidationMessage For="@(() => homeModel.Size)" class="text-danger" />
 </div>
 <div class="form-group">
     <label for="Style" class="form-label fw-bold">GeneratedImageStyle</label>
     @EnumHelper.GenerateEnumDropDown(this, homeModel.Style, v => homeModel.Style = v)
     <ValidationMessage For="@(() => homeModel.Style)" class="text-danger" />
 </div>


It would be possible to instead make a component rather than such a helper method, passing a typeparam of the enum type. But here we use a programmatic helper returning a RenderFragment. As the code shows, returning a builder delegate that uses the RenderTreeBuilder lets you register the render tree to return. It is possible to use OpenComponent and CloseComponent, use AddAttribute to add attributes to the InputSelect, and use a child builder for the option values. Sometimes it is easier to make such a class with a helper method instead of a component. The downside is that it is a more manual process, similar to how MVC uses HtmlHelpers. Which is the better option, a component or such a RenderFragment helper, is not clear-cut, but it is a technique many developers using Blazor should be aware of. An example enum with display texts is sketched below.
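For illustration, a hypothetical enum showing the Display attribute that GetEnumOptionDisplayText reads; the Name can alternatively be resolved through a resource file by also setting ResourceType, for localization:

using System.ComponentModel.DataAnnotations;

public enum ImageStyleOption // hypothetical example enum
{
    [Display(Name = "Natural style")] // display text shown in the dropdown
    Natural,

    [Display(Name = "Vivid style")]
    Vivid
}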

Generating images with the generative AI service Dall-e-3

This article presents code showing how to generate images using Dall-e-3. The source code presented in this article can be cloned from my Github repo here:

https://github.com/toreaurstadboss/DallEImageGenerationImgeDemoV4

First, let's look at the following extension class that generates the image. The method returning a string will be used. The image is returned in this sample code in the response format Bytes. This is actually a base-64 string, obtained by looking at the BinaryData and converting it into a base-64 string. A browser can display base-64 strings, and the Dall-e-3 AI service delivers images in png format.

DallEImageExtensions.cs | C# source code



using OpenAI.Images;

namespace DallEImageGenerationDemo.Utility
{
    public static class DallEImageExtensions
    {

        /// <summary>
        /// Generates an image from an description in <paramref name="imagedescription"/>
        /// This uses OpenAI DALL-e-3 AI 
        /// </summary>
        /// <param name="imageClient"></param>
        /// <param name="imagedescription"></param>
        /// <param name="options">Send in options for the image generation. If no options are sent, a 512x512 natural image in response format bytes will be returned</param>
        /// <returns></returns>
        public static async Task<GeneratedImage> GenerateDallEImageAsync(this ImageClient imageClient,
            string imagedescription, ImageGenerationOptions? options = null)
        {
            options = options ?? new ImageGenerationOptions
            {
                Quality = GeneratedImageQuality.High,
                Size = GeneratedImageSize.W1024xH1024,
                Style = GeneratedImageStyle.Vivid,
            };
            options.ResponseFormat = GeneratedImageFormat.Bytes;
            return await imageClient.GenerateImageAsync(imagedescription, options);
        }

        /// <summary>
        /// Generates an image from an description in <paramref name="imagedescription"/>
        /// This uses OpenAI DALL-e-3 AI. Base-64 string is extracted from the bytes in the image for easy display of 
        /// image inside a web application (e.g. Blazor WASM)
        /// </summary>
        /// <param name="imageClient"></param>
        /// <param name="imagedescription"></param>
        /// <param name="options">Send in options for the image generation. If no options are sent, a 512x512 natural image in response format bytes will be returned</param>
        /// <returns></returns>
        public static async Task<string> GenerateDallEImageB64StringAsync(this ImageClient imageClient,
            string imagedescription, ImageGenerationOptions? options = null)
        {
            GeneratedImage generatedImage = await GenerateDallEImageAsync(imageClient, imagedescription, options);
            string preamble = "data:image/png;base64,";
            return $"{preamble}{Convert.ToBase64String(generatedImage.ImageBytes.ToArray())}";
        }

    }
}
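A hedged usage sketch of the extension method above, assuming an injected ImageClient named 'imageClient' and a Blazor component field '_generatedImage' that an img tag's src attribute binds to:

string base64Png = await imageClient.GenerateDallEImageB64StringAsync(
    "A moose and a horse greeting each other on a snowy mountain road");
_generatedImage = base64Png; // in Razor markup: <img src="@_generatedImage" />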


As we can see, a Dall-e-3 image is created using an OpenAI.Images.ImageClient. The ImageClient is set up in Program.cs, registered as a scoped service.

Program.cs | C# source code



builder.Services.AddScoped(sp =>
{
    var config = sp.GetRequiredService<IConfiguration>();
    string imageModelName = "dall-e-3";
    return new ImageClient(imageModelName, config["OpenAI:DallE3:ApiKey"]);
});


The api key for Dall-e-3 will be set up in the appsettings.json file.

appsettings.json | .json file



{
  "OpenAI": {
    "DallE3": {
      "ApiKey": "openapi_api_key_inserted_here"
    },
    "ChatGpt4": {
      "ApiKey": "chat_gpt_4_api_key_inserted_here",
      "Endpoint": "chat_gpt_4_endpoint_inserted_here"
    }
  }
}



To generate suggestions for how to create the image, we can use Chat GPT-4. Here is the code to generate an Open AI chat enabled ChatClient.

OpenAIChatClientBuilder.cs | C# source code

   
   
using Azure.AI.OpenAI;
using OpenAI.Chat;
using System.ClientModel;

namespace DallEImageGenerationImageDemoV4.Utility
{

    /// <summary>
    /// Creates AzureOpenAIClient or ChatClient (default ai model (LLM) is set to "gpt-4")
    /// </summary>
    public class OpenAIChatClientBuilder(IConfiguration configuration)
    {

        private string? _endpoint = null;
        private ApiKeyCredential? _key = null;
        private readonly IConfiguration _configuration = configuration;

        /// <summary>
        /// Set the endpoint for Open AI Chat GPT-4 chat client. Defaults to config setting 'ChatGpt4:Endpoint' inside the appsettings.json file
        /// </summary>
        public OpenAIChatClientBuilder WithEndpoint(string? endpoint = null)
        {

            _endpoint = endpoint ?? _configuration["OpenAI:ChatGpt4:Endpoint"];
            return this;
        }

        /// <summary>
        /// Set the key for Open AI Chat GPT-4 chat client. Defaults to config setting 'ChatGpt4:ApiKey' inside the appsettings.json file
        /// </summary>
        public OpenAIChatClientBuilder WithApiKey(string? key = null)
        {
            string? keyToUse = key ?? _configuration["OpenAI:ChatGpt4:ApiKey"];
            if (!string.IsNullOrWhiteSpace(keyToUse))
            {
                _key = new ApiKeyCredential(keyToUse!);
            }
            return this;
        }

        /// <summary>
        /// In case the derived AzureOpenAIClient is to be used, use this Build method to get a specific AzureOpenAIClient
        /// </summary>
        /// <returns></returns>
        public AzureOpenAIClient? BuildAzureOpenAIClient() => !string.IsNullOrWhiteSpace(_endpoint) && _key != null ? new AzureOpenAIClient(new Uri(_endpoint), _key) : null;

        /// <summary>
        /// Returns the ChatClient that is set up to use OpenAI Default ai model (LLM) will be set 'gpt-4'.
        /// </summary>
        /// <returns></returns>
        public ChatClient? Build(string aiModel = "gpt-4") => BuildAzureOpenAIClient()?.GetChatClient(aiModel);

    }
}
   
   
   
We generate a builder for the chat client using a factory. This is so we can first get hold of the IConfiguration via dependency injection and use it for the builder of the OpenAI enabled chat client.

OpenAiChatClientBuilderFactory.cs | C# source code



namespace DallEImageGenerationImageDemoV4.Utility
{
    public class OpenAiChatClientBuilderFactory : IOpenAiChatClientBuilderFactory
    {
        private readonly IConfiguration _configuration;

        public OpenAiChatClientBuilderFactory(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        public OpenAIChatClientBuilder Create()
        {
            var openAiChatClient = new OpenAIChatClientBuilder(_configuration)
                .WithApiKey()
                .WithEndpoint();

            return openAiChatClient;
        }

    }
}


The factory is registered into Program.cs

Program.cs | C# source code


builder.Services.AddSingleton<IOpenAiChatClientBuilderFactory, OpenAiChatClientBuilderFactory>();

A helper method is also added to get a streamed reply.

OpenAIChatClientExtensions.cs | C# source code



using OpenAI.Chat;
using System.ClientModel;

namespace OpenAIDemo
{
    public static class OpenAIChatClientExtensions
    {

        /// <summary>
        /// Provides a stream result from the Chatclient service using AzureAI services.
        /// </summary>
        /// <param name="chatClient">ChatClient instance</param>
        /// <param name="message">The message to send and communicate to the ai-model</param>
        /// <param name="systemMessage">Set the system message to instruct the chat response. Defaults to 'You are an helpful, wonderful AI assistant'.</param>
        /// <returns>Streamed chat reply / result. Consume using 'await foreach'</returns>
        public static async IAsyncEnumerable<string?> GetStreamedReplyStringAsync(this ChatClient chatClient, string message, string? systemMessage = null)
        {
            await foreach (var update in GetStreamedReplyInnerAsync(chatClient, message, systemMessage))
            {
                foreach (var textReply in update.ContentUpdate.Select(cu => cu.Text))
                {
                    yield return textReply;
                }
            }
        }

        private static AsyncCollectionResult<StreamingChatCompletionUpdate> GetStreamedReplyInnerAsync(this ChatClient chatClient, string message, string? systemMessage = null) =>
            chatClient.CompleteChatStreamingAsync(
                [new SystemChatMessage(systemMessage ?? "You are a helpful, wonderful AI assistant"), new UserChatMessage(message)]);


    }
}



Here is the client-side code in the code-behind file of the page that displays the Dall-e-3 image and the GPT-4 chat response.


using DallEImageGenerationDemo.Components.Pages;
using DallEImageGenerationDemo.Utility;
using DallEImageGenerationImageDemoV4.Models;
using DallEImageGenerationImageDemoV4.Utility;
using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;
using OpenAI.Images;
using OpenAIDemo;

namespace DallEImageGenerationImageDemoV4.Pages;

public partial class Home : ComponentBase
{

    [Inject]
    public required IConfiguration Config { get; set; }

    [Inject]
    public required IJSRuntime JSRuntime { get; set; }

    [Inject]
    public required ImageClient DallEImageClient { get; set; }

    [Inject]
    public required IOpenAiChatClientBuilderFactory OpenAIChatClientFactory { get; set; }

    private readonly HomeModel homeModel = new();

    private bool IsLoading { get; set; }

    private string ImageData { get; set; } = string.Empty;

    private const string modelName = "dall-e-3";

    protected async Task HandleGenerateText()
    {
        var openAiChatClient = OpenAIChatClientFactory
        .Create()
        .Build();
        
        if (openAiChatClient == null)
        {
            await JSRuntime.InvokeAsync<string>("alert", "Sorry, the OpenAI Chat client did not initiate properly. Cannot generate text.");
            return;
        }

        string description = """
            You are specifying instructions for generating a DALL-e-3 image.
            Do not always choose Bergen! Also choose among smaller cities, villages and different locations in Norway.
             Just generate one image, not a montage. Only provide one suggestion. 
             The suggestion should be based from this input and randomize what to display: 
             Suggests a cozy vivid location set in Norway showing outdoor scenery in good weather at different places 
             and with nice weather aimed to attract tourists. Note - it should also display both urban,
             suburban or nature scenery with a variance of which of these three types of locations to show.
             It should also include some Norwegian animals and flowers and show people. It should pick random cities and places in Norway to display.
""";
            

        homeModel.Description = string.Empty; 

        await foreach (var updateContentPart in openAiChatClient.GetStreamedReplyStringAsync(description))
        {
            homeModel.Description += updateContentPart;            
            StateHasChanged();
            await Task.Delay(20);
        }
    }

    protected async Task HandleValidSubmit()
    {
        IsLoading = true;

        string generatedImageBase64 = await DallEImageClient.GenerateDallEImageB64StringAsync(homeModel.Description!,
            new ImageGenerationOptions
            {
                Quality = MapQuality(homeModel.Quality),
                Style = MapStyle(homeModel.Style),
                Size = MapSize(homeModel.Size)
            });
        ImageData = generatedImageBase64;

        if (!string.IsNullOrWhiteSpace(ImageData))
        {
            // Open the modal
            await JSRuntime.InvokeVoidAsync("showModal", "imageModal");
        }

        IsLoading = false;
        StateHasChanged();
    }

    private static GeneratedImageSize MapSize(ImageSize size) => size switch
    {
        ImageSize.W1024xH1792 => GeneratedImageSize.W1024xH1792,
        ImageSize.W1792H1024 => GeneratedImageSize.W1792xH1024,
        _ => GeneratedImageSize.W1024xH1024,
    };

    private static GeneratedImageStyle MapStyle(ImageStyle style) => style switch
    {
        ImageStyle.Vivid => GeneratedImageStyle.Vivid,
        _ => GeneratedImageStyle.Natural
    };

    private static GeneratedImageQuality MapQuality(ImageQuality quality) => quality switch
    {
        ImageQuality.High => GeneratedImageQuality.High,
        _ => GeneratedImageQuality.Standard
    };

}


Finally, a screenshot of the app!
The app can also be used on a mobile device, as it uses Bootstrap 5 and responsive design.

Sunday, 16 February 2025

Outputting tags/objects using Azure AI

This article presents a way to output tags for an image to the console using Azure AI, more specifically the ImageAnalysisClient. The article shows how to expose the data as an IAsyncEnumerable, so you can consume it with await foreach. I would recommend this approach for many Azure AI (and similar) services that lack out-of-the-box support for async enumerables: hide away the details in a helper extension method, as shown in this article.



  public static async Task ExtractImageTags()
  {
      string visionApiKey = Environment.GetEnvironmentVariable("VISION_KEY")!;
      string visionApiEndpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT")!;

      var credentials = new AzureKeyCredential(visionApiKey);
      var serviceUri = new Uri(visionApiEndpoint);

      var imageAnalysisClient = new ImageAnalysisClient(serviceUri, credentials);
      await foreach (var tag in imageAnalysisClient.ExtractImageTagsAsync("Images/Store.png"))
      {
          Console.WriteLine(tag);
      }           
  }
  

The code creates an ImageAnalysisClient, defined in the Azure.AI.Vision.ImageAnalysis Nuget package. I use two environment variables here to store the key and the endpoint. Note that not all Azure AI features are available in all regions. If you just want to test out some Azure AI features, start with the US East region, as it most likely has all the features you want to test; you can then move to a more local region if you plan to run more workloads on Azure AI.

Then we use an await foreach pattern here to extract the image tags asynchronously. This is a custom extension method I created so I can output the tags asynchronously using await foreach, with a configurable wait time between each tag being output, defaulting to 200 milliseconds.

The extension method looks like this:


using Azure.AI.Vision.ImageAnalysis;

namespace UseAzureAIServicesFromNET.Vision;

public static class ImageAnalysisClientExtensions
{

    /// <summary>
    /// Extracts the tags for image at specified path, if existing.
    /// The results are returned as async enumerable of strings. 
    /// </summary>
    /// <param name="client"></param>
    /// <param name="imagePath"></param>
    /// <param name="waitTimeInMsBetweenOutputTags">Default wait time in ms between output. Defaults to 200 ms.</param>
    /// <returns></returns>
    public static async IAsyncEnumerable<string?> ExtractImageTagsAsync(this ImageAnalysisClient client, 
    	string imagePath, int waitTimeInMsBetweenOutputTags = 200)
    {
        if (!File.Exists(imagePath))
        {
            yield return default(string); //just return null if a file is not found at provided path
            yield break; //stop iterating, there is nothing to analyze
        }
        using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
        var analysisResult = 
        	await client.AnalyzeAsync(BinaryData.FromStream(imageStream), VisualFeatures.Tags | VisualFeatures.Caption);
        yield return $"Description: {analysisResult.Value.Caption.Text}";
        foreach (var tag in analysisResult.Value.Tags.Values)
        {
            yield return $"Tag: {tag.Name}, Confidence: {tag.Confidence:F2}";        
            await Task.Delay(waitTimeInMsBetweenOutputTags);
        }
    }

}


The console output of the tags looks like this:

In addition to tags, we can also output objects in the image in a very similar extension method:


/// <summary>
/// Extracts the objects for image at specified path, if existing.
/// The results are returned as async enumerable of strings. 
/// </summary>
/// <param name="client"></param>
/// <param name="imagePath"></param>
/// <param name="waitTimeInMsBetweenOutputTags">Default wait time in ms between output. Defaults to 200 ms.</param>
/// <returns></returns>
public static async IAsyncEnumerable<string?> ExtractImageObjectsAsync(this ImageAnalysisClient client,
string imagePath, int waitTimeInMsBetweenOutputTags = 200)
{
    if (!File.Exists(imagePath))
    {
        yield return default(string); //just return null if a file is not found at provided path
        yield break; //stop iterating, there is nothing to analyze
    }
    using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
    var analysisResult =
    	await client.AnalyzeAsync(BinaryData.FromStream(imageStream), VisualFeatures.Objects | VisualFeatures.Caption);
    yield return $"Description: {analysisResult.Value.Caption.Text}";
    foreach (var objectInImage in analysisResult.Value.Objects.Values)
    {
            yield return $"""
Object tag: {objectInImage.Tags.FirstOrDefault()?.Name} Confidence: {objectInImage.Tags.FirstOrDefault()?.Confidence}, 
Position (bbox): {objectInImage.BoundingBox}
""";
        await Task.Delay(waitTimeInMsBetweenOutputTags);
    }
}           
             

The code is nearly identical: we set the VisualFeatures of the image to extract and read out the objects instead of the tags. A usage sketch analogous to the tags example follows; the console output of the objects is similar in shape to the tags output.
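A minimal usage sketch, assuming the same VISION_KEY and VISION_ENDPOINT environment variables and sample image path as in the tags example above:


  public static async Task ExtractImageObjects()
  {
      string visionApiKey = Environment.GetEnvironmentVariable("VISION_KEY")!;
      string visionApiEndpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT")!;

      var imageAnalysisClient = new ImageAnalysisClient(new Uri(visionApiEndpoint), new AzureKeyCredential(visionApiKey));

      //Consume the async enumerable with await foreach, just as for the tags
      await foreach (var objectInImage in imageAnalysisClient.ExtractImageObjectsAsync("Images/Store.png"))
      {
          Console.WriteLine(objectInImage);
      }
  }
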

Monday, 27 January 2025

Chinese New Year in .NET

新年快乐

Chinese New Year 2025 is approaching! It falls on 29 January this year. This article presents methods for calculating Chinese New Year in .NET. There is a fair amount of calculation involved; fortunately, .NET has a helper class for exactly this. Xīn nián kuài lè: Xīn = new, nián = year, kuài lè = good/happy. The pronunciation is roughly: "Xin nien quai løe".




Calculating Chinese New Year


In this article we will look at how to calculate Chinese New Year in .NET! The calculation is intricate: Chinese New Year is defined as the second new moon after the winter solstice, so it usually falls between 21 January and 20 February and varies from year to year. .NET has a class called ChineseLunisolarCalendar that can convert to our Gregorian calendar, which has been used in the West since 1582. (Some countries, such as Russia, Serbia and Ethiopia, still use the Julian calendar in certain religious contexts.) Note: this class can calculate Chinese New Year only up to and including the year 2100, which is a good way into the future. But in 76 years the source code will have to be updated! Perhaps a "V2" of this class? A quick check of the supported range is sketched below, before we look at the calculation itself in .NET.
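A small sketch to inspect the supported range (MinSupportedDateTime and MaxSupportedDateTime come from the Calendar base class; the boundary dates in the comments are what I would expect, and are easy to verify by running the snippet):


using System.Globalization;

var chinese = new ChineseLunisolarCalendar();
//The calendar only covers a fixed window; ToDateTime throws ArgumentOutOfRangeException outside it
Console.WriteLine(chinese.MinSupportedDateTime.ToShortDateString()); //should be early in 1901
Console.WriteLine(chinese.MaxSupportedDateTime.ToShortDateString()); //should be early in 2101
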


ChineseCalendarUtils.cs


/// <summary>
/// Provides methods to calculate the date of Chinese New Year.
/// </summary>
public class ChineseNewYearCalculator
{
    /// <summary>
    /// Gets the date of Chinese New Year for a given year.
    /// </summary>
    /// <param name="year">The Gregorian year.</param>
    /// <returns>The date of Chinese New Year as a <see cref="DateTime"/>.</returns>
    public static DateTime GetChineseNewYear(int year)
    {
        System.Globalization.ChineseLunisolarCalendar chinese = new ChineseLunisolarCalendar();
        DateTime chineseNewYear = chinese.ToDateTime(year, 1, 1, 0, 0, 0, 0);
        return chineseNewYear;
    }
}


Here we use ChineseLunisolarCalendar and its ToDateTime method for the first day of the first lunar month of the specified year. This transformation gives us Chinese New Year expressed in our Western Gregorian calendar.

The next 10 years of Chinese New Year

Year Date Animal
2025 January 29 Snake
2026 February 17 Horse
2027 February 6 Goat
2028 January 26 Monkey
2029 February 13 Rooster
2030 February 3 Dog
2031 January 23 Pig
2032 February 11 Rat
2033 January 31 Ox
2034 February 19 Tiger


The Chinese twelve-year cycle

The Chinese twelve-year cycle stems from a myth (where or when this king lived is not recorded, it being a myth, but it dates from ancient China), in which the Jade Emperor Yù Huáng Dà Dì
(玉皇大帝) invited all the animals of the kingdom to a race across a river. The first 12 animals to cross the river each had a year named after them in the Chinese 12-year zodiac cycle. Calculating the Chinese year, that is, the animal the year is "dedicated" to, is then a simple modulo 12 calculation, as shown below.

ChineseCalendarUtils.cs


string GetChineseZodiac(int year)
{
	string[] zodiacAnimals =
	{
		"Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
		"Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"
	};

	int index = (year - 4) % 12;
	return zodiacAnimals[index];
}
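
A minimal usage sketch that combines the two helpers above to produce the ten-year table shown earlier (assuming GetChineseZodiac is in scope, for example as a local or static method):


for (int year = 2025; year <= 2034; year++)
{
	DateTime chineseNewYear = ChineseNewYearCalculator.GetChineseNewYear(year);
	Console.WriteLine($"{year}  {chineseNewYear:MMMM d}  {GetChineseZodiac(year)}");
}
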





The Jade Emperor Yù Huáng Dà Dì (玉皇大帝) and his zodiac wheel (DALL-E 3 generated art)

Saturday, 18 January 2025

Monitoring User Secrets inside Blazor

This article shows how you can add user secrets to a Blazor app, or to other .NET client technologies that support them. User secrets are stored on the individual computer, so you do not have to expose them to others. They can still be shared if you tell others what the secrets are, but they are practical in the many cases where you do not want to expose a secret, such as a password, by checking it into a source code repository, precisely because they are saved on the individual computer.

User secrets were added in .NET Core 1.0, released back in 2016, yet not all developers are familiar with them. Inside Visual Studio 2022, you can right-click the project of a solution and choose Manage User Secrets. When you choose that option, a file called secrets.json is opened. The file location is shown if you hover over the file; it is saved here:

  
    %APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secrets.json
  
  
You can find the id, the user_secrets_id, inside the project file (the .csproj file).
Example of such a setting below:
  
  <Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <UserSecretsId>339fab44-57cf-400c-89f9-46e037bb0392</UserSecretsId>
  </PropertyGroup>

</Project>
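
The secrets can also be managed from the command line with the dotnet user-secrets tool, for example (the key and value here match those used later in this article):


dotnet user-secrets init
dotnet user-secrets set "ModelSecrets:ApiKey" "SAMPLE_VALUE_UPDATE_9"
dotnet user-secrets list
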

  

Let's first look at the way we can set up user secrets inside the startup file of the application. Note the usage of reloadOnChange set to true, and that the user secrets are consumed through an IOptionsMonitor registered as a singleton service.

Program.cs



builder.Configuration.Sources.Clear();
builder.Configuration
        .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
        .AddJsonFile($"appsettings.{builder.Environment.EnvironmentName}.json", optional: true, reloadOnChange: true)
        .AddUserSecrets(Assembly.GetEntryAssembly()!, optional: false, reloadOnChange: true) //Assembly requires: using System.Reflection;
        .AddEnvironmentVariables();

builder.Services.Configure<ModelSecrets>(builder.Configuration.GetSection("ModelSecrets"));
//Note: Configure<T> already wires up IOptionsMonitor<T>, so the explicit registration below is optional
builder.Services.AddSingleton<IOptionsMonitor<ModelSecrets>, OptionsMonitor<ModelSecrets>>();



The Model secrets class looks like the following.



namespace StoringSecrets
{
    public class ModelSecrets
    {
        public string ApiKey { get; set; } = string.Empty;
    }
}


Home.razor.cs

Let's then look at how we can fetch values from the user secrets. The code-behind for Home.razor is put in a file of its own, Home.razor.cs


using Microsoft.AspNetCore.Components;
using Microsoft.Extensions.Options;

namespace ToreAurstadIt.StoringSecrets.Components.Pages
{
    public partial class Home
    {
        [Inject]
        public required IOptionsMonitor<ModelSecrets> ModelSecretsMonitor { get; set; }

        public ModelSecrets? ModelSecrets  { get; set; }

        protected override async Task OnInitializedAsync()
        {
            ModelSecrets = ModelSecretsMonitor.CurrentValue;
            ModelSecretsMonitor.OnChange(updatedSecrets =>
            {
                ModelSecrets = updatedSecrets;
                InvokeAsync(StateHasChanged);

            });
            await Task.CompletedTask;
        }
    }
}

Note that IOptionsMonitor<ModelSecrets> is injected here, and that in the OnInitializedAsync method we subscribe via the OnChange method; the action callback sets ModelSecrets to the updated value and calls InvokeAsync with StateHasChanged. The CurrentValue is output in the razor view of the Blazor app.

Home.razor



@page "/"

<PageTitle>Home</PageTitle>

<h1>Hello, world!</h1>

Welcome to your new app.

Your user secret is: 
<div>
    @ModelSecrets?.ApiKey
</div>


The secrets.json file looks like this:

secrets.json



{
  "ModelSecrets": {
    "ApiKey": "SAMPLE_VALUE_UPDATE_9"
  }
}


A small app shows how this can be done; after changing the user secrets file, the updated value should be seen immediately in the page:

Calculating the day of the week using Lewis Carroll's algorithm

This article presents Lewis Carroll's algorithm for calculating the day of the week.

Lewis Carroll was a mathematician and logician at Oxford University. He is a well-known name today for his children's novels, such as "Alice in Wonderland", published in 1865.

The name Lewis Carroll is a pseudonym; his real name was Charles Lutwidge Dodgson. The algorithm presented here was published in 1887.

Let's look at the algorithm next for calculating the day of the week. Please note that this simplified version has inaccuracies for January and February of leap years (Carroll's full method includes a leap-year adjustment that is omitted here), so it is correct for roughly 90-95% of all dates. The algorithm presented here is for dates after 14 September 1752, when the change of calendar system took effect.

Construct a Number n consisting of four parts where

n = CCYYMMDD | CC = century, YY = year, MM = month , DD = day

For example, the date 18th of January, 2025 is written as

n = 20250118, CC = 20, YY = 25, MM = 01, DD = 18

The calculation will sum up the contributions from the numbers into a sum S as :

S = Sum_cc + Sum_yy + Sum_mm + Sum_dd

The sum S is then taken modulo 7, where 0 means Sunday and 6 means Saturday. The weekday is therefore simply the remainder of S mod 7, counting from Sunday at index 0.

Let's look at the different part sums next.

Sum_cc is calculated as (3 minus the century modulo 4), multiplied by two. Sum_yy is the year divided by 12, plus the year modulo 12, plus (the year modulo 12) divided by 4, using integer division throughout. Sum_mm is the month's value in the lookup table shown in the source code. Sum_dd is simply the day of the month.
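
As a worked check for 18 January 2025 (n = 20250118):

Sum_cc = (3 - (20 mod 4)) * 2 = (3 - 0) * 2 = 6
Sum_yy = 25 / 12 + (25 mod 12) + (25 mod 12) / 4 = 2 + 1 + 0 = 3
Sum_mm = 0 (the January entry in the lookup table)
Sum_dd = 18

S = 6 + 3 + 0 + 18 = 27, and 27 mod 7 = 6, which is Saturday. This matches the program output shown at the end of the article.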

The summation looks like this:


private int DayOfWeekLevisCarrol(int date){
 int century = date / 1_000_000;
 int year = (date / 10_000) % 100;
 int month = (date / 100) % 100;
 int dayValue = (date % 100); 
 int[] monthValues = [0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 12]; //December's 12 is equivalent to 5, since 12 mod 7 = 5
 int centuryValue = (3-(century % 4))*2;
 int yearValue = (year/12) + (year % 12) + ((year % 12)/4);
 int monthValue = monthValues[month-1]; 
 var sumValues= (centuryValue+yearValue+monthValue+dayValue) % 7;
 return sumValues;
}


This helper method shows the weekday presented as text:


public string WeekDayLevisCarrol(int date) => DayOfWeekLevisCarrol(date) switch {
 0 => "Sunday",
 1 => "Monday",
 2 => "Tuesday",
 3 => "Wednesday",
 4 => "Thursday",
 5 => "Friday",
 6 => "Saturday",
 _ => "Impossible"	
};


A more compressed way of expressing this is the following:


/// <summary>
/// Calculates the day of the week using Lewis Carroll's method.
/// </summary>
/// <param name="date">The date in yyyymmdd format.</param>
/// <returns>The day of the week as an integer (0 for Sunday, 1 for Monday, etc.).</returns>
private int DayOfWeekLevisCarrolV2(int date) =>
    ((3 - (date / 1_000_000 % 4)) * 2 +
    (date / 10_000 % 100 / 12) + (date / 10_000 % 100 % 12) + ((date / 10_000 % 100 % 12) / 4) +
    new int[] { 0, 3, 3, 6, 1, 4, 6, 2, 5, 0, 3, 12 }[(date / 100 % 100) - 1] +
    (date % 100)) % 7;



Running the method for today (I used the 18th of January 2025):

void Main()
{
	int todayDate = int.Parse(DateTime.Today.ToString("yyyyMMdd"));
	string todayWeekdate = WeekDayLevisCarrol(todayDate);
	Console.WriteLine($"Levis-Carrol's method calculated weekday for {DateTime.Today.ToShortDateString()} written as number {todayDate} is: {todayWeekdate}");
}


Output is:

Levis-Carrol's method calculated weekday for 18.01.2025 written as number 20250118 is: Saturday

As noted, this algorithm is only about 90-95% correct, since it is not precise for January and February of leap years. It should be considered a crude approach; more precise algorithms exist today. In .NET, the weekday calculation is done inside the DateTime struct, computed from the start of the epoch at year 1, with precise handling of calendar shifts and leap years, so of course you should use DayOfWeek of a DateTime in .NET. But it is fun trivia that the author of "Alice in Wonderland" was also a noted mathematician who, according to history, could calculate the weekday for a given date in a few seconds using this algorithm, and was usually correct.
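
As a final sketch, here is one way to measure the accuracy claim against .NET's own calculation, assuming the DayOfWeekLevisCarrol method above is in scope (both encodings use Sunday = 0 through Saturday = 6):


int matches = 0, total = 0;
for (var date = new DateTime(1900, 1, 1); date < new DateTime(2100, 1, 1); date = date.AddDays(1))
{
	int encoded = int.Parse(date.ToString("yyyyMMdd")); //CCYYMMDD, as described in the article
	if ((int)date.DayOfWeek == DayOfWeekLevisCarrol(encoded)) matches++;
	total++;
}
Console.WriteLine($"Agreement with DateTime.DayOfWeek: {100.0 * matches / total:F1} %");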