Friday, 1 April 2022

GraphQL in Asp.Net Core - Creating a flexible API

More and more .NET developers have heard about GraphQL. It started as an in-house project at Facebook in 2012 to provide a flexible way of sending customized data to mobile clients. Giving the clients the possibility to query for tailored data means sending less data over the wire to mobiles with limited bandwidth. As cell phones move over to 5G networks this matters less and less (at least in urban areas with good base station coverage), but we should of course still seek to optimize our data transfer, since bandwidth is always worth saving. An added dimension is the lower cost of creating APIs, since we can tailor the data to our needs. Instead of creating methods that either return lookup ids or entire data objects, we can project only the data we need to present on the mobile clients in a meaningful way. Whatever your reason for being interested in GraphQL, this article discusses how you can get started with GraphQL in Asp.Net Core. I have prepared a demo here:
 
  https://github.com/toreaurstadboss/AspNetCore-GraphQLDemo
 
The demo repository shows a list of the tallest mountains in the municipalities of Norway. Norway is a land of mountains, and it is always good to know which mountain is the very tallest in the municipality you are visiting! (I enjoy mountain climbing and hiking now and then in my spare time.) The demo page shows a text area where you can customize the data to load. Of course, we can only load the data that is provided for us. We can also use the GraphQL UI Playground that is set up for us here.
First off, we need to grab some Nuget packages for GraphQL. We will be using Asp.Net Core 3.1 in this article.
 
        <PackageReference Include="GraphQL" Version="2.4.0" />
	<PackageReference Include="GraphQL.Server.Transports.AspNetCore" Version="3.4.0" />
	<PackageReference Include="GraphQL.Server.Transports.WebSockets" Version="3.4.0" />
	<PackageReference Include="GraphQL.Server.Ui.Playground" Version="3.5.0-alpha0046" />  
 
Then we need to add the required setup in our Startup class.
 
Startup.cs
using AspNetCore_GraphQLDemo.GraphQL;
using AspNetCore_GraphQLDemo.GraphQL.Messaging;
using Data;
using Data.Repositories;
using GraphQL;
using GraphQL.Server;
using GraphQL.Server.Ui.Playground;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.WebSockets;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Newtonsoft.Json;

namespace AspNetCore_GraphQLDemo
{
    public class Startup
    {
        private readonly IWebHostEnvironment _env;

        public Startup(IConfiguration configuration, IWebHostEnvironment env)
        {
            _env = env;
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            // If using IIS:
            services.Configure<IISServerOptions>(options =>
            {
                options.AllowSynchronousIO = true;
            });

            services.AddControllersWithViews();
            services.AddHttpContextAccessor();
            services.AddRazorPages().AddRazorRuntimeCompilation();

            services.AddDbContext<MountainDbContext>(options =>
            {
                options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection"));
            });

            services.AddScoped<IMountainRepository, MountainRepository>();
            services.AddScoped<IDependencyResolver>(s => new FuncDependencyResolver(s.GetRequiredService));
            services.AddScoped<MountainSchema>();
            services.AddSingleton<MountainMessageService>();
            services.AddSingleton<MountainDetailsDisplayedMessageService>();

            services.AddGraphQL(x =>
            {
                x.EnableMetrics = true;
                x.ExposeExceptions = _env.IsDevelopment();
                x.SetFieldMiddleware = true;
            }).AddGraphTypes(ServiceLifetime.Scoped)
              .AddUserContextBuilder(httpContext => httpContext.User)
              .AddDataLoader()
              .AddWebSockets();

            services.AddCors(options =>
            {
                options.AddPolicy(name: "MyAllowSpecificOrigins",
                    builder =>
                    {
                        builder.AllowAnyOrigin().AllowAnyMethod();
                    });
            });
        }

        //static IEnumerable<Type> GetGraphQlTypes()
        //{
        //    return typeof(Startup).Assembly
        //        .GetTypes()
        //        .Where(x => !x.IsAbstract &&
        //                    (typeof(IObjectGraphType).IsAssignableFrom(x) ||
        //                     typeof(IInputObjectGraphType).IsAssignableFrom(x)));
        //}

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseBrowserLink();
            }
            app.UseExceptionHandler(errorApp =>
            {
                errorApp.Run(async context =>
                {
                    context.Response.Redirect("/Error");
                    context.Response.StatusCode = 500;
                    var exceptionHandlerPathFeature = context.Features.Get<IExceptionHandlerPathFeature>();
                    var exception = exceptionHandlerPathFeature.Error;
                    var result = JsonConvert.SerializeObject(new { error = exception.Message });
                    context.Response.ContentType = "application/json";
                    await context.Response.WriteAsync(result);
                });
            });

            app.UseStaticFiles();
            app.UseRouting();
            app.UseCors("MyAllowSpecificOrigins");
            app.UseWebSockets();
            app.UseGraphQLWebSockets<MountainSchema>("/graphql");
            //app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapDefaultControllerRoute();
            });

            app.UseGraphQL<MountainSchema>();
            if (env.IsDevelopment())
            {
                app.UseGraphQLPlayground(new GraphQLPlaygroundOptions { });
            }
        }
    }
}
In the ConfigureServices method above we register the schema for our GraphQL API.
 
 
    services.AddScoped<MountainSchema>(); 
 
 
We also add GraphQL itself and set up WebSockets (which are needed for GraphQL subscriptions).
 
       services.AddGraphQL(x =>
                {
                    x.EnableMetrics = true;
                    x.ExposeExceptions = _env.IsDevelopment();
                    x.SetFieldMiddleware = true;
                }).AddGraphTypes(ServiceLifetime.Scoped)
                  .AddUserContextBuilder(httpContext => httpContext.User)
                  .AddDataLoader()
                  .AddWebSockets();
 
Just as a side note, you will also want to add CORS:
 
     services.AddCors(options =>
            {
                options.AddPolicy(name: "MyAllowSpecificOrigins",
                    builder =>
                    {
                        builder.AllowAnyOrigin().AllowAnyMethod();
                    });
            });
 
Inside the Configure method we also add the following to enable GraphQL:
 
           app.UseCors("MyAllowSpecificOrigins");

            app.UseWebSockets();

            app.UseGraphQLWebSockets<MountainSchema>("/graphql");

            //app.UseAuthorization();

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapDefaultControllerRoute();
            });

            app.UseGraphQL<MountainSchema>();
            if (env.IsDevelopment())
            {
                app.UseGraphQLPlayground(new GraphQLPlaygroundOptions
                {
                    
                });
            }
        }  
 
Our MountainSchema looks like this:
MountainSchema.cs
using AspNetCore_GraphQLDemo.GraphQL.Types;
using AspNetCore_GraphQLDemo.GraphQL.Types.Directives;
using GraphQL;
using GraphQL.Instrumentation;
using GraphQL.Types;

namespace AspNetCore_GraphQLDemo.GraphQL
{
    public class MountainSchema : Schema
    {
        public MountainSchema(IDependencyResolver resolver) : base(resolver)
        {
            Query = resolver.Resolve<MountainQuery>();
            Mutation = resolver.Resolve<MountainMutation>();
            Subscription = resolver.Resolve<MountainSubscription>();

            RegisterDirective(new LowercaseDirective());
            RegisterDirective(new OrderbyDirective());

            var builder = new FieldMiddlewareBuilder();
            builder.Use<LowercaseFieldsMiddleware>();
            builder.ApplyTo(this);

            builder.Use(next =>
            {
                return context =>
                {
                    return next(context).ContinueWith(x =>
                    {
                        var c = context;
                        var result = x.Result;
                        result = OrderbyQuery.OrderIfNecessary(context, result);
                        return result;
                    });
                };
            });
            builder.ApplyTo(this);

            //builder.Use<CustomGraphQlExecutor<MountainSchema>>();
            //builder.ApplyTo(this);
        }
    }
}
We pass an IDependencyResolver into the constructor (dependency injection!) and resolve the classes we need (we inherit from the Schema class). We wire the schema up to the Query, Mutation and Subscription we want and register our directives. Here is how the Query property is set:
 
MountainQuery.cs
using AspNetCore_GraphQLDemo.GraphQL.Types;
using Data;
using Data.Repositories;
using GraphQL.Types;

namespace AspNetCore_GraphQLDemo.GraphQL
{
    public class MountainQuery : ObjectGraphType
    {
        public MountainQuery(IMountainRepository mountainRepository)
        {
            Field<ListGraphType<MountainType>>("mountains",
                resolve: context => mountainRepository.GetAll());

            FieldAsync<MountainType>("mountain",
                arguments: new QueryArguments(new QueryArgument<NonNullGraphType<MountainIdInputType>> { Name = "id" }),
                resolve: async context =>
                {
                    var mountain = context.GetArgument<MountainInfo>("id");
                    var mountainFromDb = await mountainRepository.GetById(mountain.Id);
                    return mountainFromDb;
                });

            //FieldAsync<MountainType>("selectmountain",
            //    arguments: new QueryArguments(new QueryArgument(typeof(int)) { Name = "id" }),
            //    resolve: async context =>
            //    {
            //        var mountain = context.GetArgument<MountainInfo>("id");
            //        var mountainFromDb = await mountainRepository.GetById(mountain.Id);
            //        return mountainFromDb;
            //    });
            //sadly, we need to inherit from IGraphType and cannot just have simple scalar arguments in GraphQL.Net..
        }
    }
}
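The MountainType, MountainInputType and MountainIdInputType graph types referenced above live in the demo repo and are not shown in this post. As a rough sketch of my own (an assumption of what MountainIdInputType could look like - check the repo for the actual definition), an input type wrapping just the id could be:

using GraphQL.Types;

namespace AspNetCore_GraphQLDemo.GraphQL.Types
{
    // Hypothetical sketch of the input type used for the "id" argument above;
    // the real definition in the repo may differ.
    public class MountainIdInputType : InputObjectGraphType
    {
        public MountainIdInputType()
        {
            Name = "MountainIdInput";
            Field<NonNullGraphType<IntGraphType>>("id");
        }
    }
}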
As you can see, we can define multiple queries. We inherit from ObjectGraphType and pass in an IMountainRepository. This is an interface for our repository, which fetches data via Entity Framework Core; by providing the repo via dependency injection we can load data into GraphQL from the local database (the demo uses Sql Server (SQLEXPRESS)) in a simple manner. We define our queries via the Field and FieldAsync methods (note the string names, which become the field names we can use in GraphQL queries against the schema), and the resolve lambda tells how the data is to be fetched. We can also specify arguments. The "mountain" FieldAsync method accepts arguments via the arguments parameter, which gives us parameterized access to our data. Over to the Subscription property. It looks like this:
 
using AspNetCore_GraphQLDemo.GraphQL.Messaging;
using AspNetCore_GraphQLDemo.GraphQL.Types;
using GraphQL.Resolvers;
using GraphQL.Types;

namespace AspNetCore_GraphQLDemo.GraphQL
{
    public class MountainSubscription : ObjectGraphType
    {
        public MountainSubscription(MountainDetailsDisplayedMessageService mountainDetailsDisplayedMessageService)
        {
            Name = "Subscription";
            AddField(new EventStreamFieldType
            {
                Name = "detailsDisplayed",
                Type = typeof(MountainDetailsMessageType),
                Resolver = new FuncFieldResolver<MountainDetailsMessage>(c => c.Source as MountainDetailsMessage),
                Subscriber = new EventStreamResolver<MountainDetailsMessage>(c => mountainDetailsDisplayedMessageService.GetMessages())
            });
        }
    }
}
 
 
Here we inherit from ObjectGraphType (as we did for the Query) and we use the MountainDetailsDisplayedMessageService. This was registered as a singleton (concrete class) in the Startup.cs file. The message service uses Reactive Extensions (Rx.NET) server-side to handle the pub-sub pattern for subscribers. We are using System.Reactive.Subjects here.
 
MountainDetailsDisplayedMessageService.cs
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

namespace AspNetCore_GraphQLDemo.GraphQL.Messaging
{
    public class MountainDetailsDisplayedMessageService
    {
        private readonly ISubject<MountainDetailsMessage> _messageStream = new ReplaySubject<MountainDetailsMessage>(1);

        public MountainDetailsMessage AddMountainDetailsMessage(int id)
        {
            var message = new MountainDetailsMessage { Id = id };
            _messageStream.OnNext(message);
            return message;
        }

        public IObservable<MountainDetailsMessage> GetMessages()
        {
            return _messageStream.AsObservable();
        }
    }
}
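The message service is a plain C# class, so something server-side has to call AddMountainDetailsMessage for subscribers to be notified. The client in the demo links to /home/mountaindetails/?id=..., so presumably the details action publishes the message; a hypothetical sketch (the actual controller in the repo may look different):

using AspNetCore_GraphQLDemo.GraphQL.Messaging;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    private readonly MountainDetailsDisplayedMessageService _detailsDisplayedMessageService;

    public HomeController(MountainDetailsDisplayedMessageService detailsDisplayedMessageService)
    {
        _detailsDisplayedMessageService = detailsDisplayedMessageService;
    }

    public IActionResult MountainDetails(int id)
    {
        // Pushes a MountainDetailsMessage onto the ReplaySubject; any active
        // 'detailsDisplayed' GraphQL subscription receives it over the web socket.
        _detailsDisplayedMessageService.AddMountainDetailsMessage(id);
        return View(id);
    }
}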
The mutation looks like this:
MountainMutation.cs
using AspNetCore_GraphQLDemo.GraphQL.Messaging;
using AspNetCore_GraphQLDemo.GraphQL.Types;
using Data;
using Data.Repositories;
using GraphQL.Types;

namespace AspNetCore_GraphQLDemo.GraphQL
{
    public class MountainMutation : ObjectGraphType
    {
        public MountainMutation(IMountainRepository mountainRepository, MountainMessageService mountainMessageService)
        {
            FieldAsync<MountainType>("createMountain",
                arguments: new QueryArguments(
                    new QueryArgument<NonNullGraphType<MountainInputType>> { Name = "mountain" }),
                resolve: async context =>
                {
                    var mountain = context.GetArgument<MountainInfo>("mountain");
                    await mountainRepository.AddMountain(mountain);
                    mountainMessageService.AddMountainAddedMessage(mountain);
                    return mountain;
                });

            FieldAsync<MountainType>("removeMountain",
                arguments: new QueryArguments(
                    new QueryArgument<NonNullGraphType<MountainIdInputType>> { Name = "id" }),
                resolve: async context =>
                {
                    var mountain = context.GetArgument<MountainInfo>("id");
                    await mountainRepository.RemoveMountain(mountain.Id);
                    return mountain;
                });
        }
    }
}
We can create a mountain with a GraphQL mutation like this:
 
 mutation {
  createMountain(mountain: {
    county: "Svalbard"
    muncipiality: "Svalbard"
    officialName: "Newtontoppen"
    referencePoint: "Isbjønn på toppen"
    comments: "Husk rask snøskuter"
    metresAboveSeaLevel: "1713"
    primaryFactor: "1713"
  }) {
    id
  }
}
  
 
And we can remove a mountain (don't we all?) like this:
 

# Write your query or mutation here
mutation {
  removeMountain(id: {
    id: 370
  }) { id }
}
  
 
If you clone the repo you will find more source code concerning directives such as lowercase and sorting. As you saw in MountainSchema, I use the FieldMiddlewareBuilder to do the sorting, since this needs to tap deeper into the GraphQL.Net pipeline. We also need some more code - for the client side, of course. The client-side code relies on the Apollo Client lib like this:
 
index.cshtml
<script src="https://unpkg.com/apollo-client-browser@1.7.0"></script>
The libman.json file of the demo solution (the counterpart to package.json when it comes to specifying client-side libraries in .NET Core MVC solutions) looks like this:
 
libman.json
{ "version": "1.0", "defaultProvider": "cdnjs", "libraries": [ { "library": "twitter-bootstrap@4.2.1", "destination": "wwwroot/lib/bootstrap", "files": [ "js/bootstrap.bundle.js", "css/bootstrap.min.css" ] }, { "library": "jquery@3.3.1", "destination": "wwwroot/lib/jquery", "files": [ "jquery.min.js" ] }, { "provider": "unpkg", "library": "font-awesome@4.7.0", "destination": "wwwroot/lib/font-awesome/" }, { "provider": "unpkg", "library": "toastr@2.1.4", "destination": "wwwroot/lib/toastr/" } ] }
We then need some client-side code to load data from our GraphQL server.
 
  <script>

    function LoadGraphQLDataIntoUi(result) {

        var tableBody = $("#mountainsTableBody");
        tableBody.empty();

        var tableHeaderRow = $("#mountainsTableHeaderRow");
        tableHeaderRow.empty();

        var rowIndex = 0;

        result.data.mountains.forEach(mountain => {

            if (rowIndex == 0) {
                Object.keys(mountain).forEach(key => {
                    if (key === '__typename') {
                        return;
                    }
                    tableHeaderRow.append(`<th>${key}</th>`);
                });
            }

            var row = $('<tr>');

            Object.keys(mountain).forEach(key => {
                if (key === '__typename') {
                    return;
                }
                if (key === 'id') {
                    row.append(`<td><a href='/home/mountaindetails/?id=${mountain[key]}'><i class='fa fa-arrow-right'></i></a> ${mountain[key]}</td>`);
                    return;
                }
                row.append(`<td>${mountain[key]}</td>`);
            });

            // append the finished row so the cells end up inside the <tr>
            tableBody.append(row);

            rowIndex++;

        });

        toastr.success('Loaded GraphQL data from server into the UI successfully.');


    }

    $("#btnConnect").click(function () {
        ConnectDemo();

    });


    $("#btnLoadData").click(function () {
        var gqlQueryContents = $("#GraphQLQuery").val();
        LoadGraphQLData(gqlQueryContents, LoadGraphQLDataIntoUi);
        toastr.info('Retrieving data from API using GraphQL.');
    });

    $(document).ready(function () {

        console.log('loading');

        var initialQuery = `
                {
                    mountains {
                        id
                        fylke: county
                        kommune: muncipiality
                        hoydeOverHavet: calculatedMetresAboveSeaLevel
                        offisieltNavn: officialName
                        primaerfaktor: calculatedPrimaryFactor
                        referansePunkt: referencePoint
                    }
                }`;

        $("#GraphQLQuery").val(initialQuery);

    });

</script>
 
 
And then a method using Apollo client lib to load the data:
 
 /**
 * Loads GraphQL data specified by the query expression and passes the result to the callBackFunction.
 * callBackFunction should be a JS function that accepts one parameter, preferably called result,
 * which is an object that contains a result.data object.
 */
function LoadGraphQLData(gqlQuery, callBackFunction) {

    var apolloClient = new Apollo.lib.ApolloClient({
        networkInterface: Apollo.lib.createNetworkInterface({
            uri: 'http://localhost:2542/graphql',
            transportBatching: true,
        }), connectToDevTools: true
    });
    var query = Apollo.gql(gqlQuery);

    apolloClient.query({
        query: query,
        variables: {}
    }).then(result => {
        callBackFunction(result);
    }).catch(error => {
        //debugger
        toastr.error(error, 'GraphQL loading failed');
    });
}
 

Saturday, 19 March 2022

Using C# 9 language features in .NET Framework and .NET Standard projects

C# 7.0 came out in March 2017, and Microsoft has since shipped newer frameworks such as .NET Core, .NET 5 and .NET 6. If you are working with a .NET Framework based solution (or .NET Standard 2.0), you can actually get support for the C# 8 and C# 9 language versions, enabling you to utilize more C# language features. The following steps can be used to enable C# language version 9 in, for example, .NET Framework 4.8 (tested and verified - I could use records, a C# 9 language feature).
  • Specify the <LangVersion> element set to 9.0 in the .csproj file(s).
  • Consider using a file called Directory.Build.props at the root level of your solution (case sensitive on Linux) with this shared setting, to enable C# 9.0 in all projects (see the sketch after this list).
  • Using C# language version 9 also requires you to include a small file in each project, listed further below - call it IsExternalInitPatch.cs for example.
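A minimal Directory.Build.props placed next to the .sln file could look like this sketch (my own example - adjust it if some projects must stay on a lower language version):

<Project>
  <PropertyGroup>
    <LangVersion>9.0</LangVersion>
  </PropertyGroup>
</Project>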
File IsExternalInitPatch.cs should include this :

  
namespace System.Runtime.CompilerServices
{
    internal static class IsExternalInit { }
}
  


Now you can start playing around with C# 9 in, for example, a .NET Framework 4.8 solution, which has earlier been limited to C# 7.3 and no later C# language features.

namespace SomeAcme.SomeProduct.Common.Test
{
    /// <summary>
    /// This is just a test of csharp 9 for SomeAcme.SomeProduct
    /// Note that Directory.Build.props in this branch uses LangVersion set to 9.0 and we need the file IsExternalInit.cs in every project
    /// </summary>
    /// <remarks>
    /// See these two urls: 
    /// https://btburnett.com/csharp/2020/12/11/csharp-9-records-and-init-only-setters-without-dotnet5.html
    /// https://blog.ndepend.com/using-c9-record-and-init-property-in-your-net-framework-4-x-net-standard-and-net-core-projects/
    /// </remarks>
    [TestFixture]
    public class TestOutCsharpNine
    {

        public record Operasjon (DateTime StartTid, bool ErElektiv, string PasientNavn);


        [Test]
        public void Test_Records_ChsharpNine_And_Deconstruction_And_Discardable_Variables()
        {
            var op = new Operasjon(DateTime.Today.AddHours(8).AddMinutes(15), true, "Bjarne Brøndbo");
            (_, _, string pasientNavn) = op;
            pasientNavn.Should().Be(op.PasientNavn);
        }

        [Test]
        public void Test_Init_Only_Props()
        {
            var op = new OperasjonWithInitOnlyProps
            {
                ErElektiv = true,
                PasientNavn = "Thomas Brøndbo"
            };
            // op.PasientNavn = "foo"; 
            //uncommenting the line above should demonstrate the init-only property giving a compiler error if trying to mutate or alter this property
            op.PasientNavn.Should().Contain("Brøndbo");
        }

        [DataContract]
        public class OperasjonWithInitOnlyProps
        {
            [DataMember]
            public string PasientNavn { get; init; }
            [DataMember]
            public bool ErElektiv { get; init; }
        }
    }
}


The C# compiler determines the default C# language version based on these rules:
Target framework	version	C# language version default
.NET	6.x	C# 10
.NET	5.x	C# 9.0
.NET Core	3.x	C# 8.0
.NET Core	2.x	C# 7.3
.NET Standard	2.1	C# 8.0
.NET Standard	2.0	C# 7.3
.NET Standard	1.x	C# 7.3
.NET Framework	all	C# 7.3
So .NET Framework and .NET Standard based solutions have not by default gotten any modernization of C# features since March 2017 (five years ago), but with some small modifications we can still use C# 9.0, which came out about 1.5 years ago. Of course, this C# language version is meant to be used with .NET 5, so do not expect everything to be supported. However, chances are high that much of the C# 8 and C# 9 language features could be handy in many .NET Framework and .NET Standard based projects. For example, records with their support for immutability are definitely a big new thing in C# compared to what is available in C# language version 8 or earlier. Lastly, you must also consider how to build C# 9 language features (which assume the .NET 5 SDK is available) on a build server. For Team City, for example, you must install the .NET 5 SDK on the build agent.
Also, most likely you have an MS Build step in Team City, so you should use MS Build 16 (VS 2019 Build Tools) and install the Build Tools for VS 2019 on the build agent from this url (or google for Build Tools for VS 2019): https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools&rel=16&src=myvs&utm_medium=microsoft&utm_source=my.visualstudio.com&utm_campaign=download&utm_content=vs+buildtools+2019 For Azure Devops, choose the VS 2022 agent. I still had to add a "Use .NET Core" task and set 'Package to install' to 'SDK (contains runtime)'; the YAML looks like this:

steps:
- task: UseDotNet@2
  displayName: 'Use .NET Core sdk 5.0.100'
  inputs:
    version: 5.0.100
    includePreviewVersions: true

Also note this - even though a project targets .NET Framework 4.8, its config file (app.config for example) might contain this:
 
  


<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.1" />
    </startup>
	<appSettings>
 
 
The supportedRuntime setting might force you to use a lower LangVersion in a specific project anyway. So you might, for example, need LangVersion set to 7.1 in one project while defaulting to LangVersion 9.0 elsewhere. To sum up:
  • .NET Framework and .NET Standard can still use C# language version 8 or 9. You need to do the adjustments mentioned in this article.
  • C# language version 10 is only supported by .NET 6. To use this language version you have to upgrade the framework.
  • Also - test out the new language features in one project first and stick to the basic features to begin with. If you use advanced features of C# language version 8 or 9 you might encounter some glitches. However, you should get a compiler warning or error for most problems you encounter.
  • Don't forget that your build agent must be able to build the solution too. You can use the VS 2022 hosted agent and consider the Use .NET Core SDK task mentioned here if you build in Azure Devops. If you use a self-hosted agent, like a Team City on-premises build agent, you need to install the newest VS 2019 SDK / Build Tools to ensure that you have the required C# LangVersion.
In the Developer command prompt on the build agent you can run this command
 
  csc -langversion:? 
 
This should output the C# language versions your build agent supports. It also works on a developer PC (use the VS 2019 developer command prompt). As noted, C# 10 is only supported in .NET 6. We might see a future where C# 11 is still supported in a .NET 6 solution - I am not sure what Microsoft is planning here. But for other and earlier frameworks, it looks like C# 9 is the end of the road for language versions - we have to upgrade to .NET 6 to utilize newer language features (or consider dragging in Nuget compiler packages).

Saturday, 12 March 2022

Added NinUtilsNorway - shows modulo 11 algorithm for verifying Norwegian personal identifier numbers

I added a library to Nuget called NinUtilsNorway. Nin is an acronym for Norwegian Identification Number. The library is handy for verifying that Norwegian personal identifiers - or PIDs - are correct. It supports additional formats of personal identification numbers (fnr - fødselsnummer), and these kinds of PID / NIN are supported:
  • Fnr (ordinary PIDs / NINs)
  • D-number - handed out to those working in Norway for a temporary period of some months or years - 'guest workers'
  • H-number - "Nødnummer" (emergency number) - given to tourists, unidentified people et cetera - those on a temporary visit to Norway, e.g. on a visa
  • DUF-number - given to asylum seekers by UDI
  • FH-number - a variant of H-number with relaxed formatting - supports more numbers
Install it using these commands - for .NET Framework 4.6.2 (at least .NET Framework 4.7.1 recommended): Install-Package NinUtilsNorway -Version 1.1.0 - and for .NET Core, .NET 5 and .NET 6: dotnet add package NinUtilsNorway --version 1.1.0
In the future, in 2032, a new standard - also called PID - will replace today's format. New citizens (newborns etc.) will get NINs / PIDs in the new format, but people born before 2032 will by default keep their NIN / PID / fnr. We can still read out age and gender from these fnr - actually both the age and the gender are readily resolved without checking the two last control digits via the modulo-11 algorithm; they follow established rules. You can browse the source code of NinUtils in my Github repo to see how we resolve information from a NIN / PID. The lib is written in netstandard 2.0, so you can use it from .NET Framework 4.7+ (theoretically 4.6.2 can also be supported) and .NET Core, .NET 5 and .NET 6. A sample client in a .NET 6 console app is:
 
 
using static NinUtilsNorway.NinUtilsNorway;


// See https://aka.ms/new-console-template for more information
Console.WriteLine("Hello, World!");

Console.WriteLine("Enter your fnr: ");
string fnr =  Console.ReadLine();

bool isValid = IsValidNin(fnr);
NinUtilsNorway.Gender gender = GetGender(fnr);

Console.WriteLine($"The fnr {fnr} is valid? {isValid} Gender of fnr is: {gender}");

 
This page shows how the last two digits of a fnr are used to verify its validity: http://www.fnrinfo.no/Teknisk/KontrollsifferSjekk.aspx In short, we calculate the two control digits k1 and k2 and check that the weighted sums are divisible by 11, like in the C# code below. Note - for a fnr with digits d1, d2, ..., d11 we calculate a weighted sum of the digits using these weights: k1: weights are { 3, 7, 6, 1, 8, 9, 4, 5, 2, 1 }; k2: weights are { 5, 4, 3, 2, 7, 6, 5, 4, 3, 2, 1 }. Check the link above - it has a very easy example. To give another example, the modulo-11 algorithm is very similar to what Norwegian banks use for KID - the customer identifier on invoices.
 
 

        /// <summary>
        /// Calculates validity of Nin according to modulo 11 algorithm. 
        /// </summary>
        /// <param name="nin"></param>
        /// <returns></returns>
        /// <remarks><see href="http://www.fnrinfo.no/Teknisk/KontrollsifferSjekk.aspx"/>
        /// Example of a Modulo-11 algorithm mathematical basis is shown here: 
        /// <see href="http://www.pgrocer.net/Cis51/mod11.html"/>
        /// </remarks>
        public static bool IsValidNin(string nin)
        {
            nin = nin?.Trim();
            if (nin?.Length != 11)
            {
                return false;
            }

            if (!long.TryParse(nin, out var _))
            {
                return false;
            }

            if (IsDNumber(nin))
            {
                //normalize a D-number before validation: subtract 4 from the first digit
                nin = (byte.Parse(nin[0].ToString()) - 4).ToString() + new string(nin.Skip(1).ToArray());
            }

            int k1 = 0, k2 = 0; //weighted sums
            int[] k1_weights = new int[] { 3, 7, 6, 1, 8, 9, 4, 5, 2, 1 };
            foreach (var item in nin.Select((digit, index) => (digit, index)))
            {
                if (item.index == 10)
                {
                    break; //only considering first 10 digits of nin
                }
                k1 += int.Parse(item.digit.ToString()) * k1_weights[item.index];
            }
            if (k1 % 11 != 0)
            {
                return false; //k1 must be divisible by 11!
            }
            int[] k2_weights = new int[] { 5, 4, 3, 2, 7, 6, 5, 4, 3, 2, 1 };
            foreach (var item in nin.Select((digit, index) => (digit, index)))
            {
                k2 += int.Parse(item.digit.ToString()) * k2_weights[item.index];
            }
            if (k2 % 11 != 0)
            {
                return false;
            }

            return true; //k1 and k2 are now both known to be divisible by 11
        }
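The Readme below also documents a GetControlDigitsForNin helper which goes the other way: given the first nine digits, the two control digits are calculated. To illustrate the arithmetic, here is a sketch of my own (based on the weights described above, not the library's actual code):

using System;
using System.Linq;

public static class NinControlDigitSample
{
    // Hedged sketch (not the library's implementation): derives the two control digits
    // k1 and k2 from the first nine digits of a nin using the weights described above.
    // Returns null if a weighted sum leaves remainder 1, since the control digit would
    // then have to be 10, which is not a valid digit.
    public static string GetControlDigits(string firstNineDigits)
    {
        int[] k1Weights = { 3, 7, 6, 1, 8, 9, 4, 5, 2 };
        int[] k2Weights = { 5, 4, 3, 2, 7, 6, 5, 4, 3, 2 };

        int[] digits = firstNineDigits.Select(c => c - '0').ToArray();

        int k1 = 11 - digits.Zip(k1Weights, (d, w) => d * w).Sum() % 11;
        if (k1 == 11) k1 = 0;
        if (k1 == 10) return null; // no valid first control digit exists for these nine digits

        // k2 weights the first nine digits plus k1 itself (the self-correcting mechanism)
        int k2 = 11 - digits.Concat(new[] { k1 }).Zip(k2Weights, (d, w) => d * w).Sum() % 11;
        if (k2 == 11) k2 = 0;
        if (k2 == 10) return null; // no valid second control digit exists

        return $"{k1}{k2}";
    }
}

Running GetControlDigits on the first nine digits of a valid fnr should return exactly its two last digits.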


 
The Readme of the Nuget is added below:
 
 
 NinUtilsNorway
Summary
Util methods for Nin (National identifier number) in Norway

Note : Nin standards will be replaced by PID standard in 2032. Nin will be kept, but new Nins handed out will follow PID standard.

The following types of Nin numbers exist - basically five types, of which the ordinary Nin and the D-number are the most typical. They all consist of 11 digits, of which the last two are control digits (usually the modulo-11 algorithm is used):

Ordinary Nin (fødselsnummer)
D-number (temporary given to foreign workers, may span multiple years)
Help numbers H-numbers (tourists, infants, unconcious people, unidentified people, etc)
FH help numbers FH-numbers (similar to H-numbers)
DUF-number (given to asylum seekers by UDI)
About change of Nin into PID standard as of 2032 : https://www.skatteetaten.no/en/deling/opplysninger/folkeregisteropplysninger/pid/ Sample test persons can be retrieved from here which was helpful in building the util methods. https://skatteetaten.github.io/folkeregisteret-api-dokumentasjon/test-for-konsumenter

See a useful list of definitions here: https://www.ehelse.no/standardisering/standarder/identifikatorer-for-personer And DUF-numbers (UDI): https://www.udi.no/ord-og-begreper/duf-nummer/ Note - gender calculation will not necessarily be possible after 2032, as you are not guaranteed that the Nin contains correct gender information once PID is introduced. People will keep their Nin as before, but the semantics of gender - where the ninth digit (the last of the three 'individual number' digits) being even means FEMALE and odd means MALE - are halted after 2032. Newborns and new Nins (PIDs) will be gender-less, i.e. you cannot read gender out of a Nin handed out after 2032.

NinUtilsNorway.NinUtilsNorway.GetGender(System.String)
        <summary>
        Resolves gender from nin. Rule is that the first six digits are the birth date
        DDMMYY, followed by 3 'individual digits' (individnummer) and finally two
        control digits (kontrollsiffer). The third of the 'individual digits' is the 
        indicator for gender. If it is an even number, the individual is female; if odd, male.
        </summary>
        <param name="nin"></param>
        <returns></returns>
        <remarks>Documentation about Norwegian Nin structure is here<see href="https://www.skatteetaten.no/person/folkeregister/fodsel-og-navnevalg/barn-fodt-i-norge/fodselsnummer/"/></remarks>
    </member>
NinUtilsNorway.NinUtilsNorway.IsDufNumber(System.String,NinUtilsNorway.IDateTimeNowProvider)
        <summary>
        Checks if this is a DUF-number. These numbers are given out by UDI.
        The check only verifies that it is a number with 12 digits. The number must also 
        reside in UDI's data systems, which this method does not check.
        </summary>
        <returns></returns>
        <remarks>Se notes from eHelse here: <see href="https://www.ehelse.no/standardisering/standarder/identifikatorer-for-personer#DUF-nummer"/></remarks>
    </member>
NinUtilsNorway.NinUtilsNorway.IsHelpNumber(System.String,System.Boolean)
        <summary>
        Checks if this is a help number (H-number). The default convention for an H-number is that we add 
        the number 4 to the third digit 
        </summary>
        <param name="useEightNineConvention">Use special convention that if the first digit is 8 or 9, it signals 
        a Help Number. Note - this usually designates a FH-help number instead</param>
        <returns></returns>
    </member>
NinUtilsNorway.NinUtilsNorway.IsFHNumber(System.String)
        <summary>
        FH numbers were developed by KITH and established as a standard on 18.01.2010. An FH number is similar to a Nin 
        (fødselsnummer) with 11 digits, and the first digit is 8 or 9. The digits in positions 2 - 9 are generated
        as random numbers. This standard also conceals gender, birthdate and the order in which the numbers are provided.
        The algorithm allows about 200 million numbers, minus 17% of these due to incorrect control digits (the last two digits). 
        Examples of people getting an FH-number are tourists, newborns (infants), unconscious or unidentified people, 
        or similar situations where a fødselsnummer (Nin) or D-number is not available. 
        </summary>
        <param name="number"></param>
        <returns></returns>
    </member>
NinUtilsNorway.NinUtilsNorway.IsDNumber(System.String)
        <summary>
        Returns true if a person has a D-number. A D-number is given to foreign workers in 
        Norway as a temporary identifier during their work period. It is similar to an ordinary Nin (fødselsnummer), but 
        we add 4 to the first digit of the nin. This gives 4, 5, 6, 7 as the possible values for the first digit.
        A lot of other characteristics of the D-number are similar to the ordinary Nin, including that the two control digits follow the same rules.
        </summary>
        <param name="nin"></param>
        <returns></returns>
    </member>
NinUtilsNorway.NinUtilsNorway.GetAge(System.String,NinUtilsNorway.IDateTimeNowProvider)
        <summary>
        Calculates age from Nin
        </summary>
        <param name="nin"></param>
        <param name="nowTimeProvider">Provide an implementation to override now time. 
        Useful for mocking</param>
        <returns></returns>
        <remarks>About individual numbers - the 7-9 digits of Nin - and rules of centuries. 
        See explanation here: <see href="https://no.wikipedia.org/wiki/F%C3%B8dselsnummer" /></remarks>
NinUtilsNorway.NinUtilsNorway.GetControlDigitsForNin(string nin)
    /// <summary>
    /// Nin are composed of two control digits at the end. We can calculate these digits. 
    /// Usage: pass in the first NINE digits of the Nin. The last two digits will then be calculated. 
    /// For the given first nine digits we calculate the control digits, the last two digits of the nin.
    /// 11 - (the weighted sum modulo 11) is returned as the first control digit
    /// k1. The second control digit k2 is calculated similarly, but includes the first control digit as a 
    /// self correcting mechanism.
    /// </summary>
    /// <param name="nin"></param>
    /// <returns></returns>
NinUtilsNorway.NinUtilsNorway.IsValidNin(string nin)
    /// <summary>
    /// Calculates validity of Nin according to modulo 11 algorithm. 
    /// </summary>
    /// <param name="nin"></param>
    /// <returns></returns>
    /// <remarks><see href="http://www.fnrinfo.no/Teknisk/KontrollsifferSjekk.aspx"/>
    /// Example of a Modulo-11 algorithm mathematical basis is shown here: 
    /// <see href="http://www.pgrocer.net/Cis51/mod11.html"/>
    /// </remarks>
Finally, a note about testing. See:

https://skatteetaten.github.io/folkeregisteret-api-dokumentasjon/test-for-konsumenter/

for test data.

Also note that you can implement IDateTimeNowProvider to statically set the "today" date for predictable results while testing.
 
 

Thursday, 16 December 2021

AngularJs directive for clearing a text field

I wrote an AngularJs directive at work today for clearing a text field. We still use AngularJs in multiple front-end projects (although I have worked more with Angular than AngularJs in the last 2-3 years). The directive ended up like this (we use Bootstrap v2.3.2):

angular.module('formModule').directive('addClearTextFieldBtn', function ($compile) {
        function link(scope, element, attrs) {
            var targetId = attrs.id;
            var targetNgModel = attrs.ngModel;
            var minimumChars = attrs.clearTextFieldBtnMintextlength ? attrs.clearTextFieldBtnMintextlength : "1";
            var emptyValue = "''";
            var templateAppend = '<i id="clear' + targetId + '" ng-if="' + targetNgModel + ' && + ' + targetNgModel + '.length >= ' + minimumChars + '"' + 'ng-click="' + targetNgModel;
            templateAppend += ' = ' + emptyValue + '" class="glyphicon icon-remove form-control-feedback" title="Tøm innhold" style="cursor:pointer; pointer-events: all;" tooltip="clear"></i >';
            var clearButton = angular.element(templateAppend);
            clearButton.insertAfter(element);
            $compile(clearButton)(scope);
        }
        return {
            restrict: 'A',
            replace: false,
            link: link
        };
    });

Example usage inside an HTML helper in MVC:
add_clear_text_field_btn = "model.Icd10", data_clear_text_field_btn_mintextlength="3"
We pass in an HTML5 data attribute to specify the minimum text length required before the clear button is shown.
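For illustration, a hypothetical Razor call wiring the directive onto a textbox could look like the snippet below (the model property and the minimum length are placeholders of mine; MVC html helpers translate underscores in the anonymous object into dashes):

@Html.TextBoxFor(m => m.Icd10, new
{
    data_ng_model = "model.Icd10",
    add_clear_text_field_btn = "model.Icd10",
    data_clear_text_field_btn_mintextlength = "3"
})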

Sunday, 12 December 2021

Displaying errors in Event Log with Out-GridView in Powershell

A user friendly way to view errors from an Event Log source using Powershell.
$rawUI = $Host.UI.RawUI
$oldSize = $rawUI.BufferSize
$typeName = $oldSize.GetType( ).FullName
$newSize = New-Object $typeName (500, $oldSize.Height)
$rawUI.BufferSize = $newSize

get-eventlog -logname someacmecompanyname | where-object { $_.source -like '*someeventlogsourcename*' -and $_.EntryType -in ('Error', 'Warning', 'Critical') } | out-gridview

Wednesday, 24 November 2021

Scanning solutions for NUnit test adapter via Powershell

Checking that we have added the test adapter for NUnit so that our tests in Azure Devops are run

A challenge when running tests can be that the NUnit test adapter Nuget package is missing from the solution. If you run tests using NUnit 2.x, you require NUnitTestAdapter. If you use NUnit 3.x, NUnit3TestAdapter is required. The following Powershell script can be used to check that we have added such a Nuget package reference to at least one test project in the solution. The functions below list all PackageReference entries in the csproj files of the solution. Note: this requires the following setup of your Nuget package references in the solution.
  • You have to have csproj projects in the solution
  • You must use PackageReference, i.e. list the nuget packages in the csproj file. This will not work if you instead use the packages lock json format or packages.config.
The Powershell functions are these:
 
 
 
 Function Get-ProjectInSolution {
    [CmdletBinding()] param (
        [Parameter()][string]$Solution
    )
    $SolutionPath = $Solution
    $SolutionFile = Get-Item $SolutionPath
    $SolutionFolder = $SolutionFile.Directory.FullName

    Get-Content $Solution |
        Select-String 'Project\(' |
        ForEach-Object {
            $projectParts = $_ -Split '[,=]' | ForEach-Object { $_.Trim('[ "{}]') }
            [PSCustomObject]@{
                File = $projectParts[2]
                Guid = $projectParts[3]
                Name = $projectParts[1]
            }
        } |
        Where-Object File -match "csproj$" |
        ForEach-Object {
            Add-Member -InputObject $_ -NotePropertyName FullName -NotePropertyValue (Join-Path $SolutionFolder $_.File) -PassThru
        }
}

Function Get-TestProjectInSolution {
[CmdletBinding()] param (
[Parameter()][string]$Solution)
  
  $projects = & Get-ProjectInSolution $Solution
  $testProjects = $projects | Where-Object { $_.Name -like '*Test*' }
  return $testProjects
}


Function Get-PackagesInProject {
[CmdletBinding()] param (
[Parameter()][string]$ProjectFile)

Get-Content $ProjectFile | Write-Host 
}


# Get-ProjectInSolution "C:\dev\somesolution\someacme.sln" 

Function List-PackagesOfTestProjectInSolution {
[CmdletBinding()] param (
[Parameter()][string]$SolutionFile)

  & Get-TestProjectInSolution $SolutionFile | ForEach-Object {
   $filePath = $_.FullName 
   Write-Host $filePath
  (Get-Content $_.FullName | Find "<PackageReference Include")
}
}

 
    Function Get-PackagesOfTestProjectInSolution {
[CmdletBinding()] param (
[Parameter()][string]$SolutionFile)

$dict = @{}

  & Get-TestProjectInSolution $SolutionFile | ForEach-Object {
    $filePath = $_.FullName 
    # Write-Host $filePath
    if (-not $dict.ContainsKey($filePath)) {
        $dict[$filePath] = (Get-Content $_.FullName | Find "<PackageReference Include")
    }
    return $dict 
  }

}

Function Has-NunitTestAdapterPackageInTestProjectinSolution {
[CmdletBinding()] param (
[Parameter()][string]$SolutionFile) 
 $packagesDict = Get-PackagesOfTestProjectInSolution $SolutionFile
$allPackagesString = $packagesDict.Values
$isNunitTestAdapterFound = ($allPackagesString -like "*NUnit*TestAdapter*").Length -gt 0
return $isNunitTestAdapterFound
}


Get-PackagesOfTestProjectInSolution "C:\dev\someacme\someacme.sln" 

$isNunitTestAdapterPresent = Has-NunitTestAdapterPackageInTestProjectinSolution "C:\dev\someacme\somecme.sln" 

Write-Host "Is NUnit test adapter added?" $isNunitTestAdapterPresent

    
    
    
    
For example, we could run the function call: List-PackagesOfTestProjectInSolution "C:\dev\someacme\someacme.sln" - and we get our list of package references in that solution (here we only look inside projects with a name containing "Test").

Friday, 9 July 2021

Immutable lists in C# - Adding a wrapper class

This article will discuss the immutable collections in C#, more precisely an immutable list of generic type T wrapped inside a class. This makes it easier to use immutable lists, and the wrapped list can only be altered via method calls. Remember that a modifying operation on an immutable list always returns a new immutable list. For easier use, we can have a wrapper for this. First off, inside Linqpad 5, which is used in this article, hit F4. In case you want to use Visual Studio instead, the same code should work there (except Linqpad's Dump method). In the tab Additional References, choose Add NuGet... Then search for System.Collections.Immutable. After selecting this Nuget package, choose the tab Additional Namespace Imports and import System.Collections.Immutable. Now paste this demo code:
 
 void Main()
{
	var numbersInImmutableList = new ImmutableWrappedList<int>();
	numbersInImmutableList.AddRange(new[] { 3, 1, 4, 1, 5, 9, 2 }); 
	numbersInImmutableList.AddRange(new[]{ 2, 7, 1, 8, 2, 1, 8 });
	numbersInImmutableList.RemoveAt(2);
	numbersInImmutableList.Contents.Dump(); 	
}

public class ImmutableWrappedList<T>  {
	private ImmutableList<T> _internalList;
	public ImmutableList<T> Contents => _internalList;
	
	public ImmutableWrappedList()
	{
		_internalList = ImmutableList.Create<T>(); 	
	}
	
	public void Clear() => _internalList = _internalList.Clear();
	public void AddRange(IEnumerable<T> itemsToAdd) => _internalList = _internalList.AddRange(itemsToAdd);
	public void Add(T itemToAdd) => _internalList = _internalList.Add(itemToAdd);
	public void Remove(T itemToAdd) => _internalList = _internalList.Remove(itemToAdd);
	public void RemoveAt(int index) => _internalList = _internalList.RemoveAt(index);
	public void Insert(T itemToAdd, int position) => _internalList = _internalList.Insert(position, itemToAdd);
}


 
As we can see, the wrapper class can add items to the immutable collection, and we reassign the result of each modifying operation to the same _internalList field, which is private and is initialized to an empty immutable list in the constructor. This gives you mutability around the immutable collection without having to remember to reassign the variable yourself, which is error prone in itself. What is the benefit of this? Well, although we can reach into the internal collection via the Contents property, the immutable list is still immutable. If you want to change it, you have to call the specific methods offered by the wrapping class. So, data-integrity wise, we have data that can only change via the methods offered by the wrapper. A collection which is not immutable can be changed in many ways just by handing out access to it. We still have control over the data via the wrapper, and we make the immutable collection easier to consume by doing the reassignment for the caller.
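A quick illustration of what the wrapper buys you (building on the ImmutableWrappedList<T> class above): anything you hand out via Contents is a stable snapshot, so later mutations through the wrapper never affect earlier readers.

var numbers = new ImmutableWrappedList<int>();
numbers.Add(1);

var snapshot = numbers.Contents;   // immutable snapshot with one element
numbers.Add(2);                    // reassigns the internal immutable list inside the wrapper

Console.WriteLine(snapshot.Count);          // 1 - the earlier snapshot is unaffected
Console.WriteLine(numbers.Contents.Count);  // 2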

Wednesday, 7 July 2021

Dapper - Inner Joins between two tables - Helper methods

Many developers use Entity Framework (EF) today as their data access library for communicating with the database. EF is an ORM, an object-relational mapper, and while it boasts much functionality like change tracking and relationship mapping, Dapper at the other end of the ORM spectrum is a micro-ORM. A micro-ORM has less functionality, but usually offers more speed and less overhead. Dapper is a great micro-ORM; however, writing SQL manually is often error-prone or tedious. Some purists love writing the SQL manually and being sure which SQL they send off to the DB - that is much of the point of Dapper. However, lending a hand to developers in building their SQL should still be allowed, and the query-building time added by such helper methods is minuscule compared to the heavy overhead of an advanced ORM like EF. Anyway, this article shows some code I am working with for building inner joins between two tables. The relationship between the two tables is 1:1 in my test case, and the inner join does for now not support a where-predicate filter, although adding such a filter should be easy. The source code for my DapperUtils is available on GitHub: https://github.com/toreaurstadboss/DapperUtils
First, we make use of SqlBuilder from DapperUtils addon lib for Dapper.

using Dapper;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

namespace DapperUtils.ToreAurstadIT
{
    /// <summary>
    /// Original is fetched from: https://raw.githubusercontent.com/DapperLib/Dapper/main/Dapper.SqlBuilder/SqlBuilder.cs
    /// 
    /// </summary>
    public class SqlBuilder
    {
        private readonly Dictionary<string, Clauses> _data = new Dictionary<string, Clauses>();
        private int _seq;

        private class Clause
        {
            public string Sql { get; set; }
            public object Parameters { get; set; }
            public bool IsInclusive { get; set; }
        }

        private class Clauses : List<Clause>
        {
            private readonly string _joiner, _prefix, _postfix;

            public Clauses(string joiner, string prefix = "", string postfix = "")
            {
                _joiner = joiner;
                _prefix = prefix;
                _postfix = postfix;
            }

            public string ResolveClauses(DynamicParameters p)
            {
                foreach (var item in this)
                {
                    p.AddDynamicParams(item.Parameters);
                }
                return this.Any(a => a.IsInclusive)
                    ? _prefix +
                      string.Join(_joiner,
                          this.Where(a => !a.IsInclusive)
                              .Select(c => c.Sql)
                              .Union(new[]
                              {
                                  " ( " +
                                  string.Join(" OR ", this.Where(a => a.IsInclusive).Select(c => c.Sql).ToArray()) +
                                  " ) "
                              }).ToArray()) + _postfix
                    : _prefix + string.Join(_joiner, this.Select(c => c.Sql).ToArray()) + _postfix;
            }
        }

        public class Template
        {
            private readonly string _sql;
            private readonly SqlBuilder _builder;
            private readonly object _initParams;
            private int _dataSeq = -1; // Unresolved

            public Template(SqlBuilder builder, string sql, dynamic parameters)
            {
                _initParams = parameters;
                _sql = sql;
                _builder = builder;
            }

            private static readonly Regex _regex = new Regex(@"\/\*\*.+?\*\*\/", RegexOptions.Compiled | RegexOptions.Multiline);

            private void ResolveSql()
            {
                if (_dataSeq != _builder._seq)
                {
                    var p = new DynamicParameters(_initParams);

                    rawSql = _sql;

                    foreach (var pair in _builder._data)
                    {
                        rawSql = rawSql.Replace("/**" + pair.Key + "**/", pair.Value.ResolveClauses(p));
                    }
                    parameters = p;

                    // replace all that is left with empty
                    rawSql = _regex.Replace(rawSql, "");

                    _dataSeq = _builder._seq;
                }
            }

            private string rawSql;
            private object parameters;

            public string RawSql
            {
                get { ResolveSql(); return rawSql; }
            }

            public object Parameters
            {
                get { ResolveSql(); return parameters; }
            }
        }

        public Template AddTemplate(string sql, dynamic parameters = null) =>
            new Template(this, sql, parameters);

        protected SqlBuilder AddClause(string name, string sql, object parameters, string joiner, string prefix = "", string postfix = "", bool isInclusive = false)
        {
            if (!_data.TryGetValue(name, out Clauses clauses))
            {
                clauses = new Clauses(joiner, prefix, postfix);
                _data[name] = clauses;
            }
            clauses.Add(new Clause { Sql = sql, Parameters = parameters, IsInclusive = isInclusive });
            _seq++;
            return this;
        }

        public SqlBuilder Intersect(string sql, dynamic parameters = null) =>
            AddClause("intersect", sql, parameters, "\nINTERSECT\n ", "\n ", "\n", false);

        public SqlBuilder InnerJoin(string sql, dynamic parameters = null) =>
            AddClause("innerjoin", sql, parameters, "\nINNER JOIN ", "\nINNER JOIN ", "\n", false);

        public SqlBuilder LeftJoin(string sql, dynamic parameters = null) =>
            AddClause("leftjoin", sql, parameters, "\nLEFT JOIN ", "\nLEFT JOIN ", "\n", false);

        public SqlBuilder RightJoin(string sql, dynamic parameters = null) =>
            AddClause("rightjoin", sql, parameters, "\nRIGHT JOIN ", "\nRIGHT JOIN ", "\n", false);

        public SqlBuilder Where(string sql, dynamic parameters = null) =>
            AddClause("where", sql, parameters, " AND ", "WHERE ", "\n", false);

        public SqlBuilder OrWhere(string sql, dynamic parameters = null) =>
            AddClause("where", sql, parameters, " OR ", "WHERE ", "\n", true);

        public SqlBuilder OrderBy(string sql, dynamic parameters = null) =>
            AddClause("orderby", sql, parameters, " , ", "ORDER BY ", "\n", false);

        public SqlBuilder Select(string sql, dynamic parameters = null) =>
            AddClause("select", sql, parameters, " , ", "", "\n", false);

        public SqlBuilder AddParameters(dynamic parameters) =>
            AddClause("--parameters", "", parameters, "", "", "", false);

        public SqlBuilder Join(string sql, dynamic parameters = null) =>
            AddClause("join", sql, parameters, "\nJOIN ", "\nJOIN ", "\n", false);

        public SqlBuilder GroupBy(string sql, dynamic parameters = null) =>
            AddClause("groupby", sql, parameters, " , ", "\nGROUP BY ", "\n", false);

        public SqlBuilder Having(string sql, dynamic parameters = null) =>
            AddClause("having", sql, parameters, "\nAND ", "HAVING ", "\n", false);

        public SqlBuilder Set(string sql, dynamic parameters = null) =>
             AddClause("set", sql, parameters, " , ", "SET ", "\n", false);

    }
}

Using SqlBuilder, we can define a SQL template and add the extension methods and helper methods required to build and retrieve the inner join. The helper methods in use are also included below the InnerJoin extension method. Note that we use SqlBuilder here to do much of the SQL template processing to end up
with the SQL that is sent to the DB (the RawSql property of the SqlBuilder template instance).

        /// <summary>
        /// Inner joins the left and right tables by specified left and right key expression lambdas.
        /// This uses a template builder and a shortcut to join two tables without having to specify any SQL manually
        /// and gives you the entire inner join result set. It is an implicit requirement that the <paramref name="leftKey"/>
        /// and <paramref name="rightKey"/> are compatible data types as they are used for the join.
        /// This method does for now not allow specifying any filtering (where-clause) or logic around the join besides
        /// just specifying the two columns to join on.
        /// </summary>
        /// <typeparam name="TLeftTable">Type of left table</typeparam>
        /// <typeparam name="TRightTable">Type of right table</typeparam>
        /// <param name="connection">IDbConnection to the DB</param>
        /// <param name="leftKey">Member expression of the left table in the join</param>
        /// <param name="rightKey">Member expression to the right table in the join</param>
        /// <returns>IEnumerable of ExpandoObject. Tip: Iterate through the IEnumerable and save each ExpandoObject into a variable of type dynamic to access the variables more conveniently if desired.</returns>
        public static IEnumerable<ExpandoObject> InnerJoin<TLeftTable, TRightTable>(this IDbConnection connection, 
            Expression<Func<TLeftTable, object>> leftKey, Expression<Func<TRightTable, object>> rightKey)
        {
            var builder = new SqlBuilder();
            string leftTableSelectClause = string.Join(",", GetPublicPropertyNames<TLeftTable>("l"));
            string rightTableSelectClause = string.Join(",", GetPublicPropertyNames<TRightTable>("r"));
            string leftKeyName = GetMemberName(leftKey);
            string rightKeyName = GetMemberName(rightKey); 
            string leftTableName = GetDbTableName<TLeftTable>();
            string rightTableName = GetDbTableName<TRightTable>(); 
            string joinSelectClause = $"select {leftTableSelectClause}, {rightTableSelectClause} from {leftTableName} l /**innerjoin**/";
            var selector = builder.AddTemplate(joinSelectClause);
            builder.InnerJoin($"{rightTableName} r on l.{leftKeyName} = r.{rightKeyName}");
            var joinedResults = connection.Query(selector.RawSql, selector.Parameters)
                .Select(x => (ExpandoObject)DapperUtilsExtensions.ToExpandoObject(x)).ToList();
            return joinedResults;
        }
        
        private static string[] GetPublicPropertyNames<T>(string tableQualifierPrefix = null)
        {
            return typeof(T).GetProperties(System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance)
                 .Where(x => !IsNotMapped(x))
                 .Select(x => !string.IsNullOrEmpty(tableQualifierPrefix) ? tableQualifierPrefix + "." + x.Name : x.Name).ToArray();
        }

        private static bool IsNotMapped(PropertyInfo x)
        {
            var notmappedAttr = x.GetCustomAttributes<NotMappedAttribute>()?.FirstOrDefault();
            return notmappedAttr != null;
        }
       /// <summary>
        /// Returns database table name, either via the System.ComponentModel.DataAnnotations.Schema.Table attribute
        /// if it exists, or just the name of the <typeparamref name="TClass"/> type parameter. 
        /// </summary>
        /// <typeparam name="TClass"></typeparam>
        /// <returns></returns>
        private static string GetDbTableName<TClass>()
        {
            var tableAttribute = typeof(TClass).GetCustomAttributes(typeof(TableAttribute), false)?.FirstOrDefault() as TableAttribute;
            if (tableAttribute != null)
            {
                if (!string.IsNullOrEmpty(tableAttribute.Schema))
                {
                    return $"[{tableAttribute.Schema}].[{tableAttribute.Name}]";
                }
                return tableAttribute.Name;
            }
            return typeof(TClass).Name;
        }     

        private static string GetMemberName<T>(Expression<Func<T, object>> expression)
        {
            switch (expression.Body)
            {
                case MemberExpression m:
                    return m.Member.Name;
                case UnaryExpression u when u.Operand is MemberExpression m:
                    return m.Member.Name;
                default:
                    throw new NotImplementedException(expression.GetType().ToString());
            }
        }

        public static ExpandoObject ToExpandoObject(object value)
        {
            IDictionary<string, object> dapperRowProperties = value as IDictionary<string, object>;
            IDictionary<string, object> expando = new ExpandoObject();
            if (dapperRowProperties == null)
            {
                return expando as ExpandoObject;
            }
            foreach (KeyValuePair<string, object> property in dapperRowProperties)
            {
                if (!expando.ContainsKey(property.Key))
                {
                    expando.Add(property.Key, property.Value);
                }
                else
                {
                    //suffix the colliding key with a random guid to avoid the collision
                    expando.Add(property.Key + Guid.NewGuid().ToString("N"), property.Value);
                } 
            }
            return expando as ExpandoObject;
        }       
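As the doc comment for InnerJoin hints, each returned ExpandoObject can be assigned to a dynamic variable so the joined columns can be accessed as ordinary properties. A minimal sketch of such usage, assuming an open IDbConnection named connection and the Product and Category POCOs shown further below:

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        // Sketch only: consume the joined rows via dynamic to read the columns of both tables
        IEnumerable<ExpandoObject> rows = connection.InnerJoin<Product, Category>(l => l.CategoryID, r => r.CategoryID);
        foreach (dynamic row in rows)
        {
            Console.WriteLine($"{row.ProductID} {row.ProductName} ({row.CategoryName})");
        }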
        
        

Here are the Nuget packages used in the small library project and in the test project:

	   <!-- lib project .NET 5 -->
       <PackageReference Include="Dapper" Version="2.0.90" />
	   <PackageReference Include="Microsoft.CSharp" Version="4.7.0" />
	   <PackageReference Include="System.ComponentModel.Annotations" Version="5.0.0" />
       
       <!-- test project-->
        <PackageReference Include="FluentAssertions" Version="5.10.3" />
		<PackageReference Include="Microsoft.CSharp" Version="4.7.0" />
		<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="3.1.16" />
		<PackageReference Include="Microsoft.Extensions.Options.ConfigurationExtensions" Version="3.1.16" />
		<PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.10.0" />
		<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
		<PackageReference Include="NUnit" Version="3.13.2" />
		<PackageReference Include="NUnit3TestAdapter" Version="4.0.0" />
		<PackageReference Include="System.ComponentModel.Annotations" Version="5.0.0" />
		<PackageReference Include="System.Data.SqlClient" Version="4.8.2" />

Two unit tests show how much simpler the syntax becomes with this helper method. The downside is that you cannot fully control the SQL yourself; the benefit is that it is quicker to implement.
  
       [Test]
        public void InnerJoinWithManualSqlReturnsExpected()
        {
            var builder = new SqlBuilder();
            var selector = builder.AddTemplate("select p.ProductID, p.ProductName, p.CategoryID, c.CategoryName, s.SupplierID, s.City from products p /**innerjoin**/");
            builder.InnerJoin("categories c on c.CategoryID = p.CategoryID");
            builder.InnerJoin("suppliers s on p.SupplierID = s.SupplierID");
            dynamic joinedproductsandcategoryandsuppliers = Connection.Query(selector.RawSql, selector.Parameters).Select(x => (ExpandoObject)DapperUtilsExtensions.ToExpandoObject(x)).ToList();
            var firstRow = joinedproductsandcategoryandsuppliers[0];
            Assert.AreEqual(firstRow.ProductID + firstRow.ProductName + firstRow.CategoryID + firstRow.CategoryName + firstRow.SupplierID + firstRow.City, "1Chai1Beverages1London");
        }

        [Test]
        public void InnerJoinWithoutManualSqlReturnsExpected()
        {
            var joinedproductsandcategory = Connection.InnerJoin<Product, Category>(l => l.CategoryID, r => r.CategoryID);
            dynamic firstRow = joinedproductsandcategory.ElementAt(0);
            Assert.AreEqual(firstRow.ProductID + firstRow.ProductName + firstRow.CategoryID + firstRow.CategoryName + firstRow.SupplierID, "1Chai1Beverages1");
        }
  
Our POCO classes used in the tests are these two. We use the Nuget package System.ComponentModel.Annotations with the attributes Table and NotMapped to control the SQL that is built: Table specifies the DB table name for the POCO (the name of the type is used as a fallback if the Table attribute is missing), and NotMapped excludes properties such as relationship properties ("navigation properties", in EF terms) that should not be part of the SQL select clause.
 
 using System.ComponentModel.DataAnnotations.Schema;

namespace DapperUtils.ToreAurstadIT.Tests
{
    [Table("Products")]
    public class Product
    {
        public int ProductID { get; set; }
        public string ProductName { get; set; }
        public int? SupplierID { get; set; }
        public int? CategoryID { get; set; }
        public string QuantityPerUnit { get; set; }
        public decimal? UnitPrice { get; set; }
        public short? UnitsInStock { get; set; }
        public short? UnitsOnOrder { get; set; }
        public short? ReorderLevel { get; set; }
        public bool? Discontinued { get; set; }
        [NotMapped]
        public Category Category { get; set; }
    }
}

using System.ComponentModel.DataAnnotations.Schema;

namespace DapperUtils.ToreAurstadIT.Tests
{
    [Table("Categories")]
    public class Category
    {
        public int CategoryID { get; set; }
        public string CategoryName { get; set; }
        public string Description { get; set; }
        public byte Picture { get; set; }
    }
}

 
In the end, we have an easy way to do a standard join. Possible improvements could be the following:
  • Support for where predicates to filter the joins
  • More control over the join condition if desired
  • Support for joins across three tables (or more) - SqlBuilder already supports this; what is missing is lambda expression support for IntelliSense
  • What if a property does not match a DB column? The Column attribute from System.ComponentModel.DataAnnotations.Schema should be supported.
  • Other join types such as left outer joins - this should be just a minor adjustment (a rough sketch follows below).
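A rough sketch of such a left outer join variant is shown below. This is not part of the library; it assumes the SqlBuilder also exposes a LeftJoin clause bound to a /**leftjoin**/ template token, as Dapper's own SqlBuilder does, and that the method lives in the same extensions class as the private helpers above.

        // Sketch of a LeftOuterJoin variant of the InnerJoin helper above. Assumes builder.LeftJoin(...)
        // exists and fills in the /**leftjoin**/ token of the template (as in Dapper's SqlBuilder).
        public static IEnumerable<ExpandoObject> LeftOuterJoin<TLeftTable, TRightTable>(this IDbConnection connection,
            Expression<Func<TLeftTable, object>> leftKey, Expression<Func<TRightTable, object>> rightKey)
        {
            var builder = new SqlBuilder();
            string leftColumns = string.Join(",", GetPublicPropertyNames<TLeftTable>("l"));
            string rightColumns = string.Join(",", GetPublicPropertyNames<TRightTable>("r"));
            string template = $"select {leftColumns}, {rightColumns} from {GetDbTableName<TLeftTable>()} l /**leftjoin**/";
            var selector = builder.AddTemplate(template);
            builder.LeftJoin($"{GetDbTableName<TRightTable>()} r on l.{GetMemberName(leftKey)} = r.{GetMemberName(rightKey)}");
            return connection.Query(selector.RawSql, selector.Parameters)
                .Select(x => (ExpandoObject)DapperUtilsExtensions.ToExpandoObject(x))
                .ToList();
        }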

Thursday, 1 July 2021

SelectMany / Flattening multiple arrays at arbitrary depth in Typescript (Javascript)

I just added a Flatten method to my SimpleTsLinq library today! The library is available on GitHub and npm. This method can flatten multiple arrays at a desired depth (defaulting to Infinity), and each array itself may have arbitrary depth. The end result is that the multiple (nested) arrays are returned as a single, flat array, much like SelectMany in LINQ! First I added the method to the generic Array interface below.
 
 export { } //creating a module of below code
declare global {
  type predicate<T> = (arg: T) => boolean;
  type sortingValue<T> = (arg: T) => any;
  type keySelector<T> = (arg: T) => any;
  type resultSelector<T, TInner> = (arg: T, arg2: TInner) => any;
  interface Array<T> {
    AddRange<T>(itemsToAdd: T[]);
    InsertRange<T>(index: number, itemsToAdd: T[]);
    RemoveAt(index: number): T;
    RemoveWhere<T>(condition: predicate<T>): T[];
    FirstOrDefault<T>(condition: predicate<T>): T;
    SingleOrDefault<T>(condition: predicate<T>): T;
    First<T>(condition: predicate<T>): T;
    Single<T>(condition: predicate<T>): T;
    LastOrDefault<T>(condition: predicate<T>): T;
    Join<T, TInner>(otherArray: TInner[], outerKeySelector: keySelector<T>,
      innerKeySelector: keySelector<TInner>, res: resultSelector<T, TInner>): any[];
    Where<T>(condition: predicate<T>): T[];
    Count<T>(): number;
    CountBy<T>(condition: predicate<T>): number;
    Select<T>(...properties: (keyof T)[]): any[];
    GroupBy<T>(groupFunc: (arg: T) => string): any[];
    EnumerableRange(start: number, count: number): number[];
    Any<T>(condition: predicate<T>): boolean;
    Contains<T>(item: T): boolean;
    All<T>(condition: predicate<T>): boolean;
    MaxSelect<T>(property: (keyof T)): any;
    MinSelect<T>(property: (keyof T)): any;
    Average<T>(): number;
    AverageSelect<T>(property: (keyof T)): number;
    Max(): any;
    Min(): any;
    Sum(): any;
    Reverse<T>(): T[];
    Empty<T>(): T[];
    Except<T>(otherArray: T[]): T[];
    Intersect<T>(otherArray: T[]): T[];
    Union<T>(otherArray: T[]): T[];
    Cast<TOtherType>(TOtherType: Function): TOtherType[];
    TryCast<TOtherType>(TOtherType: Function): TOtherType[];
    GetProperties<T>(TClass: Function, sortProps: boolean): string[];
    Concat<T>(otherArray: T[]): T[];
    Distinct<T>(): T[];
    DistinctBy<T>(property: (keyof T)): any;
    SumSelect<T>(property: (keyof T)): any;
    Intersect<T>(otherArray: T[]): T[];
    IntersectSelect<T>(property: (keyof T), otherArray: T[]): T[];
    MinSelect<T>(property: (keyof T)): any;
    OrderBy<T>(sortMember: sortingValue<T>): T[];
    OrderByDescending<T>(sortMember: sortingValue<T>): T[];
    ThenBy<T>(sortMember: sortingValue<T>): T[];
    OfType<T>(compareObject: T): T[];
    SequenceEqual<T>(compareArray: T): boolean;
    Take<T>(count: number): T[];
    ToDictionary<T>(keySelector: (arg: T) => any): any;
    TakeWhile<T>(condition: predicate<T>): T[];
    SkipWhile<T>(condition: predicate<T>): T[];
    Skip<T>(count: number): T[];
    defaultComparerSort<T>(x: T, y: T);
    ElementAt<T>(index: number);
    ElementAtOrDefault<T>(index: number);
    Aggregate<T>(accumulator: any, currentValue: any, reducerFunc: (accumulator: any, currentValue: any) => any): any;
    AggregateSelect<T>(property: (keyof T), accumulator: any, currentValue: any, reducerFunc: (accumulator: any, currentValue: any) => any): any;
    Flatten<T>(otherArrays: T[][], depth: number): T[];
  }
}
 

Now we can implement the method as follows:
 
  
if (!Array.prototype.Flatten) {
  Array.prototype.Flatten = function <T>(otherArrays: T[][] = null, depth = Infinity) {
    let flattenedArrayOfThis = [...flatten(this, depth)];
    if (otherArrays == null || otherArrays == undefined) {
      return flattenedArrayOfThis;
    }
    return [...flattenedArrayOfThis, ...flatten(otherArrays, depth)];
  }
}

function* flatten(array, depth) {
  if (depth === undefined) {
    depth = 1;
  }
  for (const item of array) {
    if (Array.isArray(item) && depth > 0) {
      yield* flatten(item, depth - 1);
    } else {
      yield item;
    }
  }
}

 
The implementation uses a generator function (identified by the asterisk after the function keyword) which is called recursively whenever we encounter an array within an array. The two tests below are run in Karma to try it out.
 
    it('can flatten multiple arrays into a single array', () => {
    let oneArray = [1, 2, [3, 3]];
    let anotherArray = [4, [4, 5], 6];
    let thirdArray = [7, 7, [7, 7]];
    let threeArrays = [oneArray, anotherArray, thirdArray];
    let flattenedArrays = oneArray.Flatten([anotherArray, thirdArray], Infinity);
    let isEqualInContentToExpectedFlattenedArray = flattenedArrays.SequenceEqual([1, 2, 3, 3, 4, 4, 5, 6, 7, 7, 7, 7]);
    expect(isEqualInContentToExpectedFlattenedArray).toBe(true);
  });

  it('can flatten one deep array into a single array', () => {
    let oneArray = [1, 2, [3, 3]];
    let flattenedArrays = oneArray.Flatten(null, 1);
    let isEqualInContentToExpectedFlattenedArray = flattenedArrays.SequenceEqual([1, 2, 3, 3]);
    expect(isEqualInContentToExpectedFlattenedArray).toBe(true);
  }); 
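For comparison, the LINQ SelectMany that Flatten is likened to flattens exactly one level of nesting in C#, similar to calling Flatten(null, 1) above. A tiny sketch, not part of the library:

using System;
using System.Collections.Generic;
using System.Linq;

// SelectMany flattens one level of nesting
var nested = new List<List<int>> { new List<int> { 1, 2 }, new List<int> { 3, 3 } };
var flat = nested.SelectMany(x => x).ToList();
Console.WriteLine(string.Join(",", flat)); // 1,2,3,3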
 

Saturday, 12 June 2021

Concepts of a simple draw ink control in Windows Forms

This article will present a simple draw ink control in Windows Forms. The code is run in Linqpad and the concepts here should be easily portable to a small application. Note - there are already built-in controls for this in Windows Forms (and WPF and UWP too). That is not the point of this article. The point is to show how you can use System.Reactive and the Observable.FromEventPattern method to create an event source stream from
CLR events, so you can build reactive applications where the source pushes updates to its target/receiver instead of the traditional pull-based scenario of event subscriptions. First off, install Linqpad from: https://www.linqpad.net I used Linqpad 5 for this code; you can of course download Linqpad 6 with .NET Core support, but this article is tailored for Linqpad 5 and .NET Framework. After installing Linqpad 5, start it and hit F4. Choose Add NuGet, then Search online, and add the following four Nuget packages to get started with Reactive Extensions for .NET:
  • System.Reactive
  • System.Reactive.Core
  • System.Reactive.Interfaces
  • System.Reactive.Linq
Also choose Add... and select System.Windows.Forms. Then go to the tab Additional Namespace Imports and import these namespaces:
  • System.Reactive
  • System.Reactive.Linq
  • System.Windows.Forms
Over to the code, first we create a Form with a PictureBox to draw onto like this in C# program:


void Main()
{
	var form = new Form();
	form.Width = 800;
	form.Height = 800;
	form.BackColor = Color.White;
	
	var canvas = new PictureBox();
	canvas.Height = 400;
	canvas.Width = 400;
	canvas.BackColor = Color.AliceBlue;
	form.Controls.Add(canvas);
    .. //more code soon


Next up we create a list of Point to add the points to. We also use Observable.FromEventPattern to track events, using this System.Reactive method to create an observable from a CLR event. We then subscribe to the three observables we have set up and add the logic to draw anti-aliased Bezier curves. Normally, drawing a Bezier curve consists of the end user defining four control points: the start and end of the Bezier line plus two control points (for the simplest Bezier curve). However, I chose anti-aliased Bezier curves that just use the last four points of the dragged line, since smooth Bezier curves look way better than, for example, using DrawLine for simple polylines. I use the GDI CreateGraphics() method of the PictureBox (this is also available on most other Windows Forms controls, including Forms, but I wanted to restrict the drawing to the PictureBox). The full code is the entire snippet below:
 
 void Main()
{
	var form = new Form { Width = 800, Height = 800, BackColor = Color.White };
	var canvas = new PictureBox { Height = 400, Width = 400, BackColor = Color.AliceBlue };
	form.Controls.Add(canvas);	
    var points = new List<Point>();
	bool isDrag = false;	
	var mouseDowns = Observable.FromEventPattern<MouseEventArgs>(canvas, "MouseDown");
	var mouseUps = Observable.FromEventPattern<MouseEventArgs>(canvas, "MouseUp");
	var mouseMoves = Observable.FromEventPattern<MouseEventArgs>(canvas, "MouseMove");
	mouseDowns.Subscribe(m =>
	{
		if (m.EventArgs.Button == MouseButtons.Right)
		{
			isDrag = false;
			points.Clear();
			canvas.CreateGraphics().Clear(Color.AliceBlue);
			return;
		}
	 isDrag = true;	 
	});	
	mouseUps.Subscribe(m => {
		isDrag = false;
	});	
	mouseMoves.Subscribe(move =>  {
	 points.Add(new Point(move.EventArgs.Location.X, move.EventArgs.Location.Y));
	 if (isDrag && points.Count > 4) {
			//form.CreateGraphics().DrawLine(new Pen(Color.Blue, 10), points[points.Count - 2].X, points[points.Count - 2].Y, points[points.Count - 1].X, points[points.Count - 1].Y);
			var pt1 = new PointF(points[points.Count - 4].X, points[points.Count - 4].Y);
			var pt2 = new PointF(points[points.Count - 3].X, points[points.Count - 3].Y);
			var pt3 = new PointF(points[points.Count - 2].X, points[points.Count - 2].Y);
			var pt4 = new PointF(points[points.Count - 1].X, points[points.Count - 1].Y);			
			var graphics = canvas.CreateGraphics();
			graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
			graphics.DrawBezier(new Pen(Color.Blue, 4.0f), pt1, pt2, pt3, pt4);			
		}		
	});	
	form.Show();
}
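One small refinement to the listing above (my suggestion, not part of the original snippet): Graphics and Pen implement IDisposable, so the drawing step inside the MouseMove subscription could wrap them in using blocks. The sketch below reuses the canvas and pt1..pt4 variables from the listing:

			// Sketch: dispose the GDI objects after each draw
			using (var graphics = canvas.CreateGraphics())
			using (var pen = new Pen(Color.Blue, 4.0f))
			{
				graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
				graphics.DrawBezier(pen, pt1, pt2, pt3, pt4);
			}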


 
Linqpad/System.Reactive/GDI Windows Forms in action! Screenshot:

I have left a commented-out DrawLine call for drawing a simple polyline instead of a Bezier curve, since this also works and is quicker than the nicer Bezier curve; maybe you want to display this on a simple device with less processing power. To clear the drawing, just right-click. To start drawing, left-click, drag, and let go again. Now look how easy this code really is for creating a simple ink control in Windows Forms! Of course, Windows Forms looks more and more dated compared to younger frameworks, but it still does its job. WPF has its own built-in InkCanvas control. But in case you want an ink control in Windows Forms, this is an easy way of creating one, and also a good Hello World for Reactive Extensions. In .NET Core, the code should be very similar to the code above; Windows Forms is available with .NET Core 3.0 or newer. https://devblogs.microsoft.com/dotnet/windows-forms-designer-for-net-core-released/

Monday, 7 June 2021

Json serialization using Utf8JsonReaderSerializer in .net core

.NET 5 and .NET Core contain a lot of new Json functionality in the System.Text.Json namespace. I created a helper class that reads a file using Utf8JsonReader and outputs the json as a formatted json string. With optimizations, the serialization could be made even faster. For now, I need to go via a StringBuilder and remove the trailing commas of arrays and of object properties, since the Utf8JsonReader is sequential and forward-only, as mentioned on the API page: https://docs.microsoft.com/en-us/dotnet/api/system.text.json.utf8jsonreader?view=net-5.0
This is the helper method I came up with for reading a file via the Utf8JsonReader:
 

using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.Json;

namespace SystemTextJsonTestRun
{
    public static class Utf8JsonReaderSerializer
    {

        public static string ReadFile(string filePath)
        {         
            if (!File.Exists(filePath))
            {
                throw new FileNotFoundException(filePath);
            }

            var jsonBytes = File.ReadAllBytes(filePath);
            var jsonSpan = jsonBytes.AsSpan();
            var json = new Utf8JsonReader(jsonSpan);
            var sb = new StringBuilder();

            while (json.Read())
            {
                if (json.TokenType == JsonTokenType.StartObject)
                {
                    sb.Append(Environment.NewLine);
                }
                else if (json.TokenType == JsonTokenType.EndObject)
                {
                    //remove last comma added 

                    sb.RemoveLast(",");

                    sb.Append(Environment.NewLine);
                }

                if (json.CurrentDepth > 0)
                {
                    for (int i = 0; i < json.CurrentDepth; i++)
                    {
                        sb.Append(" "); //space indentation
                    }
                }

                sb.Append(GetTokenRepresentation(json));


                if (json.TokenType == JsonTokenType.EndObject || json.TokenType == JsonTokenType.EndArray)
                {
                    sb.AppendLine();
                }

                if (new[] { JsonTokenType.String, JsonTokenType.Number, JsonTokenType.Null, JsonTokenType.False,
                JsonTokenType.None, JsonTokenType.True }.Contains(json.TokenType))
                {
                    sb.AppendLine(",");
                }

            }

            //remove last comma for EndObject 

            sb.RemoveLast(",");

            return sb.ToString(); 


        }


        private static string GetTokenRepresentation(Utf8JsonReader json) =>
          json.TokenType switch
          {
              JsonTokenType.StartObject => $"{{{Environment.NewLine}",
              JsonTokenType.EndObject => "},",
              JsonTokenType.StartArray => $"[{Environment.NewLine}",
              JsonTokenType.EndArray => $"]",
              JsonTokenType.PropertyName => $"\"{json.GetString()}\":",
              JsonTokenType.Comment => json.GetString(),
              JsonTokenType.String => $"\"{json.GetString()}\"",
              JsonTokenType.Number => GetNumberToString(json),
              JsonTokenType.True => json.GetBoolean().ToString().ToLower(),
              JsonTokenType.False => json.GetBoolean().ToString().ToLower(),
              JsonTokenType.Null => string.Empty,
              _ => "Unknown Json token type"
          };

        //TODO: Use the Try methods of the Utf8JsonReader more than trying and failing here 

        private static string GetNumberToString(Utf8JsonReader json)
        {
            try
            {
                if (int.TryParse(json.GetInt32().ToString(), out var res))
                    return res.ToString();
            }
            catch
            {
                try
                {
                    if (float.TryParse(json.GetSingle().ToString(), out var resFloat))
                        return resFloat.ToString();
                }
                catch
                {
                    try
                    {
                        if (decimal.TryParse(json.GetDouble().ToString(), out var resDes))
                            return resDes.ToString();
                    }
                    catch
                    {
                        return "?";
                    }
                }
            }
            return $"?"; //fallback to a string if not possible to deduce the type
        }

    }
}
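The RemoveLast call used above is an extension method on StringBuilder that is not shown in the listing. A minimal sketch of what I assume it does (removing the last occurrence of the given token, if present) could look like this:

using System;
using System.Text;

namespace SystemTextJsonTestRun
{
    public static class StringBuilderExtensions
    {
        /// <summary>
        /// Removes the last occurrence of <paramref name="token"/> from the StringBuilder, if present.
        /// Sketch of the RemoveLast helper used above; the original implementation is not shown here.
        /// </summary>
        public static StringBuilder RemoveLast(this StringBuilder sb, string token)
        {
            string current = sb.ToString();
            int index = current.LastIndexOf(token, StringComparison.Ordinal);
            if (index >= 0)
            {
                sb.Remove(index, token.Length);
            }
            return sb;
        }
    }
}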

  

The json file I tested the code with came out again as this string:

{
 "courseName": "Build Your Own Application Framework",
 "language": "C#",
 "author":
 {
  "firstName":  "Matt",
  "lastName":  "Honeycutt"

 },
 "publishedAt": "2012-03-13T12:30:00.000Z",
 "publishedYear": 2014,
 "isActive": true,
 "isRetired": false,
 "tags": [
  "aspnet",
  "C#",
  "dotnet"
 ]

}

This output also validates against JSON Lint: https://jsonlint.com Now why even bother parsing a json file just to output it again as a json string? Well, first of all, we use the very fast Utf8JsonReader parser from .NET, and we can do various processing along its forward-only, sequential pass, for example formatting the file with indentation. Utf8JsonReader also validates the json document strictly against the Json specification, RFC 8259. Hence, we get validation for free here too, by catching any errors and returning true or false in a method that scans the file (checking whether json.Read() returns false, or catching the JsonException thrown if a node of the json document does not validate). A low-level walk with Utf8JsonReader also lets you see the different tokens of the json document structure that .NET exposes. We could transform the document or add specific formatting and so on by altering the code displayed here. To run the code, test it with a sample json document like this:

    using System;
    using System.IO;

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Utf8JsonReader sample");

            string json = Utf8JsonReaderSerializer.ReadFile("sample.json");
            string tempFile = Path.ChangeExtension(Path.GetTempFileName(), "json");
            File.WriteAllText(tempFile, json);
            Console.WriteLine($"Json file read and processed result in location: {tempFile}");
            Console.WriteLine($"Json file contents: {Environment.NewLine}{json}");
        }
    }

I have added the code for this here: https://github.com/toreaurstadboss/Utf8DataJsonReaderTest/
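As mentioned above, the reader can double as a cheap validator by catching JsonException. A minimal sketch of such a check (not part of the linked repository):

using System.Text;
using System.Text.Json;

public static class JsonValidator
{
    /// <summary>
    /// Returns true if the given text is syntactically valid json according to Utf8JsonReader,
    /// which follows RFC 8259. Sketch only, not part of the demo repository.
    /// </summary>
    public static bool IsValidJson(string jsonText)
    {
        var bytes = Encoding.UTF8.GetBytes(jsonText);
        var reader = new Utf8JsonReader(bytes);
        try
        {
            while (reader.Read())
            {
                // just advance through every token; invalid syntax throws JsonException
            }
            return true;
        }
        catch (JsonException)
        {
            return false;
        }
    }
}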

Sunday, 25 April 2021

Making NUnit tests run in Team City for NUnit 3.x

Team City has several bugs when it comes to running NUnit tests. The following guide shows how you can prepare the Team City build agent to run NUnit 3.x tests. First we need to install the NUnit console runner; tips around this were found in the Stack Overflow threads linked at the end of this post, and it is also mentioned in the Team City documentation. First off, add two Command Line steps and put one of the two commands below into each step - these steps can be run at the start of the pipeline in Team City.


%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install NUnit.Console -version 3.10.0 -o packages -ExcludeVersion -OutputDirectory %system.teamcity.build.tempDir%\NUnit
%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install NUnit.Extension.NUnitProjectLoader -version 3.6.0 -o packages
The following Nuget packages for NUnit were used:
  • NUnit 3.2.0
  • NUnit.ConsoleRunner 3.10.0
  • NUnit.Extension.NUnitProjectLoader 3.6.0
  • NUnit.Extension.TeamCityEventListener 1.0.7
  • NUnit3TestAdapter 3.16.1
Inside the NUnit runner type step, also configure the NUnit console path. Use this path:
packages\NUnit.ConsoleRunner.3.10.0\tools\nunit3-console.exe For the test assemblies, make sure you use a path like this: **\bin\%BuildConfiguration%\*.Test.dll Add the %BuildConfiguration% parameter and set it to: Debug
More tips here: https://stackoverflow.com/questions/57953724/nunit-teamcity-process-exited-with-code-4

And here:
https://stackoverflow.com/questions/36996564/nunit-3-2-1-teamcity-could-not-load-file-or-assembly-nunit-framework

Sunday, 18 April 2021

Implementing a Strip method with regex in C#

This article will present a Strip method that accepts a Regex defining the pattern of allowed characters. It is similar to Regex.Replace, but works the opposite way: instead of removing the chars that match the pattern, as Regex.Replace does, this utility method lets you define the allowed chars, i.e. the chars defined in the regex are the chars you want to keep. First off, we define the utility method as an extension method.
 

        /// <summary>
        /// Strips away every character not defined in the provided regex <paramref name="allowedChars"/>
        /// </summary>
        /// <param name="s">Input string</param>
        /// <param name="allowedChars">The allowed characters defined in a Regex with pattern, for example: [A-z|0-9]+/</param>
        /// <returns>Input string with only the allowed characters</returns>
        public static string Strip(this string s, Regex allowedChars)
        {
            if (s == null)
            {
                return s;
            }
            if (allowedChars == null)
            {
                return string.Empty;
            }
            Match match = Regex.Match(s, allowedChars.ToString());
            List<char> allowedAlphabet = new List<char>();
            while (match.Success)
            {
                if (match.Success)
                {
                    for (int i = 0; i < match.Groups.Count; i++)
                    {
                        allowedAlphabet.AddRange(match.Groups[i].Value.ToCharArray());
                    }
                }
                match = match.NextMatch();
            }
            return new string(s.Where(ch => allowedAlphabet.Contains(ch)).ToArray());
        }
        
          
Here are some tests that try out this Strip method:
 
 
 	 	[Test]
        [TestCase("abc123abc", "[A-z]+", "abcabc")]
        [TestCase("abc123def456", "[0-9]+", "123456")]
     	[TestCase("The F-32 Lightning II is a newer generation fighter jets than the F-16 Fighting Falcon", "[0-9]+", "3216")]
		[TestCase("Here are some Norwegian letters : ÆØÅ and in lowercase: æøå", "[æ|ø|å]", "æøå")]
		public void TestStripWithRegex(string input, string regexString, string expectedOutput)
        {
            var regex = new Regex(regexString);
            input.Strip(regex).Should().Be(expectedOutput);
        }
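To make the "inverted" relationship to Regex.Replace concrete, here is a small comparison (a sketch, not part of the tests above): keeping only the allowed characters with Strip gives the same result as replacing the negated character class with an empty string. This assumes the Strip extension method defined above is in scope.

using System;
using System.Text.RegularExpressions;

var input = "abc123def456";

// Keep only digits by defining the allowed characters positively
string viaStrip = input.Strip(new Regex("[0-9]+"));      // "123456"

// The same result with Regex.Replace, by removing everything that is NOT a digit
string viaReplace = Regex.Replace(input, "[^0-9]+", ""); // "123456"

Console.WriteLine(viaStrip == viaReplace);               // True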
 
 

Monday, 1 March 2021

Implementing ToDictionary in Typescript

In this article I will present some code I just added to my SimpleTsLinq library, which you can easily install using npm; the library is published on npmjs.com. The ToDictionary method looks like this:

  
  if (!Array.prototype.ToDictionary) {
  Array.prototype.ToDictionary = function <T>(keySelector: (arg: T) => any): any {
    let hash = {};
    this.map(item => {
      let key = keySelector(item);
      if (!(key in hash)) {
        hash[key] = item;
      }
      else {
        if (!(Array.isArray(hash[key]))) {
          hash[key] = [hash[key]];
        }
        hash[key].push(item);
      }
    });
    return hash;
  }
}
  

Here is a unit test (spec) for this method :

  
    it('can apply method ToDictionary on an array, allowing specification of a key selector for the dictionary object', () => {
    let heroes = [{ name: "Han Solo", age: 47, gender: "M" }, { name: "Leia", age: 29, gender: "F" }, { name: "Luke", age: 24, gender: "M" }, { name: "Lando", age: 47, gender: "M" }];
    let dictionaryOfHeroes = heroes.ToDictionary<Hero>(x => x.gender);

    let expectedDictionary = {
      "F": {
        name: "Leia", age: 29, gender: "F"
      },
      "M": [
        { name: "Han Solo", age: 47, gender: "M" },
        { name: "Luke", age: 24, gender: "M" },
        { name: "Lando", age: 47, gender: "M" }
      ]
    };
    expect(dictionaryOfHeroes).toEqual(expectedDictionary);
  });
  
  

You can also test out this library using npm RunKit. We can make a dictionary with different keys by choosing another key selector than gender.
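As a side note for C# readers (not part of the library): LINQ's own ToDictionary throws on duplicate keys, so the behaviour implemented above, where colliding keys are grouped into arrays, is closer to LINQ's ToLookup:

using System;
using System.Linq;

var heroes = new[]
{
    new { Name = "Han Solo", Gender = "M" },
    new { Name = "Leia", Gender = "F" },
    new { Name = "Luke", Gender = "M" }
};

// ToLookup groups colliding keys, much like the TypeScript ToDictionary above
var byGender = heroes.ToLookup(h => h.Gender);
Console.WriteLine(string.Join(", ", byGender["M"].Select(h => h.Name))); // Han Solo, Luke

// heroes.ToDictionary(h => h.Gender) would instead throw an ArgumentException,
// since the key "M" occurs more than once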