I have written a presentation about a GraphQL demo repository of mine. The presentation slides themselves are in Norwegian.
The presentation walks through a demo repository that uses GraphQL with HotChocolate in the backend, together with Entity Framework Core 6 and .NET 6 (C#, of course), and Blazor with StrawberryShake in the frontend! You can read the 43 slides via the OneDrive link below. In the talk I go through key takeaways about GraphQL, including case-insensitive search, paginated data and projections, and give an overview of what GraphQL is about and which advantages you can get out of it. Key advantages of GraphQL are:
* Flexibility - specify which fields you want, to avoid "overfetching"
* Performance - fewer API calls, and you avoid the waterfall fetching where you must fetch ever more resources as from a REST API; instead you get an aggregated, tailored structure of exactly the data you actually want back
* One common endpoint, /graphql - you avoid creating controllers as in a REST API, which often feels unnecessary
GraphQL is not something that can solve every challenge in API design, but it can give clients much more flexibility and also spare API designers from constantly having to add more methods that suffer from "overfetching" or, even worse, underfetching, which leads to more API calls and worse performance. You make better use of bandwidth and server resources by only fetching the information you actually need. And GraphQL is not only about queries; it also covers changes (mutations), the pub/sub event pattern (subscriptions) and a lot of other functionality that belongs to API design!
You can read the PowerPoint presentation here (43 slides, reading time roughly 1 hour if you want to study it closely, about 15 minutes if you skim it).
#blazor #hotchocolate #strawberryshake #chillicream #apidesign #csharp #dotnet #codinggrounds
The presentation is here :
PowerPoint presentation (Norwegian, 11 December 2022) :
https://1drv.ms/u/s!AhGGDxs-tzqJcFrls6Fue8Xnjx4?e=lWYYwU
Sunday, 11 December 2022
Friday, 18 November 2022
Case insensitive search in HotChocolate GraphQL
I tested out the contains operator on string fields in GraphQL today. It is actually case sensitive, which is counter-intuitive since the GraphQL backend is connected to a SQL Server database, which usually performs a case-insensitive search with the 'contains' operator (using the 'LIKE' operator under the hood). The following adjustments need to be made to make it work. First off, define a class inheriting from QueryableStringOperationHandler:
using HotChocolate.Data.Filters;
using HotChocolate.Data.Filters.Expressions;
using HotChocolate.Language;
using System.Linq.Expressions;
using System.Reflection;
namespace AspNetGraphQLDemoV2.Server
{
public class QueryableStringInvariantContainsHandler : QueryableStringOperationHandler
{
private static readonly MethodInfo _contains = typeof(string).GetMethod(nameof(string.Contains), new[] { typeof(string) })!;
public QueryableStringInvariantContainsHandler(InputParser inputParser) : base(inputParser)
{
}
protected override int Operation => DefaultFilterOperations.Contains;
public override Expression HandleOperation(QueryableFilterContext context, IFilterOperationField field,
IValueNode value, object? parsedValue)
{
Expression property = context.GetInstance();
if (parsedValue is string str)
{
var toLower = Expression.Call(property, typeof(string).GetMethod("ToLower", Type.EmptyTypes)!); //get the ToLower method of the string class via reflection. Type.EmptyTypes retrieves the ToLower overload that accepts no arguments.
var finalExpression = Expression.Call(toLower, _contains, Expression.Constant(str.ToLower()));
return finalExpression;
}
throw new InvalidOperationException();
}
}
}
We override the Operation property to be 'Contains', meaning we adjust how the expression tree for this GraphQL filter operation is built by overriding HandleOperation. This is similar to the QueryableStringOperationHandler example presented here: https://chillicream.com/docs/hotchocolate/api-reference/extending-filtering - in our case we support the contains operation instead.
The finalExpression 'DebugView' evaluates to :
.Call (.Call ($_s0.OfficialName).ToLower()).Contains("tind")
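In plain LINQ terms, the expression the handler builds corresponds roughly to the following predicate (a sketch for illustration, reusing the Mountain entity and MountainDbContext that appear further below):
// Roughly the predicate the handler produces for the query shown further below
IQueryable<Mountain> filtered = mountainDb.Mountains
    .Where(m => m.OfficialName.ToLower().Contains("tind"));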
To actually use this adaptation of string property filtering in HotChocolate, we do the following in Program.cs (the startup class in .NET 6). We not only add filtering support to HotChocolate, but first also define a filter convention extension, to make the handler easier to register and to avoid cluttering the startup code in Program.cs:
using HotChocolate.Data.Filters;
using HotChocolate.Data.Filters.Expressions;
namespace AspNetGraphQLDemoV2.Server
{
public class FilterConventionExtensionForInvariantContainsStrings : FilterConventionExtension
{
protected override void Configure(IFilterConventionDescriptor descriptor)
{
descriptor.AddProviderExtension(new QueryableFilterProviderExtension(
y => y.AddFieldHandler<QueryableStringInvariantContainsHandler>()));
}
}
}
You can see an example of how I register this case-insensitive contains filter in the Program.cs example code below. Note both the usage of .AddFiltering() and the .AddConvention() call.
var builder = WebApplication.CreateBuilder(args);
var connectionString = builder.Configuration.GetConnectionString("MountainsV2Db");
builder.Services
.AddDbContext<MountainDbContext>(options =>
{
options.UseSqlServer(connectionString);
})
.AddCors()
.AddGraphQLServer()
.AddProjections()
.AddFiltering()
.AddConvention<IFilterConvention, FilterConventionExtensionForInvariantContainsStrings>()
.AddSorting()
.RegisterDbContext<MountainDbContext>()
.AddQueryType<MountainQueries>()
.AddMutationType<MountainMutations>()
.AddSubscriptionType<MountainSubscriptions>()
.AddInMemorySubscriptions();
var app = builder.Build();
Now, here is a sample .graphql file (containing a GraphQL query) that shows how the new filtering capability can be used :
query {
mountains (where: { officialName: {contains: "TiND"}}) {
officialName
}
}
The backend code retrieves a list of rows from a table, and I have added the [UseFiltering] attribute to tell HotChocolate that filtering should be supported for this method.
public class MountainQueries
{
[UseFiltering]
[UseSorting]
public async Task<List<Mountain>> GetMountains([Service] MountainDbContext mountainDb)
{
return await mountainDb.Mountains.ToListAsync();
}
//..
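Note that since GetMountains materializes the full list with ToListAsync before HotChocolate applies the filter, the filtering above happens in memory. If you instead return an IQueryable<Mountain>, HotChocolate can compose the filter (and sorting/projection) into the EF Core query so it executes in the database. A rough sketch of that variant:
public class MountainQueries
{
    [UseProjection]
    [UseFiltering]
    [UseSorting]
    public IQueryable<Mountain> GetMountains([Service] MountainDbContext mountainDb)
        => mountainDb.Mountains; // HotChocolate composes filtering/sorting/projection into the SQL query
}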
Sunday, 4 September 2022
Faster expansion of databases in Sql Server Management Studio
I have observed slow expansion of the databases tree in SQL Server Management Studio. This can actually be fixed on many SQL Server instances by running this T-SQL :
EXECUTE sp_MSforeachdb
'
IF (''?'' NOT IN (''master'', ''tempdb'', ''msdb'', ''model''))
EXECUTE (''ALTER DATABASE [?] SET AUTO_CLOSE OFF WITH NO_WAIT'')'
Now, this does not look much like ordinary SQL, since it is T-SQL. It disables AUTO_CLOSE for every (user) database on your database server (that is, 'instance'), which makes expanding the databases tree in SSMS fast again!
This is a variant of what Pinal Dave mentions here; I just made an iterative version of what he suggests :
https://blog.sqlauthority.com/2016/09/22/sql-server-set-auto_close-database-option-off-better-performance/
Saturday, 27 August 2022
DynamicObject : Thread safety and anonymous type initialization
This article will present code that shows how we can create a custom dynamic object and control its behavior, supporting thread safety when setting and getting members and allowing anonymous type initialization.
The code is available in this GitHub repo :
git clone https://github.com/toreaurstadboss/DynamicObjectThreadSafe.git
The custom dynamic object inherits from the class DynamicObject and uses a thread safe dictionary.
using System;
using System.Collections;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Dynamic;
using System.Linq;
using System.Reflection;
using System.Text.Json;
using System.Text.Json.Serialization;
namespace DynamicObjectThreadSafe
{
public class ThreadSafeDynamicObject : DynamicObject, IEnumerable<KeyValuePair<string, object>>
{
public ThreadSafeDynamicObject()
{
}
public ThreadSafeDynamicObject(dynamic members)
{
dynamic membersDict = ToDictionary(members);
InitMembers(membersDict);
}
private IDictionary<string, object> ToDictionary(object data)
{
var attr = BindingFlags.Public | BindingFlags.Instance;
var dict = new Dictionary<string, object>();
foreach (var property in data.GetType().GetProperties(attr))
{
if (property.CanRead)
{
dict.Add(property.Name, property.GetValue(data, null));
}
}
return dict;
}
private void InitMembers(IDictionary<string, object> membersDict)
{
foreach (KeyValuePair<string, object> member in membersDict)
{
_members.AddOrUpdate(member.Key, member.Value, (key, oldValue) => member.Value);
}
}
private readonly ConcurrentDictionary<string, object> _members = new ConcurrentDictionary<string, object>();
public override bool TryGetMember(GetMemberBinder binder, out object result)
{
return _members.TryGetValue(binder.Name, out result);
}
public override bool TrySetMember(SetMemberBinder binder, object value)
{
_members.AddOrUpdate(binder.Name, value, (key, oldvalue) => value);
return true;
}
public override IEnumerable<string> GetDynamicMemberNames()
{
return _members.Keys.ToList().AsReadOnly();
}
public override string ToString()
{
return JsonSerializer.Serialize(_members);
}
public IEnumerator<KeyValuePair<string, object>> GetEnumerator()
{
return _members.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return _members.GetEnumerator();
}
}
}
The method ToDictionary transforms an input object, for example an instance of an anonymous class (it accepts an object parameter, so the argument is implicitly treated as an object), and the InitMembers method then populates the ConcurrentDictionary<string, object>. This allows us to pass anonymous objects in and use the result as a dynamic object for further consumption, for example to output its fields. Now why would you use dynamic objects
like this? Dynamic objects are practical in many situations where you do not know the type until runtime. Many implementations of custom dynamic objects use a Dictionary as a 'backing store' for the fields/properties/members of the object.
This implementation uses a ConcurrentDictionary instead, so it should be thread safe with respect to retrieving or setting members, as shown in the overrides of TryGetMember and TrySetMember.
The override of GetDynamicMemberNames is there for showing members in the debugger's 'Dynamic view', so the dynamic object can be inspected properly. The GetEnumerator methods are implemented so the dynamic object can be enumerated as a sequence of KeyValuePair<string, object> entries.
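For example, the members can be enumerated like this (a small sketch using the class above):
dynamic car = new ThreadSafeDynamicObject(new { Make = "Toyota", Year = 2022 });
foreach (var member in (IEnumerable<KeyValuePair<string, object>>)car)
{
    Console.WriteLine($"{member.Key} = {member.Value}"); // Make = Toyota, Year = 2022
}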
The following test then passes :
[Fact]
public void It_Can_Accept_Anonymous_Type_initialization()
{
dynamic threadSafeToyota = new ThreadSafeDynamicObject(new
{
Make = "Toyota",
Model = "CR-H",
Propulsion = new
{
IsHybrid = true,
UsesPetrol = true,
ElectricMotor = true
}
});
Assert.Equal("Toyota", threadSafeToyota.Make);
Assert.Equal("CR-H", threadSafeToyota.Model);
Assert.Equal(true, threadSafeToyota.Propulsion.IsHybrid);
Assert.Equal(true, threadSafeToyota.Propulsion.UsesPetrol);
Assert.Equal(true, threadSafeToyota.Propulsion.ElectricMotor);
}
And since this object is dynamic, we can extend it and adjust its members as the dynamic keyword allows. As you can see, we can instantiate the dynamic object either via an anonymous type instance or by populating it manually, one property at a time - and do so in a thread-safe manner, for better support in multithreaded environments, which are to be expected in many places today.
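As a small illustration of the thread-safety aspect, several threads can set and read members on the same instance without corrupting the backing dictionary (a sketch, not part of the repo's test suite):
using System.Threading.Tasks; // for Parallel
dynamic shared = new ThreadSafeDynamicObject();
Parallel.For(0, 100, i =>
{
    shared.Counter = i;           // TrySetMember -> ConcurrentDictionary.AddOrUpdate
    var current = shared.Counter; // TryGetMember -> ConcurrentDictionary.TryGetValue
});
Console.WriteLine(shared.ToString()); // e.g. {"Counter":87} - last writer wins, but the dictionary stays consistent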
Monday, 22 August 2022
Splitting a ReadOnlySpan by a separator
This article will look into using ReadOnlySpan to do the equivalent of a string split operation. The method uses ReadOnlySpan<char> to split the input into 'words' or 'tokens', separated by a split char (separator).
Span was introduced with C# 7.3 and can be used in .NET Standard 2.0 or .NET Framework via the NuGet package System.Memory.
<ItemGroup>
<PackageReference Include="System.Memory" Version="4.5.5" />
</ItemGroup>
If you use newer target frameworks, you will get Span included as long as C# 7.3 is supported.
The code here is just demonstration code. It successfully splits a ReadOnlySpan<char> working against contiguous memory, but we must still convert the 'tokens' or 'words' to strings after the split operation.
Note the usage of the Slice method to retrieve a range from the ReadOnlySpan<char>; a List<string> is used to collect the words or 'tokens'.
It would be nice to avoid strings as much as possible, but since we want an array of strings back anyway, a List<string> is used here to gather the 'tokens'. Optimally we would just return the split indexes, which the code already extracts, and let the caller build a string array from them later. We have all the characters in the ReadOnlySpan<char>, so having only the split indexes would be sufficient. However, that would be a bit cumbersome from the consumer side. You could instead have a method like 'get nth word' using the split indexes, and so on.
using System;
using System.Collections.Generic;
namespace SpanStringSplit
{
public static class SpanExtensions
{
public static string[] SplitViaSpan(this string input, char splitChar, StringSplitOptions splitOptions)
{
if (string.IsNullOrWhiteSpace(input) || input.IndexOf(splitChar) < 0)
{
return new string[] { input };
}
var tokens = SplitSpan(input.AsSpan(), splitChar, splitOptions);
return tokens;
}
public static string[] SplitSpan(this ReadOnlySpan<char> inputSpan, char splitChar, StringSplitOptions splitOptions)
{
if (inputSpan == null)
{
return new string[] { null };
}
if (inputSpan.Length == 0)
{
return splitOptions == StringSplitOptions.None ? new string[] { string.Empty } : new string[0];
}
bool isSplitCharFound = false;
foreach (char letter in inputSpan)
{
if (letter == splitChar)
{
isSplitCharFound = true;
break;
}
}
if (!isSplitCharFound)
{
return new string[] { inputSpan.ToString() };
}
bool IsTokenToBeAdded(string token) => !string.IsNullOrWhiteSpace(token) || splitOptions == StringSplitOptions.None;
var splitIndexes = new List<int>();
var tokens = new List<string>();
int charIndx = 0;
foreach (var ch in inputSpan)
{
if (ch == splitChar)
{
splitIndexes.Add(charIndx);
}
charIndx++;
}
int currentSplitIndex = 0;
foreach (var indx in splitIndexes)
{
if (currentSplitIndex == 0)
{
string firstToken = inputSpan.Slice(0, splitIndexes[0]).ToString();
if (IsTokenToBeAdded(firstToken))
{
tokens.Add(firstToken);
}
}
else if (currentSplitIndex <= splitIndexes.Count)
{
string intermediateToken = inputSpan.Slice(splitIndexes[currentSplitIndex - 1] + 1, splitIndexes[currentSplitIndex] - splitIndexes[currentSplitIndex - 1] - 1).ToString();
if (IsTokenToBeAdded(intermediateToken))
{
tokens.Add(intermediateToken);
}
}
currentSplitIndex++;
}
string lastToken = inputSpan.Slice(splitIndexes[currentSplitIndex - 1] + 1).ToString();
if (IsTokenToBeAdded(lastToken))
{
tokens.Add(lastToken);
}
return tokens.ToArray();
}
}
}
And we have our succeeding unit tests :
using NUnit.Framework;
using System;
namespace SpanStringSplit.Test
{
[TestFixture]
public class SpanExtensionsSpec
{
[Test]
public void SplitStringsViaSpan()
{
var tokens = ",,The,quick,brown,fox,jumped,over,the,lazy,,dog".SplitViaSpan(',', StringSplitOptions.RemoveEmptyEntries);
CollectionAssert.AreEqual(new string[] { "The", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog" }, tokens);
}
[Test]
public void SplitStringsUsingSpan()
{
ReadOnlySpan<char> s = ",,The,quick,brown,fox,jumped,over,the,lazy,,dog".ToCharArray();
var tokens = s.SplitSpan(',', StringSplitOptions.RemoveEmptyEntries);
CollectionAssert.AreEqual(new string[] { "The", "quick", "brown", "fox", "jumped", "over", "the", "lazy", "dog" }, tokens);
}
}
}
To sum up - we can use Span here to work against a contiguous space of memory (the span itself lives on the stack).
To get a span from a string we use the extension method 'AsSpan()'.
To get a string from a range of a span, we just use the Slice method and then call ToString() on the slice.
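A tiny example of those two building blocks (illustration only):
ReadOnlySpan<char> span = "The quick brown fox".AsSpan();
ReadOnlySpan<char> word = span.Slice(4, 5); // points at "quick" without copying any characters
Console.WriteLine(word.ToString());         // only here is a new string allocated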
The following code then extracts the nth token (or word) without allocating intermediate strings: it first extracts the split indices (considering only indices up to the given index + 1, if available) and then uses the Slice method to get the chars of the token or 'word'.
public static string GetNthToken(this ReadOnlySpan<char> inputSpan, char splitChar, int nthToken)
{
if (inputSpan == null)
{
return null;
}
int[] splitIndexes = inputSpan.SplitIndexes(splitChar, nthToken);
if (splitIndexes.Length == 0)
{
return inputSpan.ToString();
}
if (nthToken == 0 && splitIndexes.Length > 0)
{
return inputSpan.Slice(0, splitIndexes[0]).ToString();
}
if (nthToken > splitIndexes.Length)
{
return null;
}
if (nthToken == splitIndexes.Length)
{
var split = inputSpan.Slice(splitIndexes[nthToken-1]+1).ToString();
return split;
}
if (nthToken <= splitIndexes.Length + 1)
{
var split = inputSpan.Slice(splitIndexes[nthToken-1]+1, splitIndexes[nthToken] - splitIndexes[nthToken-1]-1).ToString();
return split;
}
return null;
}
public static int[] SplitIndexes(this ReadOnlySpan<char> inputSpan, char splitChar,
int? highestSplitIndex = null)
{
if (inputSpan == null)
{
return Array.Empty<int>();
}
if (inputSpan.Length == 0)
{
return Array.Empty<int>();
}
bool isSplitCharFound = false;
foreach (char letter in inputSpan)
{
if (letter == splitChar)
{
isSplitCharFound = true;
break;
}
}
if (!isSplitCharFound)
{
return Array.Empty<int>();
}
var splitIndexes = new List<int>();
var tokens = new List<string>();
int charIndex = 0;
foreach (var ch in inputSpan)
{
if (ch == splitChar)
{
if (highestSplitIndex.HasValue && highestSplitIndex + 1 < splitIndexes.Count)
{
break;
}
splitIndexes.Add(charIndex);
}
charIndex++;
}
return splitIndexes.ToArray();
}
Now why would you use this instead of just sticking to the ordinary string class methods? The main goal was to look into Span and how we can use it to look at contiguous memory and work on sub-parts of that memory using the Slice method. In some applications, such as games and graphics in general, such micro-optimizations matter more, since they avoid allocating a lot of string instances. Finding the split indices first (up to a given index, if available) and then retrieving the nth token or word can be very useful instead of splitting into a full array of strings.
The unit tests are also passing for GetNthToken method :
[Test]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 5, "fox")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 0, "")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 1, "")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 2, "The")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 3, "quick")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 7, "over")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 11, "dog")]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 12, null)]
[TestCase(",, The, quick, brown, fox, jumped, over, the, lazy,, dog", 13, null)]
public void GetNthWord(string input, int nthWord, string expectedWord)
{
ReadOnlySpan<char> s = ",,The,quick,brown,fox,jumped,over,the,lazy,,dog".ToCharArray();
var word = s.GetNthToken(',', nthWord);
Assert.AreEqual(word, expectedWord);
}
Sunday, 7 August 2022
Fix for the auto-complete function in Tieto Min Arbeidsplan
Many people use Tieto Min Arbeidsplan at work in the public sector. This product has a big flaw when you edit, for example, standard tasks.
When you search for a project code and there are many codes, the problem shows up: instead of filtering, or scrolling down to the code that matches what you have typed, the matching elements are merely styled with bold text and the list does not scroll. This is really hopeless UI behavior. Here is a hotfix you can apply yourself. 1. Press F12 in the browser to open the developer tools (tested OK with Firefox, Edge Chromium and Chrome). 2. Select the Console tab. 3. Then paste in this JavaScript function :
(function() {
document.getElementsByClassName("ui-select-search")[0].addEventListener("keydown", function(evt){
var searchQueryText = evt.srcElement.value;
var rowsInSelect = document.getElementsByClassName("ui-select-choices-row");
for (var i=0;i<rowsInSelect.length;i++) {
var rowInSelect = rowsInSelect[i];
var targetInnerDiv = rowInSelect.querySelector('div');
//debugger
if (targetInnerDiv != null && i >= 0 && searchQueryText.length >= 3 && targetInnerDiv.textContent.toLowerCase().indexOf(searchQueryText.toLowerCase()) >= 0) {
rowInSelect.scrollIntoView();
break;
}
}
});
})();
Explanation:
This is an IIFE - an immediately invoked function expression, i.e. a JavaScript function that calls itself right after being created. We add an event listener for the 'keydown' event on the search field that has the CSS class 'ui-select-search' (that is, on all such search elements; usually there is only one search field when you are on the 'Edit standard tasks' page).
On every 'keydown', i.e. as you type, we also look up all elements in the DOM (Document Object Model, the HTML tree of elements/nodes) that have the CSS class 'ui-select-choices-row'. We then iterate over all the elements we find with a for loop and look at the inner div of each element. If we find a substring that matches (case-insensitively) and at least three characters have been typed, the matching row element is scrolled into view, so the matching row becomes visible. No filtering has been added here, since that would have been a more complex patch; instead this is an important scrolling fix so you avoid spending a lot of time manually scrolling to find the row that got the bold-text styling. Hopefully Tieto fixes this bug soon.
Sunday, 24 July 2022
Generic repository pattern for Azure Cosmos DB
I have looked into Azure Cosmos DB to learn a bit about this schemaless database in the Azure cloud. It is a powerful 'document database' which saves and loads data inside 'containers'
in databases in Azure. You can work against this database in a strongly typed manner in C#, for example by creating a repository pattern.
The code I have made is in a class library which you can clone from here:
git clone https://github.com/toreaurstadboss/AzureCosmosDbRepositoryLib.git
To get started with Azure Cosmos DB, you must first create an Azure Cosmos DB account - this is your 'database account' in the cloud. Once the Azure Cosmos DB resource is running,
select the Data Explorer tab to view your data; there you can enter manual queries and look at (and manipulate) the data.
Note that there is already a more official repository-pattern .NET SDK package available, created by David Pine, which you should consider using, as shown here:
https://docs.microsoft.com/en-us/events/azure-cosmos-db-azure-cosmos-db-conf/a-deep-dive-into-the-cosmos-db-repository-pattern-dotnet-sdk
However, I have also published a simple GitHub repo which might be easier to get started with and understand. My goal anyway was to get a learning experience myself by testing out
Azure Cosmos DB.
The Github repo is available here:
https://github.com/toreaurstadboss/AzureCosmosDbRepositoryLib
The methods of the repository are listed inside IRepository :
using AzureCosmosDbRepositoryLib.Contracts;
using Microsoft.Azure.Cosmos;
using System.Linq.Expressions;
namespace AzureCosmosDbRepositoryLib;
/// <summary>
/// Repository pattern for Azure Cosmos DB
/// </summary>
public interface IRepository<T> where T : IStorableEntity
{
/// <summary>
/// Adds an item to container in DB.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="item"></param>
/// <returns></returns>
Task<ISingleResult<T>?> Add(T item);
/// <summary>
/// Retrieves an item from a container in DB. The <paramref name="id"/> parameter carries both the id and the partition key and should be provided.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="id"></param>
/// <returns></returns>
Task<ISingleResult<T>?> Get(IdWithPartitionKey id);
/// <summary>
/// Searches for matching items by a predicate (where condition) given in <paramref name="searchRequest"/>.
/// </summary>
/// <param name="searchRequest"></param>
/// <returns></returns>
Task<ICollectionResult<T>?> Find(ISearchRequest<T> searchRequest);
/// <summary>
/// Searches for a matching item by a predicate (where condition) given in <paramref name="searchRequest"/>.
/// </summary>
/// <param name="searchRequest"></param>
/// <returns></returns>
Task<ISingleResult<T>?> FindOne(ISearchRequest<T> searchRequest);
/// <summary>
/// Removes an item from container in DB. Param <paramref name="partitionKey"/> and param <paramref name="id"/> should be provided.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="partitionKey"></param>
/// <param name="id"></param>
/// <returns></returns>
Task<ISingleResult<T>?> Remove(IdWithPartitionKey id);
/// <summary>
/// Removes items from container in DB. Param <paramref name="ids"/> must be provided.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="partitionKey"></param>
/// <param name="id"></param>
/// <returns></returns>
Task<ICollectionResult<T>?> RemoveRange(List<IdWithPartitionKey> ids);
/// <summary>
/// Adds a set of items to container in DB. A shared partitionkey is used and the items are added inside a transaction as a single operation.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="items"></param>
/// <param name="partitionKey"></param>
/// <returns></returns>
Task<ICollectionResult<T>?> AddRange(IDictionary<PartitionKey, T> items);
/// <summary>
/// Adds or updates items via 'Upsert' method in container in DB.
/// </summary>
/// <param name="item"></param>
/// <returns></returns>
Task<ICollectionResult<T>?> AddOrUpdateRange(IDictionary<PartitionKey, T> items);
/// <summary>
/// Adds or updates an item via 'Upsert' method in container in DB.
/// </summary>
/// <param name="item"></param>
/// <returns></returns>
Task<ISingleResult<T>?> AddOrUpdate(T item);
/// <summary>
/// Retrieves results paginated of page size. Looks at all items of type <typeparamref name="T"/> in the container. Send in a null value for continuationToken in first request and then use subsequent returned continuation tokens to 'sweep through' the paged data divided by <paramref name="pageSize"/>.
/// </summary>
/// <param name="pageSize"></param>
/// <param name="continuationToken"></param>
/// <param name="sortDescending">If true, sorting descending (sorting via LastUpdate property so newest items shows first)</param>
/// <returns></returns>
Task<IPaginatedResult<T>?> GetAllPaginated(int pageSize, string? continuationToken = null, bool sortDescending = false, Expression<Func<T, object>>[]? sortByMembers = null);
/// <summary>
/// Disposes the repository on demand. Frees up resources such as the CosmosClient object inside.
/// </summary>
void Dispose();
/// <summary>
/// Returns name of database in Azure Cosmos DB
/// </summary>
/// <returns></returns>
string? GetDatabaseName();
/// <summary>
/// Returns Container id inside database in Azure Cosmos DB
/// </summary>
/// <returns></returns>
string? GetContainerId();
}
Let's first look at retrieving paginated results using a 'continuation token', which is how you do paging in Azure Cosmos DB - the token works as a 'bookmark'.
public async Task<IPaginatedResult<T>?> GetAllPaginated(int pageSize, string? continuationToken = null, bool sortDescending = false,
Expression<Func<T, object>>[]? sortByMembers = null)
{
string sortByMemberNames = sortByMembers == null ? "c.LastUpdate" :
string.Join(",", sortByMembers.Select(x => "c." + x.GetMemberName()).ToArray());
var query = new QueryDefinition($"SELECT * FROM c ORDER BY {sortByMemberNames} {(sortDescending ? "DESC" : "ASC")}".Trim()); //default query - will filter to type T via 'ItemQueryIterator<T>'
var queryRequestOptions = new QueryRequestOptions
{
MaxItemCount = pageSize
};
var queryResultSetIterator = _container.GetItemQueryIterator<T>(query, requestOptions: queryRequestOptions,
continuationToken: continuationToken);
var result = queryResultSetIterator.HasMoreResults ? await queryResultSetIterator.ReadNextAsync() : null;
if (result == null)
return null!;
var sourceContinuationToken = result.ContinuationToken;
var paginatedResult = new PaginatedResult<T>(sourceContinuationToken, result.Resource);
return paginatedResult;
}
We sort by the LastUpdate member by default, pass in the page size, sort ascending by default and allow the caller to specify sorting members. A helper method that gets the property names from the sort expressions is also used here. We get an item query iterator from the 'container' and then read the found items, which are in the Resource property, all asynchronously. Note that we return the continuation token at the end, and that we initially send in null as the continuation token. Each call for a new page returns a new continuation token so we can browse through the data in pages. When the continuation token is null, we have reached the end of the data.
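Consuming the paginated result could then look roughly like this (a sketch, assuming a repository instance for some entity type implementing IStorableEntity):
// Sweep through all pages using the continuation token
string? continuationToken = null;
do
{
    var page = await repository.GetAllPaginated(50, continuationToken, sortDescending: true);
    if (page == null)
        break;
    foreach (var item in page.Items)
    {
        Console.WriteLine(item.Id);
    }
    continuationToken = page.ContinuationToken; // null when there are no more pages
} while (continuationToken != null);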
PaginatedResult looks like this:
namespace AzureCosmosDbRepositoryLib.Contracts
{
public interface IPaginatedResult<T>
{
public IList<T> Items { get; }
public string? ContinuationToken { get; set; }
}
public class PaginatedResult<T> : IPaginatedResult<T>
{
public PaginatedResult(string continuationToken, IEnumerable<T> items)
{
Items = new List<T>();
if (items != null)
{
foreach (var item in items)
{
Items.Add(item);
}
}
ContinuationToken = continuationToken;
}
public IList<T> Items { get; private set; }
public string? ContinuationToken { get; set; }
}
}
Another thing in the lib is the contract IStorableEntity, an interface which defines an Id property - note the usage of the JsonProperty attribute. It also exposes a partition key (and a LastUpdate timestamp) for the item.
using Microsoft.Azure.Cosmos;
using Newtonsoft.Json;
namespace AzureCosmosDbRepositoryLib.Contracts
{
public interface IStorableEntity
{
[JsonProperty("id")]
string Id { get; set; }
PartitionKey? PartitionKey { get; }
DateTime? LastUpdate { get; set; }
}
}
It is important to set both the id and the partition key when you save, update and delete items in a container in Azure Cosmos DB, so it works as expected.
There are other methods in this repo, as seen in the IRepository interface. The repository class takes care of creating the database and container in Azure Cosmos DB if required.
Note also that it is important in intranet scenarios to set up Gateway connection mode. This is done by default, and the reason is to avoid firewall issues.
private void InitializeDatabaseAndContainer(CosmosClientOptions? clientOptions, ThroughputProperties? throughputPropertiesForDatabase, bool defaultToUsingGateway)
{
_client = clientOptions == null ?
defaultToUsingGateway ?
new CosmosClient(_connectionString, new CosmosClientOptions
{
ConnectionMode = ConnectionMode.Gateway //this is the connection mode that works best in intranet-environments and should be considered as best compatible approach to avoid firewall issues
}) :
new CosmosClient(_connectionString) :
new CosmosClient(_connectionString, _cosmosClientOptions);
//Run initialization
if (throughputPropertiesForDatabase == null)
{
_database = Task.Run(async () => await _client.CreateDatabaseIfNotExistsAsync(_databaseName)).Result; //create the database if not existing (will go for default options regarding scaling)
}
else
{
_database = Task.Run(async () => await _client.CreateDatabaseIfNotExistsAsync(_databaseName, throughputPropertiesForDatabase)).Result; //create the database if not existing - specify specific through put options
}
// The container we will create.
_container = Task.Run(async () => await _database.CreateContainerIfNotExistsAsync(_containerId, _partitionKeyPath)).Result;
}
Another example, using another iterator than the item query iterator, is the LINQ query iterator. This is used inside the Find method :
public async Task<ICollectionResult<T>?> Find(ISearchRequest<T>? searchRequest)
{
if (searchRequest?.Filter == null)
return await Task.FromResult<ICollectionResult<T>?>(null);
var linqQueryable = _container.GetItemLinqQueryable<T>();
var stopWatch = Stopwatch.StartNew();
try
{
using var feedIterator = linqQueryable.Where(searchRequest.Filter).ToFeedIterator();
while (feedIterator.HasMoreResults)
{
var items = await feedIterator.ReadNextAsync();
var result = BuildSearchResultCollection(items.Resource);
result.ExecutionTimeInMs = stopWatch.ElapsedMilliseconds;
return result;
}
}
catch (Exception err)
{
return await Task.FromResult(BuildSearchResultCollection(err));
}
return await Task.FromResult<ICollectionResult<T>?>(null);
}
using System.Linq.Expressions;
namespace AzureCosmosDbRepositoryLib.Contracts
{
public interface ISearchRequest<T>
{
Expression<Func<T, bool>>? Filter { get; set; }
}
public class SearchRequest<T> : ISearchRequest<T>
{
public Expression<Func<T, bool>>? Filter { get; set; }
}
}
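A Find call could then be issued like this (a sketch; the Book type and its properties are just placeholders for whatever entity you store, and the matched items are assumed to be exposed on the returned ICollectionResult<T>):
// Search the container with a LINQ predicate via the Find method
var searchRequest = new SearchRequest<Book>
{
    Filter = b => b.Year >= 2000 && b.Author == "Frederick Brooks"
};
var result = await repository.Find(searchRequest);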
Note that this lib is using Microsoft.Azure.Cosmos version 3.29. There are obviously differences between major versions, so the methods shown here apply to the Azure Cosmos DB 3.x SDK. This is the only NuGet package the lib requires. You should consider the SDK that David Pine created, but if you want to create a repository pattern against Azure Cosmos DB yourself, then maybe my repository can serve as a starting point and you can borrow some code from it.
One final note - I had trouble doing a batch insert in the lib for a type T using transactions in Azure Cosmos DB, as this seems to require a common partition key - it ended up in a collision. So the AddRange method in the lib is not batched within one partition, but for now loops through the items sequentially. Other than that, the lib should work for some core usages in ordinary scenarios. The lib should also log errors a bit better, so it is primarily for demonstration purposes, showing essential CRUD operations in Azure Cosmos DB.
Note that the connection string should be saved in an appsettings.json file, for example, or in dev environments consider using dotnet user secrets as I have done, so we do not expose secrets to source control.
The connection string is shown under the 'Keys' tab of the Azure Cosmos DB account. Look for the 'primary connection string' here, as this is how you connect to your database and container(s), where the data resides. Use the 'Data Explorer' tab to work with the data.
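A minimal sketch of reading that connection string without putting it in source control could look like this (the configuration key name is just an example):
using Microsoft.Extensions.Configuration;
// Dev setup: dotnet user-secrets init && dotnet user-secrets set "CosmosDb:ConnectionString" "<primary connection string>"
var configuration = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddUserSecrets<Program>()
    .Build();
string? cosmosConnectionString = configuration["CosmosDb:ConnectionString"];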
Saturday, 16 July 2022
Using ValueConverter in EF Core 6 to handle custom data types
Usually we stick to the familiar data types, but EF Core 6 (Entity Framework) can also both save a custom data type and load it up again. Usually, you would save the data type in a familiar representation and then have some mapper
method map it into the desired type in a data transfer object - DTO. In this example we will instead look at an alternative: a ValueConverter. This has been supported since EF Core 2.1.
We first define a ValueConverter by inheriting from the ValueConverter class. It takes a 'from' and a 'to' generic type argument: we convert from one type into another and vice versa. We pass two expressions that perform these mappings to the
base constructor.
I have pushed the code here if you want to test out ValueConverter in EF Core yourself.
git clone https://github.com/toreaurstadboss/EfCore6ValueConversionTemplate.git
Let's see how we can, for example, save a Color (System.Drawing.Color) via its color name and load it up again.
using Microsoft.EntityFrameworkCore.Storage.ValueConversion;
using System.Drawing;
using System.Linq.Expressions;
namespace PublisherData
{
public class ColorValueConverter : ValueConverter<Color, string>
{
private static Expression<Func<Color, string>> ColorString =
c => new string(c.Name);
static Expression<Func<string, Color>> ColorStruct =
s => Color.FromName(s);
public ColorValueConverter() : base(ColorString, ColorStruct) {
}
}
}
Next up, we 'bulk-configure' a value conversion for the Color type so it uses this ValueConverter. Inside the DbContext class we do :
protected override void ConfigureConventions(ModelConfigurationBuilder configurationBuilder)
{
configurationBuilder.Properties<Color>().HaveConversion(typeof(ColorValueConverter));
}
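The entity itself can then expose the color as a regular CLR property. A rough sketch of the relevant part of the Author class used below (the actual class in the repo may of course look different):
using System.Drawing;
namespace PublisherDomain
{
    public class Author
    {
        public int AuthorId { get; set; }
        public string FirstName { get; set; } = string.Empty;
        public string LastName { get; set; } = string.Empty;
        // Persisted as nvarchar(100) in the Authors table via the ColorValueConverter above
        public Color? FavoriteColor { get; set; }
    }
}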
We also define a migration for the database and add a column to a table to save the color. The column is defined as a string (nvarchar(100)) column, so we can test out the conversion.
using Microsoft.EntityFrameworkCore.Migrations;
#nullable disable
namespace PublisherData.Migrations
{
public partial class favcolor : Migration
{
protected override void Up(MigrationBuilder migrationBuilder)
{
migrationBuilder.AddColumn<string>(
name: "FavoriteColor",
table: "Authors",
type: "nvarchar(100)",
maxLength: 100,
nullable: true);
}
protected override void Down(MigrationBuilder migrationBuilder)
{
migrationBuilder.DropColumn(
name: "FavoriteColor",
table: "Authors");
}
}
}
A small console app then tests out the Value conversion :
// See https://aka.ms/new-console-template for more information
using Microsoft.EntityFrameworkCore;
using PublisherData;
using System.Drawing;
Console.WriteLine("Posting a new author - testing out ValueConverter feature of EF Core 6");
var options = new DbContextOptionsBuilder<PubContext>()
.UseSqlServer("Data Source = somedevmachine\\sqlexpress01; Initial Catalog = PubDatabase_M12_Api_EfCoreCourse; Trusted_Connection=true");
using var pubContext = new PubContext(options.Options);
var authorInsert = new PublisherDomain.Author
{
FirstName = "Claude",
LastName = "Monet",
FavoriteColor = Color.Aquamarine
};
pubContext.Authors.Add(authorInsert);
await pubContext.SaveChangesAsync();
var insertedAuthor = pubContext.Authors.Find(authorInsert.AuthorId);
string convertColorToArgbName(Color someColor) => Color.FromArgb(someColor.ToArgb()).Name;
string insertedAuthorFavoriteColorArgb = convertColorToArgbName(authorInsert.FavoriteColor.Value);
string aquaMarineColorArgb = convertColorToArgbName(Color.Aquamarine);
Console.WriteLine($"Favorite color is expected? {insertedAuthorFavoriteColorArgb.Equals(aquaMarineColorArgb)}");
We save the favorite color, and when we load the entity up again we verify that we get the expected color back by comparing the 'Name' property of the ARGB value (actually the hexadecimal color value).
The output then reads:
Posting a new author - testing out ValueConverter feature of EF Core 6
Favorite color is expected? True
This is a rather trivial example. We could have a more complex object which we wanted to save to the database in a particular way and load up again - an object that, for some reason or another, cannot simply be serialized on save and deserialized on load. Anyway,
ValueConverter is a feature I wanted to test out in EF Core, and it is an alternative to at least be aware of.
Saturday, 9 July 2022
MediatR - Adding a pipeline behavior and logging to SeriLog and Seq
This article will present how you can add a pipeline behavior to MediatR and then log to Serilog and Seq. This makes it possible to add, for example, logging, using the 'decorator pattern' with MediatR to add
behavior to the ASP.NET Core pipeline in general (.NET 6 is used here). It is therefore a sort of middleware, tied together via MediatR.
First off, let's define the IPipelineBehavior, which will do our logging. Note that you have to add the generic type parameter 'where' constraint on TRequest here.
Note that we inject the ILogger<LoggingBehavior<TRequest, TResponse>> logger object. In .NET (Core) we inject an ILogger instance typed on the class we are injecting into in order to write to the log sources, and
Serilog will present this information. Inside Seq we can also browse this log information.
We only log payloads if we are in the Development environment and the request/response payload can be serialized and is below 50 kB.
LoggingBehavior.cs
using MediatR;
using Newtonsoft.Json;
using System.Diagnostics;
namespace CqrsDemoWebApi.Features.Books.Pipeline
{
public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IRequest<TResponse>
{
private readonly ILogger<LoggingBehavior<TRequest, TResponse>> _logger;
private readonly IWebHostEnvironment _environment;
public LoggingBehavior(ILogger<LoggingBehavior<TRequest, TResponse>> logger, IWebHostEnvironment environment)
{
_logger = logger;
_environment = environment;
}
public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
{
string? requestData = null;
if (_environment.IsDevelopment())
{
//log serializable payload data in request (as json format) to SeriLog if we are inside Development environment and the payload is below 50 kB in serializable size
try
{
requestData = request != null ? JsonConvert.SerializeObject(request) : null;
if (requestData?.Length > 50 * 1024)
    requestData = null; //avoid showing data larger than 50 kB in serialized form in the logs of Seq / Serilog
}
catch (Exception)
{
//ignore
}
}
_logger.LogInformation($"Handling request {typeof(TRequest).Name} {{@requestData}}", requestData);
var response = await next();
string? responseData = null;
if (_environment.IsDevelopment())
{
//log serializable payload data in response (as json format) to SeriLog if we are in Development environment and the payload is below 50 kB in serialized size
try
{
responseData = response != null ? JsonConvert.SerializeObject(response) : null;
if (responseData?.Length > 50 * 1024)
    responseData = null; //avoid showing data larger than 50 kB in serialized form in the logs of Seq / Serilog
}
catch (Exception)
{
//ignore
}
}
_logger.LogInformation($"Handled request {typeof(TRequest).Name}. Returning result of type {typeof(TResponse).Name} {{@responseData}}.", responseData);
return response;
}
}
}
Afterwards, register the IPipelineBehavior. Note how we also call the UseSerilog method here and add Seq as a log sink too.
Program.cs
using CqrsDemoWebApi.Database;
using CqrsDemoWebApi.Features.Books.Pipeline;
using MediatR;
using Microsoft.EntityFrameworkCore;
using Serilog;
using System.Reflection;
var builder = WebApplication.CreateBuilder(args);
builder.Host.UseSerilog((ctx, lc) =>
lc.WriteTo.Console()
.WriteTo.Seq("http://localhost:5341")); //note - to use Seq with this url - install it from here (free individual license) : https://datalust.co/download
// Add services to the container.
builder.Services.AddMediatR(Assembly.GetExecutingAssembly()); //adding MediatR support here.
//register some pipeline(s) defined for MediatR usage
builder.Services.AddTransient(typeof(IPipelineBehavior<,>), typeof(LoggingBehavior<,>));
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var connectionString = new ConnectionString(builder.Configuration.GetConnectionString("CqrsDemoBooksDb"));
builder.Services.AddSingleton(connectionString);
builder.Services.AddDbContext<BooksDbContext>(options =>
options.UseSqlServer(connectionString.Value));
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
Here are the updated NuGet packages inside the csproj file of the ASP.NET Core project (.NET 6) :
<ItemGroup>
<PackageReference Include="Mediatr" Version="10.0.1" />
<PackageReference Include="MediatR.Extensions.Microsoft.DependencyInjection" Version="10.0.1" />
<PackageReference Include="Microsoft.EntityFrameworkCore" Version="6.0.1" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.1" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="6.0.1">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
</PackageReference>
<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="6.0.0" />
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="Serilog.AspNetCore" Version="6.0.0-dev-00265" />
<PackageReference Include="Serilog.Sinks.Seq" Version="5.1.2-dev-00225" />
<PackageReference Include="Swashbuckle.AspNetCore" Version="6.2.3" />
</ItemGroup>
We also need to install Seq on our computer to see the messages sent via Serilog inside Seq.
Go to this URL to download the free individual license of Seq :
https://datalust.co/download
You can see the logs at Seq's default url and also set up in Program.cs file :
http://localhost:5341
Friday, 8 July 2022
Using MediatR to implement CQRS pattern with EF Core
This article will present a sample solution I have pushed to GitHub. It uses the MediatR library to showcase the CQRS pattern together with .NET 6 and Entity Framework Core.
MediatR takes care of decoupling the sending of messages from the handling of messages, in-process. It can be used in the data layer together with EF Core, for example, to dispatch (send)
messages which either execute a 'command' (defined in CQRS as a 'write' operation) or process and return a 'query' (defined in CQRS as a 'read' operation). What we
gain from CQRS is the ability to offload each of these messages (commands and queries) to the correct handler (executing a 'command' or returning the results
of a 'query'), and this makes it easier to create scalable code in the data layer. It also gives thinner ASP.NET Core (Web API) controller actions in the sample demo.
First off, clone the solution :
git clone https://github.com/toreaurstadboss/CqrsDemoWebApi.git
The following NuGet packages are relevant for EF Core and MediatR :
<PackageReference Include="Mediatr" Version="10.0.1" />
<PackageReference Include="MediatR.Extensions.Microsoft.DependencyInjection" Version="10.0.1" />
<PackageReference Include="Microsoft.EntityFrameworkCore" Version="6.0.1" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.1" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="6.0.1">
The Wiki of MediatR contains some example code here:
https://github.com/jbogard/MediatR/wiki
We will first look into the Request/Response messages, dispatched to a single handler. These are either commands or queries in this sample.
Let us first consider a query that returns a list of books. Here is how it is implemented :
using CqrsDemoWebApi.Database;
using MediatR;
using Microsoft.EntityFrameworkCore;
namespace CqrsDemoWebApi.Features.Books.Query
{
public class GetBooks
{
public class Query : IRequest<IEnumerable<Book>>
{
}
public class QueryHandler : IRequestHandler<Query, IEnumerable<Book>>
{
private readonly BooksDbContext _db;
public QueryHandler(BooksDbContext db)
{
_db = db;
}
public async Task<IEnumerable<Book>> Handle(Query request, CancellationToken cancellationToken)
{
return await _db.Books.ToListAsync(cancellationToken);
}
}
}
}
We see that we are supposed to handle the query asynchronously, since we must return a Task of TResponse (although we could use Task.FromResult here if we wanted to produce the result synchronously), where the request is of type IRequest. Note that Query only implements IRequest of the return type and has no properties (i.e. 'request parameters'), since we return all books here and need no such parameters. We inject the DbContext via dependency injection. In Program.cs of the .NET 6 solution we add MediatR using the builder.Services.AddMediatR() method:
Program.cs (.NET 6 Web Api project)
using CqrsDemoWebApi.Database;
using MediatR;
using Microsoft.EntityFrameworkCore;
using System.Reflection;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddMediatR(Assembly.GetExecutingAssembly()); //adding MediatR support here.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var connectionString = new ConnectionString(builder.Configuration.GetConnectionString("CqrsDemoBooksDb"));
builder.Services.AddSingleton(connectionString);
builder.Services.AddDbContext<BooksDbContext>(options =>
options.UseSqlServer(connectionString.Value));
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
Nothing special here, just basic setup of a standard Web API in .NET 6, using the AddMediatR method pointing to the assembly where our contracts reside (commands and queries and their respective handlers).
The BookController web api controller looks like this :
using CqrsDemoWebApi.Database;
using CqrsDemoWebApi.Features.Books;
using CqrsDemoWebApi.Features.Books.Command;
using CqrsDemoWebApi.Features.Books.Query;
using MediatR;
using Microsoft.AspNetCore.Mvc;
namespace CqrsDemoWebApi.Controllers
{
[Route("api/[controller]")]
public class BookController : ControllerBase
{
private readonly IMediator _mediator;
public BookController(IMediator mediator)
{
_mediator = mediator;
}
[HttpGet]
public async Task<IEnumerable<Book>> GetBooks()
{
var books = await _mediator.Send(new GetBooks.Query());
return books;
}
[HttpGet("{id}")]
public async Task<Book> GetBook(int id)
{
var book = await _mediator.Send(new GetBooksById.Query { Id = id });
return book;
}
[HttpPost]
public async Task<ActionResult> AddBook([FromBody] AddNewBook.Command command)
{
var createdBookId = await _mediator.Send(command);
return CreatedAtAction(nameof(GetBook), new { id = createdBookId }, null);
}
[HttpDelete("{id}")]
public async Task<ActionResult> DeleteBook(int id)
{
await _mediator.Send(new DeleteCommand.Command { Id = id });
return NoContent();
}
}
}
As we see, we inject the IMediator here and then use the dependency-injected field to send the command or query instance. Note the use of [FromBody] in POST requests to bind the POST body to the Command that is sent to its handler.
Each query or command is implemented as nested classes, where the Query/Command and its QueryHandler/CommandHandler are nested inside a wrapping class, implementing IRequest of T and IRequestHandler of T (with a return type U if we want a result back). If we do not want a return result ('void'), we use the special type Unit, which signals that we do not return any result from the command or the query. We can return an int value from the Command after we have inserted a row; it is okay in CQRS to return a resource identifier. CQRS also allows returning results from a command as long as it is a resource identifier that can be used to look up the data via a query (it could be a Guid etc. too, if that is suitable). Here is an example of a command that creates a book:
using CqrsDemoWebApi.Database;
using MediatR;
namespace CqrsDemoWebApi.Features.Books.Command
{
public class AddNewBook
{
public class Command : IRequest<int>
{
public string? Title { get; set; }
public string? Author { get; set; }
public int Year { get; set; }
public string? ImageLink { get; set; }
public int Pages { get; set; }
public string? Country { get; set; }
public string? Language { get; set; }
public string? Link { get; set; }
}
public class CommandHandler : IRequestHandler<Command, int>
{
private readonly BooksDbContext _booksDbContext;
public CommandHandler(BooksDbContext booksDbContext)
{
_booksDbContext = booksDbContext;
}
public async Task<int> Handle(Command request, CancellationToken cancellationToken)
{
var book = new Book
{
Title = request.Title,
Author = request.Author,
Year = request.Year,
ImageLink = request.ImageLink,
Pages = request.Pages,
Country = request.Country,
Language = request.Language,
Link = request.Link
};
await _booksDbContext.Books.AddAsync(book, cancellationToken);
await _booksDbContext.SaveChangesAsync(cancellationToken); // persist asynchronously so the handler stays fully async
return book.BookId;
}
}
}
}
A sample json body for the POST is then:
{
"title": "Mythical Man Month",
"author": "Frederick Brooks",
"year": 1975,
"country": "USA",
"imagelink": "",
"pages": 222
}
We return the BookId of the new book, as we implement IRequestHandler<Command, int>. It is the auto-generated id after saving the new book.
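Posting that body from code could look like this sketch (the port is just an example; use whatever your launchSettings.json specifies):
using System.Net.Http.Json;
using var client = new HttpClient { BaseAddress = new Uri("https://localhost:7042") };
var response = await client.PostAsJsonAsync("api/Book", new
{
    title = "Mythical Man Month",
    author = "Frederick Brooks",
    year = 1975,
    country = "USA",
    pages = 222
});
Console.WriteLine(response.StatusCode);       // Created (201)
Console.WriteLine(response.Headers.Location); // e.g. https://localhost:7042/api/Book/<new book id>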
By the way, this database is seeded via a JSON file, like this :
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json;
namespace CqrsDemoWebApi.Database.Seed
{
public static class ModelBuilderExtensions
{
public static void SeedBooks(this ModelBuilder modelBuilder, string contentRootPath)
{
string jsonInput = File.ReadAllText(Path.Combine(contentRootPath, @"Database/Seed/books.json"));
var books = JsonConvert.DeserializeObject<Book[]>(jsonInput);
int bookId = 0;
foreach (var book in books!)
{
bookId++;
book.BookId = bookId;
}
modelBuilder.Entity<Book>().HasData(books);
modelBuilder.Entity<Book>().HasKey(b => b.BookId);
modelBuilder.Entity<Book>().Property(b => b.BookId).ValueGeneratedOnAdd();
}
}
}
And we have the DbContext like this:
using CqrsDemoWebApi.Database.Seed;
using Microsoft.EntityFrameworkCore;
namespace CqrsDemoWebApi.Database
{
public class BooksDbContext : DbContext
{
private readonly IWebHostEnvironment _hostingEnvironment;
public BooksDbContext(DbContextOptions<BooksDbContext> options,
IWebHostEnvironment hostingEnvironment) : base(options)
{
_hostingEnvironment = hostingEnvironment;
}
public DbSet<Book> Books { get; set; } = null!;
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
string contentRootPath = _hostingEnvironment.ContentRootPath;
modelBuilder.SeedBooks(contentRootPath);
}
}
}
Now, each nested command/command handler or query/query handler is given a generic name. You may want to use a more specific name, but the wrapping class is a good enough specifier for this.
I.e. GetBooks.Query() or AddNewBook.Command (where we use [FromBody] and therefore do not new up an instance like we do for the Query) are specific enough, allowing us to use generic nested class names.
MediatR simplifies implementing CQRS and following the mediator pattern. There are more possibilities with MediatR - look at videos on YouTube on this topic, for example.
Tuesday, 5 July 2022
UnitOfWork pattern in Entity Framework
This article will present code for the 'Unit of Work' pattern in Entity Framework, EF. I will also show updated code for a generic repository pattern used by the same code.
Note that the code in this article is very general and can be run against your DbContext regardless of how it looks; it is solved generically, using generics and some reflection.
Note that this code works against DbContext and is written in a solution that uses Entity Framework 6.4.4. However, the code should also work in newer EF versions, that is, EF Core versions such as
EF Core 6, as the code is very general and EF Core has a similar structure, at least concerning the code shown here.
First off, the code for the interface IUnitOfWork looks like this:
IUnitOfWork.cs
using SomeAcme.Common.Interfaces;
namespace SomeAcme.Data.EntityFramework.Managers
{
public interface IUnitOfWork
{
UnitOfWork AddRepository<T>() where T : class;
UnitOfWork AddCustomRepository<T>() where T : class;
int Complete();
void Dispose();
UnitOfWork RemoveRepository<T>() where T : class;
IRepository<T> Repository<T>() where T : class;
}
}
The method Complete is important, as it commits the transaction and performs the changes against the database that the UnitOfWork implementation works on.
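To give a feel for the intended flow before diving into the implementation, here is a minimal usage sketch. It assumes the UnitOfWork and Repository<T> implementations shown below; the Book entity and BooksContext are placeholder names:
public int AddBookUnitOfWork()
{
    //the UnitOfWork owns the single shared db context and disposes it when done
    using (var unitOfWork = new UnitOfWork(new BooksContext()))
    {
        unitOfWork.AddRepository<Book>(); //register which entities take part in this unit of work
        unitOfWork.Repository<Book>().Add(new Book { Title = "Refactoring" });
        return unitOfWork.Complete(); //commits via SaveChanges on the shared db context
    }
}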
The code for the UnitOfWork implementation looks like this:
UnitOfWork.cs
using SomeAcme.Common.Interfaces;
using System;
using System.Collections.Generic;
using System.Linq;
namespace SomeAcme.Data.EntityFramework.Managers
{
public class UnitOfWork : IUnitOfWork, IDisposable
{
private readonly System.Data.Entity.DbContext _dbContext;
public UnitOfWork(System.Data.Entity.DbContext dbContext)
{
_dbContext = dbContext;
_repositories = new Dictionary<Type, object>();
}
public UnitOfWork AddRepository<T>() where T : class
{
if (!_repositories.ContainsKey(typeof(T)))
{
var repoObj = Activator.CreateInstance(typeof(Repository<T>), _dbContext);
Repository<T> repo = repoObj as Repository<T>;
if (repo == null)
{
throw new ArgumentNullException($"Could not instantiate repository of type {typeof(T).Name}");
}
_repositories[typeof(T)] = repo;
}
return this;
}
public UnitOfWork AddCustomRepository<T>() where T : class
{
if (!_repositories.ContainsKey(typeof(T)))
{
bool implementsGenericRepositoryInterface = typeof(T).GetInterfaces().Any(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IRepository<>));
if (!implementsGenericRepositoryInterface)
{
throw new ArgumentException($"The type {typeof(T).Name} must implement IRepository<T> to be added as a custom repository");
}
var repoObj = Activator.CreateInstance(typeof(T), _dbContext);
if (repoObj == null)
{
throw new ArgumentNullException($"Could not instantiate repository of type {typeof(T).Name}");
}
_repositories[typeof(T)] = repoObj;
}
return this;
}
public UnitOfWork RemoveRepository<T>() where T : class
{
if (_repositories.ContainsKey(typeof(T)))
{
_repositories.Remove(typeof(T));
}
return this;
}
public IRepository<T> Repository<T>() where T : class
{
//find suitable repo - possibly a custom repo ..
IRepository<T> repoFound = null;
foreach (var item in _repositories)
{
if (item.Key == typeof(T))
{
repoFound = _repositories[typeof(T)] as IRepository<T>;
break;
}
bool implementsMatchingRepositoryInterface = item.Key.GetInterfaces().Any(i => i.IsGenericType && i.GetGenericTypeDefinition() == typeof(IRepository<>) && i.GetGenericArguments()[0] == typeof(T));
if (implementsMatchingRepositoryInterface)
{
//a custom repo implementing IRepository<T> for this particular T - this is the one to use
repoFound = item.Value as IRepository<T>;
break;
}
}
if (repoFound == null)
{
throw new ArgumentNullException($"Could not retrieve repositiory defined inside the UnitOfWork for entity of type: {typeof(T).Name}. Is it registered into the UnitOfWork. Use method 'AddRepository'");
}
return repoFound;
}
private readonly IDictionary<Type, object> _repositories;
public int Complete()
{
if (_dbContext == null)
{
throw new ArgumentNullException($"The db context object of the UnitOfWork class is null, cannot complete the UnitOfWork as the db context is not initialized! No changes was performed in DB !");
}
int numStateEntriesWritten = _dbContext.SaveChanges();
return numStateEntriesWritten;
}
public void Dispose()
{
_dbContext?.Dispose(); //dispose the passed in _shared_ db context instead of
//disposing the db context inside each repository to dispose only once..
GC.SuppressFinalize(this);
}
}
}
Note - this implementation focuses on letting you register repositories on the UnitOfWork before running the unit of work, in case you want to control which tables / entities get repositories for the db context. Many other implementations instead expose the repositories of the UnitOfWork via property getters, for example.
Such an implementation is easier to write, but lacks the ability to expand which repositories the UnitOfWork supports.
Also note that if you add a custom repository implementing IRepository<T>, you need to cast the result of the 'Repository' method to that custom repository type to reach its additional members.
A closed abstraction, where you do not add repositories like in this code, may be more desirable, but there are scenarios where you want repositories to take part in the UnitOfWork dynamically as shown here. The downside is that you must initialize the UnitOfWork with repositories in addition to performing the database 'steps' before calling the 'Complete' method.
In many cases the most practical UnitOfWork implementation will probably be a closed abstraction where you specify up front which repositories the UnitOfWork supports, avoiding having to add repositories as shown here. I did this as an academic exercise to see whether such an implementation was possible; the unit tests pass and it looks okay to work with. You should probably still have some default initialization, i.e. register some default repositories in the constructor of UnitOfWork, and consider whether adding or removing repositories should be allowed at all. Removing repositories could be considered an anti-pattern, so you could disallow it, only allowing adding custom repositories implementing IRepository<T> in addition to a set of predefined repositories. A minimal sketch of such a closed variant is shown below.
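For comparison, a 'closed' variant with the repositories exposed as property getters could look roughly like this. This is just a sketch; the Book and Order entities are placeholders and not part of the solution above:
public class ClosedUnitOfWork : IDisposable
{
    private readonly System.Data.Entity.DbContext _dbContext;
    public ClosedUnitOfWork(System.Data.Entity.DbContext dbContext)
    {
        _dbContext = dbContext;
        //the supported repositories are fixed up front and share the single db context
        Books = new Repository<Book>(dbContext);
        Orders = new Repository<Order>(dbContext);
    }
    public IRepository<Book> Books { get; }
    public IRepository<Order> Orders { get; }
    public int Complete() => _dbContext.SaveChanges(); //commit all pending changes in one go
    public void Dispose() => _dbContext?.Dispose();
}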
Important - always pass in just ONE db context. Each repository must use the same db context instance so that change tracking works as expected. Also note -
the UnitOfWork and the repositories implement IDisposable. When the UnitOfWork goes out of scope, the db context is disposed. If you register the UnitOfWork as a service in a DI container, remember to use a scoped lifetime so it gets disposed; a singleton would keep the db connection hanging around. The Dispose method is public anyway and can be called to dispose on demand. A sketch of such a scoped registration follows below.
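As a sketch of the DI registration mentioned above, a scoped registration in an ASP.NET Core Program.cs could look like this. It assumes the UnitOfWork has been ported to take an EF Core DbContext; 'SomeDbContext' and the connection string name are placeholders:
//scoped lifetime ensures the UnitOfWork (and the shared DbContext it disposes) is cleaned up per request
builder.Services.AddDbContext<SomeDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("SomeDb")));
builder.Services.AddScoped<IUnitOfWork>(sp => new UnitOfWork(sp.GetRequiredService<SomeDbContext>()));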
Finally, here are some passing tests exercising the UnitOfWork together with the generic repository pattern!
[Test]
public void UnitOfWorkPerformsExpected()
{
var dbContext = GetContext();
var unitOfWork = new UnitOfWork(dbContext);
unitOfWork.AddRepository<OperationExternalEquipment>()
.AddRepository<OperationDiagnoseCode>();
var operationExternalEquipment = new OperationExternalEquipment
{
OperationId = 10296,
EquipmentText = "Stent graft type ABC-123",
OrderedDate = DateTime.Today
};
var operationDiagnoseCode = new OperationDiagnoseCode
{
OperationId = 10296,
DiagnoseCodeId = "A09.9",
IsCodePreFabricated = true
};
//act
unitOfWork.Repository<OperationExternalEquipment>().Add(operationExternalEquipment);
unitOfWork.Repository<OperationDiagnoseCode>().Add(operationDiagnoseCode);
int savedResult = unitOfWork.Complete();
savedResult.Should().Be(2);
//assert
var savedOperationEquipmentsForOperation = unitOfWork.Repository<OperationExternalEquipment>().Find(x => x.OperationId == 10296).ToList();
savedOperationEquipmentsForOperation.Any(x => x.EquipmentText == "Stent graft type ABC-123").Should().BeTrue();
var savedOperationDiagnoseCodesForOperation = unitOfWork.Repository<OperationDiagnoseCode>().Find(x => x.OperationId == 10296).ToList();
savedOperationDiagnoseCodesForOperation.Any(x => x?.DiagnoseCodeId == "A09.9").Should().BeTrue();
//cleanup
unitOfWork.Repository<OperationExternalEquipment>().Remove(operationExternalEquipment);
unitOfWork.Repository<OperationDiagnoseCode>().Remove(operationDiagnoseCode);
savedResult = unitOfWork.Complete();
savedResult.Should().Be(2);
}
[Test]
public void UnitOfWorkCustomRepoPerformsExpected()
{
var dbContext = GetContext();
var unitOfWork = new UnitOfWork(dbContext);
unitOfWork.AddCustomRepository<OperationExternalEquipmentCustomRepo>();
var operationExternalEquipment = new OperationExternalEquipment
{
OperationId = 10296,
EquipmentText = "Stent graft type DEF-456",
OrderedDate = DateTime.Today
};
//act
unitOfWork.Repository<OperationExternalEquipment>().Add(operationExternalEquipment);
int savedResult = unitOfWork.Complete();
savedResult.Should().Be(1);
//assert
var savedOperationEquipmentsForOperation = unitOfWork.Repository<OperationExternalEquipment>().Find(x => x.OperationId == 10296).ToList();
savedOperationEquipmentsForOperation.Any(x => x.EquipmentText == "Stent graft type DEF-456").Should().BeTrue();
//check if we can use a custom repo method !
var equipmentRepo = unitOfWork.Repository<OperationExternalEquipment>() as OperationExternalEquipmentCustomRepo;
var equipmentTexts = string.Join(",", equipmentRepo.GetEquipmentTexts(10296));
bool foundText = equipmentTexts.Contains("Stent graft type DEF-456");
foundText.Should().BeTrue();
//cleanup
unitOfWork.Repository<OperationExternalEquipment>().Remove(operationExternalEquipment);
savedResult = unitOfWork.Complete();
savedResult.Should().Be(1);
}
The adjusted implementation of Repository now looks like this - I have renamed many of the methods to be more in line with other implementations of the repository pattern demonstrated online, in videos on YouTube for example.
The updated interface for the repository now looks like this:
IRepository.cs
using System;
using System.Collections.Generic;
using System.Linq.Expressions;
namespace SomeAcme.Common.Interfaces
{
/// <summary>
/// Generic implementation of the repository pattern (should maybe have been implemented a decade ago to save some development time .. :-) - saves code in the DAL layer Data.EntityFramework
/// </summary>
/// <typeparam name="T">Entity T (POCO for table)</typeparam>
public interface IRepository<T> where T : class
{
/// <summary>
/// Performs an insert of an entity
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="entity"></param>
/// <param name="keyValues">If set, these key values are used to locate entity in db after the insertion has been performed if specifed by other param for saveImmediate</param>
/// <param name="saveImmediate">Save immediately in db after adding the entity</param>
T Add(T entity, bool saveImmediate = false);
/// <summary>
/// Performs an insert of multiple entities
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="entity"></param>
/// <param name="saveImmediate">Save immediately after adding item in db</param>
IEnumerable<T> AddRange(IEnumerable<T> entity, bool saveImmediate = false);
/// <summary>
/// Saves changes. Commits the data to the database.
/// </summary>
/// <param name="dbContext">Db context</param>
void SaveChanges(object dbContext);
/// <summary>
/// Delete an entity specified by <paramref name="keyValues"/> to look up entity
/// </summary>
/// <param name="keyValues"></param>
/// <returns></returns>
T Remove(bool saveImmediate, params object[] keyValues);
/// <summary>
/// Deletes an entity specified by <paramref name="entity"/> from the database
/// </summary>
/// <param name="entity"></param>
void Remove(T entity, bool saveImmediate = false);
/// <summary>
/// Removes entities specified by <paramref name="entities"/> from the database
/// </summary>
/// <param name="entities"></param>
void RemoveRange(IEnumerable<T> entities, bool saveImmediate = false);
/// <summary>
/// Update <paramref name="entity"/>
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="entity"></param>
/// <param name="saveImmediate">Save immediate if set to true</param>
/// <returns></returns>
T Update(T entity, bool saveImmediate, params object[] keyValues);
/// <summary>
/// Equivalent to a 'GetById' method, but tailored for generic use.
/// Retrieves <paramref name="idSelector"/> specified by <paramref name="idValue"/>
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="keyValues">Key values to use to find entity</param>
T Get(bool asNoTracking = true, params object[] keyValues);
/// <summary>
/// Retrieves entities of type <typeparamref name="T"/> via predicate <paramref name="condition"/>.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="condition"></param>
/// <returns></returns>
/// <param name="asNoTracking">If true, does not track items (less chance of db locks due to turning off change tracking) </param>
IEnumerable<T> Find(Expression<Func<T, bool>> condition, bool asNoTracking = true);
/// <summary>
/// Retrieves an entity of type <typeparamref name="T"/> via predicate <paramref name="condition"/>.
/// If not found, null is returned.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="condition"></param>
/// <returns></returns>
T GetByCondition(Expression<Func<T, bool>> condition);
/// <summary>
/// Retrieve all the entities specified by <typeparamref name="T"/>.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <returns></returns>
/// <param name="asNoTracking">If true, does not track items (less chance of db locks due to turning off change tracking) </param>
IEnumerable<T> GetAll(bool asNoTracking = true);
}
}
And the implementation looks like this:
using SomeAcme.Common.Interfaces;
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;
namespace SomeAcme.Data.EntityFramework.Managers
{
public class Repository<T> : IRepository<T>, IDisposable where T : class
{
private readonly System.Data.Entity.DbContext _dbContext;
public Repository(System.Data.Entity.DbContext dbContext)
{
_dbContext = dbContext;
}
public T Add(T entity, bool saveImmediate = false)
{
return ExecuteQuery((T obj, System.Data.Entity.DbContext dbContext) =>
{
var entityDb = dbContext.Set<T>().Add(entity);
if (saveImmediate)
SaveChanges(dbContext);
return entityDb;
}, entity);
}
public T Add(T entity, bool saveImmediate = false, params object[] keys)
{
return ExecuteQuery((T obj, System.Data.Entity.DbContext dbContext) =>
{
dbContext.Entry(obj).State = EntityState.Added;
if (saveImmediate)
SaveChanges(dbContext);
var entityInDb = dbContext.Set<T>().Find(keys);
return entityInDb;
}, entity);
}
public IEnumerable<T> AddRange(IEnumerable<T> entities, bool saveImmediate)
{
var addedEntities = _dbContext.Set<T>().AddRange(entities);
if (saveImmediate)
SaveChanges(_dbContext);
return addedEntities;
}
public T Remove(bool saveImmediate = false, params object[] keyValues)
{
var entity = _dbContext.Set<T>().Find(keyValues);
if (entity == null)
return null;
var entry = _dbContext.Entry(entity);
if (entry == null)
return null;
entry.State = EntityState.Deleted;
if (saveImmediate)
SaveChanges(_dbContext);
return entity;
}
public void Remove(T entity, bool saveImmediate = false)
{
_dbContext.Set<T>().Remove(entity);
if (saveImmediate)
SaveChanges(_dbContext);
}
public void RemoveRange(IEnumerable<T> entities, bool saveImmediate = false)
{
_dbContext.Set<T>().RemoveRange(entities);
if (saveImmediate)
SaveChanges(_dbContext);
}
/// <summary>
/// Note - requiring here that we have defined primary key(s) on the target tables !
/// </summary>
/// <param name="keyValues"></param>
/// <returns></returns>
public T Get(params object[] keyValues)
{
var entity = _dbContext.Set<T>().Find(keyValues);
_dbContext.Entry(entity).State = EntityState.Detached;
return entity;
}
public IEnumerable<T> GetAll(bool asNoTracking = true)
{
return asNoTracking ? _dbContext.Set<T>().AsNoTracking() : _dbContext.Set<T>();
}
public IEnumerable<T> Find(Expression<Func<T, bool>> condition, bool asNoTracking = true)
{
IQueryable<T> query = asNoTracking ? _dbContext.Set<T>().AsNoTracking() : _dbContext.Set<T>();
var entities = query.Where(condition);
return entities;
}
public T GetByCondition(Expression<Func<T, bool>> condition)
{
IQueryable<T> query = _dbContext.Set<T>().AsNoTracking();
var entities = query.Where(condition);
return entities.FirstOrDefault();
}
public bool ExistsByCondition(Expression<Func<T, bool>> condition)
{
IQueryable<T> query = _dbContext.Set<T>().AsNoTracking();
return query.Any(condition);
}
public T Get(bool asNoTracking, params object[] keyValues)
{
var entity = _dbContext.Set<T>().Find(keyValues);
if (entity != null && asNoTracking)
{
//stop tracking the loaded entity when asNoTracking is requested
_dbContext.Entry(entity).State = EntityState.Detached;
}
return entity;
}
public void SaveChanges(object context)
{
var dbContext = context as System.Data.Entity.DbContext;
if (dbContext == null)
{
throw new ArgumentException($"dbContext object inside save method : Must be of type System.Data.Entity.DbContext", nameof(context));
}
dbContext.SaveChanges();
}
public T Update(T entity, bool saveImmediate = false, params object[] keyValues)
{
return ExecuteQuery((T obj, System.Data.Entity.DbContext dbContext) =>
{
var entityInDb = dbContext.Set<T>().Find(keyValues);
if (entityInDb == null)
return null;
dbContext.Entry(entityInDb).CurrentValues.SetValues(obj);
if (saveImmediate)
{
SaveChanges(dbContext);
}
return obj;
}, entity);
}
private T ExecuteQuery(Func<T, System.Data.Entity.DbContext, T> query, T entity)
{
T result = query(entity, _dbContext);
return result;
}
public void Dispose()
{
Dispose(true);
}
private void Dispose(bool isDisposing)
{
if (isDisposing)
{
_dbContext?.Dispose();
GC.SuppressFinalize(this);
}
}
}
}
In total, the unit of work and repository pattern code is about 300 lines combined. It should match a lot of Data Access Layer (DAL) implementations for Entity Framework, so we could probably remove a lot of code in many projects by following these patterns - well-established data access patterns, catalogued for example in Martin Fowler's Patterns of Enterprise Application Architecture.
If I were to adjust this code further, I would make these modifications :
- UnitOfWork should only allow adding custom repos in addition to some predefined repos. This would give an overall simplification of UnitOfWork
- Specific transaction handling should be added, like setting the transaction isolation level, and explicitly rolling back in case anything crashes inside the UnitOfWork (a sketch of this follows below)
- Possibly add some more shared utility methods inside the Repository class if such are needed
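As a starting point for the second bullet, a sketch of how Complete could wrap SaveChanges in an explicit EF 6 transaction with a chosen isolation level and rollback might look like this. The method CompleteWithTransaction is an assumption and not part of the implementation above:
public int CompleteWithTransaction(System.Data.IsolationLevel isolationLevel = System.Data.IsolationLevel.ReadCommitted)
{
    //wrap SaveChanges in an explicit transaction so we control the isolation level and roll back on failure
    using (var transaction = _dbContext.Database.BeginTransaction(isolationLevel))
    {
        try
        {
            int numStateEntriesWritten = _dbContext.SaveChanges();
            transaction.Commit();
            return numStateEntriesWritten;
        }
        catch
        {
            transaction.Rollback();
            throw; //rethrow so the caller sees the original error
        }
    }
}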
Saturday, 25 June 2022
Generic Singleton that allows initing a type T once
I have looked into a simple design pattern today and decided to have a go at a generic implementation of it.
The class is sealed and has a private constructor. For a given type T it instantiates a 'Singleton', i.e. the same object is returned via the static Instance property for that type T. It is also possible to call 'Init' with a custom object, in case you want to set the Singleton to a custom initialized instance of type T. If you skip the Init call, the parameterless constructor is used instead. You can only call Init once, so do this initialization in the application startup code for example.
Of course, we could instead create a more distilled version that disallows 'initing' the singleton, i.e. not allowing an adjusted instance of type T to be passed in as the inited singleton. Singleton is also usually considered a pattern to implement per class and not generically like this, and it violates the Single Responsibility Principle since the object of type T has no control over how it is instantiated. Still, Singleton is an accepted creational pattern.
public sealed class Singleton<T> where T : class, new()
{
private static Lazy<T> InstanceProxy
{
get
{
if (_instanceObj == null)
{
//no Init call was made - fall back to the parameterless constructor of T
_instanceObj = new Lazy<T>(() => new T());
}
return _instanceObj;
}
}
private static Lazy<T>? _instanceObj;
public static T Instance { get { return InstanceProxy.Value; } }
public static void Init(Lazy<T> instance)
{
if (_instanceObj?.IsValueCreated == true)
{
throw new ArgumentException($"A Singleton for the type <T> is already set");
}
_instanceObj = instance ?? throw new ArgumentNullException(nameof(instance));
}
private Singleton()
{
}
}
You can use it like this. Some model class :
public class Aeroplane
{
public string? Model { get; set; }
public string? Manufacturer { get; set; }
public int YearBuilt { get; set; }
public int PassengerCount { get; set; }
}
Usage sample :
var aeroplane = new Aeroplane
{
Manufacturer = "Boeing",
Model = "747",
PassengerCount = 350,
YearBuilt = 2005
};
Singleton<Aeroplane>.Init(new Lazy<Aeroplane>(() => aeroplane)); //set the custom instance as the singleton for Aeroplane
var aeroPlane3 = Singleton<Aeroplane>.Instance;
var aeroPlane4 = Singleton<Aeroplane>.Instance;
Console.WriteLine($"Aeroplane3 and aeroplane4 is same object? {Object.ReferenceEquals(aeroPlane3, aeroPlane4)}");
Outputs 'true'.
Trying to re-init type T Singleton to another object fails :
var aeroplane2 = new Aeroplane
{
Manufacturer = "Sopwith Aviation Company",
Model = "Sophwith Camel",
PassengerCount = 1,
YearBuilt = 1917
};
Singleton<Aeroplane>.Init(new Lazy<Aeroplane>(aeroplane2));
You can of course just access the Singleton without initing it - as mentioned, it will then call the default public constructor of type T and set this 'default' instance of T as the singleton for type T.
Possibly you could have a way of setting a custom constructor here instead of passing an object, as a sort of improved 'factory pattern'. We could for example let the Init method specify which constructor to call and pass in parameters.
var aeroplaneDefaultInstantiated = Singleton<Aeroplane>.Instance;
Note : Default instantiation - calls the parameterless public constructor of type T. So you must do the initialization inside the public parameterless constructor if you skip the Init method.
We can also allow sending in a custom constructor to the Singleton class by offering another init method. Here, we can send in parameters of given types and therefore identify the constructor to call. This of course demands that such a constructor exists.
It offers another way of setting a singleton object. We now can either set the singleton object for type T via :
- An instance that calls the default parameterless constructor and sets this object as the singleton for type T
- A customized instance, in case you want a more fine-tuned initialized object to be set as the singleton for type T
- An instance that calls a specified constructor of the type T and sets the created instance as the singleton for type T
void Main()
{
Singleton<Aeroplane>.Init(new object[] { "Nieuport IV", 1911 });
//Singleton<Aeroplane>.Init(new object[] { "Nieuport V", 1914 });
var aeroplaneTwo = Singleton<Aeroplane>.Instance;
var aeroplaneThree = Singleton<Aeroplane>.Instance;
Object.ReferenceEquals(aeroplaneTwo, aeroplaneThree).Dump();
aeroplaneTwo.Dump();
aeroplaneThree.Dump();
}
public class Aeroplane
{
public string? Model { get; set; }
public string? Manufacturer { get; set; }
public int YearBuilt { get; set; }
public int PassengerCount { get; set; }
public Aeroplane()
{
}
public Aeroplane(string model, int yearBuilt)
{
Model = model;
YearBuilt = yearBuilt;
if (YearBuilt < 1913) {
PassengerCount = 1;
}
}
}
public sealed class Singleton<T> where T : class, new()
{
private static Lazy<T> InstanceProxy
{
get
{
if (_instanceObj == null)
{
//no Init call was made - fall back to the parameterless constructor of T
_instanceObj = new Lazy<T>(() => new T());
}
return _instanceObj;
}
}
private static Lazy<T>? _instanceObj;
public static void Init(object[] constructorParams)
{
if (_instanceObj?.IsValueCreated == true)
{
throw new ArgumentException($"A Singleton for the type <{typeof(T).Name}> is already set");
}
var constructor = typeof(T).GetConstructor(constructorParams.Select(p => p.GetType()).ToArray());
if (constructor == null)
{
string typenamesParams = string.Join(",", constructorParams.Select(p => p.GetType()));
throw new ArgumentException($"Could not find a constructor of type {typeof(T).Name} with the parameter types {typenamesParams}");
}
var instanceObj = constructor.Invoke(constructorParams);
_instanceObj = new Lazy<T>(() => (T)instanceObj); //wrap the constructed instance in a Lazy<T>
}
public static T Instance { get { return InstanceProxy.Value; } }
public static void Init(Lazy<T> instance)
{
if (_instanceObj?.IsValueCreated == true)
{
throw new ArgumentException($"A Singleton for the type <T> is already set");
}
_instanceObj = instance ?? throw new ArgumentNullException(nameof(instance));
}
private Singleton()
{
}
}
Once more, we disallow calling the Init method multiple times; this overload calls a specific constructor to create the instance that is set as the Singleton object.