I wrote an AngularJS directive at work today for clearing a text field. We still use AngularJS in multiple projects for the front-end (although I have worked more with Angular than AngularJS the last 2-3 years).
The directive ended up like this (we use Bootstrap v2.3.2):
Checking that we have added a test adapter for NUnit so that our tests in Azure DevOps are run
A challenge with running tests inside PowerShell is that the NUnit test adapter NuGet package may be missing from the solution.
If you run tests using NUnit 2.x, you need NUnitTestAdapter. If you use NUnit 3.x, NUnit3TestAdapter is required.
The following PowerShell script can be used to check whether we have added such a NuGet package reference to at least one test project in the
solution. We have here some tests that list all PackageReference entries in the csproj files of the solution.
Note: this requires the following setup of the NuGet package references in your solution.
You have to have csproj projects in the solution
You must use PackageReference, i.e. list the NuGet packages in the csproj file. This will not work if you instead use the packages lock file format or packages.config.
For example, we could run the function call:
List-PackagesOfTestProjectInSolution "C:\dev\someacme\someacme.sln"
And we get our lists of package references in that solution (here we only look inside projects with a name containing "Test"):
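The PowerShell script itself is not reproduced in this chunk. As an illustration of the same check, here is a hedged C# sketch (the class and method names are mine, not from the script) that inspects a csproj file's PackageReference entries for one of the two NUnit adapters:

```csharp
using System.Linq;
using System.Xml.Linq;

public static class TestAdapterCheck
{
    // Returns true if the csproj XML references one of the NUnit test adapters:
    // NUnitTestAdapter (NUnit 2.x) or NUnit3TestAdapter (NUnit 3.x).
    public static bool HasNUnitTestAdapter(string csprojXml)
    {
        var doc = XDocument.Parse(csprojXml);
        return doc.Descendants()
            .Where(e => e.Name.LocalName == "PackageReference")
            .Select(e => (string)e.Attribute("Include"))
            .Any(name => name == "NUnitTestAdapter" || name == "NUnit3TestAdapter");
    }
}
```

The same LocalName-based lookup works both for old-style and SDK-style project files, since it ignores XML namespaces.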
This article will discuss immutable collections in C#, more precisely immutable lists of generic type T wrapped inside a class. This makes immutable lists easier to use, and these lists can only be
altered via method calls. Remember that altering an immutable list always returns a new immutable list. For easier use, we can have a wrapper for this.
First off, inside LINQPad 5, which is used in this article, hit F4. In case you want to use Visual Studio instead, the same code should work there (except for LINQPad's Dump method). In the tab Additional References, choose Add NuGet. Then search for System.Collections.Immutable. After selecting this NuGet package, choose the tab Additional Namespace Imports.
Now paste this demo code:
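The demo code itself did not survive in this chunk. A minimal sketch of such a wrapper, assuming the System.Collections.Immutable package (the class name ImmutableListWrapper is illustrative; _internalList and Contents match the names discussed below), could look like this:

```csharp
using System.Collections.Immutable;

public class ImmutableListWrapper<T>
{
    // the wrapped immutable list; private setter so it can only be
    // reassigned from inside this class
    private ImmutableList<T> _internalList { get; set; }

    public ImmutableListWrapper()
    {
        _internalList = ImmutableList<T>.Empty;
    }

    // every modifying operation on an immutable list returns a NEW list,
    // which we reassign to the backing member here
    public void Add(T item) => _internalList = _internalList.Add(item);

    public void Remove(T item) => _internalList = _internalList.Remove(item);

    // read-only access to the contents
    public ImmutableList<T> Contents() => _internalList;
}
```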
As we can see, the wrapper class can add items to the immutable collection, and we also reassign the result of the modifying operation to the same _internalList field, which has a private setter and is initialized to an empty immutable list in the constructor. This gives you mutability on top of the immutable collection without having to remember to reassign the variable, which is error-prone in itself. Note that we have called the field _internalList, and we can get at its contents through the wrapper.
What is the benefit of this? Well, although we can reach into the internal collection with the Contents method here, the immutable list is still immutable. If you want to change it, you have to call the specific methods offered on it in the wrapping class. So, data-integrity wise, we have data that can only change via the methods offered in the wrapping class. A collection which is not immutable can be changed in many ways just by giving access to it. We still have control over the data via the wrapper, and we make the immutable class easier to consume by reassigning the collection.
Many developers today use Entity Framework (EF) as their data access library to communicate with the database. EF
is an ORM (object-relational mapper), and while it boasts much functionality such as change tracking and relationship mapping, Dapper at the other
end of the ORM spectrum is a micro-ORM. A micro-ORM has less functionality, but usually offers more speed and less overhead.
Dapper is a great micro-ORM; however, writing SQL manually is often error-prone or tedious. Some purists love writing the SQL manually and being
sure which SQL they send off to the DB. That is much of the point of Dapper. However, lending a hand to developers in building their SQL should still be
allowed. The query compilation time added by such helper methods is minuscule anyway compared to the heavy overhead of an advanced ORM like EF.
Anyway, the code in this article shows some code I am working with for building inner joins between two tables. The relationship between the two tables is 1:1 in my test case,
and the inner join does for now not support a where predicate filter, although adding such a filter should be easy.
The source code for my DapperUtils library is available on GitHub:
https://github.com/toreaurstadboss/DapperUtils
First, we make use of SqlBuilder from the DapperUtils add-on lib for Dapper.
Using SqlBuilder, we can define a SQL template and add the extension methods and helper methods required to build and retrieve the inner join.
The helper methods in use are also included below the extension method InnerJoin. Note that we use SqlBuilder here to do much of the SQL template processing to end up with
the SQL that is sent to the DB (the RawSql property of the SqlBuilder instance).
/// <summary>
/// Inner joins the left and right tables by the specified left and right key expression lambdas.
/// This uses a template builder and a shortcut to join two tables without having to specify any SQL manually,
/// and gives you the entire inner join result set. It is an implicit requirement that <paramref name="leftKey"/>
/// and <paramref name="rightKey"/> are compatible data types, as they are used for the join.
/// This method does for now not allow specifying any filtering (where clause) or logic around the join besides
/// the two columns to join on.
/// </summary>
/// <typeparam name="TLeftTable">Type of left table</typeparam>
/// <typeparam name="TRightTable">Type of right table</typeparam>
/// <param name="connection">IDbConnection to the DB</param>
/// <param name="leftKey">Member expression of the left table key in the join</param>
/// <param name="rightKey">Member expression of the right table key in the join</param>
/// <returns>IEnumerable of ExpandoObject. Tip: iterate through the IEnumerable and save each ExpandoObject into a variable of type dynamic to access the members more conveniently if desired.</returns>
public static IEnumerable<ExpandoObject> InnerJoin<TLeftTable, TRightTable>(this IDbConnection connection,
Expression<Func<TLeftTable, object>> leftKey, Expression<Func<TRightTable, object>> rightKey)
{
var builder = new SqlBuilder();
string leftTableSelectClause = string.Join(",", GetPublicPropertyNames<TLeftTable>("l"));
string rightTableSelectClause = string.Join(",", GetPublicPropertyNames<TRightTable>("r"));
string leftKeyName = GetMemberName(leftKey);
string rightKeyName = GetMemberName(rightKey);
string leftTableName = GetDbTableName<TLeftTable>();
string rightTableName = GetDbTableName<TRightTable>();
string joinSelectClause = $"select {leftTableSelectClause}, {rightTableSelectClause} from {leftTableName} l /**innerjoin**/";
var selector = builder.AddTemplate(joinSelectClause);
builder.InnerJoin($"{rightTableName} r on l.{leftKeyName} = r.{rightKeyName}");
var joinedResults = connection.Query(selector.RawSql, selector.Parameters)
.Select(x => (ExpandoObject)DapperUtilsExtensions.ToExpandoObject(x)).ToList();
return joinedResults;
}
private static string[] GetPublicPropertyNames<T>(string tableQualifierPrefix = null)
{
return typeof(T).GetProperties(System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Instance)
.Where(x => !IsNotMapped(x))
.Select(x => !string.IsNullOrEmpty(tableQualifierPrefix) ? tableQualifierPrefix + "." + x.Name : x.Name).ToArray();
}
private static bool IsNotMapped(PropertyInfo x)
{
var notmappedAttr = x.GetCustomAttributes<NotMappedAttribute>()?.OfType<NotMappedAttribute>().FirstOrDefault();
return notmappedAttr != null;
}
/// <summary>
/// Returns the database table name, either via the System.ComponentModel.DataAnnotations.Schema.Table attribute
/// if it exists, or just the name of the <typeparamref name="TClass"/> type parameter.
/// </summary>
/// <typeparam name="TClass"></typeparam>
/// <returns></returns>
private static string GetDbTableName<TClass>()
{
var tableAttribute = typeof(TClass).GetCustomAttributes(typeof(TableAttribute), false)?.FirstOrDefault() as TableAttribute;
if (tableAttribute != null)
{
if (!string.IsNullOrEmpty(tableAttribute.Schema))
{
return $"[{tableAttribute.Schema}].[{tableAttribute.Name}]";
}
return tableAttribute.Name;
}
return typeof(TClass).Name;
}
private static string GetMemberName<T>(Expression<Func<T, object>> expression)
{
switch (expression.Body)
{
case MemberExpression m:
return m.Member.Name;
case UnaryExpression u when u.Operand is MemberExpression m:
return m.Member.Name;
default:
throw new NotImplementedException(expression.GetType().ToString());
}
}
public static ExpandoObject ToExpandoObject(object value)
{
IDictionary<string, object> dapperRowProperties = value as IDictionary<string, object>;
IDictionary<string, object> expando = new ExpandoObject();
if (dapperRowProperties == null)
{
return expando as ExpandoObject;
}
foreach (KeyValuePair<string, object> property in dapperRowProperties)
{
if (!expando.ContainsKey(property.Key))
{
expando.Add(property.Key, property.Value);
}
else
{
//suffix the colliding key with a random guid so both values are kept
expando.Add(property.Key + Guid.NewGuid().ToString("N"), property.Value);
}
}
return expando as ExpandoObject;
}
Here are the NuGet packages in use in the small lib; they are used in the test project too:
Two unit tests show how much easier the syntax gets with this helper method. The downside is that you can't fully control the SQL yourself, but the benefit is that it is quicker to implement.
[Test]
public void InnerJoinWithManualSqlReturnsExpected()
{
var builder = new SqlBuilder();
var selector = builder.AddTemplate("select p.ProductID, p.ProductName, p.CategoryID, c.CategoryName, s.SupplierID, s.City from products p /**innerjoin**/");
builder.InnerJoin("categories c on c.CategoryID = p.CategoryID");
builder.InnerJoin("suppliers s on p.SupplierID = s.SupplierID");
dynamic joinedproductsandcategoryandsuppliers = Connection.Query(selector.RawSql, selector.Parameters).Select(x => (ExpandoObject)DapperUtilsExtensions.ToExpandoObject(x)).ToList();
var firstRow = joinedproductsandcategoryandsuppliers[0];
Assert.AreEqual(firstRow.ProductID + firstRow.ProductName + firstRow.CategoryID + firstRow.CategoryName + firstRow.SupplierID + firstRow.City, "1Chai1Beverages1London");
}
[Test]
public void InnerJoinWithoutManualSqlReturnsExpected()
{
var joinedproductsandcategory = Connection.InnerJoin<Product, Category>(l => l.CategoryID, r => r.CategoryID);
dynamic firstRow = joinedproductsandcategory.ElementAt(0);
Assert.AreEqual(firstRow.ProductID + firstRow.ProductName + firstRow.CategoryID + firstRow.CategoryName + firstRow.SupplierID, "1Chai1Beverages1");
}
Our POCO classes used in the tests are these two. We use the NuGet package System.ComponentModel.Annotations and the attributes Table and NotMapped to control the SQL built here:
Table specifies the DB table name for the POCO (the name of the type is used as a fallback if the Table attribute is missing), and NotMapped is used in case there are properties, such as relationship properties ("navigation properties" in EF terms), that should not be used in the SQL select clause.
In the end, we have an easy way to do a standard join. Improvements here could be the following:
Support for where predicates to filter the joins
More control on the join condition if desired
Support for joins across three tables (or more?) - SqlBuilder already supports this; what is missing is lambda expression support for IntelliSense
What if a property does not match a DB column? We should support the Column attribute from System.ComponentModel.DataAnnotations.Schema.
Investigate other join types, such as left outer joins - this should actually be just a minor adjustment.
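As a sketch of the last point: assuming Dapper's SqlBuilder also exposes a LeftJoin method and a /**leftjoin**/ template token mirroring /**innerjoin**/ (my assumption, untested here), a left outer join variant of the InnerJoin extension method above could look like this:

```csharp
public static IEnumerable<ExpandoObject> LeftJoin<TLeftTable, TRightTable>(this IDbConnection connection,
Expression<Func<TLeftTable, object>> leftKey, Expression<Func<TRightTable, object>> rightKey)
{
var builder = new SqlBuilder();
string leftTableSelectClause = string.Join(",", GetPublicPropertyNames<TLeftTable>("l"));
string rightTableSelectClause = string.Join(",", GetPublicPropertyNames<TRightTable>("r"));
string leftKeyName = GetMemberName(leftKey);
string rightKeyName = GetMemberName(rightKey);
string leftTableName = GetDbTableName<TLeftTable>();
string rightTableName = GetDbTableName<TRightTable>();
//only the template token and the builder call change compared to InnerJoin
string joinSelectClause = $"select {leftTableSelectClause}, {rightTableSelectClause} from {leftTableName} l /**leftjoin**/";
var selector = builder.AddTemplate(joinSelectClause);
builder.LeftJoin($"{rightTableName} r on l.{leftKeyName} = r.{rightKeyName}");
return connection.Query(selector.RawSql, selector.Parameters)
.Select(x => (ExpandoObject)DapperUtilsExtensions.ToExpandoObject(x)).ToList();
}
```

This relies on the same helper methods (GetPublicPropertyNames, GetMemberName, GetDbTableName) shown earlier in the article.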
I just added a Flatten method to my SimpleTsLinq library today!
The Github repo is at:
The Npm page is at:
This method can flatten multiple arrays at a desired depth (defaults to Infinity), and each array itself may have arbitrary depth.
The end result is that the multiple (nested) arrays are returned as a single, flat array. Much like SelectMany in Linq!
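For comparison, the LINQ analogue mentioned here can be sketched in C#; SelectMany flattens one level of nesting (the helper class below is illustrative):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class FlattenDemo
{
    // SelectMany flattens one level of nesting: a sequence of sequences
    // becomes a single flat sequence, analogous to Flatten with depth 1.
    public static int[] FlattenOneLevel(IEnumerable<int[]> arrays)
    {
        return arrays.SelectMany(a => a).ToArray();
    }
}
```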
First I added the method to the generic Array interface below:
if (!Array.prototype.Flatten) {
Array.prototype.Flatten = function <T>(otherArrays: T[][] = null, depth = Infinity) {
let flattenedArrayOfThis = [...flatten(this, depth)];
if (otherArrays == null || otherArrays == undefined) {
return flattenedArrayOfThis;
}
return [...flattenedArrayOfThis, ...flatten(otherArrays, depth)];
}
}
function* flatten(array, depth) {
if (depth === undefined) {
depth = 1;
}
for (const item of array) {
if (Array.isArray(item) && depth > 0) {
yield* flatten(item, depth - 1);
} else {
yield item;
}
}
}
The implementation uses a generator function (identified by the * suffix), which is called recursively if we have an array within an array.
The two tests below are run in Karma to test it out.
it('can flatten multiple arrays into a single array', () => {
let oneArray = [1, 2, [3, 3]];
let anotherArray = [4, [4, 5], 6];
let thirdArray = [7, 7, [7, 7]];
let threeArrays = [oneArray, anotherArray, thirdArray];
let flattenedArrays = oneArray.Flatten([anotherArray, thirdArray], Infinity);
let isEqualInContentToExpectedFlattenedArray = flattenedArrays.SequenceEqual([1, 2, 3, 3, 4, 4, 5, 6, 7, 7, 7, 7]);
expect(isEqualInContentToExpectedFlattenedArray).toBe(true);
});
it('can flatten one deep array into a single array', () => {
let oneArray = [1, 2, [3, 3]];
let flattenedArrays = oneArray.Flatten(null, 1);
let isEqualInContentToExpectedFlattenedArray = flattenedArrays.SequenceEqual([1, 2, 3, 3]);
expect(isEqualInContentToExpectedFlattenedArray).toBe(true);
});
This article will present a simple ink drawing control in Windows Forms.
The code is run in LINQPad, and the concepts here should be easily portable to a small application.
Note - there are already built-in controls for this in Windows Forms (and WPF and UWP too). That is not the point of this article.
The point is to show how you can use System.Reactive and the Observable.FromEventPattern method to create an event source stream from CLR events,
so you can build reactive applications where the source pushes updates to its target/receiver, instead of the traditional pull-based scenario of event subscriptions.
First off, we install LINQPad from:
https://www.linqpad.net
I used LINQPad 5 for this code; you can of course download LINQPad 6 with .NET Core support, but this article is tailored for LINQPad 5 and .NET Framework.
After installing LINQPad 5, start it and hit F4. Choose Add NuGet. Now choose Search online and type in the following four NuGet packages to get started with Reactive Extensions for .NET.
System.Reactive
System.Reactive.Core
System.Reactive.Interfaces
System.Reactive.Linq
Also choose Add.. and choose System.Windows.Forms.
Also, choose the tab Additional Namespace Imports.
Import these namespaces
System.Reactive
System.Reactive.Linq
System.Windows.Forms
Over to the code. First we create a Form with a PictureBox to draw onto, like this:
void Main()
{
var form = new Form();
form.Width = 800;
form.Height = 800;
form.BackColor = Color.White;
var canvas = new PictureBox();
canvas.Height = 400;
canvas.Width = 400;
canvas.BackColor = Color.AliceBlue;
form.Controls.Add(canvas);
.. //more code soon
Next up we create a list of Point to add the points to. We also use Observable.FromEventPattern to track events using the System.Reactive method to create an observable from a CLR event.
We then subscribe to the three events we have set up with observables and add the logic to draw anti-aliased Bezier curves. Actually, drawing a Bezier curve usually consists of the end user defining
four control points, the start and end of the bezier line and two control points (for the simplest Bezier curve). However, I chose anti-aliased Bezier curves that just uses the last four points from the
dragged line, since smooth Bezier curves looks way better than using DrawLine for example for simple polylines. I use GDI CreateGraphics() method of the Picturebox (this is also available on most other Windows Forms controls,
including Forms, but I wanted to have the drawing restricted to the PictureBox).
The full code then is the entire code snippet below:
void Main()
{
var form = new Form { Width = 800, Height = 800, BackColor = Color.White };
var canvas = new PictureBox { Height = 400, Width = 400, BackColor = Color.AliceBlue };
form.Controls.Add(canvas);
var points = new List<Point>();
bool isDrag = false;
var mouseDowns = Observable.FromEventPattern<MouseEventArgs>(canvas, "MouseDown");
var mouseUps = Observable.FromEventPattern<MouseEventArgs>(canvas, "MouseUp");
var mouseMoves = Observable.FromEventPattern<MouseEventArgs>(canvas, "MouseMove");
mouseDowns.Subscribe(m =>
{
if (m.EventArgs.Button == MouseButtons.Right)
{
isDrag = false;
points.Clear();
canvas.CreateGraphics().Clear(Color.AliceBlue);
return;
}
isDrag = true;
});
mouseUps.Subscribe(m => {
isDrag = false;
});
mouseMoves.Subscribe(move => {
points.Add(new Point(move.EventArgs.Location.X, move.EventArgs.Location.Y));
if (isDrag && points.Count > 4) {
//form.CreateGraphics().DrawLine(new Pen(Color.Blue, 10), points[points.Count - 2].X, points[points.Count - 2].Y, points[points.Count - 1].X, points[points.Count - 1].Y);
var pt1 = new PointF(points[points.Count - 4].X, points[points.Count - 4].Y);
var pt2 = new PointF(points[points.Count - 3].X, points[points.Count - 3].Y);
var pt3 = new PointF(points[points.Count - 2].X, points[points.Count - 2].Y);
var pt4 = new PointF(points[points.Count - 1].X, points[points.Count - 1].Y);
var graphics = canvas.CreateGraphics();
graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.AntiAlias;
graphics.DrawBezier(new Pen(Color.Blue, 4.0f), pt1, pt2, pt3, pt4);
}
});
form.Show();
}
LINQPad/System.Reactive/GDI Windows Forms in action! Screenshot:
I have added comments here for drawing a polyline instead of a Bezier curve, since this also works and is quicker than the nicer Bezier curve. Maybe you want to display this on a simple device with less processing power, for example.
To clear the drawing, just hit the right mouse button. To start drawing, just left-click, drag, and let go again.
Now look how easy this code really is for creating a simple ink control in Windows Forms! Of course, Windows Forms today is more and more "dated" compared to younger frameworks, but it still does its job. WPF has its own built-in InkCanvas.
But in case you want an ink control in Windows Forms, this is an easy way of creating one, and also a good Hello World for Reactive Extensions.
In .NET Core, the code should be really similar to the code above. Windows Forms is available with .NET Core 3.0 or newer.
https://devblogs.microsoft.com/dotnet/windows-forms-designer-for-net-core-released/
.NET 5 and .NET Core contain a lot of new methods for JSON functionality in the System.Text.Json namespace. I created a helper class, Utf8JsonReaderSerializer, for reading a file using Utf8JsonReader,
and it just outputs the JSON to a formatted JSON string. With optimizations, the serialization could be done even faster. For now, I need a conversion via StringBuilder.ToString to remove the last commas of arrays and of object properties, as the Utf8JsonReader is sequential and forward-only, as mentioned in the API page at:
https://docs.microsoft.com/en-us/dotnet/api/system.text.json.utf8jsonreader?view=net-5.0
This is the helper method I came up with to read a file and process it via Utf8JsonReader:
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.Json;
namespace SystemTextJsonTestRun
{
public static class Utf8JsonReaderSerializer
{
public static string ReadFile(string filePath)
{
if (!File.Exists(filePath))
{
throw new FileNotFoundException(filePath);
}
var jsonBytes = File.ReadAllBytes(filePath);
var jsonSpan = jsonBytes.AsSpan();
var json = new Utf8JsonReader(jsonSpan);
var sb = new StringBuilder();
while (json.Read())
{
if (json.TokenType == JsonTokenType.StartObject)
{
sb.Append(Environment.NewLine);
}
else if (json.TokenType == JsonTokenType.EndObject)
{
//remove last comma added
sb.RemoveLast(",");
sb.Append(Environment.NewLine);
}
if (json.CurrentDepth > 0)
{
for (int i = 0; i < json.CurrentDepth; i++)
{
sb.Append(" "); //space indentation
}
}
sb.Append(GetTokenRepresentation(json));
if (json.TokenType == JsonTokenType.EndObject || json.TokenType == JsonTokenType.EndArray)
{
sb.AppendLine();
}
if (new[] { JsonTokenType.String, JsonTokenType.Number, JsonTokenType.Null, JsonTokenType.False,
JsonTokenType.None, JsonTokenType.True }.Contains(json.TokenType))
{
sb.AppendLine(",");
}
}
//remove last comma for EndObject
sb.RemoveLast(",");
return sb.ToString();
}
private static string GetTokenRepresentation(Utf8JsonReader json) =>
json.TokenType switch
{
JsonTokenType.StartObject => $"{{{Environment.NewLine}",
JsonTokenType.EndObject => "},",
JsonTokenType.StartArray => $"[{Environment.NewLine}",
JsonTokenType.EndArray => $"]",
JsonTokenType.PropertyName => $"\"{json.GetString()}\":",
JsonTokenType.Comment => json.GetString(),
JsonTokenType.String => $"\"{json.GetString()}\"",
JsonTokenType.Number => GetNumberToString(json),
JsonTokenType.True => json.GetBoolean().ToString().ToLower(),
JsonTokenType.False => json.GetBoolean().ToString().ToLower(),
JsonTokenType.Null => string.Empty,
_ => "Unknown Json token type"
};
//uses the Try methods of Utf8JsonReader, as the original TODO suggested,
//instead of nested try/catch blocks
private static string GetNumberToString(Utf8JsonReader json)
{
if (json.TryGetInt32(out var intValue))
{
return intValue.ToString();
}
if (json.TryGetSingle(out var floatValue))
{
return floatValue.ToString();
}
if (json.TryGetDouble(out var doubleValue))
{
return doubleValue.ToString();
}
return "?"; //fallback to a string if not possible to deduce the type
}
}
}
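The RemoveLast extension method on StringBuilder used above is not part of .NET and is not shown in this chunk. A minimal sketch of such a helper (the name RemoveLast matches the calls above; the exact behavior, removing a trailing token while ignoring trailing whitespace, is my assumption):

```csharp
using System.Text;

public static class StringBuilderExtensions
{
    // Removes the last occurrence of 'value' if the StringBuilder ends with it,
    // ignoring any trailing whitespace/newlines appended after it.
    public static StringBuilder RemoveLast(this StringBuilder sb, string value)
    {
        if (sb == null || string.IsNullOrEmpty(value))
        {
            return sb;
        }
        // skip trailing whitespace (e.g. newlines appended after the comma)
        int end = sb.Length - 1;
        while (end >= 0 && char.IsWhiteSpace(sb[end]))
        {
            end--;
        }
        int start = end - value.Length + 1;
        if (start < 0)
        {
            return sb;
        }
        // verify the characters right before the whitespace match 'value'
        for (int i = 0; i < value.Length; i++)
        {
            if (sb[start + i] != value[i])
            {
                return sb;
            }
        }
        return sb.Remove(start, value.Length);
    }
}
```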
The JSON file I tested the code with as input is this string:
{"courseName":"Build Your Own Application Framework","language":"C#","author":{"firstName":"Matt","lastName":"Honeycutt"},"publishedAt":"2012-03-13T12:30:00.000Z","publishedYear":2014,"isActive":true,"isRetired":false,"tags":["aspnet","C#","dotnet"]}
This code validates against Json Lint also:
https://jsonlint.com
Now why even bother parsing a JSON file just to output it again as a JSON string? Well, first of all, we use Utf8JsonReader, a very fast parser in .NET, and we can do various processing along the forward-only, sequential pass, for example formatting the file with indentation. Utf8JsonReader will also validate the JSON document strictly against the JSON specification, RFC 8259. Hence, we can get validation almost for free by catching any errors: a method that scans the file can check whether json.Read() returns false, or catch the JsonException thrown if a node of the JSON document does not validate, and return true or false accordingly.
Also, a low-level analysis with Utf8JsonReader lets you see the different tokens of the JSON document structure that .NET provides. We could transform the document or add specific formatting and so on by altering the code displayed here.
To run the code, test it with a sample JSON document like this:
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Utf8JsonReader sample");
string json = Utf8JsonReaderSerializer.ReadFile("sample.json");
string tempFile = Path.ChangeExtension(Path.GetTempFileName(),"json");
File.WriteAllText(tempFile, json);
Console.WriteLine($"Json file read and processed result in location: {tempFile}");
Console.WriteLine($"Json file contents: {Environment.NewLine}{json}");
}
}
TeamCity has several bugs when it comes to running NUnit tests.
The following guide shows how you can prepare the TeamCity build agent to run NUnit 3.x tests.
First, we need to install the NUnit console runner.
Tips around this were found in the following Stack Overflow thread:
This is also mentioned in the documentation of TeamCity:
First off, add two command line steps and put the two commands into each step - these steps can be run at the start of the pipeline in TeamCity.
Inside the NUnit runner type step, also configure the NUnit console path:
Use this path:
packages\NUnit.ConsoleRunner.3.8.0\tools\nunit3-console.exe
For the test assemblies, make sure you use a path like this:
**\bin\%BuildConfiguration%\*.Test.dll
Add the %BuildConfiguration% parameter and set it to:
Debug
More tips here:
https://stackoverflow.com/questions/57953724/nunit-teamcity-process-exited-with-code-4
This article will present a Strip method that accepts a Regex defining the pattern of allowed characters. It is similar to Regex.Replace, but it works the inverted way.
Instead of removing the chars matching the pattern, as Regex.Replace does, this utility method lets you define the allowed chars, i.e. the chars defined in this regex are the chars you want to keep.
First off we define the utility method, as an extension method.
/// <summary>
/// Strips away every character not defined in the provided regex <paramref name="allowedChars"/>
/// </summary>
/// <param name="s">Input string</param>
/// <param name="allowedChars">The allowed characters defined in a Regex pattern, for example: [A-Za-z0-9]+</param>
/// <returns>Input string with only the allowed characters</returns>
public static string Strip(this string s, Regex allowedChars)
{
if (s == null)
{
return s;
}
if (allowedChars == null)
{
return string.Empty;
}
Match match = Regex.Match(s, allowedChars.ToString());
List<char> allowedAlphabet = new List<char>();
while (match.Success)
{
for (int i = 0; i < match.Groups.Count; i++)
{
allowedAlphabet.AddRange(match.Groups[i].Value.ToCharArray());
}
match = match.NextMatch();
}
return new string(s.Where(ch => allowedAlphabet.Contains(ch)).ToArray());
}
Here are some tests that tests out this Strip method:
[Test]
[TestCase("abc123abc", "[A-z]+", "abcabc")]
[TestCase("abc123def456", "[0-9]+", "123456")]
[TestCase("The F-32 Lightning II is a newer generation fighter jets than the F-16 Fighting Falcon", "[0-9]+", "3216")]
[TestCase("Here are some Norwegian letters : ÆØÅ and in lowercase: æøå", "[æ|ø|å]", "æøå")]
public void TestStripWithRegex(string input, string regexString, string expectedOutput)
{
var regex = new Regex(regexString);
input.Strip(regex).Should().Be(expectedOutput);
}
In this article I will present some code I just added to my SimpleTsLinq library, which you can easily install using npm.
The library is here on npmjs.com:
The ToDictionary method looks like this:
if (!Array.prototype.ToDictionary) {
Array.prototype.ToDictionary = function <T>(keySelector: (arg: T) => any): any {
let hash = {};
this.map(item => {
let key = keySelector(item);
if (!(key in hash)) {
hash[key] = item;
}
else {
if (!(Array.isArray(hash[key]))) {
hash[key] = [hash[key]];
}
hash[key].push(item);
}
});
return hash;
}
}
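The ToDictionary method above effectively groups items per key: the first item for a key is stored directly, and later items for the same key turn the entry into an array. For comparison, the closest C#/LINQ analogue is ToLookup (or GroupBy), which always stores the values per key as a group. A small illustrative helper:

```csharp
using System.Linq;

public static class ToDictionaryDemo
{
    // Groups words by their first letter, similar in spirit to the TS
    // ToDictionary above; ToLookup always stores a group of values per key.
    public static string GroupByFirstLetter(string[] words)
    {
        var lookup = words.ToLookup(w => w[0]);
        return string.Join(";", lookup.Select(g => g.Key + ":" + string.Join(",", g)));
    }
}
```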
Entity Framework will hit a performance bottleneck, or crash, if Contains is given too large a list.
Here is how you can avoid this, using Marc Gravell's excellent approach. I am including some tests of this. I also suggest you consider LinqKit and its expandable queries to make this all work.
First off, this class contains the extension methods for Entity Framework for this:
public static class EntityExtensions
{
/// <summary>
/// This method overcomes a weakness with Entity Framework's Contains, in that you can partition the values to look for into
/// blocks or partitions. It is modeled after Marc Gravell's answer here:
/// https://stackoverflow.com/a/568771/741368
/// Entity Framework hits a limit of 2100 parameters in the DB, but probably runs into trouble before this limit, as even
/// queries with several hundred parameters are slow.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <typeparam name="TValue"></typeparam>
/// <param name="source">Source, for example DbSet (table)</param>
/// <param name="selector">Selector, key selector</param>
/// <param name="blockSize">Size of blocks (chunks/partitions)</param>
/// <param name="values">Values as parameters</param>
/// <example>
/// <![CDATA[
/// The following EF query will hit a performance penalty or time out if EF gets a too large list of operation ids:
///
/// var patients = context.Patients.Where(p => operationIds.Contains(p.OperationId)).Select(p => new
/// {
///     p.OperationId,
///     p.IsDaytimeSurgery
/// });
///
/// With InRange instead:
///
/// var patients = context.Patients.AsExpandable().InRange(p => p.OperationId, 1000, operationIds)
///     .Select(p => new
///     {
///         p.OperationId,
///         p.IsDaytimeSurgery
///     }).ToList();
/// ]]>
/// </example>
/// <returns></returns>
public static IEnumerable<T> InRange<T, TValue>(this IQueryable<T> source,
Expression<Func<T, TValue>> selector,
int blockSize,
IEnumerable<TValue> values)
{
MethodInfo method = null;
foreach (MethodInfo tmp in typeof(Enumerable).GetMethods(
BindingFlags.Public | BindingFlags.Static))
{
if (tmp.Name == "Contains" && tmp.IsGenericMethodDefinition
&& tmp.GetParameters().Length == 2)
{
method = tmp.MakeGenericMethod(typeof(TValue));
break;
}
}
if (method == null) throw new InvalidOperationException(
"Unable to locate Contains");
foreach (TValue[] block in values.GetBlocks(blockSize))
{
var row = Expression.Parameter(typeof(T), "row");
var member = Expression.Invoke(selector, row);
var keys = Expression.Constant(block, typeof(TValue[]));
var predicate = Expression.Call(method, keys, member);
var lambda = Expression.Lambda<Func<T, bool>>(
predicate, row);
foreach (T record in source.Where(lambda))
{
yield return record;
}
}
}
/// <summary>
/// Similar to Chunk, it partitions the IEnumerable source and returns the chunks or blocks of the given block size. The last block can have a variable length
/// between 0 and blockSize, since the IEnumerable can of course have a size not evenly divisible by blockSize.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="source"></param>
/// <param name="blockSize"></param>
/// <returns></returns>
public static IEnumerable<T[]> GetBlocks<T>(this IEnumerable<T> source, int blockSize)
{
List<T> list = new List<T>(blockSize);
foreach (T item in source)
{
list.Add(item);
if (list.Count == blockSize)
{
yield return list.ToArray();
list.Clear();
}
}
if (list.Count > 0)
{
yield return list.ToArray();
}
}
}
Linqkit allows us to rewrite queries for EF using expression trees.
One class is ExpandableQuery. See the links here for further info about Linqkit and Linq-Expand.
/// <summary>Refer to http://www.albahari.com/nutshell/linqkit.html and
/// http://tomasp.net/blog/linq-expand.aspx for more information.</summary>
public static class Extensions
{
public static IQueryable<T> AsExpandable<T>(this IQueryable<T> query)
{
if (query is ExpandableQuery<T>) return (ExpandableQuery<T>)query;
return new ExpandableQuery<T>(query);
}
This all seems to look a bit cryptic, so let's look at an integration test of mine instead:
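The integration test itself is not reproduced in this chunk; as a hedged sketch, a call could look roughly like this (context, Patients, OperationId and operationIds are illustrative names taken from the example in the doc comment earlier):

```csharp
// AsExpandable lets LinqKit rewrite the expression tree for EF;
// InRange partitions operationIds into blocks of 1000 so the
// 2100-parameter limit of Contains is never exceeded.
var patients = context.Patients
    .AsExpandable()
    .InRange(p => p.OperationId, 1000, operationIds)
    .ToList();
```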
This shows how to use Marc Gravell's InRange method. We use the AsExpandable method to allow us to hook into the expression tree of Entity Framework, and the InRange method allows us to partition the work
for EF. We do not know the size of the list of operational unit ids up front (usually it is low), and another entity's list - the operation ids - is of variable length and will blow up in production, since we in some cases surpass the 2100-parameter limit of Contains.
And as I said before, Entity Framework will hit a performance bottleneck before 2100 parameters are sent into the Contains method. This way of fixing it will allow you to get stable code running in production again against large data sets of variable length.
This code is tested with Entity Framework 6.2.0.
Another article considers performance considerations for Contains and different approaches here:
https://www.toptal.com/dot-net/entity-framework-performance-using-contains
IMHO this approach has proven stable in a production environment for several years with large data sets, and it can be considered a stable workaround for EF's slow Contains performance.
I have now made the LinqKit fork LinqKit.AsyncSupport available on NuGet here:
https://www.nuget.org/packages/ToreAurstadIt.LinqKit.AsyncSupport/1.1.0
This makes it possible to perform async calls together with expandable queries, i.e. queries with inline method calls, for example.
The NuGet package now also sports a symbol package for an easier debugging experience.
The source code for LinqKit.AsyncSupport is available here:
https://github.com/toreaurstadboss/LinqKit.AsyncSupport