This article presents a simple property grid component for Blazor that I have made. The component relies on standard libraries such as Bootstrap, jQuery and Font Awesome. The repo URL shown here links to my GitHub repo, which can easily be forked if you want to add features (such as editing capabilities). The component already supports nested levels, so if the object you inspect has a hierarchical structure, that structure is shown in the Blazor component.
Having a component to inspect objects in Blazor is useful, as Blazor lacks built-in inspection tools: since the app is compiled into WebAssembly, we cannot easily inspect the state of objects in the app other than the DOM and JavaScript objects. With this component we get basic support for inspecting the state of whichever object we desire.
The GitHub repo also contains a bundled application which uses the component and shows a sample use case (also shown in the GIF below). I have tested the component with three levels of depth for a sample object (included in the repo).
The component is available here on my Github repo:
The component consists of two components, one of which is used recursively to support nested object structures.
The top-level component has this code-behind class:
PropertyGridComponentBase.cs
using System.Collections.Generic;
using System.Reflection;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Web;
using Microsoft.JSInterop;

namespace BlazorPropertyGridComponents.Components
{
    public class PropertyGridComponentBase : ComponentBase
    {
        [Inject]
        public IJSRuntime JsRuntime { get; set; }

        [Parameter] public object DataContext { get; set; }

        public Dictionary<string, PropertyInfoAtLevelNodeComponent> Props { get; set; }

        public PropertyGridComponentBase()
        {
            Props = new Dictionary<string, PropertyInfoAtLevelNodeComponent>();
        }

        protected override void OnParametersSet()
        {
            Props.Clear();
            if (DataContext == null)
                return;
            Props["ROOT"] = MapPropertiesOfDataContext(string.Empty, DataContext, null);
            StateHasChanged();
        }

        private bool IsNestedProperty(PropertyInfo pi) =>
            pi.PropertyType.IsClass && pi.PropertyType.Namespace != "System";

        private PropertyInfoAtLevelNodeComponent MapPropertiesOfDataContext(string propertyPath, object parentObject,
            PropertyInfo currentProp)
        {
            if (parentObject == null)
                return null;
            var publicProperties = parentObject.GetType()
                .GetProperties(BindingFlags.Public | BindingFlags.Instance);
            var propertyNode = new PropertyInfoAtLevelNodeComponent
            {
                PropertyName = currentProp?.Name ?? "ROOT",
                PropertyValue = parentObject,
                PropertyType = parentObject.GetType(),
                FullPropertyPath = TrimFullPropertyPath($"{propertyPath}.{currentProp?.Name}") ?? "ROOT",
                IsClass = parentObject.GetType().IsClass && parentObject.GetType().Namespace != "System"
            };
            foreach (var p in publicProperties)
            {
                var propertyValue = p.GetValue(parentObject, null);
                if (!IsNestedProperty(p))
                {
                    propertyNode.SubProperties.Add(p.Name, new PropertyInfoAtLevelNodeComponent
                    {
                        IsClass = false,
                        FullPropertyPath = TrimFullPropertyPath($"{propertyPath}.{p.Name}"),
                        PropertyName = p.Name,
                        PropertyValue = propertyValue,
                        PropertyType = p.PropertyType
                        //note - SubProperties stays empty for a non-nested property, of course.
                    });
                }
                else
                {
                    //we need to add the sub property, but also recurse to fetch the nested properties
                    propertyNode.SubProperties.Add(p.Name, new PropertyInfoAtLevelNodeComponent
                    {
                        IsClass = true,
                        FullPropertyPath = TrimFullPropertyPath($"{propertyPath}.{p.Name}"),
                        PropertyName = p.Name,
                        PropertyValue = MapPropertiesOfDataContext(TrimFullPropertyPath($"{propertyPath}.{p.Name}"), propertyValue, p),
                        PropertyType = p.PropertyType
                    });
                }
            }
            return propertyNode;
        }

        protected void toggleExpandButton(MouseEventArgs e, string buttonId)
        {
            JsRuntime.InvokeVoidAsync("toggleExpandButton", buttonId);
        }

        private string TrimFullPropertyPath(string fullPropertyPath)
        {
            if (string.IsNullOrEmpty(fullPropertyPath))
                return fullPropertyPath;
            return fullPropertyPath.TrimStart('.').TrimEnd('.');
        }
    }
}
And its razor file looks like this:
PropertyGridComponent.razor
@inherits PropertyGridComponentBase
@using BlazorPropertyGridComponents.Components
<table class="table table-striped col-md-4 col-lg-3 col-sm-6">
    <thead>
        <tr><th scope="col">Property</th><th scope="col">Value</th></tr>
    </thead>
    <tbody>
        @foreach (KeyValuePair<string, PropertyInfoAtLevelNodeComponent> prop in Props)
        {
            @if (!prop.Value.IsClass)
            {
                @* <tr><td>@prop.Key</td><td>@prop.Value</td></tr> *@
            }
            else
            {
                var currentNestedDiv = "currentDiv_" + prop.Key;
                var currentProp = prop.Value.PropertyValue;
                //must be a nested class property
                <tr>
                    <td colspan="2">
                        <button type="button" id="@prop.Key" class="btn btn-info fas fa-minus" @onclick="(e) => toggleExpandButton(e, prop.Key)" data-toggle="collapse" data-target="#@currentNestedDiv"></button>
                        <div id="@currentNestedDiv" class="collapse show">
                            <PropertyRowComponent Depth="1" PropertyInfoAtLevel="@prop.Value" />
                        </div>
                    </td>
                </tr>
            }
        }
    </tbody>
</table>
@code {
}
We also have this helper class to model each property in the nested structure:
PropertyInfoAtLevelNodeComponent.cs
using System;
using System.Collections.Generic;

namespace BlazorPropertyGridComponents.Components
{
    /// <summary>
    /// Node class for the hierarchical structure of property info for an object of a given object graph structure.
    /// </summary>
    public class PropertyInfoAtLevelNodeComponent
    {
        public PropertyInfoAtLevelNodeComponent()
        {
            SubProperties = new Dictionary<string, PropertyInfoAtLevelNodeComponent>();
        }

        public string PropertyName { get; set; }
        public object PropertyValue { get; set; }
        public Type PropertyType { get; set; }
        public Dictionary<string, PropertyInfoAtLevelNodeComponent> SubProperties { get; private set; }
        public string FullPropertyPath { get; set; }
        public bool IsClass { get; set; }
    }
}
The lower-level component used by the top component has this code-behind:
PropertyRowComponentBase.cs
using System.Collections.Generic;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Web;
using Microsoft.JSInterop;

namespace BlazorPropertyGridComponents.Components
{
    public class PropertyRowComponentBase : ComponentBase
    {
        public PropertyRowComponentBase()
        {
            DisplayedFullPropertyPaths = new List<string>();
        }

        [Parameter]
        public PropertyInfoAtLevelNodeComponent PropertyInfoAtLevel { get; set; }

        [Parameter]
        public int Depth { get; set; }

        [Parameter]
        public List<string> DisplayedFullPropertyPaths { get; set; }

        [Inject]
        protected IJSRuntime JsRunTime { get; set; }

        protected void toggleExpandButton(MouseEventArgs e, string buttonId)
        {
            JsRunTime.InvokeVoidAsync("toggleExpandButton", buttonId);
        }
    }
}
The razor file looks like this:
PropertyRowComponent.razor
@using BlazorPropertyGridComponents.Components
@inherits PropertyRowComponentBase
@foreach (var item in PropertyInfoAtLevel.SubProperties.Keys)
{
    var propertyInfoAtLevel = PropertyInfoAtLevel.SubProperties[item];
    if (propertyInfoAtLevel != null)
    {
        @* if (DisplayedFullPropertyPaths.Contains(propertyInfoAtLevel.FullPropertyPath)) {
            continue; //the property is already displayed.
        } *@
        DisplayedFullPropertyPaths.Add(propertyInfoAtLevel.FullPropertyPath);
        @* <span class="text-white bg-dark">@propertyInfoAtLevel.FullPropertyPath</span> *@
        @* <em>@propertyInfoAtLevel</em> *@
    }
    if (!propertyInfoAtLevel.PropertyType.IsClass || propertyInfoAtLevel.PropertyType.Namespace.StartsWith("System"))
    {
        <tr>
            <td><span title="@propertyInfoAtLevel.FullPropertyPath" class="font-weight-bold">@propertyInfoAtLevel.PropertyName</span></td>
            <td><span>@propertyInfoAtLevel.PropertyValue</span></td>
        </tr>
    }
    else if (propertyInfoAtLevel.PropertyValue != null && propertyInfoAtLevel.PropertyValue is PropertyInfoAtLevelNodeComponent)
    {
        var nestedLevel = (PropertyInfoAtLevelNodeComponent)propertyInfoAtLevel.PropertyValue;
        var collapseOrNotCssClass = Depth == 0 ? "collapse show" : "collapse";
        var curDepth = Depth + 1;
        collapseOrNotCssClass += " depth" + Depth;
        var currentNestedDiv = "collapsingdiv_" + propertyInfoAtLevel.PropertyName;
        //must be a nested class property
        <tr>
            <td colspan="2">
                <span>@propertyInfoAtLevel.PropertyName</span>
                <button id="@propertyInfoAtLevel.FullPropertyPath" type="button" @onclick="(e) => toggleExpandButton(e, propertyInfoAtLevel.FullPropertyPath)" class="fas btn btn-info fa-plus" data-toggle="collapse" data-target="#@currentNestedDiv"></button>
                <div id="@currentNestedDiv" class="@collapseOrNotCssClass">
                    <PropertyRowComponent PropertyInfoAtLevel="@nestedLevel" Depth="@curDepth" />
                </div>
            </td>
        </tr>
    }
}
@code {
}
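Both components call a small JavaScript helper, toggleExpandButton, via IJSRuntime. The repo's implementation is not reproduced in this article; the following is a minimal sketch, an assumption on my part, of what it could look like: it swaps the Font Awesome plus/minus icon on the clicked button (Bootstrap handles the actual collapsing via data-toggle).

```javascript
// Pure helper: given a list of CSS classes, return it with fa-plus/fa-minus swapped.
function swapPlusMinus(classNames) {
    return classNames.map(function (c) {
        if (c === 'fa-plus') return 'fa-minus';
        if (c === 'fa-minus') return 'fa-plus';
        return c;
    });
}

// Called from Blazor via IJSRuntime.InvokeVoidAsync("toggleExpandButton", buttonId).
function toggleExpandButton(buttonId) {
    var btn = document.getElementById(buttonId);
    if (!btn) return;
    btn.className = swapPlusMinus(btn.className.split(' ')).join(' ');
}
```

This file would be loaded from wwwroot, e.g. via a script tag in _Host.cshtml, so the function is globally reachable from JS interop.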
We also add Font Awesome to the solution: right-click the solution in Solution Explorer and choose Add => Client-Side Library, then search for 'font-awesome'.
Choose Font Awesome and add all files to the lib/font-awesome folder of wwwroot.
Then at the bottom of _Host.cshtml we add:
This article will test out Blazor. I had some difficulties with getting live reload to work. I got it working in Visual Studio 2019 for the Blazor Asp.Net Core project template.
We will also create a very simple component (a clock) that calls Javascript function from C#.
You can clone the simple app of mine from Github like this:
First off, we add the following into _Host.cshtml:
_Host.cshtml
<script src="js/script.js"></script>
<script src="_framework/blazor.server.js"></script>
<script>
    Blazor.defaultReconnectionHandler._reconnectCallback = function (d) {
        document.location.reload();
    }
</script>
The Blazor.defaultReconnectionHandler._reconnectCallback is set to reload the document location.
This makes the page reload when you edit the razor files of the Blazor app. You will see this as a temporary recompile step - give it some five seconds in a simple app.
Let's for fun add a clock component also. Add to the Shared folder the file Clock.razor.
Clock.razor
@inject IJSRuntime JsRunTime
@implements IDisposable

<p>The time is now:</p>
<div @ref="timeDiv">00:00:00</div>

@code {
    ElementReference timeDiv;

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            await JsRunTime.InvokeVoidAsync("startTime", timeDiv);
        }
    }

    public void Dispose()
    {
        JsRunTime.InvokeVoidAsync("stopTime");
    }
}
We also have the script.js file in wwwroot to hold the JavaScript (Blazor razor files don't like JS in the component itself - just make sure to add the JS somewhere in wwwroot instead and load it up).
As you can see, we inject IJSRuntime with the @inject directive in the razor Blazor file (rhymes a bit). This allows us to call client-side code from the C# code. We start off the clock with a setTimeout and stop the clock with a clearTimeout.
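The startTime and stopTime functions referenced above live in script.js. The repo's version is not reproduced here; the following is a sketch under the assumptions stated in the text (setTimeout to re-arm the clock, clearTimeout to stop it). Note that the ElementReference passed from C# arrives on the JS side as a real DOM element.

```javascript
var clockTimerId = null;

// Pure helper: format a Date as HH:MM:SS with zero padding.
function formatTime(date) {
    function pad(n) { return n < 10 ? '0' + n : '' + n; }
    return pad(date.getHours()) + ':' + pad(date.getMinutes()) + ':' + pad(date.getSeconds());
}

// Called from Clock.razor via InvokeVoidAsync("startTime", timeDiv).
function startTime(element) {
    element.innerText = formatTime(new Date());
    clockTimerId = setTimeout(function () { startTime(element); }, 1000);
}

// Called from Dispose() so the timer does not outlive the component.
function stopTime() {
    if (clockTimerId !== null) {
        clearTimeout(clockTimerId);
        clockTimerId = null;
    }
}
```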
In this article I will present code that adds functionality on top of IMemoryCache in ASP.NET Core, or .NET Core in general.
The code has been tested in ASP.NET Core 3.1. I have tested out a generic memory cache, plus middleware for adding, removing and listing cached values.
Usually you do not want to expose caching through a public API, but perhaps your API resides in a safe(r) intranet zone and you want to cache different objects.
This article will teach you the principles behind building a generic memory cache for (ASP).NET Core and wiring cache functionality up to REST APIs.
The code of this article is available on Github:
We start with our generic memory cache. It has some features:
- The primary feature is to offer generic functionality and STRONGLY TYPED access to the IMemoryCache.
- Strongly typed access means you can use the (memory) cache as a repository and easily add, remove, update and get multiple items in a strongly typed fashion, including compound objects (class instances, nested objects - whatever you want, although keeping items serializable to JSON is highly suggested in case you want to use the generic memory cache together with REST APIs).
- You add homogeneous objects of the same type to a prefixed part of the cache (by prefixed keys) to help avoid collisions in the same process.
- If you add the same key twice, the item will not be added again - you must update instead.
- Additional methods exist for removing, updating and clearing the memory cache.
The Generic memory cache wraps IMemoryCache in Asp.Net Core which will do the actual caching in memory on the workstation or server in use for your application.
GenericMemoryCache.cs
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading;

namespace SomeAcme.SomeUtilNamespace
{
    /// <summary>
    /// Thread safe memory cache for generic use - wraps IMemoryCache
    /// </summary>
    /// <typeparam name="TCacheItemData">Payload to store in the memory cache</typeparam>
    /// <remarks>Supports multiple parallel importing sessions</remarks>
    public class GenericMemoryCache<TCacheItemData> : IGenericMemoryCache<TCacheItemData>
    {
        private readonly string _prefixKey;
        private readonly int _defaultExpirationInSeconds;
        private static readonly object _locker = new object();

        public GenericMemoryCache(IMemoryCache memoryCache, string prefixKey, int defaultExpirationInSeconds = 0)
        {
            defaultExpirationInSeconds = Math.Abs(defaultExpirationInSeconds); //guard against a negative value passed into the constructor
            _prefixKey = prefixKey;
            Cache = memoryCache;
            _defaultExpirationInSeconds = defaultExpirationInSeconds;
        }

        /// <summary>
        /// Cache object if direct access is desired. Only exposed to inherited types.
        /// </summary>
        protected IMemoryCache Cache { get; }

        //To avoid IMemoryCache collisions with other parts of the same process, each cache key
        //is always prefixed with the prefix set by the constructor of this class.
        public string PrefixKey(string key) => $"{_prefixKey}_{key}";

        /// <summary>
        /// Adds an item to the memory cache
        /// </summary>
        /// <param name="key"></param>
        /// <param name="itemToCache"></param>
        /// <returns></returns>
        public bool AddItem(string key, TCacheItemData itemToCache)
        {
            try
            {
                if (!key.StartsWith(_prefixKey))
                    key = PrefixKey(key);
                lock (_locker)
                {
                    if (!Cache.TryGetValue(key, out TCacheItemData existingItem))
                    {
                        var cts = new CancellationTokenSource(_defaultExpirationInSeconds > 0 ?
                            _defaultExpirationInSeconds * 1000 : -1); //-1 means no timeout-based expiration
                        var cacheEntryOptions = new MemoryCacheEntryOptions().AddExpirationToken(new CancellationChangeToken(cts.Token));
                        Cache.Set(key, itemToCache, cacheEntryOptions);
                        return true;
                    }
                }
                return false; //item not added, the key already exists
            }
            catch (Exception err)
            {
                Debug.WriteLine(err);
                return false;
            }
        }

        public virtual List<T> GetValues<T>()
        {
            lock (_locker)
            {
                //GetValues here is a helper extension method over IMemoryCache (not shown in this article)
                var values = Cache.GetValues<ICacheEntry>().Where(c => c.Value is T).Select(c => (T)c.Value).ToList();
                return values;
            }
        }

        /// <summary>
        /// Retrieves a cache item, or the default value if the key is not found.
        /// </summary>
        /// <param name="key"></param>
        /// <returns></returns>
        public TCacheItemData GetItem(string key)
        {
            try
            {
                if (!key.StartsWith(_prefixKey))
                    key = PrefixKey(key);
                lock (_locker)
                {
                    if (Cache.TryGetValue(key, out TCacheItemData cachedItem))
                    {
                        return cachedItem;
                    }
                }
                return default(TCacheItemData);
            }
            catch (Exception err)
            {
                Debug.WriteLine(err);
                return default(TCacheItemData);
            }
        }

        public bool SetItem(string key, TCacheItemData itemToCache)
        {
            try
            {
                if (!key.StartsWith(_prefixKey))
                    key = PrefixKey(key);
                lock (_locker)
                {
                    //add when the key is missing, update when it is present
                    if (GetItem(key) == null)
                        AddItem(key, itemToCache);
                    else
                        UpdateItem(key, itemToCache);
                }
                return true;
            }
            catch (Exception err)
            {
                Debug.WriteLine(err);
                return false;
            }
        }

        /// <summary>
        /// Updates an item in the cache
        /// </summary>
        /// <param name="key"></param>
        /// <param name="itemToCache"></param>
        /// <returns></returns>
        public bool UpdateItem(string key, TCacheItemData itemToCache)
        {
            if (!key.StartsWith(_prefixKey))
                key = PrefixKey(key);
            lock (_locker)
            {
                TCacheItemData existingItem = GetItem(key);
                if (existingItem != null)
                {
                    //always remove the existing item before updating
                    RemoveItem(key);
                }
                AddItem(key, itemToCache);
            }
            return true;
        }

        /// <summary>
        /// Removes an item from the cache
        /// </summary>
        /// <param name="key"></param>
        /// <returns></returns>
        public bool RemoveItem(string key)
        {
            if (!key.StartsWith(_prefixKey))
                key = PrefixKey(key);
            lock (_locker)
            {
                if (Cache.TryGetValue(key, out var item))
                {
                    Cache.Remove(key);
                    return true;
                }
            }
            return false;
        }

        public void AddItems(Dictionary<string, TCacheItemData> itemsToCache)
        {
            foreach (var kvp in itemsToCache)
                AddItem(kvp.Key, kvp.Value);
        }

        /// <summary>
        /// Clears all cache keys starting with the known prefix passed into the constructor.
        /// </summary>
        public void ClearAll()
        {
            lock (_locker)
            {
                //GetKeys here is a helper extension method over IMemoryCache (not shown in this article)
                List<string> cacheKeys = Cache.GetKeys<string>().Where(k => k.StartsWith(_prefixKey)).ToList();
                foreach (string cacheKey in cacheKeys)
                {
                    Cache.Remove(cacheKey);
                }
            }
        }
    }
}
There are different ways of making use of the generic memory cache above. The simplest use-case would be to instantiate it in a Controller and add cache items as wanted.
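A minimal sketch of that use case, with a hypothetical controller (Car is the sample model from the repo; the prefix and expiration values here are just illustrative assumptions):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Caching.Memory;
using SomeAcme.SomeUtilNamespace;

[ApiController]
[Route("[controller]")]
public class CarsController : ControllerBase
{
    private readonly GenericMemoryCache<Car> _cache;

    public CarsController(IMemoryCache memoryCache)
    {
        //the "CARS" prefix isolates these keys in the process-wide cache; items expire after 300 seconds
        _cache = new GenericMemoryCache<Car>(memoryCache, "CARS", 300);
    }

    [HttpPost]
    public IActionResult Add([FromBody] Car car)
    {
        bool added = _cache.AddItem(car.Model, car);
        return added ? Ok() : Conflict("Key already exists - use update instead");
    }

    [HttpGet]
    public IActionResult List() => Ok(_cache.GetValues<Car>());
}
```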
As you can see the Generic Memory cache offers strongly typed access to the memory cache.
Let's look at how we can register the memory cache as a service too.
startup.cs
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddMemoryCache();
    services.AddSingleton<GenericMemoryCache<WeatherForecast>>(genmen =>
        new GenericMemoryCache<WeatherForecast>(new MemoryCache(new MemoryCacheOptions()), "WEATHER_FORECASTS", 120));
}
In the sample above we register the generic memory cache as a singleton (memory is shared either way, so making it transient or scoped would be less logical). We can then inject it like this:
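A minimal sketch of the constructor injection (the controller shown here is hypothetical):

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using SomeAcme.SomeUtilNamespace;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private readonly GenericMemoryCache<WeatherForecast> _forecastCache;

    //the singleton registered in ConfigureServices above is resolved here
    public WeatherForecastController(GenericMemoryCache<WeatherForecast> forecastCache)
    {
        _forecastCache = forecastCache;
    }

    [HttpGet]
    public IEnumerable<WeatherForecast> Get() =>
        _forecastCache.GetValues<WeatherForecast>();
}
```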
This way of injecting the generic memory cache is cumbersome, since we need a more dynamic way of specifying the type of the memory cache. We could register the generic memory cache with object as its type argument, but then we lose the strong typing by boxing the items in the cache to object.
Instead, I have looked into defining custom middleware for working against the generic memory cache. Of course, in production you would add some protection so the cache is not readily available to everyone, such as requiring a token or similar in the REST API calls. The middleware shown next is just a suggestion for how we can build up a generic memory cache in ASP.NET Core via REST API calls. It should be handy in case you have consumers/clients that want to store data in a cache on demand; the applications of this in an ASP.NET Core environment are many, if you choose to offer such functionality. In many cases you would instead use my GenericMemoryCache directly where needed and not expose it. But for those who want to see how it can be made available in a REST API, the following middleware offers a suggestion.
Startup.cs
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }
    ..
    app.UseGenericMemoryCache(new GenericMemoryCacheOptions
    {
        PrefixKey = "volvoer",
        DefaultExpirationInSeconds = 600
    });
    ..
We first call UseGenericMemoryCache to register the middleware, setting the PrefixKey to "volvoer" and the default expiration to ten minutes (600 seconds).
Afterwards we will use Postman to send REST API calls that build up the contents of the cache.
The UseMiddleware method is called in the extension method that is added to offer this functionality:
GenericMemoryCacheExtensions.cs
using Microsoft.AspNetCore.Builder;

namespace SomeAcme.SomeUtilNamespace
{
    public static class GenericMemoryCacheExtensions
    {
        public static IApplicationBuilder UseGenericMemoryCache<TItemData>(this IApplicationBuilder builder, GenericMemoryCacheOptions options) where TItemData : class
        {
            return builder.UseMiddleware<GenericMemoryCacheMiddleware<TItemData>>(options);
        }
    }
}
The middleware looks like this (it could be easily extended to cover more functions of the API):
GenericMemoryCacheMiddleware.cs
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Caching.Memory;
using Newtonsoft.Json;
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;

namespace SomeAcme.SomeUtilNamespace
{
    public class GenericMemoryCacheMiddleware<TCacheItemData> where TCacheItemData : class
    {
        private readonly RequestDelegate _next;
        private readonly string _prefixKey;
        private readonly int _defaultExpirationTimeInSeconds;

        public GenericMemoryCacheMiddleware(RequestDelegate next, GenericMemoryCacheOptions options)
        {
            if (options == null)
            {
                throw new ArgumentNullException(nameof(options));
            }
            _next = next;
            _prefixKey = options.PrefixKey;
            _defaultExpirationTimeInSeconds = options.DefaultExpirationInSeconds;
        }

        public async Task InvokeAsync(HttpContext context, IMemoryCache memoryCache)
        {
            context.Request.EnableBuffering(); //allows re-reading the body multiple times without consuming it (asp.net core 3.1)
            if (context.Request.Method.ToLower() == "post")
            {
                if (IsDefinedCacheOperation("addtocache", context))
                {
                    //leave the body open so the next middleware can read it
                    using (var reader = new StreamReader(
                        context.Request.Body,
                        encoding: Encoding.UTF8,
                        detectEncodingFromByteOrderMarks: false,
                        bufferSize: 4096,
                        leaveOpen: true))
                    {
                        var body = await reader.ReadToEndAsync();
                        //do some processing with the body
                        if (body != null)
                        {
                            string cacheKey = context.Request.Query["cachekey"].ToString();
                            if (context.Request.Query.ContainsKey("type"))
                            {
                                var typeArgs = CreateGenericCache(context, memoryCache, out var cache);
                                var payloadItem = JsonConvert.DeserializeObject(body, typeArgs[0]);
                                var addMethod = cache.GetType().GetMethod("AddItem");
                                if (addMethod != null)
                                {
                                    addMethod.Invoke(cache, new[] { cacheKey, payloadItem });
                                }
                            }
                            else
                            {
                                var cache = new GenericMemoryCache<object>(memoryCache, cacheKey, 0);
                                if (cache != null)
                                {
                                    //TODO: implement
                                }
                            }
                        }
                    }
                    //reset the request body stream position so the next middleware can read it
                    context.Request.Body.Position = 0;
                }
            }
            if (context.Request.Method.ToLower() == "delete")
            {
                if (IsDefinedCacheOperation("removeitemfromcache", context))
                {
                    var typeArgs = CreateGenericCache(context, memoryCache, out var cache);
                    var removeMethod = cache.GetType().GetMethod("RemoveItem");
                    string cacheKey = context.Request.Query["cachekey"].ToString();
                    if (removeMethod != null)
                    {
                        removeMethod.Invoke(cache, new object[] { cacheKey });
                    }
                }
            }
            if (context.Request.Method.ToLower() == "get")
            {
                if (IsDefinedCacheOperation("getvaluesfromcache", context))
                {
                    var typeArgs = CreateGenericCache(context, memoryCache, out var cache);
                    var getValuesMethod = cache.GetType().GetMethod("GetValues");
                    if (getValuesMethod != null)
                    {
                        var genericGetValuesMethod = getValuesMethod.MakeGenericMethod(typeArgs);
                        var existingValuesInCache = genericGetValuesMethod.Invoke(cache, null);
                        if (existingValuesInCache != null)
                        {
                            context.Response.ContentType = "application/json";
                            await context.Response.WriteAsync(JsonConvert.SerializeObject(existingValuesInCache));
                        }
                        else
                        {
                            context.Response.ContentType = "application/json";
                            await context.Response.WriteAsync("{}"); //return an empty object literal
                        }
                        return; //terminate further processing - data is returned
                    }
                }
            }
            await _next(context);
        }

        private static bool IsDefinedCacheOperation(string cacheOperation, HttpContext context, bool requireType = true)
        {
            return context.Request.Query.ContainsKey(cacheOperation) &&
                   context.Request.Query.ContainsKey("prefix") && (!requireType || context.Request.Query.ContainsKey("type"));
        }

        private static Type[] CreateGenericCache(HttpContext context, IMemoryCache memoryCache, out object cache)
        {
            Type genericType = typeof(GenericMemoryCache<>);
            string cacheitemtype = context.Request.Query["type"].ToString();
            string prefix = context.Request.Query["prefix"].ToString();
            Type[] typeArgs = { Type.GetType(cacheitemtype) };
            Type cacheType = genericType.MakeGenericType(typeArgs);
            cache = Activator.CreateInstance(cacheType, memoryCache, prefix, 0);
            return typeArgs;
        }
    }
}
The middleware above supports, for now, adding items to the cache, removing them and listing them.
I have used this business model to test it out:
The following requests were tested to add three cars, then delete one, and then list them:
# add three cars
POST https://localhost:44391/caching/addcar?addtocache&prefix=volvoer&cachekey=240&type=GenericMemoryCacheAspNetCore.Models.Car,GenericMemoryCacheAspNetCore
POST https://localhost:44391/caching/addcar?addtocache&prefix=volvoer&cachekey=Amazon&type=GenericMemoryCacheAspNetCore.Models.Car,GenericMemoryCacheAspNetCore
POST https://localhost:44391/caching/addcar?addtocache&prefix=volvoer&cachekey=Pv&type=GenericMemoryCacheAspNetCore.Models.Car,GenericMemoryCacheAspNetCore
#remove one
DELETE https://localhost:44391/caching?removeitemfromcache&prefix=volvoer&cachekey=Amazon&&type=GenericMemoryCacheAspNetCore.Models.Car,GenericMemoryCacheAspNetCore
# list up the cars in the cache (items)
GET https://localhost:44391/caching/addcar?getvaluesfromcache&prefix=volvoer&type=GenericMemoryCacheAspNetCore.Models.Car,GenericMemoryCacheAspNetCore
For the POST requests, I posted payloads in the body via Postman such as this:
{
    "Make": "Volvo",
    "Model": "Amazon"
}
Finally, we can see that we get the cached data in our generic memory cache. As you can see, the REST API specifies the type arguments by giving the type name with namespaces and, after the comma, the assembly name (an assembly-qualified type name). So building up a generic memory cache via a REST API is fully feasible in ASP.NET Core. However, it should only be used in scenarios where such functionality is desired and the clients can be trusted in some way (or by restricting access to such functionality to privileged users via a token or other mechanism). You would of course never allow arbitrary clients to send data into a server's memory cache, only to see it bogged down by memory pressure. That was not the purpose of this article. The purpose was to acquaint the reader with IMemoryCache, a generic memory cache, and middleware in ASP.NET Core. A generic memory cache gives you strongly typed access to the memory cache in ASP.NET Core, and the concepts shown here carry over to .NET Core in general.
This article will describe how you can output runnable SQL from Entity Framework. The output will be sent to the Console and Debug. You can easily modify this to output to other output sources, such as tracing or files for that matter.
What is important is that we interpolate the parameters from Entity Framework so that we get runnable SQL.
Entity Framework parameterizes the SQL queries so that SQL injection is avoided. Where conditions and similar are turned into parameters, notably with the p__linq naming convention (e.g. a condition may appear as WHERE [Make] = @p__linq__0).
We will interpolate these parameters into runnable SQL so that you can paste the SQL into SQL Server Management Studio (SSMS). Or you could save the runnable SQL to a .sql file and let SQLCMD run it from the command line.
Either way, we must set up the DbContext to do this. I am using Entity Framework 6.2.0; it should be possible to use this technique with all EF 6.x versions. In Entity Framework Core and Entity Framework Core 2, the techniques will be similar.
First, define a DbConfiguration and attribute the DbContext class you are using with the DbConfigurationType attribute (we are not considering ObjectContext in this article, but DbContext is a wrapper around that class anyway, so you should be able to apply the techniques taught here to other scenarios).
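A sketch of that attribute wiring (the DbContext name here is hypothetical; the attribute itself is the standard EF6 mechanism for pointing a context at a code-based configuration):

```csharp
using System.Data.Entity;

namespace SomeAcme.Data.EntityFramework.DbContext
{
    //DbConfigurationType tells EF6 which DbConfiguration class to use for this context
    [DbConfigurationType(typeof(SomeAcmeDataContextConfiguration))]
    public class SomeAcmeDataContext : System.Data.Entity.DbContext
    {
        //DbSets and the rest of the context go here
    }
}
```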
Ok, so our DbConfiguration just inherits from DbConfiguration and sets up a custom DatabaseLogFormatter like this:
SomeAcmeDataContextConfiguration.cs
using System.Data.Entity;

namespace SomeAcme.Data.EntityFramework.DbContext
{
    public class SomeAcmeDataContextConfiguration : DbConfiguration
    {
        public SomeAcmeDataContextConfiguration()
        {
            SetDatabaseLogFormatter((context, logAction) => new SomeAcmeDbLogFormatter(context, logAction));
        }
    }
}
SetDatabaseLogFormatter is a protected method of DbConfiguration.
Our DatabaseLogFormatter implementation then looks like this:
SomeAcmeDbLogFormatter.cs
using System;
using System.Data.Common;
using System.Data.Entity.Infrastructure.Interception;
using SomeAcme.Data.EntityFramework.DbContext.Extensions;

namespace SomeAcme.Data.EntityFramework.DbContext
{
    public class SomeAcmeDbLogFormatter : DatabaseLogFormatter
    {
        public SomeAcmeDbLogFormatter(System.Data.Entity.DbContext dbContext, Action<string> loggingAction) :
            base(dbContext, loggingAction)
        {
        }

        public override void LogCommand<TResult>(DbCommand command, DbCommandInterceptionContext<TResult> interceptionContext)
        {
            string cmdText = command.CommandText;
            if (string.IsNullOrEmpty(cmdText))
                return;
            if (cmdText.StartsWith("Opened connection", StringComparison.InvariantCultureIgnoreCase) ||
                cmdText.StartsWith("Closed connection", StringComparison.InvariantCultureIgnoreCase))
                return;
            Write($"--DbContext {Context.GetType().Name} is executing command against DB {Context.Database.Connection.Database}: {Environment.NewLine}{command.GetGeneratedQuery().Replace(Environment.NewLine, "")}{Environment.NewLine}");
        }

        public override void LogResult<TResult>(DbCommand command, DbCommandInterceptionContext<TResult> interceptionContext)
        {
            //empty by intention
        }
    }
}
We also have a helper extension method called GetGeneratedQuery on DbCommand objects, which gives us the crux of this article - the interpolated, runnable query. From my testing, most parameters can be interpolated as-is. However, some data types in the T-SQL world must be quoted (strings, for example), and the date and time data types need adjusting to a runnable format too. In case you find data types this helper method should handle differently, please let me know.
Our helper method GetGeneratedQuery looks like this:
SomeAcmeDbCommandExtensions.cs
using System;
using System.Data;
using System.Data.Common;
using System.Data.SqlClient;
using System.Linq;
using System.Text;

namespace SomeAcme.Data.EntityFramework.DbContext.Extensions
{
    public static class DbCommandExtensions
    {
        /// <summary>
        /// Returns the generated SQL string where parameters are replaced by value, giving a runnable
        /// SQL script. Note that this is an approximation anyway, but it gives us a runnable query. The database
        /// server's query optimizer will possibly rewrite even simple queries if it sees it can rearrange
        /// the query to predictively create a more efficient one.
        /// </summary>
        /// <param name="dbCommand"></param>
        /// <returns></returns>
        public static string GetGeneratedQuery(this DbCommand dbCommand)
        {
            DbType[] quotedParameterTypes = new DbType[] {
                DbType.AnsiString, DbType.Date,
                DbType.DateTime, DbType.DateTime2, DbType.Guid, DbType.String,
                DbType.AnsiStringFixedLength, DbType.StringFixedLength
            };
            var sb = new StringBuilder();
            sb.AppendLine(dbCommand.CommandText);
            //copy the parameters into another collection to avoid mutating the query
            //and to be able to run a foreach loop
            var arrParams = new SqlParameter[dbCommand.Parameters.Count];
            dbCommand.Parameters.CopyTo(arrParams, 0);
            //replace the longest parameter names first so e.g. @p__linq__10 is not clobbered by @p__linq__1
            foreach (SqlParameter p in arrParams.OrderByDescending(p => p.ParameterName.Length))
            {
                string value = p.Value.ToString();
                if (p.DbType == DbType.Date || p.DbType == DbType.DateTime || p.DbType == DbType.DateTime2)
                {
                    value = DateTime.Parse(value).ToString("yyyy-MM-dd HH:mm:ss.fff");
                }
                if (quotedParameterTypes.Contains(p.DbType))
                    value = "'" + value + "'";
                sb.Replace("@" + p.ParameterName, value);
            }
            return sb.ToString();
        }
    }
}
We also need to activate database logging in the first place. Logging database traffic to the console and debug output should ordinarily be avoided in production, as it has a performance impact. Instead, it is handy to turn it on or off via an app setting. I have decided to only allow it while debugging, so the constructors of the DbContext where I have tested it call this method:
SomeAcmeDbContext.cs
privatevoidSetupDbContextBehavior()
{
Configuration.AutoDetectChangesEnabled = true;
Configuration.LazyLoadingEnabled = true;
ObjectContext.CommandTimeout = 10 * 60;
#if DEBUG
//To enable outputting database traffic to the console, set the app setting OutputDatabaseTrafficLogging in web.config to true.
//This must not be activated in production. To safeguard this,
//the block below is wrapped in the DEBUG preprocessor directive.
bool outputDatabaseTrafficLogging = ConfigurationManagerWrapper.GetAppsetting(SomeAcme.Common.Constants.OutputDatabaseTrafficLogging);
if (outputDatabaseTrafficLogging)
{
Database.Log = s =>
{
if (s.StartsWith("Opened connection", StringComparison.InvariantCultureIgnoreCase)
|| s.StartsWith("Closed connection", StringComparison.InvariantCultureIgnoreCase))
return;
Console.WriteLine(s);
Debug.WriteLine(s);
};
}
#endif
}
Never mind the first three lines; they are just included here as tips for additional settings you CAN set if you want to. The important bit is the Database.Log delegate property, which accepts a lambda where you set up what to do with the logging. Here we just tell the DbContext that if the app setting OutputDatabaseTrafficLogging is set to true, we output the runnable SQL from Entity Framework to the console.
That's all there is to it! You can now activate the app setting and see runnable SQL in the debug output (or in the console). You can paste the SQL into SSMS, for example, to check for performance issues such as missing indexes, or to tune the size of the result sets and alter the SQL.
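As a quick sketch of how the extension method could be used directly (SomeAcmeDbContext and the Customers table are hypothetical names; this assumes the underlying provider is SQL Server, since GetGeneratedQuery copies into a SqlParameter array):

```csharp
// Hypothetical usage sketch of GetGeneratedQuery - names are placeholders.
using (var context = new SomeAcmeDbContext())
using (DbCommand command = context.Database.Connection.CreateCommand())
{
    command.CommandText = "SELECT * FROM Customers WHERE Name = @name";
    DbParameter p = command.CreateParameter();
    p.ParameterName = "name";
    p.Value = "Contoso";
    command.Parameters.Add(p);
    // Prints the SQL with @name replaced by 'Contoso', ready to paste into SSMS
    Console.WriteLine(command.GetGeneratedQuery());
}
```

For queries generated by Entity Framework itself, the Database.Log hook shown above is usually the simpler route.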
You should also consider making your DbContext runnable in Linqpad for easier tuning of EF queries, but that is for another article. Happy coding!
This article is for Angular developers running on the Windows platform. We will use Windows tools such as netstat and Powershell in this article.
A cross-platform version of Powershell exists, but I have not tested this approach on other OSes, such as Linux. Linux developers using Angular might
follow my approach here with success, as it should be possible to do this in a similar manner on *nix systems.
Also note that this article is meant for those using Angular SPA with .Net Core. This is the standard setup for Windows platform and Angular development.
When developing an Angular app locally, sometimes you want to run on a fixed port. For example, your app might federate its access towards ADFS (Active Directory Federated Services) and you desire to have a fixed port.
This makes it possible to set up a callback url that can be fixed and not have the standard setup with a random port that Angular and webpack sets up for you.
Here is how I managed to achieve to override this and set up a fixed port.
First off, we need to create a short Powershell script with a function to stop (kill) the processes running at a given port with the following contents:
KillAngularApp.ps1
param (
[Parameter(Mandatory=$true)][string] $portToFind
)
# This PS script requires the parameter $portToFind to be passed into it
pwd
Write-Host Probing for Angular App running at $portToFind
$runningEudAppProcessLocal = 'netstat -ano | findstr "$portToFind"'
$arr = Invoke-Expression $runningEudAppProcessLocal
# $runningEudAppProcessLocal
$arr = $arr -split '\s+'
Write-Host Probing complete:
$arr
if ($arr.Length -ge 5) {
$runningAngularAppPort = $arr[5]
$runningAngularAppPort
Write-Host Killing the process..
$killScript = "taskkill /PID $runningAngularAppPort /F"
Invoke-Expression $killScript
Write-Host probing once more
$arr = Invoke-Expression $runningEudAppProcessLocal
if ($arr.Length -eq 0){
Write-Host There is no running process any more at $portToFind
}
}
The Powershell script above runs 'netstat -ano | findstr "someportnumber"'. It finds the PID at that port by splitting the result using the '\s+' expression, i.e. whitespace.
If we find a PID (process id), we stop that process using the 'taskkill' command with the '/F' (force) flag.
We then need a C# class that .Net Core can call. The code here has been tested successfully with .Net Core 3.0 and 3.1.
KillAngularPortHelper.cs
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using System;
using System.Diagnostics;
namespace EndUserDevice
{
public static class AngularKillPortHelper
{
/// <summary>
/// Kills the Angular app running with for example node at the configured port Host:SpaPort
/// </summary>
public static void KillPort()
{
try
{
string environmentName = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");
string configFile = environmentName == Environments.Development ? "appsettings.json" :
environmentName == Environments.Staging ? "appsettings.Staging.json" :
environmentName == Environments.Production ? "appsettings.Production.json" : "appsettings.json";
// Set up configuration sources.
var config = new ConfigurationBuilder()
.AddJsonFile(configFile, optional: false)
.Build();
string angularAppPort = config.GetValue<string>("Configuration:Host:SpaPort");
if (environmentName == Environments.Development)
{
string killAngularAppRunningAtPortMessage =
$"Trying to shutdown existing running Angular SPA if it is running at reserved fixed port: {angularAppPort}";
Debug.WriteLine(killAngularAppRunningAtPortMessage);
Console.WriteLine(killAngularAppRunningAtPortMessage);
//requires Nuget Packages:
//Microsoft.Powershell.SDK
//System.Management.Automation
string ps1File = config.GetValue<string>("Configuration:Host:CloseSpaPortScriptPath");
var startInfo = new ProcessStartInfo();
startInfo.FileName = "powershell.exe";
startInfo.Arguments = "-noprofile \"& \"\"" + ps1File + "\"\"\"";
startInfo.Arguments += " -portToFind " + angularAppPort;
startInfo.UseShellExecute = false;
//WARNING!!! If the powershell script outputs lots of data, this code could hang.
//You would then need to read the output using a stream reader and purge the contents from time to time.
startInfo.RedirectStandardOutput = !startInfo.UseShellExecute;
startInfo.RedirectStandardError = !startInfo.UseShellExecute;
//startInfo.CreateNoWindow = true;
var process = new System.Diagnostics.Process();
process.StartInfo = startInfo;
process.Start();
process.WaitForExit(3*1000);
//if you want to limit how long you wait for the program to complete,
//input is in milliseconds:
//var seconds_to_wait_for_exit = 120;
//process.WaitForExit(seconds_to_wait_for_exit * 1000);
string output = "";
if (startInfo.RedirectStandardOutput)
{
output += "Standard Output";
output += Environment.NewLine;
output += process.StandardOutput.ReadToEnd();
}
if (startInfo.RedirectStandardError)
{
var error = process.StandardError.ReadToEnd();
if (!string.IsNullOrWhiteSpace(error))
{
if (!string.IsNullOrWhiteSpace(output))
{
output += Environment.NewLine;
output += Environment.NewLine;
}
output += "Error Output";
output += Environment.NewLine;
output += error;
}
}
Console.WriteLine(output);
Debug.WriteLine(output);
}
}
catch (Exception err)
{
Console.WriteLine(err);
}
}
}
}
Finally, we can configure our app to use the ports we want, like this:
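The appsettings.json keys below are an assumption reconstructed from the configuration paths the helper class reads (Configuration:Host:SpaPort and Configuration:Host:CloseSpaPortScriptPath); adjust the names, ports and script path to your setup:

```json
{
  "Configuration": {
    "Host": {
      "ApiPort": "44364",
      "SpaPort": "44394",
      "CloseSpaPortScriptPath": "C:\\dev\\scripts\\KillAngularApp.ps1"
    }
  }
}
```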
The ApiPort here is not used by our code in our article. The SpaPort however is used.
We then call this helper class like this in Program.cs of our asp.net core application hosting the Angular Spa:
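A minimal sketch of that call in Program.cs (assuming the AngularKillPortHelper class above is referenced) could look like this:

```csharp
public static void Main(string[] args)
{
    // Free up the fixed SPA port before the host (and the Angular dev server) starts
    AngularKillPortHelper.KillPort();
    CreateHostBuilder(args).Build().Run();
}
```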
This makes sure that the port is ready and not busy with another app (such as your previous debugging session of the app!), and by killing the app running with node at that port, we make
sure that Angular can run at a fixed port.
Are we done yet? No! We must also update the package.json file to use the port we want! This must correspond to the port we configure in the appsettings.json file to kill in the first place (i.e. make sure the port is freely available and not busy).
Never mind much of the setup above; the important bit to note here is this part:
--port=44394 &REM
The adjustment of the ng start script makes sure we use a fixed port for our Angular app. Sadly, as you can see, this is not easily configurable, as we hard-code it into our package.json.
Maybe we could set this up as an environment variable in an Angular environment file?
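As an assumption of what such a package.json start script could look like (the port 44394 matches the fixed port mentioned above; the surrounding script entries are hypothetical):

```json
{
  "scripts": {
    "ng": "ng",
    "start": "ng serve --port=44394",
    "build": "ng build"
  }
}
```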
Hope you found this article interesting. Having a fixed port is always handy when developing an Angular app locally, if you do not like the random port that is set up by default otherwise.
Works on my PC!
I have created a standalone tool that can run Eslint from the command line. The tool is a Node.js application packaged with Pkg as a node10-win target, built as a standalone EXE executable.
You can find the repository here:
Here you can also alter the application to your needs, if necessary.
The application is available as an npm package or a Nuget package on the official repositories (npmjs.org and nuget.org).
This article will focus on the use of the application via Nuget and on activating the tool in Azure DevOps.
First off, make sure you add the official Nuget repo to your Nuget.config file like this:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageRestore>
<add key="enabled" value="True" />
</packageRestore>
<activePackageSource>
<!-- some other nuget repo in addition if desired -->
</activePackageSource>
<packageSources>
<clear />
<!-- some other nuget repo in addition if desired -->
<add key="Nuget official repo" value="https://nuget.org/api/v2/" />
</packageSources>
</configuration>
Now you can add a PackageReference to the EslintStandalone.Cli tool in the .csproj project file (or .vbproj if you use Visual Basic) like this:
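A sketch of that reference (the version number is an assumption; the GeneratePathProperty attribute is the important part, as explained below):

```xml
<ItemGroup>
  <PackageReference Include="EslintStandalone.Cli" Version="1.1.0" GeneratePathProperty="true" />
</ItemGroup>
```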
Also add the following copy step to copy the standalone.exe tool within the Nuget package out to the bin folder of your project:
<ItemGroup>
<Content Include="$(PkgEslintStandalone_Cli)\eslint-standalone.exe">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>
</ItemGroup>
This is possible because we set GeneratePathProperty to true, which lets us refer to the folder of the Nuget package on disk as $(PkgEslintStandalone_Cli). The Nuget package is called EslintStandalone.Cli; to form the variable name we replace '.' with '_' and always prefix it with Pkg, and we
reference the whole thing with the $() expression.
The next step is to add the execution of the tool in Azure DevOps as a task. You can either define a single task or a task group. I like task groups, since we can then easily share tasks among projects.
The following command should be added:
dir
echo Starting Eslint tool to analyze for compatibility issues in Javascript files
cd Source\SomeProject\bin
echo Current folder
dir *.exe
move eslint-standalone.exe ..
cd ..
echo Navigated to root folder of SomeProject. Starting the eslint-standalone tool.
eslint-standalone.exe
Here we move the standalone tool one level up, from the bin folder to the root of the project. This is usually where we have our target files, i.e. the Javascript files of the project (e.g. an MVC project or other web projects).
Finally we must supply an .eslintrc.js file, a config file for Eslint. At my work I have customers that use Internet Explorer 11, so I check for ECMAScript 5 compatibility. This tool can handle such a scenario. The following .eslintrc.js should suffice:
module.exports = {
"plugins": ["ie11"],
"env": {
"browser": true,
"node": true,
"es6": false
},
"parserOptions": {
"ecmaVersion": 5,
},
"rules": {
"ie11/no-collection-args": ["error"],
"ie11/no-for-in-const": ["error"],
//"ie11/no-loop-func": ["warn"],
"ie11/no-weak-collections": ["error"],
"curly": ["off"]
}
};
A list of rules that can be applied is here: https://eslint.org/docs/rules/
The rules can have the following severity levels in Eslint: 'warn', 'error' and 'off'. See also the configuration guide: https://eslint.org/docs/user-guide/configuring
If you want to use the tool in a Npm based project, you can see the Npm page here:
https://www.npmjs.com/package/eslint-standalone
npm i eslint-standalone
I have two versions of the tool. Version 1.1 is recommended, as you must supply an .eslintrc.js file yourself and so have control over how the linting is done. Version 1.2 bundles an .eslintrc.js in the same folder as the tool, with the ES5 support detection shown above included.
The tool itself is quite simple code in Node.js:
#!/usr/bin/env node
const CLIEngine = require("eslint").CLIEngine;
const minimist = require("minimist");
const path = require("path");
const chalk = require("chalk");
const eslintPluginCompat = require("eslint-plugin-compat");
const eslintIe11 = require("eslint-plugin-ie11");
const fs = require("fs");
const { promisify } = require("util");
const fsAccessAsync = promisify(fs.access);
var runEsLint = function(baseConfig, args) {
const cli = new CLIEngine({ baseConfig });
let filesDir = [];
if (args.dir) {
// Dir can be a string or an array, we do a preprocessing to always have an array
filesDir = []
.concat(args.dir)
.map((item) => path.resolve(process.cwd(), item));
} else {
filesDir = ["./."];
}
console.log(`> eslint is checking the following dir: ${filesDir}`);
const report = cli.executeOnFiles(filesDir);
if (report.errorCount > 0) {
const formatter = cli.getFormatter();
console.log(
chalk.bold.redBright(`> eslint has found ${report.errorCount} error(s)`)
);
console.log(formatter(report.results));
process.exitCode = 1; //eslint errors encountered means the process should exit not with exit code 0.
return;
}
console.log(chalk.bold.greenBright("> eslint finished without any errors!"));
process.exitCode = 0; //exit with success code
}
var tryLoadConfigViaKnownSystemFolder = function(){
let configFileFound = null;
try {
let knownHomeDirectoryOnOSes =
process.env.HOME || process.env.HOMEPATH || process.env.USERPROFILE;
let knownHomeDirectoryOnOSesNormalized = path.normalize(
knownHomeDirectoryOnOSes + "/.eslintrc"
);
configPath = path.resolve(knownHomeDirectoryOnOSesNormalized);
if (checkIfFileExistsAndIsAccessible(configPath)){
configFileFound = true;
errorEncountered = false;
}
} catch (error) {
errorEncountered = true;
console.error(error);
process.exitCode = 1; //signal an error has occured. https://stackoverflow.com/questions/5266152/how-to-exit-in-node-js
}
return configFileFound; //also return the result when no error occurred
};
var checkIfFileExistsAndIsAccessible = function(configPathFull) {
try {
fs.accessSync(configPathFull, fs.F_OK);
return true;
}
catch (Error){
return false;
}
}
var tryLoadFileInDirectoryStructure = function(curDir){
let configFullPathFound = null;
for (let i = 0; i < 100; i++) {
try {
if (i > 0) {
console.info("Trying lib folder of eslint-standalone: " + curDir);
let oldCurDir = curDir;
curDir = path.resolve(curDir, ".."); //parent folder
if (oldCurDir == curDir) {
//at the top of media disk volume - exit for loop trying to retrieve the .eslintrc.js file from parent folder
console.info(
"It is recommended to save an .eslintrc.js file in the folder structure where you run this tool."
);
break;
}
}
configPath = path.join(curDir + "/.eslintrc.js");
configPath = path.normalize(configPath);
if (checkIfFileExistsAndIsAccessible(configPath)){
baseConfig = require(configPath);
errorEncountered = false;
configFullPathFound = configPath;
break; //exit the for loop
}
} catch (error) {
process.stdout.write(".");
errorEncountered = true;
}
}
return configFullPathFound;
}
var inspectArgs = function(args) {
let fix = false;
console.log("Looking at provided arguments:");
for (var i = 0; i < args.length; i++) {
console.log(args[i]);
if (args[i] === "--fix") {
fix = true;
console.log("Fix option provided: " + fix);
console.warn("Fix is not supported yet, you must manually adjust the files."
);
}
}
}
module.exports = (() => {
const args = process.argv.slice(2);
inspectArgs(args);
// Read a default eslint config
//console.log("Dirname: " + __dirname);
let configPath = "";
let baseConfig = "";
let errorEncountered = false;
console.info("Trying to resolve .eslintrc.js file");
console.info("Trying current working directory:", process.cwd());
let curDir = process.cwd();
let configFilefound = tryLoadFileInDirectoryStructure(curDir);
if (configFilefound === null) {
curDir = __dirname;
configFilefound = tryLoadFileInDirectoryStructure(curDir);
}
// Check if the path to a client config was specified
if (args.conf) {
if (Array.isArray(args.conf)) {
const error = chalk.bold.redBright(
`> eslint requires a single config file`
);
errorEncountered = true;
console.warn(error);
}
try {
configPath = path.resolve(process.cwd(), args.conf);
baseConfig = require(configPath);
errorEncountered = false;
} catch (error) {
errorEncountered = true;
console.log(error);
}
}
if (errorEncountered === true) {
configFileFound = tryLoadConfigViaKnownSystemFolder();
if (configFileFound !== null) {
baseConfig = `{
"extends": "${configPath}"
}`;
}
}
console.log(`> eslint has loaded config from: ${configFilefound}`);
runEsLint(baseConfig, args);
})();
The following sample code shows how to create a generic memory cache for .Net Framework.
This allows you to cache items of a specific type defined by a TCacheItemData type argument, i.e. caching the same type of data, such as instances of a class or arrays of instances.
Inside your .csproj you should see something like:
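Since this targets .Net Framework, the cache types live in the System.Runtime.Caching assembly; assuming an old-style project file, the reference could look like this:

```xml
<ItemGroup>
  <Reference Include="System.Runtime.Caching" />
</ItemGroup>
```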
Since the memory cache may be shared with other parts of the application, it is important to prefix your cached contents, i.e. prefix the keys.
This makes it easier to partition the memory cache. Isolation across processes is of course already in place; the prefix just makes it easier, within your application and running process,
to group the cached elements using the prefix key in the generic memory cache operations.
Now, over to the implementation.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Runtime.Caching;
namespace SomeAcme.SomeUtilNamespace
{
/// <summary>
/// Thread safe memory cache for generic use
/// </summary>
/// <typeparam name="TCacheItemData">Payload to store in the memory cache</typeparam>
/// <remarks>Uses MemoryCache.Default which defaults to an in-memory cache. All cache items are prefixed with an 'import cache session guid' to compartmentalize
/// multiple parallel importing sessions</remarks>
public class GenericMemoryCache<TCacheItemData> where TCacheItemData : class
{
private readonly string _prefixKey;
private readonly ObjectCache _cache;
private readonly CacheItemPolicy _cacheItemPolicy;
public GenericMemoryCache(string prefixKey, int defaultExpirationInSeconds = 0)
{
defaultExpirationInSeconds = Math.Abs(defaultExpirationInSeconds); //checking if a negative value was passed into the constructor.
_prefixKey = prefixKey;
_cache = MemoryCache.Default;
_cacheItemPolicy = defaultExpirationInSeconds == 0
? new CacheItemPolicy { Priority = CacheItemPriority.NotRemovable }
: new CacheItemPolicy
{ AbsoluteExpiration = DateTime.Now.AddSeconds(Math.Abs(defaultExpirationInSeconds)) };
}
/// <summary>
/// Cache object if direct access is desired
/// </summary>
public ObjectCache Cache => _cache;
public string PrefixKey(string key) => $"{_prefixKey}_{key}";
/// <summary>
/// Adds an item to memory cache
/// </summary>
/// <param name="key"></param>
/// <param name="itemToCache"></param>
/// <returns></returns>
public bool AddItem(string key, TCacheItemData itemToCache)
{
try
{
if (!key.StartsWith(_prefixKey))
key = PrefixKey(key);
var cacheItem = new CacheItem(key, itemToCache);
_cache.Add(cacheItem, _cacheItemPolicy);
return true;
}
catch (Exception err)
{
Debug.WriteLine(err);
return false;
}
}
public virtual List<T> GetValues<T>()
{
List<T> list = new List<T>();
IDictionaryEnumerator cacheEnumerator = (IDictionaryEnumerator)((IEnumerable)_cache).GetEnumerator();
while (cacheEnumerator.MoveNext())
{
if (cacheEnumerator.Key == null)
continue;
if (cacheEnumerator.Key.ToString().StartsWith(_prefixKey))
list.Add((T)cacheEnumerator.Value);
}
return list;
}
/// <summary>
/// Retrieves a cache item. Possible to set the expiration of the cache item in seconds.
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public TCacheItemData GetItem(string key)
{
try
{
if (!key.StartsWith(_prefixKey))
key = PrefixKey(key);
if (_cache.Contains(key))
{
CacheItem cacheItem = _cache.GetCacheItem(key);
object cacheItemValue = cacheItem?.Value;
UpdateItem(key, cacheItemValue as TCacheItemData);
TCacheItemData item = _cache.Get(key) as TCacheItemData;
return item;
}
return null;
}
catch (Exception err)
{
Debug.WriteLine(err);
return null;
}
}
public bool SetItem(string key, TCacheItemData itemToCache)
{
try
{
if (!key.StartsWith(_prefixKey))
key = PrefixKey(key);
//add the item when it is not cached yet, otherwise update the existing entry
if (GetItem(key) == null)
{
AddItem(key, itemToCache);
return true;
}
UpdateItem(key, itemToCache);
return true;
}
catch (Exception err)
{
Debug.WriteLine(err);
return false;
}
}
/// <summary>
/// Updates an item in the cache and sets the expiration of the cache item
/// </summary>
/// <param name="key"></param>
/// <param name="itemToCache"></param>
/// <returns></returns>
public bool UpdateItem(string key, TCacheItemData itemToCache)
{
if (!key.StartsWith(_prefixKey))
key = PrefixKey(key);
CacheItem cacheItem = _cache.GetCacheItem(key);
if (cacheItem != null)
{
cacheItem.Value = itemToCache;
_cache.Set(key, itemToCache, _cacheItemPolicy);
}
else
{
//if we cant find the cache item, just set the cache directly
_cache.Set(key, itemToCache, _cacheItemPolicy);
}
return true;
}
/// <summary>
/// Removes an item from the cache
/// </summary>
/// <param name="key"></param>
/// <returns></returns>
public bool RemoveItem(string key)
{
if (!key.StartsWith(_prefixKey))
key = PrefixKey(key);
if (_cache.Contains(key))
{
_cache.Remove(key);
return true;
}
return false;
}
public void AddItems(Dictionary<string, TCacheItemData> itemsToCache)
{
foreach (var kvp in itemsToCache)
AddItem(kvp.Key, kvp.Value);
}
/// <summary>
/// Clear all cache keys starting with the known prefix passed into the constructor.
/// </summary>
public void ClearAll()
{
var cacheKeys = _cache.Select(kvp => kvp.Key).ToList();
foreach (string cacheKey in cacheKeys)
{
if (cacheKey.StartsWith(_prefixKey))
_cache.Remove(cacheKey);
}
}
}
}
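A short usage sketch of the cache class above (the prefix and keys here are made up for illustration):

```csharp
// Hypothetical usage of GenericMemoryCache<TCacheItemData>
var cache = new GenericMemoryCache<string>("ImportSession42", defaultExpirationInSeconds: 60);

cache.AddItem("CustomerName", "Contoso");     // stored under key "ImportSession42_CustomerName"
string name = cache.GetItem("CustomerName");  // retrieves "Contoso"

cache.SetItem("CustomerName", "Fabrikam");    // updates the existing entry
cache.ClearAll();                             // removes only keys with the "ImportSession42" prefix
```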
To add live reloading when developing Asp.Net Core Views, it is recommended to upgrade to .Net Core 3.1. This makes it easier to add in the Nuget package for recompilation.
In case you have a .Net Core 2 app, follow the MSDN guide here: https://docs.microsoft.com/en-us/aspnet/core/migration/22-to-30?view=aspnetcore-3.1&tabs=visual-studio
After the app runs as .Net Core 3.1, do the following (the procedure below was tested OK with VS 2019 and Chrome as the browser 'linked' to the reloading):
Edit the .csproj file by selecting the project, right-clicking and choosing Edit project file in VS 2019. Paste in these two Nuget package references, then run dotnet restore, dotnet build and finally dotnet run.
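The two package references are likely these (the version numbers are assumptions; pick the ones matching your .Net Core 3.1 setup):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="3.1.0" />
  <PackageReference Include="Microsoft.VisualStudio.Web.BrowserLink" Version="2.2.0" />
</ItemGroup>
```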
The runtime compilation and Browserlink should both be added. The first will rebuild your edited razor views (cshtml) and BrowserLink reloads your browser while debugging, after the razor view is updated.
Also download this Visual Studio Extension, "Browser reload on save":
https://marketplace.visualstudio.com/items?itemName=MadsKristensen.BrowserReloadonSave
You will have to close all Visual Studio processes to start installing Mads Kristensen's browser extension.
In your Startup class, inside ConfigureServices, you should add these two lines, specifying AddRazorRuntimeCompilation:
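A sketch of that ConfigureServices registration (assuming an MVC app with views):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Register MVC with views and enable recompilation of .cshtml files at runtime
    services.AddControllersWithViews()
            .AddRazorRuntimeCompilation();
}
```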
And finally, at the top of the Configure method in the Startup class, add in BrowserLink. Note: add this at the top of the Configure method so that BrowserLink is added correctly to the pipeline.
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
app.UseBrowserLink();
}
Now, just start up your app with F5 and start editing a razor file. If all was set up correctly, you should see your razor view reload in the browser. This makes it easier to edit and adjust razor views!
To add Bootstrap 4 or newer to an Asp.Net Core MVC solution, you can do this in the following manner if you use Visual Studio 2019 and .Net Core 3.1, or at least have access to
the 'Manage Client-side Libraries' functionality. Bootstrap is no longer distributed via Nuget, and certainly not for .Net Core apps, so you need to add it manually using this.
Manage Client-side Libraries in Visual Studio adds a libman.json file. This is the Library Manager JSON file, similar to package.json in npm-based solutions.
Now add the following into libman.json:
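A libman.json matching the versions mentioned below could look like this (the provider and destination paths are assumptions; with the cdnjs provider, Bootstrap is published under the name twitter-bootstrap):

```json
{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "twitter-bootstrap@4.2.1",
      "destination": "wwwroot/lib/bootstrap/"
    },
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery/"
    }
  ]
}
```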
This adds Bootstrap 4.2.1 and jQuery 3.3.1 into wwwroot/lib folders for each library.
Now in your _Layout.cshtml file (or the file you use as your layout file), just drag the bootstrap.min.css and bootstrap.bundle.js files into that razor file.
After you restart the solution, you should have Bootstrap 4 available into your Asp.Net Core MVC app!
And if you want to add client-side libraries using a GUI, select your project and then right-click and choose Add->Client-Side Library. Here you can search for client-side libraries.
I created a new extension in Visual Studio today! The UniqueEnumValueFixer VS extension is now available here:
https://marketplace.visualstudio.com/items?itemName=ToreAurstadIT.EXT001
The extension is actually a code fix for Visual Studio. It flags a warning to the developer if an enum contains multiple members
mapped to the same value. Having a collision of enum values causes ambiguity and confusion for the developer, since an integer value
no longer maps back to a single enum member.
Example like this:
Here we see that iceconverted is set to Fudge, which is the last of the colliding enum members. This gives code which is unclear, confusing and ambiguous. It is perfectly valid,
but programmers will perhaps sigh a bit when they see enums with multiple members mapped to the same value.
The following sample code shows a violation of the rule:
Here, multiple members are mapped to the same value in the enum. Strawberry and Vanilla points to the same value through assignment. And Peach is set to same value as Chocolate.
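As a reconstruction of the kind of enum described above (the member names follow the description; the exact values are assumptions):

```csharp
public enum IceCreamFlavor
{
    Vanilla = 0,
    Strawberry = Vanilla, // collides with Vanilla through assignment
    Chocolate = 2,
    Peach = Chocolate,    // collides with Chocolate through assignment
    Fudge = 2             // also collides with Chocolate and Peach
}
```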
The code fix will show enums containing the violation after compiling the solution in the Errors and Warnings pane of Visual Studio.
public override void Initialize(AnalysisContext context)
{
// TODO: Consider registering other actions that act on syntax instead of or in addition to symbols
// See https://github.com/dotnet/roslyn/blob/master/docs/analyzers/Analyzer%20Actions%20Semantics.md for more information
context.RegisterSymbolAction(AnalyzeSymbol, SymbolKind.NamedType);
}
private static void AnalyzeSymbol(SymbolAnalysisContext context)
{
try
{
var namedTypeSymbol = (INamedTypeSymbol)context.Symbol;
if (namedTypeSymbol.EnumUnderlyingType != null)
{
var valueListForEnum = new List<Tuple<string, int>>();
//Debugger.Launch();
//Debugger.Break();
var typeResolved = context.Compilation.GetTypeByMetadataName(namedTypeSymbol.MetadataName) ?? context.Compilation.GetTypeByMetadataName(namedTypeSymbol.ToString());
if (typeResolved != null)
{
foreach (var member in typeResolved.GetMembers())
{
var c = member.GetType().GetRuntimeProperty("ConstantValue");
if (c == null)
{
c = member.GetType().GetRuntimeProperties().FirstOrDefault(prop =>
prop != null && prop.Name != null &&
prop.Name.Contains("IFieldSymbol.ConstantValue"));
if (c == null)
{
continue;
}
}
var v = c.GetValue(member) as int?;
if (v.HasValue)
{
valueListForEnum.Add(new Tuple<string, int>(member.Name, v.Value));
}
}
if (valueListForEnum.GroupBy(v => v.Item2).Any(g => g.Count() > 1))
{
var diagnostic = Diagnostic.Create(Rule, namedTypeSymbol.Locations[0],
namedTypeSymbol.Name);
context.ReportDiagnostic(diagnostic);
}
}
}
}
catch (Exception err)
{
Console.WriteLine(err);
}
}
I made an AngularJs directive today that adds a horizontal scrollbar at the top and bottom of an HTML container element, such as a text area, table or div.
The AngularJs directive uses the link function of AngularJs to prepend and wrap the necessary scrolling mechanism, and adds some Javascript scroll event handlers using
jQuery.
import angular from 'angular';
var app = angular.module('plunker', []);
app.controller('MainCtrl', function($scope, $compile) {
$scope.name = 'Dual wielded horizontal scroller';
});
app.directive('doubleHscroll', function($compile) {
return {
restrict: 'C',
link: function(scope, elem, attr){
var elemWidth = parseInt(elem[0].clientWidth);
elem.wrap(`<div id='wrapscroll' style='width:${elemWidth}px;overflow:scroll'></div>`);
//note the top scroll contains an empty space as a 'trick'
$('#wrapscroll').before(`<div id='topscroll' style='height:20px; overflow:scroll;width:${elemWidth}px'><div style='min-width:${elemWidth}px'> </div></div>`);
$(function(){
$('#topscroll').scroll(function(){
$("#wrapscroll").scrollLeft($("#topscroll").scrollLeft());
});
$('#wrapscroll').scroll(function() {
$("#topscroll").scrollLeft($("#wrapscroll").scrollLeft());
});
});
}
};
});
The HTML that uses this directive, restricted to 'C' (class), then simply uses the class 'double-hscroll', following the AngularJs normalization naming convention (camelCase directive names map to dash-delimited names in markup).
<!DOCTYPE html>
<html>
<head>
<link rel="stylesheet" href="lib/style.css" />
<script src="lib/script.js"></script>
<script
src="https://code.jquery.com/jquery-3.5.1.js"
integrity="sha256-QWo7LDvxbWT2tbbQ97B53yJnYU3WhH/C8ycbRAkjPDc="
crossorigin="anonymous"></script>
</head>
<body ng-app="plunker" ng-cloak>
<div ng-controller="MainCtrl">
<h1>Hello {{name}}</h1>
<p>Dual horizontal scroll top and below a text area.</p>
<textarea noresize class="double-hscroll" rows="10" cols="30">
lorem ipsum dolores lorem ipsum dolores
lorem ipsum dolores
lorem ipsum dolores sit amen
lorem ipsum dolores
lorem ipsum dolores sit amen
lorem ipsum dolores
lorem ipsum dolores amen sit
</textarea>
</div>
</body>
</html>
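Stripped of Angular and jQuery, the core of the directive is simply two-way synchronization of `scrollLeft` between the spacer bar and the wrapped content. Below is a minimal framework-free sketch of that idea; `linkScrollbars` and the mock element are hypothetical helpers for illustration, not part of the directive above:

```javascript
// Keep two elements' horizontal scroll positions in sync, both ways.
// A guard flag prevents the two 'scroll' handlers from re-triggering each other.
function linkScrollbars(top, wrap) {
  let syncing = false;
  const follow = (src, dst) => () => {
    if (syncing) return;
    syncing = true;
    dst.scrollLeft = src.scrollLeft;
    syncing = false;
  };
  top.addEventListener('scroll', follow(top, wrap));
  wrap.addEventListener('scroll', follow(wrap, top));
}

// Tiny stand-in for a DOM element so the sketch can run outside a browser.
function mockElement() {
  const handlers = {};
  return {
    scrollLeft: 0,
    addEventListener(type, fn) { handlers[type] = fn; },
    dispatch(type) { if (handlers[type]) handlers[type](); }
  };
}

const topBar = mockElement();
const content = mockElement();
linkScrollbars(topBar, content);
topBar.scrollLeft = 120;
topBar.dispatch('scroll'); // content.scrollLeft is now 120 as well
```

In a real page the guard flag matters because assigning `scrollLeft` fires a `scroll` event on the other element, which would otherwise bounce back and forth.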
Note that the correct version to install depends on the version of .Net Core you are running. The package above was tested OK with .Net Core.
Then we need to add EventLog. In the Program class we can do this like so:
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.EventLog;
namespace SomeAcme.SomeApi
{
public class Program
{
public static void Main(string[] args)
{
CreateHostBuilder(args).Build().Run();
}
public static IHostBuilder CreateHostBuilder(string[] args) =>
Host.CreateDefaultBuilder(args)
.ConfigureLogging((hostingContext, logging) =>
{
logging.ClearProviders();
logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
logging.AddEventLog(new EventLogSettings()
{
SourceName = "SomeApi",
LogName = "SomeApi",
Filter = (x, y) => y >= LogLevel.Warning
});
logging.AddConsole();
})
.ConfigureWebHostDefaults(webBuilder =>
{
webBuilder.UseStartup<Startup>();
});
}
}
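The AddConfiguration(hostingContext.Configuration.GetSection("Logging")) call above reads log-level settings from appsettings.json. A minimal sketch of such a section follows; the category names and levels here are just an example, adjust to your own needs:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning"
    },
    "EventLog": {
      "LogLevel": {
        "Default": "Warning"
      }
    }
  }
}
```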
using SomeAcme.SomeApi.SomeModels;
using SomeAcme.SomeApi.Services;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;
namespace SomeAcme.SomeApi.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class SomeController : ControllerBase
{
private readonly ISomeService _someService;
private readonly ILogger<SomeController> _logger;
public SomeController(ISomeService someService, ILogger<SomeController> logger)
{
_someService = someService;
_logger = logger;
}
// GET: api/Some
[HttpGet]
public IEnumerable<SomeModel> GetAll()
{
return _someService.GetAll();
}
}
}
More advanced use, add a global exception handler inside Configure method of Startup class in .Net Core:
//Set up a global error handler that handles unhandled exceptions in the API by logging them and returning HTTP 500 with diagnostic information in Development and Staging
app.UseExceptionHandler(errorApp =>
{
errorApp.Run(async context =>
{
context.Response.StatusCode = 500; // or another Status accordingly to Exception Type
context.Response.ContentType = "application/json";
var status = context.Features.Get<IStatusCodeReExecuteFeature>();
var error = context.Features.Get<IExceptionHandlerFeature>();
if (error != null)
{
var ex = error.Error;
string exTitle = "Http 500 Internal Server Error in SomeAcme.SomeApi occurred. The unhandled error is: ";
string exceptionString = !env.IsProduction() ? (new ExceptionModel
{
Message = exTitle + ex.Message,
InnerException = ex?.InnerException?.Message,
StackTrace = ex?.StackTrace,
OccuredAt = DateTime.Now,
QueryStringOfException = status?.OriginalQueryString,
RouteOfException = status?.OriginalPath
}).ToString() : new ExceptionModel()
{
Message = exTitle + ex.Message,
OccuredAt = DateTime.Now
}.ToString();
try
{
_logger.LogError(exceptionString);
}
catch (Exception err)
{
Console.WriteLine(err);
}
await context.Response.WriteAsync(exceptionString, Encoding.UTF8);
}
});
});
And finally a helper model to pack our exception information into.
using System;
using Newtonsoft.Json;
namespace SomeAcme.SomeApi.Models
{
/// <summary>
/// Exception model for generic useful information to be returned to the client caller
/// </summary>
public class ExceptionModel
{
public string Message { get; set; }
public string InnerException { get; set; }
public DateTime OccuredAt { get; set; }
public string StackTrace { get; set; }
public string RouteOfException { get; set; }
public string QueryStringOfException { get; set; }
public override string ToString()
{
return JsonConvert.SerializeObject(this);
}
}
}
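The dev-vs-production branching in the handler above (full details outside Production, only message and timestamp in Production) can be sketched as a plain function. The sketch below uses JavaScript and hypothetical names for illustration; it is not part of the article's C# code:

```javascript
// Shape the error payload depending on environment: production callers get a
// minimal payload, other environments also get stack trace and route details.
function buildErrorPayload(err, environment, route, queryString) {
  const base = { message: err.message, occurredAt: new Date().toISOString() };
  if (environment === 'Production') return base;
  return { ...base, stackTrace: err.stack, route, queryString };
}

const sample = buildErrorPayload(new Error('boom'), 'Production');
// sample only contains message and occurredAt, no stack trace
```

The point of the branch is the same as in the C# handler: never leak stack traces or routing internals to callers of a production API.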
The tricky bit here is to get hold of a logger inside the Startup class. You can inject ILoggerFactory for this and just do:
_logger = loggerFactory.CreateLogger<Startup>();
Where _logger is used in the global error handler above.
Now, back to the question of how to write to the event log: look at the source code for SomeController above. We inject an ILogger, and that instance offers methods such as LogWarning and LogError for writing to your configured logs. Since we registered the event log provider in the Program class, entries end up in the event log automatically.
Before you test out the code above, run the following Powershell script as administrator to get your event log source:
New-EventLog -LogName SomeApi -SourceName SomeApi
What I like with this approach is that if we do everything correctly, the exceptions pop up under the SomeApi source nicely instead of cluttering the general Application event log.