Sunday, 13 October 2019

Using TagBuilder in MVC

I have started programming MVC again at work after many years focusing more on WPF. So I came across a class called 'TagBuilder'. This is a handy class for generating markup programmatically. Let us create an HTML helper that renders an image tag, using TagBuilder to achieve this. First off, we create the HTML helper like this:
using System.Web;
using System.Web.Mvc;

namespace HelloTagBuilderDemo.Helpers
{
    /// <summary>
    /// Sample usage of TagBuilder in MVC
    /// </summary>
    public static class HtmlHelpers
    {
        /// <summary>
        /// Generates an IMG element which is self-closing and uses as src the given path pointing to an image file with a relative path within the web application
        /// and with alternate text
        /// </summary>
        /// <param name="path"></param>
        /// <param name="alternateText"></param>
        /// <returns></returns>
        // ReSharper disable once UnusedParameter.Global
        // ReSharper disable once InvalidXmlDocComment
        public static IHtmlString Image(this HtmlHelper htmlHelper, string path, string alternateText)
        {
            var builder = new TagBuilder("img");
            // ReSharper disable once StringLiteralTypo
            builder.Attributes.Add("src", VirtualPathUtility.ToAbsolute(path));
            builder.Attributes.Add("alt", alternateText);
            var markupResult = builder.ToString(TagRenderMode.SelfClosing);
            return new MvcHtmlString(markupResult);
        }
    }
}

We make use of the VirtualPathUtility.ToAbsolute method to convert a virtual path to an absolute path. Then we make use of the HTML helper inside an MVC view like this:

@{
    ViewBag.Title = "Home Page";
}

@using System.Web.Mvc.Html
@using HelloTagBuilderDemo.Helpers


<div class="jumbotron">
    <h1>ASP.NET</h1>
    <p class="lead">ASP.NET is a free web framework for building great Web sites and Web applications using HTML, CSS and JavaScript.</p>
    <p><a href="https://asp.net" class="btn btn-primary btn-lg">Learn more »</a></p>
</div>

<div class="row">
    <div class="col-md-4">
        <h2>Getting started</h2>
        <p>Harold is at it again, making clones of himself to inflict multiple pain.</p>                
    </div>
    <div class="col-md-4">
        @Html.Image(@"~/Images/haroldatitagain.jpg", "Harold")
    </div>
</div>
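Assuming the web application runs at the site root, the markup the helper generates for this call would look along these lines (the exact src value depends on what VirtualPathUtility.ToAbsolute resolves the virtual path to):

<img alt="Harold" src="/Images/haroldatitagain.jpg" />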
And now we have our resulting HTML helper on display; the screen grab also shows the markup the HTML helper generated for us. [1] TagBuilder on MSDN: https://docs.microsoft.com/en-us/dotnet/api/system.web.mvc.tagbuilder?view=aspnet-webpages-3.2

Sunday, 29 September 2019

Deleting a set of databases with similar names with T-SQL

Devops Sunday. If you end up having many databases in SQL Server and want to get rid of them by matching their names, this T-SQL should help you out.

use master
go
declare @tablestoNuke as table(db nvarchar(100))
insert into @tablestoNuke
select name from sys.databases  
where name like '%SomeSimilarDbNameSet%'
declare @nukedb as nvarchar(100)
declare @nukesql as nvarchar(150)

declare nuker cursor for
select db from @tablestoNuke

open nuker
fetch next from nuker into @nukedb

while @@FETCH_STATUS = 0 
begin
set @nukesql = 'drop database ' + @nukedb
exec sp_executesql @nukesql
fetch next from nuker into @nukedb
end

close nuker
deallocate nuker

print 'All done nuking'


The T-SQL uses a cursor to loop through the database names fetched from the sys.databases view on the master database and then uses exec sp_executesql to drop each of them.

Monday, 23 September 2019

Getting started with tests on controllers in AngularJs

Some notes - I had to work with AngularJs tests today and needed to look into Jasmine and mocking, and import enough to have running tests. I use the Chutzpah test runner to run the Jasmine tests. The unit test below should get you started writing tests for controllers in AngularJs. The key concept is to import jQuery, Bootstrap, Angular Animate, Angular-Mocks and your module and controller through the reference path syntax at the top, and then define a beforeEach that captures the _$controller_ and _$rootScope_ variables and uses $rootScope.$new() to create a scope. In my case I also had to specify a provided factory 'bootstrappedData', since my controller reads the 'model' property inside it. By specifying the value this provided factory returns at the top of each test, I got the amount of DRY I needed to get started testing. I had to do this since my controller gets its data in an indirect manner, through the factory. I then create a new instance of the controller after updating the 'bootstrappedData' factory.

/// <reference path="..." />  (jQuery)
/// <reference path="..." />  (Bootstrap)
/// <reference path="..." />  (Angular)
/// <reference path="..." />  (Angular Animate)
/// <reference path="..." />  (Angular-Mocks)
/// <reference path="..." />  (your module)
/// <reference path="..." />  (your controller)

describe('someController works', function () {
    beforeEach(module('app'));
    var scope;
    var $rootScope;
    var $controller;
    var $bootstrappedData;
    var $repository;
    var ctrl;
    var provide;

    beforeEach(module(function ($provide) {
        provide = $provide;
      
    }));

    beforeEach(module(function ($provide) {
        $provide.factory('repository', function () {
            return {
                model: {
                }
            };
        });
    }));

    beforeEach(inject(function (_$controller_, _$rootScope_) {
        $controller = _$controller_;
        $rootScope = _$rootScope_;
        scope = $rootScope.$new();

    }));

    it('Creates the AngularJs controller someController', function () {
        provide.factory('bootstrappedData', function () {
            return {
                model: {
                }
            };
        });

        ctrl = $controller('someController', { $scope: scope });
        expect(ctrl).not.toBe(null);

    });

    it('Method someproperty returns expected', function () {
        provide.factory('bootstrappedData', function () {
            return {
                model: {
                    SomeProperty: '3'
                }
            };
        });
        ctrl = $controller('someController', { $scope: scope });
        var someprop = scope.isSomeConditionalPropertyReturningTrueIfSomePropertyIsThree;
        expect(someprop).toBe(true);
    });

});


A tip is to add a file called Chutzpah.json and ignore well-known Javascript libraries, so that code coverage only runs on your own code:

{
  "CodeCoverageExcludes": [ "*\\jquery.js", "*\\angular.js", "*\\bootstrap.min.js", "*\\jquery*", "*\\angular-animate.min.js" ]
  //"CodeCoverageIncludes": [ "*\\*Spec.js" ]
} 

Saturday, 21 September 2019

Looking into (circular) dependencies of MEF using C# and Ndepend - Migrating from MEF to Autofac

I decided to look into circular dependencies in C# using reflection and NDepend today. A circular dependency is problematic, especially if you are using dependency injection. In fact, if your system injects dependencies through constructors and part A imports part B and vice versa, you will usually get a crash - to instantiate part A we need part B, but part B in turn needs part A. A huge system I have been working on for years uses MEF, the Managed Extensibility Framework. This is primarily an extension framework for building pluggable applications, as seen for example in Silverlight. It also provides Inversion of Control, and you can import through constructors - every type can decorate one constructor with the [ImportingConstructor] attribute. My system however uses property based injection: you can decorate a property with the [Import] attribute and the composition container will then inject an instance of an object exporting itself as that type. You can import either concrete or interface based types, and it is also possible to specify a key for the import (a string identifier). The bad part about property based injection is that circular imports can creep up on you - the system will not crash, it silently allows circular dependencies to exist in your system. I created the following unit test to detect these circular dependencies.
        [Test]
        [Category(TestCategories.IntegrationTest)]
        public void OutputCircularDependencies()
        {
            var compositionParts = new List<CompositionPart>();
          
            foreach (var part in _aggregateCatalog.Parts)
            {
                var importList = new List<string>();
               
                foreach (var import in part.ImportDefinitions)
                {
                    if (import != null)
                    {
                        importList.Add(import.ContractName);
                    }
                }

                foreach (var export in part.ExportDefinitions)
                {
                    string exportType = null;
                    if (export.Metadata.ContainsKey("ExportTypeIdentity"))
                        exportType = export.Metadata["ExportTypeIdentity"].ToString();

                    string creationPolicy = null;
                    if (export.Metadata.ContainsKey("System.ComponentModel.Composition.CreationPolicy"))
                        creationPolicy = export.Metadata["System.ComponentModel.Composition.CreationPolicy"].ToString();
                    compositionParts.Add(new CompositionPart(part.ToString(), exportType, creationPolicy, importList));                   
                }             
            }

            foreach (var part in compositionParts)
            {
                //check each import if it imports this part
                foreach (var importPart in part.Imports)
                {
                    var matchingPart = compositionParts.FirstOrDefault(c => c.Identity == importPart);
                    if (matchingPart != null)
                    {
                        if (matchingPart.Imports.Any(i => i == part.Identity))
                        {
                            //Circular reference detected!
                            Console.WriteLine(@"Component {0} is circular dependent of component {1}", part.Name, matchingPart.Name);
                        }
                    }
                }
            }

            
        }
The code loops through the ComposablePart parts of the AggregateCatalog. Each part has ExportDefinitions (usually one export) and ImportDefinitions (often multiple imports). The first pass just gathers up information for all the parts of the AggregateCatalog; we then loop through each part again and loop through its imports in an inner loop. If an imported part in turn imports the part itself, we have a circular dependency. The test just outputs the circular dependencies to the console. I use the class CompositionPart as an entity to hold some information about each composition part of the AggregateCatalog instance. It is just a regular AggregateCatalog in MEF, created like this:

        readonly AggregateCatalog _aggregateCatalog = new AggregateCatalog();
        CompositionContainer _container;

        [SetUp]
        public void CommonInitialize()
        {
            _aggregateCatalog.Catalogs.Add(new AssemblyCatalog(System.Reflection.Assembly.GetAssembly(typeof(SomeFeatureModule))));
            _aggregateCatalog.Catalogs.Add(new AssemblyCatalog(System.Reflection.Assembly.GetAssembly(typeof(SomeFeatureServiceAgent))));
            _aggregateCatalog.Catalogs.Add(new AssemblyCatalog(System.Reflection.Assembly.GetAssembly(typeof(SomeCommonUtil))));
            _aggregateCatalog.Catalogs.Add(new AssemblyCatalog(System.Reflection.Assembly.GetAssembly(typeof(SomeProvider))));           
            _container = new CompositionContainer(_aggregateCatalog);
            _container.ComposeParts();
        }

As we can see, the AggregateCatalog just consists of several assemblies in .NET. I want to rewrite the software to not use MEF and property based imports (i.e. properties decorated with System.ComponentModel.Composition.ImportAttribute), and I prepare this by looping through the parts of the AggregateCatalog and looking at the export definitions. The target is Autofac, a DI framework that IMHO offers better dependency injection than MEF's ImportingConstructor, so by analyzing the export definitions I can generate a list of registration statements to migrate from MEF to Autofac.

        [Test]
        [Category(TestCategories.IntegrationTest)]
        public void OutputAutofacRegistrations()
        {

            var sb = new StringBuilder();

            foreach (var part in _aggregateCatalog.Parts)
            {
                string partName = part.ToString();

                if (part.ExportDefinitions != null)
                {
                    foreach (var export in part.ExportDefinitions)
                    {
                        string exportType = null;
                        if (export.Metadata.ContainsKey("ExportTypeIdentity"))
                             exportType = export.Metadata["ExportTypeIdentity"].ToString();                        
                        
                        string creationPolicy = null;
                        if (export.Metadata.ContainsKey("System.ComponentModel.Composition.CreationPolicy"))
                            creationPolicy = export.Metadata["System.ComponentModel.Composition.CreationPolicy"].ToString();
                        
                        string autofacExportDefinition = string.Format("builder.RegisterType<{0}>(){1}{2};", partName, 
                            !string.IsNullOrEmpty(exportType) ? ".As<" + exportType + ">()" : string.Empty,
                            !string.IsNullOrEmpty(creationPolicy) && creationPolicy.ToLower().Contains("nonshared") ? 
                            ".InstancePerDependency()" : ".SingleInstance()");
                        sb.AppendLine(autofacExportDefinition);

                        Console.WriteLine(autofacExportDefinition);
                    }
                    
                }

            }
        }

I also used NDepend as a supporting tool to look at visualizations of these dependencies. I had trouble detecting the ImportAttribute on properties (properties are compiled into methods in C#, known as 'property getters' and 'property setters'), but at least I came up with the following CQLinq statements to look for all types decorated with the ExportAttribute (exporting parts), and NDepend then gave a nice visualization through 'View internal dependency cycles on graph'. Since NDepend (or my lacking CQLinq skills) could not find the import attribute on properties (NDepend does not yet fully support attribute detection on methods in an easy way - detecting attributes on types is easier), I ended up with the CQLinq below to at least list the exporting classes and launch the graphical tool to see whether the parts (classes) with circular dependencies are hotspots in the code base, i.e. classes used by many other classes. The CQLinq below shows how to generate such a graph revealing class interdependencies - quite easy using NDepend.
let exportingTypes = from type in JustMyCode.Types
where type.IsClass && type.HasAttribute("System.ComponentModel.Composition.ExportAttribute")
&& (type.FullNameLike("SomeAcme"))
let f = Types.ChildMethods().Where(m => m.IsPropertyGetter)
select type
let typesAttributes = Types
from m in exportingTypes.UsingAny(typesAttributes).ChildMethods().Where(x => x.IsPropertySetter || x.IsPropertyGetter)
let mAttributes = typesAttributes.Where(t => m.HasAttribute(t)).ToArray()
where mAttributes.Length > 0
select new { m, mAttributes }
So there you have some tips on how to migrate from MEF to Autofac and detect cyclic dependencies in your source code. NDepend is a good companion tool for the refactoring job when you want to migrate. I suggest first rewriting your application to use importing constructors instead of property based imports, and then fixing up the cyclic dependencies. You can for example use Lazy<T>, which delays constructing the part of type T and thereby breaks such circular dependencies. Or you could of course refactor the code so that part A and part B, which import each other, instead both import some other part C. There are different ways to fix it up. Once you have rewritten the software to use importing constructors and there are no circular dependencies left, you can switch to Autofac. I showed you in the unit test OutputAutofacRegistrations how to do that - it outputs ContainerBuilder statements to build up a working Autofac IContainer with the same mesh of dependencies as in the MEF based application.

Sunday, 8 September 2019

Building Angular apps with source maps and vendor chunk

Here is a quick tip about controlling the generation of source maps and vendor chunks in Angular 8 apps. Source maps are built by default in Angular according to the documentation at https://angular.io/cli/build To be specific, the following command worked for me:


ng build --prod --sourceMap

In addition, the vendor chunk is now baked into the main chunk - Angular 8 puts the vendor code into the main bundle to optimize the js output. To create a separate vendor chunk, run this:
ng build --prod --sourceMap --vendor-chunk=true
In addition, it is recommended to analyze the bundle size with for example Webpack Bundle Analyzer, installed like this: npm install -g webpack-bundle-analyzer Then add the following npm run scripts to package.json:
    "buildwithstats": "ng b --sourceMap --prod --stats-json",
    "analyze": "webpack-bundle-analyzer --port 9901 dist/stats.json",
Now we have an interactive TreeMap view we can zoom into and see what is taking up space in our bundle!

Monday, 12 August 2019

Searching for 'Angularized values' in MS Sql Server DB

The following query can be used to look for 'Angularized values' in MS Sql Server DB.
DECLARE @sql nvarchar(1024)
DECLARE @column_name varchar(200)
DECLARE @table_name varchar(200)
DECLARE @sp_out varchar(2048)
DECLARE @x int

create table #tmp(sp_out varchar(2048), table_name varchar(255), column_name varchar(2048), id uniqueidentifier)
   
DECLARE tn_cursor CURSOR FOR
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.tables

open tn_cursor 
FETCH NEXT FROM tn_cursor
INTO @table_name

WHILE @@FETCH_STATUS = 0    
BEGIN   
                print @table_name

                DECLARE cn_cursor CURSOR FOR
                SELECT COLUMN_NAME
                FROM INFORMATION_SCHEMA.columns
                WHERE TABLE_NAME = @table_name

                open cn_cursor 
                FETCH NEXT FROM cn_cursor
                INTO @column_name

                WHILE @@FETCH_STATUS = 0    
                BEGIN   
                                --print @column_name

                                set @sql = 
                                N'insert into #tmp SELECT TOP (1000) ' + @column_name + ', ''' + @table_name + ''', ''' + @column_name + ''', Id' +
                                N' FROM [dbo].[' + @table_name + N']' +
                                N' WHERE ' + @column_name + N' like ''%? % ?%'''
                
                                --print @sql
                                exec sp_executesql @sql

                                FETCH NEXT FROM cn_cursor
                                INTO @column_name
                END

                CLOSE cn_cursor;    
                DEALLOCATE cn_cursor; 

                FETCH NEXT FROM tn_cursor
                INTO @table_name
END

CLOSE tn_cursor;    
DEALLOCATE tn_cursor; 

select * from  #tmp
drop table #tmp



This query will look for all fields in the DB whose contents match the pattern '%? % ?%', which matches Angularized values. An Angularized value is generated by AngularJs when it autogenerates option values because the contents of a drop down do not match the HTML.

Tuesday, 6 August 2019

Font Awesome 5 in Angular 8

Font Awesome 5 in an Angular 8 based app! Here is how I did it. I first import the Font Awesome packages into the app module:
import { fas } from '@fortawesome/free-solid-svg-icons';
import { far } from '@fortawesome/free-regular-svg-icons';
import { fab } from '@fortawesome/free-brands-svg-icons';
import { FontAwesomeModule } from '@fortawesome/angular-fontawesome';
import { library } from '@fortawesome/fontawesome-svg-core';
Also import the FontAwesomeModule into the imports section of the app module.

Add the following to the constructor of the app module:
 constructor(){
    library.add(fab, far, fas);
  }
Now you can reference the Font Awesome icons from inside any component like in this markup example:
   <div class="crop"
     (click)="onClick()"
     [style.width.px]="starWidth"
     [title]="rating">
  <div style="width: 75px">
    <span><fa-icon [icon]="['far', 'star']"></fa-icon></span>
    <span><fa-icon [icon]="['far', 'star']"></fa-icon></span>
    <span><fa-icon [icon]="['far', 'star']"></fa-icon></span>
    <span><fa-icon [icon]="['far', 'star']"></fa-icon></span>
    <span><fa-icon [icon]="['far', 'star']"></fa-icon></span>
  </div>
</div>
Note that if you do not use the solid icons from the 'fas' library, you must specify which Font Awesome icon library to use, such as 'far' for the regular icons. I ended up using the following npm packages: "@fortawesome/angular-fontawesome": "^0.3.0", "@fortawesome/fontawesome-svg-core": "^1.2.21", "@fortawesome/free-brands-svg-icons": "^5.10.1", "@fortawesome/free-regular-svg-icons": "^5.10.1", "@fortawesome/free-solid-svg-icons": "^5.10.1", Note: I downgraded to version 0.3.0 of the angular-fontawesome package, and tested this out in Angular 8. Note that adding the entire icon libraries is not recommended in most cases, as this will increase the bundle size Webpack builds up. Instead, add the necessary icons one by one, as sketched right below. However, during development it is neat to have all the (free) icons from Font Awesome readily available until it is time for production/deployment! Happy Angular 8 coding!
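Adding individual icons instead of the entire libraries could look along these lines (a minimal sketch; faStar is just an example icon, and the registration would go in the same constructor as above):

import { library } from '@fortawesome/fontawesome-svg-core';
import { faStar } from '@fortawesome/free-solid-svg-icons';
import { faStar as faStarRegular } from '@fortawesome/free-regular-svg-icons';

// Register only the icons actually used, keeping the Webpack bundle smaller
library.add(faStar, faStarRegular);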
Below is an example of a shared module you can set up and import in Angular 8 to support Font Awesome 5:
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { StarComponent } from "src/app/shared/star.component";
import { FormsModule } from "@angular/forms";
import { fas } from '@fortawesome/free-solid-svg-icons';
import { far } from '@fortawesome/free-regular-svg-icons';
import { fab } from '@fortawesome/free-brands-svg-icons';
import { library } from '@fortawesome/fontawesome-svg-core';
import { FontAwesomeModule } from '@fortawesome/angular-fontawesome';

@NgModule({
  declarations: [
    StarComponent
  ],
  imports: [
    CommonModule,
    FontAwesomeModule
  ],
  exports: [
   StarComponent,
   CommonModule,
   FormsModule,
   FontAwesomeModule,
  ]
})
export class SharedModule {

  constructor () {
    library.add(fab, far, fas);
  }

}

Angular - Displaying localized currency - setting up locale to Norwegian (example)

This article will quickly describe how we can set up a locale for Angular, to display for example currency in a correct manner. First off we import the necessary locale setup in our main module, e.g. app.module.ts:
import { NgModule, LOCALE_ID } from '@angular/core';
import { registerLocaleData } from '@angular/common';
import localeNor from '@angular/common/locales/nb';
import localeNorExtra from '@angular/common/locales/extra/nb';
Note here that this setup targets the Norwegian locale. You can see a list of the available locales in the parent folder of this url: https://github.com/angular/angular/blob/master/packages/common/locales/nb.ts We also import the 'extra' information for the locale - again the Norwegian one here; the parent folder of this url shows the available 'extra' locales: https://github.com/angular/angular/blob/master/packages/common/locales/extra/nb.ts Then we register the Norwegian locale with a call to registerLocaleData below the imports of our app module: registerLocaleData(localeNor, 'no', localeNorExtra); We also set up the providers for the app module to use the 'no' value for LOCALE_ID. A complete sample of the app module then looks like this:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule, LOCALE_ID } from '@angular/core';
import { registerLocaleData } from '@angular/common';
import localeNor from '@angular/common/locales/nb';
import localeNorExtra from '@angular/common/locales/extra/nb';
import { FormsModule } from '@angular/forms';
import { RouterModule } from '@angular/router';
import { AppComponent } from './app.component';
import { ProductListComponent } from "src/app/products/product-list.component";
import { WelcomeComponent } from "src/app/home/welcome.component";

registerLocaleData(localeNor, 'no', localeNorExtra);

@NgModule({
  declarations: [
    AppComponent,
    ProductListComponent,
  ],
  providers: [
    { provide: LOCALE_ID, useValue: 'no' }
  ],
  imports: [
    BrowserModule,
    FormsModule,
    RouterModule.forRoot([
      { path: '', component: AppComponent }

    ])
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }


An example of using the locale setup can be observed in the use of the currency pipe. Sample markup:

  <td>{{ product.productCode | lowercase }}</td>
            <td>{{ product.releaseDate }}</td>
            <td>{{ product.price  | currency:'NOK':'symbol':'1.2-2' }}</td>
            <td>

Note the syntax here: inside an Angular interpolation expression, {{ My_Expression }}, we pipe with the '|' sign to the currency pipe, then add a ':' sign and 'NOK', which denotes the Norwegian currency. If we want 'øre' (Norwegian cents) written out afterwards, we can add that outside the interpolation expression. Note that we also add another ':' sign followed by 'symbol':'1.2-2', which states that we show at least one digit for the integer part and between 2 and 2 decimals (i.e. exactly 2 decimals). This shows how we can set up the standard locale of an Angular app. Supporting multiple locales should then be a matter of importing additional locales for Angular and having some way of switching between them. It is probably best to always register both the locale and the 'locale extra' data when setting this up.
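The same formatting can also be done programmatically with the CurrencyPipe from @angular/common; a small sketch, assuming the 'no' locale has been registered as shown above (the price value is just an example):

import { CurrencyPipe } from '@angular/common';

const currencyPipe = new CurrencyPipe('no');
// With the Norwegian locale data registered this yields a string along the lines of 'kr 1 234,50'
const formattedPrice = currencyPipe.transform(1234.5, 'NOK', 'symbol', '1.2-2');
console.log(formattedPrice);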

Friday, 2 August 2019

Misc OpenId Connect setup for Angular 8

This article presents some basic sample setup of OpenId Connect for Angular 8, targeting the npm library oidc-client and Thinktecture IdentityServer 4 as the STS. The following AuthService sets up the UserManager on the client side. I must mention here that the setup ended up quite similar to the setup I did with an ASP.NET Core backend API for a React frontend SPA.
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { catchError } from 'rxjs/operators';
import { UserManager, User, WebStorageStateStore, Log } from 'oidc-client';
import { Constants } from '../constants';
import { Utils } from './utils';
import { AuthContext } from '../model/auth-context';

@Injectable()
export class AuthService {
  private _userManager: UserManager;
  private _user: User;
  authContext: AuthContext;

  constructor(private httpClient: HttpClient) {
    Log.logger = console;
    const config = {
      authority: Constants.stsAuthority,
      client_id: Constants.clientId,
      redirect_uri: `${Constants.clientRoot}assets/oidc-login-redirect.html`,
      scope: 'openid projects-api profile',
      response_type: 'id_token token',
      post_logout_redirect_uri: `${Constants.clientRoot}?postLogout=true`,
      userStore: new WebStorageStateStore({ store: window.localStorage }),
      automaticSilentRenew: true,
      silent_redirect_uri: `${Constants.clientRoot}assets/silent-redirect.html`
    };
    this._userManager = new UserManager(config);
    this._userManager.getUser().then(user => {
      if (user && !user.expired) {
        this._user = user;
        this.loadSecurityContext();
      }
    });
    this._userManager.events.addUserLoaded(args => {
      this._userManager.getUser().then(user => {
        this._user = user;
        this.loadSecurityContext();
      });
    });
  }

  login(): Promise<any> {
    console.log('Inside the new auth service!');
    return this._userManager.signinRedirect();
  }

  logout(): Promise<any> {
    return this._userManager.signoutRedirect();
  }

  isLoggedIn(): boolean {
    return this._user && this._user.access_token && !this._user.expired;
  }

  getAccessToken(): string {
    return this._user ? this._user.access_token : '';
  }

  signoutRedirectCallback(): Promise<any> {
    return this._userManager.signoutRedirectCallback();
  }

  loadSecurityContext() {
    this.httpClient.get<AuthContext>(`${Constants.apiRoot}Account/AuthContext`).subscribe(context => {
      this.authContext = context;
    }, error => console.error(Utils.formatError(error)));
  }
}
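A typical consumer of getAccessToken() is an HTTP interceptor that attaches the bearer token to API calls. A minimal sketch is shown below; the interceptor class itself is an assumption and not part of the original sample, and it would be registered with the HTTP_INTERCEPTORS multi-provider in the app module:

import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent } from '@angular/common/http';
import { Observable } from 'rxjs';
import { AuthService } from './auth.service'; // path is an assumption

@Injectable()
export class AuthHeaderInterceptor implements HttpInterceptor {
  constructor(private authService: AuthService) { }

  intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {
    const token = this.authService.getAccessToken();
    if (token) {
      // Clone the request and attach the bearer token issued by the STS
      req = req.clone({ setHeaders: { Authorization: `Bearer ${token}` } });
    }
    return next.handle(req);
  }
}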


The login redirect file oidc-login-redirect.html goes into the assets folder of Angular 8 project.
<script src="https://cdnjs.cloudflare.com/ajax/libs/oidc-client/1.4.1/oidc-client.min.js"></script>
<script>
  var config = {
    userStore: new Oidc.WebStorageStateStore({ store: window.localStorage })
  };
  var mgr = new Oidc.UserManager(config);
  mgr.signinRedirectCallback().then(() => {
        window.history.replaceState({},
            window.document.title,
            window.location.origin);
        window.location = "/";
    }, error => {
        console.error(error);
    });

</script>

The silent redirect that automatically renews the token is in the file silent-redirect.html in the assets folder:
<script src="https://cdnjs.cloudflare.com/ajax/libs/oidc-client/1.5.1/oidc-client.min.js"></script>
<script>
  var config = {
    userStore: new Oidc.WebStorageStateStore({ store: window.localStorage })
  };
new Oidc.UserManager(config).signinSilentCallback()
        .catch((err) => {
            console.log(err);
        });

</script>
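On the Angular side, routes that require a signed-in user can be protected with a guard built on top of isLoggedIn(); a minimal sketch (the guard is an assumption and not part of the original sample):

import { Injectable } from '@angular/core';
import { CanActivate } from '@angular/router';
import { AuthService } from './core/auth.service'; // path is an assumption

@Injectable()
export class AuthGuard implements CanActivate {
  constructor(private authService: AuthService) { }

  canActivate(): boolean {
    if (this.authService.isLoggedIn()) {
      return true;
    }
    // Not signed in yet - start the OpenId Connect redirect flow
    this.authService.login();
    return false;
  }
}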
Then, the config in the ThinkTecture IdentityServer looks somewhat like this:
using System.Collections.Generic;
using IdentityServer4;
using IdentityServer4.Models;

namespace SecuringAngularApps.STS
{
    public class Config
    {
        public static IEnumerable<ApiResource> GetApiResources()
        {
            return new List<ApiResource>
            {
                new ApiResource("projects-api", "Projects API")
            };
        }

        public static IEnumerable<Client> GetClients()
        {
            return new List<Client>
            {
                new Client
                {
                    ClientId = "spa-client",
                    ClientName = "Projects SPA",
                    AllowedGrantTypes = GrantTypes.Implicit,
                    AllowAccessTokensViaBrowser = true,
                    RequireConsent = false,

                    RedirectUris =           { "http://localhost:4200/assets/oidc-login-redirect.html","http://localhost:4200/assets/silent-redirect.html" },
                    PostLogoutRedirectUris = { "http://localhost:4200/?postLogout=true" },
                    AllowedCorsOrigins =     { "http://localhost:4200/" },

                    AllowedScopes =
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile,
                        "projects-api"
                    },
                    IdentityTokenLifetime=120,
                    AccessTokenLifetime=120

                },
                new Client
                {
                    ClientId = "mvc",
                    ClientName = "MVC Client",
                    AllowedGrantTypes = GrantTypes.HybridAndClientCredentials,

                    ClientSecrets =
                    {
                        new Secret("secret".Sha256())
                    },

                    RedirectUris           = { "http://localhost:4201/signin-oidc" },
                    PostLogoutRedirectUris = { "http://localhost:4201/signout-callback-oidc" },

                    AllowedScopes =
                    {
                        IdentityServerConstants.StandardScopes.OpenId,
                        IdentityServerConstants.StandardScopes.Profile
                    },
                    AllowOfflineAccess = true

                }
            };
        }

        public static IEnumerable<IdentityResource> GetIdentityResources()
            {
                return new List<IdentityResource>
            {
                new IdentityResources.OpenId(),
                new IdentityResources.Profile(),
            };
            }
        }
    }
And the STS itself is wired up like this in ASP.NET Core in Startup.cs:
using System;
using System.Linq;
using SecuringAngularApps.STS.Data;
using SecuringAngularApps.STS.Models;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Identity;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.AspNetCore.Authentication.Cookies;
using IdentityServer4.Services;
using SecuringAngularApps.STS.Quickstart.Account;

namespace SecuringAngularApps.STS
{
    public class Startup
    {
        public IConfiguration Configuration { get; }
        public IHostingEnvironment Environment { get; }

        public Startup(IConfiguration configuration, IHostingEnvironment environment)
        {
            Configuration = configuration;
            Environment = environment;
        }

        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

            services.AddIdentity<ApplicationUser, IdentityRole>()
                .AddEntityFrameworkStores<ApplicationDbContext>()
                .AddDefaultTokenProviders();

            services.AddCors(options =>
            {
                options.AddPolicy("CorsPolicy", corsBuilder =>
                {
                    corsBuilder.AllowAnyHeader()
                    .AllowAnyMethod()
                    .AllowAnyOrigin()
                    .AllowCredentials();
                });
            });
            services.AddTransient<IProfileService, CustomProfileService>();

            services.AddMvc();

            var builder = services.AddIdentityServer(options =>
                {
                    options.Events.RaiseErrorEvents = true;
                    options.Events.RaiseInformationEvents = true;
                    options.Events.RaiseFailureEvents = true;
                    options.Events.RaiseSuccessEvents = true;
                })
                .AddInMemoryIdentityResources(Config.GetIdentityResources())
                .AddInMemoryApiResources(Config.GetApiResources())
                .AddInMemoryClients(Config.GetClients())
                .AddAspNetIdentity<ApplicationUser>()
                .AddProfileService<CustomProfileService>();


            if (Environment.IsDevelopment())
            {
                builder.AddDeveloperSigningCredential();
            }
            else
            {
                throw new Exception("need to configure key material");
            }
        }

        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseDatabaseErrorPage();
            }
            else
            {
                app.UseExceptionHandler("/Home/Error");
            }

            app.UseStaticFiles();
            app.UseCors("CorsPolicy");

            app.UseIdentityServer();
            app.UseMvcWithDefaultRoute();
        }
    }
}

Thursday, 1 August 2019

Intellisense of spy objects (mocks) in Jasmine tests

When creating unit tests or integration tests for Angular 8, we often use mocking - such as mocking services. We sometimes want to fix up the Intellisense of our mocks when we create a spy object using Jasmine (the framework Angular tests are most often written in - the 'NUnit of the Javascript world'). Here is how we can achieve that. First off, create a new file called Spied.ts and add this Typescript:
export type Spied<T> = {
  [Method in keyof T]: jasmine.Spy;
};
A little bit of terminology here for .NET coders concerning Javascript tests:
  • Spy object : Mock object
  • Using .and.returnValue(of(somedata)) : Equal to using Moq Setup method to return some data for given method
  • Expect in Jasmine : Similar to Assert in MSTest and NUnit.
This builds a mapped type where every method of T is mapped to a jasmine.Spy; see the explanation of mapped types here: https://www.typescriptlang.org/docs/handbook/release-notes/typescript-2-1.html#mapped-types We can now declare our mock objects as a 'Spied' object like in this example:
let mockHeroService: Spied<HeroService>
mockHeroService = jasmine.createSpyObj(['getHeroes', 'addHero', 'deleteHero']);
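With the spy typed this way, stubbing and asserting follow the Moq/NUnit analogies listed above; a small self-contained sketch (HeroService and the data shape are just illustrations):

import { of } from 'rxjs';

it('stubs a method on the typed spy object', () => {
  const mockHeroService: Spied<HeroService> =
    jasmine.createSpyObj('mockHeroService', ['getHeroes', 'addHero', 'deleteHero']);

  // The equivalent of a Moq Setup(...).Returns(...) - the returned data shape is hypothetical
  mockHeroService.getHeroes.and.returnValue(of([{ id: 1, name: 'Windstorm' }]));

  mockHeroService.getHeroes().subscribe(heroes => expect(heroes.length).toBe(1));
  expect(mockHeroService.getHeroes).toHaveBeenCalledTimes(1);
});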
The great thing about this is that we now have decent Intellisense in place - look at this video from VS Code as proof! To handle dependency injection scenarios, do like in this example:
import { VoterService, ISession } from "src/app/events";
import { of } from "rxjs";
import { Spied } from "src/app/common/Spied";
import { HttpClient } from "@angular/common/http";

describe('VoterService', () => {
  let voterService: VoterService;
  let mockHttp: Spied<HttpClient>;

  beforeEach(() => {
    mockHttp = <Spied<HttpClient>>jasmine.createSpyObj('mockHttp', ['delete', 'post']);
    voterService = new VoterService(mockHttp);
    console.log('Inside beforeEach');
  });

  describe('deleteVoter', () => {

    it('should remove the voter from the list of voters', () => {
      var session = { id: 6, name: "John", voters: ["joe", "john"] };
      mockHttp.delete.and.returnValue(of(false));
      console.log(voterService);

      voterService.deleteVoter(3, <ISession>session, "joe");
      expect(session.voters.length).toBe(1);
    });

  });

});



We can then adjust our constructor to include the '| any' modifier on the injected parameter:

import { Injectable, Inject } from '@angular/core';
import { ISession } from '../shared';
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Observable, of } from 'rxjs';
import { catchError } from 'rxjs/operators';

@Injectable()
export class VoterService {
  constructor(@Inject(HttpClient) private http: HttpClient | any) {


  }

..

Note that we here adjust the constructor to accept not only the concrete class HttpClient but also 'any', allowing us to inject the mock object. We could alter this and introduce an interface instead, for a more elegant approach. In case you get build errors when running ng build stating that 'jasmine' could not be found, try this: inside tsconfig.json, explicitly add 'jasmine' to your 'types' like this:
{
  "compileOnSave": false,
  "compilerOptions": {
    "baseUrl": "./",
    "outDir": "./dist/out-tsc",
    "sourceMap": true,
    "declaration": false,
    "downlevelIteration": true,
    "experimentalDecorators": true,
    "module": "esnext",
    "moduleResolution": "node",
    "importHelpers": true,
    "target": "es2015",
    "types": [ "jasmine" ],
    "typeRoots": [
      "node_modules/@types"
    ],
    "lib": [
      "es2018",
      "dom"
    ]
  },
  "angularCompilerOptions": {
    "fullTemplateTypeCheck": true,
    "strictInjectionParameters": true
  }
}

Then put this single line at the top of Spied.ts to import jasmine:
import 'jasmine';

Wednesday, 31 July 2019

Consistency guard of enums used in Entity Framework

This is a consistency guard for enums in Entity Framework. It is a mechanism for protecting an entity in Entity Framework (or just EF), in case an enum property was loaded from the database with an illegal value. An illegal enum value would be any value that cannot be parsed into a defined member of the enum. We use the Enum.IsDefined method (after first running Convert.ChangeType) to check if the value for the enum is legal or not. We define a helper class BrokenEnumValue to contain our metadata about enum values that are illegal or 'broken'. The rest of the code in this article goes into the DbContext class (an ObjectContext would also work) that EF uses. The ObjectMaterialized event handler is hooked up in the constructor, for example:
            var objectContext = ((IObjectContextAdapter) this).ObjectContext;
            _log = (ILog) AutofacHostFactory.Container.Resolve(typeof(ILog));
            objectContext.ObjectMaterialized += ObjectContext_ObjectMaterialized;
Our helper POCO:

    public class BrokenEnumValue
    {
        public string PropertyName { get; set; }
        public string PropertyTypeName { get; set; }
        public Guid? SchemaGuid { get; set; }
        public string OldValue { get; set; }
        public string CorrectedValue { get; set; }

        public override string ToString()
        {
            return $"{PropertyName} {PropertyTypeName} {SchemaGuid} {OldValue} {CorrectedValue}";
        }
    }



        private void ObjectContext_ObjectMaterialized(object sender, ObjectMaterializedEventArgs e)
        {
            var brokenEnumProperties = FixBrokenEnumProperties(e.Entity);
            if (brokenEnumProperties.Any())
            {
                Type objType = e.Entity.GetType();
                var idProperty = objType.GetProperty("Id");
                Guid? schemaGuid = idProperty?.GetValue(e.Entity, null) as Guid?;
                foreach (var brokenEnum in brokenEnumProperties)
                    brokenEnum.SchemaGuid = schemaGuid;
                string brokenEnumsInfo = string.Join(" ", brokenEnumProperties.Select(b => b.ToString()).ToArray());
                _log.WriteWarning($"Detected broken enum propert(ies) in entity and resolved them to default value if available in enum (None): {brokenEnumsInfo}");
            }
        }

           public IList<BrokenEnumValue> FixBrokenEnumProperties(object obj)
        {
            var list = new List<BrokenEnumValue>();
            try
            {
                if (obj == null) return list;
       
                PropertyInfo[] properties = obj.GetType().GetProperties();
                foreach (PropertyInfo property in properties)
                {
                    if (property.GetIndexParameters()?.Any() == true)
                        continue; //skip indexer properties
                    if (property.PropertyType.IsArray)
                    {
                        Array a = (Array) property.GetValue(obj);
                        for (int i = 0; i < a.Length; i++)
                        {
                            object o = a.GetValue(i);
                            list.AddRange(FixBrokenEnumProperties(o));
                        }
                        continue; //continue to next iteration
                    }
                    object propValue = property.GetValue(obj, null);
                    var elems = propValue as IList;
                    if (elems != null)
                    {
                        foreach (var item in elems)
                        {
                            list.AddRange(FixBrokenEnumProperties(item));
                        }
                    }
                    else
                    {
                        if (property.PropertyType.IsEnum && !IsEnumDefined(propValue, property.PropertyType))
                        {
                            var correctedValue = GetDefaultEnumValue(propValue, property.PropertyType);
                            list.Add(new BrokenEnumValue
                            {
                                CorrectedValue = correctedValue?.ToString(),
                                OldValue = propValue?.ToString(),
                                PropertyName = property.Name,
                                PropertyTypeName = property.PropertyType.FullName,
                            });
                            property.SetValue(obj, correctedValue);
                            
                        }
                        if (property.PropertyType.IsClass && (property.PropertyType.GetCustomAttributes(typeof(DataContractAttribute))?.Any() == true)
                                                          && !(property.PropertyType == typeof(string)) && !property.PropertyType.IsValueType)
                        {
                            list.AddRange(FixBrokenEnumProperties(propValue));
                        }
                    }
                }
            }
            catch (Exception err)
            {
                _log.WriteError($"Exception occurred trying to fix broken enum properties: {err}");
            }
            return list;
        }

           private static T GetDefaultEnumValue<T>(T entity, Type propertyType)
           {
               foreach (var enumValue in propertyType.GetEnumValues())
               {
                   if (String.Compare(enumValue.ToString(), "None", StringComparison.OrdinalIgnoreCase) == 0)
                   {
                       return (T)enumValue;
                   }
               }
               return entity;
           }

        private static bool IsEnumDefined(object entity, Type propertyType)
        {
            var castedValue = Convert.ChangeType(entity, propertyType);
            return Enum.IsDefined(propertyType, castedValue);
        }


With this guard, we avoid that the entity fails to load in case an illegal value was stored for a given enum. Note that our fallback looks for the enum value named None, so we fall back to the None member if it exists. Most enums should have a None member mapping to the integer value 0. You can of course adjust the strategy used here. I believe such a consistency guard would be helpful for many applications using EF.

Tuesday, 30 July 2019

Slide-in animation of forms in Angular 8

Support for animations is a cool feature in Angular 8. First off, we define our animation in a Typescript file, for example a slide-in translation along the X-axis, in app.animation.ts:
import { trigger, animate, transition, style, group, query } from '@angular/animations';

export const slideInAnimation = trigger('slideInAnimation', [
  // Transition between any two states
  transition('* <=> *', [
    // Events to apply
    // Defined style and animation function to apply
    // Config object with optional set to true to handle when element not yet added to the DOM
    query(':enter, :leave', style({ position: 'fixed', width: '100%', zIndex: 2 }), { optional: true }),
    // group block executes in parallel
    group([
      query(':enter', [
        style({ transform: 'translateX(100%)' }),
        animate('0.5s ease-out', style({ transform: 'translateX(0%)' }))
      ], { optional: true }),
      query(':leave', [
        style({ transform: 'translateX(0%)' }),
        animate('0.5s ease-out', style({ transform: 'translateX(-100%)' }))
      ], { optional: true })
    ])
  ])
]);


In our app.component.ts, we import this animation slideInAnimation like this:
import { slideInAnimation } from './app.animation';
@Component({
  selector: 'pm-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css'],
  animations: [ slideInAnimation ]
})
export class AppComponent {
  pageTitle = 'SomeAcme Product Management';
}


Note the association of the slideInAnimation above. And in the app component template (app.component.html) we just declare a template variable for the router outlet and set it up to trigger on route activation.
<div class="container" [@slideInAnimation]="o.isActivated ? o.activatedRoute: ''">
  <router-outlet #o="outlet"></router-outlet>
</div>
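One thing to remember is that the animation trigger only runs when BrowserAnimationsModule is imported in the app module; a minimal sketch of the relevant part (the rest of the module is assumed):

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [
    BrowserModule,
    BrowserAnimationsModule // required for @angular/animations triggers to execute
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }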
This video shows the effect; note that this is an excerpt from Deborah Kurata's course on Angular routing from Pluralsight.

Sunday, 28 July 2019

Angular 8 - Datepicker and timepicker

I have been looking at the ngx-bootstrap components for Angular lately. These components are very nice and bring the components Bootstrap provides over to Angular. I have tested them out with Angular 8. First off, we add the datepicker and timepicker from this library like this:

ng add ngx-bootstrap --components datepicker
ng add ngx-bootstrap --components timepicker

Inside your app.module.ts, you should see the two components imported:
import { BsDatepickerModule } from "ngx-bootstrap/datepicker";
import { TimepickerModule } from 'ngx-bootstrap/timepicker';

They are also included in the imports array. My sample project's app.module.ts looks like this:
import { BrowserModule } from "@angular/platform-browser";
import { FormsModule } from "@angular/forms";
import { NgModule } from "@angular/core";
import { HttpClientModule } from "@angular/common/http";
import { AppComponent } from "./app.component";
import { UserSettingsFormComponent } from "./user-settings-form/user-settings-form.component";
import { AppFocusDirective } from "./data/app-focus.directive";
import { ButtonsModule } from "ngx-bootstrap/buttons";
import { BsDatepickerModule } from "ngx-bootstrap/datepicker";
import { BsLocaleService } from 'ngx-bootstrap/datepicker';
import { BrowserAnimationsModule } from "@angular/platform-browser/animations";
import { defineLocale } from "ngx-bootstrap/chronos";
import { nbLocale } from "ngx-bootstrap/locale";
import { TimepickerModule, TimepickerConfig } from 'ngx-bootstrap/timepicker';
import { getTimepickerConfig  } from './TimepickerConfig';

defineLocale("nb", nbLocale);

@NgModule({
  declarations: [AppComponent, UserSettingsFormComponent, AppFocusDirective],
  imports: [
    BrowserModule,
    FormsModule,
    HttpClientModule,
    ButtonsModule.forRoot(),
    BsDatepickerModule.forRoot(),
    BrowserAnimationsModule,
    TimepickerModule.forRoot()
  ],
  providers: [ BsLocaleService,
   { provide: TimepickerConfig, useFactory: getTimepickerConfig }
  ],
  bootstrap: [AppComponent]
})
export class AppModule {

  constructor(private bsLocaleService: BsLocaleService) {
    this.bsLocaleService.use('nb');
  }
}

Observe the localization adjustments I have included here, for Norwegian localization of the date picker. I also include a custom factory function for setting up the time picker to not show the meridian button, i.e. to show the time in a 24-hour format.
import { TimepickerConfig } from "ngx-bootstrap/timepicker";

// such override allows to keep some initial values

export function getTimepickerConfig(): TimepickerConfig {
  return Object.assign(new TimepickerConfig(), {
    hourStep: 2,
    minuteStep: 5,
    showMeridian: false,
    readonlyInput: false,
    mousewheel: true,
    showMinutes: true,
    showSeconds: false
  });
}
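In a component, the two pickers can then be bound to plain Date fields. A minimal sketch is shown below; the component and field names are assumptions, while the bsDatepicker directive and the timepicker element come from ngx-bootstrap (FormsModule must be imported for ngModel, as it is in the app module above):

import { Component } from '@angular/core';

@Component({
  selector: 'app-meeting-form',
  template: `
    <input type="text" class="form-control" bsDatepicker [(ngModel)]="meetingDate">
    <timepicker [(ngModel)]="meetingTime"></timepicker>
  `
})
export class MeetingFormComponent {
  meetingDate: Date = new Date();
  meetingTime: Date = new Date();
}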


If you want to test out the adjustments I made, the Github repo is available here: https://github.com/toreaurstadboss/Angular8FormsNgxBootstrapTesting

Monday, 22 July 2019

HttpInterceptor to cache clientside in Angular 8

I have been working with a sample project using an HttpInterceptor to cache data client-side in Angular 8, hosted on Github. The repository is here: https://github.com/toreaurstadboss/AngularHttpInterceptorSample First off, the data is delivered by a small backend using Express.js. The server.js file looks like this:
const express = require('express');
const path = require('path');
const fs = require('fs');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var towns = require('./server/routes/towns');

var app = express();

// view engine setup
app.set('views', path.join(__dirname, '/server/views'));
app.set('view engine', 'jade');

app.use(favicon(path.join(__dirname, 'dist/favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());
app.use(express.static(path.join(__dirname, 'dist')));

app.use(function (req, res, next) {
    if (!req.url.startsWith('/api')){
      req.url = '/api' + req.url;
    }
    console.log('got requst url: ' + req.url);
    console.log('Server handling request at Time:', new Date())
    next()
  })

// app.use('/', routes);
app.get('/api/towns', function(req, res) {
    let townsFileContents = fs.readFileSync(path.join(__dirname, 'server/data/townsandcities.json'), 'utf-8');
    let townsFound = JSON.parse(townsFileContents);
    console.log(townsFileContents);
    res.json(townsFound);
});

// app.get('*', function(req, res) {
//   res.sendFile(path.join(__dirname, 'dist/index.html'));
// });

// catch 404 and forward to error handler
// app.use(function(req, res, next) {
//     var err = new Error('Not Found');
//     err.status = 404;
//     next(err);
// });

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
    app.use(function(err, req, res, next) {
        res.status(err.status || 500);
        res.render('error', {
            message: err.message,
            error: err
        });
    });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
        message: err.message,
        error: {}
    });
});

var debug = require('debug')('server');

app.set('port', process.env.PORT || 3000);

app.listen(app.get('port'));

console.log('Listening on port: ' + app.get('port'));

module.exports = app;

The Angular bits are written in Typescript instead of plain Js. The HTTP interceptor itself is an exported class which caches http responses keyed on the request urls.
import { Injectable } from '@angular/core';
import { HttpInterceptor, HttpRequest, HttpHandler, HttpEvent, HttpResponse } from '@angular/common/http';
import { Observable, of } from 'rxjs';
import { tap } from 'rxjs/operators';
import { CacheService } from './cache.service';


// The @Injectable decorator is needed so Angular can resolve the CacheService constructor dependency
@Injectable()
export class CacheInterceptor implements HttpInterceptor {

    constructor(private cacheService: CacheService) {
    }

    intercept(req: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> {


        if (req.method !== 'GET') {
            this.cacheService.invalidateCache();
        }
        const cachedResponse: HttpResponse<any> = this.cacheService.get(req.url);
        if (cachedResponse !== undefined) {
         return of(cachedResponse);
        }

        return next.handle(req).pipe(
            tap(event => {
                if (event instanceof HttpResponse) {
                    this.cacheService.put(req.url, event);
                }
                return of(event);
            })
        );
    }

}

The CacheService caches responses to the client by route url as mentioned. It looks like this:
import { Injectable } from '@angular/core';
import { ICacheservice } from './icacheservice';
import { HttpResponse } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class CacheService implements ICacheservice {
  private requests = { };
  put(url: string, response: HttpResponse<any>): void {
    this.requests[url] = response;
  }
  get(url: string): HttpResponse<any> | undefined {
    return this.requests[url] ? this.requests[url] : undefined;
  }
  invalidateCache(): void {
    this.requests = {};
  }
  expireItem(url: string, timeoutInMilliseconds: number) {
    const itemFound = this.get(url);
    if (itemFound !== undefined) {
      setTimeout(() => {
        this.requests[url] = undefined;
      }, timeoutInMilliseconds);
    }

  }

  constructor() { }

}
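The expireItem method can be used to give a cached entry a time-to-live; a small sketch of how it could be called from a consumer (the url and the one-minute timeout are just examples):

import { Component } from '@angular/core';
import { CacheService } from './core/cache.service'; // path is an assumption

@Component({ selector: 'app-cache-demo', template: '' })
export class CacheDemoComponent {
  constructor(private cacheService: CacheService) {
    // Let any cached response for /api/towns expire again after one minute
    this.cacheService.expireItem('/api/towns', 60 * 1000);
  }
}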

We register the HttpInterceptor in the app module like this:
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { CacheInterceptor } from './core/cache.interceptor';
import { HTTP_INTERCEPTORS, HttpClientModule } from '@angular/common/http';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { MatTableModule } from  '@angular/material';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    HttpClientModule,
    MatTableModule,
    BrowserAnimationsModule
  ],
  providers: [
    { provide: HTTP_INTERCEPTORS, useClass: CacheInterceptor, multi: true }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }

The index page then contains some markup including a Material table using Google's Material UI. The app.component.ts handles the client side callbacks that trigger the retrieval and clearing of the data. If you click the load link, the data is retrieved and cached. If you open up the developer tools with F12, you will spot that the data is served from the cache if you hit load many times. If you hit the clear link, the data is cleared from the cache, and the next click on load will fetch data again and put it into the cache. This sample shows how you can build a client side caching feature in Angular 8 using an HTTP interceptor. To check it out yourself run: git clone https://github.com/toreaurstadboss/AngularHttpInterceptorSample.git Afterwards, run: npm run start This will start both the Angular website on port 4200 and the Node.js Express.js backend on port 3000. The package.json script for npm run start looks like this: "start": "concurrently --kill-others \"ng build --watch --no-delete-output-path\" \"nodemon server.js\" \"ng serve --proxy-config proxy.conf.json \"", You can also just enter: npm start
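For completeness, the call that goes through the interceptor is just an ordinary HttpClient request. A minimal sketch of a component doing this is shown below; the component itself is an assumption, while the /api/towns route matches the Express backend above:

import { Component } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-towns',
  template: '<button (click)="load()">Load</button>'
})
export class TownsComponent {
  towns: any[] = [];

  constructor(private http: HttpClient) { }

  load() {
    // Repeated GETs to the same url are answered from the CacheInterceptor's cache
    this.http.get<any[]>('/api/towns').subscribe(towns => this.towns = towns);
  }
}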

Monday, 15 July 2019

Http echo service written in Node.js

I have added a small repo on Github that shows how to create a simple HTTP echo service: https://github.com/toreaurstadboss/NodeJsHttpEchoServer The server.js script is this:
let http = require('http');

console.log('Attempting to start the Node.js echo server on port 8081..')
http.createServer().on('request', function(request, response){
 request.pipe(response);
}).on('close', function(){
    console.log('The Node.js echo server has been closed.')
})
.listen(8081);

console.log('Node.js echo server started on port 8081.')

This will pipe the request data back out to the response. Test it out with a command like this in a command window: curl -d "Hello dog" http://localhost:8081 Note: do not use PowerShell for this command, as PowerShell aliases curl to Invoke-WebRequest. Instead install curl, for example with Chocolatey: choco install curl
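If you prefer not to install curl, a small Node client can exercise the echo service as well. A minimal sketch, written in TypeScript and assuming the echo server above is running on port 8081:

import * as http from 'http';

// Send a request body to the echo server and print whatever it pipes back
const req = http.request({ host: 'localhost', port: 8081, method: 'POST' }, res => {
  res.pipe(process.stdout);
});
req.end('Hello dog');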

Sunday, 14 July 2019

Using Ndepend to investigate method cycles

Ndepend is a very powerful code analysis platform for .NET, and I have created a method cycle detection rule with it. The rule also detects cycles through property setters and getters. A cycle in a property or method will often cause problems, such as stack overflow exceptions due to too many recursive calls, and spotting such cycles can be hard in real-world scenarios. A classic error is to create a cycle through a property getter by calling the getter again from within itself, or doing the same in a property setter. The following C# attribute defines an Ndepend rule extracted from source code.

using System;
using NDepend.Attributes;

namespace Hemit.OpPlan.Client.Infrastructure.Aspects
{
    /// Ndepend attribute defining a code rule that detects cyclic method calls in the source code
    [CodeRule(@"// <Name>Avoid methods of a type to be in cycles. Detect also recursive property setter calls</Name>
warnif count > 0

from t in Application.Types
                 .Where(t => t.ContainsMethodDependencyCycle != null && 
                             t.ContainsMethodDependencyCycle.Value)

// Optimization: restrict the methods set.
// A method involved in a cycle necessarily has a null Level.
let methodsSuspect = t.Methods.Where(m => m.Level == null)

// hashset is used to avoid iterating again on methods already caught in a cycle.
let hashset = new HashSet<IMethod>()


from suspect in methodsSuspect
   // By commenting this line, the query matches all methods involved in a cycle.
   where !hashset.Contains(suspect)

   // Define 2 code metrics
   // - Depth of methods that are (indirectly) using the suspect method.
   // - Depth of methods that are (indirectly) used by the suspect method.
   // Note: for direct usage the depth is equal to 1.
   let methodsUserDepth = methodsSuspect.DepthOfIsUsing(suspect)
   let methodsUsedDepth = methodsSuspect.DepthOfIsUsedBy(suspect)

   // Select methods that are both using and used by methodSuspect
   let usersAndUsed = from n in methodsSuspect where 
                         methodsUserDepth[n] > 0 && 
                         methodsUsedDepth[n] > 0 
                      select n

   where usersAndUsed.Count() > 0

   // Here we've found method(s) both using and used by the suspect method.
   // A cycle involving the suspect method is found!
   let cycle = System.Linq.Enumerable.Append(usersAndUsed,suspect)


   // Fill hashset with methods in the cycle.
   // .ToArray() is needed to force the iterating process.
   let unused1 = (from n in cycle let unused2 = hashset.Add(n) select n).ToArray()

let cycleContainsSetter = (from n1 in cycle where n1.IsPropertySetter select n1).Count()

  
select new { suspect, cycle, cycleContainsSetter }",
        Active = true,
        DisplayStatInReport = true,
        DisplayListInReport = true,
        DisplaySelectionViewInReport = true,
        IsCriticalRule = false)]
    public class MethodCycleDetectionAttribute : Attribute
    {
    }
}


Note that this Ndepend code rule uses the CQLinq syntax, placed into a C# attribute class that inherits from Attribute and is itself attributed with NDepend.Attributes.CodeRuleAttribute. I had to adjust the attribute a little using the ExtensionMethodsEnumerable fix for .NET 4.6.2 mentioned in this Ndepend blog post: https://blog.ndepend.com/problem-extension-methods/ The following screenshot shows Visual Studio 2019 with Ndepend version 2019.2.5 installed. It shows the method cycle detection rule I created showing up as a rule defined through source code. Selecting that rule via the menu Ndepend => Rules Explorer, which opens the Queries and Rules Explorer, lets you not only view the rule but also quickly see the code analysis results.
One method cycle I found is shown in the following image, involving the method ProcessPasRequest of my system. This case shows how advanced Ndepend really is at code analysis: it can detect transitive cycles spanning multiple method calls. The following two screenshots show the cycle in question. In this case it was not a bug, since the cycle only happens a finite number of times for a given set of patients, but it could still go into an infinite loop if the external system involved is not working correctly. However, if that external system is down, the system I have been working on faults anyway, as it relies on that external system. With Ndepend I could spot transitive method cycles with ease. I can also spot property getter and setter cycles, since property getters and setters in C# are (compiler generated) methods anyway. I managed to get Ndepend back into the main menu of Visual Studio 2019 using this extension:
https://marketplace.visualstudio.com/items?itemName=Evgeny.RestoreExtensions

Loading tweets from Twitter with Node.Js and Express.Js

This article presents a Git repository I created that fetches tweets from Twitter using the Twitter API v1.1 with Node.js and Express.js. First off, Twitter now requires all API calls to be authenticated. To call the Twitter API, there are four values you must retrieve for the app you have created at https://developer.twitter.com/en/apps. These are:
  • Consumer API key
  • Consumer API secret
  • Access token
  • Access token secret
The Git repo I created is available here: https://github.com/toreaurstadboss/SimpleTwitterFeedExpressJs The solution is started using: npm run start When working with the solution, use the nodemon command instead. The server.js file is the primary part. Here we create the Express.js server object for Node, which will host our web service calls for the tweets. We use the following npm libraries:
  • express (web server)
  • fs (FileStream for Node)
  • https (for secured communication)
  • Twitter (for Twitter API v1.1 communication)
The following Js script is the server.js file:
var express = require("express");
var request = require("request");
let config = require("./config");
const https = require("https");
const fs = require("fs");
var url = require("url");
var Twitter = require("twitter");

var key = fs.readFileSync(__dirname + "/certs/selfsigned.key");
var cert = fs.readFileSync(__dirname + "/certs/selfsigned.crt");
const port = 3000;
var options = {
  key: key,
  cert: cert
};

var app = express();

var client = new Twitter(config);

app.get("/", function(req, res) {
  res.sendFile(__dirname + "/index.html");
});

function getTweets(userName, tweetCount, res) {
  var params = {
    q: userName,
    count: tweetCount
  };

  var responseFromTwitter = {};

  let tweetSearched = false;

  // Initiate your search using the above parameters
  client.get("search/tweets", params, function(err, data, response) {
    console.log("Found # Tweets: " + data.statuses.length);
    res.charset = "utf-8";
    res.append("Content-Type", "application/json; charset=utf-8");

    // If there is no error, proceed
    if (!err) {
      res.end(JSON.stringify(data.statuses));
    }
  });
}

app.get("/tweets/:username", function(req, res) {
  let tweetsFound = getTweets(req.params.username, 10, res);
  console.log(tweetsFound);
});

app.use("/css", express.static(__dirname + "/node_modules/bootstrap/dist/css"));

var server = https.createServer(options, app);
server.listen(port, () => {
  console.log("server starting on port: " + port);
});
Make note that this sample uses HTTPS communication with self-signed certificates. I created these using this command inside Windows Subsystem for Linux: sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./selfsigned.key -out selfsigned.crt This creates the key pair consisting of the self-signed key and the certificate. The HTTPS server is created using this:
var server = https.createServer(options, app);
server.listen(port, () => {
  console.log("server starting on port: " + port);
});
The bulk of the code retrieving the tweets is the method getTweets. We pass in the response object and use the end method of Express to return the found JSON data as a string. I have not done much error handling here. We return the data as JSON if we found some tweets for the given username. There is one route defined in Express here: app.get("/tweets/:username", function(req, res) { let tweetsFound = getTweets(req.params.username, 10, res); console.log(tweetsFound); }); The method getTweets then retrieves data from Twitter using an HTTPS call to the search/tweets endpoint:
function getTweets(userName, tweetCount, res) {
  var params = {
    q: userName,
    count: tweetCount
  };

  var responseFromTwitter = {};

  let tweetSearched = false;

  // Initiate your search using the above parameters
  client.get("search/tweets", params, function(err, data, response) {
    console.log("Found # Tweets: " + data.statuses.length);
    res.charset = "utf-8";
    res.append("Content-Type", "application/json; charset=utf-8");

    // If there is no error, proceed
    if (!err) {
      res.end(JSON.stringify(data.statuses));
    }
  });
}

To test out this code, you can enter the following command to clone the repository.
git clone https://github.com/toreaurstadboss/SimpleTwitterFeedExpressJs.git
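
Note that the config module required at the top of server.js holds the four credential values listed earlier and is not included in the listings above. A minimal sketch, with placeholder values and the property names the twitter npm package expects (to the best of my knowledge):

// config.js (sketch) - never commit real credentials to source control
module.exports = {
  consumer_key: "YOUR_CONSUMER_API_KEY",
  consumer_secret: "YOUR_CONSUMER_API_SECRET",
  access_token_key: "YOUR_ACCESS_TOKEN",
  access_token_secret: "YOUR_ACCESS_TOKEN_SECRET"
};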

Tuesday, 11 June 2019

Deselecting all options in HTML SELECT using Jquery plugin

I created my first jQuery plugin today. This plugin deselects all option elements of a SELECT, after a given timeout, when an associated checkbox (the $(this) object inside the plugin) is unchecked. It can be used together with the data-target attribute, which will automatically select the first option when the checkbox is unchecked. This plugin, however, deselects all options after a given timeout in milliseconds using the setTimeout function. It keeps a pointer to the select element by applying the $() function on a passed-in DOM element identifier. Example usage:
     <script type="text/javascript">
                $(function() {
                    $("[name=@SearchParametersDataContract.SomeDOMElementCheckbox]").deselectAfterSelectCheckbox("#@SomeDOMElementSelect",100);
                });
            </script>
The jQuery plugin looks like this:

        <script type="text/javascript">
            $.fn.deselectAllAfterUnselectCheckbox = function(targetSelect, timeout) {
                $(this).on("click",
                    function() {
                        if (timeout === null || timeout === undefined)
                            timeout = 50;
                        var $targetSelect = $(targetSelect);
                        if (this.checked === false) {
                            setTimeout(function() {
                                // Clear the selected state on every option of the target select
                                $targetSelect.children("option").prop("selected", false);
                            }, timeout);
                        }
                    });
            };
        </script>

Wednesday, 5 June 2019

Random selection of a HTML SELECT element

Just made a JsFiddle of a random selection control for the HTML SELECT tag. The JsFiddle is available here: Random selecting combobox (JsFiddle)
  • The control uses Bootstrap for styling and needs the jQuery library for the JavaScript bit.
  • To turn on hiding the selected option, set the data-hide-selectedoption HTML 5 data attribute to "true".

Random selecting HTML SELECT control


<!-- 
  Bootstrap docs: https://getbootstrap.com/docs
-->

<div class="container">

  <div class="input-group mb-3">
    <div class="input-group-prepend">
      <span class="input-group-text" id="basic-addon1"><img style="cursor:pointer" title="Choose random value" alt="Choose random value" width="24" height="24" src="https://cdn4.iconfinder.com/data/icons/ionicons/512/icon-shuffle-512.png" /></span>
    </div>
    <select data-hide-selectedoption="true" disabled id="ComboBoxIceCreamBar" class="form-control" placeholder="Username" aria-label="Username" aria-describedby="basic-addon1">
 <option selected hidden>Choose icecream</option>
 <option value="Vanilla">Vanilla</option>
 <option value="Strawberry">Strawberry</option>
 <option value="Pineapple">Pineapple</option>
 <option value="Mint chocolate">Mint chocolate</option>
 <option value="Mango ice cream">Mango ice cream</option>
 <option value="Rasperry Ripple">Raspberry Ripple</option>
 <option value="Pistachio">Pistachio</option>
 <option value="Cookie dough">Cookie dough</option>
 <option value="Lemon">Lemon</option>
 <option value="Mango">Mango</option>
 </select>
    <!-- <input type="text" class="form-control" placeholder="Username" aria-label="Username" aria-describedby="basic-addon1"> -->
  </div>

</div>

<script type="text/javascript">
  $("#basic-addon1").on("click", function() {
    var selectcontrol = $(this).parent().siblings('select:first');
    var optionslength = selectcontrol.children('option').length;
    var randomIndex = Math.max(1, parseInt(Math.random() * optionslength));
  var optionElementToSelect = selectcontrol.children('option')[randomIndex];
    optionElementToSelect.selected = true;
  //debugger
  if (selectcontrol.data('hide-selectedoption')){
       optionElementToSelect.innerText = 'Selected value hidden.';   
  } 
  console.log("Selected ice cream: " + selectcontrol.val());
  });

</script>

Friday, 10 May 2019

Adding only untracked files in Git repo using Powershell

Are you a .NET developer who mainly uses Windows and PowerShell rather than, say, Git Bash? The following procedure creates an aliased function for adding untracked files in a Git repository. Open your PowerShell $profile file (if it is missing, you can create it with: New-Item $Profile): notepad $Profile Now add this PowerShell function:
function AddUntracked-Git() {
 &git ls-files -o --exclude-standard | select | foreach { git add $_ }
}
Save the $profile file and reload it into PowerShell with: . $profile This is similar to the source command in *nix environments. So the next time you, as a developer using PowerShell on Windows against a Git repo, want to add just the untracked files, you can run: AddUntracked-Git This follows the PowerShell convention of verb-noun naming.

Friday, 3 May 2019

Powershell - Appending folder to path

The following PowerShell function can be used to append a folder to the PATH environment variable for the current command-line session.

function AppendPath($filePath) {
 $path = [Environment]::GetEnvironmentVariable("Path")
 $path += ";" + $filePath
 [Environment]::SetEnvironmentVariable("Path", $path)
 Write-Host $path 
}

To call this function, just run the Powershell command:
AppendPath "c:\temp"
You can add this function to your $profile file for easy availability. Run . $profile to reload your $profile file. The call above appends the folder "c:\temp" to your PATH environment variable. To view your PATH environment variable, just enter:
echo $env:path
It is not required, but you can also use Chocolatey's refreshenv script to force an update of the environment variables if PATH still does not appear updated.

Wednesday, 1 May 2019

Regenerating precompiled views in Entity Framework

This article will present a solution to regenerate precompiled views in Entity Framework. Precompiled views can have a dramatic effect on the startup time of your DbContext / ObjectContext, especially the time to execute the first query against the database. First off, the following class can generate these precompiled views:

using System;
using System.Collections.Generic;
using System.Data.Entity.Core.Mapping;
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;

namespace PrecompiledViewGenerator
{
    /// <summary>
    /// Capable of generating pre compiled views that EF can use for quicker startup time when application domains starts.
    /// </summary>
    /// <typeparam name="TDbContext"></typeparam>
    /// <param name="dbContext"></param>
    /// <remarks>See https://github.com/ErikEJ/EntityFramework6PowerTools/tree/community/src/PowerTools Github page for the source code of EF Power tools.</remarks>
    /// <returns>A string containing the precompiled views that can be written to file or database for later use to gain optimized startup speeds of EF in application domains.</returns>
    public class EntityFrameworkPrecompiledViewGenerator
    {
        /// <summary>
        /// Generates pre compiled views from a db context. Uses EF Powertools runtime T4 template (.tt file) for precompiled view generation of views for EF 6 ObjectContext for C#.
        /// </summary>
        /// <typeparam name="TDbContext"></typeparam>
        /// <param name="dbContext"></param>
        /// <param name="viewContainerSuffix">The suffix to apply in the generated file containing the precompiled views</param>
        /// <remarks>See https://github.com/ErikEJ/EntityFramework6PowerTools/tree/community/src/PowerTools Github page for the source code of EF Power tools.</remarks>
        /// <returns>A string containing the precompiled views that can be written to file or database for later use to gain optimized startup speeds of EF in application domains.</returns>
        public string GeneratePrecompiledViews<TDbContext>(TDbContext dbContext, string viewContainerSuffix) where TDbContext : IObjectContextAdapter
        {
            if (string.IsNullOrEmpty(viewContainerSuffix))
                throw new ArgumentNullException(nameof(viewContainerSuffix));
            var objectContext = (dbContext as IObjectContextAdapter)?.ObjectContext; 
            if (objectContext == null)
                throw new ArgumentNullException(nameof(dbContext));
            var viewGenerator = new CSharpViewGenerator();
            var mappingCollection = (StorageMappingItemCollection) objectContext.MetadataWorkspace.GetItemCollection(DataSpace.CSSpace);
            if (mappingCollection == null)
                throw new ArgumentNullException(nameof(dbContext));
            var listOfEdmSchemaErrors = new List<EdmSchemaError>();
            var views = mappingCollection.GenerateViews(listOfEdmSchemaErrors);
            listOfEdmSchemaErrors.ForEach(error =>
            {
                if (error.Severity == EdmSchemaErrorSeverity.Error)
                    throw new ArgumentOutOfRangeException($"An error occurred while trying to generate views for {dbContext.GetType().Name}. The error was: {error.ToString()}");
            });
            viewGenerator.ContextTypeName = dbContext.GetType().FullName;
            viewGenerator.MappingHashValue = mappingCollection.ComputeMappingHashValue();
            viewGenerator.ViewContainerSuffix = viewContainerSuffix;
            viewGenerator.Views = views;
            string precompiledViews = viewGenerator.TransformText();
            return precompiledViews;
        }
    }
}



The following runtime text template (.T4) is used to generate the precompiled views:

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated by a tool.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

using System.Data.Entity.Infrastructure.MappingViews;

[assembly: DbMappingViewCacheTypeAttribute(
    typeof(<#= ContextTypeName #>),
    typeof(Edm_EntityMappingGeneratedViews.ViewsForBaseEntitySets<#= ContextTypeName #>))]

namespace Edm_EntityMappingGeneratedViews
{
    using System;
    using System.CodeDom.Compiler;
    using System.Data.Entity.Core.Metadata.Edm;

    /// <summary>
    /// Implements a mapping view cache.
    /// </summary>
    [GeneratedCode("Entity Framework 6 Power Tools", "0.9.2.0")]
    internal sealed class ViewsForBaseEntitySets<#= ViewContainerSuffix #> : DbMappingViewCache
    {
        /// <summary>
        /// Gets a hash value computed over the mapping closure.
        /// </summary>
        public override string MappingHashValue
        {
            get { return "<#= MappingHashValue #>"; }
        }

        /// <summary>
        /// Gets a view corresponding to the specified extent.
        /// </summary>
        /// <param name="extent">The extent.</param>
        /// <returns>The mapping view, or null if the extent is not associated with a mapping view.</returns>
        public override DbMappingView GetView(EntitySetBase extent)
        {
            if (extent == null)
            {
                throw new ArgumentNullException("extent");
            }

            var extentName = extent.EntityContainer.Name + "." + extent.Name;
<#
    var index = 0;
    foreach (var view in Views)
    {
#>

            if (extentName == "<#= view.Key.EntityContainer.Name + "." + view.Key.Name #>")
            {
                return GetView<#= index #>();
            }
<#
        index++;
    }
#>

            return null;
        }
<#
    index = 0;
    foreach (var view in Views)
    {
#>

        /// <summary>
        /// Gets the view for <#= view.Key.EntityContainer.Name + "." + view.Key.Name #>.
        /// </summary>
        /// <returns>The mapping view.</returns>
        private static DbMappingView GetView<#= index #>()
        {
            return new DbMappingView(@"<#= view.Value.EntitySql #>");
        }
<#
        index++;
    }
#>
    }
}
<#+
    public string ContextTypeName { get; set; }
 public string ViewContainerSuffix { get; set; }
    public string MappingHashValue { get; set; }
    public dynamic Views { get; set; }
#>

This T4 file is compiled into the class CSharpViewGenerator. Our DbContext can now check the mapping hash value of the precompiled views file loaded in the assembly and compute it again to quickly assert whether the database model is still in sync with the precompiled views. The following code both checks that the precompiled views are in sync and, if they are not, regenerates the precompiled views file contents, which the developer can then copy from Notepad back into the precompiled views file. Perhaps not an elegant solution, but it lets the developer easily check and keep the precompiled views file updated and in sync. And you do not need to install the EF Power Tools extension either, as I use the .tt file and much of the source code from there, which is part of the "Generate views" command anyway! A sample DbContext is then:
using System;
using System.Data.Entity;
using System.Data.Entity.Core;
using System.Data.Entity.Core.Mapping;
using System.Data.Entity.Core.Metadata.Edm;
using System.Diagnostics;
using System.IO;
using System.Runtime.CompilerServices;
using Edm_EntityMappingGeneratedViews;

using PrecompiledViewGenerator;

namespace SomeAcme.SomeRegistry.Data
{
    public class SomeDatabaseMigrationsFactory : DatabaseMigrationsFactory<SomeDatabase>
    {
    }

    public class SomeDatabase : SomeDatabaseBaseClass
    {
        public DbSet<SomeEntity> SomeEntity { get; set; }

        public SomeDatabase(ISomeDependency dep)
            : base(dep)
        {
            var someEntityPrecompiledViewsMapping = new ViewsForBaseEntitySetsSomeEntityDatabase();
            var mappingCollection = (StorageMappingItemCollection)ObjectContext.MetadataWorkspace.GetItemCollection(DataSpace.CSSpace);
            string mappingHashValue =  mappingCollection.ComputeMappingHashValue();
            if (someEntityPrecompiledViewsMapping.MappingHashValue != mappingHashValue)
            {
                var precompiledViewGenerator = new EntityFrameworkPrecompiledViewGenerator();
                string precompiledViewsContents = precompiledViewGenerator.GeneratePrecompiledViews(this, this.GetType().Name);
                string viewFileForMainDb = "SomeEntityDatabase.Views.cs";
                string temporaryPrecompiledViewFile = Path.Combine(Path.GetTempPath(), viewFileForMainDb);
                try
                {
                    File.WriteAllText(temporaryPrecompiledViewFile, precompiledViewsContents);
                    Process.Start("c:\\windows\\notepad.exe", temporaryPrecompiledViewFile);
                }
                catch (Exception err)
                {
                    throw new Exception("An error occured while trying to regenerate precompiled views for EF since the database changed. Error is: " + err);
                }
                throw new EntityCommandCompilationException($"The precompiled views file is not in sync with the database any longer. Replace the file {viewFileForMainDb} with the generated new contents!");
            }
        }
    }
}

The overall execution of code is the following:
  • In the DbContext constructor, check that the computed mapping hash value of the CSSpace storage mapping collection matches the MappingHashValue your precompiled views file contains
  • If the hash values do not agree, the precompiled views file must be regenerated. The new contents are generated and written to a temporary file.
  • Notepad or similar is launched, telling the developer to replace the precompiled views file contents with the new contents.
  • An EntityCommandCompilationException is thrown, since this is the same type of exception EF throws when the precompiled views no longer match the model.
  • The developer replaces the contents and rebuilds the solution.
  • The next time, the startup time of EF should again be significantly reduced and everything works, since our DbContext and precompiled views are in sync again.
Below is a sample of an exception thrown when my DbContext was not in sync with the precompiled views file:


System.Data.Entity.Core.EntityCommandCompilationException
  HResult=0x8013193B
  Message=An error occurred while preparing the command definition. See the inner exception for details.
  Source=EntityFramework
  StackTrace:
   at System.Data.Entity.Core.EntityClient.Internal.EntityCommandDefinition..ctor(DbProviderFactory storeProviderFactory, DbCommandTree commandTree, DbInterceptionContext interceptionContext, IDbDependencyResolver resolver, BridgeDataReaderFactory bridgeDataReaderFactory, ColumnMapFactory columnMapFactory)
   at System.Data.Entity.Core.EntityClient.Internal.EntityProviderServices.CreateDbCommandDefinition(DbProviderManifest providerManifest, DbCommandTree commandTree, DbInterceptionContext interceptionContext)
   at System.Data.Entity.Core.Objects.Internal.ObjectQueryExecutionPlanFactory.CreateCommandDefinition(ObjectContext context, DbQueryCommandTree tree)
   at System.Data.Entity.Core.Objects.Internal.ObjectQueryExecutionPlanFactory.Prepare(ObjectContext context, DbQueryCommandTree tree, Type elementType, MergeOption mergeOption, Boolean streaming, Span span, IEnumerable`1 compiledQueryParameters, AliasGenerator aliasGenerator)
   at System.Data.Entity.Core.Objects.ELinq.ELinqQueryState.GetExecutionPlan(Nullable`1 forMergeOption)
   at System.Data.Entity.Core.Objects.ObjectQuery`1.<>c__DisplayClass7.<GetResults>b__6()
   at System.Data.Entity.Core.Objects.ObjectContext.ExecuteInTransaction[T](Func`1 func, IDbExecutionStrategy executionStrategy, Boolean startLocalTransaction, Boolean releaseConnectionOnSuccess)
   at System.Data.Entity.Core.Objects.ObjectQuery`1.<>c__DisplayClass7.<GetResults>b__5()
   at System.Data.Entity.SqlServer.DefaultSqlExecutionStrategy.Execute[TResult](Func`1 operation)
   at System.Data.Entity.Core.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption)
   at System.Data.Entity.Core.Objects.ObjectQuery`1.<System.Collections.Generic.IEnumerable<T>.GetEnumerator>b__0()
   at System.Data.Entity.Internal.LazyEnumerator`1.MoveNext()
   at System.Linq.Enumerable.Single[TSource](IEnumerable`1 source)
   at Program.cs:line 164

Inner Exception 1:
MappingException: The current model no longer matches the model used to pre-generate the mapping views, as indicated by the ViewsForBaseEntitySetsabf8c33ac61e42fc601fe7446b41eaf48c7577efe6d6e17ccccc2b434793c28e.MappingHashValue property. Pre-generated mapping views must be either regenerated using the current model or removed if mapping views generated at runtime should be used instead. See http://go.microsoft.com/fwlink/?LinkId=318050 for more information on Entity Framework mapping views.

 

Tuesday, 30 April 2019

Finding databases with biggest tables in SQL Server

I created a T-SQL script today for SQL Server that lists the tables with the most rows, looping through all databases.

DECLARE @currentDB AS VARCHAR(100)
DECLARE @mrsdb_sqlscript AS NVARCHAR(500)
DECLARE db_cursor CURSOR FOR 
SELECT name from sys.databases where state_desc <> 'offline' and name not in ('master', 'tempdb', 'model', 'msdb') order by name 
DROP TABLE IF EXISTS #largestTables 
CREATE TABLE #largestTables (
 DatabaseName VARCHAR(100),
 SchemaName VARCHAR(200), 
 TableName VARCHAR(200), 
 TotalRowCount INT 
)

OPEN db_cursor 
FETCH NEXT from db_cursor into @currentDB 

WHILE @@FETCH_STATUS = 0 
BEGIN
 SET @mrsdb_sqlscript = N'INSERT INTO #largestTables 
  SELECT ''' + @currentDB + ''' , SCHEMA_NAME(schema_id) AS [SchemaName],
 [Tables].name AS [TableName],
 SUM([Partitions].[rows]) AS [TotalRowCount]
 FROM ' + @currentDB + '.sys.tables AS [Tables]
 JOIN ' + @currentDB + '.sys.partitions AS [Partitions]
 ON [Tables].[object_id] = [Partitions].[object_id]
 AND [Partitions].index_id IN ( 0, 1 )
 GROUP BY SCHEMA_NAME(schema_id), [Tables].name';

 PRINT 'Looking for largest table in the following DB: ' + @currentDB 
 exec sp_executesql @mrsdb_sqlscript 

 FETCH NEXT FROM db_cursor into @currentDB
END

CLOSE db_cursor 
DEALLOCATE db_cursor 

SELECT TOP 100 * FROM #largestTables 
ORDER BY TotalRowCount DESC, Schemaname ASC


The SQL script above uses a temporary table and a database cursor to loop through all databases, executing a SQL statement that inserts data into the temporary table. After the cursor has looped through its data set, the result in the temporary table is presented to the monitoring DBA. Use it to spot where you have a lot of data in the databases of the database server you are working with!

Friday, 29 March 2019

Quickly search your Git log using wildcards and shell function

Imagine you have a Git repository with a long history (a long log) and you want to find a changeset whose comment contains a word or some words, so that you can get the commit SHA and quickly cherry-pick that commit onto your target branch. This is easier using Azure DevOps or some other tool, but how do you quickly search the log from the command line? First we define a search log shell function inside our Git config file .gitconfig:

searchlog = "!f() { git --no-pager log --color-words --all --decorate --graph -i --grep \"$1\";  }; f"

Now we can search the log quickly using a wildcard:
git searchlog "visning.*av.*nåtidslinje.*"
We separate our search words with the .* regex syntax, so we can quickly search our Git log for multiple keywords and get a nice presentation of the commits whose messages match. And now we have a quick way of finding our Git commits through the log and the command line, overcoming the needle-in-the-haystack scenario.