
Monday, 23 September 2024

Looking up gMSA accounts in Active Directory

A short article with tips on how to find gMSA (group Managed Service Account) accounts in Active Directory (AD). First, import the ActiveDirectory module. Then you can run the following snippet to find gMSA accounts when you know part of the name:


Import-Module ActiveDirectory
Get-ADObject -Filter {ObjectClass -eq "msDS-GroupManagedServiceAccount" -and Name -Like '*SomeGmsa*'} -Properties DistinguishedName, SamAccountName | Select DistinguishedName, SamAccountName


This yields the results:


DistinguishedName                                                   SamAccountName  
-----------------                                                   --------------  
CN=gMSA1DVSomeGmsa,CN=Managed Service Accounts,DC=someacme,DC=org        MSA1DVSomeAcmeP$    
CN=gMSA1_gMSA1DGmsaPT,CN=Managed Service Accounts,DC=someacme,DC=org     MSA1gMSA1DVSomeAcme$
CN=gMSA1_DVSomeGmsaPT,CN=Managed Service Accounts,DC=someacme,DC=org     MSA1DVSomeAcmePT$   

You can search for gMSA users in AD like this:


Import-Module ActiveDirectory 

Get-ADServiceAccount -Filter "Name -like '*SomeGmsa*'"


This should yield a list of gMSA accounts matching the given name.

You can also ask for all properties of gMSA accounts by using -Properties with *:



Import-Module ActiveDirectory
Get-ADObject -Filter {ObjectClass -eq "msDS-GroupManagedServiceAccount" -and Name -Like '*SomeGmsa*'} -Properties *




Please note that the gMSA account must also be installed on the server where it will be used! And to use the ActiveDirectory module, the RSAT Windows feature must be installed. From an elevated PowerShell console you can run:


# Remember to install the RSAT-AD-Powershell module 

Add-WindowsFeature RSAT-AD-PowerShell

Install-ADServiceAccount 'SomeGmsa$'


Import-Module ActiveDirectory
Get-ADObject -Filter {ObjectClass -eq "msDS-GroupManagedServiceAccount"  -and Name -Like '*SomeGmsa*' }  -Properties DistinguishedName,  SamAccountName | Select DistinguishedName, SamAccountName 

Note that your gMSA user has a SamAccountName suffixed with '$'. This can be set up in IIS as the app pool identity for your application. In this example the username will be:


MYDOMAIN\SomeGmsa$

The password of the gMSA service account is actually left empty! Instead, the service account is installed as shown above using the Install-ADServiceAccount cmdlet.
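
As a sketch of the IIS side (the app pool name SomeAppPool below is hypothetical), the identity can be assigned from PowerShell, leaving the password blank since Windows manages and rotates it:


Import-Module WebAdministration

# Assign the gMSA as the app pool identity; identityType 3 = SpecificUser.
# The password stays empty - Windows retrieves it from AD automatically.
Set-ItemProperty IIS:\AppPools\SomeAppPool -Name processModel -Value @{ userName = 'MYDOMAIN\SomeGmsa$'; password = ''; identityType = 3 }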

Wednesday, 27 March 2024

Importing Json File to SQL Server into a variable

A short article today on how to import a JSON file into a variable in SQL Server, which can then be used to insert it into a column of type NVARCHAR(MAX) in a table. The maximum size of NVARCHAR(MAX) is 2 GB, so you can import large JSON files using this data type. If the JSON is small, below 4000 characters, use for example NVARCHAR(4000) instead.

Here is a SQL script that imports the JSON file using OPENROWSET and bulk import. We also pass in the path to the folder containing the JSON file, which sits in the same folder as the .sql script. Note that the variable $(FullScriptDir) is passed in via a .bat file (shown further below), and we expect the .json file to be in the same folder as the .bat file. You could instead provide a full path to a .json file and skip the .bat file, but loading the .json file from the same folder as the .sql file is convenient if you want to copy the .sql and .json files to another server without having to adjust a full path. SQL script import_json_file_openrowset.sql:


DECLARE @JSONFILE VARCHAR(MAX); 

SELECT @JSONFILE = BulkColumn
FROM OPENROWSET (BULK '$(FullScriptDir)\top-posts.json', SINGLE_CLOB) AS j;

PRINT 'JsonFile contents: ' + @JSONFILE

IF (ISJSON(@JSONFILE)=1) PRINT 'It is valid Json';


The .bat file, runsqlscript.bat, passes the current folder as a variable to the SQL script:


@set FullScriptDir=%CD%
sqlcmd -S .\SQLEXPRESS  -i import_json_file_openrowset.sql


This outputs:


sqlcmd -S .\SQLEXPRESS  -i import_json_file_openrowset.sql
JsonFile contents: [
   {
      "Id":6107,
      "Score":176,
      "ViewCount":155988,
      "Title":"What are deconvolutional layers?",
      "OwnerUserId":8820
   },
   {
      "Id":155,
      "Score":164,
      "ViewCount":25822,
      "Title":"Publicly Available Datasets",
      "OwnerUserId":227
   }
]
It is valid Json


With the @JSONFILE variable you can do whatever you want, such as inserting it into a column in a new row of a table.
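
For example, here is a minimal sketch (the table JsonDocuments is hypothetical) that stores the imported JSON in a new row, run in the same batch as the script above:


-- Hypothetical target table with an NVARCHAR(MAX) column for the JSON payload
CREATE TABLE JsonDocuments (Id INT IDENTITY PRIMARY KEY, JsonData NVARCHAR(MAX));

INSERT INTO JsonDocuments (JsonData) VALUES (@JSONFILE);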
Importing json from a string directly using OPENJSON

It is also possible to import the JSON directly from a string variable like this:


DECLARE @JSONSTRINGSAMPLE VARCHAR(MAX) 

SET @JSONSTRINGSAMPLE = N'[
 {
    "Id": 2334,
    "Score": 4.3,
    "Title": "Python - Used as scientific tool for graphing"
 },
{
    "Id": 2335,
    "Score": 5.2,
    "Title": "C# : Math and physics programming"
 }
]';

SELECT * FROM OPENJSON (@JSONSTRINGSAMPLE) WITH (
    Id INT,
    Score REAL,
    Title NVARCHAR(100)
)
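
Each object in the JSON array becomes a row of the result set; the WITH clause maps the JSON properties to typed columns.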


Sunday, 25 April 2021

Making NUnit tests run in Team City for NUnit 3.x

Team City has several bugs when it comes to running NUnit tests. The following guide shows how you can prepare the Team City build agent to run NUnit 3.x tests. First we need to install the NUnit console runner; tips around this were found in a Stack Overflow thread and in the Team City documentation (both linked at the end of this post). First off, add two Command line steps and put one of the two commands below into each step - these steps can run at the start of the pipeline in Team City.


%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install NUnit.Console -version 3.10.0 -o packages -ExcludeVersion -OutputDirectory %system.teamcity.build.tempDir%\NUnit

%teamcity.tool.NuGet.CommandLine.DEFAULT%\tools\nuget.exe install NUnit.Extension.NUnitProjectLoader -version 3.6.0 -o packages
The following NuGet packages for NUnit were used:
  • NUnit 3.2.0
  • NUnit.ConsoleRunner 3.10.0
  • NUnit.Extension.NUnitProjectLoader 3.6.0
  • NUnit.Extension.TeamCityEventListener 1.0.7
  • NUnit3TestAdapter 3.16.1
Inside the NUnit runner type step, also configure the NUnit console path. Use this path (it should match the console runner version you installed, here 3.10.0):

packages\NUnit.ConsoleRunner.3.10.0\tools\nunit3-console.exe

For the test assemblies, make sure you use a path like this:

**\bin\%BuildConfiguration%\*.Test.dll

Add the %BuildConfiguration% parameter and set it to: Debug
More tips here: https://stackoverflow.com/questions/57953724/nunit-teamcity-process-exited-with-code-4

And here:
https://stackoverflow.com/questions/36996564/nunit-3-2-1-teamcity-could-not-load-file-or-assembly-nunit-framework
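
For reference, a local smoke test of the console runner outside Team City can look like this (the test assembly path is hypothetical); the --teamcity flag turns on TeamCity service messages and relies on the NUnit.Extension.TeamCityEventListener package listed above:


packages\NUnit.ConsoleRunner.3.10.0\tools\nunit3-console.exe SomeAcme.Tests\bin\Debug\SomeAcme.Tests.dll --teamcity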

Wednesday, 10 June 2020

Creating a self signed certificate with Powershell and preparing it for IIS

I just wrote an automated routine in PowerShell to create a self-signed certificate and prepare it for IIS.
#Install-Module -Name 'WebAdministration'

Import-Module -Name WebAdministration

function AddSelfSignedCertificateToSSL([String]$dnsname, [String]$siteName='Default Web Site'){
 # Create the certificate in the personal machine store
 $newCert = New-SelfSignedCertificate -DnsName $dnsname -CertStoreLocation Cert:\LocalMachine\My
 # Attach the certificate to the https binding of the given site
 $binding = Get-WebBinding -Name $siteName -Protocol "https"
 $binding.AddSslCertificate($newCert.GetCertHashString(), "My")
 $newCertThumbprint = $newCert.Thumbprint
 $sourceCertificate = $('cert:\localmachine\my\' + $newCertThumbprint)

 # Also add the certificate to the trusted root store so it is trusted locally
 $store = new-object system.security.cryptography.X509Certificates.X509Store -argumentlist "Root", LocalMachine
 $store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]"ReadWrite")
 $store.Add($newCert)
 $store.Close()
 return $newCertThumbprint
}

Write-Host Installing self-signed certificate Cert:\LocalMachine\My and Cert:\LocalMachine\Root ..

$certinstalledThumbprint = AddSelfSignedCertificateToSSL 'someacmeapp.somedomain.net'

Write-Host Added certificate $certinstalledThumbprint to Cert:\LocalMachine\My and Cert:\LocalMachine\Root and set this up as the SSL certificate on Default Web Site.
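
To verify that the certificate landed in the trusted root store, you can look it up by the thumbprint returned above:


Get-ChildItem Cert:\LocalMachine\Root | Where-Object { $_.Thumbprint -eq $certinstalledThumbprint }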




Tuesday, 7 April 2020

Writing to Event Log in .Net Core (Tested with .Net Core 3.1)

Writing to the Event Log in .Net Core first requires a NuGet package installation:

Install-Package Microsoft.Extensions.Logging.EventLog -Version 3.1.2

Note that the correct version to install depends on the version of .Net Core you are running. The package above was tested OK with .Net Core 3.1. Then we need to add the EventLog provider. In the Program class we can do this like so:
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.EventLog;

namespace SomeAcme.SomeApi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureLogging((hostingContext, logging) =>
                {
                    logging.ClearProviders();
                    logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
                    logging.AddEventLog(new EventLogSettings()
                    {
                        SourceName = "SomeApi",
                        LogName = "SomeApi",
                        Filter = (x, y) => y >= LogLevel.Warning
                    });
                    logging.AddConsole();
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}
And our appsettings.json file includes setup:
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=.\\SQLEXPRESS;Database=SomeApi;Trusted_Connection=True;MultipleActiveResultSets=true"
  },
  **"Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },**
  "AllowedHosts": "*"
}
We can then inject the ILogger instance into a controller:
using SomeAcme.SomeApi.SomeModels;
using SomeAcme.SomeApi.Services;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using System.Collections.Generic;

namespace SomeAcme.SomeApi.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class SomeController : ControllerBase
    {
        private readonly ISomeService _someService;
        private readonly ILogger<SomeController> _logger;

        public SomeController(ISomeService someService, ILogger<SomeController> logger)
        {
            _someService = someService;
            _logger = logger;
        }
        // GET: api/Some
        [HttpGet]
        public IEnumerable<SomeModel> GetAll()
        {
            return _someService.GetAll();
        }
    }
}

For more advanced use, add a global exception handler inside the Configure method of the Startup class in .Net Core:
        //Set up a global error handler for handling Unhandled exceptions in the API by logging it and giving a HTTP 500 Error with diagnostic information in Development and Staging
        app.UseExceptionHandler(errorApp =>
        {
            errorApp.Run(async context =>
            {
                context.Response.StatusCode = 500; // or another Status accordingly to Exception Type
                context.Response.ContentType = "application/json";

                var status = context.Features.Get<IStatusCodeReExecuteFeature>();

                var error = context.Features.Get<IExceptionHandlerFeature>();
                if (error != null)
                {
                    var ex = error.Error;
                    string exTitle = "Http 500 Internal Server Error in SomeAcme.SomeApi occurred. The unhandled error is: ";
                    string exceptionString = !env.IsProduction() ? (new ExceptionModel
                    {
                        Message = exTitle + ex.Message,
                        InnerException = ex?.InnerException?.Message,
                        StackTrace = ex?.StackTrace,
                        OccuredAt = DateTime.Now,
                        QueryStringOfException = status?.OriginalQueryString,
                        RouteOfException = status?.OriginalPath
                    }).ToString() : new ExceptionModel()
                    {
                        Message = exTitle + ex.Message,
                        OccuredAt = DateTime.Now
                    }.ToString();
                    try
                    {
                        _logger.LogError(exceptionString);
                    }
                    catch (Exception err)
                    {
                        Console.WriteLine(err);
                    }
                    await context.Response.WriteAsync(exceptionString, Encoding.UTF8);
                }
            });
        });

And finally a helper model to pack our exception information into.

using System;
using Newtonsoft.Json;

namespace SomeAcme.SomeApi.Models
{
    /// <summary>
    /// Exception model for generic useful information to be returned to client caller
    /// </summary>
    public class ExceptionModel
    {
        public string Message { get; set; }
        public string InnerException { get; set; }
        public DateTime OccuredAt { get; set; }
        public string StackTrace { get; set; }
        public string RouteOfException { get; set; }
        public string QueryStringOfException { get; set; }

        public override string ToString()
        {
            return JsonConvert.SerializeObject(this);
        }
    }
}

The tricky bit here is to get hold of a logger inside the Startup class. You can inject ILoggerFactory for this and just do:

_logger = loggerFactory.CreateLogger<Startup>();

Where _logger is used in the global error handler above. Now, back to the question of how to write to the event log: look at the source code for SomeController above. We inject ILogger there; just use that instance, and it offers different methods for writing to your configured logs. Since we added the event log provider in the Program class, this happens automatically. Before you test the code above, run the following PowerShell script as administrator to create your event log source:
New-EventLog -LogName SomeApi -Source SomeApi
What I like about this approach is that if we do everything correctly, the exceptions pop up nicely under the SomeApi source and not inside the Application event log (which is cluttered, IMHO).
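
To verify that logging works, you can read back the newest entries from the custom log (Get-EventLog is available in Windows PowerShell):


Get-EventLog -LogName SomeApi -Newest 10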

Sunday, 5 April 2020

Deploying an SQL Express database in Azure Devops pipeline with YAML and generating and updating the database with migrate scripts using EF Core Code First tools

Here is a full example of how I achieved running integration tests using SQL Express in Azure Devops. I had to use the YAML based pipelines so I could use simonauner's approach of installing SQL Express with Chocolatey. Note that I had to install the EF Core tools, since I use .Net Core 3.1 in this pipeline. Also note that I generate an EF Code First migration SQL file on the fly, so that the rigged SQL Express instance is filled with contents. In short, the pipeline deploys a SQL Express instance in Azure Devops, then generates and runs an EF Code First migration SQL script to update the database with schema and seed data.

# ASP.NET Core (.NET Framework)
# Build and test ASP.NET Core projects targeting the full .NET Framework.
# Add steps that publish symbols, save build artifacts, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- feature/testability

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- script: choco install sql-server-express
- task: NuGetToolInstaller@1

- task: VisualStudioTestPlatformInstaller@1
  displayName: 'Visual Studio Test Platform Installer'
  inputs:
    versionSelector: latestStable

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: DotNetCoreCLI@2
  displayName: Build
  inputs:
    command: build
    projects: '**/*.csproj'
    arguments: '--configuration Debug' # Update this to match your need

- script: 'dotnet tool install --global dotnet-ef'
  displayName: 'Generate EF Code First Migrations SQL Script Update script'

- script: 'dotnet ef migrations script -i -o %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql --project .\SomeAcme\SomeAcme.csproj'
  displayName: 'Generate EF Code First Migrations migrate.sql'

- script: 'sqlcmd -S .\SQLEXPRESS -Q "CREATE DATABASE [SomeAcmeDb]"'
  displayName: 'Create database SomeAcmeDb in Azure Devops SQL EXPRESS'

- script: 'sqlcmd -i %BUILD_ARTIFACTSTAGINGDIRECTORY%\migrate.sql -S .\SQLEXPRESS -d SomeAcmeDb'
  displayName: ' Run migrate.sql on SQL EXPRESS in Azure Devops'

# PowerShell
# Run a PowerShell script on Linux, macOS, or Windows
- task: PowerShell@2
  inputs:
    targetType: 'inline' # Optional. Options: filePath, inline
    #filePath: # Required when targetType == FilePath
    #arguments: # Optional
    script: 'gci -recurse -filter *.dll' # Required when targetType == Inline
    #errorActionPreference: 'stop' # Optional. Options: stop, continue, silentlyContinue
    #failOnStderr: false # Optional
    #ignoreLASTEXITCODE: false # Optional
    #pwsh: false # Optional
    #workingDirectory: # Optional

- task: VSTest@2
  displayName: 'VsTest - testAssemblies'
  inputs:
    testAssemblyVer2: |
     **\*SomeAcme.Tests.dll
     !**\*TestAdapter.dll
     !**\obj\**
    vsTestVersion: toolsInstaller
    testFiltercriteria: 'Category=IntegrationTest'
    runInParallel: false
    codeCoverageEnabled: false
    testRunTitle: 'XUnit tests SomeAcme solution integration test starting'
    failOnMinTestsNotRun: true
    rerunFailedTests: false
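
As an optional sanity check (an assumed extra step, not part of the original pipeline), a script step can list the databases on the rigged instance to verify that SomeAcmeDb was created:

- script: 'sqlcmd -S .\SQLEXPRESS -Q "SELECT name FROM sys.databases"'
  displayName: 'List databases to verify SomeAcmeDb exists'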

Sunday, 29 September 2019

Deleting a set of databases with similar names with T-SQL

Devops Sunday. If you end up having many databases in SQL Server and want to get rid of them by matching their names, this T-SQL should help you out.

use master
go
declare @tablestoNuke as table(db nvarchar(100))
insert into @tablestoNuke
select name from sys.databases  
where name like '%SomeSimilarDbNameSet%'
declare @nukedb as nvarchar(100)
declare @nukesql as nvarchar(150)

declare nuker cursor for
select db from @tablestoNuke

open nuker
fetch next from nuker into @nukedb

while @@FETCH_STATUS = 0 
begin
set @nukesql = 'drop database ' + quotename(@nukedb) -- quotename guards against odd characters in db names
exec sp_executesql @nukesql
fetch next from nuker into @nukedb
end

close nuker
deallocate nuker

print 'All done nuking'


The T-SQL uses a cursor to loop through the database names fetched from the sys.databases view on the master db, and then uses exec sp_executesql to delete the databases by dropping them.
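
If you want to preview what would be dropped before running the script, a minimal dry run is to print the statements instead of executing them:


-- Dry run: list the drop statements without executing them
select 'drop database ' + quotename(name) from sys.databases
where name like '%SomeSimilarDbNameSet%'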

Friday, 3 May 2019

Powershell - Appending folder to path

The following PowerShell function can be used to append a folder to the PATH environment variable. Called with the two-argument overload of SetEnvironmentVariable as below, the change applies to the current process:

function AppendPath($filePath) {
 # Without an explicit scope, Get/SetEnvironmentVariable targets the current process only
 $path = [Environment]::GetEnvironmentVariable("Path")
 $path += ";" + $filePath
 [Environment]::SetEnvironmentVariable("Path", $path)
 Write-Host $path
}

To call this function, just run the Powershell command:
AppendPath "c:\temp"
You can add this function to your $profile file for easy availability; run . $profile to reload your $profile file. This will append the folder "c:\temp" to your PATH environment variable. To view your PATH environment variable, just enter:
echo $env:path
It is not required, but you can also use Chocolatey's refreshenv script to force an update of the environment variables if PATH still looks stale.
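
If you want the change to persist for the user across new shells (an assumed variant, not part of the original function), pass an explicit User scope:


function AppendUserPath($filePath) {
 # Persists for the current user; new processes will pick up the change
 $path = [Environment]::GetEnvironmentVariable("Path", [EnvironmentVariableTarget]::User)
 $path += ";" + $filePath
 [Environment]::SetEnvironmentVariable("Path", $path, [EnvironmentVariableTarget]::User)
}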

Tuesday, 30 April 2019

Finding databases with biggest tables in SQL Server

I created a T-SQL script today for use with SQL Server that lists the tables with the most rows, looping through all databases on the server.

DECLARE @currentDB AS VARCHAR(100)
DECLARE @mrsdb_sqlscript AS NVARCHAR(500)
DECLARE db_cursor CURSOR FOR 
SELECT name from sys.databases where state_desc <> 'offline' and name not in ('master', 'tempdb', 'model', 'msdb') order by name 
DROP TABLE IF EXISTS #largestTables 
CREATE TABLE #largestTables (
 DatabaseName VARCHAR(100),
 SchemaName VARCHAR(200), 
 TableName VARCHAR(200), 
 TotalRowCount BIGINT -- BIGINT avoids overflow for very large row counts
)

OPEN db_cursor 
FETCH NEXT from db_cursor into @currentDB 

WHILE @@FETCH_STATUS = 0 
BEGIN
 SET @mrsdb_sqlscript = N'INSERT INTO #largestTables 
  SELECT ''' + @currentDB + ''' , SCHEMA_NAME(schema_id) AS [SchemaName],
 [Tables].name AS [TableName],
 SUM([Partitions].[rows]) AS [TotalRowCount]
 FROM ' + @currentDB + '.sys.tables AS [Tables]
 JOIN ' + @currentDB + '.sys.partitions AS [Partitions]
 ON [Tables].[object_id] = [Partitions].[object_id]
 AND [Partitions].index_id IN ( 0, 1 )
 GROUP BY SCHEMA_NAME(schema_id), [Tables].name';

 PRINT 'Looking for largest table in the following DB: ' + @currentDB 
 exec sp_executesql @mrsdb_sqlscript 

 FETCH NEXT FROM db_cursor into @currentDB
END

CLOSE db_cursor 
DEALLOCATE db_cursor 

SELECT TOP 100 * FROM #largestTables 
ORDER BY TotalRowCount DESC, Schemaname ASC


The SQL script above uses a database cursor to loop through all databases, executing a dynamic SQL script that inserts data into the temporary table #largestTables. After the cursor has looped through its dataset, the result in the temporary table is presented to the monitoring DBA. Use it to spot where you have the most data on the database server you are working with!

Tuesday, 26 February 2019

Powershell - starting and stopping multiple app pools

The following PowerShell script defines some functions that can start or stop all IIS app pools on a server. It can be handy when you want to test out concurrency issues and switch off all IIS app pools and then start them up again.

Function fnStartApplicationPool([string]$appPoolName){
Import-Module WebAdministration 
 if ((Get-WebAppPoolState $appPoolName).Value -ne 'Started') {
  Write-Host 'IIS app pool ' $appPoolName ' is not started. Starting.' 
  Start-WebAppPool -Name $appPoolName 
  Write-Host 'IIS app pool ' $appPoolName 'started' 
 }
}

Function fnStartAllApplicationPools() {
Import-Module WebAdministration  
 Write-Host "Starting all app pools" 
 $appPools = (Get-ChildItem IIS:\AppPools)

foreach ($appPool in $appPools) { 
  & fnStartApplicationPool -appPoolName $appPool.Name
}
}

#fnStartAllApplicationPools #start all application pools


Function fnStopApplicationPool([string]$appPoolName) {
Import-Module WebAdministration
 if ((Get-WebAppPoolState $appPoolName).Value -ne 'Stopped') {
  Stop-WebAppPool -Name $appPoolName
 }
}

Function fnStopAllApplicationPools(){
Import-Module WebAdministration
 Write-Host "Stopping all app pools"
 $appPools = (Get-ChildItem IIS:\AppPools)

foreach ($appPool in $appPools) {
  & fnStopApplicationPool -appPoolName $appPool.Name
}

}

#fnStopAllApplicationPools #stop all application pools


Thursday, 22 November 2018

Displaying branch history in Git

I wanted an easy way to view branch history today. The following alias commands make this easier. They go under the [alias] section in the .gitconfig file you are using in your repository.

latest = "!f() { echo \"Latest ${1:-11} commits across all branches:\"; git log --abbrev-commit --date=relative --branches --all --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset%n' -n ${1:-11}; }; f"
logbranch = "!f() { echo \"Latest ${2:-11} commits in branch ${1:-HEAD} against master:\"; git log master..${1:-HEAD} --abbrev-commit --date=relative --pretty=format:'%C(yellow)%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(white blue bold)<%an>%Creset%n' -n ${2:-11}; }; f"

git latest shows the latest commits across all branches, and git logbranch shows the latest commits in the given branch (defaulting to the current branch) that are not in master; both default to the last 11 commits via a shell function. Note that logbranch compares against the master branch.
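
With the aliases above, usage looks like this (the count and branch name are optional):


git latest 5
git logbranch
git logbranch feature/mybranch 5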

Sunday, 4 November 2018

Closing branches in Git

Git, unlike Mercurial, has no builtin support for closing branches. This leads to a proliferation of branches, and running git branch -a to view remote branches, or git branch for local ones, will show ever more branches. Closing a branch in Git can however be supported through the use of tags. We keep the tag for future use, so that we can use it to check out a new branch from this tag. Another way would of course be to just delete a branch locally and/or remotely, but that is not the same as closing a branch - closing a branch in Mercurial still makes it possible to reopen it for later work. Anyways, in this article I will show some aliases which can be used to close a branch, either both local and remote or just remote. Put the following into the [alias] section of your .gitconfig file:

closebranch = "!w() { echo Attempting to close local and remote branch: $1 Processing...; echo Checking the branch $1 out..; git checkout $1; echo Trying to create a new tag archive/$1; git tag archive/\"$1\"; git push origin archive/\"$1\"; echo Deleting the local branch $1; git branch -d $1;  echo Deleting the remote branch $1; git push origin --delete $1; echo Done. To restore the closed branch later, enter: git checkout -b MyNewBranch archive/\"$1\"; }; w"

closebranchpassive = "!w() { echo Attempting to close local and remote branch: $1 Processing...; echo Checking the branch $1 out..; git checkout $1; echo Trying to create a new tag archive/$1; git tag archive/\"$1\"; git push origin archive/$1; echo Deleting the local branch $1;   echo Deleting the remote branch $1;  echo Done. To restore the closed branch later, enter: git checkout -b MyNewBranch archive/\"$1\"; }; w"

closeremotebranch =  "!w() { echo Attempting to close remote branch: $1 Processing...; echo Checking the branch $1 out..; git checkout $1;  echo Trying to create a new tag archive/$1; git tag archive/\"$1\"; git push origin archive/\"$1\"; echo Deleting the remote branch $1; git push origin --delete $1; echo Done. To restore the closed branch later, enter: git checkout -b MyNewBranch archive/\"$1\"; }; w"

What we do here is the following:
  • Check out the branch to close
  • Tag this branch as archive/branchname
  • Important - push the tag to the remote, e.g. origin in the provided aliased commands above
  • (Delete the local branch)
  • Delete the remote branch
  • Display a friendly output message on how to restore the branch later through the tag
What we use here is a shell function in each alias. This allows us to run multiple Git commands through a simple aliased command. Say you want to close a local and remote branch called MyBranchToBeClosed. Just enter: git closebranch MyBranchToBeClosed

If you just want to close the remote branch and keep the local one, enter: git closeremotebranch MyBranchToBeClosed

To restore the branch MyBranchToBeClosed (which is now actually closed!) later, just enter: git checkout -b MyRestoredBranch archive/MyBranchToBeClosed

This lets you keep old branches around as tags prefixed with archive/, without proliferating the branch listings. I wish Git was simpler to use sometimes so we did not have to resort to such hacks - closing branches should be easy.

Wednesday, 17 October 2018

Working with Netsh http sslcert setup and SSL bindings through Powershell

I am working with a solution at work where I need to enable IIS client certificates. I am not yet able to get past the "Provide client certificate" dialog, but it is possible to alter the setup of SSL cert bindings on your computer through the netsh command. This command is not part of PowerShell but of the ordinary command line. I decided to write some PowerShell functions to make altering this setup at least a bit easier. One annoyance with the netsh command is that you have to keep track of the Application ID and certificate hash values; with PowerShell code we can keep track of these more easily. The PowerShell code to display, modify, delete and add SSL cert bindings is as follows:

function Get-NetshSetup($sslBinding='0.0.0.0:443') {

 # Read the current SSL cert binding for the given ip:port from netsh
 $sslsetup = netsh http show sslcert ipport=$sslBinding
 #Get-Member -InputObject $sslsetup

 $sslsetupKeys = @{}

 # Parse the 'key : value' lines of the netsh output into a hashtable
 foreach ($line in $sslsetup){
  if ($line -ne $null -and $line.Contains(': ')){
    $key = $line.Split(':')[0].Trim()
    $value = $line.Split(':')[1].Trim()
    if (!$sslsetupKeys.ContainsKey($key)){
      $sslsetupKeys.Add($key, $value)
    }
  }
 }

 return $sslsetupKeys
}

function Display-NetshSetup($sslBinding='0.0.0.0:443'){

 Write-Host SSL-Setup is:

 $sslsetup = Get-NetshSetup($sslBinding)

 # List all key-value pairs parsed from the netsh output
 foreach ($key in $sslsetup.Keys){
  Write-Host $key $sslsetup[$key]
 }
}

function Modify-NetshSetup($sslBinding='0.0.0.0:443', $certstorename='My',
  $verifyclientcertrevocation='disable', $verifyrevocationwithcachedclientcertonly='disable',
  $clientCertNegotiation='enable', $dsmapperUsage='enable'){
  $sslsetup = Get-NetshSetup($sslBinding)

  echo Deleting sslcert netsh http binding for $sslBinding ...
  netsh http delete sslcert ipport=$sslBinding
  echo Adding sslcert netsh http binding for $sslBinding...
  # The hashtable lookups are wrapped in $() so they expand inside the netsh arguments
  netsh http add sslcert ipport=$sslBinding certhash=$($sslsetup['Certificate Hash']) appid=$($sslsetup['Application ID']) certstorename=$certstorename verifyclientcertrevocation=$verifyclientcertrevocation verifyrevocationwithcachedclientcertonly=$verifyrevocationwithcachedclientcertonly clientcertnegotiation=$clientCertNegotiation dsmapperusage=$dsmapperUsage
  echo Done. Inspect output.
  Display-NetshSetup $sslBinding
}



function Add-NetshSetup($sslBinding, $certstorename, $certhash, $appid,
  $verifyclientcertrevocation='disable', $verifyrevocationwithcachedclientcertonly='disable',
  $clientCertNegotiation='enable', $dsmapperUsage='enable'){

  echo Adding sslcert netsh http binding for $sslBinding...
  netsh http add sslcert ipport=$sslBinding certhash=$certhash appid=$appid clientcertnegotiation=$clientCertNegotiation dsmapperusage=$dsmapperUsage certstorename=$certstorename verifyclientcertrevocation=$verifyclientcertrevocation verifyrevocationwithcachedclientcertonly=$verifyrevocationwithcachedclientcertonly

  echo Done. Inspect output.
  Display-NetshSetup $sslBinding
}





#Get-NetshSetup('0.0.0.0:443'); 
Display-NetshSetup
#Modify-NetshSetup 
Add-NetshSetup '0.0.0.0:443' 'MY' 'c0fe06da89bcb8f22da8c8cbdc97be413b964619' '{4dc3e181-e14b-4a21-b022-59fc669b0914}'
Display-NetshSetup