This article looks at different ways to hash a password in .NET.
MD5 was developed by Ron Rivest in 1991 and was widely used through the 1990s, but in 2005 practical collisions were demonstrated. MD5 and SHA-1 are no longer advised
for security-sensitive hashing.
Instead, a PBKDF (Password-Based Key Derivation Function) algorithm should be used.
This article uses a PBKDF2-based method on Rfc2898DeriveBytes; the static Pbkdf2 method has been available since .NET 6.
Users of ASP.NET Core Identity are recommended to use PasswordHasher instead:
https://andrewlock.net/exploring-the-asp-net-core-identity-passwordhasher/
An overview of the arithmetic flow of PBKDF2 is shown below. The diagram indicates SHA-512, but the code shown in this article
uses SHA-256.
First off, an MD5 hash can be computed as follows:
static string Md5(string input)
{
    using (var md5 = MD5.Create())
    {
        var byteHash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
        var hash = BitConverter.ToString(byteHash).Replace("-", "");
        return hash;
    }
}
MD5 Demonstration in .NET
-------------------------
Password to hash: abc123
MD5 hashed password: E99A18C428CB38D5F260853678922E03
The MD5 hash above agrees with the online MD5 hash here:
https://www.md5hashgenerator.com/
The MD5 method here does not use any salt, but a salt could be concatenated with the password to protect against rainbow table attacks, i.e.
precomputed dictionary attacks.
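As an illustration of the salting idea, a salted variant could look like the sketch below. The helper name and salt handling here are assumptions, not code from the article, and MD5 remains unsuitable for password storage even when salted:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class Md5Salted
{
    // Illustration only: prepend a random salt to the password before hashing.
    // A real system should use PBKDF2 (shown later in this article) instead of MD5.
    public static (string Hash, string Salt) Md5WithSalt(string input)
    {
        var saltBytes = RandomNumberGenerator.GetBytes(16);
        var salt = Convert.ToHexString(saltBytes);
        using var md5 = MD5.Create();
        var byteHash = md5.ComputeHash(Encoding.UTF8.GetBytes(salt + input));
        // The salt must be stored alongside the hash so it can be re-applied on verification
        return (Convert.ToHexString(byteHash), salt);
    }
}
```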
Next, to perform PBKDF2 hashing, the code below can be used. Note that the algorithm runs iteratively, so computing the hash becomes
increasingly expensive as the iteration count grows, and a salt is included, making it scalable to be
more and more difficult to attack.
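The HashPassword helper called by the demo is not reproduced in this excerpt; a minimal sketch of what it could look like, using the static Rfc2898DeriveBytes.Pbkdf2 method available since .NET 6 (the output format matches the demo below, but the exact implementation is an assumption):

```csharp
using System;
using System.Diagnostics;
using System.Security.Cryptography;

// Hypothetical sketch of the HashPassword helper used by the demo
static void HashPassword(string passwordToHash, int numberOfRounds)
{
    // A random 16-byte salt; a real implementation would store it with the hash
    var salt = RandomNumberGenerator.GetBytes(16);
    var stopWatch = Stopwatch.StartNew();
    // 32 is the desired output length of the hash in bytes (256 bits)
    byte[] hash = Rfc2898DeriveBytes.Pbkdf2(
        passwordToHash, salt, numberOfRounds, HashAlgorithmName.SHA256, 32);
    stopWatch.Stop();
    Console.WriteLine($"Password to hash : {passwordToHash}");
    Console.WriteLine($"Hashed Password : {Convert.ToBase64String(hash)}");
    Console.WriteLine($"Iterations ({numberOfRounds}) Elapsed Time: {stopWatch.ElapsedMilliseconds} ms");
    Console.WriteLine();
}
```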
The value 32 here is the desired output length of the hash in bytes; we decide how long a hash we get out of the call to the method.
We can then test out the Pbkdf2 method using an increasing number of iterations.
void RunPbkdf2HashDemo()
{
    const string passwordToHash = "abc123";
    Console.WriteLine("Password Based Key Derivation Function Demonstration in .NET");
    Console.WriteLine("------------------------------------------------------------");
    Console.WriteLine();
    Console.WriteLine("PBKDF2 Hashes using Rfc2898DeriveBytes");
    Console.WriteLine();
    HashPassword(passwordToHash, 1);
    HashPassword(passwordToHash, 10);
    HashPassword(passwordToHash, 100);
    HashPassword(passwordToHash, 1000);
    HashPassword(passwordToHash, 10000);
    HashPassword(passwordToHash, 100000);
    HashPassword(passwordToHash, 1000000);
    HashPassword(passwordToHash, 5000000);
}
This gives the following output:
Password Based Key Derivation Function Demonstration in .NET
------------------------------------------------------------
PBKDF2 Hashes using Rfc2898DeriveBytes
Password to hash : abc123
Hashed Password : eqeul5z7l2dPrOo8WjH/oTt0RYHvlZ2lvk8SUoTjZq4=
Iterations (1) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : wfd8qQobzBPZvdemqrtZczqctFe0JeAkKjU3IJ48cms=
Iterations (10) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : VY45SxzhqjYronha0kt1mQx+JRDVlXj82prX3H7kjII=
Iterations (100) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : B0LfHgRSslG/lWe7hbp4jb8dEqQ/bZwNtxsaqbVBZ2I=
Iterations (1000) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : LAHwpS4bnbO7CQ1r7buYgUTrp10FyaRyeK6hCwGwv20=
Iterations (10000) Elapsed Time: 1 ms
Password to hash : abc123
Hashed Password : WDjyPySpULXtVOVmSR9cYlzAY4LWeJqDBhszKAfIaPc=
Iterations (100000) Elapsed Time: 13 ms
Password to hash : abc123
Hashed Password : sDx6sOrTl2b7cNZGUAecg7YO4Md/g3eAtfQSvh/vxpM=
Iterations (1000000) Elapsed Time: 127 ms
Password to hash : abc123
Hashed Password : ruywLaR0QApOU5bkqE/x2AAhYJzBj5y6D3P3IxlIF2I=
Iterations (5000000) Elapsed Time: 643 ms
Note that it takes many iterations before the computation takes significant time.
This article presents some helper methods for performing AES encryption using Galois/Counter Mode (GCM). AES, the Advanced Encryption Standard, is the most widely used encryption algorithm today, having superseded DES and Triple DES
since 2001. We will look into the GCM mode of AES in this article.
The AES-GCM class AesGcm is supported in .NET Core 3.0 and newer .NET versions, as well as in .NET Standard 2.1.
AES-GCM is authenticated encryption, in contrast to the default AES-CBC (Cipher Block Chaining) mode.
The benefits of using the GCM mode of AES are the following:
Data authenticity / integrity. This is provided via a tag that is output by the encryption and checked while decrypting.
Support for sending additional associated data, used for example in newer TLS implementations to provide both encryption and a non-encrypted payload. This is often called additional authenticated data (AAD).
Here is a helper class to perform encryption and decryption using AES-GCM.
public static class AesGcmEncryption
{
    public static (byte[], byte[]) Encrypt(byte[] dataToEncrypt, byte[] key, byte[] nonce, byte[] associatedData = null)
    {
        using var aesGcm = new AesGcm(key);
        // tag and cipherText will be filled during encryption;
        // the tag is a message authentication code (MAC) used to check that the data has not been tampered with
        var tag = new byte[16];
        var cipherText = new byte[dataToEncrypt.Length];
        aesGcm.Encrypt(nonce, dataToEncrypt, cipherText, tag, associatedData);
        return (cipherText, tag);
    }

    public static byte[] Decrypt(byte[] cipherText, byte[] key, byte[] nonce, byte[] tag, byte[] associatedData = null)
    {
        using var aesGcm = new AesGcm(key);
        // the decrypted data buffer is filled during decryption
        var decryptedData = new byte[cipherText.Length];
        aesGcm.Decrypt(nonce, cipherText, tag, decryptedData, associatedData);
        return decryptedData;
    }
}
In the code above, the Encrypt method returns a tuple with the cipherText and the tag, i.e. the encrypted data and its authentication tag. Both must be used while decrypting, and the tag provides, as mentioned, a means of checking the integrity of the data, i.e. that it has not been tampered with.
Note that the 16-byte tag and the ciphertext are filled after running the Encrypt method of the AesGcm class. The cipherText array must be the same length as the dataToEncrypt array passed in.
Here is sample code to use AES-GCM. Note that the associated data used here, while optional, must match between encryption and decryption if it is set. The nonce must be 12 bytes (96 bits) long. The nonce is similar to an initialization vector, but it must be used only once for a particular encryption and decryption;
this protects against replay attacks.
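The sample program producing the output below is not included in this excerpt; a minimal sketch using the AesGcmEncryption helper above (the exact associated-data string is an assumption):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

var original = "Text to encrypt";
var key = RandomNumberGenerator.GetBytes(32);   // 256-bit AES key
var nonce = RandomNumberGenerator.GetBytes(12); // 96-bit nonce, the required length
var metadata = Encoding.UTF8.GetBytes("some metadata");

var (cipherText, tag) = AesGcmEncryption.Encrypt(
    Encoding.UTF8.GetBytes(original), key, nonce, metadata);
Console.WriteLine($"Original Text = {original}");
Console.WriteLine($"Encrypted Text = {Convert.ToBase64String(cipherText)}");
Console.WriteLine($"Tag = {Convert.ToBase64String(tag)}");

// The same key, nonce, tag and associated data must be supplied to decrypt
var decrypted = AesGcmEncryption.Decrypt(cipherText, key, nonce, tag, metadata);
Console.WriteLine($"Decrypted Text = {Encoding.UTF8.GetString(decrypted)}");
```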
AES Encryption demo GCM - Galois Counter Mode:
--------------
Original Text = Text to encrypt
Encrypted Text = 9+2x0kctnRwiDDHBm0/H
Tag = sSDxsg17HFdjE4cuqRlroQ==
Decrypted Text = Text to encrypt
AES-GCM thus provides integrity checking and allows sending associated data, if desired, when encrypting and decrypting with the AES algorithm.
We can protect the AES key itself using different methods, for example the Data Protection API (DPAPI); this is only supported on Windows.
Let's look at a helper class for using the Data Protection API.
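The DataProtectionUtil helper used in the decryption code below is not reproduced in this excerpt; it is essentially a thin wrapper over ProtectedData. A minimal assumed sketch (Windows-only, and the class shape is an assumption):

```csharp
using System.Security.Cryptography;

public static class DataProtectionUtil
{
    // Protect data with DPAPI; the entropy acts as an extra secret bound to the protected blob
    public static byte[] Protect(byte[] data, byte[] entropy, DataProtectionScope scope) =>
        ProtectedData.Protect(data, entropy, scope);

    // Unprotect must use the same entropy and scope that were used to protect the data
    public static byte[] Unprotect(byte[] protectedData, byte[] entropy, DataProtectionScope scope) =>
        ProtectedData.Unprotect(protectedData, entropy, scope);
}
```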
void EncryptAndDecryptWithProtectedKey()
{
    var original = "Text to encrypt";
    Console.WriteLine($"Original Text = {original}");

    // Create key and nonce, then encrypt our text with AES
    var gcmKey = RandomNumberGenerator.GetBytes(32);
    var nonce = RandomNumberGenerator.GetBytes(12);
    var result = EncryptText(original, gcmKey, nonce);

    // Create some entropy and protect the AES key
    var entropy = RandomNumberGenerator.GetBytes(16);
    var protectedKey = ProtectedData.Protect(gcmKey, entropy, DataProtectionScope.CurrentUser);
    Console.WriteLine($"gcmKey = {Convert.ToBase64String(gcmKey)}, protectedKey = {Convert.ToBase64String(protectedKey)}");

    // Decrypt the text with AES; the AES key has to be retrieved with DPAPI
    var decryptedText = DecryptText(result.encrypted, nonce, result.tag, protectedKey, entropy);
    Console.WriteLine($"Decrypted Text using AES GCM with key retrieved via Data Protection API = {decryptedText}");
}
private static (byte[] encrypted, byte[] tag) EncryptText(string original, byte[] gcmKey, byte[] nonce)
{
    return AesGcmEncryption.Encrypt(Encoding.UTF8.GetBytes(original), gcmKey, nonce, Encoding.UTF8.GetBytes("some meta"));
}

private static string DecryptText(byte[] encrypted, byte[] nonce, byte[] tag, byte[] protectedKey, byte[] entropy)
{
    var key = DataProtectionUtil.Unprotect(protectedKey, entropy, DataProtectionScope.CurrentUser);
    Console.WriteLine($"Inside DecryptText: gcmKey = {Convert.ToBase64String(key)}, protectedKey = {Convert.ToBase64String(protectedKey)}");
    var decryptedText = AesGcmEncryption.Decrypt(encrypted, key, nonce, tag, Encoding.UTF8.GetBytes("some meta"));
    return Encoding.UTF8.GetString(decryptedText);
}
The Data Protection API is only supported on the Windows platform. There are other ways to protect an AES key, but protecting the key is always a challenge when dealing with symmetric encryption algorithms such as AES.
I have looked at digital signatures with RSA in .NET today. Digital signatures provide non-repudiation: a proof of authenticity that the sender is who they claim to be, and
that the data has not been tampered with.
We will return a tuple of both a SHA-256 hash computed from some document data and its digital signature using the RSA algorithm.
I have used .NET Standard 2.0 here, so the code can be used in most frameworks, in both .NET Framework and .NET. We will use RSA to do the digital signature signing and verification.
First off, here is a helper class to create an RSA-signed SHA-256 hash; here we create a new RSA instance with key size 2048.
RsaDigitalSignature.cs
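The class itself is not included in this excerpt; a sketch of what it could look like based on the surrounding description (2048-bit RSA created on the fly, SHA-256 hash, a tuple of signature and hash; the member names are taken from the console app that follows, but other implementation details are assumptions):

```csharp
using System.Security.Cryptography;

public class RsaDigitalSignature
{
    // A fresh 2048-bit RSA key pair, created on the fly
    private readonly RSA _rsa = RSA.Create(2048);

    public (byte[] Signature, byte[] HashOfData) SignData(byte[] document)
    {
        // Compute the SHA-256 hash of the document, then sign that hash with the private key
        using var sha256 = SHA256.Create();
        byte[] hash = sha256.ComputeHash(document);
        byte[] signature = _rsa.SignHash(hash, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        return (signature, hash);
    }

    public bool VerifySignature(byte[] signature, byte[] hashOfData) =>
        _rsa.VerifyHash(hashOfData, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
```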
In the code above, we receive some document data and compute its SHA-256 hash. We return a tuple with the signed hash and the computed SHA-256 hash itself.
A console application that runs the sample code above is the following:
void Main()
{
    SignAndVerifyData();
    //Console.ReadLine();
}
private static void SignAndVerifyData()
{
    Console.WriteLine("RSA-based digital signature demo");
    var document = Encoding.UTF8.GetBytes("Document to sign");
    var digitalSignature = new RsaDigitalSignature();
    var signature = digitalSignature.SignData(document);
    bool isValidSignature = digitalSignature.VerifySignature(signature.Signature, signature.HashOfData);
    Console.WriteLine($"\nInput Document:\n{Convert.ToBase64String(document)}\nIs the digital signature valid? {isValidSignature} \nSignature: {Convert.ToBase64String(signature.Signature)} \nHash of data:\n{Convert.ToBase64String(signature.HashOfData)}");
}
Running the demo shows that verification of the digital signature passes:
Input Document:
RG9jdW1lbnQgdG8gc2lnbg==
Is the digital signature valid? True
Signature: Gok1x8Wxm9u5jTRcqrgPsI45ie3WPZLi/FNbaJMGTHqBmNbpJTEYjsXix97aIF6uPjgrxQWJKCegc8S4yASdut7TpJafO9wSRqvScc2SuOGK9BqnX+9GwRRQNti8ynm0ARRar+Z4hTpYY/XngFZ+ovvqIT3KRMK/7tsMmTg87mY0KelteFX7z7G7wPB9kKjT6ORYK4lVr35fihrbxei0XQP59YuEdALy+vbvKUm3JNv4sBU0lc9ZKpp2XF0rud8UnY1Nz4/XH7ZoaKfca5HXs9yq89DJRaPBRi1+Wv41vTCf8zFKPWZJrw6rm6kBMNHMENYbeBNdZyiCspTsHZmsVA==
Hash of data:
VPPxOVW2A38lCB810vuZbBH50KQaPSCouN0+tOpYDYs=
The code above uses an RSA key created on the fly, which is not so easy to share between a sender and a receiver. Let's look at how we can use X509 certificates for the RSA signing instead. The source code below can be shared between the sender and the receiver; for example,
the public part of the X509 certificate can be exported to the receiver, who can install it in a certificate store, only required to know the thumbprint of the cert, which is easy to find in MMC (Microsoft Management Console) or using PowerShell by cd-ing into the cert:\ drive.
Let's first look at a helper class to get hold of an installed X509 certificate.
public class CertStoreUtil
{
    public static System.Security.Cryptography.X509Certificates.X509Certificate2 GetCertificateFromStore(
        System.Security.Cryptography.X509Certificates.StoreLocation storeLocation,
        string thumbprint, bool validOnly = true)
    {
        var store = new X509Store(storeLocation);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, validOnly).FirstOrDefault();
        store.Close();
        return cert;
    }
}
Next up, a helper class to create an RSA-based digital signature like in the previous example, but using a certificate.
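The helper class is not shown in this excerpt; a sketch of what RsaFromCertDigitalSignature could look like, reusing CertStoreUtil from above (the constructor signature matches the console app below, but the implementation details are assumptions):

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

public class RsaFromCertDigitalSignature
{
    private readonly RSA _privateKey;
    private readonly RSA _publicKey;

    public RsaFromCertDigitalSignature(StoreLocation storeLocation, string thumbprint)
    {
        // Load the certificate from the store and use its RSA key pair
        var cert = CertStoreUtil.GetCertificateFromStore(storeLocation, thumbprint, validOnly: false);
        _privateKey = cert.GetRSAPrivateKey() ?? throw new InvalidOperationException("Certificate has no RSA private key");
        _publicKey = cert.GetRSAPublicKey() ?? throw new InvalidOperationException("Certificate has no RSA public key");
    }

    public (byte[] Signature, byte[] HashOfData) SignData(byte[] document)
    {
        // Hash with SHA-256, sign the hash with the certificate's private key
        using var sha256 = SHA256.Create();
        byte[] hash = sha256.ComputeHash(document);
        byte[] signature = _privateKey.SignHash(hash, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        return (signature, hash);
    }

    public bool VerifySignature(byte[] signature, byte[] hashOfData) =>
        _publicKey.VerifyHash(hashOfData, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
```

Verification only needs the public key, so the receiver can verify with just the exported public cert installed.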
A console app that tests out the code above is shown next; I have selected a random cert on my dev PC here.
void Main()
{
    SignAndVerifyData();
    //Console.ReadLine();
}
private static void SignAndVerifyData()
{
    Console.WriteLine("RSA-based digital signature demo");
    var document = Encoding.UTF8.GetBytes("Document to sign");
    //var x509CertLocalHost = CertStoreUtil.GetCertificateFromStore(StoreLocation.LocalMachine, "1f0b749ff936abddad89f4bbea7c30ed64e3dd07");
    var digitalSignatureWithCert = new RsaFromCertDigitalSignature(StoreLocation.LocalMachine, "1f0b749ff936abddad89f4bbea7c30ed64e3dd07");
    var signatureWithCert = digitalSignatureWithCert.SignData(document);
    bool isValidSignatureFromCert = digitalSignatureWithCert.VerifySignature(signatureWithCert.Signature, signatureWithCert.HashOfData);
    Console.WriteLine(
        $@"Input Document:
{Convert.ToBase64String(document)}
Is the digital signature signed with private key of CERT valid according to public key of CERT? {isValidSignatureFromCert}
Signature: {Convert.ToBase64String(signatureWithCert.Signature)}
Hash of data:
{Convert.ToBase64String(signatureWithCert.HashOfData)}");
}
Now here is an important concept in digital signatures:
To create a digital signature, we MUST use a private key (e.g. the private key of an RSA instance, which can either be created on the fly or retrieved from, for example, an X509 certificate, or from a JSON web key in a more modern example).
To verify a signature, we can use either the public or the private key, usually just the public key (which can be shared). For X509 certificates, we usually share a public cert (.cer or similar format) and keep the private cert (.pfx) ourselves.
Sample output of the console app shown above:
RSA-based digital signature demo
Input Document:
RG9jdW1lbnQgdG8gc2lnbg==
Is the digital signature signed with private key of CERT valid according to public key of CERT? True
Signature: ZHWzJeZnwbfI109uK0T4ubq4B+CHedQPIDgPREz+Eq9BR6A9y6kQEvSrxqUHvOppSDN5kDt5bTiWv1pvDPow+czb7N6kmFf1zQUxUs3ip4WPovBtQKmfpf9/i3DNkRILcoMLdZdKnn0aSaK66f0oxkSIc4nEkb3O9PbejVso6wLqSdDCh96d71gbHqOjyiZLBj2VlqalWvEPuo9GB0s2Uz2fxtFGMUQiZvH3jKR+9F4LwvKCc1K0E/+J4Np57JSfKgmid9QyL2r7nO19SVoVL3yBY7D8UxVIRw8sT/+JKXlnyh8roK7kaxDtW4+FMK6LT/QPvi8LkiNmA+eVv3kk9w==
Hash of data:
VPPxOVW2A38lCB810vuZbBH50KQaPSCouN0+tOpYDYs=
WCF, or Windows Communication Foundation, was initially released in 2006 and was an important part of .NET Framework for creating server-side services. It supports a lot of different protocols,
not only HTTP(S), but also Net.TCP, MSMQ, named pipes and more.
Sadly, .NET Core 1.0, released in 2016, did not include WCF. The use of WCF has more and more been replaced by REST APIs over HTTP(S) using JWT tokens rather than SAML.
But a community-driven project, supported by a multitude of companies including Microsoft and Amazon Web Services, has been working on the Core WCF project, and it is starting to
gain more use, also allowing companies to migrate their platform services over to .NET.
I have looked at some of the basics, namely Basic Auth in Core WCF, for which there is actually no working code sample available. By studying different code samples and tapping into the ASP.NET Core pipeline, I got it working. In this article I will explain how.
I use GenericIdentity to make it work. On the client side I have an extension method that passes the username and password inside the SOAP envelope. Client and service both use .NET 6, and the service uses CoreWCF version 1.5.1.
The client is an ASP.NET Core MVC client that has added a Core WCF service as a connected service, generating a ServiceClient; in other words, the same type of service reference seen in .NET Framework.
using System.ServiceModel;
using System.ServiceModel.Channels;

namespace CoreWCFWebClient1.Extensions
{
    public static class BasicHttpBindingClientFactory
    {
        /// <summary>
        /// Creates a basic auth client with credentials set in header Authorization formatted as 'Basic [base64encoded username:password]'.
        /// Makes it easier to perform basic auth in ASP.NET Core for WCF.
        /// </summary>
        public static TServiceImplementation WithBasicAuth<TServiceContract, TServiceImplementation>(this TServiceImplementation client, string username, string password)
            where TServiceContract : class
            where TServiceImplementation : ClientBase<TServiceContract>, new()
        {
            string clientUrl = client.Endpoint.Address.Uri.ToString();
            var binding = new BasicHttpsBinding();
            binding.Security.Mode = BasicHttpsSecurityMode.Transport;
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
            string basicHeaderValue = "Basic " + Base64Encode($"{username}:{password}");
            var eab = new EndpointAddressBuilder(new EndpointAddress(clientUrl));
            eab.Headers.Add(AddressHeader.CreateAddressHeader(
                "Authorization",    // header name
                string.Empty,       // namespace
                basicHeaderValue)); // header value
            var endpointAddress = eab.ToEndpointAddress();
            var clientWithConfiguredBasicAuth = (TServiceImplementation)Activator.CreateInstance(typeof(TServiceImplementation), binding, endpointAddress)!;
            clientWithConfiguredBasicAuth.ClientCredentials.UserName.UserName = username;
            clientWithConfiguredBasicAuth.ClientCredentials.UserName.Password = password; // note: the password, not the username
            return clientWithConfiguredBasicAuth;
        }

        private static string Base64Encode(string plainText)
        {
            var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
            return Convert.ToBase64String(plainTextBytes);
        }
    }
}
Here is an example call inside a Razor file in a .NET 6 web client; I made the client and service from the WCF template:
Index.cshtml
@{
    string username = "someuser";
    string password = "somepassw0rd";
    var client = new ServiceClient().WithBasicAuth<IService, ServiceClient>(username, password);
    var result = await client.GetDataAsync(42);
    <h5>@Html.Raw(result)</h5>
}
I manage to set the identity via the call above; here is a screenshot showing this:
Setting up Basic Auth for serverside
Let's look at the server side; it was created from an ASP.NET Core .NET 6 MVC Views solution to start with.
I added these NuGet packages to add CoreWCF; the entire .csproj is shown since it also includes some important Using items:
CoreWCFService1.csproj
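The .csproj content is not reproduced in this excerpt; a sketch of what it could look like for a .NET 6 CoreWCF service, using the CoreWCF 1.5.1 version mentioned earlier (the Using items and property values are assumptions):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="CoreWCF.Primitives" Version="1.5.1" />
    <PackageReference Include="CoreWCF.Http" Version="1.5.1" />
  </ItemGroup>
  <ItemGroup>
    <!-- Global usings so the CoreWCF namespaces are available everywhere -->
    <Using Include="CoreWCF" />
    <Using Include="CoreWCF.Configuration" />
  </ItemGroup>
</Project>
```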
This adds authentication to the services. We also make sure to add authentication itself after the WebApplicationBuilder has been built, and to set AllowSynchronousIO to true as usual.
Below is listed the pipeline setup of the authentication; the StartsWithSegments path should of course be adjusted in case you have multiple services:
Program.cs
app.Use(async (context, next) =>
{
// Only check for basic auth when path is for the TransportWithMessageCredential endpoint only
if (context.Request.Path.StartsWithSegments("/Service.svc"))
{
// Check if currently authenticated
var authResult = await context.AuthenticateAsync("Basic");
if (authResult.None)
{
// If the client hasn't authenticated, send a challenge to the client and complete request
await context.ChallengeAsync("Basic");
return;
}
}
// Call the next delegate/middleware in the pipeline.
    // Either the request was authenticated or it's for a path which doesn't require basic auth
await next(context);
});
We set up the service model security like this to support transport mode security with the Basic client credential type.
Program.cs
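The Program.cs content is not included in this excerpt; a sketch of what the CoreWCF service model setup could look like (the binding and namespace details are assumptions and may differ between CoreWCF versions):

```csharp
using CoreWCF;
using CoreWCF.Channels;
using CoreWCF.Configuration;
using Microsoft.AspNetCore.Authentication;

var builder = WebApplication.CreateBuilder(args);

// Register CoreWCF and the custom Basic authentication scheme
builder.Services.AddServiceModelServices();
builder.Services.AddAuthentication("Basic")
    .AddScheme<AuthenticationSchemeOptions, BasicAuthenticationHandler>("Basic", null);

var app = builder.Build();

// Transport (HTTPS) security with the Basic client credential type,
// mirroring the BasicHttpsBinding setup on the client side
var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

app.UseServiceModel(serviceBuilder =>
{
    serviceBuilder.AddService<Service>();
    serviceBuilder.AddServiceEndpoint<Service, IService>(binding, "/Service.svc");
});

app.Run();
```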
The BasicAuthenticationHandler looks like this:
BasicAuthenticationHandler.cs
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Options;
using System.Security.Claims;
using System.Security.Principal;
using System.Text;
using System.Text.Encodings.Web;

public class BasicAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
    private readonly IUserRepository _userRepository;

    public BasicAuthenticationHandler(IOptionsMonitor<AuthenticationSchemeOptions> options,
        ILoggerFactory logger,
        UrlEncoder encoder,
        ISystemClock clock, IUserRepository userRepository) :
        base(options, logger, encoder, clock)
    {
        _userRepository = userRepository;
    }

    protected override async Task<AuthenticateResult> HandleAuthenticateAsync()
    {
        string? authTicketFromSoapEnvelope = await Request!.GetAuthenticationHeaderFromSoapEnvelope();
        if (authTicketFromSoapEnvelope != null && authTicketFromSoapEnvelope.StartsWith("basic", StringComparison.OrdinalIgnoreCase))
        {
            var token = authTicketFromSoapEnvelope.Substring("Basic ".Length).Trim();
            var credentialsAsEncodedString = Encoding.UTF8.GetString(Convert.FromBase64String(token));
            var credentials = credentialsAsEncodedString.Split(':');
            if (await _userRepository.Authenticate(credentials[0], credentials[1]))
            {
                var identity = new GenericIdentity(credentials[0]);
                var claimsPrincipal = new ClaimsPrincipal(identity);
                var ticket = new AuthenticationTicket(claimsPrincipal, Scheme.Name);
                return AuthenticateResult.Success(ticket);
            }
        }
        return AuthenticateResult.Fail("Invalid Authorization Header");
    }

    protected override Task HandleChallengeAsync(AuthenticationProperties properties)
    {
        Response.StatusCode = 401;
        Response.Headers.Add("WWW-Authenticate", "Basic realm=\"thoushaltnotpass.com\"");
        return Context.Response.WriteAsync("You are not logged in via Basic auth");
    }
}
This authentication handler has a flaw: if you enter the wrong username and password, you get a 500 Internal Server Error instead of a 401. I have not found out how to fix this yet; AuthenticateResult.Fail seems to short-circuit everything when you enter wrong credentials.
The _userRepository.Authenticate method is a dummy implementation; the user repo could for example use a database connection to look up the user via the provided credentials, or some other membership store.
The user repo looks like this:
(I)UserRepository.cs
public interface IUserRepository
{
    public Task<bool> Authenticate(string username, string password);
}

public class UserRepository : IUserRepository
{
    public Task<bool> Authenticate(string username, string password)
    {
        //TODO: dummy auth mechanism used here; replace with something more realistic such as a DB user repo lookup or similar
        if (username == "someuser" && password == "somepassw0rd")
        {
            return Task.FromResult(true);
        }
        return Task.FromResult(false);
    }
}
So I have implemented basic auth by reading out the credentials from the Authorization header inside the SOAP envelope.
I circumvent a lot of the Core WCF auth by perhaps relying too much on the ASP.NET Core pipeline instead. Remember, WCF has to interop with the ASP.NET Core pipeline to make this work properly, and as long as we satisfy the demands of both the WCF and ASP.NET Core pipelines, we can make the authentication work.
I managed to set the username via setting claims in the expected places: ServiceSecurityContext and Thread.CurrentPrincipal.
The WCF service looks like this; note the use of the [Authorize] attribute:
Service.cs
public class Service : IService
{
    [Authorize]
    public string GetData(int value)
    {
        return $"You entered: {value}. <br />The client logged in with transport security with BasicAuth with https (BasicHttpsBinding).<br /><br />The username is set inside ServiceSecurityContext.Current.PrimaryIdentity.Name: {ServiceSecurityContext.Current.PrimaryIdentity.Name}. <br /> This username is also stored inside Thread.CurrentPrincipal.Identity.Name: {Thread.CurrentPrincipal?.Identity?.Name}";
    }

    public CompositeType GetDataUsingDataContract(CompositeType composite)
    {
        if (composite == null)
        {
            throw new ArgumentNullException(nameof(composite));
        }
        if (composite.BoolValue)
        {
            composite.StringValue += "Suffix";
        }
        return composite;
    }
}
I am mainly satisfied with this setup, though it is not optimal, since ASP.NET Core authentication does not seem to work together with CoreWCF out of the box; instead we add the credentials as an Authorization header inside the SOAP envelope, which we then read out.
It took some time to read out the authentication header; this is done on the server side with the following extension method:
HttpRequestExtensions.cs
using System.IO.Pipelines;
using System.Text;
using System.Xml.Linq;
public static class HttpRequestExtensions
{
public static async Task<string?> GetAuthenticationHeaderFromSoapEnvelope(this HttpRequest request)
{
ReadResult requestBodyInBytes = await request.BodyReader.ReadAsync();
string body = Encoding.UTF8.GetString(requestBodyInBytes.Buffer.FirstSpan);
request.BodyReader.AdvanceTo(requestBodyInBytes.Buffer.Start, requestBodyInBytes.Buffer.End);
string? authTicketFromHeader = null;
if (body?.Contains(@"http://schemas.xmlsoap.org/soap/envelope/") == true)
{
XNamespace ns = "http://schemas.xmlsoap.org/soap/envelope/";
var soapEnvelope = XDocument.Parse(body);
var headers = soapEnvelope.Descendants(ns + "Header").ToList();
foreach (var header in headers)
{
var authorizationElement = header.Element("Authorization");
if (!string.IsNullOrWhiteSpace(authorizationElement?.Value))
{
authTicketFromHeader = authorizationElement.Value;
break;
}
}
}
return authTicketFromHeader;
}
}
Note the use of the BodyReader and its AdvanceTo method. This was the only way to rewind the request stream after reading the Authorization header from the SOAP envelope; it took me hours to figure out why this failed in the ASP.NET Core pipeline, until I found a tip in a GitHub discussion thread on Core WCF mentioning the error.
See more documentation about BodyWriter and BodyReader from MVP Steve Gordon here:
https://www.stevejgordon.co.uk/using-the-bodyreader-and-bodywriter-in-asp-net-core-3-0
I have tested out CoreWCF a bit, and it is good to see WCF once again in a modern framework such as ASP.NET Core.
Here is how you can increase timeouts for a CoreWCF client. You can put the timeout into an appsettings file too if you want.
First off, after having added a service reference to your WCF service, look inside the Reference.cs file.
Make note of:
The namespace in the Reference.cs file
The class name of the client
My client uses these Nuget packages in its csproj :
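The package list is not shown in this excerpt; a svcutil-generated client typically needs the System.ServiceModel client packages, along these lines (the versions are placeholders):

```xml
<ItemGroup>
  <PackageReference Include="System.ServiceModel.Http" Version="4.10.2" />
  <PackageReference Include="System.ServiceModel.Primitives" Version="4.10.2" />
</ItemGroup>
```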
Looking inside the Reference.cs file, a partial method called ConfigureEndpoint is listed:
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Tools.ServiceModel.Svcutil", "2.1.0")]
public partial class ServiceClient : System.ServiceModel.ClientBase<MyService.IService>, MyService.IService
{
    /// <summary>
    /// Implement this partial method to configure the service endpoint.
    /// </summary>
    /// <param name="serviceEndpoint">The endpoint to configure</param>
    /// <param name="clientCredentials">The client credentials</param>
    static partial void ConfigureEndpoint(System.ServiceModel.Description.ServiceEndpoint serviceEndpoint, System.ServiceModel.Description.ClientCredentials clientCredentials);
    //more code
Next up, we implement this method to configure the binding.
namespace MyService
{
    public partial class ServiceClient
    {
        /// <summary>
        /// Implement this partial method to configure the service endpoint.
        /// </summary>
        /// <param name="serviceEndpoint">The endpoint to configure</param>
        /// <param name="clientCredentials">The client credentials</param>
        static partial void ConfigureEndpoint(System.ServiceModel.Description.ServiceEndpoint serviceEndpoint, System.ServiceModel.Description.ClientCredentials clientCredentials)
        {
            serviceEndpoint.Binding.OpenTimeout
                = serviceEndpoint.Binding.CloseTimeout
                = serviceEndpoint.Binding.ReceiveTimeout
                = serviceEndpoint.Binding.SendTimeout = TimeSpan.FromSeconds(15);
        }
    }
}
We also want to be able to configure the timeout rather than hardcoding it.
Let's also add the following NuGet packages to the client (I have a .NET 6 console app):
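The package list itself was not included here; judging from the configuration code that follows (ServiceCollection, ConfigurationBuilder, AddJsonFile), the references would be along these lines (the versions are placeholders):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="6.0.0" />
  <PackageReference Include="Microsoft.Extensions.DependencyInjection" Version="6.0.0" />
</ItemGroup>
```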
We can avoid hardcoding timeouts by adding an appsettings.json file to our project and setting the file to copy to the output folder.
If you are inside a console project, you can add the JSON config file like this. Preferably it would be registered in some shared setup in Program.cs, but I found it a bit challenging to consume it from a static method, so I ended up with this:
/// <summary>
/// Implement this partial method to configure the service endpoint.
/// </summary>
/// <param name="serviceEndpoint">The endpoint to configure</param>
/// <param name="clientCredentials">The client credentials</param>
static partial void ConfigureEndpoint(System.ServiceModel.Description.ServiceEndpoint serviceEndpoint, System.ServiceModel.Description.ClientCredentials clientCredentials)
{
var serviceProvider = new ServiceCollection()
.AddSingleton(_ =>
new ConfigurationBuilder()
.SetBasePath(Path.Combine(AppContext.BaseDirectory))
.AddJsonFile("appsettings.json", optional: true)
.Build())
.BuildServiceProvider();
var config = serviceProvider.GetService<IConfigurationRoot>();
int timeoutInSeconds = int.Parse(config!["ServiceTimeoutInSeconds"]);
serviceEndpoint.Binding.OpenTimeout
= serviceEndpoint.Binding.CloseTimeout
= serviceEndpoint.Binding.ReceiveTimeout
= serviceEndpoint.Binding.SendTimeout = TimeSpan.FromSeconds(timeoutInSeconds);
}
And we have our appsettings.json file :
{
"ServiceTimeoutInSeconds" : 9
}
The CoreWCF project has an upgrade tool that will do a lot of the migration for you. WCF had a lot of config settings, and creating an appsettings.json entry for every setting would be some work. The upgrade tool should take care of generating some of these config values and adding them into dedicated JSON files.
The speech synthesis service of Azure AI is accessed via a REST service. You can actually test it out first in Postman, retrieving an access token via a dedicated endpoint and then
calling the text-to-speech endpoint using the access token as a bearer token.
To get the demo working, you have to create the necessary resources / services inside the Azure Portal. This article focuses on the Speech service.
Important: if you want to test out the demo yourself, remember to put the keys into environment variables so they are not exposed via source control.
To get started with speech synthesis in Azure Cognitive Services, add a Speech service resource via the Azure Portal.
https://learn.microsoft.com/en-us/azure/ai-services/speech-service/overview
We also need to add audio capability to our demo, which is a .NET MAUI Blazor app. The NuGet package used is the following:
MultiLingual.Translator.csproj
This Nuget package's website is here:
https://github.com/jfversluis/Plugin.Maui.Audio
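The csproj reference itself is not shown in this excerpt; it would be a single PackageReference for the package named above (the version is a placeholder):

```xml
<ItemGroup>
  <PackageReference Include="Plugin.Maui.Audio" Version="1.0.0" />
</ItemGroup>
```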
MauiProgram.cs looks like the following; make note of AudioManager.Current, which is registered as a singleton.
MauiProgram.cs
using Microsoft.Extensions.Configuration;
using MultiLingual.Translator.Lib;
using Plugin.Maui.Audio;
namespace MultiLingual.Translator;

public static class MauiProgram
{
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.ConfigureFonts(fonts =>
{
fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
});
builder.Services.AddMauiBlazorWebView();
#if DEBUG
builder.Services.AddBlazorWebViewDeveloperTools();
#endif
builder.Services.AddSingleton(AudioManager.Current);
builder.Services.AddTransient<MainPage>();
builder.Services.AddScoped<IDetectLanguageUtil, DetectLanguageUtil>();
builder.Services.AddScoped<ITranslateUtil, TranslateUtil>();
builder.Services.AddScoped<ITextToSpeechUtil, TextToSpeechUtil>();
var config = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
builder.Configuration.AddConfiguration(config);
return builder.Build();
}
}
Next up, let's look at TextToSpeechUtil. This class is a service that does two things against the REST API of the text-to-speech Azure Cognitive AI service:
Fetch an access token
Synthesize text to speech
TextToSpeechUtil.cs
using Microsoft.Extensions.Configuration;
using MultiLingual.Translator.Lib.Models;
using System.Security;
using System.Text;
namespace MultiLingual.Translator.Lib
{
public class TextToSpeechUtil : ITextToSpeechUtil
{
public TextToSpeechUtil(IConfiguration configuration)
{
_configuration = configuration;
}
public async Task<TextToSpeechResult> GetSpeechFromText(string text, string language, TextToSpeechLanguage[] actorVoices, string? preferredVoiceActorId)
{
var result = new TextToSpeechResult();
result.Transcript = GetSpeechTextXml(text, language, actorVoices, preferredVoiceActorId, result);
result.ContentType = _configuration[TextToSpeechSpeechContentType];
result.OutputFormat = _configuration[TextToSpeechSpeechXMicrosoftOutputFormat];
result.UserAgent = _configuration[TextToSpeechSpeechUserAgent];
result.AvailableVoiceActorIds = ResolveAvailableActorVoiceIds(language, actorVoices);
result.LanguageCode = language;
string? token = await GetUpdatedToken();
HttpClient httpClient = GetTextToSpeechWebClient(token);
string ttsEndpointUrl = _configuration[TextToSpeechSpeechEndpoint];
var response = await httpClient.PostAsync(ttsEndpointUrl, new StringContent(result.Transcript, Encoding.UTF8, result.ContentType));
using (var memStream = new MemoryStream()) {
var responseStream = await response.Content.ReadAsStreamAsync();
await responseStream.CopyToAsync(memStream);
result.VoiceData = memStream.ToArray();
}
return result;
}
private async Task<string?> GetUpdatedToken()
{
string? token = _token?.ToNormalString();
if (_lastTimeTokenFetched == null || DateTime.Now.Subtract(_lastTimeTokenFetched.Value).TotalMinutes > 8) // TotalMinutes, not Minutes: Minutes is only the 0-59 minute component of the interval
{
token = await GetIssuedToken();
}
return token;
}
private HttpClient GetTextToSpeechWebClient(string? token)
{
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
httpClient.DefaultRequestHeaders.Add("X-Microsoft-OutputFormat", _configuration[TextToSpeechSpeechXMicrosoftOutputFormat]);
httpClient.DefaultRequestHeaders.Add("User-Agent", _configuration[TextToSpeechSpeechUserAgent]);
return httpClient;
}
private string GetSpeechTextXml(string text, string language, TextToSpeechLanguage[] actorVoices, string? preferredVoiceActorId, TextToSpeechResult result)
{
result.VoiceActorId = ResolveVoiceActorId(language, preferredVoiceActorId, actorVoices);
string speechXml = $@"
<speak version='1.0' xml:lang='en-US'>
<voice xml:lang='en-US' xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice {result.VoiceActorId}'>
<prosody rate='1'>{text}</prosody>
</voice>
</speak>";
return speechXml;
}
private List<string> ResolveAvailableActorVoiceIds(string language, TextToSpeechLanguage[] actorVoices)
{
if (actorVoices?.Any() == true)
{
var voiceActorIds = actorVoices.Where(v => v.LanguageKey == language || v.LanguageKey.Split("-")[0] == language).SelectMany(v => v.VoiceActors).Select(v => v.VoiceId).ToList();
return voiceActorIds;
}
return new List<string>();
}
private string ResolveVoiceActorId(string language, string? preferredVoiceActorId, TextToSpeechLanguage[] actorVoices)
{
string actorVoiceId = "(en-AU, NatashaNeural)"; // default to a selected voice actor id
if (actorVoices?.Any() == true)
{
var voiceActorsForLanguage = actorVoices.Where(v => v.LanguageKey == language || v.LanguageKey.Split("-")[0] == language).SelectMany(v => v.VoiceActors).Select(v => v.VoiceId).ToList();
if (voiceActorsForLanguage != null)
{
if (voiceActorsForLanguage.Any() == true)
{
var resolvedPreferredVoiceActorId = voiceActorsForLanguage.FirstOrDefault(v => v == preferredVoiceActorId);
if (!string.IsNullOrWhiteSpace(resolvedPreferredVoiceActorId))
{
return resolvedPreferredVoiceActorId!;
}
actorVoiceId = voiceActorsForLanguage.First();
}
}
}
return actorVoiceId;
}
private async Task<string> GetIssuedToken()
{
var httpClient = new HttpClient();
string? textToSpeechSubscriptionKey = Environment.GetEnvironmentVariable("AZURE_TEXT_SPEECH_SUBSCRIPTION_KEY", EnvironmentVariableTarget.Machine);
httpClient.DefaultRequestHeaders.Add(OcpApiSubscriptionKeyHeaderName, textToSpeechSubscriptionKey);
string tokenEndpointUrl = _configuration[TextToSpeechIssueTokenEndpoint];
var response = await httpClient.PostAsync(tokenEndpointUrl, new StringContent("{}"));
_token = (await response.Content.ReadAsStringAsync()).ToSecureString();
_lastTimeTokenFetched = DateTime.Now;
return _token.ToNormalString();
}
private const string OcpApiSubscriptionKeyHeaderName = "Ocp-Apim-Subscription-Key";
private const string TextToSpeechIssueTokenEndpoint = "TextToSpeechIssueTokenEndpoint";
private const string TextToSpeechSpeechEndpoint = "TextToSpeechSpeechEndpoint";
private const string TextToSpeechSpeechContentType = "TextToSpeechSpeechContentType";
private const string TextToSpeechSpeechUserAgent = "TextToSpeechSpeechUserAgent";
private const string TextToSpeechSpeechXMicrosoftOutputFormat = "TextToSpeechSpeechXMicrosoftOutputFormat";
private readonly IConfiguration _configuration;
private DateTime? _lastTimeTokenFetched = null;
private SecureString? _token = null;
}
}
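One detail worth calling out in GetUpdatedToken: the Azure token service issues access tokens that are valid for roughly ten minutes, so the class re-fetches after eight to stay safely inside that window. The check can be isolated as a small pure function, shown here as a sketch (the class and method names below are illustrative, not part of the demo's code):

```csharp
using System;

public static class TokenRefreshPolicy
{
    // Azure Cognitive Services access tokens expire after ~10 minutes,
    // so refresh once more than 8 minutes have elapsed since the last fetch.
    // TimeSpan.TotalMinutes is the right property here: TimeSpan.Minutes
    // only returns the 0-59 minute component of the interval.
    public static bool ShouldRefreshToken(DateTime? lastFetched, DateTime now) =>
        lastFetched == null || now.Subtract(lastFetched.Value).TotalMinutes > 8;
}
```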
Let's look at the appsettings.json file. The Ocp-Apim-Subscription-Key is put into an environment variable; this is a secret key you do not want to expose, to avoid leaking the key and running up costs for usage of the service.
Appsettings.json
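Given the configuration keys read by TextToSpeechUtil above, the file's shape is roughly the following. This is an illustrative sketch, not the demo's actual file: the region placeholder must be replaced with your own Speech resource's region, and the content type and output format values shown are the documented Azure defaults for SSML requests and MP3 output.

```json
{
  "TextToSpeechIssueTokenEndpoint": "https://<your-region>.api.cognitive.microsoft.com/sts/v1.0/issueToken",
  "TextToSpeechSpeechEndpoint": "https://<your-region>.tts.speech.microsoft.com/cognitiveservices/v1",
  "TextToSpeechSpeechContentType": "application/ssml+xml",
  "TextToSpeechSpeechUserAgent": "MultiLingual.Translator",
  "TextToSpeechSpeechXMicrosoftOutputFormat": "audio-16khz-32kbitrate-mono-mp3"
}
```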
Next up, I have gathered the voice actor ids for the languages in Azure Cognitive Services that have them. These cover the best-known languages among the roughly 150 languages Azure supports; see the following JSON for an overview of voice actor ids.
For example, Norwegian has three voice actors: neural-net-trained AI voices for realistic speech synthesis.
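As an illustration of the shape of such an entry, a Norwegian language entry might look like the sketch below. The property names follow the LanguageKey, VoiceActors, and VoiceId members used in the code above, and the voice names are Azure's documented Norwegian neural voices, but the exact JSON layout of the demo's bundled file is an assumption here:

```json
{
  "LanguageKey": "nb-NO",
  "VoiceActors": [
    { "VoiceId": "(nb-NO, PernilleNeural)" },
    { "VoiceId": "(nb-NO, FinnNeural)" },
    { "VoiceId": "(nb-NO, IselinNeural)" }
  ]
}
```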
Let's look at the source code for calling the TextToSpeechUtil.cs shown above from a MAUI Blazor app view, Index.razor.
The code shown below consists of two private methods that do the work of retrieving the audio file from the Azure Speech Service: first, all the voice actor ids are loaded from the bundled JSON file of voice actors displayed above and deserialized into a list of voice actors.
Retrieving the audio file passes in the translated text to generate synthesized speech for, along with the target language, all available actor voices, and the preferred voice actor id, if set.
What is retrieved is metadata plus the audio file, in MP3 format. This file format is recognized by, for example, Windows without any additional codec libraries having to be installed.
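As a rough sketch of these two steps, the following shows how loading the bundled voices file and playing the synthesized MP3 bytes can be wired together. The GetSpeechFromText call, the TextToSpeechLanguage type, MAUI's FileSystem.OpenAppPackageFileAsync, and Plugin.Maui.Audio's IAudioManager.CreatePlayer are real APIs from this article and its libraries; the method names, injected field names, and the bundled file name are illustrative assumptions:

```csharp
// Assumed injected fields in the razor @code block:
//   private readonly ITextToSpeechUtil _textToSpeechUtil;  // registered in MauiProgram.cs
//   private readonly IAudioManager _audioManager;          // AudioManager.Current singleton
// Assumed usings: System.Text.Json, Microsoft.Maui.Storage, Plugin.Maui.Audio

private async Task<TextToSpeechLanguage[]> LoadActorVoices()
{
    // MAUI bundles raw assets with the app; read the voices file by name (name assumed here).
    using var stream = await FileSystem.OpenAppPackageFileAsync("voices.json");
    using var reader = new StreamReader(stream);
    string json = await reader.ReadToEndAsync();
    return JsonSerializer.Deserialize<TextToSpeechLanguage[]>(json) ?? Array.Empty<TextToSpeechLanguage>();
}

private async Task Speak(string translatedText, string targetLanguage, string? preferredVoiceActorId)
{
    var actorVoices = await LoadActorVoices();
    var result = await _textToSpeechUtil.GetSpeechFromText(
        translatedText, targetLanguage, actorVoices, preferredVoiceActorId);

    // Plugin.Maui.Audio plays the MP3 bytes straight from a stream; no extra codecs needed.
    var player = _audioManager.CreatePlayer(new MemoryStream(result.VoiceData));
    player.Play();
}
```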
Index.razor (Inside the @code block { .. } of that razor file)
A screenshot shows how the demo app now looks. You can translate text into another language and then have speech synthesis in Azure Cognitive Services generate realistic audio of the translated text, so you can experience not only how the text is translated, but also how it is pronounced.