<b>Coding Grounds</b> - Compilation of different programming projects I amuse myself with. By Tore Aurstad.
<br /><br />
<b>Importing Json File to SQL Server into a variable</b> (2024-03-27)
<br />
A short article today on how to import a JSON file into a variable in SQL Server, which can then be used to insert it into a column of type NVARCHAR(MAX) in a table. The maximum size of NVARCHAR(MAX) is 2 GB, so you can import
large JSON files using this data type. If the JSON is small, below 4000 characters, you can use for example NVARCHAR(4000) instead.
Here is a SQL script that imports the JSON file using OPENROWSET with a bulk import. We also pass in the path to the folder where the JSON file is located; it is put in the same folder as the .sql script.
Note that the variable $(FullScriptDir) is passed in via a .bat file (shown further below), and we expect the .json file to be in the same folder as the .bat file.
You could instead provide a full path to the .json file and skip the .bat file, but it is convenient to load the .json file from the same folder as the .sql file in case you want to
copy the .sql and .json files to another server, so you avoid having to provide, and possibly adjust, a full path.
Sql-script <b>import_json_file_openrowset.sql</b>:
<pre>
<code class='hljs sql'>
DECLARE @JSONFILE VARCHAR(MAX);
SELECT @JSONFILE = BulkColumn
FROM OPENROWSET (BULK '$(FullScriptDir)\top-posts.json', SINGLE_CLOB) AS j;
PRINT 'JsonFile contents: ' + @JSONFILE
IF (ISJSON(@JSONFILE)=1) PRINT 'It is valid Json';
</code>
</pre>
The .bat file here passes the current folder as a variable to the sql script
<b>runsqlscript.bat</b>
<pre>
<code class='hljs cmd'>
@set FullScriptDir=%CD%
sqlcmd -S .\SQLEXPRESS -i import_json_file_openrowset.sql
</code>
</pre>
This outputs:
<pre>
<code class='hljs bash'>
sqlcmd -S .\SQLEXPRESS -i import_json_file_openrowset.sql
JsonFile contents: [
{
"Id":6107,
"Score":176,
"ViewCount":155988,
"Title":"What are deconvolutional layers?",
"OwnerUserId":8820
},
{
"Id":155,
"Score":164,
"ViewCount":25822,
"Title":"Publicly Available Datasets",
"OwnerUserId":227
}
]
It is valid Json
</code>
</pre>
With the variable @JSONFILE you can do whatever you need, such as inserting it into a column of a new row in a table.
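For example, assuming a hypothetical table named <b>JsonDocuments</b> with an NVARCHAR(MAX) column (the table and column names below are made up for illustration, adjust them to your schema), the variable could be stored like this:

```sql
-- Hypothetical target table for the imported JSON
CREATE TABLE JsonDocuments (
    Id INT IDENTITY(1,1) PRIMARY KEY,
    Contents NVARCHAR(MAX) NOT NULL
);

-- Insert the JSON loaded into @JSONFILE as a new row
INSERT INTO JsonDocuments (Contents)
VALUES (@JSONFILE);
```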
<br />
<b>Importing json from a string directly using OPENJSON</b>
<br /><br />
It is also possible to directly just import the JSON from a string variable like this:
<pre>
<code class='hljs sql'>
DECLARE @JSONSTRINGSAMPLE VARCHAR(MAX)
SET @JSONSTRINGSAMPLE = N'[
{
"Id": 2334,
"Score": 4.3,
"Title": "Python - Used as scientific tool for graphing"
},
{
"Id": 2335,
"Score": 5.2,
"Title": "C# : Math and physics programming"
}
]';
SELECT * FROM OPENJSON (@JSONSTRINGSAMPLE) WITH (
Id INT,
Score REAL,
Title NVARCHAR(100)
)
</code>
</pre>
<img width="800" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkIP-0aXE26oFfX4bPK-45LFl83KbamJSoqUuOzey46fWPkLAAVvNpxvrLGtzDKbL2-TCwWUbIFVEF4Re3jdGlr1q5I0NbywzcRnXEX9DpEYZNu-760QhI3rVo9dToCPSgT5R2dvEriRPn3Gqqps7LhtN837U18riDiiz4RZzippvB9B0fydarLKMUTX0/s1600/json_import_vs_code.png"/>
<b>Functional programming - Fork combinator in C# to combine results from parts</b> (2024-03-19)
<br />
This article will discuss a well-known combinator called <em>Fork</em>, which allows you to combine mapped results.
Consider the following extension methods to <em>fork</em> on an object. <em>Fork</em> here means to operate on parts of the object, such as <br>
different properties, apply functions to these parts, and then recombine the results into a combined result via a specified combinator function, sometimes
called a 'join function'.
<pre>
<code class='hljs csharp'>
public static class FunctionalExtensions {
public static TOutput Map<TInput, TOutput>(
this TInput @this,
Func<TInput, TOutput> func) => func(@this);
public static TOutput Fork<TInput, TMiddle, TOutput>(
this TInput @this,
Func<IEnumerable<TMiddle>, TOutput> combineFunc,
params Func<TInput, TMiddle>[] parts)
{
var intermediateResults = parts.Select(p => p(@this));
var result = combineFunc(intermediateResults);
return result;
}
public static TOutput Fork<TInput, TMiddle, TOutput>(
this TInput @this,
Func<TInput, TMiddle> leftFunc,
Func<TInput, TMiddle> rightFunc,
Func<TMiddle, TMiddle, TOutput> combineFunc)
{
var leftResult = leftFunc(@this); // @this.Map(leftFunc);
var rightResult = rightFunc(@this); // @this.Map(rightFunc);
var combineResult = combineFunc(leftResult, rightResult);
return combineResult;
}
}
</code>
</pre>
Let's take a familiar mathematical example: calculating the <em>hypotenuse</em> of a triangle using the Pythagorean theorem. It states that the length of the longest side A of a right triangle (the hypotenuse) is the
square root of the sum of the squares of the shorter sides B and C:
<code>
A = √(B² + C²)
</code>
Consider this class:
<pre>
<code class='hljs csharp'>
public class Triangle {
public double CathetusA { get; set; }
public double CathetusB { get; set; }
public double Hypotenuse { get; set; }
}
</code>
</pre>
Let's test the first <em>Fork</em> helper extension method accepting two functions for specifying the <em>left</em> and <em>right</em> components:
<pre>
<code class='hljs csharp'>
var triangle = new Triangle
{
CathetusA = 3,
CathetusB = 4
};
triangle.Hypotenuse = triangle.Fork(
t => t.CathetusA * t.CathetusA,
t => t.CathetusB * t.CathetusB,
(l,r) => Math.Sqrt(l+r));
Console.WriteLine(triangle.Hypotenuse);
</code>
</pre>
This yields <em>'5'</em> as the answer via the <em>forked result</em> above. A simple example, but it shows how we can apply simple combinatory
logic to an object of any type using functional programming (FP).
Let's look at another example, combining multiple properties of an object with a simple string join, this time using the Fork overload that supports an arbitrary number of parts / components:
<pre class='csharp hljs'>
<code class='hljs csharp'>
public class Person {
public string JobTitle { get; set; }
public string FirstName { get; set; }
public IEnumerable<string> MiddleNames { get; set; }
public string LastName { get; set; }
}
var person = new Person{
JobTitle = "Detective",
FirstName = "Alexander",
MiddleNames = new[] { "James", "Axel" },
LastName = "Foley"
};
string contactCardText = person.Fork(parts => string.Join(" ", parts), p => p.FirstName,
    p => string.Join(" ", p.MiddleNames), p => p.LastName);
Console.WriteLine(contactCardText);
</code>
</pre>
This yields:
<code>
Alexander James Axel Foley
</code>
Fork can be very useful whenever you need to 'branch off' on an object and recombine parts of it with some specific function, either two parts or multiple parts, and then either continue
working on the result or return it.
<b>Functional programming - the Tee function to inspect current state in a chained expression</b> (2024-03-10)
<br />
In this article we will first look at helper extension methods for StringBuilder that better support chaining.
We will work on the same StringBuilder instance and add support for appending lines or characters to the StringBuilder given a
condition. An example of aggregating lines from a sequence is also shown, as well as appending formatted lines. Since C# string interpolation
has become so easy to use, I would suggest you keep using AppendLine with interpolated strings instead.
Here are the helper methods in the extension class:
<pre>
<code class='hljs csharp'>
public static class StringBuilderExtensions {
public static StringBuilder AppendSequence<T>(this StringBuilder @this, IEnumerable<T> sequence, Func<StringBuilder, T, StringBuilder> fn)
{
var sb = sequence.Aggregate(@this, fn);
return sb;
}
public static StringBuilder AppendWhen(this StringBuilder @this, Func<bool> condition, Func<StringBuilder, StringBuilder> fn) =>
condition() ? fn(@this) : @this;
public static StringBuilder AppendFormattedLine(
this StringBuilder @this,
string format,
params object[] args) =>
@this.AppendFormat(format, args).AppendLine();
}
</code>
</pre>
Now consider this example usage:
<pre>
<code class='hljs csharp'>
void Main()
{
var countries = new Dictionary<int, string>{
{ 1, "Norway" },
{ 2, "France" },
{ 3, "Austria" },
{ 4, "Sweden" },
{ 5, "Finland" },
{ 6, "Netherlands" }
};
string options = BuildSelectBox(countries, "countriesSelect", true);
options.Dump("Countries"); //dump is a method available in Linqpad to output objects
}
private static string BuildSelectBox(IDictionary<int, string> options, string id, bool includeUnknown) =>
new StringBuilder()
.AppendFormattedLine($"<select id=\"{id}\" name=\"{id}\">")
.AppendWhen(() => includeUnknown, sb => sb.AppendLine("\t<option value=\"0\">Unknown</option>"))
.AppendSequence(options, (sb, item) => sb.AppendFormattedLine("\t<option value=\"{0}\">{1}</option>", item.Key, item.Value))
.AppendLine($"</select>").ToString();
</code>
</pre>
What if we wanted to inspect the state of the StringBuilder in the middle of these chained expressions? Is it possible to output state in such lengthy chained functional expressions?
Yes, via what is called the <em>Tee</em> method in functional programming patterns. Others call it <em>Tap</em>, as used in Rx libraries.
The Tee method looks like this:
<pre>
<code class='hljs csharp'>
public static class FunctionalExtensions {
public static T Tee<T>(this T @this, Action<T> act) {
act(@this);
return @this;
}
}
</code>
</pre>
We can now inspect state in the middle of chained functional expressions.
<pre>
<code class='hljs csharp'>
private static string BuildSelectBox(IDictionary<int, string> options, string id, bool includeUnknown) =>
new StringBuilder()
.AppendFormattedLine($"<select id=\"{id}\" name=\"{id}\">")
.AppendWhen(() => includeUnknown, sb => sb.AppendLine("\t<option value=\"0\">Unknown</option>"))
.Tee(Console.WriteLine)
.AppendSequence(options, (sb, item) => sb.AppendFormattedLine("\t<option value=\"{0}\">{1}</option>", item.Key, item.Value))
.AppendLine($"</select>").ToString();
</code>
</pre>
The picture below shows the output:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho-8-vD3PkaGF3bgAAibQ8s8xS1vIPNXt79fwkKo89lBEDlxrUqnraPh8XHzQRNXjtstA08HkKj-7ZBbOs80S7-PHQjUO3VEMcVCC4hiNyieuStPN7XYC8Pz6YNeBwbJ1p8xezPCuKyg_0_2oXhBL-C-AEfvYZiacwF3ptPdK44avIbgXlJioy6mr7i8E/s1600/functional_programming_tee_output.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="288" data-original-width="423" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEho-8-vD3PkaGF3bgAAibQ8s8xS1vIPNXt79fwkKo89lBEDlxrUqnraPh8XHzQRNXjtstA08HkKj-7ZBbOs80S7-PHQjUO3VEMcVCC4hiNyieuStPN7XYC8Pz6YNeBwbJ1p8xezPCuKyg_0_2oXhBL-C-AEfvYZiacwF3ptPdK44avIbgXlJioy6mr7i8E/s1600/functional_programming_tee_output.png"/></a></div>
So there you have it: if you have lengthy chained functional expressions, make such a <em>Tee</em> helper method to peek into the state so far. The name <em>Tee</em> stems from the Unix command of the same name, which copies content from STDIN to STDOUT while also writing it to one or more files.
More about the <em>Tee</em> Unix command here:
<pre>
<a href='https://shapeshed.com/unix-tee/'>https://shapeshed.com/unix-tee/</a>
</pre>
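As a quick illustration of the Unix command (assuming a POSIX shell is available), <em>tee</em> writes its input both to STDOUT and to the given file, much like the Tee helper above passes the value on while letting you observe it:

```shell
# Send a line both to the terminal and to a file
echo "inspecting state" | tee /tmp/tee_demo.txt

# The file now holds a copy of the piped content
cat /tmp/tee_demo.txt
```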
<b>Functional programming - looking up current time and encapsulating usings</b> (2024-03-09)
<br />
Today I looked at encapsulating using statements for functional programming, and at how to look up the current time with an API available on the Internet.
<pre>
<code class='hljs csharp'>
public static class Disposable {
public static TResult Using<TDisposable,TResult>(
Func<TDisposable> factory,
Func<TDisposable, TResult> map)
where TDisposable : IDisposable
{
using (var disposable = factory()){
return map(disposable);
}
}
}
void Main()
{
var currentTime = EpochTime.AddSeconds(Disposable
.Using(() => new HttpClient(),
client => JsonDocument.Parse(client.GetStringAsync(@"http://worldtimeapi.org/api/timezone/europe/oslo").Result))
.RootElement
.GetProperty("unixtime")
.GetInt64()).ToLocalTime(); //list of time zones available here: http://worldtimeapi.org/api/timezone
currentTime.Dump("CurrentTime");
}
public static DateTime EpochTime => new DateTime(1970, 1, 1);
</code>
</pre>
The disposable is abstracted away in the helper method called <em>Using</em>, which accepts a factory function creating a <em>TDisposable</em> (constrained to <em>IDisposable</em>) and a map function producing the result.
We look up the current time using the <em>WorldTimeApi</em> and extract the Unix time, which is measured from <em>Epoch</em> as the number of seconds elapsed since 1st January 1970.
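As a side note, .NET also has a built-in conversion from Unix time via DateTimeOffset, which could replace the hand-rolled EpochTime helper; a minimal sketch:

```csharp
using System;

public static class UnixTimeSketch
{
    public static void Main()
    {
        // 1_700_000_000 seconds after 1970-01-01T00:00:00Z (a UTC instant in November 2023)
        DateTimeOffset utc = DateTimeOffset.FromUnixTimeSeconds(1_700_000_000);
        Console.WriteLine(utc);

        // Equivalent to EpochTime.AddSeconds(...).ToLocalTime() in the code above
        Console.WriteLine(utc.LocalDateTime);
    }
}
```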
We make use of <em>System.Text.Json</em> here, which is part of .NET, to parse the retrieved JSON.
<br /><br />
<b>Currying functions in C#</b> (2024-03-07)
<br />
This article will look into helper methods for currying functions in C#. Currying means splitting up a function with multiple arguments
into a chain of functions that each accept one argument. You can also supply only some of the arguments up front (known as partial application), so be aware of this alternative as well.
What is in the name currying? The name has nothing to do with cooking from India, but comes from the mathematician Haskell Brooks Curry (!) <br /><br />
<a href='https://en.wikipedia.org/wiki/Haskell_Curry'>https://en.wikipedia.org/wiki/Haskell_Curry</a><br /><br />
A reason for introducing support for currying is that you can build complex functions from simpler functions as building blocks. Currying is explained
great here:<br />
<a href='https://www.c-sharpcorner.com/UploadFile/rmcochran/functional-programming-in-C-Sharp-currying/'>https://www.c-sharpcorner.com/UploadFile/rmcochran/functional-programming-in-C-Sharp-currying/</a>
<br /><br />
We will see in the examples that we can provide multiple arguments at once, and the syntax will look a bit special compared to other C# code.
The benefit of currying is a more flexible way to call a method. You can store calls to a function providing a subset of its arguments in variables,
and use such a variable either to make another intermediate call or to get the final result.
Note: the underlying function is only invoked once ALL arguments have been provided, and it is invoked exactly once. This helps a lot in avoiding surprising side effects.
Let's first look at a sample set of methods we want to support currying.
<pre>
<code class='hljs csharp'>
int FooFourArgs(string st, float x, int j, int k)
{
Console.WriteLine($"Inside method FooFourArgs. Got parameters: st={st}, x={x}, j={j}, k={k}");
return 42;
}
int FooThreeArgs(string st, float x, int j)
{
Console.WriteLine($"Inside method FooThreeArgs. Got parameters: st={st}, x={x}, j={j}");
return 42;
}
int FooTwoArgs(string st, float x)
{
Console.WriteLine($"Inside method FooTwoArgs. Got parameters: st={st}, x={x}");
return 41;
}
int FooOneArgs(string st)
{
Console.WriteLine($"Inside method FooOneArgs. Got parameters: st={st}");
return 40;
}
</code>
</pre>
We want to call the sample methods above in a more flexible way by splitting the number of arguments we provide.
Let's see the extension methods to call up to four arguments to a function. Note the use of <em>chaining</em> the <em>lambda operator</em> (=>) to provide
the support for currying.
<pre>
<code class='hljs csharp'>
public static class FunctionExtensions
{
public static Func<T1, TResult> Curried<T1, TResult>(this Func<T1, TResult> func)
{
return x1 => func(x1);
}
public static Func<T1, Func<T2, TResult>> Curried<T1, T2, TResult>(this Func<T1, T2, TResult> func)
{
return x1 => x2 => func(x1, x2);
}
public static Func<T1, Func<T2, Func<T3, TResult>>> Curried<T1, T2, T3, TResult>(this Func<T1, T2, T3, TResult> func)
{
return x1 => x2 => x3 => func(x1, x2, x3);
}
public static Func<T1, Func<T2, Func<T3, Func<T4, TResult>>>> Curried<T1, T2, T3, T4, TResult>(this Func<T1, T2, T3, T4, TResult> func)
{
return x1 => x2 => x3 => x4 => func(x1, x2, x3,x4);
}
}
</code>
</pre>
The following main method shows how to use these curry helper methods:
<pre>
<code class='hljs csharp'>
void Main()
{
var curryOneArgsDelegate = new Func<string, int>((st) => FooOneArgs(st)).Curried();
var curryOneArgsPhaseOne = curryOneArgsDelegate("hello");
var curryTwoArgsDelegate = new Func<string, float, int>((st, x) => FooTwoArgs(st,x)).Curried();
var curryTwoArgsPhaseOne = curryTwoArgsDelegate("hello");
var curryTwoArgsPhaseTwo = curryTwoArgsPhaseOne(3.14f);
var curryThreeArgsDelegate = new Func<string, float, int, int>((st, x, j) => FooThreeArgs(st, x, j)).Curried();
var curryThreeArgsPhaseOne = curryThreeArgsDelegate("hello");
var curryThreeArgsPhaseTwo = curryThreeArgsPhaseOne(3.14f);
var curryThreeArgsPhaseThree = curryThreeArgsPhaseTwo(123);
//Or call currying in a single call passing in two or more parameters
var curryThreeArgsPhaseOneToThree = curryThreeArgsDelegate("hello")(3.14f)(123);
var curryFourArgsDelegate = new Func<string, float, int, int, int>((st, x, j, k) => FooFourArgs(st, x, j, k)).Curried();
var curryFourArgsPhaseOne = curryFourArgsDelegate("hello");
var curryFourArgsNextPhases = curryFourArgsPhaseOne(3.14f)(123)(456); //just pass in the last arguments if they are known at this stage
curryFourArgsDelegate("hello")(3.14f)(123)(456); //you can pass in 1-4 parameters to FooFourArgs method - all in a single call for example or one by one
}
</code>
</pre>
The output we get is shown below. Note that the methods we defined are only called when all parameters are sent in; the calls that provided only a partial argument list did not result in a function call.
<pre>
<code class='hljs csharp'>
Inside method FooOneArgs. Got parameters: st=hello
Inside method FooTwoArgs. Got parameters: st=hello, x=3,14
Inside method FooThreeArgs. Got parameters: st=hello, x=3,14, j=123
Inside method FooThreeArgs. Got parameters: st=hello, x=3,14, j=123
Inside method FooFourArgs. Got parameters: st=hello, x=3,14, j=123, k=456
</code>
</pre>
So from a higher level, currying a function f(x, y, z) means you can call it like this:<br>
f(x)(y)(z) - supplying the arguments one at a time, or several at a time; the more arguments the function has, the more variations of partial calls you can make.
Here is another example of how you can build up a calculation using simpler methods.
<pre>
<code class='hljs csharp'>
void Main()
{
Func<int, int, int> Area = (x,y) => x*y;
Func<int, int, int, int> CubicArea = (x,y,z) => Area.Curried()(Area(x,y))(z);
CubicArea(3,2,4); //supplying all arguments manually is okay
}
</code>
</pre>
CubicArea expects THREE arguments. The implementation lets us reuse the Area function: via currying we apply it to the area of the first two arguments and then provide the third argument, avoiding a compilation error.
Currying makes your functions support more flexible ways of being called.
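To round off, here is a small sketch of partial application with a curried function: storing a partially applied call in a variable gives you a specialized function you can reuse. The logger example below is made up for illustration:

```csharp
using System;

public static class PartialApplicationSketch
{
    public static void Main()
    {
        // A curried 'logger': takes the category first, then the message
        Func<string, Func<string, string>> log =
            category => message => $"[{category}] {message}";

        // Partially apply to build specialized functions
        Func<string, string> warn = log("WARN");
        Func<string, string> info = log("INFO");

        Console.WriteLine(warn("Disk space low"));   // [WARN] Disk space low
        Console.WriteLine(info("Service started"));  // [INFO] Service started
    }
}
```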
<b>Using IronPython to execute Python code from .NET</b> (2024-02-24)
<br />
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzb0-LslxcGDH0CK2UFpK2U9QYk2xdLE3kSA3bVN6p5ENyzLnqnQ7McokMx8uaJCnPPFXtmHjHyJjRn4UYDBuc50wLLLhEE1N2EDLM-TflfanacQ-RCRsW41w9HZOrdSjLntfYh4i98kugVWk4abuPGj_51fDDZ-aEFRDmmv6hpvLZHmCPw_fqGQNoi8o/s1600/ironpython_logo.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="112" data-original-width="509" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhzb0-LslxcGDH0CK2UFpK2U9QYk2xdLE3kSA3bVN6p5ENyzLnqnQ7McokMx8uaJCnPPFXtmHjHyJjRn4UYDBuc50wLLLhEE1N2EDLM-TflfanacQ-RCRsW41w9HZOrdSjLntfYh4i98kugVWk4abuPGj_51fDDZ-aEFRDmmv6hpvLZHmCPw_fqGQNoi8o/s1600/ironpython_logo.png"/></a></div>
Let's look at some code showing how to execute Python code from .NET using IronPython!
IronPython provides support for running Python scripts inside .NET and utilizes the Dynamic Language Runtime (DLR).
The DLR gives the caller dynamic typing and dynamic method dispatch, which are central in dynamic languages such as Python.
IronPython was first released in 2004, some 20 years ago. It has continued to evolve slowly and provides seamless integration into the .NET ecosystem for Python developers.
In this article, I will present some simple code that shows how you can run Python code inside a .NET 8 console application.
We will load up some <em>tuples</em> in an array in some simple Python code, using IronPython.
<b>Tuples in Python</b>
Tuples in Python are immutable, as in C#, and are defined using parentheses with comma-separated values. Python had tuple support over 20 years before C# got it.
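A quick sketch of plain Python tuples, showing packing and unpacking (what .NET calls deconstruction), using made-up sample values:

```python
# 'Packing' values into a tuple
customer = ("Jenna", 42, 165)

# 'Unpacking' the tuple into separate variables
name, age, height = customer

# Tuples are immutable, but easy to recombine into new tuples
taller = (name, age, height + 5)

print(name, age, height, taller)
```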
We will have to add one Nuget package, the IronPython package, in a net8.0 application.
<br /><br />
<b>HelloIronPythonDemo1.csproj</b>
<pre>
<code class='hljs xml'>
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="IronPython" Version="3.4.1" />
</ItemGroup>
<ItemGroup>
<None Update="customers.py">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
</ItemGroup>
</Project>
</code>
</pre>
Consider the following array of tuples in Python :
<br /><br />
<b>customers.py</b>
<pre>
<code class='hljs python'>
customers = [
('Jenna', 42, 165),
('Thor', 40, 174),
('Christopher', 18, 170),
('Liz', 16, 168),
]
</code>
</pre>
Python code is very compact, and you declare variables without specifying a type as you do in C#. While C# got tuple support in C# 7 in 2017, Python has had tuples since its early days. In the Python 1.4 documentation, we find them documented here: <br />
<a href='https://docs.python.org/release/1.4/tut/node37.html#SECTION00630000000000000000'>https://docs.python.org/release/1.4/tut/node37.html#SECTION00630000000000000000</a>.<br /> Bear in mind, this is way back in 1996; C# was over 20 years later with its tuple support.
If you install IronPython, you get a terminal where you can enter Python code (plus additional .NET functionality) as shown below, where tuples are created; tuples may be composed or 'packed' and also 'unpacked', which is called <em>deconstruction</em> for .NET tuples.
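For comparison, a minimal sketch of packing and deconstructing a value tuple on the C# side (C# 7 and later), with made-up sample values:

```csharp
using System;

public static class CsharpTupleSketch
{
    public static void Main()
    {
        // 'Packing' a value tuple with named fields
        var customer = (Name: "Jenna", Age: 42, Height: 165);

        // 'Unpacking' - called deconstruction in C#
        var (name, age, height) = customer;

        Console.WriteLine($"{name} is {age} years and {height} cm tall");
    }
}
```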
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhyphenhyphenaOCBcfnRRVM77Klre7fIKr6RTiZuZziUjWLb90XEEWTGGg2d7tLtb1ENChC0RKZYrisUJPvAfP8Dq4aPdpLsryqH5Bc_0oc2S1I1q7bXb3L-LuegA4mX9LGQGans1DoTLr4hYLp4qsHpyq7zbFKLym8i6qpA4JG0qBlUaNmNPqq1grasZ5gfwuiEDU/s1600/ironpython_terminal.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="558" data-original-width="1087" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhyphenhyphenaOCBcfnRRVM77Klre7fIKr6RTiZuZziUjWLb90XEEWTGGg2d7tLtb1ENChC0RKZYrisUJPvAfP8Dq4aPdpLsryqH5Bc_0oc2S1I1q7bXb3L-LuegA4mX9LGQGans1DoTLr4hYLp4qsHpyq7zbFKLym8i6qpA4JG0qBlUaNmNPqq1grasZ5gfwuiEDU/s1600/ironpython_terminal.png"/></a></div>
To execute code to retrieve this array of tuples, first create a <em>ScriptEngine</em> and then create a
<em>ScriptScope</em>, which we will use to retrieve the Python-declared variable <em>customers</em>.
We create a <em>ScriptSource</em>, where we use the <em>ScriptEngine</em> to load up either a string or a file.
A dynamic variable will be used to get the array of tuples and we can loop through this array with a foreach loop and
output its content.
<br /><br />
<b>Program.cs</b>
<pre>
<code class='csharp hljs'>
using IronPython.Hosting;
using Microsoft.Scripting.Hosting;
using static System.Console;
IronPythonDemo1.OutputSomeExternallyLoadedTuples();
public class IronPythonDemo1
{
public static void OutputSomeExternallyLoadedTuples()
{
var engine = Python.CreateEngine();
ScriptScope scope = engine.CreateScope();
//ScriptSource source = engine.CreateScriptSourceFromString(tupleStatement);
ScriptSource source = engine.CreateScriptSourceFromFile("customers.py");
source.Execute(scope);
dynamic customers = scope.GetVariable("customers");
foreach (var customer in customers)
{
Console.WriteLine($"(Name = {StringExtensions.FixedLength(customer[0], 20)}, Age = {StringExtensions.FixedLength(customer[1].ToString(), 8)}, Height={StringExtensions.FixedLength(customer[2].ToString(), 8)})");
}
}
}
</code>
</pre>
Documentation for named tuples is available here: <a href='https://docs.python.org/3/library/collections.html#collections.namedtuple'>https://docs.python.org/3/library/collections.html#collections.namedtuple</a>
Here is a sample script that, although more verbose, makes it more readable which field is which in a named tuple. In an ordinary tuple, you use indexes to retrieve the nth field (0-based); with named tuples, you use a field name instead.
<pre>
<code class='hljs python'>
from collections import namedtuple
Customer = namedtuple('Customer', ['Name', 'Age', 'Height'])
customers2 = [
Customer(Name = 'Jenna', Age = 42, Height = 165),
Customer(Name = 'Thor', Age = 38, Height = 174),
Customer(Name = 'Christopher', Age = 42, Height = 170),
Customer(Name = 'Liz', Age = 42, Height = 168),
]
for cust in customers2:
print(f"{cust.Name} with a height of {cust.Height}(cm)")
</code>
</pre>
This outputs:
<pre>
<code class='hljs csharp'>
Jenna with a height of 165(cm)
Thor with a height of 174(cm)
Christopher with a height of 170(cm)
Liz with a height of 168(cm)
</code>
</pre>
When your tuple gets many fields, having this readability should reduce bugs. Also, if you add more fields to your tuple, you do not have to fix up indexes in your script. The code is a bit more verbose, but it is also more readable and more open to change.
The FixedLength extension method is a simple method to output text to a fixed width.
<pre>
<code class='csharp hljs'>
public static class StringExtensions
{
public static string FixedLength(this string input, int length, char paddingchar = ' ')
{
if (string.IsNullOrWhiteSpace(input))
{
return input;
}
if (input.Length > length)
return input.Substring(0, length);
else
return input.PadRight(length, paddingchar);
}
}
</code>
</pre>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2ThOakoHuHKyg4y_vDte5APD8AQt3zTgrwJUyE6unYtYSf-okCVuV4h1XVBWuYxsQxVtxg769F82zlSqmlVc9IVn-Xq9P5m8tiophI6RgNrAXtR-4qPiHJXiL9stYqu9ezkP_10_HxEUCphgjqhN638CqgRAQLj4x6YP31NTaDti6XFSC6cEQQGs-Xlo/s1600/ironpython_demo1.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="115" data-original-width="1097" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2ThOakoHuHKyg4y_vDte5APD8AQt3zTgrwJUyE6unYtYSf-okCVuV4h1XVBWuYxsQxVtxg769F82zlSqmlVc9IVn-Xq9P5m8tiophI6RgNrAXtR-4qPiHJXiL9stYqu9ezkP_10_HxEUCphgjqhN638CqgRAQLj4x6YP31NTaDti6XFSC6cEQQGs-Xlo/s1600/ironpython_demo1.png"/></a></div>
<b>Creating a data table from IEnumerable of T and defining column order explicitly in C#</b> (2024-02-01)
<br />
This article shows how you can create a DataTable from a collection of T (IEnumerable<T>) while explicitly defining the column order.
An extension method for this looks like the following:
<pre>
<code class='hljs csharp'>
public static class DataTableExtensions
{
public static DataTable CreateOrderedDataTable<T>(this IEnumerable<T> data)
{
var dataTable = new DataTable();
var orderedProps = typeof(T).GetProperties(BindingFlags.Instance | BindingFlags.Public)
.OrderBy(prop => GetColumnOrder(prop)).ToList();
foreach (var prop in orderedProps){
dataTable.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);
}
if (data != null)
{
dataTable.BeginLoadData();
var enumerator = data.GetEnumerator();
while (enumerator.MoveNext()){
var item = enumerator.Current;
var rowValues = new List<object>();
foreach (var prop in orderedProps){
rowValues.Add(prop.GetValue(item, null));
}
dataTable.Rows.Add(rowValues.ToArray());
}
dataTable.AcceptChanges();
}
return dataTable;
}
static int GetColumnOrder(PropertyInfo prop)
{
var displayAttribute = prop.GetCustomAttributes(typeof(DisplayAttribute), false).FirstOrDefault() as DisplayAttribute;
int orderKey = displayAttribute?.Order ?? prop.MetadataToken;
return orderKey;
}
}
</code>
</pre>
We order first by the <em>DisplayAttribute</em>'s <em>Order</em> value, and fall back to the property's <em>MetadataToken</em>. This is an integer that in practice reflects the order in which the properties were declared,
in case you want to order just by the way the properties are defined. We get the enumerator here and fetch the rows one by one; a simple foreach loop would work too. Note the use of <em>BeginLoadData</em>
and <em>AcceptChanges</em>.
Consider the two classes next. One class does not set any explicit order; the other uses the Display attribute's Order value to define a custom column order for the DataTable.
<pre>
<code class='hljs csharp'>
public class Car
{
public int Id { get; set; }
public string Make { get; set; }
public string Model { get; set; }
public string Color { get; set; }
}
public class CarV2
{
[Display(Order = 4)]
public int Id { get; set; }
[Display(Order = 3)]
public string Make { get; set; }
[Display(Order = 2)]
public string Model { get; set; }
[Display(Order = 14)]
public bool IsElectric { get; set; }
[Display(Order = -188865)]
public string Color { get; set; }
}
</code>
</pre>
Next, the following little program in LINQPad tests this extension method and displays the resulting DataTables with the column ordering set.
<pre>
<code class='hljs csharp'>
void Main()
{
var cars = new List<Car>{
new Car { Id = 1, Make = "Audi", Model = "A5", Color = "Blue" },
new Car { Id = 2, Make = "Volvo", Model = "XC60", Color = "Silver" },
new Car { Id = 3, Make = "Golf", Model = "GTI", Color = "White" },
new Car { Id = 4, Make = "Audi", Model = "A5", Color = "Blue" },
};
var dataTable = cars.CreateOrderedDataTable();
dataTable.Dump("Cars datatable, data type is: Car");
var carV2s = new List<CarV2>{
new CarV2 { Id = 1, Make = "Audi", Model = "A5", Color = "Blue" },
new CarV2 { Id = 2, Make = "Volvo", Model = "XC60", Color = "Silver" },
new CarV2 { Id = 3, Make = "Golf", Model = "GTI", Color = "White" },
new CarV2 { Id = 4, Make = "Audi", Model = "A5", Color = "Blue" },
};
var dataTableV2 = carV2s.CreateOrderedDataTable();
dataTableV2.Dump("Carsv2 datatable, datatype is CarV2");
}
</code>
</pre>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi70L9eF0OnErw8KErS7abVBJVsjufxO_IFyTC-_dG5G57U2Lp-BsCn7hVFVPSI_3L9Cj1O8HttIeGjvVwxrwRRWl23XIUBTHzYRKz5OmtkJBxmlpcKRXHWvtmjZW19wtaTdJJETzGQUfcSztx5nEnmoxjkXhvax3v6Xb7VV92KXUczoQQQGGKvuaheD4Q/s335/datatables_cars.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" height="320" data-original-height="335" data-original-width="263" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi70L9eF0OnErw8KErS7abVBJVsjufxO_IFyTC-_dG5G57U2Lp-BsCn7hVFVPSI_3L9Cj1O8HttIeGjvVwxrwRRWl23XIUBTHzYRKz5OmtkJBxmlpcKRXHWvtmjZW19wtaTdJJETzGQUfcSztx5nEnmoxjkXhvax3v6Xb7VV92KXUczoQQQGGKvuaheD4Q/s320/datatables_cars.png"/></a></div>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-48024610683294721322024-01-14T23:17:00.003+01:002024-01-15T00:24:07.848+01:00Generating repeated data into variable in SQL Server in T-SQLLet's see how we can create repeated data into a variable in SQL Server using T-SQL.
Use the REPLICATE function to create repeated data like this:
<pre>
<code class='hljs tsql'>
DECLARE @myVariable NVARCHAR(MAX)
SET @myVariable = REPLICATE('.', 10)
PRINT @myVariable
PRINT len(@myVariable)
</code>
</pre>
<img alt="" border="0" data-original-height="95" data-original-width="401" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaA68Amo7_LulXdTqna9Vryaw6FnbOOAfnYlG6jDl4uegJM-XnBqBvIOFV6YD9KJS_N4m5DCBz3s35_gwvfVPnThDkBhBKyjtw4359yU-_nPlNM0VdFbsbUQEeV3F-M7Pn5tdPicOo092AYxB4lfslWJh0djZ8iPR4NH4II62bpkLGilbj9Hq3Nk4GZxE/s1600/repeated_data_tsql.png"/>
<br />
<br />
<br />
In case you want to set the variable to data longer than 8000 characters, you must convert the argument to NVARCHAR(MAX), since REPLICATE otherwise returns at most 8000 characters.
<pre>
<code class='hljs tsql'>
DECLARE @myVariable NVARCHAR(MAX)
SET @myVariable = REPLICATE(CONVERT(NVARCHAR(MAX),'.'), 1024*1024*2)
PRINT len(@myVariable)
</code>
</pre>
Creating random content is also easy in T-SQL:
<pre>
<code class='hljs tsql'>
DECLARE @myVariable NVARCHAR(MAX)
SET @myVariable = REPLICATE(CONVERT(NVARCHAR(MAX),REPLACE(NEWID(),'-', '')), 4)
PRINT len(@myVariable)
PRINT @myVariable
</code>
</pre>
NEWID() creates a new GUID, and we strip away the '-' character, giving 32 chars which we replicate four times above. Since we were below 8000 chars, we could have skipped converting to nvarchar(max).
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-81562866630875751882023-12-31T21:34:00.012+01:002023-12-31T23:00:32.870+01:00Password hashing in .NETThis article will look at different ways to hash a password in .NET.
MD5 was developed by Ron Rivest in 1991 and was widely used in the 90s, but in 2005 it was
shown to be vulnerable to collisions. MD5 and SHA-1 are no longer advised for security-sensitive
hashing.
Instead, a PBKDF, or <em>Password-Based Key Derivation Function</em>, algorithm should be used.
The PBKDF2 implementation in <em>Rfc2898DeriveBytes</em> will be used; its static Pbkdf2 method has been available since .NET 6.
Users of ASP.NET Core Identity are recommended to use <em>PasswordHasher</em> instead:
<a href='https://andrewlock.net/exploring-the-asp-net-core-identity-passwordhasher/'>https://andrewlock.net/exploring-the-asp-net-core-identity-passwordhasher/</a>
An overview of the arithmetic flow of PBKDF2 is shown below. In the diagram, SHA-512 is indicated, but the code shown in this article<br>
uses SHA-256.
<img alt="" border="0" data-original-height="188" data-original-width="844" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxfDCF-urOtF4NCKe9jlciwURDkkiAYiM5javhfWGt_OMUzMNT60Vy6cLewRDxmgAQHd-5Gzg3R69hteZbCO-ohVy6JrQJratprEN-uOx8UjbWRzN5NaBtxL_oD_uZFjLLF09jU8ZzJGhfpxEcf5heDWRtUNGEPQlSY9gqO6Jebq4tPzINvB4JWvPjQtw/s1600/pbkdf2_arithmetic_flow.png"/>
<br /><br />
First off, to do a MD5 hash we can use the following :
<pre>
<code class='hljs csharp'>
static string Md5(string input){
using (var md5 = MD5.Create()){
var byteHash = md5.ComputeHash(Encoding.UTF8.GetBytes(input));
var hash = BitConverter.ToString(byteHash).Replace("-", "");
return hash;
}
}
</code>
</pre>
And to test it out we can run the following:
<pre>
<code class='hljs csharp'>
void Md5Demo()
{
string inputPassword = "abc123";
string md5Hash = Md5(inputPassword);
Console.WriteLine("MD5 Demonstration in .NET");
Console.WriteLine("-------------------------");
Console.WriteLine($"Password to hash: {inputPassword}");
Console.WriteLine($"MD5 hashed password: {md5Hash}");
Console.WriteLine();
}
</code>
</pre>
<br />
<code>
MD5 Demonstration in .NET
-------------------------
Password to hash: abc123
MD5 hashed password: E99A18C428CB38D5F260853678922E03
</code>
The MD5 hash above agrees with the online MD5 hash here:
<a href='https://www.md5hashgenerator.com/'>https://www.md5hashgenerator.com/</a>
The MD5 method here does not use any salt; a salt could be concatenated with the password to protect against <em>rainbow table attacks</em>, i.e.
lookups in precomputed hash tables.
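As a sketch of the salting idea, here is a hedged example that prepends a random salt before hashing (using SHA-256 rather than MD5, since MD5 itself should be avoided for security purposes):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Sketch: prepend a random per-user salt to the password before hashing.
string password = "abc123";
byte[] salt = RandomNumberGenerator.GetBytes(16);
byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
byte[] salted = new byte[salt.Length + passwordBytes.Length];
Buffer.BlockCopy(salt, 0, salted, 0, salt.Length);
Buffer.BlockCopy(passwordBytes, 0, salted, salt.Length, passwordBytes.Length);
byte[] hash = SHA256.HashData(salted);

// The salt is stored alongside the hash; a rainbow table built for unsalted
// hashes is useless, since each user's hash now depends on a unique salt.
Console.WriteLine(Convert.ToBase64String(hash));
```

The salt itself is not secret and is stored next to the hash; its job is only to make precomputation infeasible.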
Next, to perform PBKDF2 hashing, the code below can be used. Note that the algorithm runs iteratively: computing the hash grows more
computationally expensive with the number of iterations, and a salt is included, making brute-force attacks <br>increasingly
expensive as the iteration count is raised.
<pre>
<code class='hljs csharp'>
static byte[] _salt = RandomNumberGenerator.GetBytes(32);
static void HashPassword(string passwordToHash, int numberOfRounds)
{
var sw = Stopwatch.StartNew();
var hashedPassword = Rfc2898DeriveBytes.Pbkdf2(
passwordToHash,
_salt,
numberOfRounds,
HashAlgorithmName.SHA256,
32);
sw.Stop();
Console.WriteLine();
Console.WriteLine("Password to hash : " + passwordToHash);
Console.WriteLine("Hashed Password : " + Convert.ToBase64String(hashedPassword));
Console.WriteLine("Iterations (" + numberOfRounds + ") Elapsed Time: " + sw.ElapsedMilliseconds + " ms");
}
</code>
</pre>
The value 32 here is the desired output length of the hash in bytes; we decide how long a hash we get out of the call to the method.
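Since the output length is a parameter, we can for example request a 64-byte hash instead; a quick sketch:

```csharp
using System;
using System.Security.Cryptography;

// Requesting a 64-byte (512-bit) output instead of 32 bytes.
var salt = RandomNumberGenerator.GetBytes(32);
byte[] hash64 = Rfc2898DeriveBytes.Pbkdf2("abc123", salt, 100_000, HashAlgorithmName.SHA256, 64);
Console.WriteLine(hash64.Length); // 64
```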
We can then test out the Pbkdf2 method using an increasing number of iterations.
<pre>
<code class='hljs csharp'>
void RunPbkdf2HashDemo()
{
const string passwordToHash = "abc123";
Console.WriteLine("Password Based Key Derivation Function Demonstration in .NET");
Console.WriteLine("------------------------------------------------------------");
Console.WriteLine();
Console.WriteLine("PBKDF2 Hashes using Rfc2898DeriveBytes");
Console.WriteLine();
HashPassword(passwordToHash, 1);
HashPassword(passwordToHash, 10);
HashPassword(passwordToHash, 100);
HashPassword(passwordToHash, 1000);
HashPassword(passwordToHash, 10000);
HashPassword(passwordToHash, 100000);
HashPassword(passwordToHash, 1000000);
HashPassword(passwordToHash, 5000000);
}
</code>
</pre>
This gives the following output:
<pre>
<code class='hljs csharp'>
Password Based Key Derivation Function Demonstration in .NET
------------------------------------------------------------
PBKDF2 Hashes using Rfc2898DeriveBytes
Password to hash : abc123
Hashed Password : eqeul5z7l2dPrOo8WjH/oTt0RYHvlZ2lvk8SUoTjZq4=
Iterations (1) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : wfd8qQobzBPZvdemqrtZczqctFe0JeAkKjU3IJ48cms=
Iterations (10) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : VY45SxzhqjYronha0kt1mQx+JRDVlXj82prX3H7kjII=
Iterations (100) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : B0LfHgRSslG/lWe7hbp4jb8dEqQ/bZwNtxsaqbVBZ2I=
Iterations (1000) Elapsed Time: 0 ms
Password to hash : abc123
Hashed Password : LAHwpS4bnbO7CQ1r7buYgUTrp10FyaRyeK6hCwGwv20=
Iterations (10000) Elapsed Time: 1 ms
Password to hash : abc123
Hashed Password : WDjyPySpULXtVOVmSR9cYlzAY4LWeJqDBhszKAfIaPc=
Iterations (100000) Elapsed Time: 13 ms
Password to hash : abc123
Hashed Password : sDx6sOrTl2b7cNZGUAecg7YO4Md/g3eAtfQSvh/vxpM=
Iterations (1000000) Elapsed Time: 127 ms
Password to hash : abc123
Hashed Password : ruywLaR0QApOU5bkqE/x2AAhYJzBj5y6D3P3IxlIF2I=
Iterations (5000000) Elapsed Time: 643 ms
</code>
</pre>
Note that it takes many iterations before the computation takes significant time.
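Since PBKDF2 is one-way, verifying a password later means re-deriving the hash with the stored salt and iteration count and comparing in constant time. A minimal sketch of this flow:

```csharp
using System;
using System.Security.Cryptography;

// At registration time: store the salt, the iteration count and the derived hash.
var salt = RandomNumberGenerator.GetBytes(32);
const int iterations = 600_000; // OWASP-recommended count for PBKDF2-SHA256
byte[] storedHash = Rfc2898DeriveBytes.Pbkdf2("abc123", salt, iterations, HashAlgorithmName.SHA256, 32);

// At login time: re-derive with the stored parameters and compare in constant time
// to avoid leaking timing information.
byte[] candidate = Rfc2898DeriveBytes.Pbkdf2("abc123", salt, iterations, HashAlgorithmName.SHA256, 32);
bool valid = CryptographicOperations.FixedTimeEquals(candidate, storedHash);
Console.WriteLine(valid); // True
```

CryptographicOperations.FixedTimeEquals is preferred over a normal equality check so an attacker cannot learn a prefix of the hash from response timings.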
Sources / links :
<ul>
<li>
OWASP recommends using 600 000 iterations when using SHA256 as noted here:
<a href='https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#pbkdf2'>https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html#pbkdf2</a>
</li>
<li>
Hashing passwords using PBKDF2 is explained by Stephen Haunts in his article 'Hashing Passwords Safely using a Password Based Key Derivation Function (PBKDF2)'.
</li>
</ul>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-6014524729660866932023-12-31T00:24:00.017+01:002023-12-31T01:21:06.454+01:00AES Encryption with Galois Counter Mode (GCM) in C#This article presents some helper methods for performing AES encryption using Galois Counter Mode (GCM). AES, the Advanced Encryption Standard, is the most widely used encryption algorithm today, having superseded DES and Triple DES
since its standardization in 2001. We will look into the GCM mode of AES in this article.
The AES-GCM class <b>AesGcm</b> is supported in .NET Core 3.0 and newer .NET versions, plus in .NET Standard 2.1.
AES-GCM provides authenticated encryption, unlike the default AES-CBC (Cipher Block Chaining) mode.
Benefits of using the GCM mode of AES include:
<ul>
<li>Data authenticity / integrity. This is provided via a <em>tag</em> that is outputted by the encryption and used while decrypting</li>
<li>Support for <em>additional authenticated data</em> (AAD): a non-encrypted payload that is authenticated together with the ciphertext, used for example in newer TLS implementations</li>
</ul>
Here is a helper class to perform encryption and decryption using AES-GCM.
<pre>
<code class='hljs csharp'>
public static class AesGcmEncryption {
public static (byte[], byte[]) Encrypt(byte[] dataToEncrypt, byte[] key, byte[] nonce, byte[] associatedData = null)
{
using var aesGcm = new AesGcm(key);
//tag and ciphertext will be filled during encryption
var tag = new byte[16]; //tag is a hmac (hash-based message authentication code) to check that information has not been tampered with
var cipherText = new byte[dataToEncrypt.Length];
aesGcm.Encrypt(nonce, dataToEncrypt, cipherText, tag, associatedData);
return (cipherText, tag);
}
public static byte[] Decrypt(byte[] cipherText, byte[] key, byte[] nonce, byte[] tag, byte[] associatedData = null)
{
using var aesGcm = new AesGcm(key);
//decryptedData will be filled during decryption
var decryptedData = new byte[cipherText.Length];
aesGcm.Decrypt(nonce, cipherText, tag, decryptedData, associatedData);
return decryptedData;
}
}
</code>
</pre>
In the code above, the Encrypt method returns a tuple with the cipherText and the tag. Both must be used when decrypting, and the tag provides, as mentioned, a means of verifying the integrity of data, i.e. that data has not been tampered with.
Note that the 16-byte tag and the cipherText are filled when the Encrypt method of the AesGcm class runs. The cipherText array must be the same length as the dataToEncrypt array passed in.
Here is sample code using AES-GCM. Note that the additional data used here, while optional, must match between encryption and decryption if it is set. The nonce must be 12 bytes (96 bits) long. The nonce is similar to an initialization vector, but must only be used once per encryption;
this protects against <em>replay attacks</em>.
<pre>
<code class='hljs csharp'>
void TestAesGCM()
{
const string original = "Text to encrypt";
var key = RandomNumberGenerator.GetBytes(32); //256 bits key
var nonce = RandomNumberGenerator.GetBytes(12); //96 bits nonce
(byte[] cipherText, byte[] tag) result = AesGcmEncryption.Encrypt(Encoding.UTF8.GetBytes(original),
key, nonce, Encoding.UTF8.GetBytes("some metadata 123"));
byte[] decryptedText = AesGcmEncryption.Decrypt(result.cipherText, key, nonce, result.tag, Encoding.UTF8.GetBytes("some metadata 123"));
Console.WriteLine("AES Encryption demo GCM - Galois Counter Mode:");
Console.WriteLine("--------------");
Console.WriteLine("Original Text = " + original);
Console.WriteLine("Encrypted Text = " + Convert.ToBase64String(result.cipherText));
Console.WriteLine("Tag = " + Convert.ToBase64String(result.tag));
Console.WriteLine("Decrypted Text = " + Encoding.UTF8.GetString(decryptedText));
}
</code>
</pre>
<code>
AES Encryption demo GCM - Galois Counter Mode:
--------------
Original Text = Text to encrypt
Encrypted Text = 9+2x0kctnRwiDDHBm0/H
Tag = sSDxsg17HFdjE4cuqRlroQ==
Decrypted Text = Text to encrypt
</code>
Use AES-GCM to get integrity checking, and optionally additional authenticated data, when encrypting and decrypting with the AES algorithm.
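The integrity check can be demonstrated by tampering with the ciphertext: AesGcm.Decrypt then throws a CryptographicException instead of returning corrupted plaintext. A small sketch:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

var key = RandomNumberGenerator.GetBytes(32);
var nonce = RandomNumberGenerator.GetBytes(12);
var plaintext = Encoding.UTF8.GetBytes("Text to encrypt");
var cipherText = new byte[plaintext.Length];
var tag = new byte[16];
using var aesGcm = new AesGcm(key);
aesGcm.Encrypt(nonce, plaintext, cipherText, tag, null);

cipherText[0] ^= 0xFF; // flip bits in the first byte to simulate tampering
bool tamperingDetected = false;
try
{
    var decrypted = new byte[cipherText.Length];
    aesGcm.Decrypt(nonce, cipherText, tag, decrypted, null);
}
catch (CryptographicException)
{
    tamperingDetected = true;
}
Console.WriteLine(tamperingDetected); // True
```

The same exception is thrown if the tag or the additional authenticated data does not match.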
We can protect the AES key using different methods, for example the <em>Data Protection API</em> (DPAPI), which is only supported on Windows.
Let's look at a helper class for using the Data Protection API.
<pre>
<code class='hljs csharp'>
public static class DataProtectionUtil {
public static byte[] Protect(byte[] dataToEncrypt, byte[] optionalEntropy, DataProtectionScope scope)
{
var encryptedData = ProtectedData.Protect(dataToEncrypt, optionalEntropy, scope);
return encryptedData;
}
public static byte[] Unprotect(byte[] encryptedData, byte[] optionalEntropy, DataProtectionScope scope){
var decryptedData = ProtectedData.Unprotect(encryptedData, optionalEntropy, scope);
return decryptedData;
}
public static string Protect(string dataToEncrypt, string optionalEntropy, DataProtectionScope scope)
{
var encryptedData = ProtectedData.Protect(Encoding.UTF8.GetBytes(dataToEncrypt), optionalEntropy != null ? Encoding.UTF8.GetBytes(optionalEntropy) : null, scope);
return Convert.ToBase64String(encryptedData);
}
public static string Unprotect(string encryptedData, string optionalEntropy, DataProtectionScope scope)
{
var decryptedData = ProtectedData.Unprotect(Convert.FromBase64String(encryptedData), optionalEntropy != null ? Encoding.UTF8.GetBytes(optionalEntropy) : null, scope);
return Encoding.UTF8.GetString(decryptedData);
}
}
</code>
</pre>
An example how to protect your AES key:
<pre>
<code class='hljs csharp'>
void EncryptAndDecryptWithProtectedKey(){
var original = "Text to encrypt";
Console.WriteLine($"Original Text = {original}");
//Create key and nonce. Encrypt our text with AES
var gcmKey = RandomNumberGenerator.GetBytes(32);
var nonce = RandomNumberGenerator.GetBytes(12);
var result = EncryptText(original, gcmKey, nonce);
//Create some entropy and protect AES key
var entropy = RandomNumberGenerator.GetBytes(16);
var protectedKey = ProtectedData.Protect(gcmKey, entropy, DataProtectionScope.CurrentUser);
Console.WriteLine($"gcmKey = {Convert.ToBase64String(gcmKey)}, protectedKey = {Convert.ToBase64String(protectedKey)}");
// Decrypt the text with AES. the AES key has to be retrieved with DPAPI.
var decryptedText = DecryptText(result.encrypted, nonce, result.tag, protectedKey, entropy);
Console.WriteLine($"Decrypted Text using AES GCM with key retrieved via Data Protection API = {decryptedText}");
}
private static (byte[] encrypted, byte[] tag) EncryptText(string original, byte[] gcmKey, byte[] nonce){
return AesGcmEncryption.Encrypt(Encoding.UTF8.GetBytes(original), gcmKey, nonce, Encoding.UTF8.GetBytes("some meta"));
}
private static string DecryptText(byte[] encrypted, byte[] nonce, byte[] tag, byte[] protectedKey, byte[] entropy){
var key = DataProtectionUtil.Unprotect(protectedKey, entropy, DataProtectionScope.CurrentUser);
Console.WriteLine($"Inside DecryptText: gcmKey = {Convert.ToBase64String(key)}, protectedKey = {Convert.ToBase64String(protectedKey)}");
var decryptedText = AesGcmEncryption.Decrypt(encrypted, key, nonce, tag, Encoding.UTF8.GetBytes("some meta"));
return Encoding.UTF8.GetString(decryptedText);
}
</code>
</pre>
The Data Protection API is only supported on the Windows platform. There are other ways to protect an AES key, but key protection is always a challenge when dealing with symmetric encryption algorithms such as AES.
Some more links:
<ul>
<li>What is a Nonce ? <a href='https://www.youtube.com/watch?v=EOgpr73-pgc'>https://www.youtube.com/watch?v=EOgpr73-pgc</a></li>
<li>AES Explained - <a href='https://www.youtube.com/watch?v=O4xNJsjtN6E'>https://www.youtube.com/watch?v=O4xNJsjtN6E</a></li>
<li>Galois/Counter Mode (GCM) - <a href='https://www.youtube.com/watch?v=V2TlG3JbGp0'>https://www.youtube.com/watch?v=V2TlG3JbGp0</a></li>
<li>Authenticated Encryption in .NET with AES-GCM - <a href='https://www.scottbrady91.com/c-sharp/aes-gcm-dotnet'>https://www.scottbrady91.com/c-sharp/aes-gcm-dotnet</a></li>
</ul>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-3512549667726422022023-12-28T22:32:00.007+01:002023-12-28T23:03:39.937+01:00Digital signatures with RSA in .NETI have looked at digital signatures with RSA in .NET today. Digital signatures provide non-repudiation: an authenticity proof that the original sender is who the sender claims to be and
that the data has not been tampered with.
We will return a tuple of both a SHA-256 hash of some document data and its digital signature using the RSA algorithm.
I have used <code>.netstandard 2.0</code> here, so the code can be used in most frameworks, in both .NET Framework and .NET. We will use RSA to do the digital signature signing and verification.
First off, here is a helper class to create an RSA signature of a SHA-256 hash; we create a new RSA instance with a key size of 2048 bits.
<b>RsaDigitalSignature.cs</b>
<pre>
<code class='hljs csharp'>
public class RsaDigitalSignature
{
private RSA _rsa;
public RsaDigitalSignature()
{
_rsa = RSA.Create();
_rsa.KeySize = 2048;
}
public static byte[] ComputeHashSha256(byte[] toBeHashed)
{
using (var sha256 = SHA256.Create())
{
return sha256.ComputeHash(toBeHashed);
}
}
public (byte[] Signature, byte[] HashOfData) SignData(byte[] dataToSign)
{
var hashOfDataToSign = ComputeHashSha256(dataToSign);
return (_rsa.SignHash(
hashOfDataToSign,
HashAlgorithmName.SHA256,
RSASignaturePadding.Pkcs1),
hashOfDataToSign);
}
public bool VerifySignature(byte[] signature, byte[] hashOfDataToSign)
{
return _rsa.VerifyHash(hashOfDataToSign, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
}
</code>
</pre>
In the code above, we receive some document data and compute its SHA-256 hash. We return a tuple with the RSA signature of the computed hash and the hash itself.
A console application that runs the sample code above is the following:
<pre>
<code class='hljs csharp'>
void Main()
{
SignAndVerifyData();
//Console.ReadLine();
}
private static void SignAndVerifyData()
{
Console.WriteLine("RSA-based digital signature demo");
var document = Encoding.UTF8.GetBytes("Document to sign");
var digitalSignature = new RsaDigitalSignature();
var signature = digitalSignature.SignData(document);
bool isValidSignature = digitalSignature.VerifySignature(signature.Signature, signature.HashOfData);
Console.WriteLine($"\nInput Document:\n{Convert.ToBase64String(document)}\nIs the digital signature valid? {isValidSignature} \nSignature: {Convert.ToBase64String(signature.Signature)} \nHash of data:\n{ Convert.ToBase64String(signature.HashOfData)}");
}
</code>
</pre>
Our verification of the signature shows that the verification of the digital signature passes.
<pre>
<code class='hljs csharp'>
Input Document:
RG9jdW1lbnQgdG8gc2lnbg==
Is the digital signature valid? True
Signature: Gok1x8Wxm9u5jTRcqrgPsI45ie3WPZLi/FNbaJMGTHqBmNbpJTEYjsXix97aIF6uPjgrxQWJKCegc8S4yASdut7TpJafO9wSRqvScc2SuOGK9BqnX+9GwRRQNti8ynm0ARRar+Z4hTpYY/XngFZ+ovvqIT3KRMK/7tsMmTg87mY0KelteFX7z7G7wPB9kKjT6ORYK4lVr35fihrbxei0XQP59YuEdALy+vbvKUm3JNv4sBU0lc9ZKpp2XF0rud8UnY1Nz4/XH7ZoaKfca5HXs9yq89DJRaPBRi1+Wv41vTCf8zFKPWZJrw6rm6kBMNHMENYbeBNdZyiCspTsHZmsVA==
Hash of data:
VPPxOVW2A38lCB810vuZbBH50KQaPSCouN0+tOpYDYs=
</code>
</pre>
The code above uses an RSA key created on the fly, which is not so easy to share between a sender and a receiver. Let's look at how we can use X509 certificates for the RSA signing. The source code below could be shared between the sender and the receiver; for example,<br>
the public part of the X509 certificate could be exported to the receiver, who installs it in a certificate store, only required to know the thumbprint of the cert, which is easy to find in MMC (Microsoft Management Console) or using PowerShell and cd-ing into the cert:\ drive.
Let's first look at a helper class to get hold of an installed X509 certificate.
<pre>
<code class='hljs csharp'>
public class CertStoreUtil
{
public static System.Security.Cryptography.X509Certificates.X509Certificate2 GetCertificateFromStore(
System.Security.Cryptography.X509Certificates.StoreLocation storeLocation,
string thumbprint, bool validOnly = true) {
var store = new X509Store(storeLocation);
store.Open(OpenFlags.ReadOnly);
var cert = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, validOnly).FirstOrDefault();
store.Close();
return cert;
}
}
</code>
</pre>
Next up, a helper class to create a RSA-based digital signature like in the previous example, but using a certificate.
<pre>
<code class='hljs csharp'>
public class RsaFromCertDigitalSignature
{
private RSA _privateKey;
private RSA _publicKey;
public RsaFromCertDigitalSignature(StoreLocation storeLocation, string thumbprint)
{
_privateKey = CertStoreUtil.GetCertificateFromStore(storeLocation, thumbprint).GetRSAPrivateKey();
_publicKey = CertStoreUtil.GetCertificateFromStore(storeLocation, thumbprint).GetRSAPublicKey();
}
public static byte[] ComputeHashSha256(byte[] toBeHashed)
{
using (var sha256 = SHA256.Create())
{
return sha256.ComputeHash(toBeHashed);
}
}
public (byte[] Signature, byte[] HashOfData) SignData(byte[] dataToSign)
{
var hashOfDataToSign = ComputeHashSha256(dataToSign);
return (_privateKey.SignHash(
hashOfDataToSign,
HashAlgorithmName.SHA256,
RSASignaturePadding.Pkcs1),
hashOfDataToSign);
}
public bool VerifySignature(byte[] signature, byte[] hashOfDataToSign)
{
return _publicKey.VerifyHash(hashOfDataToSign, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
}
}
</code>
</pre>
A console app that tests out the code above is shown next, I have selected a random cert on my dev pc here.
<pre>
<code class='csharp hljs'>
void Main()
{
SignAndVerifyData();
//Console.ReadLine();
}
private static void SignAndVerifyData()
{
Console.WriteLine("RSA-based digital signature demo");
var document = Encoding.UTF8.GetBytes("Document to sign");
//var x509CertLocalHost = CertStoreUtil.GetCertificateFromStore(StoreLocation.LocalMachine, "1f0b749ff936abddad89f4bbea7c30ed64e3dd07");
var digitalSignatureWithCert = new RsaFromCertDigitalSignature(StoreLocation.LocalMachine, "1f0b749ff936abddad89f4bbea7c30ed64e3dd07");
var signatureWithCert = digitalSignatureWithCert.SignData(document);
bool isValidSignatureFromCert = digitalSignatureWithCert.VerifySignature(signatureWithCert.Signature, signatureWithCert.HashOfData);
Console.WriteLine(
$@"Input Document:
{Convert.ToBase64String(document)}
Is the digital signature signed with private key of CERT valid according to public key of CERT? {isValidSignatureFromCert}
Signature: {Convert.ToBase64String(signatureWithCert.Signature)}
Hash of data:
{Convert.ToBase64String(signatureWithCert.HashOfData)}");
}
</code>
</pre>
Now here is an important concept in digital signatures :
<ul>
<li>To create a digital signature, we MUST use a private key (e.g. the private key of an RSA instance, either created on the fly or retrieved from an X509 certificate, or, in a more modern example, a JSON Web Key).</li>
<li>To verify a signature, we can use either the public or the private key, usually just the public key (which can be shared). For X509 certificates, we usually share a public cert (.cer or similar format) and keep the private cert (.pfx) ourselves.</li>
</ul>
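One way to share only the verification capability is to export the RSA public key (here in SubjectPublicKeyInfo format) and import it on the verifier's side; a sketch:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Signer side: the private key signs the SHA-256 hash of the document.
using var signerRsa = RSA.Create(2048);
byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes("Document to sign"));
byte[] signature = signerRsa.SignHash(hash, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// Only the public key is exported and shared with the verifier.
byte[] publicKeyBytes = signerRsa.ExportSubjectPublicKeyInfo();

// Verifier side: import the public key and verify the signature.
using var verifierRsa = RSA.Create();
verifierRsa.ImportSubjectPublicKeyInfo(publicKeyBytes, out _);
bool verified = verifierRsa.VerifyHash(hash, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
Console.WriteLine(verified); // True
```

This mirrors what the X509 certificate approach does, with the certificate store doing the key distribution for us.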
Sample output of the console app shown above:
<pre>
<code class='hljs csharp'>
RSA-based digital signature demo
Input Document:
RG9jdW1lbnQgdG8gc2lnbg==
Is the digital signature signed with private key of CERT valid according to public key of CERT? True
Signature: ZHWzJeZnwbfI109uK0T4ubq4B+CHedQPIDgPREz+Eq9BR6A9y6kQEvSrxqUHvOppSDN5kDt5bTiWv1pvDPow+czb7N6kmFf1zQUxUs3ip4WPovBtQKmfpf9/i3DNkRILcoMLdZdKnn0aSaK66f0oxkSIc4nEkb3O9PbejVso6wLqSdDCh96d71gbHqOjyiZLBj2VlqalWvEPuo9GB0s2Uz2fxtFGMUQiZvH3jKR+9F4LwvKCc1K0E/+J4Np57JSfKgmid9QyL2r7nO19SVoVL3yBY7D8UxVIRw8sT/+JKXlnyh8roK7kaxDtW4+FMK6LT/QPvi8LkiNmA+eVv3kk9w==
Hash of data:
VPPxOVW2A38lCB810vuZbBH50KQaPSCouN0+tOpYDYs=
</code>
</pre>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com1tag:blogger.com,1999:blog-7240109143089619921.post-66819087336720591112023-11-23T16:31:00.013+01:002023-11-23T17:42:14.994+01:00Implementing Basic Auth in Core WCF<img alt="" border="0" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaWoovmMcsbs2rrIQwl0LhIgjSueN-KH-jxLaOK6R5oAlardy06CUcI1j7EnmB4ZWHJKSJJk5MV2DpZ-ZWZhMqKO9UzJcNXcq4qQUo9m0uswqRfBdKVnv2ypDoIO_iDNiM965X_5AwmT7ww8jPXGow0tj-LBAZF4N1dRosWx6Ut_L1IJHe9v0NCW8lxp8/s1600/CoreWCFImage.png"/>
<p>
WCF or Windows Communication Foundation was released initially in 2006 and was an important part of .NET Framework to create serverside services. It supports a lot of different protocols,
not only HTTP(S), but also Net.Tcp, Msmq, Named pipes and more.
</p>
<p>
Sadly, .NET Core 1, released in 2016, did not include WCF. The use of WCF has more and more been replaced by REST APIs over HTTP(S) using JWT tokens rather than SAML.
</p>
<p>
But a community driven project supported by a multitude of companies including Microsoft and Amazon Web Services has been working on the Core WCF project and this project is starting to
gain some more use, also allowing companies to migrate their platform services over to .NET.
</p>
<p>
I have looked at some basic stuff, namely <b>Basic Auth</b> in Core WCF, and there was actually no complete working code sample for this. By tapping into the ASP.NET Core pipeline and
studying different code samples, I got it working. In this article I will explain how.
</p>
<p>
I use GenericIdentity to make it work. On the client side I have an extension method that passes the username and password inside the SOAP envelope. Both client and service target .NET 6, and the service uses CoreWCF version 1.5.1.
</p>
<p>
Source code for demo client is here:
<a href='https://github.com/toreaurstadboss/CoreWCFWebClient1'>https://github.com/toreaurstadboss/CoreWCFWebClient1</a>
</p>
The client is an ASP.NET Core MVC client which has added a Core WCF service as a connected service, generating a ServiceClient. In other words, the same type of service reference seen in .NET Framework.
<h3>Client side setup for Core WCF Basic Auth</h3>
<p>
Source code for demo service is here:
<a href='https://github.com/toreaurstadboss/CoreWCFService1'>https://github.com/toreaurstadboss/CoreWCFService1</a>
</p>
<br />
Extension method WithBasicAuth:
<br />
<b>BasicHttpBindingClientFactory.cs</b>
<pre>
<code class='hljs csharp'>
using System.ServiceModel;
using System.ServiceModel.Channels;
namespace CoreWCFWebClient1.Extensions
{
public static class BasicHttpBindingClientFactory
{
/// <summary>
/// Creates a basic auth client with credentials set in header Authorization formatted as 'Basic [base64encoded username:password]'
/// Makes it easier to perform basic auth in Asp.NET Core for WCF
/// </summary>
/// <param name="username"></param>
/// <param name="password"></param>
/// <returns></returns>
public static TServiceImplementation WithBasicAuth<TServiceContract, TServiceImplementation>(this TServiceImplementation client, string username, string password)
where TServiceContract : class
where TServiceImplementation : ClientBase<TServiceContract>, new()
{
string clientUrl = client.Endpoint.Address.Uri.ToString();
var binding = new BasicHttpsBinding();
binding.Security.Mode = BasicHttpsSecurityMode.Transport;
binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
string basicHeaderValue = "Basic " + Base64Encode($"{username}:{password}");
var eab = new EndpointAddressBuilder(new EndpointAddress(clientUrl));
eab.Headers.Add(AddressHeader.CreateAddressHeader("Authorization", // Header Name
string.Empty, // Namespace
basicHeaderValue)); // Header Value
var endpointAddress = eab.ToEndpointAddress();
var clientWithConfiguredBasicAuth = (TServiceImplementation) Activator.CreateInstance(typeof(TServiceImplementation), binding, endpointAddress)!;
clientWithConfiguredBasicAuth.ClientCredentials.UserName.UserName = username;
clientWithConfiguredBasicAuth.ClientCredentials.UserName.Password = password;
return clientWithConfiguredBasicAuth;
}
private static string Base64Encode(string plainText)
{
var plainTextBytes = System.Text.Encoding.UTF8.GetBytes(plainText);
return Convert.ToBase64String(plainTextBytes);
}
}
}
</code>
</pre>
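The Authorization header value built by the extension method follows the standard Basic scheme: 'Basic ' followed by base64 of "username:password". A standalone check of the encoding:

```csharp
using System;
using System.Text;

string username = "someuser";
string password = "somepassw0rd";
// Basic auth credential format: "Basic " + base64("username:password")
string headerValue = "Basic " + Convert.ToBase64String(Encoding.UTF8.GetBytes($"{username}:{password}"));
Console.WriteLine(headerValue); // Basic c29tZXVzZXI6c29tZXBhc3N3MHJk
```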
Example call inside a razor file in a .net6 web client, I made client and service from the WCF template :
<br />
<b>Index.cshtml</b>
<pre>
<code class='hljs razor'>
@{
string username = "someuser";
string password = "somepassw0rd";
var client = new ServiceClient().WithBasicAuth<IService, ServiceClient>(username, password);
var result = await client.GetDataAsync(42);
<h5>@Html.Raw(result)</h5>
}
</code>
</pre>
I manage to set the identity via the call above, here is a screenshot showing this :
<img src='https://github.com/CoreWCF/CoreWCF/assets/49962899/8a71882b-4793-4815-b10f-6533d4ec487b' />
<h3>Setting up Basic Auth for serverside</h3>
Let's look at the serverside, it was created to start with as an ASP.NET Core .NET 6 with MVC Views solution.
I added these Nugets to add CoreWCF, showing the entire .csproj since it also includes some important using statements :
<br />
<b>CoreWCFService1.csproj</b>
<pre>
<code class='hljs xml'>
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>true</ImplicitUsings>
</PropertyGroup>
<ItemGroup>
<Using Include="CoreWCF" />
<Using Include="CoreWCF.Configuration" />
<Using Include="CoreWCF.Channels" />
<Using Include="CoreWCF.Description" />
<Using Include="System.Runtime.Serialization " />
<Using Include="CoreWCFService1" />
<Using Include="Microsoft.Extensions.DependencyInjection.Extensions" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="CoreWCF.Primitives" Version="1.5.1" />
<PackageReference Include="CoreWCF.Http" Version="1.5.1" />
</ItemGroup>
</Project>
</code>
</pre>
Next up, in the file <em>Program.cs</em> different setup is added to add Basic Auth.
In Program.cs , basic auth is set up in these code lines :
<br />
<b>Program.cs</b>
<pre>
<code class='hljs csharp'>
builder.Services.AddSingleton<IUserRepository, UserRepository>();
builder.Services.AddAuthentication("Basic").
AddScheme<AuthenticationSchemeOptions, BasicAuthenticationHandler>
("Basic", null);
</code>
</pre>
This adds authentication to the services. We also make sure to add the authentication middleware after the WebApplicationBuilder has been built, making sure also to set AllowSynchronousIO to true as usual.
Below is listed the pipeline setup of authentication; the StartsWithSegments check should of course be adjusted in case you have multiple services:
<br />
<b>Program.cs</b>
<pre>
<code class='hljs csharp'>
app.Use(async (context, next) =>
{
// Only check for basic auth when the path targets the service endpoint
if (context.Request.Path.StartsWithSegments("/Service.svc"))
{
// Check if currently authenticated
var authResult = await context.AuthenticateAsync("Basic");
if (authResult.None)
{
// If the client hasn't authenticated, send a challenge to the client and complete request
await context.ChallengeAsync("Basic");
return;
}
}
// Call the next delegate/middleware in the pipeline.
// Either the request was authenticated or it's for a path which doesn't require basic auth
await next(context);
});
</code>
</pre>
We set up the service model security like this to support transport mode security with the Basic client credential type.
<br />
<b>Program.cs</b>
<pre>
<code class='hljs csharp'>
app.UseServiceModel(serviceBuilder =>
{
var basicHttpBinding = new BasicHttpBinding();
basicHttpBinding.Security.Mode = BasicHttpSecurityMode.Transport;
basicHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
serviceBuilder.AddService<Service>(options =>
{
options.DebugBehavior.IncludeExceptionDetailInFaults = true;
});
serviceBuilder.AddServiceEndpoint<Service, IService>(basicHttpBinding, "/Service.svc");
var serviceMetadataBehavior = app.Services.GetRequiredService<ServiceMetadataBehavior>();
serviceMetadataBehavior.HttpsGetEnabled = true;
});
</code>
</pre>
The BasicAuthenticationHandler looks like this:
<br />
<b>BasicAuthenticationHandler.cs</b>
<pre>
<code class='hljs csharp'>
using Microsoft.AspNetCore.Authentication;
using Microsoft.Extensions.Options;
using System.Security.Claims;
using System.Security.Principal;
using System.Text;
using System.Text.Encodings.Web;
public class BasicAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions>
{
private readonly IUserRepository _userRepository;
public BasicAuthenticationHandler(IOptionsMonitor<AuthenticationSchemeOptions> options,
ILoggerFactory logger,
UrlEncoder encoder,
ISystemClock clock, IUserRepository userRepository) :
base(options, logger, encoder, clock)
{
_userRepository = userRepository;
}
protected async override Task<AuthenticateResult> HandleAuthenticateAsync()
{
string? authTicketFromSoapEnvelope = await Request!.GetAuthenticationHeaderFromSoapEnvelope();
if (authTicketFromSoapEnvelope != null && authTicketFromSoapEnvelope.StartsWith("basic", StringComparison.OrdinalIgnoreCase))
{
var token = authTicketFromSoapEnvelope.Substring("Basic ".Length).Trim();
var credentialsAsEncodedString = Encoding.UTF8.GetString(Convert.FromBase64String(token));
var credentials = credentialsAsEncodedString.Split(':');
if (await _userRepository.Authenticate(credentials[0], credentials[1]))
{
var identity = new GenericIdentity(credentials[0]);
var claimsPrincipal = new ClaimsPrincipal(identity);
var ticket = new AuthenticationTicket(claimsPrincipal, Scheme.Name);
return await Task.FromResult(AuthenticateResult.Success(ticket));
}
}
return await Task.FromResult(AuthenticateResult.Fail("Invalid Authorization Header"));
}
protected override async Task HandleChallengeAsync(AuthenticationProperties properties)
{
    Response.StatusCode = 401;
    Response.Headers.Add("WWW-Authenticate", "Basic realm=\"thoushaltnotpass.com\"");
    await Context.Response.WriteAsync("You are not logged in via Basic auth");
}
}
</code>
</pre>
This authentication handler has a flaw: if you enter the wrong username and password, you get a 500 internal server error instead of the 401. I have not found out how to fix this yet; AuthenticateResult.Fail seems to short-circuit everything when you enter wrong credentials.
The _userRepository.Authenticate method is a dummy implementation; a real user repository could for example connect to a database to look up the user via the provided credentials, or use some other means such as ASP.NET Core Identity.
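As a possible workaround for the 500-instead-of-401 issue, here is an untested sketch of my own (not from the CoreWCF docs): the pipeline middleware shown earlier only challenges when no authentication result was produced at all; challenging on any non-successful result should also turn failed credentials into a 401 challenge.
<pre>
<code class='hljs csharp'>
// Hypothetical adjustment of the earlier middleware (untested sketch):
// challenge on any non-successful result, not only when there is no result,
// so wrong credentials also produce a 401 instead of flowing on to a 500.
app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/Service.svc"))
    {
        var authResult = await context.AuthenticateAsync("Basic");
        if (!authResult.Succeeded) // covers both "no result" and "failed"
        {
            await context.ChallengeAsync("Basic");
            return;
        }
    }
    await next(context);
});
</code>
</pre>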
The user repo looks like this:
<br />
<b>(I)UserRepository.cs</b>
<pre>
<code class='hljs csharp'>
public interface IUserRepository
{
public Task<bool> Authenticate(string username, string password);
}
public class UserRepository : IUserRepository
{
public Task<bool> Authenticate(string username, string password)
{
//TODO: a dummy auth mechanism is used here; replace with something more realistic such as a DB user repo lookup or similar
if (username == "someuser" && password == "somepassw0rd")
{
return Task.FromResult(true);
}
return Task.FromResult(false);
}
}
</code>
</pre>
So I have implemented basic auth by reading out the credentials from an Authorization header inside the SOAP envelope.
I circumvent a lot of the CoreWCF auth, perhaps relying too much on the ASP.NET Core pipeline instead. Remember, WCF has to interop some with the ASP.NET Core pipeline to make it work properly, and as long as we satisfy the demands of both the WCF and ASP.NET Core pipelines, we can make the authentication work.
I managed to set the username via claims in the expected places, ServiceSecurityContext and CurrentPrincipal.
The WCF service looks like this; note the use of the [Authorize] attribute:
<br />
<b>Service.cs</b>
<pre>
<code class='hljs csharp'>
public class Service : IService
{
[Authorize]
public string GetData(int value)
{
return $"You entered: {value}. <br />The client logged in with transport security with BasicAuth with https (BasicHttpsBinding).<br /><br />The username is set inside ServiceSecurityContext.Current.PrimaryIdentity.Name: {ServiceSecurityContext.Current.PrimaryIdentity.Name}. <br /> This username is also stored inside Thread.CurrentPrincipal.Identity.Name: {Thread.CurrentPrincipal?.Identity?.Name}";
}
public CompositeType GetDataUsingDataContract(CompositeType composite)
{
if (composite == null)
{
throw new ArgumentNullException("composite");
}
if (composite.BoolValue)
{
composite.StringValue += "Suffix";
}
return composite;
}
}
</code>
</pre>
I am mostly satisfied with this setup, although it is not optimal, since the ASP.NET Core authentication does not seem to work together with CoreWCF fully; instead we read the authentication out of an Authorization header placed inside the SOAP envelope.
It took me some time to read out the authentication header; on the server side this is done with the following extension method:
<br />
<b>HttpRequestExtensions.cs</b>
<pre>
<code class='hljs csharp'>
using System.IO.Pipelines;
using System.Text;
using System.Xml.Linq;
public static class HttpRequestExtensions
{
public static async Task<string?> GetAuthenticationHeaderFromSoapEnvelope(this HttpRequest request)
{
ReadResult requestBodyInBytes = await request.BodyReader.ReadAsync();
string body = Encoding.UTF8.GetString(requestBodyInBytes.Buffer.FirstSpan);
request.BodyReader.AdvanceTo(requestBodyInBytes.Buffer.Start, requestBodyInBytes.Buffer.End);
string? authTicketFromHeader = null;
if (body?.Contains(@"http://schemas.xmlsoap.org/soap/envelope/") == true)
{
XNamespace ns = "http://schemas.xmlsoap.org/soap/envelope/";
var soapEnvelope = XDocument.Parse(body);
var headers = soapEnvelope.Descendants(ns + "Header").ToList();
foreach (var header in headers)
{
var authorizationElement = header.Element("Authorization");
if (!string.IsNullOrWhiteSpace(authorizationElement?.Value))
{
authTicketFromHeader = authorizationElement.Value;
break;
}
}
}
return authTicketFromHeader;
}
}
</code>
</pre>
Note the use of <em>BodyReader</em> and the method AdvanceTo. This was the only way I found to rewind the request stream after reading the Authorization header from the SOAP envelope; it took me hours to figure out why this failed in the ASP.NET Core pipeline, until I found tips in a GitHub discussion thread on CoreWCF mentioning the error, with a suggestion in a comment there.
See more documentation about BodyWriter and BodyReader from MVP Steve Gordon here:
<a href='https://www.stevejgordon.co.uk/using-the-bodyreader-and-bodywriter-in-asp-net-core-3-0'>https://www.stevejgordon.co.uk/using-the-bodyreader-and-bodywriter-in-asp-net-core-3-0</a>
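For plain ASP.NET Core middleware, request buffering is a commonly used alternative for reading the body more than once. This is a hedged sketch of my own; it may not play well with CoreWCF's own body handling, which is why the BodyReader approach was needed here:
<pre>
<code class='hljs csharp'>
using System.Text;
using Microsoft.AspNetCore.Http;

public static class RequestBodyHelper
{
    // Untested sketch: read the whole request body as a string and rewind it,
    // relying on HttpRequest.EnableBuffering to make the stream seekable.
    public static async Task<string> ReadBodyAndRewind(HttpRequest request)
    {
        request.EnableBuffering(); // backs the body with a rewindable stream
        using var reader = new StreamReader(request.Body, Encoding.UTF8, leaveOpen: true);
        string body = await reader.ReadToEndAsync();
        request.Body.Position = 0; // rewind so later readers see the full body
        return body;
    }
}
</code>
</pre>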
<img alt="" border="0" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiaWoovmMcsbs2rrIQwl0LhIgjSueN-KH-jxLaOK6R5oAlardy06CUcI1j7EnmB4ZWHJKSJJk5MV2DpZ-ZWZhMqKO9UzJcNXcq4qQUo9m0uswqRfBdKVnv2ypDoIO_iDNiM965X_5AwmT7ww8jPXGow0tj-LBAZF4N1dRosWx6Ut_L1IJHe9v0NCW8lxp8/s1600/CoreWCFImage.png"/>Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-85781592569203944972023-11-21T16:34:00.006+01:002023-11-21T16:36:09.488+01:00Increasing timeout in CoreWCF project for clientI have tested out CoreWCF a bit and it is good to see WCF once again in a modern framework such as ASP.NET Core.
Here is how you can increase timeouts in CoreWCF. You can put the timeout into an appsettings file too if you want.
First off, after having added a <i>Service Reference</i> to your WCF service, look inside the <i>Reference.cs</i> file.
Make note of:
<ol>
<li>Namespace in the Reference.cs file</li>
<li>Class name of the client</li>
</ol>
My client uses these Nuget packages in its csproj :
<pre>
<code class='hljs xml'>
<ItemGroup>
<PackageReference Include="System.ServiceModel.Duplex" Version="4.10.*" />
<PackageReference Include="System.ServiceModel.Federation" Version="4.10.*" />
<PackageReference Include="System.ServiceModel.Http" Version="4.10.*" />
<PackageReference Include="System.ServiceModel.NetTcp" Version="4.10.*" />
<PackageReference Include="System.ServiceModel.Security" Version="4.10.*" />
</ItemGroup>
</code>
</pre>
The CoreWCF service itself uses these Nuget packages:
<pre>
<code class='hljs xml'>
<ItemGroup>
<PackageReference Include="CoreWCF.Primitives" Version="1.*" />
<PackageReference Include="CoreWCF.Http" Version="1.*" />
</ItemGroup>
</code>
</pre>
Inside the Reference.cs file, a partial method called <em>ConfigureEndpoint</em> is declared:
<pre>
<code class='hljs csharp'>
[System.Diagnostics.DebuggerStepThroughAttribute()]
[System.CodeDom.Compiler.GeneratedCodeAttribute("Microsoft.Tools.ServiceModel.Svcutil", "2.1.0")]
public partial class ServiceClient : System.ServiceModel.ClientBase<MyService.IService>, MyService.IService
{
/// <summary>
/// Implement this partial method to configure the service endpoint.
/// </summary>
/// <param name="serviceEndpoint">The endpoint to configure</param>
/// <param name="clientCredentials">The client credentials</param>
static partial void ConfigureEndpoint(System.ServiceModel.Description.ServiceEndpoint serviceEndpoint, System.ServiceModel.Description.ClientCredentials clientCredentials);
//more code
</code>
</pre>
Next up, implement this partial method to configure the binding.
<pre>
<code class='hljs csharp'>
namespace MyService
{
public partial class ServiceClient
{
/// <summary>
/// Implement this partial method to configure the service endpoint.
/// </summary>
/// <param name="serviceEndpoint">The endpoint to configure</param>
/// <param name="clientCredentials">The client credentials</param>
static partial void ConfigureEndpoint(System.ServiceModel.Description.ServiceEndpoint serviceEndpoint, System.ServiceModel.Description.ClientCredentials clientCredentials)
{
serviceEndpoint.Binding.OpenTimeout
= serviceEndpoint.Binding.CloseTimeout
= serviceEndpoint.Binding.ReceiveTimeout
= serviceEndpoint.Binding.SendTimeout = TimeSpan.FromSeconds(15);
}
}
}
</code>
</pre>
We also want to be able to configure the timeout instead of hardcoding it.
Let's add the following Nuget packages to the client (mine is a .NET 6 console app):
<pre>
<code class='hljs xml'>
<PackageReference Include="Microsoft.Extensions.Configuration" Version="6.0.0" />
<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="6.0.0" />
</code>
</pre>
We can avoid hardcoding timeouts by adding an appsettings.json file to our project and setting the file to copy to the output folder.
In a console project you can add the json config file like this. Preferably it would be registered in some shared setup in Program.cs, but since I found it a bit challenging to consume the configuration from a static method, I ended up with this:
<pre>
<code class='hljs csharp'>
/// <summary>
/// Implement this partial method to configure the service endpoint.
/// </summary>
/// <param name="serviceEndpoint">The endpoint to configure</param>
/// <param name="clientCredentials">The client credentials</param>
static partial void ConfigureEndpoint(System.ServiceModel.Description.ServiceEndpoint serviceEndpoint, System.ServiceModel.Description.ClientCredentials clientCredentials)
{
var serviceProvider = new ServiceCollection()
.AddSingleton(_ =>
new ConfigurationBuilder()
.SetBasePath(Path.Combine(AppContext.BaseDirectory))
.AddJsonFile("appsettings.json", optional: true)
.Build())
.BuildServiceProvider();
var config = serviceProvider.GetService<IConfigurationRoot>();
int timeoutInSeconds = int.Parse(config!["ServiceTimeoutInSeconds"]);
serviceEndpoint.Binding.OpenTimeout
= serviceEndpoint.Binding.CloseTimeout
= serviceEndpoint.Binding.ReceiveTimeout
= serviceEndpoint.Binding.SendTimeout = TimeSpan.FromSeconds(timeoutInSeconds);
}
</code>
</pre>
And we have our appsettings.json file :
<pre>
<code class='hljs json'>
{
"ServiceTimeoutInSeconds" : 9
}
</code>
</pre>
The CoreWCF project has an upgrade tool that will do a lot of the migration for you. WCF had a lot of config settings, and creating an appsettings.json entry for every setting will be some work. The upgrade tool should take care of generating some of these config values and adding them to dedicated json files.
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-17744095443966922602023-11-20T22:34:00.013+01:002023-11-20T23:47:51.939+01:00Using synthesized speech in Azure Cognitive Services - Text to SpeechI have extended my demo repo with <em>Multi-Lingual translator</em> to include AI realistic speech.
The Github repo for the demo is available here :
<br /><br />
<a href='https://github.com/toreaurstadboss/MultiLingual.Translator'>https://github.com/toreaurstadboss/MultiLingual.Translator</a>
<br /><br />
The speech synthesis service of Azure AI is accessed via a REST API. You can actually test it out first in Postman, retrieving an access token via a dedicated endpoint and then
calling the text-to-speech endpoint using the access token as a bearer token.
To get the demo working, you have to create the necessary resources/services inside the Azure Portal. This article focuses on the <em>speech service</em>.
Important: if you want to test out the DEMO yourself, remember to put the keys into environment variables so they are not exposed via source control.
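The Postman flow just described (fetch a token, then call the TTS endpoint with it as a bearer token) can be sketched with HttpClient. This is an untested sketch; the endpoint URLs and the environment variable name are assumptions mirroring the appsettings.json shown later in this post:
<pre>
<code class='hljs csharp'>
using System.Net.Http.Headers;
using System.Text;

// Untested sketch of the two REST calls described above. The endpoint URLs and
// environment variable name are assumptions matching the demo's configuration.
var http = new HttpClient();
http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key",
    Environment.GetEnvironmentVariable("AZURE_TEXT_SPEECH_SUBSCRIPTION_KEY"));

// 1) Fetch a short-lived access token from the issue-token endpoint
var tokenResponse = await http.PostAsync(
    "https://norwayeast.api.cognitive.microsoft.com/sts/v1.0/issuetoken",
    new StringContent(string.Empty));
string token = await tokenResponse.Content.ReadAsStringAsync();

// 2) Post SSML to the text-to-speech endpoint with the token as bearer token
var ttsRequest = new HttpRequestMessage(HttpMethod.Post,
    "https://norwayeast.tts.speech.microsoft.com/cognitiveservices/v1")
{
    Content = new StringContent(
        "<speak version='1.0' xml:lang='en-US'><voice name='en-AU-NatashaNeural'>Hello!</voice></speak>",
        Encoding.UTF8, "application/ssml+xml")
};
ttsRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
ttsRequest.Headers.Add("X-Microsoft-OutputFormat", "audio-24khz-48kbitrate-mono-mp3");
byte[] audio = await (await http.SendAsync(ttsRequest)).Content.ReadAsByteArrayAsync();
</code>
</pre>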
To get started with speech synthesis in Azure Cognitive Services, add a <em>Speech Service</em> resource via the Azure Portal.
<a href='https://learn.microsoft.com/en-us/azure/ai-services/speech-service/overview'>https://learn.microsoft.com/en-us/azure/ai-services/speech-service/overview</a>
We also need to add audio capability to our demo, which is a .NET MAUI Blazor app. The Nuget package used is the following :
<b>MultiLingual.Translator.csproj</b>
<pre>
<code class='hljs xml'>
<ItemGroup>
<PackageReference Include="Plugin.Maui.Audio" Version="2.0.0" />
</ItemGroup>
</code>
</pre>
This Nuget package's website is here:
<a href='https://github.com/jfversluis/Plugin.Maui.Audio'>https://github.com/jfversluis/Plugin.Maui.Audio</a>
The <em>MauiProgram.cs</em> looks like the following; make note of <em>AudioManager.Current</em>, which is registered as a singleton.
<b>MauiProgram.cs</b>
<pre>
<code class='hljs csharp'>
using Microsoft.Extensions.Configuration;
using MultiLingual.Translator.Lib;
using Plugin.Maui.Audio;
namespace MultiLingual.Translator;
public static class MauiProgram
{
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.ConfigureFonts(fonts =>
{
fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
});
builder.Services.AddMauiBlazorWebView();
#if DEBUG
builder.Services.AddBlazorWebViewDeveloperTools();
#endif
builder.Services.AddSingleton(AudioManager.Current);
builder.Services.AddTransient<MainPage>();
builder.Services.AddScoped<IDetectLanguageUtil, DetectLanguageUtil>();
builder.Services.AddScoped<ITranslateUtil, TranslateUtil>();
builder.Services.AddScoped<ITextToSpeechUtil, TextToSpeechUtil>();
var config = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
builder.Configuration.AddConfiguration(config);
return builder.Build();
}
}
</code>
</pre>
Next up, let's look at the TextToSpeechUtil. This class is a <em>service</em> that does two things against the <em>REST API</em> of the Azure Cognitive Services text-to-speech service:
<ol>
<li>Fetch an access token</li>
<li>Synthesize text to speech</li>
</ol>
<b>TextToSpeechUtil.cs</b>
<pre>
<code class='hljs csharp'>
using Microsoft.Extensions.Configuration;
using MultiLingual.Translator.Lib.Models;
using System.Security;
using System.Text;
namespace MultiLingual.Translator.Lib
{
public class TextToSpeechUtil : ITextToSpeechUtil
{
public TextToSpeechUtil(IConfiguration configuration)
{
_configuration = configuration;
}
public async Task<TextToSpeechResult> GetSpeechFromText(string text, string language, TextToSpeechLanguage[] actorVoices, string? preferredVoiceActorId)
{
var result = new TextToSpeechResult();
result.Transcript = GetSpeechTextXml(text, language, actorVoices, preferredVoiceActorId, result);
result.ContentType = _configuration[TextToSpeechSpeechContentType];
result.OutputFormat = _configuration[TextToSpeechSpeechXMicrosoftOutputFormat];
result.UserAgent = _configuration[TextToSpeechSpeechUserAgent];
result.AvailableVoiceActorIds = ResolveAvailableActorVoiceIds(language, actorVoices);
result.LanguageCode = language;
string? token = await GetUpdatedToken();
HttpClient httpClient = GetTextToSpeechWebClient(token);
string ttsEndpointUrl = _configuration[TextToSpeechSpeechEndpoint];
var response = await httpClient.PostAsync(ttsEndpointUrl, new StringContent(result.Transcript, Encoding.UTF8, result.ContentType));
using (var memStream = new MemoryStream()) {
    var responseStream = await response.Content.ReadAsStreamAsync();
    await responseStream.CopyToAsync(memStream);
    result.VoiceData = memStream.ToArray();
}
return result;
}
private async Task<string?> GetUpdatedToken()
{
string? token = _token?.ToNormalString();
if (_lastTimeTokenFetched == null || DateTime.Now.Subtract(_lastTimeTokenFetched.Value).Minutes > 8)
{
token = await GetIssuedToken();
}
return token;
}
private HttpClient GetTextToSpeechWebClient(string? token)
{
var httpClient = new HttpClient();
httpClient.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
httpClient.DefaultRequestHeaders.Add("X-Microsoft-OutputFormat", _configuration[TextToSpeechSpeechXMicrosoftOutputFormat]);
httpClient.DefaultRequestHeaders.Add("User-Agent", _configuration[TextToSpeechSpeechUserAgent]);
return httpClient;
}
private string GetSpeechTextXml(string text, string language, TextToSpeechLanguage[] actorVoices, string? preferredVoiceActorId, TextToSpeechResult result)
{
result.VoiceActorId = ResolveVoiceActorId(language, preferredVoiceActorId, actorVoices);
string speechXml = $@"
<speak version='1.0' xml:lang='en-US'>
<voice xml:lang='en-US' xml:gender='Male' name='Microsoft Server Speech Text to Speech Voice {result.VoiceActorId}'>
<prosody rate='1'>{text}</prosody>
</voice>
</speak>";
return speechXml;
}
private List<string> ResolveAvailableActorVoiceIds(string language, TextToSpeechLanguage[] actorVoices)
{
if (actorVoices?.Any() == true)
{
var voiceActorIds = actorVoices.Where(v => v.LanguageKey == language || v.LanguageKey.Split("-")[0] == language).SelectMany(v => v.VoiceActors).Select(v => v.VoiceId).ToList();
return voiceActorIds;
}
return new List<string>();
}
private string ResolveVoiceActorId(string language, string? preferredVoiceActorId, TextToSpeechLanguage[] actorVoices)
{
string actorVoiceId = "(en-AU, NatashaNeural)"; //default to a select voice actor id
if (actorVoices?.Any() == true)
{
var voiceActorsForLanguage = actorVoices.Where(v => v.LanguageKey == language || v.LanguageKey.Split("-")[0] == language).SelectMany(v => v.VoiceActors).Select(v => v.VoiceId).ToList();
if (voiceActorsForLanguage != null)
{
if (voiceActorsForLanguage.Any() == true)
{
var resolvedPreferredVoiceActorId = voiceActorsForLanguage.FirstOrDefault(v => v == preferredVoiceActorId);
if (!string.IsNullOrWhiteSpace(resolvedPreferredVoiceActorId))
{
return resolvedPreferredVoiceActorId!;
}
actorVoiceId = voiceActorsForLanguage.First();
}
}
}
return actorVoiceId;
}
private async Task<string> GetIssuedToken()
{
var httpClient = new HttpClient();
string? textToSpeechSubscriptionKey = Environment.GetEnvironmentVariable("AZURE_TEXT_SPEECH_SUBSCRIPTION_KEY", EnvironmentVariableTarget.Machine);
httpClient.DefaultRequestHeaders.Add(OcpApiSubscriptionKeyHeaderName, textToSpeechSubscriptionKey);
string tokenEndpointUrl = _configuration[TextToSpeechIssueTokenEndpoint];
var response = await httpClient.PostAsync(tokenEndpointUrl, new StringContent("{}"));
_token = (await response.Content.ReadAsStringAsync()).ToSecureString();
_lastTimeTokenFetched = DateTime.Now;
return _token.ToNormalString();
}
private const string OcpApiSubscriptionKeyHeaderName = "Ocp-Apim-Subscription-Key";
private const string TextToSpeechIssueTokenEndpoint = "TextToSpeechIssueTokenEndpoint";
private const string TextToSpeechSpeechEndpoint = "TextToSpeechSpeechEndpoint";
private const string TextToSpeechSpeechContentType = "TextToSpeechSpeechContentType";
private const string TextToSpeechSpeechUserAgent = "TextToSpeechSpeechUserAgent";
private const string TextToSpeechSpeechXMicrosoftOutputFormat = "TextToSpeechSpeechXMicrosoftOutputFormat";
private readonly IConfiguration _configuration;
private DateTime? _lastTimeTokenFetched = null;
private SecureString? _token = null;
}
}
</code>
</pre>
Let's look at the appsettings.json file. The <em>Ocp-Apim-Subscription-Key</em> is put into an environment variable; this is a secret key you do not want to expose, to avoid leaking it and running up costs for usage of the service.
<b>Appsettings.json</b>
<pre>
<code class='hljs json'>
{
"TextToSpeechIssueTokenEndpoint": "https://norwayeast.api.cognitive.microsoft.com/sts/v1.0/issuetoken",
"TextToSpeechSpeechEndpoint": "https://norwayeast.tts.speech.microsoft.com/cognitiveservices/v1",
"TextToSpeechSpeechContentType": "application/ssml+xml",
"TextToSpeechSpeechUserAgent": "MultiLingualTranslatorBlazorDemo",
"TextToSpeechSpeechXMicrosoftOutputFormat": "audio-24khz-48kbitrate-mono-mp3"
}
</code>
</pre>
Next up, I have gathered all the voice actor ids for the languages in Azure Cognitive Services that have them. These cover the best-known languages among the roughly 150 languages Azure supports; see the following json for an overview of the voice actor ids.
For example, the Norwegian language has three voice actors, synthesized neural-net-trained AI voices for realistic speech synthesis.
<div style='overflow:scroll;max-height:200px;background-color:aliceblue'>
<pre>
<code class='hljs json'>
[
{
"LanguageKey": "af-ZA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "af-ZA-AdriNeural2",
"VoiceId": "(af-ZA, AdriNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "af-ZA-WillemNeural2",
"VoiceId": "(af-ZA, WillemNeural2)"
}
]
},
{
"LanguageKey": "am-ET",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "am-ET-MekdesNeural2",
"VoiceId": "(am-ET, MekdesNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "am-ET-AmehaNeural2",
"VoiceId": "(am-ET, AmehaNeural2)"
}
]
},
{
"LanguageKey": "ar-AE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-AE-FatimaNeural",
"VoiceId": "(ar-AE, FatimaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-AE-HamdanNeural",
"VoiceId": "(ar-AE, HamdanNeural)"
}
]
},
{
"LanguageKey": "ar-BH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-BH-LailaNeural",
"VoiceId": "(ar-BH, LailaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-BH-AliNeural",
"VoiceId": "(ar-BH, AliNeural)"
}
]
},
{
"LanguageKey": "ar-DZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-DZ-AminaNeural",
"VoiceId": "(ar-DZ, AminaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-DZ-IsmaelNeural",
"VoiceId": "(ar-DZ, IsmaelNeural)"
}
]
},
{
"LanguageKey": "ar-EG",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-EG-SalmaNeural",
"VoiceId": "(ar-EG, SalmaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-EG-ShakirNeural",
"VoiceId": "(ar-EG, ShakirNeural)"
}
]
},
{
"LanguageKey": "ar-IQ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-IQ-RanaNeural",
"VoiceId": "(ar-IQ, RanaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-IQ-BasselNeural",
"VoiceId": "(ar-IQ, BasselNeural)"
}
]
},
{
"LanguageKey": "ar-JO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-JO-SanaNeural",
"VoiceId": "(ar-JO, SanaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-JO-TaimNeural",
"VoiceId": "(ar-JO, TaimNeural)"
}
]
},
{
"LanguageKey": "ar-KW",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-KW-NouraNeural",
"VoiceId": "(ar-KW, NouraNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-KW-FahedNeural",
"VoiceId": "(ar-KW, FahedNeural)"
}
]
},
{
"LanguageKey": "ar-LB",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-LB-LaylaNeural",
"VoiceId": "(ar-LB, LaylaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-LB-RamiNeural",
"VoiceId": "(ar-LB, RamiNeural)"
}
]
},
{
"LanguageKey": "ar-LY",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-LY-ImanNeural",
"VoiceId": "(ar-LY, ImanNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-LY-OmarNeural",
"VoiceId": "(ar-LY, OmarNeural)"
}
]
},
{
"LanguageKey": "ar-MA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-MA-MounaNeural",
"VoiceId": "(ar-MA, MounaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-MA-JamalNeural",
"VoiceId": "(ar-MA, JamalNeural)"
}
]
},
{
"LanguageKey": "ar-OM",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-OM-AyshaNeural",
"VoiceId": "(ar-OM, AyshaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-OM-AbdullahNeural",
"VoiceId": "(ar-OM, AbdullahNeural)"
}
]
},
{
"LanguageKey": "ar-QA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-QA-AmalNeural",
"VoiceId": "(ar-QA, AmalNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-QA-MoazNeural",
"VoiceId": "(ar-QA, MoazNeural)"
}
]
},
{
"LanguageKey": "ar-SA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-SA-ZariyahNeural",
"VoiceId": "(ar-SA, ZariyahNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-SA-HamedNeural",
"VoiceId": "(ar-SA, HamedNeural)"
}
]
},
{
"LanguageKey": "ar-SY",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-SY-AmanyNeural",
"VoiceId": "(ar-SY, AmanyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-SY-LaithNeural",
"VoiceId": "(ar-SY, LaithNeural)"
}
]
},
{
"LanguageKey": "ar-TN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-TN-ReemNeural",
"VoiceId": "(ar-TN, ReemNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-TN-HediNeural",
"VoiceId": "(ar-TN, HediNeural)"
}
]
},
{
"LanguageKey": "ar-YE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ar-YE-MaryamNeural",
"VoiceId": "(ar-YE, MaryamNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ar-YE-SalehNeural",
"VoiceId": "(ar-YE, SalehNeural)"
}
]
},
{
"LanguageKey": "az-AZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "az-AZ-BanuNeural2",
"VoiceId": "(az-AZ, BanuNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "az-AZ-BabekNeural2",
"VoiceId": "(az-AZ, BabekNeural2)"
}
]
},
{
"LanguageKey": "bg-BG",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "bg-BG-KalinaNeural",
"VoiceId": "(bg-BG, KalinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "bg-BG-BorislavNeural",
"VoiceId": "(bg-BG, BorislavNeural)"
}
]
},
{
"LanguageKey": "bn-BD",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "bn-BD-NabanitaNeural2",
"VoiceId": "(bn-BD, NabanitaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "bn-BD-PradeepNeural2",
"VoiceId": "(bn-BD, PradeepNeural2)"
}
]
},
{
"LanguageKey": "bn-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "bn-IN-TanishaaNeural2",
"VoiceId": "(bn-IN, TanishaaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "bn-IN-BashkarNeural2",
"VoiceId": "(bn-IN, BashkarNeural2)"
}
]
},
{
"LanguageKey": "bs-BA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "bs-BA-VesnaNeural2",
"VoiceId": "(bs-BA, VesnaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "bs-BA-GoranNeural2",
"VoiceId": "(bs-BA, GoranNeural2)"
}
]
},
{
"LanguageKey": "ca-ES",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ca-ES-JoanaNeural",
"VoiceId": "(ca-ES, JoanaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ca-ES-EnricNeural",
"VoiceId": "(ca-ES, EnricNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ca-ES-AlbaNeural",
"VoiceId": "(ca-ES, AlbaNeural)"
}
]
},
{
"LanguageKey": "cs-CZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "cs-CZ-VlastaNeural",
"VoiceId": "(cs-CZ, VlastaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "cs-CZ-AntoninNeural",
"VoiceId": "(cs-CZ, AntoninNeural)"
}
]
},
{
"LanguageKey": "cy-GB",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "cy-GB-NiaNeural2",
"VoiceId": "(cy-GB, NiaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "cy-GB-AledNeural2",
"VoiceId": "(cy-GB, AledNeural2)"
}
]
},
{
"LanguageKey": "da-DK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "da-DK-ChristelNeural",
"VoiceId": "(da-DK, ChristelNeural)"
},
{
"IsFemale": false,
"VoiceActor": "da-DK-JeppeNeural",
"VoiceId": "(da-DK, JeppeNeural)"
}
]
},
{
"LanguageKey": "de-AT",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "de-AT-IngridNeural",
"VoiceId": "(de-AT, IngridNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-AT-JonasNeural",
"VoiceId": "(de-AT, JonasNeural)"
}
]
},
{
"LanguageKey": "de-CH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "de-CH-LeniNeural",
"VoiceId": "(de-CH, LeniNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-CH-JanNeural",
"VoiceId": "(de-CH, JanNeural)"
}
]
},
{
"LanguageKey": "de-DE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "de-DE-KatjaNeural",
"VoiceId": "(de-DE, KatjaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-ConradNeural1",
"VoiceId": "(de-DE, ConradNeural1)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-AmalaNeural",
"VoiceId": "(de-DE, AmalaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-BerndNeural",
"VoiceId": "(de-DE, BerndNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-ChristophNeural",
"VoiceId": "(de-DE, ChristophNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-ElkeNeural",
"VoiceId": "(de-DE, ElkeNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-GiselaNeural",
"VoiceId": "(de-DE, GiselaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-KasperNeural",
"VoiceId": "(de-DE, KasperNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-KillianNeural",
"VoiceId": "(de-DE, KillianNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-KlarissaNeural",
"VoiceId": "(de-DE, KlarissaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-KlausNeural",
"VoiceId": "(de-DE, KlausNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-LouisaNeural",
"VoiceId": "(de-DE, LouisaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-MajaNeural",
"VoiceId": "(de-DE, MajaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "de-DE-RalfNeural",
"VoiceId": "(de-DE, RalfNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-SeraphinaNeural",
"VoiceId": "(de-DE, SeraphinaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "de-DE-TanjaNeural",
"VoiceId": "(de-DE, TanjaNeural)"
}
]
},
{
"LanguageKey": "el-GR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "el-GR-AthinaNeural",
"VoiceId": "(el-GR, AthinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "el-GR-NestorasNeural",
"VoiceId": "(el-GR, NestorasNeural)"
}
]
},
{
"LanguageKey": "en-AU",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-AU-NatashaNeural",
"VoiceId": "(en-AU, NatashaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-AU-WilliamNeural",
"VoiceId": "(en-AU, WilliamNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-AnnetteNeural",
"VoiceId": "(en-AU, AnnetteNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-CarlyNeural",
"VoiceId": "(en-AU, CarlyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-AU-DarrenNeural",
"VoiceId": "(en-AU, DarrenNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-AU-DuncanNeural",
"VoiceId": "(en-AU, DuncanNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-ElsieNeural",
"VoiceId": "(en-AU, ElsieNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-FreyaNeural",
"VoiceId": "(en-AU, FreyaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-JoanneNeural",
"VoiceId": "(en-AU, JoanneNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-AU-KenNeural",
"VoiceId": "(en-AU, KenNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-KimNeural",
"VoiceId": "(en-AU, KimNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-AU-NeilNeural",
"VoiceId": "(en-AU, NeilNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-AU-TimNeural",
"VoiceId": "(en-AU, TimNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-AU-TinaNeural",
"VoiceId": "(en-AU, TinaNeural)"
}
]
},
{
"LanguageKey": "en-CA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-CA-ClaraNeural",
"VoiceId": "(en-CA, ClaraNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-CA-LiamNeural",
"VoiceId": "(en-CA, LiamNeural)"
}
]
},
{
"LanguageKey": "en-GB",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-GB-SoniaNeural",
"VoiceId": "(en-GB, SoniaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-RyanNeural",
"VoiceId": "(en-GB, RyanNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-GB-LibbyNeural",
"VoiceId": "(en-GB, LibbyNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-GB-AbbiNeural",
"VoiceId": "(en-GB, AbbiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-AlfieNeural",
"VoiceId": "(en-GB, AlfieNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-GB-BellaNeural",
"VoiceId": "(en-GB, BellaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-ElliotNeural",
"VoiceId": "(en-GB, ElliotNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-EthanNeural",
"VoiceId": "(en-GB, EthanNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-GB-HollieNeural",
"VoiceId": "(en-GB, HollieNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-GB-MaisieNeural",
"VoiceId": "(en-GB, MaisieNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-NoahNeural",
"VoiceId": "(en-GB, NoahNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-OliverNeural",
"VoiceId": "(en-GB, OliverNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-GB-OliviaNeural",
"VoiceId": "(en-GB, OliviaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-GB-ThomasNeural",
"VoiceId": "(en-GB, ThomasNeural)"
}
]
},
{
"LanguageKey": "en-HK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-HK-YanNeural",
"VoiceId": "(en-HK, YanNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-HK-SamNeural",
"VoiceId": "(en-HK, SamNeural)"
}
]
},
{
"LanguageKey": "en-IE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-IE-EmilyNeural",
"VoiceId": "(en-IE, EmilyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-IE-ConnorNeural",
"VoiceId": "(en-IE, ConnorNeural)"
}
]
},
{
"LanguageKey": "en-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-IN-NeerjaNeural",
"VoiceId": "(en-IN, NeerjaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-IN-PrabhatNeural",
"VoiceId": "(en-IN, PrabhatNeural)"
}
]
},
{
"LanguageKey": "en-KE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-KE-AsiliaNeural",
"VoiceId": "(en-KE, AsiliaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-KE-ChilembaNeural",
"VoiceId": "(en-KE, ChilembaNeural)"
}
]
},
{
"LanguageKey": "en-NG",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-NG-EzinneNeural",
"VoiceId": "(en-NG, EzinneNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-NG-AbeoNeural",
"VoiceId": "(en-NG, AbeoNeural)"
}
]
},
{
"LanguageKey": "en-NZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-NZ-MollyNeural",
"VoiceId": "(en-NZ, MollyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-NZ-MitchellNeural",
"VoiceId": "(en-NZ, MitchellNeural)"
}
]
},
{
"LanguageKey": "en-PH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-PH-RosaNeural",
"VoiceId": "(en-PH, RosaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-PH-JamesNeural",
"VoiceId": "(en-PH, JamesNeural)"
}
]
},
{
"LanguageKey": "en-SG",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-SG-LunaNeural",
"VoiceId": "(en-SG, LunaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-SG-WayneNeural",
"VoiceId": "(en-SG, WayneNeural)"
}
]
},
{
"LanguageKey": "en-TZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-TZ-ImaniNeural",
"VoiceId": "(en-TZ, ImaniNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-TZ-ElimuNeural",
"VoiceId": "(en-TZ, ElimuNeural)"
}
]
},
{
"LanguageKey": "en-US",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-US-JennyMultilingualNeural3",
"VoiceId": "(en-US, JennyMultilingualNeural3)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-JennyNeural",
"VoiceId": "(en-US, JennyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-GuyNeural",
"VoiceId": "(en-US, GuyNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-AriaNeural",
"VoiceId": "(en-US, AriaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-DavisNeural",
"VoiceId": "(en-US, DavisNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-AmberNeural",
"VoiceId": "(en-US, AmberNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-AnaNeural",
"VoiceId": "(en-US, AnaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-AndrewNeural",
"VoiceId": "(en-US, AndrewNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-AshleyNeural",
"VoiceId": "(en-US, AshleyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-BrandonNeural",
"VoiceId": "(en-US, BrandonNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-BrianNeural",
"VoiceId": "(en-US, BrianNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-ChristopherNeural",
"VoiceId": "(en-US, ChristopherNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-CoraNeural",
"VoiceId": "(en-US, CoraNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-ElizabethNeural",
"VoiceId": "(en-US, ElizabethNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-EmmaNeural",
"VoiceId": "(en-US, EmmaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-EricNeural",
"VoiceId": "(en-US, EricNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-JacobNeural",
"VoiceId": "(en-US, JacobNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-JaneNeural",
"VoiceId": "(en-US, JaneNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-JasonNeural",
"VoiceId": "(en-US, JasonNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-MichelleNeural",
"VoiceId": "(en-US, MichelleNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-MonicaNeural",
"VoiceId": "(en-US, MonicaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-NancyNeural",
"VoiceId": "(en-US, NancyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-RogerNeural",
"VoiceId": "(en-US, RogerNeural)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-SaraNeural",
"VoiceId": "(en-US, SaraNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-SteffanNeural",
"VoiceId": "(en-US, SteffanNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-TonyNeural",
"VoiceId": "(en-US, TonyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-AIGenerate1Neural1",
"VoiceId": "(en-US, AIGenerate1Neural1)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-AIGenerate2Neural1",
"VoiceId": "(en-US, AIGenerate2Neural1)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-BlueNeural1",
"VoiceId": "(en-US, BlueNeural1)"
},
{
"IsFemale": true,
"VoiceActor": "en-US-JennyMultilingualV2Neural1,3",
"VoiceId": "(en-US, JennyMultilingualV2Neural1,3)"
},
{
"IsFemale": false,
"VoiceActor": "en-US-RyanMultilingualNeural1,3",
"VoiceId": "(en-US, RyanMultilingualNeural1,3)"
}
]
},
{
"LanguageKey": "en-ZA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "en-ZA-LeahNeural",
"VoiceId": "(en-ZA, LeahNeural)"
},
{
"IsFemale": false,
"VoiceActor": "en-ZA-LukeNeural",
"VoiceId": "(en-ZA, LukeNeural)"
}
]
},
{
"LanguageKey": "es-AR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-AR-ElenaNeural",
"VoiceId": "(es-AR, ElenaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-AR-TomasNeural",
"VoiceId": "(es-AR, TomasNeural)"
}
]
},
{
"LanguageKey": "es-BO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-BO-SofiaNeural",
"VoiceId": "(es-BO, SofiaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-BO-MarceloNeural",
"VoiceId": "(es-BO, MarceloNeural)"
}
]
},
{
"LanguageKey": "es-CL",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-CL-CatalinaNeural",
"VoiceId": "(es-CL, CatalinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-CL-LorenzoNeural",
"VoiceId": "(es-CL, LorenzoNeural)"
}
]
},
{
"LanguageKey": "es-CO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-CO-SalomeNeural",
"VoiceId": "(es-CO, SalomeNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-CO-GonzaloNeural",
"VoiceId": "(es-CO, GonzaloNeural)"
}
]
},
{
"LanguageKey": "es-CR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-CR-MariaNeural",
"VoiceId": "(es-CR, MariaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-CR-JuanNeural",
"VoiceId": "(es-CR, JuanNeural)"
}
]
},
{
"LanguageKey": "es-CU",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-CU-BelkysNeural",
"VoiceId": "(es-CU, BelkysNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-CU-ManuelNeural",
"VoiceId": "(es-CU, ManuelNeural)"
}
]
},
{
"LanguageKey": "es-DO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-DO-RamonaNeural",
"VoiceId": "(es-DO, RamonaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-DO-EmilioNeural",
"VoiceId": "(es-DO, EmilioNeural)"
}
]
},
{
"LanguageKey": "es-EC",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-EC-AndreaNeural",
"VoiceId": "(es-EC, AndreaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-EC-LuisNeural",
"VoiceId": "(es-EC, LuisNeural)"
}
]
},
{
"LanguageKey": "es-ES",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-ES-ElviraNeural",
"VoiceId": "(es-ES, ElviraNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-AlvaroNeural",
"VoiceId": "(es-ES, AlvaroNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-AbrilNeural",
"VoiceId": "(es-ES, AbrilNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-ArnauNeural",
"VoiceId": "(es-ES, ArnauNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-DarioNeural",
"VoiceId": "(es-ES, DarioNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-EliasNeural",
"VoiceId": "(es-ES, EliasNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-EstrellaNeural",
"VoiceId": "(es-ES, EstrellaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-IreneNeural",
"VoiceId": "(es-ES, IreneNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-LaiaNeural",
"VoiceId": "(es-ES, LaiaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-LiaNeural",
"VoiceId": "(es-ES, LiaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-NilNeural",
"VoiceId": "(es-ES, NilNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-SaulNeural",
"VoiceId": "(es-ES, SaulNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-ES-TeoNeural",
"VoiceId": "(es-ES, TeoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-TrianaNeural",
"VoiceId": "(es-ES, TrianaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-VeraNeural",
"VoiceId": "(es-ES, VeraNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-ES-XimenaNeural",
"VoiceId": "(es-ES, XimenaNeural)"
}
]
},
{
"LanguageKey": "es-GQ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-GQ-TeresaNeural",
"VoiceId": "(es-GQ, TeresaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-GQ-JavierNeural",
"VoiceId": "(es-GQ, JavierNeural)"
}
]
},
{
"LanguageKey": "es-GT",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-GT-MartaNeural",
"VoiceId": "(es-GT, MartaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-GT-AndresNeural",
"VoiceId": "(es-GT, AndresNeural)"
}
]
},
{
"LanguageKey": "es-HN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-HN-KarlaNeural",
"VoiceId": "(es-HN, KarlaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-HN-CarlosNeural",
"VoiceId": "(es-HN, CarlosNeural)"
}
]
},
{
"LanguageKey": "es-MX",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-MX-DaliaNeural",
"VoiceId": "(es-MX, DaliaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-JorgeNeural",
"VoiceId": "(es-MX, JorgeNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-BeatrizNeural",
"VoiceId": "(es-MX, BeatrizNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-CandelaNeural",
"VoiceId": "(es-MX, CandelaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-CarlotaNeural",
"VoiceId": "(es-MX, CarlotaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-CecilioNeural",
"VoiceId": "(es-MX, CecilioNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-GerardoNeural",
"VoiceId": "(es-MX, GerardoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-LarissaNeural",
"VoiceId": "(es-MX, LarissaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-LibertoNeural",
"VoiceId": "(es-MX, LibertoNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-LucianoNeural",
"VoiceId": "(es-MX, LucianoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-MarinaNeural",
"VoiceId": "(es-MX, MarinaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-NuriaNeural",
"VoiceId": "(es-MX, NuriaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-PelayoNeural",
"VoiceId": "(es-MX, PelayoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "es-MX-RenataNeural",
"VoiceId": "(es-MX, RenataNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-MX-YagoNeural",
"VoiceId": "(es-MX, YagoNeural)"
}
]
},
{
"LanguageKey": "es-NI",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-NI-YolandaNeural",
"VoiceId": "(es-NI, YolandaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-NI-FedericoNeural",
"VoiceId": "(es-NI, FedericoNeural)"
}
]
},
{
"LanguageKey": "es-PA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-PA-MargaritaNeural",
"VoiceId": "(es-PA, MargaritaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-PA-RobertoNeural",
"VoiceId": "(es-PA, RobertoNeural)"
}
]
},
{
"LanguageKey": "es-PE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-PE-CamilaNeural",
"VoiceId": "(es-PE, CamilaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-PE-AlexNeural",
"VoiceId": "(es-PE, AlexNeural)"
}
]
},
{
"LanguageKey": "es-PR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-PR-KarinaNeural",
"VoiceId": "(es-PR, KarinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-PR-VictorNeural",
"VoiceId": "(es-PR, VictorNeural)"
}
]
},
{
"LanguageKey": "es-PY",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-PY-TaniaNeural",
"VoiceId": "(es-PY, TaniaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-PY-MarioNeural",
"VoiceId": "(es-PY, MarioNeural)"
}
]
},
{
"LanguageKey": "es-SV",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-SV-LorenaNeural",
"VoiceId": "(es-SV, LorenaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-SV-RodrigoNeural",
"VoiceId": "(es-SV, RodrigoNeural)"
}
]
},
{
"LanguageKey": "es-US",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-US-PalomaNeural",
"VoiceId": "(es-US, PalomaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-US-AlonsoNeural",
"VoiceId": "(es-US, AlonsoNeural)"
}
]
},
{
"LanguageKey": "es-UY",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-UY-ValentinaNeural",
"VoiceId": "(es-UY, ValentinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-UY-MateoNeural",
"VoiceId": "(es-UY, MateoNeural)"
}
]
},
{
"LanguageKey": "es-VE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "es-VE-PaolaNeural",
"VoiceId": "(es-VE, PaolaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "es-VE-SebastianNeural",
"VoiceId": "(es-VE, SebastianNeural)"
}
]
},
{
"LanguageKey": "et-EE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "et-EE-AnuNeural2",
"VoiceId": "(et-EE, AnuNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "et-EE-KertNeural2",
"VoiceId": "(et-EE, KertNeural2)"
}
]
},
{
"LanguageKey": "eu-ES",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "eu-ES-AinhoaNeural2",
"VoiceId": "(eu-ES, AinhoaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "eu-ES-AnderNeural2",
"VoiceId": "(eu-ES, AnderNeural2)"
}
]
},
{
"LanguageKey": "fa-IR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fa-IR-DilaraNeural2",
"VoiceId": "(fa-IR, DilaraNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "fa-IR-FaridNeural2",
"VoiceId": "(fa-IR, FaridNeural2)"
}
]
},
{
"LanguageKey": "fi-FI",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fi-FI-SelmaNeural",
"VoiceId": "(fi-FI, SelmaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fi-FI-HarriNeural",
"VoiceId": "(fi-FI, HarriNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fi-FI-NooraNeural",
"VoiceId": "(fi-FI, NooraNeural)"
}
]
},
{
"LanguageKey": "fil-PH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fil-PH-BlessicaNeural2",
"VoiceId": "(fil-PH, BlessicaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "fil-PH-AngeloNeural2",
"VoiceId": "(fil-PH, AngeloNeural2)"
}
]
},
{
"LanguageKey": "fr-BE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fr-BE-CharlineNeural",
"VoiceId": "(fr-BE, CharlineNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-BE-GerardNeural",
"VoiceId": "(fr-BE, GerardNeural)"
}
]
},
{
"LanguageKey": "fr-CA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fr-CA-SylvieNeural",
"VoiceId": "(fr-CA, SylvieNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-CA-JeanNeural",
"VoiceId": "(fr-CA, JeanNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-CA-AntoineNeural",
"VoiceId": "(fr-CA, AntoineNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-CA-ThierryNeural",
"VoiceId": "(fr-CA, ThierryNeural)"
}
]
},
{
"LanguageKey": "fr-CH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fr-CH-ArianeNeural",
"VoiceId": "(fr-CH, ArianeNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-CH-FabriceNeural",
"VoiceId": "(fr-CH, FabriceNeural)"
}
]
},
{
"LanguageKey": "fr-FR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "fr-FR-DeniseNeural",
"VoiceId": "(fr-FR, DeniseNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-FR-HenriNeural",
"VoiceId": "(fr-FR, HenriNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-FR-AlainNeural",
"VoiceId": "(fr-FR, AlainNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-BrigitteNeural",
"VoiceId": "(fr-FR, BrigitteNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-CelesteNeural",
"VoiceId": "(fr-FR, CelesteNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-FR-ClaudeNeural",
"VoiceId": "(fr-FR, ClaudeNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-CoralieNeural",
"VoiceId": "(fr-FR, CoralieNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-EloiseNeural",
"VoiceId": "(fr-FR, EloiseNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-JacquelineNeural",
"VoiceId": "(fr-FR, JacquelineNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-FR-JeromeNeural",
"VoiceId": "(fr-FR, JeromeNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-JosephineNeural",
"VoiceId": "(fr-FR, JosephineNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-FR-MauriceNeural",
"VoiceId": "(fr-FR, MauriceNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-VivienneNeural",
"VoiceId": "(fr-FR, VivienneNeural)"
},
{
"IsFemale": false,
"VoiceActor": "fr-FR-YvesNeural",
"VoiceId": "(fr-FR, YvesNeural)"
},
{
"IsFemale": true,
"VoiceActor": "fr-FR-YvetteNeural",
"VoiceId": "(fr-FR, YvetteNeural)"
}
]
},
{
"LanguageKey": "ga-IE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ga-IE-OrlaNeural2",
"VoiceId": "(ga-IE, OrlaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "ga-IE-ColmNeural2",
"VoiceId": "(ga-IE, ColmNeural2)"
}
]
},
{
"LanguageKey": "gl-ES",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "gl-ES-SabelaNeural2",
"VoiceId": "(gl-ES, SabelaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "gl-ES-RoiNeural2",
"VoiceId": "(gl-ES, RoiNeural2)"
}
]
},
{
"LanguageKey": "gu-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "gu-IN-DhwaniNeural",
"VoiceId": "(gu-IN, DhwaniNeural)"
},
{
"IsFemale": false,
"VoiceActor": "gu-IN-NiranjanNeural",
"VoiceId": "(gu-IN, NiranjanNeural)"
}
]
},
{
"LanguageKey": "he-IL",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "he-IL-HilaNeural",
"VoiceId": "(he-IL, HilaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "he-IL-AvriNeural",
"VoiceId": "(he-IL, AvriNeural)"
}
]
},
{
"LanguageKey": "hi-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "hi-IN-SwaraNeural",
"VoiceId": "(hi-IN, SwaraNeural)"
},
{
"IsFemale": false,
"VoiceActor": "hi-IN-MadhurNeural",
"VoiceId": "(hi-IN, MadhurNeural)"
}
]
},
{
"LanguageKey": "hr-HR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "hr-HR-GabrijelaNeural",
"VoiceId": "(hr-HR, GabrijelaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "hr-HR-SreckoNeural",
"VoiceId": "(hr-HR, SreckoNeural)"
}
]
},
{
"LanguageKey": "hu-HU",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "hu-HU-NoemiNeural",
"VoiceId": "(hu-HU, NoemiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "hu-HU-TamasNeural",
"VoiceId": "(hu-HU, TamasNeural)"
}
]
},
{
"LanguageKey": "hy-AM",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "hy-AM-AnahitNeural2",
"VoiceId": "(hy-AM, AnahitNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "hy-AM-HaykNeural2",
"VoiceId": "(hy-AM, HaykNeural2)"
}
]
},
{
"LanguageKey": "id-ID",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "id-ID-GadisNeural",
"VoiceId": "(id-ID, GadisNeural)"
},
{
"IsFemale": false,
"VoiceActor": "id-ID-ArdiNeural",
"VoiceId": "(id-ID, ArdiNeural)"
}
]
},
{
"LanguageKey": "is-IS",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "is-IS-GudrunNeural2",
"VoiceId": "(is-IS, GudrunNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "is-IS-GunnarNeural2",
"VoiceId": "(is-IS, GunnarNeural2)"
}
]
},
{
"LanguageKey": "it-IT",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "it-IT-ElsaNeural",
"VoiceId": "(it-IT, ElsaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-IsabellaNeural",
"VoiceId": "(it-IT, IsabellaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-DiegoNeural",
"VoiceId": "(it-IT, DiegoNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-BenignoNeural",
"VoiceId": "(it-IT, BenignoNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-CalimeroNeural",
"VoiceId": "(it-IT, CalimeroNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-CataldoNeural",
"VoiceId": "(it-IT, CataldoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-FabiolaNeural",
"VoiceId": "(it-IT, FabiolaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-FiammaNeural",
"VoiceId": "(it-IT, FiammaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-GianniNeural",
"VoiceId": "(it-IT, GianniNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-GiuseppeNeural",
"VoiceId": "(it-IT, GiuseppeNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-ImeldaNeural",
"VoiceId": "(it-IT, ImeldaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-IrmaNeural",
"VoiceId": "(it-IT, IrmaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-LisandroNeural",
"VoiceId": "(it-IT, LisandroNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-PalmiraNeural",
"VoiceId": "(it-IT, PalmiraNeural)"
},
{
"IsFemale": true,
"VoiceActor": "it-IT-PierinaNeural",
"VoiceId": "(it-IT, PierinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "it-IT-RinaldoNeural",
"VoiceId": "(it-IT, RinaldoNeural)"
}
]
},
{
"LanguageKey": "ja-JP",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ja-JP-NanamiNeural",
"VoiceId": "(ja-JP, NanamiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ja-JP-KeitaNeural",
"VoiceId": "(ja-JP, KeitaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ja-JP-AoiNeural",
"VoiceId": "(ja-JP, AoiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ja-JP-DaichiNeural",
"VoiceId": "(ja-JP, DaichiNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ja-JP-MayuNeural",
"VoiceId": "(ja-JP, MayuNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ja-JP-NaokiNeural",
"VoiceId": "(ja-JP, NaokiNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ja-JP-ShioriNeural",
"VoiceId": "(ja-JP, ShioriNeural)"
}
]
},
{
"LanguageKey": "jv-ID",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "jv-ID-SitiNeural2",
"VoiceId": "(jv-ID, SitiNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "jv-ID-DimasNeural2",
"VoiceId": "(jv-ID, DimasNeural2)"
}
]
},
{
"LanguageKey": "ka-GE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ka-GE-EkaNeural2",
"VoiceId": "(ka-GE, EkaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "ka-GE-GiorgiNeural2",
"VoiceId": "(ka-GE, GiorgiNeural2)"
}
]
},
{
"LanguageKey": "kk-KZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "kk-KZ-AigulNeural2",
"VoiceId": "(kk-KZ, AigulNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "kk-KZ-DauletNeural2",
"VoiceId": "(kk-KZ, DauletNeural2)"
}
]
},
{
"LanguageKey": "km-KH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "km-KH-SreymomNeural2",
"VoiceId": "(km-KH, SreymomNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "km-KH-PisethNeural2",
"VoiceId": "(km-KH, PisethNeural2)"
}
]
},
{
"LanguageKey": "kn-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "kn-IN-SapnaNeural2",
"VoiceId": "(kn-IN, SapnaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "kn-IN-GaganNeural2",
"VoiceId": "(kn-IN, GaganNeural2)"
}
]
},
{
"LanguageKey": "ko-KR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ko-KR-SunHiNeural",
"VoiceId": "(ko-KR, SunHiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ko-KR-InJoonNeural",
"VoiceId": "(ko-KR, InJoonNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ko-KR-BongJinNeural",
"VoiceId": "(ko-KR, BongJinNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ko-KR-GookMinNeural",
"VoiceId": "(ko-KR, GookMinNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ko-KR-HyunsuNeural",
"VoiceId": "(ko-KR, HyunsuNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ko-KR-JiMinNeural",
"VoiceId": "(ko-KR, JiMinNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ko-KR-SeoHyeonNeural",
"VoiceId": "(ko-KR, SeoHyeonNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ko-KR-SoonBokNeural",
"VoiceId": "(ko-KR, SoonBokNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ko-KR-YuJinNeural",
"VoiceId": "(ko-KR, YuJinNeural)"
}
]
},
{
"LanguageKey": "lo-LA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "lo-LA-KeomanyNeural2",
"VoiceId": "(lo-LA, KeomanyNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "lo-LA-ChanthavongNeural2",
"VoiceId": "(lo-LA, ChanthavongNeural2)"
}
]
},
{
"LanguageKey": "lt-LT",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "lt-LT-OnaNeural2",
"VoiceId": "(lt-LT, OnaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "lt-LT-LeonasNeural2",
"VoiceId": "(lt-LT, LeonasNeural2)"
}
]
},
{
"LanguageKey": "lv-LV",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "lv-LV-EveritaNeural2",
"VoiceId": "(lv-LV, EveritaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "lv-LV-NilsNeural2",
"VoiceId": "(lv-LV, NilsNeural2)"
}
]
},
{
"LanguageKey": "mk-MK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "mk-MK-MarijaNeural2",
"VoiceId": "(mk-MK, MarijaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "mk-MK-AleksandarNeural2",
"VoiceId": "(mk-MK, AleksandarNeural2)"
}
]
},
{
"LanguageKey": "ml-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ml-IN-SobhanaNeural2",
"VoiceId": "(ml-IN, SobhanaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "ml-IN-MidhunNeural2",
"VoiceId": "(ml-IN, MidhunNeural2)"
}
]
},
{
"LanguageKey": "mn-MN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "mn-MN-YesuiNeural2",
"VoiceId": "(mn-MN, YesuiNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "mn-MN-BataaNeural2",
"VoiceId": "(mn-MN, BataaNeural2)"
}
]
},
{
"LanguageKey": "mr-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "mr-IN-AarohiNeural",
"VoiceId": "(mr-IN, AarohiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "mr-IN-ManoharNeural",
"VoiceId": "(mr-IN, ManoharNeural)"
}
]
},
{
"LanguageKey": "ms-MY",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ms-MY-YasminNeural",
"VoiceId": "(ms-MY, YasminNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ms-MY-OsmanNeural",
"VoiceId": "(ms-MY, OsmanNeural)"
}
]
},
{
"LanguageKey": "mt-MT",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "mt-MT-GraceNeural2",
"VoiceId": "(mt-MT, GraceNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "mt-MT-JosephNeural2",
"VoiceId": "(mt-MT, JosephNeural2)"
}
]
},
{
"LanguageKey": "my-MM",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "my-MM-NilarNeural2",
"VoiceId": "(my-MM, NilarNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "my-MM-ThihaNeural2",
"VoiceId": "(my-MM, ThihaNeural2)"
}
]
},
{
"LanguageKey": "nb-NO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "nb-NO-PernilleNeural",
"VoiceId": "(nb-NO, PernilleNeural)"
},
{
"IsFemale": false,
"VoiceActor": "nb-NO-FinnNeural",
"VoiceId": "(nb-NO, FinnNeural)"
},
{
"IsFemale": true,
"VoiceActor": "nb-NO-IselinNeural",
"VoiceId": "(nb-NO, IselinNeural)"
}
]
},
{
"LanguageKey": "ne-NP",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ne-NP-HemkalaNeural2",
"VoiceId": "(ne-NP, HemkalaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "ne-NP-SagarNeural2",
"VoiceId": "(ne-NP, SagarNeural2)"
}
]
},
{
"LanguageKey": "nl-BE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "nl-BE-DenaNeural",
"VoiceId": "(nl-BE, DenaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "nl-BE-ArnaudNeural",
"VoiceId": "(nl-BE, ArnaudNeural)"
}
]
},
{
"LanguageKey": "nl-NL",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "nl-NL-FennaNeural",
"VoiceId": "(nl-NL, FennaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "nl-NL-MaartenNeural",
"VoiceId": "(nl-NL, MaartenNeural)"
},
{
"IsFemale": true,
"VoiceActor": "nl-NL-ColetteNeural",
"VoiceId": "(nl-NL, ColetteNeural)"
}
]
},
{
"LanguageKey": "pl-PL",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "pl-PL-AgnieszkaNeural",
"VoiceId": "(pl-PL, AgnieszkaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pl-PL-MarekNeural",
"VoiceId": "(pl-PL, MarekNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pl-PL-ZofiaNeural",
"VoiceId": "(pl-PL, ZofiaNeural)"
}
]
},
{
"LanguageKey": "ps-AF",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ps-AF-LatifaNeural2",
"VoiceId": "(ps-AF, LatifaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "ps-AF-GulNawazNeural2",
"VoiceId": "(ps-AF, GulNawazNeural2)"
}
]
},
{
"LanguageKey": "pt-BR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "pt-BR-FranciscaNeural",
"VoiceId": "(pt-BR, FranciscaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-AntonioNeural",
"VoiceId": "(pt-BR, AntonioNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-BrendaNeural",
"VoiceId": "(pt-BR, BrendaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-DonatoNeural",
"VoiceId": "(pt-BR, DonatoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-ElzaNeural",
"VoiceId": "(pt-BR, ElzaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-FabioNeural",
"VoiceId": "(pt-BR, FabioNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-GiovannaNeural",
"VoiceId": "(pt-BR, GiovannaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-HumbertoNeural",
"VoiceId": "(pt-BR, HumbertoNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-JulioNeural",
"VoiceId": "(pt-BR, JulioNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-LeilaNeural",
"VoiceId": "(pt-BR, LeilaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-LeticiaNeural",
"VoiceId": "(pt-BR, LeticiaNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-ManuelaNeural",
"VoiceId": "(pt-BR, ManuelaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-NicolauNeural",
"VoiceId": "(pt-BR, NicolauNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-ThalitaNeural",
"VoiceId": "(pt-BR, ThalitaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-BR-ValerioNeural",
"VoiceId": "(pt-BR, ValerioNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-BR-YaraNeural",
"VoiceId": "(pt-BR, YaraNeural)"
}
]
},
{
"LanguageKey": "pt-PT",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "pt-PT-RaquelNeural",
"VoiceId": "(pt-PT, RaquelNeural)"
},
{
"IsFemale": false,
"VoiceActor": "pt-PT-DuarteNeural",
"VoiceId": "(pt-PT, DuarteNeural)"
},
{
"IsFemale": true,
"VoiceActor": "pt-PT-FernandaNeural",
"VoiceId": "(pt-PT, FernandaNeural)"
}
]
},
{
"LanguageKey": "ro-RO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ro-RO-AlinaNeural",
"VoiceId": "(ro-RO, AlinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ro-RO-EmilNeural",
"VoiceId": "(ro-RO, EmilNeural)"
}
]
},
{
"LanguageKey": "ru-RU",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ru-RU-SvetlanaNeural",
"VoiceId": "(ru-RU, SvetlanaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ru-RU-DmitryNeural",
"VoiceId": "(ru-RU, DmitryNeural)"
},
{
"IsFemale": true,
"VoiceActor": "ru-RU-DariyaNeural",
"VoiceId": "(ru-RU, DariyaNeural)"
}
]
},
{
"LanguageKey": "si-LK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "si-LK-ThiliniNeural2",
"VoiceId": "(si-LK, ThiliniNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "si-LK-SameeraNeural2",
"VoiceId": "(si-LK, SameeraNeural2)"
}
]
},
{
"LanguageKey": "sk-SK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sk-SK-ViktoriaNeural",
"VoiceId": "(sk-SK, ViktoriaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "sk-SK-LukasNeural",
"VoiceId": "(sk-SK, LukasNeural)"
}
]
},
{
"LanguageKey": "sl-SI",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sl-SI-PetraNeural",
"VoiceId": "(sl-SI, PetraNeural)"
},
{
"IsFemale": false,
"VoiceActor": "sl-SI-RokNeural",
"VoiceId": "(sl-SI, RokNeural)"
}
]
},
{
"LanguageKey": "so-SO",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "so-SO-UbaxNeural2",
"VoiceId": "(so-SO, UbaxNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "so-SO-MuuseNeural2",
"VoiceId": "(so-SO, MuuseNeural2)"
}
]
},
{
"LanguageKey": "sq-AL",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sq-AL-AnilaNeural2",
"VoiceId": "(sq-AL, AnilaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "sq-AL-IlirNeural2",
"VoiceId": "(sq-AL, IlirNeural2)"
}
]
},
{
"LanguageKey": "sr-LATN-RS",
"VoiceActors": [
{
"IsFemale": false,
"VoiceActor": "sr-Latn-RS-NicholasNeural1,2",
"VoiceId": "(sr-Latn, RS)"
},
{
"IsFemale": true,
"VoiceActor": "sr-Latn-RS-SophieNeural1,2",
"VoiceId": "(sr-Latn, RS)"
}
]
},
{
"LanguageKey": "sr-RS",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sr-RS-SophieNeural2",
"VoiceId": "(sr-RS, SophieNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "sr-RS-NicholasNeural2",
"VoiceId": "(sr-RS, NicholasNeural2)"
}
]
},
{
"LanguageKey": "su-ID",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "su-ID-TutiNeural2",
"VoiceId": "(su-ID, TutiNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "su-ID-JajangNeural2",
"VoiceId": "(su-ID, JajangNeural2)"
}
]
},
{
"LanguageKey": "sv-SE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sv-SE-SofieNeural",
"VoiceId": "(sv-SE, SofieNeural)"
},
{
"IsFemale": false,
"VoiceActor": "sv-SE-MattiasNeural",
"VoiceId": "(sv-SE, MattiasNeural)"
},
{
"IsFemale": true,
"VoiceActor": "sv-SE-HilleviNeural",
"VoiceId": "(sv-SE, HilleviNeural)"
}
]
},
{
"LanguageKey": "sw-KE",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sw-KE-ZuriNeural2",
"VoiceId": "(sw-KE, ZuriNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "sw-KE-RafikiNeural2",
"VoiceId": "(sw-KE, RafikiNeural2)"
}
]
},
{
"LanguageKey": "sw-TZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "sw-TZ-RehemaNeural",
"VoiceId": "(sw-TZ, RehemaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "sw-TZ-DaudiNeural",
"VoiceId": "(sw-TZ, DaudiNeural)"
}
]
},
{
"LanguageKey": "ta-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ta-IN-PallaviNeural",
"VoiceId": "(ta-IN, PallaviNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ta-IN-ValluvarNeural",
"VoiceId": "(ta-IN, ValluvarNeural)"
}
]
},
{
"LanguageKey": "ta-LK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ta-LK-SaranyaNeural",
"VoiceId": "(ta-LK, SaranyaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ta-LK-KumarNeural",
"VoiceId": "(ta-LK, KumarNeural)"
}
]
},
{
"LanguageKey": "ta-MY",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ta-MY-KaniNeural",
"VoiceId": "(ta-MY, KaniNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ta-MY-SuryaNeural",
"VoiceId": "(ta-MY, SuryaNeural)"
}
]
},
{
"LanguageKey": "ta-SG",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ta-SG-VenbaNeural",
"VoiceId": "(ta-SG, VenbaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ta-SG-AnbuNeural",
"VoiceId": "(ta-SG, AnbuNeural)"
}
]
},
{
"LanguageKey": "te-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "te-IN-ShrutiNeural",
"VoiceId": "(te-IN, ShrutiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "te-IN-MohanNeural",
"VoiceId": "(te-IN, MohanNeural)"
}
]
},
{
"LanguageKey": "th-TH",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "th-TH-PremwadeeNeural",
"VoiceId": "(th-TH, PremwadeeNeural)"
},
{
"IsFemale": false,
"VoiceActor": "th-TH-NiwatNeural",
"VoiceId": "(th-TH, NiwatNeural)"
},
{
"IsFemale": true,
"VoiceActor": "th-TH-AcharaNeural",
"VoiceId": "(th-TH, AcharaNeural)"
}
]
},
{
"LanguageKey": "tr-TR",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "tr-TR-EmelNeural",
"VoiceId": "(tr-TR, EmelNeural)"
},
{
"IsFemale": false,
"VoiceActor": "tr-TR-AhmetNeural",
"VoiceId": "(tr-TR, AhmetNeural)"
}
]
},
{
"LanguageKey": "uk-UA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "uk-UA-PolinaNeural",
"VoiceId": "(uk-UA, PolinaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "uk-UA-OstapNeural",
"VoiceId": "(uk-UA, OstapNeural)"
}
]
},
{
"LanguageKey": "ur-IN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ur-IN-GulNeural",
"VoiceId": "(ur-IN, GulNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ur-IN-SalmanNeural",
"VoiceId": "(ur-IN, SalmanNeural)"
}
]
},
{
"LanguageKey": "ur-PK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "ur-PK-UzmaNeural",
"VoiceId": "(ur-PK, UzmaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "ur-PK-AsadNeural",
"VoiceId": "(ur-PK, AsadNeural)"
}
]
},
{
"LanguageKey": "uz-UZ",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "uz-UZ-MadinaNeural2",
"VoiceId": "(uz-UZ, MadinaNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "uz-UZ-SardorNeural2",
"VoiceId": "(uz-UZ, SardorNeural2)"
}
]
},
{
"LanguageKey": "vi-VN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "vi-VN-HoaiMyNeural",
"VoiceId": "(vi-VN, HoaiMyNeural)"
},
{
"IsFemale": false,
"VoiceActor": "vi-VN-NamMinhNeural",
"VoiceId": "(vi-VN, NamMinhNeural)"
}
]
},
{
"LanguageKey": "wuu-CN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "wuu-CN-XiaotongNeural2",
"VoiceId": "(wuu-CN, XiaotongNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "wuu-CN-YunzheNeural2",
"VoiceId": "(wuu-CN, YunzheNeural2)"
}
]
},
{
"LanguageKey": "yue-CN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "yue-CN-XiaoMinNeural1,2",
"VoiceId": "(yue-CN, XiaoMinNeural1,2)"
},
{
"IsFemale": false,
"VoiceActor": "yue-CN-YunSongNeural1,2",
"VoiceId": "(yue-CN, YunSongNeural1,2)"
}
]
},
{
"LanguageKey": "zh-CN",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoxiaoNeural",
"VoiceId": "(zh-CN, XiaoxiaoNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunxiNeural",
"VoiceId": "(zh-CN, YunxiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunjianNeural",
"VoiceId": "(zh-CN, YunjianNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoyiNeural",
"VoiceId": "(zh-CN, XiaoyiNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunyangNeural",
"VoiceId": "(zh-CN, YunyangNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaochenNeural",
"VoiceId": "(zh-CN, XiaochenNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaohanNeural",
"VoiceId": "(zh-CN, XiaohanNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaomengNeural",
"VoiceId": "(zh-CN, XiaomengNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaomoNeural",
"VoiceId": "(zh-CN, XiaomoNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoqiuNeural",
"VoiceId": "(zh-CN, XiaoqiuNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoruiNeural",
"VoiceId": "(zh-CN, XiaoruiNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoshuangNeural",
"VoiceId": "(zh-CN, XiaoshuangNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoxuanNeural",
"VoiceId": "(zh-CN, XiaoxuanNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoyanNeural",
"VoiceId": "(zh-CN, XiaoyanNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaoyouNeural",
"VoiceId": "(zh-CN, XiaoyouNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaozhenNeural",
"VoiceId": "(zh-CN, XiaozhenNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunfengNeural",
"VoiceId": "(zh-CN, YunfengNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunhaoNeural",
"VoiceId": "(zh-CN, YunhaoNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunxiaNeural",
"VoiceId": "(zh-CN, YunxiaNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunyeNeural",
"VoiceId": "(zh-CN, YunyeNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunzeNeural",
"VoiceId": "(zh-CN, YunzeNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-CN-XiaorouNeural1",
"VoiceId": "(zh-CN, XiaorouNeural1)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-YunjieNeural1",
"VoiceId": "(zh-CN, YunjieNeural1)"
}
]
},
{
"LanguageKey": "zh-CN-GUANGXI",
"VoiceActors": [
{
"IsFemale": false,
"VoiceActor": "zh-CN-guangxi-YunqiNeural1,2",
"VoiceId": "(zh-CN, guangxi)"
}
]
},
{
"LanguageKey": "zh-CN-henan",
"VoiceActors": [
{
"IsFemale": false,
"VoiceActor": "zh-CN-henan-YundengNeural2",
"VoiceId": "(zh-CN, henan)"
}
]
},
{
"LanguageKey": "zh-CN-liaoning",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "zh-CN-liaoning-XiaobeiNeural1,2",
"VoiceId": "(zh-CN, liaoning)"
},
{
"IsFemale": false,
"VoiceActor": "zh-CN-liaoning-YunbiaoNeural1,2",
"VoiceId": "(zh-CN, liaoning)"
}
]
},
{
"LanguageKey": "zh-CN-shaanxi",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "zh-CN-shaanxi-XiaoniNeural1,2",
"VoiceId": "(zh-CN, shaanxi)"
}
]
},
{
"LanguageKey": "zh-CN-shandong",
"VoiceActors": [
{
"IsFemale": false,
"VoiceActor": "zh-CN-shandong-YunxiangNeural2",
"VoiceId": "(zh-CN, shandong)"
}
]
},
{
"LanguageKey": "zh-CN-sichuan",
"VoiceActors": [
{
"IsFemale": false,
"VoiceActor": "zh-CN-sichuan-YunxiNeural1,2",
"VoiceId": "(zh-CN, sichuan)"
}
]
},
{
"LanguageKey": "zh-HK",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "zh-HK-HiuMaanNeural",
"VoiceId": "(zh-HK, HiuMaanNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-HK-WanLungNeural",
"VoiceId": "(zh-HK, WanLungNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-HK-HiuGaaiNeural",
"VoiceId": "(zh-HK, HiuGaaiNeural)"
}
]
},
{
"LanguageKey": "zh-TW",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "zh-TW-HsiaoChenNeural",
"VoiceId": "(zh-TW, HsiaoChenNeural)"
},
{
"IsFemale": false,
"VoiceActor": "zh-TW-YunJheNeural",
"VoiceId": "(zh-TW, YunJheNeural)"
},
{
"IsFemale": true,
"VoiceActor": "zh-TW-HsiaoYuNeural",
"VoiceId": "(zh-TW, HsiaoYuNeural)"
}
]
},
{
"LanguageKey": "zu-ZA",
"VoiceActors": [
{
"IsFemale": true,
"VoiceActor": "zu-ZA-ThandoNeural2",
"VoiceId": "(zu-ZA, ThandoNeural2)"
},
{
"IsFemale": false,
"VoiceActor": "zu-ZA-ThembaNeural2",
"VoiceId": "(zu-ZA, ThembaNeural2)"
}
]
}
]
</code>
</pre>
</div>
Let's look at the source code for calling the TextToSpeechUtil.cs shown above from a MAUI Blazor app view, <em>Index.razor</em>.
The code shown below is two private methods that do the work of retrieving the audio file from the Azure Speech Service: first all the voice actor ids are loaded from the bundled json file of voice actors displayed above and deserialized into a list of voice actors.
Retrieving the audio file passes in the translated text for which to generate synthesized speech, together with the target language, all available actor voices and the preferred voice actor id, if set.
Retrieved is metadata and the audio file, in MP3 format. This file format is recognized by, for example, Windows without any additional codec libraries having to be installed.
<b>Index.razor</b> (Inside the @code block { .. } of that razor file)
<pre>
<code class='hljs csharp'>
private async Task<TextToSpeechLanguage[]> GetActorVoices()
{
//https://learn.microsoft.com/en-us/azure/ai-services/speech-service/language-support?tabs=tts
Stream actorVoicesStream = await FileSystem.OpenAppPackageFileAsync("voicebook.json");
using StreamReader sr = new StreamReader(actorVoicesStream);
string actorVoicesJson = string.Empty;
string line;
while ((line = sr.ReadLine()) != null)
{
//Console.WriteLine(line);
actorVoicesJson += line;
}
var actorVoices = JsonSerializer.Deserialize<TextToSpeechLanguage[]>(actorVoicesJson);
return actorVoices;
}
private async void SpeakText()
{
await Submit();
var actorVoices = await GetActorVoices();
TextToSpeechResult textToSpeechResult = await TextToSpeechUtil.GetSpeechFromText(Model.TranslatedText, Model.TargetLanguage, actorVoices, Model.PreferredVoiceActorId);
Model.ActiveVoiceActorId = textToSpeechResult.VoiceActorId;
Model.Transcript = textToSpeechResult.Transcript;
Model.AvailableVoiceActorIds = textToSpeechResult.AvailableVoiceActorIds;
Model.AdditionalVoiceDataMetaInformation = $"Byte size voice data: {textToSpeechResult?.VoiceData?.Length}, Audio output format: {textToSpeechResult.OutputFormat}";
var voiceFolder = Path.Combine(FileSystem.Current.AppDataDirectory, "Resources", "Raw");
if (!Directory.Exists(voiceFolder))
{
Directory.CreateDirectory(voiceFolder);
}
string voiceFile = "textToSpeechVoiceOutput_" + Model.TargetLanguage + Guid.NewGuid().ToString("N") + ".mpga";
string voiceRelativeFile = Path.Combine(voiceFile);
string voiceFileFullPath = Path.Combine(voiceFolder, voiceFile);
await File.WriteAllBytesAsync(voiceFileFullPath, textToSpeechResult.VoiceData);
Stream voiceStream = File.OpenRead(voiceFileFullPath);
StateHasChanged();
var player = AudioManager.CreatePlayer(voiceStream);
player.Play();
}
</code>
</pre>
The screenshot below shows how the demo app now looks. You can translate text into another language and then have speech synthesis in Azure AI Cognitive Services generate realistic audio speech of the translated text, so you can see not only how the text is translated, but also how it is pronounced.
<br /> <br /><br /><br />
<img alt="" border="0" width="800"
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj1v5DXhXm2GqA6ZvRjmB2fgG5_SlUluyUmg6Oyp90gnom0r-1ppc-CbkhlnrL1mxwdm-6o9OWPplgWdqwiVoHwZvueLANBPdjdFvBFlNVHtz04dMC87hilbfX7eUqE-MUDWw_R83u3EjWddONd0G6hZyRbHf1uLq12iUiGifPTlJYduEtPyNcOsBPAaTk/s1600/azure_ai_speech_synthesis.png"/>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-3496424628571024372023-11-12T23:21:00.003+01:002023-11-12T23:21:55.210+01:00Getting a parent object by name in parent scope chain in AngularJsI still work with some legacy solutions in AngularJs. I want to look in the parent scope chain for an object which I know the name of, but it sits several levels up, requiring multiple $parent calls to reach the correct parent scope.
Here is a small util method I wrote the other day to access a variable inside parent scopes by a known name.
Note: select an element via the F12 developer tools (making it available as $0) and access its AngularJs scope. In the browser this is done by running in the console:
<code>
angular.element($0).scope()
</code>
Here is the helper method I wrote :
<pre>
<code class='hljs js'>
angular.element($0).scope().findParentObjByName = function($scope, objName) {
var curScope = $scope;
var parentLevel = 0;
//debugger
while ((curScope = curScope.$parent) != null && !curScope.hasOwnProperty(objName) && parentLevel < 15){
parentLevel++;
}
return curScope.hasOwnProperty(objName) ? curScope[objName] : null;
}
</code>
</pre>
We can then look for a property in the parent scopes like in this example :
<pre>
<code class='hljs js'>
angular.element($0).scope().findParentObjByName($scope, 'list')
</code>
</pre>
This returns the object if found, and you can work further with it; for example, in this particular case I used:
<pre>
<code class='hljs js'>
angular.element($0).scope().findParentObjByName($scope, 'list').listData[0]
</code>
</pre>Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-33321455269247363772023-10-29T23:00:00.015+01:002023-10-30T00:45:40.590+01:00Primary constructors in C# 12This article looks at primary constructors, a C# 12 feature that ships with .NET 8.
Primary constructors can be tested on the following website offering a C# compiler which supports .NET 8.
<br /><br />
<a href='https://sharplab.io'>Sharplab.io</a>
<br /><br />
Since .NET 8 is released on the 14th of November 2023, about two weeks after this article was written, it
will be generally available very soon. You can also already use .NET 8 in preview versions of VS 2022.
Let's look at usage of a primary constructor.
The following program defines one primary constructor; note that the constructor parameters appear in the class declaration itself, before
the class body block.
<b>Program.cs</b>
<pre>
<code class='hljs csharp'>
using System;
public class Person(string firstName, string lastName) {
public override string ToString()
{
lastName += " (Primary constructor parameters might be mutated) ";
return $"{lastName}, {firstName}";
}
}
public class Program {
public static void Main(){
var p = new Person("Tom", "Cruise");
Console.WriteLine(p.ToString());
}
}
</code>
</pre>
Running the small program above gives this output:
<br /><br /><br />
<b>Output</b>
<br />
<code>
Cruise (Primary constructor parameters might be mutated) , Tom
</code>
<br /><br />
If a class has a primary constructor, this constructor must be called. If you add another constructor, it must call the primary constructor,
for example a default (empty) constructor calling the primary constructor like this:
<pre>
<code class='hljs csharp'>
public Person() : this("", "")
{
}
</code>
</pre>
<br />
<p>A gist of this can be tested here : </p>
<a href='https://sharplab.io/#gist:494a321789363cdef9518278e14fb311'>https://sharplab.io/#gist:494a321789363cdef9518278e14fb311</a>
<br />
<br />
Another example of primary constructors is shown below. We use a <em>record</em> called <b>Distance</b> and pass in the dx and dy components of a vector and calculate its mathematical<br>
magnitude and direction. We convert radians to degrees here using the relation PI radians = 180 degrees known from trigonometry. If dy < 0, we are in quadrant 3 or 4 and we add 180 degrees.
<pre>
<code class='hljs csharp'>
using System;
var vector = new Distance(-2, -3);
Console.WriteLine($"The vector {vector} has a magnitude of {vector.Magnitude} with direction {vector.Direction}");
public record Distance(double dx, double dy) {
public double Magnitude { get; } = Math.Round(Math.Sqrt(dx*dx + dy*dy), 2);
public double Direction { get; } = dy < 0 ? 180 + Math.Round(Math.Atan(dy / dx) * 180 / Math.PI, 2) :
Math.Round(Math.Atan(dy / dx) * 180 / Math.PI, 2);
}
</code>
</pre>
A copy of the code above is available in the Gist below:
<br />
<code>
<a href='https://sharplab.io/#gist:78092029741a7b9e7362441d9eb8e083'>https://sharplab.io/#gist:78092029741a7b9e7362441d9eb8e083</a>
</code>
<pre>
<code class='hljs csharp'>
The vector Distance { dx = -2, dy = -3, Magnitude = 3.61, Direction = 236.31 } has a magnitude of 3.61 with direction 236.31
</code>
</pre>
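Worked out by hand for the vector above (dx = -2, dy = -3), the numbers match the program output:

```latex
\begin{align*}
\text{Magnitude} &= \sqrt{dx^2 + dy^2} = \sqrt{(-2)^2 + (-3)^2} = \sqrt{13} \approx 3.61 \\
\text{Direction} &= 180^{\circ} + \arctan\!\left(\frac{dy}{dx}\right) \cdot \frac{180^{\circ}}{\pi}
= 180^{\circ} + \arctan(1.5) \cdot \frac{180^{\circ}}{\pi} \approx 180^{\circ} + 56.31^{\circ} = 236.31^{\circ}
\end{align*}
```

Since dy < 0, the 180 degree term is added, which corresponds to the conditional branch in the record above.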
If you have forgotten your trigonometry lessons from school, here is a good page about magnitude and direction:
<br />
<br />
<a href='https://mathsathome.com/magnitude-direction-vector/'>https://mathsathome.com/magnitude-direction-vector/</a>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-89201344458747159972023-10-21T22:38:00.013+02:002023-10-21T23:22:30.398+02:00Using Azure Health Information extraction in Azure Cognitive ServicesThis article presents code showing how to extract health information from arbitrary text using Azure Health Information extraction in Azure Cognitive Services. This technology uses NLP (natural language processing) combined with AI techniques.
A Github repo exists with the code for a running .NET MAUI Blazor demo in .NET 7 here:
<br />
<br />
<a href='https://github.com/toreaurstadboss/HealthTextAnalytics'>https://github.com/toreaurstadboss/HealthTextAnalytics</a>
<br />
<br />
A screenshot from the demo shows how it works below.
<img alt="" width="950" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjXY9F-1FMmGcfUTVfIlzamI79PHyjzvy1ETXLaKHtJS5OOqjbUMo8eWsxEZx4HTFLoGIYx0vyUszPK_0h0F_V2qiIKtKVfZO_CGEakBfxYlvdB5lKzMIISvpe6GDO6il_xHqnrlp6eVGeoXmQZyQm27gValimYx5jYdpI8QhkksjhyphenhyphenSyR6sGEViTWTYaE/s1600/azure_ai_healthcare_text_analytics.png"/>
The demo uses Azure AI Healthcare information extraction to extract <em>entities</em> from the text, such as a person's age, gender, employment and medical history, and medical conditions such as diagnoses, procedures and so on.
The returned data is shown at the bottom of the demo; the raw data shows that it is json in FHIR format. Since we want the FHIR format, we must use the REST api to get this information.
Azure AI Healthcare information extraction also extracts <em>relations</em>, which connect the <em>entities</em> together for semantic analysis of the text. Also, <em>links</em> exist for each entity for further reading.
These point to external systems such as SNOMED CT, giving SNOMED codes for each entity.
Let's look at the source code for the demo next.
We define a named http client in the MauiProgram.cs file which starts the application. We could move the code into a middleware extension method, but the code is kept simple in the demo.
<br /><br />
<b>MauiProgram.cs</b>
<pre>
<code class='hljs csharp'>
var azureEndpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_ENDPOINT");
var azureKey = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_KEY");
if (string.IsNullOrWhiteSpace(azureEndpoint))
{
throw new ArgumentNullException(nameof(azureEndpoint), "Missing system environment variable: AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_ENDPOINT");
}
if (string.IsNullOrWhiteSpace(azureKey))
{
throw new ArgumentNullException(nameof(azureKey), "Missing system environment variable: AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_KEY");
}
var azureEndpointHost = new Uri(azureEndpoint);
builder.Services.AddHttpClient("Az", httpClient =>
{
string baseUrl = azureEndpointHost.GetLeftPart(UriPartial.Authority); //https://stackoverflow.com/a/18708268/741368
httpClient.BaseAddress = new Uri(baseUrl);
//httpClient..Add("Content-type", "application/json");
//httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));//ACCEPT header
httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", azureKey);
});
</code>
</pre>
The content-type header will be specified inside the HttpRequestMessage shown further below instead of in this named client. As we see, we must add both the endpoint base url and the key in the <em>Ocp-Apim-Subscription-Key</em> http header.
Let's next look at how to create a POST request to the <em>language resource</em> endpoint that offers the health text analysis.
<br /><br />
<b>HealthAnalyticsTextClientService.cs</b>
<pre>
<code class='hljs csharp'>
using HealthTextAnalytics.Models;
using System.Diagnostics;
using System.Text;
using System.Text.Json.Nodes;
namespace HealthTextAnalytics.Util
{
public class HealthAnalyticsTextClientService : IHealthAnalyticsTextClientService
{
private readonly IHttpClientFactory _httpClientFactory;
private const int awaitTimeInMs = 500;
private const int maxTimerWait = 10000;
public HealthAnalyticsTextClientService(IHttpClientFactory httpClientFactory)
{
_httpClientFactory = httpClientFactory;
}
public async Task<HealthTextAnalyticsResponse> GetHealthTextAnalytics(string inputText)
{
var client = _httpClientFactory.CreateClient("Az");
string requestBodyRaw = HealthAnalyticsTextHelper.CreateRequest(inputText);
//https://learn.microsoft.com/en-us/azure/ai-services/language-service/text-analytics-for-health/how-to/call-api?tabs=ner
var stopWatch = Stopwatch.StartNew();
HttpRequestMessage request = CreateTextAnalyticsRequest(requestBodyRaw);
var response = await client.SendAsync(request);
var result = new HealthTextAnalyticsResponse();
var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(awaitTimeInMs));
int timeAwaited = 0;
while (await timer.WaitForNextTickAsync())
{
if (response.IsSuccessStatusCode)
{
result.IsSearchPerformed = true;
var operationLocation = response.Headers.First(h => h.Key?.ToLower() == Constants.Constants.HttpHeaderOperationResultAvailable).Value.FirstOrDefault();
var resultFromHealthAnalysis = await client.GetAsync(operationLocation);
JsonNode resultFromService = await resultFromHealthAnalysis.GetJsonFromHttpResponse();
if (resultFromService.GetValue<string>("status") == "succeeded")
{
result.AnalysisResultRawJson = await resultFromHealthAnalysis.Content.ReadAsStringAsync();
result.ExecutionTimeInMilliseconds = stopWatch.ElapsedMilliseconds;
result.Entities.AddRange(HealthAnalyticsTextHelper.GetEntities(result.AnalysisResultRawJson));
result.CategorizedInputText = HealthAnalyticsTextHelper.GetCategorizedInputText(inputText, result.AnalysisResultRawJson);
break;
}
}
timeAwaited += 500;
if (timeAwaited >= maxTimerWait)
{
result.CategorizedInputText = $"ERR: Timeout. Operation to analyze input text using Azure HealthAnalytics language service timed out after waiting for {timeAwaited} ms.";
break;
}
}
return result;
}
private static HttpRequestMessage CreateTextAnalyticsRequest(string requestBodyRaw)
{
var request = new HttpRequestMessage(HttpMethod.Post, Constants.Constants.AnalyzeTextEndpoint);
request.Content = new StringContent(requestBodyRaw, Encoding.UTF8, "application/json");//CONTENT-TYPE header
return request;
}
}
}
</code>
</pre>
The code uses some helper methods shown next. As the code above shows, we must poll the Azure service until we get a reply. We poll every 0.5 seconds, up to a maximum of 10 seconds. Typical requests take about 3-4 seconds to process. Longer input texts / 'documents' would need more than 10 seconds of processing time, but for this demo it works great.
<br /><br />
<b>HealthAnalyticsTextHelper.CreateRequest method</b>
<pre>
<code class='hljs csharp'>
public static string CreateRequest(string inputText)
{
//note - the id 1 here in the request is a 'local id' that must be unique per request. Only one text is supported in the
//request generated; the service allows multiple documents and ids if necessary, but this demo only sends in one text at a time
var request = new
{
analysisInput = new
{
documents = new[]
{
new { text = inputText, id = "1", language = "en" }
}
},
tasks = new[]
{
new { id = "analyze 1", kind = "Healthcare", parameters = new { fhirVersion = "4.0.1" } }
}
};
return JsonSerializer.Serialize(request, new JsonSerializerOptions { WriteIndented = true });
}
</code>
</pre>
To create the body of the POST request we use a template via a new anonymous object, shown above, which matches what the REST service expects. We could have multiple documents here, that is, input texts; in this demo only one text / document is sent in. Note the use of id='1' and 'analyze 1' here.
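For illustration (the input text here is a made-up placeholder), the request body that CreateRequest serializes out would look roughly like this:

```json
{
  "analysisInput": {
    "documents": [
      { "text": "Patient is a 54 year old male with a history of hypertension.", "id": "1", "language": "en" }
    ]
  },
  "tasks": [
    { "id": "analyze 1", "kind": "Healthcare", "parameters": { "fhirVersion": "4.0.1" } }
  ]
}
```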
We have some helper methods built on System.Text.Json to extract the JSON data returned in the response.
<br /><br />
<b>JsonNodeUtil</b>
<pre>
<code class='hljs csharp'>
public static class JsonNodeUtil
{
public static async Task<JsonNode> GetJsonFromHttpResponse(this HttpResponseMessage response)
{
var resultFromService = JsonSerializer.Deserialize<JsonNode>(await response.Content.ReadAsStringAsync());
return resultFromService;
}
public static T? GetValue<T>(this JsonNode jsonNode, string key)
{
if (jsonNode == null)
{
return default;
}
return jsonNode[key] != null ? jsonNode[key].GetValue<T>() : default;
}
}
</code>
</pre>
More code exists in the helper below for returning a <em>categorized colored input text</em>, highlighting the entities found in the input text.
<br /><br />
<b>HealthAnalyticsTextHelper.cs - methods GetCategorizedInputText and GetBackgroundColor</b>
<pre>
<code class='hljs csharp'>
public static string GetCategorizedInputText(string inputText, string analysisText)
{
var sb = new StringBuilder(inputText);
try
{
Root doc = JsonSerializer.Deserialize<Root>(analysisText);
//try loading up the documents inside of the analysisText
var entities = doc?.tasks?.items.FirstOrDefault()?.results?.documents?.SelectMany(d => d.entities)?.ToList();
if (entities != null)
{
foreach (var row in entities.OrderByDescending(r => r.offset))
{
sb.Insert(row.offset + row.length, "</b></span>");
sb.Insert(row.offset, $"<span style='color:{GetBackgroundColor(row)}' title='{row.category}: {row.text} Confidence: {row.confidenceScore} {row.name}'><b>");
}
}
}
catch (Exception err)
{
Console.WriteLine("Got an error while trying to load in analysis healthcare json: " + err.ToString());
}
return $"<pre style='text-wrap:wrap; max-height:500px;font-size: 10pt;font-family:Verdana, Geneva, Tahoma, sans-serif;'>{sb}</pre>";
}
private static string GetBackgroundColor(Entity row)
{
var cat = row?.category?.ToLower();
string backgroundColor = cat switch
{
"age" => "purple",
"diagnosis" => "orange",
"gender" => "purple",
"symptomorsign" => "purple",
"direction" => "blue",
"symptom" => "purple",
"symptoms" => "purple",
"bodystructure" => "blue",
"body" => "purple",
"structure" => "purple",
"examinationname" => "green",
"procedure" => "green",
"treatmentname" => "green",
"conditionqualifier" => "lightgreen",
"time" => "lightgreen",
"date" => "lightgreen",
"familyrelation" => "purple",
"employment" => "purple",
"livingstatus" => "purple",
"administrativeevent" => "darkgreen",
"careenvironment" => "darkgreen",
_ => "darkgray"
};
return backgroundColor;
}
</code>
</pre>
I have generated the domain classes for the service response using the <a href='https://json2csharp.com/'>https://json2csharp.com/</a> website, based on the initial responses I got from the REST service using Postman. The REST api might change in the future, that is, the JSON returned.
In that case, you might want to adjust the domain classes here if the deserialization fails. It seems relatively stable though; I have tested the code for some weeks now.
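As a rough sketch (hypothetical, reconstructed only from the property names used in the code in this article; the actual generated classes contain more fields), the minimal domain classes needed for the deserialization above could look like this:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical minimal domain classes; property names deliberately mirror the
// JSON casing used by the service (category, text, offset, ...), so no
// JsonNamingPolicy is needed when deserializing.
public class Root { public Tasks tasks { get; set; } }
public class Tasks { public List<TaskItem> items { get; set; } }
public class TaskItem { public Results results { get; set; } }
public class Results { public List<Document> documents { get; set; } }
public class Document { public List<Entity> entities { get; set; } }
public class Link { public string dataSource { get; set; } public string id { get; set; } }

public class Entity
{
    public string category { get; set; }
    public string text { get; set; }
    public string name { get; set; }
    public double confidenceScore { get; set; }
    public int offset { get; set; }
    public int length { get; set; }
    public List<Link> links { get; set; }
}

public class Program
{
    public static void Main()
    {
        // Deserialize a small sample entity to show the property mapping works
        var entity = JsonSerializer.Deserialize<Entity>(
            "{\"category\":\"Age\",\"text\":\"54 year old\",\"offset\":13,\"length\":11,\"confidenceScore\":0.99}");
        Console.WriteLine($"{entity.category}: {entity.text}");
    }
}
```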
Finally, the categorized colored text code had to normalize whitespace to get correct indexing of the different <em>entities</em> found in the text. This code shows how to get rid of newlines and extra whitespace in the inputted text.
<pre>
<code class='hljs csharp'>
public static class StringExtensions
{
public static string CleanupAllWhiteSpace(this string input) => Regex.Replace(input ?? string.Empty, @"\s+", " ");
}
</code>
</pre>
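A quick usage sketch of the extension method (the input string is a made-up example):

```csharp
using System;
using System.Text.RegularExpressions;

public static class StringExtensions
{
    // Collapse every run of whitespace (spaces, tabs, newlines) into a single space
    public static string CleanupAllWhiteSpace(this string input) =>
        Regex.Replace(input ?? string.Empty, @"\s+", " ");
}

public class Program
{
    public static void Main()
    {
        string note = "Patient is a\r\n  54 year old\t male.";
        Console.WriteLine(note.CleanupAllWhiteSpace());
        // → Patient is a 54 year old male.
    }
}
```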
Let's look at the UI in the Index.razor file below.
<br /><br />
<b>Index.razor</b>
<pre>
<code class='hljs csharp'>
@page "/"
@using HealthTextAnalytics.Models;
@inject IHttpClientFactory _httpClientFactory;
@inject IHealthAnalyticsTextClientService _healthAnalyticsTextClientService;
<h3>Azure HealthCare Text Analysis - Azure Cognitive Services</h3>
<EditForm Model="@Model" OnValidSubmit="@Submit">
<DataAnnotationsValidator />
<ValidationSummary />
<InputWatcher @ref="inputWatcher" FieldChanged="@FieldChanged" />
<div class="form-group row">
<label><strong>Text input</strong></label>
<InputTextArea @onkeyup="@removeWhitespace" class="overflow-scroll" style="max-height:500px;max-width:900px;font-size: 10pt;font-family:Verdana, Geneva, Tahoma, sans-serif" @bind-Value="@Model.InputText" rows="5" />
</div>
<div class="form-group row">
<div class="col">
<br />
<button class="btn btn-outline-primary" disabled="@isInvalid" type="submit">Run</button>
</div>
<div class="col">
</div>
<div class="col">
</div>
</div>
<br />
@if (isProcessing)
{
<div class="progress" style="max-width: 90%">
<div class="progress-bar progress-bar-striped progress-bar-animated"
style="width: 100%; background-color: green">
Retrieving result from Azure HealthCare Text Analysis. Processing..
</div>
</div>
<br />
}
<div class="form-group row">
<label><strong>Analysis result</strong></label>
@if (isSearchPerformed)
{
<br />
<b>Execution time took: @Model.ExecutionTime ms (milliseconds)</b><br />
<br />
<b>Categorized and analyzed Health Analysis of inputted text</b>
@ms
<br />
<table class="table table-striped table-dark table-hover">
<th>Category</th>
<th>Text</th>
<th>Name</th>
<th>ConfidenceScore</th>
<th>Offset</th>
<th>Length</th>
<th>Links</th>
<tbody>
@foreach (var entity in Model.EntititesInAnalyzedResult)
{
<tr>
<td>@entity.category</td>
<td>@entity.text</td>
<td>@entity.name</td>
<td>@entity.confidenceScore</td>
<td>@entity.offset</td>
<td>@entity.length</td>
<td>@string.Join(Environment.NewLine, (@entity.links ?? new List<Link>()).Select(l => l?.dataSource + " " + l?.id + " | "))</td>
</tr>
}
</tbody>
</table>
<b>Health Analysis raw text from Azure service</b>
<InputTextArea class="overflow-scroll" readonly="readonly" style="max-height:500px; max-width:900px;font-size: 10pt;font-family:Verdana, Geneva, Tahoma, sans-serif" @bind-Value="@Model.AnalysisResult" rows="1000" />
}
</div>
</EditForm>
</code>
</pre>
The code-behind of Index.razor looks like this:
<pre>
<code class='hljs csharp'>
using HealthTextAnalytics.Models;
using HealthTextAnalytics.Util;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.Web;
namespace HealthTextAnalytics.Pages
{
public partial class Index
{
private IndexModel Model = new();
MarkupString ms = new();
private bool isProcessing = false;
private bool isSearchPerformed = false;
private InputWatcher inputWatcher = new InputWatcher();
private bool isInvalid = false;
private void FieldChanged(string fieldName)
{
isInvalid = !inputWatcher.Validate();
}
protected override void OnParametersSet()
{
Model.InputText = SampleData.Sampledata.SamplePatientTextNote2.CleanupAllWhiteSpace();
StateHasChanged();
}
private void removeWhitespace(KeyboardEventArgs eventArgs)
{
Model.InputText = Model.InputText.CleanupAllWhiteSpace();
StateHasChanged();
}
private async Task Submit()
{
try
{
ResetFieldsForBeforeSearch();
HealthTextAnalyticsResponse response = await _healthAnalyticsTextClientService.GetHealthTextAnalytics(Model.InputText);
Model.EntititesInAnalyzedResult = response.Entities;
Model.ExecutionTime = response.ExecutionTimeInMilliseconds;
Model.AnalysisResult = response.AnalysisResultRawJson;
ms = new MarkupString(response.CategorizedInputText);
}
catch (Exception err)
{
Console.WriteLine(err);
}
finally
{
ResetFieldsAfterSearch();
StateHasChanged();
}
}
private void ResetFieldsForBeforeSearch()
{
isProcessing = true;
isSearchPerformed = false;
ms = new MarkupString(string.Empty);
Model.EntititesInAnalyzedResult.Clear();
Model.AnalysisResult = string.Empty;
}
private void ResetFieldsAfterSearch()
{
isProcessing = false;
isSearchPerformed = true;
}
}
}
</code>
</pre>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-24473690850059145352023-10-14T22:55:00.016+02:002023-10-14T23:49:55.855+02:00Using Image Analysis in Azure AI Cognitive ServicesI have added a demo .NET MAUI Blazor app that uses Image Analysis in Computer Vision in Azure Cognitive Services.
Note that <em>Image Analysis</em> is not available in all Azure data centers. For example, Norway East does not have this feature.
However, the North Europe data center, located in Ireland, does have it.
A Github repo exists for this demo here: <br /><br />
<a target='_blank' href='https://github.com/toreaurstadboss/Image.Analyze.Azure.Ai'>https://github.com/toreaurstadboss/Image.Analyze.Azure.Ai</a>
<br /><br />
A screen shot for this demo is shown below:
<img alt="Demo screenshot" style="width:990px;" border="0" data-original-height="1222" data-original-width="2036" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_-16m2LkNbf3iMCZqHvscuV9vHZzbsghGZbMQX8vItY2iaIG3li3hRpXZpWztL4U9-YEe3rCajM7DaEwGVlQHoS64M4vYarcp2RaJ3_gzrxGKjHnlIVjzJl0BfGGPcvjbF7vn32bTQg074Cjfk562tpRt49WyH3o1dJPlRspoNwwOatcMpUzI4p3ll00/s1600/group_of_cows.png"/>
The demo allows you to upload a picture (supported formats are .jpeg, .jpg and .png, but Azure AI Image Analyzer supports a lot of other image formats too).
The demo shows a preview of the selected image and, to the right, the image with bounding boxes drawn around detected objects. A list of tags extracted from the image is also shown. Raw data from the Azure Image Analyzer
service is shown in the text box area below the pictures, with the list of tags to the right.
The demo is written with .NET MAUI Blazor and .NET 6.
Let us look at some code for making this demo.
<b>ImageSaveService.cs</b>
<pre>
<code class='hljs csharp'>
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
namespace Ocr.Handwriting.Azure.AI.Services
{
public class ImageSaveService : IImageSaveService
{
public async Task<ImageSaveModel> SaveImage(IBrowserFile browserFile)
{
var buffers = new byte[browserFile.Size];
var bytes = await browserFile.OpenReadStream(maxAllowedSize: 30 * 1024 * 1024).ReadAsync(buffers);
string imageType = browserFile.ContentType;
var basePath = FileSystem.Current.AppDataDirectory;
var imageSaveModel = new ImageSaveModel
{
SavedFilePath = Path.Combine(basePath, $"{Guid.NewGuid().ToString("N")}-{browserFile.Name}"),
PreviewImageUrl = $"data:{imageType};base64,{Convert.ToBase64String(buffers)}",
FilePath = browserFile.Name,
FileSize = bytes / 1024,
};
await File.WriteAllBytesAsync(imageSaveModel.SavedFilePath, buffers);
return imageSaveModel;
}
}
}
//Interface defined inside IImageService.cs shown below
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
namespace Ocr.Handwriting.Azure.AI.Services
{
public interface IImageSaveService
{
Task<ImageSaveModel> SaveImage(IBrowserFile browserFile);
}
}
</code>
</pre>
The ImageSaveService reads the bytes of the uploaded IBrowserFile via its OpenReadStream method and encodes them into a base-64 string, which allows us to preview the uploaded image.
The code also saves the image to the AppData directory that MAUI supports - FileSystem.Current.AppDataDirectory.
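The preview URL built above is a plain data URI. As a minimal standalone sketch (not from the repo, and assuming a PNG file named sample.png exists on disk), building such a URI only takes a couple of lines:
<pre>
<code class='hljs csharp'>
using System;
using System.IO;

// Read the raw image bytes and base-64 encode them into a data URI
// that an <img> tag can use directly as its src attribute.
byte[] imageBytes = File.ReadAllBytes("sample.png");
string previewUrl = $"data:image/png;base64,{Convert.ToBase64String(imageBytes)}";
</code>
</pre>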
Let's look at how to call the analysis service itself; it is actually quite straightforward.
<b>ImageAnalyzerService.cs</b>
<pre>
<code class='hljs csharp'>
using Azure;
using Azure.AI.Vision.Common;
using Azure.AI.Vision.ImageAnalysis;
namespace Image.Analyze.Azure.Ai.Lib
{
public class ImageAnalyzerService : IImageAnalyzerService
{
public ImageAnalyzer CreateImageAnalyzer(string imageFile)
{
string key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_KEY");
string endpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_ENDPOINT");
var visionServiceOptions = new VisionServiceOptions(new Uri(endpoint), new AzureKeyCredential(key));
using VisionSource visionSource = CreateVisionSource(imageFile);
var analysisOptions = CreateImageAnalysisOptions();
var analyzer = new ImageAnalyzer(visionServiceOptions, visionSource, analysisOptions);
return analyzer;
}
private static VisionSource CreateVisionSource(string imageFile)
{
byte[] imageBuffer;
using (var stream = File.OpenRead(imageFile))
using (var memoryStream = new MemoryStream())
{
stream.CopyTo(memoryStream);
imageBuffer = memoryStream.ToArray();
}
using var imageSourceBuffer = new ImageSourceBuffer();
imageSourceBuffer.GetWriter().Write(imageBuffer);
return VisionSource.FromImageSourceBuffer(imageSourceBuffer);
}
private static ImageAnalysisOptions CreateImageAnalysisOptions() => new ImageAnalysisOptions
{
Language = "en",
GenderNeutralCaption = false,
Features =
ImageAnalysisFeature.CropSuggestions
| ImageAnalysisFeature.Caption
| ImageAnalysisFeature.DenseCaptions
| ImageAnalysisFeature.Objects
| ImageAnalysisFeature.People
| ImageAnalysisFeature.Text
| ImageAnalysisFeature.Tags
};
}
}
//interface shown below
public interface IImageAnalyzerService
{
ImageAnalyzer CreateImageAnalyzer(string imageFile);
}
</code>
</pre>
We retrieve environment variables here and we create an <em>ImageAnalyzer</em>. We create a <em>Vision source</em> from the saved picture we uploaded and open a stream to it using File.OpenRead method on System.IO.
Since we saved the file in the AppData folder of the .NET MAUI app, we can read this file.
We set up the <em>image analysis options</em> and the <em>vision service options</em>, and then return the image analyzer.
Let's look at the code-behind of the <em>index.razor</em> file that initializes the <em>Image analyzer</em>, and runs the <em>Analyze</em> method of it.
<b>Index.razor.cs</b>
<pre>
<code class='hljs csharp'>
using Azure.AI.Vision.ImageAnalysis;
using Image.Analyze.Azure.Ai.Extensions;
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
using Microsoft.JSInterop;
using System.Text;
namespace Image.Analyze.Azure.Ai.Pages
{
partial class Index
{
private IndexModel Model = new();
//https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40?WT.mc_id=twitter&pivots=programming-language-csharp
private string ImageInfo = string.Empty;
private async Task Submit()
{
if (Model.PreviewImageUrl == null || Model.SavedFilePath == null)
{
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"You must select an image first before running Image Analysis. Supported formats are .jpeg, .jpg and .png", "Ok", "Cancel");
return;
}
using var imageAnalyzer = ImageAnalyzerService.CreateImageAnalyzer(Model.SavedFilePath);
ImageAnalysisResult analysisResult = await imageAnalyzer.AnalyzeAsync();
if (analysisResult.Reason == ImageAnalysisResultReason.Analyzed)
{
Model.ImageAnalysisOutputText = analysisResult.OutputImageAnalysisResult();
Model.Caption = $"{analysisResult.Caption.Content} Confidence: {analysisResult.Caption.Confidence.ToString("F2")}";
Model.Tags = analysisResult.Tags.Select(t => $"{t.Name} (Confidence: {t.Confidence.ToString("F2")})").ToList();
var jsonBboxes = analysisResult.GetBoundingBoxesJson();
await JsRunTime.InvokeVoidAsync("LoadBoundingBoxes", jsonBboxes);
}
else
{
ImageInfo = $"The image analysis did not perform its analysis. Reason: {analysisResult.Reason}";
}
StateHasChanged(); //visual refresh here
}
private async Task CopyTextToClipboard()
{
await Clipboard.SetTextAsync(Model.ImageAnalysisOutputText);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"The copied text was put into the clipboard. Character length: {Model.ImageAnalysisOutputText?.Length}", "Ok", "Cancel");
}
private async Task OnInputFile(InputFileChangeEventArgs args)
{
var imageSaveModel = await ImageSaveService.SaveImage(args.File);
Model = new IndexModel(imageSaveModel);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor ImageAnalyzer app App", $"Wrote file to location : {Model.SavedFilePath} Size is: {Model.FileSize} kB", "Ok", "Cancel");
}
}
}
</code>
</pre>
In the code-behind above we have a submit handler called <em>Submit</em>. There we analyze the image and send the result both to the UI and to a client-side JavaScript method using IJSRuntime in .NET MAUI Blazor.
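The call into client-side JavaScript goes through IJSRuntime. A minimal sketch of the interop pattern (the class and method names below are illustrative, not from the repo):
<pre>
<code class='hljs csharp'>
using System.Threading.Tasks;
using Microsoft.JSInterop;

public class BoxRenderer
{
    private readonly IJSRuntime _js;

    public BoxRenderer(IJSRuntime js) => _js = js;

    // Invokes a global JS function by name; the JSON string arrives as
    // the first argument of LoadBoundingBoxes on the client side.
    public async Task DrawAsync(string boundingBoxesJson) =>
        await _js.InvokeVoidAsync("LoadBoundingBoxes", boundingBoxesJson);
}
</code>
</pre>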
Let's look at the two helper methods of ImageAnalysisResult next.
<b>ImageAnalysisResultExtensions.cs</b>
<pre>
<code class='hljs csharp'>
using Azure.AI.Vision.ImageAnalysis;
using System.Text;
namespace Image.Analyze.Azure.Ai.Extensions
{
public static class ImageAnalysisResultExtensions
{
public static string GetBoundingBoxesJson(this ImageAnalysisResult result)
{
var sb = new StringBuilder();
sb.AppendLine(@"[");
int objectIndex = 0;
foreach (var detectedObject in result.Objects)
{
sb.Append($"{{ \"Name\": \"{detectedObject.Name}\", \"Y\": {detectedObject.BoundingBox.Y}, \"X\": {detectedObject.BoundingBox.X}, \"Height\": {detectedObject.BoundingBox.Height}, \"Width\": {detectedObject.BoundingBox.Width}, \"Confidence\": \"{detectedObject.Confidence:0.0000}\" }}");
objectIndex++;
if (objectIndex < result.Objects?.Count)
{
sb.Append($",{Environment.NewLine}");
}
else
{
sb.Append($"{Environment.NewLine}");
}
}
sb.Remove(sb.Length - 2, 1); //remove trailing comma at the end
sb.AppendLine(@"]");
return sb.ToString();
}
public static string OutputImageAnalysisResult(this ImageAnalysisResult result)
{
var sb = new StringBuilder();
if (result.Reason == ImageAnalysisResultReason.Analyzed)
{
sb.AppendLine($" Image height = {result.ImageHeight}");
sb.AppendLine($" Image width = {result.ImageWidth}");
sb.AppendLine($" Model version = {result.ModelVersion}");
if (result.Caption != null)
{
sb.AppendLine(" Caption:");
sb.AppendLine($" \"{result.Caption.Content}\", Confidence {result.Caption.Confidence:0.0000}");
}
if (result.DenseCaptions != null)
{
sb.AppendLine(" Dense Captions:");
foreach (var caption in result.DenseCaptions)
{
sb.AppendLine($" \"{caption.Content}\", Bounding box {caption.BoundingBox}, Confidence {caption.Confidence:0.0000}");
}
}
if (result.Objects != null)
{
sb.AppendLine(" Objects:");
foreach (var detectedObject in result.Objects)
{
sb.AppendLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}");
}
}
if (result.Tags != null)
{
sb.AppendLine($" Tags:");
foreach (var tag in result.Tags)
{
sb.AppendLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}");
}
}
if (result.People != null)
{
sb.AppendLine($" People:");
foreach (var person in result.People)
{
sb.AppendLine($" Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}");
}
}
if (result.CropSuggestions != null)
{
sb.AppendLine($" Crop Suggestions:");
foreach (var cropSuggestion in result.CropSuggestions)
{
sb.AppendLine($" Aspect ratio {cropSuggestion.AspectRatio}: "
+ $"Crop suggestion {cropSuggestion.BoundingBox}");
};
}
if (result.Text != null)
{
sb.AppendLine($" Text:");
foreach (var line in result.Text.Lines)
{
string pointsToString = "{" + string.Join(',', line.BoundingPolygon.Select(point => point.ToString())) + "}";
sb.AppendLine($" Line: '{line.Content}', Bounding polygon {pointsToString}");
foreach (var word in line.Words)
{
pointsToString = "{" + string.Join(',', word.BoundingPolygon.Select(point => point.ToString())) + "}";
sb.AppendLine($" Word: '{word.Content}', Bounding polygon {pointsToString}, Confidence {word.Confidence:0.0000}");
}
}
}
var resultDetails = ImageAnalysisResultDetails.FromResult(result);
sb.AppendLine($" Result details:");
sb.AppendLine($" Image ID = {resultDetails.ImageId}");
sb.AppendLine($" Result ID = {resultDetails.ResultId}");
sb.AppendLine($" Connection URL = {resultDetails.ConnectionUrl}");
sb.AppendLine($" JSON result = {resultDetails.JsonResult}");
}
else
{
var errorDetails = ImageAnalysisErrorDetails.FromResult(result);
sb.AppendLine(" Analysis failed.");
sb.AppendLine($" Error reason : {errorDetails.Reason}");
sb.AppendLine($" Error code : {errorDetails.ErrorCode}");
sb.AppendLine($" Error message: {errorDetails.Message}");
}
return sb.ToString();
}
}
}
</code>
</pre>
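Hand-building JSON the way GetBoundingBoxesJson does is easy to get wrong (note the manual trailing-comma cleanup). A sketch of an alternative using System.Text.Json; the DetectedBox record here is a hypothetical stand-in for the SDK's detected-object type:
<pre>
<code class='hljs csharp'>
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical DTO mirroring the property names the client-side script expects.
public record DetectedBox(string Name, int Y, int X, int Height, int Width, string Confidence);

public static class BoundingBoxJson
{
    // Serializes the whole list in one call; commas and escaping are handled for us.
    public static string ToJson(IEnumerable<DetectedBox> boxes) =>
        JsonSerializer.Serialize(boxes);
}
</code>
</pre>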
Finally, let's look at the client side Javascript function that we call and send the bounding boxes json to draw the boxes. We will use Canvas in HTML 5 to show the picture and the bounding boxes of objects found in the image.
<b>index.html</b>
<pre>
<code class='hljs javascript'>
<script type="text/javascript">
var colorPalette = ["red", "yellow", "blue", "green", "fuchsia", "moccasin", "purple", "magenta", "aliceblue", "lightyellow", "lightgreen"];
function rescaleCanvas() {
var img = document.getElementById('PreviewImage');
var canvas = document.getElementById('PreviewImageBbox');
canvas.width = img.width;
canvas.height = img.height;
}
function getColor() {
var colorIndex = Math.floor(Math.random() * colorPalette.length);
var color = colorPalette[colorIndex];
return color;
}
function LoadBoundingBoxes(objectDescriptions) {
if (!objectDescriptions) {
alert('did not find any objects in image. returning from calling load bounding boxes : ' + objectDescriptions);
return;
}
var objectDesc = JSON.parse(objectDescriptions);
//alert('calling load bounding boxes, starting analysis on clientside js : ' + objectDescriptions);
rescaleCanvas();
var canvas = document.getElementById('PreviewImageBbox');
var img = document.getElementById('PreviewImage');
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
ctx.font = "10px Verdana";
for (var i = 0; i < objectDesc.length; i++) {
ctx.beginPath();
ctx.strokeStyle = "black";
ctx.lineWidth = 1;
ctx.fillText(objectDesc[i].Name, objectDesc[i].X + objectDesc[i].Width / 2, objectDesc[i].Y + objectDesc[i].Height / 2);
ctx.fillText("Confidence: " + objectDesc[i].Confidence, objectDesc[i].X + objectDesc[i].Width / 2, 10 + objectDesc[i].Y + objectDesc[i].Height / 2);
}
for (var i = 0; i < objectDesc.length; i++) {
ctx.fillStyle = getColor();
ctx.globalAlpha = 0.2;
ctx.fillRect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
ctx.lineWidth = 3;
ctx.strokeStyle = "blue";
ctx.rect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
ctx.fillStyle = "black";
ctx.fillText("Color: " + getColor(), objectDesc[i].X + objectDesc[i].Width / 2, 20 + objectDesc[i].Y + objectDesc[i].Height / 2);
ctx.stroke();
}
ctx.drawImage(img, 0, 0);
console.log('got these object descriptions:');
console.log(objectDescriptions);
}
</script>
</code>
</pre>
The index.html file in wwwroot is the usual place to put extra CSS and JS for MAUI Blazor and Blazor apps. I have put the script directly into the index.html file rather than a separate .js file, but moving it to its own file is an option if you want to tidy up a bit more.
So there you have it: we can relatively easily find objects in images using the Image Analysis service in Azure Cognitive Services, and we can get tags and captions for the image. In the demo, the caption is shown above the loaded picture.
The Azure Computer Vision service is really good since it is trained on a massive data set and can recognize a lot of different objects for different usages.
As you can see in the source code, the key and endpoint live in environment variables that the code expects to exist. Never expose keys and endpoints in your source code.
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-87909800251152621972023-09-22T23:09:00.011+02:002023-09-23T00:16:57.649+02:00Using Azure Computer Vision to perform Optical Character Recognition (OCR)This article shows how you can use Azure Computer vision in Azure Cognitive Services to perform Optical Character Recognition (OCR).
The Computer vision feature is available by adding a <em>Computer Vision</em> resource in Azure Portal.
I have made a .NET MAUI Blazor app and the Github Repo for it is available here :
<code>
<a href='https://github.com/toreaurstadboss/Ocr.Handwriting.Azure.AI.Models'>https://github.com/toreaurstadboss/Ocr.Handwriting.Azure.AI.Models</a><br />
</code>
Let us first look at the .csproj of the Lib project in this repo.
<pre>
<code class='hljs xml'>
<Project Sdk="Microsoft.NET.Sdk.Razor">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>
<ItemGroup>
<SupportedPlatform Include="browser" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Azure.CognitiveServices.Vision.ComputerVision" Version="7.0.1" />
<PackageReference Include="Microsoft.AspNetCore.Components.Web" Version="6.0.19" />
</ItemGroup>
</Project>
</code>
</pre>
The following class generates <em>ComputerVision clients</em> that can be used to extract different kinds of information from streams and files containing video and images. We are going to focus on
images and extracting text via OCR. Azure Computer Vision can extract handwritten text in addition to regular typed text and text inside images. It can also
detect shapes in images and classify objects. This demo only focuses on text extraction from images.
<b>ComputerVisionClientFactory</b>
<pre>
<code class='hljs csharp'>
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
namespace Ocr.Handwriting.Azure.AI.Lib
{
public interface IComputerVisionClientFactory
{
ComputerVisionClient CreateClient();
}
/// <summary>
/// Client factory for Azure Cognitive Services - Computer vision.
/// </summary>
public class ComputerVisionClientFactory : IComputerVisionClientFactory
{
// Add your Computer Vision key and endpoint
static string? _key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_KEY");
static string? _endpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_ENDPOINT");
public ComputerVisionClientFactory() : this(_key, _endpoint)
{
}
public ComputerVisionClientFactory(string? key, string? endpoint)
{
_key = key;
_endpoint = endpoint;
}
public ComputerVisionClient CreateClient()
{
if (_key == null)
{
throw new ArgumentNullException(nameof(_key), "The AZURE_COGNITIVE_SERVICES_VISION_KEY is not set. Set a system-level environment variable or provide this value by calling the overloaded constructor of this class.");
}
if (_endpoint == null)
{
throw new ArgumentNullException(nameof(_endpoint), "The AZURE_COGNITIVE_SERVICES_VISION_ENDPOINT is not set. Set a system-level environment variable or provide this value by calling the overloaded constructor of this class.");
}
var client = Authenticate(_key!, _endpoint!);
return client;
}
public static ComputerVisionClient Authenticate(string key, string endpoint) =>
new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
{
Endpoint = endpoint
};
}
}
</code>
</pre>
The setup of the endpoint and key of the Computer Vision resource is done via system-level environment variables.
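Environment variables behave differently per scope on Windows, which can trip you up when a variable was set with setx but the host process was started earlier. A small helper sketch (my own naming, not from the repo) that checks all scopes:
<pre>
<code class='hljs csharp'>
using System;

public static class EnvLookup
{
    // Tries Process, then User, then Machine scope, so the variable is found
    // regardless of how it was set. The User/Machine scopes are Windows-only;
    // on other platforms those lookups simply return null.
    public static string? Get(string name) =>
        Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Process)
        ?? Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.User)
        ?? Environment.GetEnvironmentVariable(name, EnvironmentVariableTarget.Machine);
}
</code>
</pre>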
Next up, let's look at retrieving OCR text from images. Here we use <em>ComputerVisionClient</em>. We open a stream to an image file using <em>File.OpenRead</em> and pass it to the client's
<em>ReadInStreamAsync</em> method. The image loaded in the app is selected by the user, and it is previewed and saved using the MAUI Storage lib (inside the AppData folder).
<b>OcrImageService.cs</b>
<pre>
<code class='hljs csharp'>
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
using Microsoft.Extensions.Logging;
using System.Diagnostics;
using ReadResult = Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models.ReadResult;
namespace Ocr.Handwriting.Azure.AI.Lib
{
public interface IOcrImageService
{
Task<IList<ReadResult?>?> GetReadResults(string imageFilePath);
Task<string> GetReadResultsText(string imageFilePath);
}
public class OcrImageService : IOcrImageService
{
private readonly IComputerVisionClientFactory _computerVisionClientFactory;
private readonly ILogger<OcrImageService> _logger;
public OcrImageService(IComputerVisionClientFactory computerVisionClientFactory, ILogger<OcrImageService> logger)
{
_computerVisionClientFactory = computerVisionClientFactory;
_logger = logger;
}
private ComputerVisionClient CreateClient() => _computerVisionClientFactory.CreateClient();
public async Task<string> GetReadResultsText(string imageFilePath)
{
var readResults = await GetReadResults(imageFilePath);
var ocrText = ExtractText(readResults?.FirstOrDefault());
return ocrText;
}
public async Task<IList<ReadResult?>?> GetReadResults(string imageFilePath)
{
if (string.IsNullOrWhiteSpace(imageFilePath))
{
return null;
}
try
{
var client = CreateClient();
//Retrieve OCR results
using (FileStream stream = File.OpenRead(imageFilePath))
{
var textHeaders = await client.ReadInStreamAsync(stream);
string operationLocation = textHeaders.OperationLocation;
string operationId = operationLocation[^36..]; //hat operator of C# 8.0 : this slices out the last 36 chars, which contains the guid chars which are 32 hexadecimals chars + four hyphens
ReadOperationResult results;
do
{
results = await client.GetReadResultAsync(Guid.Parse(operationId));
_logger.LogInformation($"Retrieving OCR results for operationId {operationId} for image {imageFilePath}");
await Task.Delay(500); //brief pause between polls so we do not hammer the service
}
while (results.Status == OperationStatusCodes.Running || results.Status == OperationStatusCodes.NotStarted);
IList<ReadResult?> result = results.AnalyzeResult.ReadResults;
return result;
}
}
catch (Exception ex)
{
Console.WriteLine(ex.Message);
return null;
}
}
private static string ExtractText(ReadResult? readResult) => string.Join(Environment.NewLine, readResult?.Lines?.Select(l => l.Text) ?? new List<string>());
}
}
</code>
</pre>
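The operationLocation[^36..] slice in GetReadResults uses the C# 8 index-from-end and range operators. A standalone illustration (the URL below is made up for the example):
<pre>
<code class='hljs csharp'>
using System;

// The operation location URL ends with a GUID: 32 hex chars + 4 hyphens = 36 chars.
string operationLocation =
    "https://example.cognitiveservices.azure.com/vision/v3.2/read/analyzeResults/0f57a9cc-52dc-4d8a-a942-36a9c1cb6f2a";
string operationId = operationLocation[^36..]; // take the last 36 characters
Guid parsed = Guid.Parse(operationId);         // parses cleanly as a GUID
Console.WriteLine(parsed);                     // 0f57a9cc-52dc-4d8a-a942-36a9c1cb6f2a
</code>
</pre>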
Let's look at the MAUI Blazor project in the app.
The <em>MauiProgram.cs</em> looks like this.
<b>MauiProgram.cs</b>
<pre>
<code class='hljs csharp'>
using Ocr.Handwriting.Azure.AI.Data;
using Ocr.Handwriting.Azure.AI.Lib;
using Ocr.Handwriting.Azure.AI.Services;
using TextCopy;
namespace Ocr.Handwriting.Azure.AI;
public static class MauiProgram
{
public static MauiApp CreateMauiApp()
{
var builder = MauiApp.CreateBuilder();
builder
.UseMauiApp<App>()
.ConfigureFonts(fonts =>
{
fonts.AddFont("OpenSans-Regular.ttf", "OpenSansRegular");
});
builder.Services.AddMauiBlazorWebView();
#if DEBUG
builder.Services.AddBlazorWebViewDeveloperTools();
builder.Services.AddLogging();
#endif
builder.Services.AddSingleton<WeatherForecastService>();
builder.Services.AddScoped<IComputerVisionClientFactory, ComputerVisionClientFactory>();
builder.Services.AddScoped<IOcrImageService, OcrImageService>();
builder.Services.AddScoped<IImageSaveService, ImageSaveService>();
builder.Services.InjectClipboard();
return builder.Build();
}
}
</code>
</pre>
We also need some code to preview and save the image an end user chooses. The <em>IImageService</em> looks like this.
<b>ImageSaveService</b>
<pre>
<code class='hljs csharp'>
using Microsoft.AspNetCore.Components.Forms;
using Ocr.Handwriting.Azure.AI.Models;
namespace Ocr.Handwriting.Azure.AI.Services
{
public class ImageSaveService : IImageSaveService
{
public async Task<ImageSaveModel> SaveImage(IBrowserFile browserFile)
{
var buffers = new byte[browserFile.Size];
var bytes = await browserFile.OpenReadStream(maxAllowedSize: 30 * 1024 * 1024).ReadAsync(buffers);
string imageType = browserFile.ContentType;
var basePath = FileSystem.Current.AppDataDirectory;
var imageSaveModel = new ImageSaveModel
{
SavedFilePath = Path.Combine(basePath, $"{Guid.NewGuid().ToString("N")}-{browserFile.Name}"),
PreviewImageUrl = $"data:{imageType};base64,{Convert.ToBase64String(buffers)}",
FilePath = browserFile.Name,
FileSize = bytes / 1024,
};
await File.WriteAllBytesAsync(imageSaveModel.SavedFilePath, buffers);
return imageSaveModel;
}
}
}
</code>
</pre>
Note the use of the <em>maxAllowedSize</em> parameter of the <em>IBrowserFile.OpenReadStream</em> method; setting it is good practice since IBrowserFile only allows 512 kB by default. I set it to 30 MB in this app to support some high-resolution images too.
We preview the image as base-64 and also save it. Note the use of <em>FileSystem.Current.AppDataDirectory</em> as the base path here, which comes from the nuget package Microsoft.Maui.Storage.
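A related gotcha: a single ReadAsync call is not guaranteed to fill the whole buffer. A defensive sketch (the helper name is my own) that checks the size up front and copies the full stream:
<pre>
<code class='hljs csharp'>
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components.Forms;

public static class UploadGuard
{
    private const long MaxUploadBytes = 30 * 1024 * 1024; // 30 MB, matching the app above

    public static async Task<byte[]?> TryReadAsync(IBrowserFile file)
    {
        if (file.Size > MaxUploadBytes)
        {
            return null; // reject early instead of letting OpenReadStream throw
        }
        using var memoryStream = new MemoryStream();
        await using var stream = file.OpenReadStream(maxAllowedSize: MaxUploadBytes);
        await stream.CopyToAsync(memoryStream); // copies everything, unlike one ReadAsync call
        return memoryStream.ToArray();
    }
}
</code>
</pre>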
These are the packages used by the MAUI Blazor project of the app.
<b>Ocr.Handwriting.Azure.AI.csproj</b>
<pre>
<code class='hljs xml'>
<ItemGroup>
<PackageReference Include="Microsoft.Azure.CognitiveServices.Vision.ComputerVision" Version="7.0.1" />
<PackageReference Include="TextCopy" Version="6.2.1" />
</ItemGroup>
</code>
</pre>
The GUI looks like this :
<b>Index.razor</b>
<pre>
<code class='hljs csharp'>
@page "/"
@using Ocr.Handwriting.Azure.AI.Models;
@using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
@using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
@using Ocr.Handwriting.Azure.AI.Lib;
@using Ocr.Handwriting.Azure.AI.Services;
@using TextCopy;
@inject IImageSaveService ImageSaveService
@inject IOcrImageService OcrImageService
@inject IClipboard Clipboard
<h1>Azure AI OCR Text recognition</h1>
<EditForm Model="Model" OnValidSubmit="@Submit" style="background-color:aliceblue">
<DataAnnotationsValidator />
<label><b>Select a picture to run OCR</b></label><br />
<InputFile OnChange="@OnInputFile" accept=".jpeg,.jpg,.png" />
<br />
<code class="alert-secondary">Supported file formats: .jpeg, .jpg and .png</code>
<br />
@if (Model.PreviewImageUrl != null) {
<label class="alert-info">Preview of the selected image</label>
<div style="overflow:auto;max-height:300px;max-width:500px">
<img class="flagIcon" src="@Model.PreviewImageUrl" /><br />
</div>
<code class="alert-light">File Size (kB): @Model.FileSize</code>
<br />
<code class="alert-light">File saved location: @Model.SavedFilePath</code>
<br />
<label class="alert-info">Click the button below to start running OCR using Azure AI</label><br />
<br />
<button type="submit">Submit</button> <button style="margin-left:200px" type="button" class="btn-outline-info" @onclick="@CopyTextToClipboard">Copy to clipboard</button>
<br />
<br />
<InputTextArea style="width:1000px;height:300px" readonly="readonly" placeholder="Detected text in the image uploaded" @bind-Value="Model!.OcrOutputText" rows="5"></InputTextArea>
}
</EditForm>
@code {
private IndexModel Model = new();
private async Task OnInputFile(InputFileChangeEventArgs args)
{
var imageSaveModel = await ImageSaveService.SaveImage(args.File);
Model = new IndexModel(imageSaveModel);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor OCR App", $"Wrote file to location : {Model.SavedFilePath} Size is: {Model.FileSize} kB", "Ok", "Cancel");
}
public async Task CopyTextToClipboard()
{
await Clipboard.SetTextAsync(Model.OcrOutputText);
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor OCR App", $"The copied text was put into the clipboard. Character length: {Model.OcrOutputText?.Length}", "Ok", "Cancel");
}
private async Task Submit()
{
if (Model.PreviewImageUrl == null || Model.SavedFilePath == null)
{
await Application.Current.MainPage.DisplayAlert($"MAUI Blazor OCR App", $"You must select an image first before running OCR. Supported formats are .jpeg, .jpg and .png", "Ok", "Cancel");
return;
}
Model.OcrOutputText = await OcrImageService.GetReadResultsText(Model.SavedFilePath);
StateHasChanged(); //visual refresh here
}
}
</code>
</pre>
The UI works like this: the user selects an image. As we can see from the 'accept' HTML attribute, the .jpeg, .jpg and .png extensions are allowed in the file input dialog. When the user selects an image, it is saved and
previewed in the UI.
By hitting the Submit button, the OCR service in Azure is contacted and text is retrieved and displayed in the text area below, if any text is present in the image. A button allows copying the text into the clipboard.
Here are some screenshots of the app.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMFvpSQTbNBzDZbDT8CayZYoKB_v9PlAx1RUOQE9r6JlayuwepzQnEO0Y-IrpODlIsaI_TDlNRFZA90bZJ51EXjzOn_tKlvJt7BMVxs-apNFbJtEDiW74p--G0kKKngwkCrtpbfQsOQXix5B7zX6sFUvl0TyAAJKlP_Umb1pH3FOVgo657GaNj0HIlCPo/s1600/ocr1.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="984" data-original-width="1103" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMFvpSQTbNBzDZbDT8CayZYoKB_v9PlAx1RUOQE9r6JlayuwepzQnEO0Y-IrpODlIsaI_TDlNRFZA90bZJ51EXjzOn_tKlvJt7BMVxs-apNFbJtEDiW74p--G0kKKngwkCrtpbfQsOQXix5B7zX6sFUvl0TyAAJKlP_Umb1pH3FOVgo657GaNj0HIlCPo/s1600/ocr1.png"/></a></div>
<br />
<br />
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJRtFhB3IbRy7ye2YwiqubCyzg9vc6V5KiuUXkymVqSJSuUtkQiT8qCvSLgZS6F40Dqxw1SKjqSo5Pz82B15wRIDV7lOhibY-8JI7tqyzd3gtGdA6J0btdPkqOv8Feqv2bT_o9PB5qfYqnre4IA4cl4QKSNO1uP3n7Y_edvtjt_EiVcZqNPxQwqzBtKu8/s1600/ocr2.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="871" data-original-width="1417" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJRtFhB3IbRy7ye2YwiqubCyzg9vc6V5KiuUXkymVqSJSuUtkQiT8qCvSLgZS6F40Dqxw1SKjqSo5Pz82B15wRIDV7lOhibY-8JI7tqyzd3gtGdA6J0btdPkqOv8Feqv2bT_o9PB5qfYqnre4IA4cl4QKSNO1uP3n7Y_edvtjt_EiVcZqNPxQwqzBtKu8/s1600/ocr2.png"/></a></div>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-39379885524540564192023-09-19T22:29:00.003+02:002023-09-23T00:19:06.599+02:00Using Azure AI TextAnalytics and translation service to build an universal translatorThis article shows how to build a universal translator using Azure AI Cognitive Services. This includes Azure AI Text Analytics to detect language from text input, and
using Azure AI Translation services.
The GitHub repo is here:
<br />
<code>
<a href='https://github.com/toreaurstadboss/MultiLingual.Translator'>https://github.com/toreaurstadboss/MultiLingual.Translator</a>
</code>
<br />
The following NuGet packages are used in the Lib project's csproj file:
<pre>
<code class='hljs xml'>
<ItemGroup>
<PackageReference Include="Azure.AI.Translation.Text" Version="1.0.0-beta.1" />
<PackageReference Include="Microsoft.AspNetCore.Components.Web" Version="6.0.19" />
<PackageReference Include="Azure.AI.TextAnalytics" Version="5.3.0" />
</ItemGroup>
</code>
</pre>
We are going to build a .NET 6 cross-platform MAUI Blazor app. First off, we focus on the Razor library project called 'Lib'. This
project contains the utility code to detect a language and to translate text into another language.
Let us first look at creating the clients needed for language detection and translation.
<b>TextAnalyticsFactory.cs</b>
<pre>
<code class='hljs csharp'>
using Azure;
using Azure.AI.TextAnalytics;
using Azure.AI.Translation.Text;
using System;
namespace MultiLingual.Translator.Lib
{
public static class TextAnalyticsClientFactory
{
public static TextAnalyticsClient CreateClient()
{
string? uri = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_ENDPOINT", EnvironmentVariableTarget.Machine);
string? key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICE_KEY", EnvironmentVariableTarget.Machine);
if (uri == null)
{
throw new ArgumentNullException(nameof(uri), "Could not get system environment variable named 'AZURE_COGNITIVE_SERVICE_ENDPOINT'. Set this variable first.");
}
if (key == null)
{
throw new ArgumentNullException(nameof(key), "Could not get system environment variable named 'AZURE_COGNITIVE_SERVICE_KEY'. Set this variable first.");
}
var client = new TextAnalyticsClient(new Uri(uri!), new AzureKeyCredential(key!));
return client;
}
public static TextTranslationClient CreateTranslateClient()
{
string? keyTranslate = Environment.GetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_KEY", EnvironmentVariableTarget.Machine);
string? regionForTranslationService = Environment.GetEnvironmentVariable("AZURE_TRANSLATION_SERVICE_REGION", EnvironmentVariableTarget.Machine);
if (keyTranslate == null)
{
throw new ArgumentNullException(nameof(keyTranslate), "Could not get system environment variable named 'AZURE_TRANSLATION_SERVICE_KEY'. Set this variable first.");
}
if (regionForTranslationService == null)
{
throw new ArgumentNullException(nameof(regionForTranslationService), "Could not get system environment variable named 'AZURE_TRANSLATION_SERVICE_REGION'. Set this variable first.");
}
var client = new TextTranslationClient(new AzureKeyCredential(keyTranslate!), region: regionForTranslationService);
return client;
}
}
}
</code>
</pre>
The code assumes that four environment variables exist at the SYSTEM (machine) level of your OS.
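For local experimentation you can set process-level variables in code instead, as in the sketch below (persisting the Machine-level variables the factory reads is typically done once with an elevated <code>setx /M</code>; the endpoint URL here is just a placeholder):

```csharp
using System;

public static class EnvVarDemo
{
    // Sets a process-level variable and reads it back, mimicking what
    // TextAnalyticsClientFactory does with Machine-level variables.
    // Process-level variables are visible only to this process and its children.
    public static string? RoundTrip(string name, string value)
    {
        Environment.SetEnvironmentVariable(name, value);
        return Environment.GetEnvironmentVariable(name);
    }

    public static void Main()
    {
        Console.WriteLine(RoundTrip("AZURE_COGNITIVE_SERVICE_ENDPOINT",
            "https://example.cognitiveservices.azure.com/"));
    }
}
```

Note that <code>EnvironmentVariableTarget.Machine</code>, as used by the factory above, requires the variable to actually exist at the machine level; a process-level variable will not be found there.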
Next, let us look at the code to detect language. It uses a <em>TextAnalyticsClient</em> to detect which language an input text is written in.
<b>IDetectLanguageUtil.cs</b>
<pre>
<code class='hljs csharp'>
using Azure.AI.TextAnalytics;
namespace MultiLingual.Translator.Lib
{
public interface IDetectLanguageUtil
{
Task<DetectedLanguage> DetectLanguage(string inputText);
Task<double> DetectLanguageConfidenceScore(string inputText);
Task<string> DetectLanguageIso6391(string inputText);
Task<string> DetectLanguageName(string inputText);
}
}
</code>
</pre>
<b>DetectLanguageUtil.cs</b>
<pre>
<code class='hljs csharp'>
using Azure.AI.TextAnalytics;
namespace MultiLingual.Translator.Lib
{
public class DetectLanguageUtil : IDetectLanguageUtil
{
private TextAnalyticsClient _client;
public DetectLanguageUtil()
{
_client = TextAnalyticsClientFactory.CreateClient();
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>.
/// </summary>
/// <param name="inputText"></param>
/// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<DetectedLanguage> DetectLanguage(string inputText)
{
DetectedLanguage detectedLanguage = await _client.DetectLanguageAsync(inputText);
return detectedLanguage;
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>. Returns the language name.
/// </summary>
/// <param name="inputText"></param>
/// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<string> DetectLanguageName(string inputText)
{
DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
return detectedLanguage.Name;
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>. Returns the language code.
/// </summary>
/// <param name="inputText"></param>
/// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<string> DetectLanguageIso6391(string inputText)
{
DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
return detectedLanguage.Iso6391Name;
}
/// <summary>
/// Detects language of the <paramref name="inputText"/>. Returns the confidence score
/// </summary>
/// <param name="inputText"></param>
/// <remarks> <see cref="Models.LanguageCode" /> contains the language code list of languages supported</remarks>
public async Task<double> DetectLanguageConfidenceScore(string inputText)
{
DetectedLanguage detectedLanguage = await DetectLanguage(inputText);
return detectedLanguage.ConfidenceScore;
}
}
}
</code>
</pre>
The <em>Iso6391</em> code is important for translation, as will be shown soon. But first, let us look at the languages supported by the Azure AI Translation service.
<b>LanguageCode.cs</b>
<pre>
<code class='hljs csharp'>
namespace MultiLingual.Translator.Lib.Models
{
/// <summary>
/// List of supported languages in Azure AI services
/// https://learn.microsoft.com/en-us/azure/ai-services/translator/language-support
/// </summary>
public static class LanguageCode
{
public const string Afrikaans = "af";
public const string Albanian = "sq";
public const string Amharic = "am";
public const string Arabic = "ar";
public const string Armenian = "hy";
public const string Assamese = "as";
public const string AzerbaijaniLatin = "az";
public const string Bangla = "bn";
public const string Bashkir = "ba";
public const string Basque = "eu";
public const string BosnianLatin = "bs";
public const string Bulgarian = "bg";
public const string CantoneseTraditional = "yue";
public const string Catalan = "ca";
public const string ChineseLiterary = "lzh";
public const string ChineseSimplified = "zh-Hans";
public const string ChineseTraditional = "zh-Hant";
public const string chiShona = "sn";
public const string Croatian = "hr";
public const string Czech = "cs";
public const string Danish = "da";
public const string Dari = "prs";
public const string Divehi = "dv";
public const string Dutch = "nl";
public const string English = "en";
public const string Estonian = "et";
public const string Faroese = "fo";
public const string Fijian = "fj";
public const string Filipino = "fil";
public const string Finnish = "fi";
public const string French = "fr";
public const string FrenchCanada = "fr-ca";
public const string Galician = "gl";
public const string Georgian = "ka";
public const string German = "de";
public const string Greek = "el";
public const string Gujarati = "gu";
public const string HaitianCreole = "ht";
public const string Hausa = "ha";
public const string Hebrew = "he";
public const string Hindi = "hi";
public const string HmongDawLatin = "mww";
public const string Hungarian = "hu";
public const string Icelandic = "is";
public const string Igbo = "ig";
public const string Indonesian = "id";
public const string Inuinnaqtun = "ikt";
public const string Inuktitut = "iu";
public const string InuktitutLatin = "iu-Latn";
public const string Irish = "ga";
public const string Italian = "it";
public const string Japanese = "ja";
public const string Kannada = "kn";
public const string Kazakh = "kk";
public const string Khmer = "km";
public const string Kinyarwanda = "rw";
/// <summary>
/// Fear my Bak'leth !
/// </summary>
public const string Klingon = "tlh-Latn";
public const string KlingonplqaD = "tlh-Piqd";
public const string Konkani = "gom";
public const string Korean = "ko";
public const string KurdishCentral = "ku";
public const string KurdishNorthern = "kmr";
public const string KyrgyzCyrillic = "ky";
public const string Lao = "lo";
public const string Latvian = "lv";
public const string Lithuanian = "lt";
public const string Lingala = "ln";
public const string LowerSorbian = "dsb";
public const string Luganda = "lug";
public const string Macedonian = "mk";
public const string Maithili = "mai";
public const string Malagasy = "mg";
public const string MalayLatin = "ms";
public const string Malayalam = "ml";
public const string Maltese = "mt";
public const string Maori = "mi";
public const string Marathi = "mr";
public const string MongolianCyrillic = "mn-Cyrl";
public const string MongolianTraditional = "mn-Mong";
public const string Myanmar = "my";
public const string Nepali = "ne";
public const string Norwegian = "nb";
public const string Nyanja = "nya";
public const string Odia = "or";
public const string Pashto = "ps";
public const string Persian = "fa";
public const string Polish = "pl";
public const string PortugueseBrazil = "pt";
public const string PortuguesePortugal = "pt-pt";
public const string Punjabi = "pa";
public const string QueretaroOtomi = "otq";
public const string Romanian = "ro";
public const string Rundi = "run";
public const string Russian = "ru";
public const string SamoanLatin = "sm";
public const string SerbianCyrillic = "sr-Cyrl";
public const string SerbianLatin = "sr-Latn";
public const string Sesotho = "st";
public const string SesothosaLeboa = "nso";
public const string Setswana = "tn";
public const string Sindhi = "sd";
public const string Sinhala = "si";
public const string Slovak = "sk";
public const string Slovenian = "sl";
public const string SomaliArabic = "so";
public const string Spanish = "es";
public const string SwahiliLatin = "sw";
public const string Swedish = "sv";
public const string Tahitian = "ty";
public const string Tamil = "ta";
public const string TatarLatin = "tt";
public const string Telugu = "te";
public const string Thai = "th";
public const string Tibetan = "bo";
public const string Tigrinya = "ti";
public const string Tongan = "to";
public const string Turkish = "tr";
public const string TurkmenLatin = "tk";
public const string Ukrainian = "uk";
public const string UpperSorbian = "hsb";
public const string Urdu = "ur";
public const string UyghurArabic = "ug";
public const string UzbekLatin = "uz";
public const string Vietnamese = "vi";
public const string Welsh = "cy";
public const string Xhosa = "xh";
public const string Yoruba = "yo";
public const string YucatecMaya = "yua";
public const string Zulu = "zu";
}
}
</code>
</pre>
With an estimated 5,000-10,000 languages in the world, the list above shows that the Azure AI translation service supports about 130 of them, roughly 1-2 % of the total. The supported languages do, of course, include the most widely spoken languages in the world.
Let us look at the translation util code next.
<b>ITranslateUtil.cs</b>
<pre>
<code class='hljs csharp'>
namespace MultiLingual.Translator.Lib
{
public interface ITranslateUtil
{
Task<string?> Translate(string targetLanguage, string inputText, string? sourceLanguage = null);
}
}
</code>
</pre>
<b>TranslateUtil.cs</b>
<pre>
<code class='hljs csharp'>
using Azure.AI.Translation.Text;
using MultiLingual.Translator.Lib.Models;
namespace MultiLingual.Translator.Lib
{
public class TranslateUtil : ITranslateUtil
{
private TextTranslationClient _client;
public TranslateUtil()
{
_client = TextAnalyticsClientFactory.CreateTranslateClient();
}
/// <summary>
/// Translates text using Azure AI Translate services.
/// </summary>
/// <param name="targetLanguage"><see cref="LanguageCode"/> contains a list of supported languages</param>
/// <param name="inputText"></param>
/// <param name="sourceLanguage">Pass in null here to auto detect the source language</param>
/// <returns></returns>
public async Task<string?> Translate(string targetLanguage, string inputText, string? sourceLanguage = null)
{
var translationOfText = await _client.TranslateAsync(targetLanguage, inputText, sourceLanguage);
if (translationOfText?.Value == null)
{
return null;
}
var translation = translationOfText.Value.SelectMany(l => l.Translations).Select(l => l.Text)?.ToList();
string? translationText = translation?.FlattenString();
return translationText;
}
}
}
</code>
</pre>
We use a little helper extension method here too :
<b>StringExtensions.cs</b>
<pre>
<code class='hljs csharp'>
using System.Text;
namespace MultiLingual.Translator.Lib
{
public static class StringExtensions
{
/// <summary>
/// Merges a collection of lines into a flattened string separating each line by a specified line separator.
/// Newline is default.
/// </summary>
/// <param name="inputLines"></param>
/// <param name="lineSeparator"></param>
/// <returns></returns>
public static string? FlattenString(this IEnumerable<string>? inputLines, string lineSeparator = "\n")
{
if (inputLines == null || !inputLines.Any())
{
return null;
}
// string.Join inserts the separator between lines without appending
// an extra Environment.NewLine per line (as AppendLine would)
var flattenedString = string.Join(lineSeparator, inputLines);
return flattenedString;
}
}
}
</code>
</pre>
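The extension method can be exercised in isolation. Below is a self-contained sketch with a simplified string.Join-based equivalent of <em>FlattenString</em>, so the snippet compiles on its own:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FlattenDemo
{
    // Simplified equivalent of the FlattenString extension method:
    // joins the lines with the separator, or returns null for empty input.
    public static string? FlattenString(this IEnumerable<string>? inputLines, string lineSeparator = "\n")
    {
        if (inputLines == null || !inputLines.Any())
        {
            return null;
        }
        return string.Join(lineSeparator, inputLines);
    }

    public static void Main()
    {
        var lines = new[] { "Hello", "World" };
        Console.WriteLine(lines.FlattenString(" | ")); // Hello | World
        Console.WriteLine(((string[]?)null).FlattenString() ?? "(null)"); // (null)
    }
}
```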
Here are some tests for detecting language :
<b>DetectLanguageUtilTests.cs</b>
<pre>
<code class='hljs csharp'>
using Azure.AI.TextAnalytics;
using FluentAssertions;
namespace MultiLingual.Translator.Lib.Test
{
public class DetectLanguageUtilTests
{
private DetectLanguageUtil _detectLanguageUtil;
public DetectLanguageUtilTests()
{
_detectLanguageUtil = new DetectLanguageUtil();
}
[Theory]
[InlineData("Donde esta la playa", "es", "Spanish")]
[InlineData("Jeg er fra Trøndelag og jeg liker brunost", "no", "Norwegian")]
public async Task DetectLanguageDetailsSucceeds(string text, string expectedLanguageIso6391, string expectedLanguageName)
{
string? detectedLangIso6391 = await _detectLanguageUtil.DetectLanguageIso6391(text);
detectedLangIso6391.Should().Be(expectedLanguageIso6391);
string? detectedLangName = await _detectLanguageUtil.DetectLanguageName(text);
detectedLangName.Should().Be(expectedLanguageName);
}
[Theory]
[InlineData("Du hast mich", "de", "German")]
public async Task DetectLanguageSucceeds(string text, string expectedLanguageIso6391, string expectedLanguageName)
{
DetectedLanguage detectedLanguage = await _detectLanguageUtil.DetectLanguage(text);
detectedLanguage.Iso6391Name.Should().Be(expectedLanguageIso6391);
detectedLanguage.Name.Should().Be(expectedLanguageName);
}
}
}
</code>
</pre>
And here are some translation util tests :
<b>TranslateUtilTests.cs</b>
<pre>
<code class='hljs csharp'>
using FluentAssertions;
using MultiLingual.Translator.Lib.Models;
namespace MultiLingual.Translator.Lib.Test
{
public class TranslateUtilTests
{
private TranslateUtil _translateUtil;
public TranslateUtilTests()
{
_translateUtil = new TranslateUtil();
}
[Theory]
[InlineData("Jeg er fra Norge og jeg liker brunost", "i'm from norway and i like brown cheese", LanguageCode.Norwegian, LanguageCode.English)]
[InlineData("Jeg er fra Norge og jeg liker brunost", "i'm from norway and i like brown cheese", null, LanguageCode.English)] //auto detect language is tested here
[InlineData("Ich bin aus Hamburg und ich liebe bier", "i'm from hamburg and i love beer", LanguageCode.German, LanguageCode.English)]
[InlineData("Ich bin aus Hamburg und ich liebe bier", "i'm from hamburg and i love beer", null, LanguageCode.English)] //Auto detect source language is tested here
[InlineData("tlhIngan maH", "we are klingons", LanguageCode.Klingon, LanguageCode.English)] //Klingon force !
public async Task TranslationReturnsExpected(string input, string expectedTranslation, string sourceLanguage, string targetLanguage)
{
string? translation = await _translateUtil.Translate(targetLanguage, input, sourceLanguage);
translation.Should().NotBeNull();
translation.Should().BeEquivalentTo(expectedTranslation);
}
}
}
</code>
</pre>
Over to the UI. The app is made with <em>MAUI Blazor</em>.
Here are some models for the app :
<b>LanguageInputModel.cs</b>
<pre>
<code class='hljs csharp'>
namespace MultiLingual.Translator.Models
{
public class LanguageInputModel
{
public string InputText { get; set; }
public string DetectedLanguageInfo { get; set; }
public string DetectedLanguageIso6391 { get; set; }
public string TargetLanguage { get; set; }
public string TranslatedText { get; set; }
}
}
</code>
</pre>
<b>NameValue.cs</b>
<pre>
<code class='hljs csharp'>
namespace MultiLingual.Translator.Models
{
public class NameValue
{
public string Name { get; set; }
public string Value { get; set; }
}
}
</code>
</pre>
The UI consists of the following Razor code, written for the MAUI Blazor app.
<b>Index.razor</b>
<pre>
<code class='hljs csharp'>
@page "/"
@inject ITranslateUtil TransUtil
@inject IDetectLanguageUtil DetectLangUtil
@inject IJSRuntime JS
@using MultiLingual.Translator.Lib;
@using MultiLingual.Translator.Lib.Models;
@using MultiLingual.Translator.Models;
<h1>Azure AI Text Translation</h1>
<EditForm Model="@Model" OnValidSubmit="@Submit" class="form-group" style="background-color:aliceblue;">
<DataAnnotationsValidator />
<ValidationSummary />
<div class="form-group row">
<label for="Model.InputText">Text to translate</label>
<InputTextArea @bind-Value="Model!.InputText" placeholder="Enter text to translate" @ref="inputTextRef" id="textToTranslate" rows="5" />
</div>
<div class="form-group row">
<span>Detected language of text to translate</span>
<InputText class="languageLabelText" readonly="readonly" placeholder="The detected language of the text to translate" @bind-Value="Model!.DetectedLanguageInfo"></InputText>
@if (Model.DetectedLanguageInfo != null){
<img src="@FlagIcon" class="flagIcon" />
}
</div>
<br />
<div class="form-group row">
<span>Translate into language</span>
<InputSelect placeholder="Choose the target language" @bind-Value="Model!.TargetLanguage">
@foreach (var item in LanguageCodes){
<option value="@item.Value">@item.Name</option>
}
</InputSelect>
<br />
@if (Model.TargetLanguage != null){
<img src="@TargetFlagIcon" class="flagIcon" />
}
</div>
<br />
<div class="form-group row">
<span>Translation</span>
<InputTextArea readonly="readonly" placeholder="The translated text target language" @bind-Value="Model!.TranslatedText" rows="5"></InputTextArea>
</div>
<button type="submit" class="submitButton">Submit</button>
</EditForm>
@code {
private Azure.AI.TextAnalytics.TextAnalyticsClient _client;
private InputTextArea inputTextRef;
public LanguageInputModel Model { get; set; } = new();
private string FlagIcon {
get
{
return $"images/flags/png100px/{Model.DetectedLanguageIso6391}.png";
}
}
private string TargetFlagIcon {
get
{
return $"images/flags/png100px/{Model.TargetLanguage}.png";
}
}
private List<NameValue> LanguageCodes = typeof(LanguageCode).GetFields().Select(f => new NameValue {
Name = f.Name,
Value = f.GetValue(f)?.ToString(),
}).OrderBy(f => f.Name).ToList();
private async Task Submit()
{
var detectedLanguage = await DetectLangUtil.DetectLanguage(Model.InputText);
Model.DetectedLanguageInfo = $"{detectedLanguage.Iso6391Name} {detectedLanguage.Name}";
Model.DetectedLanguageIso6391 = detectedLanguage.Iso6391Name;
if (_client == null)
{
_client = TextAnalyticsClientFactory.CreateClient();
}
Model.TranslatedText = await TransUtil.Translate(Model.TargetLanguage, Model.InputText, detectedLanguage.Iso6391Name);
StateHasChanged();
}
protected override async Task OnAfterRenderAsync(bool firstRender)
{
if (firstRender)
{
Model.TargetLanguage = LanguageCode.English;
await JS.InvokeVoidAsync("exampleJsFunctions.focusElement", inputTextRef?.AdditionalAttributes.FirstOrDefault(a => a.Key?.ToLower() == "id").Value);
StateHasChanged();
}
}
}
</code>
</pre>
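The <em>LanguageCodes</em> list in the <em>@code</em> block above is built by reflecting over the public const fields of <em>LanguageCode</em>. The same pattern in isolation, using a hypothetical two-entry code class (note that <em>Type.GetFields()</em> returns public static fields by default, which includes const fields, and that <em>GetValue(null)</em> suffices for static fields):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TinyLanguageCode
{
    public const string English = "en";
    public const string Norwegian = "nb";
}

public class NameValue
{
    public string Name { get; set; } = "";
    public string Value { get; set; } = "";
}

public static class ReflectionDemo
{
    // Builds Name/Value pairs from the const fields, just like the razor page does.
    public static List<NameValue> GetCodes() =>
        typeof(TinyLanguageCode).GetFields()
            .Select(f => new NameValue { Name = f.Name, Value = f.GetValue(null)?.ToString() ?? "" })
            .OrderBy(f => f.Name)
            .ToList();

    public static void Main()
    {
        foreach (var code in GetCodes())
            Console.WriteLine($"{code.Name} = {code.Value}"); // English = en, Norwegian = nb
    }
}
```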
Finally, a screenshot of how the app looks:
You enter the text to translate, and the detected language is shown after you hit Submit. You can select the target language to translate the text into; English is selected by default. The Iso6391 code of the selected language is shown as a flag icon, if a 1:1 mapping exists between the Iso6391 code and
the flag icons available in the app. The top flag shows the detected language via its Iso6391 code, again only if such a 1:1 mapping exists.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_Z8XIq4PQ7ygq6ZdSCAYc7K4cpJBcArIf7XuwLj55-wvraVt76LJV6xni7qgUySVm0ORJQ6g2ecHpVi79fDZBzxtI_l1aEB0CgvZsSF91SpdOcew3aT8itV2EjXM6zlbcHgEFCYQwyYgDGR88oyOWknwwYyU1HfeQ435cSfjX5lxdnLUOudNm6KgNZ_E/s1600/azure_ai_text_translation.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="871" data-original-width="1179" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_Z8XIq4PQ7ygq6ZdSCAYc7K4cpJBcArIf7XuwLj55-wvraVt76LJV6xni7qgUySVm0ORJQ6g2ecHpVi79fDZBzxtI_l1aEB0CgvZsSF91SpdOcew3aT8itV2EjXM6zlbcHgEFCYQwyYgDGR88oyOWknwwYyU1HfeQ435cSfjX5lxdnLUOudNm6KgNZ_E/s1600/azure_ai_text_translation.png"/></a></div>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-36702963687823689242023-08-01T17:48:00.008+02:002023-08-01T21:39:39.971+02:00Writing and reading unmapped properties in Azure Cosmos DBThis article presents code that shows how you can read and write unmapped properties in Azure Cosmos DB.
An unmapped property in Azure Cosmos DB is a property that is NOT part of your domain models; for example, you could store a LastUpdateBy or other metadata property without exposing it in your domain models. This is similar to <em>shadow</em> properties in Entity Framework, which are likewise absent from your entity classes and are configured via the <br>
Fluent API when setting up your db context.
In Azure Cosmos DB, the raw JSON of an item is exposed in the "__jObject" shadow property. It can be read using the Newtonsoft.Json library, and you can add unmapped properties to it and read them back afterwards.
You should not only save the changes to persist them in Azure Cosmos DB, but also re-read the data to get an updated item from your <em>container</em> (table).
The following extension methods make it easier to write and read such unmapped properties with Azure Cosmos DB.
<pre>
<code class='hljs csharp'>
using Microsoft.EntityFrameworkCore;
using Newtonsoft.Json.Linq;

public static class AzureCosmosEntityExtensions
{
public static TResult? GetUnmappedProperty<TResult, T>(this T entity, string propname, DbContext context) where T : class
{
if (entity == null)
{
return default(TResult);
}
var entry = context.Entry(entity);
var rawJson = entry.Property<JObject>("__jObject");
var currentValueProp = rawJson.CurrentValue[propname];
if (currentValueProp == null)
{
return default(TResult);
}
var currentValuePropCasted = currentValueProp.ToObject<TResult?>();
return currentValuePropCasted;
}
public static void SetUnmappedProperty<T>(this T entity, string propname, object value, DbContext context) where T : class
{
if (entity == null)
{
return;
}
var entry = context.Entry(entity);
var rawJson = entry.Property<JObject>("__jObject");
rawJson.CurrentValue[propname] = JToken.FromObject(value);
entry.State = EntityState.Modified;
}
}
</code>
</pre>
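The JSON-side mechanics here — parse a document, add a property, read it back — can be illustrated in isolation. The extension methods above go through Newtonsoft's JObject because that is what the EF Core Cosmos provider exposes in "__jObject"; the sketch below shows the same idea with System.Text.Json's JsonNode purely so the snippet is self-contained, with a made-up address document:

```csharp
using System;
using System.Text.Json.Nodes;

public static class UnmappedPropertyDemo
{
    // Parses a stored document, writes an unmapped property into it,
    // and reads it back - the same round trip the extension methods perform.
    public static string? RoundTrip()
    {
        var doc = JsonNode.Parse("{\"AddressId\":\"Address-1\",\"City\":\"Trondheim\"}")!;

        // Write the unmapped property (SetUnmappedProperty does this on the JObject)...
        doc["LastUpdate"] = "2023-08-01T17:48:00Z";

        // ...and read it back (GetUnmappedProperty does this).
        return doc["LastUpdate"]?.GetValue<string>();
    }

    public static void Main()
    {
        Console.WriteLine(RoundTrip()); // 2023-08-01T17:48:00Z
    }
}
```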
Let's see some sample code to set this up. Consider the following model :
<pre>
<code class='hljs csharp'>
public class Address
{
public string AddressId { get; set; }
public string State { get; set; }
public string City { get; set; }
public string Street { get; set; }
public string HouseNumber { get; set; }
}
</code>
</pre>
Let's add an unmapped property, "LastUpdate", that is not exposed in the domain model. As mentioned, we must make sure to reload the entity after saving it in order to
test reading the unmapped property back. Note that we set the entry's <em>State</em> to <em>EntityState.Modified</em> to trigger saving of these unmapped properties, since the <em>ChangeTracker</em> is not
tracking them and EF would otherwise not save the updates to them.
<pre>
<code class='hljs csharp'>
//Create a db context and for an item entity in a container (table) in Azure Cosmos DB,
//set the unmapped property to a value and also read out this property after saving it and reloading the entity
using var context = await contextFactory.CreateDbContextAsync();
var address = await context.Addresses.FirstAsync();
const string unmappedPropKey = "LastUpdate";
address.SetUnmappedProperty(unmappedPropKey, DateTime.UtcNow, context);
await context.SaveChangesAsync();
address = await context.Addresses.FirstAsync();
var unnmappedProp = address.GetUnmappedProperty<DateTime, Address>(unmappedPropKey, context);
</code>
</pre>
The following screenshot below shows the unmapped property <em>LastUpdate</em> was written to Azure Cosmos DB item in the container (table).
<img alt="" border="0" width="900" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbvRuA6SXNz9m0rpLrFWhZQNb2MMVKxCSk76Z5h83k_f5eTHq0bcRk9LNkiLLxo-SkTZzO5oYtd8JI-8ZaEdr9jN-OIAsV_zXHg5FURnS65-FI4bzBxtoCMSp2gyBwtFRNnbyA7tkmZFtzdMRfUkXmkc0wv0qRqIZOGcZYWkbVabve92sk2HFyy1uxI5E/s1600/azurecosmosdb_unmapped_properties.png"/>Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-89992443873324936252023-07-31T16:25:00.005+02:002023-07-31T16:43:29.268+02:00Loading references (navigational properties) for an item in Azure Cosmos DBThe extension methods shown here can be applied to Entity Framework Core in general. In Azure Cosmos DB, if you use the <em>FindAsync</em> method, for example, the references of the item are not loaded automatically.
Instead, you must explicitly go via the <em>Entry</em> method and then call LoadAsync on each <em>Reference</em>.
A reference is also called a navigation property in EF.
Let's first consider this code, which finds a Trip data item (the data is stored in JSON format, since Azure Cosmos DB is a schema-less, non-relational 'document DB') together with related data such as the
Driver and Address.
The POCO for Trip looks like this:
<pre>
<code class='hljs csharp'>
#region Info and license
#endregion
namespace TransportApp.Domain
{
public class Trip
{
public string TripId { get; set; }
public DateTime BeginUtc { get; set; }
public DateTime? EndUtc { get; set; }
public short PassengerCount { get; set; }
public string DriverId { get; set; }
public Driver Driver { get; set; }
public string VehicleId { get; set; }
public Vehicle Vehicle { get; set; }
public string FromAddressId { get; set; }
public Address FromAddress { get; set; }
public string ToAddressId { get; set; }
public Address ToAddress { get; set; }
}
}
</code>
</pre>
We create a db context for Azure Cosmos DB like this :
<pre>
<code class='csharp hljs'>
private readonly IDbContextFactory<TransportContext> _contextFactory;
public SomeService(IDbContextFactory<TransportContext> contextFactory)
{
_contextFactory = contextFactory;
//..
}
public async Task RunSomeDemoCode()
{
using var context = await _contextFactory.CreateDbContextAsync();
}
</code>
</pre>
Once we have our <em>context</em> object we can get data from Azure Cosmos DB.
<pre>
<code class='csharp hljs'>
public async Task RunSomeDemoCode()
{
using var context = await _contextFactory.CreateDbContextAsync();
var trip1 = await context.Trips.FindAsync($"{nameof(Trip)}-1");
await context.LoadEntityWithAllReferences(trip1!);
}
</code>
</pre>
Note that we probably want to check that the trip1 object is not null here. We also want to get the <em>relations</em>. These are not loaded automatically in Azure Cosmos DB! We can load the <em>relations</em>, or <em>navigation properties</em>, using the extension methods listed below. If you use the method accepting an EntityEntry, you must call the <em>Entry</em> method first, as shown in the other method.
<pre>
<code class='csharp hljs'>
public static class EntityEntryExtensions
{
public static async Task LoadAllReferences<T>(this EntityEntry<T> entry) where T : class
{
foreach (var reference in entry.References)
{
await reference.LoadAsync();
}
}
}
public static class EntityExtensions
{
public static async Task LoadEntityWithAllReferences<T>(this DbContext dbContext, T dataItem) where T : class
{
if (dataItem == null)
{
return;
}
var entity = dbContext.Entry(dataItem!);
foreach (var reference in entity.References)
{
await reference.LoadAsync();
}
}
}
</code>
</pre>
I have added two screenshots here, first before calling the method <em>LoadEntityWithAllReferences</em>, and afterwards.
<pre>
Before:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifFgTtseKvgQ2Vbsvt6cQy1CRnFk0FQMsfnOSjNmLNOEwsmKUrH3tlwIrnE-oVDFR41bXCKxkJr_bh4h8BkVyCF76R4N496YqwDEfHgo6Lg9Hur5Igxo0odOkn4l0Nf_VW9cyYc53n6DWTlsygmU_QIMLgoobFZQdSwh5bImsj3ynStEGA5-eY_vuUmo4/s1600/azurecosmosdb_before_loadrelated_entities.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="307" data-original-width="822" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifFgTtseKvgQ2Vbsvt6cQy1CRnFk0FQMsfnOSjNmLNOEwsmKUrH3tlwIrnE-oVDFR41bXCKxkJr_bh4h8BkVyCF76R4N496YqwDEfHgo6Lg9Hur5Igxo0odOkn4l0Nf_VW9cyYc53n6DWTlsygmU_QIMLgoobFZQdSwh5bImsj3ynStEGA5-eY_vuUmo4/s1600/azurecosmosdb_before_loadrelated_entities.png"/></a></div>
After:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOqEz9pN2KuwDfpDMPZXYzfKIz7TNOO9L5ebPvcvO0n7IhsHoEglKsIuWgdCdlEbcO07gy4c-Xr4WKoor0GNIB-QhXtlvuA6LoSL9OuzSptQAcV0nklZK48-RZw6aNSnNpDD0hecAJVkWkWNSAavM8Pe_11D3qLS_6D0hPPfMJ6R4AzPVDATi1NZocCn8/s1600/azurecosmosdb_after_loadrelated_entities.png" style="display: block; padding: 1em 0; text-align: center; clear: left; float: left;"><img alt="" border="0" data-original-height="325" data-original-width="918" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOqEz9pN2KuwDfpDMPZXYzfKIz7TNOO9L5ebPvcvO0n7IhsHoEglKsIuWgdCdlEbcO07gy4c-Xr4WKoor0GNIB-QhXtlvuA6LoSL9OuzSptQAcV0nklZK48-RZw6aNSnNpDD0hecAJVkWkWNSAavM8Pe_11D3qLS_6D0hPPfMJ6R4AzPVDATi1NZocCn8/s1600/azurecosmosdb_after_loadrelated_entities.png"/></a></div>
</pre>
<hr />
<br />
<br />
As we can see from the screenshots, the references have been loaded, and you avoid having to manually load each reference property / navigation property one by one.
Note about some of the sample code: it came from a Pluralsight course; the extension methods here are something I have explored further myself.
<pre>
/*
This demo application accompanies Pluralsight course 'Using EF Core 6 with Azure Cosmos DB',
by Jurgen Kevelaers. See https://pluralsight.pxf.io/efcore6-cosmos.
MIT License
Copyright (c) 2022 Jurgen Kevelaers
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
</pre>Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-8554864477759925172023-07-29T20:52:00.014+02:002023-07-29T21:42:37.341+02:00Forced cancellation of a CancellationTokenA CancellationToken is used to signal that an asynchronous task should be cancelled, from the point in the code where cancellation is signaled and <em>downstream</em>.
Methods <em>downstream</em> are the called methods and their sub methods / subroutines. We can pass a cancellation token into, for example, Entity Framework Core to cancel a heavy database I/O operation.
A cancellation like this can be triggered in many different ways.
<br /><br /><br/>
<h5>Examples of ways to cancel a cancellation tokens</h5>
1. By user interface actions, like hitting a Cancel button in the UI. For example, the REST client Insomnia allows you to do this.
2. Other means of stopping a task. Inside a browser you can stop requests by reloading a window / tab. For example, if you use a Swagger API in a browser, you can refresh the Swagger web page in its tab to indicate that a cancellation is desired.
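Regardless of how the cancellation is triggered, the underlying mechanics are the same: a <em>CancellationTokenSource</em> (CTS) owns the token and signals it, while the token itself is only used to observe the cancellation. A minimal sketch of these mechanics:
<pre>
<code class='hljs csharp'>
using System;
using System.Threading;

var cts = new CancellationTokenSource();
CancellationToken token = cts.Token;

Console.WriteLine(token.IsCancellationRequested); // False - nothing cancelled yet
cts.Cancel();                                     // only the owner of the CTS can signal cancellation
Console.WriteLine(token.IsCancellationRequested); // True - observed via the token
</code>
</pre>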
The suggested way to programmatically cancel a cancellation token in code is to throw an OperationCanceledException. If you have the <em>Cancellation Token Source</em> - the CTS - you can cancel the cancellation token as you like, but most often you do not have the CTS.
You can always throw an <em>OperationCanceledException</em> yourself.
Downstream code can also listen for such a cancellation if you call <em>ThrowIfCancellationRequested</em> on the cancellation token.
Another way to cancel a cancellation token is to create a linked cancellation token and cancel the cancellation token source you created for it. This alternative directly marks the token as cancelled for code downstream, in case some logic should still be allowed to run instead of being aborted by directly throwing an OperationCanceledException.
Is it a good approach to programmatically create a new cancellation token and overwrite the existing one, or should you instead just throw an OperationCanceledException? And why not just stick to the same object? I am overwriting the token here using the ref keyword, since CancellationToken is a struct.
This makes it harder to overwrite, since structs are copied by value into methods, such as an extension method. Whether or not this is a good idea - I include the code here for completeness.
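For comparison, the simplest cooperative pattern - without creating any new token or token source - is to let downstream code poll the token and throw. The DoHeavyWork method below is just a hypothetical long-running routine used for illustration:
<pre>
<code class='hljs csharp'>
using System.Threading;

void DoHeavyWork(CancellationToken cancellationToken)
{
    for (int i = 0; i < 1_000_000; i++)
    {
        // Throws an OperationCanceledException if cancellation has been signaled
        cancellationToken.ThrowIfCancellationRequested();
        // ... perform one unit of work here ...
    }
}
</code>
</pre>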
<br /><br />
<h3>Defining an extension method on cancellation tokens which can cancel them</h3>
<h5>CancellationTokenExtension.cs</h5>
<pre>
<code class='hljs csharp'>
namespace CarvedRock.Api
{
public static class CancellationTokenExtensions
{
public static void ForceCancel(ref this CancellationToken cancellationToken,
Func<bool>? condition = null)
{
if (condition == null || condition.Invoke())
{
var cts = CancellationTokenSource.CreateLinkedTokenSource(
cancellationToken);
cancellationToken = cts.Token;
cts.Cancel();
}
}
}
}
</code>
</pre>
Here is some sample code that shows how we can use this extension method.
<br /><br />
<h3>Using an extension method that cancels cancellation tokens</h3>
<h5>ProductController.cs</h5>
<pre>
<code class='hljs csharp'>
[HttpGet]
//[ResponseCache(Duration = 90, VaryByQueryKeys = new[] { "category" })]
public async Task<IEnumerable<ProductModel>> Get(CancellationToken cancellationToken, string category = "all")
{
cancellationToken.ForceCancel(() => category == "kayak");
using (_logger.BeginScope("ScopeCat: {ScopeCat}", category))
{
_logger.LogInformation( "Getting products in API.");
return await _productLogic.GetProductsForCategoryAsync(category, cancellationToken);
}
}
</code>
</pre>
So there we have it: we can either use an approach like in this article - creating a temporary new cancellation token source, creating a <em>linked cancellation token</em> from the original cancellation token, overwriting the original and at the same time cancelling it, possibly supplying a condition to decide whether the cancellation token should be cancelled or not - or we can just throw an <em>OperationCanceledException</em>.
In the example code above, the EF Core code communicating with the database finally receives the cancellation token, which is passed into the ToListAsync method. This makes our code <em>cancellable</em>, in case we for example hit big, slow data in the database and the user wants to cancel.
<br /><br />
<h3>Using an extension method that cancels cancellation tokens - downstream code making use of the same cancellation token passed into a sub method</h3>
<h5>CarvedRockRepository.cs</h5>
<pre>
<code class='hljs csharp'>
public async Task<List<Product>> GetProductsAsync(string category, CancellationToken cancellationToken)
{
//.. Inside GetProductAsync method receiving token - more code above here inside the method
var productsToSerialize = await _ctx.Products.Where(p => p.Category == category || category == "all")
.Include(p => p.Rating).ToListAsync(cancellationToken);
// more code inside method below
}
</code>
</pre>
<code>Note! Remember to pass the cancellation token down to your methods and sub methods / subroutines.</code>
<img alt="" width="900" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEheLULKt_JBDnM6IJU4cZsavE0JZugTlReAcVT5Eov3pjDokMYVU6t0-esWmdU9sAiB109tQcMXtlnWXmL-tT3pj7q_jCl2tLOq1m_wBMFCA-rpqMT0OyWUtRDs19ZN5xWWmutucqTJuW2PCjxlrJR5iVcgszwKZ0hDfW-zF3cCkhDAdGVcg0uLhxpvfmU/s1600/insomnia_screenshot.png"/>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com1tag:blogger.com,1999:blog-7240109143089619921.post-63337626552845681042023-07-09T23:15:00.014+02:002023-07-10T00:13:46.157+02:00Localizing Blazor WASM applications with a language pickerThis article presents code showing how to localize a Blazor WASM app with a language picker. This is part of <em>globalizing</em> an app.
The sample app is available on GitHub:
<a href='https://github.com/toreaurstadboss/HelloBlazorLocalization'>https://github.com/toreaurstadboss/HelloBlazorLocalization</a>
<br /><br />
<img alt="" border="0" data-original-height="408" data-original-width="1039" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjHLskDy1c2_F5qwWcQ2ZNsBL4Mv3Crf8ytw2BpDZLQ2HYN9i6R0CHZlKRfhpct8AGBrd1V1nl6bdplQpoOa-gCffd41cEQ3ML42k5EsLyPdV-BKbi38O9dY50B7KSxPGsTfbbeoQp83IBH_tWpZbhfksy8deaVcpqzoZNArHIg7VVkXUExupPkXKg0Rg/s1600/languagepicker_blazorwasm.png"/>
First off, we need to add some Nuget package references, such as adding a capability of using local storage in a convenient way in the Blazor WASM app. The project file of the sample app has this setup :
<pre>
<code class='hljs csharp'>
<b>Project file - HelloBlazorLocalization.csproj</b>
<Project Sdk="Microsoft.NET.Sdk.BlazorWebAssembly">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
<BlazorWebAssemblyLoadAllGlobalizationData>true</BlazorWebAssemblyLoadAllGlobalizationData>
</PropertyGroup>
<ItemGroup>
<Compile Remove="Shared\Resources\**" />
<Content Remove="Shared\Resources\**" />
<EmbeddedResource Remove="Shared\Resources\**" />
<None Remove="Shared\Resources\**" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Blazored.LocalStorage" Version="4.3.0" />
<PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly" Version="6.0.3" />
<PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly.DevServer" Version="6.0.3" PrivateAssets="all" />
<PackageReference Include="Microsoft.AspNetCore.Localization" Version="2.1.1" />
<PackageReference Include="Microsoft.AspNetCore.WebUtilities" Version="2.2.0" />
<PackageReference Include="Microsoft.Extensions.Localization" Version="6.0.3" />
</ItemGroup>
<ItemGroup>
<Folder Include="wwwroot\flag-icons\" />
</ItemGroup>
<ItemGroup>
<None Include="HelloBlazorLocalization.sln" />
</ItemGroup>
</Project>
</code>
</pre>
Note the use of the property setting <em>BlazorWebAssemblyLoadAllGlobalizationData</em>.
This is required to add localization to your Blazor WASM app! Also note that we use <em>Blazored.LocalStorage</em> to write to and read from local storage.
Let's look at the Program.cs file next how we set up the app.
<pre>
<code class='hljs csharp'>
<b>Program.cs</b>
using Blazored.LocalStorage;
using HelloBlazorLocalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Components.Web;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
var builder = WebAssemblyHostBuilder.CreateDefault(args);
builder.RootComponents.Add<App>("#app");
builder.RootComponents.Add<HeadOutlet>("head::after");
builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.HostEnvironment.BaseAddress) });
builder.Services.Configure<RequestLocalizationOptions>(options =>
{
string[] supportedCultures = new[] { "no", "en" };
options
.AddSupportedCultures(supportedCultures)
.AddSupportedUICultures(supportedCultures)
.SetDefaultCulture("no");
});
builder.Services.AddLocalization(options =>
options.ResourcesPath = "Resources");
builder.Services.AddBlazoredLocalStorage();
await builder.Services.BuildServiceProvider().SetDefaultCultureAsync();
await builder.Build().RunAsync();
</code>
</pre>
An extension method is added to <em>ServiceProvider</em> to load the selected culture from local storage. It also inspects the query string, if any, since the language picker component presented later on will reload the Blazor WASM app after a language is selected.
<pre>
<code class='hljs csharp'>
<b>WebAssemblyHostExtensions.cs</b>
using Blazored.LocalStorage;
using Microsoft.AspNetCore.Components;
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.AspNetCore.WebUtilities;
using System.Globalization;
namespace HelloBlazorLocalization
{
public static class WebAssemblyHostExtensions
{
public async static Task SetDefaultCultureAsync(this ServiceProvider serviceProvider)
{
var navigationManager = serviceProvider.GetService<NavigationManager>();
var uri = navigationManager!.ToAbsoluteUri(navigationManager.Uri);
var queryStrings = QueryHelpers.ParseQuery(uri.Query);
var localStorage = serviceProvider.GetRequiredService<ILocalStorageService>();
if (queryStrings.TryGetValue("culture", out var selectedCulture))
{
await localStorage.SetItemAsStringAsync("culture", selectedCulture);
}
var cultureString = await localStorage.GetItemAsync<string>("culture");
CultureInfo cultureInfo;
if (!string.IsNullOrWhiteSpace(cultureString))
{
cultureInfo = new CultureInfo(cultureString);
}
else
{
cultureInfo = new CultureInfo("en-US");
}
CultureInfo.DefaultThreadCurrentCulture = cultureInfo;
CultureInfo.DefaultThreadCurrentUICulture = cultureInfo;
}
}
}
</code>
</pre>
Now, let's look at the Index.razor file where we repeat some of the code in the extension method shown above.
<pre>
<code class='hljs csharp'>
<b>Index.razor</b>
@page "/"
@using System.Globalization;
@inject NavigationManager NavigationManager
@inject Blazored.LocalStorage.ILocalStorageService LocalStorage
@inject IStringLocalizer<SharedResources> Localizer
<PageTitle>@Localizer["Home"]</PageTitle>
<h1>@Localizer["Home"]</h1>
@Localizer["HomeDescription"]
<SurveyPrompt Title="How is Blazor working for you?" />
@code {
protected override async Task OnParametersSetAsync()
{
var uri = NavigationManager.ToAbsoluteUri(NavigationManager.Uri);
var queryStrings = QueryHelpers.ParseQuery(uri.Query);
if (queryStrings.TryGetValue("culture", out var selectedCulture))
{
await LocalStorage.SetItemAsStringAsync("culture", selectedCulture);
}
else
{
selectedCulture = await LocalStorage.GetItemAsStringAsync("culture");
}
if (!string.IsNullOrWhiteSpace(selectedCulture))
{
var cultureInfo = new CultureInfo(selectedCulture);
CultureInfo.DefaultThreadCurrentCulture = cultureInfo;
CultureInfo.DefaultThreadCurrentUICulture = cultureInfo;
}
}
}
</code>
</pre>
To localize strings, we first inject the IStringLocalizer as shown in the razor file, and use the resource key to fetch the localized text (value). The keys and texts are set up in the SharedResources files.
This is done in the sample app in three files.
<ul>
<li>An empty class called SharedResources at the root level</li>
<li>Two resources files (.resx) called SharedResources.en.resx and SharedResources.no.resx</li>
</ul>
You can have multiple resource files in Blazor WASM. Note that in Program.cs we set the <em>ResourcesPath</em> to the sub folder Resources, where we put the .resx files. See the sample app for details (clone the GitHub repo).
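To illustrate what goes into these files, the data entries of a .resx file look like the sketch below. The keys match the ones used in Index.razor, while the values here are just example texts; Visual Studio also generates standard resheader / schema sections around the entries.
<pre>
<code class='hljs xml'>
<!-- Resources/SharedResources.en.resx (data entries only) -->
<data name="Home" xml:space="preserve">
  <value>Home</value>
</data>
<data name="HomeDescription" xml:space="preserve">
  <value>Welcome to your new localized app.</value>
</data>
</code>
</pre>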
Next up, let's look at the LanguagePicker.razor file that shows a language picker. The sample app has flag icons for the flags of all countries, so check out the folder flag-icons under the wwwroot folder in the sample app.
<pre>
<code class='hljs csharp'>
<b>LanguagePicker.razor</b>
@using Microsoft.AspNetCore.Localization
@using Microsoft.Extensions.Options
@using System.Globalization
@inject IOptions<RequestLocalizationOptions> LocalizationOptions
@inject Blazored.LocalStorage.ILocalStorageService LocalStorage
<div class="mt-3 mb-3 mx-5">
@foreach (var culture in LocalizationOptions.Value.SupportedCultures)
{
<a style="cursor:pointer" onclick="location.href = '/?culture=@culture.ToString()';" class="text-decoration-none">
<img style="width:20px" src="flag-icons/@(culture.Name).png" alt="@culture.Name" />
<span class="badge rounded-pill mx-1 border border-primary
@((culture.ToString() == CultureInfo.CurrentCulture.ToString() || culture.ToString() == _selectedCulture) ?
"btn btn-success" : "btn btn-info text-dark")">@culture</span>
</a> <br />
}
</div>
@code {
private string? _selectedCulture;
protected override async Task OnParametersSetAsync()
{
_selectedCulture = await LocalStorage.GetItemAsStringAsync("culture");
}
}
</code>
</pre>
Note that the Blazor WASM app must reload entirely after another language is chosen. Also note that you should set up multiple languages in your browser to get the expected results.
<img alt="" border="0" data-original-height="337" data-original-width="785" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD-6YwMqvdV9Xx-LTI2IXEYgXrTRnt0g_d7IYPKevwsEjBBNSK3pdtE9vZT3u0Qv8hVgg-hBUWNpAK-vOiTQrz6G9vYhuaud9OfVvBk4HGxIoLkwjBEbxhryH_-Fva_1HQ_4oVrbbQbgOUfeZ4RETYgiKu8EsAWllRaQvv9aioXh9uSAygxsYmBorDKmM/s1600/languagesettings.png"/>
You should have the supported languages set up in your browser. Localization might still work even if the browser's language settings do not include the specified languages, but if you do not see the expected results, check the language settings in your browser.
As can be seen, we use local storage to persist the selected language, which is displayed with a green button to indicate the selection. When the Blazor WASM app reloads, the selected language is fetched from local storage. This can be inspected under Application => Local Storage in the F12 developer tools, in Chrome for example, when running the app.
Blazor WASM supports a reduced set of localization functionality, compared to Blazor server side apps.
<pre>
A limited set of ASP.NET Core's localization features are supported:
✔️Supported: IStringLocalizer and IStringLocalizer<T> are supported in Blazor apps.
❌Not supported: IHtmlLocalizer, IViewLocalizer, and Data Annotations localization are ASP.NET Core MVC features and not supported in Blazor apps.
</pre>
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0tag:blogger.com,1999:blog-7240109143089619921.post-82734344382884187492023-07-07T15:45:00.017+02:002023-07-07T17:47:47.092+02:00Mocking Http Client used for Blazor apps using bUnitThis article will look at http client calls made by Blazor apps and how to test them using bUnit. First off, bUnit is a library for unit testing Blazor apps.
We will look at mocking http client calls in this article.
I have added a Github repo with the sample code in this article here :
<br />
<br />
<a href="https://github.com/toreaurstadboss/BlazorHttpClientMocking" target="_blank">https://github.com/toreaurstadboss/BlazorHttpClientMocking</a>
<br />
<b>Setting up the project Nuget package references of the test project - BlazorHttpClientMocking.Test</b>
<pre>
<code class='hljs csharp'>
<PackageReference Include="bunit" Version="1.21.9" />
<PackageReference Include="FluentAssertions" Version="6.11.0" />
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.5.0" />
<PackageReference Include="Moq" Version="4.18.4" />
<PackageReference Include="RichardSzalay.MockHttp" Version="6.0.0" />
<PackageReference Include="xunit" Version="2.4.2" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.4.5">
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
<PrivateAssets>all</PrivateAssets>
</PackageReference>
<PackageReference Include="coverlet.collector" Version="3.2.0">
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
<PrivateAssets>all</PrivateAssets>
</PackageReference>
</code>
</pre>
We will use the Nuget package <em>RichardSzalay.MockHttp</em> to do much of the http client mocking.
The following helper extension methods make it easier to mock http client calls.
<pre>
<code class='hljs csharp'>
<b>Helper extension methods for http client using bUnit - MockHttpClientBunitHelpers.cs</b>
using Bunit;
using Microsoft.Extensions.DependencyInjection;
using RichardSzalay.MockHttp;
using System.Net;
using System.Net.Http.Headers;
using System.Text.Json;
namespace BlazorHttpClientMocking.Test.Helpers
{
public static class MockHttpClientBunitHelpers
{
public static MockHttpMessageHandler AddMockHttpClient(this TestServiceProvider services, string baseAddress = @"http://localhost")
{
var mockHttpHandler = new MockHttpMessageHandler();
var httpClient = mockHttpHandler.ToHttpClient();
httpClient.BaseAddress = new Uri(baseAddress);
services.AddSingleton<HttpClient>(httpClient);
return mockHttpHandler;
}
public static T? FromResponse<T>(this HttpResponseMessage? response, JsonSerializerOptions? options = null)
{
if (response == null)
{
return default(T);
}
if (options == null)
{
options = new JsonSerializerOptions
{
PropertyNameCaseInsensitive = true
};
}
string responseString = response.Content.ReadAsStringAsync().Result;
var result = JsonSerializer.Deserialize<T>(responseString, options);
return result;
}
public static async Task<T?> FromResponseAsync<T>(this HttpResponseMessage? response, JsonSerializerOptions? options = null)
{
if (response == null)
{
return await Task.FromResult(default(T));
}
if (options == null)
{
options = new JsonSerializerOptions
{
PropertyNameCaseInsensitive = true
};
}
string responseString = await response.Content.ReadAsStringAsync();
var result = JsonSerializer.Deserialize<T>(responseString, options);
return result;
}
public static MockedRequest RespondJson<T>(this MockedRequest request, T content)
{
request.Respond(req =>
{
var response = new HttpResponseMessage(HttpStatusCode.OK);
response.Content = new StringContent(JsonSerializer.Serialize(content));
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
return response;
});
return request;
}
public static MockedRequest RespondJson<T>(this MockedRequest request, Func<T> contentProvider)
{
request.Respond(req =>
{
var response = new HttpResponseMessage(HttpStatusCode.OK);
response.Content = new StringContent(JsonSerializer.Serialize(contentProvider()));
response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
return response;
});
return request;
}
}
}
</code>
</pre>
The method <em>AddMockHttpClient</em>, an extension method on <em>TestServiceProvider</em>, registers the mocked client. In the code above we read the response into a string and deserialize it with System.Text.Json,
defaulting to case-insensitive property naming, since this is the System.Text.Json default in ASP.NET Core web apps, but not elsewhere, such as in test projects.
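This default is easy to demonstrate in isolation. A small sketch, showing that camelCase json properties only map onto PascalCase C# properties when <em>PropertyNameCaseInsensitive</em> is enabled (the Forecast class is just an example type):
<pre>
<code class='hljs csharp'>
using System;
using System.Text.Json;

string json = @"{""summary"":""Mild""}";

var strict = JsonSerializer.Deserialize<Forecast>(json);
Console.WriteLine(strict!.Summary ?? "(null)"); // prints (null) - matching is case sensitive by default

var relaxed = JsonSerializer.Deserialize<Forecast>(json,
    new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
Console.WriteLine(relaxed!.Summary); // prints Mild

public class Forecast { public string? Summary { get; set; } }
</code>
</pre>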
<pre>
<code class='csharp hljs'>
<b>Helper methods for serialization - SerializationHelpers.cs</b>
using System.Text.Json;
namespace BlazorHttpClientMocking.Test.Helpers
{
public static class SerializationHelpers
{
public static async Task<T?> DeserializeJsonAsync<T>(string path, JsonSerializerOptions? options = null)
{
if (options == null)
{
options = new JsonSerializerOptions
{
WriteIndented = true,
IncludeFields = true,
PropertyNameCaseInsensitive = true
};
}
using (Stream stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
{
if (File.Exists(path) && stream.Length > 0)
{
T? obj = await JsonSerializer.DeserializeAsync<T>(stream, options);
return obj;
}
return default(T);
}
}
}
}
</code>
</pre>
Let's look at a unit test which then sets up a mocked http client response that is used in the Blazor sample app on the FetchData page.
<pre>
<code class='csharp hljs'>
using BlazorHttpClientMocking.Test.Helpers;
using Bunit;
using FluentAssertions;
using Microsoft.Extensions.DependencyInjection;
using RichardSzalay.MockHttp;
using static BlazorHttpClientMocking.Pages.FetchData;
namespace BlazorHttpClientMocking.Test
{
public class FetchDataTests
{
[Fact]
public async Task FetchData_HttpClient_Request_SuccessResponse()
{
//Arrange
using var ctx = new TestContext();
var httpMock = ctx.Services.AddMockHttpClient();
string knownUrl = @"/sample-data/weather.json";
var sampleData = await SerializationHelpers.DeserializeJsonAsync<WeatherForecast[]>(knownUrl.TrimStart('/')); //trimming start of url since we need a physical path
httpMock.When(knownUrl).RespondJson(sampleData);
//Act
var httpClient = ctx.Services.BuildServiceProvider().GetService<HttpClient>();
var httpClientResponse = await httpClient!.GetAsync(knownUrl);
httpClientResponse.EnsureSuccessStatusCode();
var forecasts = await httpClientResponse.FromResponseAsync<WeatherForecast[]>();
//Assert
forecasts.Should().NotBeNull();
forecasts.Should().HaveCount(5);
}
}
}
</code>
</pre>
<br />
In the arrange part of the unit test above, we create a <em>TestContext</em> and add a mocked http client using the extension method shown earlier. We read the sample json data and set up the mocked response using the <em>When</em> method.
Remember to add a leading "/" to the path, as this is expected since the http client has a BaseAddress specified, defaulting to @"http://localhost".
<br />
<br />
We retrieve the http client via the Services collection on the TestContext, calling <em>BuildServiceProvider</em> and then <em>GetService</em> to get the mocked http client. The mocking must be set up via the <em>When</em> method before we retrieve the client. The mocked http client is registered as a singleton service here.
<br />
<br />
We can also do parameters in the mocking of http client calls.
<h4>Using parameters in http client calls</h4>
Let's first add parameter support to the FetchData razor page.
<b>Fetchdata.razor</b>
<pre>
<code class='hljs csharp'>
@page "/fetchdata/"
@page "/fetchdata/{id:int}"
<!-- down to code part in the razor page -->
@code {
internal WeatherForecast[]? forecasts;
[Parameter]
public int? Id { get; set; }
protected override async Task OnInitializedAsync()
{
forecasts = await Http.GetFromJsonAsync<WeatherForecast[]>($"sample-data/weather.json");
if (forecasts != null && Id >= 0 && Id < 5)
{
forecasts = forecasts.Skip(Id.Value).Take(1).ToArray();
}
}
public class WeatherForecast
{
public DateOnly Date { get; set; }
public int TemperatureC { get; set; }
public string? Summary { get; set; }
public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
}
}
</code>
</pre>
<br />
<br />
Let's now look at using parameters in mocked http client calls in another unit test.
<pre>
<code class='hljs csharp'>
[Fact]
public async Task FetchData_HttpClient_With_Parameter_Request_SuccessResponse()
{
//Arrange
using var ctx = new TestContext();
var httpMock = ctx.Services.AddMockHttpClient();
string knownUrl = @"/sample-data/weather.json/0";
string fileUrl = @"sample-data/weather.json";
        var sampleData = await SerializationHelpers.DeserializeJsonAsync<WeatherForecast[]>(fileUrl); // the file url is a relative physical path without a leading '/'
httpMock.When(knownUrl).RespondJson(sampleData);
//Act
var renderComponent = ctx.RenderComponent<FetchData>(p => p
.Add(fd => fd.Id, 0));
//Assert
renderComponent.Instance.forecasts.Should().NotBeNull();
renderComponent.Instance.forecasts.Should().HaveCount(1);
}
</code>
</pre>
Here we use bUnit's capability to render Blazor components via the RenderComponent method, and we set the Id parameter to the value <em>0</em>, which prepares our component
with the right forecasts; here only one forecast will be shown. We use the Instance property to inspect the forecasts field of the component.
<code>
internal WeatherForecast[]? forecasts;
</code>
So bUnit can be used both to mock http client calls and to render Blazor components, and it also supports parameterized calls against mocked http clients.
<br />
<br />
<img alt="" border="0" data-original-height="358" data-original-width="927" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgitZdOwKZXKi3eVOAfI77gO2PTeWfnDZreAxez_QEj9ZuKnJUX3bwiODY2jRWCgEt8jhO6Q0h7q3lJfZ4q9SnOFxoeHOkhRLarQjsZUbv3yJzeoxwLsWAgh6ev5_DmIyUcL4AE7ePeo4PAucY3dJQHr9CDLbgi81XAEPaT_hpUe3st-s-nz_h6bQ0vdtg/s1600/forecasts.png"/>
<br />
Finally, a tip concerning making your internal fields available to the test project. In the csproj file of the application we can set it up like in this example:
<pre>
<code class='hljs csharp'>
<ItemGroup>
<AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleToAttribute">
<_Parameter1>BlazorHttpClientMocking.Test</_Parameter1>
</AssemblyAttribute>
</ItemGroup>
</code>
</pre>
Here we declare that the test project can see the internals of the Blazor app project: internal methods, internal fields, internal classes and so on. This way you avoid having to change parameters or fields in your components from private to public; instead you can change the access modifier to internal so the tests can access those members.
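An equivalent alternative to the csproj setup is to declare the attribute directly in a C# file of the app project, for example:
<pre>
<code class='hljs csharp'>
// In any .cs file (often AssemblyInfo.cs) of the Blazor app project
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("BlazorHttpClientMocking.Test")]
</code>
</pre>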
Tore Aurstadhttp://www.blogger.com/profile/04987676273327898993noreply@blogger.com0