This article looks at primary constructors in C# 12, which ships as part of .NET 8.
Primary constructors can be tested on the following website, which offers a C# compiler supporting .NET 8.
Since .NET 8 is released around the 14th of November, 2023, roughly two weeks after this article was written, it
will be generally available very soon. You can also already use .NET 8 in preview versions of VS 2022.
Let's look at the usage of a primary constructor.
The following program defines one primary constructor. Note that the constructor parameters are declared right after the class name,
before the class body block starts.
Program.cs
using System;
public class Person(string firstName, string lastName)
{
    public override string ToString()
    {
        lastName += " (Primary constructor parameters might be mutated) ";
        return $"{lastName}, {firstName}";
    }
}

public class Program
{
    public static void Main()
    {
        var p = new Person("Tom", "Cruise");
        Console.WriteLine(p.ToString());
    }
}
Running the small program above gives this output:
Output
Cruise (Primary constructor parameters might be mutated) , Tom
If a class has a primary constructor, that constructor must be called. If you add another constructor, it must call the primary constructor,
for example a default (parameterless) constructor that chains to the primary constructor, as shown in the sketch below:
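Here is a minimal sketch of such a chaining constructor (the "Unknown" default values are only illustrative):

public class Person(string firstName, string lastName)
{
    // Additional constructors must chain to the primary constructor via this(...).
    public Person() : this("Unknown", "Unknown")
    {
    }

    public override string ToString() => $"{lastName}, {firstName}";
}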
Another example of primary constructors is shown below. We use a record called Distance and pass in the dx and dy components of a vector, then calculate its mathematical
magnitude and direction. We convert from radians to degrees using the relation π radians = 180 degrees known from trigonometry. If dy < 0, we are in quadrant 3 or 4 and we add 180 degrees.
using System;

var vector = new Distance(-2, -3);
Console.WriteLine($"The vector {vector} has a magnitude of {vector.Magnitude} with direction {vector.Direction}");

public record Distance(double dx, double dy)
{
    public double Magnitude { get; } = Math.Round(Math.Sqrt(dx * dx + dy * dy), 2);

    public double Direction { get; } =
        dy < 0
            ? 180 + Math.Round(Math.Atan(dy / dx) * 180 / Math.PI, 2)
            : Math.Round(Math.Atan(dy / dx) * 180 / Math.PI, 2);
}
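For the vector (-2, -3) used above, dy < 0, so 180 degrees are added to the arctangent. Running the program should then print roughly the following (note that the record's compiler-generated ToString also lists the Magnitude and Direction properties declared in the record body):

The vector Distance { dx = -2, dy = -3, Magnitude = 3.61, Direction = 236.31 } has a magnitude of 3.61 with direction 236.31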
This article presents code for extracting health information from arbitrary text using Azure Health Information extraction in Azure Cognitive Services. This technology uses NLP (natural language processing) combined with AI techniques.
A GitHub repo with the code for a running .NET MAUI Blazor demo in .NET 7 exists here:
A screenshot below shows how the demo works.
The demo uses Azure AI health information extraction to extract entities from the text, such as a person's age, gender, employment and medical history, and conditions such as diagnoses, procedures and so on.
The returned data is shown at the bottom of the demo; the raw data shows that it is JSON in FHIR format. Since we want the FHIR format, we must use the REST API to get this information.
Azure AI health information extraction also extracts relations, which connect the entities together for semantic analysis of the text. Links also exist for each entity for further reading.
These point to external systems such as SNOMED CT, with SNOMED codes for each entity.
Let's look at the source code for the demo next.
We define a named HTTP client in the MauiProgram.cs file, which starts the application. We could move this code into an extension method (a sketch of such an extension method is shown after the registration code below), but the code is kept simple in the demo.
MauiProgram.cs
var azureEndpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_ENDPOINT");
var azureKey = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_KEY");

if (string.IsNullOrWhiteSpace(azureEndpoint))
{
    throw new ArgumentNullException(nameof(azureEndpoint), "Missing system environment variable: AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_ENDPOINT");
}
if (string.IsNullOrWhiteSpace(azureKey))
{
    throw new ArgumentNullException(nameof(azureKey), "Missing system environment variable: AZURE_COGNITIVE_SERVICES_LANGUAGE_SERVICE_KEY");
}

var azureEndpointHost = new Uri(azureEndpoint);

builder.Services.AddHttpClient("Az", httpClient =>
{
    string baseUrl = azureEndpointHost.GetLeftPart(UriPartial.Authority); // https://stackoverflow.com/a/18708268/741368
    httpClient.BaseAddress = new Uri(baseUrl);
    // The Content-Type header is set per request on the HttpRequestMessage (see below), not on the named client.
    httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", azureKey);
});
The Content-Type header will instead be specified inside the HttpRequestMessage shown further below, not on this named client. As we can see, we must add both the endpoint base URL and the key in the Ocp-Apim-Subscription-Key HTTP header.
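As a side note, here is a minimal sketch of how that registration could be moved into an extension method (the class and method names here are illustrative, not from the demo repo):

using Microsoft.Extensions.DependencyInjection;

public static class AzureLanguageClientServiceCollectionExtensions
{
    // Wraps the named "Az" HttpClient registration shown above.
    public static IServiceCollection AddAzureLanguageHttpClient(this IServiceCollection services,
        string azureEndpoint, string azureKey)
    {
        var azureEndpointHost = new Uri(azureEndpoint);
        services.AddHttpClient("Az", httpClient =>
        {
            httpClient.BaseAddress = new Uri(azureEndpointHost.GetLeftPart(UriPartial.Authority));
            httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", azureKey);
        });
        return services;
    }
}

MauiProgram.cs would then call builder.Services.AddAzureLanguageHttpClient(azureEndpoint, azureKey); after the environment variable checks.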
Next, let's look at how to create a POST request to the language resource endpoint that offers the health text analysis.
HealthAnalyticsTextClientService.cs
using HealthTextAnalytics.Models;
using System.Diagnostics;
using System.Text;
using System.Text.Json.Nodes;

namespace HealthTextAnalytics.Util
{
    public class HealthAnalyticsTextClientService : IHealthAnalyticsTextClientService
    {
        private readonly IHttpClientFactory _httpClientFactory;
        private const int awaitTimeInMs = 500;
        private const int maxTimerWait = 10000;

        public HealthAnalyticsTextClientService(IHttpClientFactory httpClientFactory)
        {
            _httpClientFactory = httpClientFactory;
        }

        public async Task<HealthTextAnalyticsResponse> GetHealthTextAnalytics(string inputText)
        {
            var client = _httpClientFactory.CreateClient("Az");
            string requestBodyRaw = HealthAnalyticsTextHelper.CreateRequest(inputText);
            // https://learn.microsoft.com/en-us/azure/ai-services/language-service/text-analytics-for-health/how-to/call-api?tabs=ner
            var stopWatch = Stopwatch.StartNew();
            HttpRequestMessage request = CreateTextAnalyticsRequest(requestBodyRaw);
            var response = await client.SendAsync(request);
            var result = new HealthTextAnalyticsResponse();
            var timer = new PeriodicTimer(TimeSpan.FromMilliseconds(awaitTimeInMs));
            int timeAwaited = 0;

            while (await timer.WaitForNextTickAsync())
            {
                if (response.IsSuccessStatusCode)
                {
                    result.IsSearchPerformed = true;
                    var operationLocation = response.Headers.First(h => h.Key?.ToLower() == Constants.Constants.HttpHeaderOperationResultAvailable).Value.FirstOrDefault();
                    var resultFromHealthAnalysis = await client.GetAsync(operationLocation);
                    JsonNode resultFromService = await resultFromHealthAnalysis.GetJsonFromHttpResponse();
                    if (resultFromService.GetValue<string>("status") == "succeeded")
                    {
                        result.AnalysisResultRawJson = await resultFromHealthAnalysis.Content.ReadAsStringAsync();
                        result.ExecutionTimeInMilliseconds = stopWatch.ElapsedMilliseconds;
                        result.Entities.AddRange(HealthAnalyticsTextHelper.GetEntities(result.AnalysisResultRawJson));
                        result.CategorizedInputText = HealthAnalyticsTextHelper.GetCategorizedInputText(inputText, result.AnalysisResultRawJson);
                        break;
                    }
                }

                timeAwaited += 500;
                if (timeAwaited >= maxTimerWait)
                {
                    result.CategorizedInputText = $"ERR: Timeout. Operation to analyze input text using Azure HealthAnalytics language service timed out after waiting for {timeAwaited} ms.";
                    break;
                }
            }
            return result;
        }

        private static HttpRequestMessage CreateTextAnalyticsRequest(string requestBodyRaw)
        {
            var request = new HttpRequestMessage(HttpMethod.Post, Constants.Constants.AnalyzeTextEndpoint);
            request.Content = new StringContent(requestBodyRaw, Encoding.UTF8, "application/json"); // CONTENT-TYPE header
            return request;
        }
    }
}
The code uses some helper methods shown next. As the code above shows, we must poll the Azure service until we get a reply. We poll every 0.5 seconds, up to a maximum of 10 seconds. Typical requests take about 3-4 seconds to process. Longer input texts ('documents') would need more than 10 seconds of processing time, but for this demo it works great.
HealthAnalyticsTextHelper.CreateRequest method
public static string CreateRequest(string inputText)
{
    // Note: the id "1" here in the request is a 'local id' that must be unique per request. Only one text is supported in the
    // request generated; however, the service allows multiple documents and ids if necessary. In this demo, we only send in one text at a time.
    var request = new
    {
        analysisInput = new
        {
            documents = new[]
            {
                new { text = inputText, id = "1", language = "en" }
            }
        },
        tasks = new[]
        {
            new { id = "analyze 1", kind = "Healthcare", parameters = new { fhirVersion = "4.0.1" } }
        }
    };
    return JsonSerializer.Serialize(request, new JsonSerializerOptions { WriteIndented = true });
}
To create the body of the POST we use a template via an anonymous object, shown above, which is what the REST service expects. We could have multiple documents (that is, input texts) here; in this demo only one text/document is sent in. Note the use of id = "1" and "analyze 1" here.
We have some helper methods built on System.Text.Json to extract the JSON data from the response.
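The exact implementation lives in the repo; a minimal sketch of what GetJsonFromHttpResponse and the GetValue overload used above might look like, assuming they are simple extension methods over HttpResponseMessage and JsonNode:

using System.Text.Json.Nodes;

namespace HealthTextAnalytics.Util
{
    public static class JsonResponseExtensions
    {
        // Reads the HTTP response body and parses it into a JsonNode.
        public static async Task<JsonNode> GetJsonFromHttpResponse(this HttpResponseMessage response)
        {
            string json = await response.Content.ReadAsStringAsync();
            return JsonNode.Parse(json)!;
        }

        // Convenience accessor for a top-level property value, e.g. "status".
        public static T? GetValue<T>(this JsonNode node, string propertyName) =>
            node[propertyName] is JsonNode property ? property.GetValue<T>() : default;
    }
}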
I have added the domain classes for the service using the https://json2csharp.com/ website on the initial responses I got from the REST service using Postman. The REST API, that is, the JSON returned, might change in the future.
In that case, you might want to adjust the domain classes if the deserialization fails. It seems relatively stable though; I have tested the code for some weeks now.
Finally, the code that produces the categorized, colored text had to remove newlines to get correct indexing of the different entities found in the text. The sketch below shows one way to get rid of newlines in the input text.
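A minimal sketch of such a cleanup, assuming a one-for-one replacement of newline characters with spaces so the entity offsets keep lining up (the helper name is illustrative):

// Replace newline characters one-for-one with spaces so the character offsets
// returned by the service still match positions in the displayed text.
public static string RemoveNewLines(string inputText) =>
    inputText.Replace('\r', ' ').Replace('\n', ' ');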
I have added a demo .NET MAUI Blazor app that uses Image Analysis in Computer Vision in Azure Cognitive Services.
Note that Image Analysis is not available in all Azure data centers. For example, Norway East does not have this feature.
However, the North Europe Azure data center, located in Ireland, does have the feature.
A GitHub repo exists for this demo here:
A screenshot of this demo is shown below:
The demo allows you to upload a picture (supported formats are .jpeg, .jpg and .png, but the Azure AI Image Analyzer supports a lot of other image formats too).
The demo shows a preview of the selected image and, to the right, an image with bounding boxes around the objects detected in it. A list of tags extracted from the image is also shown. Raw data from the Azure Image Analyzer
service is shown in the text box area below the pictures, with the list of tags to the right.
The demo is written with .NET MAUI Blazor and .NET 6.
Let us look at some code for making this demo.
ImageSaveService.cs
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;

namespace Ocr.Handwriting.Azure.AI.Services
{
    public class ImageSaveService : IImageSaveService
    {
        public async Task<ImageSaveModel> SaveImage(IBrowserFile browserFile)
        {
            var buffers = new byte[browserFile.Size];
            var bytes = await browserFile.OpenReadStream(maxAllowedSize: 30 * 1024 * 1024).ReadAsync(buffers);
            string imageType = browserFile.ContentType;
            var basePath = FileSystem.Current.AppDataDirectory;
            var imageSaveModel = new ImageSaveModel
            {
                SavedFilePath = Path.Combine(basePath, $"{Guid.NewGuid().ToString("N")}-{browserFile.Name}"),
                PreviewImageUrl = $"data:{imageType};base64,{Convert.ToBase64String(buffers)}",
                FilePath = browserFile.Name,
                FileSize = bytes / 1024,
            };
            await File.WriteAllBytesAsync(imageSaveModel.SavedFilePath, buffers);
            return imageSaveModel;
        }
    }
}

// Interface defined inside IImageSaveService.cs, shown below

using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;

namespace Ocr.Handwriting.Azure.AI.Services
{
    public interface IImageSaveService
    {
        Task<ImageSaveModel> SaveImage(IBrowserFile browserFile);
    }
}
The ImageSaveService saves the uploaded image from the IBrowserFile and builds a base-64 string from the image bytes read via OpenReadStream.
This allows us to preview the uploaded image. The code also saves the image to the AppDataDirectory that MAUI supports, FileSystem.Current.AppDataDirectory.
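For completeness, a sketch of how these services might be registered with the MAUI app's DI container in MauiProgram.cs (the scoped lifetimes are an assumption, not taken from the repo):

// Hypothetical registrations in MauiProgram.cs
builder.Services.AddScoped<IImageSaveService, ImageSaveService>();
builder.Services.AddScoped<IImageAnalyzerService, ImageAnalyzerService>();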
Let's look at how to call the analysis service itself; it is actually quite straightforward.
ImageAnalyzerService.cs
using Azure;
using Azure.AI.Vision.Common;
using Azure.AI.Vision.ImageAnalysis;

namespace Image.Analyze.Azure.Ai.Lib
{
    public class ImageAnalyzerService : IImageAnalyzerService
    {
        public ImageAnalyzer CreateImageAnalyzer(string imageFile)
        {
            string key = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_KEY");
            string endpoint = Environment.GetEnvironmentVariable("AZURE_COGNITIVE_SERVICES_VISION_SECONDARY_ENDPOINT");
            var visionServiceOptions = new VisionServiceOptions(new Uri(endpoint), new AzureKeyCredential(key));
            using VisionSource visionSource = CreateVisionSource(imageFile);
            var analysisOptions = CreateImageAnalysisOptions();
            var analyzer = new ImageAnalyzer(visionServiceOptions, visionSource, analysisOptions);
            return analyzer;
        }

        private static VisionSource CreateVisionSource(string imageFile)
        {
            using var stream = File.OpenRead(imageFile);
            byte[] imageBuffer;
            using (var memoryStream = new MemoryStream())
            {
                stream.CopyTo(memoryStream);
                imageBuffer = memoryStream.ToArray();
            }
            using var imageSourceBuffer = new ImageSourceBuffer();
            imageSourceBuffer.GetWriter().Write(imageBuffer);
            return VisionSource.FromImageSourceBuffer(imageSourceBuffer);
        }

        private static ImageAnalysisOptions CreateImageAnalysisOptions() => new ImageAnalysisOptions
        {
            Language = "en",
            GenderNeutralCaption = false,
            Features =
                  ImageAnalysisFeature.CropSuggestions
                | ImageAnalysisFeature.Caption
                | ImageAnalysisFeature.DenseCaptions
                | ImageAnalysisFeature.Objects
                | ImageAnalysisFeature.People
                | ImageAnalysisFeature.Text
                | ImageAnalysisFeature.Tags
        };
    }
}

// Interface shown below

public interface IImageAnalyzerService
{
    ImageAnalyzer CreateImageAnalyzer(string imageFile);
}
We retrieve the environment variables here and create an ImageAnalyzer. We create a vision source from the saved picture we uploaded and open a stream to it using the File.OpenRead method in System.IO.
Since we saved the file in the AppData folder of the .NET MAUI app, we can read this file.
We set up the image analysis options and the vision service options, and then return the image analyzer.
Let's look at the code-behind of the Index.razor file that initializes the image analyzer and runs its Analyze method.
Index.razor.cs
using Azure.AI.Vision.ImageAnalysis;
using Image.Analyze.Azure.Ai.Extensions;
using Image.Analyze.Azure.Ai.Models;
using Microsoft.AspNetCore.Components.Forms;
using Microsoft.JSInterop;
using System.Text;

namespace Image.Analyze.Azure.Ai.Pages
{
    partial class Index
    {
        private IndexModel Model = new();

        // https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/how-to/call-analyze-image-40?WT.mc_id=twitter&pivots=programming-language-csharp
        private string ImageInfo = string.Empty;

        private async Task Submit()
        {
            if (Model.PreviewImageUrl == null || Model.SavedFilePath == null)
            {
                await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"You must select an image first before running Image Analysis. Supported formats are .jpeg, .jpg and .png", "Ok", "Cancel");
                return;
            }

            using var imageAnalyzer = ImageAnalyzerService.CreateImageAnalyzer(Model.SavedFilePath);
            ImageAnalysisResult analysisResult = await imageAnalyzer.AnalyzeAsync();

            if (analysisResult.Reason == ImageAnalysisResultReason.Analyzed)
            {
                Model.ImageAnalysisOutputText = analysisResult.OutputImageAnalysisResult();
                Model.Caption = $"{analysisResult.Caption.Content} Confidence: {analysisResult.Caption.Confidence.ToString("F2")}";
                Model.Tags = analysisResult.Tags.Select(t => $"{t.Name} (Confidence: {t.Confidence.ToString("F2")})").ToList();
                var jsonBboxes = analysisResult.GetBoundingBoxesJson();
                await JsRunTime.InvokeVoidAsync("LoadBoundingBoxes", jsonBboxes);
            }
            else
            {
                ImageInfo = $"The image analysis did not perform its analysis. Reason: {analysisResult.Reason}";
            }

            StateHasChanged(); // visual refresh here
        }

        private async Task CopyTextToClipboard()
        {
            await Clipboard.SetTextAsync(Model.ImageAnalysisOutputText);
            await Application.Current.MainPage.DisplayAlert($"MAUI Blazor Image Analyzer App", $"The copied text was put into the clipboard. Character length: {Model.ImageAnalysisOutputText?.Length}", "Ok", "Cancel");
        }

        private async Task OnInputFile(InputFileChangeEventArgs args)
        {
            var imageSaveModel = await ImageSaveService.SaveImage(args.File);
            Model = new IndexModel(imageSaveModel);
            await Application.Current.MainPage.DisplayAlert($"MAUI Blazor ImageAnalyzer App", $"Wrote file to location: {Model.SavedFilePath} Size is: {Model.FileSize} kB", "Ok", "Cancel");
        }
    }
}
In the code-behind above we have a submit handler called Submit. There we analyze the image and send the result both to the UI and to a client-side JavaScript method using IJSRuntime in .NET MAUI Blazor.
Let's look at the two helper methods of ImageAnalysisResult next.
ImageAnalysisResultExtensions.cs
using Azure.AI.Vision.ImageAnalysis;
using System.Text;

namespace Image.Analyze.Azure.Ai.Extensions
{
    public static class ImageAnalysisResultExtensions
    {
        public static string GetBoundingBoxesJson(this ImageAnalysisResult result)
        {
            var sb = new StringBuilder();
            sb.AppendLine(@"[");
            int objectIndex = 0;
            foreach (var detectedObject in result.Objects)
            {
                sb.Append($"{{ \"Name\": \"{detectedObject.Name}\", \"Y\": {detectedObject.BoundingBox.Y}, \"X\": {detectedObject.BoundingBox.X}, \"Height\": {detectedObject.BoundingBox.Height}, \"Width\": {detectedObject.BoundingBox.Width}, \"Confidence\": \"{detectedObject.Confidence:0.0000}\" }}");
                objectIndex++;
                if (objectIndex < result.Objects?.Count)
                {
                    sb.Append($",{Environment.NewLine}");
                }
                else
                {
                    sb.Append($"{Environment.NewLine}");
                }
            }
            sb.Remove(sb.Length - 2, 1); // remove trailing comma at the end
            sb.AppendLine(@"]");
            return sb.ToString();
        }

        public static string OutputImageAnalysisResult(this ImageAnalysisResult result)
        {
            var sb = new StringBuilder();
            if (result.Reason == ImageAnalysisResultReason.Analyzed)
            {
                sb.AppendLine($" Image height = {result.ImageHeight}");
                sb.AppendLine($" Image width = {result.ImageWidth}");
                sb.AppendLine($" Model version = {result.ModelVersion}");

                if (result.Caption != null)
                {
                    sb.AppendLine(" Caption:");
                    sb.AppendLine($" \"{result.Caption.Content}\", Confidence {result.Caption.Confidence:0.0000}");
                }

                if (result.DenseCaptions != null)
                {
                    sb.AppendLine(" Dense Captions:");
                    foreach (var caption in result.DenseCaptions)
                    {
                        sb.AppendLine($" \"{caption.Content}\", Bounding box {caption.BoundingBox}, Confidence {caption.Confidence:0.0000}");
                    }
                }

                if (result.Objects != null)
                {
                    sb.AppendLine(" Objects:");
                    foreach (var detectedObject in result.Objects)
                    {
                        sb.AppendLine($" \"{detectedObject.Name}\", Bounding box {detectedObject.BoundingBox}, Confidence {detectedObject.Confidence:0.0000}");
                    }
                }

                if (result.Tags != null)
                {
                    sb.AppendLine($" Tags:");
                    foreach (var tag in result.Tags)
                    {
                        sb.AppendLine($" \"{tag.Name}\", Confidence {tag.Confidence:0.0000}");
                    }
                }

                if (result.People != null)
                {
                    sb.AppendLine($" People:");
                    foreach (var person in result.People)
                    {
                        sb.AppendLine($" Bounding box {person.BoundingBox}, Confidence {person.Confidence:0.0000}");
                    }
                }

                if (result.CropSuggestions != null)
                {
                    sb.AppendLine($" Crop Suggestions:");
                    foreach (var cropSuggestion in result.CropSuggestions)
                    {
                        sb.AppendLine($" Aspect ratio {cropSuggestion.AspectRatio}: "
                            + $"Crop suggestion {cropSuggestion.BoundingBox}");
                    }
                }

                if (result.Text != null)
                {
                    sb.AppendLine($" Text:");
                    foreach (var line in result.Text.Lines)
                    {
                        string pointsToString = "{" + string.Join(',', line.BoundingPolygon.Select(point => point.ToString())) + "}";
                        sb.AppendLine($" Line: '{line.Content}', Bounding polygon {pointsToString}");

                        foreach (var word in line.Words)
                        {
                            pointsToString = "{" + string.Join(',', word.BoundingPolygon.Select(point => point.ToString())) + "}";
                            sb.AppendLine($" Word: '{word.Content}', Bounding polygon {pointsToString}, Confidence {word.Confidence:0.0000}");
                        }
                    }
                }

                var resultDetails = ImageAnalysisResultDetails.FromResult(result);
                sb.AppendLine($" Result details:");
                sb.AppendLine($" Image ID = {resultDetails.ImageId}");
                sb.AppendLine($" Result ID = {resultDetails.ResultId}");
                sb.AppendLine($" Connection URL = {resultDetails.ConnectionUrl}");
                sb.AppendLine($" JSON result = {resultDetails.JsonResult}");
            }
            else
            {
                var errorDetails = ImageAnalysisErrorDetails.FromResult(result);
                sb.AppendLine(" Analysis failed.");
                sb.AppendLine($" Error reason : {errorDetails.Reason}");
                sb.AppendLine($" Error code : {errorDetails.ErrorCode}");
                sb.AppendLine($" Error message: {errorDetails.Message}");
            }
            return sb.ToString();
        }
    }
}
Finally, let's look at the client-side JavaScript function that we call, sending the bounding boxes JSON to it so it can draw the boxes. We use the HTML5 canvas to show the picture and the bounding boxes of the objects found in the image.
index.html
<script type="text/javascript">

    var colorPalette = ["red", "yellow", "blue", "green", "fuchsia", "moccasin", "purple", "magenta", "aliceblue", "lightyellow", "lightgreen"];

    function rescaleCanvas() {
        var img = document.getElementById('PreviewImage');
        var canvas = document.getElementById('PreviewImageBbox');
        canvas.width = img.width;
        canvas.height = img.height;
    }

    function getColor() {
        var colorIndex = parseInt(Math.random() * 10);
        var color = colorPalette[colorIndex];
        return color;
    }

    function LoadBoundingBoxes(objectDescriptions) {
        if (objectDescriptions == null || objectDescriptions == false) {
            alert('did not find any objects in image. returning from calling load bounding boxes : ' + objectDescriptions);
            return;
        }
        var objectDesc = JSON.parse(objectDescriptions);
        //alert('calling load bounding boxes, starting analysis on clientside js : ' + objectDescriptions);
        rescaleCanvas();
        var canvas = document.getElementById('PreviewImageBbox');
        var img = document.getElementById('PreviewImage');
        var ctx = canvas.getContext('2d');
        ctx.drawImage(img, img.width, img.height);
        ctx.font = "10px Verdana";
        for (var i = 0; i < objectDesc.length; i++) {
            ctx.beginPath();
            ctx.strokeStyle = "black";
            ctx.lineWidth = 1;
            ctx.fillText(objectDesc[i].Name, objectDesc[i].X + objectDesc[i].Width / 2, objectDesc[i].Y + objectDesc[i].Height / 2);
            ctx.fillText("Confidence: " + objectDesc[i].Confidence, objectDesc[i].X + objectDesc[i].Width / 2, 10 + objectDesc[i].Y + objectDesc[i].Height / 2);
        }
        for (var i = 0; i < objectDesc.length; i++) {
            ctx.fillStyle = getColor();
            ctx.globalAlpha = 0.2;
            ctx.fillRect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
            ctx.lineWidth = 3;
            ctx.strokeStyle = "blue";
            ctx.rect(objectDesc[i].X, objectDesc[i].Y, objectDesc[i].Width, objectDesc[i].Height);
            ctx.fillStyle = "black";
            ctx.fillText("Color: " + getColor(), objectDesc[i].X + objectDesc[i].Width / 2, 20 + objectDesc[i].Y + objectDesc[i].Height / 2);
            ctx.stroke();
        }
        ctx.drawImage(img, 0, 0);
        console.log('got these object descriptions:');
        console.log(objectDescriptions);
    }

</script>
The index.html file in wwwroot is where we usually put extra CSS and JS for MAUI Blazor apps and Blazor apps. I have chosen to put the script directly into the index.html file and not in a separate .js file, but moving it out is an option if you want to tidy up a bit more.
So there you have it: we can relatively easily find objects in images using the Azure image analysis service in Azure Cognitive Services. We can get tags and captions for the image. In the demo, the caption is shown above the loaded picture.
The Azure Computer Vision service is really good since it has a massive training set and can recognize a lot of different objects for different usages.
As you can see in the source code, I keep the key and endpoint in environment variables that the code expects to exist. Never expose keys and endpoints in your source code.