
Sunday, 16 February 2025

Outputting tags/objects using Azure AI

This article presents a way to extract tags for an image and output them to the console. Azure AI is used, more specifically the ImageAnalysisClient. The article shows how you can expose the data as an IAsyncEnumerable, so you can consume it with await foreach. I would recommend this approach for many services in Azure AI (and similar) where there is no out-of-the-box support for async enumerables: hide away the details in a helper extension method, as shown in this article.



  // Demo method: reads the Vision key/endpoint from environment variables and streams the image tags to the console
  public static async Task ExtractImageTags()
  {
      string visionApiKey = Environment.GetEnvironmentVariable("VISION_KEY")!;
      string visionApiEndpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT")!;

      var credentials = new AzureKeyCredential(visionApiKey);
      var serviceUri = new Uri(visionApiEndpoint);

      var imageAnalysisClient = new ImageAnalysisClient(serviceUri, credentials);
      await foreach (var tag in imageAnalysisClient.ExtractImageTagsAsync("Images/Store.png"))
      {
          Console.WriteLine(tag);
      }           
  }
  

The code creates an ImageAnalysisClient, defined in the Azure.AI.Vision.ImageAnalysis NuGet package. I use two environment variables here to store the key and the endpoint. Note that not all Azure AI features are available in all regions. If you just want to try out some Azure AI features, start with the East US region, as it will most likely have all the features you want to test; you can then switch to a more local region if you plan to run more workloads using Azure AI.
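
The null-forgiving operator (!) above hides the case where an environment variable is missing. If you want to fail fast with a clear message instead, a minimal sketch of a guard could look like this (ConfigGuard and its method are my own names, not part of any SDK):


public static class ConfigGuard
{
    // Hypothetical helper: throw a clear error instead of passing a null key or endpoint to the client
    public static string GetRequiredEnvironmentVariable(string name) =>
        Environment.GetEnvironmentVariable(name)
        ?? throw new InvalidOperationException($"Environment variable '{name}' is not set.");
}

// Usage inside ExtractImageTags:
// string visionApiKey = ConfigGuard.GetRequiredEnvironmentVariable("VISION_KEY");
// string visionApiEndpoint = ConfigGuard.GetRequiredEnvironmentVariable("VISION_ENDPOINT");
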

We then use the await foreach pattern to extract the image tags asynchronously. ExtractImageTagsAsync is a custom extension method I created so I can output the tags asynchronously using await foreach and also specify a wait time between each tag being output, defaulting to 200 milliseconds here.
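
The wait time can be overridden per call through the optional parameter. For example, to slow the output down to half a second between tags (same client and extension method as above, only the optional delay argument changes):


// Same extension method as above, with a custom delay between each output tag
await foreach (var tag in imageAnalysisClient.ExtractImageTagsAsync(
    "Images/Store.png", waitTimeInMsBetweenOutputTags: 500))
{
    Console.WriteLine(tag);
}
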

The extension method looks like this:


using Azure.AI.Vision.ImageAnalysis;

namespace UseAzureAIServicesFromNET.Vision;

public static class ImageAnalysisClientExtensions
{

    /// <summary>
    /// Extracts the tags for image at specified path, if existing.
    /// The results are returned as async enumerable of strings. 
    /// </summary>
    /// <param name="client">The ImageAnalysisClient to run the analysis with.</param>
    /// <param name="imagePath">Path to the image file to analyze.</param>
    /// <param name="waitTimeInMsBetweenOutputTags">Wait time in ms between each output tag. Defaults to 200 ms.</param>
    /// <returns>An async enumerable of strings with the image caption followed by the tags.</returns>
    public static async IAsyncEnumerable<string?> ExtractImageTagsAsync(this ImageAnalysisClient client, 
    	string imagePath, int waitTimeInMsBetweenOutputTags = 200)
    {
        if (!File.Exists(imagePath))
        {
            yield return default(string); //just return null if a file is not found at provided path
            yield break; //stop the enumeration, there is nothing to analyze
        }
        using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
        var analysisResult = 
        	await client.AnalyzeAsync(BinaryData.FromStream(imageStream), VisualFeatures.Tags | VisualFeatures.Caption);
        yield return $"Description: {analysisResult.Value.Caption.Text}";
        foreach (var tag in analysisResult.Value.Tags.Values)
        {
            yield return $"Tag: {tag.Name}, Confidence: {tag.Confidence:F2}";        
            await Task.Delay(waitTimeInMsBetweenOutputTags);
        }
    }

}


The console output of the tags looks like this:

In addition to tags, we can also output objects in the image in a very similar extension method:


/// <summary>
/// Extracts the objects for image at specified path, if existing.
/// The results are returned as async enumerable of strings. 
/// </summary>
/// <param name="client">The ImageAnalysisClient to run the analysis with.</param>
/// <param name="imagePath">Path to the image file to analyze.</param>
/// <param name="waitTimeInMsBetweenOutputTags">Wait time in ms between each output object. Defaults to 200 ms.</param>
/// <returns>An async enumerable of strings with the image caption followed by the detected objects.</returns>
public static async IAsyncEnumerable<string?> ExtractImageObjectsAsync(this ImageAnalysisClient client,
string imagePath, int waitTimeInMsBetweenOutputTags = 200)
{
    if (!File.Exists(imagePath))
    {
        yield return default(string); //just return null if a file is not found at provided path
        yield break; //stop the enumeration, there is nothing to analyze
    }
    using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
    var analysisResult =
    	await client.AnalyzeAsync(BinaryData.FromStream(imageStream), VisualFeatures.Objects | VisualFeatures.Caption);
    yield return $"Description: {analysisResult.Value.Caption.Text}";
    foreach (var objectInImage in analysisResult.Value.Objects.Values)
    {
            yield return $"""
Object tag: {objectInImage.Tags.FirstOrDefault()?.Name} Confidence: {objectInImage.Tags.FirstOrDefault()?.Confidence}, 
Position (bbox): {objectInImage.BoundingBox}
""";
        await Task.Delay(waitTimeInMsBetweenOutputTags);
    }
}           
             

The code is nearly identical: we set the VisualFeatures to extract from the image and read out the objects instead of the tags. The console output of the objects looks like this:
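
Finally, since the two methods differ only in which VisualFeatures they request and which result collection they read out, they could also be folded into a single extension method. Below is a minimal sketch of such a combined variant, assuming the Tags and Objects collections on the result are null when the corresponding feature was not requested (ExtractImageAnalysisAsync is my own name, not part of the SDK):


using Azure.AI.Vision.ImageAnalysis;

namespace UseAzureAIServicesFromNET.Vision;

public static class ImageAnalysisClientCombinedExtensions
{
    /// <summary>
    /// Sketch of a combined variant: the caller picks the visual features to extract,
    /// and both tags and objects are yielded when present in the result.
    /// </summary>
    public static async IAsyncEnumerable<string?> ExtractImageAnalysisAsync(this ImageAnalysisClient client,
        string imagePath, VisualFeatures features, int waitTimeInMsBetweenOutputs = 200)
    {
        if (!File.Exists(imagePath))
        {
            yield return default(string); //just return null if a file is not found at provided path
            yield break;
        }
        using FileStream imageStream = new FileStream(imagePath, FileMode.Open);
        var analysisResult =
            await client.AnalyzeAsync(BinaryData.FromStream(imageStream), features | VisualFeatures.Caption);
        yield return $"Description: {analysisResult.Value.Caption.Text}";
        if (analysisResult.Value.Tags is not null)
        {
            foreach (var tag in analysisResult.Value.Tags.Values)
            {
                yield return $"Tag: {tag.Name}, Confidence: {tag.Confidence:F2}";
                await Task.Delay(waitTimeInMsBetweenOutputs);
            }
        }
        if (analysisResult.Value.Objects is not null)
        {
            foreach (var objectInImage in analysisResult.Value.Objects.Values)
            {
                yield return $"Object tag: {objectInImage.Tags.FirstOrDefault()?.Name}, Position (bbox): {objectInImage.BoundingBox}";
                await Task.Delay(waitTimeInMsBetweenOutputs);
            }
        }
    }
}

// Usage, requesting both tags and objects in one call:
// await foreach (var line in imageAnalysisClient.ExtractImageAnalysisAsync(
//     "Images/Store.png", VisualFeatures.Tags | VisualFeatures.Objects))
// {
//     Console.WriteLine(line);
// }
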