Migrating an existing project to .NET Aspire - Part 1

Aspire provides tooling to simplify developing and deploying distributed applications. This series demonstrates how to apply it to an existing solution. It turned out to be easier than I anticipated!

This first post describes the solution we are going to migrate. It consists of several .NET microservices, Dapr as an abstraction layer, messaging using RabbitMQ, persistence through MongoDB, and observability using Open Telemetry. The full code is available on GitHub; the highlights are explained in this post.

The solution does not include authentication/authorization, unit tests, or deployment manifests like Helm charts.

Architecture

The diagram shown below gives an overview of what we are working with. This architecture will remain unchanged after migrating to Aspire.

graph LR;
U>External process]
subgraph AP[Api Gateway Pod]
  A("Api Gateway")
  AS{{"Gateway sidecar"}}
end
R("RabbitMQ")
subgraph CP[Catalog Service Pod]
  CS{{"Catalog Service sidecar"}}
  C("Catalog Service")
end
M("MongoDB")
O("Open Telemetry")

CS -- Save/Get state --> M
U -- POST /products --> A
U -- "GET /products/{id}" --> A
A -- ProductAddedEvent --> AS
AS --> R
R --> CS
CS -- ProductAddedEvent --> C
C -- Save/Get state --> CS
A -- "GET /products/{id}" --> C

This simple distributed application provides a way to add and retrieve products:

Adding products

Products can be added by calling POST /products on the API Gateway service. The gateway produces a ProductAddedEvent and publishes it via Dapr to RabbitMQ. The Catalog Service subscribes to the productAdded topic and processes the events asynchronously: each event is mapped to an entity and saved to a MongoDB container.

Retrieving products

Products can be retrieved by calling GET /products/{id} on the API Gateway service. This essentially forwards the call to the Catalog Service to retrieve the product synchronously.

Dapr

Dapr provides an abstraction layer over commonly used patterns in distributed applications. In this example, the following Dapr building blocks are used:

  • Pub-sub, providing an abstraction over messaging. In this case, we use RabbitMQ as a message broker implementation, but using Dapr we can switch to other implementations without changing the code.
  • State store, allowing services to save and retrieve schemaless data. In this case, we use MongoDB as the implementation.

Each service has its own Dapr instance. From the application's perspective, Dapr is addressed via localhost. Locally, Docker Compose achieves this by letting the Dapr container join the service container's network namespace, so both share the same network. When deployed to a Kubernetes cluster, addressing localhost is achieved using sidecars: the Dapr container is placed alongside the microservice in the same pod.
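
From the .NET side, this means the Dapr SDK simply talks to localhost. As a minimal sketch (assuming the standard Dapr.AspNetCore registration; the repository may wire this up slightly differently), registering the client looks like:

// Program.cs: register DaprClient in DI. By default it targets the sidecar on
// localhost (HTTP port 3500, gRPC port 50001), which works because of the shared
// network namespace described above.
builder.Services.AddDaprClient();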

Each Dapr instance is configured using component specifications, defined as YAML, in a format that looks suspiciously like Kubernetes resources (because that is exactly what they are when deployed to Kubernetes). These components define which implementations of the state store and pub-sub to use, along with the connection information for these services. When developing locally, these are attached as volumes to the Dapr containers in the docker-compose.yml. This results in the following definition for the Catalog Service Dapr sidecar:

catalogservice-dapr:
  container_name: catalogservice-dapr
  image: "daprio/daprd"
  command: [ "./daprd", "-app-id", "catalogservice", "-app-port", "8080", "-components-path", "/components",  "-config", "/configuration/config.yaml" ]
  
  # Mount the Dapr components to use
  volumes:
    - "./config/dapr/components:/components"
    - "./config/dapr/configuration:/configuration"
  depends_on:
    - catalogservice
    - rabbitmq

  # Use the same namespace to allow accessing this sidecar using 'localhost' from inside the catalog service.
  network_mode: "service:catalogservice"
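
The component files themselves are not shown in this post. To give an idea of the format, a RabbitMQ pub-sub component and a MongoDB state store component look roughly like the sketch below; the component names and connection values here are illustrative, not copied from the repository:

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub            # the name the application refers to when publishing/subscribing
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
    - name: connectionString
      value: "amqp://guest:guest@rabbitmq:5672"
---
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore        # the name the application refers to when saving/getting state
spec:
  type: state.mongodb
  version: v1
  metadata:
    - name: host
      value: "mongodb:27017"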

Open Telemetry

Open Telemetry has become the de facto standard for monitoring applications. It provides a way to centralize logs, traces and metrics from all components used in the solution in a standardized format. This includes the .NET microservices, Dapr sidecars, RabbitMQ and MongoDB.

To view the Open Telemetry information locally, the stack is built with several third-party components, running as containers. See the docker-compose.yml for more details.
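
For the .NET services themselves, pointing the OpenTelemetry exporters at the collector typically only requires the standard OTLP environment variables. A minimal sketch in docker-compose.yml, assuming a collector service named otel-collector (the actual service names and ports in the repository may differ):

apigateway:
  environment:
    # Standard OpenTelemetry environment variables picked up by the .NET OTLP exporter
    - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
    - OTEL_SERVICE_NAME=apigateway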

Logging

Elasticsearch is used as storage, with Kibana as the UI (see the Kibana logs screenshot).

Tracing

Tempo is used for storing tracing information, with Grafana as the UI. The Tempo traces screenshot shows what a trace of posting a new product looks like.

Metrics

Prometheus is used to store metrics, with Grafana as the UI. Using the ASP.NET Core dashboard, these can be displayed nicely (see the Prometheus metrics screenshot).

.NET Microservices

The API Gateway service and Catalog Service are .NET 9 ASP.NET Core minimal APIs. Feature folders are used to separate functionality. The applications are packaged as Docker containers using a Dockerfile defined in each project; this Dockerfile was generated by Visual Studio.

Shared class library

A Shared class library project is used to share common functionality. This includes:

  • IStateStore and IPubSub, which are thin wrappers around the Dapr SDK, allowing for easier unit testing (sketched below, after this list).
  • Integration events spanning multiple services, in this case just the ProductAddedEvent. When the solution grows larger it would be better to locate these in separate class libraries tied to the service that 'owns' the event (usually the service that publishes the event).
  • Startup extension methods, to register common services to DI, as well as setting up cross-cutting concerns like Open Telemetry and Open API endpoints.
  • A way to publish/subscribe to integration events in a 'strongly typed' manner, explained more in-depth below.
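
A minimal sketch of the IStateStore and IPubSub interfaces, inferred from how they are used later in this post (the exact signatures in the repository may differ):

public interface IPubSub
{
    // Publishes an integration event to the topic defined by the event type.
    Task PublishEventAsync<TEvent>(TEvent @event) where TEvent : IEvent;
}

public interface IStateStore
{
    // Saves a value under the given key in the configured state store.
    Task SaveStateAsync<TValue>(string key, TValue value);

    // Retrieves a value by key, or null when the key does not exist.
    Task<TValue?> GetStateAsync<TValue>(string key);
}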

Strongly typed topics using IEvent

Usually when working with events in Dapr, a topic name must be passed during publishing/subscribing, which opens up the opportunity for errors. A clean solution I came up with was to add the following interface, which all integration events implement:

public interface IEvent
{
    public static abstract string Topic { get; }
}

This interface uses a public static abstract member, available since C# 11 (.NET 7). This allows accessing the Topic property without requiring an instance of the type.

Each integration event then defines the topic name as follows:

public record ProductAddedEvent(string Id, string Description, decimal Price) : IEvent
{
    public static string Topic => "productAdded";
}

This way, when an event is published, we can derive the topic name from the generic type parameter instead of having to pass it manually:

class PubSub(DaprClient daprClient, IOptionsMonitor<DaprOptions> options, ILogger<PubSub> logger) : IPubSub
{
    public async Task PublishEventAsync<TEvent>(TEvent @event) where TEvent : IEvent
    {
        var pubSubName = options.CurrentValue.PubSub;
        await daprClient.PublishEventAsync(pubSubName, TEvent.Topic, @event); // Using TEvent.Topic to get the topic name.
        logger.LogInformation("Published to topic '{Topic}' using pubsub '{PubSub}'", TEvent.Topic, pubSubName);
    }
}

A similar tactic is used when subscribing to events:

public static RouteHandlerBuilder MapEventHandler<TEvent>(this IEndpointRouteBuilder builder, Delegate requestDelegate) where TEvent : IEvent
{
    var daprOptions = ...;

    return builder
        .MapPost($"events/{TEvent.Topic}", requestDelegate)
        .WithTopic(daprOptions.PubSub, TEvent.Topic, ...);
}

API Gateway

The API Gateway defines endpoints to add and retrieve products. Adding products is defined as:

var group = endpoints.MapGroup("products");
group.MapPost("", async (ProductAddedEvent @event, IPubSub pubSub) =>
{
    await pubSub.PublishEventAsync(@event);
});

This uses the aforementioned IEvent trick to publish to the right topic.

Retrieving products is defined as follows:

var group = endpoints.MapGroup("products");
group.MapGet("{Id}", async (string id, CatalogService service) =>
{
    // TODO better error handling, as 404's get translated to 500's because of unhandled exceptions.
    return await service.GetProductAsync(id);
});

It uses a CatalogService to invoke the Catalog Service synchronously:

public class CatalogService(HttpClient httpClient)
{
    public async Task<JsonElement> GetProductAsync(string id)
    {
        return await httpClient.GetFromJsonAsync<JsonElement>($"products/{id}");
    }
}

The HttpClient is configured using IHttpClientFactory and typed HttpClients, using the Options pattern to retrieve the right base URL:

public static void AddCatalogServices(this IServiceCollection services, IConfiguration configuration)
{
    services.Configure<CatalogServiceOptions>(configuration.GetSection("CatalogService"));
    services.AddHttpClient<CatalogService>((p, httpClient) =>
        {
            var options = p.GetRequiredService<IOptionsMonitor<CatalogServiceOptions>>();
            httpClient.BaseAddress = new Uri(options.CurrentValue.BaseUrl);
        })
        .AddRetryPolicy();
}
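
The CatalogServiceOptions class itself is not shown above; it is just a small options class bound to the CatalogService configuration section, roughly:

public class CatalogServiceOptions
{
    // Base URL of the Catalog Service, e.g. http://catalogservice:8080 when running in Docker Compose.
    public string BaseUrl { get; set; } = string.Empty;
}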

The AddRetryPolicy() method is an extension method defined in the Shared project to apply a Polly retry policy to requests performed with the HttpClient:

public static IHttpClientBuilder AddRetryPolicy(this IHttpClientBuilder builder)
{
    var policy = HttpPolicyExtensions
        .HandleTransientHttpError()
        .WaitAndRetryAsync(
            retryCount: 5,
            sleepDurationProvider: (retryAttempt) => TimeSpan.FromSeconds(Math.Pow(2, retryAttempt)));

    return builder.AddPolicyHandler(policy);
}

The base URL to use is provided via an environment variable in the docker-compose.yml:

apigateway:
  container_name: apigateway
  image: ${DOCKER_REGISTRY-}apigateway
  ...
  environment:
    - CatalogService__BaseUrl=http://catalogservice:8080
    ...
  networks:
    - app-network

Because both services are attached to the same app-network Docker network, the Catalog Service can be reached via its catalogservice hostname.

This is quite a bit of ceremony, which we will be able to improve using Aspire.

Catalog Service

The Catalog Service defines an event handler (using the previously defined MapEventHandler extension method). It maps the event to an entity and saves it in the state store using Dapr.

app.MapEventHandler<ProductAddedEvent>(async (ProductAddedEvent @event, IStateStore stateStore) =>
{
    // Some validation

    var entity = @event.ToEntity();
    await stateStore.SaveStateAsync($"product_{@event.Id}", entity);
});

Products can be retrieved via a GET endpoint, which fetches the product from the state store.

app.MapGet("products/{Id}", async (string id, IStateStore stateStore) =>
{
    var product = await stateStore.GetStateAsync<ProductEntity>($"product_{id}");

    return product == null
        ? Results.NotFound($"A product with id '{id}' was not found.")
        : Results.Ok(product);
});

Integration tests

The solution contains an example integration test to test the distributed application end-to-end. It requires the solution to be started with docker compose up. The test then essentially posts a product to the API Gateway, and asserts that this product can be retrieved from the API Gateway some time later.

The test is defined as an xUnit test:

[Fact]
public async Task Publishing_product_added_event_eventually_makes_product_available_async()
{
    // Create a unique product id, so that tests and/or test runs don't interfere with each other
    var productId = Guid.NewGuid().ToString();
    var gateway = GetGatewayClient();

    // Act
    var response = await gateway.PostAsJsonAsync("products", new
    {
        id = productId,
        description = "This is a product description",
        price = 1.24
    });

    response.EnsureSuccessStatusCode();

    // Assert

    // Try retrieving the product until it is available
    var product = await PollGatewayUntilAsync<Product>(p => true, $"products/{productId}");
    Assert.Equal(
        new Product(productId, "This is a product description", 1.24M),
        product);
}

protected static HttpClient GetGatewayClient() => new HttpClient
{
    BaseAddress = new Uri("http://localhost:4000")
};
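
The PollGatewayUntilAsync helper is not shown above. A minimal sketch of such a polling helper, assuming System.Net.Http.Json for deserialization (the actual implementation in the repository may differ):

protected static async Task<T> PollGatewayUntilAsync<T>(
    Func<T, bool> predicate, string path, TimeSpan? timeout = null)
{
    var client = GetGatewayClient();
    var deadline = DateTime.UtcNow + (timeout ?? TimeSpan.FromSeconds(30));

    while (DateTime.UtcNow < deadline)
    {
        // The product may not be available yet, as the event is processed asynchronously.
        var response = await client.GetAsync(path);
        if (response.IsSuccessStatusCode)
        {
            var value = await response.Content.ReadFromJsonAsync<T>();
            if (value is not null && predicate(value))
            {
                return value;
            }
        }

        await Task.Delay(TimeSpan.FromMilliseconds(500));
    }

    throw new TimeoutException($"No matching result for '{path}' within the timeout.");
}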

The test:

  1. Posts a product with a unique id to the API Gateway.
  2. Polls the GET products/{id} endpoint until the product is found. This has a timeout to prevent tests running indefinitely when something goes wrong.
  3. The information of the retrieved product is asserted. While polling feels suboptimal, it is required because of the asynchronous processing of the events.

This results in a simple and effective way to test the functionality. By making sure each test uses unique IDs, multiple tests can easily run in parallel.

DevOps pipeline

A GitHub Actions workflow is defined as an example of building and deploying the application.

Test

In this step, we run the tests. As a prerequisite for the integration tests, the solution must be running. That's why we first run:

- name: Docker compose up
  run: |
    docker compose -f docker-compose.yml up --wait

Then we run the tests, first installing .NET 9 as a dependency:

- name: Setup .NET
  uses: actions/setup-dotnet@v1
  with:
    dotnet-version: '9.0.x' 

- name: Run tests
  run: |
    dotnet test ./AspireMigrationExample.sln --configuration Release

Build and push images

Building and pushing the images is done by using the Docker build, tag and push commands:

- name: Build and push apigateway service
  run: |
    docker compose build apigateway
    docker tag apigateway:latest ${{ vars.DOCKER_REGISTRY }}/apigateway:${VERSION}
    docker push ${{ vars.DOCKER_REGISTRY }}/apigateway:${VERSION}

This builds the Dockerfiles defined in each project and pushes the resulting images to the registry.

Conclusion

Even though our example application is quite simple in terms of functionality, you can see that a substantial amount of boilerplate is required to get everything right.

In the next part you will see how Aspire can simplify this. Stay tuned!
