Migrating an existing project to .NET Aspire - Part 2

This is part 2 of "Migrating an existing project to .NET Aspire". Make sure to read part 1 first!

This post shows how we are migrating the solution described in the previous post to .NET Aspire, and what Aspire brings to the table. The code for this post is available on GitHub.

Each migration step can be applied incrementally, allowing for staged migrations for larger projects. You could even skip a step if a feature does not appeal to you.

Adding an AppHost project

The first step of introducing Aspire is to add an AppHost project. The responsibility of this project is to orchestrate the distributed application. Consider it a replacement for the docker-compose.yml. In fact, at the end of this post, we will remove the dependency on docker-compose.yml completely.

In Visual Studio or Rider, add a new project of the type .NET Aspire App Host. This results in a project with a Program.cs file that contains the following:

var builder = DistributedApplication.CreateBuilder(args);

builder.Build().Run();

In this file, we will wire up the applications and their dependencies.

Make sure to update to Aspire 9.1.0 by changing both the Aspire.AppHost.Sdk version and the Aspire.Hosting.AppHost NuGet package to 9.1.0 in the .csproj file:

<Project Sdk="Microsoft.NET.Sdk">

    <Sdk Name="Aspire.AppHost.Sdk" Version="9.1.0"/>

    <PropertyGroup>
        ...
    </PropertyGroup>

    <ItemGroup>
        <PackageReference Include="Aspire.Hosting.AppHost" Version="9.1.0"/>
    </ItemGroup>

</Project>

Adding an Aspire Service Defaults project

We also need to add an Aspire Service Defaults project. This is a replacement for the Shared class library we introduced in the previous post. Its responsibility is to set up the cross-cutting concerns all services share, such as OpenTelemetry and health checks.

In Visual Studio or Rider, add a new project of the type .NET Aspire Service Defaults.

Instead of creating a new project, we could have modified the Shared class library to make it work with the Aspire tooling. To do so, we would need to add the <IsAspireSharedProject>true</IsAspireSharedProject> property in the Shared.csproj file, and ensure we had extension methods called AddServiceDefaults and MapDefaultEndpoints.
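If you take that route, the Aspire tooling essentially just needs the Shared project to expose the same two extension methods the Service Defaults template provides. A minimal sketch of their shapes (the class name is illustrative, and the bodies are placeholders; the real template fills them with the OpenTelemetry, health check, service discovery and resilience setup):

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.Hosting;

public static class SharedAspireExtensions
{
    public static TBuilder AddServiceDefaults<TBuilder>(this TBuilder builder)
        where TBuilder : IHostApplicationBuilder
    {
        // Register OpenTelemetry, default health checks, service discovery and HttpClient resilience here.
        return builder;
    }

    public static WebApplication MapDefaultEndpoints(this WebApplication app)
    {
        // Map the default health check endpoints (/health and /alive) here.
        return app;
    }
}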

Wiring up projects

Let's register our Api Gateway and Catalog Service projects with Aspire! In Visual Studio there is tooling to quickly do this. Open the context menu of the Api Gateway project and choose Add -> .NET Aspire Orchestrator Support...:

Add orchestrator support

This will:

  1. Add a project reference from the AppHost project to the Api Gateway.
<ProjectReference Include="..\ApiGateway\ApiGateway.csproj" />

While it looks ordinary, this is a special project reference that only provides metadata about the project. A source generator uses this metadata to allow strongly-typed referencing of the project in the next step (a simplified sketch of the generated type is shown just below this list).

  2. Update the AppHost Program.cs to add the Api Gateway project, using a generated type to reference the project:
  var builder = DistributedApplication.CreateBuilder(args);

+ builder.AddProject<Projects.ApiGateway>("apigateway");

  builder.Build().Run();
  3. Add a project reference from the Api Gateway to the ServiceDefaults project.
  4. Update the Api Gateway Program.cs to include calls to AddServiceDefaults and MapDefaultEndpoints, defined in the ServiceDefaults:
  var builder = WebApplication.CreateBuilder(args);

+ builder.AddServiceDefaults();

  builder.AddSharedServices();
  builder.Services.AddCatalogServices(builder.Configuration);
  
  var app = builder.Build();

+ app.MapDefaultEndpoints();

  app.AddSharedEndpoints();
  app.AddCatalogEndpoints();

Repeat this step for the Catalog Service project.
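For the curious, the type generated for the metadata-only project reference looks roughly like this (a simplified sketch; the real generated code contains the absolute path to the project file and some additional attributes):

namespace Projects
{
    // Simplified sketch of what the source generator emits for each referenced project.
    public class ApiGateway : Aspire.Hosting.IProjectMetadata
    {
        // The generator fills this with the absolute path to ApiGateway.csproj.
        public string ProjectPath => "<absolute path to ApiGateway.csproj>";
    }
}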

As far as I could see, Rider does not support Add .NET Aspire Orchestrator Support (yet). However, following the steps described above, this can be done manually without issues.

OpenTelemetry

The Service Defaults project contains code to set up OpenTelemetry, similar to what we already had. Therefore we disable the (conflicting) OpenTelemetry registration in the Shared project:

  namespace Shared.Extensions
  {
      public static class ServiceCollectionExtensions
      {
          public static void AddSharedServices(this IHostApplicationBuilder builder)
          {
              builder.Services.AddEndpointsApiExplorer();
              builder.Services.AddSwaggerGen();
  
              builder.Services.AddDaprClient();
              builder.Services.Configure<DaprOptions>(builder.Configuration.GetSection("Dapr"));
              builder.Services.AddTransient<IPubSub, PubSub>();
              builder.Services.AddTransient<IStateStore, StateStore>();
 
-             builder.AddOpenTelemetry();
          }
      }
  }

You might want to migrate your custom OpenTelemetry settings to the Service Defaults first.
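As an example, a custom ActivitySource that the Shared project used to register could be added to the tracing configuration inside the Service Defaults (a hedged sketch; the source name is illustrative):

builder.Services.AddOpenTelemetry()
    .WithTracing(tracing =>
    {
        // Instrumentation already registered by the Service Defaults template...
        // plus the custom source we previously registered in the Shared project:
        tracing.AddSource("Shared.Messaging");
    });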

After this, the project can be run with Docker Compose once again. To use the AppHost properly, we still have a few steps to go.

Running the AppHost

We can now run the AppHost project for the first time (by selecting it as the startup project in the IDE or by running dotnet run from the console). This will open up the Aspire Dashboard, which comes out of the box with Aspire and provides an integrated overview of our distributed application.

Resources running

The dashboard also provides tabs showing logs, traces and metrics. From the resources page, there are helpful links to the OpenAPI page of each service (the launch URL in the launch settings).

This dashboard is perfect for local development. It saves us from running a stack like Kibana/Elasticsearch/Tempo/Prometheus/Grafana just to have local telemetry.

In the overview we can see that the CatalogService is not healthy. When looking at the logs we can see why:

Catalog service crash log

This is because we don't provide the applications with any app settings like we do in the docker-compose setup. Let's fix this.

For the Catalog Service our configuration currently looks like:

  catalogservice:
    image: ${DOCKER_REGISTRY-}catalogservice
    ...
    environment:
      - Dapr__StateStore=state-store
      - Dapr__PubSub=pub-sub
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel:4317
      - OTEL_SERVICE_NAME=catalogservice

In Aspire, we can pass environment variables by using .WithEnvironment(...):

  builder.AddProject<Projects.CatalogService>("catalogservice")
+     .WithEnvironment("Dapr__PubSub", "pub-sub")
+     .WithEnvironment("Dapr__StateStore", "state-store");

Note that we only need to carry over the Dapr settings, as Aspire sets the OpenTelemetry-related settings for us. When we run the AppHost again, the Catalog Service starts without errors:

Resources running

Much better!

Note that our applications now run as processes instead of Docker containers. You can see this when opening the Task Manager:

Apps as processes

This removes the OS isolation Docker provides, and one could argue that the application tested locally is therefore not representative of the application deployed to a different environment. This is true, but for most .NET applications this should not matter, and this way of running locally noticeably improves startup times, as well as the time it takes to attach the debugger!

HttpClient boilerplate

In part one, we saw that we needed quite a bit of boilerplate to set up the HttpClient the Api Gateway uses to communicate with the Catalog Service. Let's look again at our implementation:

public static void AddCatalogServices(this IServiceCollection services, IConfiguration configuration)
{
    services.Configure<CatalogServiceOptions>(configuration.GetSection("CatalogService"));
    services.AddHttpClient<CatalogService>((p, httpClient) =>
        {
            var options = p.GetRequiredService<IOptionsMonitor<CatalogServiceOptions>>();
            httpClient.BaseAddress = new Uri(options.CurrentValue.BaseUrl);
        })
        .AddRetryPolicy();
}

We created an options class to bind the configuration, and used that to set the base URL. Additionally, we created an .AddRetryPolicy() extension method that sets up a custom Polly retry policy on the HttpClient.

The Aspire ServiceDefaults ship with functionality to streamline this out of the box:

public static TBuilder AddServiceDefaults<TBuilder>(this TBuilder builder) where TBuilder : IHostApplicationBuilder
{
    ...

    builder.Services.AddServiceDiscovery();
    builder.Services.ConfigureHttpClientDefaults(http =>
    {
        // Turn on resilience by default
        http.AddStandardResilienceHandler();

        // Turn on service discovery by default
        http.AddServiceDiscovery();
    });

    ...
}

None of this functionality is strictly specific to Aspire; you could use this code in non-Aspire projects. However, as we will see, Aspire works well with the conventions these features introduce.

One interesting thing to note is .ConfigureHttpClientDefaults(), which lets us configure settings that are applied to every HttpClient in the application.

Retry policies

Instead of using the custom .AddRetryPolicy() extension method to set up a retry policy on the HttpClient, we can use .AddStandardResilienceHandler(), part of Microsoft.Extensions.Http.Resilience. This configures the HttpClient with sensible defaults, including timeouts, a circuit breaker, and retries.
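The defaults can also be tweaked through an options callback if needed, for example to allow a few more retry attempts. A sketch with illustrative values (the option names come from Microsoft.Extensions.Http.Resilience):

builder.Services.ConfigureHttpClientDefaults(http =>
{
    http.AddStandardResilienceHandler(options =>
    {
        options.Retry.MaxRetryAttempts = 5;            // default is 3
        options.Retry.Delay = TimeSpan.FromSeconds(1); // base delay between attempts
    });
});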

Service Discovery

The Aspire ServiceDefaults also ship with functionality to streamline configuring the right base URL, referred to as service discovery. This is part of the NuGet package Microsoft.Extensions.ServiceDiscovery.

With service discovery, we can use fixed, logical names to configure the base URL of the HttpClient. At runtime, the service discovery mechanism translates these to the actual host names. To illustrate this, let's look at the final implementation of how we configure the HttpClient for the CatalogService:

public static void AddCatalogServices(this IServiceCollection services, IConfiguration configuration)
{
    services.AddHttpClient<CatalogService>(httpClient =>
      httpClient.BaseAddress = new Uri("http+https://catalogservice"));
}

The registration of the HttpClient is reduced to a single statement. We chose catalogservice as the logical name of the service we are trying to reach.

The service discovery mechanism looks at the .NET configuration to resolve the logical name. To make this compatible with the existing docker-compose setup, we need to change the setting name we provide from CatalogService__BaseUrl to services__catalogservice__http__0.

  # Docker-compose.yml

    apigateway:
      image: ${DOCKER_REGISTRY-}apigateway
      ...
      environment:
-       - CatalogService__BaseUrl=http://catalogservice:8080
+       - services__catalogservice__http__0=http://catalogservice:8080

You could skip this step if you want to move straight to Aspire.

With Aspire, instead of the seemingly magical services__{serviceName}__http__0, we can simply wire up services by using .WithReference({service}), as shown:

  var builder = DistributedApplication.CreateBuilder(args);
  
  var catalogService = builder
      .AddProject<Projects.CatalogService>("catalogservice")
      .WithEnvironment("Dapr__PubSub", "pub-sub")
      .WithEnvironment("Dapr__StateStore", "state-store");
  
  builder
      .AddProject<Projects.ApiGateway>("apigateway")
+     .WithReference(catalogService)
      .WithEnvironment("Dapr__PubSub", "pub-sub");
  
  builder.Build().Run();

When we now run the AppHost and execute GET /products/sfsd on the Api Gateway service, the following trace shows up:

Trace retry

We can see the Catalog Service is now called🎉. We can also observe the retry policy kicking in, caused by errors😅.

When we check the structured logs we see:

Logs retry

Opening one of the stack traces of the errors reveals:

Dapr.DaprException: State operation failed: the Dapr endpoint indicated a failure. See InnerException for details.
 ---> Grpc.Core.RpcException: Status(StatusCode="Unavailable", Detail="Error connecting to subchannel.", DebugException="System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it.")

Right, in our docker-compose setup, we spun up Dapr sidecar containers. We have not arranged anything related to Dapr with Aspire yet. Let's see how we can fix this.

Adding Dapr, RabbitMQ and MongoDB

Aspire provides Aspire.Hosting.* packages for various popular dependencies, including Dapr, RabbitMQ and MongoDB. Add the following packages to the AppHost project:

<PackageReference Include="Aspire.Hosting.MongoDB" Version="9.1.0" />
<PackageReference Include="Aspire.Hosting.RabbitMQ" Version="9.1.0" />
<PackageReference Include="CommunityToolkit.Aspire.Hosting.Dapr" Version="9.2.1" />

Up to Aspire 9.0, Dapr support was part of the Aspire repository. As of Aspire 9.1, it has moved to the .NET Aspire Community Toolkit, a set of community-maintained Aspire integrations.

RabbitMQ

We add a RabbitMQ resource as follows:

  var builder = DistributedApplication.CreateBuilder(args);
  
+ var rabbitMq = builder
+     .AddRabbitMQ("rabbitmq",
+         userName: builder.AddParameter("RabbitMqUserName", () => "guest"),
+         password: builder.AddParameter("RabbitMqPassword", () => "guest"),
+         port: 5672)
+     .WithLifetime(ContainerLifetime.Persistent);

This spins up RabbitMQ as a Docker container when we run the AppHost. Typically, you would let Aspire pick the ports as much as possible and let it configure the right port on resources that depend on RabbitMQ. In this case we unfortunately need to hardcode the username, password and port because of Dapr: we will refer to a pub-sub.yml shortly, and that file contains a fixed username/password/port.

I have attempted to dynamically generate the Dapr component configuration, but could not get it to work due to lifecycle issues: we need to generate the file after the endpoints of RabbitMQ/Mongo are determined, but before the Dapr sidecar is started. When I find a solution, I will share it.

The Aspire method .WithLifetime(ContainerLifetime.Persistent) is optional, but it ensures that when the AppHost is terminated, RabbitMQ will keep running, ready for the next run. Since starting up RabbitMQ can take ~20s, this is an effective way to shorten development cycles!

MongoDB

Adding MongoDB works similarly to adding RabbitMQ:

var mongo = builder
    .AddMongoDB("mongodb", 
        port: 27017,
        userName: builder.AddParameter("MongoUserName", () => "user"),
        password: builder.AddParameter("MongoPassword", () => "password"))
    .WithLifetime(ContainerLifetime.Persistent);

Dapr

When using Dapr with Aspire, the Dapr sidecars, like the .NET applications, are run as processes instead of Docker containers. Aspire uses the Dapr CLI for this, so installing the CLI on your host is required.

We register the Dapr components as Aspire resources. We could use in-memory versions of the pub-sub and state store components via builder.AddDaprPubSub("pub-sub") and builder.AddDaprStateStore("state-store"), but I prefer a solution using RabbitMQ and MongoDB for a more realistic experience, along with the ability to browse the queues/state if needed. Therefore, I added the components referring to the existing Dapr component files:

var pubSub = builder.AddDaprPubSub("pub-sub", new DaprComponentOptions
{
    LocalPath = Path.Combine("..", "..", "config", "dapr", "components", "pub-sub.yml")
}).WaitFor(rabbitMq);

var stateStore = builder.AddDaprStateStore("state-store", new DaprComponentOptions
{
    LocalPath = Path.Combine("..", "..", "config", "dapr", "components", "state-store.yml")
}).WaitFor(mongo);

By using .WaitFor, we make sure the sidecars that use the Dapr components wait to start until their dependencies have started (Dapr has a tendency to crash when its dependencies are not available on startup).

We need to slightly alter the Dapr component files to refer to localhost instead of the Docker DNS names of the dependencies:

    # pub-sub.yml
    version: v1
    metadata:
    - name: host
-     value: "amqp://guest:guest@rabbitmq:5672"
+     value: "amqp://guest:guest@localhost:5672"
    - name: durable
      value: true
    - name: exchangeKind
      ...

    # state-store.yml
    version: v1
    metadata:
      - name: host
-       value: "mongodb:27017"
+       value: "localhost:27017"
      - name: username
        value: "user"
      - name: password
        ...

Finally we change our applications to add a Dapr sidecar using .WithDaprSidecar(), and add the Dapr components we need for each service.

  var catalogService = builder
      .AddProject<Projects.CatalogService>("catalogservice")
-     .WithEnvironment("Dapr__PubSub", "pub-sub")
+     .WithEnvironment("Dapr__PubSub", pubSub.Resource.Name) // Retrieve the component name dynamically from the Aspire resource.
-     .WithEnvironment("Dapr__StateStore", "state-store")
+     .WithEnvironment("Dapr__StateStore", stateStore.Resource.Name)
+     .WithReference(pubSub)
+     .WithReference(stateStore)
+     .WithDaprSidecar();
  
  builder
      .AddProject<Projects.ApiGateway>("apigateway")
      .WithReference(catalogService)
-     .WithEnvironment("Dapr__PubSub", "pub-sub")
+     .WithEnvironment("Dapr__PubSub", pubSub.Resource.Name)
+     .WithReference(pubSub)
+     .WithDaprSidecar();

It is nice that we can use C# to pass the Dapr component names dynamically. It is not hard to imagine cleaning this up a bit further by creating an extension method that combines the .WithEnvironment("Dapr__...") and .WithReference(<daprComponent>) calls. I'm leaving this as an exercise for the reader.
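As a starting point, such an extension method could look roughly like this (a sketch; the method name is made up, and the generic constraints may need adjusting to the exact version of the Dapr hosting package):

public static class DaprComponentExtensions
{
    // Sets the Dapr__{settingName} environment variable to the component's name
    // and references the component so the sidecar loads it.
    public static IResourceBuilder<T> WithDaprComponent<T>(
        this IResourceBuilder<T> builder,
        string settingName,
        IResourceBuilder<IDaprComponentResource> component)
        where T : IResourceWithEnvironment
    {
        return builder
            .WithEnvironment($"Dapr__{settingName}", component.Resource.Name)
            .WithReference(component);
    }
}

// Usage: .WithDaprComponent("PubSub", pubSub).WithDaprComponent("StateStore", stateStore)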

After performing these steps, we can successfully add a product! The trace below shows that the product successfully went from the Api Gateway, via pub-sub (=RabbitMQ), through the Catalog Service, and ended up in the state store (=MongoDB).

Trace successful

You may also notice that Dapr shows up in the telemetry, meaning Aspire has automatically wired up the Dapr sidecars to emit OpenTelemetry as well!

Integration tests

Currently, we run integration tests by spinning up the environment using docker compose up, after which we run xUnit tests that perform requests against the Api Gateway using an HttpClient. Aspire provides tooling to improve this. If you know WebApplicationFactory, you will recognize the pattern, as it looks very similar.

We start by adding, in our IntegrationTests project, a reference to the Aspire.Hosting.Testing package and to our AppHost project:

  <ItemGroup>
+     <PackageReference Include="Aspire.Hosting.Testing" Version="9.1.0" />
      ...
  </ItemGroup>
  
  <ItemGroup>
+     <ProjectReference Include="..\AppHost\AppHost.csproj" />
  </ItemGroup>

This package adds a DistributedApplicationTestingBuilder, which we can use to spin up our AppHost when starting the tests. Let's create a helper method to do this:

private static async Task<DistributedApplication> InitializeDistributedApplicationAsync()
{
    var builder = await DistributedApplicationTestingBuilder
        .CreateAsync<Projects.AppHost>(
            args:
            [
                "DcpPublisher:RandomizePorts=false" // Important, so that the ports used for RabbitMQ and Mongo stay the same, required because of Dapr
            ]);

    var distributedApp = builder.Build();
    await distributedApp.StartAsync();

    var resources = distributedApp.Services.GetRequiredService<ResourceNotificationService>();
    await resources.WaitForResourceHealthyAsync("apigateway-dapr-cli") // Note we are waiting for the dapr sidecar to become active
        .WaitAsync(TimeSpan.FromSeconds(120));
    await resources.WaitForResourceHealthyAsync("catalogservice-dapr-cli")
        .WaitAsync(TimeSpan.FromSeconds(120));
    
    await Task.Delay(5_000); // Wait an additional 5 seconds, as when the Dapr sidecar is started, the pub-sub subscriptions are not created yet.

    return distributedApp;
}

This references our AppHost project and starts it. After starting, we wait until all relevant resources have started up successfully, as our tests need them. By default, Aspire randomizes ports to allow multiple AppHosts to be started in parallel, but we cannot use this due to the Dapr limitation mentioned earlier. It turns out we don't need it anyway: all our tests use the same AppHost, and we can run them against that single AppHost instance in parallel.

To achieve a singleton AppHost for multiple tests, we use xUnit's Collection Fixtures:

[CollectionDefinition(nameof(DistributedApplicationCollection))]
public class DistributedApplicationCollection : ICollectionFixture<DistributedApplicationFixture>
{
    
}

public class DistributedApplicationFixture
{
    private DistributedApplication? distributedApp;
    private readonly SemaphoreSlim initializationLock = new SemaphoreSlim(1);
    
    public async Task<DistributedApplication> GetDistributedApplicationAsync()
    {
        await initializationLock.WaitAsync();

        try
        {
            distributedApp ??= await InitializeDistributedApplicationAsync();
            return distributedApp;
        }
        finally
        {
            initializationLock.Release();
        }
    }

    private static async Task<DistributedApplication> InitializeDistributedApplicationAsync()
    {
        ...
    }
}

Now, when we call GetDistributedApplicationAsync(), the AppHost is created only once.

We can use this fixture in the updated IntegrationTest base class. Instead of simply creating an HttpClient referring to localhost:4000 (the Api Gateway), we let the DistributedApplication dynamically provide one based on the Aspire resource name (apigateway):

+ [Collection(nameof(DistributedApplicationCollection))]
- public class IntegrationTest
+ public class IntegrationTest(DistributedApplicationFixture distributedAppFixture)
  {
-     protected static HttpClient GetGatewayClient() => new HttpClient
-     {
-         BaseAddress = new Uri("http://localhost:4000")
-     };
  
+     protected async Task<HttpClient> GetGatewayClientAsync()
+     {
+         var distributedApp = await distributedAppFixture.GetDistributedApplicationAsync();
+         return distributedApp.CreateHttpClient("apigateway");
+     }
  
      ...
  }

In the test cases, it's only a matter of passing along the fixture and making the GetGatewayClient() call async:

- public class ProductAddedTests : IntegrationTest
+ public class ProductAddedTests(DistributedApplicationFixture distributedAppFixture) : IntegrationTest(distributedAppFixture)
  {
      [Fact]
      public async Task Publishing_product_added_event_eventually_makes_product_available_async()
      {
          var productId = Guid.NewGuid().ToString();
  
-         var gateway = GetGatewayClient();
+         var gateway = await GetGatewayClientAsync();
  
          // Act
          var response = await gateway.PostAsJsonAsync("products", 
  
          ... same test code

With the docker compose version we needed to run docker compose up before running the tests. This is no longer necessary!

Running the test gives the following output:

Test summary: total: 1; failed: 0; succeeded: 1; skipped: 0; duration: 15,8s

As we only have a single test, I temporarily changed the test to a Theory and made it run 100 times in parallel, to check if our fixture works. You will notice it does not take that much more time to run 100 tests in parallel:

Test summary: total: 100; failed: 0; succeeded: 100; skipped: 0; duration: 17,4s

DevOps pipeline

On the DevOps side, it gets simpler as well!

Testing

We no longer need to spin up the environment with docker compose up, as that now happens automatically when the tests start. However, we do need to install the Dapr CLI as a prerequisite for running the application.

    build_and_test:
      runs-on: ubuntu-latest
      steps:
        - name: Checkout code
          uses: actions/checkout@v2
  
-       - name: Docker compose up
-         run: |
-           docker compose -f docker-compose.yml up rabbitmq mongodb apigateway apigateway-dapr catalogservice catalogservice-dapr --wait
  
+       - name: Install Dapr CLI
+         run: |
+           wget -q https://raw.githubusercontent.com/dapr/cli/master/install/install.sh -O - | /bin/bash
+           dapr init
  
        - name: Setup .NET
          uses: actions/setup-dotnet@v1
          with:
            dotnet-version: '9.0.x' 
        
        - name: Run tests
          run: |
            dotnet test ./AspireMigrationExample.sln --configuration Release

Build and push

We used to build the application into an image using the Dockerfile, with the docker-compose.yml pointing to it. Since .NET 8, you can publish a .NET application directly as a container image and push it, as demonstrated below:

    steps:
      - name: Login to Docker Registry
        uses: docker/login-action@v3
        #...

-     - name: Build and push apigateway service
-       run: |
-          docker compose build apigateway
-          docker tag apigateway:latest ${{ vars.DOCKER_REGISTRY }}/apigateway:${VERSION}
-          docker push ${{ vars.DOCKER_REGISTRY }}/apigateway:${VERSION}
+     - name: Build and push apigateway service
+       working-directory: src/ApiGateway
+       run: |
+         dotnet publish --os linux --arch x64 /t:PublishContainer -p ContainerImageTag=${VERSION} -p ContainerRegistry=${{ vars.DOCKER_REGISTRY }}

Doing it this way has a few benefits:

  • It is quicker to build, as .NET employs some tricks to package the application directly into the image format without using Docker.
  • It does not use a Dockerfile, removing the need to maintain one, including keeping the base image up to date and maintaining the project references that optimize build speed, like COPY ["src/CatalogService/CatalogService.csproj", "src/CatalogService/"].

Cleaning up

At this point we are fully using Aspire, and have no dependencies anymore on:

  • The docker-compose.yml, docker-compose.dcproj, or any Dockerfile, as we use the AppHost for orchestration.
  • Telemetry-stack related configuration, such as configuration for OpenTelemetry, Prometheus, Tempo and Grafana.
  • The Dapr config.yaml, as that file was only used to configure the OpenTelemetry endpoint. Aspire configures this for us automatically.
  • Additionally, you could merge the Shared class library into the ServiceDefaults project, as they serve a similar purpose.

It's time to remove all these files!

Conclusion

As we've seen in this post, Aspire provides some nice 'glue' to make development of distributed applications a smoother and more standardized experience. It reduces the amount of boilerplate code, allowing us to focus more on writing business logic. I for one am a fan, and I would use it for its development experience alone: not only for distributed applications, but also for small apps (a back-end plus a single database). Aspire adds value in both cases:

  • Writing integration tests without a lot of boilerplate to spin up the environment is nice, as is being able to easily attach debuggers to both the test code and the application under test.
  • Wiring up services (hostnames / usernames / passwords / ports) in a strongly typed way feels clean.
  • Not having to maintain Dockerfiles (updating the .NET base image and the COPY .csproj lines).
  • The Aspire dashboard is great, removing the need to spin up another observability stack, and provides an integrated experience to find what you need during development.
  • The debugging experience feels quicker because of running the applications natively instead of in Docker containers.

If I had to name drawbacks of Aspire for the development scenario, I would say:

  • The Dapr experience does not feel as integrated as it could be. It would be great if the connection details of, for example, RabbitMQ could be passed dynamically to a Dapr pub-sub component to configure it, just like you can pass connection details to a .NET app.
  • You lose the isolated environments that Docker provides, which means that for Dapr, every developer on the team needs to install the Dapr CLI as a prerequisite.

Next up

Aspire does not stop at the development phase. We can use Aspire to quickly set up deployment to a growing set of platforms, like Azure and Kubernetes. In the next part, I will go into detail about deploying to Kubernetes, the challenges I encountered with Aspire, and how I resolved them. You can find the next part here!
