Migrating an existing project to .NET Aspire - Part 3

This is the third and final part of "Migrating an existing project to .NET Aspire". Make sure to read part 1 and part 2 first!

In the previous parts we discussed how migrating to Aspire improves our development experience. We can go even further by using Aspire to deploy our application. This post will explain how to do exactly that using Kubernetes and Aspir8. The code for this post is available on GitHub.

Deployment manifest

As we have seen in the previous part, the logic of the AppHost project provides enough information to wire up and start applications and dependencies locally. This information can also be used to deploy the distributed application to a platform of choice. To achieve this, Aspire provides a way to generate a "deployment manifest", which is essentially a JSON document describing the resources defined in the AppHost. You can generate this manifest yourself by executing:

dotnet run --project src\AppHost\AppHost.csproj --publisher manifest --output-path manifest.json

When inspecting the file, you will see every resource, including parameters. As an example, this is what the definition of the API Gateway project and its Dapr sidecar looks like:

...
"apigateway": {
    "type": "project.v0",
    "path": "../ApiGateway/ApiGateway.csproj",
    "env": {
        "OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EXCEPTION_LOG_ATTRIBUTES": "true",
        "OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EVENT_LOG_ATTRIBUTES": "true",
        "OTEL_DOTNET_EXPERIMENTAL_OTLP_RETRY": "in_memory",
        "ASPNETCORE_FORWARDEDHEADERS_ENABLED": "true",
        "HTTP_PORTS": "{apigateway.bindings.http.targetPort}",
        "services__catalogservice__http__0": "{catalogservice.bindings.http.url}",
        "services__catalogservice__https__0": "{catalogservice.bindings.https.url}",
        "Dapr__PubSub": "pub-sub"
    },
    "bindings": {
        "http": {
            "scheme": "http",
            "protocol": "tcp",
            "transport": "http"
        },
        "https": {
            "scheme": "https",
            "protocol": "tcp",
            "transport": "http"
        }
    }
},
"apigateway-dapr": {
    "type": "dapr.v0",
    "dapr": {
        "application": "apigateway",
        "appId": "apigateway",
        "components": [
            "pub-sub"
        ]
    }
},
...

You can see that:

  • It refers to a .NET project: the path property points to the .csproj file.
  • The environment variables to pass in are listed in the env section.
  • It references the Catalog service via "services__catalogservice__http__0": "{catalogservice.bindings.http.url}".
  • It gets a Dapr sidecar: the dapr.v0 resource refers back to it with "application": "apigateway".

The idea is that, using this manifest, you are free to create a tool to deploy an application to the platform of your choice. In this post we will look at deploying to Kubernetes using a community-created tool called Aspir8.
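
As a toy illustration of how such a tool could start, here is a minimal sketch that parses the manifest generated above and lists every resource with its type (all resource definitions live under the top-level resources property):

using System.Text.Json;

// Parse the Aspire deployment manifest and enumerate its resources.
var manifest = JsonDocument.Parse(File.ReadAllText("manifest.json"));

foreach (var resource in manifest.RootElement.GetProperty("resources").EnumerateObject())
{
    // Prints e.g. "apigateway: project.v0" and "apigateway-dapr: dapr.v0".
    var type = resource.Value.GetProperty("type").GetString();
    Console.WriteLine($"{resource.Name}: {type}");
}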

Aspir8

Aspir8 provides tooling to translate the Aspire deployment manifest to Kubernetes resources. It is installed by executing:

dotnet tool install -g aspirate

To bootstrap your project with Aspir8, execute aspirate init in the src/AppHost directory:

[Image: Aspir8 init output]

As you can see, I chose to generate templates; this will become important later. Apart from the templates, this command resulted in an aspirate.json file containing:

{
  "TemplatePath": "C:\\......\\AspireMigrationExample\\templates",
  "ContainerSettings": {
    "Registry": "<your entered container registry>.azurecr.io",
    "RepositoryPrefix": "aspire",
    "Tags": [
      "latest"
    ],
    "Builder": "docker",
    "BuildArgs": null,
    "Context": null
  }
}

The template path is an absolute path and contains backslashes, which is not ideal when working in a team. I therefore changed it to ../../templates.

Build and push

The next step is to build and push the application artifacts, using:

aspirate build --non-interactive --disable-secrets

The log output provides a nice overview of what Aspir8 did:

[Image: Aspir8 build output]

You can see that it created and used the Aspire deployment manifest explained at the start of this post. Aspir8 uses the manifest to find:

  • .NET projects, which it publishes as Docker images
  • Any other Docker-based projects you may have defined (like Node.js projects and front-end projects), whose Dockerfiles it uses to build images

It then pushes the images to the configured container registry.
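
For a .NET project, this is roughly equivalent to publishing with the SDK's built-in container support and pushing to the registry configured in aspirate.json; a hedged sketch, with the registry and repository values assumed from the config shown earlier:

dotnet publish src\ApiGateway\ApiGateway.csproj /t:PublishContainer `
 -p:ContainerRegistry=<your entered container registry>.azurecr.io `
 -p:ContainerRepository=aspire/apigateway `
 -p:ContainerImageTags=latest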

Secrets

Note that I used the build command with --disable-secrets. Aspir8 provides a way to manage parameters containing secrets by saving them, in encrypted form, in aspire-state.json, which can then be checked into source control. Apart from finding it a bit strange to store secrets (albeit encrypted) in source control, I believe parameters like these are better supplied by the DevOps pipeline during each run. That also answers how you would vary the values for each DTAP environment.
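
For reference, parameters like these are declared as resources in the AppHost; a minimal sketch (the names are taken from this series, the exact wiring in your AppHost may differ):

var builder = DistributedApplication.CreateBuilder(args);

// Parameter resources; secret: true marks a parameter as sensitive.
var rabbitMqUserName = builder.AddParameter("RabbitMqUserName");
var rabbitMqPassword = builder.AddParameter("RabbitMqPassword", secret: true);

// Resources that need credentials consume the parameters.
var rabbitMq = builder.AddRabbitMQ("rabbitmq", rabbitMqUserName, rabbitMqPassword);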

Generating Kubernetes resources

After the build step we have Docker images available in our container registry. The next step is to create Kubernetes resources that define how we should run these images. Aspir8 provides the generate command for this:

aspirate generate -m ./manifest.json `
 --non-interactive `
 --include-dashboard `
 --image-pull-policy IfNotPresent `
 --skip-build `
 --parameter RabbitMqUserName=guest `
 --parameter RabbitMqPassword=guest `
 --parameter MongoUserName=user `
 --parameter MongoPassword=password

By passing -m ./manifest.json, we skip starting up the AppHost to create an Aspire deployment manifest, saving some time. By including --skip-build, we skip the publishing of Docker images, also saving time.

Finally we pass the required parameters (all parameter resources defined in the AppHost) using --parameter. Currently we pass the awful passwords we used throughout this blog series, but you would normally build up the command dynamically in your DevOps pipeline with strong secrets loaded from elsewhere.

Executing the command gives the following output:

[Image: Aspir8 generate output]

During the init step, we chose to generate a templates folder with a set of .hbs files. These are Handlebars templates; in this case they represent templated Kubernetes resources. The generate command takes these templates and renders them for each Aspire resource.
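
Conceptually, that rendering step boils down to compiling a template and applying the per-resource model to it. With Handlebars.Net this looks roughly as follows (a sketch; the model values here are made up for illustration):

using HandlebarsDotNet;

// Compile the deployment template once...
var source = File.ReadAllText("templates/deployment.hbs");
var render = Handlebars.Compile(source);

// ...then render it with the per-resource model to produce YAML.
var yaml = render(new
{
    name = "apigateway",
    hasAnyAnnotations = false,
    containerImage = "someRegistry.azurecr.io/aspire/apigateway:latest",
    imagePullPolicy = "IfNotPresent"
});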

The output is a set of Kustomize manifests in the aspirate-output folder:

[Image: generated Kustomize manifests in the aspirate-output folder]

To visualize the final Kubernetes resources, we can execute:

cd aspirate-output/apigateway
kubectl kustomize

This will print the Kubernetes resources in the console:

apiVersion: v1
kind: ConfigMap
metadata:
  name: apigateway-env
data:
  ASPNETCORE_FORWARDEDHEADERS_ENABLED: "true"
  ASPNETCORE_URLS: http://+:8080;
  Dapr__PubSub: pub-sub
  HTTP_PORTS: "8080"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EVENT_LOG_ATTRIBUTES: "true"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EXCEPTION_LOG_ATTRIBUTES: "true"
  OTEL_DOTNET_EXPERIMENTAL_OTLP_RETRY: in_memory
  OTEL_EXPORTER_OTLP_ENDPOINT: http://aspire-dashboard:18889
  OTEL_SERVICE_NAME: apigateway
  services__catalogservice__http__0: http://catalogservice:8080

---
apiVersion: v1
kind: Service

# Service definition ...

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    dapr.io/app-id: apigateway
    dapr.io/config: tracing
    dapr.io/enable-api-logging: "true"
    dapr.io/enabled: "true"
  labels:
    app: apigateway
  name: apigateway
spec:
  minReadySeconds: 60
  replicas: 1
  selector:
    matchLabels:
      app: apigateway
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        dapr.io/app-id: apigateway
        dapr.io/config: tracing
        dapr.io/enable-api-logging: "true"
        dapr.io/enabled: "true"
      labels:
        app: apigateway
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: apigateway-env
        image: someRegistry.azurecr.io/aspire/apigateway:latest
        imagePullPolicy: IfNotPresent
        name: apigateway
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 8443
          name: https
      terminationGracePeriodSeconds: 180

At a glance, the resources look reasonable, but for a production-worthy set-up, there are quite a few things to note:

Deploy all at once

Aspir8 provides a 'root' kustomization.yaml that allows you to deploy the whole distributed application at once, along with all its dependencies and even the Aspire dashboard. In a production setup, I would prefer a more granular approach, for a few reasons:

  • To allow for independent deployment of services, you would probably want to apply one service at a time.
  • For third-party dependencies like MongoDB and RabbitMQ, you probably want to opt for battle-tested Helm charts instead of these simple resource definitions.

This can be resolved by simply applying the desired subfolder, as shown below.
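
For example, to deploy only the API gateway (assuming the subfolder name matches the resource name):

kubectl apply -k aspirate-output/apigateway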

Dapr components

The Dapr component resources are hardcoded, unusable versions of pub-sub and state-store, ignoring the components we refer to in the AppHost project. There is an open issue on the Aspir8 GitHub regarding this.

The current workaround is to maintain your own set of Dapr components (tailored to run in Kubernetes) and apply those instead of the ones generated by Aspir8.
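
As an illustration, a Kubernetes-targeted pub/sub component backed by RabbitMQ could look like this (a sketch; the component name matches the pub-sub reference in the AppHost, while the connection details are assumed):

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pub-sub
spec:
  type: pubsub.rabbitmq
  version: v1
  metadata:
  # Assumed in-cluster hostname and credentials; in production, load
  # the credentials from a Kubernetes Secret instead.
  - name: connectionString
    value: "amqp://guest:guest@rabbitmq:5672"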

Duplicate annotations

The Dapr annotations are defined on both the Deployment resource and the template specification. Only the latter is truly necessary for Dapr to inject a sidecar. While this does not necessarily cause any issues, it does feel messy. I have not found a solution for this, as the templates refer to a single annotations list that is rendered twice:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{name}}
  labels:
    app: {{name}}
{{#if hasAnyAnnotations}}
  annotations:
  {{#each annotations}}
    {{@key}}: {{this}}
  {{/each}}
{{/if}}
spec:
  minReadySeconds: 60
  replicas: 1
  selector:
    matchLabels:
      app: {{name}}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{name}}
{{#if hasAnyAnnotations}}
      annotations:
      {{#each annotations}}
        {{@key}}: {{this}}
      {{/each}}
{{/if}}
    spec:
      ...

Secrets in ConfigMaps

The Aspir8 templates generate a Kubernetes Secret and/or ConfigMap based on the environment variables passed to a resource:

{{#if hasAnyEnv}}
configMapGenerator:
- name: {{name}}-env
  literals:
  {{#each env}}
    - {{@key}}={{this}}
  {{/each}}
    {{#if isProject}}
    - ASPNETCORE_URLS=http://+:8080;
    {{/if}}
{{/if}}

{{#if hasAnySecrets}}
secretGenerator:
- name: {{name}}-secrets
  envs:
  - .{{name}}.secrets
{{/if}}

I noticed that all my variables, including passwords, ended up in the ConfigMap instead of in a Secret. It turns out that Aspir8 currently only considers environment variables starting with ConnectionString, POSTGRES_PASSWORD or MSSQL_SA_PASSWORD to be secrets (see source code). This is quite limiting, and there is currently no clean workaround available.
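
In essence, the check amounts to something like the following (a paraphrase of the behaviour described above, not the literal Aspir8 source; the names are mine):

internal static class SecretDetection
{
    // An environment variable only ends up in the secretGenerator when
    // its key starts with one of these well-known prefixes.
    private static readonly string[] SecretPrefixes =
    [
        "ConnectionString",
        "POSTGRES_PASSWORD",
        "MSSQL_SA_PASSWORD",
    ];

    public static bool IsSecret(string key) =>
        SecretPrefixes.Any(prefix =>
            key.StartsWith(prefix, StringComparison.OrdinalIgnoreCase));
}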

There is an issue on the Aspir8 GitHub that aims to resolve this by looking at which parameters are defined as 'secret'.

Providing Kubernetes resource settings from Aspire

The default generated Deployment resource lacks several settings, and when using Aspir8, there is no straightforward way to pass them dynamically. These settings include:

  • The desired number of replicas
  • Resource limits
  • Health probes
  • Update strategy

There is a way to provide settings like these from the AppHost, but it has some limitations. To explain the solution, let's first go over the steps that Aspir8 executes to generate the resources:

  1. Aspir8 creates an in-memory model of the resource. Specifically, this is the model it uses:
public class KubernetesDeploymentData
{
    public string? Name {get; private set;}
    public string? Namespace {get; private set;}
    public Dictionary<string, string?> Env { get; private set; } = [];
    public Dictionary<string, string?> Secrets { get; private set; } = [];
    public Dictionary<string, string> Annotations { get; private set; } = [];
    public IReadOnlyCollection<Volume> Volumes { get; private set; } = [];
    public IReadOnlyCollection<Ports> Ports { get; private set; } = [];
    public IReadOnlyCollection<string> Manifests { get; private set; } = [];
    public IReadOnlyCollection<string> Args { get; private set; } = [];
    public bool? SecretsDisabled { get; private set; } = false;
    public bool? IsProject {get; private set;}
    public bool? WithPrivateRegistry { get; private set; } = false;
    public bool? WithDashboard { get; private set; } = false;
    public string? ContainerImage {get; private set;}
    public string? Entrypoint {get; private set;}
    public string? ImagePullPolicy {get; private set;}
    public string? ServiceType { get; private set; } = "ClusterIP";
    ...
    public bool HasPorts => Ports.Count > 0;
    public bool HasVolumes => Volumes.Count > 0;
    public bool HasAnySecrets => Secrets.Count > 0 && SecretsDisabled != true;
    public bool HasAnyAnnotations => Annotations.Count > 0;
    public bool HasArgs => Args.Count > 0;
    public bool HasAnyEnv => Env.Count > 0;
    ...
}
  2. Depending on the Aspire resource type, it selects which Handlebars templates to use. For .NET projects, these are deployment and service. Only these templates are rendered.
  3. It renders each of the templates selected in step 2, using the model from step 1.

To add more behaviour to the templates, we are constrained to the properties defined in this model. I opted to (spoiler alert) abuse the Env property, adding environment variables that in turn drive Kubernetes settings.

AppHost changes

In the AppHost, I wanted to specify Kubernetes settings similarly to how deployment to Azure Container Apps can be configured using PublishAsContainerApp. At the call site, this looks like:

  var catalogService = builder
      .AddProject<Projects.CatalogService>("catalogservice")
+     .PublishToKubernetes(options =>
+     {
+         options.Replicas = 1;
+         options.Resources.Cpu.Request = "100m";
+         options.Resources.Cpu.Limit = "2000m";
+         options.Resources.Memory.Request = "1Gi";
+         options.Resources.Memory.Limit = "1Gi";
+     })
      .WithEnvironment("Dapr__PubSub", pubSub.Resource.Name)
      .WithEnvironment("Dapr__StateStore", stateStore.Resource.Name)
      .WithReference(pubSub)
      .WithReference(stateStore)
      .WithDaprSidecar();
  
  builder
      .AddProject<Projects.ApiGateway>("apigateway")
+     .PublishToKubernetes(options =>
+     {
+         options.Replicas = 2;
+         options.Resources.Cpu.Request = "100m";
+         options.Resources.Cpu.Limit = "1000m";
+         options.Resources.Memory.Request = "512Mi";
+         options.Resources.Memory.Limit = "512Mi";
+     })
      .WithReference(catalogService)
      .WithEnvironment("Dapr__PubSub", pubSub.Resource.Name)
      .WithReference(pubSub)
      .WithDaprSidecar();

To implement this, I created the following extension method:

public static class ProjectResourceBuilderExtensions
{
    public static IResourceBuilder<ProjectResource> PublishToKubernetes(this IResourceBuilder<ProjectResource> project, Action<KubernetesOptions> configureOptions)
    {
        if (!project.ApplicationBuilder.ExecutionContext.IsPublishMode)
            return project;
        
        var options = new KubernetesOptions();
        configureOptions(options);

        project.WithEnvironment("ASPIRE_REPLICAS", options.Replicas.ToString());
        project.WithEnvironment("ASPIRE_MEMORY_REQUEST", options.Resources.Memory.Request);
        project.WithEnvironment("ASPIRE_MEMORY_LIMIT", options.Resources.Memory.Limit);
        project.WithEnvironment("ASPIRE_CPU_REQUEST", options.Resources.Cpu.Request);
        project.WithEnvironment("ASPIRE_CPU_LIMIT", options.Resources.Cpu.Request);
        
        return project;
    }
}

public class KubernetesOptions
{
    public int Replicas { get; set; } = 1;
    public KubernetesResources Resources { get; } = new KubernetesResources();
}

public class KubernetesResources
{
    public KubernetesResource Memory { get; } = new KubernetesResource();
    public KubernetesResource Cpu { get; } = new KubernetesResource();
}

public class KubernetesResource
{
    public string? Request { get; set; }
    public string? Limit { get; set; }
}

As you can see, it is essentially a wrapper around configuring environment variables on the resource.

Changing the template

These environment variables are then used in the Handlebars template:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: {{name}}
    labels:
      app: {{name}}
  {{#if hasAnyAnnotations}}
    annotations:
    {{#each annotations}}
      {{@key}}: {{this}}
    {{/each}}
  {{/if}}
  spec:
    minReadySeconds: 60
-   replicas: 1
+   replicas: {{env.ASPIRE_REPLICAS}}
    ...
    template:
      ...
      spec:
      {{#if withPrivateRegistry}}
        imagePullSecrets:
        - name: image-pull-secret
      {{/if}}
        containers:
        - name: {{name}}
          image: {{containerImage}}
          imagePullPolicy: {{imagePullPolicy}}
          ...
+         resources:
+           requests:
+             cpu: {{env.ASPIRE_CPU_REQUEST}}
+             memory: {{env.ASPIRE_MEMORY_REQUEST}}
+           limits:
+             cpu: {{env.ASPIRE_CPU_LIMIT}}
+             memory: {{env.ASPIRE_MEMORY_LIMIT}}
        terminationGracePeriodSeconds: 180

And sure enough, when looking at the output after re-executing aspirate generate, we see:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      dapr.io/app-id: apigateway
      dapr.io/config: tracing
      dapr.io/enable-api-logging: "true"
      dapr.io/enabled: "true"
    labels:
      app: apigateway
    name: apigateway
  spec:
    minReadySeconds: 60
-   replicas: 1
+   replicas: 2
    selector:
      matchLabels:
        app: apigateway
    strategy:
      type: Recreate
    template:
      metadata:
        annotations:
          dapr.io/app-id: apigateway
          dapr.io/config: tracing
          dapr.io/enable-api-logging: "true"
          dapr.io/enabled: "true"
        labels:
          app: apigateway
      spec:
        containers:
        - envFrom:
          - configMapRef:
              name: apigateway-env
          image: someRegistry.azurecr.io/aspire/apigateway:latest
          imagePullPolicy: IfNotPresent
          name: apigateway
          ports:
          - containerPort: 8080
            name: http
          - containerPort: 8443
            name: https
+         resources:
+           limits:
+             cpu: 1000m
+             memory: 512Mi
+           requests:
+             cpu: 100m
+             memory: 512Mi
        terminationGracePeriodSeconds: 180

Since the variables are defined as environment variables, they are also included in the ConfigMap, and are therefore passed on to the application as environment variables.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: apigateway-env
  data:
+   ASPIRE_CPU_LIMIT: 1000m
+   ASPIRE_CPU_REQUEST: 100m
+   ASPIRE_MEMORY_LIMIT: 512Mi
+   ASPIRE_MEMORY_REQUEST: 512Mi
+   ASPIRE_REPLICAS: "2"
    ASPNETCORE_FORWARDEDHEADERS_ENABLED: "true"
    ASPNETCORE_URLS: http://+:8080;
    Dapr__PubSub: pub-sub
    HTTP_PORTS: "8080"
    OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EVENT_LOG_ATTRIBUTES: "true"
    OTEL_DOTNET_EXPERIMENTAL_OTLP_EMIT_EXCEPTION_LOG_ATTRIBUTES: "true"
    OTEL_DOTNET_EXPERIMENTAL_OTLP_RETRY: in_memory
    OTEL_EXPORTER_OTLP_ENDPOINT: http://aspire-dashboard:18889
    OTEL_SERVICE_NAME: apigateway
    services__catalogservice__http__0: http://catalogservice:8080

I would have liked to exclude these variables by filtering out those starting with ASPIRE_. However, the Handlebars templates are quite limited: the default Handlebars configuration lacks basic string operations, and that default is what Aspir8 uses, unfortunately without a way to configure it. Having the extra environment variables passed to the application is therefore the best we can currently achieve.

It would be great if Aspir8 had native support for the most common Kubernetes settings (for example, by providing an extension method such as the one I showed). Failing that, it would already be helpful if Aspir8 offered some kind of metadata dictionary, allowing us to expand the Kubernetes resource templates ourselves.

Final thoughts

In conclusion, I like Aspire for development, but I think the scenario of deploying to Kubernetes with Aspir8 still needs to mature, given the limitations described in this post. For that reason, I would currently not use Aspire for deployment to Kubernetes.

The good news is that, for .NET 10, Microsoft is focusing on the deployment story. David Fowler mentioned, for example, that they are working with the creator of Aspir8 to explore how it can be better integrated into Aspire. I am looking forward to what comes out of that effort, and I am hopeful for a future with a nicely integrated deployment experience!
