Preview Environments ApplicationSet Architecture

Status: ✅ Using Flat ApplicationSet (2026-01-18)

Executive Summary

Preview environments use a single flat ApplicationSet (syrf-previews.yaml) that directly generates all applications (infrastructure + services) for each PR. This approach was chosen over App-of-Apps for its simplicity and debuggability.

Document                        Description
Feature Brief                   High-level feature overview
Implementation Spec             Detailed implementation specification
Edge Case Analysis              Analysis of edge cases and scenarios
Preview Infrastructure README   Helm chart documentation

Architecture Decision: Flat ApplicationSet vs App-of-Apps

Decision: Use Flat ApplicationSet

We evaluated an App-of-Apps pattern (parent application with nested ApplicationSet) but decided against it.

Why App-of-Apps Was Rejected

  1. No UI Grouping Benefit: All applications still appear on the main ArgoCD screen regardless of hierarchy
  2. Added Complexity: Nested ApplicationSets require template escaping (Helm + ApplicationSet both use {{ }})
  3. Extra Chart to Maintain: Would require a preview-parent chart with nested ApplicationSet template
  4. Harder to Debug: Two layers of templating (Helm → ApplicationSet) makes troubleshooting more difficult
  5. Minimal Benefit: The "single deletion cascades" benefit is already achieved with ArgoCD finalizers
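The finalizer mentioned in point 5 is the standard ArgoCD resources finalizer, set on each generated Application so that deleting the Application cascades to its child resources. A minimal sketch (the Application name is illustrative):

```yaml
# Sketch: with the standard ArgoCD resources finalizer present,
# deleting this Application also deletes the resources it deployed.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pr-123-api   # illustrative name
  finalizers:
    - resources-finalizer.argocd.argoproj.io
```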

Chosen Approach: Flat ApplicationSet

The syrf-previews.yaml ApplicationSet directly generates all applications:

syrf-previews (ApplicationSet)
├── pr-{n}-infrastructure
├── pr-{n}-api
├── pr-{n}-projectmanagement
├── pr-{n}-quartz
└── pr-{n}-web

Benefits:

  • Simple, single layer of templating
  • Easy to debug (argocd app list --label pr-number={n})
  • No extra charts to maintain
  • Same service discovery via git files
  • Proven pattern already working in production

Current Implementation

ApplicationSet Structure

The syrf-previews.yaml uses two matrix generators:

  1. Matrix 1: Infrastructure App
     • Git generator watches pr-*/pr.yaml files
     • Combined with a static list element serviceName: infrastructure
     • Deploys the preview-infrastructure Helm chart from the syrf monorepo

  2. Matrix 2: Service Apps
     • Git generator watches pr-*/pr.yaml files
     • Merged with service configs from syrf/services/*/config.yaml and preview/services/*/config.yaml
     • Deploys service charts (api, project-management, quartz, web)
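The first matrix generator described above can be sketched as follows (repo URL and paths are illustrative, not the actual values):

```yaml
# Sketch of Matrix 1: a git files generator discovers PRs, and a
# list generator pins the single "infrastructure" service.
generators:
  - matrix:
      generators:
        - git:
            repoURL: https://github.com/example/cluster-gitops.git  # assumed URL
            revision: HEAD
            files:
              - path: "syrf/environments/preview/pr-*/pr.yaml"
        - list:
            elements:
              - serviceName: infrastructure
```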

Service Discovery

Services are discovered dynamically from git files (single source of truth):

cluster-gitops/
├── syrf/services/*/config.yaml              # Base service config (chartPath, chartRepo)
└── syrf/environments/preview/services/*/config.yaml  # Preview-specific config (hostPrefix, imageRepo)

Adding a new service to previews only requires adding config files - no ApplicationSet changes needed.
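Under the assumption that the config files carry exactly the fields named above (chartPath/chartRepo and hostPrefix/imageRepo), adding a hypothetical api service might look like:

```yaml
# Two hypothetical config files for an "api" service; field names are
# taken from the description above, values are illustrative.
# --- syrf/services/api/config.yaml (base config) ---
chartPath: src/charts/api
chartRepo: https://github.com/example/syrf.git
---
# --- syrf/environments/preview/services/api/config.yaml (preview config) ---
hostPrefix: api
imageRepo: ghcr.io/example/api
```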

Deployment Strategy

Services with waitForDatabase=true use Recreate deployment strategy to ensure safe database seeding:

  • Production/Staging: RollingUpdate (default) - zero-downtime deployments
  • Preview (no waitForDatabase): RollingUpdate (default)
  • Preview (waitForDatabase=true): Recreate - guarantees old pods terminate before new pods start

This is implemented via conditional logic in _deployment-dotnet.tpl.
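The conditional could be sketched like this (value names are assumptions; the real logic lives in _deployment-dotnet.tpl):

```yaml
# Sketch of the strategy switch in a Helm deployment template.
spec:
  strategy:
    {{- if and .Values.preview .Values.waitForDatabase }}
    type: Recreate        # preview + waitForDatabase: old pods gone before seeding
    {{- else }}
    type: RollingUpdate   # default: zero-downtime deployments
    {{- end }}
```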


Files

File                                                              Purpose
cluster-gitops/argocd/applicationsets/syrf-previews.yaml          Main ApplicationSet
src/charts/preview-infrastructure/                                Infrastructure Helm chart
cluster-gitops/syrf/services/*/config.yaml                        Base service configs
cluster-gitops/syrf/environments/preview/services/*/config.yaml   Preview service configs
cluster-gitops/syrf/environments/preview/pr-*/pr.yaml             PR-specific metadata

Database Seeding Coordination

The flat ApplicationSet works with a Kubernetes-native coordination mechanism:

  1. Recreate strategy ensures old pods terminate before new pods start
  2. Init containers on service pods wait for a db-ready ConfigMap with a matching seedVersion
  3. The DatabaseLifecycle operator watches for deployments matching watchedDeployments.labelSelector
  4. Coordination flow:
     • Recreate terminates old pods → 0 ready replicas
     • Operator sees 0 ready replicas → safe to seed
     • Operator seeds the database → creates the db-ready ConfigMap with seedVersion
     • Init containers pass (seedVersion matches) → main containers start

This ensures services never access the database during seeding operations.
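The init-container gate described above might look like the following sketch (image, ConfigMap data key, and value wiring are all assumptions):

```yaml
# Sketch: init container that blocks pod startup until the db-ready
# ConfigMap reports the expected seedVersion.
initContainers:
  - name: wait-for-db-ready
    image: bitnami/kubectl:latest   # illustrative image with kubectl
    command:
      - sh
      - -c
      - |
        # Poll until the ConfigMap exists and its seedVersion matches
        until [ "$(kubectl get configmap db-ready \
                  -o jsonpath='{.data.seedVersion}')" = "$EXPECTED_SEED_VERSION" ]; do
          sleep 5
        done
    env:
      - name: EXPECTED_SEED_VERSION
        value: "{{ .Values.seedVersion }}"   # assumed value wiring from pr.yaml
```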

Startup Probe for MongoDB Index Creation

Services in preview environments with waitForDatabase=true have a startupProbe configured to handle slow MongoDB startup:

  • Problem: MongoDB creates indexes on freshly seeded databases, taking 60+ seconds
  • Default liveness probe: Only allows ~90 seconds total before killing the pod
  • Solution: startupProbe allows up to 310 seconds for initial startup

# Configured in _deployment-dotnet.tpl for preview environments
startupProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 30  # 10 + (30 * 10) = 310 seconds max

Environment                      Strategy        Max Startup Time
Production                       RollingUpdate   90 seconds (liveness probe)
Staging                          RollingUpdate   90 seconds (liveness probe)
Preview (waitForDatabase=true)   Recreate        310 seconds (startup probe)

Manual Reseed via /reseed-db Command

To trigger a database reseed on an existing preview:

  1. Comment /reseed-db on the PR
  2. Workflow updates seedVersion in pr.yaml (single source of truth)
  3. ArgoCD detects change and syncs all apps
  4. Services recreate (Recreate strategy) with new seedVersion
  5. Operator reseeds and creates new db-ready ConfigMap
  6. Services start successfully
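As a sketch of step 2, a pr.yaml after the workflow bumps seedVersion might look like this (field names and values are hypothetical, inferred from the flow above):

```yaml
# Hypothetical syrf/environments/preview/pr-123/pr.yaml
prNumber: 123
branch: feature/example
seedVersion: "2"   # incremented by the /reseed-db workflow to trigger a reseed
```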

Blocked when:

  • persist-db label present (preserves existing data)
  • preview label missing (no preview environment exists)

Future Considerations

If ArgoCD adds native UI grouping for related applications (without App-of-Apps), we could revisit the architecture. For now, the flat ApplicationSet provides the best balance of simplicity and functionality.