Preview Environments ApplicationSet Architecture¶
Status: ✅ Using Flat ApplicationSet (2026-01-18)
Executive Summary¶
Preview environments use a single flat ApplicationSet (`syrf-previews.yaml`) that directly generates all applications (infrastructure + services) for each PR. This approach was chosen over App-of-Apps for its simplicity and debuggability.
Related Documentation¶
| Document | Description |
|---|---|
| Feature Brief | High-level feature overview |
| Implementation Spec | Detailed implementation specification |
| Edge Case Analysis | Analysis of edge cases and scenarios |
| Preview Infrastructure README | Helm chart documentation |
Architecture Decision: Flat ApplicationSet vs App-of-Apps¶
Decision: Use Flat ApplicationSet¶
We evaluated an App-of-Apps pattern (parent application with nested ApplicationSet) but decided against it.
Why App-of-Apps Was Rejected¶
- No UI Grouping Benefit: All applications still appear on the main ArgoCD screen regardless of hierarchy
- Added Complexity: Nested ApplicationSets require template escaping (Helm and ApplicationSet both use `{{ }}`)
- Extra Chart to Maintain: Would require a `preview-parent` chart with a nested ApplicationSet template
- Harder to Debug: Two layers of templating (Helm → ApplicationSet) make troubleshooting more difficult
- Minimal Benefit: The "single deletion cascades" benefit is already achieved with ArgoCD finalizers
Chosen Approach: Flat ApplicationSet¶
The `syrf-previews.yaml` ApplicationSet directly generates all applications:

```
syrf-previews (ApplicationSet)
├── pr-{n}-infrastructure
├── pr-{n}-api
├── pr-{n}-projectmanagement
├── pr-{n}-quartz
└── pr-{n}-web
```
Benefits:
- Simple, single layer of templating
- Easy to debug (`argocd app list --label pr-number={n}`)
- No extra charts to maintain
- Same service discovery via git files
- Proven pattern already working in production
Current Implementation¶
ApplicationSet Structure¶
The `syrf-previews.yaml` ApplicationSet uses two matrix generators:

- Matrix 1: Infrastructure App
    - Git generator watches `pr-*/pr.yaml` files
    - Combined with a static list element `serviceName: infrastructure`
    - Deploys the `preview-infrastructure` Helm chart from the syrf monorepo
- Matrix 2: Service Apps
    - Git generator watches `pr-*/pr.yaml` files
    - Merged with service configs from `syrf/services/*/config.yaml` and `preview/services/*/config.yaml`
    - Deploys service charts (api, project-management, quartz, web)
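As a sketch, the first matrix generator could look roughly like the following. This is a hypothetical reconstruction, not the actual `syrf-previews.yaml`: the `repoURL` values are placeholders, and parameter names such as `prNumber` and `branch` are assumed fields from `pr.yaml`.

```yaml
# Sketch of Matrix 1 (infrastructure app) — illustrative only
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: syrf-previews
spec:
  goTemplate: true
  generators:
    - matrix:
        generators:
          # Git files generator: one element per open PR's pr.yaml
          - git:
              repoURL: https://example.com/cluster-gitops.git  # placeholder
              revision: HEAD
              files:
                - path: "syrf/environments/preview/pr-*/pr.yaml"
          # Static list element pins serviceName for the infrastructure app
          - list:
              elements:
                - serviceName: infrastructure
  template:
    metadata:
      name: "pr-{{.prNumber}}-{{.serviceName}}"  # assumed field names
      labels:
        pr-number: "{{.prNumber}}"
      finalizers:
        # finalizer gives the "single deletion cascades" behavior
        - resources-finalizer.argocd.argoproj.io
    spec:
      project: default
      source:
        repoURL: https://example.com/syrf.git    # placeholder monorepo URL
        path: src/charts/preview-infrastructure
        targetRevision: "{{.branch}}"            # assumed field name
      destination:
        server: https://kubernetes.default.svc
        namespace: "pr-{{.prNumber}}"
```

Matrix 2 would follow the same shape, with the list generator replaced by a second git generator reading the per-service `config.yaml` files.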
Service Discovery¶
Services are discovered dynamically from git files (single source of truth):
```
cluster-gitops/
├── syrf/services/*/config.yaml                          # Base service config (chartPath, chartRepo)
└── syrf/environments/preview/services/*/config.yaml     # Preview-specific config (hostPrefix, imageRepo)
```
Adding a new service to previews only requires adding config files - no ApplicationSet changes needed.
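For illustration, the two config layers for a single service might look like this. Only the field names `chartPath`, `chartRepo`, `hostPrefix`, and `imageRepo` come from this document; all values are placeholders.

```yaml
# syrf/services/api/config.yaml — base service config (illustrative)
chartPath: src/charts/api
chartRepo: https://example.com/syrf.git    # placeholder

---
# syrf/environments/preview/services/api/config.yaml — preview overrides (illustrative)
hostPrefix: api                             # e.g. api-pr-{n}.<preview-domain>
imageRepo: registry.example.com/syrf/api    # placeholder
```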
Deployment Strategy¶
Services with `waitForDatabase=true` use the Recreate deployment strategy to ensure safe database seeding:
- Production/Staging: RollingUpdate (default) - zero-downtime deployments
- Preview (no `waitForDatabase`): RollingUpdate (default)
- Preview (`waitForDatabase=true`): Recreate - guarantees old pods terminate before new pods start

This is implemented via conditional logic in `_deployment-dotnet.tpl`.
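The conditional could be sketched like this — a hypothetical reconstruction, not the actual contents of `_deployment-dotnet.tpl`; `.Values.environment` and `.Values.waitForDatabase` are assumed value names:

```yaml
{{/* Hypothetical sketch of the strategy conditional in _deployment-dotnet.tpl */}}
spec:
  strategy:
    {{- if and (eq .Values.environment "preview") .Values.waitForDatabase }}
    type: Recreate        # old pods fully terminate before seeding starts
    {{- else }}
    type: RollingUpdate   # default zero-downtime rollout
    {{- end }}
```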
Files¶
| File | Purpose |
|---|---|
| `cluster-gitops/argocd/applicationsets/syrf-previews.yaml` | Main ApplicationSet |
| `src/charts/preview-infrastructure/` | Infrastructure Helm chart |
| `cluster-gitops/syrf/services/*/config.yaml` | Base service configs |
| `cluster-gitops/syrf/environments/preview/services/*/config.yaml` | Preview service configs |
| `cluster-gitops/syrf/environments/preview/pr-*/pr.yaml` | PR-specific metadata |
Database Seeding Coordination¶
The flat ApplicationSet works with a Kubernetes-native coordination mechanism:
- Recreate strategy ensures old pods terminate before new pods start
- Init containers on service pods wait for a `db-ready` ConfigMap with a matching `seedVersion`
- DatabaseLifecycle operator watches for deployments with `watchedDeployments.labelSelector`
- Coordination flow:
    1. Recreate terminates old pods → 0 ready replicas
    2. Operator sees 0 ready replicas → safe to seed
    3. Operator seeds database → creates `db-ready` ConfigMap with `seedVersion`
    4. Init containers pass (`seedVersion` matches) → main containers start
This ensures services never access the database during seeding operations.
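The init-container gate can be sketched as follows. The ConfigMap name `db-ready` and key `seedVersion` come from the flow above; the image, value name, and script are illustrative (the pod's service account would also need RBAC to `get` ConfigMaps):

```yaml
# Illustrative init container that blocks until db-ready carries the expected seedVersion
initContainers:
  - name: wait-for-db
    image: bitnami/kubectl:1.31    # placeholder; any image with kubectl works
    env:
      - name: EXPECTED_SEED_VERSION
        value: "{{ .Values.seedVersion }}"   # assumed value name
    command: ["/bin/sh", "-c"]
    args:
      - |
        until [ "$(kubectl get configmap db-ready -o jsonpath='{.data.seedVersion}' 2>/dev/null)" = "$EXPECTED_SEED_VERSION" ]; do
          echo "waiting for db-ready with seedVersion=$EXPECTED_SEED_VERSION"
          sleep 5
        done
```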
Startup Probe for MongoDB Index Creation¶
Services in preview environments with `waitForDatabase=true` have a `startupProbe` configured to handle slow MongoDB startup:
- Problem: MongoDB creates indexes on freshly seeded databases, taking 60+ seconds
- Default liveness probe: Only allows ~90 seconds total before killing the pod
- Solution: `startupProbe` allows up to 310 seconds for initial startup
```yaml
# Configured in _deployment-dotnet.tpl for preview environments
startupProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 30  # 10 + (30 * 10) = 310 seconds max
```
| Environment | Strategy | Max Startup Time |
|---|---|---|
| Production | RollingUpdate | 90 seconds (liveness probe) |
| Staging | RollingUpdate | 90 seconds (liveness probe) |
| Preview (waitForDatabase=true) | Recreate | 310 seconds (startup probe) |
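The budgets in the table follow from the worst-case probe arithmetic `initialDelaySeconds + periodSeconds × failureThreshold`. The startup-probe numbers come from the snippet above; the liveness parameters used below (60/10/3) are assumed values that reproduce the ~90-second figure, not values taken from the chart:

```python
def probe_budget(initial_delay: int, period: int, failure_threshold: int) -> int:
    """Worst-case seconds before a probe exhausts its failure budget."""
    return initial_delay + period * failure_threshold

# startupProbe from the snippet above: 10 + 10 * 30
print(probe_budget(10, 10, 30))  # 310

# Hypothetical liveness settings matching the ~90s figure: 60 + 10 * 3
print(probe_budget(60, 10, 3))   # 90
```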
Manual Reseed via /reseed-db Command¶
To trigger a database reseed on an existing preview:
1. Comment `/reseed-db` on the PR
2. Workflow updates `seedVersion` in `pr.yaml` (single source of truth)
3. ArgoCD detects the change and syncs all apps
4. Services recreate (Recreate strategy) with the new `seedVersion`
5. Operator reseeds and creates a new `db-ready` ConfigMap
6. Services start successfully
Blocked when:
- `persist-db` label present (preserves existing data)
- `preview` label missing (no preview environment exists)
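For illustration, a `pr.yaml` carrying the `seedVersion` single source of truth might look like this; only `seedVersion` is confirmed by this document, and the PR number, other field names, and values are hypothetical:

```yaml
# syrf/environments/preview/pr-123/pr.yaml — illustrative example
prNumber: 123               # hypothetical field
branch: feature/example     # hypothetical field
seedVersion: "20260118-1"   # bumped by the /reseed-db workflow
```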
Future Considerations¶
If ArgoCD adds native UI grouping for related applications (without App-of-Apps), we could revisit the architecture. For now, the flat ApplicationSet provides the best balance of simplicity and functionality.