Deploy Kubernetes Manifests
Deploy and manage Kubernetes manifests across distributed edge clusters using Expanso. Implement GitOps-style workflows by watching S3 or Git for changes and automatically applying them to edge K3s clusters.
Why Automate Kubernetes Deployments?
Challenge: Manually applying manifests to 100+ edge K3s clusters is error-prone and doesn't scale.
Solution: Use Expanso to automatically deploy manifests from central storage to all edge clusters.
Benefits:
- GitOps workflow: commit manifests → auto-deploy
- Consistent deployments across all edge sites
- Audit trail of all changes
- Automatic rollback on failures
Deploy from S3
Watch S3 for new or updated manifests and apply them to the local cluster:
input:
  aws_s3:
    bucket: edge-k8s-manifests
    prefix: '${CLUSTER_NAME}/'
    scanner:
      to_the_end: {}
pipeline:
  processors:
    # Stash the raw manifest in metadata so it survives the command output
    - mapping: 'meta manifest_content = content().string()'
    # Apply manifest to cluster (the raw bytes are piped to kubectl's stdin)
    - command:
        name: kubectl
        args_mapping: '["apply", "-f", "-"]'
    # Build a structured deployment record
    - mapping: |
        root.manifest_content = meta("manifest_content")
        root.manifest_path = meta("s3_key")
        root.cluster = env("CLUSTER_NAME")
        root.node_id = env("NODE_ID")
        root.kubectl_output = content().string()
        root.applied_at = now()
        root.success = !errored()
        root.error = error().or("")
output:
  broker:
    pattern: fan_out
    outputs:
      # Log deployment result
      - aws_s3:
          bucket: deployment-logs
          path: 'deployments/${! env("NODE_ID") }/${! timestamp_unix() }.json'
      # Report status to control plane
      - http_client:
          url: https://api.company.com/deployments
          verb: POST
          headers:
            Content-Type: application/json
Workflow:
- Upload a manifest to S3, e.g. s3://edge-k8s-manifests/production/deployment.yaml (see the example below)
- Expanso detects the new file
- Applies it to the local K3s cluster
- Logs the result and reports status to the control plane
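For example, a deployment to the production cluster can be triggered with a plain S3 upload (assuming the bucket above and CLUSTER_NAME=production on the target nodes):
aws s3 cp deployment.yaml s3://edge-k8s-manifests/production/deployment.yaml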
Deploy Specific Resource Types
Only deploy ConfigMaps and Secrets:
input:
  aws_s3:
    bucket: edge-k8s-configs
    prefix: '${CLUSTER_NAME}/configs/'
    scanner:
      to_the_end: {}
pipeline:
  processors:
    # Parse the YAML to inspect the resource type; kubectl also
    # accepts the parsed manifest in its JSON form
    - mapping: |
        let doc = content().parse_yaml()
        root = $doc
        meta kind = $doc.kind
        meta name = $doc.metadata.name
    # Only apply ConfigMaps and Secrets
    - switch:
        cases:
          - check: '["ConfigMap", "Secret"].contains(this.kind)'
            processors:
              - command:
                  name: kubectl
                  args_mapping: '["apply", "-f", "-"]'
output:
  aws_s3:
    bucket: config-deployment-logs
    path: 'logs/${! meta("kind") }/${! timestamp_unix() }.json'
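Manifests that are not ConfigMaps or Secrets still pass through to the output above, just without being applied. To drop them instead, a default switch case (one with no check) can delete the message; a minimal sketch using Bloblang's deleted():
    - switch:
        cases:
          - check: '["ConfigMap", "Secret"].contains(this.kind)'
            processors:
              - command:
                  name: kubectl
                  args_mapping: '["apply", "-f", "-"]'
          - processors:
              - mapping: 'root = deleted()'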
Deploy with Validation
Validate manifests before applying:
pipeline:
  processors:
    # Keep the original manifest so it survives the dry-run output
    - mapping: 'meta manifest = content().string()'
    # Dry-run first to validate
    - command:
        name: kubectl
        args_mapping: '["apply", "--dry-run=server", "-f", "-"]'
    - mapping: |
        root.validation = content().string()
        root.valid = !errored()
    # Only apply if validation passed
    - switch:
        cases:
          - check: this.valid
            processors:
              # Restore the original manifest, then apply for real
              - mapping: 'root = meta("manifest")'
              - command:
                  name: kubectl
                  args_mapping: '["apply", "-f", "-"]'
output:
  aws_s3:
    bucket: validated-deployments
    path: 'deployments/${! timestamp_unix() }.json'
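To keep rejected manifests separate from applied ones, the valid flag can also drive routing at the output. A sketch using a switch output, where a case with no check acts as the fallback:
output:
  switch:
    cases:
      - check: this.valid
        output:
          aws_s3:
            bucket: validated-deployments
            path: 'deployments/${! timestamp_unix() }.json'
      - output:
          aws_s3:
            bucket: validated-deployments
            path: 'rejected/${! timestamp_unix() }.json'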
Rollback on Failure
Automatically roll back failed deployments:
pipeline:
  processors:
    # Keep the incoming manifest in metadata
    - mapping: 'meta manifest = content().string()'
    # Save the previous state for rollback
    - command:
        name: kubectl
        args_mapping: '["get", "all", "-o", "yaml"]'
    - mapping: 'meta previous_state = content().string()'
    # Apply the new manifest
    - mapping: 'root = meta("manifest")'
    - command:
        name: kubectl
        args_mapping: '["apply", "-f", "-"]'
    # Check if the deployment is healthy
    - command:
        name: kubectl
        args_mapping: '["rollout", "status", "deployment/my-app", "--timeout=5m"]'
    # Roll back to the saved state if anything failed
    - switch:
        cases:
          - check: errored()
            processors:
              - mapping: |
                  meta rollback_needed = "true"
                  root = meta("previous_state")
              - command:
                  name: kubectl
                  args_mapping: '["apply", "-f", "-"]'
    # Build the status report
    - mapping: |
        root.deployment_success = !errored()
        root.rolled_back = meta("rollback_needed").or("false") == "true"
        root.reported_at = now()
output:
  http_client:
    url: https://api.company.com/deployments
    verb: POST
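Note that a failed rollout leaves the message flagged as errored, which some outputs treat as a delivery failure. Appending a catch processor to the pipeline clears the flag (its children run only for errored messages), so the status report is still delivered; a minimal sketch:
    - catch:
        # Record the failure reason; the error flag is cleared afterwards
        - mapping: 'meta deploy_error = error()'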
Deploy from Git
Clone a Git repository and apply its manifests:
input:
  generate:
    interval: 5m # Check every 5 minutes
    mapping: 'root = {}'
pipeline:
  processors:
    # Clone the manifest repo (shallow)
    - command:
        name: git
        args_mapping: '["clone", "--depth", "1", "https://github.com/company/k8s-manifests", "/tmp/manifests"]'
    # Apply all manifests for this cluster; args_mapping is Bloblang,
    # so the path is built with string concatenation
    - command:
        name: kubectl
        args_mapping: '["apply", "-f", "/tmp/manifests/" + env("CLUSTER_NAME") + "/"]'
    # Capture the apply output before cleanup wipes it
    - mapping: 'meta apply_output = content().string()'
    # Cleanup
    - command:
        name: rm
        args_mapping: '["-rf", "/tmp/manifests"]'
    - mapping: |
        root.deployed_at = now()
        root.cluster = env("CLUSTER_NAME")
        root.output = meta("apply_output")
output:
  aws_s3:
    bucket: git-deployments
    path: 'deployments/${! env("CLUSTER_NAME") }/${! timestamp_unix() }.json'
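To pin deployments to a specific branch or tag rather than the default branch, the clone step can pass --branch (a sketch; the branch name is illustrative):
    - command:
        name: git
        args_mapping: '["clone", "--depth", "1", "--branch", "production", "https://github.com/company/k8s-manifests", "/tmp/manifests"]'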
Deploy to Specific Namespace
Apply manifests to a specific namespace:
pipeline:
  processors:
    - command:
        name: kubectl
        args_mapping: '["apply", "--namespace", "production", "-f", "-"]'
Best Practices
1. Use Dry-Run for Validation
processors:
  - command:
      name: kubectl
      args_mapping: '["apply", "--dry-run=server", "-f", "-"]'
Catch errors before applying.
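kubectl diff goes a step further and previews exactly what would change. Note that it exits with status 1 when differences are found, which the command processor reports as a message error, so treat a non-empty diff accordingly; a sketch:
processors:
  - command:
      name: kubectl
      args_mapping: '["diff", "-f", "-"]'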
2. Add Deployment Labels
processors:
  - mapping: |
      # Parse the manifest and add tracking labels
      root = content().parse_yaml()
      root.metadata.labels.deployed_by = "expanso"
      # Label values may not contain colons, so use a unix timestamp
      root.metadata.labels.deployed_at = now().ts_unix().string()
Track who deployed each manifest, and when.
3. Use Service Accounts with Limited RBAC
kubectl create serviceaccount expanso-deploy
kubectl create clusterrole deployer --verb=get,list,create,update,patch,delete --resource=deployments,services,configmaps
kubectl create clusterrolebinding expanso-deploy --clusterrole=deployer --serviceaccount=default:expanso-deploy
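When deployments are confined to a single namespace, a namespace-scoped Role is tighter than a ClusterRole (a sketch for a hypothetical production namespace):
kubectl create role deployer --verb=get,list,create,update,patch,delete --resource=deployments,services,configmaps -n production
kubectl create rolebinding expanso-deploy --role=deployer --serviceaccount=default:expanso-deploy -n production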
4. Log All Deployments
output:
  broker:
    pattern: fan_out
    outputs:
      - aws_s3: {} # Audit log
      - http_client: {} # Status API
Troubleshooting
Permission Denied
Solution: Check the service account's permissions (kubectl apply needs get, create, and patch on the target resources):
kubectl auth can-i create deployments --as=system:serviceaccount:default:expanso-deploy
Manifest Validation Failed
Solution: Validate client-side with a dry run:
processors:
  - command:
      name: kubectl
      args_mapping: '["apply", "--dry-run=client", "--validate=true", "-f", "-"]'
Deployment Timeout
Solution: Increase the rollout status timeout:
processors:
  - command:
      name: kubectl
      args_mapping: '["rollout", "status", "deployment/my-app", "--timeout=10m"]'
Next Steps
- K3s Logs: Collect cluster logs
- Docker Compose: Manage Docker stacks
- OpenShift Single-Node: Deploy on OpenShift
- Command Processor: Component reference