Edge Infrastructure Management
Use Expanso to manage, monitor, and automate distributed edge infrastructure. Execute commands, collect logs, deploy workloads, and maintain infrastructure across thousands of edge locations, all through declarative pipelines.
Why Manage Infrastructure with Expanso?
Traditional edge infrastructure challenges:
- Manual SSH to hundreds of locations doesn't scale
- Logs scattered across distributed sites
- Difficult to enforce consistency
- No centralized operational visibility
Expanso advantages:
- Declarative operations as pipelines
- Automated execution across all edge nodes
- Centralized log collection and monitoring
- Works offline with automatic retry
- Unified control plane for all locations
Getting Started
Choose your infrastructure platform to get started:
Kubernetes, K3s & OpenShift
Collect K3s Logs
Stream kubectl logs from all pods and namespaces to centralized storage (S3, Elasticsearch, etc.)
Single-Node OpenShift
Manage SNO deployments at the edge. Collect logs, monitor health, handle offline scenarios.
Deploy Kubernetes Manifests
GitOps-style deployments: apply manifests from S3/Git to edge K3s clusters automatically
Docker & Containers
Docker Compose Management
Deploy stacks, collect logs, monitor containers. Auto-restart failed services.
Secrets Management
Provision and mount bootstrap tokens & credentials securely before installation.
Common Use Cases
Log Aggregation:
- Collect K3s/Kubernetes logs → S3/Elasticsearch
- Stream Docker container logs to a centralized platform (see the sketch below)
- Parse and enrich logs with node/location metadata
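For the Docker case, a minimal sketch might look like the following. It reuses the same subprocess input, mapping processor, and aws_s3 output that appear in the K3s quick example later on this page; the Compose project name (my-stack), the SITE_LOCATION variable, and the bucket name are illustrative assumptions, not fixed names.

input:
  subprocess:
    name: docker
    args: [compose, --project-name, my-stack, logs, --follow, --no-color]
    codec: lines
    restart_on_exit: true

pipeline:
  processors:
    - mapping: |
        # Wrap each raw log line and tag it with node/location metadata.
        root.message = content().string()
        root.node_id = env("NODE_ID")
        root.location = env("SITE_LOCATION")
        root.timestamp = now()

output:
  aws_s3:
    bucket: edge-docker-logs
    path: 'logs/${! env("NODE_ID") }/${! timestamp_unix() }.jsonl'
    batching:
      count: 1000
      period: 1m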
Automated Deployments:
- GitOps: Deploy manifests from Git/S3 to edge clusters (see the sketch below)
- Manage docker-compose stacks remotely
- Roll out updates across distributed infrastructure
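As a sketch of the GitOps flow, the pipeline below reads manifests from an S3 prefix and pipes each one into kubectl apply -f - via the command processor. The bucket and prefix are assumptions, the command processor is assumed to feed each message to the command on stdin and replace it with the command's stdout, and continuously watching for changes (rather than syncing once per run) typically requires bucket event notifications, which this sketch omits.

input:
  aws_s3:
    bucket: edge-manifests        # assumed bucket holding rendered manifests
    prefix: k3s/production/

pipeline:
  processors:
    # Apply each manifest by feeding it to kubectl on stdin.
    - command:
        name: kubectl
        args_mapping: '["apply", "-f", "-"]'

output:
  stdout: {}    # kubectl's apply output doubles as a simple deployment report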
Health Monitoring:
- Check Kubernetes node and pod health
- Monitor Docker daemon and container status
- Alert on infrastructure issues automatically (see the sketch below)
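A hedged sketch of the check-and-alert loop: poll pod status with kubectl on a fixed interval, keep only pods that are not Running or Succeeded, and post the result to a webhook. The 30s interval and the alerts.example.com endpoint are illustrative assumptions.

input:
  generate:
    interval: 30s
    mapping: root = {}    # empty tick message triggers each check

pipeline:
  processors:
    # Replace the tick with the current pod list as JSON.
    - command:
        name: kubectl
        args_mapping: '["get", "pods", "--all-namespaces", "-o", "json"]'
    - mapping: |
        # Keep only pods that are not Running or Succeeded.
        root.node_id = env("NODE_ID")
        root.unhealthy = this.items.filter(p -> !["Running", "Succeeded"].contains(p.status.phase))
    # Drop the message entirely when everything is healthy.
    - mapping: root = if this.unhealthy.length() == 0 { deleted() }

output:
  http_client:
    url: https://alerts.example.com/webhook    # assumed alerting endpoint
    verb: POST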
Maintenance Automation:
- Scheduled backups of cluster state
- Prune unused Docker images/containers (see the sketch below)
- Clean up old logs and temporary files
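For example, a daily cleanup job can be expressed as a scheduled pipeline, sketched below. The 24h interval, the reports bucket, and the choice of docker system prune are assumptions used to illustrate the pattern.

input:
  generate:
    interval: 24h
    mapping: root = {}    # one tick per day kicks off the cleanup

pipeline:
  processors:
    # Run the prune and capture its summary as the message body.
    - command:
        name: docker
        args_mapping: '["system", "prune", "--force"]'
    - mapping: |
        root.node_id = env("NODE_ID")
        root.ran_at = now()
        root.prune_output = content().string()

output:
  aws_s3:
    bucket: edge-maintenance-reports    # assumed bucket for audit trails
    path: 'prune/${! env("NODE_ID") }/${! timestamp_unix() }.json'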
Quick Example: K3s Log Collection
Stream all K3s logs to S3 in three steps: read with a subprocess input, enrich with a mapping processor, and ship to S3 in batches:
input:
  subprocess:
    name: kubectl
    args: [logs, --all-containers, --prefix, --follow, --all-namespaces]
    codec: lines
    restart_on_exit: true

pipeline:
  processors:
    - mapping: |
        root = this
        root.node_id = env("NODE_ID")
        root.timestamp = now()

output:
  aws_s3:
    bucket: edge-k3s-logs
    path: 'logs/${! env("NODE_ID") }/${! timestamp_unix() }.jsonl'
    batching:
      count: 1000
      period: 1m
Architecture Patterns
Hub-and-Spoke Log Collection
Edge Sites (K3s/Docker)
  ↓ Expanso Agent
  ↓ Central Storage (S3/Elasticsearch)
  ↓ Analysis/Dashboards
GitOps Deployments
Git/S3 (manifests)
  ↓ Expanso watches for changes
  ↓ kubectl apply to edge clusters
  ↓ Report status to control plane
Self-Healing Infrastructure
Health Check (every 30s)
  ↓ Detect issues (failed pods/containers)
  ↓ Auto-remediation (restart, redeploy)
  ↓ Alert + log results
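A minimal sketch of this pattern, under the same assumptions as the health-check example above: list pods stuck in the Failed phase, fan out to one message per pod, and delete each one so its controller reschedules it. Real deployments usually need more nuanced detection (CrashLoopBackOff, node conditions) than a single field selector.

input:
  generate:
    interval: 30s
    mapping: root = {}

pipeline:
  processors:
    # List pods in the Failed phase as a JSON document.
    - command:
        name: kubectl
        args_mapping: '["get", "pods", "--all-namespaces", "--field-selector", "status.phase=Failed", "-o", "json"]'
    # Fan out: one message per failed pod (zero messages when healthy).
    - mapping: root = this.items
    - unarchive:
        format: json_array
    # Delete the pod so its Deployment/DaemonSet recreates it.
    - command:
        name: kubectl
        args_mapping: '["delete", "pod", this.metadata.name, "--namespace", this.metadata.namespace]'
    - log:
        level: INFO
        message: 'remediation attempted from node ${! env("NODE_ID") }'

output:
  stdout: {}    # or ship the kubectl output to central storage for auditing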
Best Practices
- Add node identifiers to all outputs for filtering
- Batch cloud writes to reduce API calls and costs
- Use subprocess for long-running commands (kubectl logs -f)
- Handle command failures gracefully so one bad command doesn't stall the pipeline
- Set resource limits to prevent memory issues
Next Steps
- Choose a guide above to get started
- Command Processor: Execute commands per message
- Subprocess Input: Stream command output
- Error Handling: Handle infrastructure failures