Docker Compose Management

Manage Docker Compose stacks across distributed edge locations with Expanso. Deploy services, monitor health, collect logs, and automatically restart failed containers.

Why Manage Docker Compose with Expanso?

Challenge: Docker Compose stacks at edge locations require manual management and monitoring.

Solution: Use Expanso pipelines to automate Docker Compose operations across all edge sites.

Benefits:

  • Deploy and update stacks remotely
  • Auto-restart failed containers
  • Centralize logs from all services
  • Monitor health across locations

Monitor and Auto-Restart Containers

Check container status every 30 seconds and restart if needed:

input:
  generate:
    interval: 30s
    mapping: |
      # Store the check time and node identity as metadata so they survive
      # the command processor replacing the message contents
      meta check_time = now()
      meta node_id = env("NODE_ID")
      root = ""

pipeline:
  processors:
    # Check container status
    - command:
        name: docker
        args_mapping: '["ps", "--filter", "status=running", "--format", "{{.Names}}"]'

    - mapping: |
        # List of expected containers
        let expected = ["nginx", "app", "redis", "postgres"]

        # Running containers (one name per line of docker ps output)
        let running = content().string().split("\n").filter(c -> c != "")

        # Find missing containers
        root.missing = $expected.filter(e -> !$running.contains(e))
        root.node_id = @node_id
        root.timestamp = @check_time

    # Restart missing containers
    - switch:
        - check: this.missing.length() > 0
          processors:
            # Restart every missing service in one call; the branch keeps the
            # status document intact so the output switch below still sees `missing`
            - branch:
                processors:
                  - command:
                      name: docker
                      args_mapping: |
                        root = ["compose", "-f", "/opt/app/docker-compose.yml", "up", "-d"].concat(this.missing)
                result_map: 'root.restart_output = content().string()'

output:
  switch:
    cases:
      # Alert if containers were restarted, then fall through to the status log
      - check: this.missing.length() > 0
        continue: true
        output:
          http_client:
            url: https://alerts.company.com/webhook
            verb: POST
            headers:
              Content-Type: application/json

      # Log status checks
      - output:
          aws_s3:
            bucket: container-status
            path: 'status/${! env("NODE_ID") }/${! timestamp_unix() }.json'
            batching:
              count: 100
              period: 5m

What this does:

  • Checks every 30 seconds for expected containers
  • Identifies missing containers
  • Automatically restarts them via docker-compose
  • Sends alerts when restarts happen
  • Logs all health checks
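
The expected list above is hardcoded per node. If you would rather derive it from the stack definition itself, one option is to ask Compose for its service names and compare them against the services it reports as running. A minimal sketch, assuming the same compose file path and a Compose v2 release that supports ps --status; the expected, running, and missing field names are placeholders:

pipeline:
  processors:
    # Expected services, taken straight from the compose file
    - branch:
        processors:
          - command:
              name: docker
              args_mapping: '["compose", "-f", "/opt/app/docker-compose.yml", "config", "--services"]'
        result_map: 'root.expected = content().string().split("\n").filter(s -> s != "")'

    # Services Compose currently reports as running
    - branch:
        processors:
          - command:
              name: docker
              args_mapping: '["compose", "-f", "/opt/app/docker-compose.yml", "ps", "--services", "--status", "running"]'
        result_map: 'root.running = content().string().split("\n").filter(s -> s != "")'

    # Anything defined but not running is missing
    - mapping: |
        root.missing = this.expected.filter(e -> !this.running.contains(e))

Comparing service names on both sides avoids mismatches with docker ps, which prints container names (typically project-service-1) rather than service names.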

Collect Docker Compose Logs

Stream logs from all services in a compose stack:

input:
  subprocess:
    name: docker
    args:
      - compose
      - -f
      - /opt/app/docker-compose.yml
      - logs
      - --follow
      - --timestamps
      - --no-log-prefix
    codec: lines
    restart_on_exit: true

pipeline:
  processors:
    - mapping: |
        # Try to parse structured JSON logs; fall back to a plain-text message
        root = content().string().parse_json().catch({
          "message": content().string(),
          "level": "info"
        })

        root.node_id = env("NODE_ID")
        root.location = env("LOCATION")
        root.stack = "main-app"
        root.timestamp = now()

output:
  http_client:
    url: https://logs.company.com/ingest
    verb: POST
    headers:
      Content-Type: application/json
      Authorization: Bearer ${LOG_API_TOKEN}
    batching:
      count: 500
      period: 30s
      processors:
        - archive:
            format: json_array
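
Chatty services can overwhelm the ingest endpoint. If you want to drop low-value entries at the edge before shipping, a mapping processor can discard them; a minimal sketch that filters out debug-level lines, assuming the level field produced by the mapping above:

pipeline:
  processors:
    - mapping: |
        # Drop debug logs at the edge; all other messages pass through unchanged
        root = if this.level.or("") == "debug" { deleted() }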

Deploy Docker Compose Stack

Deploy/update compose stacks from configuration in S3:

input:
  aws_s3:
    bucket: edge-compose-configs
    prefix: '${LOCATION}/'
    scanner:
      to_the_end: {}

pipeline:
  processors:
    # Remember where to write the compose file and keep the raw YAML as the payload
    - mapping: |
        meta compose_path = "/tmp/docker-compose-" + uuid_v4() + ".yml"
        root = content().string()

    # Write the compose file (the message payload is piped to the command's stdin)
    - command:
        name: bash
        args_mapping: 'root = ["-c", "cat > " + @compose_path]'

    # Deploy with docker compose
    - command:
        name: docker
        args_mapping: 'root = ["compose", "-f", @compose_path, "up", "-d"]'

    - mapping: |
        root.deployed = true
        root.output = content().string()
        root.node_id = env("NODE_ID")
        root.timestamp = now()

output:
  broker:
    pattern: fan_out
    outputs:
      # Log deployment result
      - aws_s3:
          bucket: deployment-logs
          path: 'deployments/${! env("NODE_ID") }/${! timestamp_unix() }.json'

      # Report to control plane
      - http_client:
          url: https://api.company.com/deployments
          verb: POST
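
The deployment record above is written as soon as docker compose up -d returns. If you also want it to reflect what is actually running, a verification step can sit between the deploy command and the final mapping. A sketch, assuming the compose_path metadata set earlier and a Compose v2 release whose ps --format json emits one JSON object per line; the services field name is a placeholder:

    # Capture the state of every service in the stack after `up -d`
    - branch:
        processors:
          - command:
              name: docker
              args_mapping: 'root = ["compose", "-f", @compose_path, "ps", "--format", "json"]'
        result_map: |
          root.services = content().string().split("\n").filter(l -> l != "").map_each(l -> l.parse_json())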

Check Container Resource Usage

Monitor CPU and memory usage:

input:
  generate:
    interval: 60s
    mapping: 'root = {}'

pipeline:
  processors:
    # Get container stats
    - command:
        name: docker
        args_mapping: '["stats", "--no-stream", "--format", "{{json .}}"]'

    - mapping: |
        # Parse each line of the stats output as JSON
        let containers = content().string().split("\n").filter(l -> l != "").map_each(l -> l.parse_json())

        root.containers = $containers
        root.node_id = env("NODE_ID")
        root.timestamp = now()

        # Check for high resource usage (docker reports percentages as strings like "42.50%")
        root.high_usage = $containers.filter(c ->
          c.CPUPerc.replace_all("%", "").number() > 80 ||
          c.MemPerc.replace_all("%", "").number() > 90
        )

output:
  switch:
    cases:
      # Alert on high usage, then fall through to the metrics log
      - check: this.high_usage.length() > 0
        continue: true
        output:
          http_client:
            url: https://alerts.company.com/webhook
            verb: POST

      # Log metrics
      - output:
          http_client:
            url: https://metrics.company.com/docker
            verb: POST
            batching:
              count: 10
              period: 5m
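
The alert webhook above receives the full stats document. If the receiver only needs the offending containers, the alert case can attach its own processors to trim the payload before sending; a minimal sketch:

output:
  switch:
    cases:
      - check: this.high_usage.length() > 0
        continue: true
        output:
          http_client:
            url: https://alerts.company.com/webhook
            verb: POST
          processors:
            # Send only the node identity and the containers over threshold
            - mapping: |
                root.node_id = this.node_id
                root.high_usage = this.high_usage
      # ... metrics case unchanged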

Update Compose Stack

Pull latest images and restart services:

input:
  generate:
    interval: '@daily' # Run once per day
    mapping: 'root = {}'

pipeline:
  processors:
    # Pull latest images
    - command:
        name: docker
        args_mapping: '["compose", "-f", "/opt/app/docker-compose.yml", "pull"]'

    # Restart with new images
    - command:
        name: docker
        args_mapping: '["compose", "-f", "/opt/app/docker-compose.yml", "up", "-d"]'

    - mapping: |
        root.updated = true
        root.output = content().string()
        root.node_id = env("NODE_ID")
        root.timestamp = now()

output:
  aws_s3:
    bucket: update-logs
    path: 'updates/${! env("NODE_ID") }/${! timestamp_unix() }.json'
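
Daily pulls gradually fill the node's disk with superseded image layers. One option is to prune dangling images after the update; a sketch of an extra step that could follow the up -d command (the -f flag skips the interactive prompt, and prune_output is a placeholder field):

    # Reclaim disk space from images no longer referenced by any container
    - branch:
        processors:
          - command:
              name: docker
              args_mapping: '["image", "prune", "-f"]'
        result_map: 'root.prune_output = content().string()'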

Best Practices

1. Use Absolute Paths

processors:
  - command:
      name: docker
      args_mapping: '["compose", "-f", "/opt/app/docker-compose.yml", "up", "-d"]'

Absolute paths avoid issues with the agent's working directory.
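
If the path differs between sites, you can resolve it at runtime instead of hardcoding it. A sketch assuming each node exports a COMPOSE_FILE environment variable (the variable name is an example, not something the pipelines above rely on):

processors:
  - command:
      name: docker
      # COMPOSE_FILE is assumed to be set in the agent's environment on each node
      args_mapping: 'root = ["compose", "-f", env("COMPOSE_FILE"), "up", "-d"]'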

2. Check Docker Daemon Health

processors:
  - command:
      name: docker
      args_mapping: '["info"]'
  - mapping: |
      # A failed `docker info` marks the message as errored
      root.docker_healthy = !errored()

Verify the Docker daemon is reachable before running compose operations.
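
The flag is most useful when it short-circuits the rest of the pipeline. A minimal sketch that drops the message when the daemon is unreachable, so any later compose commands are skipped for that tick:

processors:
  - command:
      name: docker
      args_mapping: '["info"]'
  - mapping: |
      # Drop the message if `docker info` failed; later processors never see it
      root = if errored() { deleted() }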

3. Add Timeouts for Commands

processors:
  - command:
      name: docker
      args_mapping: '["compose", "up", "-d"]'
  - mapping: |
      # Flag commands that failed with a timeout so they can be retried or alerted on
      root.timeout = error().or("").contains("timeout")

Detect and handle operations that run longer than expected.
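
The mapping above only detects a timeout after the fact. To actually bound how long a compose command may run, one option is to wrap it in the coreutils timeout utility, assuming it is installed on the node; a sketch with a five-minute limit:

processors:
  - command:
      name: timeout
      # Kill the command after 300 seconds; a non-zero exit surfaces as a message error
      args_mapping: '["300", "docker", "compose", "-f", "/opt/app/docker-compose.yml", "up", "-d"]'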

4. Log All Operations

output:
  broker:
    pattern: fan_out
    outputs:
      - aws_s3: # Log to S3
      - http_client: # Send to API

Maintain an audit trail of all compose operations.
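
A filled-in version of that broker might look like the following; the bucket, path, and endpoint are placeholders:

output:
  broker:
    pattern: fan_out
    outputs:
      # Durable audit trail in S3, one object per batch
      - aws_s3:
          bucket: compose-audit-logs
          path: 'audit/${! env("NODE_ID") }/${! timestamp_unix() }.json'
          batching:
            count: 100
            period: 5m

      # Near-real-time copy to the central API
      - http_client:
          url: https://api.company.com/compose-events
          verb: POST
          headers:
            Content-Type: application/json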


Troubleshooting

Docker Permission Denied

Solution: Add the Expanso agent user to the docker group, then restart the agent so the new group membership takes effect:

sudo usermod -aG docker expanso-agent

Compose File Not Found

Solution: Use absolute paths:

args_mapping: '["compose", "-f", "/opt/app/docker-compose.yml", "up", "-d"]'

Containers Not Restarting

Solution: Check compose file and logs:

processors:
  - command:
      name: docker
      args_mapping: '["compose", "-f", "/path/to/compose.yml", "config"]' # Validate the compose file first

Next Steps