
Local Secrets

Pipeline configurations often require credentials for databases, APIs, and message queues. Local secrets use environment variables and files to provide these credentials without hard-coding values in YAML.

Assumption

This guide assumes secrets (passwords, API keys, certificates) have already been provisioned. The examples focus on how to reference these secrets in pipeline configurations.


Why Use Local Secrets?

  • No Hard-Coded Credentials: Keep secrets out of version control
  • Environment Flexibility: Use different credentials for dev/staging/prod
  • Simple Setup: No external dependencies or infrastructure required
  • Platform Native: Works with systemd, Docker, Kubernetes

Environment Variable Interpolation

Expanso Edge supports environment variable interpolation using the ${VAR_NAME} syntax in pipeline configurations.
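The substitution is performed by Expanso Edge itself when the configuration is loaded. As a rough illustration of the same pattern, a shell heredoc expands `${VAR_NAME}` the same way (the value below is hypothetical):

```shell
# Illustration only: expanso-edge expands ${VAR_NAME} when loading the config.
# A shell heredoc performs the same substitution:
export KAFKA_USERNAME="user@example.com"
cat << EOF
sasl:
  user: ${KAFKA_USERNAME}
EOF
# prints:
# sasl:
#   user: user@example.com
```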

Basic Example

pipeline.yaml
input:
  kafka:
    addresses:
      - kafka.example.com:9092
    topics:
      - orders
    consumer_group: edge-processor
    sasl:
      mechanism: PLAIN
      user: ${KAFKA_USERNAME}
      password: ${KAFKA_PASSWORD}

output:
  http_client:
    url: https://api.example.com/ingest
    headers:
      Authorization: Bearer ${API_TOKEN}

Providing Variables

Local development:

export KAFKA_USERNAME="user@example.com"
export KAFKA_PASSWORD="secure-password"
export API_TOKEN="your-api-token"

expanso-edge run --config pipeline.yaml

Environment file:

# /etc/expanso/pipeline.env
KAFKA_USERNAME=user@example.com
KAFKA_PASSWORD=secure-password
API_TOKEN=your-api-token
# Load with set -a so every variable is exported to the expanso-edge
# process (a plain `source` keeps them shell-local), then run
set -a
source /etc/expanso/pipeline.env
set +a
expanso-edge run --config pipeline.yaml
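Note that the variables must be exported to be visible to the expanso-edge process; a plain `VAR=value` assignment stays local to the current shell. A quick demonstration (variable name and value are hypothetical):

```shell
# Start from a clean slate so the demonstration is deterministic
unset KAFKA_USERNAME

KAFKA_USERNAME="user@example.com"   # plain assignment: shell-local only
sh -c 'echo "child sees: [$KAFKA_USERNAME]"'   # prints: child sees: []

export KAFKA_USERNAME               # exported: child processes now inherit it
sh -c 'echo "child sees: [$KAFKA_USERNAME]"'   # prints: child sees: [user@example.com]
```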

Integration Patterns

Systemd Services

Use EnvironmentFile directive to load secrets:

/etc/systemd/system/expanso-edge.service
[Unit]
Description=Expanso Edge Node
After=network.target

[Service]
# Load pipeline secrets from environment file
EnvironmentFile=-/etc/expanso/pipeline.env

ExecStart=/usr/local/bin/expanso-edge run \
  --config /etc/expanso/pipeline.yaml

Restart=on-failure
User=expanso

[Install]
WantedBy=multi-user.target

Create environment file:

sudo tee /etc/expanso/pipeline.env > /dev/null << 'EOF'
KAFKA_USERNAME=user@example.com
KAFKA_PASSWORD=secure-password
API_TOKEN=your-api-token
EOF

sudo chmod 600 /etc/expanso/pipeline.env
sudo chown expanso:expanso /etc/expanso/pipeline.env
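Before starting the service, it is worth confirming the file's mode. The sketch below uses a temporary file so it is runnable anywhere; in practice, point `stat` at /etc/expanso/pipeline.env (GNU coreutils assumed):

```shell
# Sketch: verify an env file has restrictive permissions (GNU stat assumed)
f=$(mktemp)
chmod 600 "$f"
stat -c '%a' "$f"   # prints: 600
rm -f "$f"
```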

Docker Containers

Environment Variables

docker run -d \
  --name expanso-edge \
  -e KAFKA_USERNAME="user@example.com" \
  -e KAFKA_PASSWORD="secure-password" \
  -e API_TOKEN="your-api-token" \
  -v $(pwd)/pipeline.yaml:/etc/expanso/pipeline.yaml:ro \
  ghcr.io/expanso-io/expanso-edge:latest \
  run --config /etc/expanso/pipeline.yaml

Environment File

# Create .env file
cat > pipeline.env << 'EOF'
KAFKA_USERNAME=user@example.com
KAFKA_PASSWORD=secure-password
API_TOKEN=your-api-token
EOF

# Run with env file
docker run -d \
  --name expanso-edge \
  --env-file pipeline.env \
  -v $(pwd)/pipeline.yaml:/etc/expanso/pipeline.yaml:ro \
  ghcr.io/expanso-io/expanso-edge:latest \
  run --config /etc/expanso/pipeline.yaml
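One caveat with `--env-file`: Docker reads raw `KEY=VALUE` lines and does not strip quotes, so `API_TOKEN="abc"` sets the value to the five characters `"abc"`, quotes included. A local sketch of that parsing rule:

```shell
# Docker's --env-file parser splits on the first '=' and keeps everything
# after it verbatim, quotes included:
env_file=$(mktemp)
printf 'API_TOKEN="abc"\n' > "$env_file"
while IFS='=' read -r key value; do
  printf '%s -> [%s]\n' "$key" "$value"
done < "$env_file"
# prints: API_TOKEN -> ["abc"]
rm -f "$env_file"
```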

Kubernetes Deployments

ConfigMap for Non-Sensitive Config + Secrets for Credentials

Create the Secret:

apiVersion: v1
kind: Secret
metadata:
  name: pipeline-credentials
  namespace: expanso-system
type: Opaque
stringData:
  KAFKA_USERNAME: "user@example.com"
  KAFKA_PASSWORD: "secure-password"
  API_TOKEN: "your-api-token"

Create the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: expanso-edge
  namespace: expanso-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: expanso-edge
  template:
    metadata:
      labels:
        app: expanso-edge
    spec:
      containers:
        - name: expanso-edge
          image: ghcr.io/expanso-io/expanso-edge:latest
          args:
            - run
            - --config
            - /etc/expanso/pipeline.yaml
          envFrom:
            - secretRef:
                name: pipeline-credentials
          volumeMounts:
            - name: pipeline-config
              mountPath: /etc/expanso
              readOnly: true
      volumes:
        - name: pipeline-config
          configMap:
            name: expanso-pipeline-config

Alternatively, create the same Secret imperatively with kubectl:

kubectl create secret generic pipeline-credentials \
  --from-literal=KAFKA_USERNAME='user@example.com' \
  --from-literal=KAFKA_PASSWORD='secure-password' \
  --from-literal=API_TOKEN='your-api-token' \
  --namespace=expanso-system
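`stringData` accepts plain values and Kubernetes base64-encodes them into `.data` on write. That encoding is reversible, not encryption, which is one reason to restrict who can read Secrets. The roundtrip, locally:

```shell
# Secret values in .data are base64-encoded; encoding is reversible, not encryption
encoded=$(printf 'secure-password' | base64)
printf '%s' "$encoded" | base64 -d   # prints: secure-password
echo
```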

File-Based Secrets

Some credentials work better as files (TLS certificates, SSH keys).

TLS Certificates Example

pipeline.yaml
output:
  http_client:
    url: https://api.example.com/ingest
    tls:
      enabled: true
      root_cas_file: /etc/expanso/certs/ca.crt
      client_certs:
        - cert_file: /etc/expanso/certs/client.crt
          key_file: /etc/expanso/certs/client.key

Mounting Files

Docker:

docker run -d \
  --name expanso-edge \
  -v $(pwd)/certs:/etc/expanso/certs:ro \
  -v $(pwd)/pipeline.yaml:/etc/expanso/pipeline.yaml:ro \
  ghcr.io/expanso-io/expanso-edge:latest \
  run --config /etc/expanso/pipeline.yaml

Kubernetes:

volumes:
  - name: tls-certs
    secret:
      secretName: expanso-tls-certs
      defaultMode: 0400

volumeMounts:
  - name: tls-certs
    mountPath: /etc/expanso/certs
    readOnly: true

Common Patterns

Database Credentials

output:
  sql_insert:
    driver: postgres
    dsn: "postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:5432/${DB_NAME}?sslmode=require"
    table: events
    columns: ["id", "timestamp", "data"]
    args_mapping: |
      root = [
        this.id,
        this.timestamp,
        this.data.string()
      ]
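Note that passwords containing URL metacharacters (`@`, `:`, `/`, `?`, `#`) break a URL-style DSN unless they are percent-encoded first. A sketch of the encoding step, shelling out to python3 (the sample password is hypothetical, and python3 is assumed to be installed):

```shell
# Percent-encode a password before interpolating it into a URL-style DSN
export DB_PASSWORD='p@ss:word'
python3 -c 'import os, urllib.parse; print(urllib.parse.quote(os.environ["DB_PASSWORD"], safe=""))'
# prints: p%40ss%3Aword
```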

API Authentication

output:
  http_client:
    url: ${API_ENDPOINT}/events
    headers:
      Authorization: Bearer ${API_TOKEN}
      X-API-Key: ${API_KEY}
    verb: POST

Message Queue Credentials

input:
  kafka:
    addresses: ["${KAFKA_BROKER}"]
    topics: ["${KAFKA_TOPIC}"]
    consumer_group: ${KAFKA_CONSUMER_GROUP}
    sasl:
      mechanism: ${KAFKA_SASL_MECHANISM} # e.g., PLAIN, SCRAM-SHA-256
      user: ${KAFKA_USERNAME}
      password: ${KAFKA_PASSWORD}
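Because the mechanism itself is interpolated, a typo in KAFKA_SASL_MECHANISM only surfaces at connect time. A startup guard can fail fast instead (the mechanism list below is illustrative; match it to what your broker actually supports):

```shell
# Fail fast on an unsupported SASL mechanism before starting the pipeline
export KAFKA_SASL_MECHANISM="SCRAM-SHA-256"
case "$KAFKA_SASL_MECHANISM" in
  PLAIN|SCRAM-SHA-256|SCRAM-SHA-512)
    echo "mechanism ok: $KAFKA_SASL_MECHANISM" ;;
  *)
    echo "unsupported SASL mechanism: $KAFKA_SASL_MECHANISM" >&2
    exit 1 ;;
esac
```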

Best Practices

1. Use Descriptive Variable Names

✅ Good:

POSTGRES_PASSWORD=secret
KAFKA_SASL_USERNAME=producer
API_TOKEN=abc123

❌ Bad:

PASS=secret
USER1=producer
TOKEN=abc123

2. Never Commit Secrets to Version Control

Add environment files to .gitignore:

# .gitignore
.env
*.env
pipeline.env
secrets/

3. Restrict File Permissions

chmod 600 /etc/expanso/pipeline.env
chown expanso:expanso /etc/expanso/pipeline.env

4. Separate Environments

Use different environment files per environment:

/etc/expanso/
├── pipeline.yaml
├── dev.env
├── staging.env
└── prod.env
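One simple way to wire this layout into the start command is to derive the file name from a variable (the ENVIRONMENT name and dev default below are illustrative, not an expanso-edge convention):

```shell
# Pick the env file for the target environment; default to dev
ENVIRONMENT="${ENVIRONMENT:-dev}"
ENV_FILE="/etc/expanso/${ENVIRONMENT}.env"
echo "loading: $ENV_FILE"
# then:
#   set -a; source "$ENV_FILE"; set +a
#   expanso-edge run --config /etc/expanso/pipeline.yaml
```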

5. Document Required Variables

Create a template showing required variables:

# pipeline.env.template
KAFKA_USERNAME=
KAFKA_PASSWORD=
API_TOKEN=
DB_HOST=
DB_PASSWORD=

6. Validate Variables at Startup

Check for missing variables before running:

#!/bin/bash
REQUIRED_VARS="KAFKA_USERNAME KAFKA_PASSWORD API_TOKEN"

for var in $REQUIRED_VARS; do
  if [ -z "${!var}" ]; then
    echo "Error: $var is not set"
    exit 1
  fi
done

expanso-edge run --config pipeline.yaml

Limitations

When to Use External Secrets Instead

Local secrets work well for:

  • Development and testing
  • Small-scale deployments
  • Single-node installations

Consider External Secret Managers when you need:

  • Automated rotation: Credentials expire and renew automatically
  • Centralized management: Hundreds of credentials across many nodes
  • Audit trails: Track who accessed which secrets when
  • Dynamic secrets: Generate short-lived credentials on demand
  • Compliance: Meet regulatory requirements (SOC 2, HIPAA, PCI-DSS)

Troubleshooting

Variable Not Interpolated

Symptom: Literal string ${VAR_NAME} appears in logs

Cause: Variable not set in environment

Solution:

# Check whether the variable is present in the environment
# (printenv prints nothing if it is unset or not exported)
printenv VAR_NAME

# Set and export the missing variable
export VAR_NAME="value"

Permission Denied Reading File

Symptom: Error: permission denied: /etc/expanso/certs/client.key

Solution:

# Fix file permissions
sudo chmod 600 /etc/expanso/certs/client.key
sudo chown expanso:expanso /etc/expanso/certs/client.key

Environment File Not Loaded

Symptom: Variables not available in systemd service

Solution: Verify EnvironmentFile path in service:

sudo systemctl cat expanso-edge | grep EnvironmentFile
sudo systemctl daemon-reload
sudo systemctl restart expanso-edge

Next Steps