Send K3s Logs to Multiple Destinations

Send logs to both S3 for long-term storage and Elasticsearch for real-time search using the broker output pattern.

Pipeline

input:
  # Stream logs from every container in every namespace on the node
  subprocess:
    name: kubectl
    args:
      - logs
      - --all-containers=true
      - --prefix=true
      - --follow
      - --all-namespaces
    codec: lines
    restart_on_exit: true

pipeline:
  processors:
    # Parse JSON logs; wrap plain-text lines and tag them as info
    - mapping: |
        root = this.parse_json().catch({
          "message": this,
          "level": "info"
        })
        root.node_id = env("NODE_ID")
        root.timestamp = now()

output:
  broker:
    pattern: fan_out
    outputs:
      # Long-term storage in S3
      - aws_s3:
          bucket: edge-k3s-logs-archive
          path: 'logs/${! env("NODE_ID") }/${! timestamp_unix() }.jsonl'
          batching:
            count: 5000
            period: 5m

      # Real-time search in Elasticsearch
      - elasticsearch_v2:
          urls: ['https://elasticsearch.company.com:9200']
          index: 'k3s-logs-${! now().ts_format("2006-01-02") }'
          batching:
            count: 100
            period: 10s

What This Does

  • Fan-out pattern: Sends each log to both destinations simultaneously
  • S3 for archival: Large batches (5000 logs, 5 minutes) reduce API costs
  • Elasticsearch for search: Small batches (100 logs, 10 seconds) enable near-real-time queries
  • JSON parsing: Attempts to parse each log line as JSON, falling back to wrapping it as plain text (see the test sketch after this list)
  • Daily indices: Elasticsearch uses date-based indices for easier management
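
The parsing fallback can be exercised with the config unit-test runner that ships with Benthos-style distributions (for example `benthos test` or `rpk connect test`). The sketch below is an illustration, not part of the original pipeline; the node name and log line are made up, and the test-spec field names are worth checking against your version:

tests:
  - name: plain text log is wrapped
    target_processors: '/pipeline/processors'
    environment:
      NODE_ID: edge-node-1
    input_batch:
      - content: 'E0101 12:00:00 kubelet restarting'
    output_batches:
      - - json_contains:
            message: 'E0101 12:00:00 kubelet restarting'
            level: info
            node_id: edge-node-1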

Fan-Out Pattern

The broker output with pattern: fan_out duplicates each log message and sends it to all configured outputs. Both outputs must succeed for the message to be acknowledged.
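
Because acknowledgment requires every output to succeed, a prolonged Elasticsearch outage will eventually apply back-pressure to the input even though S3 keeps receiving the messages already dispatched. If losing the search copy during such an outage is acceptable, one possible variation (an assumption on our part, not part of the pipeline above) is to wrap the Elasticsearch output in a fallback that drops messages it cannot index, while the S3 output stays strictly acknowledged:

output:
  broker:
    pattern: fan_out
    outputs:
      - aws_s3:
          bucket: edge-k3s-logs-archive
          path: 'logs/${! env("NODE_ID") }/${! timestamp_unix() }.jsonl'
          # batching omitted for brevity
      # Try Elasticsearch first; if it fails, drop the message rather
      # than blocking the whole broker.
      - fallback:
          - elasticsearch_v2:
              urls: ['https://elasticsearch.company.com:9200']
              index: 'k3s-logs-${! now().ts_format("2006-01-02") }'
          - drop: {}

The trade-off is that logs which fail to index are silently missing from the search copy; the archival copy in S3 is unaffected.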

Different Batching Strategies

S3 batching (5000 logs / 5 minutes):

  • Optimized for cost (fewer API calls)
  • Acceptable latency for archival use case
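
If the goal is one newline-delimited object per batch (as the .jsonl extension suggests), a common refinement, sketched here as an assumption rather than part of the original config, is to add an archive processor to the S3 batching policy so the batch is joined before upload:

- aws_s3:
    bucket: edge-k3s-logs-archive
    path: 'logs/${! env("NODE_ID") }/${! timestamp_unix() }.jsonl'
    batching:
      count: 5000
      period: 5m
      processors:
        # Join the batch into a single newline-delimited object
        - archive:
            format: lines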

Elasticsearch batching (100 logs / 10 seconds):

  • Optimized for freshness (recent logs appear quickly)
  • Higher API call rate acceptable for search use case
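
As a rough illustration, assuming a steady 20 log lines per second from the node: the Elasticsearch batch hits its count of 100 in about 5 seconds, so the 10-second period rarely fires, while the S3 batch reaches 5000 lines after roughly 4 minutes, just under its 5-minute period. Whichever of count or period is reached first triggers the flush.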

Use Cases

Compliance + operations: Store all logs in S3 for compliance, search recent logs in Elasticsearch for debugging

Cost optimization: Keep 7 days in Elasticsearch, years in S3

Disaster recovery: If Elasticsearch goes down, logs already dispatched still reach S3; note that the fan-out broker keeps retrying the failed output and will eventually apply back-pressure unless the Elasticsearch output is decoupled (see the fallback sketch above)

Next Steps