Kubernetes provides the most flexible deployment model for Sentinel, supporting multiple patterns from simple sidecar deployments to sophisticated service mesh integrations.
## Deployment Patterns
| Pattern | Description | Best For |
|---|---|---|
| Sidecar | Agents in same pod as Sentinel | Simple setups, low latency |
| Service | Agents as separate deployments | Shared agents, independent scaling |
| DaemonSet | Sentinel on every node | Edge/gateway deployments |
### Pattern 1: Sidecar Deployment

Agents run as sidecar containers in the same pod as Sentinel, communicating over a shared Unix-socket volume or the pod's loopback interface.
```yaml
# sentinel-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentinel
  labels:
    app: sentinel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sentinel
  template:
    metadata:
      labels:
        app: sentinel
    spec:
      containers:
        # ─────────────────────────────────────────────────
        # Sentinel Proxy
        # ─────────────────────────────────────────────────
        - name: sentinel
          image: ghcr.io/raskell-io/sentinel:latest
          ports:
            - name: http
              containerPort: 8080
            - name: admin
              containerPort: 9090
          volumeMounts:
            - name: config
              mountPath: /etc/sentinel
              readOnly: true
            - name: sockets
              mountPath: /var/run/sentinel
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "1000m"
              memory: "512Mi"
          livenessProbe:
            httpGet:
              path: /health
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: admin
            initialDelaySeconds: 5
            periodSeconds: 5
        # ─────────────────────────────────────────────────
        # Auth Agent (sidecar)
        # ─────────────────────────────────────────────────
        - name: auth-agent
          image: ghcr.io/raskell-io/sentinel-auth:latest
          args:
            - "--socket"
            - "/var/run/sentinel/auth.sock"
          volumeMounts:
            - name: sockets
              mountPath: /var/run/sentinel
            - name: auth-secrets
              mountPath: /etc/auth/secrets
              readOnly: true
          env:
            - name: AUTH_SECRET
              valueFrom:
                secretKeyRef:
                  name: sentinel-secrets
                  key: auth-secret
          resources:
            requests:
              cpu: "50m"
              memory: "64Mi"
            limits:
              cpu: "200m"
              memory: "128Mi"
        # ─────────────────────────────────────────────────
        # WAF Agent (sidecar, gRPC)
        # ─────────────────────────────────────────────────
        - name: waf-agent
          image: ghcr.io/raskell-io/sentinel-waf:latest
          args:
            - "--grpc"
            - "127.0.0.1:50051"
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            grpc:
              port: 50051
            initialDelaySeconds: 5
            periodSeconds: 10
      volumes:
        - name: config
          configMap:
            name: sentinel-config
        - name: sockets
          emptyDir: {}
        - name: auth-secrets
          secret:
            secretName: sentinel-secrets
---
apiVersion: v1
kind: Service
metadata:
  name: sentinel
spec:
  selector:
    app: sentinel
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: admin
      port: 9090
      targetPort: 9090
  type: LoadBalancer
```
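To roll this out and confirm all three containers come up (assuming the manifest is saved under the filename in the comment above):

```shell
kubectl apply -f sentinel-deployment.yaml
kubectl get pods -l app=sentinel
# Each pod should report 3/3 containers ready (sentinel, auth-agent, waf-agent)
```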
#### ConfigMap

```yaml
# sentinel-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sentinel-config
data:
  sentinel.kdl: |
    server {
        listen "0.0.0.0:8080"
    }

    admin {
        listen "0.0.0.0:9090"
    }

    agents {
        agent "auth" type="auth" {
            unix-socket "/var/run/sentinel/auth.sock"
            events "request_headers"
            timeout-ms 50
            failure-mode "closed"
        }

        agent "waf" type="waf" {
            grpc "http://127.0.0.1:50051"
            events "request_headers" "request_body"
            timeout-ms 100
            failure-mode "open"
        }
    }

    upstreams {
        upstream "api" {
            target "api-service.default.svc.cluster.local:80"
        }
    }

    routes {
        route "api" {
            matches { path-prefix "/api/" }
            upstream "api"
            agents "auth" "waf"
        }
    }
```
#### Secrets

```yaml
# sentinel-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: sentinel-secrets
type: Opaque
data:
  auth-secret: <base64-encoded-secret>
```
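The `data` field expects base64-encoded values. A quick sketch of producing one (the secret value here is a placeholder):

```shell
# Base64-encode a secret value for the manifest (example value only)
echo -n 'my-auth-secret' | base64
# -> bXktYXV0aC1zZWNyZXQ=

# Or let kubectl handle the encoding for you:
# kubectl create secret generic sentinel-secrets --from-literal=auth-secret='my-auth-secret'
```

Note the `-n` flag: a trailing newline in the encoded value is a common source of hard-to-debug auth failures.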
### Pattern 2: Separate Service

Agents run as independent deployments, accessed via Kubernetes services.
```yaml
# waf-agent-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: waf-agent
  labels:
    app: waf-agent
spec:
  replicas: 3
  selector:
    matchLabels:
      app: waf-agent
  template:
    metadata:
      labels:
        app: waf-agent
    spec:
      containers:
        - name: waf-agent
          image: ghcr.io/raskell-io/sentinel-waf:latest
          args:
            - "--grpc"
            - "0.0.0.0:50051"
          ports:
            - name: grpc
              containerPort: 50051
          resources:
            requests:
              cpu: "200m"
              memory: "256Mi"
            limits:
              cpu: "1000m"
              memory: "1Gi"
          livenessProbe:
            grpc:
              port: 50051
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            grpc:
              port: 50051
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: waf-agent
spec:
  selector:
    app: waf-agent
  ports:
    - name: grpc
      port: 50051
      targetPort: 50051
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: waf-agent-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: waf-agent
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
Update the Sentinel config to point at the service:

```kdl
agent "waf" type="waf" {
    grpc "http://waf-agent.default.svc.cluster.local:50051"
    events "request_headers" "request_body"
    timeout-ms 200
    failure-mode "open"
}
```
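Before wiring the service into Sentinel, it can help to verify it is reachable over gRPC. A sketch using `grpcurl`, assuming the agent exposes server reflection and the standard gRPC health service (the liveness probes above already rely on the latter):

```shell
# From a pod inside the cluster
grpcurl -plaintext waf-agent.default.svc.cluster.local:50051 \
  grpc.health.v1.Health/Check
# A healthy agent responds with: { "status": "SERVING" }
```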
### Pattern 3: DaemonSet (Edge Gateway)

Run Sentinel on every node for edge/gateway scenarios, binding directly to host ports 80 and 443.
```yaml
# sentinel-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sentinel-edge
  labels:
    app: sentinel-edge
spec:
  selector:
    matchLabels:
      app: sentinel-edge
  template:
    metadata:
      labels:
        app: sentinel-edge
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: sentinel
          image: ghcr.io/raskell-io/sentinel:latest
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
          volumeMounts:
            - name: config
              mountPath: /etc/sentinel
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
      volumes:
        - name: config
          configMap:
            name: sentinel-edge-config
      tolerations:
        # Cover both the current and the pre-1.25 control-plane taint keys
        - key: node-role.kubernetes.io/control-plane
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
```
## Helm Chart
The official Helm chart provides the easiest way to deploy Sentinel on Kubernetes with production-ready defaults.
Repository: github.com/raskell-io/sentinel-helm
### Installation

```shell
# Install from source (adjust the chart path to the repo layout)
git clone https://github.com/raskell-io/sentinel-helm
helm install sentinel ./sentinel-helm

# Install with custom values
helm install sentinel ./sentinel-helm -f values.yaml

# Upgrade
helm upgrade sentinel ./sentinel-helm -f values.yaml
```
### Quick Start

```yaml
# values.yaml
replicaCount: 2

config:
  raw: |
    listeners {
        listener "http" {
            address "0.0.0.0:80"
            protocol "http"
        }
    }

    routes {
        route "api" {
            matches { path-prefix "/api" }
            upstream "backend"
        }
    }

    upstreams {
        upstream "backend" {
            target "my-service:8080"
            health-check { path "/health" }
        }
    }
```
### Production Example

```yaml
# production-values.yaml
replicaCount: 3

config:
  raw: |
    listeners {
        listener "https" {
            address "0.0.0.0:443"
            protocol "https"
            tls {
                cert "/etc/sentinel/certs/tls.crt"
                key "/etc/sentinel/certs/tls.key"
            }
        }
    }

    routes {
        route "api" {
            matches { path-prefix "/" }
            upstream "backend"
        }
    }

    upstreams {
        upstream "backend" {
            target "api-service.default.svc.cluster.local:80"
            load-balancing "round_robin"
            health-check {
                path "/health"
                interval-secs 10
            }
        }
    }

# Resources
resources:
  requests:
    cpu: "200m"
    memory: "256Mi"
  limits:
    cpu: "2000m"
    memory: "1Gi"

# Autoscaling
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70

# High availability
podDisruptionBudget:
  enabled: true
  minAvailable: 2

# Prometheus monitoring
serviceMonitor:
  enabled: true
  interval: 15s

# Ingress
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: api.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: api-tls
      hosts:
        - api.example.com

# TLS certificates
extraVolumes:
  - name: tls-certs
    secret:
      secretName: sentinel-tls

extraVolumeMounts:
  - name: tls-certs
    mountPath: /etc/sentinel/certs
    readOnly: true
```
### Using an Existing ConfigMap

If you manage configuration separately:

```yaml
config:
  existingConfigMap: my-sentinel-config
  configKey: sentinel.kdl
```
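The referenced ConfigMap can be created directly from a local config file (names here mirror the values above):

```shell
kubectl create configmap my-sentinel-config \
  --from-file=sentinel.kdl=./sentinel.kdl
```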
### Chart Features
| Feature | Description |
|---|---|
| Secure defaults | Non-root user (65534), read-only filesystem, no privilege escalation |
| ConfigMap | Inline KDL config or reference existing ConfigMap |
| HPA | Horizontal Pod Autoscaler for automatic scaling |
| PDB | Pod Disruption Budget for high availability |
| ServiceMonitor | Prometheus Operator integration |
| Ingress | Optional ingress with TLS support |
| Extra volumes | Mount TLS certs, secrets, etc. |
### All Configuration Options

See the chart's `values.yaml` for all available options.
## Service Mesh Integration

### Istio
```yaml
# sentinel-virtualservice.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: sentinel
spec:
  hosts:
    - sentinel
  http:
    - route:
        - destination:
            host: sentinel
            port:
              number: 8080
      timeout: 30s
      retries:
        attempts: 3
        perTryTimeout: 10s
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: sentinel
spec:
  host: sentinel
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        h2UpgradePolicy: UPGRADE
    loadBalancer:
      simple: LEAST_CONN
```
### Linkerd

```yaml
# Add the inject annotation to the pod template (spec.template.metadata),
# not to the Deployment's top-level metadata
metadata:
  annotations:
    linkerd.io/inject: enabled
```
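Alternatively, the `linkerd` CLI can patch the pod template for you:

```shell
kubectl get deploy sentinel -o yaml \
  | linkerd inject - \
  | kubectl apply -f -
```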
## Observability

### Prometheus ServiceMonitor
```yaml
# sentinel-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sentinel
  labels:
    app: sentinel
spec:
  selector:
    matchLabels:
      app: sentinel
  endpoints:
    - port: admin
      path: /metrics
      interval: 15s
```
### Grafana Dashboard

```yaml
# sentinel-dashboard-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: sentinel-dashboard
  labels:
    grafana_dashboard: "1"
data:
  sentinel.json: |
    {
      "title": "Sentinel Proxy",
      "panels": [
        {
          "title": "Request Rate",
          "targets": [
            { "expr": "rate(sentinel_requests_total[5m])" }
          ]
        }
      ]
    }
```
### Logging with Fluentd

```yaml
# fluentd-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/sentinel*.log
      pos_file /var/log/sentinel.pos
      tag sentinel.*
      <parse>
        @type json
      </parse>
    </source>

    <match sentinel.**>
      @type elasticsearch
      host elasticsearch
      port 9200
      index_name sentinel
    </match>
```
## Network Policies

```yaml
# sentinel-networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sentinel
spec:
  podSelector:
    matchLabels:
      app: sentinel
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow from ingress controller
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
    # Allow admin from monitoring namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: monitoring
      ports:
        - protocol: TCP
          port: 9090
  egress:
    # Allow to upstream services
    - to:
        - namespaceSelector:
            matchLabels:
              name: backend
      ports:
        - protocol: TCP
          port: 80
    # Allow to agent services
    - to:
        - podSelector:
            matchLabels:
              app: waf-agent
      ports:
        - protocol: TCP
          port: 50051
    # Allow DNS (kube-dns pods in any namespace; TCP is needed for large responses)
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```
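With the policy applied, traffic from sources not listed above should be dropped. A quick check from a throwaway pod (pod name and namespace are illustrative):

```shell
# From an allowed namespace this returns the proxied response;
# from a denied namespace it should time out after 5 seconds
kubectl run netcheck --rm -it --image=busybox --restart=Never -- \
  wget -qO- --timeout=5 http://sentinel.default.svc.cluster.local
```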
## Rolling Updates

```yaml
# Update strategy in the deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
```

```shell
# Trigger a rolling update (image tag is an example)
kubectl set image deployment/sentinel sentinel=ghcr.io/raskell-io/sentinel:v1.2.0

# Watch the rollout
kubectl rollout status deployment/sentinel

# Rollback if needed
kubectl rollout undo deployment/sentinel
```
## Troubleshooting

### Pod Not Starting

```shell
# Check pod status
kubectl get pods -l app=sentinel

# Describe pod
kubectl describe pod -l app=sentinel

# Check logs
kubectl logs -l app=sentinel -c sentinel

# Check events
kubectl get events --sort-by=.lastTimestamp
```

### Agent Connection Issues

```shell
# Check service discovery
kubectl get svc,endpoints waf-agent

# Test the gRPC connection (assumes the agent serves the standard health API)
kubectl port-forward svc/waf-agent 50051:50051 &
grpcurl -plaintext localhost:50051 grpc.health.v1.Health/Check

# Check the socket exists (sidecar pattern)
kubectl exec deploy/sentinel -c sentinel -- ls -l /var/run/sentinel/
```

### Resource Issues

```shell
# Check resource usage (requires metrics-server)
kubectl top pods -l app=sentinel

# Check resource limits
kubectl get pod -l app=sentinel -o yaml | grep -A 6 "resources:"

# Check for OOMKilled containers
kubectl describe pod -l app=sentinel | grep -A 3 "Last State"
```

### Config Issues

```shell
# Validate config (exact flag depends on the Sentinel CLI; check `sentinel --help`)
kubectl exec deploy/sentinel -c sentinel -- sentinel --check-config /etc/sentinel/sentinel.kdl

# Inspect the ConfigMap
kubectl get configmap sentinel-config -o yaml
```