This guide helps you migrate from Agent Protocol v1 to v2. The v2 protocol offers significant improvements in performance, reliability, and observability while maintaining conceptual compatibility.
## Why Migrate?
| Improvement | v1 | v2 |
|---|---|---|
| Latency | ~50μs per request | ~10-20μs per request |
| Throughput | Single connection | Pooled connections (4x+ throughput) |
| Reliability | Basic timeouts | Circuit breakers, health tracking |
| Streaming | Limited | Full bidirectional streaming |
| Observability | Manual | Built-in Prometheus metrics |
| NAT Traversal | Not supported | Reverse connections |
## Quick Migration

### Minimal Change (Drop-in)

If you just want the pooling benefits with minimal code changes:
Before (v1):

```rust
use AgentClient; // full import path elided in the original

let client = AgentClient::unix_socket(/* path; elided in the original */).await?;
let response = client.send_event(event).await?;
```

After (v2):

```rust
use AgentPool; // full import path elided in the original

let pool = AgentPool::new();
pool.add_agent(/* agent config; arguments elided in the original */).await?;
let response = pool.send_request_headers(/* event; arguments elided */).await?;
```
The AgentPool automatically:
- Maintains 4 connections per agent
- Load balances requests
- Tracks health and circuit breaker state
- Exports Prometheus metrics
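Conceptually, the pool keeps several connections per agent and hands requests out round-robin across them. A toy sketch of that dispatch idea (the `RoundRobin` type here is illustrative, not the pool's actual internals):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Toy round-robin dispatcher over a fixed set of connection slots.
// Illustration of the load-balancing idea only, not AgentPool itself.
struct RoundRobin {
    slots: Vec<String>, // stand-ins for pooled connections
    next: AtomicUsize,
}

impl RoundRobin {
    fn new(slots: Vec<String>) -> Self {
        Self { slots, next: AtomicUsize::new(0) }
    }

    // Pick the next connection; wraps around after the last slot.
    fn pick(&self) -> &str {
        let i = self.next.fetch_add(1, Ordering::Relaxed) % self.slots.len();
        &self.slots[i]
    }
}

fn main() {
    let pool = RoundRobin::new((0..4).map(|i| format!("conn-{i}")).collect());
    let picks: Vec<&str> = (0..6).map(|_| pool.pick()).collect();
    println!("{picks:?}");
    assert_eq!(picks[0], "conn-0");
    assert_eq!(picks[4], "conn-0"); // wrapped around after conn-3
}
```

Because the counter is atomic, the same dispatcher can be shared across tasks, which is the same property the pool's `Clone + Send + Sync` bound gives you.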
## Step-by-Step Migration

### 1. Update Dependencies

```toml
# Cargo.toml
[dependencies]
# Crate name elided in the original; bump your agent crate to 0.3
agent-crate = "0.3" # v2 included
```
### 2. Import v2 Types

```rust
// Before (full import paths were elided in the original)
use AgentClient;

// After
use AgentPool;
```
### 3. Replace Client with Pool

Before:

```rust
// Create individual clients
let waf_client = AgentClient::unix_socket(/* path; elided in the original */).await?;
let auth_client = AgentClient::grpc(/* endpoint; elided in the original */).await?;
// Store clients somewhere
```

After:

```rust
// Create a single pool for all agents
let pool = AgentPool::new();

// Add agents (transport auto-detected)
pool.add_agent(/* waf agent config; elided in the original */).await?;
pool.add_agent(/* auth agent config; elided in the original */).await?;

// Pool is Clone + Send + Sync
let shared_pool = pool.clone();
```

### 4. Update Request Sending

Before:

```rust
let event = AgentEvent { /* fields elided in the original */ };
let response = client.send_event(event).await?;
```

After:

```rust
use RequestHeadersEvent; // full import path elided in the original

let event = RequestHeadersEvent {
    // fields elided in the original; the protobuf definition includes
    // correlation_id, method, uri, and headers
};
let response = pool.send_request_headers(event).await?;
```
### 5. Update Response Handling

Before:

```rust
match response.action { /* arms elided in the original */ }
```

After:

```rust
match response.decision { /* arms elided in the original */ }
```
### 6. Add Error Handling for New Error Types

```rust
use AgentProtocolError; // full import path elided in the original

match pool.send_request_headers(event).await {
    // match on the new AgentProtocolError variants;
    // arms elided in the original
}
```
## Configuration Migration

### KDL Configuration

Before (v1):

```kdl
agents {
    agent "waf" type="waf" {
        unix-socket "/var/run/waf.sock"
        timeout-ms 100
        failure-mode "open"
    }
}
```

After (v2):

```kdl
agents {
    agent "waf" type="waf" {
        unix-socket "/var/run/waf.sock"
        protocol-version 2 // Enable v2
        connections 4      // Connection pool size
        timeout-ms 100
        failure-mode "open"

        // New v2 options
        circuit-breaker {
            failure-threshold 5
            reset-timeout-seconds 30
        }
    }
}
```
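The `circuit-breaker` block maps onto a small state machine: trip open after `failure-threshold` consecutive failures, then allow a trial request once `reset-timeout-seconds` has elapsed. A conceptual sketch (types and method names here are made up for illustration, not the library's):

```rust
use std::time::{Duration, Instant};

// Minimal circuit breaker: closed until `threshold` consecutive failures,
// then open; after `reset_timeout` a trial request is allowed again.
// Illustrative only, not the pool's actual implementation.
struct CircuitBreaker {
    threshold: u32,
    reset_timeout: Duration,
    failures: u32,
    opened_at: Option<Instant>,
}

impl CircuitBreaker {
    fn new(threshold: u32, reset_timeout: Duration) -> Self {
        Self { threshold, reset_timeout, failures: 0, opened_at: None }
    }

    fn allow(&self) -> bool {
        match self.opened_at {
            None => true, // closed: requests flow normally
            Some(t) => t.elapsed() >= self.reset_timeout, // half-open trial
        }
    }

    fn record(&mut self, ok: bool) {
        if ok {
            self.failures = 0;
            self.opened_at = None; // any success closes the breaker
        } else {
            self.failures += 1;
            if self.failures >= self.threshold {
                self.opened_at = Some(Instant::now());
            }
        }
    }
}

fn main() {
    // failure-threshold 5, reset-timeout-seconds 30, as in the KDL above
    let mut cb = CircuitBreaker::new(5, Duration::from_secs(30));
    for _ in 0..5 { cb.record(false); }
    assert!(!cb.allow()); // open: requests are rejected immediately
    cb.record(true);
    assert!(cb.allow()); // a success closes the breaker again
}
```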
### Rust Configuration

Before:

```rust
let client = AgentClient::unix_socket(/* path; elided in the original */).await?;
```

After:

```rust
let config = AgentPoolConfig { /* fields elided in the original */ };
let pool = AgentPool::with_config(config);
pool.add_agent(/* agent config; elided in the original */).await?;
```
## Feature-by-Feature Migration

### Body Streaming

Before (v1):

```rust
// Send the body as a single event
let body_event = AgentEvent { /* full body payload; fields elided in the original */ };
client.send_event(body_event).await?;
```

After (v2):

```rust
// Stream the body in chunks
for (i, chunk) in body_chunks.enumerate() {
    // loop body elided in the original; v2 sends one
    // RequestBodyChunkEvent per chunk
}
```
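The elided loop body would produce one chunk event per iteration. The framing idea, fixed-size chunks carrying a `chunk_index` and an `is_last` flag matching the `RequestBodyChunkEvent` fields shown later in this guide, can be sketched with a local stand-in struct:

```rust
// Split a body into fixed-size chunks carrying the v2 framing fields.
// `Chunk` is a local illustration of RequestBodyChunkEvent's shape.
#[derive(Debug, PartialEq)]
struct Chunk {
    chunk_index: u32,
    is_last: bool,
    data: Vec<u8>,
}

// Requires chunk_size > 0; an empty body yields no chunks.
fn chunk_body(body: &[u8], chunk_size: usize) -> Vec<Chunk> {
    let total = body.chunks(chunk_size).count();
    body.chunks(chunk_size)
        .enumerate()
        .map(|(i, data)| Chunk {
            chunk_index: i as u32,
            is_last: i + 1 == total,
            data: data.to_vec(),
        })
        .collect()
}

fn main() {
    let chunks = chunk_body(b"hello world", 4); // "hell" | "o wo" | "rld"
    assert_eq!(chunks.len(), 3);
    assert!(chunks[2].is_last);
    assert_eq!(chunks[2].data, b"rld");
}
```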
### Health Checks

Before (v1):

```rust
// Manual health check
match client.ping().await { /* arms elided in the original */ }
```

After (v2):

```rust
// Automatic health tracking
let health = pool.get_health(/* agent id; elided in the original */)?;
// The original printed three health fields here; the format strings
// were elided.
println!("{health:?}");
```
### Metrics

Before (v1):

```rust
// Manual metrics collection
counter!(/* metric name elided in the original */).increment(1);
let start = Instant::now();
let result = client.send_event(event).await;
histogram!(/* metric name elided in the original */).record(start.elapsed());
```

After (v2):

```rust
// Automatic metrics export
let prometheus_output = pool.metrics_collector.export_prometheus();
// Expose via /metrics endpoint

// Or get a snapshot for custom handling
let snapshot = pool.protocol_metrics.snapshot();
// The original printed two snapshot fields here; the format strings
// were elided.
println!("{snapshot:?}");
```
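The exported string is Prometheus text exposition format. If you ever need to render metrics yourself, the format is simple; a sketch for a single counter (the metric name and label here are made up):

```rust
// Render one counter in Prometheus text exposition format:
// a "# TYPE" line followed by name{labels} value.
// The metric name and labels are illustrative.
fn render_counter(name: &str, labels: &[(&str, &str)], value: u64) -> String {
    let label_str = labels
        .iter()
        .map(|(k, v)| format!("{k}=\"{v}\""))
        .collect::<Vec<_>>()
        .join(",");
    format!("# TYPE {name} counter\n{name}{{{label_str}}} {value}\n")
}

fn main() {
    let out = render_counter("agent_requests_total", &[("agent", "waf")], 42);
    assert_eq!(
        out,
        "# TYPE agent_requests_total counter\nagent_requests_total{agent=\"waf\"} 42\n"
    );
    print!("{out}");
}
```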
## Agent-Side Migration
If you maintain custom agents, update the server implementation:
### gRPC Agents

The protobuf definitions are compatible. Update to support the new message types:

```protobuf
// New message types in v2
message RequestHeadersEvent {
    string correlation_id = 1;
    string method = 2;
    string uri = 3;
    map<string, StringList> headers = 4;
    // ...
}

message RequestBodyChunkEvent {
    string correlation_id = 1;
    bytes data = 2;
    bool is_last = 3;
    uint32 chunk_index = 4;
    // ...
}
```
### UDS Agents

v2 UDS uses binary MessagePack encoding for better performance:

```rust
// The server handshake response includes encoding negotiation
let handshake = HandshakeResponse { /* fields elided in the original */ };
```
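The compatibility notes below describe the wire format as length-prefixed MessagePack. The framing layer can be illustrated independently of the payload encoding; this sketch assumes a 4-byte big-endian length prefix (the actual prefix width may differ, so check the protocol spec):

```rust
// Length-prefixed framing: [u32 big-endian length][payload bytes].
// The 4-byte prefix width is an assumption for illustration.
fn encode_frame(payload: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(4 + payload.len());
    out.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    out.extend_from_slice(payload);
    out
}

// Returns (payload, remaining bytes) once a complete frame is buffered,
// or None if more bytes are needed.
fn decode_frame(buf: &[u8]) -> Option<(&[u8], &[u8])> {
    if buf.len() < 4 {
        return None;
    }
    let len = u32::from_be_bytes([buf[0], buf[1], buf[2], buf[3]]) as usize;
    if buf.len() < 4 + len {
        return None; // partial frame: wait for more bytes
    }
    Some((&buf[4..4 + len], &buf[4 + len..]))
}

fn main() {
    let frame = encode_frame(b"msgpack-bytes");
    let (payload, rest) = decode_frame(&frame).unwrap();
    assert_eq!(payload, b"msgpack-bytes");
    assert!(rest.is_empty());
}
```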
## Rollback Plan

If you need to roll back to v1:

- Keep the v1 client code behind a feature flag during migration
- Monitor metrics during rollout
- Roll out gradually using traffic splitting
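Gradual rollout by traffic splitting can be as simple as hashing a stable per-request key into a percentage bucket, so each request sticks to one protocol version across retries. A sketch (the routing function is illustrative; the v1/v2 send paths are left out):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Route a stable fraction of traffic to the v2 path by hashing a
// per-request key into 100 buckets. Illustrative only.
fn use_v2(request_key: &str, v2_percent: u64) -> bool {
    let mut h = DefaultHasher::new();
    request_key.hash(&mut h);
    h.finish() % 100 < v2_percent
}

fn main() {
    // 0% routes nothing to v2; 100% routes everything.
    assert!(!use_v2("req-123", 0));
    assert!(use_v2("req-123", 100));
    // The same key always lands in the same bucket (sticky rollout).
    assert_eq!(use_v2("req-123", 30), use_v2("req-123", 30));
}
```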
## Compatibility Notes

### Wire Protocol
- v2 UDS uses length-prefixed MessagePack (or JSON with negotiation)
- v2 gRPC uses updated protobuf messages
- v1 agents cannot connect to v2 pool (and vice versa)
### Breaking Changes

| Change | Migration |
|---|---|
| `AgentClient` → `AgentPool` | Use the pool pattern |
| `send_event()` → `send_request_headers()` | Update method calls |
| `Action` → `Decision` | Update response handling |
| `EventType` enum removed | Use typed methods |
| Request ID → Correlation ID | Use string correlation IDs |
### Deprecated (Still Working)

| Deprecated | Replacement |
|---|---|
| `AgentClient` (v1) | `AgentPool` (v2) |
| JSON-only UDS | MessagePack UDS |
| Manual health checks | Automatic health tracking |
## Troubleshooting

### “Connection refused” after migration

Ensure the agent supports the v2 protocol. Check the handshake:

```shell
# Test the UDS connection (the original piped command was elided);
# socat is one way to poke the socket interactively:
socat - UNIX-CONNECT:/var/run/waf.sock
```
### Circuit breaker keeps opening

Tune the thresholds for your error rates:

```rust
let config = AgentPoolConfig {
    /* circuit-breaker fields elided in the original; the KDL equivalents
       are failure-threshold and reset-timeout-seconds */
};
```
### Higher latency than expected

Check the connection pool size and load balancing:

```rust
// For low-latency workloads
let config = AgentPoolConfig {
    /* fields elided in the original; increase connections_per_agent */
};
```
### Memory usage increased

Large bodies may need mmap buffers:

```toml
[dependencies]
# Crate name elided in the original
agent-crate = { version = "0.3", features = ["mmap-buffers"] }
```
## Next Steps

After migration:

- **Enable metrics export** - Add a `/metrics` endpoint for Prometheus
- **Configure alerts** - Set up alerts for circuit breaker state
- **Tune pool size** - Adjust `connections_per_agent` based on load testing
- **Consider reverse connections** - For agents behind NAT/firewalls

See also: