RFC2544 Throughput Benchmark
23 Mar, 2026
An RFC2544-aligned validation workflow used to measure throughput ceilings, frame-loss thresholds, and latency under controlled traffic profiles.
Overview
Problem / Goal
The lab objective was to produce a repeatable baseline for forwarding performance before and after policy changes. Rather than rely on anecdotal throughput testing, the workflow needed structured measurements that could be repeated after each platform adjustment.
Topology
Traffic generation and collection nodes were connected across a routed test fabric with QoS policy enabled on the egress edge. Multiple frame sizes were tested to expose differences in packet-processing overhead.
Approach
- Automate each benchmark run from a fixed scenario definition.
- Record throughput, latency, and frame loss for each frame size.
- Export results as markdown tables that could be embedded directly into project documentation.
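The "fixed scenario definition" in the first step can be sketched as a small parameter set that every run is expanded from, so no operator retypes flags between runs. This is an illustrative sketch only; the field names (`profile`, `frame_sizes`, `duration_s`) are assumptions, not the actual tooling's schema.

```python
# Hypothetical scenario definition pinning all run parameters in one place.
SCENARIO = {
    "profile": "rfc2544",
    "frame_sizes": [64, 128, 256, 512, 1024, 1518],
    "duration_s": 60,
}

def build_commands(scenario):
    """Expand the scenario into one traffic-runner invocation per frame size."""
    return [
        [
            "./traffic-runner",
            "--profile", scenario["profile"],
            "--frame-size", str(size),
            "--duration", str(scenario["duration_s"]),
            "--output", f"results/{size}.json",
        ]
        for size in scenario["frame_sizes"]
    ]

commands = build_commands(SCENARIO)
```

Because every invocation is derived from the same dictionary, two benchmark runs taken weeks apart are guaranteed to use identical parameters.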
Implementation
The frame-size sweep was driven by a simple automation wrapper:
```bash
#!/usr/bin/env bash
for frame_size in 64 128 256 512 1024 1518; do
  ./traffic-runner \
    --profile rfc2544 \
    --frame-size "$frame_size" \
    --duration 60 \
    --output "results/${frame_size}.json"
done
```
The raw JSON output was normalized into summary tables and charts before inclusion in the final case study.
Results
| Frame Size (bytes) | Throughput | Avg Latency | Frame Loss |
|---|---|---|---|
| 64 | 7.4 Gbps | 480 us | 0.32% |
| 256 | 9.1 Gbps | 390 us | 0.06% |
| 512 | 9.4 Gbps | 370 us | 0.02% |
| 1518 | 9.8 Gbps | 365 us | 0.00% |
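For context on why the 64-byte row sits so far below the others: on Ethernet, every frame also carries 20 bytes of wire overhead (7-byte preamble, 1-byte SFD, 12-byte inter-frame gap), so small frames demand far more packets per second to fill the link. Assuming a 10 Gbps link (not stated in the test data, but consistent with the 1518-byte result), the theoretical ceilings work out as:

```python
LINK_BPS = 10e9     # assumed link speed; an illustration, not measured data
WIRE_OVERHEAD = 20  # preamble + SFD + inter-frame gap, in bytes

def max_pps(frame_size):
    """Theoretical maximum frames per second at line rate."""
    return LINK_BPS / ((frame_size + WIRE_OVERHEAD) * 8)

for size in (64, 256, 512, 1518):
    print(f"{size:>5} B: {max_pps(size) / 1e6:.2f} Mpps max")
```

At 64 bytes the fabric must forward roughly 14.88 Mpps to reach line rate, versus about 0.81 Mpps at 1518 bytes, which is why the small-frame rows are the ones that expose the packet-processing ceiling.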
Key Takeaways
- Small frames exposed the real packet-processing limit of the fabric.
- Automating the benchmark removed operator variance between test runs.
- Markdown-native output made the results easy to publish alongside configs and diagrams.