RFC2544 Throughput Benchmark

23 Mar, 2026

An RFC2544-aligned validation workflow used to measure throughput ceilings, frame-loss thresholds, and latency under controlled traffic profiles.

RFC2544 · Benchmarking · QoS · Automation

Overview

Problem / Goal

The lab objective was to produce a repeatable baseline for forwarding performance before and after policy changes. Rather than rely on anecdotal throughput testing, the workflow needed structured measurements that could be repeated after each platform adjustment.

Topology

RFC2544 benchmark topology

Traffic generation and collection nodes were connected across a routed test fabric with QoS policy enabled on the egress edge. Multiple frame sizes were tested to expose differences in packet-processing overhead.

Approach

  • Automate each benchmark run from a fixed scenario definition.
  • Record throughput, latency, and frame loss for each frame size.
  • Export results as markdown tables that could be embedded directly into project documentation.
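The throughput step follows the RFC 2544 definition: the highest offered rate at which the device forwards every frame without loss, usually located by binary search over the offered load. A minimal sketch of that search, where `run_trial` is a hypothetical stand-in for the real traffic generator (the simulated loss model and the 10 Mbps resolution are assumptions, not the lab's actual tooling):

```python
def run_trial(rate_mbps, dut_limit_mbps=9400.0):
    """Simulated trial: fraction of frames lost at the offered rate.
    A real run would drive the traffic generator and read its counters."""
    if rate_mbps <= dut_limit_mbps:
        return 0.0
    return (rate_mbps - dut_limit_mbps) / rate_mbps

def throughput_search(line_rate_mbps, resolution_mbps=10.0):
    """Binary-search the highest zero-loss rate (RFC 2544 throughput)."""
    lo, hi = 0.0, line_rate_mbps
    best = 0.0
    while hi - lo > resolution_mbps:
        mid = (lo + hi) / 2
        if run_trial(mid) == 0.0:
            # No loss: this rate is feasible, search higher.
            best, lo = mid, mid
        else:
            # Loss observed: search lower.
            hi = mid
    return best
```

Each trial at a candidate rate runs for the full test duration before the search narrows, which is why a fixed, automated scenario definition matters: the search path is identical on every run.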

Implementation

The frame-size sweep was driven by a simple automation wrapper:

#!/usr/bin/env bash
for frame_size in 64 128 256 512 1024 1518; do
  ./traffic-runner \
    --profile rfc2544 \
    --frame-size "$frame_size" \
    --duration 60 \
    --output "results/${frame_size}.json"
done

The raw JSON output was normalized into summary tables and charts before inclusion in the final case study.
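A sketch of that normalization step: read each per-frame-size JSON file and emit a markdown table row. The field names (`throughput_gbps`, `avg_latency_us`, `frame_loss_pct`) are assumptions about the runner's output schema, not its documented format:

```python
import json
import pathlib

def to_markdown(results_dir="results"):
    """Collect results/<frame_size>.json files into one markdown table."""
    rows = [
        "| Frame Size | Throughput | Avg Latency | Frame Loss |",
        "|---|---|---|---|",
    ]
    # Sort numerically by frame size (file stem), not lexically.
    for path in sorted(pathlib.Path(results_dir).glob("*.json"),
                       key=lambda p: int(p.stem)):
        r = json.loads(path.read_text())
        rows.append(
            f"| {path.stem} | {r['throughput_gbps']} Gbps "
            f"| {r['avg_latency_us']} us | {r['frame_loss_pct']}% |"
        )
    return "\n".join(rows)
```

Because the output is plain markdown, the table can be pasted into the case study unchanged, which is what keeps the published numbers traceable to the raw run artifacts.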

Results

| Frame Size | Throughput | Avg Latency | Frame Loss |
|---|---|---|---|
| 64 B | 7.4 Gbps | 480 us | 0.32% |
| 256 B | 9.1 Gbps | 390 us | 0.06% |
| 512 B | 9.4 Gbps | 370 us | 0.02% |
| 1518 B | 9.8 Gbps | 365 us | 0.00% |

Key Takeaways

  • Small frames exposed the real packet-processing limit of the fabric.
  • Automating the benchmark removed operator variance between test runs.
  • Markdown-native output made the results easy to publish alongside configs and diagrams.