# vs. NDN-DPDK (C/Go, DPDK)

> 📝 **Note:** NDN-DPDK and ndn-rs are not direct competitors — they target fundamentally different deployment tiers. NDN-DPDK is purpose-built for carrier-grade line-rate forwarding (100 Gbps and beyond) on dedicated hardware. ndn-rs is designed to be embedded anywhere: in an application, on a microcontroller, or on a commodity server. This comparison explains the tradeoffs so you can choose the right tool for your throughput and deployment requirements.

This page compares ndn-rs with NDN-DPDK, a high-performance NDN forwarder built on the DPDK (Data Plane Development Kit) kernel-bypass framework. NDN-DPDK is developed at NIST (the US National Institute of Standards and Technology) and is the reference implementation for high-throughput NDN forwarding research. It achieves forwarding rates of 100 Gbps and beyond by bypassing the OS kernel entirely and dedicating CPU cores and NICs to packet processing.

## Comparison Table

| Aspect | NDN-DPDK | ndn-rs | Rationale |
|---|---|---|---|
| Target deployment | Dedicated DPDK hardware. Requires DPDK-compatible NICs (Intel, Mellanox), huge pages, CPU core isolation, and root access. Designed for ISP or testbed core routers. | Commodity servers, embedded devices, and in-process embedding. Runs anywhere `cargo build` runs: a laptop, a Raspberry Pi, a microcontroller, or inside another application. | NDN-DPDK's DPDK requirement makes it impractical for edge nodes, developer workstations, or IoT gateways. ndn-rs trades maximum throughput for universal deployability. |
| Peak throughput | 100 Gbps and beyond at line rate. NDN-DPDK is the fastest NDN forwarder known; it processes packets in dedicated polling loops without any system-call overhead. | Tens of Gbps on high-core-count servers, limited by the OS network stack. Not kernel-bypass, so there is one syscall per batch of packets. | If your requirement is maximizing raw forwarding throughput on dedicated hardware, NDN-DPDK wins. ndn-rs prioritizes low-latency, embeddable, general-purpose forwarding over peak throughput. |
| Implementation language | C (data plane) + Go (control plane). The DPDK fast path is written in C; the management and configuration layer is in Go. | Rust (full stack). The same language covers packet encoding, the engine core, the management protocol, and binaries. No FFI boundary on the critical path. | An FFI boundary between C and Go (or between C and any managed language) complicates error propagation, lifetime management, and tooling. A single-language stack is easier to analyze with profilers, sanitizers, and static analyzers. |
| Embeddability | Not embeddable. NDN-DPDK is a standalone daemon; applications talk to it over a management API. Embedding DPDK itself in a library is possible but extremely complex. | Embeddable library. `ndn-engine` is a regular Rust crate. An application adds it as a dependency, calls `EngineBuilder::new()`, and the forwarder runs in the same process with zero IPC overhead. | Embedding the forwarder in-process removes the application/router IPC boundary entirely. For producer applications that serve high-request-rate data (e.g., a video CDN node), eliminating the Unix socket round-trip is significant. |
| Memory model | Hugepage-backed DPDK mempools. Packet buffers live in pre-allocated hugepage memory; no dynamic allocation on the fast path. GC is absent; all memory is controlled by the mempool. | `bytes::Bytes` reference-counted buffers. Dynamic allocation via the system allocator; jemalloc is the default for the `ndn-fwd` binary. No hugepages; no DPDK mempool required. | DPDK mempools are the right tool for kernel-bypass line-rate forwarding, but they require upfront memory reservation and NUMA-aware configuration. `Bytes` is simpler to use and sufficient for general-purpose deployment. |
| Kernel bypass | Yes. DPDK binds the NIC directly, bypassing the kernel network stack entirely. No interrupt handling, no socket syscalls, no context switches on the packet path. | No. ndn-rs uses standard OS networking (UDP, TCP, Unix sockets). Kernel involvement adds latency and limits maximum throughput, but makes deployment trivial. | Kernel bypass requires root, DPDK-compatible NICs, and significant operational complexity. For the vast majority of NDN deployments — research labs, edge nodes, developer machines — the kernel overhead is acceptable. |
| Strategy system | Fixed strategy. NDN-DPDK implements a single, highly optimized forwarding strategy hardcoded for throughput. Changing strategy logic requires modifying and recompiling the C data plane. | Trait + WASM. Built-in strategies implement the `Strategy` trait; external strategies can be hot-loaded as WASM modules at runtime via `ndn-strategy-wasm`. | NDN-DPDK's strategy inflexibility is a deliberate tradeoff for performance — branching in the data plane costs throughput. ndn-rs accepts lower peak throughput in exchange for runtime-configurable forwarding behaviour. |
| Simulation support | None built-in. Testing NDN-DPDK requires physical DPDK hardware or emulation (DPDK's software PMD has limitations). | In-process simulation. `ndn-sim` provides `SimFace` and `SimLink` for building arbitrary topologies in a single process with deterministic event replay. | Simulation is essential for research and testing. Running NDN-DPDK experiments requires physical infrastructure; ndn-rs experiments can run on a laptop in CI. |
| Embedded / no_std targets | Not applicable. DPDK requires an OS, huge pages, and a DPDK-capable NIC driver. | Same crate, `no_std`. `ndn-tlv` and `ndn-packet` compile without the standard library; `ndn-embedded` targets bare-metal microcontrollers. | NDN-DPDK and ndn-rs serve non-overlapping ends of the hardware spectrum. ndn-rs intentionally covers the full range from microcontroller to server. |
| Operational complexity | High. Setup requires: DPDK-compatible NIC, hugepage configuration, CPU isolation, NUMA pinning, kernel module loading (or vfio/uio binding), and a Go control-plane daemon. | Low. `cargo install ndn-fwd` produces a single binary. Run it. No kernel modules, no hugepages, no NIC driver changes. | Operational simplicity matters for research deployments, CI, and edge nodes. NDN-DPDK's setup complexity is justified only when the throughput gain is required. |
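The trait-based strategy row above can be sketched in plain Rust. This is an illustrative toy, not ndn-rs's actual API: `Strategy`, `FaceId`, `BestRoute`, and `Multicast` are names invented for this example.

```rust
/// Hypothetical forwarding-strategy trait, in the spirit of the table above.
trait Strategy {
    /// Pick the outgoing face(s) for an Interest, given candidate next hops.
    fn select_nexthops(&self, candidates: &[FaceId]) -> Vec<FaceId>;
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct FaceId(u32);

/// Forward on the single lowest-cost next hop (candidates assumed sorted by cost).
struct BestRoute;
impl Strategy for BestRoute {
    fn select_nexthops(&self, candidates: &[FaceId]) -> Vec<FaceId> {
        candidates.first().copied().into_iter().collect()
    }
}

/// Forward on every available next hop.
struct Multicast;
impl Strategy for Multicast {
    fn select_nexthops(&self, candidates: &[FaceId]) -> Vec<FaceId> {
        candidates.to_vec()
    }
}

fn main() {
    let faces = [FaceId(1), FaceId(2), FaceId(3)];
    // Strategies are chosen at runtime through dynamic dispatch — the same
    // indirection that makes hot-swapping (or loading a WASM module) possible.
    let strategies: Vec<Box<dyn Strategy>> = vec![Box::new(BestRoute), Box::new(Multicast)];
    assert_eq!(strategies[0].select_nexthops(&faces), vec![FaceId(1)]);
    assert_eq!(strategies[1].select_nexthops(&faces).len(), 3);
}
```

A fixed-strategy forwarder like NDN-DPDK avoids even this one virtual call per decision; the trait indirection is the price of runtime configurability.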

## Where NDN-DPDK Has the Advantage

NDN-DPDK is the right choice when your requirement is maximum forwarding throughput on dedicated hardware:

- **Line-rate forwarding.** NDN-DPDK can saturate 100 GbE links (and beyond with multi-port configurations) with real NDN traffic. ndn-rs does not approach these rates on the same hardware because it does not bypass the kernel.
- **NUMA-aware memory.** DPDK mempools are NUMA-local by construction. On multi-socket servers, packet buffers are always allocated from the same NUMA node as the CPU processing them, eliminating cross-socket memory traffic.
- **Polling model.** NDN-DPDK's data plane spins on RX queues without interrupts, trading CPU utilization for minimal and predictable latency at high packet rates.
- **Research pedigree.** NDN-DPDK is widely used in high-throughput NDN research and has published throughput results that serve as the community's performance upper bound.

## Interoperability

Both NDN-DPDK and ndn-rs use the standard NDN TLV wire format, so they interoperate on the same network. A typical deployment might use NDN-DPDK on core infrastructure routers and ndn-rs on edge nodes, producer applications, and embedded devices — the two layers communicate over standard NDN Faces (UDP multicast, TCP unicast) without any special configuration.
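That interoperability rests on the NDN TLV encoding being fully specified. As a sketch of what "the same wire format" means in practice, here is the VAR-NUMBER scheme (used for both TLV-TYPE and TLV-LENGTH) that any conforming implementation must produce — a minimal illustration written from the NDN packet format specification, independent of either codebase:

```rust
/// Encode an NDN VAR-NUMBER: one octet for values below 253, otherwise a
/// marker octet (0xFD / 0xFE / 0xFF) followed by a 2-, 4-, or 8-byte
/// big-endian value.
fn encode_var_number(n: u64, out: &mut Vec<u8>) {
    match n {
        0..=252 => out.push(n as u8),
        253..=0xFFFF => {
            out.push(0xFD);
            out.extend_from_slice(&(n as u16).to_be_bytes());
        }
        0x1_0000..=0xFFFF_FFFF => {
            out.push(0xFE);
            out.extend_from_slice(&(n as u32).to_be_bytes());
        }
        _ => {
            out.push(0xFF);
            out.extend_from_slice(&n.to_be_bytes());
        }
    }
}

/// Encode one TLV element: TLV-TYPE, TLV-LENGTH, then TLV-VALUE.
fn encode_tlv(typ: u64, value: &[u8], out: &mut Vec<u8>) {
    encode_var_number(typ, out);
    encode_var_number(value.len() as u64, out);
    out.extend_from_slice(value);
}

fn main() {
    // A GenericNameComponent (type 8) carrying "ndn".
    let mut buf = Vec::new();
    encode_tlv(8, b"ndn", &mut buf);
    assert_eq!(buf, vec![0x08, 0x03, b'n', b'd', b'n']);

    // Lengths of 253 and above switch to the multi-byte form.
    let mut big = Vec::new();
    encode_var_number(300, &mut big);
    assert_eq!(big, vec![0xFD, 0x01, 0x2C]);
}
```

Because both forwarders emit and parse exactly these octets, a packet produced by an ndn-rs producer is forwarded unmodified by an NDN-DPDK core router.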