The first fully Open pNFS platform for AI and HPC

Single node to super cluster. Linearly scalable data acceleration.

In collaboration with Los Alamos National Laboratory, delivering
a unified path from NFS silos to Tier 0 without complexity.

The PEAK:AIO answer

Parallel file system power with NFS simplicity. Clients read directly from the correct data node using standard Linux NFS, so you start on one node and grow to many with the same client and tools. Built on open standards, including pNFS Flex Files, RDMA, NVMe and CXL, so you can scale without legacy parallel file systems or proprietary stacks.

Why it matters

• Parallel performance, NFS simple
• Scale without changing clients or workflows
• Open on server and client, no vendor lock-in
• Works with your preferred hardware, existing or new
• Keeps GPUs busy with steady small-read latency
• Quick to deploy with clear setup notes

"AI has HPC-scale demands with different I/O. Standards based, open pNFS provides the flexible base the community needs. PEAK Open pNFS is a key step.” Gary Grider, HPC Division Leader, Los Alamos National Laboratory

Architecture at a glance

Parallel performance with NFS simplicity.

  • Client: standard Linux NFS with pNFS Flex Files (see the mount sketch after this list)
  • Control path: lightweight metadata service for LOOKUP, GETATTR and READDIR
  • Data path: clients read directly from the correct data server over RDMA or TCP
  • Data servers: NVMe based, layout-aware, with steady small-read latency
  • Standards: pNFS Flex Files, RDMA, NVMe, CXL ready
  • Open on server and client, no vendor lock-in
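To make the client side concrete, the sketch below shows one way to bring up such a mount from Python. It is a minimal illustration rather than a prescribed procedure: the server name, export path, mount point and port are placeholders, the standard Linux client negotiates the pNFS Flex Files layout once an NFS v4.2 session is established, and proto=tcp can replace proto=rdma where no RDMA fabric is present.

    import subprocess

    # Illustrative only: mount a pNFS export with the stock Linux NFS client.
    # Server name, export path, mount point and RDMA port are placeholders.
    subprocess.run(
        [
            "mount", "-t", "nfs",
            # NFS v4.2 over RDMA; the pNFS Flex Files layout is granted by the server
            "-o", "vers=4.2,proto=rdma,port=20049",
            "mds.example.com:/export/ai",  # metadata server and export (placeholder)
            "/mnt/ai",                     # local mount point (placeholder)
        ],
        check=True,
    )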

Why this matters for AI and HPC

Your team gets a standard Linux client with pNFS Flex Files over RDMA, not a proprietary client. Results are reproducible, with clear mounts, settings and logs. Storage stays centralised, simple to run and easy to scale. The outcome is predictable throughput, steady small-read latency, and higher GPU utilisation without shifting risk to host-local “tier 0” storage.

  • Start with one node, grow to thousands on the same design.
  • Standard NFS clients. No lock-in.
  • Centralised operations kept simple.
  • Reproducible method with mounts, settings and logs, as sketched below.
  • RDMA today, CXL ready when you are.
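As one minimal sketch of that reproducibility record, the snippet below copies the active NFS mount entries, including their full option strings, into a file that can sit alongside a run's logs. The helper name and log file name are assumptions for illustration, not a prescribed PEAK:AIO workflow.

    from pathlib import Path

    def record_nfs_mounts(log_path: str = "run_mounts.log") -> None:
        # Copy the NFS entries (type nfs or nfs4) from /proc/self/mounts,
        # including their mount options, into a small log file.
        entries = [
            line
            for line in Path("/proc/self/mounts").read_text().splitlines()
            if " nfs" in line
        ]
        Path(log_path).write_text("\n".join(entries) + "\n")

    record_nfs_mounts()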

Performance highlights

  • Parallel throughput that scales cleanly: aggregate read rises with each data server added.
  • Small-read latency stays steady under load: 99th percentile shown for growing client counts.
  • One tenth of the hardware for AI: like-for-like AI tests show leading results with about one tenth of the kit. Full test notes and settings available.