
NVIDIA BlueField-3 Networking Platform
User Guide

Table of Contents
1 Introduction.............................................................................. 9
1.1 System Requirements ....................................................................... 9
1.2 Package Contents ........................................................................... 10
1.2.1 Card Package ............................................................................ 10
1.2.2 Accessories Kit .......................................................................... 10
1.2.3 PCIe Auxiliary Card Package........................................................... 10
1.3 Features and Benefits ...................................................................... 11
2 BlueField DPU Administrator Quick Start Guide................................... 15
2.1 Prerequisites for Initial BlueField DPU Deployment ................................... 15
2.2 First-time Installation Procedure ......................................................... 15
3 Supported Interfaces .................................................................. 16
3.1 BlueField-3 SuperNICs Layout and Interface Information............................. 16
3.2 BlueField-3 DPUs Layout and Interface Information................................... 18
3.3 Interfaces Detailed Description........................................................... 19
3.3.1 DPU System-on-Chip (SoC)............................................................. 19
3.3.2 Networking Interface................................................................... 20
3.3.3 Networking Ports LEDs Interface ..................................................... 20
3.3.4 PCI Express Interface................................................................... 21
3.3.5 DDR5 SDRAM On-Board Memory....................................................... 21
3.3.6 NC-SI Management Interface .......................................................... 22
3.3.7 UART Interface Connectivity .......................................................... 22
3.3.8 USB 4-pin RA Connector................................................................ 22
3.3.9 1GbE OOB Management Interface .................................................... 23
3.3.10 PPS IN/OUT Interface .................................................................. 23
3.3.11 External PCIe Power Supply Connector.............................................. 24
3.3.12 Cabline CA-II Plus Connectors......................................................... 25
3.3.13 Integrated BMC Interface .............................................................. 25
3.3.14 NVMe SSD Interface..................................................................... 25
3.3.15 RTC Battery .............................................................................. 26
3.3.16 eMMC Interface.......................................................................... 26
4 Pinouts Description .................................................................... 27
4.1 PCI Express Interface....................................................................... 27

4.2 External Power Supply Connector ........................................................ 29
4.3 NC-SI Management Interface .............................................................. 30
4.4 Cabline CA-II Plus Connectors Pinouts ................................................... 32
4.4.1 Component Side......................................................................... 32
4.4.2 Print Side................................................................................. 34
5 Hardware Installation and PCIe Bifurcation ....................................... 36
5.1 Safety Warnings ............................................................................. 36
5.2 Installation Procedure Overview.......................................................... 37
5.3 System Requirements ...................................................................... 37
5.3.1 Hardware Requirements ............................................................... 37
5.3.2 Airflow Requirements .................................................................. 38
5.3.3 Software Requirements ................................................................ 38
5.4 Safety Precautions .......................................................................... 38
5.5 Unpacking .................................................................................... 38
5.6 Pre-Installation Checklist .................................................................. 38
5.7 Installation Instructions.................................................................... 39
5.8 Cables and Modules......................................................................... 39
5.8.1 Networking Cable Installation ........................................................ 39
5.9 DPU Power-Up and Power-Down Sequences ............................................ 40
5.9.1 Power-Up Sequence .................................................................... 40
5.9.2 Power-Down Sequence ................................................................. 41
5.10 PCIe x16 DPU/SuperNIC Installation Instructions ...................................... 42
5.10.1 Installation Instructions................................................................ 42
5.10.2 Uninstalling the DPU/SuperNIC ....................................................... 43
5.11 PCIe Extension Option (2x PCIe x16) Installation Instructions ....................... 44
5.11.1 Installing the DPU....................................................................... 45
5.11.2 Uninstalling the Cards.................................................................. 49
5.12 PCIe Bifurcation Configuration Options ................................................. 50
5.12.1 Host as Root Port on x4 PCIe Lane Peripherals ..................................... 52
5.12.2 DPU ARMs as Root Port on Peripherals ............................................... 53
6 Setting High-Speed-Port Link Type .................................................. 55
6.1 mlxconfig..................................................................................... 55
6.2 UEFI ........................................................................................... 55
7 Troubleshooting ........................................................................ 56

8 Specifications........................................................................... 57
8.1 B3140H SuperNICs Specifications ......................................................... 57
8.2 B3140L SuperNICs Specifications ......................................................... 58
8.3 B3220L SuperNICs Specifications ......................................................... 59
8.4 B3210L SuperNICs Specifications ......................................................... 61
8.5 B3240 DPUs Specifications................................................................. 62
8.6 B3210 DPUs Specifications................................................................. 63
8.7 B3210E DPUs Specifications ............................................................... 64
8.8 B3220 DPUs Specifications................................................................. 65
8.9 DPUs Mechanical Drawing and Dimensions.............................................. 66
8.10 Bracket Mechanical Drawings ............................................................. 68
9 Monitoring............................................................................... 69
9.1 Thermal Sensors............................................................................. 69
9.2 Heatsink ...................................................................................... 69
10 Finding the GUID/MAC on the DPU .................................................. 70
10.1 DPUs Board Label Example ................................................................ 70
10.2 SuperNICs Board Label Example .......................................................... 72
11 PCIe Auxiliary Card Kit ................................................................ 74
11.1 PCIe Auxiliary Card Package Contents ................................................... 74
11.2 Channel Insertion Loss ..................................................................... 75
11.3 Cabline CA-II Plus Harness Pinouts ....................................................... 76
11.3.1 Cabline CA-II Plus Harness - Component Side ...................................... 76
11.3.2 Cabline CA-II Plus Harness - Print Side .............................................. 83
11.4 Technical Specifications ................................................................... 92
11.4.1 PCIe Auxiliary Card Mechanical Drawings and Dimensions........................ 93
11.4.2 Bracket Mechanical Drawings and Dimensions ..................................... 93
11.4.3 Cabline CA-II Plus Harnesses Mechanical Drawing ................................. 94
12 Supported Servers and Power Cords ................................................ 95
12.1 Supported Servers .......................................................................... 95
12.2 Supported Power Cords ....................................................................95
13 Document Revision History ........................................................... 96

About This Manual
The NVIDIA® BlueField® networking platform ignites unprecedented innovation for modern data
centers and supercomputing clusters. With its robust compute power and integrated software-
defined hardware accelerators for networking, storage, and security, BlueField creates a secure and
accelerated infrastructure for any workload in any environment, ushering in a new era of
accelerated computing and AI.
This User Manual describes NVIDIA BlueField-3 DPUs (Data Processing Units) and SuperNICs. It describes the board interfaces and specifications, the software and firmware required to operate the board, and provides step-by-step instructions for bringing up the BlueField-3 DPUs and SuperNICs.
Ordering Part Numbers
The tables below list the ordering part numbers (OPNs) for available BlueField-3 cards in Full-Height
Half-Length (FHHL) and Half-Height Half-Length (HHHL) form factors.
BlueField-3 DPUs
The Device ID of all DPUs is 41692. All DPUs/SuperNICs are shipped with a tall bracket. You can download a PDF version here.

B3240 (Dual-Slot FHHL)
• 900-9D3B6-00CN-AB0 - P-Series / 16 Arm cores; InfiniBand: NDR 400Gb/s (default), Ethernet: 400GbE; 2 ports, QSFP112; PCIe Gen 5.0 x16; x16 PCIe extension option: yes; external power connector: yes; Crypto: yes; on-board DDR5 memory: 32GB; integrated BMC: yes; PSID: MT_0000000883; lifecycle: Mass Production
• 900-9D3B6-00SN-AB0 - P-Series / 16 Arm cores; InfiniBand: NDR 400Gb/s (default), Ethernet: 400GbE; 2 ports, QSFP112; PCIe Gen 5.0 x16; x16 PCIe extension option: yes; external power connector: yes; Crypto: no; on-board DDR5 memory: 32GB; integrated BMC: yes; PSID: MT_0000000964; lifecycle: Mass Production

B3220 (Single-Slot FHHL)
• 900-9D3B6-00CV-AA0 - P-Series / 16 Arm cores; InfiniBand: NDR200 200Gb/s, Ethernet: 200GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; x16 PCIe extension option: yes; external power connector: yes; Crypto: yes; on-board DDR5 memory: 32GB; integrated BMC: yes; PSID: MT_0000000884; lifecycle: Mass Production
• 900-9D3B6-00SV-AA0 - P-Series / 16 Arm cores; InfiniBand: NDR200 200Gb/s, Ethernet: 200GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; x16 PCIe extension option: yes; external power connector: yes; Crypto: no; on-board DDR5 memory: 32GB; integrated BMC: yes; PSID: MT_0000000965; lifecycle: Mass Production

B3210E (Single-Slot FHHL)
• 900-9D3B6-00CC-EA0 - E-Series / 16 Arm cores; InfiniBand: HDR100 100Gb/s, Ethernet: 100GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; x16 PCIe extension option: yes; external power connector: yes; Crypto: yes; on-board DDR5 memory: 32GB; integrated BMC: yes; PSID: MT_0000001115; lifecycle: Mass Production
• 900-9D3B6-00SC-EA0 - E-Series / 16 Arm cores; InfiniBand: HDR100 100Gb/s, Ethernet: 100GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; x16 PCIe extension option: yes; external power connector: yes; Crypto: no; on-board DDR5 memory: 32GB; integrated BMC: yes; PSID: MT_0000001117; lifecycle: Mass Production
BlueField-3 SuperNICs
B3210L (Single-Slot FHHL)
• 900-9D3B4-00CC-EA0 - E-Series / 8 Arm cores; InfiniBand: HDR100 100Gb/s, Ethernet: 100GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; Crypto: yes; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000000966; lifecycle: Mass Production
• 900-9D3B4-00SC-EA0 - E-Series / 8 Arm cores; InfiniBand: HDR100 100Gb/s, Ethernet: 100GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; Crypto: no; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000000967; lifecycle: Mass Production
B3220L (Single-Slot FHHL)
• 900-9D3B4-00CV-EA0 - E-Series / 8 Arm cores; InfiniBand: NDR200 200Gb/s, Ethernet: 200GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; Crypto: yes; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000001093; lifecycle: Mass Production
• 900-9D3B4-00SV-EA0 - E-Series / 8 Arm cores; InfiniBand: NDR200 200Gb/s, Ethernet: 200GbE (default); 2 ports, QSFP112; PCIe Gen 5.0 x16; Crypto: no; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000001094; lifecycle: Mass Production

B3140L (Single-Slot FHHL)
• 900-9D3B4-00EN-EA0 - E-Series / 8 Arm cores; InfiniBand: NDR 400Gb/s (default), Ethernet: 400GbE; 1 port, QSFP112; PCIe Gen 5.0 x16; Crypto: yes; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000001010; lifecycle: Mass Production
• 900-9D3B4-00PN-EA0 - E-Series / 8 Arm cores; InfiniBand: NDR 400Gb/s (default), Ethernet: 400GbE; 1 port, QSFP112; PCIe Gen 5.0 x16; Crypto: no; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000001011; lifecycle: Mass Production

B3140H (Single-Slot HHHL)
• 900-9D3D4-00EN-HA0 - E-Series / 8 Arm cores; InfiniBand: NDR 400Gb/s, Ethernet: 400GbE (default); 1 port, QSFP112; PCIe Gen 5.0 x16; Crypto: yes; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000001010; lifecycle: Mass Production
• 900-9D3D4-00NN-HA0 - E-Series / 8 Arm cores; InfiniBand: NDR 400Gb/s, Ethernet: 400GbE (default); 1 port, QSFP112; PCIe Gen 5.0 x16; Crypto: no; on-board DDR5 memory: 16GB; integrated BMC: yes; PSID: MT_0000001070; lifecycle: Mass Production
EOL'ed (End of Life) DPUs
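The device ID noted above (41692 decimal, i.e. 0xa2dc hexadecimal) can be used to confirm that a host sees a BlueField-3 card before any drivers are installed. The snippet below is a minimal illustrative sketch, not an NVIDIA-provided tool: it assumes a Linux host with sysfs mounted, and the vendor ID 0x15b3 commonly used for NVIDIA (Mellanox) networking silicon is an assumption, not a value stated in this guide.

#!/usr/bin/env python3
# Illustrative sketch: list PCIe functions whose device ID matches the
# BlueField-3 value quoted above (41692 decimal == 0xa2dc hex).
import glob

BLUEFIELD3_DEVICE_ID = 0xA2DC  # 41692 decimal, per the note above
NVIDIA_NET_VENDOR_ID = 0x15B3  # assumption: NVIDIA/Mellanox networking vendor ID

def read_hex(path: str) -> int:
    with open(path) as f:
        return int(f.read().strip(), 16)

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    vendor = read_hex(f"{dev}/vendor")
    device = read_hex(f"{dev}/device")
    if vendor == NVIDIA_NET_VENDOR_ID and device == BLUEFIELD3_DEVICE_ID:
        print(f"BlueField-3 PCIe function found: {dev.rsplit('/', 1)[-1]}")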
Intended Audience
This manual is intended for the installer and user of these cards. The manual assumes basic
familiarity with InfiniBand/Ethernet network and architecture specifications.
Technical Support

Customers who purchased NVIDIA products directly from NVIDIA are invited to contact us through the following methods:
• URL: www.nvidia.com → Support
• E-mail: [email protected]
Customers who purchased NVIDIA M-1 Global Support Services, please see your contract for details regarding Technical Support.
Customers who purchased NVIDIA products through an NVIDIA-approved reseller should first seek assistance through their reseller.
Related Documentation
• InfiniBand Architecture Specification - InfiniBand Trade Association (IBTA) InfiniBand® specification Release 1.3.1 (November 2, 2016), Vol. 2 Release 1.4, and Vol. 2 Release 1.5.
• IEEE Std 802.3 Specification - IEEE Ethernet specification.
• PCI Express Specifications - Industry-standard PCI Express Base and Card Electromechanical Specifications.
• NVIDIA LinkX Interconnect Solutions - The NVIDIA® LinkX® product family of cables and transceivers provides the industry's broadest portfolio of QDR/FDR10 (40Gb/s), FDR (56Gb/s), EDR/HDR100 (100Gb/s), HDR (200Gb/s), and NDR (400Gb/s) cables, including Direct Attach Copper cables (DACs), copper splitter cables, Active Optical Cables (AOCs), and transceivers in a wide range of lengths from 0.5m to 10km. In addition to meeting IBTA standards, NVIDIA tests every product in an end-to-end environment, ensuring a Bit Error Rate of less than 1E-15.
• BlueField DPU Platform BSP Documentation - This guide provides product release notes as well as information on the BSP and how to develop and/or customize applications, system software, and file system images for the BlueField platform.
• DOCA SDK Software Documentation - NVIDIA DOCA SDK software.
Document Conventions
When discussing memory sizes, GB and GBytes are used in this document to mean size in gigabytes.
The use of Gb or Gbits (small b) indicates size in gigabits. In this document PCIe is used to mean PCI
Express.
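As a quick illustration of this convention (simple arithmetic, not taken from this guide), a 400Gb/s link rate expressed in gigabytes per second is:

# Gb (gigabits, small "b") vs. GB (gigabytes, capital "B"): 8 bits per byte.
link_rate_gbit_s = 400                      # e.g. an NDR/400GbE port rate in Gb/s
link_rate_gbyte_s = link_rate_gbit_s / 8    # convert gigabits to gigabytes
print(f"{link_rate_gbit_s} Gb/s = {link_rate_gbyte_s} GB/s")  # -> 400 Gb/s = 50.0 GB/s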
Revision History
A list of the changes made to this document is provided in Document Revision History.

1 Introduction
The NVIDIA® BlueField®-3 networking platform is designed to accelerate data center infrastructure
workloads and usher in the era of accelerated computing and AI. Supporting both Ethernet and
InfiniBand connectivity, BlueField-3 offers speeds up to 400 gigabits per second (Gb/s). It combines
powerful computing with software-defined hardware accelerators for networking, storage, and
cybersecurity—all fully programmable through the NVIDIA DOCA™ software framework. Drawing on
the platform’s robust capabilities, BlueField data processing units (DPUs) and BlueField SuperNICs
revolutionize traditional computing environments, transforming them into secure, high-
performance, efficient, and sustainable data centers suitable for
any workload at any scale.
The BlueField-3 DPU is a cloud infrastructure processor that empowers organizations to build
software-defined, hardware-accelerated data centers from the cloud to the edge. BlueField-3 DPUs
offload, accelerate, and isolate software-defined networking, storage, security, and management
functions, significantly enhancing data center performance, efficiency, and security. By decoupling
data center infrastructure from business applications, BlueField-3 creates a secure, zero-trust data
center infrastructure, streamlines operations, and reduces the total cost of ownership.
The BlueField-3 SuperNIC is a novel class of network accelerator that’s purpose-built for
supercharging hyperscale AI workloads. Designed for network-intensive, massively parallel
computing, the BlueField-3 SuperNIC provides best-in-class remote direct-memory access over
converged Ethernet (RoCE) network connectivity between GPU servers at up to 400Gb/s, optimizing
peak AI workload efficiency. For modern AI clouds, the BlueField-3 SuperNIC enables secure multi-
tenancy while ensuring deterministic performance and performance isolation between tenant jobs.
1.1 System Requirements
PCI Express slot:
• In PCIe x16 configuration: PCIe Gen 5.0 (32GT/s) through the x16 edge connector.
• In PCIe x16 extension option - switch DSP (Data Stream Port): PCIe Gen 5.0 SerDes @ 32GT/s through the edge connector, and PCIe Gen 5.0 SerDes @ 32GT/s through the PCIe auxiliary connection card.
System power supply:
• Minimum 75W or greater system power supply for all cards.
• B3240, B3220, B3210, and B3210E DPUs require supplementary 8-pin ATX power supply connectivity through the external power supply connector. The power supply harness is not included in the package.
• To power up the DPU, power the ATX power supply and the PCIe golden fingers simultaneously. Failure to do so may harm the DPU.
Operating system:
• The BlueField-3 DPU/SuperNIC is shipped with Ubuntu - a commercial Linux operating system - which includes the NVIDIA OFED stack (MLNX_OFED) and is capable of running all customer-based Linux applications seamlessly. For more information, please refer to the DOCA SDK documentation or the NVIDIA BlueField DPU BSP.
Connectivity:
• Interoperable with 1/10/25/40/50/100/200/400 Gb/s Ethernet switches and SDR/DDR/EDR/HDR100/HDR/NDR200/NDR InfiniBand switches
• Passive copper cable with ESD protection
• Powered connectors for optical and active cable support

For detailed information, see Specifications.
1.2 Package Contents
Prior to unpacking your DPU, it is important to make sure your server meets all the system
requirements listed above for a smooth installation. Be sure to inspect each piece of equipment
shipped in the packing box. If anything is missing or damaged, contact your reseller.
1.2.1 Card Package
• Card: 1x BlueField-3 DPU
• Accessories: 1x tall bracket (shipped assembled on the card)
1.2.2 Accessories Kit
Kit OPN: MBF35-DKIT
Contents:
• 4-pin USB to female USB Type-A cable
• 20-pin shrouded connector to USB Type-A cable
This is an optional accessories kit used for debugging purposes and can be ordered separately.
1.2.3 PCIe Auxiliary Card Package
The PCIe auxiliary kit can be purchased separately to operate selected DPUs in a dual-socket server. For package contents, refer to PCIe Auxiliary Card Kit.
This is an optional kit which applies to the following OPNs:
• B3220 DPUs: 900-9D3B6-00CV-AA0 and 900-9D3B6-00SV-AA0
• B3240 DPUs: 900-9D3B6-00CN-AB0 and 900-9D3B6-00SN-AB0
• B3210 DPUs: 900-9D3B6-00CC-AA0 and 900-9D3B6-00SC-AA0
• B3210E DPUs: 900-9D3B6-00CC-EA0 and 900-9D3B6-00SC-EA0
For B3240, B3220, and B3210E DPUs, you need an 8-pin PCIe external power cable to activate the card. The cable is not included in the package. For further details, please refer to External PCIe Power Supply Connector.

1.3 Features and Benefits
This section describes hardware features and capabilities. Please refer to the relevant driver and/or firmware release notes for feature availability.

InfiniBand Architecture Specification v1.5 compliant
The BlueField-3 DPU delivers low latency, high bandwidth, and computing efficiency for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data center applications. The BlueField-3 DPU is InfiniBand Architecture Specification v1.5 compliant.
InfiniBand network protocols and rates:
• NDR/NDR200 (IBTA Vol 2, Release 1.5): 425 Gb/s on a 4x port (4 lanes), 212.5 Gb/s on 2x ports (2 lanes); PAM4, 256b/257b encoding and RS-FEC
• HDR/HDR100 (IBTA Vol 2, Release 1.4): 212.5 Gb/s on a 4x port (4 lanes), 106.25 Gb/s on 2x ports (2 lanes); PAM4, 256b/257b encoding and RS-FEC
• EDR (IBTA Vol 2, Release 1.3.1): 103.125 Gb/s on a 4x port (4 lanes), 51.5625 Gb/s on 2x ports (2 lanes); NRZ, 64b/66b encoding
• FDR (IBTA Vol 2, Release 1.2): 56.25 Gb/s on a 4x port (4 lanes), N/A on 2x ports; NRZ, 64b/66b encoding

Up to 400 Gigabit Ethernet
The BlueField-3 DPU complies with the following IEEE 802.3 standards: 400GbE / 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE
• IEEE 802.3ck - 400/200/100 Gigabit Ethernet (including ETC enhancement)
• IEEE 802.3cd, IEEE 802.3bs, IEEE 802.3cm, IEEE 802.3cn, IEEE 802.3cu - 400/200/100 Gigabit Ethernet (including ETC enhancement)
• IEEE 802.3bj, IEEE 802.3bm - 100 Gigabit Ethernet
• IEEE 802.3by, Ethernet Consortium 25 - 50/25 Gigabit Ethernet
• IEEE 802.3ba - 40 Gigabit Ethernet
• IEEE 802.3ae - 10 Gigabit Ethernet
• IEEE 802.3cb - 2.5/5 Gigabit Ethernet (for 2.5: supports only 2.5 x 1000BASE-X)
• IEEE 802.3ap - based on auto-negotiation and KR startup
• IEEE 802.3ad, IEEE 802.1AX - Link Aggregation
• IEEE 802.1Q, IEEE 802.1P - VLAN tags and priority
• IEEE 802.1Qau (QCN) - Congestion Notification
• IEEE 802.1Qaz (ETS)
• IEEE 802.1Qbb (PFC)
• IEEE 802.1Qbg
• IEEE 1588v2
• IEEE 802.1AE (MACsec)
• Jumbo frame support (9.6KB)

On-board Memory
• Quad SPI NOR flash - includes 256Mbit for the firmware image.
• UVPS EEPROM - includes 2Mbit.
• FRU EEPROM - stores the parameters and personality of the card. The EEPROM capacity is 128Kbit. The FRU I2C address is 0x50 and is accessible through the PCIe SMBus.
• DPU_BMC flashes - 2x 64MByte for the BMC image and 512MByte for config data.
• eMMC - pSLC 40GB with 30K write cycles for the SoC BIOS.
• SSD (onboard BGA) - 128GByte for user SoC OS, logs, and application software.
• DDR5 SDRAM - 16GB/32GB @ 5600MT/s single/dual-channel DDR5 SDRAM memory, soldered down on-board; 128-bit + 16-bit ECC.

BlueField-3 IC
The NVIDIA BlueField-3 DPU integrates 8 or 16 Armv8.2+ A78 Hercules (64-bit) cores interconnected by a coherent mesh network, one DRAM controller, an RDMA intelligent network adapter supporting up to 400Gb/s, an embedded PCIe switch with endpoint and root complex functionality, and up to 32 lanes of PCIe Gen 5.0.

Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE and VXLAN. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines, placing higher loads on the host CPU. The NVIDIA DPU effectively addresses this by providing advanced NVGRE and VXLAN hardware offloading engines that encapsulate and de-capsulate the overlay protocols.

RDMA and RDMA over Converged InfiniBand/Ethernet (RoCE)
The NVIDIA DPU, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged InfiniBand/Ethernet) technology, delivers low latency and high performance over InfiniBand/Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.

Quality of Service (QoS)
Support for port-based Quality of Service enabling various application requirements for latency and SLA.

Storage Acceleration
A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols can leverage RDMA for high-performance storage access:
• NVMe over Fabrics offloads for the target machine
The BlueField-3 DPU may operate as a co-processor offloading specific storage tasks from the host, isolating part of the storage media from the host, or enabling abstraction of software-defined storage logic using the NVIDIA BlueField-3 Arm cores. On the storage initiator side, the NVIDIA BlueField-3 DPU can provide an efficient solution for hyper-converged systems, enabling the host CPU to focus on compute while all the storage interface is handled through the Arm cores.

NVMe-oF
Non-volatile Memory Express (NVMe) over Fabrics is a protocol for communicating block storage I/O requests over RDMA to transfer data between a host computer and a target solid-state storage device or system over a network. The NVIDIA BlueField-3 DPU may operate as a co-processor offloading specific storage tasks from the host using its powerful NVMe over Fabrics offload accelerator.

SR-IOV
NVIDIA DPU SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server.

High-Performance Accelerations
• Tag Matching and Rendezvous Offloads
• Adaptive Routing on Reliable Transport
• Burst Buffer Offloads for Background Checkpointing

GPU Direct
GPUDirect RDMA is a technology that provides a direct P2P (peer-to-peer) data path between GPU memory and NVIDIA HCA devices. This provides a significant decrease in GPU-GPU communication latency and completely offloads the CPU, removing it from all GPU-GPU communications across the network. The NVIDIA DPU uses high-speed DMA transfers to copy data between P2P devices, resulting in more efficient system applications.

Isolation
The BlueField-3 DPU functions as a "computer-in-front-of-a-computer," unlocking unlimited opportunities for custom security applications on its Arm processors, fully isolated from the host's CPU. In the event of a compromised host, BlueField-3 may detect/block malicious activities in real time and at wire speed to prevent the attack from spreading further.

Cryptography Accelerations
From IPsec and TLS data-in-motion inline encryption to AES-XTS block-level data-at-rest encryption and public key acceleration, BlueField-3 DPU hardware-based accelerations offload the crypto operations and free up the CPU, reducing latency and enabling scalable crypto solutions. BlueField-3 "host-unaware" solutions may transmit and receive data while BlueField-3 acts as a bump-in-the-wire for crypto.

Securing Workloads
The BlueField-3 DPU accelerates connection tracking with its ASAP2 technology to enable stateful filtering on a per-connection basis. Moreover, BlueField-3 includes a Titan IC regular expression (RXP) acceleration engine, supported by IDS/IPS tools, for host introspection and Application Recognition (AR) in real time.

Security Accelerators
A consolidated compute and network solution based on the DPU achieves significant advantages over a centralized security server solution. Standard encryption protocols and security applications can leverage NVIDIA BlueField-3 compute capabilities and network offloads for security application solutions such as a Layer 4 stateful firewall.

Virtualized Cloud
By leveraging BlueField-3 DPU virtualization offloads, data center administrators can benefit from better server utilization, allowing more virtual machines and more tenants on the same hardware, while reducing the TCO and power consumption.

Out-of-Band Management
The NVIDIA BlueField-3 DPU incorporates a 1GbE RJ45 out-of-band port that allows the network operator to establish trust boundaries in accessing the management function to apply it to network resources. It can also be used to ensure management connectivity (including the ability to determine the status of any network component) independent of the status of other in-band network components.

BMC
Some BlueField-3 DPUs incorporate a local NIC BMC (Baseboard Management Controller) on the board. The BMC SoC (system on a chip) can utilize either shared or dedicated NICs for remote access. The BMC node enables remote power cycling, board environment monitoring, BlueField-3 chip temperature monitoring, board power and consumption monitoring, and individual interface resets. The BMC also supports the ability to push a bootstream to BlueField-3. Having a trusted on-board BMC that is fully isolated from the host server ensures the highest security for the DPU boards.
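The On-board Memory entry above states that the FRU EEPROM responds at I2C address 0x50 and is reachable over the PCIe SMBus. The sketch below is illustrative only, not an NVIDIA-provided procedure: it assumes the card's SMBus is exposed to the host as a Linux I2C adapter (the bus number here is a placeholder) and that the third-party smbus2 Python package is installed.

from smbus2 import SMBus

FRU_I2C_ADDR = 0x50  # FRU EEPROM address, per the On-board Memory entry above
I2C_BUS = 1          # placeholder: adapter number that exposes the card's SMBus

# Read the first 16 bytes of the FRU EEPROM, one offset at a time.
with SMBus(I2C_BUS) as bus:
    data = [bus.read_byte_data(FRU_I2C_ADDR, offset) for offset in range(16)]

print(" ".join(f"{byte:02x}" for byte in data))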

2 BlueField DPU Administrator Quick Start Guide
This page is tailored for system administrators wishing to install BlueField and perform sample
administrative actions on it. For a quick start guide aimed at software developers wishing to
develop applications on the BlueField DPU using the DOCA framework, please refer to the NVIDIA
DOCA Developer Quick Start Guide.
2.1 Prerequisites for Initial BlueField DPU Deployment
2.2 First-time Installation Procedure
Not sure which guide to follow? For more details on the different BlueField user types, please refer to the NVIDIA BlueField and DOCA User Types document.

3 Supported Interfaces
This section describes the DPU/SuperNIC supported interfaces. Each numbered interface referenced
in the figures is described in the following table with a link to detailed information.
3.1 BlueField-3 SuperNICs Layout and Interface Information
The figures below are for illustration purposes only and might not reflect the current revision of the DPU/SuperNIC.
HHHL Single-Slot SuperNIC
Model: B3140H - OPNs: 900-9D3D4-00EN-HA0, 900-9D3D4-00NN-HA0
[Figures: SuperNIC component side and print side]
1. DPU SoC - DPU IC, 8 Arm cores
2. Networking Interface - The network traffic is transmitted through the DPU QSFP112 connectors. The QSFP112 connectors allow the use of modules and optical and passive cable interconnect solutions.
3. Networking Ports LEDs Interface - One bi-color I/O LED per port to indicate link and physical status.
4. PCI Express Interface - PCIe Gen 5.0 through an x16 edge connector.
5. DDR5 SDRAM On-Board Memory - Single-channel cards: 10 units of DDR5 SDRAM for a total of 16GB @ 5200MT/s; 64-bit + 8-bit ECC, solder-down memory.
6. NC-SI Management Interface - NC-SI 20-pin BMC connectivity for remote management.
7. USB 4-pin RA Connector - Used for OS image loading.
8. 1GbE OOB Management Interface - 1GbE BASE-T OOB management interface.
9. Integrated BMC - DPU BMC.
10. SSD Interface - 128GB.
11. RTC Battery - Battery holder for RTC.
12. eMMC - x8 NAND flash.
FHHL Single-Slot Dual-Port SuperNICs
Model: B3220L - OPNs: 900-9D3B4-00CV-EA0, 900-9D3B4-00SV-EA0
Model: B3210L - OPNs: 900-9D3B4-00CC-EA0, 900-9D3B4-00SC-EA0
FHHL Single-Slot Single-Port SuperNICs
Model: B3140L - OPNs: 900-9D3B4-00EN-EA0, 900-9D3B4-00PN-EA0
[Figures: SuperNIC component side and print side]
1. DPU SoC - DPU SoC, 8/16 Arm cores
2. Networking Interface - The network traffic is transmitted through the DPU QSFP112 connectors. The QSFP112 connectors allow the use of modules and optical and passive cable interconnect solutions.
3. Networking Ports LEDs Interface - One bi-color I/O LED per port to indicate link and physical status.
4. PCI Express Interface - PCIe Gen 5.0/4.0 through an x16 edge connector.
5. DDR5 SDRAM On-Board Memory - 20 units of DDR5 SDRAM for a total of 32GB @ 5200 or 5600MT/s; 128-bit + 16-bit ECC, solder-down memory.
6. NC-SI Management Interface - NC-SI 20-pin BMC connectivity for remote management.
7. USB 4-pin RA Connector - Used for OS image loading.
8. 1GbE OOB Management Interface - 1GbE BASE-T OOB management interface.

9. MMCX RA PPS IN/OUT - Allows PPS IN/OUT.
12. Integrated BMC - DPU BMC.
13. SSD Interface - 128GB.
14. RTC Battery - Battery holder for RTC.
15. eMMC - x8 NAND flash.
3.2 BlueField-3 DPUs Layout and Interface Information
FHHL Single-Slot Dual-Port DPUs with PCIe Extension Option
Model: B3210E - OPNs: 900-9D3B6-00CC-EA0, 900-9D3B6-00SC-EA0
Model: B3210 - OPNs: 900-9D3B6-00CC-AA0, 900-9D3B6-00SC-AA0
Model: B3220 - OPNs: 900-9D3B6-00CV-AA0, 900-9D3B6-00SV-AA0
FHHL Dual-Slot Dual-Port DPUs
Model: B3240 - OPNs: 900-9D3B6-00CN-AB0, 900-9D3B6-00SN-AB0
[Figures: DPU component side and print side]
1. DPU SoC - DPU SoC, 8/16 Arm cores
2. Networking Interface - The network traffic is transmitted through the DPU QSFP112 connectors. The QSFP112 connectors allow the use of modules and optical and passive cable interconnect solutions.
3. Networking Ports LEDs Interface - One bi-color I/O LED per port to indicate link and physical status.
4. PCI Express Interface - PCIe Gen 5.0/4.0 through an x16 edge connector.

5. DDR5 SDRAM On-Board Memory - 20 units of DDR5 SDRAM for a total of 32GB @ 5200 or 5600MT/s; 128-bit + 16-bit ECC, solder-down memory.
6. NC-SI Management Interface - NC-SI 20-pin BMC connectivity for remote management.
7. USB 4-pin RA Connector - Used for OS image loading.
8. 1GbE OOB Management Interface - 1GbE BASE-T OOB management interface.
9. MMCX RA PPS IN/OUT - Allows PPS IN/OUT.
10. External PCIe Power Supply Connector - An external 12V power connection through an 8-pin ATX connector. Applies to models B3210E, B3210, and B3220.
11. Cabline CA-II Plus Connectors - Two Cabline CA-II Plus connectors are populated to allow connectivity to an additional PCIe x16 auxiliary card. Applies to models B3210E, B3210, and B3220.
12. Integrated BMC - DPU BMC.
13. SSD Interface - 128GB.
14. RTC Battery - Battery holder for RTC.
15. eMMC - x8 NAND flash.
3.3 Interfaces Detailed Description
3.3.1 DPU System-on-Chip (SoC)
NVIDIA® BlueField®-3 is a family of advanced IC solutions that integrate a coherent mesh of 64-bit Armv8.2+ A78 Hercules cores, an NVIDIA® ConnectX®-7 network adapter front-end, and a PCI Express switch into a single chip. The powerful DPU IC architecture includes an Arm multicore processor array, enabling customers to develop sophisticated applications and highly differentiated feature sets. It leverages the rich Arm software ecosystem and introduces the ability to offload the x86 software stack.
At the heart of BlueField-3, the ConnectX-7 network offload controller with RDMA and RDMA over Converged Ethernet (RoCE) technology delivers cutting-edge performance for networking and storage applications such as NVMe over Fabrics. Advanced features include an embedded virtual switch with programmable access control lists (ACLs), transport offloads, and stateless encaps/decaps of NVGRE, VXLAN, and MPLS overlay protocols.

3.3.1.1 Encryption
Applies to crypto-enabled OPNs.
The DPUs and SuperNICs address the concerns of modern data centers by combining hardware encryption accelerators with embedded software and fully integrated advanced network capabilities, making them an ideal platform for developing proprietary security applications. They enable a distributed security architecture by isolating and protecting each workload and providing flexible control and visibility at the server and workload level; controlling risk at the server access layer builds security into the DNA of the data center and enables prevention, detection, and response to potential threats in real time. The DPU can deliver powerful functionality, including encryption of data-in-motion, bare-metal provisioning, a stateful L4 firewall, and more.
3.3.2 Networking Interface
The network ports are compliant with the InfiniBand Architecture Specification, Release 1.5. InfiniBand traffic is transmitted through the cards' QSFP112 connectors.
The DPU includes special circuits to protect the card/server from ESD shocks when plugging copper cables.
3.3.3 Networking Ports LEDs Interface
One bi-color (yellow and green) I/O LED per port indicates speed and link status.
Link Indications:
• Beacon command for locating the adapter card - 1Hz blinking yellow
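In addition to the physical port LEDs, link state can also be read from software on the host or on the DPU Arm system. The snippet below is a minimal sketch under stated assumptions: a Linux environment with sysfs, and an interface name that is only a placeholder and must be replaced with the actual BlueField-3 port name on your system.

from pathlib import Path

IFACE = "enp3s0f0np0"  # placeholder: replace with the actual BlueField-3 port name

state = Path(f"/sys/class/net/{IFACE}/operstate").read_text().strip()
try:
    # Reported in Mb/s; reading this attribute fails while the link is down.
    speed = Path(f"/sys/class/net/{IFACE}/speed").read_text().strip() + " Mb/s"
except OSError:
    speed = "unknown (link down)"

print(f"{IFACE}: operstate={state}, speed={speed}")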