NVIDIA ConnectX-7 NDR 400G InfiniBand Adapter Card

SKU: MCX75310AAS-NEAT

Accelerate Data-Driven Scientific Computing with In-Network Computing

The NVIDIA® ConnectX®-7 NDR 400 gigabits per second (Gb/s) InfiniBand host channel adapter (HCA) provides the highest networking performance available to take on the world's most challenging workloads. The ConnectX-7 InfiniBand adapter delivers ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing engines, providing the acceleration, scalability, and feature-rich technology needed for high-performance computing (HPC), artificial intelligence (AI), and hyperscale cloud data centers.

High-performance computing and artificial intelligence have driven supercomputers into wide commercial use as the primary data processing engines enabling research, scientific discovery, and product development. These systems carry out complex simulations and unlock a new era of AI, in which software writes software. NVIDIA InfiniBand networking is the engine of these platforms, delivering breakthrough performance.

ConnectX-7 NDR InfiniBand smart In-Network Computing acceleration engines include collective accelerations, MPI Tag Matching and All-to-All engines, and programmable datapath accelerators. These performance advantages, together with the standard's guarantee of backward and forward compatibility, ensure leading performance and scalability for compute- and data-intensive applications and help users protect their data center investments.
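
To make the role of these engines concrete, the short MPI sketch below issues the two traffic patterns they target: a dense all-to-all exchange and a global reduction. The offload itself happens inside an InfiniBand-aware MPI library and the fabric, so the application code is ordinary MPI; the program, its buffer sizes, and the file name are illustrative assumptions rather than NVIDIA sample code.

    /* alltoall_demo.c -- illustrative only; build with: mpicc alltoall_demo.c -o alltoall_demo */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Every rank sends one integer to every rank and receives one from
           every rank: the dense exchange that All-to-All engines accelerate. */
        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = rank * 1000 + i;
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        /* A global reduction: the class of collective that in-network
           collective acceleration takes off the host CPUs. */
        int local = rank, sum = 0;
        MPI_Allreduce(&local, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d ranks, sum of ranks = %d\n", size, sum);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }

The same program runs unchanged whether or not the collectives are offloaded; enabling the acceleration is a matter of MPI stack and fabric configuration, not application changes.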

Portfolio

  • Single-port or dual-port NDR (400Gb/s) or NDR200 (200Gb/s), with octal small form-factor pluggable (OSFP) connectors
  • Dual-port HDR (200Gb/s) with quad small form-factor pluggable (QSFP) connectors
  • PCIe standup half-height, half-length (HHHL) and full-height, half-length (FHHL) form factors, with options for NVIDIA Socket Direct™
  • Open Compute Project 3.0 (OCP3.0) tall small form factor (TSFF) and small form factor (SFF)
  • Standalone ConnectX-7 application-specific integrated circuit (ASIC), supporting PCIe switch capabilities
  • InfiniBand Trade Association (IBTA) Specification 1.5 compliant
  • Up to four ports
  • Remote direct-memory access (RDMA), send/receive semantics
  • Hardware-based congestion control
  • Atomic operations
  • 16 million input/output (IO) channels
  • 256-byte to 4KB maximum transmission unit (MTU); messages up to 2GB (see the device-query sketch after this list)
  • 8x virtual lanes (VL) + VL15
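
The RDMA features above are exposed to software through the standard InfiniBand verbs API (libibverbs). The sketch below queries an installed adapter for a few of the limits quoted in this list (queue pairs, MTU, link width and speed); treating device index 0 and port number 1 as the ConnectX-7 is an assumption that holds on a host with a single single-port card.

    /* hca_query.c -- illustrative only; build with: gcc hca_query.c -o hca_query -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* Assumption: the ConnectX-7 is the first device reported. */
        struct ibv_context *ctx = ibv_open_device(devices[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open %s\n", ibv_get_device_name(devices[0]));
            return 1;
        }

        struct ibv_device_attr dev_attr;
        struct ibv_port_attr port_attr;
        ibv_query_device(ctx, &dev_attr);
        ibv_query_port(ctx, 1, &port_attr);   /* port 1 on a single-port card */

        printf("device:      %s\n", ibv_get_device_name(devices[0]));
        printf("max QPs:     %d (the \"IO channels\" above)\n", dev_attr.max_qp);
        printf("active MTU:  enum %d (IBV_MTU_4096 == 5)\n", port_attr.active_mtu);
        printf("link width:  enum %d, link speed: enum %d\n",
               port_attr.active_width, port_attr.active_speed);

        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        return 0;
    }

The same information is available without code from the rdma-core command-line tools such as ibv_devinfo.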

Call for Availability

Regular price $1,809.11 Sale price $2,241.00
On Sale

Shipping calculated at checkout

Specifications

Bracket Height: HHHL
Port Connector Type: OSFP
Form Factor: Standup
Host Interface: PCI Express 5.0 x16
Manufacturer: NVIDIA Corporation
Manufacturer Website Address: http://www.nvidia.com
Maximum Data Transfer Rate: 400 Gbit/s
Media Type Supported: Optical Fiber
Product Line: ConnectX-7
Product Type: 400Gb/s NDR InfiniBand Adapter Card
Total Number of Ports: 1
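
As a rough sanity check on the host interface listed above: one NDR port at 400Gb/s needs about 50GB/s of PCIe bandwidth in each direction, and a PCI Express 5.0 x16 link provides roughly 63GB/s per direction after 128b/130b encoding. The sketch below only works through that nominal arithmetic; it uses line rates, not measured throughput, and ignores protocol overheads beyond encoding.

    /* pcie_headroom.c -- nominal line-rate arithmetic, not a benchmark */
    #include <stdio.h>

    int main(void)
    {
        /* PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, 16 lanes */
        double pcie_GBps = 32.0 * (128.0 / 130.0) * 16.0 / 8.0;  /* per direction */

        /* NDR InfiniBand: 400 Gb/s per port */
        double ndr_GBps = 400.0 / 8.0;

        printf("PCIe 5.0 x16: %.1f GB/s per direction\n", pcie_GBps);
        printf("NDR port:     %.1f GB/s per direction\n", ndr_GBps);
        printf("headroom:     %.1f GB/s\n", pcie_GBps - ndr_GBps);
        return 0;
    }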

