Description
NVIDIA MCX556A-ECAT ConnectX-5 VPI Adapter Card EDR/100GbE
The NVIDIA MCX556A-ECAT ConnectX-5 VPI Adapter Card delivers breakthrough networking for modern data centers, HPC clusters, and AI workloads. This dual-port QSFP28 PCIe 3.0 x16 device combines InfiniBand EDR (up to 100 Gb/s per port) with 100 Gb/s Ethernet through Virtual Protocol Interconnect (VPI), enabling a single NIC to carry both high-speed RDMA traffic and flexible Ethernet connectivity. Built to maximize throughput while minimizing latency, the MCX556A-ECAT is a strong fit for workloads that demand aggressive interconnect performance, workload consolidation, and future-proof scalability. It ships with a tall (full-height) bracket for standard server slots, RoHS compliance for enterprise-grade reliability, and broad OS driver support to help IT teams deploy quickly and confidently.
- Unmatched per-port bandwidth with dual ports: Each QSFP28 port delivers up to 100 Gb/s, enabling a potential aggregate bandwidth of 200 Gb/s. This supports data-intensive workloads such as large-scale simulations, real-time analytics, and multi-node RDMA transfers with minimal latency and maximum throughput.
- Virtual Protocol Interconnect (VPI) for converged networks: VPI lets InfiniBand EDR and 100 GbE traffic share a single adapter, simplifying cabling and topology while preserving RDMA capabilities and deterministic performance for critical applications.
- InfiniBand EDR and 100 GbE compatibility: The adapter supports InfiniBand EDR connectivity for ultra-fast RDMA operations alongside full-rate 100 Gb/s Ethernet, enabling seamless deployment of hybrid HPC and data-center networks without multiple NICs.
- PCIe 3.0 x16 interface with tall server bracket: Optimized for high-bandwidth servers, the card plugs into a PCIe 3.0 x16 slot and uses a tall (full-height) bracket for robust mounting in standard full-height server slots.
- Enterprise-grade reliability with broad OS support: Engineered for 24x7 operation in data centers, with driver and firmware support for Linux and Windows via the NVIDIA/Mellanox driver stack, providing proven stability and ongoing updates.
Technical Details of the NVIDIA MCX556A-ECAT ConnectX-5 VPI Adapter Card EDR/100GbE
- Product family: NVIDIA MCX556A-ECAT ConnectX-5 VPI Adapter Card
- Form factor: PCIe 3.0 x16, tall (full-height) bracket
- Ports and connectors: 2x QSFP28 (dual-port) interfaces
- Data rates: Up to 100 Gb/s per port (EDR InfiniBand and 100 GbE)
- Supported protocols: InfiniBand EDR, Ethernet 100 GbE, Virtual Protocol Interconnect (VPI)
- Environment and compliance: RoHS-compliant for enterprise-scale deployments
- Driver support: Linux and Windows with NVIDIA/Mellanox OFED driver stacks for HPC and data-center workloads
- Ideal workloads: HPC clustering, RDMA-accelerated databases, AI/ML data pipelines, storage networks, and high-performance interconnects
How to install the NVIDIA MCX556A-ECAT ConnectX-5 VPI Adapter Card EDR/100GbE
Follow these steps to deploy the MCX556A-ECAT quickly and safely in a supported server platform. Preparation and careful handling help ensure optimal performance and reliability in production environments.
- Power down the host server, unplug all power and network cables, and ground yourself to prevent electrostatic discharge.
- Open the chassis and identify an available PCIe 3.0 x16 slot with adequate space for a full-height card; confirm clearance for the QSFP28 connectors and bracket.
- Insert the MCX556A-ECAT into the PCIe slot, ensuring the connector seats fully and evenly. Secure the card bracket with the appropriate screw to anchor it firmly in place.
- Connect high-speed cables to both QSFP28 ports. Use DAC (direct attach copper) or fiber cables rated for InfiniBand EDR and 100 GbE, according to your network plan and distance requirements.
- Power on the server and boot into the operating system. Install the latest NVIDIA/Mellanox driver package and firmware updates from the official repository or vendor site.
- Verify device detection in the OS (for example, via lspci on Linux or Device Manager on Windows). Configure the NICs for RDMA (RoCE/IB) and Ethernet using the appropriate network management tools and drivers; a minimal detection check is sketched after this list.
- Run connectivity and throughput tests to validate port operation at peak speeds, and confirm VPI functionality for converged IB and Ethernet traffic within your cluster or data-center fabric; see the bandwidth-test sketch after this list.
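The sketch below wraps the common Linux detection checks from the steps above in a short Python script. It is an illustrative starting point rather than an official utility, and it assumes lspci and ibstat (from the MLNX_OFED / infiniband-diags packages) are installed and on the PATH.

```python
#!/usr/bin/env python3
"""Minimal post-install sanity check for a ConnectX-5 adapter on Linux.

Assumes lspci and ibstat are on the PATH (MLNX_OFED / infiniband-diags).
"""
import subprocess


def run(cmd):
    # Run a command and return its stdout, raising if the command fails.
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout


# 1. Confirm the PCIe device is visible (15b3 is the NVIDIA/Mellanox vendor ID).
pci = run(["lspci", "-d", "15b3:"])
print("PCIe devices:\n" + (pci or "  none found - re-check slot seating\n"))

# 2. Confirm the RDMA stack sees both ports and report their state and rate.
#    Expect "State: Active" and "Rate: 100" once the ports are cabled and,
#    for InfiniBand, a subnet manager is running on the fabric.
print(run(["ibstat"]))
```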
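For the throughput-validation step, a common approach is ib_write_bw from the perftest package. The following is a hedged sketch: the device name mlx5_0 is an assumption (confirm yours with ibstat), and the script is run once on the server with no argument and once on the client with the server's IP address.

```python
#!/usr/bin/env python3
"""Sketch of an RDMA bandwidth check using ib_write_bw (perftest package).

Run with no arguments on the server, then with the server's IP address on
the client. The device name below is illustrative; substitute your own.
"""
import subprocess
import sys

DEVICE = "mlx5_0"  # assumed device name; confirm with `ibstat`
peer = sys.argv[1] if len(sys.argv) > 1 else None  # server IP on the client side

cmd = ["ib_write_bw", "-d", DEVICE, "--report_gbits"]
if peer:
    cmd.append(peer)  # client mode: connect to the listening server

# On a healthy EDR / 100GbE link the reported bandwidth should approach
# line rate (roughly 90+ Gb/s) after protocol overhead.
subprocess.run(cmd, check=True)
```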
Frequently asked questions
- What is VPI? Virtual Protocol Interconnect is a technology that allows InfiniBand and Ethernet traffic to share a single NIC, enabling low-latency RDMA alongside flexible Ethernet connectivity without deploying separate adapters; a port-mode switching example appears after this list.
- Which workloads benefit most from this card? Highly parallel HPC workloads, distributed AI/ML pipelines, big data analytics, real-time simulation, and storage networks that require RDMA acceleration and high-throughput interconnects.
- What cables should I use? Use QSFP28-compatible fiber or DAC cables rated for InfiniBand EDR and 100 GbE. Always verify that cable length and connector type meet your rack geometry and latency requirements.
- What operating systems are supported? The MCX556A-ECAT is supported by the NVIDIA/Mellanox OFED driver stack, with drivers available for Linux and Windows, plus vendor-provided firmware updates for ongoing performance improvements.
- Is this card RoHS compliant? Yes, the device is RoHS-compliant, aligning with enterprise standards for environmental responsibility and safety in data-center deployments.
- How do I optimize performance? Confirm the card negotiates the full PCIe 3.0 x16 link in BIOS/UEFI, keep firmware up to date, install the latest OFED drivers, enable RDMA (InfiniBand or RoCE) features as needed, and choose cabling that minimizes latency and maximizes throughput; a PCIe link-width check is sketched below.
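As a concrete illustration of the VPI answer above, port personality on ConnectX adapters can typically be queried and switched with the mlxconfig tool from the NVIDIA/Mellanox Firmware Tools (MFT). The sketch below is illustrative only: the PCI address is a placeholder, and a driver restart or reboot is required before the change takes effect.

```python
#!/usr/bin/env python3
"""Sketch: query and switch VPI port personality with mlxconfig (MFT).

LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet. The PCI address is a
placeholder; find yours with `lspci -d 15b3:`. A reboot or driver
restart is needed before the new personality takes effect.
"""
import subprocess

PCI_ADDR = "81:00.0"  # placeholder PCI address for the adapter

# Show the current configuration, including LINK_TYPE_P1 / LINK_TYPE_P2.
subprocess.run(["mlxconfig", "-d", PCI_ADDR, "query"], check=True)

# Example: set both ports to Ethernet. Use 1 instead of 2 for InfiniBand.
subprocess.run(
    ["mlxconfig", "-y", "-d", PCI_ADDR, "set", "LINK_TYPE_P1=2", "LINK_TYPE_P2=2"],
    check=True,
)
```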
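For the performance question, one quick check is whether the card actually negotiated a full PCIe 3.0 x16 link, since a downtrained link caps throughput well below the 100 Gb/s port rating. A minimal sketch follows (run as root; the PCI address is again a placeholder):

```python
#!/usr/bin/env python3
"""Sketch: confirm the adapter negotiated a full PCIe 3.0 x16 link.

Run as root so lspci can read the capability registers. A link trained
at fewer lanes or a lower generation (e.g. x8 or 2.5GT/s) will
bottleneck the adapter well below its rated 100 Gb/s per port.
"""
import subprocess

PCI_ADDR = "81:00.0"  # placeholder; find yours with `lspci -d 15b3:`

out = subprocess.run(
    ["lspci", "-s", PCI_ADDR, "-vv"], capture_output=True, text=True, check=True
).stdout

# LnkCap reports what the card supports; LnkSta reports what was negotiated.
for line in out.splitlines():
    if "LnkCap:" in line or "LnkSta:" in line:
        print(line.strip())

# Expect LnkSta to show "Speed 8GT/s, Width x16" for full PCIe 3.0 x16.
```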