Description
Dell Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter, Full Height
Engineered for data centers that demand extreme performance, reliability, and scalability, the Dell Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter is a high-speed Ethernet PCI-Express NIC designed to accelerate server networking. This full-height, dual-port card delivers up to 200 Gbps of aggregate bandwidth with hardware-assisted offloads, enabling low-latency communication for virtualization, AI workloads, high-performance computing, and large-scale cloud deployments. Built on the proven ConnectX-6 DX architecture, it provides robust support for modern data-center fabrics, RoCE-based RDMA, and sophisticated offload capabilities that reduce CPU overhead and increase application throughput. Dell has validated this NIC on select Dell systems, ensuring compatibility, reliable driver delivery, and trusted Dell technical support for seamless integration into enterprise environments.
- Blazing fast dual 100GbE connectivity: Equip your server with two 100 Gigabit Ethernet ports via QSFP56 interfaces, delivering up to 200 Gbps of consolidated bandwidth. This enables dense, high-throughput uplinks to top-of-rack switches and core networks, making it ideal for data centers, virtualization hosts, and storage networks that require scalable, low-latency networking.
- Advanced offloads that free the CPU: The ConnectX-6 DX architecture includes hardware offloads for RDMA over Converged Ethernet (RoCE), large send/receive operations, and virtualization features. By moving this processing from the CPU to the NIC, you gain lower latency, higher IOPS, and improved application performance for clustered databases, hyperconverged environments, and AI pipelines.
- Robust virtualization and multi-tenant capabilities: With technologies such as SR-IOV and multi-queue support, this NIC enables safe, high-performance partitioning of network resources across multiple virtual machines and containers. It’s a strong fit for VMware, KVM, Hyper-V, and other virtualization platforms, providing predictable latency and efficient network isolation in multi-tenant data centers.
- Dell validated, enterprise-grade reliability: This adapter is tested and validated on Dell systems, with Dell Technical Support ready to assist. By pairing Dell hardware validation with Mellanox virtualization and driver ecosystems, customers gain a dependable network adapter that integrates smoothly with Dell OpenManage, PowerEdge servers, and enterprise support SLAs.
- Flexible, future-proof design for growing workloads: The full-height form factor and PCIe interface offer broad compatibility with a wide range of server chassis and PCIe slots. The card supports versatile data-center networking features, including VXLAN/NVGRE offloads, quality-of-service controls, and scalable networking configurations that help future-proof deployments as traffic patterns evolve.
Technical Details of Dell Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter
- Ports: 2 x 100 GbE QSFP56 interfaces for high-density uplink connectivity.
- Form factor: Full-height PCI Express network adapter, suitable for rack and tower servers that provide a full-height PCIe slot.
- Bus interface: PCIe x16 slot (supporting typical PCIe Gen3 x16 and Gen4 x16 implementations depending on host server capabilities).
- Aggregate bandwidth: Up to 200 Gbps of total throughput between the NIC and the network, depending on port configuration and link status.
- Networking features: RoCE v2 RDMA; stateless hardware offloads including large send offload (LSO/TSO), large receive offload (LRO), checksum offload, and VLAN offload; and virtualization enhancements such as SR-IOV and multi-queue support to optimize virtualized and containerized workloads.
- Software and drivers: Dell-validated integration with Mellanox OFED drivers and Dell OpenManage. Supports Linux and Windows environments with ongoing driver updates available from Dell/Mellanox compatibility channels.
- Management and security: Managed through standard NIC tooling and Dell integration; hardware-assisted features support security by enabling robust network segmentation and by offloading packet processing from the host CPU.
- Compatibility and warranty: Validated for use with Dell PowerEdge servers and other enterprise-grade platforms that accommodate PCIe cards; backed by Dell technical support and warranty services for enterprise deployments.
- Cable and optics considerations: Requires appropriate 100 GbE QSFP56 optics or DAC/AOC cables compatible with QSFP56 interfaces to connect to the network fabric or storage fabrics; cables are typically sold separately by vendors.
- Operating conditions and compliance: Designed for data-center environments with temperature and EMI considerations consistent with enterprise rack systems; adheres to standard RoHS and industry specifications for PCIe network adapters.
How to Install Dell Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter
- Prepare your server: Power down the server, unplug power cords, and ground yourself to prevent static discharge. Open the server chassis following the manufacturer’s safety guidelines. Verify that you have a compatible PCIe x16 slot available for a full-height card and that there is adequate clearance for the QSFP56 port and cables.
- Insert the NIC: Remove the appropriate slot cover and firmly seat the Dell Mellanox ConnectX-6 DX card into the PCIe x16 slot. Ensure the bracket is secured to the chassis and that the card is seated straight to avoid alignment issues. Reattach any screws to secure the card firmly.
- Connect uplinks: Attach 100 GbE QSFP56 cables or appropriate optical transceivers/DACs to the two ports. Route cables neatly to switches or fabric devices, ensuring there is no excessive bending or tension and that cable labeling is clear for future maintenance.
- Power on and install drivers: Boot the server and install the recommended drivers and firmware from Dell or Mellanox/OFED repositories. On Linux, install Mellanox OFED packages and use provided utilities to verify link status; on Windows, use the Dell-provided driver package to install the NIC software stack. After installation, load the NIC driver and confirm that both ports initialize to an active state.
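On Linux, the driver-install step above can be verified with a few standard utilities. A minimal sketch follows; the interface names (ens5f0, ens5f1) are placeholders and depend on your server, and it assumes the Mellanox OFED driver package has already been installed:

```shell
# Confirm the adapter is visible on the PCIe bus
lspci | grep -i mellanox

# Check the driver and firmware versions reported by the first port
# (interface names ens5f0/ens5f1 are placeholders; substitute your own)
ethtool -i ens5f0

# Verify link state and negotiated speed on both ports
for port in ens5f0 ens5f1; do
    ip link show "$port"
    ethtool "$port" | grep -E 'Speed|Link detected'
done
```

If a port shows no link, re-check cable seating and confirm the switch port is configured for 100 GbE operation.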
- Configure networking and virtualization: Use network management tools available in your environment to set up VLANs, link aggregation groups if needed, and any SR-IOV or virtualization policies. For Linux, you may configure bond interfaces or SR-IOV virtual functions as required by the workload. For Windows, use the NIC properties to manage IP configurations, QoS policies, and virtualization settings. Validate connectivity with test traffic and monitor performance using standard network utilities.
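As a hedged illustration of the SR-IOV step on Linux, the sketch below enables virtual functions through the standard sysfs interface. The interface name, VF count, and MAC address are placeholders, and SR-IOV must also be enabled in the server BIOS (and, where required, in the NIC firmware) for this to work:

```shell
# Enable 4 virtual functions on the first port
# (ens5f0 is a placeholder interface name)
echo 4 > /sys/class/net/ens5f0/device/sriov_numvfs

# Confirm the VFs appear as additional PCIe functions
lspci | grep -i 'mellanox.*virtual'

# Assign a MAC address to VF 0 before passing it through to a VM
ip link set ens5f0 vf 0 mac 02:00:00:00:00:01
```

The VFs can then be attached to guests via your hypervisor's PCI passthrough mechanism (for example, libvirt hostdev entries under KVM).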
- Verify performance and firmware: Run basic throughput tests to confirm 2x100GbE operation, check for offload functionality, and verify that the latest firmware is installed. Periodically check for driver and firmware updates to maintain compatibility with the latest switch firmware and data-center fabrics.
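The verification step above can be sketched with common tools. This assumes iperf3 and the Mellanox firmware tools (mstflint) are installed; the hostname and PCIe address shown are placeholders:

```shell
# On the receiving host, start an iperf3 server:
iperf3 -s

# On the sending host, drive several parallel streams toward the receiver
# (receiver-host is a placeholder hostname):
iperf3 -c receiver-host -P 8 -t 30

# Query the current firmware version on the adapter
# (0000:3b:00.0 is a placeholder PCIe address; find yours with lspci)
mstflint -d 0000:3b:00.0 query | grep 'FW Version'
```

A single TCP stream rarely saturates a 100 GbE link, so parallel streams (and, for RDMA verification, tools such as those in the perftest package) give a more realistic picture.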
Frequently asked questions
Q: What is the total possible throughput of the Dell Mellanox ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter?
A: The card provides two 100 GbE ports, delivering up to 200 Gbps of aggregate bandwidth under optimal conditions, suitable for high-density servers and demanding workloads.
Q: Is this adapter Dell validated?
A: Yes. The ConnectX-6 DX Dual Port 100GbE QSFP56 Network Adapter is tested and validated on Dell systems, ensuring compatibility with Dell server hardware, BIOS/firmware, and drivers, and it is supported by Dell Technical Support for enterprise deployments.
Q: What operating systems are supported?
A: The adapter supports multiple operating systems, including major Linux distributions and Windows Server variants. Dell and Mellanox provide driver packages and OFED support to ensure compatibility with your chosen OS.
Q: Do I need additional software to access advanced features?
A: For full feature access, install Mellanox OFED (OpenFabrics Enterprise Distribution) or Dell-provided driver packages. These packages enable RoCE/RDMA offloads, SR-IOV, VXLAN/NVGRE offloads, and other advanced networking capabilities.
Q: Are cables included with the card?
A: Cables are typically sold separately. You'll need QSFP56-compatible optics or DAC cables to connect the two 100 GbE ports to your network fabric or storage devices.
Q: Can this NIC be used in non-Dell servers?
A: While Dell validates the card for Dell systems, the PCIe form factor and drivers may be compatible with other enterprise servers that support PCIe NICs. Always verify driver support and warranty terms with the server vendor before deployment.
Q: How do I update firmware and drivers?
A: Use Dell’s support portal or Mellanox/OpenFabrics repositories to download the latest firmware and driver packages. Follow the provided installation instructions to safely update firmware without impacting running workloads.