Mellanox Switch-IB 2, 36-port EDR 100Gb/s InfiniBand Leaf Blade, RoHS-6

Mellanox SKU: 4651383

Price:
Sale price: $25,827.82

Description

Mellanox Switch-IB 2: 36-Port EDR 100Gb/s InfiniBand Leaf Blade

Experience the pinnacle of high-performance interconnects with Mellanox Switch-IB 2, a 36-port EDR InfiniBand leaf blade engineered for the most demanding data centers, HPC clusters, and AI workloads. This compact leaf blade is designed to slot into Mellanox/NVIDIA fabric ecosystems, delivering ultra-fast, low-latency connectivity that accelerates data movement and scales with your business. Built on the principles of SHARP Co-Design technology, it enables in-network computing, offloading select tasks from hosts and reducing end-to-end latency across the fabric. The RoHS-6 compliant design ensures your deployment aligns with modern environmental standards while maintaining top-tier performance, reliability, and manageability. Whether you’re building a dense HPC cluster, a petabyte-scale storage fabric, or a hybrid cloud data center, Switch-IB 2 provides the speed, density, and efficiency to power your most ambitious workloads.

  • High-density 36-port EDR InfiniBand leaf blade with aggregate bandwidth of up to 3.6 Tb/s (see the quick calculation after this list), enabling non-blocking fabrics that sustain peak performance across compute, storage, and accelerator nodes. This density supports scalable fabric topologies, reducing the need for oversized chassis and enabling more efficient cabling and airflow in data centers.
  • 100 Gb/s per port delivers exceptionally fast, low-latency interconnects for latency-sensitive applications. The blade’s design minimizes hop counts and supports high-throughput data transfers, which is critical for real-time analytics, large-scale simulations, and AI model training and inference in shared environments.
  • In-network computing powered by SHARP Co-Design technology, allowing intelligent processing within the fabric to offload repetitive tasks from servers. This capability can reduce host CPU utilization, enhance application throughput, and enable smarter data flow patterns that improve overall fabric efficiency and scalability.
  • RoHS-6 compliant construction ensures environmentally responsible materials usage without compromising performance. This compliance supports procurement standards and helps data centers meet sustainability goals while maintaining reliability and industry compatibility.
  • Seamless integration with Mellanox/NVIDIA InfiniBand fabrics and centralized management tooling. The blade is designed to work with OpenSM and vendor fabric management suites, enabling straightforward discovery, monitoring, topology management, and performance analytics across large-scale deployments.
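
The aggregate-bandwidth figure in the first bullet is a simple product of port count and per-port line rate. The short sketch below is illustrative only, not vendor tooling:

```python
# Back-of-the-envelope check of the aggregate bandwidth quoted above:
# 36 EDR ports, each at a 100 Gb/s line rate.
ports = 36
per_port_gbps = 100

one_way_tbps = ports * per_port_gbps / 1000   # 3.6 Tb/s in one direction
bidirectional_tbps = 2 * one_way_tbps         # 7.2 Tb/s counting both directions

print(f"Aggregate (one direction): {one_way_tbps} Tb/s")
print(f"Aggregate (bidirectional): {bidirectional_tbps} Tb/s")
```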

Technical Details of Mellanox Switch-IB 2

  • Product type: InfiniBand leaf blade switch, designed for high-density data-center fabrics
  • Ports: 36-port EDR InfiniBand
  • Speed: 100 Gb/s per port, delivering robust aggregate bandwidth for demanding workloads
  • Compliance: RoHS-6
  • Fabric features: SHARP-enabled in-network computing to enhance efficiency and reduce host load
  • Form factor: Leaf blade module intended for integration into compatible blade/chassis ecosystems

How to install Mellanox Switch-IB 2

  • Plan your fabric topology and ensure you have a compatible chassis or blade enclosure that supports InfiniBand leaf blades. Verify power, cooling, and grounding requirements in your data-center environment before installation.
  • Power down the chassis and carefully insert the Switch-IB 2 blade into the designated slot, ensuring proper seating and secure grounding. Follow the manufacturer’s guidelines for blade insertion and slot alignment to avoid damage during installation.
  • Connect InfiniBand cables from the blade to adjacent switches, servers, or storage modules according to your fabric topology. Use high-quality cables and confirm proper termination to minimize signal loss and impedance mismatch in the network.
  • Power up the system and use Mellanox/NVIDIA management tools (such as OpenSM and compatible fabric management suites) to discover the blade, assign it to the InfiniBand fabric, and integrate it into your existing topology.
  • Configure basic fabric settings, including port roles, routing, multicast, and any QoS or traffic policies required by your workloads. Run fabric verification tests to confirm connectivity, bandwidth targets, and stability before rolling out production workloads; a minimal scripted check is sketched after these steps.
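
Once the blade is powered up and the subnet manager has swept the fabric, a basic health check can be scripted. The sketch below is a minimal, illustrative Python wrapper around the common Linux InfiniBand diagnostics ibstat and ibnetdiscover (typically shipped with OFED / infiniband-diags packages); it assumes those utilities are installed on a host attached to the fabric and is not a substitute for vendor verification tools.

```python
"""Minimal post-installation fabric check (illustrative sketch).

Assumes a Linux host with standard InfiniBand diagnostics (ibstat,
ibnetdiscover) installed; tool names and output formats can vary
between distributions and OFED versions.
"""
import subprocess


def run(cmd):
    """Run a diagnostic command, returning stdout or None on failure."""
    try:
        return subprocess.run(
            cmd, capture_output=True, text=True, check=True
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        print(f"{' '.join(cmd)} failed: {exc}")
        return None


# 1. Local HCA/port state: ports facing the new leaf blade should report
#    an "Active" state and an EDR rate of 100 Gb/s.
ibstat_out = run(["ibstat"])
if ibstat_out:
    print(ibstat_out)

# 2. Fabric discovery: the newly inserted switch should appear in the
#    topology swept by the subnet manager (e.g. OpenSM).
topology = run(["ibnetdiscover"])
if topology:
    switches = [ln for ln in topology.splitlines() if ln.startswith("Switch")]
    print(f"Switch entries discovered in the fabric: {len(switches)}")
```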

Frequently asked questions

  • Q: What is the Mellanox Switch-IB 2?

    A: It is a 36-port EDR InfiniBand leaf blade designed to provide high-density, low-latency interconnects within data center and HPC fabrics. It supports SHARP in-network computing for smarter data flow and improved efficiency, while delivering RoHS-6 compliant hardware suitable for modern deployments.

  • Q: How many ports and what speed does it offer?

    A: The blade provides 36 EDR InfiniBand ports, each running at 100 Gb/s, enabling substantial aggregate bandwidth for scale-out clusters and data-intensive applications.

  • Q: What is SHARP and how does it benefit my fabric?

    A: SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) is the technology behind Mellanox's Co-Design approach to in-network computing. It allows certain processing tasks, such as collective aggregation and reduction operations, to be performed inside the InfiniBand fabric itself, reducing host CPU load, lowering latency, and accelerating data movement for workloads such as HPC simulations and AI model training. A brief illustration of the kind of collective SHARP can offload appears after this FAQ.

  • Q: Is the Switch-IB 2 RoHS-6 compliant?

    A: Yes. The device adheres to RoHS-6 standards, ensuring restricted use of hazardous substances and alignment with contemporary environmental and procurement requirements.

  • Q: How do I manage and monitor the blade?

    A: Management is performed through Mellanox/NVIDIA fabric software tools, including OpenSM and related centralized fabric management suites. These tools provide device discovery, topology mapping, performance analytics, fault detection, and firmware updates to maintain a healthy InfiniBand fabric.

  • Q: Can this blade be used in mixed-density environments?

    A: Yes. The Switch-IB 2 blade is designed to integrate into larger InfiniBand fabrics and scale with your deployment, whether you’re expanding an HPC cluster, accelerating data center interconnects, or layering storage and compute resources in a converged environment.
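
To put the SHARP answer above in context: SHARP targets collective operations such as allreduce and barrier, performing the aggregation inside the switch fabric so that applications usually benefit transparently through their MPI stack. The sketch below is only an illustration of the kind of collective that can be offloaded, written with mpi4py and NumPy; whether SHARP is actually engaged depends on the cluster's MPI and fabric configuration, not on this application code.

```python
# Illustrative only: an MPI allreduce, the kind of collective a
# SHARP-capable fabric can aggregate in-network. Requires mpi4py and
# NumPy; launch with an MPI runner, e.g.:
#   mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a small local buffer; the reduction sums the
# buffers element-wise and delivers the result to every rank. On a
# SHARP-enabled fabric this aggregation can be offloaded to the switches.
local = np.full(4, rank, dtype=np.float64)
result = np.empty_like(local)
comm.Allreduce(local, result, op=MPI.SUM)

if rank == 0:
    print("Element-wise sum across all ranks:", result)
```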

