DATASHEET

InfiniPath® QHT7140
HyperTransport HTX to InfiniBand 4X Adapter

Benefits
• Increases cluster efficiency and application productivity
• Provides superior application scaling to 1000s of CPUs
• Enables faster application run times for faster time-to-solution
• Increases utilization of computing infrastructure
• Increases ROI of computing assets

Features
• HTX to InfiniBand 4X adapter
• HTX half-height short form factor
• 1.29 µs one-way MPI latency through an InfiniBand switch [1]
• 954 MB/s uni-directional bandwidth [1]
• 88-byte n1/2 streaming message size (1 CPU core) [1]
• 3-year hardware warranty

The InfiniPath HTX InfiniBand adapter delivers industry-leading performance in a cluster interconnect, allowing organizations to gain maximum advantage and return on their investment in clustered systems by driving up the utilization of the computing infrastructure.

The InfiniPath adapter yields the lowest latency, the highest message rate, and the highest effective bandwidth of any cluster interconnect available. As a result, organizations relying on clustered systems for critical computing tasks will experience a significant increase in productivity.
Superior Application Performance. The InfiniPath adapter’s low latency and high message rates result in superior real-world application scalability across nearly all modeling and simulation applications.

Well-known applications that have demonstrated superior scaling and outstanding performance when running on clusters with the InfiniPath interconnect include: NAMD, Amber8, PETSc, Star-CD, Fluent, NWChem, DL_POLY, LS-DYNA, WRF, POP, MM5, LAMMPS, GAMESS, CPMD, AM2, CHARMM, GROMACS, and many others.
Highest Effective Bandwidth and Message Rate. Because of its high messaging rate, the InfiniPath bandwidth curve rises faster than that of any other adapter.

The InfiniPath HTX adapter delivers significantly more bandwidth at message sizes typical of real-world HPC applications and many enterprise applications. It also delivers the highest effective bandwidth of any cluster interconnect because it achieves half its peak bandwidth (n1/2) [2] at a message size of just 385 bytes, the lowest in the industry. This means that applications run faster on the InfiniPath adapter than on any other interconnect.

Such superior performance is a benefit of the unique, highly pipelined, cut-through design that initiates a new message much faster than competitive alternatives. This approach allows application message transmission to scale close to linearly when additional CPU cores are added to a system, dramatically reducing application run times. Other, less effective interconnects can become a performance bottleneck, lowering the return on investment of your computing resources.
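
To make the n1/2 figure concrete: n1/2 is the message size at which a link first delivers half of its peak bandwidth, so with the 954 MB/s peak quoted above, the adapter reaches roughly 477 MB/s at only 385 bytes. The listing below is a minimal sketch of how such a streaming-bandwidth curve can be measured with a windowed MPI test, similar in spirit to common MPI bandwidth benchmarks. It is not QLogic's benchmark code; the window size, repetition count, and maximum message size are illustrative assumptions.

/*
 * Minimal MPI streaming-bandwidth sketch (run with 2 ranks): rank 0
 * streams a window of non-blocking sends per measurement, rank 1
 * posts matching receives and acknowledges each window. The printed
 * curve shows where the n1/2 (half-of-peak) crossover occurs.
 */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

enum { WINDOW = 64, REPS = 20, MAX_SIZE = 1 << 20 };

int main(int argc, char **argv)
{
    static char buf[MAX_SIZE];
    MPI_Request reqs[WINDOW];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof buf);

    for (int size = 1; size <= MAX_SIZE; size *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < REPS; r++) {
            if (rank == 0) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Isend(buf, size, MPI_CHAR, 1, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
                /* 1-byte ack so the timing covers actual delivery */
                MPI_Recv(buf, 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                for (int w = 0; w < WINDOW; w++)
                    MPI_Irecv(buf, size, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
                MPI_Send(buf, 1, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
            }
        }
        if (rank == 0) {
            double sec = MPI_Wtime() - t0;
            double mbs = (double)size * WINDOW * REPS / sec / 1e6;
            printf("%8d bytes: %8.1f MB/s\n", size, mbs);
        }
    }
    MPI_Finalize();
    return 0;
}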
Lowest MPI & TCP Latency. The InfiniPath industry-leading MPI ping-pong latency of 1.29 microseconds (µs) [1] is less than half the latency of other InfiniBand adapters.
Unlike other interconnects, its random-ring latency for up to 256 CPUs, as measured by the HPC Challenge Benchmark Suite, is nearly identical to its ping-pong latency, even as you increase the number of nodes.
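
As an illustration of what the cited numbers measure, below is a minimal MPI ping-pong sketch in the spirit of the OSU test referenced in footnote [1]: rank 0 sends a small message to rank 1, which returns it, and half of the averaged round-trip time approximates the one-way latency. This is not QLogic's or OSU's code; the message size and iteration counts are illustrative assumptions.

/* Minimal MPI ping-pong latency sketch (run with 2 ranks). */
#include <mpi.h>
#include <stdio.h>

enum { WARMUP = 100, ITERS = 10000, MSG_SIZE = 8 };

int main(int argc, char **argv)
{
    char buf[MSG_SIZE] = {0};
    double t0 = 0.0;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < WARMUP + ITERS; i++) {
        if (i == WARMUP) {              /* start timing after warm-up */
            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();
        }
        if (rank == 0) {                /* ping, then wait for pong */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {         /* echo the message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        /* one-way latency = half of the averaged round-trip time */
        double usec = (MPI_Wtime() - t0) * 1e6 / ITERS / 2.0;
        printf("%d-byte one-way latency: %.2f us\n", MSG_SIZE, usec);
    }
    MPI_Finalize();
    return 0;
}

With the MPICH 1.2.6 stack listed under Built on Industry Standards below, a test like this would typically be compiled with mpicc and launched across two nodes with mpirun -np 2.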
The InfiniPath adapter, using a standard Linux distribution, also achieves the lowest TCP/IP latency and outstanding bandwidth. [3] Eliminating the excess latency found in traditional interconnects reduces communications wait time and allows processors to spend more time computing, which results in applications that run faster and scale higher.
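
Footnote [3] attributes the TCP figures to Netperf, whose request/response tests bounce small messages between two endpoints. The sketch below is a stand-in for that style of measurement, not Netperf itself: it forks a one-byte echo server and times round trips over loopback. The port number, iteration count, and loopback setup are illustrative assumptions; a real adapter measurement would place the two ends on separate hosts.

/* Minimal TCP request/response latency sketch over loopback. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

enum { PORT = 5555, ITERS = 10000 };

static double now_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

int main(void)
{
    struct sockaddr_in addr;
    int one = 1;
    char byte = 'x';

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (fork() == 0) {                       /* child: echo server */
        int ls = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(ls, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);
        bind(ls, (struct sockaddr *)&addr, sizeof addr);
        listen(ls, 1);
        int s = accept(ls, NULL, NULL);
        setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
        for (int i = 0; i < ITERS; i++) {    /* echo each byte back */
            read(s, &byte, 1);
            write(s, &byte, 1);
        }
        return 0;
    }

    sleep(1);                                /* crude wait for server */
    int s = socket(AF_INET, SOCK_STREAM, 0);
    setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
    connect(s, (struct sockaddr *)&addr, sizeof addr);

    double t0 = now_usec();
    for (int i = 0; i < ITERS; i++) {        /* 1-byte ping-pong */
        write(s, &byte, 1);
        read(s, &byte, 1);
    }
    printf("TCP round-trip: %.2f us\n", (now_usec() - t0) / ITERS);
    return 0;
}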
Lowest CPU Utilization. The InfiniPath connectionless environment eliminates overhead that wastes valuable CPU cycles. It provides reliable data transmission without the vast resources required by connection-oriented adapters, thus increasing the efficiency of your clustered systems.
Built on Industry Standards. The InfiniPath adapter supports a rich combination of open standards to achieve industry-leading performance. The InfiniPath OpenIB software stack has been proven to be the highest-performance implementation of the OpenIB Verbs layer, which yields both superior latency and bandwidth compared to other InfiniBand alternatives.

• InfiniBand 1.1 4X Compliant
• Standard InfiniBand fabric management
• MPI 1.2 with MPICH 1.2.6
• OpenIB supporting IPoIB, SDP, UDP and SRP
• PCI Express x8 Expansion Slot Compatible
• Supports SUSE, Red Hat, and Fedora Core Linux
InfiniPath QHT7140

HyperTransport Interface
• HT v1.0.3 compliant
• HTX slot compliant
• 6.4 GB/s bandwidth
• ASIC supports a tunnel configuration with upstream and downstream ports at 16 bits @ 1.6 GT/s

Connectivity
• Single InfiniBand 4X port (10+10 Gbps) – Copper
• External fiber optic media adapter module support
• Compatible with managed InfiniBand switches from Cisco®, SilverStorm™, Mellanox®, and Voltaire®
• Interoperable with host channel adapters (HCAs) from Cisco, SilverStorm, Mellanox, and Voltaire running the OpenFabrics software stack
QLogic Host Driver/Upper-Level Protocol (ULP) Support
• MPICH version 1.2.6
• TCP, NFS, UDP, SOCKETS through Ethernet driver emulation
• Optimized MPI protocol stack supplied
• 32- and 64-bit application ready
• SDP, SRP, IPoIB supported through OpenFabrics stack
InfiniBand Interfaces and Specifications
• 4X speed (10+10 Gbps)
• Uses standard IBTA 1.1 compliant fabric and cables; link-layer compatible
• Configurable MTU size (4096 maximum)
• Integrated SERDES
Management Support
• Includes InfiniBand 1.1 compliant SMA (Subnet Management Agent)
• Interoperable with management solutions from Cisco, SilverStorm, and Voltaire
Regulatory Compliance
• FCC Part 15, Subpart B, Class A
• ICES-003, Class A
• EN 55022, Class A
• VCCI V-3/2004.4, Class A
Operating Environments
• Supports 64-bit Linux with 2.6.11 kernels
  - Red Hat Enterprise Linux 4.x
  - SUSE Linux 9.3 & 10.0
  - Fedora Core 3 & 4
• Uses standard Linux TCP/IP stack
QLogic InfiniPath Adapter Specifications
• Typical Power Consumption: 5 Watts
• Available in PCI half-height short and PCI full-height short form factors
• Operating Temperature: 10 to 45°C at 0-3 km (operating); -30 to 60°C (non-operating)
• Humidity: 20% to 80% (non-condensing, operating); 5% to 90% (non-operating)
QLogic InfiniPath ASIC Specifications
• HFCBGA package, 841 pin, 37.5 mm x 37.5 mm ball pitch
• 330 signal I/Os
• 4.1W typical (HT cave), 4.4W typical (HT tunnel)
• Requires 1.2V and 2.5V supplies, plus interface reference voltages
[1] Ping-pong latency and uni-directional bandwidth were measured by Dr. D. K. Panda on 2.8 GHz processors at Ohio State University using the standard OSU ping-pong latency test.
[2] The n1/2 measurement was done with a single-processor node communicating with a single-processor node through a single level of switch. When measured with 4 processor cores per node, the n1/2 number was further improved to 88 bytes, and 90% of the peak bandwidth was achieved with data packets of approximately 640 bytes.
[3] TCP/IP bandwidth and latency were measured using Netperf and a standard Linux TCP/IP software stack.

Note: Actual performance measurements may be improved over data published in this document. All current performance data is available in the InfiniPath section of the QLogic website at www.qlogic.com/pathscale.

© 2006 QLogic Corporation. All rights reserved. QLogic, the QLogic logo, PathScale, InfiniPath, and Accelerating Cluster Performance are registered trademarks or trademarks of QLogic. HyperTransport and HTX are licensed trademarks of the HyperTransport Technology Association. AMD, the AMD Arrow logo, AMD Opteron, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other trademarks are the property of their respective owners.

SN0058045-00 Rev D 11/06

Corporate Headquarters: QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949.389.6000
Europe Headquarters: QLogic (UK) LTD., Surrey Technology Centre, 40 Occam Road, Guildford, Surrey GU2 7YG, UK, +44 (0)1483 295825