Dot Hill Systems SANnet II 200 Setup Guide

SANnet II 200 SCSI Array Installation,
Operation, and Service Manual
June 2005
83-00002359, Revision C

Copyright
Copyright 2001-2005 Dot Hill Systems Corp. All rights reserved. No part of this publication may be reproduced,
stored in a retrieval system, translated, transcribed, or transmitted, in any form or by any means – manual, electric,
electronic, electromechanical, chemical, optical, or otherwise – without prior explicit written permission of Dot Hill
Systems Corp., 6305 El Camino Real, P.O. Box 9000, Carlsbad, CA 92009-1606.
Trademarks
Dot Hill Systems, the Dot Hill logo, SANscape, SANnet, and SANpath are registered trademarks of Dot Hill
Systems Corp. All other trademarks and registered trademarks are proprietary to their respective owners.
Changes
The material in this document is for information only and is subject to change without notice. While reasonable
efforts have been made in the preparation of this document to assure its accuracy, Dot Hill Systems Corp. assumes
no liability resulting from errors or omissions in this document, or from the use of the information contained herein.
Dot Hill Systems Corp. reserves the right to make changes in the product design without reservation and without
notification to its users.

Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
1. Product and Architecture Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–1
1.1 SANnet II 200 SCSI Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–1
1.2 Array Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1–2
1.3 SCSI Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1–3
1.3.1 Redundant Configuration Considerations . . . . . . . . . . . . . . . . . . . . . 1–4
1.4 Device Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1–4
1.5 Field-Replaceable Units (FRUs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–5
1.5.1 RAID I/O Controller Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–5
1.5.2 I/O Expansion Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–6
1.5.3 Disk Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1–6
1.5.4 Battery Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–7
1.5.5 Power and Fan Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–7
1.6 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1–7
1.7 Additional Software Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1–8
2. Site Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–1
2.1 Customer Obligations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–1
2.2 Safety Precautions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–2
2.3 Environmental Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–3
2.3.1 Electromagnetic Compatibility (EMC) . . . . . . . . . . . . . . . . . . . . . . . . 2–3
2.4 Electrical and Power Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–3
2.5 Physical Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–4
2.6 Layout Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2–4

iv SANnet II 200 SCSI Array Installation, Operation, and Service Manual • June 2005
2.6.1 Rack Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–4
2.6.2 Tabletop Placement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–5
2.7 Console and Other Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–6
2.8 Preinstallation Worksheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2–6
3. Unpacking Your SCSI Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3–1
3.1 Opening Your Package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3–1
3.2 Checking the SANnet II 200 SCSI Array Package Contents . . . . . . . . . . . . . .3–2
3.3 Field-Replaceable Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3–3
3.4 Customer-Provided Cables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3–3
3.5 Mounting Your Array in a Rack or Cabinet . . . . . . . . . . . . . . . . . . . . . . . . . . . .3–3
4. Connecting Your SCSI Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–1
4.1 Converting Your Front Bezel Locks So the Keys Cannot Be Removed . . . . . .4–2
4.2 Hardware Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–4
4.3 Connecting the Chassis to an AC Power Outlet . . . . . . . . . . . . . . . . . . . . . . . .4–5
4.4 Connecting the Chassis to DC Power Outlets . . . . . . . . . . . . . . . . . . . . . . . . .4–7
4.5 Powering Up and Checking LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–9
4.6 Single-Bus and Split-Bus Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–10
4.6.1 Default Channel Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–11
4.7 Connecting Cables for a Single-Bus Configuration . . . . . . . . . . . . . . . . . . . .4–11
4.8 Connecting Cables for a Split-Bus Configuration . . . . . . . . . . . . . . . . . . . . . .4–14
4.8.1 Standard Cabling Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–17
4.9 Connecting Ports to Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–19
4.9.1 Connecting a SANnet II 200 SCSI RAID Array . . . . . . . . . . . . . . . .4–19
4.10 Cabling to Expansion Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–20
4.10.1 Cabling to One Expansion Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–20
4.10.2 Cabling to Two Expansion Units . . . . . . . . . . . . . . . . . . . . . . . . . . .4–22
4.10.3 Adding an Expansion Unit to an Existing RAID Array . . . . . . . . . . .4–24
4.11 Establishing Communications With An Array . . . . . . . . . . . . . . . . . . . . . . . . .4–25
4.11.1 Configuring a Host COM Port to Connect to a RAID Array . . . . . . .4–26
4.11.2 Manually Setting a Fixed IP Address . . . . . . . . . . . . . . . . . . . . . . . .4–27
4.12 Setting Up Out-of-Band Management Over Ethernet . . . . . . . . . . . . . . . . . .4–28
4.13 Remaining Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4–30

4.14 Power-On Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–30
4.15 Power-Off Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4–31
5. Checking LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5–1
5.1 LEDs When the Array Is First Powered On . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–1
5.2 Front-Panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5–2
5.2.1 Drive LED Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–4
5.3 Back-Panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–5
5.3.1 I/O Module LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–6
5.3.2 RAID Controller LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–7
5.3.3 Power Supply and Fan Module LEDs . . . . . . . . . . . . . . . . . . . . . . . . 5–7
5.3.4 EMU Module LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5–9
6. Maintaining Your Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6–1
6.1 Scanning Drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6–1
6.2 Using Software to Monitor and Manage Your Array . . . . . . . . . . . . . . . . . . . . . 6–2
6.2.1 Out-of-Band Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–2
6.2.2 Inband Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6–3
6.2.3 Other Supported Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–3
6.3 Battery Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–3
6.3.1 Battery Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–3
6.4 Silencing Audible Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–4
6.5 Viewing Event Logs on the Screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–6
6.6 Upgrading Firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6–8
6.6.1 Controller Firmware Upgrade Features . . . . . . . . . . . . . . . . . . . . . . . 6–8
6.6.2 Installing Firmware Upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–9
6.7 Replacing the Front Bezel and Ear Caps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6–9
6.7.1 Removing the Front Bezel and Ear Caps . . . . . . . . . . . . . . . . . . . . . . 6–9
6.7.2 Placing the Bezel and Ear Caps Back Onto the Chassis . . . . . . . . . 6–10
7. Troubleshooting Your Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–1
7.1 Sensor Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–1
7.2 RAID LUNs Not Visible to the Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–3
7.3 JBOD Disks Not Visible to the Host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7–3
7.4 Controller Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–4

7.5 Recovering From Fatal Drive Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–4
7.6 Using the Reset Button . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–6
7.7 Troubleshooting Flowcharts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–7
7.7.1 Power Supply and Fan Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–7
7.7.2 Drive LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–9
7.7.3 Front-Panel LEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–14
7.7.4 I/O Controller Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7–19
A. SCSI Array Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–1
A.1 Physical Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–1
A.2 Summary of SANnet II 200 SCSI Array Specifications . . . . . . . . . . . . . . . . . A–2
A.3 Agency Approvals and Standards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A–3
B. Cabling JBODs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–1
B.1 Known Limitations Affecting SANnet II 200 SCSI JBOD Arrays . . . . . . . . . . . B–2
B.2 Connecting a SANnet II 200 JBOD Array . . . . . . . . . . . . . . . . . . . . . . . . . . . B–2
B.3 Cabling a Single-Bus JBOD with One Host Connection . . . . . . . . . . . . . . . . B–3
B.4 Cabling a Single-Bus JBOD with Two Host Connections . . . . . . . . . . . . . . . . B–4
B.5 Cabling a Split-Bus, Single-Initiator JBOD Configuration . . . . . . . . . . . . . . . . B–5
B.5.1 Connecting a Split-Bus JBOD to One Host . . . . . . . . . . . . . . . . . . . B–6
B.6 Cabling a Split-Bus, Multi-Initiator JBOD Configuration . . . . . . . . . . . . . . . . . B–7
B.7 Overview of Provided Software Monitoring and Management Tools . . . . . . . B–9
B.8 Monitoring with SANscape . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–9
B.8.1 Enabling JBOD Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–9
B.8.2 Viewing Component and Alarm Characteristics . . . . . . . . . . . . . . . B–11
B.9 Event Messages from the SANscape Alert . . . . . . . . . . . . . . . . . . . . . . . . . B–12
B.10 Monitoring with the SANscape CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–12
B.11 Downloading Firmware to Disk Drives in a JBOD . . . . . . . . . . . . . . . . . . . . B–13
B.12 Managing Disks in the SANnet II 200 JBOD Array . . . . . . . . . . . . . . . . . . . B–13
B.13 Troubleshooting SANnet II 200 SCSI JBOD Arrays . . . . . . . . . . . . . . . . . . . B–13
B.13.1 Troubleshooting Configuration Issues . . . . . . . . . . . . . . . . . . . . . . B–13
B.13.2 Troubleshooting Hardware Issues . . . . . . . . . . . . . . . . . . . . . . . . . B–14
B.13.2.1 Writing Events to a Log File for an IBM AIX Host . . . . . B–15
B.13.3 Troubleshooting Flowcharts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B–16

C. Failed Component Alarm Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C–1
D. Connector Pinouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D–1
D.1 SCSI Host or Drive Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D–1
D.2 RJ-45 Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D–3
D.3 DB9 COM Port Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D–4
E. Configuring a Solaris Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .E–1
E.1 Accessing the Firmware Application On a Solaris Host . . . . . . . . . . . . . . . . . E–1
E.2 Editing the sd.conf File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E–2
E.3 Enabling a Solaris Host to Recognize New Devices and LUNs . . . . . . . . . . . E–4
E.4 Labeling a Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E–6
E.5 Making JBODs Visible to Solaris Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . E–9
F. Configuring a Windows 200x Server or Windows 200x Advanced Server . . . . . .F–1
F.1 Setting Up the Serial Port Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .F–1
F.2 Accessing the Firmware Application From a Windows 200x Server . . . . . . . .F–3
F.3 Enabling a Windows 200x Server to Recognize New Devices and LUNs . . . .F–4
G. Configuring a Linux Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G–1
G.1 Checking the Adapter BIOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G–1
G.2 Multiple LUN Linux Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G–2
G.3 Making an ext3 Filesystem for Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G–3
G.4 Creating a Filesystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G–4
G.5 Creating a Mount Point and Mounting the Filesystem Manually . . . . . . . . . . G–4
G.6 Mounting the Filesystem Automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . G–4
H. Configuring an IBM Server Running the AIX Operating System . . . . . . . . . . . . H–1
H.1 Setting Up a Serial Port Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H–1
H.2 Accessing the Firmware Application From an IBM Server Running AIX . . . . H–2
H.3 Identifying the Device On Which To Create a Logical Volume . . . . . . . . . . . . H–4
H.4 Using SMIT to Enable an AIX Host to Recognize New LUNs . . . . . . . . . . . . H–4
H.5 Creating a Volume Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H–5
H.6 Creating a Logical Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H–6
H.7 Creating a File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H–6
H.8 Mounting the New File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . H–7

H.9 Verifying That the New File System Is Mounted . . . . . . . . . . . . . . . . . . . . . . H–7
I. Configuring an HP Server Running the HP-UX Operating System . . . . . . . . . . . . . I–1
I.1 Setting Up a Serial Port Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–2
I.2 Accessing the Firmware Application From an HP Server Running HP-UX . . . I–2
I.3 Attaching the Disk Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–4
I.4 Logical Volume Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–5
I.5 Definitions of Common Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–5
I.6 Creating a Physical Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–5
I.7 Creating a Volume Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–6
I.8 Creating a Logical Volume . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–8
I.9 Creating an HP-UX File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–8
I.10 Mounting the File System Manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–8
I.11 Mounting the File System Automatically . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I–9
J. Revision History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .J–1
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Index–1

Preface
This manual gives step-by-step procedures for installing and initially configuring the
SANnet II SCSI array.
Caution – Read the SANnet II Family Safety, Regulatory, and Compliance Manual
before beginning any procedure in this guide.
Tip – This guide is written for experienced system administrators who are familiar
with Dot Hill hardware and software products.
How This Book Is Organized
This book covers the following topics:
Chapter 1 provides an overview of RAID features.
Chapter 2 covers the site planning and basic safety requirements.
Chapter 3 provides general procedures for unpacking and inspecting the array.
Chapter 4 provides procedures for cabling and for connecting to power and to the
network.
Chapter 5 describes the front-panel and back-panel LEDs.
Chapter 6 describes maintenance procedures.
Chapter 7 describes troubleshooting procedures.
Appendix A provides SANnet II 200 SCSI array specifications.
Appendix B shows how to cable JBODs to one or more host servers.
Appendix C provides information about failed component alarm codes.
Appendix D provides pinout identification for each connector.

Appendix E provides information on configuring Sun servers running the Solaris™
operating system.
Appendix F provides information on configuring Windows 200x servers.
Appendix G provides information on configuring Linux servers.
Appendix H provides information on configuring IBM AIX servers.
Appendix I provides information on configuring HP-UX servers.
Typographic Conventions

Typeface (1)   Meaning                                  Examples
AaBbCc123      The names of commands, files, and        Edit your .login file.
               directories; on-screen computer          Use ls -a to list all files.
               output                                   % You have mail.
text           Computer menu                            Click Start.
AaBbCc123      Book titles, new words or terms,         Read Chapter 6 in the User’s Guide.
               words to be emphasized. Replace          These are called class options.
               command-line variables with real         You must be a superuser to do this.
               names or values.                         To delete a file, type rm filename.

1. The settings on your browser might differ from these settings.

Related Documentation

Title                                                              Part Number
SANnet II 200 SCSI Array Release Notes                             83-00002366
SANnet II 200 SCSI Array Best Practices Manual                     83-00002667
SANnet II 200 Family Safety, Regulatory, and Compliance Manual     83-00002666
SANnet II Family RAID Firmware 4.1x User’s Guide                   83-00003435
SANnet II Family FRU Installation Guide                            83-00002708
SANnet II 200 Family Rack Installation Guide for 2U Arrays         83-00002365
SANscape User’s Guide                                              83-00003431
SANscape Alert User’s Guide                                        83-00003432
SANscape CLI User’s Guide                                          83-00003433

Technical Support
For late-breaking Release Notes and all manuals for this product, go to the SANnet II
SCSI array section at:
http://www.dothill.com/manuals
The following information may be required when contacting Technical Support:
■Dot Hill serial number and part number of the hardware
■Version of Dot Hill-supplied software
■Host computer platform and operating system version
■Description of the problem and any related error messages
To facilitate our tracking system and improve our response time, also supply:
■Customer name and company name
■State and country
■Telephone number with area code
■Internet mail address
■Maintenance contract number, if applicable
Placing a Support Call
After obtaining the above information, a support call may be placed by Internet mail,
fax, or telephone.
Phone: 1-877-DOT7X24 (877-368-7924)
URL: http://www.dothill.com/support/index.htm
Corporate Headquarters Contacts
United States (California) Corporate Headquarters
Tel: 1-760-931-5500 or 1-800-872-2783
Fax: 1-760-931-5527
E-mail: [email protected]
Netherlands: European Headquarters
Dot Hill Systems Corp., B.V. (Netherlands)
Tel: 31 (0) 53 428 4980; Fax: 31 (0) 53 428 0562
E-mail: [email protected]
Japan: Japanese Headquarters
Nihon Dot Hill Systems Corp., Ltd.
Tel: 81-3-3251-1690; Fax: 81-3-3251-1691
E-mail: [email protected]
For additional sales offices in the U.K., China, Sweden, Germany, France, Israel, and
Singapore, see our web site:
http://www.dothill.com/company/offices.htm

Dot Hill Welcomes Your Comments
Dot Hill is interested in improving its documentation and welcomes your comments
and suggestions. You can email your comments to:
Include the part number (83-00002359) of your document in the subject line of your
email.

CHAPTER 1
Product and Architecture Overview
This chapter provides a brief overview of the SANnet II 200 SCSI array, which is an
LVD/SE device. Topics covered in this chapter are:
■“SANnet II 200 SCSI Arrays” on page 1-1
■“Array Configurations” on page 1-2
■“SCSI Architecture” on page 1-3
■“Device Identification” on page 1-4
■“Field-Replaceable Units (FRUs)” on page 1-5
■“Interoperability” on page 1-7
■“Additional Software Tools” on page 1-8
1.1 SANnet II 200 SCSI Arrays
Providing a total capacity of 10.8 terabytes, the SANnet II 200 SCSI RAID array is a
high-performance, modular storage device with a very small footprint: 3.5 inches tall
by 19 inches wide (8.89 cm tall by 48.26 cm wide). The array contains one or two
internal RAID controllers and up to twelve 300-Gbyte disk drives with SCSI
connectivity to the data host.
Figure 1-1 Front View of a SANnet II 200 SCSI Array (RAID, Expansion Unit, or JBOD)
The RAID-equipped array is highly scalable and supports up to two expansion chassis
(expansion unit arrays that have a set of drives and no controller) for a total of 36
drives. The RAID array and expansion units connect to the storage devices and
consoles by means of standard serial port, Ethernet, and SCSI connections.

Figure 1-2 Rear View of a RAID Array
Also available is a JBOD array (Just a Bunch of Disks), which is similar to an
expansion unit except that it is connected directly to a host server rather than to a
RAID array.
Figure 1-3 Rear View of an Expansion Unit or JBOD
Extensive reliability, availability, and serviceability (RAS) features include redundant
components, notification of failed components, and the ability to replace components
while the unit is online.
The RAID array can be used either as a standalone storage unit or as a building block,
interconnected with expansion arrays of the same type. The array can be placed on a
tabletop or rackmounted in a server cabinet or expansion cabinet.
For information about specifications and agency approvals, see Appendix A.
1.2 Array Configurations
The SANnet II 200 SCSI array can be used in the following configurations:
■A single-controller configuration. A RAID array can be configured with a single
controller in a non-redundant configuration.
■A dual-controller configuration. A RAID array can be configured with two
controllers to provide full redundancy.
■An expansion unit. An expansion unit consists of a chassis with disk drives and I/O
expansion modules. The expansion unit does not include an I/O controller module.
The expansion unit connects to, and is managed by, a RAID array.
■A Just a Bunch of Disks (JBOD) array. The JBOD array connects to, and is
managed by, a host server.
For more information about JBODs, see Appendix B.

Table 1-1 shows the configuration options for SANnet II 200 SCSI arrays.

Table 1-1 SANnet II 200 SCSI Array Configuration Options

Internal RAID controllers        1 or 2
SCSI disks                       Up to 12 per array or per expansion unit, with a
                                 minimum of 4 plus 1 spare
SCSI expansion units (1)         Up to 2
SCSI JBOD arrays (2)             1
Connection options               • Serial port
                                 • Ethernet
Supported RAID levels            0, 1, 3, 5, 1+0, 3+0, and 5+0
Redundant field-replaceable      • Power supply and fan modules
units (FRUs)                     • Controller modules
                                 • I/O modules
                                 • Disk drive modules
                                 • EMUs (event monitoring units)
Configuration management and     • In-band SCSI ports
enclosure event reporting        • Out-of-band 10/100BASE-T Ethernet port
options (3)                      • RS-232 connectivity
                                 • Enclosure monitoring by SCSI Accessed
                                   Fault-Tolerant Enclosure (SAF-TE)

1. A disk array with no controller.
2. A disk array with no controller that is connected directly to a host computer, with no RAID array.
3. The host-based SANscape software provides a graphical user interface (GUI) and additional event-reporting capabilities.

For information about maximum disk, logical drive, and array capacity, refer to the
SANnet II Family RAID Firmware User’s Guide.

1.3 SCSI Architecture
Each RAID array has five channels with the following defaults:
■Channels 1 and 3 are host channels connected to servers. Any SANnet II 200 SCSI
array host channel can be reassigned as a drive channel to connect to an expansion
unit.
■Channels 0 and 2 are drive channels that connect the 12 internal disk drives in the
RAID chassis and can also be used to add expansion chassis to the configuration.
Channel 2 can also be reassigned as a host channel. However, in a dual-bus
configuration, channel 2 must be a drive channel.
■Channel 6 is a redundant controller communication (RCCOM) channel and must
remain dedicated to that role. RCCOM provides the communication paths by which
the two controllers in a redundant RAID array monitor each other, exchange
configuration updates, and coordinate control of the cache.
For more host and drive channel information, see Chapter 4.
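The default channel assignments above can be summarized as a small lookup table. The following Python sketch is illustrative only; the helper name and the role strings it returns are assumptions, not part of any array management interface:

```python
def default_channel_role(channel: int) -> str:
    """Return the factory-default role of a SANnet II 200 SCSI channel.

    Roles follow the defaults described above; channels not listed
    in the text map to "undefined".
    """
    roles = {
        0: "drive",  # internal disks; can also cable expansion chassis
        1: "host",   # reassignable as a drive channel
        2: "drive",  # reassignable as host, except in dual-bus setups
        3: "host",   # reassignable as a drive channel
        6: "RCCOM",  # dedicated controller-to-controller channel
    }
    return roles.get(channel, "undefined")


print(default_channel_role(1))  # host
print(default_channel_role(6))  # RCCOM
```

The dedicated RCCOM entry reflects the requirement that channel 6 never be reassigned.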
1.3.1 Redundant Configuration Considerations
This section provides information about setting up redundant configurations for
increased reliability. For more detailed information about configuration requirements,
refer to the SANnet II Family RAID Firmware User’s Guide and the SANnet II Family
Best Practices Manual.
SCSI storage configurations use topologies designed to avoid loss of data caused by
component failure. As a rule, the connections between source and target should be
configured in redundant pairs.
The recommended host-side connection consists of two or more host bus adapters
(HBAs). Each HBA is used to configure a connection between the host computer and
the array.
In the unlikely event of a controller failure, the standby channels on the remaining
controller become the I/O route for the host I/O originally directed to the failed
controller's channels. In addition, application failover software should be running on
the host computer to control the transfer of I/O from one HBA to another in case
either data path fails.
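The host-side failover behavior described above is normally handled by dedicated multipath software; the following Python sketch only illustrates the underlying idea of switching I/O to a surviving path (the `Path` type and function name are assumptions for illustration):

```python
from dataclasses import dataclass


@dataclass
class Path:
    """One host-to-array data path through a host bus adapter (HBA)."""
    hba: str
    healthy: bool


def select_active_path(paths):
    """Return the first healthy path, emulating failover away from a failed HBA."""
    for path in paths:
        if path.healthy:
            return path
    raise RuntimeError("all data paths failed")


# hba0 has failed; I/O fails over to hba1.
paths = [Path("hba0", False), Path("hba1", True)]
print(select_active_path(paths).hba)  # hba1
```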
1.4 Device Identification
A label on the lower lip of an array chassis, underneath the front bezel, indicates
whether it is a JBOD array or a RAID array. For instance, “SANnet II 200 AC JBOD”
refers to an alternating-current version of a SANnet II 200 JBOD array, “SANnet II
200 DC JBOD” refers to a direct-current version of a JBOD array, and “SANnet II 200
AC RAID” refers to an alternating-current version of a RAID array. A command such
as probe-scsi-all provides similar information, using an “R” designator for
RAID arrays and a “J” designator for disks in a JBOD array. For example, “SANnet II
200L J” identifies a JBOD array with SAF-TE firmware version 1170, and “SANnet II
200L R” identifies a SANnet II 200 SCSI RAID array with firmware version 1170.
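As a hypothetical illustration of the designator convention described above, this Python sketch classifies an inquiry label by its trailing letter (the function and behavior are assumptions for illustration, not output parsing for a real array):

```python
def classify_sannet_device(inquiry_label: str) -> str:
    """Classify a SANnet II 200 inquiry label by its trailing designator.

    Labels ending in "R" denote RAID arrays; labels ending in "J"
    denote disks in a JBOD array (illustrative convention only).
    """
    designator = inquiry_label.strip().split()[-1]
    if designator == "R":
        return "RAID array"
    if designator == "J":
        return "JBOD disk"
    return "unknown"


print(classify_sannet_device("SANnet II 200L R"))  # RAID array
print(classify_sannet_device("SANnet II 200L J"))  # JBOD disk
```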
For a list of supported racks and cabinets, refer to the release notes for the model of
array that you are installing.

Reliability, availability, and serviceability (RAS) are supported by:
■Redundant components
■Notification of failed components
■Components that are replaceable while the unit is online
For information about specifications and agency approvals, see Appendix A.
1.5 Field-Replaceable Units (FRUs)
This section describes the FRUs contained in the SANnet II 200 SCSI array.
1.5.1 RAID I/O Controller Modules
A dual-controller configuration offers increased reliability and availability because it
eliminates a single point of failure, the controller. In a dual-controller configuration, if
the primary controller fails, the array automatically fails over to the second controller
without an interruption of data flow.
SANnet II 200 SCSI array I/O controller modules are hot-serviceable, and each RAID
controller module provides four SCSI ports. Single-controller and dual-controller
models are available. Each RAID controller is configured with 1 gigabyte (Gbyte) of
cache.
In the unlikely event of an I/O controller module failure, the redundant RAID
controller immediately begins servicing all I/O requests. The failure does not affect
application programs.
Each RAID I/O controller module can support up to 1 Gbyte of Synchronous
Dynamic Random Access Memory (SDRAM) with Error Control Check (ECC)
memory. In addition, each controller supports 64 megabytes (Mbyte) of on-board
memory. One Application-Specific Integrated Circuit (ASIC) controller chip handles
the interconnection between the controller bus, DRAM memory, and Peripheral
Component Interconnect (PCI) internal buses. It also handles the interface between the
on-board 2-Mbyte flash memory, 32-Kbyte nonvolatile random access memory
(NVRAM), RS-232 port chip, and 10/100BASE-T Ethernet chip.
The RAID I/O controller module is a multifunction board. I/O controller modules
include the SCSI Accessed Fault-Tolerant Enclosure (SAF-TE) logic and the RAID
controller. The SAF-TE logic monitors various temperature thresholds, fan speed from
each fan, voltage status from each power supply, and the FRU ID.
Each RAID I/O controller module incorporates the SAF-TE direct-attached capability
to monitor and maintain enclosure environmental information. The SAF-TE controller
chip monitors all internal +12 and +5 voltages, various temperature sensors located
throughout the chassis, and each fan. The SAF-TE also controls the front-panel and
back-panel LEDs and the audible alarm. Both the RAID chassis and the expansion
chassis support dual SAF-TE failover capabilities for fully redundant event
monitoring.
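In outline, the SAF-TE monitoring described above reduces to comparing each sensor reading against a pair of limits and raising an alarm on a breach. A minimal shell sketch follows; the sensor names and threshold values are invented for illustration and are not the firmware's actual limits:

```shell
# Hypothetical sketch of a SAF-TE-style threshold check. Sensor names and
# limit values are invented for illustration; they are not the firmware's
# actual thresholds.
check_sensor() {   # usage: check_sensor <name> <value> <low> <high>
    if [ "$2" -lt "$3" ] || [ "$2" -gt "$4" ]; then
        echo "ALARM: $1=$2 outside [$3..$4]"
    else
        echo "OK: $1=$2"
    fi
}

check_sensor chassis_temp 38 5 55       # within limits
check_sensor fan1_rpm 1200 2000 6000    # below minimum: raises an alarm
```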

1-6 SANnet II 200 SCSI Array Installation, Operation, and Service Manual • June 2005
1.5.2 I/O Expansion Modules
The hot-serviceable I/O expansion modules provide four ports but do not have battery
modules or controllers. I/O expansion modules are used with I/O controller modules in
non-redundant SANnet II 200 SCSI arrays, and in expansion units and SANnet II 200
SCSI JBODs.
1.5.3 Disk Drives
Each disk drive is mounted in its own sled assembly. Each sled assembly has EMI
shielding, an insertion and locking mechanism, and a compression spring for maximum
shock and vibration protection.
Each disk drive is slot-independent, meaning that once a logical drive has been
initialized, the system can be shut down and the drives can be removed and replaced in
any order. In addition, disk drives are field-upgradeable to larger drives without
interruption of service to user applications. The drive firmware is also field-
upgradeable, but the firmware upgrade procedure requires interruption of service.
In the event of a single disk drive failure, the system continues to service all I/O
requests (except in RAID 0 configurations, which provide no redundancy). Either
mirrored data or parity data is used to rebuild the data from the failed drive onto a
spare drive, assuming one is assigned. If a spare is not assigned, you must rebuild
the logical drive manually.
In the unlikely event that multiple drive failures occur within the same logical drive,
data that has not been replicated or backed up might be lost. This is an inherent
limitation of all RAID subsystems and could affect application programs.
An air management sled FRU is available for use when you remove a disk drive and
do not replace it. Insert an air management sled into the empty slot to maintain
optimum airflow through the chassis.
The drives can be ordered in 36-Gbyte, 73-Gbyte, 146-Gbyte, and 300-Gbyte sizes.
Thirty-six-gigabyte drives have a rotation speed of 15,000 RPM, while 73-Gbyte, 146-
Gbyte, and 300-Gbyte drives have a rotation speed of 10,000 RPM.
Caution – You can mix disk drive capacities in the same chassis, but not spindle
speeds (RPM) on the same SCSI bus. For instance, you can use 36-Gbyte and
73-Gbyte drives together with no performance problems if both are 10,000-RPM
drives. Violating this configuration guideline leads to poor performance.
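A check along the lines of this caution can be scripted before a logical drive is built. In this sketch the helper name and the RPM values are illustrative; the speeds would come from your own drive inventory:

```shell
# Hypothetical sketch: flag a SCSI bus whose drives report different
# spindle speeds. The function name and example values are illustrative.
check_bus_rpm() {
    first=""
    for rpm in "$@"; do
        if [ -z "$first" ]; then
            first="$rpm"
        elif [ "$rpm" != "$first" ]; then
            echo "mixed spindle speeds on one bus: $first and $rpm RPM"
            return 1
        fi
    done
    echo "uniform bus: all drives at $first RPM"
}

check_bus_rpm 10000 10000 10000   # acceptable: capacities may still differ
check_bus_rpm 10000 15000         # flagged: violates the guideline above
```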

1.5.4 Battery Module
The battery module is designed to provide power to the system cache for 72 hours in
the event of a power failure. When power is restored, the cache contents are written to disk. The
battery module is a hot-swappable FRU that is mounted on the I/O board with guide
rails and a transition board. It also contains the EIA-232 and DB9 serial interface
(COM) ports. Hot-swappable means that a live upgrade can be performed. The battery
FRU can be removed and replaced while the RAID array is powered on and
operational. For battery replacement information, refer to the SANnet II Family FRU
Installation Guide.
1.5.5 Power and Fan Modules
Each array contains two redundant power and fan modules. Each module contains a
420-watt power supply and two radial 52 cubic feet per minute (CFM) fans. The
power supplies autorange over the following input voltages:
■AC power supply: 90 volts alternating current (VAC) to 264 VAC
■DC power supply: –36 volts direct current (VDC) to –72 VDC
A single power and fan module can sustain an array.
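The autoranging limits above can be expressed as a simple validity check. This is a sketch using the thresholds quoted in the list; the function name is an assumption and the check works in whole volts only:

```shell
# Sketch of the autoranging input limits quoted above. Integer volts only;
# the function name is illustrative.
in_input_range() {   # usage: in_input_range ac|dc <volts>
    case "$1" in
        ac) if [ "$2" -ge 90 ] && [ "$2" -le 264 ]; then echo yes; else echo no; fi ;;
        dc) if [ "$2" -le -36 ] && [ "$2" -ge -72 ]; then echo yes; else echo no; fi ;;
    esac
}

in_input_range ac 110   # yes: within 90-264 VAC
in_input_range dc -48   # yes: within -36 to -72 VDC
in_input_range ac 300   # no: above the 264 VAC ceiling
```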
1.6 Interoperability
The array is designed for heterogeneous operation and supports the following
operating systems:
■Solaris, versions 8, 9, and 10
■Sun™ Linux 5.0 on the Sun LX50 server
■Red Hat Linux AS, versions 2.1 and 3.0
■Windows 2000 Advanced Server and Windows 2003 Server
■IBM AIX, versions 5.1, 5.2, and 5.3
■HP-UX, versions 11.0 and 11i
Note – For information about supported versions of these operating systems, refer to
the SANnet II 200 SCSI Array Release Notes.
The array requires no host-based software for configuration, management, or
monitoring; these tasks are handled through the built-in firmware application. The
console window can be accessed through the DB9 communications (COM) port
using the tip command, or through the Ethernet port using the telnet
command.
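For example, on a Solaris host the two access paths might look like the following. The serial device, baud rate, and IP address are placeholders; substitute the values for your own host and array. The commands are printed rather than executed so they can be reviewed and adapted first:

```shell
# Placeholders: adjust the device, speed, and address for your site. The
# commands are echoed, not executed, so this sketch is safe to run anywhere.
SERIAL_DEV=/dev/ttyb        # host serial port cabled to the array's COM port
BAUD=38400                  # assumed serial speed; match your array's setting
ARRAY_IP=192.168.0.100      # address assigned to the array's Ethernet port

echo "serial access:  tip -$BAUD $SERIAL_DEV"
echo "network access: telnet $ARRAY_IP"
```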

1.7 Additional Software Tools
The following optional software tools are available on the web site or on your
purchased software CD:
■SANscape, a management and monitoring program
■SANscape Alert software, a monitoring utility
■SANscape CLI, a command-line utility for downloading firmware and viewing the
event log
See the web site or installation CD for the additional user guides.