Scale Computing HC3 2100 Series User manual

Onboarding – 2100 Series
Welcome!
Thank you for your recent Scale Computing HC3 purchase! We are excited to have
you as a customer and look forward to making sure you are satisfied with every part of
your experience with Scale Computing.
Scale Computing aims to provide you with an informative and productive introduction to
your new HC3 solution. In this effort, every installation is paired with an experienced
installation specialist to assist you from the initial unboxing to the creation of your first
HC3 VM.
Let’s get started!
How to Contact Scale Computing
Standard Support Hours
● General – Monday through Friday, 8 AM to 8 PM Eastern Time
● Installation Scheduling – Monday through Friday, 9 AM to 6 PM Eastern Time
● Services Scheduling – Monday through Friday, 9 AM to 6 PM Eastern Time
● Email ScaleCare Support for assistance Monday through Friday from 8 AM to 8
PM Eastern Time at [email protected].
● ScaleCare Support is available for critical issues 24/7/365 by phone at
1-877-SCALE-59 (1-877-722-5359) in the US and 0808 234 06 99 in Europe.

Telephone support is recommended for the fastest response on priority issues
and is the only support channel available after standard support hours.
Product Overview
The HC2100 Series comprises the entry-level HC3 systems and includes the HC2100
and HC2150.
Hardware Specifications
HC2100

CPU
● Base: Intel E5-2603v4 – 6 cores / 6 threads, 1.7 GHz
● Upgrade: Intel E5-2620v4 – 8 cores / 16 threads, 2.1 GHz / 3 GHz
RAM
● Base: 64GB DDR4
● Upgrade: 128GB DDR4
Networking
● Base: 4 x 1GbE NICs – 2 Public NICs bonded active/passive, 2 Private NICs
bonded active/passive
● Upgrade: 4 x 10GbE NICs – 2 Public NICs bonded active/passive, 2 Private NICs
bonded active/passive
Storage
● Base: NL-SAS (7200 RPM) – 4TB (4 x 1TB) RAW capacity
● Upgrades: NL-SAS (7200 RPM) – 8TB (4 x 2TB), 16TB (4 x 4TB), or 32TB
(4 x 8TB) RAW capacity
Rack
● 1U height – 1.7″ (43mm) H x 17.2″ (437mm) W x 19.85″ (503mm) D
● Upgrade: N/A
Power
● 2 x 400W Power Supply Units
● Upgrade: N/A
Certifications
● UL/CB, CE, FCC
● Upgrade: N/A
Miscellaneous
● Rack Rails, Power Cables, Bezel, Quick Start Reference
● Upgrade: N/A
HC2150

CPU
● Base: Intel E5-2620v4 – 8 cores / 16 threads, 2.1 GHz / 3 GHz
● Upgrade: Intel E5-2640v4 – 10 cores / 20 threads, 2.4 GHz / 3 GHz
RAM
● Base: 64GB DDR4
● Upgrade: 128GB DDR4
Networking
● Base: 4 x 1GbE NICs – 2 Public NICs bonded active/passive, 2 Private NICs
bonded active/passive
● Upgrade: 4 x 10GbE NICs – 2 Public NICs bonded active/passive, 2 Private NICs
bonded active/passive
Storage
● Base: NL-SAS (7200 RPM) – 3TB (3 x 1TB) RAW capacity; SSD – 480GB
(1 x 480GB) RAW capacity
● Upgrades: NL-SAS (7200 RPM) – 8TB (4 x 2TB), 16TB (4 x 4TB), or 32TB
(4 x 8TB) RAW capacity
Rack
● 1U height – 1.7″ (43mm) H x 17.2″ (437mm) W x 19.85″ (503mm) D
● Upgrade: N/A
Power
● 2 x 400W Power Supply Units
● Upgrade: N/A
Certifications
● UL/CB, CE, FCC
● Upgrade: N/A
Miscellaneous
● Rack Rails, Power Cables, Bezel, Quick Start Reference
● Upgrade: N/A

Power Specifications
HC2100 / HC2150*
● Watts: 270
● Max Potential Watts: 390.2
● Btu/hr: 921.3
● Max Potential Btu/hr: 1331.5
● Power Supply Btu/hr: 1194.2
● Amps: 2.5
● Decibels: 6.9
● Weight (lbs): 42.5
● * These power figures are approximate and may vary by 10 to 15 watts
depending on various upgrades and options.
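As a quick sanity check on the figures above, wattage and heat output are related by the standard conversion 1 W ≈ 3.412 Btu/hr. A two-line calculation in any POSIX shell with awk reproduces the listed values to within rounding:

```shell
# Cross-check the power table: Btu/hr = Watts x ~3.41214
awk 'BEGIN { printf "%.1f Btu/hr\n", 270 * 3.41214 }'    # typical draw -> 921.3
awk 'BEGIN { printf "%.1f Btu/hr\n", 390.2 * 3.41214 }'  # max potential -> 1331.4
```

The max-potential result differs from the table's 1331.5 by 0.1 Btu/hr, consistent with rounding in the source data.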
Software Overview
● What makes the Scale Computing solution unique is its patented HyperCore
Software, also known as HC3 Software. All HC3 features and software updates
are included at no additional cost to you. HC3 Software is ready to deploy
straight out of the box with no additional licensing or installation needed. HC3
Software continuously monitors all virtual machines as well as software and
hardware components. This allows it to detect and automatically respond to
common infrastructure events while maintaining operational simplicity through
highly intelligent software automation and architecture simplification.
Software Details
● General
○ Web browser-based GUI, email, and syslog notifications
○ Automatic VM failover in a node failure scenario
○ Automatic data restriping in the event of a failed disk
○ Self-monitoring and self-healing
○ Non-disruptive software upgrades
○ No single point of failure
○ Mix and match nodes
○ Scale up to 8 nodes in a system, no downtime required
● Features
○ HC3 VM Import and Export between HC3 Systems
○ HC3 to HC3 System VM Replication
○ VM Snapshot Scheduling
○ Bulk VM actions
○ VM Cloning
○ Non-disruptive VM live migration between nodes
○ VM and System performance monitoring
Preparing For Your Installation
General

● Take a few minutes to familiarize yourself with the available Scale Computing
onboarding documentation. The HC3 System Installation Guide provides more
detailed processes if needed, such as creating a new Portal account, port
requirements for various HC3 system features, and additional best practices.
○ HC3 System Onboarding Guide
● Ensure you have access to the Scale Computing Customer and/or Partner
Portal. Credentials should be emailed to all users on the account within 24 hours
of product shipment. If you haven’t received credentials within that time frame,
you can sign up manually by selecting “New User?” under the appropriate Portal
from this link: https://www.scalecomputing.com/support/login/
○ When you first log into the Scale Computing Portal, be sure to fill out your
“My Profile” information from the top navigation bar. The selected
timezone will help determine scheduling availability.
● Schedule your installation and any purchased or included services on the Scale
Computing Portal. You’ll find these items under “Support -> Services” in the top
navigation bar. Click the “Schedule Services” link to schedule your installation
and/or professional services.
● NOTE: Any purchased professional services, such as the Premium
Installation Service, the Networking Configuration Service, or the Switch
Configuration Service, can only be scheduled in their proper order. This
means pre-installation services, such as the Networking Configuration
Service, must be scheduled and completed before the installation can be
scheduled in the Portal.
○ All networking related services are reliant on using a switch from the
recommended switch list in the Networking Guidelines.
Rack Installation

● Watch this 2 minute video for step-by-step assistance in racking your HC2100
Series node. And yes, it’s really that easy. As a note, this node is intended to be
a two person carry, so make sure to have someone to assist you during the
racking process!
Networking
● General
○ Review the HC3 Networking Guidelines and Recommendations for details
on switch requirements, recommended switches and cables, and
networking best practices.
● Public and Private System Networks
○ The HC3 system has two distinct physical networks in which it
participates. A public network provides a path to allow access to the HC3
web interface as well as access to VMs running on the system and is
known as the LAN network. A private network, known as the Backplane
network, is used for intra-system communication. This includes critical
system operations such as the mirroring of data blocks for redundancy
between the nodes. It is critical that the Backplane network is isolated to a
single HC3 system only (physically or through VLANs) to ensure system
stability and performance.
○ There are two NICs for both the LAN and Backplane network. These two
ports are bonded in an active/passive configuration. The “0” ports will
always be primary and the “1” ports will be secondary.
○ Below are the NIC layouts for the 1 GbE and 10 GbE HC2100 Series
hardware configurations.

■ Rear 1 GbE view of the HC2100 Series node: [image]
■ Rear 10 GbE view of the HC2100 Series node: [image]
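For readers curious what an active/passive bond looks like from the operating system side, the sketch below mimics the report a generic Linux kernel exposes under /proc/net/bonding/ for its “active-backup” mode. The sample text and the eth0 interface name are illustrative only; the HC3 node’s internal bond and interface names are not documented here.

```shell
# Illustrative only: sample text in the style of /proc/net/bonding/<bond>
# on a generic Linux host; interface names are hypothetical.
sample='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'
# Extract which port is currently carrying traffic (the "0"/primary port
# in a healthy HC3 bond).
printf '%s\n' "$sample" | grep '^Currently Active Slave'
```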
■ Network and Switch Configuration Recommendations
■ Two interconnected (or stacked) switches are recommended
for a full high availability configuration of the HC3 system.
Below is an example of the ideal high availability
configuration. [image]
■ The Spanning Tree Protocol (STP) is a network protocol that
ensures a loop-free topology for bridged local area networks
(LANs). STP allows a network design to include spare
(redundant) links that provide automatic backup paths
without the need for manual intervention. When STP is
enabled, the protocol monitors the participating ports and
VLANs. Should there be a change in topology (a port goes
active or a port goes down), STP blocks traffic on
participating ports until the network topology is determined.
Scale Computing recommends disabling STP on the LAN
and Backplane ports. If STP is required, then Rapid STP is
recommended.
■ Flow control is useful for managing the data rates between
two links. It helps prevent a fast sending connection from
overwhelming a slower receiving connection and causing

retransmits. Scale Computing recommends enabling flow
control on the LAN and Backplane ports.
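As an illustration only, the two recommendations above might translate into switch configuration along these lines. The syntax is Cisco IOS-style and the interface name is hypothetical; consult your switch vendor’s documentation for the exact equivalents on your hardware.

```
! Hypothetical Cisco IOS-style sketch for an HC3-facing port — adapt to your vendor.
interface GigabitEthernet1/0/1
 spanning-tree portfast      ! if STP must stay enabled, use Rapid STP / edge-port mode
 flowcontrol receive on      ! enable flow control per the recommendation above
```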
○ Scheduling Your Installation
■ Schedule your pre-installation services and planning call in the
Scale Computing Portal to speak with your experienced installation
and professional services specialist by following the “Schedule
Services” prompts. All of the purchased and included installation
and professional services will be listed to the left of the scheduling
screen and unlocked in the order they will need to be completed.
To schedule, verify your timezone is correct, enter your contact
information, and select an available date and time. To finalize the
selected schedule select the white and blue “Schedule Time”
button.
■ * A separate welcome email will provide a link to a simple
questionnaire to help detail your environment; this
questionnaire will also be covered during the planning call.
■ The scheduler will display expected time frames for each
service based on your number of purchased nodes.
■ This planning, preparation, and training is for you! The
installation and/or services specialist will review your
environment questionnaire with you as well as any services
engagement agreements and answer any and all questions
you may have regarding the HC3 system or purchased
services.
■ Once the planning call is complete and everything is in place,
schedule your installation in the Scale Computing Portal through
the same “Schedule” link.

■ You can expect 1 to 3 hours for the installation depending on
how many nodes were purchased.
■ For the scheduled HC3 system installation day,
you’ll need:
■ A monitor and keyboard for physical access during node
configuration and system initialization for the HC3 system.
This typically takes less than 30 minutes for a 3 or 4 node
system.
■ Once initialized, all first-time system configuration and HC3
web interface training can be handled through a web
browser on a machine local to the HC3 nodes. This is
typically the remainder of the installation time.
■ A Windows Operating System ISO file is optional but
recommended for first-time VM creation in the HC3 web
interface with your installation specialist.
○ Install the HC3 System
■ What to Expect
■ Your assigned installation specialist will contact you at the
previously scheduled date and time with the provided
contact number to walk you through the HC3 system
installation.
■ The installation will start with physical access to the HC3
nodes in order to configure the node IPs and initialize the
nodes as a single HC3 system.
■ Once the nodes are initialized the HC3 web interface will be
available through a web browser and the remainder of the

installation will be handled through an online meeting
provided by the installation specialist.
■ The installation specialist will walk you through the first-time
configurations for the HC3 system, the available features
and tools in the HC3 web interface, and first-time HC3 VM
creation if a Windows Operating System ISO file is provided.
They will also answer any remaining questions you may
have regarding your new HC3 system solution.
○ NOTE: If you have made the decision to self-install the HC3 system,
please contact ScaleCare Support once the system is initialized for a system
health check, any applicable software updates, and to make sure that the
system is properly registered to your account.
○ Choosing to self-install may delay your deployment in the event
of any issues. ScaleCare Support reserves the right to take the
necessary corrective actions on the system post self-install on a
first-come, first-served basis with deference given to previously
scheduled installation customers.
■ Checklist for Installation Day
■ Physical access to the HC3 system nodes
■ VGA Monitor
■ A KVM may be used but is not recommended for
initial configuration
■ USB Keyboard
■ A KVM may be used but is not recommended for
initial configuration
■ 1 LAN IP for each node

■ It is recommended to have the last octet of the LAN
and Backplane IP match for ease of management
■ 1 Backplane IP for each node
■ It is recommended to have the last octet of the LAN
and Backplane IP match for ease of management
■ Network Subnet Mask
■ Network Gateway
■ All network cables and switching (including switch
configuration) should already be in place for the HC3 system
installation
■ A machine local to the HC3 system network for HC3 web
interface access after configuration (VPNs, port forwarding,
etc have been seen to block HC3 web interface access)
■ All HC3 nodes should be powered on for the HC3 system
installation
■ The command line login information (this will be provided by
the assigned installation specialist on installation day)
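To make the addressing items in the checklist concrete, here is a hypothetical plan for a 3 node system. Every address below is an example only; substitute values from your own network. The LAN and Backplane networks are separate, and the last octets match per node as recommended:

```shell
# Hypothetical 3-node plan: LAN on 192.168.50.0/24 (gateway 192.168.50.1),
# Backplane on an isolated 10.100.0.0/24 — example values only.
for i in 101 102 103; do
  printf 'Node %s: LAN 192.168.50.%s  Backplane 10.100.0.%s\n' "$((i - 100))" "$i" "$i"
done
```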
■ Configure the Node IPs
■ Log into the node using the given username and password.
Scale Computing uses a top-down system configuration, with
the top node being the “first” node of the system and the
bottom node being the “last.” This “first node”
classification has no bearing on system operations
outside of the installation configuration.
■ If the login prompt is not shown try pressing ctrl+c to
wake the node.

■ If the login prompt is still not shown press ctrl+alt+F1
to change the screen output.
■ NOTE: In general, the following commands that begin with
sudo will ask for a password. Use the same password as
the node login.
■ Enter the command to configure the node IP and then fill in
the information as prompted, hitting enter to finalize each IP
assignment.
■ A single node system:
■ sudo scnodeinit
■ LAN IP
■ LAN Netmask
■ LAN Gateway
■ Backplane IP of this node
■ Backplane IP of the first node in the
cluster
■ This field is a special case; as
there is only one node, use
the same IP as the “Backplane IP
of this node” field.
■ A 3+ node system:
■ sudo scnodeinit
■ LAN IP
■ LAN Netmask
■ LAN Gateway
■ Backplane IP of this node
■ Backplane IP of the first node in the
cluster
■ This field is a special case. On
the first node of the system,
use the same IP as the
“Backplane IP of this node”
field. All subsequent system
nodes should reference the
first node’s Backplane IP.
■ The node will verify the entered IP information is
correct and that it can ping all LAN, Backplane, and
gateway IPs. The node configuration is complete
when it either returns to the command prompt or
when the output says “Entering forwarding state.” This
process generally takes 3-5 minutes per node.
■ If the verification fails, double check the
following:
■ The IPs were typed correctly.
■ The chosen LAN and Backplane IPs are
not already in use elsewhere in the network.
■ The LAN and Backplane IPs are on
separate networks. The Backplane IP
cannot be a publicly routable IP and
cannot be in the assigned LAN IP
network.
■ If there were any typographical issues or
networking issues that caused the verification

to fail the node will need to be reinitialized
using the following command:
■ REINITIALIZE=yes sudo scnodeinit
■ It is also possible to bypass the network
gateway check by using the following
command if needed:
■ BYPASS_NETWORK_CHECK=yes sudo scnodeinit
■ The reinitialize and bypass commands can be
combined as well:
■ BYPASS_NETWORK_CHECK=yes REINITIALIZE=yes sudo scnodeinit
■ Repeat the node configuration process for all nodes in the
HC3 system.
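One of the verification checks above — that the LAN and Backplane IPs sit on separate networks — can be eyeballed with a tiny helper. This is a sketch for the common /24 netmask only (for other prefix lengths, compare the full masked network addresses instead); the function name and addresses are hypothetical:

```shell
# Illustrative /24-only overlap check between two IPv4 addresses.
same_24() {
  # Strip the last octet and compare the remaining /24 network portion.
  if [ "${1%.*}" = "${2%.*}" ]; then echo overlap; else echo ok; fi
}
same_24 192.168.50.101 10.100.0.101   # separate networks -> ok
same_24 192.168.50.101 192.168.50.1   # same /24 -> overlap
```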
■ Initialize the HC3 system
■ Make sure you are on the first node of the HC3 System.
■ NOTE: In general, the following commands that begin with
sudo will ask for a password. Use the same password as
the node login.
■ Enter the command to initialize the configured nodes as a
single HC3 System.
■ A single node system:
■ sudo singleNodeCluster=1 scclusterinit
■ A 3+ node system:
■ sudo scclusterinit

■ All of the nodes that were previously assigned IPs should be
listed on the screen; only a single node will be listed
when installing a single node system.
■ If any nodes are missing DO NOT proceed with
initialization.
■ Press ctrl+c and then enter no at the prompt.
This should return you to the command prompt
without initializing the system.
■ Working from the first node down, run the
sudo scnodeinit command again on all nodes
and ensure it completes successfully before
attempting system initialization once more.
■ If all nodes are shown in the list, press ctrl+c to proceed
with system initialization.
■ Type yes to confirm the initialization.
■ The nodes will now be initialized as a single HC3
system. This process typically takes 15-20 minutes.
Once it is complete the HC3 web interface will be
available in a web browser through the LAN IP of any
node in the system.
○ Additional Resources
■ Videos
■ HC3 Features Playlist
■ HC3 “How To” Playlist
■ Technical Help Papers
■ Networking Guidelines and Recommendations
■ Information Security with HC3
■ HC3, SCRIBE, and HyperCore Theory of Operations
■ Migrating Your Existing Environment to Scale Computing’s
HC3 System
■ Disaster Recovery Strategies with Scale Computing
■ Feature Notes
■ HC3 Feature Guide
■ Snapshot Scheduling
■ HC3 Replication
■ HyperCore SSD Tiering (HEAT)
■ Migration Quick Start Guides
■ Migrating Your Existing Environment to Scale Computing’s
HC3 System
■ HC3 Move Powered by Double-Take Quick Start
■ HC3 Availability Powered by Double-Take Quick Start
■ Import a Foreign Appliance into the HC3 System
■ User Guides and Support
■ HC3 Software Support Matrix
■ User Guide for 7.2 Software