HPE Smart Array P824i-p MR Gen10 User manual

HPE Smart Array P824i-p MR Gen10
User Guide
Part Number: P06372-002
Published: March 2019
Edition: 2
Abstract
This document includes feature, installation, and configuration information about the Hewlett
Packard Enterprise Smart Array P824i-p MR Gen10 controller and is for the person who
installs, administers, and troubleshoots servers and storage systems. Hewlett Packard
Enterprise assumes you are qualified in the servicing of computer equipment and trained in
recognizing hazards in products with hazardous energy levels.

© Copyright 2018, 2019 Hewlett Packard Enterprise Development LP
Notices
The information contained herein is subject to change without notice. The only warranties for Hewlett
Packard Enterprise products and services are set forth in the express warranty statements accompanying
such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained
herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession,
use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer
Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government
under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard
Enterprise has no control over and is not responsible for information outside the Hewlett Packard
Enterprise website.
Acknowledgments
Microsoft® and Windows® are either registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
MegaRAID™ and CacheCade™ are registered trademarks of Broadcom, Inc.
Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries.

Contents
HPE Smart Array P824i-p MR Gen10.....................................................5
Features................................................................................................... 6
Controller supported features....................................................................................................... 6
Operating environments.....................................................................................................6
RAID technologies..............................................................................................................6
Transformation................................................................................................................... 7
Drive technology.................................................................................................................7
Security.............................................................................................................................. 7
Reliability............................................................................................................................7
Performance.......................................................................................................................7
RAID technologies........................................................................................................................ 8
Selecting the right RAID type for your IT infrastructure......................................................8
Mixed mode (RAID and JBOD simultaneously)............................................................... 12
Make Unconfigured Good and Make JBOD.....................................................................12
Patrol read........................................................................................................................12
Striping............................................................................................................................. 12
Mirroring........................................................................................................................... 13
Parity................................................................................................................................ 15
Spare drives..................................................................................................................... 19
Drive rebuild..................................................................................................................... 20
Foreign configuration import............................................................................................ 20
Transformation............................................................................................................................ 20
Array transformations.......................................................................................................20
Logical drive transformations........................................................................................... 20
Drive technology......................................................................................................................... 21
HPE SmartDrive LED.......................................................................................................21
Consistency check........................................................................................................... 23
Online drive firmware update........................................................................................... 23
Discarding pinned cache..................................................................................................23
Dynamic sector repair...................................................................................................... 23
Security....................................................................................................................................... 24
Drive erase.......................................................................................................................24
Sanitize erase...................................................................................................................24
Reliability.....................................................................................................................................25
Recovery ROM.................................................................................................................25
Cache Error Checking and Correction (ECC).................................................................. 25
Thermal monitoring.......................................................................................................... 25
Performance................................................................................................................................25
SAS storage link speed....................................................................................................25
HPE Smart Array MR FastPath........................................................................................25
HPE Smart Array MR CacheCade................................................................................... 26
Cache...............................................................................................................................26
Installation............................................................................................. 29
Installation...................................................................................................................................29
Installing an HPE Smart Array P824i-p MR Gen10 controller in a configured server...... 29

Installing an HPE Smart Array P824i-p MR Gen10 controller in an unconfigured
server............................................................................................................................... 30
Configuring boot controller options.................................................................................. 31
Connecting storage devices.............................................................................................33
Cable part numbers..........................................................................................................33
Configuration.........................................................................................34
Array and controller configuration............................................................................................... 34
HPE MR Storage Administrator........................................................................................34
StorCLI............................................................................................................................. 35
UEFI System Utilities........................................................................................................35
Smart Array MR Gen10 configuration in UEFI System Utilities.................................................. 36
Viewing controller information and performing common actions......................................36
Configuration management..............................................................................................37
Controller management....................................................................................................43
Logical drive management............................................................................................... 50
Drive management...........................................................................................................52
Maintenance.......................................................................................... 58
System maintenance tools..........................................................................................................58
Updating software and firmware...................................................................................... 58
Diagnostic tools................................................................................................................58
Models....................................................................................................59
HPE Smart Array P824i-p MR Gen10 Controller........................................................................ 59
Energy pack options.............................................................................60
HPE Smart Storage Battery........................................................................................................ 60
HPE Smart Storage Hybrid Capacitor.........................................................................................60
Energy pack specifications......................................................................................................... 61
Specifications........................................................................................62
Memory and storage capacity conventions.................................................................................62
RAID conventions....................................................................................................................... 62
Controller specifications..............................................................................................................62
Support and other resources...............................................................63
Accessing Hewlett Packard Enterprise Support......................................................................... 63
Accessing updates......................................................................................................................63
Customer self repair....................................................................................................................64
Remote support.......................................................................................................................... 64
Warranty information...................................................................................................................64
Regulatory information................................................................................................................65
Documentation feedback............................................................................................................ 65
Websites................................................................................................ 66

HPE Smart Array P824i-p MR Gen10
HPE Smart Array P824i-p MR Gen10 is ideal for maximizing performance while supporting advanced
RAID levels. This controller operates in Mixed Mode which combines RAID and JBOD operations
simultaneously. It offers flash-backed write cache and read-ahead cache and provides enterprise-class
storage performance, reliability, security, and efficiency.
HPE Smart Array P824i-p MR Gen10 provides:
• 24 SAS lanes across 6 x4 internal Mini SAS HD ports
• SAS and SATA drive support
• RAID levels 0, 1, 5, 6, 10, 50, 60
• Mixed mode RAID and JBOD functionality simultaneously
• 12G SAS support
• UEFI and Legacy Boot modes
• 4 GB flash-backed write cache support
• HPE Smart Storage Battery support
• HPE Smart Storage Hybrid Capacitor support
• Smart Array management tools:
◦ HPE MR Storage Administrator
◦ HPE StorCLI
◦ UEFI Storage Configuration Utility
HPE Smart Array P824i-p MR Gen10 is supported in HPE ProLiant Gen10 servers.
The HPE Smart Array controllers are named according to the features of the controller.

Features
Controller supported features
This section lists the features supported by the P824i-p Smart Array MR controller. For additional
information about the features, see the MR Storage Administrator User Guide at the Hewlett Packard
Enterprise website: http://www.hpe.com/support/MRSA.
Operating environments
The following operating environments are supported:
• Windows
• Linux
• VMware ESXi
• Legacy Boot mode
• UEFI Boot mode
RAID technologies
The following RAID technologies are supported:
• RAID levels 0, 1, 5, 6, 10, 50, 60
• Max logical drives - 64
• Max physical drives - 240
• Max physical drives per logical drive - 64
• Mixed mode (RAID and JBOD)
• Making unconfigured good and making JBOD
• Patrol read
• Read load balancing
• Parity groups
• Fast and full initialization
• Regenerative writes
• Backed-out writes
• Full-stripe writes
• Dedicated spare
• Global spare
• Drive rebuilds
• Foreign configuration import

Transformation
The following transformation features are supported:
• Expand Array
• Transportable controller
• Extend logical drive
• Migrate RAID level
• Transformation priority
Drive technology
The following drive technology features are supported:
• HPE SmartDrive LED
• Consistency check
• Discarding pinned cache
• Online drive firmware update
Security
The following security features are supported:
• Drive erase
• Sanitize erase
Reliability
The following reliability features are supported:
• Recovery ROM
• Cache Error Checking and Correction
• Thermal monitoring
Performance
The following performance features are supported:
• SAS storage link speed
• FastPath (SSD accelerator)
• CacheCade
• Read policy (read ahead)
• Write policy
• I/O policy

• Drive caching
• Stripe size selection
RAID technologies
Selecting the right RAID type for your IT infrastructure
The RAID setting that you select is based upon the following:
• The number of parity groups that you have
• The fault tolerance required
• The write performance required
• The amount of usable capacity that you have
Configuring the RAID fault tolerance
If your IT environment requires a high level of fault tolerance, select a RAID level that is optimized for fault
tolerance.
This chart shows the relationship between the RAID level fault tolerance and the size of the storage array.
The chart includes RAID 0, 5, 50, 10, 6, and 60. It plots relative reliability on a scale from 1 to one billion
against storage array sizes from 0 to 96 drives.
This chart assumes that two parity groups are used for RAID 50 and RAID 60.
This chart shows that:
• RAID 10 is 30,000 times more reliable than RAID 0.
• The fault tolerance of RAID 5, 50, 6, and 60 decreases as the array size increases.

Configuring the RAID write performance
If your environment requires high write performance, select a RAID type that is optimized for write
performance.
The chart below shows how RAID 10, 5, 50, 6, and 60 compare to the percent write performance of RAID
0.
The data in the chart assumes that performance is limited by the drives and that drive write performance
is the same as drive read performance.
Consider the following points:
• Write performance decreases as fault tolerance improves due to extra I/O.
• Read performance is generally the same for all RAID levels except for smaller RAID 5/6 arrays.
The table below shows the disk I/O for every host write:

RAID type    Disk I/O for every host write
RAID 0       1
RAID 10      2
RAID 5       4
RAID 6       6
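
As a rough illustration of how these disk I/O costs translate into throughput, the following sketch (not an
HPE tool; the drive count and per-drive IOPS figures are assumed example values) estimates the effective
random host write IOPS by dividing the aggregate drive IOPS by the disk I/O cost of one host write:

    # Illustrative sketch: estimate random host write IOPS from the per-write
    # disk I/O costs listed in the table above. All input values are examples.

    WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

    def host_write_iops(raid_type, drive_count, iops_per_drive):
        """Aggregate drive IOPS divided by the disk I/O cost of one host write."""
        return drive_count * iops_per_drive / WRITE_PENALTY[raid_type]

    # Example: eight drives, each rated at 200 random write IOPS.
    for raid in WRITE_PENALTY:
        print(raid, round(host_write_iops(raid, 8, 200)))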
Configuring the RAID usable capacity
If your environment requires a high usable capacity, select a RAID type that is optimized for usable
capacity. The chart in this section demonstrates the relationship between the number of drives in the
array and the percent usable capacity over the capacity for RAID 0.
Consider the following points when selecting the RAID type:
• Usable capacity decreases as fault tolerance improves due to an increase in parity data.
• The usable capacity for RAID 10 remains flat with larger arrays.
• The usable capacity for RAID 5, 50, 6, and 60 increases with larger arrays.
• RAID 50 and RAID 60 assume two parity groups.
Note the minimum drive requirements for the RAID types, as shown in the table below.
RAID type Minimum number of drives
RAID 0 1
RAID 10 2
RAID 5 3
RAID 6 4
RAID 50 6
RAID 60 8
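
To make these capacity relationships concrete, the following sketch (not an HPE tool; the 12-drive example
and the two-parity-group setting for RAID 50 and RAID 60 are assumptions that mirror the chart above)
computes usable capacity as a fraction of the raw RAID 0 capacity:

    # Illustrative sketch: usable capacity as a fraction of raw (RAID 0) capacity.
    # The drive count and parity group count below are assumed example values.

    def usable_fraction(raid_type, drive_count, parity_groups=2):
        if raid_type == "RAID 0":
            reserved = 0
        elif raid_type == "RAID 10":
            reserved = drive_count // 2          # half the drives hold mirror copies
        elif raid_type == "RAID 5":
            reserved = 1                         # one drive's worth of parity
        elif raid_type == "RAID 6":
            reserved = 2                         # two drives' worth of parity
        elif raid_type == "RAID 50":
            reserved = parity_groups             # one parity drive per RAID 5 group
        elif raid_type == "RAID 60":
            reserved = 2 * parity_groups         # two parity drives per RAID 6 group
        else:
            raise ValueError(raid_type)
        return (drive_count - reserved) / drive_count

    # Example: 12 drives, two parity groups for the nested RAID levels.
    for raid in ("RAID 0", "RAID 10", "RAID 5", "RAID 6", "RAID 50", "RAID 60"):
        print(raid, format(usable_fraction(raid, 12), ".0%"))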

Configuring the storage solution
The chart in this section shows the relevance of the RAID type to the requirements of your environment.
Depending on your requirements, you should optimize the RAID types as follows:
• RAID 6/60: Optimize for fault tolerance and usable capacity.
• RAID 1/10: Optimize for write performance.
• RAID 5/50: Optimize for usable capacity.

Mixed mode (RAID and JBOD simultaneously)
Mixed mode allows any drive to be a member of a logical drive (logical volume or RAID volume), to remain
unconfigured and hidden from the operating system, or to be in a JBOD drive state, which exposes the drive
to the host operating system as a physical drive.
Make Unconfigured Good and Make JBOD
When you power down a controller, insert a new drive, and then power on the system again, the drive status
depends on the drive metadata. If the inserted drive does not contain valid DDF metadata, its drive state is
JBOD (Just a Bunch of Drives). If the drive contains valid DDF metadata, its drive state is Unconfigured
Good. A new drive in the JBOD drive state is exposed to the host operating system as a standalone drive.
You cannot use JBOD drives to create a RAID configuration, because they do not have valid DDF records.
Therefore, you must first convert JBOD drives to unconfigured good drives.
If the controller supports JBOD drives, the MR Storage Administrator includes options for converting
JBOD drives to an unconfigured good drive, or an unconfigured good drive to a JBOD drive.
Patrol read
A patrol read periodically verifies all sectors of the drives connected to a controller, including the system
reserved area in the RAID configured drives. You can run a patrol read for all RAID levels and for all
spare drives. A patrol read is initiated only when the controller is idle for a defined period and has no
other background activities. You can set the patrol read properties and start the patrol read operation, or
you can start the patrol read without changing the properties.
Access the patrol rate by selecting Set Adjustable Task Rate under the More Actions menu, and then
locating it under the Priority Percentage column. Enter a number from 1 to 100. The higher the number, the
faster the patrol read will occur (and the system I/O rate might be slower as a result).
Striping
RAID 0
A RAID 0 configuration provides data striping, but there is no protection against data loss when a drive
fails. However, it is useful for rapid storage of large amounts of noncritical data (for printing or image
editing, for example) or when cost is the most important consideration. The minimum number of drives
required is one.
The maximum number of drives supported for RAID 0 is 32.

This method has the following benefits:
• Useful when performance and low cost are more important than data protection.
• Has the highest write performance of all RAID methods.
• Has the lowest cost per unit of stored data of all RAID methods.
• All drive capacity is used to store data (none allocated for fault tolerance).
Mirroring
RAID 1 and RAID 1+0 (RAID 10)
In RAID 1 and RAID 1+0 (RAID 10) configurations, data is duplicated to a second drive. The usable
capacity is C x (n / 2) where C is the drive capacity with n drives in the array. A minimum of two drives is
required.
When the array contains only two physical drives, the fault-tolerance method is known as RAID 1.
The maximum number of drives supported for RAID 1 is 32.

When the array has more than two physical drives, drives are mirrored in pairs, and the fault-tolerance
method is known as RAID 1+0 or RAID 10. If a physical drive fails, the remaining drive in the mirrored pair
can still provide all the necessary data. Several drives in the array can fail without incurring data loss, as
long as no two failed drives belong to the same mirrored pair. The total drive count must be a multiple of
two (drives are added in pairs). A minimum of four drives is required.
The maximum number of drives supported for RAID 10 is 32.
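For example, an array of ten 1.2 TB drives configured as RAID 10 provides 1.2 TB x (10 / 2) = 6 TB of
usable capacity.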
This method has the following benefits:
• It is useful when high performance and data protection are more important than usable capacity.
• This method has the highest write performance of any fault-tolerant configuration.
• No data is lost when a drive fails, as long as no failed drive is mirrored to another failed drive.
• Up to half of the physical drives in the array can fail.

Read load balancing
In each mirrored pair or trio, Smart Array balances read requests between drives based upon individual
drive load.
This method has the benefit of enabling higher read performance and lower read latency.
Parity
RAID 5
RAID 5 protects data using parity (denoted by Px,y in the figure). Parity data is calculated by summing
(XOR) the data from each drive within the stripe. The strips of parity data are distributed evenly over
every physical drive within the logical drive. When a physical drive fails, data that was on the failed drive
can be recovered from the remaining parity data and user data on the other drives in the array. The
usable capacity is C x (n - 1) where C is the drive capacity with n drives in the array. A minimum of three
drives is required.
The maximum number of drives supported for RAID 5 is 32.
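For example, an array of five 1.2 TB drives configured as RAID 5 provides 1.2 TB x (5 - 1) = 4.8 TB of
usable capacity.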
This method has the following benefits:
• It is useful when usable capacity, write performance, and data protection are equally important.
• It has the highest usable capacity of any fault-tolerant configuration.
• Data is not lost if one physical drive fails.
RAID 50
RAID 50 is a nested RAID method in which the constituent drives are organized into several identical
RAID 5 logical drive sets (parity groups). The smallest possible RAID 50 configuration has six drives
organized into two parity groups of three drives each.

For any given number of drives, data loss is least likely to occur when the drives are arranged into the
configuration that has the largest possible number of parity groups. For example, four parity groups of
three drives are more secure than three parity groups of four drives. However, less data can be stored on
the array with the larger number of parity groups.
All data is lost if a second drive fails in the same parity group before data from the first failed drive has
finished rebuilding. A greater percentage of array capacity is used to store redundant or parity data than
with non-nested RAID methods (RAID 5, for example). A minimum of six drives is required.
The maximum number of drives supported for RAID 50 is 256.
This method has the following benefits:
• Higher performance than for RAID 5, especially during writes.
• Better fault tolerance than either RAID 0 or RAID 5.
• Up to n physical drives can fail (where n is the number of parity groups) without loss of data, as long
as the failed drives are in different parity groups.
RAID 6
RAID 6 protects data using double parity. With RAID 6, two different sets of parity data are used (denoted
by Px,y and Qx,y in the figure), allowing data to still be preserved if two drives fail. Each set of parity data
uses a capacity equivalent to that of one of the constituent drives. The usable capacity is C x (n - 2)
where C is the drive capacity with n drives in the array.
A minimum of 4 drives is required.
The maximum number of drives supported for RAID 6 is 32.
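For example, an array of six 1.2 TB drives configured as RAID 6 provides 1.2 TB x (6 - 2) = 4.8 TB of
usable capacity.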

This method is most useful when data loss is unacceptable but cost is also an important factor. The
probability that data loss will occur when an array is configured with RAID 6 (Advanced Data Guarding
(ADG)) is less than it would be if it were configured with RAID 5.
This method has the following benefits:
• It is useful when data protection and usable capacity are more important than write performance.
• It allows any two drives to fail without loss of data.
RAID 60
RAID 60 is a nested RAID method in which the constituent drives are organized into several identical
RAID 6 logical drive sets (parity groups). The smallest possible RAID 60 configuration has eight drives
organized into two parity groups of four drives each.
For any given number of hard drives, data loss is least likely to occur when the drives are arranged into
the configuration that has the largest possible number of parity groups. For example, five parity groups of
four drives are more secure than four parity groups of five drives. However, less data can be stored on
the array with the larger number of parity groups.
The number of physical drives must be exactly divisible by the number of parity groups. Therefore, the
number of parity groups that you can specify is restricted by the number of physical drives. The maximum
number of parity groups possible for a particular number of physical drives is the total number of drives
divided by the minimum number of drives necessary for that RAID level (three for RAID 50, four for RAID
60).
A minimum of 8 drives is required.
The maximum number of drives supported for RAID 60 is 256.
All data is lost if a third drive in a parity group fails before one of the other failed drives in the parity group
has finished rebuilding. A greater percentage of array capacity is used to store redundant or parity data
than with non-nested RAID methods.
This method has the following benefits:

• Higher performance than for RAID 6, especially during writes.
• Better fault tolerance than RAID 0, 5, 50, or 6.
• Up to 2n physical drives can fail (where n is the number of parity groups) without loss of data, as long
as no more than two failed drives are in the same parity group.
Parity groups
When you create a RAID 50 or RAID 60 configuration, you must also set the number of parity groups.
You can use any integer value greater than 1 for this setting, with the restriction that the total number of
physical drives in the array must be exactly divisible by the number of parity groups.
The maximum number of parity groups possible for a particular number of physical drives is the total
number of drives divided by the minimum number of drives necessary for that RAID level (three for RAID
50, four for RAID 60).
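As a minimal illustration of this divisibility rule (not an HPE tool; the 12-drive value is an assumed
example), the following sketch lists the parity group counts that are valid for a given number of physical
drives:

    # Illustrative sketch: valid parity group counts for RAID 50 and RAID 60.
    # A count is valid when it is greater than 1, the drive count is exactly
    # divisible by it, and each group keeps at least the minimum drive count
    # for the RAID level (three for RAID 50, four for RAID 60).

    def valid_parity_groups(drive_count, min_drives_per_group):
        return [groups for groups in range(2, drive_count + 1)
                if drive_count % groups == 0
                and drive_count // groups >= min_drives_per_group]

    print(valid_parity_groups(12, 3))   # RAID 50 with 12 drives -> [2, 3, 4]
    print(valid_parity_groups(12, 4))   # RAID 60 with 12 drives -> [2, 3]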
This feature has the following benefits:
• It supports RAID 50 and RAID 60.
• A higher number of parity groups increases fault tolerance.
Initialization state
Initialize a logical drive after you configure it. When you initialize the logical drive, you prepare the storage
medium for use.
CAUTION: All data on the logical drive is lost when you initialize it. Before you start this operation,
back up any data that you want to keep.
Fast initialization
During fast initialization, the firmware quickly overwrites the first and last 8 MB regions of the new logical
drive, clearing any boot records or partition information, and then completes the initialization in the
background. Monitor the progress of the initialization process using the progress indicator.
RAID levels that use parity (RAID 5, RAID 6, RAID 50, and RAID 60) require that the parity blocks be
initialized to valid values. Valid parity data is required to enable enhanced data protection through
background controller surface scan analysis and higher write performance (backed out write). After parity
initialization is complete, writes to a RAID 5 or RAID 6 logical drive are typically faster because the
controller does not read the entire stripe (regenerative write) to update the parity data. This feature
initializes parity blocks in the background while the logical drive is available for access by the operating
system. Parity initialization takes several hours to complete. The time it takes depends on the size of the
logical drive and the load on the controller. While the controller initializes the parity data in the
background, the logical drive has full fault tolerance.
This method has the benefit of allowing you to start writing data to the logical drive immediately.
Access the background initialization (BGI) rate by selecting Set Adjustable Task Rate under the More
Actions menu, and then locating it under the Priority Percentage column. Enter a number from 1 to 100. The
higher the number, the faster the initialization will occur (and the system I/O rate might be slower as a
result).
If you use RAID 5, you must have a minimum of five drives for a background initialization to start. If you
use RAID 6, you must have at least seven drives for a background initialization to start.

Full initialization
During full initialization, a complete initialization is done on the new configuration. You cannot write data to
the new logical drive until the initialization is complete. This process can take a long time if the drives are
large. This initialization overwrites all blocks and destroys all data on the logical drive.
Monitor the progress of the initialization process using the progress indicator.
No initialization
If you select this option, the new configuration is not initialized, and the existing data on the drives is not
overwritten. You can initialize the logical drive at a later time.
Regenerative writes
Logical drives can be created with background parity initialization so that they are available almost
instantly. During this temporary parity initialization process, writes to the logical drive are performed using
regenerative writes or full-stripe writes. Any time a member drive within an array fails, all writes that
map to the failed drive are regenerative. A regenerative write is much slower because it must read from
nearly all of the drives in the array to calculate new parity data. The write penalty for a regenerative write
is n + 1 drive operations where n is the total number of drives in the array. As you can see, the write
penalty is greater (slower write performance) with larger arrays.
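For example, in a 10-drive RAID 5 array with one failed member, each host write that maps to the failed
drive costs 10 + 1 = 11 drive operations.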
This method has the following benefits:
• It allows the logical drive to be accessible before parity initialization completes
• It allows the logical drive to be accessible when degraded
Backed-out writes
After parity initialization is complete, random writes to a RAID 5, 50, 6, or 60 can use a faster backed-out
write operation. A backed-out write uses the existing parity to calculate the new parity data. As a result,
the write penalty for RAID 5 and RAID 50 is always four drive operations, and the write penalty for a RAID
6 and RAID 60 is always six drive operations. As you can see, the write penalty is not influenced by the
number of drives in the array.
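For example, in a 20-drive RAID 6 array, a backed-out random write still costs six drive operations (read
the old data, read the old P and Q parity, write the new data, write the new P and Q parity), whereas a
regenerative write would cost 21.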
Backed-out writes are also known as "read-modify-write" operations.
This method has the benefit of faster RAID 5, 50, 6, or 60 random writes.
Full-stripe writes
When writes to the logical drive are sequential or when multiple random writes that accumulate in the
flash-backed write cache are found to be sequential, a full-stripe write operation can be performed. A full-
stripe write allows the controller to calculate new parity using new data being written to the drives. There
is almost no write penalty because the controller does not need to read old data from the drives to
calculate the new parity. As the size of the array grows larger, the write penalty is reduced by the ratio of
p / n where p is the number of parity drives and n is the total number of drives in the array.
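For example, in a 12-drive RAID 6 array (p = 2), a full-stripe write performs 12 drive writes to commit 10
strips of host data, so only 2/12 (about 17 percent) of the drive operations are parity overhead.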
This method has the benefit of faster RAID 5, 50, 6, or 60 sequential writes.
Spare drives
Dedicated spare
A dedicated spare is a spare drive that is dedicated to one array.
It supports any fault tolerant logical drive such as RAID 1, 10, 5, 6, 50, 60, and CacheCade SSD volumes.
The dedicated spare drive activates any time a drive within the array fails.

Global spare
A global spare drive replaces a failed drive in any array, as long as:
• The drive type is the same.
• The capacity of the global spare drives is equal to or larger than the capacity of the failed drive.
A global spare drive activates any time a drive fails within a fault-tolerant logical drive or CacheCade SSD
volume. For RAID 0 logical drives, the global spare activates when a member drive reports a predictive
failure.
Drive rebuild
If a drive that is configured as RAID 1, 5, 6, 10, 50, or 60 fails, the firmware automatically rebuilds the
data on a spare or replacement drive to prevent data loss. The rebuild is a fully automatic process.
Monitor the progress of drive rebuilds in the Background Processes in Progress window.
Access the drive rebuild rate by selecting Set Adjustable Task Rate under the More Actions menu, and then
locating it under the Priority Percentage column. Enter a number from 1 to 100. The higher the number,
the faster the rebuild will occur (and the system I/O rate might be slower as a result).
Foreign configuration import
A foreign configuration is a RAID configuration that already exists on a replacement set of drives that you
install in a computer system. You can use the MR Storage Administrator to import the foreign
configuration to the controller, or to clear the foreign configuration so that you can create a new
configuration using these drives.
Transformation
Array transformations
Expand array
Increase the capacity of an existing array by adding currently existing unassigned drives to it. Any drive
that you want to add must meet the following criteria:
• It must be an unassigned drive.
• It must be of the same type as existing drives in the array (for example, SAS HDD, SAS SSD, SATA
HDD, or SATA SSD).
• It must have a capacity no less than the capacity of the smallest drive in the array.
This operation uses the Modify Array option in the HPE MR Storage Administrator user interface. This
feature is supported when there is a single logical drive configured in the array.
Logical drive transformations
Transportable controller
The controller firmware supports transporting the battery-backed cache memory to recover data from a
faulty server. With a transportable controller, you recover from a faulty server by moving the entire
controller to a new replacement server.
In this design, the controller firmware assumes that the new server has the same configuration, that is,
the same server generation and family, and that the logical drives are migrated to the new target server so
that the cache can be flushed when the data is restored.