Nvidia Clara Holoscan User manual

Clara Holoscan Developer Kit User Guide
Purpose: Provides the instructions to flash, setup, and start using the Clara Holoscan
Developer Kit.
Disclaimer: The Clara Holoscan Developer Kit is not an approved medical device and is not
intended for clinical use.
Version: 1.0
Contents
Checklist for Setting up the Developer Kit
Hardware Setup
    Requirements
    Precautions
System Overview
    Main Components
    Tech Specs
    I/O and external interfaces
Powering up the System
Flashing and Updating the Clara Holoscan Developer Kit using SDK Manager
Switching between iGPU and dGPU
Reinstalling Optional SDK Packages
Setting up SSD Storage
    Create the Partition
    Mount the Partition
Setting up Docker and Docker Storage on SSD
Install the Clara Holoscan SDK
Known Issues
Additional Resources

Checklist for Setting up the Developer Kit
Ensure the following actions are taken before developing on the Clara Holoscan Developer Kit.
Each action is described in a corresponding section of this user guide.
• Read through the Hardware Setup requirements and precautions.
• Familiarize yourself with the System Overview: the main components and system I/O.
• Power up the system.
• Flash and update the Clara Holoscan Developer Kit using SDK Manager.
• Switch from iGPU to dGPU mode.
• Set up the 500GB SSD storage.
• Set up Docker and Docker storage.
• Install the Clara Holoscan SDK from GitHub.
Hardware Setup
Requirements
• A Clara Holoscan Developer Kit
• A compatible power cable
  o The power cable(s) included with the NVIDIA Clara Holoscan Developer Kit may not be compatible with your local electrical requirements.
  o A compatible cable should meet the following requirements:
    - Provides a certified local 3-prong AC power plug.
    - Provides a C13 connector.
    - Supports ratings of 100-120VAC/6A, 200-240VAC/3A, or higher, with a minimum wire thickness of 18AWG and an insulation rating of 300V or higher.
• An Ubuntu 18.04 or 20.04 host system (for use during flashing)
• A standard USB-A to USB-C or USB-C to USB-C cable with data enabled (for use during flashing)
• A standard Micro USB Type-B cable with data enabled (for accessing the baseboard management controller (BMC) during flashing)
• An Internet connection for the host system before and during flashing, and for the Clara Holoscan Developer Kit at minimum after the OS has been flashed
• A keyboard and mouse, as well as a monitor with a DisplayPort connection, for the Clara Holoscan Developer Kit
Precautions
• Only connect and disconnect PCIe cards (e.g. miniSAS or dGPU) when the system is powered down.
• Apply extra care when plugging in and removing PCIe cards to avoid stress on the PCIe connectors (e.g. wear, bending, breakage).

System Overview
Main Components
The Clara Holoscan Developer Kit contains the following major components:
• AGX Orin 32GB module
• RTX A6000 discrete GPU
• ConnectX-6 DX
• 500GB removable SSD
Tech Specs
CPU: 8-core Arm® Cortex®-A78AE v8.2 64-bit CPU | 2MB L2 + 4MB L3
Memory: 32GB 256-bit LPDDR5 | 204.8 GB/s
GPU: RTX A6000 | 48GB GDDR6 | 768 GB/s | 10,752 CUDA cores | 336 3rd-generation Tensor Cores
Storage: 500GB SSD
I/O: Micro USB Type B | (2x) USB 3.0 | USB-C | HDMI In | (5x) DisplayPort | 1/100 GbE
Expansion: N/A
Power Supply: 850W | 100-240V
Dimensions: 262.7mm W x 147.7mm H x 370.0mm L

I/O and external interfaces
1) Power cable connection
2) Power switch
3) DisplayPort (DP) output port from Jetson Orin module
4) Micro USB Type B

5) Audio
6) 1 GbE RJ45 Ethernet connection to Orin module
7) Recovery port
8) USB-C port
9) USB-A ports (USB 2.0)
10) 1 GbE RJ45 Ethernet connection to BMC
11) HDMI input
12) VGA port
13) QSFP ConnectX-6 DX board
14) DisplayPort (DP) output ports from RTX A6000
15) Recovery button
16) Power button
To access ports 13-14, remove the left-hand side cover. The process is illustrated below.
Unscrew the two Phillips screws that secure the cover at the back of the machine. Next, push
and slide the cover towards the back of the machine without lifting it (step 1). It should slide
about 0.5 inch (less than 1.5 cm). Once the cover is loose, lift it outwards and off (step 2).
If you’d like to install an AJA video capture card, remove the QSFP ConnectX-6 DX board (13)
to free up the PCIe slot and install the AJA card.
Note: Only connect and disconnect a PCIe card (such as the ConnectX-6 DX board) when the
system is powered down.

Powering up the System
1) Connect all peripherals to the system before powering up the system.
2) Connect the power cable to the system in the slot labeled (1) in the I/O drawing above and
make sure the power switch (2) is on.
3) Once the power is connected, press the power button (16) for approximately 1 second. It
should light up.
4) If you have a display connected, you might already see the system booting on it. During
flashing or re-flashing, connect your display to the DisplayPort (3), which is connected to
the Jetson Orin module. Reference the GPU section below to determine how to choose
between display outputs.
Note: The machine can be powered off by pressing the power button for approximately 10
seconds.
Flashing and Updating the Clara Holoscan Developer Kit using SDK
Manager
1. Register and activate an NVIDIA Developer Account here to access the latest version of
Jetpack in SDK Manager.
2. If you are running a VPN on your host system, log off before flashing the Clara Holoscan
Developer Kit. Otherwise, you might get an Internet connection error during the flashing
process.
3. Using a VM as your host machine isn’t officially supported, but it is possible with certain
VMs such as VMWare Workstation 16. If using a VM, ensure the ports that connect to the
USB-C recovery port (7) and Micro USB Type B port (4) on the Clara Holoscan Developer Kit
are routed to the VM.
4. From the host system, download and install the latest version of NVIDIA SDK Manager.
Instructions for downloading and setting up NVIDIA SDK Manager can be found here.
5. Connect the Clara Holoscan Developer Kit to the host system via two cables: USB-C to the
recovery port (7) and Micro USB Type B to the BMC access port (4).
6. Put the unit into recovery mode for flashing via BMC. Use the following command to find
the [number] for BMC:
ls /dev/ttyACM*
Out of the four results, the smallest number is for Orin and the second smallest number
is for BMC. First, enter the command sudo minicom -D /dev/ttyACM[number] on
your host machine. Next, log in with the credentials root/0penBmc. Note the number
0, not the letter O, in the password. You may need to press enter after connecting to
get the login prompt.
$ sudo minicom -D /dev/ttyACM[number]
login: root
Password: 0penBmc
Output similar to the following indicates that your host machine successfully connected
to the BMC:

NVIDIA Mandalore BMC (OpenBMC Project Reference Distro)
nodistro.0.1643883756.1752021 mandalore ttyS0
mandalore login: root
Password:
root@mandalore:~#
From here, enter the command to put the unit into recovery mode:
root@mandalore:~# powerctrl recovery
If the flashing step fails in SDK Manager, re-issue this recovery mode command before
every flashing attempt; otherwise, you may run into the same error regardless of the
number of attempts.
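If several ttyACM devices are present, picking the second smallest by hand is error-prone. The enumeration rule above (smallest number is Orin, second smallest is BMC) can be sketched in shell; pick_bmc is a hypothetical helper name, and you should still verify the result against the enumeration on your system:

```shell
# pick_bmc: given ttyACM device paths, print the second smallest by
# version-sorted name, which per the enumeration above is the BMC console.
pick_bmc() {
  printf '%s\n' "$@" | sort -V | sed -n '2p'
}

bmc_dev=$(pick_bmc /dev/ttyACM*)
echo "BMC console candidate: ${bmc_dev}"
# Then connect with: sudo minicom -D "${bmc_dev}"
```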
7. From the NVIDIA SDK Manager, download and flash the Clara Holoscan Developer Kit. See
the step-by-step instructions for more details. After the OS image is flashed in Step 3, SDK
Manager may require the Clara Holoscan Developer Kit to be connected to the Internet
before installing the rest of the components.
Note: We recommend selecting “Manual Setup” mode in the prompt at Step 3 in SDK
Manager while the unit is under recovery mode, as selecting “Automatic Setup” mode
may cause flashing to stall.
Connect the Clara Holoscan Developer Kit to the Internet using one of the following methods:
• An Ethernet cable connected to a router or Wi-Fi extender
• A USB Wi-Fi receiver
  o Not all USB Wi-Fi receivers will work out of the box on the Clara Holoscan Developer Kit.
  o The USB Wi-Fi receiver should support Ubuntu 20.04.
  o If the USB Wi-Fi receiver requires driver installation, use sudo minicom -D /dev/ttyACM[n] to access the newly flashed Jetson OS from the host, or use an Ethernet cable temporarily for the duration of the USB Wi-Fi receiver setup.
Switching between iGPU and dGPU
The Clara Holoscan Developer Kit can use either the AGX Orin module GPU (iGPU, integrated
GPU) or the RTX A6000 add-in card GPU (dGPU, discrete GPU). You can only use one type of
GPU at a time.
By default, the Clara Holoscan Developer Kit uses the iGPU. To switch between the iGPU and
dGPU, use the nvgpuswitch.py script located in the /opt/nvidia/l4t-gputools/bin/
directory. To make the nvgpuswitch.py script accessible globally, copy it to a directory
included in $PATH if it hasn’t been already:
$ sudo cp /opt/nvidia/l4t-gputools/bin/nvgpuswitch.py /usr/local/bin/

To switch from the iGPU to the dGPU, follow these steps:
1. Ensure that the developer kit has an Internet connection.
2. To view the currently installed drivers and their version, use the query command:
$ nvgpuswitch.py query
iGPU (nvidia-l4t-cuda, 34.1.2-20220613164700)
3. To install the dGPU drivers, use the install command with the “dGPU” parameter (note
that sudo must be used to install drivers):
$ sudo nvgpuswitch.py install dGPU
The install command will print out the list of commands that will be executed as part
of the driver install, then continue to execute those commands. This aids with debugging
if any of the commands fail to execute for any reason. The following arguments may
also be provided with the install command:
$ nvgpuswitch.py install -h
usage: nvgpuswitch.py install [-h] [-f] [-d] [-i] [-v] [-l LOG] [-r [L4T_REPO]] {iGPU,dGPU}

positional arguments:
  {iGPU,dGPU}           install iGPU or dGPU driver stack

optional arguments:
  -h, --help            show this help message and exit
  -f, --force           force reinstallation of the specified driver stack
  -d, --dry             do a dry run, showing the commands that would be
                        executed but not actually executing them
  -i, --interactive     run commands interactively (asks before running
                        each command)
  -v, --verbose         enable verbose output (used with --dry to describe
                        the commands that would be run)
  -l LOG, --log LOG     writes a log of the install to the specified file
  -r [L4T_REPO], --l4t-repo [L4T_REPO]
                        specify the L4T apt repo (i.e. when using an apt
                        mirror; default is repo.download.nvidia.com/jetson)
4. Verify the dGPU driver install using the query command:
$ nvgpuswitch.py query
dGPU (cuda-drivers, 510.73.08-1)
NVIDIA RTX A6000, 49140 MiB

5. After the dGPU drivers have been installed, reboot the system to complete the switch to
the dGPU. At this point, the Ubuntu desktop will be output via DisplayPort on the dGPU
(port 14), so you must switch the DP cable from the iGPU DP out (port 3) to the dGPU DP
out (port 14).
Note: If the output connection isn’t switched before the devkit finishes rebooting, the
terminal screen will hang during booting.
6. Modify PATH and LD_LIBRARY_PATH. CUDA installs its runtime binaries such as nvcc into
its own versioned path that is not included in the default $PATH environment variable.
Because of this, attempts to run commands like nvcc will fail on dGPU unless the CUDA 11.6
path is added to the $PATH variable. To add the CUDA 11.6 path for the current user, add
the following lines to $HOME/.bashrc after the switch to dGPU:
export PATH=/usr/local/cuda-11.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64:$LD_LIBRARY_PATH
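As a convenience, the two exports can be appended idempotently so repeated runs don't duplicate them. This is a sketch; the function name append_cuda_paths and the grep marker are my own:

```shell
# append_cuda_paths: append the CUDA 11.6 PATH/LD_LIBRARY_PATH exports to a
# shell rc file, skipping the append if the marker is already present.
append_cuda_paths() {
  rc="${1:-$HOME/.bashrc}"
  grep -qs 'cuda-11.6/bin' "$rc" || cat >> "$rc" <<'EOF'
export PATH=/usr/local/cuda-11.6/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.6/lib64:$LD_LIBRARY_PATH
EOF
}

# append_cuda_paths          # defaults to ~/.bashrc
# Afterwards: source ~/.bashrc && nvcc --version
```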
At this time, the Clara Holoscan SDK is tested and supported only in dGPU mode. Switching back
to iGPU mode after switching to dGPU mode is not recommended.
Note: The GPU setting will persist through reboots until it is changed again with
nvgpuswitch.py.
Reinstalling Optional SDK Packages
This section only applies if you have selected “Additional SDKs” in Step 01 of the SDK Manager
installation process.
When switching between GPUs, CUDA is first uninstalled and then reinstalled by the script in
order to provide the correct versions used by iGPU or dGPU (CUDA 11.4 and 11.6, respectively).
Since some packages optionally installed via SDK Manager, such as DeepStream, depend on
CUDA, these packages are also uninstalled when the active GPU is switched.
To reinstall the packages after switching GPUs, the corresponding *.deb packages that were
downloaded by SDK Manager during the initial installation can be copied to the Clara AGX
Developer Kit and installed using apt. By default, SDK Manager downloads the *.deb packages
to the following location on the host machine:
~/Downloads/nvidia/sdkmanager
Note that the version numbers may differ – if this is the case, use the latest version of the
arm64 package that exists in the download directory.
$ sudo apt install -y ./deepstream-6.1_6.1.0-1_arm64.deb
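If several versions of a package have accumulated in the download directory, the newest one can be selected with a version sort. This is a sketch; latest_pkg is a hypothetical helper and the DeepStream glob is only an example pattern:

```shell
# latest_pkg: print the highest-versioned file name from its arguments
# (version-aware sort, so 6.1 sorts after 6.0).
latest_pkg() { printf '%s\n' "$@" | sort -V | tail -n 1; }

pkg=$(latest_pkg ~/Downloads/nvidia/sdkmanager/deepstream-*_arm64.deb)
echo "Would install: ${pkg}"
# sudo apt install -y "${pkg}"
```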
Setting up SSD Storage
If you don’t set up SSD storage and move Docker storage to the SSD, Docker image pulls will
quickly fill up the root filesystem, since a complete installation of JetPack leaves only about
40GB free of the 64GB root storage.

The Clara Holoscan Developer Kit includes a pre-installed 500GB m.2 solid-state drive (SSD),
but this drive is not partitioned or mounted by default. This page outlines the steps to partition
and format the drive for use after the initial SDK installation.
Note:
If the Clara AGX Developer Kit is re-flashed with a new JetPack image, the partition table
of the m.2 drive will not be modified and the contents of the partition will be retained. In
this case, you can skip the Create Partition steps; however, you should still follow the Mount
Partition steps to remount the partition.
Any state, binaries, or docker images that persist on the m.2 drive after flashing the system
may be incompatible with new libraries or components that are flashed onto the system.
You may need to recompile or rebuild these persistent objects to restore runtime
compatibility with the system.
Note:
The following steps assume that the m.2 drive is identified by the Clara Holoscan Developer
Kit as /dev/nvme0n1. This is the case if no additional drives have been attached, but if
other drives (such as USB drives) have been attached, then the disk identifier may change.
You can verify this by looking at the symlink to the drive that is created for the m.2
hardware address on the system. If the symlink below shows something other than
../../nvme0n1, replace all instances of “nvme0n1” in the instructions below with the
identifier used by your system:
$ ls -l /dev/disk/by-path/platform-14160000.pcie-pci-0004\:01\:00.0-nvme-1
lrwxrwxrwx 1 root root 13 Jun 2 14:14 /dev/disk/by-path/platform-
14160000.pcie-pci-0004:01:00.0-nvme-1 -> ../../nvme0n1
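The disk identifier can also be resolved programmatically from the by-path symlink; resolve_disk is a hypothetical helper name, and the hardware path is the one shown in the listing above:

```shell
# resolve_disk: follow a /dev/disk/by-path symlink and print the bare
# device name (e.g. nvme0n1).
resolve_disk() { basename "$(readlink -f "$1")"; }

DISK=$(resolve_disk /dev/disk/by-path/platform-14160000.pcie-pci-0004:01:00.0-nvme-1)
echo "m.2 drive: /dev/${DISK}"
```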
Create the Partition
1. Launch fdisk utility:
$ sudo fdisk /dev/nvme0n1
2. Create a new primary partition using the “n” command, then accept the defaults by
pressing Enter for the next 4 questions. This will create a single partition that uses the
entire drive.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):

First sector (2048-976773167, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-976773167, default 976773167):
Created a new partition 1 of type 'Linux' and of size 465.8 GiB.
3. Write the new partition table and exit using the “w” command:
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
4. Initialize the ext4 filesystem on the new partition. Enter “y” when prompted:
$ sudo mkfs -t ext4 /dev/nvme0n1
mke2fs 1.45.5 (07-Jan-2020)
Found a dos partition table in /dev/nvme0n1
Proceed anyway? (y,N) y
Discarding device blocks: done
Creating filesystem with 122096646 4k blocks and 30531584 inodes
Filesystem UUID: 004332a8-b255-4156-836c-7ea734cb78c0
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
Mount the Partition
1. Create a directory for the mount point. These instructions will use the path /media/m2,
but any path can be used.
$ sudo mkdir /media/m2
2. Determine the UUID of the new partition. The UUID will be displayed as a symlink to the
/dev/nvme0n1 partition within the /dev/disk/by-uuid directory. For example, the
following output shows that the UUID of the /dev/nvme0n1 partition is 004332a8-b255-
4156-836c-7ea734cb78c0:
$ ls -l /dev/disk/by-uuid/ | grep nvme
lrwxrwxrwx 1 root root 13 Jun 2 15:13 004332a8-b255-4156-836c-
7ea734cb78c0 -> ../../nvme0n1
3. Add the fstab entry. Using the mount path and the UUID from the previous steps, add
the following line to the end of /etc/fstab:
UUID=004332a8-b255-4156-836c-7ea734cb78c0 /media/m2 ext4 defaults 0 2
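Rather than copying the UUID by hand, it can be derived with blkid. This is a sketch; fstab_line is a hypothetical helper that formats the entry shown above:

```shell
# fstab_line: format an /etc/fstab entry for an ext4 partition, given its
# UUID and mount point (ext4, default options, fsck pass 2).
fstab_line() { echo "UUID=$1 $2 ext4 defaults 0 2"; }

# Derive the UUID and append the entry (review the line before writing):
# uuid=$(sudo blkid -s UUID -o value /dev/nvme0n1)
# fstab_line "${uuid}" /media/m2 | sudo tee -a /etc/fstab
```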

4. Mount the partition. The /etc/fstab entry above will mount the partition
automatically at boot time. To instead mount the partition immediately without
rebooting, use the mount command (and df to verify the mount):
$ sudo mount -a
$ df -h /dev/nvme0n1
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1 458G 73M 435G 1% /media/m2
5. Manage permission on SSD. Use the “chmod” command to manage file system access
permission. For example:
$ sudo chmod -R 777 /media/m2
Setting up Docker and Docker Storage on SSD
1. Install Docker if it has not been installed on your system:
$ sudo apt-get update
$ sudo apt-get install -y docker.io
2. Create a Docker data directory on the new m.2 SSD partition. This is where Docker will
store all of its data, including build cache and container images. These instructions use
the path /media/m2/docker-data, but you can use another directory name if
preferred.
$ sudo mkdir /media/m2/docker-data
3. Configure Docker by writing the following to /etc/docker/daemon.json:
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia",
"data-root": "/media/m2/docker-data"
}
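A malformed daemon.json prevents the Docker daemon from starting, so it is worth validating the syntax before restarting the service. This is a sketch; check_daemon_json is a hypothetical helper that uses Python's standard json.tool module:

```shell
# check_daemon_json: validate the JSON syntax of a Docker daemon config
# file (defaults to /etc/docker/daemon.json); prints OK on success.
check_daemon_json() {
  python3 -m json.tool "${1:-/etc/docker/daemon.json}" > /dev/null && echo OK
}

# check_daemon_json          # validates /etc/docker/daemon.json
```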
4. Restart the Docker daemon:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
5. Add the current user to the Docker group so Docker commands can run without sudo.
# Create the docker group.
$ sudo groupadd docker
# Add your user to the docker group.
$ sudo usermod -aG docker $USER
# Activate the changes to groups. Alternatively, reboot or re-login.
$ newgrp docker

6. Verify that you can run a hello world container:
$ docker run hello-world
Install the Clara Holoscan SDK
The Clara Holoscan SDK is hosted on GitHub starting from v0.2. See
https://github.com/nvidia/clara-holoscan-embedded-sdk for information on installing the
Clara Holoscan Embedded SDK.
Known Issues
1. RDMA known issue and workaround
There’s a known issue that prevents GPU RDMA from being enabled on the Clara Holoscan
Developer Kit. While our team is working on a fix, the workaround is to run the following
command after every reboot to disable ACS:
$ sudo setpci -s 0007:02:00.0 ecap_acs+6.w=0
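Since the command must be re-applied after every reboot, one option is a systemd oneshot unit that runs it at boot. This is a sketch under assumptions of my own (the unit name disable-acs.service and the /usr/bin/setpci path); the setpci address is the one from the workaround above:

```ini
# /etc/systemd/system/disable-acs.service (hypothetical unit name)
[Unit]
Description=Disable PCIe ACS (GPU RDMA workaround)

[Service]
Type=oneshot
ExecStart=/usr/bin/setpci -s 0007:02:00.0 ecap_acs+6.w=0

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl daemon-reload && sudo systemctl enable disable-acs.service, and confirm after a reboot that the RDMA workaround is in effect.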
2. Automatic Setup hangs during the flashing process
When flashing the devkit using SDK Manager, the SDK Manager UI can hang at the Step 3
dialog prompt that says “SDK Manager is about to flash your Clara AGX Developer Kit
module” if you choose Automatic Setup, even if your Developer Kit has been flashed before.
Action: Put your Developer Kit into recovery mode following the steps in the Flashing and
Updating Clara AGX Developer Kit using SDK Manager section and choose “Manual Setup” in
Step 3 of the SDK Manager flashing process.
3. Attempting to switch to dGPU mode fails, and the system is not in iGPU or dGPU mode
If the nvgpuswitch.py script for installing the dGPU drivers fails for any reason, the system
does not fall back to the previous iGPU mode; as a result, neither GPU mode is enabled.
Action: When you are ready to try again, first check that the nvgpuswitch.py script is still
in your $PATH; if it is missing, find the script and copy it to a directory in your $PATH:
$ sudo find / -name nvgpuswitch.py

/opt/nvidia/l4t-gputools/bin/nvgpuswitch.py
$ sudo cp /opt/nvidia/l4t-gputools/bin/nvgpuswitch.py /usr/local/bin/
Then, use the -f option when running nvgpuswitch.py to force a reinstall of the dGPU
stack.
$ sudo nvgpuswitch.py install dGPU -f
Additional Resources
For other documentation and release notes, see the Clara Holoscan SDK page.
For further Jetson documentation, see the L4T documentation.
For feedback, discussion, and questions, post to the Clara Holoscan SDK Developer Forum.