
<aside> 👉

This document provides a guide to installing and configuring Unraid, a network-attached storage (NAS) and applications ecosystem. It covers topics ranging from the initial hardware setup and OS installation to array and pool management, share configuration, Docker integration, and “Community Applications” (CA). The guide is structured in a step-by-step format, offering practical advice and best practices to help users leverage the full potential of Unraid.

</aside>

Revision: 20241022-0 (init: 20240731)

The following are best practices from using Unraid for close to three years. Although much more complex installations (with ZFS pools or elaborate alternate-OS VM setups) are possible, this guide is written for people interested in learning the tool. I have written about Unraid in the past, such as in “Things I wished I had READ before my first Unraid Install... and more”. This post is different: I recently installed a new workstation and used the experience to document various aspects such as installation, array management, share configuration, and community applications.

I hope it provides valuable insights for those interested in exploring Unraid further, especially with the 7.0 release coming soon. Throughout, we link to Unraid’s documentation for additional content on the topics discussed.

About Unraid

Unraid is an operating system that manages storage, computing, and network resources. It provides the features of a software-based network-attached storage (NAS) system but also offers a comprehensive IT infrastructure solution.

As a NAS, Unraid’s “Array” allows mixing hard drives of different sizes and speeds, maximizing storage capacity, and using parity drive(s). A parity drive provides a means to recover data from a failed disk and to emulate the content of a damaged drive until it can be replaced. Adding drives is possible without rebuilding the array (as long as the added drives are zeroed out first and are no larger than the array’s parity drive). The array is usually made of spinning drives for mass storage.

“Pools” are generally made of faster storage (SSDs or NVMes) and can be utilized as a cache to enhance performance, allowing frequently accessed data to be stored on faster drives. Pools are the recommended storage space for Docker containers, the applications’ data, and Virtual Machines’ disk images.

“Shares” are the method for sharing data across the network; they are folders created on the server’s disks, on the array or in a pool. Shares can be set with specific permissions for different users or groups. When shared over the network, they can be public, secure, or private, each providing a different level of access control. Shares can be configured to use cache pools, improving performance for frequently accessed data by temporarily storing it on faster drives.

As an IT infrastructure solution, built on a Linux kernel, its Docker integration and ability to run Virtual Machines provide a solid base. It can be installed on many x86-64 hardware platforms. Unraid’s built-in hypervisor allows users to run applications and even entire operating systems in isolation.

Unraid runs on a large subset of hardware; the minimal setup requires:

A 64-bit capable processor that runs at 1 GHz or higher.

A minimum of 4 GB of RAM for basic NAS functionality.

Linux hardware driver support for storage, Ethernet, and USB controllers.

Two hard disk drives to ensure data protection with a parity disk.

The more powerful the hardware, the more it is possible to run on Unraid (NAS, VMs, Gaming, …), so it is recommended to think ahead about your server's goals; their “Use Cases” page gives a great idea of the possibilities. We will use it to run Docker services.

Unraid is popular among DIY enthusiasts and fellow self-hosters because of its flexibility, ease of use, and ability to repurpose older hardware. It is a commercial product developed by Lime Technology, Inc., requiring a license. The license is available in different tiers based on the number of storage drives you need to support.

Unraid has many community contributors providing Docker applications and a wide range of plugins, which makes extending its functionality relatively straightforward. The “Community Apps” plugin is Unraid’s “App Store” and should be the first plugin installed after setup to get access to a wide range of applications. Most of these applications are Docker-based, and their containers can be managed through Unraid’s web-based user interface.

Running applications in Docker containers keeps them isolated from each other and the host system. This isolation helps prevent application conflicts and increases security by limiting what each application can access. It is also possible to control the amount of CPU and memory resources allocated to each container, ensuring that the server remains responsive and stable, even when running multiple services. Among the proposed applications are media servers, data-sharing applications, or various game servers, which can be installed with just a few clicks.
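As an illustration of such limits, the same caps Unraid exposes per container can be expressed with Docker’s standard CLI flags; a minimal sketch, assuming a hypothetical image name:

```shell
# Cap a container at 2 CPUs and 512 MB of RAM
# (my-app-image is a placeholder; on Unraid these values are usually
#  set in the container template's "Extra Parameters" field)
docker run -d --name my-app --cpus=2 --memory=512m my-app-image:latest
```

If the container tries to exceed its memory cap, it is killed rather than destabilizing the rest of the server.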

OS Installation

Initial installation

The Unraid OS runs entirely from a USB stick (leaving all disk drives as units of storage) and should run on a system with a wired Ethernet connection (avoid Wi-Fi) and a static IP for the server host.

To set up the hardware, obtain the “USB creator” and run it. The latest version of Unraid at the time of this installation was 6.12.11; we will, therefore, install this version on a recommended USB stick.

Unraid is a server application; we want to find it at the same IP on our subnet after each reboot. When creating the USB stick, we can either assign a static IP or use the server’s MAC address to create a static DHCP reservation on the router. We will use the 192.168.22.99 IP. At this stage, it is also possible to name the server (the default is tower); we will name ours unraid99.

Once the USB stick is ready, we boot our hardware from it. The Unraid OS will attempt to recognize and configure the existing hardware devices; after some time, a Linux login prompt will appear. We can then configure our unraid99 OS instance by browsing to the host’s IP (at http://192.168.22.99/). We will be presented with the OS’s dashboard and a reminder to register or purchase a license. It is possible to test the OS for up to 30 days before buying a license.

After creating the root password, we can set our Array and Pool(s).

Array, Pools, and Shares

After licensing our copy of Unraid (or using the 30-day trial), we are ready to configure our array and pools. In general, arrays are the primary storage space in Unraid, optimized for capacity, while pools are additional high-performance storage areas that can be configured for specific needs, like caching.

Arrays are often used as the main storage space and are composed of hard drives. An array should have one (up to two) parity drive(s). Depending on the license, it is possible to have more or fewer total drives in the system. The parity drives contain the XOR-ed content of all the other drives constituting the array. A parity drive must always be of equal or larger size than the largest of the data disks. For example, if we have 4TB, 5TB, and 2x 6TB drives to constitute our array, the 4TB, 5TB, and one 6TB can be our data drives, while the second 6TB will be our parity drive, creating a total array size of 15TB. Please see Unraid’s Storage Management page for additional details.
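The XOR parity scheme can be illustrated with a toy example, one byte standing in for each drive’s contents:

```shell
# Toy parity: one byte per data drive
d1=37; d2=129; d3=6
parity=$(( d1 ^ d2 ^ d3 ))        # what the parity drive would store

# Drive 2 "fails": rebuild its content from parity and the surviving drives
recovered=$(( parity ^ d1 ^ d3 ))
echo "$recovered"                 # prints 129, the lost value of d2
```

The same relation holds byte-for-byte across whole disks, which is why a single failed drive can be emulated and rebuilt from parity plus the surviving data drives.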

Drives in Unraid’s array do not use a Redundant Array of Independent Drives (RAID) solution. Each drive contains an independent filesystem that can be read individually on any Linux system. The array is optimized for capacity rather than performance. Files are placed on the drives as part of “Shares” and put on physical drives following a user-configurable share’s “Allocation method”. This means that although an independent disk can be read from another Linux system, a directory of a “share” on a single disk might not contain all the files for that share, as those may have been allocated (placed) on another physical disk.
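This split view is visible from a terminal on the server itself; /mnt/diskN and /mnt/user are Unraid’s standard mount points, while the media share name here is purely illustrative:

```shell
ls /mnt/disk1/media   # only the files allocated to disk 1
ls /mnt/disk2/media   # only the files allocated to disk 2
ls /mnt/user/media    # the merged view of the full share across all disks
```

Always write through /mnt/user so the allocation method and split level are honored.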

Pools are additional storage spaces using SSDs or other high-performance drives. They do not use the parity redundancy method, relying instead on standard RAID configurations like RAID 0, 1, 5, etc. Pools are typically used for caching (their former name), providing higher performance than the main array “[and] does this by redirecting write operations to a dedicated disk […] and moves that data to the array on a schedule”. Fast drive-based pools should be where applications’ and virtual machines’ data are stored while in use, for high throughput. Multiple pools can be configured for different purposes, such as one optimized for read performance and another for write performance.

<aside> 👉 Although this is the usual way to configure Unraid pools and arrays, it is also possible to create ZFS pools directly using hard drives; please see https://unraid.net/blog/zfs-guide. This writeup will use SSDs for pools and HDDs for array drives.

</aside>

There are many options for disk format, the primary ones being btrfs, xfs, and zfs. In short: xfs is the mature default for individual array disks, btrfs supports multi-device pools and snapshots, and zfs offers the most advanced feature set at the cost of higher RAM usage.

For our setup, we will use btrfs for the pool (unencrypted) and xfs for the data disks (encrypted).

In general, with any drive, to perform changes such as “Erase” (which might be needed before a “File system type” can be assigned), we need to go into the submenu accessed by clicking on the “Device” name, such as Cache, Disk 1, Disk 2… For example, selecting “Disk 1” opens a “Disk 1 Settings” tab. The “Erase” option is available there to delete any existing partition on the drive (we will need to confirm by typing disk1); we can also set the “File system type” at this stage.

Let’s first add our pool, then we will add disks to our array and encrypt those.

Adding a pool

In “Pool Devices,” we select “Add Pool,” give it a name, and set the number of slots to match the physical SSDs/NVMes we want to use as pool data disks (these should not be disks from our array of spinning drives). We will use this location for the Docker data, application data, and potential VMs running on our system.

Depending on the number of drives added to each pool, different RAID options will become available under the pool disks’ selection.

If we only have one pool composed of one disk, we can name it cache. We will format it using btrfs (not encrypted), turning off “compression” but enabling “autotrim” (as our drive has trim capability). After selecting “Apply”, the drive is ready for our pool (it might be required to “Erase” it first).

Unraid-Cache.png

Adding encrypted disks to the array

Unraid recommends filling hard disks with zeroes before adding them to the array. This speeds up array and parity creation because 0 XOR x = x, i.e., a zeroed disk has no influence on parity.
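The identity is easy to check with shell arithmetic; zeroing itself is typically done with a tool such as dd, shown here only as a comment since the command is destructive (the device name is a placeholder):

```shell
# Destructive zeroing example -- do NOT run against a disk in use:
#   dd if=/dev/zero of=/dev/sdX bs=1M status=progress
x=173
echo $(( 0 ^ x ))   # prints 173: XOR with zero leaves the value unchanged
```

Because of this identity, a pre-zeroed disk can join the array without forcing a full parity rebuild.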

It is also possible to build the parity drive by adding it last, letting it be filled with the XOR of all data disks. Because we will encrypt the disks in our array, we will use this method; we must therefore note which disk we intend to use as the parity drive, but not add it just yet.

By using encryption on the data drives, should a drive die, the data on it is an encrypted blob. That disk is still readable on another Linux host, as long as 1) that system can read the filesystem type (here xfs), and 2) LUKS can decrypt the disk content using the data-encryption passphrase. As discussed earlier, because of the share’s allocation method, not all files for a given share might be present on that disk.
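For reference, recovering such a disk on another Linux host looks roughly like this; a sketch only, where the device name and mount point are illustrative and the passphrase is the one set in Unraid:

```shell
# Open the LUKS container, then mount the xfs filesystem inside it
cryptsetup luksOpen /dev/sdX1 recovered_disk   # prompts for the passphrase
mkdir -p /mnt/recovery
mount /dev/mapper/recovered_disk /mnt/recovery
ls /mnt/recovery   # only the share files allocated to this physical disk
```

Remember that the listing shows just this disk’s portion of each share, not the merged /mnt/user view Unraid normally presents.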

Add each drive to the array in the “Disk” slot that you prefer (depending on the license, you will be able to use more or fewer total drives). Usually, the first disk is written to first; then, following the allocation method, another disk is used, spreading the data load across the array drives.

For each drive added, we can prepare the disk with our desired settings from the “Disk ID” selection. Under the “Disk ID Settings”, we will have a “File system type” dropdown and an “Erase” button; use those to prepare the disk. This is particularly important if the drives were used before, to clear any previous partition and ready the drive to be formatted. First, we erase all data disks.

After erasing all disks, the “Start the array” button is now available. Before using it, we will first encrypt the data disks.

Selecting our first disk, from the “Settings” submenu we can now change the “File system type” to “xfs - encrypted” and “Apply”.

Unraid-Disk.png

At the bottom of our “Main” page is a new entry to enter an encryption “passphrase”; ignore it for now.

Repeat the modification of “File system type” for all disks.

We will follow the method at https://forums.unraid.net/topic/84256-changing-encryption-from-passphrase-to-keyfile/?do=findComment&comment=1179728 to create a passphrase that re-populates at reboot (if a NAS is not available to obtain the key from, keeping the encryption file on the USB stick is an option). After creating the configuration, paste the “passphrase” into the WebUI.
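The linked forum method boils down to making the key available as /root/keyfile before the array starts. A minimal sketch of the idea in /boot/config/go, assuming the keyfile is kept on the USB stick itself (fetching it over the network from a NAS is the variant described in the post; the keyfile path here is an assumption):

```shell
#!/bin/bash
# /boot/config/go (sketch): restore the encryption keyfile at each boot
cp /boot/config/keyfile /root/keyfile   # assumed location on the USB stick
/usr/local/sbin/emhttp &                # Unraid's standard management process
```

Note that keeping the key on the same USB stick trades security for convenience; anyone with the stick can unlock the disks.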

The “Start” button changes from greyed out to active. Confirm that no “Parity” disk is added just yet, and start the array.

After the start, all drives will show up with an Unmountable error. The “Array Operations” section will show “Unmountable disks present” with an option to “Format” them. Let’s use it.

After formatting the array disks and pool(s), a green lock is present next to each encrypted disk.

Unraid-EncDisks.png

Go to “Settings → Disk Settings” and enable “Enable auto start”, then “Apply”.

Reboot (from the “Dashboard” tab) to confirm that /root/keyfile (our encryption passphrase) is automatically applied.

After the reboot, and after confirming that the encrypted array starts automatically, we will now add the Parity disk.

<aside> 💡 We recommend making labels for each drive in the system, with the type of drive and the last few digits of the drive’s serial for easy visual review. As you can see from the “Main” tab, knowing which drive is your parity drive and if possible the order in which the other drives are added might prove useful with future modifications. Taking a screenshot is a good way to have this information available at a later time.

</aside>

Users

To better use Unraid and its shares, it is useful to add “Users” beyond root to our system.

Adding users allows us to control who can access specific shares on the Unraid server. By creating user accounts, we can assign different levels of access to various shares. For example, we might want certain users to have read-only access while others have read-write permissions.

The users menu can be accessed from the “Users” tab or from the “Dashboard” tab when selecting the gears icon in the user’s section.

Once in the interface, select “Add User” and enter a ”User name” for the new user (it is recommended to use lowercase letters and keep the name under 30 characters to ensure compatibility across different operating systems). Optionally, provide a ”Description” and a ”Custom image” for the user. Set a ”Password” and confirm it. Make sure to click ”Add” to create the user.

Shares

Shares represent folders or drives on your Unraid server that can be accessed over a network.

Shares allow the organization of data logically, with separate shares for media, documents, backups, etc. They are created in /mnt/user as folders existing at this root. Each can have a primary storage and an optional secondary storage. The primary storage is the location where new files are initially written for a given share, while the optional secondary storage determines where files can be moved after they have initially been stored in the primary location. The mover process transfers files between storage locations (typically from cache to array) at scheduled times, and its behavior is influenced by the share settings, including the allocation method and split level. Unraid offers different allocation methods for distributing files across disks:

“High-water”: fills up one disk at a time until it reaches a certain threshold (the “high-water mark”), then moves on to the next disk.

“Fill-up”: uses the lowest-numbered disk that still has free space above a threshold.

“Most-free”: uses the disk with the most free space at the time of writing.
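To make the allocation methods concrete, here is a toy simulation of “Most-free”, placing four 500GB files on three disks (the sizes are illustrative, and real Unraid also accounts for minimum-free-space and split-level settings):

```shell
# Free space (GB) on disks 1..3 before writing
f1=4000; f2=5000; f3=6000

for size in 500 500 500 500; do
  # Each file goes to whichever disk currently has the most free space
  if [ "$f1" -ge "$f2" ] && [ "$f1" -ge "$f3" ]; then f1=$(( f1 - size ))
  elif [ "$f2" -ge "$f3" ]; then f2=$(( f2 - size ))
  else f3=$(( f3 - size )); fi
done

echo "$f1 $f2 $f3"   # prints: 4000 4500 4500
```

Note how the writes level out the fuller disks first, which is the behavior that distinguishes “Most-free” from “Fill-up”.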

Shares can also, optionally, be shared over the network (using SMB), either as “visible” or “hidden” (one must know the share’s name), and access control to those shares can be set per added user.

Some shares are created automatically by Unraid, and it is left to the end user to select their primary storage location.

Preliminary Docker setup

Docker by default uses a 20GB vDisk image, which might not be enough to run some of the applications we will obtain from the “Apps” tab. We will therefore use a cache-allocated share (not network reachable) to store as much data as we need.

From the “Shares” tab, select “Add Share”.

Unraid-Docker-share.png

This will create a “Share” that only exists on the “Cache” (i.e., it will not be copied to our disk array unless some backup is enabled for this location).

New options will appear after the share is created, such as SMB sharing. We will not share this directory.

From “Settings → System Settings → Docker”, disable Docker.

