TrueNAS can integrate with various other technologies or solutions to provide an enhanced experience or better integrate the TrueNAS system into a specific use case.
The articles in this guide are organized into two sections: optimization recommendations for specific TrueNAS use cases and integration information for various partner solutions.
Many of the TrueNAS tutorials in this section cover configuring the TrueNAS side of a partner solution and redirect over to that solution’s documentation for further guidance.
Overview of topics:
Media Workflow optimizations
Security recommendations
Asigra documentation
VMware integration notes, including TrueNAS vCenter Plugin documentation.
Ready to get started? Choose a topic or article from the left-side Navigation pane.
Click the < symbol to expand the menu to show the topics under this section.
1 - Migrating to TrueNAS
This article describes general recommendations for migrating data into TrueNAS.
Every NAS user has a unique storage setup, but we can still give general recommendations for migrating your data into TrueNAS using share protocols and cloud storage.
NFS Migration
Rsync is an open-source file transfer utility that runs on Linux and other Unix-like operating systems. Usage instructions and tutorials are available from the rsync project website.
If you decide to use rsync, be aware that your filesystem must support Access Control Lists (ACLs) for rsync to preserve them during the transfer.
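If you go the rsync route, a pull from the old NAS into a TrueNAS dataset can be sketched as follows (the hostname, source path, and dataset path are placeholders):

```shell
# -a (archive) preserves permissions, timestamps, and symlinks;
# -A and -X additionally carry over ACLs and extended attributes
# where both filesystems support them.
rsync -aAX --progress \
    admin@old-nas.local:/volume1/share/ \
    /mnt/tank/migrated-share/
```

Note that the trailing slash on the source path copies the directory contents rather than the directory itself.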
SMB Migration
We recommend migrating via SMB sharing if you use computers with Windows OS (or if you prefer SMB).
Robocopy is ideal for users with Windows clients. SMB also allows you to drag and drop files from your current NAS storage into TrueNAS, although this is slower than using a utility like Robocopy.
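For Windows clients, a Robocopy mirror between the old share and the new TrueNAS share might look like this (both UNC paths are placeholders):

```shell
:: Run from cmd.exe on a Windows client that can reach both shares.
:: /MIR mirrors the source tree, /COPY:DAT copies data, attributes,
:: and timestamps, and /R:3 /W:5 limits retries on locked files.
robocopy \\old-nas\share \\truenas\share /MIR /COPY:DAT /R:3 /W:5
```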
iSCSI Migration
vMotion uses block-level storage protocols to move data. If you prefer to use iSCSI (block-level storage) protocols, vMotion might be an option. vMotion is often used to move virtual machines from one host to another. Research whether vMotion is suitable for your setting and use needs.
Cloud Migration
Migrating via the cloud is another option. Services like MinIO, Amazon S3, and Google Drive (among many others) can move your files and data from one NAS to another. Utilities like rclone facilitate migration through cloud storage platforms.
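As a sketch, moving data between cloud-accessible endpoints with rclone looks roughly like this (the remote names are placeholders configured beforehand with rclone config):

```shell
# Copy a bucket or folder from one configured remote to another
rclone copy oldnas-s3:backups gdrive:backups --progress
```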
Be aware that cloud storage can be expensive when moving large amounts of data.
2 - Optimizations
The optimizations topic has articles discussing how best to configure TrueNAS for various use cases or specific needs.
This includes Disaster Recovery configurations, Media and Entertainment tuning, and Security best practices.
2.1 - Cross-Site Disaster Recovery
TrueNAS supports many different disaster recovery (DR) scenarios!
Some of these scenarios with recovery processes are listed here.
Point-in-Time Recovery – ZFS Replication
Of the native ways to replicate data, ZFS replication is the most efficient and reliable method for asynchronously replicating data from one TrueNAS system to another. Replication is based on snapshots of datasets or zvols and synchronizes the snapshots of the first system to the second system. There are numerous advantages to using ZFS replication. One of those is that a snapshot is a point-in-time, read-only copy of the data. This ensures that the contents of the snapshot cannot be altered.
ZFS replication is commonly used for disaster recovery. Should the first system or site go down, the remote system can be brought back by cloning the snapshot to a new dataset and restoring the share. This recovery does require some work on the side of the admin, but it’s incredibly quick and ensures that whatever was transferred is retained. Snapshots and replications can be scheduled to run every few minutes.
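The recovery steps described above can be sketched from the remote system's shell (pool, dataset, and snapshot names are placeholders):

```shell
# List the replicated snapshots for the dataset
zfs list -t snapshot -r tank/replica/share

# Clone a known-good snapshot to a new, writable dataset
zfs clone tank/replica/share@auto-2023-01-01_00-00 tank/restored-share

# Then recreate the share on top of the new dataset in the web UI
```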
Another benefit of ZFS replication is the capability for the snapshots and referenced data to be stored on systems and pools of different specs or pool configuration. All-flash, high-performance pools can be backed up to lower performance pools with traditional drives and different RAID configurations. Smaller systems can also be backed up to larger central repositories. Companies such as FirstLink and others use this to help clone edge devices like the TrueNAS Mini systems to a central core TrueNAS in their data center. ZFS replication on TrueNAS ensures data protection regardless of system complexity, size, or location.
File-based Recovery – Rsync
Rsync performs file-level transfers and is the same rsync found on the Linux and FreeBSD command line. It is handy for semi-live synchronization when you only need the same files present at both sites, with each side accessed over a local share.
Rsync is useful for file transfer, but it is not recommended when files are being actively modified. For example, if an rsync task starts while 100 GB is being written and the data changes before the transfer completes, it can cause issues with versioning and data integrity. Rsync should never be used to copy active VM data stores, block-level data (iSCSI or Fibre Channel shares), or other data that could constantly be in use. Rsync is also slower than ZFS replication, particularly for large datasets, so choose it for convenience rather than data integrity. It can be used between TrueNAS and many other systems.
File Recovery To or From the Cloud – Cloud Sync
TrueNAS can copy, pull, and sync data to a variety of cloud-based data storage systems, including Amazon AWS, Microsoft Azure, Google GCP, Google Drive, Backblaze B2, Dropbox, Box, and more. By integrating rclone sync for file transfers, this feature can copy files on TrueNAS into a cloud repository of a user’s choosing.
For larger datasets, TrueNAS systems are more cost-effective long term than cloud offerings, including Amazon AWS. For this reason, using TrueNAS as a backup target for protecting cloud-based data, e.g., from AWS, Dropbox, or Google Drive, is ideal because data stored in TrueNAS will get scrubbed, checked, and retained with an unlimited number of snapshots available.
Automatic failover between sites is beyond the scope of TrueNAS systems alone. TrueNAS is a storage system, and while it handles data replication well in a variety of ways, automatic failover to a remote site requires knowledge of the services themselves. For environments with web or video streaming services, [DNS round-robin](https://en.wikipedia.org/wiki/Round-robin_DNS) with failover might be feasible. Several web servers, like NGINX, also feature load-balancing services that can help mitigate service overload or downtime. TrueNAS systems provide a stable backend in this topology, with the option of also running ZFS replication for additional safety. Contact iXsystems if you need assistance designing a storage system for your business.
TrueNAS is a storage platform with powerful ways to ensure data integrity and consistency between local and remote sites. ZFS replication is the fastest and best way to ensure the data transferred is intact. Rsync is useful for file sync but cannot be used for live data or block-level data that could change during transfer. Cloud sync supports user workloads that archive to or from mainstream cloud providers. Beyond these tools, TrueNAS works with other systems, such as Asigra Backup and iconik smart media management, to provide an ultra-scalable backend with robust performance and a strong emphasis on data protection. The tools that TrueNAS provides combined with the flexibility to work with nearly any IT environment make it a robust system for cross-site and DR workloads.
2.2 - Media Workflows
Developing and delivering media content that reaches audiences whenever and wherever they are has increased in importance and complexity.
In today’s highly connected, entertainment-driven world, media and entertainment (M&E) companies need to stay competitive to succeed.
These organizations need to produce information and entertainment in a variety of different formats to display on mobile devices, desktops, workstations, Blu-ray players, game consoles, set-top boxes, and TVs as well as in digital and analog movie theaters.
Workflows grow in complexity daily and time-to-market windows continue to shrink.
Where and how to store and archive all this content remains top-of-mind. M&E projects run on multiple heterogeneous environments, need the features of an enterprise-grade storage array, and require multiple protocols.
Most M&E production houses purchase data storage based on capacity and performance dictated by the needs of existing applications.
As a result, businesses often end up with multiple classes of application-specific storage or storage silos including SAN, NAS, all-flash arrays, and many forms of direct attached storage (DAS) from a multitude of vendors.
Creative organizations are often forced to over-provision and over-purchase capacity or performance, or use an all-flash array to meet their production needs. This reactive purchasing drives up the cost of media production.
As media files grow, it becomes complex to manage and inefficient to increase the capacity or performance of DAS or consumer-grade NAS, so many turn to cloud storage.
The security risks and expense of cloud storage are top concerns for IT and media managers.
These factors and others put intense pressure on your budget and data storage infrastructure to keep up with the demand.
A TrueNAS storage system from iXsystems brings M&E production houses an enterprise-grade storage solution that supports multiple protocols and is capable and affordable for many M&E applications.
It is designed to enable M&E customers to address media capacity and performance requirements while reducing total cost of ownership (TCO), consolidating digital assets, accelerating media workflows, and providing the features needed to protect all media assets.
Read more to learn how TrueNAS can be optimized for typical M&E production house usage.
General tuning recommendations are changing constantly!
Check back often to see what’s changed or add your own recommendations!
General Optimizations
Use SMB3 sharing on both the TrueNAS and any client systems.
A typical recommendation is to use Mixed RAID (2+1 RAIDZ) in most cases with added Read and Write cache.
The Write cache is optional if the system is only using SMB sharing.
A 6- or 7-disk-wide RAIDZ2 (Protection-X or Protection) is possible for tier-2, nearline, or archival storage. It also works when the system has extensive data storage of a few hundred terabytes or more.
Setting jumbo frames (MTU: 9000) on the network, TrueNAS, and client side is important for large file streams.
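As an illustration, setting and verifying jumbo frames from the command line might look like this (the interface name ix0 and the hostname are placeholders; on TrueNAS itself, set the MTU under Network > Interfaces):

```shell
# FreeBSD/TrueNAS side: set the interface MTU to 9000
ifconfig ix0 mtu 9000

# Verify end to end from a client with a do-not-fragment ping:
# 9000 bytes minus 28 bytes of IP/ICMP headers = 8972-byte payload
ping -D -s 8972 truenas.local        # FreeBSD/macOS syntax
ping -M do -s 8972 truenas.local     # Linux syntax
```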
Do not store Media Cache Files and Media Cache Databases on a NAS. These files must stay local on clients. Ideally, client systems use SSDs and NVMe devices to store these files.
With standard (non-flash) systems, do not move or copy files or footage while editing, as this causes choppy playback.
Software-specific Tuning
Beyond general optimization for Media and Entertainment workflows are tunings or TrueNAS usage recommendations for specific applications.
Adobe Premiere®
System size is a primary factor when tuning TrueNAS for Adobe Premiere workflows.
4K workflows typically require 20 or more disks.
8K can demand all-flash storage, but the Premiere proxies feature reduces the performance impact.
Make sure your client systems or other applications support this feature too.
To improve performance when scrubbing through long video files with audio tracks, deselect Play audio while scrubbing under Preferences > Audio.
Shared projects must enable Project Locking in Premiere.
2.3 - Security Recommendations
When using services on TrueNAS, especially services that allow outside connections, there are some best practices to follow to ensure your system is safe and secure.
Several different system services are discussed in this article.
iSCSI
Follow the iSCSI creation wizard unless a specific configuration is required.
To create an iSCSI share, go to Sharing > Block Shares (iSCSI) and click WIZARD.
The iSCSI wizard has several additional security settings.
When creating a new Portal, consider adding a Discovery Authentication Method.
This adds authentication between the initiator and the extent based on the chosen authentication method.
Entering a list of Initiators and Authorized Networks is also recommended.
This allows defining which systems or networks can connect to the extent.
When these options are empty, all initiators and all networks are allowed to connect to the extent.
NFS
Network File System (NFS) is a sharing protocol that allows outside users to connect and view or modify shared data.
NFS service settings are in Services after clicking the (pencil).
By default, all options are unset.
Unless needed for a specific use case, keep the default NFS service settings.
During Share Creation, define which systems are authorized for share connections.
Leaving the Authorized Networks or Authorized Hosts and IP addresses lists empty allows any system to connect to the NFS share.
To define which systems can connect to the share, click the Advanced Options and enter all networks, hosts, and IP addresses to have share access.
All other systems are denied access.
SMB
Using Server Message Block (SMB) to share data is a very common situation for TrueNAS users.
However, it allows outside connections to the system and must be properly configured to avoid security issues.
Do not use NTLMv1 Auth with an untrusted network.
This encryption option is insecure and vulnerable.
When using macOS to connect to the SMB share, enable Apple SMB2/3 Protocol Extensions.
This improves connection stability between the share and the Apple system.
If you need to add an Administrators Group, make sure the group members are correct.
Members of the administration group have full permissions to modify or delete the share data.
During Share Creation, a Purpose can be selected.
This changes the share configuration with one click.
For example, when selecting Private SMB Datasets and Shares from the list, TrueNAS automatically tunes some settings so the share is set up for private use.
To fully customize the share settings, select No presets for the Purpose.
Unless a specific purpose for the share is required, it is recommended to select Default share parameters as the Purpose.
SMB Server Signing is recommended.
To enable server signing, go to Services > SMB > Edit > Auxiliary Parameters and add this string to the Auxiliary Parameters field:
server signing = mandatory
Then save, stop, and restart the SMB service.
SSH
Using Secure Shell (SSH) to connect to your TrueNAS is very helpful when issuing commands through the CLI.
SSH settings are in Services after clicking the (pencil).
For best security, disable the Log in as Root with Password and Allow Password Authentication SSH Service options.
Instead, create and exchange SSH keys between client systems and TrueNAS before attempting to connect with SSH.
Be careful when prompted to overwrite any existing SSH key pairs, as this can disrupt previously configured SSH connections.
Overwriting an SSH key pair cannot be undone.
Windows:
Open Windows PowerShell or a terminal.
Enter ssh-keygen.exe.
Type a location to store the new key pair or press Enter to use the default location (recommended) shown in parentheses.
Type a passphrase (recommended) for the key pair or press Enter to not use a passphrase. Confirm the passphrase.
macOS:
Open the Terminal app.
Enter ssh-keygen -t rsa -b 2048. This uses the RSA algorithm to create a 2048-bit key, which is generally considered acceptable.
Type a location to store the new key pair or press Enter to use the default location (recommended).
Type a passphrase (recommended) for the key pair or press Enter to not use a passphrase. Confirm the passphrase.
Linux and other Unix-like systems:
Open the shell.
Enter ssh-keygen. By default, this uses the RSA algorithm to create a 3072-bit key pair. On systems with older versions of OpenSSH, enter ssh-keygen -t rsa to explicitly select the RSA algorithm.
Type a location to store the new key pair or press Enter to use the default location (recommended).
Type a passphrase (recommended) for the key pair or press Enter to not use a passphrase. Confirm the passphrase.
Root account logins via SSH are never recommended.
Instead, create new TrueNAS user accounts with limited permissions and log in to these when using SSH.
If it is a critical and unavoidable situation and root logins must be allowed, first set up two-factor authentication (CORE 2FA, SCALE 2FA) as an additional layer of security.
Disable the Log in as Root with Password setting as soon as the situation is resolved.
Unless it is required, do not set Allow TCP Port Forwarding.
Many SSH ciphers are outdated and vulnerable.
It is not safe to enable any weak SSH ciphers.
Block both the CBC and Arcfour ciphers by going to Services > SSH > Edit > Advanced Options and adding this line in the Auxiliary Parameters:
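The directive itself is missing from this copy of the article. With modern OpenSSH, a line along these lines removes the weak cipher families from the default set (the exact pattern is an assumption; verify it against your TrueNAS version before applying):

```
Ciphers -*-cbc,arcfour*
```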
3 - Integrations
Integrations discusses how TrueNAS can work with different third-party applications to create unique or efficient storage management environments.
3.1 - AWS Images
Process Summary
Requirements
FreeBSD system
AWS Account
S3 Bucket
User with permissions for EC2
Download the user key to the local working directory and modify
Install bhyve and bsdec2-image-upload
Patch bsdec2-image-upload if needed
Create TrueNAS image file
Download TrueNAS .iso
Create blank image file
Load virtualization module
Create tap and bridge interface
Load image and iso into bhyve
Install TrueNAS
Upload image to EC2
Description and region name are required
Launch EC2 instance
Select name created with image
t2.large is the recommended instance type
Add HDD/SSD volumes as needed
Step 6: add new rule
Type: http
Launch the instance
Wait for AWS to finish status checks
Paste Public DNS or Public IP link in browser to access TrueNAS web interface
Using Virtualized TrueNAS with Amazon Web Services (AWS)
These instructions demonstrate how to create a virtualized TrueNAS image on FreeBSD, configure it with Amazon Elastic Compute Cloud (EC2), and access the TrueNAS web interface.
There are a few things that must be prepared before building the image.
The FreeBSD system needs two applications to create, configure, and upload the virtual machine image: bhyve and bsdec2-image-upload.
The most recent version (>=1.3.1) of bsdec2-image-upload is required, otherwise an SSL error occurs when attempting to upload the image.
If not available on the ports tree, the utility can be downloaded from the GitHub repository.
Currently, bsdec2-image-upload fails on images that aren’t 10GB.
An issue has been created, but in the meantime a workaround is to edit main.c and replace:
"BlockDeviceMapping.1.Ebs.VolumeSize=10&"
with
"BlockDeviceMapping.1.Ebs.VolumeSize=16&"
To build, use a FreeBSD system with either libressl-devel or openssl-devel, as well as ca_root_nss, and run make install.
Create an AWS account with an S3 bucket.
Record the region associated with the S3 bucket.
Set the bucket lifetime policy to delete data after 1 day, as bsdec2-image-upload does not delete files from S3 and the files are no longer needed after the AMI is registered.
Create a user with access to EC2 and S3. The user must have these permissions:
When all the prerequisites are ready, download a TrueNAS 11.2 or later .iso file.
Open a shell and go to your local working directory.
Create an empty raw image file with truncate -s 16G {TRUENAS}.img.
Replace {TRUENAS} with an image file name.
This empty image is the installation target for the TrueNAS .iso.
Next, load the virtualization module and create a tap and bridge interface:
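The exact commands are missing from this copy of the article. On FreeBSD they are typically along these lines (the physical interface em0 is an assumption; adjust for your hardware):

```shell
# Load the bhyve kernel module
kldload vmm

# Create a tap interface for the virtual machine
ifconfig tap0 create
sysctl net.link.tap.up_on_open=1

# Bridge the tap interface with the physical NIC
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm tap0 up
```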
Use bhyveload -m 4GB -d truenas.img vm0 to load the image into the hypervisor and create virtual machine vm0 with four gigabytes of memory.
To install TrueNAS into the image, load both the image and TrueNAS .iso file into bhyve: bhyve -c 2 -m 4G -H -A -P -g 0 -s 0,hostbridge -s 1,lpc -s 2,virtio-net,tap0 -s 3,virtio-blk,{TRUENAS}.img -s 31,ahci-cd,{TRUENAS-VERSION}.iso -l com1,stdio vm0.
Replace {TRUENAS} with the name of the image file and {TRUENAS-VERSION} with the TrueNAS .iso file name.
If these commands fail, for instance with an error concerning boot.lua, try the shell script included with the bhyve installation, which combines the two previous commands.
When the TrueNAS installer opens, make sure boot with BIOS is chosen and start the installation.
Power off the device when the installation is done.
Do not load the completed image into bhyve and boot after installation as TrueNAS will create invalid network settings.
If network issues occur, boot the image and create a DHCP interface manually named xn0.
Upload TrueNAS Image to EC2
Now that the image is created and configured, upload it to EC2.
Use bsdec2-image-upload with the image file: bsdec2-image-upload --public {TRUENAS}.img TrueNAS {description} {region} {S3 bucket} KEY.pem.
Replace {TRUENAS} with the image file name, {description} with a unique identifier for the Amazon Machine Image (AMI), {region} with your Amazon region, and {S3 bucket} with your AWS image storage location.
KEY.pem is the IAM user access key that was downloaded earlier.
These elements are required for the upload to start.
bsdec2-image-upload sends the image to the AWS bucket in 10 MiB segments.
The upload can take several hours, depending on connection speeds and other factors.
When the S3 bucket upload completes, the script creates a snapshot, registers the AMI, and copies the AMI to all regions for mirrors.
The upload command can fail for various reasons.
For example, entering a description that already exists.
If this happens, fix the error and rerun the command.
When successful, the upload simply finishes.
Accessing TrueNAS with the AMI
With the Amazon Machine Image (AMI) created and uploaded to AWS, an EC2 instance needs to be activated before the TrueNAS interface is accessible.
Log in to your Amazon Web Services account and click the EC2 Compute service.
Find the Launch instance section, open the Launch instance drop down, and click Launch instance.
The instance launcher follows several steps:
Click My AMIs and select the name that was uploaded by bsdec2-image-upload.
Any instance type will work, but t2.large is recommended for TrueNAS, given its 8 GB memory recommendation.
Skip this step.
Add EBS volumes according to your TrueNAS use case.
At minimum, add a couple of cold HDD volumes for a storage pool.
General purpose SSD volumes can be used as L2ARC or SLOG devices.
Skip this step.
Add a rule with http. This allows you to connect to the TrueNAS web interface.
Review your settings and press Launch.
The running instance is added to the EC2 dashboard or can be seen in the Instances menu.
When the image has fully started, AWS performs two status checks.
The first checks for AWS uptime, and the second verifies the instance is functional.
After both checks pass, paste either the Public IP or Public DNS link in a new browser window to connect to the TrueNAS web interface.
TrueNAS Community AMI
Starting with 12.0-BETA, an AMI is provided for different TrueNAS releases and is available in the Community AMI section.
When using this AMI, log in with the default credentials:
Username: root
Password: abcd1234
To secure the system, change the password after the initial login.
3.2 - Asigra Plugin
Asigra provides a TrueNAS plugin to simplify cloud storage backups with their service.
The Asigra plugin connects TrueNAS to a third-party service and is subject to licensing.
TrueNAS must have a public static IP address for Asigra services to function.
Please read the Asigra Software License Agreement before using this plugin.
Follow the instructions in the Plugins section to install the Asigra Plugin.
To begin using Asigra services after installing the plugin, expand the plugin options and click Register.
A new browser tab opens to register a user with Asigra.
Refer to the Asigra documentation for details about using the Asigra platform:
DS-Operator Management Guide: Using the DS-Operator interface to manage the plugin DS-System service.
Click Management in the plugin options to open the DS-Operator interface.
DS-Client Installation Guide: How to install the DS-Client system.
DS-Client aggregates backup content from endpoints and transmits it to the DS-System service.
DS-Client Management Guide: Managing the DS-Client system after it has been successfully installed at one or more locations.
3.3 - Containers
TrueNAS CORE & Enterprise can both be used as backing storage for container workloads.
The democratic-csi driver (available at https://github.com/democratic-csi/democratic-csi) lets users integrate popular container solutions like Kubernetes, Nomad, Cloud Foundry, or Mesos with TrueNAS storage. The driver is sponsored and officially supported by iXsystems for TrueNAS Enterprise customers.
A CSI (Container Storage Interface) is an interface between container workloads and third-party storage that supports creating and configuring persistent storage external to the orchestrator, its input/output (I/O), and its advanced functionality such as snapshots and cloning.
The democratic-csi focuses on providing storage using iSCSI, NFS, and SMB protocols, and includes several ZFS features like snapshots, cloning, and resizing.
Features
dynamically provision/de-provision storage and share it as appropriate for cluster usage
online resize operations to dynamically expand volumes as needed
snapshot support (using either zfs send/receive or zfs snapshot)
cross-architecture (amd64, armv7, arm64)
Installation
There are 3 steps to integrating a container solution in TrueNAS:
Prepare TrueNAS.
Prepare the nodes (i.e., your Kubernetes cluster nodes).
Deploy your container orchestrator.
Prepare TrueNAS for a Container Solution
We recommend using TrueNAS 12.0-U2.1 or later. The driver typically works with earlier versions too, but they are unsupported. Before you start, log in to TrueNAS, go to Services, and make sure iSCSI, NFS, and SSH are enabled.
Create Pools
Go to Storage > Pools and create the pools to include in your container.
Set up SSH
Now ensure that the user account your container solution uses to SSH into TrueNAS has a supported shell.
Go to Accounts > Users and set the desired user’s Shell to either bash or sh, then click SAVE.
To use a non-root user for the SSH operations, you can create a csi user and then run visudo directly from the console. Make sure the line for the csi user has NOPASSWD added (this can get reset by TrueNAS if you alter the user in the GUI later):
csi ALL=(ALL) NOPASSWD:ALL
With TrueNAS CORE version 12.0+, you can use an apiKey instead of the root password for the HTTP connection.
Set up NFS
Go to Services and click the (pencil) next to NFS to edit its properties.
Make sure Enable NFSv4, NFSv3 ownership model for NFSv4, and Allow non-root mount are checked, then click SAVE.
Set up iSCSI
Go to Sharing > Block Shares (iSCSI).
Use the default settings in the Target Global Configuration tab.
In the Portals tab, click ADD, then create a Description. Set the IP Address to 0.0.0.0 and the Port to 3260, then click SUBMIT.
In the Initiators Groups tab, click ADD. For ease of use, check the Allow ALL Initiators, then click SAVE. You can make restrictions later using the Allowed Initiators (IQN) function.
Kubernetes will create Targets and Extents automatically.
When using the TrueNAS API concurrently, the /etc/ctl.conf file on the server can become invalid. There are sample scripts in the contrib directory to clean things up. For example, copy the script to the server and run it directly: ./ctld-config-watchdog-db.sh | logger -t ctld-config-watchdog-db.sh &. Please read the scripts and set the variables as appropriate for your server.
Ensure you have preemptively created portals, initiator groups, and authorizations.
Make note of the respective IDs (the true ID may not reflect what is visible in the UI).
You can make IDs visible by clicking the Edit link and finding the ID in the browser address bar.
Alternatively, use these commands to retrieve the appropriate IDs:
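The command list is missing from this copy of the article. One approach is to query the TrueNAS middleware directly from the system shell (a sketch, assuming the midclt client is available, as on TrueNAS CORE 12.0+):

```shell
# Print the configured portals, initiator groups, and authorizations,
# including their database IDs
midclt call iscsi.portal.query
midclt call iscsi.initiator.query
midclt call iscsi.auth.query
```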
Openshift is another addon to Kubernetes and generally works fine with the democratic-csi. You will need to set special parameters with helm (support added in chart version 0.6.1):
# required
--set node.rbac.openshift.privileged=true
--set node.driver.localtimeHostPath=false
# unlikely, but may be required in special circumstances
--set controller.rbac.openshift.privileged=true
You can run the kubectl get pods -n democratic-csi -o wide command to make sure all the democratic-csi pods are running.
You can also run the kubectl get sc command to make sure your storage classes are present and set a default class.
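Marking one class as the cluster default can be sketched as follows (the class name freenas-iscsi-csi is a placeholder from a typical democratic-csi deployment):

```shell
# Annotate one storage class as the default for the cluster
kubectl patch storageclass freenas-iscsi-csi \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```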
Nomad is a “simple and flexible workload orchestrator to deploy and manage containers and non-containerized applications across on-prem and clouds at scale.”
The democratic-csi driver works in Nomad with limited functionality and must be deployed as a set of jobs. The controller job runs as a single instance, and the node job runs on every node and manages mounting the volume.
Read the Nomad Support page in the democratic-csi GitHub for detailed setup instructions.
Visit the Nomad Storage Plugins page to learn how Nomad manages dynamic storage plugins.
Mesos
Mesos is an open source cluster manager that abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.
Cloud Foundry
Cloud Foundry is an open source cloud platform as a service (PaaS) on which developers can build, deploy, run and scale applications.
As always, we welcome and encourage contributions from the community!
3.4 - Nextcloud
The Nextcloud plugin installs Nextcloud, a suite of client-server software for creating and using file hosting services.
Plugins Catalog
You must have a data pool available for plugin storage.
You must connect the system to the internet.
Go to Network > Interfaces, edit the intended plugin interface, and set Disable Hardware Offloading.
To see the plugin catalog, go to the Plugins screen.
Going to the Jails or Plugins screen for the first time prompts you to select a location on the system for storing jail-related data.
By default, this location stores all data related to jails and plugins, including downloaded applications, data managed by the jail or plugin, and any jail snapshots.
Disconnecting or deleting the pool that stores jail data can result in permanent data loss!
Make sure you back up any critical data or snapshots that are stored in a jail before changing the storage configuration.
To change the Jails and Plugins storage location, click the settings icon, select a new pool, and click CHOOSE.
If the catalog doesn’t load:
Go to Network > Global Configuration and confirm the Default Gateway and DNS Servers addresses are correct.
Open the Shell and ping an Internet address.
The output confirms the system is connected to the Internet.
By default, TrueNAS shows the iXsystems-supported plugins.
To see community plugins (open source plugins created and maintained by TrueNAS users), open Browse a Collection and select Community.
Installation
Go to Plugins and select Nextcloud, then click INSTALL.
Type a Jail Name and click SAVE.
After Nextcloud installs successfully, you can manage your instance of the plugin.
Click POST INSTALL NOTES to obtain your Nextcloud admin user and Nextcloud admin password information.
Click MANAGE to access the Nextcloud login page within your browser.
Enter the credentials from POST INSTALL NOTES and click Log in to access the Nextcloud Hub.
Go to Plugins and select Nextcloud, then click INSTALL.
Type a Jail Name, then disable the NAT checkbox and enter an available IP in the IPv4 Address field.
Select an IPv4 Netmask (iX recommends 24), then click SAVE.
After Nextcloud installs, you must add your Nextcloud IP to your Nextcloud jail trusted domains.
Go to Jails and expand your Nextcloud jail, then click > SHELL.
Enter ee /usr/local/www/nextcloud/config/config.php to edit your Nextcloud config file.
Scroll to the trusted_domains section and add your Nextcloud IP address as a new line item.
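After the edit, the trusted_domains entry in config.php should look similar to this sketch (the 192.168.1.91 address is a placeholder for whatever IP you assigned to the jail):

```php
'trusted_domains' =>
array (
  0 => 'localhost',
  1 => '192.168.1.91',  // example placeholder; use your Nextcloud jail IP
),
```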
Press CTRL+C, then type exit to save the file and close the editor. Type exit again to leave the jail shell.
Go back to Plugins and expand your Nextcloud instance.
Click POST INSTALL NOTES to obtain your Nextcloud admin user and Nextcloud admin password information. Click MANAGE to access the Nextcloud login page within your browser.
Enter the credentials from POST INSTALL NOTES and click Log in. You are directed to the Nextcloud Hub.
Refer to the Nextcloud documentation for details about using the Nextcloud platform:
An unsupported, open source Cinder driver for TrueNAS is available at https://github.com/iXsystems/cinder.
This is a simple driver that uses several scripts to allow block/iSCSI interactions between OpenStack Cinder and TrueNAS systems.
To review the driver documentation, including minimum hardware requirements, install instructions, and basic usage, see https://github.com/iXsystems/cinder/blob/master/README.md.
3.6 - Veeam
TrueNAS Unified Storage appliances are certified Veeam Ready and can be used to handle demanding backup requirements for file and VM backup.
These certification tests measure the speed and effectiveness of the data storage repository using a testing methodology defined by Veeam for Full Backups, Full Restores, Synthetic Full Backups, and Instant VM Recovery from within the Veeam Backup & Replication environment.
With the ability to seamlessly scale to petabytes of raw capacity, high-performance networking and cache, and all-flash options, TrueNAS appliances are the ideal choice for Veeam Backup & Replication repositories large and small.
These TrueNAS products are certified by Veeam:
This article discusses some of the best practices when deploying TrueNAS with Veeam, specific considerations users must be aware of, and some tips to help with performance.
The focus is on capabilities native to TrueNAS, and users are encouraged to also review relevant Veeam documentation, such as their help center and best practices for more information about using and optimizing Veeam.
What is Needed?
When deploying TrueNAS with Veeam, users should prepare the following:
Veeam Backup & Replication dedicated server - either physical or VM
Windows Server and Microsoft SQL for Veeam
TrueNAS appliance with users pre-configured as determined by the admin
Networking - 1/10/40/100GbE infrastructure and cables
Veeam connected to the Hypervisor or other clients to pull the data to TrueNAS
All appropriate licenses
Backup proxies as defined by Veeam - these can be virtual machines, physical machines, or the backup server itself for low workloads
Keeping TrueNAS up to date provides the latest bug fixes, security updates, and software enhancements for maximum performance and security.
If deploying on a closed network (LAN) without access to the Internet, users can also obtain and apply an update manually.
For assistance, please contact TrueNAS support.
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
Telephone
Monday - Friday, 6:00AM to 6:00PM Pacific Standard Time:
US-only toll-free: 1-855-473-7449 option 2
Local and international: 1-408-943-4100 option 2
After Hours (24x7 Gold Level Support only):
US-only toll-free: 1-855-499-5131
International: 1-408-878-3140 (international calling rates apply)
Sizing Considerations
TrueNAS storage appliances range from entry-level to high-end, and the user's current usage scenario and backup demands must be considered.
While this guide focuses on Veeam, the unified design of TrueNAS allows it to multitask.
If TrueNAS is handling more than backup jobs, other usage needs should be taken into account.
For example, if the storage appliance has one LUN (dataset or zvol) set as a VMware datastore for hosting VMs, and another LUN set to be used for backups, both capacities must be considered.
The first step when estimating required capacity is to understand how much capacity is currently used by existing VMs and by files that users need to back up.
Veeam and the TrueNAS appliance both apply data compression, though different file types and the structure of the data in those files affect the achieved compression levels.
Some tools for capacity estimation are listed at the end of this section, but it is always good to err on the side of caution; provisioning 3x the current storage in use is not unreasonable.
ZFS performs best with utilization below 80%.
Snapshots, full backups, and incremental backups all require more storage than primary storage being used today.
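The rules of thumb above can be combined into a quick sizing sketch. This is only a heuristic using the 3x growth guideline and the 80% utilization guideline stated above, not an exact requirement:

```python
def recommended_pool_tb(current_used_tb, growth_factor=3.0, max_utilization=0.8):
    """Raw pool capacity so that growth_factor times today's data
    still stays under the ZFS utilization guideline."""
    return current_used_tb * growth_factor / max_utilization

print(recommended_pool_tb(10))  # 10 TB in use today -> 37.5 (TB of pool capacity)
```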
Bandwidth is harder to estimate and must take into account backup timeframes, backup sizes, and available network resources.
Typically, backups run during off-hours when IT equipment is under a lighter load.
This timeframe can be set, but if each backup is several terabytes in size, a longer amount of time and greater bandwidth is required.
iXsystems tests its Veeam backups using a 10 GbE mixed network with the datastore storage, hypervisor hosts, and backup repository (the TrueNAS) on the same network.
However, shorter backup windows, heavy network usage, and dozens of VMs being backed up at the same time may require 40 or 100 GbE networking and multiple Veeam Backup Proxies used in tandem.
For example, consider a scenario of backing up 1000 VMs (each 100 GB in size) with a backup window of 8 hours.
This requires around five virtual proxy servers with 8 vCores and 16 GB memory each, and around 3.7 GB/s of throughput.
In such a scenario, iXsystems would recommend a 100 GbE interconnect and TrueNAS appliances with more than 100 hard drives.
However, bandwidth can be greatly reduced if users can accept incremental and staggered backups.
For example, run an incremental backup on all VMs each day, and a full backup on 100 VMs per night, rotating a different 100 VMs each night.
This strategy provides a 5X increase to the maximum number of VMs and reduces costs by 75%.
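The arithmetic behind this example can be sketched as follows. All inputs are the scenario's stated assumptions; real-world throughput also depends on compression, protocol, and processing overhead, which is why the figure cited above is slightly higher:

```python
# Back-of-the-envelope throughput estimate for the scenario above.
vm_count = 1000        # VMs to back up
vm_size_gb = 100       # average VM size in GB
window_hours = 8       # allowed backup window

total_gb = vm_count * vm_size_gb                    # 100,000 GB of VM data
required_gbps = total_gb / (window_hours * 3600)    # sustained GB/s needed
print(f"{required_gbps:.2f} GB/s")                  # about 3.47 GB/s before overhead

# Staggered alternative: full backups of only 100 VMs per night.
staggered_gbps = (100 * vm_size_gb) / (window_hours * 3600)
print(f"{staggered_gbps:.2f} GB/s")                 # about 0.35 GB/s for the fulls
```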
TrueNAS systems are excellent for backup and archiving, but must be sized correctly.
Recommended sizing:
Model | Backup Only? | Number of VMs Backed Up | Network Max | Usable Capacity
TrueNAS X10 | Yes | 6800 | 10 GbE | 340 TB
TrueNAS X20 | Yes | 13600 | 10 GbE | 680 TB
TrueNAS M40 | No | 29400 | 40 GbE | 1.47 PB
TrueNAS M50 | No | 151800 | 100 GbE | 7.59 PB
TrueNAS M60 | No | 303600 | 100 GbE | 15.8 PB
Backup Only? assumes that the storage is being used only as a backup repository.
This can be understood as a recommendation, not a rule.
The number of VMs is based upon conservative throughput estimates with an average VM size set as 100GB and a backup window of 8 hours running full backups.
All other requirements for the number of Veeam Backup Proxies, and networking dependencies also apply.
Number of VMs Backed Up: Numbers are based on max capacity and estimating 100GB per VM and a 2:1 optimal compression ratio.
Compression and Deduplication settings can radically change the estimates, and Veeam allows for fine tuning.
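The VM counts in the table follow directly from the stated assumptions (100 GB per VM, 2:1 optimal compression), as this sketch shows for two of the models:

```python
def vms_backed_up(usable_tb, vm_size_gb=100, compression_ratio=2.0):
    """Estimated VM count = usable capacity x compression ratio / VM size."""
    effective_gb = usable_tb * 1000 * compression_ratio
    return int(effective_gb / vm_size_gb)

print(vms_backed_up(340))   # TrueNAS X10, 340 TB usable  -> 6800
print(vms_backed_up(1470))  # TrueNAS M40, 1.47 PB usable -> 29400
```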
For high-capacity deployments, iXsystems recommends 9+2+1 RAID groups (called "virtual devices" or "vdevs" in ZFS terminology).
This configuration consists of a RAIDZ2 group (similar to RAID 6, with two-drive parity so that two drives can fail without data loss) plus one or two global hot spares added to the pool.
Pools can include several of these groups, so the capacity can be expanded as needed.
For example, 390 TB of usable space with 12 TB drives requires four groups and 48 drives.
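The 48-drive figure works out as follows. This sketch treats each 9+2+1 group as 12 drives (9 data, 2 parity, 1 spare) and ignores ZFS metadata overhead, so it is an approximation rather than a precise capacity calculation:

```python
import math

drive_tb = 12           # capacity per drive
data_drives = 9         # data drives per RAIDZ2 group
drives_per_group = 12   # 9 data + 2 parity + 1 spare, per the 9+2+1 grouping

target_usable_tb = 390
group_capacity_tb = data_drives * drive_tb                 # 108 TB per group
groups = math.ceil(target_usable_tb / group_capacity_tb)   # 4 groups
total_drives = groups * drives_per_group                   # 48 drives
print(groups, total_drives)
```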
Detailed configurations can be discussed with iXsystems sales representatives and engineers.
TrueNAS storage pools can be expanded online to the maximum size supported by a particular TrueNAS system. Storage pools can be expanded one vdev (RAID group) at a time so long as each vdev shares the same type. When deploying an iSCSI share requiring a zvol (LUN), users should consider thin provisioning using the sparse option during setup.
In addition to the above considerations, there are many tools, forums, and other discussion groups to help verify the amount of storage needed for Veeam backup. In many sites, Veeam compression or deduplication is around 1.5x to 2x, but this is more a reference than a rule. Backup types, applications, and the diversity of VMs can all factor into the true amount of storage needed. Capacity must also be considered alongside desired performance, as a smaller quantity of large drives often does not yield the same performance as a larger number of small drives. For rough calculations, additional resources are listed below.
TrueNAS is a robust, unified storage system well-suited for nearly any environment.
For backups, the platform takes advantage of the data integrity offered by ZFS that includes features such as copy-on-write,
snapshots, and checksums that prevent bit-rot.
TrueNAS appliances can also be expanded at any time simply by adding more drives so datasets can grow to keep pace with your data.
Here are additional key features that are offered out-of-the-box at no extra cost to the user:
Self-healing file system: ZFS places data integrity first with data scrubs and checksums to ensure files are saved correctly and preserved.
Native replication to TrueNAS systems: perfect for disaster recovery and compliance.
High-availability (HA) architecture with 99.999% availability: ensures the system is always ready to receive the latest backups.
Triple-parity: RAID groups (vdevs) can be configured with mirror, single-parity (RAIDZ), dual-parity (RAIDZ2), or triple-parity (RAIDZ3) levels, while copy-on-write, checksums, and data scrubbing help protect long-term data integrity.
Certified with VMware® and Citrix® XenServer®: TrueNAS can be both a hypervisor datastore and a backup repository with data on different datasets and even pools.
Just be mindful of the scale of the workloads being run.
Unrivaled scalability in a single dataset: Scale the backup repository from terabytes to petabytes of usable capacity.
No LUN limits, clustering or licenses needed.
Setting Up TrueNAS as a Veeam Repository
Veeam Backup & Replication runs on a Windows operating system, typically Windows Server 2012 or newer, and can connect to a variety of storage systems.
iXsystems recommends using iSCSI with a Veeam scale-out repository architecture.
Users can also use SMB to mount the volume to the backup server directly.
With support for SMB/CIFS, NFS, AFP, iSCSI, and FC, TrueNAS offers many ways to connect to Veeam backup servers.
Performance Tuning for Veeam Backup & Replication
Test environment:
A 2TB datastore must be configured on TrueNAS System 1 utilizing the iSCSI wizard using default values. This is the backup source.
A 2TB datastore must be configured on TrueNAS System 2 utilizing the iSCSI wizard using default values. This is the backup target.
Connect the source datastore to the Hypervisor.
Ensure the NFS ISO datastore is mounted.
Construct a 64-bit Microsoft Windows Server 2019 Standard VM for the Veeam Backup & Replication server.
Install VMware guest additions.
Configure STATIC IP for Windows Server 2019 VM.
Connect storage to the Veeam VM
Install Veeam software on Veeam Backup & Replication Server.
Using a Scale-out Backup Repository, users can link multiple backup repositories (Extents) together to help with performance and load balancing across the various repositories.
In the topology above, the TrueNAS is broken across four LUNs to act as the scale-out extents.
Both the FreeNAS datastore and the TrueNAS backup target used a single 10 GbE link when connecting to the VMware server pool.
Scale-out Backup Repository is only available in Veeam Backup & Replication 9.5 Enterprise and Enterprise Plus editions.
Results
Testing in this configuration, with the backup server and backup proxy running as Windows Server 2019 Standard VMs, yielded excellent results with the TrueNAS R-Series platform.
iXsystems reference numbers can be seen below.
These were achieved with just a single Veeam Backup Server and a Veeam Backup Proxy Server.
For more demanding workloads, results can be scaled by adding more VMs to act as the Veeam Backup Proxy.
Test | Time Limit | TrueNAS Time
Full Backup | 30:00 Minutes | 27:41 Minutes
Full Restore | 25:00 Minutes | 16:48 Minutes
Synthetic Full Backup | 50:00 Minutes | 37:18 Minutes
3.7 - Clustering and Sharing SCALE Volumes with TrueCommand
Requirements and process description for using TrueCommand to cluster and share data from TrueNAS SCALE systems.
One unique capability of TrueNAS SCALE is that it can cluster groups of systems together.
These clusters can then create new volumes within the existing SCALE storage pools.
Data stored in a clustered volume is shared between the clustered systems and can add additional redundancy or performance to the environment.
Currently, data stored in a clustered volume is shareable using Active Directory (AD) and the SMB protocol.
Clustering is considered experimental and should not be used in a production environment or for handling critical data!
Warnings and Restrictions
Clustering is a back-end feature in TrueNAS SCALE. You should only configure clustering using the TrueCommand web interface.
Attempting to configure or manage clustering from within the TrueNAS SCALE UI or Shell can result in cluster failures and permanent data loss.
Using the clustering feature on a SCALE system adds some restrictions to that system:
Any existing non-clustered SMB shares no longer function.
You cannot create new SMB shares separately from the clustering settings.
You cannot add the system to a different cluster.
Removing single systems from one cluster and migrating to another is currently unsupported. Removing a system from a cluster requires deleting the entire cluster.
Requirements
To set up clustering with TrueNAS SCALE, you need:
3-20 TrueNAS SCALE systems (version 22.02.2 or later) on the same network. Each SCALE system must have:
Two network interfaces and subnets.
The primary network interface and subnet are for client access to the SCALE system.
The secondary interface and subnet are only for cluster traffic. This interface must use static IP addresses.
Disks available or Storage pools already created and available for use.
A TrueCommand 2.2 or later environment on the same network as the SCALE systems.
A Microsoft Active Directory environment must be available and connected to the same network as the SCALE systems and TrueCommand environment.
You must configure Reverse DNS to allow the SCALE cluster systems to communicate back and forth with the AD environment.
Setting up the Environment
Configuring the cluster feature is a multi-step process that spans multiple systems.
TrueNAS SCALE Systems
Follow this procedure for each TrueNAS SCALE system that is to be connected to TrueCommand and used in the cluster.
Log in to the SCALE UI and go to the Storage page.
Ensure a storage pool is available for use in the cluster.
If not, click Create Pool and make a new pool using any of the available disks.
Go to the Network page and look at the Interfaces card.
a. Ensure two interfaces are available and note which is the primary interface that allows SCALE web interface access and access between SCALE systems, TrueCommand, and Active Directory environments.
Having two interfaces allows connecting the SCALE systems to Active Directory and using TrueCommand to create and manage the cluster.
b. Ensure the second interface has a static IP address on a different network/subnet that connects all the SCALE systems.
This interface securely handles all the data-sharing traffic between the clustered systems.
TrueNAS automatically adds entries to AD DNS for CTDB public IP addresses. Administrators should add the addresses before joining AD to prevent significant configuration errors.
Go to the Shares page and look at the Windows (SMB) Shares section. Note if there are any critical shares and take steps to ensure that disabling those shares isn’t disruptive.
Repeat this procedure for each SCALE system to be clustered.
Microsoft Active Directory
Verify that the Active Directory (AD) environment to pair with the cluster is available and administratively accessible on the same network as the TrueCommand and TrueNAS SCALE systems.
Log in to the Windows Server system and open the Server Manager.
Click Tools > DNS to open the DNS Manager.
In the left side menu, expand Reverse Lookup Zones and select the Active Directory-Integrated Primary zone to use for the cluster.
In a browser, enter the TrueCommand IP address and create the first user. Log in with these user credentials to see the Dashboard.
Click New System and add the credentials for the first SCALE system. Use the SCALE root account password. When ready, click ADD AND CONTINUE and repeat the process for each SCALE system intended for the cluster.
When complete, each SCALE system has a card on the TrueCommand Dashboard and is actively displaying system statistics.
A good practice is to back up the SCALE system configuration before creating the cluster.
In the TrueCommand Dashboard, click on the name of a connected system to open a detailed view of that system.
Click Config Backups and CREATE BACKUP to store the SCALE configuration file with TrueCommand.
Backups allow users to quickly restore the system configuration to the initial working state if something goes wrong.
Creating the Cluster
When the SCALE, AD, and TrueCommand environments are ready, log in to TrueCommand to cluster the SCALE systems.
Click the Clusters icon in the upper left. Click NEW CLUSTER to see the cluster creation options.
Enter a unique name for the cluster and open the dropdown to select the systems to include in the cluster.
When each SCALE system is listed, open the Network Address dropdown for each system and choose the static IP address from the previously configured subnet dedicated for cluster traffic.
Click NEXT, verify the settings, then click CREATE.
It can take an extended amount of time to create the cluster.
After the initial creation step for the cluster, TrueCommand opens another sidebar to configure the new cluster for AD connectivity and SMB sharing:
Skipping this step is not recommended because there are no opportunities to reset the configuration after it is completed.
To go back and add the AD and SMB connection details requires deleting and remaking the cluster.
For each SCALE system, choose the IP address related to the primary subnet.
This is typically the IP address used to connect the SCALE system to TrueCommand.
Click NEXT.
Enter the Microsoft Active Directory credentials and click NEXT.
Verify the connection details are correct and click SUBMIT.
Creating a cluster has no visible effect on each SCALE web interface.
To verify the cluster is created and active, open the SCALE Shell and enter gluster peer status.
The command returns the list of SCALE IP addresses and current connection status.
Creating Cluster Volumes
In the TrueCommand Clusters screen, find the cluster to use and click CREATE VOLUME.
Enter a unique name for the volume and select a Type.
Volume Types
Replicated - Replicate files across bricks in the volume. You can use replicated volumes in environments where high availability and high reliability are critical.
Distributed Replicated - Distribute files across replicated bricks in the volume. You can use distributed replicated volumes in environments where you need to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments. Requires setting an additional Replica Count.
Dispersed - Dispersed volumes are based on erasure codes, providing space-efficient protection against disk or server failures. It stores an encoded fragment of the original file in each brick so that only a subset of the fragments are needed to recover the original file. When creating the volume, the administrator configures the number of bricks that can be missing without losing access to data. Choosing Dispersed requires setting an additional Redundancy Count.
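The usable capacity of a dispersed volume follows directly from the brick and redundancy counts, as this sketch shows (the brick counts and sizes are hypothetical examples, not recommendations):

```python
def dispersed_usable_gb(bricks, redundancy, brick_size_gb):
    """Usable capacity of a dispersed volume: each file is encoded into
    `bricks` fragments, and any (bricks - redundancy) fragments can
    rebuild it, so only the data fragments count toward capacity."""
    return (bricks - redundancy) * brick_size_gb

# Example: 6 bricks of 100 GB with redundancy 2 survives the loss of
# any 2 bricks while providing 400 GB of usable space from 600 GB raw.
print(dispersed_usable_gb(6, 2, 100))  # -> 400
```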
After configuring the Type, enter a Brick Size based on the available storage from the clustered pools and your storage requirements.
Review the Pools for each SCALE system in the cluster and ensure that the desired pool is used for this cluster volume.
Click NEXT.
Review the settings for the new volume and click CREATE when ready.
New cluster volumes are added to the individual cluster cards on the TrueCommand Clusters screen.
The web interface for the individual SCALE systems does not show any datasets created for cluster volumes.
To verify the volume is created, go to the Shell and enter gluster volume info all.
Sharing the Cluster Volume
Share a cluster volume by going to the TrueCommand Clusters screen, finding the cluster card, and clicking on the desired cluster volume.
Click CREATE SHARE.
Enter a unique Name for the share.
Choose an ACL to apply to the share.
POSIX_OPEN - Template that grants read, write, and execute permissions to all users.
POSIX_RESTRICTED - Template that grants read, write, and execute to owner and group, but not other. The template may optionally include the special-purpose ‘builtin_users’ and ‘builtin_administrators’ groups as well as Domain Users and Domain Admins groups in Active Directory environments.
Setting Readonly prevents users from making any changes to the cluster volume contents.
Click CONFIRM to create the SMB share and make it immediately active.
The SMB share is added to the SCALE Shares > SMB section for each system in the cluster.
Attempting to manage the share from the SCALE UI is not recommended.
Connecting to the Shared Volume
There are many different ways to access an SMB share, but this article demonstrates using the Windows 10 File Explorer.
From a Windows 10 system that is connected to the same network as the clustering environment, open File Explorer.
In the Navigation bar, clear the contents and enter \\ followed by the IP address or host name of one of the clustered SCALE systems. Press Enter.
When prompted, enter user name and password for an Active Directory user account. Be sure to enter the Active Directory system name before the user account name (example: AD01\sampuser).
Browse to the cluster volume folder to view or modify files.
There are several configuration recommendations and troubleshooting tips when using TrueNAS with a VMware hypervisor.
IQN stands for "iSCSI Qualified Name". An IQN is composed of a preamble, a node name, and a unique identifier, following the format iqn.yyyy-mm.naming-authority:unique-name (for example, iqn.2005-10.org.freenas.ctl:target0).
VMware requires using an IQN in their software iSCSI implementation.
A VMware datastore backed by iSCSI-based storage will consist of at least three distinct pieces: the storage host, the switching infrastructure, and the VMware host itself. In order to maximize service availability, each of these elements needs to be able to tolerate some level of failure without significantly disrupting iSCSI traffic.
TrueNAS systems support high availability (HA) through dual controllers running in active/standby mode. A properly configured HA TrueNAS system can offer up to 99.999% ("five nines") system availability. TrueNAS also fully supports asymmetric logical unit access (ALUA) on iSCSI to significantly reduce failover time.
Network switching infrastructure can be made redundant and fault-tolerant through a number of methods, but multipathing is recommended as the best practice for iSCSI networks.
VMware’s official documentation details several ways the virtualization host(s) can be made redundant, so that is not covered here.
For a VMware ESXi host to communicate with an iSCSI capable storage array, the iSCSI protocol must be configured to provide: Discovery, Authentication, and Access Control (DAAC).
Discovery
iSCSI offers two methods of target discovery: dynamic and static. Dynamic discovery lets the storage array respond automatically to the host initiator’s “SendTargets” request. Static discovery requires an administrator to manually add a list of the iSCSI targets to the initiator. Either method of discovery is fine, but dynamic discovery can make the iSCSI setup process easier.
Authentication
iSCSI authentication is handled via the Challenge Handshake Authentication Protocol, or CHAP. CHAP uses a shared secret between targets and initiators to let them validate each other’s authenticity. By default, no CHAP-based authentication is performed by the VMware iSCSI initiator. If you do decide to use CHAP, authentication can either be unidirectional (where only the target authenticates the initiator) or bidirectional (where both the iSCSI initiator and the iSCSI target are required to authenticate to each other prior to transmitting iSCSI data).
VMware iSCSI initiators operating with unidirectional CHAP can be configured in two behavior modes. In “Required” mode, an iSCSI adapter will give precedence to non-CHAP connections, but if the iSCSI target requires it, the connection will use CHAP instead. Required mode is only supported by Software iSCSI and Dependent Hardware iSCSI adapters. Alternatively, initiators can run in “Prohibited” mode, where an iSCSI adapter will give precedence to CHAP connections, but if the iSCSI target does not support CHAP, the initiator can still connect.
Bidirectional CHAP (called “mutual CHAP” in TrueNAS) offers greater security by ensuring that both sides of the iSCSI connection authenticate against each other. Unidirectional CHAP does not let the iSCSI initiator authenticate the target, and running without CHAP obviously disables all authentication. For this reason, bidirectional CHAP is usually recommended but requires additional configuration and comes with greater administrative overhead when troubleshooting iSCSI connections.
Access Control
Access control policies are set up within a storage array to ensure only certain initiators can connect to the target (even if they possess the correct CHAP password). Access control can be performed using the initiator’s name (IQN), its IP address, or its CHAP username.
VMware and TrueNAS iSCSI Setup
The setup of vCenter iSCSI to TrueNAS requires that ESXi hosts be set up as initiators and that TrueNAS storage arrays be set up as targets.
To configure ESXi hosts with vCenter, see the VMware vCenter 6.7 documentation.
To configure TrueNAS Enterprise storage arrays with vCenter, iXsystems has developed a vCenter plugin.
The plugin uses TrueNAS REST APIs to automate LUN creation and assignment.
When a VMFS (iSCSI) datastore is created using the plugin, the TrueNAS systems automatically activate their iSCSI system services.
Hosting VMware Storage with TrueNAS
When using TrueNAS as a VMware datastore:
Make sure guest VMs have the latest version of vmware-tools installed.
VMware provides instructions to install VMware Tools on different guest operating systems.
Increase the VM disk timeouts to better survive long reboots or other delayed disk operations.
Set the timeout to a minimum of 300 seconds.
VMware provides instructions for setting disk timeouts on specific guest operating systems.
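On Linux guests, for example, the timeout can be applied with a udev rule similar to the one open-vm-tools installs (the filename is illustrative, and the exact vendor-string padding can vary by distribution):

```
# /etc/udev/rules.d/99-vmware-scsi-timeout.rules (illustrative filename)
# Set a 300 second SCSI command timeout for VMware virtual disks.
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{vendor}=="VMware  ", ATTRS{model}=="Virtual disk", RUN+="/bin/sh -c 'echo 300 >/sys$DEVPATH/device/timeout'"
```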
NOTE: Reboots or failovers will typically complete much faster than 300 seconds and Disk IO will resume automatically when finished.
VMware Snapshots on TrueNAS
When TrueNAS is used as a VMware datastore, you can coordinate creating and using ZFS and VMware snapshots.
See VMware-Snapshots for details.
vStorage APIs for Array Integration (VAAI) for iSCSI
VMware’s VAAI allows storage tasks such as large data moves to be offloaded from the virtualization hardware to the storage array.
These operations are performed locally on the NAS without transferring bulk data over the network.
VAAI for iSCSI supports these operations:
Atomic Test and Set (ATS): Allows multiple initiators to synchronize LUN access in a fine-grained manner rather than locking the whole LUN and preventing other hosts from accessing the same LUN simultaneously.
Clone Blocks (XCOPY): Copies disk blocks on the NAS.
Copies occur locally rather than over the network.
This operation is similar to Microsoft ODX.
LUN Reporting: Allows a hypervisor to query the NAS to determine whether a LUN is using thin provisioning.
Stun: Pauses virtual machines when a pool runs out of space.
The space issue can be fixed and the virtual machines continue instead of reporting write errors.
Threshold Warning: The system reports a warning when a configurable capacity is reached.
In TrueNAS, this threshold is configured at the storage pool level when using zvols or at the extent level for both file and device based extents.
Typically, the warning is set at the pool level, unless file extents are used, in which case it must be set at the extent level.
Unmap: Informs TrueNAS that the space occupied by deleted files should be freed.
Without unmap, the NAS is unaware of freed space created when the initiator deletes files.
For this feature to work, the initiator must support the unmap command.
Zero Blocks (Write Same): Zeros out disk regions.
When allocating virtual machines with thick provisioning, the zero write is done locally rather than over the network.
This makes virtual machine creation and any other zeroing of disk regions much quicker.
3.8.1 - Deploying a TrueNAS CORE VM in ESXi
This article describes deploying a TrueNAS CORE virtual machine (VM) in a VMware ESXi environment.
ESXi version 6.7 is shown in this article.
Before You Begin
Before starting configuration work in VMware:
Allocate a drive or a few drives in your server cluster for the TrueNAS virtual machine.
The anticipated storage needs for your deployment determines the size and number of drives you need.
Visit the TrueNAS CORE Hardware Guide and take note of the minimum system requirements.
Also note the information in the Memory and Storage Device Sizing sections.
The hardware guide provides guidance on how much memory, how many CPUs, and what drive sizes you need to configure. For example, you need a minimum of 2 CPUs, 8 GB memory, and two drives each with at least 16 GB storage. You can increase memory and drive sizes to improve performance.
Determine your data storage requirements. Consider the number of storage pools and the type of storage you need for your deployment or how you plan to use the TrueNAS.
See Storage Configuration for information on pool layouts.
This article provides guidance on the number of virtual hard drives (vmdks) you want to create when setting up your virtual machine.
For example, if you want a mirror layout you need to add a minimum of three drives: one for the boot drive and two for the mirrored storage.
If you want a mirror with a hot spare, you need a minimum of four drives: one for the boot drive, two for the mirrored storage, and one for the hot spare.
Configure your network per your system requirements. Have the information ready when you configure your TrueNAS global network settings in the web interface.
Deploying TrueNAS in VMware ESXi
Launch your VMware ESXi interface using your login credentials.
Setting Up Storage
Set up the storage needed for the new VM. First click on Storage and then the drive allocated for TrueNAS. Create the datastore directories for the ISO media and the TrueNAS virtual machine.
Select Storage on the navigation panel on the left side of the screen.
Select the drive you allocated for the TrueNAS VM. The example uses esxi07-hhd01. The detailed view for this drive displays.
Click Datastore Browser to open the browser window, and then click Create directory. Enter the name of the directory in the New Directory dialog.
Add two directories. The first directory is for TrueNAS CORE storage; the other is for the TrueNAS CORE .iso file you downloaded (name this directory ISOs).
Choose a name that is easy to identify on a list of virtual machines. The example uses truenas1 as the directory name for the storage needs.
Click Create directory in the New directory dialog to create the directory.
Click Create directory again to open the New directory dialog to create the second new directory.
When finished you should have both directories listed in the Datastore Browser window.
Uploading the TrueNAS ISO
After creating the ISOs directory, upload the TrueNAS CORE iso file to it.
Select the directory created for the iso file and then click Upload.
Creating the Virtual Machine
After setting up the storage needs, create the new virtual machine.
Select Virtual Machines on the navigation panel on the left side of the screen.
Select the storage drive for the TrueNAS VM and then click Create/Register VM. The New virtual machine creation wizard displays.
Use these settings:
On the Select a name and guest OS wizard screen, select Other for Guest OS family and then FreeBSD 12 or later versions (64-bit) on the Guest OS Version dropdown list.
On the Customize settings wizard screen, set CPU to 2, Memory to 8 GB, and Hard disk 1 to 16 GB.
You need a minimum of two drives set to at least 16 GB. To add a drive, click Add hard disk.
You can add more hard drives now or use the Edit option to add them later after saving the new virtual machine.
To create a mirror layout you need at least three hard drives, one for boot and two to create the mirrored storage.
Add as many hard drives as you need to create your desired storage layout. You can add more drives later after you install TrueNAS.
To create the virtual machine for your TrueNAS, from the Virtual Machines screen:
Click Create/Register VM to display the configuration wizard. On the Select creation type screen select Create a new virtual machine and then click Next.
Configure the VM name and guest OS settings. Type the name for the TrueNAS VM. Use the name you gave the new directory. The example uses truenas1.
Select Other from the Guest OS family dropdown list. Select FreeBSD 12 or later versions (64-bit) from the Guest OS version dropdown list, and then click Next.
Select the storage drive you allocated for the TrueNAS VM. The example uses esxi07-hdd01. Click Next.
Enter these settings on the Customize settings screen:
CPU: 2
Memory: 8 GB
Hard disk 1: 16 GB. This first disk is the boot disk.
CD/DVD Drive 1: Select Datastore ISO file from the dropdown list of options.
Add the second required disk. Click Add hard disk and select either New standard hard disk or Existing hard disk to add a second hard drive.
In the New Hard disk row set the disk to 16 GB at a minimum.
If the Location field does not display the drive and directory you created for TrueNAS, click Browse to open the Select directory window and select the directory for your TrueNAS deployment. Click Select to change the location, close the Select directory window, and return to the VM wizard screen.
Change any other disk drive settings you want or need to change for your hard disk drive hardware.
Click Add hard disk again to add more hard drives than the minimum required, or click Next to finish creating the VM. You can use the Edit option later to add more drives to support your TrueNAS deployment.
Each storage layout has different minimum disk requirements.
See Storage Configuration for information on pool layouts.
Review the Ready to Complete screen to verify the settings are correct for your deployment.
Click Finish. The new TrueNAS VM displays in the list of virtual machines.
Reviewing the New TrueNAS VM
To view the VM details screen, click the VM name.
You can now edit your TrueNAS VM to change any setting or add more hard drives to support your deployment, or you can proceed to installing TrueNAS.
Installing TrueNAS CORE
Click Power on and then click Console to display the dropdown list of console options.
When the console opens it displays the TrueNAS 13.0-RELEASE Console Setup screen.
Follow the instructions documented in Console Setup Menu to complete the installation of TrueNAS.
Editing the Virtual Machine
You can edit your VM settings after you complete the initial setup. You can add new hard drives to your VM using the Edit option found on the VM details screen. Click Edit to display the Editing Settings screen.
The Edit Settings screen resets the Memory units to MB, so you must re-enter your 8 GB setting before you save and exit.
After you re-enter 8 GB in the Memory field, you can add more hard drives to your VM.
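Since the Edit Settings screen expects the memory value in MB, the conversion is simply GB × 1024. A quick sanity check (a hypothetical helper, not a TrueNAS or vCenter tool):

```python
def gb_to_mb(gb: int) -> int:
    """Convert a memory size in GB to the MB value ESXi expects."""
    return gb * 1024

print(gb_to_mb(8))   # 8192 MB for the recommended 8 GB
```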
Click Add hard disk and select the option you want to use. For a new drive select New standard hard disk. A New hard disk row displays highlighted in green.
To edit the hard disk details click on the row to expand it and display the drive settings you can configure.
vCenter Server provides a web interface to manage physical and virtual machines.
vCenter uses plugins to integrate server management into the vCenter application.
The iXsystems TrueNAS vCenter Plugin activates management options for TrueNAS hardware attached to vCenter Server.
This enables some management of TrueNAS systems from a single interface.
The current release version of the TrueNAS vCenter Plugin is 3.4.0.
This version is only compatible with VMware vCenter Server version 6.7.0.
Getting and Deploying the Plugin
Currently, the plugin is only available to TrueNAS Enterprise customers.
iXsystems Support staff are available to assist with deploying the TrueNAS vCenter Plugin.
Please contact iXsystems Support to learn more and schedule a time to deploy the plugin.
Customers who purchase iXsystems hardware or want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
Telephone
Monday - Friday, 6:00AM to 6:00PM Pacific Standard Time:
US-only toll-free: 1-855-473-7449 option 2
Local and international: 1-408-943-4100 option 2
After Hours (24x7 Gold Level Support only):
US-only toll-free: 1-855-499-5131
International: 1-408-878-3140 (international calling rates apply)
Using the Plugin
After the plugin is deployed, using it is a simple process of connecting TrueNAS hosts and configuring the various features for your use case.
The interface suspends after several minutes of inactivity and displays a warning that the interface is suspended and must be refreshed.
Connecting TrueNAS Hosts
In a browser, go to your vCenter Server web interface, log in, and click Menu > Global Inventory Lists > Manage TrueNAS > + Add host to add TrueNAS hosts to vCenter.
Fill in the required information.
A hostname or IP address can be used for the TrueNAS system.
For High Availability systems, use the VIP address or hostname to ensure the plugin remains connected in the event of a system failover.
Click Add Host and the TrueNAS hostname or IP address appears in the list of connected systems.
Right-click a list entry to see options to edit the host user credentials or remove that host from vCenter.
Click a hostname to see the system management options.
Clicking a system entry opens the management interface.
System Management
The system management screen shows a summary and options to modify the system.
To modify the TrueNAS system, click Configure.
Each submenu has a row of buttons to add or make changes to any items in the list.
vCenter works in the background when resolving change requests.
Refresh updates the list to see any items that might have finished being created or modified.
Tasks in progress display in the collapsible Recent Tasks area across the bottom of the screen.
Naming objects in the plugin follows a standard convention.
Names can contain spaces, alphanumeric, -, and . characters.
Click Summary to view basic information about this system.
The IP address, installed version of TrueNAS, storage availability, and system service status are shown.
The vCenter plugin can create two different kinds of datastores on a TrueNAS host:
Virtual Machine File System (VMFS) for iSCSI block-level access
Network File System (NFS) for file-level access
List
vCenter has a default limit of eight NFS datastores per ESX host.
See this VMware article about maximum supported volumes for more details.
The list shows Datastores that have been created and are managed by the plugin.
The list does not display other types of shares created and managed through the TrueNAS web interface.
Add Datastore
Click + (Add) to create a new datastore.
Choose an ESXi host for the datastore or an ESXi cluster to spread the reserved space across multiple systems.
Clusters can be used as long as a single member of the cluster supports the datastore features.
Click Next.
Choose the datastore type.
VMFS datastores provide block-level (iSCSI) storage for virtual machines.
NFS datastores provide file-level storage access.
Click Next to view specific options for each datastore type.
Enter a name for the new datastore.
Enter a value and choose a unit for the Datastore Size.
The size must be smaller than the chosen Volume.
The minimum size for a VMFS datastore is 2 GB.
The Data Path IP shows the TrueNAS system’s IP address.
Users can select other connected TrueNAS systems with the drop-down menu.
Select the datastore VMFS Version from the drop-down menu.
Choose between the modern version 6 or the legacy versions 3 and 5.
See the VMware VMFS documentation for detailed comparisons.
Enabling Sparse Volume reserves less than the total available size and metadata storage space, but it can cause writing to fail if the volume has little space remaining.
See zfs(8) for more details.
Select the TrueNAS pool to hold the datastore.
The Volume must be large enough to contain the chosen Datastore Size.
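The two size constraints above (a 2 GB minimum for VMFS, and a size smaller than the backing volume) can be expressed as a simple validation, sketched here as a hypothetical helper:

```python
MIN_VMFS_BYTES = 2 * 1024**3  # 2 GiB minimum for a VMFS datastore

def valid_vmfs_size(datastore_bytes: int, volume_bytes: int) -> bool:
    """True if the requested size meets the minimum and fits in the volume."""
    return MIN_VMFS_BYTES <= datastore_bytes < volume_bytes

print(valid_vmfs_size(4 * 1024**3, 10 * 1024**3))  # True
print(valid_vmfs_size(1 * 1024**3, 10 * 1024**3))  # False: below the 2 GiB minimum
```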
If you have a high availability NAS with a Fibre Channel license and a network configured to form a Fibre Channel fabric with the NAS and ESXi, you will also be able to select a Fibre Channel port for the datastore.
Selecting a Fibre Channel port enables that port with the datastore’s target on the NAS and creates a datastore with a corresponding Fibre Channel HBA on the ESXi.
Enter a Name for the new datastore.
The Data Path IP shows the TrueNAS system’s IP address.
Users can select other TrueNAS systems added to vCenter Server with the drop-down menu.
Select the path to the TrueNAS NFS share from the Mount Share Path drop-down menu.
Click Next.
Review Datastore Configuration
After configuring the VMFS or NFS datastore, vCenter will show a summary of the new datastore.
To begin creating the datastore, review the settings and click Finish.
The interface shows a warning when the datastore contains more than 80% of the available space.
Click Refresh to see the new datastore after creating it.
Extending a Datastore
Users needing additional space can increase the total size of a VMFS datastore.
Highlight a VMFS datastore from the list and click Edit to extend it.
The new size must be larger than the current size and less than the total available capacity.
For best performance, we recommend using less than 80% of the total available size.
Using decimal notation will round down the size to the nearest 1024 bytes (or whatever the volume’s configured default block size is).
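The round-down behavior described above can be modeled as truncating the requested size to the nearest lower multiple of the block size (1024 bytes here, standing in for the volume's configured default block size; the helper is illustrative, not part of the plugin):

```python
def round_down(size_bytes: int, block_size: int = 1024) -> int:
    """Truncate size_bytes to the nearest lower multiple of block_size."""
    return (size_bytes // block_size) * block_size

print(round_down(10_500))  # 10240: truncated to a 1024-byte boundary
```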
Click Extend Datastore.
Datastores reserve some available space for internal use and set the available capacity to slightly less than the chosen amount.
Cloning Datastores
Cloning an NFS or VMFS datastore duplicates that datastore.
Select a datastore from the list and click Clone.
Choose an ESXi host to store the new datastore and click Next.
Enter a name for the clone and click Clone Datastore.
vCenter starts the cloning process and continues the task in the background.
Click Refresh after some time to see the cloned datastore.
An administrator can grant vCenter users specific role-based access to the TrueNAS systems managed by this plugin.
Role Name
User is allowed to:
Discover
Add TrueNAS systems to vCenter
Create Clones
Copy existing datastores
Create Storage
Create new datastores
Modify Storage
Edit existing datastores
Destroy Storage
Delete datastores
Each role gives the user the ability to perform the functions in that role and all of the roles that precede it in the list.
For example, a user with a Create Storage role can create a new datastore and clone existing datastores.
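The cumulative role model described above can be sketched like this (role names taken from the table; the code itself is a hypothetical illustration of the hierarchy, not the plugin's implementation):

```python
# Roles in order; each role implies all roles that precede it in the list.
ROLES = ["Discover", "Create Clones", "Create Storage",
         "Modify Storage", "Destroy Storage"]

def effective_roles(role: str) -> list:
    """Return the given role plus every role it implies."""
    return ROLES[:ROLES.index(role) + 1]

print(effective_roles("Create Storage"))
# ['Discover', 'Create Clones', 'Create Storage']
```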
The vCenter administrator account always has all permissions.
New vCenter users must be created in Menu > Administration > Single Sign On > Users and Groups.
Add a Role to an Existing vCenter User
Click + to open the Add Role Based Access Control window.
Type a user name in the form DOMAIN.NAME\username, where DOMAIN.NAME is the user Domain found in the vCenter Menu > Administration > Single Sign On > Users and Groups page.
Open the Assign Role drop-down menu and choose a role for the user.
Click Add to add the role.
If the entry does not appear in the list immediately, click Refresh.
3.8.2.1 - Release Notes
These are the release notes for the various iterations of the TrueNAS vCenter Plugin.
3.8.2.1.1 - 3.4.0
September 2, 2021
iXsystems is pleased to release version 3.4.0 of the TrueNAS vCenter plugin. You’ll find numerous improvements in the 3.4.0 plugin, including:
Fibre Channel datastore support.
Development script rewritten.
Revamped plugin build and MANIFEST files.
HTTPS that was disabled in 3.3.0 due to MANIFEST errors, has been re-enabled.
Fixed deployment script logging.
Please note that deploying the TrueNAS vCenter Plugin requires TrueNAS host systems with version 11.3 or later installed and vCenter 6.7-U3 or earlier deployed. To install or update to the 3.4.0 TrueNAS vCenter plugin, please contact iXsystems Support.
VMFS rollback fails because this release does not disable the FC port before deleting the target, which causes an exception. Users must complete the rollback themselves by disabling the FC port and then deleting the target. See related ticket [NAS-111676].
vCenter 7.0b has issues rendering the plugin interface.
This is scheduled to be resolved in a future plugin update, but it is recommended for customers to continue using vCenter 6.7-U3 or earlier with this plugin.
The plugin replication feature has been removed due to numerous long-standing issues that could not be resolved for this version of the plugin.
Please continue to create replication tasks using the TrueNAS web interface.
Cloned datastores always use the first listed interface.
To work around this issue, either ensure the original datastore is using the desired interface or create a new datastore instead of making a clone.
3.8.2.1.2 - 3.3.0
November 24, 2020
iXsystems is pleased to release version 3.3.0 of the TrueNAS vCenter plugin!
This is a maintenance release of the plugin, designed to improve functionality and add support for TrueNAS 12.0 host systems. As part of this maintenance release, additional testing resources have been devoted to the plugin and several large-scale improvements have also been identified for future plugin versions.
Please note that deploying the TrueNAS vCenter Plugin requires TrueNAS host systems with version 11.3 or later installed and vCenter 6.7-U3 or earlier deployed. To install or update to the 3.3.0 TrueNAS vCenter plugin, please contact iXsystems Support.
Changelog
Improvement
VCP-78 - Convert to TrueNAS API 2.0 for improved compatibility and reliability with TrueNAS 11.3 and 12.0
Bug
VCP-84 - vCenter plugin does not function with TrueNAS 11.3 unless the legacy API 1.0 endpoint is enabled
Known Issues
vCenter 7.0b has issues rendering the plugin interface (VCP-89). This is scheduled to be resolved in a future plugin update, but it is recommended for customers to continue using vCenter 6.7-U3 or earlier with this plugin.
HTTPS has been disabled for the 3.3.0 release (VCP-105) due to an issue with connector initialization failures and conflicts with the Apache HTTPClient dependency. TrueNAS users must enable HTTP on their TrueNAS system for the 3.3.0 plugin to connect properly. To verify TrueNAS 11.3 or 12.0 can connect, log in to the web interface, go to System > General, and make sure Web Interface HTTP > HTTPS Redirect is unset. This issue is scheduled for resolution in plugin version 4.0.
The plugin replication feature has been removed due to numerous long-standing issues that could not be resolved for this version of the plugin. Please continue to create replication tasks using the TrueNAS web interface.
Cloned datastores always use the first listed interface (VCP-113). To work around this issue, either ensure the original datastore is using the desired interface or create a new datastore instead of making a clone.
Plugin deployment complains about logging system error (VCP-114). This is a cosmetic error based on initial plugin deployments creating an empty log file. There is no impact to installing or using the vCenter Plugin.
3.8.2.1.3 - 3.2.0
March 24, 2020
iXsystems is pleased to release version 3.2.0 of the TrueNAS vCenter plugin! This is the newest release of the plugin, designed to allow managing TrueNAS systems from within VMware vCenter. You’ll find numerous improvements in the 3.2.0 plugin, like iSCSI fixes, communication support, and new vCenter 7.0 support. Here are a few other highlights of this release:
Initial support for vCenter 7.0 [ NAS-102950 ]
Added support for secure communication with TrueNAS (HTTPS) [ NAS-103636 ]
Refresh asynchronously when adding a new datastore [ NAS-100183 ]
iXsystems is pleased to announce the availability of vCenter 3.1.0, a standalone plugin for managing TrueNAS systems within VMware vSphere.
For more information about obtaining, installing, and using the vCenter plugin, or to ask questions regarding VMware integration, contact iXsystems Technical Support. You can contact Support by calling 1-855-GREP-4-iX or emailing support@ixsystems.com.
New Features
[NAS-100574] – Use standalone application for automatic deployment of vCenter plugin
[NAS-100839] – Add VMFS6 support
Improvements
[NAS-100070] – Provide an indication when user times out for inactivity
[NAS-100075] – Add ability to remove user and role from RBAC
[NAS-101357] – Remove duplicate Configure and Update tabs
[NAS-101600] – Add ability to select cluster for VMFS datastore
[NAS-102360] – Store deployment and support logs in log folder
Bug Fixes
[NAS-101355] – Fix issue that prevented re-adding a removed host
[NAS-101356] – Remove spurious “Other Action” from Actions menu
[NAS-101358] – Ensure network interfaces retrieve a bind IP to be used to create Portal
[NAS-101359] – Display High Availability status
[NAS-101791] – Update output text from installer
[NAS-101840] – Improve uninstall and upgrade handling
[NAS-101846] – Fix issue when mounting NFS share in a cluster
[NAS-102312] – Fix issue when creating a VMFS datastore in TrueNAS 11.2-U5
[NAS-102324] – Fix problem with stalls when cloning datastore
[NAS-102365] – Fix problems related to removing roles, users, and permissions
[NAS-102388] – Indicate that the user needs to use the stop then start commands to restart vCenter 6.5
[NAS-102429] – Ensure text box for Hostname is read-only
[NAS-102455] – Warn user in documentation of maximum supported volumes limit