
Setting up ioDrive II duo on PVE

I have recently acquired used 2TB FusionIO ioDrive II Duo drives. The goal is to use them for fast storage that does not wear out as quickly as an NVMe drive (these ioDrives are rated for about 30PB of write endurance).

An image of the two ioDrives on a wooden tabletop

I am setting this up on Proxmox 9.0.11 running Linux 6.14.11-4-pve kernel.

# Driver version

There are currently two relevant driver versions for these devices: version 3 and version 4.

In the compatibility spreadsheet at docs.google.com (mirrored at web.archive.org), find the exact card that you have and see which driver version supports it.

My "2410 GB MLC IO Accelerator Gen2 Duo for ProLiant" cards are supported by version 3, iomemory-vsl, while yours might be supported by version 4, which is available as iomemory-vsl4.

Keep in mind that from here on my commands will only reference v3 (iomemory-vsl); you might have to append the 4 (making it iomemory-vsl4) when referencing the driver name. Commands that may have to be modified are marked with a # (!mind the version) comment.

So for example, when loading the kernel module, instead of running modprobe -s iomemory-vsl you might have to run modprobe -s iomemory-vsl4.

# PVE node setup process

# iommu=pt

Add the IOMMU boot parameter (pve docs). If you are using an AMD CPU, add amd_iommu=on iommu=pt; if you have an Intel CPU, add only the iommu=pt parameter.

If you are using UEFI add these to the end of the line in the /etc/kernel/cmdline file.
If you are using BIOS add them to the end of the GRUB_CMDLINE_LINUX_DEFAULT string in the /etc/default/grub file.
After doing that, run the proxmox-boot-tool refresh command.
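The steps above can be sketched as follows. The sed one-liner is my own shorthand (an assumption, not from the original setup); you can just as well open the file in an editor and append the parameter by hand:

```shell
# UEFI (systemd-boot): /etc/kernel/cmdline is a single line holding all
# kernel parameters, so append to the end of it.
sed -i 's/$/ iommu=pt/' /etc/kernel/cmdline

# BIOS (GRUB): instead extend GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

# Apply the change to the boot entries, then reboot.
proxmox-boot-tool refresh
```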

# Prep

You will need some packages installed on your Proxmox node, and the driver source code downloaded.

apt install pve-headers zip unzip gcc fakeroot build-essential debhelper rsync dkms git lsb-release

mkdir /root/iomemory
cd /root/iomemory

git clone https://github.com/RemixVSL/iomemory-vsl # (!mind the version)

# Building the kernel module

You then build and load the kernel module

cd /root/iomemory/iomemory-vsl # (!mind the version)
make dkms
modprobe -s iomemory-vsl # (!mind the version)
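To have the driver load automatically after a reboot, one option (my addition, not part of the original steps) is to list it in /etc/modules-load.d:

```shell
# Load the ioMemory driver at boot (!mind the version)
echo "iomemory-vsl" > /etc/modules-load.d/iomemory.conf

# Verify the module is currently loaded
lsmod | grep -i iomemory
```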

# Companion tooling

You will also need fio-status and the other utilities: build and install the config package, then download the remaining three packages and install those too.

cd /root/iomemory/iomemory-vsl # (!mind the version)
make dpkg

cd /root/iomemory/
dpkg -i iomemory-vsl-config-*.deb

# download extra utilities
wget https://aljax.us/assets/iodrive-2-duo-with-pve/fio-preinstall_4.3.7.1205-1.0_amd64.deb
wget https://aljax.us/assets/iodrive-2-duo-with-pve/fio-sysvinit_4.3.7.1205-1.0_all.deb
wget https://aljax.us/assets/iodrive-2-duo-with-pve/fio-util_4.3.7.1205-1.0_amd64.deb

# install extra utilities
dpkg -i ./fio-preinstall_4.3.7.1205-1.0_amd64.deb
dpkg -i ./fio-sysvinit_4.3.7.1205-1.0_all.deb
dpkg -i ./fio-util_4.3.7.1205-1.0_amd64.deb

# Done

Now you should be ready to roll. Running fio-status -a should show you a list of installed devices with their properties listed.

root@ml350-g9-01:~# fio-status -a

Found 1 VSL driver package:
3.2.16 build 1731 Driver: loaded

Found 2 ioMemory devices in this system with 1 ioDrive Duo

Adapter: Dual Controller Adapter (driver 3.2.16)
HP 2410GB MLC PCIe ioDrive2 Duo for ProLiant Servers, Product Number:673648-B21, SN:...
ioDrive2 Adapter Controller, PN:674328-001
<...>

fct0 Attached
<...>

fioa State: Online, Type: block device, Device: /dev/fioa
<...>
1205.00 GBytes device size
Format: 294189453 sectors of 4096 bytes

fct1 Attached
<...>

fiob State: Online, Type: block device, Device: /dev/fiob
<...>
1205.00 GBytes device size
Format: 294189453 sectors of 4096 bytes
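As a sanity check, the reported device size matches the sector count (fio-status uses decimal gigabytes):

```shell
# 294189453 sectors x 4096 bytes = 1,204,999,999,488 bytes ~= 1205.00 GB (decimal)
echo $((294189453 * 4096))
# → 1204999999488
```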

# ZFS Pool

What I did was create a "RAID 0" (a striped pool) with ZFS and use it for some ephemeral VMs.

zpool create -f -o ashift=12 fioZFSr0 /dev/fioa /dev/fiob
root@ml350-g9-01:~# cat /etc/pve/storage.cfg 
<...>

zfspool: fioZFSr0
        pool fioZFSr0
        content images,rootdir
        mountpoint /fioZFSr0
        nodes ml350-g9-01
        sparse 0
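Instead of editing /etc/pve/storage.cfg by hand, the same storage entry can be registered with pvesm (a sketch; adjust the node name to match yours):

```shell
# Register the pool as PVE storage (equivalent to the storage.cfg entry above)
pvesm add zfspool fioZFSr0 \
    --pool fioZFSr0 \
    --content images,rootdir \
    --nodes ml350-g9-01 \
    --sparse 0
```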