I want to do a series on Storage Spaces, which I use to provide storage on my servers. This series has nothing to do with Home Assistant, except that this setup provides hosting capabilities for the Home Assistant appliance.
In part 1 I will focus on my current setup, and how I ended up here back in 2018.
I never intended to run Storage Spaces. My plan was to use FreeBSD / FreeNAS to build a great storage server with a built-in hypervisor, so I bought all the hardware with FreeNAS in mind. FreeNAS provides ZFS storage, and was one of the few options at that time offering rock-solid ZFS. OpenZFS on Linux later gained traction, and now it's everywhere.
I did the build and installed FreeNAS on my newly acquired AMD EPYC setup. It was a steep learning curve, but I figured out how to set up ZFS storage from the CLI in FreeBSD, and how to manage VMs in bhyve via the CLI. Unfortunately it wasn't stable: the server would suddenly reboot at random, and it was super annoying. I spent so much time trying different settings on different versions of FreeNAS / FreeBSD, but it always rebooted without a crash error (I recorded the console screen via the BMC).
After months of trial and error, I tried installing Proxmox instead, and it was exactly the same. I must have been too much of a first mover with AMD EPYC on Linux and FreeBSD, because when I installed Windows Server 2016, with Hyper-V and Storage Spaces, it was stable with no crashes. Years later I can see many others having had the same issues, but it looks like people have found solutions for them now.
But in the summer of 2018 I couldn't find anything that fixed my issues, so I had to stick with Microsoft.
Not a problem, because I work as an IT pro with Microsoft products all day, and I have used Hyper-V since its release. But I had specced the system to run FreeNAS, so I had no RAID controller in the machine, only an HBA. So I played around with Storage Spaces, and after this long introduction, that is what I will focus on now.
Microsoft Storage Spaces (NOT Storage Spaces Direct (S2D)) is Microsoft's take on software-defined storage on a single machine. The idea was to make it simple and easy to set up and use, which is why it was introduced in Windows 8. Some will say that Storage Spaces is simple and easy to set up, but it really depends on your needs. If you want to run parity to obtain storage efficiency, and you want to keep decent performance, then Storage Spaces is not simple and easy to set up, because the GUI will do it wrong. That is why you will see the internet flooded with threads saying Storage Spaces parity is slow, useless, and so on.
In part 2 I will go into more detail about configuring Storage Spaces parity via PowerShell, and why you need an understanding of things like columns, interleave, and allocation unit size to do it right. Or do you still, in Server 2025?
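To give a taste of what "doing it right" means, here is a minimal sketch of a parity volume where the numbers line up, assuming a pool named "Pool1" already exists. The friendly names and values are placeholders for illustration; part 2 will cover how to actually choose them.

```powershell
# Parity space where the math works out: 5 columns = 4 data + 1 parity,
# so one full stripe of data is 4 x 64 KB = 256 KB. Formatting NTFS with a
# matching 256 KB allocation unit size lets writes land as full stripes
# instead of read-modify-write cycles. (256 KB NTFS clusters require a
# recent Windows version, e.g. Server 2019 or later.)
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ParityVD" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 1 `
    -NumberOfColumns 5 -Interleave 64KB -ProvisioningType Fixed -UseMaximumSize

# Bring the new virtual disk online and format it with the matching AUS.
Get-VirtualDisk -FriendlyName "ParityVD" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 256KB
```

The GUI picks none of these values for you, which is exactly why GUI-created parity volumes end up slow.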
Back in 2018 I played around with different settings and setups, but there were so many problems with going parity. One of them was that my drives are SMR with very slow writes, and if you match that with the very poor write performance Storage Spaces parity offers, it's a recipe for disaster, even before we discuss columns, efficiency, and future capacity increases. So I stuck with mirror, where a two-way mirror writes everything twice and therefore halves usable capacity. That was a huge bummer when you have 16 HDDs and were planning for the capacity of 12 of them, but ended up with the capacity of 8.
I had bought an Intel Optane disk as a SLOG for the ZFS setup, but since I only had one of them, I couldn't use it in a good way in my Storage Spaces setup. Instead I installed Intel Cache Acceleration Software (CAS), and have been using that to accelerate the Storage Spaces volume. It's not perfect by any means, but it's working OK. The biggest issue is the slow rotation of data from the cache drive to the Storage Spaces volume. When running in the background it rotates at something like ~2 MB/s, even with no load on the volume. When you force it to flush all data from the cache to the Storage Spaces volume, it hits ~10 MB/s. I think the CAS software looks at queue depth and throttles the flush rate, but it's very frustrating when you want to reboot your server and have to wait 1 or 2 hours for the cache to flush first. Of course it's far from optimal to use a single drive as a write cache, but it's a calculated risk I am taking because of the way the Optane drive is built. To address these two problems, at least in part, I have a scheduled task that triggers a cache flush every night.
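That task is nothing fancy. A minimal sketch of how to register it, where `C:\Scripts\flush-cache.cmd` is a hypothetical wrapper around whatever flush command your CAS version ships with:

```powershell
# Nightly flush of the Intel CAS write cache, so a reboot never has to wait
# hours for the background rotation to drain first.
# "C:\Scripts\flush-cache.cmd" is a placeholder; point it at the actual
# CAS flush command for your installation.
$action  = New-ScheduledTaskAction -Execute "C:\Scripts\flush-cache.cmd"
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName "Flush CAS cache" -Action $action `
    -Trigger $trigger -User "SYSTEM" -RunLevel Highest
```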
Six years later I am still running this setup, and I still don't like it. It's a pile of compromises that gives bad storage efficiency, poor performance when reads or writes miss the cache, and other issues, such as not being able to identify dead drives without pulling disks out of the server.
There is plenty of room for improvement here, and in Server 2022 Microsoft introduced the Storage Bus Cache function in Storage Spaces. On paper at least, that function should solve some of my problems.
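I have not deployed it yet, but from Microsoft's standalone-server documentation the enablement flow looks roughly like this. A sketch, assuming the fast and slow drives are still unpooled; note that the failover clustering feature is required even on a single node:

```powershell
# Storage Bus Cache on a standalone Server 2022 box: the module binds the
# fast drives (cache) to the slow drives (capacity) before the pool is built.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
# Reboot, then:
Import-Module StorageBusCache
Enable-StorageBusCache
# Check which cache and capacity drives were bound together:
Get-StorageBusBinding
```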
In the next part I will take a deep dive into Storage Spaces in Server 2025, and show you how to create a parity volume with decent write performance.
This server / setup just killed itself via one of its weaknesses. There was a power outage, but I have a UPS. When the UPS drops below a certain battery level, it tells the server to shut down. But the server had to flush the cache to the drives before shutting down, which took some additional minutes, and since the battery in the UPS is 5 years old, the runtime ran out and power was lost while the server was still shutting down.

When I started the server up again, the Hyper-V service couldn't start. Even after uninstalling and reinstalling the Hyper-V role it just wouldn't start, and the only error I could find was related to a WMI entry, but nothing came up when I googled it. I had already planned my next server at that point, but I still needed to buy all the parts and put it together. This took away some of the time I wanted to spend testing Storage Spaces in Server 2025, but I still had time to play around; you can read more about that in part 2.