Storage Spaces Leaves Me Empty

I was very intrigued when I found out about Storage Spaces and ReFS being introduced in Windows Server 2012 and Windows 8. But now that I’ve spent some time with them, I’m left disappointed, and I will not be trusting my precious data to either of these features just yet.


Microsoft publicly announced Storage Spaces and ReFS in early Windows 8 blog posts. Storage Spaces was of special interest to the Windows Home Server community in light of Microsoft first dropping support for Drive Extender in Windows Home Server 2011, and then dropping Windows Home Server entirely, replacing it with Windows Server 2012 Essentials. My personal interest was more geared towards expanding my home storage capacity in a cost-effective and energy-efficient way, without tying myself to proprietary hardware solutions.


I archive all my CDs, DVDs, and BD discs, and store the media files on a Synology DS2411+ with 12 x 3TB drives in a RAID6 volume, giving me approximately 27TB of usable storage. That seems like a lot, but I’ve run out of space, and I have a backlog of BD discs that need to be archived. In general I have been very happy with Synology (except for an ongoing problem with “Local UPS was plugged out” errors), and they do offer devices capable of more storage, specifically the RS2212+ with the RX1211 expansion unit, offering up to 22 combined drive bays. But at $2300 plus $1700, this is expensive, capped at 22 drives, and ties me further to Synology. Compare that with $1400 for a Norco DS24-E or $1700 for a SansDigital ES424X6+BS 24-bay 4U storage unit, an inexpensive LSI OEM-branded SAS HBA from eBay (or an LSI SAS 9207-8e if you like the real thing), connected to Windows Server 2012 running Storage Spaces and ReFS, and things look promising.

Arguably I am swapping one proprietary technology for another, but with native Windows support I have many more choices for expansion. One could make the same argument for the use of ZFS on Linux, and if I were a Linux expert, that may have been my choice, but I’m not.


I tested using a SuperMicro SuperWorkstation 7047A-73, with dual Xeon E5-2660 processors and 32GB RAM. The 7047A-73 uses an X9DA7 motherboard, which includes an LSI SAS2308 6Gb/s SAS2 HBA connected to 8 hot-swap drive bays.

For comparison with a hardware RAID solution I also tested using an LSI MegaRAID SAS 9286CV-8e 6Gb/s SAS2 RAID adapter with the CacheCade 2.0 option, and a Norco DS12-E 12-bay SAS2 2U expander.

For drives I used Hitachi Deskstar 7K4000 4TB SATA3 desktop drives and Intel 520 series 480GB SATA3 SSD drives. I did not test with enterprise-class drives; 4TB models are still excessively expensive, which defeats the purpose of cost-effective home storage.


I previously reported that the Windows Server 2012 and Windows 8 install will hang when trying to install on an SSD connected to the SAS2308. As such, I installed Server 2012 Datacenter on an Intel 480GB SSD connected to the onboard SATA3 controller.

Windows automatically installed the drivers for the LSI SAS2308 controller.

I had to manually install the drivers for the C600 chipset RSTe controller, and as reported before, the driver works, but suffers from dyslexia.

The SAS2308 controller firmware was updated to the latest released SuperMicro v13.0.57.0.
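For reference, firmware on SAS2308-based HBAs is flashed with LSI’s sas2flash command-line utility. A minimal sketch, run from PowerShell; the ROM file names are placeholders for whatever SuperMicro ships in its firmware package:

# list all SAS2 controllers and their current firmware versions
sas2flash.exe -listall
# advanced mode (-o): flash the firmware image (-f) and the boot BIOS (-b)
sas2flash.exe -o -f 2308IT13.ROM -b mptsas2.rom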


Since LSI had already released v14.0.0.0 firmware for their own SAS2308-based boards like the SAS 9207-8e, I asked SuperMicro support for their v14 version, and they provided me with an as-yet-unreleased v14.0.0.0 firmware version for test purposes. Doing a binary compare between the LSI version and the SuperMicro version, the differences appear to be limited to descriptive model numbers, and a few one-byte differences that are probably configuration or default parameters. It is possible to cross-flash between some LSI and OEM adapters, but since I had a SuperMicro version of the firmware, this was not necessary.
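The binary compare itself needs nothing more than the built-in fc.exe. A minimal sketch; the firmware file names are placeholders:

# use fc.exe explicitly; in PowerShell the bare "fc" resolves to the Format-Custom alias
fc.exe /b lsi_9207_14000000.fw smc_2308_14000000.fw
# /b compares byte-by-byte and prints each differing offset with both byte values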

SuperMicro publishes a v2.0.58.0 LSI driver that lists Windows 8 support, but LSI has not yet released Windows 8 or Server 2012 drivers for their own SAS2308-based products. I contacted LSI support, and their Windows 8 and Server 2012 drivers are scheduled for release in the P15 November 2012 update.

I tested the SuperMicro v14.0.0.0 firmware with the SuperMicro v2.0.58.0 driver, the SuperMicro v14.0.0.0 firmware with the in-box Windows v2.0.55.84 driver, and the SuperMicro v2.0.58.0 driver with the SuperMicro v13.0.57.0 firmware. Any combination that included the SuperMicro v2.0.58.0 driver or the SuperMicro v14.0.0.0 firmware resulted in the drives or controller not responding. The in-box Windows v2.0.55.84 driver with the released SuperMicro v13.0.57.0 firmware was the only stable combination.
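To double-check which driver version is actually loaded for the controller, a WMI query from PowerShell works. A minimal sketch; the DeviceName filter is my assumption about how the HBA enumerates:

# list driver versions for anything that looks like the LSI HBA
Get-WmiObject Win32_PnPSignedDriver |
    Where-Object { $_.DeviceName -match "LSI|SAS2308" } |
    Select-Object DeviceName, DriverVersion, DriverDate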

Below are some screenshots of the driver versions and errors:

[Screenshots: LSI driver versions 2.0.55.84 and 2.0.58.0]

[Screenshots: event log controller error, I/O retried, and device reset entries, and the failed format dialog]


One of the reasons I am not yet prepared to use Storage Spaces or ReFS is the complete lack of decent documentation, best practice guides, or deployment recommendations. As an example, the only documentation on SSD journal drive configuration is in a TechNet forum post from a Microsoft employee, requiring the use of PowerShell, and even then there is no mention of scaling or size ratio requirements. Yes, the actual PowerShell cmdlet parameters are documented on MSDN, but not their use or meaning.

PowerShell is very powerful and Server 2012 is completely manageable using PowerShell, but an appeal of Windows has always been the management user interface, which is especially important for adoption by SMBs that do not have dedicated IT staff. With Windows Home Server being replaced by Windows Server 2012 Essentials, the lack of storage management via the UI will require regular users to become PowerShell experts; or maybe Microsoft anticipates that configuration UIs will be developed by hardware OEMs deploying Windows Storage Server 2012 or Windows Server 2012 Essentials based systems.

My feeling is that Storage Spaces will be one of those technologies that matures and becomes generally usable a release or two, or a service pack, after its initial release.


I tested disk performance using ATTO Disk Benchmark 2.47 and CrystalDiskMark 3.01c.

I ran each test twice, back to back, and report the average. I realize two runs are not statistically significant, but even with just two runs it took several days to complete the testing in between regular work activities. I opted to publish only the CrystalDiskMark data, as the ATTO Disk Benchmark results varied greatly between runs, while the CrystalDiskMark results were consistent.

Consider the values useful for relative comparison under my test conditions, but not useful for absolute comparison with other systems.


Before we get to the results, a word on the tests.

The JBOD tests were performed using the C600 SATA3 controller.
The Simple, Mirror, Triple, and RAID0 tests were performed using the SAS2308 SAS2 controller.
The Parity, RAID5, RAID6, and CacheCade tests were performed using the SAS 9286CV-8e controller.

The Simple test created a simple storage pool.
The Mirror test created a 2-way mirrored storage pool.
The Triple test created a 3-way mirrored storage pool.
The Parity test created a parity storage pool.
The Journal test created a parity storage pool, with SSD drives used for the journal disks.
The CacheCade test created RAID sets, with SSD drives used for caching.
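As a reference for the Storage Spaces configurations above, the Mirror and Triple layouts differ only in the number of data copies requested when creating the virtual disk. A minimal PowerShell sketch, with the pool and disk names as placeholders:

# 2-way mirror (the Mirror test)
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror2" -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize
# 3-way mirror (the Triple test)
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror3" -ResiliencySettingName Mirror -NumberOfDataCopies 3 -UseMaximumSize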


As I mentioned earlier, there is next to no documentation on how to use Storage Spaces. In order to use SSD drives as journal drives, I followed information provided in a TechNet forum post.

Create the parity storage pool using PowerShell or the GUI. Then associate the SSD drives as journal drives with the pool.
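The pool and parity space creation itself, which the forum post takes as a given, looks roughly like this. A minimal sketch, assuming a single Storage Spaces subsystem and the pool and space names used in the transcript below:

# pool the poolable drives; filter by size if the SSDs are already attached
$disks = Get-PhysicalDisk -CanPool $True
$subsys = Get-StorageSubSystem | Where-Object { $_.FriendlyName -like "*Storage Spaces*" }
New-StoragePool -FriendlyName "Pool" -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks $disks
# carve a parity space out of the full pool capacity
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Pool" -ResiliencySettingName Parity -UseMaximumSize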

Windows PowerShell
Copyright (C) 2012 Microsoft Corporation. All rights reserved.

PS C:\Users\Administrator> Get-PhysicalDisk -CanPool $True

FriendlyName        CanPool    OperationalStatus    HealthStatus    Usage          Size
------------        -------    -----------------    ------------    -----          ----
PhysicalDisk4       True       OK                   Healthy         Auto-Select    447.13 GB
PhysicalDisk5       True       OK                   Healthy         Auto-Select    447.13 GB

PS C:\Users\Administrator> $PDToAdd = Get-PhysicalDisk -CanPool $True
PS C:\Users\Administrator>
PS C:\Users\Administrator> Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $PDToAdd -Usage Journal
PS C:\Users\Administrator>
PS C:\Users\Administrator>
PS C:\Users\Administrator> Get-VirtualDisk

FriendlyName    ResiliencySettingName    OperationalStatus    HealthStatus    IsManualAttach    Size
------------    ---------------------    -----------------    ------------    --------------    ----
Pool            Parity                   OK                   Healthy         False             18.18 TB

PS C:\Users\Administrator> Get-PhysicalDisk

FriendlyName        CanPool    OperationalStatus    HealthStatus    Usage          Size
------------        -------    -----------------    ------------    -----          ----
PhysicalDisk0       False      OK                   Healthy         Auto-Select    3.64 TB
PhysicalDisk1       False      OK                   Healthy         Auto-Select    3.64 TB
PhysicalDisk2       False      OK                   Healthy         Auto-Select    3.64 TB
PhysicalDisk3       False      OK                   Healthy         Auto-Select    3.64 TB
PhysicalDisk4       False      OK                   Healthy         Journal        446.5 GB
PhysicalDisk5       False      OK                   Healthy         Journal        446.5 GB
PhysicalDisk6       False      OK                   Healthy         Auto-Select    3.64 TB
PhysicalDisk7       False      OK                   Healthy         Auto-Select    3.64 TB
PhysicalDisk8       False      OK                   Healthy         Auto-Select    447.13 GB
PhysicalDisk10      False      OK                   Healthy         Auto-Select    14.9 GB

PS C:\Users\Administrator>

I initially added the journal drives after the virtual drive was already created, but then the journal drives would not be used. I had to delete the virtual drive and recreate it, and then the journal drives kicked in. There must be some way to manage this after virtual drives already exist, but again, no documentation.
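For reference, the delete-and-recreate cycle reduces to two cmdlets. A minimal sketch using the names from the transcript above; note that Remove-VirtualDisk destroys the space and any data on it:

# remove the existing parity space (destructive!)
Remove-VirtualDisk -FriendlyName "Pool" -Confirm:$false
# recreate it now that the journal disks are part of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Pool" -ResiliencySettingName Parity -UseMaximumSize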


In order to test Storage Spaces using the SAS 9286CV-8e RAID controller, I had to switch it to JBOD mode using the command-line MegaCli utility.


D:\Install>MegaCli64.exe AdpSetProp EnableJBOD 1 a0

Adapter 0: Set JBOD to Enable success.

Exit Code: 0x00

D:\Install>MegaCli64.exe AdpSetProp EnableJBOD 0 a0

Adapter 0: Set JBOD to Disable success.

Exit Code: 0x00

D:\Install>


The RAID and CacheCade disk sets were created using the LSI MegaRAID Storage Manager GUI utility.
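The RAID sets can also be created from the command line rather than the GUI. A minimal MegaCli sketch for the RAID5 and RAID6 sets; the enclosure:slot IDs are placeholders for my drive positions, and the flags request write back (WB), read ahead (RA), and direct I/O (Direct):

D:\Install>MegaCli64.exe -CfgLdAdd -r5 [8:0,8:1,8:2,8:3,8:4,8:5] WB RA Direct -a0

D:\Install>MegaCli64.exe -CfgLdAdd -r6 [8:0,8:1,8:2,8:3,8:4,8:5] WB RA Direct -a0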


Below is a summary of the throughput results:

[Chart: read/write throughput in KB/s per test configuration]

[Chart: read/write IOPS per test configuration]


Not surprisingly, the SSD drives had very good scores all around for JBOD, Simple, and RAID0. I only had two drives to test with, but I expect more drives to further improve performance.

The Simple, Mirror, and Triple test results speak for themselves: performance halves, and halves again.

The Parity test shows good read performance and bad write performance; the write performance approaches that of a single disk.

The Parity with SSD Journal disks test shows about the same read performance as without journal disks, and write performance double that of a single disk.

The RAID0 and Simple throughput results are close, but the RAID0 write IOPS are double those of the Simple volume.

The RAID5 and RAID6 read performance is close to Parity, but the write performance is almost tenfold that of Parity. It appears that the LSI card writes to all drives in parallel, while Storage Spaces parity writes to one drive only.

The CacheCade read and write throughput is less than without CacheCade, but the IOPS are tenfold higher.

The ReFS performance is about 30% less than the equivalent NTFS performance.
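For reference, creating a ReFS volume for these comparisons is a one-liner once the virtual disk is initialized and partitioned. A minimal sketch, assuming the space is mounted as drive E:

Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "Pool"
# use -FileSystem NTFS for the comparison runs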


Until Storage Spaces is thoroughly documented and its performance improves, I’m sticking with hardware RAID solutions.

7 Comments

  1. tilvar says:

    Thanks for this great analysis. I have just started to play around with Storage Spaces a bit as well and wanted to share my experiences (although my hardware is nowhere near as cool as yours). The setup is for a low-power home server and is as follows:
    Mainboard: PURE Mini E350 (IPC-E350M1) (AMD E-350 @ 1.6 GHz)
    8 GByte DDR3 1066
    System drive: Western Digital WD3200BEKT Scorpio Black 320GB
    Storage Array: 4x Western Digital Red 3000GB, SATA 6Gb/s (WD30EFRX)
    I’m planning to upgrade the system drive to an SSD, but the Scorpio Black is not _that_ shabby and performs well enough for my purposes.
    The WD Reds are set up in one parity Storage Space. There is just one ReFS virtual disk with effectively 8.18 TByte.
    Write performance was very bad at first, with speeds of about 5 MByte/s when copying data from a Seagate Barracuda ST3000DM (3 TByte) using disk-to-disk copy over eSATA and Windows Explorer copy.
    Two things helped a bit:
    1) For all the WD Reds: switch on “Enable write caching on this device” and “Turn off Windows write-cache buffer flushing on the device” (under Device Manager, Disk Drives -> Properties -> Policies).
    2) Using TeraCopy
    Now I’m getting at least around 20 MByte/s, and 40-50 MByte/s sustained rates when copying several large files. Peak rate is well over 60 MByte/s. It’s still a far way to go to the 140 MByte/s for a single-disk write (http://www.storagereview.com/western_digital_red_nas_hard_drive_review_wd30efrx), but we are getting there. CPU usage was about 30%.
    CrystalDiskMark reports 290 MByte/s sequential read and 25 MByte/s sequential write.
    Generally I also think there must be a lot of potential to optimize Storage Spaces. I really hope Microsoft takes this issue seriously. ZFS or Linux’s software RAID 5 seems to show that it’s indeed possible (even without obscure SSD journal disk setups).


  2. Any plans for a revised case study using Storage Spaces in 2012 R2 instead?


  3. I would be curious as well about the 2012 R2 redo, to see how it’s changed.


  4. Greg Rogers says:

    It hasn’t changed, guys. I’m running a 7-disk pool and a 5-disk pool (both with hot spares), and my performance is horrid compared to ZFS or hardware RAID.


  5. I don’t think mine is working as expected. I’m running a parity storage pool (RAID 5) with a 4th SSD dedicated to journaling. While watching Resource Monitor I see plenty of journaling activity; however, when I monitor the journal physical disk using perfmon I see no reads or writes…

