Storage Spaces Leaves Me Empty

I was very intrigued when I found out about Storage Spaces and ReFS being introduced in Windows Server 2012 and Windows 8. But now that I’ve spent some time with them, I’m left disappointed, and I will not be entrusting my precious data to either of these features just yet.


Microsoft publicly announced Storage Spaces and ReFS in early Windows 8 blog posts. Storage Spaces was of special interest to the Windows Home Server community in light of Microsoft first dropping support for Drive Extender in Windows Home Server 2011, and then completely dropping Windows Home Server, and replacing it with Windows Server 2012 Essentials. My personal interest was more geared towards expanding my home storage capacity in a cost effective and energy efficient way, without tying myself to proprietary hardware solutions.


I archive all my CDs, DVDs, and BDs, and store the media files on a Synology DS2411+ with 12 x 3TB drives in a RAID6 volume, giving me approximately 27TB of usable storage. That seems like a lot, but I’ve run out of space, and I have a backlog of BD discs that need to be archived. In general I have been very happy with Synology (except for an ongoing problem with “Local UPS was plugged out” errors), and they do offer devices capable of more storage, specifically the RS2212+ with the RX1211 expansion unit, offering up to 22 combined drive bays. But at $2300 plus $1700 this is expensive, capped at 22 drives, and ties me further to Synology. Compare that with $1400 for a Norco DS24-E or $1700 for a SansDigital ES424X6+BS 24 bay 4U storage unit, plus an inexpensive LSI OEM branded SAS HBA from eBay (or an LSI SAS 9207-8e if you like the real thing), connected to Windows Server 2012 running Storage Spaces and ReFS, and things look promising.
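As a back-of-the-envelope sanity check, the ~27TB figure falls out of the RAID6 math plus the usual decimal-TB versus binary-TiB discrepancy; a minimal sketch:

```powershell
# RAID6 usable capacity: two drives' worth of parity, the rest holds data.
$driveCount = 12
$driveTB    = 3                                  # marketing (decimal) terabytes
$usableTB   = ($driveCount - 2) * $driveTB       # 30 decimal TB of data capacity

# Drives are sold in decimal TB, but Windows and most NAS UIs report binary TiB.
$usableTiB  = $usableTB * 1e12 / [math]::Pow(1024, 4)
"{0:N1} TiB usable" -f $usableTiB                # roughly 27.3 TiB
```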

Arguably I am swapping one proprietary technology for another, but with native Windows support I have many more choices for expansion. One could make the same argument for the use of ZFS on Linux, and if I were a Linux expert that may have been my choice, but I’m not.


I tested using a SuperMicro SuperWorkstation 7047A-73, with dual Xeon E5-2660 processors and 32GB RAM. The 7047A-73 uses an X9DA7 motherboard, which includes an LSI SAS2308 6Gb/s SAS2 HBA connected to 8 hot-swap drive bays.

For comparison with a hardware RAID solution I also tested using an LSI MegaRAID SAS 9286CV-8e 6Gb/s SAS2 RAID adapter, with the CacheCade 2.0 option, and a Norco DS12-E 12 bay SAS2 2U expander.

For drives I used Hitachi Deskstar 7K4000 4TB SATA3 desktop drives and Intel 520 series 480GB SATA3 SSD drives. I did not test with enterprise class drives; 4TB models are still excessively expensive, which defeats the purpose of cost effective home storage.


I previously reported that the Windows Server 2012 and Windows 8 installers hang when trying to install on an SSD connected to the SAS2308. As such, I installed Server 2012 Datacenter on an Intel 480GB SSD connected to the onboard SATA3 controller.

Windows automatically installed the drivers for the LSI SAS2308 controller.

I had to manually install the drivers for the C600 chipset RSTe controller, and as reported before, the driver works, but suffers from dyslexia.

The SAS2308 controller firmware was updated to the latest released SuperMicro v13.0.57.0.


Since LSI already released v14.0.0.0 firmware for their own SAS2308 based boards like the SAS 9207-8e, I asked SuperMicro support for their v14 version, and they provided me with an as yet unreleased v14.0.0.0 firmware version for test purposes. Doing a binary compare between the LSI version and the SuperMicro version, the differences appear to be limited to descriptive model numbers, and a few one byte differences that are probably configuration or default parameters. It is possible to cross-flash between some LSI and OEM adapters, but since I had a SuperMicro version of the firmware, this was not necessary.
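For the curious, that kind of binary compare can be sketched in PowerShell; the file names below are hypothetical placeholders for the two firmware images:

```powershell
# Read both firmware images and report every byte offset where they differ.
# File names are placeholders; substitute the actual LSI and SuperMicro images.
$a = [IO.File]::ReadAllBytes("lsi_v14.bin")
$b = [IO.File]::ReadAllBytes("smc_v14.bin")

0..([math]::Min($a.Length, $b.Length) - 1) |
    Where-Object { $a[$_] -ne $b[$_] } |
    ForEach-Object { "offset 0x{0:X}: 0x{1:X2} vs 0x{2:X2}" -f $_, $a[$_], $b[$_] }
```

The plain `fc /b file1 file2` command from cmd.exe produces similar output with less typing.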

SuperMicro publishes a v2.0.58.0 LSI driver that lists Windows 8 support, but LSI has not yet released Windows 8 or Server 2012 drivers for their own SAS2308 based products. I contacted LSI support, and their Windows 8 and Server 2012 drivers are scheduled for release in the P15 November 2012 update.

I tested the SuperMicro v14.0.0.0 firmware with the SuperMicro v2.0.58.0 driver, the SuperMicro v14.0.0.0 firmware with the in-box Windows v2.0.55.84 driver, and the SuperMicro v2.0.58.0 driver with the SuperMicro v13.0.57.0 firmware. Any combination that included the SuperMicro v2.0.58.0 driver or the SuperMicro v14.0.0.0 firmware resulted in the drives or controller not responding. The in-box Windows v2.0.55.84 driver with the released SuperMicro v13.0.57.0 firmware was the only stable combination.

Below are some screenshots of the driver versions and errors:




One of the reasons I am not yet prepared to use Storage Spaces or ReFS is the complete lack of decent documentation, best practice guides, or deployment recommendations. As an example, the only documentation on SSD journal drive configuration is a TechNet forum post from a Microsoft employee, requiring the use of PowerShell, and even then there is no mention of scaling or size ratio requirements. Yes, the actual PowerShell cmdlet parameters are documented on MSDN, but not their use or meaning.

PowerShell is very powerful, and Server 2012 is completely manageable using PowerShell, but an appeal of Windows has always been the management user interface, which is especially important for adoption by SMBs that do not have dedicated IT staff. With Windows Home Server being replaced by Windows Server 2012 Essentials, the lack of storage management via the UI will require regular users to become PowerShell experts; or maybe Microsoft anticipates that configuration UIs will be developed by hardware OEMs deploying Windows Storage Server 2012 or Windows Server 2012 Essentials based systems.

My feeling is that Storage Spaces will be one of those technologies that matures and becomes generally usable one or two releases or service packs after the initial release.


I tested disk performance using ATTO Disk Benchmark 2.47, and CrystalDiskMark 3.01c.

I ran each test twice, back to back, and report the average. I realize two runs are not statistically significant, but even at two runs it took several days to complete the testing in between regular work activities. I opted to only publish the CrystalDiskMark data, as the ATTO Disk Benchmark results varied greatly between runs, while the CrystalDiskMark results were consistent.

Consider the values useful for relative comparison under my test conditions, but not useful for absolute comparison with other systems.


Before we get to the results, a word on the tests.

The JBOD tests were performed using the C600 SATA3 controller.
The Simple, Mirror, Triple, and RAID0 tests were performed using the SAS2308 SAS2 controller.
The Parity, RAID5, RAID6, and CacheCade tests were performed using the SAS 9286CV-8e controller.

The Simple test created a simple storage pool.
The Mirror test created a 2-way mirrored storage pool.
The Triple test created a 3-way mirrored storage pool.
The Parity test created a parity storage pool.
The Journal test created a parity storage pool, with SSD drives used for the journal disks.
The CacheCade test created RAID sets, with SSD drives used for caching.
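For reference, the Storage Spaces layouts above map onto the resiliency parameters of New-VirtualDisk. A hedged sketch, assuming a pool named "Pool" already exists (the pool and disk names here are illustrative, not from my test transcripts):

```powershell
# Simple: striping across the pool, no redundancy (the RAID0 analogue).
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Simple" `
    -ResiliencySettingName Simple -UseMaximumSize

# Mirror: 2-way mirror; -NumberOfDataCopies 3 gives the 3-way "Triple" layout.
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize

# Parity: single parity across the pool members, the RAID5 analogue.
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Parity" `
    -ResiliencySettingName Parity -UseMaximumSize
```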


As I mentioned earlier, there is next to no documentation on how to use Storage Spaces. In order to use SSD drives as journal drives, I followed information provided in a TechNet forum post.

Create the parity storage pool using PowerShell or the GUI. Then associate the SSD drives as journal drives with the pool.

Windows PowerShell
Copyright (C) 2012 Microsoft Corporation. All rights reserved.

PS C:\Users\Administrator> Get-PhysicalDisk -CanPool $True

FriendlyName CanPool OperationalStatus HealthStatus Usage Size
------------ ------- ----------------- ------------ ----- ----
PhysicalDisk4 True OK Healthy Auto-Select 447.13 GB
PhysicalDisk5 True OK Healthy Auto-Select 447.13 GB

PS C:\Users\Administrator> $PDToAdd = Get-PhysicalDisk -CanPool $True
PS C:\Users\Administrator>
PS C:\Users\Administrator> Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $PDToAdd -Usage Journal
PS C:\Users\Administrator>
PS C:\Users\Administrator>
PS C:\Users\Administrator> Get-VirtualDisk

FriendlyName ResiliencySettingNa OperationalStatus HealthStatus IsManualAttach Size
------------ ------------------- ----------------- ------------ -------------- ----
Pool Parity OK Healthy False 18.18 TB

PS C:\Users\Administrator> Get-PhysicalDisk

FriendlyName CanPool OperationalStatus HealthStatus Usage Size
------------ ------- ----------------- ------------ ----- ----
PhysicalDisk0 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk1 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk2 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk3 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk4 False OK Healthy Journal 446.5 GB
PhysicalDisk5 False OK Healthy Journal 446.5 GB
PhysicalDisk6 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk7 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk8 False OK Healthy Auto-Select 447.13 GB
PhysicalDisk10 False OK Healthy Auto-Select 14.9 GB

PS C:\Users\Administrator>

I initially added the journal drives after the virtual drive was already created, but then the journal drives were not used. I had to delete the virtual drive and recreate it, and then the journal drives kicked in. There must be some way to manage this after virtual drives already exist, but again, no documentation.
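For what it’s worth, the delete-and-recreate sequence I describe looks roughly like this in PowerShell; treat it as a sketch under my pool naming, not gospel, since the documentation is silent:

```powershell
# Journal disks only seem to be picked up at virtual disk creation time,
# so tear down the existing virtual disk first. This DESTROYS its data.
Remove-VirtualDisk -FriendlyName "Pool" -Confirm:$false

# Add the poolable SSDs to the pool as journal disks.
$ssd = Get-PhysicalDisk -CanPool $True
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $ssd -Usage Journal

# Recreate the parity virtual disk; it should now use the journal disks.
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Pool" `
    -ResiliencySettingName Parity -UseMaximumSize
```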


In order to test Storage Spaces using the SAS 9286CV-8e RAID controller, I had to switch it to JBOD mode using the command-line MegaCli utility.

D:\Install>MegaCli64.exe AdpSetProp EnableJBOD 1 a0

Adapter 0: Set JBOD to Enable success.

Exit Code: 0x00

D:\Install>MegaCli64.exe AdpSetProp EnableJBOD 0 a0

Adapter 0: Set JBOD to Disable success.

Exit Code: 0x00



The RAID and CacheCade disk sets were created using the LSI MegaRAID Storage Manager GUI utility.


Below is a summary of the throughput results:




Not surprisingly, the SSD drives had very good scores all around for JBOD, Simple, and RAID0. I only had two drives to test with, but I expect more drives would further improve performance.

The Simple, Mirror, and Triple test results speak for themselves: performance halves, and halves again.

The Parity test shows good read performance, and bad write performance. The write performance approaches that of a single disk.

The Parity with SSD Journal disks test shows about the same read performance as without journal disks, and write performance double that of a single disk.

The RAID0 and Simple throughput results are close, but the RAID0 write IOPS are double those of the Simple volume.

The RAID5 and RAID6 read performance is close to Parity, but the write performance is almost tenfold that of Parity. It appears that the LSI card writes to all drives in parallel, while Storage Spaces parity writes to one drive only.

The CacheCade read and write throughput is less than without CacheCade, but the IOPS are tenfold higher.

The ReFS performance is about 30% less than the equivalent NTFS performance.



Until Storage Spaces gets thoroughly documented and improves performance, I’m sticking with hardware RAID solutions.


XBMC for Linux on Pivos XIOS DS

Pivos released an XBMC build for Linux, and I tried it out.

The Pivos XIOS DS is a very small (less than 5” x 5” x 1”) HTPC supporting hardware accelerated 1080p video and HD audio playback. The XIOS DS supports XBMC for Android, and XBMC for Linux, with native hardware acceleration. I reviewed the Android port of XBMC in a previous post.

The XIOS DS is available for $115 at Amazon, placing it, price-wise, between the $98 Roku 2 XS and the $178 Boxee Box.


I downloaded the 09/07/12 firmware release and installed it using the system update procedure: extract update.img to a MicroSD card, hold the reset button on the back of the unit, plug in power, and release the reset button when the update screen displays.




XBMC launched immediately on reboot, very similar to the XBMC for Linux OpenELEC experience.





A quick zoom adjustment and the UI fits on the screen without the need to adjust resolution.



Unlike the Android version where I had to use a mouse and keyboard, I could use the included IR remote to perform all operations. And unlike the Android version, where I had to create special guest access SMB shares because NFS was not supported, the Linux version supported NFS shares with no problems.

I did encounter the same problem as with current OpenELEC builds, where some add-ons are reported as broken in the repository, but as with OpenELEC, this did not prevent movie and series media from being correctly identified or played.


I tested a variety of media formats, all in MKV containers, and all played without issue. I did not test DTS, DTS-HD, AC3, and TrueHD passthrough, as this build of XBMC is based on v11 Eden, which does not support HD audio (that is coming in the unreleased v12 Frodo), and I had the box connected directly to a television over HDMI, so all audio was downmixed to two channels.


All in all, the Linux port of XBMC on the XIOS DS worked much better than the Android port, but with the Android port classified as Alpha and the Linux port as Beta, that is to be expected.

The XIOS DS running Linux XBMC is not up to Boxee Box standards yet, but it may be a contender.

Koubachi Wi-Fi Plant Sensor

Last week several tech and gadget news outlets reported that the Koubachi Wi-Fi Plant Sensor has been released and is available for order (Amazon or SmartHome). I remembered reading about this device some time ago, and I decided to try it out.

Our houseplants are under my care, and they are generally happy and healthy. A simple water moisture indicator would not get my attention, and certainly would not be worthy of a place in my collection of semi-useful/useless Wi-Fi enabled devices (of which the Nest Learning Thermostat is the most useful), but the Koubachi promises more than just moisture monitoring:

Thanks to the unique Plant Care Engine (PCE), Koubachi is able to advise you about everything your plant needs: water, fertilizer, humidity, temperature and light! Koubachi not only tells you WHEN to care for your plants, but also gives you specific instructions HOW.


The device is about the size and shape of a golf club driver head:



The installation and configuration process is interesting. There is only one button on the device, no USB plugs for direct configuration. In order to configure the device to connect to your Wi-Fi network, you first place it in Ad-Hoc mode, then directly connect to it using your computer Wi-Fi adapter, then access the device settings using a web browser, and configure the Wi-Fi settings, after which the device will connect to your home network.

Below are screenshots of the configuration process, starting with online account creation:



At this point everything appeared to be set up and working, except the “To my plants” button was not working. On clicking the “Plants” link, I got to a screen where I could add my first plant, but nothing happened when I clicked on or dragged the pot icon. I tried Chrome and Internet Explorer; same thing.

The following day I logged in from the office, and now the plants link worked, and dragging the pot to the canvas let me create my first plant. Maybe the sensor had to check in with the backend before the backend allowed me to add a plant?

After adding the plant to the canvas, you select the type of plant and pot from the online database. Only 538 types seems a bit limited, but my corn plant was easy enough to find.

Below are screenshots of the plant selection process:



After adding the plant, you have to associate the sensor with the plant. But that required that I press the button on the sensor, and since I was in the office, I had to wait until the next day at home to continue the setup.


Once configured, the sensor enters a calibration period that takes about a week. In the meantime it displays semi-interesting information:



The iPhone app shows similar information to the website, and includes push notifications of events. So far nothing exciting has happened, no email notifications, no push notifications; I guess I’ll have to wait until the sensor completes the calibration procedure, or until I sacrifice a plant for the sake of curiosity.

In the meantime I’ll install some more sensors in the other houseplants. I was planning on installing sensors in the patio plants, but this sensor is apparently not quite rainproof; an outdoor sensor has been announced, to be released in October.


Oh, and in case you were concerned about the effect of the Wi-Fi radiation on your plants, there is a FAQ entry for that :)

Does the Wi-Fi radiation affect my plant?
No. According to the current state of knowledge there is neither a positive nor a negative effect of Wi-Fi radiation on plants. Note that the Koubachi Wi-Fi Plant Sensor usually transmits the data only once a day and that the transmission lasts only ca. 5 seconds. Therefore, the radiation exposure is many times smaller compared to other wireless devices.

Dyslexic Intel RSTe Driver

I have encountered one problem after another running Windows 8 and Server 2012 on the dual Xeon E5, Intel C600 chipset based SuperMicro 7047A-T and 7047A-73 SuperWorkstation machines. I will say that this is really not representative of my Windows 8 experience in general, as all the other machines I installed on worked fine with the in-box drivers.

The C602 includes the Intel Storage Controller Unit (SCU) SATA / SAS controller. Windows 8 and Server 2012 do not include in-box drivers for the SCU. The SCU drivers are part of the Intel Rapid Storage Technology Enterprise (RSTe) driver set. Note that the RSTe and RST drivers are different and not compatible with one another. When you install the full RSTe package, it includes SCU drivers for the SCU RAID controller, AHCI drivers for the SATA controller, and the Windows RST management application.

A clean install of Windows 8 will use the in-box drivers for the SATA controller. In the image below you can see the Intel 520 Series 480GB SSD drive show up with the correct model number:


After installing RSTe, the 4TB Hitachi drives attached to the SCU show up, but the model numbers of the drives, including the SSD drive attached to the SATA port, are now messed up:


The drive hardware identifiers are correct, but the friendly name is not:



It appears that the bytes of the text are WORD-swapped, i.e. ABCD becomes BADC.
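To illustrate, here is a little sketch of that swap; since swapping adjacent bytes is its own inverse, the same function also un-mangles a garbled name:

```powershell
# Swap each pair of adjacent characters: "ABCD" -> "BADC".
# Applying it twice restores the original, so it also repairs a mangled name.
function Convert-WordSwappedName([string]$name) {
    $chars = $name.ToCharArray()
    for ($i = 0; $i + 1 -lt $chars.Length; $i += 2) {
        $tmp = $chars[$i]
        $chars[$i] = $chars[$i + 1]
        $chars[$i + 1] = $tmp
    }
    -join $chars
}

Convert-WordSwappedName "ABCD"    # BADC
Convert-WordSwappedName "BADC"    # ABCD
```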

The driver is also not functional: attempting to create a Storage Spaces pool using the Hitachi drives hangs forever, with no drive activity, requiring a hard power cycle:


And lastly, the Intel SSD Toolbox 3.0.3 is not compatible with Windows 8:


The clock is ticking for Windows Server 2012 (4 September, 1 day left) and Windows 8 (26 October, 7 weeks left) general availability; I can only hope compatible drivers, firmware, and utilities are forthcoming.


[Update: 4 September 2012]
SuperMicro posted updated RSTe drivers (package v3.5.0.1101, driver v3.5.0.1096). This driver set resolves the hang during storage space creation, but the drive names are still messed up.