Synology DS2411+ Performance Review

In my last post I compared the performance of the Synology DS1511+ against the QNAP TS-859 Pro. As I finished writing that post, Synology announced the new Synology DS2411+.
Instead of using a DS1511+ and DX510 extender for 10 disks, the DS2411+ offers 12 disks in a single device. The price difference is also marginal: the DS1511+ is $836, the DX510 is $500, and the DS2411+ is $1700. That is a difference of only $364, well worth it for the extra storage space and for the reliability and stability of having all drives in one enclosure. I ended up returning my DX510 and DS1511+, and got a DS2411+ instead.

To test the DS2411+, I ran the same performance tests, using the same MPIO setup as I described in my previous post. The only slight difference was in the way I configured the iSCSI LUN; the DS1511+ was configured as SHR2, while the DS2411+ was configured as RAID6. Theoretically both are the same when all the disks are the same size, and SHR2 ends up using RAID6 internally.
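
As a quick sanity check on capacity, the RAID6 usable-space arithmetic is simple, and with equal-size disks SHR2 reduces to the same formula. A minimal Python sketch (the function name and drive counts are just illustrative):

    # Usable capacity with RAID6: two drives' worth of space goes to parity.
    def raid6_usable_tb(drive_count: int, drive_size_tb: float) -> float:
        return (drive_count - 2) * drive_size_tb

    print(raid6_usable_tb(12, 3.0))  # 30.0 TB usable for 12 x 3TB in the DS2411+
    print(raid6_usable_tb(5, 3.0))   # 9.0 TB usable for 5 x 3TB in a DS1511+
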
iSCSI LUN configuration:
[Image: DS2411+ iSCSI LUN configuration]

At idle the DS2411+ used 42W, and under load it used 138W. The idle number is close to the advertised 39W, but the load number is quite a bit more than the advertised 105W.

I use Remote Desktop Manager to manage all my devices in one convenient application. RDM supports web portals, Remote Desktop, Hyper-V, and many more remote configuration options, all in a single tabbed UI. What I found is that the Synology DSM has some problems when running in a tabbed IE browser. When I open the log history I get a script error, and whenever I move focus away from and back to the browser window, the DSM desktop windows shift all the way to the left. I assume this is a DSM problem related to absolute and relative referencing. I logged a support case, and I hope they can fix it.
Script error:
[Image: DSM script error]

Test results (MB/s):

Device       ATTO Read  ATTO Write  CDM Read  CDM Write
PM810          267.153     260.839   256.674    251.850
DS2411+        244.032     165.564   149.802    156.673
DS1511+        244.032     126.030   141.213    115.032
TS-859 Pro     136.178      95.152   116.015     91.097

[Chart: test results]
DS2411+:
[Images: ATTO and CDM results, MPIO]
DS1511+:
[Images: ATTO and CDM results, MPIO]

The DS2411+ published performance numbers are slightly better than the DS1511+ numbers, and my testing confirms that. So far I am really impressed with the DS2411+.


Synology DS1511+ vs. QNap TS-859 Pro, iSCSI MPIO Performance

I have been very happy with my QNap TS-859 Pro (Amazon), but I’ve run out of space while archiving my media collection, and I needed to expand the storage capacity. You can read about my experience with the TS-859 Pro here, and my experience archiving my media collection here.
My primary objective with this project is storage capacity expansion, and my secondary objective is improved performance.

My choices for storage capacity expansion included:

  • Replace the 8 x 2TB drives with 8 x 3TB drives, to give me 6TB of extra storage. The volume expansion would be very time consuming, but my network setup can remain unchanged during the expansion.
  • Get a second TS-859 Pro with 8 x 3TB drives, to give me 18TB of extra storage. I would need to add the new device to my network, and somehow rebalance the storage allocation across the two devices, without changing the file sharing paths, probably by using directory mount points.
  • Get a Synology DS1511+ (Amazon) and a DX510 (Amazon) expansion unit with 10 x 3TB drives to replace the QNap, to give me 12TB of extra storage, expandable to 15 x 3TB drives for 36TB of total storage. I would need to copy all data to the new device, then mount the new device in place of the old device.

I opted for the DS1511+ with one DX510 expansion unit; I can always add a second DX510 and expand the volume later if needed.
As far as hard drives go, I’ve been very happy with the Hitachi Ultrastar A7K2000 2TB drives I use in my workstations and the QNap, so I stayed with Hitachi and chose the larger Ultrastar 7K3000 3TB drives for the Synology expansion.

For improving performance I had a few ideas:

  • The TS-859 Pro is a bit older than the DS1511+, and there are newer and more powerful QNap models available, like the TS-859 Pro+ (Amazon) with a faster processor, or the TS-659 Pro II (Amazon) with a faster processor and SATA3 support, so it is not totally fair to compare the TS-859 Pro performance against the newer DS1511+. But the newer QNap models do not support my capacity needs.
  • I use Hyper-V clients and dynamic VHD files located on an iSCSI volume mounted in the host server. I elected this setup because it allowed me great flexibility in creating logical volumes for the VM’s, without actually requiring the space to be allocated. In retrospect this may have been convenient, but it was not performing well in large file transfers between the iSCSI target and the file server Hyper-V client.
    For my new setup I was going to mount the iSCSI volume as a raw disk in the file server Hyper-V client. This still allowed me to easily move the iSCSI volume between hosts, but the performance would be better than fixed size VHD files, and much better than dynamic VHD files.
    Here is a blog post describing some options for using iSCSI and Hyper-V.
  • I used iSCSI thin provisioning, meaning that the logical target has a fixed size, but the physical storage only gets allocated as needed. This is very convenient, but turned out to be slower than instant allocation. The QNap iSCSI implementation is also a file-level iSCSI LUN, meaning that the iSCSI volume is backed by a file on an EXT4 volume.
    For my new setup I was going to use the Synology block-level iSCSI LUN, meaning that the iSCSI volume is directly mapped to a physical storage volume.
  • I use a single LAN port to connect to the iSCSI target, meaning the IO throughput is limited by network bandwidth to 1Gb/s or 125MB/s.
    For my new setup I wanted to use 802.3ad link aggregation or Multi Path IO (MPIO) to extend the network speed to a theoretical 2Gb/s or 250MB/s. My understanding of link aggregation turned out to be totally wrong, and I ended up using MPIO instead.

To create a 2Gb/s network link between the server and storage, I teamed two LAN ports on the Intel server adapter, I created a bond of the two LAN ports on the Synology, and I created two trunks for those connections on the switch. This gave me a theoretical 2Gb/s pipe between the server and the iSCSI target. But my testing showed no improvement in performance over a single 1Gb/s link. After some research I found that the logical link is 2Gb/s, but that the physical network stream going from one MAC address to another MAC address is still limited by the physical transport speed, i.e. 1Gb/s. This means that the link aggregation setup is very well suited to e.g. connect a server to a switch using a trunk, and allow multiple clients access to the server over the switch, each at full speed, but it has no performance benefit when there is a single source and destination, as is the case with iSCSI. Since link aggregation did not improve the iSCSI performance, I used MPIO instead.
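
For reference, here is the back-of-the-envelope arithmetic behind the 125MB/s and 250MB/s figures above, as a small Python sketch; real throughput will be lower once Ethernet, IP, and iSCSI overhead are accounted for:

    # Throughput ceiling: 1Gb/s = 10**9 bits per second = 125 x 10**6 bytes per second.
    def theoretical_mb_per_s(gigabit_links: int) -> float:
        return gigabit_links * 1e9 / 8 / 1e6

    print(theoretical_mb_per_s(1))  # 125.0 -> single 1Gb/s link
    print(theoretical_mb_per_s(2))  # 250.0 -> two links with MPIO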

I set up a test environment where I could compare the performance of different network and device configurations using readily available hardware and test tools. Although my testing produced reasonably accurate relative results, due to the differences in environments, it can’t really be used for absolute performance comparisons.

Disk performance test tools:

Server setup:

Network setup:

  • HP ProCurve V1810 switch, Jumbo Frames enabled, Flow Control enabled.
  • Jumbo Frames enabled on all adapters.
  • CAT6 cables.
  • All network adapters connected to the switch.

QNap setup:

Synology setup:

To test the performance using the disk test tools I mounted the iSCSI targets as drives in the server. I am not going to cover the details of how to configure iSCSI; you can read the Synology and QNap iSCSI documentation, and more specifically the MPIO documentation for Windows, Synology and QNap.
A few notes on setting up iSCSI:

  • The QNap MPIO documentation shows that LAN-1 and LAN-2 are in a trunked configuration. As far as I could tell, the best practices documentation from Microsoft, DELL, Synology, and other SAN vendors says that trunking and MPIO should not be mixed. As such I did not trunk the LAN ports on the QNap.
  • I connected all LAN cables to the switch. I could have used direct connections to eliminate the impact of the switch, but this is not how I will install the setup, and the switch should be sufficiently capable of handling the load without adding any performance degradation.
  • Before trying to enable MPIO on Windows Server, first connect one iSCSI target and map the device, then add the MPIO feature. If you do not have a mapped device, the MPIO iSCSI option will be greyed out.
  • The server’s iSCSI target configuration explicitly bound the source and destination devices based on the adapters’ IP addresses, i.e. server LAN-1 would bind to NAS LAN-1, etc. This ensured that traffic would only be routed to and from the specified adapters.
  • I found that the best MPIO load balance policy was the Least Queue Depth Option.

During my testing I encountered a few problems:

  • The DX510 expansion unit would sometimes not power on when the DS1511+ is powered on, or would sometimes fail to initialize the RAID volume, or would sometimes go offline while powered on. I RMA’d the device, and the replacement unit works fine.
  • During testing of the DS1511+, the write performance would sometimes degrade by 50% and never recover. The only solution was to reboot the device. Upgrading to the latest 3.1-1748 DSM firmware solved this problem.
  • During testing of the DS1511+, when one of the MPIO network links went down, e.g. when I unplugged a cable, ghost iSCSI connections would remain open, and the iSCSI processes would consume 50% of the NAS CPU time. The only solution was to reboot the device. Upgrading to the latest 3.1-1748 DSM firmware solved this problem.
  • I could not get MPIO to work with the DS1511+, yet no errors were reported. It turns out that LAN-1 and LAN-2 must be on different subnets for MPIO to work.
  • Both the QNap and Synology exhibit weird LAN traffic behavior when both LAN-1 and LAN-2 are connected and the server generates traffic directed at LAN-1 only. The NAS resource monitor would show high traffic volumes on LAN-1 and LAN-2, even with no traffic directed at LAN-2. I am uncertain why this happens, maybe a reporting issue, maybe a switching issue, but to avoid it influencing the tests, I disconnected LAN-2 while not testing MPIO.

My test methodology was as follows:

  • Mount either the QNap or Synology iSCSI device, and power off the other device while it is not being tested.
  • Connect the iSCSI target using LAN-1 only and unplug LAN-2, or connect using MPIO with LAN-1 and LAN-2 active.
  • Run all CDM tests with iterations set at 9, and a 4GB file-set size.
  • Run ATTO with the queue depth set to 8, and a 2GB file-set size.
  • As a baseline, I also tested the Samsung PM810 SSD drive using ATTO and CDM.

Test result summary:

Device           ATTO Read  ATTO Write  CDM Read  CDM Write  Total (MB/s)
PM810              267.153     260.839   256.674    251.850     1,036.516
DS1511+ MPIO       244.032     126.030   141.213    115.032       626.307
TS-859 Pro MPIO    136.178      95.152   116.015     91.097       438.442
DS1511+            122.294     120.172    89.258    105.618       437.342
TS-859 Pro         119.370      99.864    76.529     89.752       385.515

[Chart: test result summary]

Detailed results:
PM810:
[Images: ATTO and CDM results]
DS1511+ MPIO:
[Images: ATTO and CDM results]
TS-859 Pro MPIO:
[Images: ATTO and CDM results]
DS1511+:
[Images: ATTO and CDM results]
TS-859 Pro:
[Images: ATTO and CDM results]

Initially, I was a little concerned about the DX510 being in a separate case connected with an eSATA cable to the main DS1511+. Especially after I had to RMA my first DX510 because of what appeared to be connectivity issues. I was also concerned that there would be a performance difference between the 5 drives in the DS1511+ and the 5 drives in the DX510. Testing showed no performance difference between a 5 drive volume and a 10 drive volume, and the only physically noticeable difference was that the drives in the DX510 ran a few degrees hotter compared to the drives in the DS1511+.

As you can see from the results, the DS1511+ with MPIO performs very well, especially the 244MB/s ATTO read performance, which gets close to the theoretical maximum of 250MB/s over a 2Gb/s link.

But technology moves quickly, and as I was compiling my test data for this post, Synology released two new NAS units, the DS3611xs and the DS2411+. The DS2411+ is very appealing; it is equivalent in performance to the DS1511+, but supports 12 drives in the main enclosure.
I may just have to exchange my DS1511+ and DX510 for a DS2411+…

[Update: 25 July 2011]
I returned the DS1511+ and DX510 in exchange for a DS2411+.
Read my performance review here.

Archiving my CD, DVD and BD collection

I am about two thirds done archiving my entire CD, DVD, and BD collection to network storage. I have been ripping on a part time basis for about 5 months, and so far I’ve ripped over 700 discs.

I have considered archiving my media collection for some time, but just never got around to it. Recently our toddler discovered how to open discs and use them as toys, so storing the discs safely quickly became a priority. I’d like to give you some insight into what I’ve learned and what process I follow.

 

After ripping, I store the discs in aluminum storage cases that hold 600 discs in hanging sleeves. There are similar cases with a larger capacity, but the dimensions of the 600-disc case allow for easy handling and storage in my garage. I download or scan the cover images as part of the ripping process, so I had no need to keep the printed covers, and I reluctantly threw them away. I would have kept the covers if I could, but I found no convenient way to store them.

Below is a picture of the storage case:

[Image: 600-disc storage case]

 

All the ripped content is saved on my home server, and the files are accessible over wired Gigabit Ethernet and 802.11n Wireless. My server setup is probably excessive, but it serves a purpose. I run a Windows 2008 R2 Hyper-V Server. In the Hyper-V host I run two W2K8R2 guests, one being a Domain Controller, DHCP server, and DNS Server, and the other being a File Server. The file server storage is provided by 2 x QNAP TS-859 Pro iSCSI targets, each with 8 x 2TB drives in RAID6. This gives the file server about 24TB of usable disk space.

24TB may sound like a lot of storage, but considering that I store my documents, my pictures (most of them RAW), my home movies (most of them HD), and all my ripped media in uncompressed format, I really do need that much storage.

 

I am currently using Boxee Boxes for media playback. The Boxee Box does not have all the features of XBMC, and I sometimes have to hard boot it to get it working again, but it plays most file types, runs the Netflix app, and is reasonably maintenance free.

Although Boxee is derived from XBMC, I really miss some of the XBMC features, specifically the ability to set the type of content in a directory, and to sort by media meta-data. Like XBMC, Boxee expects directories and video files to be named a specific way, and the naming is used to look up the content details. Unlike XBMC, Boxee treats all media sources the same way, so when I add a folder with TV episodes and another folder with movies, Boxee often incorrectly classifies the content, and I have to spend time correcting the meta-data. What makes it worse is that I have to apply the same corrections on each individual Boxee Box; it would have been much more convenient if my Boxee account allowed my different Boxee Boxes to share configurations.

 

Ripping and storing the discs is part of the intake process, but I also need a searchable catalog of the disc information, where the ripped files are stored, and where the physical disc is stored. I use Music Collector and Movie Collector to catalog and record the disc information. Unlike other tools I’ve tested, the Music Collector Connect and Movie Collector Connect online services allow me to access my catalog content anywhere using a web browser. The Connect service does allow you to add content online, theoretically negating the need for the desktop products, but I found the desktop products to be much more effective for intake, and then export the content online.

To catalog a CD I take the following steps: I start the automatic add feature, which computes the disc fingerprint and uses it to look up the disc details online. In most cases the disc is correctly identified, including album, artist, track names, etc. In many cases the front disc cover image is available, but it is rare that both the front and back covers are available. If either cover is not available, I scan my own covers and add them to the record. I found that many of the barcode numbers (UPC) do not match the barcode of my version of the disc; if they do not match, I scan my barcode and update the record. If I made any corrections, or added missing covers, I submit the updated data, so that other users can benefit from my corrections and additions.

To catalog a DVD or BD I take the following steps: I start the automatic add feature, scan the barcode with a barcode scanner, and the barcode is used to look up the disc details online. In most cases the disc is correctly identified, including name, release year, etc. In some cases my discs do not have barcodes; this is especially true for box sets, where the box may have a barcode but the individual movies in the box do not, or where I threw away the part of the box that had the barcode.

Since I buy most of my movies from Amazon, I can use my order history to find the Amazon ASIN number of the item I purchased. I then use IMDB to look up the UPC code associated with the ASIN number. To do this, search for the movie by name in IMDB, click on the “dvd details” dropdown in the “quick links” section, search the page for the ASIN number, and copy the associated UPC code. Alternatively, you can just Google “[ASIN number] UPC”; this is sometimes successful. I don’t know why Amazon, who owns IMDB, does not display UPC codes on the product details page.

If I still do not have a UPC code, I search for the movie by name, look at the results, and pick the movie with the cover matching my disc. In most cases the disc front and back covers are available. If either cover is not available, I scan my own covers and add them to the record. If I made any corrections, or added missing covers, I submit the updated data, so that other users can benefit from my corrections and additions.

Below are screenshots of Music Collector and Music Collector Online:

[Screenshots: Music Collector and Music Collector Online]

Below are screenshots of Movie Collector and Movie Collector Online:

[Screenshots: Movie Collector and Movie Collector Online]

 

In terms of the ripping process, ripping CD’s is really the most problematic and time consuming. Unlike BD’s, which are very resilient, CD’s scratch easily, resulting in read errors. Sometimes I had to re-rip the same disc multiple times, across multiple drives, before all tracks ripped accurately. I want accurate and complete meta-data for the ripped files. Sometimes automatic meta-data detection did not work, and I had to manually find and enter the artist, album, song title, etc. This is especially problematic when there are multiple variants of the same logical disc, such as different pressings or regional track content or track order, and I have to match the online meta-data against my particular version of the disc. BD’s and DVD’s typically have only one movie per disc, whereas each CD has multiple tracks, and the correct meta-data has to be set for the album and each track. So although a CD may physically rip much faster than a BD, it takes a lot more time and manual effort to accurately rip, tag, and catalog a CD.

I use dBpoweramp for ripping CD’s; it has two advantages over other tools I’ve tested: AccurateRip and PerfectMeta.

Unlike data CD’s, audio CD track data cannot be read 100% accurately using a data CD drive. If the CD drive reads a data track and encounters a read failure, it reports the failure to the reading software. If the CD drive reads an audio track and encounters a read failure, it may ignore the error, it may interpolate the data, or it may replace the data with silence, all without telling the reading software that there was an error. As a result the saved file may contain pops, inaccurate data, or silence. In order to rip a CD track accurately, the ripping software needs to read the same track several times, compare the results, and keep re-reading the track until the same result has been obtained a number of times. This makes ripping CD’s accurately a very time consuming process. Even if you do get the same results with every read, you are still not guaranteed that what you read is accurate; you may just have read the same bad data multiple times. You can read more about the technicalities of ripping audio CD’s accurately here.

AccurateRip solves this problem by creating an online database of disc and track fingerprints. A track is read at full speed, the track’s fingerprint is computed and compared against the online database; if the fingerprint matches, the track is known to be good, and there is no need to re-read it. This allows CD’s to be ripped very fast and very accurately.
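
Here is a minimal Python sketch of that general idea; the real AccurateRip checksum and database protocol are different, and read_track and lookup_online below are hypothetical stand-ins for the drive read and the online database query:

    import hashlib

    def fingerprint(data: bytes) -> str:
        # Stand-in for the real AccurateRip checksum algorithm.
        return hashlib.md5(data).hexdigest()

    def rip_track(read_track, lookup_online, required_matches=2, max_reads=10):
        # Rip one track: trust a database match, otherwise re-read until consistent.
        counts = {}
        for _ in range(max_reads):
            data = read_track()                 # one full-speed pass over the track
            fp = fingerprint(data)
            if lookup_online(fp):               # known-good track: no re-reads needed
                return data
            counts[fp] = counts.get(fp, 0) + 1
            if counts[fp] >= required_matches:  # same result obtained several times
                return data
        raise IOError("track never read consistently")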

I use the Free Lossless Audio Codec (FLAC) format for archiving my CD’s. FLAC reduces the file size, but retains the original audio quality. FLAC also supports meta-data, allowing the track artist, album, title, CD cover image, etc. to be stored in the file. Unlike the very common MP3 format, FLAC playback is by default not supported by Windows Media Player (WMP). To make WMP, and Windows, play FLAC, you need to install the Xiph FLAC DirectShow filters, or use Media Player Classic Home Cinema (MPC-HC). A typical audio CD rips to about 400MB in FLAC files.

Just like a CD track can be identified using a fingerprint, an entire CD can also be identified using a fingerprint. When the same CD is manufactured in different batches, or in different factories, it results in different track fingerprints for the same logical CD. The same logical CD may also contain different tracks or track orders when released in different regions, also resulting in different CD fingerprints. CDDB ID is the classic fingerprint, but it has uniqueness problems; the more modern Disc ID algorithm does not suffer from such problems, and creates near-unique fingerprints by just looking at the track layout, i.e. there is no need to read the track data.
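
To illustrate, a layout-only fingerprint can be as simple as hashing the track start offsets. This sketch shows the general shape of the idea, not the actual Disc ID algorithm, which uses a specific encoding of the table of contents:

    import hashlib

    def disc_fingerprint(track_offsets):
        # Fingerprint a CD from its track layout alone (start offsets in CD frames).
        # No audio is read, and two pressings with different layouts produce
        # different fingerprints, even for the "same" logical album.
        toc = ",".join(str(offset) for offset in track_offsets)
        return hashlib.sha1(toc.encode("ascii")).hexdigest()

    print(disc_fingerprint([182, 17022, 34531, 51280]))  # made-up offsets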

CD meta-data providers match CD fingerprints against logical album details. Some of this information is freely available, such as freeDB, Discogs, and MusicBrainz, and some information is commercially available, such as Gracenote, GD3, and AMG. Free providers are typically community driven, while commercial providers may have more accurate data.

PerfectMeta makes the tagging process easy, fast, detailed, and accurate. By integrating with a variety of different meta-data providers, including commercial GD3 and AMG, the track meta-data will automatically be selected based on the most reliable provider, or the most consistent data.

Below are screenshots of dBpoweramp ripping a CD, and reviewing the meta-data:

[Screenshots: dBpoweramp ripping a CD, and reviewing the meta-data]

 

I use MakeMKV for ripping DVD’s and BD’s; it is fast and easy to use, and supports extracting multiple audio, subtitle, and video tracks to a single output file.

MakeMKV creates Matroska Media Container (MKV) format output files. MKV supports multiple media streams and meta-data in the same file. MKV is not a compression format; it is just a container file, and inside the container can be any type of media stream, such as an AVC video stream, a DTS-HD audio stream, a PGS subtitle stream, chapter markers, etc. MKV playback is by default not supported by WMP or Windows Media Center (WMC). One solution is to install codec packs such as the K-Lite Codec Pack, but I prefer to use standalone players such as Boxee, XBMC, or MPC-HC.

MakeMKV does not perform any recompression of the streams found on the DVD or BD; it simply reads them from the source and writes them to the MKV file. This means that the playback quality is unaltered and equivalent to that of the source material. It also means that the MKV file is normally the same size as the original DVD or BD disc, typically 7GB for a DVD and 35GB for a BD.

I hate starting a BD or DVD and having to sit there watching one trailer after the next, especially when the disc prohibits skipping the clip and the kids are getting impatient. I paid good money for the disc, so why am I forced to watch advertising on a disc I own? MakeMKV solves this problem by allowing me to rip only the main movie, and when I start playing the MKV file, I immediately see the main movie start. The downside to ripping only the main movie is that disc extras are not available, and the downside to ripping in general is that BD+ interaction is also not available. Some people prefer to rip a disc to an ISO, and then play the ISO with a software player that still allows menu navigation; I have no such need, and ripping only the main movie satisfies my requirements.

When I make my stream selection I pick the main movie, the main English audio track, the English subtitles, and the English forced subtitles. If a movie contains an HD audio track, such as DTS-HD, TrueHD, or LPCM, I also select the non-HD audio track. I do this in case the playback hardware does not support HD audio, or the player software cannot down-convert the HD audio to a format supported by the playback hardware. On some discs both an HD audio and a non-HD audio track are included, but if not, MakeMKV can automatically extract DTS from DTS-HD and AC3 from TrueHD.

On some discs where there are many subtitle streams of the same language, selection gets very complicated; this is especially true when the disc contains forced subtitles. Forced subtitles are the subtitles that are displayed when there is dialog in a language other than the main audio language, such as when aliens are talking to each other, but when people talk there are no subtitles. On DVD’s the forced subtitles are normally in a separate subtitle stream; on BD’s the subtitle stream includes a forced-bit for specific sentences. MakeMKV can automatically extract forced subtitles as a separate stream from a subtitle stream that contains normal and forced subtitles. When I encounter a disc where I cannot make out which video, audio, or subtitle streams to extract, I use EAC3TO to extract the individual tracks, view, listen, or read them, and then decide which tracks to select in MakeMKV.

Ripping television series on DVD or BD has its own challenges. In order for players like Boxee and XBMC to correctly identify the shows, the files and folders must be properly organized and named. A disc typically contains a few episodes of the series, and some discs contain extras. When you make the track selections you need to include the episodes but exclude the extras. MakeMKV creates a folder for every disc, and names each file according to its track number on that disc. This results in multiple folders, one per disc, with duplicate file names in each folder. In order to re-assemble the series in one folder, you need to rename the episodes from each folder according to the correct season and episode number, such as S01E01.mkv, then move all the files to one folder. What makes this very complicated is when the episode order on disc is different from the aired episode order. The TV scrapers use community television series websites, such as TheTVDB and TVRage, to retrieve show information, and the season and episode numbers must match the aired order, not the disc order. It is a real pain to manually match disc episodes to aired episode numbers, and I don’t know why discs would use a different episode order than the aired order. Once you have your episodes named, such as S01E01.mkv, it is very easy to correctly name the files and folders using an application called TVRename. Point TVRename to your ripped television show folder; it will try to automatically match show names to TheTVDB show names, you can manually search and correct mappings, and it will then automatically rename the show, season, and filenames according to your preference, in a format that Boxee and XBMC recognize.
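
To illustrate the disc-order to aired-order step, here is a small Python sketch; the folder names and the mapping are hypothetical, and building that mapping by hand is exactly the painful part:

    import os

    # Hand-built mapping: (disc folder, title number on disc) -> aired episode code.
    DISC_TO_AIRED = {
        ("disc1", 1): "S01E01",
        ("disc1", 2): "S01E03",  # disc order differs from aired order
        ("disc2", 1): "S01E02",
    }

    def rename_episodes(root, show="MyShow"):
        # Gather MakeMKV per-disc output into one folder with aired-order names.
        for (disc, title), episode in DISC_TO_AIRED.items():
            src = os.path.join(root, disc, "title%02d.mkv" % title)
            dst = os.path.join(root, "%s.%s.mkv" % (show, episode))
            os.rename(src, dst)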

Below are screenshots of MakeMKV with the stream selection screen for a DVD and a BD:

[Screenshots: MakeMKV stream selection for a DVD and a BD]

 

When I started ripping my collection I had no idea it would take this long. If I were to dedicate my time to ripping and ripping only, I would have been done a long time ago, but I typically rip only a few discs per week, in between regular work activities; get to the office, insert disc, start working, swap disc, continue working, swap disc, go to meeting, rip a few discs while having lunch at my desk, rip a few discs during the weekend, repeat. The time it takes to rip a disc is important when you stare at the screen, but less so when you have other things to do.

Over the months I’ve used a variety of BD readers; some worked well for BD’s but were really bad for CD’s, some were fast and some were slow. To illustrate the performance, I selected a BD, a DVD, and a CD, and I ripped them all using the same settings, on the same machine, but using a variety of drive models.

Some drive models incorporate a feature called riplocking, which limits the read speed when reading video discs in order to reduce drive noise. A riplocked drive will read a video BD or DVD much slower than a data BD or DVD, and this results in slow rip times. I used an application called Media Code Speed Edit (MCSE) to remove the riplock restriction on some of the drives.

All drives include Regional Playback Control (RPC) that restricts the media that can be played in that drive by region. There are different regions for DVD and BD discs. RPC-1 drives allow software to enforce the region protection; RPC-2 drives perform the region protection in drive hardware. Most new drives are RPC-2 drives. Drive region protection is not an issue for MakeMKV, and it can rip any region disc on any region drive. RPC-1 versions of firmware are available for many drives at the RPC-1 Database.

I tested the following drives:

Drive              Firmware  Notes
LG BH12LS35        1.00
LG BH12LS35        1.00      Riplock removed
LG UH10LS20        1.00
LG UH10LS20        1.00      Riplock removed
LG UH10LS20        1.00      RPC-1
Plextor PX-B940SA  1.08      Rebranded Pioneer BDR-205
Sony BD-5300S      1.04      Rebranded Lite-On iHBS112
Lite-On iHBS212    5L09
Pioneer BDR-206    1.05

I measured the rip speed in Mbps, as computed by dividing the output file size by the rip time in seconds. The file size is the size of the MKV file for DVD’s and BD’s, and the size of all files in the album folder for CD’s. The rip time is computed by subtracting the file create time from the file modified time. The test methodology is not a standard test, and the results should not be used in absolute comparisons, but are very valid in relative comparisons. For more standard testing and reviews visit CDFreaks.
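
A Python sketch of that computation; the path is a placeholder, and it relies on Windows reporting the file creation time as ctime:

    import os

    def rip_speed_mbps(path):
        # Rip speed: output size over (modified time - create time), in Mbps.
        size_bits = os.path.getsize(path) * 8
        elapsed = os.path.getmtime(path) - os.path.getctime(path)
        return size_bits / 1e6 / elapsed

    print(rip_speed_mbps(r"D:\rips\movie.mkv"))  # placeholder path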

Test results:

[Chart: drive rip speed comparison]
From the results we can see that the Sony BD-5300S (a rebranded Lite-On iHBS112) and the Lite-On iHBS212 are the fastest overall ripping drives, the fastest BD ripping drives, and the fastest DVD ripping drives, but the second slowest CD ripping drives. It is further interesting to note that the stock Lite-On drives were still faster than the riplock-removed LG drives. The Lite-On drives also have the smallest AccurateRip drive correction offsets of all the drives.

 

I still have quite a way to go before all my discs are ripped, but at least I have the process down; rip, swap, repeat.

Data Robotics DroboPro vs. QNAP TS-859 Pro

I previously wrote about my impressions of the DroboPro, and in case I was not clear, I was not impressed.

I recently read the announcement of the new QNAP TS-859 Pro, and from the literature it seemed like a great device, high performance, feature rich, and power saving.
The TS-859 Pro is now available, and I compared it with the DroboPro, and against a regular W2K8R2 file server.

The TS-859 is taller than the DroboPro, the DroboPro is deeper than the TS-859, and the width is about the same.

Before I get to the TS-859, let’s look at the DroboPro information and configuration screens.
The OS is Windows Server 2008 R2 Enterprise, but the steps should be about the same for Vista and Windows 7.
All but the DroboCopy context menu screens are listed below.
DroboPro information and configuration:

The dashboard believes there are no volumes, but Windows sees an unknown 2TB volume:

Creating a new volume:

As the dashboard software starts creating the volume, Windows will detect a new RAW volume being mounted, and ask if it should be formatted.
Just leave that dialog open and let the dashboard finish.
The dashboard will complete saying all is well, when in reality it is not:

The dashboard failed to correctly mount and format the volume.

Right click on the disk, bring it online, format the partition as a GPT simple volume.

The dashboard will pick up the change and show the correct state.

Email notifications are configured from the context menu.
The email notifications are generated by the user session application, so with no user logged in, there are no email notifications.

DroboPro does not provide any diagnostics; even the diagnostic file is encrypted.

Unlike the DroboPro that comes with rudimentary documentation, the TS-859 has getting started instructions printed right on the top of the box, and includes a detailed configuration instruction pamphlet.
The DroboPro also has configuration instructions in the box, printed on the bottom of a piece of cardboard that looks like packaging material, and I only discovered these instructions as I was throwing out the packaging.
I loaded the TS-859 with 8 x Hitachi A7K2000 UltraStar 2TB drives.
On powering on the TS-859, the LCD showed the device booting, then asked me if I wanted to initialize all the drives as RAID6.
You can opt-out of this procedure, or change the RAID configuration, by using the select and enter buttons on the LCD.
I used the default values and the RAID6 initialization started.
The LCD shows the progress, and the process completed in about 15 minutes.
Unlike the DroboPro that requires a USB connection and client side software, the TS-859 is completely web managed.
The LCD will show the LAN IP address, obtained via DHCP; log in using the browser at http://[IP]:8080.
The default username and password are both “admin”.

Although the initial RAID6 initialization took only about 15 minutes, it took around 24 hours for the RAID6 synchronization to complete.
During this time the volume is accessible for storage, the device is just busy and not as responsive.

Unlike the DroboPro that shows no diagnostics, and generates an encrypted diagnostic file, the TS-859 has detailed diagnostics.

Unlike the DroboPro, email alerts are generated from the device and do not require any client software.

SMB / CIFS shares are enabled by default.

iSCSI target creation is very simple using a wizard.

While configuring the TS-859, I ran into a few small problems.
I quickly found the help and information I needed on the QNAP forum.
Unlike the DroboPro forum, the QNAP forum does not require a device serial number and is open to anybody.
The TS-859’s outbound network communication (SMTP, NTP, etc.) defaults to LAN1.
I had LAN1 directly connected for iSCSI and LAN2 connected to the routable network.
NTP time syncs were failing; after switching LAN1 and LAN2, the device could access the internet, and NTP and the front page RSS feed started working.
Make sure to connect LAN1 to a network that can access the internet.
When I first initialized the RAID6 array, drive 8 was accessible and initializing, but didn’t report any SMART information.
I received instructions from the forum on how to use SSH to diagnose the drive, and after replacing the drive, SMART worked fine.
What I really wanted to do was compare performance, and to keep things fair I set up a configuration that had all machines connected at the same time.
This way I could run the tests one by one on the various devices, without needing to change configurations.

The client test machine is a Windows Server 2008 R2, DELL OptiPlex 960, Intel Quad Core Q9650 3GHz, 16GB RAM, Intel 160GB SSD, Hitachi A7K2000 2TB SATA, Intel Pro 1000 ET Dual Port.
The file server is a Windows Server 2008 R2, Intel S5000PSL, Dual Quad Core Xeon E5500, 32GB RAM, Super Talent 250GB SSD, Areca ARC-1680 RAID controller, 10 x Hitachi A7K2000 2TB SATA, RAID6, Intel Pro 1000 ET Dual Port.
The DroboPro has 8 x Hitachi A7K2000 2TB SATA, dual drive redundancy BeyondRAID, firmware 1.1.4.
The TS-859 Pro has 8 x Hitachi A7K2000 2TB SATA, RAID6, firmware 3.2.2b0128.

The client’s built in gigabit network card is connected to the switch.
The server’s built in gigabit network card is connected to the switch.
The TS-859 Pro LAN1 is connected to the switch.
The TS-859 Pro LAN2 is directly connected to the client on one of the Pro 1000 ET ports.
The DroboPro LAN1 is directly connected to the client on one of the Pro 1000 ET ports.

The DroboPro is configured as an iSCSI target hosting a 16TB volume.
The TS-859 Pro is configured as an iSCSI target hosting a 10TB volume.
The difference in size is unintentional, both units support thin provisioning, the DroboPro maximum defaults to the size of all drives combined, and the TS-859 maximum defaults to the effective RAID size.

The client maps the DroboPro iSCSI target as a GPT simple volume.

The client maps the TS-859 Pro iSCSI target as a GPT simple volume.

The first set of tests were done using ATTO Disk Benchmark 2.46.
Intel 160GB SSD:

Hitachi A7K2000 2TB SATA:

DroboPro iSCSI:

TS-859 Pro (1500 MTU) iSCSI:

TS-859 Pro Jumbo Frame (9000 MTU) iSCSI:

Read performance:
Device                  Speed (MB/s)
Intel SSD SATA                   274
Hitachi SATA                     141
TS-859 Pro Jumbo iSCSI           116
TS-859 Pro iSCSI                 113
DroboPro iSCSI                    62

Write performance:
Device                  Speed (MB/s)
Hitachi SATA                     141
Intel SSD SATA                    91
TS-859 Pro Jumbo iSCSI            90
TS-859 Pro iSCSI                  83
DroboPro iSCSI                    65


Summary:



The next set of tests used robocopy to copy a fileset from the local Hitachi SATA drive to the target drive backed by iSCSI.

The fileset consists of a single 24GB Ghost file, 3087 JPG files totaling 17GB, and 25928 files from the Windows XP SP3 Windows folder totaling 5GB.
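
Out of curiosity, a timed robocopy run can also be scripted, as sketched below; the paths are placeholders, and check=False is deliberate because robocopy exit codes below 8 indicate success:

    import subprocess
    import time

    SRC, DST = r"D:\fileset\ghost", r"X:\ghost"  # placeholder paths

    start = time.time()
    # /E copies subdirectories (including empty ones), /NP suppresses per-file progress.
    result = subprocess.run(["robocopy", SRC, DST, "/E", "/NP"], check=False)
    elapsed = time.time() - start
    print("exit code %d, %.1f seconds" % (result.returncode, elapsed))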

DroboPro iSCSI:
Fileset    Run 1 (B/s)  Run 2 (B/s)  Run 3 (B/s)  Average (B/s)
Ghost         67998715     66449606     61345194       65264505
JPG           47376106     34469965     28865504       36903858
XP            33644442     21231487     18780348       24552092
Total        149019263    122151058    108991046      126720456


System load during Ghost file copy to DroboPro:


TS-859 Pro iSCSI:
Fileset    Run 1 (B/s)  Run 2 (B/s)  Run 3 (B/s)  Average (B/s)
Ghost         94824771    103356597    102596286      100259218
JPG           50591459     51817921     55830439       52746606
XP            39133922     38128876     37972580       38411793
Total        184550152    193303394    196399305      191417617


TS-859 Pro Jumbo iSCSI:
Fileset    Run 1 (B/s)  Run 2 (B/s)  Run 3 (B/s)  Average (B/s)
Ghost         91427745    113113714    112684967      105742142
JPG           49525622     51203544     51477482       50735549
XP            31910014     37429864     37699130       35679669
Total        172863381    201747122    201861579      192157361


System load during Ghost file copy to TS-859 Pro Jumbo:

This test uses the same fileset, but copies the files over SMB / CIFS.
Server SMB:
Fileset    Run 1 (B/s)  Run 2 (B/s)  Run 3 (B/s)  Average (B/s)
Ghost        108161169    116949441    115138722      113416444
JPG           53969349     56842239     55586620       55466069
XP            15829769     17550875     19336648       17572430
Total        177960287    191342555    190061990      186454944


TS-859 Pro SMB:
Fileset    Run 1 (B/s)  Run 2 (B/s)  Run 3 (B/s)  Average (B/s)
Ghost         64295886     65486617     63494735       64425746
JPG           52988736     52633239     53177864       52933279
XP            14345937     15703244     15506456       15185212
Total        131630559    133823100    132179055      132544238



Summary:

In terms of absolute performance, the TS-859 Pro with Jumbo Frames is the fastest overall.
For iSCSI, the TS-859 Pro with Jumbo Frames is the fastest.
For SMB, the W2K8R2 server is the fastest.
If we look at the system load graphs we can see that the DroboPro network throughput is frequently stalling, while the TS-859 is consistently smooth.
This phenomenon has been a topic of discussion on the DroboPro forum for some time, and the speculation is that the hardware cannot keep up with the network load.
Further speculation is that because the BeyondRAID technology is filesystem aware, it requires more processing power compared to a traditional block level RAID that is filesystem agnostic.

So let’s summarize:
The TS-859 Pro and the DroboPro are about the same price, around $1500.
The TS-859 Pro is a little louder than the DroboPro (with the DroboPro cover on).
The TS-859 Pro is not as pretty as the DroboPro, arguably.
The TS-859 Pro has ample diagnostics and remote management capabilities, the DroboPro has none.
The TS-859 Pro has loads of features, the DroboPro provides only basic storage.
The TS-859 Pro is easy to setup, the DroboPro requires a USB connection and still fails to correctly configure, requiring manual intervention.
The TS-859 Pro outperforms the DroboPro by 52%.
The TS-859 Pro will stay in my lab, the DroboPro will go 🙂

DroboPro Impressions

In this post I am describing, and partly reviewing, my experience using a DroboPro over iSCSI.

I have been aware of the Drobo storage devices for some time now, but never used one or knew anybody that owned one.

Recently a coworker’s large home RAID system had a controller failure, and after recovering the data, he migrated to a DroboPro using iSCSI.
After he told me how quiet the device is, and how little power it uses, I wanted to try one out myself.
As I always do before purchasing hardware or software, I wanted to visit the community forums to see what owners have to say about their products.
But, to gain access to the Drobo Forums you have to register, and to register you need a valid Drobo device serial number, so there really was no way to know what was being discussed before purchasing a device.
This does seem rather weird, and makes me wonder if they want to hide something; searching online I found several other people that had similar feelings about Drobo’s forum policy, some saying so more politely than others.
Searching for DroboPro reviews online I found mixed results: questions being asked about DroboPro and iSCSI, and several very negative Drobo comments, specifically from unhappy Drobo Share owners.
One particular item of interest was the Drobo Users Community Forum, where the site owner closed the site in response to his dissatisfaction with the device and Data Robotics.
Even with the uncertainty of the device capabilities and stability, I decided to try one out anyway.
When I got the device I was surprised by just how small it is, about the size of a small form factor computer.
I unpacked the device, it comes with USB, FireWire, Ethernet, power cables, a CD and a user guide.
What I found missing was a getting started guide, and I went to the DroboPro support KB site in search of getting started documentation, but found none.
Admittedly, after I already had the device working, and as I was throwing away the packaging, I found the getting started steps printed on a piece of packaging.
I think a simple brochure would have been much more helpful than printing it on a part of the pretty packaging that I discarded as I opened the box.
In order to configure the device you must use a USB connection and the Drobo Dashboard software.
I installed the dashboard, I plugged in the Ethernet cable and USB cables, and powered on.
Nothing, the dashboard software would not see the DroboPro.
Long story short, it turns out that you may have only one cable connected at a time, and since I had Ethernet and USB, the USB did not connect.
Admittedly, the getting started steps on the packaging did say Ethernet OR USB OR FireWire, but I did not take this literally to mean only one may be connected.
I now have the DroboPro running, and the dashboard sees the device.
There are no drives in the DroboPro, and the status light in the first drive slot is red; this means add a drive.
Strangely, even without a drive in the DroboPro, a drive did appear in disk manager; the drive size was reported as a very big negative number, with a 32MB partition of unknown type. Weird.
I inserted the first drive (Hitachi UltraStar A7K2000 2TB), the red light flashed for a bit, then turned green, and the second slot turned red.
I inserted the second drive, it turned green, and I continued inserting the remainder of the 8 drives.
While I was inserting the remainder of the drives, the second slot had some problem, I could hear the drive spinning up and down a few times, and then the slot turned red.
I replaced that drive with another, and the slot went back to green.
I went back to disk manager, and the previous 32MB disk was now gone, and instead there was a 2TB RAW drive, again a drive I did not create.
I opened the Drobo Dashboard volume manager, deleted the 2TB volume that was automatically created, and created a new 16TB NTFS volume.
The Drobo Dashboard automatically partitions and formats the volume for you, the supported file systems, on Windows, are FAT32 and NTFS.
While in the settings I changed the device settings to dual disk redundancy.
After applying this change, the device was busy for a few minutes flashing all drive lights, I assume while it was rearranging bits on the disks.
When you create a volume you must specify the partition file system format type.
My understanding is that the BeyondRAID technology used by Drobo requires understanding of the file system format; this is how they can dynamically move files around and dynamically adjust the volume size, something that is not possible with traditional block level RAID.

Although the logical volume is reported as 16TB in size, the actual available storage using 8 x 2TB drives is about 11TB.

The logical volume size reported by the DroboPro to the OS is unrelated to the physical available storage size.
The Drobo documentation says one should create a volume as large as the maximum size you may ever need, and then simply add drives to back that storage as you need the space.
I tested this by creating 2 additional 16TB volumes, three times the physical storage capacity, and the drives showed up fine.
The one caveat is that if you ever format the partition, you must use quick format; regular format will fail.
While on the topic of sizes, the Drobo mixes SI and IEC prefixes: it says TB and GB, but really means TiB and GiB.
I even found a post about this on their forum, and the moderator response was that “most people don’t know the difference”; with this type of indifference the confusion will never be properly addressed.
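
The gap is easy to quantify; a quick Python sketch of the decimal-vs-binary arithmetic, which also explains why 8 x 2TB drives with dual drive redundancy show up as roughly 11TB:

    def tb_to_tib(tb: float) -> float:
        # SI terabytes (10**12 bytes) expressed as IEC tebibytes (2**40 bytes).
        return tb * 1e12 / 2**40

    print(tb_to_tib(16.0))     # ~14.55 -> a "16TB" volume shows as ~14.5 "TB"
    print(tb_to_tib(6 * 2.0))  # ~10.91 -> 8 x 2TB minus two drives of redundancy
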
I wanted to delete the 2 test volumes, and before I did this I wanted to USB Safely Remove the volume.
The safe removal failed, telling me that the device was in use, and that DDService.exe was holding open the handles.
DDService.exe is the Drobo Dashboard Service.
Now was my opportunity to register with Drobo Forum.
After posting my question, a moderator almost immediately responded saying that I should use the dashboard to power down the device, and that the dashboard will unmount the volume.
I did not want to power down, I just wanted to unmount the volume.
I even found a Drobo support KB saying to use either the dashboard or the normal safely remove procedure.
Several users replied saying they have similar problems with the dashboard service preventing safe removal.
I deleted the two test volumes using the dashboard; it did appear to unmount them, and then reboot.
Still, one would expect the Drobo service to correctly respond to device removal notifications.
I wanted to know why the original drive in bay 2 had failed.
The dashboard does not display any diagnostic information, no drive power state, no SMART state, nothing.
When you right click on the dashboard tray icon there is an option to create a diagnostic report.
At first it seemed like the diagnostic report dialog hung, then I noticed that DDService.exe had crashed.
I restarted the dashboard and the service, and this time the report file was created on the desktop; to my surprise the file was encrypted.
Not allowing me access to any diagnostic information is highly unusual.
I found an old forum post on the now closed Drobo Users Community Forum, describing the data file as a simple XOR.
But since the forum is closed the post was no longer available; fortunately the Google cache still has the information.
Unfortunately it turns out that the encryption on newer models has changed.
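
For reference, undoing a single-byte XOR obfuscation is trivial, as the sketch below shows; the key value and file name are made up for illustration, and per the above this approach no longer works on the newer models:

    def xor_decode(data: bytes, key: int) -> bytes:
        # XOR is its own inverse: applying the same key again restores the plaintext.
        return bytes(b ^ key for b in data)

    with open("diagnostics.log", "rb") as f:  # placeholder file name
        blob = f.read()
    print(xor_decode(blob, 0x5A)[:200])       # 0x5A is a made-up key
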
I opened a support ticket attaching my diagnostic file, and requested the reason for the drive failure, I also asked why the file is encrypted.
I received a reply stating that the drive experienced 2 timeouts, and that is why it was kicked out.
The reason for the encryption is that the log apparently contains details of the BeyondRAID file movements, and this is proprietary information.
Ok, I can understand not wanting to give away the secret sauce, but not making any diagnostic information available, and requiring tech support interaction for any questions will become a problem.

I was now ready to switch to iSCSI.
The PDF included on the CD was not much help, but the KB articles on the Drobo support site were helpful.
The steps call for: power up with USB only, configure using the dashboard, power down using the dashboard, disconnect USB, connect Ethernet, power up, and the dashboard will reconnect after a few minutes.

The steps say that for a DroboPro connected to a switch you cannot use automatic IP configuration; you must use a static IP.

I could not see why not, so I ignored the steps and used automatic configuration; for whatever reason, it does not work.
I went back to USB, selected a static IP, rebooted, and this time after a few minutes the dashboard connected to the DroboPro, and the drive I had previously created re-appeared.
I assumed that the dashboard was configuring iSCSI targets for me.
I opened the Windows iSCSI Initiator, and as expected the target and device were already configured.
To test the device I started a robocopy of a large set of backup data from my PC to the DroboPro.
At this point I am not testing performance, I will do that later when I have the DroboPro connected to a dedicated Ethernet port.
The copy started, and I left the machine idle while it copied.
On returning later my machine had gone to sleep, and the DroboPro had gone to sleep, just the orange power light was on.
I woke up my machine, and noticed that the robocopy was still in progress, halfway through a large file, but not resuming.
I waited, but the DroboPro did not wake up.
Every window I opened and every application I started on my PC would just hang.
In the end I had to reset my machine.
Back to the forum, and as before a moderator responded very quickly.
After a few back and forth questions, the moderator confirmed that it is a known problem with dashboard version 1.6.6 that I was using.
The suggested fix was to simply restart the dashboard and the dashboard service on wake from sleep.
This was not a reasonable solution, as disappearing and hanging volumes will lead to data corruption.

I opened a support ticket, and was asked to revert to the older dashboard version 1.5.1.
On uninstalling the 1.6.6 dashboard, I received an error that I must be an administrator, but I am an administrator.
Support told me to disable UAC, and then uninstall.
This is rather surprising, Windows 7 has already shipped and the Drobo software is still not Vista / UAC ready.
On installing dashboard 1.5.1 I found that it is even more Vista unfriendly: the dashboard requires elevation and is added to the startup group, but applications requiring elevation are not allowed to auto start.
Even with the UAC quirks, so far with dashboard 1.5.1 I have not had any hanging problems on resume from sleep.
The dashboard includes an email alert feature.
But after I set it up, and pulled a drive, I did not receive an email alert.
Back to the forum, and a confirmation that the email alert is generated by the dashboard user session process.
This means that with no user logged in, there is no email alert.
I find it rather weird that Drobo implemented iSCSI, and uses words like “enterprise ready”, “enterprise level”, and achieved “VMWare Ready” certification, yet there is not a single enterprise level feature in the product.
And not that I expect enterprise level reliability or performance in a consumer device, but basic functionality that is found in almost all comparable devices:
  • iSCSI and IP connectivity, but no web management interface.
  • USB for setup requires proximity to a physical machine, no remote management, and no virtual machine provisioning.
  • No DHCP support when connected to a LAN.
  • No raw volume management, must be a supported file system, must be managed by dashboard app.
  • I have to trust DroboPro with my data, but there is no diagnostic or health status.
  • I have to trust Data Robotics, but the forum is closed and diagnostic logs are encrypted.
  • Email alerts require a user to be logged in; if I was logged in I would not need an email alert.
  • Software that is not fully Vista compatible, even after Windows 7 already shipped.
  • Software that shipped with known problems that could cause data corruption.
The DroboElite is more than double the price of a DroboPro.
The main differences between DroboPro and DroboElite are dual Ethernet ports, multi host access to volumes, and more volumes.
Although I do not have one to test, from what I can gather in documentation and the forum, none of the items above are any different.
As a direct attached USB or FireWire storage device some of the above mentioned items would be irrelevant, but iSCSI, I really expected more.

Next up, I’ll move the DroboPro from my workstation to my W2K8R2 server on a dedicated Ethernet port.

This will give me the ability to do some performance and benchmarking comparison between RAID6 DAS and the DroboPro BeyondRAID iSCSI.
[Update: 30 January 2010]

Hitachi Ultrastar and Seagate Barracuda LP 2TB drives

In my previous post I talked about Western Digital RE4-GP 2TB drive problems.

In this post I present my test results for 2TB drives from Seagate and Hitachi.
The test setup is the same as for the RE4-GP testing, except that I only tested 4 drives from each manufacturer.
Unlike the enterprise class WD RE4-GP and Hitachi Ultrastar A7K2000 drives, the Seagate Barracuda LP drive is a desktop drive.
The equivalent should have been a Seagate Constellation ES drive, but as far as I know the 2TB drives are not yet available.
To summarize:
The Hitachi A7K2000 drives performed without issue on all three controllers, the Seagate Barracuda LP drive failed to work with the Adaptec controller.
The Hitachi Ultrastar A7K2000 outperformed the Seagate Barracuda LP drive, but this was not really a surprise given the drive specs.
The Areca ARC1680 controller produced the best and most reliable results, the Adaptec was close, but given the overheating problem, it is not reliable unless additional cooling is added.
Test hardware:
Intel S5000PSL motherboard, dual Xeon E5450, 32GB RAM, firmware BIOS-98 BMC-65 FRUSDR-48
Adaptec 51245 RAID controller, firmware 17517, driver 5.2.0.17517
Areca ARC1680ix-12 RAID controller, firmware 1.47, driver 6.20.00.16_80819
LSI 8888ELP RAID controller, firmware 11.0.1-0017 (APP-1.40.62-0665), driver 4.16.0.64
Chenbro CK12803 28-port SAS expander, firmware AA11
Drive setup:
– Boot drive, 1 x 1TB WD Caviar Black WD1001FALS, firmware 05.00K05
Simple volume, connected to onboard Intel ICH10R controller running in RAID mode
– Data drives, 4 x 2TB Hitachi Ultrastar A7K2000 HUA722020ALA330 drives, firmware JKAOA20N
1 x hot spare, 3 x drive RAID5 4TB, configured as GPT partitions, dynamic disks, and simple volumes
– Data drives, 4 x 2TB Seagate Barracuda LP ST32000542AS drives, firmware CC32
1 x hot spare, 3 x drive RAID5 4TB, configured as GPT partitions, dynamic disks, and simple volumes

I tested the drives as shipped, with no jumpers, running at SATA-II / 3Gb/s speeds.
Adaptec 51245, SATA-II / 3Gb/s:
As in my previous test I had to use an extra fan to keep the Adaptec card from overheating.
The Hitachi drives had no problems.
The Hitachi drives completed initialization in 16 hours.
The Seagate drives would not show up on the system; I tried different ports, resets, and cable swaps, with no luck.
Adaptec, RAID5, Hitachi:

Adaptec, RAID5, WD:

Areca ARC1680ix-12, SATA-II / 3Gb/s:
The Areca had no problems with the Hitachi or Seagate drives.
The Hitachi drives completed initialization in 40 hours.
The Seagate drives completed initialization in 49 hours.
The array initialization time of the Areca is significantly longer than that of the Adaptec or LSI.
Areca, RAID5, Hitachi:

Areca, RAID5, Seagate:

Areca, RAID5, WD:

LSI 8888ELP and Chenbro CK12803, SATA-II / 3Gb/s:
The Hitachi drives reported a few “Invalid field in CDB” errors, but they did not appear to affect the operation of the array.
The Hitachi drives completed initialization in 4 hours.
The Seagate drives reported lots of “Invalid field in CDB” and “Power on, reset, or bus device reset occurred” errors, but they did not appear to affect the operation of the array.
The Seagate drives made clicking sounds when they powered on, and occasionally during normal operation.
The Seagate drives completed initialization in 4 hours.

LSI, RAID5, Hitachi:

LSI, RAID5, Seagate:

LSI, RAID5, WD:

The Hitachi A7K2000 drives performed without issue on all three controllers, while the Seagate Barracuda LP failed to work with the Adaptec controller.
The Hitachi A7K2000 outperformed the Seagate Barracuda LP, but this was not really a surprise given the drive specs.
The Areca ARC1680 controller produced the best and most reliable results; the Adaptec came close, but given the overheating problem, it is not reliable unless additional cooling is added.

I will be scaling my test up from 4 to 12 Hitachi drives, using the Areca controller, and I will expand the Areca cache from 512MB to 2GB.

Western Digital RE4-GP 2TB Drive Problems

In my previous two posts I described my research into the power saving features of various enterprise class RAID controllers.
In this post I detail the results of my testing of the Western Digital RE4-GP enterprise class “green” drives when used with hardware RAID controllers from Adaptec, Areca, and LSI.
To summarize: the RE4-GP drive fails with a variety of problems; Adaptec, Areca, and LSI all acknowledge the problem and lay the blame on WD, yet WD insists there are no known problems with the RE4-GP drives.
Test hardware:
Intel S5000PSL motherboard, dual Xeon E5450, 32GB RAM, firmware BIOS-98 BMC-65 FRUSDR-48
Adaptec 51245 RAID controller, firmware 17517, driver 5.2.0.17517
Areca ARC1680ix-12 RAID controller, firmware 1.47, driver 6.20.00.16_80819
LSI 8888ELP RAID controller, firmware 11.0.1-0017 (APP-1.40.62-0665), driver 4.16.0.64
Chenbro CK12803 28-port SAS expander, firmware AA11
Drive setup:
– Boot drive, 1 x 1TB WD Caviar Black WD1001FALS, firmware 05.00K05
Simple volume, connected to onboard Intel ICH10R controller running in RAID mode
– Data drives, 10 x 2TB WD RE4-GP WD2002FYPS drives, firmware 04.05G04
1 x hot spare, 3 x drive RAID5 4TB, 6 x drive RAID6 8TB, configured as GPT partitions, dynamic disks, and simple volumes
I started testing the drives as shipped, with no jumpers, running at SATA-II / 3Gb/s speeds.
Adaptec 51245, SATA-II / 3Gb/s:
The Adaptec card has 3 x internal SFF-8087 ports and 1 x external SFF-8088 port, supporting 12 internal drives.
The Adaptec card had immediate problems with the RE4-GP drives; in the ASM utility the drives would randomly drop out and back in.
I could not complete testing.
Areca ARC1680ix-12, SATA-II / 3Gb/s:
The Areca card has 3 x internal SFF-8087 ports and 1 x external SFF-8088 port, supporting 12 internal drives.
Unlike the LSI and Adaptec cards that require locally installed management software, the Areca card is completely managed through a web interface from an embedded Ethernet port.
The Areca card allowed the RAID volumes to be created, but during initialization at around 7% the web interface stopped responding, requiring a cold reset.
I could not complete testing.
LSI 8888ELP and Chenbro CK12803, SATA-II / 3Gb/s:
The LSI card has 2 x internal SFF-8087 ports and 2 x external SFF-8088 ports, supporting 8 internal drives.
Since I needed to host 10 drives, I used the Chenbro 28 port SAS expander.
The 8888ELP support page only lists the v3 series drivers, while W2K8R2 ships with the v4 series drivers, so I used the latest v4 drivers from the new 6Gb/s LSI cards.
The LSI and Chenbro allowed the volumes to be created, but during initialization 4 drives dropped out, and initialization failed.
I could not complete testing.
I contacted WD, Areca, Adaptec, and LSI support with my findings.
WD support said there is nothing wrong with the RE4-GP, and that they are not aware of any problems with any RAID controllers.
When I insisted that there must be something wrong, they suggested I try to force the drives to SATA-I / 1.5Gb/s speed and see if that helps.
I tested at SATA-I / 1.5Gb/s speed, and achieved some success, but I still insisted that WD acknowledge the problem.
The case was escalated to WD engineering, and I am still waiting for an update.
Adaptec support acknowledged a problem with RE4-GP drives when used with high port count controllers, and that a card hardware fix is being worked on.
I asked if the fix will be firmware or hardware, and was told hardware, and that the card will have to be swapped, but the timeframe is unknown.
Areca support acknowledged a problem between the Intel IOP348 controller and RE4-GP drives, and that Intel and WD are aware of the problem, and that running the drives at SATA-I / 1.5Gb/s speed resolves the problem.
I asked if a fix to run at SATA-II / 3Gb/s speeds will be made available; I was told this will not be possible without hardware changes, and no fix is planned.
LSI support acknowledged a problem with RE4-GP drives, and that they have multiple cases open with WD, and that my best option is to use a different drive, or to contact WD support.
I asked if a fix will become available; they said it is unlikely that a firmware update would be able to resolve the problem, and that WD would need to provide a fix.
This is rather disappointing: WD advertises the RE4-GP as an enterprise class drive, yet all three of the enterprise class RAID controllers I tested failed with it, and all three vendors blame WD, while WD insists there is nothing wrong with the RE4-GP.
I continued testing, this time with the SATA-I / 1.5Gb/s jumper set.
Adaptec 51245, SATA-I / 1.5Gb/s:
This time the Adaptec card had no problems seeing the arrays, although some of the drives continued to report link errors.
A much bigger problem was that the controller and battery were overheating, with the controller running at 103C / 217F.
In order to continue my testing I had to install an extra chassis fan to provide additional ventilation over the card.
The Adaptec and LSI have passive cooling, whereas the Areca has active cooling and only ran at around 51C / 124F.
The Areca and LSI batteries are off-board, and although a bit inconvenient to mount, they did not overheat like the Adaptec.
Initialization completed in 22 hours, compared to 52 hours for Areca and 8 hours for LSI.
The controller supports power management, and drives are spun down when not in use.
3 x Drive RAID5 4TB performance:

6 x Drive RAID6 8TB Performance:

Areca ARC1680ix-12, SATA-I / 1.5Gb/s:
This time the Areca card had no problems initializing the arrays.
Initialization completed in 52 hours, much longer compared to 22 hours for Adaptec and 8 hours for LSI.
Areca support said initialization time depends on the drive speed and controller load, and that the RE4-GP drives are known to be slow.
The controller supports power management, and drives are spun down when not in use.

3 x Drive RAID5 4TB performance:

6 x Drive RAID6 8TB Performance:

LSI 8888ELP and Chenbro CK12803, SATA-I / 1.5Gb/s:
This time only 2 drives dropped out, one out of each array, and initialization completed after I forced the drives back online.
Initialization completed in 8 hours, much quicker compared to 22 hours for Adaptec and 52 hours for Areca.

The controller only supports power management on unassigned drives; there is no support for spinning down drives that are configured but idle.

3 x Drive RAID5 4TB performance:

6 x Drive RAID6 8TB Performance:

Although all three cards produced results when the RE4-GP drives were forced to SATA-I / 1.5Gb/s speeds, the results still show that the drives are unreliable.
The RE4-GP drive fails with a variety of problems; Adaptec, Areca, and LSI acknowledge the problem and lay the blame on WD, yet WD insists there are no known problems with the RE4-GP drives.
There are alternative low power drives available from Seagate and Hitachi.
I still haven’t forgiven Seagate for the endless trouble they caused with ES.2 drives and Intel IOP348 based controllers, and, like WD, for denying any problems with the drives, yet eventually releasing two firmware updates for the ES.2 drives.
I’ve always had good service from Hitachi drives, so maybe I’ll give the new Hitachi A7K2000 drives a run.
One thing is for sure, I will definitely be returning the RE4-GP drives.
[Update: 11 October 2009]
I tested the Seagate Barracuda LP and Hitachi Ultrastar 2TB drives.
[Update: 24 October 2009]
WD support still has not responded to my request for the firmware.

Power Saving RAID Controller (Continued)

This post continues from my last post on power saving RAID controllers.
It turns out the Adaptec 5 series controllers are not that workstation friendly.
I was testing with Western Digital drives; 1TB Caviar Black WD1001FALS, 2TB Caviar Green WD20EADS, and 1TB RE3 WD1002FBYS.
I also wanted to test with the new 2TB RE4-GP WD2002FYPS drives, but they are on backorder.
I found that the Caviar Black WD1001FALS and Caviar Green WD20EADS drives were just dropping out of the array for no apparent reason, yet they were still listed in ASM as if nothing was wrong.
I also noticed that over time ASM listed medium errors and aborted command errors for these drives.
In comparison the RE3 WD1002FBYS drives worked perfectly.
A little searching pointed me to a feature of WD drives called Time Limited Error Recovery (TLER).
You can read more about TLER here, or here, or here.
Basically, the enterprise class drives have TLER enabled and the consumer drives do not; when the RAID controller issues a command and the drive does not respond within a reasonable amount of time, the controller drops the drive out of the array.
The same drives worked perfectly in single drive, RAID-0, and RAID-1 configurations with an Intel ICH10R RAID controller, granted, the Intel chipset controller is not in the same performance league.
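Where a drive’s firmware supports it, the error recovery timeout can be inspected and adjusted with a recent smartmontools build; a sketch, assuming smartctl is installed and the drive shows up as /dev/sda (the timeout values are in tenths of a second):

rem show the current error recovery control settings
smartctl -l scterc /dev/sda
rem set read and write recovery timeouts to 7 seconds
smartctl -l scterc,70,70 /dev/sda

WD drives of this era were reportedly adjusted with a separate DOS utility (WDTLER) instead, and the scterc setting typically does not survive a power cycle, so treat this as illustrative rather than a fix.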
The Adaptec 5805 and 5445 controllers I tested did let the drives spin down, but the controllers are not S3 sleep friendly.
Every time my system resumes from S3 sleep, ASM complains “The battery-backup cache device needs a new battery: controller 1.”, yet when I look in ASM it tells me the battery is fine.
Whenever the system enters S3 sleep the controller does not spin down any of the drives; this means that all the drives in external enclosures, or on external power, will keep on spinning while the machine is sleeping.
This defeats the purpose of power saving and sleep.
The embedded Intel ICH10R RAID controller did correctly spin down all drives before entering sleep.
Since installing the ASM utility my system has been taking a noticeably longer time to shut down.
Vista provides a convenient, although not always accurate, way to see what is impacting system performance in terms of event timing, and ASM was identified as adding 16s to every shutdown.
Under [Computer Management][Event Viewer][Applications and Services Logs][Microsoft][Windows][Diagnostics-Performance][Operational], I see this for every shutdown event:
This service caused a delay in the system shutdown process:
File Name : AdaptecStorageManagerAgent
Friendly Name :
Version :
Total Time : 20002ms
Degradation Time : 16002ms
Incident Time (UTC) : 6/11/2009 3:15:57 AM
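These events can also be pulled from the command line rather than clicking through Event Viewer; a query sketch using wevtutil, where the event ID filter of 203 for shutdown degradation is my assumption and worth verifying against your own log:

rem show the last 5 shutdown degradation events in text form (ID 203 assumed)
wevtutil qe Microsoft-Windows-Diagnostics-Performance/Operational /q:"*[System[(EventID=203)]]" /f:text /c:5 /rd:true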
It really seems that Adaptec did not design or test the 5 series controllers for use in workstations. This is unfortunate, because performance-wise the 5 series cards really are great.
[Update: 22 August 2009]
I received several WD RE4-GP / WD2002FYPS drives.
I tested with W2K8R2 booted from a WD RE3 / WD1002FBYS drive connected to an Intel ICH10R controller on an Intel S5000PSL server board.
I tested 8 drives in RAID6 connected to a LSI 8888ELP controller, worked perfectly.
I connected the same 8 drives to an Adaptec 51245 controller, at boot only 2 out of 8 drives were recognized.
After booting, ASM showed all 8 drives, but they were continuously dropping out and back in.
I received confirmation of similar failures with the RE4 drives and Adaptec 5 series cards from a blog reader.
Adaptec support told him to temporarily run the drives at 1.5Gb/s; apparently this does work, although I did not test it myself, and it is clearly neither a long term solution nor acceptable.
I am still waiting to hear back from Adaptec and WD support.
[Update: 30 August 2009]
I received a reply from Adaptec support, and the news is not good: there is a hardware compatibility problem between the WD RE4-GP / WD2002FYPS drives and the 51245 controller.
“I am afraid currently these drives are not supported with this model of controller. This is due to a compatibility issue with the onboard expander on the 51245 card. We are working on a hardware solution to this problem, but I am currently not able to say in what timeframe this will come.”
[Update: 31 August 2009]
I asked support if a firmware update will fix the issue, or if a hardware change will be required.
“Correct, a hardware solution, this would mean the card would need to be swapped, not a firmeware update. I can’t tell you for sure when the solution would come as its difficult to predict the amount of time required to certify the solution but my estimate would be around the end of September.”
[Update: 6 September 2009]
I experienced similar timeouts testing an Areca ARC-1680 controller.
Areca support was very forthcoming with the problem and the solution.
“this issue had been found few weeks ago and problem had been reported to WD and Intel which are vendors for hard drive and processor on controller. because the problem is physical layer issue which Areca have no ability to fix it.
but both Intel and WD have no fix available for this issue, the only solution is recommend customer change to SATA150 mode.
and they had closed this issue by this solution.
so i do not think a fix for SATA300 mode may available, sorry for the inconvenience.”
That explains why the problem happens with the Areca and Adaptec controllers, but not the LSI; both the Areca and Adaptec use the Intel IOP348 processor.

Power Saving SATA RAID Controller

I’ve been a longtime user of Adaptec SATA RAID cards (3805, 5805, 51245), but over the years I’ve become more energy saving conscious, and the Adaptec controllers did not support Windows power management.
My workstations are normally running in the “Balanced” power mode so that they will go to sleep after an hour, but sometimes I need to run computationally intensive tasks that leaves the machines running 24/7.
During these periods the disks don’t need to be on and I want the disks to spin down, like they would had they been directly connected and not in a RAID configuration.
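For disks that are directly connected, that spin-down behavior is just the Windows disk idle timeout, which can be adjusted per power plan with powercfg; a sketch using the Vista-era syntax, with the 20 minute timeout as an example value:

rem show the active power scheme
powercfg -getactivescheme
rem spin down idle disks after 20 minutes on AC power
powercfg -change -disk-timeout-ac 20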
I was building a new system with 4 drives in RAID10, and I decided to try a 3Ware / AMCC SATA 9690SA-4I RAID controller. Their sales support confirmed that the card does support native Windows power management.
I also ordered a battery backup unit with the card, and my first impression of installing it was less than favorable. The BBU comes with 4 plastic screws with pillars, but the 9690SA card only has one mounting hole. After inserting the BBU in the IDC header, I had to pull it back out and adjust it so that it would align properly.
After running the card for a few hours I started getting battery overheating warnings. The BBU comes with an extension cable, and I had to use the extension cable and mount the battery away from the controller board. After making this adjustment the BBU seemed to operate at normal temperature.
Getting back to installation, the 3Ware BIOS utility is very rudimentary (compared to Adaptec), I later found out that the 3Ware Disk Manager 2 (3DM2) utility is not much better. The BIOS only allowed you to create one boot volume, and the rest of the disk space was automatically allocated. The BIOS also only supports INT13 booting from the boot volume.
I installed Vista Ultimate x64 on the boot volume, and used the other volume for data. I also installed the 3DM2 management utility, and the client tray alerting application. The client utility does not work on Vista because it requires elevation, and elevation is not allowed for auto start items. The 3DM2 utility is a web server, and you connect to it using your web browser.
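A common workaround for elevated auto start items on Vista is a scheduled task that runs at logon with highest privileges; a sketch, where the task name and path are placeholders since I have not verified this against the 3Ware tray client:

rem create a logon task that runs elevated (name and path are placeholders)
schtasks /Create /TN "3DM2Tray" /TR "C:\Program Files\3ware\tray.exe" /SC ONLOGON /RL HIGHEST /F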
At first the lack of management functionality did not bother me, I did not need it, and the drives seemed to perform fine. After a month or so I noticed that I was getting more and more controller reset messages in the event log. I contacted 3Ware support, and they told me they saw CRC errors and that the fanout cable was probably bad. I replaced the cable, but the problems persisted.
The CRC errors reminded me of problems I had with Seagate ES.2 drives on other systems, and I updated the firmware in the four 500GB Seagate drives I was using. No change, same problem.
I needed more disk space anyway, so I decided to upgrade the 500GB Seagate drives to 1TB WD Caviar Black drives. The normal procedure would be to remove the drives one by one, insert the new drive, wait for the array to rebuild, and when all drives have been replaced, to expand the volume.
A 3Ware KB article confirmed this procedure, but there was no support for volume expansion. What?
In order to expand the volume I would need to boot from DOS (Windows is not supported), run a utility to collect data, send the data to 3Ware, and they would create a custom expansion script for me that I then needed to run against the volume to rewrite the metadata. They highly recommend backing up the data before proceeding.
I know the Adaptec Storage Manager (ASM) utility does support volume expansion, I’ve used it, it’s easy, it’s a right click in the GUI.
I never got to the point of actually trying the expansion procedure. After swapping the last drive I ran a verify, and one of the mirror units would not go past 22%. Support told me to try various things: disable scheduling, enable scheduling, stop the verify, restart the verify. When they eventually told me it seemed there were some timeouts, caused by Native Command Queuing (NCQ) and a bad BBU, I decided I had enough.
The new Adaptec 5-series cards do support power management, but unlike the 9690SA they do not support native Windows power management, and require power saving to be enabled through the ASM utility.
I ordered an Adaptec 5445 card, booted my system with the 9690SA still in place from WinPE, made image backups using Symantec Ghost Solution Suite (SGSS), installed the 5445 card, created new RAID10 volumes, booted from WinPE, restored the images using Ghost, and Vista booted just fine.
From past experience I knew that when changing RAID controllers I had to make sure that the Adaptec driver would be ready after swapping the hardware, or the boot would fail. So before I swapped the cards and made the Ghost backup, I used regedit to change the start type of the “arcsas” driver from disabled to boot. I know that SGSS does have support for driver injection used for bare metal restore, but since the Adaptec driver comes standard with Vista, I just had to enable it.
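The same change can be scripted instead of done by hand in regedit; a sketch, where 0 is SERVICE_BOOT_START and 4 is SERVICE_DISABLED:

rem set the arcsas driver to load at boot so the restored image can start
reg add HKLM\SYSTEM\CurrentControlSet\Services\arcsas /v Start /t REG_DWORD /d 0 /f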
It has only been a few days, but the system is running stable with no errors. Based purely on boot times, I do think the WD WD1001FALS Caviar Black drives are faster than the Seagate ST3500320AS Barracuda drives I used before.
Let’s hope things stay this way.
[Updated: 17 July 2009]
The Adaptec was not that power friendly after all.
Read the continued post.