Synology DS2411+ Performance Review

In my last post I compared the performance of the Synology DS1511+ against the QNAP TS-859 Pro. As I finished writing that post, Synology announced the new Synology DS2411+.
Instead of using a DS1511+ and a DX510 expansion unit for 10 disks, the DS2411+ offers 12 disks in a single device. The price difference is also marginal: the DS1511+ is $836, the DX510 is $500, and the DS2411+ is $1,700. That is a difference of only $364, well worth it for the extra storage space and for the reliability and stability of having all drives in one enclosure. I ended up returning my DX510 and DS1511+, and got a DS2411+ instead.

To test the DS2411+, I ran the same performance tests, using the same MPIO setup as I described in my previous post. The only slight difference was in the way I configured the iSCSI LUN; the DS1511+ was configured as SHR2, while the DS2411+ was configured as RAID6. Theoretically both are the same when all the disks are the same size, and SHR2 ends up using RAID6 internally.
iSCSI LUN configuration:
[Screenshot: DS2411+ iSCSI LUN configuration]

At idle the DS2411+ drew 42W, and under load it drew 138W. The idle figure is close to the advertised 39W, but the load figure is quite a bit higher than the advertised 105W.

I use Remote Desktop Manager to manage all my devices in one convenient application. RDM supports web portals, Remote Desktop, Hyper-V, and many more remote configuration options, all in a single tabbed UI. What I found is that Synology DSM has some problems when running in a tabbed IE browser. When I open the log history, I get a script error, and whenever I move focus away from and back to the browser window, the DSM desktop windows shift all the way to the left. I assume this is a DSM problem related to absolute and relative referencing. I logged a support case, and I hope they can fix it.
Script error:
[Screenshot: DSM script error]

Test results (all values in MB/s):

Device        ATTO Read  ATTO Write  CDM Read  CDM Write
PM810           267.153     260.839   256.674    251.850
DS2411+         244.032     165.564   149.802    156.673
DS1511+         244.032     126.030   141.213    115.032
TS-859 Pro      136.178      95.152   116.015     91.097
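
For a quick sense of the difference, here is a small Python sketch (values taken straight from the table above) that computes the per-test gain of the DS2411+ over the DS1511+:

```python
# Per-test gain of the DS2411+ over the DS1511+, using the MB/s values
# from the table above.
tests  = ["ATTO Read", "ATTO Write", "CDM Read", "CDM Write"]
ds2411 = [244.032, 165.564, 149.802, 156.673]
ds1511 = [244.032, 126.030, 141.213, 115.032]

for name, new, old in zip(tests, ds2411, ds1511):
    print(f"{name:<10} {new / old - 1:+.0%}")
```

The read numbers are essentially identical; the gains are mostly on the write side.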

[Chart: test result comparison]
DS2411+: [ATTO and CDM screenshots]
DS1511+: [ATTO and CDM screenshots]

The published DS2411+ performance numbers are slightly better than the DS1511+ numbers, and my testing confirms that. So far I am really impressed with the DS2411+.


Synology DS1511+ vs. QNap TS-859 Pro, iSCSI MPIO Performance

I have been very happy with my QNap TS-859 Pro (Amazon), but I’ve run out of space while archiving my media collection, and I needed to expand the storage capacity. You can read about my experience with the TS-859 Pro here, and my experience archiving my media collection here.
My primary objective with this project is storage capacity expansion, and my secondary objective is improved performance.

My choices for storage capacity expansion included:

  • Replace the 8 x 2TB drives with 8 x 3TB drives, to give me 6TB of extra storage. The volume expansion would be very time consuming, but my network setup can remain unchanged during the expansion.
  • Get a second TS-859 Pro with 8 x 3TB drives, to give me 18TB of extra storage. I would need to add the new device to my network, and somehow rebalance the storage allocation across the two devices, without changing the file sharing paths, probably by using directory mount points.
  • Get a Synology DS1511+ (Amazon) and a DX510 (Amazon) expansion unit with 10 x 3TB drives to replace the QNap, to give me 12TB of extra storage, expandable to 15 x 3TB drives for 36TB of total storage. I will need to copy all data to the new device, then mount the new device in place of the old device.

I opted for the DS1511+ with one DX510 expansion unit; I can always add a second DX510 and expand the volume later if needed.
As far as hard drives go, I’ve been very happy with the Hitachi Ultrastar A7K2000 2TB drives I use in my workstations and the QNap, so I stayed with Hitachi and chose the larger Ultrastar 7K3000 3TB drives for the Synology expansion.

For improving performance I had a few ideas:

  • The TS-859 Pro is a bit older than the DS1511+, and there are newer and more powerful QNap models available, like the TS-859 Pro+ (Amazon) with a faster processor, or the TS-659 Pro II (Amazon) with a faster processor and SATA3 support, so it is not totally fair to compare the TS-859 Pro performance against the newer DS1511+. But the newer QNap models do not support my capacity needs.
  • I use Hyper-V clients and dynamic VHD files located on an iSCSI volume mounted in the host server. I elected this setup because it allowed me great flexibility in creating logical volumes for the VMs, without actually requiring the space to be allocated. In retrospect this may have been convenient, but it did not perform well in large file transfers between the iSCSI target and the file server Hyper-V client.
    For my new setup I was going to mount the iSCSI volume as a raw disk in the file server Hyper-V client. This still allows me to easily move the iSCSI volume between hosts, but the performance should be better than fixed size VHD files, and much better than dynamic VHD files.
    Here is a blog post describing some options for using iSCSI and Hyper-V.
  • I used iSCSI thin provisioning, meaning that the logical target has a fixed size, but the physical storage only gets allocated as needed. This is very convenient, but turned out to be slower than instant allocation. The QNap iSCSI implementation is also a file-level iSCSI LUN, meaning that the iSCSI volume is backed by a file on an EXT4 volume.
    For my new setup I was going to use the Synology block-level iSCSI LUN, meaning that the iSCSI volume is directly mapped to a physical storage volume.
  • I use a single LAN port to connect to the iSCSI target, meaning the IO throughput is limited by network bandwidth to 1Gb/s or 125MB/s.
    For my new setup I wanted to use 802.3ad link aggregation or Multi Path IO (MPIO) to extend the network speed to a theoretical 2Gb/s or 250MB/s. My understanding of link aggregation turned out to be totally wrong, and I ended up using MPIO instead.

To create a 2Gb/s network link between the server and storage, I teamed two LAN ports on the Intel server adapter, created a bond of the two LAN ports on the Synology, and created two trunks for those connections on the switch. This gave me a theoretical 2Gb/s pipe between the server and the iSCSI target, but my testing showed no improvement in performance over a single 1Gb/s link. After some research I found that although the logical link is 2Gb/s, a physical network stream going from one MAC address to another MAC address is still limited by the physical transport speed, i.e. 1Gb/s. This means that link aggregation is very well suited to, for example, connecting a server to a switch using a trunk and allowing multiple clients to access the server over the switch, each at full speed, but it has no performance benefit when there is a single source and destination, as is the case with iSCSI. Since link aggregation did not improve the iSCSI performance, I used MPIO instead.
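
To illustrate why bonding did not help, here is a minimal Python sketch of a typical MAC-based transmit hash (the actual hash used by the switch or the Synology bond may differ, and the MAC addresses are hypothetical): a single server-to-NAS conversation always maps to the same physical 1Gb/s link, whereas MPIO opens a separate iSCSI session per adapter and can therefore use both links.

```python
# Sketch of a MAC-pair transmit hash, as commonly used by link aggregation.
# This is an illustration of the idea, not the algorithm of any specific switch.
def pick_link(src_mac: str, dst_mac: str, num_links: int = 2) -> int:
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % num_links          # same MAC pair -> same link

server_mac = "00:1B:21:AA:BB:01"            # hypothetical addresses
nas_mac    = "00:11:32:CC:DD:02"

# Every frame of the single server-to-NAS iSCSI flow hashes to one link,
# so that flow is capped at ~1Gb/s no matter how many links are bonded.
print(pick_link(server_mac, nas_mac))
print(pick_link(server_mac, nas_mac))       # same result, every time

# A different client talking to the NAS may land on the other link.
print(pick_link("00:1B:21:AA:BB:99", nas_mac))
```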

I set up a test environment where I could compare the performance of different network and device configurations using readily available hardware and test tools. Although my testing produced reasonably accurate relative results, due to the differences in environments, it can’t really be used for absolute performance comparisons.

Disk performance test tools:

Server setup:

Network setup:

  • HP ProCurve V1810 switch, Jumbo Frames enabled, Flow Control enabled.
  • Jumbo Frames enabled on all adapters.
  • CAT6 cables.
  • All network adapters connected to the switch.

QNap setup:

Synology setup:

To test the performance using the disk test tools, I mounted the iSCSI targets as drives in the server. I am not going to cover the details of how to configure iSCSI; you can read the Synology and QNap iSCSI documentation, and more specifically the MPIO documentation for Windows, Synology, and QNap.
A few notes on setting up iSCSI:

  • The QNap MPIO documentation shows LAN-1 and LAN-2 in a trunked configuration. As far as I could tell, the best practices documentation from Microsoft, DELL, Synology, and other SAN vendors says that trunking and MPIO should not be mixed. As such, I did not trunk the LAN ports on the QNap.
  • I connected all LAN cables to the switch. I could have used direct connections to eliminate the impact of the switch, but this is not how I will install the setup, and the switch should be sufficiently capable of handling the load without adding any performance degradation.
  • Before trying to enable MPIO on Windows Server, first connect one iSCSI target and map the device, then add the MPIO feature. If you do not have a mapped device, the MPIO iSCSI option will be greyed out.
  • The server’s iSCSI target configuration explicitly bound the source and destination devices based on the adapters’ IP addresses, i.e. server LAN-1 would bind to NAS LAN-1, etc. This ensured that traffic would only be routed to and from the specified adapters.
  • I found that the best MPIO load balance policy was the Least Queue Depth option (illustrated in the sketch after this list).
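
To make the Least Queue Depth choice concrete, here is a tiny conceptual Python sketch of the policy; it illustrates the idea only and is not Microsoft’s MPIO implementation.

```python
# Conceptual sketch of Least Queue Depth: each new I/O is sent down the
# path with the fewest requests currently outstanding, so a slow or busy
# path automatically receives less new work.
outstanding = {"LAN-1": 0, "LAN-2": 0}      # in-flight I/Os per path

def submit_io() -> str:
    path = min(outstanding, key=outstanding.get)
    outstanding[path] += 1
    return path

def complete_io(path: str) -> None:
    outstanding[path] -= 1

print(submit_io())      # LAN-1
print(submit_io())      # LAN-2
print(submit_io())      # LAN-1 (tie, first path wins)
complete_io("LAN-2")
print(submit_io())      # LAN-2, now the least-loaded path
```

With both paths healthy this behaves much like round robin, but it adapts automatically when one path slows down.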

During my testing I encountered a few problems:

  • The DX510 expansion unit would sometimes not power on when the DS1511+ was powered on, or would sometimes fail to initialize the RAID volume, or would sometimes go offline while powered on. I RMA’d the device, and the replacement unit works fine.
  • During testing of the DS1511+, the write performance would sometimes degrade by 50% and never recover. The only solution was to reboot the device. Upgrading to the latest 3.1-1748 DSM firmware solved this problem.
  • During testing of the DS1511+, when one of the MPIO network links went down, e.g. when I unplugged a cable, ghost iSCSI connections would remain open, and the iSCSI processes would consume 50% of the NAS CPU time. The only solution was to reboot the device. Upgrading to the latest 3.1-1748 DSM firmware solved this problem.
  • I could not get MPIO to work with the DS1511+, yet no errors were reported. It turns out that LAN-1 and LAN-2 must be on different subnets for MPIO to work.
  • Both the QNap and the Synology exhibit weird LAN traffic behavior when both LAN-1 and LAN-2 are connected and the server generates traffic directed at LAN-1 only. The NAS resource monitor would show high traffic volumes on LAN-1 and LAN-2, even with no traffic directed at LAN-2. I am uncertain why this happens, maybe a reporting issue, maybe a switching issue, but to avoid it influencing the tests, I disconnected LAN-2 while not testing MPIO.

My test methodology was as follows:

  • Mount either the QNap or the Synology iSCSI device, and power off the other device while it is not being tested.
  • Connect the iSCSI target using LAN-1 only and unplug LAN-2, or connect using MPIO with LAN-1 and LAN-2 active.
  • Run all CDM tests with iterations set at 9, and a 4GB file-set size.
  • Run ATTO with the queue depth set to 8, and a 2GB file-set size.
  • As a baseline, I also tested the Samsung PM810 SSD drive using ATTO and CDM.

Test result summary (all values in MB/s):

Device            ATTO Read  ATTO Write  CDM Read  CDM Write  Total (MB/s)
PM810               267.153     260.839   256.674    251.850     1,036.516
DS1511+ MPIO        244.032     126.030   141.213    115.032       626.307
TS-859 Pro MPIO     136.178      95.152   116.015     91.097       438.442
DS1511+             122.294     120.172    89.258    105.618       437.342
TS-859 Pro          119.370      99.864    76.529     89.752       385.515
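
The Total column is simply the sum of the four results per device; a quick Python sketch of that aggregation, using the values from the table:

```python
# Sum the four benchmark results (MB/s) per device to get the Total column.
results = {
    # device:           (ATTO Read, ATTO Write, CDM Read, CDM Write)
    "PM810":            (267.153, 260.839, 256.674, 251.850),
    "DS1511+ MPIO":     (244.032, 126.030, 141.213, 115.032),
    "TS-859 Pro MPIO":  (136.178,  95.152, 116.015,  91.097),
    "DS1511+":          (122.294, 120.172,  89.258, 105.618),
    "TS-859 Pro":       (119.370,  99.864,  76.529,  89.752),
}

for device, speeds in sorted(results.items(), key=lambda kv: -sum(kv[1])):
    print(f"{device:<16} {sum(speeds):9.3f} MB/s")
```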

[Chart: test result summary]

Detailed results:
PM810: [ATTO and CDM screenshots]
DS1511+ MPIO: [ATTO and CDM screenshots]
TS-859 Pro MPIO: [ATTO and CDM screenshots]
DS1511+: [ATTO and CDM screenshots]
TS-859 Pro: [ATTO and CDM screenshots]

Initially I was a little concerned about the DX510 being in a separate case, connected with an eSATA cable to the main DS1511+, especially after I had to RMA my first DX510 because of what appeared to be connectivity issues. I was also concerned that there would be a performance difference between the 5 drives in the DS1511+ and the 5 drives in the DX510. Testing showed no performance difference between a 5 drive volume and a 10 drive volume, and the only physically noticeable difference was that the drives in the DX510 ran a few degrees hotter than the drives in the DS1511+.

As you can see from the results, the DS1511+ with MPIO performs really well, especially the 244MB/s ATTO read result, which gets close to the theoretical maximum of 250MB/s over a 2Gb/s link.
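
As a rough sanity check (decimal units, ignoring iSCSI and TCP/IP overhead), a few lines of Python show how close that read result comes to the wire speed:

```python
# Theoretical link ceiling, ignoring protocol overhead: 1Gb/s ~= 125MB/s.
def ceiling_mb_per_s(gbps: float) -> float:
    return gbps * 1000 / 8

mpio_ceiling   = ceiling_mb_per_s(2)    # ~250 MB/s over two 1Gb/s links
atto_read_mpio = 244.032                # DS1511+ MPIO, from the table above

print(f"{atto_read_mpio / mpio_ceiling:.1%} of the 2Gb/s ceiling")  # ~97.6%
```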

But technology moves quickly, and as I was compiling my test data for this post, Synology released two new NAS units, the DS3611xs and the DS2411+. The DS2411+ is very appealing: it is equivalent in performance to the DS1511+, but supports 12 drives in the main enclosure.
I may just have to exchange my DS1511+ and DX510 for a DS2411+…

[Update: 25 July 2011]
I returned the DS1511+ and DX510 in exchange for a DS2411+.
Read my performance review here.

Data Robotics DroboPro vs. QNAP TS-859 Pro

I previously wrote about my impressions of the DroboPro, and in case I was not clear, I was not impressed.

I recently read the announcement of the new QNAP TS-859 Pro, and from the literature it seemed like a great device, high performance, feature rich, and power saving.
The TS-859 Pro is now available, and I compared it against the DroboPro and against a regular W2K8R2 file server.

The TS-859 is taller than the DroboPro, the DroboPro is deeper than the TS-859, and the width is about the same.

Before I get to the TS-859, let’s look at the DroboPro information and configuration screens.
The OS is Windows Server 2008 R2 Enterprise, but the steps should be about the same for Vista and Windows 7.
All but the DroboCopy context menu screens are listed below.
DroboPro information and configuration:

The dashboard believes there are no volumes, but Windows sees an unknown 2TB volume:

Creating a new volume:

As the dashboard software starts creating the volume, Windows will detect a new RAW volume being mounted, and ask if it should be formatted.
Just leave that dialog open and let the dashboard finish.
The dashboard will complete saying all is well, when in reality it is not:

The dashboard failed to correctly mount and format the volume.

Right click on the disk, bring it online, format the partition as a GPT simple volume.

The dashboard will pick up the change and show the correct state.

Email notifications are configured from the context menu.
The email notifications are generated by the user session application, so if no user is logged in, no email notifications are sent.

DroboPro does not provide any diagnostics, even the diagnostic file is encrypted.

Unlike the DroboPro that comes with rudimentary documentation, the TS-859 has getting started instructions printed right on the top of the box, and includes a detailed configuration instruction pamphlet.
The DroboPro also has configuration instructions in the box, printed on the bottom of a piece of cardboard that looks like packaging material, and I only discovered these instructions as I was throwing out the packaging.
I loaded the TS-859 with 8 x Hitachi A7K2000 UltraStar 2TB drives.
On powering on the TS-859, the LCD showed that the device was booting, then asked me if I wanted to initialize all the drives as RAID6.
You can opt-out of this procedure, or change the RAID configuration, by using the select and enter buttons on the LCD.
I used the default values and the RAID6 initialization started.
The LCD shows the progress, and the process completed in about 15 minutes.
Unlike the DroboPro that requires a USB connection and client side software, the TS-859 is completely web managed.
The LCD will show the LAN IP address, obtained via DHCP; log in with a browser at http://[IP]:8080.
The default username and password is “admin”, “admin”.

Although the initial RAID6 initialization took only about 15 minutes, it took around 24 hours for the RAID6 synchronization to complete.
During this time the volume is accessible for storage, the device is just busy and not as responsive.

Unlike the DroboPro that shows no diagnostics, and generates an encrypted diagnostic file, the TS-859 has detailed diagnostics.

Unlike the DroboPro, the TS-859 generates email alerts from the device itself and does not require any client software.

SMB / CIFS shares are enabled by default.

iSCSI target creation is very simple using a wizard.

While configuring the TS-859, I ran into a few small problems.
I quickly found the help and information I needed on the QNAP forum.
Unlike the DroboPro forum, the QNAP forum does not require a device serial number and is open to anybody.
The TS-859’s outbound network communication (SMTP, NTP, etc.) defaults to LAN1.
I had LAN1 directly connected for iSCSI and LAN2 connected to the routable network.
NTP time syncs were failing; after switching LAN1 and LAN2, the device could access the internet, and NTP and the front page RSS feed started working.
Make sure to connect LAN1 to a network that can access the internet.
When I first initialized the RAID6 array, drive 8 was accessible and initializing, but didn’t report any SMART information.
I received instructions from the forum on how to use SSH to diagnose the drive, and after replacing the drive, SMART worked fine.
What I really wanted to do was compare performance, and to keep things fair I set up a configuration that had all machines connected at the same time.
This way I could run the tests one by one on the various devices, without needing to change configurations.

The client test machine is a Windows Server 2008 R2, DELL OptiPlex 960, Intel Quad Core Q9650 3GHz, 16GB RAM, Intel 160GB SSD, Hitachi A7K2000 2TB SATA, Intel Pro 1000 ET Dual Port.
The file server is a Windows Server 2008 R2, Intel S5000PSL, Dual Quad Core Xeon E5500, 32GB RAM, Super Talent 250GB SSD, Areca ARC-1680 RAID controller, 10 x Hitachi A7K2000 2TB SATA, RAID6, Intel Pro 1000 ET Dual Port.
The DroboPro has 8 x Hitachi A7K2000 2TB SATA, dual drive redundancy BeyondRAID, firmware 1.1.4.
The TS-859 Pro has 8 x Hitachi A7K2000 2TB SATA, RAID6, firmware 3.2.2b0128.

The client’s built in gigabit network card is connected to the switch.
The server’s built in gigabit network card is connected to the switch.
The TS-859 Pro LAN1 is connected to the switch.
The TS-859 Pro LAN2 is directly connected to the client on one of the Pro 1000 ET ports.
The DroboPro LAN1 is directly connected to the client on one of the Pro 1000 ET ports.

The DroboPro is configured as an iSCSI target hosting a 16TB volume.
The TS-859 Pro is configured as an iSCSI target hosting a 10TB volume.
The difference in size is unintentional: both units support thin provisioning, but the DroboPro maximum defaults to the size of all drives combined, while the TS-859 maximum defaults to the effective RAID size.
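
Here is a back-of-the-envelope Python sketch of where those default maximums come from (decimal TB, ignoring filesystem overhead):

```python
# Default maximum thin-provisioned LUN sizes with 8 x 2TB drives:
# the DroboPro offers the sum of all drives, the TS-859 the usable
# RAID6 capacity (two drives' worth of parity).
drives, drive_tb = 8, 2

drobopro_max_tb = drives * drive_tb           # 16 TB
ts859_raid6_tb  = (drives - 2) * drive_tb     # 12 TB

print(drobopro_max_tb, ts859_raid6_tb)
```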

The client maps the DroboPro iSCSI target as a GPT simple volume.

The client maps the TS-859 Pro iSCSI target as a GPT simple volume.

The first set of tests was done using ATTO Disk Benchmark 2.46.
Intel 160GB SSD:

Hitachi A7K2000 2TB SATA:

DroboPro iSCSI:

TS-859 Pro (1500 MTU) iSCSI:

TS-859 Pro Jumbo Frame (9000 MTU) iSCSI:

Read performance:
Device Speed (MB/s)
Intel SSD SATA 274
Hitachi SATA 141
TS-859 Pro Jumbo iSCSI 116
TS-859 Pro iSCSI 113
DroboPro iSCSI 62

Write performance:
Device Speed (MB/s)
Hitachi SATA 141
Intel SSD SATA 91
TS-859 Pro Jumbo iSCSI 90
TS-859 Pro iSCSI 83
DroboPro iSCSI 65


Summary:



The next set of tests used robocopy to copy a fileset from the local Hitachi SATA drive to the target drive backed by iSCSI.

The fileset consists of a single 24GB Ghost file, 3087 JPG files totaling 17GB, and 25928 files from the Windows XP SP3 Windows folder totaling 5GB.
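
The throughput values in the tables below are in Bytes/s; a quick Python sketch that averages the three runs and converts them to MB/s (sample values are the Ghost-file runs from the tables that follow):

```python
# Average three robocopy runs (Bytes/s) and convert to MB/s (decimal).
def avg_mb_per_s(runs_bytes_per_s) -> float:
    return sum(runs_bytes_per_s) / len(runs_bytes_per_s) / 1_000_000

drobopro_ghost = [67998715, 66449606, 61345194]
ts859_ghost    = [94824771, 103356597, 102596286]

print(f"DroboPro Ghost:   {avg_mb_per_s(drobopro_ghost):6.1f} MB/s")   # ~65.3
print(f"TS-859 Pro Ghost: {avg_mb_per_s(ts859_ghost):6.1f} MB/s")      # ~100.3
```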

DroboPro iSCSI:
Fileset Run 1 (B/s) Run 2 (B/s) Run 3 (B/s) Average (B/s)
Ghost 67998715 66449606 61345194 65264505
JPG 47376106 34469965 28865504 36903858
XP 33644442 21231487 18780348 24552092
Total 149019263 122151058 108991046 126,720,456


System load during Ghost file copy to DroboPro:


TS-859 Pro iSCSI:
Fileset Run 1 (B/s) Run 2 (B/s) Run 3 (B/s) Average (B/s)
Ghost 94824771 103356597 102596286 100259218
JPG 50591459 51817921 55830439 52746606
XP 39133922 38128876 37972580 38411793
Total 184550152 193303394 196399305 191,417,617


TS-859 Pro Jumbo iSCSI:
Fileset Run 1 (B/s) Run 2 (B/s) Run 3 (B/s) Average (B/s)
Ghost 91427745 113113714 112684967 105742142
JPG 49525622 51203544 51477482 50735549
XP 31910014 37429864 37699130 35679669
Total 172863381 201747122 201861579 192,157,361


System load during Ghost file copy to TS-859 Pro Jumbo:

This test uses the same fileset, but copies the files over SMB / CIFS.
Server SMB:
Fileset Run 1 (B/s) Run 2 (B/s) Run 3 (B/s) Average (B/s)
Ghost 108161169 116949441 115138722 113416444
JPG 53969349 56842239 55586620 55466069
XP 15829769 17550875 19336648 17572430
Total 177960287 191342555 190061990 186,454,944


TS-859 Pro SMB:
Fileset Run 1 (B/s) Run 2 (B/s) Run 3 (B/s) Average (B/s)
Ghost 64295886 65486617 63494735 64425746
JPG 52988736 52633239 53177864 52933279
XP 14345937 15703244 15506456 15185212
Total 131630559 133823100 132179055 132,544,238



Summary:

In terms of absolute performance the TS-859 Pro with Jumbo Frames is the fastest.
For iSCSI, the TS-859 Pro with Jumbo Frames is the fastest.
For SMB, the W2K8R2 server is the fastest.
If we look at the system load graphs we can see that the DroboPro network throughput is frequently stalling, while the TS-859 is consistently smooth.
This phenomenon has been a topic of discussion on the DroboPro forum for some time, and the speculation is that the hardware cannot keep up with the network load.
Further speculation is that because the BeyondRAID technology is filesystem aware, it requires more processing power compared to a traditional block level RAID that is filesystem agnostic.

So let’s summarize:
The TS-859 Pro and the DroboPro are about the same price, around $1500.
The TS-859 Pro is a little louder than the DroboPro (with the DroboPro cover on).
The TS-859 Pro is arguably not as pretty as the DroboPro.
The TS-859 Pro has ample diagnostics and remote management capabilities, the DroboPro has none.
The TS-859 Pro has loads of features, the DroboPro provides only basic storage.
The TS-859 Pro is easy to set up, the DroboPro requires a USB connection and still fails to configure correctly, requiring manual intervention.
The TS-859 Pro outperforms the DroboPro by 52% (see the quick check below).
The TS-859 Pro will stay in my lab, the DroboPro will go 🙂
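
For reference, here is where that 52% comes from, assuming it compares the average robocopy iSCSI totals of the TS-859 Pro with Jumbo Frames against the DroboPro:

```python
# Average robocopy iSCSI totals (Bytes/s) from the tables above.
ts859_jumbo_total = 192_157_361
drobopro_total    = 126_720_456

print(f"{ts859_jumbo_total / drobopro_total - 1:.0%}")   # ~52%
```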