Synology DS1511+ vs. QNap TS-859 Pro, iSCSI MPIO Performance

I have been very happy with my QNap TS-859 Pro (Amazon), but I’ve run out of space while archiving my media collection, and I needed to expand the storage capacity. You can read about my experience with the TS-859 Pro here, and my experience archiving my media collection here.
My primary objective with this project is storage capacity expansion, and my secondary objective is improved performance.

My choices for storage capacity expansion included:

  • Replace the 8 x 2TB drives with 8 x 3TB drives, to give me 6TB of extra storage. The volume expansion would be very time consuming, but my network setup can remain unchanged during the expansion.
  • Get a second TS-859 Pro with 8 x 3TB drives, to give me 18TB of extra storage. I would need to add the new device to my network, and somehow rebalance the storage allocation across the two devices, without changing the file sharing paths, probably by using directory mount points.
  • Get a Synology DS1511+ (Amazon) and a DX510 (Amazon) expansion unit with 10 x 3TB drives to replace the QNap, to give me 12TB of extra storage, expandable to 15 x 3TB drives for 36TB of total storage. I will need to copy all data to the new device, then mount the new device in place of the old device.

I opted for the DS1511+ with one DX510 expansion unit; I can always add a second DX510 and expand the volume later if needed.
As far as hard drives go, I’ve been very happy with the Hitachi Ultrastar A7K2000 2TB drives I use in my workstations and the QNap, so I stayed with Hitachi and chose the larger Ultrastar 7K3000 3TB drives for the Synology setup.

For improving performance I had a few ideas:

  • The TS-859 Pro is a bit older than the DS1511+, and there are newer and more powerful QNap models available, like the TS-859 Pro+ (Amazon) with a faster processor, or the TS-659 Pro II (Amazon) with a faster processor and SATA3 support, so it is not totally fair to compare the TS-859 Pro performance against the newer DS1511+. But the newer QNap models do not support my capacity needs.
  • I use Hyper-V clients and dynamic VHD files located on an iSCSI volume mounted in the host server. I elected this setup because it allowed me great flexibility in creating logical volumes for the VMs, without actually requiring the space to be allocated. In retrospect this may have been convenient, but it was not performing well in large file transfers between the iSCSI target and the file server Hyper-V client.
    For my new setup I was going to mount the iSCSI volume as a raw disk in the file server Hyper-V client. This still allows me to easily move the iSCSI volume between hosts, but the performance will be better than fixed-size VHD files, and much better than dynamic VHD files.
    Here is a blog post describing some options for using iSCSI and Hyper-V.
  • I used iSCSI thin provisioning, meaning that the logical target has a fixed size, but the physical storage only gets allocated as needed. This is very convenient, but turned out to be slower than instant allocation. The QNap iSCSI implementation is also a file-level iSCSI LUN, meaning that the iSCSI volume is backed by a file on an EXT4 volume.
    For my new setup I was going to use the Synology block-level iSCSI LUN, meaning that the iSCSI volume is directly mapped to a physical storage volume.
  • I use a single LAN port to connect to the iSCSI target, meaning the IO throughput is limited by network bandwidth to 1Gb/s or 125MB/s.
    For my new setup I wanted to use 802.3ad link aggregation or Multi Path IO (MPIO) to extend the network speed to a theoretical 2Gb/s or 250MB/s. My understanding of link aggregation turned out to be totally wrong, and I ended up using MPIO instead.

To create a 2Gb/s network link between the server and the storage, I teamed two LAN ports on the Intel server adapter, created a bond of the two LAN ports on the Synology, and created two trunks for those connections on the switch. This gave me a theoretical 2Gb/s pipe between the server and the iSCSI target, but my testing showed no improvement in performance over a single 1Gb/s link. After some research I found that although the logical link is 2Gb/s, a physical network stream going from one MAC address to another MAC address is still limited by the physical transport speed, i.e. 1Gb/s. This means that link aggregation is well suited to, for example, connecting a server to a switch using a trunk and allowing multiple clients access to the server over the switch, each at full speed, but it has no performance benefit when there is a single source and destination, as is the case with iSCSI. Since link aggregation did not improve the iSCSI performance, I used MPIO instead.
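
The arithmetic behind this is worth spelling out. Below is a toy sketch, not a benchmark: the two gigabit ports match my setup, but the functions are just a simplified model of how 802.3ad (one link per flow) and MPIO (one iSCSI session per path) behave.

```python
# Toy model of why 802.3ad link aggregation does not speed up a single
# iSCSI flow, while MPIO can. Assumes two 1Gb/s ports, as in my setup.

GIGABIT_MBPS = 1000 / 8  # 1Gb/s expressed as MB/s (125MB/s, ignoring protocol overhead)

def lacp_throughput(links: int, flows: int) -> float:
    """802.3ad hashes each flow (source/destination pair) onto ONE physical
    link, so even in the best case a single flow never exceeds one link."""
    return min(flows, links) * GIGABIT_MBPS

def mpio_throughput(links: int, sessions: int) -> float:
    """MPIO opens an iSCSI session per path and load-balances I/O across
    them, so a single initiator/target pair can use every path."""
    return min(sessions, links) * GIGABIT_MBPS

print(lacp_throughput(links=2, flows=1))     # 125.0 -> what I measured with the bond
print(mpio_throughput(links=2, sessions=2))  # 250.0 -> the theoretical MPIO ceiling
```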

I set up a test environment where I could compare the performance of different network and device configurations using readily available hardware and test tools. Although my testing produced reasonably accurate relative results, due to the differences in environments, it can’t really be used for absolute performance comparisons.

Disk performance test tools:

  • ATTO Disk Benchmark
  • CrystalDiskMark (CDM)

Server setup:

Network setup:

  • HP ProCurve V1810 switch, Jumbo Frames enabled, Flow Control enabled.
  • Jumbo Frames enabled on all adapters.
  • CAT6 cables.
  • All network adapters connected to the switch.

QNap setup:

Synology setup:

To test the performance using the disk test tools I mounted the iSCSI targets as drives in the server. I am not going to cover details on how to configure iSCSI; you can read the Synology and QNap iSCSI documentation, and more specifically the MPIO documentation for Windows, Synology, and QNap.
A few notes on setting up iSCSI:

  • The QNap MPIO documentation shows LAN-1 and LAN-2 in a trunked configuration, but as far as I could tell, the best practices documentation from Microsoft, DELL, Synology, and other SAN vendors says that trunking and MPIO should not be mixed. As such I did not trunk the LAN ports on the QNap.
  • I connected all LAN cables to the switch. I could have used direct connections to eliminate the impact of the switch, but this is not how I will install the setup, and the switch should be sufficiently capable of handling the load without adding any performance degradation.
  • Before trying to enable MPIO on Windows Server, first connect one iSCSI target and map the device, then add the MPIO feature. If you do not have a mapped device, the MPIO iSCSI option will be greyed out.
  • The server’s iSCSI target configuration explicitly bound the source and destination devices based on the adapters’ IP addresses, i.e. server LAN-1 would bind to NAS LAN-1, etc. This ensured that traffic would only be routed to and from the specified adapters.
  • I found that the best MPIO load balance policy was the Least Queue Depth option (illustrated in the sketch below).
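
To illustrate what that policy does, here is a toy sketch of my own (not the Windows MPIO DSM itself): each new I/O request is simply sent down whichever path currently has the fewest outstanding requests.

```python
# Toy illustration of the "Least Queue Depth" MPIO policy: route each new
# I/O to the path with the fewest outstanding (in-flight) requests.
outstanding = {"path-1": 0, "path-2": 0}  # in-flight I/O count per path

def submit_io() -> str:
    path = min(outstanding, key=outstanding.get)  # least-loaded path
    outstanding[path] += 1
    return path

def complete_io(path: str) -> None:
    outstanding[path] -= 1

print(submit_io())  # path-1
print(submit_io())  # path-2 (path-1 already has one in-flight request)
print(submit_io())  # path-1 (tie again, min() returns the first key)
```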

During my testing I encountered a few problems:

  • The DX510 expansion unit would sometimes not power on when the DS1511+ is powered on, or would sometimes fail to initialize the RAID volume, or would sometimes go offline while powered on. I RMA’d the device, and the replacement unit works fine.
  • During testing of the DS1511+, the write performance would sometimes degrade by 50% and never recover. The only solution was to reboot the device. Upgrading to the latest 3.1-1748 DSM firmware solved this problem.
  • During testing of the DS1511+, when one of the MPIO network links would go down, e.g. when I unplugged a cable, ghost iSCSI connections would remain open, and the iSCSI processes would consume 50% of the NAS CPU time. The only solution was to reboot the device. Upgrading to the latest 3.1-1748 DSM firmware solved this problem.
  • I could not get MPIO to work with the DS1511+, yet no errors were reported. It turns out that LAN-1 and LAN-2 must be on different subnets for MPIO to work (see the sanity-check sketch after this list).
  • Both the QNap and Synology exhibit weird LAN traffic behavior when both LAN-1 and LAN-2 are connected and the server generates traffic directed at LAN-1 only. The NAS resource monitor would show high traffic volumes on LAN-1 and LAN-2, even with no traffic directed at LAN-2. I am uncertain why this happens, maybe a reporting issue, maybe a switching issue, but to avoid it influencing the tests, I disconnected LAN-2 while not testing MPIO.
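
Because the subnet requirement fails silently, a quick sanity check of the path plan is worthwhile. The sketch below uses hypothetical example addresses, not my actual network; it simply encodes the rules described above: each server NIC binds to the matching NAS NIC on the same subnet, and the two paths must use different subnets.

```python
# Sanity-check an MPIO path plan: each initiator/target pair shares a
# subnet, and no two paths share a subnet (hypothetical example addresses).
import ipaddress

paths = [
    {"server": "10.0.1.10/24", "nas": "10.0.1.20/24"},  # server LAN-1 <-> NAS LAN-1
    {"server": "10.0.2.10/24", "nas": "10.0.2.20/24"},  # server LAN-2 <-> NAS LAN-2
]

subnets = []
for p in paths:
    server_net = ipaddress.ip_interface(p["server"]).network
    nas_net = ipaddress.ip_interface(p["nas"]).network
    # Traffic for a path must stay on its own pair of adapters.
    assert server_net == nas_net, f"Path endpoints are on different subnets: {p}"
    subnets.append(server_net)

# The DS1511+ gotcha: MPIO only works if the paths are on different subnets.
assert len(set(subnets)) == len(subnets), "MPIO paths share a subnet"
print("Path plan looks sane:", subnets)
```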

My test methodology was as follows:

  • Mount either the QNap or Synology iSCSI device, and power off the device not being tested.
  • Connect the iSCSI target using LAN-1 only and unplug LAN-2, or connect using MPIO with LAN-1 and LAN-2 active.
  • Run all CDM tests with iterations set at 9, and a 4GB file-set size.
  • Run ATTO with the queue depth set to 8, and a 2GB file-set size.
  • As a baseline, I also tested the Samsung PM810 SSD drive using ATTO and CDM.

Test result summary:

All values in MB/s.

Device           ATTO Read  ATTO Write  CDM Read  CDM Write  Total (MB/s)
PM810              267.153     260.839   256.674    251.850     1,036.516
DS1511+ MPIO       244.032     126.030   141.213    115.032       626.307
TS-859 Pro MPIO    136.178      95.152   116.015     91.097       438.442
DS1511+            122.294     120.172    89.258    105.618       437.342
TS-859 Pro         119.370      99.864    76.529     89.752       385.515
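
The Total column is simply the sum of the four individual results for each configuration; a few lines of Python reproduce the ranking from the numbers above:

```python
# Reproduce the "Total (MB/s)" column: the sum of the four test results
# per device, used to rank the configurations.
results = {  # ATTO read, ATTO write, CDM read, CDM write (MB/s)
    "PM810":           (267.153, 260.839, 256.674, 251.850),
    "DS1511+ MPIO":    (244.032, 126.030, 141.213, 115.032),
    "TS-859 Pro MPIO": (136.178,  95.152, 116.015,  91.097),
    "DS1511+":         (122.294, 120.172,  89.258, 105.618),
    "TS-859 Pro":      (119.370,  99.864,  76.529,  89.752),
}

for device, scores in sorted(results.items(), key=lambda kv: -sum(kv[1])):
    print(f"{device:16} {sum(scores):9.3f}")
```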

Detailed results: ATTO and CDM screenshots for the PM810, DS1511+ MPIO, TS-859 Pro MPIO, DS1511+, and TS-859 Pro configurations.

Initially, I was a little concerned about the DX510 being in a separate case connected with an eSATA cable to the main DS1511+. Especially after I had to RMA my first DX510 because of what appeared to be connectivity issues. I was also concerned that there would be a performance difference between the 5 drives in the DS1511+ and the 5 drives in the DX510. Testing showed no performance difference between a 5 drive volume and a 10 drive volume, and the only physically noticeable difference was that the drives in the DX510 ran a few degrees hotter compared to the drives in the DS1511+.

As you can see from the results, the DS1511+ with MPIO performs really well, especially the 244MB/s ATTO read, which gets close to the theoretical maximum of 250MB/s over a 2Gb/s link.

But technology moves quickly, and as I was compiling my test data for this post, Synology released two new NAS units, the DS3611xs and the DS2411+. The DS2411+ is very appealing: it is equivalent in performance to the DS1511+, but supports 12 drives in the main enclosure.
I may just have to exchange my DS1511+ and DX510 for a DS2411+…

[Update: 25 July 2011]
I returned the DS1511+ and DX510 in exchange for a DS2411+.
Read my performance review here.

9 Comments

  1. MichaelG says:

    Wonderfully expressed… I'm just now trying iSCSI for the first time. I came across your post while looking for what to expect from a 1511+ that I just ordered. Any suggestions on how to improve my setup? Ubuntu 10.04 LTS, 2 x 1Gb network ports, bonded. 1511+ (with its 2 x 1Gb network ports). HP ProCurve 2650 (with the server on a 1Gb port). The other 1Gb port is still open. My network seems pretty weak, I just tried an iSCSI file transfer from a Windows 7 client and the Ubuntu as a target. I'm only getting 10-13MB/sec for a 735MB ISO image file. My Windows machine is connected to a 100Mb port on the switch, so that explains the bottleneck. Would it be good to unbond the server and use a crossover cable to connect to one port on the 1511+, or just add an additional ethernet card with 2 ports to dedicate to the 1511+? This takes the switch out of it completely. It seems the traffic on our local subnet would get hammered by this thing, so isolating it seems like the right thing to do.

  2. I don't think the bonding / trunking is going to help much with iSCSI, see my post for details. As for the PC speed, the 100Mb/s is the bottleneck, get your entire network and adapters on gigabit. You should be able to get close to 120MB/s by just going gigabit.

  3. MichaelG says:

    My 6TB Synology is installed and running great! Quiet, cool, and using 2 x 1Gb connections, one is using a crossover cable directly to the server it sits next to and all iSCSI targets are connected through that. I'm getting 110MB/s average throughput… spikes around 124… I'm very happy! The server network interfaces were unbonded and now the other 1Gb interface goes to the 1Gb port on an older switch. Management is not springing for a new 48 port Gigabit switch just yet. 😉 Especially since I just improved everyone's throughput to the server by a noticeable difference. BackupPC is rockin' the house using the Synology as its repository target. I'm setting up the offsite disaster recovery backup using rsync over SSH with a cron job on the server. BackupPC is being implemented as a preference to Roaming Profiles for all the Windows backups due to the frustration with long logon/logoff times (5-30 minutes!).

  4. DKSG says:

    Hi Pieter, you have mentioned your concern about RAID spanning across the DS1511+ and the DX510. Have you done any tests to show that it is a safe setup to span RAID across the boxes? Will there be any loss of the array or data should the DX510 cease to function, or the eSATA connection between the boxes fail during write operations? Thanks. Regards, David

  5. @DKSG The array will be in a "crash" state if you power on without the DX510, or if the DX510 goes offline while the DS1511 is still on. If you reboot, with both on, the array comes back online. But I assume there is a chance of data loss.

  6. DKSG says:

    @Pieter I'm not too concerned about losing the last bit of data should a crash happen or the eSATA connection break during write operations. I am particularly concerned that, should such an event occur, the array will be rendered disabled even after the setup is back to a normal condition again. Any info to share on this aspect? Thanks

  7. dperrault says:

    I need about 12TB of backup space for my (4) 3TB hard drives in my Mac Pro. If I go with RAID 5, I could go with the 5-bay Synology DS1511+, or the QNAP 559. Otherwise with RAID 6, the QNAP is the only option between the two. I will eventually use this as a primary storage space, and buy a second unit to back up the first. I will be running Adobe Lightroom most of the time. I am not concerned with expandability, I can use iSCSI to expand the QNAP drives and not be limited by the cable length of SATA. Would RAID 6 vs RAID 5 be a compelling reason to spend the extra $400-500 for the 6-bay QNAP 659+ or Pro II? Thanks

  8. Photoman says:

    Hi, thanks for writing this post. I just wondered if you had any experience with the synology 1511+ and transfer speeds with macs?

  9. Don’t team the NICs if you want to use MPIO/iSCSI. This is not supported. Check page 2 of this PDF: http://files.qnap.com/news/pressresource/product/How_to_connect_to_iSCSI_targets_on_QNAP_NAS_using_MPIO_on_Windows_2008.pdf
    quote:
    DO NOT use NIC Teaming on the server: NIC Teaming used with iSCSI is not supported
    by Microsoft.
