XBMC on NUCs and Pis

I’m still looking for the perfect XBMC hardware; it must be small, silent, low power, low heat, capable of 1080p and HD audio, and able to play anything I throw at it without a hiccup. The number of options is increasing, but there is no clear winner.

 

 

I previously tested a XIOS DS running XBMC on Android, and XBMC on Linux. At that time the builds were pretty unstable. I retested the latest Linux builds, which also include XBMC 12 Frodo RC2.

I tested the 121512 release; after rebooting, I just saw a black screen. I could see that the AVR had negotiated HDMI audio, but the screen remained black. Reading the forum thread, there were many reports of similar problems with the same symptoms: leave the system up, and after 15 minutes XBMC loads. The bug has been identified, but not yet fixed in official firmware. I used a community build that includes the fix, and the system booted normally.

I noticed that there are now two hardware variants of the DS: an M1 version, which I have, and a new M3 version, which apparently includes a faster processor and more memory, and is currently only shipped in the EU and UK. This naming seems consistent with the AMLogic AML8726-M SoC family, which pairs an ARM Cortex-A9 with a Mali-400 graphics processor.

The playback results were rather disappointing: no HD audio passthrough, high-bitrate content would stutter, and I got frequent network re-buffering. This device still shows promise, but not in its current state.

 

 

I tested XBMC on a Raspberry Pi. The Pi devices are pretty cheap at $35, but the units at this price have very long lead times. Instead I opted to buy an in-stock Model B Revision 2 unit from Amazon, and also a case.

The Pi Model B Revision 2 uses the Broadcom BCM2835 SoC, containing an ARM1176JZF-S CPU and a VideoCore IV graphics processor.

Deploying XBMC to a Pi is rather more involved than on the DS, and I opted to use the Raspbmc distribution, which includes easy-to-use tools for Windows. The deployment tool creates a bootable SD card that then retrieves and installs the latest builds over the internet, similar to many Linux network boot disk installers.

The playback results were rather disappointing: no HD audio support, high-bitrate content would stutter, and I got very frequent network re-buffering.

Similar to openELEC, which provides an XBMC plugin for OS configuration, Raspbmc configuration in XBMC is done using the Raspbmc plugin. When I first clicked the plugin I thought nothing had happened; after several more remote clicks it suddenly displayed and executed whatever my queued remote clicks had selected, causing a restart. The plugin provides lots of configuration options, including switching XBMC versions, downloading and running nightly builds, and advanced configuration, but it is very slow to load.

XBMC on the DS supported HD audio passthrough, but Raspbmc did not include HD audio support. The plugin allowed me to enable the XBMC AudioEngine, with a warning that it may not work. After restarting XBMC with AE enabled, there were options for HD audio, but AE did not detect the HDMI audio output device and only offered audio output over analog or SPDIF.

MPEG2 and VC-1 codecs have to be purchased for the Pi, but as my test results were disappointing, I did not bother purchasing the codecs.
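As I understand it, the licences are sold per device, keyed to the Pi's serial number, and are enabled by adding the supplied keys to config.txt on the SD card's boot partition, along these lines (the key values below are made-up placeholders, not real licences):

decode_MPG2=0x12345678
decode_WVC1=0x87654321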

 

 

I tested one of the new Intel Next Unit of Computing devices, specifically the DC3217IYE. The device is barebones, and I used Kingston KVR16S11K2/16 16GB memory and a Kingston SMS100S2/64G 64GB mSATA card. Oh, and you need to supply your own power cable; I happened to have a spare Monoprice 7687 3-prong power cable lying around that fit the PSU.

I don’t know what to make of it, but Intel included a gadget in the box that plays the Intel jingle every time you open it. I’m inclined to think that they could have included a power cable instead of the jingle gadget, but my kids do enjoy playing with the box, so it may have some marketing value.

Here are a few unboxing pictures:

 

I installed openELEC v3 Beta 6, which includes XBMC 12 Frodo RC2.

Most things worked fine: the audio output device was automatically detected and set to HDMI, but HD audio passthrough did not work, and several videos showed artifacts during playback. Even worse, some videos produced so many artifacts that the device hung. I assume the video issue is a problem with the Intel HD graphics driver picked up by openELEC.

I am using a D-Link DSM-22 RF remote (I wish I could find more for sale), and I found that the key presses were erratic. After moving the RF dongle from a rear USB port to the front USB port, everything worked fine; I assume there is some interference near the back of the unit.

In terms of physical size the NUC compares well against a Zotac ZBOX Nano XS AD11 Plus, but price-wise the NUC is more expensive once memory and flash storage are added.

The Nano XS is a Fusion based device, which means it will never get HD audio passthrough (AMD drivers lack HD audio support on Linux), so if openELEC and Intel can resolve the video corruption on the NUC, and XBMC can resolve the HD passthrough problem with my setup, the NUC would be a good contender.

 

I am still running openELEC on my Zotac ZBOX ID84 system with an NVIDIA GeForce GT520M GPU. This GPU supports HD audio passthrough, but as with my other devices, it does not work on my setup. The problem appears to be related to how XBMC AudioEngine targets audio output: instead of sending the audio to the AVR, it sends it to the television, but this is speculation on my part. I logged tickets with openELEC and XBMC, and there is a forum thread at openELEC with other Yamaha and Onkyo AVR users reporting similar problems, but nobody from openELEC or XBMC has yet responded 😦

 

Here is a comparison of device sizes, top is Raspberry Pi, then XIOS DS, then ZBOX AD11, then Intel NUC, and ZBOX ID84 at the bottom:

 

My quest continues.

LSI turns their back on Green

I previously blogged here and here on my research into finding power-saving RAID controllers.

I have been using LSI MegaRAID SAS 9280-4i4e controllers in my Windows 7 workstations and LSI MegaRAID SAS 9280-8e controllers in my Windows Server 2008 R2 servers. These controllers work great; my workstations go to sleep and wake up, and in both workstations and servers the drives spin down when not in use.

I am testing a new set of workstation and server systems running Windows 8 and Server 2012, and using the “2nd generation” PCIe 3.0 based LSI RAID controllers. I’m using LSI MegaRAID SAS 9271-8i with CacheVault and LSI MegaRAID SAS 9286CV-8eCC controllers.

I am unable to get any of the configured drives to spin down on either of the controllers, in either Windows 8 or Windows Server 2012.

LSI has not yet published any Windows 8 or Server 2012 drivers on their support site. In September 2012, after the public release of Windows Server 2012, LSI support told me drivers would ship in November, and now they tell me drivers will ship in December. All is not lost as the 9271 and 9286 cards are detected by the default in-box drivers, and appear to be functional.

I had hoped the no spin-down problem was a driver issue, and that it would be corrected by updated drivers, but that appears to be wishful thinking.

I contacted LSI support about the drive spin-down issue, and was referred to this August 2011 KB 16563, pointing to KB 16385 stating:

newer versions of firmware no longer support DS3; the newest version of firmware to support DS3 was 12.12.0-0045_SAS_2108_FW_Image_APP-2.120.33-1197

When I objected to the removal, support replied with this canned quote:

In some cases, when Dimmer Switch with DS3 spins down the volume, the volume cannot spin up in time when I/O access is requested by the operating system.  This can cause the volume to go offline, requiring a reboot to access the volume again.

LSI basically turned their back on green by disabling drive spin-down on all new controllers and new firmware versions.

I have not had any issues with this functionality on my systems, and spinning down unused drives to save power and reduce heat is a basic operational requirement. Maybe there are issues with some systems, but at least give me the choice of enabling it in my environment.

A little bit of searching shows I am not alone in my complaint, see here and here.

And Intel, in a November 2012 KB 033877, says that they have disabled drive power save on all their RAID controllers; maybe not that surprising, given that Intel uses rebranded LSI controllers.

After a series of overheating batteries and S3 failures, I gave up on Adaptec RAID controllers long ago, but this situation with LSI is making me take another look at them.

Adaptec is advertising Intelligent Power Management as a feature of their controllers. I ordered a 7805Q controller, and will report my findings in a future post.

Storage Spaces Leaves Me Empty

I was very intrigued when I found out about Storage Spaces and ReFS being introduced in Windows Server 2012 and Windows 8. But now that I’ve spent some time with them, I’m left disappointed, and I will not be trusting my precious data to either of these features just yet.

 

Microsoft publicly announced Storage Spaces and ReFS in early Windows 8 blog posts. Storage Spaces was of special interest to the Windows Home Server community in light of Microsoft first dropping support for Drive Extender in Windows Home Server 2011, and then completely dropping Windows Home Server, and replacing it with Windows Server 2012 Essentials. My personal interest was more geared towards expanding my home storage capacity in a cost effective and energy efficient way, without tying myself to proprietary hardware solutions.

 

I archive all my CD, DVD, and BD discs, and store the media files on a Synology DS2411+ with 12 x 3TB drives in a RAID6 volume, giving me approximately 27TB of usable storage. That seems like a lot of space, but I’ve run out, and I have a backlog of BD discs that need to be archived. In general I have been very happy with Synology (except for an ongoing problem with “Local UPS was plugged out” errors), and they do offer devices capable of more storage, specifically the RS2212+ with the RX1211 expansion unit, offering up to 22 combined drive bays. But at $2300 plus $1700 this is expensive, capped at 22 drives, and further ties me to Synology. Compare that with $1400 for a Norco DS24-E or $1700 for a SansDigital ES424X6+BS 24-bay 4U storage unit, plus an inexpensive LSI OEM-branded SAS HBA from eBay (or an LSI SAS 9207-8e if you like the real thing), connected to Windows Server 2012 running Storage Spaces and ReFS, and things look promising.
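As a quick sanity check on that 27TB number (a rough calculation of my own, assuming the drives are sold in decimal terabytes while the NAS reports binary terabytes):

PS C:\> # RAID6 uses two drives for parity, leaving ten 3 TB (decimal) data drives
PS C:\> [math]::Round(((12 - 2) * 3e12) / 1TB, 2)
27.28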

Arguably I am swapping one proprietary technology for another, but with native Windows support, I have many more choices for expansion. One could make the same argument for the use of ZFS on Linux, and if I were a Linux expert, that may have been my choice, but I’m not.

 

I tested using a SuperMicro SuperWorkstation 7047A-73, with dual Xeon E5-2660 processors and 32GB RAM. The 7047A-73 uses an X9DA7 motherboard that includes an LSI SAS2308 6Gb/s SAS2 HBA connected to 8 hot-swap drive bays.

For comparison with a hardware RAID solution I also tested using an LSI MegaRAID SAS 9286CV-8e 6Gb/s SAS2 RAID adapter, with the CacheCade 2.0 option, and a Norco DS12-E 12-bay SAS2 2U expander.

For drives I used Hitachi Deskstar 7K4000 4TB SATA3 desktop drives and Intel 520 series 480GB SATA3 SSD drives. I did not test with enterprise-class drives; 4TB models are still excessively expensive, which defeats the purpose of cost-effective home storage.

 

I previously reported that the Windows Server 2012 and Windows 8 install will hang when trying to install on an SSD connected to the SAS2308. As such I installed Server 2012 Datacenter on an Intel 480GB SSD connected to the onboard SATA3 controller.

Windows automatically installed the drivers for the LSI SAS2308 controller.

I had to manually install the drivers for the C600 chipset RSTe controller, and as reported before, the driver works, but suffers from dyslexia.

The SAS2308 controller firmware was updated to the latest released SuperMicro v13.0.57.0.

 

Since LSI already released v14.0.0.0 firmware for their own SAS2308-based boards like the SAS 9207-8e, I asked SuperMicro support for their v14 version, and they provided me with an as-yet-unreleased v14.0.0.0 firmware build for test purposes. Doing a binary compare between the LSI version and the SuperMicro version, the differences appear to be limited to descriptive model numbers and a few one-byte differences that are probably configuration or default parameters. It is possible to cross-flash between some LSI and OEM adapters, but since I had a SuperMicro version of the firmware, this was not necessary.
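For what it is worth, the compare itself needs nothing fancy; the built-in fc.exe does a byte-level diff (the file names below are placeholders for the two firmware images):

PS D:\Firmware> # note the .exe: in PowerShell, plain fc is an alias for Format-Custom
PS D:\Firmware> fc.exe /b LSI_9207_P14.bin SMC_2308_P14.bin

fc /b prints each differing offset with the byte values from both files, which is enough to spot the model-string and parameter differences.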

SuperMicro publishes a v2.0.58.0 LSI driver that lists Windows 8 support, but LSI has not yet released Windows 8 or Server 2012 drivers for their own SAS2308 based products. I contacted LSI support, and their Windows 8 and Server 2012 drivers are scheduled for release in the P15 November 2012 update.

I tested the SuperMicro v14.0.0.0 firmware with the SuperMicro v2.0.58.0 driver, the SuperMicro v14.0.0.0 firmware with the Windows v2.0.55.84 driver, and the SuperMicro v2.0.58.0 driver with the SuperMicro v13.0.57.0 firmware. Any combination that included the SuperMicro v2.0.58.0 driver or the SuperMicro v14.0.0.0 firmware resulted in problems with the drives or controller not responding. The in-box Windows v2.0.55.84 driver and the released SuperMicro v13.0.57.0 firmware was the only stable combination.

Below are some screenshots of the driver versions and errors:

LSI.2.0.55.84LSI.2.0.58.0

Eventlog.Controller.ErrorEventlog.IO.RetriedEventlog.Reset.DeviceFormat.Failed

 

One of the reasons I am not yet prepared to use Storage Spaces or ReFS is the complete lack of decent documentation, best practice guides, or deployment recommendations. As an example, the only documentation on SSD journal drive configuration is in a TechNet forum post from a Microsoft employee, requiring the use of PowerShell, and even then there is no mention of scaling or size ratio requirements. Yes, the actual PowerShell cmdlet parameters are documented on MSDN, but not their use or meaning.

PowerShell is very powerful and Server 2012 is completely manageable using PowerShell, but an appeal of Windows has always been the management user interface, which is especially important for adoption by SMBs that do not have dedicated IT staff. With Windows Home Server being replaced by Windows Server 2012 Essentials, the lack of storage management via the UI will require regular users to become PowerShell experts, or maybe Microsoft anticipates that configuration UIs will be developed by hardware OEMs deploying Windows Storage Server 2012 or Windows Server 2012 Essentials based systems.

My feeling is that Storage Spaces will be one of those technologies that matures and becomes generally usable one or two releases or service packs after the initial release.

 

I tested disk performance using ATTO Disk Benchmark 2.47, and CrystalDiskMark 3.01c.

I ran each test twice, back to back, and report the average. I realize two runs are not statistically significant, but even two runs took several days to complete in between regular work activities. I opted to only publish the CrystalDiskMark data, as the ATTO Disk Benchmark results varied greatly between runs, while the CrystalDiskMark results were consistent.

Consider the values useful for relative comparison under my test conditions, but not useful for absolute comparison with other systems.

 

Before we get to the results, a word on the tests.

The JBOD tests were performed using the C600 SATA3 controller.
The Simple, Mirror, Triple, and RAID0 tests were performed using the SAS 2308 SAS2 controller.
The Parity, RAID5, RAID6, and CacheCade tests were performed using the SAS 9286CV-8e controller.

The Simple test created a simple storage pool.
The Mirror test created a 2-way mirrored storage pool.
The Triple test created a 3-way mirrored storage pool.
The Parity test created a parity storage pool.
The Journal test created a parity storage pool, with SSD drives used for the journal disks.
The CacheCade test created RAID sets, with SSD drives used for caching.
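Given the documentation gap mentioned below, here is roughly what the Storage Spaces side of those configurations looks like in PowerShell; a minimal sketch with example names, not my exact commands:

# Pool all the poolable SAS disks
New-StoragePool -FriendlyName "Pool" -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $True)

# One virtual disk per test, created and deleted between runs:
# Simple (stripe), 2-way mirror, 3-way mirror, and parity
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Simple" -ResiliencySettingName Simple -UseMaximumSize
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Mirror" -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Triple" -ResiliencySettingName Mirror -NumberOfDataCopies 3 -UseMaximumSize
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Parity" -ResiliencySettingName Parity -UseMaximumSize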

 

As I mentioned earlier, there is next to no documentation on how to use Storage Spaces. In order to use SSD drives as journal drives, I followed information provided in a TechNet forum post.

Create the parity storage pool using PowerShell or the GUI. Then associate the SSD drives as journal drives with the pool.

Windows PowerShell
Copyright (C) 2012 Microsoft Corporation. All rights reserved.

PS C:\Users\Administrator> Get-PhysicalDisk -CanPool $True

FriendlyName CanPool OperationalStatus HealthStatus Usage Size
------------ ------- ----------------- ------------ ----- ----
PhysicalDisk4 True OK Healthy Auto-Select 447.13 GB
PhysicalDisk5 True OK Healthy Auto-Select 447.13 GB

PS C:\Users\Administrator> $PDToAdd = Get-PhysicalDisk -CanPool $True
PS C:\Users\Administrator>
PS C:\Users\Administrator> Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $PDToAdd -Usage Journal
PS C:\Users\Administrator>
PS C:\Users\Administrator>
PS C:\Users\Administrator> Get-VirtualDisk

FriendlyName ResiliencySettingName OperationalStatus HealthStatus IsManualAttach Size
------------ --------------------- ----------------- ------------ -------------- ----
Pool Parity OK Healthy False 18.18 TB

PS C:\Users\Administrator> Get-PhysicalDisk

FriendlyName CanPool OperationalStatus HealthStatus Usage Size
------------ ------- ----------------- ------------ ----- ----
PhysicalDisk0 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk1 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk2 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk3 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk4 False OK Healthy Journal 446.5 GB
PhysicalDisk5 False OK Healthy Journal 446.5 GB
PhysicalDisk6 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk7 False OK Healthy Auto-Select 3.64 TB
PhysicalDisk8 False OK Healthy Auto-Select 447.13 GB
PhysicalDisk10 False OK Healthy Auto-Select 14.9 GB

PS C:\Users\Administrator>

I initially added the journal drives after the virtual drive was already created, but that would not use the journal drives. I had to delete the virtual drive, recreate it, and then the journal drives kicked in. There must be some way to manage this after virtual drives already exist, but again, no documentation.
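In case it saves someone else the trial and error: the ordering that worked for me was to add the SSDs with -Usage Journal first, and only then create the virtual disk. A minimal sketch of the delete-and-recreate step, reusing the names from the output above (deleting the virtual disk destroys any data on it):

Remove-VirtualDisk -FriendlyName "Pool" -Confirm:$false
New-VirtualDisk -StoragePoolFriendlyName "Pool" -FriendlyName "Pool" -ResiliencySettingName Parity -UseMaximumSize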

 

In order to test Storage Spaces using the SAS 9286CV-8e RAID controller I had to switch it to JBOD mode using the commandline MegaCli utility.


D:\Install>MegaCli64.exe AdpSetProp EnableJBOD 1 a0

Adapter 0: Set JBOD to Enable success.

Exit Code: 0x00

D:\Install>MegaCli64.exe AdpSetProp EnableJBOD 0 a0

Adapter 0: Set JBOD to Disable success.

Exit Code: 0x00

D:\Install>

 

The RAID and CacheCade disk sets were created using the LSI MegaRAID Storage Manager GUI utility.

 

Below is a summary of the throughput results:

[Charts: read/write throughput (KB/s) and read/write IOPS]

 

Not surprisingly the SSD drives had very good scores all around for JBOD, Simple, and RAID0. I only had two drives to test with, but I expect more drives to further improve performance.

The Simple, Mirror, and Triple test results speak for themselves: performance halving, and halving again.

The Parity test shows good read performance, and bad write performance. The write performance approaches that of a single disk.

The Parity with SSD Journal disks shows about the same read performance as without journal disks, and the write performance double that of a single disk.

The RAID0 and Simple throughput results are close, but the RAID0 write IOPS are double that of the Simple volume.

The RAID5 and RAID6 read performance is close to Parity, but the write performance is almost tenfold that of Parity. It appears that the LSI card writes to all drives in parallel, while Storage Spaces parity writes to one drive only.

The CacheCade read and write performance is less than without CacheCade, but the IOPS are tenfold higher.

The ReFS performance is about 30% less than the equivalent NTFS performance.

 

 

Until Storage Spaces gets thoroughly documented and improves performance, I’m sticking with hardware RAID solutions.

XBMC for Linux on Pivos XIOS DS

Pivos released an XBMC build for Linux, and I tried it out.

The Pivos XIOS DS is a very small (less than 5” x 5” x 1”) HTPC supporting hardware-accelerated 1080p video and HD audio playback. The XIOS DS supports XBMC for Android and XBMC for Linux, with native hardware acceleration. I reviewed the Android port of XBMC in a previous post.

The XIOS DS is available for $115 at Amazon, placing it, price wise, between the $98 Roku 2 XS and the $178 Boxee Box.

 

I downloaded the 09/07/12 firmware release, and installed it using the system update procedure: extract update.img to a MicroSD card, hold the reset button on the back of the unit, plug in power, and release the reset button when the update screen displays.


 

XBMC launched immediately on reboot, very similar to the XBMC for Linux OpenELEC experience.


 

A quick zoom adjustment and the UI fits on the screen without the need to adjust resolution.


 

Unlike the Android version where I had to use a mouse and keyboard, I could use the included IR remote to perform all operations. And unlike the Android version, where I had to create special guest access SMB shares because NFS was not supported, the Linux version supported NFS shares with no problems.

I did encounter the same problem as current OpenELEC builds, where some addons are reported as broken in the repository, but as with OpenELEC, this did not prevent movie and series media from being correctly identified, or played.

 

I tested a variety of media formats, all in MKV containers, and all played without issue. I did not test DTS, DTS-HD, AC3, and TrueHD passthrough, as this build of XBMC is based on v11 Eden that does not support HD audio (included in the unreleased v12 Frodo), and I had the box directly connected to a television over HDMI, so all audio was downmixed to two channels.

 

All in all the Linux port of XBMC on the XIOS DS worked much better than the Android port, but as the Android port is classified as Alpha and the Linux port classified as Beta, that is expected.

The XIOS DS running Linux XBMC is not up to Boxee Box standards yet, but it may be a contender.

Koubachi Wi-Fi Plant Sensor

Last week several tech and gadget news outlets reported that the Koubachi Wi-Fi Plant Sensor has been released and is available for order (Amazon or SmartHome). I remembered reading about this device some time ago, and I decided to try it out.

Our houseplants are under my care, and they are generally happy and healthy. A simple water moisture indicator would not get my attention, and certainly would not be worthy of a place in my collection of semi-useful/useless Wi-Fi enabled devices (of which the Nest Learning Thermostat is the most useful), but the Koubachi promises more than just moisture monitoring:

Thanks to the unique Plant Care Engine (PCE), Koubachi is able to advise you about everything your plant needs: water, fertilizer, humidity, temperature and light! Koubachi not only tells you WHEN to care for your plants, but also gives you specific instructions HOW.

 

The device is about the size and shape of a golf club driver head:


 

The installation and configuration process is interesting. There is only one button on the device, and no USB port for direct configuration. To configure the device to connect to your Wi-Fi network, you first place it in ad-hoc mode, connect to it directly using your computer's Wi-Fi adapter, access the device settings using a web browser, and configure the Wi-Fi settings, after which the device connects to your home network.

Below are screenshots of the configuration process, starting with online account creation:


 

At this point everything appeared to be set up and working, except the “To my plants” button was not working. On clicking the “Plants” link, I got to a screen where I could add my first plant, but nothing happened when I clicked on or dragged the pot icon. I tried both Chrome and Internet Explorer; same thing.

The following day I logged in from the office, and now the plants link worked and dragging the pot to the canvas let me create my first plant. Maybe the sensor had to check in with the backend before the backend allowed me to add a plant?

After adding the plant to the canvas, you select the type of plant and pot from the online database. Only 538 types seems a bit limited, but my corn plant was easy enough to find.

Below are screenshots of the plant selection process:


 

After adding the plant, you have to associate the sensor with the plant. But that required pressing the button on the sensor, and since I was at the office, I had to wait until I was home the next day to continue the setup.

 

Once configured, the sensor enters a calibration period that takes about a week. In the meantime it displays semi-interesting information:


 

The iPhone app shows similar information to the website, and includes push notifications of events. So far nothing exciting has happened, no email or push notifications; I guess I’ll have to wait until the sensor completes the calibration procedure, or until I sacrifice a plant for the sake of curiosity.

In the meantime I’ll install some more sensors in the other houseplants. I was planning on installing sensors in the patio plants, but this sensor is apparently not quite rainproof, and an outdoor sensor has been announced, to be released in October.

 

Oh, and in case you were concerned about the effect of the Wi-Fi radiation on your plants, there is an FAQ entry for that 🙂

Does the Wi-Fi radiation affect my plant?
No. According to the current state of knowledge there is neither a positive nor a negative effect of Wi-Fi radiation on plants. Note that the Koubachi Wi-Fi Plant Sensor usually transmits the data only once a day and that the transmission lasts only ca. 5 seconds. Therefore, the radiation exposure is many times smaller compared to other wireless devices.

SuperMicro Beta BIOS supports Windows 8 and Server 2012

In a previous post I reported that my SuperMicro SuperWorkstation 7047A-T failed to install Windows 8 or Windows Server 2012 due to an ACPI_BIOS_ERROR. I contacted SuperMicro support, and was informed that new BIOS releases are on their way that will support Windows 8 and Server 2012.

This morning I received an email from SuperMicro, with a new Beta BIOS for the X9DAi motherboard used in the 7047A-T. The new BIOS allowed me to install Windows 8 and Server 2012.

I used a DOS bootable USB key, and installed the new BIOS.

The 7047A-T has USB ports on the back and on the front of the case. The ports on the front are all USB3, and it is not possible to boot from these ports; at least I have not yet found a configuration that allows booting from the USB3 ports. I tried USB2 keys and my newest Kingston DataTraveler HyperX 3.0 super-fast USB3 keys, but the BIOS does not list any boot devices in these USB3 ports. To boot from USB you have to plug the key into one of the rear USB2 ports.

The new BIOS version is “1.0 beta”, compilation date “7/23/2012”. The BIOS screen looks like the more modern AMI EFI BIOSes I’ve seen in other devices, i.e. the thin font instead of the classic console font.


I performed a “Restore Optimized Defaults”, and then went through the options to see what has changed and what is new.

The [Advanced] [Chipset Configuration] [North Bridge] [IOH Configuration] now sets all PCIe busses to GEN3; the old BIOS defaulted to GEN2.

The [Advanced] [SATA Configuration] now enables hot plug on all ports; the old BIOS defaulted to hot plug disabled.

The [Advanced] [Boot Feature] adds a new power configuration item called “EuP”. This seems to be related to EU Directive 2005/32/EC:

EU Directive 2005/32/EC enacted by the European Union member countries dictates that after January 1, 2010, no computer or other energy using product (EuP) sold in the member countries may dissipate more than 1 Watt in the standby (S5) state.

I measured the power utilization, and the machine uses 2W when powered off, 140W at idle in Windows 8 desktop, and 7W while sleeping.

I updated my Windows 8 USB key to the latest build (I have access to), booted from the USB key, and installed Windows 8 without any major issues.

I had swapped the NVidia Quadro 4000 for a faster ATI FirePro V7900. The v1.0 BIOS worked fine with the Quadro 4000, but after installing the V7900, by the time the screen powered on Windows 7 was already booting, and I never had a chance to see the BIOS screen. After installing the new Beta BIOS, the V7900 works as expected and I can see the BIOS screen during POST.

This is a note for ATI: please make sure your VGA driver install UI fits on a 640×480 display. When I swapped the Quadro 4000 for the V7900 and rebooted into Windows 7, I booted into a 640×480 16-color screen. Imagine my frustration trying to guess which button has focus when you can only see the top half of the ATI driver installer.

Windows 8 automatically installed drivers for the V7900.

The only driver Windows 8 did not automatically install is the C600 chipset SAS driver. I installed the Intel Rapid Storage Technology Enterprise (RSTe) drivers, and that solved that problem.

While running Windows 7 on this machine, the Windows Experience Index assessment would always crash. The same test in Windows 8 completed successfully.


I found the 2D and 3D results to be disappointing, and I tried to replace the “ATI FirePro V (FireGL V) Graphics Adapter (Microsoft Corporation – WDDM v1.20)” driver with the ATI Windows 8 Consumer Preview driver. Although the release notes indicate that the V7900 is supported, the driver installation failed with an unsupported hardware error. I’ll have to wait for newer Windows 8 drivers from ATI to see if the test scores improve.

I’m quite happy that I can use my new machines with Windows 8.

I just wish SuperMicro had solved the BIOS incompatibility problems long ago; after all, it has been almost two years since the Windows 8 pre-release program started, and almost a year since the release of the public developer preview.

XBMC for Android on Pivos XIOS DS

In my ongoing quest to find the perfect Home Theater PC platform, I was excited to read that XBMC had been ported to Android. This opens possibilities for XBMC on low cost, low power, low noise, small form factor hardware, with hardware accelerated media playback.

The XBMC Android development was done on a Pivos XIOS DS device, and I ordered one from Amazon. At $115 it is not exactly low cost, especially compared to mature platforms like the Roku 2 XS for $98 or the Boxee Box for $180.

 

The XIOS DS is really small; here is a picture comparing the size of a Roku 2 XS, a XIOS DS, a Zotac ZBOX Nano XS AD11, and a Pulse Eight Pulse Box:


 

“Piovs” vs. “Pivos”: while unpacking the unit I found this little gem printed on the box; one would think that spelling your company name correctly on the packaging is important:


 

If you’re interested in a full unboxing, look here.

 

I installed the box and powered it up; it takes about 90 seconds to power up, much longer than the Roku, Boxee, or OpenELEC.

Navigation using the included IR remote is a bit clunky: the UI has no indication of where the current focus is, and the Ok button sometimes needs to be pressed twice. I can’t really fault Android for this, as the UI is intended for tablet use, not remote use, but it is something that needs work. Here is a screenshot of the opening page:


By default LAN and WiFi are both disabled; if you click the down button, the settings icon becomes active, and you can press the Ok button, once or a few times, and then enable the LAN card.

The box comes installed with Android Gingerbread 2.3.4. The auto-update functionality reports everything is up to date, but you can get the firmware and app updates from the Pivos forum. I updated the firmware and apps; instructions are on the forum, but here is a summary: download the firmware and apps RAR files, extract the contents to a microSD card, insert the microSD card in the box, navigate to [Privacy] [Update System], and select update:


After several minutes the new launch screen will be up:


This screen is even less remote-friendly. It took me several tries to figure out that I need to press the left and right buttons to see the different desktops; this would be equivalent to swiping left and right on the screen. After pressing the right button you will see a desktop with the settings icon:


The updated version is Android Ice Cream Sandwich 4.0.3.

I again needed to enable the LAN port, and set the correct time zone. Again the remote-vs-touch mismatch had me struggling: you need to select network, then Ok, then right, then up, and then Ok to enable the LAN port, highlighted below:


 

With the box up and updated, I wanted to install XBMC, and I discovered that the announcement of XBMC for Android support did not include official binary packages, just source code and build instructions.

I was not really up for setting up a build environment myself, and knowing the community, I started looking for unofficial builds. I found one at the Miniand Tech forums for the MK802, but I did not want to install it until I could find confirmation that it would work on the DS. This morning I noticed a new thread on the Pivos forum containing a pre-release APK file for the DS.

I downloaded the APK file to the microSD card, and needed to get to the file browser to install it. I gave up on fiddling with the remote and attached a USB mouse; from there I clicked the apps icon (top right on the main page), launched the file browser, opened the APK file, and installed XBMC:


 

Once up and running, I wanted to add some network media, and this turned out to be a challenge, as NFS is not supported, yet SMB is. I normally allow anonymous/root NFS read-only access to my media files, and all media players are happy with this. I do allow SMB access using a domain username and password, and most players are happy with that too, just more typing. But I was unable to enter any symbol characters; the standard XBMC remote control data entry box would not enable the symbol buttons. I tried a USB keyboard, but the “_” character resulted in a “-“ character, and the UI would not close unless you hit the Ok button on the remote several times.

Next I tried setting up an XP VM image with the guest account enabled to allow anonymous SMB network access, and just browsing to the share; that also didn’t work, as I was prompted for a username and password. I created a test account on the XP image, using a simple username and password, and that allowed me to access the folder. The remember credentials option did not work; every time I access the folder I have to re-enter the credentials. I’m sure NFS support will be added, and these issues resolved over time.

I used the series of bird test videos to test network playback; I have MKV files ranging from 20Mbps to 110Mbps. I haven’t yet found a player that can play the 110Mbps video without dropping frames. Unfortunately the OSD for XBMC on Android does not show frame statistics, but by visual observation stuttering started around the 38Mbps mark. Note that these MKV files only contain a video stream, no audio or other streams.

I was disappointed that I couldn’t get any of my AVC/H264/DTS/AC3/AAC-based movie files to play. Since the video-only files played OK, I assume it is due to the audio stream types, or a configuration option, but I’m not sure.

 

The platform is promising, but in its current Alpha state it still needs lots of work, both in terms of remote-control-based Android navigation and XBMC on Android stability. I will definitely try again once a more stable version is released for direct deployment via the app store.

From Blogger to WordPress

I outlined my concerns with Blogger in my last post, and after much deliberation, I decided to move my blog from Blogger to WordPress.

There are two main choices: use WordPress.com for full-service blog hosting, or use WordPress.org and host the WordPress application at a hosting provider. Here is a summary describing the differences.

I decided to try both options; I created a blog at WordPress.com, and I created a self-hosted blog using WordPress.org.

Creating the blog at WordPress.com was very quick and easy.

As with Blogger, you can pick any sub-domain name for the hosted blog, as long as it is unique. On Blogger my site is blogdotinsanegenius.blogspot.com, and on WordPress.com my site is blogdotinsanegenius.wordpress.com, not very imaginative, but descriptive and unique.

WordPress, like Blogger, allows you to point your own domain name to your hosted site using a CNAME record. But, unlike Blogger where it is free, WordPress.com charges $13 per year for this feature. WordPress.com offers additional paid domain services, including domain registration and DNS management.

WordPress supports importing from a variety of sites and formats, including Blogger, and my posts, settings, and comments were all imported in a few minutes.

Here is the screenshot of the various import options offered:

There are certain restrictions in using WordPress.com vs. WordPress.org, and to a lesser degree Blogger, most notably no advertising of your own. WordPress.com will show their own ads, as that is their revenue model, similar to Blogger showing Google ads. But Blogger, as far as I know, does not restrict the use of other ads such as Amazon, nor do they restrict the use of affiliate links. WordPress.com specifically calls out that Amazon affiliate links are ok, as long as it is not the primary purpose of the site. WordPress.com offers a $30 option to remove all of their ads from your blog.

For self-hosted WordPress I needed a hosting provider, and WordPress.org offers some suggestions, probably with a revenue partnership. The world of low-cost hosting is like the wild west: many brands owned by the same company, review sites owned by the hosting companies, referral programs leading to biased third-party reviews, low-cost signup with high-cost renewal, etc. I decided to try BlueHost and DreamHost, and I will give a brief review and overview of my signup and WordPress setup experience.

If you enter BlueHost using http://www.bluehost.com/wordpress_hosting, the link from the WordPress.org hosting provider page, you are offered hosting at $3.95 a month; if you enter BlueHost using http://www.bluehost.com/, you are offered the same hosting at $4.95 a month.

BlueHost does not offer a trial, but they do offer a 30-day money-back guarantee. Do read the terms: a full refund, less non-refundable fees, applies only if you cancel within 3 days of signup.

When you go to the signup page, you have to enter a domain name, either a domain you own or a domain you intend to buy. As I did not want to restrict myself to a particular domain, I used the embedded support chat to contact support. After typing my question and hitting the live chat button, I was redirected to a new page, where I had to select my contact option again, and then enter my question again. So basically the embedded support chat is bogus: whatever you type is thrown away, and you are directed to an outsourced chat provider.

When I finally managed to chat to an agent, they had to ask me what site I came from, another indication of the poor chat integration. The agent assured me that I could change the domain at any time, and that their system just required me to pick something. But it turns out this is not entirely true: once you remove the primary domain, you can never add it back again. I cannot imagine a technical reason for this restriction, so it may be about preventing a user from creating a new hosting account instead of renewing an existing account at a much higher cost. To avoid any problems, I just used a domain I own but do not actively use.

The account creation and setup flow was optimized around taking my credit card information; once the account was created, things got rather confusing, starting with my login name being the domain name I selected in step one.

The first email I received, “Welcome to Bluehost! (redacted) – configure your account.”, told me that the first step is to transfer my domain or to point my domain to the BlueHost DNS server; I did not want to do either of these. The email included links to the FTP server hosting the account, the FTP username, and a link to change the password, but the change password link pointed to the main BlueHost site.

The second email I received, “Welcome to Bluehost! (redacted) – Get started now!”, included links to getting started tutorials.

The third email I received, “Welcome to Bluehost! (redacted)”, included a change your password link, and this URL was personalized, and let me create a new cPanel login password.

I proceeded to log in to cPanel using my new password, and I was redirected to what I assume is the machine hosting the account, https://box835.bluehost.com:2083/frontend/bluehost/index.html. Notice that the port number is 2083; this failed, as the network I was working on does not allow anything other than port 80 HTTP and port 443 HTTPS outbound traffic. I contacted support, who indicated I needed to open ports 2082 and 2083 outbound; no, I can’t do that. My own research in their KB system gave this link, instructing me to use a different admin URL, and this worked, using standard SSL, with no host or port specific redirects.

I wanted to map a temporary domain name to the hosting account, so that I could install, configure, and test WordPress before committing to point my blog’s DNS entry to BlueHost. There was no convenient way to do this: I either had to use the http://[IP]/[account]/ path format, map one of my own domain names to the hosting IP address, or point one of my domains to use their DNS server. And this other domain had to be an unused domain, as I don’t want to transfer a live site before having the destination ready.

At this point I decided to try DreamHost. The main DreamHost page lists the shared hosting at $8.95 a month; if you click on the WordPress link, you are offered the same hosting at $6.95 a month.

DreamHost offers a 2-week trial; you basically always sign up for the trial, and will only be charged if you do not cancel within 2 weeks.

The signup process is straightforward; the first thing you are asked to do is create an account using your email address and select a password. After providing your credit card information, you are asked to provide an FTP username.

The first email I received, “[redacted] DreamHost Account Approval Notification!”, included the login information for the FTP server hosting the account, and indicated that the account was being created. The FTP password was system-generated, and is different from the account password I had already selected.

The second email I received, “[redacted] DreamHost FTP-only User Activated”, indicated that the FTP account had been successfully created.

Logging in to cPanel ran over standard HTTPS and I had no problems accessing the management portal.

The domain management portal allows you to create any number of domain-to-site mappings, and does not require the domain names to be mapped to or registered with DreamHost’s DNS. In order to create a sub-domain, you must first add the main domain, even if you don’t intend to use or map it. DreamHost supports mirror domains, which allow you to use a dreamhosters.com sub-domain to point to your site. This was very convenient, as it allowed me to register blogdotinsanegenius.dreamhosters.com and use this domain for testing and configuration, and later I can use it as a CNAME target for the blog’s DNS entry.

Installing WordPress was easy, DreamHost supports automatic deployment of a large number of popular applications.

Here is a screenshot of the available blogging applications:

I cannot speak to long-term stability or performance, but judging by the setup and administration experience, I think the $3 per month extra for DreamHost over BlueHost is well worth it.

As part of the blog migration I have to maintain existing permalinks, else search engines and users with links to content will not find the information.

As an example, consider the following permalinks:
Blogger: http://blogdotinsanegenius.blogspot.com/2012/06/looks-can-be-deceiving.html
WordPress.com: https://blogdotinsanegenius.wordpress.com/2012/06/19/looks-can-be-deceiving/

Blogger and WordPress.com use different permalink formats. Blogger uses a yyyy/mm/title.html format, whereas WordPress.com uses a yyyy/mm/dd/title format. WordPress.org allows the permalink format to be changed, and also allows plugins to be used to convert between incoming and hosted formats.

I found many articles explaining the process of migrating from Blogger to hosted WordPress.org, but I could not find anything on similar functionality at WordPress.com. I asked about this on the WordPress user forum, and a forum user claimed that Blogger-style permalinks are supported, yet I could find no information about it on the WordPress site. I tested it, and it did indeed work. I contacted WordPress support to get an official answer, and they claimed it is not supported, and recommended that I use WordPress.org. The forum user’s comment was very insightful: “Most of the staff have less experience at WP.com than I do, but you can ask them.”
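For anyone wanting to spot-check this on their own import, a quick test from PowerShell 3.0; the URL is the Blogger-format version of the example permalink above, pointed at the WordPress.com sub-domain:

PS C:\> $old = "http://blogdotinsanegenius.wordpress.com/2012/06/looks-can-be-deceiving.html"
PS C:\> # redirects are followed automatically; a StatusCode of 200 means the
PS C:\> # Blogger-style permalink still resolves to the post
PS C:\> (Invoke-WebRequest -Uri $old -UseBasicParsing).StatusCode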

Another difference between Blogger and WordPress is the use of labels vs. tags and categories. On importing the site from Blogger, all the labels were converted to categories. Most of the labels really needed to be tags, and fortunately WordPress offers a bulk tag-to-category and category-to-tag converter.

Below are screenshots from Windows Live Writer showing Blogger style labels (categories) and WordPress style categories and tags:

 

WordPress.com supports all the features I need, and at $45 per year for no ads and a custom domain, it is cheaper than the cheapest self-hosting, and more importantly, maintenance free.

I am posting this directly to the WordPress.com sub-domain, next I will change the blog’s DNS CNAME to point to the WordPress.com sub-domain, and if all goes well, you are reading this post on blog.insanegenius.com.
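Assuming the DNS change goes through cleanly, it is easy to confirm from a Windows 8 or Server 2012 machine, where the new DnsClient module includes Resolve-DnsName:

PS C:\> # should list blogdotinsanegenius.wordpress.com as the CNAME target once DNS has propagated
PS C:\> Resolve-DnsName -Name blog.insanegenius.com -Type CNAME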

Looks can be deceiving

It has been almost two weeks since I switched to using Blogger’s new dynamic template.

Browsing the site with the new template works really well; it uses most of the available browser real estate, it looks good on an iPad, it feels nice and fluid, but it also has problems.

 

For some reason my AdSense integration stopped working, and the AdSense site said my account needed to be verified. AdSense was working fine in the old template, so something in the new template, or the act of switching to it, must have triggered this. I’ve had AdSense for almost a year, and in that time I’ve not even made enough for Google to trigger a payment. In order to verify my account, I had to enter a PIN they mailed me on a postcard, enter the amount of a test transfer in my bank account, and enter a PIN read to me on my phone. Two days after the verification steps were completed, ads started showing up again.

 

Very few widgets support the dynamic template, and the options are limited to a handful of very basic widgets.

 

One of the supported widgets is the label cloud, and as I was configuring it, I decided to do some label cleanup. In the process I noticed that the new Blogger management interface is terrible at editing labels, and that direct links to labels no longer work.

 

In the old Blogger management interface it was easy and obvious how to add and remove labels, although renaming has never been supported. In the new interface there is only an add option; to remove a label, you have to add the same label again, which I discovered by accident, as all the Blogger help still refers to the old management interface. As in the old interface, you can filter all posts that contain a certain label, select one or more of those posts, and then add, or add again to remove, labels. Now, when a post has been selected and you change the filter, that post does not get unselected, and when you then apply a label to a visibly selected post, it also applies to any previously selected posts that are not currently in the filter view. This is just silly.

 

Since I changed some of the labels, and I know that links to labels are case sensitive (another silly thing I never understood, as label creation and editing is case insensitive), I wanted to test a label link. Clicking a label in the cloud widget on the main blog page works fine, but when you navigate directly to a label link, you get a blank page. Not good.

 

Since I was so disappointed in only making a few dollars in a year of serving AdSense ads, I decided to create an Amazon Associates account, tag my links to Amazon products, and show some Amazon ads, hoping I could at least recover the cost of the domain registration fees. It turns out that Blogger no longer natively supports Amazon ads, which seems a bit anti-competitive to me, but that’s the nature of their business. Ok, you can host Amazon ads by using HTML in your template, but the dynamic template does not support any customization, and it does not support any HTML widgets.

That leaves me with just tagged links to Amazon pages; that is easy enough, just a bit of re-editing of old pages. A friend suggested I use Bitly to shorten my Amazon-tagged links, that way I can do link tracking, and since adding Bitly I’ve had … 3 clicks; seems I’ll have to keep paying those domain fees after all.

That same friend was kind enough to remind me of my associate obligation, to make it clear to users that I’m an Amazon associate, by adding legalese to my site. Something I would normally do in the footer, but wait, you guessed it, the dynamic template does not support any customization, and all I can do is add the text directly to every post.

Since that is a hassle, here is what I need to say, so I’ll just say it here to get some coverage:

blog.insanegenius.com is a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn advertising fees by advertising and linking to amazon.com.

 

Reading the blog on the iPad is a pleasant experience, but the sidebar widgets that pop out are clearly designed for a mouse, not a finger, and as such it is next to impossible to get them to pop out.

 

I use Windows Live Writer to author my blog posts; it is a great app, and the integrated image posting and sizing is so much easier than any alternative I’ve tested. Unfortunately, the WYSIWYG functionality does not work with dynamic templates. In order to retrieve the blog template, WLW will make a test post, read the template, and then delete the test post. After making the test post, WLW times out reading the post, but at least it deletes it.

There are some rumblings that WLW may be discontinued, based on its absence from the Windows 8 Metro lineup of Live apps, and in response the user community has started a petition to not kill WLW.

A blog subscriber notified me that he was getting some “temporary post” titled posts in his feed. I’ve seen these before in Google Reader, even from Microsoft’s own MSDN and TechNet blogs. It seems that FeedBurner is so hasty that it streams the temporary post created by WLW before WLW had a chance to delete it. No harm, it just looks odd in the stream.

 

By now I was pretty fed up with Blogger and the dynamic template, and I started looking for alternative free blog hosting. There really seems to be only one free and feature-rich alternative, and that is WordPress.com. WordPress has an easy-to-use Blogger importer that imports posts, comments, and settings. Check out my blog in WordPress format. There is one catch: the free .com version of WordPress does not allow direct advertising; they do the advertising for their own revenue. Not that it really matters, as the few dollars I stand to lose are well worth it if I don’t need to deal with Blogger.

 

I am still hopeful that Google will step up to the plate and fix Blogger and dynamic templates, but at least I know there is an easy migration path to WordPress.

Iomega TV With Boxee vs. D-Link Boxee Box

I have yet to find the perfect media player for playing my archived music and movie collection.

I’ve tried building my own using small form factor computers like the Zotac ZBOX HD-ID11 (Amazon), Nano AD10 (Amazon) and Nano VD01 (Amazon). I’ve tried software like Windows Media Center, XBMC, MediaPortal, and Boxee. I’ve tried commercial products like the WD TV Live Plus (Amazon), Popcorn Hour C-200 (Amazon), A-210, PopBox 3D (Amazon), Apple TV, D-Link Boxee Box (Amazon), and Iomega TV With Boxee. But, they all have problems and shortcomings.

I currently have three D-Link Boxee Boxes in my house; the D-Link Boxee Box does have problems, but so far it is the best I’ve found. The following are some of the most annoying problems:

  • The form factor is unique, but also impractical: it looks odd, uses too much vertical space, and does not fit in with the rest of the media components.
  • The fan gets loud and is audible in a quiet room. Covering the SD slot suppresses the sound somewhat.
  • There is no low-power standby; it always uses full power, even when not in use. Apparently this is a shortcoming of the Intel CE4110 SoC platform.
  • Every time Boxee releases a firmware update they break something that used to work. The most frustrating was the recent 1.2 firmware update that broke SMB network authentication and resulted in poor performance causing constant network re-buffering. In the end I had to install NFS on my Windows Server 2008 R2 box to get things working again. To make it worse, there is no option to opt-out of firmware updates, even a manual install of an older version just gets auto-updated again. I don’t mind auto updating functionality in products, but I do mind if the update breaks something that used to work, and there is no way back.
  • When using an HDMI switch, and the HDMI switch is already powered on when the Boxee powers on, there is no sound. The HDMI switch must be powered off when the Boxee powers on, then the HDMI switch can be powered on. This bug has been around since I bought the first Boxee, and the same switch works flawlessly with a variety of other hardware, including a Motorola HD-DVR, Motorola HD-STB, Xbox 360, PS3, Roku 2 XS, Panasonic BD player, and a Sony DVD player, so it is not the switch, it is the Boxee.
  • Even after switching to NFS for networking, I still get network re-buffering and HD audio dropouts when watching certain high bitrate BD MKV movies.
  • Unlike XBMC, there is no separation between TV series and movies in Boxee; this makes it very difficult to find or watch TV shows, and Boxee rarely gets the metadata associations for TV shows right. XBMC does a much better job of treating TV shows as shows, with discrete seasons and episodes.
  • The metadata scrapers are incapable of correctly identifying titles from file and directory names that contain a “;” instead of a “:”. In NTFS a “:” is not a legal character for use in file or directory names, so when a show title contains a “:”, I substitute a “;” for it in the filename and directory name (a quick sketch of this substitution follows the list). This issue is not unique to Boxee, and I don’t understand why scrapers can’t do common substitutions or removal of punctuation when performing a search.
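Here is a minimal sketch of that substitution, with a hypothetical title and library path; the NTFS-safe name is a one-line -replace:

# ":" is illegal in NTFS names, so swap it for ";" when creating the show folder
$title    = 'Star Trek: The Next Generation'   # example title only
$safeName = $title -replace ':', ';'           # -> 'Star Trek; The Next Generation'
New-Item -ItemType Directory -Path (Join-Path 'D:\Media\TV' $safeName) -WhatIf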

Even with all these issues, the Boxee Box still works most of the time for most content.

When the Iomega TV With Boxee was announced, one thing that stood out was the 1Gbps network port vs. the 100Mbps port on the D-Link. Theoretically 100Mbps is fast enough for BD content playback, but given the network re-buffering and HD audio dropouts on high-bitrate content I was experiencing, I hoped it might help. I could not find official sources of hardware specs for the D-Link or the Iomega, but an ifixit teardown and a Wikipedia article on Boxee show the devices may have similar processor specs, with the Iomega having gigabit networking and analog video output.

The Iomega Boxee is not available in the US, but around the end of November I pre-ordered an import from Expansys USA. The box arrived a few days ago, at the end of December, and I started setting it up by replacing one of my D-Link Boxees.

The box comes with very little documentation; as an example, there are no instructions on how to open the remote to insert the batteries. It took me a few minutes to figure out where the battery compartments (yes, there are two) were on the remote, and how to open them: press on the little arrows, apply lots of force, and slide the lids off.

The power supply is 12V 2A, 110V to 220V, with EU/UK power plugs supplied; I used a universal adapter to plug it into a US 110V outlet.
The box itself does not include WiFi capabilities, but Iomega supplies a WiFi USB dongle with the kit.

The form factor of the Iomega box is much more practical compared to the D-Link; it fits in nicely with the rest of my AV equipment. Iomega does supply a stand to mount the device vertically if you want it that way.

The Iomega remote is a bit larger than the D-Link, but it also includes some handy buttons missing on the D-Link remote.

The Iomega powers on and displays IO on the screen while booting; unlike the D-Link, there is no startup animation. As with the D-Link, the first thing I had to do was calibrate the screen overscan.

The next step was to log in to my Boxee account, and this is where things started going wrong. I could not get the remote to work correctly; it would not respond, or it would enter the wrong characters. The keyboard on the remote has a little button that needs to be pushed to activate it; once activated, you can enter keys using the keypad. Pressing the button again deactivates the keypad, and you can use the navigation buttons on the front. When I tried the navigation buttons and the on-screen keyboard, the cursor would either not move, jump way over to the wrong location, or start entering characters when I pressed navigation buttons. When I tried the keyboard, it would enter a few characters fine and then just stop working, and I could never get the backspace key to work.

I did some general troubleshooting by replacing the batteries, and power cycling the device, but still the same issues. I found a few user reports of similar troubles with the remote. Since my Iomega is an import, there is no chance for local support, but given the general complaints about the remote, I am not going to bother trying to replace it.

The Iomega remote is an IR remote, while the D-Link remote is an RF remote. When I first started using the D-Link RF remote I found it a bit of an inconvenience to switch between my Harmony One universal IR remote and the dedicated D-Link RF remote, but I must say, for fast accurate navigation the RF remote works great. In fact, I wish there were a standard for RF remotes, so that something like an RF Harmony One could be built; this would alleviate the annoyance and frustration of having to aim at IR devices.

I had a spare D-Link RF remote with a USB dongle, and I eventually plugged that into the Iomega; it worked fine, and I could log in to my account and configure the Iomega.

As I was configuring the device, I noticed that the Iomega was running firmware version 1.2.1.20452, while the D-Link was running 1.2.2.20482. Manually running an update said that 1.2.1 was the latest firmware for the Iomega. Strangely, the Iomega support site lists a firmware version 1.3061, but the version number format does not follow the typical Boxee a.b.c.d formatting. I tried to install the 1.3061 firmware using the manual USB update procedure, but the firmware install never completes, and a hard power cycle is required to boot back up. So I really don’t know what this 1.3061 firmware is supposed to be or do.

While applying the firmware, I noticed that there is a mouse cursor on the screen, and that the Iomega IR remote acts like a trackpad: as I slide my finger over the directional buttons, the mouse cursor moves around the screen. I did not try it out, but this may be useful for web browser navigation, if the remote actually works.

US content providers like Hulu, Vudu, and Netflix were not available on the Iomega, or at least I could not find them in any obvious way. I don’t know if this is because of regional targeting differences, or because the Iomega has analog video output and content provider DRM requirements may prohibit such content on this device. The lack of content providers is not a big deal for me, as I only use the Boxee for local content playback. For Netflix and Amazon Instant Video I use a Roku 2 XS. Boxee does not offer Amazon Instant Video, and the Roku’s Netflix experience is far superior to the Boxee’s.

The next difference seemed rather weird: the D-Link has HDMI audio passthrough support for DTS-HD and Dolby TrueHD, but the Iomega does not list any HD audio formats. When I tried to play an MKV file with TrueHD and AC3 tracks, the Iomega automatically selected the AC3 track. When I manually selected the TrueHD track, there was no audio.

I configured the Iomega the same way I configured the D-Link: using NFS, I added my music, movies, and TV series, hosted on a Windows Server 2008 R2 server and accessed via gigabit Ethernet. Just like the D-Link, it took a while to catalog all the content, and just like the D-Link, the device is less than responsive while cataloging content.

Once the activity light stopped flashing, and the device appeared idle, I started playing some movies that suffer from network re-buffering and audio dropouts on the D-Link. The Iomega played all content perfectly, no re-buffering, and no audio dropouts. Unfortunately this is not really a meaningful test, as the more problematic content contains HD audio tracks, and the Iomega can’t play HD audio.

I will leave the Iomega connected to get some more airtime with it, but unfortunately the lack of HD audio is a deal breaker, not because I need HD audio, but because many of my BD MKV rips only have an HD audio stream. So either the Iomega needs to decode and play HD audio, or it needs to do bitstream passthrough. I doubt this is a hardware limitation, so hopefully a future firmware update adds HD audio passthrough support.