OpenFiler is indeed based on Linux, but unfortunately the choice of distribution is not as good as it could be. It runs on a distribution called Foresight Linux, which in turn is based on rPath Linux, a commercially driven project by a company of the same name.
DistroWatch.com says about this distribution:
rPath Linux, built with the Conary distributed software management system, is not only a distribution in its own right, but also a base technology explicitly designed to enable you to create purpose-built operating system images using the rBuilder Online technology.
This makes it clear why OpenFiler runs on this distribution: it lets the developers build their own OS with less effort. But for users and community developers it is a huge disadvantage.
As already mentioned, rPath Linux is a commercial project financed and organized by a corporation. It is still free (though I don't even know whether it is also open-source), but there seems to be little to no active community behind it, it is not well known, and it is hard to get help online.
Additionally, Conary may be a nice package manager, but in my opinion (and I'm not the only one who thinks so) there are far better tools for the job. I am quite used to it by now, but getting to that level took me weeks of mostly frustrating effort.
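For readers who have never touched Conary, the everyday commands on an rPath-based system look roughly like this (the package name is just an example):

```shell
# Is the package installed locally?
conary query smartmontools

# Is it available in the configured repositories?
conary repquery smartmontools

# Install or update a single package
conary update smartmontools

# Update the whole system (the rough equivalent of
# "apt-get upgrade" or "yum update" elsewhere)
conary updateall
```

The commands themselves are simple enough; the frustrating part is finding out which repositories actually carry a usable build of the software you want.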
The documentation is decent, but reading everything in wikis is not the same as searching forums or reading mailing lists. You are far more likely to find an answer to your questions in the OpenFiler community than from Foresight Linux or rPath.
Unfortunately, rPath is not the only commercial party involved: OpenFiler itself is also commercially oriented. It is still free and open-source, and a corporation behind such a project could even be a huge advantage. Not so for OpenFiler.
Support in the forum is very rare; mostly you read complaints that OpenFiler is not working as it should, that there is no help from the community, or that the OpenFiler developers and officials are nowhere to be found. Reading this can be really frustrating, but I still think OpenFiler is a great product. Nevertheless, you should be an advanced Linux user, and you should not expect a perfectly stable appliance where everything works correctly.
Even though OpenFiler is not made to be installed on a USB pen drive like FreeNAS, it is still possible and works very well, with the same huge advantage. In comparison to FreeNAS, it is also much easier to install additional packages on OpenFiler. Or let's say it would be much easier if it weren't, again, for rPath with its complicated Conary repository manager and very hard-to-find software builds.
Nevertheless, I have to admit that OpenFiler comes remarkably complete out of the box: almost anything you can imagine is already there after a fresh installation. You can also keep these packages up to date with Conary or even via the web interface.
Unfortunately, the release cycle of OpenFiler is neither regular nor really current, and there is not even a roadmap. The community became very frustrated with the release of version 2.99.1 in April 2011, after a major 3.0 release had been promised for the end of 2010. This candidate was postponed several times, from Q1 2011 to the end of 2011; now it is Q1 2012 and there is still no OpenFiler 3.0 available.
Now it is said that OpenFiler 2.99.1 with all updates applied is effectively 2.99.2, which is supposed to be exactly the same product as the upcoming 3.0, only with the old web interface. Still, 2.99 feels like a beta release that raised lots of errors, issues, and bugs which simply remain unsolved and even uncommented. There are only very few people left who still believe that OpenFiler is really under active development.
As already mentioned, OpenFiler comes with lots of packages out of the box. This also includes the zfs-on-linux release, which allows creating RAIDz pools and datasets just as on FreeBSD.
I think very few people have noticed this great news so far, so nobody has really tested it on OpenFiler yet. I did, and it seems to work somehow, but not really well. Additionally, these packages are said to be developmental and for testing rather than stable.
Therefore you are strongly advised not to use it in production or critical environments. That is too bad; I would have loved to use ZFS on OpenFiler, but a stable, working storage solution is more important to me than a testing playground.
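Just to illustrate what the zfs-on-linux packages would allow: a minimal RAIDz setup is only a handful of commands (the device names and pool/dataset names below are placeholders, and again, don't try this on data you care about):

```shell
# Create a single-parity RAIDz pool from three disks
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Create a dataset and turn on compression
zfs create tank/backups
zfs set compression=on tank/backups

# Check pool health and layout
zpool status tank
```

This is exactly the kind of convenience that makes ZFS attractive; it is the maturity of the Linux port, not the tooling, that is the problem here.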
On the other hand it is actually no such big deal. I thought a lot about using ZFS for my SAN, but at the moment I'm not so sure about this anymore. I want to share my disks via iSCSI as one single LUN which will then host several virtual disk images (VDIs), each providing one VM.
- Still, I think that LVM2 on top of a Linux software RAID 6 is also perfectly fine and probably much more stable.
- And then there are snapshots. Actually, snapshots are only great for a somewhat "traditional" file system, meaning one where you store files directly rather than providing VDIs. There is no real scenario where you would benefit from restoring a snapshot of the whole storage state while several VMs are running on it; that would just reset all your machines to the same restore point at the same time. Therefore it is better to snapshot the VMs themselves via the hypervisor and to run "classical" backup strategies inside the VMs.
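The RAID-6-plus-LVM2 alternative mentioned above is also only a few commands; a rough sketch, with placeholder device names and sizes:

```shell
# Build a software RAID 6 from four disks
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put LVM2 on top of the array
pvcreate /dev/md0
vgcreate vg_san /dev/md0

# Carve out a logical volume to be exported as the iSCSI LUN
lvcreate -L 500G -n lv_iscsi vg_san
```

Both mdadm and LVM2 have been in production use for many years, which is the whole point of preferring them over the still-experimental zfs-on-linux here.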
Besides zfs-on-linux there is something similar under development: the B-tree file system (Btrfs). It seems quite similar to ZFS in scope, but it is also not considered stable yet.
I was able to set up an iSCSI target on OpenFiler on top of an "iSCSI block device" and successfully installed a VM on it. The process is quite intuitive and straightforward: just a few clicks and there you go. It was also possible to use "write-thru" mode, which might be somewhat comparable to "pass" on FreeNAS (which didn't work there).
What I have not done (yet) are benchmarks for file input and output (I/O). Some people claim that "blockio" (which I am using too) is not very fast and that you are better off with "fileio" instead. I also did not test any read or write speeds. Luckily this does not really bother me, as the installation of the VM was as smooth and fast as usual and I have not experienced any real bottlenecks yet.
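For context: OpenFiler drives its iSCSI targets through the IET (iSCSI Enterprise Target) stack, and a target defined in the GUI ends up, as far as I can tell, as an entry roughly like the following in /etc/ietd.conf (the IQN and device path here are made-up examples). This is also where the blockio/fileio and write-thru choices surface:

```shell
# /etc/ietd.conf fragment (example values)
Target iqn.2012-01.com.example:storage.lun0
        # Type=blockio exports the block device directly;
        # Type=fileio goes through the page cache instead and is
        # what some people report as faster.
        # IOMode=wt requests write-through caching.
        Lun 0 Path=/dev/vg_san/lv_iscsi,Type=blockio,IOMode=wt
```

Knowing where this file lives makes it easy to flip a target between blockio and fileio for a quick comparison without redoing everything in the GUI.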
I did not have the same problems with link aggregation (actually called "bonding" on Linux) on OpenFiler as I had with FreeNAS. But that is mainly because I simply was not tempted to try, since I am running it inside a cluster. As the heartbeat needs its own dedicated network connection, I had to reserve one of my only two network interface cards (NICs) for it. So there was nothing left to bond.
This also prevents MPIO, though it may still work somehow. I am not really sure (yet): with only one iSCSI target set up on the active cluster master node, I can still see two targets on my initiators (as long as both nodes are running). On the other hand, when checking the connections of my targets, I see two of them, but only one from each XCP host. So probably no MPIO.
I will have to do some further testing on my XCP machines, as I configured them to use MPIO and it just worked without any error messages. But when I tried to probe the storage repository (SR) for multipathing, it gave me an error from iscsiadm. It did not say that it could not find multiple targets, only that the service could not execute because a file was missing. So I will give this another try when time allows.
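When I get back to it, the checks I have in mind look roughly like this on the XCP hosts (the portal address is a placeholder for my storage network):

```shell
# Discover all targets the portal advertises
iscsiadm -m discovery -t sendtargets -p 192.168.1.10

# List the currently established sessions; with working MPIO there
# should be one session per path to the same target IQN
iscsiadm -m session

# Show the resulting multipath topology as the kernel sees it
multipath -ll
```

If `multipath -ll` shows only one path per LUN, that would confirm the "probably no MPIO" suspicion above.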
I will also test how it reacts to sudden disconnections then, to figure out whether redundancy works or not. That is more important to me than load balancing, as my VMs will not generate that much traffic. Still, I would be happy to have that feature, too.
Clustering on OpenFiler works like a charm. OK, it is not enabled by default, as they want you to pay for commercial support to activate the services. Nevertheless, you can just follow this simple how-to and make it work!
OpenFiler uses the great free and open-source Linux tools Corosync, Pacemaker, and DRBD to enable an active/passive hot-standby master/slave cluster. DRBD can be thought of as a RAID 1 mirror, just over the network between two servers instead of between two disks inside one server.
Therefore you can only use a cluster size of two nodes, but that is just fine for me. All I wanted was to avoid the classic SPoF of shared storage, which would otherwise negate the actual advantage of that principle.
There will be at least two hypervisors running, so if one host fails, the VMs get migrated to the other node automatically and the clients don't even recognize that there was an actual system failure. But what if the shared storage is only provided by one single server and this one fails?
The whole cluster crashes, and you again lose your "high availability". But with Pacemaker and DRBD, the two storage nodes stay in sync at all times at the block level beneath the file system, so if the active master node fails, the passive slave node can take over all services immediately without any data loss. This is why it is called "hot standby", in contrast to a "cold standby" solution as provided by FreeNAS and rsync.
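To give an idea of what the DRBD half of such a setup looks like, here is a sketch of a resource definition (hostnames, device paths, and the replication addresses are all example values, not my actual configuration):

```shell
# /etc/drbd.d/r0.res - one resource mirrored between the two
# storage nodes over a dedicated replication link
resource r0 {
    protocol C;                 # synchronous replication: a write is
                                # acknowledged only after both nodes
                                # have it, hence "no data loss"
    on filer01 {
        device    /dev/drbd0;
        disk      /dev/vg_san/lv_iscsi;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on filer02 {
        device    /dev/drbd0;
        disk      /dev/vg_san/lv_iscsi;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```

Protocol C is what makes the hot-standby claim work: the slave is never behind the master, so Pacemaker can promote it at any moment.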
The web interface looks nice; only the self-signed security certificate thing is annoying when using Internet Explorer. Actually, if you think of OpenFiler as an appliance, it is only a collection of tools and a GUI to handle them. So it should also be possible to set up your own storage server with, say, Ubuntu and just copy the files over from OpenFiler to use its web interface.
I think the GUI of OpenFiler is really OK. After using it for some time you get used to it and it becomes quite comfortable. As already mentioned for FreeNAS, none of these GUIs are very user-friendly or intuitive, but this one is as good as it gets.
The only thing is that there are really a lot of things that just don't work via the web interface or are simply not available. So you actually have to use the CLI a lot.
A few examples:
Installing updates on a clean, fresh install raises numerous issues and can even crash your system, so you can only install some of them from the GUI and have to apply the rest via the CLI.
Partitioning just does not work via the web interface, so you have to use parted commands.
There is a menu for clustering, but it is unavailable: it only says that the service is not running, and of course you cannot start it from there. So you have to configure everything via the CLI; once you are finished, you can at least see the settings in the GUI, even if you cannot change anything.
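For the partitioning case, the CLI work is short once you know parted; a typical sequence looks like this (the device name is a placeholder):

```shell
# Write a new GPT label to the disk (destroys existing partitions!)
parted /dev/sdb mklabel gpt

# Create one partition spanning the whole disk, aligned optimally
parted -a optimal /dev/sdb mkpart primary 0% 100%

# Verify the result
parted /dev/sdb print
```

After that, the new partition shows up in the web interface and can be used for volume groups as usual.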
So I think that if you are already familiar with Linux and with storage tools and protocols, you may be better off using the command line only and just forgetting about the GUI. If you are an absolute beginner, you will have difficulties with this web interface too, but at least you should be able to get what you want. And with a little help from the community, typing a few simple commands should be no problem for the cases where an issue cannot be solved via the GUI.
S.M.A.R.T. is also available for OpenFiler via the smartmontools package. I have tested some simple smartctl commands and everything seems to work correctly. It is not as convenient as FreeNAS, which comes with full S.M.A.R.T. functionality built in.
For OpenFiler you have to configure everything yourself and use cron and sendmail to enable email notifications. I just have not had the time to set this all up yet, but I am planning to do so soon. Of course I will let you know how it works.
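The manual setup I have in mind is nothing fancy; roughly this (the device name and mail address are placeholders):

```shell
# One-off health checks with smartctl
smartctl -H /dev/sda          # overall health verdict
smartctl -a /dev/sda          # full attribute and error-log dump

# Or let the smartd daemon do the monitoring: a line like the
# following in /etc/smartd.conf monitors all attributes, runs a
# short self-test every night at 02:00, and mails on failures.
#   /dev/sda -a -s (S/../.././02) -m admin@example.com
```

That, plus a working sendmail, should come reasonably close to what FreeNAS offers out of the box.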
I don't use SNMP, so I don't know whether it is available for OpenFiler either. What it does have are email notifications for RAID events, and these work like a charm. It tells you when a drive fails and which one, and after you replace it, you regularly get rebuild-progress messages.
There are also some simple networking and storage statistics available, but these are not sent via email automatically. As mentioned several times already: this is Linux, so you are very welcome to set everything up as you like!
|first: Part 1: Introduction||previous: Part 2: FreeNAS||last: Part 4: Solaris-based Appliances|