# Christian Zartl, BSc

## Private blog and WWW page

# Network Problem Description

You have Windows Explorer open on your Windows Server 2003 R2. Right-click “My Computer” and choose “Map Network Drive…”.

Windows Explorer on Windows Server 2003

Afterwards you choose a drive letter and the NFS share you would like to mount.

Map Network Drive Screen in Windows Explorer

But when you finally click “Finish”, you receive the following error: “The drive could not be mapped because no network was found”.

Network Error Message inside Windows Explorer on Windows Server

Still, you are able to ping your NFS server:

C:\Documents and Settings\ATCZ01admin>ping atbup002

Pinging atbup002.akron-group.local [192.168.8.29] with 32 bytes of data:

Reply from 192.168.8.29: bytes=32 time=1ms TTL=64
Reply from 192.168.8.29: bytes=32 time=1ms TTL=64
Reply from 192.168.8.29: bytes=32 time=1ms TTL=64
Reply from 192.168.8.29: bytes=32 time=1ms TTL=64

Ping statistics for 192.168.8.29:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

# Check if NFS Client is installed

This indicates that there is actually no network problem, but simply that your Windows server has no NFS support yet. So choose “Start --> Control Panel --> Add or Remove Programs”.

Select “Other Network File and Print Services” and click on “Details…”:

Now check whether “Microsoft Services for NFS” is selected. If not, select it and press “Details…” again:

Add Microsoft Services for NFS on Windows Server 2003

Now select all the components you need. You should be fine with the same selection I have made here, unless you plan to also share files via NFS from your Windows server (in that case you will additionally need the server components).

Select needed NFS components on Windows Server 2003

Afterwards click “OK --> OK --> Next”. If you have only just selected the components, the installation will start automatically. You will need Windows Server System CD 2 for this!

Insert Windows Server System CD 2 when asked

Browse for the following path: CMPNENTS\R2

Component Files on Windows Server System CD

This should allow the installation to proceed:

Installation Screen for Windows Services for NFS on Windows Server 2003

# UseReservedPorts Registry Hack

If you are able to map an NFS network drive now, consider yourself lucky! If the same error as above persists, continue debugging via cmd. Use the following command:

C:\Documents and Settings\ATCZ01admin>mount atbup002:/mnt/BackupVol *

Network Error - 53

Type 'NET HELPMSG 53' for more information.

Go on with:

C:\Documents and Settings\ATCZ01admin>NET HELPMSG 53

The network path was not found.

Now check if you have the UseReservedPorts key set in your registry. Click “Start --> Run …” and type “regedit”:

Open Windows Registry

Press [Enter] or click on "OK". In the Registry Editor navigate to “HKEY_LOCAL_MACHINE --> SOFTWARE --> Microsoft --> Client for NFS --> Current Version”:

Windows Client for NFS

If the value is not there, just create it. Right-click in the right-hand pane below “Default” and choose “New --> DWORD Value”:

Create UseReservedPorts DWORD Key

Name it “UseReservedPorts”:

UseReservedPorts DWORD Value Registry Key

Now double click it and set “Value data” to 1:

Set DWORD Value
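If you prefer the command line over regedit, the same value can be set with reg.exe in one step (the key path below is the one from above; double-check it in your Registry Editor, as it may differ between Services for NFS versions):

```
C:\Documents and Settings\ATCZ01admin>reg add "HKLM\SOFTWARE\Microsoft\Client for NFS\Current Version" /v UseReservedPorts /t REG_DWORD /d 1
```

Afterwards restart the NFS client (e.g. via nfsadmin client stop / start) so the change takes effect.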

You should now be able to map the drive via Windows Explorer or mount it with the above command. If it is still not working, check whether there is a problem on the NFS host server:

C:\Documents and Settings\ATCZ01admin>showmount -e 192.168.8.29
Exports list on 192.168.8.29:
/mnt/BackupVol/sugar               192.168.8.0
/mnt/BackupVol/atsan000            192.168.8.0
/mnt/BackupVol/atrsp001            192.168.8.0
/mnt/BackupVol                     192.168.8.0

You can also try to restart the NFS service on your Windows server. Do not use services.msc for this: it will report success, but the restart does not actually happen! So type:

C:\Documents and Settings\ATCZ01admin>nfsadmin client stop
The service was stopped successfully.

C:\Documents and Settings\ATCZ01admin>nfsadmin client start
The service was started successfully.

# Copying error

So finally I’m able to map my NFS network drive and access it. Creating folders, opening files and deleting files all work fine. But what I actually wanted to do is copy a file from my Windows server onto my backup NAS.

But when trying to do so, I received the following error: “The process cannot access the file because another process has locked a portion of the file.”

Copying Error Message

Let me tell you, it was so hard to find a solution for this that it almost drove me nuts. It shouldn’t be a big deal if you really know how NFS works, but for me it was more luck than skill until I finally stumbled across the fix.

First I had to disconnect my drive again by right-clicking it and choosing “Disconnect”:

Disconnect mapped network drive

Afterwards I had to mount it again with the nolock option enabled:

C:\Documents and Settings\ATCZ01>mount atbup002:/mnt/BackupVol Z: -o nolock
Z: is now successfully connected to atbup002:/mnt/BackupVol

The command completed successfully.

# How to configure Auto-Updates on Linux Ubuntu Servers

When you install Ubuntu, you get asked whether you want to install security updates automatically. This is a nice feature, but you can configure your new setup to install all the updates you want without intervention and to notify you via email.

If you selected automatic updates during installation, you will already have the required package. Otherwise you have to install it first:

sudo apt-get install unattended-upgrades
Building dependency tree
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Now open the configuration file:

sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

As mentioned in my last blog post, you can use any text editor you like, but for me nano is the easiest one. So check for the following part:

// Automatically upgrade packages from these (origin, archive) pairs
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
//      "Ubuntu lucid-updates";
};

The “lucid-updates” line will most probably be commented out, so remove the // (or #) if you would like to install all current updates automatically. Now you can configure email notifications by editing the following part:

// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. The package 'mailx'
// must be installed or anything that provides /usr/bin/mail.
Unattended-Upgrade::Mail "c.zartl@***.com";

For actually sending the emails you will need a mail transfer agent, for example Postfix:

sudo apt-get install postfix

This configuration depends on how you want to send emails, if you have a running mail server already and so on, so I won't go into much detail here. If you do something wrong or forget a setting, just run:

sudo dpkg-reconfigure postfix

Still, there is one general step left that you should do: set the correct sender. First edit main.cf:

sudo nano /etc/postfix/main.cf

# Set correct sender
sender_canonical_maps = hash:/etc/postfix/sender_canonical

Now you have to create this senders file:

sudo nano /etc/postfix/sender_canonical

For me the file looks like this:

root sugar@***.com
atcz01admin sugar@***.com

First you provide the name of the user you want to set a sender email address for. Then, separated by a space, add the email address you want to use for this user.

Finally run the following command:

sudo postmap /etc/postfix/sender_canonical

sudo /etc/init.d/postfix reload
* Reloading Postfix configuration...

At last you will have to install mailutils:

sudo apt-get install mailutils

Now you can send a test mail if you like:

sudo nano testmail.txt

Type any text you like here, close the file, and send it:

mail -s "Test" c.zartl@***.com < testmail.txt

Finally go back to the configuration file:

sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

You can also configure automatic removal of old dependencies:

// Do automatic removal of new unused dependencies after the upgrade
// (equivalent to apt-get autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";

At last set the update schedule:

sudo nano /etc/apt/apt.conf.d/10periodic

Here is my config:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";
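In case the meaning of the three directives is unclear, here is the same snippet with my reading as comments (the values are intervals in days):

```
APT::Periodic::Update-Package-Lists "1";   // refresh the package lists (apt-get update) daily
APT::Periodic::AutocleanInterval "7";      // clean obsolete packages from the local cache every 7 days
APT::Periodic::Unattended-Upgrade "1";     // run unattended-upgrades daily
```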

# How to access Microsoft Windows Shares from Linux Ubuntu

First of all create a folder where you want to mount the shared Windows directory to:

sudo mkdir -p /mnt/win
[sudo] password for atcz01admin:

Type in your password when asked to do so. Now you will need to install smbfs to be able to access the shared folder:

sudo apt-get install smbfs
Building dependency tree
The following extra packages will be installed:
keyutils libtalloc2 libwbclient0 samba-common samba-common-bin
Suggested packages:
smbclient
The following NEW packages will be installed:
keyutils libtalloc2 libwbclient0 samba-common samba-common-bin smbfs
0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
Need to get 7,831kB of archives.
After this operation, 21.3MB of additional disk space will be used.
Do you want to continue [Y/n]? Y
Get:1 http://at.archive.ubuntu.com/ubuntu/ lucid/main keyutils 1.2-12 [28.2kB]
Get:2 http://at.archive.ubuntu.com/ubuntu/ lucid/main libtalloc2 2.0.1-1 [20.7kB]
[...]
Setting up smbfs (2:3.4.7~dfsg-1ubuntu3.8) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place

Just type Y to confirm the installation of the new packages. Afterwards you can instantly mount the share:

sudo mount -t smbfs -o username=ATCZ01admin,password=*** //atser003/c\$/Buffer /mnt/win

For this you will need a Windows user who has access rights to this folder, and the Uniform Naming Convention (UNC) network path. The path is exactly the same as inside the Windows network, except that the backslashes (\) have to be changed to slashes (/) on Linux. Finally you can navigate to the mounted folder and access its content:

cd /mnt/win

ls -l
total 5060240
-rwxr-xr-x 1 root root  100010496 2010-12-13 16:12 20101213_Ebanking_Backup
drwxr-xr-x 1 root root          0 2007-08-27 13:02 Adobe
-rwxr-xr-x 1 root root    4447744 2007-06-06 09:42 apache_2.2.4-win32-x86-no_ssl.msi
[...]

You take on the access rights of the Windows user you provided in the mount command.
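If you want the share to be mounted automatically at boot, a matching line can be added to /etc/fstab. This is only a sketch based on the mount command above; note that a password in fstab is readable for everyone on the system, so a separate credentials file is the safer choice:

```
//atser003/c$/Buffer  /mnt/win  smbfs  username=ATCZ01admin,password=***  0  0
```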

# Prerequisites

For this little how-to I assume that you have Xen Cloud Platform (XCP) already installed and successfully configured. The first things to do will be

• creating and / or joining a resource pool,
• creating and attaching a storage repository and
• configuring the network (e.g. with bonding).

I will not describe how to do this here, as it really depends on your configuration. You can look for further help here. Even if these documents are really outdated (XCP v0.1), most information is still accurate and useful.

You just have to ignore some specifics that have been taken over from Citrix XenServer (e.g. installing the Linux Tools is absolutely not necessary, as you will also see here). Another thing that should already be working is a connection to the Dom0 console, however you prefer to establish it.

Access Local Command Shell via xsconsole on Xen Cloud Platform 1.1

If you are accessing xsconsole directly, you can choose "Local Command Shell". The recommended way would be using ssh and this is also what I do. First of all you have to choose which guest (DomU) operating system you would like to install.

In general there are two possible ways of installing VMs on XCP:

• Installing Windows VMs
• Installing Linux VMs

There are a few differences in installing Windows guests in comparison to Linux VMs. I will not describe how to install Windows guests but you can find all necessary information here.

# Installing Linux VMs

Still there is another decision to make:

1. Installing with templates
2. Installing from vendor media

Run xe template-list | less to scroll through the available templates. If you can find a template for the distribution you would like to install, you should go on with step 1.

It is not absolutely necessary to use templates, so if you want to install the most recent version of some Linux distribution where there is no template available yet, this is also possible. It is just more comfortable with templates. Still I'm not going to describe how to install a guest from vendor media, so please see here for a step-by-step guide.
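To find a matching template without scrolling through the whole list, you can also filter it. With --minimal, xe prints one comma-separated line; the template names below are only sample data so the filter can be demonstrated without a running XCP host:

```shell
# Sample of what 'xe template-list params=name-label --minimal' might print:
templates='Debian Lenny 5.0 (32-bit),Ubuntu Lucid Lynx 10.04 (32-bit) (experimental),Ubuntu Lucid Lynx 10.04 (64-bit) (experimental)'

# One template per line, Ubuntu entries only:
echo "$templates" | tr ',' '\n' | grep -i ubuntu
```

On a real host you would pipe the output of xe itself instead of the sample variable.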

# 1. Installing with templates

I'm going to install the most current Ubuntu Long Term Support (LTS) version which is 10.04 Lucid Lynx in 64-bit mode. Don't worry about the (experimental) in the template name, it does work very well.

So now you can install your VM with the following command:

xe vm-install template=[Your\ desired\ OS] new-name-label=[NameOfYourVM]

You can always use the Tab key to auto-complete a command, so you don't have to type the whole name of the template. The command will give you the UUID of the created VM:

9ecea6e2-6964-b944-e909-65a23b819ce7

Next you should check all parameters of your VM:

xe vm-list uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 params=all

Again use tab to write the UUID automatically. What I'm actually interested in are the memory settings:

uuid ( RO)                          : 9ecea6e2-6964-b944-e909-65a23b819ce7
name-label ( RW): sugar
name-description ( RW): Installed via xe CLI
user-version ( RW): 1
[...]
memory-actual ( RO): 0
memory-target ( RO):
memory-static-max ( RW): 268435456
memory-dynamic-max ( RW): 268435456
memory-dynamic-min ( RW): 268435456
memory-static-min ( RW): 134217728
[...]
protection-policy ( RW):
is-snapshot-from-vmpp ( RO): false
tags (SRW):

That is only about 268 MB (256 MiB) of RAM for this VM, so the installation would be very slow. Therefore we should change this by using a great Xen feature called Dynamic Memory Control (DMC).

There are two modes available:

• Target Mode and
• Dynamic Range Mode.

Please read this FAQ about what to use and how. I've decided to use Target Mode. But first you have to change the static and dynamic max and min values to something more appropriate (otherwise you won't be able to set a memory-target).

So here is how it works:

xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-static-max=[SMa]

xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-dynamic-max=[DMa]

xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-dynamic-min=[DMi]

xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-static-min=[SMi]

The formula for these values is $SMi \le DMi \le DMa \le SMa$. What to choose for each of them is really up to you. Here is just a little recommendation from me, but you don't have to follow it:

As $SMa$ may be the largest value, you could set it to your actual amount of RAM. But please consider that Dom0 also needs some of this memory. I have 4 GB of RAM in my old servers, therefore I set $SMa$ to 3.5 GB, i.e. $SMa = 3500000000$.

$SMi$ depends on how much RAM you have in your servers and how many VMs you are planning to execute on them. As I am very limited with 4GB I set $SMi = 512000000$ and $DMi = SMi$ and $DMa = SMa$. Finally you can set your memory-target:

xe vm-memory-target-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 target=[memory-target]

This will be most likely the same value as $DMa$.
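To make the formula concrete, here is a small sketch with the example values from above (3.5 GB and 512 MB). The xe calls are commented out because they need a running XCP host and your own VM UUID:

```shell
# Example values in bytes, taken from the text above.
SMA=3500000000   # memory-static-max  (3.5 GB)
DMA=3500000000   # memory-dynamic-max (= SMa here)
DMI=512000000    # memory-dynamic-min (512 MB)
SMI=512000000    # memory-static-min  (= DMi here)

# The ordering SMi <= DMi <= DMa <= SMa must hold, otherwise xe refuses the values:
[ "$SMI" -le "$DMI" ] && [ "$DMI" -le "$DMA" ] && [ "$DMA" -le "$SMA" ] \
    && echo "memory constraint satisfied"

# Then apply them on the XCP host (uncomment and use your VM's UUID):
# xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-static-max=$SMA
# xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-dynamic-max=$DMA
# xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-dynamic-min=$DMI
# xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 memory-static-min=$SMI
```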

Now you have a last choice to make:

1. Install from Install Repository
2. Install from vendor media

As I'm using a template, I will also use the default way of installation via an install repository. Even if you are using templates you could still install from vendor media, e.g. if you only have a slow or no Internet connection at all. Again, I will not describe how to do this here, as you can find it in the Xen Cloud Platform Virtual Machine Installation Guide.

### 1.1 Install from Install Repository

First of all you have to find out the URL of your desired distribution install repository. You will have to search a bit to figure this out. For Ubuntu you can find it on http://archive.ubuntu.com/ubuntu.

It is recommended to use a mirror near to your location, so add your country code before archive. For Austria this is http://at.archive.ubuntu.com/ubuntu. Now set this repository for your created VM:

xe vm-param-set uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 other-config:install-repository=http://at.archive.ubuntu.com/ubuntu/

Next you will have to assign a virtual network interface (VIF) for your VM. So check your available networks first:

xe network-list

Depending on how many network interface cards (NICs) you have in your servers, this will show you a different number of Xen bridges (xenbr). I will use the one from eth0 for my VM:

uuid ( RO)                : edf3408d-29b6-51de-252d-c28031363a7b
name-label ( RW): Host internal management network
name-description ( RW): Network on which guests will get assigned a private link-local IP address
[...]
uuid ( RO)                : baf6dd81-6550-7446-b57f-dfe633759b33
name-label ( RW): Pool-wide network associated with eth0
name-description ( RW):
bridge ( RO): xenbr0
[...]
name-label ( RW): Host internal management network
name-description ( RW): Network on which guests will get assigned a private link-local IP address
bridge ( RO): xapi1

So this means xenbr0 for me:

xe network-list bridge=xenbr0 --minimal

This will give me the UUID only:

baf6dd81-6550-7446-b57f-dfe633759b33

At last we can create the VIF:

xe vif-create vm-uuid=9ecea6e2-6964-b944-e909-65a23b819ce7 network-uuid=baf6dd81-6550-7446-b57f-dfe633759b33 mac=random device=0

This will again show you the UUID of the newly created VIF:

348f8790-0f9b-bbe5-abd2-f4d86cf26fd7

Finally, we can start the VM with the following command:

xe vm-start uuid=9ecea6e2-6964-b944-e909-65a23b819ce7

This will take a few seconds, and if there are no errors, the VM is running. Now the last thing to do is to follow the installation and perform some configuration steps inside the OS. Therefore you have to access the console.

The console is actually provided via Virtual Network Computing (VNC), so you would only need to connect with a VNC viewer. Unfortunately this is quite cumbersome and not that easy to handle, so I would recommend using graphical software for it.

If you are working on Windows you could use XenCenter which is provided by Citrix for free, even if it is not open source. Alternatively you could try OpenXenManager which is open source but in my experience not working very well on Windows.

Ubuntu 10.04 Desktop VM in Oracle VM VirtualBox on Windows Vista

If you are running Linux, then this may be your first choice. I have an Oracle VM VirtualBox machine with Ubuntu 10.04 Desktop running on my Windows PC, with OpenXenManager installed in it.

Xen VNC console in OpenXenManager on Linux Ubuntu

# Open-Source SAN – Part 4: Solaris-based Appliances

There are two open-source storage appliances that are based on Oracle's Solaris: EON and NexentaStor. Additionally you could use OpenIndiana, which is based on illumos, a fork of the former OpenSolaris operating system.

Oracle is still developing Solaris, but there is no open-source version available anymore. OpenIndiana is now the community-driven version, based on the last Solaris code that was released as open source.

As I will stay with OpenFiler for now, I haven't been able to install and test these appliances. Therefore I will not go into much detail, as I don't know enough about them.

Still, I wanted to mention them, as it sometimes seems like everyone only knows FreeNAS and OpenFiler. But there are good alternatives if you feel unhappy with those two.

# Linux-based

Solaris is based on UNIX, just as BSD and Linux are. This makes them comparable, but it is still not the same. Additionally, there are several different BSD descendants and hundreds of distinct Linux distributions. So every single one of these products has its right to exist, with advantages and disadvantages depending on the use case.

This makes it impossible to say if one is better than the other or which one you should use. Personally, I'm just more comfortable with Linux and am much more experienced with it.

I also found that there is more support available on the Internet for Linux, as it is very widely used. So even if they have the same roots, Solaris is not Linux-based.

# ZFS

As already mentioned, ZFS was originally developed by Sun Microsystems and was therefore integrated into Solaris from the start. So you can use ZFS natively on EON, NexentaStor and also OpenIndiana, and it will work like a charm.

But even if you won't have issues similar to using ZFS on Linux (which is said to be unstable), it is still recommended to have fairly new hardware with a lot of RAM and a fast 64-bit CPU. It would also work on my old servers with only 4 GB of RAM, but I would not really benefit from ZFS and would rather run into slight performance issues.

# iSCSI

EON and NexentaStor both have iSCSI support integrated, even if NexentaStor has only an initiator. Still, it is quite easy to activate this functionality via the CLI.

As already mentioned, I'm not really familiar with package management on Solaris, so I don't know how it works and which packages are necessary. Still, I think it should be possible to simply install them on OpenIndiana and thereby make iSCSI work there, too.

# Clustering

Solaris is designed as a clustering operating system by default, so this should work fine. You also don't need additional packages, like Corosync and Pacemaker on Linux. It could be, however, that clustering is only available in the enterprise versions and not in the community editions.

But what I do know is that DRBD is not available for Solaris. As an alternative there are clustered file systems, though I don't know whether they exist for Solaris or not. Writing about this, I would really like to find out and test and play with it for a while.

As soon as my small cluster is running stably, I will install VMs and check out Solaris for a while; I'm really interested in this now. Additionally, if nothing else works, it is still possible to use Xen for the replication with Corosync, Pacemaker and DRBD but still store your data on ZFS/Solaris. You would then use the disks inside your XCP hosts instead of a real shared-storage SAN, but you would still prevent a SPoF.

# Web interface

EON has no GUI at all, NexentaStor has a web interface, and OpenIndiana can be installed with a desktop just like any other operating system. The latter might be overkill for a storage appliance, so you might be better off using Webmin instead. Or you would have to forgo a GUI and do everything via the CLI.

The web interface of NexentaStor looks very good, as it shows detailed performance measures and the like. Still, this could be a bit overloaded and therefore difficult for beginners. Also there seem to be endless configuration options available, which is great for people who want to control everything, but too complicated for everyone else.

# S.M.A.R.T.

I haven't tested any of these appliances yet so I just don't know anything about their monitoring abilities. As already mentioned, NexentaStor seems to be very good in this.


# How to secure WordPress

One of my favorite blogs on the Internet is the one by eITWebguru which offers very useful information for a wide area of computer and especially hosting topics. Additionally there are various tricks and tweaks for several problems and issues.

Recently, author Milind, as far as I know the only one posting on this blog at the moment, wrote an article about steps to secure WordPress. As I'm using this great free tool for my homepage myself and am also always interested in security, I instantly tested and implemented these steps.

Nevertheless, I found some of them not to be very current or accurate. Therefore I wanted to offer you a short summary of the individual steps:

1. Keep WordPress up to date
    1. WordPress can check for updates automatically
    2. You should always install the most current version
    3. Also keep all your plugins updated
    4. Don't use plugins that are not under active development
    5. Delete inactive plugins
2. Secure wp-config.php
    1. Set permissions to 750
3. Hide the version of your WordPress install
    1. Use the Secure WordPress plugin
4. Disallow wp-* folders from being crawled
    1. Create or update your robots.txt
5. Use Intrusion Detection System (IDS) plugins
    1. Mute Screamer
        1. Installs PHPIDS, a state-of-the-art security layer for PHP
        2. Includes monitoring
7. Run backups regularly
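Steps 2 (wp-config.php permissions) and 4 (robots.txt) above can be sketched on the shell. Paths are examples; in practice you would run this inside your WordPress root, where wp-config.php already exists:

```shell
# Stand-in file so the sketch is self-contained; skip this in a real install.
touch wp-config.php
chmod 750 wp-config.php          # step 2: owner rwx, group r-x, others nothing

# Step 4: a minimal robots.txt keeping crawlers out of the wp-* folders.
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
EOF
```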

# Linux-based

OpenFiler is indeed based on Linux, but unfortunately that is not as good as it could be. It runs on a distribution called Foresight Linux, which is in turn based on rPath Linux, a commercially driven project by a company of the same name.

rPath Linux, built with the Conary distributed software management system, is not only a distribution in its own right, but also a base technology explicitly designed to enable you to create purpose-built operating system images using the rBuilder Online technology.

This makes clear why OpenFiler runs on this distribution and it may be nice for the developers to build their own OS with less effort. But for users and community developers it is a huge disadvantage.

As already mentioned, rPath Linux is a commercial project financed and organized by a corporation. It is still free (though I don't know if it is also open-source), but there seems to be little to no active community behind it, it is not well known, and it is hard to get help online.

Additionally, Conary may be a nice package manager, but in my opinion (and I'm not the only one thinking so) there are far better tools for this. I am quite used to it now, but it took me weeks of mostly frustrating effort to get to that level.

The documentation is quite fine, but reading everything in wikis is not the same as searching forums or reading mailing lists. You are much more likely to find an answer to your questions in the OpenFiler community than from Foresight Linux or rPath.

Unfortunately, rPath is not the only company behind OpenFiler; OpenFiler itself is also commercially oriented. It is still free and open-source, and a corporation behind such a project could even be a huge advantage. Not so for OpenFiler.

Support in the forum is very rare; mostly you can read complaints that OpenFiler is not working as it should, that there is no help from the community, or that you won't find OpenFiler developers or officials. Reading this can really be frustrating, but I still think that OpenFiler is a great product. Nevertheless, you should be an advanced Linux user, and you should not expect an absolutely stable appliance where everything works correctly.

Even if OpenFiler is not made to be installed on a USB pen drive like FreeNAS, it is still possible and works very well, with the same huge advantage. In comparison to FreeNAS it is much easier to install additional packages on OpenFiler. Or let's say it would be much easier if it, again, weren't for rPath with the complicated Conary repository manager and very hard-to-find software builds.

Nevertheless, I have to admit that OpenFiler comes very complete out-of-the-box, which means that almost everything you can imagine is right there with a fresh installation. You can also keep these packages up to date with Conary or even via the web interface.

Unfortunately, the release cycle of OpenFiler is neither regular nor really current, and there is not even a roadmap. The community got very frustrated with the release of version 2.99.1 in April 2011, after a major 3.0 release had been promised for the end of 2010. That release has been postponed several times, from Q1 2011 to the end of 2011; now it is Q1 2012 and there is still no OpenFiler 3.0 available.

Now it is said that OpenFiler 2.99.1 with all updates is 2.99.2, which should be exactly the same product as the upcoming 3.0, only with the old web interface. Still, it seems that 2.99 is a beta release that raised lots of errors, issues and bugs that simply stay unsolved and even uncommented on. There are only very few people left who still believe that OpenFiler is really under active development.

# ZFS

As already mentioned, OpenFiler comes with lots of packages out-of-the-box. This also includes the zfs-on-linux release which allows creating RAIDz pools and datasets just as it is possible on FreeBSD.

I think very few people have noticed this great news so far, so nobody has really tested it on OpenFiler yet. I did, and it seems to work somehow, but not really well. Additionally, these packages are said to be considered developmental and for testing rather than stable.

Therefore you are strongly advised not to use it in production and critical environments. That is too bad; I would have loved to use ZFS on OpenFiler, but it's more important to have a stable, working storage solution than a testing playground.

On the other hand, it is actually not such a big deal. I thought a lot about using ZFS for my SAN, but at the moment I'm not so sure about it anymore. I want to share my disks via iSCSI as one single LUN, which will then host several virtual disk images (VDIs), each providing one VM.

This negates most of the advantages of ZFS, so it doesn't make much sense anymore. The most promising features for me were RAIDz and snapshots.

• Still, I think that LVM2 on Linux with a software RAID 6 is also very fine and probably much more stable.
• And then there are snapshots. Actually, snapshots are only great for a somewhat “traditional” file system, meaning that you store files on it directly rather than providing VDIs. There is no real application where you could benefit from restoring a snapshot of the whole system state when several VMs are running on it; this would just reset all your machines to the same specific restore point at the same time. Therefore it is better to take snapshots of the VMs themselves via the hypervisor and run “classical” backup strategies inside the VMs.

Besides zfs-on-linux there is something similar under development: the B-tree file system (Btrfs). It seems to be quite similar to ZFS but is also considered not stable yet.

# iSCSI

I was able to set up an iSCSI target on OpenFiler on top of an “iSCSI block device” and successfully installed a VM on it. The process is quite intuitive and straightforward: just a few clicks and there you go. It was also possible to use “write-thru” mode, which might be somewhat comparable to “pass” on FreeNAS (which didn't work there).

What I have not done (yet) are benchmarking tests for file input and output (I/O). Some people claim that “blockio” (which I'm using too) is not very fast and that you are better off using “fileio” instead. I also didn't test any read or write speeds. Luckily this doesn't really bother me, as the installation of the VM was as smooth and fast as normal and I have not experienced any real bottlenecks yet.

### MPIO

I didn't have the same problems with link aggregation (actually called "bonding" on Linux) on OpenFiler as I had with FreeNAS. But this is mainly because I simply wasn't tempted to try, since I'm running it inside a cluster. As the heartbeat needs its own dedicated network connection, I had to reserve one of my only two network interface cards (NICs) for it. So there was nothing left to bond.

This will also prevent MPIO, but it may still be working somehow. I'm not really sure about this (yet), but while setting up only one iSCSI target on the active cluster master node, I can still see two targets on my initiators (as long as both nodes are running). On the other hand, when checking the connections of my targets, I can see two of them, but only one from each XCP host. So probably no MPIO.

I will have to do some further testing on my XCP machines, as I configured them to use MPIO and it just worked without any error messages. But while trying to probe the storage repository (SR) for multi-pathing, it gave me an error from iscsiadm. It didn't say that it couldn't find multiple targets, only that the service was not able to execute because a file was missing. So I will give this another try when time allows.

I will also test how it reacts to sudden disconnections, to figure out whether redundancy is working or not. This is more important to me than load balancing, as my VMs won't generate that much traffic. Still, I would be happy to have that feature, too.

# Clustering

Clustering on OpenFiler works like a charm. OK, it is not provided by default, as they want you to pay for commercial support to activate the services. Nevertheless, you can just follow this simple how-to and make it work!

OpenFiler uses the great free and open-source Linux tools Corosync, Pacemaker and DRBD to build an active/passive hot-standby master/slave cluster. DRBD can be thought of as a RAID 1 mirror, only over the network between two servers instead of between two disks inside one server.

This means you are limited to a cluster size of two nodes, but that is just fine for me. All I wanted was to avoid the classic SPoF of shared storage, as that defeats the whole point of the concept.
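As a rough sketch of what such a two-node mirror looks like on the DRBD side — hostnames, backing devices and IPs below are invented for illustration, not my real configuration:

```
# /etc/drbd.d/r0.res -- two-node DRBD resource (names/IPs are placeholders)
resource r0 {
    protocol C;                 # synchronous: a write is confirmed only
                                # after both nodes have it on disk
    on filer1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;    # backing partition on node 1
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on filer2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```

Pacemaker then decides which node holds the DRBD primary role and the service IP at any given time.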

There will be at least two hypervisors running, so if one host fails, the VMs get migrated to the other node automatically and the clients don't even recognize that there was an actual system failure. But what if the shared storage is only provided by one single server and this one fails?

The whole cluster crashes and you again lose your “high availability”. But with Pacemaker and DRBD, the two storage nodes stay in sync at all times at the block-device level, so if the active master node fails, the passive slave node can take over all services immediately without any data loss. This is why it is called “hot standby”, in contrast to a “cold standby” solution like the one FreeNAS provides with rsync.

# Web interface

The web interface looks nice; only this self-signed security certificate thing sucks when using Internet Explorer. Actually, if you think of OpenFiler as an appliance, it is only a collection of tools plus a GUI to handle them. So it should also be possible to set up your own storage server with, let's say, Ubuntu and just copy the files from OpenFiler to use its web interface.

I think the GUI of OpenFiler is really OK. After using it for some time you get used to it and it becomes quite comfortable. As already mentioned for FreeNAS, none of these GUIs are very user-friendly or intuitive, but this one is as good as it gets.

The only thing is that there are a lot of tasks that just don't work via the web interface or are simply not available there. So you actually have to use the CLI a lot.

A few examples:

• Installing updates on a fresh install raises numerous issues and can even crash your system, so you can only install some of them from the GUI and have to do the rest via the CLI.

• Partitioning just does not work via the web interface, so you have to use parted commands.

• There is a menu for clustering, but it is unusable: it only says that the service is not running, and of course you're not able to start it. So you have to configure everything via the CLI. As soon as you're finished, though, you can at least see the settings inside the GUI, even if you are not able to change anything.

So I think if you are already familiar with Linux and with storage tools and protocols, you may be better off using the command line only and forgetting about a GUI. If you are an absolute beginner, you will have difficulties with this web interface too, but at least you should be able to get what you want. With a little help from the community, it should also be no problem to type a few simple commands for the cases where an issue cannot be solved via the GUI.
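To give an idea of the console work involved, the partitioning mentioned above boils down to a few parted calls like these (the device name is a placeholder — double-check it, this is destructive!):

```shell
# Create a GPT label and one partition spanning the data disk
# (/dev/sdb is a placeholder; -s runs parted non-interactively)
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 100%

# Verify the result
parted -s /dev/sdb print
```

Afterwards the new partition shows up in the OpenFiler GUI and can be used as a physical volume.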

# S.M.A.R.T.

S.M.A.R.T. is also available for OpenFiler via the smartmontools package. I have tested some simple smartctl commands and everything seems to be working correctly. It is not as convenient as on FreeNAS, which comes with full S.M.A.R.T. functionality built in.

For OpenFiler you have to configure everything yourself and use cron and sendmail to enable email notifications. I just haven't had the time to set all this up yet, but I am planning to do so soon. Of course I will let you know how it works.
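The manual setup I have in mind looks roughly like this — the device name and mail address are placeholders:

```shell
# One-off health check and full attribute dump for a single drive
# (device name is a placeholder)
smartctl -H /dev/sda
smartctl -a /dev/sda

# For continuous monitoring, a single line in /etc/smartd.conf is
# enough (mail address is a placeholder):
#   DEVICESCAN -a -m root@localhost
# ...then start the smartd daemon so it watches all disks and mails
# warnings when attributes degrade.
```

This is a sketch from the smartmontools defaults, not a tested OpenFiler recipe yet; I'll report back once I've set it up.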

I don't use SNMP, so I also don't know whether it is available for OpenFiler. What it does have are email notifications for RAID events, and these work like a charm. It tells you when a drive fails and which one, and after you replace it, you get regular rebuild progress messages.

There are also some simple networking and storage statistics available, but these are not sent via email automatically. As mentioned several times already: this is Linux, so you are very welcome to set up everything as you like!


# New Wiki for Storage and Virtualization

I have updated my Docs section and switched from DokuWiki to MediaWiki, simply because MediaWiki is much more up to date, has better functionality and is easier to maintain.

I will try to create new articles with useful information in the categories of storage and virtualization on a regular basis. These will mainly cover the FreeNAS 8 and OpenFiler 2.99 SAN appliances and the Xen Cloud Platform (XCP) hypervisor.

Unlike the how-tos in my blog, these will be knowledge articles and guides rather than simple step-by-step walkthroughs. I really hope this will help some people find the solutions they are looking for.

For now there are the following articles online:

• German translation of the (yet un)official FreeNAS 8 FAQ
• OpenFiler 2.99 Update

Enjoy reading, and please let me know if the information you find there is useful for you!

# Linux-based

FreeNAS is a great, stable product, well documented, with a big, active and motivated community. By the way, I contributed my part by doing the German translation of the (yet un)official FreeNAS 8 FAQ. You can find it in my wiki, which will hopefully grow over time with useful knowledge articles in the categories of storage and virtualization.

Unfortunately it is not Linux-based, but runs on a specially minimized version of FreeBSD. This minimization makes it fit on a plain USB pen drive you can boot it from directly.

This is a huge advantage, as it allows you to use all your hard disks for storage only; you don't lose space to a system partition. Especially in my case this is a great solution, as I only have old hardware with quite a lot of small disks (6x 320 GB SATA). So I'm happy about every MB I can use for storage instead of reserving it for an OS.

But this is also a disadvantage. Not only is FreeNAS not Linux, it is also very limited by the maximum size that fits on a USB stick. This makes it quite difficult to install additional packages you might need for your server. There's not even an automatic update function available.

# ZFS

Still, there is another point that clearly speaks for FreeNAS. ZFS was developed by Sun Microsystems (now Oracle) and was soon available for their own operating system, Solaris. Some time later, ZFS was ported to FreeBSD and seems to be working very well on that system now, too. For Linux this is not that easy, and ZFS is not yet natively available for any Linux distribution.

So why is ZFS so important to me? It is simply a very promising file system (FS) and logical volume manager (LVM) in one: it verifies data integrity to protect against silent data corruption, offers snapshots and copy-on-write clones, does continuous integrity checking with automatic repair, and is implemented as open-source software. Additionally, RAID-Z offers a clever new way of setting up secure mirrors and arrays.
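On the FreeBSD/FreeNAS side, creating such a pool from the console is essentially a one-liner — the pool name and device names below are placeholders standing in for my six SATA disks:

```shell
# Create a RAID-Z pool from six disks and a dataset on top of it
# (pool and device names are placeholders)
zpool create tank raidz ada0 ada1 ada2 ada3 ada4 ada5
zfs create tank/storage

# Show the pool layout and health
zpool status tank
```

FreeNAS wraps these steps in its volume manager GUI, so normally you never have to type them yourself.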

In my opinion, the biggest advantage is ZFS snapshots, and FreeNAS not only offers this functionality but also adds automatic snapshot and replication tasks. The FreeNAS 8.0.2 User Guide says:

FreeNAS supports the secure replication of ZFS snapshots to another remote FreeNAS system (or any other system running the same version of ZFS and a listening SSH server). This allows you to create an off-site backup of the storage data.

So it creates snapshots of the whole system regularly, as often as you want, and enables you to restore the full system state. And with the replication tasks, you can additionally store the snapshots on another machine for double the security.

Unfortunately, I've never been able to set up such a replication between two FreeNAS boxes. I'm not claiming that it's impossible; all I'm saying is that it's really difficult and I just wasn't able to get it up and running. You have to configure SSH keys to enable mutual authentication between the two servers. This is very well documented in the user guide, but although I followed all the steps exactly, it just never worked.
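For the record, what the replication task boils down to under the hood is a zfs send piped through SSH, roughly like this (dataset names and the hostname are placeholders):

```shell
# Snapshot a dataset and ship it to a second box over SSH
# (dataset names and hostname are placeholders)
zfs snapshot tank/storage@backup1
zfs send tank/storage@backup1 | ssh replication@freenas2 zfs receive tank2/storage

# Later runs only need to ship the delta between two snapshots:
zfs snapshot tank/storage@backup2
zfs send -i tank/storage@backup1 tank/storage@backup2 | \
    ssh replication@freenas2 zfs receive tank2/storage
```

This is the generic ZFS mechanism, not FreeNAS-specific; the part that never worked for me was getting the SSH key exchange accepted by the FreeNAS replication service.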

# iSCSI

I had two FreeNAS boxes running for a while and had the chance to test all the functions I need. I also tested iSCSI. First of all, what do I need iSCSI for?

It is an IP-based storage networking standard for linking data storage facilities. By carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers over intranets and can be used to transmit data over LANs. So it can enable location-independent data storage and retrieval. This makes it a SAN protocol, allowing organizations to consolidate storage into data center storage arrays while providing hosts (in my case virtual machine hosts) with the illusion of locally-attached disks. Unlike traditional Fibre Channel, which requires mostly expensive special-purpose cabling, iSCSI can use existing network infrastructure [Wikipedia].

I was able to set up an iSCSI target on FreeNAS on top of ZFS and successfully installed a VM on it. Still, there were some major issues I was a little bit concerned about. The user guide says:

type of device: choices are disk, DVD, tape, or pass (choose pass in a virtual environment)

under iSCSI Target Settings.

As I want to use the server for VM storage, I chose pass. Unfortunately, the iSCSI service then refuses to start, which makes the target inaccessible. So I selected disk instead and, well, it still worked, so who cares? Well, I do, somehow: when a dialog offers an option you are supposed to choose but cannot, that strikes me as strange.

### Multipath I/O (MPIO)

Setting up link aggregation on FreeNAS was not easy either. I finally got it running after several tries, but had to do it directly via the console, as I lost the connection to the web interface. There was another thing I thought was not working correctly, but now I know it was my own fault.

When setting up link aggregation, you're not able to use iSCSI MPIO, as also described in the user guide:

NOTE: LACP and other forms of link aggregation generally do not work well with virtualization solutions. In a virtualized environment, consider the use of iSCSI MPIO through the creation of an iSCSI Portal as demonstrated in section 8.14.6. This allows an iSCSI initiator to recognize multiple links to a target, utilizing them for increased bandwidth or redundancy.

I wanted to have redundancy and load balancing on the network level, but obviously I was just thinking in the wrong direction. So you can use FreeNAS very well for iSCSI and VM storage!

# Clustering

Besides the fact that FreeNAS is not Linux-based, there is another big disadvantage that keeps me from using it for my system: there is simply no clustering, high-availability or reliability functionality available, unless you switch to TrueNAS, the commercially developed, closed-source version of FreeNAS.

What you can use, and what the user guide also recommends, is rsync:

Rsync is a utility that automatically copies specified data from one system to another over a network. Once the initial data is copied, rsync reduces the amount of data sent over the network by sending only the differences between the source and destination files. Rsync can be used for backups, mirroring data on multiple systems, or for copying files between systems.

So that sums it up: a passive backup solution. It's great and very useful, but you still have the problem of a single point of failure (SPoF). If your first FreeNAS box crashes, you still have your data on the other one, but you have to reconfigure all clients to access the new server. So it's more a kind of passive standby that needs some manual work to get up and running.

# Web interface

The web interface looks nice, but does not really work with Internet Explorer: there are some major display errors which make it impossible to configure everything the way you want. This is no big deal; in Firefox it works just fine.

Admittedly, a GUI would not really be required, but as I'm not that much of a Linux and console pro (yet) it just makes my life much easier. It also lets you make simple changes much quicker when it only takes two clicks rather than editing config files and typing complex commands. I also have to admit that the interfaces of the appliances I have tested so far are not really intuitive, so you need some time to get comfortable with them, too.

# S.M.A.R.T.

What is S.M.A.R.T. and what do I need it for? Wikipedia says:

S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) is a monitoring system for computer hard disk drives to detect and report on various indicators of reliability, in the hope of anticipating failures. When a failure is anticipated by S.M.A.R.T., the user may choose to replace the drive to avoid unexpected outage and data loss.

This makes it a great tool to monitor your disks for temperature, bad sectors and so on. It can automatically send email notifications when preconfigured limits are reached or other problems are detected. And it doesn't necessarily have to be S.M.A.R.T.: FreeNAS also offers SNMP and email notifications for RAID events, as well as networking and storage statistics.


# How to take ownership of user profile folders in Microsoft Windows domains

User profile folders in Microsoft Windows domains are automatically generated by the system and grant access to the user only. If you want to access such a folder as a domain administrator, you have to take ownership of it first.

The following how-to is not only for profile folders; it works whenever you want to fix messed-up security settings or take over system-generated folders.

When you do a right click on the user profile folder and choose the "Security" tab, you will see a window similar to this:

Properties of a system-generated User Profile Folder

It says "You do not have permission to view or edit this object's permission settings."

• Now choose the "Owner" tab

Take ownership of a system-generated folder

• Click "Edit..."
• Tick the checkbox
• "Replace owner on subcontainers and objects"

Replace owner on subcontainers and objects

• Click "Apply"
• You will get a very important warning message

Windows Security warning message

It says: "All permissions will be replaced if you press Yes." So click "Yes" only if you know how to set the correct permissions later on.

After pressing "Yes" it shows some progress, depending on the number of files and subfolders inside the folder, and finally another Windows security message:

Windows Security information

So click "OK" four times and open the properties again. Now you are able to edit the security settings, but your user is the only one with access granted. For a user profile folder you will have to add at least SYSTEM and the user account the profile belongs to.

Permissions

Grant them all full control.
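For what it's worth, on newer Windows versions (Vista / Server 2008 and later) the same steps can be scripted from an elevated command prompt with takeown and icacls; on Server 2003 itself you are stuck with the GUI dialogs above unless you install extra tools. The path and account names below are placeholders:

```bat
:: Take ownership of the profile folder recursively
:: (path is a placeholder; /d y answers "yes" on inaccessible folders)
takeown /f "D:\Profiles\jdoe" /r /d y

:: Re-grant full control to SYSTEM and the profile's owner, inherited
:: to subfolders and files (domain and user are placeholders)
icacls "D:\Profiles\jdoe" /grant "SYSTEM:(OI)(CI)F" "EXAMPLE\jdoe:(OI)(CI)F" /t
```

This saves a lot of clicking when you have to fix many profile folders at once.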