Some time ago I pontificated about the benefits of virtualization as a means of abstraction. However, I warned that if the virtualization layer enforces the use of encapsulation boundaries that interfere with that abstraction, then virtualization becomes less useful. I put forward the use of a software iSCSI initiator on Debian under XenServer as a case in point. Today I had the opportunity to test this scenario under VMware ESX and can report success.
First, what was the problem? If data volumes and a server operating system instance are tied together, then moving an application to a new environment becomes difficult. This could be an application that has outgrown the capacity of a VM and needs to move to a physical server, or it might simply be that for disaster recovery purposes you want the flexibility of attaching that data (whether the volume itself, or a replicated snapshot promoted to a new volume elsewhere) to any server, physical or virtual, that can host the application. If the volume has some signature on it that prevents it from being used as a standalone volume, that defeats the purpose. This is what happens if you attach an iSCSI volume to XenServer as a storage repository (I presume something similar happens if you attach an iSCSI volume as a VMware Datastore too). To achieve the extra flexibility, I'm willing to pay a small performance penalty (or even limit how many VMs on any virtualization server actually form these kinds of connections). Up to now, I have not been able to achieve this under XenServer. Below is how I did it under VMware. The methods are the same as on any standalone instance of Debian.
Terminology for open-iscsi
iSCSI and open-iscsi have their own set of terminology, and for at least one term ("node") they differ a bit. Here is a quick rundown of a few terms that may help you understand the documentation:
- Portal — a network location for the "server" that will provide iSCSI targets; it consists of an IP address and a TCP port.
- Target — a storage resource located on a portal; targets are identified by unique names (e.g. iqn.2001-05.com.equallogic:0-8a0701-44216d703-871459e523e4b2a7-testVolume).
- Node — in open-iscsi terminology, a node (or node record) is a target on a portal; in node mode, iscsiadm will require both --targetname and --portal arguments.
Installing open-iscsi in Debian (or Ubuntu) is straightforward:
apt-get install open-iscsi
The configuration of iSCSI is done in two places:
- The daemon configuration file, /etc/iscsi/iscsid.conf. This contains settings for the iSCSI daemon; in particular, these settings can be overwritten by iSCSI discovery or updated manually using the iscsiadm utility.
- Persistent configuration, implemented as a DBM database. The database contains two tables:
- Discovery table (/etc/iscsi/send_targets)
- Node table (/etc/iscsi/nodes)
There is also a file named initiatorname.iscsi in /etc/iscsi. This contains a unique name and it should have been established during installation; there is generally no reason to change this.
For initial testing, you may leave the settings at their default values. There are at least two you will probably want to change, either right away or later:
- node.startup = automatic
This ensures that iSCSI sessions set to ‘automatic’ will start when iscsid starts.
- node.conn.startup = automatic
You may have to add this to the configuration file, as there may not be an existing value there. Setting this ensures automatic login to discovered nodes, e.g., on subsequent system reboots. It only affects nodes discovered after the setting is in place (i.e., after the value is changed and iscsid has been restarted), not existing nodes. If you wish to change this for an existing node, use the command:
# iscsiadm -m node -T targetname --op update -n node.conn.startup -v automatic
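To confirm the change took effect, you can dump the node record and look for the startup settings. Depending on the open-iscsi version, the connection key is written as node.conn.startup or node.conn[0].startup; the output below is an illustration, not verbatim:

```
# iscsiadm -m node -T <completeTargetName> -p <IP address>:3260 | grep startup
node.startup = automatic
node.conn[0].startup = automatic
```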
The iscsiadm utility manages the persistent configuration in the DBM database (query, insert, update, delete). It has a lengthy manpage, which you should consult. There are five modes for iscsiadm:
- discovery — finding targets
- node — logging in and out of targets
- session — check connection status
- fw — display boot firmware values
- iface — setup iSCSI interfaces for binding (usually for multipath I/O)
We will only concern ourselves with the first three.
Our initial setup involves several steps:
- Creating a test volume
- Discovering targets
- Logging into targets
- Creating a partition and filesystem
- Automatic mounting of iSCSI targets at system boot
Create Test Volume
First create a volume, e.g., on your iSCSI SAN, and allow access to it from your Debian system. There are usually several ways of restricting access to a volume, including:
- IP address
- iSCSI initiator name
- CHAP authentication
To keep the initial setup simpler, don’t worry about CHAP authentication right now, but this method of access control is recommended.
Discovering Targets
Once you have iscsid running and you've tested IP connectivity to the portal, e.g., your iSCSI SAN, use the following command to get a list of available targets:
# iscsiadm -m discovery -t st -p <IP address>:3260
The clearer, but more verbose version of this would be:
# iscsiadm --mode discovery --type sendtargets --portal <IP address>:3260
This will return a list of nodes (including portal and target).
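Each line of that list has the form `<portal IP>:<port>,<target portal group tag> <target name>`. If you need just the IQNs, say to feed into node-mode commands, a one-liner can pull them out; the address and volume name below are placeholders rather than output from a live SAN:

```shell
# Hypothetical sendtargets output line (a live run would come from
# `iscsiadm -m discovery -t st -p <IP address>:3260`):
line='10.10.10.20:3260,1 iqn.2001-05.com.equallogic:0-8a0701-44216d703-871459e523e4b2a7-testVolume'

# Field 1 is portal,target-portal-group-tag; field 2 is the target IQN
echo "$line" | awk '{print $2}'
# → iqn.2001-05.com.equallogic:0-8a0701-44216d703-871459e523e4b2a7-testVolume
```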
Logging into Targets
There are a couple of ways to log into targets. The first logs into all available targets:
# iscsiadm -m node -l
The second logs into an individual target:
# iscsiadm -m node -T <completeTargetName> -l -p <IP address>:3260
The latter is particularly helpful for attaching to snapshots, for example.
Logging out of a target is the same, except that you use the -u option instead of -l.
Creating a Partition and Filesystem
Identify the new iSCSI device by searching through dmesg for it. It will show up as /dev/sd*, but without individual partitions. Next use fdisk to create a Linux partition on the device. After you’ve written the partition table, you can create a filesystem, for example:
# mke2fs -j /dev/sdb1
Automatic Mounting of iSCSI Targets
With the two settings in place in iscsid.conf for automatic startup, your volume should automatically be attached at system reboot. However, you do have to add an entry to /etc/fstab for it to be mounted. The only unusual option is '_netdev', which delays mounting until after the network has been started and ensures that the filesystem is unmounted before the network subsystem is stopped at system shutdown.
/dev/sdb1 /mountpoint ext3 _netdev,defaults 0 0
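Note that /dev/sdb1 is not guaranteed to keep the same name across reboots, since device names depend on the order in which disks are discovered. A sturdier fstab entry mounts by filesystem label; the label name here is just an example, set once with e2label after creating the filesystem:

```
# e2label /dev/sdb1 iscsi-data
LABEL=iscsi-data /mountpoint ext3 _netdev,defaults 0 0
```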
This was all accomplished in a VMware virtual machine, allowing enormous flexibility to move data to wherever the application using that data is hosted. While this was possible under XenServer with Red Hat Enterprise Linux, it has not been possible with Debian.