Installing and Configuring GlusterFS
Install GlusterFS, FUSE and the other requirements. On Gentoo use emerge; on Debian or Ubuntu use apt-get.
emerge fuse glusterfs
[ebuild N ] sys-fs/fuse-2.8.5
[ebuild N ] sys-cluster/glusterfs-3.3.1 USE="-emacs -extras fuse (-infiniband) -static-libs -vim-syntax"
apt-get install glusterfs-server attr
On GlusterFS server systems, ensure that the glusterd daemon is started on boot.
rc-config add glusterd default
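To start the daemon right away without rebooting, something along these lines should work; the Debian init script name glusterfs-server is an assumption based on the package installed above.
/etc/init.d/glusterd start            # Gentoo (OpenRC)
/etc/init.d/glusterfs-server start    # Debian/Ubuntu (assumed init script name)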
GlusterFS Single Server NFS Style
In the simplest configuration, GlusterFS can be used in a similar fashion to NFS. This configuration has a central GlusterFS server to which multiple clients connect. One nice thing to notice is that the clients carry no configuration of their own; the client configuration is obtained from the server on demand.
This setup uses a single system as the GlusterFS host, similar to a single NFS server. Replace $name, $host and $path with appropriate values; $path can reference an existing directory.
gluster volume create $name transport tcp $host:$path
gluster volume start $name
Simple, huh? The volume is retained when glusterd is stopped and started.
If you are curious about this information, look in /var/lib/glusterd.
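If you would rather not poke around the filesystem, gluster can report the same details itself: volume info lists the bricks and options, and the generated volfiles live under /var/lib/glusterd/vols (the exact layout varies between releases, so treat the path below as a rough guide).
gluster volume info $name
ls /var/lib/glusterd/vols/$name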
GlusterFS Replica
This configuration has two servers that act as a mirror for the volume. Assume the servers are named gfs1 and gfs2; these commands are run on gfs1.
gluster peer probe gfs2
gluster volume create BigVol replica 2 transport tcp gfs1:/brick/BigVol gfs2:/brick/BigVol
gluster volume start BigVol
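To confirm the peer was probed successfully and that both bricks ended up in the volume, these status commands do the job.
gluster peer status
gluster volume info BigVol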
GlusterFS Convert Distribute to Replica
Suppose you start off with only one Gluster Brick and then get another server, hooray! Now, rather than Distribute, you can convert this one-Brick system to a Replica.
gluster peer probe $new_host
gluster volume add-brick $name replica 2 $new_host:/$path
gluster volume info $name
Yep, just adding the new Brick with replica 2 to the existing Distribute volume gets the job done.
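At this point the existing data only lives on the original brick, so you will probably want to trigger a full self-heal so it gets copied to the new replica; on GlusterFS 3.3 something like this should work.
gluster volume heal $name full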
GlusterFS Distribute Replica - Four (4) Hosts
This places each file on two hosts (one replica pair) within a four-host volume.
gluster peer probe gfs2
gluster peer probe gfs3
gluster peer probe gfs4
gluster volume create BigVol replica 2 transport tcp gfs1:/brick/BigVol gfs2:/brick/BigVol gfs3:/brick/BigVol gfs4:/brick/BigVol
gluster volume start BigVol
If you examine the volfile you will see that gfs1 and gfs2 make up one mirror, and gfs3 and gfs4 make up another. Files, reads and writes will be distributed across these pairs.
Connecting a Client
Almost too easy: simply mount it. $host is the Gluster server, $name is the name of the exported volume and $path is the local path to mount to.
root@desk # mount -t glusterfs $host:$name $path
# eg:
root@desk # mount -t glusterfs carbon:raid /mnt/raid
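To have the mount come back after a reboot, an fstab entry along these lines should work, using the carbon:raid example from above; the _netdev option (so the mount waits for networking) is an assumption about your distribution's init setup.
carbon:raid  /mnt/raid  glusterfs  defaults,_netdev  0 0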
You can examine the Volume using the getspec sub-command.
gluster --remote-host=gfs1 system getspec BigVol
Configuring Quota
Quota can be set on a directory, with '/' referencing the entire volume.
Here we set a volume-wide limit of 10GB and a limit of 2G on a specific directory. Then we set a quota timeout of 30 seconds. Finally, use list to show the quota settings.
gluster volume quota BigVol enable
gluster volume quota BigVol limit-usage / 10GB
gluster volume quota BigVol limit-usage /path/in/volume 2G
gluster volume set BigVol features.quota-timeout 30
gluster volume quota BigVol list
gluster volume quota BigVol list /path/in/volume
Clear these out with
gluster volume quota BigVol remove /
gluster volume quota BigVol remove /path/in/volume
Performance Options
Over TCP you really want fast Ethernet cards, gigabit or better; dual NICs are advisable.
gluster volume set BigVol diagnostics.brick-log-level WARNING
gluster volume set BigVol diagnostics.client-log-level WARNING
gluster volume set BigVol nfs.enable-ino32 on
gluster volume set BigVol nfs.addr-namelookup off
gluster volume set BigVol nfs.disable on
gluster volume set BigVol performance.cache-max-file-size 2MB
gluster volume set BigVol performance.cache-refresh-timeout 4
gluster volume set BigVol performance.cache-size 256MB
gluster volume set BigVol performance.write-behind-window-size 4M
gluster volume set BigVol performance.io-thread-count 32
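To see which options have actually taken effect, check the volume info output (set options appear under Options Reconfigured); an individual option can be reverted with reset. A quick sketch, using performance.cache-size as the example:
gluster volume info BigVol
gluster volume reset BigVol performance.cache-size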