Gluster Backing Filesystems

Gluster can use ext3/ext4, ZFS, or just about any backing store, but XFS is the recommended choice. Let's build a 16-disk RAID6 array and put XFS on it:

mdadm --create /dev/md0 --level=6 --raid-devices=16 \
	/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq
mkfs.xfs -i size=512 /dev/md0
mount -t xfs \
	-o logbufs=8,logbsize=256k,osyncisdsync,barrier,largeio,noatime,nodiratime \
	/dev/md0 /mnt/R6
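
If the array and mount should survive a reboot, something along these lines does it (the mdadm.conf path varies by distro, e.g. /etc/mdadm/mdadm.conf on Debian, and the fstab options just mirror the mount command above; trim any options your kernel no longer accepts):

mdadm --detail --scan >> /etc/mdadm.conf
echo '/dev/md0 /mnt/R6 xfs logbufs=8,logbsize=256k,osyncisdsync,barrier,largeio,noatime,nodiratime 0 0' >> /etc/fstab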

Checking Connected Clients

There is no native tool in Gluster to check this, sadly. My current workaround is to use netstat.

netstat -tapu | grep gluster

This runs netstat and filters the output down to just the GlusterFS processes. If your hostnames are configured nicely, say with a common prefix like gnode, you can filter out the inter-server connections with something like:

netstat -tapu | grep gluster | grep -v gnode
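
If you just want a rough count of distinct clients rather than the raw socket list, a quick hack like this works (column 5 is the foreign address in the usual netstat layout; IPv6 addresses would need more care):

netstat -tapu | grep gluster | grep -v gnode | awk '{print $5}' | cut -d: -f1 | sort -u | wc -l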

Checking Open Directories/Files

Gluster provides a top-like utility.

gluster volume top VOLUME open
gluster volume top VOLUME read
gluster volume top VOLUME write
gluster volume top VOLUME opendir
gluster volume top VOLUME readdir
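
Each of these takes an optional list-cnt argument to limit how many entries come back, and can be pointed at a single brick, e.g.:

gluster volume top VOLUME open brick BRICK list-cnt 10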

Quick Performance Check

These commands run the perf-tests across all bricks:

gluster volume top VOLUME read-perf  bs 1024 count 1024
gluster volume top VOLUME write-perf bs 1024 count 1024

A specific brick can be checked with:

gluster volume top VOLUME read-perf  bs 1024 count 1024 brick BRICK
gluster volume top VOLUME write-perf bs 1024 count 1024 brick BRICK
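
The counters gathered by top can be reset afterwards with:

gluster volume top VOLUME clear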

BRICK background entry self-heal failed on FILE

This is the worst error possible from GlusterFS, the worst. It generally means that somehow the bricks of the volume have fallen out of sync, and when this error is thrown, the calling process hangs in an I/O syscall.
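
When I hit this, the first place I look is the heal state of the volume; depending on your version, these give a hint at which files the bricks disagree on:

gluster volume heal VOLUME info
gluster volume heal VOLUME info split-brain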

Volume File Filters

These are filter scripts that get executed whenever the vol files are re-written.

From an IRC discussion: it sounds like glusterd filters what it reads from the volfile and does not pass all options through to the clients. However, a script to re-add an option can be hooked in via /usr/lib*/glusterfs/$VERSION/filter; scripts in that directory should be invoked any time a volfile is rewritten, with the path to the volfile as an argument.
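
I haven't used this in anger, but a minimal filter script would look something like this (the path, option name, and sed pattern are all placeholders):

#!/bin/sh
# Hypothetical filter script, e.g. /usr/lib/glusterfs/$VERSION/filter/fixup.sh
# glusterd runs every executable in that directory with the path to the
# freshly rewritten volfile as $1.
VOLFILE="$1"
# Placeholder example: re-add an option that glusterd dropped from the volfile.
grep -q 'option ping-timeout' "$VOLFILE" || \
    sed -i '/option remote-subvolume/a\option ping-timeout 10' "$VOLFILE"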

See Also