salt.modules.glusterfs

Manage a glusterfs pool

salt.modules.glusterfs.add_volume_bricks(name, bricks)

Add brick(s) to an existing volume

name
Volume name
bricks
List of bricks to add to the volume
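
CLI Example (volume and brick names here are hypothetical placeholders; the brick list uses the same string format as glusterfs.create):

salt '*' glusterfs.add_volume_bricks myvolume '["host1:/srv/gluster/brick1", "host2:/srv/gluster/brick1"]'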
salt.modules.glusterfs.create(name, bricks, stripe=False, replica=False, device_vg=False, transport='tcp', start=False, force=False)

Create a glusterfs volume.

name
Name of the gluster volume
bricks
Bricks to create volume from, in <peer>:<brick path> format. For multiple bricks use list format: '["<peer1>:<brick1>", "<peer2>:<brick2>"]'
stripe
Stripe count; the number of bricks should be a multiple of the stripe count for a distributed striped volume
replica
Replica count; the number of bricks should be a multiple of the replica count for a distributed replicated volume (illustrated below)
device_vg
If True, the volume uses a block backend instead of the regular POSIX backend. A block device backed volume does not support multiple bricks
transport
Transport protocol to use; can be 'tcp', 'rdma' or 'tcp,rdma'
start
Start the volume after creation
force
Force volume creation; this works even when creating in the root filesystem

CLI Example:

salt host1 glusterfs.create newvolume host1:/brick

salt gluster1 glusterfs.create vol2 '["gluster1:/export/vol2/brick", "gluster2:/export/vol2/brick"]' replica=2 start=True
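
For a distributed replicated volume the brick count must be a multiple of the replica count; a hypothetical four-brick, replica=2 layout (host and path names are placeholders) would look like:

salt gluster1 glusterfs.create vol3 '["gluster1:/export/vol3/brick", "gluster2:/export/vol3/brick", "gluster3:/export/vol3/brick", "gluster4:/export/vol3/brick"]' replica=2 start=True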
salt.modules.glusterfs.delete(target, stop=True)

Delete a gluster volume

target
Volume to delete
stop
Stop the volume before deleting it if it is started; True by default
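
CLI Example (the volume name is a hypothetical placeholder):

salt '*' glusterfs.delete myvolume

salt '*' glusterfs.delete myvolume stop=False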
salt.modules.glusterfs.info(name)

New in version 2015.8.4.

Return the gluster volume info.

name
Volume name

CLI Example:

salt '*' glusterfs.info myvolume
salt.modules.glusterfs.list_peers()

Return a list of gluster peers

CLI Example:

salt '*' glusterfs.list_peers

GLUSTER direct CLI example (to show what salt is sending to gluster):

$ gluster peer status

GLUSTER CLI 3.4.4 return example (so we know what we are parsing):

Number of Peers: 2

Hostname: ftp2
Port: 24007
Uuid: cbcb256b-e66e-4ec7-a718-21082d396c24
State: Peer in Cluster (Connected)

Hostname: ftp3
Uuid: 5ea10457-6cb2-427b-a770-7897509625e9
State: Peer in Cluster (Connected)

salt.modules.glusterfs.list_volumes()

List configured volumes

CLI Example:

salt '*' glusterfs.list_volumes
salt.modules.glusterfs.peer(name)

Add another node into the peer list.

name
The remote host to probe.

CLI Example:

salt 'one.gluster.*' glusterfs.peer two

GLUSTER direct CLI example (to show what salt is sending to gluster):

$ gluster peer probe ftp2

GLUSTER CLI 3.4.4 return example (so we know what we are parsing):

#if the "peer" is the local host:
peer probe: success: on localhost not needed

#if the peer was just added:
peer probe: success

#if the peer was already part of the cluster:
peer probe: success: host ftp2 port 24007 already in peer list

salt.modules.glusterfs.start_volume(name, force=False)

Start a gluster volume.

name
Volume name
force
Force the volume to start even if it is already started

New in version 2015.8.4.

CLI Example:

salt '*' glusterfs.start_volume myvolume
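
A forced start (hypothetical volume name) passes the force keyword on the command line:

salt '*' glusterfs.start_volume myvolume force=True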
salt.modules.glusterfs.status(name)

Check the status of a gluster volume.

name
Volume name

CLI Example:

salt '*' glusterfs.status myvolume
salt.modules.glusterfs.stop_volume(name, force=False)

Stop a gluster volume.

name
Volume name
force
Force stop the volume

New in version 2015.8.4.

CLI Example:

salt '*' glusterfs.stop_volume myvolume
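
A forced stop (hypothetical volume name):

salt '*' glusterfs.stop_volume myvolume force=True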