Manage a glusterfs pool
salt.modules.glusterfs.add_volume_bricks(name, bricks)

    Add brick(s) to an existing volume.

    CLI Example:

    salt '*' glusterfs.add_volume_bricks <volume> <bricks>
salt.modules.glusterfs.create_volume(name, bricks, stripe=False, replica=False, device_vg=False, transport='tcp', start=False, force=False)

    Create a glusterfs volume.

    CLI Examples:

    salt host1 glusterfs.create_volume newvolume host1:/brick

    salt gluster1 glusterfs.create_volume vol2 '["gluster1:/export/vol2/brick", "gluster2:/export/vol2/brick"]' replica=2 start=True
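When a volume spans multiple bricks, the bricks are passed on the CLI as a quoted Python-style list, as in the second example above. A minimal sketch of a helper that builds such a command string (the helper itself is hypothetical, not part of this module; it only reproduces the quoting shown above):

```python
# Hypothetical helper: build the salt CLI string for creating a replicated
# volume from a list of "host:/path" bricks, mirroring the quoted-list
# syntax used in the CLI examples above.
def create_volume_cmd(target, name, bricks, replica=None, start=False):
    # A single brick is passed bare; multiple bricks become a quoted list.
    if len(bricks) == 1:
        brick_arg = bricks[0]
    else:
        brick_arg = "'[" + ", ".join('"%s"' % b for b in bricks) + "]'"
    parts = ["salt", target, "glusterfs.create_volume", name, brick_arg]
    if replica:
        parts.append("replica=%d" % replica)
    if start:
        parts.append("start=True")
    return " ".join(parts)

cmd = create_volume_cmd(
    "gluster1", "vol2",
    ["gluster1:/export/vol2/brick", "gluster2:/export/vol2/brick"],
    replica=2, start=True,
)
```

Running the sketch reproduces the second CLI example above character for character.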
salt.modules.glusterfs.delete_volume(target, stop=True)

    Delete a gluster volume.

    stop : True
        If True, stop the volume before deleting it.

    CLI Example:

    salt '*' glusterfs.delete_volume <volume>
salt.modules.glusterfs.disable_quota_volume(name)

    Disable quota on a glusterfs volume.

    CLI Example:

    salt '*' glusterfs.disable_quota_volume <volume>
salt.modules.glusterfs.enable_quota_volume(name)

    Enable quota on a glusterfs volume.

    CLI Example:

    salt '*' glusterfs.enable_quota_volume <volume>
salt.modules.glusterfs.info(name=None)

    New in version 2015.8.4.

    Return gluster volume info.

    CLI Example:

    salt '*' glusterfs.info
salt.modules.glusterfs.list_quota_volume(name)

    List the quotas of a glusterfs volume.

    CLI Example:

    salt '*' glusterfs.list_quota_volume <volume>
salt.modules.glusterfs.list_volumes()

    List configured volumes.

    CLI Example:

    salt '*' glusterfs.list_volumes
salt.modules.glusterfs.peer(name)

    Add another node into the peer list.

    CLI Example:

    salt 'one.gluster.*' glusterfs.peer two

    GLUSTER direct CLI example (to show what salt is sending to gluster):

    $ gluster peer probe ftp2

    # if the "peer" is the local host:
    peer probe: success: on localhost not needed
    # if the peer was just added:
    peer probe: success
    # if the peer was already part of the cluster:
    peer probe: success: host ftp2 port 24007 already in peer list
salt.modules.glusterfs.peer_status()

    Return peer status information.

    The return value is a dictionary with peer UUIDs as keys and dicts of peer information as values. All hostnames for a peer are gathered into a single list; GlusterFS itself reports one of the hostnames separately, but the only apparent reason for that distinction is which hostname happened to be used first when peering.

    CLI Example:

    salt '*' glusterfs.peer_status

    GLUSTER direct CLI example (to show what salt is sending to gluster):

    $ gluster peer status

    GLUSTER CLI 3.4.4 return example (so we know what we are parsing):

    Number of Peers: 2

    Hostname: ftp2
    Port: 24007
    Uuid: cbcb256b-e66e-4ec7-a718-21082d396c24
    State: Peer in Cluster (Connected)

    Hostname: ftp3
    Uuid: 5ea10457-6cb2-427b-a770-7897509625e9
    State: Peer in Cluster (Connected)
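To make the documented return shape concrete, here is a sketch that walks a dictionary keyed by peer UUID. The sample data and the individual field names inside each peer dict are assumptions built from the description and the 3.4.4 CLI output above, not output captured from a live cluster:

```python
# Sample return value shaped like the description above: peer UUIDs as keys,
# dicts of peer information as values, with all hostnames for a peer gathered
# into one list. Field names ("hostnames", "state", "connected") are
# illustrative assumptions, not guaranteed by this module.
peers = {
    "cbcb256b-e66e-4ec7-a718-21082d396c24": {
        "hostnames": ["ftp2"],
        "state": "Peer in Cluster",
        "connected": "Connected",
    },
    "5ea10457-6cb2-427b-a770-7897509625e9": {
        "hostnames": ["ftp3", "ftp3.example.com"],
        "state": "Peer in Cluster",
        "connected": "Connected",
    },
}

# Collect the first-listed hostname of every connected peer.
connected = sorted(
    info["hostnames"][0]
    for info in peers.values()
    if info["connected"] == "Connected"
)
```

Since hostnames arrive as a list per peer, code consuming this return should not assume a single hostname per UUID.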
salt.modules.glusterfs.set_quota_volume(name, path, size, enable_quota=False)

    Set a quota on a glusterfs volume.

    CLI Example:

    salt '*' glusterfs.set_quota_volume <volume> <path> <size> enable_quota=True
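A quota limit only takes effect once quota is enabled on the volume, which is why set_quota_volume accepts enable_quota=True. The equivalent explicit two-step sequence can be sketched as command strings; the helper, the volume name, and the "10GB" size value are illustrative assumptions, using only functions documented in this module:

```python
# Hypothetical helper: build the two salt commands that first enable quota
# on a volume and then set a limit on a path inside it, using
# glusterfs.enable_quota_volume and glusterfs.set_quota_volume as
# documented above.
def quota_cmds(target, volume, path, size):
    return [
        "salt %s glusterfs.enable_quota_volume %s" % (target, volume),
        "salt %s glusterfs.set_quota_volume %s %s %s" % (target, volume, path, size),
    ]

# Example values ("vol2", "/data", "10GB") are placeholders.
cmds = quota_cmds("'*'", "vol2", "/data", "10GB")
```

Passing enable_quota=True to set_quota_volume collapses this into the single command shown in the CLI example above.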
salt.modules.glusterfs.start_volume(name, force=False)

    Start a gluster volume.

    CLI Example:

    salt '*' glusterfs.start_volume mycluster
salt.modules.glusterfs.status(name)

    Check the status of a gluster volume.

    CLI Example:

    salt '*' glusterfs.status myvolume
salt.modules.glusterfs.stop_volume(name, force=False)

    Stop a gluster volume.

    force
        Force the volume to stop.

        New in version 2015.8.4.

    CLI Example:

    salt '*' glusterfs.stop_volume mycluster
salt.modules.glusterfs.unset_quota_volume(name, path)

    Unset the quota on a glusterfs volume.

    CLI Example:

    salt '*' glusterfs.unset_quota_volume <volume> <path>