salt.modules.glusterfs

Manage a glusterfs pool

salt.modules.glusterfs.add_volume_bricks(name, bricks)

Add brick(s) to an existing volume

name

Volume name

bricks

List of bricks to add to the volume

CLI Example:

salt '*' glusterfs.add_volume_bricks <volume> <bricks>
salt.modules.glusterfs.create_volume(name, bricks, stripe=False, replica=False, device_vg=False, transport='tcp', start=False, force=False, arbiter=False)

Create a glusterfs volume

name

Name of the gluster volume

bricks

Bricks to create volume from, in <peer>:<brick path> format. For multiple bricks use list format: '["<peer1>:<brick1>", "<peer2>:<brick2>"]'

stripe

Stripe count; the number of bricks should be a multiple of the stripe count for a distributed striped volume

replica

Replica count; the number of bricks should be a multiple of the replica count for a distributed replicated volume
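The brick-format and multiple-of rules above can be sanity-checked before calling the module. The helper below is a hypothetical illustration (not part of salt.modules.glusterfs) of the constraints described: each brick must be in <peer>:<brick path> form, and the brick count must be a multiple of any stripe or replica count.

```python
def check_brick_layout(bricks, replica=0, stripe=0):
    """Sanity-check a brick list against replica/stripe counts.

    Hypothetical helper -- only illustrates the rules described above;
    it is not part of salt.modules.glusterfs.
    """
    for brick in bricks:
        peer, sep, path = brick.partition(":")
        # Each brick must be <peer>:<brick path> with an absolute path.
        if not sep or not peer or not path.startswith("/"):
            raise ValueError("malformed brick: %r" % brick)
    for count in (replica, stripe):
        if count and len(bricks) % count != 0:
            raise ValueError(
                "brick count %d is not a multiple of %d" % (len(bricks), count)
            )
    return True
```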

arbiter

If True, specifies the volume should use arbiter brick(s). Valid configuration is limited to "replica 3 arbiter 1" per the Gluster documentation. Every third brick in the brick list is used as an arbiter brick.

New in version 2019.2.0.
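The every-third-brick rule can be illustrated with plain list slicing. The brick names below are made up for the example; only the selection pattern reflects the rule stated above.

```python
# Hypothetical brick list for a "replica 3 arbiter 1" volume (6 bricks,
# i.e. two replica sets). Peer/path names are illustrative only.
bricks = [
    "peer1:/data/brick", "peer2:/data/brick", "peer3:/data/arbiter",
    "peer4:/data/brick", "peer5:/data/brick", "peer6:/data/arbiter",
]

# Every third brick in the list is used as an arbiter brick:
arbiters = bricks[2::3]
data_bricks = [b for b in bricks if b not in arbiters]
```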

device_vg

If True, specifies the volume should use a block backend instead of the regular POSIX backend. A block device backend volume does not support multiple bricks.

transport

Transport protocol to use; one of 'tcp', 'rdma', or 'tcp,rdma'

start

Start the volume after creation

force

Force volume creation; this works even when creating bricks in the root filesystem

CLI Examples:

salt host1 glusterfs.create_volume newvolume host1:/brick

salt gluster1 glusterfs.create_volume vol2 '["gluster1:/export/vol2/brick", "gluster2:/export/vol2/brick"]' replica=2 start=True
salt.modules.glusterfs.delete_volume(target, stop=True)

Deletes a gluster volume

target

Volume to delete

stop

If True (the default), stop the volume before deleting it

CLI Example:

salt '*' glusterfs.delete_volume <volume>
salt.modules.glusterfs.disable_quota_volume(name)

Disable quota on a glusterfs volume.

name

Name of the gluster volume

CLI Example:

salt '*' glusterfs.disable_quota_volume <volume>
salt.modules.glusterfs.enable_quota_volume(name)

Enable quota on a glusterfs volume.

name

Name of the gluster volume

CLI Example:

salt '*' glusterfs.enable_quota_volume <volume>
salt.modules.glusterfs.get_max_op_version()

New in version 2019.2.0.

Returns the glusterfs volume's max op-version value. Requires GlusterFS version > 3.9.

CLI Example:

salt '*' glusterfs.get_max_op_version

salt.modules.glusterfs.get_op_version(name)

New in version 2019.2.0.

Returns the glusterfs volume op-version

name

Name of the glusterfs volume

CLI Example:

salt '*' glusterfs.get_op_version <volume>
salt.modules.glusterfs.get_version()

New in version 2019.2.0.

Returns the version of glusterfs.

CLI Example:

salt '*' glusterfs.get_version

salt.modules.glusterfs.info(name=None)

New in version 2015.8.4.

Return gluster volume info.

name

Optional name to retrieve only information of one volume

CLI Example:

salt '*' glusterfs.info
salt.modules.glusterfs.list_quota_volume(name)

List quotas of a glusterfs volume

name

Name of the gluster volume

CLI Example:

salt '*' glusterfs.list_quota_volume <volume>
salt.modules.glusterfs.list_volumes()

List configured volumes

CLI Example:

salt '*' glusterfs.list_volumes
salt.modules.glusterfs.peer(name)

Add another node into the peer list.

name

The remote host to probe.

CLI Example:

salt 'one.gluster.*' glusterfs.peer two

GLUSTER direct CLI example (to show what salt is sending to gluster):

$ gluster peer probe ftp2

GLUSTER CLI 3.4.4 return example (so we know what we are parsing):

# if the "peer" is the local host:
peer probe: success: on localhost not needed

# if the peer was just added:
peer probe: success

# if the peer was already part of the cluster:
peer probe: success: host ftp2 port 24007 already in peer list

salt.modules.glusterfs.peer_status()

Return peer status information

The return value is a dictionary with peer UUIDs as keys and dicts of peer information as values. All hostnames for a peer are collected into a single list; GlusterFS itself reports one hostname separately, but the only reason for this appears to be which hostname happened to be used first when peering.

CLI Example:

salt '*' glusterfs.peer_status

GLUSTER direct CLI example (to show what salt is sending to gluster):

$ gluster peer status

GLUSTER CLI 3.4.4 return example (so we know what we are parsing):

Number of Peers: 2

Hostname: ftp2
Port: 24007
Uuid: cbcb256b-e66e-4ec7-a718-21082d396c24
State: Peer in Cluster (Connected)

Hostname: ftp3
Uuid: 5ea10457-6cb2-427b-a770-7897509625e9
State: Peer in Cluster (Connected)
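A sketch of how output in the form quoted above maps to the UUID-keyed dictionary this function returns. This assumes the plain-text 3.4.4 format shown; it is not salt's actual implementation, and the real return value may carry additional fields.

```python
def parse_peer_status(output):
    """Parse `gluster peer status` text (format as quoted above) into a
    dict keyed by peer UUID -- a sketch of the described return structure,
    not salt's actual parsing code.
    """
    peers = {}
    current = {}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Hostname:"):
            current = {"hostnames": [line.split(":", 1)[1].strip()]}
        elif line.startswith("Uuid:"):
            # Key the peer record by its UUID.
            peers[line.split(":", 1)[1].strip()] = current
        elif line.startswith("State:"):
            current["state"] = line.split(":", 1)[1].strip()
    return peers
```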

salt.modules.glusterfs.set_op_version(version)

New in version 2019.2.0.

Set the glusterfs volume op-version

version

Version to set the glusterfs volume op-version to

CLI Example:

salt '*' glusterfs.set_op_version <version>
salt.modules.glusterfs.set_quota_volume(name, path, size, enable_quota=False)

Set quota to glusterfs volume.

name

Name of the gluster volume

path

Folder path for the restriction inside the volume (e.g. "/")

size

Hard-limit size of the volume (MB/GB)

enable_quota

Enable quota before setting up the restriction

CLI Example:

salt '*' glusterfs.set_quota_volume <volume> <path> <size> enable_quota=True
salt.modules.glusterfs.start_volume(name, force=False)

Start a gluster volume

name

Volume name

force

Force the volume start even if the volume is already started

New in version 2015.8.4.

CLI Example:

salt '*' glusterfs.start_volume mycluster
salt.modules.glusterfs.status(name)

Check the status of a gluster volume.

name

Volume name

CLI Example:

salt '*' glusterfs.status myvolume
salt.modules.glusterfs.stop_volume(name, force=False)

Stop a gluster volume

name

Volume name

force

Force stop the volume

New in version 2015.8.4.

CLI Example:

salt '*' glusterfs.stop_volume mycluster
salt.modules.glusterfs.unset_quota_volume(name, path)

Unset quota on a glusterfs volume

name

Name of the gluster volume

path

Folder path for restriction in volume

CLI Example:

salt '*' glusterfs.unset_quota_volume <volume> <path>