Gluster testing with docker

Last Updated or created 2022-04-13

GlusterFS (Gluster File System) is an open source Distributed File System that can scale out in building-block fashion to store multiple petabytes of data.

Below is a test environment that creates 5 Docker containers, which represent 5 Gluster servers.
This was used to practice repairing our Gluster setup at work.

First install Gluster and pull an image: docker pull gluster/gluster-centos
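A minimal setup sketch, assuming a Debian/Ubuntu docker host (package names differ per distro; the glusterfs-client package is only needed if you want to mount the volume from the host instead of from inside a container):

# Assumption: Debian/Ubuntu host
apt-get install -y docker.io glusterfs-client
docker pull gluster/gluster-centos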

gethosts

for f in 1 2 3 4 5;
do 
echo "$(docker exec -it gluster_${f} ip a s | grep 172 | awk '{ print $2 }' | cut -f1 -d/) gluster_${f}"
done
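Example usage (a sketch, not part of the original scripts): run gethosts to print the IP/name pairs, and optionally push them into each container's /etc/hosts so the peers can later be probed by name instead of by IP.

./gethosts
# Optional (assumption): make gluster_1..gluster_5 resolvable inside the containers
./gethosts > /tmp/gluster-hosts
for f in 1 2 3 4 5; do
docker exec -i gluster_${f} sh -c 'cat >> /etc/hosts' < /tmp/gluster-hosts
done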

create_dockers

for f in 1 2 3 4 5; do
docker run --name gluster_${f} --privileged=true -d gluster/gluster-centos /usr/sbin/init
done
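A quick check (sketch) that all five containers came up:

docker ps --filter "name=gluster_" --format "{{.Names}}\t{{.Status}}"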

create_bricks

for f in 1 2 3 4 5; do
docker exec -it gluster_${f} mkdir -p /bricks/brick01
done

destroy_dockers

for f in 1 2 3 4 5; do
docker stop gluster_${f}
docker rm gluster_${f}
done

diskcreator

for f in $(seq 1 5); do
dd if=/dev/zero of=/root/disk${f} count=1 bs=100M
losetup /dev/loop${f} /root/disk${f}
docker run --name gluster_${f} --privileged=true --device=/dev/loop${f} -d  gluster/gluster-centos /usr/sbin/init
done
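A matching cleanup sketch (hypothetical, not one of the original scripts) to run after destroy_dockers, detaching the loop devices and removing the backing files:

for f in $(seq 1 5); do
losetup -d /dev/loop${f}   # detach loop device
rm -f /root/disk${f}       # remove backing file
done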

lvm-dockers

modprobe dm_thin_pool (in the docker container)
modprobe dm_thin_pool (also in the VM itself)
modprobe device-mapper ??

pvcreate /dev/loop0
vgcreate brick01 /dev/loop0
lvcreate -L 50M -T brick01 -n thin_brick01

lvcreate -V 40M -T brick01/thin_brick01 -n testvolume
mkfs -t xfs -i size=512 /dev/brick01/testvolume
mount /dev/brick01/testvolume /bricks/brick01

lvextend -L+10M /dev/brick01/testvolume
xfs_growfs /dev/brick01/testvolume
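To verify the result of these steps, the standard LVM and filesystem tools can be used, for example:

pvs                     # /dev/loop0 should show up as a physical volume
vgs brick01             # the volume group
lvs brick01             # thin pool thin_brick01 plus thin volume testvolume
df -h /bricks/brick01   # mounted XFS, roughly 50M after lvextend/xfs_growfs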

How to use

./create

./gethosts for info

docker exec -it gluster_1 /bin/bash

# NO HOSTNAMES CONFIGURED!
gluster peer probe 172.17.0.2
gluster peer probe 172.17.0.3
gluster peer probe 172.17.0.4
gluster peer probe 172.17.0.5
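Before creating the volume, check that the probes succeeded:

gluster peer status
# Expect 3 peers in cluster (probing the local node, 172.17.0.2, is skipped by gluster)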

No persistent storage was created; if needed we can also test inside the docker containers themselves

docker exec -it gluster_1 mkdir -p /bricks/brick01
docker exec -it gluster_2 mkdir -p /bricks/brick01
docker exec -it gluster_3 mkdir -p /bricks/brick01
docker exec -it gluster_4 mkdir -p /bricks/brick01

gluster volume create testvolume 172.17.0.2:/bricks/brick01 172.17.0.3:/bricks/brick01 172.17.0.4:/bricks/brick01 172.17.0.5:/bricks/brick01 force

gluster volume start testvolume
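Verify before mounting:

gluster volume info testvolume     # should report a Distribute volume with 4 bricks and Status: Started
gluster volume status testvolume   # one online brick process per server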

### STILL TO TEST
#gluster volume create testvolume replica 2 172.17.0.2:/bricks/brick01 172.17.0.3:/bricks/brick01 172.17.0.4:/bricks/brick01 172.17.0.5:/bricks/brick01 force

### STILL TO TEST
#gluster volume create testvolume replica 2 arbiter 1 172.17.0.2:/bricks/brick01 172.17.0.3:/bricks/brick01 172.17.0.4:/bricks/brick01 172.17.0.5:/bricks/brick01 force

mount -t glusterfs 172.17.0.2:/testvolume /media/

cd /media

touch {1..9}

exit

for f in 1 2 3 4 ; do echo "gluster_${f}" ; docker exec -it gluster_${f} ls /bricks/brick01 ;done

# DESTROY 
for f in 1 2 3 4 5; do 
docker stop gluster_${f}
docker rm gluster_${f}
done

Howto: reset-replicated-brick-same-server

Using clean glusterdockers

./create_dockers
./create_bricks
./gethosts

# docker exec -it gluster_1 /bin/bash


# gluster peer probe 172.17.0.2
# gluster peer probe 172.17.0.3
# gluster peer probe 172.17.0.4
# gluster peer probe 172.17.0.5

# gluster peer status
----------------------------------
(peers = 3 + localhost makes 4 ;-)

# gluster volume create testvolume replica 2 172.17.0.2:/bricks/brick01 172.17.0.3:/bricks/brick01 172.17.0.4:/bricks/brick01 172.17.0.5:/bricks/brick01 force

# gluster volume start testvolume ; gluster volume info testvolume
----------------------------------

Volume Name: testvolume
Type: Distributed-Replicate
Volume ID: e5536d11-77ee-40a5-9282-e4223979f443
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
----------------------------------


# mount -t glusterfs 172.17.0.2:/testvolume /media/
# cd /media
# touch {1..9}

# exit

From the docker host we see the files nicely spread over the bricks

# for f in 1 2 3 4 ; do echo "gluster_${f}" ; docker exec -it gluster_${f} ls /bricks/brick01 ;done
------------------------------------------------------------------------
gluster_1
1  5  7  8  9
gluster_2
1  5  7  8  9
gluster_3
2  3  4  6
gluster_4
2  3  4  6
---------------------------------------------------------------------------------



Log on to gluster_3
# docker exec -it gluster_3 /bin/bash
# rm -rf /bricks

- wait a moment -

# gluster volume status
----------------------------------------------------------------------------
Status of volume: testvolume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 172.17.0.2:/bricks/brick01            49152     0          Y       210  
Brick 172.17.0.3:/bricks/brick01            49152     0          Y       151  
Brick 172.17.0.4:/bricks/brick01            N/A       N/A        N       N/A   <----- gone  
Brick 172.17.0.5:/bricks/brick01            49152     0          Y       152 
----------------------------------------------------------------------------

# exit

From the docker host:

# for f in 1 2 3 4 ; do echo "gluster_${f}" ; docker exec -it gluster_${f} ls /bricks/brick01 ;done
------------------------------------------------------------------------------------
gluster_1
1  5  7  8  9
gluster_2
1  5  7  8  9
gluster_3
ls: cannot access /bricks/brick01: No such file or directory
gluster_4
2  3  4  6
--------------------------------------------------------------------------------------

Log on to gluster_1
# docker exec -it gluster_1 /bin/bash

# gluster volume reset-brick testvolume 172.17.0.4:/bricks/brick01 start

# This is the moment to swap the md3260, but here we use the following commands instead:

Create new storage on gluster_3
# docker exec -it gluster_3 mkdir -p /bricks/brick01 
# docker exec -it gluster_3 ls /bricks/brick01 

Log on to gluster_1
# docker exec -it gluster_1 /bin/bash

# gluster volume reset-brick testvolume 172.17.0.4:/bricks/brick01  172.17.0.4:/bricks/brick01 commit force
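After the commit, the surviving replica partner (172.17.0.5) heals the files back onto the empty brick. A heal check (standard gluster CLI, not shown in the original session) tells you when it is done:

gluster volume heal testvolume info    # "Number of entries: 0" on all bricks once healing has finished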




From the docker host we see the files nicely spread over the bricks

# for f in 1 2 3 4 ; do echo "gluster_${f}" ; docker exec -it gluster_${f} ls /bricks/brick01 ;done
------------------------------------------------------------------------
gluster_1
1  5  7  8  9
gluster_2
1  5  7  8  9
gluster_3
2  3  4  6
gluster_4
2  3  4  6
---------------------------------------------------------------------------------
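As a final check (sketch), the brick on 172.17.0.4 should be listed as online again:

docker exec -it gluster_1 gluster volume status testvolume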