Docker Swarm is a native orchestration solution for Docker, allowing you to manage multiple Docker nodes as a single virtual system. In this blog post, we will show you how to set up a Docker Swarm cluster with Gluster shared storage, which will provide high availability and scalability for your applications.
Prerequisites
Before setting up a Docker Swarm cluster with Gluster shared storage, you need to ensure that you have the following prerequisites:
- Three or more servers running Ubuntu 20.04 or later, with Docker and GlusterFS already installed.
- A static IP address for each server.
- A shared storage volume created using GlusterFS.
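One assumption the volume-creation commands below make is that the three servers already form a GlusterFS trusted pool. If yours do not, you can probe the peers from one server first; a minimal sketch, where `server2` and `server3` are placeholder hostnames for your own nodes:

```shell
# Run on server1; server2/server3 are placeholders for your node hostnames.
gluster peer probe server2
gluster peer probe server3

# Confirm both peers report "Peer in Cluster (Connected)".
gluster peer status
```

These commands must be run on a node with the GlusterFS daemon already installed and running.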
Setting up GlusterFS shared storage
To set up GlusterFS shared storage, follow these steps:
- On each server, create a directory to be used as a GlusterFS brick. For example, create `/gluster-data` on each server:

```shell
mkdir /gluster-data
```

- On one of the servers, run the following command to create a GlusterFS volume:

```shell
gluster volume create gv0 replica 3 server1:/gluster-data server2:/gluster-data server3:/gluster-data
```

Replace `server1`, `server2`, and `server3` with the hostnames or IP addresses of your servers. This command creates a replica 3 volume named `gv0`, which means the data will be replicated across all three servers.
- Start the GlusterFS volume:

```shell
gluster volume start gv0
```
- On each server, mount the GlusterFS volume:

```shell
mount -t glusterfs server1:/gv0 /gluster-data
```

Replace `server1` with the hostname or IP address of one of the servers in your GlusterFS pool. To make the mount survive reboots, also add a matching entry to `/etc/fstab`. (For simplicity, this guide reuses `/gluster-data` as both the brick directory and the client mount point; in production you would typically keep the two paths separate.)

Setting up Docker Swarm
With GlusterFS shared storage set up, you can now set up a Docker Swarm cluster. Follow these steps:
- On one of the servers, initialize the Swarm:
```shell
docker swarm init --advertise-addr <IP-ADDRESS>
```

Replace `<IP-ADDRESS>` with the static IP address of the server you are using to initialize the Swarm.

- On the other servers, join the Swarm as workers:

```shell
docker swarm join --token <TOKEN> <IP-ADDRESS>:2377
```

Replace `<TOKEN>` with the token that was output when you initialized the Swarm, and replace `<IP-ADDRESS>` with the static IP address of the server that initialized it.
- Create a Docker overlay network:
```shell
docker network create -d overlay --opt encrypted my-overlay-network
```

This creates an encrypted overlay network named `my-overlay-network`.
- Create a service with GlusterFS shared storage:
```shell
docker service create \
  --name my-web-app \
  --replicas 3 \
  --network my-overlay-network \
  --mount type=bind,source=/gluster-data,target=/app \
  --publish published=8080,target=80 \
  nginx:alpine
```

This creates a service named `my-web-app` with three replicas, spread across the nodes in the Swarm. The service is attached to `my-overlay-network` and bind-mounts the GlusterFS shared storage at `/gluster-data` to the `/app` directory inside the containers. It also publishes container port 80 on port 8080 of the host servers.
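If you lose track of the worker join token from the first step (for example, when adding a node later), the manager can reprint it at any time:

```shell
# Run on a manager node; prints the full "docker swarm join ..." command
# including the current worker token.
docker swarm join-token worker
```

There is an analogous `docker swarm join-token manager` for adding additional managers.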
Verifying the setup
To verify that your Docker Swarm cluster with Gluster shared storage is set up correctly, follow these steps:
- Run the following command to check the status of your Swarm:
```shell
docker node ls
```

This shows a list of all the nodes in your Swarm, along with the status of each node.
- Run the following command to check the status of your service:
```shell
docker service ps my-web-app
```

This shows the status of each replica of your service, including which node each replica is running on.
- To test the GlusterFS shared storage, create a file in the `/gluster-data` directory on one of the servers, and then verify that the file can be accessed from the other servers.
- To test the network, access the web app from a web browser by navigating to `http://<IP-ADDRESS>:8080`, where `<IP-ADDRESS>` is the static IP address of one of the servers.
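The last two checks can also be run from a shell; a minimal sketch, assuming the placeholder hostnames `server1` and `server2` and that the service from the previous section is running:

```shell
# On server1: write a marker file into the shared volume.
echo "hello from server1" > /gluster-data/replication-test.txt

# On server2: the same file should appear via GlusterFS replication.
cat /gluster-data/replication-test.txt

# From any machine that can reach the cluster: the routing mesh answers on
# port 8080 of every node, regardless of which nodes run the replicas.
curl -s http://server1:8080 | head -n 5
curl -s http://server2:8080 | head -n 5
```

If the `cat` fails on the second server, re-check that the volume is mounted there and that `gluster volume status gv0` reports all bricks online.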
Conclusion
By setting up a Docker Swarm cluster with Gluster shared storage, you can ensure high availability and scalability for your applications. This setup provides a powerful and flexible platform for running and managing containers, while also providing a robust and reliable storage solution. With this setup, you can easily deploy, scale, and manage your applications with confidence.