An experiment in creating clusters and distributing loads with Raspberry Pis
Connecting Raspberries
After installing the server software, you can build the cluster. From now on, the commands should be entered on cluster01; that is, you no longer have to run them on all of the nodes. Before you set up the actual volume, though, you should check whether the computers in the cluster can contact one another. You can use the Gluster tool to do this:
# gluster peer probe cluster01
peer probe: success: on localhost not needed
# gluster peer probe cluster02
peer probe: success
# gluster peer probe cluster03
peer probe: success
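If you want to reassure yourself that the peers really know about one another, gluster peer status lists them; each of the other nodes should show up with a state along the lines of Peer in Cluster (Connected):

# gluster peer status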
Now you should create the volume volpi using the command in Listing 6.
Listing 6
Creating the Common Volume
# gluster volume create volpi replica 3 transport tcp cluster01:/export/brick cluster02:/export/brick cluster03:/export/brick
volume create: volpi: success: please start the volume to access data
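The command in Listing 6 assumes that the /export/brick directory already exists on each node. If you have not created it yet, a quick mkdir on cluster01, cluster02, and cluster03 takes care of that before you create the volume:

# mkdir -p /export/brick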
All of the files that you now save in volpi will be saved on all of the nodes. If you had used replica 2 instead of replica 3, the files would only be saved on two out of the three nodes.
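Purely as an illustration of the syntax, a two-way mirror spanning only the first two nodes could be created as follows; the volume name volpi2 and the brick path /export/brick2 are made up for this example, and keep in mind that the number of bricks always has to be a multiple of the replica count:

# gluster volume create volpi2 replica 2 transport tcp cluster01:/export/brick2 cluster02:/export/brick2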
To access the volume you have just set up, you should start it with the first command in Listing 7. The second and third commands let you take a quick look at the volume to see whether it is working. Once you are ready to use the volume, all you have to do is mount it on just one of the clients with the following command:
Listing 7
Accessing and Checking the Volume
# gluster volume start volpi
volume start: volpi: success
# gluster volume status volpi
Status of volume: volpi
Gluster process                          Port    Online  Pid
-----------------------------------------------------------
Brick cluster01:/export/brick            49152   Y       7324
Brick cluster02:/export/brick            49152   Y       2738
Brick cluster03:/export/brick            49152   Y       6496
NFS Server on localhost                  2049    Y       7336
Self-heal Daemon on localhost            N/A     Y       7340
NFS Server on cluster02                  2049    Y       2750
Self-heal Daemon on cluster02            N/A     Y       2754
NFS Server on cluster03                  2049    Y       6508
Self-heal Daemon on cluster03            N/A     Y       6512
There are no active volume tasks
# gluster volume info volpi
Volume Name: volpi
Type: Replicate
Volume ID: b66888e7-903c-4a93-93c4-3c953cf9bb2e
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: cluster01:/export/brick
Brick2: cluster02:/export/brick
Brick3: cluster03:/export/brick
$ sudo mount -t glusterfs cluster01:/volpi /mnt/gluster/
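If the volume should be available again automatically after the client reboots, an entry in /etc/fstab is one way to achieve this; the _netdev option tells the system to wait for the network before mounting, although the details can vary between distributions:

cluster01:/volpi  /mnt/gluster  glusterfs  defaults,_netdev  0  0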
You should make sure the exact same version of GlusterFS is installed on each node. At this point, you can work with the mounted filesystem just as you would with any other device. This concludes the first part of the experiment.
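To compare the versions quickly, you can query each node; the reported numbers should match exactly (this one-liner assumes you can reach the nodes via SSH):

$ for h in cluster01 cluster02 cluster03; do ssh $h gluster --version | head -n1; done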
Geo-Replication
To keep the individual nodes synchronized, the cluster filesystem needs an extremely fast connection between them. Therefore, a normal DSL connection, with its relatively modest upload rate, probably will not work for a cluster that is distributed over the Internet.
However, geo-replication is a good alternative for at least keeping data synchronized in one direction. Geo-replication makes it possible to copy an entire cluster and then keep the copy synchronized. The technology is suitable for creating a backup or for moving servers.
To set up this kind of replication, you should create an SSH key pair on the master and transmit the public key to the target computer. Instead of entering a passphrase, you can simply press the Return key when you see the corresponding prompts (Listing 8, lines 5 and 6). Next, test the connection using the command:
Listing 8
Setting Up the Key Pair
01 $ ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem
02 Generating public/private rsa key pair.
03 /var/lib/glusterd/geo-replication/secret.pem already exists.
04 Overwrite (y/n)? y
05 Enter passphrase (empty for no passphrase):
06 Enter same passphrase again:
07 Your identification has been saved in /var/lib/glusterd/geo-replication/secret.pem.
08 Your public key has been saved in /var/lib/glusterd/geo-replication/secret.pem.pub.
09 The key fingerprint is:
10 51:1b:67:4a:cf:56:04:7b:19:e1:89:ac:26:01:b4:c2 root@cluster01
11 The key's randomart image is:
12 +--[ RSA 2048]----+
13 |  .o + +o=.      |
14 |  . o o X = +    |
15 |  E  . o o B =   |
16 |      . o o .    |
17 |       S o       |
18 |        o        |
19 |                 |
20 |                 |
21 |                 |
22 +-----------------+
23 $ ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub root@cluster04
24 root@cluster04's password:
25 Now try logging into the machine, with "ssh 'root@cluster04'", and check in:
26
27   ~/.ssh/authorized_keys
28
29 to make sure we haven't added extra keys that you weren't expecting.
$ ssh root@cluster04 -i /var/lib/glusterd/geo-replication/secret.pem
If you have configured everything correctly, SSH will log you in without requesting a password. Afterward, you can start replicating the local cluster.
Geo-replication can use a filesystem or a volume as a target. To mount the copy later on cluster04, you should create a local volume on that computer and keep things simple by also calling it volpi (Listing 9, lines 1-4). You then start the replication (lines 5 and 6). To see whether everything worked correctly, enter the commands in lines 7-10. Entering ls on cluster04 confirms that all is well (Listing 10).
Listing 9
Installing Geo-Replication
01 # gluster volume create volpi transport tcp cluster04:/export/brick
02 volume create: volpi: success: please start the volume to access data
03 # gluster volume start volpi
04 volume start: volpi: success
05 # gluster volume geo-replication volpi cluster04:volpi start
06 Starting geo-replication session between volpi & cluster04:/export/brick \
   has been successful
07 # gluster volume geo-replication volpi cluster04:volpi status
08 NODE       MASTER  SLAVE            STATUS
09 ----------------------------------------------------
10 cluster01  volpi   cluster04:volpi  OK
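Should you ever want to suspend the copy again, for example before maintenance on cluster04, the session can be stopped with the corresponding stop command:

# gluster volume geo-replication volpi cluster04:volpi stop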
Listing 10
Confirming the Replication
# ls /export/brick -la
total 16
drwxr-xr-x 3 root root      4096 Oct 7 12:23 .
drwxr-xr-x 4 root root      4096 Oct 7 11:50 ..
drwxr-xr-x 4 root root      4096 Oct 7 12:22 clients
-rw-r--r-- 1 root root 104857600 Oct 7 08:37 test
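The 100MB test file shown here could have been created on the master through the GlusterFS mount, for example with dd. Because geo-replication works asynchronously, it can take a moment before such a file shows up on the slave's brick:

$ dd if=/dev/zero of=/mnt/gluster/test bs=1M count=100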
At this point, you should refer to the documentation for GlusterFS. The filesystem is a powerful tool that can help you create highly available filesystems, even in very complex environments. If you intend to operate at that scale, you might want to consider employing the faster Banana Pis [3] or the Cubieboard [4] instead of Rasp Pis.