Configuring a replicated GlusterFS cluster on two nodes in Debian
Installation on Debian Squeeze
Installing the latest version 3.3.x
- It is available ONLY for the 64-bit architecture! As an alternative, you can install 3.2.7 from backports, which is also available for i386, or the standard 3.0.5.
- To use 3.3 you must enable its repository; otherwise you get 3.0.5, which is quite different:
sudoedit /etc/apt/sources.list.d/glusterfs.list
deb http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/squeeze.repo squeeze main
- Install the key:
wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.3/3.3.1/Debian/gpg.key | sudo apt-key add -
- Update and install:
sudo apt-get update
- TBD
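The install command for 3.3 is still marked TBD above; a sketch, assuming the 3.3 packages from download.gluster.org keep the same names as the 3.0.5 Debian packages:

```shell
# Hypothetical: package names assumed identical to the 3.0.5 release
sudo apt-get install glusterfs-server glusterfs-client
```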
Installing the standard version 3.0.5
- It is available for all architectures, i386 and amd64
- Install the packages normally:
sudo apt-get install glusterfs-server
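On machines that will only mount the volumes rather than export them, the client package should suffice; a sketch, assuming the usual Squeeze split into glusterfs-server and glusterfs-client:

```shell
# Hypothetical: clients that only mount need just the client package
sudo apt-get install glusterfs-client
```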
- Prepare the directories to export; they may already exist and contain files:
sudo mkdir /files/gluster/homes
sudo mkdir /files/gluster/profiles
- Declare the volumes that export these directories:
sudoedit /etc/glusterfs/glusterfsd.vol
volume homes
type storage/posix
option directory /files/gluster/homes
end-volume
volume profiles
type storage/posix
option directory /files/gluster/profiles
end-volume
volume locks-homes
type features/locks
subvolumes homes
end-volume
volume locks-profiles
type features/locks
subvolumes profiles
end-volume
volume brick-homes
type performance/io-threads
option thread-count 8
subvolumes locks-homes
end-volume
volume brick-profiles
type performance/io-threads
option thread-count 8
subvolumes locks-profiles
end-volume
volume server
type protocol/server
option transport-type tcp
option auth.addr.brick-homes.allow 192.168.6.*
option auth.addr.brick-profiles.allow 192.168.6.*
subvolumes brick-homes brick-profiles
end-volume
- In practice, for each directory to export you declare a chain of volume classes, each wrapping the previous one, and then export the final bricks through the single server volume.
- Restart the daemon:
sudo invoke-rc.d glusterfs-server stop
sudo invoke-rc.d glusterfs-server start
- Check the logs:
sudo tail -f /var/log/glusterfs/glusterfsd.vol.log
- Repeat the same operations on the secondary server:
sudo mkdir /files/gluster/homes
sudo mkdir /files/gluster/profiles
cd /tmp
scp master.example.com:/etc/glusterfs/glusterfsd.vol .
sudo mv glusterfsd.vol /etc/glusterfs/
sudo invoke-rc.d glusterfs-server stop
sudo invoke-rc.d glusterfs-server start
sudo tail -f /var/log/glusterfs/glusterfsd.vol.log
- Now create the client file for homes on the master server:
sudoedit /etc/glusterfs/homes.vol
#HOMES
volume pubserver.pubblistil.priv-homes
type protocol/client
option transport-type tcp
option remote-host pubserver.pubblistil.priv
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick-homes
end-volume
volume pubstor01.pubblistil.priv-homes
type protocol/client
option transport-type tcp
option remote-host pubstor01.pubblistil.priv
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick-homes
end-volume
volume mirror-homes
type cluster/replicate
subvolumes pubserver.pubblistil.priv-homes pubstor01.pubblistil.priv-homes
end-volume
volume readahead-homes
type performance/read-ahead
option page-count 4
subvolumes mirror-homes
end-volume
volume iocache-homes
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead-homes
end-volume
volume quickread-homes
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache-homes
end-volume
volume writebehind-homes
type performance/write-behind
option cache-size 4MB
subvolumes quickread-homes
end-volume
volume statprefetch-homes
type performance/stat-prefetch
subvolumes writebehind-homes
end-volume
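Note that the `cache-size` line above embeds a shell expression, but volfiles are not processed by a shell, so the value has to be computed beforehand and pasted in literally. A sketch of the same computation (one fifth of total RAM; MemTotal in /proc/meminfo is in kB, hence the 5 * 1024 = 5120 divisor):

```shell
# Print one fifth of total RAM in MB, suitable for the cache-size option
awk '/MemTotal/ { printf "%dMB\n", $2 / 5120 }' /proc/meminfo
```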
- Create the file for profiles:
sudoedit /etc/glusterfs/profiles.vol
# PROFILES
volume pubserver.pubblistil.priv-profiles
type protocol/client
option transport-type tcp
option remote-host pubserver.pubblistil.priv
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick-profiles
end-volume
volume pubstor01.pubblistil.priv-profiles
type protocol/client
option transport-type tcp
option remote-host pubstor01.pubblistil.priv
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick-profiles
end-volume
volume mirror-profiles
type cluster/replicate
subvolumes pubserver.pubblistil.priv-profiles pubstor01.pubblistil.priv-profiles
end-volume
volume readahead-profiles
type performance/read-ahead
option page-count 4
subvolumes mirror-profiles
end-volume
volume iocache-profiles
type performance/io-cache
option cache-size `echo $(( $(grep 'MemTotal' /proc/meminfo | sed 's/[^0-9]//g') / 5120 ))`MB
option cache-timeout 1
subvolumes readahead-profiles
end-volume
volume quickread-profiles
type performance/quick-read
option cache-timeout 1
option max-file-size 64kB
subvolumes iocache-profiles
end-volume
volume writebehind-profiles
type performance/write-behind
option cache-size 4MB
subvolumes quickread-profiles
end-volume
volume statprefetch-profiles
type performance/stat-prefetch
subvolumes writebehind-profiles
end-volume
- Here too, the homes directory has nested volumes, and the same goes for profiles.
- Mount the volumes:
sudo mount -t glusterfs pubserver.pubblistil.priv:homes /files/gluster/homes/
sudo mount -t glusterfs pubserver.pubblistil.priv:profiles /files/gluster/profiles/
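To make the mounts persist across reboots, an /etc/fstab sketch; this assumes the client volfiles created above are used as the device field (one common form for volfile-based clients), with _netdev deferring the mount until the network is up:

```
/etc/glusterfs/homes.vol     /files/gluster/homes     glusterfs  defaults,_netdev  0 0
/etc/glusterfs/profiles.vol  /files/gluster/profiles  glusterfs  defaults,_netdev  0 0
```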