<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://kb.rvmgroup.it/index.php?action=history&amp;feed=atom&amp;title=Configurare_un_Cluster_in_HA_in_Proxmox</id>
	<title>Configurare un Cluster in HA in Proxmox - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://kb.rvmgroup.it/index.php?action=history&amp;feed=atom&amp;title=Configurare_un_Cluster_in_HA_in_Proxmox"/>
	<link rel="alternate" type="text/html" href="https://kb.rvmgroup.it/index.php?title=Configurare_un_Cluster_in_HA_in_Proxmox&amp;action=history"/>
	<updated>2026-05-06T18:03:20Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://kb.rvmgroup.it/index.php?title=Configurare_un_Cluster_in_HA_in_Proxmox&amp;diff=9489&amp;oldid=prev</id>
		<title>Gabriele.vivinetto: Created page with &quot;A standard cluster does not require quorum, but it cannot handle failover automatically, restarting the VMs of a dead node on a live one.  * Proxmox 3.x supports a clu...&quot;</title>
		<link rel="alternate" type="text/html" href="https://kb.rvmgroup.it/index.php?title=Configurare_un_Cluster_in_HA_in_Proxmox&amp;diff=9489&amp;oldid=prev"/>
		<updated>2016-01-19T15:40:24Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;A standard cluster does not require quorum, but it cannot handle failover automatically, restarting the VMs of a dead node on a live one.  * Proxmox 3.x supports a clu...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;A standard cluster does not require quorum, but it cannot handle failover automatically, restarting the VMs of a dead node on a live one.&lt;br /&gt;
&lt;br /&gt;
* Proxmox 3.x supports a 2-node HA cluster plus 1 quorum disk, and requires fencing to be configured&lt;br /&gt;
* Proxmox 4.x instead requires at least 3 nodes and does NOT support a quorum disk, but does NOT require fencing configuration, because it is self-managed.&lt;br /&gt;
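The vote arithmetic behind these requirements can be sketched as follows (an illustrative snippet, not a Proxmox tool):&lt;br /&gt;

```shell
# A partition stays quorate only with a strict majority of the votes:
# floor(total_votes / 2) + 1.
majority() {
  echo $(( $1 / 2 + 1 ))
}

majority 2   # two 1-vote nodes: majority is 2, so losing one node loses quorum
majority 3   # two nodes plus one quorum-disk vote: majority is still 2,
             # so the surviving node plus the qdisk keeps the cluster quorate
```

This is why a two-node 3.x cluster needs the extra quorum-disk vote, and why 4.x simply requires three real nodes.&lt;br /&gt;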
&lt;br /&gt;
&lt;br /&gt;
=Proxmox 3.x=&lt;br /&gt;
&lt;br /&gt;
==Configuring Fencing via IPMI==&lt;br /&gt;
&lt;br /&gt;
sudo apt-get install ipmitool &lt;br /&gt;
sudo modprobe ipmi_si&lt;br /&gt;
sudo modprobe ipmi_devintf&lt;br /&gt;
sudo modprobe ipmi_msghandler&lt;br /&gt;
&lt;br /&gt;
sudoedit /etc/modules&lt;br /&gt;
&lt;br /&gt;
ipmi_devintf&lt;br /&gt;
ipmi_si&lt;br /&gt;
ipmi_msghandler&lt;br /&gt;
&lt;br /&gt;
sudo /etc/init.d/kmod start&lt;br /&gt;
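The module loading and persistence steps above can be collected into one loop; in this sketch MODULES_FILE points at a scratch file instead of the real /etc/modules, so it can be tried safely anywhere:&lt;br /&gt;

```shell
# Load the IPMI modules now and persist them so they return at boot.
# MODULES_FILE stands in for /etc/modules here; on a real node point it
# at /etc/modules and uncomment the modprobe (both require root).
MODULES_FILE=$(mktemp)

for m in ipmi_si ipmi_devintf ipmi_msghandler; do
  # sudo modprobe "$m"
  # Append each module only if it is not already listed (idempotent).
  grep -qx "$m" "$MODULES_FILE" || echo "$m" >> "$MODULES_FILE"
done

cat "$MODULES_FILE"
```

Because of the grep guard, running the loop again does not duplicate entries.&lt;br /&gt;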
&lt;br /&gt;
 sudoedit /etc/default/ipmievd&lt;br /&gt;
&lt;br /&gt;
 ENABLED=true&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
sudo /etc/init.d/ipmievd restart&lt;br /&gt;
&lt;br /&gt;
sudo ipmitool -I open chassis status&lt;br /&gt;
sudo ipmitool lan set 1 ipsrc static&lt;br /&gt;
sudo ipmitool lan print 1&lt;br /&gt;
sudo ipmitool lan set 1 ipaddr 192.168.41.32&lt;br /&gt;
sudo ipmitool lan set 1 netmask 255.255.255.0&lt;br /&gt;
sudo ipmitool lan set 1 defgw ipaddr 192.168.41.254&lt;br /&gt;
sudo ipmitool lan set 1 defgw macaddr 00:0f:fe:24:36:0b&lt;br /&gt;
sudo ipmitool lan set 1 arp respond on&lt;br /&gt;
ping 192.168.41.32&lt;br /&gt;
sudo ipmitool lan set 1 auth ADMIN MD5&lt;br /&gt;
sudo ipmitool lan set 1 access on&lt;br /&gt;
sudo ipmitool lan print 1&lt;br /&gt;
sudo ipmitool user set name 2 admin&lt;br /&gt;
sudo ipmitool user set password 2&lt;br /&gt;
sudo ipmitool channel setaccess 1 2 link=on ipmi=on callin=on privilege=4&lt;br /&gt;
sudo ipmitool user enable 2&lt;br /&gt;
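The BMC network setup above can be wrapped in one parameterized function. This is only a sketch: IPMI is set to a dry-run echo so the commands are printed rather than executed (drop the echo on a real node), configure_bmc is an illustrative name, and the addresses are the example values used on this page:&lt;br /&gt;

```shell
# Dry run: prefixing with echo prints each ipmitool invocation instead of
# running it. IPMI is expanded unquoted on purpose, so it word-splits into
# "echo sudo ipmitool".
IPMI="echo sudo ipmitool"

configure_bmc() {
  local ip=$1 mask=$2 gw=$3
  $IPMI lan set 1 ipsrc static
  $IPMI lan set 1 ipaddr "$ip"
  $IPMI lan set 1 netmask "$mask"
  $IPMI lan set 1 defgw ipaddr "$gw"
  $IPMI lan set 1 arp respond on
  $IPMI lan set 1 auth ADMIN MD5
  $IPMI lan set 1 access on
}

configure_bmc 192.168.41.32 255.255.255.0 192.168.41.254
```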
&lt;br /&gt;
From another PC&lt;br /&gt;
&lt;br /&gt;
 ipmitool -I lan -H 192.168.41.31 -E chassis status&lt;br /&gt;
&lt;br /&gt;
ipmitool -I lan -H 192.168.41.31 -E chassis power off&lt;br /&gt;
 it performs a shutdown&lt;br /&gt;
&lt;br /&gt;
ipmitool -I lan -H 192.168.41.31 -E chassis power on&lt;br /&gt;
&lt;br /&gt;
sudo apt-get remove --purge acpi acpid&lt;br /&gt;
&lt;br /&gt;
ipmitool -I lan -H 192.168.41.31 -E chassis power off&lt;br /&gt;
&lt;br /&gt;
it powers off hard&lt;br /&gt;
&lt;br /&gt;
ipmitool -I lan -H 192.168.41.31 -E chassis power on&lt;br /&gt;
&lt;br /&gt;
===Enabling Fencing===&lt;br /&gt;
&lt;br /&gt;
Run on all nodes:&lt;br /&gt;
&lt;br /&gt;
* Enable fencing in /etc/default/redhat-cluster-pve (just uncomment the last line, see below):&lt;br /&gt;
&lt;br /&gt;
 sudoedit  /etc/default/redhat-cluster-pve&lt;br /&gt;
&lt;br /&gt;
 FENCE_JOIN=&amp;quot;yes&amp;quot;&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/init.d/cman restart&lt;br /&gt;
&lt;br /&gt;
* Join the fence domain&lt;br /&gt;
&lt;br /&gt;
  sudo fence_tool join&lt;br /&gt;
&lt;br /&gt;
* Verify&lt;br /&gt;
 fence_tool ls&lt;br /&gt;
&lt;br /&gt;
* The cluster will probably go out of sync; restart on all nodes:&lt;br /&gt;
 service pve-cluster restart&lt;br /&gt;
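The per-node steps above can be looped over all nodes from a single host. This is only a dry-run sketch: ssh is replaced by an echo, the node names are the examples used on this page, and the sed assumes FENCE_JOIN is present but commented out in the defaults file:&lt;br /&gt;

```shell
NODES="galprox01 galprox02"
RUN="echo ssh"   # dry run: prints each command; use RUN="ssh" for real

enable_fencing() {
  for n in $NODES; do
    # Uncomment FENCE_JOIN="yes" (assumes a "#FENCE_JOIN" line exists).
    $RUN "root@$n" 'sed -i "s/^#FENCE_JOIN/FENCE_JOIN/" /etc/default/redhat-cluster-pve'
    $RUN "root@$n" '/etc/init.d/cman restart && fence_tool join'
    $RUN "root@$n" 'service pve-cluster restart'
  done
}

enable_fencing
```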
&lt;br /&gt;
===Fencing configuration===&lt;br /&gt;
&lt;br /&gt;
* Make a copy of the configuration file&lt;br /&gt;
 sudo cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new&lt;br /&gt;
&lt;br /&gt;
* Increment the version&lt;br /&gt;
 sudoedit /etc/pve/cluster.conf.new&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;cluster alias=&amp;quot;hpiloclust&amp;quot; config_version=&amp;quot;12&amp;quot; name=&amp;quot;hpiloclust&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Set two-node mode:&lt;br /&gt;
  &amp;lt;cman keyfile=&amp;quot;/var/lib/pve-cluster/corosync.authkey&amp;quot;&lt;br /&gt;
  two_node=&amp;quot;1&amp;quot; expected_votes=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Add the fence devices outside the &amp;lt;clusternodes&amp;gt; tags:&lt;br /&gt;
     &amp;lt;fencedevices&amp;gt;&lt;br /&gt;
        &amp;lt;fencedevice agent=&amp;quot;fence_ipmilan&amp;quot; name=&amp;quot;galprox01-fence&amp;quot; login=&amp;quot;&amp;quot; ipaddr=&amp;quot;192.168.41.31&amp;quot; passwd=&amp;quot;gal80xl700&amp;quot; power_wait=&amp;quot;5&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;fencedevice agent=&amp;quot;fence_ipmilan&amp;quot; name=&amp;quot;galprox02-fence&amp;quot; login=&amp;quot;&amp;quot; ipaddr=&amp;quot;192.168.41.32&amp;quot; passwd=&amp;quot;gal80xl700&amp;quot; power_wait=&amp;quot;5&amp;quot;/&amp;gt;&lt;br /&gt;
&amp;lt;/fencedevices&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Now, inside each &amp;lt;clusternode&amp;gt;&amp;lt;/clusternode&amp;gt;, declare which fencing device is to be used:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;clusternodes&amp;gt;&lt;br /&gt;
    &amp;lt;clusternode name=&amp;quot;galprox01&amp;quot; nodeid=&amp;quot;1&amp;quot; votes=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		&amp;lt;fence&amp;gt;&lt;br /&gt;
			&amp;lt;method name=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;device name=&amp;quot;galprox01-fence&amp;quot; action=&amp;quot;off&amp;quot;/&amp;gt;&lt;br /&gt;
                &amp;lt;/method&amp;gt;&lt;br /&gt;
        &amp;lt;/fence&amp;gt;&lt;br /&gt;
    &amp;lt;/clusternode&amp;gt;&lt;br /&gt;
&lt;br /&gt;
    &amp;lt;clusternode name=&amp;quot;galprox02&amp;quot; nodeid=&amp;quot;2&amp;quot; votes=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
		&amp;lt;fence&amp;gt;&lt;br /&gt;
			&amp;lt;method name=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;device name=&amp;quot;galprox02-fence&amp;quot; action=&amp;quot;off&amp;quot;/&amp;gt;&lt;br /&gt;
                &amp;lt;/method&amp;gt;&lt;br /&gt;
        &amp;lt;/fence&amp;gt;&lt;br /&gt;
    &amp;lt;/clusternode&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Test the syntax:&lt;br /&gt;
 sudo ccs_config_validate -v -f /etc/pve/cluster.conf.new&lt;br /&gt;
&lt;br /&gt;
* Here is a complete example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;cluster config_version=&amp;quot;12&amp;quot; name=&amp;quot;M4&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;cman keyfile=&amp;quot;/var/lib/pve-cluster/corosync.authkey&amp;quot;&lt;br /&gt;
  two_node=&amp;quot;1&amp;quot; expected_votes=&amp;quot;1&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;fencedevices&amp;gt;&lt;br /&gt;
        &amp;lt;fencedevice agent=&amp;quot;fence_ipmilan&amp;quot; name=&amp;quot;galprox01-fence&amp;quot; ipaddr=&amp;quot;192.168.41.31&amp;quot; login=&amp;quot;&amp;quot; passwd=&amp;quot;gal80xl700&amp;quot; power_wait=&amp;quot;5&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;fencedevice agent=&amp;quot;fence_ipmilan&amp;quot; name=&amp;quot;galprox02-fence&amp;quot; ipaddr=&amp;quot;192.168.41.32&amp;quot; login=&amp;quot;&amp;quot; passwd=&amp;quot;gal80xl700&amp;quot; power_wait=&amp;quot;5&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;/fencedevices&amp;gt;&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;clusternodes&amp;gt;&lt;br /&gt;
    &amp;lt;clusternode name=&amp;quot;galprox01&amp;quot; nodeid=&amp;quot;1&amp;quot; votes=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;fence&amp;gt;&lt;br /&gt;
                        &amp;lt;method name=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;device name=&amp;quot;galprox01-fence&amp;quot; action=&amp;quot;off&amp;quot;/&amp;gt;&lt;br /&gt;
                &amp;lt;/method&amp;gt;&lt;br /&gt;
        &amp;lt;/fence&amp;gt;&lt;br /&gt;
        &amp;lt;/clusternode&amp;gt;&lt;br /&gt;
    &amp;lt;clusternode name=&amp;quot;galprox02&amp;quot; nodeid=&amp;quot;2&amp;quot; votes=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
                &amp;lt;fence&amp;gt;&lt;br /&gt;
                        &amp;lt;method name=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
                        &amp;lt;device name=&amp;quot;galprox02-fence&amp;quot; action=&amp;quot;off&amp;quot;/&amp;gt;&lt;br /&gt;
            &amp;lt;/method&amp;gt;&lt;br /&gt;
        &amp;lt;/fence&amp;gt;&lt;br /&gt;
        &amp;lt;/clusternode&amp;gt;&lt;br /&gt;
  &amp;lt;/clusternodes&amp;gt;&lt;br /&gt;
  &amp;lt;rm/&amp;gt;&lt;br /&gt;
&amp;lt;/cluster&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Go back to the GUI, under Datacenter/HA, and press&lt;br /&gt;
 Activate&lt;br /&gt;
&lt;br /&gt;
* Now test from node 2; node 1 must power off&lt;br /&gt;
 fence_node galprox01 -vv&lt;br /&gt;
&lt;br /&gt;
fence galprox01 dev 0.0 agent fence_ipmilan result: success&lt;br /&gt;
agent args: action=off nodename=galprox01 agent=fence_ipmilan ipaddr=192.168.41.31 login= passwd=gal80xl700 power_wait=5 &lt;br /&gt;
fence galprox01 success&lt;br /&gt;
&lt;br /&gt;
* Power it back on via IPMI&lt;br /&gt;
&lt;br /&gt;
==Setting up the quorum disk==&lt;br /&gt;
&lt;br /&gt;
* Using a quorum disk avoids the problems of the two-node architecture by simulating a third node&lt;br /&gt;
&lt;br /&gt;
* A quorum disk can be obtained by taking 100 MB away from an LVM swap volume on shared storage&lt;br /&gt;
&lt;br /&gt;
* Verify that swap can be disabled&lt;br /&gt;
$ free&lt;br /&gt;
             total       used       free     shared    buffers     cached&lt;br /&gt;
Mem:       6112636    5402828     709808          0      11204    5259216&lt;br /&gt;
-/+ buffers/cache:     132408    5980228&lt;br /&gt;
Swap:      3903484          0    3903484&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Disable it&lt;br /&gt;
sudo swapoff -a&lt;br /&gt;
&lt;br /&gt;
* Verify&lt;br /&gt;
&lt;br /&gt;
$ free&lt;br /&gt;
             total       used       free     shared    buffers     cached&lt;br /&gt;
Mem:       6112636    5402780     709856          0      11308    5260048&lt;br /&gt;
-/+ buffers/cache:     131424    5981212&lt;br /&gt;
Swap:            0          0          0&lt;br /&gt;
&lt;br /&gt;
* Identify the swap volume:&lt;br /&gt;
cat /etc/fstab | grep swap&lt;br /&gt;
/dev/mapper/vgroot-swap none            swap    sw              0       0&lt;br /&gt;
&lt;br /&gt;
* Reduce it by 100 MB&lt;br /&gt;
&lt;br /&gt;
 sudo lvreduce -L -100M /dev/mapper/vgroot-swap&lt;br /&gt;
&lt;br /&gt;
  WARNING: Reducing active logical volume to 3.62 GiB&lt;br /&gt;
  THIS MAY DESTROY YOUR DATA (filesystem etc.)&lt;br /&gt;
Do you really want to reduce swap? [y/n]: y&lt;br /&gt;
  Reducing logical volume swap to 3.62 GiB&lt;br /&gt;
  Logical volume swap successfully resized&lt;br /&gt;
&lt;br /&gt;
* Recreate the swap area&lt;br /&gt;
sudo mkswap /dev/mapper/vgroot-swap&lt;br /&gt;
&lt;br /&gt;
	mkswap: /dev/mapper/vgroot-swap: warning: don&amp;#039;t erase bootbits sectors&lt;br /&gt;
        on whole disk. Use -f to force.&lt;br /&gt;
Setting up swapspace version 1, size = 3801084 KiB&lt;br /&gt;
no label, UUID=bcfbdfef-72f8-49e1-8a44-961aec0b22ff&lt;br /&gt;
&lt;br /&gt;
* Re-enable swap:&lt;br /&gt;
$ sudo swapon -a&lt;br /&gt;
$ free&lt;br /&gt;
             total       used       free     shared    buffers     cached&lt;br /&gt;
Mem:       6112636    5409352     703284          0      11648    5264896&lt;br /&gt;
-/+ buffers/cache:     132808    5979828&lt;br /&gt;
Swap:      3801084          0    3801084&lt;br /&gt;
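The numbers above check out, as this quick calculation shows (figures taken from the transcript):&lt;br /&gt;

```shell
# Shrinking the swap LV by 100 MiB frees room for the quorum disk.
swap_before_kib=3903484          # "Swap: total" before the lvreduce
reduce_kib=$(( 100 * 1024 ))     # lvreduce -L -100M, expressed in KiB
swap_after_kib=$(( swap_before_kib - reduce_kib ))
echo "$swap_after_kib KiB"       # matches the 3801084 KiB reported by mkswap
```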
&lt;br /&gt;
* Create the LV to be shared via iSCSI: &lt;br /&gt;
&lt;br /&gt;
 sudo lvcreate -n proxmox2quorumdisk -L 10M vgroot&lt;br /&gt;
* Verify&lt;br /&gt;
 sudo lvdisplay | grep proxmox2quorumdisk&lt;br /&gt;
&lt;br /&gt;
* Install the iSCSI target&lt;br /&gt;
 sudo apt-get install iscsitarget iscsitarget-dkms&lt;br /&gt;
&lt;br /&gt;
* Enable it:&lt;br /&gt;
 sudoedit /etc/default/iscsitarget&lt;br /&gt;
&lt;br /&gt;
 ISCSITARGET_ENABLE=true&lt;br /&gt;
&lt;br /&gt;
* Declare the shared resource&lt;br /&gt;
 sudoedit /etc/iet/ietd.conf&lt;br /&gt;
&lt;br /&gt;
 Target iqn.2015-07.priv.example:myserver.lun1&lt;br /&gt;
        IncomingUser iscsi_username iscsi_password&lt;br /&gt;
        OutgoingUser&lt;br /&gt;
        #make sure the partition isn&amp;#039;t mounted :&lt;br /&gt;
        Lun 0 Path=/dev/mapper/vgroot-proxmox2quorumdisk,Type=fileio&lt;br /&gt;
        Alias LUN1&lt;br /&gt;
        #MaxConnections 6&lt;br /&gt;
&lt;br /&gt;
* Allow connections&lt;br /&gt;
 sudoedit /etc/iet/initiators.allow&lt;br /&gt;
&lt;br /&gt;
 ALL ALL&lt;br /&gt;
 sudo /etc/init.d/iscsitarget restart&lt;br /&gt;
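The target stanza can also be generated from variables. This sketch writes to a scratch file instead of /etc/iet/ietd.conf so it can be reviewed before being put in place; the target name, credentials and LV path are the example values from this page:&lt;br /&gt;

```shell
TARGET="iqn.2015-07.priv.example:myserver.lun1"
LUN_PATH="/dev/mapper/vgroot-proxmox2quorumdisk"
CONF=$(mktemp)   # stand-in for /etc/iet/ietd.conf

# The heredoc expands $TARGET and $LUN_PATH; make sure the LV is not
# mounted anywhere before exporting it.
cat > "$CONF" <<EOF
Target $TARGET
        IncomingUser iscsi_username iscsi_password
        OutgoingUser
        Lun 0 Path=$LUN_PATH,Type=fileio
        Alias LUN1
EOF

cat "$CONF"
```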
&lt;br /&gt;
===Connecting the quorum disk===&lt;br /&gt;
&lt;br /&gt;
* On the Proxmox nodes, connect the quorum disk&lt;br /&gt;
&lt;br /&gt;
* Configure authentication and automatic startup of the daemon&lt;br /&gt;
 sudoedit /etc/iscsi/iscsid.conf&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
node.startup = automatic&lt;br /&gt;
node.session.auth.username = iscsi_username&lt;br /&gt;
node.session.auth.password = iscsi_password&lt;br /&gt;
discovery.sendtargets.auth.username = iscsi_username&lt;br /&gt;
discovery.sendtargets.auth.password = iscsi_password&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/init.d/open-iscsi restart&lt;br /&gt;
&lt;br /&gt;
* List the LUNs exposed by the storage:&lt;br /&gt;
 sudo iscsiadm -m discovery -t sendtargets -p 192.168.41.103&lt;br /&gt;
&lt;br /&gt;
 192.168.41.103:3260,1 iqn.2015-07.priv.galimberti.m4:galstorage02.lun1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* Log in to the LUN with the ID found above, using the command:&lt;br /&gt;
&lt;br /&gt;
 sudo iscsiadm --mode node --targetname iqn.2015-07.priv.galimberti.m4:galstorage02.lun1 --portal 192.168.41.103 --login&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Logging in to [iface: default, target: iqn.2015-07.priv.galimberti.m4:galstorage02.lun1, portal: 192.168.41.103,3260] (multiple)&lt;br /&gt;
Login to [iface: default, target: iqn.2015-07.priv.galimberti.m4:galstorage02.lun1, portal: 192.168.41.103,3260] successful.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Verify that the disk has been added:&lt;br /&gt;
&lt;br /&gt;
 sudo less /var/log/syslog&lt;br /&gt;
&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: scsi 7:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sd 7:0:0:0: [sdb] 24576 512-byte logical blocks: (12.5 MB/12.0 MiB)&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sd 7:0:0:0: [sdb] Write Protect is off&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sd 7:0:0:0: [sdb] Mode Sense: 77 00 00 08&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sd 7:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn&amp;#039;t support DPO or FUA&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sdb: unknown partition table&lt;br /&gt;
Jul 22 09:31:26 galprox02 kernel: sd 7:0:0:0: [sdb] Attached SCSI disk&lt;br /&gt;
Jul 22 09:31:26 galprox02 iscsid: Connection1:0 to [target: iqn.2015-07.priv.galimberti.m4:galstorage02.lun1, portal: 192.168.41.103,3260] through [iface: default] is operational now&lt;br /&gt;
&lt;br /&gt;
===Preparing the quorum disk===&lt;br /&gt;
* The disk does not need to be mounted; it will be managed directly by the cluster&lt;br /&gt;
&lt;br /&gt;
* Create the partition on the quorum disk (make sure to use the correct device):&lt;br /&gt;
 sudo fdisk /dev/sdb&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
* Format the quorum disk:&lt;br /&gt;
 sudo mkqdisk -c /dev/sdb1 -l proxmox1_qdisk&lt;br /&gt;
&lt;br /&gt;
mkqdisk v1364188437&lt;br /&gt;
&lt;br /&gt;
Writing new quorum disk label &amp;#039;proxmox1_qdisk&amp;#039; to /dev/sdb1.&lt;br /&gt;
WARNING: About to destroy all data on /dev/sdb1; proceed [N/y] ? y&lt;br /&gt;
Initializing status block for node 1...&lt;br /&gt;
Initializing status block for node 2...&lt;br /&gt;
Initializing status block for node 3...&lt;br /&gt;
Initializing status block for node 4...&lt;br /&gt;
Initializing status block for node 5...&lt;br /&gt;
Initializing status block for node 6...&lt;br /&gt;
Initializing status block for node 7...&lt;br /&gt;
Initializing status block for node 8...&lt;br /&gt;
Initializing status block for node 9...&lt;br /&gt;
Initializing status block for node 10...&lt;br /&gt;
Initializing status block for node 11...&lt;br /&gt;
Initializing status block for node 12...&lt;br /&gt;
Initializing status block for node 13...&lt;br /&gt;
Initializing status block for node 14...&lt;br /&gt;
Initializing status block for node 15...&lt;br /&gt;
Initializing status block for node 16...&lt;br /&gt;
&lt;br /&gt;
* Now connect the quorum disk on the other nodes as well, without recreating it, of course&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Configuring the cluster to use the quorum disk===&lt;br /&gt;
* Create the new configuration:&lt;br /&gt;
 sudo cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new&lt;br /&gt;
&lt;br /&gt;
* Edit it&lt;br /&gt;
 sudoedit /etc/pve/cluster.conf.new&lt;br /&gt;
&lt;br /&gt;
* Increment the configuration version number&lt;br /&gt;
* Remove the two_node=&amp;quot;1&amp;quot; directive&lt;br /&gt;
* Set expected_votes=&amp;quot;3&amp;quot;&lt;br /&gt;
* Add the directives for using the quorum disk:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;cluster config_version=&amp;quot;14&amp;quot; name=&amp;quot;M4&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;cman keyfile=&amp;quot;/var/lib/pve-cluster/corosync.authkey&amp;quot;&lt;br /&gt;
   expected_votes=&amp;quot;3&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;quorumd votes=&amp;quot;1&amp;quot; allow_kill=&amp;quot;0&amp;quot; interval=&amp;quot;1&amp;quot; label=&amp;quot;proxmox1_qdisk&amp;quot; tko=&amp;quot;10&amp;quot;/&amp;gt;&lt;br /&gt;
  &amp;lt;totem token=&amp;quot;54000&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
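About the timing values in this fragment: the quorum disk declares a node dead after roughly interval &amp;#215; tko seconds, and the totem token timeout should comfortably exceed that window so corosync does not act before qdiskd has decided. The exact margin is a judgment call; the arithmetic below just uses the values from the configuration above:&lt;br /&gt;

```shell
# Values from the cluster.conf fragment above.
interval_s=1      # quorumd interval (seconds between qdisk heartbeats)
tko=10            # quorumd tko (missed heartbeats before eviction)
token_ms=54000    # totem token timeout in milliseconds

# Window after which qdiskd evicts an unresponsive node.
qdisk_window_ms=$(( interval_s * tko * 1000 ))
echo "qdisk window: ${qdisk_window_ms} ms, totem token: ${token_ms} ms"
```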
&lt;br /&gt;
* Validate the configuration&lt;br /&gt;
 sudo ccs_config_validate -v -f /etc/pve/cluster.conf.new&lt;br /&gt;
&lt;br /&gt;
* Activate it from the web GUI&lt;br /&gt;
&lt;br /&gt;
* Check the cluster status:&lt;br /&gt;
&lt;br /&gt;
 sudo pvecm s&lt;br /&gt;
&lt;br /&gt;
Version: 6.2.0&lt;br /&gt;
Config Version: 14&lt;br /&gt;
Cluster Name: M4&lt;br /&gt;
Cluster Id: 206&lt;br /&gt;
Cluster Member: Yes&lt;br /&gt;
Cluster Generation: 356&lt;br /&gt;
Membership state: Cluster-Member&lt;br /&gt;
Nodes: 2&lt;br /&gt;
Expected votes: 3&lt;br /&gt;
Total votes: 2&lt;br /&gt;
Node votes: 1&lt;br /&gt;
Quorum: 2  &lt;br /&gt;
Active subsystems: 6&lt;br /&gt;
Flags: &lt;br /&gt;
Ports Bound: 0 177  &lt;br /&gt;
Node name: galprox02&lt;br /&gt;
Node ID: 2&lt;br /&gt;
Multicast addresses: 239.192.0.206 &lt;br /&gt;
Node addresses: 192.168.41.102 &lt;br /&gt;
&lt;br /&gt;
===Enabling the quorum disk===&lt;br /&gt;
&lt;br /&gt;
* Stop rgmanager:&lt;br /&gt;
 sudo /etc/init.d/rgmanager stop&lt;br /&gt;
 Stopping Cluster Service Manager: [  OK  ]&lt;br /&gt;
&lt;br /&gt;
* Reload the cluster configuration to activate the quorum disk:&lt;br /&gt;
&lt;br /&gt;
 sudo   /etc/init.d/cman reload &lt;br /&gt;
&lt;br /&gt;
Stopping cluster: &lt;br /&gt;
   Leaving fence domain... [  OK  ]&lt;br /&gt;
   Stopping dlm_controld... [  OK  ]&lt;br /&gt;
   Stopping fenced... [  OK  ]&lt;br /&gt;
   Stopping qdiskd... [  OK  ]&lt;br /&gt;
   Stopping cman... [  OK  ]&lt;br /&gt;
   Waiting for corosync to shutdown:[  OK  ]&lt;br /&gt;
   Unloading kernel modules... [  OK  ]&lt;br /&gt;
   Unmounting configfs... [  OK  ]&lt;br /&gt;
Starting cluster: &lt;br /&gt;
   Checking if cluster has been disabled at boot... [  OK  ]&lt;br /&gt;
   Checking Network Manager... [  OK  ]&lt;br /&gt;
   Global setup... [  OK  ]&lt;br /&gt;
   Loading kernel modules... [  OK  ]&lt;br /&gt;
   Mounting configfs... [  OK  ]&lt;br /&gt;
   Starting cman... [  OK  ]&lt;br /&gt;
   Starting qdiskd... [  OK  ]&lt;br /&gt;
   Waiting for quorum... [  OK  ]&lt;br /&gt;
   Starting fenced... [  OK  ]&lt;br /&gt;
   Starting dlm_controld... [  OK  ]&lt;br /&gt;
   Tuning DLM kernel config... [  OK  ]&lt;br /&gt;
   Unfencing self... [  OK  ]&lt;br /&gt;
   Joining fence domain... [  OK  ]&lt;br /&gt;
&lt;br /&gt;
* Restart rgmanager if it is stopped:&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/init.d/rgmanager status&lt;br /&gt;
&lt;br /&gt;
 rgmanager is stopped&lt;br /&gt;
&lt;br /&gt;
 sudo /etc/init.d/rgmanager start&lt;br /&gt;
&lt;br /&gt;
 Starting Cluster Service Manager: [  OK  ]&lt;br /&gt;
&lt;br /&gt;
* Verify that the quorum disk is active:&lt;br /&gt;
  sudo clustat &lt;br /&gt;
&lt;br /&gt;
Cluster Status for M4 @ Wed Jul 22 10:16:34 2015&lt;br /&gt;
Member Status: Quorate&lt;br /&gt;
&lt;br /&gt;
 Member Name                                ID   Status&lt;br /&gt;
 ------ ----                                ---- ------&lt;br /&gt;
 galprox01                                      1 Online, Local&lt;br /&gt;
 galprox02                                      2 Online&lt;br /&gt;
 /dev/block/8:17                                0 Online, Quorum Disk&lt;br /&gt;
&lt;br /&gt;
* Check the status again, and note that it now includes:&lt;br /&gt;
&lt;br /&gt;
 sudo pvecm s&lt;br /&gt;
&lt;br /&gt;
 Quorum device votes: 1&lt;br /&gt;
 Total votes: 3&lt;br /&gt;
&lt;br /&gt;
* Repeat on all nodes&lt;br /&gt;
&lt;br /&gt;
==Test==&lt;br /&gt;
* Add an HA-managed VM via the web GUI&lt;br /&gt;
* Migrate the VM to one node so that it is the only VM there&lt;br /&gt;
* Power off that node via IPMI&lt;br /&gt;
* In the syslog of the surviving node you will see the fencing being performed and, once it succeeds, the VM being restarted&lt;br /&gt;
* If the node is physically unplugged from power or disconnected from the LAN, fencing will never succeed and will be retried forever&lt;br /&gt;
 Jul 22 14:35:53 galprox02 fence_ipmilan: Failed: Unable to obtain correct plug status or plug is not available&lt;br /&gt;
&lt;br /&gt;
* In that case, confirm that the disconnected machine has actually been powered off with&lt;br /&gt;
 sudo  fence_ack_manual galprox01&lt;br /&gt;
About to override fencing for galprox01.&lt;br /&gt;
Improper use of this command can cause severe file system damage.&lt;br /&gt;
&lt;br /&gt;
Continue [NO/absolutely]? absolutely&lt;br /&gt;
Done&lt;br /&gt;
&lt;br /&gt;
* As soon as this is done, the HA VM will be restarted.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
*[https://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster#Configuring_Fencing Two-Node High Availability Cluster - Proxmox VE]&lt;br /&gt;
*[http://pve.proxmox.com/wiki/Fencing#Test_fencing Fencing - Proxmox VE]&lt;br /&gt;
*[http://forum.proxmox.com/threads/14162-Error-fencing-proxmox-cluster-node Error fencing proxmox cluster node]&lt;br /&gt;
*[http://www.redhat.com/archives/linux-cluster/2011-July/msg00005.html Re: [Linux-cluster] fence_ipmilan fails to reboot - SOLVED]&lt;br /&gt;
*[http://ibiblio.org/gferg/ldp/IPMI_on_Debian.html IPMI HOWTO for Debian GNU/Linux on the Intel SR2300 (Server Board SE7501WV2)]&lt;br /&gt;
*[https://pve.proxmox.com/wiki/Fencing#IPMI_.28generic.29 Fencing - Proxmox VE]&lt;br /&gt;
*[http://insanelabs.com/linux/debian-open-iscsi-use-iscsi-initiator-to-connect-to-a-san/ Debian: open-iscsi, use iSCSI initiator to connect to a SAN | Ali Aboosaidi]&lt;br /&gt;
&lt;br /&gt;
*[http://www.vionblog.com/debian-iscsi-initiator-and-target/ Debian iSCSI Initiator and Target - VION Technology Blog]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Proxmox 4.x=&lt;br /&gt;
TBD&lt;/div&gt;</summary>
		<author><name>Gabriele.vivinetto</name></author>
	</entry>
</feed>