
Problem creating a two-node cluster with Proxmox VE 2.3

Hello,

I'm trying to create a two-node cluster on Proxmox VE 2.3, using unicast (my hosting company does not support multicast) on a dedicated network interface.

Here are the steps I followed:

1. Fresh installation of Proxmox VE, then:

Code:

root@node1:~# aptitude update && aptitude full-upgrade -y
2. Configuration of /etc/hosts:

Code:

root@node1:~# cat /etc/hosts
127.0.0.1      localhost

88.190.xx.xx  sd-xxxxx.dedibox.fr node1ext
10.90.44.xx    node1 pvelocalhost
10.90.44.xx  node2
root@node1:~# hostname
node1
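One thing worth checking at this step is that the node name corosync will bind to resolves to the dedicated 10.90.44.x interface rather than the public one. A minimal sketch of that check, using placeholder addresses (the real last octets are masked above) and a plain awk filter over an /etc/hosts-style listing:

```shell
# Hypothetical /etc/hosts content; 10.90.44.10/.11 are placeholder
# addresses standing in for the masked 10.90.44.xx entries above.
hosts='10.90.44.10 node1 pvelocalhost
10.90.44.11 node2'

# The address the cluster stack will use for "node1" is whatever the
# hosts file maps it to, so it must be the dedicated-interface address:
echo "$hosts" | awk '$2 == "node1" { print $1 }'
```

On the real node, `getent hosts node1` should print the same dedicated-interface address.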



3. Creation of the cluster

Code:

root@node1:~# pvecm create dataexperience
Restarting pve cluster filesystem: pve-cluster[dcdb] notice: wrote new cluster config '/etc/cluster/cluster.conf'
.
Starting cluster:
  Checking if cluster has been disabled at boot... [  OK  ]
  Checking Network Manager... [  OK  ]
  Global setup... [  OK  ]
  Loading kernel modules... [  OK  ]
  Mounting configfs... [  OK  ]
  Starting cman... [  OK  ]
  Waiting for quorum... [  OK  ]
  Starting fenced... [  OK  ]
  Starting dlm_controld... [  OK  ]
  Tuning DLM kernel config... [  OK  ]
  Unfencing self... [  OK  ]

4. Modification of the cluster.conf

Code:

root@node1:~# cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
root@node1:~# vi /etc/pve/cluster.conf.new


Code:

<?xml version="1.0"?>
<cluster name="dataexperience" config_version="2">

  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu" two_node="1" expected_votes="1">
  </cman>

  <clusternodes>
  <clusternode name="node1" votes="1" nodeid="1"/>
  </clusternodes>

</cluster>
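With transport="udpu" there is no multicast discovery, so (as far as I understand it) corosync takes its member list entirely from the &lt;clusternode&gt; entries, and the second node has to appear there once it joins. For reference, a sketch of what the two-node version of the file above might look like (node2's nodeid is an assumption; the entry is normally written by `pvecm add`, and config_version must be bumped on every change):

```xml
<?xml version="1.0"?>
<cluster name="dataexperience" config_version="3">

  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu" two_node="1" expected_votes="1">
  </cman>

  <clusternodes>
  <clusternode name="node1" votes="1" nodeid="1"/>
  <clusternode name="node2" votes="1" nodeid="2"/>
  </clusternodes>

</cluster>
```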

Code:

root@node1:~# ccs_config_validate -v -f /etc/pve/cluster.conf.new
Creating temporary file: /tmp/tmp.BopYEiuGdz
Config interface set to:
Configuration stored in temporary file
Updating relaxng schema
Validating..
Configuration validates
Validation completed

Then I activated the new cluster configuration through the GUI.

To make sure the new cluster configuration was taken into account, I rebooted the master (a bit overkill). After the reboot:

Code:

root@node1:~# pvecm status
cman_tool: Cannot open connection to cman, is it running ?

Here is what I see in the log:

Code:

May  9 11:39:51 node1 pmxcfs[1457]: [dcdb] crit: cpg_initialize failed: 6
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:57 node1 pmxcfs[1457]: [quorum] crit: quorum_initialize failed: 6
May  9 11:39:57 node1 pmxcfs[1457]: [confdb] crit: confdb_initialize failed: 6
May  9 11:39:57 node1 pmxcfs[1457]: [dcdb] crit: cpg_initialize failed: 6
May  9 11:39:57 node1 pmxcfs[1457]: [dcdb] crit: cpg_initialize failed: 6
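If I read the numeric codes right, they are corosync cs_error_t values (this mapping is my assumption, taken from corosync's corotypes.h): 6 would be CS_ERR_TRY_AGAIN, i.e. pmxcfs cannot reach corosync at all, which is consistent with cman not running; 9 would be CS_ERR_BAD_HANDLE, follow-on errors from reusing a handle whose initialize already failed. A small lookup sketch:

```shell
# Assumed mapping of the pmxcfs log codes to corosync cs_error_t values
# (from corosync's corotypes.h; not confirmed against the PVE 2.3 sources).
cs_error() {
  case "$1" in
    6) echo "CS_ERR_TRY_AGAIN: corosync not reachable yet" ;;
    9) echo "CS_ERR_BAD_HANDLE: stale handle after a failed initialize" ;;
    *) echo "unknown code $1" ;;
  esac
}

cs_error 6   # the cpg_initialize / quorum_initialize / confdb_initialize failures
cs_error 9   # the cpg_send_message failures
```

Both point the same way: corosync/cman never came up after the reboot, so the next step is probably `service cman start` by hand and reading what it prints.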

That's my third try, with exactly the same result.
