Hi all,
We had two disks in a server fail in a row, and we lost the RAID (RAID 10). I replaced the disks and reinstalled Proxmox on the server with the same IP and hostname (srv-virt2) as before; this is perhaps not the best option...
There are three nodes in the cluster (srv-virt1, srv-virt2 and srv-virt3), so the quorum is 2, and we still have it (two nodes remaining). The failed node still appears in the cluster:
Code:
# pvecm nodes
Node  Sts   Inc   Joined               Name
   1   M    308   2013-10-20 17:29:19  srv-virt1
   2   X    320                        srv-virt2
   3   M  14628   2013-12-07 17:35:29  srv-virt3
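Regarding the quorum: I believe this can be confirmed from one of the surviving nodes with the command below (I have not pasted its output here); it should report the cluster as quorate with 2 of 3 votes.
Code:
# pvecm status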
I can no longer delete the VMs that were on the failed node. I do have backups of those VMs. I read the wiki on the PVE 2.0 cluster, and it seems I have to be careful. See:
https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
There seem to be two possible options. The first is to remove the node, which removes it permanently, and then re-add the server as a new node (so with a new name?). But I cannot delete the ghost VMs. What will happen to these VM IDs? Is it possible to force-delete the node?
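If I understand the wiki correctly, the first option would look roughly like the sketch below, run from one of the surviving (quorate) nodes. The paths are my assumption (I believe the stale configs live under /etc/pve/nodes/srv-virt2/), and <vmid> is just a placeholder, so please correct me if I am wrong:
Code:
# pvecm delnode srv-virt2
# ls /etc/pve/nodes/srv-virt2/qemu-server/
# rm /etc/pve/nodes/srv-virt2/qemu-server/<vmid>.conf
(Alternatively, I suppose the ghost configs could be moved to a surviving node's directory instead of deleted, but since I have backups I would rather remove them and restore from backup.)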
The second option would be "Re-installing a cluster node", but even though I have a backup of /etc, I don't have a backup of /var/lib/pve-cluster or of /root/.ssh, so it does not seem to be a viable option.
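For what it's worth, my understanding is that this re-install procedure assumes those directories were saved from the node before wiping it, with something like the command below (which I obviously can no longer do for srv-virt2; the archive name is just an example):
Code:
# tar czf /root/srv-virt2-cluster-backup.tar.gz /var/lib/pve-cluster /root/.ssh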
What would be the best procedure for me to be sure I end up with a sane cluster?
Proxmox version is 3.1:
Code:
# pveversion
pve-manager/3.1-20/c3aa0f1a (running kernel: 2.6.32-26-pve)
Thanks for any advice,
Alain