Hi all,
I have 3 identical servers (IBM x3850 x5): a cluster of 2, and another isolated one.
Some VMs are running on the isolated one.
Now I want to join the isolated node to the cluster, so for each VM (rough commands after the list) I:
- stop the VM running on the isolated node,
- back it up,
- restore it on the cluster,
- start it.
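In practice that was something like the following (the dump file name and storage names here are illustrative, not necessarily the exact ones I used):

qm stop 16000
vzdump 16000 --mode stop --compress lzo

Then, on a cluster node (dump copied over first):

qmrestore /var/lib/vz/dump/vzdump-qemu-16000-XXXX.vma.lzo 16000 --storage Disques-VMs-PRA
qm start 16000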
But the VM won't start on any node of the cluster:
root@px1-cluster1:/etc/pve/nodes/px1-cluster1/qemu-server# qm start 16000
kvm: -drive file=/dev/Disques-VMs-PRA/vm-16000-disk-1,if=none,id=drive-virtio0,aio=native,cache=none: could not open disk image /dev/Disques-VMs-PRA/vm-16000-disk-1: Invalid argument
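If it helps: with cache=none the image is opened with O_DIRECT, so I suspect the "Invalid argument" comes from direct I/O on this LV. Two checks I can think of (diagnostic ideas only, not something I found in the docs):

dd if=/dev/Disques-VMs-PRA/vm-16000-disk-1 of=/dev/null bs=4096 count=1 iflag=direct
blockdev --getss --getpbsz /dev/Disques-VMs-PRA/vm-16000-disk-1
blockdev --getss --getpbsz /dev/Disques-VMs-PRA/vm-101-disk-1

The dd reads one block with O_DIRECT (it should fail the same way if my guess is right); the blockdev lines compare the logical/physical sector sizes of the failing LV against a working one.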
lvscan:
root@px1-cluster1:~# lvscan
ACTIVE '/dev/Disques-VMs-PRA/vm-101-disk-1' [512.00 GiB] inherit
ACTIVE '/dev/Disques-VMs-PRA/vm-33001-disk-1' [32.00 GiB] inherit
inactive '/dev/Disques-VMs-PRA/vm-40000-disk-1' [50.00 GiB] inherit
ACTIVE '/dev/Disques-VMs-PRA/vm-16000-disk-1' [50.00 GiB] inherit
ACTIVE '/dev/pve/swap' [34.75 GiB] inherit
ACTIVE '/dev/pve/root' [69.50 GiB] inherit
ACTIVE '/dev/pve/data' [157.71 GiB] inherit
But other VMs (e.g. vmid 101) created directly on this iSCSI storage (LVM) have no problem.
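Just to rule out a config difference between a working and a failing VM, the drive lines can be compared, e.g.:

qm config 101 | grep virtio
qm config 16000 | grep virtio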
A "Move disk" from the raw LVM volume to local storage (qcow2 format) also works correctly: the VM starts!
Moving the disk back and forth again (VM running or not) does NOT change anything...
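For reference, the move-disk steps were roughly (storage names are mine, "virtio0" is the drive from the error above):

qm move_disk 16000 virtio0 local --format qcow2
qm move_disk 16000 virtio0 Disques-VMs-PRA

That is: the qcow2 copy on local storage starts fine, but once the disk is back on the LVM storage the error returns.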
root@px1-cluster1:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
root@px1-cluster1:~#
Any idea?
Thanks,
Christophe.