February 5, 2015, 11:43 pm
Hello,
I need to virtualize some servers whose software is licensed to the hard drive serial number. I know this is possible with plain qemu+kvm, but I do not see any documentation on it for Proxmox, and the VM conf files seem to have a different, non-XML format than plain qemu+kvm.
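For reference, with plain qemu-kvm I would normally pin the serial on the drive itself, roughly like this (image path and serial number are just placeholders), so what I am looking for is the Proxmox equivalent:
Code:
kvm -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=ide,index=0,serial=WD1234567890 ...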
Thanks for any help.
Alan
↧
February 5, 2015, 11:53 pm
Hi there everyone,
Here is what happened. This node had been running perfectly since March 2013. I created 8 VMs back then and they have all been up ever since.
Last night I was trying to create a backup of a VM using the GUI backup tool via the web interface. I saw the snapshot option and decided to give it a shot. Once the process ended, I looked up the resulting file and realized this was not the kind of backup I wanted (I needed a .tar archive), so I went back to the GUI and deleted the snapshot.
And boom VM stopped working.
When I consoled into the VM using VNC, the VM was stuck at boot with the error "boot failed, This hard drive is not bootable".
All I did was create a snapshot and then DELETE it. Here is the task log from the web UI:
Creating snapshot:
Formatting '/var/lib/vz/images/105/vm-105-state-feb.raw', fmt=raw size=7017070592
TASK OK
Removing Snapshot:
TASK OK
Vm config:
root@instant:~# cat /etc/pve/qemu-server/105.conf
#gaurav
balloon: 2048
bootdisk: ide0
cores: 2
cpu: host
cpuunits: 100
ide0: local:105/vm-105-disk-1.qcow2,format=qcow2,cache=writeback,size=76G
ide2: none,media=cdrom
memory: 3096
name: server.brandwick.com
net0: e1000=E6:FB:7E:BD:43:06,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1
Output of pveversion -v:
root@instant:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 3.10.0-1-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-3.10.0-1-pve: 3.10.0-5
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
kernel :
root@instant:~# uname -a
Linux instant.cruzehost.com 3.10.0-1-pve #1 SMP Tue Dec 17 13:12:13 CET 2013 x86_64 GNU/Linux
file:
root@instant:/var/lib/vz/images/105# file vm-105-disk-1.qcow2
vm-105-disk-1.qcow2: QEMU QCOW Image (unknown version)
Fdisk:
root@instant:/var/lib/vz/images/105# fdisk -l vm-105-disk-1.qcow2
Disk vm-105-disk-1.qcow2: 66.8 GB, 66801762304 bytes
255 heads, 63 sectors/track, 8121 cylinders, total 130472192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk vm-105-disk-1.qcow2 doesn't contain a valid partition table
ls:
root@instant:/var/lib/vz/images/105# ls -l vm-105-disk-1.qcow2
-rw-r--r-- 1 root root 66801762304 Feb 6 06:31 vm-105-disk-1.qcow2
When I ran qemu-img check I got thousands of errors (leak errors etc.):
qemu-img check vm-105-disk-1.qcow2
So I used the -r all flag to repair the image, which fixed all of the errors:
qemu-img check -r all vm-105-disk-1.qcow2
And ran qemu-img check again:
root@instant:/var/lib/vz/images/105# qemu-img check vm-105-disk-1.qcow2
No errors were found on the image.
303104/1245184 = 24.34% allocated, 0.00% fragmented, 0.00% compressed clusters
Image end offset: 66800386048
Still no luck.
After that I decided to use the qemu-nbd command to mount the disk and run fsck.
Here is what happened next.
root@instant:/var/lib/vz/images/105# modprobe nbd
FATAL: Module nbd not found.
root@instant:~# lsmod | grep ndb
root@instant:~#
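For reference, the procedure I was trying to follow was roughly this (untested here, since the nbd module is missing on this kernel; the device and partition names are assumptions):
Code:
modprobe nbd max_part=8
qemu-nbd -c /dev/nbd0 /var/lib/vz/images/105/vm-105-disk-1.qcow2
fsck /dev/nbd0p1     # assuming the first partition holds the filesystem
qemu-nbd -d /dev/nbd0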
Though I am able to view the LVM partitions via testdisk.
I don't know if I'm on the right track or what to do next. Can anybody please help me?
↧
February 6, 2015, 3:07 am
How much of a speed difference is there between running a VM on local storage versus on NFS?
Is it huge or minor?
↧
February 6, 2015, 4:09 am
When I try to restore a backed-up OpenVZ container, I get the following message:
Code:
vzrestore vzdump-openvz-114-2015_02_05-00_55_12.tar.lzo 122
Use of uninitialized value $archive in -f at /usr/share/perl5/PVE/API2/OpenVZ.pm line 334.
Use of uninitialized value $archive in concatenation (.) or string at /usr/share/perl5/PVE/API2/OpenVZ.pm line 334.
can't find file ''
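(For completeness, a fully qualified invocation would look something like the line below; the dump path is just an assumption about the default location.)
Code:
vzrestore /var/lib/vz/dump/vzdump-openvz-114-2015_02_05-00_55_12.tar.lzo 122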
Can anyone help?
I can't upgrade or restart the server because it is running in production.
Details on the actual environment:
Code:
# pveversion -v
pve-manager: 3.0-20 (pve-manager/3.0/0428106c)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.0-15
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-6
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-12
ksm-control-daemon: 1.1-1
Thank you.
↧
February 6, 2015, 4:14 am
Hi,
now I have a running ceph three-node cluster with two ssd storage nodes and one monitor.
What I would like to achieve is to add another two storage nodes with spinning drives, create a new pool on them, and keep the two pools separated.
I suppose I need to edit the crushmap, something like here:
http://www.sebastien-han.fr/blog/201...ge-with-crush/ , to add the new hosts and drives, but I am not sure how this interacts with pveceph.
Should I install Ceph on the new nodes with pveinstall and then create the OSDs via the GUI, or rather via the CLI (ceph-disk zap), or does it not matter?
Won't it trigger a rebuild/rebalance of the current Ceph pool?
This is my current output of ceph osd tree
# id weight type name up/down reweight
-1 1.68 root default
-2 0.84 host cl2
0 0.21 osd.0 up 1
1 0.21 osd.1 up 1
2 0.21 osd.2 up 1
3 0.21 osd.3 up 1
-3 0.84 host cl1
4 0.21 osd.4 up 1
5 0.21 osd.5 up 1
6 0.21 osd.6 up 1
7 0.21 osd.7 up 1
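For reference, the generic CRUSH workflow I have in mind is roughly the following (the pool name, PG count and ruleset number are placeholders, untested); my question is mainly whether pveceph tolerates this kind of manual edit:
Code:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt: add a second root (e.g. "platter") for the new hosts and a rule that selects it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool create hddpool 512 512
ceph osd pool set hddpool crush_ruleset 1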
Thank you for all the answers
↧
February 6, 2015, 4:57 am
Hello,
I am new to Proxmox and I'm experiencing problems with the standard included keyboard layouts for VMs and CTs.
The standard keyboard layout list is very limited and does not include the keyboards I am using.
At home I am using an Apple Belgium-Dutch keyboard. At work I am using a standard Belgium-Dutch keyboard.
Contrary to popular belief, these keyboards are not equal to French (Belgium) or Dutch.
How can I add my keyboard layouts to Proxmox?
At the moment I'm swapping keyboards depending on what I'm typing. For example, using the US keyboard I can type a slash, and using the French keyboard I can find the dot. But this is not an easy way of working...
Thanks for any feedback!
↧
February 6, 2015, 5:25 am
Hints appreciated, as the resume operation after live migration just makes the VM burn one vCPU at 100%, rendering the VM useless/non-responsive :confused:
root@node1:~# pveversion
pve-manager/3.3-15/0317e201 (running kernel: 2.6.32-37-pve)
root@node1:~# uname -a
Linux node1 2.6.32-37-pve #1 SMP Fri Jan 30 06:16:52 CET 2015 x86_64 GNU/Linux
root@node1:~# dpkg -l | egrep qemu\|pve\|ceph
ii ceph-common 0.87-1~bpo70+1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-fuse 0.87-1~bpo70+1 amd64 FUSE-based client for the Ceph distributed file system
ii clvm 2.02.98-pve4 amd64 Cluster LVM Daemon for lvm2
ii corosync-pve 1.4.7-1 amd64 Standards-based cluster framework (daemon and modules)
ii dmsetup 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper userspace library
ii fence-agents-pve 4.0.10-2 amd64 fence agents for redhat cluster suite
ii libcephfs1 0.87-1~bpo70+1 amd64 Ceph distributed file system client library
ii libcorosync4-pve 1.4.7-1 amd64 Standards-based cluster framework (libraries)
ii libcurl3-gnutls:amd64 7.29.0-1~bpo70+1.ceph amd64 easy-to-use client-side URL transfer library (GnuTLS flavour)
ii libdevmapper-event1.02.1:amd64 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper event support library
ii libdevmapper1.02.1:amd64 2:1.02.77-pve4 amd64 Linux Kernel Device Mapper userspace library
ii liblvm2app2.2:amd64 2.02.98-pve4 amd64 LVM2 application library
ii libopenais3-pve 1.1.4-3 amd64 Standards-based cluster framework (libraries)
ii libpve-access-control 3.0-16 amd64 Proxmox VE access control library
ii libpve-common-perl 3.0-22 all Proxmox VE base library
ii libpve-storage-perl 3.0-28 all Proxmox VE storage management library
ii lvm2 2.02.98-pve4 amd64 Linux Logical Volume Manager
ii novnc-pve 0.4-7 amd64 HTML5 VNC client
ii openais-pve 1.1.4-3 amd64 Standards-based cluster framework (daemon and modules)
ii pve-cluster 3.0-15 amd64 Cluster Infrastructure for Proxmox Virtual Environment
ii pve-firewall 1.0-17 amd64 Proxmox VE Firewall
ii pve-firmware 1.1-3 all Binary firmware code for the pve-kernel
ii pve-kernel-2.6.32-32-pve 2.6.32-136 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-2.6.32-37-pve 2.6.32-146 amd64 The Proxmox PVE Kernel Image
ii pve-libspice-server1 0.12.4-3 amd64 SPICE remote display system server library
ii pve-manager 3.3-15 amd64 The Proxmox Virtual Environment
ii pve-qemu-kvm 2.1-12 amd64 Full virtualization on x86 hardware
ii python-ceph 0.87-1~bpo70+1 amd64 Python libraries for the Ceph distributed filesystem
ri qemu-server 3.3-14 amd64 Qemu Server Tools
ii redhat-cluster-pve 3.2.0-2 amd64 Red Hat cluster suite
ii resource-agents-pve 3.9.2-4 amd64 resource agents for redhat cluster suite
ii tar 1.27.1+pve.1 amd64 GNU version of the tar archiving utility
ii vzctl 4.0-1pve6 amd64 OpenVZ - server virtualization solution - control tools
↧
February 6, 2015, 5:39 am
Wondering if there is good info around on how to set up IPMI fencing, e.g. with iLO/IPMI on HP ProLiant boxes, to do power fencing for HA clustering?
I have bought and read the Proxmox High Availability book from Packt Publishing, but it merely mentions IPMI as an option and then goes into detail only about SNMP network-switch fencing.
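For context, what I imagine the relevant fragments of /etc/pve/cluster.conf would roughly look like is the sketch below (names, addresses and credentials are placeholders, and I have not tested it), so pointers on whether this is even the right direction would already help:
Code:
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi-node1" ipaddr="10.0.0.11" login="admin" passwd="secret" lanplus="1" power_wait="5"/>
</fencedevices>
<clusternode name="node1" votes="1" nodeid="1">
  <fence>
    <method name="1">
      <device name="ipmi-node1"/>
    </method>
  </fence>
</clusternode>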
↧
February 6, 2015, 6:50 am
Hi.
We have Windows 2012 R2 Standard installed on Proxmox.
pveversion --verbose
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
Host (128 GB RAM, CPU - intel E6-2650v2)
One VM running
boot: dcn
bootdisk: virtio0
cores: 7
cpu: host
ide2: local:iso/virtio-win-0.1-100.iso,media=cdrom,size=68314K
ide3: local:iso/DRP_14.6.iso,media=cdrom,size=1778352K
memory: 122880
name: w2012R2S-TERMINAL
net0: virtio=00:50:56:09:4E:62,bridge=vmbr1
net1: e1000=AE:55:60:F7:DE:02,bridge=vmbr0
onboot: 1
ostype: win8
sockets: 2
virtio0: LVMlocal:vm-201-disk-1,size=1000G
We use the Windows server as a terminal server (about 15 simultaneous RDP sessions).
On the first day the server performed very badly (very high CPU load), but on the second day everything was perfect; now, on the third day, the server is again performing badly (again very high CPU load, about 90-100%).
The only thing I have found so far is very high Hardware Interrupts and DPCs.
Any ideas would be helpful.
↧
February 6, 2015, 9:49 am
Hi all,
Does anyone have a proper procedure for removing a node from ceph, uninstalling the ceph software but leaving it part of the proxmox cluster? I've got a machine I want to remove from ceph and replace it with another machine, but keep the original machine in the proxmox cluster.
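The rough per-OSD sequence I have in mind is something like the following (IDs are placeholders, untested), but I'm unsure about the monitor and the Proxmox-side cleanup afterwards:
Code:
ceph osd out 3                  # let the cluster rebalance off this OSD first
/etc/init.d/ceph stop osd.3
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3
# repeat for the other OSDs on the node, then remove its monitor (if any) and the ceph config
pveceph destroymon <monid>
pveceph purge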
↧
February 7, 2015, 1:33 am
Hello,
Every time Proxmox takes a snapshot of an OpenVZ container with a cPanel installation in it, it fails. The logs say:
Code:
Error: unsupported deleted submount: (deleted)/home/virtfs/installatron/home/installatron
Every time I delete an account on the cPanel system, another similar line shows up in the logs, like:
Code:
Error: unsupported deleted submount: (deleted)/home/virtfs/accountname
The full log:
Code:
100: Feb 07 00:00:02 INFO: Starting Backup of VM 100 (openvz)
100: Feb 07 00:00:02 INFO: CTID 100 exist mounted running
100: Feb 07 00:00:02 INFO: status = running
100: Feb 07 00:00:02 INFO: backup mode: suspend
100: Feb 07 00:00:02 INFO: ionice priority: 7
100: Feb 07 00:00:02 INFO: starting first sync /var/lib/vz/private/100/ to /var/lib/vz/dump/vzdump-openvz-100-2015_02_07-00_00_02.tmp
100: Feb 07 01:25:14 INFO: Number of files: 379608
100: Feb 07 01:25:14 INFO: Number of files transferred: 326477
100: Feb 07 01:25:14 INFO: Total file size: 281828500867 bytes
100: Feb 07 01:25:14 INFO: Total transferred file size: 281467432252 bytes
100: Feb 07 01:25:14 INFO: Literal data: 281467432816 bytes
100: Feb 07 01:25:14 INFO: Matched data: 0 bytes
100: Feb 07 01:25:14 INFO: File list size: 9146976
100: Feb 07 01:25:14 INFO: File list generation time: 0.001 seconds
100: Feb 07 01:25:14 INFO: File list transfer time: 0.000 seconds
100: Feb 07 01:25:14 INFO: Total bytes sent: 281525199747
100: Feb 07 01:25:14 INFO: Total bytes received: 6664350
100: Feb 07 01:25:14 INFO: sent 281525199747 bytes received 6664350 bytes 55067357.28 bytes/sec
100: Feb 07 01:25:14 INFO: total size is 281828500867 speedup is 1.00
100: Feb 07 01:25:14 INFO: first sync finished (5112 seconds)
100: Feb 07 01:25:14 INFO: suspend vm
100: Feb 07 01:25:14 INFO: Setting up checkpoint...
100: Feb 07 01:25:14 INFO: suspend...
100: Feb 07 01:25:14 INFO: Can not suspend container: Invalid argument
100: Feb 07 01:25:14 INFO: Error: unsupported deleted submount: (deleted)/home/virtfs/installatron/home/installatron
100: Feb 07 01:25:14 INFO: Checkpointing failed
100: Feb 07 01:31:36 ERROR: Backup of VM 100 failed - command 'vzctl --skiplock chkpnt 100 --suspend' failed: exit code 16
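I assume the offending stale bind mounts can be listed on the node with something like this (untested on my side, the path pattern is an assumption):
Code:
grep virtfs /proc/mounts
# or, from the container's point of view:
vzctl exec 100 cat /proc/mounts | grep virtfs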
↧
February 7, 2015, 1:55 am
Hi, sorry if this has been posted before, but I cannot find it...
Is there any way we can modify the noVNC config?
I'm looking for:
1.- A way to add/modify the sendkeys values, e.g. I need to send the Windows-Desktop key to Windows VMs.
2.- A way to hide the address bar ("https:// ip-of-server :8006/ ...") as it is already shown in the browser's caption.
3.- A way to (selectively) hide or reduce the status bar "Connected (encrypted) to: QEMU ......"
regards
↧
February 7, 2015, 3:50 am
Hi,
I have put my Proxmox NetApp plugin on GitHub, in case someone is interested in using it:
https://github.com/odiso/proxmox-pve-storage-netapp
We have been using it in production for 3 years without any bugs.
Templates, snapshots, clones and clone rollbacks are done on the NetApp SAN through NetApp API integration.
↧
February 7, 2015, 4:27 am
So I've got two physical networks working on my PVE cluster: one public-facing over eth0 and one private backend network bonded over two NICs, eth1+eth2.
Over the private backend network I run Ceph Storage fine.
Now I also want to use this for inter-application communication, so I created three extra bridges on top of bond1, but I don't seem to get those networks working between cluster nodes and am wondering what I'm doing wrong. Probably something with VLAN tagging/trunking.
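I guess the next thing to verify is whether tagged frames actually leave and arrive on bond1 at all, with something like this (untested, VLAN 20 as an example):
Code:
tcpdump -e -n -i bond1 vlan 20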
Appreciate any advice on my mistakes here, TIA!
root@node4:~# ping node3.ceph
PING node3.ceph.sprawl.dk (10.0.3.3) 56(84) bytes of data.
64 bytes from 10.0.3.3: icmp_req=1 ttl=64 time=0.192 ms
64 bytes from 10.0.3.3: icmp_req=2 ttl=64 time=0.211 ms
^C
--- node3.ceph.sprawl.dk ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.192/0.201/0.211/0.017 ms
root@node4:~# ping 10.20.0.3
PING 10.20.0.3 (10.20.0.3) 56(84) bytes of data.
^C
--- 10.20.0.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
root@node4:~# ping 10.30.0.3
PING 10.30.0.3 (10.30.0.3) 56(84) bytes of data.
^C
--- 10.30.0.3 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms
root@node4:~# ping 10.40.0.3
PING 10.40.0.3 (10.40.0.3) 56(84) bytes of data.
^C
--- 10.40.0.3 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms
root@node4:~# brctl show
bridge name bridge id STP enabled interfaces
vmbr0 8000.001b7894055a no eth0
vmbr1 8000.001b78940558 no bond1
vmbr2 8000.001b78940558 no bond1.20
vmbr3 8000.001b78940558 no bond1.30
vmbr4 8000.001b78940558 no bond1.40
vmbr0 Link encap:Ethernet HWaddr 00:1b:78:94:05:5a
inet addr:xx.xx.xx.xx Bcast:xx.xx.xx.31 Mask:255.255.255.224
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18066632 errors:0 dropped:0 overruns:0 frame:0
TX packets:14748051 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:6041989878 (5.6 GiB) TX bytes:5542030644 (5.1 GiB)
vmbr1 Link encap:Ethernet HWaddr 00:1b:78:94:05:58
inet addr:10.0.3.4 Bcast:10.0.3.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:10107445 errors:0 dropped:0 overruns:0 frame:0
TX packets:9768673 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5768727202 (5.3 GiB) TX bytes:5302442244 (4.9 GiB)
vmbr2 Link encap:Ethernet HWaddr 00:1b:78:94:05:58
inet addr:10.20.0.4 Bcast:10.20.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:502 errors:0 dropped:0 overruns:0 frame:0
TX packets:833 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:160372 (156.6 KiB) TX bytes:66816 (65.2 KiB)
vmbr3 Link encap:Ethernet HWaddr 00:1b:78:94:05:58
inet addr:10.30.0.4 Bcast:10.30.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:578 (578.0 B)
vmbr4 Link encap:Ethernet HWaddr 00:1b:78:94:05:58
inet addr:10.40.0.4 Bcast:10.40.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:578 (578.0 B)
root@node4:~# cat /etc/network/interfaces
# network interface settings
auto bond1.20
iface bond1.20 inet manual
vlan-raw-device bond1
auto bond1.30
iface bond1.30 inet manual
vlan-raw-device bond1
auto bond1.40
iface bond1.40 inet manual
vlan-raw-device bond1
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet manual
auto eth1
iface eth1 inet manual
auto eth2
iface eth2 inet manual
auto bond1
iface bond1 inet manual
slaves eth1 eth2
bond_miimon 100
bond_mode 802.3ad
# Pub NIC/Switch
auto vmbr0
iface vmbr0 inet static
address xx.xx.xx.xx
netmask 255.255.255.224
gateway xx.xx.xx.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
post-up /sbin/ethtool -s eth0 speed 100 duplex full autoneg off
# Ceph Storage Network
auto vmbr1
iface vmbr1 inet static
address 10.0.3.4
netmask 255.255.255.0
bridge_ports bond1
bridge_stp off
bridge_fd 0
# Inter Application Network #1
auto vmbr2
iface vmbr2 inet static
address 10.20.0.4
netmask 255.255.0.0
bridge_ports bond1.20
bridge_stp off
bridge_fd 0
# Inter Application Network #2
auto vmbr3
iface vmbr3 inet static
address 10.30.0.4
netmask 255.255.0.0
bridge_ports bond1.30
bridge_stp off
bridge_fd 0
# Inter Application Network #3
auto vmbr4
iface vmbr4 inet static
address 10.40.0.4
netmask 255.255.0.0
bridge_ports bond1.40
bridge_stp off
bridge_fd 0
↧
February 8, 2015, 2:32 am
Hi, I'm experimenting with clusters. I understand the importance of HA and fencing, and the paramount requirement that the same VM is never started on more than one node (destruction!), but a typical usage for having redundancy is: 2 nodes with shared storage or DRBD; one node crashes; you see it and you can even unplug its power cord; then you want to be able to easily run its VMs on the surviving node through the GUI. At the moment you can permanently set the expected quorum to 1, but you still can't select the VMs on the dead node and move them to the running one, since the GUI says that the node is down! That's very frustrating :)
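(I assume the manual workaround today is something like the following, with node names and VMIDs as placeholders, but that is exactly the kind of thing I'd like the GUI to handle:)
Code:
pvecm expected 1
mv /etc/pve/nodes/deadnode/qemu-server/101.conf /etc/pve/nodes/survivingnode/qemu-server/
qm start 101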
I would love Proxmox to check that the node count is 2, quorum is 1 and the other node seems down, and then allow "migrating" the VM, maybe with an additional warning: "Be sure node XYZ is turned off before proceeding".
When you repair the node and turn it on again, it should join the cluster, see that the config has changed and update its copy, and see that the VMs are already running and not start them again (this part should already work this way, correct?).
Thanks a lot for the attention
↧
February 8, 2015, 6:28 am
↧
February 8, 2015, 10:51 am
Hi,
I have a small cluster that I've had running for a while. Because I now have a wildcard cert for one of my domains, I'd like to use that cert for my PVE cluster. Of course, the cert is not for the domain name currently configured on my cluster. So I want to change the domain name, but not the host names of the cluster nodes. At some point in the past I changed hostnames on a cluster, and getting everything working properly again was a bit of a pain. If I recall the issues correctly, changing only the domain name might not be quite as bad, but any pointers anyone has would be greatly appreciated.
Essentially, what I want to do is something like:
- node1.example.com > node1.otherdomain.com
- node2.example.com > node2.otherdomain.com
Is all I'd need to do to change the domain part in /etc/hosts (and of course in my DNS) and then reboot?
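In other words, on each node the /etc/hosts entry would change roughly like this (the address is a placeholder), plus the search domain in /etc/resolv.conf and the DNS records:
Code:
# /etc/hosts (before)
192.0.2.11 node1.example.com node1 pvelocalhost
# /etc/hosts (after)
192.0.2.11 node1.otherdomain.com node1 pvelocalhost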
Thanks in advance for any suggestions.
↧
February 8, 2015, 12:52 pm
Hello,
I'm running Proxmox VE 3.3 ( pve-manager/3.3-1/a06c9f73 (running kernel: 2.6.32-32-pve) ) together with the latest FreeNAS as the iSCSI backend in a two-node cluster. After configuration and setup everything seemed to work fine, until both nodes began to connect to the iSCSI backend every 20 seconds. At first I didn't really think about it, as it's just a test environment, but it never stopped, and it has now been going on for multiple days. I can see that the connections are clearly coming from the nodes to the backend, that the old connections are not dropped, and that data is transferred correctly. Everything seems to work except for those connections every 20 seconds.
I wonder if I'm dealing with some kind of auto discovery here that went crazy? Or maybe some "alive checks" that are used to see if the backend is still alive.
Besides that, every time the backend is "probed", the iSCSI disk used by the nodes is re-announced to the kernel, so I can see two messages in dmesg showing the SCSI ID and the size of the disk.
It looks like this:
Code:
sd 8:0:0:0: [sdc] 536870912 4096-byte logical blocks: (2.19 TB/2.00 TiB)
sd 8:0:0:0: [sdc] 65536-byte physical blocks
And as said above, this comes up every 20 seconds on both nodes...
I'm open to any ideas, as I have already examined iscsid, the configuration and so on...
KR,
G.
↧
February 9, 2015, 12:53 am
Hello...
We have only one guest (Windows 2012 Server), which I migrated from a VMware guest. The guest ran fine the whole day, so I activated the auto-start option.
After restarting, the host hangs during the boot process with the message "Waiting for vmbr0 to get ready (MAXWAIT 2 seconds)."
The web interface and the console are not accessible.
I tried to boot with a Knoppix CD, but I failed to mount the system disk.
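(From Knoppix I assume the root filesystem should be reachable with something like the following, assuming the default pve LVM layout, which may not apply here:)
Code:
vgscan
vgchange -ay pve
mount /dev/pve/root /mnt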
How can I fix it anyway?
↧
February 9, 2015, 1:17 am
Hello,
I tried to restore a dump to my VM, but I hadn't left enough space to fully restore it.
Code:
root@Uranus:~# Erreur closing file '/var/tmp/pve-reserved-ports.tmp.3097' failed - No space left on device (500)
So I tried to increase my disk space, but I couldn't do anything.
Then I had a problem accessing this VM (I/O error). I found a solution by restarting the cluster filesystem:
Code:
root@Uranus:~# /etc/init.d/pve-cluster restart
Restarting pve cluster filesystem: pve-cluster.
root@Uranus:~# qm unlock 100
Then I could increase the disk space, but now I can't even reach the PVE panel. When I do:
Code:
root@Uranus:~# /etc/init.d/pve-cluster start
Starting pve cluster filesystem : pve-clusterBus error
failed!
Sorry if my English is bad. Has anyone already encountered this problem?
↧