Backup with specific hostname and remove old backup
Hi, I like Proxmox very much, more every day... But the one problem for me is the backup, because the backup file name does not contain the VM's hostname. I followed this article, and it works very well: http://undefinederror.org/kvmopenvz-dumps-with-specific-name-using-a-hook/
But the old backup files are not removed. I tried changing the maxfiles parameter in /etc/vzdump.conf:
Code:
# vzdump default settings
#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#size: MB
maxfiles: 1
script: /home/maykel/vzdump-hook-script.pl
#exclude-path: PATHLIST
But that doesn't work. It seems that the backup script parses the file names to know how many backups there are. Can you help me, please?
Thanks in advance.
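A note on the pruning side: vzdump's maxfiles logic only counts files that follow its own vzdump-<type>-<vmid>-<timestamp> naming scheme, so dumps renamed by a hook script have to be pruned by the hook itself, e.g. in the backup-end phase. Below is a minimal bash sketch of that idea; the dump directory, the <hostname>-* file pattern and the number of copies to keep are assumptions to adapt to your own hook.
Code:
#!/bin/bash
# Hypothetical pruning step for a vzdump hook. The phase argument and the
# HOSTNAME/DUMPDIR environment variables follow the stock vzdump-hook-script
# example; the renamed-file pattern below is an assumption.
PHASE="$1"
DUMPDIR=${DUMPDIR:-/var/lib/vz/dump}   # vzdump exports DUMPDIR during backup phases
KEEP=1                                 # how many renamed dumps to keep per hostname

if [ "$PHASE" = "backup-end" ] && [ -n "$HOSTNAME" ]; then
    # list this guest's renamed dumps newest first and delete all but $KEEP
    ls -1t "$DUMPDIR/$HOSTNAME"-*.tar* 2>/dev/null | tail -n +$((KEEP + 1)) | xargs -r rm -f
fi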
irqbalance: WARNING, didn't collect load info for all cpus, balancing is broken
Hi,
I have the following message in /var/log/messages:
irqbalance: WARNING, didn't collect load info for all cpus, balancing is broken
It is repeated every 10 seconds in line with the default update time for irqbalance.
I have a container created from the Proxmox template for CentOS 6, on which I have then updated the packages with yum.
From what I have managed to understand from our friend Google, this is related to the fact that irqbalance does not like custom kernels.
Is anybody running CentOS 6 with multiple CPUs who has a solution for this issue?
I am running Asterisk VoIP, so timing and performance are important.
Thanks for any help.
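For what it's worth: inside an OpenVZ container there is no hardware to balance interrupts for (IRQs are handled on the hardware node), so a common workaround, not an official fix, is simply to stop and disable irqbalance inside the CT:
Code:
# Inside the CentOS 6 container (workaround sketch):
service irqbalance stop      # stop the running daemon
chkconfig irqbalance off     # keep it from starting at boot
# or remove the package entirely if nothing depends on it:
# yum remove irqbalance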
Proxmox Issue Getting Virtual Machines Started
Good evening
I am a newbie here and I need some assistance, please. We have servers at OVH and we use the Proxmox virtual environment. We had a hardware failure on a server which required a reboot; when we did this, none of our virtual machines would restart. It keeps coming up with 'connection error', but the server is running at OVH. We are not sure how to resolve this. Can anyone help or advise, please?
Thanks
Chris
Feature Request: scale ability for vncterm
Hi all, really good job!
I would like to see a "scale" option in the toolbar of vncterm windows (to scale to 25%, 50%, 75%, etc.).
I find myself dealing with extra-large windows when working with KVM.
For some strange reason I have some virtual machines running on KVM whose screens are too large (e.g. 1440x900) for my laptop (1366x768).
When I work from Linux there's no problem: I can move the window.
But when working from Windows it's a nightmare; I can't see the bottom of the vncterm screen.
Scaling the vncterm content (as you can do in the TightVNC or UltraVNC viewers) would improve the user experience when working with KVM virtual machines.
thanks!!
RHEL7 released
RHEL7 has been released. I guess it's just a little while until we get an official 3.10 kernel from the OpenVZ guys and into Proxmox :)
looking forward to this.
How to read config without mounting /etc/pve/
Hi,
I have a machine that has died.
I can't bring the machine up.
But I have a copy of /var/lib/pve-cluster/config.db.
How can I read the XXX.conf config files from this file?
I'd appreciate any help.
In other words, I'd like to read the XXX.conf config files...
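In case it helps: /var/lib/pve-cluster/config.db is an SQLite database, so a copy of it can be read offline with the sqlite3 tool. The layout assumed below (a table called "tree" with "name" and "data" columns, and 100.conf as a placeholder VMID) is how pmxcfs stores its files as far as I know; check it with .schema first.
Code:
apt-get install sqlite3
sqlite3 config.db ".schema"                                            # inspect the actual layout
sqlite3 config.db "SELECT name FROM tree WHERE name LIKE '%.conf';"    # list the stored config files
sqlite3 config.db "SELECT data FROM tree WHERE name = '100.conf';" > 100.conf   # extract one VM config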
howto for IPv6
Hello
Is there a howto for configuring IPv6 for a VM which should be reachable from outside (from the net), in order to set up a webserver?
I mean setting up both the "real" server and the virtual server.
have a nice day
vincent
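I don't know of a dedicated wiki howto, but a basic static setup looks roughly like the sketch below in /etc/network/interfaces, once on the host bridge and once inside a Debian-style guest. The 2001:db8::/64 addresses are documentation placeholders; use the prefix your provider routes to you, and note that the gateway addresses depend on whether the prefix is delivered on-link or routed.
Code:
# On the Proxmox host, added to the existing bridge stanza:
iface vmbr0 inet6 static
        address 2001:db8::2
        netmask 64
        gateway 2001:db8::1

# Inside the VM (Debian/Ubuntu guest):
iface eth0 inet6 static
        address 2001:db8::10
        netmask 64
        gateway 2001:db8::1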
BSOD after Update
I updated my Proxmox system two days ago to the current pve-no-subscription packages. The update finished without any error messages and I restarted the Proxmox server. The software versions currently installed on the server are shown in the pveversion output at the end of this post.
Everything went fine for one and a half days, but then the KVM VM with Windows Small Business Server 2008 got a blue screen with the error message IRQL_NOT_LESS_OR_EQUAL.
[Screenshot attachment: bsod_sbs2008.JPG]
I experienced the same problem some months ago and solved it back then by selecting the CPU type "host" instead of "Default (kvm64)".
Does somebody have a hint how I can fix this problem, or at least how to get a more detailed report about the reason for this BSOD? I can't find any hint about the cause in the logs of the Windows VM or in the logs of the Proxmox server.
Code:
# pveversion -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
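Since switching the CPU type to "host" solved the same BSOD before, that is probably worth trying again; the CLI equivalent should be something like the sketch below (the VMID is a placeholder, and the change only takes effect after a full stop/start of the VM). For a more detailed report, the Windows side is usually the better source: letting the guest write a memory dump on the bugcheck and opening it with WinDbg will name the faulting driver, which the Proxmox logs cannot do.
Code:
# Switch the guest from the default kvm64 CPU type to "host" (VMID is a placeholder)
qm set 101 -cpu host
# then shut the VM down completely and start it again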
Enable fencing: the cluster loses sync
Hi,
I have a PVE 3.2 cluster with two nodes and DRBD as shared storage; it runs perfectly.
I wanted to configure fencing, so I:
- uncommented the line FENCE_JOIN="yes" in /etc/default/redhat-cluster-pve on both nodes
- restarted cman on both nodes
- joined the fence domain, first on the first node, then on the second
But when I return to the web interface (on the first node) I see the second node shown red.
The pvecm status output on both nodes is normal:
first node:
# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: pvecl2
Cluster Id: 6942
Cluster Member: Yes
Cluster Generation: 96
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: lxsrv1
Node ID: 1
Multicast addresses: 239.192.27.57
Node addresses: 192.168.10.252
second node:
# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: pvecl2
Cluster Id: 6942
Cluster Member: Yes
Cluster Generation: 96
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Total votes: 2
Node votes: 1
Quorum: 2
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: lxsrv2
Node ID: 2
Multicast addresses: 239.192.27.57
Node addresses: 192.168.10.253
Why?
Can I resync the nodes?
Where can I find information about the reason?
I have tried:
- leaving the fence domain on both nodes
- commenting the line in redhat-cluster-pve again
- restarting cman
but nothing helps; the cluster is still out of sync...
thank you,
Gianni
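A few things worth checking on both nodes; the commands below are a troubleshooting sketch (fence_tool and clustat ship with the cman/rgmanager packages on PVE 3.x). In many cases a node shown red despite a healthy pvecm status is just the status daemon, so restarting pvestatd/pve-cluster is worth a try before digging deeper.
Code:
fence_tool ls                  # is the node really a member of the fence domain?
clustat                        # cluster membership as cman/rgmanager sees it
service pve-cluster restart    # restart pmxcfs in case /etc/pve went stale
service pvestatd restart       # the green/red node display in the GUI comes from pvestatd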
Proxmox on Lenovo RD340 server
Hello,
When I want to install Proxmox 3.2 on my Lenovo RD340 server with an LSI SAS RAID controller (two disks in RAID1), Proxmox does not recognize the logical drive; it gives me the choice between installing on /dev/sda or /dev/sdb.
Could you please help me?
Cordially
Run script after vm created?
I need to automatically update /etc/inetd.conf when a VM is created so I can add their VNC entries (I hate the java console). Is there any type of hook or way to easily run a bash script or whatever after a VM is created or destroyed? I would even take being able to just do it after one is started or stopped.
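As far as I know there is no official create/destroy hook in PVE 3.x, so one workaround sketch is to poll the VM list from cron and run your own script whenever a new VMID appears; the state file and the update script below are placeholders.
Code:
#!/bin/bash
# Run periodically from cron; calls a custom script once per newly created VMID.
STATE=/var/local/known-vmids
touch "$STATE"
qm list | awk 'NR>1 {print $1}' | sort > /tmp/current-vmids
# VMIDs present now but not seen on the previous run are new VMs
comm -13 "$STATE" /tmp/current-vmids | while read -r vmid; do
    /usr/local/bin/update-inetd-vnc.sh "$vmid"   # hypothetical helper that edits /etc/inetd.conf
done
mv /tmp/current-vmids "$STATE"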
Windows Guest slow network performance
I just installed a new Windows 2k8 Std server on a 3.2 host that already has a 2k8 guest running on it. The new guest is getting horrible network performance, even to the other guest VM on the machine: on the order of ~200 Kbit/s, whereas the existing guest gets ~500 Mbit/s.
Both adapters are VirtIO and using the same driver and are both stored on the same NAS.
Does anyone have any ideas as to why this VM would get such poor network performance?
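Not an answer, but a first step I would take is to diff the two guests' configs and then measure guest-to-guest to take the NAS out of the picture; a sketch (the VMIDs are placeholders):
Code:
# compare NIC and disk settings of the fast and the slow guest
qm config 101 | grep -E '^(net|virtio|ide|scsi)'
qm config 102 | grep -E '^(net|virtio|ide|scsi)'
# then run an iperf test directly between the two guests (there is a Windows
# build of iperf) to rule out the path to the NAS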
Could not read qcow2 header: Operation not permitted (500) on GlusterFS volume
Hello,
I have a standalone Proxmox node with several VMs on a ZFS volume and it is working fine.
Now, on a separate two-node test cluster, I created 2 ZFS datasets. These 2 datasets are replicated through GlusterFS on both (2) nodes. Each dataset will store a separate VM which will be continuously snapshotted through ZFS for backup.
I have configured zfs datasets with the following options:
compression=off
xattr=sa
sync=disabled
GlusterFS volume is configured with default settings.
Below is the ZFS configuration:
Code:
zpool status
pool: zfspool1
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zfspool1 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
sde ONLINE 0 0 0
sdf ONLINE 0 0 0
errors: No known data errors
Code:
zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool1 26.4G 12.7G 11.1G /zfspool1
zfspool1/images 15.3G 12.7G 37K /zfspool1/images
zfspool1/images/101 10.9G 12.7G 10.9G /zfspool1/images/101
zfspool1/images/102 4.35G 12.7G 424K /zfspool1/images/102
Gluster config is as follows:
Code:
Status of volume: glusterzfs
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick gluster1:/zfspool1 49155 Y 3057
Brick gluster2:/zfspool1 49155 Y 2949
NFS Server on localhost 2049 Y 3064
Self-heal Daemon on localhost N/A Y 3068
NFS Server on gluster2 2049 Y 3259
Self-heal Daemon on gluster2 N/A Y 3264
There are no active volume tasks
Code:
Volume Name: glusterzfs
Type: Replicate
Volume ID: 976e38ae-bd13-4adf-929a-b63bbb6ec248
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/zfspool1
Brick2: gluster2:/zfspool1
Options Reconfigured:
cluster.server-quorum-ratio: 55%
Mount points:
Code:
df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 299M 468K 298M 1% /run
/dev/mapper/pve-root 2.5G 2.0G 427M 83% /
tmpfs 5.0M 0 5.0M 0% /run/lock
zfspool1 24G 12G 13G 47% /zfspool1
zfspool1/images 13G 128K 13G 1% /zfspool1/images
zfspool1/images/101 24G 11G 13G 47% /zfspool1/images/101
zfspool1/images/102 13G 512K 13G 1% /zfspool1/images/102
tmpfs 597M 50M 548M 9% /run/shm
/dev/mapper/pve-data 4.5G 138M 4.3G 4% /var/lib/vz
/dev/sda1 495M 65M 406M 14% /boot
/dev/fuse 30M 24K 30M 1% /etc/pve
172.21.3.3:/storage/nfs2 727G 28G 700G 4% /mnt/pve/nfs1
gluster:glusterzfs 24G 12G 13G 47% /mnt/pve/glusterzfs
Everything looks OK. I have also configured Gluster on Proxmox -> Datacenter -> Storage.
But when I try to create a VM on this storage, I get the following error:
Code:
unable to create image: qemu-img: gluster://gluster/glusterzfs/images/102/vm-102-disk-4.qcow2: Could not read qcow2 header: Operation not permitted (500)
PVE-Version:
Code:
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
If I dd or create anything via the CLI there is no problem, and I can see that these files are replicated on both nodes.
However, if I try to create a qcow2 disk with qemu-img directly, I get the same error.
I tried the same scenario without ZFS but with hdd -> ext4+glusterfs combination and there is no problem there.
Must be something in ZFS+GlusterFS combination.
Anyone with such experience?
thanks
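No direct experience with this combination, but one hedged guess: ZFS on Linux does not support O_DIRECT, and Proxmox opens Gluster-backed qcow2 images with cache=none/aio=native, so "Operation not permitted" could be a direct-I/O open failing on the ZFS-backed brick. A quick way to narrow it down is to create a test image both through the FUSE mount and through the gluster:// URL and see whether only the libgfapi path fails:
Code:
# over the FUSE mount of the volume
qemu-img create -f qcow2 /mnt/pve/glusterzfs/images/102/test-fuse.qcow2 1G
# over the libgfapi URL shown in the error message
qemu-img create -f qcow2 gluster://gluster/glusterzfs/images/102/test-gfapi.qcow2 1G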
Raid Questions
I received a new server today; they incorrectly sent me a server with Raid300 SAS software RAID. I am debating either sending it back or adding a Raid500 hardware card (I think it's the LSI 9240-8i). Has anyone successfully installed Proxmox 3.1 or 3.2 on the LSI 9240-8i RAID card? Is it worth it? I plan on having only one hard drive. Would I be better off returning it and getting a SATA drive, or adding the 9240 card?
Your opinions are appreciated!
USB key and VM
Hi,
I'm quite new to PROXMOX and this is - I feel - a very dumb question but I still haven't figured out how to do it.
I have some content on a USB key.
I would like a specific VM to have access to that USB.
So how do I even do that ?
I mean I can plug the USB key to the physical server running PROXMOX.
Then how do I give access to the content of this USB key to a specific VM ?
Thanks !
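For a KVM guest this is normally done with USB passthrough: find the key's vendor:product ID on the host and assign it to the VM. A sketch (the VMID and the 1234:5678 ID are placeholders; the device is attached on the next stop/start of the VM):
Code:
lsusb                              # note the key's ID, e.g. "ID 1234:5678"
qm set 100 -usb0 host=1234:5678    # pass that device through to VM 100
# alternatively pass a physical port instead of an ID, e.g. host=1-1.2 (see "lsusb -t")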
CVE-2014-3153 / vzkernel 042stab090.3
Hi
Any plans when you are going to release an updated kernel with CVE-2014-3153 fixed? CVE-2014-3153 is pretty nasty.
According to OpenVZ this was fixed on 2014-06-06 in kernel 042stab090.3, https://twitter.com/_openvz_/status/474989972663988224
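No inside knowledge here, but once the rebuilt pve-kernel package lands in the repository, picking it up should just be the usual routine (sketch), followed by a reboot into the new kernel:
Code:
apt-get update
apt-get dist-upgrade    # pulls in the new pve-kernel-2.6.32-*-pve package once released
reboot
uname -r                # confirm the running kernel afterwards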
OpenVZ CT cannot access the LAN network
Hi.
My PVE server is connected to the LAN with the interface eth1.
The OpenVZ CT has a veth bridged with eth1.
The CT can access the IP address of PVE's eth1, but cannot access other hosts in the LAN.
The strange thing is that other hosts in the LAN can access the CT.
There are no firewall rules in PVE, the CT or the other hosts; everything is ACCEPT and there are no iptables rules at the moment.
CT0 can access the other hosts in the LAN of course (so there is no physical connection problem).
ip_forward is enabled on PVE.
Could you help me please?
Thank you very much!
Bye
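A few things I would check; the commands below are a diagnostic sketch (the bridge name vmbr0 and the CT's veth interface name are assumptions, adjust them to your setup):
Code:
# on the PVE host
brctl show                 # is the CT's vethXXX.0 in the same bridge as eth1?
tcpdump -ni vmbr0 icmp     # do the CT's pings leave the bridge, and do the replies come back?
# inside the CT
ip addr ; ip route         # correct address, netmask and default gateway?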
how to set up "preferred" Host for a VM
Hi All,
I need to configure a "preferred" Host for a VM in HA mode.
Is it possible to do that?
Scenario:
I have a 3-node cluster (HA). On node 3, I set up a new virtual machine in HA mode. If node 3 goes down, the virtual machine is normally moved to another node (1 or 2); so far so good.
When node 3 is back in the cluster after maintenance or after the problem has been solved, the VM should automatically live-migrate back to node 3.
Hmm, I had a look at the RHEL cluster stack as well as the Proxmox wiki but found no information.
Does anybody have ideas or tips on how to do that?
Regards
Martin
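With the rgmanager-based HA stack the usual mechanism for this is an ordered failover domain with nofailback disabled, defined in cluster.conf. The sketch below shows the idea; the node names and VMID are placeholders, and whether the domain attribute is honoured on the pvevm element exactly like this is an assumption to verify on a test cluster first (remember that cluster.conf changes go through cluster.conf.new and activation in the GUI):
Code:
<rm>
  <failoverdomains>
    <!-- ordered domain: node3 preferred; nofailback="0" lets the VM move back -->
    <failoverdomain name="prefer_node3" ordered="1" restricted="0" nofailback="0">
      <failoverdomainnode name="node3" priority="1"/>
      <failoverdomainnode name="node1" priority="2"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <pvevm autostart="1" vmid="101" domain="prefer_node3"/>
</rm>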
KVM Live Migration hangs on fresh 3-node install of Proxmox VE 3.2
We are just starting to evaluate proxmox ve and testing the various features provided. The setup uses 3 Intel-based nodes with GlusterFS. Since we do not have an enterprise subscription (yet) we installed Debian 7.5 + GlusterFS 3.5, then performed the conversion to the Proxmox kernel using the pve-no-subscription repo. Perhaps it is important to point out that we did NOT first install proxmox ve 3.1 using the standard pve repo, as the wiki indicates, we instead went direct to 3.2 using pve-no-subscription, we assume this is supported.
So far we have only configured the pvecm cluster and joined the nodes. No fencing is enabled yet.
We created a simple CentOS 6.5 minimal VM (1GB ram, 1 cpu (kvm64), 32GB qcow2 disk virtio, virtio network), and verified we can perform _offline_ migrations between the various hosts to validate the GlusterFS shared storage was indeed working.
Then we tried Live Migration from proxmox2 to proxmox1, and the last line we get is:
Jun 13 08:19:42 starting VM 100 on remote node 'proxmox1'
On that proxmox1 system, I can see the kvm process running and the qm migrate process:
root 111359 111357 0 08:19 ? 00:00:00 /usr/bin/perl /usr/sbin/qm start 100 --stateuri tcp --skiplock --migratedfrom proxmox2 --machine pc-i440fx-1.7
root 111376 1 0 08:19 ? 00:00:00 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name centostest -smp sockets=1,cores=1 -nodefaults -boot menu=on -vga cirrus -cpu Westmere,+x2apic -k en-us -m 1024 -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -drive if=none,id=drive-ide2,media=cdrom,aio=native -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=gluster://localhost/vms/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge -device virtio-net-pci,mac=DE:8E:BA:60:D7:B4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -machine type=pc-i440fx-1.7 -incoming tcp:localhost:60000 -S
I've tried scanning the various log files in /var/log, but not finding any errors on either server.
One last comment, on each of the hosts, under Services, it shows RGManager as stopped. I'm not sure what that service does, but perhaps it is relevant.
Any help would be much appreciated, Thanks!
-Brad
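Nothing jumps out, but since the target kvm process is sitting there with -incoming, I would check on the source side whether the memory transfer ever starts; a sketch (RGManager being stopped is expected at this point, it only matters once HA resources are configured):
Code:
# on the source node (proxmox2), attach to the VM's monitor
qm monitor 100
# at the qm> prompt:
#   info migrate        # shows whether RAM is being transferred or the job is stuck
#   info status
# also confirm passwordless SSH and name resolution between the nodes,
# since the migration stream is tunnelled over SSH:
getent hosts proxmox1 proxmox2
ssh proxmox1 true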
Proxmox 3.1-21 and CT random ping response
Hi,
I have a server installed using the ISO of Proxmox 3.1-21.
Now I've noticed that, randomly, the CTs don't reply to ping requests from outside the network. Do you know what could be happening?
Thank you for this project !
With kind regards,
long93