February 16, 2015, 11:58 am
Hello,
For some reason Im not able to start my server in normal mode. Therefore Im making a backup of my mounted files. I run Ubunto resuce mode.
Im now moving vz folder to my backup server and will restore to my new server.
There are couple of VMs and every vm has 100 GB raw file which will take some time to transfer.
However my question is I need to know the name of the VM(Hostname) as I will not be able to know which VM ID belongs to whom.
Can someone let me know where I can found that information.
Question 2
Is it possible to backup the whole vm information as LZO via ubuntu so I will save space and time?
↧
February 16, 2015, 2:31 pm
Vzdump can't complete the backup of an openvz VM.
Ok, the cause is an open console or a midnight panel in the VM three or ...
Is there a way/option/parameter/trick/... to force the backup (at my risk)?
Thanks, P.
↧
↧
February 16, 2015, 3:30 pm
Hello,
Doing my daily check for updates and noticed that Parted was held back and three other pkgs were updated.
Tanks for the quick turn around on quemu and firewall. That is awesome.
wk
↧
February 17, 2015, 12:43 am
Hi,
is there a way to change the default language in web interface to other than English? I found no configuration file or solution in web search. Thanks for help.
looper
↧
February 17, 2015, 1:56 am
Dear My Friends,
I have some problem with Web GUI of the proxmox that I install,
The Version was pve-manager/3.1-21/93bf03d4 (running kernel: 2.6.32-26-pve)
After installation, I can access Web GUI and I change the IP Address and the interface, this is the rc.local config on my proxmox
# activate main interface
ifconfig eth1 up
ifconfig eth0 up
# activate vmbr
brctl addbr vmbr1
brctl addif vmbr1 eth1
brctl addbr vmbr0
brctl addif vmbr0 eth0
ifconfig vmbr1 up
ifconfig vmbr0 up
# add ip for manage proxmox
ifconfig vmbr1.5 172.17.5.253 netmask 255.255.255.0 up
# add routing
ip route add default via 172.17.5.1
After I reboot the machine, the proxmox can be access via ssh but Web GUI can't access anymore
Please kindly give me suggestion.
Thanks
↧
↧
February 17, 2015, 3:12 am
Hello,I
I have a 4 node cluster with the following version:
root@proxmox-cluster-01:~# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-6
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
I've installed the new server with the older packages and it matches the old/existing servers:
root@proxmox-cluster-05:~# pveversion -v
proxmox-ve-2.6.32: 3.3-147 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-37-pve: 2.6.32-147
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-6
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
When I try to join the cluster it will try to start CMAN and it fails with:
copy corosync auth key
stopping pve-cluster service
Stopping pve cluster filesystem: pve-cluster.
backup old database
Starting pve cluster filesystem : pve-cluster.
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... corosync died with signal: 11 Check cluster logs for details
[FAILED]
waiting for quorum...
In the syslog:
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [MAIN ] Corosync Cluster Engine ('1.4.5'): started and ready to provide service.
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [MAIN ] Corosync built-in features: nss
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [MAIN ] Successfully parsed cman config
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [MAIN ] Successfully configured openais services to load
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 17 12:10:37 proxmox-cluster-05 corosync[18082]: [TOTEM ] The network interface is down.
Feb 17 12:10:38 proxmox-cluster-05 pmxcfs[17567]: [status] crit: cpg_send_message failed: 9
Feb 17 12:10:38 proxmox-cluster-05 pmxcfs[17567]: [status] crit: cpg_send_message failed: 9
Feb 17 12:10:38 proxmox-cluster-05 pmxcfs[17567]: [status] crit: cpg_send_message failed: 9
Feb 17 12:10:38 proxmox-cluster-05 pmxcfs[17567]: [status] crit: cpg_send_message failed: 9
Feb 17 12:10:38 proxmox-cluster-05 pmxcfs[17567]: [status] crit: cpg_send_message failed: 9
Feb 17 12:10:38 proxmox-cluster-05 pmxcfs[17567]: [status] crit: cpg_send_message failed: 9
Feb 17 12:10:41 proxmox-cluster-05 pmxcfs[17567]: [quorum] crit: quorum_initialize failed: 6
Feb 17 12:10:41 proxmox-cluster-05 pmxcfs[17567]: [confdb] crit: confdb_initialize failed: 6
Feb 17 12:10:41 proxmox-cluster-05 pmxcfs[17567]: [dcdb] crit: cpg_initialize failed: 6
Feb 17 12:10:41 proxmox-cluster-05 pmxcfs[17567]: [dcdb] crit: cpg_initialize failed: 6
I've also tried to stop services, remove all cluster folders and rety but nogo. Also tried with the latest version and latest kernel.
↧
February 17, 2015, 8:25 am
Hello,
Currently we have multiple backup jobs, one per storage.
This proves to be quite the performance hog though. There are constantly (during morning-noon) backup jobs running - even some timing out due to not being able to aquire the vzdump lock in time.
What is the recommended way to mirror backups to multiple storages without running multiple backup jobs?
Is there some sort of hook we can use to call a simple mirror script?
↧
February 17, 2015, 11:02 am
Hi,
I have a big problem, since i do a last update i can't access to web interface of proxmox and after reboot i have same problem.
pvedaemon, pveproxy etc ... won't start and give this error :
Starting PVE Daemon: pvedaemonERROR: no command specified
USAGE: pvedaemon <command> [ARGS] [OPTIONS]
I tried with service pvedaemon start and /etc/init.d/pvedaemon start with no success !
↧
February 17, 2015, 11:45 am
I have a server installed with the latest prox. It's not top of the line box.
About every 4-5 min. The I/O delay goes up to 50%+ for about 10-20 seconds, then back down again. The server has 16*300 GB sas drives in a raid 10, 32 GB Ram and 16 cores. We tried swapping out the raid card - no change. If I install Debian 7.8, or Ubuntu, or Cent, and pound the array with I/O, there is no delays at all.
Tried a bare-metal install and that did not solve the issue. The only VM running is a mysql slave that is replicating for HA.
CPU usage is 1-4%. Ram is 8GB out of 32.
pveperf shows:
CPU BOGOMIPS: 81065.52
REGEX/SECOND: 1045465
HD SIZE: 54.88 GB (/dev/disk/by-uuid/505ed783-87e8-435e-b0b8-775244613544)
BUFFERED READS: 354.39 MB/sec
AVERAGE SEEK TIME: 4.79 ms
FSYNCS/SECOND: 3479.59
I'm stumped. Does anybody have any ideas what we can try to isolate this?
↧
↧
February 17, 2015, 12:34 pm
Hi !!!
I have successfully installed wildcard certificate from K Software to Proxmox VE. No issues with Chrome, when i browsing to Proxmox VE web interface. But when I trying to connect to VM with Windows 64bit VirtViewer 2.0 - I have error message:
Code:
C:\temp>(remote-viewer.exe:1656): remote-viewer-DEBUG: No configuration file C:\Users\wadim\AppData\Local\virt-viewer\settings(remote-viewer.exe:1656): remote-viewer-DEBUG: fullscreen display 0: 0
(remote-viewer.exe:1656): remote-viewer-DEBUG: Opening display to 11.vv
(remote-viewer.exe:1656): remote-viewer-DEBUG: Guest (null) has a spice display
(remote-viewer.exe:1656): remote-viewer-DEBUG: After open connection callback fd=-1
(remote-viewer.exe:1656): remote-viewer-DEBUG: Opening connection to display at 11.vv
(remote-viewer.exe:1656): remote-viewer-DEBUG: New spice channel 000000000113BFA0 SpiceMainChannel 0
(remote-viewer.exe:1656): remote-viewer-DEBUG: notebook show status 0000000001133460
((null):1656): Spice-Warning **: ../../../spice-common/common/ssl_verify.c:429:openssl_verify: Error in certificate chain verification: unable to get
local issuer certificate (num=20:depth1:/C=US/ST=KY/L=Ashland/O=K Software/CN=K Software Certificate Authority (DV))
(remote-viewer.exe:1656): GSpice-WARNING **: main-1:0: SSL_connect: error:00000001:lib(0):func(0):reason(1)
(remote-viewer.exe:1656): remote-viewer-DEBUG: Disposing window 000000000115B0A0
(remote-viewer.exe:1656): remote-viewer-DEBUG: Set connect info: (null),(null),(null),-1,(null),(null),(null),0
I think, I have to put somewhere in Windows my pve-root-ca.crt. I've imported it - but no success.
Please, help !!!
Regards
Vadim.
↧
February 17, 2015, 3:45 pm
Done some experiments with vzdump/vzrestore & c.
OpenVZ CT CentOS-7-x86_64 + VirtualMin
Works fine.
If I do a vzdump and vzrestore it on the same server or into another, the clone works fine except for apache CGI and FastCGI that don't works.
After many investigations I found that in the cloned CT the /usr/sbin/suexec lost his capabilities attributes (rpm -V give a list of the files that have the same problem (ping, ping6, ...)).
Probably --xattrs for rsync or for tar can solve the question (I reinstalled the packages involved).
And there are some file that aren't copied (../help/.....html in the webmin package)
Regards, P.
↧
February 17, 2015, 6:43 pm
Hi, Just looking at the new fw feature in 3.3.
My set ups are standalone non clustered instances of proxmox host.
Normally:
fw Host: I use shorewall, its what i'm use to.
fw Guest: My guest have there own fw installed either shorewall or CSF with & without LFD, as I like the management & reporting / blocking options integrated into cPanel & Virtualmin guests
Question: Is there any compelling reason why I would/should activate & use new built in fw, give my basic setup - non clustered ?
Thanks
Garry
↧
February 17, 2015, 11:43 pm
Hello all,
I've been using pve since 2.x as a free platform and continued doing so. At the moment, I believe I have 3.1 installed:
Code:
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
Two questions:
1) In the 2.x times, I used to do (IIRC) just some apt sources tinkering, an apt-get update and an apt-get dist-upgrade. Can I still do something like that, considering I do not have some sort of support license?
2) Are there any issues regarding OpenVZ containers and 3.3? I think I read somewhere that they are not supported...
I would appreciate any info available on the issues mentioned above :)
↧
↧
February 18, 2015, 12:20 am
Hi My Friends,
I have proxmox on server with 4GB memory and the VM was running normal,
But after I change the memory to 8GB and start the proxmox, I use ssh to access proxmox and use command : qm start 100, the proxmox show error, like no vm 100 on the proxmox, then I check on the configuration file, /etc/pve/qemu-server/100.conf but I just found /etc/pve/, there is no file on pve folder but the /var/lib/vz/images/100/ still there and the image too
The Version was pve-manager/3.1-21/93bf03d4 (running kernel: 2.6.32-26-pve)
Any suggestion to start my vm ?
↧
February 18, 2015, 12:53 am
Greetings!
#uname -a
Linux h3 3.10.0-7-pve #1 SMP Thu Jan 22 11:20:00 CET 2015 x86_64 GNU/Linux
#modinfo kvm_intel|grep -i nested
parm: nested:bool
#cat /sys/module/kvm_intel/parameters/nested
Y
In config file of virtual machine I set "args: -enable-nesting" option.
But when I tried to start I've got "kvm: -enable-nesting: invalid option"...
Whats wrong, maybe I need special pve-qemu-kvm?
↧
February 18, 2015, 1:30 am
Hi guys,
anyone can please tell me what are ways to upload .vmdk file to proxmox and how to convert it to .qcow2 format.
Thanks in advance pls!!!!!!!!!!!!!!!
↧
February 18, 2015, 3:48 am
Hi all :-) I have an issue with 2 recent Proxmox installs? These are now the 5th and sixth installs in this business. The machines giving me grief are a HP Dl385 G5 and HP ML350 G6 - all the firm ware is uptodate on the servers and I downloaded latest Proxmox and do normla install. The windows machines appear to be stalling or freezing monmentairly and sometime they become unuseable while everything on the host is fine?
Here is one Proxmox install that works fine (I am running Windows for Terminal Servers). I think this is the latest version of Proxmox.
pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Here is one of the Proxmox machines giving me grief. I downloaded new ISO and install proxmox. But this version looks older??? Can anyone advise or can help with debugging issues? I have tried apt-get dist-upgrade?
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Thanks in advance
Stephen
↧
↧
February 18, 2015, 3:53 am
Hi,
I found an issue after transfering four big (4TB) vm-disks form one ceph pool (pve) to another ceph pool (file).
The Task show OK, but the old file isn't deleted:
Code:
create full clone of drive virtio7 (ceph_pve:vm-410-disk-4)
transferred: 0 bytes remaining: 4398046511104 bytes total: 4398046511104 bytes progression: 0.00 % busy: true
transferred: 41943040 bytes remaining: 4398004568064 bytes total: 4398046511104 bytes progression: 0.00 % busy: true
transferred: 94371840 bytes remaining: 4397952139264 bytes total: 4398046511104 bytes progression: 0.00 % busy: true
transferred: 157286400 bytes remaining: 4397889224704 bytes total: 4398046511104 bytes progression: 0.00 % busy: true
transferred: 199229440 bytes remaining: 4397847281664 bytes total: 4398046511104 bytes progression: 0.00 % busy: true
transferred: 251658240 bytes remaining: 4397794852864 bytes total: 4398046511104 bytes progression: 0.01 % busy: true
transferred: 293601280 bytes remaining: 4397752909824 bytes total: 4398046511104 bytes progression: 0.01 % busy: true
...
transferred: 4397696286720 bytes remaining: 350224384 bytes total: 4398046511104 bytes progression: 99.99 % busy: true
transferred: 4397811630080 bytes remaining: 234881024 bytes total: 4398046511104 bytes progression: 99.99 % busy: true
transferred: 4397906001920 bytes remaining: 140509184 bytes total: 4398046511104 bytes progression: 100.00 % busy: true
transferred: 4398000373760 bytes remaining: 46137344 bytes total: 4398046511104 bytes progression: 100.00 % busy: true
transferred: 4398046511104 bytes remaining: 0 bytes total: 4398046511104 bytes progression: 100.00 % busy: false
Removing all snapshots: 100% complete...done.
image has watchers - not removing
Removing image: 0% complete...failed.
rbd: error: image still has watchers
TASK OK
The old file (luckily) still there:
Code:
# rbd -p pve ls
vm-410-disk-4
# rbd -p file ls
...
vm-410-disk-1
vm-410-disk-2
vm-410-disk-3
vm-410-disk-4
...
The VM-Config show the new storage place - but this is wrong!
Code:
cat /etc/pve/qemu-server/410.conf
bootdisk: virtio0
cores: 4
cpu: kvm64
ide2: none,media=cdrom
memory: 32768
name: prod-srv
net0: virtio=46:E3:35:AA:93:07,bridge=vmbr20
ostype: l26
sockets: 2
virtio0: d_sas_r0:vm-410-disk-1,cache=writethrough,size=32G
virtio1: d_sas_r0:vm-410-disk-2,cache=writethrough,backup=no,size=100G
virtio2: d_sas_r0:vm-410-disk-3,cache=writethrough,backup=no,size=50G
virtio3: ceph_file:vm-410-disk-1,cache=writethrough,backup=no,size=4096G
virtio4: ceph_file:vm-410-disk-2,cache=writethrough,backup=no,size=4096G
virtio5: ceph_file:vm-410-disk-3,cache=writethrough,backup=no,size=4096G
virtio6: d_sas_r0:vm-410-disk-4,cache=writethrough,size=170G
virtio7: ceph_file:vm-410-disk-4,cache=writethrough,backup=no,size=4096G
because the VM still use the old disk (I put some breaks in the ps output for better reading)
Code:
root 10062 86.8 21.3 40884180 28131620 ? Sl Feb12 7340:51 /usr/bin/kvm -id 410 \
-chardev socket,id=qmp,path=/var/run/qemu-server/410.qmp,server,nowait \
-mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/410.vnc,x509,password \
-pidfile /var/run/qemu-server/410.pid -daemonize -name prod-srv -smp sockets=2,cores=4 \
-nodefaults -boot menu=on -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k de -m 32768 \
-device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f \
-device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1\
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \
-iscsi initiator-name=iqn.1993-08.org.debian:01:15a1d996f6d3 \
-drive file=/dev/d_sas_r0/vm-410-disk-2,if=none,id=drive-virtio1,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb \
-drive file=rbd:pve/vm-410-disk-4:mon_host=172.20.2.64 172.20.2.65 172.20.2.62:id=pve:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ceph_pve.keyring,if=none,id=drive-virtio7,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio7,id=virtio7,bus=pci.2,addr=0x2 \
-drive file=/dev/d_sas_r0/vm-410-disk-3,if=none,id=drive-virtio2,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio2,id=virtio2,bus=pci.0,addr=0xc \
-drive file=/dev/d_sas_r0/vm-410-disk-4,if=none,id=drive-virtio6,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio6,id=virtio6,bus=pci.2,addr=0x1 \
-drive file=rbd:file/vm-410-disk-1:mon_host=172.20.2.64 172.20.2.65 172.20.2.62:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ceph_file.keyring,if=none,id=drive-virtio3,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio3,id=virtio3,bus=pci.0,addr=0xd \
-drive if=none,id=drive-ide2,media=cdrom,aio=native \
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 \
-drive file=/dev/d_sas_r0/vm-410-disk-1,if=none,id=drive-virtio0,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=105 \
-drive file=rbd:file/vm-410-disk-2:mon_host=172.20.2.64 172.20.2.65 172.20.2.62:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ceph_file.keyring,if=none,id=drive-virtio4,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio4,id=virtio4,bus=pci.0,addr=0xe \
-drive file=rbd:pve/vm-410-disk-3:mon_host=172.20.2.64 172.20.2.65 172.20.2.62:id=pve:auth_supported=cephx:keyring=/etc/pve/priv/ceph/ceph_pve.keyring,if=none,id=drive-virtio5,cache=writethrough,aio=native,detect-zeroes=on \
-device virtio-blk-pci,drive=drive-virtio5,id=virtio5,bus=pci.0,addr=0xf \
-netdev type=tap,id=net0,ifname=tap410i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on \
-device virtio-net-pci,mac=46:E3:35:AA:93:07,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
What is the best way to fix this issue? I assume stop the VM, edit the conf for vm-410-disk-4 back to ceph_pve and start the VM again?! After that try again the move?
Udo
↧
February 18, 2015, 5:15 am
hi,
VM is not booting after conversion of .vmdk to .qcow2, pls can anyone help me.
what i did,
step 1: i converted my pc (c-drive:250gb) to .vmdk file using vmware standalone converter
step 2: created one vm (id-100) with 250 disk size using localdisk (qcow2) on proxmox
step 3: uploaded .vmdk file to proxmox using winscp app. into location /var/lib/vz/images/vmid(100)
step 4: and i deleted default .qcow2 file and i converted .vmdk into .qcow2 using qemu-img tool
After succesfull conversion i turned on the vm but the vm is not booting. it is strucked at "BOOTING FROM HDD....."
Thanks in advance!!!!!!
↧
February 18, 2015, 10:12 am
So I have setup proxmox 2 node HA in the process of setting the cluster.conf file for fencing and somethign has gone wrong. node1 no longer sees node2 (it's not listed in the web interface.
Node2 see node1 in the interface, but shows as offline.
the cluster.conf file is as follows:
Code:
<?xml version="1.0"?>
<cluster name="proxcluster" config_version="4">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
</cman>
<fencedevices>
<fencedevice agent="fence_ilo" ipaddr="192.168.1.3" login="proxmox" name="prox00" passwd="password"/>
<fencedevice agent="fence_ilo" ipaddr="192.168.1.4" login="proxmox" name="prox01" passwd="password"/>
</fencedevices>
<clusternodes>
<clusternode name=prox00 nodeid="1" votes="1">
<fence>
<method name="1">
<device name=prox00 action="reboot"/>
</method>
</fence>
</clusternode>
<clusternode name=prox01 nodeid="2" votes="1">
<fence>
<method name="1">
<device name=prox01 action="reboot"/>
</method>
</fence>
</clusternode>
</clusternodes>
</cluster>
I can't seem to find the problem so I decided to try remove prox00 from prox01 by running
Code:
pvecm delnode prox00
When I do that I get the following errors so I am assuming there is some error in the cluster.conf file which I am missing.
Code:
/etc/pve/cluster.conf:13: parser error : AttValue: " or ' expected
<clusternode name=prox00 nodeid="1" votes="1">
^
/etc/pve/cluster.conf:13: parser error : attributes construct error
<clusternode name=prox00 nodeid="1" votes="1">
^
/etc/pve/cluster.conf:13: parser error : Couldn't find end of Start Tag clusternode line 13
<clusternode name=prox00 nodeid="1" votes="1">
^
/etc/pve/cluster.conf:16: parser error : AttValue: " or ' expected
<device name=prox00 action="reboot">
^
/etc/pve/cluster.conf:16: parser error : attributes construct error
<device name=prox00 action="reboot">
^
/etc/pve/cluster.conf:16: parser error : Couldn't find end of Start Tag device line 16
<device name=prox00 action="reboot">
^
/etc/pve/cluster.conf:19: parser error : Opening and ending tag mismatch: clusternodes line 12 and clusternode
</clusternode>
^
/etc/pve/cluster.conf:21: parser error : AttValue: " or ' expected
<clusternode name=prox01 nodeid="2" votes="1">
^
/etc/pve/cluster.conf:21: parser error : attributes construct error
<clusternode name=prox01 nodeid="2" votes="1">
^
/etc/pve/cluster.conf:21: parser error : Couldn't find end of Start Tag clusternode line 21
<clusternode name=prox01 nodeid="2" votes="1">
^
/etc/pve/cluster.conf:24: parser error : AttValue: " or ' expected
<device name=prox01 action="reboot">
^
/etc/pve/cluster.conf:24: parser error : attributes construct error
<device name=prox01 action="reboot">
^
/etc/pve/cluster.conf:24: parser error : Couldn't find end of Start Tag device line 24
<device name=prox01 action="reboot">
^
/etc/pve/cluster.conf:27: parser error : expected '>'
</clusternode>
^
/etc/pve/cluster.conf:27: parser error : Extra content at the end of the document
</clusternode>
^
ccs_tool: Error: unable to parse requested configuration file
Can anyone see where I have gone wrong?
↧