Channel: Proxmox Support Forum
Viewing all 171654 articles

Still having backup issues with crashing VMs

Hi,

We are still seeing backup issues with Proxmox, and they only occur on Windows VMs.
The backup completes 100% with no errors, but inside the Windows VM we get an Atapi Event ID 9 ("The device, \Device\Ide\IdePort0, did not respond within the timeout period.").
Windows then either becomes unresponsive, or it keeps running but the database crashes.

It's a Windows 2003 R2 server that doesn't use Virtio drivers. Any clues?


Here is my VM config:

root@proxmox:~# cat /etc/pve/local/qemu-server/101.conf
bootdisk: ide0
cores: 1
cpu: kvm64
ide0: local:101/vm-101-disk-1.raw,format=raw,size=15G
ide2: cdrom,media=cdrom
memory: 512
name: w2003-dev
net0: e1000=6A:E5:5A:CC:E6:ED,bridge=vmbr0
ostype: wxp
sockets: 1
vga: std
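A commonly suggested mitigation for Event ID 9 ATAPI timeouts during backup snapshots is to raise the guest's disk timeout in the Windows registry, so short I/O stalls during the backup don't trip the timeout. This is a hedged suggestion, not a confirmed fix for this setup; the 60-second value is just an example (the default is 10 seconds).

```
REM Run inside the Windows 2003 guest, then reboot.
REM Raises the disk I/O timeout from the 10s default to 60s (example value).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 60 /f
```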

And my pveversion:

root@proxmox:~# pveversion -v
proxmox-ve-2.6.32: 3.2-124 (running kernel: 2.6.32-28-pve)
pve-manager: 3.2-2 (running version: 3.2-2/82599a65)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-6
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Win2k8r2 dies when "Microsoft Windows 7/2008" profile is set

When the "Microsoft Windows 7/2008" profile is selected on the OS tab during VM creation, qemu is invoked with the options -cpu kvm64,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep -global kvm-pit.lost_tick_policy=discard, and when the VM is started Windows dies with BSOD code 0x5d (unsupported processor). But when the CPU type is set to `host' it works as it should. Maybe this problem should be listed in the wiki.
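As the post says, setting the CPU type to `host` works around the BSOD; for reference, that can also be done from the command line (VMID 100 is a placeholder):

```shell
# Switch the VM's CPU type to 'host' so kvm64 plus the Hyper-V flags is not used
qm set 100 --cpu host
```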

Backup freeze after nfs server shutdown

Hi,

I have a strange problem: I tried to stop a backup and it failed. In my log I see:

kernel: ct0 nfs: server 10.222.217.166 not responding, still trying

If I type ps aux | grep vzdump I can find the PID, but if I try, as root, to kill that PID, it doesn't work.

The result is that the whole GUI is unresponsive. I do not want to reboot; there must be another way! All my CTs and VMs are running.

I'm running 3.2.

PS: we have 2 separate nodes with a subscription (the basic one)!
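For what it's worth, a process blocked on I/O against a dead hard-mounted NFS server sits in uninterruptible sleep ('D' state) and cannot be killed; a forced/lazy unmount of the dead mount is the usual way out. A hedged sketch (the mount point is a placeholder for the actual backup storage path):

```shell
# Confirm vzdump is stuck in 'D' (uninterruptible sleep) state
ps -o pid,stat,cmd -C vzdump

# Force/lazy-unmount the unreachable NFS share so the blocked I/O can fail
umount -f -l /mnt/pve/backup-nfs
```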

very slow IO, almost hangs time to time

Hi there, I am experiencing occasional system hangs/freeze-ups with my new server, which is:

SUPERMICRO 6017R-WRF
E5-2670
2 x Intel530 480gb SSD
2 x WD 3TB SATA3
On-board Controller

The HDDs are set to JBOD.

I have Proxmox installed on one of the WD 3TB SATA3 drives, with the other hard drives connected alongside.

It seemed fine at a glance, but upon careful inspection it appears that the Proxmox host occasionally freezes up, for maybe 5-10 minutes every hour or so. This happens even when the server has ZERO load: no outside processes and no cronjobs running.

During this time, the OpenVZ containers created inside (on the SSDs) are fine and show no hangups.

At first I thought the hard drive was defective, so I switched the host to my other WD 3TB, but I still experience the same problem.
Moreover, the VMs created on the WD 3TB also experience the same hangups/slowness.

Here is pveperf done from the host:

root@server1:~# pveperf
CPU BOGOMIPS: 166389.92
REGEX/SECOND: 558502
HD SIZE: 94.49 GB (/dev/mapper/pve-root)
BUFFERED READS: 0.17 MB/sec
AVERAGE SEEK TIME: 2540.09 ms
FSYNCS/SECOND: 10.33
DNS EXT: 95.44 ms
DNS INT: 187.68 ms

Which is VERY SLOW!

I even replaced the WD 3TB hard drives thinking they were defective, but even with new drives they are still slow!

Anyone have any ideas why this appears to be the case?
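With BUFFERED READS at 0.17 MB/s and 2.5 s average seek times, it may be worth ruling out a failing drive or SATA link before suspecting Proxmox itself. A quick hedged check (replace sdX with the actual device):

```shell
# Look for reallocated/pending sectors and CRC (cabling) errors
smartctl -a /dev/sdX | grep -iE 'reallocated|pending|crc'

# Raw sequential read straight off the disk, bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct
```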

Move vm from venet to veth

Hi,

I recently changed hosting providers.
My previous one allowed venet, but the new one requires veth (with IP failover).

I have tried to move my VM to the new server; I can restore the VM and change the network card to a veth one, but the VM doesn't seem to have any NIC anymore:
when I ask for an "ifconfig" with the command
Code:

etc/vz/conf# vzctl exec 102 ifconfig
, I only see the lo interface.

I have followed this tutorial, which actually works with new CTs: http://help.ovh.ie/BridgeClient

Do I have to add the network card manually? How?
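For reference, one hedged way to do it by hand, assuming CT 102 and a host bridge named vmbr0 as in the OVH tutorial:

```shell
# Add a veth device to the container and persist it in the config
vzctl set 102 --netif_add eth0 --save

# (Re)starting the CT creates the host-side interface (veth102.0 by default);
# attach it to the bridge
vzctl restart 102
brctl addif vmbr0 veth102.0
```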

Many thanks for your help.


Michaël


NFS Storage with Kerberos support ?

Hello,

Is it possible to mount NFS storage with Kerberos (gss/krb5) authentication from the web interface?
We have a NAS with a Kerberos environment configured and now want to make the NFS storage in Proxmox more secure.
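The GUI has no Kerberos option, but the NFS storage definition accepts raw mount options, so something along these lines in /etc/pve/storage.cfg might work. This is an untested sketch; the storage name, server address, and export path are placeholders:

```
nfs: securenas
        path /mnt/pve/securenas
        server 10.0.0.5
        export /export/pve
        options vers=3,sec=krb5
        content images,backup
```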

Best regards,
Widomski Łukasz

2-node HA DRBD Concurrent local writes

Hello,

Once in a while I was getting this warning in dmesg (kern.log):
block drbd0: kvm[433746] Concurrent local write detected! [DISCARD L] new: 623356039s +8192; pending: 623356039s +8192

So I decided to study it more, and I was able to trigger the message whenever I ran the sqlio benchmark tool in a Windows 2003 guest.
I run a 2-node HA cluster with DRBD as "shared" storage; however, this message is about subsequent local writes (not from the other node) to the same location (or overlapping locations, although in my case they were always identical: new and pending).
I run 6 Windows 2003 R2 64-bit guests with virtio disk drivers version 0.1-49 (I get a BSOD if I try to run any newer version). So I am not sure whether this is a driver problem or something else, but I was able to get rid of these warnings after I activated "writethrough" caching for the guests' virtual disks (in the Proxmox GUI).
The guest disks are raw LVM on top of DRBD (not using CLVM). I have played a bit with all the settings (https://pve.proxmox.com/wiki/Perform...aks#Disk_Cache), but writethrough and writeback were the only ones that prevented this behaviour. Maybe with this caching the I/O requests get properly queued. As far as I understand, this message means that a subsequent request for the same location arrives before the first one has completed.

I'll leave it like this for now, but if anyone can clarify what's going on it would be nice to know :)

Debian 7 KVM template

Hi guys! I have a VM with Debian 7 installed and I want to create a template from it. How can I remove all user accounts and other machine-specific settings (SSH keys, MAC addresses)? Maybe there is a script or package for this?
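There's no dedicated package for Debian 7 that I know of, but a short cleanup run inside the VM just before converting it to a template covers the usual items. A hedged sketch (paths are the standard Debian locations; adjust to taste):

```shell
# Remove SSH host keys; on the clone's first boot regenerate them with:
#   dpkg-reconfigure openssh-server
rm -f /etc/ssh/ssh_host_*

# Forget the recorded MAC address so the clone's new NIC becomes eth0 again
rm -f /etc/udev/rules.d/70-persistent-net.rules

# Drop shell history
rm -f /root/.bash_history
```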

Poor performance with 10Gbe and open vSwitch

Hi,

I'm building a new infrastructure for our data center, composed of three nodes, each equipped with one dual-port 10GbE NIC. First I tested the network with native Linux networking and got 9.5 Gbit/s in the iperf bandwidth test; running the same test with OVS I get 3.5 Gbit/s at most, and my Ceph performance is now affected by this issue.
I use an OVS bridge connected to my physical switch via an OVS bond, and the management network for all three hosts is connected to an OVS IntPort with a tagged VLAN.

increasing hdd size for guest

This is one of the features that is important when using a VM.

I created a standard CentOS guest with a 32GB IDE disk.

Now I want to increase the volume size. I shut down the guest and tried the resize function in the GUI. After that I can see the increased size in the Proxmox GUI.

Then I started the server and it boots fine. I used fdisk:

Code:

[root@localhost ~]# fdisk -l

Disk /dev/sda: 39.7 GB, 39728447488 bytes
255 heads, 63 sectors/track, 4830 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ab504


  Device Boot      Start        End      Blocks  Id  System
/dev/sda1  *          1          64      512000  83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        4178    33041408  8e  Linux LVM


Disk /dev/mapper/VolGroup-lv_root: 32.8 GB, 32791068672 bytes
255 heads, 63 sectors/track, 3986 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000




Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

I can see that the size of /dev/sda has increased. However, I tried using lvextend, lvresize, vgresize and so on to extend the LVM without success.

Code:

[root@localhost ~]# vgdisplay
  --- Volume group ---
  VG Name              VolGroup
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  6
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                2
  Open LV              2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size              31.51 GiB
  PE Size              4.00 MiB
  Total PE              8066
  Alloc PE / Size      8066 / 31.51 GiB
  Free  PE / Size      0 / 0
  VG UUID              1q01KM-sIal-wXTi-YT4A-Esva-iU3M-nCCSUv

See, it shows no free space available.

Can anyone help? I have been googling around but it is confusing. A lot of tutorials describe using qemu-img to convert the VM to raw, increasing the size, dd'ing into the new storage, and converting back to qcow. If that is the case, what is the resize function in the GUI for?

I have also tried the command-line method with qemu-img and dd. However, once I have created the new image, how do I add it back to the VM?
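The GUI resize only grows the virtual disk; the partition, the LVM physical volume, the logical volume and the filesystem inside the guest each have to be grown afterwards. A hedged sketch of the usual sequence for this exact layout (VG and LV names taken from the vgdisplay output above):

```shell
# 1) Grow /dev/sda2 to the end of the disk: in fdisk, delete partition 2 and
#    recreate it with the SAME start and type 8e, then write and reboot
fdisk /dev/sda

# 2) Grow the LVM physical volume into the enlarged partition
pvresize /dev/sda2

# 3) Give all the new free extents to the root LV and grow the filesystem
lvextend -l +100%FREE /dev/VolGroup/lv_root
resize2fs /dev/VolGroup/lv_root
```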

Openvz maximum length of vmid

Hello support forum,

I have the following issue. When I create a container with "pvectl create 1049273580 ..." everything works fine.
When I add an extra network cards with:
vzctl set 1049273580 --netif_add eth0 --save
vzctl set 1049273580 --netif_add eth1 --save
vzctl set 1049273580 --netif_add eth2 --save
it adds the network cards without an error message (echo $? is 0), but ...

When I look in the container config file, the .X (VLAN) suffix isn't added to the host network interface name (output adjusted to indicate the "error"):

NETIF=
ifname=eth0,mac=00:18:51:F6:61:EA,host_ifname=veth1049273580.,host_mac=00:18:51:C5:C8:29; IT SHOULD BE host_ifname=veth1049273580.0
ifname=eth1,mac=00:18:51:31:70:E0,host_ifname=veth1049273580.,host_mac=00:18:51:06:00:08; IT SHOULD BE host_ifname=veth1049273580.1
ifname=eth2,mac=00:18:51:08:CE:F5,host_ifname=veth1049273580.,host_mac=00:18:51:39:8E:03" IT SHOULD BE host_ifname=veth1049273580.2

Is there a maximum on the host_ifname name length of 16 characters?
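Yes: Linux limits interface names to IFNAMSIZ (16 bytes including the terminating NUL), i.e. at most 15 visible characters. A quick check shows why a 10-digit VMID breaks the veth naming scheme:

```shell
# "veth" + 10-digit CTID + "." + index = 16 characters, one over the
# 15-character maximum, so the trailing index gets dropped
name="veth1049273580.0"
echo "${#name}"   # prints 16
```

The truncated name "veth1049273580." is exactly 15 characters, which is why it is accepted without the index.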

Kind regards,

Rudi

Spice client on windows cannot connect

When I try to connect from my Win7 machine, I get the error:

"Unable to connect to the graphic server"

After a little searching I tried all the hints I found; nothing helped.

I ran remote-viewer with the --debug option to see the error messages.
I did this within one second(!) of downloading the config file.

What I can see is:

((null):8896): Spice-Warning **: ../../../spice-common/common/ssl_verify.c:428:openssl_verify:
Error in certificate chain verification: certificate has expired
(num=10:depth1:/CN=Proxmox Virtual Environment/OU=5554741f7f9da7f9f644ea30d27c181a/O=PVE Cluster Manager CA)
(remote-viewer.exe:8896): GSpice-WARNING **: main-1:0: SSL_connect: error:00000001:
lib(0):func(0):reason(1)

Server and client both have the correct date and time, and both are on German summer time (CEST).
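The log says the node's certificate itself has expired, so the client's clock isn't the issue. On the Proxmox host, the certificate's validity can be checked and the node certificates re-issued from the cluster CA (a hedged sketch; verify the expiry first):

```shell
# Show the current node certificate's validity period
openssl x509 -in /etc/pve/local/pve-ssl.pem -noout -dates

# Re-issue the node certificate from the cluster CA and restart the proxy
pvecm updatecerts --force
service pveproxy restart
```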

Any help?

Thanks
Birger

pvesh error - API issue?

Hi,
after I could not connect to one of the servers via the API, I logged in to the server and ran
# pvesh
pve:/> help
and got the error:
Use of uninitialized value in string comparison (cmp) at /usr/bin/pvesh line 326.

this repeated for about 40 lines with the same message, and then:

Use of uninitialized value $path in substitution (s///) at /usr/bin/pvesh line 333.
Use of uninitialized value $path in concatenation (.) or string at /usr/bin/pvesh line 337.
create -storage <string> -type <string> [OPTIONS]
Use of uninitialized value $path in substitution (s///) at /usr/bin/pvesh line 333.
Use of uninitialized value $path in concatenation (.) or string at /usr/bin/pvesh line 337.
create -poolid <string> [OPTIONS]
Use of uninitialized value $path in substitution (s///) at /usr/bin/pvesh line 333.
Use of uninitialized value $path in concatenation (.) or string at /usr/bin/pvesh line 337.
get

Anyone know what to do?

best regards,
Star Network.

Proxmox network help

Hi All

Hoping someone can help.

Hetzner dedi server with proxmox 3.2-4

proxmox works great so far

A CentOS 6.5 guest is installed, but networking is not working.

I have a subnet which is different from the Proxmox server's. I've followed every guide known to man but just can't get it working.

Any pointers?
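At Hetzner the switch usually accepts only the host's own MAC address, so for an extra subnet a routed setup is the common answer rather than plain bridging. A hedged sketch of /etc/network/interfaces on the host; all addresses below are placeholders for your own main IP and subnet:

```
# Routed bridge: the host answers on its own MAC and routes the subnet
# to the guests (remember: echo 1 > /proc/sys/net/ipv4/ip_forward)
auto vmbr0
iface vmbr0 inet static
        address   <host main IP>
        netmask   255.255.255.255
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        up ip route add <your subnet>/29 dev vmbr0
```

Inside the guest, use an IP from the subnet with the host's main IP as gateway.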

Thanks,

Adam

Bug in iPXE while booting DHCP

Hi all,

For a few weeks now, I have had a problem booting VMs via PXE, using DHCP.

DHCP does not work if network card type is Virtio or Intel e1000.

But DHCP is OK if type is RTL8139.

In the first case, I can see the DHCPDISCOVER (from iPXE) and the DHCPOFFER (from my DHCP server) on the VM's interface, but nothing else, and it ends with a connection timeout.

All is working fine if I set network card to RTL8139.

Another strange thing: if I run tcpdump on the host while the VM is looking for DHCP, it ends with a connection timeout even if the type is RTL8139! But it works if no tcpdump is running.

Have you encountered this? Is it a known bug?

Thank you.

After upgrade version 1.9 to 3.2(3.1), Windows 2008 R2 performance issue

Hi all!

For a long time, version 1.9 performed satisfactorily, virtualising around 10 VMs (some Linux Ubuntu 10/12, the rest Windows XP, 2003, 2008 and 2008 R2).
I ran into difficulties getting Windows 2012 R2 running on Proxmox 1.9 and learned that the easiest way out was to upgrade to 3.x.
I also learned on this forum that 3.2 is the better option (no problems with backup, as there were with 3.1).

The upgrade path:
- install Proxmox 3.2 from the latest ISO onto a new, clean SSD (old disk removed from the system)
- recreate the config files in /etc/pve/nodes/pro/qemu-server from the old conf files
- set up all vmbr's as before
- configure the same backup location

Starting all the VMs (Linux and Windows) was successful. The only problems were with Windows 2008 R2 Enterprise.
3 of them did some driver renewal (COM, CPU, other) and then started normally.
1 of them didn't start at all. After researching I found two causes:
args: -no-hpet -no-kvm-pit-reinjection shouldn't be used,
and
the OS type couldn't be win7/2008r2 or 2008; it only started with xp/2003 or Linux 3.X/2.6 Kernel.

While running, those 4 VMs (Win 2008 R2) showed a significant performance regression. All have the terminal server role:
slow window switching, long file-open times.
=============
Currently, here is what I have. I used all the performance tricks:
tablet: 0
scsihw: virtio-scsi-pci
args: -no-hpet -no-kvm-pit-reinjection (tried with and without)
tried E1000 and IDE
Then I reinstalled Proxmox 3.1 and tested everything again.
I did almost all tests with different Virtio driver combinations.

I'm testing with the DPC Latency Checker. It shows exactly when the system's response is not acceptable.
1) The chart of one Windows 2008 R2 Standard (domain controller) was not impacted by the upgrade:
2008r2good.png
Using virtio 0.1-65. No high load. No change after restarting the VM.
2) Charts of the problematic Windows 2008 R2 Enterprise (terminal server, virtio).
Just after start:
2008r2newstart.png
During no load:
new2008r2working.png
Under regular load:
2008r2working.png
Same with E1000 under load:
2008r2e1000working.png
System response is very poor under user activity. The VM shows 40-70% CPU; Proxmox shows 60-100% CPU usage.
All the other VMs (Windows XP, 2003, 2008) have nice charts and good performance, but none of them has the terminal role.

pveversion:
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2


I haven't gone back to 1.9, because that would require upgrading all the 2008 R2 machines to 2012 R2; a rollback is left for the case where there is no other way to solve this issue.
It seems my issue is related to kernel versions, or maybe some BIOS settings?

Has anybody faced such difficulties?

Cannot connect to centos 6 vm with spice (no spice port (500))

On 3.2, I can connect via VNC, but when connecting via SPICE I get "no spice port (500)" in the web GUI. I can use SPICE to connect to the Proxmox host itself.
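"no spice port" usually means the VM wasn't started with a SPICE display. It may be worth checking that the display type is set to SPICE (qxl) and that the VM has been fully stopped and started since the change (VMID 100 is a placeholder):

```shell
# Set the display to SPICE/qxl, then stop and start the VM
qm set 100 --vga qxl
```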

Openvz creation very slow on nfs

I saw a few threads about this, but none of them had a solution. Not sure if I was supposed to reply to those old threads, so I started a new one.
I just installed 2 Proxmox nodes that store everything on a NAS (Nexenta) via NFS. After finally fixing the permission problems, I am able to create and install KVMs with no issues. But when it comes to containers, creation takes forever. The network connection between the Proxmox nodes and the NAS is 10G fibre, so that shouldn't be the issue.
I checked those 2 threads but no solution is provided:
http://forum.proxmox.com/threads/120...r-on-NFS-share
http://forum.proxmox.com/threads/135...tion-very-slow
If anyone can help me it would be very appreciated.
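Container creation extracts a template of tens of thousands of small files, so it tends to be bound by NFS sync latency rather than bandwidth. Running pveperf against the NFS mount point itself shows whether that's the bottleneck (the storage name below is a placeholder):

```shell
# FSYNCS/SECOND here is the number that matters for small-file extraction
pveperf /mnt/pve/nas-storage
```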

Backup VPS - OPENVZ , KVM not working - ERROR

Can someone tell me why backing up my virtual machines is not working?

Every time I run a backup I get this error:

INFO: starting new backup job: vzdump 100 --remove 0 --mode suspend --compress gzip --storage local --node Proxmox-VE
INFO: Starting Backup of VM 100 (openvz)
INFO: CTID 100 exist mounted running
INFO: status = running
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /var/lib/vz/private/100/ to /var/lib/vz/dump/vzdump-openvz-100-2014_06_19-12_22_26.tmp
INFO: rsync: read errors mapping "/var/lib/vz/private/100/home/extremt1/imap/extremtorent.org/admin/Maildir/new/1402661539.H206237P28956.directadmin.extra-cheap.com": Input/output error (5)
INFO: rsync: read errors mapping "/var/lib/vz/private/100/home/extremt1/imap/extremtorent.org/admin/Maildir/new/1402661539.H206237P28956.directadmin.extra-cheap.com": Input/output error (5)
INFO: ERROR: home/extremt1/imap/extremtorent.org/admin/Maildir/new/1402661539.H206237P28956.directadmin.extra-cheap.com failed verification -- update retained.
INFO: Number of files: 77453
INFO: Number of files transferred: 67776
INFO: Total file size: 4270586745 bytes
INFO: Total transferred file size: 4253739317 bytes
INFO: Literal data: 4253416049 bytes
INFO: Matched data: 323268 bytes
INFO: File list size: 1912179
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 4258830616
INFO: Total bytes received: 1433531
INFO: sent 4258830616 bytes received 1433531 bytes 16480712.37 bytes/sec
INFO: total size is 4270586745 speedup is 1.00
INFO: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
ERROR: Backup of VM 100 failed - command 'rsync --stats -x --numeric-ids -aH --delete --no-whole-file --inplace '/var/lib/vz/private/100/' '/var/lib/vz/dump/vzdump-openvz-100-2014_06_19-12_22_26.tmp'' failed: exit code 23
INFO: Backup job finished with errors
TASK ERROR: job errors


I really do not know why.




I have plenty of space:

root@Proxmox-VE ~ # df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 340K 3.2G 1% /run
/dev/disk/by-uuid/51ad04af-dae5-474a-a64e-babeaaf1d6b3 5.3T 81G 5.0T 2% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 22M 32G 1% /run/shm
/dev/md1 1008M 47M 910M 5% /boot
/dev/fuse 30M 16K 30M 1% /etc/pve
/var/lib/vz/private/100 1.0T 4.2G 1020G 1% /var/lib/vz/root/100
none 2.0G 4.0K 2.0G 1% /var/lib/vz/root/100/dev
none 2.0G 0 2.0G 0% /var/lib/vz/root/100/dev/shm
/var/lib/vz/private/101 1.0T 7.9G 1017G 1% /var/lib/vz/root/101
none 4.0G 4.0K 4.0G 1% /var/lib/vz/root/101/dev
none 4.0G 0 4.0G 0% /var/lib/vz/root/101/dev/shm
/var/lib/vz/private/103 500G 4.5G 496G 1% /var/lib/vz/root/103
none 2.0G 4.0K 2.0G 1% /var/lib/vz/root/103/dev
none 2.0G 0 2.0G 0% /var/lib/vz/root/103/dev/shm



root@Proxmox-VE ~ # pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1
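For what it's worth, the backup isn't failing for lack of space: rsync reports "Input/output error (5)" while reading one specific Maildir file, which points at filesystem or disk corruption rather than vzdump. A hedged first check:

```shell
# Try to read the offending file directly; an I/O error here confirms
# it's the filesystem/disk, not the backup tool
dd if='/var/lib/vz/private/100/home/extremt1/imap/extremtorent.org/admin/Maildir/new/1402661539.H206237P28956.directadmin.extra-cheap.com' of=/dev/null bs=4k

# Check the kernel log for disk errors, then plan a filesystem check
# of the underlying device (containers must be offline for that)
dmesg | tail -n 50
```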






CAN SOMEONE PLEASE TELL ME, I NEED HELP



