Channel: Proxmox Support Forum

Problem upgrading from VE 2.1 to VE 2.3

Hi guys, new to the forum.

Recently I upgraded my Proxmox host from VE 2.1 to the most recent release, VE 2.3-13/7946f1f1. After doing so I noticed that the I/O performance of all my VMs went south. I'm using an NFS share on my FreeNAS box for all VM storage and never had performance issues before. At first I thought my gigabit switch or a bad NIC was to blame, but I have since ruled that out. Can anyone shed some light on this issue? Maybe I missed a step during the move to the latest build.
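
For reference, the NFS mount options on the host can be checked with something like this (just standard tools, nothing Proxmox-specific):

Code:

grep nfs /proc/mounts
nfsstat -m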

Any help is greatly appreciated :cool:

Proxmox 2.3 - df in ovz containers wrong

g'day,

just installed a Proxmox 2.3 node and moved an OpenVZ container from a CentOS-based node to the new Proxmox node. I know about 20G of the 30G allocated are used in the VM, yet df inside the container only shows 600M in use - how come? The VM is a cPanel CentOS VM, if that helps. A second VM (also with cPanel) has the same issue: it reports 2G used out of 200G, when I know over 100G are already used (df on the node is fine).
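
For example, roughly what I am comparing (the CTID and the default private path are just examples):

Code:

vzctl exec 101 df -h               # inside the CT: only ~600M shown as used
du -sh /var/lib/vz/private/101     # on the node: the real ~20G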

any ideas?

cheers,
cw

VM drive cache mode and performance

Hello everybody,

Could anybody explain why cache=directsync and cache=none (which is write-back with the host cache disabled) give exactly the same results? Shouldn't writes be faster with cache=none?

------------------
Hardware:
- Dell PE 710
- PERC H700 Integrated
- RAID 0+1
- Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
- BBU is in place

Proxmox:
- Proxmox VE 2.3 (under some daily load during the tests)

Virtual machine:
- Windows 2008 R2 SP1
- 3 drives (virtio HDDs, 10 GB raw images on ext3, i.e. on local storage; the relevant config lines are shown below)
-- drive E: cache=directsync
-- drive I: cache=none
-- drive F: cache=writethrough
------------------
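
For reference, the disk lines in the VM config look roughly like this (the VMID and volume names here are assumptions, not copied from my config):

Code:

virtio0: local:104/vm-104-disk-1.raw,cache=directsync
virtio1: local:104/vm-104-disk-2.raw,cache=none
virtio2: local:104/vm-104-disk-3.raw,cache=writethrough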

Regards,
Stanislav

My two cents on Proxmox

Hi everybody,

I'd like to say that you have a great product, with many great features that make it a good choice for building a corporate virtualization environment. But I have to say that the poor and sparse documentation (outdated wiki posts, no man pages, --help output almost nonexistent) makes it almost impossible to choose for a production environment. If you can take some time to make the documentation more robust and up to date with the releases, you'll make this platform a real choice.
I'm afraid I have to leave for oVirt because of this: I installed my nodes with Proxmox and planned to buy a license for support, but every step is a fight to find out how to move forward.
Keep up this great project, and don't lose sight of the documentation, which is almost as important as the software itself.
Regards,

Ceph Performance inside Client

Hi,
I have a 3-node Ceph cluster with SSD journals and a 10GbE connection.
Inside a VM (wheezy) I get > 100MB/s throughput with dd - but if I cp (or dd) a big file on the same RBD disk, the performance drops to 12MB/s.
The network performance between the host and the Ceph nodes is 9.7 Gbit/s (iperf).
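
For illustration, the kind of commands run inside the VM (file names are just examples):

Code:

dd if=/dev/zero of=/root/ddtest bs=1M count=4096 conv=fdatasync   # sequential write test, > 100MB/s
dd if=/root/bigfile of=/root/bigfile.copy bs=1M                   # copy on the same rbd disk, drops to ~12MB/s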

I have tried different cache options, different OSes, and different filesystems inside the VM...

Any hint?

Udo

Win 7 doesn't boot on VE 2.3

We are observing strange behaviour on an HP server newly installed with Proxmox 2.3-13:
After installing Windows 7 machines (the installation itself works fine), they do not boot. After the BIOS messages they stop at 'booting from harddisk'.
We tried different ISO files (which all worked fine on a Proxmox 2.2 machine).

What makes it even weirder: the VM can't even boot from CD (ISO files) until the virtual hard disk with the stalling system is removed.

We tried virtio and IDE. Same result.

Any idea anyone?

Regards, Holger

Migrating from Qemu to OpenVZ

Hi there,

maybe that's a stupid question, but has anyone ever migrated VMs (their whole HDD content) from QEMU to OpenVZ containers? Are there any recommendations?

Regards, Tim

add hostname to backup file name

Hi All,

I am trying to add the CT hostname (OpenVZ containers) to the backup files that are created for these containers. I tried using a hook file passed with --script hookfile.pl, but it had no effect. Can anyone assist? This is an annoying problem because a CTID such as 110 can be reused by a new container once the previous container is destroyed.


#!/usr/bin/perl -w

# example hook script for vzdump (--script option)
# renames the dump file so it includes the container hostname

use strict;
use File::Copy qw(move);

print "HOOK: " . join (' ', @ARGV) . "\n";

my $phase = shift;

if ($phase eq 'backup-end') {
    my $mode = shift;   # stop/suspend/snapshot
    my $vmid = shift;

    # note: "my $x = ... if ...;" (as in my first attempt) is a Perl pitfall -
    # declare unconditionally and default to an empty string instead
    my $vmtype   = $ENV{VMTYPE}   // '';   # openvz/qemu
    my $dumpdir  = $ENV{DUMPDIR}  // '';
    my $hostname = $ENV{HOSTNAME} // '';
    # tarfile is only available in phase 'backup-end'
    my $tarfile  = $ENV{TARFILE}  // '';
    # logfile is only available in phase 'log-end'
    my $logfile  = $ENV{LOGFILE}  // '';

    print "HOOK-ENV: vmtype=$vmtype;dumpdir=$dumpdir;hostname=$hostname;tarfile=$tarfile;logfile=$logfile\n";

    if ($tarfile ne '' && $hostname ne '') {
        # match on the file name only - the original pattern prefixed the path
        # with "$basedir/" (producing a double slash), so it never matched
        if ($tarfile =~ m!^(.*/vzdump-(qemu|openvz)-\d+-)(\d\d\d\d_.+)$!) {
            my $tarfile2 = $1 . $hostname . "-" . $3;
            print "HOOK: renaming $tarfile to $tarfile2\n";
            move($tarfile, $tarfile2) or warn "HOOK: rename failed: $!\n";
        }
    }
}

exit (0);
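
For reference, I invoke vzdump roughly like this (the storage name here is just an example):

Code:

vzdump 110 --mode snapshot --compress lzo --storage backups --script /root/hookfile.pl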

Plex Media

Plex Media Server Virtual Appliance.

More than one vm on same core

Hi, I have tried to find info on this but can't seem to. I would like to know if it's possible to share the same core across VMs. That is to say, I have a dual core but would like to run 3 or 4 Windows VMs. The idea is that I don't really need the processing power all of the time. If that works, could I share both cores with all of the VMs, so I get maximum power at all times? Is this possible, or will it cause problems?

migrate one node to another cluster

hi

how can I move one node from cluster A to cluster B?

regards

high load after processor failed

Hello,

I have a strange problem after the upgrade from 2.1 to 2.3.

In my setup there are two servers in a cluster. One of them works without problems, but several times a day the other server has a really high load (3-5) and all guests on this host freeze.

The only way to get the server working again is a restart.

This server has RAID 5 on SAS HDDs; after I added "elevator=deadline" to grub (see below), the problem does not show up as often.
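
For reference, roughly how that was done (standard Debian grub2, assuming the stock /etc/default/grub):

Code:

# /etc/default/grub (excerpt)
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=deadline"

# then regenerate the grub config and reboot
update-grub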

In syslog I found the following entries, and I think the server's problems started after that.

Code:

May  1 04:34:14 desokvm1 pvestatd[1934]: WARNING: closeing with write buffer at /usr/share/perl5/IO/Multiplex.pm line 913.
May  1 05:13:21 desokvm1 corosync[1565]:  [TOTEM ] A processor failed, forming new configuration.
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] CLM CONFIGURATION CHANGE
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] New Configuration:
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] #011r(0) ip(10.0.3.1)
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] #011r(0) ip(10.0.3.2)
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] Members Left:
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] Members Joined:
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] CLM CONFIGURATION CHANGE
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] New Configuration:
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] #011r(0) ip(10.0.3.1)
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] #011r(0) ip(10.0.3.2)
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] Members Left:
May  1 05:13:30 desokvm1 corosync[1565]:  [CLM  ] Members Joined:
May  1 05:13:30 desokvm1 corosync[1565]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
May  1 05:13:32 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 10
May  1 05:13:33 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 20
May  1 05:13:34 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 30
May  1 05:13:35 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 40
May  1 05:13:36 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 50
May  1 05:13:37 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 60
May  1 05:13:38 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 70
May  1 05:13:39 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 80
May  1 05:13:40 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 90
May  1 05:13:41 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 100
May  1 05:13:41 desokvm1 pmxcfs[1434]: [dcdb] notice: cpg_send_message retried 100 times
May  1 05:13:41 desokvm1 pmxcfs[1434]: [status] crit: cpg_send_message failed: 6
May  1 05:13:42 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 10
May  1 05:13:43 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 20
May  1 05:13:44 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 30
May  1 05:13:45 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 40
May  1 05:13:46 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 50
May  1 05:13:47 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 60
May  1 05:13:48 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 70
May  1 05:13:49 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 80
May  1 05:13:50 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 90
May  1 05:13:51 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 100
May  1 05:13:51 desokvm1 pmxcfs[1434]: [dcdb] notice: cpg_send_message retried 100 times
May  1 05:13:51 desokvm1 pmxcfs[1434]: [status] crit: cpg_send_message failed: 6
May  1 05:13:52 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 10
May  1 05:13:53 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 20
May  1 05:13:54 desokvm1 corosync[1565]:  [CPG  ] chosen downlist: sender r(0) ip(10.0.3.1) ; members(old:2 left:0)
May  1 05:13:54 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 30
May  1 05:13:55 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 40
May  1 05:13:56 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 50
May  1 05:13:57 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 60
May  1 05:13:58 desokvm1 pmxcfs[1434]: [status] notice: cpg_send_message retry 70
May  1 05:13:59 desokvm1 corosync[1565]:  [MAIN  ] Completed service synchronization, ready to provide service.
May  1 05:13:59 desokvm1 pmxcfs[1434]: [dcdb] notice: cpg_send_message retried 75 times

Does anyone know what I can do to solve the problem?

kvm guest won't shutdown

Hello,
I have installed a VM with Ubuntu (KVM).
It works fine, but if I try to shut down this VM from Proxmox, nothing happens.
Does anyone have an idea why it won't work?

Thanks a lot

Tobias

Restore Time Very Long 2.3-13

I have noticed a couple of times now that restoring a VM in 2.3 takes a very long time, much longer than in previous versions.

Restoring a Win2003 VM with 2 HDs (32 GB & 50 GB) took 12 hours. Has anyone else experienced this?

I AM glad for the new backup/restore system, because now restores work without crashing my entire Proxmox server. But this seems like an excessive amount of time to me...

pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-93
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-19-pve: 2.6.32-93
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1

Proxmox Installation Login Fail

Hey

I installed Proxmox on my root server with no errors. When I go to https://myip:8006 and try to log in with the username root and my SSH password, it keeps saying Login Failed. I don't know why. I can connect to the server over SSH with PuTTY and WinSCP as root, but the Proxmox web interface won't let me in. I've looked around the forum but can't find a solution for this.

Does anyone have an idea?

PM VE 2.3 browser caching issues

Just curious if anyone else has been having web GUI login problems on 2.3 due to browser caching issues. In both Chrome & Firefox (latest as of the post date), when I try to load the Proxmox web GUI I get a blank white screen with no login prompt. Clearing the cache works sporadically. Entering incognito mode in Chrome always works. Disabling the cache in developer mode in Chrome, or in about:config in Firefox, also works flawlessly.

In Firefox's about:config I first set browser.cache.check_doc_frequency to 1, which is supposed to force a check on every load, and it still did not work. Only completely disabling the cache did the trick.

vzmigrate + "DISK_QUOTA=off" FAILS

Hi everyone!

I have two identical servers with Proxmox 2.2 and GFS2 shared storage. If I migrate an OpenVZ container with "DISK_QUOTA=on" (in the vz.conf config file), it takes almost 1 hour in the "initializing remote quota" step (150 GB of small files).

Otherwise, if I disable quotas globally (DISK_QUOTA=off in vz.conf) and try to migrate, vzmigrate tries to dump the container quota file, which doesn't exist (quotas are disabled). This looks like a bug fixed in vzctl >= 3.0.24 (https://bugzilla.openvz.org/show_bug.cgi?id=1094), but for some reason it still fails.

Here are some logs and info:

Quote:

Originally Posted by Quotas_on
May 01 16:52:10 starting migration of CT 247 to node 'XXXX''
May 01 16:52:10 container is running - using online migration
May 01 16:52:10 container data is on shared storage 'containers'
May 01 16:52:10 start live migration - suspending container
May 01 16:52:11 dump container state
May 01 16:53:49 dump 2nd level quota
May 01 16:53:51 initialize container on remote node 'XXXX'
May 01 16:53:51 initializing remote quota
May 01 17:41:28 turn on remote quota
May 01 17:41:29 load 2nd level quota
May 01 17:41:29 starting container on remote node 'XXXX'
May 01 17:41:29 restore container state
May 01 17:42:31 start final cleanup
May 01 17:42:32 migration finished successfuly (duration 00:50:23)
TASK OK

Quote:

Originally Posted by Quotas_off
May 01 14:09:28 starting migration of CT 101 to node 'XXXX'
May 01 14:09:29 container data is on shared storage 'almacen'
May 01 14:09:29 dump 2nd level quota
May 01 14:09:29 # vzdqdump 101 -U -G -T > /almacen/dump/quotadump.101
May 01 14:09:29 ERROR: Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 101, maybe you need to reinitialize quota: No such file or directory
May 01 14:09:29 aborting phase 1 - cleanup resources
May 01 14:09:29 start final cleanup
May 01 14:09:29 ERROR: migration aborted (duration 00:00:01): Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 101, maybe you need to reinitialize quota: No such file or directory
TASK ERROR: migration aborted

Quote:

Originally Posted by pveversion
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-6-pve: 2.6.32-55
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

Quote:

Originally Posted by /etc/vz/vz.conf
## Global parameters
VIRTUOZZO=yes
LOCKDIR=/var/lib/vz/lock
DUMPDIR=/var/lib/vz/dump
VE0CPUUNITS=1000

## Logging parameters
LOGGING=yes
LOGFILE=/var/log/vzctl.log
LOG_LEVEL=0
VERBOSE=0

## Disk quota parameters
DISK_QUOTA=yes
VZFASTBOOT=no

# Disable module loading. If set, vz initscript does not load any modules.
#MODULES_DISABLED=yes

# The name of the device whose IP address will be used as source IP for CT.
# By default automatically assigned.
#VE_ROUTE_SRC_DEV="eth0"

# Controls which interfaces to send ARP requests and modify ARP tables on.
NEIGHBOUR_DEVS=detect

## Fail if there is another machine in the network with the same IP
ERROR_ON_ARPFAIL="no"

## Template parameters
TEMPLATE=/var/lib/vz/template

## Defaults for containers
VE_ROOT=/var/lib/vz/root/$VEID
VE_PRIVATE=/var/lib/vz/private/$VEID

## Filesystem layout for new CTs: either simfs (default) or ploop
#VE_LAYOUT=ploop

## Load vzwdog module
VZWDOG="no"

## IPv4 iptables kernel modules to be enabled in CTs by default
IPTABLES="ipt_REJECT ipt_tos ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length"
## IPv4 iptables kernel modules to be loaded by init.d/vz script
IPTABLES_MODULES="$IPTABLES"

## Enable IPv6
IPV6="yes"

## IPv6 ip6tables kernel modules
IP6TABLES="ip6_tables ip6table_filter ip6table_mangle ip6t_REJECT"

Any hint would be appreciated. Thanks.

Public IP - VM

Hello all,

I would like to know the different options for giving a VM a public IP.

I know this solution :
Code:

auto eth0
iface eth0 inet static
        address <pub>
        netmask 255.255.255.0
        network <pub>
        broadcast <pub>
        gateway <pub>

auto vmbr0
iface vmbr0 inet manual
        address <pub>
        netmask 255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

And I configure another public IP inside the VM.

But I "lose" one public IP.

I would like to configure a public IP directly for a VM without losing one public IP on the vmbr bridge.
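
What I have in mind is roughly the following (untested sketch, addresses are placeholders): keep the public IP only on vmbr0, leave eth0 without an address, and give the VM its own public IP on its bridged NIC inside the guest.

Code:

auto eth0
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address <host pub>
        netmask 255.255.255.0
        gateway <pub gw>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0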

Note: a NAT solution is not possible within my infrastructure constraints.

Is this possible?

Thanks

Proxmox 2 snapshot backup crash on Windows 2003 VM

Hello:

We recently migrated from Proxmox 1 to 2.0.

But when the backup starts, the VM crashes after about 5 minutes and the backup stops. On the same node, backups of Linux VMs work fine.

Any help?

Thanks, best regards.

INFO: starting new backup job: vzdump 102 --remove 0 --mode snapshot --compress lzo --storage backups --node proxmox2-0
INFO: Starting Backup of VM 102 (qemu)
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/backups/dump/vzdump-qemu-102-2013_05_02-08_54_54.vma.lzo'
INFO: started backup task 'e4bd8f09-7cc3-4987-a68f-92bcdf7dc826'
INFO: status: 0% (254279680/115964116992), sparse 0% (2703360), duration 3, 84/83 MB/s
INFO: status: 1% (1162412032/115964116992), sparse 0% (5771264), duration 16, 69/69 MB/s
INFO: status: 2% (2360475648/115964116992), sparse 0% (18194432), duration 32, 74/74 MB/s
ERROR: VM 102 not running
INFO: aborting backup job
ERROR: VM 102 not running
ERROR: Backup of VM 102 failed - VM 102 not running
INFO: Backup job finished with errors
TASK ERROR: job errors

Sharing folders between (virtual) systems

On the Proxmox host I have installed three virtual systems:
* Linux Ubuntu server (the Ubuntu server has a public IP address)
* 2 x Win7


I need to make a (read/write) directory on the Proxmox system accessible to the Linux server and the 2 Win7 machines. At the same time I do not want the directory to be accessible to the public.

Can you suggest something? Which way is best (Samba?), and how should I configure the network?
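
If Samba is the way to go, I imagine the share definition would look roughly like this (untested sketch; the path, user, and internal subnet are placeholders):

Code:

[shared]
        path = /srv/shared
        read only = no
        valid users = bartek
        # restrict access to the internal network, not the public interface
        hosts allow = 192.168.1. 127.
        hosts deny = ALL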


Regards,
Bartek