Channel: Proxmox Support Forum

Backup takes so long to finish

Hello
I have a VM with Windows 2012 running nicely on PVE 3.2.
The VM has a 300 GB disk that was re-partitioned into 5 or 6 parts in order to hold different databases...
The issue is that the backup task takes 6 hours to generate an lzo file that is only 13 GB!
However, I have another VM, Windows 2008, with a 120 GB disk.
Its backup produces a 74 GB file and takes approximately 30 minutes!
I can't see why the first task takes so long to finish!
Can anybody assist me?
Thanks

bootstrap vs bootstrap --minimal

Hello!

I want to use dab to create a Debian 7 Wheezy OpenVZ server template that includes all of the basic packages and configuration needed for every Debian OpenVZ container in my local network.

Here are the important entries from the dab.conf I used to accomplish this task.
Code:

Suite: wheezy
Architecture: amd64
Name: debian-7.6
Version: 7.6-1
Section: system
Maintainer: xxx
Infopage: xxx
Description: Debian 7 Wheezy

When using tasksel in Debian to install the "Standard system utilities" task, these X11 packages are not installed, but the default "dab bootstrap" command does install x11-apps, x11-common, x11-session-utils, x11-utils, x11-xfs-utils, x11-xkb-utils and x11-xserver-utils.
The problem is that I want all of the commonly used system utilities, but I don't want the X11 tools or an X server installed on my Debian servers.
I am also not sure exactly which packages I need; otherwise I would just install the minimal package set and add everything else manually afterwards.

So is there a way to use dab bootstrap to install all of the "common system utilities" but not the X11 stuff?
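One workaround that might do it (an untested sketch; it assumes dab's "install" sub-command accepts a plain package list, and the package names are taken from the 141-package difference list further down, simply skipping the x11-* entries):
Code:

# Untested sketch: start from the minimal bootstrap, then add the extra
# utilities by hand, leaving out the x11-* packages
# (assumes "dab install <pkg>..." works as described in the dab manpage).
dab bootstrap --minimal
dab install bash-completion bc bzip2 file lsof mlocate patch telnet time whois
dab finalize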


In case some people are interested in which packages come with the default "dab bootstrap" and with "dab bootstrap --minimal", here are the lists.

minimal ("dab bootstrap --minimal") - 150 packages
Code:

adduser
apt
apt-utils
aptitude
aptitude-common
base-files
base-passwd
bash
bsdmainutils
bsdutils
coreutils
cpio
cron
dash
debconf
debconf-i18n
debian-archive-keyring
debianutils
diffutils
dmidecode
dpkg
e2fslibs
e2fsprogs
findutils
gcc-4.7-base
gnupg
gpgv
grep
groff-base
gzip
hostname
ifupdown
info
initscripts
insserv
install-info
iproute
iptables
iputils-ping
isc-dhcp-client
isc-dhcp-common
kmod
less
libacl1
libapt-inst1.5
libapt-pkg4.12
libattr1
libblkid1
libboost-iostreams1.49.0
libbsd0
libbz2-1.0
libc-bin
libc6
libcomerr2
libcwidget3
libdb5.1
libedit2
libept1.4.12
libgcc1
libgcrypt11
libgdbm3
libgnutls26
libgpg-error0
libgssapi-krb5-2
libidn11
libk5crypto3
libkeyutils1
libkmod2
libkrb5-3
libkrb5support0
liblocale-gettext-perl
liblzma5
libmount1
libncurses5
libncursesw5
libnewt0.52
libnfnetlink0
libp11-kit0
libpam-modules
libpam-modules-bin
libpam-runtime
libpam0g
libpipeline1
libpopt0
libprocps0
libreadline6
libsasl2-2
libselinux1
libsemanage-common
libsemanage1
libsepol1
libsigc++-2.0-0c2a
libslang2
libsqlite3-0
libss2
libssl1.0.0
libstdc++6
libtasn1-3
libtext-charwidth-perl
libtext-iconv-perl
libtext-wrapi18n-perl
libtinfo5
libudev0
libusb-0.1-4
libustr-1.0-1
libuuid1
libwrap0
libxapian22
login
logrotate
lsb-base
man-db
manpages
mawk
mount
multiarch-support
nano
ncurses-base
ncurses-bin
net-tools
netbase
netcat-traditional
openssh-client
openssh-server
openssl
passwd
perl-base
postfix
procps
readline-common
rsyslog
sed
sensible-utils
ssh
ssl-cert
sysv-rc
sysvinit
sysvinit-utils
tar
tasksel
tasksel-data
traceroute
tzdata
util-linux
vim-common
vim-tiny
wget
whiptail
xz-utils
zlib1g

default ("dab bootstrap") - 291 packages
Code:

adduser
apt
apt-listchanges
apt-utils
aptitude
aptitude-common
at
base-files
base-passwd
bash
bash-completion
bc
bind9-host
bsd-mailx
bsdmainutils
bsdutils
bzip2
coreutils
cpio
cpp
cpp-4.7
cron
dash
db5.1-util
dc
debconf
debconf-i18n
debian-archive-keyring
debian-faq
debianutils
diffutils
dmidecode
dnsutils
doc-debian
dpkg
e2fslibs
e2fsprogs
file
findutils
fontconfig-config
ftp
gcc-4.7-base
gettext-base
gnupg
gpgv
grep
groff-base
gzip
host
hostname
ifupdown
info
initscripts
insserv
install-info
iproute
iptables
iputils-ping
isc-dhcp-client
isc-dhcp-common
kmod
krb5-locales
less
libacl1
libapt-inst1.5
libapt-pkg4.12
libasprintf0c2
libattr1
libbind9-80
libblkid1
libboost-iostreams1.49.0
libbsd0
libbz2-1.0
libc-bin
libc6
libc6-i386
libcap2
libclass-isa-perl
libcomerr2
libcwidget3
libdb5.1
libdns88
libdrm2
libedit2
libept1.4.12
libevent-2.0-5
libexpat1
libfontconfig1
libfontenc1
libfreetype6
libfs6
libgc1c2
libgcc1
libgcrypt11
libgdbm3
libgeoip1
libgl1-mesa-glx
libglapi-mesa
libgmp10
libgnutls-openssl27
libgnutls26
libgpg-error0
libgpgme11
libgpm2
libgssapi-krb5-2
libgssglue1
libgssrpc4
libice6
libidn11
libisc84
libisccc80
libisccfg82
libk5crypto3
libkadm5clnt-mit8
libkadm5srv-mit8
libkdb5-6
libkeyutils1
libkmod2
libkrb5-3
libkrb5support0
libldap-2.4-2
liblocale-gettext-perl
liblockfile-bin
liblockfile1
liblwres80
liblzma5
libmagic1
libmount1
libmpc2
libmpfr4
libncurses5
libncursesw5
libnewt0.52
libnfnetlink0
libnfsidmap2
libp11-kit0
libpam-modules
libpam-modules-bin
libpam-runtime
libpam0g
libpci3
libpcre3
libpipeline1
libpng12-0
libpopt0
libprocps0
libpth20
libreadline6
librpcsecgss3
libsasl2-2
libselinux1
libsemanage-common
libsemanage1
libsepol1
libsigc++-2.0-0c2a
libslang2
libsm6
libsqlite3-0
libss2
libssl1.0.0
libstdc++6
libswitch-perl
libtasn1-3
libtext-charwidth-perl
libtext-iconv-perl
libtext-wrapi18n-perl
libtinfo5
libtirpc1
libtokyocabinet9
libudev0
libusb-0.1-4
libustr-1.0-1
libuuid1
libwrap0
libx11-6
libx11-data
libx11-xcb1
libxapian22
libxau6
libxaw7
libxcb-glx0
libxcb-shape0
libxcb1
libxcomposite1
libxcursor1
libxdamage1
libxdmcp6
libxext6
libxfixes3
libxft2
libxi6
libxinerama1
libxkbfile1
libxml2
libxmu6
libxmuu1
libxpm4
libxrandr2
libxrender1
libxt6
libxtst6
libxv1
libxxf86dga1
libxxf86vm1
locales
login
logrotate
lsb-base
lsof
m4
man-db
manpages
mawk
mime-support
mlocate
mount
multiarch-support
mutt
nano
ncurses-base
ncurses-bin
ncurses-term
net-tools
netbase
netcat-traditional
openssh-client
openssh-server
openssl
passwd
patch
perl
perl-base
perl-modules
postfix
procmail
procps
python
python-apt
python-apt-common
python-chardet
python-debian
python-debianbts
python-fpconst
python-minimal
python-reportbug
python-soappy
python-support
python2.6-minimal
python2.7
python2.7-minimal
readline-common
reportbug
rpcbind
rsyslog
sed
sensible-utils
ssh
ssl-cert
sysv-rc
sysvinit
sysvinit-utils
tar
tasksel
tasksel-data
telnet
texinfo
time
traceroute
ttf-dejavu-core
tzdata
ucf
util-linux
vim-common
vim-tiny
w3m
wamerican
wget
whiptail
whois
x11-apps
x11-common
x11-session-utils
x11-utils
x11-xfs-utils
x11-xkb-utils
x11-xserver-utils
xauth
xbase-clients
xinit
xz-utils
zlib1g


The following 141 packages are additionally installed when the "--minimal" parameter is not used.
Code:

apt-listchanges
at
bash-completion
bc
bind9-host
bsd-mailx
bzip2
cpp
cpp-4.7
db5.1-util
dc
debian-faq
dnsutils
doc-debian
file
fontconfig-config
ftp
gettext-base
host
krb5-locales
libasprintf0c2
libbind9-80
libc6-i386
libcap2
libclass-isa-perl
libdns88
libdrm2
libevent-2.0-5
libexpat1
libfontconfig1
libfontenc1
libfreetype6
libfs6
libgc1c2
libgeoip1
libgl1-mesa-glx
libglapi-mesa
libgmp10
libgnutls-openssl27
libgpgme11
libgpm2
libgssglue1
libgssrpc4
libice6
libisc84
libisccc80
libisccfg82
libkadm5clnt-mit8
libkadm5srv-mit8
libkdb5-6
libldap-2.4-2
liblockfile-bin
liblockfile1
liblwres80
libmagic1
libmpc2
libmpfr4
libnfsidmap2
libpci3
libpcre3
libpng12-0
libpth20
librpcsecgss3
libsm6
libswitch-perl
libtirpc1
libtokyocabinet9
libx11-6
libx11-data
libx11-xcb1
libxau6
libxaw7
libxcb-glx0
libxcb-shape0
libxcb1
libxcomposite1
libxcursor1
libxdamage1
libxdmcp6
libxext6
libxfixes3
libxft2
libxi6
libxinerama1
libxkbfile1
libxml2
libxmu6
libxmuu1
libxpm4
libxrandr2
libxrender1
libxt6
libxtst6
libxv1
libxxf86dga1
libxxf86vm1
locales
lsof
m4
mime-support
mlocate
mutt
ncurses-term
patch
perl
perl-modules
procmail
python
python-apt
python-apt-common
python-chardet
python-debian
python-debianbts
python-fpconst
python-minimal
python-reportbug
python-soappy
python-support
python2.6-minimal
python2.7
python2.7-minimal
reportbug
rpcbind
telnet
texinfo
time
ttf-dejavu-core
ucf
w3m
wamerican
whois
x11-apps
x11-common
x11-session-utils
x11-utils
x11-xfs-utils
x11-xkb-utils
x11-xserver-utils
xauth
xbase-clients
xinit

balloon ram error on MS Server 2008 R2 ( pc.ram: Cannot allocate memory )

Hi!
I have a physical machine running MS Windows Server 2008 R2.

I successfully moved it to Proxmox 3.2 using Clonezilla (almost successfully - there were some problems with the MBR and boot code), using an IDE disk in the guest VM and fixed (not ballooned) RAM.
Next I installed the virtio drivers, installed the balloon service, rebooted, and shut the VM down.
After this I wanted to configure balloon RAM and set the RAM size to 512/6000.
But when the machine started, I saw this error:

Quote:

Cannot set up guest memory 'pc.ram': Cannot allocate memory
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -smbios 'type=1,uuid=809fc45c-c170-4d73-bfd6-49ce8fc78f1d' -name ntc-srv-v1c -smp 'sockets=1,cores=4' -nodefaults -boot 'menu=on' -vga std -no-hpet -cpu 'kvm64,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep' -k en-us -m 4096 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -readconfig /usr/share/qemu-server/pve-usb.cfg -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'usb-host,hostbus=1,hostport=1.5' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:84ac71905dc2' -drive 'file=/var/lib/vz/images/101/vm-101-disk-1.qcow2,if=none,id=drive-virtio1,format=qcow2,cache=writeback,aio=native' -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb' -drive 'file=/var/lib/vz/images/101/vm-101-disk-3.qcow2,if=none,id=drive-ide0,format=qcow2,cache=writeback,aio=native' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=101' -drive 'file=/mnt/pve/ntc-disk-iso/template/iso/virtio-win-0.1-81.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,romfile=,mac=42:20:A5:4C:4B:DE,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
After several unsuccessful attempts to find a working RAM configuration, I settled on a working setup of 512/4096.

The same error appears on another server that was installed fresh (not migrated) and that also uses an IDE disk.

I installed a test server (MS Server 2008 R2) with a virtio disk, and there the error does not occur; it works fine.
I don't know whether there is a relationship between these cases or whether it is some kind of magic. :D

Sorry for my bad English :rolleyes:
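For reference, the "Cannot allocate memory" text usually means the host could not hand KVM the requested 4 GB at start time, so it may be worth checking free host memory right before starting the VM. A minimal host-side check, assuming a standard PVE 3.x node (nothing here is specific to this VM except the ID):
Code:

# Run on the Proxmox host just before starting the guest:
free -m                            # free RAM and swap on the host
grep -i commit /proc/meminfo       # CommitLimit vs Committed_AS
cat /proc/sys/vm/overcommit_memory
qm start 101                       # retry and see whether the error tracks free memory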

Proxmox 3.2 ISO Post-installation no webinterface

Hi everybody.

I have an HP Elite 7500 (Core i7, 8 GB RAM) with 2 PCIe Ethernet cards and 2 disks (500 GB and 1 TB).
I would like to install the Proxmox VE 3.2 ISO (3.2-5a885216-5) on this machine.
So:
I ran the installation with ext4.
I am behind a proxy.
I configured the No-Subscription repository.
I set my http_proxy variable.
$> aptitude update
$> aptitude full-upgrade
The update completed without errors.
$> reboot

Now I can't connect to it with my browser.
Ping works.
When I run nmap I get "All 1000 scanned ports on 172.16.0.5 are closed".

Can you help me?
Regards
screenshot:
https://drive.google.com/file/d/0B-P...it?usp=sharing
https://drive.google.com/file/d/0B-P...it?usp=sharing
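For anyone hitting the same symptom (ping works, every TCP port closed): a first check, assuming a standard PVE 3.2 install where the GUI is served by pveproxy on port 8006, is to confirm on the host that the proxy daemon is running and listening:
Code:

# On the Proxmox host itself (PVE 3.x still uses sysvinit scripts):
service pveproxy status               # is the web proxy running?
service pveproxy restart              # restart it if not
netstat -tlnp | grep -E '8006|3128'   # 8006 = web GUI, 3128 = SPICE proxy
# then browse to https://<host-ip>:8006 (https and port 8006, not port 80)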

Problem: date permanently stuck (at 2014-08-25), hwclock is correct

Hello,

I have a mysterious problem.
1) It is NOT a UTC / time zone problem and (edit) NOT an ntpd problem.


########### INFORMATION about the servers:
2) DESTINATION
-------------------
ROOT - DESTINATION # pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 3.2.59-01-amd64-proxmox)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: not correctly installed
vzprocps: not correctly installed
vzquota: not correctly installed
pve-qemu-kvm: 1.7-8
ksm-control-daemon: not correctly installed
glusterfs-client: 3.4.2-1

ROOT -DESTINATION # uname -a
Linux DESTINATION 3.2.59-01-amd64-proxmox #1 SMP Tue Jun 3 11:48:44 CEST 2014 x86_64 GNU/Linux
-------------------

3) SOURCE (proxmox v2)
-------------------
ROOT - SOURCE # pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 3.2.52-01-amd64-proxmox
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: not correctly installed
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-3
vzctl: not correctly installed
vzprocps: not correctly installed
vzquota: not correctly installed

ROOT - SOURCE # uname -a
Linux SOURCE 3.2.52-01-amd64-proxmox #1 SMP Mon Oct 28 12:01:23 CET 2013 x86_64 GNU/Linux
-------------------

########### INFORMATION about the servers (end)

[EDIT] --------------------
Debian 7.6 on both, and similar kernels:
# uname -a
Linux SOURCE 3.2.59-01-amd64{-proxmox} #1 SMP

On my VM exactly:
Linux SOURCE 3.2.59-01-amd64 #1 SMP
Debian 7.6

On my Proxmox V3:
Linux SOURCE 3.2.59-01-amd64-proxmox #1 SMP
Debian 7.6
On my Proxmox V2:
Linux SOURCE 3.2.52-01-amd64-proxmox #1 SMP
Debian 7.6

666.conf
-------------
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 2048
name: my-vm
net0: virtio=MAC ADRESS
onboot: 1
ostype: l26
sockets: 1
virtio0: my-lvm-vstore:vm-666-disk-1,size=50G
------------

add or not "localtime: 1" in 666.conf have no effect !
--------------------
[EDIT]



So, my problem: I migrated one VM from "SOURCE" to "DESTINATION", i.e. from Proxmox V2 to Proxmox V3.




Today, 2014-08-28, when I started my VM (on Proxmox V3), "date" says 2014-08-25 15:44.
hwclock has the correct time: 2014-08-28 10:00 (as I write this).


When I start the same VM on Proxmox V2 - the same VM, the same /etc/pve/qemu-server/666.conf and the same LVM volume - date and hwclock are both correct!


In the first case (Proxmox V3) "date" always says 2014-08-25, but hwclock has the correct time (the time of my server).
In the second case (Proxmox V2) both date and hwclock are correct.


What I tried (inside my VM):
1) hwclock --hctosys didn't work; "date" resets after a reboot.
2) Modifying /etc/rcS to add something like "hwclock --hctosys" has no effect (and more tries).
3) hwclock --adjust to modify /etc/adjtime; no effect.
4) Deleting /etc/adjtime; my hwclock then resets to "date"
(and I need to run hwclock --set --date "good date").


In all cases "date" says 2014-08-25 15:44,
while hwclock shows the real time.


(NB: in all cases I boot in single mode; there is an fsck issue about a last-mount time "in the future",
and I use single mode to bring my network interface down; I don't want to update "date" via NTP - I want to solve this problem.)



[EDIT] --------------------
For example:
1) My VM is still running.
2) I update "date" with hwclock or ntpd; the result is the same either way.
3) date replies Thu Aug 28 12:00 = the real/correct time.
4) # reboot
5) When my VM restarts:
a) fsck complains about a "wrong date; last mount in the future" ... date replies 2014-08-25 15:44; the last mount was Thu Aug 28 12:00 = today / the actual time.
b) "Give root password for maintenance (or type Ctrl-D to continue)"
c) I type the root password.
d) my-vm # hwclock && date
Thu Aug 28 12:00 (= today)
Mon Aug 25 15:44

Is it clear with this example?

If you prefer: Groundhog Day or Edge of Tomorrow! ^_^
Maybe that image makes it clearer?
-------------------- [EDIT]
So I presume I missed something, but...


If you have any questions or need more information about my case, you are welcome to ask.
If you have any suggestions, I would be glad to hear them.

Thank you,

L.

Resource allocation

Hi,
I need help understanding what would be an appropriate strategy for allocating resources (CPU and memory) in PVE 2.57.
For example, I have this configuration, and I doubt that it is correct:
- CLUSTER:
CPU 6 core AMD FX 6100
Mem: 4 GB 1333 Mhz
Swap: 4 GB
HD: 1 TB
- VM 1 (ERP + Postgres / Debian 6 / OpenVZ):
CPU: 6 Processors
Mem: 3 GB
Swap: 4 GB
- VM 2 (File server / Debian 6 / OpenVZ):
CPU: 2 processors
Mem: 1 GB
Swap: 1 GB
- VM 3 (Web Server + MySQL / Debian 7 / OpenVZ):
CPU: 4 processors
Mem: 512 MB
Swap: 512 MB
- VM 4 (Terminal Server / Win XP / VM)
CPU: 2 processors
Mem: 1 GB
Swap: 1 GB


Thank you very much for your help!


Gaston Favereau.

Device names change during normal operation

Hello my friends...


I don't know if this behavior has to do with Debian Linux, the PVE kernel, or another part of the system.
I have a server with a Supermicro mainboard. Everything works fine, but suddenly the disks changed their identification.
I have an external USB hard drive and another SATA drive connected to this system.
This morning the VM froze because of a malfunction and we had to shut it down.
But when we tried to start it again, Proxmox could not find the VM image, because the device name of the disk where the VM image file resides had changed from /dev/sdc1 to /dev/sdd1.
Is there some way to prevent this kind of trouble?
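One common way to avoid this kind of renaming problem - sketched here under the assumption that the disk is mounted via /etc/fstab and carries a filesystem UUID - is to reference the partition by UUID or by its /dev/disk/by-id path instead of /dev/sdX, since sdX names can change whenever drives are re-enumerated:
Code:

# Find the stable identifiers of the partition:
blkid /dev/sdd1
ls -l /dev/disk/by-id/ /dev/disk/by-uuid/

# Then mount by UUID in /etc/fstab instead of /dev/sdc1
# (the UUID and mount point below are made-up examples):
# UUID=1234-ABCD-5678  /mnt/vmstore  ext4  defaults  0  2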


I will appreciate any help or advice!


Thank you

Snapshot not used anymore?

I had an issue with one of my Proxmox servers, so I set up a new Proxmox host and copied over all data from /var/lib/vz/images/.
On the previous Proxmox server I had a VM that was running on a snapshot, but after importing that VM onto the new Proxmox host, the web GUI shows that no snapshot is in use any more; see the images below:

The .raw file indicating the presence and use of a snapshot on the filesystem:
[screenshot]

After the copy no snapshot is shown, but the .raw file is still present on the filesystem:
[screenshot]

Can I get some guidance on what I should do here?

Using USB Hard Drive as Storage (type: Directory) for Proxmox

Hi,
This is my first post here :-).

I have a small, simple Proxmox environment - a few servers, no clusters, no shared storage.

On one of the servers I'm running out of space on local disks. The upgrade is planned, but before it happens, I need to "survive" somehow.

My idea was to use a 2TB USB 3.0 Hard Drive, and add it as a storage for keeping vms.
For a reason (too complicated to explain), the hard drive is formatted with NTFS.

So what I did was:

-connected the usb hdd,
then:
mkdir /mnt/usb
mount -t ntfs-3g /dev/sdc1 /mnt/usb


So far, so good. I went into the Proxmox GUI and used the "Add Storage" option to add the mounted directory as Type: Directory (I need to store vmdk machines, which is why I chose Directory and not LVM).
The USB HDD was added, I could see how much space is available, and everything seemed to be fine.

I started moving the disks of one virtual machine from local storage to the added USB HDD storage. The move went well.

The problem starts here: after moving the disk to the USB HDD storage, I tried to start the VM and got the following error:

kvm: -drive file=/mnt/usb/images/132/vm-132-disk-2.vmdk,if=none,id=drive-virtio0,format=vmdk,aio=native,cache=none: could not open disk image /mnt/usb/images/132/vm-132-disk-2.vmdk: Invalid argument
TASK ERROR: start failed: command '/usr/bin/kvm -id 132 -chardev 'socket,id=qmp,path=/var/run/qemu-server/132.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/132.vnc,x509,password -pidfile /var/run/qemu-server/132.pid -daemonize -name APP-TEST -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+x2apic,+sep -k en-us -m 1024 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/mnt/usb/images/132/vm-132-disk-2.vmdk,if=none,id=drive-virtio0,format=vmdk,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap132i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,romfile=,mac=8E:FE:AC:32:DD:63,netdev=net0,bus=pci.0,addr=0x12,id=net0'' failed: exit code 1

Can somebody help me out with that?
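One thing that may be worth ruling out (a guess, not a confirmed cause): the drive line uses cache=none, which requires O_DIRECT, and ntfs-3g mounts do not always support direct I/O; that combination can surface as exactly this "Invalid argument". A quick, hedged test of the mount and of the image file:
Code:

# Does direct I/O work on the ntfs-3g mount at all?
dd if=/dev/zero of=/mnt/usb/directio-test bs=1M count=4 oflag=direct
rm /mnt/usb/directio-test

# Is the image itself intact and really recognised as vmdk?
qemu-img info /mnt/usb/images/132/vm-132-disk-2.vmdk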

Second NAS Kills all NFS Connections

I have a second NAS that I am trying to add through the web GUI. I can add it just fine as an NFS share, but when I do, all of my external storage options stop working. I can then no longer view the contents of any of these storages through the web GUI. VMs that are running continue to run, but it completely breaks the GUI and I cannot create new VMs or access any of the storage devices. The only solution is to remove the new NFS share. Any idea why this is happening? The NAS is a Seagate BlackArmor 404.
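A first hedged diagnostic for this, assuming standard NFS tooling on the PVE host: check whether the new NAS actually answers NFS/mount requests, because a single hanging NFS mount can stall the storage scanning that the whole web GUI relies on:
Code:

# From the Proxmox host, before and after adding the NAS (replace <nas-ip>):
showmount -e <nas-ip>      # does it export what you expect?
rpcinfo -p <nas-ip>        # are mountd/nfs registered and reachable?
pvesm status               # which storages PVE currently considers online
mount | grep nfs           # any stale or hung NFS mounts left behind?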

Q: Upgrade 3-node Cluster (PVE 2.3->3.2) Sanity check

Hi,

I'm trying to plan for a possible Proxmox upgrade,
- cluster of 3 proxmox hosts (Intel modular server, proxmox cluster, have shared LVM storage from modular server chassis for VMs - kind of a "SAS attached SAN")
- works fine, but the version is starting to become a concern, i.e. an upgrade to PVE 3.X would be nice.
- I have reviewed process docs as per PVE Wiki, https://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0

It sounds quite clear cut, even for Cluster, ie,

  • first PVE host: evacuate all VMs so nothing is running here (live/migrate to other cluster nodes)
  • do the upgrade, reboot, migrate VMs back into the new 3.X PVE node
  • rinse and repeat on other nodes


I don't have access to a dev/test environment that mimics this, so that kind of stinks. I would prefer to avoid the 'horrible disruption' of the upgrade going poorly and having to do a clean install, restore everything from cold backups, etc., as the amount of VM disk image space is not trivial (i.e. a ~10 TB LVM LUN which is mostly provisioned full of VM images running on this cluster).

I wanted to kind of put out a sanity test query,

  • has anyone else running a Proxmox 2.X cluster -- done an upgrade using script as per the PVE Wiki link, https://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0 - and was the experience a success ?
  • wanted to confirm, inherent in this process - appears to be that we have temporarily in place a "Mixed Node" cluster, ie,
  • after we upgrade node1, but have not yet updated node2,3 - we have a cluster with one PVE3.X node, and two x PVE2.X nodes.
  • then we upgrade node2, have 2 nodes on PVE3 and 1 node on PVE2
  • finally upgrade node3, we have all 3 nodes on PVE3, fun work is finished.


I had tested PVE upgrade scripts earlier in the year in a test environment, as follows,

  • upgrade standalone PVE1.X to PVE2.X via script, reboot
  • try then to upgrade this host via script, from PVE2 to PVE3.
  • at this point the upgrade process seems to have failed, at least for me, and I ended up with a non-bootable box. I didn't end up testing more, being slightly concerned with how the process went.




  • Clearly doing a clean install / burn-down and restore-VM-from-backup method is "safe", but it has the downside of fairly large downtime for your VMs (i.e. 10 TB of data takes a while to move over gigabit Ethernet, especially when moving a couple of times - off the old box into NFS storage, then back onto the 'new box' after the clean install).
  • Clearly another way to do it would be "just buy new hardware", stand up a new PVE3.X cluster, and then one at a time, power off VMs, archive them to NFS storage, restore them on the new PVE cluster, and happy days ensue. But this still has a series of 'shorter' outages (ie, each individual VM has an outage window based on how big its disk is / how long it takes to backup-then-restore .. via gig ether in this case .. the VM).


So, anyhow, I have rattled on more than long enough.

If anyone is able to give any comments or feedback it would really be greatly appreciated.

Thanks,

Tim


QCOW2 files differing in size

I did a direct copy of /var/lib/vz/images/* over SSH from one server to another so that I can decommission the older server.

I need some help understanding this, because the sizes of the QCOW2 files on the old and new server differ.

VM    Old Server    New Server
100   32G           91G
101   45G           97G
102   19G           46G
103   18G           81G
104   1.3G          11G
105   15G           81G
106   14G           81G
107   26G           26G
108   4.5G          4.5G
109   24G           46G
110   5.0G          121G

I think what has happened is that the file copy also copied the empty (unallocated) space in the VM images.
Is there a way to reduce the file sizes again without destroying data?
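If the growth really is lost sparseness, one way to get the space back - a sketch that assumes the VM is stopped and there is room for a temporary copy; the VM 110 paths are taken from the table above and "oldserver" is a placeholder hostname - is to re-copy with sparse handling or rewrite the image with qemu-img:
Code:

# Option 1: copy again, preserving holes (run on the new server, VM stopped):
rsync -av --sparse root@oldserver:/var/lib/vz/images/110/vm-110-disk-1.qcow2 /var/lib/vz/images/110/

# Option 2: rewrite the existing file; zero/unallocated clusters are not re-allocated:
cd /var/lib/vz/images/110
qemu-img convert -O qcow2 vm-110-disk-1.qcow2 vm-110-disk-1.compact.qcow2
mv vm-110-disk-1.compact.qcow2 vm-110-disk-1.qcow2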

Highly available ZFS storage

How to attach existing HD image to a new VM?

This just... doesn't seem to be an option. Anywhere. Every time you go to add a new hard disk it assumes you want to create a new image. There isn't an option to use an existing one. To wit:



I got a little further by following this thread for migrating from ESXi to Proxmox (which is what I'm doing): http://noltechinc.com/miscellany/mig...to-proxmox-2-1

The image file is a QCOW2 file converted from a VMDK. I renamed it to the name of the disk that Proxmox created. Then I got this:

kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none: could not open disk image /var/lib/vz/images/100/vm-100-disk-1.qcow2: Image is not in qcow2 format
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name Gemenon -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 4096 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=56:19:B2:9A:0A:C4,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1

Does anyone know what I might be doing wrong here? It was converted to QCOW2.
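A hedged check that usually narrows this down: ask qemu-img what the file on disk actually is, and if it is not reported as qcow2, convert the original VMDK again, writing straight to the file name Proxmox expects (the source path below is a placeholder):
Code:

# What format does QEMU think the file is in?
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2

# If it is not qcow2, re-convert the original VMDK to that exact path:
qemu-img convert -p -O qcow2 /path/to/original.vmdk /var/lib/vz/images/100/vm-100-disk-1.qcow2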

PXE booting VMs not working.

Somewhere along the way, PXE booting VMs on my Proxmox hosts stopped working. Here's what I'm running:

# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 3.10.0-3-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-3.10.0-3-pve: 3.10.0-11
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

I only recently moved to using the 3.10 kernel over the 2.6.32 kernel - I can't remember if I was ever able to successfully PXE boot a VM on 3.10 or not.

The behavior I'm seeing is that a VM will start booting, will begin to load vmlinuz and will hang halfway through the download. After a couple of minutes, the VM console will start spewing "can't find kernel vmlinux" messages.

Nothing in my support infrastructure has changed recently; I'm using a Fedora 20 host as a TFTP server, which I've done countless successful PXE boots from in the past. I did see an earlier forum thread about a QEMU 1.7 issue related to PXE booting, so I'm wondering if a recent QEMU or kernel update caused a regression.

Let me know if I can send any other helpful information along.

Thanks!

Lost VM after power failure

Hi everybody,
I have lost a VM after a power failure.
I ran testdisk, and the results show the VM I need (VM 107) as a file instead of a folder.
Is there any way to recover it?
Attached is a screenshot of the testdisk result.

[screenshot]




thanks in advance.

Error in /etc/cron.daily/pve file

Hello!

Every day I receive an email from cron daemon:
Subj: test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
/etc/cron.daily/pve:
Use of uninitialized value $3 in hex at /etc/cron.daily/pve line 65, <GEN11> line 1.


I have a cluster with 3 nodes, but I receive this error from only one node.

pmn02:~# pveversion -v
proxmox-ve-2.6.32: 3.2-132 (running kernel: 2.6.32-31-pve)

pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-24-pve: 2.6.32-111
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-31-pve: 2.6.32-132
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


How can I fix that?

Windows KVM frequent restarts

A KVM VM with Windows 2008 R2 reboots frequently and unexpectedly.

I noticed that the reboots happen when the CPU load starts to grow.

Code:

bootdisk: ide0
cores: 4
cpu: qemu64
ide2: none,media=cdrom
memory: 16384
net0: e1000=8E:52:D9:AD:33:E5,bridge=vmbr0,rate=125
onboot: 1
ostype: win7
sockets: 1
virtio0: A:110/vm-110-disk-1.raw,format=raw,size=80G
virtio1: B:110/vm-110-disk-1.raw,format=raw,backup=no,size=500G

Code:

proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

The Windows event log doesn't seem to help.

no failback when using failover domains on two node cluster

Hello

I'm experimenting with a two-node cluster with DRBD and LVM on top for the VMs.
I don't have a fencing device for now, so I'm using fence_manual and the fence_ack_manual command to simulate fencing. Everything works as expected. The only difficulty I have is with failover domains. I have 5 HA VMs: 3 of them run on node B and 2 on node A. When I reset node B and run "fence_ack_manual nodeB" from node A, the 3 VMs located on node B are started on node A as expected. But when node B comes back up, those 3 VMs are not relocated back to node B as I would like. I'm posting my cluster.conf below (see the note after the config):

Code:

<?xml version="1.0"?><cluster config_version="22" name="pvecluster">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_manual" name="human"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="proxmox" nodeid="1" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="proxmox"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="proxmox2" nodeid="2" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="proxmox2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
<failoverdomains>
  <failoverdomain name="ordered" nofailback="0" ordered="0" restricted="1">
      <failoverdomainnode name="proxmox"  priority="1"/>
      <failoverdomainnode name="proxmox2" priority="1"/>
  </failoverdomain>
</failoverdomains>
    <pvevm domain="ordered" autostart="1" vmid="111" recovery="relocate"/>
    <pvevm domain="ordered" autostart="1" vmid="107" recovery="relocate"/>
    <pvevm domain="ordered" autostart="1" vmid="108" recovery="relocate"/>
    <pvevm domain="ordered" autostart="1" vmid="110" recovery="relocate"/>
    <pvevm domain="ordered" autostart="1" vmid="112" recovery="relocate"/>
  </rm>
</cluster>
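A note on the config above: with ordered="0" and both nodes at the same priority, rgmanager has no preferred node to return a service to, so the absence of failback is expected behaviour. A sketch of a per-node ordered domain (attribute semantics as documented for rgmanager failover domains; the domain name and vmid assignment here are only an example) could look like this:
Code:

<failoverdomains>
  <!-- lower priority number = preferred node; nofailback="0" allows moving back -->
  <failoverdomain name="prefer_proxmox2" nofailback="0" ordered="1" restricted="1">
      <failoverdomainnode name="proxmox2" priority="1"/>
      <failoverdomainnode name="proxmox"  priority="2"/>
  </failoverdomain>
</failoverdomains>
<!-- ...then reference it from the VMs that should normally live on proxmox2: -->
<pvevm domain="prefer_proxmox2" autostart="1" vmid="107" recovery="relocate"/>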

thank you