Channel: Proxmox Support Forum

SoftRAID+lvm+kvm+proxmox+high IO = crash ?

Hi, I wanted to give Proxmox a try.
But just as we wanted to go "live", after configuring five VMs, we experience crashes that are reproducible under high IO (bonnie++).

I ran some IO stress tests before (without md-RAID) and saw no crash.

Then, for HA reasons, I moved to software RAID (and went from ext3 to ext4).

Everything is fine until there is some load. Importing a mysqldump is sufficient to crash the MySQL server.

The system is Ubuntu 12.04 LTS.

Does anybody experience the same issue?

Thanks for any hint,
d.


"
[441973.188022] [sched_delayed] sched: RT throttling activated
[453494.377366] ata3.00: exception Emask 0x0 SAct 0x3 SErr 0x0 action 0x6 frozen
[453494.377504] ata3.00: failed command: WRITE FPDMA QUEUED
[453494.377582] ata3.00: cmd 61/08:00:20:1c:c1/00:00:01:00:00/40 tag 0 ncq 4096 out
[453494.377582] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[453494.377751] ata3.00: status: { DRDY }
[453494.377818] ata3.00: failed command: WRITE FPDMA QUEUED
[453494.377897] ata3.00: cmd 61/08:08:40:1c:c1/00:00:01:00:00/40 tag 1 ncq 4096 out
[453494.377897] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
[453494.378064] ata3.00: status: { DRDY }
[453494.378140] ata3: hard resetting link....

[453555.112562] ata3: EH complete
[453592.606815] BUG: soft lockup - CPU#0 stuck for 29s! [scsi_eh_2:176]
[453592.606883] Modules linked in: microcode(F) psmouse(F) joydev(F) serio_raw(F) i2c_piix4(F) mac_hid(F) virtio_balloon(F) lp(F) parport(F) hid_generic(F) usbhid(F) hid(F) floppy(F) e1000(F) ahci(F) libahci(F)
[453592.606955] CPU 0
[453592.606959] Pid: 176, comm: scsi_eh_2 Tainted: GF 3.8.0-33-generic #48~precise1-Ubuntu Bochs Bochs
[453592.606961] RIP: 0010:[<ffffffff816f4319>] [<ffffffff816f4319>] _raw_spin_unlock_irqrestore+0x19/0x30
[453592.606997] RSP: 0018:ffff88011ba69cc0 EFLAGS: 00000286


"

Is it possible to extract VM config from backup vma.gz file?

Hi,

I need to recover VM configs from backup files. I only need the config files, and doing full restores from the backups first would take a lot of time. So is there any way to extract just the VM configs from the vma files?
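I have not tried it yet, but it looks like the vma tool on the node can print the embedded config once the archive is decompressed; something along these lines (the file name is only an example):

Code:

zcat /var/lib/vz/dump/vzdump-qemu-100-example.vma.gz > /tmp/100.vma
vma config /tmp/100.vma > 100.conf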

Problem installing proxmox-ve-2.6.32 on Debian Wheezy

I am a novice at using Proxmox VE.
While installing proxmox-ve, I ran into this problem.
Please help me solve it.

# aptitude install proxmox-ve-2.6.32
The following partially installed packages will be configured:
clvm fence-agents-pve libpve-access-control libpve-storage-perl
proxmox-ve-2.6.32 pve-cluster pve-manager qemu-server
redhat-cluster-pve resource-agents-pve vzctl
No packages will be installed, upgraded, or removed.
0 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B of archives. After unpacking 0 B will be used.
Setting up pve-cluster (3.0-8) ...
[....] Starting pve cluster filesystem : pve-cluster[main] crit: Unable to get local IP address
(warning).
invoke-rc.d: initscript pve-cluster, action "start" failed.
dpkg: error processing pve-cluster (--configure):
subprocess installed post-installation script returned error exit status 255
dpkg: dependency problems prevent configuration of redhat-cluster-pve:
redhat-cluster-pve depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing redhat-cluster-pve (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of fence-agents-pve:
fence-agents-pve depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing fence-agents-pve (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-access-control:
libpve-access-control depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing libpve-access-control (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of clvm:
clvm depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing clvm (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-storage-perl:
libpve-storage-perl depends on clvm; however:
Package clvm is not configured yet.

dpkg: error processing libpve-storage-perl (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of qemu-server:
qemu-server depends on libpve-storage-perl; however:
Package libpve-storage-perl is not configured yet.
qemu-server depends on pve-cluster; however:
Package pve-cluster is not configured yet.
qemu-server depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing qemu-server (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of resource-agents-pve:
resource-agents-pve depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing resource-agents-pve (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-manager:
pve-manager depends on qemu-server (>= 1.1-1); however:
Package qemu-server is not configured yet.
pve-manager depends on pve-cluster (>= 1.0-29); however:
Package pve-cluster is not configured yet.
pve-manager depends on libpve-storage-perl; however:
Package libpve-storage-perl is not configured yet.
pve-manager depends on libpve-access-control (>= 3.0-2); however:
Package libpve-access-control is not configured yet.
pve-manager depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.
pve-manager depends on resource-agents-pve; however:
Package resource-agents-pve is not configured yet.
pve-manager depends on fence-agents-pve; however:
Package fence-agents-pve is not configured yet.

dpkg: error processing pve-manager (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of vzctl:
vzctl depends on pve-cluster; however:
Package pve-cluster is not configured yet.
vzctl depends on libpve-storage-perl; however:
Package libpve-storage-perl is not configured yet.

dpkg: error processing vzctl (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve-2.6.32:
proxmox-ve-2.6.32 depends on pve-manager; however:
Package pve-manager is not configured yet.
proxmox-ve-2.6.32 depends on qemu-server; however:
Package qemu-server is not configured yet.
proxmox-ve-2.6.32 depends on vzctl (>= 3.0.29); however:
Package vzctl is not configured yet.

dpkg: error processing proxmox-ve-2.6.32 (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
pve-cluster
redhat-cluster-pve
fence-agents-pve
libpve-access-control
clvm
libpve-storage-perl
qemu-server
resource-agents-pve
pve-manager
vzctl
proxmox-ve-2.6.32
E: Sub-process /usr/bin/dpkg returned an error code (1)
A package failed to install. Trying to recover:
Setting up pve-cluster (3.0-8) ...
[....] Starting pve cluster filesystem : pve-cluster[main] crit: Unable to get local IP address
(warning).
invoke-rc.d: initscript pve-cluster, action "start" failed.
dpkg: error processing pve-cluster (--configure):
subprocess installed post-installation script returned error exit status 255
dpkg: dependency problems prevent configuration of pve-manager:
pve-manager depends on pve-cluster (>= 1.0-29); however:
Package pve-cluster is not configured yet.

dpkg: error processing pve-manager (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of proxmox-ve-2.6.32:
proxmox-ve-2.6.32 depends on pve-manager; however:
Package pve-manager is not configured yet.

dpkg: error processing proxmox-ve-2.6.32 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-access-control:
libpve-access-control depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing libpve-access-control (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of qemu-server:
qemu-server depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing qemu-server (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of redhat-cluster-pve:
redhat-cluster-pve depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing redhat-cluster-pve (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of vzctl:
vzctl depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing vzctl (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of resource-agents-pve:
resource-agents-pve depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing resource-agents-pve (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of fence-agents-pve:
fence-agents-pve depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing fence-agents-pve (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of clvm:
clvm depends on redhat-cluster-pve; however:
Package redhat-cluster-pve is not configured yet.

dpkg: error processing clvm (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-storage-perl:
libpve-storage-perl depends on clvm; however:
Package clvm is not configured yet.

dpkg: error processing libpve-storage-perl (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
pve-cluster
pve-manager
proxmox-ve-2.6.32
libpve-access-control
qemu-server
redhat-cluster-pve
vzctl
resource-agents-pve
fence-agents-pve
clvm
libpve-storage-perl
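The line that stands out to me is "pve-cluster[main] crit: Unable to get local IP address". From what I have read, pve-cluster resolves the node's hostname via /etc/hosts, and that hostname has to point to the machine's real IP, not to 127.0.x.x. So maybe an entry like this is what is missing (hostname and address below are only placeholders for my real ones):

Code:

# /etc/hosts
127.0.0.1       localhost
192.168.1.10    pve1.example.local pve1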

Add NODE ERROR!!! Proxmox VE 3.1

Hello, when I try to add a node to the cluster, it gives me this error.


NODE MASTER: 192.168.1.35

Command run from the client node:
#> pvecm add 192.168.1.35

ERROR MSG:
"Permission denied ( publickey, password).Unable to add node: command fialed ( ssh 192.168.1.35 --BatchMode=yes pvecm addnode nodo02 --force 1)"



But when I connect manually via SSH to the master node's IP, I can log in with no problems.
Am I doing something wrong?
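For what it's worth, this is how I understand the check that pvecm needs to pass: key-based root SSH from this node to the master, with no password prompt. I was going to test it like this (untested):

Code:

ssh -o BatchMode=yes root@192.168.1.35 /bin/true   # must succeed without asking for a password
ssh-copy-id root@192.168.1.35                      # if it doesn't, push this node's root key first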

Thank you!

NFS performance and mount options

Hi,

I have an old NAS sharing NFS which has been performing well for years; not excellent, but good enough for its job.
Now I have a new NAS sharing NFS which is really slow...

old NAS specs: qnap ts809-rp, 8x 1.5tb SAMSUNG HD154UI 1AG0 in 1 raid5 vol, Core 2 Duo 2.8 GHz, 2GB DDRII RAM, 1GB eth MTU 1500
new NAS specs: qnap ts879-rp, 7x 2.0tb Hitachi HUA723020ALA640 MK7O in 1 raid5 vol, Quad Core Xeon® E3-1225 v2 Processor 3.2 GHz, 4 GB DDR3 ECC RAM, 1GB eth MTU 1500

I've run pveperf on both, from two cluster nodes, and got these results:

Code:

# pveperf /mnt/pve/ts809/ (old NAS)
CPU BOGOMIPS:      72527.28
REGEX/SECOND:      947321
HD SIZE:          9617.06 GB (ts809:/myshare)
FSYNCS/SECOND:    1740.50
DNS EXT:          57.34 ms
DNS INT:          1.33 ms

# pveperf /mnt/pve/ts879/ (new NAS)
CPU BOGOMIPS:      72527.28
REGEX/SECOND:      948892
HD SIZE:          11081.12 GB (ts879:/myshare)
FSYNCS/SECOND:    204.80
DNS EXT:          135.45 ms
DNS INT:          0.67 ms

This puzzled me a bit. I know these are software RAID units, but I expected the new NAS to perform far better than the old one...
so I looked at the mount options (I mounted both units through the PVE GUI):

Code:

IP_of_ts809:/iso_images on /mnt/pve/ts809 type nfs (rw
relatime
vers=3
rsize=262144
wsize=262144
namlen=255
hard
proto=tcp
timeo=600
retrans=2
sec=sys
mountaddr=IP_of_ts809
mountvers=3
mountport=659
mountproto=udp
local_lock=none
addr=IP_of_ts809)

IP_of_ts879:/Download on /mnt/pve/ts879 type nfs (rw
relatime
vers=3
rsize=32768
wsize=32768
namlen=255
hard
proto=tcp
timeo=600
retrans=2
sec=sys
mountaddr=IP_of_ts879
mountvers=3
mountport=59669
mountproto=udp
local_lock=none
addr=IP_of_ts879)

I think rsize and wsize are involved... but how can I change them (and to which values, and how do I find the right ones)?
And why did PVE mount the two shares in such different ways?
In case it makes a difference:
- ts809 was mounted on a cluster node while it was still 2.x, before upgrading it to 3.1
- ts879 was mounted on another cluster node after upgrading it from 2.x to 3.1

Do you have any hints about what is happening and how to get the best out of the two units? Other tests I could run?

I can run tests from the CLI on both NAS boxes, but they run a custom QNAP Linux with no apt-get, and I don't want to break anything there, if possible...
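One idea I had (untested) is to force the larger buffers on the new unit through the storage's "options" line in /etc/pve/storage.cfg and then unmount/remount the share; roughly like this (the content/maxfiles lines are just placeholders for whatever is there now):

Code:

nfs: ts879
        path /mnt/pve/ts879
        server IP_of_ts879
        export /Download
        options vers=3,rsize=262144,wsize=262144
        content images
        maxfiles 0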

Thanks, Marco

1 PCI Card, 4 Devices - PCI Passthrough

Hello there,

First of all: Proxmox is very nice, good work. I have used VirtualBox before.

I have an ASRock 970 board with an AMD 6-core CPU, IOMMU enabled in the BIOS, and the kernel boot option "amd_iommu=on".
dmesg shows me this:

Code:

dmesg | grep -e DMAR -e IOMMU
Please enable the IOMMU option in the BIOS setup
AMD-Vi: Enabling IOMMU at 0000:00:00.2 cap 0x40

My DVB card (only one card) shows these lspci entries on the Proxmox host:

Code:

03:05.0 Multimedia video controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder (rev 05)
03:05.1 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [Audio Port] (rev 05)
03:05.2 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [MPEG Port] (rev 05)
03:05.4 Multimedia controller: Conexant Systems, Inc. CX23880/1/2/3 PCI Video and Audio Decoder [IR Port] (rev 05)

Also, I have put all dvb* and card-specific modules on the blacklist and set the option:
echo "options kvm allow_unsafe_assigned_interrupts=1" > /etc/modprobe.d/kvm_iommu_map_guest.conf


If I use the line "hostpci0: 03:05.0" in the VM .conf, it won't start (parsing error).
With "hostpci 03:05.0" it starts... (hostpci without the zero).

Then, inside the VM, I see this:

Code:

lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
00:12.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)

Where is my device, and where is the "rest" of it?
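My guess so far: "hostpci0: 03:05.0" would only hand over function 0, so the other functions (.1, .2 and .4) would stay on the host anyway. If I get past the parsing error, I assume I would need one hostpciN entry per function in the VM config, something like:

Code:

hostpci0: 03:05.0
hostpci1: 03:05.1
hostpci2: 03:05.2
hostpci3: 03:05.4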


Jan

Leaking local IP addresses to the external interface through NAT on Linux KVM

I wanted to ask about a network setup using NAT and KVM with Proxmox.
I am using Hetzner as my service provider, and basically have the following configuration:

In sysctl.conf I have:
Code:

net.ipv4.ip_forward=1
and my network setup is

Code:

# Loopback device:
auto lo
iface lo inet loopback


# device: eth0
auto  eth0
iface eth0 inet static
  address  xx.xx.xx.42
  broadcast xx.xx.xx.63
  netmask  255.255.255.224
  gateway  xx.xx.xx.33
  # default route to access subnet
  up route add -net xx.xx.xx.32 netmask 255.255.255.224 gw xx.xx.xx.33 eth0


auto vmbr0
iface vmbr0 inet static
    address  10.0.0.254
    netmask  255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0

and I use NAT for my guest KVM machines:

Code:

iptables -t nat -A POSTROUTING -s '10.0.0.0/24' -j SNAT --to-source xx.xx.xx.42
Everything was working great, but today my server was blocked by Hetzner with the following message:

Quote:

Dear Sir or Madam
We have noticed that you have been using other IPs from the same subnet in addition to the main IP mentioned in the above subject line.
As this is not permitted, we regret to inform you that your server has been deactivated.
Guidelines regarding further course of action may be found in our wiki: http://wiki.hetzner.de/index.php/Lei...versperrung/en.
Yours faithfully
Your Hetzner Support Team
They also sent a log with my local IP addresses, which I have verified really are visible on eth0 of my hardware node with tcpdump:
Code:

09:42:16.976198 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.2312 > 192.198.93.78.80: Flags [F.], seq
3579355710, ack 2348566885, win 65101, length 0
09:42:17.076330 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.2271 > 65.75.156.119.80: Flags [F.], seq
3329167346, ack 2138564996, win 65408, length 0
09:42:17.177311 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.2096 > 149.47.143.131.80: Flags [F.], seq
833600034, ack 1463451994, win 65205, length 0
09:42:17.378092 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.2160 > 193.234.222.240.80: Flags [F.], seq
380954537, ack 1918089133, win 65530, length 0
09:42:17.478724 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.2522 > 199.231.188.243.80: Flags [F.], seq
2524482819, ack 2992113059, win 64726, length 0
09:42:17.482664 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.2376 > 118.139.177.199.80: Flags [F.], seq
3912490494, ack 3173571000, win 65464, length 0
09:42:17.512824 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.3493 > 192.126.137.25.8800: Flags [R], seq
714854646, win 0, length 0
09:42:17.512847 a1:b2:c3:d4:e5:f6 > aa:bb:cc:dd:ee:ff, ethertype IPv4
(0x0800), length 60: 10.0.0.7.3493 > 192.126.137.25.8800: Flags [R], seq
714854646, win 0, length 0

Is there any way to keep my 10.0.0.0/24 IPs from leaking out of eth0?
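My current theory (untested) is that these are stray FIN/RST packets that conntrack already considers INVALID, so the SNAT rule never rewrites them and they escape with their private source address. If that's right, dropping INVALID packets before they leave eth0 should stop the leak:

Code:

iptables -A FORWARD -o eth0 -m conntrack --ctstate INVALID -j DROP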

My software versions are:
Code:

cat /etc/debian_version
7.2

uname -a
Linux 1.server.com 2.6.32-25-pve #1 SMP Tue Oct 1 09:17:16 CEST 2013 x86_64 GNU/Linux

pveversion -v
proxmox-ve-2.6.32: 3.1-113 (running kernel: 2.6.32-25-pve)
pve-manager: 3.1-17 (running version: 3.1-17/eb90521d)
pve-kernel-2.6.32-25-pve: 2.6.32-113
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-7
qemu-server: 3.1-5
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-13
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

Getting started with a small cluster but trying to get it right from the start

I have been running VMware for a few years now, and the main problem is getting complete snapshots done efficiently and effectively without the several-thousand-dollar package.

A few years back I got the basic package, but let's not spend time talking about that.

I have been testing Proxmox on the side. I had a 2.x machine running for my own testing on some old hardware (doing great - install was out of the box from the ISO CD). I installed Proxmox 3.1 (free repository) on top of Debian Wheezy on an external server and the KVMs are running quite nicely.

At the office, on my testing grounds, I started building a test cluster (not HA), and everything points to Proxmox suiting my needs just fine.

So now I am just looking for a little advice. I intend to bring up three external servers, mixed KVM and OpenVZ, not HA. I do not intend to set up a storage server. I was playing with the round-robin backup idea... this will not be a nightly thing, rather a weekend thing; in other words I will still run nightly backups of the important data, and the KVM snapshot/backup is more of a convenience.

I will be using Intel® Core™ i7-4770 CPUs, with 32 GB RAM in each machine.

I was planning on using two SATA drives in each machine running software RAID level 1... but I just finished reading another thread and this seems like a horrible idea for Proxmox.

I was toying with the idea of setting up an OpenVPN net and running a fourth node here at my office. The advantage would be that I could set things up here and then "deploy" the KVM, but as this is nothing I will be doing daily or even weekly, I see no real advantage.

The last thing you would need to know is what on earth I am planning to run on this setup. It is mainly KVMs (or OpenVZ containers) running some variation of Debian, of which more than 50% run some version of Tomcat with a testing or staging function for our programmers. The other 50% either run a few small websites for customers or specialised applications inside Tomcat for customers.

For anything running much higher loads I still plan to set up a single separate server, as I have done in the past. The above is mainly there to relieve me of the VMware ESXi pains I am having, or to suit low-traffic customer projects that don't justify renting a whole server.

I hope I am not boring anyone; I will appreciate any hints or constructive comments.

NFS VZ QUOTA file softlimit exceeded

I have a recurring issue with a Zoneminder container.
It can easily be reproduced with any CentOS 6 x64 template and Zoneminder 1.26 or above.
A clean installation in a standard VZ container always produces the following error in the CT and on the Proxmox host:
Code:

kernel: VZ QUOTA: file softlimit exceeded for id=116
My PVE version:
Code:

pveversion
pve-manager/3.1-24/060bd5a6 (running kernel: 2.6.32-26-pve)
My storage.cfg:
Code:

cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir
        maxfiles 0


nfs: OpenFiler
        path /mnt/pve/OpenFiler
        server 192.168.222.150
        export /mnt/vg_nfs/nfs-ext3/nfs-datastore
        options vers=3
        content images,backup,rootdir
        maxfiles 1


nfs: backups01
        path /mnt/pve/backups01
        server 192.168.222.11
        export /vdev1/proxBKUP
        options vers=3
        content backup
        maxfiles 3


nfs: iso01
        path /mnt/pve/iso01
        server 192.168.222.11
        export /vdev1/proxISO
        options vers=3
        content iso
        maxfiles 0


nfs: templates01
        path /mnt/pve/templates01
        server 192.168.222.11
        export /vdev1/proxISO
        options vers=3
        content vztmpl
        maxfiles 0


nfs: Omni01KVM
        path /mnt/pve/Omni01KVM
        server 192.168.222.11
        export /vdev1/proxKVM
        options vers=3
        content images
        maxfiles 0


nfs: Omni1CT
        path /mnt/pve/Omni1CT
        server 192.168.222.11
        export /vdev1/proxCT
        options vers=3
        content rootdir
        maxfiles 0


nfs: Omni2CT
        path /mnt/pve/Omni2CT
        server 192.168.222.12
        export /poolPROX/proxCT
        options vers=3
        content rootdir
        maxfiles 0


nfs: Net30Backup
        path /mnt/pve/Net30Backup
        server 192.168.222.12
        export /poolPROX/BACKUPS
        options vers=3
        content backup
        maxfiles 30


nfs: Omni2KVM
        path /mnt/pve/Omni2KVM
        server 192.168.222.12
        export /poolPROX/proxKVM
        options vers=3
        content images
        maxfiles 0


nfs: backups2
        path /mnt/pve/backups2
        server 192.168.222.12
        export /poolPROX/BACKUPS
        options vers=3
        content rootdir,backup
        maxfiles 3

My 116.conf:
Code:

cat /etc/pve/openvz/116.conf
ONBOOT="yes"


PHYSPAGES="0:4096M"
SWAPPAGES="0:1024M"
KMEMSIZE="1861M:2048M"
DCACHESIZE="930M:1024M"
LOCKEDPAGES="2048M"
PRIVVMPAGES="unlimited"
SHMPAGES="unlimited"
NUMPROC="unlimited"
VMGUARPAGES="0:unlimited"
OOMGUARPAGES="0:unlimited"
NUMTCPSOCK="unlimited"
NUMFLOCK="unlimited"
NUMPTY="unlimited"
NUMSIGINFO="unlimited"
TCPSNDBUF="unlimited"
TCPRCVBUF="unlimited"
OTHERSOCKBUF="unlimited"
DGRAMRCVBUF="unlimited"
NUMOTHERSOCK="unlimited"
NUMFILE="unlimited"
NUMIPTENT="unlimited"


# Disk quota parameters (in form of softlimit:hardlimit)
DISKSPACE="104857600:115343360"
DISKINODES="20000000:22000000"
QUOTATIME="0"
QUOTAUGIDLIMIT="0"


# CPU fair scheduler parameter
CPUUNITS="1000"
CPUS="4"
HOSTNAME="zwtzone.hcpct2.hc.hctx.net"
SEARCHDOMAIN="hcpct2.hc.hctx.net"
NAMESERVER="10.30.27.160"
NETIF="ifname=eth0,bridge=vmbr1,mac=4E:AD:2A:ED:82:50,host_ifname=veth116.0,host_mac=72:ED:2B:53:42:F0"
VE_ROOT="/var/lib/vz/root/116"
VE_PRIVATE="/mnt/pve/Omni1CT/private/116"
OSTEMPLATE="centos-6-x86_64.tar.gz"

The container was created with 100 GB of space. All attached and local storage pools have more than 100 GB available. The CT is supposed to be located on Omni1CT.

Any help is appreciated. Please let me know what I should do, or if you need any additional info.
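From the wording of the error I think it is the file (inode) soft limit from DISKINODES that is being hit, not disk space. As a stopgap I could probably raise it with something like the following (the values are just an example) while I figure out why Zoneminder creates so many files:

Code:

vzctl set 116 --diskinodes 40000000:44000000 --save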

Short question: 2 different networks working?

Hi there,

I'm considering the following configuration:

There are two different DSL networks (DSL1 and DSL2) available.

I want to put an additional NIC in my server and bridge that NIC to one VM (this VM should reach the internet via DSL2).

The other VMs are connected to the internet via DSL1.

The DSL1 network is also used to administer the Proxmox host.

Of course, the networks have different internal IP ranges (e.g. 192.168.100.xxx and 192.168.200.xxx)!

Will this work without any issues?
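For reference, this is roughly what I had in mind for /etc/network/interfaces on the host: a second bridge on the new NIC (eth1 is just my assumption for its name, untested), which that one VM would then use:

Code:

auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0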

Thanks!


NFS not sending backups to remote computer.

Hi,
I'm trying to set up sending Proxmox backups to a remote computer, but I receive the following errors:
# pvesm list Backups
Backups:163/vm-163-disk-1.raw raw 107374182400 163
Backups:backup/archive.tar tar 3123701760
Backups:backup/vzdump-qemu-162-2013_11_26-17_15_07.vma.lzo vma.lzo 1239901701
Backups:backup/vzdump-qemu-163-2013_11_26-17_21_17.vma.lzo vma.lzo 2076754804



# /usr/bin/rpcinfo -p IPAddressOfRemoteComputer
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused


# rpcinfo -p IPAddressOfRemoteComputer
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused


# showmount -e IPAddressOfRemoteComputer
clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)

Should the portmapper be installed? I would have thought that if the portmapper were needed, Proxmox would already have it installed.
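As far as I understand it, the "Connection refused" is coming from the remote computer, not from Proxmox: rpcinfo and showmount talk to the portmapper on the NFS server side, so rpcbind/portmap and an NFS server need to be running there. Assuming the remote machine is Debian/Ubuntu-based (and the export path and subnet below are only placeholders), something like this on that machine might be what's missing:

Code:

apt-get install nfs-kernel-server     # pulls in the portmapper (rpcbind/portmap)
echo "/srv/backups 192.168.0.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra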

WebGUI goes offline when running NFS Backup

This happened again. When I run an NFS backup, the web GUI goes sideways: all nodes show as offline even though everything is running just fine. Even after hitting "Stop" on the backup, the backup progress bar keeps running (see attached backup.png).

Running "service pvedaemon restart", "service pvestatd restart" and "service pveproxy restart" temporarily brings everything back to normal, but then it goes offline again. I double-checked whether the VMs that show as offline are actually running: all of them are functioning properly even though they don't show as online. The backup runs over a separate LAN/subnet/switch. Any clues?
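Next time it happens I plan to check whether pvestatd is simply blocking on the NFS storage rather than anything actually being down; roughly this from a node shell while the GUI shows everything offline (<nfs-server-ip> is a placeholder):

Code:

pvesm status                        # does it hang when it reaches the NFS storage?
time showmount -e <nfs-server-ip>   # is the NFS server itself answering slowly?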

RGManager won't start

Hi,

I'm trying to set up a test cluster with 3 nodes in HA with a fencing device.
When I try to activate it on the HA tab I get an error.

When I go to Node > Services > start RGManager it says:
Starting Cluster Service Manager: [ OK ]
TASK OK

But the status says "stopped".

When I use /etc/init.d/rgmanager start on the CLI it says the same:



root@Proxmox01:~# /etc/init.d/rgmanager start
Starting Cluster Service Manager: [ OK ]
root@Proxmox01:~#



root@Proxmox01:~# /etc/init.d/rgmanager status
rgmanager is stopped



As far as I can tell, everything is configured properly:

- 3 Proxmox VE 3.1 nodes
- Nodes are joined in a cluster
- Shared storage via NFS

Configured for HA (I followed the instructions from the Fencing wiki page):

http://pve.proxmox.com/wiki/Fencing#...g_on_all_nodes

Failover configured by CLI

http://pve.proxmox.com/wiki/Fencing#...e_cluster.conf

Failover configuration (cluster.conf):

<?xml version="1.0"?>
<cluster config_version="37" name="pilotfase">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="192.168.1.60" login="hpapc" name="apc" passwd="12345678" power_wait="10"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="Proxmox01" nodeid="1" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="1" secure="on"/>
          <device name="apc" port="2" secure="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="Proxmox02" nodeid="2" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="3" secure="on"/>
          <device name="apc" port="4" secure="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="Proxmox03" nodeid="3" votes="1">
      <fence>
        <method name="power">
          <device name="apc" port="5" secure="on"/>
          <device name="apc" port="6" secure="on"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
</cluster>
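I guess the next things I can check are whether this node has actually joined the fence domain and what rgmanager logs when it exits; roughly (assuming the stock redhat-cluster log location):

Code:

fence_tool ls                      # is fenced running and has this node joined the fence domain?
clustat                            # quorum and membership as cman sees it
tail -n 20 /var/log/cluster/rgmanager.log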

I hope somebody has an idea how to solve this issue.

Best Regards,
Jimmy

rebooting node, unmounting configfs takes forever...

It's still happening... what can be the cause, and how do I solve it?

cman status:

# service cman status
Found stale lock file

What does that mean?

[update]
meanwhile I rebooted with

echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger

but I would like to understand what is happening and how to handle it properly...

Thanks, Marco

[solved] HN fails to boot after update, RAMDISK and kernel panic

Really unusual problem today; I figured it out after an hour of Google and forum searching, so I'm posting it just for others.

I won't get into specifics, but the main issue was that on reboot I got a kernel panic saying basically "ramdisk: write incomplete", along with some VFS errors. To fix it, I booted from a Debian Squeeze netinst CD into advanced/rescue mode. When I got to the step for mounting a root partition, I chose my root (/dev/mapper/pve-root), executed a shell in it, and then simply ran "update-initramfs -u -t -k all".

Rebooted and was good to go. I'm still trying to figure out how the h*ll the initrd got corrupted in the first place, but whatever - hopefully this helps someone in a later search.
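In short, the fix boiled down to this, run from the rescue shell with /dev/mapper/pve-root mounted as the root filesystem:

Code:

update-initramfs -u -t -k all   # rebuild the initramfs for every installed kernel
reboot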

BTW - Proxmox devs - thanks for the awesome work!

Adding external node to existing local cluster

Sorry if this has been asked before; I searched until I gave up and signed up to harass everyone.

I have an existing local cluster of two nodes, and I would like to add an external (colo) node to that cluster.
I assume this can be done in several ways, but I'd like to do it the correct way.

How is this accomplished? Links or advice would be appreciated.

Thanks, Rick

USB3 or eSata for vm backups?

Any opinions?

Is the USB 3 support in Proxmox reliable enough for backups?

If you get a decent dock, can you swap regular SATA drives every day without damaging them?

A couple of our VMs top 300 GB, so fast backups are desirable :)

Thanks - Lindsay

PHP API cannot Clone VM (qemu)

Hello,
I hope you understand my problem. Sorry for my English, I am from Germany.

I would like to develop an API client for hosting qemu VMs.

Getting the status of all VMs with the GitHub example works perfectly, but cloning a VM does not. The Proxmox API does not return an error or notice; I only get "array(1) { [0]=> bool(false) }", which looks like false :D. The VM is not created, and I cannot see what's wrong. That is all Proxmox tells me.
My PHP code:
PHP Code:

if ($pve2->constructor_success()) {

    if ($pve2->login()) {

        // clone the Testsystem VM (106) when ?reinstall=108 is requested
        if (isset($_GET["reinstall"]) && $_GET["reinstall"] != "") {

            $vmid = $_GET["reinstall"];
            if ($vmid == "108") {

                $parameters = array();
                $parameters["newid"]   = 108;
                $parameters["node"]    = "srv01";
                $parameters["vmid"]    = 106;
                $parameters["full"]    = true;
                $parameters["storage"] = "local";
                $parameters["format"]  = 1;

                $status[] = $pve2->post("/nodes/srv01/qemu/108/clone", $parameters);
                var_dump($status);
                exit;
            }
        }
    }
}
Why does it not work? I hope someone understands my problem and can help me.
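Before digging further into the PHP side, maybe I should test the same call directly on the node with pvesh to see whether the API accepts it at all (as far as I understand, the source VM id goes in the path and newid names the clone, so I'm not sure my extra "vmid" parameter is even needed):

Code:

pvesh create /nodes/srv01/qemu/106/clone --newid 108 --full 1 --storage local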

Greetings from Germany

Interfaces displayed as not active

Hello,

I have an issue with one Proxmox server. It is a fresh install, so nothing special.
When I go to the network interfaces, every interface is displayed as "Not active" (Active column).

The bridge works fine.
How does Proxmox decide whether an interface is active or not? I would guess it is simply whether the interface is up (and that seems to be the case on my other Proxmox servers).

How can I make the interfaces show as active? I have a third-party plugin that uses the Proxmox API to list network interfaces, and right now no interface is available there.

Thanks for the help