Proxmox + Deluged (Bittorrent server)

Hello, I was trying to get Proxmox and Deluged working but ran into a problem where the download speed would swing wildly, anywhere from 11 MB/s down to 200 KB/s and everywhere in between. I tried a different kernel, but that didn't help.
This happened both inside a container and on the host itself.
Disk I/O would also climb to 50% while downloading a torrent. I thought it was a disk I/O problem and tried a few disk I/O optimizations, which did not help either. The people in ##proxmox on FreeNode helped a lot, but I still could not figure out the cause.

Solution:
Finally I installed Debian 7.5 Wheezy and installed Proxmox on top of it using the following guide: https://pve.proxmox.com/wiki/Install..._Debian_Wheezy This solved the problem; I no longer see speed spikes, and I/O stays under 10% while under load.

Pay CLOSE ATTENTION and don't forget to partition properly.
---Partitioning guide - a really simple guide that got me through it :)
http://forum.online.net/index.php?/t...capabillities/
-A few things of note:
Before you run vgextend, you need to run vgcreate first.
Toward the end, just remember to create the /var/lib/vz directory so it can be mounted (a rough sketch of these steps follows).
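Roughly, the partitioning steps boil down to something like this. This is only a minimal sketch; the device, volume group and filesystem names are examples and need to be adapted to your own layout.

Code:

# Create the volume group on the spare partition first -- vgcreate
# comes before any vgextend onto additional partitions:
vgcreate pve /dev/sda3
# vgextend pve /dev/sdb1   # only if you later add another partition

# Carve out a logical volume for the VM/container data and format it:
lvcreate -n data -l 100%FREE pve
mkfs.ext3 /dev/pve/data

# Proxmox expects /var/lib/vz to exist before it can be mounted:
mkdir -p /var/lib/vz
mount /dev/pve/data /var/lib/vz
echo '/dev/pve/data /var/lib/vz ext3 defaults 0 1' >> /etc/fstab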


-IRC stuff which was really helpful
04:15 < dima202> how do i find the name of my linux-image-kernel_3.10.23 (my kernel name 'linux-image-3.10.23-amd64')
04:24 < Jb_boin> and its never a good idea to remove the running kernel, at least reboot on the "new" one and check that it works well before thinking about removing the previous one
04:30 < Jb_boin> if the server is already booting on the local hard drive you just have to figure out in which position is the proxmox kernel on the grub config file and change the default value
04:31 < Jb_boin> by default its 'set default="0"' where 0 means it will boot on the first kernel entry listed on /boot/grub/grub.cfg
04:33 < dima202> http://pastebin.com/KDvtKC8W
04:35 < Jb_boin> well if you dont want to manually do it, you can edit /etc/default/grub and change the DEFAULT=0 to DEFAULT=1 then run update-grub
--Rebooted Proxmox
04:48 < dima202> unfortunately now proxmox is behaving quite oddly
04:49 < Jb_boin> are you on the right kernel at least? :)
04:50 < dima202> yeah looks like 2.6.32-26-pve
04:50 < Jb_boin> what is the issue now?
04:52 < dima202> proxmox shows 2 containers which previously failed to install
04:52 < dima202> can't delete them
04:52 < dima202> cant create new ones because it says already exists no matter the ct number
--Found the answer on another thread: go to /etc/pve/nodes/<YOUR HOST>/openvz
--Fixed the odd behavior by updating /etc/apt/sources.list with the latest repositories: https://pve.proxmox.com/wiki/Package_repositories
--apt-get update
--apt-get upgrade
05:09 < dima202> Jb_boin: your kung fu is awesome. i think i got it to the way i can use it
05:15 < Jb_boin> :)
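Pulling the steps from the log together, the kernel switch and the container cleanup look roughly like this. This is only a sketch; the grub entry number, host name and CT IDs are placeholders, not values from this system.

Code:

# List the available kernels and pick the grub entry to boot
# (0 is the first "menuentry" in /boot/grub/grub.cfg):
grep menuentry /boot/grub/grub.cfg
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=1/' /etc/default/grub
update-grub
reboot

# After the reboot, confirm you are on the intended kernel:
uname -r
pveversion -v

# Remove leftover configs of containers that failed to install
# (<yourhost> is the node name shown in the web interface):
ls /etc/pve/nodes/<yourhost>/openvz/
rm /etc/pve/nodes/<yourhost>/openvz/<ctid>.conf

# Point apt at the current repositories, then:
apt-get update && apt-get upgrade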

--Useful Stuff
pveversion -v -- shows the current kernel version & more.

kernel cleanup on remove

This is not really a big deal, but why don't old initrd files get removed when a kernel is removed? (A cleanup sketch follows the listing below.)

Code:

# ls -l /boot/
total 71763
-rw-r--r-- 1 root root  105982 Nov  8 08:42 config-2.6.32-34-pve
drwxr-xr-x 3 root root    5120 Nov  9 09:12 grub
-rw-r--r-- 1 root root 16485840 May 27 08:11 initrd.img-2.6.32-29-pve
-rw-r--r-- 1 root root 16533820 Jun 25 05:21 initrd.img-2.6.32-30-pve
-rw-r--r-- 1 root root 16535006 Jul 28 12:57 initrd.img-2.6.32-31-pve
-rw-r--r-- 1 root root 16571003 Nov  9 09:12 initrd.img-2.6.32-34-pve
drwxr-xr-x 2 root root    12288 May 27 05:04 lost+found
-rw-r--r-- 1 root root  2637662 Nov  8 08:42 System.map-2.6.32-34-pve
-rw-r--r-- 1 root root  4298448 Nov  8 08:42 vmlinuz-2.6.32-34-pve
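The initrd images are generated after package installation (by update-initramfs), so some kernel packages never clean them up on removal. A hedged cleanup sketch, using the version numbers from the listing above -- double-check the running kernel before deleting anything:

Code:

# Never remove files belonging to the running kernel:
uname -r

# See whether the old kernel packages are fully purged or only
# removed ("rc" state leaves config/initrd remnants behind):
dpkg -l 'pve-kernel-*'

# Purge them and/or delete the orphaned initrd images by hand:
apt-get purge pve-kernel-2.6.32-29-pve pve-kernel-2.6.32-30-pve pve-kernel-2.6.32-31-pve
rm -i /boot/initrd.img-2.6.32-29-pve /boot/initrd.img-2.6.32-30-pve /boot/initrd.img-2.6.32-31-pve
update-grub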

Networking somehow broken. Kinda confusing.

I have somehow created a situation I can't explain and would really appreciate some help.
I'll copy-paste the pastebin I posted in IRC, where nobody was able to help:


Server 1 is the "frontend server", which should communicate over SOAP with Server 2. Both are virtual Proxmox OpenVZ servers with their own IPs.

Server 1 runs Debian Wheezy
Server 2 runs CentOS 6 (with InterWorx panel)

Somehow Server 2 can't ping Server 1, while Server 1 can ping Server 2 but gets no SOAP results because it cannot connect to the host.

Server 1:

Code:

# ping SERVER2
PING SERVER2 (SERVER2) 56(84) bytes of data.
64 bytes from SERVER2: icmp_req=1 ttl=64 time=0.026 ms
64 bytes from SERVER2: icmp_req=2 ttl=64 time=0.027 ms

# curl https://SERVER2:2443/soap?wsdl --insecure
curl: (7) couldn't connect to host


Server 2:

Code:

ping SERVER1
PING SERVER1 (SERVER1) 56(84) bytes of data.
^C
--- SERVER1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2535ms



Local:
Code:

$ sudo ping SERVER2
PING SERVER2 (SERVER2) 56(84) bytes of data.
64 bytes from SERVER2: icmp_seq=1 ttl=53 time=24.9 ms
64 bytes from SERVER2: icmp_seq=2 ttl=53 time=25.3 ms

$ curl https://SERVER2:2443/soap?wsdl --insecure
<?xml version="1.0"?>
... ALL GOOD ...

There is no firewall enabled currently and I really have no idea what could be causing this.
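For what it's worth, asymmetric reachability between two OpenVZ containers on the same host is usually a routing, venet/veth or filtering issue rather than anything SOAP-specific. A diagnostic sketch (not a known fix) of things worth checking on the host and inside both containers:

Code:

# On the Proxmox host: is forwarding on, and are both container
# addresses present in the routing table?
cat /proc/sys/net/ipv4/ip_forward
ip route | grep -E 'SERVER1|SERVER2'

# Inside each container: addresses, routes, and whether the SOAP
# port is actually listening:
ip addr
ip route
netstat -tlnp | grep 2443

# Rule out packet filtering on the host and in both containers,
# even if no firewall is enabled "on purpose":
iptables -L -n -v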

Use pvcreate on existing hard disk

I tried to mount my Proxmox hard disk in another machine, and I wrongly ran these commands:

Code:

proxmox-ve:~# pvcreate /dev/sdg1
  Physical volume "/dev/sdb1" successfully created
proxmox-ve:~# vgcreate usb-stick /dev/sdg1
  Volume group "usb-stick" successfully created

And right now I don't have any access to my DATA :( What can I do now?

Only usb-stick shows up in my vgscan list, and lvdisplay and the other commands don't show anything!
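The pvcreate/vgcreate pair overwrote the LVM label and metadata on that partition, but the data itself is usually still there. If a copy of the old LVM metadata archive is available (normally under /etc/lvm/archive on the original Proxmox install, or in a backup of it), a recovery along these lines may be possible. This is a heavily hedged sketch: file names and UUIDs are placeholders.

Code:

# Stop writing to the disk immediately.

# Find the archived metadata of the old volume group (e.g. "pve"):
ls /etc/lvm/archive/
grep -l sdg1 /etc/lvm/archive/*.vg   # or search for the old PV UUID

# Remove the accidental VG, then recreate the PV with its ORIGINAL
# UUID as recorded in the archive file:
vgremove usb-stick
pvcreate --uuid "<old-PV-UUID>" --restorefile /etc/lvm/archive/pve_00042-1234567890.vg /dev/sdg1
vgcfgrestore -f /etc/lvm/archive/pve_00042-1234567890.vg pve
vgchange -ay pve
lvscan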

Load avg difference between bare metal and Proxmox

Hi all,
I have a Dell R415, dual 8-core Opterons (16 cores total) with 64GB of RAM.
I migrated an application (Nginx + PHP-FPM) from Debian on bare metal to Proxmox 3.3 with kernel 3.10.0-5-pve.
I use only two VMs, with 8 vCPUs in total (2 sockets with 4 cores) and 8-28GB of dynamic RAM.

Everything seems to work fine, but the load average is very high: about 10, up from ~2 before.
On the two VMs the load runs from 5 with low traffic to 12+ (with spikes above 20) under high traffic, while on the previous bare-metal setup the load average stayed low. Why?

No I/O problems; the volumes are local on RAID-1 with two Samsung 850 PROs:
top -n1:
%Cpu(s): 48.9 us, 7.0 sy, 0.0 ni, 40.6 id, 0.7 wa, 0.0 hi, 1.4 si, 1.5 st

Local Backup partition not showing free space correctly

I am running Proxmox 3.3-1/a06c9f73 on all my nodes. I have a local partition that is exported via NFS for another node to back up to. The disk filled up and backups could no longer complete. From the remote node, under Backups, I deleted some of the older backups. After doing so, the disk utilization did not change, either via NFS or locally.

On the local node.

Code:

df -h /backup
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/lvm_local-backup  3.4T  3.3T  133G  97% /backup

du -hs /backup
1.5T    /backup

It shows the same on the remote node as well.

Code:

df -h /remote/backup
Filesystem        Size  Used Avail Use% Mounted on
10.0.0.1:/backup/  3.4T  3.3T  133G  97% /remote/backup

du -hs /remote/backup
1.5T    /remote/backup

So notice that df shows 3.3T used, but du only shows 1.5T in use! Any idea what could cause something like this?

Eric
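When df and du disagree like this, the space is usually held by files that were deleted but are still open in some process (often the backup or NFS process itself), or by files hidden underneath the mount point. A hedged sketch for tracking it down on the local node:

Code:

# List open-but-unlinked files on that filesystem:
lsof +L1 /backup
# or, without lsof:
ls -l /proc/*/fd 2>/dev/null | grep deleted

# Restarting whatever holds the handles (e.g. the NFS server)
# releases the space:
service nfs-kernel-server restart

# If nothing turns up, check for files written under /backup while
# it was not mounted (briefly unmount if the export allows it):
umount /backup && du -hs /backup && mount /backup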

EXT4 or not EXT4 (+ mount options)

I know this question has been asked a few times already, and I was reading through every topic in this forum I could find about it, but some are outdated or only contain inconclusive answers. The question is: is it safe or even recommended to use EXT4 as the storage file system (in my case local SATA storage, RAID-10) and/or install Proxmox with the boot parameter "linux ext4"? If so, why isn't it the default yet? Have there been any recent reports of bugs with Proxmox and ext4, or is it just to stay "as stable as possible"?

My second question is about the mount options for local ext4 storage on a SATA RAID-10 array with BBU (KVM images only): I noticed that it's possible to gain quite a large performance boost from using "defaults,noatime,nodiratime,data=writeback,nobarrier,nobh,commit=10,nodelalloc" with the setup I mentioned (especially "nodelalloc" increased the fsyncs a lot, and in addition I use "blockdev --setra 512 /dev/mapper/pve-data"). I came up with these settings after a while of research and I'm almost certain they would work great on a Proxmox node, but I'm not absolutely sure how safe and stable they are in a production environment. Would anyone be willing to try these settings in a similar testing environment, or elaborate on any issues that might occur with these mount options?
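For reference, those options would end up in /etc/fstab roughly as below. This is only a sketch of the configuration described above, not a recommendation: data=writeback, nobarrier and nodelalloc all trade crash safety for speed, so they only make sense behind a BBU-protected write-back controller, and even then with caution.

Code:

# /etc/fstab -- local KVM image storage on a BBU-backed RAID-10
/dev/mapper/pve-data  /var/lib/vz  ext4  defaults,noatime,nodiratime,data=writeback,nobarrier,nobh,commit=10,nodelalloc  0  2

# Larger readahead for the data volume (512 sectors = 256 KiB);
# re-apply at boot, e.g. from /etc/rc.local:
blockdev --setra 512 /dev/mapper/pve-data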

Infiniband: Solaris/Nexenta/OmniOS IPoIB performance tuning


Same VLAN for Node and some(!) VMs (bond/bridge problem)

hi,

For external communication I have one bond0 configured with LACP (Cisco). bond0 is a trunk interface carrying all the VLANs I need. The problem is that the external node address (for the web interface) is in the same VLAN that I also need for VMs. How should I configure Proxmox 3.2 so that I can access the web interface (VLAN 555) and also configure a VM to use 555 (Web -> VM > Hardware > Network > VLAN), on the same bond0? I tested several variations, but they all failed.

The background is that, for security reasons, the VM must not be able to set up/configure a VLAN inside the guest (apt-get install vlan ... and fire it up). In other words: the users inside the VM must never see anything other than the configured VLAN -> no trunk traffic inside the VM.

I have a working cluster with 10 nodes (and growing), so migration must remain possible.

Any suggestions?


Update


I have a new version, and it is working. Every VM which uses the same VLAN as the node has to be dropped into the vmbr1 bridge; all others use vmbr0. I hope what I have done is OK.

Code:

# Web interface via VLAN 601
auto bond0.601
iface bond0.601 inet manual
vlan-raw-device bond0

auto vmbr0
iface vmbr0 inet manual
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 1.2.3.4
    netmask 255.255.255.128
    gateway 1.2.3.254
    bridge_ports bond0.601
    bridge_stp off
    bridge_fd 0
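With that layout, a VM that belongs to the node's VLAN simply gets attached to vmbr1 with no VLAN tag of its own (vmbr1 already sits on bond0.601), while all other VMs use vmbr0 plus a tag. An illustrative config line (the MAC address is made up):

Code:

# /etc/pve/qemu-server/<vmid>.conf -- no "tag=" needed on vmbr1:
net0: virtio=DE:AD:BE:EF:06:01,bridge=vmbr1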

About raid controller + cache

Hello!
I have a question.
If I use a RAID controller with BBU and write-back cache, which cache mode should I set in Proxmox: write back, no cache, or something else?
Thanks!
(sorry for my bad english)

Administration of VMs on a central server

Hello Proxmox Community!

I have a project running at the moment and want to hear some opinions from you on whether Proxmox is the right solution for me or not... I have to be honest and say that I am a total beginner in the field of virtualization.
The requirements are written as a user story.

User Story

I want to have access to all VMs on a central server. The server stores all VMs. I want to start, change (by means of a GUI) and clone VMs from anywhere. Within the VMs I want to start more VMs (nested virtualization). Furthermore, I want to use local hardware (USB devices, webcam) on the remote VMs and want the VMs to run fast enough (maybe through paravirtualized drivers). As virtualization software I use KVM and VirtualBox. We use these VMs to demonstrate different things.

Currently I have found two possible solutions:


1) Proxmox
Proxmox seems to be the perfect solution for my problem. Some colleagues tried Proxmox a year and a half ago and found that a newer kernel can't be installed (easily).

I know that Proxmox supports USB/PCI passthrough, nested virtualization for KVM and paravirtualized drivers, but how difficult is it to set these things up?
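For what it's worth, the nested-virtualization part is a small amount of host configuration. A hedged sketch for an Intel host; the module option and CPU type are the usual KVM knobs, and the VM ID is a placeholder:

Code:

# /etc/modprobe.d/kvm-intel.conf -- allow guests to use VMX:
options kvm-intel nested=Y
# reload the module (or reboot), then verify:
cat /sys/module/kvm_intel/parameters/nested

# Give the VM the host CPU type so the guest actually sees the
# virtualization extensions:
qm set <vmid> -cpu host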

Is it possible to allocate space dynamically?



2) Using shell scripts to administer VMs

Starting a VM from a shell seems to be quite a difficult and time-consuming task. To make things easier, scripts can be written to administer the VMs.

• Are there already such scripts available?

Soundcard PCIe Passthrough stuttering

Hi,

I've been using Proxmox for a few months now and I'm very happy with its capabilities. There's a small issue which I can't resolve with the documentation and Google.
I have made a passthrough setup with a PCIe soundcard (Orban 1101e Optimod) using the following config:
Quote:

bootdisk: virtio0
cores: 3
hostpci0: 09:00.0,pcie=1,driver=vfio
machine: q35
memory: 3072
name: Windows
net0: virtio=2E:37:BA:0A:E3:D8,bridge=vmbr0
ostype: win8
smbios1: uuid=68de0a49-f703-4ea9-93ab-f07408d439ce
sockets: 1
virtio0: local:115/vm-115-disk-2.raw,format=raw,size=100G
virtio1: local2:115/vm-115-disk-1.raw,size=1000G
The virtual machine is running Windows 2012 R2, hosting a music player which uses the Orban soundcard to process the audio; the audio then comes back into an audio encoder application which streams it to a Shoutcast server for internet radio. So far so good, everything works.

Problems arise when I copy data (audio files) from a remote network share to the second local disk (vm-115-disk-1.raw). In that case the audio stream itself starts to stutter. Not the encoded audio, but the audio generated by the music player. Assigning more memory or more CPU doesn't seem to have a positive effect; it continues to stutter whenever data is copied.

I was thinking it might be a bandwidth issue from and towards the passed-through soundcard. Does anyone have an idea how to monitor its usage to see whether this hypothesis is right, and even more importantly, a clue how to fix it (e.g. by prioritizing the PCIe passthrough)?

Thanks in advance for thinking :)

To use pve-kernel-3.10 or a higher version, must I have a valid subscription?

I just want to try the implemented PCIe passthrough function. According to the wiki, pve-kernel-3.10 is required at a minimum, but does this update require a subscription?
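As far as I know, the 3.10 kernel is also published in the public pve-no-subscription repository, so a subscription should only be needed for the enterprise repo. A hedged sketch for Proxmox VE 3.x on Wheezy (the exact kernel package name may differ on your version):

Code:

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian wheezy pve-no-subscription

# then:
apt-get update
apt-get install pve-kernel-3.10.0-5-pve
reboot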

Issue with LVM local storage: Found duplicate PV

Hello,

I have a Proxmox 3.3 cluster with local LVM storage.
I have a VM (KVM) which runs SLES 11 64-bit. This VM uses LVM2 internally.

For test purposes, I restored a backup of that VM onto the same node where the original VM runs.
The original VM had number 100, and the new one had number 110.

After the restore, at the end of the log, I found the following messages:

Found duplicate PV n0oZ93MQU1LSwD4anAxDMJWSJ2U09PWv: using /dev/dataproxmox-01/vm-110-disk-2 not /dev/dataproxmox-01/vm-100-disk-2
Found duplicate PV SpHywqVzfjpE5IMupAT5yS1uvEddizjo: using /dev/dataproxmox-01/vm-110-disk-3 not /dev/dataproxmox-01/vm-100-disk-3
Found duplicate PV EoUX6oNuA5vcsXcNKtml2uiqGGn1I3CX: using /dev/dataproxmox-01/vm-110-disk-4 not /dev/dataproxmox-01/vm-100-disk-4
Found duplicate PV n0oZ93MQU1LSwD4anAxDMJWSJ2U09PWv: using /dev/dataproxmox-01/vm-110-disk-2 not /dev/dataproxmox-01/vm-100-disk-2
Found duplicate PV SpHywqVzfjpE5IMupAT5yS1uvEddizjo: using /dev/dataproxmox-01/vm-110-disk-3 not /dev/dataproxmox-01/vm-100-disk-3
Found duplicate PV EoUX6oNuA5vcsXcNKtml2uiqGGn1I3CX: using /dev/dataproxmox-01/vm-110-disk-4 not /dev/dataproxmox-01/vm-100-disk-4

So I destroyed the new VM number 110 and ran pvdisplay and vgdisplay on the node.

The problem, which does not seem to occur with Debian guests, is that the volume group names and logical volume names from inside the VM appear directly on the node.
In this case: sysvg, datavg, applivg.
I have no multipath or bonded devices installed.

The following state was captured after removing VM number 110. Before the removal, number 110 appeared in place of 100.
Ex: /dev/dataproxmox-01/vm-110-disk-4 instead of /dev/dataproxmox-01/vm-100-disk-4

Does anybody have an idea? Is there something to customize in lvm.conf? (A possible filter sketch follows the listings below.)
Please note that I do not have this problem with Debian Wheezy VMs.


--- Physical volume ---
PV Name /dev/sdb1
VG Name dataproxmox-01
PV Size 1.09 TiB / not usable 3.79 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 286069
Free PE 87925
Allocated PE 198144
PV UUID xKCr8C-qqK4-iNL7-yFYW-Ry86-eqI8-k0r457

--- Physical volume ---
PV Name /dev/dataproxmox-01/vm-100-disk-4
VG Name datavg
PV Size 200.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 51199
Free PE 0
Allocated PE 51199
PV UUID EoUX6o-NuA5-vcsX-cNKt-ml2u-iqGG-n1I3CX

--- Physical volume ---
PV Name /dev/sda3
VG Name pve
PV Size 136.20 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 34866
Free PE 4095
Allocated PE 30771
PV UUID L6E9wI-LtEW-Snkj-yFM4-IGnU-8Zn5-fekgsu

--- Physical volume ---
PV Name /dev/dataproxmox-01/vm-100-disk-3
VG Name applivg
PV Size 20.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 5119
Free PE 0
Allocated PE 5119
PV UUID SpHywq-Vzfj-pE5I-MupA-T5yS-1uvE-ddizjo

--- Physical volume ---
PV Name /dev/dataproxmox-01/vm-100-disk-2
VG Name sysvg
PV Size 40.00 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 10239
Free PE 0
Allocated PE 10239
PV UUID n0oZ93-MQU1-LSwD-4anA-xDMJ-WSJ2-U09PWv



--- Volume group ---
VG Name dataproxmox-01
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 56
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 8
Open LV 8
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.09 TiB
PE Size 4.00 MiB
Total PE 286069
Alloc PE / Size 198144 / 774.00 GiB
Free PE / Size 87925 / 343.46 GiB
VG UUID FlNS57-OlYU-MpBR-4P9O-WlKr-tom9-By1cpp


--- Volume group ---
VG Name datavg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 200.00 GiB
PE Size 4.00 MiB
Total PE 51199
Alloc PE / Size 51199 / 200.00 GiB
Free PE / Size 0 / 0
VG UUID Wxu4gQ-2hmV-Pxk6-dQRX-8M3T-3ha2-iKB9wA

--- Volume group ---
VG Name pve
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 136.20 GiB
PE Size 4.00 MiB
Total PE 34866
Alloc PE / Size 30771 / 120.20 GiB
Free PE / Size 4095 / 16.00 GiB
VG UUID IBh9gN-U9PD-xfg7-aaFf-1ibs-FNAc-wbo8uL

--- Volume group ---
VG Name applivg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 20.00 GiB
PE Size 4.00 MiB
Total PE 5119
Alloc PE / Size 5119 / 20.00 GiB
Free PE / Size 0 / 0
VG UUID mVbMvv-hN9E-Q3g8-5cJA-gRse-FIHP-wc4rJM

--- Volume group ---
VG Name sysvg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 4
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 40.00 GiB
PE Size 4.00 MiB
Total PE 10239
Alloc PE / Size 10239 / 40.00 GiB
Free PE / Size 0 / 0
VG UUID WXFxcS-dlEn-1jTT-mjfK-QtvP-axp7-b6E6fF
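One common way to handle this, sketched here as an assumption rather than an official fix: restrict which devices the host's LVM scans in /etc/lvm/lvm.conf, so the PVs living inside the vm-*-disk-* volumes are only ever seen by the guest. The accepted device names below are taken from the pvdisplay output above.

Code:

# /etc/lvm/lvm.conf on the Proxmox node -- accept only the host's own
# physical disks, reject everything else (incl. /dev/dataproxmox-01/*):
devices {
    filter = [ "a|^/dev/sda3$|", "a|^/dev/sdb1$|", "r|.*|" ]
}

# afterwards, re-scan and verify the guest VGs are gone from the host:
pvscan
vgscan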

Missing or corrupted isolinux VE 3.3

I'm having an issue booting the Proxmox ISO on a Dell 2950. The MD5 sum matches and the partition is set to bootable. Not sure what else to check. (A verification sketch follows the listings below.)

Code:

[root@VASHDESK ~]# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.25.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.




Command (m for help): p
Disk /dev/sdc: 961 MiB, 1007681536 bytes, 1968128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xddddd703


Device    Boot Start    End Sectors  Size Id Type
/dev/sdc1  *        0 1163263 1163264  568M 17 Hidden HPFS/NTFS




Command (m for help): q

Code:

vashidu@VASHDESK (~/Downloads) $: cat derp
b2531905a538bf01eebc25ee41aba0dc  proxmox-ve_3.3-a06c9f73-2 (1).iso
b2531905a538bf01eebc25ee41aba0dc
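One thing worth checking, sketched under the assumption the ISO was written to the stick with dd: compare the stick's contents against the ISO itself (not just the downloaded file), and if they differ, rewrite the image to the whole device rather than to a partition.

Code:

ISO='proxmox-ve_3.3-a06c9f73-2 (1).iso'
SIZE=$(stat -c %s "$ISO")

md5sum "$ISO"
sudo head -c "$SIZE" /dev/sdc | md5sum   # should print the same hash

# Rewrite if it does not match (this erases the stick):
sudo dd if="$ISO" of=/dev/sdc bs=1M
sync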


Is an SSD cache worthwhile for VM disk performance?

Using bcache to create an SSD cache for a single desktop PC is certainly worthwhile.

But I'm wondering if the same applies for VM hosting, which I imagine has rather different IO patterns.


I'm testing a two-node/two-brick GlusterFS replicate setup, with the Proxmox nodes also being the Gluster nodes. Performance is OK, but to my surprise it is the hard disk read/write performance that is holding things up, not the network.

I have a 60GB SSD partition spare on both nodes, and we'd be looking at 7 VMs on each node: one node dedicated to server VMs (AD, SQL Server, Terminal Server), the other to Windows developer VMs. So a lot of random reads/writes within large files on both nodes, but once started, not a lot of large sequential reads/writes. No real directory

Now that I write it out, it seems a good candidate for caching. I did play with dm-cache, which had good results until I managed to destroy the filesystem. dm-cache is a fiddly pain to manage - no simple flush command! Writeback was the best but is dangerous; writethrough gave excellent read results but actually reduced write performance.

bcache is definitely much simpler and more robust in that regard, but I'd have to pull in the bcache tools from an external repo or build them, which I don't like to do - keep our servers pristine!
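For reference, once bcache-tools is available the setup itself is short. A sketch assuming the 60GB SSD partition is /dev/sdb1 and the backing device holding the Gluster brick is /dev/sda4; both device names are examples only.

Code:

# Format the backing device and the SSD cache for bcache:
make-bcache -B /dev/sda4
make-bcache -C /dev/sdb1

# Register the devices if udev has not already done so:
echo /dev/sda4 > /sys/fs/bcache/register
echo /dev/sdb1 > /sys/fs/bcache/register

# Attach the cache set to the backing device (UUID from the -C output
# or from "bcache-super-show /dev/sdb1"):
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Choose the caching mode; writeback is fastest but riskiest:
echo writethrough > /sys/block/bcache0/bcache/cache_mode

# The filesystem then goes on the cached device:
mkfs.ext4 /dev/bcache0   # or xfs, whatever the brick uses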

Network Adapter Performance

I have two 2008 R2 boxes, one of which is an RDS server and the other essentially just a console. I'm getting some weird performance issues on the RDS server with high interrupts, which from some research seems to be coming from the virtio driver. As a test using iperf, I switched out the e1000 NIC for a virtio NIC on the secondary 2k8 server, with interesting results.

The original test I did between the RDS (virtio) and the 2k8R2 (e1000) yielded 167 Mbits/sec. Each VM is on a separate VM host with a gigabit switch between them.

I then did a test between the RDS (virtio) server and the VM host, which yielded 652 Mbits/sec.

I then swapped out the 2k8R2's e1000 for a virtio NIC and reran my tests. The tests between the RDS and the 2k8R2 now show ~185 Kbits/sec. Yes, Kbits!

Why in the world would the performance be SO bad for two servers with virtio NICs?
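If anyone wants to narrow this down, one experiment worth trying (purely a diagnostic sketch, not a known fix) is to look at the offload settings on the VM's tap interface on each host. Proxmox names them tap<vmid>i<netdev>, so for a hypothetical VM 101:

Code:

# Inspect the current offload settings of the VM's tap device:
ethtool -k tap101i0

# Temporarily disable segmentation/receive offloads and re-run iperf:
ethtool -K tap101i0 tso off gso off gro off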

VM live migration with local storage

Hello,
Does Proxmox support live migration with local storage?

Active Active DRBD for containers

Hi All,


I'm trying to configure active/active DRBD for Proxmox containers. Please suggest whether this is possible or not. I'm using active/passive and it is working fine.

Also, is it right that in an active/active DRBD scenario for containers, we can mount the partitions on both nodes, write to them, and the data will be synced instantly?
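On the second question: dual-primary DRBD only allows both nodes to be Primary at the same time; mounting the same ext3/ext4 filesystem on both nodes simultaneously will still corrupt it, so simultaneous writes require a cluster filesystem (e.g. GFS2 or OCFS2) on top. A hedged sketch of the DRBD side only; the resource name is an example, and on DRBD 8.4 the option takes an explicit "yes":

Code:

# /etc/drbd.d/r0.res (excerpt)
resource r0 {
    net {
        allow-two-primaries;       # DRBD 8.4: allow-two-primaries yes;
    }
    startup {
        become-primary-on both;
    }
}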



Thanks.

global lock timeout too short?

Hi all,

how long is the timeout of "trying to get global lock - waiting..."? We run many backups at night and sporadically, but repeatedly, get the error message "INFO: trying to get global lock - waiting...
ERROR: can't aquire lock '/var/run/vzdump.lock' - got timeout".

Is the timeout too short?

regards