
Freenas NFS Storage slow read performance with ATTO 64KB+

Hey guys,

First of all, I just want to thank you for Proxmox VE. I only discovered it last weekend (don't know why it took me so long). I have been using the free VMware ESXi 5.1 (and earlier versions) for two years for personal servers and projects at home. I converted both of my servers to try out Proxmox live migration and clustering, and I must say I'm more than impressed. Everything works smoothly, cleanly and without errors!

Anyway, I'm almost at the point of swapping ESXi for Proxmox, but I keep getting some strange performance results with my shared storage on FreeNAS NFS. I have been using NFS with ESXi for years without any performance issues.

Here are the ATTO results from a guest on Proxmox and from ESXi running on the same hardware. No matter what I try on Proxmox, as soon as ATTO reaches 64KB+ block sizes the read performance strangely caps at 10 MB/s. I get the same result on either of my two hosts, with Windows XP or Windows 2012 guests. It looks like I only have the problem with Windows guests on NFS...

Things I have tried:

- Every disk image format - still slow
- Both hosts against the shared NFS storage - both slow
- VirtIO - still slow
- Local storage - good, speed is normal
- dd benchmark directly on the NFS server over SSH with 1M blocks - good, speed is normal (commands below)
- dd benchmark in a CentOS guest with 1M blocks - good, speed is normal
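
The dd tests were roughly along these lines (mount point and file size are just examples):

Code:

dd if=/dev/zero of=/mnt/pve/freenas-nfs/testfile bs=1M count=4096 conv=fdatasync   # sequential write
echo 3 > /proc/sys/vm/drop_caches                                                  # drop the page cache before reading
dd if=/mnt/pve/freenas-nfs/testfile of=/dev/null bs=1M                             # sequential read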

Any ideas?

Attached images: VMWare NFS Freenas.png, Proxmox NFS Freenas.png

Disappeared template list

Hello.
I had not logged into the control panel for a couple of months, but a couple of days ago I needed to create a new VM. When creating it I could not find the previously uploaded templates. In the control panel the list of templates is empty, although on the server the files are in the correct folder. If I download a new template, the download finishes without errors, but it does not appear in the list either, even though the file shows up on the server in the correct folder. The same happens when uploading an ISO. In all this time the power was lost and the server shut down uncleanly only 3-4 times.
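
For reference, the files are visible on disk, but I am not sure the storage layer sees them; are these the right things to check (assuming the default 'local' storage and the stock paths)?

Code:

pvesm list local                      # what the storage layer reports for 'local'
ls -lh /var/lib/vz/template/cache/    # container templates on disk
ls -lh /var/lib/vz/template/iso/      # ISO images on disk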

What can it be?

Problem, glusterfs, zfs

I installed a GlusterFS server and mounted it on Proxmox 3.1-24 via the web frontend.
Creating a VM was successful, but there is a problem: the virtual machine has trouble writing to its disk, which lives on ZFS on Linux, even though the machine's image was created on it without issue.

Question: why can the VM's image be created there, yet the machine has problems writing data to it?

iSCSI

Hi,

I am trying to test iSCSI. I've created a target and a volume on my Synology box and used the Proxmox web interface to connect the iSCSI volume. What else do I have to do to be able to use it? My goal is to eventually use two Synology boxes in HA mode with iSCSI multipathing, like I do with DataCore and VMware.
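
I assume the next step is to put an LVM volume group on top of the LUN and add that as storage, something along these lines (the device name is just an example):

Code:

pvcreate /dev/sdc              # the iSCSI LUN as seen by the node
vgcreate vg_synology /dev/sdc
# then add an 'LVM' storage in the web interface on top of this volume group

Is that correct, or is there a better way?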

TIA
Matthias

input / output error on /etc/pve

Hi Guys,

Just a quick question: I'm getting input/output errors on /etc/pve (the /dev/fuse mount).

This server is a standalone server, not part of a cluster. If I ran /etc/init.d/pve-cluster restart, what would happen, bearing in mind that this is a live server with running VMs that I cannot take down? And would that fix the input/output errors?
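
For context, these are the sort of checks I can run without touching the VMs (pmxcfs is the FUSE daemon behind /etc/pve):

Code:

ls -la /etc/pve                            # this is where the input/output errors show up
ps aux | grep '[p]mxcfs'                   # is the FUSE daemon still running?
grep pmxcfs /var/log/syslog | tail -n 20   # any errors logged by it?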


Thanks
Paul

OpenVZ HA

Hi All,
I'm using Proxmox 1.9 with active/passive DRBD and Heartbeat to protect the OpenVZ data and provide HA. It's simple and has been working for a few years.
I have decided to replace the hardware, so I now have two servers, each with a small SSD and a normal disk, and I'd like to continue the same way, but with Proxmox 3.1.
However, I'm unable to install Heartbeat on 3.1.
I have no problem using a third server and fencing, but I don't want to use NFS storage for OpenVZ (I would like to benefit from the SSD).
Do you have any idea how to accomplish that?


Thanks,,,

iPXE Boot Loop

Hello,

Does someone have the same issue with iPXE and two network cards?
I configured a KVM VM with two network cards, both with iPXE enabled, and changed the boot order to network, CD-ROM, disk. Booting via iPXE works fine, but when I want to boot from HDD or CD-ROM the VM loops in the iPXE boot ROM: it doesn't find a DHCP server, restarts, and never falls through to the other boot options like CD or HDD. When I remove the second NIC it works, but I need both of them. The only workaround is to allow a single boot option and change it every time.
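
For reference, the relevant part of the VM config looks roughly like this (VMID, NIC model and MAC addresses are placeholders):

Code:

# /etc/pve/qemu-server/100.conf (excerpt)
boot: ncd                                     # network, then CD-ROM, then disk
bootdisk: virtio0
net0: e1000=DE:AD:BE:EF:00:01,bridge=vmbr0
net1: e1000=DE:AD:BE:EF:00:02,bridge=vmbr1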

Can anyone confirm that?

qemu-nbd ok, but guestfish/virt-rescue ?

Hi,

Is it possible/safe to use guestfish/virt-rescue on stopped VMs or unused disks on the PVE host itself?

I found some old disks on an old, unused iSCSI LUN, which was once an LVM/iSCSI storage in PVE 1.x. I managed to attach it to a Wheezy VM, then activated the LVs and mounted those volumes in the VM with qemu-nbd.
Now I know I could also use qemu-nbd with files like .img, .qcow2 or .raw, but I was wondering whether tools like guestfish/virt-rescue (or others!) could be used to do some maintenance on PVE KVM disks or not...
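
What I had in mind is read-only inspection of a stopped VM's disk, roughly like this (the image path is just an example):

Code:

guestfish --ro -a /var/lib/vz/images/100/vm-100-disk-1.qcow2 -i    # inspect and mount read-only
virt-rescue --ro -a /var/lib/vz/images/100/vm-100-disk-1.qcow2     # rescue shell on the same disk

Would that be safe to run directly on the PVE host?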

Marco

Disk IO Graph

Hi,

We have a Proxmox 3.0 cluster running with Ceph. For each VM, Proxmox generates four RRD graphs:

CPU, Memory, Network and Disk IO.

My problem is with the Disk IO graph: on some VMs the amount of IO is shown with a P prefix, on others with a T or a k. What do these prefixes mean?


Thanks

Problems with two containers on two hypervisors

I have two containers, on two hosts. They are paired application servers. Approximately one hour ago the application reported failures.

Looking at the containers, I find that one cannot execute common Unix commands (for example vi). The other one I cannot console into: 'vzctl enter $VMID' drops me right back at the Proxmox host prompt.

Normally I'd say "delete the container(s) and rebuild", but there is some key data inside the containers that I really need to recover. Help...

Code:
root@tn-proxmox-1:~# vzctl --verbose stop 102
Stopping container ...
Killing container ...
Container was stopped
Container is unmounted
root@tn-proxmox-1:~# vzctl --verbose start 102
Starting container ...
Container is mounted
Found osrelease 2.6.32 for dist ubuntu-12.04-x86_64.tar.gz
Running container script: /etc/vz/dists/scripts/debian-add_ip.sh
Got signal 4
Setting CPU units: 1000
Setting CPUs: 16
Running container script: /etc/vz/dists/scripts/debian-set_hostname.sh
Got signal 4
Configure veth devices: veth102.0
Adding interface veth102.0 to bridge vmbr0 on CT0 for CT102
Container start in progress...
root@tn-proxmox-1:~#
root@tn-proxmox-1:~# vzctl enter 102
entered into CT 102
exited from CT 102
root@tn-proxmox-1:~#

Code:
root@tn-proxmox-2:~# vzctl --verbose stop 103
Stopping container ...
Container was stopped
Container is unmounted
root@tn-proxmox-2:~# vzctl --verbose start 103
Starting container ...
Container is mounted
Found osrelease 2.6.32 for dist ubuntu-12.04-x86_64.tar.gz
Running container script: /etc/vz/dists/scripts/debian-add_ip.sh
Setting CPU units: 1000
Setting CPUs: 16
Running container script: /etc/vz/dists/scripts/debian-set_hostname.sh
Running container script: /etc/vz/dists/scripts/set_dns.sh
Running container script: /etc/vz/dists/scripts/set_ugid_quota.sh
/bin/bash: line 474: 55 Segmentation fault /usr/sbin/update-rc.d vzquota remove > /dev/null 2>&1
Configure veth devices: veth103.0
Adding interface veth103.0 to bridge vmbr0 on CT0 for CT103
Container start in progress...
root@tn-proxmox-2:~# vzctl --verbose enter 103
Entering CT
entered into CT 103
root@tn-es-node-2:/# ls
apps bin boot dev etc fastboot home lib lib64 lost+found media mnt opt proc root run sbin selinux srv sys tmp usr var
root@tn-es-node-2:/# vi /tmp/foo.txt
Segmentation fault
root@tn-es-node-2:/#

Two Proxmox nodes: switching from WAN (10 MB/s) to LAN (1 Gbit)

Hello Proxmox Freaks,

I have two Proxmox nodes, and when I do an offline migration of a qcow disk with rsync I only get about 10 MB/s.
The nodes only know each other by their WAN IPs.
In the last few days I connected the two nodes directly through a Gbit switch.
Both machines can ping and reach each other over SSH via their LAN IPs (Gbit ports),

but (live) migration still only works over the WAN IPs.

Is there any possibility to change this so that they talk directly over their LAN interfaces?
Which config files should I change to get live migration working that way?
Could I run into any problems? Do I have to rebuild the complete cluster?
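
What I was wondering is whether simply pointing the node names at the LAN addresses in /etc/hosts on both nodes would be enough, something like this (addresses and hostnames are made up):

Code:

192.168.10.1   node1.example.com node1
192.168.10.2   node2.example.com node2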

Thanks a lot

tempes

VM / KVM

Hello,

How can I find a VM's resource usage (CPU, RAM, HDD) from the terminal?

I tried ps -e -orss=,args= | sort -b -k1,1n | pr -TW$COLUMNS | grep VID, but for a Windows XP guest the RAM usage inside the guest is about 100 MB, while the ps command reports about 500 MB. I need something more accurate.
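
Is something like this the intended way instead (node name and VMID are examples)?

Code:

qm list                                            # status and assigned memory per VM
pvesh get /nodes/proxmox1/qemu/101/status/current  # current CPU and memory usage via the API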

Please help :)

OpenVPN VM loses connection

Hi all,

I installed a Proxmox server for the first time and I am learning to work with it. One of the VMs I created hosts an OpenVPN Community server. The server works almost perfectly, except that the clients are always disconnected after approximately 16 minutes.

I got a lot of support from the OpenVPN community and we narrowed the problem down to the VM; they advised me to continue asking here because they were not able to identify the cause any further.

It seems to be mainly a firewall issue, since when I log the dropped packets in the VM I get a lot of:
Dec 18 02:17:55 vpn kernel: [852189.017793] iptables denied: IN=eth0 OUT= MAC=7e:44:56:0a:26:b2:7e:3b:16:c5:1c:7b:08:00 SRC=CLIENT_PUB_IP DST=10.99.0.11 LEN=113 TOS=0x00 PREC =0x00 TTL=116 ID=30248 PROTO=UDP SPT=50686 DPT=1194 LEN=93
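
That line comes from a catch-all logging rule at the end of my INPUT chain, roughly:

Code:

iptables -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7
iptables -A INPUT -j DROP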

Here is my network config on the host:
auto vmbr2
iface vmbr2 inet static
address 10.99.0.254
netmask 255.255.255.0
bridge_ports none
bridge_stp off
bridge_fd 0
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -t nat -A POSTROUTING -s '10.99.0.0/24' -o vmbr0 -j MASQUERADE # define out rule
post-down iptables -t nat -D POSTROUTING -s '10.99.0.0/24' -o vmbr0 -j MASQUERADE # kill out rule
post-up iptables -t nat -A PREROUTING -i vmbr0 -p udp --dport 1194 -j DNAT --to 10.99.0.11:1194
post-down iptables -t nat -D PREROUTING -i vmbr0 -p udp --dport 1194 -j DNAT --to 10.99.0.11:1194


and those are on the VM:
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t filter -A INPUT -p udp --dport 1194 -j ACCEPT
iptables -t nat -I POSTROUTING -o eth0 -j SNAT --to 10.99.0.11
iptables -I INPUT -i tun+ -j ACCEPT
iptables -I FORWARD -i tun+ -j ACCEPT
iptables -I OUTPUT -o tun+ -j ACCEPT
iptables -I FORWARD -o tun+ -j ACCEPT
iptables -t nat -I POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE


Any idea why I am getting a lot of packets dropped by the firewall ?

Thank you a lot for your help

Backup 3x bigger than the VM content

Hi all,

There's an issue that I can't really understand.

We have one KVM VM (W2k3) with a 32 GB disk on LVM. The used space inside the VM is about 9 GB, yet the backup comes out at 23.5 GB. Who can explain this?

We tried moving it to a directory storage and converting it to qcow2, and defragmenting the entire disk... always the same result: 23.5 GB.
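
The conversion we tried was along these lines (paths are just examples):

Code:

qemu-img convert -O qcow2 /dev/vg0/vm-312-disk-1 /var/lib/vz/images/312/vm-312-disk-1.qcow2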


Code:

INFO: starting new backup job: vzdump 312 --remove 0 --mode snapshot --compress lzo --storage backups_TMP --node hw01
INFO: Starting Backup of VM 312 (qemu)
INFO: status = stopped
INFO: update VM 312: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: creating archive '/backups/dump/vzdump-qemu-312-2013_12_19-00_11_29.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'e5146c78-b255-49cc-b4b7-11f26f2dfa6f'
INFO: status: 2% (830603264/34359738368), sparse 0% (59838464), duration 3, 276/256 MB/s
INFO: status: 4% (1620901888/34359738368), sparse 0% (68513792), duration 6, 263/260 MB/s
...
INFO: status: 96% (33205846016/34359738368), sparse 10% (3646115840), duration 129, 223/223 MB/s

INFO: status: 98% (34003812352/34359738368), sparse 10% (3646181376), duration 132, 265/265 MB/s
INFO: status: 100% (34359738368/34359738368), sparse 10% (3658600448), duration 134, 177/171 MB/s
INFO: transferred 34359 MB in 134 seconds (256 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 23.52GB
INFO: Finished Backup of VM 312 (00:02:17)
INFO: Backup job finished successfully
TASK OK

After trying almost everything, we made a backup using gzip instead of LZO: the result was almost the same, but this time the backup took 7x longer for a similar size:

Code:

INFO: starting new backup job: vzdump 312 --remove 0 --mode snapshot --compress gzip --storage backups_TMP --node hw01
INFO: Starting Backup of VM 312 (qemu)
INFO: status = running
INFO: update VM 312: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/backups/dump/vzdump-qemu-312-2013_12_19-00_50_59.vma.gz'
INFO: started backup task '38f5e637-c14d-44bc-a9c9-0f99bc8e4271'
INFO: status: 0% (106430464/34359738368), sparse 0% (446464), duration 3, 35/35 MB/s
INFO: status: 1% (368705536/34359738368), sparse 0% (55898112), duration 10, 37/29 MB/s
INFO: status: 2% (710803456/34359738368), sparse 0% (59432960), duration 22, 28/28 MB/s
INFO: status: 3% (1053229056/34359738368), sparse 0% (62881792), duration 34, 28/28 MB/s
...
INFO: status: 98% (33673248768/34359738368), sparse 10% (3645939712), duration 1047, 29/29 MB/s

INFO: status: 99% (34046345216/34359738368), sparse 10% (3646005248), duration 1059, 31/31 MB/s
INFO: status: 100% (34359738368/34359738368), sparse 10% (3658424320), duration 1071, 26/25 MB/s
INFO: status: 100% (34359738368/34359738368), sparse 10% (3658424320), duration 1072, 0/0 MB/s
INFO: transferred 34359 MB in 1072 seconds (32 MB/s)
INFO: archive file size: 22.03GB
INFO: Finished Backup of VM 312 (00:17:52)
INFO: Backup job finished successfully
TASK OK

We have a similar VM (same template) with the same disk and OS, and its 14 GB of used space produces a 9 GB backup. So there must be something weird with this particular VM's disk.

Any ideas? Or is this normal: a VM with 9 GB of data generating 24 GB backups?

Thanks

Dedicate an entire HDD (with its contents) to a VM

Hey all! I have a Proxmox server I am in the process of building, and I want to move my old file server into a VM. The HDDs (5 of them) already have data on them, and I would like to attach them to the VM so that it sees the disks just as the old standalone server did. What is the easiest way to dedicate this hardware through the host OS to the guest VM? Thanks!
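
I am guessing the disks can be handed to the guest as raw block devices with something like this (VMID and disk IDs are placeholders); is that the recommended way?

Code:

qm set 101 -virtio1 /dev/disk/by-id/ata-EXAMPLE-DISK-1
qm set 101 -virtio2 /dev/disk/by-id/ata-EXAMPLE-DISK-2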

Disk Space

Hi,

I plan to build a cluster that uses network storage for the VMs only. I can get servers with an internal 4 GB USB stick. Is that sufficient for the Proxmox installation, or is it a bad idea in general?

Another option is to use an MD RAID set for the local Proxmox installation. Would that be OK?

TIA
Matthias

iptables / csf

Hi,

I am using Proxmox 3.1-3 and I am trying to get iptables working for csf.

I have loaded this configuration in /etc/vz/vz.conf:

IPTABLES="ipt_REJECT ipt_recent ipt_owner ipt_REDIRECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"

but I am still getting the errors below:

Testing ipt_LOG...FAILED [FATAL Error: iptables: No chain/target/match by that name.] - Required for csf to function
Testing ipt_multiport/xt_multiport...FAILED [FATAL Error: iptables: No chain/target/match by that name.] - Required for csf to function
Testing ipt_state/xt_state...FAILED [FATAL Error: iptables: No chain/target/match by that name.] - Required for csf to function
Testing ipt_limit/xt_limit...FAILED [FATAL Error: iptables: No chain/target/match by that name.] - Required for csf to function
Testing ipt_recent...FAILED [Error: iptables: No chain/target/match by that name.] - Required for PORTFLOOD and PORTKNOCKING features
Testing xt_connlimit...FAILED [Error: iptables: No chain/target/match by that name.] - Required for CONNLIMIT feature
Testing ipt_owner/xt_owner...FAILED [Error: iptables: No chain/target/match by that name.] - Required for SMTP_BLOCK and UID/GID blocking features

RESULT: csf will not function on this server due to FATAL errors from missing modules [4]
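
For reference, these are checks I can run on the host and inside a container (the CTID is an example):

Code:

lsmod | grep -E 'ipt|xt_|nf_conntrack'   # modules loaded on the hardware node
vzctl exec 101 iptables -nL              # what iptables can see inside the container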


Below is the Proxmox version information:

proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: not correctly installed
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: not correctly installed
glusterfs-client: 3.4.0-2

Bond-lacp-rate

I have a simple question.

bond-lacp-rate 0 is slow
bond-lacp-rate 1 is fast

But what is bond-lacp-rate 4?

The reason I ask is that I have seen it used in sample configurations on the internet (see the sketch below).
I have searched all over the net for answers but found none; only rate 0 (slow) and rate 1 (fast) seem to exist.
Have some people simply made a mistake in their configurations, or is my question valid?
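
The kind of sample configuration I mean looks like this (interface names are examples):

Code:

auto bond0
iface bond0 inet manual
bond-slaves eth0 eth1
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 4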

)-|algeir

Not able to access some servers on a different subnet from outside

Hi,

I have configured Proxmox on an HS23 blade server with two Ethernet ports. The default bridge vmbr0 is assigned IP 10.30.x.x and the other bridge vmbr1 has IP 10.29.x.x. Traffic from vmbr1 was going out through vmbr0. The interfaces file entries are below. The problem is that I am now not able to reach the vmbr1 clients (10.29.x.x) from outside. Could anyone tell me if anything is wrong with this setup? Thanks in advance.

Recently we migrated the LUN presented to this server to Storwize 7000. Also, the FC switch in the blade chassis was turned into a passthrough switch (zoning removed), and the LUN now comes via a new Brocade FC switch. No changes were made to the Ethernet connectivity.

auto lo
iface lo inet loopback
auto eth3
iface eth3 inet static
address 10.30.x.x
netmask 255.255.255.0
gateway 10.30.x.x

auto vmbr0
iface vmbr0 inet static
address 10.30.x.x
netmask 255.255.255.0
gateway 10.30.x.x
bridge_ports eth3
bridge_stp off
bridge_fd 0
up route add -host 10.29.x.x dev vmbr0


auto vmbr1
iface vmbr1 inet static
address 10.29.x.x
netmask 255.255.255.224
bridge_ports eth2
bridge_stp off
bridge_fd 0

WebUI connection timeout after IP change

I know this has been asked before, but the answers I found didn't help.

This problem occurs on a proxmox (v2) installation with only one host (i.e. no cluster).

After changing the IP in "/etc/network/interfaces" and "/etc/hosts", ping and SSH work, but I can't connect to the web UI via https://<NewIP>:8006.

Things I've tried:
- rebooting the machine
- creating an interfaces.new file + reboot
- setting the host parameter in "/usr/bin/pveproxy"

Are there any other files I should check for the old IP? How can I tell the WebServer (pveproxy?) to use the new IP?
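
For completeness, these are the next checks I have in mind (the old address below is just a placeholder):

Code:

grep -r "192.168.0.10" /etc/hosts /etc/network/interfaces   # leftover references to the old IP?
service pveproxy restart                                    # restart the web interface daemon
netstat -tlnp | grep 8006                                   # check what it is listening on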

Regards