Channel: Proxmox Support Forum
Viewing all 171654 articles

2 pfSense OpenVPN gateways on external WAN IPs

Dear all,

A few weeks ago I started the transition from a VMware ESXi 5.5 infrastructure to Proxmox. So far I have replaced VMware on 2 Dell PowerEdge R420s, with 3 servers still left; the main reasons are cost and performance.

Finally: on one HP ProLiant 320 I am still running ESXi 5.x. On this server are 2 pfSense installations for OpenVPN connections (one for IT access, one for employees). Both of them use an external IP address, and at the moment I have no clear idea how to accomplish this with Proxmox. That means every pfSense VM must have an external IP address.
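For what it's worth, a common pattern is to dedicate a second physical NIC to the WAN segment and expose it to the pfSense VMs through its own bridge, so the guests hold the external IPs themselves. A sketch of the host's /etc/network/interfaces, where eth1 and the bridge name vmbr1 are assumptions about your hardware:

```
# second bridge for the WAN segment; the host itself takes no IP here
auto vmbr1
iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
```

Each pfSense VM then gets a virtual NIC on vmbr1 and configures its own external address inside the guest, much like a WAN port group did under ESXi.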

Maybe someone can give me a friendly hint ;)

Best regards,

Roman

All VMs cannot restart: 'qcow2' does not exist

This all happened a few hours ago; I'm not really sure of the reason just yet.

Has anyone had this problem before?

Quote:

Nov 22 11:40:42 data1 task UPID:data1:00000CCD:00001498:5470BC86:vzstart:101: root@pam:: command 'vzctl start 101' failed: exit code 62
Nov 22 11:40:42 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D0B:0000162B:5470BC8A:vzstart:103: root@pam:
Nov 22 11:40:42 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 103: UPID:data1:00000D0B:0000162B:5470BC8A:vzstart:103: root@pam:
Nov 22 11:40:42 data1 kernel: CT: 103: started
Nov 22 11:40:43 data1 kernel: venet0: no IPv6 routers present
Nov 22 11:40:45 data1 kernel: CT: 103: stopped
Nov 22 11:40:46 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 103' failed: exit code 62
Nov 22 11:40:46 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D44:000017C0:5470BC8E:vzstart:104: root@pam:
Nov 22 11:40:46 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 104: UPID:data1:00000D44:000017C0:5470BC8E:vzstart:104: root@pam:
Nov 22 11:40:46 data1 kernel: CT: 104: started
Nov 22 11:40:47 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 104 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:47 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 104 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:49 data1 kernel: CT: 104: stopped
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 104' failed: exit code 62
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D84:00001952:5470BC92:qmstart:105: root@pam:
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 105: UPID:data1:00000D84:00001952:5470BC92:qmstart:105: root@pam:
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: you can't start a vm if it's a template
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000D85:00001955:5470BC92:vzstart:106: root@pam:
Nov 22 11:40:50 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 106: UPID:data1:00000D85:00001955:5470BC92:vzstart:106: root@pam:
Nov 22 11:40:50 data1 kernel: CT: 106: started
Nov 22 11:40:53 data1 kernel: CT: 106: stopped
Nov 22 11:40:54 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 106' failed: exit code 62
Nov 22 11:40:54 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000DBE:00001AE6:5470BC96:qmstart:109: root@pam:
Nov 22 11:40:54 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 109: UPID:data1:00000DBE:00001AE6:5470BC96:qmstart:109: root@pam:
Nov 22 11:40:56 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: volume 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2' does not exist
Nov 22 11:40:56 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000DC1:00001BB0:5470BC98:vzstart:110: root@pam:
Nov 22 11:40:56 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: starting CT 110: UPID:data1:00000DC1:00001BB0:5470BC98:vzstart:110: root@pam:
Nov 22 11:40:56 data1 kernel: CT: 110: started
Nov 22 11:40:57 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 110 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:57 data1 pvestatd[3247]: command '/usr/sbin/vzctl exec 110 /bin/cat /proc/net/dev' failed: exit code 8
Nov 22 11:40:59 data1 kernel: CT: 110: stopped
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: command 'vzctl start 110' failed: exit code 62
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000E01:00001D43:5470BC9C:qmstart:113: root@pam:
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 113: UPID:data1:00000E01:00001D43:5470BC9C:qmstart:113: root@pam:
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: you can't start a vm if it's a template
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: <root@pam> starting task UPID:data1:00000E02:00001D45:5470BC9C:qmstart:118: root@pam:
Nov 22 11:41:00 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: start VM 118: UPID:data1:00000E02:00001D45:5470BC9C:qmstart:118: root@pam:
Nov 22 11:41:01 data1 task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam:: volume 'local:105/base-105-disk-1.qcow2/118/vm-118-disk-1.qcow2' does not exist
Nov 22 11:41:01 data1 pvesh: <root@pam> end task UPID:data1:00000CCA:00001493:5470BC86:startall::root@pam: OK
Nov 22 11:43:54 data1 pvedaemon[3242]: <root@pam> successful auth for user 'root@pam'
Nov 22 11:44:56 data1 pvedaemon[3683]: start VM 109: UPID:data1:00000E63:00007923:5470BD88:qmstart:109: root@pam:
Nov 22 11:44:56 data1 pvedaemon[3239]: <root@pam> starting task UPID:data1:00000E63:00007923:5470BD88:qmstart:109: root@pam:
Nov 22 11:44:56 data1 pvedaemon[3683]: volume 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2' does not exist
Nov 22 11:44:56 data1 pvedaemon[3239]: <root@pam> end task UPID:data1:00000E63:00007923:5470BD88:qmstart:109: root@pam: volume 'local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2' does not exist
Nov 22 11:45:56 data1 kernel: usb 3-1: USB disconnect, device number 2
Nov 22 11:49:58 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 11:54:53 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:03:41 data1 pveproxy[3252]: worker 3253 finished
Nov 22 12:03:41 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:03:41 data1 pveproxy[3252]: worker 3915 started
Nov 22 12:04:34 data1 pveproxy[3252]: worker 3254 finished
Nov 22 12:04:34 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:04:34 data1 pveproxy[3252]: worker 3927 started
Nov 22 12:04:59 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:09:00 data1 pveproxy[3252]: worker 3255 finished
Nov 22 12:09:00 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:09:00 data1 pveproxy[3252]: worker 3979 started
Nov 22 12:09:53 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:17:01 data1 /USR/SBIN/CRON[4077]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Nov 22 12:20:00 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:24:35 data1 pveproxy[3252]: worker 3915 finished
Nov 22 12:24:35 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:24:35 data1 pveproxy[3252]: worker 4172 started
Nov 22 12:24:53 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:28:05 data1 pveproxy[3252]: worker 3927 finished
Nov 22 12:28:05 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:28:05 data1 pveproxy[3252]: worker 4218 started
Nov 22 12:33:01 data1 pmxcfs[2781]: [dcdb] notice: data verification successful
Nov 22 12:33:26 data1 pvedaemon[3223]: worker 3242 finished
Nov 22 12:33:26 data1 pvedaemon[3223]: starting 1 worker(s)
Nov 22 12:33:26 data1 pvedaemon[3223]: worker 4295 started
Nov 22 12:35:00 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:38:06 data1 pveproxy[3252]: worker 3979 finished
Nov 22 12:38:06 data1 pveproxy[3252]: starting 1 worker(s)
Nov 22 12:38:06 data1 pveproxy[3252]: worker 4354 started
Nov 22 12:39:54 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:40:04 data1 rrdcached[2767]: flushing old values
Nov 22 12:40:04 data1 rrdcached[2767]: rotating journals
Nov 22 12:40:04 data1 rrdcached[2767]: started new journal /var/lib/rrdcached/journal/rrd.journal.1416678004.148776
Nov 22 12:50:01 data1 pmxcfs[2781]: [status] notice: received log
Nov 22 12:54:55 data1 pmxcfs[2781]: [status] notice: received log

Quote:

root@data1:/var/lib/vz/dump# fsck
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
/dev/mapper/pve-root is mounted.
e2fsck: Cannot continue, aborting.
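e2fsck is refusing to run because /dev/mapper/pve-root is the mounted root filesystem; checking it live would corrupt it. On a sysvinit-era Debian such as PVE 3.x, one way to get a clean check is to schedule it for the next boot instead:

```shell
# ask the init scripts to fsck filesystems on the next boot
touch /forcefsck
reboot
```

Alternatively, boot a rescue/live system and run e2fsck against the unmounted volume. Separately, the mangled volume path in the log ('local:105/base-105-disk-1.qcow2/109/vm-109-disk-1.qcow2') suggests trouble around the base image of template 105, so it's worth checking that linked clones of it still have their base disk available.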

PVEProxy constantly needs a restart?

Hello,

I've been experiencing some weird problems recently where the web interface won't load!
It's not just me; everyone else who tries to access the web interface during its unusual downtime sees the same thing.
The only way I know to fix this, temporarily, is to run "service pveproxy restart".


Here are the results of running ioping /dev/sdb:

Qz9OvB4.png

And these are the results of free -g:

rHDPWUe.png


I don't know how to pinpoint or fix this problem; any help would be great!

Thank you in advance.

Manual command for applying HA new configuration

Hi,

I'm searching for a command line to apply a new cluster.conf configuration after it has been validated with the following command:

Code:

ccs_config_validate -v -f /etc/pve/cluster.conf.new
The documented way is to go to Datacenter and then the HA tab; it shows the differences, and clicking Activate applies and syncs the config.
I am searching for a command to do this from the command line rather than the GUI.
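I can't point at a single documented command for this, but on the stock Red Hat cluster stack underneath PVE 3.x, the usual CLI sequence is: validate, put the new file in place, then ask cman to propagate the new version. A sketch; the mv step is an assumption about what the GUI's Activate does, and it presumes config_version was already incremented inside the new file:

```shell
ccs_config_validate -v -f /etc/pve/cluster.conf.new   # validate the candidate config
mv /etc/pve/cluster.conf.new /etc/pve/cluster.conf    # put it in place (assumption)
cman_tool version -r                                  # standard RHCS reload/propagate
```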

Thanks in advance,

Dell C6100 IPMItool not connecting

Hello all. I have had a headache trying to get fencing working for my setup. I have 4 servers housed in 2 Dell C6100s. I have gone through numerous forums here and elsewhere to get IPMI working. As of now I can run ipmitool -I open, but not ipmitool -I lanplus or -I lan. I have updated my Dell BMC and BIOS according to this thread http://forum.proxmox.com/archive/index.php/t-15762.html, but even after the update, still no luck. At one point I thought it might be the BMC card and tried multiple configurations. I then installed ipmitool on a Windows 7 box connected to the same BMC network, and it connects with ipmitool -I lanplus without any issues at all. It can even read the IPMI information of the other 4 Proxmox nodes from the Windows 7 box without any issue.

I can only assume this is either a Linux- or Proxmox-specific issue. I installed ipmitool via apt-get install ipmitool and made sure the correct modules are loaded and also listed in /etc/modules for boot.
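Since -I open works locally, it may help to dump the BMC's LAN channel settings from the node itself and compare them with what the Windows box negotiates. Channel 1 is an assumption here; C6100 boards may use a different channel number:

```shell
ipmitool -I open lan print 1          # IP source, addresses, auth/cipher settings
ipmitool -I open lan set 1 access on  # ensure LAN channel access is enabled
ipmitool -I open user list 1          # confirm the user has LAN privileges on this channel
```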

Any help would really be appreciated, because I am at a standstill with my setup until this is working.

Wildcard SSL-VNC TLS handshake issue

I recently installed a wildcard SSL certificate from Comodo in a Proxmox cluster, and now VNC no longer works. No matter which node I try to access, it gives me the following error message:
Code:

Error: TLS handshake failed javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
The certificate is installed exactly as suggested in the Proxmox wiki, and I have verified that it is working properly. All nodes have also been rebooted, just in case, rather than just restarting services.
Any ideas?

This is on Proxmox 3.3.
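One thing worth checking: the Java VNC client tends to be stricter about the served certificate chain (order and completeness of intermediates) than browsers are. A quick way to inspect what a node actually serves, assuming the default web/API port 8006 and a placeholder hostname:

```shell
openssl s_client -connect your-node.example.com:8006 -showcerts </dev/null | head -n 40
```

If the intermediate CA certificates are missing or out of order in the output, that would explain a handshake that browsers tolerate but the applet rejects.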

Scheduled W2K3 backup fails with "unable to create fairsched node"

Hello,

I have a 2-node Proxmox setup and, besides various Debian KVMs, one W2K3 server (W2K3 guest best practices: check). I've now discovered that the scheduled backup hasn't run for a week. Since I wasn't there, I'm pretty sure no changes were made to the setup.

Unfortunately I couldn't find much when searching for "fairsched" and related problems, so I'm hoping someone here can help me.

The backup itself runs at night with:
Compression: GZIP
Mode: Stop
Include: only this VM

The stop process itself runs fine (the VM does stop), but then the backup immediately fails:

Quote:

INFO: starting new backup job: vzdump 105 --quiet 1 --mode stop --compress gzip --storage store1
INFO: Starting Backup of VM 105 (qemu)
INFO: status = running
INFO: update VM 105: -lock backup
INFO: backup mode: stop
INFO: ionice priority: 7
INFO: stopping vm
INFO: creating archive '/data/store1/dump/vzdump-qemu-105-2014_11_23-01_00_01.vma.gz'
INFO: starting kvm to execute backup task
unable to create fairsched node
INFO: restarting vm
INFO: vm is online again after 9 seconds
ERROR: Backup of VM 105 failed - start failed: command '/usr/bin/kvm -id 105 -chardev 'socket,id=qmp,path=/var/run/qemu-server/105.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/105.vnc,x509,password -pidfile /var/run/qemu-server/105.pid -daemonize -name ponza -smp 'sockets=2,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k de -m 4096 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:49ec7ca2d15' -drive 'file=/data/store0/images/105/vm-105-disk-1.qcow2,if=none,id=drive-ide0,format=qcow2,cache=writethrough,aio=native,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap105i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=7E:09:D2:57:C3:22,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc-i440fx-1.7' -S' failed: exit code 1
INFO: Backup job finished with errors
TASK ERROR: job errors
I've already rebooted everything (host and VM) multiple times and migrated the VM so the backup could run on different hosts; still no luck.

The weird thing is that when I use "backup now" (store0/stop/gzip), the backup runs fine, so apparently the problem only occurs when it runs on schedule.

Here's my version info:
Quote:

proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-34-pve: 2.6.32-139
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

KVM has no internet

Hello all,

I have one server at home, which is running Proxmox:
Code:

pve-manager/3.1-21/93bf03d4 (running kernel: 2.6.32-26-pve)
The network config is the following:
Code:

# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address  192.168.1.120
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

Now I have set up two KVM with Linux Debian 7.6.0 AMD64 Netinst. Their network configuration looks like this:
Code:

# The loopback network interface
auto lo
iface lo inet loopback

iface eth0 inet manual

auto eth0
iface eth0 inet static
        address 192.168.1.123
        netmask 255.255.255.0
        gatewaz 192.168.1.1

The KVM guests can ping the Proxmox server and the gateway at 192.168.1.1, but as soon as I ping google.ch or 8.8.8.8 I get this error:
Code:

ping 8.8.8.8
connect: Network is unreachable
root@groot:~# traceroute 8.8.8.8
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
connect: The network is not reachable

An OpenVZ container running on the same server has no trouble reaching the internet.

Do you have a clue what is going wrong with the KVM guests? Have I forgotten something, or did I make an error configuring them?
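For comparison, a minimal working static stanza in a Debian guest's /etc/network/interfaces looks like the following; ifupdown effectively ignores option lines whose keyword it doesn't recognize, so the exact spelling of each keyword matters:

```
auto eth0
iface eth0 inet static
        address 192.168.1.123
        netmask 255.255.255.0
        gateway 192.168.1.1
```

"Network is unreachable" for 8.8.8.8 while LAN hosts ping fine is the classic symptom of a missing default route; running `ip route` inside the guest should show a "default via 192.168.1.1" line when the gateway option has taken effect.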

Thanks and regards
jompsi

HA Proxmox without license

Hi community,

I'm new to Proxmox, and I find it very useful and complete. My questions:
- Can I use HA and network storage without a license?
- What features or services does a license give me?

I'm interested in configuring 3 nodes with a Dell PowerVault MD3420 storage array with SAS disks. I need to know whether I'll need a license and whether this storage (or Dell storage in general) is compatible with Proxmox. Any success stories?

That's a lot of questions... I hope you can help me.
Thanks!

Ceph with SSDs vs spindle

Hi guys. I've been doing some testing with a 3-node PVE 3.3 cluster with Ceph. I currently have one 512GB SSD in each box, and the IO is just disappointing. As I learn more about Ceph, I'm finding that I simply need more OSDs, and I'm trying to determine how many I need for optimal performance.

I have 3 Dell R610s, each with 4 available drive bays, for a max of 12 OSDs. Obviously, 12 512GB SSDs would be costly (~$2400), so that's not an option at this time. However, I could get 3 more of these 512GB SSDs, giving me a total of 6 SSDs for OSDs.

For the cost of those 3 additional SSDs (~$200 each), I could instead get 12 300GB 10K drives (WD VelociRaptors would be cheapest, at about $35 each, around $425 total). 300GB 10K SAS drives would cost more but would be more reliable. Reliability aside, would I see better performance from 6 SSD OSDs or 12 10K-spindle OSDs? None of my VMs will hog serious I/O; let's say I'll have 20-25 CentOS-based VMs, each with a 10GB disk running a low load. Clearly I don't need a large amount of storage (backups and ISOs will live on an NFS share), and I suspect ~500GB will be plenty. So:

1) 12 10K OSDs vs 6 SSD OSDs: which will give me better performance?

2) Would I see a decrease in performance with a replication factor of 3 versus a replication factor of 2?
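On question 2: replication mostly costs write latency (a write is acknowledged only after all replicas have it) and raw capacity. A back-of-the-envelope capacity check, as a small shell sketch (simple arithmetic, not a benchmark):

```shell
# Usable space ~= (number of OSDs * OSD size) / replication factor,
# before the fill headroom Ceph recommends leaving free. Sizes in GB.
usable() { awk -v n="$1" -v sz="$2" -v rep="$3" 'BEGIN { printf "%d\n", n * sz / rep }'; }

usable 6  512 3   # 6 SSDs, replication 3
usable 12 300 3   # 12 spindles, replication 3
usable 6  512 2   # 6 SSDs, replication 2
```

Either layout comfortably covers ~500GB of VM disks even at replication 3, so the decision really comes down to IOPS per OSD rather than capacity.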

Thanks!

Extremely low IO performance (fsyncs/second)

Hi all.

I have IO problems with my Proxmox server. If a single VM has a peak, IO wait increases extremely for the whole system.

At the beginning there was only one VM, so I cannot say whether it got slower with the Proxmox upgrades (starting at 3.0) or as more VMs came along.

Now the system is extremely slow most of the time, so I looked a bit deeper and found:

#1 Software RAID is not supported. Damn; I didn't realize that at the beginning.
#2 ext4 sometimes has poor performance.
#3 I have abysmal fsyncs/sec values.

So, #1 is bad, but I cannot change it immediately (the server is hosted remotely), so I hope it is not the main reason for the bad IO performance.

#2 seems slower than ext3, but not by this much, right?

The main cause I found suggested for #3 was wrong disk alignment. So I checked this and got different results:

parted says the alignment should be OK:
Code:

root@abba:/# parted -s /dev/sda unit s print
Model: ATA TOSHIBA DT01ACA3 (scsi)
Disk /dev/sda: 5860533168s
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start    End          Size        File system  Name  Flags
 3      2048s    4095s        2048s                          bios_grub
 1      4096s    528383s      524288s                        raid
 2      528384s  5860533134s  5860004751s                    raid

The built-in alignment test was successful as well.

But with fdisk it seems wrong:
Code:

root@abba:/# fdisk -c -u -l /dev/sda
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000


  Device Boot      Start        End      Blocks  Id  System
/dev/sda1              1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Maybe that's because fdisk doesn't support GPT?

Which result is correct?
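For what it's worth, that fdisk output is showing the protective MBR entry that GPT places at sector 1 (hence the warning), so parted is the tool to trust here. The alignment rule itself is easy to check by hand: with 512-byte logical and 4096-byte physical sectors, a partition is 4K-aligned when its start sector is divisible by 8. A small sketch using the start sectors parted printed:

```shell
# 8 logical sectors * 512 B = one 4096 B physical sector
check() { [ $(($1 % 8)) -eq 0 ] && echo "sector $1: aligned" || echo "sector $1: misaligned"; }

check 2048     # partition 3 (bios_grub)
check 4096     # partition 1
check 528384   # partition 2
```

All three starts divide evenly by 8, so alignment looks fine; the fsync collapse under load is likely caused elsewhere (the software RAID from #1 is a plausible suspect).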


Here is my pveperf output:

Without a VM running:
Code:

root@abba:/# pveperf
CPU BOGOMIPS:      54397.28
REGEX/SECOND:      1734493
HD SIZE:          4.96 GB (/dev/mapper/vg0-abba_root)
BUFFERED READS:    179.83 MB/sec
AVERAGE SEEK TIME: 6.55 ms
FSYNCS/SECOND:    531.51
DNS EXT:          64.11 ms

With some VMs running, but no traffic/workload on them:
Code:

root@abba:/# pveperf
CPU BOGOMIPS:      54402.00
REGEX/SECOND:      1581485
HD SIZE:          4.96 GB (/dev/mapper/vg0-abba_root)
BUFFERED READS:    174.91 MB/sec
AVERAGE SEEK TIME: 9.88 ms
FSYNCS/SECOND:    9.86
DNS EXT:          57.10 ms



Here is my version dump:
Code:

root@abba:/# pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-20-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Here are my running VMs (only one is really used; the others see very little use):
Code:

root@abba:/# qm list|grep running
      VMID NAME                STATUS    MEM(MB)    BOOTDISK(GB) PID
      100 XXXX          running    6144              4.00 8241
      101 XXXX      running    512                1.00 8409
      103 XXXX          running    1024              5.00 8548
      104 XXXX            running    768                1.00 8646
      107 XXXX              running    3072              1.00 8742
      108 XXXX              running    512                1.00 10803

Can someone help me find a possible reason for this?


:thorsten

Proxmox VE 3.3 does not boot after install, Proxmox VE 3.1 does!

Hi all,

There have been many installation problems on various hardware platforms recently with PVE 3.2 and PVE 3.3:
http://forum.proxmox.com/threads/202...-after-install

http://forum.proxmox.com/threads/196...st-work-around

http://forum.proxmox.com/threads/203...n-IBM-X3100-M4

http://forum.proxmox.com/threads/977...M1015)-no-boot

In every situation, the installation goes OK, but PVE will never boot after that.

UEFI seems to be involved.

In every situation, an easy fix is:
- install PVE 3.1,
- apt-get update ; apt-get dist-upgrade

Please don't remove download links to 3.1 ISO!

A bug report has been filed.

Thanks,

Christophe.

Migrating KVM to OpenVZ

Hi all,

I've got a few KVM VPSes which I want to migrate to OpenVZ. All of them are on the same host node, if that matters.
Would anyone please share whether that is possible and, if so, briefly how I can do it?

Thanks

novnc code 1006

I have yet to find a solution for this issue. When trying to connect to the VM from an iPad, I get this on any Apple device. The log file shows the following error:
TASK ERROR: command '/bin/nc -l -p 5901 -w 10 -c '/usr/sbin/qm vncproxy 100 2>/dev/null'' failed: exit code 1
I've tried creating a firewall rule to allow VNC but experienced the same results.

Thanks, devs, for this awesome type 1 hypervisor, by the way.

Edit: I wasn't sure whether to post this here or in Networking and Firewall, so I hope I chose the right one.

Running Windows guests: supported?

Hi!


So far I have used, and am still using, Proxmox for Linux guests, and it is good; thanks for the good work! Until now I haven't had much need to run Windows guests, though I have some scattered experience and they worked too. It seems I will need to run more Windows (especially Windows 7, Windows Server 2008, and possibly Windows Server 2012 sometime in the future), so I would like to ask two things:

1. Are Windows guests supported from Proxmox Server Solutions GmbH's point of view, or is it more a best effort to get them running (assuming, if needed, that I have a valid Proxmox subscription)?
2. Does Microsoft support accept Proxmox as a supported platform for running their operating systems (say, for running Active Directory and file server applications)?



I am afraid that, although they technically work beautifully, one day there would be a need to ask Microsoft support something pertaining to, say, Active Directory clustering; they would ask me to gather some information about the computers in question with some tool, find out I run them on Proxmox, and refuse support because of that.


I still think both my questions can be answered affirmatively, and I guess that running Proxmox on a Red Hat kernel is part of that. I found that Microsoft considers Red Hat a supported platform for running several of Microsoft's operating systems (these links are about KVM, and perhaps somehow Xen as well):


https://www.redhat.com/promo/svvp/
https://access.redhat.com/documentat...Systems_1.html


I would be very thankful if someone could comment on that.




Best regards,

Imre

Why no container HA?

I have been using Proxmox for a few years now, and I am wondering why there is still no supported HA solution for containers. At this point I am willing to code it myself (I am a sysadmin, though). This feature would be literally phenomenal. Is there some sort of theoretical limitation?

Assign the host's IP to a container

I installed Proxmox on IP 1.2.3.4 with the SSH server on port 22. Is it now possible to create a container on IP 1.2.3.4 with SSH on port 220?

1.2.3.4 is a public address.

Sorry for my bad English.
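A container can't normally share the host's public IP directly, but you can keep 1.2.3.4 and forward a second port to the container's sshd. A sketch with iptables, where the container's internal address 192.168.0.10 is hypothetical and its sshd listens on the default port 22:

```shell
# forward 1.2.3.4:220 on the host to port 22 inside the container
iptables -t nat -A PREROUTING -d 1.2.3.4 -p tcp --dport 220 \
         -j DNAT --to-destination 192.168.0.10:22
```

Then `ssh -p 220 user@1.2.3.4` reaches the container, while the host's own sshd stays on port 22.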

64-bit precreated Debian 7.0 template is actually 7.6/7.7 and uses upstart

The Debian 7.0 minimal template here: http://wiki.openvz.org/Download/template/precreated

i.e. http://download.openvz.org/template/...minimal.tar.gz

is Debian 7.6 per /etc/debian_version.

(So why not name it as such?)

Same deal with the 'full' image http://download.openvz.org/template/...-x86_64.tar.gz, except it's 7.7 but named 7.0 (and yet its creation date is only seconds apart from the minimal one's).

And why has it been set to upstart instead of sysvinit? I thought upstart was totally deprecated and not part of the Debian war over systemd/sysvinit.

also:

Code:

$ cat /etc/apt/preferences.d/parallels
Package: sysvinit
Pin: release c=main
Pin-Priority: -1

Windows 8.1 Automatic Repair boot loop

I installed Windows 8.1 64-bit in a VM on Proxmox VE. It ran fine for a few days until it needed to be rebooted (updates). First, I tried reinstalling the OS, which worked until it needed to be rebooted; after the reboot it was back in the automatic repair loop. It eventually ends in "Automatic repair couldn't repair your PC". I can't get it to boot at all.

What's even more concerning is that C:\ is listed as "System Reset" and I believe D:\ is my "boot" partition.


NUOat.png




The `bootrec` command doesn't find an OS either.
Nhl5m.png




Any ideas on how to fix this? How could this have happened? I let Windows automatically partition the drive during install.

Proxmox settings:
Selection_030.png

Selection_031.png

noob questions: configuring storage for home server

I recently bought an HP ProLiant MicroServer (N54L with 8GB RAM, 1 x 120GB SSD, and 2 x WD 3TB HDDs) with the intention of learning about Linux and virtualization. My plan was to use the SSD for running the operating systems and the HDDs for storing music, films, photos, personal documents, etc., but I can't see which storage model would be appropriate. The first thing I'd like to do is set up a media server (Plex) in one of the VMs so I can stream music and films when I'm away from home; further down the line I'd like to learn how to set up web hosting, email servers, etc. I installed Proxmox on the SSD and have been able to create a couple of VMs without any issues. However, due to my inexperience with Linux, I'm having a lot of difficulty figuring out how to configure the 2 x 3TB HDDs. I want to set them up in RAID1 and have them accessible to any VM I may run, for now.

Should I mount these HDDs as directories in Proxmox, use mdadm to set up RAID, and then assign them as shared storage to the VMs?

Or would it be better to create a VM to use as an NFS server, mount the HDDs and configure RAID there, and then provide that VM's IP address to the other VMs?
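Either route works; for a single box the host-side mdadm approach is simpler, and the resulting mount can then be added as a Directory storage in the Proxmox GUI. A rough sketch, assuming the two 3TB disks appear as /dev/sdb and /dev/sdc (verify with lsblk first; this wipes their contents):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mkdir -p /mnt/media
mount /dev/md0 /mnt/media
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist the array across reboots
```

Add an /etc/fstab entry for /mnt/media so it mounts at boot, then register it under Datacenter > Storage as a Directory to make it visible to your VMs.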

I'm struggling to follow the Storage Model wiki page, as I'm only planning to run a small home server with a few VMs, so a lot of the material about backing up disk images seems to be way beyond what I need for now. Sorry for the noob questions. I know this is probably overkill for my requirements, but I figured doing this would be a good way to get some experience with Linux distros and virtualization!

