Channel: Proxmox Support Forum

Upgrade from Proxmox VE 3.3 to VE 3.4 -> Win7 VM deactivated

Hello,

After some investigation I am pretty sure that upgrading from Proxmox VE 3.3 to VE 3.4 deactivated the licence of my Windows 7 VM. Using a recent backup and a test server, I can reproduce it: the VM stays activated on Proxmox 3.3 and loses its activation on Proxmox 3.4. Sure, I can contact MS for reactivation, but now I doubt whether it was a good idea to start working with a Win7 VM instead of real hardware. Is there any explanation for that behaviour?

Kind regards,
Ralf

kernel panic

After updating the server directly from the console to the version 4 kernel, I began to have write problems and now get a kernel panic error, and I do not understand why. :confused:

Can't install

Proxmox VE license

Hi all,

I'm evaluating Proxmox VE for my business use.
This page states that Proxmox VE is published under the free software license GNU AGPL:

www.proxmox.com/en/proxmox-ve/pricing

But I got a dialog which looks like the following after installation.

Title: No valid subscription
Message: You do not have a valid subscription for this server. Please visit www.proxmox.com to get a list of available options.

So I'm confused: why does PVE show this dialog even though it's free software?
From a quick Google search, many people disable this dialog by modifying some scripts, but I don't want to do that because I want to be a legitimate user.

Finally my question is as follows.

Q1: Why is this dialog shown even though it's free software?
Q2: Do I have to pay for a subscription if I want to use PVE for my business?

Thanks in advance,

MWJ

Servers not starting after backup

Hi

We are using VMware at work, but I have been trying to get Proxmox accepted for a few years. Lately I am having problems because the servers don't run stably.

I am still using the free version, currently 3.4-9. The servers are backed up every night at 1 o'clock, and when I get to work, 1 or 2 of them have stopped. It is not a big problem, I just go and press Start on the server and they run again, but this is only acceptable because they are development servers; it would be really bad if they were production servers.

I have 6 servers in a cluster, and they are running 20-30 VMs.
Storage is 4 servers with NFS: 2 are used for images, and the other 2 for backup files.
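
For reference, the nightly job's backup mode (stop/suspend/snapshot) and any messages around 01:00 can be checked from the shell; a rough sketch for a standard PVE 3.x install (nothing here is specific to my setup):

Code:

# show the scheduled backup jobs as vzdump command lines, including their --mode
cat /etc/pve/vzdump.cron

# look for vzdump messages (start, stop, resume, errors) around the backup window
grep -i vzdump /var/log/syslog | less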

Does anyone have an idea to solve this problem ?

Best regards
Allan

Proxmox 4 beta 1 ZFS kmem_alloc crash

Hi!

For the last few years, just reading the forum solved my problems, but now I want to share my experience in case someone else has similar problems:

This time I am building a really small "server": an Intel NUC5PPYH with 8 GB of RAM and an SSD that was already lying around.
I installed beta 1 without problems, and for testing I created 3 KVM VMs: Debian 8, Windows 7 and Windows 8.
Installed with ZFS RAID0 on the SSD.
The first problem was that the machine crashed very often, especially when the hard disk was under load.
The logs showed things like

Code:

SRST failed (errno=-16)
hard resetting link
link is slow to respond, please be patient (ready=0)
SRST failed (errno=-16)
hard resetting link
link is slow to respond, please be patient (ready=0)

and some research led me to suspect that the drive might be faulty. But even a firmware upgrade on the SSD didn't make things better, and smartctl showed no errors.

So I took another SSD. It seemed a bit better at first, but under heavy stress the system stopped.

The logs showed this error

Code:

Large kmem_alloc(39036, 0x0), please file an issue at: https://github.com/zfsonlinux/zfs/issues/new
Several of these messages appeared in the logs, and it seems the host became unresponsive about half a minute after that and crashed.

I found these resources:

https://github.com/zfsonlinux/zfs/issues/2459
https://github.com/zfsonlinux/zfs/issues/3057
https://github.com/zfsonlinux/zfs/issues/3251
(Can't post links on my first post...)

so it seems to me that this is kernel-related.

After that I tried installing Proxmox 3.4 with the same parameters to see if it crashes too, but so far all stress tests are fine and the little box just keeps running.
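
For anyone comparing the two installs, the running kernel and the loaded ZFS/SPL module versions can be checked like this (a small sketch using the standard sysfs paths):

Code:

# kernel currently running
uname -r

# versions of the loaded ZFS and SPL kernel modules
cat /sys/module/zfs/version /sys/module/spl/version

# any ZFS/SPL or kmem_alloc messages since boot
dmesg | grep -iE 'zfs|spl|kmem_alloc'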

So for me there isn't a problem at the moment. I just hope this is helpful if other people run into a similar error.
Besides that, I think it is more a kernel-related problem than something in Proxmox.

And last but not least: thanks for Proxmox, happy user since 1.x (don't remember exactly, was there a 1.2?).

VNC Error

Hi all,

I have just upgraded my node to 3.3-5 and have started getting some VNC errors on KVM.

When I load the console via NoVNC on any VM (even brand new ones) I get this error:
no connection : Connection timed out
TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 801 2>/dev/null'' failed: exit code 1
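
For reference, the inner command from that task error can be run by hand to see the stderr that the wrapper throws away with 2>/dev/null; a rough sketch using VM 801 from the error above:

Code:

# run the VNC proxy directly so its real error output is visible
/usr/sbin/qm vncproxy 801

# check whether something is already listening on the VNC port
netstat -tlnp | grep :5900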

Any ideas?

apt-get upgrade problem with zfsutils 0.6.4

Hello,

I'm having an issue while upgrading zfsutils to the latest version.
The problem appears to be the same as in this thread: http://forum.proxmox.com/threads/230...16703&posted=1


I issued apt-get update, which synced the repos without any error, then issued apt-get upgrade, and while configuring the last packages I got the following error on zfsutils:

Code:

Setting up zfsutils (0.6.4-4~wheezy) ...
insserv: There is a loop between service zfs-mount and zfs-zed if stopped
insserv:  loop involving service zfs-zed at depth 5
insserv:  loop involving service zfs-import at depth 4
insserv:  loop involving service umountfs at depth 7
insserv:  loop involving service zfs-mount at depth 15
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
dpkg: error processing zfsutils (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of zfs-initramfs:
 zfs-initramfs depends on zfsutils; however:
  Package zfsutils is not configured yet.

dpkg: error processing zfs-initramfs (--configure):
 dependency problems - leaving unconfigured
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-2.6.32-40-pve
Errors were encountered while processing:
 zfsutils
 zfs-initramfs
E: Sub-process /usr/bin/dpkg returned an error code (1)

I still have to reboot the machine after the upgrade, but I cannot afford many hours of downtime right now.


My configuration consists of a single raidz3 pool with 9 disks, shared between data and VM disks.

The solution in the thread I linked above was not useful for me, so I am still facing the problem.
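
In case it helps with diagnosing, insserv builds its ordering from the LSB headers of the init scripts named in the loop, so those headers can be inspected directly, and the half-configured packages retried afterwards; a sketch (script names taken from the error above):

Code:

# show the declared Required-Start/Required-Stop dependencies insserv complains about
grep -A8 'BEGIN INIT INFO' /etc/init.d/zfs-mount /etc/init.d/zfs-zed /etc/init.d/zfs-import

# once the loop is resolved, retry configuration of the pending packages
dpkg --configure -a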

Thank you in advance
Best regards

Suspending a VM

Trying to find some information about the Proxmox "suspend" feature. Does this flush the guest's buffers to disk first, or simply pause the VM and keep its state in memory? I'm trying to come up with an easy way to flush guest buffers from the host for backup purposes. I appreciate the input!
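
For context, one way to get the guest's buffers flushed without relying on suspend at all is to freeze the filesystems inside the guest for the duration of the backup; a hedged sketch for a Linux guest (hostname and mount point are placeholders, fsfreeze is part of util-linux):

Code:

# inside the guest: flush dirty buffers and block new writes on the data filesystem
ssh root@guest-vm 'sync && fsfreeze -f /srv/data'

# on the host: take the backup/snapshot here, e.g. vzdump <vmid> --mode snapshot

# inside the guest: thaw the filesystem again
ssh root@guest-vm 'fsfreeze -u /srv/data'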

Starting VMs on first node after second node failure on 2-node Proxmox cluster (DRBD)

Hello,

I have two servers configured in a Proxmox 2-node cluster with LVM on top of DRBD for storage for my VMs. See the diagram below (crude Paint drawing :P).

pve_cluster.png

My second node failed after an upgrade to Proxmox 3.4, and I cannot resolve the issue immediately.

I would like to start the VMs that were located on the second node on my first node. When the second node is back online, I can re-sync the data using the first node as the source.

I cannot migrate them in the Proxmox web interface because the second node is currently offline (see screenshot below). I need to find a way to do this manually, but I can't find anything in the documentation about it.

pve_screenshot_1.png

Here is the output of my logical volumes. As you can see the VMs with IDs 101, 102, 103, and 105 were located on the second node.

Code:

--- Logical volume ---
  LV Path                /dev/vg0/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                vg0
  LV UUID                6sJdZu-qwqj-dmMd-Imak-NGk8-V0z8-DR2u9J
  LV Write Access        read/write
  LV Creation host, time pve3, 2014-09-04 18:49:22 +0200
  LV Status              available
  # open                1
  LV Size                40.00 GiB
  Current LE            10240
  Segments              2
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:3


  --- Logical volume ---
  LV Path                /dev/vg0/vm-101-disk-1
  LV Name                vm-101-disk-1
  VG Name                vg0
  LV UUID                BZFIbz-rgyL-oc6B-h3DW-VmtJ-My5e-ow33WH
  LV Write Access        read/write
  LV Creation host, time pve4, 2014-09-05 17:29:25 +0200
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE            5120
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto


  --- Logical volume ---
  LV Path                /dev/vg0/vm-102-disk-1
  LV Name                vm-102-disk-1
  VG Name                vg0
  LV UUID                seM9aZ-k70s-Ge8u-v8ux-NPJ4-bYDe-0nwrUk
  LV Write Access        read/write
  LV Creation host, time pve4, 2014-10-17 13:06:39 +0200
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE            5120
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto


  --- Logical volume ---
  LV Path                /dev/vg0/vm-103-disk-1
  LV Name                vm-103-disk-1
  VG Name                vg0
  LV UUID                288VtP-yXug-NQ7M-xdLQ-qT3z-5CwL-e1kzI2
  LV Write Access        read/write
  LV Creation host, time pve3, 2015-01-03 15:54:54 +0100
  LV Status              NOT available
  LV Size                20.00 GiB
  Current LE            5120
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto


  --- Logical volume ---
  LV Path                /dev/vg0/vm-104-disk-1
  LV Name                vm-104-disk-1
  VG Name                vg0
  LV UUID                qraFwh-e8vu-dUVp-Cphc-TP0l-ALZd-bkU5XK
  LV Write Access        read/write
  LV Creation host, time pve3, 2015-03-15 14:01:06 +0100
  LV Status              available
  # open                1
  LV Size                100.00 GiB
  Current LE            25600
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:4


  --- Logical volume ---
  LV Path                /dev/vg0/vm-105-disk-1
  LV Name                vm-105-disk-1
  VG Name                vg0
  LV UUID                ikPpEU-n3il-aX7z-LCLV-ko7g-x8E8-lX1FkK
  LV Write Access        read/write
  LV Creation host, time pve4, 2015-03-16 15:49:38 +0100
  LV Status              NOT available
  LV Size                100.00 GiB
  Current LE            25600
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto

So the data is there on the first node but the LVs are listed as "NOT available".

Does anybody know how I can start these VMs on the first node?

I assume it involves marking the LVs as available and then starting or migrating the VMs manually, but I can't find this in the documentation. I've looked here:

https://pve.proxmox.com/wiki/DRBD
https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster

I don't want to break anything. Is it as simple as doing the following steps (roughly sketched below)?


  1. Moving the configuration files from /etc/pve/nodes/pve4/qemu-server/ to /etc/pve/nodes/pve3/qemu-server/
  2. Using lvchange -aey on the logical volumes I want to make available
  3. Restarting Proxmox
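
In shell terms, my understanding of those steps would look roughly like this (VM 101 as an example, untested, so please correct me if it is wrong):

Code:

# if /etc/pve is read-only because the 2-node cluster lost quorum, it may first need:
#   pvecm expected 1

# 1. move the VM config from the dead node to the surviving node
mv /etc/pve/nodes/pve4/qemu-server/101.conf /etc/pve/nodes/pve3/qemu-server/

# 2. activate the logical volume on the surviving node
lvchange -aey /dev/vg0/vm-101-disk-1

# 3. start the VM (instead of restarting Proxmox)
qm start 101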


Thanks for your reply.

[P] File system Passthrough - or are there other alternatives?

Hello!

I have configured a BTRFS file system spanning multiple disks, including subvolumes.
The subvolumes are used to store typical media files, e.g. music, video.

On the host, each subvolume is mounted at its own mount point:
Code:

Subvolume    Mount
music        /mnt/music
video        /mnt/video

Now I intend to run NAS software in KVM: OpenMediaVault.

Question:
What is the best / most efficient approach to make the host file system available to the "OpenMediaVault" guest?

I found a blog post explaining the configuration "File System Pass-Through in KVM/Qemu/libvirt".
Would this be a reasonable approach?
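
For reference, the mechanism that blog post describes is QEMU's virtio-9p filesystem passthrough. Proxmox does not expose it in the GUI, but as far as I understand (untested here) the raw QEMU options can be passed via the 'args:' line of the VM config; a sketch for the music subvolume (IDs and mount tag are just placeholders):

Code:

# /etc/pve/qemu-server/<vmid>.conf on the host (one line)
args: -fsdev local,id=fsdev0,path=/mnt/music,security_model=passthrough -device virtio-9p-pci,fsdev=fsdev0,mount_tag=music

# inside the OpenMediaVault guest
mount -t 9p -o trans=virtio,version=9p2000.L music /mnt/music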

THX

Storage Problems With Proxmox/Ceph

I'm trying to test out a Proxmox/Ceph cluster, and the GUI/storage seems to stop working for all storage-related tasks/info when I set up Ceph.

I set up a nested Proxmox cluster (wiki/Nested_Virtualization) and everything seems to work with that. The hardware server is running pve-manager 3.4-9 with kernel 3.10.0-11-pve. I have three VMs set up:
OS: PVE 3.4-9, Kernel 2.6.32-40-pve
NICs: VIRTIO0 - Bridge to Internet connected NIC
VIRTIO1 - Bridge to be used for Proxmox/VMs
VIRTIO2 - Bridge to be used for Ceph
Hard Drives: VIRTIO0 - For Proxmox
VIRTIO1 - For Ceph Journal
VIRTIO2 - For Ceph disk #1
VIRTIO3 - For Ceph disk #2

Everything seems to run fine with the nested Proxmox cluster until I set up Ceph (wiki - Ceph_Server). I have Ceph installed, the monitors set up, the OSD disks set up, and a pool created. Now if I try to check content through the GUI I get a communication failure. If I try to create a VM, the Hard Disk -> Storage box is greyed out/unavailable. Once I get this communication failure, everything related to storage gets a communication failure as well; anything that makes a call to /api2/json/nodes/pc1/storage generates an error.

Once the storage timeouts start, I may get timeouts in other parts of the GUI, and the graphs do not show any data. Making a request through the GUI for storage-related information seems to trigger this. The same thing happens on all three nested Proxmox nodes. I've started over from a bare-metal install of this configuration several times, double-checking every step along the way.

I've tried creating a VM from the command line with local storage and that works, but the storage is not created.
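
To narrow down whether the hang is in pvestatd/the API or in the Ceph layer itself, the storage can also be queried directly from the shell; a rough sketch (pool and storage names from my storage.cfg below, keyring path is the standard PVE location for an RBD storage entry):

Code:

# ask PVE's storage layer directly (this is what the GUI ends up calling)
pvesm status
pvesm list RBD_Pool1

# talk to Ceph directly, bypassing PVE, with a timeout so a hang is obvious
ceph -s
timeout 30 rbd ls -p pool1 --id admin --keyring /etc/pve/priv/ceph/RBD_Pool1.keyring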

Any ideas on what is going on would be appreciated.

From access.log
Quote:

xxx - root-at-pam [20/Aug/2015:10:29:52 -0700] "GET /api2/json/nodes/pc1/storage?content=images HTTP/1.1" 596 -
xxx - root-at-pam [20/Aug/2015:10:29:52 -0700] "GET /api2/json/nodes/pc1/storage?content=iso HTTP/1.1" 596 -
xxx - root-at-pam [20/Aug/2015:10:30:19 -0700] "GET /api2/json/nodes/pc1/storage/local/status HTTP/1.1" 596 -
xxx - root-at-pam [20/Aug/2015:10:30:49 -0700] "GET /api2/json/nodes/pc1/storage/RBD_Pool1/status HTTP/1.1" 596 -
xxx - root-at-pam [20/Aug/2015:10:32:20 -0700] "GET /api2/json/nodes/pc1/storage/local/status HTTP/1.1" 596 -
xxx - root-at-pam [20/Aug/2015:10:32:50 -0700] "GET /api2/json/nodes/pc1/storage/RBD_Pool1/status HTTP/1.1" 596 -
From syslog
Quote:

Aug 20 10:29:36 pc1 pveproxy[87782]: proxy detected vanished client connection
Aug 20 10:33:06 pc1 pvestatd[3807]: status update time (300.102 seconds)
Aug 20 10:35:22 pc1 pveproxy[87780]: proxy detected vanished client connection
Top level version information
Quote:

proxmox-ve-2.6.32: 3.4-160 (running kernel: 3.10.0-11-pve)
pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-3.10.0-11-pve: 3.10.0-36
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Nested version information
Quote:

proxmox-ve-2.6.32: 3.4-160 (running kernel: 2.6.32-40-pve)
pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-2.6.32-39-pve: 2.6.32-157
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Virtualization is working in the VM; cpuinfo flags:
Quote:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx lm
constant_tsc arch_perfmon rep_good unfair_spinlock pni vmx ssse3 cx16 sse4_1 x2apic hypervisor lahf_lm vnmi
VM Config file
Quote:

args: -enable-kvm
bootdisk: virtio0
cores: 2
cpu: host
ide2: local:iso/proxmox-ve_3.4-102d4547-6.iso,media=cdrom
memory: 12288
name: PC1
net0: virtio=92:E7:5E:18:CC:E7,bridge=vmbr0
net1: virtio=56:07:4F:EE:E6:5F,bridge=vmbr1
net2: virtio=96:78:D1:F5:11:70,bridge=vmbr2
numa: 0
onboot: 1
ostype: l26
smbios1: uuid=e73284d5-4878-4904-beec-3b3100829b4f
sockets: 2
virtio0: local:101/vm-101-disk-1.qcow2,format=qcow2,size=15G
virtio1: local:101/vm-101-disk-2.qcow2,format=qcow2,size=15G
virtio2: local:101/vm-101-disk-3.qcow2,format=qcow2,size=25G
virtio3: local:101/vm-101-disk-4.qcow2,format=qcow2,size=25G
VM network (I have also tried configuring eth1/eth2 directly instead of the vmbr1/vmbr2 bridges)
Quote:

auto lo
iface lo inet loopback

auto vmbr1
iface vmbr1 inet static
address 10.10.10.101
netmask 255.255.255.0
bridge_ports eth1
bridge_stp off
bridge_fd 0

auto vmbr2
iface vmbr2 inet static
address 10.10.11.101
netmask 255.255.255.0
bridge_ports eth2
bridge_stp off
bridge_fd 0

auto vmbr0
iface vmbr0 inet static
address xx.xx.xx.xx
netmask xx.xx.xx.xx
gateway xx.xx.xx.xx
bridge_ports eth0
bridge_stp off
bridge_fd 0

Storage.cfg
Quote:

rbd: RBD_Pool1
monhost pc1
pool pool1
content images
username admin

dir: local
path /var/lib/vz
content images,iso,vztmpl,rootdir
maxfiles 0

Ceph Status
Quote:

cluster fe377151-e3ac-498f-8fac-daaf98defe56
health HEALTH_OK
monmap e3: 3 mons at {0=10.10.11.101:6789/0,1=10.10.11.102:6789/0,2=10.10.11.103:6789/0}
election epoch 40, quorum 0,1,2 0,1,2
osdmap e185: 6 osds: 5 up, 5 in
pgmap v550: 320 pgs, 2 pools, 0 bytes data, 0 objects
188 MB used, 124 GB / 124 GB avail
320 active+clean

HA Failover and resources

Hello,

I have been running Proxmox in production for roughly two years now; I know it quite well and it has been working very well. We are currently putting together a new, improved cluster of 5 nodes with HA and fencing, which is all up and running perfectly in our office test rack.

Is there a way to change how rgmanager/Proxmox decides which machine to fail over to? I understand that it goes by the failover group and priority, but can it be configured to work based on available resources?

All our nodes have 64 GB of RAM, and some of our VMs have as much as 32 GB assigned to them. In the past I have accidentally handed out more RAM than the server actually has, which causes the host and all VMs to run dog slow and at times even freeze.

At the same time I have been experimenting with a small 3-node XenServer cluster with NFS shared storage, which was a dream to set up (one-click HA setup) and supports automatic failover based on available resources, which works amazingly well. But what I don't like is that XenCenter is Windows-only (I use a Mac, and the ported OpenXenManager is very buggy), plus the hassle it would be to migrate from Proxmox to XenServer.

Any advice would be amazing,

Thanks
Richard

Nutanix Community Edition with Proxmox

I'm new to Proxmox.

I'm planning to use Nutanix with Proxmox.

Is this a good idea?

Are there any problems or negative points I should know about before doing this?

Has anyone already done it?

Best Regards !!!

Carlos Eduardo.

[Proxmox 4b1] q35 machines failing to reboot, problems with PCI passthrough.

As the title says, on Proxmox 4 beta 1, q35 machines are unable to reboot without a stop/start cycle. After a reboot (from within the VM), the machine fails to boot, instead hanging forever at SeaBIOS just after printing "Press F12 for boot menu.", before it would state that it is booting from the hard disk.

PCI passthrough sometimes renders machines unbootable - they freeze in the same way, displaying "Press F12 for boot menu." forever. Removing the 'hostpci0: 03:04.0,pcie=1,driver=vfio' line from the config makes it possible to boot, and then typing:

device_add vfio-pci,host=03:04.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0

in the qm monitor correctly adds the card to the VM.
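
For completeness, the full workaround sequence looks roughly like this (VM ID 101 is just an example; the hostpci0 line above has already been removed from /etc/pve/qemu-server/101.conf):

Code:

# open the monitor of the running VM
qm monitor 101

# at the monitor prompt, hot-add the passed-through card
device_add vfio-pci,host=03:04.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0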

Any pointers as to how to fix the above?

ZFS over iSCSI with Multipath

Hello! We are working on a new implementation and the hardware we have is a good setup for trying to utilize multipath. I was reading over the wiki regarding ZFS over iSCSI and saw a note that multipath specifically wouldn't work (https://pve.proxmox.com/wiki/Storage:_ZFS_over_iSCSI). I was wanting to find out what the issue is that would prevent this from working. From what I can tell, Proxmox connects to the ZFS host and provisions a ZVol and then exports it over iSCSI. The Proxmox host then gets a block device on the other side as a result of the iSCSI LUN presented to it. I would think multipath being in there wouldn't change a thing in whether or not this works but I wanted to understand better the impact.
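
For context, this is my mental model of the mechanism in shell terms, which is why I would expect multipath to be transparent to it (pool, portal and sizes are made up):

Code:

# on the ZFS host, Proxmox (over ssh, as I understand it) creates a zvol for each new disk ...
zfs create -V 32G tank/vm-100-disk-1
# ... and exports it as a LUN through the host's iSCSI target daemon

# on the Proxmox node, that LUN is discovered and logged into like any other iSCSI disk
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
iscsiadm -m node --login

# multipath would then simply sit between the iSCSI sessions and the resulting block device
multipath -ll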

Also, along these same lines, is all of the information on that page considered current? I know it was last updated earlier this year but there are references to things that are much older as well (such as a plugin for OpenMediaVault) and it made me wonder if a lot of the content might be out of date.

Thanks for any insight, help, or advice on the subject!

Windows Clients Freeze on Proxmox 3.4-6

Hi all,

A customer of ours has a Proxmox server (of course :) ), and sometimes one of his virtual machines (he has 4) just freezes and stops working. The only way to get it working again is to choose Stop and then Start (Reset doesn't work).

If you need more information, just ask. I don't know which information I should provide.

Modify LXC template

How do I modify an LXC template?
With OpenVZ I make a tar of /var/lib/vz/images/<ctid>
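
In case it helps, the rough LXC equivalent I would try (untested; paths depend on your storage type, CT ID 123 is a placeholder) is: stop the container, tar up its root filesystem, and drop the archive into the template cache so it can be used for new containers.

Code:

# stop the container
pct stop 123

# make the rootfs reachable; with pct mount it should appear under /var/lib/lxc/123/rootfs
pct mount 123

# create the template archive, keeping numeric owners
tar czf /var/lib/vz/template/cache/my-custom-template.tar.gz \
    --numeric-owner -C /var/lib/lxc/123/rootfs .

pct unmount 123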

Thanks!

disk vm windows suddenly stop running

Hi everyone,
I have a problem: my Windows guest VM suddenly stops working. In Resource Monitor there is no activity on the hard disk, and CPU and RAM usage are low. The physical hard disk is fine.
Could it be a bug? Can anyone help me, please?

v4 ETA?

We are in the process of building a new cluster that's not yet in production. We're trying to hold off on going live with it until v4 is released, since it will contain several features we want to use. Any idea when that may be? A ballpark estimate would be great.