

VNC console doesn't work only on CentOS

Hi all,

I've installed a CentOS 6 container and its console doesn't work. It just shows a black screen with a cursor and nothing else.

I did everything explained in the wiki (http://pve.proxmox.com/wiki/OpenVZ_Console) for CentOS, but the problem remains.

Following the same wiki page for a Debian installation on the same node, the instructions worked fine and I was able to use the console without a problem.
It's only CentOS that doesn't work.
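One check that might narrow it down: confirm that the getty the wiki has you configure is actually running inside the container. A quick look from the host (the container ID 101 is just a placeholder):
Code:

# on the host: is a getty attached inside the container?
vzctl exec 101 ps ax | grep -v grep | grep getty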

Any help is much appreciated.

Thanks

Proxmox IPv6

Hello,
I followed this guide:
http://www.wombat.ch/index.php?optio...-sw&Itemid=101

At the point where I should do a restart with
/etc/init.d/networking restart

I got an error:
Quote:

~# /etc/init.d/networking restart
Running /etc/init.d/networking restart is deprecated because it may not re-enable some interfaces ... (warning).
Reconfiguring network interfaces...
Waiting for vmbr0 to get ready (MAXWAIT is 2 seconds).
RTNETLINK answers: File exists
vzifup-post ERROR: Unable to add route ip route add 2002:b2c1:1f1f:0:0:0:0:aab3 dev venet0
run-parts: /etc/network/if-up.d/vzifup-post exited with return code 34
RTNETLINK answers: File exists
Failed to bring up vmbr0.
done.
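The "RTNETLINK answers: File exists" lines suggest the route (or address) is already present, so the script fails when it tries to add it again. A quick way to check, using the address from the error above as the example:
Code:

# does the route already exist?
ip -6 route show dev venet0
# if a stale copy is there, remove it and retry the restart
ip -6 route del 2002:b2c1:1f1f:0:0:0:0:aab3 dev venet0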
Any idea?
Have a nice day,
vinc

KVM guest CPU high, strange display issues

OK, bear with me, as I had to re-type all of this after losing my progress when I was logged out automatically.

[Attached screenshots: kvmguestcpu.jpg, strange guest behavior.jpg]

Let me start off by letting you know how I have things set up.

I have installed ZFS on linux through the wiki's instructions:

http://pve.proxmox.com/wiki/ZFS

I've limited the amount of RAM it uses to 4096 MB and configured it to mount its datasets at every Proxmox startup, just as it would any other filesystem.

Besides having htop installed, there's really nothing else custom about my Proxmox installation.

I've recently switched my installation from a little 16 GB USB drive to a 120 GB SSD and freshly restored all of my old VMs.

As you can see from the screenshots above, this happens from time to time in a newly installed Linux Mint KVM guest. The KVM processes on my Proxmox host spike to almost full blast on all cores for some reason and my VM becomes unresponsive. When it does respond, it renders random letters as other ASCII symbols; in the screenshot above, it replaced them with blank spaces.

I can reproduce this: all it takes is using the VM for a solid 15-30 minutes, and sometimes it happens every 5 minutes. It happens in both SPICE and noVNC displays.

I love Proxmox, but it has been my only experience with KVM virtualization and I'm not sure where I should start looking.

Why would my KVM guest spike CPU like this? Are there KVM logs where I can determine what exactly it is doing? Is there any way to limit the CPU usage, and would that actually help?
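A starting point that may help with the log question (standard tools, nothing VM-specific): match the spiking kvm process to its VM, then watch the host logs during a spike:
Code:

# qm list shows each running VM's PID; match it against the busy process in top
qm list
top -c
# watch host-side messages while the spike happens
tail -f /var/log/syslog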

Any help is appreciated. I love Proxmox and use it for all my homelab experiments, but it has become unusable for me and I want to keep using it.

My feeling is that it has to do with recent updates, either 3.2 or 3.3, as this was never a problem in 3.1. I know that's very vague and I'm willing to wipe and re-install in steps.

Proxmox cluster without HA and with Ceph, manually managed for simple redundancy

Hi, my needs are:
simple redundancy of storage and computing power.
I would love to have:
a) 2 nodes, Ceph as shared storage, each node running its own VMs
b) Be able to migrate VMs between nodes
c) If a node is down, be able to manually run its VMs on the other node without the risk that, when the node comes back up, it starts its VMs and destroys them
d) Be able to increase the node count to 3 or 4 without quorum concerns (which is why I will not use DRBD)

With the above I want to achieve:
- if I have to do maintenance on a node: manually migrate its VMs, turn it off or disconnect it from the cluster, do what I need, turn it on, and then migrate the VMs back
- if a node fails (e.g. power supply failure): start its VMs on the other node, repair the node, turn it on (it must NOT start the VMs automatically), and migrate the VMs back
- the nodes are remote to me, so if I'm told that some VMs are not working: connect via ssh to the surviving node, start the remaining VMs there, and not fear that if the dead node restarts on its own it will start its VMs and destroy them (since I have already started them on the other node)

Is this possible, and how? As far as I understand, quorum can be forced to 1 in Proxmox, but I don't know about Ceph.
Also important: it seems that when a node comes back up, it starts its VMs without checking whether they are already running elsewhere. Is that true?
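For the manual parts, these are the knobs I believe are involved (hedged; verify on a test cluster before relying on them): forcing quorum on a surviving node and disabling per-VM autostart:
Code:

# on the surviving node: tell the cluster to expect only one vote
pvecm expected 1
# keep a VM from starting automatically at node boot (VMID 100 is a placeholder)
qm set 100 -onboot 0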
Thanks a lot

Error installing: Installation failed! unable to partition harddisk /dev/sda

Motherboard / CPU:
- Gigabyte GA-J1900N-D3V
- Intel® Celeron® Quad-Core J1900 SoC
- One 500 GB Sata Drive
http://www.gigabyte.com/products/pro...px?pid=4918#sp


Tried to install from a USB DVD-ROM and a USB stick, but:

1. USB stick - I can select both the stick and the HDD in the second install screen, but then it errors out and the USB stick isn't bootable anymore - I suspect it writes to the USB stick instead of the HDD
2. USB DVD - errors out with the same message: unable to partition harddisk /dev/sda

I tried:
- dd-ing over the first 100 MB of the drive with /dev/zero
- creating a GPT partition table with an Ubuntu live CD
- creating an MBR partition table with an Ubuntu live CD
- using different SATA ports / cables

I took a few old-school photos of the monitor and put them in an imgur album at http://imgur.com/a/wZzDn#gBbdrWc

I tried with 3.3 and 3.2 with same results.

Any ideas on what to try next?
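One more thing that might be worth trying (a guess: a stale backup GPT header at the end of the disk survives a dd over the first 100 MB and can confuse partitioners): zap both GPT copies and the MBR from the Ubuntu live CD:
Code:

# sgdisk is in the gdisk package (apt-get install gdisk)
# wipes the primary GPT, backup GPT and MBR structures on /dev/sda
sgdisk --zap-all /dev/sda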

Is Writeback caching safe?

Hi everyone.

In the Windows 2012 guest best practices (https://pve.proxmox.com/wiki/Windows...best_practices) the following option is recommended:
Quote:

  • Select Bus/Device: VIRTIO, Storage: "your preferred storage" and Cache: Write back in the Hard Disk tab and click Next.

I've read a lot about which caching option you should choose when setting up a guest. What's more, there are lots of threads in this forum stating that writeback is dangerous:
http://forum.proxmox.com/threads/106...he-Performance

Sometimes it is even called really dangerous:
http://forum.proxmox.com/threads/882...writeback-safe

What am I missing?
Is it really safe enough to be recommended in a wiki article as a "best practice"?

I am completely confused.
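For context, the cache mode ends up as a per-disk option in the VM configuration, and it can be set from the CLI as well (the VMID and volume name are placeholders):
Code:

# cache mode is a per-disk option in /etc/pve/qemu-server/<vmid>.conf, e.g.
#   virtio0: local:100/vm-100-disk-1.qcow2,cache=writeback
# or set it from the CLI:
qm set 100 -virtio0 local:100/vm-100-disk-1.qcow2,cache=writeback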

Thanks for any advice or comment.

Valentin

Help! Directory storage remounts local storage. Pics inside

I have ZoL installed, with local ZFS datasets holding my images.

I've mounted the location /tank/IMAGES as my main VM image storage. It is a raidz1 with three 500 GB drives.

Occasionally, when I boot Proxmox, this storage gets mounted on the root drive of the Proxmox installation instead. See below:

[Attached screenshot: storage.PNG]

Then, if I look at the storage (again, a raidz1 with three 500 GB drives), it appears to have only the space remaining on my root drive:
[Attached screenshots: storage2.PNG, storage3.PNG]

Has anyone running local ZFS run into this before?

To fix it, I have to completely destroy and re-create the ZFS dataset and re-add the directory in the Proxmox GUI.
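My guess at the mechanism (hedged): if the pve services touch /tank/IMAGES before ZFS has mounted the dataset, the directory tree gets created on the root filesystem, and ZFS then refuses to mount the dataset over the non-empty directory. A quick check after a bad boot:
Code:

# is the dataset actually mounted where the storage points?
zfs get mounted,mountpoint tank/IMAGES
mountpoint /tank/IMAGES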

Edit: To prove that this location should show 900 GB free:
[Attached screenshot: storage4.PNG]

Here's the output of mount:
[Attached screenshot: mountoutput.PNG]

kernel.pid_ns_hide_child=1 for monitoring purposes

Hi all.

I'm using ZenOSS to monitor our proxmox servers.
During the modelling phase, ZenOSS will grab an initial processlist and match against a table of known processes that should be running.
Trouble is, it's picking up the processes running inside the various containers.
When a process stops in a container that happens to be unique (i.e. the process is only running in that container), we get two alerts: one for the container and one for the Proxmox host.

I've seen that you can hide processes running inside containers from the host node by setting:
Code:

kernel.pid_ns_hide_child=1
However, I read on the OpenVZ wiki (https://openvz.org/Processes_scope_and_visibility) that this will break live migration and checkpointing.
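If anyone does try it, it is set like any other sysctl (standard mechanics, nothing OpenVZ-specific in how it is applied):
Code:

# test at runtime
sysctl -w kernel.pid_ns_hide_child=1
# persist across reboots
echo "kernel.pid_ns_hide_child=1" >> /etc/sysctl.conf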

Has anyone else dealt with this kind of issue?

Thanks

Ceph OSD failure causing Proxmox node to crash

Over the weekend, 2 HDDs (OSDs) died in one of our Ceph clusters. After I replaced them, the cluster went into rebalancing mode as usual. But since then I cannot access the RBD storage from Proxmox. It is also making some of the Proxmox nodes inaccessible through the GUI: if I log in to a working GUI on another node, some of the nodes keep giving connection errors. Syslog shows all nodes logging the following message:
Code:

Jan 19 04:18:10 00-01-01-21 pveproxy[363619]: WARNING: proxy detected vanished client connection
The inaccessible nodes show a higher number of these messages. No VM can be started, and the nodes cannot even access an NFS share without connection errors. As soon as I disable the RBD storage, the Proxmox cluster becomes normal again: all connection errors go away, and I can access the NFS share and all other shares. But of course I then no longer have RBD, and thus no VMs.
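Two things I would look at while it rebalances (standard Ceph commands; the throttle values are only examples): check what the cluster is blocked on, and slow recovery down so client I/O is not starved:
Code:

# overall state and what is blocking
ceph -s
ceph health detail
# throttle backfill/recovery so client I/O can get through (example values)
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'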
Any idea?

More than 32 NICs possible?

Hi everyone,
I have been working with Proxmox for a while now and everything has worked very well.
But now there is a little problem. I'm setting up a virtual pfSense router to manage different networks; testing with 3 networks worked without any problems.
I configured VLANs on my host on eth1, bound them to the virtual pfSense, and worked with them like physical interfaces. The switch understands the tagged packets.

After testing I wanted to add my configured vmbrs, but the maximum number of network interfaces is set to 32 - I need at least 50 interfaces.
Some searching pointed to changing line 468 in /usr/share/perl5/PVE/QemuServer.pm, where the maximum number of interfaces is set.

At first I thought it worked, but interfaces added beyond that don't appear in the GUI - that would be no problem by itself, but after booting the pfSense VM, none of the 50 interfaces worked; after deleting interfaces 33-50 it worked again.
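For reference, the cap lives as a constant in that file (the exact name is from memory, so treat it as an assumption and check your version):
Code:

# locate the constant that caps the per-VM NIC count (PVE 3.x)
grep -n 'MAX_NETS' /usr/share/perl5/PVE/QemuServer.pm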

Does anyone know if it is possible to add more than 32 interfaces to one VM?
If not, I'll have to set up a second router or use VMware as the host.

Thanks!

Falk

pveversion -v:
proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Iptables configuration on Proxmox host

I am very new to the Proxmox world. Could you help me configure iptables on the Proxmox host machines so that ssh, http, https and port 8006 are reachable only from the 192.168.0.0/24 network and everything else to the host is denied, while all ports on the guest machines stay open?
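A minimal sketch of what I understand is wanted (assumptions: the rules go on INPUT only, so bridged guest traffic is untouched; test from a local console before setting the DROP policy):
Code:

# allow loopback and already-established traffic first
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# management ports (ssh, http, https, web GUI) only from the trusted LAN
iptables -A INPUT -s 192.168.0.0/24 -p tcp -m multiport --dports 22,80,443,8006 -j ACCEPT
# the GUI's VNC console may also need 5900-5999 from the same LAN
iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 5900:5999 -j ACCEPT
# everything else to the host is dropped; FORWARD stays open for the guests
iptables -P INPUT DROP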

Question about xinetd

Hi,
I have a proxmox server. It's a really small implementation, for my personal use, at my home.
Code:

# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I'm testing Observium inside a container for monitoring.
On Proxmox I installed and configured snmpd (v3), so Observium gets all the MIBs.
But this tool also uses an agent called "Unix Agent" for better information retrieval, and it runs under xinetd.
I also noticed that Proxmox ships with neither xinetd nor inetd.

Can I safely install xinetd on my server?
Would it break anything?
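In case it helps anyone answering: it is a plain Debian package, and the main thing I would verify is that nothing else already listens on the agent's port (6556 is what the Observium unix-agent uses by default, if I recall correctly):
Code:

apt-get install xinetd
# confirm the agent's port is free before enabling it
netstat -tlnp | grep 6556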
Thanks

Storage woes

Since I hear more and more stories of admins having problems with Ceph, I am increasingly glad for my ZFS storage. Yes, it is neither clustered nor highly available, but it simply keeps on running, doing what it is supposed to do with only minimal interaction from my side. It never bothers me with strange errors or behaviors, never complains about resources, and never suddenly shows an unaccountable drop in performance. All in all it is simply rock solid and never loses any data :D

vzdump bwlimit not run (in RHEL6 HN)

Hello. All is fine with my containers running under Proxmox, but I also have some OpenVZ host nodes installed directly on CentOS 6, and there the bwlimit option doesn't take effect.

I always run in snapshot mode.

Code:

vzdump --dumpdir /sata/vzdump --tmpdir /sata/vzdump --snapshot --compress --bwlimit 5120 100
Code:

vzdump --dumpdir /sata/vzdump --tmpdir /sata/vzdump --snapshot --compress --bwlimit 51200 100
In both cases the result is the same: Total bytes written: 1659156480 (1.6GiB, 17MiB/s)
The first run ignores the 5 MB/s limit.
The second never reaches the rate specified.

It is always around 15 MB/s.

I can confirm it is not a disk I/O performance issue.

I've tested with several versions of vzdump (vzdump 1.2-4 from the OpenVZ repo and the one from the SolusVM repo), with the same result every time.

Looking at the documentation, vzdump is developed by @Dietmar, so maybe you can help me find the issue.
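One detail worth double-checking (from the vzdump man page, as far as I remember it): bwlimit is given in KB/s, and a global default can also be set in /etc/vzdump.conf so no CLI flag is needed:
Code:

# set a global default limit (value in KB/s; 5120 = 5 MB/s)
echo "bwlimit: 5120" >> /etc/vzdump.conf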

Thanks!!!

Raid workaround inside VM

Hi,

In my city it is difficult to get a good hardware RAID controller, and a software RAID setup is quite difficult under Proxmox VE.
What I want to RAID is not Proxmox itself but the VMs.

Most people would use Ceph or RAID the VM data storage, but since those resources aren't available here, I've been thinking about a workaround:

1st storage - where Proxmox VE is installed, also used for ISO storage
2nd storage - for VM storage, connected locally
3rd storage - for VM storage, connected locally

Now, for a given VM - let's say a database server - I want RAID 10. So I create 2 disks on the 2nd storage and 2 disks on the 3rd storage and attach all four to this VM.

From inside the VM, I use mdadm to create the RAID 10; this way, if one of the storages breaks, I simply replace it and rebuild the RAID inside the VM.
I also get the performance of two storages instead of one, which is what real RAID gives. Of course this method only makes sense with RAID 1 or RAID 10. But is there a catch?
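For clarity, this is the step I mean inside the guest (the device names are whatever the four virtual disks show up as; vdb-vde here is an assumption):
Code:

# inside the VM: RAID 10 across the four attached virtual disks
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/vdb /dev/vdc /dev/vdd /dev/vde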

Another question: in the RAID 10 above, the RAID 0 half is striped across 2 virtual disks that live on one storage. Will reads and writes be faster that way, or is it better to put each virtual disk on its own storage (which would mean 4 storages are needed)?

Allow a CT to access my server's monitor/screen?

I'm running a small virtual datacenter on one single machine. The machine is hooked up via HDMI to a television.

I'm trying to allow an Ubuntu CT running XBMC to use the machine's screen/monitor. Is this possible? Ideally, I'd want monitor and sound passthrough from the bare metal (Proxmox) to the virtual media server.
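I don't know whether this alone is enough for XBMC, but OpenVZ containers can at least be granted access to host device nodes; a sketch (the CTID 101 and the exact device names are assumptions - the DRM and ALSA nodes on the host may differ):
Code:

# grant the container access to the GPU and sound device nodes
vzctl set 101 --devnodes dri/card0:rw --save
vzctl set 101 --devnodes snd/controlC0:rw --save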

Ceph Giant supported ?

Hi,

Is the Giant version of Ceph (0.87) supported by the latest Proxmox version?

Thank you.

Sheepdog 0.9

Hi,

I recently ran into a few problems with Sheepdog storage and snapshots.
We use snapshots extensively, and Proxmox uses them too for backup purposes, which is great. But Sheepdog 0.8.2, the latest release provided in pve-sheepdog, is not very smart snapshot-wise: it keeps garbage data around when snapshots are removed, and that data is never purged, wasting storage space.

As the authors themselves recommend, it would be better to upgrade to 0.9, which is considered stable.

Is the pve-sheepdog package going to be upgraded soon? I'd prefer to stay on a minimally modified setup with PVE packages instead of installing an alternative Sheepdog repo.
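For anyone comparing, the installed package version can be checked with plain dpkg:
Code:

dpkg -s pve-sheepdog | grep '^Version'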

Regards.

ESXi -> Proxmox -> pfSense all bridged to same lan, possible?

Hello all,

As described in the subject, I have an ESXi host with Proxmox installed on it, and a pfSense firewall installed inside that Proxmox (nested):

Routerboard Mikrotik: 192.168.1.1
ESXi: 192.168.1.10
Proxmox: 192.168.1.12
pfSense: 192.168.1.15

everything is bridged but:

from ESXi I can ping Proxmox, Mikrotik but not pfSense
from Proxmox I can ping ESXi and pfSense
from pfSense I can only ping Proxmox

Although they are all on the same bridged network, from pfSense I can't ping hosts located beyond the Proxmox level.

Is this normal?
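A guess at the cause (common with nested setups, though unverified here): the ESXi vSwitch may drop frames from the nested pfSense MAC unless promiscuous mode and forged transmits are allowed on the port group. A capture on the Proxmox bridge shows whether pfSense's pings ever make it out:
Code:

# on the Proxmox host: do pfSense's ICMP packets reach the bridge?
tcpdump -n -i vmbr0 icmp and host 192.168.1.15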

Thanks a lot