Channel: Proxmox Support Forum

Clarifications about HA / Clustering in 4.0

Good Evening,

we're currently evaluating Proxmox for a new deployment. I was unable to find extensive documentation about the HA / cluster implementation in 4.0. What I have learned so far is that the only "standard" component the new stack actually uses is corosync, for distributed messaging and membership, and that rgmanager has been dropped in favor of custom tools. So far so good, but I've got some questions:

1) Fencing: I read that the new cluster infrastructure uses watchdog-based fencing. Does this mean that in the event of a split brain, quorum-less nodes will shoot themselves?
2) How does the cluster know? Does it simply wait for a timeout? What if the watchdog fails and the machine keeps running?
3) If 1) is true, what happens with a two-node cluster (used for management purposes only, not for HA)? Is fencing only activated if HA is enabled?

I tried searching on the forum and on the wiki, but found nothing about how the cluster actually works.

S.

Backup REALLY slow after upgrade to Proxmox VE 4.0

After upgrading my server to Proxmox VE 4, my backup to a USB disk has become REALLY slow. Excerpts from the backup report emails:

Before upgrade (PVE 3.4-6):
103 webmail OK 00:13:07 5.04GB /media/usb0//dump/vzdump-qemu-103-2015_11_15-01_01_16.vma.lzo

After upgrade (PVE 4.0-57):
103 webmail OK 04:59:50 5.08GB /media/usb0//dump/vzdump-qemu-103-2015_11_17-04_50_15.vma.lzo

Anyone with an idea what might be going on?
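
For what it's worth, a simple raw-write test against the USB disk (which I still need to run) should show whether the disk itself or vzdump/compression is the bottleneck; something like:

Code:

# write 1 GiB straight to the backup disk, bypassing vzdump (path taken from the report above)
dd if=/dev/zero of=/media/usb0/ddtest.bin bs=1M count=1024 conv=fdatasync
rm /media/usb0/ddtest.bin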


Regards.

Backup data from server without quorum

I have a server that we are decommissioning and I want to get some data off of it. This server is the only remaining server in a cluster and doesn't have quorum, so when I try to do a backup the lock fails because /etc/pve is read-only.

Is there a way to tell it to ignore the quorum and do the backup?

The datastore is Ceph RBD. Is there another way to mount the Ceph storage directly to get at the data?
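
For reference, this is roughly the kind of workaround I'm hoping exists (untested on my side; the pool/image names below are placeholders):

Code:

# temporarily lower the expected votes so the last node regains quorum and /etc/pve becomes writable
pvecm expected 1
# alternatively, map the RBD image directly and copy the data off it
rbd map rbd/vm-100-disk-1
# the mapped device may carry a partition table, so the filesystem could be
# on /dev/rbd0p1 rather than /dev/rbd0
mount /dev/rbd0p1 /mnt/recovery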

Best regards,
Eric

No VLAN For VM

Hi ...
I am trying to get the following configuration working:

/--------\   /---------\   /-----\   /-----\   /------\
|Internet|---|eth0/eth1|---|bond0|---|vmbr0|---|VM NIC|
\--------/   \---------/   \-----/   \-----/   \------/
Could someone please give an example of the "interfaces" file that makes the above configuration work correctly under Proxmox 4?
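
This untested sketch is roughly what I imagine, just to show what I mean (bond mode, addresses and netmask are placeholders):

Code:

# /etc/network/interfaces (sketch)
auto lo
iface lo inet loopback

iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-miimon 100
    bond-mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.10
    netmask 255.255.255.0
    gateway 192.168.0.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0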

Thank you for your prompt replies.

Proxmox VE 4.0 - Delete Cluster + Ceph Node

I have been working on a few STH articles on Proxmox VE 4.0 (e.g. http://www.servethehome.com/add-raid...to-proxmox-ve/ and http://www.servethehome.com/proxmox-...le-grayed-out/). Great job on 4.0; it is absolutely awesome how well the cluster is performing.

I did run into a minor issue with the test cluster. The 4-node cluster has 3x Intel Xeon D-1540 nodes and 1x Intel Xeon E5 V3 node (fmt-pve-01). All four were running Ceph. The "big" fmt-pve-01 node had a double Kingston V200 SSD 240GB failure within 72 hours which took out the ZFS mirror boot volume.

[Screenshot: Proxmox VE Ceph OSD listing]

That leaves the other three nodes, which can keep quorum. I do have two more nodes ready to join, but I do not want to proceed and mess up the cluster further. With a non-Ceph cluster I would normally just remove the PVE node from the cluster, install new boot drives, and re-join the node to the cluster. That is not too hard. What I am wondering/worried about is how Ceph being in the cluster changes this.

My questions are:
1. Do I need to do something to remove the node/OSDs from the Ceph config before removing the node from the cluster, or does Proxmox take care of the Ceph config when I run pvecm delnode fmt-pve-01? (A rough sketch of the manual cleanup I have in mind follows below.)
2. I do have two more nodes ready to join with additional disks. Would it be best to add these nodes to the Proxmox/Ceph cluster before removing the first node?
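
Here is the untested sketch mentioned in question 1, based on my reading of the Ceph docs; the OSD ID and the monitor entry are assumptions for the dead node:

Code:

# run from a surviving node; OSD IDs below are placeholders for the dead node's OSDs
ceph osd out 3
ceph osd crush remove osd.3
ceph auth del osd.3
ceph osd rm 3
ceph osd crush remove fmt-pve-01    # drop the now-empty host bucket from the CRUSH map
ceph mon remove fmt-pve-01          # only if the dead node also ran a monitor
pvecm delnode fmt-pve-01            # finally remove the node from the PVE cluster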

Any tips would be appreciated! Thank you again.

Patrick

Storage Model Advice File Server

Hi all,
I have been having problems with my current storage model. In one of my LXC containers, which runs Samba, every file I transfer gets cached to RAM, and the memory is not released unless I delete the file, move it off the server, or restart the container. The container accesses a physical disk with an ext4 file system that I mounted on the host.
Since I'm new to Proxmox, what would be the best way to mount the disk so the LXC container can access it and serve it with Samba? Is it bad practice to mount it directly on the host? Should I create a raw image and use that in the container, or do something completely different? I am going to build a NAS and attach it with NFS later, but for now I only use this disk for storage. Should I keep it physically mounted or create a new image on it? I don't intend to make snapshots; it will store backups, photos, etc. from all my home devices, and I want to get the most performance out of it. I currently transfer files at 113 MB/s; only when the container's RAM hits its limit does it drop to 70-80 MB/s. Would I still see those speeds if I create a disk image?
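
To frame the question, the kind of setup I'm currently considering is a plain bind mount of the host-mounted disk into the container (untested; the container ID and paths are just examples):

Code:

# /etc/pve/lxc/101.conf  (101, /mnt/datadisk and /srv/share are placeholders)
mp0: /mnt/datadisk,mp=/srv/share

Samba inside the container would then simply share /srv/share.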
Thanks

BIOS setting optimisation for best performance

Hi good people. I have a question about BIOS tuning practices for getting the most out of a virtualization server. We got new Supermicro servers, and I need advice about settings that make Proxmox run best on them. For the last 3 years I have worked exclusively with HP ProLiant rack servers, and I am not familiar with the Supermicro platform.

I know of some resources from other manufacturers about tuning the BIOS for maximum performance or energy saving:
Cisco: http://www.cisco.com/c/en/us/product...11-727827.html
Dell: http://en.community.dell.com/techcen...apers/20248740
HP(actually not hp but for hp +vmware): https://boerlowie.wordpress.com/2010...7-for-vsphere/
VMware: https://www.vmware.com/pdf/Perf_Best...vSphere5.5.pdf (page 17)

The question is: which settings are optimal, or at least safe and stable with the best performance? All these documents are similar, but some parameters get different and often opposite recommendations. For example, VMware suggests turning memory interleaving off while Cisco suggests turning it on; Cisco proposes disabling Turbo Boost, but Fujitsu and VMware talk about enabling it. I can't find recommendations or established practice for KVM virtualization, not even in the Red Hat documentation.
Maybe someone has links to documentation, test results, or their own knowledge to share (ideally material about KVM and Supermicro motherboards).

Thx and good luck.

Host loses connectivity

Hi there. I set up Debian Jessie and installed Proxmox 4 on top. All seems to work fine, but after a while I can't ping the Proxmox host IP anymore and I can't SSH into it. The Proxmox web interface still works, though. So I have to reboot the whole machine; then it's reachable for a while, and during that time I can do upgrades or install other stuff. The problem is that I'd like to do backups with rsync over SSH, and that's impossible.

GPU Passthrough resulting in high CPU load - Ideas?

I have been working on an attempt to pass my gpu to a vm running on my proxmox system.
I'm running version 4.0-48

I have followed the Proxmox document for PCI-E setup.
This includes changing Grub entries.
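
For reference, my understanding of the required Grub change on an AMD box is roughly the following (the exact parameters are my assumption based on the wiki, not something I can confirm):

Code:

# /etc/default/grub (sketch; amd_iommu=on is what I believe applies to AMD boards)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# then run update-grub and reboot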

I have enabled IOMMU on my motherboard as well as virtualization.
My motherboard calls the feature "SVM Mode" and that has been enabled.
*I'm running an AMD CPU.

I have the following lines in my vm.conf file:
machine: q35
hostpci0: 01:00.0,x-vga=on,pcie=1
*My GPU is registered to BUS 01.

The vm is a Windows 7 image with the QEMU drivers + chipset installed.
If I remove the hostpci0 line from my vm config, the vm runs great. With the line added, system performance drops and the CPU load is always near 100%.

I assume what is happening is that my CPU is drawing the window frames instead of the GPU doing the work. The system recognizes that I have a GPU installed and the OS can detect the model, yet it somehow doesn't pass full control to it.
Initially I thought the issue was that the CPU/motherboard didn't fully support virtualization, but as soon as I remove the "hostpci0" option from the vm.conf, it works great with no CPU issues.

Does anyone have ideas on what I could try to resolve this problem, or has anyone hit a similar wall?

Determine system name from backup lzo

Is it possible to determine what the name of the KVM virtual machine is from a backup lzo file?
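
What I'm imagining is something along these lines (untested; I'm assuming the vma tool has a config subcommand and that the archive can be decompressed first; the file name is just a placeholder):

Code:

lzop -d vzdump-qemu-100-example.vma.lzo      # decompress to get the plain .vma file
vma config vzdump-qemu-100-example.vma       # should print the embedded VM config, including the "name:" line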

Best regards,
Eric

pve-zsync sync interrupted, now getting an error ** solved **

Hello

I had rebooted the pve-zsync target server while a pve-zsync run was in progress.

Now when pve-zsync runs, we get this in an email:
Code:

Subject: Cron <root@dell1> pve-zsync sync --source 3129 --dest 10.1.10.46:tank/pve-zsync-bkup
    --name sogo-wheezy-syncjob --maxsnap 12 --method ssh

COMMAND:
        zfs send -i tank/kvm/vm-3129-disk-1@rep_sogo-wheezy-syncjob_2015-11-18_13:15:02 --
tank/kvm/vm-3129-disk-1@rep_sogo-wheezy-syncjob_2015-11-18_15:45:02 | ssh root@10.1.10.46 -- zfs
recv -F -- tank/pve-zsync-bkup/vm-3129-disk-1
GET ERROR:
        cannot receive incremental stream: dataset is busy

Job --source 3129 --name sogo-wheezy-syncjob got an ERROR!!!
ERROR Message:


Does anyone have a suggestion for a fix?
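
For reference, these are the checks I plan to run on the target side (untested; the dataset and snapshot names are taken from the log above):

Code:

# is a stale receive still holding the target dataset?
ps aux | grep '[z]fs recv'
# any leftover snapshots or holds from the interrupted run?
zfs list -t snapshot -r tank/pve-zsync-bkup/vm-3129-disk-1
zfs holds tank/pve-zsync-bkup/vm-3129-disk-1@rep_sogo-wheezy-syncjob_2015-11-18_13:15:02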

Proxmox 4.0 - Previously running LXC container won't start

Got a bit of a problem with my shiny new LXC containers. When the system boots, everything starts up fine and is running very well.

However, when I shut down a container from the web interface, regardless of whether I make a change or not, I'm unable to restart the container.

Task output shows:

Code:

lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 346 To get more details, run the container in foreground mode.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
TASK OK

Sorry, but task is decidedly NOT OK.

When trying to start the container in the foreground from an ssh session, I get this output:

Code:

root@destiny:~# lxc-start --name 101 --foreground
RTNETLINK answers: No buffer space available
Dump terminated
Use of uninitialized value $tag in concatenation (.) or string at /usr/share/perl5/PVE/Network.pm line 176.
unable to add vlan  to interface veth101i0
lxc-start: conf.c: run_buffer: 342 Script exited with status 25
lxc-start: conf.c: lxc_create_network: 3047 failed to create netdev
lxc-start: start.c: lxc_spawn: 954 failed to create the network
lxc-start: start.c: __lxc_start: 1211 failed to spawn '101'
lxc-start: lxc_start.c: main: 344 The container failed to start.
lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.

If I reboot the host, everything starts working fine again. Also, there is nothing special about the containers. They are just Debian containers with some storage and a single network interface with an IPv4 address and no VLANs or anything like that.

Thoughts, anyone? Any help would be appreciated.

Moving container created in Suse to Proxmox

Is it possible to move an LXC container created within SUSE Linux Enterprise 11 SP4 (SLES11 SP4) or SLES12 into Proxmox? The container runs the same OS as its host.
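
One idea I had (completely untested; the paths and container ID below are guesses) is to archive the SLES container's root filesystem and feed it to pct as if it were a template:

Code:

# on the SLES host: pack the container's root filesystem
tar -czpf sles11sp4.tar.gz -C /var/lib/lxc/mycontainer/rootfs .
# copy the archive to the Proxmox node, then create a container from it
pct create 110 /var/lib/vz/template/cache/sles11sp4.tar.gz --hostname sles11 --memory 1024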

failed to start udev

After following the upgrade instructions here, the server no longer restarts.

Quote:

failed to start udev "kernel device manager"

Prox4 and drbd, Raid card and 2 SSD, which way is best for volume creation? TRIM?

Hi, as far as I understand, DRBD 9 sits on top of LVM. I have 2 SSDs in each server that I need to "sum" to get their total capacity, and I will have an Adaptec 6805E RAID controller.
The question is: is it better to create a RAID 0 from sdb+sdc at the controller level, or to create two separate single-disk RAID 0 volumes (the controller does not allow "single disks") and then use something like vgcreate drbdpool /dev/vdb1 /dev/vdc1?
What about TRIM support? I'm quite confused about this, since I can enable TRIM in the guest (fstab), as a config flag when creating the VM, and there is also the virtio-scsi controller type that should pass discards through. Finally, DRBD sits on LVM, and I don't know whether that layer is fine with TRIM, or whether the RAID controller passes TRIM down to the disks.
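
To make the TRIM part of the question concrete, this is the chain I have in mind (untested; the VMID and volume name are placeholders):

Code:

# expose discard on the VM disk (virtio-scsi controller plus discard=on on the disk)
qm set 100 --scsihw virtio-scsi-pci --scsi0 drbdpool:vm-100-disk-1,discard=on
# then, inside the guest, trim the mounted filesystem manually
fstrim -v /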
Final question: with datacenter-class SSDs (the new Samsung SM863 MZ-7KM960E, hoping to be able to set more over-provisioning than the default), is the RAID controller's write-back cache (with BBU) necessary or not?
Thanks for the advice.

[SOLVED] Change default mount options to raw image

Good day. I want to change the default mount options for an LXC container:

/images/102/vm-102-disk-1.raw on / type ext4 (rw,relatime,data=ordered)

To improve DB performance during Bacula despooling, they recommend using

barrier=0 in the mount options.

How can I configure this container with the barrier option?
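
For clarity, the effect I'm after is equivalent to this manual remount inside the container (untested; I'm assuming ext4 accepts barrier=0 at remount time):

Code:

mount -o remount,barrier=0 /
grep ' / ' /proc/mounts    # verify the active mount options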

Thanks in advance
Nomar

Proxmox 4.0 HA sequence of events

Can somebody help me understand what happens when one of the nodes fails? We have a 3-node cluster on Dell servers with Ceph-based storage. Does the VM restart and power on on a working node? Also, are the PVE nodes in HA active-active or active-passive? What I mean is: can I have VMs running on all 3 nodes at the same time, with only the VMs from the failing node being moved automatically to another working node?

Thank you

Both nodes are stopped by iDRAC at the same time

Hi,
I configured a Proxmox HA cluster with 3 nodes (two Proxmox nodes + one quorum node).
Fencing is done via iDRAC Express.
The problem is that when I test it, both nodes get stopped by iDRAC at the same time.
Proxmox version: 3.4.1
Have you seen this problem before, or do you have any suggestions?
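
One idea I'm considering (untested, and based on generic fence-agent documentation rather than anything Proxmox-specific) is giving one node's fence device a delay so the two nodes can't shoot each other simultaneously, roughly:

Code:

<!-- cluster.conf fragment; node/device names and the delay value are placeholders -->
<clusternode name="node1" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <!-- fencing of node1 is delayed 15s, so node1 wins a mutual-fencing race -->
      <device name="idrac-node1" delay="15"/>
    </method>
  </fence>
</clusternode>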
Thank you

Disabling Proxmox Firewall

I want to use my own iptables rules. How do I disable the Proxmox firewall so it doesn't interfere with my rules?
The CLI command "pve-firewall stop" works, but only until I restart my server.
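
To make it concrete, this is the kind of thing I'm hoping for (untested; I'm assuming the service can simply be disabled at boot and/or switched off in the cluster-wide firewall config):

Code:

# keep the firewall service from starting at boot (PVE 4 uses systemd)
systemctl stop pve-firewall
systemctl disable pve-firewall

# and/or make sure it is disabled in /etc/pve/firewall/cluster.fw:
# [OPTIONS]
# enable: 0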

DRBD 9 outdated after VM cloning

Hi,

we've got a 3-node PVE 4 cluster (latest tools/updates) with DRBD 9.
When cloning a VM, the state is always "Outd" (Outdated) on the 3rd node, as follows:

Code:

102:vm-123-disk-1/0  Connected(3*) Seco(node1,node3)/Prim(node2) UpTo(node3)/Outd(node1)/UpTo(node2)
On the other 2 nodes everything is ok (UpTo).

This can be solved by migrating the VM to the outdated node1. The question is whether this is a real problem or not, and why it happens.
Thanks.