Channel: Proxmox Support Forum

How to create a cluster with a dedicated NIC (OVH vRack)?

Hello,

I'm working with 2 servers from OVH (hosting provider). Each one has its own public IP and also its own private IP (172.16.0.1 and 172.16.0.2) on a separate NIC (interface eth1). The private IPs can communicate with multicast thanks to OVH's new "vrack 1.5" technology.

I created a cluster on the first server (private IP: 172.16.0.1) (pvecm create my-own-cluster). Then on the second server I did pvecm add 172.16.0.1.

This led me to:

Code:

Waiting for quorum... Timed-out waiting for cluster
[FAILED]

In the meantime I realized I had to modify the quorum. So on the first server I ran pvecm expected 1, which didn't change anything. As the second server was still waiting, I hit CTRL-C. On each server, I ran pvecm status and saw that the node address is the public IP (on both servers).

So my questions are:

  1. How do I specify the network interface / IP address when creating a cluster or adding a node?
  2. How can I do this given that I've already tried to create a cluster?


# pveversion
pve-manager/3.1-24/060bd5a6 (running kernel: 2.6.32-26-pve)

Thanks in advance
Matthieu
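For reference, one commonly suggested approach (a sketch only; hostnames node1/node2 are placeholders): on PVE 3.x, pvecm takes the cluster address from whatever the node's hostname resolves to, so pointing each hostname at the private IP in /etc/hosts before creating the cluster should bind corosync to eth1:

```
# /etc/hosts on both nodes (before pvecm create / pvecm add):
172.16.0.1  node1
172.16.0.2  node2

# then, on node1:   pvecm create my-own-cluster
# and on node2:     pvecm add 172.16.0.1
```

Since a cluster has already been created, the existing configuration would have to be torn down first, which is the harder part; treat the above as the clean-install path, not a fix-up recipe.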

root@pam cannot edit any VM options

I was under the impression (and still am convinced) that root@pam has the "Administrator" role, which can do anything, yet I cannot edit any VM options. It would be nice if I could make my VMs autostart in a predefined order.

I'm stuck restarting my VMs manually each time Proxmox is rebooted.

cluster.conf.new: Permission denied

Hello,

I'm trying to set up unicast by modifying /etc/pve/cluster.conf with the instructions from http://pve.proxmox.com/wiki/Fencing#...e_cluster.conf, but I can't copy the file:

Code:

# cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
cp: cannot create regular file `/etc/pve/cluster.conf.new': Permission denied

I can't even create a single empty file:

Code:

# touch dummy-file
touch: cannot touch `dummy-file': Permission denied

Here are the rights on the files:
Code:

# ls -al
total 5
drwxr-x---  2 root www-data    0 Jan  1  1970 .
drwxr-xr-x 91 root root    4096 Dec 28 10:32 ..
-r--r-----  1 root www-data  294 Dec 27 20:11 cluster.conf
-r--r-----  1 root www-data  155 Jan  1  1970 .clusterlog
-rw-r-----  1 root www-data    2 Jan  1  1970 .debug
lr-xr-x---  1 root www-data    0 Jan  1  1970 local -> nodes/proxmox9
-r--r-----  1 root www-data  242 Jan  1  1970 .members
lr-xr-x---  1 root www-data    0 Jan  1  1970 openvz -> nodes/proxmox9/openvz
lr-xr-x---  1 root www-data    0 Jan  1  1970 qemu-server -> nodes/proxmox9/qemu-server
-r--r-----  1 root www-data  205 Jan  1  1970 .rrd
-r--r-----  1 root www-data  256 Jan  1  1970 .version
-r--r-----  1 root www-data  18 Jan  1  1970 .vmlist

What am I missing?
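For anyone hitting this: /etc/pve is not a normal directory but the pmxcfs cluster filesystem, which becomes read-only whenever the node has no quorum; pvecm status shows the quorum state, and on a node that legitimately lost its peers, pvecm expected 1 restores write access. Once it is writable, the unicast change itself is a small edit; a hedged sketch of the relevant cman line (cluster name and version number are placeholders, and config_version must be incremented on every change):

```
<?xml version="1.0"?>
<cluster name="mycluster" config_version="2">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <!-- existing clusternodes section stays as-is -->
</cluster>
```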

How do I reset the greylist?

I'm having a problem with the greylist: it shows my internal Exchange server without a sender, with the recipient being postmaster@proxmox.

It is causing the server load to be way too high, and emails are not being delivered.

Has anyone seen this, or had the same thing happen to them?

Any help would be appreciated.

MSI Z77A-G43 + Dell PERC 6/i SAS T774H

Hello,

I have a question.

First of all, I don't know much about RAID controllers.
I saw the Dell PERC 6/i SAS T774H with the NU209 battery for write-back cache.
I have a problem with I/O delay of ~10%, and if it goes higher the server gets blocked.
I currently use software RAID 1, and I hope that with this card I get a lower I/O delay, or at least that the CPU doesn't get blocked when I/O is high.
Later I want to use RAID 10 with 2 more disks (I currently have only 2 WD 1TB 7200rpm SATA3 disks) for better performance.

The question is: does anyone have experience with this card, and does anyone know whether I can use it with this motherboard (MSI Z77A-G43)?

Live migration not working after removing Network from boot order

PVE 3.1-21. Dual Proxmox servers with LVM on FreeNAS iSCSI shared storage.

I have noticed that changing the boot order from Network to None made live migration impossible. This behavior was present on all my VMs. It is easily reproducible on my cluster with this procedure:

1) Create a new VM with Windows XP OS, no media, 1 GB HD (on shared storage), all other settings default
2) Start the VM (it will loop on the boot sequence)
3) Up to now everything is fine; the VM can live migrate between hosts
4) Go into Options, edit the boot order, and set boot device 3 from Network to None
5) Now migration is unable to complete, with the following error:

Code:

Dec 28 10:26:55 starting migration of VM 106 to node 'proxmox1' (192.168.1.70)
Dec 28 10:26:55 copying disk images
Dec 28 10:26:55 starting VM 106 on remote node 'proxmox1'
Dec 28 10:26:57 starting ssh migration tunnel
Dec 28 10:26:57 starting online/live migration on localhost:60000
Dec 28 10:26:57 migrate_set_speed: 8589934592
Dec 28 10:26:57 migrate_set_downtime: 0.1
Dec 28 10:26:59 ERROR: online migrate failure - aborting
Dec 28 10:26:59 aborting phase 2 - cleanup resources
Dec 28 10:26:59 migrate_cancel
Dec 28 10:27:00 ERROR: migration finished with problems (duration 00:00:05)
TASK ERROR: migration problems

Two ways I found to correct the problem:

1) Put Network back as boot device 3
2) Stop and start the VM again

Is this reproducible on someone else's cluster?

Thanks

Can't access web configuration after fresh install

I just installed Proxmox 3.1. I can ping to the interface. I can ping from the interface. I had this same server running Proxmox 2.8 last week, then a hard drive crashed and I had to start over (now with redundant RAID hard drives).

New server, VM MBR problems

Hello.

I have been using a regular desktop PC with an AMD Phenom II CPU as a Proxmox server for a couple of years; it has been working great and I have been extremely happy with it. Recently I got an HP ProLiant ML370 G5 server (2 x E5430 Xeon CPUs) and started using that with Proxmox instead, since it is a real server. The problem is that none of the backed-up Windows guests boot when restored to the "new" ProLiant server. They can be made bootable with fixmbr, but what is it that's different with the new system? Is it the hardware, or has a newer kernel or some other package broken the Windows MBRs?

Booting a Windows VM with "KVM hw virtualization" unchecked does start booting OK, but BSODs with an unknown CPU type error, which seems to be a known problem.

Also, with the old server I used to be able to ddrescue a physical hard drive to an image and then dd that image to a VM's disk, but that doesn't work anymore. The VM hangs at "booting from hardisk", just like a restored Windows VM.

The old server was originally installed as version 1.9 and has been upgraded gradually to 3.1.
The new Proliant server with problems got a fresh 3.1 install and is also fully updated.

Can I downgrade any packages to make the new server handle Windows MBRs and dd images like the old one, or is this a hardware issue?
I also installed the same Proxmox 3.1 on a Core 2 Duo desktop just for testing and tried restoring a backup there, with the same results: the Windows MBR doesn't work.


--- pveversion -v from the "old" server that works great ---

proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

--- pveversion -v from the "new" server that cannot handle MBR and dd images ---

proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

Would downgrading the kernel help, or is it something else?
I also have to thank you for this great software!

From 5 IPs, only one doesn't work

Hello! I have Proxmox 3.1, and yesterday I added a new IP to my CentOS CT and saw that it doesn't work. I tested the IP on another CT and it doesn't work there either. Then I added another IP to the same CT and it worked. How do I get that first IP to work? Locally I can access the IP, but externally I can't. Can you help me, please?

Poor KVM performance: 50% of host server and OpenVZ

I am testing on a completely empty server (no users other than for testing) with 8 AMD cores and Intel 3500 SSD drives. The KVM guest uses virtio for both disk and network and the default CPU type (I also tried the host CPU type, with the same results), and I tried multiple caching options, but the differences were small.
I used the tests on this page: http://www.howtoforge.com/how-to-ben...-with-sysbench

The MySQL benchmark is as below:

Code:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --mysql-user=root --mysql-password=yourrootsqlpassword prepare

then

Code:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-db=test --mysql-user=root --mysql-password=yourrootsqlpassword --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run

On the host itself I get:
transactions: 81373 (1356.13 per sec.)
Running on OpenVZ with all CPUs available to the VM:
transactions: 86230 (1437.10 per sec.)
(OpenVZ coming out higher might just be run-to-run variation, but it shows that OpenVZ is very close to bare metal; close enough that individual variation makes a difference.)
But when running on KVM with all CPUs available to the VM:
transactions: 44220 (736.93 per sec.)

This is a HUGE drop, close to unacceptable.

When doing the pure CPU benchmark:

Code:

sysbench --test=cpu --cpu-max-prime=20000 run

bare metal gets:
execution time (avg/stddev): 18.7910/0.00
(in seconds, so lower is better)

And in the KVM machine:
execution time (avg/stddev): 20.4926/0.00

which is an acceptable slowdown, and points to the problem being I/O.
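One way to confirm the I/O hypothesis directly is sysbench's fileio test (same 0.4-style syntax as the commands above), run once on the host and once inside the KVM guest; a sketch, assuming roughly 2 GB free in the working directory:

```
sysbench --test=fileio --file-total-size=2G prepare
sysbench --test=fileio --file-total-size=2G --file-test-mode=rndrw --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-total-size=2G cleanup
```

If the guest/host gap on random read-write is as large as the OLTP gap, the bottleneck is the virtual disk path rather than the CPU.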

Any ideas how to make a Proxmox equivalent of VMware's Site Recovery Manager?

hi all...

Any ideas how to make something like Site Recovery Manager in a Proxmox environment?

I'm using a Synology NAS for shared storage, so maybe some rsync magic?

Thanks!

Casper

Error kernel: kvm: 3499: cpu1 unhandled rdmsr

Hello,

Recently I have a problem with my Proxmox machine restarting.
I have both Windows and Linux machines.

In the log I see a lot of these errors:

kernel: kvm: 3499: cpu1 unhandled rdmsr: 0x606

# pveversion -v
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1



# pveperf
CPU BOGOMIPS: 44801.92
REGEX/SECOND: 1037887
HD SIZE: 19.38 GB (/dev/sda2)
BUFFERED READS: 156.79 MB/sec
AVERAGE SEEK TIME: 9.10 ms
FSYNCS/SECOND: 890.29

Thanks.

[SOLVED] Console doesn't open connection to a single VM

Hey guys,

I have a problem due to a little mistake made while configuring the network adapter of one VM.

I changed the network adapter of one VM from bridged vmbr0 to the same configuration with VLAN tag "1".
I did this while the machine was running, and this mistake caused a kernel panic ;-)

Okay, shutdown and revert of the configuration change.

Now I cannot connect via "Console". The error message is:
"TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 100 2>/dev/null'' failed: exit code 2"

I've found many failure reports and threads about this, but no hint that actually got it working again.

SSH access works and the machine runs properly; only console access via the web interface doesn't work anymore. Every other machine works great and can be accessed this way.

I tried rebooting and restarting the host and every guest, but no luck ;-) It's not always a solution ;-)
I also didn't find an iptables rule blocking access, or anything else.
The Java plugin just shows "Status: Connected to server" and after about 10 seconds it changes to "Network error: remote side closed connection", and nothing happens.


Any ideas?
//Solved


The vm.conf was corrupted .... damn china hardware

Proxmox cluster (2 nodes): change IP address

Hello,

I have 2 nodes that are in a cluster.
The nodes have IP addresses like 192.168.1.250 and 192.168.1.251.

The 2 servers are being moved to a datacenter, where I get external IPs (different subnet, etc.).

Is it possible to change the IP addresses, and keep the nodes + cluster working?

Thanks for the replies.
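Not an official procedure, but roughly these are the places a node's IP appears on PVE 3.x (the hostnames can stay the same):

```
/etc/network/interfaces   -> new address/netmask/gateway on each node
/etc/hosts                -> each node's hostname must resolve to its new IP
/etc/pve/cluster.conf     -> only if the node entries contain literal IPs
```

After editing, reboot both nodes (or restart networking plus the cman and pve-cluster services) and verify the cluster with pvecm status.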

Procedure after cluster node failure

Hello,

I have a 2-node cluster setup without HA. After a power failure on node2, I migrated the VMs manually to node1 with these commands:

(node1)
#pvecm e 1
#mv /etc/pve/nodes/node2/qemu-server/*.conf /etc/pve/nodes/node1/qemu-server/


I wonder if I can start node2 again without causing problems.

Regards.

bridge - lost internet connection for VMs

People, help me please.
I'm using a Proxmox 3.1 host with 3 physical interfaces: eth0, eth1, eth2.

eth0 is used for my LAN network
eth1, through bridge vmbr0, is used for my first VM (guest: Debian 6.0.8 amd64)
eth2, through bridge vmbr1, is used for my second VM (guest: Ubuntu 10.04 server amd64)

And the problem is: after about 30 minutes of inactivity, each VM loses its internet connection.
I can only connect via the web GUI through eth0, and from there use VNC to reach my VMs; as soon as I start pinging any host from the guest machine, the internet comes back immediately.
What could be wrong?

=====================================
For my first VM /etc/network/interfaces

auto eth0
iface eth0 inet static
address 212.142....
netmask 255.255....
gateway 212.142....

=====================================

=====================================
For my second VM /etc/network/interfaces

auto eth0
iface eth0 inet static
address 109.73.....
netmask 255.255....
gateway 109.73....

=====================================

Here's my /etc/network/interfaces for my HOST:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 192.168....
netmask 255.255.255.0
gateway 192.168....

auto eth1
iface eth1 inet manual

auto eth2
iface eth2 inet manual

auto vmbr0
iface vmbr0 inet manual
bridge_ports eth1
bridge_stp off
bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
bridge_ports eth2
bridge_stp off
bridge_fd 0
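The roughly 30-minute timeout smells like an ARP cache entry expiring somewhere upstream (pinging out from the guest repopulates it, which matches the symptom). A crude workaround, assuming that diagnosis holds, is a once-a-minute keepalive ping from each guest; the gateway address below is a placeholder:

```shell
# Keep the upstream ARP entry fresh: ping the guest's gateway once a
# minute from inside the guest. 212.142.0.1 is a placeholder address;
# use the guest's real gateway.
mkdir -p /etc/cron.d
cat > /etc/cron.d/keepalive-gw <<'EOF'
* * * * * root ping -c 1 212.142.0.1 >/dev/null 2>&1
EOF
```

The proper fix is finding out why the upstream router forgets the entry, but the cron job confirms the diagnosis cheaply.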

HA Cluster with 2 Nodes, Backup

Hi,

First: happy new year, everyone :-).

Second: I plan to set up an HA cluster using this guide: http://pve.proxmox.com/wiki/Two-Node...bility_Cluster

Will it work without the quorum disk (without a 3rd node at all)? I want to use DRBD with LVM on top.

Also: in the past I had some problems with LVM when the secondary DRBD node became master in a failover scenario. I had to create a small init script to re-activate LVM. Will it work without problems now?

Third: in the past :-) I could not use the whole disk for one LVM volume group, because some space had to be left free to create snapshot volumes when performing backups (snapshot mode). Is this still necessary? If yes, how big should it be? Is the size based on the maximum amount of disk space a VE has, or is 20-30 GB OK for the data that newly appears while a backup runs?

And at last: I found out that with Proxmox 1 / 2 it was not directly possible to use OpenVZ with DRBD/LVM (see http://pve.proxmox.com/wiki/DRBD). Is it directly possible with Proxmox 3.1?

Thank you!

using Ubuntu Cloud-Images with Proxmox?

The current method I use to create VMs in Proxmox is very time consuming. I'd like to see if I can use Ubuntu's cloud images to speed up the process of creating new VMs. Does anyone have experience with this?

Right now, I base my VMs on Ubuntu and KVM. First I upload the Ubuntu ISO, run through an install using the GUI console, and then convert that install into a template. When I create a new VM I need to boot it, SSH in, and manually set up the network. This is rather time consuming.

The Ubuntu cloud images are a preinstalled version of Ubuntu, but without any network settings. However, they support cloud-init (http://cloudinit.readthedocs.org/), and this blog post (http://ubuntu-smoser.blogspot.ca/201...out-cloud.html) shows how to base a VM on the Ubuntu cloud image and then create a CD with genisoimage containing the cloud-init settings for the VM. When the VM boots, it reads the settings from the CD.

My plan is to create a VM with Proxmox, replace the disk file with the Ubuntu cloud image, and then replace the CD image with the ISO I created containing the cloud-init settings.

Original conf file:

Code:

virtio0: local:100/base-100-disk-1.raw,format=raw,size=40G
ide2: local:iso/ubuntu-12.04-server-amd64.iso,media=cdrom

Updated conf file:

Code:

virtio0: local:100/ubuntu-cloudimage.raw,format=raw,size=40G
ide2: local:iso/my-cloudinit-settings.iso,media=cdrom

Am I on the right track? Has anyone done this before?
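That looks like the right track. For the seed ISO step, cloud-init's NoCloud data source expects two files named user-data and meta-data on a volume labelled cidata; a sketch with placeholder hostname, instance-id and SSH key:

```shell
# Build NoCloud seed data for cloud-init (all values are placeholders).
mkdir -p /tmp/cloudinit && cd /tmp/cloudinit

cat > meta-data <<'EOF'
instance-id: iid-vm100
local-hostname: vm100
EOF

cat > user-data <<'EOF'
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAA...placeholder... user@host
EOF

# Pack both files into an ISO; the volume id must be "cidata" for
# NoCloud to pick it up. Run this where genisoimage is installed:
# genisoimage -output my-cloudinit-settings.iso -volid cidata -joliet -rock user-data meta-data
```

The resulting ISO is what would go in the ide2: slot of the conf file above.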

LDAP/Active Directory authentication for Proxmox

Hi everyone,

I have a question about configuring LDAP/Active Directory. We are running Proxmox on Hyper-V, but I want the Proxmox server to connect to LDAP/Active Directory for authentication. Is there any tutorial on how to do this? Or can someone explain in easy steps how to do this?

Thanks in advance,

Sander
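On PVE 3.x an AD realm is normally added in the GUI under Datacenter -> Authentication, which just writes an entry to /etc/pve/domains.cfg; a hedged sketch of what that entry looks like (server and domain names are placeholders):

```
ad: mycompany
    server1 dc1.example.local
    domain example.local
    default 1
    comment Active Directory
```

Users from the realm still have to be added under Datacenter -> Users and given permissions before they can log in.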

Can't access Proxmox webui at port 8006

Hi
I have installed Proxmox and changed the NIC in /etc/network/interfaces to eth1 because it didn't work on eth0. I don't get errors, but I can't access the web UI.
This is my network config:
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
address xxx.xxx.xxx.xx
netmask 255.255.255.xxx
gateway xxx.xxx.xx.xx
bridge_ports eth1
bridge_stp off
bridge_fd 0

Any ideas?
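A few checks worth running on the host itself before suspecting the network (command names as on PVE 3.x):

```
# is pveproxy running and listening on port 8006?
/etc/init.d/pveproxy status
netstat -tlnp | grep 8006

# does the GUI answer locally? (self-signed cert, hence the flag)
wget -O /dev/null --no-check-certificate https://localhost:8006/

# if not, restarting the service sometimes surfaces the real error:
/etc/init.d/pveproxy restart
```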

