Channel: Proxmox Support Forum

Migrating from RAW LVM to qcow2

I guess this is more for posterity and reference, as I didn't find much on it here...

Main questions: is this sane? Am I missing anything?

Currently I have RAW LVM storage (block devices) on a local volume group called VMStor1, and wish to move to a normal ext3-formatted array mounted at /mnt/VMStor2.

Shut down the VM, then convert:
Code:

qemu-img convert -O qcow2 /dev/VMStor1/vm-100-disk-1 /mnt/VMStor2/images/100/vm-100-disk-1.qcow2
Change the storage line in /etc/pve/qemu-server/100.conf to this:
Code:

virtio0: VMStor2:100/vm-100-disk-1.qcow2,size=8G
Restart the VM.

I know RAW is typically rated as faster, but newer qcow2 seems to show better performance, closing that gap, and offers instant snapshotting, which I find more important than speed in this scenario. Am I missing anything else here?
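
For what it's worth, a quick sanity check of the converted image before booting from it (a sketch using qemu-img, which is already on the host; paths match the example above):
Code:

qemu-img info /mnt/VMStor2/images/100/vm-100-disk-1.qcow2   # confirm format, virtual size (should still be 8G) and allocated size
qemu-img check /mnt/VMStor2/images/100/vm-100-disk-1.qcow2  # consistency check of the qcow2 metadata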

The following packages have unmet dependencies: pve-manager

I am running SolydX (based on Wheezy 7.6), which is in my opinion the most performant and useful Debian distro for my MacBook Air hardware.
http://solydxk.com/homeedition/solydx
(Just Linux no OSX)

In the past I got Proxmox up and running on a bare Wheezy server. Because it would take me days to rebuild the distro on top of plain Wheezy,
please help me solve errors similar to the ones "nasim" had. Last year's Air has to rely on the 3.10 kernel, but the following error messages
are the same with the 2.6.32 kernel in different environments. The active repositories are:

deb http://home.solydxk.com/production solydxk main upstream import
deb http://debian.solydxk.com/production testing main contrib non-free
deb http://debian.solydxk.com/security testing/updates main contrib non-free
deb http://community.solydxk.com/production solydxk main
deb http://download.proxmox.com/debian wheezy pve-no-subscription


But even when I disable all of them except "deb http://download.proxmox.com/debian wheezy pve-no-subscription", it doesn't help:

pvetest test # apt-get install pve-manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
pve-manager : Depends: libauthen-pam-perl but it is not installable
Depends: liblockfile-simple-perl but it is not installable
Depends: vncterm but it is not going to be installed
Depends: qemu-server (>= 1.1-1) but it is not going to be installed
Depends: vlan but it is not installable
Depends: ifenslave-2.6 (>= 1.1.0-10) but it is not installable
Depends: liblinux-inotify2-perl but it is not installable
Depends: pve-cluster (>= 1.0-29) but it is not going to be installed
Depends: libpve-common-perl but it is not going to be installed
Depends: libpve-storage-perl but it is not going to be installed
Depends: libpve-access-control (>= 3.0-2) but it is not going to be installed
Depends: libfilesys-df-perl but it is not installable
Depends: libfile-readbackwards-perl but it is not installable
Depends: libfile-sync-perl but it is not installable
Depends: redhat-cluster-pve but it is not going to be installed
Depends: resource-agents-pve but it is not going to be installed
Depends: fence-agents-pve but it is not going to be installed
Depends: cstream but it is not installable
Depends: lzop but it is not installable
Depends: dtach but it is not installable
Depends: libanyevent-perl but it is not installable
Depends: libanyevent-http-perl but it is not installable
Depends: spiceterm but it is not going to be installed
Depends: librados2-perl but it is not going to be installed
Depends: pve-firewall but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
pvetest test #
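
For reference, when everything except the pve repository is disabled, the Perl and utility packages listed as "not installable" have no source at all; they normally come from the plain Debian Wheezy archive. A minimal sketch of the sources.list a Proxmox 3.x install expects (assuming a Wheezy base; the mirrors are just examples):
Code:

deb http://ftp.debian.org/debian wheezy main contrib
deb http://security.debian.org/ wheezy/updates main contrib
deb http://download.proxmox.com/debian wheezy pve-no-subscription

The Proxmox repository key also needs to be imported, roughly:
Code:

wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -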


Please help me combine SolydX and Proxmox 3.3 for testing purposes on a modern notebook.
It would be a great combination.

Thanks!

(Attachment: SolydX.png)

Removing Disk Images not returning space

Hello,

I've noticed that when removing disks from the web GUI or with rm, those files are still considered in use and therefore still take up space until a reboot of, I assume, every node in the cluster.
We are using NFS shares.

Is this intended?

Also, is there some nicer way to "clean up" than rebooting or manually cleaning up inodes?
Restarting all KVM processes should work as well, but that is also quite untidy, so to speak.
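
For reference, a quick way to see which processes are still holding the deleted images open (a sketch; lsof's +L1 option lists open files whose link count has dropped to zero, i.e. deleted but still in use; the storage name is a placeholder):
Code:

lsof +L1 /mnt/pve/<nfs-storage-name>   # run on each node; the PID column shows which kvm process still holds the old disk image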

Restore error

Hi all,
I have two Proxmox 3.0 servers. On the first one I made a live backup of a big QEMU VM (700 GB) and copied the resulting .vma.lzo file to the second server under /var/lib/vz/dump (using a USB disk between the two servers).
Now, on the second server, I can't restore:

root@atlas:/var/lib/vz/dump# qmrestore /var/lib/vz/dump/vzdump-qemu-100-2014_10_08-16_04_52.vma.lzo 113
can't find archive file '/var/lib/vz/dump/vzdump-qemu-100-2014_10_08-16_04_52.vma.lzo'

root@atlas:/var/lib/vz/dump# cat vzdump-qemu-100-2014_10_08-16_04_52.vma.lzo | qmrestore - 113
restore vma archive: vma extract -v -r /var/tmp/vzdumptmp142819.fifo - /var/tmp/vzdumptmp142819
command 'vma extract -v -r /var/tmp/vzdumptmp142819.fifo - /var/tmp/vzdumptmp142819' failed: got timeout

And I don't see the file in the GUI (I hoped to see it under the local storage).

What am I missing?
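
For reference, a couple of checks that might narrow this down (a sketch; lzop and pvesm are already on the host):
Code:

ls -lh /var/lib/vz/dump/vzdump-qemu-100-2014_10_08-16_04_52.vma.lzo   # is the file really there, complete and readable?
lzop -t /var/lib/vz/dump/vzdump-qemu-100-2014_10_08-16_04_52.vma.lzo  # test the lzo archive for corruption from the USB copy
pvesm list local                                                      # backups only appear in the GUI if the 'local' storage has 'backup' content enabled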

Thanks

Proxmox under Virtualbox

Hi all,

I've been struggling to get a Proxmox PVE under VirtualBox working on my OSX laptop. Unlike on my Mac Mini, where IP addresses are fixed, the laptop is often online and/or the network address changes.

I've been through various articles and compiled a rough article on the wiki: https://pve.proxmox.com/wiki/Virtualbox

I would most appreciate any edits!

Thanks,
Martin.

/dev/mapper/pve-root full

I have tried all the threads in the forum, but I cannot find which files are filling pve-root. Isn't there an easy way?
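
For reference, a sketch that usually narrows it down quickly (-x keeps du on the root filesystem, so /var/lib/vz and other mounts are not counted):
Code:

du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -n 20   # the 20 biggest directories on the root filesystem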

Solaris 10 Guest very slow with proxmox 3.3

Hi,

we have been running some Solaris guests since Proxmox 3.0 with OK performance for more than a year.
We never rebooted them.

Yesterday we restarted the Solaris boxes under Proxmox 3.3, and since then we have very bad performance. Booting takes 20 minutes.
What might have changed in the meantime is the KVM version; nothing else changed.

We also tested all the recommendations on https://pve.proxmox.com/wiki/Solaris_Guests and http://www.linux-kvm.org/page/Guest_Support_Status without success.

Does someone have an idea what might be causing the problem?

Thanks

Please help - I apt-get purged the proxmox core

Specifically, I did "apt-get purge postfix" on our proxmox server, which hosted a bunch of VMs and containers with, eg. our website. This is from apt.history:
Code:

Start-Date: 2014-10-02  15:21:38
Commandline: apt-get purge postfix
Purge: postfix:amd64 (2.9.6-2), proxmox-ve-2.6.32:amd64 (3.1-114), pve-manager:amd64 (3.1-21), bsd-mailx:amd64 (8.1.2-0.20111106cvs-1)
End-Date: 2014-10-02  15:21:42

Now our website is down, and our containers can't be accessed. Also, I tried connecting to Proxmox's GUI with Firefox, but get "Unable to connect." And I was told that all the configs for stuff like the proxy and pve-manager were purged.

This seems like a big problem, and I'm not sure where to start to go about fixing this. Any help would be appreciated.
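
For reference, a sketch of putting the purged packages back (the names are taken from the Purge line above; the VM/CT configuration under /etc/pve is stored by pve-cluster, which is not in that list, so it should still be intact):
Code:

apt-get update
apt-get install proxmox-ve-2.6.32 pve-manager postfix bsd-mailx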

Console not working on PVE 3.3

Hello!

The console works great with the default Proxmox OpenVZ templates,
but when I use the 64-bit Debian 7.0 template from openvz.org, the console starts but I get a black screen and nothing works!


Maybe somebody has fixed this problem?
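
For reference, a common cause with third-party OpenVZ templates is that nothing spawns a getty for the container console, so the console attaches but stays black. A sketch of the usual fix, assuming the Debian 7 template still uses sysvinit (CTID is a placeholder):
Code:

# enter the container first: vzctl enter <CTID>
# then append to the container's /etc/inittab:
1:2345:respawn:/sbin/getty 38400 tty1
c0:2345:respawn:/sbin/getty 38400 console
# and reload init without restarting the container:
telinit q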

Thanks

PROXMOX 3.3 - vnc and NoVnc both size windows guest consoles incorrectly

Windows 2008 R2 64 bit KVM guest using VNC Console

I have been seeing this problem when launching VNC consoles for Windows guests on two different Proxmox hosts.
I see the behavior when launching the console using Firefox 32.03 and IE 11 on Windows 7 Professional 64-bit.

I saw the behavior on my own Dell Precision M4400 with discrete NVIDIA video, as well as on a Dell Optiplex 9010 with intel integrated video.


When I launch the console using VNC I get the guest machine's desktop, not stretched or squeezed (i.e. not scaled) at the actual resolution of 1024x768 in the upper left hand corner of the browser window, but with a large border around the lower right and bottom edges:
(Attachment: VNCView1024x768DC03.png)



Windows 2008 R2 64 bit KVM guest using NoVNC Console
If I use NoVNC I get a browser window that is the same size, but the guest OS window is stretched to fill it, and there is no way to resize the browser window so that the scaling is gone and I can use more screen real estate.

(Attachment: NoVNCView1024x768DC03-stretched.jpg)
1024x768 is not enough screen real estate for the way I work, and if I were to increase it to 1280x1024 it extends WAY beyond the desktop of my laptop which has a native resolution of 1920x1200.


Is this a known issue? Is there a config file somewhere that I can modify to fix this?


Thanks

Chris.

Open vSwitch and multicast issues (cluster keeps losing quorum)

I'm having issues where multicast doesn't appear to work when I use openvswitch to configure my bonds and bridges. Initially at boot, everything comes up and the system has quorum, then in the logs you start seeing totem retransmissions on all systems and everything gets out of whack. Obviously this is multicast related, but I've seen no guidance on workarounds when using Open vSwitch.

What I'm trying to do is bond 2 nics in LACP for redundancy and bandwidth aggregation across 2 Juniper EX switches in a stack (aka chassis cluster). Then I have my bridge on top of that with a couple of internal interfaces for the local machine to use off that bridge, one for proxmox cluster communication and one for ceph communication.

This worked fine when I was using standard linux bonding and bridging, as long as I used a post-up script to turn on the multicast querier on any bridges like:
Code:

post-up ( echo 1 > /sys/devices/virtual/net/$IFACE/bridge/multicast_querier && sleep 5 )

However, that doesn't appear to be an option on Open vSwitch bridges; there are no bridge settings under /sys/devices/virtual/net/$IFACE for them. Is there another way to make this work? Google didn't turn up anything. Right now I've switched to using cman transport="udpu" as a workaround, which seems to have worked, but I know it isn't considered a good idea.
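
For reference, the udpu workaround mentioned above is just an extra attribute on the cman element in /etc/pve/cluster.conf (a sketch assuming the stock PVE layout; existing attributes such as keyfile stay as they are, and config_version has to be bumped before activating the new config):
Code:

<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>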

It should also be mentioned that I am running the latest release from the pve-no-subscription repository, currently, and using the 3.10.0-4-pve kernel.

Here's my /etc/network/interfaces:
Code:

auto lo
iface lo inet loopback

allow-vmbr0 ovsbond
iface ovsbond inet manual
  ovs_bridge vmbr0
  ovs_type OVSBond
  ovs_bonds eth0 eth1
  pre-up ( ifconfig eth0 mtu 9000 && ifconfig eth1 mtu 9000 )
  ovs_options bond_mode=balance-tcp lacp=active other_config:lacp-time=fast
  mtu 9000

auto vmbr0
allow-ovs vmbr0
iface vmbr0 inet manual
  ovs_type OVSBridge
  ovs_ports ovsbond vlan50 vlan55
  mtu 9000

# Proxmox cluster communication vlan
allow-vmbr0 vlan50
iface vlan50 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=50
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.50.10.44
  netmask 255.255.255.0
  gateway 10.50.10.1
  mtu 1500

# Ceph cluster communication vlan (jumbo frames)
allow-vmbr0 vlan55
iface vlan55 inet static
  ovs_type OVSIntPort
  ovs_bridge vmbr0
  ovs_options tag=55
  ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
  address 10.55.10.44
  netmask 255.255.255.0
  mtu 9000

And because of a bug, as I found in this forum, I also had to append this to /etc/default/openvswitch-switch to get the interfaces to come up during boot:
Code:

RUN_DIR="/run/network"
IFSTATE="$RUN_DIR/ifstate"

check_ifstate() {
    if [ ! -d "$RUN_DIR" ] ; then
        if ! mkdir -p "$RUN_DIR" ; then
            log_failure_msg "can't create $RUN_DIR"
            exit 1
        fi
    fi
    if [ ! -r "$IFSTATE" ] ; then
        if ! :> "$IFSTATE" ; then
            log_failure_msg "can't initialise $IFSTATE"
            exit 1
        fi
    fi
}

check_ifstate

Thanks!
-Brad

Installing Proxmox with UEFI BIOS Setting On and Secure Boot Off

Does Proxmox install on a UEFI enabled BIOS (with or without secure boot -- my motherboard has the ability to enable UEFI while also disabling secure boot)?

Background:

Because my previous attempts to install Proxmox 3.2 on a server with the UEFI BIOS setting enabled failed, I recently disabled that setting and installed Proxmox 3.3 successfully. Specifically, the 3.2 failures would indeed install Proxmox, but after a reboot it would not boot Proxmox.

However, now I'm considering reinstalling Proxmox with UEFI enabled (if that's possible), because of its support for larger than 2TB partitions.

You see, we have a Dell T620 with eight 900GB (10,000 RPM) hard drives. I configured its H710 RAID controller with RAID 10 and created only one virtual volume spanning all 8 drives.

That should produce 3600GB of usable space, but unfortunately I'm just now noticing that Proxmox is only showing 1.65TB of space under the local storage for this node. This is likely due to bios limitations caused by not having UEFI enabled. Or, it could be that I don't have the latest drivers for my H710 RAID controller. I honestly can't remember how much space it reported at the time I created the RAID (going to check that tomorrow).
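
For reference, a quick sketch of how to see what the installer actually did with the disk, independent of what the GUI reports (parted shows the partition label and sizes, vgs/lvs the LVM layout):
Code:

parted /dev/sda print   # gpt vs msdos label, disk size, partition sizes
vgs                     # size of the pve volume group
lvs                     # how big pve-root, pve-data and swap really are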

I hate to do it, but I'm going to migrate all virtual machines to another node in the cluster and take this machine down for the second time (the first time was to add additional hard drives to the RAID 10).

Man, I wish I had realized this earlier. But please save me some time if UEFI is a known issue with Proxmox, because my Google searches aren't producing much certainty or guidance. Please testify if you've installed on top of UEFI successfully; any tips would be greatly appreciated.

rgmanager won't start automatically

We have a three-node cluster with Proxmox 3.3-1/a06c9f73 and Ceph up and running.
After each reboot of any of the cluster's nodes, rgmanager won't start automatically on the rebooted node.

Also we are not able to start rgmanager via web interface.

To start rgmanager we have to restart cman and pve-cluster first. After this everything runs fine.
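
For reference, the manual sequence described above looks roughly like this (a sketch, run on the rebooted node):
Code:

service cman restart
service pve-cluster restart
service rgmanager start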

Each node has three bonded interfaces. Cluster communication is established via a bridge connected to one of the bonds.
I've already tested multicast transmission (omping / asmping) between all nodes, with the result that the transmission seems to be fine (no losses, constant ping times).
Fencing is configured and tested.

All three nodes are also acting as Ceph server nodes. Ceph communication takes place on a dedicated bond interface.


Here is some information I collected directly after the reboot:


Code:

:~# clustat
Cluster Status for dmc-cluster-ni @ Fri Oct 10 08:56:12 2014
Member Status: Quorate
 
 Member Name                                                    ID  Status
 ------ ----                                                    ---- ------
 lx-vmhost-ni1                                                      1 Online
 lx-vmhost-ni0                                                      2 Online
 lx-vmhost-ni2                                                      3 Online, Local


Code:

:~# /etc/init.d/rgmanager restart
Stopping Cluster Service Manager: [  OK  ]
Starting Cluster Service Manager: [FAILED]

Code:

#clusvcadm -e testservice
Local machine trying to enable service:testservice...Could not connect to resource group manager

Code:

#pvecm status
Version: 6.2.0
Config Version: 16
Cluster Name: dmc-cluster-ni
Cluster Id: 49049
Cluster Member: Yes
Cluster Generation: 304
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 1
Flags:
Ports Bound: 0
Node name: lx-vmhost-ni2
Node ID: 3
Multicast addresses: 239.192.191.89
Node addresses: 172.18.0.37

Code:

#cat /var/log/cluster/rgmanager.log
Oct 10 08:52:29 rgmanager Waiting for quorum to form
Oct 10 08:52:52 rgmanager Quorum formed

Nothing in fenced.log
Nothing in dlm_control.log


Code:

#cat /var/log/cluster/corosync.log
Oct 10 08:51:40 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Oct 10 08:51:40 corosync [MAIN  ] Corosync built-in features: nss
Oct 10 08:51:40 corosync [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Oct 10 08:51:40 corosync [MAIN  ] Successfully parsed cman config
Oct 10 08:51:40 corosync [MAIN  ] Successfully configured openais services to load
Oct 10 08:51:40 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 10 08:51:40 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 10 08:51:40 corosync [TOTEM ] The network interface [172.18.0.37] is now up.
Oct 10 08:51:40 corosync [QUORUM] Using quorum provider quorum_cman
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Oct 10 08:51:40 corosync [CMAN  ] CMAN 1364188437 (built Mar 25 2013 06:14:01) started
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: openais cluster membership service B.01.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: openais event service B.01.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: openais message service B.03.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: openais distributed locking service B.03.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: openais timer service A.01.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync configuration service
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync profile loading service
Oct 10 08:51:40 corosync [QUORUM] Using quorum provider quorum_cman
Oct 10 08:51:40 corosync [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Oct 10 08:51:40 corosync [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Oct 10 08:51:40 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:51:40 corosync [CLM  ] New Configuration:
Oct 10 08:51:40 corosync [CLM  ] Members Left:
Oct 10 08:51:40 corosync [CLM  ] Members Joined:
Oct 10 08:51:40 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:51:40 corosync [CLM  ] New Configuration:
Oct 10 08:51:40 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:51:40 corosync [CLM  ] Members Left:
Oct 10 08:51:40 corosync [CLM  ] Members Joined:
Oct 10 08:51:40 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:51:40 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:51:40 corosync [QUORUM] Members[1]: 3
Oct 10 08:51:40 corosync [QUORUM] Members[1]: 3
Oct 10 08:51:40 corosync [CPG  ] chosen downlist: sender r(0) ip(172.18.0.37) ; members(old:0 left:0)
Oct 10 08:51:40 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:52:24 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:24 corosync [CLM  ] New Configuration:
Oct 10 08:52:24 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:52:24 corosync [CLM  ] Members Left:
Oct 10 08:52:24 corosync [CLM  ] Members Joined:
Oct 10 08:52:24 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:24 corosync [CLM  ] New Configuration:
Oct 10 08:52:24 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:52:24 corosync [CLM  ] Members Left:
Oct 10 08:52:24 corosync [CLM  ] Members Joined:
Oct 10 08:52:24 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:52:24 corosync [CPG  ] chosen downlist: sender r(0) ip(172.18.0.37) ; members(old:1 left:0)
Oct 10 08:52:24 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:52:44 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:44 corosync [CLM  ] New Configuration:
Oct 10 08:52:44 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:52:44 corosync [CLM  ] Members Left:
Oct 10 08:52:44 corosync [CLM  ] Members Joined:
Oct 10 08:52:44 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:44 corosync [CLM  ] New Configuration:
Oct 10 08:52:44 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:52:44 corosync [CLM  ] Members Left:
Oct 10 08:52:44 corosync [CLM  ] Members Joined:
Oct 10 08:52:44 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:52:44 corosync [CPG  ] chosen downlist: sender r(0) ip(172.18.0.37) ; members(old:1 left:0)
Oct 10 08:52:44 corosync [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:52:52 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:52 corosync [CLM  ] New Configuration:
Oct 10 08:52:52 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:52:52 corosync [CLM  ] Members Left:
Oct 10 08:52:52 corosync [CLM  ] Members Joined:
Oct 10 08:52:52 corosync [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:52 corosync [CLM  ] New Configuration:
Oct 10 08:52:52 corosync [CLM  ]      r(0) ip(172.18.0.32)
Oct 10 08:52:52 corosync [CLM  ]      r(0) ip(172.18.0.35)
Oct 10 08:52:52 corosync [CLM  ]      r(0) ip(172.18.0.37)
Oct 10 08:52:52 corosync [CLM  ] Members Left:
Oct 10 08:52:52 corosync [CLM  ] Members Joined:
Oct 10 08:52:52 corosync [CLM  ]      r(0) ip(172.18.0.32)
Oct 10 08:52:52 corosync [CLM  ]      r(0) ip(172.18.0.35)
Oct 10 08:52:52 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:52:52 corosync [CMAN  ] quorum regained, resuming activity
Oct 10 08:52:52 corosync [QUORUM] This node is within the primary component and will provide service.
Oct 10 08:52:52 corosync [QUORUM] Members[2]: 1 3
Oct 10 08:52:52 corosync [QUORUM] Members[2]: 1 3
Oct 10 08:52:52 corosync [QUORUM] Members[3]: 1 2 3
Oct 10 08:52:52 corosync [QUORUM] Members[3]: 1 2 3
Oct 10 08:52:52 corosync [CPG  ] chosen downlist: sender r(0) ip(172.18.0.35) ; members(old:2 left:0)
Oct 10 08:52:52 corosync [MAIN  ] Completed service synchronization, ready to provide service.


Syslog directly after reboot:

Code:

Oct 10 08:51:39 lx-vmhost-ni2 kernel: igb 0000:81:00.1: eth3: igb: eth3 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 10 08:51:39 lx-vmhost-ni2 kernel: igb 0000:02:00.0: eth0: igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 10 08:51:39 lx-vmhost-ni2 kernel: igb 0000:02:00.1: eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 10 08:51:39 lx-vmhost-ni2 kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: bond0: link status definitely up for interface eth1, 1000 Mbps full duplex.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: vmbr0: port 1(bond0) entering forwarding state
Oct 10 08:51:39 lx-vmhost-ni2 kernel: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
Oct 10 08:51:39 lx-vmhost-ni2 kernel: device bond2 entered promiscuous mode
Oct 10 08:51:39 lx-vmhost-ni2 kernel: device eth4 entered promiscuous mode
Oct 10 08:51:39 lx-vmhost-ni2 kernel: device eth5 entered promiscuous mode
Oct 10 08:51:39 lx-vmhost-ni2 kernel: ADDRCONF(NETDEV_UP): bond2: link is not ready
Oct 10 08:51:39 lx-vmhost-ni2 kernel: 8021q: adding VLAN 0 to HW filter on device bond2
Oct 10 08:51:39 lx-vmhost-ni2 kernel: 8021q: adding VLAN 0 to HW filter on device vmbr2
Oct 10 08:51:39 lx-vmhost-ni2 kernel: bond1: link status definitely up for interface eth3, 1000 Mbps full duplex.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: igb 0000:83:00.0: eth4: igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 10 08:51:39 lx-vmhost-ni2 kernel: bond2: link status definitely up for interface eth4, 1000 Mbps full duplex.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: vmbr2: port 1(bond2) entering forwarding state
Oct 10 08:51:39 lx-vmhost-ni2 kernel: ADDRCONF(NETDEV_CHANGE): bond2: link becomes ready
Oct 10 08:51:39 lx-vmhost-ni2 kernel: igb 0000:83:00.1: eth5: igb: eth5 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Oct 10 08:51:39 lx-vmhost-ni2 kernel: bond2: link status definitely up for interface eth5, 1000 Mbps full duplex.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: RPC: Registered named UNIX socket transport module.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: RPC: Registered udp transport module.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: RPC: Registered tcp transport module.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: Slow work thread pool: Starting up
Oct 10 08:51:39 lx-vmhost-ni2 kernel: Slow work thread pool: Ready
Oct 10 08:51:39 lx-vmhost-ni2 kernel: FS-Cache: Loaded
Oct 10 08:51:39 lx-vmhost-ni2 kernel: NFS: Registering the id_resolver key type
Oct 10 08:51:39 lx-vmhost-ni2 kernel: FS-Cache: Netfs 'nfs' registered for caching
Oct 10 08:51:39 lx-vmhost-ni2 kernel: Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Oct 10 08:51:39 lx-vmhost-ni2 kernel: Loading iSCSI transport class v2.0-870.
Oct 10 08:51:39 lx-vmhost-ni2 kernel: iscsi: registered transport (tcp)
Oct 10 08:51:39 lx-vmhost-ni2 kernel: iscsi: registered transport (iser)
Oct 10 08:51:39 lx-vmhost-ni2 rrdcached[3594]: starting up
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3589]: ntpd 4.2.6p5@1.2349-o Sat May 12 09:54:55 UTC 2012 (1)
Oct 10 08:51:39 lx-vmhost-ni2 rrdcached[3594]: checking for journal files
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: proto: precision = 0.159 usec
Oct 10 08:51:39 lx-vmhost-ni2 rrdcached[3594]: started new journal /var/lib/rrdcached/journal/rrd.journal.1412923899.648033
Oct 10 08:51:39 lx-vmhost-ni2 rrdcached[3594]: journal processing complete
Oct 10 08:51:39 lx-vmhost-ni2 rrdcached[3594]: listening for connections
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen and drop on 1 v6wildcard :: UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 2 lo 127.0.0.1 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 3 vmbr0 172.18.0.37 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 4 vmbr1 192.168.151.4 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 5 vmbr2 192.168.152.4 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 6 lo ::1 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 7 vmbr1 fe80::225:90ff:fee8:2be8 UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listen normally on 8 vmbr0 fe80::225:90ff:feef:9aea UDP 123
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: peers refreshed
Oct 10 08:51:39 lx-vmhost-ni2 ntpd[3595]: Listening on routing socket on fd #25 for interface updates
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [quorum] crit: quorum_initialize failed: 6
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [quorum] crit: can't initialize service
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [confdb] crit: confdb_initialize failed: 6
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [quorum] crit: can't initialize service
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] crit: cpg_initialize failed: 6
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [quorum] crit: can't initialize service
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] crit: cpg_initialize failed: 6
Oct 10 08:51:39 lx-vmhost-ni2 pmxcfs[3629]: [quorum] crit: can't initialize service
Oct 10 08:51:39 lx-vmhost-ni2 postfix/master[3713]: daemon started -- version 2.9.6, configuration /etc/postfix
Oct 10 08:51:40 lx-vmhost-ni2 /usr/sbin/cron[3764]: (CRON) INFO (pidfile fd = 3)
Oct 10 08:51:40 lx-vmhost-ni2 /usr/sbin/cron[3766]: (CRON) STARTUP (fork ok)
Oct 10 08:51:40 lx-vmhost-ni2 /usr/sbin/cron[3766]: (CRON) INFO (Running @reboot jobs)
Oct 10 08:51:40 lx-vmhost-ni2 kernel: DLM (built Aug 21 2014 08:36:35) installed
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Corosync built-in features: nss
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Successfully read config from /etc/cluster/cluster.conf
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Successfully parsed cman config
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Successfully configured openais services to load
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] The network interface [172.18.0.37] is now up.
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Using quorum provider quorum_cman
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CMAN  ] CMAN 1364188437 (built Mar 25 2013 06:14:01) started
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync CMAN membership service 2.90
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: openais cluster membership service B.01.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: openais event service B.01.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: openais checkpoint service B.01.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: openais message service B.03.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: openais distributed locking service B.03.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: openais timer service A.01.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync configuration service
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Oct 10 08:51:40 lx-vmhost-ni2 kernel: SGI XFS with ACLs, security attributes, large block/inode numbers, no debug enabled
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync profile loading service
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Using quorum provider quorum_cman
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:51:40 lx-vmhost-ni2 kernel: SGI XFS Quota Management subsystem
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Members[1]: 3
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Members[1]: 3
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [CPG  ] chosen downlist: sender r(0) ip(172.18.0.37) ; members(old:0 left:0)
Oct 10 08:51:40 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:51:40 lx-vmhost-ni2 kernel: XFS (sde1): Mounting Filesystem
Oct 10 08:51:40 lx-vmhost-ni2 kernel: XFS (sde1): Ending clean mount
Oct 10 08:51:40 lx-vmhost-ni2 iscsid: iSCSI daemon with pid=3387 started!
Oct 10 08:51:41 lx-vmhost-ni2 cimserver[3648]: Listening on HTTP port 5988.
Oct 10 08:51:41 lx-vmhost-ni2 cimserver[3648]: Listening on HTTPS port 5989.
Oct 10 08:51:41 lx-vmhost-ni2 cimserver[3648]: Listening on local connection socket.
Oct 10 08:51:41 lx-vmhost-ni2 cimserver[3648]: Started CIM Server version 2.11.1.
Oct 10 08:51:41 lx-vmhost-ni2 cimserver[3648]: CIM Server registration with Internal SLP Failed. Exception: CIM_ERR_METHOD_NOT_AVAILABLE: register
Oct 10 08:51:45 lx-vmhost-ni2 pmxcfs[3629]: [status] notice: update cluster info (cluster name  dmc-cluster-ni, version = 16)
Oct 10 08:51:45 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: members: 3/3629
Oct 10 08:51:45 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: all data is up to date
Oct 10 08:51:45 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: members: 3/3629
Oct 10 08:51:45 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: all data is up to date
Oct 10 08:51:46 lx-vmhost-ni2 kernel: vmbr0: no IPv6 routers present
Oct 10 08:51:48 lx-vmhost-ni2 kernel: vmbr1: no IPv6 routers present
Oct 10 08:51:49 lx-vmhost-ni2 kernel: vmbr2: no IPv6 routers present
Oct 10 08:51:59 lx-vmhost-ni2 maxView Storage Manager Agent: [752] Flush and fetch rate set to Medium: controller 1 ( Adaptec ASR7805Q #4B21136D03F Physical Slot: 3 )
Oct 10 08:51:59 lx-vmhost-ni2 ntpd[3595]: Deferring DNS for 0.debian.pool.ntp.org 1
Oct 10 08:52:10 lx-vmhost-ni2 kernel: XFS (sdc1): Mounting Filesystem
Oct 10 08:52:10 lx-vmhost-ni2 kernel: XFS (sdc1): Ending clean mount
Oct 10 08:52:11 lx-vmhost-ni2 ntpd[3595]: Deferring DNS for 1.debian.pool.ntp.org 1
Oct 10 08:52:11 lx-vmhost-ni2 ntpd[4780]: signal_no_reset: signal 17 had flags 4000000
Oct 10 08:52:11 lx-vmhost-ni2 kernel: XFS (sdf1): Mounting Filesystem
Oct 10 08:52:11 lx-vmhost-ni2 kernel: XFS (sdf1): Ending clean mount
Oct 10 08:52:12 lx-vmhost-ni2 kernel: XFS (sdd1): Mounting Filesystem
Oct 10 08:52:12 lx-vmhost-ni2 kernel: XFS (sdd1): Ending clean mount
Oct 10 08:52:13 lx-vmhost-ni2 ntpd[3595]: Listen normally on 9 vmbr2 fe80::225:90ff:fee8:2e20 UDP 123
Oct 10 08:52:13 lx-vmhost-ni2 ntpd[3595]: peers refreshed
Oct 10 08:52:13 lx-vmhost-ni2 ntpd_intres[4780]: DNS 0.debian.pool.ntp.org -> 144.76.118.85
Oct 10 08:52:13 lx-vmhost-ni2 ntpd_intres[4780]: DNS 1.debian.pool.ntp.org -> 148.251.9.60
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [CPG  ] chosen downlist: sender r(0) ip(172.18.0.37) ; members(old:1 left:0)
Oct 10 08:52:24 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:52:29 lx-vmhost-ni2 kernel: Netfilter messages via NETLINK v0.30.
Oct 10 08:52:29 lx-vmhost-ni2 pvepw-logger[5725]: starting pvefw logger
Oct 10 08:52:29 lx-vmhost-ni2 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 10 08:52:29 lx-vmhost-ni2 kernel: tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
Oct 10 08:52:29 lx-vmhost-ni2 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Oct 10 08:52:29 lx-vmhost-ni2 kernel: ip6_tables: (C) 2000-2006 Netfilter Core Team
Oct 10 08:52:29 lx-vmhost-ni2 kernel: Enabling conntracks and NAT for ve0
Oct 10 08:52:29 lx-vmhost-ni2 kernel: nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
Oct 10 08:52:29 lx-vmhost-ni2 kernel: ploop_dev: module loaded
Oct 10 08:52:29 lx-vmhost-ni2 rgmanager[5876]: Waiting for quorum to form
Oct 10 08:52:29 lx-vmhost-ni2 pve-firewall[5881]: starting server
Oct 10 08:52:29 lx-vmhost-ni2 kernel: ip_set: protocol 6
Oct 10 08:52:30 lx-vmhost-ni2 pvedaemon[5894]: starting server
Oct 10 08:52:30 lx-vmhost-ni2 pvedaemon[5894]: starting 3 worker(s)
Oct 10 08:52:30 lx-vmhost-ni2 pvedaemon[5894]: worker 5896 started
Oct 10 08:52:30 lx-vmhost-ni2 pvedaemon[5894]: worker 5897 started
Oct 10 08:52:30 lx-vmhost-ni2 pvedaemon[5894]: worker 5898 started
Oct 10 08:52:30 lx-vmhost-ni2 pvestatd[5918]: starting server
Oct 10 08:52:30 lx-vmhost-ni2 pveproxy[5923]: starting server
Oct 10 08:52:30 lx-vmhost-ni2 pveproxy[5923]: starting 3 worker(s)
Oct 10 08:52:30 lx-vmhost-ni2 pveproxy[5923]: worker 5924 started
Oct 10 08:52:30 lx-vmhost-ni2 pveproxy[5923]: worker 5925 started
Oct 10 08:52:30 lx-vmhost-ni2 pveproxy[5923]: worker 5926 started
Oct 10 08:52:30 lx-vmhost-ni2 ntpd[3595]: Listen normally on 10 venet0 fe80::1 UDP 123
Oct 10 08:52:30 lx-vmhost-ni2 ntpd[3595]: peers refreshed
Oct 10 08:52:31 lx-vmhost-ni2 pvesh: <root@pam> starting task UPID:lx-vmhost-ni2:00001738:0000190C:5437822F:startall::root@pam:
Oct 10 08:52:31 lx-vmhost-ni2 spiceproxy[5945]: starting server
Oct 10 08:52:31 lx-vmhost-ni2 spiceproxy[5945]: starting 1 worker(s)
Oct 10 08:52:31 lx-vmhost-ni2 spiceproxy[5945]: worker 5946 started
Oct 10 08:52:38 lx-vmhost-ni2 kernel: venet0: no IPv6 routers present
Oct 10 08:52:41 lx-vmhost-ni2 task UPID:lx-vmhost-ni2:00001738:0000190C:5437822F:startall::root@pam:: cluster not ready - no quorum?
Oct 10 08:52:41 lx-vmhost-ni2 pvesh: <root@pam> end task UPID:lx-vmhost-ni2:00001738:0000190C:5437822F:startall::root@pam: cluster not ready - no quorum?
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [CPG  ] chosen downlist: sender r(0) ip(172.18.0.37) ; members(old:1 left:0)
Oct 10 08:52:44 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] CLM CONFIGURATION CHANGE
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] New Configuration:
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.32)
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.35)
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.37)
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Left:
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] Members Joined:
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.32)
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CLM  ] #011r(0) ip(172.18.0.35)
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CMAN  ] quorum regained, resuming activity
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [QUORUM] This node is within the primary component and will provide service.
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Members[2]: 1 3
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [status] notice: node has quorum
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Members[2]: 1 3
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Members[3]: 1 2 3
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [QUORUM] Members[3]: 1 2 3
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [CPG  ] chosen downlist: sender r(0) ip(172.18.0.35) ; members(old:2 left:0)
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: members: 2/5675, 3/3629
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: starting data syncronisation
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: members: 2/5675, 3/3629
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: starting data syncronisation
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: members: 1/5063, 2/5675, 3/3629
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: members: 1/5063, 2/5675, 3/3629
Oct 10 08:52:52 lx-vmhost-ni2 corosync[3915]:  [MAIN  ] Completed service synchronization, ready to provide service.
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: received sync request (epoch 1/5063/0000000F)
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: received sync request (epoch 1/5063/0000000F)
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: received all states
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: leader is 1/5063
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: synced members: 1/5063, 2/5675, 3/3629
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: all data is up to date
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: received all states
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [dcdb] notice: all data is up to date
Oct 10 08:52:52 lx-vmhost-ni2 pmxcfs[3629]: [status] notice: dfsm_deliver_queue: queue length 9
Oct 10 08:52:52 lx-vmhost-ni2 rgmanager[5876]: Quorum formed
Oct 10 08:52:52 lx-vmhost-ni2 kernel: dlm: no local IP address has been set
Oct 10 08:52:52 lx-vmhost-ni2 kernel: dlm: cannot start dlm lowcomms -107
Oct 10 08:53:32 lx-vmhost-ni2 pmxcfs[3629]: [status] notice: received log




Any ideas how to solve this problem?

PVE 3.3 backup problem


INFO: starting new backup job: vzdump 106 --remove 0 --mode snapshot --compress lzo --storage local --node node-1

INFO: Starting Backup of VM 106 (openvz)
INFO: CTID 106 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /1-ssd2/private/106/ to /var/lib/vz/dump/vzdump-openvz-106-2014_10_10-04_48_09.tmp

Why can't vzdump detect the LVM volume group?
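
For reference, a quick way to check whether the container's private area actually lives on an LVM logical volume, which is what snapshot mode needs (a sketch):
Code:

df -P /1-ssd2/private/106   # which block device /1-ssd2 really sits on
lvs                         # is that device an LVM logical volume at all?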


Thanks

disk io limiting customers

hi,

I played around with the I/O limiting for disks. We have servers with local storage and also a Ceph cluster.
When I limit a disk to, for example, 500 iops, I get around 2 MB/s with 4K reads/writes, which is exactly 500 iops.
But with 4M blocks I only get a speed of 65 MB/s, which is far below that.

It is impossible for me to limit a VM to the specs of a SATA disk (around 120 iops and 130 MB/s), because to get 130 MB/s sequential I need around 1000 iops, and if I limit the iops to 120 I get far less (around 35 MB/s) sequential read/write.
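
For reference, a sketch of how such numbers can be reproduced inside a guest with fio, varying only the block size (the test file path is a placeholder):
Code:

# 4K random writes - should sit exactly at the configured iops cap
fio --name=iops4k --filename=/root/fiotest --size=1G --bs=4k --rw=randwrite --direct=1 --ioengine=libaio --runtime=30 --time_based
# 4M sequential reads - shows what the same iops cap does to large-block throughput
fio --name=seq4m --filename=/root/fiotest --size=1G --bs=4M --rw=read --direct=1 --ioengine=libaio --runtime=30 --time_based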

BTW, I could not see any difference from burst_iops. I set it to numbers like 100000 and it made no difference, even for small writes.

greetings

root partition full

Hi,

I have a problem with my Proxmox root partition. It seems to be 56% full but there are no big files on it.
I know there are many threads about this problem, but none of them solved my problem.

df:
Code:

udev                        10240        0      10240    0% /dev
tmpfs                      402636      328    402308    1% /run
/dev/sda2                20317448 10620680    8672820  56% /
tmpfs                        5120        0      5120    0% /run/lock
tmpfs                      1014780    24864    989916    3% /run/shm
/dev/mapper/pve-data    1912594928  3511428 1812694380    1% /var/lib/vz
/var/lib/vz/private/101  262144000  479844  261664156    1% /var/lib/vz/root/101
tmpfs                      262144        0    262144    0% /var/lib/vz/root/101/lib/init/rw
tmpfs                      262144        0    262144    0% /var/lib/vz/root/101/dev/shm
/var/lib/vz/private/102  83886080  700060  83186020    1% /var/lib/vz/root/102
none                        131072        4    131068    1% /var/lib/vz/root/102/dev
none                        26216      980      25236    4% /var/lib/vz/root/102/run
none                          5120        0      5120    0% /var/lib/vz/root/102/run/lock
none                        78640        0      78640    0% /var/lib/vz/root/102/run/shm
none                        102400        0    102400    0% /var/lib/vz/root/102/run/user
/dev/fuse                    30720      16      30704    1% /etc/pve

fdisk -l:


Code:

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.




Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


  Device Boot      Start        End      Blocks  Id  System
/dev/sda1              1  3907029167  1953514583+  ee  GPT


Disk /dev/mapper/pve-data: 1974.0 GB, 1974049177600 bytes
255 heads, 63 sectors/track, 239997 cylinders, total 3855564800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/pve-data doesn't contain a valid partition table

ncdu:
Code:

    7,6GiB [##########] /home
.  4,8GiB [######    ] /var
    1,3GiB [#        ] /usr
  285,3MiB [          ] /lib
  100,3MiB [          ] /boot
  22,2MiB [          ] /sbin
.  18,5MiB [          ] /run
    6,7MiB [          ] /bin
.  6,0MiB [          ] /etc
.  1,5MiB [          ] /tmp
!  16,0KiB [          ] /lost+found
    8,0KiB [          ] /mnt
    4,0KiB [          ] /lib64
e  4,0KiB [          ] /srv
e  4,0KiB [          ] /selinux
!  4,0KiB [          ] /root
e  4,0KiB [          ] /opt
e  4,0KiB [          ] /media
.  0,0  B [          ] /sys
.  0,0  B [          ] /proc
    0,0  B [          ] /dev
@  0,0  B [          ]  vz

Does anybody know how to solve this problem?
I have not made any backups or anything else, so I don't understand this issue.
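
For reference, a sketch that keeps the scan on the root filesystem only, so bind-mounted container roots and /var/lib/vz don't get mixed into the totals (-x means "do not cross filesystem boundaries"):
Code:

ncdu -x /
du -xsh /home /var   # the two biggest entries in the listing above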

Proxmox 3.3 with 2 RAIDs: move data to second RAID

Hi,

I have a fresh Proxmox 3.3 setup on a RAID 1 (250 GB). I have another RAID (1500 GB).

I want to move pve-data to the second RAID. How can I do this?

The second RAID is /dev/sda...
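
For reference, one possible approach sketched with LVM, assuming the 1500 GB array is empty and appears as a single block device (device names below are placeholders; double-check them and take a backup first):
Code:

pvcreate /dev/<second-raid>                               # prepare the 1500GB array as an LVM physical volume
vgextend pve /dev/<second-raid>                           # add it to the existing 'pve' volume group
pvmove -n data /dev/<first-raid-pv> /dev/<second-raid>    # move only the pve-data extents onto the new array
lvextend -l +100%FREE pve/data                            # optionally grow pve-data into the remaining free space
resize2fs /dev/pve/data                                   # and grow the filesystem mounted at /var/lib/vz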


Thx!

Use different kvm binaries / kvm versions for different VMs ?

Hi,

we still have big Solaris 10 performance problems after upgrading the Proxmox KVM version.

On older KVM binaries the machines perform much better.

Is there an elegant way or workaround to run some VMs under a different KVM hypervisor version/binary than the rest in Proxmox?

Thanks
ado

US Based training / possible certification

So, love the product, and have been using it to run our testing cluster for a while. We're currently in the horrid Vmware Upgrade hamster wheel, and I've made the recommendation to the CIO to allow us to roll out Proxmox as an alternative. He's all for it, however, he wants to see if there is something of the following:

-US based training/consulting option for Proxmox
-Possible "Proxmox Certified Admin" or the like we could send staff to, to make the board of directors have their warm-fuzzies.

Is there such a thing? The IT staff is comfortable with Proxmox and loves it, so there's total buy-in as long as we can have these kinds of options fulfilled.

Thanks!

Trunking VM not working

I want to configure pfSense as my lab virtual firewall.

My physical server has 4 NICs, but for this purpose I'm using only two, with LACP.

My Cisco config:



Code:

!
interface FastEthernet0/45
 switchport trunk native vlan 192
 switchport mode trunk
 channel-group 1 mode passive
end

!
interface FastEthernet0/46
 switchport trunk native vlan 192
 switchport mode trunk
 channel-group 1 mode passive
end

interface Port-channel1
 switchport trunk native vlan 192
 switchport mode trunk
!




Proxmox config:

Code:

# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


iface eth2 inet manual


iface eth3 inet manual


auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode 802.3ad


auto vmbr0
iface vmbr0 inet static
    address  192.168.192.9
    netmask  255.255.255.0
    gateway  192.168.192.253
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0


auto vmbr1
iface vmbr1 inet manual
    bridge_ports eth2
    bridge_stp off
    bridge_fd 0

My VM config:

(Attachment: proxmox1.png)

My PFSense Config

(Attachment: pfsense.png)

I set rules of "any any allow" to all the interfaces so pfsense is only routing now.

Problem, I can't get the trunk to work. For instance, I can't ping 192.168.200.1 neither 192.168.201.1.

SOMETIMES it starts working, other times just vlan 200 works, other times only vlan 201 works, sometimes both work and MOST of the times none work...

Wan Interface (em0 on pfsense, net0 on proxmox) always work.

Any hint?
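
For reference, intermittent behaviour like this over an LACP bond is often a negotiation or hashing issue, so a first check is whether the aggregate actually formed on both ends (a sketch):
Code:

cat /proc/net/bonding/bond0   # on the Proxmox host: mode 802.3ad, both slaves up, matching aggregator IDs
# on the Cisco side:
#   show etherchannel summary
#   show interfaces trunk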

NOTE: This is a LAB for my own fun to test proxmox + pfsense to evaluate both as possible production tools...