Channel: Proxmox Support Forum

Console halts on fully qualified domain name

Hi all,

In v2.3 we installed an Ubuntu CT from a Proxmox template. After boot, the CT's console halts at the fully qualified domain name and no login prompt ever appears. SSH works fine. See the attached screenshot (screen.png) for an example.
How can we fix this so the console continues to a login prompt instead of halting?
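For reference, a hedged workaround sketch while this is being sorted out (the CTID 101 is just a placeholder, and a standard OpenVZ setup is assumed): you can still get a root shell in the container from the host and check whether any getty job exists for the console device at all.

Code:

  # run on the Proxmox host; 101 is a placeholder CTID
  vzctl enter 101                           # opens a shell inside the CT, bypassing the stuck console
  # inside the CT, see whether any getty/tty job is configured for the console
  ls /etc/init | grep -i -e tty -e console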

regards

quorum disk: status offline. howto resolve?

Hi all,

I am having a weird problem with my quorum disk.

clustat
Cluster Status for midgaard @ Sun Feb 24 11:49:05 2013
Member Status: Quorate


Member Name                    ID   Status
------ ----                    ---- ------
esx1                              1 Online, Local
esx2                              2 Online
/dev/block/8:17                   0 Offline, Quorum Disk


/etc/init.d/cman restart
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping qdiskd... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown:[ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Starting qdiskd... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel config... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]

The output above seems OK, but syslog continuously shows this:
Feb 24 11:47:29 esx1 pmxcfs[2281]: [status] crit: cpg_send_message failed: 9

Pinging node1 -> node2
asmping -I vmbr30 224.0.2.1 esx2 -c 2
asmping joined (S,G) = (*,224.0.2.234)
pinging 172.16.3.9 from 172.16.3.8
unicast from 172.16.3.9, seq=1 dist=0 time=0.188 ms
unicast from 172.16.3.9, seq=2 dist=0 time=0.197 ms
multicast from 172.16.3.9, seq=2 dist=0 time=0.237 ms


--- 172.16.3.9 statistics ---
2 packets transmitted, time 2001 ms
unicast:
2 packets received, 0% packet loss
rtt min/avg/max/std-dev = 0.188/0.192/0.197/0.014 ms
multicast:
1 packets received, 0% packet loss since first mc packet (seq 2) recvd
rtt min/avg/max/std-dev = 0.237/0.237/0.237/0.000 ms



pinging node2 -> node1
asmping -I vmbr30 224.0.2.1 esx1 -c 2
asmping joined (S,G) = (*,224.0.2.234)
pinging 172.16.3.8 from 172.16.3.9
unicast from 172.16.3.8, seq=1 dist=0 time=0.326 ms
unicast from 172.16.3.8, seq=2 dist=0 time=0.302 ms
multicast from 172.16.3.8, seq=2 dist=0 time=0.251 ms


--- 172.16.3.8 statistics ---
2 packets transmitted, time 2000 ms
unicast:
2 packets received, 0% packet loss
rtt min/avg/max/std-dev = 0.302/0.314/0.326/0.012 ms
multicast:
1 packets received, 0% packet loss since first mc packet (seq 2) recvd
rtt min/avg/max/std-dev = 0.251/0.251/0.251/0.000 ms
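For what it's worth, a hedged diagnostic sketch that may help narrow down why the qdisk shows Offline (paths assume a standard PVE 2.x install; adjust the device to the one clustat reports):

Code:

  mkqdisk -L                               # list quorum-disk labels visible on shared storage
  ps ax | grep [q]diskd                    # confirm qdiskd is actually running on this node
  grep -A1 quorumd /etc/pve/cluster.conf   # compare the configured label/device with what mkqdisk sees
  /etc/init.d/pve-cluster restart          # restarting pmxcfs is a common first step when cpg_send_message errors repeat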

Is hardware compatibility that of RHEL?

Dear all,

I was asking in a previous thread about RAID+SSDs or a PCIe SSD. Then I realized the Proxmox kernel is based on the one from RHEL. My question is: does all hardware that works on RHEL also work on Proxmox? Should all kernel modules developed for RHEL be compatible with Proxmox? If not, how can I find out what hardware is supported under Proxmox, either out of the box or through drivers?

thanks in advance,

Jose

rackpdu and multiple power supplies

Hello

We use this in cluster.conf:
Code:

  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="10.200.100.11" login="admin" name="apc11" passwd="xxxxxxx"/>
    <fencedevice agent="fence_apc" ipaddr="10.200.100.78" login="admin" name="apc78" passwd="ssssi"/>
    <fencedevice agent="fence_apc" ipaddr="10.200.100.88" login="admin" name="apc88" passwd="zzzzzz"/>
  </fencedevices>

  <clusternodes>
    <clusternode name="fnode241" nodeid="1" votes="1">
      <fence>
        <method name="power">
          <device name="apc11" port="4" secure="on"/>
          <device name="apc11" port="5" secure="on"/>
          <device name="apc78" port="4" secure="on"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node243" nodeid="2" votes="1">
      <fence>
        <method name="power">
          <device name="apc11" port="2" secure="on"/>
          <device name="apc78" port="2" secure="on"/>
          <device name="apc78" port="3" secure="on"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

However, when fencing tries to shut down a node, the rack PDU outlets are power-cycled one at a time instead of all being turned off first and then all powered back on.

Is that a possible bug or does the fence configuration above need to be changed?
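For reference, the Red Hat cluster documentation handles dual power supplies by listing every outlet twice inside the same method: first all with an off action, then all with an on action, so the node is completely dark before anything comes back up. A hedged sketch of what that could look like for fnode241 (verify the attribute name against your fence_apc agent version):

Code:

    <clusternode name="fnode241" nodeid="1" votes="1">
      <fence>
        <method name="power">
          <!-- switch every feed off first... -->
          <device name="apc11" port="4" secure="on" action="off"/>
          <device name="apc11" port="5" secure="on" action="off"/>
          <device name="apc78" port="4" secure="on" action="off"/>
          <!-- ...then power everything back on -->
          <device name="apc11" port="4" secure="on" action="on"/>
          <device name="apc11" port="5" secure="on" action="on"/>
          <device name="apc78" port="4" secure="on" action="on"/>
        </method>
      </fence>
    </clusternode>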

Multiple IP address on ProxMox host

G'day.

I have searched for an answer, and not found it, so please excuse me if I am asking something that has been answered.

Set-up:

There are two physical NICs in the server. I have the following:
192.168.1.238 on vmbr1 which goes via eth1
192.168.2.238 on vmbr0 which goes via eth0
192.168.3.238 on vmbr2 which goes via eth0
eth0 is plugged into a switch with the other servers on the 192.168.2.0 and 192.168.3.0 networks.
eth1 is plugged into a switch with the 192.168.1.0 network machines.
192.168.1.0 ==> LAN
192.168.2.0 ==> DMZ
192.168.3.0 ==> DMZ

Problem:
I cannot ping the 192.168.3.238 from any network.
I cannot ping any 192.168.3.x address from the ProxMox host.

Additional info:
I can ping 192.168.1.x and 192.168.2.x from the ProxMox host.
I can ping 192.168.1.238 and 192.168.2.238 from the respective networks.
I can ping all the ProxMox host IP addresses from within the host itself, so I know they are configured and answering.

I have rebooted a couple of times, for various reasons.
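In case it helps, a hedged guess: a physical NIC can only be enslaved to one Linux bridge, so if vmbr2 also lists eth0 as a bridge port it will never pass traffic. One way around that (a sketch only, not your actual config) is to carry the 192.168.3.238 address as an alias on the bridge that already owns eth0:

Code:

  # /etc/network/interfaces (excerpt) -- hedged sketch
  auto vmbr0
  iface vmbr0 inet static
          address 192.168.2.238
          netmask 255.255.255.0
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0

  auto vmbr0:0
  iface vmbr0:0 inet static
          address 192.168.3.238
          netmask 255.255.255.0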

I appreciate any suggestions.

Cheers.
Bobby

VPS is gone, but still listed in web based management

Hi All...

I am running PVE 2.2 (I'll post the exact version below). I created a VPS on an NFS share. I later tried to delete it. It is still listed in the list of containers, and if I try to remove it I get an error indicating that the remove failed, with 'vzctl destroy 7100' failed error code 41.

If I log directly into the host and run vzlist -a I get my full list of containers, but the one I want to remove is not among them.

So my question is: do I have a problem, and how do I remove that container from the list?
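A hedged sketch of how the stale entry could be cleared by hand (the config path is an assumption for a standard PVE 2.x layout, so double-check before deleting anything):

Code:

  vzlist -a | grep 7100                          # confirm OpenVZ really no longer knows the CT
  ls /etc/pve/nodes/*/openvz/7100.conf           # find the leftover config the GUI is still listing
  rm /etc/pve/nodes/<yournode>/openvz/7100.conf  # removing it should drop CT 7100 from the web interface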

Also, I want to upgrade but I have been nervous to do so with that problem pending. Is it safe to upgrade?

Thanks very much...

Jim


root@proxmox22:~# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-71
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
root@proxmox22:~#

Proxmox 2.x sudden halt

We have an IBM x3650 server blade running Proxmox VE, clustered with another IBM blade. The kernel version is 2.6.32-14-pve. Everything was running smoothly until recently, when our system suddenly halted, taking the virtual containers down with it. We restarted the blade and everything was back to normal; however, we could not find a reason why this happened. The system generated no logs for the events that caused it to go down. The system went down on Feb 18 15:56 IST, yet the kernel log (kern.log, please check the attachments) has its last entry at Feb 18 03:12:22 IST. Further logs were only generated after we powered the system off and on again remotely through ipmitool.

I am attaching the kern.log file. Please help me out with this.
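One hedged suggestion while waiting for better ideas: on IBM hardware the IMM/BMC event log sometimes records a hardware event that never reaches kern.log, and it can be read with the same ipmitool already used for the power cycle.

Code:

  ipmitool sel list                 # local read of the service event log
  ipmitool sel elist | tail -n 50   # extended listing of the most recent entries
  # or remotely: ipmitool -I lanplus -H <imm-ip> -U <user> -P <password> sel elist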

Code:

Feb 18 00:13:49 clust-sec1 kernel: WARNING: at lib/list_debug.c:26 __list_add+0x6d/0xa0() (Tainted: G        W  ---------------  )

Feb 18 00:13:49 clust-sec1 kernel: Hardware name: IBM System x3650 -[0007979]-
Feb 18 00:13:49 clust-sec1 kernel: list_add corruption. next->prev should be prev (ffff88023b48fc38), but was 7672655320726f20. (next=ffff880095308ae0).
 

Feb 18 00:13:49 clust-sec1 kernel: Modules linked in: vzethdev vznetdev simfs vzrst vzcpt nfs lockd fscache nfs_acl auth_rpcgss sunrpc vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_owner xt_mac ipt_REDIRECT nf_nat_irc nf_nat_ftp iptable_nat nf_nat xt_helper xt_state xt_conntrack nf_conntrack_irc nf_conntrack_ftp nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 xt_length ipt_LOG xt_hl xt_tcpmss vhost_net xt_TCPMSS macvtap ipt_REJECT xt_DSCP xt_dscp xt_multiport macvlan tun xt_limit kvm_intel iptable_mangle kvm iptable_filter ip_tables dlm configfs vzevent ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi fuse scsi_transport_iscsi radeon ttm drm_kms_helper drm snd_pcsp ibmpex i2c_algo_bit i5000_edac snd_pcm ibmaem snd_timer ipmi_msghandler snd soundcore ics932s401 edac_core i2c_i801 ioatdma tpm_tis i5k_amb i2c_core snd_page_alloc tpm tpm_bios dca serio_raw shpchp ext4 mbcache jbd2 sg ses enclosure ata_generic pata_acpi ata_piix bnx2 aacraid [las
 

Feb 18 00:13:49 clust-sec1 kernel: t unloaded: configfs]

Feb 18 00:13:49 clust-sec1 kernel: Pid: 870426, comm: clamscan veid: 101 Tainted: G        W  ---------------    2.6.32-14-pve #1

Feb 18 00:13:49 clust-sec1 kernel: Call Trace:

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
 
Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff812816cd>] ? __list_add+0x6d/0xa0

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffffa072a4cc>] ? nfs_dq_prealloc_space+0x23c/0x2f0 [nfs]

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffffa07153aa>] ? nfs_file_write+0xba/0x210 [nfs]

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff8119489a>] ? do_sync_write+0xfa/0x140
 

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff81095be0>] ? autoremove_wake_function+0x0/0x40

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff8100984c>] ? __switch_to+0x1ac/0x320

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff81194b78>] ? vfs_write+0xb8/0x1a0

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff81195591>] ? sys_write+0x51/0x90

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff815288ae>] ? do_device_not_available+0xe/0x10

Feb 18 00:13:49 clust-sec1 kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
 

Feb 18 00:13:49 clust-sec1 kernel: ---[ end trace 9389904522a4dd41 ]---

Feb 18 00:29:10 clust-sec1 kernel: ------------[ cut here ]------------
Feb 18 00:29:10 clust-sec1 kernel: WARNING: at lib/list_debug.c:26 __list_add+0x6d/0xa0() (Tainted: G        W  ---------------  )

Feb 18 00:29:10 clust-sec1 kernel: Hardware name: IBM System x3650 -[0007979]-

Feb 18 00:29:10 clust-sec1 kernel: list_add corruption. next->prev should be prev (ffff88023b48fc38), but was b4ce58e5e426e95b. (next=ffff880095308ae0).

 
Feb 18 00:29:10 clust-sec1 kernel: Modules linked in: vzethdev vznetdev simfs vzrst vzcpt nfs lockd fscache nfs_acl auth_rpcgss sunrpc vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_owner xt_mac ipt_REDIRECT nf_nat_irc nf_nat_ftp iptable_nat nf_nat xt_helper xt_state xt_conntrack nf_conntrack_irc nf_conntrack_ftp nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 xt_length ipt_LOG xt_hl xt_tcpmss vhost_net xt_TCPMSS macvtap ipt_REJECT xt_DSCP xt_dscp xt_multiport macvlan tun xt_limit kvm_intel iptable_mangle kvm iptable_filter ip_tables dlm configfs vzevent ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi fuse scsi_transport_iscsi radeon ttm drm_kms_helper drm snd_pcsp ibmpex i2c_algo_bit i5000_edac snd_pcm ibmaem snd_timer ipmi_msghandler snd soundcore ics932s401 edac_core i2c_i801 ioatdma tpm_tis i5k_amb i2c_core snd_page_alloc tpm tpm_bios dca serio_raw shpchp ext4 mbcache jbd2 sg ses enclosure ata_generic pata_acpi ata_piix bnx2 aacraid [las
 

Feb 18 00:29:10 clust-sec1 kernel: t unloaded: configfs]

Feb 18 00:29:10 clust-sec1 kernel: Pid: 870426, comm: clamscan veid: 101 Tainted: G        W  ---------------    2.6.32-14-pve #1

Feb 18 00:29:10 clust-sec1 kernel: Call Trace:
 

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
 

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff812816cd>] ? __list_add+0x6d/0xa0
Feb 18 00:29:10 clust-sec1 kernel: [<ffffffffa072a4cc>] ? nfs_dq_prealloc_space+0x23c/0x2f0 [nfs]

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffffa07153aa>] ? nfs_file_write+0xba/0x210 [nfs]

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff8119489a>] ? do_sync_write+0xfa/0x140

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff81095be0>] ? autoremove_wake_function+0x0/0x40

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff8126ea51>] ? cpumask_any_but+0x31/0x50
Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff81194b78>] ? vfs_write+0xb8/0x1a0
 

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff81195591>] ? sys_write+0x51/0x90

Feb 18 00:29:10 clust-sec1 kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b

Feb 18 00:29:10 clust-sec1 kernel: ---[ end trace 9389904522a4dd42 ]---
 

Feb 18 00:44:20 clust-sec1 kernel: ------------[ cut here ]------------
 

Feb 18 00:44:20 clust-sec1 kernel: WARNING: at lib/list_debug.c:26 __list_add+0x6d/0xa0() (Tainted: G        W  ---------------  )
 
Feb 18 00:44:20 clust-sec1 kernel: Hardware name: IBM System x3650 -[0007979]-

Feb 18 00:44:20 clust-sec1 kernel: list_add corruption. next->prev should be prev (ffff88023b48fc38), but was 00000000034b5aa0. (next=ffff880095308ae0).
 

Feb 18 00:44:20 clust-sec1 kernel: Modules linked in: vzethdev vznetdev simfs vzrst vzcpt nfs lockd fscache nfs_acl auth_rpcgss sunrpc vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_owner xt_mac ipt_REDIRECT nf_nat_irc nf_nat_ftp iptable_nat nf_nat xt_helper xt_state xt_conntrack nf_conntrack_irc nf_conntrack_ftp nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 xt_length ipt_LOG xt_hl xt_tcpmss vhost_net xt_TCPMSS macvtap ipt_REJECT xt_DSCP xt_dscp xt_multiport macvlan tun xt_limit kvm_intel iptable_mangle kvm iptable_filter ip_tables dlm configfs vzevent ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi fuse scsi_transport_iscsi radeon ttm drm_kms_helper drm snd_pcsp ibmpex i2c_algo_bit i5000_edac snd_pcm ibmaem snd_timer ipmi_msghandler snd soundcore ics932s401 edac_core i2c_i801 ioatdma tpm_tis i5k_amb i2c_core snd_page_alloc tpm tpm_bios dca serio_raw shpchp ext4 mbcache jbd2 sg ses enclosure ata_generic pata_acpi ata_piix bnx2 aacraid [las
 
 

Feb 18 00:44:20 clust-sec1 kernel: t unloaded: configfs]

Feb 18 00:44:20 clust-sec1 kernel: Pid: 870426, comm: clamscan veid: 101 Tainted: G        W  ---------------    2.6.32-14-pve #1

Feb 18 00:44:20 clust-sec1 kernel: Call Trace:
Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
 
Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff812816cd>] ? __list_add+0x6d/0xa0

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffffa072a4cc>] ? nfs_dq_prealloc_space+0x23c/0x2f0 [nfs]

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffffa07153aa>] ? nfs_file_write+0xba/0x210 [nfs]

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff8119489a>] ? do_sync_write+0xfa/0x140
Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff81095be0>] ? autoremove_wake_function+0x0/0x40

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff8126ea51>] ? cpumask_any_but+0x31/0x50

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff81194b78>] ? vfs_write+0xb8/0x1a0
Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff81195591>] ? sys_write+0x51/0x90
 

Feb 18 00:44:20 clust-sec1 kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
Feb 18 00:44:20 clust-sec1 kernel: ---[ end trace 9389904522a4dd43 ]---

Feb 18 00:44:29 clust-sec1 kernel: ------------[ cut here ]------------

Feb 18 00:44:29 clust-sec1 kernel: WARNING: at lib/list_debug.c:26 __list_add+0x6d/0xa0() (Tainted: G        W  ---------------  )

Feb 18 00:44:29 clust-sec1 kernel: Hardware name: IBM System x3650 -[0007979]-
 

Feb 18 00:44:29 clust-sec1 kernel: list_add corruption. next->prev should be prev (ffff88023b48fc38), but was 00007f5fd6f5cc60. (next=ffff880095308ae0).
 
 

Feb 18 00:44:29 clust-sec1 kernel: Modules linked in: vzethdev vznetdev simfs vzrst vzcpt nfs lockd fscache nfs_acl auth_rpcgss sunrpc vzdquota vzmon vzdev ip6t_REJECT ip6table_mangle ip6table_filter ip6_tables xt_owner xt_mac ipt_REDIRECT nf_nat_irc nf_nat_ftp iptable_nat nf_nat xt_helper xt_state xt_conntrack nf_conntrack_irc nf_conntrack_ftp nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 xt_length ipt_LOG xt_hl xt_tcpmss vhost_net xt_TCPMSS macvtap ipt_REJECT xt_DSCP xt_dscp xt_multiport macvlan tun xt_limit kvm_intel iptable_mangle kvm iptable_filter ip_tables dlm configfs vzevent ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi fuse scsi_transport_iscsi radeon ttm drm_kms_helper drm snd_pcsp ibmpex i2c_algo_bit i5000_edac snd_pcm ibmaem snd_timer ipmi_msghandler snd soundcore ics932s401 edac_core i2c_i801 ioatdma tpm_tis i5k_amb i2c_core snd_page_alloc tpm tpm_bios dca serio_raw shpchp ext4 mbcache jbd2 sg ses enclosure ata_generic pata_acpi ata_piix bnx2 aacraid [las
 
Feb 18 00:44:29 clust-sec1 kernel: t unloaded: configfs]
Feb 18 00:44:29 clust-sec1 kernel: Pid: 870426, comm: clamscan veid: 101 Tainted: G        W  ---------------    2.6.32-14-pve #1
Feb 18 00:44:29 clust-sec1 kernel: Call Trace:
 

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff8106c608>] ? warn_slowpath_common+0x88/0xc0
 

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff8106c6f6>] ? warn_slowpath_fmt+0x46/0x50

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff812816cd>] ? __list_add+0x6d/0xa0

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffffa072a4cc>] ? nfs_dq_prealloc_space+0x23c/0x2f0 [nfs]

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffffa07153aa>] ? nfs_file_write+0xba/0x210 [nfs]
 
Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff8119489a>] ? do_sync_write+0xfa/0x140

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff81095be0>] ? autoremove_wake_function+0x0/0x40
 

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff8126ea51>] ? cpumask_any_but+0x31/0x50

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff81194b78>] ? vfs_write+0xb8/0x1a0
 

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff81195591>] ? sys_write+0x51/0x90
 

Feb 18 00:44:29 clust-sec1 kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
 
Feb 18 00:44:29 clust-sec1 kernel: ---[ end trace 9389904522a4dd44 ]---

Feb 18 00:44:49 clust-sec1 kernel: CT: 105: stopped
 

Feb 18 00:44:51 clust-sec1 kernel: CT: 105: started

Feb 18 01:38:00 clust-sec1 kernel: CPT ERR: ffff88003779b000,102 :foreign process 24716/953295(bash) inside CT (e.g. vzctl enter or vzctl exec).
 

Feb 18 01:38:00 clust-sec1 kernel: CPT ERR: ffff88003779b000,102 :suspend is impossible now.

Feb 18 03:12:22 clust-sec1 kernel: Holy Crap 1 0 910222,551(apache2)

 
---- This portion of the log is after our system was restarted -------
 
Feb 18 17:36:37 clust-sec1 kernel: imklog 4.6.4, log source = /proc/kmsg started.

Feb 18 17:36:37 clust-sec1 kernel: Initializing cgroup subsys cpuset
Feb 18 17:36:37 clust-sec1 kernel: Initializing cgroup subsys cpu

Feb 18 17:36:37 clust-sec1 kernel: Linux version 2.6.32-14-pve (root@maui) (gcc version 4.4.5 (Debian 4.4.5-8) ) #1 SMP Tue Aug 21 08:24:37 CEST 2012

Feb 18 17:36:37 clust-sec1 kernel: Command line: BOOT_IMAGE=/vmlinuz-2.6.32-14-pve root=/dev/mapper/pve-root ro quiet
 
Feb 18 17:36:37 clust-sec1 kernel: KERNEL supported cpus:
 

Feb 18 17:36:37 clust-sec1 kernel:  Intel GenuineIntel
Feb 18 17:36:37 clust-sec1 kernel:  AMD AuthenticAMD
 

Feb 18 17:36:37 clust-sec1 kernel:  Centaur CentaurHauls

Feb 18 17:36:37 clust-sec1 kernel: BIOS-provided physical RAM map:

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 0000000000000000 - 000000000009ac00 (usable)

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 000000000009ac00 - 00000000000a0000 (reserved)
 

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 0000000000100000 - 00000000bffc74c0 (usable)
 

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 00000000bffc74c0 - 00000000bffceac0 (ACPI data)
 

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 00000000bffceac0 - 00000000c0000000 (reserved)

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 00000000e0000000 - 00000000f0000000 (reserved)

Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 00000000fec00000 - 0000000100000000 (reserved)
Feb 18 17:36:37 clust-sec1 kernel: BIOS-e820: 0000000100000000 - 0000000240000000 (usable)

Feb 18 17:36:37 clust-sec1 kernel: DMI 2.4 present.
.
.
.
.
.
.
.
.
.
--- Log truncated ----


qemu-kvm broken network adapter in Windows 2008

Hello.

We are having problems with the Realtek network adapter under qemu-kvm on a Windows 2008 build. The network adapter appears correctly and had been working correctly for over 6 months, but when we came in today it had simply stopped working.

The adapter shows up correctly and we have confirmed all the network settings, but it comes up as connected with local area network access only.

We have rebooted the system and checked that we are running the latest release of qemu-kvm. Has anyone else had this issue, and is there a fix? This is the second system this has happened on; we didn't need the first one, so we just removed it, but this one is important.
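A hedged troubleshooting sketch (the VMID 101 and bridge vmbr0 are placeholders): swapping the emulated NIC model, e.g. from rtl8139 to e1000, is a quick way to tell whether the guest-side Realtek driver or the host side is at fault.

Code:

  qm config 101 | grep net0                               # note the current model, MAC and bridge
  qm set 101 -net0 e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0   # re-use the VM's existing MAC here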

Thanks,

Paul Hughes
Senior Manager
http://www.ukhost4u.com/

QEMU 1.4, Ceph RBD support (pvetest)

We just moved again a bunch of packages to our pvetest repository (on the road to Proxmox VE 2.3), including latest stable QEMU 1.4 and the GUI support for storing KVM VM disks on a Ceph RADOS Block Device (RBD) storage system.

Due to the new Backup and Restore implementation, KVM live backups of running virtual machines on Ceph RBD are no longer a problem - a quite unique feature and a big step forward.

Other small improvements and changes

  • qcow2 as default storage format, cache=none (previously raw)
  • KVM64 as default CPU type (previously qemu64)
  • e1000 as default NIC (previously rtl8139)
  • added omping to repo (for testing multicast between nodes)
  • task history per VM
  • enable/disable tablet for VM on GUI without stop/start of VM (you can use vmmouse instead, for lower CPU usage, works on modern Linux and on all Windows VMs as long as you install the vmmouse drivers)
  • Node Summary: added "KSM sharing" and "CPU Socket count"

Everybody is encouraged to test and give feedback!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader

BIOS virtualization is enabled, but could not access KVM kernel module

Hello.
I can run KVM without hardware acceleration and it works. But when I turn the hardware KVM support option on, I get this error:

Quote:

Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
No accelerator found!
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name 32bit -smp 'sockets=1,cores=2' -cpu kvm32 -nodefaults -boot 'menu=on' -vga cirrus -k en-us -m 512 -cpuunits 1000 -usbdevice tablet -drive 'file=/var/lib/vz/template/iso/Win2003sp2R2.Enterprise.VL.Ru.2CDin1.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=none,id=drive-ide0,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge' -device 'rtl8139,mac=8E:F7:04:83:18:47,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime'' failed: exit code 1
My processor supports virtualization (it's an Intel Xeon E3-1230). I have read the forum to solve the problem, but every thread stops at "turn on virtualization in the BIOS".
Virtualization is already enabled in my BIOS, and the system has been restarted since.
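For anyone wanting to dig further, a hedged diagnostic sketch (standard commands, nothing Proxmox-specific assumed):

Code:

  egrep -c '(vmx|svm)' /proc/cpuinfo   # > 0 means the CPU exposes VT-x/AMD-V to the kernel
  lsmod | grep kvm                     # is kvm_intel actually loaded?
  modprobe kvm_intel                   # try loading it by hand and watch for an error
  dmesg | grep -i kvm                  # "kvm: disabled by bios" points back at the BIOS setting
  ls -l /dev/kvm                       # the device node /usr/bin/kvm needs to open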
Please, can someone help me? Thanks

Restore and storage issues

Hi,

I just found out about some strange behaviour in Proxmox VE 2.2.

When restoring a VM with qmrestore, the disk of the restored VM is always 4M larger than the original disk. Is this intended?
Furthermore, the disk options (e.g. writeback) and the disk size are missing from the config of the restored VM. Is this going to be fixed?

Corosync Totem Re-transmission Issues

Hey all. I am still battling Corosync retransmission issues. Two of my three clusters seem to be having the issue, and as you can see below, the two clusters having it are on the latest version, while the one cluster without it is on a previous version. I am at a loss as to what the cause could be. From some searching, it seems this typically happens when one node is underperforming the other; I don't feel this is the case here, as cluster #1 never had this issue until I just upgraded it.
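For reference, since retransmit lists usually point at flaky multicast rather than raw node speed, a sustained multicast test between the nodes can confirm or rule that out; a hedged sketch using omping (node names are placeholders, run the same command on all nodes at the same time):

Code:

  omping -c 600 -i 1 -q node1 node2    # ten minutes of multicast/unicast probes between the nodes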

Cluster #1 (Having issue)
IBM x3550 M3's
Broadcom 10GB backend

Quote:

root@proxmox1:/var/log/cluster# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
Cluster #2 (Having issue)
IBM x3650 M4's
Broadcom 10GB backend

Quote:

root@medprox1:/var/log/cluster# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
Cluster #3 (Not having issues)
IBM x3650 M4's
Broadcom 10GB backend

Quote:

root@fiosprox1:~# pveversion -v
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1

Here is my cluster.conf, which is the same on each node apart from the IPs.

Quote:

<?xml version="1.0"?>
<cluster config_version="27" name="fiosprox">
<cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="3" label="fiosprox_qdisk" master_wins="1" tko="10"/>
<totem token="54000"/>
<fencedevices>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.129" lanplus="1" login="USERID" name="ipmi1" passwd="PASSW0RD" power_wait="5"/>
<fencedevice agent="fence_ipmilan" ipaddr="10.80.12.132" lanplus="1" login="USERID" name="ipmi2" passwd="PASSW0RD" power_wait="5"/>
</fencedevices>
<clusternodes>
<clusternode name="fiosprox1" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="ipmi1"/>
</method>
</fence>
</clusternode>
<clusternode name="fiosprox2" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="ipmi2"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<pvevm autostart="1" vmid="102"/>
<pvevm autostart="1" vmid="104"/>
<pvevm autostart="1" vmid="100"/>
<pvevm autostart="1" vmid="101"/>
</rm>
</cluster>

WEB GUI password does not work

Around 85-90% of the time, the Proxmox web interface does not accept the password.

I have tried to restart pvedaemon, but I get the following error:

root@proxmox2:~# /etc/init.d/pvedaemon restart
Restarting PVE Daemon: pvedaemonunable to create socket - PVE::APIDaemon: Cannot assign requested address
(warning).

syslog output:

Feb 25 14:00:09 proxmox2 pvedaemon[70397]: unable to start server: unable to create socket - PVE::APIDaemon: Cannot assign requested address



What are other ways to troubleshoot?
I can start and stop the service, but I keep getting this error.
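A hedged diagnostic sketch: "Cannot assign requested address" usually means the daemon is trying to bind to an IP the host does not actually have, and a stale /etc/hosts entry for the node's hostname is a commonly reported cause, so it may be worth comparing the two:

Code:

  hostname
  getent hosts "$(hostname)"            # does the hostname resolve to an address this box really has?
  ip addr show | grep "inet "           # the addresses actually configured on the host
  netstat -tlnp | grep -E ':85|:8006'   # what is already listening on the PVE ports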

about ceph:unexpected property 'authsupported'

Hi, in the backup log of a VM I see this message about an unexpected 'authsupported' property:
......
INFO: status = stopped
INFO: file /etc/pve/storage.cfg line 10 (section 'ceph') - unable to parse value of 'authsupported': unexpected property 'authsupported'
INFO: backup mode: stop
INFO: ionice priority: 7
......

# pvesm list ceph
file /etc/pve/storage.cfg line 36 (section 'ceph') - unable to parse value of 'authsupported': unexpected property 'authsupported'
ceph:vm-105-disk-1 raw 17179869184 105
ceph:vm-105-disk-2 raw 68719476736 105

storage.cfg section:
rbd: ceph
monhost 172.16.100.101:6789;172.16.100.102:6789;172.16.100.103:6789
pool rbd
username admin
authsupported cephx;none
content images

When I add RBD storage via the GUI, no authsupported option is written to storage.cfg.
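A hedged guess: newer libpve-storage-perl apparently dropped the 'authsupported' property, so removing that line from the hand-written section in /etc/pve/storage.cfg should silence the warning, leaving something like:

Code:

  rbd: ceph
        monhost 172.16.100.101:6789;172.16.100.102:6789;172.16.100.103:6789
        pool rbd
        username admin
        content images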

# pveversion -v
pve-manager: 2.3-10 (pve-manager/2.3/499c7b4d)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-16-verycloud: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-13
pve-firmware: 1.0-21
libpve-common-perl: 1.0-48
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-3
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-6
ksm-control-daemon: 1.1-1

Can this be done?

I'd like to have a Proxmox server with both a physical WiFi and an Ethernet connection. I want it to connect to the internet through the WiFi, and to use a VM as a router that routes traffic from the WiFi (internet) to the Ethernet (LAN). Then I'd like all other VMs to work off the Ethernet connection. I'd also like to limit logins to Proxmox to the Ethernet side only.

Can this be done?


I am also wondering whether, to simplify this, I could just keep a normal Ethernet interface for the private side, used for the VMs and for logging into Proxmox, and then use PCIe passthrough to hand the WiFi card to the router VM.

Can someone explain how to do this?

Windows 2008 R2 can no longer install from DVD?

I just ran into this problem on a server at our branch:
http://forum.proxmox.com/threads/556...Server-2008-R2

Weird thing though: I'm using the same original DVD from which I installed another VM just a few months ago on the very same system...

Trying to generate an ISO file with dd results in a 1.1GiB file instead of ~3GiB. This is reproducible with another DVD I have here, so I think the DVDs are made as multi-session and dd stops after the first track.
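A hedged sketch of a workaround to try on site (the device path /dev/sr0 is an assumption): reading the image size out of the ISO9660 volume descriptor and handing it to dd sometimes gets around dd stopping early.

Code:

  BLOCKS=$(isosize -d 2048 /dev/sr0)    # number of 2048-byte sectors per the volume descriptor
  dd if=/dev/sr0 of=/var/lib/vz/template/iso/win2008r2.iso bs=2048 count="$BLOCKS" conv=noerror,sync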

The internet connection at this server is utterly slow - a download from MS has an ETA of more than 7 hours.
I won't be at the server site until next week, so downloading the ISO here onto a USB stick is not an option.


Code:

# pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1

should each VM have its own DRBD or one DRBD for all VMs?

Hi,

I've heard that it is much safer to set up a separate DRBD resource for each single VM, so that each VM has its own LV with DRBD. Is this really a safer setup? It is much more complicated and more work/configuration has to be done :( But when something bad happens to the cluster (2-node cluster, HA), would this kind of setup lead to a higher probability that a split-brain only hits some of the DRBD instances, so that at least some of the VMs survive without any problem?

If there were just one DRBD instance with many LVs holding VMs, a split-brain would affect all of them.

Thank you
Pep.

Arrival of Proxmox 2.3?

Hi!

I need to build a new cluster with Proxmox 2. My old one is 1.9 and uses iSCSI with LVM.

The new one should use NFS instead because it's easier to handle. I read about the new backup features in Proxmox 2.3 and installed the packages from pvetest.
Now I am able to back up in snapshot mode to an NFS share. That is REALLY GREAT!

Can you please tell me whether I would run into trouble if I used the pvetest version in production?

When will proxmox 2.3 be released?

Add network interface

How can I add a VM bridge on an 802.1q VLAN interface such as eth0.123 via the web interface, without rebooting the server?
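For reference, the 2.x GUI only applies network changes after a reboot, but the same bridge can be brought up by hand on the CLI without rebooting; a hedged sketch (eth0.123 and vmbr123 are examples):

Code:

  ip link add link eth0 name eth0.123 type vlan id 123   # create the tagged sub-interface
  ip link set eth0.123 up
  brctl addbr vmbr123                                    # new bridge for VMs on VLAN 123
  brctl addif vmbr123 eth0.123
  ip link set vmbr123 up
  # then mirror the same stanza in /etc/network/interfaces so it survives the next reboot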

PVE 2.3test - Unresolved issues

Hello there.
I still have some problems with the latest pvetest repository:

- I am using a locally mounted shared directory as an ISO repository: if I attach an ISO file from it to a VM, the VM doesn't start because it can't access the file. There is no problem if I use local storage.

- I can't live-migrate VMs with SATA disk drives (on both Ceph and locally mounted shared directories); I have read that this is a QEMU 1.4 problem, but what do we do now with already created VMs?

Thanks, Fabrizio