Channel: Proxmox Support Forum

Problem creating a 2-node cluster with Proxmox VE 2.3

Hello,

I'm trying to create a 2-node cluster on Proxmox VE 2.3, using unicast (my hosting company does not support multicast) on a dedicated network interface.

Here are the steps I've taken:

1. Fresh installation of Proxmox VE, then:

Code:

root@node1:~# aptitude update && aptitude full-upgrade -y
2. Configuration of /etc/hosts:

Code:

root@node1:~# cat /etc/hosts
127.0.0.1      localhost

88.190.xx.xx  sd-xxxxx.dedibox.fr node1ext
10.90.44.xx    node1 pvelocalhost
10.90.44.xx  node2
root@node1:~# hostname
node1



3. Creation of the cluster

Code:

root@node1:~# pvecm create dataexperience
Restarting pve cluster filesystem: pve-cluster[dcdb] notice: wrote new cluster config '/etc/cluster/cluster.conf'
.
Starting cluster:
  Checking if cluster has been disabled at boot... [  OK  ]
  Checking Network Manager... [  OK  ]
  Global setup... [  OK  ]
  Loading kernel modules... [  OK  ]
  Mounting configfs... [  OK  ]
  Starting cman... [  OK  ]
  Waiting for quorum... [  OK  ]
  Starting fenced... [  OK  ]
  Starting dlm_controld... [  OK  ]
  Tuning DLM kernel config... [  OK  ]
  Unfencing self... [  OK  ]

4. Modification of the cluster.conf

Code:

root@node1:~# cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
root@node1:~# vi /etc/pve/cluster.conf.new


Code:

<?xml version="1.0"?>
<cluster name="dataexperience" config_version="2">

  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu" two_node="1" expected_votes="1">
  </cman>

  <clusternodes>
  <clusternode name="node1" votes="1" nodeid="1"/>
  </clusternodes>

</cluster>

Code:

root@node1:~# ccs_config_validate -v -f /etc/pve/cluster.conf.new
Creating temporary file: /tmp/tmp.BopYEiuGdz
Config interface set to:
Configuration stored in temporary file
Updating relaxng schema
Validating..
Configuration validates
Validation completed

Then I activated the cluster configuration through the GUI.

To make sure the new cluster configuration was taken into account, I rebooted the master (a bit overkill), and after the reboot:

Code:

root@node1:~# pvecm status
cman_tool: Cannot open connection to cman, is it running ?

Here is what I've seen in the log:

Code:

May  9 11:39:51 node1 pmxcfs[1457]: [dcdb] crit: cpg_initialize failed: 6
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:55 node1 pmxcfs[1457]: [status] crit: cpg_send_message failed: 9
May  9 11:39:57 node1 pmxcfs[1457]: [quorum] crit: quorum_initialize failed: 6
May  9 11:39:57 node1 pmxcfs[1457]: [confdb] crit: confdb_initialize failed: 6
May  9 11:39:57 node1 pmxcfs[1457]: [dcdb] crit: cpg_initialize failed: 6
May  9 11:39:57 node1 pmxcfs[1457]: [dcdb] crit: cpg_initialize failed: 6

That's my third try, with exactly the same result.
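
Next time, instead of rebooting, I plan to bring the stack up by hand to see exactly where it fails (a rough sketch, assuming the standard PVE 2.x init scripts):

Code:

root@node1:~# service cman start           # start the cluster manager manually and watch for errors
root@node1:~# service pve-cluster restart  # restart pmxcfs once cman is up
root@node1:~# pvecm status                 # re-check membership/quorum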

3.0 RC1 (upgraded from a fresh 2.3 install) and updated, can't find 'Convert to template'

Hi, I've done a fresh (test) install of 2.3 and then upgraded to 3.0 with the provided script (the 3.0 RC1 ISO was not available at that time).
I've dist-upgraded, but if I right-click a (stopped) VM there is still no "Convert to template" item, just the usual ones (Start...).
What am I doing wrong?
Code:

root@proxmox:~# pveversion -v
pve-manager: 3.0-13 (pve-manager/3.0/7ce622f2)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-99
pve-kernel-2.6.32-20-pve: 2.6.32-99
pve-kernel-2.6.32-19-pve: 2.6.32-95
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-1
qemu-server: 3.0-8
pve-firmware: 1.0-22
libpve-common-perl: 3.0-2
libpve-access-control: 3.0-3
libpve-storage-perl: 3.0-3
vncterm: 1.1-2
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-11
ksm-control-daemon: 1.1-1

and a test VM's config:
Code:

root@proxmox:~# cat /etc/pve/local/qemu-server/500.conf
#Note on test machine
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 512
name: prova
net0: virtio=86:D2:D6:0F:E7:33,bridge=vmbr90
ostype: l26
sockets: 1
virtio0: local:500/vm-500-disk-1.qcow2,format=qcow2,size=32G


Also, with
# aptitude -f install
apache2 was installed... is that OK? The only additional repository I have is a MegaCli one with LSI monitoring utilities installed; I don't know whether that's the culprit (I doubt it) or not.
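
As a workaround, would converting from the command line work (assuming qm gained a 'template' subcommand in 3.0)?

Code:

root@proxmox:~# qm template 500    # convert the stopped test VM 500 to a template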

Thanks a lot

Using native InfiniBand in a Proxmox cluster

Currently we are using IPoIB in our Proxmox cluster. I was going through the corosync manuals and noticed that native InfiniBand is supported using the "transport=iba" directive.
Should this also work in a Proxmox HA cluster?
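
I assume the change would go into /etc/pve/cluster.conf roughly like this (untested sketch, guessing that cman passes the transport attribute straight through to corosync):

Code:

<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="iba">
</cman>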

Comparison: Proxmox UI vs OpenVZ Web Panel

I made a UI comparison that might be used by the Proxmox developers to expand its features:

Feature                                                 Proxmox   OWP
Support for KVM                                         Yes       No
Support for OpenVZ                                      Yes       Yes
Show IP address                                         No        Yes
Set an expiry date                                      No        Yes (very useful for hosting companies that sell VPS on a time basis)
"Remember me" on login                                  No        Yes (very useful when you manage many Proxmox servers from one dedicated IP)
Install a template from a URL                           No        Yes
Add a user-defined description to a VPS guest           No        Yes
Select owner/permissions when creating a VPS guest      No        Yes (in Proxmox you can only set permissions after the VPS is created)
Reinstall a VPS                                         No        Yes (absolutely useful for hosting companies that sell a large number of VPS)
CPU limit (Proxmox can only set CPU units)              No        Yes (helps you stop one client from eating all resources)
Edit VPS limits (Proxmox only shows them, read-only)    No        Yes (very useful for advanced users, rather than editing the config file manually)
Show RAM and disk usage in MB (Proxmox only shows %)    No        Yes (very useful if you oversell HDD and RAM)

Cannot delete ghost node

Hello,
We have 3 servers in a cluster. We had a problem adding a 4th server (it stayed in the "Waiting for quorum" state)... after rebooting all servers and disconnecting the new server we could access the cluster again.
The problem now is that we have a ghost node with the "Estranged" status... what does that mean, and how can we delete that node? (It does not exist in any cluster.conf.)
All nodes run the same, latest version (2.3/7946f1f1).

This is the summary screen: (the node that never joined the cluster was named proxmox2-5)

[Attached image: pm2-0.png]
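
Is removing it just a matter of something like this (a guess, I haven't tried it yet; proxmox2-5 is the ghost node)?

Code:

pvecm delnode proxmox2-5            # in case it is still in the member list
rm -rf /etc/pve/nodes/proxmox2-5    # remove the leftover node directory the GUI shows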

Any ideas will be appreciated...

Thanks in advance

vzdump failed - Device or resource busy

Hi!
I got this problem for the first time in the 2-3 weeks since I upgraded from 1.9 to 2.3.
The vzdump script could not back up ANY of the 15 running VMs, all with the same error.
Is there a way to, hm... fix this?

Thank you very much

proxmox:/dev/pve-san# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-95
pve-kernel-2.6.32-19-pve: 2.6.32-95
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1


vzdump 101 104 105 106 107 108 116 134 139 109 115 133 --quiet 1 --mailto xxx@xxxxxx.com --mode snapshot --storage vmxxxxx

101: May 09 00:00:02 INFO: Starting Backup of VM 101 (qemu)
101: May 09 00:00:02 INFO: status = running
101: May 09 00:00:02 INFO: unable to open file '/etc/pve/nodes/proxmox/qemu-server/101.conf.tmp.359369' - Device or resource busy
101: May 09 00:00:02 ERROR: Backup of VM 101 failed - command 'qm set 101 --lock backup' failed: exit code 2
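
The next time it happens I plan to check whether the node still had quorum, since I understand /etc/pve goes read-only without it (a rough sketch):

Code:

pvecm status                   # does the node still have quorum?
service pve-cluster restart    # restart pmxcfs if /etc/pve is wedged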


Adding additional Virtual disk to increase VM storage

Is it possible to add additional storage (qcow2, raw or vmdk) to a VM without restarting or shutting down the VM? If it is possible, how do I make the VM's OS see that a new HDD has been added?
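For example, is something like this supposed to work (just a guess at the hotplug option; VMID 500 is hypothetical)?

Code:

qm set 500 --hotplug 1           # enable hotplug for the guest
qm set 500 --virtio1 local:32    # add a new 32 GB virtio disk on storage 'local'
# then, inside a Linux guest, rescan the PCI bus so the new disk shows up:
echo 1 > /sys/bus/pci/rescan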
Thanks!

test WP failed, assume write enabled

Every 20 seconds these messages appear in /var/log/messages:

May 9 23:27:51 proxmox kernel: sd 9:0:0:2: reservation conflict
May 9 23:27:51 proxmox kernel: sd 9:0:0:2: [sdd] Test WP failed, assume Write Enabled

Any ideas what could cause this? We are running the Proxmox 2.3 kernel on Debian 6.0.7.
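
In case it helps, would checking for a SCSI persistent reservation held by another initiator make sense, e.g. (assuming sg3-utils is installed):

Code:

sg_persist --in --read-keys /dev/sdd          # list registered reservation keys
sg_persist --in --read-reservation /dev/sdd   # show the current reservation holder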

Unable to Fence using APC PDU

Hello, I am currently trying to get fencing to work with my 3-node cluster, but I am unable to get it working. From what I can tell, fence_apc is unable to contact the PDU, but I can ping and SSH into the PDU from each node without a problem (apart from a roughly 20 second wait for the password prompt). I have an APC Rack PDU, model APC7930, and I am using 2 HP DL360 G5 servers and 1 custom 2U server. All have the latest version of Proxmox, with all of the latest updates.

I have tried the test command, and here is what it gives me:

Code:

root@srv-1-02:~# fence_apc -x -l proxmox -p XXXX -a 10.1.7.3 -o status -n 1 -v
Unable to connect/login to fencing device
pveversion -v
Code:

pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-95
pve-kernel-2.6.32-19-pve: 2.6.32-95
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

/etc/pve/cluster.conf
Code:

<?xml version="1.0"?>
<cluster name="Cluster-1" config_version="8">


  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>


  <fencedevices>
    <fencedevice agent="fence_apc" ipaddr="10.1.7.3" login="proxmox" name="pdu-1-01" passwd="XXXX" power_wait="10"/>
  </fencedevices>


  <clusternodes>


    <clusternode name="srv-1-02" votes="1" nodeid="1">
      <fence>
        <method name="power">
          <device name="pdu-1-01" port="1" secure="on"/>
        </method>
      </fence>
    </clusternode>


    <clusternode name="srv-1-03" votes="1" nodeid="2">
      <fence>
        <method name="power">
          <device name="pdu-1-01" port="2" secure="on"/>
          <device name="pdu-1-01" port="3" secure="on"/>
        </method>
      </fence>
    </clusternode>


    <clusternode name="srv-1-04" votes="1" nodeid="3">
      <fence>
        <method name="power">
          <device name="pdu-1-01" port="4" secure="on"/>
          <device name="pdu-1-01" port="5" secure="on"/>
        </method>
      </fence>
    </clusternode>


  </clusternodes>


  <rm>
    <service autostart="1" exclusive="0" name="TestIP" recovery="relocate">
      <ip address="10.1.8.1"/>
    </service>
  </rm>


</cluster>
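
One thing I have not tried yet is the SNMP variant of the agent, to rule out the slow SSH login (a guess, assuming SNMP is enabled on the PDU):

Code:

root@srv-1-02:~# fence_apc_snmp -a 10.1.7.3 -n 1 -o status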

Any ideas? Thanks

Poor NIC performance in Proxmox 2.3

Howdy

Strange issue at Hetzner.

I have 3 servers with them. Two perform at 100M both ways (EQ4 and EQ8). This third and newest one (SB74) does 100M both ways when booted into the Debian-based Hetzner rescue system, but 100M in and only 30-35M out when booted into Proxmox 2.3 (installed from the ISO).

lspci (booted into Proxmox):
06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)



dmesg (booted into Proxmox):
r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
r8169 0000:06:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
r8169 0000:06:00.0: setting latency timer to 64
alloc irq_desc for 26 on node -1
alloc kstat_irqs on node -1
r8169 0000:06:00.0: irq 26 for MSI/MSI-X
r8169 0000:06:00.0: eth0: RTL8168c/8111c at 0xffffc9000306c000, 6c:62:6d:a0:77:76, XID 1c4000c0 IRQ 26
r8169 0000:06:00.0: eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko]



I have performed this test booted into rescue mode, and it seems to be 100M both ways:


root@rescue ~ # iperf -c open.da.org.za
------------------------------------------------------------
Client connecting to open.da.org.za, TCP port 5001
TCP window size: 21.9 KByte (default)
------------------------------------------------------------
[ 3] local 46.4.171.228 port 49948 connected with 78.46.63.11 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 114 MBytes 94.8 Mbits/sec


root@rescue ~ # iperf -s

------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 46.4.171.228 port 5001 connected with 78.46.63.11 port 58805
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.2 sec 114 MBytes 93.4 Mbits/sec




When booted into Proxmox, download seems to be 100M and upload 30-35M:


root@open-02:~# iperf -c open
------------------------------------------------------------
Client connecting to open, TCP port 5001
TCP window size: 23.8 KByte (default)
------------------------------------------------------------
[ 3] local 46.4.171.228 port 42206 connected with 78.46.63.11 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 110 MBytes 92.2 Mbits/sec


root@open-02:~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 46.4.171.228 port 5001 connected with 78.46.63.11 port 58817
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 42.1 MBytes 35.3 Mbits/sec
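
Is it worth re-testing with the r8169 offloads disabled, something like this (just a guess)?

Code:

root@open-02:~# ethtool -k eth0                           # show current offload settings
root@open-02:~# ethtool -K eth0 tso off gso off gro off   # temporarily disable offloads, then rerun iperf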



Any advice?

Proxmox and spice

Hello.
I am using Proxmox version 3.0. Is there a guide for integrating SPICE with Proxmox?
Thank you.

iSCSI errors on storage server

Hi

I have a Debian Wheezy based iSCSI storage server running the LIO iSCSI target, configured with targetcli.
I also have two Proxmox VE 2.3 nodes in a non-HA cluster configuration. The nodes are up to date.

When I add the iSCSI target to the cluster I get a bunch of rx_data iSCSI errors (see below).

The target works fine (LVM on top), but those errors on the storage server are annoying and produce a lot of noise in syslog.

I have traced the problem to /usr/bin/pvestatd and /usr/share/perl5/PVE/Storage/ISCSIPlugin.pm, specifically the iscsi_test_portal function, which connects to the iSCSI portal and then disconnects, while the LIO target expects a full ISCSI_HDR_LEN (see drivers/target/iscsi/iscsi_target_login.c in the kernel tree).

A quick fix is to return 1; from iscsi_test_portal instead of return $p->ping($server);, but I am not sure whether any other functionality would suffer if we assume the target is always reachable.
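
In other words, roughly this change (just illustrating the workaround described above, not a proper patch):

Code:

--- ISCSIPlugin.pm (original iscsi_test_portal)
+++ ISCSIPlugin.pm (workaround)
-    return $p->ping($server);
+    return 1;   # assume the portal is always reachable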

Igor


/var/log/syslog on the storage server:
May 10 11:22:21 storage kernel: [609526.746899] rx_data() returned an error.
May 10 11:22:21 storage kernel: [609526.746995] iSCSI Login negotiation failed.
May 10 11:22:31 storage kernel: [609535.900565] rx_data() returned an error.
May 10 11:22:31 storage kernel: [609535.900591] iSCSI Login negotiation failed.
May 10 11:22:41 storage kernel: [609546.016885] rx_data() returned an error.
May 10 11:22:41 storage kernel: [609546.016912] iSCSI Login negotiation failed.

[SOLVED] How to Remove a Proxmox Node

Quote:

Remove a cluster node

Move all virtual machines out of the node, just use the Central Web-based Management 2.0 to migrate or delete all VMs. Make sure you have no local backups you want to keep, or save them accordingly.
Log in to one remaining node via ssh. Issue a pvecm nodes command to identify the nodeID:
pvecm nodes

Node Sts Inc Joined Name
1 M 156 2011-09-05 10:39:09 hp1
2 M 156 2011-09-05 10:39:09 hp2
3 M 168 2011-09-05 11:24:12 hp4
4 M 160 2011-09-05 10:40:27 hp3

Issue the delete command (here deleting node hp2):

pvecm delnode hp2

If the operation succeeds no output is returned, just check the node list again with 'pvecm nodes' (or just 'pvecm n').
Quote:

ATTENTION: you need to power off the removed node, and make sure that it will not power on again.
Does the node need to be powered on to remove it, or powered off first?

OpenVZ container backup using snapshot method always has errors

When trying to back up an OpenVZ container using the SNAPSHOT method, I always encounter a bunch of errors (see the log below).
Using STOP mode, however, works great.
I also read somewhere that snapshot mode only works with LVM2-type storage?
All our containers are on 'directory'-type storage.

I just want to make sure that this is the problem before I go and change many nodes to LVM2.

Can somebody help confirm this?

Thank you!

Code:

INFO: starting new backup job: vzdump 110 --remove 0 --mode snapshot --compress gzip --storage nasa1 --node a4
INFO: Starting Backup of VM 110 (openvz)
INFO: CTID 110 exist mounted running
INFO: status = running
INFO: mode failure - unable to detect lvm volume group
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /mnt/md0/private/110/ to /mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/." failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin/bzcmp" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin/bzegrep" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin/bzfgrep" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin/bzless" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin/csh" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/bin/lessfile" failed: Operation not permitted (1)
.
.
.
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/var/spool/postfix/usr/lib/zoneinfo/localtime" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/var/spool/samba" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/var/tmp" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/var/www" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/var/spool/postfix/pid/unix.smtp" failed: Operation not permitted (1)
INFO: rsync: chown "/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp/var/www/index.html" failed: Operation not permitted (1)
INFO: Number of files: 60739
INFO: Number of files transferred: 48930
INFO: Total file size: 5548046379 bytes
INFO: Total transferred file size: 5546184870 bytes
INFO: Literal data: 5546212438 bytes
INFO: Matched data: 0 bytes
INFO: File list size: 1516722
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 5550507736
INFO: Total bytes received: 969063
INFO: sent 5550507736 bytes  received 969063 bytes  1838541.74 bytes/sec

ERROR: Backup of VM 110 failed - command 'rsync --stats -x --numeric-ids -aH --delete --no-whole-file --inplace '/mnt/md0/private/110/' '/mnt/pve/nasa1/dump/vzdump-openvz-110-2013_05_09-19_33_50.tmp'' failed: exit code 23
INFO: Backup job finished with errors
TASK ERROR: job errors
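
For reference, this is what I'd run to check what the container area actually sits on (assuming the standard LVM tools are installed):

Code:

df -h /mnt/md0/private/110    # what filesystem/device backs the container's private area?
vgs                           # list volume groups and free space vzdump could use for a snapshot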

Error when installing Proxmox

I have tried to install Proxmox a few different ways. First I tried to install it from a disc, but I got the error "Pve-kernel-2.6.32-18-pve_2.6.32-88_amd64.deb Failed". Then I tried another disc and got the same error. Lastly I tried a flash drive and got the same error again. I'm new to Proxmox, so some help would be very much appreciated.
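
One thing I still need to rule out is a corrupted download, since the same package fails from every medium (the ISO filename here is just an example):

Code:

md5sum proxmox-ve_2.3-*.iso    # compare against the checksum published on the download page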

Run Script on Host After VM Start

Hi All,

I'm running OpenMediaVault inside a VM on Proxmox 2.3. I passed my PCI-e SAS card to OMV and it's working perfectly, except I get an option ROM error when SeaBIOS starts. If I "press any key to continue..." everything seems to work fine. I've looked into resolving the SeaBIOS error but haven't had much luck.

I need this VM to boot automatically because it is the iSCSI host for all the other VMs. I'm looking for a way to do a "qm sendkey" about 20 seconds after the VM starts. Is there a script on the host that executes when a VM starts? If so, I can use it to launch my own sendkey script. Since I'll be sending a benign keystroke, it's OK if the script runs each time any VM starts (not just OMV).
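
Something like this is what I have in mind, if there is a good place to hook it in (a rough sketch; the VMID 100 and the 'ret' key name are guesses):

Code:

#!/bin/bash
# wait until the VM is running, give SeaBIOS time to reach the prompt, then press Enter
VMID=100
while ! qm status $VMID | grep -q running; do sleep 5; done
sleep 20
qm sendkey $VMID ret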

Thanks for any suggestions you can provide!

-Spencer

Upgrade from 2.3 to 3.0

I can still SSH in, but I have no web access. Please help.
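
The only things I can think to check are the services below (guessing that the 3.0 GUI is still served by apache2, and that the web UI port is still 8006):

Code:

root@vmserver:~# service apache2 restart
root@vmserver:~# service pvedaemon restart
root@vmserver:~# netstat -tlnp | grep 8006    # is anything listening on the web UI port?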

root@vmserver:~# pveversion -v
pve-manager: 3.0-13 (pve-manager/3.0/7ce622f2)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-99
pve-kernel-2.6.32-20-pve: 2.6.32-99
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-19-pve: 2.6.32-95
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-1
qemu-server: 3.0-8
pve-firmware: 1.0-22
libpve-common-perl: 3.0-2
libpve-access-control: 3.0-3
libpve-storage-perl: 3.0-3
vncterm: 1.1-2
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-11
ksm-control-daemon: 1.1-1

PVE 2.3 - 2.6.32.19 kernel panics

I was running 2.6.32-12 and upgraded to 2.6.32-19; since then I've had four kernel panics/hard crashes in the last 4 days. :(

I can't find anywhere in /var/log where those are dumped, so I'm at the mercy of my memory: the crash was caused by an individual OpenVZ container, with lots of complaints about XFS writes. (Multiple containers read/write to/from an xxTB array formatted with XFS.)

I've rebooted under 2.6.32-16; we'll see if I get crashes there. So, a couple of questions:

1. Where can I find better logs written to disk, if they exist, to help address this issue?
2. What can I set now, if anything, to generate better logs if it crashes again? (For example, would something like the netconsole sketch below make sense?)
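
The only idea I have so far for question 2 is netconsole, to stream kernel messages to another box before a crash (a rough sketch; all addresses are placeholders):

Code:

# on the crashing node: send kernel messages over UDP to 10.0.0.10
modprobe netconsole netconsole=6665@10.0.0.4/eth0,6666@10.0.0.10/aa:bb:cc:dd:ee:ff
# on the receiving machine: listen on UDP port 6666
nc -u -l -p 6666    # or the equivalent option syntax for your netcat variant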


Code:

root@pmx4:/var/log# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.3-95
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-80
pve-kernel-2.6.32-12-pve: 2.6.32-68
pve-kernel-2.6.32-19-pve: 2.6.32-95
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1

Question About Proxmox Mail Gateway

Hi,

I'm a newbie here. I have a question: is it possible for Proxmox Mail Gateway to use a previously installed mail server?
I already have a dedicated mail server running CentOS 6.4 with the latest Postfix.

Looking at the documentation, it seems the installer wipes the whole hard drive and then installs Proxmox Mail Gateway.
Is it possible to use Proxmox Mail Gateway without losing all the data on the hard drive?

Thanks

vz migration - do not remove from source node

Hi,

vzmigrate has a --remove-area option that can keep the container data on the source node. Is there any way to achieve this with the Proxmox web interface? I have a Proxmox cluster without shared storage, and I'd like to keep the container files even on a node where the container is not active (to make further migrations between nodes quicker).
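
On the command line I assume it would be something like this (the destination address and CTID are just examples):

Code:

vzmigrate --remove-area no 10.0.0.2 101    # migrate CT 101 but leave its files on the source node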