Channel: Proxmox Support Forum

List of VM's with cron

Has anyone created a cron job that lists all VMs daily?
Some script that shows something similar to what Proxmox's Datacenter "Search" page does?
(Unfortunately even that won't show you which storage device a VM is on.)

I'm wondering what to put in that script.
Its purpose would be to help with restoring virtual machines from backup if the host(s) die.
The problem is that when a host dies there is no record of what was where, so the restoration can be somewhat painful.

What do you guys use to track this?
I'm thinking of something like this...

Quote:

cat /etc/pve/openvz/* |grep VE_PRIVATE=
VE_PRIVATE="/media/local3/private/102"
VE_PRIVATE="/media/local3/private/103"
VE_PRIVATE="/var/lib/vz/private/116"
VE_PRIVATE="/media/local4/private/121"
VE_PRIVATE="/var/lib/vz/private/140"
VE_PRIVATE="/media/local3/private/145"
VE_PRIVATE="/media/local4/private/190"
VE_PRIVATE="/media/local4/private/194"
VE_PRIVATE="/media/local4/private/195"
VE_PRIVATE="/media/local2/private/400"
VE_PRIVATE="/media/local2/private/401"
VE_PRIVATE="/media/local2/private/404"
VE_PRIVATE="/media/local3/private/406"
VE_PRIVATE="/var/lib/vz/private/411"
VE_PRIVATE="/media/local3/private/444"
VE_PRIVATE="/media/local3/private/445"

cat /etc/pve/qemu-server/* |grep ide
bootdisk: ide0
ide0: local4:172/vm-172-disk-1.qcow2,format=qcow2,size=10G
ide2: none,media=cdrom
bootdisk: ide0
ide0: local3:402/vm-402-disk-1.qcow2,format=qcow2,size=32G
ide2: none,media=cdrom

Then pipe those to a log file somewhere.
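A minimal sketch of what such a daily cron job could look like, assuming the config layouts shown above and a hypothetical log file /var/log/vm-inventory.log:

Code:

#!/bin/sh
# /etc/cron.daily/vm-inventory (hypothetical name) -- record which guest lives on which storage
LOG=/var/log/vm-inventory.log
{
    echo "=== $(hostname) $(date) ==="
    # OpenVZ containers: the private area reveals the storage path
    grep -H 'VE_PRIVATE=' /etc/pve/openvz/*.conf 2>/dev/null
    # KVM guests: the disk lines reveal the storage ID and image file
    grep -H -E '^(ide|virtio|scsi|sata)[0-9]+:' /etc/pve/qemu-server/*.conf 2>/dev/null
} >> "$LOG"

Since the point is to survive a dead host, it would also make sense to ship that log somewhere off the host (mail, rsync, central syslog).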

Comments?


Proxmox Performance issue on SATA Storage

Hi,
I have purchased Proxmox VE 3.4 and installed it on one host with 2 SATA storage devices.
I have configured the 2 SATA storages as 2 separate directories, formatted with the ext4 filesystem.
I have set up a number of KVM-based VMs on both storages.
Inside the VMs that are on storage2 I am getting poor write performance, around 10 MB/s max. Because of this, %wa inside those VMs gets high and the load average increases accordingly.

Please note that when I tested the write speed of both storages directly on the Proxmox host, I got equal results.
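For comparison, a rough dd-based write test with fdatasync can be run both on the host (against each storage directory) and inside a VM; the paths and the 1 GiB size below are placeholders:

Code:

# On the host, against each storage directory:
dd if=/dev/zero of=/path/to/storage2/ddtest bs=1M count=1024 conv=fdatasync
# Inside a VM that lives on storage2:
dd if=/dev/zero of=/root/ddtest bs=1M count=1024 conv=fdatasync
# Clean up the test files afterwards:
rm -f /path/to/storage2/ddtest /root/ddtest

It may also be worth comparing the disk cache setting (e.g. cache=none vs. writeback) of the VMs on storage1 and storage2, since that alone can explain large differences in write throughput inside the guest.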

Any pointers for troubleshooting this issue?


Regards
Neelesh Gurjar

How to properly separate two nodes from a cluster ?

Hi proxmox forum,

I have always found good help here, so I am asking again :)

I have a cluster with two nodes, quorum set to 1, the "two nodes" setting in cluster.conf, etc... working like a charm! :D

proxmox and proxmox2 are my two nodes, but proxmox2 has to move to my school, so I have to "separate" the two nodes WITHOUT reinstalling, and I want to keep the VMs on each...

If I delete proxmox2 from the cluster, proxmox will stay as the only node in the cluster; is this OK? Or do I need to remove BOTH nodes from the cluster?!


And how do I remove a node properly, i.e. without reinstalling Proxmox from scratch? I know that this is not a good option in real life, but this is not production here... just a school project...

I was thinking of doing the following:

remove proxmox2 from the cluster, then run:

Code:

service cman stop
killall -9 corosync cman dlm_controld fenced
service pve-cluster stop
rm /etc/cluster/cluster.conf
rm -rf /var/lib/pve-cluster/* /var/lib/pve-cluster/.*
rm /var/lib/cluster/*

and then reboot, I guess.
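For what it's worth, the removal step itself is usually done with pvecm on the node that stays in the cluster (a sketch, assuming proxmox2 is already shut down or disconnected at that point):

Code:

# On proxmox (the node that keeps the cluster):
pvecm delnode proxmox2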


proxmox2 will never be in the cluster again after this...


Please give me your opinion on that :)

thank you

PS: Everybody in my classroom is running Proxmox for the project because a friend and I really pushed for it :cool:

Problem with Snapshot Rollback (PVE 3.4)

Hello,

we have done some snapshots of a Windows server, with RAM included. Nothing special. We created 3 snapshots. When we then tried to roll back to snapshot 2, we got the following errors:

VM 122 qmp command 'cont' failed - got timeout
TASK ERROR: VM 122 qmp command 'qom-set' failed - unable to connect to VM 122 qmp socket - timeout after 31 retries

Second try:
we tried to shut down the VM before the rollback:
VM quit/powerdown failed - terminating now with SIGTERM
VM still running - terminating now with SIGKILL
VM 122 qmp command 'cont' failed - got timeout
TASK ERROR: VM 122 qmp command 'qom-set' failed - unable to connect to VM 122 qmp socket - timeout after 31 retries

We then tried the rollback another 3 times, and then it worked, but still with an error:
VM 122 qmp command 'cont' failed - got timeout
TASK ERROR: VM 122 qmp command 'qom-set' failed - unable to connect to VM 122 qmp socket - timeout after 31 retries

What is the problem here? Is it only a timeout in the web interface, and did the rollback actually work on the first try?

Thanks and best regards
Code:

proxmox-ve-2.6.32: not correctly installed (running kernel: 2.6.32-34-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

The "not correctly installed" is only because we use an older kernel (due to an issue in pve-cluster).
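One way to check whether the rollback actually completed despite the timeout is to look at the VM and its snapshot list from the CLI (a sketch for VM 122; the parent line in the config shows which snapshot the current state is based on):

Code:

qm status 122
qm listsnapshot 122
qm config 122 | grep ^parent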

quorum failed - CLM CONFIGURATION CHANGE [help!]

Hi. I need help. I've been fighting with quorum for the last week and can't get anywhere.
I have 3 nodes. Two nodes are OK, but the third one can't sync with the others.
Code:

root@blackcap:~# pvecm nodes
Node  Sts  Inc  Joined              Name
  1  M  2878936  2015-06-16 18:14:43  blackcap
  4  X      0                        phoenix
  5  X      0                        sparrow

root@blackcap:~# tail -f /var/log/syslog
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] Members Left:
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] Members Joined:
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] CLM CONFIGURATION CHANGE
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] New Configuration:
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] #011r(0) ip(192.168.30.58)
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] Members Left:
Jun 16 18:16:03 blackcap corosync[212860]:  [CLM  ] Members Joined:
Jun 16 18:16:03 blackcap corosync[212860]:  [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jun 16 18:16:03 blackcap corosync[212860]:  [CPG  ] chosen downlist: sender r(0) ip(192.168.30.58) ; members(old:1 left:0)
Jun 16 18:16:03 blackcap corosync[212860]:  [MAIN  ] Completed service synchronization, ready to provide service.

Code:

root@phoenix:~# pvecm nodes
Node  Sts  Inc  Joined              Name
  1  X      0                        blackcap
  4  M  2621688  2015-06-16 17:21:11  phoenix
  5  X  2621700                        sparrow

Code:

root@phoenix:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster name="birds1" config_version="9">

  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>

  <clusternodes>
    <clusternode name="phoenix" votes="1" nodeid="4"/>
    <clusternode name="sparrow" votes="1" nodeid="5"/>
    <clusternode name="blackcap" votes="1" nodeid="1"/>
  </clusternodes>

</cluster>

I have tried many things, but no luck restoring normal operation of the cluster.
Please help.
proxmox 3.4
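When a node only ever sees itself in the CLM membership like this, the usual suspect is multicast between the nodes; omping can verify that (a sketch, to be run at the same time on all three nodes, using the hostnames from above):

Code:

omping -c 600 -i 1 -q blackcap phoenix sparrow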

[BUG]Resize disk works only for running VMs...

Hi all,

Just a quick note to report a bug :
For a given VM, you can resize a disk if and only if the VM is RUNNING!

Resizing disk of a stopped VM always fails.

At least with a storage iSCSI + LVM (and raw disks).

A greyed-out button for a stopped VM would be enough.
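For reference, the same operation is also available from the CLI as qm resize; the VM ID, disk name and size below are placeholders:

Code:

qm resize 100 virtio0 +10G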

Thanks,

Christophe (waiting for PVE 4.0!).

How do snapshot backups work?

Hi,

When I do a snapshot backup of a KVM guest whose disk is on LVM, does it only back up the disk in the state it was in at the moment the snapshot starts? For example, if I start the backup before starting to compile an app, does it include all the partial disk writes that occur while I'm compiling, after the backup has started?

Thanks,
-J
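For context, this is about vzdump's snapshot mode; a sketch of the kind of invocation meant here (the VM ID and target storage are placeholders):

Code:

vzdump 100 --mode snapshot --storage backup-store --compress lzo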

Enabling wlan0 / Intel Corporation PRO/Wireless 3945ABG

Hi,

My Proxmox VE 3.4 installation works perfectly.
On the computer I use, I have an Ethernet interface and a wireless one.
eth0 is up and works perfectly, but unfortunately the wireless one is not even visible:
- when I type "ifconfig", no wlan0 interface shows up
- when I type "lspci -nn", I see Network controller [0280]: Intel Corporation PRO/Wireless 3945ABG [Golan] Network Connection [8086:4227] (rev 02)

How can I enable this wireless interface in Debian/Proxmox so I can use it as well?
I'm also unable to install the drivers from https://wiki.debian.org/iwlegacy: when I run "apt-get install firmware-iwlwifi", I get the following message:
"This may mean that the package is missing, has been obsoleted, or is only available from another source
However the following packages replace it:
pve-firmware

E: Package 'firmware-iwlwifi' has no installation candidate"
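For what it's worth, iwl3945 is an in-tree kernel driver and only its firmware comes from a package (which, per the apt output above, pve-firmware replaces here); whether the pve kernel ships the module I can't say, but a quick check would be (a sketch):

Code:

# Does the pve kernel provide the iwl3945 module, and does it bind to the card?
modprobe iwl3945
dmesg | grep -i iwl
# List all interfaces, including ones that are down (plain "ifconfig" hides those):
ip link show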


Thanks for your advice.
Cheers,

VM blocked due to hung_task_timeout_secs

Hi,

I need your advice because we have a problem for which we don't have any solution.

For an unknown reason, some VMs become unavailable: the load on the VM grows to a point where it's impossible to do anything. The only way to unblock the situation is to reboot it.
As all our VMs are monitored with Centreon, I have some graphs.

I have already found some threads about a similar problem, but no solution:
https://forum.proxmox.com/threads/12...hlight=kworker
https://forum.proxmox.com/threads/21...eout_secs-quot

There are 4 servers, configured as a cluster. Each server has the same configuration.

Code:

$ pveversion  -v
proxmox-ve-2.6.32: 3.4-156 (running kernel: 3.10.0-5-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

There is no load on the host when the problem occurs on a VM.

Code:

top - 10:58:04 up 25 days, 22:35,  1 user,  load average: 0.19, 0.21, 0.23
Tasks: 616 total,  1 running, 615 sleeping,  0 stopped,  0 zombie
%Cpu(s):  0.3 us,  0.1 sy,  0.0 ni, 99.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem:    257927 total,    31034 used,  226893 free,        0 buffers
MiB Swap:      243 total,        0 used,      243 free,    3618 cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND                                                                                                                                                                         
 4846 root      20  0 32.5g 2.0g 4024 S    3  0.8 866:49.94 kvm                                                                                                                                                                             
 4557 root      20  0 8737m 2.8g 4008 S    3  1.1 827:31.35 kvm                                                                                                                                                                             
 4963 root      20  0 32.6g 1.6g 4024 S    2  0.6 951:55.92 kvm                                                                                                                                                                             
53389 root      20  0 4645m 1.3g 4024 S    2  0.5 860:33.95 kvm                                                                                                                                                                             
 4897 root      20  0 32.5g 1.6g 4148 S    2  0.6 917:29.60 kvm                                                                                                                                                                             
31560 root      20  0  287m  61m 6232 S    2  0.0  0:02.64 pvedaemon worke                                                                                                                                                                 
 4704 root      20  0 4616m 1.2g 3988 S    2  0.5 991:31.37 kvm                                                                                                                                                                             
 4313 root      20  0 4616m 1.5g 3984 S    1  0.6 534:31.11 kvm                                                                                                                                                                             
 4462 root      20  0 4616m 1.3g 4108 S    1  0.5 949:10.53 kvm                                                                                                                                                                             
 4607 root      20  0 4617m 1.4g 3984 S    1  0.5 500:35.04 kvm                                                                                                                                                                             
 4732 root      20  0 4637m 1.2g 4000 S    1  0.5 724:27.02 kvm                                                                                                                                                                             
29185 www-data  20  0  285m  61m 5160 S    1  0.0  0:03.53 pveproxy worker                                                                                                                                                                 
 3103 root      20  0  353m  54m  33m S    1  0.0  24:57.06 pmxcfs                                                                                                                                                                         
 3905 root      20  0 4616m 1.8g 3988 S    1  0.7 737:33.64 kvm                                                                                                                                                                             
    3 root      20  0    0    0    0 S    0  0.0  17:21.56 ksoftirqd/0                                                                                                                                                                     
 2183 root      20  0    0    0    0 S    0  0.0  12:47.38 xfsaild/dm-7                                                                                                                                                                   
 2982 root      20  0  371m 2328  888 S    0  0.0  18:25.42 rrdcached                                                                                                                                                                       
 3308 root      0 -20  201m  66m  42m S    0  0.0  64:04.29 corosync                                                                                                                                                                       
 4594 root      20  0    0    0    0 S    0  0.0  6:12.00 vhost-4557                                                                                                                                                                     
 5264 root      20  0 4630m 1.4g 3996 S    0  0.6 959:41.76 kvm                                                                                                                                                                             
 9205 root      20  0 4546m 4.1g 4148 S    0  1.6  28:06.42 kvm                                                                                                                                                                             
16144 root      20  0    0    0    0 S    0  0.0  0:44.51 kworker/6:1                                                                                                                                                                     
20152 root      20  0 4617m 1.1g 3984 S    0  0.4 298:02.00 kvm

Code:

$ sudo pveperf
CPU BOGOMIPS:      175688.20
REGEX/SECOND:      989918
HD SIZE:          0.95 GB (/dev/mapper/vg01-root)
BUFFERED READS:    325.33 MB/sec
AVERAGE SEEK TIME: 0.04 ms
FSYNCS/SECOND:    13047.64
DNS EXT:          29.53 ms

Code:

$ sudo pveperf /srv/vms/
CPU BOGOMIPS:      175688.20
REGEX/SECOND:      1046214
HD SIZE:          199.90 GB (/dev/mapper/vg01-vms)
BUFFERED READS:    445.96 MB/sec
AVERAGE SEEK TIME: 0.19 ms
FSYNCS/SECOND:    103.74
DNS EXT:          31.61 ms

Code:

$ sudo qm list | grep -v stopped
      VMID NAME                STATUS    MEM(MB)    BOOTDISK(GB) PID     
      102 server-102    running    4096              10.00 3905     
      103 server-103    running    4096              10.00 4313     
      104 server-104    running    4096              10.00 20152   
      105 server-105    running    4096              10.00 4462     
      106 server-106    running    4096              10.00 9205     
      107 server-107    running    8192              10.00 4557     
      108 server-108    running    4096              10.00 4607     
      109 server-109    running    4096              10.00 4704     
      110 server-110    running    4096              10.00 4732     
      111 server-111    running    4096              10.00 53389   
      112 server-112    running    32768            10.00 4846     
      113 server-113    running    32768            10.00 4897     
      114 server-114    running    32768            10.00 4963     
      137 server-137    running    4096              10.00 5264

Code:

$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:d3:c1:fc:c3:50
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: a0:d3:c1:fc:c3:51
Slave queue ID: 0

Code:

$ cat /etc/pve/qemu-server/106.conf
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 4096
name: server-106
net0: virtio=72:33:AA:5F:BB:11,bridge=vmbr0,tag=226
onboot: 1
ostype: l26
smbios1: uuid=071915f2-544c-49ba-a3c3-f3c52f5188d4
sockets: 1
virtio0: vms:106/vm-106-disk-1.qcow2,format=qcow2,size=10G

Some information about a VM where the problem has just occurred.

Code:

$ uname -a
Linux ######### 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64 GNU/Linux

I found some kernel errors in the syslog of the VM:

Code:

Jun 11 21:51:49 ######### kernel: [1762080.352106] INFO: task kworker/0:3:6362 blocked for more than 120 seconds.
Jun 11 21:51:49 ######### kernel: [1762080.352807] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jun 11 21:51:49 ######### kernel: [1762080.353202] kworker/0:3    D ffff88013fc13780    0  6362      2 0x00000000
Jun 11 21:51:49 ######### kernel: [1762080.353205]  ffff880137a147c0 0000000000000046 0000000100000008 ffff880136fd80c0
Jun 11 21:51:49 ######### kernel: [1762080.353208]  0000000000013780 ffff880120f31fd8 ffff880120f31fd8 ffff880137a147c0
Jun 11 21:51:49 ######### kernel: [1762080.353210]  ffff88013a340200 ffffffff8107116d 0000000000000202 ffff880139e0edc0
Jun 11 21:51:49 ######### kernel: [1762080.353213] Call Trace:
Jun 11 21:51:49 ######### kernel: [1762080.353219]  [<ffffffff8107116d>] ? arch_local_irq_save+0x11/0x17
Jun 11 21:51:49 ######### kernel: [1762080.353255]  [<ffffffffa0150a8f>] ? xlog_wait+0x51/0x67 [xfs]
Jun 11 21:51:49 ######### kernel: [1762080.353258]  [<ffffffff8103f6e2>] ? try_to_wake_up+0x197/0x197
Jun 11 21:51:49 ######### kernel: [1762080.353268]  [<ffffffffa015322c>] ? _xfs_log_force_lsn+0x1cd/0x205 [xfs]
Jun 11 21:51:49 ######### kernel: [1762080.353277]  [<ffffffffa0150502>] ? xfs_trans_commit+0x10a/0x205 [xfs]
Jun 11 21:51:49 ######### kernel: [1762080.353285]  [<ffffffffa011d7d4>] ? xfs_sync_worker+0x3a/0x6a [xfs]
Jun 11 21:51:49 ######### kernel: [1762080.353288]  [<ffffffff8105b5f7>] ? process_one_work+0x161/0x269
Jun 11 21:51:49 ######### kernel: [1762080.353290]  [<ffffffff8105aba3>] ? cwq_activate_delayed_work+0x3c/0x48
Jun 11 21:51:49 ######### kernel: [1762080.353292]  [<ffffffff8105c5c0>] ? worker_thread+0xc2/0x145
Jun 11 21:51:49 ######### kernel: [1762080.353294]  [<ffffffff8105c4fe>] ? manage_workers.isra.25+0x15b/0x15b
Jun 11 21:51:49 ######### kernel: [1762080.353296]  [<ffffffff8105f701>] ? kthread+0x76/0x7e
Jun 11 21:51:49 ######### kernel: [1762080.353299]  [<ffffffff813575b4>] ? kernel_thread_helper+0x4/0x10
Jun 11 21:51:49 ######### kernel: [1762080.353302]  [<ffffffff8105f68b>] ? kthread_worker_fn+0x139/0x139
Jun 11 21:51:49 ######### kernel: [1762080.353304]  [<ffffffff813575b0>] ? gs_change+0x13/0x13

Some processes are blocked:

Code:

ps auxf | grep -E ' [DR]'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2787 0.0 0.0 0 0 ? D May22 0:07 \_ [flush-253:4]
root 6362 0.0 0.0 0 0 ? D Jun11 0:00 \_ [kworker/0:3]
root 7257 0.0 0.0 0 0 ? D Jun11 0:00 \_ [kworker/0:2]
root 8036 0.0 0.0 0 0 ? D Jun11 0:00 \_ [kworker/0:1]
root 8262 0.0 0.0 0 0 ? D Jun11 0:00 \_ [kworker/0:4]
root 8263 0.0 0.0 0 0 ? D Jun11 0:00 \_ [kworker/0:5]
root 32191 0.0 0.0 0 0 ? D Jun12 0:12 \_ [kworker/0:8]
root 26983 0.0 0.0 0 0 ? D Jun12 1:24 \_ [kworker/0:9]
(… and some other processes ...)

There is no I/O wait:

Code:

vmstat 1 10
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 3078348 2708 462720 0 0 1 89 15 18 0 0 99 0
0 0 0 3078340 2708 462720 0 0 0 0 289 718 0 0 100 0
0 0 0 3078340 2708 462720 0 0 0 0 289 722 0 0 100 0
0 0 0 3078340 2708 462720 0 0 0 0 291 730 0 0 100 0
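When a guest is in this state, it can also help to look at per-device I/O and in-flight requests inside the guest, to see whether writes are reaching the virtual disk at all (a sketch, assuming the sysstat package is installed in the guest and the disk is vda):

Code:

# Extended per-device statistics, 1-second interval, 5 samples:
iostat -x 1 5
# Requests currently queued/in flight on the virtio disk:
cat /sys/block/vda/inflight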

Thanks for your advice.

cluster not ready - no quorum: When adding storage

Hello all,
I have a two-node cluster and I turned off the second node.
When I want to add storage to the first node via (Datacenter > Storage > Add > Directory), I get this error:
cluster not ready - no quorum? (500)

Is there any solution other than turning the second node back on?
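One common workaround on a 3.x cluster is to temporarily lower the expected votes on the remaining node so it regains quorum (use with care, since it defeats the protection quorum provides):

Code:

pvecm expected 1

After that, /etc/pve becomes writable again on that node and storage can be added.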

Thanks


Q: 2 node cluster - no HA - quorum iSCSI target - issues / help request

Hi, I wonder if anyone can comment.

I am trying to set up a modest-size Proxmox cluster:

-- 2 physical nodes running Proxmox
-- to avoid problems with quorum when one node is offline, I wanted to use a small dedicated shared iSCSI target as the 'quorum disk' for the 3rd vote 'tie breaker'.

-- to achieve this goal, the basic steps I tried were:

-- set up the cluster with 2 members, the normal method as per https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
-- then set up the iSCSI target quorum disk and proceed with the "HA" configuration starting from this step:
https://pve.proxmox.com/wiki/Two-Nod..._Configuration

I am NOT interested in HA and fencing. I just want quorum to work when only one physical host is online plus the quorum disk (i.e. the second/other host is offline).

But I am getting the feeling that this config is not supported, or that I am doing something wrong.

After I carefully follow the steps, I seem to get the output I would expect, i.e.:

Code:


root@proxmox-a:/var/log# pvecm s
Version: 6.2.0
Config Version: 3
Cluster Name: newcluster
Cluster Id: 45236
Cluster Member: Yes
Cluster Generation: 96
Membership state: Cluster-Member
Nodes: 2
Expected votes: 3
Quorum device votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 3
Flags:
Ports Bound: 0 178
Node name: proxmox-a
Node ID: 1
Multicast addresses: 239.192.176.101
Node addresses: 10.82.141.48
 
root@proxmox-a:/var/log# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster name="newcluster" config_version="3">

  <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <quorumd votes="1" allow_kill="0" interval="1" label="proxmox1_qdisk" tko="10"/>
  <totem token="54000"/>

  <clusternodes>
  <clusternode name="proxmox-a" votes="1" nodeid="1"/>
  <clusternode name="proxmox-b" votes="1" nodeid="2"/></clusternodes>

</cluster>

and

root@proxmox-a:/var/log# clustat
Cluster Status for newcluster @ Wed Jun 17 15:55:03 2015
Member Status: Quorate

 Member Name                                          ID  Status
 ------ ----                                          ---- ------
 proxmox-a                                                1 Online, Local
 proxmox-b                                                2 Online
 /dev/block/8:49                                          0 Online, Quorum Disk

root@proxmox-a:/var/log#


Alas, in the web interface, if I now click on the HA tab, it gives me a timeout error, "Got log request timeout (500)",

and if I try to 'touch' a file under /etc/pve it tells me, "touch: cannot touch `a': Device or resource busy"

So I think I have broken the cluster config, or it thinks it has not got quorum, but I am not clear on the proper way to achieve this (if it is possible at all?).
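It may help to see what the cluster stack itself reports on proxmox-a in this state (a sketch):

Code:

# Does cman itself think the node is quorate, and with how many votes?
cman_tool status
# Is the quorum-disk daemon actually running?
ps aux | grep '[q]diskd'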

Any comments are greatly appreciated.

Thanks,


Tim

VM to VM connectivity

I've got two VMs, both running FreeBSD.

/etc/network/interfaces
Code:

auto vmbr4
iface vmbr4 inet static
        address 10.1.11.3
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

Each VM has a virtio NIC bridged to vmbr4. The firewall is disabled on the node. Both NICs have different MAC addresses. I can ping the host IP 10.1.11.3 from both VMs, and I can ping both VMs from the host, but I cannot ping VM2 from VM1 or the other way around. There are no additional firewall rules in the VMs.
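One way to narrow this down is to check, on the node, which MAC addresses vmbr4 has learned and whether the ICMP traffic actually crosses the bridge while one VM pings the other (a sketch):

Code:

# Which MACs has the bridge learned, and on which tap ports?
brctl showmacs vmbr4
# Watch ICMP on the bridge while pinging VM2 from VM1:
tcpdump -ni vmbr4 icmp

If the echo requests appear on vmbr4 but no replies come back, the problem is more likely inside the FreeBSD guests (e.g. a pf ruleset) than on the bridge itself.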

Does anybody know what this could be?

Two-Node High Availability Cluster with a quorum disk

Hi,

I'm trying to build a two-node cluster with a quorum disk.

I can migrate a container to another node. When a node fails, the other one knows it.
Fencing works as expected, and so does the quorum disk.

Everything works perfectly except relocating my containers when a node fails: I can't manage to get a container relocated to the other working node when the first one fails. Do you have an idea? I'm out of ideas...
Thank you!

Code:

root@bare2:~# tail -f /var/log/cluster/qdiskd.log
Jun 17 22:20:43 qdiskd Writing eviction notice for node 2
Jun 17 22:20:44 qdiskd Node 2 evicted

Code:

root@bare2:~# tail -f /var/log/cluster/fenced.log
Jun 17 12:05:23 fenced fencing node bare1
Jun 17 12:06:56 fenced fence bare1 success



My cluster.conf file:

Code:

root@bare2:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="23" name="XXX">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<quorumd allow_kill="0" interval="1" label="proxmox_qdisk" tko="10" votes="1"/>
<totem token="54000"/>
<fencedevices>
...
</fencedevices>
<clusternodes>
<clusternode name="bare2" nodeid="1" votes="1">
<fence>
<method name="1">
<device action="off" name="fence001"/>
</method>
</fence>
</clusternode>
<clusternode name="bare1" nodeid="2" votes="1">
<fence>
<method name="1">
<device action="off" name="fence002"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<failoverdomains>
<failoverdomain name="failover1" nofailback="0" ordered="1" restricted="1">
<failoverdomainnode name="bare1" priority="1"/>
<failoverdomainnode name="bare2" priority="2"/>
</failoverdomain>
</failoverdomains>
<pvevm autostart="1" domain="failover1" recovery="relocate" vmid="101"/>
</rm>
</cluster>


And of course:

Code:

root@bare2:~# clustat
Cluster Status for XXX @ Wed Jun 17 22:16:11 2015
Member Status: Quorate


 Member Name                                                    ID  Status
 ------ ----                                                    ---- ------
 bare2                                                              1 Online, Local
 bare1                                                              2 Online
 /dev/block/8:33                                                0 Online, Quorum Disk


Code:

root@bare2:~# pvecm status
Version: 6.2.0
Config Version: 23
Cluster Name: XXX
Cluster Id: 3251
Cluster Member: Yes
Cluster Generation: 292
Membership state: Cluster-Member
Nodes: 2
Expected votes: 2
Quorum device votes: 1
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 7
Flags:
Ports Bound: 0 178
Node name: bare2
Node ID: 1
Multicast addresses: 239.192.12.191
Node addresses: 172.16.0.2


Use ZFS dataset as storage directory?

I'm not sure if I'm just overlooking something, or if I'm missing something completely.

I have a server (dual Xeon L5640s, 96 GB RAM, 2x 128 GB SSDs in a ZFS mirror for the OS, 8x 5 TB drives in 2-way ZFS mirrors for the data pool) on which I recently installed Proxmox VE 3.4. The install went great: I got the boot volume on a ZFS mirror, created a new zpool with 4 vdevs each in a 2-way mirror, and set up some shares. I plan to use this box both for centralized storage and as a KVM host (Proxmox), and plan to create another zpool for VMs once I get a couple of 500 GB SSDs. In the meantime, I created a dataset on my large ZFS pool called "proxmox", set a quota of 200 GB, and mounted it at /tanks/tank_data_01/proxmox (my zpool is called tank_data_01, mounted at /tanks/tank_data_01).

I see the "proxmox" dataset, and I can successfully use it as a local directory with Proxmox, and I can create VM's and store ISO's on it. However, VM's won't start in this fashion (throws an error -- don't have that error handy). So I decided to add storage using "ZFS" from the drop down (as opposed to the "directory" I just tried with), and I can select my zpool. Problem is, I don't want to use the root of my pool for proxmox storage, and I want to use that proxmox dataset/volume. If I edit storage.cfg and append the correct path (/tanks/tank_data_01 to /tanks/tank_data_01/proxmox), it acts like it's working, but when I try to upload an ISO to that storage, it fails.

Is it not possible to select a ZFS dataset as my storage location, or do I have to use the root of my zpool? Any idea why I couldn't use the local path to that dataset? I know the error message would be handy, and I'm trying to recall what it was; the server is offline at the moment and won't be back up until tomorrow.
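For what it's worth, the ZFS storage type is configured in /etc/pve/storage.cfg with a pool property, and as far as I know that property can point at a dataset rather than the pool root; a sketch using the names from this post (the storage ID vm-zfs is made up):

Code:

zfspool: vm-zfs
        pool tank_data_01/proxmox
        content images,rootdir

Note that this storage type only holds VM images and container data, not ISOs, so ISO uploads would still need a Directory storage pointed at the mounted dataset.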

Thanks!

Proxmox 3.4 create Clone

Hello,

When creating a clone of a VM, no "Linked Clone" option is shown under "Mode"; consequently, only a full clone can be created.

What prerequisites must be met so that a linked clone can be created?

Proxmox Full Clone.png

USB Device

Hi,

I'm connecting a USB device to my VM and I'm using this guide: https://pve.proxmox.com/wiki/USB_Dev...rtual_Machines, which says:
"How to connect devices to a running machine without shutting it down is demonstrated by the examples from above. If device is assigned in that way the assignment is valid until the machine stops. The assignment is independent of the actual status of the device (whether it´s currently plugged in or not), i.e. it can be plugged in and out multiple times and will be always assigned to the VM; regardless if the "Productid" or "busaddress" method is used."

How can I make this permanent, i.e. so that if I restart or shut down/start the VM, it detects the USB device immediately? Is there some file in Proxmox where I can configure this?
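As far as I know, the assignment becomes persistent once it is written into the VM's config file under /etc/pve/qemu-server/, using the usbN: host= syntax; the IDs below are placeholders taken from lsusb output:

Code:

# /etc/pve/qemu-server/<vmid>.conf
usb0: host=1234:5678
# or pinned to a bus/port address instead of the vendor:product ID:
# usb0: host=2-1.3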

Thanks in advance,
Regards

Snapshot option greyed out

I'm having an issue with my Proxmox server and snapshots: the option is greyed out and I can't work out why. Any suggestions?

It's a Linux KVM guest running on a SAN.
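Whether the Snapshot button is available depends on the disk format and the underlying storage, so a first check is what format the VM's disks actually use (a sketch; 100 is a placeholder VM ID):

Code:

qm config 100 | grep -E '^(ide|sata|scsi|virtio)[0-9]+:'

A raw disk on plain LVM/iSCSI, for example, does not support live snapshots in PVE 3.x, which would explain a greyed-out button.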

Open vSwitch VLAN must be done with Managed/Smartswitch?

I'm trying to configure Open vSwitch in Proxmox for the first time. The initial setup seems to work great: I configured 2 Proxmox hosts and a few VMs with 2 VLANs.
But it only seems to work if I also configure the VLAN tags on the smart switch. If I untag everything with the default VLAN 1, communication between the hosts seems to stop. Did I misunderstand that Open vSwitch can be fully managed without needing to touch the physical switch at all for VLAN configuration? Is the switch stripping tags from traffic originating from the hosts/VMs?
Should I use a completely dumb switch instead?
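One way to check where the tags get lost is to capture on the physical uplink underneath the OVS bridge and look for 802.1Q headers (a sketch; eth0 is a placeholder for the uplink NIC):

Code:

tcpdump -e -nn -i eth0 vlan

Traffic between VMs on the same host never leaves that host, but anything between the two Proxmox hosts does cross the physical switch, so those switch ports still have to carry (trunk) the VLAN tags.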

