Channel: Proxmox Support Forum

Found duplicate PV?

This problem looks similar to what some other users reported here, but it may have a different cause.

I have 2 identical nodes, IBM x3650 M2, with 20 GB RAM and 2x 72 GB local disks (RAID 1):
Disk /dev/sda: 72.0 GB, 71999422464 bytes

Both are connected to the same physical NAS:
VM disks from both nodes are on the same LVM/iSCSI target.
Backups from both nodes go to the same NFS share on the same NAS.

Now one node shows the problem and the other does not. :S

I noticed the problem in the web GUI backup logs of that node, where I see that for some time now all (apparently successful) backups start with something like:

Code:

"INFO: Starting Backup of VM 102 (qemu)
INFO: status = running
INFO: update VM 102: -lock backup
  Found duplicate PV dB0Su2lTwsYfbcJPhby21PekoyeN3hHS: using /dev/sdc not /dev/sdb
  Found duplicate PV dB0Su2lTwsYfbcJPhby21PekoyeN3hHS: using /dev/sdc not /dev/sdb

INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/pve_ts879/dump/vzdump-qemu-102-2015_01_14-01_00_02.vma.lzo'
INFO: started backup task 'f373a23c-82d3-4cd6-a5af-382858f0ac91'
INFO: status: 0% (36044800/12884901888), sparse 0% (3534848), duration 3, 12/10 MB/s"

while the backup log sent by mail (for the exact same job) contains no "Found duplicate" warning, so I never noticed it from those...

Code:

"102: Jan 14 01:00:02 INFO: Starting Backup of VM 102 (qemu)
102: Jan 14 01:00:02 INFO: status = running
102: Jan 14 01:00:03 INFO: update VM 102: -lock backup
102: Jan 14 01:00:03 INFO: backup mode: snapshot
102: Jan 14 01:00:03 INFO: ionice priority: 7
102: Jan 14 01:00:03 INFO: creating archive '/mnt/pve/pve_ts879/dump/vzdump-qemu-102-2015_01_14-01_00_02.vma.lzo'
102: Jan 14 01:00:04 INFO: started backup task 'f373a23c-82d3-4cd6-a5af-382858f0ac91'
102: Jan 14 01:00:07 INFO: status: 0% (36044800/12884901888), sparse 0% (3534848), duration 3, 12/10 MB/s"

All backup logs from the other node are just fine, no "Found duplicate" whatsoever.

Digging through the first node's logs in the GUI, all related backup logs show this, but since the email apparently strips these warnings, I never noticed.
(Could I find other traces of this in other system logs, perhaps? Where?)

What can lead to this warning? What happened, and how can I solve it?

More info: the good node shows

Code:

#ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb

#fdisk -l | grep "/dev/sd"
Disk /dev/mapper/pve-root doesn't contain a valid partition table
Disk /dev/mapper/pve-swap doesn't contain a valid partition table
Disk /dev/mapper/pve-data doesn't contain a valid partition table
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sda: 72.0 GB, 71999422464 bytes
/dev/sda1  *        2048    1048575      523264  83  Linux
/dev/sda2        1048576  140623871    69787648  8e  Linux LVM
Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes

# pvscan
  PV /dev/sdb    VG pve_vm_disks_ts879  lvm2 [1000.00 GiB / 22.81 GiB free]
  PV /dev/sda2  VG pve                  lvm2 [66.55 GiB / 8.37 GiB free]
  Total: 2 [1.04 TiB] / in use: 2 [1.04 TiB] / in no VG: 0 [0  ]

The bad node shows

Code:

#ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc

fdisk -l | grep "/dev/sd"
Disk /dev/mapper/pve-root doesn't contain a valid partition table
Disk /dev/mapper/pve-swap doesn't contain a valid partition table
Disk /dev/mapper/pve-data doesn't contain a valid partition table
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sda: 72.0 GB, 71999422464 bytes
/dev/sda1  *        2048    1048575      523264  83  Linux
/dev/sda2        1048576  140623871    69787648  8e  Linux LVM
Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
Disk /dev/sdc: 1073.7 GB, 1073741824000 bytes

# pvscan
  Found duplicate PV dB0Su2lTwsYfbcJPhby21PekoyeN3hHS: using /dev/sdc not /dev/sdb
  PV /dev/sdc    VG pve_vm_disks_ts879  lvm2 [1000.00 GiB / 22.81 GiB free]
  PV /dev/sda2  VG pve                  lvm2 [66.55 GiB / 8.37 GiB free]
  Total: 2 [1.04 TiB] / in use: 2 [1.04 TiB] / in no VG: 0 [0  ]
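
(Before anything else, it may be worth checking whether /dev/sdb and /dev/sdc on the bad node are simply two paths/sessions to the same iSCSI LUN; a hedged diagnostic sketch, using only the device names shown above:)

Code:

# are sdb and sdc two paths to the same iSCSI target/LUN?
ls -l /dev/disk/by-path/ | grep -E 'sd[bc]'
iscsiadm -m session -P 3 | grep -E 'Target:|Attached scsi disk'
# and which of the duplicates LVM is actually using
pvs -o pv_name,pv_uuid,vg_name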

Those servers have not been modified recently and have been running PVE since 1.5; nothing was intentionally changed on the NAS either, and I see nothing strange...

Currently both PVE nodes run 3.1-24, are connected to the same gigabit switch, and have identical pveversion output:

Code:

proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-24 (running version: 3.1-24/060bd5a6)
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-9
libpve-access-control: 3.0-8
libpve-storage-perl: 3.0-18
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

and similar pveperf results:
Code:

pveperf /mnt/pve/pve_ts879/
CPU BOGOMIPS:      72530.88
REGEX/SECOND:      921498
HD SIZE:          11081.12 GB (ts879:/PVE)
FSYNCS/SECOND:    1567.25
DNS EXT:          260.35 ms
DNS INT:          1.33 ms (apiform.to.it)
root@pve2:~# pveperf /mnt/pve/pve_ts879/
CPU BOGOMIPS:      72530.88
REGEX/SECOND:      952254
HD SIZE:          11081.12 GB (ts879:/PVE)
FSYNCS/SECOND:    1601.93
DNS EXT:          199.14 ms
DNS INT:          1.08 ms (apiform.to.it)

Code:

pveperf /mnt/pve/pve_ts879/
CPU BOGOMIPS:      72531.60
REGEX/SECOND:      771375
HD SIZE:          11081.12 GB (ts879:/PVE)
FSYNCS/SECOND:    1343.58
DNS EXT:          49.35 ms
DNS INT:          1.03 ms (apiform.to.it)
root@pve1:~# pveperf /mnt/pve/pve_ts879/
CPU BOGOMIPS:      72531.60
REGEX/SECOND:      930701
HD SIZE:          11081.12 GB (ts879:/PVE)
FSYNCS/SECOND:    1638.08
DNS EXT:          163.32 ms
DNS INT:          0.95 ms (apiform.to.it)

Thanks,
Marco

L2TP IPsec VPN server within a container

Hello, I am currently having a few problems setting up a VPN server inside of a container.

Are there any configurations I need to set up to be able to follow a tutorial such as this:

http://www.elastichosts.co.uk/suppor...ec-vpn-server/

I have previously been able to get one of these running on a full virtual machine; however, I would like to try it inside a container if at all possible.
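
(For reference, on the OpenVZ containers used by PVE 3.x an L2TP/IPsec server usually needs PPP and TUN access granted from the host; a hedged sketch, where container ID 101 is only an example:)

Code:

# on the PVE host, CTID 101 is just an example
vzctl set 101 --features ppp:on --save
vzctl set 101 --devices c:108:0:rw --save     # /dev/ppp
vzctl set 101 --devnodes net/tun:rw --save    # /dev/net/tun
vzctl set 101 --capability net_admin:on --save
vzctl restart 101
# IPsec itself still depends on the host kernel having the required modules loaded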

Thanks.

OVSBond + OVSBridge setup problem

In the GUI I created an OVSBond named bond1 from interfaces eth1 and eth3, and then created an OVSBridge named vmbr1. However, after rebooting, ifconfig shows neither a bond1 nor a vmbr1 interface. Here is the relevant section of the interfaces file:

Code:

iface eth1 inet manual


iface eth3 inet manual

allow-vmbr1 bond1
iface bond1 inet manual
        ovs_bonds eth1 eth3
        ovs_type OVSBond
        ovs_bridge vmbr1
        ovs_options lacp=active lacp-time=fast bond_mode=balance-tcp

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports bond1

The switch ports that these interfaces plug into show that they are DOWN (no link). In the GUI:

bond1 shows Active=No and Autostart=no (there is no option for Autostart to be checked).
vmbr1 shows Active=No and Autostart=Yes

These devices were working when set up as a regular Linux bond.

Code:

#ovs-vsctl show
96d93704-76d6-4bcd-8ea9-7837b208f681
    ovs_version: "1.4.2"


#ovs-appctl bond/show bond1
no such bond
ovs-appctl: ovs-vswitchd: server returned reply code 501

It is almost as if the GUI config is not making it into the OVS config. I am running PVE 3.1-1/ao6c9f73, which is the free version with updates applied.
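
(One way to narrow it down might be to create the bridge and bond by hand with ovs-vsctl and see whether OVS itself accepts them; a hedged sketch reusing the names from the interfaces file above:)

Code:

# manual test, bypassing the PVE-generated config
ovs-vsctl add-br vmbr1
ovs-vsctl add-bond vmbr1 bond1 eth1 eth3 lacp=active bond_mode=balance-tcp
ovs-vsctl show
ovs-appctl bond/show bond1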

Does anyone have any idea what could be wrong with our setup?

Thanks,
Eric

Migrate across storages

I looked around a little bit and found a few old threads related to migrating across storages; however, it would sure be nice to be able to do offline migrations from one node to a differently named storage on another node.

Doesn't seem like that would be too difficult.

VZDump backup file information

Is there currently a command to retrieve information about the contents of a VZDump backup file? The only things encoded in the filename itself are the VM number and the date/time the backup was made. It would be nice to be able to show the attribute/value pairs shown in the Options tab of the GUI, at the very least the name of the VM. If no such utility exists, perhaps someone could point me to where to start looking in the source code, or where the format of these backup files is documented.
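
(For what it's worth, a hedged sketch of how the embedded configuration can be read, assuming the vma tool shipped with pve-qemu-kvm supports this; the archive names are only examples:)

Code:

# KVM backup (.vma.lzo): decompress, then print the embedded VM config
lzop -d vzdump-qemu-102-2015_01_14-01_00_02.vma.lzo
vma config vzdump-qemu-102-2015_01_14-01_00_02.vma
# OpenVZ backup (.tar): the container config is stored inside the archive
tar -xOf vzdump-openvz-101-2015_01_14-01_00_02.tar ./etc/vzdump/vps.conf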

Win7x64+Spice stops responding on spice & LAN

I have the following problem: I have a Win7 x64 installation with SPICE up and running, but at some point after a certain amount of time (5 min+) I can't reach that box over the LAN and the SPICE viewer can no longer connect. To get it working again I just need to connect through the console button (noVNC), log in with my Windows account, and afterwards everything works fine again (LAN/SPICE connection)...

- no firewall on windows
- virtio 0.1-81
- spice guest tool 0.74
- proxmox 3.3-5
- changing the network device from virtio to Intel e1000 makes no difference

Any idea what could be causing this? Thanks in advance.

Setup an IDS on Proxmox as a VM

How can I set up an IDS as a VM on Proxmox?
I tried to find the answer on the forum but couldn't find it.

clone linux VM with virtual LVM disk

If I create a KVM VM (say, Ubuntu Server) whose virtual disk is configured with (internal) LVM and then convert it to a template, I can clone it to get several other VMs based on this machine.

Of course the template's internal LVM PVs/VGs/LVs all have their own names and UUIDs, like:

Code:

  --- Physical volume ---
  PV Name              /dev/vda5
  VG Name              ubuntu-webserver
  ...
  PV UUID              UHqIVU-BKta-3R5x-9HvP-CSNG-r6gH-w87UIi

  --- Volume group ---
  VG Name              ubuntu-webserver
  System ID
  ...
  Free  PE / Size      0 / 0
  VG UUID              pfg6xD-nhx7-ZkJR-D003-mNEz-vg0o-a2w9Vf

  --- Logical volume ---
  LV Name                /dev/ubuntu-webserver/swap_1
  VG Name                ubuntu-webserver
  LV UUID                YngPz7-dFCj-hGNN-6RyY-fxH3-VNHV-HczVbO
  ...

Now, if I clone other VMs from this template, they will all get the same PV/VG/LV names and UUIDs, since their disks are exact clones.

The same would happen if one restored a VM backup to another VMID and then changed its SSH keys/IP/hostname, etc.: the disk's LVM layout would remain that of the original VM from which the backup was taken.

Now, would this be a problem for the PVE nodes hosting them?
Would "reused" PV/VG/LV names/UUIDs inside several VMs conflict in some way?

I always thought that what's inside a VM (kvm) disk is completely unknown to the host and other VMs/CTs, but I got this hint from another thread, based on another issue.

Then, if trouble could arise from this name/UUID "reuse", what is the suggested fix? vgrename/lvrename and vgchange/lvchange, or something else? (See the sketch below.)
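
(A hedged sketch of the rename/re-UUID route, with a hypothetical new VG name; this is typically done from a rescue/live environment inside the clone, because the root VG cannot be changed while it is active:)

Code:

vgrename ubuntu-webserver ubuntu-web02   # hypothetical new, unique VG name
vgchange --uuid ubuntu-web02             # generate a new VG UUID
pvchange --uuid /dev/vda5                # generate a new PV UUID
# then update /etc/fstab and the bootloader config, and rebuild the initramfs,
# so they reference the new VG name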

Thanks,
Marco

what is the "search domain"

Hello guys,

I am curious what exactly the search domain in Proxmox VE (*yournode* -> DNS tab) is.

Could anyone please explain it to me?
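
(For context, the value ends up in the node's /etc/resolv.conf and is used to complete unqualified host names; roughly like this, with example values:)

Code:

# /etc/resolv.conf on the node (example values)
search example.lan
nameserver 192.168.1.1
# with this, "ping nas1" is looked up as "nas1.example.lan"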

Many thanks in advance,

Afox

Online add new node to work cluster, and problem clvmd

There was a problem when adding a new node to an existing Proxmox cluster.
After the node is added and rebooted, rgmanager disappears from clustat, and the new node cannot join the clvm domain after the restart.
To work around this we have to restart all nodes at the same time, including the new one; then everything works like clockwork again.

We use iSCSI + clvmd + ext4. The problem occurs when adding a new node: after a while fencing kicks in on the other nodes, leaving the cluster inoperable.

Is there an online procedure for adding a node, or is the problem with clvmd?

If this issue can only be handled on a paid basis, could you confirm what that would involve for us? We have currently bought 1-year licenses without support for all nodes.

Cluster setup

Hi all,

I'm a little bit confused with the cluster setup and I would like to ask if the following scenario is possible.

I've got 3 Proxmox servers, all running in different data centers. I just need to manage them from one address, as a cluster allows.
Is this possible, or do all of them need to be on the same network/subnet?

Kind regards

DRBD on RAID

I have 2 servers with large 12-disk hardware RAID 10 arrays that I want to set up like the DRBD wiki (http://pve.proxmox.com/wiki/DRBD). In that example, and most others I have seen, the DRBD volumes are recommended to be on their own physical drive/array, with Proxmox (or whatever base hypervisor) running on its own disk. I want to utilize the performance and redundancy of the RAID 10 for both the VMs and Proxmox. Is it okay to use LVM volumes from the RAID disk as backing for the DRBD volumes? I can't find any performance metrics on this setup versus using separate physical devices for the hypervisor and the DRBD backing. I could add an SSD to run Proxmox on if the performance would be significantly improved.
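
(For illustration, a hedged sketch of a DRBD resource backed by an LV carved out of the shared RAID 10 volume group; all names and addresses here are hypothetical:)

Code:

# /etc/drbd.d/r0.res
resource r0 {
    protocol C;
    on node1 {
        device    /dev/drbd0;
        disk      /dev/raid10vg/drbd0;   # LV on the RAID 10 VG, not a dedicated disk
        address   10.0.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/raid10vg/drbd0;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}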

10 Gb card : Broadcom 57810 vs Intel X520

Hi,


I'm going to buy a Dell 320 server with a 10 Gb network card to connect to my SAN MD3800i.
My question is about the network card. I want to be sure that the card is compatible with Proxmox 3.3. I have the choice between two cards:


- Broadcom 57810 DP 10Gb DA/SFP+ Converged Network Adapter


- Intel X520 DP 10Gb DA/SFP+ Server Adapter


Could you please advise me?


Note : I will buy the support as soon as I have installed my cluster.

Thanks in advance.

New Proxmox VE Trainings 2015

Hi all!

Our new 2015 Proxmox VE training schedule offers modular two-day training courses at basic and advanced level.

Upcoming: Proxmox VE Installation and Administration (2-days)

  • 2-3 February 2015 in Roubaix (F)
  • 9-10 February 2015 in Nuremberg (D)
  • 24-25 February 2015 in Saarbruecken (D)

View all training dates, details and pricing

http://www.proxmox.com/training

We're happy to welcome stacktrace GmbH as our newest authorized training partner! stacktrace will offer trainings in Germany.
__________________
Best regards,

Martin Maurer
Proxmox VE project leader

Exchange Environment - 4 vcpu faster than 10?

Hi all,

Sorry for the broken English, my native language is Dutch. I am from Belgium.

I assume this question was asked before but I am using the wrong keywords in the search.

We have an Exchange environment on our production server (Proxmox 3.3-5, 2x Intel Xeon E5-2650).

5 x Windows Server 2012R2

1x Domain Controller (4 vcpu - 4GB ram - works perfectly)
1x Exchange with Client Access role (4 vcpu - 8GB ram - works perfectly)
1x Control Panel server (4 vcpu - 4GB ram - works perfectly)
2x Exchange with Mailbox role (4 vcpu - 16GB ram)

We recently had our share of iSCSI problems which gave us a corrupt Mailbox Database.

So we wanted to boost the CPU power on one of the Mailbox Database servers to 10 vcpu.

But to our surprise 10vcpu is actually slower than 4vcpu.

I checked it on a clean virtual server with Prime95 and can prove that 4 cores crunch pi digits faster than 10 cores.

How is this possible? Is there some kind of logical explanation?

Thanks in advance!

Help! only one node lost connection to ISCSI target, how to recover?

I had some iSCSI problems recently: one node had iSCSI trouble and the VMs with disks on that storage suffered.
Now I have moved all VMs to the other node and restarted the failing one, but the iSCSI target cannot be reconnected...
(see this other post http://forum.proxmox.com/threads/207...d-duplicate-PV)

How can I recover from this situation?
I have left some VMs there (powered off) and they can't be restarted...
I tried a bunch of iSCSI commands, but there seems to be a major problem...

The iSCSI host is a NAS that can be reached by both nodes and shows no other problems:

Code:

"good node"
# ping ts879
PING ts879 (192.168.3.249) 56(84) bytes of data.
64 bytes from ts879 (192.168.3.249): icmp_req=1 ttl=64 time=0.196 ms

#pvs
  PV        VG                Fmt  Attr PSize    PFree
  /dev/sda2  pve                lvm2 a--    66.55g  8.37g
  /dev/sdb  pve_vm_disks_ts879 lvm2 a--  1000.00g 22.81g

# iscsiadm -m session
tcp: [1] 192.168.3.249:3260,1 iqn.2004-04.com.qnap:ts-879u-rp:iscsi.pve.d4e6fc



"bad node"
# ping ts879
PING ts879 (192.168.3.249) 56(84) bytes of data.
64 bytes from ts879 (192.168.3.249): icmp_req=1 ttl=64 time=0.115 ms

# pvs
  PV        VG  Fmt  Attr PSize  PFree
  /dev/sda2  pve  lvm2 a--  66.55g 8.37g

# iscsiadm -m session
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session50
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session51
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session52
iscsiadm: could not read session targetname: 5
iscsiadm: could not find session info for session53
iscsiadm: No active sessions.

The only difference I can see in /etc/iscsi (apart from the initiator name, obviously) seems to be that
Code:

"good node" has
# ls -la /etc/iscsi/send_targets/
total 28
drw------- 7 root root 4096 Jul 10  2014 .
drwxr-xr-x 5 root root 4096 Nov  7  2013 ..
drw------- 2 root root 4096 Nov 21  2013 172.16.0.3,3260
drw------- 2 root root 4096 Jan 15 15:09 192.168.3.249,3260
drw------- 2 root root 4096 Jul 11  2014 192.168.3.30,3260
drw------- 2 root root 4096 Nov 20  2013 192.168.3.78,3260
drw------- 2 root root 4096 Jul 10  2014 ts879,3260


while "bad node" has
~# ls -la /etc/iscsi/send_targets/
total 24
drw------- 6 root root 4096 Nov 27  2013 .
drwxr-xr-x 5 root root 4096 Nov 19  2013 ..
drw------- 2 root root 4096 Nov 21  2013 172.16.0.3,3260
drw------- 2 root root 4096 Jun 14  2014 192.168.3.30,3260
drw------- 2 root root 4096 Nov 20  2013 192.168.3.78,3260
drw------- 2 root root 4096 Sep 15 20:17 ts879,3260
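
(A hedged recovery sketch for the bad node, reusing the portal and target name shown in the good node's session output above:)

Code:

# re-discover the portal and log in to the target again
iscsiadm -m discovery -t sendtargets -p 192.168.3.249:3260
iscsiadm -m node -T iqn.2004-04.com.qnap:ts-879u-rp:iscsi.pve.d4e6fc -p 192.168.3.249:3260 --login
iscsiadm -m session   # verify the session is back
pvscan                # the pve_vm_disks_ts879 VG should reappear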

Thanks, Marco

ZFS iSCSI: can't create more than 11 LUNs?

Hi,

I am trying to add VM disks to ZFS storage, but after creating 10 disks I can't create more. I get this error:

File exists. at /usr/share/perl5/PVE/Storage/LunCmd/Iet.pm line 376. (500)

If I delete one of the disks, I can add a new one, but I get this error every time I try to create a 12th disk.

Here is the zfs list output:

Code:

elastics/vm-101-disk-1 4.16G 5.80T 4.16G -
elastics/vm-101-disk-2 72K 5.80T 72K -
elastics/vm-107-disk-1 4.30G 5.80T 4.30G -
elastics/vm-107-disk-2 72K 5.80T 72K -
elastics/vm-107-disk-3 72K 5.80T 72K -
elastics/vm-107-disk-4 72K 5.80T 72K -
elastics/vm-111-disk-1 4.13G 5.80T 4.13G -
elastics/vm-111-disk-2 72K 5.80T 72K -
elastics/vm-111-disk-3 72K 5.80T 72K -
elastics/vm-111-disk-4 72K 5.80T 72K -
elastics/vm-127-disk-1 76.8G 5.80T 76.8G -


and here are the ietd LUNs:

Code:

cat /proc/net/iet/volume
tid:1 name:iqn.2001-04.tr.edu.artvin:elastics
lun:0 state:0 iotype:blockio iomode:wt blocks:545259520 blocksize:512 path:/dev/elastics/vm-127-disk-1
lun:1 state:0 iotype:blockio iomode:wt blocks:306184192 blocksize:512 path:/dev/elastics/vm-107-disk-1
lun:2 state:0 iotype:blockio iomode:wt blocks:306184192 blocksize:512 path:/dev/elastics/vm-111-disk-1
lun:3 state:0 iotype:blockio iomode:wt blocks:306184192 blocksize:512 path:/dev/elastics/vm-101-disk-1
lun:4 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-111-disk-2
lun:5 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-111-disk-3
lun:6 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-111-disk-4
lun:8 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-107-disk-2
lun:10 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-107-disk-3
lun:9 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-107-disk-4
lun:7 state:0 iotype:blockio iomode:wt blocks:629145600 blocksize:512 path:/dev/elastics/vm-101-disk-2

Disk size in guest .conf file

I'm moving my home lab from ESXi over to Proxmox. I mounted the NFS datastore that ESXi uses and converted the vmdk file to qcow2, overwriting the file that was created when I created the guest in Proxmox.

The size in the .conf file (and web GUI) shows 32GB, but the actual disks are different sizes: 20GB, 100GB, etc. Is there any problem with the wrong size being set in the .conf file? On one VM I removed the size= part of the virtio0: line; now the GUI doesn't display the size, and everything still runs.

I'm wondering what size= is used for in the virtio0: line and if it's optional.
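
(A hedged sketch of how to check the real image size and have PVE refresh the size= value; qm rescan is assumed to be available on this version, and the path/VMID are only examples:)

Code:

# actual virtual size of the converted image (example path and VMID)
qemu-img info /mnt/pve/nfs-esxi/images/100/vm-100-disk-1.qcow2
# ask PVE to rescan volumes and update size= in the VM config
qm rescan --vmid 100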

IRC channel on freenode #proxmox - invite only now?

Okay, can't find a better place to post this (sorry).

I just noticed the #proxmox channel on freenode is now invite only. What's up with that? D:

EDIT: Er, I just realized there's #proxmox and ##proxmox (which I'm in). I'm going to ignore #proxmox for now, unless someone can advise what this is about? (sorry if already answered elsewhere)

Network UPS Tools (NUT) + ProxMox Cluster

Hi all,

I have a Tripp Lite SmartPro that I want to hook into my cluster so that,

when power goes down, one of the cluster boxes can trigger all of the others to initiate a shutdown: power down the VMs and then power themselves down. Ultimately I need to do this for two clusters - one is a PVE Ceph cluster, the other is a PVE cluster for virtual guests.

Now, with the Tripp Lite I believe I'll need to use Network UPS Tools to make this work. I've found this rather simple tutorial for getting NUT setup:

http://www.dimat.unina2.it/LCS/Monit...untu10-eng.htm

But I'm not sure of the proper commands to send to the VM hosts to safely power down the guests and shut down. Do I just use a "shutdown -h now"? Is there some PVE script on the system I can trigger that will shut down the VMs and power off the system?

The tricky part is that I need to power off the VM hosts before the Ceph network, which sits on a separate battery backup, powers off... Any tips would be appreciated!
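
(A hedged sketch of what a NUT-triggered shutdown script on each PVE host could look like; the timeout and the decision to stop containers as well are assumptions to adjust:)

Code:

#!/bin/bash
# called from NUT (e.g. via SHUTDOWNCMD or an upssched action) on each PVE host
# cleanly shut down running KVM guests first
for vmid in $(qm list | awk '$3 == "running" {print $1}'); do
    qm shutdown $vmid --timeout 120
done
# stop any OpenVZ containers as well
for ctid in $(vzlist -H -o ctid 2>/dev/null); do
    vzctl stop $ctid
done
# finally power off the host itself
shutdown -h now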