Channel: Proxmox Support Forum
Manually adding ISO images to proxmox

Hello, is it possible to add an ISO image to PVE by fetching it directly on the server with wget or something similar? Is the same possible for OpenVZ templates? I ask because I have a very slow upload speed, so uploading a 4 GB image through the web interface would take days. Thanks.
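
For anyone searching: on a default installation the web interface stores ISOs and OpenVZ templates under /var/lib/vz, so fetching them directly on the host works. A minimal sketch, assuming the default 'local' directory storage; the URLs are placeholders:

Code:

# ISO images live here on the default 'local' storage
cd /var/lib/vz/template/iso
wget http://example.com/path/to/image.iso        # placeholder URL

# OpenVZ templates go here instead
cd /var/lib/vz/template/cache
wget http://example.com/path/to/template.tar.gz  # placeholder URL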

Basic understanding of Proxmox and resizing storage

Hi,

I have a node, and on this node I have two VMs; the node also has a storage. My two VMs have a 50 GB hard disk each, 100 GB in total, and my storage is set to 150 GB. As I understand it, the storage capacity is the maximum available to the two VMs, and the VMs are naturally stored on that storage. But I actually have more physical disk space, so how can I increase the storage disk so it does not limit the size of the VM disks? (I know how to increase the disk of a VM itself.) If I can't change the size of the storage disk, I guess the solution would be to create a second storage, reinstall my second VM, and have it use that second storage instead. Have I understood how this works correctly?

Thanks
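
If the storage in question is the default 'local' directory backed by the pve volume group, the usual way to grow it is to add the extra disk as a new physical volume and then extend the data logical volume and its filesystem. A rough sketch, assuming the new disk is /dev/sdb and the data LV is /dev/pve/data - verify your layout with vgs/lvs first:

Code:

# add the new disk to the pve volume group
pvcreate /dev/sdb
vgextend pve /dev/sdb

# grow the data LV (which backs /var/lib/vz) and its filesystem
lvextend -l +100%FREE /dev/pve/data
resize2fs /dev/pve/data   # assumes ext3/ext4; online grow is supported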

resize/compress vmdk files

Hi, we have large vmdk files, but the amount of data inside the VMs has shrunk. Is it possible to compress/shrink the vmdk files? Thank you.
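
One approach, hedged: zero the free space inside the guest first (e.g. with sdelete on Windows or dd if=/dev/zero onto a temp file on Linux, then delete it), then rewrite the image with qemu-img so the zeroed blocks are dropped. A sketch with placeholder filenames - stop the VM before converting:

Code:

# rewrite the image, skipping zeroed blocks
qemu-img convert -O vmdk big-disk.vmdk compacted-disk.vmdk

# or convert to compressed qcow2 while you are at it
qemu-img convert -c -O qcow2 big-disk.vmdk disk.qcow2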

Windows Server 2012 VNC mouse pointer out of sync

Update cluster to 3.2 and kernel 3.10: quorum lost

Hi all,

I recently got a new server, a Dell PE R620, and decided to install it with PVE 3.2. As I only use KVM, I also installed the new 3.10 kernel from the Enterprise repository. I had a small problem with this kernel: the server did not reboot properly the first time. It turned out it was trying to mount the LVM volumes before the RAID controller was loaded. This was resolved by adding the option 'scsi_mod.scan=sync' to the 'GRUB_CMDLINE_LINUX_DEFAULT' line in /etc/default/grub, as stated in another thread.
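
For reference, the change described above looks roughly like this (the other options on the line are illustrative; keep whatever you already have):

Code:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet scsi_mod.scan=sync"

# then regenerate the grub config and reboot
update-grub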

After that, I joined the new server to my PVE cluster, and all seemed fine.

So I also updated the other 3 nodes in the cluster and installed the 3.10 kernel on them too. All went fine until I rebooted the last node. Then one node appeared in red in the web management interface. I checked the cluster status and found we had lost quorum, with a line stating 'Quorum: 3 Activity blocked'. Shortly after, none of the nodes could see the others, and the cluster had failed.
'pvecm nodes' showed only one node as alive (the node where I was logged in).

I tried rebooting some nodes. After that I recovered quorum for a short while, but it was soon lost again.

I then went back to the 2.6.32 kernel on the last node I had set up (in fact I removed the 3.10 kernel and ran update-grub). After the reboot, I instantly recovered the entire cluster and the quorum.
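
In case it helps others, the rollback amounts to something like this - the exact 3.10 package name below is an example, so list what is actually installed first:

Code:

# find the installed 3.10 kernel package
dpkg -l 'pve-kernel-*'

# remove it and fall back to the 2.6.32 kernel (adjust the name)
apt-get remove pve-kernel-3.10.0-1-pve
update-grub
reboot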

So for now I will stick with the 2.6.32 kernel and reinstall it on each node (I have already done it on one node). Are there known problems with the 3.10 kernel?

My pveversion output is this (on the new server, now back on the 2.6.32 kernel):
# pveversion -v
proxmox-ve-2.6.32: 3.2-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.2-1 (running version: 3.2-1/1933730b)
pve-kernel-2.6.32-27-pve: 2.6.32-121
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-14
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Proxmox VE 3.0 unable to login to web interface

Hi,

I am using Proxmox VE 3.0. I can log in to the server itself with the same password, but I cannot log in to the web interface. I have been using this web interface for more than 5 months; it suddenly stopped working. When I try to log in it says "Login failed, please try again".

I restarted the server and tried restarting pveproxy.

pveversion -v
pve-manager: 3.0-20 (pve-manager/3.0/0428106c)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-15
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-6
vncterm: 1.1-3
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-12
ksm-control-daemon: 1.1-1

syslog

Mar 16 19:18:11 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:11 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:11 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:11 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:11 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: __ratelimit: 2141 callbacks suppressed
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
Mar 16 19:18:16 proxmox2 kernel: Neighbour table overflow.
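
The repeated "Neighbour table overflow" messages suggest the kernel's ARP/neighbour cache is being exhausted, which can break local connectivity, including logins to the web interface. A hedged sketch of raising the thresholds - the numbers are example values, tune them to your network size:

Code:

# /etc/sysctl.conf - raise the neighbour table limits (example values)
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096

# apply without a reboot
sysctl -p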

Regards,
Asaguru

need help for an exotic configuration

Hello. After a few years of PVE installations and experience, this is the first time I have a problem. But first, some explanations:
I bought 4 Apple Xserve 1,1 machines from a closed-down company for small money.
After hours upon hours of setting up a Debian-only system, that part is solved: Debian Wheezy starts and runs perfectly without any error. There is no OS X on the disks, and the EFI loader is compiled and configured correctly. The GRUB menu shows the right kernels (the Wheezy one, and the newly installed one from this documentation -> http://pve.proxmox.com/wiki/Install_..._Debian_Wheezy). As I said, the existing 64-bit Wheezy runs perfectly with kernel 3.2.0-4-amd64.
My problem is:
After the setup and the GRUB changes, the EFI chainloader shows the GRUB menu; the PVE entry is the first one, and all files are in the folder /efi/boot (like the other kernels/initrds).
But if I boot the PVE kernel, the kernel itself loads correctly, yet while loading the initrd.img file the system hangs: no blinking cursor, and the last line is [Initrd, addr=0x5dad6000 mem=0xe89942]

The grub.cfg entry is:

menuentry "PVE Wheezy" {
    fakebios
    kernel /efi/boot/vmlinuz-2.6.32-29-pve ro root=/dev/sda3 nomodeset vga=normal
    initrd /efi/boot/initrd.img-2.6.32-29-pve
}

The second entry is the same, with the kernel/initrd for 3.2.0-4 from Wheezy, and it boots normally.

Since this is an Xserve 1,1, it is not possible to install with the bare-metal installer; X.org does not run on this device.

/dev/sda1 -> vFAT EFI System Partition
/dev/sda2 -> swap space
/dev/sda3 -> / (GRUB is installed in the boot sector of this partition; on EFI systems it is not possible to install GRUB in the MBR)

Any hints on how to locate the error, given that initrd.img-2.6.32-29-pve won't load?
Thanks a lot for your help.

Nils

Public networking for containers in an OVH dedicated server?

Hello, I've been trying for the last two days to set up public network access for containers with Proxmox on an OVH server. How do I do it? I have a failover IP because I thought I might try that, but so far I have failed to configure it as well. My network looks like this (I added vmbr2 as part of my failed config): [network screenshot missing]. Is it because eth0 and eth1 are inactive? Everything is Debian 7.
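
For OpenVZ containers the simplest approach on OVH is usually venet: assign the failover IP directly to the container and let the host route it. A hedged sketch, with 1.2.3.4 standing in for the failover IP and 100 for the container ID; depending on the OVH setup you may also need proxy_arp on the uplink interface:

Code:

# on the host: enable forwarding (persist it in /etc/sysctl.conf too)
sysctl -w net.ipv4.ip_forward=1

# assign the failover IP to the container via venet
vzctl set 100 --ipadd 1.2.3.4 --save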

Opening an LVM Partition

We have a disk image, and we ran kpartx -av on it.

Inside are 2 partitions: p1 is the boot partition and p2 is an LVM partition. I can't mount the LVM partition directly - how do we open this?

Device Boot Start End Blocks Id System
/dev/PL-C-SAN-NODES/vm-100-disk-3p1 * 2048 499711 248832 83 Linux
/dev/PL-C-SAN-NODES/vm-100-disk-3p2 501758 167770111 83634177 5 Extended
Partition 2 does not start on physical sector boundary.
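
An LVM physical volume cannot be mounted directly; the volume group inside it has to be scanned and activated first, after which the logical volumes appear under /dev/mapper. A sketch - the VG/LV names are whatever lvscan reports, not something I know from the post:

Code:

# scan for the volume group inside the mapped partition
vgscan

# activate it so the logical volumes get device nodes
vgchange -ay

# list the logical volumes and mount one
lvscan
mount /dev/mapper/<vgname>-<lvname> /mnt   # names from lvscan output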

same mac from same proxmox node appears multiple times in switch mac list?

Hi all,

I have some network issues. I think it's because of loops, but both RSTP and loop protection are active... still, I may have done it wrong. Anyway, I'm asking here because I need to know what Proxmox's normal behaviour is.

So I have 3 nodes, each with two interfaces in bond0 configured with LACP - pretty standard.

I have two LAN switches, and each node has one interface connected to each of them. The two switches are also trunked together with LACP. I have tried both HP static trunking and LACP - it made no difference.

What I'm seeing, aka the symptoms:

Sometimes when I boot a node, or restart a node's network service, it suddenly starts to drop packets, and after some reboots it stops again. A quicker way to solve it is to reboot one of the switches. My theory is the following:

LACP makes only one interface active, and I'm thinking that in the packet-drop scenario two nodes have their active interface on switch 1 while the last node has its active interface on switch 2, which in turn - because of the trunk between the two switches - creates a loop. Does this make sense?

I went to the Proxmox nodes and ran ifconfig to find the MACs. Here I noticed ALL my interfaces had the same MAC.

Proxmox00
Code:

root@proxmox00:~# ifconfig
bond0    Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2712897 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1195000 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1835251437 (1.7 GiB)  TX bytes:238454039 (227.4 MiB)

bond0.2  Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:1023654 errors:0 dropped:0 overruns:0 frame:0
          TX packets:754 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1282156890 (1.1 GiB)  TX bytes:49772 (48.6 KiB)

bond0.3  Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:754 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:49772 (48.6 KiB)

eth2      Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:2210271 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1148199 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1583448270 (1.4 GiB)  TX bytes:232651227 (221.8 MiB)

eth3      Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:502626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:46801 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:251803167 (240.1 MiB)  TX bytes:5802812 (5.5 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:94 errors:0 dropped:0 overruns:0 frame:0
          TX packets:94 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:16339 (15.9 KiB)  TX bytes:16339 (15.9 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00                    -00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0    Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          inet addr:10.10.99.20  Bcast:10.10.99.255  Mask:255.255.255.0
          inet6 addr: fe80::82c1:6eff:fe64:8d3c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1125198 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1094891 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:264351866 (252.1 MiB)  TX bytes:226508308 (216.0 MiB)

vmbr1    Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          inet6 addr: fe80::82c1:6eff:fe64:8d3c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1272 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:58512 (57.1 KiB)  TX bytes:578 (578.0 B)

vmbr3    Link encap:Ethernet  HWaddr 80:c1:6e:64:8d:3c
          inet6 addr: fe80::82c1:6eff:fe64:8d3c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:578 (578.0 B)

root@proxmox00:~#

Proxmox01
Code:

root@proxmox01:~# ifconfig
bond0    Link encap:Ethernet  HWaddr e8:39:35:b7:c7:4c
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:78985596 errors:0 dropped:0 overruns:0 frame:0
          TX packets:101958145 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:47848136947 (44.5 GiB)  TX bytes:103293213480 (96.1 GiB)

bond0.2  Link encap:Ethernet  HWaddr e8:39:35:b7:c7:4c
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:11528034 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7606523 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:9308896784 (8.6 GiB)  TX bytes:13112974005 (12.2 GiB)

eth2      Link encap:Ethernet  HWaddr e8:39:35:b7:c7:4c
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:11207054 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1364502 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7576213855 (7.0 GiB)  TX bytes:202885911 (193.4 MiB)

eth3      Link encap:Ethernet  HWaddr e8:39:35:b7:c7:4c
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:67778542 errors:0 dropped:0 overruns:0 frame:0
          TX packets:100593643 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:40271923092 (37.5 GiB)  TX bytes:103090327569 (96.0 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1208712 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1208712 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11790128367 (10.9 GiB)  TX bytes:11790128367 (10.9 GiB)

tap101i0  Link encap:Ethernet  HWaddr c2:15:aa:51:80:23
          inet6 addr: fe80::c015:aaff:fe51:8023/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:1917199 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4189948 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:121642134 (116.0 MiB)  TX bytes:5335729264 (4.9 GiB)

tap102i0  Link encap:Ethernet  HWaddr 92:76:1b:e4:58:1f
          inet6 addr: fe80::9076:1bff:fee4:581f/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:1 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:300 (300.0 B)  TX bytes:0 (0.0 B)

tap112i0  Link encap:Ethernet  HWaddr 1a:b3:20:e5:5c:7b
          inet6 addr: fe80::18b3:20ff:fee5:5c7b/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:1 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:300 (300.0 B)  TX bytes:0 (0.0 B)

tap113i0  Link encap:Ethernet  HWaddr a6:7b:dc:97:43:16
          inet6 addr: fe80::a47b:dcff:fe97:4316/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:420012 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18693350 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:219791105 (209.6 MiB)  TX bytes:22361016498 (20.8 GiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00                                  -00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0    Link encap:Ethernet  HWaddr e8:39:35:b7:c7:4c
          inet addr:10.10.99.21  Bcast:10.10.99.255  Mask:255.255.255.0
          inet6 addr: fe80::ea39:35ff:feb7:c74c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:43488538 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39379002 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:32323093880 (30.1 GiB)  TX bytes:86505250883 (80.5 GiB)

vmbr1    Link encap:Ethernet  HWaddr e8:39:35:b7:c7:4c
          inet6 addr: fe80::ea39:35ff:feb7:c74c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:28063 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1211608 (1.1 MiB)  TX bytes:578 (578.0 B)

root@proxmox01:~#

Proxmox02
Code:

root@proxmox02:~# ifconfig
bond0    Link encap:Ethernet  HWaddr 80:c1:6e:64:ab:2a
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:44492827 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27420072 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:14991834618 (13.9 GiB)  TX bytes:4781766759 (4.4 GiB)

bond0.2  Link encap:Ethernet  HWaddr 80:c1:6e:64:ab:2a
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:6646040 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15976 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3963772204 (3.6 GiB)  TX bytes:1054424 (1.0 MiB)

eth2      Link encap:Ethernet  HWaddr 80:c1:6e:64:ab:2a
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:15261688 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1036273 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8401169881 (7.8 GiB)  TX bytes:131043042 (124.9 MiB)

eth3      Link encap:Ethernet  HWaddr 80:c1:6e:64:ab:2a
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:29231139 errors:0 dropped:0 overruns:0 frame:0
          TX packets:26383799 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6590664737 (6.1 GiB)  TX bytes:4650723717 (4.3 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1158 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1158 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:657942 (642.5 KiB)  TX bytes:657942 (642.5 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0    Link encap:Ethernet  HWaddr 80:c1:6e:64:ab:2a
          inet addr:10.10.99.22  Bcast:10.10.99.255  Mask:255.255.255.0
          inet6 addr: fe80::82c1:6eff:fe64:ab2a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27918154 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25346691 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6052590170 (5.6 GiB)  TX bytes:4530105646 (4.2 GiB)

vmbr1    Link encap:Ethernet  HWaddr 80:c1:6e:64:ab:2a
          inet6 addr: fe80::82c1:6eff:fe64:ab2a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27945 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1285576 (1.2 MiB)  TX bytes:578 (578.0 B)

root@proxmox02:~# ^C

I figure it's because all my interfaces derive in one way or another from bond0, but eth2 and eth3 on proxmox00 also have the same MAC - is it supposed to be that way?
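
For what it's worth, that part is expected: the Linux bonding driver copies the MAC of the first enslaved NIC onto the bond and all its slaves, and the VLAN interfaces and bridges stacked on the bond inherit it too. A quick check of the bond state and which slaves are actually aggregating:

Code:

# shows the bonding mode (should read "IEEE 802.3ad Dynamic link
# aggregation"), the aggregator IDs and the per-slave state
cat /proc/net/bonding/bond0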

LACP on the switch side is configured in the following trunk groups:

TRK1: switch trunk
TRK2: Storage NAS - not important
TRK3: BAckup Nas - not important
TRK4: Proxmox00
TRK5: Proxmox01
TRK6: Proxmox02

Here is the MAC table of switch 1:

Code:


MAC Address          Source Port   MAC Type
00:11:32:24:28:9b TRK2 Learned
00:11:32:24:28:d7 TRK3 Learned
00:11:32:24:28:da TRK1 Learned
00:90:fb:40:9c:f6 TRK1 Learned
00:e0:20:11:0a:2e 12 Learned
00:e0:20:11:0a:2e 12 Learned
72:e1:e0:41:93:73 TRK5 Learned
80:c1:6e:64:8d:3c TRK1 Learned
80:c1:6e:64:8d:3c TRK1 Learned
80:c1:6e:64:8d:3d TRK4 Learned
80:c1:6e:64:ab:2a TRK6 Learned
80:c1:6e:64:ab:2a TRK6 Learned
80:c1:6e:64:ab:2b TRK6 Learned
c2:31:d6:cc:cf:4b TRK5 Learned
d4:c9:ef:3a:fe:e0 TRK1 Learned
d4:c9:ef:3a:fe:f8 TRK1 Learned
d4:c9:ef:3c:d9:a0 CPU Management
e8:39:35:b7:c7:4c TRK5 Learned
e8:39:35:b7:c7:4c TRK5 Learned
e8:39:35:b7:c7:4d TRK5 Learned

Notice there are 3 entries for TRK6, and that the MAC is the same for two of them but not all 3?

Also, I see a lot of duplicate MAC entries for the same trunk groups. I actually expected to see only one MAC per trunk group. Is that normal Proxmox behaviour, or could it be because of switch loops? Would a normal setup, with a Proxmox node connected to two switches using LACP, look this way?

THANKS

Casper

EDIT:

Here is the MAC table of switch 2:

Code:


MAC Address          Source Port   MAC Type
00:11:32:24:28:9b TRK1 Learned
00:11:32:24:28:9c TRK2 Learned
00:11:32:24:28:d7 TRK3 Learned
00:11:32:24:28:d8 TRK3 Learned
00:11:32:24:28:da 22 Learned
00:90:fb:40:9c:f6 12 Learned
00:e0:20:11:0a:2e TRK1 Learned
00:e0:20:11:0a:2e TRK1 Learned
72:e1:e0:41:93:73 TRK1 Learned
80:c1:6e:64:8d:3c TRK4 Learned
80:c1:6e:64:8d:3c TRK4 Learned
80:c1:6e:64:ab:2a TRK1 Learned
80:c1:6e:64:ab:2a TRK1 Learned
d4:c9:ef:3a:fe:e0 CPU Management
d4:c9:ef:3c:d9:b8 TRK1 Learned
e8:39:35:b7:c7:4c TRK5 Learned
e8:39:35:b7:c7:4c TRK1 Learned

KVM access to venet and host routing?

Hello all,

I have a Proxmox VE (2.3) running on a dedicated server, carrying several CTs.
All these CTs have private IP addresses (10.0.0.x) routed through venet, using the host as a NAT gateway to access the internet. Some also have their own external IP addresses, in which case they just enjoy the routing without the NAT.

Now I'm trying to add a KVM machine to that setup, as I want to try setting up a FreeBSD guest.
I want that guest to be able to reach the internet, and the CTs as well. How can I do that?

I tried putting the guest on vmbr1 and giving it a 10.0.0.x IP, as I think I read somewhere that this could work. But it doesn't. (Some of you will probably think "of course", in which case I'd be happy to read the explanation.)
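
One likely explanation, hedged: venet is a point-to-point device that only OpenVZ containers attach to, so a KVM guest on vmbr1 never sees the 10.0.0.x venet routes unless the host itself has an address on that bridge and routes between the two. A sketch of the missing pieces on the host - the addresses and the eth0 uplink are examples, and you may already have an equivalent NAT rule for the CTs:

Code:

# give the host an address on vmbr1 so it can route for the KVM guest
ip addr add 10.0.0.254/24 dev vmbr1

# make sure forwarding is on, and NAT the guest's traffic out like the CTs
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

# inside the FreeBSD guest: use 10.0.0.254 as the default gateway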

High IRQ Load on Win2k8R2

Hi!

I have a Windows 2008 R2 server (fully patched) with Terminal Services enabled. It has been running fine for more than a year without bigger problems on Ubuntu 12.04 LTS with KVM and libvirt. Moving the image to a completely new Proxmox server results in extremely high IRQ load, especially when large amounts of data are transferred over the network.

The network card is an e1000; the storage driver is virtio, version 1.74 from http://alt.fedoraproject.org/pub/alt...latest/images/

High IRQ means up to 74% of CPU time. I've got a second machine (a complete 1:1 copy), still on Ubuntu, which never goes higher than 12%.

After some profiling with Windows Performance Analyzer I can see that there are 2 very expensive procedures:

in ndis.sys: ndis5InterruptDpc, and in ntoskrnl.exe: EtwpStackWalkDpc

I've tried stepping back to virtio 1.3 = no results
More CPU power (more vCPUs) = was even worse
Stopping all other VMs on the host (3 Linux boxes without noticeable load) = no results
Installing a completely fresh Windows 2008 R2 = high IRQ load out of the box :(

No idea left...

Regards,
mike

Some info:

pveversion
pve-manager/3.1-43/1d4b0dfb (running kernel: 2.6.32-27-pve)
root@kobannode4:~# pveversion -v
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

pveperf
CPU BOGOMIPS: 110393.88
REGEX/SECOND: 1061612
HD SIZE: 275.01 GB
BUFFERED READS: 420.60 MB/sec
AVERAGE SEEK TIME: 9.31 ms
FSYNCS/SECOND: 1896.44
DNS EXT: 21.75 ms
DNS INT: 23.71 ms

Help! Accidentally deleted proxmox and its dependencies

hi,

I wanted to remove rsync on my Proxmox server and I used autoremove (NOOOOOOOOB :mad::mad::mad:). Now I can't connect to the Proxmox user interface. My VMs are still running, but I don't know how to get out of this problem.

I tried to reinstall with the .deb packages but I have problems with dependencies...
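
Rather than installing individual .deb files, letting apt resolve the dependency tree is usually less painful. A hedged sketch for PVE 3.x, assuming the Proxmox repository is still configured in APT (check the output of the first commands for errors):

Code:

# let apt repair any half-installed dependencies first
apt-get update
apt-get -f install

# then pull the meta packages back in; they drag the rest along
apt-get install proxmox-ve-2.6.32 pve-manager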

Hope someone can help me solve this :/


Thanks

VM shutdown during backup

Hi,

I was just doing a backup of a QEMU-based VM. Part way through, the VM shut down. Has anyone seen this before?

Gerald

Duplicate VMID

Hi everybody,

Today I have a problem with my Proxmox servers. In my test lab I have 3 Proxmox servers in clustering mode. I migrated 2 VMs between 2 nodes without remembering that this would cause some problems with the Proxmox IDs. So now, when I want to restart the pve-cluster service, it shows me an error: detected duplicate VMID 100 & detected duplicate VMID 101... Does anybody know how to resolve this issue?

Thanks,
superwemba
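
A hedged pointer: this error usually means the same VMID config file exists under more than one node directory in the cluster filesystem. If /etc/pve still mounts, locating the duplicates shows which copy to move aside - keep the one on the node that really runs the VM, and make a backup first:

Code:

# find every node directory that claims VMID 100
find /etc/pve/nodes -name '100.conf'

# example only - adjust the paths to the output above
mv /etc/pve/nodes/nodeB/qemu-server/100.conf /root/100.conf.bak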

Proxmox Mail Gateway: Hotfix 3.1-5829

We just released hotfix 3.1-5829 for our Proxmox Mail Gateway 3.1.

Release Notes

03.03.2014: Proxmox Mail Gateway 3.1-5829

  • proxmox-mailgateway (3.1-11)
  • improve email parser and quote links to avoid XSS reflection
  • fix menu activation with firefox
  • proxmox-spamassassin (3.3.2-4), updated ruleset


Download
http://www.proxmox.com/downloads/category/service-packs
__________________
Best regards,

Martin Maurer

Nested KVM

Hi,

I was trying to set up a Proxmox test cluster inside Proxmox, but I ran into this error:

boot.log:
Code:

Tue Mar 18 00:04:54 2014: Loading kernel module kvm-intel.
Tue Mar 18 00:04:54 2014: ERROR: could not insert 'kvm_intel': Unknown symbol in module, or unknown parameter (see dmesg)

dmesg:
Code:

kvm_intel: Unknown parameter `nested'
This is on the host Proxmox, not inside the virtual Proxmox, by the way.
Would it be possible to compile this module with this parameter enabled? Or has this already been enabled in 3.2? This cluster is still running 2.6.32-26 (3.1-24)...
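
For context: the "Unknown parameter `nested'" message means the kvm_intel module in the running 2.6.32 kernel simply does not expose a nested option; nested VMX only exists in newer kernels (such as, I assume, the optional 3.10 kernel shipped with PVE 3.2). A quick check before setting the option:

Code:

# if this prints a 'nested' parameter, the module supports nested VMX
modinfo kvm_intel | grep -i nested

# on a kernel that supports it, enable nested and reload the module
echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
rmmod kvm_intel && modprobe kvm_intel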

Why are my Network Devices not available?

Enterprise update problem

Hi,

I'm trying to upgrade a Proxmox 3.1 with a Community Subscription to 3.2. I subscribed to it several months ago and never had problems updating until now. I changed my sources.list to:

server:/etc/apt# cat sources.list
deb http://ftp.debian.org/debian wheezy main contrib


# PVE packages provided by proxmox.com
#deb https://enterprise.proxmox.com/debian wheezy pve-enterprise


# security updates
deb http://security.debian.org/ wheezy/updates main contrib

In my sources.list.d/pve-enterprise.list is:

deb https://enterprise.proxmox.com/debian wheezy pve-enterprise



I then ran apt-get update, which resulted in this:

server:/etc/apt# apt-get update
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://security.debian.org wheezy/updates Release
Hit http://security.debian.org wheezy/updates/main amd64 Packages
Hit http://http.at.debian.org wheezy Release.gpg
Hit http://security.debian.org wheezy/updates/contrib amd64 Packages
Hit http://security.debian.org wheezy/updates/contrib Translation-en
Hit http://http.at.debian.org wheezy Release
Hit http://security.debian.org wheezy/updates/main Translation-en
Hit https://enterprise.proxmox.com wheezy Release.gpg
Hit http://http.at.debian.org wheezy/main amd64 Packages
Hit https://enterprise.proxmox.com wheezy Release
Hit http://http.at.debian.org wheezy/contrib amd64 Packages
Hit http://http.at.debian.org wheezy/contrib Translation-en
Hit http://http.at.debian.org wheezy/main Translation-en
W: Failed to fetch https://enterprise.proxmox.com/debia...wheezy/Release Unable to find expected entry 'pve-enterprise/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)


E: Some index files failed to download. They have been ignored, or old ones used instead.



Can someone point me in the right direction? The Proxmox system says the subscription status is active.
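
A hedged first check: the enterprise repository only serves packages when the node presents a valid subscription key, so it is worth confirming what the node itself reports before digging further:

Code:

# show the subscription key and status as the node sees it
pvesubscription get

# then retry
apt-get update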

Thanks

Creating a container that uses the hosts ip to host websites?

How can I do this? I want to create a container hosting Apache and MySQL, but I would like it to use the server's IP. Is this possible? If so, how do I do it? All machines are Debian and are hosted on an OVH dedicated server.
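
A container cannot literally own the host's IP at the same time as the host, but the usual workaround is a venet container with a private address plus DNAT on the host, so traffic arriving on the server IP's web/MySQL ports lands in the container. A hedged sketch, with 10.0.0.10 as an example container IP (assigned beforehand with vzctl set <ctid> --ipadd 10.0.0.10 --save) and eth0 as the uplink:

Code:

# forward web and mysql traffic arriving on the host into the container
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80   -j DNAT --to 10.0.0.10:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3306 -j DNAT --to 10.0.0.10:3306

# let the container's replies out via the host
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.10 -o eth0 -j MASQUERADE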