Channel: Proxmox Support Forum

Containers show as stopped

Hello,

I don't know what happened, but today when I logged in to the Proxmox GUI I noticed that all containers show as offline. No stats are shown for any container. vzlist shows them as running. Does anyone know what happened? Please see the screenshot below.

[Attached image: proxmox.jpg]
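When vzlist reports the containers as running but the GUI shows them all as stopped with no stats, the node's status daemon is often the part that is stuck. A minimal thing to try, assuming a PVE 3.x host with sysvinit-style services:
Code:

# restart the daemons that feed status information to the web GUI
service pvestatd restart
service pvedaemon restart
# then check whether they log any errors
tail -n 50 /var/log/syslog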

multipathd: uevent trigger error

Hello,

We have an environment of 3 Proxmox servers plus a central storage array which is attached to the servers via iSCSI.
The 3 servers were all installed identically and joined into a cluster following the documentation. The storage was attached to the servers via multipathing as documented and then added as LVM.

For some time now, however, the error "multipathd: uevent trigger error" keeps appearing in the logs on 2 of the 3 servers, with no apparent cause.
The third server shows no errors; config files such as /etc/multipath.conf, /etc/network/interfaces and the rest have all been compared and show no differences.
The storage is still available and is reached on all paths, but these errors are spammed every minute.

My own research on the internet has unfortunately been unsuccessful. Can anyone help? What exactly does this error mean, and how can it be fixed?
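For what it's worth, a frequent cause of "uevent trigger error" messages is multipathd reacting to devices it should not manage at all (local boot disks, partitions, CD-ROMs). A sketch of a blacklist section for /etc/multipath.conf; the wwid is a placeholder you would replace with the ID of your local disk:
Code:

# find the WWID of the local boot disk first:
#   /lib/udev/scsi_id -g -u -d /dev/sda
blacklist {
        wwid <wwid-of-local-boot-disk>
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
# then reload: service multipath-tools reload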

Furthermore, in the web interface the gateway 192.168.0.1 is configured on the primary vmbr0. However, if I check this on the console with "route", I get the following output:
Code:

root@proxmox01:~# route
Kernel-IP-Routentabelle
Ziel            Router          Genmask        Flags Metric Ref    Use Iface
10.0.0.0        *              255.255.255.0  U    0      0        0 vmbr99
192.168.0.0    *              255.255.0.0    U    0      0        0 vmbr0

This does not affect normal operation, since the clustering runs over the separate 10.0.0.0 network, but it is odd nonetheless.
The installation was done normally, following the documentation.

Installed package versions are listed below; the clocks are synchronized everywhere.
Code:

root@proxmox01:~# pveversion -v
proxmox-ve-2.6.32: 3.4-155 (running kernel: 2.6.32-38-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-38-pve: 2.6.32-155
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-5
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Keymap Console

Hey.

When I use the console in Proxmox, the keyboard layout is completely different from my actual keyboard; for example, I'm unable to find the key for " and so on.

Is it possible to change the keymap in the console to German?
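A minimal sketch for the host's text console, assuming a Debian-wheezy-based PVE host with the console-setup and keyboard-configuration packages installed:
Code:

# set the layout in /etc/default/keyboard, e.g.:
#   XKBLAYOUT="de"
# then apply it to the current console:
setupcon
# or reconfigure interactively:
dpkg-reconfigure keyboard-configuration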

SSD Speed was slow

Hi, I have created a RAID 10 with 4x 512 GB Samsung SSDs, using a hardware RAID controller with cache and BBU.
With the original fstab entries:
Code:

/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext4 defaults 0 1
/dev/pve/swap none swap sw 0 0

Code:

pveperf /var/lib/vz
CPU BOGOMIPS:      48001.12
REGEX/SECOND:      865523
HD SIZE:          797.85 GB (/dev/mapper/pve-data)
BUFFERED READS:    266.64 MB/sec
AVERAGE SEEK TIME: 0.19 ms
FSYNCS/SECOND:    2935.39
DNS EXT:          68.47 ms
DNS INT:          57.05 ms

Speed is low.

If I change the fstab to:
Code:

/dev/pve/root / ext4 relatime,nodelalloc,barrier=0,errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext4 relatime,nodelalloc,barrier=0 0 1
/dev/pve/swap none swap sw 0 0

and then restart the host and run pveperf again:
Code:

pveperf /var/lib/vz
CPU BOGOMIPS:      48000.40
REGEX/SECOND:      870037
HD SIZE:          797.85 GB (/dev/mapper/pve-data)
BUFFERED READS:    267.28 MB/sec
AVERAGE SEEK TIME: 0.19 ms
FSYNCS/SECOND:    3079.00
DNS EXT:          60.51 ms
DNS INT:          52.61 ms

The speed is still slow.

Why is it so slow? Any ideas?

best regards
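For what it's worth, pveperf's FSYNCS/SECOND figure is dominated by how the controller handles cache flushes, so barrier/delalloc tweaks barely move it. A hedged way to cross-check the number with a second tool (assumes fio is installed; the test file name is arbitrary):
Code:

# 4k synchronous write test on the same filesystem pveperf measured
fio --name=fsync-test --filename=/var/lib/vz/fio-test --size=1G \
    --rw=write --bs=4k --fsync=1 --ioengine=sync
rm /var/lib/vz/fio-test

It is also worth confirming in the controller utility that the write-back cache is actually enabled.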

OpenVZ containers on NFS

Hi,

I'm facing a few issues with a Proxmox installation where I'm storing containers on an NFS share. Initially I had ACLs enabled and that did not work well at all, so I disabled them and things seemed to improve. I'm still facing a few issues though. For example:
- adding a user reports a problem with locks on the /etc/groups file
- I cannot seem to use setcap inside containers

Any clues? From my investigation, some of these may be related to NFS limitations, but I'm not completely sure. Would I be better off using GlusterFS? Maybe I would not face these issues there?

Cheers
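One quick diagnostic sketch: before changing anything else, confirm which protocol version, lock and ACL options the share was actually mounted with (nfsstat comes with nfs-common):
Code:

# show the effective mount options for every NFS mount
nfsstat -m
# or list just the NFS mounts
mount -t nfs,nfs4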

BruteForce

Hi,

How can I detect whether my VPSes are scanning SSH (brute force) from my server?
And how can I view the network history of a VPS? And how can I block outgoing port 22 on all VPSes (so a VPS can't connect to port 22)?

Thanks a lot
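A minimal sketch on the host, assuming the containers use venet so their traffic passes the host's FORWARD chain via venet0; it logs new outbound SSH connections from containers (the source IP in the log identifies the offending VPS) and then drops them:
Code:

iptables -A FORWARD -i venet0 -p tcp --dport 22 -m state --state NEW \
        -j LOG --log-prefix "CT-SSH-OUT: "
iptables -A FORWARD -i venet0 -p tcp --dport 22 -m state --state NEW -j DROP
# watch which container IPs show up
tail -f /var/log/syslog | grep CT-SSH-OUT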

networking painfully slow after upgrade to 3.4

Hi,

I recently upgraded my old 2.3 server to 3.4 and am now finding that the networking is just painfully slow (I'm talking 200 kB/s here!).

First things first:

Quote:

Originally Posted by pveversion -v
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Code:

lspci| grep -i real
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

Code:

modinfo r8169
filename:      /lib/modules/2.6.32-39-pve/kernel/drivers/net/r8169.ko
firmware:      rtl_nic/rtl8168g-1.fw
firmware:      rtl_nic/rtl8106e-1.fw
firmware:      rtl_nic/rtl8411-1.fw
firmware:      rtl_nic/rtl8402-1.fw
firmware:      rtl_nic/rtl8168f-2.fw
firmware:      rtl_nic/rtl8168f-1.fw
firmware:      rtl_nic/rtl8105e-1.fw
firmware:      rtl_nic/rtl8168e-3.fw
firmware:      rtl_nic/rtl8168e-2.fw
firmware:      rtl_nic/rtl8168e-1.fw
firmware:      rtl_nic/rtl8168d-2.fw
firmware:      rtl_nic/rtl8168d-1.fw
version:        2.3LK-NAPI
license:        GPL
description:    RealTek RTL-8169 Gigabit Ethernet driver
author:        Realtek and the Linux r8169 crew <netdev@vger.kernel.org>
srcversion:    4C34A7693E03D5EF3239253
alias:          pci:v00000001d00008168sv*sd00002410bc*sc*i*
alias:          pci:v00001737d00001032sv*sd00000024bc*sc*i*
alias:          pci:v000016ECd00000116sv*sd*bc*sc*i*
alias:          pci:v00001259d0000C107sv*sd*bc*sc*i*
alias:          pci:v00001186d00004302sv*sd*bc*sc*i*
alias:          pci:v00001186d00004300sv*sd*bc*sc*i*
alias:          pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*
alias:          pci:v000010ECd00008169sv*sd*bc*sc*i*
alias:          pci:v000010ECd00008168sv*sd*bc*sc*i*
alias:          pci:v000010ECd00008167sv*sd*bc*sc*i*
alias:          pci:v000010ECd00008136sv*sd*bc*sc*i*
alias:          pci:v000010ECd00008129sv*sd*bc*sc*i*
depends:        mii
vermagic:      2.6.32-39-pve SMP mod_unload modversions
parm:          use_dac:Enable PCI DAC. Unsafe on 32 bit PCI slot. (int)
parm:          debug:Debug verbosity level (0=none, ..., 16=all) (int)

On top of the apparent lack of speed, the problem is this:

Code:

ethtool -s eth0 speed 1000 duplex full autoneg off
Cannot set new settings: Invalid argument
  not setting speed
  not setting duplex
  not setting autoneg

Note that I can disable autoneg (which more often than not doesn't work) if I run that command without the speed argument. I cannot, however, set the mode to 1000baseT. Prior to the upgrade I very much could, and had.

I tried installing firmware-realtek, but...

Code:

The following packages have unmet dependencies:
 pve-firmware : Conflicts: firmware-realtek but 0.36+wheezy.1 is to be installed.

Do I have to download and compile the Realtek drivers myself now? Any other solutions?
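Before compiling anything, it may be worth checking what the link actually negotiated and, instead of forcing the speed, limiting what is advertised; a sketch (0x020 is ethtool's bitmask for 1000baseT/Full):
Code:

# show current link state plus supported and advertised modes
ethtool eth0
# keep autonegotiation on but advertise only gigabit full duplex
ethtool -s eth0 advertise 0x020 autoneg on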

Proxmox internal networking and OpenVswitch

I'm looking to replicate the ESXi internal-switch setup, where VMs can talk to each other directly on the host instead of going out to the physical switch and back.

Can this be accomplished with Open vSwitch? That way my web host could talk to my MySQL server directly, while both of them still have access to the internet as well.

In the past I created another Linux bridge with no NICs on it and attached the VMs to that, but I'm hoping to avoid setting up dual NICs/IPs/etc.
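Traffic between two VM ports on the same bridge never leaves the host, whether it is a plain Linux bridge or Open vSwitch, so a single bridge is enough. A rough /etc/network/interfaces sketch for an OVS bridge, assuming the openvswitch-switch package is installed, eth0 is the uplink and the address is illustrative (the stanza keywords follow the OVS ifupdown integration used in the Proxmox examples):
Code:

allow-vmbr0 eth0
iface eth0 inet manual
        ovs_bridge vmbr0
        ovs_type OVSPort

allow-ovs vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        ovs_type OVSBridge
        ovs_ports eth0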

OpenVZ, localhost port refused

Hi

I have a Proxmox server with an OpenVZ container (Debian).
I want to use ws://127.0.0.1:8080 (Symfony framework), but it doesn't work; telnet fails too:

Trying ::1...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused



The container uses NAT (venet) for its IP and has no firewall.

How can I open port 8080 on localhost? Thanks.
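"Connection refused" on 127.0.0.1 usually just means nothing inside the container is listening on that port; venet and the host firewall are not involved for loopback traffic. A quick sketch to check from inside the container (the app path and the Symfony 2 console location are assumptions):
Code:

# is anything listening on 8080 at all?
netstat -tlnp | grep 8080
# if not, start Symfony's built-in dev server bound to loopback:
php app/console server:run 127.0.0.1:8080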

Lose VM configuration due to backup failure

I have noticed that a VM loses its configuration if the backup process fails.

Code:

INFO: Starting Backup of VM 130 (qemu)
INFO: status = running
INFO: update VM 130: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/pve_backup/dump/vzdump-qemu-130-2015_05_19-00_15_02.vma.lzo'
INFO: backup contains no disks
INFO: starting template backup
INFO: /usr/bin/vma create -v -c /mnt/pve/pve_backup/dump/vzdump-qemu-130-2015_05_19-00_15_02.tmp/qemu-server.conf exec:lzop>/mnt/pve/pve_backup/dump/vzdump-qemu-130-2015_05_19-00_15_02.vma.dat
INFO: vma: vma-writer.c:139: vma_writer_add_config: Assertion `len' failed.
ERROR: Backup of VM 130 failed - command '/usr/bin/vma create -v -c /mnt/pve/pve_backup/dump/vzdump-qemu-130-2015_05_19-00_15_02.tmp/qemu-server.conf 'exec:lzop>/mnt/pve/pve_backup/dump/vzdump-qemu-130-2015_05_19-00_15_02.vma.dat'' failed: got signal 6

Code:

root@pve02A:/etc/pve/nodes/pve01C/qemu-server# ls -alh
total 2.5K
drwxr-x--- 2 root www-data  0 Feb  1 13:31 .
drwxr-x--- 2 root www-data  0 Feb  1 13:31 ..
-rw-r----- 1 root www-data  0 May 19 00:15 130.conf
-rw-r----- 1 root www-data 487 May 19 01:12 136.conf
-rw-r----- 1 root www-data 396 Mar 31 17:16 205.conf
-rw-r----- 1 root www-data 888 Mar 18 16:27 503.conf
-rw-r----- 1 root www-data 294 Mar 18 16:27 504.conf

Code:

root@pve01C:~# pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 3.10.0-9-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-3.10.0-8-pve: 3.10.0-30
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-3.10.0-9-pve: 3.10.0-33
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-5
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Any ideas on how to restore the VM config file and how to prevent Proxmox from doing such nasty things?
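If an older, successful backup of the same VM still exists, the configuration can be pulled back out of it. A sketch, where the date placeholder and the scratch VMID 9130 are assumptions and the target storage must have room for the restored disks:
Code:

# restore an older backup of VM 130 to a scratch VMID, then copy the settings back by hand
qmrestore /mnt/pve/pve_backup/dump/vzdump-qemu-130-<older-date>.vma.lzo 9130 --storage local
cat /etc/pve/qemu-server/9130.conf
# once 130.conf has been recreated, remove the scratch VM again
qm destroy 9130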

Problem creating my own openvz template

Hello dear reader!

I'm trying to set up my own Debian Jessie OpenVZ container and tried to follow this manual:

https://www.4b42.de/US-de/kb/58-Open...erstellen.html (It is in German).

I successfully debootstrapped Jessie with

Code:

debootstrap --arch amd64 wheezy /var/lib/vz/private/777 http://ftp.ch.debian.org/debian/
but the next step says:

Code:

vzctl set 777 --applyconfig basic --save
Result:

Sample config file not found: /etc/pve/openvz/ve-basic.conf-sample
Error: failed to apply some parameters, not saving configuration file!

So I searched for this error and found this alleged solution which does not seem to work any more:

http://realtechtalk.com/File_etcvzco...-1237-articles (this time in English).

It says that the file would have to be renamed to ve-vps.basic.conf-sample and that one should edit /etc/vz/vz.conf and change the line CONFIGFILE="vps.basic" to CONFIGFILE=BASIC.

Unfortunately, this line does not exist in vz.conf, nor do any of these .conf-sample files exist in the /etc/vz/conf/ directory.

So this leaves me stumped. My Proxmox version is 3.3-5.

Any help is greatly appreciated. Kind regards, MisterIX.
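Since the error shows that vzctl on Proxmox looks for sample configs under /etc/pve/openvz/, one workaround sketch is simply to create the sample it is asking for; the resource values below are illustrative placeholders, not recommendations:
Code:

cat > /etc/pve/openvz/ve-basic.conf-sample <<'EOF'
# minimal vswap-style defaults; tune to your needs
PHYSPAGES="0:512M"
SWAPPAGES="0:512M"
DISKSPACE="4G:4.4G"
DISKINODES="200000:220000"
QUOTATIME="0"
CPUUNITS="1000"
EOF
vzctl set 777 --applyconfig basic --save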

Startech USB to Ethernet installation

Hi all,

I want to install this USB-2-Ethernet adapter on my Proxmox host.

http://www.amazon.com/StarTech-Gigab.../dp/B007U5MGDC

I need to compile the driver myself against the kernel sources. I am currently on this version:

Code:
root@proxmox-virt:/tmp/Linux# pveversion --verbose
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-156
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I have the drivers, but I don't know how to carry out the following step from the tutorial:

a. Obtain the kernel source tree for the platform in use and build it.

I have tried to install the headers, but I get an error:
Code:

#apt-get install linux-headers-2.6.32-39
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package linux-headers-2.6.32-39
E: Couldn't find any package by regex 'linux-headers-2.6.32-39'



Can anyone point me in the right direction?
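Proxmox does not ship its kernel headers under Debian's linux-headers-* name; the matching package is pve-headers-<kernel version>, which is what out-of-tree drivers like this USB NIC need. A sketch:
Code:

apt-get update
apt-get install pve-headers-$(uname -r) build-essential
# most vendor driver Makefiles then find the tree via /lib/modules/$(uname -r)/build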

OpenVZ released 3.X kernel

NFS not online

Hi,

I've just set up Proxmox on an OVH dedicated server and have already set up the licence so that my Proxmox node only gets the proper updates.
The problem I'm having is that when I reboot the server, some VMs do not start up and the error message is:

"TASK ERROR: storage 'NFS' is not online"

I don't have anything associated with the VMs on the NFS storage, with the exception of the VirtIO ISO file mounted in the DVD drive.


How can I avoid this problem and make sure all VMs start up properly when my node reboots?

My version is 3.4-6/102d4547

Thanks
Eduardo Brbosa
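Since the only reference to the NFS storage is the ISO in the virtual DVD drive, one workaround is to detach that ISO so the VM no longer depends on the share at start-up. A sketch assuming VMID 100 and that the CD-ROM sits on ide2 (check with qm config first):
Code:

qm config 100 | grep ide
# set the CD-ROM drive to "no media" so the NFS storage is not needed at start
qm set 100 -ide2 none,media=cdrom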

Containers and Network storage

Hi,

Does anyone have a fully working solution for containers on shared storage? I'm trying to use NFS, both v3 and v4, and I'm finding tons of issues with permissions and with features that don't seem to work over NFS (setcap, IDs, etc.).

Any ideas?

Cheers

VGA and KVM VPS

Hi everyone,

I set up my Proxmox host with the pve-kernel-3.10.0-8 kernel.

My system :
Quote:

E5-1650v3 - 64GB RAM - 240GB SSD - GTX 750
My first VPS worked with my GPU card without errors. But when I tried to start my 2nd VPS (cloned from the 1st), it said:
Quote:

kvm: -device vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on: vfio: error opening /dev/vfio/29: Device or resource busy
kvm: -device vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on: vfio: failed to get group 29
kvm: -device vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on: Device initialization failed.
kvm: -device vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on: Device 'vfio-pci' could not be initialized
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=c137ff47-cff3-4134-b401-8379312db0f3' -name 1 -smp '2,sockets=2,cores=1,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga std -no-hpet -cpu 'kvm64,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2 apic,+sep' -m 4096 -k en-us -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=03:00.0,id=hostpci0.0,bus=ich9-pcie-port-1,addr=0x0.0,multifunction=on' -device 'vfio-pci,host=03:00.1,id=hostpci0.1,bus=ich9-pcie-port-1,addr=0x0.1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:97d9ae6b71f' -drive 'file=/var/lib/vz/template/iso/virtio-win-drivers-20120712-1.iso,if=none,id=drive-ide0,media=cdrom,aio=native' -device 'ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=200' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=10 0' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=26:F1:00:8F:ED:C9,netdev=net0,bus=pci.0,ad dr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
Are there any solutions for running multiple VPSes with one GPU card?

Updates break zfs

OK, so I have a ZFS RAID1 pool for boot on two SSDs and an 8-drive ZFS RAID10 pool for storage. It was working fine and I could reboot the server... until today's updates, which brought a new kernel, GRUB updates and ZFS updates. Now when I reboot, my 8-drive pool is not mounted before Proxmox decides to create the dump, images, private and template folders, so they end up on the unmounted mountpoint. This is not the first time we have had this issue. Is there something that can be done to fix this permanently? It is very annoying!

So:
Code:

service pveproxy stop
service pvestatd stop
service pvedaemon stop
cd /Storage
# Delete all of the empty directories
rm -rf *
cd /
service zfs-mount start
service pvedaemon start
service pvestatd start
service pveproxy start

And now we are good until next reboot.

Configuring Proxmox-Host Network

Hi and Hello,

I'm new to this forum and a rookie with Proxmox. I have googled my questions up and down, and in the end it all just confused me, so I decided to talk to the Proxmox professionals :-)

My Goal:
To have a virtual firewall installed on Proxmox with 2 or 3 virtual zones (DMZ/Green/etc.). The Proxmox host and the virtual firewall should have different public IPs. All virtual guests should talk to the internet through the virtual firewall.

What I have:
Proxmox installed on a root server with 1 NIC and 2 IP addresses. My hosting provider has "port security" activated, so only 1 MAC address is allowed.

My Questions:
1. In /etc/network/interfaces, should I use a routed config or a bridged config with proxy ARP? And how should I configure it?
2. What else do I have to do regarding routing and forwarding?
3. Is it a problem that the main and the second public IP are in different subnets? Do I need to set a route?
4. If I use proxy ARP, how can I ensure that only my "red" zone has a direct connection to the internet?
5. How should I bring up the second IP? Is there a way to use the second IP directly on the virtualized "red" interface?

My best shot at the interfaces config so far:
Code:

# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 85.25.19x.xxx
        netmask 255.255.255.192
        gateway 85.25.196.193

# 2nd IP
auto eth0:0
iface eth0:0 inet static
        address 85.25.15x.xxx
        netmask 255.255.255.255
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

#Internal Switch Green
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

#Internal Switch Orange
auto vmbr1
iface vmbr1 inet static
        address 192.168.1.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

#Internal Switch Red
auto vmbr2
iface vmbr2 inet static
        address 192.168.2.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

The problem with this config is that in every zone .254 could be used as the gateway, and the second public IP is not visible from the outside; all requests appear to come from the main public IP.
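Regarding question 5, one routed-style sketch is to drop the eth0:0 alias, configure the second public IP inside the firewall VM's "red" interface, and let the host forward towards it over vmbr2 (the placeholder digits are kept from above; proxy_arp on eth0 stays as already configured):
Code:

#Internal Switch Red (with routing additions)
auto vmbr2
iface vmbr2 inet static
        address 192.168.2.254
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up ip route add 85.25.15x.xxx/32 dev vmbr2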

I hope there is someone who can help me with this.

Thanks and regards,
Leo

eject cdrom error

The system was installed from the official Proxmox VE 3.4 CD-ROM. When I eject the physical CD-ROM, I get a lot of error messages.
I have never encountered a similar situation before. Maybe this is a bug?
[Attached image: ejecterror.jpg]

Guest Networking Dies

Hey all. We are having some strange guest network issues. The guest is CentOS 7 running kernel "3.10.0-229.1.2.el7.x86_64". The hosts are set up with bridged networking. At random times the guest loses all network connectivity. The host is completely fine and has no issues. This is happening on both of our hosts, so it rules out a hardware issue. The guest is using the virtio network driver. I just switched it to e1000 to see if that prevents the issue.

The only way to get networking back is to completely shut down the guest. A reboot does not work; the guest has to be completely shut down and then started again for networking to come back. Looking for some input. I have dug through the logs on both the guest and the host and found nothing out of the ordinary.

I also updated both host nodes to the latest packages/kernel this morning and the issue happened again.

Not sure what else the issue could be, hoping someone can shed some insight or suggestions.