Channel: Proxmox Support Forum

nvidiafb boot issue w/ ve4

I've got an nvidia card on my host. I recently reinstalled with ve4 and am getting the following on boot:

[screen goes white on green from white on black]:
[ 58.599324] nvidiafb: unable to setup MTRR


It just locks up here. Removing the card and booting works fine.
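
For what it's worth, the only workaround I can think of (untested, and assuming it really is the nvidiafb framebuffer module hanging the boot) would be to blacklist it on the host and rebuild the initramfs:
Code:

# untested guess: keep the nvidiafb framebuffer module from loading at boot
echo "blacklist nvidiafb" > /etc/modprobe.d/blacklist-nvidiafb.conf
update-initramfs -u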

Please, I need help

How can I access the data inside one of my virtual machines' hard drives directly, regardless of which operating system it runs? It won't let me in the usual way, neither via the console nor remote desktop, and I need to recover some very important information. Please help.
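
To be clear about what I mean by direct access: I assume something like mapping the VM's disk image on the host (with the VM stopped) and mounting it read-only should work, but I'd appreciate confirmation or a better way. An untested sketch, where the image path and partition number are only examples:
Code:

# untested sketch -- stop the VM first; image path and partition are examples only
modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 /var/lib/vz/images/100/vm-100-disk-1.qcow2
mount -o ro /dev/nbd0p1 /mnt
# ... copy the data out, then disconnect:
umount /mnt
qemu-nbd --disconnect /dev/nbd0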

thanks

Help me find the error in my network config...

I'm implementing an HA cluster with 3 nodes, Ceph, and software fencing (PVE 4). Each node has a single "management" network and a pair of 10G fibers to diverse switches carrying several already-allocated 802.1Q VLANs.

Here's the logical setup:
Code:

eth6 -|
eth9 -|
    bond0-|
          bond0.1000 ("cluster traffic vlan" on the switches)-|
              bond0.1000.10 (intra-cluster corosync and such)-|
                    br10 (192.168.10.0/24)

I'm able to build all of these relationships manually with the commands I'd been using previously on an Ubuntu box. I think I'm missing something, though, since right now I have to manually associate br10 with the underlying bond and then "ifconfig up" everything to get it started. I eventually want to build similar "sub-VLANs" for Ceph traffic, etc., but this first one is needed before I can set up the cluster feature. Can someone tell me what I'm missing in my network interface config:
Code:

auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        <stuff here>
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto br10
iface br10 inet static
  mtu 1500
  address 192.168.10.10
  netmask 255.255.255.0
  network 192.168.10.0
  bridge_ports bond0.1000.10
  bridge_hello 2
  bridge_maxage 12
  bridge_stp off

auto eth6
iface eth6 inet manual

auto eth9
iface eth9 inet manual

auto bond0
iface bond0 inet manual
        slaves eth6 eth9
        mtu 9000
        bond_miimon 100
        bond_mode active-backup

auto bond0.1000
iface bond0.1000 inet manual
  vlan-raw-device bond0
  up /sbin/vconfig add bond0.1000 10

#Q in Q

auto bond0.1000.10
iface bond0.1000.10 inet manual
  vlan-raw-device bond0.1000
  up /sbin/vconfig add bond0 1000
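
I notice the vconfig arguments in the two stanzas above may be swapped relative to the interface names; is something like this closer to correct (untested, just my best guess, and if the vlan package's ifupdown hooks are installed the explicit vconfig lines may not even be needed)?
Code:

# untested guess at corrected stanzas: outer tag 1000 on bond0,
# inner tag 10 on bond0.1000, with br10 bridged on top as above
auto bond0.1000
iface bond0.1000 inet manual
  vlan-raw-device bond0
  up /sbin/vconfig add bond0 1000

auto bond0.1000.10
iface bond0.1000.10 inet manual
  vlan-raw-device bond0.1000
  up /sbin/vconfig add bond0.1000 10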

Thanks.
Dan

How do you remove the cluster configuration....from everything?

I'm building a new cluster and need to remove the corosync settings after a small typo... but you can't. I've tried shutting down the service, removing the package, cleanly restarting the host, deleting anything in the filesystem with "cluster" in the name, reinstalling the software, and rebooting again. The config COMES BACK!!!

How do I kill it? Please tell me how that corosync.conf file is being generated so I can destroy it and start over.

And no, I do not want to completely re-install the system.
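
For the record, the closest thing to a procedure I've pieced together so far (not yet verified, so corrections welcome) is to stop the cluster services, start pmxcfs in local mode, and remove the corosync config from both the pmxcfs filesystem and /etc/corosync:
Code:

# unverified sequence pieced together from other threads -- use with care
systemctl stop pve-cluster corosync
pmxcfs -l                      # start the cluster filesystem in local mode
rm -f /etc/pve/corosync.conf   # the copy that keeps coming back
rm -rf /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster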

Thanks.
Dan

pty issue with ve 4 containers (converted from 3)

Upgraded a machine to ve4. Prior to the upgrade, I took backups of all OpenVZ containers, and I am now trying to restore them on VE 4 with pct restore. Most went fine, but a few give me this when attempting to ssh into the container:
root@usenet's password:
PTY allocation request failed on channel 0
Linux usenet 4.1.3-1-pve #1 SMP Thu Jul 30 08:54:37 CEST 2015 i686


The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.


Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
You have mail.
stdin: is not a tty

----snip-----
It just hangs there indefinitely. Further, I'm not getting any content when trying to open a console via the GUI (pct enter works just fine). Any ideas?
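
My working theory (unverified) is that the converted containers are missing a devpts mount. From inside one of the affected containers (via pct enter) I was going to try something like:
Code:

# unverified guess: mount devpts inside the container and make it persistent
mount -t devpts -o gid=5,mode=620 devpts /dev/pts
echo "devpts /dev/pts devpts gid=5,mode=620 0 0" >> /etc/fstab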

using ceph with proxmox

Hi all,

We really like the idea that, by using Ceph with Proxmox, we now use the entire server for both virtualization and the storage cluster. The downside, though, is that we use resources on Ceph that could be used for Proxmox, and vice versa.

When you go through the Ceph documentation, they say to use dedicated hardware for Ceph, which is not what we are doing if we use it with Proxmox.

I think they say this more to be on the safe side.

My question is for those of you running Proxmox with Ceph on the same node or nodes: how well does it run? Do you have SQL servers running fine? Do you run servers for your entire infrastructure this way? Is there anything one should know before planning this?

I know it's an open question, so I was hoping we could treat it as such.

Proxmox 4.0 Beta ( Unable to mount multiple nfs host paths in LXC container )

BETA 4.0Beta-26/5da615b

Getting error: TASK ERROR: multiple definitions for lxc.mount.entry

When my config contains:
lxc.mount.entry = /mnt/pve/Work/complete/movie media/MovieComplete none bind,create=dir 0 0
lxc.mount.entry = /mnt/pve/Media/Movie media/Movie none bind,create=dir 0 0

Both /mnt/pve/Work and /mnt/pve/Media are nfs shares

[SOLVED] Issue after installing ubuntu 14.04 LTS as vm

I am able to install Ubuntu, but I am unable to view the console after reboot; it just looks garbled. I have tried to Google the issue and cannot locate a resolution. I have tried rebooting the whole VM environment and have tried different browsers and computers. Any suggestions would be greatly appreciated. I have enclosed a screenshot.

Figured it out: apparently it may have been a bad ISO, which is weird because the ISO installed just fine.

Proxmox GRE Tunnel.

Hello ppl.

I have a question about GRE + VPS IPs.

My provider offers me just one IP.
Is it possible to use the GRE IPs (already routed to me) to give the VPSes their own connectivity?

VPS1. [GRE] 192.168.111.2 = [PUBLIC] 8.8.8.8
VPS2. [GRE] 192.168.111.3 = [PUBLIC] 8.8.8.9
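
What I have in mind on the Proxmox host is roughly the following (untested; REMOTE_IP stands for the provider-side GRE endpoint and the addressing is only an example):
Code:

# untested sketch -- REMOTE_IP and all addresses are placeholders
ip tunnel add gre1 mode gre local <my-single-public-ip> remote REMOTE_IP ttl 255
ip addr add 192.168.111.1/24 dev gre1
ip link set gre1 up
# then route each routed public IP towards the VPS that should own it
ip route add 8.8.8.8/32 via 192.168.111.2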

Very high server load / very idle CPU: how to check which VPS is causing issues?

Cheapest way to ensure filesystem consistency on a ZFS snapshot?

Hello,

I would like to take ZFS snapshots (remote storage via NFS) of the VMs' qcow2 images. Would the cheapest (most efficient) approach be to generate a live snapshot of the virtual machine before taking the ZFS snapshot, and if so, will that snapshot result in a consistent filesystem (I don't care about the applications)?
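
To make the question concrete, the sequence I have in mind is the following (untested; the VMID and dataset name are placeholders, and the zfs command would run on the storage server):
Code:

# untested sketch -- 100 and tank/proxmox are placeholders
qm snapshot 100 pre-zfs              # live VM snapshot of the qcow2
zfs snapshot tank/proxmox@nightly    # on the storage server exporting the NFS share
qm delsnapshot 100 pre-zfs           # drop the VM snapshot once the ZFS snapshot exists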

Thanks.

Problem with network configuration

So I installed the latest version on a Lenovo ThinkCentre M71e, and while booting, my network switch shows activity; however, once Proxmox loads, the activity light goes out and it's as if the interface has been shut off. I have found no other help searching online.

When I do netstat -i, it shows eth0 and its MAC address but nothing attached to it.

What do I need to do to get it to work?
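
In case it helps to narrow it down, the first things I plan to check are whether the NIC is simply down and whether vmbr0 is actually bridged to eth0; something along these lines (addresses are only examples):
Code:

# bring the NIC up by hand and see if the link light comes back
ip link set eth0 up
ip link show eth0

# /etc/network/interfaces should contain something like this
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0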

opening VM's installed on proxmox in Gnome-Boxes using Spice Fedora 22 Possible?

Hi There
I believe it should be possible to open a VM using SPICE in Gnome-Boxes. I can open VMs successfully using Remote Viewer on the same machine, but when I select Gnome-Boxes it doesn't recognise the file:///tmp/znGmxi1n that is launched from the Proxmox web hypervisor page.

Any tips would be greatly appreciated.

Thanks

Resize disk does not work on a stopped VM...

Hi all, this bug, already filed (https://bugzilla.proxmox.com/show_bug.cgi?id=643), is still there, confirmed today by another user, this time on DRBD storage: Error 500. The same command in a shell works correctly and the disk is successfully resized, but the disk size is NOT correctly displayed in the hardware tab of the VM, so a qm rescan needs to be done. Any idea about this random error when resizing a disk on a stopped VM?
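
For reference, the shell workaround mentioned above (resize by hand, then force Proxmox to re-read the size) looks like this, with VM 100 and virtio0 purely as examples:
Code:

# example only: grow virtio0 of VM 100 by 10G, then refresh the sizes shown in the GUI
qm resize 100 virtio0 +10G
qm rescan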

Thanks,

Christophe.

Disks disappearing

Hello. I have had a few occurrences where a VM's disk file (.qcow2) has disappeared without a trace.

Thankfully I do a backup each day so the vm's can easily be restored.



Today I got a vzdump backup status mail that includes this error:

Quote:

volume 'local:101/vm-101-disk-1.qcow2' does not exist
The disk file has indeed disappeared, but the VM is still running:

Code:

/usr/bin/kvm -id 101 -chardev socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -name test -smp 8,sockets=2,cores=4,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga qxl -cpu kvm64,+lahf_lm,+x2apic,+sep -m 16384 -k is -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -spice tls-port=61000,addr=127.0.0.1,tls-ciphers=DES-CBC3-SHA,seamless-migration=on -device virtio-serial,id=spice,bus=pci.0,addr=0x9 -chardev spicevmc,id=vdagent,name=vdagent -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:5b8b5fbeb32f -drive if=none,id=drive-ide2,media=cdrom,aio=native -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/var/lib/vz/images/101/vm-101-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=52:4F:BD:91:45:9B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300
I see no information on what has happened to the disk, but it seems to be completely gone from the filesystem.

Any ideas why this is happening and how I can investigate it?
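
One thing I will check while the VM is still running (since KVM may still be holding the deleted file open) is whether the image shows up as a deleted-but-open file descriptor of the kvm process:
Code:

# if the qcow2 was unlinked while KVM still has it open, it shows up as "(deleted)"
ls -l /proc/$(cat /var/run/qemu-server/101.pid)/fd | grep deleted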

Cleaning cached memory

Hello,
I have several servers with Proxmox installed.
One has 128GB of RAM, and when I run free I can see that there is about 70GB of cached memory at all times.
I found a script that cleans cached memory.
Is it safe to use it? Why is so much memory cached?
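
For context, scripts like this usually boil down to the standard drop_caches one-liner, something like:
Code:

# flush dirty pages to disk, then ask the kernel to drop the page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches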

Proxmox + Ceph limitation

Hi,

Has anybody tried Proxmox + Ceph storage?

We tried 3 nodes:

- Dell R610
- Raid H310 support jbod for hot swap SSD
- 3 SSD MX200 500GB (1 mon + 2 osd per node)
- Gigabit for wan and dedicated Gigabit for Ceph replication

When I test dd speed on one VM stored on Ceph, I only get an average speed of 47-50MB/s.

Even when my staff ran dd tests on multiple VMs at the same time (simultaneously), the speed per VM still stayed at 47-50MB/s.

A test against the local SSD is much faster: 1GB/s.

Has anyone else faced this issue? Is this a limitation of how Proxmox handles Ceph storage?
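
For anyone wanting to reproduce: a dd test along these lines inside a VM, plus a rados bench directly against the pool from a node, should show whether the bottleneck is Ceph itself or the VM layer (pool name is only an example):
Code:

# inside a VM: sequential write test, bypassing the guest page cache
dd if=/dev/zero of=/root/ddtest bs=1M count=1024 oflag=direct

# on a Proxmox node: benchmark the Ceph pool directly ("rbd" is an example pool name)
rados bench -p rbd 60 write --no-cleanup
rados -p rbd cleanup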


Sent from my iPhone using Tapatalk

snapshot-LV not growing

Hello, at our latest backup jobs the backups of all OpenVZ containers failed with massive amounts of I/O errors in the logfiles. After investigating, I traced it down to the snapshot volume not growing but staying at 1G in size. Available space on the VG is >1TB.
Code:

# lvm dumpconfig | grep snapshot_autoextend
        snapshot_autoextend_threshold=85
        snapshot_autoextend_percent=20
Does the Proxmox backup script (now) ignore these configuration values? As soon as the volume is full, the backup log gets flooded with I/O errors until all remaining files have failed or the job is stopped. The snapshot volume after stopping the failed/failing backup:
Code:

# lvs | grep vzsnap
  vzsnap-proxmox-0 pve  swi-I-s--  1,00g      data  100.00
Has anyone experienced this issue lately, and can you point me in the right direction as to what option/behaviour changed? Thanks
Code:

# pveversion -v
proxmox-ve-2.6.32: 3.4-157 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-39-pve: 2.6.32-157
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
(and why on earth is the forum software stripping all linebreaks from this post??)
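
One thing I will try in the meantime, in case the autoextend settings really are ignored, is simply giving vzdump a larger snapshot to begin with via the size option in /etc/vzdump.conf (if I'm reading the man page correctly, the default is 1024 MB, which would explain the 1G volume):
Code:

# /etc/vzdump.conf -- allocate a 16G LVM snapshot instead of the default
size: 16384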

Proxmox 4.0beta1 Guests don't shutdown cleanly on host shutdown

I have been testing a Proxmox 4.0 node. With previous versions of Proxmox, when the power button on the host was pressed, a shutdown command was issued via the terminal, or a shutdown was initiated via the GUI, the node would send an ACPI message to any running guests and wait the specified (or default) timeout for each guest to shut down before shutting down the node.

With V4, when a shutdown is issued, all VMs stop almost immediately and the node shuts down.
What I think I have determined:
* 'pvesh --nooutput create /nodes/localhost/stopall' will cleanly shut down guests, as will 'service pve-manager stop'. "Stop All VMs" in the GUI works OK.
* pve-manager is not installed in /etc/rcX.d/, yet something is still causing VMs to start up automatically (possibly via '/etc/init.d/pve-manager start', possibly another way; I see two instances of kvm ID 100 running when the node starts)
Taken from 'ps -elf' during startup:
Code:

4 S root      1094    1  0  80  0 -  1082 wait  00:29 ?        00:00:00 /bin/sh /etc/init.d/pve-manager start
4 S root      1096  1094 27  80  0 - 81021 poll_s 00:29 ?        00:00:00 /usr/bin/perl /usr/bin/pvesh --nooutput create /nodes/localhost/s
1 S root      1123  1096  0  80  0 - 81251 hrtime 00:29 ?        00:00:00 task UPID:pve2:00000463:00000921:55ED9F53:startall::rootpam:
5 S root      1127  1123  0  80  0 - 84640 poll_s 00:29 ?        00:00:00 task UPID:pve2:00000467:00000925:55ED9F54:qmstart:100:root pam:
6 S root      1140  1127  3  80  0 - 65138 pipe_w 00:29 ?        00:00:00 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e253ddd4-08c8-4833-9af7-a3c64cc28f93 -name test -smp 8,sockets=1,cores=8,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:512f79e8bf1 -drive file=/var/lib/vz/template/iso/ubuntu-14.04.1-desktop-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=22:08:69:1C:02:B1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

7 R root      1143    1  7  80  0 - 2201354 -    00:29 ?        00:00:00 /usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios type=1,uuid=e253ddd4-08c8-4833-9af7-a3c64cc28f93 -name test -smp 8,sockets=1,cores=8,maxcpus=8 -nodefaults -boot menu=on,strict=on,reboot-timeout=1000 -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k en-us -device pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f -device pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2 -device usb-tablet,id=tablet,bus=uhci.0,port=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:512f79e8bf1 -drive file=/var/lib/vz/template/iso/ubuntu-14.04.1-desktop-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=threads -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=none,aio=native,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on -device virtio-net-pci,mac=22:08:69:1C:02:B1,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300

* Adding an rcX.d script to run either 'pvesh --nooutput create /nodes/localhost/stopall' or 'service pve-manager stop' doesn't help. I've confirmed the script runs, but the listed commands return immediately. Either the VMs have already been terminated by then, something else runs simultaneously and kills them, or those commands behave differently during shutdown.
Sample file registered with 'update-rc.d pve-manager-test defaults':
Code:

#!/bin/sh

### BEGIN INIT INFO
# Provides:        pve-manager-test
# Required-Start:  $remote_fs pve-firewall
# Required-Stop:  $remote_fs pve-firewall
# Default-Start: 
# Default-Stop:    0 1 6
# Short-Description: PVE VM Manager
### END INIT INFO

. /lib/lsb/init-functions

PATH=/sbin:/bin:/usr/bin:/usr/sbin
NAME=pve-manager-test
DESC="PVE Status Daemon"
PVESH=/usr/bin/pvesh

test -f $PVESH || exit 0

# Include defaults if available
if [ -f /etc/default/pve-manager ] ; then
    . /etc/default/pve-manager
fi

case "$1" in
        start)
                if [ "$START" = "no" ]; then
                    exit 0
                fi
                echo "Starting VMs and Containers"
                pvesh --nooutput create /nodes/localhost/startall
                ;;
        stop)
                echo "Stopping running Backup"
                date >> /testfile
                echo "Stopping running Backup" >> /testfile
                vzdump -stop
                echo "Stopping VMs and Containers"
                echo "Stopping VMs and Containers" >> /testfile
                pvesh --nooutput create /nodes/localhost/stopall
                service pve-manager stop
                echo "Done." >> /testfile
                ;;
        reload|restart|force-reload)
                # do nothing here
                ;;
        *)
                N=/etc/init.d/$NAME
                echo "Usage: $N {start|stop|reload|restart|force-reload}" >&2
                exit 1
                ;;
esac

exit 0

Am I missing something obvious? Can anyone confirm shutdown behaviour has changed? Does anyone know how to configure nodes to cleanly shut down guests on node shutdown?
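
The only other idea I have (untested, and the unit name is my own invention) is that since PVE 4 is systemd-based, the old rcX.d ordering may simply be ignored; a native systemd unit whose ExecStop runs the stopall call might get ordered correctly before the guests are killed:
Code:

# /etc/systemd/system/pve-guests-stopall.service -- untested sketch, unit name is made up
[Unit]
Description=Cleanly stop all VMs and containers on shutdown
After=pve-cluster.service network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/bin/pvesh --nooutput create /nodes/localhost/stopall
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target

It would be enabled with 'systemctl enable pve-guests-stopall.service'. Has anyone tried something like this?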

Graphs bug in Proxmox 4Beta

I've found a reproducible bug: the graphs are not working in Proxmox 4 Beta.

The graphs stop refreshing / stop working if I mount an HDD directly on a VM, like:
qm set 100 -sata1 /dev/sdb


The graphs work again when the VM shuts down.


Back to V3.4 for me. :(

Quote:

proxmox-ve: 4.0-7 (running kernel: 4.1.3-1-pve)
pve-manager: 4.0-26 (running version: 4.0-26/5d4a615b)
pve-kernel-3.19.8-1-pve: 3.19.8-3
pve-kernel-4.1.3-1-pve: 4.1.3-7
lvm2: 2.02.116-pve1
corosync-pve: 2.3.4-2
libqb0: 0.17.1-3
pve-cluster: 4.0-14
qemu-server: 4.0-15
pve-firmware: 1.1-6
libpve-common-perl: 4.0-14
libpve-access-control: 4.0-6
libpve-storage-perl: 4.0-13
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-1
pve-container: 0.9-7
pve-firewall: 2.0-6
pve-ha-manager: 1.0-4
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.2-4
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2