Channel: Proxmox Support Forum

Transfer subscription

Hi Martin,

In the very near future I will upgrade my cluster. The disks will be moved to a new server, so the motherboard and CPU will change, and that, I guess, will change the machine ID?

Is the proper way to transfer the subscription to ask for a new license key to install on the new server?

Very poor performance

Hello.

In advance, sorry for my bad English.

I'm new to Proxmox. I plan to use it to run two VMs:
- one for monitoring (FAN - Fully Automated Nagios)
- one Ubuntu Server 14 (OpenVPN server, Ubiquiti UniFi)

I have installed Proxmox VE on a Shuttle barebone with these characteristics:

CPU: Atom D2550, 1.86 GHz, 4 cores
RAM: 2 GB

I created a test VM with 4 CPU cores and 2 GB RAM, using VirtIO for the disk and network.
I uploaded the FAN ISO file to the Proxmox server.

The problem is that the VM is very slow: the FAN setup takes hours.

Before installing Proxmox on this Shuttle, I tried FAN directly on it (without any virtualization environment) and it was very fast.
So it seems the Proxmox server is the source of the problem.

I have tried creating a new VM with Ubuntu Server and the problem is the same.

When I look at performance in the Summary tab, everything seems fine: CPU usage around 40%, memory around 700 MB...

I am sure I have made a configuration mistake, or that something can be tweaked.
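
Edit: one thing I plan to check is whether KVM hardware acceleration is actually active on this Atom; a quick sketch, assuming a standard PVE install:
Code:

# does the CPU expose VT-x/AMD-V, and is the kvm module loaded?
egrep -c '(vmx|svm)' /proc/cpuinfo
lsmod | grep kvm
# look for messages like "kvm: disabled by bios"
dmesg | grep -i kvm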

Could you please give me some advice?

Thanks in advance.

Sylvain.

[SOLVED] VNC viewer blocked by Java 1.8

Hi,

I'm trying to reach my VNC console with my latest Java install, 1.8.
Unfortunately I get this message:
Code:

Missing Application-Name manifest attribute for: https://A.B.C.D:8006/vncterm/VncViewer.jar
I cleared all my caches and added https://A.B.C.D and http://A.B.C.D to the security exception list in the Java Control Panel, with no more success.
I'm working with Proxmox 2.3.13.
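
Edit: I also found that the exception site list Java 8 reads is a plain text file, and the entry may need the port; a hedged sketch (Linux path):
Code:

# Java 8 matches protocol, host AND port; one URL per line
echo "https://A.B.C.D:8006" >> ~/.java/deployment/security/exception.sites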

Any idea would be appreciated

Thanks

Trying to extend pve-data on 'local' but don't understand the message returned

Hi all, I've added a second physical HDD to my Proxmox server and I'm trying to use it to extend the existing 'local' storage device.
(i.e. I do not want to add a second storage device/node, just extend what is already there using LVM.)

This is my current setup.
/dev/sdb is the new hdd.

Code:

root@proxmox:~# lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 298.1G  0 disk
├─sda1                8:1    0    1M  0 part
├─sda2                8:2    0  510M  0 part /boot
└─sda3                8:3    0 297.6G  0 part
  ├─pve-root (dm-0) 253:0    0  74.5G  0 lvm  /
  ├─pve-swap (dm-1) 253:1    0    4G  0 lvm  [SWAP]
  └─pve-data (dm-2) 253:2    0 203.1G  0 lvm  /var/lib/vz
sdb                  8:16  0 465.8G  0 disk
└─sdb1                8:17  0 465.8G  0 part


I extended the 'pve' volume group using vgextend, which worked, but as you can see below, I don't think it is making use of the additional 465G of space that was added...

Code:

root@proxmox:~# vgextend pve /dev/sdb1
root@proxmox:~# vgdisplay
  --- Volume group ---
  VG Name              pve
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  7
  VG Access            read/write
  VG Status            resizable
  MAX LV                0
  Cur LV                3
  Open LV              3
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size              763.35 GiB
  PE Size              4.00 MiB
  Total PE              195417
  Alloc PE / Size      72088 / 281.59 GiB
  Free  PE / Size      123329 / 481.75 GiB
  VG UUID              eUcGyo-8sNf-xBAE-BMJP-p3Rp-cOJo-8Okx1O

I did some searching on here and found some responses saying that you also need to extend the pve-data LVM device using resize2fs.
So I tried this, but I get the following message:

Code:

root@proxmox:~# resize2fs /dev/mapper/pve-data
resize2fs 1.42.5 (29-Jul-2012)
The filesystem is already 53239808 blocks long.  Nothing to do!

After running it, I checked the new size with df -h, and it's still only 200G instead of the ~650G available to it:

Code:

root@proxmox:~# df -h /var/lib/vz
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-data  200G  6.6G  194G  4% /var/lib/vz

Can anyone explain why resize2fs isn't working for me? Thanks!

I do not have any data on the new disk, so I'm happy to delete partitions, recreate them, whatever is necessary!
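
Edit: reading the resize2fs message again, I suspect it only grows the filesystem up to the size of the logical volume, and I never grew pve-data itself, so there was nothing for it to do. A sketch of what I think the right sequence is (untested, so please confirm):
Code:

# grow the logical volume into the new free space first...
lvextend -l +100%FREE /dev/pve/data
# ...then grow the filesystem to match
resize2fs /dev/mapper/pve-data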

Ceph OSDs on Proxmox nodes with VMs - not such a good idea

After much testing and disaster simulation I have decided not to put Ceph OSDs on a Proxmox node with VMs. This is not to be confused with not using Proxmox+Ceph servers together; rather, OSDs should not be on the same Proxmox nodes where several virtual machines are served.


During an OSD failure, or when an OSD is added and Ceph goes into rebalancing mode, I noticed between 25% and 35% CPU consumption. If my VMs are already consuming 80% of the CPU, this causes a major slowdown of the VMs. During regular operation, though, CPU consumption was hardly noticeable. This is not new; the Ceph developers did mention that during rebalancing Ceph will consume a large amount of resources.
As long as all OSDs are on their own nodes without any VMs, all is good. On a 7-node cluster I put all OSDs on nodes 5, 6 and 7, then spread all VMs across nodes 1 to 4. Running the same disaster simulation, the VMs performed much better.
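
For anyone who wants to keep OSDs and VMs together anyway, throttling recovery is supposed to soften the impact; a hedged ceph.conf sketch (values are examples):
Code:

[osd]
# limit concurrent backfill/recovery work per OSD during rebalancing
osd max backfills = 1
osd recovery max active = 1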


Proxmox + Ceph Server still shines, because it gives us the ability to monitor/manage Ceph from the same GUI and eliminates the need for a separate node for admin/MONs.


Has anybody had an experience like this, or any suggestions?

problems after power cut

Hey,

We had a power cut and the Proxmox machine has had problems since. It boots and finishes with:
Quote:

INIT : no more processes left in this runlevel
The web interface on *:8006 is not online, and the VMs and containers are not online (or at least not all of them).

I found some relevant links, but those were about a container running into this problem; here it is the host system. I tried:
- recovery mode fsck everything (no problems)
- check inittab (seems normal)
- ran update & upgrade
- ran service pve-cluster restart

I got the website back online, but the password seems wrong... where should I start looking for the problem?


For OpenVZ I found this to work (in recovery mode, after manually starting networking & SSH):
Code:

/etc/init.d/vz start
/etc/init.d/pve-cluster restart
vzctl start ID_of_container
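
Edit: for the login problem I guess I should check whether the cluster filesystem is mounted, since the authentication data lives under /etc/pve (a sketch, assuming PVE 3.x):
Code:

# /etc/pve must be mounted by pve-cluster for logins to work
mount | grep /etc/pve
service pve-cluster restart
service pvedaemon restart
service pveproxy restart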


SvennD

Time drift on domain controller

Hi everyone!
My network has a domain controller on a physical server, and I plan to add another one in a Proxmox virtual environment.
At the moment I am very interested in time synchronization for a virtualized domain controller. Please share your experience if you run a domain controller based on Windows Server 2008 R2: does everything work out of the box, or does something need to be reconfigured? Has anyone used the wiki howto about guest time drift, and is it still current?

I am also interested in the general question of which time source guest machines synchronize with, and whether there are settings for this in the GUI or in the system's config files.
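
For what it's worth, the setup I have read about (a hedged sketch; VMID 100 and the NTP pool are just examples): let the virtualized DC sync from an external NTP source, and make the Windows guest use local time for the RTC:
Code:

# on the PVE host: Windows guests usually want the RTC in local time
qm set 100 -localtime 1

# on the virtualized DC (PDC emulator): sync from an external source, not the host
w32tm /config /manualpeerlist:"pool.ntp.org" /syncfromflags:manual /reliable:yes /update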
Thank you!

Proxmox VE 3.2 Backup problem

Hi
our customer is complaining about the following scenario: during the backup (at night), the VMs are not responsive or available.
Is this normal, or do I have to change my configuration?

Here is some info:

Proxmox VE 3.2
No cluster, just a single node
Backup to an iSCSI volume
backup method: snapshot mode, fast compression
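
One knob I am considering (a hedged sketch; values are examples) is rate-limiting vzdump so the backup does not saturate the storage:
Code:

# /etc/vzdump.conf -- cap backup bandwidth (KB/s) and lower its I/O priority
bwlimit: 40000
ionice: 7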

Thanks

Restore default iptables Proxmox 3.2

Is there a script / command to restore the default iptables?

I normally use eth1/vmbr1 running through a firewall appliance / switch etc., and VPN in through that same interface to access all CTs, VMs and nodes.
Problem

After attempting to configure iptables, and making many changes to eth0 & vmbr0, I've broken something: I cannot access anything on eth0, the node itself cannot reach external networks, and eth0 cannot be reached from external networks.

The VMs and CTs act normally, as they use vmbr1.

These two NICs are on different networks within the datacenter, completely separated.
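
The closest I have found to a "restore default" is the usual manual reset (a sketch; it opens everything up, so run it from console access, not SSH):
Code:

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X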

Any guidance would be appreciated.

Thanks

Status = Internal Error

So this has occurred several times now across several different OSes. I upload my ISO, create a VM, set the image; it boots, installs, the OS launches; I run updates, configure the OS; it reboots and comes back with no issues.

Then, out of nowhere, after a reboot the status is Internal Error.
If you console in, it says there is no media to boot from.
If you boot off an attached image to run a repair (say Windows repair or recovery), it makes no difference: it goes through the process but the VM still won't boot.

With Windows Server 2008 R2 I can make this happen every time on the first reboot after install.

It's like the MBR of the VM is getting messed up, even though I can hit F12 and tell it to boot from the HD or a mounted image, etc.
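
If anyone wants to compare notes, here is how I dump what Proxmox thinks the VM should boot from (VMID 100 is an example):
Code:

# show the VM configuration, including bootdisk and boot order
qm config 100
# print the full KVM command line Proxmox would run
qm showcmd 100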

I have searched and seen multiple threads and pages about this issue, but not one post with a resolution.

One poster suggested in his thread that perhaps this software isn't ready; everyone defended it, but no one provided a resolution.

Does anyone have some ideas on how to fix this issue?

VGA passthrough on consumer hardware

I'm running a PVE host on consumer-grade hardware and can't get VFIO VGA passthrough to work right.

CPU: Core i7-950
Mobo: Asus P6X58D-Premium
RAM: 10 GB non-ECC
GPU: Nvidia Quadro FX 1800

Here is the guest VM hardware config:

Code:

cat /etc/pve/nodes/VMnode0/qemu-server/100.conf
balloon: 64
bootdisk: virtio0
cores: 2
cpu: host
memory: 3072
name: devmachine
net0: virtio=AA:1E:18:73:AB:80,bridge=vmbr0
ostype: l26
sockets: 1
virtio0: lvm0:vm-100-disk-1,size=60G
machine: q35
hostpci0: 02:00.0,pcie=1,driver=vfio

Here's the error I get when trying to start the VM

Code:

kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to set iommu for container: Operation not permitted
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to setup container for group 16
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio: failed to get group 16
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: Device initialization failed.
kvm: -device vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: Device 'vfio-pci' could not be initialized
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name devmachine -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga cirrus -cpu host,+x2apic -k en-us -m 3072 -readconfig /usr/share/qemu-server/pve-q35.cfg -device 'usb-tablet,id=tablet,bus=ehci.0,port=1' -device 'vfio-pci,host=04:00.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:804b6a81ca7e' -drive 'file=/dev/lvm0/vm-100-disk-1,if=none,id=drive-virtio0,aio=native,cache=none' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=AA:1E:18:73:AB:80,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=q35'' failed: exit code 1

Any idea what I'm doing wrong?
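
Edit: searching further, "failed to set iommu for container: Operation not permitted" is commonly reported when the platform lacks interrupt remapping; a hedged workaround I found (understand the security implications first):
Code:

# confirm the IOMMU is enabled on the kernel command line (intel_iommu=on)
cat /proc/cmdline
# allow VFIO without interrupt remapping, then reboot and retry
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio_iommu_type1.conf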

Packets are not being routed properly through venet0?

I've been trying to troubleshoot this problem for weeks. It first occurred when I changed my containers' interfaces from veth to venet. The problem revolves around DHCP.

The setup:

I've got 3 computers at play here:
  1. A DHCP client
  2. The Proxmox host / router running Shorewall
  3. DHCPd container (10.0.0.5)


The router (Proxmox host) has several interfaces:

eth1    VLAN trunk
eth1.2  192.168.0.1  guest network / DHCP-client VLAN
eth1.3  10.0.0.1     DMZ VLAN
eth0    1.2.3.4      WAN
venet0  point-to-point to containers

The problem:

  1. The DHCP client sends DHCPDISCOVER packet across the 192.168.0.0/24 Subnet/VLAN
  2. The router's DHCP relay-agent (Debian dhcp-helper), listening on eth1.2 (192.168.0.1), relays the packet to DHCPd (10.0.0.5), presumably through venet0.
  3. In the DHCPd container, the packet arrives with a source IP address of 1.2.3.4 (WAN), and shortly after, a DHCPOFFER packet is sent to the relay agent (192.168.0.1), per the DHCP spec.
  4. Running tcpdump on the router's venet0, I can see the DHCPDISCOVER and DHCPOFFER packets go and come, respectively, but the DHCP relay agent just waits, never doing anything.


I suspect it doesn't get the packet. Naturally, my mind jumped to "routing issues," so I checked the routing table. It seemed correct enough. I then tried netcat between the router and DHCPd: I ran netcat listening on 192.168.0.1 and successfully connected to it from the DHCPd container!

At some point, I thought Linux was filtering them out as Martian packets. I checked and double-checked after enabling logging of Martian packets. This is not the case.

As for whether the firewall could be filtering these out... I doubt it. I loosened the settings to pretty much accept anything. It still didn't fix anything.

Correct me if I'm wrong here, but I think Linux cannot successfully "route" the packet to the process (dhcp-helper) because it left out of the WAN interface (1.2.3.4) but returned through the venet0 interface to 192.168.0.1.
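
If anyone wants to reproduce my check: venet0 has no IP of its own, so the kernel's source-address selection decides what the container sees; I compared what the kernel picks with what actually hits the wire:
Code:

# which source address does the host pick toward the DHCPd container?
ip route get 10.0.0.5
# watch the relay traffic on venet0 at the same time
tcpdump -ni venet0 port 67 or port 68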

I'm getting kind of desperate here. I could just switch back to veth interfaces, but I would really prefer venet. Any help would be greatly appreciated!

Call trace on Proxmox node

Hi everybody,
Today I noticed a strange problem on one of my two Proxmox nodes.
On node 2 there is a call trace like this:
Code:

root@proxmox2:~# cat /var/log/kern.log.1
Aug  1 23:03:33 proxmox2 kernel: INFO: task lzop:606425 blocked for more than 120 seconds.
Aug  1 23:03:33 proxmox2 kernel:      Not tainted 2.6.32-29-pve #1
Aug  1 23:03:33 proxmox2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Aug  1 23:03:33 proxmox2 kernel: lzop          D ffff8806684724c0    0 606425 606412    0 0x00000000
Aug  1 23:03:33 proxmox2 kernel: ffff8804da045a68 0000000000000086 0000000000000000 0000000000000000
Aug  1 23:03:33 proxmox2 kernel: ffff8804da045a28 ffffffff8100983c 000000000002526e 0000000000100000
Aug  1 23:03:33 proxmox2 kernel: 0000000000000000 00000001b6069634 ffff880668472a88 000000000001ec80
Aug  1 23:03:33 proxmox2 kernel: Call Trace:
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff8100983c>] ? __switch_to+0x1ac/0x320
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02c71b0>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff8155c5d3>] io_schedule+0x73/0xc0
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02c71be>] nfs_wait_bit_uninterruptible+0xe/0x20 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff8155d5ff>] __wait_on_bit+0x5f/0x90
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02c71b0>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff8155d6a8>] out_of_line_wait_on_bit+0x78/0x90
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff810a27a0>] ? wake_bit_function+0x0/0x40
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02c719f>] nfs_wait_on_request+0x2f/0x40 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02cdf0b>] nfs_updatepage+0x22b/0x580 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02bbac2>] nfs_write_end+0x142/0x280 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff81162e64>] ? ii_iovec_copy_from_user_atomic+0x84/0x110
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff81135970>] generic_file_buffered_write_iter+0x170/0x2b0
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff811365f5>] __generic_file_write_iter+0x225/0x420
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff81136875>] __generic_file_aio_write+0x85/0xa0
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff81136918>] generic_file_aio_write+0x88/0x100
Aug  1 23:03:33 proxmox2 kernel: [<ffffffffa02bae6c>] nfs_file_write+0x10c/0x280 [nfs]
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff811abf72>] do_sync_write+0xf2/0x140
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff811ac258>] vfs_write+0xb8/0x1a0
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff811acb51>] sys_write+0x51/0x90
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff8155f60e>] ? do_device_not_available+0xe/0x10
Aug  1 23:03:33 proxmox2 kernel: [<ffffffff8100b102>] system_call_fastpath+0x16/0x1b
[... the identical trace repeats at 23:07:33 ...]

I can't understand what's happening, except that the backup starts at 23:00.
Can someone help me understand what happened?
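
From the trace itself, lzop is blocked inside nfs_file_write, i.e. the backup was writing to NFS storage that stopped responding around that time. A crude, hedged check of the NFS target's write behaviour (the path is hypothetical; adjust it to your storage):
Code:

# test sequential writes to the NFS-backed backup storage
dd if=/dev/zero of=/mnt/pve/backup-nfs/ddtest bs=1M count=1024 conv=fsync
rm /mnt/pve/backup-nfs/ddtest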

Thanks

Is it possible to redirect DVB cards to a VM, e.g. for VDR?

Is it possible to pass DVB cards through to a VM, e.g. for VDR? It's just an idea; I think it would be nice to have this in a VM. It is easier to control, to back up...
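
From what I have read it should work with PCI passthrough, assuming the host has a working IOMMU; a hedged sketch (the PCI address is hypothetical):
Code:

# find the DVB card's PCI address (often listed as a multimedia controller)
lspci | grep -i multimedia
# then pass it to the VM with a line like this in /etc/pve/qemu-server/<vmid>.conf
# hostpci0: 03:00.0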

Best Regards
fireon

dashboard connection timeouts -> WARNING: proxy detected vanished client

Dear community,

we have a 4-node cluster, after removing a failed 5th node last weekend.
I tried to re-add the failed node, but it didn't work, so I removed it completely.

The cluster communication runs over OpenVPN.
Two of the nodes also ran an NFS server, with shared storage defined in Proxmox, but it was not in use at that time.

After a lot of research, here is my problem:

The dashboard is not usable anymore; I can't fix it, nor have I found any information about what exactly the problem is.
Some threads in this forum mention similar problems.

Symptoms

  • VM names are only: no name specified
  • Loading content stops with: timeout connection
  • The only log information I found: WARNING: proxy detected vanished client


Things I tried

  • Empty Browser-Cache
  • Restart services on all nodes
    • pveproxy
    • pve-cluster
    • nfs-kernel-server

  • Edit storage.cfg and remove the NFS entries
  • Ensure no backup processes are running
  • echo "" > /var/log/pve/tasks/active


Questions

Is this still a confirmed bug?: post94889

Do you have any hints or tips on what I could try or debug?


Technical information

pvecm status (quite similar on all nodes)

Version: 6.2.0
Config Version: 12
Cluster Name: WR-CLUSTER
Cluster Id: 15060
Cluster Member: Yes
Cluster Generation: 12880
Membership state: Cluster-Member
Nodes: 4
Expected votes: 4
Total votes: 4
Node votes: 1
Quorum: 3
Active subsystems: 5
Flags:
Ports Bound: 0
Node name: bay2
Node ID: 4
Multicast addresses: 239.192.58.15
Node addresses: 10.8.8.2



pveversion

  • Deleted node: pve-manager/2.3/7946f1f1
  • node2: pve-manager/3.1-21/93bf03d4 (running kernel: 2.6.32-26-pve)
  • node7: pve-manager/2.3/7946f1f1
  • node4: pve-manager/3.2-4/e24a91c1 (running kernel: 2.6.32-30-pve)
  • node3: pve-manager/3.2-4/e24a91c1 (running kernel: 2.6.32-29-pve)





Edit

I just found out that when I click on Datacenter and then choose a VM located on node2 or node4, I can work with node2 and node4 again. For node3 and node7 the problem remains.

Maybe recopying pve-ssl.key could fix it, like in post76741?
Although there is no diff and no such error in the log.
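
Before copying anything, a read-only comparison of the certificates on each node might confirm or rule this out (a sketch):
Code:

# run on every node and compare the fingerprints
openssl x509 -in /etc/pve/local/pve-ssl.pem -noout -fingerprint
openssl x509 -in /etc/pve/pve-root-ca.pem -noout -fingerprint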

But I am not sure and don't want to risk it without confirmation.

Cannot move more than one disk per VM at once...

Hi,

Move disk is a really, really useful feature!

It makes it easy to reorganize storage while the VM is still running. Great.

So, today, we need to move some tens of disks.

And because it is not possible to move all disks of a given VM at once (the VM is locked while moving the first one, and no multiple selection is possible), we move all the "first disks" of each VM, then all the "second disks", and we forget some, go back and forth...

Any hope of such a "move group of disks" feature soon?

The very best would be to move all the disks of a given STORAGE to another one! Everyone can dream!
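
Edit: for now we script it from the CLI; since the VM lock serializes the moves anyway, a loop at least does the bookkeeping (VMID, disk names and target storage are examples; check whether your version supports the -delete flag):
Code:

# move every listed disk of VM 100 to storage 'newstore', one after another
for disk in virtio0 virtio1 virtio2; do
    qm move_disk 100 $disk newstore -delete
done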

Thank you all a LOT for proxmox!

Christophe.

KVM passthrough of a PCI Express card

Using the pve-no-subscription repository.
Linux srv-vm2 3.10.0-3-pve #1 SMP Sat Aug 2 09:30:30 CEST 2014 x86_64 GNU/Linux


Code:

pveversion --verbose
proxmox-ve-2.6.32: not correctly installed (running kernel: 3.10.0-3-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-3.10.0-3-pve: 3.10.0-13
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-14
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-21
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-7
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

I need to pass these through:
Code:

05:00.0 Multimedia video controller [0400]: Conexant Systems, Inc. Device [14f1:8210]
06:00.0 Multimedia video controller [0400]: Conexant Systems, Inc. Device [14f1:8210]
07:00.0 Multimedia video controller [0400]: Conexant Systems, Inc. Device [14f1:8210]

VM config:
Code:

boot: dcn
bootdisk: virtio0
cores: 2
machine: q35
hostpci0: 05:00.0,pcie=1,driver=vfio
hostpci1: 06:00.0,pcie=2,driver=vfio
hostpci3: 07:00.0,pcie=3,driver=vfio
ide2: cdrom,media=cdrom
memory: 2000
name: srv-video
net0: virtio=DE:8E:86:88:81:72,bridge=vmbr0
net1: virtio=E2:6F:B8:08:1C:B3
onboot: 1
ostype: win7
sata0: local:iso/virtio-win-0.1-81.iso,media=cdrom,size=72406K
sockets: 1
virtio0: store:100/vm-100-disk-1.qcow2,format=qcow2,cache=writeback,size=30G

Error log:
Code:

unknown hostpci setting 'pcie=1'
unknown hostpci setting 'pcie=2'
vm 100 - unable to parse value of 'hostpci2' - unknown setting 'hostpci2'
kvm: -device piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2: Bus 'pci.0' not found
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name srv-video -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga std -no-hpet -cpu 'kvm64,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep' -k en-us -m 2000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'pci-assign,host=05:00.0,id=hostpci0,bus=pci.0,addr=0x10' -device 'pci-assign,host=06:00.0,id=hostpci1,bus=pci.0,addr=0x11' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz/template/iso/virtio-win-0.1-81.iso,if=none,id=drive-sata0,media=cdrom,aio=native' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -drive 'file=/dev/cdrom1,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -drive 'file=/vz/images/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,cache=writeback,aio=native' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on' -device 'virtio-net-pci,mac=DE:8E:86:88:81:72,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -netdev 'type=user,id=net1,hostname=srv-video' -device 'virtio-net-pci,mac=E2:6F:B8:08:1C:B3,netdev=net1,bus=pci.0,addr=0x13,id=net1,bootindex=301' -rtc 'driftfix=slew,base=localtime' -machine 'type=q35' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1

How can I pass my PCIe cards through for the video recorder?
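
Edit: the "unknown hostpci setting" lines suggest my qemu-server (3.1-16) does not understand the pcie=/driver= flags yet, and pveversion already says proxmox-ve is not correctly installed. A hedged sketch of what I will try:
Code:

# update the stack so qemu-server knows the pcie=/driver= syntax
apt-get update && apt-get dist-upgrade
# also renumber the devices contiguously in the VM config
# (hostpci0, hostpci1, hostpci2 instead of jumping to hostpci3)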

Automatically assigning IP addresses

Hello,

Is it possible to automatically assign an IP address (failover) to a template with PHP?

Sorry for my bad English, I'm French.
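
A hedged sketch of one way, assuming an OpenVZ container with VMID 101 (the IP is an example): PHP could run this via exec(), or the same could be done through the Proxmox REST API:
Code:

# add a (failover) IP to container 101 and persist it
vzctl set 101 --ipadd 10.0.0.50 --save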

Cannot start VM after trying to masquerade it

Hi all,

I followed the tutorial here: http://servernetworktech.com/2012/12...rtual-machine/. As the tutorial says, I assigned the device vmbr2 (or, in my case, vmbr1) to my VM. When I go to start the VM, it never starts, and Proxmox throws an error...

Code:

bridge 'vmbr1' does not exist
/var/lib/qemu-server/pve-bridge: could not launch network script
kvm: -netdev type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge: Device 'tap' could not be initialized
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -name vm101 -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 512 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -drive 'file=/var/lib/vz/template/iso/lubuntu-14.04.1-desktop-amd64.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/101/vm-101-disk-1.qcow2,if=none,id=drive-ide0,format=qcow2,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge' -device 'e1000,mac=65:C5:5B:F1:17:53,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1

/etc/network/interfaces (on host)
Code:

# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  **.***.139.3
        netmask  255.255.255.0
        gateway  **.***.139.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
    address 10.99.0.254
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up iptables -t nat -A POSTROUTING -s '10.99.0.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.99.0.0/24' -o vmbr0 -j MASQUERADE
    post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 1022 -j DNAT --to 10.99.0.1:22
    post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 1022 -j DNAT --to 10.99.0.1:22

(**.***.139.3 is my public IP)

In essence, what I'm trying to do is this:
  • I have one public IPv4 address and one external NIC (the other NIC provided by my hosting company is for the internal "RPN" network).
  • I want to have multiple VMs with private IPs and forward ports from the public IPv4 to them.
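
The error message itself says vmbr1 does not exist, so I suspect the bridge was simply never brought up after editing /etc/network/interfaces; a hedged first check:
Code:

# is vmbr1 actually up?
brctl show
ip addr show vmbr1
# if it is missing, bring it up (or reboot the node)
ifup vmbr1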

Disconnect hanging/invalid nfs-server

Hello all,
new to Proxmox, I have run into some trouble with NFS for external backups.

After problems with an NFS server I was able to delete it in the storage area of the web administration and re-add it under a different name (same server).
That side is OK.

But when browsing the Proxmox file system (via an SSH shell), it hangs when browsing to /mnt/pve or running "df" (it seems NFS is still trying to connect).
Starting/stopping the nfs-common service does not change anything, and there is no entry in fstab, as usual.

Any idea how to finally delete and stop these NFS connections?
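
Edit: in case it helps someone, a hedged way to get rid of the stale mount (the storage name is an example):
Code:

# lazy-force-unmount the dead NFS mount so df and /mnt/pve stop hanging
umount -f -l /mnt/pve/old-backup
# then make sure the old entry is really gone from /etc/pve/storage.cfg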

thx
gs