Channel: Proxmox Support Forum

Random BSOD Server 2008 after p2v

Hello,

We consolidated 2 HP servers onto 1 virtualisation server (Proxmox). We used SelfImage for the P2V migration and everything boots well. All VirtIO drivers are installed and we removed all HP software.

The 2 servers are:
SBS 2008 (with Exchange 2007, AD, file, print, DNS, DHCP)
Server 2008 Standard (With SQL Express / SAP application)

At random times the servers crash with a blue screen. The SBS 2008 server crashes more often; it is used a bit more heavily because of the Exchange software.

Some debug info of the minidumps:
NTFS_FILE_SYSTEM CI.dll
SYSTEM_SERVICE_EXCEPTION ntoskrnl.exe
IRQL_NOT_LESS_OR_EQUAL ntoskrnl.exe
SYSTEM_SERVICE_EXCEPTION ntoskrnl.exe
KMODE_EXCEPTION_NOT_HANDLED ntoskrnl.exe
DRIVER_VERIFIER_DETECTED_VIOLATION crcdisk.sys
PAGE_FAULT_IN_FREED_SPECIAL_POOL ntoskrnl.exe
SYSTEM_SERVICE_EXCEPTION mup.sys
DRIVER_CORRUPTED_EXPOOL ntoskrnl.exe
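
For reference, the bugcheck names above were pulled out of the minidumps roughly like this (a hedged example of a WinDbg/kd command line; the dump file name and symbol cache path are placeholders):

Code:

:: analyse one minidump, pulling symbols from the public Microsoft symbol server (example paths)
kd -z C:\Windows\Minidump\Mini010115-01.dmp -y "srv*c:\symbols*http://msdl.microsoft.com/download/symbols" -c "!analyze -v; lm t n; q"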

What I've tried so far:
- Memtest for 24 hours
- Replaced the hardware with a known-good test server
- Deleted an old McAfee driver and a Synology driver
- Changed drivers (virtio/IDE/e1000/VGA)
- chkdsk /F
- sfc /scannow
- Disabled leftover HP drivers from starting (Device Manager, hidden devices)
- Removed all software except Windows/Microsoft/SAP-related, so no anti-virus/backup/monitoring etc.
- Checked the event log for problems/errors: nothing special, and nothing just before the crash

The timing of the BSODs has nothing to do with load; the servers also blue-screen when doing nothing. It is not tied to a backup run or a scheduled task, and I'm unable to reproduce the error by putting heavy load on the VM.

The VMs don't blue-screen together; it's mostly the SBS server that crashes while the APP server keeps running fine. The SBS server gets a BSOD about 3 times a week and the APP server 1 or 2 times a month.

Some info about the current host:
proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1

The host has 2x Xeon 5504 processors, 16 GB ECC RAM and 4x 500 GB disks in RAID10 (HP hardware RAID). It's an ML350 G6.

I don't think this has much to do with Proxmox but maybe someone has experienced this before.

hung up at pveceph createmon

Hello Forum.

I'm trying to set up a 3-node Ceph test cluster (not production) using the Proxmox tutorial, on Proxmox 3.3-5. Everything went perfectly the first time I did this with only one NIC in each node.

After I added a second NIC to each node, in order to separate traffic as the tutorial suggests, and then tried to create my first monitor on the first node, I got the following error:

proxmoxdev01:~# pveceph init --network 10.200.7.0/22
=> success
proxmoxdev01:~# pveceph createmon
proxmoxdev01:~# Invalid prefix 00001010110010000000011100000000/22
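
Here is a hedged sketch of what I plan to check next: whether the network value ended up mangled in the Ceph config instead of being stored as the CIDR I passed in (the paths are the standard Proxmox/Ceph ones):

Code:

# see what pveceph init actually wrote for the public/cluster network
grep -i network /etc/pve/ceph.conf

# if the value looks like the binary string from the error instead of 10.200.7.0/22,
# correct it in /etc/pve/ceph.conf (or redo the init with the right CIDR) and retry
pveceph createmon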

Node configuration:
- Intel-based PCs with ~4 GB RAM
- 3 hard drives: one for Proxmox, two for Ceph OSD/journal
- 2 NICs:
- one on the motherboard for Proxmox (eth0 bridged to vmbr0, using 10.100.7.55-57/22)
- one PCIe add-on NIC for Ceph private usage (eth1 bridged to vmbr1, using 10.200.7.55-57/22)

Any guidance on how to deal with this error would be greatly appreciated.

Cheers.

Configuring KVM VMs with Automation

Hello,

I am new to Proxmox, and while there are plenty of docs around the web for this, they are not answering my needs. :) I have a few questions, if anyone can help me with them:
1 - Is there a way to automatically create VMs (mostly KVM) on Proxmox, with automatic IP assignment, OS installation and so on? We want the VM created and installed right after a customer pays for it (see the sketch after this list for the kind of thing I mean).
2 - Is there any specific configuration needed to create KVM VMs?
3 - Does Proxmox have an IP pool that we can add IPs to and use for the VMs?
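
Just to illustrate what I mean by "automatically create", here is a hedged sketch of the CLI/API calls I imagine a provisioning script would make (VM ID, storage name, bridge and ISO name are placeholders):

Code:

# create a KVM VM from the command line on the node
qm create 9001 --name customer-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/some-installer.iso,media=cdrom \
  --virtio0 local:32

# the same kind of call through the REST API shell
pvesh create /nodes/$(hostname)/qemu -vmid 9002 -name customer-vm2 -memory 2048

qm start 9001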

Unattended Installation of Proxmox?

Is there a way (boot parameters or a kickstart-style answer file) to do an unattended installation of Proxmox? I've set up DHCP and PXE servers for the installation, but do I really have to go through the whole installation process manually just for the hostname, the root password and the time zone? Has anyone written an unattended installation script?
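
An alternative I've been toying with (a hedged sketch, not tested): sidestep the ISO installer entirely, preseed a plain Debian Wheezy install over PXE and pull in the Proxmox VE packages in a late_command. The preseed keys are standard debian-installer ones; the repository line and package names follow the "Install Proxmox VE on Debian Wheezy" approach; values are examples:

Code:

# excerpt from a d-i preseed file
d-i netcfg/get_hostname string pve01
d-i time/zone string Europe/Berlin
d-i passwd/root-password password changeme
d-i passwd/root-password-again password changeme

# after the base install, add the Proxmox repository and install PVE
d-i preseed/late_command string in-target sh -c ' \
  echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/pve.list; \
  wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -; \
  apt-get update; \
  apt-get install -y proxmox-ve-2.6.32 pve-manager'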

backup failure - snapshot mode - logical volume is not mounted

vzdump is apparently unable to mount the LVM snapshot.
The LVM snapshot is successfully created under /dev/mapper/pve-data, but it is then not mounted on /mnt/vzsnap0/102.
This error effectively prevents the backup from being generated and sent to the storage device.

Any options other than rebooting the server?

Relevant info:

Code:

proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-15
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


Error message:

Code:

INFO: starting new backup job: vzdump 102 --remove 0 --mode snapshot --compress lzo --storage awsbkp2015 --node ns428897
INFO: filesystem type on dumpdir is 'fuse' -using /var/tmp/vzdumptmp329836 for temporary files
INFO: Starting Backup of VM 102 (openvz)
INFO: CTID 102 exist mounted running
INFO: status = running
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating lvm snapshot of /dev/mapper/pve-data ('/dev/pve/vzsnap-ns428897-0')
INFO:  Logical volume "vzsnap-ns428897-0" created
INFO: creating archive '/mnt/sysbkp/dump/vzdump-openvz-102-2015_01_09-14_05_45.tar.lzo'
INFO: lzop: No such file or directory: <stdout>
ERROR: Backup of VM 102 failed - command '(cd /mnt/vzsnap0/private/102;find . '(' -regex '^\.$' ')' -o '(' -type 's' -prune ')' -o -print0|sed 's/\\/\\\\/g'|tar cpf - --totals --sparse --numeric-owner --no-recursion --one-file-system --null -T -|lzop) >/mnt/sysbkp/dump/vzdump-openvz-102-2015_01_09-14_05_45.tar.dat' failed: exit code 1
INFO: Backup job finished with errors
TASK ERROR: job errors
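
For what it's worth, here is the cleanup I'd try before resorting to a reboot (a hedged sketch; the snapshot name and paths are the ones from the log above):

Code:

# is the vzsnap mountpoint or the snapshot LV still hanging around from the failed run?
mount | grep vzsnap
lvs | grep vzsnap

# if so, unmount and drop the leftover snapshot before the next backup attempt
umount /mnt/vzsnap0
lvremove -f /dev/pve/vzsnap-ns428897-0

# also worth confirming that lzop exists and the fuse-mounted dump dir is writable
which lzop
touch /mnt/sysbkp/dump/.writetest && rm /mnt/sysbkp/dump/.writetest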

Openvswitch and VLANs

Hi All,

I'm having a problem with VLANs and Open vSwitch. I run many Proxmox machines using VLANs and native Linux bridging, and that works well. Our network has changed a bit and we now need to mount storage on a specific VLAN directly from the Proxmox host to store our VMs. Additionally, some of the VMs need to mount other storage (NFS) directly on the same VLAN. Apparently Open vSwitch works well in this scenario. The following is what I've tried, unsuccessfully - any pointers will be greatly appreciated.

I've installed Proxmox 3.3-1/a06c9f73 (the latest). I then installed the latest openvswitch-switch package by adding the following line to /etc/apt/sources.list:

"deb http://download.proxmox.com/debian wheezy pve-no-subscription"

I then ran:

apt-get update
apt-get install openvswitch-switch

To verify the package was installed from the correct source:

apt-cache policy openvswitch-switch
openvswitch-switch:
  Installed: 2.3.0-1
  Candidate: 2.3.0-1
  Version table:
 *** 2.3.0-1 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
        100 /var/lib/dpkg/status
     2.0.90-4 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
     2.0.90-3 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
     1.4.2+git20120612-9.1~deb7u1 0
        500 http://ftp.debian.org/debian/ wheezy/main amd64 Packages

I rebooted the proxmox machine and then connected to the web interface.

Under "network" I added a OVS bridge "vmbr1" (no IP) using "eth2" as "Bridge ports".
I then added an OVS IntPort called "vlan20" with an IP and "VLAN tag" of "20".

After a reboot here is my /etc/network/interfaces config:

========================================
# network interface settings
allow-vmbr1 vlan20
iface vlan20 inet static
        address 192.168.250.196
        netmask 255.255.255.0
        ovs_type OVSIntPort
        ovs_bridge vmbr1
        ovs_options tag=20

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

allow-vmbr1 eth2
iface eth2 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr1

iface eth3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.1.99.83
        netmask 255.255.0.0
        gateway 10.1.1.1
        bridge_ports eth3
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
        ovs_ports eth2 vlan20
========================================

I'm unable to ping a host on VLAN 20 using this config. Native Linux bridges and VLANs work perfectly (I use them extensively on other Proxmox hosts). An example of a working native Linux bridge config on the same machine:

cat /etc/network/interfaces
============================
# network interface settings
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

iface eth3 inet manual

iface eth2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.1.99.83
        netmask 255.255.0.0
        gateway 10.1.1.1
        bridge_ports eth3
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.250.196
        netmask 255.255.255.0
        bridge_ports eth2.20
        bridge_stp off
        bridge_fd 0
============================

root@cloud-03:~# ping 192.168.250.100
PING 192.168.250.100 (192.168.250.100) 56(84) bytes of data.
64 bytes from 192.168.250.100: icmp_req=1 ttl=64 time=490 ms
64 bytes from 192.168.250.100: icmp_req=2 ttl=64 time=0.140 ms
^C

Some more info on my openvswitch setup:

ovs-vsctl show
460457d8-91d4-404d-93b3-954205c7fc28
    Bridge "vmbr1"
        Port "eth2"
            Interface "eth2"
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "vmbr1"
            Interface "vmbr1"
                type: internal
    ovs_version: "2.3.0"
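
This is roughly how I have been trying to debug it so far (a hedged sketch; the interface names and VLAN ID are the ones from the config above):

Code:

# confirm the ports and tags OVS thinks it has
ovs-vsctl list-ports vmbr1
ovs-vsctl list port vlan20

# watch whether tagged frames for VLAN 20 actually arrive on the physical NIC
tcpdump -e -n -i eth2 vlan 20

# ping the target while capturing, then look at the learned MAC table
ping -c 3 192.168.250.100
ovs-appctl fdb/show vmbr1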

Any pointers will be greatly appreciated.

Garith Dugmore

GUI Speed

I have always experienced slow performance in the new web GUI of Proxmox 3+. I thought it was normal.

When accessing it from Android or Windows browsers, it takes some time to see any change in the display after each action. Switching hosts, or tabs within each host to view configuration, takes close to one second per action, sometimes more. I have experienced this with IE, Firefox, Chrome and whatever browser I could try on Android. My client hardware was not the most powerful: dual-core Android devices (now on Lollipop) and a dual-core/dual-thread Atom on Win7. WiFi or direct access on gigabit LAN made no difference. All devices are within the same IP segment, with the same gateway and mask, and communicate fine with no issues.

I came to think it as normal until yesterday:

I was fixing an issue with my wife's MacBook Air and went to the Proxmox GUI, only to be stunned by the reaction speed the interface was giving me over WiFi. Each action was instantly reflected in the GUI! I am talking FAST! I experienced the same with Safari and Chrome. The Mac is running the latest OS X.

Could it be because of the faster CPU of the MacBook Air (i5)? Is the Proxmox GUI so demanding that the client requires substantial processing power?

Can I get some indication from anyone about their own experiences and what client hardware and browser they are using to manage their Proxmox cluster?

Thanks.

Serge

MS Windows Server low resource utilization

Hello Forum,

we are a pure GNU/Linux shop but now have to run two MS Windows Servers (one Terminal Server and one MSSQL Server) for interfacing with the tax agency. We run our small environment on 4 Proxmox nodes with about 30 mostly small VMs and have had no problems so far. We have a dedicated node for the MS Windows VMs: a 2x 12-core AMD Opteron machine with 128 GB of RAM and 2x 4 TB SATA DAS. We added the DAS drives to rule out network performance problems when accessing our SAN. The VMs are configured with VirtIO drivers and use raw disk images. We added a virtual backnet to the VMs to rule out network latency.

We configured the MS Windows operating system to always use the high performance power setting, disabled all kinds of screensavers and only access the VM via RDP, but since nobody here is a MS professional we might have missed something.

We are confident that the resources are sufficient, but these new MS Windows Server 2012 (not R2) machines show the strange behaviour of not utilizing the available resources. Everything runs rather slowly (a process that takes about a minute on a low-powered physical computer needs about 3 minutes on the VMs), and only a fraction (about 5-10%) of the available cores and RAM is used. IO wait is in the <1% range, so we are really at a loss as to what's going on here.
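
For context, the VM config looks roughly like this (a hedged, trimmed-down illustration with placeholder ID, MAC and storage name, not the literal file):

Code:

# /etc/pve/qemu-server/<vmid>.conf (illustrative)
ostype: win8
sockets: 2
cores: 8
memory: 32768
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
virtio0: das-storage:vm-200-disk-1,cache=none,size=500G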


Any hints, ideas, solutions?

Thanks and best regards.

Error 255

Hi guys, I'm French. I have an issue when I try to install a VM.
When I launch the VM, I get this message:

TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 100 2>/dev/null'' failed: exit code 255

I'm lost and would appreciate your help.

noVNC questions

First, I'm glad to see noVNC working here; it's really fast.

A question: when I open a noVNC console, I can copy the URL and open the same machine from somewhere else. Is that a security risk? I mean, could someone remotely open the VM consoles?

Second, is it possible to resize the noVNC control bar? I think it's too big and not very useful; it only holds two buttons at the top right.

Thanks.

Adding a hard drive to an Ubuntu LVM VM

I added a 1TB HDD device to an Ubuntu VM with LVM on it, shut it down, and restarted it.

Code:

lvm> lvmdiskscan
  /dev/ram0            [      64.00 MiB]
  /dev/Tauron-vg/root  [    244.77 GiB]
  /dev/ram1            [      64.00 MiB]
  /dev/Tauron-vg/swap_1 [      4.00 GiB]
  /dev/vda1            [    243.00 MiB]
  /dev/ram2            [      64.00 MiB]
  /dev/ram3            [      64.00 MiB]
  /dev/ram4            [      64.00 MiB]
  /dev/ram5            [      64.00 MiB]
  /dev/vda5            [    249.76 GiB] LVM physical volume
  /dev/ram6            [      64.00 MiB]
  /dev/ram7            [      64.00 MiB]
  /dev/ram8            [      64.00 MiB]
  /dev/ram9            [      64.00 MiB]
  /dev/ram10            [      64.00 MiB]
  /dev/ram11            [      64.00 MiB]
  /dev/ram12            [      64.00 MiB]
  /dev/ram13            [      64.00 MiB]
  /dev/ram14            [      64.00 MiB]
  /dev/ram15            [      64.00 MiB]
  1 disk
  18 partitions
  0 LVM physical volume whole disks
  1 LVM physical volume

Where the heck is it?
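
For reference, this is what I was planning to do once the new disk showed up (a hedged sketch, assuming it appears as /dev/vdb - that device name is a guess on my part):

Code:

# the new virtio disk should appear as an extra block device
lsblk
fdisk -l /dev/vdb

# then the usual LVM steps to grow the existing volume group and root LV
pvcreate /dev/vdb
vgextend Tauron-vg /dev/vdb
lvextend -l +100%FREE /dev/Tauron-vg/root
resize2fs /dev/Tauron-vg/root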

pveversion output question

Proxmox 3.3 is installed and updated from the no-subscription repository for initial testing.


The output of pveversion -v shows both of the following:

...
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-34-pve: 2.6.32-140
...

Shouldn't only the latest of these be listed? If so, is there a fix?
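
For completeness, this is roughly how I'd check and, if appropriate, clean this up (a hedged sketch, not something I have run yet):

Code:

# list every installed pve-kernel package and the kernel actually running
dpkg -l 'pve-kernel-*'
uname -r

# an older, no-longer-needed kernel could then be removed explicitly, e.g.
apt-get remove pve-kernel-2.6.32-32-pve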

Thanks for any help.

Webinterface

Hi,

I'm not able to connect to my Proxmox web interface. I installed Proxmox via the SoYouStart web interface.

My pveversion -v output:
Code:

proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
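
Here is a hedged sketch of the first checks I'd run on the node over SSH (service names and the port are the standard PVE 3.x ones):

Code:

# is the web proxy running and listening on port 8006?
service pveproxy status
service pvedaemon status
netstat -tlnp | grep 8006

# restart the GUI services and watch the log while retrying the login page
service pveproxy restart
service pvedaemon restart
tail -f /var/log/syslog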

Backup problem with OpenVZ

Hello everybody

For about a month now, the backup of my containers has no longer been working.

error message
Quote:

INFO: starting new backup job: vzdump 100 --remove 0 --mode snapshot --compress gzip --storage CT-Backup --node vhost1
INFO: Starting Backup of VM 100 (openvz)
INFO: CTID 100 exist mounted running
INFO: status = running
ERROR: Backup of VM 100 failed - Can't kill a non-numeric process ID at /usr/share/perl5/LockFile/Simple.pm line 584.
INFO: Backup job finished with errors
TASK ERROR: job errors
package versions
  • proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-30-pve)
  • pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
  • pve-kernel-2.6.32-32-pve: 2.6.32-136
  • pve-kernel-2.6.32-33-pve: 2.6.32-138
  • pve-kernel-2.6.32-30-pve: 2.6.32-130
  • pve-kernel-2.6.32-34-pve: 2.6.32-140
  • pve-kernel-2.6.32-31-pve: 2.6.32-132
  • lvm2: 2.02.98-pve4
  • clvm: 2.02.98-pve4
  • corosync-pve: 1.4.7-1
  • openais-pve: 1.1.4-3
  • libqb0: 0.11.1-2
  • redhat-cluster-pve: 3.2.0-2
  • resource-agents-pve: 3.9.2-4
  • fence-agents-pve: 4.0.10-1
  • pve-cluster: 3.0-15
  • qemu-server: 3.3-3
  • pve-firmware: 1.1-3
  • libpve-common-perl: 3.0-19
  • libpve-access-control: 3.0-15
  • libpve-storage-perl: 3.0-25
  • pve-libspice-server1: 0.12.4-3
  • vncterm: 1.1-8
  • vzctl: 4.0-1pve6
  • vzprocps: not correctly installed
  • vzquota: 3.1-2
  • pve-qemu-kvm: 2.1-10
  • ksm-control-daemon: not correctly installed
  • glusterfs-client: 3.5.2-1
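
One thing that stands out in the package list above is that vzprocps and ksm-control-daemon are reported as "not correctly installed". As a hedged first step (sketch only), I would let apt repair those before digging further:

Code:

# fix half-configured packages, then reinstall the two that pveversion flags
apt-get update
apt-get -f install
apt-get install --reinstall vzprocps ksm-control-daemon

# re-check
pveversion -v | grep -E 'vzprocps|ksm-control-daemon'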


Does anyone know a solution or the cause?

Google Translate

Can't log into the Webinterface

I can't log in to my web interface anymore; SSH is fine.
The last time I had this problem it was caused by an external storage I use for backups, but that storage is currently reachable, unlike the last time this happened.
This is a server from OVH, and the external storage is the FTP backup server included in the package I bought.

Output of pveversion -v


Code:

proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-30-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-30-pve: 2.6.32-130
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Syslog

Code:

Jan 10 22:45:01 Gravelines /USR/SBIN/CRON[549404]: (root) CMD (/usr/local/rtm/bin/rtm 27 > /dev/null 2> /dev/null)
Jan 10 22:45:35 Gravelines pveproxy[548749]: WARNING: proxy detected vanished client connection

Someone told me I have to update my authentication key, but unfortunately I don't know how.
I hope someone can help me with this; thanks in advance.
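
If "update my authentication key" means regenerating the cluster certificates/keys, the hedged sketch below is what I would try (standard PVE commands, but please correct me if this is the wrong approach):

Code:

# regenerate the node certificates/auth key material, then restart the web services
pvecm updatecerts --force
service pveproxy restart
service pvedaemon restart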

Few questions about Proxmox Firewall

Hi, I'm just now testing out the Proxmox firewall and have a few questions below.

- I see a "Firewall" tab in both Datacenter View and Node View... What is the difference with the two.

- I created a rule that doesn't seem to work. It is meant to allow connections to the host node from a specific external source IP, but it doesn't seem to work at all. Here are the rules I created:

IN Accept -i eth0 -source ext-ip-address -dest proxmox-hostnode-ip -p tcp -dport 8006 # Ext access to proxmox gui
IN Accept -i eth0 -source ext-ip-address -dest proxmox-hostnode-ip -p tcp -dport 22 # Ext access to hostnode ssh

I also created an explicit deny rule as follows:

IN drop -i eth0

After creating the above rules and enabling them, I can't access the Proxmox GUI or SSH to the host node; I have to stop the firewall to regain access. What is wrong? I'm wondering whether the firewall doesn't understand a rule pointing at itself (the host node's interface IP)?

- I was also wondering whether Proxmox has a built-in catch-all explicit deny rule, or should one create it manually as above?
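
For reference, here is the same intent written out in the firewall config file, based on my (possibly wrong) understanding of the standard /etc/pve/firewall layout; the source IPs are placeholders:

Code:

# /etc/pve/firewall/cluster.fw (hedged example)
[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 203.0.113.10 -p tcp -dport 8006 # ext access to proxmox gui
IN ACCEPT -source 203.0.113.10 -p tcp -dport 22 # ext access to hostnode ssh
IN DROP # explicit catch-all deny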


Thanks in advance for your help!

PCI Passthrough Problems

Hi all,
I am running the latest Proxmox, version 3.3-139.
I'm trying to pass through a PCI device to my VM:
Code:

03:00.0 Ethernet controller: Digium, Inc. Wildcard TDM410 4-port analog card (rev 11)
        Subsystem: Digium, Inc. Wildcard TDM410 4-port analog card
        Control: I/O+ Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Interrupt: pin A routed to IRQ 16
        Region 0: I/O ports at d000 [size=256]
        Region 1: Memory at f7c20000 (32-bit, non-prefetchable) [size=1K]
        Expansion ROM at dfb00000 [disabled] [size=128K]
        Capabilities: [c0] Power Management version 2
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=100mA PME(D0+,D1+,D2+,D3hot+,D3cold+)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Kernel driver in use: pci-stub

I added the hostpci option to my VM config:
Code:

# cat /etc/pve/qemu-server/101.conf
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 512
name: pbx
net0: virtio=A6:C9:BC:8B:9F:57,bridge=vmbr0
onboot: 1
ostype: l26
sockets: 1
virtio0: pve:vm-101-disk-1,size=10G
hostpci0: 03:00.0


But the VM doesn't start:
Code:

# qm start 101
kvm: -device pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10: PCI region 1 at address 0xf7c20000 has size 0x400, which is not a multiple of 4K.  You might experience some performance hit due to that.
kvm: -device pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10: Failed to assign irq for "hostpci0"
Perhaps you are assigning a device that shares an IRQ with another device?: Input/output error
kvm: -device pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10: Device initialization failed.
kvm: -device pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10: Device 'kvm-pci-assign' could not be initialized
start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -name pbx -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 512 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:61fde84539cb' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/pve/vm-101-disk-1,if=none,id=drive-virtio0,aio=native,cache=none,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=A6:C9:BC:8B:9F:57,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1


Code:

Jan 11 01:39:56 ixi qm[4120]: <root@pam> starting task UPID:ixi:00001019:000062F1:54B1AA3C:qmstart:101:root@pam:
Jan 11 01:39:56 ixi qm[4121]: start VM 101: UPID:ixi:00001019:000062F1:54B1AA3C:qmstart:101:root@pam:
Jan 11 01:39:57 ixi kernel: pci-stub 0000:03:00.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900003)
Jan 11 01:39:57 ixi kernel: device tap101i0 entered promiscuous mode
Jan 11 01:39:57 ixi kernel: vmbr0: port 5(tap101i0) entering forwarding state
Jan 11 01:39:57 ixi kernel: pci-stub 0000:03:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
Jan 11 01:39:57 ixi kernel: pci-stub 0000:03:00.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900003)
Jan 11 01:39:58 ixi kernel: assign device 0:3:0.0
Jan 11 01:39:58 ixi kernel: IRQ handler type mismatch for IRQ 16
Jan 11 01:39:58 ixi kernel: current handler: ehci_hcd:usb1
Jan 11 01:39:58 ixi kernel: Pid: 4128, comm: kvm veid: 0 Not tainted 2.6.32-34-pve #1
Jan 11 01:39:58 ixi kernel: Call Trace:
Jan 11 01:39:58 ixi kernel: [<ffffffff810f7a27>] ? __setup_irq+0x3e7/0x440
Jan 11 01:39:58 ixi kernel: [<ffffffffa0327c90>] ? kvm_assigned_dev_intr+0x0/0xf0 [kvm]
Jan 11 01:39:58 ixi kernel: [<ffffffff810f7b64>] ? request_threaded_irq+0xe4/0x1e0
Jan 11 01:39:58 ixi kernel: [<ffffffffa032d5cd>] ? kvm_vm_ioctl+0x100d/0x10f0 [kvm]
Jan 11 01:39:58 ixi kernel: [<ffffffff81461491>] ? pci_conf1_read+0xc1/0x120
Jan 11 01:39:58 ixi kernel: [<ffffffff81463203>] ? raw_pci_read+0x23/0x40
Jan 11 01:39:58 ixi kernel: [<ffffffff812ac47a>] ? pci_read_config+0x25a/0x280
Jan 11 01:39:58 ixi kernel: [<ffffffff811bcb9a>] ? vfs_ioctl+0x2a/0xa0
Jan 11 01:39:58 ixi kernel: [<ffffffff8122a886>] ? read+0x166/0x210
Jan 11 01:39:58 ixi kernel: [<ffffffff811bd1ce>] ? do_vfs_ioctl+0x7e/0x5a0
Jan 11 01:39:58 ixi kernel: [<ffffffff811a7156>] ? vfs_read+0x116/0x190
Jan 11 01:39:58 ixi kernel: [<ffffffff811bd73f>] ? sys_ioctl+0x4f/0x80
Jan 11 01:39:58 ixi kernel: [<ffffffff8100b182>] ? system_call_fastpath+0x16/0x1b
Jan 11 01:39:58 ixi kernel: deassign device 0:3:0.0
Jan 11 01:39:58 ixi kernel: pci-stub 0000:03:00.0: restoring config space at offset 0x1 (was 0x2900000, writing 0x2900003)
Jan 11 01:39:58 ixi kernel: pci-stub 0000:03:00.0: PCI INT A disabled
Jan 11 01:39:58 ixi kernel: vmbr0: port 5(tap101i0) entering disabled state
Jan 11 01:39:58 ixi qm[4121]: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -name pbx -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 512 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:61fde84539cb' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/pve/vm-101-disk-1,if=none,id=drive-virtio0,aio=native,cache=none,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=A6:C9:BC:8B:9F:57,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1
Jan 11 01:39:58 ixi qm[4120]: <root@pam> end task UPID:ixi:00001019:000062F1:54B1AA3C:qmstart:101:root@pam: start failed: command '/usr/bin/kvm -id 101 -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/101.vnc,x509,password -pidfile /var/run/qemu-server/101.pid -daemonize -name pbx -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -k en-us -m 512 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'pci-assign,host=03:00.0,id=hostpci0,bus=pci.0,addr=0x10' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:61fde84539cb' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/dev/pve/vm-101-disk-1,if=none,id=drive-virtio0,aio=native,cache=none,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap101i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=A6:C9:BC:8B:9F:57,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code 1

Do you have any ideas?
I have enabled the IOMMU in GRUB.
dmesg on the proxmox server:
Code:

# dmesg | grep -e DMAR -e IOMMU
ACPI: DMAR 00000000d6268ea0 000B8 (v01 INTEL  DQ77MK  0000003A INTL 00000001)
Intel-IOMMU: enabled
dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020e60262 ecap f0101a
dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap c9008020660262 ecap f0105a
IOMMU 0xfed90000: using Queued invalidation
IOMMU 0xfed91000: using Queued invalidation
IOMMU: Setting RMRR:
IOMMU: Setting identity map for device 0000:00:02.0 [0xd7800000 - 0xdfa00000]
IOMMU: Setting identity map for device 0000:00:1d.0 [0xd61f1000 - 0xd61fe000]
IOMMU: Setting identity map for device 0000:00:1a.0 [0xd61f1000 - 0xd61fe000]
IOMMU: Setting identity map for device 0000:00:14.0 [0xd61f1000 - 0xd61fe000]
IOMMU: Prepare 0-16MiB unity mapping for LPC
IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0x1000000]
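
The part that worries me is the "IRQ handler type mismatch for IRQ 16 / current handler: ehci_hcd:usb1" message in the syslog above, i.e. the card appears to share IRQ 16 with a USB controller. This is a hedged sketch of how I have been checking that (the unbind step is a guess on my part, and the PCI address may not be the right one):

Code:

# which devices sit on IRQ 16?
grep '^ *16:' /proc/interrupts
lspci -v | grep -B 8 'IRQ 16'

# possible workaround (untested): unbind the conflicting EHCI USB controller so the IRQ
# is no longer shared - 0000:00:1d.0 is only an example address taken from the dmesg above
echo 0000:00:1d.0 > /sys/bus/pci/drivers/ehci_hcd/unbind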

Here is my pveversion -v:
Code:

proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

can't start kvm machine - exit code 1

After upgrading from PVE 3.2 to 3.3, my KVM guests ceased to boot.
I tried a few suggestions found in related threads, but none really worked for me.

The task error message reads as follows:
Code:

TASK ERROR: start failed: command '/usr/bin/kvm -id 111 -chardev 'socket,id=qmp,path=/var/run/qemu-server/111.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/111.vnc,x509,password -pidfile /var/run/qemu-server/111.pid -daemonize -name spops -smp 'sockets=1,cores=2' -nodefaults -boot 'menu=on' -vga std -no-hpet -cpu 'kvm64,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep' -k pt -m 2048 -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:ff67a0bdb55d' -drive 'file=/var/lib/vz/images/111/vm-111-disk-1.qcow2,if=none,id=drive-ide1,format=qcow2,cache=writethrough,aio=native,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1,bootindex=100' -netdev 'type=user,id=net0,hostname=spops' -device 'e1000,mac=B2:67:42:45:4F:4A,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
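
Since the task log only shows "exit code 1" without the underlying error, a hedged next step (sketch) is to run the exact KVM command by hand and capture what it prints on stderr:

Code:

# print the full kvm command Proxmox would use for this VM, then paste and run it in a
# shell - the real error message should then show up on the terminal
qm showcmd 111

# alternatively, check the syslog around the time of the failed start
grep -i 'vm 111' /var/log/syslog | tail -n 20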

Bridge issues

$
0
0
Hello all!
I have the following network structure:
Name   Type            Active  Autostart  Ports/Slaves  IP address       Subnet mask      Gateway
eth0   Network device  Yes     No
vmbr0  Linux bridge    Yes     Yes        eth0          89.xxx.xxx.xxx   255.xxx.xxx.xxx  89.xxx.xxx.xxx
vmbr1  Linux bridge    Yes     Yes        eth0:2        93.xxx.xxx.xxx   255.xxx.xxx.xxx  93.xxx.xxx.xxx
vmbr2  Linux bridge    Yes     Yes        eth0:3        188.xxx.xxx.xxx  255.xxx.xxx.xxx  188.xxx.xxx.xxx

I cannot ping the KVM VPSes from the node, and I also don't have access from the KVM VPSes to the node IP. The KVM VPSes otherwise have working network.
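
What I have checked so far is roughly the following (a hedged sketch; the VMID is a placeholder):

Code:

# on the node: bridge membership, addresses and routes
brctl show
ip -4 addr show
ip route

# confirm which bridge the guest NIC is actually attached to
grep ^net /etc/pve/qemu-server/<vmid>.conf

# watch the bridge while pinging a guest to see if ARP/ICMP even leaves the node
tcpdump -ni vmbr0 icmp or arp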

Can anyone help me?

Can there be such a configuration: 6 machines in one cluster, but HA uses only 4 of them?

Hi everyone,

Can there be such a configuration: 6 machines in one cluster, but HA uses only 4 of them?

Thanks for your help,
miha_r.