Channel: Proxmox Support Forum

Sending data without using the Switch

First of all, hello everyone.

I am new to using Proxmox, and I have a question that maybe you can solve easily, but I have no idea about it. I have already implemented Proxmox with multiple virtual machines, and everything works smoothly.

I have a virtual machine (VM1) that offers web services, but the databases for those services live on a second VM (VM2). What I want is for the two virtual machines to exchange this data over the network without going through the physical switch; that is, the traffic should go directly from VM1 to VM2.

In other words, a client connects to VM1 on the 192.168.x.x subnet, and VM1 queries the database on VM2 over a different network, for example 10.10.x.x.

I understand this can be done with a bridge, but I have no idea how to set it up in Proxmox. Can anybody help me or give me an idea of what I have to do?
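
For illustration, a minimal sketch of such a host-only bridge, assuming vmbr1 is still free on the host and 10.10.0.0/24 is the internal range (both are just examples). A bridge with no physical port never reaches the switch; each VM then gets a second virtual NIC attached to vmbr1 and an address in that range, while keeping its existing NIC on vmbr0 for the 192.168.x.x side.

Code:

auto vmbr1
iface vmbr1 inet static
        address 10.10.0.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0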

I do not write very good English, so I have relied on Google Translate. Sorry for any inconvenience this may cause.

Thanks

2 Node 1 Quorum Disk Expected behavior

Just trying to get some input.

I have a 2-node setup with 1 quorum disk. DRBD and cluster communication run over a dedicated 10Gb connection between the two nodes. When I pull the 10Gb link from one of the nodes, they end up fencing each other off. I thought this is exactly what the quorum disk was supposed to prevent? Trying to pin down how to prevent this. I appreciate the input!
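
Not an answer, just a hedged checklist that might help narrow it down (assuming the Red Hat cluster tooling that PVE 2.x ships): before pulling the link, confirm on both nodes that the quorum disk is really registered and contributing its vote.

Code:

mkqdisk -L        # list the quorum disk label(s) this node can see
cman_tool status  # "Expected votes" / "Quorum" should include the qdisk vote
clustat           # the quorum disk should show up as an online member

Also worth checking: if the quorum disk itself is reached over that same 10Gb link (e.g. iSCSI over it), pulling the link takes the qdisk away as well, which would explain the mutual fencing.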

2 nics.... 2 vlans.... 1 nic down vps still connected

I thought this would work the way I expected it to, but I guess I was wrong.

Scenario: Client gets hit with 10Gbps DDoS... nic1 useless...

nic1 vlan1 - 63.217.77.***
nic2 vlan2 - 198.154.77.***

nic1 vlan1 includes vps ips and 63.217.77.*** range
nic2 is on a separate VLAN so that if vlan1 is hosed and nic1 is not available, we can still access the management console.

My issue is that when we disconnected nic1 to stop the attack, I could still reach the VM on node 1 by going to its IP address. How is this possible?

venet is set to route through vmbr0, which is tied to eth0. If eth0 is disconnected, how is it possible for me to access the VM from outside the node?
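
For what it's worth, a hedged way to see where the traffic is actually flowing while nic1 is unplugged (standard tools on the node, nothing Proxmox-specific):

Code:

brctl show       # which physical interfaces are actually enslaved to vmbr0
ip route         # is there still a default route out via the vlan2/nic2 side?
ip neigh show    # ARP entries and the interface they were learned on

If the node still has a working route out through nic2 and is doing the routing for venet, the VPS traffic may simply be leaving via the second NIC.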

let me know. thanks

Permissions

So we're trying to build a panel and thinking about how to organize users. We understand pools may be the easiest way to do this but looking in pvesh, where does Proxmox keep track of which user controls what resources?

i.e. if I give a user access to a pool and he creates a VM in that pool, how does Proxmox know that only that user has access to that VM?

We did a pvesh get /nodes/nodeid/qemu

and the output doesn't show anything about who created it or who has access to it.

We seem to be able to get everything else but that
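
As far as I understand it (hedged, not verified against every version), access is granted through the ACL on /pool/<poolname>, which propagates to the VMs inside the pool, rather than through a per-VM owner field, which is why the qemu listing shows nothing. The ACLs themselves live in /etc/pve/user.cfg and can also be read over the API:

Code:

pvesh get /access/acl    # list ACL entries: path, user/group, role, propagate flag
cat /etc/pve/user.cfg    # users, groups, pools and ACL lines in one cluster-wide file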

Datastore.Allocate vs. Datastore.AllocateSpace

Hi,

We want our users to be able to restore a backup but not erase it (kind of strange, I know). We gave them access to Datastore.AllocateSpace, but it gives this error: Permission check failed (/storage/backup, Datastore.Allocate) (403)

If we give both AllocateSpace and Allocate, then they can erase the backups that show up - is there any good workaround?
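
A hedged idea rather than a verified fix: a custom role that carries only what a restore needs, deliberately without Datastore.Allocate. The command and privilege names below are real as far as I know, the role name and user are made up, and whether your version's restore path is satisfied by AllocateSpace alone is exactly what the 403 above calls into question.

Code:

pveum roleadd RestoreOnly -privs "Datastore.AllocateSpace Datastore.Audit VM.Backup VM.Audit"
pveum aclmod /storage/backup -user joe@pve -role RestoreOnly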

Benchmarking VMs shows impossibly high disk read speeds

I have something weird going on. I have a Win7 VM with the OS drive on a qcow vdisk on local storage, a second qcow vdisk on an NFS share, and a third vdisk on an iSCSI LVM. I'm using iometer in an attempt to benchmark NFS vs iSCSI performance. With iSCSI I can see it both reading from and writing to my SAN, but with NFS I can only see it writing... there appear to be no reads on the SAN.

When I attempt to benchmark vdisks stored on the local storage I also get impossible read speeds.

Is there some sort of disk caching going on? I have each hard disk set to "Default (no cache)" under the cache setting.
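
One hedged way to tell "real" reads from cached or never-allocated reads is to watch the host while the benchmark runs: if iometer reports huge read rates but the host shows almost no device or NFS traffic, the reads are being satisfied above the storage layer (guest-side caching, or reads of not-yet-allocated qcow2 clusters, which return zeros without touching the disk). A sketch, assuming the sysstat package is installed on the node:

Code:

iostat -x 2    # per-device read/write throughput as seen by the Proxmox host
nfsstat -c     # NFS client operation counters; capture before and after a run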

nested proxmoxVE installs

Just out of curiosity, as I have used nested ESXi VMs for learning purposes before: can Proxmox VE be installed into Debian CTs and make use of the CPU virtualization extensions, in order to play around with migration, HA etc. while learning the software?
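
For what it's worth, and hedged accordingly: Proxmox VE will not run inside an OpenVZ CT (it needs its own kernel and /dev/kvm), but it can run inside a KVM guest if nested virtualization is enabled on the physical host and the host kernel/CPU support it. A sketch for an Intel host (AMD uses kvm-amd and its own 'nested' option):

Code:

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
modprobe -r kvm_intel && modprobe kvm_intel       # only works with no VMs running
cat /sys/module/kvm_intel/parameters/nested       # Y (or 1) means nesting is enabled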

UI freeze when opening Storage tab if NFS Share is not available

When the NFS share is not available, the UI freezes and I get a cluster error.

I have deleted the storage from /etc/pve/storage.cfg and restarted pvedaemon, but when I run mount, I find the entry still exists:

Code:

root@serv:/etc# mount
/dev/sda1 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
x1.x1.xx0.xx:/home/servvps2 on /mnt/pve/remote-2 type nfs (rw,addr=x1.x1.x0.x7)
/dev/mapper/pve-vzsnap--ks381126.kimsufi.com--0 on /mnt/vzsnap0 type ext3 (rw)


How can I remove it from the mounted filesystems?
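
A hedged way to get rid of the stale mount without rebooting; this is the usual approach for an NFS server that has gone away, using the path from the mount output above:

Code:

umount -f /mnt/pve/remote-2    # force-unmount the unreachable NFS share
umount -l /mnt/pve/remote-2    # lazy unmount as a fallback if -f still hangs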

bond and vlan

Hi all,

I am stuck with a problem I cannot solve. I have a bond, a number of VLANs, and bridges for those VLANs, but something fundamental seems to be wrong: communication cannot pass through the bond to the switch, while all communication between VMs on a given bridge and the host works. As I see it, the bond either strips off the VLAN tag or does not apply it at all. Can somebody see if my interfaces config file is wrong?

cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode 802.3ad
pre-up ifup eth0 eth1
post-down ifdown eth0 eth1
#bond_xmit_hash_policy layer2+3
#bond_lacp_rate slow


auto vlan1
iface vlan1 inet manual
vlan-raw-device bond0


auto vlan10
iface vlan10 inet manual
vlan-raw-device bond0


auto vlan20
iface vlan20 inet manual
vlan-raw-device bond0


auto vmbr0
iface vmbr0 inet static
address 192.168.2.8
netmask 255.255.255.0
gateway 192.168.2.1
bridge_ports vlan1
bridge_stp off
bridge_fd 0


auto vmbr10
iface vmbr10 inet static
address 172.16.1.8
netmask 255.255.255.0
bridge_ports vlan10
bridge_stp off
bridge_fd 0
# post-up ip route add table vlan10 default via 172.16.1.1 dev vmbr10
# post-up ip rule add from 172.16.1.0/24 table vlan10
# post-down ip route del table vlan10 default via 172.16.1.1 dev vmbr10
# post-down ip rule del from 172.16.1.0/24 table vlan10


auto vmbr20
iface vmbr20 inet static
address 172.16.2.8
netmask 255.255.255.0
bridge_ports vlan20
bridge_stp off
bridge_fd 0


auto vmbr30
iface vmbr30 inet static
address 172.16.3.8
netmask 255.255.255.0
bridge_ports bond0
bridge_stp off
bridge_fd 0
# gateway 172.16.3.254

An example VM config:
cat /etc/pve/qemu-server/109.conf
#eth0%3A 192.168.2.201
#eth1%3A 172.16.1.201
boot: cn
bootdisk: virtio0
cores: 2
cpu: qemu32
ide2: none,media=cdrom
memory: 384
name: ns1.datanom.net
net0: virtio=76:18:DB:B7:EB:C9,bridge=vmbr0
net1: virtio=7E:D7:33:6D:FB:4A,bridge=vmbr10
ostype: l26
sockets: 1
startup: order=1
virtio0: qnap_lvm:vm-109-disk-1
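
Not a confirmed fix, but one variation that may be worth testing (a sketch, same addressing as above, shown for vlan10 only): let the kernel create the tagged interfaces with the bond0.<vid> naming instead of vlanN plus vlan-raw-device, and avoid bridging the untagged bond0 (vmbr30) next to the tagged sub-interfaces, since mixing tagged and untagged use of the same bond can behave unexpectedly. The switch-side LACP port-channel also needs to carry VLANs 1, 10 and 20 as tagged.

Code:

auto bond0.10
iface bond0.10 inet manual

auto vmbr10
iface vmbr10 inet static
        address 172.16.1.8
        netmask 255.255.255.0
        bridge_ports bond0.10
        bridge_stp off
        bridge_fd 0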

VM network on 2 NIC

Greetings
I have a problem with my VMs.
My goal is to virtualise our servers. I created two networks: one for LAN needs such as AD, DHCP, internal DNS, some accounting software, etc. (the 192.168.1.0 network), and the other for the external environment (homepages, external DNS servers, etc.). See the attached scheme.PNG.
My network settings are:
Code:



# network interface settings
auto lo
iface lo inet loopback


auto eth0
iface eth0 inet manual


auto eth1
iface eth1 inet manual


auto vmbr0
iface vmbr0 inet static
        address  192.168.1.200
        netmask  255.255.255.0
        gateway  192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0


auto vmbr20
iface vmbr20 inet static
        address  192.168.101.2
        netmask  255.255.255.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

Problem: I can't get the internal network (NIC0) working in the Windows VMs (Windows XP, Win7, Server 2012). All the CTs work fine on NIC0 as expected. If the network does come up in a VM, it is not stable and loses packets. But on NIC1, for the external side, everything works as I need with the same virtio drivers.
How can I deal with this problem? If my system only had to run CTs everything would be great, but there must be at least one Windows Server 2008.
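
One hedged test that might narrow it down: switch the internal NIC of one affected Windows VM from virtio to e1000 (an emulated Intel NIC that needs no extra driver) and see whether vmbr0 then behaves. If it does, the problem is on the virtio driver side rather than in the bridge or eth0. In /etc/pve/qemu-server/<vmid>.conf the line would look roughly like this (the MAC below is a placeholder):

Code:

net0: e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0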


My system:
Code:

pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1


Login failed. Please try again. Again and again...

Hello. Please help.
I installed Proxmox on Debian over SSH. The web interface is working, but it always tells me "Login failed. Please try again."
---- Tried both the PAM and PVE realms
---- The password is ASCII only
---- root and the password work over SSH
---- /etc/hosts:
95.211.218.209 localhost.localdomain localhost
---- cat /etc/hostname
localhost
---- I don't have /etc/pve/user.cfg
---- ls -l /etc/pve
total 2
-rw-r----- 1 root www-data 451 Feb 14 12:06 authkey.pub
lrwxr-x--- 1 root www-data 0 Jan 1 1970 local -> nodes/localhost
drwxr-x--- 2 root www-data 0 Feb 14 12:06 nodes
lrwxr-x--- 1 root www-data 0 Jan 1 1970 openvz -> nodes/localhost/openvz
drwx------ 2 root www-data 0 Feb 14 12:06 priv
-rw-r----- 1 root www-data 1533 Feb 14 12:06 pve-root-ca.pem
-rw-r----- 1 root www-data 1675 Feb 14 12:06 pve-www.key
lrwxr-x--- 1 root www-data 0 Jan 1 1970 qemu-server -> nodes/localhost/qemu-server
-rw-r----- 1 root www-data 119 Feb 14 12:06 vzdump.cron

-------
pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-17-pve
proxmox-ve-2.6.32: 2.2-83
pve-kernel-2.6.32-10-pve: 2.6.32-63
pve-kernel-2.6.32-13-pve: 2.6.32-72
pve-kernel-2.6.32-17-pve: 2.6.32-83
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1



I can't find what is going wrong. Thank you for your help!
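
One thing that stands out, hedged since I can only go by what is posted: the node is literally named "localhost" and /etc/hosts maps the public IP to localhost.localdomain. The cluster filesystem and the login path generally expect a real node name that resolves to the node's IP; note that the directory under /etc/pve/nodes/ follows the hostname, and a default /etc/pve/user.cfg should appear once that is in order. A sketch of what it might look like, with "pve1" as a made-up hostname:

Code:

# /etc/hostname
pve1

# /etc/hosts
127.0.0.1       localhost.localdomain localhost
95.211.218.209  pve1.yourdomain.tld pve1

# then, roughly:
hostname pve1
/etc/init.d/pve-cluster restart
/etc/init.d/pvedaemon restart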

NFS hangs, KVMs gone

We were running Proxmox 1.9 and a few weeks ago upgraded to the latest 2.2 release. Sometimes our NFS server hangs because of a storage lockup (the storage is pingable but there is no keyboard/console response at all), but with the help of the NFS mount options in storage.cfg, when we reboot the storage everything carries on fine from where it hung.

Our storage.cfg has this line in it for the NFS share:

options vers=3,rw,rsize=4096,wsize=4096,hard,intr

We're running KVMs, and they are on the NFS share as before. Today the storage locked up again, but when it came back after a reboot the KVMs had shut themselves down. /var/log/messages shows the incident starting with these logs:

Feb 14 12:49:43 prox144 kernel: kvm D ffff880812ef2640 0 782060 1 0 0x00000004
Feb 14 12:49:43 prox144 kernel: ffff8805af44f978 0000000000000086 a386010002000000 0700000003000000
Feb 14 12:49:43 prox144 kernel: 2000000001000000 070000006b4a5100 00343431786f7270 0000000000000000
Feb 14 12:49:43 prox144 kernel: 0000000001000000 ffff880812ef2bf0 ffff8805af44ffd8 ffff8805af44ffd8
Feb 14 12:49:43 prox144 kernel: Call Trace:
Feb 14 12:49:43 prox144 kernel: [<ffffffffa05245b0>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffff81528993>] io_schedule+0x73/0xc0
Feb 14 12:49:43 prox144 kernel: [<ffffffffa05245be>] nfs_wait_bit_uninterruptible+0xe/0x20 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffff8152935f>] __wait_on_bit+0x5f/0x90
Feb 14 12:49:43 prox144 kernel: [<ffffffffa05245b0>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffff81529408>] out_of_line_wait_on_bit+0x78/0x90
Feb 14 12:49:43 prox144 kernel: [<ffffffff81096b90>] ? wake_bit_function+0x0/0x40
Feb 14 12:49:43 prox144 kernel: [<ffffffffa052459f>] nfs_wait_on_request+0x2f/0x40 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffffa052af17>] nfs_updatepage+0x2c7/0x5b0 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffffa05188ba>] nfs_write_end+0x5a/0x290 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffff81126a10>] generic_file_buffered_write_iter+0x170/0x2b0
Feb 14 12:49:43 prox144 kernel: [<ffffffff8112877d>] __generic_file_write_iter+0x1fd/0x400
Feb 14 12:49:43 prox144 kernel: [<ffffffff810b4547>] ? futex_wait+0x227/0x380
Feb 14 12:49:43 prox144 kernel: [<ffffffff81128a05>] __generic_file_aio_write+0x85/0xa0
Feb 14 12:49:43 prox144 kernel: [<ffffffff81128a8f>] generic_file_aio_write+0x6f/0xe0
Feb 14 12:49:43 prox144 kernel: [<ffffffffa05183fc>] nfs_file_write+0x10c/0x210 [nfs]
Feb 14 12:49:43 prox144 kernel: [<ffffffff8119648a>] do_sync_write+0xfa/0x140
Feb 14 12:49:43 prox144 kernel: [<ffffffff81096b50>] ? autoremove_wake_function+0x0/0x40
Feb 14 12:49:43 prox144 kernel: [<ffffffff8104e508>] ? __wake_up_locked_key+0x18/0x20
Feb 14 12:49:43 prox144 kernel: [<ffffffff811e0103>] ? eventfd_write+0x193/0x1d0
Feb 14 12:49:43 prox144 kernel: [<ffffffff81196768>] vfs_write+0xb8/0x1a0
Feb 14 12:49:43 prox144 kernel: [<ffffffff81197242>] sys_pwrite64+0x82/0xa0
Feb 14 12:49:43 prox144 kernel: [<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
Feb 14 12:50:22 prox144 kernel: ct0 nfs: server 192.168.201.121 not responding, still trying
Feb 14 12:51:43 prox144 kernel: kvm D ffff88041c8f4280 0 22239 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff88041c09bc98 0000000000000086 ffff88081d7d83c0 00000000000000db
Feb 14 12:51:43 prox144 kernel: 0000000000000000 ffff88041c09bab8 ffffffff811abf90 dead000000100100
Feb 14 12:51:43 prox144 kernel: dead000000200200 ffff88041c8f4830 ffff88041c09bfd8 ffff88041c09bfd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffff811abf90>] ? pollwake+0x0/0x60
Feb 14 12:51:43 prox144 kernel: [<ffffffff810705a5>] exit_mm+0x95/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffff810b3410>] ? wake_futex+0x40/0x60
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e03b1>] ? eventfd_read+0x1c1/0x220
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff88041bd82bd0 0 22251 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff88041bcc3ad8 0000000000000086 00000000febf3014 ffff8808139417d0
Feb 14 12:51:43 prox144 kernel: 0000000000000001 0000000000000040 ffff88041bcc3aa8 ffffffffa0326bfb
Feb 14 12:51:43 prox144 kernel: ffff88041bcc3a78 ffff88041bd83180 ffff88041bcc3fd8 ffff88041bcc3fd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffffa0326bfb>] ? emulator_write_emulated_onepage+0x11b/0x180 [kvm]
Feb 14 12:51:43 prox144 kernel: [<ffffffff81529085>] schedule_timeout+0x215/0x2e0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81030b4e>] ? physflat_send_IPI_mask+0xe/0x10
Feb 14 12:51:43 prox144 kernel: [<ffffffff8102ade9>] ? native_smp_send_reschedule+0x49/0x60
Feb 14 12:51:43 prox144 kernel: [<ffffffff8104f0a8>] ? resched_task+0x68/0x70
Feb 14 12:51:43 prox144 kernel: [<ffffffff8104f13d>] ? check_preempt_curr+0x6d/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffff81528cf3>] wait_for_common+0x123/0x190
Feb 14 12:51:43 prox144 kernel: [<ffffffff81059f10>] ? default_wake_function+0x0/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff810857bd>] ? signal_wake_up+0x2d/0x40
Feb 14 12:51:43 prox144 kernel: [<ffffffff81528e1d>] wait_for_completion+0x1d/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff8119ebb7>] do_coredump+0x3a7/0xbf0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81059afe>] ? try_to_wake_up+0xae/0x4c0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81084835>] ? __sigqueue_free+0x45/0x50
Feb 14 12:51:43 prox144 kernel: [<ffffffff810857c9>] ? signal_wake_up+0x39/0x40
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888ad>] get_signal_to_deliver+0x1ed/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff810865eb>] ? __send_signal+0x19b/0x390
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff810868d0>] ? do_send_sig_info+0x70/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085cc5>] ? sigprocmask+0x75/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085dea>] ? sys_rt_sigprocmask+0x8a/0x100
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff88041bd82100 0 22252 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff88041b265c98 0000000000000086 0000000000000000 ffff88041b265c08
Feb 14 12:51:43 prox144 kernel: ffff88041b265c68 ffffffff810b3ea0 ffff88041b265c28 0000000000000283
Feb 14 12:51:43 prox144 kernel: ffffffffffffffe0 ffff88041bd826b0 ffff88041b265fd8 ffff88041b265fd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffff810b3ea0>] ? exit_robust_list+0x90/0x160
Feb 14 12:51:43 prox144 kernel: [<ffffffff810705a5>] exit_mm+0x95/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff811aa475>] ? do_vfs_ioctl+0x3e5/0x5d0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81012b79>] ? read_tsc+0x9/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff810a19d9>] ? ktime_get_ts+0xa9/0xe0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff88041d2aec50 0 22254 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff88041d2b1c98 0000000000000086 ffff88041d2b1c28 ffff88041d2b1c88
Feb 14 12:51:43 prox144 kernel: ffff88041d2b1c68 ffffffff810b3ea0 ffff88041d2b1c28 ffffffff812747e7
Feb 14 12:51:43 prox144 kernel: ffffffffffffffe0 ffff88041d2af200 ffff88041d2b1fd8 ffff88041d2b1fd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffff810b3ea0>] ? exit_robust_list+0x90/0x160
Feb 14 12:51:43 prox144 kernel: [<ffffffff812747e7>] ? plist_del+0x37/0x70
Feb 14 12:51:43 prox144 kernel: [<ffffffff810705a5>] exit_mm+0x95/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffff8102ade9>] ? native_smp_send_reschedule+0x49/0x60
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff81059f10>] ? default_wake_function+0x0/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff8104e508>] ? __wake_up_locked_key+0x18/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e0103>] ? eventfd_write+0x193/0x1d0
Feb 14 12:51:43 prox144 kernel: [<ffffffff815281f4>] ? thread_return+0xba/0x7e6
Feb 14 12:51:43 prox144 kernel: [<ffffffff810b67cb>] ? sys_futex+0x7b/0x170
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff8807b2020080 0 782054 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff88058f059c98 0000000000000086 ffff88058f059bf8 ffffffffa04737f0
Feb 14 12:51:43 prox144 kernel: ffff88058f059c68 ffffffff810b3ea0 0000000000000000 ffff88041c220800
Feb 14 12:51:43 prox144 kernel: ffffffffffffffe0 ffff8807b2020630 ffff88058f059fd8 ffff88058f059fd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffffa04737f0>] ? rpc_put_task+0x10/0x20 [sunrpc]
Feb 14 12:51:43 prox144 kernel: [<ffffffff810b3ea0>] ? exit_robust_list+0x90/0x160
Feb 14 12:51:43 prox144 kernel: [<ffffffff810705a5>] exit_mm+0x95/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffffa051b758>] ? __nfs_revalidate_inode+0x58/0x220 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff81071000>] ? release_task+0x370/0x520
Feb 14 12:51:43 prox144 kernel: [<ffffffff81196fb5>] ? vfs_read+0xb5/0x1a0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff880812ef2640 0 782060 1 0 0x00000004
Feb 14 12:51:43 prox144 kernel: ffff8805af44f978 0000000000000086 a386010002000000 0700000003000000
Feb 14 12:51:43 prox144 kernel: 2000000001000000 070000006b4a5100 00343431786f7270 0000000000000000
Feb 14 12:51:43 prox144 kernel: 0000000001000000 ffff880812ef2bf0 ffff8805af44ffd8 ffff8805af44ffd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffffa05245b0>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffff81528993>] io_schedule+0x73/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffffa05245be>] nfs_wait_bit_uninterruptible+0xe/0x20 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffff8152935f>] __wait_on_bit+0x5f/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffffa05245b0>] ? nfs_wait_bit_uninterruptible+0x0/0x20 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffff81529408>] out_of_line_wait_on_bit+0x78/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffff81096b90>] ? wake_bit_function+0x0/0x40
Feb 14 12:51:43 prox144 kernel: [<ffffffffa052459f>] nfs_wait_on_request+0x2f/0x40 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffffa052af17>] nfs_updatepage+0x2c7/0x5b0 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffffa05188ba>] nfs_write_end+0x5a/0x290 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffff81126a10>] generic_file_buffered_write_iter+0x170/0x2b0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8112877d>] __generic_file_write_iter+0x1fd/0x400
Feb 14 12:51:43 prox144 kernel: [<ffffffff810b4547>] ? futex_wait+0x227/0x380
Feb 14 12:51:43 prox144 kernel: [<ffffffff81128a05>] __generic_file_aio_write+0x85/0xa0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81128a8f>] generic_file_aio_write+0x6f/0xe0
Feb 14 12:51:43 prox144 kernel: [<ffffffffa05183fc>] nfs_file_write+0x10c/0x210 [nfs]
Feb 14 12:51:43 prox144 kernel: [<ffffffff8119648a>] do_sync_write+0xfa/0x140
Feb 14 12:51:43 prox144 kernel: [<ffffffff81096b50>] ? autoremove_wake_function+0x0/0x40
Feb 14 12:51:43 prox144 kernel: [<ffffffff8104e508>] ? __wake_up_locked_key+0x18/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e0103>] ? eventfd_write+0x193/0x1d0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81196768>] vfs_write+0xb8/0x1a0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81197242>] sys_pwrite64+0x82/0xa0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b182>] system_call_fastpath+0x16/0x1b
Feb 14 12:51:43 prox144 kernel: kvm D ffff88041a8729c0 0 23819 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff8803a2a8bbb8 0000000000000046 ffff8803a2a8bb28 ffffffff81055331
Feb 14 12:51:43 prox144 kernel: ffff88003965e9c0 ffff8807b2020b88 ffff8803a2a8bb58 ffffffff81066793
Feb 14 12:51:43 prox144 kernel: ffff8807b2020b50 ffff88041a872f70 ffff8803a2a8bfd8 ffff8803a2a8bfd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffff81055331>] ? enqueue_boosted_entity+0x41/0x70
Feb 14 12:51:43 prox144 kernel: [<ffffffff81066793>] ? enqueue_task_fair+0xc3/0x1f0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81528993>] io_schedule+0x73/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e1d02>] wait_for_all_aios+0xd2/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81059f10>] ? default_wake_function+0x0/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e2b69>] exit_aio+0x59/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81069e10>] mmput+0x30/0x1f0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81070619>] exit_mm+0x109/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffff81084835>] ? __sigqueue_free+0x45/0x50
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff810865eb>] ? __send_signal+0x19b/0x390
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff810868d0>] ? do_send_sig_info+0x70/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085cc5>] ? sigprocmask+0x75/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085dea>] ? sys_rt_sigprocmask+0x8a/0x100
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff88041c533210 0 24046 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff8803c63d3bb8 0000000000000046 ffff8803c63d3b58 0000000000000096
Feb 14 12:51:43 prox144 kernel: ffff88003961e9c0 ffff88041cb55000 ffff8803c63d3b58 ffff88003961e9c0
Feb 14 12:51:43 prox144 kernel: ffff88003961e9c0 ffff88041c5337c0 ffff8803c63d3fd8 ffff8803c63d3fd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffff81528993>] io_schedule+0x73/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e1d02>] wait_for_all_aios+0xd2/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81059f10>] ? default_wake_function+0x0/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e2b69>] exit_aio+0x59/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81069e10>] mmput+0x30/0x1f0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81070619>] exit_mm+0x109/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffff81084835>] ? __sigqueue_free+0x45/0x50
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff810865eb>] ? __send_signal+0x19b/0x390
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff810868d0>] ? do_send_sig_info+0x70/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085cc5>] ? sigprocmask+0x75/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085dea>] ? sys_rt_sigprocmask+0x8a/0x100
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17
Feb 14 12:51:43 prox144 kernel: kvm D ffff880366042540 0 24567 1 0 0x00000000
Feb 14 12:51:43 prox144 kernel: ffff88032eef9bb8 0000000000000046 ffff88032eef9b58 0000000000000096
Feb 14 12:51:43 prox144 kernel: ffff88003965e9c0 ffff88032efce000 ffff88032eef9b58 ffff88003965e9c0
Feb 14 12:51:43 prox144 kernel: ffff88003965e9c0 ffff880366042af0 ffff88032eef9fd8 ffff88032eef9fd8
Feb 14 12:51:43 prox144 kernel: Call Trace:
Feb 14 12:51:43 prox144 kernel: [<ffffffff81528993>] io_schedule+0x73/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e1d02>] wait_for_all_aios+0xd2/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81059f10>] ? default_wake_function+0x0/0x20
Feb 14 12:51:43 prox144 kernel: [<ffffffff811e2b69>] exit_aio+0x59/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81069e10>] mmput+0x30/0x1f0
Feb 14 12:51:43 prox144 kernel: [<ffffffff81070619>] exit_mm+0x109/0x150
Feb 14 12:51:43 prox144 kernel: [<ffffffff810723e7>] do_exit+0x187/0x930
Feb 14 12:51:43 prox144 kernel: [<ffffffff81084835>] ? __sigqueue_free+0x45/0x50
Feb 14 12:51:43 prox144 kernel: [<ffffffff81072be8>] do_group_exit+0x58/0xd0
Feb 14 12:51:43 prox144 kernel: [<ffffffff810888b6>] get_signal_to_deliver+0x1f6/0x470
Feb 14 12:51:43 prox144 kernel: [<ffffffff810865eb>] ? __send_signal+0x19b/0x390
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100a335>] do_signal+0x75/0x800
Feb 14 12:51:43 prox144 kernel: [<ffffffff810868d0>] ? do_send_sig_info+0x70/0x90
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085cc5>] ? sigprocmask+0x75/0x110
Feb 14 12:51:43 prox144 kernel: [<ffffffff81085dea>] ? sys_rt_sigprocmask+0x8a/0x100
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100ab50>] do_notify_resume+0x90/0xc0
Feb 14 12:51:43 prox144 kernel: [<ffffffff8100b451>] int_signal+0x12/0x17

Any ideas?
Best
Gokalp

Can't start VMs on new installation

Hi,

After successfully using Proxmox in a test environment, I have today put Proxmox onto a different machine, but I'm having trouble starting any VM.

I'm seeing these messages in the syslog:

Feb 14 14:40:19 RSPROX1 pvedaemon[3929]: start VM 102: UPID:RSPROX1:00000F59:000BC533:511CF753:qmstart:10 2:root@pam:
Feb 14 14:40:19 RSPROX1 pvedaemon[1787]: <root@pam> starting task UPID:RSPROX1:00000F59:000BC533:511CF753:qmstart:10 2:root@pam:
Feb 14 14:40:20 RSPROX1 pvedaemon[3929]: start failed: command '/usr/bin/kvm -id 102 -chardev 'socket,id=qmp,path=/var/run/qemu-server/102.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/102.vnc,x509,password -pidfile /var/run/qemu-server/102.pid -daemonize -name windows.example.com -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga std -no-hpet -k en-gb -m 512 -cpuunits 1000 -usbdevice tablet -drive 'file=/dev/cdrom,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/102/vm-102-disk-1.raw,if=none,id=drive-ide0,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge' -device 'rtl8139,mac=26:6A:DC:1C:79:AE,netdev=net0,bus=pci .0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
Feb 14 14:40:20 RSPROX1 pvedaemon[1787]: <root@pam> end task UPID:RSPROX1:00000F59:000BC533:511CF753:qmstart:10 2:root@pam: start failed: command '/usr/bin/kvm -id 102 -chardev 'socket,id=qmp,path=/var/run/qemu-server/102.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/102.vnc,x509,password -pidfile /var/run/qemu-server/102.pid -daemonize -name windows.example.com -smp 'sockets=1,cores=1' -nodefaults -boot 'menu=on' -vga std -no-hpet -k en-gb -m 512 -cpuunits 1000 -usbdevice tablet -drive 'file=/dev/cdrom,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/102/vm-102-disk-1.raw,if=none,id=drive-ide0,aio=native,cache=none' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap102i0,script=/var/lib/qemu-server/pve-bridge' -device 'rtl8139,mac=26:6A:DC:1C:79:AE,netdev=net0,bus=pci .0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1

But I don't see any useful information in this, and I don't know where to look to find out what's going wrong.
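
Two hedged things that might help pin it down: run the start from the command line so the error lands on stderr instead of being swallowed, and note that the command attaches the physical drive /dev/cdrom (file=/dev/cdrom in the -drive argument); on a machine without an optical drive that alone can make kvm exit with code 1, so setting the CD drive to "none" or an ISO is worth a try.

Code:

qm start 102        # same start, but any error output is printed to the shell
ls -l /dev/cdrom    # does the new machine actually have this device?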

Anybody have any clues ?

Thanks.

Upgrade whole Server with Proxmox

Hi,
I have Debian Lenny with Proxmox 1.8.


  1. The host system should be upgraded to Debian Squeeze
  2. Proxmox should be upgraded to 2.x (the current stable version)
  3. The guests should be upgraded to Squeeze
  4. All of this should happen on a live system.


Now my question: what is the best way to do this painlessly?

Alternative kernels for PVE 2

Is it possible to use a 2.6.35 kernel with PVE 2, i.e. supporting only KVM etc.?

Thanks
Gokalp

Fencing Match

I have a two-node cluster set up with HA. Both nodes show they are online, but they keep trying to fence each other.

Here is a bit from the syslog:
Feb 14 15:53:52 prox1 fenced[1572]: fencing node prox3 still retrying
Feb 14 16:00:44 prox3 fenced[2324]: fencing node prox1 still retrying

I have tested multicast and it's working correctly.
I have changed the fence device to manual for the time being so I can troubleshoot.

Any suggestions would be greatly appreciated.


System Info:
Version: 6.2.0
Config Version: 15
Cluster Name: CLUSTER
Cluster Id: 17809
Cluster Member: Yes
Cluster Generation: 396
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 6
Flags: 2node
Ports Bound: 0
Node name: prox1
Node ID: 1
Multicast addresses: 239.192.69.214
Node addresses: <internalip>
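
A hedged way to check whether the fence agents themselves work, separately from the membership question (be aware that fence_node really does fence the target, so only test when that is safe):

Code:

fence_tool ls       # fence domain state and member count as each node sees it
fence_node prox3    # invoke the configured fence agent against prox3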

Centos Issues Creating a new CT (x64 and i686/386)

I've tried both 32- and 64-bit images; the 32-bit one I even downloaded using the web interface. It says 'Starting udev: [ OK ]' and then hangs with the cursor block on the next line :(

Nothing special about this setup, just a plain disc install with local RAID.
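
A hedged check, since CentOS templates are known to sit at the udev line on the console while the container is actually up; CTID 101 below is a placeholder:

Code:

vzctl status 101               # "exist mounted running" means the CT is actually up
vzctl enter 101                # get a shell inside the CT regardless of the console
tail -n 50 /var/log/vzctl.log  # recent vzctl messages for the start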

proxmox and ceph

Dear Proxmox community,

I have a question concerning Proxmox and Ceph storage. The Proxmox storage documentation states that Ceph is currently not supported on the same nodes, so you would have to set up a separate Ceph cluster.
Could somebody explain why this can't be done, and whether it will be possible in the future? The reason I ask is that I have some repurposed servers with >10T of storage each and dual 4-core processors with loads of memory, and I would like to re-target them as a local cloud, preferably using Proxmox and Ceph.

Thanks!

Proxmox 2.2 HA cluster failover to a remote cluster

Hi,

I was wondering whether Proxmox supports this feature: is there a way to set up two clusters with 3 nodes each in two different locations, so that if the cluster at location A fails, location B starts the containers, using the same shared disk space that is available to both locations?

Thanks,

Proxmox VE 2.2 upgrade to Wheezy

Hi

In the future, can I upgrade a Debian Squeeze system to Debian 7 (Wheezy) and then keep running Proxmox on it? With the older version 1.9 this did not work.