Resize qcow2 - Reduce size of the disk

Hi,
is it possible to reduce the size of a KVM qcow2 virtio disk without doing a Windows clone, backup and restore?
Is there any other quick solution?
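
What I mean is something like this (just a rough sketch; I know the partition inside Windows would have to be shrunk first, and I am not sure whether the qemu-img version shipped with Proxmox already supports shrinking):

Code:

# guest powered off; NTFS partition inside Windows already shrunk
# reduce the virtual disk size (recent qemu-img versions need --shrink)
qemu-img resize --shrink vm-100-disk-1.qcow2 40G

# alternatively, only reclaim unused space without changing the virtual size
qemu-img convert -O qcow2 vm-100-disk-1.qcow2 vm-100-disk-1-compact.qcow2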

Regards,

host down

OK, so I upgraded to Proxmox 4.0 yesterday; the upgrade went smoothly on the host.

Converting the OpenVZ guest, not so good.

Followed the guide - https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC

Everything looked good up to the networking part.

The web interface did not offer a NAT configuration (the guest was using 10.10.10.10 as an OpenVZ guest; as an LXC container on a bridge with the IP 10.10.10.10 it had no network access).
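
For reference, the sort of NAT setup the old OpenVZ guest relied on would look roughly like this on the host (a sketch from memory; the bridge and NIC names are assumptions):

Code:

# /etc/network/interfaces -- NAT bridge for the 10.10.10.0/24 guests
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE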

Since that didn't work, I configured my Proxmox 4.0 LXC guest from the web interface to use a spare public IP address via a bridge ...

Host ip 64.251.17.23

LXC guest ip 64.251.17.25

This crashed the host: I cannot ping the host or the guest, cannot access the web interface, cannot ssh into the guest.

The guest is (was) a working OpenVZ guest converted to LXC.

Code:

    ping banshee -c2
    PING banshee (64.251.17.23) 56(84) bytes of data.

    --- banshee ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 999ms

Seems to be a bit of a bug.

The host is colocated, so I do not have physical access; I will have to have it shipped back for maintenance.

I was sort of hoping you had LXC figured out as well as you had OpenVZ, but it seems LXC is not ready for prime time =(

VM Restore error with qmrestore

Hi all,

I tried to migrate 3 VMs from a Proxmox 3.1 to a 3.4 machine (both ISO installs from proxmox.com). So I scp'd the dumps to the new machine and got no error restoring them like this:

Code:

qmrestore vm-100-disk-1.raw 100 -storage local -unique
The VMID is unused and the storage is "local" (different from the old machine). vm-100-disk-1.raw is in place. All VMs appear on the web front end, but starting them throws an error like this:

Quote:

kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=none,id=drive-sata0,format=raw,aio=native,cache=none,detect-zeroes=on: file system may not support O_DIRECT
kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=none,id=drive-sata0,format=raw,aio=native,cache=none,detect-zeroes=on: could not open disk image /var/lib/vz/images/100/vm-100-disk-1.raw: Could not open '/var/lib/vz/images/100/vm-100-disk-1.raw': Invalid argument
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=f9004b01-431a-4a10-97a7-fdd76bcba60c' -name youtrack.intern.crea.de -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -cpu kvm64,+lahf_lm,+x2apic,+sep -m 1024 -k de -cpuunits 1000 -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:b8e0ac7248a' -drive 'if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7 ' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=none,id=drive-sata0,format=raw,aio=native,cache=none,detect-zeroes=on' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=3E:09:5A:E2:F4:A9,netdev=net0,bus=pci.0 ,addr=0x12,id=net0,bootindex=300'' failed: exit code 1


My only guess is that the "-unique" option causes trouble, but the console doesn't even start, so I can't tell.
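
The other thing I am tempted to try is moving the disk away from cache=none, since that is what asks for O_DIRECT. A sketch of what I mean (not tested yet; adjust the storage/volume name to your setup):

Code:

qm set 100 --sata0 local:100/vm-100-disk-1.raw,cache=writeback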

As the old machine has damaged disks, I only have the backup dumps.

Thanks a lot for your help!

Mike

3.4 Ceph snapshots hangs VM

Hi,

Firstly thanks for an excellent product!

I have a 3-node cluster that utilises Ceph. Up until recently (say a month ago) snapshots had been working well. Now, however, each time I perform a snapshot the VM hangs. To be more precise, it appears to me that the HDD gets ejected from the VM's point of view.

To recover, a forced stop and start is needed. I should note that the snapshots themselves are valid.

An upgrade to version 4.0 is not an option just at the moment, but is desired later.

I am also using Ceph's own wheezy Hammer repository. Could this be the issue?

Some PVE version details:

Code:

#pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 3.10.0-12-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-3.10.0-12-pve: 3.10.0-37
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-34
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-13
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Has anyone else experienced similar behaviour? Nothing seems to appear in any of the host node logs.

Cheers,
Brad.

Upgrade from 3.x to 4.0 >>> Saving/Restoring for the upgrade

I would like to upgrade from 3.3-1 to 4. I have read the instructions in the Proxmox wiki and it seems straightforward, but I have a couple of elementary questions about doing this via a new installation. Currently, I only have one VM (that I am concerned about) running. It is a Sophos UTM and it runs on an HDD that is passed through to the VM. The Proxmox VE ISO was installed on an SSD and is the only thing currently running on that SSD (except Rockstor, which is not completely set up and I don't mind if I have to reinstall it). This is a personal setup, so downtime is only an inconvenience.

My questions are:
  1. "Save all files from /etc/pve/... on a save place" >>> what does this mean? I have an idea but am not sure. And secondly, where do I save it to?
  2. "Restore /etc/pve/storage.cfg" >>> restore it from where?
  3. Is there a way to get back to a previous working state if I poopoo the upgrade?Thanks for the help.
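
To make question 1 more concrete, what I imagine is something along these lines (a sketch; the destination host is made up):

Code:

# archive the cluster configuration (plus the network config, to be safe)
tar czf /root/pve-etc-backup.tar.gz /etc/pve /etc/network/interfaces

# copy it off the box, e.g. to another machine or a USB stick
scp /root/pve-etc-backup.tar.gz someuser@otherbox:/backups/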

user for live migration

Greetings,

I've successfully set up a 3-node cluster on online.net using a tinc tunnel.
I was able to change the default SSH port by editing sshd_config and ssh_config.

However, I would like to know whether it is possible to use another user (rather than root) to perform live migrations, because I would like to disable SSH access for the root user to follow security best practices.

Is this possible?


Thanks in Advance
embb

Is the enterprise support still active?

I am asking because I sent a request asking for some clarifications prior to buying the service, and it's been 3 days with no answer yet.

Ceph (incremental) backup

This is more a Ceph question than a Proxmox question, I guess.
I want to back up the images in my Ceph pool for offsite backup, preferably incrementally. How would one do this?

Can I take a snapshot like

Code:

rbd snap create vm-105-disk-1@Initial
then maybe for each day

Code:

rbd snap create vm-105-disk-1@<date today>
Transfer the initial export first, and then sync every day with

Code:

rbd export-diff --from-snap daybefore vm-105-disk-1@<date today> vm-105-disk-1-<date yesterday>-to-<datetoday>.diff
And transfer only the diff files.

But when can you delete the initial snapshot and the 'older' diff files? (Otherwise the snapshots will take up a lot of space on the cluster.)
And afterwards, when can you use merge-diff to create a 'good' file on the other side?

Would anyone with good bash skills like to script the above (if it is at all possible)?
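
A rough sketch of the daily job I have in mind (untested; the pool name, image name and destination path are assumptions):

Code:

#!/bin/bash
# incremental offsite backup of one rbd image via snapshots + export-diff
POOL=rbd
IMAGE=vm-105-disk-1
DEST=/mnt/offsite-backup
TODAY=$(date +%Y-%m-%d)
YESTERDAY=$(date -d yesterday +%Y-%m-%d)

rbd snap create "$POOL/$IMAGE@$TODAY"

if [ ! -e "$DEST/$IMAGE-initial.img" ]; then
    # first run: full export of today's snapshot
    rbd export "$POOL/$IMAGE@$TODAY" "$DEST/$IMAGE-initial.img"
else
    # later runs: only the delta since yesterday's snapshot
    rbd export-diff --from-snap "$YESTERDAY" \
        "$POOL/$IMAGE@$TODAY" "$DEST/$IMAGE-$YESTERDAY-to-$TODAY.diff"
    # yesterday's snapshot is no longer needed once its diff is exported
    rbd snap rm "$POOL/$IMAGE@$YESTERDAY"
fi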

Issues in cluster on Proxmox v4

Hi!

In our labs we have had 5 Proxmox nodes since this summer. On version 3.5 everything ran fine (no problems of any kind).

This last month we have been migrating the cluster to Proxmox v4, and issues have appeared :(

Suddently "Permission denied - invalid ticket 401" appears while browsing the UI, and throws me back to Login.

All hosts resolve mutually through /etc/hosts.

How can I debug this issue more deeply?
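
For the record, this is what I was planning to look at next (assuming the standard service names on PVE 4):

Code:

# services behind the web UI and the cluster filesystem
systemctl status pve-cluster pvedaemon pveproxy

# recent log entries from the proxy/daemon
journalctl -u pveproxy -u pvedaemon --since today

# cluster membership as pmxcfs sees it
cat /etc/pve/.members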

The cluster appears to be OK, and time is synced on all hosts (I checked).

Code:

Membership information
----------------------
    Nodeid      Votes Name
        3          1 eimtvm0
        5          1 eimtvm1 (local)
        4          1 eimtvm2
        6          1 eimtvm3
        2          1 eimtvm4
        1          1 eimtvm5

Code:

root@eimtvm1:~# pvecm status
Quorum information
------------------
Date:            Fri Nov 27 17:21:28 2015
Quorum provider:  corosync_votequorum
Nodes:            6
Node ID:          0x00000005
Ring ID:          616
Quorate:          Yes

Votequorum information
----------------------
Expected votes:  6
Highest expected: 6
Total votes:      6
Quorum:          4 
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 172.26.1.60
0x00000005          1 172.26.1.61 (local)
0x00000004          1 172.26.1.62
0x00000006          1 172.26.1.63
0x00000002          1 172.26.1.64
0x00000001          1 172.26.1.65


Thanks!!

Problems configuring VM builtin firewall with NAT + Port Forwarding + Hairpinning

I have a server with Proxmox 4.0-57 with the built-in firewall activated for the datacenter.

I have a single public IP address (e.g. 1.2.3.4); my CTs/VMs have IPs in the subnet 10.10.10.0/24 and are connected to the internet using NAT:

Code:

## /etc/network/interfaces

iface lo inet loopback

auto eth0
iface eth0 inet static
        address 1.2.3.4
        netmask 255.255.0.0
        gateway 1.2.3.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up  iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE

The VMs/CTs (100 and 110 for the following example) expose several services (let's say SSH and HTTPS).
Services on some VMs (such as VM110) should only be visible from a given public subnet (e.g. 1.2.0.0/16).
Since the ports to be published may collide, port forwarding is done in the following way:

Code:

VM100:22  <--> 1.2.3.4:10022
VM100:443 <--> 1.2.3.4:10443
VM110:22  <--> 1.2.3.4:11022

That is:

Code:

## More from /etc/network/interfaces

# 100
        post-up  iptables -t nat -A PREROUTING -p tcp -m tcp --dport 10022 -j DNAT --to-destination 10.10.10.100:22
        post-down iptables -t nat -D PREROUTING -p tcp -m tcp --dport 10022 -j DNAT --to-destination 10.10.10.100:22
        post-up  iptables -t nat -A PREROUTING -p tcp -m tcp --dport 10443 -j DNAT --to-destination 10.10.10.100:443
        post-down iptables -t nat -D PREROUTING -p tcp -m tcp --dport 10443 -j DNAT --to-destination 10.10.10.100:443
# 110
        post-up  iptables -t nat -A PREROUTING -s 1.2.0.0/16 -p tcp -m tcp --dport 11022 -j DNAT --to-destination 10.10.10.110:22
        post-down iptables -t nat -D PREROUTING -s 1.2.0.0/16 -p tcp -m tcp --dport 11022 -j DNAT --to-destination 10.10.10.110:22

Up to this point, everything works flawlessly.

Now, the tricky part:

I'd like to move part of this config to the GUI (such as the allowed IP ranges for SSH on VM110). This should be possible by removing the "-s 1.2.0.0/16" argument from the VM110 rules and configuring the VM firewall accordingly.
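
For reference, the per-VM rule I set through the GUI should end up in the VM firewall file roughly like this (a sketch of my own setup, written from memory):

Code:

## /etc/pve/firewall/110.fw

[OPTIONS]
enable: 1

[RULES]
IN ACCEPT -source 1.2.0.0/16 -p tcp -dport 22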

I have been able to make this proposed configuration work, activating the Proxmox built-in firewall at the CT/VM level together with the previous port-forwarding config (with the "-s" argument removed), by using the following rules:

Code:

## More from /etc/network/interfaces

# Allow NAT working with the built-in firewall
        post-up  iptables -t raw -I PREROUTING  -i fwbr+ -j CT --zone 1
        post-down iptables -t raw -D PREROUTING  -i fwbr+ -j CT --zone 1

Additionally, with the CT/VM built-in firewall disabled, I have been able to configure hairpinning (connecting between CTs/VMs on the internal network using their public IP:PORT) by using the following rules:

Code:

## More from /etc/network/interfaces

# 100 Hairpinning
        post-up  iptables -t nat -A POSTROUTING -d 10.10.10.100 -p tcp -m multiport --dports 22,443 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -d 10.10.10.100 -p tcp -m multiport --dports 22,443 -j MASQUERADE
# 110 Hairpinning
        post-up  iptables -t nat -A POSTROUTING -d 10.10.10.110 -p tcp -m multiport --dports 22 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -d 10.10.10.110 -p tcp -m multiport --dports 22 -j MASQUERADE

However, after several days of trying, I have been completely unable to make both things work together (CT/VM firewall + hairpinning), and I have not been able to work out the network architecture with all the additional virtual NICs introduced by the firewall (fwbr101i0, fwln101i0, fwpr101i0, etc.).

Obviously, disabling the firewall, enabling the previous hairpinning rules and controlling the traffic to the CTs/VMs with rules in the /etc/network/interfaces file is an option, but is there a way to make hairpinning work together with the built-in firewall?

Any ideas?

Thanks in advance.

VM boot failure: RAM balloon driver can't get pages

When I start my VMs (Ubuntu 15.04), I often, if not usually, get this error message when cold booting:
Code:

virtio_balloon virtio0: Out of puff! Can't get 1 pages
The VM seems to settle at 94 MB of 2 GB used RAM when it hangs there.
I don't seem to be able to do anything with the VM through the web interface console, but if I connect via a "serial cable" over a socket from the host, I get into the initramfs console; the dmesg output is at the end of this post.

If I then reset the VM (not shutting it down completely), it will boot up just fine.

Do I need to do something to the host or the VM so that it gets more memory during boot? I currently have the VM set to a 256 MB RAM minimum and a 2 GB maximum. I have also tried just giving the VM a fixed 2 GB, but the problem was the same.
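
One thing I am considering as a workaround is disabling the balloon device entirely, so the guest sees the full 2 GB from the start (a sketch; 101 stands in for my VMID):

Code:

qm set 101 -balloon 0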

Output of dmesg after a failed boot (shortened to fit into the post; I can't post links yet, but the full log is on pastebin, id: bH6DX2eR):

Code:

[    7.928681] 262014 pages RAM
[    7.929491] 0 pages HighMem/MovableOnly
[    7.930337] 227731 pages reserved
[    7.931158] 0 pages cma reserved
[    7.931963] 0 pages hwpoisoned
[    7.932780] virtio_balloon virtio0: Out of puff! Can't get 1 pages
[    8.136276] vballoon: page allocation failure: order:0, mode:0x310da
[    8.137323] CPU: 0 PID: 46 Comm: vballoon Not tainted 3.19.0-33-generic #38-Ubuntu
[    8.138344] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
[    8.140496]  0000000000000000 ffff880038ea7ba8 ffffffff817c4f9f 0000000000008ffa
[    8.141598]  00000000000310da ffff880038ea7c38 ffffffff81180868 ffff880038ea7c38
[    8.142670]  ffffffff8119157a 0000000000000000 0000000000000020 00000000000310da
[    8.143743] Call Trace:
[    8.144605]  [<ffffffff817c4f9f>] dump_stack+0x45/0x57
[    8.145552]  [<ffffffff81180868>] warn_alloc_failed+0xd8/0x130
[    8.146534]  [<ffffffff8119157a>] ? try_to_free_pages+0xba/0x140
[    8.147486]  [<ffffffff811849b2>] __alloc_pages_nodemask+0x842/0xba0
[    8.148437]  [<ffffffff811cb3c1>] alloc_pages_current+0x91/0x110
[    8.149358]  [<ffffffff811f16e1>] balloon_page_enqueue+0x21/0xa0
[    8.150287]  [<ffffffff81487096>] fill_balloon+0x96/0x110
[    8.151175]  [<ffffffff814878fe>] balloon+0x18e/0x220
[    8.152019]  [<ffffffff810b75f0>] ? __wake_up_sync+0x20/0x20
[    8.152918]  [<ffffffff81487770>] ? virtballoon_probe+0x1c0/0x1c0
[    8.153768]  [<ffffffff81095959>] kthread+0xc9/0xe0
[    8.154548]  [<ffffffff81095890>] ? kthread_create_on_node+0x1c0/0x1c0
[    8.155393]  [<ffffffff817cc018>] ret_from_fork+0x58/0x90
[    8.156198]  [<ffffffff81095890>] ? kthread_create_on_node+0x1c0/0x1c0
[    8.157019] Mem-Info:
[    8.157635] Node 0 DMA per-cpu:
[    8.158278] CPU    0: hi:    0, btch:  1 usd:  0
[    8.158971] Node 0 DMA32 per-cpu:
[    8.159592] CPU    0: hi:  186, btch:  31 usd: 142
[    8.160300] active_anon:222 inactive_anon:14 isolated_anon:0
[    8.160300]  active_file:0 inactive_file:0 isolated_file:0
[    8.160300]  unevictable:14738 dirty:0 writeback:0 unstable:0
[    8.160300]  free:10188 slab_reclaimable:1702 slab_unreclaimable:1785
[    8.160300]  mapped:588 shmem:41 pagetables:32 bounce:0
[    8.160300]  free_cma:0
[    8.164577] Node 0 DMA free:4360kB min:464kB low:580kB high:696kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:1072kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:7664kB mlocked:0kB dirty:0kB writeback:0kB mapped:100kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:48kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:1072 all_unreclaimable? yes
[    8.169150] lowmem_reserve[]: 0 975 975 975
[    8.169939] Node 0 DMA32 free:36392kB min:36468kB low:45584kB high:54700kB active_anon:888kB inactive_anon:56kB active_file:0kB inactive_file:0kB unevictable:57880kB isolated(anon):0kB isolated(file):0kB present:1032064kB managed:129468kB mlocked:0kB dirty:0kB writeback:0kB mapped:2252kB shmem:164kB slab_reclaimable:6800kB slab_unreclaimable:7092kB kernel_stack:784kB pagetables:128kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[    8.174726] lowmem_reserve[]: 0 0 0 0
[    8.175535] Node 0 DMA: 2*4kB (UM) 2*8kB (U) 1*16kB (U) 3*32kB (UM) 2*64kB (M) 2*128kB (M) 3*256kB (UM) 2*512kB (M) 0*1024kB 1*2048kB (R) 0*4096kB = 4360kB
[    8.177605] Node 0 DMA32: 52*4kB (UEM) 41*8kB (UE) 33*16kB (UEM) 14*32kB (UE) 9*64kB (UE) 4*128kB (EM) 4*256kB (UEM) 2*512kB (UE) 3*1024kB (UEM) 2*2048kB (UE) 6*4096kB (MR) = 36392kB
[    8.179863] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[    8.180938] 14779 total pagecache pages
[    8.181800] 0 pages in swap cache
[    8.182636] Swap cache stats: add 0, delete 0, find 0/0
[    8.183561] Free swap  = 0kB
[    8.184387] Total swap = 0kB
[    8.185206] 262014 pages RAM
[    8.186024] 0 pages HighMem/MovableOnly
[    8.186871] 227731 pages reserved
[    8.187689] 0 pages cma reserved
[    8.188504] 0 pages hwpoisoned
[    8.189303] virtio_balloon virtio0: Out of puff! Can't get 1 pages
[    8.392273] vballoon: page allocation failure: order:0, mode:0x310da
[    8.393361] CPU: 0 PID: 46 Comm: vballoon Not tainted 3.19.0-33-generic #38-Ubuntu
[    8.394456] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
[    8.396365]  0000000000000000 ffff880038ea7ba8 ffffffff817c4f9f 000000000000945c
[    8.397468]  00000000000310da ffff880038ea7c38 ffffffff81180868 ffff880038ea7c28
[    8.398561]  ffffffff817c71ec ffff880038eb93a0 0000000000014240 ffff880038ea7fd8
[    8.399650] Call Trace:
[    8.400497]  [<ffffffff817c4f9f>] dump_stack+0x45/0x57
[    8.401434]  [<ffffffff81180868>] warn_alloc_failed+0xd8/0x130
[    8.402383]  [<ffffffff817c71ec>] ? __schedule+0x39c/0x900
[    8.403300]  [<ffffffff811849b2>] __alloc_pages_nodemask+0x842/0xba0
[    8.404250]  [<ffffffff811cb3c1>] alloc_pages_current+0x91/0x110
[    8.405166]  [<ffffffff811f16e1>] balloon_page_enqueue+0x21/0xa0
[    8.406073]  [<ffffffff81487096>] fill_balloon+0x96/0x110
[    8.406941]  [<ffffffff814878fe>] balloon+0x18e/0x220
[    8.407773]  [<ffffffff810b75f0>] ? __wake_up_sync+0x20/0x20
[    8.408633]  [<ffffffff81487770>] ? virtballoon_probe+0x1c0/0x1c0
[    8.409478]  [<ffffffff81095959>] kthread+0xc9/0xe0
[    8.410253]  [<ffffffff81095890>] ? kthread_create_on_node+0x1c0/0x1c0
[    8.411093]  [<ffffffff817cc018>] ret_from_fork+0x58/0x90
[    8.411881]  [<ffffffff81095890>] ? kthread_create_on_node+0x1c0/0x1c0
[    8.412714] Mem-Info:
[    8.413328] Node 0 DMA per-cpu:
[    8.413969] CPU    0: hi:    0, btch:  1 usd:  0
[    8.414667] Node 0 DMA32 per-cpu:
[    8.415285] CPU    0: hi:  186, btch:  31 usd: 142
[    8.415964] active_anon:222 inactive_anon:14 isolated_anon:0
[    8.415964]  active_file:0 inactive_file:0 isolated_file:0
[    8.415964]  unevictable:14738 dirty:0 writeback:0 unstable:0
[    8.415964]  free:10188 slab_reclaimable:1702 slab_unreclaimable:1785
[    8.415964]  mapped:588 shmem:41 pagetables:32 bounce:0
[    8.415964]  free_cma:0
[    8.420201] Node 0 DMA free:4360kB min:464kB low:580kB high:696kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:1072kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:7664kB mlocked:0kB dirty:0kB writeback:0kB mapped:100kB shmem:0kB slab_reclaimable:8kB slab_unreclaimable:48kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:1072 all_unreclaimable? yes
[    8.424694] lowmem_reserve[]: 0 975 975 975
[    8.425475] Node 0 DMA32 free:36392kB min:36468kB low:45584kB high:54700kB active_anon:888kB inactive_anon:56kB active_file:0kB inactive_file:0kB unevictable:57880kB isolated(anon):0kB isolated(file):0kB present:1032064kB managed:129468kB mlocked:0kB dirty:0kB writeback:0kB mapped:2252kB shmem:164kB slab_reclaimable:6800kB slab_unreclaimable:7092kB kernel_stack:784kB pagetables:128kB unstable:0kB bounce:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
[    8.430218] lowmem_reserve[]: 0 0 0 0
[    8.431028] Node 0 DMA: 2*4kB (UM) 2*8kB (U) 1*16kB (U) 3*32kB (UM) 2*64kB (M) 2*128kB (M) 3*256kB (UM) 2*512kB (M) 0*1024kB 1*2048kB (R) 0*4096kB = 4360kB
[    8.433100] Node 0 DMA32: 52*4kB (UEM) 41*8kB (UE) 33*16kB (UEM) 14*32kB (UE) 9*64kB (UE) 4*128kB (EM) 4*256kB (UEM) 2*512kB (UE) 3*1024kB (UEM) 2*2048kB (UE) 6*4096kB (MR) = 36392kB
[    8.435355] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[    8.436427] 14779 total pagecache pages
[    8.437301] 0 pages in swap cache
[    8.438140] Swap cache stats: add 0, delete 0, find 0/0
[    8.439085] Free swap  = 0kB
[    8.439910] Total swap = 0kB
[    8.440733] 262014 pages RAM
[    8.441542] 0 pages HighMem/MovableOnly
[    8.442397] 227731 pages reserved
[    8.443216] 0 pages cma reserved
[    8.444026] 0 pages hwpoisoned
[    8.444828] virtio_balloon virtio0: Out of puff! Can't get 1 pages
[  47.780892] random: nonblocking pool is initialized

noVNC doesn't work after reboot

I had to shut down my server and start it again due to some power cable management... after this I can't connect to the VNC console... before that everything was OK...

Code:

no connection : Connection timed out
TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 115 2>/dev/null'' failed: exit code 1

root@proxmox:~# telnet localhost 5900
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused

Can somebody help me? I have Firefox... and Google Chrome; neither of them works...

It is a direct connection, no firewall or anything else in between...
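
The first thing I plan to try is restarting the services behind the console proxy, in case they came up in a bad state after the unclean shutdown (just a sketch of that):

Code:

service pvedaemon restart
service pveproxy restart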

Assign public IPs automatically to VMs

Hello!

We have a single IP for the Proxmox server that is bound to vmbr0 like this:

Code:

auto lo
iface lo inet loopback
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
address 172.168.10.2
netmask 255.255.255.0
gateway 172.168.10.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

Additionally, we have an IP range like this:

Code:

address 172.165.10.82-172.165.10.94
netmask 255.255.255.0
gateway 172.165.10.1

How do I have to configure this second network so that addresses from it are automatically assigned to a new VM via DHCP?
Does Proxmox support this through DHCP?
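
If Proxmox itself does not hand out addresses to bridged VMs, I assume I would have to run a DHCP server on the host (or somewhere on that bridge) myself. A sketch of an isc-dhcp-server subnet definition for the range above (the interface binding and DNS server are assumptions):

Code:

## /etc/dhcp/dhcpd.conf -- dhcpd listening on vmbr0

subnet 172.165.10.0 netmask 255.255.255.0 {
    range 172.165.10.82 172.165.10.94;
    option routers 172.165.10.1;
    option domain-name-servers 8.8.8.8;
}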

Thank you for your help!

Sven

Cluster filesystem

Hello!

I'm new to clustering.
I want to set up my second node, and I have one iSCSI storage.
I want to ask the community whether the following is possible.
I currently use the iSCSI volume as LVM, and I formatted it with ext4 because of thin provisioning.
Is it possible to use the same iSCSI volume on two or more nodes with thin provisioning (qcow2)?

If yes, which file system do I need to use?

Sorry, I'm really new to this.

All firewall rules

I am running Proxmox 4 and using KVM, LXC, and Ceph on 8 servers. Is there an up-to-date list of the incoming and outgoing ports I need open on all of these systems for all the services? I haven't found anything.

Backup proxmox configuration and restore

Hi there,

I have a Proxmox server installed on a hard drive. I want to make a backup of my server's configuration, then install Proxmox on a different device and load this configuration into the new Proxmox installation. As a result, the current Proxmox would be replaced by the new one.
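
What I had in mind, roughly (a sketch; I do not know whether copying these files onto a fresh install is actually supported, so treat the file list as an assumption):

Code:

# on the old install: grab the configuration
tar czf pve-config.tar.gz /etc/pve /etc/network/interfaces

# on the new install: unpack somewhere and copy back selectively,
# e.g. the storage definitions and the guest configs
tar xzf pve-config.tar.gz -C /tmp
cp /tmp/etc/pve/storage.cfg /etc/pve/
cp /tmp/etc/pve/qemu-server/*.conf /etc/pve/qemu-server/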

Is this possible?

Thanks.

read only lxc mount points

hello,

is it possible to mount a read-only directory into an LXC container?

As it currently stands, I can mount a directory from the host into the container by adding this line to the container's config:

mp0: /mnt/hostlocation,mp=/mnt/lxclocation

However, this gives both read and write access. Is it possible to do this read-only? Thanks.
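
The only workaround I can think of so far is a raw LXC bind mount with the ro flag instead of the mp0 line (a sketch using the paths above; I have not verified this on Proxmox):

Code:

# in /etc/pve/lxc/<vmid>.conf
lxc.mount.entry: /mnt/hostlocation mnt/lxclocation none bind,ro,create=dir 0 0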

Change FQDN of ProxMox 4 Cluster

Hello, I installed Proxmox 4 and was wondering whether it is possible to change the FQDN that I entered during installation, and whether there is a tutorial for this process online. Thank you.

proxmox memory usage / proxmox front end on a different box?

I installed Debian Jessie, then did a Proxmox install on top of that following the instructions here: pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie (I am not allowed to post links).

After a default install, with nothing configured, nothing installed, no change whatsoever, my memory usage is hovering at over 1GB, as seen in Server View -> host machine -> Summary.
Load Average is 0.00, 0.03, 0.05
CPU usage is 0%
IO delay is 0%
Swap usage is at 0
However, RAM usage on this page is showing up as Used: 956 MB.

For what it's worth, my PVE manager version is pve-manager/4.0-57/cc7c2b53
and kernel version is 4.2.3-2-pve #1 SMP


Looking at top, I see that the bulk of the memory is being used by pvedaemon and pveproxy worker processes. There are multiple instances of these, and they take up most of the 1 GB of RAM.

Is there any way I can lower their RAM usage?

Additionally, I have another Linux box, which is basically an old laptop at home. Can I have the "Proxmox front end" on this box (it is an Atom and doesn't support virtualization) and manage my virtualization host from this laptop, without installing any "heavy" services on the virtualization host?

Or, is there any way I can "stop" the PVE manager from running when I don't need it (95% of the time) and fire it up on demand? I plan on installing some Linux containers, a couple of Linux VMs and a couple of Windows VMs, and all of them will have SSH access (Remote Desktop for Windows), so once I have the VMs set up, I won't need Proxmox VE to be running, just the VMs themselves. Is what I am imagining somehow possible?
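
What I am imagining is something along these lines (a sketch; I am assuming the guests keep running while the management services are stopped, which I have not confirmed):

Code:

# stop the management/web UI services when I don't need them
systemctl stop pveproxy spiceproxy pvedaemon pvestatd

# start them again when I want the web interface back
systemctl start pvestatd pvedaemon spiceproxy pveproxy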

Is it possible to create a cluster using x86 and x64 architecture?

Hello there,

There is not much to explain: is it possible to have a cluster with servers that are 32-bit and servers that are 64-bit?

At the moment I'm using both KVM and OpenVZ with Proxmox 3.4.

Thanks.