Channel: Proxmox Support Forum

Ceph hotplug problem in PVE 3.4 (Including cause and solution)

Hi,

I was testing the hotplug function of PVE 3.4 and found that it does not work with Ceph RBD storage.

When hotplugging a hard disk on Ceph storage I get the following error:


Parameter verification failed. (400)

virtio1: hotplug problem - adding drive failed: drive_add: string expected


Adding a hard disk on Ceph storage while the same VM is turned off works fine.

Hotplugging also works with local directory (ZFS using writeback cache!) and local directory (EXT4) storage.


Reading the following mailing list thread (although it has a somewhat different error message):

http://pve.proxmox.com/pipermail/pve...ry/008304.html

I suspected the same sub in the same file to be the problem: qemu_driveadd in /usr/share/perl5/PVE/QemuServer.pm.
My guess was that with Ceph the $drive string is not set correctly during hotplug (empty or malformed).

I reverted the patch (commit) and now it works! So while the commit fixes the handling of spaces, it breaks something else.
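
For anyone who wants to try the same workaround, a rough sketch of what I mean (not an official fix; the hunk to undo is the space-handling change from the commit mentioned above):

Code:

# edit the installed module and undo the change in sub qemu_driveadd
vi /usr/share/perl5/PVE/QemuServer.pm
# restart the PVE daemons so the modified module is reloaded
service pvedaemon restart
service pveproxy restart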

Kind regards,
Caspar

Where is the Bottleneck?

For a virtualized file server I pass through a SATA controller to a Nas4Free VM.
Among other things, it has a mirrored ZFS SSD pool (2 x Samsung 840 Pro).
Read performance from this pool inside the Nas4Free VM is very fast: ~1 GB/s.
I've shared a dataset of this pool via NFS and added it in Proxmox as NFS storage.
With a VIRTIO network card in this VM I get these iperf values:

Code:

nas4free: ~ # iperf -c 192.168.0.3
------------------------------------------------------------
Client connecting to 192.168.0.3, TCP port 5001
TCP window size:  257 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.240 port 13603 connected with 192.168.0.3 port 5001
[ ID] Interval      Transfer    Bandwidth
[  3]  0.0-10.0 sec  45.5 GBytes  39.0 Gbits/sec


nas4free: ~ # iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  256 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.240 port 5001 connected with 192.168.0.3 port 55841
[ ID] Interval      Transfer    Bandwidth
[  4]  0.0-10.0 sec  37.0 GBytes  31.7 Gbits/sec

Code:

root@proxmox:/alex# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.0.3 port 5001 connected with 192.168.0.240 port 13603
[ ID] Interval      Transfer    Bandwidth
[  4]  0.0-10.0 sec  45.5 GBytes  38.9 Gbits/sec
root@proxmox:/alex# iperf -c 192.168.0.240
------------------------------------------------------------
Client connecting to 192.168.0.240, TCP port 5001
TCP window size: 22.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.3 port 55841 connected with 192.168.0.240 port 5001
[ ID] Interval      Transfer    Bandwidth
[  3]  0.0-10.0 sec  37.0 GBytes  31.8 Gbits/sec

But the read performance from the host is very poor:

Code:

root@proxmox:/# dd if=/mnt/pve/nas4free-ssd/images/110/vm-110-disk-1.qcow2 of=/dev/null bs=2M count=2000
2000+0 records in
2000+0 records out
4194304000 bytes (4.2 GB) copied, 76.0466 s, 55.2 MB/s

When I use the "e1000" network card in the Nas4Free VM, iperf shows ~1.3 Gbits/sec and the dd result increases to ~130 MB/s.

I would have expected the speed with VIRTIO to be nearly as fast as inside the VM itself.
Where is the bottleneck?
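
For completeness, two checks that could narrow this down on the host (a sketch, using the same mount path as the dd test above):

Code:

# show the negotiated NFS mount options (rsize/wsize, protocol version)
nfsstat -m
# repeat the read test with direct IO to take the host page cache out of the picture
dd if=/mnt/pve/nas4free-ssd/images/110/vm-110-disk-1.qcow2 of=/dev/null bs=1M iflag=direct count=2000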

Ceph - how many replicas do you use?

Basically the title says it all: how many replicas do you use for your storage pools? I've been thinking 3 replicas for VMs where I really need to be confident of data durability in the face of hardware issues, and 2 replicas where I need speed but can accept less redundancy. I was wondering what other people have chosen, and why?
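
For reference, the per-pool settings I mean (a sketch; the pool name is just a placeholder):

Code:

# keep 3 copies, but keep serving IO as long as 2 of them are available
ceph osd pool set <pool> size 3
ceph osd pool set <pool> min_size 2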

Live migration does not work when SPICE console/client is open.

I noticed in PVE 3.4 that live migrating a VM fails when a SPICE client (in my case Remote Viewer on Ubuntu) is connected to the VM.

Live migration works if the console is closed or when using the NoVNC console; it only fails with the SPICE console open.

Here's the log including the error message:

Feb 27 16:55:56 starting migration of VM 103 to node 'pve-ceph-sn02' (192.168.0.13)
Feb 27 16:55:56 copying disk images
Feb 27 16:55:56 starting VM 103 on remote node 'pve-ceph-sn02'
Feb 27 16:55:57 starting ssh migration tunnel
Feb 27 16:55:58 starting online/live migration on localhost:60000
Feb 27 16:55:58 migrate_set_speed: 8589934592
Feb 27 16:55:58 migrate_set_downtime: 0.1
Feb 27 16:55:58 spice client_migrate_info
Feb 27 16:56:00 migration status: active (transferred 101428317, remaining 2358349824), total 4429852672)
Feb 27 16:56:02 migration status: active (transferred 209611197, remaining 2157207552), total 4429852672)
Feb 27 16:56:04 migration status: active (transferred 311697595, remaining 2015055872), total 4429852672)
Feb 27 16:56:06 migration status: active (transferred 413662539, remaining 1860911104), total 4429852672)
Feb 27 16:56:08 migration status: active (transferred 520977037, remaining 1716162560), total 4429852672)
Feb 27 16:56:10 migration status: active (transferred 623924528, remaining 159703040), total 4429852672)
Feb 27 16:56:12 migration status: active (transferred 729685545, remaining 126803968), total 4429852672)
Feb 27 16:56:14 migration status: active (transferred 837580363, remaining 17285120), total 4429852672)
Feb 27 16:56:14 migration status: active (transferred 856487611, remaining 52432896), total 4429852672)
Feb 27 16:56:15 migration status: active (transferred 872844316, remaining 33968128), total 4429852672)
Feb 27 16:56:15 migration status: active (transferred 891318061, remaining 16019456), total 4429852672)
Feb 27 16:56:15 migration status: active (transferred 906470294, remaining 10915840), total 4429852672)
Feb 27 16:56:16 migration speed: 227.56 MB/s - downtime 142 ms
Feb 27 16:56:16 migration status: completed
Feb 27 16:56:16 ERROR: VM 103 not running
Feb 27 16:56:16 ERROR: command '/usr/bin/ssh -o 'BatchMode=yes' root@192.168.0.13 qm resume 103 --skiplock' failed: exit code 2
Feb 27 16:56:16 Waiting for spice server migration
Feb 27 16:56:19 ERROR: migration finished with problems (duration 00:00:24)
TASK ERROR: migration problems

Accessing VM's NoVNC from another website

Hi all,
I'm working on a minimalistic service to let users create and manage their own OpenVZ VMs and access their VM terminal through my website.

My original plan was to simply embed the VM's NoVNC console URL in an iframe on my website, e.g.:

HTML Code:

<iframe src="https://<proxmox-server-iport>/?console=openvz&novnc=1&vmid=123&vmname=hostname&node=proxmox">...
However, this doesn't work because I need to pass a cookie containing the access ticket (which I can get via the API) to my Proxmox server in order to access that page. The problem is that I can't set cookies for my Proxmox server from my website server (they live on separate machines); that's a general rule with cookies.

Question:

Is there an API or page on my Proxmox server that I can call/create to set the access-ticket cookie? For example, I could create a PHP page on my Proxmox server that receives a POST request containing the ticket and sets it as a cookie. Is there already such a page somewhere? If not, can anyone point me to how I would go about creating one?

Alternatively (since I create a Proxmox user for every user on my website), is there a way to programmatically log in a Proxmox user via the API? That way I'm pretty sure the user would have the ticket correctly set to access the VM console URL.
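
For reference, this is roughly how I get the ticket via the API today (hostname and credentials are placeholders); as far as I understand, the returned "ticket" value is what would have to end up in the PVEAuthCookie cookie:

Code:

# returns JSON containing "ticket" and "CSRFPreventionToken"
curl -k -d "username=someuser@pve&password=secret" \
    https://<proxmox-server>:8006/api2/json/access/ticket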

If there is any other method I haven't thought of to set up VM console access from another website (one that is not the Proxmox server itself), please let me know.

Thank you!
Mic

3.4: All VM stopped under heavy Ceph reorg

I had two Ceph pools for RBD virtual disks: vm_images (boot disk images) + rbd_data (extra disk images).


Then, while adding pools for a RADOS gateway (.rgw.*), ceph health suddenly reported that my vm_images pool had too few PGs, so I ran:


ceph osd pool set vm_images pg_num <larger_number>
ceph osd pool set vm_images pgp_num <larger_number>


This kicked off a 20-minute rebalance with a lot of IO in the Ceph cluster. Eventually the cluster was fine again, but almost all my PVE VMs ended up in the stopped state. I'm wondering why - a watchdog thingy, maybe...
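
For the next reshuffle I'm considering throttling recovery so client IO stays alive (a sketch; these are standard Ceph OSD options, applied at runtime only):

Code:

# limit backfill/recovery work per OSD while the PGs are being reshuffled
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'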

/Steffen

PS! I admit my Ceph public and private networks are on the same physical 2-3 Gb/s LACP load-balanced network (some nodes with 2x1 Gb/s NICs, some with 3x1 Gb/s NICs), since my only other physical network is a slow 100 Mb/s public network.

Help! I'm in deep trouble, can't start containers after reboot

Hi All...

I'm in deep trouble here :-(

I would greatly appreciate any assistance!

I was having some trouble managing containers: I would get an error about a worker thread when I tried to use the web-based management interface. I was able to stop containers using vzctl at the command line. The containers were all operating okay, so I thought the issue was limited to the user interface. Foolishly, I rebooted the machine. It came back up and now I can't start any of my containers. These are critical!

If I try to use vzctl to start the containers, I get:

root@pmmaster:/# vzctl start 101
Container config file does not exist
root@pmmaster:/#

If I try to list the containers:

root@pmmaster:/# vzlist -a
Container(s) not found
root@pmmaster:/#
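
Is this where I should be looking? A sketch of what I was about to check, assuming the container configs normally live on the clustered /etc/pve filesystem:

Code:

# is the cluster filesystem mounted at all?
service pve-cluster status
# per-CT config files, e.g. 101.conf
ls /etc/pve/openvz/
# should contain symlinks pointing into /etc/pve/openvz/
ls -l /etc/vz/conf/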


Thanks very much!

Here is my version information:


root@pmmaster:/# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-95
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

Why Proxmox VE needs at least two NICs

Hi,

I'm a newcomer in Proxmox.

The first question I'm asking myself is: why does Proxmox VE require at least two NICs? Can someone please explain it to me in a little detail?

Many thanks.

proxmox doesn't like read-only nfs

I have read-only shared NFS storage for ISO images. I got this when trying to view its content:

mkdir /mnt/pve/bootiso/template: Read-only file system at /usr/share/perl5/PVE/Storage/Plugin.pm line 794 (500)
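
A workaround I'm considering (just a sketch, and it assumes I can create directories on the NFS server itself): pre-create the directory layout the storage plugin expects on the export, so Proxmox never has to mkdir on the read-only mount:

Code:

# run on the NFS server, inside the exported directory
mkdir -p template/iso template/cache images dump private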

iSCSI Reconnecting every 10 seconds to FreeNAS solution

Hi folks,

Proxmox seems to be connecting and reconnecting to my FreeNAS machine every 10 seconds:

Feb 28 10:26:11 nas ctld[57937]: 10.0.0.1: read: connection lost
Feb 28 10:26:11 nas ctld[2904]: child process 57937 terminated with exit status 1
Feb 28 10:26:21 nas ctld[57947]: 10.0.0.1: read: connection lost
Feb 28 10:26:21 nas ctld[2904]: child process 57947 terminated with exit status 1
Feb 28 10:26:31 nas ctld[57957]: 10.0.0.1: read: connection lost
Feb 28 10:26:31 nas ctld[2904]: child process 57957 terminated with exit status 1
Feb 28 10:26:41 nas ctld[57965]: 10.0.0.1: read: connection lost
Feb 28 10:26:41 nas ctld[2904]: child process 57965 terminated with exit status 1
Feb 28 10:26:51 nas ctld[57974]: 10.0.0.1: read: connection lost
Feb 28 10:26:51 nas ctld[2904]: child process 57974 terminated with exit status 1
Feb 28 10:27:01 nas ctld[57984]: 10.0.0.1: read: connection lost
Feb 28 10:27:01 nas ctld[2904]: child process 57984 terminated with exit status 1
Feb 28 10:27:11 nas ctld[57996]: 10.0.0.1: read: connection lost
Feb 28 10:27:11 nas ctld[2904]: child process 57996 terminated with exit status 1


I've opened up a ticket with the FreeNAS folks (https://bugs.freenas.org/issues/7891) and they seem to think the issue is on the Proxmox side.

Does anyone have any insight into what is actually happening?
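
My working assumption (unconfirmed) is that pvestatd polls every configured storage roughly every 10 seconds, and for iSCSI that polling involves iscsiadm calls, which would show up as short-lived connections on the FreeNAS side. A quick check on the Proxmox node (a sketch):

Code:

# list the active iSCSI sessions and their state
iscsiadm -m session -P 1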

Thanks.

- Derek

Proxmox 3.4 "Out of Range" at install

I'm currently running Proxmox 3.1 on my home server, but I decided it was time for a fresh install, so I downloaded the 3.4 ISO installer, and that's where my problem begins. The second I boot from the USB stick I get an "Out of Range" error on my monitor. I've tried this with two different monitors, one an Asus and the other an Acer, both running at 1080x1920. I used the Acer monitor when I installed 3.1 the first time and didn't have this issue, and there have been no hardware changes at all, so I can only assume something that changed in Proxmox is causing this.

I've looked around on this forum and searched Google, but I've not been able to find anything useful, unfortunately. If anyone has any idea what could be causing this, I'd really appreciate it.
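
One thing I'm planning to try (purely an assumption on my part; I haven't verified it against the 3.4 installer): appending kernel parameters at the installer's boot prompt to force a basic video mode:

Code:

# extra parameters appended to the installer's kernel command line
nomodeset vga=normal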

VM firewall doesn't work?

Using PVE 3.4, I'm trying to block all traffic to/from the IP addresses on a specific VM's eth0 (it has a number of IPs on that NIC), apart from one IP address that needs to be allowed (for all ports).

I'm putting the following in /etc/pve/firewall/<vmid>.fw

Code:

[OPTIONS]

enable: 1
policy_out: DROP
policy_in: DROP

[RULES]

OUT ACCEPT -i net0 -source 92.x.x.25
IN ACCEPT -i net0 -dest 92.x.x.25

But it has no effect.

The node's firewall is off, if that makes any difference.

Does anyone have any ideas?

EDIT: I've set log levels for in and out to debug - but there's nothing in the logs. That can't be right, can it?

EDIT: OK, so I was getting confused between a "cluster" and a "node" - each has separate firewall settings, AND you need to enable the firewall on the NIC at the VM level. Checkbox city!

So now the firewall is working (and I see debug messages), but I still can't work out how to limit traffic to one IP on the guest and block all others.
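
For completeness, these are all the pieces that apparently have to be in place before the rules in <vmid>.fw do anything (a sketch of my setup; the MAC address is just an example):

Code:

# /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1

# /etc/pve/qemu-server/<vmid>.conf - the firewall has to be enabled on the NIC itself
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,firewall=1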

[Resolved] vzdump backup didn't run

Hi,

Since February 1st, the backups haven't run.

It's correctly configured in the web interface



I didn't receive an email with an error; the last email I received was for a successful backup.

How can I check the schedule?
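
These are the places I was going to check (a sketch, assuming a standard PVE 3.x install):

Code:

# the backup jobs defined in the web interface end up here
cat /etc/pve/vzdump.cron
# make sure cron itself is running and look for vzdump entries in the log
service cron status
grep vzdump /var/log/syslog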

Best regards

Error on restore

Server 1 > 3.2 backup KVM VM
Note: wget backup to server 2


Server 2 > 3.4 restore KVM
chmod 777 file in dump


Error
restore vma archive: zcat /var/lib/vz/dump/vzdump-qemu-251.vma.gz|vma extract -v -r /var/tmp/vzdumptmp212811.fifo - /var/tmp/vzdumptmp212811
TASK ERROR: command 'zcat /var/lib/vz/dump/vzdump-qemu-251.vma.gz|vma extract -v -r /var/tmp/vzdumptmp212811.fifo - /var/tmp/vzdumptmp212811' failed: got timeout


Server 2
root@cid290:/var/lib/vz/dump# pveversion -v
proxmox-ve-2.6.32: 3.3-147 (running kernel: 2.6.32-37-pve)
pve-manager: 3.4-1 (running version: 3.4-1/3f2d890e)
pve-kernel-2.6.32-37-pve: 2.6.32-147
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-16
qemu-server: 3.3-20
pve-firmware: 1.1-3
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-31
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-12
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
root@cid290:/var/lib/vz/dump#
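
A workaround I'm considering (just a sketch, not a confirmed fix): first verify the transferred archive is intact, then run the restore from the command line to see the full output:

Code:

# check that the wget-transferred archive decompresses cleanly
gzip -t /var/lib/vz/dump/vzdump-qemu-251.vma.gz
# restore to VM ID 251 (the target storage name is just an example)
qmrestore /var/lib/vz/dump/vzdump-qemu-251.vma.gz 251 --storage local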

Proxmox 3.1 crashed during vzdump to NFS mount system

This is very unusual, so I am posting it to the forum to see if others know why. The system that ran into this issue is running:

proxmox-ve-2.6.32: 3.1-114 (running kernel: 2.6.32-26-pve)
pve-manager: 3.1-21 (running version: 3.1-21/93bf03d4)
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-8
qemu-server: 3.1-8
pve-firmware: 1.0-23
libpve-common-perl: 3.0-8
libpve-access-control: 3.0-7
libpve-storage-perl: 3.0-17
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.1-1



The system became non-responsive and I had to cold reboot it. Inspecting syslog, I found the following errors around the time of the incident:

Feb 28 04:15:13 vpshost36 kernel: device-mapper: snapshots: Invalidating snapshot: Unable to allocate exception.
Feb 28 04:15:14 vpshost36 kernel: EXT3-fs error (device dm-3): ext3_get_inode_loc: unable to read inode block - inode=17235982, block=68943874
Feb 28 04:15:14 vpshost36 kernel: __ratelimit: 108 callbacks suppressed
Feb 28 04:15:14 vpshost36 kernel: Buffer I/O error on device dm-3, logical block 0
Feb 28 04:15:14 vpshost36 kernel: lost page write due to I/O error on dm-3
Feb 28 04:15:14 vpshost36 kernel: EXT3-fs (dm-3): I/O error while writing superblock
Feb 28 04:15:14 vpshost36 kernel: EXT3-fs (dm-3): error in ext3_reserve_inode_write: IO failure
Feb 28 04:15:14 vpshost36 kernel: Buffer I/O error on device dm-3, logical block 0
Feb 28 04:15:14 vpshost36 kernel: lost page write due to I/O error on dm-3
Feb 28 04:15:14 vpshost36 kernel: EXT3-fs (dm-3): I/O error while writing superblock
Feb 28 04:15:14 vpshost36 kernel: EXT3-fs error (device dm-3): ext3_get_inode_loc: unable to read inode block - inode=17235981, block=68943874



I also noticed that with each backup there were file system fixes similar to:

Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 9054124
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 9054091
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 9054088
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 9054074
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 9054071
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 6062696
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 6062695
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 6062694
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 6062693
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 6062690
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 5187023
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 5187022
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 5187021
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 5187020
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 5187013
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 4196439
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 4196435
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 4196434
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 4196433
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 4196430
Feb 28 03:23:37 vpshost36 kernel: ext3_orphan_cleanup: deleting unreferenced inode 1869926



This happened while the system was doing backups to an NFS-mounted file system. My guess is that this is related to NFS, but I'm not certain. Any ideas why this happened?
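
The first message ("Invalidating snapshot: Unable to allocate exception") looks like the LVM snapshot that vzdump creates ran out of space during the backup. If that is the case, a possible mitigation (a sketch; the value is just an example) would be to give the snapshot more room in /etc/vzdump.conf:

Code:

# /etc/vzdump.conf - LVM snapshot size in MB
size: 4096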

Thanks

How can I change the bit depth of noVNC to something like 16-bit?

How can I change the bit depth of noVNC to something like 16-bit?
I have a small-bandwidth link.

many thanks..

"Neighbour table overflow"

Hello everybody,

So far I have been running 3 KVM VMs on my Proxmox server.

My provider has now made more IP addresses available to me, so I can run more than 3 VMs.

The 3 existing KVM VMs run without any problems - stable and fast.

I have now created 2 more test machines - each with 1 processor and 2 GB RAM - nothing that should push the host system into performance problems.

However, I have now noticed the following message in the host machine's syslog:

Code:

Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:48:50 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: __ratelimit: 872 callbacks suppressed
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:06 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: __ratelimit: 584 callbacks suppressed
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:11 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: __ratelimit: 434 callbacks suppressed
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:20 server1 kernel: Neighbour table overflow.
Mar  1 08:49:21 server1 kernel: Neighbour table overflow.
Mar  1 08:51:06 server1 kernel: __ratelimit: 8 callbacks suppressed
Mar  1 08:51:06 server1 kernel: Neighbour table overflow.
Mar  1 08:51:06 server1 kernel: Neighbour table overflow.



The solution would presumably be to adjust the corresponding kernel values.

These are the values currently in the configuration:
net.ipv4.neigh.default.gc_thresh1 = 256
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh3 = 1024
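
If the defaults really are too small for the number of neighbours on this segment, raising them would presumably look like this (values are examples only):

Code:

# /etc/sysctl.conf
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096

# apply without a reboot
sysctl -p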

I have also noticed another problem:

The "external link" - that is, from my home computer to one of the new VMs - is occasionally interrupted; SSH does not connect.

PuTTY then gets a timeout.

When I access the machine via the Proxmox console, I can ping and traceroute external hosts.

Are the two problems connected, or are they 2 separate problems?

Any Suggestions?

Sebastian

Virtual Disk Size issue after move disk from GUI

Hello

Recently we moved a 30 GB disk from storage 1 to storage 2. The format of the VM disk is qcow2. On storage 1, qemu-img info shows the following:

image: vm-149-disk-1.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 27G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false

The disk size is 27 GB instead of 30 GB, but on storage 2 the same disk reports the following:

image: vm-149-disk-1.qcow2
file format: qcow2
virtual size: 30G (32212254720 bytes)
disk size: 30G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false

The disk size has become equal to the virtual size, and this happens for any drive size and any format. When we run qemu-img convert on storage 2, the disk size goes back to 27G instead of 30G.
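
The convert step looks roughly like this (a sketch; file names as above, the VM is stopped, and storage 2 needs enough free space for the temporary copy):

Code:

# write a sparse copy and swap it in place of the inflated file
qemu-img convert -O qcow2 vm-149-disk-1.qcow2 vm-149-disk-1.sparse.qcow2
mv vm-149-disk-1.sparse.qcow2 vm-149-disk-1.qcow2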

Can anyone suggest how this issue can be resolved, or is this a bug in the system?

Debian X86_64 (or amd64)

Hi,

Why are the Debian virtual appliances only available as i386?

Best regards

Proxmox 3.4 and RAID0

Hello,
I installed version 3.4 with a RAID 1 configuration. When I try to create and start a VM, Proxmox reports this error:
//edit: Sorry, RAID1

kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none,detect-zeroes=on: file system may not support O_DIRECT
kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none,detect-zeroes=on: could not open disk image /var/lib/vz/images/100/vm-100-disk-1.qcow2: Could not open '/var/lib/vz/images/100/vm-100-disk-1.qcow2': Invalid argument
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=e46dabe8-5966-4f28-a55c-09226920a721' -name xp -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga qxl -cpu kvm64,+lahf_lm,+x2apic,+sep -m 2048 -k en-us -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -spice 'tls-port=61000,addr=127.0.0.1,tls-ciphers=DES-CBC3-SHA,seamless-migration=on' -device 'virtio-serial,id=spice,bus=pci.0,addr=0x9' -chardev 'spicevmc,id=vdagent,name=vdagent' -device 'virtserialport,chardev=vdagent,name=com.redhat.spice.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:f861a656cbfe' -drive 'file=/var/lib/vz/template/iso/Win_XP.iso,if=none,id=drive-ide2,media=cdrom,aio=native' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-virtio0,format=qcow2,aio=native,cache=none,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=D6:BC:14:52:99:D3,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime'' failed: exit code 1
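
If the cause is that cache=none opens the image with O_DIRECT and the underlying file system does not support that, a possible workaround (a sketch, not a definitive fix) would be switching the disk to a cache mode that does not need O_DIRECT:

Code:

# switch the existing virtio disk to writethrough caching (adjust the volume spec to your config)
qm set 100 --virtio0 local:100/vm-100-disk-1.qcow2,cache=writethrough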

Thanks