Channel: Proxmox Support Forum

Proxmox pfsense

Hello, I have set up a new Proxmox server with a Linux bridge.
Via OpenVPN on pfSense I can connect to all VMs on the other Proxmox host, but to none of the containers or the Proxmox host itself.

What should be changed so that I can also connect to the Proxmox host?

(At the moment I am using a workaround: I connect to a VM on another Proxmox host and from there back to the Proxmox host where the firewall is running.)

Thanks in advance for your suggestions.

zfs caching etc...

Hi,

Setting up several new machines which have
2 x 1TB SSD
2 x 4TB HDD

Ideally, I wanted to create:

SSD Drives
800GB SSD ZFS Raid 1 - pool0 - root mount
100GB SSD CACHE partition
100GB SSD ZIL

HDD Drives
4TB ZFS Raid 1 - pool1 - secondary mount
|____100GB SSD CACHE
|____100GB SSD LOG

This doesn't seem possible via the installer, nor does it seem possible to resize afterwards.


Any suggestions?
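
The installer won't build a layout like this on its own, but the HDD pool with SSD cache and log can be assembled by hand afterwards. A rough sketch, assuming the HDDs show up as /dev/sdc and /dev/sdd and that the SSDs (/dev/sda and /dev/sdb here, purely placeholder names) were partitioned to leave two ~100GB partitions free besides the root mirror:

Code:

# mirrored HDD pool for the secondary mount
zpool create pool1 mirror /dev/sdc /dev/sdd

# add a mirrored SLOG (ZIL) and two striped L2ARC cache partitions from the SSDs
zpool add pool1 log mirror /dev/sda3 /dev/sdb3
zpool add pool1 cache /dev/sda2 /dev/sdb2

# verify the resulting layout
zpool status pool1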

Guest freeze timeout - feature request

Hi all,

I have a few guests that show this in the backup logs:
qmp command 'guest-fsfreeze-freeze' failed - got timeout

qmp command 'guest-fsfreeze-thaw' failed - got timeout

This is obviously related to the QEMU guest agent and the corresponding VSS functionality (since these are Windows guests in this case). I notice in the source code of qemu-server:

Code:


if (!$timeout) {
        # hack: monitor sometime blocks
        if ($cmd->{execute} eq 'query-migrate') {
            $timeout = 60*60; # 1 hour
        } elsif ($cmd->{execute} =~ m/^(eject|change)/) {
            $timeout = 60; # note: cdrom mount command is slow
        } elsif ($cmd->{execute} eq 'guest-fsfreeze-freeze' ||
                $cmd->{execute} eq 'guest-fsfreeze-thaw') {
            $timeout = 10;
        } elsif ($cmd->{execute} eq 'savevm-start' ||
                $cmd->{execute} eq 'savevm-end' ||
                $cmd->{execute} eq 'query-backup' ||
                $cmd->{execute} eq 'query-block-jobs' ||
                $cmd->{execute} eq 'backup-cancel' ||
                $cmd->{execute} eq 'query-savevm' ||
                $cmd->{execute} eq 'delete-drive-snapshot' ||
                $cmd->{execute} eq 'guest-shutdown' ||
                $cmd->{execute} eq 'snapshot-drive'  ) {
            $timeout = 10*60; # 10 mins ?
        } else {
            $timeout = 3; # default
        }
    }

that the timeout is 10 seconds. Normally I assume this would be enough time, but these two machines happen to be WSUS servers, so they have a lot of data and a lot of small files, and I suspect the VSS functionality is simply taking too long and timing out. Is there any way (existing or planned) to add a setting in the .conf to increase that timeout? That way machines that don't experience the problem would just use the default, while for those few cases you could raise it where necessary without raising it for all machines.

Right now I'm not even sure of the state of those VMs, because the two timeouts (freeze and thaw) came right after each other, so a guest may not have properly thawed (assuming it ever finished freezing). For those two VMs I'll have to turn off the guest agent functionality for now to prevent further issues, but it would be nice if one could extend the timeout when needed.
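
For anyone hitting the same timeouts, a quick way to check whether a guest is still frozen after a failed thaw is to ask the agent directly over its socket. A minimal sketch, assuming the agent socket sits at the usual /var/run/qemu-server/<vmid>.qga path (the PVE default when the agent option is enabled), that socat is installed, and using VM 101 as an example:

Code:

# query the freeze state of VM 101 via the guest agent socket
echo '{"execute":"guest-fsfreeze-status"}' | socat - /var/run/qemu-server/101.qga

# if it reports "frozen", thaw it manually
echo '{"execute":"guest-fsfreeze-thaw"}' | socat - /var/run/qemu-server/101.qga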

Bugs related to reused VMIDs

Hello,

This is my first message in this forum and I would like to discuss an issue I find problematic on Proxmox.

In brief: VMIDs should be auto-incrementing, like primary keys in a database.

Currently, when you remove a VM/CT, the ID becomes available again, and the next time you create a virtual machine it will be reused (Proxmox suggests the lowest available ID).

One of the severe consequences is that the new machine inherits the access privileges of the previous VM, as Proxmox doesn't drop privileges when removing a VM and they are referenced by VMID.

I just had to learn this the hard way, with a customer complaining that somebody was accessing his console.

In the same way that stale privileges remain in the system, I suspect there might be similar issues in other parts of Proxmox.

Would like to hear your opinions. Thank you

-J
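
Until this is handled automatically, one workaround is to check for leftover ACL entries before a VMID gets reused. A small sketch, assuming VMID 105 is the ID about to be recycled:

Code:

# ACLs live in the cluster-wide user config; look for entries that still
# reference the old VM's path
grep '/vms/105' /etc/pve/user.cfg

# any matching entries should be removed with pveum (see `pveum help`)
# before the ID is handed out to a new customer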

How to assign an IP to new KVM Linux machine?

Hello, I installed Proxmox and created a new KVM virtual machine with Linux on it. How do I give this VPS its own dedicated IP, which I just bought from the datacenter? I found there is a "Network" tab in Proxmox; do I need to set up anything there? Thank you.
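
As a rough sketch (placeholder addresses throughout), assuming the VM's virtual NIC is bridged to vmbr0 on the Network tab and the guest is Debian/Ubuntu-based, the dedicated IP is normally configured inside the guest itself, e.g. in /etc/network/interfaces:

Code:

# /etc/network/interfaces inside the guest -- replace the placeholder
# address, netmask and gateway with the values from your datacenter
auto eth0
iface eth0 inet static
    address 203.0.113.50
    netmask 255.255.255.0
    gateway 203.0.113.1

Note that some datacenters only route an additional IP to a specific MAC address; if so, the MAC shown for net0 in the VM's hardware settings has to match the one registered with the provider.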

Changing VM disks from 'default-no cache' to 'writeback' significantly reduces IOWait

Hi,

I recently discovered that changing all the VM disks to 'writeback' decreased the IOWait by orders of magnitude.
Just an FYI for everyone who might be dealing with high IOWait.

-J
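
For anyone wanting to do the same from the CLI, the cache mode is part of the disk definition. A small sketch, assuming VM 101 with a qcow2 disk on the local storage (the volume name below is an example; copy the one from your own config):

Code:

# show the current disk line
qm config 101 | grep ide0

# re-apply the same volume with cache=writeback added
qm set 101 --ide0 local:101/vm-101-disk-1.qcow2,cache=writeback

Keep in mind that writeback buffers writes in the host page cache, so guests that don't issue flushes can lose recent writes if the host crashes.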

"Error 404048" in Proxmox console

I got "Error 404048" in Proxmox console during installation from .ISO What should i do please? // edit, i hit Reload button and it worked.

live migration with different IP

Hi there

I have two Proxmox root servers at two different hosting providers.

Hoster 1, server 1 has the IP 5.XX.XX.XX
Hoster 2, server 2 has the IP 92.XX.XX.XX

Does live migration work between them?

Thanks
Achim

What's wrong with my disk system?

Hi,

I've just run the update to upgrade to 3.4 and noticed some strange output:

Code:

Installing for i386-pc platform.
Installation finished. No error reported.
Generating grub configuration file ...
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
  [... the same four read errors repeat ten times ...]
Found linux image: /boot/vmlinuz-2.6.32-39-pve
Found linux image: /boot/vmlinuz-2.6.32-34-pve
Found initrd image: /boot/initrd.img-2.6.32-34-pve
Found linux image: /boot/vmlinuz-2.6.32-33-pve
Found initrd image: /boot/initrd.img-2.6.32-33-pve
Found linux image: /boot/vmlinuz-2.6.32-32-pve
Found initrd image: /boot/initrd.img-2.6.32-32-pve
Found linux image: /boot/vmlinuz-2.6.32-30-pve
Found initrd image: /boot/initrd.img-2.6.32-30-pve
Found linux image: /boot/vmlinuz-2.6.32-26-pve
Found initrd image: /boot/initrd.img-2.6.32-26-pve
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
Found memtest86+ image: /memtest86+.bin
Found memtest86+ multiboot image: /memtest86+_multiboot.bin
done
Setting up libnuma1 (2.0.8~rc4-1) ...
Setting up libpve-common-perl (3.0-24) ...
Setting up openssl (1.0.1e-2+deb7u16) ...
Setting up pve-cluster (3.0-17) ...
Restarting pve cluster filesystem: pve-cluster.
Setting up libpve-access-control (3.0-16) ...
Setting up libpve-storage-perl (3.0-33) ...
Setting up libxml2-utils (2.8.0+dfsg1-7+wheezy4) ...
Setting up numactl (2.0.8~rc4-1) ...
Setting up parted (3.2-6~bpo70+1) ...
Setting up pve-kernel-2.6.32-39-pve (2.6.32-156) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 2.6.32-39-pve /boot/vmlinuz-2.6.32-39-pve
update-initramfs: Generating /boot/initrd.img-2.6.32-39-pve
run-parts: executing /etc/kernel/postinst.d/zz-update-grub 2.6.32-39-pve /boot/vmlinuz-2.6.32-39-pve
Generating grub configuration file ...
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
  [... the same four read errors repeat ten times ...]
Found linux image: /boot/vmlinuz-2.6.32-39-pve
Found initrd image: /boot/initrd.img-2.6.32-39-pve
Found linux image: /boot/vmlinuz-2.6.32-34-pve
Found initrd image: /boot/initrd.img-2.6.32-34-pve
Found linux image: /boot/vmlinuz-2.6.32-33-pve
Found initrd image: /boot/initrd.img-2.6.32-33-pve
Found linux image: /boot/vmlinuz-2.6.32-32-pve
Found initrd image: /boot/initrd.img-2.6.32-32-pve
Found linux image: /boot/vmlinuz-2.6.32-30-pve
Found initrd image: /boot/initrd.img-2.6.32-30-pve
Found linux image: /boot/vmlinuz-2.6.32-26-pve
Found initrd image: /boot/initrd.img-2.6.32-26-pve
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
Found memtest86+ image: /memtest86+.bin
Found memtest86+ multiboot image: /memtest86+_multiboot.bin
done
Setting up pve-firmware (1.1-4) ...
Setting up pve-qemu-kvm (2.2-10) ...

I've checked the RAID status, which looks okay, and I also checked the disks within the controller, which also look okay. The system runs on an SSD RAID 1.

I've checked the messages log and found this:
Code:

May 21 02:18:45 pm3 kernel: EXT3-fs: barriers disabled
May 21 02:18:45 pm3 kernel: kjournald starting.  Commit interval 5 seconds
May 21 02:18:45 pm3 kernel: EXT3-fs (dm-3): using internal journal
May 21 02:18:45 pm3 kernel: EXT3-fs (dm-3): 95 orphan inodes deleted
May 21 02:18:45 pm3 kernel: EXT3-fs (dm-3): recovery complete
May 21 02:18:45 pm3 kernel: EXT3-fs (dm-3): mounted filesystem with ordered data mode
May 21 02:19:19 pm3 kernel: EXT3-fs: barriers disabled
May 21 02:19:19 pm3 kernel: kjournald starting.  Commit interval 5 seconds
May 21 02:19:19 pm3 kernel: EXT3-fs (dm-3): using internal journal
May 21 02:19:19 pm3 kernel: EXT3-fs (dm-3): 95 orphan inodes deleted
May 21 02:19:19 pm3 kernel: EXT3-fs (dm-3): recovery complete
May 21 02:19:19 pm3 kernel: EXT3-fs (dm-3): mounted filesystem with ordered data mode
May 21 02:20:16 pm3 kernel: __ratelimit: 2819 callbacks suppressed
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:20:16 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:32 pm3 kernel: EXT3-fs: barriers disabled
May 21 02:21:32 pm3 kernel: kjournald starting.  Commit interval 5 seconds
May 21 02:21:32 pm3 kernel: EXT3-fs (dm-3): using internal journal
May 21 02:21:45 pm3 kernel: __ratelimit: 26 callbacks suppressed
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: lost page write due to I/O error on dm-3
May 21 02:21:45 pm3 kernel: <35893589805
May 21 02:21:45 pm3 kernel: <3589805
May 21 02:21:45 pm3 kernel: <73589805
May 21 02:21:45 pm3 kernel: <3589805
May 21 02:21:45 pm3 kernel: <73589805
May 21 02:21:45 pm3 kernel: <73589805
May 21 02:21:45 pm3 kernel: 3589805
May 21 02:21:45 pm3 kernel: 3589805
May 21 02:21:45 pm3 kernel: <73589805
May 21 02:21:45 pm3 kernel: <73589805
May 21 02:21:45 pm3 kernel: <3589805
May 21 02:21:45 pm3 kernel: <3589805

This filled the messages log until the / partition was full.

On rebooting I got this on the console:
Code:

Buffer I/O error on device dm-5

I ran pvscan:
Code:

  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408056320: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 59408113664: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 0: Input/output error
  /dev/pve/vzsnap-pm3-0: read failed after 0 of 4096 at 4096: Input/output error
  PV /dev/sda2  VG pve  lvm2 [110.32 GiB / 12.75 GiB free]
  Total: 1 [110.32 GiB] / in use: 1 [110.32 GiB] / in no VG: 0 [0  ]

dmsetup info:
Code:

Name:              pve-data-real
State:            ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        2
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: LVM-pmLkYaq8onYG7zkPUhzHxZXc1mwyJvnyYL6vSp40fai6CmO3CTYBNFLcJ0gaSogi-real

Name:              pve-swap
State:            ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: LVM-pmLkYaq8onYG7zkPUhzHxZXc1mwyJvnyRQeMl6rgi9dIeT4T1P7qN4DlDQSueY0R

Name:              pve-root
State:            ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: LVM-pmLkYaq8onYG7zkPUhzHxZXc1mwyJvnywEL4YeJLlVI0imVh2UDU3YBANLOqJuCC

Name:              pve-vzsnap--pm3--0-cow
State:            ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 4
Number of targets: 1
UUID: LVM-pmLkYaq8onYG7zkPUhzHxZXc1mwyJvnywaO21v568C4VFdfYGv0TYMOe8B8fIEyK-cow

Name:              pve-data
State:            ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 3
Number of targets: 1
UUID: LVM-pmLkYaq8onYG7zkPUhzHxZXc1mwyJvnyYL6vSp40fai6CmO3CTYBNFLcJ0gaSogi

Name:              pve-vzsnap--pm3--0
State:            ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 5
Number of targets: 1
UUID: LVM-pmLkYaq8onYG7zkPUhzHxZXc1mwyJvnywaO21v568C4VFdfYGv0TYMOe8B8fIEyK

Drive info (from the controller):
Code:

Enclosure Device ID: 24
Slot Number: 0
Drive's position: DiskGroup: 0, Span: 0, Arm: 0
Enclosure position: 1
Device Id: 37
WWN: 50026b723a09ee02
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA

Raw Size: 111.790 GB [0xdf94bb0 Sectors]
Non Coerced Size: 111.290 GB [0xde94bb0 Sectors]
Coerced Size: 110.827 GB [0xdda7800 Sectors]
Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: C4
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5005076028d068f9
Connected Port Number: 0(path0)
Inquiry Data: 50026B723A09EE02    KINGSTON SH103S3120G                    507KC4
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive:  Not Certified
Drive Temperature : N/A
PI Eligibility:  No
Drive is formatted for PI information:  No
PI: No PI
Drive's NCQ setting : N/A
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No

Enclosure Device ID: 24
Slot Number: 1
Drive's position: DiskGroup: 0, Span: 0, Arm: 1
Enclosure position: 1
Device Id: 38
WWN: 50026b723a09ef7f
Sequence Number: 2
Media Error Count: 0
Other Error Count: 0
Predictive Failure Count: 0
Last Predictive Failure Event Seq Number: 0
PD Type: SATA

Raw Size: 111.790 GB [0xdf94bb0 Sectors]
Non Coerced Size: 111.290 GB [0xde94bb0 Sectors]
Coerced Size: 110.827 GB [0xdda7800 Sectors]
Sector Size:  0
Firmware state: Online, Spun Up
Commissioned Spare : No
Emergency Spare : No
Device Firmware Level: C4
Shield Counter: 0
Successful diagnostics completion on :  N/A
SAS Address(0): 0x5005076028d068fa
Connected Port Number: 0(path0)
Inquiry Data: 50026B723A09EF7F    KINGSTON SH103S3120G                    507KC4
FDE Capable: Not Capable
FDE Enable: Disable
Secured: Unsecured
Locked: Unlocked
Needs EKM Attention: No
Foreign State: None
Device Speed: 6.0Gb/s
Link Speed: 6.0Gb/s
Media Type: Solid State Device
Drive:  Not Certified
Drive Temperature : N/A
PI Eligibility:  No
Drive is formatted for PI information:  No
PI: No PI
Drive's NCQ setting : N/A
Port-0 :
Port status: Active
Port's Linkspeed: 6.0Gb/s
Drive has flagged a S.M.A.R.T alert : No

Array info:
Code:

Adapter 0 -- Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name                :
RAID Level          : Primary-1, Secondary-0, RAID Level Qualifier-0
Size                : 110.827 GB
Sector Size        : 512
Mirror Data        : 110.827 GB
State              : Optimal
Strip Size          : 128 KB
Number Of Drives    : 2
Span Depth          : 1
Default Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Default Access Policy: Read/Write
Current Access Policy: Read/Write
Disk Cache Policy  : Enabled
Encryption Type    : None
Is VD Cached: No

I'm not sure what's wrong. Usually if a disk fails it is kicked out of the array and I replace it with a new one and rebuild, but none of the SSDs has been kicked out so far. Any idea what's wrong here?

Thanks!
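
One thing worth checking independently of the controller: the errors are all against /dev/pve/vzsnap-pm3-0, which is the LVM snapshot vzdump creates during backups. If a backup was interrupted, the snapshot can be left behind in a broken state and produce exactly this kind of noise. A sketch of how to inspect it and, if it really is stale, drop it:

Code:

# list logical volumes; a leftover vzdump snapshot shows up as vzsnap-<node>-0
lvs -a pve

# if no backup is currently running, remove the stale snapshot
lvremove /dev/pve/vzsnap-pm3-0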

Dell H200 RAID 1 compatibility

Hi!

I'm preparing the configuration for a new server and would like to know about Proxmox compatibility with the PERC H200 and 2x 600GB SAS drives in RAID 1.
I have read several posts, but I'm not completely sure.

Can anyone help me? Thanks in advance.

----
Antonio

Warning - Avoid the pain - Don't grow your QCOW2 file beyond 2TB

Something we learnt the hard way, and I hope others will avoid this "feature" (advised by support) of Proxmox VE.

We were running a KVM instance on a qcow2 file and used the Resize Disk option to increase space. Unfortunately this option allows the image to grow just beyond 2TB on the default ext3 filesystem.

End result: corruption will occur at some point in the future as the disk is written. You just won't know when, it's not recoverable, and you cannot restore from a vzdump backup either.

Hopefully you will have read this before expanding your hard disk volume and avoided this situation. Better still, perhaps Proxmox will recognise this as a design bug in their software and stop growth beyond the limits of the supported filesystems.

Thanks and good luck if you're in the same situation.
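
Before growing a disk it is worth checking what the underlying storage can actually hold. A quick sketch, assuming the image lives under the default /var/lib/vz directory (the image path below is an example):

Code:

# filesystem type backing the storage (ext3 with 4K blocks caps files at 2TiB)
df -T /var/lib/vz

# current virtual and on-disk size of the image
qemu-img info /var/lib/vz/images/101/vm-101-disk-1.qcow2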

Proxmox Lenovo M5210 Raid compatibility

Good Morning.

I have been testing Proxmox VE 3.4 on a Lenovo M73 and everything is fine, but now I am trying to install it on a Lenovo System x3650 M5 with an M5210 RAID controller. Is Proxmox compatible with this card? Can I do something to make it work, or is it simply not compatible?

Thanks
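
As a quick check before or during the install, it can help to confirm from a shell (e.g. the installer's debug console or any live Linux) whether the kernel sees the controller at all and which driver binds to it. A sketch:

Code:

# is the RAID controller visible on the PCI bus?
lspci -nn | grep -i raid

# which kernel module claimed it (MegaRAID-based cards typically use megaraid_sas)
lspci -k | grep -iA3 raid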

Error on "Clone from LVM to img (qcow2-raw)"

Hi all !

On the very latest Proxmox 3.4 I have a problem doing a full clone of a VM:

Cloning from qcow2 to an image (raw, qcow2) works fine.
Cloning from LVM to an image gives:

***************
Use of uninitialized value in string eq at /usr/share/perl5/PVE/QemuServer.pm line 6014.
Use of uninitialized value $ready in concatenation (.) or string at /usr/share/perl5/PVE/QemuServer.pm line 6010.
***************

Any idea ?

Luca

Proxmox openvz is restarting every day and CT is stopped

I recently installed Proxmox on a dedicated server and created 2 CTs, of which only one is started. I don't know why the server restarts every day, and after the restart the CT is stopped. There is nothing configured in crontab.

May 21 06:25:03 ns373646 spiceproxy[3050]: restarting server
May 21 06:25:03 ns373646 spiceproxy[3050]: worker 3051 finished
May 21 06:25:03 ns373646 spiceproxy[3050]: starting 1 worker(s)
May 21 06:25:03 ns373646 spiceproxy[3050]: worker 89990 started
Thanks.
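
A couple of generic checks usually narrow this down: whether the restarts are clean reboots or crashes, and whether anything is scheduled around that time. A sketch:

Code:

# reboot/shutdown history as recorded by wtmp
last -x reboot shutdown | head

# look for OOM kills, panics or watchdog resets shortly before the restart time
grep -iE 'oom|panic|watchdog' /var/log/syslog /var/log/kern.log

# cron jobs are not only in the user crontab
ls /etc/cron.d /etc/cron.daily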

VM Migration to another Server

Hello,

How can I migrate my VMs from my current server to a new server that is not on the same network?
What is the best way to do this?

Regards
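
Since the two servers are not in one cluster, one common approach is an offline move via backup and restore. A sketch, assuming VM 101 and enough space under /var/lib/vz/dump on both sides (hostname and filenames are examples):

Code:

# on the old server: create a compressed backup of the VM
vzdump 101 --mode stop --compress lzo --dumpdir /var/lib/vz/dump

# copy it over (use the filename vzdump printed)
scp /var/lib/vz/dump/vzdump-qemu-101-*.vma.lzo root@newserver:/var/lib/vz/dump/

# on the new server: restore it, optionally under a new VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-101-*.vma.lzo 101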

Blank screen on terminal when connecting to VM via serial port

I followed this howto http://pve.proxmox.com/wiki/Serial_Terminal

It does work partially. When I connect to the terminal with the command qm terminal 101 while the VM is booting, I get all the boot messages. But when the boot messages stop, nothing happens: a blank screen, and I cannot type anything.

Any idea?
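
The usual missing piece is that the guest only prints kernel console output on ttyS0 but never starts a login getty there. A sketch of what to add inside the guest (not on the Proxmox host), depending on its init system:

Code:

# systemd guests: start a login prompt on the serial port
systemctl enable serial-getty@ttyS0.service
systemctl start serial-getty@ttyS0.service

# sysvinit guests: add a getty line to /etc/inittab and reload
# T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100
init q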

Ceph - High apply latency on OSD causes poor performance on VM

Hi,

Since we installed our new Ceph cluster, we frequently see high apply latency on the OSDs (around 200 ms to 1500 ms), while commit latency stays continuously at 0 ms!

In the Ceph documentation, when you run the command "ceph osd perf", fs_commit_latency is generally higher than fs_apply_latency. For us it's the opposite.
The phenomenon has increased since we changed the Ceph version (migrating from Giant 0.87.1 to Hammer 0.94.1).
The consequence is that our Windows VMs are very slow.
Could anyone tell us whether our configuration is good or not, and in which direction to investigate?

Code:

# ceph osd perf
osd fs_commit_latency(ms) fs_apply_latency(ms)
  0                    0                  62
  1                    0                  193
  2                    0                  88
  3                    0                  269
  4                    0                1055
  5                    0                  322
  6                    0                  272
  7                    0                  116
  8                    0                  653
  9                    0                    4
 10                    0                    1
 11                    0                    7
 12                    0                    4

Some information about our configuration:

- Proxmox 3.4-6
- kernel : 3.10.0-10-pve
- CEPH :
- Hammer 0.94.1
- 3 hosts with 3 OSDs of 4 TB (9 OSDs) + 1 SSD of 500 GB per host for journals
- 1 host with 4 OSDs of 300 GB (4 OSDs) + 1 SSD of 500 GB for journals

- OSD Tree :
Code:

# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 33.83995 root default
-6 22.91995    room salle-dr
-2 10.92000        host ceph01
 0  3.64000            osd.0        up  1.00000          1.00000
 2  3.64000            osd.2        up  1.00000          1.00000
 1  3.64000            osd.1        up  1.00000          1.00000
-3 10.92000        host ceph02
 3  3.64000            osd.3        up  1.00000          1.00000
 4  3.64000            osd.4        up  1.00000          1.00000
 5  3.64000            osd.5        up  1.00000          1.00000
-5  1.07996        host ceph06
 9  0.26999            osd.9        up  1.00000          1.00000
10  0.26999            osd.10      up  1.00000          1.00000
11  0.26999            osd.11      up  1.00000          1.00000
12  0.26999            osd.12      up  1.00000          1.00000
-7 10.92000    room salle-log
-4 10.92000        host ceph03
 6  3.64000            osd.6        up  1.00000          1.00000
 7  3.64000            osd.7        up  1.00000          1.00000
 8  3.64000            osd.8        up  1.00000          1.00000

- ceph.conf
Code:

[global]
        auth client required = cephx
        auth cluster required = cephx
        auth service required = cephx
        auth supported = cephx
        cluster network = 10.10.1.0/24
        filestore xattr use omap = true
        fsid = 2dbbec32-a464-4bc5-bb2b-983695d1d0c6
        keyring = /etc/pve/priv/$cluster.$name.keyring
        mon osd adjust heartbeat grace = true
        mon osd down out subtree limit = host
        osd disk threads = 24
        osd heartbeat grace = 10
        osd journal size = 5120
        osd max backfills = 1
        osd op threads = 24
        osd pool default min size = 1
        osd recovery max active = 1
        public network = 192.168.80.0/24


[osd]
        keyring = /var/lib/ceph/osd/ceph-$id/keyring


[mon.0]
        host = ceph01
        mon addr = 192.168.80.41:6789


[mon.1]
        host = ceph02
        mon addr = 192.168.80.42:6789


[mon.2]
        host = ceph03
        mon addr = 192.168.80.43:6789

Thanks.
Best regards
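
One way to narrow down whether the slow apply times come from a few specific disks is to benchmark the suspect OSDs directly and compare them with a fast one. A sketch using the built-in OSD bench (by default it writes roughly 1 GB, so run it outside peak hours):

Code:

# benchmark the OSD with the worst apply latency ...
ceph tell osd.4 bench

# ... and a fast one for comparison
ceph tell osd.9 bench

# re-check latencies afterwards
ceph osd perf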

VM Migration Failing

Has anyone seen this happening?

Server1:
Quote:

root@server1:~# pveversion -v
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Server2:
Quote:

root@server2:~# pveversion -v
proxmox-ve-2.6.32: 3.4-156 (running kernel: 2.6.32-39-pve)
pve-manager: 3.4-6 (running version: 3.4-6/102d4547)
pve-kernel-2.6.32-39-pve: 2.6.32-156
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-2
pve-cluster: 3.0-17
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Config on server1:
Quote:

root@server1:~# qm config 101
boot: c
bootdisk: ide0
cores: 1
cpu: kvm64
ide0: san1.vg1:vm-101-disk-1,size=32G
ide2: none,media=cdrom
kvm: 0
memory: 512
name: nigeltest
net0: e1000=1E:FD:B2:35:87:39,bridge=vmbr0
numa: 0
ostype: win8
scsihw: virtio-scsi-pci
smbios1: uuid=c841bd91-acdf-4940-b290-11922458b5b1
sockets: 1

Error message:
Quote:

May 21 16:57:49 starting migration of VM 101 to node 'server2' (192.168.20.46)
May 21 16:57:49 copying disk images
May 21 16:57:49 starting VM 101 on remote node 'server2'
May 21 16:57:50 starting ssh migration tunnel
May 21 16:57:51 starting online/live migration on localhost:60001
May 21 16:57:51 migrate_set_speed: 8589934592
May 21 16:57:51 migrate_set_downtime: 0.1
May 21 16:57:53 ERROR: online migrate failure - aborting
May 21 16:57:53 aborting phase 2 - cleanup resources
May 21 16:57:53 migrate_cancel
May 21 16:57:53 ERROR: migration finished with problems (duration 00:00:04)
TASK ERROR: migration problems
QMP strace error:
Quote:

[pid 6097] read(12, "{\"QMP\": {\"version\": {\"qemu\": {\"micro\": 1, \"minor\": 2, \"major\": 2}, \"package\": \"\"}, \"capabilities\": []}}\r\n", 8192) = 105
[pid 6097] write(12, "{\"execute\":\"qmp_capabilities\",\"id\":\"6097:1 5\",\"arguments\":{}}", 60) = 60
[pid 6097] read(12, "{\"return\": {}, \"id\": \"6097:15\"}\r\n", 8192) = 33
[pid 6097] write(12, "{\"execute\":\"query-migrate\",\"id\":\"6097:16\",\"arguments\":{}}", 57) = 57
[pid 6097] read(12, "{\"return\": {\"status\": \"failed\"}, \"id\": \"6097:16\"}\r\n", 8192) = 51
Both machines are running off an iscsitarget SAN.

san1 is in shared mode.

san1.vg1 is in shared mode.

Offline migration works fine.
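
To get past the rather terse "online migrate failure", it usually helps to re-run the migration from the CLI and then read the syslog on both nodes around that timestamp, where the underlying QMP/qemu error is normally logged. A sketch:

Code:

# on the source node: re-run with the full task output on the terminal
qm migrate 101 server2 --online

# on both nodes: see what qemu and qemu-server logged for VM 101
grep 'VM 101\|vm 101\|migrate' /var/log/syslog | tail -n 50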

May have broken my installation !

Hello forum,

I have been having some issues getting my system to boot. Since it was installed a few weeks ago I have had to manually mount rpool at boot time. I would get a message stating "Manually import the pool at the command prompt and exit" (screen dump attached).

Manuall_Mount_Pool.jpg

I wasn't too bothered about this as it is a test system; I am learning/testing Proxmox and was not planning to reboot too often anyway. However, I stupidly decided to try and fix it, hoping an update might resolve the issue. Part of the update came up with messages about changes to the boot settings (GRUB) and I chose to keep the existing settings. Now trying to mount the pool manually just doesn't work. Doh!

I can reinstall, it's not a problem as the VMs are stored on a FreeNAS SAN. However, I would like to understand what has happened so I don't make the same mistake again. Here is a screenshot of the error messages.

ZFS_MOUNT_FAIL.jpg

If anyone can point me in the right direction, it would be greatly appreciated.
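
From the initramfs prompt shown in the screenshot, the pool can usually still be imported by hand, and the boot-time failure is often just the pool not being found quickly enough. A sketch of the manual import plus the commonly suggested rootdelay workaround, assuming the pool really is named rpool:

Code:

# at the initramfs/busybox prompt: import without mounting, then continue boot
zpool import -N rpool
exit

# once booted: give the devices more time to appear on the next boot
# (add rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub)
update-grub
update-initramfs -u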

Install Intel I217-LM driver on Proxmox 3.4 Error ;/

Hi,

I want to install the newest driver for my network card on Proxmox 3.4.

driversource: http://downloadmirror.intel.com/1581...3.1.0.2.tar.gz

I have done the latest updates and a reboot, then:
Code:

apt-get install pve-headers-`uname -r` build-essential
cd /usr/local/src
wget http://downloadmirror.intel.com/15817/eng/e1000e-3.1.0.2.tar.gz
tar xzf e1000e-3.1.0.2.tar.gz
cd e1000e-3.1.0.2
make install

Then I get this error:
Code:

make: *** No rule to make target `install'.  Stop.
I hope someone can help me with this and find a solution.

regards ice
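
The Intel out-of-tree drivers keep their Makefile one level down, so the usual fix is to build from the src subdirectory. A sketch, assuming the tarball was unpacked as above and the matching pve-headers are installed:

Code:

cd /usr/local/src/e1000e-3.1.0.2/src
make
make install

# reload the module to pick up the new build
# (do this from a local console; reloading drops the network link)
modprobe -r e1000e && modprobe e1000e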