Channel: Proxmox Support Forum

new prospective server, hardware compatibility

Hi,

We will soon be getting a new server (proposed by our provider) and I want to know whether the hardware is fully compatible with the latest Proxmox version, especially the RAID controller.

Server: HP DL360 Gen9
RAID controller: HP Smart Array P440ar
Hypervisor disks: 2x 300 GB SAS 10k, RAID 1
VM storage disks: 4x 900 GB SAS 10k, RAID 10
RAM: 4x 16 GB DDR4 2133 MHz
Processor: 2x Intel Xeon E5-2620 v3


The RAID controller should work with Debian, but I would prefer feedback from experienced users, since this will be my first Proxmox install.
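Once the server arrives, I plan to check the controller from a live system with something like this (a sketch; as far as I know the hpsa driver is what handles these Smart Array cards, but please correct me if that's wrong):

Code:

# does the kernel see the controller and which driver binds to it?
lspci -nnk | grep -iA3 raid
# hpsa should be loaded if the card is supported out of the box
lsmod | grep hpsa
# the logical RAID volumes should then appear as plain block devices
lsblk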

Thanks for your help

Loop0 after installation

After installing Proxmox VE 4, I get this error every time:

EXT4-fs (Loop0): couldn't mount as ext2 due to feature incompatibilities.

I have seen some topics about this, but haven't found a proper solution!

Sent from my SGP771 using Tapatalk

How to create a shared ramdisk on the Proxmox host for an LXC container?

I have created a ramdisk on my Proxmox host, which runs LXC.

This is the relevant fstab entry on the Proxmox host:
tmpfs /mnt/ramdisk tmpfs nodev,nosuid,nodiratime,size=40G 0 0

Code:

Filesystem            Size  Used Avail Use% Mounted on
udev                  10M    0  10M  0% /dev
tmpfs                  13G  58M  13G  1% /run
/dev/dm-0              15G  4.0G  10G  29% /
tmpfs                  32G  63M  32G  1% /dev/shm
tmpfs                5.0M    0  5.0M  0% /run/lock
tmpfs                  32G    0  32G  0% /sys/fs/cgroup
tmpfs                  40G  1.7M  40G  1% /mnt/ramdisk
/dev/mapper/pve-data  100G  23G  77G  23% /var/lib/vz
/dev/fuse              30M  32K  30M  1% /etc/pve
cgmfs                100K    0  100K  0% /run/cgmanager/fs


The problem is that I have set up the config for my LXC container like this:
mp0: /mnt/ramdisk,mp=/mnt/ramdisk
rootfs: local:105/vm-105-disk-1.raw,size=40G
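The same mount point can also be set from the CLI, which is just a shorthand for the line above (a sketch; the container ID 105 is taken from the rootfs line):

Code:

pct set 105 -mp0 /mnt/ramdisk,mp=/mnt/ramdisk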

The LXC container does map /mnt/ramdisk from the host and shows the 40G of space. But if I save something to that folder, the RAM is taken from the LXC container itself and not from the shared folder.
So can I somehow change the folder from tmpfs to something like an ordinary /dev/loop4 device, so the container doesn't treat it as a ramdisk of its own?

This is how it looks inside the container:

Code:

root@ngx01-p2:/mnt# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop3      40G  11G  27G  29% /
none            100K    0  100K  0% /dev
cgroup          12K    0  12K  0% /sys/fs/cgroup
tmpfs            32G    0  32G  0% /sys/fs/cgroup/cgmanager
tmpfs            40G  1.7M  40G  1% /mnt/ramdisk
tmpfs          6.3G  44K  6.3G  1% /run
tmpfs          5.0M    0  5.0M  0% /run/lock
tmpfs          1.6G    0  1.6G  0% /run/shm


thanks.

Random crashing

Hello,

I have been having an issue with random VM crashes over the last few months. The problem happens with several different VMs running Ubuntu 12.x and 14.04 and on two different clusters running PVE v3.3-5 or v3.4-1.

Since the crashing is abrupt, there is nothing in the logs of the VMs.

Here is a capture of the console of one of the crashes. All of the other crashes look similar...

Capture.PNG

Thanks,

-Glen

LXC Updates

I am just wondering if the expected availability of LXC updates and the features they bring could be defined a bit more clearly. This really seems to be the one place where 4.x took a step backwards compared to the 3.4 / OpenVZ solution. In particular we are interested in the following:

1. Live migration of containers
2. Ability to use zfs over iSCSI for storage
3. Capabilities - https://forum.proxmox.com/threads/23...C-Capabilities (still no solution to this)

For us these are really the last updates needed to match the functionality we gave up by moving forward. Their absence also keeps LXC from being a robust solution for us, as many of the servers cannot easily take the downtime of the stop, copy, start approach we are currently forced to use, and even then we have to schedule it in the early AM to minimize the interruption. It was much nicer when we could migrate in the middle of the day and move along.

I realize that giving time frames is often a best guess and subject to other factors. Just wondering if we might have moved forward too soon...

Startup delay is ignored for one VM

Running Proxmox 3.4-11, I have an issue where one of my VMs starts before the network storage is ready.

I have tried setting a startup delay to over 2 minutes but that doesn't seem to make a difference.

Looking at the log, it appears this VM tries to start even before the "start all VMs and containers" command is sent.

For reference, VM 105 has a 30-second delay configured and VM 100 has a 35-second delay configured.
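For reference, this is the form the delay takes in the VM config files (the order values here are assumptions; the up= values are the delays mentioned above):

Code:

# /etc/pve/qemu-server/105.conf
startup: order=1,up=30

# /etc/pve/qemu-server/100.conf
startup: order=2,up=35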

See pictures attached.


startlog.PNG



startup2.PNG

Any ideas?

Proxmox 4.0 with ZFS / near lock up situation

Hello $all,

I am running a Proxmox 4.0 Server for a customer, with qcow2 files and zfs as storage.

ZFS is set up this way:
- 2x2 mirror
- 2 cache devices (SSD)

The server has:
- 32 GB of RAM
- 16 GB used by the VMs (5 in total, all Windows XP to 7)
- 2 Xeon CPUs

For backups I run a script which creates a ZFS snapshot and then rdiffs the qcow2 files. This evening the server became close to unresponsive. Once I managed to log in, I saw this:

Code:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
 6626 root      20  0 2526920 1.401g  2808 S 231.5  4.5 370:28.37 kvm                                                                                             
 1104 root      1 -19      0      0      0 R  90.3  0.0  1221:11 z_wr_iss                                                                                         
 1107 root      0 -20      0      0      0 R  90.3  0.0  67:06.84 z_wr_int_1                                                                                       
 1175 root      20  0      0      0      0 R  90.3  0.0  1105:08 txg_sync                                                                                         
12554 root      20  0  18996  10184  1388 R  90.3  0.0 138:10.60 rdiff                                                                                           
13574 root      20  0 5373656 3.733g  2304 S  90.3 11.9  6339:06 kvm                                                                                             
31375 root      1 -19      0      0      0 R  90.3  0.0  0:26.16 z_wr_iss                                                                                         
31430 root      1 -19      0      0      0 R  90.3  0.0  0:20.30 z_wr_iss                                                                                         
31462 root      1 -19      0      0      0 R  90.3  0.0  0:06.71 z_wr_iss                                                                                         
31463 root      1 -19      0      0      0 R  90.3  0.0  0:03.54 z_wr_iss                                                                                         
31464 root      1 -19      0      0      0 R  90.3  0.0  0:02.51 z_wr_iss                                                                                         
31470 root      1 -19      0      0      0 R  90.3  0.0  0:02.43 z_wr_iss                                                                                         
31471 root      1 -19      0      0      0 R  90.3  0.0  0:02.50 z_wr_iss                                                                                         
 3252 root      20  0 1449640 870372  2248 S  79.1  2.6 405:42.81 kvm                                                                                             
31476 root      20  0  25864  3048  2392 R  11.3  0.0  0:00.02 top

Please note the CPU usage values. Also, in the web interface I see totally strange CPU usage values for the VMs. At first I thought a fan had died and the CPUs were throttling massively, but I checked the BMC as well as the thermal throttle counters. The Xeons look fine and run at full power.
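For anyone reading along, what ZFS itself is doing during the backup can be watched with plain pool and ARC statistics (nothing here is specific to this setup):

Code:

# per-vdev I/O in 5-second intervals while the backup runs
zpool iostat -v 5

# current ARC size vs. its configured maximum
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats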

What is happening here?

Regards,

Andreas

Migrate between hard drives in VM

Hi all,

I'm a bit confused about moving data between hard disks within one VM.

The VM has 2 hard drives:
1. local:100/vm-100-disk-1.vmdk (SATA)
2. local:100/vm-100-disk-2.qcow2 (VIRTIO)

1. Any idea how to do a 1:1 migration from the first drive to the second?
2. If it's not possible, why not? Is there another way, such as creating a new VM and moving the data there?
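One way I could imagine doing the 1:1 copy is on the Proxmox host with the VM shut down, converting the first image straight over the second one (a sketch; the paths assume the default local storage layout, and the target disk must be at least as large as the source):

Code:

cd /var/lib/vz/images/100
qemu-img convert -f vmdk -O qcow2 vm-100-disk-1.vmdk vm-100-disk-2.qcow2

Is that a sane approach, or is there a better supported way?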

Backup suddenly stopped working

On a node that had been running without problems, the backup suddenly stopped working.

It backs up to a connected USB disk.
The disk is fine; I unmounted and remounted it.
It shows up fine in df.

I removed the disk from 'Storage' on the main node and added it again.
No errors.

But nothing happens.
No backup.
No error from cron, no email from the backup.
Syslog shows nothing.
Totally dead.

I have no idea where to look, hopefully someone can put me on the right track.
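For reference, the job can also be kicked off by hand to see any output directly (a sketch; the VM ID and the storage name are assumptions):

Code:

vzdump 100 --storage usb-backup --mode snapshot --compress lzo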

Red machine icon

Hi,

I stumbled upon a problem I've never had before and I did not find any useful information. I currently have a 6-node PVE 3.4 cluster (previously 7 nodes). One machine got fenced due to a stuck ZFS kernel problem, and since then half of my cluster is shown as offline (red machine icon). The VMs are still running and I can start, stop, and migrate them via the command line, but the GUI is not working as expected. The GUI still updates hardware information like used RAM and CPU, but does not update the RRD graphs; these have been blank since the fencing of the other node.

There are no obvious entries in the log files I checked; it seems only the GUI is not working.

clustat shows a normal cluster:
Code:

Cluster Status for cluster @ Tue Nov  3 09:11:20 2015
Member Status: Quorate


 Member Name                                                    ID  Status
 ------ ----                                                    ---- ------
 proxmox1                                                            1 Online, rgmanager
 proxmox2                                                            2 Online, rgmanager
 proxmox3                                                            3 Online, Local, rgmanager
 apu-01                                                              4 Online, rgmanager
 apu-02                                                              5 Online, rgmanager
 proxmox4                                                            7 Online, rgmanager

and so does pvecm:

Code:

Version: 6.2.0
Config Version: 62
Cluster Name: cluster
Cluster Id: 13364
Cluster Member: Yes
Cluster Generation: 2988
Membership state: Cluster-Member
Nodes: 6
Expected votes: 6
Total votes: 6
Node votes: 1
Quorum: 4 
Active subsystems: 7
Flags:
Ports Bound: 0 11 177 
Node name: proxmox3
Node ID: 3
Multicast addresses: 239.192.52.104
Node addresses: 10.192.0.243

Here is my pveversion -v:

Code:

root@proxmox3 ~ > pveversion  -v
proxmox-ve-2.6.32: 3.4-160 (running kernel: 3.10.0-11-pve)
pve-manager: 3.4-9 (running version: 3.4-9/4b51d87a)
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-3.10.0-11-pve: 3.10.0-36
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-18
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I have already tried restarting some services like pve-manager, pveproxy, pvedaemon and pvestatd.
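On PVE 3.x those are plain init scripts, so concretely the restarts look like this:

Code:

service pvestatd restart
service pvedaemon restart
service pveproxy restart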

How to import ZFS root pool by-id?

During the Proxmox 4.0 installation I installed root on a mirrored vdev in rpool. The installer appears to have created the pool using device names (/dev/sdX) instead of by-id. Is there any way to force a re-import of the root zpool by-id? I know that for a storage pool one can simply:

Code:

# zpool export rpool
# zpool import -d /dev/disk/by-id rpool

Is this possible for a root zpool? Barring that, is it possible to change the ZFS init.d script (or a similar startup script) to import the pool by-id instead of by device name?


In other words, zpool status returns this:

Code:

root@proxmox:/# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Nov  2 18:56:29 2015
config:


        NAME        STATE    READ WRITE CKSUM
        rpool      ONLINE      0    0    0
          mirror-0  ONLINE      0    0    0
            sdb2    ONLINE      0    0    0
            sdc2    ONLINE      0    0    0

How can I get it to return something like this:

Code:

root@proxmox:/# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Nov  2 18:56:29 2015
config:

        NAME                                STATE    READ WRITE CKSUM
        rpool                                ONLINE      0    0    0
          mirror-0                          ONLINE      0    0    0
            ata-ST4000DM000-1F2168_XXXXXXXX  ONLINE      0    0    0
            ata-ST4000DM000-1F2168_XXXXXXXX  ONLINE      0    0    0
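One approach that is often suggested for ZFS-on-Linux root pools (not Proxmox-specific, so treat it as an assumption rather than a confirmed fix) is to tell the initramfs where to look for devices and then rebuild it:

Code:

# /etc/default/zfs
ZPOOL_IMPORT_PATH="/dev/disk/by-id"

# rebuild the initramfs so the setting is picked up at boot
update-initramfs -u -k all

Would that be safe on a Proxmox root pool?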

SFScon15 in Bolzano (IT) on November 13, 2015

Proxmox will be at the South Tyrol SFScon15 in Bolzano this year presenting the new features of Proxmox VE 4.0.

The South Tyrol Free Software Conference (SFScon) is the annual conference dedicated to Free Software in South Tyrol attracting visitors from Northern Italy, Austria and Switzerland.

Read more: https://www.sfscon.it/talks/proxmox-...vironment-4-0/

  • WHAT: SFScon15 - Free Software Conference
  • WHEN: 13 November 2015
  • WHERE: Via Siemens, 19 - 39100 Bolzano/Bozen (IT)

https://www.sfscon.it , @SFScon
__________________
Best regards,

Martin Maurer
Proxmox VE project lead

Proxmox 3.4: Booting from ZFS failed since update - No pool imported...

Hi all,

I installed Proxmox 3.4 on a RAID 1 ZFS mirror and it worked fine until the recent ZFS update.
Now the boot process stops with
Code:

No pool imported. Manually import the root pool
at the command prompt and then exit

To work around this, I export the rpool and then import it again:
Code:

zpool export rpool
zpool import -R /root -N rpool

Then the next error comes up.
Code:

Error: Failed to mount root filesystem 'rpool/ROOT/pve-1'
The hint works:
Code:

mount -o zfsutil -t zfs rpool/ROOT/pve-1 /root
After exiting, the normal boot sequence continues and the system comes up.

The question is: how can this be solved so that the system comes up without manual intervention?
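One workaround I have seen suggested (a sketch, not yet verified on this system) is to give the devices more time before the pool import by adding a rootdelay to the kernel command line:

Code:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# then regenerate the grub configuration
update-grub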

Here is some data on the system:

Code:

# zpool status rpool
  pool: rpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Nov  2 11:41:22 2015
config:

        NAME        STATE    READ WRITE CKSUM
        rpool      ONLINE      0    0    0
          mirror-0  ONLINE      0    0    0
            sdc3    ONLINE      0    0    0
            sdd3    ONLINE      0    0    0

errors: No known data errors

Code:

# parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA SATA SSD (scsi)
Disk /dev/sdc: 64.0GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start  End    Size    File system  Name                  Flags
 1      1049kB  2097kB  1049kB              Grub-Boot-Partition  bios_grub
 2      2097kB  136MB  134MB  fat32        EFI-System-Partition  boot, esp
 3      136MB  64.0GB  63.9GB  zfs          PVE-ZFS-Partition

Code:

# dpkg -l zfsutils zfs-initramfs
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                        Version            Architecture      Description
+++-===========================-==================-==================-===========================================================
ii  zfs-initramfs              0.6.5-1~wheezy    amd64              Native ZFS root filesystem capabilities for Linux
ii  zfsutils                    0.6.5-1~wheezy    amd64              command-line tools to manage ZFS filesystems


sd 2:0:0:0: [sda] abort (Proxmox 3.4)

Hi

I have a problem with one VM: when the backup runs, the KVM guest fails and I receive this error log:

[mar nov 3 04:16:07 2015] sd 2:0:0:0: [sda] abort
[mar nov 3 04:17:08 2015] INFO: rcu_sched detected stalls on CPUs/tasks: { 4} (detected by 7, t=60002 jiffies, g=33686286, c=33686285, q=0)
[mar nov 3 04:17:08 2015] sending NMI to all CPUs:
[mar nov 3 04:17:08 2015] NMI backtrace for cpu 7
[mar nov 3 04:17:08 2015] CPU: 7 PID: 0 Comm: swapper/7 ve: 0 Tainted: PF O-------------- 3.10.0-233.1.2.lve1.3.33.4.el7.x86_64 #1 ovz.4.1
[mar nov 3 04:17:08 2015] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
[mar nov 3 04:17:08 2015] task: ffff8801f55a4150 ti: ffff8801f55b4000 task.ti: ffff8801f55b4000
[mar nov 3 04:17:08 2015] RIP: 0010:[<ffffffff8104621a>] [<ffffffff8104621a>] native_write_msr_safe+0xa/0x10
[mar nov 3 04:17:08 2015] RSP: 0018:ffff8801ffdc3d80 EFLAGS: 00000046
[mar nov 3 04:17:08 2015] RAX: 0000000000000400 RBX: 0000000000000007 RCX: 0000000000000830
[mar nov 3 04:17:08 2015].....


The problem only appears on one of the KVM guests in our cluster, and the VM is not accessible for 5-10 minutes.

I have found this post that seems to describe our problem:
http://unix.stackexchange.com/questi...ing-task-abort

I have tried it, but so far with no better result.
Has anyone had a similar problem?

Thanks a lot

Proxmox 4.0: Windows Server Backup

Hi Guys,

New customer, new server, new proxmox install, new setup.

Problem: for backing up Windows Servers we use Windows Server Backup. It's not bad, but not very good either. That is not the problem, though. Normally we add a disk to the virtual machine and dedicate it to the backup program. Sometimes it's on the NAS, sometimes on a USB drive, and so on. In this example it is an NFS mount on a Synology DS415 NAS. This kind of setup has always worked on Proxmox 3.x.

When I add the drive and complete the wizard, it gives the error: "Formatting the disk has failed. Please ensure the disk is online and accessible. Incorrect function."

I already tried formatting it manually, which works!
I tried putting data on it, which works!
I tried changing the location of the qcow2 file to local storage, which does not work!

So my guess is that Proxmox 4.0 does something different with the virtual drives, which makes our backup utility go crazy.
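For context, the extra disk is added to the VM on the Proxmox side roughly like this on the CLI (a sketch; the VM ID, storage name and size are assumptions):

Code:

# add a second disk for Windows Server Backup on the NFS storage
qm set 101 -sata1 synology-nfs:200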

pveversion -v output:
proxmox-ve: 4.0-19 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-19
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-20
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

Does anyone have any thoughts?

Thanks in advance,

Cliff.

Install on Debian Jessie: Name resolution issue

I'm trying to set up Proxmox with Debian 8 on a laptop to create a mobile lab running a few Windows servers.

I went ahead and tried to set it up on Debian 8 with these instructions:

EDIT: Can't post links. Google "install proxmox on debian jessie" to get the link.

However, I’ve hit a snag. I replaced this line in /etc/hosts:

127.0.1.1 debm6400

With this one:

192.168.1.22 debm6400

and made sure that the hostname wasn't on any other line. However, I get this when I run getent:

$ getent hosts $(hostname)
::1 debm6400

However, looking it up by IP address works correctly.

$ getent hosts 192.168.1.22

192.168.1.22 debm6400

The “::1” result sure looks like IPv6 (which I'm not at all familiar with) and I’m not really sure what the getent command does. Looking at the man page, it looks like I might need to edit nsswitch.conf and put files ahead of dns. Or maybe I need to just rip IPv6 out by the roots. Do I really need an FQDN? Should I just ignore this discrepancy and proceed with installation?
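For comparison, the layout I believe the Proxmox-on-Jessie instructions expect is roughly this, with the hostname resolving to the real IP and carrying an FQDN (the domain here is an assumption):

Code:

# /etc/hosts
127.0.0.1       localhost.localdomain localhost
192.168.1.22    debm6400.localdomain debm6400 pvelocalhost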

Proxmox 4.0 CentOS 6 OpenVZ to LXC Issues.

Hi,

I have recently installed Proxmox 4 and converted a few CentOS 6 OpenVZ backups to LXC.

These containers are running correctly, with the exception that I cannot SSH into them.

When I do SSH to them I get this message:

PTY allocation request failed on channel 0

I can SSH into them if I do this:

ssh root@192.168.100.21 "/bin/bash -i"

It seems to only affect the CentOS 6 OpenVZ containers that I have converted. They were originally created with the precreated 64-bit templates found here: https://openvz.org/Download/template/precreated
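Since the PTY error points at the pseudo-terminal setup, one thing that can be checked using the workaround above is whether devpts is mounted inside a converted container (just a diagnostic sketch, reusing the same IP as above):

Code:

ssh root@192.168.100.21 "mount | grep devpts; ls -ld /dev/pts"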

Has anyone else had this issue? Any ideas on a fix, please?

Regards
Daniel Parker

Urgent: Proxmox/Ceph Support Needed

Hello,

Are there any professional, paid Proxmox/Ceph support people on the forum who could assist us? We would prefer US-based, but we really need help quickly.

Please email me at eric.merkel at sozotechnologies.com or call me at 317-203-9222 if you can help.

Our Ceph cluster has lost 33% of its disks and it is killing our I/O on all the servers. We have max backfills etc. turned down to 1, but it is not helping. We are wondering if we can change the number of copies from 3 to 2 on the fly in Ceph?
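For what it's worth, the replica count is a per-pool setting, so the change would look roughly like this (the pool name is an assumption, and whether reducing it is safe for this data is exactly what we'd want the paid support to confirm):

Code:

ceph osd lspools
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1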


Best regards,
Eric

PVE 4.0 HA graphical weirdness under chrome

I noticed something interesting after setting up my PVE 4.0 cluster (non-subscription, with the latest public updates) with Ceph and software HA (which is working impressively well, BTW). If I add a client VM to HA monitoring and then select it in the web GUI, I cannot click away from it. In other words, anything else I click on results in the highlighted item reverting back to the view of that VM after about 2 seconds. I tried stopping and starting the VM and the browser to no avail. Only rebooting the node on which the GUI was being accessed seems to address the problem.

Here's the really interesting part: it only happens in Chrome (Win64). I tried the same under a recent Firefox build and had no issues. Any idea what might cause behavior like this?

Thanks.
Dan

Proxmox 4.0 non-UEFI boot problem

I am trying to install Proxmox 4.0 with a ZFS root. The wiki states that ZFS root is not supported on UEFI, so I'm trying to boot the Proxmox installer in legacy mode. First, I turned UEFI boot off in the BIOS and tried to boot from a USB flash drive, which did not work: it simply did not boot and fell back to the BIOS. The same USB stick works fine in UEFI mode. Then I tried burning a DVD, but it also did not work in legacy mode: I get "error: file '/boot/grub/i386-pc/efi_gop.mod' not found" and then the monitor shows an "input mode is not supported" message on its OSD. I even tried a different DVD drive, which did not help. Installing in UEFI mode works fine with the DVD as well (though not with ZFS root, of course). To verify that this is not a BIOS problem, I prepared a USB stick with an Ubuntu live image and it worked perfectly in legacy mode.
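In case it's relevant, as far as I know the recommended way to prepare the stick is to write the ISO directly to the device, roughly like this (the ISO file name and the device are placeholders; double-check the device name before running dd):

Code:

dd if=proxmox-ve_4.0.iso of=/dev/sdX bs=1M
sync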
Any suggestions would be appreciated!