Channel: Proxmox Support Forum

Failed to sync data - can't migrate 'vm...' - storagy type 'lvm' not supported

Proxmox 3.4-3, 3 systems in cluster.
Two problems - the easy one is the typo in the error message ("storagy").
The not-so-easy one: my VMs use LVM and/or ZFS block storage on local machines. The VMs are stopped (offline migration), but I can't migrate a VM to another machine. The error message is:
Failed to sync data - can't migrate 'vm...' - storagy type 'lvm' not supported

And similar error for zfs.
If I move the storage to a local file (raw or qcow2), then offline migration works! I converted all my local qcow2/raw disk images to ZFS block storage after upgrading to 3.4, thinking there might be a performance improvement, plus I got to learn more about ZFS. What is the point of ZFS (or LVM) block device storage if I can't (offline) migrate VMs - assuming this behavior is intentional? I thought migration worked with LVM in previous versions, but I could be mistaken: previously some of my VMs used local file storage and some used LVM block storage, and I may never have tried migrating an LVM-based one before. I only tried it now because it failed with ZFS storage.
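For reference, a possible workaround sketch based on the observation above: move the disk to file-based storage, migrate, then move it back on the target. The VM ID, disk name, and storage names here are placeholders, not taken from the post.

Code:

# move the disk onto file-based storage so offline migration is allowed
qm move_disk 100 virtio0 local --format qcow2
qm migrate 100 othernode
# on the target node, move the disk back onto block storage
qm move_disk 100 virtio0 local-zfs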

Even with this glitch, Proxmox rocks. Thank you.

Snapshot files location

Hi All,
Hope everyone is doing well.
I am using Proxmox 3.3, and I took multiple snapshots of my test VM.

My doubt is: where are these snapshot files stored? Which storage space do the snapshots consume?
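Not an answer, just a sketch of where one might look, assuming the common cases: on qcow2 disks the snapshots live inside the image file itself, while on LVM or ZFS they live on the same volume group or pool. The path below uses a placeholder VM ID.

Code:

# qcow2: snapshots are internal to the disk image
qemu-img snapshot -l /var/lib/vz/images/<vmid>/vm-<vmid>-disk-1.qcow2
# LVM / ZFS: snapshots show up on the same volume group / pool
lvs
zfs list -t snapshot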

Thanks so much in advance.

Regards
Venkata

Error pve-cluster[main] crit: Unable to get local IP address

I am very new to the Debian/Proxmox/server world, but very keen.
I have Proxmox installed on Debian 7 (wheezy) using this tutorial. Everything went fine up to the point of booting into PVE; then, at the step where it says
  • Connect to the Proxmox VE web interface

    Connect to the admin web interface (https://I put my address here :8006) and configure the vmbr0 and review all other settings, finally reboot to check if everything is running as expected.

When I try to navigate to the Proxmox GUI, the page is not available.
[Attached images: 'page not available' browser error and 'not fully installed' error screenshots]
Any help would be appreciated.
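For what it's worth, the "Unable to get local IP address" error from pve-cluster usually means the node's hostname does not resolve to its IP address through /etc/hosts. A sketch of a working entry, assuming hostname 'pve' and address 192.168.1.10 (both placeholders):

Code:

# /etc/hosts
127.0.0.1       localhost
192.168.1.10    pve.localdomain pve

# then restart the cluster filesystem:
service pve-cluster restart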


USB Device Missing After VM Reboot

I am trying to add an external USB 3.0 hard drive to a Windows 2008 R2 VM.

Following the steps in USB physical port mapping:

Code:

:.../# qm monitor 100
qm> info usbhost
  Bus 10, Addr 5, Port 2, Speed 5000 Mb/s
    Class 00: USB device 1234:5678, Device Name 5678
  Bus 1, Addr 2, Port 2, Speed 480 Mb/s
    Class 00: USB device 8765:4321, USB Flash Drive
qm> q

/etc/pve/qemu-server/100.conf

Code:

...
startup: order=2, up=600, down=1200
usb0: host=10-2
vga: qxl
...

Shut down the VM, start it again, then check once more:

Code:

:.../# qm monitor 100
qm> info usbhost
  Bus 1, Addr 2, Port 2, Speed 480 Mb/s
    Class 00: USB device 8765:4321, USB Flash Drive
qm> q



The USB 3.0 device (Bus 10, Addr 5, Port 2, Speed 5000 Mb/s ...) is missing.
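In case it helps, the config can also reference the device by vendor:product ID instead of bus-port, which survives the device re-enumerating at a different address after a reboot. The IDs below are the ones shown in the 'info usbhost' output above:

Code:

# /etc/pve/qemu-server/100.conf
usb0: host=1234:5678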

Dell R730 Proxmox 3.4

Hello All
I have an issue installing Proxmox 3.4 (also 3.3) on my brand-new Dell R730 server.
The Proxmox 3.4 installer cannot find the boot disk (the RAID virtual disk).
My RAID controller is a PERC H730P Mini.
I am on the latest firmware.
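For reference, a quick diagnostic one could run from the installer's debug shell to see whether the kernel detects the controller at all (a sketch, not a fix):

Code:

lspci | grep -i raid
lsmod | grep megaraid_sas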

Any Help please.

Thank you all for your time
Koby Peleg Hen

I/O perf problem with LSI MegaRAID SAS 9261-8i

Hello folks,


We are facing some very problematic I/O performance issues with the LSI MegaRAID SAS 9261-8i.
It is configured as RAID-6, and we get the following performance (from the Proxmox master host):

Code:

proxmaster:~$ dd if=/dev/urandom of=test bs=1024k count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 15.4111 s, 6.8 MB/s



This is half the throughput of another host with a very similar configuration on older hardware.
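Worth noting: /dev/urandom is CPU-bound and often caps out well below disk speed, so the test above may be measuring the CPU rather than the controller. A sketch of a test that isolates the array (file name is a placeholder):

Code:

# sequential write, bypassing the page cache
dd if=/dev/zero of=test bs=1024k count=1000 oflag=direct
# sequential read
dd if=test of=/dev/null bs=1024k iflag=direct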


FW Package Build: 12.7.0-0007

BIOS Version : 3.13.00
FW Version : 2.70.03-0862

The ProxMox host is configured with an ext3 filesystem.

Nothing special besides that.


Any advice or pointers will be welcome.

Firewall help (should be fairly quick....)

Proxmox 3.1, dist-upgraded over time to 3.4 (pve-manager/3.4-3/2fc72fee, running kernel 2.6.32-37-pve).

I have finally taken the plunge with a free afternoon and set up IP Sets, Security Groups, and Rules (in the 'Datacenter' view). It's a 2-host cluster, one simply a warm spare of the other. This will complement the existing router and host-based firewalls, and is primarily designed to keep any one compromised VM/CT from becoming a launchpad to attack the other VMs.

But - I can't seem to get the firewall to enable?

VM/CT > 'Hardware/Network': Firewall ticked.
DC > Firewall > Options: Firewall enabled.

I have added each VM/CT as an IP Set (some have more than 1 IP, so this keeps them all together).
I have then added each VM/CT to a Security Group, referencing the IP Set. These then have rules within, e.g. 'allow SSH from management subnet'.
Finally, I add each Security Group to the Rules tab, and enable there as well.

Nothing seems to happen: I can disable the Rule or Security Group, and I can still access everything as before.

/etc/pve/firewall/cluster.fw shows enabled, and all the rules appear to be within it. There are no errors in the logs that I can see. Just nothing happens.

What really obvious step am I missing here? (I have followed the Firewall Wiki to get to this point).

Thanks in advance.

UPDATE: Missed an 'Enable Firewall' option on the VM (boy, there are a lot of tick-boxes), but it also seems I have to add the Security Group to the VM's 'Firewall Rules' tab to get it to work. I thought the Datacenter Firewall Rules tab would apply globally - perhaps I misunderstood this?

UPDATE 2: I think this is mostly solved by my last update (and waiting 30 seconds for conntrack to expire). Just curious what the Datacenter Rules view is for, if individual rules still need to be added on each VM or CT. Maybe I just misunderstood its purpose! :)
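For anyone else who lands here: per-VM rules live in each guest's own firewall file, separate from the datacenter-level cluster.fw. A sketch of what ends up in it after ticking the options described above (VM ID and group name are placeholders):

Code:

# /etc/pve/firewall/100.fw
[OPTIONS]
enable: 1

[RULES]
GROUP webserver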

Moving KVM VM raw image to Proxmox

I have an Ubuntu KVM server running a Linux server guest (Jira). It is working well, and I want to transfer this guest to a new Proxmox 3.4 host. The host is also working well and is already running some VMs. These Proxmox VM disk images are stored in a ZFS pool called pool01. I can see them listed as disk images in the Proxmox web viewer.

How can I transfer my Ubuntu KVM disk image (.qcow2) to Proxmox, into ZFS pool01?

If I select ZFS pool01 in the web GUI and look at its content, I see the disk images of the existing VM guests. If I try to upload, I am only offered these possibilities: ISO image, vzdump backup file, OpenVZ template.

I do not seem to find any *.qcow2 files in my Proxmox file system, so obviously I am missing something. Any help will be greatly appreciated.
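For reference, a sketch of one possible approach (VM ID and zvol name are placeholders): ZFS pool storage holds raw zvols rather than qcow2 files, so one could create the VM in the GUI with a disk of at least the image's size on pool01, then overwrite the resulting zvol with the qcow2 contents:

Code:

# copy the qcow2 over from the Ubuntu host, then write it into the zvol
# (the zvol must be at least as large as the qcow2's virtual size)
qemu-img convert -O raw /root/jira.qcow2 /dev/zvol/pool01/vm-105-disk-1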

Regards

J Blain

How to grant non-root users privileges to do most things?

Okay, forgive me if this is a silly question, but I've googled and looked at the online help and come up empty. When I have used Proxmox before, I just used root. Now I need to deploy a Proxmox server for me and two other people to use.

I created the Unix accounts for them with passwords. Check. I added a group 'Users' in the Proxmox GUI and created Proxmox users for each of the three accounts, placing them in the 'Users' group. Check. Now what?

The only obvious place I see to apply these is in the Datacenter view, where I can add a Permission entry for the various items there, listing 'Users' and granting 'PVEAdmin'. So far, so good. Something is still missing, though: if I log out as root and log in as one of the other users, the 'Create VM' button at the top right of the screen is grayed out. I assume there is (at least) one step I left out, but it isn't obvious (to me at least) what.
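For reference, the CLI equivalent of those GUI steps, as a sketch (the user name 'alice' is a placeholder; the ACL path may need adjusting to cover whichever nodes and storages the users should be able to use):

Code:

pveum groupadd Users               # skip if already created in the GUI
pveum usermod alice@pam -group Users
# grant the group PVEAdmin on the whole tree
pveum aclmod / -group Users -role PVEAdmin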

VMA restore fails after successful backup.

We have been using Proxmox for several years now, and it's been working pretty well. However, we recently went to clone a VM (which we have been able to do successfully in the past) and have encountered an issue. The backup (performed using vzdump --tmpdir=/mnt/pve/vm_backupsSAN/tmp/ --dumpdir=/mnt/pve/vm_backupsSAN/ 188; we have also tried with and without compression and stdexcludes) works and does not return any error messages. The location it is backing up to is an NFS share.

Verifying or restoring the backup has yielded the same failure in different places based on whether or not compression is enabled, and we have tried all backups on two different nodes in our cluster to confirm that it's not machine-specific.

The latest backup we tried, using the backup command above, gave the following result:

Code:

root@node2:~# vma verify -v /mnt/pve/vm_backupsSAN/vzdump-qemu-188-2015_03_30-11_55_45.vma
CFG: size: 310 name: qemu-server.conf
DEV: dev_id=1 size: 16106127360 devname: drive-virtio0
DEV: dev_id=2 size: 107374182400 devname: drive-virtio1
CTIME: Mon Mar 30 11:55:49 2015
progress 1% (read 1234829312 bytes, duration 3 sec)
progress 2% (read 2469658624 bytes, duration 8 sec)
progress 3% (read 3704422400 bytes, duration 14 sec)

** ERROR **: verify failed - wrong vma extent header chechsum
aborting...
Aborted

I have seen other threads where this was a memory error, but those appear to be cases where the users only had one system to work with. Another suggestion was disabling compression, but that has only changed the location at which the checksum error is found.
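One hedged way to narrow down whether the NFS transport is corrupting the archive (paths are the ones from the post): checksum the file in place on the share, copy it locally, and compare, ideally from both nodes:

Code:

md5sum /mnt/pve/vm_backupsSAN/vzdump-qemu-188-2015_03_30-11_55_45.vma
cp /mnt/pve/vm_backupsSAN/vzdump-qemu-188-2015_03_30-11_55_45.vma /tmp/
md5sum /tmp/vzdump-qemu-188-2015_03_30-11_55_45.vma
# differing sums across copies or nodes would point at the network/NFS layer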

web interface not working after upgrade

Hello,

I have upgraded my Proxmox system to the latest version. After the upgrade, I was not able to log in to Proxmox via the web interface.

Code:

# pveproxy start
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "fr_FR.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
start failed - can't aquire lock '/var/run/pveproxy/pveproxy.pid.lock' - daemon already started (pid = 4817)


I tried to test starting a VM via the command 'qm start 101', but got the same locale error:
Code:

# qm start 101
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.utf8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

Thanks for help

pve-firewall prevents VM starts from web interface (also noVNC hangs)

Proxmox v3.4 nodes in a cluster. I'm attempting to use pve-firewall to secure access to the nodes.

The nodes are on 10.10.10.1 and 10.10.10.2. I'm attempting to manage the cluster from another machine on 10.10.2.1.

I've enabled the firewall as per https://pve.proxmox.com/wiki/Proxmox_VE_Firewall and wish to add the entire 10.10.0.0/16 to the "management" IPSET.
So my cluster.fw looks like this:

Code:

[OPTIONS]
# enable firewall (cluster wide setting, default is disabled)
enable: 1

[IPSET management]
10.10.0.0/16

I'm able to use the web interface and SSH into all cluster nodes. But starting a VM fails: the VM icon changes to "white" (and the VM status is running), but after a while the task fails with
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 ...[snipped]... failed: got timeout

Attempting to use noVNC while the VM is being started causes the window to stall at the "Starting VNC handshake" message. Note that it doesn't say "Failed to connect to server (code: 1006)", so the VM is being started?

Accessing other, already running VMs with noVNC works just fine. I'm able to STOP already running VMs. Once I try to start them again, the same issue occurs.

Once I disable pve-firewall, I am able to start VMs with no issues.
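For reference: defining the IPSET alone does nothing until rules reference it, and VM consoles also need the VNC port range open on the nodes. A sketch of [RULES] entries for cluster.fw, using the +management syntax from the firewall wiki (the port choices are assumptions to verify against your setup):

Code:

[RULES]
IN SSH(ACCEPT) -source +management
IN ACCEPT -source +management -p tcp -dport 8006        # web interface
IN ACCEPT -source +management -p tcp -dport 5900:5999   # VNC consoles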

How to take backups on external storage or disks

Hi All,
Hope everyone is doing well.
I need to take backups on external hard disks (not the local disk) and on external storage such as iSCSI storage.
The problem is: when we add a second or third HDD to Proxmox 3.3, it is taken as an LVM group, and for an LVM group we are unable to choose content types like ISO images or backups. We can choose those only for Directory or NFS type storage. The same problem occurs with iSCSI storage.

Is there any other way to add an external HDD or iSCSI storage space for backups?
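For reference, a sketch of one approach: format the extra disk with a filesystem, mount it, and add it as a Directory storage with 'backup' content. The device, mount point, and storage name below are placeholders.

Code:

mkfs.ext4 /dev/sdb1
mkdir -p /mnt/backup-disk
mount /dev/sdb1 /mnt/backup-disk
pvesm add dir backup-disk --path /mnt/backup-disk --content backup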

Thank you so much in advance.

Regards
Venkata

Configure proxmox with ceph

Hi everyone,
I am new here, and I think I have a configuration problem. I set the hostname, IP, netmask, gateway, and DNS, and I want to run "apt-get update", but it fails. What could possibly have gone wrong? I set a proxy, because I have to. I am using a switch for this, because I want to make a cluster from two desktops and install Ceph on it.

Kernel 3.10.x hangs on Boot with root on ZFS

I am evaluating Proxmox 3.4 with ZFS on KVM (nested virtualisation).
I installed Proxmox 3.4 on ZFS (RAID 0) successfully.

To test snapshot and rollback on the ZFS root filesystem, I installed kernel 3.10.0-8-pve. With kernels 3.10.0-8-pve and 3.10.0-7-pve, boot hangs when accessing the zpool.
Kernel 2.6.32-37-pve works fine.

I tried rootdelay=30 without any change.
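(For context, a sketch of where that parameter is typically set, in case anyone wants to reproduce the test:)

Code:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=30"
# then:
update-grub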

Has someone experienced similar problems?

Code:

root@pve34-zfs-1:~# parted /dev/sda print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 8590MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
 1      1049kB  2097kB  1049kB               Grub-Boot-Partition   bios_grub
 2      2097kB  136MB   134MB   fat32        EFI-System-Partition  boot, esp
 3      136MB   8589MB  8453MB  zfs          PVE-ZFS-Partition

root@pve34-zfs-1:~# zpool list
NAME    SIZE  ALLOC  FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  7.81G  2.00G  5.81G    25%  1.00x  ONLINE  -
root@pve34-zfs-1:~# zfs list -t all
NAME                                USED  AVAIL  REFER  MOUNTPOINT
rpool                              2.93G  4.76G  152K  /rpool
rpool/ROOT                          1.22G  4.76G  144K  /rpool/ROOT
rpool/ROOT/pve-1                    1.22G  4.76G  1.13G  /
rpool/ROOT/pve-1@test2              17.1M      -  920M  -
rpool/ROOT/pve-1@rear-install      20.0M      -  927M  -
rpool/ROOT/pve-1@kernel-update1      232K      -  921M  -
rpool/ROOT/pve-1@kernel-update-3.1  248K      -  921M  -
rpool/cttest1                        797M  1.22G  797M  /rpool/cttest1
rpool/swap                          953M  5.69G  100K  -
rpool/vm-100-disk-2                  72K  4.76G    72K  -
root@pve34-zfs-1:~#

[Attached image: Proxmox34-3.10.0-8-pve-hang.png - screenshot of the boot hang]

Setting up Citrix Netscaler VPX Express on Proxmox 3.4 Cluster

Dear all,

I have a working Proxmox 3.4 2-node cluster. Everything works fine so far. Great work, developers! ;)

Now I wanted to add a prebuilt KVM image, the "Citrix Netscaler VPX Express Appliance 10.5", and that is where the problem starts.

I uploaded the *.raw image of the HDD to the local storage of one cluster node.
Then I inspected the KVM description file (it's XML) for the appliance to set the configuration parameters correctly.
The controller should be "IDE", the network card "virtio". No problem so far. I created a new KVM machine and added the HDD to it.
But when I start the machine, it hangs at .... default/loader.conf .... in the bootloader.
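For reference, a sketch of the relevant VM config lines under the assumptions above (VM ID, MAC address, and image filename are placeholders):

Code:

# /etc/pve/qemu-server/101.conf
ide0: local:101/netscaler-vpx.raw
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
ostype: other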

As far as I know, Netscaler is based on FreeBSD 6.
I googled around but found nothing helpful.

To test that the image itself works, I installed Debian wheezy with plain KVM on another machine and imported the appliance there successfully.

Could anybody help me get Netscaler working on Proxmox?

Thanks in advance

Anduril

Installing pve-kernel-3.10.0-8-pve fails

Installing pve-kernel-3.10.0-8-pve fails due to build errors in iscsitarget-1.4.20.2.
How can I resolve this?

Code:

DKMS make.log for iscsitarget-1.4.20.2 for kernel 3.10.0-8-pve (x86_64)
Tue Mar 31 17:18:12 CEST 2015
make: Entering directory `/usr/src/linux-headers-3.10.0-8-pve'
  LD      /var/lib/dkms/iscsitarget/1.4.20.2/build/built-in.o
  LD      /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/built-in.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/tio.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/iscsi.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/nthread.o
  CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o
/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c: In function ‘worker_thread’:
/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:76:3: error: implicit declaration of function ‘get_io_context’ [-Werror=implicit-function-declaration]
/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:76:21: warning: assignment makes pointer from integer without a cast [enabled by default]
cc1: some warnings being treated as errors
make[2]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o] Error 1
make[1]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel] Error 2
make: *** [_module_/var/lib/dkms/iscsitarget/1.4.20.2/build] Error 2
make: Leaving directory `/usr/src/linux-headers-3.10.0-8-pve'
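The failure is in the iscsitarget DKMS module build, not in the kernel package itself. If the IET iSCSI target is not actually needed, a hedged workaround is to remove the DKMS package and let the interrupted kernel install finish:

Code:

apt-get remove iscsitarget-dkms
apt-get -f install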

vm crash during backup snapshot on nfs

Hi, I use Proxmox and it's very useful and cool! :)

I have two servers (with Proxmox) in a datacenter, but multicast is not supported :(, so I can't use a cluster.
But I configured one server as an NFS server (directly on Proxmox/Debian), then added a new NFS storage on the second server (as the client) directly in the Proxmox web GUI.
It works OK; in fact I see the "content" correctly.
On the first server I added a new storage (Directory) with the same path as the NFS "share", and I see the same "content".
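For reference, the resulting storage definitions would look roughly like this in /etc/pve/storage.cfg (server address and export path are placeholders, not taken from the post):

Code:

# on the second server (NFS client)
nfs: nfs-d3
        server 10.0.0.3
        export /srv/nfs
        path /mnt/pve/nfs-d3
        content backup

# on the first server (where the export lives)
dir: backup-local
        path /srv/nfs
        content backup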

When I tried to run a backup of one VM (KVM, CentOS) via the web GUI to the NFS destination, the backup finished successfully, but the VM crashed!
Then I rebooted it; CentOS asked me to run fsck to repair some errors on the hard disk. I did so, rebooted, and now all is OK.

What could have happened?
Does it depend on the NFS destination? So far I have always backed up to a local directory, and that works great.

My goal is to schedule backups onto the other Proxmox server, so that I can restore more quickly on the other server.

Thank you in advance

My pveversion output:

Code:

proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

Backup log output:

Code:

INFO: starting new backup job: vzdump 101 --remove 0 --mode snapshot --compress lzo --storage nfs-d3 --node d4
INFO: Starting Backup of VM 101 (qemu)
INFO: status = running
INFO: update VM 101: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/nfs-d3/dump/vzdump-qemu-101-2015_03_31-12_20_10.vma.lzo'
INFO: started backup task '87c270d5-004a-4f16-8860-bc120aa1ad7b'
INFO: status: 1% (619642880/53687091200), sparse 0% (495964160), duration 3, 206/41 MB/s
.....
INFO: status: 100% (53687091200/53687091200), sparse 13% (7093821440), duration 1335, 1442/0 MB/s
INFO: transferred 53687 MB in 1335 seconds (40 MB/s)
INFO: archive file size: 25.26GB
INFO: Finished Backup of VM 101 (00:22:37)
INFO: Backup job finished successfully
TASK OK

Proxmox does not recognize hardware RAID in a SUPERMICRO server SBB X-E5-2620V3

Proxmox does not recognize the hardware RAID on a Supermicro SBB X-E5-2620V3 server.


Hello friends,


We have recently acquired a batch of these Supermicro servers.


We had problems installing Proxmox on them, but after downloading version 3.4 of Proxmox, it gave me the option to install onto a software RAID.


But if I create a hardware RAID, Proxmox does not recognize that RAID; instead, it sees the disks independently.


My questions are: is Proxmox compatible with this type of server, and if so, what could I do to get it to recognize this server's hardware RAID?


Thank you for your support and your time.


Regards,


Jorge

InfiniBand woes

The InfiniBand driver in kernel 3.10 floods the log with this error:
ib0: packet len 2049 (> 2048) too long to send, dropping

Which causes this:
Code:

Mar 31 19:17:30 esx2 pmxcfs[3477]: [status] notice: cpg_send_message retry 80
Mar 31 19:17:31 esx2 pmxcfs[3477]: [status] notice: cpg_send_message retry 90
Mar 31 19:17:32 esx2 pmxcfs[3477]: [status] notice: cpg_send_message retry 100
Mar 31 19:17:32 esx2 pmxcfs[3477]: [status] notice: cpg_send_message retried 100 times
Mar 31 19:17:32 esx2 pmxcfs[3477]: [status] crit: cpg_send_message failed: 6


Does anybody know how to solve this?
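For what it's worth, the 2048-byte limit matches IPoIB datagram mode (MTU 2044); a hedged sketch of switching ib0 to connected mode with a large MTU, which is the usual remedy for this class of error (addresses are placeholders):

Code:

# /etc/network/interfaces
auto ib0
iface ib0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        pre-up echo connected > /sys/class/net/ib0/mode
        mtu 65520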