Proxmox Support Forum

SSD Caching software solution

Hello,

There seem to be more and more stable solutions for SSD caching without a hardware RAID card.

I need SSD caching for raw partitions (Windows). Is there any SSD caching solution that works with raw block devices instead of a filesystem?

ZFS has interesting features too, but it also seems to work only with its own filesystem? (I have never tested it, just looked at the man pages and some web pages.)

The goal is to have SSD caching with software RAID and a raw partition for a Windows VM guest.

Thanks
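
For context, block-level caches such as bcache sit below the filesystem, so the cached device can be handed to a VM as a raw disk. A minimal sketch, assuming hypothetical device names (/dev/sdb as the slow array, /dev/sdc as the SSD):

Code:

make-bcache -B /dev/sdb                  # format the slow backing device
make-bcache -C /dev/sdc                  # format the SSD as a cache device
echo /dev/sdb > /sys/fs/bcache/register  # register both with the kernel
echo /dev/sdc > /sys/fs/bcache/register
# attach the cache set to the backing device (UUID from 'bcache-super-show /dev/sdc'):
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
# the VM then uses /dev/bcache0 as its raw disk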

Proxmox update error

When I try to update I get this error:

starting apt-get update
Hit http://mirror.hetzner.de wheezy Release.gpg
Hit http://mirror.hetzner.de wheezy/updates Release.gpg
Hit http://mirror.hetzner.de wheezy Release.gpg
Hit http://mirror.hetzner.de wheezy Release
Hit http://mirror.hetzner.de wheezy/updates Release
Hit http://mirror.hetzner.de wheezy Release
Hit http://mirror.hetzner.de wheezy/main amd64 Packages
Hit http://mirror.hetzner.de wheezy/contrib amd64 Packages
Hit http://mirror.hetzner.de wheezy/non-free amd64 Packages
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://mirror.hetzner.de wheezy/updates/main amd64 Packages
Hit http://mirror.hetzner.de wheezy/updates/contrib amd64 Packages
Hit http://mirror.hetzner.de wheezy/updates/non-free amd64 Packages
Hit http://mirror.hetzner.de wheezy/pve amd64 Packages
Hit http://security.debian.org wheezy/updates Release
Hit http://download.proxmox.com wheezy Release.gpg
Hit http://security.debian.org wheezy/updates/main Sources
Hit http://download.proxmox.com wheezy Release
Hit http://security.debian.org wheezy/updates/contrib Sources
Hit http://security.debian.org wheezy/updates/non-free Sources
Hit http://cdn.debian.net wheezy Release.gpg
Hit http://security.debian.org wheezy/updates/main amd64 Packages
Hit http://download.proxmox.com wheezy/pve amd64 Packages
Hit http://cdn.debian.net wheezy Release
Hit http://security.debian.org wheezy/updates/contrib amd64 Packages
Hit http://security.debian.org wheezy/updates/non-free amd64 Packages
Hit http://cdn.debian.net wheezy/main Sources
Hit http://cdn.debian.net wheezy/non-free Sources
Hit http://cdn.debian.net wheezy/contrib Sources
Hit http://cdn.debian.net wheezy/main amd64 Packages
Hit http://cdn.debian.net wheezy/non-free amd64 Packages
Hit http://cdn.debian.net wheezy/contrib amd64 Packages
Ign https://enterprise.proxmox.com wheezy Release.gpg
Ign https://enterprise.proxmox.com wheezy Release
Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages

Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages

Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages

Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages

Err https://enterprise.proxmox.com wheezy/pve-enterprise amd64 Packages
The requested URL returned error: 401
W: Failed to fetch https://enterprise.proxmox.com/debia...amd64/Packages The requested URL returned error: 401

E: Some index files failed to download. They have been ignored, or old ones used instead.
TASK ERROR: command 'apt-get update' failed: exit code 100

Can someone help me fix it?
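
A hedged note on the likely cause: the 401 from enterprise.proxmox.com means the host has no active subscription for the enterprise repository, so the usual fix on PVE 3.x is to disable that repo and use the no-subscription one instead:

Code:

# comment out the enterprise repository:
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
# add the no-subscription repository and refresh:
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" >> /etc/apt/sources.list
apt-get update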


Proxmox 3.2.1/1933730b and MailCleaner SPICE issues

Hi team, I have updated Proxmox to the latest version on the no-subscription repository. I have an instance of MailCleaner running, and if I change the display to SPICE, I get the following error when the image boots up: "unaligned pointer 0x17a0179 press any key". Can anyone advise? All my other VMs now work well with SPICE; I have a mixture of Windows Server 2008 and Linux. Cheers, Raj

[solved] Updating a Proxmox cluster with 5 nodes from 3.1 to 3.2

Hello,

we are running a 5-node Proxmox 2.0 cluster, currently on 3.1, with the Community Subscription on all nodes. We have never upgraded a cluster before, so what is the right way to do it? Can we simply go to Upgrade in the web interface on every node, and that's it (perhaps with a reboot when the kernel and qemu are updated)? Or is there anything special to watch out for? Must every node run the same version of Proxmox?

Thanks for the information
Regards Fireon
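
For what it's worth, a minimal sketch of the usual rolling pattern, assuming VMs can be live-migrated between nodes:

Code:

# on each node in turn, after migrating its VMs to another node:
apt-get update
apt-get dist-upgrade
# reboot the node if a new kernel was installed,
# then migrate the VMs back and continue with the next node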

Backup problem with snapshot mode, VM crashes sporadically after starting the backup process

Hello,

at a customer's site, after the update from 3.1 to Proxmox 3.2, we have a problem backing up one VM in snapshot mode (the VM is a Gentoo system, up to date). On my home server the same problem occurs, but there the VM is an Ubuntu 12.04 LTS, and the backup process only fails sporadically; sometimes it works.
On the customer's server, however, every backup of the VM crashes the VM. It is a small cluster with two nodes. The customer has the Basic Subscription, so updates should be fine. Here are the logs from the backups:

Code:

INFO: starting new backup job: vzdump 102 --quiet 1 --mailto technik@local --mode snapshot --compress gzip --storage sicherung
INFO: Starting Backup of VM 102 (qemu)
INFO: status = running
INFO: update VM 102: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/sicherung/dump/vzdump-qemu-102-2014_03_22-00_15_02.vma.gz'
INFO: started backup task 'ce8af6cf-cf06-4f8f-9726-4f9037dc0291'
INFO: status: 0% (87425024/16106127360), sparse 0% (20480), duration 3, 29/29 MB/s
INFO: status: 1% (178651136/16106127360), sparse 0% (3944448), duration 8, 18/17 MB/s
INFO: status: 2% (349700096/16106127360), sparse 0% (3973120), duration 15, 24/24 MB/s
INFO: status: 3% (495845376/16106127360), sparse 0% (8286208), duration 20, 29/28 MB/s
INFO: status: 4% (661389312/16106127360), sparse 0% (8327168), duration 26, 27/27 MB/s
INFO: status: 5% (813432832/16106127360), sparse 0% (12673024), duration 32, 25/24 MB/s
ERROR: VM 102 not running
INFO: aborting backup job
ERROR: VM 102 not running
ERROR: Backup of VM 102 failed - VM 102 not running
INFO: Backup job finished with errors
TASK ERROR: job errors



Another backup; this time the VM crashes later:

Code:

INFO: starting new backup job: vzdump 102 --quiet 1 --mailto technik@local --mode snapshot --compress gzip --storage sicherung
INFO: Starting Backup of VM 102 (qemu)
INFO: status = running
INFO: update VM 102: -lock backup
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating archive '/mnt/pve/sicherung/dump/vzdump-qemu-102-2014_03_22-00_00_02.vma.gz'
INFO: started backup task 'f870193c-f679-4c06-a2d4-74e6802d873b'
INFO: status: 0% (87425024/16106127360), sparse 0% (57344), duration 3, 29/29 MB/s
INFO: status: 1% (176422912/16106127360), sparse 0% (3985408), duration 8, 17/17 MB/s
INFO: status: 2% (333185024/16106127360), sparse 0% (4005888), duration 15, 22/22 MB/s
INFO: status: 3% (486539264/16106127360), sparse 0% (8318976), duration 20, 30/29 MB/s
INFO: status: 4% (646184960/16106127360), sparse 0% (8327168), duration 26, 26/26 MB/s
INFO: status: 5% (826474496/16106127360), sparse 0% (12255232), duration 33, 25/25 MB/s
INFO: status: 6% (969277440/16106127360), sparse 0% (12296192), duration 38, 28/28 MB/s
INFO: status: 7% (1131282432/16106127360), sparse 0% (16240640), duration 46, 20/19 MB/s
INFO: status: 8% (1295515648/16106127360), sparse 0% (20537344), duration 54, 20/19 MB/s
INFO: status: 9% (1455816704/16106127360), sparse 0% (20537344), duration 62, 20/20 MB/s
INFO: status: 10% (1623064576/16106127360), sparse 0% (20762624), duration 70, 20/20 MB/s
INFO: status: 11% (1778909184/16106127360), sparse 0% (20905984), duration 78, 19/19 MB/s
INFO: status: 12% (1944453120/16106127360), sparse 0% (21254144), duration 86, 20/20 MB/s
INFO: status: 13% (2117206016/16106127360), sparse 0% (21577728), duration 93, 24/24 MB/s
INFO: status: 14% (2261647360/16106127360), sparse 0% (21581824), duration 99, 24/24 MB/s
INFO: status: 15% (2425094144/16106127360), sparse 0% (21581824), duration 106, 23/23 MB/s
INFO: status: 16% (2603548672/16106127360), sparse 0% (21590016), duration 113, 25/25 MB/s
INFO: status: 17% (2751987712/16106127360), sparse 0% (21598208), duration 121, 18/18 MB/s
INFO: status: 18% (2900230144/16106127360), sparse 0% (21823488), duration 129, 18/18 MB/s
INFO: status: 19% (3076915200/16106127360), sparse 0% (22646784), duration 136, 25/25 MB/s
INFO: status: 20% (3243114496/16106127360), sparse 0% (22654976), duration 141, 33/33 MB/s
INFO: status: 21% (3395289088/16106127360), sparse 0% (22659072), duration 145, 38/38 MB/s
INFO: status: 22% (3557818368/16106127360), sparse 0% (26644480), duration 151, 27/26 MB/s
INFO: status: 23% (3706060800/16106127360), sparse 0% (26644480), duration 156, 29/29 MB/s
INFO: status: 24% (3880910848/16106127360), sparse 0% (30576640), duration 164, 21/21 MB/s
INFO: status: 25% (4036755456/16106127360), sparse 0% (30646272), duration 171, 22/22 MB/s
INFO: status: 26% (4196401152/16106127360), sparse 0% (30646272), duration 179, 19/19 MB/s
INFO: status: 27% (4354146304/16106127360), sparse 0% (30646272), duration 192, 12/12 MB/s
INFO: status: 28% (4511891456/16106127360), sparse 0% (30707712), duration 204, 13/13 MB/s
ERROR: VM 102 not running
INFO: aborting backup job
ERROR: VM 102 not running
ERROR: Backup of VM 102 failed - VM 102 not running
INFO: Backup job finished with errors
TASK ERROR: job errors

Since the upgrade, no backup of this VM has succeeded. So I copied the VM to my home server, and there the backup works correctly. I am confused.
What can I do, and what could be causing the problem?

Best Regards
Fireon

Getting started with PVE storage model - software RAID5 array and storage concepts

Hello,

Following up on my debut with Proxmox, I am now trying to grasp the concepts and the storage model of PVE. First of all, I'd like to add a bunch of SATA drives (six of them) that were previously assembled as a software RAID 5 array under Slackware Linux. I would like to reassemble them and reuse the array under a VM running CentOS 6.5.

Some questions:

1. Software RAID arrays do not seem to be supported in Proxmox at all. How should I proceed? Should I assemble them with mdadm at the Proxmox level and then add the md device to the vmnumber.conf file with something like ide0: /dev/md0, so the VM sees it as a single large 10TB drive? Or should I pass all six drives through to the VM unassembled and assemble the array at the VM level?

2. I have tried the second option (passing all six drives through to the VM unassembled and assembling the array at the VM level) by adding these lines to the VM config file:

ide0: /dev/sdb
ide1: /dev/sdc
ide2: /dev/sdd
ide3: /dev/sde
ide4: /dev/sdf
ide5: /dev/sdg

but unfortunately, in Proxmox, I only see four of them. I booted up the VM and confirmed that only four of the six drives are passed to the VM. Obviously the array does not start, because too many drives are missing. Why are only four of the six drives passed to the VM by PVE? Is it a misconfiguration of some sort, or are VMs limited to four physical drives?

EDIT: User tom in thread http://forum.proxmox.com/threads/159...e-pass-through clearly stated:

Quote:

you cannot use ide4. there are only 4 ide devices allowed - ide0, ide1, ide2 and ide3.
So how do I add more than four?!

Thanks guys!
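
A hedged pointer: the four-device limit applies only to the IDE bus; the virtio bus allows more entries (the guest needs virtio drivers installed). A hypothetical fragment of the same VM config file:

Code:

virtio0: /dev/sdb
virtio1: /dev/sdc
virtio2: /dev/sdd
virtio3: /dev/sde
virtio4: /dev/sdf
virtio5: /dev/sdg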

Adding LVM storage to Proxmox shows the volume as having 100% disk usage

I just installed Proxmox for the first time and am having an issue with adding LVM storage.
As soon as I set it up, it shows the disk usage as 100%.
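
A hedged note: for LVM-type storage, the usage shown is the space already allocated to logical volumes within the volume group, not filesystem usage, so a fully carved-up VG reads as 100%. A quick way to check from the CLI (the VG name pve is an assumption):

Code:

vgs pve   # the VFree column shows unallocated space in the volume group
lvs pve   # lists the logical volumes already consuming it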



OpenVZ migration fails when quota is disabled

Hi,

I am using Proxmox VE 3.2, latest updates applied, no-subscription repo, in a 10-node cluster setup.

I've disabled disk quotas in /etc/vz/vz.conf by setting DISK_QUOTA=no, because I do not need OpenVZ disk quotas. I use OpenVZ for storage servers and let them take all of my disk space.

I use an external GlusterFS server to store the VMs. No HA, no rgmanager, just VMs on GlusterFS.

When I try to migrate an OpenVZ VM to another node, I get:
Code:

Mar 24 11:57:19 starting migration of CT 81141 to node 'pve-server-NN' (XXX.XXX.XXX.XXX)
Mar 24 11:57:24 container data is on shared storage 'default'
Mar 24 11:57:24 dump 2nd level quota
Mar 24 11:57:24 # vzdqdump 81141 -U -G -T > /mnt/pve/default/dump/quotadump.81141
Mar 24 11:57:24 ERROR: Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 81141, maybe you need to reinitialize quota: No such file or directory
Mar 24 11:57:24 aborting phase 1 - cleanup resources
Mar 24 11:57:24 start final cleanup
Mar 24 11:57:24 ERROR: migration aborted (duration 00:00:10): Failed to dump 2nd level quota: vzquota : (error) Can't open quota file for id 81141, maybe you need to reinitialize quota: No such file or directory
TASK ERROR: migration aborted


This is strange, because if no quota is enabled, vzmigrate should NOT check for it. :(

And yes, if the OpenVZ disk quota is enabled, migration works fine.

Could you please help me with this? What should I do to make my VMs migrate across nodes smoothly even when disk quotas are disabled?

Thanks in advance!

New 3.2 installation, cannot connect via browser

I installed Proxmox VE 3.2 for the first time on my home server hardware today. I first attempted to install from a USB key, but that didn't work at all: the installation timed out trying to "unmount cdrom". So I went back to the stone age and burned a CD from the install ISO, and it installed correctly and without any issue.

I now cannot connect to the Proxmox host from my PC on the local network, but I can ping the host IP address from my PC without problems. Checking ifconfig on the Proxmox host, the network adapter is working and has an IP address on the local network (statically assigned, 192.168.1.100). This matches my router's IP assignment list. Additionally, from the host I am able to ping 4.2.2.2, yahoo.com, and my personal PC on the local network.

Here is the output from ifconfig, transcribed from the Proxmox command-line login on the physical hardware:

Code:

root@proxmox:~$ ifconfig
eth0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet6 addr: nn##::##n#:##nn:nn##:n#n#/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:15902 errors:0 dropped:0 overruns:0 frame:0
TX packets:743 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3105698 (2.9 MiB) TX bytes:51566 (50.3KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 MiB) TX bytes:0 (0.0KiB)


venet0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet6 addr: fe80::1/64 Scope:Link
UP BROADCAST POINTOPOINT RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 MiB) TX bytes:0 (0.0KiB)


vmbr0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:192.168.1.100 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: nn##::##n#:##nn:nn##:n#n#/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13021 errors:0 dropped:0 overruns:0 frame:0
TX packets:580 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1986783 (1.8 MiB) TX bytes:40908 (39.9KiB)



All the hardware works fine (I have another HDD with XenServer still installed, one with Windows Server 2008 R2, and another with a Linux Mint desktop, just for kicks, which I used to test the hardware).

Mainboard: Gigabyte A75M-UD2H
AMD Athlon II X4 CPU
16GB memory
2x AOC-SASLP-MV8 SAS Raid Cards

Any insight or help would be appreciated!
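
One hedged suggestion worth ruling out first: the Proxmox web interface is served over HTTPS on a non-standard port, so the browser URL needs both the scheme and the port spelled out:

Code:

https://192.168.1.100:8006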

Proxmox 3.1-21: OpenVZ disk usage shows containers at 0.0% in use. Bug, or something else?

On Proxmox 3.1-21, the OpenVZ disk usage shows containers at 0.0% in use, even though the disks do use some space. The Webmin console also shows 0.0% disk use. Is this different from 2.3-13? Is this an option you must activate? Are there many differences between versions 2.3-13 and 3.1-21?

Cluster IP migration

Hi,

I need to change the IPs of both nodes of a two-node (up-to-date) unicast cluster with no downtime for any VM (old IPs: 88.x.x.x, new ones: 195.x.x.x).

I planned to:
- move all VMs from one node (B) to the other (A)
- change B's IP
- migrate all VMs back from A to B
- change A's IP
- redistribute the VMs as they were at the beginning

So far so good. I tried the following:
- move all VMs from node B to node A
- change the IP of node B in
/etc/network/interfaces
/etc/hosts
- also change /etc/hosts on node A
- /etc/init.d/networking restart (node B)

This led to node B glowing red in the cluster, and no VM migration was possible.

- reboot node B

Not better.

- restore the previous network config in the previously changed files
- /etc/init.d/networking restart (node B)

Cluster still down.

- reboot node B

Cluster finally restored, back to square one.

Since the new IPs are in a different range than the old ones, I'm not sure whether I have any chance of running a temporary cluster with node A in the old range and node B in the new one.

Removing node B from the cluster, changing the IPs of both nodes, and adding node B to the cluster again doesn't feel that safe, according to:
http://pve.proxmox.com/wiki/Proxmox_...a_cluster_node

What is the safest way to carry out this migration?

How to debug KVM

Hello,

I have several VMs under KVM that freeze at startup with nothing in the logs, while qm reports the VM as started (then an internal error).
On the console I only see a cursor, and the BIOS never starts.

Adding another disk and changing the boot order to it fixes the issue, but that requires copying files and reinstalling all programs (Windows VMs).
So it is not a config issue; it is a problem with the VM disk.
I use raw disks. This has already happened three times.

How can I debug this to find out where the problem occurs?
Why does even the BIOS not start?

Thanks for the help.
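
A hedged starting point: run the VM's KVM command line by hand so QEMU's own error output is visible in the foreground, instead of only in the task log (the VM ID 101 is a placeholder):

Code:

qm showcmd 101            # print the full kvm command line PVE would use for VM 101
qm showcmd 101 | bash     # start it in the foreground and watch for QEMU errors
tail -f /var/log/syslog   # host-side messages while the VM boots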

How to make Backups using HP USB StorageWorks DAT 72

$
0
0
Hi,

Can anyone help me add a tape drive as a backup target for Proxmox?
I can already see it in the OS, but I don't know how to make Proxmox use it for backups.
Thank you in advance for the support.
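
A hedged note: vzdump has no built-in tape support, so a common pattern is to dump to disk first and then copy the archive to the tape device (assuming the drive appears as /dev/st0 and a hypothetical VM ID 100):

Code:

vzdump 100 --mode snapshot --compress gzip --storage local
mt -f /dev/st0 rewind
tar -cvf /dev/st0 /var/lib/vz/dump/vzdump-qemu-100-*.vma.gz
mt -f /dev/st0 offline   # rewind and eject when done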

Proxmox and slow backup

Why does Proxmox use plain old gzip for backups? It only uses one core when compressing. :( Has anyone tried using pigz with Proxmox? Would it be enough to do "ln -s /usr/bin/pigz gzip"? :-) What are the cons of doing that?
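
A hedged sketch of the symlink idea: rather than overwriting /usr/bin/gzip (which a package upgrade would restore), shadow it via /usr/local/bin, which precedes /usr/bin in the default PATH. Also worth checking whether your vzdump version already supports a pigz setting in /etc/vzdump.conf, which would be cleaner:

Code:

apt-get install pigz
ln -s /usr/bin/pigz /usr/local/bin/gzip   # /usr/local/bin shadows /usr/bin in PATH
hash -r                                   # refresh the shell's command cache
gzip --version                            # should now report pigz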

Proxmox Two-Node High Availability cluster problems with Quorum Disk

Hi all,

I have managed to configure the two-node cluster and assign a quorum disk on a third server using an iSCSI target.

In the walk-through outlined in http://forum.proxmox.com/threads/163...ive-migrations, the iscsiadm example for connecting to the iSCSI target is not persistent and does not survive a reboot on either of the Proxmox cluster nodes.

I therefore decided to set up the iSCSI target in Proxmox storage, assigning it to both nodes.

After a reboot, the primary node can see the quorum disk, and clustat shows the quorum disk.

However, on the secondary node the quorum check fails on boot.

If I run /etc/init.d/cman reload on the secondary node, I can then see the quorum disk with clustat.

It therefore looks like the iSCSI target is mounted after Proxmox checks for the quorum disk on boot. On the primary server, the quorum disk is mounted before the check happens.

Is this problem due to the way Proxmox starts clustering?

How do I mount the quorum disk before Proxmox checks for it on each node?

Will adding iscsiadm -m node -T iqn.BLAHBLAH -p IPOFiSCSISERVER -l to /etc/rc.local work?

Are there any benefits to running more than one quorum disk on a two-node cluster?
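
On the rc.local question, a hedged alternative: open-iscsi can be told to log in to a known target automatically at boot, which avoids hand-ordering startup scripts (IQN and IP kept as the placeholders from the post):

Code:

iscsiadm -m node -T iqn.BLAHBLAH -p IPOFiSCSISERVER --op update -n node.startup -v automatic
# open-iscsi then logs in during its own init script; make sure it runs before cman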

DRBD or Ceph Storage on a Two Node Proxmox HA Cluster?

Hi all,

I would appreciate some advice. :)

I have been looking at the following link:
http://pve.proxmox.com/wiki/Two-Node...bility_Cluster

This link describes how to set up a two-node Proxmox High Availability cluster using DRBD (for network-based RAID 1 replicated storage in primary/primary mode over LVM).

I've always been a bit wary of using DRBD after a few problems with split brain.

Now, with the launch of Proxmox 3.2, we have the ability to use Ceph storage.

Can we use Ceph on just two nodes?

If so, has anyone tried this and can offer some input?

cloning a KVM?

Coming from Xen/VMware, cloning, templating, and booting up a live VM in seconds are among the features that make virtualization wonderful, but I do not see them supported in Proxmox(?).

Am I overlooking something, or is there some reason Proxmox doesn't support this?
I know you can do some workarounds manually via the CLI, boot the VM up and change the IP, etc., but why not be able to do it from the GUI?

The lack of this feature makes it a bit of a tough sell to customers, because currently they have to buy a VM and then install the OS from scratch via VNC. (Using HostBill, by the way.)

On the other hand, OpenVZ is not a problem, and I wish KVM worked in a similar fashion, where you can boot up a fresh image in seconds.

But please correct me if I'm wrong ...
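
For reference, a hedged pointer: PVE 3.x does include KVM templates and clones, both in the GUI (right-click a VM) and on the CLI; a minimal sketch with hypothetical VM IDs:

Code:

qm template 100                # convert VM 100 into a template
qm clone 100 200 --name web2   # create VM 200 as a clone of it
qm start 200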

'communication failure (0)' in Proxmox VE's status box

Why does this happen, and how do I solve it? Thank you.