Channel: Proxmox Support Forum

Proxmox VE Differential Backups for Proxmox 3.1

Hi to all,
you probably know that from this link: http://ayufan.eu/projects/proxmox-ve...ntial-backups/ you can download a free plugin that adds differential backup capability to our lovely Proxmox, and the plugin is now ready for Proxmox 3.1!

It is a pity that Dietmar rejected this software, because I have found it very useful.

I hope that Dietmar will reconsider his position on this topic.

THANKS to the Proxmox authors, and thanks to Ayufan for his work!

Move disk error...

Hi,

root@node1:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-20-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

root@node1:~# pvesm status
Backup-VM nfs 1 7672806912 3884612096 3788092416 51.13%
Backup-distant dir 1 976758780 976758776 4 100.50%
Images-ISO nfs 1 528444928 164738176 363604352 31.68%
disques-VM-1 iscsi 1 0 0 0 100.00%
disques-VM-2 iscsi 1 0 0 0 100.00%
disques-VM-3 iscsi 1 0 0 0 100.00%
disques-VM-DFS-1 iscsi 1 0 0 0 100.00%
local dir 1 249296796 44275004 205021792 18.26%
lvm-disques-VM-1 lvm 1 536866816 0 440381440 0.50%
lvm-disques-VM-2 lvm 1 536866816 0 426762240 0.50%
lvm-disques-VM-3 lvm 1 536866816 0 183496704 0.50%
lvm-disques-VM-DFS-1 lvm 1 1073737728 0 6283264 0.50%

I am trying to move a disk (165 GB) from an iSCSI storage to the local one, which has 200 GB free, so that I can snapshot this VM.
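
For reference, the GUI's "Move disk" corresponds roughly to this CLI call (just a sketch; qm move_disk should exist with qemu-server 3.1, and qcow2 is assumed as the target format since snapshots are the goal):
Code:

qm move_disk 909 ide1 local --format qcow2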

I get this error :

create full clone of drive ide1 (lvm-disques-VM-3:vm-909-disk-2)
Formatting '/var/lib/vz/images/909/vm-909-disk-2.raw', fmt=raw size=177167400960
transferred: 0 bytes remaining: 177167400960 bytes total: 177167400960 bytes progression: 0.00 %
...
transferred: 177164517376 bytes remaining: 2883584 bytes total: 177167400960 bytes progression: 100.00 %
transferred: 177167400960 bytes remaining: 0 bytes total: 177167400960 bytes progression: 100.00 %
TASK ERROR: storage migration failed: mirroring error: VM 909 qmp command 'block-job-complete' failed - The active block job for device 'drive-ide1' cannot be completed

Any idea?

Thanks,

Christophe.

Restore : choose a storage for each disk?

Hi,

For whatever reason, a VM may have multiple disks, each on a different storage.

Backup works well.

Restore would be better if one could select the destination storage disk by disk.
As of today, we need enough free space on a single storage, and then move the disks to their preferred storage afterwards.
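
In the meantime the workaround looks roughly like this (a sketch; the VMID, storage names and archive name are made up):
Code:

# restore everything onto the one storage that has enough free space
qmrestore /mnt/backup/vzdump-qemu-101.vma.lzo 101 --storage big-store
# then move each disk to its preferred storage
qm move_disk 101 virtio1 other-store --delete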


Christophe.

How would we share one hard disk using shared storage and 2 proxmox hosts

Hello,

To preface, the reason I'm looking into this option is that we don't have a third server available to properly set up quorum; we are a very small office and don't require HA. I'm also a little worried about clustering, since with clustering there is only one web interface: if I lose one host, can I be sure that the web interface will still work properly? Maybe I'm being paranoid.

So here's the scoop...

We have two Proxmox hosts running version 3.1, and we have set up a shared-storage NAS (using NFS and 802.3ad). Assuming only one host would ever run a given VM at any one time, is there a way to save the hard disk (.raw file) on the shared storage and have both Proxmox hosts' configuration files point to it? I would assign only one host to ever run the VM. In the event one host goes down, we could simply start the VM on the other host and get up and running again quickly. What I'm proposing wouldn't be used for mission-critical VMs.
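
Something like this is what I have in mind (a rough sketch; the storage name, IP, paths and VMID are made up): the same NFS storage defined on both hosts in /etc/pve/storage.cfg, and the VM config present on whichever host should currently run it.
Code:

# /etc/pve/storage.cfg (identical entry on both hosts)
nfs: shared-nas
        server 192.168.1.50
        export /volume1/proxmox
        path /mnt/pve/shared-nas
        content images

# /etc/pve/qemu-server/101.conf (on only one host at a time)
virtio0: shared-nas:101/vm-101-disk-1.raw,format=raw,size=32G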

Thoughts? How would I set this up?

Thank you.

Charles.

End of an era

node would not restart, now has a /etc/pve issue

Hello

When I tried to stop a node, it got stuck here (from ps afx):
Code:

  6061 ?        SL    0:00  \_ startpar -p 4 -t 20 -T 3 -M stop -P 2 -R 6
  6534 ?        S      0:00      \_ /bin/sh /etc/init.d/vz stop
  6583 ?        R      0:48          \_ /sbin/modprobe -r ip_nat_ftp

There were no containers (vz) on this node.

After 5 minutes I used kill -9 6534 to unstick it.

While the node was down I made a change to /etc/pve/cluster.conf. That should not be an issue while a node is off, should it? We have 4 nodes in this cluster.
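
To compare what each node actually has, something like this (a sketch) shows the config version everywhere:
Code:

grep config_version /etc/pve/cluster.conf
pvecm status | grep -i version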

Yet here is the output of ls -l /etc/pve/cluster* :

bad node:
Code:

-r--r----- 1 root www-data 1393 Aug 24 11:57 /etc/pve/cluster.conf
-r--r----- 1 root www-data 1393 Aug 27 12:33 /etc/pve/cluster.conf.new


good nodes have this:
Code:

# node  fbc87
-rw-r----- 1 root www-data 1505 Aug 27 12:44 cluster.conf
-rw-r----- 1 root www-data 1432 Aug 23 07:56 cluster.conf.old

# node fbc241
s012  ~ # ls -l /etc/pve/clus*
-rw-r----- 1 root www-data 1505 Aug 27 12:44 /etc/pve/cluster.conf
-rw-r----- 1 root www-data 1432 Aug 23 07:56 /etc/pve/cluster.conf.old


more info:
Code:

fbc87  /etc/pve # pvecm nodes
Node  Sts  Inc  Joined              Name
  1  X  1112                        fbc3
  2  M    828  2013-08-03 14:00:13  fbc87
  3  M  1072  2013-08-26 15:35:21  s035
  5  M    972  2013-08-22 18:47:55  s012


Bad node pveversion at the time of reboot:
Code:

fbc3  /var/lib/vz/private # pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

I upgraded after the reboot and restarted, but I still have the same issue.


Any suggestions to fix?

Hyper-v migrate Proxmox storflt driver

I migrated a Hyper-V virtual machine to Proxmox (qcow2). The VM starts correctly, but I cannot remove the storflt driver: if I try to uninstall it and reboot, a blue screen appears. This driver is still present on the QEMU hard disk; it is the Hyper-V IDE disk filter driver.
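
One approach that is sometimes suggested for leftover Hyper-V filter drivers (an untested sketch - take a backup or snapshot first, and treat the service name as an assumption) is to disable the driver instead of uninstalling it, from an elevated command prompt inside the guest:
Code:

sc query storflt
sc config storflt start= disabled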

Proxmox Cluster broken

I just made a mistake on one of my Proxmox 2.3 servers.
I accidentally deleted the .ssh/known_hosts file and ran the command "pvecm add xxx.xxx.xxx.xxx -f" on one of my existing cluster nodes (I was in the wrong terminal...).
This is a 2-node cluster.

Now I can't log in to the web interface anymore.
SSH is working fine and the CTs are still running.
Is there a way to rejoin the old cluster?
And one very important question:
What happens to all the CTs if I leave the cluster and then try to join a new cluster?

Thanks for your help.
I found the information below in another thread. Could this maybe fix my problem? And what will happen to all my CTs?

You don't need to re-install the node. Save the current VM configs in /etc/pve/local/qemu-server before removing everything.
Remove the node from the cluster with pvecm delnode <node to be removed> on one of the other nodes.

Perform the following commands on the node to be removed:

service cman stop
killall -9 corosync cman dlm_controld fenced
service pve-cluster stop
rm /etc/cluster/cluster.conf
rm -rf /var/lib/pve-cluster/* /var/lib/pve-cluster/.*
rm /var/lib/cluster/*
reboot (sometimes the cluster kernel modules remain hung on their connections, so it is better to reboot to be sure).


After rebooting, you can add the node as usual:
pvecm add <IP address of one of the nodes already in the cluster>


I've broken and re-built a cluster this way about 7-8 times in the past two weeks (it is important to me to make sure that I can restore a broken cluster before we go live).

Login failed - web interface

I had to reinstall Proxmox because I screwed up the partitions. I got it up and running again and everything works fine, except that I can't log in to the web interface with the root account; it just keeps saying "login failed". I'm using Linux PAM authentication, and I can SSH in with the same password. What gives?
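
A first thing worth checking (a sketch) is the auth log while attempting a login, and restarting the web services:
Code:

tail -f /var/log/auth.log /var/log/syslog
service pvedaemon restart && service pveproxy restart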

New single server setup.

Hi, good people. A not-so-skilled virtualization administrator needs your help. Our firm is preparing to deploy a new dedicated virtualization server, and we want to use Proxmox 3 with a Basic support plan to run it all.

The hardware will be:
CPU Dual Intel Xeon E5 6 cores each
64GB RAM
hardware RAID 10 - 4 x SAS Seagate Cheetah 15k 600 GB (primary virtual machine storage)
hardware RAID 10 - 4 x SATA WD Black 7200 2 TB (slow/test virtual machine storage and backup storage for the SAS virtual machines)
1 WAN interface

I'm planning to install Proxmox 3.1, create a NAT bridge and hide the virtual machines behind it, because the hoster can give me only one gigabit Ethernet adapter with just 3 public IP addresses. All access to the virtual machines will be over the internet.

Question 1. What is the best way to use the disk arrays: should Proxmox be installed on the SATA or the SAS array, and how should the second array be attached to Proxmox - LVM, a directory, or what is the best choice? (On older versions I used a directory on LVM, attached as a second Proxmox storage.)
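
For illustration only, the two usual ways to attach the second array would look roughly like this in /etc/pve/storage.cfg (IDs, VG name and path are made up):
Code:

# whole array as an LVM volume group (one raw logical volume per VM disk)
lvm: sata-lvm
        vgname vg_sata
        content images

# or: a filesystem on the array, mounted and added as a directory storage (allows qcow2 and backups)
dir: sata-dir
        path /mnt/sata
        content images,backup,iso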

Question 2. We need 4 Windows Server 2008 R2 virtual machines: 1 terminal server, 1 IIS web server, 1 MS SQL server, and 1 server for custom software that works with MS SQL. We also need 3 Linux virtual machines: 1 Zimbra mail server, 1 HelpDesk server, and 1 Jabber server. So the question is what type of virtual disk to use on these systems - I think qcow2 will be the best choice for all of them except the Windows Server 2008 R2 with MS SQL, which should probably be raw format.
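
As an illustration of what I mean (disk lines in the style of a vmid.conf; storage names, VMIDs and sizes are made up):
Code:

# MS SQL VM: raw image - no snapshots, but the least overhead
virtio0: sas-dir:101/vm-101-disk-1.raw,format=raw,cache=none,size=100G
# other VMs: qcow2 - snapshots and thin allocation
virtio0: sas-dir:102/vm-102-disk-1.qcow2,format=qcow2,cache=none,size=50G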

Question 3. Before this setup I installed Proxmox 1.9 in the lab and used it for testing software and configuration. We are using VMware ESXi 4 on our server, but in VMware ESXi 5 the RAM limit per server/host is 32 GB, so we need another free platform. When I tried Proxmox 3.0 I noticed some differences in the default CPU type for virtual machines (compared to the 1.9 platform), and on our Intel Xeon E3 I saw slightly lower performance with Proxmox 3 than with Proxmox 1.9. Do I need to use the new CPU type, or will the old qemu default be sufficient for new machines with Sandy Bridge cores?
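
For reference, the CPU type can be set per VM (a sketch; 101 is a made-up VMID):
Code:

qm set 101 -cpu host      # expose the host's Sandy Bridge features to the guest
# or keep the conservative default CPU type for maximum compatibility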

P.S. Maybe the Proxmox developers will write a good administrators' book about using Proxmox (with tips, tricks and good techniques, like the FreeBSD or Debian handbooks). That would be very good - I would buy it, and could pay 30 euro for an electronic version. I think a lot of people would buy such a book to support the project. Of course there is Red Hat documentation about KVM and other information about OpenVZ, but there is nothing about the standard deployment of different platforms inside KVM to get the best performance, for example with MS SQL (I read here on the forum that raw is the best choice for MS SQL, but Proxmox 3.1 brings significant improvements when using qcow2). Or maybe someone can recommend a good book about KVM and OpenVZ virtualization? And why are there no online courses from Proxmox, only on-site ones? It would be very good to have the choice, because I can't travel to Germany for courses; it would be too expensive for me.

Thanks for the help, and good luck.

Restoring VM from 2.3 to 3.0, on start throws error

When I start the VM that I restored onto 3.0 from 2.3, it throws an error, and I can't find any information on it anywhere.

TASK ERROR: detected old qemu-kvm binary (unknown)
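
A quick sanity check (a sketch) is to confirm which qemu-kvm build is actually installed on the 3.0 host:
Code:

pveversion -v | grep pve-qemu-kvm
kvm -version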

[SOLVED] Win7 x64 guest with SQLServer 2012 = High CPU usage

Hello,
I've installed Proxmox 3.1-3 on my HP ProLiant with 12 GB of RAM and one Xeon E5120 @ 1.86 GHz (2 cores).
At the moment I have 5 CTs and 2 VMs - one VM is a pfSense gateway and the other is Windows 7 x64 HP.

On that Windows machine I've installed SQL Server 2012 Express and a VNC server (UltraVNC), as well as the virtio drivers.
The problem is that even in an idle state, guest CPU usage is about 30-40% (Proxmox reports very similar values) and there are spikes to 100%. What could be the cause of such a problem?
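
To see where the time goes on the host side, something like this (a sketch; 104 is assumed to be this guest's VMID, as in the config below) lists the busiest threads of the KVM process:
Code:

top -H -p $(cat /var/run/qemu-server/104.pid)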

Here's my VM .conf file:
Code:

balloon: 2048
boot: cdn
bootdisk: virtio0
cores: 2
cpu: host
memory: 6144
name: Windows
net0: virtio=62:46:A0:EB:AE:83,bridge=vmbr1
onboot: 1
ostype: win7
sockets: 1
startup: order=3
tablet: 0
virtio0: local:104/vm-104-disk-1.qcow2,format=qcow2,cache=writeback,size=57G

And here is what dstat reports when this machine is off:
Code:

----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in  out | int  csw
 17  14  69  0  0  0| 280k  408k|  0    0 | 125B  230B|6745    14k
 10  7  83  0  0  0|  0    40k| 454k  458k|  0    0 |3451  6679
 10  8  82  0  0  0|  0  128k|  53k  50k|  0    0 |2659  5422
 11  9  80  0  0  0|  0  128k| 557k  521k|  0    0 |3667  7097
 24  6  70  0  0  1|  0    40k| 132k  226k|  0    0 |3145  5373
 11  7  83  0  0  0|8192B  184k| 414k  440k|  0    0 |3254  6122
  6  4  90  0  0  0|  0    0 | 162k  176k|  0    0 |2904  6134
  9  6  85  0  0  0|  0    96k| 560k  554k|  0    0 |3390  6931
  5  4  92  0  0  0|  0  112k|  48k  50k|  0    0 |2406  4514
  3  3  94  0  0  0|  0    0 |  44k  41k|  0    0 |2320  4252
  4  3  93  0  0  0|  0    0 |  71k  65k|  0    0 |2364  4408
  4  3  93  0  0  1|  0    0 |  39k  47k|  0    0 |2236  4192
  6  8  85  0  0  0|  0    72k|  55k  50k|  0    0 |2609  5480
  3  3  94  1  0  0|  0  656k|  33k  38k|  0    0 |2280  4285
  6  3  91  0  0  0|  0    0 |  61k  56k|  0    0 |2342  4273
  5  4  91  0  0  0|  0    0 |  68k  69k|  0    0 |2445  4417
  5  3  92  0  0  0|  0    0 |  34k  36k|  0    0 |2314  4252
  7  4  89  1  0  0|  40k  72k|  64k  68k|  0    0 |2495  4472
 14  8  78  0  0  0|  0    72k| 622k  552k|  0    0 |3998  7537

And here it is with the Windows VM running. Look at the increase in int and csw :/

Code:

----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in  out | int  csw
 17  14  69  0  0  0| 282k  408k|  0    0 | 125B  249B|6751    14k
 32  18  29  21  0  0|  16M  80k| 360k  212k|  0    0 |6739    14k
 36  18  25  20  0  1|  11M  576k| 297k  325k|  0    0 |7105    16k
 30  17  35  18  0  0|7656k    0 | 297k  253k|  0    0 |6777    15k
 45  34  13  7  0  0|  13M  64k| 607k  913k|  0    60k|9570    14k
 26  20  30  23  0  1|  17M  224k| 553k  281k|  0  216k|8392    16k
 21  16  38  26  0  0|  16M  48k| 115k  69k|  0    0 |5567    15k
 32  25  23  19  0  0|  13M  856k| 387k  623k|  0    0 |8201    18k
 22  16  37  25  0  0|  14M  80k| 390k  158k|  0    76k|6339    15k
 22  16  61  1  0  1| 184k  344k| 232k  277k|  0    0 |5981    14k
 36  17  29  18  0  0|  15M    0 | 198k  199k|  0    0 |6721    14k
 27  17  32  23  0  0|  16M  152k| 110k  136k|  0  100k|6442    14k
 25  17  26  32  0  0|  14M  800k| 156k  132k|  0    16k|6232    14k
 34  27  27  14  0  0|  13M  40k|  85k  136k|  0    0 |8868    16k
 54  36  6  4  0  1|  15M  256k| 124k  90k|  0    0 |  29k  16k
 50  41  5  3  0  1|  17M    0 |  92k  121k|  0    0 |  31k  17k
 58  35  5  2  0  1|8464k  200k| 146k  129k|  0    0 |  31k  11k
 53  35  8  5  0  0|  13M  960k| 227k  192k|  0    0 |  24k  18k
 52  38  7  3  0  1|  14M    0 | 137k  167k|  0    0 |  28k  18k
 79  21  1  0  0  1|  10M  164k|  64k  97k|  0    0 |6132    11k
 83  17  1  0  0  0| 368k  664k|  41k  48k|  0    0 |5617  6146
 83  17  0  0  0  0|6800k  344k|  88k  76k|  0    0 |6085  6537
 82  18  0  0  0  1|9376k  108k| 133k  102k|  0    0 |5678  7933

pveversion --verbose
Code:

proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-19-pve: 2.6.32-95
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2

proxmox 3.1 Subscription-message on login

Hi, why do regular users get the same message on login as the admin?
Is the message "You do not have a valid subscription for this server. Please visit www.proxmox.com to get a list of available options" intended only for the admin, or for users as well?

If we buy one Community subscription and we have a server farm with 12 servers, will the message still appear, or is it deactivated?

Regards

Issue with license

Start console failure with "authentication failure"

pveversion
Code:

pve-manager/3.1-3/dc0e9b0e (running kernel: 2.6.32-23-pve)

When I clone a Windows template and start the new VM, opening the console fails with "authentication failure":
Code:


Aug 28 18:11:54 proxmox20152 pvedaemon[551558]: authentication failure; rhost=127.0.0.1 user=root@pam msg=Authentication failure

Aug 28 18:11:54 proxmox20152 pmxcfs[551491]: [status] notice: received log



qm config 20045
Code:

bootdisk: ide0
cores: 3
ide0: local:20045/vm-20045-disk-1.raw,format=raw,size=16G
ide2: none,media=cdrom
memory: 4096
name: Mon20045
net0: virtio=F6:49:E7:E9:F2:C3,bridge=vmbr0
onboot: 1
ostype: wxp
sockets: 2
virtio1: local:20045/vm-20045-disk-2.raw,format=raw,size=80G


Nag Screen

For all the people complaining about the nag screen:

1) You can remove it really easily - read the code. If you can't understand it, then you have no business trying to sell production services. Not that I endorse doing this, so I won't post how.

2) You can change the warning message to your advantage, e.g. say "Welcome to CLOUD HOST" or something...

3) Just pay and support the project - it's cheap.

/topic

Huge Pages Support?

Now that I have some systems with 128 GB of RAM, it seems that using huge pages would be very beneficial.

For example, the page table for 16 GB of RAM using 4 KB pages is about 32 MB; using huge pages it is about 64 KB.
Keeping 64 KB in the CPU cache is much easier than keeping 32 MB there.
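
(For the record, the arithmetic: 16 GiB / 4 KiB pages = about 4 million page-table entries, and at 8 bytes per entry that is roughly 32 MiB; with 2 MiB huge pages it is 8192 entries, roughly 64 KiB.)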

Looking at the changelogs, I see that OpenVZ at one point set transparent huge pages to disabled by default (they are enabled by default in upstream Red Hat),
but I cannot find an explanation as to why this was done.

Is there some issue between openvz and transparent huge pages?
If so, can I use transparent huge pages with KVM if I do not use any openvz containers?
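
In case it helps, the current transparent huge page setting can be checked like this (a sketch; on the RHEL-derived 2.6.32 OpenVZ kernel the sysfs directory may be called redhat_transparent_hugepage instead of transparent_hugepage):
Code:

cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
  || cat /sys/kernel/mm/redhat_transparent_hugepage/enabled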

Solaris 10 Guest no network traffic after upgrade to proxmox 3.1

I upgraded yesterday (aptitude update & upgrade, then rebooted the server). Everything appears to be fine; however, two Solaris 10 guests, each with an Ethernet interface (E1000 connected to vmbrX), which worked before, suddenly have no network connectivity: the interfaces are there but no packets are flowing. I connected to the VMs via the console and ran snoop on the interfaces in the guests, but I only see some ARP requests *from* the VM trying to determine the gateway, and no packets from the host into the guest at all. I migrated one of the guests to another server still running Proxmox 3.0 and the network works there without any issue. Any clues? Is there a simple recipe to downgrade to 3.0 so I can confirm 100% that this issue is due to the upgrade? Thanks.
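
One thing worth checking on the upgraded host (a sketch; vmbr0 is an assumption, use the bridge the guests are attached to) is whether the bridge actually sees and forwards the guest's frames:
Code:

brctl show vmbr0
tcpdump -n -e -i vmbr0 arp or icmp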

How to RESIZE through API?

This code doesn't work; unfortunately the API documentation is not yet complete. I don't know if the parameter names 'disk' and 'size' are the correct ones.
Code:

$svr_hostname = 'dsi-094114';
$vmid = 100;

$hdd_settings = array();
$hdd_settings['disk'] = 'ide0';
$hdd_settings['size'] = '+25';
$pve2->put("/nodes/$svr_hostname/qemu/$vmid/resize", $hdd_settings);

Any idea?
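
For comparison, the same operation from the CLI and via pvesh (a sketch; note that the size value normally carries a unit suffix such as G):
Code:

qm resize 100 ide0 +25G
pvesh set /nodes/dsi-094114/qemu/100/resize -disk ide0 -size +25G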

Container Can't Update

Any time I try to run updates in a container, the mirror hostnames don't resolve.
For Debian I have done the following:

Code:

nano /etc/resolv.conf

nameserver 10.1.222.99

Code:

nano /etc/hostname

staging
localhost

Code:

nano /etc/hosts

10.1.222.99 staging
127.0.0.1 localhost.localdomain localhost
::1 localhost.localdomain localhost

Code:

nano /etc/network/interfaces

# Auto generated lo interface
auto lo
iface lo inet loopback


# Auto generated venet0 interface
auto venet0
iface venet0 inet manual
        up ifconfig venet0 up
        up ifconfig venet0 127.0.0.2
        up route add default dev venet0
        down route del default dev venet0
        down ifconfig venet0 down




iface venet0 inet6 manual
        up route -A inet6 add default dev venet0
        down route -A inet6 del default dev venet0
        down ifconfig venet0 down




iface venet0 inet6 manual
        up route -A inet6 add default dev venet0
        down route -A inet6 del default dev venet0


auto venet0:0
iface venet0:0 inet static
        address 10.1.222.99
        netmask 255.255.255.255
        gateway 10.1.222.1

What am I missing? I can connect to the container through SSH, but I can't reach the package mirrors from inside the container.
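
Tests I would run from inside the container to separate a DNS problem from a routing problem (a sketch; 101 is a made-up CTID):
Code:

vzctl enter 101
ping -c3 8.8.8.8          # raw connectivity through the default route
ping -c3 ftp.debian.org   # name resolution via /etc/resolv.conf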