Channel: Proxmox Support Forum

PVE Cluster question

Hello,

I'd just like to know whether a PVE cluster pools physical hardware resources, so that VMs draw from a central pool rather than from any particular server.

For example, in a cluster with 2 nodes, each with 10GB of RAM, is it possible to create a VM with 15GB of RAM?

I've read the wiki article on the PVE cluster and couldn't quite figure this out.

Thanks.

Login loop

Hello,
I made a mistake and removed my cluster node.
Now I can't make any changes to my VMs because of a "permission denied" error.
I tried to fix this by recreating a new cluster following http://undefinederror.org/how-to-res...-in-proxmox-2/ but now I can't log in.
If I use the correct credentials, I am just returned to the login screen.
What can I do?

Thanks for your help

Tobias

Proxmox VE 2.3 installation on SD card questions

Hi,


I'm trying to install Proxmox VE on my Dell servers.


Configuration:
- 2 Dell PowerEdge R620
- dual Xeon
- 256 GB RAM
- 2 SD cards of 2 GB (mirrored)
- no hard disk
- Dell MD3200 storage, SAS-attached


I have some questions:
Has anyone already installed Proxmox VE on an SD card?
In the Proxmox VE installation wizard I only see 1 GB of disk space, not 2 GB. Maybe a driver problem?
And 1 GB of disk space doesn't seem sufficient; what is the minimum disk size for the installation?


Thanks in advance for your help.

iscsitarget on proxmox 2.3 host!

Hi Guys,

I don't know if my approach is wrong, but I need to provide an iSCSI target to my VM/CT guests, hosted on the Proxmox 2.3 node itself.
So I installed the kernel first and then the iscsitarget & iscsitarget-dkms packages. The only problem is a WARNING when I run:

Code:
root@opxi2Server1:~# dkms status
iscsitarget, 1.4.20.2, 2.6.32-18-pve, x86_64: installed (WARNING! Diff between built and installed module!)

I just want to confirm there is nothing wrong with doing this on Proxmox, since it has a modified kernel.
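
In case it matters, here is roughly how I would rebuild the module cleanly against the running PVE kernel (just a sketch; it assumes a pve-headers package matching `uname -r` exists):

Code:

apt-get install pve-headers-2.6.32-18-pve   # headers for the running PVE kernel
dkms remove -m iscsitarget -v 1.4.20.2 --all
dkms install -m iscsitarget -v 1.4.20.2 -k 2.6.32-18-pve
dkms status   # should now report 'installed' without the diff warning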

BEST,
-- Afshin Afzali

Proxmox VE 3.0 RC1 released!

We just released Proxmox VE 3.0 RC1 (release candidate). It's based on the great Debian 7.0 release (Wheezy) and introduces a great new feature set:

VM Templates and Clones

Under the hood, many improvements and optimizations have been made; the most important is the replacement of Apache2 with our own event-driven API server.

A big Thank-you to our active community for all feedback, testing, bug reporting and patch submissions.

Release notes
See http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.0

Download
http://www.proxmox.com/downloads/pro.../17-iso-images

Install Proxmox VE on Debian Wheezy

http://pve.proxmox.com/wiki/Install_..._Debian_Wheezy

Upgrade from 2.3 to 3.0
http://pve.proxmox.com/wiki/Upgrade_from_2.3_to_3.0

All RC1 installations can be updated to 3.0 stable without any problems (apt).
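
For an installed RC1 that is just the standard apt flow:

Code:

apt-get update
apt-get dist-upgrade
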
__________________
Best regards,

Martin Maurer
Proxmox VE project leader


Clone vm

Hello everyone! Can you tell me how I can clone a VM? Can I do it from the web interface?
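
For reference, the CLI form of a clone on 3.0 (where clones were introduced) should look roughly like this; the VM IDs and the name here are made up:

Code:

qm clone 100 120 --name web-clone   # clone VM 100 into a new VM 120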

uefi boot

I have a fresh install of Windows 7 on a hard drive and am trying to boot this disk from a KVM VM. I have read some posts on the subject but do not understand.
SeaBIOS? Does it support UEFI boot?
Thank you.

Concurrent migration fails because port is in use

Hello everybody,


We have a two-node cluster with Proxmox 2.3-13 using DRBD. Fail-over works, but after the failed node is back and unfenced, the second migration back to the original node fails:


Code:

task started by HA resource agent
May 07 13:53:21 starting migration of VM 103 to node 'vhost2' (10.0.0.102)
May 07 13:53:21 copying disk images
May 07 13:53:21 starting VM 103 on remote node 'vhost2'
May 07 13:53:23 starting migration tunnel
bind: Address already in use


channel_setup_fwd_listener: cannot listen to port: 60000


Could not request local forwarding.


May 07 13:53:24 starting online/live migration on port 60000
May 07 13:53:24 migrate_set_speed: 8589934592
May 07 13:53:24 migrate_set_downtime: 0.1
May 07 13:53:26 ERROR: online migrate failure - aborting
May 07 13:53:26 aborting phase 2 - cleanup resources
May 07 13:53:26 migrate_cancel
May 07 14:03:30 ERROR: migration finished with problems (duration 00:10:10)
TASK ERROR: migration problems

The first migration worked without problems. There seems to be a race condition: when two migrations run concurrently, the tunnel port is not incremented, so both try to bind port 60000. Is there an option to delay the second migration by a few seconds? Or is another work-around available? Any help is appreciated, because we really need to re-balance the VMs after a node comes back online.
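
One work-around I am considering (an untested sketch, and it assumes we trigger the re-balancing ourselves via qm rather than through the HA agent): serializing the migrations with a lock so that only one tunnel is open at a time:

Code:

# flock blocks until the previous migration releases the lock
flock /var/lock/pve-migrate.lock qm migrate 103 vhost2 --online
flock /var/lock/pve-migrate.lock qm migrate 107 vhost2 --online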

Kind regards,
Chris


This is our cluster.conf:

Code:

<?xml version="1.0"?>
<cluster config_version="9" name="testcluster">
  <cman two_node="1" expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <fencedevices>
    <fencedevice agent="fence_ifmib" community="public" ipaddr="10.0.0.5" name="switch_a1" snmp_version="2c"/>
    <fencedevice agent="fence_ifmib" community="public" ipaddr="10.0.0.6" name="switch_a2" snmp_version="2c"/>
    <fencedevice agent="fence_ifmib" community="public" ipaddr="10.0.0.7" name="switch_b1" snmp_version="2c"/>
    <fencedevice agent="fence_ifmib" community="public" ipaddr="10.0.0.8" name="switch_b2" snmp_version="2c"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="vhost1" nodeid="1" votes="1">
      <fence>
        <method name="fence">
          <device action="off" name="switch_b1" port="35"/>
          <device action="off" name="switch_b2" port="38"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="vhost2" nodeid="2" votes="1">
      <fence>
        <method name="fence">
          <device action="off" name="switch_a1" port="37"/>
          <device action="off" name="switch_a2" port="42"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="domain1" nofailback="0">
        <failoverdomainnode name="vhost1" priority="1"/>
      </failoverdomain>
      <failoverdomain name="domain2" nofailback="0">
        <failoverdomainnode name="vhost2" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <pvevm autostart="1" vmid="102" domain="domain1" />
    <pvevm autostart="1" vmid="103" domain="domain2"/>
    <pvevm autostart="1" vmid="106" domain="domain1" />
    <pvevm autostart="1" vmid="107" domain="domain2" />
  </rm>
</cluster>

2.2 and 2.3 cluster

I have 4 PVE 2.2 hosts in a cluster. After updating one of the hosts to 2.3 I have some problems: live migration of VMs between 2.2 and 2.3 fails, the 2.3 web panel shows all PVE nodes as down (including itself), and the 2.2 hosts do not show the status of VMs on the 2.3 host. Are versions 2.2 and 2.3 compatible or not?

Can't activate LVs of VMs after iSCSI server restarted. Help needed!

Hi All,

I have a Proxmox cluster consisting of 8 nodes. It has worked flawlessly for almost a year now. Unfortunately the iSCSI server crashed, and after the restart the VMs on the nodes showed erratic behaviour. I therefore restarted one of these VMs, which resulted in the following message:

Error: can't activate LV '/dev/DATA07/vm-102-disk-1': Skipping volume group Data07

Since then I have checked the iSCSI server, where everything looks correct, and the nodes have their iSCSI initiators running.
The LVs exist and check out OK as far as I can see.
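
For completeness, these are the kinds of checks and reactivation steps I have been trying (standard open-iscsi/LVM2 commands; the VG name is taken from the error above):

Code:

iscsiadm -m session            # are the sessions to the storage logged in?
iscsiadm -m session --rescan   # rescan the LUNs on all sessions
pvscan                         # re-read physical volumes
vgscan                         # re-read volume groups
vgchange -ay DATA07            # try to activate the volume group from the error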

Currently I am running out of options and have 20 VMs down; any help is VERY welcome!!

Thanks in advance!
Siebe

IO delay: what is a normal value, and how to troubleshoot? Tools, commands?

My IO delay looks like this:

Normally between 0.00 and 1.00
Sometimes 3, or at most 10
Higher during backups, but that seems "normal" to me.

When do we talk about bad performance? And what is the best way (or command) to find out which VM is causing it? Checking the disk read/write in the Proxmox GUI? :rolleyes:
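
One approach I have seen suggested (an assumption on my part, not something from the Proxmox docs): watch per-process IO on the host, since each running VM is one kvm process:

Code:

apt-get install iotop
iotop -o -P   # -o: only show processes actually doing IO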

My system looks like this:

pveperf:

Code:

CPU BOGOMIPS: 53465.68
REGEX/SECOND: 1169780
HD SIZE: 1015.93 GB (/dev/sda3)
BUFFERED READS: 42.95 MB/sec
AVERAGE SEEK TIME: 39.78 ms
FSYNCS/SECOND: 4.83
DNS EXT: 49.22 ms

I did notice there is a lot of info here about IO delay :cool: so I guess I found some answers already...
I use hardware RAID 5 and noticed one of my drives is dead :( it will be replaced. It's RAID 5, so the server is still fine :)

RAM upgrade on Proxmox host system ...

Hello,

is it possible to upgrade the RAM in the host system after the Proxmox installation?

Best regards
BUE

Backup Status Info explanation

Could somebody explain the info Proxmox shows in the following format during a backup?

"INFO: status 10% (9999999999/99999999999), sparse 7%(99999999999), duration 674, 33/17 MB/S"

Thanks!

3.0 Upgrade forgot to stop vms - website not loading?

I forgot to shut off the VMs before the upgrade, and now I can't start the VMs over SSH and the web interface doesn't load. Any ideas?

Proxmox 3.0 iSCSI Drivers Do not work

The nodes refuse to start iSCSI sessions. Is anyone else having issues with this?

OpenVZ containers, same IP and different control panels

Hi everybody,

I have a dedicated server and installed Proxmox, the OVH distribution on top of Debian.

I created two containers, 100 and 101, and installed a clean CentOS 6.3 template in each of them.

After that I loaded ISPConfig in each container so it can do its work there.

Then I used:

Quote:

iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o vmbr0 -j SNAT --to 5.39.84.286

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 1022 -j DNAT --to 10.0.0.1:22

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 1090 -j DNAT --to 10.0.0.1:80

With this I can access each container on a different port while using the same IP.


Now the problem:


If I want to add domains for each container, how can I do this? I have only one IP, which is the same in each case (only the port changes). How can I point a domain, for example www.domain.com, at the machine's fixed IP 5.39.84.286 and have it reach the right container?

I think it's possible, but I don't know how to do it.

Other people tell me to just use a failover IP and the problem goes away, but I don't want to buy one failover IP per container; I suppose it's possible to use this kind of method so the containers share the same IP, I think...
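
From what I have read elsewhere, the usual answer for HTTP is a name-based reverse proxy on the host, so that port 80 of the public IP routes by Host header to the right container. A rough sketch with nginx (the package, the paths and the container IPs 10.0.0.1/10.0.0.2 are my assumptions, and it only works while the host's own port 80 is not DNATed away):

Code:

apt-get install nginx
cat > /etc/nginx/sites-available/containers <<'EOF'
# route requests by Host header to the matching container
server {
    listen 80;
    server_name www.domain-a.com;
    location / { proxy_pass http://10.0.0.1:80; proxy_set_header Host $host; }
}
server {
    listen 80;
    server_name www.domain-b.com;
    location / { proxy_pass http://10.0.0.2:80; proxy_set_header Host $host; }
}
EOF
ln -s /etc/nginx/sites-available/containers /etc/nginx/sites-enabled/containers
service nginx reload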

Thanks for the help, and best regards to the whole community.

HW Raid status.

Hello,

I am wondering if it is possible to integrate the RAID status into the Proxmox web interface?

I'm using a PERC 5/i RAID controller (rebranded from LSI).

I have installed megacli, and I can use this program to read various status indications from the controller.
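
As a stopgap until something is in the GUI, a check like this could run from cron (a sketch; the megacli binary name and the exact output strings vary between tool versions, and the mail address is a placeholder):

Code:

#!/bin/sh
# mail an alert when any logical drive is not in the Optimal state
STATE=$(megacli -LDInfo -Lall -aALL | grep '^State')
echo "$STATE" | grep -qv Optimal && \
    echo "$STATE" | mail -s "RAID degraded on $(hostname)" admin@example.com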

Create ISO from Source / Modify ISO

I'm trying to edit some files inside the ISO file and then recreate it.

What I did is:

Code:

mount -o loop proxmox-ve_2.3-ad9c5c05-30.iso iso/
cp -R iso/* newiso/
mkisofs -R -iso-level 4 -z -V "PVE" -o pve-cd.iso -b boot/isolinux/isolinux.bin -c boot/isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table newiso/

It boots fine but it returns this error:

Code:

no cdrom found - unable to continue (type exit or CTRL+D to reboot)

What am I doing wrong?
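
Two things I would rule out first (guesses on my part): whether the rebuilt ISO kept the volume label the installer may look for, and whether the -z (zisofs compression) flag interferes:

Code:

isoinfo -d -i pve-cd.iso | grep -i 'Volume id'   # from genisoimage; should print PVE
# rebuild without -z and -iso-level 4 to rule out compression issues:
mkisofs -R -V "PVE" -o pve-cd.iso -b boot/isolinux/isolinux.bin \
  -c boot/isolinux/boot.cat -no-emul-boot -boot-load-size 4 \
  -boot-info-table newiso/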

Migrate OpenVZ CT to Proxmox (ploop problem ...)?

Hi,

I have several OpenVZ CTs currently running on an OpenVZ host (not Proxmox). These CTs use the ploop layout. When I try to migrate to Proxmox with "vzmigrate" I get this error: "Destination node don't support ploop, can't migrate". I think that's expected, because Proxmox doesn't seem to support ploop.

Do you think there is a method to convert containers currently using the ploop storage mode to the normal OpenVZ (simfs) storage layout?
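
One route I am considering (unverified; it assumes vzdump is installed on the source host and relies on its archives being filesystem-level tars, so the layout is decided at restore time; the dump paths are assumptions):

Code:

# on the OpenVZ source host
vzdump 101
scp /vz/dump/vzdump-openvz-101-*.tar proxmox:/var/lib/vz/dump/

# on the Proxmox host: restoring recreates the CT in the simfs layout
vzrestore /var/lib/vz/dump/vzdump-openvz-101-*.tar 101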

Thanks in advance for your help!

/Xavier