Channel: Proxmox Support Forum

The Intel Rapid Storage / LSI Megaraid

I am building a server machine at the moment. This is the first time I am building a server for production use, so I need some help. :) I would like to run Proxmox VE on it, but I'm having trouble getting HW RAID to work. The server has a motherboard designed for servers; the model is https://www.asus.com/Commercial_Serv...ations/Z9PAD8/
It has two types of RAID built in - LSI MegaRAID and Intel Rapid Storage.
My first idea was to set up a RAID 1 using one of those systems and then install Proxmox VE on that RAID volume. But the problem is that when I set up the RAID, Linux does not recognize it as a RAID volume and instead shows all the disks separately as /dev/sdX.
I have 4 disks installed on that machine:
2x Intel SSD 240GB - for the first RAID 1
2x WD 2TB - for second RAID 1
These are the steps I took:
1. Set the SATA mode to RAID in the BIOS
2. Started the Intel / LSI utility to set up the RAID array
3. Started installing Proxmox
4. Did not see any of the RAID arrays in the Linux installer
I know that there is a possibility to run software RAID using mdadm instead of HW RAID, but I would like to use the HW one.
Is there any way to set up Proxmox on HW RAID using one of those technologies? Or would I be better off using software RAID?
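
For reference, the onboard Intel RST and LSI MegaRAID software options on boards like this are typically firmware ("fake") RAID, which the Proxmox installer does not assemble - that is why the disks still show up individually as /dev/sdX (and the stock installer does not offer mdadm either; the usual mdadm route is a Debian install with Proxmox on top). A minimal sketch of how to check for fakeraid metadata and of the mdadm alternative; the device names are assumptions:

Code:

# From a live/rescue system: does either controller leave fakeraid metadata on the disks?
dmraid -r

# Software RAID 1 alternative with mdadm, e.g. for the two SSDs
# (destroys data on the listed disks - adjust the device names first):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
cat /proc/mdstat
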
Thank you. :)



Unable to create VM?

Cannot create a VM:

Code:

Formatting '/var/lib/vz/images/108/vm-108-disk-1.qcow2', fmt=qcow2  size=128849018880 encryption=off cluster_size=65536  preallocation='metadata' lazy_refcounts=off
TASK ERROR: create failed - unable to open file '/etc/pve/nodes/prox3poe1/qemu-server/108.conf.tmp.139360' - Input/output error


After a reboot, creating a VM works again for a while - I don't know for how long until the error comes back.
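
The "Input/output error" on /etc/pve points at pmxcfs, the cluster filesystem behind that path, rather than at the VM creation itself; when the pve-cluster service hangs or the node loses quorum, /etc/pve becomes read-only or returns I/O errors. A hedged checklist to run the next time it happens:

Code:

# Check the cluster filesystem and quorum while the error is present:
service pve-cluster status
pvecm status                          # only meaningful if this node is part of a cluster
grep -i pmxcfs /var/log/syslog | tail -n 20
# If pmxcfs is wedged, restarting it often clears the I/O errors:
service pve-cluster restart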

Proxmox 3.3 install - temperature monitoring?

Just recently discovered Proxmox, and thought I'd try it out on a simple test machine, so I have 3.3 installed and running nicely with 3 KVMs. It's great, but it'd be good to be able to track the CPU temperature in the web interface of a node, and maybe the HDDs. Is there a way to do this? Or at least be able to query for CPU temp via a terminal.

When I did a search, all I found was info on much older versions, using lm-sensors. I did try to see if the commands mentioned would work, but they don't seem to exist on 3.3.

e.g. running "sensors" just gives a "command not found" error.
Am I missing something really basic here? :confused:
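
For what it's worth, lm-sensors still works on the Debian Wheezy base of 3.3, it just isn't installed by default, and hddtemp covers the drives; getting the values into the web interface would still need extra work. A minimal sketch, assuming the standard Debian repositories are enabled:

Code:

apt-get update
apt-get install lm-sensors hddtemp
sensors-detect        # answer the prompts, then load the suggested modules
sensors               # CPU / board temperatures
hddtemp /dev/sda      # drive temperature (adjust the device name)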

It's probably not relevant to the question, but the hardware is:

Intel DQ45CB mainboard
Intel Xeon X3210 CPU (Quad 2.13GHz)
8GB DDR2 800MHz RAM
60GB SSD for the Proxmox install
500GB HDD for KVM etc storage

Thanks!

[SOLVED] DAB Version in pveversion

Since DAB is used by template creators and is of no use to general sysadmins, it was left out of the pveversion -v output in the standard install.

Those who need it can add dab (and any other package names) to the array in line 536 (current version PVE 3.3-2, Oct 3rd 2014) of /usr/share/perl5/PVE/API2/APT.pm, which is:
Code:

    push @list, qw(lvm2 clvm corosync-pve openais-pve libqb0 redhat-cluster-pve resource-agents-pve fence-agents-pve pve-cluster qemu-server pve-firmware libpve-common-perl libpve-access-control libpve-storage-perl pve-libspice-server1 vncterm vzctl vzprocps vzquota pve-qemu-kvm ksm-control-daemon glusterfs-client);
In the case of adding dab it would now become:
Code:

    push @list, qw(lvm2 clvm corosync-pve openais-pve libqb0 redhat-cluster-pve resource-agents-pve fence-agents-pve pve-cluster qemu-server pve-firmware libpve-common-perl libpve-access-control libpve-storage-perl pve-libspice-server1 vncterm vzctl vzprocps vzquota pve-qemu-kvm ksm-control-daemon glusterfs-client dab);
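
Note that APT.pm belongs to the pve-manager package (as far as I can tell), so this edit will be lost on the next update of that package. If the version is only needed occasionally, dpkg can report it directly without patching anything:

Code:

# Query the installed dab version directly, without touching APT.pm:
dpkg-query -W -f='${Package}: ${Version}\n' dab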

Which storage type?

We want to use Proxmox in a production environment for a server which serves a website and a REST service backed by a MySQL and a MongoDB database. The MongoDB data files are relatively big and change frequently, and the server constantly gets a lot of small requests from many users. With Proxmox we aim to be able to do a live migration to another host in case our server hardware fails. We have 2 host servers with SSDs, both located in the same data centre, and our idea is to create a KVM-based guest with Proxmox and to distribute the KVM VM disk image so that a fast live migration is possible.

As far as I can see we will need to use a storage type that allows 2 hosts to share the storage. Looking at the Proxmox wiki I found several supported types, including GlusterFS and RBD/Ceph. (While browsing the web I also found DRBD, but that doesn't seem to be supported by Proxmox - is that correct?) However, I didn't find any information about which type may be the best for our scenario.

Does anyone have experience with this? What would you prefer as a storage type?

As a side question: do any of them support compression (MongoDB creates relatively big data files which compress very well)? I could imagine that this may help during the network transfer, and maybe it could also let us use the space on the drives more efficiently...?
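
For reference, whichever backend you pick ends up as an entry in /etc/pve/storage.cfg, shared across the cluster. A sketch of what RBD/Ceph and GlusterFS definitions looked like on PVE 3.x - the storage IDs, addresses, pool and volume names below are made up:

Code:

# /etc/pve/storage.cfg - illustrative entries only
rbd: ceph-vmstore
        monhost 10.0.0.1;10.0.0.2;10.0.0.3
        pool rbd
        username admin
        content images

glusterfs: gluster-vmstore
        server 10.0.0.1
        volume vmimages
        content images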

fencing, lanplus

Hello everyone

I have a cluster with 3 nodes set up and I'm now testing HA. I got everything to work, which is cool :) and I really like PVE a lot!

I bought a book called Mastering Proxmox which is sometimes a bit confusing. For example, the wiki says that if you use fencing you have to edit /etc/default/redhat-cluster-pve on every node and change FENCE_JOIN="yes". The book says to do this only on one node. Which one is correct, and why?

The other question I have is about lanplus. I have set up IPMI fencing as described in the wiki. When I use the command fence_node X -vv with lanplus="1" it doesn't work. If I take out lanplus="1" it works fine. What is this lanplus? (I found that, for example, iLO2 uses it, but I still don't understand what it is.) Is it necessary to use lanplus? I have a Supermicro X10SLL-F board with the newest BIOS update and also the newest IPMI firmware.
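
For background, lanplus switches the fence agent from IPMI v1.5 to the IPMI v2.0 / RMCP+ protocol. Whether the BMC accepts an RMCP+ session can be tested from the shell with ipmitool, independently of the cluster stack - the address and credentials below are placeholders:

Code:

# IPMI v1.5 session (what the agent uses without lanplus):
ipmitool -I lan -H 192.168.1.100 -U ADMIN -P SECRET chassis power status
# IPMI v2.0 / RMCP+ session (what lanplus="1" corresponds to):
ipmitool -I lanplus -H 192.168.1.100 -U ADMIN -P SECRET chassis power status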

Thanks for some info on these questions ;)

sincerely

Unable to umount / stop containers properly

root@maxwell:/# vzctl stop 800

Stopping container ...
Container was stopped
Can't locate PVE/OpenVZ.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/vzctl/scripts/proxmox.umount line 4.
BEGIN failed--compilation aborted at /usr/lib/vzctl/scripts/proxmox.umount line 4.
Error executing umount script /usr/lib/vzctl/scripts/proxmox.umount

# df -h still shows the container filesystem mounted, and a restart then tells me it can't copy files ("ERROR: Can't copy file /etc/hosts") onto a read-only partition:

/var/lib/vz/private/800 20G 11G 9.3G 54% /u0/vz/root/800

Older Proxmox installs don't have "/usr/lib/vzctl/scripts/proxmox.umount" - this system is running kernel 2.6.32-32-pve.

I found an OpenVZ.pm under VZDump, but if I create the necessary symlink to satisfy the error, then on a restart of the container I get the following (without the symlink, startup gives no errors):

root@maxwell:/etc# vzctl start 800
Starting container ...
Subroutine read_global_vz_config redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 46.
Subroutine read_vz_list redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 121.
Subroutine new redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 156.
Subroutine type redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 170.
Subroutine vm_status redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 174.
Subroutine prepare redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 185.
Subroutine lock_vm redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 232.
Subroutine unlock_vm redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 247.
Subroutine copy_data_phase1 redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 253.
Subroutine stop_vm redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 259.
Subroutine start_vm redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 265.
Subroutine suspend_vm redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 271.
Subroutine snapshot redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 277.
Subroutine copy_data_phase2 redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 308.
Subroutine resume_vm redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 314.
Subroutine assemble redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 320.
Subroutine archive redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 338.
Subroutine cleanup redefined at /usr/lib/perl5/PVE/OpenVZ.pm line 365.
Undefined subroutine &PVE::OpenVZ::load_config called at /usr/lib/vzctl/scripts/proxmox.umount line 13.
Error executing umount script /usr/lib/vzctl/scripts/proxmox.umount
Adding IP address(es): 80.95.186.241
/bin/bash: line 504: /etc/network/interfaces: Read-only file system
/bin/bash: line 540: /etc/network/interfaces: Read-only file system
/bin/bash: line 547: /etc/network/interfaces: Read-only file system
cp: cannot create regular file `/etc/network/interfaces.bak': Read-only file system
/bin/bash: line 571: /etc/network/interfaces.bak: Read-only file system
mv: cannot stat `/etc/network/interfaces.bak': No such file or directory
Setting CPU units: 1000
Setting CPUs: 2
/bin/cp: cannot create regular file `/etc/hosts.20': Read-only file system
ERROR: Can't copy file /etc/hosts

So how do you get the proper OpenVZ.pm, or otherwise fix this missing Perl module?
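
For reference, PVE::OpenVZ normally lives at /usr/share/perl5/PVE/OpenVZ.pm; the copy under VZDump is a different module, which is why the symlink produces the "redefined" and "Undefined subroutine" noise above. A hedged way to find out which package should own it and to put the real file back (pve-manager as the owning package is an assumption - use whatever dpkg reports):

Code:

# Which packages own the umount script and the module?
dpkg -S /usr/lib/vzctl/scripts/proxmox.umount
dpkg -S PVE/OpenVZ.pm
# Check for a vzctl / pve-manager version mismatch:
pveversion -v | grep -E 'vzctl|pve-manager'
# Reinstall the owning package so the real module comes back
# (pve-manager here is an assumption - use what dpkg -S reported):
apt-get install --reinstall pve-manager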

About snapshots and locked VMs

Hi,

have I got the idea of snapshots completely wrong here? I've been doing backups of one Windows server that holds about 600 GB of data, with LZO compression. The Windows server has qcow2 disks.

How I have always understood snapshots is that a snapshot holds all data and state for a certain moment in time. This snapshot is used to take the backup, and of course the snapshot will contain the RAM state. Now it seems that the current snapshot backup locks the whole server for the whole time the backup is being processed and only releases the lock afterwards?

I could not find any good documentation on this. I would just like to know whether something is wrong with our system or whether this is simply how it works.
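
For reference, this is roughly the kind of job being described - a snapshot-mode vzdump with LZO compression. During the job the VM carries a "backup" lock, which can be inspected (and, after a failed job, cleared) from the CLI; the VM ID and storage name below are placeholders:

Code:

# Snapshot-mode backup with LZO compression:
vzdump 101 --mode snapshot --compress lzo --storage backupstore
# While the job runs, the config shows the lock:
grep ^lock /etc/pve/qemu-server/101.conf
# If a failed job leaves the lock behind, it can be cleared manually:
qm unlock 101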

Fence Issues

Looking for input on a fence issue I am having with a fresh Proxmox 3.3 cluster.

I can manually fence with no issues, like so. This at least proves that the management port is on the network and functioning as expected.

Quote:

root@vaultprox3:~# fence_ipmilan -l USERID -p PASSW0RD -a 10.80.12.187 -o reboot
Success: Rebooted
Quote:

root@vaultprox3:~# fence_node vaultprox4 -vv
fence vaultprox4 dev 0.0 agent fence_ipmilan result: error from agent
agent args: nodename=vaultprox4 agent=fence_ipmilan ipaddr=10.80.12.187 lanplus=1 login=USERID passwd=PASSW0RD power_wait=5
fence vaultprox4 failed
However, if I try to use fence_node, or kill corosync, or anything like that, fencing fails. I can't for the life of me figure out what the issue is. Here is my cluster.conf.

Code:

root@vaultprox3:~# cat /etc/pve/cluster.conf
<?xml version="1.0"?>
<cluster config_version="5" name="vaultprox">
  <cman expected_votes="3" keyfile="/var/lib/pve-cluster/corosync.authkey"/>
  <quorumd allow_kill="0" interval="3" label="vaultprox_qdisk" master_wins="1" tko="10"/>
  <totem token="54000"/>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="10.80.12.186" lanplus="1" login="USERID" name="ipmi1" passwd="PASSW0RD" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="10.80.12.187" lanplus="1" login="USERID" name="ipmi2" passwd="PASSW0RD" power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="vaultprox3" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="vaultprox4" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
  </rm>
</cluster>

At this point I am at a loss as to what the issue could be. I appreciate the input!

Quote:

root@vaultprox3:~# pveversion -v
proxmox-ve-2.6.32: 3.3-138 (running kernel: 2.6.32-33-pve)
pve-manager: 3.3-2 (running version: 3.3-2/995e687e)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-2.6.32-31-pve: 2.6.32-132
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-35
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-9
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
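
One visible difference between the working manual call and the failing fence_node run above is lanplus: the manual fence_ipmilan invocation uses plain IPMI v1.5, while cluster.conf passes lanplus="1" (IPMI v2.0 / RMCP+). A hedged way to reproduce what the fence daemon does, outside the cluster stack (power_wait left out for brevity):

Code:

# Same target, but with lanplus (-P), as the fence daemon uses it:
fence_ipmilan -P -l USERID -p PASSW0RD -a 10.80.12.187 -o status
# And without lanplus for comparison:
fence_ipmilan -l USERID -p PASSW0RD -a 10.80.12.187 -o status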

DRBD + qcow2 + Snapshots - is this possible?

Hello,

we installed Proxmox with DRBD. Now we see that it is no longer possible to use the snapshot feature in Proxmox. Maybe this is because it is not possible to use qcow2 on DRBD. Is there no way to use these features with Proxmox and DRBD?

Thanks and best Regards

CT not reachable from another CT via NAT and public IP

Hi,

one CT works as a mail server, reachable via NAT from the external network with the following iptables rules:

IP="xxx.xxx.xxx.xxx"
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -t nat -A PREROUTING -p tcp -d $IP --dport 25 -i eth0 -j DNAT --to-destination 192.168.0.105:25

Is there a way to connect to the mailserver-CT from internal network via public IP?

telnet xxx.xxx.xxx.xxx 25
Trying xxx.xxx.xxx.xxx...
telnet: Unable to connect to remote host: Connection refused


telnet via internal IP works.
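
This looks like the classic hairpin-NAT (NAT reflection) situation: the PREROUTING rule above only matches -i eth0, so connections from the internal side never hit the DNAT, and even if they did, the CT would answer the client directly and the connection would break. A hedged sketch of the extra rules usually needed - interfaces and networks are assumptions based on the rules above:

Code:

# DNAT also for connections from the internal side hitting the public IP
# (a second rule without the -i eth0 restriction):
iptables -t nat -A PREROUTING -p tcp -d $IP --dport 25 -j DNAT --to-destination 192.168.0.105:25
# Masquerade internal clients towards the mail CT so replies come back via the host:
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -d 192.168.0.105 -p tcp --dport 25 -j MASQUERADE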


Thanks, Proxmox with OpenVZ rocks!

Some websites are not loading

I think since the last Proxmox 3.3 patch I am not able to load some websites correctly inside my VMs.
It's very weird because some websites like google.com work without any problems.
But when I try to connect to microsoft.com and other sites, they only show that they're loading; I can even see the headers and the source code, but the page isn't rendered.
It can't just be the browser, because I'm also having the problem with some programs that apparently also can't really connect to those sites.
My guess was that it's something with the built-in Proxmox firewall, so I disabled it, but that didn't help.
I've noticed that the websites suddenly loaded while I was shutting down the host node, but that only occurred once.
So my question is: have any of you ever had a similar problem, or do you have any idea why this is happening?
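
This is only a guess for this setup, but pages that stall after the first headers arrive are a classic path-MTU / segmentation-offload symptom rather than a firewall one, and it is quick to rule out from an affected VM and on the host (interface names below are assumptions):

Code:

# Inside the VM: does a full-size, non-fragmentable packet get through?
ping -c 3 -M do -s 1472 microsoft.com
# On the host: try turning off TSO/GSO on the physical NIC behind the bridge:
ethtool -K eth0 tso off gso off
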
I hope you can help me.
Greetings,
Lucas

multiple bridges to a nic

On my Proxmox box I have 2 nics (eth0, eth1)
Standard config maps vmbr0 to eth0 as usual.
I added vmbr1 and bridged it to eth1. No problem.

Now I want to create vmbr2, vmbr3 and vmbr4 and have the traffic of the VMs that use these interfaces go through eth1.

Obviously I hit the usual limit - "device eth1 is already a member of a bridge; can't enslave it to bridge ..." - which stops me from simply mapping vmbrN to eth1 in /etc/network/interfaces.
How can I solve this?
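
A physical NIC can only be enslaved by one bridge, so the usual workarounds are either VLAN sub-interfaces of eth1 (one per extra bridge, if the switch side carries the VLANs) or bridges with no physical port that the host routes/NATs out through eth1. A sketch of the VLAN variant for /etc/network/interfaces - VLAN IDs 10 and 20 are placeholders, and 8021q/vlan support must be available on the host:

Code:

auto vmbr2
iface vmbr2 inet manual
        bridge_ports eth1.10
        bridge_stp off
        bridge_fd 0

auto vmbr3
iface vmbr3 inet manual
        bridge_ports eth1.20
        bridge_stp off
        bridge_fd 0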

Thanks, P.

Need to rebuild node 2 in my cluster - Advice Please

I've purchased 4 additional hard drives for a RAID 10 (that already has 4 drives). This is server #2 in my Proxmox cluster.

I'm thinking it will be difficult and/or time-consuming to keep this node intact while adding these 4 drives to the existing RAID 10. I suppose it is possible to expand the RAID 10 with these 4 drives and somehow grow the single virtual drive to encompass all 8 drives, but it seems like it would take a long time for the RAID controller to redistribute all the existing data.

So, instead, I've migrated all existing virtual machines over to the primary Proxmox node (I guess that's what you call the initial master node that you start with).

So, now, node#2 has no virtual machine on it.

I'd like to remove that node correctly, in a way that won't cause any problems. I want to install all 8 hard drives in this server as one virtual volume, then install Proxmox freshly on top.

After this, I want to add this node back to the cluster, and then migrate some virtual machines over to it.

Do you see any issues I might run into? For example:

1) Will my unexpired license work just fine after the fresh install?
2) Are there special steps I need to take to properly remove the node?
3) When I re-add the node, will the master node complain about anything, or possibly reject it based on some unique ID from the previous installation of this cluster node?

These concerns may seem a bit extreme, but I just want to make sure I don't corrupt my cluster during this upgrade. Your advice is appreciated. Thanks in advance.
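
For the removal and re-join itself, the usual PVE 3.x flow is: shut the node down, delete it from the cluster, reinstall, then join it again (the wiki warns that a removed node should not rejoin with its old identity without a reinstall, which is exactly what is planned here). A hedged sketch with the node name and the remaining node's IP as placeholders:

Code:

# On the remaining cluster node, with node2 already shut down:
pvecm nodes                 # note the node name as the cluster knows it
pvecm delnode node2
# After reinstalling Proxmox on the rebuilt server, on the NEW node:
pvecm add 192.168.1.10      # IP of an existing cluster node
pvecm status                # verify quorum and membership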

Migration, prevent slow vzquota init on shared storage

This question has been asked and left unanswered for a few years now, so here we go again.

On shared storage, pvectl migrate vmid does a full vzquota init on the target. It is an extremely slow process - it takes hours, which hardly qualifies for the name "online migration".

Maybe Proxmox is not aware that the storage is shared? If so, how do I tell it?

But if it is aware, why does it initialize the darned quota again? Is there a way to prevent the rebuild?
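
One workaround that is sometimes suggested - assuming second-level disk quotas are not actually needed inside the containers - is to disable the disk quota for the affected CTs (or globally), which skips the vzquota init on start and migration. A hedged sketch; the paths are the usual OpenVZ/PVE locations:

Code:

# Per container: add to /etc/pve/openvz/<vmid>.conf (or /etc/vz/conf/<vmid>.conf)
DISK_QUOTA=no

# Or globally for all containers in /etc/vz/vz.conf:
DISK_QUOTA=no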

Thanks.

ipset inside CTs

kernel-2.6.32-28-pve - ipset works fine inside OpenVZ containers.
kernel-2.6.32-33-pve - ipset fails with 'Kernel error received: Operation not permitted'.
What happened? Can/will you fix it?

automated / preseed pxe install / provisioning of proxmox itself

Is there a way to automate a Proxmox install and/or PXE-boot the installer - for example with preseeding?
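
As far as I know the bare-metal ISO installer has no official preseed mechanism in the 3.x series, so the usual route is to PXE-boot and preseed a plain Debian Wheezy and then install Proxmox VE from its repository on top (the "Install Proxmox VE on Debian Wheezy" wiki article describes this). A hedged sketch of the post-install part, e.g. run from a preseed late_command - verify the repository line and package list against the current wiki before using it:

Code:

# Run after a preseeded Debian Wheezy install:
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    > /etc/apt/sources.list.d/pve.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get -y dist-upgrade
apt-get -y install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps open-iscsi bootlogd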

Proxmox VE Mobile not working correctly

Hello,

when I open https://MY-Server-IP:8006/?mobile=1 in my web browser (Firefox 32.0) the new mobile interface opens. I can enter my root details and everything works perfectly. The only problem is that when I enter the credentials of a standard "PVEVMUser" user, I get the following screen (see attached image):

Is there any way to solve this, so my users can use the mobile view to manage their VMs?

(the same error appears on my mobile phone)

Attached image: pvevm_mobile.PNG

Auto-migrate VM when a node's network fails or the node goes down

Good morning to all.

I just configured a two node cluster with HA but I have a problem.

I have a VM (100) running on node 1 (gestion1). If I restart or shut down node 1 manually, this VM is migrated to node 2 without any problem, and it works from node 2 to node 1 too.
The VM is configured on an LVM data storage backed by DRBD.

rgmanager is running on both nodes.

The problem I have is:

If I have a VM running on node 1 (or node 2) and I pull the LAN cable or cut the power to the node, the VM is not migrated to the other node of the cluster. The VM goes down with the node.

My cluster.conf is:

Code:

<?xml version="1.0"?>
<cluster config_version="7" name="gestioncluster">
  <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_ilo" ipaddr="192.168.130.34" login="ADMIN" name="fenceA" passwd="ADMI$
    <fencedevice agent="fence_ilo" ipaddr="192.168.130.44" login="ADMIN" name="fenceB" passwd="ADMI$
  </fencedevices>
  <clusternodes>
    <clusternode name="gestion1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fenceA"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="gestion2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device action="reboot" name="fenceB"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="100" recovery="relocate"/>
  </rm>
</cluster>

Proxmox version in two nodes:

Code:

pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I hope you can help me with this problem; if you need more info I can paste it.
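
For what it's worth, rgmanager only relocates an HA VM after the dead node has been successfully fenced, so it is worth checking whether fencing actually completes in the pulled-cable / pulled-power test (with an integrated management controller, cutting the node's power usually takes the fence device down with it). A hedged way to watch this from the surviving node, using the node names from the cluster.conf above:

Code:

# On the surviving node, while the other one is unplugged:
clustat                                   # cluster / service view
fence_node gestion1 -vv                   # does fencing of the dead node succeed?
tail -n 50 /var/log/cluster/fenced.log    # fence daemon log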

advice: tricky buttons (reset & restart)

Dear proxmox community,

I'm Johannes, old (well above 30, so yes, old... ;)) and my first post concerns an improvement to the web interface. It's easy to implement, extremely useful for admins like me, and it will (at some point in the future) save your soul.

What happened to me a few hours ago:
I came to the office, not perfectly awake, saw that one VM was hogging 100% of one core, and wanted to restart that particular VM. Well, I restarted the VM, sort of...

Instead I restarted the whole host, a dual Xeon E3 with 64GB RAM. What a nightmare! A few seconds later I had a subtle feeling that something had looked different. Another few seconds later I realized that I had restarted the host, and a few seconds after that my colleagues realized it too.

The good thing, if any, was that it was early in the morning and not everybody was working yet. Nevertheless, this should not have happened at all!

It was definitely my mistake, due to the early morning, things on my mind and so forth. But, as with every catastrophe, small problems aggregate into the final catastrophe. I think the "reset" and "restart" buttons shouldn't be so easy to mistake for one another. They're at about the same place in the WebGUI and they have the same confirmation dialogue, but they have a very different impact!

My advice to the dev team:
1. Hide the restart button (for the host), maybe somewhere bottom right.
2. Change the confirmation popup, so that the admin has to type "Yes, I'm 100% sure" or something along those lines (the Debian way).
3. Remove the restart button completely from the WebGUI and only allow host restarts from the CLI.
4. Instead of the confirmation popup, which again can easily be mistaken for the VM reset popup, implement a password confirmation.

From the above list I recommend no. 4. In addition to looking unique, it has a security component built in. Imagine a scenario where you leave your PC and someone, maybe a fired coworker, wants to pay the company back. Or imagine my misclick scenario.

Any host, especially a Proxmox host, is mostly dedicated to running 24/7. Therefore a restart/reboot of the whole host is an uncommon event, and I believe this uncommon and potentially disastrous event should not be triggered with two clicks that, again, have a lookalike.

The current implementation plays right into the brain's pattern recognition, and that's something we absolutely want to avoid.

Long story short: I don't want anybody else to experience the pain, the embarrassment and, to be blunt, the money down the drain that I have to go through now!

Cheers,
Jo