Channel: Proxmox Support Forum

Problem booting PVE kernel on iSCSI

Hi,
First of all, I have searched the forums a bit, but for some reason I was not able to find a matching issue, except for one that didn't resolve my problem.

I have installed Proxmox according to the instructions here: https://pve.proxmox.com/wiki/Proxmox_ISCSI_installation, supplemented with information from a few other places, since that guide assumes DHCP etc.

Right now I have a working Debian system that boots from iSCSI using the 2.6.32-5-amd64 kernel. The idea was then to install the Proxmox kernel and, going forward, boot into it. So I looked up the guide here: https://pve.proxmox.com/wiki/Install..._Debian_Wheezy

Adding the sources, installing the images, updating GRUB etc. works great, no issues at all. However, when I boot the PVE kernel, the machine fails to boot due to what appears to be a disk error. Starting the iSCSI session seems to work, based on the messages printed on the screen.
It is quite hard to see what the first error is, since the screen is flooded, but at the end I get the message: ext4-fs (sda2): ext4_da_writepages: jbd2_start: 1024 pages, ino [then a changing number]. This message keeps repeating and pushes the earlier output off the top of the screen, so it is quite hard to see what the actual issue is.

Another error message is "Rejecting I/O to offline device". At one point I think I saw (before it scrolled away) that the drives were considered to have hardware issues and that I should run fsck.

I've also enabled bootlogd to capture all the log messages, but that log stays almost empty unless I boot into the Debian kernel; the same goes for dmesg.

I've tried different PVE kernel versions, including 2.6.32-25, 2.6.32-26, 2.6.32-27 and 2.6.32-34, all with the same issue. I've run fsck from a live distribution and no errors were detected (though it finished instantly). Whenever I boot into the Debian kernel, there seems to be no issue with the filesystem.

Any idea how I can proceed with my troubleshooting and resolve the issue? I understand remote boot is not officially supported, but in this case that might not even be relevant, since the iSCSI session is already up and working under another kernel. Likely there is some issue with modules that need to be enabled in the kernel, or at least that is my guess.
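
One thing I could check first (a sketch, assuming the standard Debian initramfs-tools and open-iscsi packages; the kernel version below is just an example): whether the PVE initramfs actually contains the iSCSI initiator, since the Debian kernel's initramfs clearly does.
Code:

# Does the PVE initramfs contain the iSCSI initiator at all?
lsinitramfs /boot/initrd.img-2.6.32-26-pve | grep -i iscsi

# If not: open-iscsi only hooks into the initramfs when this file
# exists (contents depend on the setup; ISCSI_AUTO=true works for
# firmware/iBFT boots), then the initramfs must be rebuilt:
touch /etc/iscsi/iscsi.initramfs
update-initramfs -u -k 2.6.32-26-pve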

Thanks in advance,
trafficlight

Netboot PXE to ceph disk image?

Is it theoretically possible to PXE boot a diskless server into Proxmox and then have it use a ceph block device as its root filesystem?

The incremental cost of putting at least one SSD in a node seems small in terms of money, power, and failure points, but over a bunch of nodes it really adds up.

Since we've got this hyper-redundant Ceph cluster sitting there for the VMs, it'd be great to use it natively as well!
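
For what it's worth, my rough idea (an untested sketch, assuming the kernel rbd module and its legacy sysfs interface; monitor address, pool and image names are hypothetical) would be an initramfs hook along these lines:
Code:

# Map an RBD image inside the initramfs and use it as the root device.
modprobe rbd
echo "10.0.0.1:6789 name=admin,secret=<key> pve-roots node1-root" \
    > /sys/bus/rbd/add              # creates /dev/rbd0
# then continue booting with root=/dev/rbd0 on the kernel command line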

Thanks for any insight!

Install Fails on 16.8TB RAID

I am trying to install Proxmox 3.3 on a 16.8TB RAID 10 array, and the install fails with the error: mkfs.ext3 -m 0 -F /dev/pve/data failed with exit code 1 at /usr/bin/proxinstall line 177. Looking at the details, it is trying to format my whole array with ext3, which has a 16TB size limit. Is there any way to configure the install to use a smaller portion of the disk? I plan on carving up the disk space with LVM for use with the VMs anyway.
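
For what it's worth, the wiki describes sizing parameters that can be passed at the installer's boot prompt (unverified on 3.3; the numbers are examples), which might keep each volume under the ext3 limit:
Code:

# At the ISO boot prompt, instead of plain "linux":
linux ext4 hdsize=2000 maxroot=100 swapsize=32 maxvz=1000
# ext4     - use ext4 instead of ext3
# hdsize   - total disk space (GB) the installer may use
# maxroot  - maximum size (GB) of the root volume
# maxvz    - maximum size (GB) of the data (/var/lib/vz) volume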

Distributing CPU processes across the nodes?

Hi guys !!


I need your help please !!


Lately I have been learning to use Proxmox and I find it incredible, but I have a doubt: can I distribute processes using the CPU of node 1 across the other nodes?


It would be incredible if you could tell me how to do this; I would be very grateful.


Greetings, I will be watching for your comments. Thank you very much, everyone. ;)

Totally messed up proxmox installation

Hi,

I had manually installed Proxmox 3.1 on Debian Wheezy, and it was working great with a few VMs on it.

Yesterday I tried to enable HA, since I now have two machines. I set up the new machine (installed with Debian Wheezy, then Proxmox 3.1, then upgraded to 3.3 flawlessly) as master, and when I tried to add my old machine as slave, everything went foobar.

1. I couldn't add the slave successfully, as it kept getting stuck on "Waiting for quorum"
2. I lost all the VM configurations on the slave, but the VMs are still running and /var/lib/vz is intact

So, as everything had gone wrong, I decided to try to disable HA on the slave to keep it running as before. I followed these steps: http://undefinederror.org/how-to-res...-in-proxmox-2/ and omitted the part about creating a cluster.

After that didn't work, I decided to upgrade from 3.1 to 3.3, expecting that to clean things up.

apt-get dist-upgrade shows the following output:
Code:

root@helix:/var/log# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  pve-qemu-kvm qemu-server
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
3 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue [Y/n]? Y
Setting up pve-manager (3.3-1) ...
insserv: Service cman has to be enabled to start service pvedaemon
insserv: exiting now!
update-rc.d: error: insserv rejected the script header
dpkg: error processing pve-manager (--configure):
 subprocess installed post-installation script returned error exit status 1
Setting up vzctl (4.0-1pve6) ...
insserv: Service cman has to be enabled to start service vz
insserv: exiting now!
update-rc.d: error: insserv rejected the script header
dpkg: error processing vzctl (--configure):
 subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of proxmox-ve-2.6.32:
 proxmox-ve-2.6.32 depends on pve-manager; however:
  Package pve-manager is not configured yet.
 proxmox-ve-2.6.32 depends on vzctl (>= 3.0.29); however:
  Package vzctl is not configured yet.


dpkg: error processing proxmox-ve-2.6.32 (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 pve-manager
 vzctl
 proxmox-ve-2.6.32
E: Sub-process /usr/bin/dpkg returned an error code (1)


The Proxmox web page is not working anymore (it doesn't load anything, even after restarting the services). pveproxy shows
Code:

Dec 23 23:57:30 helix pveproxy[387691]: WARNING: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
maybe because of an inconsistent upgrade.

pveversion -v
Code:

pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 2.6.32-26-pve)
pve-manager: not correctly installed (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-8
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: not correctly installed
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


Any suggestion on how to repair the installation? On the other hand, on my new machine I have a clean Proxmox 3.3 installation with a cluster created; maybe it can help with some files.
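
For reference, a possible recovery path suggested by the insserv messages above (a guess; apply with care on a production node): re-enable the cman init script so insserv stops rejecting its dependents, then let dpkg finish configuring the half-installed packages.
Code:

update-rc.d cman defaults    # or: insserv cman
dpkg --configure -a
apt-get -f install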

Thanks and merry Christmas!

Proxmox VE 3.3-5, noVNC not working

Hello,
I updated to Proxmox 3.3-5 to get ... noVNC (because Java sucks!).
And the only thing I do not have is ... noVNC! Why?!

Thank you for your help,

Mickael

Wrong disk size?

Hi all,

I'm trying to cleanup a little bit of disk space on a node.
During my search I found:

root@proxmox1:/var/lib/vz/images/102# ls -lah
total 91G
drwxr-xr-x 2 root root 4.0K Dec 25 14:08 .
drwxr-xr-x 11 root root 4.0K Nov 14 09:39 ..
-rw-r--r-- 1 root root 133G Dec 25 14:10 vm-102-disk-1.qcow2

It shows the directory as 91GB in total, but the .qcow2 file inside that directory is 133GB.

Now, I've just logged in to this VM, and the total used space is about 14GB:

root@:~# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 76G 14G 59G 19% /
udev 10M 0 10M 0% /dev
tmpfs 791M 256K 791M 1% /run
/dev/disk/by-uuid/1c56ce75-4d63-4489-9fce-c257d5269afa 76G 14G 59G 19% /
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 2.3G 0 2.3G 0% /run/shm

The disk space that I've allocated under VM 102 is 80GB.

Where on earth are the rest of the GBs that seem to be used?

I've deleted all snapshots for that VM (before that, it was even more than 91GB).
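
For anyone else hitting this, commands to compare apparent vs. allocated size, plus one way to compact the image afterwards (my understanding: a qcow2 file never shrinks by itself, so deleted snapshots leave it at its high-water mark; the convert step assumes the VM is stopped and there is enough free space for a second copy):
Code:

# Apparent size (what ls shows) vs. blocks actually allocated on disk:
qemu-img info vm-102-disk-1.qcow2
du -h --apparent-size vm-102-disk-1.qcow2
du -h vm-102-disk-1.qcow2

# Rewrite the image to drop the unused space (VM must be powered off):
qemu-img convert -O qcow2 vm-102-disk-1.qcow2 vm-102-disk-1-new.qcow2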

Thanks

Migration question - ESXi ---> Proxmox

Hi all,

I've been reading this page:

http://pve.proxmox.com/wiki/Migratio..._to_Proxmox_VE

in the wiki.

I have several windows servers I would like to migrate, but I'm concerned about a few things -

I know domain controllers are sometimes not very polite about hardware changes on the level of moving to very dissimilar equipment (not much unlike moving from VMware to Proxmox, I'd assume). Has anyone migrated a Win2008 DC from VMware to Proxmox? If so, did it work, or was it just a miserable failure?

What was the best option in terms of the migration, if it worked well? I'd like the target format for the Proxmox HDD image to be raw (a few of the servers have MSSQL on them). Would it be best to use SystemRescueCd or Clonezilla, or to do a complete backup of the ESXi VM and then use the Server 2008 boot disk to restore from backup to the Proxmox VM? If I use the dd option of the rescue CD, I assume I'll end up with a raw disk image, or do I need to do some other manipulation to it?
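
As far as I understand dd, imaging the whole disk already yields exactly the raw format Proxmox can use, so something like the following should work (a sketch; the host name and VM ID are hypothetical):
Code:

# Booted from SystemRescueCd inside the ESXi guest, stream the whole
# disk into a raw image on the Proxmox host:
dd if=/dev/sda bs=4M | gzip -1 | \
    ssh root@proxmox 'gunzip > /var/lib/vz/images/105/vm-105-disk-1.raw'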

My ultimate goal is to get the VMs migrated off the free-version ESXi servers and onto a Proxmox cluster with a Ceph storage cluster. Ultimately I have about 8 Windows server VMs (4 are DCs) on three VLANs, 7 Linux VMs on 4 VLANs, and 6 Windows 7 VMs on 1 VLAN (shared with 3 of the Windows servers) that I'd need to migrate off the existing VMware platform over to Proxmox.

I do have some time before I plan to do this completely, but I want to make sure I understand it, so I can have the procedure written out clearly and do it as quickly as possible to reduce the downtime.

If I can't migrate the DCs, I do have a secondary plan I can put into place, but it would be less painful if I were able to migrate them.

Kernel Panic in HP ML310E V2

Hello,
Unfortunately I do not speak English, but I hope to explain myself well with Google Translate.
I'm using Proxmox on two HP ML310s, and both have a very similar kernel panic.
See the image below.
In all VMs I use the host CPU type; can this cause a problem?


IMG_0124.JPG

Thank you!

SSH working but Web interface not working

Hi,

I have recently installed Proxmox 3.3 on my "throne" server. After a successful installation, I am able to SSH to the server, but the default web interface is not working; I tried https://<ip>:8006.

I think that, by default, we should be able to connect to the web interface per the docs. Can you help me resolve this issue?

root@localhost:~# uname -a
Linux localhost 2.6.32-32-pve #1 SMP Thu Aug 21 08:50:19 CEST 2014 x86_64 GNU/Linux
root@localhost:~#
root@localhost:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback


auto vmbr0
iface vmbr0 inet static
address 10.102.100.140
netmask 255.255.255.0
gateway 10.102.100.130
bridge_ports eth0
bridge_stp off
bridge_fd 0
root@localhost:~#

root@localhost:/etc# ls -l /etc/pve/local/pve-ssl.key
ls: cannot access /etc/pve/local/pve-ssl.key: No such file or directory
root@localhost:/etc# pvecm updatecerts
hostname lookup failed - got local IP address (localhost = 127.0.0.1)
root@localhost:/etc#
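
The last error above suggests the core problem: the hostname resolves to 127.0.0.1, so the SSL certificate for the GUI can never be generated. A possible fix (the hostname "pve1" is hypothetical; substitute the real one):
Code:

# /etc/hosts: the node's name must resolve to its LAN address, not 127.0.0.1
127.0.0.1        localhost
10.102.100.140   pve1.example.local pve1

# set the hostname, regenerate the certificates, restart the GUI proxy:
echo pve1 > /etc/hostname
hostname pve1
pvecm updatecerts
service pveproxy restart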

Two Factor Authentication using U2F

Any plans to support FIDO/U2F going forward?

thanks~

Transport endpoint is not connected

Hello,

I want to create a VM. I go to the web interface, the page loads to about 90%, and then the system crashes(?).
I tried to restart pve-cluster, but I get an error:

root@i1:~# /etc/init.d/pve-cluster restart
Restarting pve cluster filesystem: pve-clusterstart-stop-daemon: warning: failed to kill 2605: No such process
[main] crit: unable to create lock '/var/lib/pve-cluster/.pmxcfs.lockfile': Read-only file system
[main] notice: exit proxmox configuration filesystem (-1)
(warning).
root@i1:~# ls -la /etc/pve/
ls: cannot access /etc/pve/: Transport endpoint is not connected
root@i1:~#

Can anyone help? This is a production server with active customers. I cannot reboot, because the boot time is very long and my customers' virtual servers timing out would be very bad.


edit:
I cannot create a file on /:
root@i1:~# touch /qe
touch: cannot touch `/qe': Read-only file system
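
A debugging sketch for the read-only root (assumption: the remount is only safe if the underlying disk turns out to be healthy, so check the kernel log first):
Code:

# Why was / remounted read-only? Look for the original fs/disk error:
dmesg | grep -iE 'ext[34]|i/o error|read-only'
mount | grep ' on / '

# Only if the disk itself looks healthy, a live remount may avoid a reboot:
mount -o remount,rw /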

Wrapping my head around CEPH - couple questions

OK, I'm going to be setting up an experimental group at work for learning/using Ceph within Proxmox. I'm planning on having a dedicated Proxmox cluster to act as the Ceph storage system (and a secondary cluster to act as the VM hosts). Most of the tutorials cover the basic setup of a Proxmox Ceph cluster, but not much else.

My questions so I can understand how to test this are:

1) I understand that when I set up the Ceph cluster, I'll have a certain number of OSDs, PGs and replicas (not counting the CRUSH map; I'll only have one rack for storage, so I won't have multiple levels in the CRUSH map, management nodes, of which all three will be, etc.). For this test network there will be 3 nodes, each with 2 HDDs, so a total of 6 HDDs: 6 OSDs with a replication factor of 3, so by the formula (6*100)/3 = 200 PGs - no problem. My question now: if I need to increase the OSDs to increase my total capacity (say I add 2 more nodes, each containing 2 more OSDs), once I create the OSDs and they show up in my manager, do I need to change the pool settings? Can I even change the pool settings? I mean, it should now be (10*100)/3 = 334 PGs (333.33, but I remember reading somewhere to round up if you don't end up with a whole number)...? Basically, I'm confused about the proper process for adding storage to an existing Proxmox VE managed Ceph cluster.
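
From the Ceph documentation as I read it (so treat this as an assumption, not Proxmox-specific advice), the pool can be adjusted after adding OSDs; pg_num can only ever be increased, never decreased:
Code:

# "rbd" is the usual default pool name here; substitute your own:
ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 512    # e.g. next power of two above 334
ceph osd pool set rbd pgp_num 512   # must follow pg_num before rebalancing starts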

2) The reverse of number 1: let's say an HDD fails completely and I need to remove it from the equation. I'm sure there is a proper procedure for this through Proxmox, but it may instead need to be handled on the command line (which is fine); I'm just not sure about the procedure I need to follow.
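
The removal sequence I've pieced together from the Ceph docs (assuming the dead disk is osd.5; treat it as a sketch):
Code:

ceph osd out 5                # stop mapping data to it
service ceph stop osd.5       # on the owning node, if the daemon still runs
ceph osd crush remove osd.5   # remove it from the CRUSH map
ceph auth del osd.5           # delete its authentication key
ceph osd rm 5                 # remove the OSD entry itself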

3) And finally (I think ;) ), let's say the Proxmox Ceph cluster nodes need to be updated. I know that to maintain quorum I should only update and restart one node at a time. Now, I'm assuming that after a restart of a single node the cluster will be in an unhealthy state (as VMs will have been running and data still being written to the other two nodes / the rest of the OSDs). When I bring the first updated system back online, will the cluster heal itself, or will I need to force a repair manually? And what kind of time are we talking about before the restarted node is fully functional? Does it just sync the changed data, or will it do a complete rebuild, where I'd need to estimate the time based on the amount of data and the network speed? (I found the formula once, but can't find the link all of a sudden.)
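
For planned reboots, what I've read (again, my assumption) is that setting noout beforehand stops Ceph from rebalancing while the node is down, so the returning node only re-syncs the PGs that changed:
Code:

ceph osd set noout      # suppress automatic rebalancing during maintenance
# ... update and reboot the node, wait for the cluster to settle ...
ceph osd unset noout    # recovery then only syncs the changed data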

I'm sorry for what must seem like somewhat basic questions, but most tutorials I've come across on Proxmox VE management of a Ceph cluster only really describe the initial creation of the cluster. I'm still working my way through the actual ceph.com documentation (i.e. I'm very new to Ceph), but I'd like to begin some initial testing on real hardware to get a better feel for it.

two node cluster fence error

Hi, I have just installed Proxmox 3.3 on two servers and I want to make an HA cluster. I have read many websites and watched many video tutorials on YouTube, and I have a massive problem with fencing. My servers don't have dedicated fence devices (I use Tyan GT25). I created the cluster and it works fine; I see the two nodes in the GUI. But when I try to validate cluster.conf, I get these errors:
Code:

root@prox1:~# ccs_config_validate -l /etc/pve/cluster.conf.new
Relax-NG validity error : Extra element fencedevices in interleave
tempfile:4: element fencedevices: Relax-NG validity error : Element cluster fail
ed to validate content
tempfile:19: element device: validity error : IDREF attribute name references an
 unknown ID "fenceB"
Configuration fails to validate

This is my cluster.conf.new:

Code:

<?xml version="1.0"?>
<cluster name="test" config_version="5">

  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1" expected_votes="1"/>


  <fencedevices>
  <fencedevice agent="fence_ilo" ipaddr="192.168.1.121" login="root" password="pass" name="fenceA"/>
  <fencedevice agent="fence_ilo" ipaddr="192.168.1.122" login="root" password="pass" name="fenceB"/>
  </fencedevices>

 

  <clusternodes>
  <clusternode name="prox1" votes="1" nodeid="1">
 
  <fence>
  <method name="1">
  <device name="fenceA" action="reboot"/>
  </method>
  </fence>
 
  </clusternode>

  <clusternode name="prox2" votes="1" nodeid="2">
 
  <fence>
  <method name="1">
  <device name="fenceB" action="reboot"/>
  </method>
  </fence>
  </clusternode>

</clusternodes>
</cluster>

My pveversion:
Code:

proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

I must have made a stupid mistake, but I don't see it. Sorry for my English; I am still learning it.
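
One way to narrow this down (a suggestion, using the standard fence-agent command-line options; IPs and credentials as in the config above): test the iLO fence devices outside of cluster.conf, so that only the XML validation remains in question.
Code:

# Ask each iLO for the node's power status; if this works, the device
# definitions are sound and the problem is purely in the XML:
fence_ilo -a 192.168.1.121 -l root -p pass -o status
fence_ilo -a 192.168.1.122 -l root -p pass -o status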

Passthrough Intel HD 2500 (i5-3470)

Hey guys,

I am trying to pass through an Intel HD 2500 graphics card. My CPU is an i5-3470 and the chipset is Q77; both support VT-x and VT-d. I always get a black screen. So, is it generally possible to pass through an Intel graphics card?

I tried to follow the steps in the following thread, but there is no final conclusion for this problem: http://forum.proxmox.com/threads/190...th-my-hardware

Do I really need the VGA arbiter patch for the i915 driver, or is it already included in the Proxmox kernel? Or do I maybe need the newest kernel version from the pvetest repository? https://lkml.org/lkml/2014/5/9/517
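
Independent of the arbiter patch, it may be worth first confirming that the IOMMU is actually active (generic checks, not Proxmox-specific):
Code:

# Kernel messages should show DMAR/IOMMU initialization:
dmesg | grep -e DMAR -e IOMMU

# With a working IOMMU, devices get grouped under /sys:
find /sys/kernel/iommu_groups/ -type l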

I saw in the kernel changelog that Proxmox enabled vfio x-vga. Have they implemented the patch, or is it something else? https://github.com/proxmox/pve-kerne...angelog.Debian

If anyone can give me a solution it would be great!

Thanks,
Felix

So here is my setup...

pveversion -v
Code:

pve-manager/3.3-5/bfebec03 (running kernel: 3.10.0-5-pve)
root@kvmtest:~# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-3.10.0-5-pve: 3.10.0-19
pve-kernel-2.6.32-34-pve: 2.6.32-140
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1

/etc/apt/sources.list.d/proxmox.list:
Code:

# deb http://download.proxmox.com/debian wheezy pvetest
deb http://download.proxmox.com/debian wheezy pve-no-subscription

/etc/default/grub:
Code:

...
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
...

/etc/modules:
Code:

loop
kvm
kvm_intel

/etc/modprobe.d/iommu_unsafe_interrupts.conf:
Code:

options vfio_iommu_type1 allow_unsafe_interrupts=1
lspci -nn
Code:

00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller [8086:0150] (rev 09)
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller [8086:0152] (rev 09)
00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)
00:16.0 Communication controller [0780]: Intel Corporation 7 Series/C210 Series Chipset Family MEI Controller #1 [8086:1e3a] (rev 04)
00:19.0 Ethernet controller [0200]: Intel Corporation 82579LM Gigabit Network Connection [8086:1502] (rev 04)
00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)
00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)
00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)
00:1f.0 ISA bridge [0601]: Intel Corporation Q77 Express Chipset LPC Controller [8086:1e47] (rev 04)
00:1f.2 SATA controller [0106]: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] [8086:1e02] (rev 04)
00:1f.3 SMBus [0c05]: Intel Corporation 7 Series/C210 Series Chipset Family SMBus Controller [8086:1e22] (rev 04)

/etc/pve/qemu-server/100.conf:
Code:

bootdisk: virtio0
cores: 3
hostpci0: 00:02.0,x-vga=on,pcie=1,driver=vfio
hotplug: 1
ide2: local:iso/virtio-win-0.1-94.iso,media=cdrom,size=68326K
keyboard: de
machine: q35
memory: 4096
name: windows7
net0: e1000=3E:09:DA:0B:8F:A9,bridge=vmbr0
ostype: win7
sockets: 1
tablet: 0
virtio0: local:100/vm-100-disk-1.raw,format=raw,size=80G

/usr/bin/vfio-bind:
Code:

#!/bin/bash

modprobe vfio-pci

for dev in "$@"; do
    vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
    device=$(cat /sys/bus/pci/devices/$dev/device)
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done

Code:

qm showcmd 100 > /root/start.sh

/root/start.sh:
Code:

#!/bin/sh

vfio-bind 0000:00:02.0

/usr/bin/kvm -id 100 -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -name windows7 -smp sockets=1,cores=3 -nodefaults -boot menu=on -vga none -no-hpet -cpu kvm64,kvm=off,hv_spinlocks=0xffff,hv_relaxed,+lahf_lm,+x2apic,+sep -k de -m 4096 -readconfig /usr/share/qemu-server/pve-q35.cfg -device vfio-pci,host=00:02.0,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0,x-vga=on -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 -iscsi initiator-name=iqn.1993-08.org.debian:01:c3d7a3e9eb5 -drive file=/var/lib/vz/template/iso/virtio-win-0.1-94.iso,if=none,id=drive-ide2,media=cdrom,aio=native -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/var/lib/vz/images/100/vm-100-disk-1.raw,if=none,id=drive-virtio0,format=raw,aio=native,cache=none,detect-zeroes=on -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=100 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown -device e1000,mac=3E:09:DA:0B:8F:A9,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300 -rtc driftfix=slew,base=localtime -machine type=q35 -global kvm-pit.lost_tick_policy=discard


Installation of Proxmox on Fresh Dedicated Server

Dear Friends

I am new to this forum and also have very little knowledge about installing Proxmox. I need your help with the installation on my new server that I got recently.

I have Debian 6 oldstable (Squeeze) installed on my server and need help installing Proxmox now. It would help if anyone can share step-by-step instructions with screenshots, as I haven't installed Proxmox or any other such software before.
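
The wiki route, as far as I can tell (assumption: staying on Squeeze, which only gets the older Proxmox VE 2.x series; for 3.x you would first have to upgrade Debian to Wheezy):
Code:

# Add the Proxmox VE 2.x repository for Squeeze plus its signing key:
echo "deb http://download.proxmox.com/debian squeeze pve" \
    >> /etc/apt/sources.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -

# Install Proxmox VE (this pulls in the 2.6.32 PVE kernel), then reboot:
apt-get update
apt-get install proxmox-ve-2.6.32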

Thanks for your help in advance

Rgds
Sanjay

Two-node Sheepdog cluster failure

Hi, I have a two-node Proxmox cluster at OVH with Sheepdog. Everything works as expected, but twice I have had a network problem inside my vRack at OVH which led to a disconnection between the nodes.
Corosync reports the new cluster configuration with only one member, and so does Sheepdog; but when the network connection is re-established, Corosync reports the new topology with two nodes, while Sheepdog remains with only one node connected. It seems like the daemon does not listen to the new Corosync configuration.
The only way I could re-establish the Sheepdog cluster configuration was to restart one node, which triggers a cluster recovery.

proxmox-ve-2.6.32: 3.2-126 (running kernel: 2.6.32-29-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-29-pve: 2.6.32-126
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
Sheepdog daemon version 0.8.0

Proxmox GUI Listen on internal network (VE 3.3)

Hello,

how can I tell apache in VE 3.3 to listen only on the internal NIC instead of listening on both the public and internal NICs?

I want to have the GUI only reachable from internal.
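
Worth noting: since the 3.x series the GUI is served by pveproxy rather than apache. I did not find an option to change the bind address, but access can apparently be restricted by source network in /etc/default/pveproxy (the subnet is an example):
Code:

# /etc/default/pveproxy
ALLOW_FROM="10.0.0.0/24"    # internal network only
DENY_FROM="all"
POLICY="allow"

# then: service pveproxy restart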

Thank you and happy new year.

Best regards

Benjamin

Proxmox 2.x vs 3.x CPU load

I have 2 separate clusters. One is running PVE 2.3 and the other is running PVE 3.3. Each cluster is comprised of identical hardware, IBM x3650 M4s. Each cluster has one running VM, which is CentOS 6. I'm noticing a significant difference in load averages between the two when running similar operations. As you can see from the screenshots, they both have a KVM process that is running pretty hard. My only issue is that the PVE 3.3 cluster reports pretty much no load average, even though it has a VM consistently running at 600-900% CPU.

PVE 2.3
vaultprox2.jpg

PVE 3.3
vaultprox3.jpg

The VM configurations are identical, other than one has 40GB of RAM and the other has 50GB. I even see this difference in load within the guest itself. Everything is running OK; I'm just trying to wrap my head around the reason for such a difference.

Proxmox VE 1.9

Hello everyone,

Can anyone tell me where I can find the installation files for Proxmox VE 1.9?

I tried looking on the website, but it seems I can only download 3.1-3.3 from there.

Regards,

Projan