Does anybody know how to change the default VGA type so that any new VM created anywhere in the cluster gets that VGA type? For example, instead of the Standard VGA type, all new VMs would use SPICE (qxl).
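For reference, a single VM's display type can be changed after creation from the CLI; a minimal sketch (the VM ID 100 is just an example) that could be scripted over new VMs if no cluster-wide default turns up:
Code:
# switch an existing VM's display adapter to SPICE (qxl)
qm set 100 -vga qxl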
↧
Change default VGA type cluster wide
↧
noob's question about adding hard drives
Hi all
This is a bit of a noob question, I guess :D
How do I add the 3 drives that I just took from my old Linux box to the new Linux VM?
Or how do I migrate them over? They are SATA drives with ext4.
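One approach that may fit here (a sketch under assumptions: the drives show up on the host under /dev/disk/by-id/, and the VM ID 101 plus the disk ID below are hypothetical) is to pass each physical disk straight through to the VM:
Code:
# attach a whole physical disk to VM 101 as an extra VirtIO disk; repeat per drive
qm set 101 -virtio1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
The guest then sees each disk as a block device, and the existing ext4 filesystems can simply be mounted inside the VM.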
↧
Maintaining Patches to Proxmox host (e.g. kernel modules for PPP)
Hi - I was wondering:
After reading http://www.howtoforge.com/how-to-add...nvz-containers, which says:
Before we can use PPP in the container, we must enable the PPP kernel modules on the host system:
modprobe tun
modprobe ppp-compress-18
modprobe ppp_mppe
modprobe ppp_deflate
modprobe ppp_async
modprobe pppoatm
modprobe ppp_generic
Question: how do people maintain a list of the modifications they make to their Proxmox servers so they can reapply those changes the next time they upgrade Proxmox?
Cheers,
Martin.
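One common way to make the module loading itself persistent (a sketch of the standard Debian /etc/modules mechanism, not necessarily what others in this thread do) is:
Code:
# /etc/modules -- each line is passed to modprobe at boot
tun
ppp-compress-18
ppp_mppe
ppp_deflate
ppp_async
pppoatm
ppp_generic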
↧
ceph performance seems very slow
A common post on the forums, it seems, but my case is unique! :) Probably not, really ...
3 Proxmox/Ceph nodes; one is just a NUC used for quorum purposes, with no OSDs or VMs.
The underlying filesystem is ZFS, so I am using
journal dio = 0
2 OSDs on two nodes:
- 3TB Western Digital Reds
- SSD for cache and log
OSD nodes: 2 x 1 GbE in balance-rr, directly connected; iperf gives 1.8 Gbit/s.
Original tests were with a ZFS log and cache on SSD.
Using dd in a guest, I got sequential writes of 12 MB/s.
I also tried with the Ceph journal on an SSD and journal dio on, which did improve things, with guest writes up to 32 MB/s.
Sequential reads are around 80 MB/s.
The same tests run with GlusterFS give much better results, sometimes by an order of magnitude.
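For context, the setting mentioned above sits in the [osd] section of ceph.conf; a minimal sketch (the section layout is assumed from standard ceph.conf conventions):
Code:
[osd]
    # needed here because the OSD filestore lives on ZFS, which at the time lacked O_DIRECT support
    journal dio = 0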
CEPH Benchmarks
Code:
rados -p test bench -b 4194304 60 write -t 32 -c /etc/pve/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --no-cleanup
Total time run: 63.303149
Total writes made: 709
Write size: 4194304
Bandwidth (MB/sec): 44.800
Stddev Bandwidth: 28.9649
Max bandwidth (MB/sec): 96
Min bandwidth (MB/sec): 0
Average Latency: 2.83586
Stddev Latency: 2.60019
Max latency: 11.2723
Min latency: 0.499958
rados -p test bench -b 4194304 60 seq -t 32 -c /etc/pve/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --no-cleanup
Total time run: 25.486230
Total reads made: 709
Read size: 4194304
Bandwidth (MB/sec): 111.276
Average Latency: 1.14577
Max latency: 3.61513
Min latency: 0.126247
ZFS
- SSD LOG
- SSD Cache
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 186.231 MB/s
Sequential Write : 7.343 MB/s
Random Read 512KB : 157.589 MB/s
Random Write 512KB : 8.330 MB/s
Random Read 4KB (QD=1) : 3.934 MB/s [ 960.4 IOPS]
Random Write 4KB (QD=1) : 0.165 MB/s [ 40.4 IOPS]
Random Read 4KB (QD=32) : 23.660 MB/s [ 5776.3 IOPS]
Random Write 4KB (QD=32) : 0.328 MB/s [ 80.1 IOPS]
Test : 1000 MB [C: 38.6% (24.7/63.9 GB)] (x5)
Date : 2014/11/26 18:46:51
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
ZFS
- SSD Cache (No LOG)
Ceph
- SSD Journal
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 198.387 MB/s
Sequential Write : 23.643 MB/s
Random Read 512KB : 155.883 MB/s
Random Write 512KB : 18.940 MB/s
Random Read 4KB (QD=1) : 3.927 MB/s [ 958.7 IOPS]
Random Write 4KB (QD=1) : 0.485 MB/s [ 118.5 IOPS]
Random Read 4KB (QD=32) : 23.482 MB/s [ 5733.0 IOPS]
Random Write 4KB (QD=32) : 2.474 MB/s [ 604.0 IOPS]
Test : 1000 MB [C: 38.8% (24.8/63.9 GB)] (x5)
Date : 2014/11/26 22:16:06
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
Gluster Benchmarks
Code:
ZFS
- SSD LOG
- SSD Cache
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 682.756 MB/s
Sequential Write : 45.236 MB/s
Random Read 512KB : 555.918 MB/s
Random Write 512KB : 44.922 MB/s
Random Read 4KB (QD=1) : 11.900 MB/s [ 2905.2 IOPS]
Random Write 4KB (QD=1) : 1.764 MB/s [ 430.6 IOPS]
Random Read 4KB (QD=32) : 26.159 MB/s [ 6386.4 IOPS]
Random Write 4KB (QD=32) : 2.915 MB/s [ 711.6 IOPS]
Test : 1000 MB [C: 38.6% (24.7/63.9 GB)] (x5)
Date : 2014/11/26 21:35:47
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
ZFS
- SSD Cache (No LOG)
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 729.191 MB/s
Sequential Write : 53.499 MB/s
Random Read 512KB : 625.833 MB/s
Random Write 512KB : 45.738 MB/s
Random Read 4KB (QD=1) : 12.780 MB/s [ 3120.1 IOPS]
Random Write 4KB (QD=1) : 2.667 MB/s [ 651.1 IOPS]
Random Read 4KB (QD=32) : 27.777 MB/s [ 6781.4 IOPS]
Random Write 4KB (QD=32) : 3.823 MB/s [ 933.4 IOPS]
Test : 1000 MB [C: 38.6% (24.7/63.9 GB)] (x5)
Date : 2014/11/26 23:29:07
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
It almost seems that Ceph is managing to disable the ZFS log & cache altogether.
↧
Proxmox with DS1513+ NAS has slow backup speed
I have Proxmox 3.3 hosting a test XP VM on a DS1513+ NAS iSCSI LUN. When I back this VM up to the Proxmox host's local drive, or to an NFS share on a different disk in the same NAS, it runs at about 10 MB/s. I also have an identical XP VM on the Proxmox host's local drive, and backing that one up locally averages 572 MB/s. I have edited /etc/vzdump.conf to set bwlimit: 900000.
What am I missing?
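To narrow down whether the 10 MB/s comes from reading the iSCSI-backed disk or from the backup target (a diagnostic sketch only; the device path is an assumption and must match your storage layout), a raw read test on the host can help:
Code:
# read 1 GiB straight from the VM's iSCSI/LVM-backed disk device, bypassing the page cache
dd if=/dev/your-vg/vm-111-disk-1 of=/dev/null bs=1M count=1024 iflag=direct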
VM on DS1513+ NAS backed up to the Proxmox Local
INFO: starting new backup job: vzdump 111 --remove 0 --mode snapshot --compress lzo --storage local --node VH1
INFO: Starting Backup of VM 111 (qemu)
INFO: status = stopped
INFO: update VM 111: -lock backup
INFO: backup mode: stop
INFO: bandwidth limit: 900000 KB/s
INFO: ionice priority: 7
INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-111-2014_11_27-11_56_26.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task '2eb2a822-93bb-4119-b008-7eff0f438700'
INFO: status: 0% (30146560/34359738368), sparse 0% (7901184), duration 3, 10/7 MB/s
INFO: status: 1% (349831168/34359738368), sparse 0% (21774336), duration 38, 9/8 MB/s
INFO: status: 2% (693633024/34359738368), sparse 0% (34652160), duration 75, 9/8 MB/s
INFO: status: 3% (1037041664/34359738368), sparse 0% (129548288), duration 111, 9/6 MB/s
INFO: status: 4% (1381695488/34359738368), sparse 0% (252596224), duration 147, 9/6 MB/s
INFO: status: 5% (1719402496/34359738368), sparse 1% (487444480), duration 181, 9/3 MB/s
INFO: status: 95% (32646365184/34359738368), sparse 91% (31329239040), duration 3220, 10/0 MB/s
INFO: status: 96% (32988069888/34359738368), sparse 92% (31670943744), duration 3253, 10/0 MB/s
INFO: status: 97% (33335541760/34359738368), sparse 93% (32018415616), duration 3286, 10/0 MB/s
INFO: status: 98% (33682358272/34359738368), sparse 94% (32365232128), duration 3319, 10/0 MB/s
INFO: status: 99% (34017181696/34359738368), sparse 95% (32700055552), duration 3351, 10/0 MB/s
INFO: status: 100% (34359738368/34359738368), sparse 96% (33042608128), duration 3384, 10/0 MB/s
INFO: transferred 34359 MB in 3384 seconds (10 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 815MB
INFO: Finished Backup of VM 111 (00:56:27)
INFO: Backup job finished successfully
TASK OK
VM on Proxmox local backed up to the Proxmox Local
INFO: starting new backup job: vzdump 111 --remove 0 --mode snapshot --compress lzo --storage local --node VH1
INFO: Starting Backup of VM 111 (qemu)
INFO: status = stopped
INFO: update VM 111: -lock backup
INFO: backup mode: stop
INFO: bandwidth limit: 900000 KB/s
INFO: ionice priority: 7
INFO: creating archive '/var/lib/vz/dump/vzdump-qemu-111-2014_11_27-13_09_34.vma.lzo'
INFO: starting kvm to execute backup task
INFO: started backup task 'b21abb16-81a6-4c14-92f5-50f18c8c5ca1'
INFO: status: 0% (294780928/34359738368), sparse 0% (21020672), duration 3, 98/91 MB/s
INFO: status: 1% (410517504/34359738368), sparse 0% (22908928), duration 6, 38/37 MB/s
INFO: status: 2% (815464448/34359738368), sparse 0% (106303488), duration 12, 67/53 MB/s
INFO: status: 3% (1077673984/34359738368), sparse 0% (143208448), duration 17, 52/45 MB/s
INFO: status: 4% (1447493632/34359738368), sparse 0% (282394624), duration 22, 73/46 MB/s
INFO: status: 9% (3154313216/34359738368), sparse 5% (1922179072), duration 25, 568/22 MB/s
INFO: status: 14% (4980670464/34359738368), sparse 10% (3670519808), duration 28, 608/26 MB/s
INFO: status: 22% (7690321920/34359738368), sparse 18% (6373531648), duration 31, 903/2 MB/s
INFO: status: 30% (10456596480/34359738368), sparse 26% (9139806208), duration 34, 922/0 MB/s
INFO: status: 38% (13222871040/34359738368), sparse 34% (11906080768), duration 37, 922/0 MB/s
INFO: status: 46% (15989145600/34359738368), sparse 42% (14672355328), duration 40, 922/0 MB/s
INFO: status: 54% (18755420160/34359738368), sparse 50% (17438294016), duration 43, 922/0 MB/s
INFO: status: 62% (21521694720/34359738368), sparse 58% (20204568576), duration 46, 922/0 MB/s
INFO: status: 70% (24287969280/34359738368), sparse 66% (22970843136), duration 49, 922/0 MB/s
INFO: status: 78% (27054243840/34359738368), sparse 74% (25737117696), duration 52, 922/0 MB/s
INFO: status: 86% (29820518400/34359738368), sparse 82% (28503392256), duration 55, 922/0 MB/s
INFO: status: 94% (32586792960/34359738368), sparse 91% (31269666816), duration 58, 922/0 MB/s
INFO: status: 100% (34359738368/34359738368), sparse 96% (33042608128), duration 60, 886/0 MB/s
INFO: transferred 34359 MB in 60 seconds (572 MB/s)
INFO: stopping kvm after backup task
INFO: archive file size: 815MB
↧
Can't console Windows XP on Proxmox 3.3
I have a problem.
I want to start a VNC console for a Windows XP VM on Proxmox 3.3,
but I got this error and I don't know what it means.
TASK ERROR: command '/bin/nc -l -p 5901 -w 10 -c '/usr/sbin/qm vncproxy 100 2>/dev/null'' failed: exit code 255
and here is my pveversion -v:
root@KPNO-PRD-TES:~# pveversion -v
proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-32-pve)
pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73)
pve-kernel-2.6.32-32-pve: 2.6.32-136
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.1-34
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-23
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-5
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Can somebody help me, please?
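One way to get a more detailed error (an assumption, not something suggested in this thread) is to run the proxied command from the error message by hand and look at what it prints:
Code:
# run the vncproxy command for VM 100 directly, without the nc wrapper hiding stderr
/usr/sbin/qm vncproxy 100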
↧
Using Chef-proxmox to maintain Patches to Proxmox host (kernel modules for PPP)
Hi - I was wondering:
After reading http://www.howtoforge.com/how-to-add...nvz-containers, which says:
Before we can use PPP in the container, we must enable the PPP kernel modules on the host system:
modprobe tun
modprobe ppp-compress-18
modprobe ppp_mppe
modprobe ppp_deflate
modprobe ppp_async
modprobe pppoatm
modprobe ppp_generic
Question: how do people maintain a list of the modifications they make to their Proxmox servers so they can reapply those changes the next time they upgrade Proxmox?
(EDIT) I noted https://supermarket.getchef.com/cookbooks/proxmox today - is anyone using that?
Cheers,
Martin.
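Whatever tool ends up driving it, one simple pattern (a sketch, not taken from the Chef cookbook linked above) is to keep the change as a small idempotent script under version control and re-run it after each upgrade:
Code:
#!/bin/sh
# reapply-host-mods.sh -- re-run by hand or from a config-management tool after upgrades
for m in tun ppp-compress-18 ppp_mppe ppp_deflate ppp_async pppoatm ppp_generic; do
    modprobe "$m"
    grep -qx "$m" /etc/modules || echo "$m" >> /etc/modules
done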
↧
Expand VM disk on LVM (on ISCSI SAN multipath): error *FROM GUI*, OK from CLI
Hi all !
On a test cluster, when I expand a VM disk on LVM (on an iSCSI SAN with multipath) I see:
"error resizing volume '/dev/data-mp/vm-103-disk-1': Run `lvextend --help' for more information. (500)"
From the shell, lvextend works correctly.
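For reference, this is the kind of command that succeeds from the shell (the +10G increment is just an example):
Code:
# grow the logical volume backing the VM disk by an example 10G
lvextend -L +10G /dev/data-mp/vm-103-disk-1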
I use PVE 3.3, fully updated (I've also tried upgrading the whole cluster to pvetest; same error).
Thanks
Luca
↧
Compatibility with physical RAID controller
Hi everybody,
My client wants to buy a server (IBM or Lenovo) to run Proxmox 3.3 for server virtualization. Before deciding between IBM and Lenovo, I want to be sure whether Proxmox is compatible with these hardware RAID controllers:
- IBM: RAID M5110
- IBM: RAID M1115
- Lenovo: onboard Intel® RSTe SATA RAID 0/1/10/5, 6 Gbit/s
Which of these controllers is compatible with Proxmox 3.3?
Thanks for your answers
↧
No access to CEPH configuration with role Administrator
Hi,
As a non-root user with the Administrator role, I can't access the Ceph tabs (Status, Config, Monitor, Pools, etc.).
Error message: "Error Permission check failed (user != root@pam) (403)" - is this behavior by design, or is it a bug?
Thanks!
↧
Using Proxmox to host a CoreOS Cluster
Hi all,
I am currently trying to set up a CoreOS cluster using VMs inside Proxmox. However, I have a few hiccups and doubts; perhaps someone more experienced with Proxmox can help with this.
OK, so I use Proxmox to create 6 VMs on an 8-core dedicated server from online.net, with Proxmox VE 3.3 as the host operating system. My idea was to use VMs to simulate the creation of machines (similar to how you set up a CoreOS cluster on DigitalOcean). I used the bare-metal iPXE instructions to run CoreOS VMs inside Proxmox. The problem comes when I have to access those VMs: since noVNC has no way to start a session with an SSH key (AFAIK), I used the host shell to try to SSH into the VMs... however, there are a few things confusing me:
One is that every VM has the same IPv4 address, 10.0.2.15. Also, my bridged network settings did not work, so I started every VM with NAT; I could not get vmbr0 to become active. How can I give each VM its own IPv4 address? (See the bridge sketch below.)
How can I access the VMs from outside the Proxmox host? I typically use mRemoteNG or PuTTY to get root access to servers from a Windows laptop... is there any way to set this up? Theoretically there would be an SSH tunnel mRemoteNG <--> Proxmox <--> CoreOS VM; I don't know how to set this up, so any ideas on how to achieve it would be great.
That last question brings me to the bigger topic of how to expose services from those VMs to the outside world, not just to the host machine... for instance, I want to expose an nginx web server running inside a Docker container on my CoreOS VM to the outside world.
The main issue right now is that because I am using iPXE to boot the machines and I can't get into the VMs using SSH, I can't install CoreOS to disk, and since I can't get into the machines I can't run Docker containers on them... any tips in the right direction on how to solve part or all of my current issues would be great!
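On the bridged-network question above, a minimal sketch of a typical vmbr0 definition in /etc/network/interfaces on the Proxmox host; the interface name and addresses are placeholders, and online.net may impose its own gateway/MAC rules for guest traffic:
Code:
# /etc/network/interfaces on the host -- example bridge, placeholder addresses
auto vmbr0
iface vmbr0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0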
↧
High Availability - Migration problems - Proxmox 3
Hello everybody,
[I'm French, sorry for my English]
Today I have a problem with my Proxmox servers.
Explanation:
My cluster is composed of 3 Proxmox servers. They are running the same pve version [pve-manager/3.3-5/bfebec03 (running kernel: 2.6.32-32-pve)] and they have exactly the same hardware.
My VMs are stored on an iSCSI disk provided by a storage server.
Fence devices [IPMI] are completely reliable.
So, my High Availability cluster works perfectly when I simulate a node failure [for example: I shut down one Proxmox node]; the VMs migrate to another node according to the failover configuration written in the cluster.conf file.
It works well, no problem here.
Now my problem: when the Proxmox node [the one that was shut down in the failure test] comes back, the VMs try to migrate back to this node but, most of the time, I get migration problems, and I don't know why, because I am able to migrate the VMs manually just afterwards.
From time to time some VMs migrate back when the node comes up and others can't migrate... it depends.
What is the problem? Have you already encountered this problem?
Regards,
superwemba.
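One cluster.conf knob directly tied to automatic fail-back (worth checking, though this is an assumption and not a confirmed fix for this case) is the nofailback flag on the failover domain, which stops HA from moving VMs back on its own when the node returns; the node names below are placeholders:
Code:
<failoverdomain name="failover" nofailback="1" ordered="0" restricted="0">
    <failoverdomainnode name="node1"/>
    <failoverdomainnode name="node2"/>
    <failoverdomainnode name="node3"/>
</failoverdomain>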
↧
scheduler used in VM
Hi all,
Inspired by a thread about Ceph and which scheduler to use in a VM, I made a quick test using fio. The results are stunning, so I hope others can try to duplicate them.
My storage is based on ZFS.
Test file used:
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1 Linear
# iodepth=4 Very Light
# iodepth=8 Light
# iodepth=64 Moderate
# iodepth=256 Heavy
iodepth=64
Results
Code:
         NFS               iSCSI
CFQ      r:4537  w:1130    r:6927   w:1733
NOOP     r:7484  w:1874    r:11454  w:2874
It seems the scheduler has a big impact on performance when dealing with file systems that have sophisticated native caching.
↧
Tuning performance in a VM with the scheduler
Hi all,
Inspired by a thread about Ceph and which scheduler to use in a VM, I made a quick test using fio. The results are stunning, so I hope others can try to duplicate them.
My storage is based on ZFS.
Test file used:
Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern
[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1 Linear
# iodepth=4 Very Light
# iodepth=8 Light
# iodepth=64 Moderate
# iodepth=256 Heavy
iodepth=64
Results
Code:
         NFS               iSCSI
CFQ      r:4537  w:1130    r:6927   w:1733
NOOP     r:7484  w:1874    r:11454  w:2874
It seems the scheduler has a big impact on performance when dealing with file systems that have sophisticated native caching.
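For anyone reproducing this, a quick way to check and switch the scheduler inside the guest (sda is an assumption for the guest disk; the echo is not persistent across reboots):
Code:
# show the available schedulers for the guest disk; the active one is in brackets
cat /sys/block/sda/queue/scheduler
# switch to noop for this boot only
echo noop > /sys/block/sda/queue/scheduler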
↧
Disabling SSLv3
PayPal has notified users that it will stop supporting SSLv3 from 2014-12-03 onwards, after which only TLS will be accepted.
Adjust Apache conf files in all VPS/KVMs:
* For Wheezy/Squeeze and earlier - http://httpd.apache.org/docs/2.2/ssl/ssl_howto.html
* For Jessie and later - http://httpd.apache.org/docs/2.4/ssl/ssl_howto.html
* General - https://wiki.mozilla.org/Security/Server_Side_TLS
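For example, the directive those guides revolve around looks like this (check the SSL virtual host or ssl.conf actually in use before applying):
Code:
# in the Apache SSL configuration of each VPS/KVM
SSLProtocol all -SSLv3 -SSLv2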
↧
Problem Proxmox OVH
Hi, I'm on this forum because I have spent two days trying to use OVH failover IPs.
I have tried a thousand ways, but the Debian server does not respond to ping.
Could someone help me?
Is there a sysadmin around?
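For what it's worth, the pattern from OVH's own failover-IP guides for a bridged (vmbr0) guest is the failover IP with a /32 netmask routed via the host's gateway; a sketch with placeholders (FAILOVER.IP and GW.ADDR.254 must be replaced with your real values, and the failover IP needs a virtual MAC assigned in the OVH manager and set on the VM's NIC):
Code:
# /etc/network/interfaces inside the Debian guest -- placeholder values
auto eth0
iface eth0 inet static
        address FAILOVER.IP
        netmask 255.255.255.255
        broadcast FAILOVER.IP
        post-up route add GW.ADDR.254 dev eth0
        post-up route add default gw GW.ADDR.254
        post-down route del GW.ADDR.254 dev eth0
        post-down route del default gw GW.ADDR.254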
↧
Contents of storage not visible
I have a problem with one of the servers: the contents of the storage are not visible. On the other servers in the cluster the storage contents are visible (001.jpg).
At the same time, the status displays information about used and available space (002.jpg).
On the other servers all content is displayed (003.jpg).
What is the problem, and how can I solve it without reinstalling Proxmox? A reboot does not solve it.
Proxmox 3.0
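One first check worth trying on the affected node (an assumption, not a confirmed fix for this report) is restarting the Proxmox status and API daemons, since stale content listings sometimes come from them:
Code:
# on the affected node (PVE 3.x sysvinit services)
service pvestatd restart
service pvedaemon restart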
↧
Fencing/HA with Dell doesn't work when A/C power is unplugged
Hi forum,
My cluster is composed of 5 nodes; the fence devices and cluster.conf are completely configured. fence_node works fine on the nodes.
I have a problem when I directly unplug the A/C power of one node: the VMs are not migrated to another node. My cluster.conf:
<?xml version="1.0"?>
<cluster config_version="101" name="cluster">
<cman keyfile="/var/lib/pve-cluster/corosync.authkey"/>
<fencedevices>
<fencedevice agent="fence_idrac" ipaddr="10.0.19.1" login="admin" name="node1-idrac" passwd="XXXXXXXX"/>
<fencedevice agent="fence_idrac" ipaddr="10.0.19.2" login="admin" name="node2-idrac" passwd="XXXXXXXX"/>
<fencedevice agent="fence_idrac" ipaddr="10.0.19.3" login="admin" name="node3-idrac" passwd="XXXXXXXX"/>
<fencedevice agent="fence_idrac" ipaddr="10.0.19.4" login="admin" name="node4-idrac" passwd="XXXXXXXX"/>
<fencedevice agent="fence_idrac" ipaddr="10.0.19.5" login="admin" name="node5-idrac" passwd="XXXXXXXX"/>
</fencedevices>
<clusternodes>
<clusternode name="pve1" nodeid="1" votes="1">
<fence>
<method name="1">
<device action="off" name="node1-idrac"/>
</method>
</fence>
</clusternode>
<clusternode name="pve2" nodeid="2" votes="1">
<fence>
<method name="1">
<device action="off" name="node2-idrac"/>
</method>
</fence>
</clusternode>
<clusternode name="pve3" nodeid="3" votes="1">
<fence>
<method name="1">
<device action="off" name="node3-idrac"/>
</method>
</fence>
</clusternode>
<clusternode name="pve4" nodeid="4" votes="1">
<fence>
<method name="1">
<device action="off" name="node4-idrac"/>
</method>
</fence>
</clusternode>
<clusternode name="pve5" nodeid="5" votes="1">
<fence>
<method name="1">
<device action="off" name="node5-idrac"/>
</method>
</fence>
</clusternode>
</clusternodes>
<rm>
<failoverdomains>
<failoverdomain name="failover" nofailback="0" ordered="0" restricted="0">
<failoverdomainnode name="pve1"/>
</failoverdomain>
</failoverdomains>
<pvevm autostart="1" vmid="2040"/>
<pvevm autostart="1" vmid="2041"/>
<pvevm autostart="1" vmid="2042"/>
<pvevm autostart="1" vmid="2043"/>
<pvevm autostart="1" vmid="2044"/>
<pvevm autostart="1" vmid="2045"/>
<pvevm autostart="1" vmid="2046"/>
<pvevm autostart="1" vmid="2047"/>
<pvevm autostart="1" vmid="2048"/>
<pvevm autostart="1" vmid="2049"/>
<pvevm autostart="1" vmid="2050"/>
<pvevm autostart="1" vmid="2051"/>
<pvevm autostart="1" vmid="2052"/>
<pvevm autostart="1" vmid="2053"/>
<pvevm autostart="1" vmid="2054"/>
<pvevm autostart="1" vmid="2055"/>
<pvevm autostart="1" vmid="2056"/>
<pvevm autostart="1" vmid="2057"/>
<pvevm autostart="1" vmid="2058"/>
<pvevm autostart="1" vmid="2059"/>
<pvevm autostart="1" vmid="2060"/>
<pvevm autostart="1" vmid="2061"/>
<pvevm autostart="1" vmid="2062"/>
<pvevm autostart="1" vmid="2063"/>
<pvevm autostart="1" vmid="2064"/>
<pvevm autostart="1" vmid="2066"/>
<pvevm autostart="1" vmid="2067"/>
<pvevm autostart="1" vmid="2068"/>
<pvevm autostart="1" vmid="2069"/>
<pvevm autostart="1" vmid="2070"/>
<pvevm autostart="1" vmid="2071"/>
<pvevm autostart="1" vmid="2072"/>
<pvevm autostart="1" vmid="2073"/>
<pvevm autostart="1" vmid="2074"/>
<pvevm autostart="1" vmid="2075"/>
<pvevm autostart="1" vmid="2076"/>
<pvevm autostart="1" vmid="2077"/>
<pvevm autostart="1" vmid="2078"/>
<pvevm autostart="1" vmid="2079"/>
<pvevm autostart="1" vmid="2080"/>
<pvevm autostart="1" vmid="2081"/>
<pvevm autostart="1" vmid="2082"/>
<pvevm autostart="1" vmid="2083"/>
<pvevm autostart="1" vmid="2084"/>
<pvevm autostart="1" vmid="2085"/>
<pvevm autostart="1" vmid="2086"/>
<pvevm autostart="1" vmid="2087"/>
<pvevm autostart="1" vmid="2088"/>
<pvevm autostart="1" vmid="2089"/>
<pvevm autostart="1" vmid="2090"/>
<pvevm autostart="1" vmid="2091"/>
<pvevm autostart="1" vmid="2092"/>
<pvevm autostart="1" vmid="2093"/>
<pvevm autostart="1" vmid="2094"/>
<pvevm autostart="1" vmid="2095"/>
<pvevm autostart="1" vmid="2096"/>
<pvevm autostart="1" vmid="2097"/>
<pvevm autostart="1" vmid="2098"/>
</rm>
</cluster>
Regards.
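Commands often used to watch fencing and resource-manager state while reproducing the power-pull test (a debugging sketch, not something taken from this thread):
Code:
# run on a surviving node after pulling power on the victim node
fence_tool ls    # fence domain membership and pending fence operations
clustat          # cluster and rgmanager (HA service) status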
↧
Bug when moving a VM from snapshot-capable storage (ZFS) to non-snapshot-capable storage
Hi !
Case:
1) VM on node 1 on ZFS storage: create a snapshot
2) On node 1, move the VM's storage to an LVM (iSCSI) datastore
3) The old snapshot is still visible (???) but inconsistent.
4) The VM had a CD-ROM ISO set before the snapshot; it is now set to "none"
5) Migrate the VM to node 2: unable to migrate because an ISO is set (not true)
6) The VM .conf is left heavily uncleaned: the ISO is still set in an old snapshot section, but that snapshot refers to an old, inconsistent state
OK, if I modify the .conf and clean out the inconsistent state, everything works.
More checks would be needed to lock out these inconsistent states....
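For illustration, the kind of leftover block that has to be cleaned out of /etc/pve/qemu-server/<vmid>.conf looks roughly like this (all names and values are made up; the point is the stale parent reference and the old snapshot section):
Code:
# hypothetical 103.conf after the storage move
parent: presnap
ide2: none,media=cdrom
virtio0: lvm-iscsi:vm-103-disk-1,size=32G

[presnap]
ide2: local:iso/install.iso,media=cdrom
virtio0: zfs-store:vm-103-disk-1,size=32G
snaptime: 1417000000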
↧
Installing Proxmox on Supermicro X10SLH-F-B and configure it with Synology as Storage
Hello folks,
I want to use this thread to get my Proxmox host running on my Supermicro X10SLH-F-B, and after that I want to use my Synology as the storage.
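When it gets to the storage part, the usual way to attach a Synology NFS share is an entry like the following (a sketch; the storage ID, server address and export path are placeholders):
Code:
# /etc/pve/storage.cfg -- example NFS entry for the Synology
nfs: synology
        path /mnt/pve/synology
        server 192.168.1.20
        export /volume1/proxmox
        content images,iso,backup
        options vers=3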
↧