Proxmox 4 HA VM Freeze State
When I shut down a host that is running an HA VM, the VM is put into a "freeze" state until the node comes back online. This is not what we are used to or expect from an HA cluster. Is there a way to ensure the VM gets moved to another available node instead of waiting for the node to come back online? Some of these servers take 5-10+ minutes to reboot.
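For anyone hitting the same behaviour, a quick way to see what the HA stack thinks it is doing is to query it from any node. This is only a sketch assuming PVE 4's ha-manager tooling; the service ID vm:100 is a placeholder.
Code:
# assuming PVE 4's ha-manager; vm:100 is only an example service ID
ha-manager status     # shows quorum, the current master and the per-service state (started/freeze/fence)
ha-manager config     # lists which VMs/CTs are HA-managed and their configured state/group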
↧
Cluster proxmox 3.4 broken
Last night all of the nodes logged messages like the two excerpts below.
I don't understand what happened. When I ran /etc/init.d/cman start, all nodes were rebooted by the fence daemon, and clustat now fails (see the last excerpt).
Could someone explain what could have happened to the cluster? The network switch doesn't report any errors on its ports.
Code:
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: client command is 5
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: About to process command
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: command to process is 5
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: Returning command data. length = 0
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: sending reply 40000005 to fd 28
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: read 20 bytes from fd 28
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: client command is 7
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: About to process command
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: command to process is 7
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: get_all_members: allocated new buffer (retsize=1024)
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: get_all_members: retlen = 6600
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: command return code is 15
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: Returning command data. length = 6600
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: sending reply 40000007 to fd 28
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: read 20 bytes from fd 28
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: client command is 91
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: About to process command
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: command to process is 91
Nov 15 06:28:46 cluster-1-1 corosync[3821]: cman killed by node 7 because we were killed by cman_tool or other application
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: command return code is 0
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: Returning command data. length = 24
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] daemon: sending reply 40000091 to fd 28
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] ais: deliver_fn source nodeid = 7, len=34, endian_conv=0
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] ais: deliver_fn source nodeid = 7, len=24, endian_conv=0
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: Message on port 0 is 6
Nov 15 06:28:46 cluster-1-1 corosync[3821]: [CMAN ] memb: got KILL for node 1
root@cluster-1-1:~#
or
Code:
Nov 16 06:25:32 cluster-1-9 pvedailycron[29872]: <root@pam> starting task UPID:cluster-1-9:000074BD:10B16646:56495ABC:aptupdate::root@pam:
Nov 16 06:25:32 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:32 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:32 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:32 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:34 cluster-1-9 pvedailycron[29885]: update new package list: /var/lib/pve-manager/pkgupdates
Nov 16 06:25:36 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:36 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:36 cluster-1-9 pvedailycron[29872]: <root@pam> end task UPID:cluster-1-9:000074BD:10B16646:56495ABC:aptupdate::root@pam: OK
Nov 16 06:25:36 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:36 cluster-1-9 pmxcfs[3369]: [status] crit: cpg_send_message failed: 9
Nov 16 06:25:36 cluster-1-9 postfix/pickup[22710]: EDC4932641F: uid=0 from=<root>
Nov 16 06:25:37 cluster-1-9 postfix/cleanup[29930]: EDC4932641F: message-id=<20151116042536.EDC4932641
Code:
root@cluster-1-3:~# clustat
Could not connect to CMAN: No such file or directory
root@cluster-1-3:~#
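For reference, these are the usual first checks on a PVE 3.x/cman cluster after an event like this; a rough sketch only, and the omping line (using the two node names from the logs above) is just a sanity check of multicast, which cman depends on.
Code:
pvecm status            # quorum, expected votes, member count
cman_tool nodes         # membership as cman sees it
clustat                 # rgmanager view (already failing above)
omping cluster-1-1 cluster-1-9   # run on both nodes; multicast loss here often explains sudden fencing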
↧
KVM to LXC
Is there an easy way to move a Linux VM to LXC?
Can I just mount the raw file and boot it with LXC?
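You can't boot the raw image directly, since a container has no kernel of its own, but one common approach is to loop-mount the image, tar up the root filesystem and create the container from that tarball. A rough sketch only; the paths, VMID and options below are examples.
Code:
losetup -fP --show /var/lib/vz/images/100/vm-100-disk-1.raw   # maps partitions as /dev/loopXpN
mkdir -p /mnt/kvmroot && mount /dev/loop0p1 /mnt/kvmroot      # mount the guest's root partition
tar -czf /var/lib/vz/template/cache/converted.tar.gz -C /mnt/kvmroot .
pct create 200 local:vztmpl/converted.tar.gz --hostname converted   # add --net0/--rootfs options as needed
umount /mnt/kvmroot && losetup -d /dev/loop0
You will usually also need to clean up the old /etc/fstab and network configuration inside the new container afterwards.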
↧
disable Intel RC6 in Grub
I have this line in /etc/default/grub:
Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.i915_enable_rc6=0"
but I have a feeling this setting is being ignored (since the upgrade to PVE 4.0), because the server crashes randomly.
Code:
for i in /sys/module/i915/parameters/*; do echo ${i}=`cat $i`; done
returns
Code:
...
/sys/module/i915/parameters/enable_rc6=1
...
Any idea where I could put this line to disable RC6?
Thanks.
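For what it's worth, the sysfs listing above shows the parameter is called enable_rc6 on this kernel, so the old i915_enable_rc6 spelling is probably being ignored. A sketch of what could be tried, assuming GRUB is the bootloader in use:
Code:
# /etc/default/grub -- use the parameter name that matches the sysfs entry above
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.enable_rc6=0"
# then regenerate the config and reboot
update-grub
# and verify after the reboot:
cat /sys/module/i915/parameters/enable_rc6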
↧
Kernel 4.2.3-2-pve - Kernel Bug at net/8021q/vlan.c:89
Hi, got a nasty crash this morning while testing PVE 4.x
IMG_0786.jpg
How to reproduce:
1: Install PVE with the latest ISO and patch/upgrade to the latest 4.x version from pve-no-subscription.
2: The network switch port is configured with 2 tagged VLANs; no untagged VLAN assignment.
3: Move the PVE management interface to a separate VLAN and add a bridge on the same eth with VLAN awareness active.
4: Reboot once to apply, then reboot one more time to get the crash.
As long as you have a bridge bound to that interface and another bridge on the same interface with a VLAN, you will get that crash.
See the image below for the network configuration on the PVE host.
pve network.png
With PVE 3.x there is no problem; everything works fine. We have a multi-node cluster up and running with 100+ KVM guests.
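For readers without the attachment, the layout described above looks roughly like this in /etc/network/interfaces; the bridge names, the VLAN ID 10 and the addresses are placeholders, not taken from the report.
Code:
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eth0
        bridge_vlan_aware yes
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        gateway 192.0.2.1
        bridge_ports eth0.10
        bridge_stp off
        bridge_fd 0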
↧
Proxmox 4 log entries should I be worried?
I am testing a server and get the following in the logs. On my other Proxmox 4 server I do NOT get these entries.
Is there an issue I should worry about?
audit: type=1400 audit(1447232527.712:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=1355 comm="apparmor_parser"
audit: type=1400 audit(1447232527.712:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=1355 comm="apparmor_parser"
audit: type=1400 audit(1447232527.712:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=1355 comm="apparmor_parser"
audit: type=1400 audit(1447232527.716:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=1355 comm="apparmor_parser"
↧
offline node, VMs actually working
Hi all,
After I tried to create a cluster with another Proxmox server (both are the same version), I ran into trouble: my Proxmox server shows up as an offline node, but actually all of the VMs are working properly. Please have a look at the attachment Capture.PNG.
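A couple of things worth checking on the node that shows as offline; a sketch assuming a PVE 4 / systemd install (on PVE 3 the matching init scripts apply).
Code:
pvecm status                              # is the node quorate and listed as a member?
systemctl status pve-cluster corosync pvestatd
systemctl restart pvestatd pve-cluster    # often enough when only the GUI status is stale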
↧
non-PAM Permissions for Ceph in PVE4
I've been able to wrestle the PVE permissions system into doing almost everything I've needed. I'm stuck on how to grant access to Ceph, though. As a "pve realm" user with "Administrator" privileges, I get "Permission check failed (user != root@pam) (403)" when attempting to show any Ceph attributes from the web interface.
Is there something "special" that needs to be done here to make Ceph work, perhaps adding some user to /etc/group or similar?
Thanks.
Dan
Is there a "special" thing that needs to be done here to make ceph work, perhaps adding some user to the /etc/group files to similar?
Thanks.
Dan
↧
proxmox 4 plan for drive failure zfs raid 10
OK, I get the ZFS commands for replacing a drive. However, during testing I yanked out a drive, then:
ran zpool online sda2
and scrubbed rpool.
However, the sda2 caught my eye.
Upon running fdisk, I notice that two of the drives have a boot partition.
On the old mdadm setup I would have then done this:
##################
Add the drive back:
Format it to match (the first drive is the good drive, you get NO WARNING - make sure you get it right!):
sfdisk -d /dev/sda | sfdisk /dev/sda
grub-install /dev/sda
##################
So I guess my question is: how do I put the boot partition back, and do I do it before or after the zfs replace command?
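For reference, with the GPT-labelled disks a ZFS-root install uses, the rough order is usually: copy the partition layout first, then let ZFS resilver, then reinstall GRUB. This is a sketch only; the device names are placeholders and you must double-check which disk is the healthy one.
Code:
sgdisk --replicate=/dev/sdNEW /dev/sdGOOD    # copy the good disk's GPT layout (incl. the boot partition) onto the new disk
sgdisk --randomize-guids /dev/sdNEW          # give the copy unique GUIDs
zpool replace rpool <old-disk-or-guid> /dev/sdNEW2   # resilver onto partition 2, matching the sdX2 layout above
grub-install /dev/sdNEW                      # put the bootloader back on the new disk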
↧
ZFS filesystem cannot store ISOs?
I have a test install of Proxmox 4 and I created a ZFS raidz1 for data use (this raidz1 is not the Proxmox boot pool).
When adding the ZFS storage through the Proxmox storage GUI, the only content options are "container, disk image" and "container". It seems I cannot store ISO images.
Is this by design, so that ISO images cannot be stored on it?
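The ZFS storage type only offers the disk-image/container content types seen above, so the usual workaround is to carve out a dataset and add it as a plain Directory storage for ISOs. A sketch; the pool and storage names are examples.
Code:
zfs create tank/iso                          # 'tank' stands for your raidz1 pool name
pvesm add dir iso-store --path /tank/iso --content iso,vztmpl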
↧
Unable to access web GUI when configuring Proxmox with a public static IP
Hi,
I'm trying to set up Proxmox with a public static IP that I bought from my ISP (with a different netmask/gateway as well), and I'm having trouble connecting to the web GUI. When I enter the information for the static IP, I can ping outside servers, and I can ping the Proxmox server from a computer on the same network, but I can't ping the server from outside the network. I also can't access the web GUI from within the same network.
When I enter a local IP with the same gateway and netmask that my other computer on the network uses, I don't have any issues connecting to the web GUI.
My end goal is to be able to connect to the web GUI over the internet, but I have no idea what my problem is right now.
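Two quick things worth ruling out while debugging (a sketch, nothing here is specific to this ISP setup): the GUI is served by pveproxy on TCP port 8006, and the URL has to include that port.
Code:
ss -tlnp | grep 8006      # on the host: is pveproxy listening?
systemctl status pveproxy
# from a client on the same network, the URL must be of the form:
#   https://<server-ip>:8006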
↧
Problem with backup of a CT (Proxmox 4)
Hello, I have a CT running and its image is on a NAS over NFS.
When I run the backup to local storage, I get the error below.
Thank you! :)
Code:
INFO: starting new backup job: vzdump 100 --mode snapshot --node cartama --compress lzo --storage local --remove 0
INFO: Starting Backup of VM 100 (lxc)
INFO: status = running
INFO: mode failure - some volumes does not support snapshots
INFO: trying 'suspend' mode instead
INFO: backup mode: suspend
INFO: ionice priority: 7
INFO: starting first sync /proc/1545/root// to /var/lib/vz/dump/vzdump-lxc-100-2015_11_17-07_39_28.tmp
INFO: rsync: readlink_stat("/proc/1545/root/var/backups/dpkg.statoverride.3.gz") failed: Input/output error (5)
INFO: IO error encountered -- skipping file deletion
INFO: rsync: readlink_stat("/proc/1545/root/var/log/syslog.3.gz") failed: Input/output error (5)
INFO: file has vanished: "/proc/1545/root/var/lib/unifi-video/videos/temp/hls/15d846dc-0942-3143-8c4e-dcbebef5c6b2/record/0418D6C36AB3_2/segment_1447720033827_1447720033827_10019.ts"
INFO: Number of files: 27,505 (reg: 21,151, dir: 2,290, link: 4,031, dev: 2, special: 31)
INFO: Number of created files: 27,504 (reg: 21,151, dir: 2,289, link: 4,031, dev: 2, special: 31)
INFO: Number of deleted files: 0
INFO: Number of regular files transferred: 21,144
INFO: Total file size: 1,492,443,127 bytes
INFO: Total transferred file size: 1,492,156,535 bytes
INFO: Literal data: 1,492,169,768 bytes
INFO: Matched data: 0 bytes
INFO: File list size: 720,860
INFO: File list generation time: 0.001 seconds
INFO: File list transfer time: 0.000 seconds
INFO: Total bytes sent: 1,494,133,151
INFO: Total bytes received: 428,207
INFO: sent 1,494,133,151 bytes received 428,207 bytes 271,738,428.73 bytes/sec
INFO: total size is 1,492,443,127 speedup is 1.00
INFO: rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
ERROR: Backup of VM 100 failed - command 'rsync --stats -X --numeric-ids -aH --delete --no-whole-file --inplace --one-file-system --relative /proc/1545/root///./ /var/lib/vz/dump/vzdump-lxc-100-2015_11_17-07_39_28.tmp' failed: exit code 23
INFO: Backup job finished with errors
TASK ERROR: job errors
↧
Paris Open Source Summit
Hi;
Why isn't there a Proxmox stand at POSS?
http://opensourcesummit.paris/preins...by=time&step=0
I have kept voluntarily. Oh well.
Moula From Paris ( Place de la République )
↧
[SOLVED] Proxmox 4 "pve" not found after first start.
Hi good people. My customer got two new servers for virtualization. I have decided to use Proxmox 4 with ZFS local storage and an SSD cache for ZIL and L2ARC. I think Proxmox is the best small/medium solution for a standalone server or a small virtualization cluster. I already have 4 Proxmox servers in production, running versions from 1.x to 3.x. Now I want to use it with a commercial subscription, but I can't get Proxmox to start after a fresh install.
Hardware configuration:
M/B - Supermicro X10DRi
RAM - 128 GB DDR4 2133, 4-channel mode
NIC - built-in dual Intel I350 gigabit Ethernet
SSD - 200 GB Intel DC S3610, 800 GB Intel DC S3610
HDD - 6 x Seagate ES 3.5 Enterprise 2TB
No hardware RAID; all disks are connected to the built-in Intel SATA controllers (SATA and sSATA) in AHCI mode.
2 SSDs + 2 HDDs are connected to sSATA and 4 HDDs to SATA (I have tried swapping disks and moving the SSDs from the sSATA to the SATA controller).
The 200 GB Intel SSD will be used as the system SSD for Proxmox plus ISOs and templates, the 800 GB SSD will be used for ZIL and L2ARC, and the HDDs will be used as ZFS mirror storage.
BIOS settings are at defaults (I have tried tuning the BIOS without success); on the SATA controllers aggressive power management is disabled, on the SSDs Solid State Drive mode is enabled, and on the HDDs Hard Disk Drive mode is enabled.
The problem is: after installing Proxmox 4 to the 200 GB SSD, on the first start I get this error:
---Loading, please wait ...
Volume group "pve" not found
Cannot process volume group pve
Unable to find LVM volume pve/root
Gave up waiting for root device.
---
I checked with lvm vgdisplay whether the volume group exists - it does. I scanned it with lvm lvscan, and it shows that all LVs are inactive:
---
inactive '/dev/pve/swap' [23,25GiB] inherit
inactive '/dev/pve/root' [46,50GiB] inherit
inactive '/dev/pve/data' [100,44GiB] inherit
---
OK. I made them active with lvm vgchange -a y pve and rebooted, but after the reboot I got the same result. The strange thing is that sometimes, if I reset the server, it boots into the system, but very seldom; most of the time I get the "Volume group "pve" not found" message. I think this is something to do with timing and disk activation at startup - maybe the LVM kernel module doesn't start, or starts too late, or something else. Anyway, I need help - I'm not a very skilled Linux admin; most of the time I manage Windows systems. I need your help to get this booting.
PS. I have some further questions - maybe someone has a link to a site, a book or something else with info about optimizing a Supermicro BIOS for KVM or virtualization. I found good guides on the Cisco, Fujitsu, HP and Dell sites about optimizing server BIOSes for virtualization, but nothing about Supermicro; last time I tuned the BIOS with a Cisco instruction for virtualization. And are there any good resources about tuning ZFS for storage with an SSD cache? I have found a lot of good articles and websites, but nowhere can I find, for example, what size the L2ARC, ARC and ZIL must be for a virtualization platform. I bought 3 books from Amazon - "Proxmox Cookbook" and "Mastering Proxmox" by Wasim Ahmed and "Proxmox High Availability" by Simon Cheng - but they don't answer these questions.
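In case it helps others, the usual stop-gaps for this symptom (LVs present but not activated in time) look like the sketch below; this is a generic Debian/LVM workaround, not Supermicro-specific advice.
Code:
# one-off, from the (initramfs) emergency shell you are dropped into:
lvm vgchange -ay
exit
# more permanent attempt: give the controllers more time, then rebuild the initramfs
# (add e.g. rootdelay=10 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub)
update-grub
update-initramfs -u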
↧
Anyone using Thecus Zub NAS C10GTR 10GBE Card? Suggestions?
I want to use this Ethernet NIC to connect 2 nodes at 10 Gbit for DRBD (plus a 3rd node for quorum), since it is much cheaper than the Intel ones, but I'm wondering if it works. Any experience?
Reliable alternatives?
Thanks in advance
↧
Proxmox web console is not accessible and VMs are down
Hello,
My web console for accessing Proxmox is down and my VMs are down as well.
However, I can ping the host and access it through SSH. I've rebooted, but it's still the same.
Could you please help me with this?
Thanks in advance.
Regards,
Nuno
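Hard to say more without logs, but the usual first checks via SSH are below; a sketch assuming a systemd-based PVE 4 install (on PVE 3 use the matching init scripts).
Code:
systemctl status pveproxy pvedaemon pve-cluster
journalctl -u pveproxy -u pve-cluster -b     # messages from the current boot
df -h /                                      # a full root filesystem is a frequent culprit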
↧
Problem running a converted vmdk SLES OS under PVE 4.0
Hello everybody!
We are using PVE 4.0 in our live System.
In the past I successfully converted four SLES 11 SP3 OSes with SAP databases to our Ceph storage, and those VMs from VMware run perfectly on our PVE 4.0 (converted from vmdk to raw).
But now I can't find a solution to run an older SLES with an older SAP system. Whatever I change - SCSI, NIC or anything else - nothing works. It's always the same error when booting SLES on PVE:
"Waiting for device /dev/sda2 to appear: ......not found -- exiting to /bin
$ _ (blinking cursor)"
The SLES OS has four HDDs; the first uses IDE, all the others use SATA.
The other SLES 11 SP3 VMs have the same configuration and they work fine. IDE is a must for the first HDD; with SCSI or VIRTIO the first HDD will not boot.
But why doesn't this SLES boot correctly - why does it lose the device sda2?
Thanks in advance!
Best regards,
Roman
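One thing that often bites converted VMware guests is an initrd and fstab still tied to the VMware disk controller. A sketch of what could be tried from a SLES rescue boot; the device names assume the layout described above.
Code:
# chroot into the converted system and rebuild the initrd so it includes the
# drivers for the controllers QEMU emulates (IDE/SATA in this case)
mount /dev/sda2 /mnt
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt mkinitrd
# also check /etc/fstab and the bootloader config for stale /dev/disk/by-id
# paths that only existed under VMware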
↧
Windows Kernel Performance Test 4.2.3.2
Hello again everyone. I am sharing one last performance test I performed tonight.
PVE
Power Edge R815 with 250 GB RAM and 4 sockets with 16 cores each, with a fibre optic adapter connected to an Infortrend storage array with SSDs.
VM
Windows 2008 R2 Eth with 50 GB of memory and 3 sockets with 15 cores each; the server receives an average of 110 connections and is configured per the best practices: https://pve.proxmox.com/wiki/Windows...best_practices
We upgraded Proxmox to the 4.2.3-2 kernel and restarted the server, then started the VM. It starts very fast; however, when users began to connect, the server became slow, started losing packets on the network and hung.
We then went back to the 3.19.8-1 kernel, which gives us stability and great performance.
Note: when the VM's memory is lowered to around 5 GB it works well and is fast; however, when the memory is increased to 50 GB it locks up as described.
Below is my current setup.
pve-manager: 4.0-57 (running version: 4.0-57 / cc7c2b53)
pve-kernel-3.19.8-1-pve: 3.19.8-3
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
qemu-kvm-pve: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
If anyone has any information about differences between the two kernels that directly impact Windows performance, please share.
Thank you.
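For anyone who wants to keep booting the 3.19 kernel by default instead of picking it manually at boot, a sketch: the grep only locates the exact menu entry title, and the GRUB_DEFAULT value below is a placeholder to be replaced with the real titles from your grub.cfg.
Code:
grep menuentry /boot/grub/grub.cfg | grep 3.19.8-1-pve
# then in /etc/default/grub, point GRUB_DEFAULT at that entry via its submenu, e.g.
# GRUB_DEFAULT="Advanced options for ...>... 3.19.8-1-pve"
update-grub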
↧
Enabling quotas in LXC
Hi! I transferred a container from OpenVZ to LXC. A hosting control panel is installed in the container, so quotas are necessary. Is there such a mechanism in LXC, and how do I enable it? /etc/fstab is empty.
↧
Cluster-wide scheduler (cron tasks)
Does anybody know how a cluster-wide cron could be implemented?
I need to start a cron job on some node in the cluster every day, but be sure that if a node fails (or is powered off) the task will be executed on another node.
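As far as I know there is no built-in cluster-wide cron in PVE, so one common do-it-yourself pattern is to install the same cron entry on every node and let a small wrapper decide whether the local node is the "elected" one. A hypothetical sketch: it assumes PVE 4's pvecm nodes output format, that cluster node names equal hostnames, and my-daily-task.sh is a placeholder.
Code:
#!/bin/sh
# run from the same crontab line on every node; only the member with the lowest
# node ID actually executes the task, so a failed/powered-off node is skipped
LOWEST=$(pvecm nodes | awk '/^ *[0-9]/ {print $1, $3}' | sort -n | head -n1 | awk '{print $2}')
[ "$(hostname)" = "$LOWEST" ] && exec /usr/local/bin/my-daily-task.sh   # placeholder script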
↧