/var/log/cluster was my go-to log location on Proxmox 3. It looks like this has been retired in Proxmox 4. I'm trying to pin down which log contains the fence actions. Does this just get logged to syslog?
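A rough sketch of where this might live now, assuming the new HA stack logs through the systemd journal and syslog (the unit names below are my assumption):
Code:
# journal of the HA services that handle fencing (assumed unit names)
journalctl -u pve-ha-lrm -u pve-ha-crm -u watchdog-mux
# or simply grep syslog for fence-related entries
grep -i fence /var/log/syslog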
↧
Proxmox 4 Logs
↧
Proxmox 4 unable to open LXC console via GUI
Hi, I have installed Proxmox 4.0, release 4.0-57, and I have successfully created my first LXC container on top of a ZFS pool (yes, snapshots and cloning work like a charm).
The problem is when I click "console" after the container has been marked as running: Proxmox tells me to press "ctrl +a ctrl +a" to log into the container or "ctrl +a +q" to exit. Unfortunately neither works, so from the GUI the container is unusable.
Has anyone seen this issue, or can anyone lend a hand sorting it out?
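A possible workaround from the host shell, assuming the pct tool is available (101 below is a placeholder container ID), would be to bypass the GUI console entirely:
Code:
# get a shell inside the container directly
pct enter 101
# or attach to the container's console from the host terminal
pct console 101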
thank you
S
lxc-prmx.PNG
↧
↧
SYSLOG: /usr/sbin/irqbalance[XXXX]: irq XX affinity_hint subset empty
WHAT: Syslog Spam
WHEN: Bursts of 25 entries at ten-second intervals, i.e. about 150 log writes per minute.
SOFTWARE INFO: PVE Version: 4.0-57/cc7c2b53 on Debian Jessie (64bit), Linux 4.2.2-1-pve #1 SMP
HARDWARE INFO: 32 x Intel(R) Xeon(R) CPU E5-2630 v3 at 2.40GHz (2 Sockets) | 256 GB RAM | 1.4 TB SSD RAID 10 w/ MegaRAID 9271 Cache 1 GB + CacheVault
SCREENSHOT: not attached; the forum does not allow new accounts to post links, images or videos. :(
NARRATIVE: Using Proxmox VE 4, installed from an OVH template image, syslog is being spammed with irqbalance entries about "affinity_hint subset empty" conditions. I found a bug report for the issue indicating that it was fixed over a year ago (for the Ubuntu distribution).
WHAT I'VE TRIED: apt-get update, apt-get upgrade, apt-get install --reinstall irqbalance.
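Something I am considering but have not yet tried, as a sketch only: either stop irqbalance on the host or drop just these messages in rsyslog (the filter syntax assumes rsyslog 8 as shipped with Jessie):
Code:
# option 1: stop and disable irqbalance entirely
systemctl stop irqbalance
systemctl disable irqbalance
# option 2: filter only these entries out of syslog
echo ':msg, contains, "affinity_hint subset empty" stop' > /etc/rsyslog.d/10-irqbalance.conf
systemctl restart rsyslog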
Any ideas on how to fix this?
Thank you very much for your time.
↧
Proxmox 4 on Jessie - LVM Problem
Hi,
I have a strange problem with my Proxmox 4 installation. First I tried to upgrade my Proxmox 3 server to Proxmox 4 following the instructions on the wiki. Everything worked fine until the reboot with the Proxmox kernel; afterwards the system didn't come up again. It gets stuck in the initramfs sequence while activating the LVs from my volume group. Strangely, it can activate the LV for / and also the LV for swap, but then it gets stuck on the usr LV. If I start lvm in the initramfs busybox and do a vgchange -ay, I get all LVs enabled, and if I leave the shell the system boots up and works OK.
After messing around for a while I decided to reinstall. So I did a clean Debian Jessie installation, MD RAID 5, LVM on top. I had no issues during installation, and the stock Debian Jessie was able to boot after installation. Then I installed Proxmox 4 on top, following the instructions on the wiki, with exactly the same result: the system won't boot with the Proxmox kernel and gets stuck after the LV for swap. Everything worked nicely with Proxmox 3, and it also seems to work OK after manually activating the LVs; I can start, create, clone and remove VMs.
This is a screenshot when it gets stuck. Any ideas what's going wrong? Thank you.
IMG_1479.jpg
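For completeness, the manual recovery steps described above, plus an initramfs rebuild that might be worth trying afterwards (the rebuild is my assumption, not something from the wiki):
Code:
# at the (initramfs) busybox prompt:
lvm vgchange -ay    # activates all LVs in the volume group
exit                # the system then continues booting normally
# once booted, rebuilding the initramfs might be worth a try:
update-initramfs -u -k all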
↧
qemu-img convert gives error: Unsupported image type 'seSparse'
Any idea what I can do to correct this and make it work? Google searches have not turned up an answer.
root@prox2:/mnt/pve/NAS-VMWARE/daveskb# ls -latrh
total 6.8G
-rw-r--r-- 1 root root 0 Apr 22 2014 daveskb.vmsd
-rw-r--r-- 1 root root 262 Apr 22 2014 daveskb.vmxf
-rw-r--r-- 1 root root 166K Nov 9 21:35 vmware-21.log
-rw-r--r-- 1 root root 166K Nov 9 21:47 vmware-22.log
-rw-r--r-- 1 root root 112K Nov 9 22:01 vmware-23.log
-rw-r--r-- 1 root root 45K Nov 9 22:04 vmware-24.log
-rw-r--r-- 1 root root 45K Nov 9 22:04 vmware-25.log
-rw-r--r-- 1 root root 166K Nov 9 22:12 vmware-26.log
-rw------- 1 root root 497 Nov 9 23:18 daveskb.vmdk
-rw------- 1 root root 8.5K Nov 9 23:41 daveskb.nvram
-rw------- 1 root root 6.9G Nov 9 23:41 daveskb-sesparse.vmdk
-rwxr-xr-x 1 root root 2.4K Nov 9 23:41 daveskb.vmx
-rw-r--r-- 1 root root 166K Nov 9 23:41 vmware.log
root@prox2:/mnt/pve/NAS-VMWARE/daveskb# qemu-img convert -f vmdk /mnt/pve/NAS-VMWARE/daveskb/daveskb.vmdk -O qcow2 /mnt/pve/NAS-qcow2/images/daveskb.qcow2
qemu-img: Could not open '/mnt/pve/NAS-VMWARE/daveskb/daveskb.vmdk': Unsupported image type 'seSparse'
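A possible workaround, assuming the disk is still reachable from an ESXi host: clone the seSparse disk to a plain thin-provisioned VMDK with vmkfstools first, then convert that copy with qemu-img (the target file name is a placeholder):
Code:
# on the ESXi host: clone the seSparse disk to a regular thin VMDK
vmkfstools -i daveskb.vmdk -d thin daveskb-thin.vmdk
# back on Proxmox: convert the thin copy instead of the original
qemu-img convert -f vmdk daveskb-thin.vmdk -O qcow2 /mnt/pve/NAS-qcow2/images/daveskb.qcow2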
↧
↧
Is it possible to run an NFS server within an LXC container?
My attempt to run an NFS server within an LXC container failed. I used the Debian-based TurnKey Linux fileserver template and tried to start the NFS kernel server inside the container (/etc/exports was already configured):
Is there some trick to enable this inside LXC, or is it generally impossible because the nfs-kernel-server cannot be shared with containers?
I would prefer to set up the NFS server within an LXC container and not enable it on the Proxmox host itself.
Thank you!
Code:
# /etc/init.d/nfs-kernel-server start
mount: nfsd is write-protected, mounting read-only
mount: cannot mount nfsd read-only
Exporting directories for NFS kernel daemon....
Starting NFS kernel daemon: nfsdrpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
failed!
# mount -t nfsd nfsd /proc/fs/nfsd
mount: nfsd is write-protected, mounting read-only
mount: cannot mount nfsd read-only
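One approach I have seen suggested, as a sketch only and at the cost of container isolation: relax the AppArmor confinement for this container (this assumes PVE 4 accepts raw lxc keys in /etc/pve/lxc/<vmid>.conf; 101 is a placeholder ID):
Code:
# on the Proxmox host (assumption: raw lxc keys are honoured here)
echo 'lxc.aa_profile: unconfined' >> /etc/pve/lxc/101.conf
pct stop 101 && pct start 101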
↧
LXC Network access
Hi,
I just installed an LXC CentOS 7 container on a new Proxmox v4 server, and I can't access the new container via ssh or http.
I get: ssh: connect to host, port 22: Connection refused
Is there anything special I should do?
iptables and firewalld are disabled.
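A quick check from the host, assuming the container ID is 101 (placeholder), to see whether sshd is installed and running inside at all:
Code:
pct enter 101
# inside the container:
systemctl status sshd
yum install -y openssh-server    # in case the template ships without it
systemctl enable sshd && systemctl start sshd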
Regards,
↧
Gluster packages from gluster.org
Dear all,
There are several posts about problems with Gluster and gluster mounts, especially when rebooting the node that was entered as the GlusterFS server.
We are investigating this issue, and first of all we wonder whether there would be any problems updating Gluster with packages from gluster.org.
Environment:
PVE 4.0
glusterfs-client 3.5.2-2+deb8u1 amd64 clustered file-system (client package)
glusterfs-common 3.5.2-2+deb8u1 amd64 GlusterFS common libraries and translator modules
Gluster Server
glusterfs-client 3.5.2-2+deb8u1 amd64 clustered file-system (client package)
glusterfs-common 3.5.2-2+deb8u1 amd64 GlusterFS common libraries and translator modules
glusterfs-server 3.5.2-2+deb8u1 amd64 clustered file-system (server package)
Gluster is running on 3 identical servers with replica=3. Previously we were running CentOS on the same servers with Gluster 3.7 and the gluster mounts had no problems at all; now with version 3.5.2 and Proxmox there are frequent problems.
We are thinking of 2 different scenarios:
1. just update Gluster and keep using the PVE GUI for the mount
2. do a manual mount and use "Directory" in the PVE GUI (see the sketch below)
Does anyone have input or remarks on this?
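For scenario 2, roughly what we have in mind (volume name and mount point are placeholders):
Code:
# manual GlusterFS mount on each PVE node
mkdir -p /mnt/glustervol
mount -t glusterfs node1:/glustervol /mnt/glustervol
# persistent variant for /etc/fstab:
# node1:/glustervol /mnt/glustervol glusterfs defaults,_netdev,backupvolfile-server=node2 0 0
# then add /mnt/glustervol as a "Directory" storage in the PVE GUI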
Regards Soeren
↧
Forwarding by Web GUI
Proxmox 4: can I set up port forwarding for a VM using the Proxmox web GUI, or only via the ssh console?
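For reference, the kind of rule I would otherwise add by hand over ssh (interface, ports and the VM address 192.168.1.10 are placeholders):
Code:
# forward TCP port 8080 on the host to port 80 on the VM
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 8080 -j DNAT --to-destination 192.168.1.10:80
iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 80 -j ACCEPT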
Thank you!
Sent from my SGP771 via Tapatalk
↧
↧
Backup proxmox (v4) configuration
Hi,
I want to back up my Proxmox V4 configuration (to be able to restore it quickly in case of a problem).
To do this I plan to back up the contents of the "/etc/pve/" directory.
Do you think that's enough (I just want to back up the configuration: NFS mounts, network configuration, ...)?
Note that all the containers and virtual machines are stored on an NFS mount (on another server, with its own backup).
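The sort of thing I have in mind, as a sketch (the files beyond /etc/pve are my guess at what else matters for a quick restore):
Code:
# archive the cluster configuration plus host-level network and name settings
tar czf /root/pve-config-$(hostname)-$(date +%F).tar.gz \
    /etc/pve /etc/network/interfaces /etc/hosts /etc/hostname /etc/resolv.conf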
Thanks in advance.
/Xavier
↧
systemctl fails on jessie template
Using the debian-8.0-standard template systemctl gives:
'Failed to get D-Bus connection: Unknown error -1'
It seems that systemd isn't actually running inside the container.
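A quick way to confirm whether systemd is really PID 1 inside the container (generic commands, nothing template-specific):
Code:
# run inside the container: what is actually running as PID 1?
ps -p 1 -o comm=
readlink /proc/1/exe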
Is this a problem with the template or do you need to add some extra settings?
I've also looked into the lxc-systemd instructions on the Arch wiki, but they didn't work.
(can't post link)
Any ideas/tips are welcome.
↧
LXC: raw to system files?
Hi there, I just set up a new server with Proxmox 4 and I see that LXC is now being used. I wonder: is it also possible with LXC to get the container files as normal files on the host, and not as a .raw file, like it used to be with OpenVZ?
↧
How to migrate VMs on a ZFS file system, or could I move them by creating a cluster?
Hello, and thank you for viewing this thread. I have two Proxmox nodes running Proxmox 4.
I have 1 VM on server A and 4 VMs on server B.
The file system is ZFS; server A has 4 drives and I did software RAID 10.
The other server is software RAID 1 with ZFS on 2 drives.
I need to migrate all of the VMs to one server because I need to reuse one of them.
Any idea how I can do this? I tried creating a cluster so I could live-migrate all the VMs to one server, but it told me that I couldn't create a cluster because one of the servers had a VM on it.
I'm stuck, and this is very urgent because I really need this server, but I also need to keep the VMs that are on it, so it is imperative that I consolidate everything onto one node.
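One route I am considering instead of clustering, as a rough sketch (storage name, VMID and archive path are placeholders): offline backup and restore.
Code:
# on server A: back up the VM to storage reachable from both nodes (e.g. NFS or a USB disk)
vzdump 100 --storage backup-nfs --mode stop --compress lzo
# copy the archive to server B if the storage is not shared, then restore it there:
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2015_11_10-00_00_00.vma.lzo 100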
please help!
↧
↧
Why not a ZFS-over-NFS storage plugin?
hello,
Maybe it would be nice to have a ZFS-over-NFS storage plugin.
The ZFS pool would be NFS-mounted as a volume, and a ZFS dataset would be created remotely for each VM in order to allow quotas and snapshots.
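Roughly what I imagine such a plugin doing behind the scenes (host name, pool and sizes are placeholders):
Code:
# on the storage box: one dataset per VM disk, with a quota, living under the NFS export
ssh storagehost zfs create -o quota=32G tank/vms/vm-100-disk-0
ssh storagehost zfs snapshot tank/vms/vm-100-disk-0@before-upgrade
# on the PVE node the export is simply NFS-mounted:
mount -t nfs storagehost:/tank/vms /mnt/pve/zfs-over-nfs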
What do you think, is this something doable?
regards serge
↧
Windows 2012 VPN (PPTP) client connection
I have a Windows Server 2012 guest installed on a Proxmox host.
I try to connect to my VPN server, but I keep ending up with error 619.
My configuration:
The Windows box is using a bridged network. It can access the Internet. The firewall is turned off. MASQUERADE is configured on Proxmox.
I have already tried various NICs (Intel E1000, Realtek, VirtIO), but with no result.
VPN server is working fine from another PC (standalone box).
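One more thing I plan to try on the Proxmox host, since PPTP needs GRE to survive the MASQUERADE (a sketch; these are the standard netfilter PPTP helper modules):
Code:
# load the PPTP connection-tracking / NAT helper modules on the host
modprobe nf_conntrack_pptp
modprobe nf_nat_pptp
# make them persistent across reboots
printf "nf_conntrack_pptp\nnf_nat_pptp\n" >> /etc/modules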
Any hints how to solve it?
↧
Storage: local folder also as NFS share
I am trying out a Proxmox cluster.
I have installed 2 servers, each with local storage.
One of them, since it has a very large disk, I have shared via NFS. That way I can share the disk and do live migrations.
I have the following config
Node1
localDir
importedLocalDir (the same localDir at Node1 but mounted as NFS)
Node2
localDir
importedLocalDir (the same as localDir at Node1, mounted as NFS)
Can this be a problem, since the same storage is available both as localDir and as an NFS share, and particularly on Node 1 it's even present twice? Can there be concurrency issues?
Note: since localDir and importedLocalDir on Node 1 are basically the same, I could hide localDir, but running the VMs directly from the local directory sounds better for performance.
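For clarity, a sketch of the kind of /etc/pve/storage.cfg this leads to (paths and addresses are placeholders); restricting each entry with "nodes" is one idea to keep the double definition from overlapping:
Code:
dir: localDir
        path /data/vms
        content images,rootdir
        nodes node1

nfs: importedLocalDir
        server node1
        export /data/vms
        path /mnt/pve/importedLocalDir
        content images,rootdir
        nodes node2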
↧
Migrations fail with HTB: quantum of class 10001 is big. Consider r2q change
Hi there,
I am running Proxmox 4.0 in a cluster with kernel 4.2.2-1-pve.
My virtual machines use traffic shaping like: net0: virtio=96:89:34:1C:AC:C6,bridge=vmbr0,rate=12
The network is quite fast (10 Gigabit).
I see strange effects: live migrations sometimes fail, sometimes they work.
Sometimes the same migration which just failed succeeds 20 minutes later.
On the Proxmox machines, in the dmesg output, I see messages like the ones in the Code block below.
Do you have any idea what is happening and what I can do to work around the problem?
Code:
[ 2641.493905] vmbr0: port 9(tap198i0) entered forwarding state
[ 2641.523523] HTB: quantum of class 10001 is big. Consider r2q change.
[ 2642.952392] vmbr0: port 9(tap198i0) entered disabled state
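In case it helps, the HTB classes the message refers to can be inspected on the tap device while the VM is running (interface name taken from the dmesg lines above):
Code:
# show the qdisc and classes created for the rate=... setting
tc qdisc show dev tap198i0
tc class show dev tap198i0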
Thanks
adoII
↧
↧
Proxmox pfSense best practice?
Hi all,
I would like to know what the best practice for doing this would be. I'm using Proxmox with 2 NICs and want to make a pfSense VM. My topology would be:
ISP modem/router----eth0/vmbr0---pfsense----eth1/vmbr1----LAN
My config would be
#WAN#vmbr0 DHCP
#LAN#vmbr1 static 192.168.1.2(pfsense ip)
In theory this is the right way to do it, right? But how would I access Proxmox? Would I need a new NIC and make a vmbr3 just for accessing Proxmox?
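A sketch of the /etc/network/interfaces I have in mind; giving the host its own address on vmbr1 (192.168.1.3 below is a made-up management IP) is what I hope avoids a third NIC:
Code:
auto vmbr0
iface vmbr0 inet manual
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.3
        netmask 255.255.255.0
        gateway 192.168.1.2
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0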
thanks in advance!
↧
Centos 7 LXC Container and Bind
I was moving one of my DNS servers to Proxmox 4. I created a CentOS 7 LXC container and installed bind on it with "yum install bind bind-utils". Named will not start.
service named start
Redirecting to /bin/systemctl start named.service
Job for named.service failed. See 'systemctl status named.service' and 'journalctl -xn' for details.
# service named status
Redirecting to /bin/systemctl status named.service
named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; disabled)
Active: failed (Result: exit-code) since Tue 2015-11-10 18:04:47 CST; 25s ago
Process: 979 ExecStartPre=/usr/sbin/named-checkconf -z /etc/named.conf (code=exited, status=226/NAMESPACE)
Nov 10 18:04:47 ns1.testtest123.net systemd[979]: Failed at step NAMESPACE spawning /usr/sbin/named-checkconf: Permission denied
Nov 10 18:04:47 ns1.testtest123.net systemd[1]: named.service: control process exited, code=exited status=226
Nov 10 18:04:47 ns1.testtest123.net systemd[1]: Failed to start Berkeley Internet Name Domain (DNS).
Nov 10 18:04:47 ns1.testtest123.net systemd[1]: Unit named.service entered failed state.
Is this something to do with AppArmor? A few months ago I created a CentOS 7 OpenVZ container on Proxmox 3 with no issues at all running a DNS server.
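One workaround I want to try, purely a sketch: override the unit to drop the systemd sandboxing options that seem to trigger the NAMESPACE failure (assuming that, and not AppArmor, is really the cause):
Code:
# inside the container: override named.service without editing the packaged unit
mkdir -p /etc/systemd/system/named.service.d
cat > /etc/systemd/system/named.service.d/lxc.conf <<'EOF'
[Service]
PrivateTmp=false
PrivateDevices=false
EOF
systemctl daemon-reload
systemctl start named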
↧
↧