Install of Server 2025 using a high amount of resources
Crash / unresponsive every 2-3 weeks
Hi,
I have installed Proxmox on a mini PC to run Home Assistant (VM) and other servers in LXC (MariaDB, InfluxDB, Grafana, Node-RED), and I am learning a bit more every day. All is working fine except for three system crashes, each of which happened about 17 days after the last reboot.
Any guidance to help me identify the cause(s) would be great.
Symptoms:
- PVE, the VM and the LXCs have disappeared from the network, so I cannot SSH in
- connecting a monitor to the server shows no HDMI input
Read more
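A hedged first step, assuming the hardware itself is fine, is to make the system journal persistent so the minutes before the next freeze survive the reset, and then read the previous boot's log afterwards; the commands below are a generic sketch, not taken from this particular setup.
Code:
# make journald keep logs across reboots (default Storage=auto uses this dir if present)
mkdir -p /var/log/journal
systemctl restart systemd-journald
# after the next crash and hard reset, inspect the previous boot:
journalctl -b -1 -p warning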
Proxmox LXC: Extreme I/O delays with newer WD SA500 SSDs
Hi folks, I'm writing in the hope of finding some solutions and ideas for my strange problem. In short: I have two servers; on one of them the SSDs perform without any problems, while on the other I am struggling with really high I/O delay, even though both run the same application and are otherwise set up identically.
Right up front: I am aware that consumer SSDs like the SA500 are not exactly ideal for ZFS systems, but that's not the point here...
Read more
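One hedged way to narrow down where the extra latency comes from, assuming both machines run ZFS on the SA500s, is to compare per-device latency on the good and the bad server and to check whether the newer drives report a different firmware revision (the device name below is a placeholder).
Code:
zpool iostat -v -l 5     # per-vdev throughput and average latency, 5 s intervals
iostat -x 5              # extended per-device stats (package: sysstat)
smartctl -i /dev/sda     # model, serial and firmware revision of one SSD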
zfs 2.3.0
Well,
It was released today, and it brings a lot of good features with it:
- RAIDZ Expansion (#15022): Add new devices to an existing RAIDZ pool, increasing storage capacity without downtime.
- Fast Dedup (#15896): A major performance upgrade to the original OpenZFS deduplication functionality.
- Direct IO (#10018): Allows bypassing the ARC for reads/writes, improving performance in scenarios like NVMe devices where caching may hinder efficiency.
- JSON...
Read more
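For reference, a minimal sketch of how the first and third features are driven from the command line; the pool, vdev and dataset names are placeholders.
Code:
# RAIDZ expansion: attach one more disk to an existing raidz vdev
zpool attach tank raidz1-0 /dev/sdX
# Direct IO: control the new ARC-bypass behaviour per dataset
zfs set direct=always tank/scratch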
DKIM, DMARC in Proxmox Mail Gateway
Hi,
Can DKIM in Proxmox Mail Gateway use the hostname/FQDN? If yes, does anyone know how to set this up correctly?
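In recent PMG versions DKIM signing is configured in the mail proxy settings (DKIM tab), where you enable signing, choose a selector and generate a key; the hostname/FQDN question then mostly comes down to which domain the public key is published under. A hedged sketch of the resulting DNS TXT record, with selector, domain and key as placeholders:
Code:
; hypothetical zone-file entry for selector "pmg2024" on example.com
pmg2024._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."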
Setting up the LSI/Avago/Broadcom Matrix Storage Manager
Good morning everyone,
I am hoping for help with setting up the Matrix Storage Manager on a Proxmox server to manage Broadcom RAID controllers. My goal is to access the server's controllers from a Windows client on which the MSM is installed as the management console. Between different Windows systems this already works for me.
The Proxmox host is a recently installed 8.3.3, which was brought up to date with apt-get update and apt-get upgrade to the...
Read more
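As a cross-check that is independent of MSM, Broadcom's storcli utility can be run directly on the PVE host to confirm the controller and its virtual drives are visible; a hedged sketch, assuming storcli64 was installed from Broadcom's Debian package into its default location.
Code:
/opt/MegaRAID/storcli/storcli64 /c0 show          # controller 0 summary
/opt/MegaRAID/storcli/storcli64 /c0 /vall show    # all virtual drives on controller 0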
NVMe Drive Serial "unknown"
Dear Proxmox Community,
Two SK Hynix PC711 NVMe drives in my Proxmox node have an "unknown" serial in the Disks table. There seems to be a parsing error when trying to detect it. I found the following thread where another user had a similar problem.
When I run the udev command to read out the serial number, I get the following output (abbreviated):
Code:
udevadm info -p /sys/block/nvme0n1 --query all
ID_SERIAL_SHORT= KGADN486501210C70H
Notice the spaces after the equal sign...
Read more
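A hedged way to check whether the padding comes from the drive's firmware-reported serial itself or from the parsing side is to read the identify data directly; the device names below are placeholders.
Code:
nvme id-ctrl /dev/nvme0 | grep -i '^sn'        # serial as reported by the controller
smartctl -i /dev/nvme0n1 | grep -i serial      # serial as seen by smartmontools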
Can PBS back up Hyper-V VMs?
Hi All
I was hoping that someone has tried this or has some sort of solution for backing up Hyper-V VMs using PBS.
The VMs are hosted on a Hyper-V server.
We are primarily using Proxmox as our VE.
We are running our Windows DCs and other Windows services on Hyper-V.
I was hoping that there is a way for PBS to manage the backups, because the Windows native one is terrible.
Then I can set this up for my PVE as well.
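There is no native Hyper-V integration in PBS, so anything along these lines is a workaround. One hedged sketch: export or copy the VHDX files (or a guest-level file share) to a path reachable from a Linux machine and push them with the file-level backup client; the repository, user, datastore and paths below are all placeholders.
Code:
# file-level backup of exported Hyper-V disks from a Linux helper host
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'
proxmox-backup-client backup hyperv-export.pxar:/mnt/hyperv-export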
PCI Passthrough with RTX 4060 Ti: kvm: vfio: Unable to power on device, stuck in D3
Hey guys,
We are doing PCI passthrough of an RTX 4060 Ti on a B650D4U-2L2T/BCM mainboard with an AMD Ryzen 9 7900X CPU.
I got the passthrough working once yesterday after a CMOS reset followed by reapplying these BIOS options (I verified the working passthrough using nvidia-smi in the VM), but after powering off the server and putting it back into the rack, it doesn't work anymore. Every time I start the VM, it shows this error (rebooted ~5 times to test whether it occurs every time):
Code:
kvm: vfio: Unable to power on device, stuck in D3...
Read more
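Not a fix for the D3 power state itself, but a hedged checklist of the usual passthrough plumbing worth re-verifying after a CMOS reset; the vendor:device IDs and the PCI slot below are placeholders, not the real values of this card.
Code:
# bind the GPU to vfio-pci early and confirm which driver owns it
echo "options vfio-pci ids=10de:aaaa,10de:bbbb disable_vga=1" > /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
# after a reboot:
lspci -nnk -s 01:00            # "Kernel driver in use" should read vfio-pci
dmesg | grep -i -e vfio -e 'D3'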
Update & Risk Management Best Practices? How to ensure real HA between Clusters
Hey guys,
Today, in one of our standup meetings, we thought about improving our update strategy. Currently we run multiple clusters on different versions of Proxmox. So far, to ensure our HA services are always running, we update one cluster at a time, let it run to make sure it is stable, and roughly a month later update its counterpart.
We do have development clusters in place to test the most recent Proxmox changes, but we can never guarantee the exact software version on each cluster we...
Read more
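For the version-drift part specifically, a hedged low-tech sketch is to diff the full package list of one node per cluster before and after each update window; the node names are placeholders.
Code:
ssh node-a pveversion -v > /tmp/node-a.txt
ssh node-b pveversion -v > /tmp/node-b.txt
diff /tmp/node-a.txt /tmp/node-b.txt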
Disabling conntrack on VM interface (with nftables-based firewall enabled)
Hey everyone,
I've got a VM running a site-to-site VPN which is a backup to a physical connection handled by a hardware router. As a result, the traffic passing via the internal interface may be asymmetrical, or existing connections created over the physical backhaul may at any time need to shift to the VPN. As one might expect, conntrack is a major issue for these kinds of scenarios and will happily destroy your connections as a result.
Previously, I've used a...
Read more
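For the general mechanism, a hedged nftables sketch that exempts a VM's tap interface from connection tracking via a notrack rule in a raw-priority chain; the interface name is a placeholder, and whether such a standalone table coexists cleanly with the PVE nftables firewall is precisely the open question of this thread.
Code:
# /etc/nftables-notrack.conf, loaded with: nft -f /etc/nftables-notrack.conf
table inet vm_notrack {
    chain pre {
        type filter hook prerouting priority raw; policy accept;
        iifname "tap100i1" notrack
    }
}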
TrueNAS as VM question
Hello,
I have a TrueNAS SCALE VM with hard disks passed through (PCIe), shared via NFS to all Proxmox cluster nodes.
I also have other VMs using that TrueNAS NFS share as their main hard-disk storage. The issue is that when (or if) the TrueNAS VM restarts, all other VMs using that shared NFS storage get I/O errors, and the only solution is to restart those VMs after the TrueNAS VM has become healthy again.
Is there any solution to restore those VMs back to a normal state if the TrueNAS VM is...
Read more
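There is no built-in way to make guests that already hit I/O errors recover on their own; the usual hedged workaround is to hard-reset the affected VMs once the TrueNAS VM and its NFS export are healthy again, for example from a small script. The VM IDs below are placeholders.
Code:
# hard-reset every VM that sits on the NFS storage
for id in 101 102 103; do
    qm reset "$id"
done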
Ceph rbd mirror force promote
Hello everyone,
I’m currently setting up my first Ceph mirror configuration and have a few questions regarding its behavior.
For example, I’m uncertain about how to force-promote an image on my DR cluster (site-b) during the synchronization process.
From what I’ve read in the documentation, in a disaster scenario occurring during synchronization, a force-promote operation promotes the last snapshot received by the DR cluster. However, as noted:
"Since this mode is not as...
Read more
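For the mechanics themselves, a hedged sketch of the commands involved; pool and image names are placeholders. The force-promote runs on site-b during the disaster, and the demote/resync pair is the usual way to resolve the resulting split-brain on site-a once it comes back.
Code:
# on site-b, during the DR event:
rbd mirror image promote --force rbd-pool/vm-100-disk-0
# later, on site-a, to discard its diverged copy and resync from site-b:
rbd mirror image demote rbd-pool/vm-100-disk-0
rbd mirror image resync rbd-pool/vm-100-disk-0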
GUI not available after adding own certificate
I added my own certificate to my PVE, as I have done a few times before.
But this time I cannot open the GUI any more.
The PVE host is still running and the VMs are still running. I can log in to the VMs via SSH and I can log in to the PVE host as root via SSH.
ss -tulpn | grep 8006 says the port is open.
What can I do?
Any help is appreciated.
I think I made a mistake by adding the .crt instead of the .pem.
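If the uploaded files are indeed broken, a hedged way back to a working GUI is to drop the custom certificate and let pveproxy fall back to (or regenerate) the self-signed one; the paths below are the standard PVE locations.
Code:
rm /etc/pve/local/pveproxy-ssl.pem /etc/pve/local/pveproxy-ssl.key
pvecm updatecerts --force
systemctl restart pveproxy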
Ceph
Hi,
I have some problems with the Ceph monitors.
(screenshot attached)
How did it happen?
Yesterday I wanted to create OSD disks on the cluster nodes, but after creating one on the first node I kept getting timeouts in Ceph. I searched and found that version 18 has some problems.
So I wanted to go back to 17 and uninstalled all of Ceph and its configuration; now, after a fresh install, Ceph can't start and has many problems.
Can someone help? Or do I have to reinstall all 6 nodes?
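Before reinstalling all six nodes, one hedged option is to wipe the broken Ceph configuration from the affected nodes and set Ceph up again with the PVE tooling; this is destructive and only reasonable while the OSDs hold no data yet.
Code:
pveceph purge          # remove Ceph config from this node (destructive)
pveceph install        # reinstall the Ceph packages
pveceph mon create     # re-create a monitor on this node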
Second subnet with a vSwitch, as with VMware ESXi
Hello everyone,
I have a PVE server that sits "quite normally" in my network.
Basically, the company has two subnets (say subnet 11 and subnet 12). The PVE host sits in subnet 11.
Now I would like to make VMs reachable from subnet 12, just as was possible with VMware ESXi. On ESXi you could set up a so-called vSwitch that did nothing but connect the NIC to the hardware switch, so no IP address was consumed.
And this is exactly the behaviour I would like...
Read more
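The ESXi behaviour maps to a plain Linux bridge without an IP address on the PVE host. A hedged sketch for /etc/network/interfaces, assuming a second NIC (the name enp2s0 is a placeholder) that is patched into subnet 12; guests for subnet 12 then simply use vmbr1 as their bridge, while the host keeps its own address only on vmbr0.
Code:
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0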
io-error
Hi
All my VMs on a Proxmox server appear with the orange triangle (I/O error) and are frozen.
It seems that my VMs cannot access their hard drives.
At Proxmox boot I see the following messages:
Bash:
Mar 11 09:30:24 proxGPU2 systemd[1]: zfs-import@capacity.service: Main process exited, code=exited, status=1/FAILURE
Mar 11 09:30:24 proxGPU2 systemd[1]: zfs-import@capacity.service: Failed with result 'exit-code'.
Mar 11 09:30:24 proxGPU2 systemd[1]: Failed to start Import ZFS pool...
Read more
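Since the unit that failed is the per-pool import service, a hedged first step is to check whether the pool can be imported by hand and what the full unit log says; the pool name capacity is taken from the log above.
Code:
journalctl -b -u zfs-import@capacity.service   # full error from the failed unit
zpool import                                   # lists visible but unimported pools
zpool import capacity                          # retry the import by name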
Services down
Backups of LXC with new 8.3 mount option "discard" fail
Today I noticed that when I changed my LXC mount options to "lazyatime, discard", my nightly backups fail:
Code:
INFO: starting new backup job: vzdump 1010 --storage pbs.xyz --notification-mode auto --remove 0 --notes-template 'Daily {{guestname}}' --node kaiju --mode snapshot
INFO: Starting Backup of VM 1010 (lxc)
INFO: Backup started at 2024-11-27 21:23:21
INFO: status = running
INFO: CT Name: uwe
INFO: including mount point rootfs ('/') in backup
INFO: found old vzdump snapshot (force...
Read more
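Two hedged things worth double-checking: the option is spelled lazytime in the PVE documentation (not lazyatime), and the options end up in the container config roughly as in the line below, where the storage and volume names are placeholders.
Code:
# rootfs line in /etc/pve/lxc/1010.conf
rootfs: local-zfs:subvol-1010-disk-0,mountoptions=lazytime;discard,size=8G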
lxc mount zfs pool
I want to share the results of my attempts at mounting multiple ZFS volumes in an LXC container in order to share them via SMB/NFS/SFTP.
The command outputs are trimmed to only include the relevant parts.
Goal:
Mounting a ZFS pool into an LXC container.
Prerequisites
- Proxmox VE 8.2.2
- Container 102 (to mount to)
- A Zpool
Code:
$ zfs list
NAME         MOUNTPOINT
tank         /tank
tank/zvolA   /tank/zvolA
tank/zvolB   /tank/zvolB
While researching I found 2 ways to do it.
- Mount...
Read more
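For comparison, the bind-mount route can be done entirely with pct; a hedged sketch, with the in-container target paths as placeholders.
Code:
pct set 102 -mp0 /tank/zvolA,mp=/srv/zvolA
pct set 102 -mp1 /tank/zvolB,mp=/srv/zvolB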