Channel: Proxmox Support Forum

Survey for "got inotify poll request in wrong process - disabling inotify" log message


File restore from a btrfs volume spanning multiple disks

We have a VM running Samba that has several btrfs volumes.
One of the volumes is spread across three virtual disks.

During file restore, the multi-disk volume is not read in; the disks themselves are visible, and all other data can be restored.
[Attachment: btrfs.jpg]
When the whole VM is restored, it boots without errors and all data is present.

The log file /var/log/proxmox-backup/file-restore/qemu.log shows that while the disks are being read, a lookup by UUID...
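
For context, a sketch of how a multi-device btrfs filesystem is normally assembled by hand; the mount only succeeds once every member device has been found by UUID, which matches what the qemu.log shows. The device handling and the UUID placeholder here are hypothetical examples, not the file-restore tool's actual procedure.

Code:
# Register all btrfs member devices with the kernel, then check completeness.
btrfs device scan
btrfs filesystem show        # lists the FS UUID and any missing member devices
# Mounting by UUID requires every member disk to be present:
mount -t btrfs UUID=<fs-uuid> /mnt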

Read more

Backup is missing some files

Hello,

I have set up a backup for my two virtual machines, which runs once per week at night. Tonight it started writing backup files to my NAS, but to my surprise one of the backups seems way too small.

On one of the virtual machines I downloaded a big Linux ISO of 5 GB; the file was written to an SSD drive. The backup file created is below 1.5 GB, so the ISO seems to have been skipped. The backup log shows no error.
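
One common cause, offered here as an assumption rather than a diagnosis: a disk can be excluded from backups with the backup=0 flag, and vzdump then skips it without an error. The VMID 100 below is a placeholder.

Code:
# List the VM's disk lines and look for "backup=0" on any of them.
qm config 100 | grep -E '^(ide|sata|scsi|virtio)[0-9]+:'
# A disk carrying backup=0 is skipped by vzdump; re-enable it in the GUI
# (Hardware -> Edit disk -> Backup checkbox).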

Here are my current backup settings:
[Attachment: 1740397130093.png]

Attached you can...

Read more

Regular SMART errors (FailedReadSmartData)

For a few days now I have been getting the following SMART error from PVE every night:

Device: /dev/sda [SAT], failed to read SMART Attribute Data

Device info:
WD Red SA500 2.5 2TB, S/N:24450XD00274, WWN:5-001b44-dd4461416, FW:540500WD, 2.00 TB

Sometimes it is /dev/sda, sometimes /dev/sdc.

Here is the system log:
Feb 22 00:46:03 pve kernel: sd 10:0:0:0: [sda] tag#55 CDB: Write(10) 2a 00 57 6c 18 40 00 00 30 00
Feb 22 00:46:03 pve kernel: scsi target10:0:0: handle(0x000b)...
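
To see whether the SMART read also fails outside the nightly smartd poll, it may help to query the drive directly; the by-id lookup is a hedged suggestion for telling the drives apart when the /dev/sdX letters move around.

Code:
# Manual SMART query using the SAT passthrough the error message mentions.
smartctl -a -d sat /dev/sda
# Stable names survive reordering; match on the model from the device info:
ls -l /dev/disk/by-id/ | grep -i sa500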

Read more

Problem enlarging a QCOW disk

Hello,

Unfortunately my first thread starts right away with a somewhat precarious situation:

We have been running a 4-node Proxmox cluster with NFS storage and Veeam backup for months without problems.

Unfortunately we have now hit a snag when enlarging a disk on the main file server.

I want to enlarge the disk from 12 to 14 TB via the GUI, but I get the following message:

TASK ERROR: VM 201 qmp command 'block_resize' failed - Bitmap...
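
The truncated message suggests a dirty bitmap (for example from Veeam's change tracking) is blocking the online block_resize; assuming that is the cause, one hedged workaround is to resize offline, where no QMP call is involved. The storage path below is a placeholder.

Code:
# Offline resize sketch: stop the VM, grow the qcow2 image, rescan, start.
qm shutdown 201
qemu-img resize /mnt/pve/<nfs-storage>/images/201/vm-201-disk-0.qcow2 14T
qm rescan --vmid 201    # let PVE pick up the new size
qm start 201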

Read more

Fibre Channel SAN with Live Snapshot

Hello,

we would like to integrate our Fibre Channel SAN into Proxmox.

We have already done this successfully with LVM, but that way we cannot take live snapshots.

Is there perhaps a way to make it happen with a GlusterFS volume, connecting 4 nodes with a shared file system to the Fibre Channel SAN?

The SAN also supports iSCSI, but that does not help us because it has no ZFS support.
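
For what it's worth, a hedged sketch of how GlusterFS is usually layered on top of SAN storage: each node gets its own LUN as a local brick and Gluster replicates across them; a single shared LUN mounted by all nodes at once with a non-cluster filesystem would corrupt data. Hostnames, paths, and the volume name are hypothetical.

Code:
# One FC LUN per node, formatted locally, then joined into a replicated volume.
mkfs.xfs /dev/mapper/fc-lun0
mkdir -p /bricks/fc0 && mount /dev/mapper/fc-lun0 /bricks/fc0
gluster peer probe node2      # repeat for node3 and node4
gluster volume create gv0 replica 4 \
    node1:/bricks/fc0/brick node2:/bricks/fc0/brick \
    node3:/bricks/fc0/brick node4:/bricks/fc0/brick
gluster volume start gv0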

zpool log and cache device

I have installed a PBS server for testing.
I have 2 spare NVMe devices. Is it OK to add the NVMes as a log and a cache device to the pool?
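
Mechanically this is straightforward; the pool name and device paths below are placeholders. Whether it helps is another question: for a backup workload, a special (metadata) vdev is often suggested instead of SLOG/L2ARC, though that is a judgment call.

Code:
# Add one NVMe as SLOG and one as L2ARC to an existing pool.
zpool add backup log /dev/disk/by-id/nvme-DEVICE1
zpool add backup cache /dev/disk/by-id/nvme-DEVICE2
zpool status backup    # the "logs" and "cache" sections should now appear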

Spam report mails always arrive twice

Hello everyone,

we are seeing the phenomenon that spam reports are always sent to users twice.

The timer configuration is set as follows:
[Unit]
Description=Send Daily Spam Report Mails

[Timer]
OnCalendar=
OnCalendar=07:00
OnCalendar=08:00
OnCalendar=09:00
OnCalendar=10:00
OnCalendar=11:00
OnCalendar=12:00
OnCalendar=13:00
OnCalendar=14:00
OnCalendar=15:00
OnCalendar=16:00
OnCalendar=17:00
OnCalendar=18:00
OnCalendar=19:00
OnCalendar=20:00

Persistent=true

In the service:
[Unit]...
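
Assuming the unit involved is PMG's standard pmgspamreport timer/service, a hedged way to check whether two units (or an override plus the original) fire for the same hour:

Code:
# Which timers exist, and what do the timer and service actually contain?
systemctl list-timers --all | grep -i spamreport
systemctl cat pmgspamreport.timer pmgspamreport.service
# When did the service actually run today?
journalctl -u pmgspamreport.service --since today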

Read more

Actual thin LVM usage

Hello,

I am a Proxmox beginner, and since thin provisioning lured me into a nasty trap, I wrote myself a script that shows the actual usage of local-lvm.

Code:
clear &&\
lvs -a --units m --noheadings --nosuffix | \
awk '\
$1 ~ /^vm-/{size +=$4} \
$1 ~ /^vm-/{printf "\033[0;34m %s %s\033[0m\n",$1,$4;} \
$1 ~ /^data/{all=$4} \
{usedp=size/(all/100)}\
{freep=(all-size)/(all/100)}\
END{\
{print "--------------------------"}\
{printf "\033[0;33m%s GB...

Read more

Wrong CEPH Cluster remaining disk space value

Hello everyone,

I'm new to the wonderful proxmox ecosystem.

I recently configured a proxmox cluster with ceph, but I don't understand the remaining disk space values displayed.

Here is a description of my configuration:

  • PVE1/PVE2/PVE3 (3 nodes)
  • Dell PowerEdge server (VE 8.3.3 installed on NVMe with RAID1 BOSS card 2x480 GB)
  • "VM+Management Access Network" : 10Gbps > on switch
  • "CEPH Public+Cluster network" : 25Gbps DAC Fiber > full mesh RSTP
  • "Corosync cluster...

Read more

Performance comparison between ZFS and LVM

Hi,

we are evaluating ZFS over the currently used LVM for our future Proxmox VE installations.
ZFS looks very promising with a lot of features, but we have doubts about the performance; our servers contain VMs with various databases, and we need good performance to provide a fluid frontend experience.

From our understanding, ZFS needs direct access to the disks, so the server must have a controller that can pass the disks through transparently (HBA/IT mode) or no controller at all.
Instead of using the controller...
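
One hedged way to ground the comparison: run an identical fio job inside a test VM backed by LVM and again backed by ZFS; 4k random I/O is a rough stand-in for database load. The file path is a placeholder.

Code:
# Identical random-write test to repeat on both storage backends.
fio --name=dbtest --filename=/mnt/test.bin --size=4G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting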

Read more

Apparmor in privileged container

I have a problem which may or may not be normal behavior; I'm looking for confirmation.
Every time I start a privileged container or restart AppArmor inside it, I get the following message in the host's syslog:

Code:
Apr 12 17:49:12 pm kernel: [154462.321869] audit: type=1400 audit(1649778552.937:390): apparmor="STATUS" operation="profile_replace" info="not policy admin" error=-13 label="lxc-115_</var/lib/lxc>//&:lxc-115_<-var-lib-lxc>:unconfined" pid=4082008 comm="apparmor_parser"

Also...
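
The error=-13 (EACCES) together with "not policy admin" reads like the container's own apparmor_parser being denied a profile replace on the host, which would be expected for a nested parser; a hedged way to inspect the confinement:

Code:
# Container-specific AppArmor override, if any (115 is the ID from the log).
grep -i apparmor /etc/pve/lxc/115.conf
# Profiles currently loaded on the host:
aa-status | head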

Read more

Quick HOWTO on setting up iSCSI Multipath

Hi Everyone,

I originally included this HOWTO guide as a reply to someone else's post, but am posting it in its own thread as it may help others who struggle to get proper iSCSI MPIO working on a Proxmox cluster. Coming from an enterprise VMware ESXi background, I wanted my shared storage set up the same way in Proxmox (albeit without LVM snapshots, but Veeam fills this gap). It should be noted that this configuration is done entirely from the shell and not through the web UI. I found that the GUI...
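
The guide itself is truncated in this excerpt, so here is a hedged condensation of the usual core steps (portal IPs are examples, not the author's values):

Code:
# Discover and log in on both storage portals, then let multipathd
# aggregate the paths into one device map per LUN.
iscsiadm -m discovery -t sendtargets -p 10.0.1.10
iscsiadm -m discovery -t sendtargets -p 10.0.2.10
iscsiadm -m node --login
apt install multipath-tools
multipath -ll    # expect one map with two active paths per LUN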

Read more

Cannot remove disk for VM when not all Proxmox nodes are online

I'm currently using a 4-node Proxmox cluster with Ceph as storage. Of these 4 nodes only 3 are running and 1 is offline.

[Attachment: 1740400524751.png]
This is intentional for my purposes.

I have a VM (104) which is currently stored on an erasure pool. The erasure pool is only stored on the online nodes. The proxmox4 node is not participating in this CEPH pool.

However, when I try to remove the disk for VM 104 (or in this case, the whole VM), Proxmox shows an error that it can't acquire...
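
With 3 of 4 votes, quorum itself should be fine; two hedged things worth checking are the quorum state and whether the VM carries a leftover lock from an interrupted task. Only unlock if you are sure no task is still running.

Code:
pvecm status                   # expect "Quorate: Yes" with 3 of 4 votes
qm config 104 | grep -i lock   # a leftover "lock:" line blocks removal
qm unlock 104                  # only if nothing is still running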

Read more

problems with nordvpn CLI in debian LXC

I have a strange problem using NordVPN installed within an LXC.

I have a Debian 12 LXC with the NordVPN Linux client installed (command line only).
The problem is that while NordVPN connects and works as expected, it only works for around 15-30 minutes; at that point I get zero DNS resolution from within the LXC. The VPN is still connected, and I can still ping any WAN IP address, but any attempt to resolve DNS fails. I disconnect the VPN, reconnect, and it's rinse and repeat.
I've tried...
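
When resolution dies, a hedged first check is what the container's resolver actually points at in that moment; NordVPN typically rewrites /etc/resolv.conf, and something else (DHCP, pve-container) may rewrite it back. The DNS IP below is NordVPN's public resolver, used here as an example.

Code:
# What does the container resolve against right now?
cat /etc/resolv.conf
resolvectl status 2>/dev/null || true   # if systemd-resolved is in use
# Does a direct query against the VPN provider's DNS still work?
dig @103.86.96.100 nordvpn.com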

Read more

qemu guest agent vss provider stopped (after backup?)

In Proxmox 8.3 I created two VMs with Windows Server 2019 enabling "QEMU guest agent" in Options. In the VMs I installed QEMU guest agent (108.0.2) and set the related services to start automatically at boot.

Every morning in these two VMs I find the "QEMU Guest Agent VSS Provider" service stopped. If I restart it, the service starts without problems.

I think the problem is related to the Proxmox Snapshot Backup that I automatically perform...
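
To narrow down whether the backup window is indeed what stops the service, a hedged check from the PVE host right after the nightly backup (VMID 100 is a placeholder); restarting the VSS provider would then still happen inside the guest.

Code:
# Is the guest agent itself still responding after the nightly backup?
qm agent 100 ping && echo "agent OK"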

Read more

Bottleneck at 200 MiB/s?! 7 Node Ceph NVME & SSD Cluster

Hey friends,

I'm observing a strange performance limitation.

EDIT: all SSDs & NVMe drives are enterprise level! No consumer stuff built in.

Backups with PBS & Veeam are topping out at around 200 MiB/s.

Veeam in particular reports that the bottleneck is actually the "source", which is the Ceph cluster.

The backup vault is capable of much more than this (24 x 14 TB ZFS RAID-Z2 volume).

Each node uses MLAG with 2x 25 Gbit/s.
VMs use a 10 Gbit/s virtio NIC.

None of the OSDs show high latencies or commit/apply...
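
To check whether ~200 MiB/s is really a Ceph-side ceiling rather than a backup-software artifact, a hedged baseline with rados bench (the pool name is an example):

Code:
# Raw cluster write, then sequential-read throughput, outside any backup tool.
rados bench -p testpool 60 write --no-cleanup
rados bench -p testpool 60 seq
rados -p testpool cleanup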

Read more

CEPH OSDs Full, Unbalanced PGs, and Rebalancing Issues in Proxmox VE 8


Scenario


I have a Proxmox VE 8 cluster with 6 nodes, using CEPH as distributed storage. The cluster consists of 48 OSDs, distributed across 4 servers with SSDs and 2 with HDDs.


Monday night, three OSDs reached 100% capacity and crashed:
  • osd.16 (pve118)
  • osd.23 (pve118)
  • osd.24 (pve119)
Logs indicate "ENOSPC" (No Space Left on Device), and these OSDs are unable to start.

[Attachment: 1739972861388.png]...
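
Standard emergency levers for full OSDs, offered as a hedged sketch rather than a recipe: raise the full ratio slightly so the crashed OSDs can start, then push data away from the overfull ones and let the balancer work.

Code:
ceph osd set-full-ratio 0.97    # temporary; revert once the cluster is healthy
ceph osd reweight 16 0.85       # shrink the weight of overfull osd.16
ceph osd reweight 23 0.85       # osd.23
ceph osd reweight 24 0.85       # osd.24
ceph balancer status            # is the balancer active at all?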

Read more

Error internal-error when installing Windows 7 sp1


Virtual machine created:
Memory: 2 GiB
Processors: 1 (1 socket, 1 core) (x86-64-v2-aes)
BIOS: Default (SeaBIOS)
Display: Default
Machine: pc-i440fx-9.0
SCSI Controller: VirtIO SCSI single
Hard Disk (sata0): size 32G
Network Device (net0): e1000

Proxmox Virtual Environment 8.3.4

CPU(s): 12 x 12th Gen Intel(R) Core(TM) i5-12400 (1 Socket)
Kernel Version: Linux 6.8.12-6-pve (2024-12-19T19:05Z)
RAM: 64 GB
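
Purely as a hypothesis: Windows 7 predates several features that newer CPU models advertise, so falling back to the plain qemu64 model is a common first experiment; <vmid> is a placeholder.

Code:
# Hedged experiment: try the oldest generic CPU model for the Win7 VM.
qm set <vmid> --cpu qemu64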

p.s.
Virtual machines with Windows 10 and 11...

Read more

HOWTO: Scripts to make cloudbase work like cloudinit for your windows based instances

Hi,

we are a small company (Geco-iT) from France that relies heavily on Proxmox PVE every day, and as we find Proxmox more and more powerful, we want to give back to the community by providing some of our tools for PVE.

We made tools that let you use Cloudbase on Windows the way cloud-init is used on Linux instances!

CloudBase is an open-source project provided by Cloudbase Solutions to enable initialization of a new instance on Windows machines. The purpose is to be the equivalent of the...

Read more

