
CEPH issue: continuous rebalancing

Hi!

I've got a 3-node PVE cluster with Ceph: 1 pool, 4 OSDs per node.

Yesterday Ceph started rebalancing and it is still going.
I think this is because I added about 6 TB of data and the autoscaler changed the PG number from 32 to 128.
That's fine, but I'm a bit confused because recovery proceeds fine until it reaches 95% and then drops back to 94% again.
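For reference, the pool's PG and autoscaler state can be checked with the stock Ceph CLI on any of the nodes (a minimal sketch; <pool> is a placeholder for the actual pool name):

    ceph osd pool autoscale-status     # target vs. actual pg_num per pool
    ceph osd pool get <pool> pg_num    # current PG count (should now be 128)
    ceph osd pool get <pool> pgp_num   # placement count; data keeps moving until this reaches pg_num
    ceph -s                            # overall health, recovery and misplaced percentage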

In the log I can see this (note the misplaced ratio dropping to 4.99% and then jumping back to 5.77%):

2023-05-03T21:29:08.670288+0300 mgr.pve1 (mgr.13426191) 551 : cluster [DBG]...
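The progress of the data movement can be followed over time with the following commands (these are standard Ceph CLI calls available on a PVE node, not anything Proxmox-specific):

    ceph pg stat            # summary of PG states plus degraded/misplaced object counts
    ceph balancer status    # whether the balancer module is also scheduling PG moves
    watch -n 10 ceph -s     # watch the misplaced percentage as recovery progresses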
