Hi!
I've got a 3-node PVE cluster with Ceph.
1 pool, 4 OSDs per node
Yesterday Ceph started rebalancing and it's still going.
I think this is because I added about 6 TB of data and the autoscaler changed the PG count from 32 to 128.
That's fine, but I'm a bit confused because recovery runs fine until it reaches 95%, then it drops back to 94% again.
In the log I can see this (note the misplaced ratio goes from 4.99% back up to 5.77%):
2023-05-03T21:29:08.670288+0300 mgr.pve1 (mgr.13426191) 551 : cluster [DBG]...
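For reference, here's how the autoscaler state and the recovery progress can be checked (mypool is just a placeholder for the real pool name):

ceph osd pool autoscale-status          # per-pool PG_NUM vs NEW PG_NUM target
ceph osd pool get mypool pg_num         # current PG count for the pool
ceph osd pool get mypool pgp_num        # placement count; the split is applied gradually, so this can lag pg_num
ceph -s                                 # overall health plus recovery/misplaced percentages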