Hey guys,
I have a Ceph cluster with 3 OSDs across 3 nodes, one OSD per node. Two of the OSDs went offline and won't come back (I'm pretty sure the disks died). One OSD is still alive, along with the monitor. I can see the current state from
ceph -s:
Code:
id:     c42a9057-9b43-4e68-afe8-d2cac60a8a6c
health: HEALTH_WARN
        mon SpaceDewdy3 is low on available space
        1 osds down
        2 hosts (2 osds) down
        Reduced data availability: 33 pgs inactive
        Degraded data...