Since the last Ceph update (to the current 17.2.5) we have noticed that every node reboot marks the OSDs on that node as crashed. However, they come back up normally once the server finishes booting.
I checked the Ceph and journalctl logs and did not find anything relevant about the daemons actually crashing (timeouts, segfaults, etc.).
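In case it helps, this is roughly how I inspected the reported crash entries and the per-OSD journal; the <crash-id> and <id> placeholders need to be filled in, and the ceph-osd@ unit name assumes a packaged (non-cephadm) install:

    # list the crash reports Ceph currently knows about (these drive the HEALTH_WARN)
    ceph crash ls
    # show details for one entry: timestamp, daemon, backtrace
    ceph crash info <crash-id>
    # journal of one OSD from the previous boot, to look for a real crash
    journalctl -u ceph-osd@<id> -b -1
    # once reviewed, archiving the entries clears the warning
    ceph crash archive-all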
Is this (the Ceph HEALTH_WARN) something normal after a reboot? It seems not, because this is new for us.