
Ceph - osd "Problem" - on two nodes are the same osdid´s

Hi to all!

I have a three-node cluster with Ceph storage, and it has been working perfectly so far.
There are three servers, and each server has 8 hard disks: 1 for Proxmox and 7 for the Ceph storage.
Now my problem: normally an OSD ID exists only once across the whole three-node cluster. In the past I replaced some hard drives and reformatted them, and now there are two OSDs with the same OSD ID, but only the one on pve2 is shown in the Proxmox GUI.
On pve1 and pve2 the same OSD ID 9 is mounted:

Code:

root@pve1:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                            10M    0  10M  0% /dev
tmpfs                          1.4G  492K  1.4G  1% /run
/dev/mapper/pve-root            34G  1.5G  31G  5% /
tmpfs                          5.0M    0  5.0M  0% /run/lock
tmpfs                          2.8G  59M  2.7G  3% /run/shm
/dev/mapper/pve-data            73G  180M  73G  1% /var/lib/vz
/dev/fuse                      30M  24K  30M  1% /etc/pve
/dev/cciss/c0d6p1              132G  48G  85G  36% /var/lib/ceph/osd/ceph-9
/dev/cciss/c0d2p1              132G  30G  103G  23% /var/lib/ceph/osd/ceph-1
/dev/cciss/c0d4p1              132G  25G  108G  19% /var/lib/ceph/osd/ceph-7
/dev/cciss/c0d5p1              132G  21G  112G  16% /var/lib/ceph/osd/ceph-8
/dev/cciss/c0d7p1              132G  29G  104G  22% /var/lib/ceph/osd/ceph-10
/dev/cciss/c0d3p1              132G  24G  108G  19% /var/lib/ceph/osd/ceph-2
/dev/cciss/c0d1p1              132G  23G  109G  18% /var/lib/ceph/osd/ceph-0


root@pve2:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                            10M    0  10M  0% /dev
tmpfs                        1000M  492K  999M  1% /run
/dev/mapper/pve-root            34G  2.0G  30G  7% /
tmpfs                          5.0M    0  5.0M  0% /run/lock
tmpfs                          2.0G  59M  1.9G  3% /run/shm
/dev/mapper/pve-data            77G  180M  77G  1% /var/lib/vz
/dev/fuse                      30M  24K  30M  1% /etc/pve
/dev/cciss/c0d5p1              132G  30G  102G  23% /var/lib/ceph/osd/ceph-12
/dev/cciss/c0d3p1              132G  22G  111G  17% /var/lib/ceph/osd/ceph-6
/dev/cciss/c0d6p1              132G  18G  115G  13% /var/lib/ceph/osd/ceph-9
/dev/cciss/c0d7p1              132G  20G  112G  16% /var/lib/ceph/osd/ceph-13
/dev/cciss/c0d2p1              132G  20G  112G  15% /var/lib/ceph/osd/ceph-4
/dev/cciss/c0d4p1              132G  23G  109G  18% /var/lib/ceph/osd/ceph-11
/dev/cciss/c0d1p1              132G  19G  114G  14% /var/lib/ceph/osd/ceph-3
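
To see which of the two mounts the cluster actually treats as osd.9, a first check could be to compare the CRUSH map with the identity files in the local data directories. This is only a sketch, assuming filestore OSDs as used in that Proxmox/Ceph generation; the paths are taken from the df output above:

Code:

# Show under which host the cluster places osd.9 in the CRUSH map
ceph osd tree

# On pve1 and on pve2: 'whoami' holds the OSD id, 'fsid' the unique UUID of that OSD
cat /var/lib/ceph/osd/ceph-9/whoami
cat /var/lib/ceph/osd/ceph-9/fsid

# Compare the fsid with the UUID the cluster has registered for osd.9
ceph osd dump | grep osd.9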


How can I fix that problem so that I can use the hard disk on pve1 again (the one currently mounted as osd.9)?

The command "pveceph destroyosd 9" (on pve1) doesn't work:

Code:

root@pve1:~# pveceph destroyosd 9
osd is in use (in == 1)
root@pve1:~#
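
The "in == 1" message presumably refers to the osd.9 that the cluster has registered, which seems to be the one on pve2, so destroying osd.9 from pve1 would hit the live OSD. Assuming the ceph-9 directory on pve1 turns out to be only a stale leftover mount, a rough cleanup sketch could look like this (device /dev/cciss/c0d6 taken from the df output above; double-check before wiping anything):

Code:

# On pve1: stop a possibly leftover local daemon and unmount the stale directory
/etc/init.d/ceph stop osd.9
umount /var/lib/ceph/osd/ceph-9

# Wipe the old partition table on the freed disk
ceph-disk zap /dev/cciss/c0d6

# Re-create an OSD on that disk; the next free OSD id will be assigned
pveceph createosd /dev/cciss/c0d6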

Has anyone had the same "problem" in the past?

Thanks in advance,

roman
