Hi,

Something weird is happening with my Ceph. I've got nightly bash scripts for 
making backups, snapshots, cleaning up, etc. Since my last upgrade to Ceph 
v19.2.2 my scripts hang during execution: the rbd map and rbd unmap commands 
don't return to the prompt. So my script invokes a command like 
"cephadm shell -- rbd --pool libvirt-pool map --read-only --image 
CmsrvXCH2-SWAP@snap_4" but the script never continues, because the rbd map 
command never exits.

I've also tried it by hand. After killing my script a few times, I saw several 
images still mapped, so I tried to unmap them. This is what happened...
root@hvs001:/# rbd unmap /dev/rbd0
^C
root@hvs001:/# rbd unmap /dev/rbd1
^C

The devices aren't mounted, so the unmapping isn't blocked by that. Strangely 
enough, after issuing the unmap command the device is removed from /dev, but I 
still have to press ^C to get back to the shell...
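To clean up the leftover mappings without sitting through a ^C per device, the unmaps can be wrapped in the same kind of bounded loop. A minimal sketch, assuming GNU `timeout`, that the /dev/rbdN nodes are idle, and iterating the device nodes directly so no output parsing is needed:

```shell
# Try to unmap every mapped rbd device, bounding each attempt so one
# hung unmap doesn't block the rest of the cleanup.
for dev in /dev/rbd*; do
    case "$dev" in *p[0-9]*) continue ;; esac  # skip partition nodes like rbd0p1
    [ -b "$dev" ] || continue                  # skip if the glob matched nothing
    timeout 30 rbd unmap "$dev" \
        || echo "unmap of $dev failed or hung (124 = timed out)" >&2
done
```

Since the device node apparently disappears even though the command never returns, the timeout here mostly just reclaims the terminal.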

Do I have an issue with my ceph cluster? Has anybody experienced something 
similar? Should I report a bug?

Greetings,

Dominique.


_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]