Hi,
Thanks Adam
The test script worked fine, and the bootstrap command also worked fine on
another host.
What could be left over on ceph-host-1 that prevents the bootstrap from
working?
After zapping the previous (working) cluster I have:
updated/upgraded the server to 24.04.2
removed/reinstalled docker
deleted /etc/ceph/*, /var/lib/ceph/*, /etc/systemd/system/ceph*
run pkill -9 -f ceph*
rebooted
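A quick sanity check of the cleanup steps above (the glob patterns simply mirror the paths deleted; this is an illustration, not a cephadm command):

```python
import glob

# Confirm the paths wiped above really are gone before re-bootstrapping.
leftover = []
for pattern in ('/etc/ceph/*', '/var/lib/ceph/*',
                '/etc/systemd/system/ceph*'):
    leftover.extend(glob.glob(pattern))

print('leftover:', leftover if leftover else 'none')
```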
Thanks
Steven
On Thu, 24 Jul 2025 at 12:22, Adam King <[email protected]> wrote:
> I can't say I know why this is happening, but I can try to give some
> context on what cephadm is doing here, in case it gives you something to
> look at. This is when cephadm creates the initial monmap. When we do so, we
> write a python "NamedTemporaryFile" and then mount that into a container
> that contains the monmaptool to write out the monmap.
>
>     logger.info('Creating initial monmap...')
>     monmap = write_tmp('', 0, 0)
>     out = CephContainer(
>         ctx,
>         image=ctx.image,
>         entrypoint='/usr/bin/monmaptool',
>         args=[
>             '--create', '--clobber',
>             '--fsid', fsid,
>             '--addv', mon_id, mon_addr,
>             '/tmp/monmap',
>         ],
>         volume_mounts={
>             monmap.name: '/tmp/monmap:z',
>         },
>     ).run()
>
>
> Given that the docker command it ran in your screenshot seems correct, I
> guess the most likely issue would be with actually writing out the
> temporary file. That function looks pretty straightforward, though:
>
>
>     def write_tmp(s, uid, gid):
>         # type: (str, int, int) -> IO[str]
>         tmp_f = tempfile.NamedTemporaryFile(mode='w', prefix='ceph-tmp')
>         os.fchown(tmp_f.fileno(), uid, gid)
>         tmp_f.write(s)
>         tmp_f.flush()
>         return tmp_f
>
>
> You could maybe try something like this locally to see if that creates a
> file the correct way
>
>
>
>     import os
>     import tempfile
>
>     tmp_f = tempfile.NamedTemporaryFile(mode='w', prefix='ceph-tmp')
>     os.fchown(tmp_f.fileno(), 0, 0)
>     tmp_f.write('')
>     tmp_f.flush()
>     with open(tmp_f.name, 'r') as tmp_file:
>         print(tmp_file.name)
>         print(tmp_file.read())
>
>
> That's effectively what cephadm is doing but with the arguments we pass to
> write_tmp while creating the monmap hardcoded in, and an additional bit to
> print out the name and content (a blank line) of the file. Keep in mind the
> file will get automatically cleaned up when the NamedTemporaryFile goes out
> of scope / the script completes. For me it just prints something like
> /tmp/ceph-tmptwqgus0d and a newline when run as a standalone script.
> That was on a host where bootstrap completed successfully with the 19.2.2
> copy of cephadm and a 19.2.2 container image
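The auto-cleanup Adam mentions can be checked on its own: the temp file exists only while the NamedTemporaryFile object is open, which is why cephadm keeps the returned object alive until the container has run. A minimal sketch:

```python
import os
import tempfile

tmp_f = tempfile.NamedTemporaryFile(mode='w', prefix='ceph-tmp')
path = tmp_f.name
print(os.path.exists(path))   # True while the object is open

tmp_f.close()                 # NamedTemporaryFile deletes the file on close
print(os.path.exists(path))   # False afterwards
```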
>
> On Thu, Jul 24, 2025 at 11:42 AM Steven Vacaroaia <[email protected]>
> wrote:
>
>> Thanks Anthony
>>
>> The volume bind command looks like this
>>
>> -v /tmp/ceph-tmp-wvh4u18:/tmp/monmap:z
>>
>> So I guess /tmp/ceph-tmp-xxxx never gets created, so there is nothing
>> to map.
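One plausible explanation for the "Is a directory" error: when the host-side path of a Docker -v src:dst bind does not exist at container start, Docker creates it as an empty directory, so the container sees a directory at /tmp/monmap and monmaptool cannot write to it. A sketch of a pre-flight check (the diagnose_bind_source helper is hypothetical, not part of cephadm):

```python
import os

def diagnose_bind_source(path):
    """Describe what a container would see for the host side of -v path:dest.

    Hypothetical helper for illustration: if the path is missing, Docker
    creates it as a directory, which would match the monmaptool error here.
    """
    if not os.path.exists(path):
        return 'missing (Docker would create it as a directory)'
    if os.path.isdir(path):
        return 'directory'
    return 'regular file'

print(diagnose_bind_source('/tmp/ceph-tmp-wvh4u18'))
```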
>>
>> What would the solution be, though? I tried removing and reinstalling
>> docker, and updating and upgrading (Ubuntu 24.04.2 LTS).
>>
>> On Thu, 24 Jul 2025 at 11:09, Anthony D'Atri <[email protected]>
>> wrote:
>>
>> >
>> > How did you get that attachment through the list? Curious.
>> >
>> > I think /tmp/monmap is relative to the directory tree that the container
>> > sees:
>> >
>> > -v /tmp/ceph-tmp-wvh4u18:/tmp/monmapz
>> >
>> > Because of the wrapping I’m not sure about the z on the next line, but -v
>> > can be considered like a bind mount of the first path so that the
>> > container sees it on the second path.
>> >
>> > Now, as to what happened to /tmp/ceph-tmpxxxx, I can’t say.
>> >
>> >
>> > > On Jul 24, 2025, at 10:53 AM, Steven Vacaroaia <[email protected]> wrote:
>> > >
>> > > Hi,
>> > >
>> > > I had to zap and start from scratch my cluster and now I cannot
>> > bootstrap using cephadm
>> > >
>> > > The error is "monmaptool: error writing to /tmp/monmap: (21) Is a
>> > > directory"
>> > >
>> > > There is no /tmp/monmap directory though
>> > >
>> > > I also made sure there is nothing in /var/lib/ceph, no "ceph" process,
>> > > nothing in /etc/systemd/system/ceph* or /etc/ceph, AND rebooted the
>> > > server.
>> > >
>> > > Any help will be appreciated
>> > >
>> > > Many thanks
>> > > Steven
>> > >
>> > > <image.png>
>> > > _______________________________________________
>> > > ceph-users mailing list -- [email protected]
>> > > To unsubscribe send an email to [email protected]
>> >
>> >
>>
>