> On Aug 1, 2025, at 9:33 AM, gagan tiwari <[email protected]> 
> wrote:
> 
> Thanks Anthony,
> That did the trick! All OSDs are up now.

Groovy.
> 
> Verifying port 0.0.0.0:9100 ...
> Cannot bind to IP 0.0.0.0 port 9100: [Errno 98] Address already in use

Yeah, that sure sounds like you already have a node_exporter running.  Prometheus 
exporters generally have a default port on which they listen, usually but not 
always in the 9xxx range.  There's an informal registry of default port 
allocations on the Prometheus GitHub wiki to help avoid collisions, but it's 
not enforced.
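Before tearing anything down it's worth seeing exactly what holds the port.  A 
quick probe along these lines works (the pure-bash /dev/tcp trick plus ss or 
lsof for the owning PID; this is a generic sketch, not anything Ceph-specific):

```shell
#!/usr/bin/env bash
# Check whether anything is listening on a TCP port, and if so, what.

port_in_use() {
    # Returns 0 if something accepts connections on 127.0.0.1:$1.
    # Uses bash's /dev/tcp pseudo-device; the subshell closes the fd on exit.
    (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

PORT="${1:-9100}"
if port_in_use "$PORT"; then
    echo "port $PORT is in use:"
    # ss (iproute2) shows the PID with -p; lsof is an alternative.
    ss -tlnp "sport = :$PORT" 2>/dev/null || lsof -iTCP:"$PORT" -sTCP:LISTEN
else
    echo "port $PORT is free"
fi
```

If the listener turns out to be a fleetwide node_exporter your config-management 
system deployed, that tells you which of the two to move.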

Most likely this is either detritus from a previous deployment attempt, or 
perhaps your organization has a fleetwide node_exporter deployment that is 
conflicting.  I tend to place a fleet node_exporter deployment on 9101 instead 
of 9100 to avoid colliding with instances deployed by other tools.  It doesn't 
hurt to have more than one running.
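If you'd rather move cephadm's own node-exporter off 9100 instead, recent 
cephadm releases accept a port override in the monitoring service spec, applied 
with `ceph orch apply -i <file>`.  A sketch (verify the `port` field against 
the cephadm docs for your release):

```yaml
service_type: node-exporter
service_name: node-exporter
placement:
  host_pattern: '*'
spec:
  port: 9101
```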

9100 is also historically the port used by HP's JetDirect network printer 
interface.



> ERROR: TCP Port(s) '0.0.0.0:9100' required for node-exporter already in use
> [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
>    daemon node-exporter.ceph-mon2 on ceph-mon2 is in error state
> 
> I think it's most likely due to leftovers from the previous
> installation.  How would I go about removing this cleanly and, more
> importantly, in a way that Ceph is aware of the change, thereby clearing
> the warning?

systemctl -a
ceph orch ls
ceph orch ps

Look for orphaned orchestrator instances and kill them; if the ceph orch 
commands don't do the whole job, stop, disable, and remove the leftover units 
with systemctl.  node_exporter doesn't hold state as such, so there's nothing 
to preserve.
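Concretely, the cleanup might look like the sketch below.  The daemon and host 
names come from the warning above; the systemd unit naming and the 
`cephadm rm-daemon` step are assumptions to verify against your release, and 
the script only echoes each command (dry run) so you can inspect before 
executing anything:

```shell
#!/bin/sh
# Dry-run sketch of the cleanup sequence; paste the printed commands
# (or swap echo for real execution) once they look right for your cluster.
run() { echo "+ $*"; }

HOST=ceph-mon2                     # host from the CEPHADM_FAILED_DAEMON warning
DAEMON="node-exporter.$HOST"
FSID=$(ceph fsid 2>/dev/null || echo "<fsid>")   # cluster fsid, used in unit names

# 1. Let the orchestrator remove the failed daemon; it will redeploy it
#    per the service spec once the port conflict is gone.
run ceph orch daemon rm "$DAEMON" --force

# 2. If a leftover unit from an earlier attempt still holds the port,
#    stop and disable it, then purge it from cephadm's view of the host
#    (run the cephadm command on $HOST itself).
run systemctl stop "ceph-$FSID@node-exporter.$HOST.service"
run systemctl disable "ceph-$FSID@node-exporter.$HOST.service"
run cephadm rm-daemon --name "$DAEMON" --fsid "$FSID" --force

# 3. Confirm the daemon comes back healthy and the warning clears.
run ceph orch ps --daemon-type node-exporter
run ceph health detail
```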
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
