[ceph-users] ingress for mgr service

2024-07-24 Thread farhad kh
It would be very useful to have ingress support for the mgr service, to provide high availability for it. Will this feature be added in the next version, or does it still have to be implemented manually? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] when calling the CreateTopic operation: Unknown

2024-07-12 Thread farhad kh
hi, I want to use Ceph bucket notifications. I tried to create a topic with the command below, but I get an error when using Kafka with a username/password. How can I solve this problem? Is there a problem with my syntax? https://www.ibm.com/docs/en/storage-ceph/7?topic=management-creating-bucket-notifications https://doc…
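Topics are created through RGW's SNS-compatible endpoint. A minimal sketch with the AWS CLI, using placeholder endpoint, broker, and credentials (not the poster's real values):

```shell
# Sketch: create a bucket-notification topic whose push endpoint is a
# Kafka broker requiring SASL user/password. All names are placeholders.
aws --endpoint-url http://rgw.example.com:8000 sns create-topic \
  --name my-topic \
  --attributes '{
    "push-endpoint": "kafka://myuser:mypassword@kafka.example.com:9092",
    "use-ssl": "false",
    "kafka-ack-level": "broker"
  }'
```

One common failure mode: when the broker connection is not TLS-protected, recent RGW releases may refuse cleartext credentials in the push endpoint unless `rgw_allow_notification_secrets_in_cleartext` is enabled, so an auth-related error on topic creation is worth checking against that setting.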

[ceph-users] Problem in changing monitor address and public_network

2024-05-26 Thread farhad kh
Hello, following Ceph's own documentation and the article I linked, I tried to change the address of the Ceph machines and the cluster's public network. But when I tried to set the new address on the hosts (ceph orch host set-addr opcrgfpsksa0101 10.248.35.213), the command was n…
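The usual re-addressing sequence looks roughly like the following sketch; the host name and subnet are taken from the thread, and the exact steps should be verified against the cephadm documentation before running them on a live cluster:

```shell
# Sketch: re-address a cephadm host onto a new public network.
ceph config set mon public_network 10.248.35.0/24        # assumed new subnet
ceph orch host set-addr opcrgfpsksa0101 10.248.35.213    # update orchestrator's host address
# Monitors must be redeployed to pick up a new address, e.g.:
ceph orch daemon rm mon.opcrgfpsksa0101 --force
ceph orch daemon add mon opcrgfpsksa0101:10.248.35.213
```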

[ceph-users] ceph api rgw/role

2024-04-22 Thread farhad kh
hi, I used the Ceph API to create an rgw role, but there is no API to delete or edit one. How can I delete or edit roles?
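Roles can also be managed on the command line with radosgw-admin; a sketch, where "S3Access" and the policy names are placeholders:

```shell
# Sketch: list, inspect, and remove an RGW role ("S3Access" is a placeholder).
radosgw-admin role list
radosgw-admin role get --role-name=S3Access
radosgw-admin role rm  --role-name=S3Access
# A role's permission policy can be replaced rather than edited in place:
radosgw-admin role-policy put --role-name=S3Access \
  --policy-name=Policy1 --policy-doc=file://policy.json
```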

[ceph-users] change ip node and public_network in cluster

2024-02-17 Thread farhad kh
I have deployed a Ceph cluster with cephadm that has three monitors and three OSDs; each node has one interface on the 192.168.0.0/24 network. I want to change the address of the machines to the 10.4.4.0/24 range. Is there a way to make this change without data loss or failure? I changed the public_ne…

[ceph-users] Re: cephadm file "/sbin/cephadm", line 10098 PK ^

2023-12-18 Thread farhad kh
hi, thank you for the guidance. There is no way to change the global image before launching; I need to download the images from my private registry during the initial setup. I used the --image option but it did not work. # cephadm bootstrap --image rgistry.test/ceph/ceph:v18 --mon-ip 192.168.0.160 -…

[ceph-users] cephadm file "/sbin/cephadm", line 10098 PK ^

2023-12-18 Thread farhad kh
Hello, I downloaded cephadm from the link below. https://download.ceph.com/rpm-18.2.0/el8/noarch/ I changed the image addresses to point at my private registry: ``` DEFAULT_IMAGE = 'opkbhfpspsp0101.fns/ceph/ceph:v18' DEFAULT_IMAGE_IS_MAIN = False DEFAULT_IMAGE_RELEASE = 'reef' DEFAULT_P…
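Editing the constants inside the cephadm binary should not normally be needed; the image and registry can be given on the command line instead. A sketch with placeholder registry name and credentials:

```shell
# Sketch: bootstrap directly from a private registry (URL, user, and
# password are placeholders). Note that --image is a global cephadm flag,
# and in the versions I have seen it must come BEFORE the bootstrap
# subcommand, not after it.
cephadm --image registry.example.com/ceph/ceph:v18 bootstrap \
  --mon-ip 192.168.0.160 \
  --registry-url registry.example.com \
  --registry-username myuser \
  --registry-password mypassword
```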

[ceph-users] dashboard ERROR exception

2023-10-30 Thread farhad kh
I use Ceph 17.2.6, and when I deploy two separate RGW realms, each with its own zonegroup and zone, the dashboard enables access for both object gateways and I can create users, buckets, etc. But when I try to create a bucket in one of the object gateways, I get the error below: debug 2023-10…

[ceph-users] dashboard for rgw NoSuchKey

2023-07-03 Thread farhad kh
I deployed the RGW service and the default pools were created automatically, but I get an error in the dashboard: `` Error connecting to Object Gateway: RGW REST API request failed with default 404 status code","HostId":"736528-default-default"}') `` There is a dashboard user, but I created the bucket ma…

[ceph-users] copy file in nfs over cephfs error "error: error in file IO (code 11)"

2023-06-25 Thread farhad kh
hi everybody, we have a problem with the NFS-Ganesha load balancer. When we use rsync -av to copy files from another share to a Ceph NFS share path, we get this error: `rsync -rav /mnt/elasticsearch/newLogCluster/acr-202* /archive/Elastic-v7-archive` rsync: close failed on "/archive/Elastic-v7-archive/"…

[ceph-users] osd memory target not work

2023-06-20 Thread farhad kh
When I set osd_memory_target to limit the memory usage of an OSD, I expected the value to be applied to the OSD container, but the value does not show up in the docker stats command. Is my understanding of this process wrong? --- [root@opcsdfpsbpp0201 ~]# ceph orch ps | grep osd.12 osd.12…
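osd_memory_target is a target for the OSD's own cache autotuning, not a container cgroup limit, so it is expected that `docker stats` does not reflect it. A sketch of setting and checking it (the OSD id and size are from the thread/illustrative):

```shell
# Sketch: osd_memory_target drives the OSD's internal cache sizing;
# it is best-effort and is NOT a container memory limit.
ceph config set osd.12 osd_memory_target 4294967296   # 4 GiB, example value
ceph config get osd.12 osd_memory_target              # confirm the setting
# Actual RSS can temporarily exceed the target during recovery or bursts.
```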

[ceph-users] autocaling not work and active+remapped+backfilling

2023-06-19 Thread farhad kh
hi, I have a problem with Ceph 17.2.6 (CephFS with MDS daemons) and I see unusual behavior. I created a data pool with the default CRUSH rule, but the data is stored on only 3 specific OSDs while the other OSDs stay clean. PG autoscaling is also active, but the pg count does not change when the pool gets bigger. I did this manua…
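A sketch of the usual checks for this symptom ("cephfs_data" is a placeholder pool name); when data lands on only a few OSDs, the pool's pg_num and CRUSH rule are the first things to inspect:

```shell
# Sketch: inspect autoscaling and placement for a pool (placeholder name).
ceph osd pool autoscale-status
ceph osd pool get cephfs_data pg_num
ceph osd pool set cephfs_data pg_autoscale_mode on
# A tiny pg_num (e.g. 1) keeps all data on one PG's worth of OSDs:
ceph osd pool get cephfs_data crush_rule
```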

[ceph-users] cephfs mount with kernel driver

2023-06-19 Thread farhad kh
I noticed that in my scenario, when I mount CephFS via the kernel module, writes go directly to only one to three of the OSDs, and the client's write speed is higher than the speed of replication and autoscaling. This causes write operations to stop as soon as those OSDs fill up, and the…

[ceph-users] stray daemons not managed by cephadm

2023-06-12 Thread farhad kh
I deployed a Ceph cluster with 8 nodes (v17.2.6), and after adding all of the hosts, Ceph created 5 mon daemon instances. I tried to decrease that to 3 instances with `ceph orch apply mon --placement=label:mon,count:3`. It worked, but after that I get the error "2 stray daemons not managed by cephadm". But every ti…
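A placement that combines a count with a label is usually expressed as a service spec file rather than a one-line --placement string; a sketch (the label is from the thread, the hostname is a placeholder):

```shell
# Sketch: apply a mon spec combining count and label.
cat > mon-spec.yaml <<'EOF'
service_type: mon
placement:
  count: 3
  label: mon
EOF
ceph orch apply -i mon-spec.yaml
# Mons removed from orchestration can linger in the monitor map as
# "stray daemons" until they are removed from the monmap itself:
ceph mon remove old-mon-host   # placeholder hostname
```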

[ceph-users] change user root to non-root after deploy cluster by cephadm

2023-06-07 Thread farhad kh
Hi guys, I deployed the Ceph cluster with cephadm as the root user, but I need to change to a non-root user. I did these steps: 1- Created a non-root user on all hosts with passwordless sudo access (`$USER_NAME ALL = (root) NOPASSWD:ALL`) 2- Generated an SSH key pair and used ssh-copy-…
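cephadm has a built-in setting for the SSH user it connects with; a sketch of the usual switch, assuming the user and sudo setup above are already in place ("cephuser" is a placeholder):

```shell
# Sketch: switch the user cephadm uses for SSH orchestration.
ceph cephadm get-pub-key > ceph.pub       # the key cephadm connects with
# Distribute it to the new user on every host, e.g.:
#   ssh-copy-id -f -i ceph.pub cephuser@<each-host>   (placeholder hosts)
ceph cephadm set-user cephuser            # switch the orchestration user
ceph orch host ls                         # verify hosts are still reachable
```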

[ceph-users] fail delete "daemon(s) not managed by cephadm"

2023-05-27 Thread farhad kh
hi everyone, I have a warning: `1 stray daemon(s) not managed by cephadm` # ceph health detail HEALTH_WARN 1 stray daemon(s) not managed by cephadm [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm stray daemon mon.apcepfpspsp0111 on host apcepfpspsp0111 not managed by cepha…

[ceph-users] new install or change default registry to private registry

2023-05-17 Thread farhad kh
I am trying to deploy a cluster from a private registry and used this command: `cephadm bootstrap ---mon-ip 10.10.128.68 --registry-url my.registry.xo --registry-username myuser1 --registry-password mypassword1 --dashboard-password-noupdate --initial-dashboard-password P@ssw0rd` I even changed the Default…

[ceph-users] Set the Quality of Service configuration.

2023-04-02 Thread farhad kh
How can I set an IO quota or a read/write limit for an erasure-coded pool in Ceph?
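Ceph pool quotas limit capacity (bytes/objects), not IO rate; per-image IO limits exist for RBD via its QoS settings. A sketch of both, with placeholder pool and image names:

```shell
# Capacity quotas on a pool ("ecpool" is a placeholder name):
ceph osd pool set-quota ecpool max_bytes $((100 * 1024**3))   # 100 GiB
ceph osd pool set-quota ecpool max_objects 1000000
# Per-image IO limits (QoS), if the pool backs RBD images:
rbd config image set ecpool/myimage rbd_qos_iops_limit 500
rbd config image set ecpool/myimage rbd_qos_bps_limit 104857600  # 100 MiB/s
```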

[ceph-users] recovery for node disaster

2023-02-12 Thread farhad kh
I have a cluster of three nodes, with three replicas per pool on cluster nodes - HOST ADDR LABELS STATUS apcepfpspsp0101 192.168.114.157 _admin mon apcepfpspsp0103 192.168.114.158 mon _admin apcepfpspsp0105 192.168.114.159 mon _admin 3 hosts in cluster --

[ceph-users] add an existing rbd image to iscsi target

2022-12-07 Thread farhad kh
I have a cluster (v17.2.4) deployed with cephadm --- [root@ceph-01 ~]# ceph -s cluster: id: c61f6c8a-42a1-11ed-a5f1-000c29089b59 health: HEALTH_OK services: mon: 3 daemons, quorum ceph-01.fns.com,ceph-03,ceph-02 (age 109m) mgr: ceph-01.fns.com.vdoxhd (active, since 1…
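With ceph-iscsi's gwcli, an RBD image can be attached as a disk and then mapped to a target; a sketch of the commands inside the gwcli shell, where the target IQN, pool, and image names are placeholders and the exact syntax should be checked against your ceph-iscsi version:

```shell
# Sketch (inside the gwcli shell; all names are placeholders).
# In recent ceph-iscsi versions, creating a disk for an image that
# already exists attaches it rather than recreating it.
/disks create pool=rbd image=existing-image size=10G
/iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/disks add disk=rbd/existing-image
```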

[ceph-users] remove osd in crush

2022-08-27 Thread farhad kh
I removed the OSD from the crushmap but it is still in `ceph osd tree`: [root@ceph2-node-01 ~]# ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 20.03859 root default -20 20.03859 datacenter dc-1 -21 20.03859 room serv…
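Removing an OSD completely takes more than a CRUSH-map edit; the OSD also has an auth key and an OSD-map entry, and the latter is what `ceph osd tree` still shows. A sketch of the full removal ("12" is a placeholder id):

```shell
# Sketch: full OSD removal; "12" is a placeholder id.
ceph osd out 12
ceph osd crush remove osd.12   # remove from the CRUSH map
ceph auth del osd.12           # remove its auth key
ceph osd rm 12                 # remove from the OSD map (drops it from `ceph osd tree`)
# Or, on recent releases, all of the above in one step:
ceph osd purge 12 --yes-i-really-mean-it
```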

[ceph-users] use ceph rbd for windows cluster "scsi-3 persistent reservation"

2022-06-22 Thread farhad kh
I need a block storage disk that is shared between two Windows servers. The servers are active/standby (server clustering): only one server can write at a time, but both servers can read the created files, and if the first server shuts down, the second server can edit the files or create a new file…

[ceph-users] lifecycle config minimum time

2022-06-21 Thread farhad kh
I want to set a lifecycle rule for incomplete multipart uploads, but I cannot find documentation saying whether minutes or hours can be used for the time. How can I set a lifecycle time of less than a day? (My rule: abort incomplete multipart upload after 1 day, status Enabled, 1)
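S3 lifecycle times (including AbortIncompleteMultipartUpload's DaysAfterInitiation) are expressed in whole days; for testing, RGW can scale down what a "day" means with a debug interval. A sketch:

```shell
# Sketch: lifecycle rules only accept whole days, but for TESTING, RGW can
# shrink a "day" (here: 1 day == 60 seconds). Not meant for production.
ceph config set client.rgw rgw_lc_debug_interval 60
# Run and inspect lifecycle processing on demand:
radosgw-admin lc process
radosgw-admin lc list
```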

[ceph-users] Degraded data redundancy: 32 pgs undersized

2022-06-12 Thread farhad kh
I upgraded my cluster to 17.2 and the upgrade process is stuck; I have this error: [root@ceph2-node-01 ~]# ceph -s cluster: id: 151b48f2-fa98-11eb-b7c4-000c29fa2c84 health: HEALTH_WARN Reduced data availability: 32 pgs inactive Degraded data redundancy: 32 pgs undersized…

[ceph-users] unknown object

2022-06-06 Thread farhad kh
I deleted all the objects in my bucket, but the used capacity is not zero. When I list the objects in the pool with `rados -p default.rgw.buckets.data ls`, it shows me a lot of objects: 2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/16Q91ZUY34EAW9TH.2~zOHhukByW0DKgDIIihOEhtxtW85FO5m.74_1 2ee2e53d-bad4-4857-8bea-36eb…
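Deleted S3 object data is reclaimed asynchronously by RGW's garbage collection, which is why `__shadow_` objects linger in the data pool for a while after deletion. A sketch of inspecting and forcing GC:

```shell
# Sketch: RGW reclaims deleted object data asynchronously via garbage collection.
radosgw-admin gc list --include-all     # entries still waiting for collection
radosgw-admin gc process --include-all  # process them now instead of waiting
```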

[ceph-users] Error CephMgrPrometheusModuleInactive

2022-06-01 Thread farhad kh
I have an error in the Ceph dashboard -- CephMgrPrometheusModuleInactive description: The mgr/prometheus module at opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus module, metrics and alert…
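A sketch of the usual checks for this alert (the mgr host below is a placeholder):

```shell
# Sketch: verify the prometheus mgr module is enabled and listening.
ceph mgr module ls | grep -i prometheus   # is the module enabled?
ceph mgr module enable prometheus         # enable it if not
ceph mgr services                         # shows the URL the module serves on
curl -s http://mgr-host.example.com:9283/metrics | head   # placeholder host
```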

[ceph-users] ceph upgrade bug

2022-05-30 Thread farhad kh
I am updating the cluster to version 16.2.9, but `ceph orch ps` does not show the versions of the other daemons: [root@opcpmfpsbpp0101 c41ccd12-dc01-11ec-9e25-00505695f8a8]# ceph orch ps NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAI…

[ceph-users] multi write in block device

2022-05-30 Thread farhad kh
multi write in block device: I have two Windows servers and I presented one Ceph RBD LUN to both. What I need is: when the disk is offline for the first Windows server, the other server can update, write, and read all files on the disk, but only while the first server is down or disconnected from the LUN. This does not work; what should I do…

[ceph-users] Degraded data redundancy and too many PGs per OSD

2022-05-30 Thread farhad kh
hi, I have a problem in my cluster. I used a cache tier for RGW data: three hosts for the cache and three hosts for the data, with SSDs for the cache and HDDs for the data. I set a 20 GiB quota for the cache pool. When one host of the cache tier went offline, this warning was raised, and I decreased the quota to 10…

[ceph-users] Ceph's mgr/prometheus module is not available

2022-05-29 Thread farhad kh
hi, I upgraded my cluster from 16.2.6 to 16.2.9 and I see this error in the dashboard, but not on the command line: The mgr/prometheus module at opcpmfpsbpp0103.fst.20.10.in-addr.arpa:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus…

[ceph-users] HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI gateway 'opcpmfpsbpp0101' does not exist retval: -2 [ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: d

2022-05-28 Thread farhad kh
hi, I get an error when deleting a service from the dashboard; the Ceph version is 16.2.6. HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSCSI gateway 'opcpmfpsbpp0101' does not exist retval: -2 [ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed: iSC…

[ceph-users] cephadm error mgr not available and ERROR: Failed to add host

2022-05-24 Thread farhad kh
hi, I want to use a private registry for running the Ceph storage cluster, so I changed the default registry of my container runtime (docker) in /etc/docker/daemon.json: { "registery-mirrors": ["https://private-registery.fst"] } and changed all the registry addresses in /usr/sbin/cephadm (quay.ceph.io and docker.io to my private…
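One thing worth checking: docker's key is spelled `registry-mirrors`, and a misspelled key is silently ignored. A sketch of a mirror config (the registry URL is the one from the thread):

```shell
# Sketch: the daemon.json key must be "registry-mirrors" (note the spelling);
# docker ignores unknown keys without any error.
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://private-registery.fst"]
}
EOF
systemctl restart docker
```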

[ceph-users] RGW error s3 api

2022-05-24 Thread farhad kh
hi, I have a lot of errors in the S3 API. On the S3 client I get this: 2022-05-24 10:49:58.095 ERROR 156723 --- [exec-upload-21640003-285-2] i.p.p.d.service.UploadDownloadService: Gateway Time-out (Service: Amazon S3; Status Code: 504; Error Code: 504 Gateway Time-out; Request ID: null; S3 Extended Re…

[ceph-users] HDD disk for RGW and CACHE tier for giving beter performance

2022-05-24 Thread farhad kh
I want to keep the data pools for RGW on HDD drives and use some SSDs for a cache tier on top of them. Has anyone tested this scenario? Is it practical and optimal? How can I do this?

[ceph-users] disaster in many of osd disk

2022-05-24 Thread farhad kh
I lost some disks in my Ceph cluster, and it then began to repair and re-replicate the objects. This caused some errors on the S3 API: Gateway Time-out (Service: Amazon S3; Status Code: 504; Error Code: 504 Gateway Time-out; Request ID: null; S3 Extended Request ID: null; Prox…

[ceph-users] client.admin crashed

2022-05-16 Thread farhad kh
I have an error in my Ceph cluster: HEALTH_WARN 1 daemons have recently crashed [WRN] RECENT_CRASH: 1 daemons have recently crashed client.admin crashed on host node1 at 2022-05-16T08:30:41.205667Z What does this mean and how can I fix it?
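The RECENT_CRASH warning is cleared by acknowledging (archiving) the crash reports after inspecting them; a sketch (the crash id is a placeholder):

```shell
# Sketch: inspect and acknowledge crash reports.
ceph crash ls-new              # the crashes behind the RECENT_CRASH warning
ceph crash info <crash-id>     # placeholder id; shows the full backtrace
ceph crash archive <crash-id>  # acknowledge one crash
ceph crash archive-all         # or acknowledge all and clear the warning
```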

[ceph-users] empty bucket

2022-05-14 Thread farhad kh
hi, I deleted all the objects in the bucket, but the used capacity of my bucket is not zero, and the ls command shows many objects. Why, and how can I delete them all? s3 ls s3://podspace-default-bucket-zone /usr/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS req…