Thanks, Karl, for sharing. With local SSDs would you be able to auto-scale?
Is that correct?
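
For anyone following along, here is a minimal sketch of what I understand
Karl's setup below to look like: a StatefulSet whose volumeClaimTemplate
requests network-attached SSD via a GCE pd-ssd StorageClass. The specific
names (solr-ssd, the image tag, the mount path) are my assumptions, not
taken from Karl's message:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: solr-ssd            # hypothetical name, not from Karl's message
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd              # GCE network-attached SSD persistent disk
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: solr
spec:
  serviceName: solr
  replicas: 3
  selector:
    matchLabels:
      app: solr
  template:
    metadata:
      labels:
        app: solr
    spec:
      containers:
        - name: solr
          image: solr:8.4           # assumed version
          volumeMounts:
            - name: data
              mountPath: /var/solr  # data dir in the official Solr image
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: solr-ssd
        resources:
          requests:
            storage: 512Gi          # matches the 512GB Karl mentions

Because these disks are network-attached, the claim can be detached and
reattached to a pod on another node; local SSDs are physically tied to a
single node, which is what prompts my question above.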

On Fri, Feb 7, 2020 at 5:22 AM Nicolas PARIS <nicolas.pa...@riseup.net>
wrote:

> hi all
>
> what about cephfs or lustre distributed filesystems for such a purpose?
>
>
> Karl Stoney <karl.sto...@autotrader.co.uk.INVALID> writes:
>
> > we personally run solr on google cloud kubernetes engine and each node
> > has 512GB of persistent SSD (network-attached) storage, which gives
> > roughly this performance (read/write):
> >
> >                                       Read        Write
> > Sustained random IOPS limit          15,360.00   15,360.00
> > Sustained throughput limit (MB/s)       245.76      245.76
> >
> > and we get very good performance.
> >
> > ultimately though it's going to depend on your workload
> > ________________________________
> > From: Susheel Kumar <susheel2...@gmail.com>
> > Sent: 06 February 2020 13:43
> > To: solr-user@lucene.apache.org <solr-user@lucene.apache.org>
> > Subject: Storage/Volume type for Kubernetes Solr POD?
> >
> > Hello,
> >
> > What type of storage/volume is recommended for running Solr on a
> > Kubernetes POD? I know that in the past Solr had issues storing its
> > indexes on NFS, and it was not recommended.
> >
> >
> > https://kubernetes.io/docs/concepts/storage/volumes/
> >
> > Thanks,
> > Susheel
>
>
> --
> nicolas paris
>
