Thanks Toke and Jack. Jack,
Yes, it is 480 million :) I will share the additional details soon. Thanks.

Regards,
Anil

On 13 March 2016 at 21:06, Jack Krupansky <jack.krupan...@gmail.com> wrote:

> (We should have a wiki/doc page for the "usual list of suspects" when
> queries are/appear slow, rather than needing to repeat the same mantras
> for every inquiry on this topic.)
>
> -- Jack Krupansky
>
> On Sun, Mar 13, 2016 at 11:29 AM, Toke Eskildsen <t...@statsbiblioteket.dk>
> wrote:
>
> > Anil <anilk...@gmail.com> wrote:
> > > I have indexed data (commands from files) with 10 fields, 3 of them
> > > text fields. The collection is created with 3 shards and 2 replicas.
> > > I have used document routing as well.
> > >
> > > Currently the collection holds 47,80,01,405 records.
> >
> > ...480 million, right? Funny digit grouping in India.
> >
> > > A text search against a text field takes around 5 seconds. The Solr
> > > query is just an AND of two terms, with fl listing 7 fields:
> > >
> > > fileId:"file unique id" AND command_text:(system login)
> >
> > While not an impressive response time, it might just be that your
> > hardware is not enough to handle that amount of documents. The usual
> > culprit is IO speed, so chances are you have a system with spinning
> > drives and not enough RAM: switch to SSD and/or add more RAM.
> >
> > To give better advice, we need more information:
> >
> > * How large are your 3 shards in bytes?
> > * What storage system do you use (local SSD, local spinning drives,
> >   remote storage...)?
> > * How much physical memory does your system have?
> > * How much memory is free for disk cache?
> > * How many concurrent queries do you issue?
> > * Do you update while you search?
> > * What does a full query (rows, faceting, grouping, highlighting,
> >   everything) look like?
> > * How many documents does a typical query match (hitcount)?
> >
> > - Toke Eskildsen
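For anyone following along: the query shape discussed above can be issued against Solr's standard HTTP select endpoint. Below is a minimal sketch of building such a request URL in Python. The host, collection name (`commands`), concrete `fileId` value, and field list are hypothetical stand-ins (the thread does not name them); `debug=timing` is included because Solr's timing output helps separate raw query time from component overhead when diagnosing slow queries like this one.

```python
from urllib.parse import urlencode

# Hypothetical base URL; the thread does not give the real host or
# collection name.
SOLR_BASE = "http://localhost:8983/solr/commands/select"

def build_query_url(file_id, terms, fields, rows=10):
    """Build a Solr select URL for a fielded AND query.

    The q parameter mirrors the query from the thread:
        fileId:"<id>" AND command_text:(<term> <term>)
    debug=timing asks Solr to report per-component times in the
    response, which helps attribute the ~5 s to search vs. other stages.
    """
    params = {
        "q": 'fileId:"%s" AND command_text:(%s)' % (file_id, " ".join(terms)),
        "fl": ",".join(fields),   # restrict returned fields, as in the thread
        "rows": rows,
        "debug": "timing",
    }
    return SOLR_BASE + "?" + urlencode(params)

# Example with made-up values standing in for the real ones:
url = build_query_url("file-123", ["system", "login"],
                      ["id", "fileId", "command_text"])
print(url)
```

The same URL can then be fetched with curl or any HTTP client; the `debug` section of the JSON response shows where the time goes.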