I would shard the index so that each shard is no larger than the memory of the
machine it sits on; that way, your entire index will be in memory all the time.
When I was at Feedster (I wrote the search engine), my rule of thumb was to
keep about 14GB of index on a 16GB machine.
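In case it's useful, a rough SolrJ sketch of what the query side looks like once
the index is split across shards; the host names below are just placeholders:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ShardedQuery {
    public static void main(String[] args) throws Exception {
        // Send the request to any one node; it fans the query out to all shards.
        SolrServer server = new CommonsHttpSolrServer("http://shard1:8983/solr");
        SolrQuery q = new SolrQuery("some query");
        // Comma-separated list of shard hosts (placeholders), each sized to fit in RAM.
        q.set("shards", "shard1:8983/solr,shard2:8983/solr,shard3:8983/solr");
        QueryResponse rsp = server.query(q);
        System.out.println("numFound=" + rsp.getResults().getNumFound());
    }
}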
François
On Dec
Hi,
I remember going through some page that had graphs of response times based on
index size for solr.
Anyone know of such pages?
Internally, we have some requirements for response times and I'm trying to
figure out when to shard the index.
Thanks,
Tri
Yes, that fixed the problem. Interesting... I usually think of setting debug as
just changing the verbosity level; in this case it caused docs not to be processed.
DataImportHandler status: db-data-config.xml, full-import, idle
2010-12-31 17:45:03
Indexing completed. Added/Updated: 440 documents. Deleted 0 documents.
2010-12-31 17:45:
Sure, I'll try that.
2010/12/31 Ahmet Arslan
> It seems that with &debug=on there is a hard-coded default of rows=10.
>
>
> http://knowtate.servehttp.com:8983/solr/core0/dataimport?command=full-import&debug=on&echoParams=all&rows=50
>
> returns "Added/Updated: 50 documents. Deleted 0 documents."
>
It seems that with &debug=on there is a hard-coded default of rows=10.
http://knowtate.servehttp.com:8983/solr/core0/dataimport?command=full-import&debug=on&echoParams=all&rows=50
returns "Added/Updated: 50 documents. Deleted 0 documents."
It seems that the debug parameter is related to /solr/core0/ad
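If so, re-running the import without debug should pick up all the rows; for
illustration only, the same URL with the debug parameters dropped:

http://knowtate.servehttp.com:8983/solr/core0/dataimport?command=full-import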
One little extra piece of info: part of the stats page got omitted; notably,
the number of errors was reported as 0.
errors : 0
timeouts : 0
totalTime : 1963
avgTimePerRequest : 981.5
avgRequestsPerSecond : 0.0011371888
2010/12/31 Stephen Boesch
> I am asking for a full DataImport via a URL.
I am asking for a full DataImport via a URL. It seems to be partially
happy with the request: with debug=on I can see it saying that 10
documents were indexed. The backend, however, realizes there are actually 440
records available for the query.
Not sure why only 10 records were selected and th
The Solr admin pages do not have a delete function. You have to use
'curl' or 'wget' or your own SolrJ program to delete documents.
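For example, a quick SolrJ sketch (the URL, unique key value, and field name
below are only placeholders):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class DeleteDocs {
    public static void main(String[] args) throws Exception {
        // Point this at your Solr instance/core (placeholder URL).
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Delete a single document by its unique key...
        server.deleteById("http://example.com/old-page.html");

        // ...or delete everything matching a query (hypothetical field).
        server.deleteByQuery("site:old.example.com");

        // Deletes only show up after a commit.
        server.commit();
    }
}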
On Fri, Dec 31, 2010 at 3:34 AM, wrote:
> Dear,
>
> I have created an index through a crawler with Solr, but I am also getting
> links to old pages.
> My question is how to delete specific links from the index through the Solr Admin?
also try &debugQuery=true and see why each result matched
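For example (the host and query here are just placeholders):

http://localhost:8983/solr/select?q=field:value&debugQuery=true

then look at the "explain" entries in the debug section of the response.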
On Thu, Dec 30, 2010 at 4:10 PM, mrw wrote:
>
>
> Basically, just what you've suggested. I did the field/query analysis piece
> with verbose output. Not entirely sure how to interpret the results, of
> course. Currently reading anything I can find on that.
Well, if that's what's in your class, this won't work, because it's looking for
"org.apache". You can try just
class="MarathiAnalyzer"
So I'm not sure removing the package statement is really what you want here.
Now I'm wondering whether you really put the jar file in the right place; is
it p
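A rough illustration of how the two have to line up (the package name here is
purely hypothetical, not necessarily yours): if the class file starts with

package com.example.analysis;

then the schema entry needs the matching fully qualified name,

<analyzer class="com.example.analysis.MarathiAnalyzer"/>

and the jar containing that class has to be somewhere Solr loads jars from (for
example the core's lib directory).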
Here's a discussion of the difference between them; does that answer it?
http://lucene.472066.n3.nabble.com/spell-check-vs-terms-component-td1870214.html
Best
Erick
On Fri, Dec 31, 2010 at 8:55 AM, TxCSguy wrote:
>
> Hi,
>
> I am trying to clear up some confusion about Solr's spell check
> functionality.
The actual class files present in the jar are:
MarathiAnalyzer.class
MarathiStemFilter.class
MarathiStemmer.class
MarathiAnalyzer$1.class
MarathiAnalyzer$SavedStreams.class
Please tell me what else I need to specify about my problem.
Dear,
I have created an index through a crawler with Solr, but I am also getting
links to old pages.
My question is how to delete specific links from the index through the Solr Admin?
Regards,
Tapan Sadafal.
DID : 67897880
On Fri, Dec 31, 2010 at 2:40 AM, mrw wrote:
>
>
> Basically, just what you've suggested. I did the field/query analysis piece
> with verbose output. Not entirely sure how to interpret the results, of
> course. Currently reading anything I can find on that.
[...]
From the above, it is not quite
Hi,
I am trying to clear up some confusion about Solr's spell check
functionality. Being new to Solr and Lucene as well, I was under the
assumption that spellcheck would take a query entered by a user and end up
actually querying the index based upon the corrections returned by the
spellcheck component.
Thanks for the reply.
What I have done now is take the suggested string and make another
query to Solr along with the filter parameter.
It is working for now, since I can't figure out another workaround.
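Roughly, in SolrJ terms it looks like this (the URL, query text, and filter are
just placeholders for what I actually use):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.client.solrj.response.SpellCheckResponse;

public class SpellcheckThenRequery {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // First pass: run the user's query with spellcheck and collation on.
        SolrQuery first = new SolrQuery("helo wurld");
        first.set("spellcheck", "true");
        first.set("spellcheck.collate", "true");
        QueryResponse rsp = server.query(first);

        // Second pass: if a collated suggestion came back, query again with it
        // and re-apply the same filter.
        SpellCheckResponse sc = rsp.getSpellCheckResponse();
        if (sc != null && sc.getCollatedResult() != null) {
            SolrQuery retry = new SolrQuery(sc.getCollatedResult());
            retry.addFilterQuery("category:books"); // placeholder filter
            rsp = server.query(retry);
        }
        System.out.println("numFound=" + rsp.getResults().getNumFound());
    }
}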
Regards,
Taimur