I don't know of a way to tell Solr to load all the indexes into
memory, but if you were to simply read all the files at the OS level,
that would do it. Under a Unix OS, "cat * > /dev/null" would work. Under
Windows, I can't think of a way to do it off the top of my head, but if
you had Cygwin, the same approach would work there.
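The Unix trick above can be sketched as follows. The real target would be the core's index directory (the path is an assumption and will differ per install); the demo below substitutes a temporary directory with a fake segment file so it is self-contained:

```shell
# Reading every index file once pulls it into the OS page cache.
# On a real install you would point this at the core's index directory,
# e.g.  cat /var/solr/data/index/* > /dev/null   (path is an assumption).
# For a self-contained demo, use a temp dir with a fake segment file:
INDEX_DIR=$(mktemp -d)
echo "segment data" > "$INDEX_DIR/_0.fdt"

cat "$INDEX_DIR"/* > /dev/null && echo "warmed $INDEX_DIR"
```

Note this only helps as long as the files fit in free RAM; the kernel will evict them again under memory pressure.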
On 7/17/2010 3:28 AM, marship wrote:
Hi. Peter and All.
I merged my indexes today. Now each index stores 10M documents, and I have
only 10 Solr cores.
And I used
java -server -Xmx1g -jar start.jar
to start the jetty server.
How big are the indexes on each of those cores? You can easily get th
Hi. Geert-Jan.
Thanks for replying.
I know Solr has a query cache, and it improves search speed from the second
request onward. But when I talk about search speed, I don't mean
the speed of the cache. When a user searches on our site, I don't want the first
query to cost 10s and all following
>My query string is always simple like "design", "principle of design",
"tom"
>EG:
>URL:
http://localhost:7550/solr/select/?q=design&version=2.2&start=0&rows=10&indent=on
IMO, indeed, with these types of simple searches caching (and thus RAM usage)
cannot be fully exploited, i.e. there isn't reall
> > Each solr (jetty) instance consumes 40M-60M of memory.
> java -Xmx1024M -jar start.jar
That's a good suggestion!
Please double-check that you are using the -server version of the JVM,
and the latest release, 1.6.0_20 or so.
Additionally, you can start jvisualvm (shipped with the JDK) and hook
into the Jetty process
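For a local Jetty process, jvisualvm can usually attach directly with no extra flags. To attach from another machine, Jetty can be started with the standard com.sun.management.jmxremote system properties; a minimal sketch, assuming port 9010 is free and the box is on a trusted network (these flags disable JMX authentication and SSL, so don't use them on an exposed host):

```shell
# Hypothetical launch line: standard JMX remote flags plus the heap
# settings discussed in this thread. Port 9010 is an arbitrary choice.
java -server -Xmx1024M \
     -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar start.jar
```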
Hi Scott!
> I am aware these cores on same server are interfering with each other.
That's not good. Try to use only one core per CPU. With more than one per CPU
you won't see any benefit over the single-core version, I think.
> can solr use more memory to avoid disk operation conflicts?
Yes, only the m
What do your queries look like? Do you use faceting, highlighting, ...?
Did you try to customize the cache?
Setting the HashDocSet to "0.005 of all documents" improves our search speed a
lot.
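The cache and HashDocSet settings mentioned above live in solrconfig.xml. A minimal sketch for a Solr 1.4-era config (all sizes are illustrative placeholders, not recommendations; a HashDocSet maxSize of roughly 0.005 of the document count would be about 50000 for a 10M-document core):

```xml
<!-- solrconfig.xml (fragment); values below are placeholders -->
<query>
  <!-- roughly 0.005 of the documents in the core -->
  <HashDocSet maxSize="50000" loadFactor="0.75"/>

  <queryResultCache class="solr.LRUCache"
                    size="512" initialSize="512" autowarmCount="128"/>
  <filterCache class="solr.LRUCache"
               size="512" initialSize="512" autowarmCount="128"/>
</query>
```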
Did you optimize the index?
500ms seems slow for an 'average' search. I am not an expert, but with
Is there any reason why you have to limit each instance to only 1M
documents?
If you could put more documents in the same core I think it would
dramatically improve your response times.
-----Original Message-----
From: marship [mailto:mars...@126.com]
Sent: Thursday, 15 July 2010 6:23
To: solr-us