I'm creating a custom handler where I have a base query and a resulting
DocListAndSet.
I need to do some extra queries to get top results per facet. There are 2
cases:
1. the sorting used for the top results for a particular facet is the same
as the sorting used for the already returned DocList
So I counted the number of distinct values that I have for each field that I
want a facet on. In total it's around 100,000. I tried with a filterCache
of 120,000, but it seems like too much because the server went down. I will
try with less, around 75,000, and let you know.
How do you partition the data to a static set and a dynamic set, and then
combine them at query time? Do you have a link to read about that?
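For reference, cache sizing like that lives in solrconfig.xml; a minimal
sketch, with the numbers purely illustrative rather than a recommendation:

    <!-- solrconfig.xml: each cached filter holds the set of docs matching
         one facet value, so the entry count should track the number of
         distinct values you facet on (RAM permitting). -->
    <filterCache
        class="solr.LRUCache"
        size="75000"
        initialSize="75000"
        autowarmCount="10000"/>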
Jonathan Ariel wrote:
How do you partition the data to a static set and a dynamic set, and then
combine them at query time? Do you have a link to read about that?
One way would be distributed search (SOLR-303), but distributed idf is
not part of the current patch anymore, so you may have
A commit every two minutes means that the Solr caches are flushed
before they even start to stabilize. Two things to try:
* commit less often, 5 minutes or 10 minutes
* have enough RAM that your entire index can fit in OS file buffers
wunder
On 4/16/08 6:27 AM, "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
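If the two-minute commits come from clients committing on every batch,
server-side autocommit is one way to cap the rate; a sketch of the relevant
solrconfig.xml block, assuming a Solr build with autoCommit/maxTime support
(the times are illustrative):

    <!-- inside <updateHandler>: commit at most once every 5 minutes -->
    <autoCommit>
      <maxTime>300000</maxTime> <!-- milliseconds -->
    </autoCommit>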
In order to do that I have to change to a 64-bit OS so I can have more than
4 GB of RAM. Is there any way to see how long it takes Solr to warm up
the searcher?
On Wed, Apr 16, 2008 at 11:40 AM, Walter Underwood <[EMAIL PROTECTED]>
wrote:
> A commit every two minutes means that the Solr caches are flushed
Is there any way to know how much memory is being used by the caches?
On Wed, Apr 16, 2008 at 11:50 AM, Jonathan Ariel <[EMAIL PROTECTED]> wrote:
> In order to do that I have to change to a 64-bit OS so I can have more
> than 4 GB of RAM. Is there any way to see how long it takes Solr to
> warm up the searcher?
Do it. 32-bit OS's went out of style five years ago in server-land.
I would start with 8GB of RAM. 4GB for your index, 2 for Solr, 1 for
the OS and 1 for other processes. That might be tight. 12GB would
be a lot better.
wunder
On 4/16/08 7:50 AM, "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
> In
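To make the split concrete: the "2 for Solr" piece is the JVM heap, set on
the container's command line, while the "4GB for your index" is simply RAM
left unallocated so the OS can cache index files with it. A sketch, assuming
the stock Jetty start.jar launcher:

    # Fixed 2 GB heap for the Solr JVM; the rest of the machine's RAM
    # stays free for the OS file-buffer cache that holds the index.
    java -Xms2g -Xmx2g -jar start.jar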
It is working, but I disabled recursive field aliasing. Two questions:
* Is it possible to do recursive field aliasing from solrconfig.xml?
* If not, do we want to preserve this speculative feature?
I think the answers are "no" and "no", but I'd like a second opinion.
wunder
On 4/15/08 10:23 AM
Hello. I am having a similar problem to the OP's. I see that you recommended
setting 4GB for the index and 2 for Solr. How do I allocate memory for the
index? I was under the impression that Solr did not support a RAMIndex.
Walter Underwood wrote:
>
> Do it. 32-bit OS's went out of style five years ago in server-land.
4GB for the operating system to use to buffer disk files.
That is not a Solr setting.
wunder
On 4/16/08 11:05 AM, "oleg_gnatovskiy" <[EMAIL PROTECTED]>
wrote:
>
> Hello. I am having a similar problem as the OP. I see that you recommended
> setting 4GB for the index, and 2 for Solr. How do I all
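On Linux you can at least watch how much RAM the kernel is actually using
for that file buffering, for example:

    # The "buffers"/"cached" columns are the OS disk cache --
    # that is where the "4GB for the index" ends up living.
    free -m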
Oleg, you can't explicitly say "N GB for index". Wunder was just estimating
how much RAM each piece might need so you can size the machine accordingly.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: oleg_gnatovskiy <[EMAIL PROTECTED]>
To: sol
Thanks Chris.
I had in mind "occurs in a lot of documents". Please
point me to where I can pick up an example of using the
LukeRequestHandler and the "shingles based tokenizer".
Eric
--- Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
> it depends on your definition of "popular" if you
> mean "occu
Eric,
Look at LUCENE-400 or Lucene trunk/contrib/analyzers for the shingles stuff.
Have you checked the Wiki for info about LukeRequestHandler? I bet it's there.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: Edwin Koome <[EMAIL PROTECTED]>
T
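For the shingles piece, a minimal sketch of wrapping a tokenizer with the
contrib ShingleFilter, written against the Lucene 2.x-era TokenStream API
(treat the exact calls as assumptions for your version):

    import java.io.StringReader;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceTokenizer;
    import org.apache.lucene.analysis.shingle.ShingleFilter;

    public class ShingleDemo {
        public static void main(String[] args) throws Exception {
            // Emit unigrams plus word bigrams ("shingles"), e.g.
            // "new", "new york", "york", "york city", "city".
            TokenStream ts = new ShingleFilter(
                new WhitespaceTokenizer(new StringReader("new york city")), 2);
            for (Token t = ts.next(); t != null; t = ts.next()) {
                System.out.println(t.termText());
            }
        }
    }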
Oh ok. That makes sense. Thanks.
Otis Gospodnetic wrote:
>
> Oleg, you can't explicitly say "N GB for index". Wunder was just saying
> how much you can imagine how much RAM each piece might need and be happy
> with.
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
Hi all,
I am trying to install Solr with Jetty (as part of another application)
on a Linux server running Gentoo linux and JDK 1.6.0_05.
When I try to start Jetty (and Solr), it doesn't open a port.
I know you will need more info, but I'm not sure what you would need as
I'm not clear on how this
Folks,
I know there is a 'GET' interface to send queries to Solr. But is there a
POST interface for sending queries? If so, can someone point me in that
direction?
Thanks, Jim
What does the Jetty log output say in the console after you start it? It
should mention the port # on one of the last lines. If it does, try using
curl or wget to do a local request:
curl http://localhost:8983/solr/
wget http://localhost:8983/solr/
Matt
On Wed, Apr 16, 2008 at 5:08 PM, Shawn Car
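It can also help to check from the shell whether anything is listening on
the expected port at all (8983 is the Jetty example default):

    # If 8983 is missing from the listener list, Jetty never bound the port.
    netstat -ln | grep 8983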
Hey everyone,
I'm experimenting with updating Solr from a remote XML source, using an
XSLT transform to get it into the Solr XML syntax (and yes, I've looked
into SOLR-469, but disregarded it as I need to do quite a bit of work in
XSLT to get the data into a form I can index) to let me maintain an index.
I'm lo
: Is there a way to implement a custom request handler or similar to get
: solr to apply an XSLT transform to the content stream before it attempts
: to parse it? If not possible OOTB, where would be the right place to
: add said functionality?
take a look at SOLR-285 and SOLR-370 ... a RequestH
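If you do end up writing such a handler, the transform step itself is small
with the JDK's built-in TrAX API; a self-contained sketch, with the file
names as placeholders:

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class XsltToSolrXml {
        public static void main(String[] args) throws Exception {
            // Compile the stylesheet, then run the feed through it.
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("to-solr.xsl")));
            // Output is Solr's <add><doc>...</doc></add> update syntax.
            t.transform(new StreamSource(new File("feed.xml")),
                        new StreamResult(new File("solr-add.xml")));
        }
    }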
: I know there is a 'GET' to send queries to Solr. But is there a POST
: interface to sending queries? If so, can someone point me in that
: direction?
POST using the standard application/x-www-form-urlencoded
content-type (ie: the same way you would POST using any HTML form)
-Hoss
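For example with curl, which sends application/x-www-form-urlencoded by
default when given --data:

    # Same parameters you would put after /select?, moved into the POST body.
    curl --data 'q=solr&rows=10' http://localhost:8983/solr/select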
Hi Daniel,
Maybe if you can give us a sample of what your XML looks like, we can suggest
how to use SOLR-469 (Data Import Handler) to index it. Most of the use-cases
we have encountered so far are solvable with the XPathEntityProcessor in
DataImportHandler, without using XSLT; for details look at
http
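For flavor, a sketch of what a DataImportHandler data-config.xml using
XPathEntityProcessor tends to look like (the feed URL, forEach path, and
xpaths below are made-up placeholders):

    <dataConfig>
      <dataSource type="HttpDataSource"/>
      <document>
        <!-- one Solr document per /records/record in the remote feed -->
        <entity name="rec" processor="XPathEntityProcessor"
                url="http://example.com/feed.xml"
                forEach="/records/record">
          <field column="id"    xpath="/records/record/id"/>
          <field column="title" xpath="/records/record/title"/>
        </entity>
      </document>
    </dataConfig>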