On selection, issue another query to get your additional data (if I
follow what you want).
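For example (field names here are only for illustration, assuming the extra
data lives in stored fields on the same documents), once the user picks a
suggestion the app could hit something like:

http://localhost:8983/solr/select?q=name:%22Brooklyn,+New+York,+United+States%22&fl=id,timezone,country_id&wt=json

and read the stored fields out of that response rather than packing them
into the suggestion itself.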
On 22 January 2012 18:53, Dave wrote:
> I take it from the overwhelming silence on the list that what I've asked is
> not possible? It seems like the suggester component is not well supported
> or understood,
I take it from the overwhelming silence on the list that what I've asked is
not possible? It seems like the suggester component is not well supported
or understood, and limited in functionality.
Does anyone have any ideas for how I would implement the functionality I'm
looking for? I'm trying to i
That was how I originally tried to implement it, but I could not figure out
how to get the suggester to return anything but the suggestion. How do you
do that?
On Thu, Jan 19, 2012 at 1:13 PM, Robert Muir wrote:
> I really don't think you should put a huge json document as a search term.
>
> Jus
I really don't think you should put a huge json document as a search term.
Just make "Brooklyn, New York, United States" or whatever you intend
the user to actually search on/type in as your search term.
Put the rest in different fields (e.g. stored-only, not even indexed
if you don't need that) an
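A minimal sketch of what that could look like in schema.xml (field names and
types are just an example, borrowing types from the stock example schema):

<!-- the text the user actually types/searches on -->
<field name="name" type="text_general" indexed="true" stored="true"/>
<!-- extra data the UI needs after a selection: stored only, not indexed -->
<field name="geo_id" type="string" indexed="false" stored="true"/>
<field name="timezone" type="string" indexed="false" stored="true"/>
<field name="payload_json" type="string" indexed="false" stored="true"/>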
In my original post I included one of my terms:
Brooklyn, New York, United States?{ "id": "2620829",
"timezone": "America/New_York", "type": "3", "country": { "id": "229" },
"region": { "id": "3608" }, "city": { "id": "2616971", "plainname":
"Brooklyn", "name": "Brooklyn, New York, United States"
I don't think the problem is FST, since it sorts offline in your case.
More importantly, what are you trying to put into the FST?
It appears you are indexing terms from your term dictionary, but your
term dictionary is over 1GB. Why is that?
What do your terms look like? 1GB for 2,784,937 docume
I'm also seeing the error when I try to start up the SOLR instance:
SEVERE: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:344)
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:352)
at org.apache.lucene.util.fst.FST$BytesWriter.writeByte
Unfortunately, that doesn't look like it solved my problem. I built the new
.war file, dropped it in, and restarted the server. When I tried to build
the spellchecker index, it ran out of memory again. Is there anything I
needed to change in the configuration? Did I need to upload new .jar files,
o
Hi Dave,
Try 'ant usage' from the solr/ directory.
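If it helps, the rough sequence would be something like this (the exact
targets are listed in the 'ant usage' output):

svn checkout http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x/
cd branch_3x/solr
ant usage   # lists the available build targets
ant dist    # should build apache-solr-*.war under dist/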
Steve
> -----Original Message-----
> From: Dave [mailto:dla...@gmail.com]
> Sent: Wednesday, January 18, 2012 2:11 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Trying to understand SOLR memory requirements
>
Ok, I've been able to pull the code from SVN, build it, and compile my
SpellingQueryConverter against it. However, I'm at a loss as to where to
find, or how to build, the solr.war file.
On Tue, Jan 17, 2012 at 8:59 AM, Robert Muir wrote:
> I committed it already: so you can try out branch_3x if you
Robert, where can I pull down a nightly build from? Will it include the
apache-solr-core-3.3.0.jar and lucene-core-3.3-SNAPSHOT.jar jars? I need to
re-build with a custom SpellingQueryConverter.java.
Thanks,
Dave
On Tue, Jan 17, 2012 at 8:59 AM, Robert Muir wrote:
> I committed it already: so y
I'm using 3.5
On Tue, Jan 17, 2012 at 7:57 PM, Lance Norskog wrote:
> Which version of Solr do you use? 3.1 and 3.2 had a memory leak bug in
> spellchecking. This was fixed in 3.3.
>
> On Tue, Jan 17, 2012 at 5:59 AM, Robert Muir wrote:
> > I committed it already: so you can try out branch_3x i
Which version of Solr do you use? 3.1 and 3.2 had a memory leak bug in
spellchecking. This was fixed in 3.3.
On Tue, Jan 17, 2012 at 5:59 AM, Robert Muir wrote:
> I committed it already: so you can try out branch_3x if you want.
>
> you can either wait for a nightly build or compile from svn
> (h
I committed it already: so you can try out branch_3x if you want.
you can either wait for a nightly build or compile from svn
(http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x/).
On Tue, Jan 17, 2012 at 8:35 AM, Dave wrote:
> Thank you Robert, I'd appreciate that. Any idea how long
Thank you Robert, I'd appreciate that. Any idea how long it will take to
get a fix? Would I be better off switching to trunk? Is trunk stable enough for
someone who's very much a SOLR novice?
Thanks,
Dave
On Mon, Jan 16, 2012 at 10:08 PM, Robert Muir wrote:
> looks like https://issues.apache.org/ji
I remembered there is another implementation that uses the Lucene index file
as the lookup table, not the in-memory FST.
FST has an advantage in speed, but if you write documents at runtime,
reconstructing the FST may cause a performance issue.
On Tue, Jan 17, 2012 at 11:08 AM, Robert Muir wrote:
> looks l
looks like https://issues.apache.org/jira/browse/SOLR-2888.
Previously, FST would need to hold all the terms in RAM during
construction, but with the patch it uses offline sorts/temporary
files.
I'll reopen the issue to backport this to the 3.x branch.
On Mon, Jan 16, 2012 at 8:31 PM, Dave wrot
According to http://wiki.apache.org/solr/Suggester FSTLookup is the least
memory-intensive of the lookupImpls. Are you suggesting a different
approach entirely, or is that a lookupImpl that is not mentioned in the
documentation?
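For reference, the sort of configuration the wiki shows is along these lines
in solrconfig.xml (the field name and threshold here are just placeholders):

<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
    <str name="field">name</str>
    <float name="threshold">0.005</float>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>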
On Mon, Jan 16, 2012 at 9:54 PM, qiu chi wrote:
> you may disable
You may disable FST lookup and use the Lucene index as the suggest method.
FST lookup loads all documents into memory; you can use the Lucene
spell checker instead.
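Roughly, something like this in solrconfig.xml in place of the FST-based
suggester (the field name is only an example):

<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="field">name</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>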
On Tue, Jan 17, 2012 at 10:31 AM, Dave wrote:
> I've tried up to -Xmx5g
>
> On Mon, Jan 16, 2012 at 9:15 PM, qiu chi wrote:
>
> >
I've tried up to -Xmx5g
On Mon, Jan 16, 2012 at 9:15 PM, qiu chi wrote:
> What is the largest -Xmx value you have tried?
> Your index does not seem very big.
> Try -Xmx2048m; it should work.
>
> On Tue, Jan 17, 2012 at 9:31 AM, Dave wrote:
>
> > I'm trying to figure out what my memory needs are
What is the largest -Xmx value you have tried?
Your index does not seem very big.
Try -Xmx2048m; it should work.
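(If you start Solr with the example Jetty setup, that would be something like:

java -Xmx2048m -jar start.jar

Adjust accordingly for however you launch your servlet container.)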
On Tue, Jan 17, 2012 at 9:31 AM, Dave wrote:
> I'm trying to figure out what my memory needs are for a rather large
> dataset. I'm trying to build an auto-complete system for every
>
I'm trying to figure out what my memory needs are for a rather large
dataset. I'm trying to build an auto-complete system for every
city/state/country in the world. I've got a geographic database, and have
set up the DIH to pull the proper data in. There are 2,784,937 documents
which I've formatted
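(For what it's worth, the DIH config is the usual JDBC setup; the driver,
URL, and query below are made up, but it's roughly:

<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/geonames" user="solr" password="..."/>
  <document>
    <entity name="place" query="SELECT id, name, timezone FROM places">
      <field column="id" name="id"/>
      <field column="name" name="name"/>
      <field column="timezone" name="timezone"/>
    </entity>
  </document>
</dataConfig>
)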