FWIW, I just implemented a system that stores the index in SOLR but the
records in a partitioned set of MySQL databases. The only stored field in
SOLR is an ID field, which is the key to a table in the MySQL database. I
had to modify SOLR a tiny bit and write a "database" search component so that
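The ID-only pattern described above can be sketched roughly like this. The Solr side and the partitioned MySQL databases are stubbed with in-memory maps, and all names are illustrative; a real implementation would use SolrJ for the query and JDBC against each shard.

```java
import java.util.*;

// Sketch of the pattern above: the index stores only an ID per hit; the
// full record lives in a "MySQL" shard chosen by hashing the ID. Both the
// search results and the databases are stand-ins (plain maps).
public class IdOnlyLookup {
    // One map per "shard", standing in for partitioned MySQL databases.
    static List<Map<Integer, String>> shards =
        Arrays.asList(new HashMap<>(), new HashMap<>());

    static int shardFor(int id) {
        return id % shards.size();   // simple hash partitioning
    }

    static void store(int id, String record) {
        shards.get(shardFor(id)).put(id, record);
    }

    // The "database" search component: resolve hit IDs to full records.
    static List<String> fetch(List<Integer> hitIds) {
        List<String> records = new ArrayList<>();
        for (int id : hitIds) {
            records.add(shards.get(shardFor(id)).get(id));
        }
        return records;
    }

    public static void main(String[] args) {
        store(1, "record one");
        store(2, "record two");
        // Pretend the index returned IDs 2 and 1 for some query.
        System.out.println(fetch(Arrays.asList(2, 1)));
    }
}
```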
> > storing the
> > missing value? As in Field.Store.YES? As opposed to
> Field.Index.###?
> > Because there's no need to Store this value.
> >
> > Erick
> >
> > On Thu, Jan 21, 2010 at 11:22 PM, Dallan Quass
> wrote:
> >
> >> Hi,
> >>
Hi,
I want to issue queries where queried fields have a specified value or are
"missing". I know that I can query missing values using a negated
full-range query, but it doesn't seem like that's very efficient (the fields
in question have a lot of possible values). So I've opted to store special
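The sentinel idea this message is heading toward can be sketched as follows. The inverted index is simulated with a plain map, and the field name and sentinel token are made up; the point is that "field is missing" becomes an ordinary single-term match instead of a negated full-range query.

```java
import java.util.*;

// Sketch of the "special missing value" approach: when a document has no
// value for a field, index a sentinel token instead, so matching "missing"
// is a cheap equality lookup rather than a negated full-range query.
// The index is simulated with a map; names are illustrative only.
public class MissingSentinel {
    static final String MISSING = "__missing__"; // made-up sentinel token

    // term -> doc ids: one posting list per value of the "state" field
    static Map<String, List<Integer>> index = new HashMap<>();

    static void addDoc(int docId, String state) {
        String term = (state == null) ? MISSING : state;
        index.computeIfAbsent(term, k -> new ArrayList<>()).add(docId);
    }

    // "state is missing" is now a single-term lookup.
    static List<Integer> docsWithMissingState() {
        return index.getOrDefault(MISSING, Collections.emptyList());
    }

    public static void main(String[] args) {
        addDoc(1, "IL");
        addDoc(2, null);      // no state: indexed under the sentinel
        addDoc(3, "NY");
        System.out.println(docsWithMissingState()); // [2]
    }
}
```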
I want to use distributed search with some search components that I would
like to execute only on the main server, not on the shards, because they
reference some large in-memory lookup tables. After the search components
get done processing the original query, the query may contain SpanNearQueries
on
> between
> solr and OS
>
> On Mon, Aug 11, 2008 at 10:52 AM, Dallan Quass
> <[EMAIL PROTECTED]> wrote:
> > Sorry for the newbie question. When running solr under tomcat I
> > notice that the amount of memory tomcat uses increases over
> time until
> >
Sorry for the newbie question. When running solr under tomcat I notice that
the amount of memory tomcat uses increases over time until it reaches the
maximum limit set (with the Xms and Xmx switches) for the jvm.
Is it better to give all available physical memory to the jvm, or
to alloca
> Grant Ingersoll wrote:
>
> How often does your collection change or get updated?
>
> You could also have a slight alternative, which is to create
> a real small and simple Lucene index that contains your
> translations and then do it pre-indexing. The code for such
> a searcher is quite sim
Hi Grant,
> Can you describe your indexing process a bit more? Do you
> just have one or two tokens that you have "translate" or is
> it that you are going to query on every token in your text?
> I just don't see how that will perform at all to look up
> every token in some index, so maybe i
> Dallas, got money to spend on solving this problem? I
> believe this is something that tools like LingPipe can solve
> through language model training and named entity extraction.
Hi Otis,
Thank you for your reply. I'm familiar with tools like LingPipe, but this
problem is actually *much* s
> this may sound a bit too KISS - but another approach could be
> based on synonyms, i.e. if the number of abbreviations is
> limited and defined ("All US States"), you can simply define
> complete state name for each abbreviation, this way a
> "Chicago, IL" will be "translated" (...) in "Chicag
I have a situation where it would be beneficial to issue queries in a filter
that is called during analysis. In a nutshell, I have an index of places
that includes possible abbreviations. And I want to query this index during
analysis to convert user-entered places to "standardized" places. So i
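A minimal sketch of the standardization step described above, with the place index reduced to an in-memory abbreviation map. All names and entries here are illustrative; the real version would query the Lucene place index instead of a map.

```java
import java.util.*;

// Sketch of standardizing user-entered places during analysis: each token
// is looked up in a place "index" (here a plain map of abbreviations to
// canonical names) and replaced by its standardized form when found.
// The entries are illustrative; the real lookup would hit a Lucene index.
public class PlaceStandardizer {
    static final Map<String, String> PLACES = Map.of(
        "il", "Illinois",
        "ny", "New York"
    );

    static List<String> standardize(List<String> tokens) {
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            // fall back to the original token when no standard form is known
            out.add(PLACES.getOrDefault(t.toLowerCase(Locale.ROOT), t));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(standardize(Arrays.asList("Chicago", "IL")));
        // prints [Chicago, Illinois]
    }
}
```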
If I'm loading say 80-90% of the fields 80-90% of the time, and I don't have
any large compressed text fields, is it safe to say that I'm probably better
off to turn off lazy field loading?
Thanks,
--dallan