What's the best way to retrieve the unique key field from SolrJ? From
what I can tell, it seems like I would need to retrieve the schema and
then parse it and get it from there, or am I missing something?
Thanks,
Grant
Thanks Erick.
I have tried with WhitespaceAnalyzer as you said.
-> In my schema.xml I have removed the filter class
"solr.WordDelimiterFilterFactory" for both indexing and querying.
-> If I remove this, the special-character search works fine. But I am
unable to search for this scenario...
ex
right now you need to know the unique key name to get it...
I don't think we have any easy way to get that besides parsing the
schema
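Until SolrJ exposes the uniqueKey directly, the workaround hinted at above is to fetch schema.xml (for example via Solr's `/admin/file?file=schema.xml` handler) and pull the `<uniqueKey>` element out of it yourself. A minimal sketch, assuming the schema is already in hand as a string; the class and method names are made up for illustration:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class UniqueKeyFromSchema {
    // Parse schema.xml and return the text of its <uniqueKey> element.
    static String uniqueKey(String schemaXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(schemaXml)));
        return doc.getElementsByTagName("uniqueKey")
                  .item(0).getTextContent().trim();
    }

    public static void main(String[] args) throws Exception {
        String schema = "<schema name=\"example\">"
                      + "<uniqueKey>id</uniqueKey></schema>";
        System.out.println(uniqueKey(schema)); // prints "id"
    }
}
```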
With debugQuery=true, the uniqueKey is added to the 'explain' info:
...
this gets parsed into the QueryResults _explainMap and _docIdMap, but I'm not
Hmmm, I should have just mandated that the id field be called "id"
from the start :-)
On Feb 11, 2008 5:51 PM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> What's the best way to retrieve the unique key field from SolrJ? From
> what I can tell, it seems like I would need to retrieve the schema an
On 2/11/08 8:42 PM, "Chris Hostetter" <[EMAIL PROTECTED]> wrote:
> if you want to worry about smart load balancing, try to load balance based
> on the nature of the URL query string ... make your load balancer pick
> a slave by hashing on the "q" param, for example.
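Hashing on the "q" param, as suggested above, can also be done client-side before handing the URL to SolrJ, so repeat queries hit the same slave's queryResultCache. A toy sketch; the class name and slave URLs are made up:

```java
import java.util.List;

public class SlavePicker {
    private final List<String> slaveUrls;

    public SlavePicker(List<String> slaveUrls) {
        this.slaveUrls = slaveUrls;
    }

    // Route identical "q" strings to the same slave so that slave's
    // queryResultCache sees all the repeat traffic for that query.
    public String pick(String q) {
        int idx = Math.floorMod(q.hashCode(), slaveUrls.size());
        return slaveUrls.get(idx);
    }
}
```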
This is very effective. We used
: I have a quick question about using solrj to connect to multiple slaves.
: My application is deployed on multiple boxes that have to talk to
: multiple solr slaves. In order to take advantage of the queryResult
: cache, each request from one of my app boxes should be redirected to the
: same so
: When I start it again, what will happen in the "data" folder? Any data
: refreshing, adding, deleting, etc.?
: On every restart of the Solr server, what happens to the indexed data? Does
: it remain unchanged without any action?
if you start up Solr, and there is already a "data" directory containing
: essentially, this is:
: +north:[* TO nnn] +south:[sss TO *] +east:[* TO eee] +west:[www TO *]
: Would this be better as four individual filters?
it depends on the granularity you expect clients to query with ... if
clients can get really granular, then the odds of reuse are lower, so the
ad
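The combined filter quoted above can be assembled mechanically; a small helper sketch (the class and method names are hypothetical, and nnn/sss/eee/www are the same placeholders used in the thread):

```java
public class BoundsQuery {
    // Build the single combined range filter quoted in the thread,
    // as one string suitable for an fq parameter.
    static String boundsFilter(String n, String s, String e, String w) {
        return "+north:[* TO " + n + "] +south:[" + s + " TO *]"
             + " +east:[* TO " + e + "] +west:[" + w + " TO *]";
    }

    public static void main(String[] args) {
        System.out.println(boundsFilter("nnn", "sss", "eee", "www"));
        // prints "+north:[* TO nnn] +south:[sss TO *] +east:[* TO eee] +west:[www TO *]"
    }
}
```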
Is it not possible to make a grid of your boxes? It seems like this would be
a more efficient query:
grid:N100_S50_E250_W412
This is how GIS systems work, right?
Lance
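Lance's grid idea amounts to snapping coordinates to a coarse cell and indexing one token per cell, so the filter becomes a cheap term query. A sketch that just mimics the naming in his example (the token scheme and cell size are illustrative, not an established convention):

```java
public class GridCell {
    // Format a cell's edges into one indexable token, in the
    // style of the grid:N100_S50_E250_W412 example above.
    static String token(int n, int s, int e, int w) {
        return "N" + n + "_S" + s + "_E" + e + "_W" + w;
    }

    // Snap a point to the grid cell (of side `cell` units) containing it.
    static String cellFor(double lat, double lon, int cell) {
        int south = (int) Math.floor(lat / cell) * cell;
        int west  = (int) Math.floor(lon / cell) * cell;
        return token(south + cell, south, west + cell, west);
    }

    public static void main(String[] args) {
        System.out.println(cellFor(51.5, -0.1, 10)); // prints "N60_S50_E0_W-10"
    }
}
```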
-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Sent: Monday, February 11, 2008 6:13 PM
To:
On Feb 11, 2008 9:13 PM, Ryan McKinley <[EMAIL PROTECTED]> wrote:
> >>
> >> Would this be better as four individual filters?
> >
> > Only if there were likely to occur again in combination with different
> > constraints.
> > My guess would be no.
>
> this is because the filter could not be cached?
Would this be better as four individual filters?
Only if there were likely to occur again in combination with different
constraints.
My guess would be no.
this is because the filter could not be cached?
Since I know it should not be cached, is there any way to make sure it does
not purge useful
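One lever worth knowing about: later Solr releases let you mark an individual fq as non-caching with the `cache` local param, so a one-off filter never enters (or evicts entries from) the filterCache. A sketch of the request parameter only; the field name is made up:

```
q=*:*&fq={!cache=false}north:[* TO 100]
```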
On Feb 11, 2008 8:51 PM, Ryan McKinley <[EMAIL PROTECTED]> wrote:
> Hello-
>
> I'm working on a SearchComponent that should limit results to entries
> within a geographic range. I would love some feedback to make sure I'm
> not building silly queries and/or can change them to be better. I have
>
Hello-
I'm working on a SearchComponent that should limit results to entries
within a geographic range. I would love some feedback to make sure I'm
not building silly queries and/or can change them to be better. I have
four fields:
The component looks for a "bounds" argument a
: to. For example, if I have a field in a document such as "username" which is
: a string that I'll do wild-card searches on, Solr will return document
: matches but no highlight data for that field. The end-goal is to know which
FYI: this is a known bug that results from a "safety" net in the
S
if you just want commits to happen on a regular frequency, take a look at
the autoCommit options.
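The autoCommit options mentioned above live in solrconfig.xml; a typical stanza looks like the following (the thresholds here are illustrative, not recommendations):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit after this many added documents ... -->
    <maxDocs>10000</maxDocs>
    <!-- ... or after this many milliseconds, whichever comes first -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```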
as for the specific errors you are getting, I don't know enough python to
understand them, but it may just be that your commits are taking too long
and your client is timing out on waiting for the
Hello,
I'm looking for some configuration guidance to help improve
performance of my application, which tends to do a lot more indexing
than searching.
At present, it needs to index around two documents / sec - a document
being the stripped content of a webpage. However, performance was so
: Another option is to add it to the responseHeader. Or it could be a quick
: add to the LukeRH. The former has the advantage that we wouldn't have to make
adding the info to LukeRequestHandler makes sense.
Honestly: I can't think of a single use case where client code would care
about what
Lance Norskog wrote:
Is it not possible to make a grid of your boxes? It seems like this would be
a more efficient query:
grid:N100_S50_E250_W412
This is how GIS systems work, right?
something like that... I was just checking if I could get away with
range queries for now... I'll also
Another option is to add it to the responseHeader. Or it could be
a quick add to the LukeRH. The former has the advantage that we
wouldn't have to make extra calls at the cost of sending an extra
string w/ every message. The latter would work by asking for it up
front and then saving
thoughts on requiring that for solrj? perhaps in 2.0? Not suggesting
it is a good idea (yet)... but we may want to consider it.
Yonik Seeley wrote:
Hmmm, I should have just mandated that the id field be called "id"
from the start :-)
On Feb 11, 2008 5:51 PM, Grant Ingersoll <[EMAIL PROTECTED]
It's based on SOLR 1.2, however it's customized for our application to do
this. I'm only mentioning that it's possible by changing the
DirectUpdateHandler2 to have multiple indexes.
pb
-Original Message-
From: Niveen Nagy [mailto:[EMAIL PROTECTED]
Sent: Sunday, February 10, 2008 1:47 AM