: Is there a standard way to automatically update the values returned by
: the methods in SolrInfoMBean? Particularly those concerning revision
: control etc. I'm assuming folks don't just update that by hand every
: commit...
I'm not sure that I understand your question, but it really depends on
: Is there another way to make this happen without making further changes
: to the index? Maybe a bounce of the servlet server?
Lucene only actively tries to delete these files when it needs to write to
the directory for some reason, so unfortunately no.
I've opened a Jira issue with some misc
: The Collocations would be similar to facets except I am also trying to get
: multi-word phrases as well as single terms. So I suppose I could write
Assuming I understand what you want, I would look into using the
ShingleFilter to build up Tokens consisting of N->M tokens, then you could
just fac
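In case it helps, a rough sketch of what that analyzer chain could look like in schema.xml (the type name is made up; ShingleFilterFactory is the factory that emits the N->M word shingles):

<fieldType name="text_shingle" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="true"/>
  </analyzer>
</fieldType>

A copyField into a field of that type would then let you facet on the single terms and the phrases together with facet.field.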
: You could easily write your own query parser (QParserPlugin, in Solr's
: terminology) that internally translates queries like
:
: q = res_url:url AND res_rank:rank
:
: into
: q = res_ranked_url:"rank url"
:
: thus hiding the res_ranked_url field from the user/client.
:
: I'm not
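For reference, a rough sketch of such a plugin against the Solr 1.4 plugin API. The class name, the regex-based handling of the incoming query string, and the lack of analysis on the phrase terms are all simplifying assumptions, not a drop-in implementation:

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.search.QParser;
import org.apache.solr.search.QParserPlugin;

public class RankedUrlQParserPlugin extends QParserPlugin {
  // Matches queries of the form: res_url:<url> AND res_rank:<rank>
  private static final Pattern FORM =
      Pattern.compile("res_url:(\\S+)\\s+AND\\s+res_rank:(\\S+)");

  public void init(NamedList args) {}

  public QParser createParser(String qstr, SolrParams localParams,
                              SolrParams params, SolrQueryRequest req) {
    return new QParser(qstr, localParams, params, req) {
      public Query parse() {
        Matcher m = FORM.matcher(qstr.trim());
        if (!m.matches()) {
          throw new RuntimeException("expected res_url:<url> AND res_rank:<rank>");
        }
        // Rewrite into a phrase query on the combined field: "<rank> <url>"
        PhraseQuery pq = new PhraseQuery();
        pq.add(new Term("res_ranked_url", m.group(2)));
        pq.add(new Term("res_ranked_url", m.group(1)));
        return pq;
      }
    };
  }
}

It would be registered in solrconfig.xml with something like <queryParser name="rankedurl" class="com.example.RankedUrlQParserPlugin"/> and selected per request with defType=rankedurl (names again made up).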
: I want to store a SolrInputDocument to the filesystem until it can be sent
: to the solr server via the solrj client.
out of curiosity: why? ... what is the anticipated delay that leads you to
the expectation that there will be an "until it can be sent to the solr
server" situation?
-Hoss
: I have followed it, but if I query with a diacritic it responds with only
: the non-diacritic form. But I want to query without diacritics and then have Solr
: return both the diacritic and non-diacritic forms :(
What is "it" that you have done? ... can you show us your config?
The diacritic folding issue i
: This doesn't solve my problem. I can't write my javadoc comments
: referencing the Solr API docs located on my local hard drive.
Why not?
: are available online at well-defined URLs. I'd like to have
: Solr API docs available in a similar manner.
patches (to the website) are welcome! ...
: You probably have duplicates (docs on different shards with the same id).
: Deeper paging will detect more of them.
: It does raise the question of whether we should be changing numFound, or
: indicating a separate duplicate count. Duplicates aren't eliminated
random thought (from someone who's nev
> "if this is the expected behaviour is
> there a way to override it?"[1]
>
> [1] me
Using PositionFilterFactory[1] after NGramFilterFactory can yield a parsed query like:
field:fa field:am field:mi field:il field:ly field:fam field:ami field:mil
field:ily
[1]
http://wiki.apache.org/solr/Analyzers
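For example, the query-side chain could look roughly like this (the type name and tokenizer choice are assumptions; the key part is PositionFilterFactory last in the query analyzer, so the grams are ORed instead of turned into a phrase query):

<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="3"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="3"/>
    <filter class="solr.PositionFilterFactory"/>
  </analyzer>
</fieldType>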
"if this is the expected behaviour is there a way to override it?"[1]
[1] me
On Thu, Dec 31, 2009 at 10:13 AM, AHMET ARSLAN wrote:
>> Hello *, I'm trying to make an index
>> to support spelling errors/fuzzy
>> matching, I've indexed my document titles with
>> NGramFilterFactory
>> minGramSize=2 ma
> Hello *, I'm trying to make an index
> to support spelling errors/fuzzy
> matching, I've indexed my document titles with
> NGramFilterFactory
> minGramSize=2 maxGramSize=3, using the analysis page I can
> see the
> common grams match between the indexed value and the query
> value,
> however when I
> Hi,
>
> I'm new to Solr but so far I think it's great. I've
> spent 2 weeks reading
> through the wiki and mailing list info.
>
> I have a use case and I'm not sure what the best way is to
> implement it. I
> am keeping track of people's calendar schedules in a really
> simple way: each
> user
Hello *, I'm trying to make an index to support spelling errors/fuzzy
matching, I've indexed my document titles with NGramFilterFactory
minGramSize=2 maxGramSize=3, using the analysis page I can see the
common grams match between the indexed value and the query value,
however when I try to do a query
Thanks Erick,
the null problem was introduced when I copied the example below; now I
have the nulls excluded using sortMissingLast="true". In 1.5, using
the suggested config below, I'm still not seeing the desired behavior.
It seems to me that the default behavior of the Java Collator usi
have you tried setting sortMissingLast="true" in your schema.xml? Something
like...
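For example, on the field type you sort on (the type name here is just a guess at what you're using):

<fieldType name="string" class="solr.StrField" sortMissingLast="true" omitNorms="true"/>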
or perhaps in your individual field definition instead. The schema.xml
examples have additional information that you really should scan at
least
HTH
Erick
On Thu, Dec 31, 2009 at 8:53 AM, Joel Nylund wrote
My first recommendation would be to do this client-side: just search
again with a new query if there are zero results returned.
However, you can accomplish what you're after with a custom
QueryComponent: subclass the default one, call super, check the
count, and if it's zero, adjust the query and c
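A very rough sketch of that second approach against the Solr 1.4 component API; the class name and the fallback query are placeholders, and real logic would relax the user's query rather than match everything:

import java.io.IOException;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.solr.handler.component.QueryComponent;
import org.apache.solr.handler.component.ResponseBuilder;

public class FallbackQueryComponent extends QueryComponent {
  public void process(ResponseBuilder rb) throws IOException {
    super.process(rb);
    // If the original query matched nothing, broaden it and search again.
    if (rb.getResults() != null && rb.getResults().docList.matches() == 0) {
      rb.setQuery(fallbackQuery(rb));
      super.process(rb);
    }
  }

  // Placeholder fallback: match all documents.
  private Query fallbackQuery(ResponseBuilder rb) {
    return new MatchAllDocsQuery();
  }
}

It would be wired in by declaring it as the "query" search component in solrconfig.xml, overriding the default one.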
Hi,
I'm new to Solr but so far I think it's great. I've spent 2 weeks reading
through the wiki and mailing list info.
I have a use case and I'm not sure what the best way is to implement it. I
am keeping track of people's calendar schedules in a really simple way: each
user can log in and input a
What serialization would you wish to use?
You can use Java serialization, or SolrJ can serialize it for you as XML
or in the javabin format
(org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec)
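A minimal sketch of the plain Java serialization route, assuming SolrInputDocument is Serializable (as suggested above) and ignoring error handling; the class and method names are made up:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import org.apache.solr.common.SolrInputDocument;

public class DocSpooler {
  // Write the document to disk until it can be sent to Solr.
  public static void spool(SolrInputDocument doc, File file) throws IOException {
    ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file));
    try {
      out.writeObject(doc);
    } finally {
      out.close();
    }
  }

  // Read it back later and hand it to SolrServer.add(...).
  public static SolrInputDocument load(File file)
      throws IOException, ClassNotFoundException {
    ObjectInputStream in = new ObjectInputStream(new FileInputStream(file));
    try {
      return (SolrInputDocument) in.readObject();
    } finally {
      in.close();
    }
  }
}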
On Thu, Dec 31, 2009 at 6:55 AM, Phillip Rhodes wrote:
> I want to store a SolrInputDocument to the f
On Thu, Dec 31, 2009 at 2:29 AM, johnson hong
wrote:
>
> Hi, all.
> I found a problem with distributed search.
> When I use "?q=keyword&start=0&rows=20" to query across
> the distributed search, it returns numFound="181"; then when I
> change the start param from 0 to 100, it returns numFound="13
It could be this bug, fixed in trunk:
* SOLR-1595: StreamingUpdateSolrServer used the platform default character
set when streaming updates, rather than using UTF-8 as the HTTP headers
indicated, leading to an encoding mismatch. (hossman, yonik)
Could you try a recent nightly build (or build
Hello All,
I don't have complete knowledge of the query execution workflow.
I have an issue. One approach I had was to edit all the user-submitted
queries in the query parser part (for every request), but later it was
suggested that I do query modification only if the user-submitted query
has no re
Hi,
After some further investigation, it turns out that null fields were
sorting first, so if the title was null it was coming up first. This
is true even with 1.5 and collatedROOT. (I tried on last night's build.)
So let me change my question: how do I make items with null values
sort las
I'm not seeing this effect with the example setup:
http://localhost:8983/solr/select?defType=dismax&qf=name&q=ipod&debugQuery=true&bq=inStock:true
What do you get as the parsedquery when using debugQuery=true?
What's your full request to Solr?
There is some odd logic in the dismax pars
It's possible, but requires a custom DirectoryFactory implementation.
There isn't a built-in factory to construct a RAMDirectory. You wire
it into solrconfig.xml this way:
<directoryFactory class="[fully.qualified.classname]"/>
On Dec 31, 2009, at 5:06 AM, dipti khullar wrote:
Hi
Can somebody
I'm using Solr 1.4 on Tomcat 5.0.28, with the client
StreamingUpdateSolrServer with 10 threads and XML communication via the
POST method.
Is there a way to avoid this error (data is lost)?
And is StreamingUpdateSolrServer reliable?
GRAVE: org.apache.solr.common.SolrException: Invalid CRLF
at org.a
Hi
Can somebody let me know if it's possible to configure RAMDirectory from
solrconfig.xml? Although it's clearly mentioned in
https://issues.apache.org/jira/browse/SOLR-465 by Mark that he has worked
on it, I still couldn't find any such property in the config file in the
latest Solr 1.4 download.
Ma
Hello:
I have a basic question:
I am using dismax, Solr 1.4. Let's say I have a query where q=sometext
and it returns 50 results. But let's say now I want to rank all
those (say 10) documents higher where field:abc.
Note I just want to rank them higher based on the field value and not