What are the general schools of thought on how to update an index?
I have a medium volume OLTP SaaS system. I think my options are:
1) Run the DIH delta-query every few minutes to pull in changes
2) Use "Update" events on the app to asynchronously create a bean
that represents my so
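For option 1, a rough sketch of what the DIH delta configuration looks like in data-config.xml (the entity, table, and column names here are hypothetical, not from the original question):

```xml
<!-- data-config.xml: illustrative entity with a delta query.
     Table/column names (item, last_modified) are made up. -->
<entity name="item" pk="id"
        query="SELECT id, name FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, name FROM item
                          WHERE id = '${dataimporter.delta.id}'">
</entity>
```

The delta run would then be triggered periodically (e.g. from cron) with a request like /dataimport?command=delta-import.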
I have a similar scenario: we have one primary master and one backup
master. Load switching happens via a BIG-IP load balancer. When the
primary master goes down, the backup master becomes the active primary master.
We have added a health check API in Solr, and when the primary master is back t
You can add it yourself in admin-extra.html
Ephraim Ofir
-Original Message-
From: Nico Luna [mailto:nicolaslun...@gmail.com]
Sent: Friday, November 11, 2011 7:57 PM
To: solr-user@lucene.apache.org
Subject: How to read values from dataimport.properties in a production
environment
I'm tr
I am new to solr in general and trying to get a handle on the memory
requirements for caching. Specifically I am looking at the filterCache right
now. The documentation on size setting seems to indicate that it is the number
of values to be cached. Did I read that correctly, or is it really
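For reference, the filterCache is configured in solrconfig.xml. As I understand it, size is the maximum number of cached filter entries (one per unique fq), not the number of values; each entry can be up to roughly maxDoc/8 bytes when stored as a bitset. The numbers below are illustrative only:

```xml
<!-- solrconfig.xml: size = max number of cached filter entries.
     A full bitset entry costs about maxDoc/8 bytes. -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```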
On Thu, Nov 17, 2011 at 2:59 PM, Brian Lamb
wrote:
> http://localhost:8983/solr/mycore/search/?q=test {!boost b=2}
>
> it is still really slow. Is there a different approach I should be taking?
I just tried something similar to this (a non-boosted query vs. a
simple boosted query)
on a 10M do
Any ideas on this one?
On Thu, Nov 17, 2011 at 3:53 PM, Brian Lamb
wrote:
> Sorry, the query is actually:
>
> http://localhost:8983/solr/mycore/search/?q=test{!boost
> b=product(sum(log(sum(myfield,1)),1),recip(ms(NOW,mydate_field),3.16e-11,1,8))}&start=&sort=score+desc,mydate_field+desc&wt=xslt&
> You're right:
>
> public SolrQueryParser(IndexSchema schema, String
> defaultField) {
> ...
> setLowercaseExpandedTerms(false);
> ...
> }
Please note that lowercaseExpandedTerms uses String.toLowerCase() (with the
default Locale), which is a Locale-sensitive operation.
In Lucene AnalyzingQueryP
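To illustrate the Locale sensitivity mentioned above (a standalone sketch, not Solr code): under the Turkish locale, toLowerCase() maps 'I' to the dotless 'ı' (U+0131), so locale-dependent lowercasing of query terms can produce tokens that never match what the analyzer indexed.

```java
import java.util.Locale;

public class LowercaseLocaleDemo {
    public static void main(String[] args) {
        String term = "FIND";
        // Locale-insensitive lowercasing, usually what you want for query terms
        System.out.println(term.toLowerCase(Locale.ROOT));            // find
        // Turkish locale: 'I' lowercases to dotless '\u0131'
        System.out.println(term.toLowerCase(new Locale("tr", "TR"))); // fınd
    }
}
```

This is why lowercasing expanded terms with the JVM default Locale is risky on servers running under non-English locales.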
You're right:
public SolrQueryParser(IndexSchema schema, String defaultField) {
...
setLowercaseExpandedTerms(false);
...
}
OK, thanks for pointing that out.
On Fri, Nov 18, 2011 at 4:12 PM, Ahmet Arslan wrote:
> > Actually I have just checked the source code of Lucene's
> > QueryParser and
> > lowerca
> Actually I have just checked the source code of Lucene's
> QueryParser and
> lowercaseExpandedTerms there is set to true by default
> (version 3.4). The
> code there does lower-casing by default. So in that sense I
> don't need to
> do anything in the client code. Is something wrong here?
But So
OK.
Actually I have just checked the source code of Lucene's QueryParser and
lowercaseExpandedTerms there is set to true by default (version 3.4). The
code there does lower-casing by default. So in that sense I don't need to
do anything in the client code. Is something wrong here?
On Fri, Nov 18,
> Hi Ahmet,
>
> Thanks for the link.
>
> I'm a bit puzzled with the explanation found there
> regarding lower casing:
>
> These queries are case-insensitive anyway because
> QueryParser makes them
> lowercase.
>
> that's exactly what I want to achieve, but somehow the
> queries *are*
> case-sen
The main one is that you can get an explosion in the number of terms,
depending on your input, especially if you have things that aren't
regular text. Imagine
partone-1
partone-2
partone-3
parttwo-1
parttwo-2
parttwo-3
if catenateAll is set to 0, you'd get 5 tokens here. If it was set to
1 you'd
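For context, catenateAll is a flag on WordDelimiterFilterFactory; a minimal analyzer sketch (the field type name is made up) showing where it goes:

```xml
<!-- schema.xml: catenateAll="1" would additionally emit catenated
     tokens like partone1, parttwo3, growing the term dictionary -->
<fieldType name="text_wdf" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            catenateAll="0"/>
  </analyzer>
</fieldType>
```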
Hi,
I know it's too early for this given that the NewSolrCloud for Solr 4 is
still in development, but what is interesting for those of us anxiously
awaiting NewSolrCloud 4 is to understand how it compares to existing
cloud-like search such as ElasticSearch (which I only recently learned
about).
Hi Ahmet,
Thanks for the link.
I'm a bit puzzled with the explanation found there regarding lower casing:
These queries are case-insensitive anyway because QueryParser makes them
lowercase.
that's exactly what I want to achieve, but somehow the queries *are*
case-sensitive. Probably I should pl
> *Can fileSize be faceted?* I tried to
> facet it, but fileSize is of type
> string and cannot be faceted.
> I want to facet my doc and pdf files according to their
> size. I can
> calculate the file size, but it is of type string.
> What should I do in order to achieve that?
> Thanks in advance
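One approach (a sketch, assuming you are able to reindex): store the size in a numeric field so it can be faceted and range-faceted. The field name is illustrative; note that changing the type of an already-indexed field without wiping and reindexing will leave unreadable values in the index.

```xml
<!-- schema.xml: numeric field for faceting/range faceting.
     Existing string-indexed data must be deleted and reindexed
     after this type change. -->
<field name="fileSize" type="slong" indexed="true" stored="true"/>
```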
> I have a multivalued field say "MulField" in my index that
> have values in a document like
>
> 1
>
> Auto Mobiles
> Toyota Corolla
>
>
> Now let's say I specify a search criterion as
>
> +MulField:Mobiles +MulField:Toyota
>
> now my question is: is it possible that this document sh
> Here is one puzzle I couldn't yet find a key for:
>
> for the wild-card query:
>
> *ocvd
>
> SOLR 3.4 returns hits. But for
>
> *OCVD
>
> it doesn't
This is a FAQ. Please see
http://wiki.apache.org/lucene-java/LuceneFAQ#Are_Wildcard.2C_Prefix.2C_and_Fuzzy_queries_case_sensitive.3F
I'm new to Solr and just got things working.
I can query my index & retrieve JSON results via: HTTP GET using: wt=json
and q=num_cpu parameters:
e.g.:
http://127.0.0.1:8080/solr/select?indent=on&version=2.2&q=num_cpu%3A16&fq=&start=0&rows=10&fl=*%2Cscore&qt=&wt=json&explainOther=&debugQuery=on
Wh
Hello,
The parsedQuery is displayed as follows:
parsedquery=+(DisjunctionMaxQuery((title:responsable^4.0 |
keywords:responsable^3.0 | organizationName:responsable |
location:responsable | formattedDescription:responsable^2.0 |
nafCodeText:responsable^2.0 | jobCodeText:responsable^3.0 |
categoryPay
Hello,
Here is one puzzle I couldn't yet find a key for:
for the wild-card query:
*ocvd
SOLR 3.4 returns hits. But for
*OCVD
it doesn't
On the indexing side, the two following tokenizers/filters are defined:
On the query side:
SOLR analysis tool shows, that OCVD gets lower-cased to ocvd
Hi
I have a multivalued field say "MulField" in my index that have values in a
document like
1
Auto Mobiles
Toyota Corolla
Now let's say I specify a search criterion as
+MulField:Mobiles +MulField:Toyota
now my question is: is it possible that this document should not appear in the
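If the goal is to keep a match from spanning two different values of a multiValued field, one common approach (a sketch, not necessarily what the poster needs) is a large positionIncrementGap combined with a phrase/proximity query, since two separate term clauses like +MulField:Mobiles +MulField:Toyota will match as long as each term occurs anywhere in the field:

```xml
<!-- schema.xml: the gap inserts 100 positions between successive
     values, so a phrase query with slop < 100 cannot cross a
     value boundary -->
<fieldType name="text_gap" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```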
Environment: Solr 1.4 on Windows/MS SQL Server
A write lock is getting created whenever I try to do a full import of
documents using DIH. The logs say "Creating a connection with the database."
and the process does not go forward (it never gets a database connection). So
the indexes are no
When I set my fileSize field to type string, it shows the error I posted above.
Then I changed it to slong and the results were severe.. here is the log:
18 Nov, 2011 3:00:54 PM
org.apache.solr.response.BinaryResponseWriter$Resolver getDoc
WARNING: Error reading a field from document : SolrDocument[{}]
Dear erolagnab,
is that your code in the Solr server?
Which class should I put it in?
--
View this message in context:
http://lucene.472066.n3.nabble.com/fieldCache-problem-OOM-exception-tp3067057p3517780.html
Sent from the Solr - User mailing list archive at Nabble.com.
Definitely worked for me, with a classic full text search on "ipod" and
such.
Changing the lower bound changed the number of results.
Follow Chris's advice, and give more details.
John wrote:
Doesn't seem to work.
I thought that filter queries work before the search is performed, not
after...