Erick,
Thanks for the response.
Kindly have a look at my sample query,
select?fl=city,$score&q=*:*&fq={!lucene q.op=OR df=city v=$cit}&cit=Chennai&sort=$score desc&score=norm($la,value,10)&la=8&b=1&c=2
Here, score=norm($la,value,10), where norm is a custom function, and if I change la th
The admin/analysis page is your friend. Taking some time to
get acquainted with that page will save you lots and lots and
lots of time. In this case, you'd have seen that your input
is actually tokenized as (1999/99), parentheses and all as a
_single_ token, so of course searching for 1999/99 would
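For illustration, a field type along these lines (a sketch; assuming the field was defined with KeywordTokenizerFactory, which is one way to get exactly this single-token behavior) would keep "(1999/99)", parentheses and all, as one term:

<fieldType name="text_keyword" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>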
Hi Joel,
Thank you for the reply. I created
https://issues.apache.org/jira/browse/SOLR-5773 for this new feature. I was
looking at the getBoostDocs() function, and if I understand it correctly, it
iterates through the boosted set that is passed into the function and then
iterates over the boosted Se
I'm trying to query XML documents stored in Riak 2.0, which has integrated
Solr. My XML looks like this.
So a search in Riak might look something like this:
q=MainData.Info.Info@name:Bob
So let's say I want to match all documents where the name="Bob" and
city="Cincinnati", for
Hi Ahmet/Erick,
I tried escaping as well, but still no luck.
The title I am looking for is - ARABIAN NIGHTS #01 (1999/99)
I figured out that if I pass the query as *1999/99* (i.e., with an asterisk not only
at the end but at the beginning as well), it works.
The problem is the parentheses. I can change my field type
Hi David,
Just read through your comments on the jira. Feel free to create a jira for
this. The way this currently works is that if the elevated document is not
the selected group head, then both the elevated document and the group head
are in the result set. What you are suggesting is that the el
On 2/25/2014 4:30 PM, KNitin wrote:
Jeff: Thanks. I have tried reload before, but it is not reliable (at least
in 4.3.1). A few cores get initialized and a few don't (they show as just
recovering or down), and hence I had to move away from it. Is it a known issue
in 4.3.1?
With Solr 4.3.1, you are running
Erick: My autocommit is set to trigger every 30 seconds with
openSearcher=false. Autocommit for soft commits is disabled.
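For reference, that setup corresponds to a solrconfig.xml along these lines (a sketch; the elements are standard, the values are taken from the description above):

<autoCommit>
  <maxTime>30000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<!-- no autoSoftCommit block, i.e. soft commits disabled -->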
On Tue, Feb 25, 2014 at 3:30 PM, KNitin wrote:
> Jeff: Thanks. I have tried reload before, but it is not reliable (at least
> in 4.3.1). A few cores get initialized and
Jeff: Thanks. I have tried reload before, but it is not reliable (at least
in 4.3.1). A few cores get initialized and a few don't (they show as just
recovering or down), and hence I had to move away from it. Is it a known issue
in 4.3.1?
Shawn,Otis,Erick
Yes, I have reviewed the page before and have given 1
Thanks Hoss, that makes sense.
Anyway, I like the new paradigm better ... it allows for more
intelligent elevation control.
Cheers,
L
On 25/02/2014 23:26, Chris Hostetter wrote:
: What seems to be happening is that excludeIds or elevateIds ignores
: what's in elevate.xml. I would hav
: What seems to be happening is that excludeIds or elevateIds ignores
: what's in elevate.xml. I would have expected (hoped) that it would layer on
: top of that, which makes a bit more sense I think.
That's not how it's implemented -- I believe Joel implemented it this way
intentionally because
Hit the send button too fast ...
What seems to be happening is that excludeIds or elevateIds ignores
what's in elevate.xml. I would have expected (hoped) that it would layer
on top of that, which makes a bit more sense I think.
Thanks,
Lajos
On 25/02/2014 22:58, Lajos wrote:
Guys,
I
Guys,
I've been testing out https://issues.apache.org/jira/browse/SOLR-5541 on
4.7RC4.
I previously had an elevate.xml that elevated 3 documents for a specific
query. My understanding is that I could, at runtime, exclude one of
those. So I tried that like this:
http://localhost:8080/solr/e
Hi,
By escaping I mean this: q=title_autocomplete:1999\/99* It is
different from URL encoding.
http://lucene.apache.org/core/4_6_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Escaping_Special_Characters
If the prefix query parser didn't return what you want, then
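For the archive, the two forms side by side (a sketch): the backslash escape is for the query parser; URL encoding is applied on top of that when the query is sent over HTTP:

q=title_autocomplete:1999\/99*
q=title_autocomplete%3A1999%5C%2F99*   (the same query after URL encoding)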
What does it say happens on your admin/analysis page
for that field?
And did you by any chance change your schema without
reindexing everything?
Also, try the TermsComponent to see what tokens are actually
_in_ your index. Schema-browser from the admin page can
help here too.
Best,
Erick
On Tue
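One concrete way to inspect the indexed tokens is a TermsComponent request like this (a sketch; it assumes the stock /terms handler is enabled and the core is named collection1):

http://localhost:8983/solr/collection1/terms?terms.fl=title_autocomplete&terms.prefix=1999&terms.limit=20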
Gopal: I'm glad somebody noticed that blog!
Joel:
For bulk loads it's a Good Thing to lengthen out
your soft autocommit interval. A lot. Every second
poor Solr is trying to open up a new searcher while
you're throwing lots of documents at it. That's what's
generating the "too many searchers" probl
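Concretely, that means stretching autoSoftCommit in solrconfig.xml for the duration of the load (a sketch; the ten-minute value is only illustrative):

<autoSoftCommit>
  <maxTime>600000</maxTime> <!-- 10 minutes instead of every second -->
</autoSoftCommit>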
Hi Ahmet,
Thanks for your reply.
Yes. I pass my query this way -> q=title_autocomplete:1999%2f99
I tried your way too. But no luck. :(
This blog by Erick will help you understand the different commit options and
transaction logs, and it provides some recommendations for the ingestion
process.
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
On Tue, Feb 25, 2014 at 11:40 AM, Furkan KA
Hi Kashish,
What happens when you use this: q={!prefix f=title_autocomplete}1999/99
I suspect the '/' character is a special query parser character, so it needs
to be escaped.
Ahmet
On Tuesday, February 25, 2014 9:55 PM, Kashish
wrote:
Hi,
I have a very weird problem. The wildcard searc
Hi,
I have a very weird problem. The wildcard search works fine for all
scenarios but one: it doesn't seem to give any results for the query 1999/99*. I
checked the debug query and it's formed perfectly.
title_autocomplete:1999/99*
title_autocomplete:1999/99*
(+title_autocomplete:1999/99* ())/no_coord
+
Hi;
You should read here:
http://wiki.apache.org/solr/FAQ#What_does_.22exceeded_limit_of_maxWarmingSearchers.3DX.22_mean.3F
On the other hand, do you have 4 ZooKeeper instances as a quorum?
Thanks;
Furkan KAMACI
2014-02-25 20:31 GMT+02:00 Joel Cohen :
> Hi all,
>
> I'm working with Solr 4.6.1
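The limit that FAQ entry refers to is set in solrconfig.xml (a sketch; raising it usually just papers over commits arriving faster than searchers can warm):

<maxWarmingSearchers>2</maxWarmingSearchers>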
Hi all,
I'm working with Solr 4.6.1 and I'm trying to tune my ingestion process.
The ingestion runs a big DB query and then does some ETL on it and inserts
via SolrJ.
I have a 4 node cluster with 1 shard per node running in Tomcat with
-Xmx=4096M. Each node has a separate instance of Zookeeper on
Hi;
There is a round-robin process when assigning nodes in the cluster. To achieve
what you want, you should change your Solr start-up order.
Thanks;
Furkan KAMACI
2014-02-25 19:17 GMT+02:00 Shawn Heisey :
> On 2/25/2014 8:09 AM, Oliver Schrenk wrote:
> > I want to run two logical insta
https://issues.apache.org/jira/browse/SOLR-5773
I am having trouble with CollapseQParserPlugin showing duplicate groups when
the search results contain a member of a collapsed group but another member
of that group is defined in the elevate component. I have
described the issue in more
On 2/25/2014 8:09 AM, Oliver Schrenk wrote:
> I want to run two logical instances (leader & replica) of Solr on each
> physical
> machine (host_1 & host_2).
>
> Everything is running but the shard is replicated on the same physical
> machine!
> Which doesn't work as a failover mechanism. So at
I don’t actually run these commands. Everything is written down in either
jetty.conf or solr.xml. I basically copy-pasted the output from `ps -ef |
grep solr`.
Is the Collections API the only way to do so? At the moment this is a proof of
concept, but for going to production I want to put
Oliver,
You'll probably have better luck not supplying CLI arguments and creating your
collection via the Collections API
(https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateaCollection).
Try removing -DnumShards and setting the -Dcollection.configName to some
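A sketch of such a call (collection name, counts, and config name are placeholders):

http://localhost:8080/solr/admin/collections?action=CREATE&name=mycollection&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=myconfig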
Hi,
tl;dr: I'm having trouble configuring SolrCloud 4.3.1 to replicate the shard of
another machine. Basically, what it boils down to is the question of how to tell
one Solr instance to replicate the shard of another machine. I thought the
system property `-Dshard=2` would do the trick, but it doesn't do
This seems like an XY problem: you're asking for
specifics on doing something without any indication of
_why_ you think this would help. Nor are you explaining
what problem you're having in the first place.
At any rate, queryResultCache is unlikely to impact
much. All it is is a map containing
Right, highlighting may have to re-analyze the input in order
to return the highlighted data. This will be significantly slower
than the search, especially if you're returning a large number
of rows.
You can get better performance in highlighting by using
FastVectorHighlighter. See:
http
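For reference, FastVectorHighlighter needs term vectors with positions and offsets on the highlighted field, plus one request parameter (a sketch; the field name content is an assumption):

<field name="content" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>

/select?q=some+query&hl=true&hl.fl=content&hl.useFastVectorHighlighter=true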
Is there any way to programmatically disable/enable the Solr queryResultCache?
I am using SolrJ.
Thanks & Regards,
Senthilnathan V
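As far as I know there is no per-request toggle for this in SolrJ; the cache is declared in solrconfig.xml, so disabling it means removing or commenting out that declaration and reloading the core. The entry in question looks like this (values are the stock example ones):

<queryResultCache class="solr.LRUCache"
                  size="512"
                  initialSize="512"
                  autowarmCount="0"/>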
Hi,
I would like to know whether anyone has experienced this kind of phenomenon.
We are having a performance problem with queries on stemmed values.
I've documented the symptoms I'm currently facing:
Search on field content
Search on field spell
Highlighting (on content field)
A few things:
1) If your database uses a BLOB, you should not use ClobTransformer;
FieldStreamDataSource should be sufficient.
2) In a previous message, it showed that the converted/extracted document
was empty (except for an HTML boilerplate wrapper). This was using the
configuration I suggested
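For completeness, a minimal data-config.xml along those lines (a sketch; driver, URL, table, and column names are placeholders):

<dataConfig>
  <dataSource name="db" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="user" password="pass"/>
  <dataSource name="blobSource" type="FieldStreamDataSource"/>
  <document>
    <entity name="docs" dataSource="db" query="SELECT id, data FROM docs">
      <field column="id" name="id"/>
      <entity name="blob" dataSource="blobSource" processor="TikaEntityProcessor"
              dataField="docs.data" format="text">
        <field column="text" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>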
Okay.
Here is my data-config file:
--
solr.log file:
INFO - 2014-02-25 17:33:40.023; org.apache.solr.core.SolrCore; [CHESS_
On 25 February 2014 14:54, Chandan khatua wrote:
> Hi Gora,
>
> The column type in DB is BLOB. It only stores binary data.
>
> If I do not use TikaEntityProcessor, then the following exception occurs:
[...]
It is difficult to follow what you are doing when you say one thing and
seem to do anothe
I vaguely remember such a Jira issue but I can't find it now.
Gregg, can you open an issue? A patch would be even better.
On Tue, Feb 25, 2014 at 8:28 AM, Gregg Donovan wrote:
> We fetch a large number of documents -- 1000+ -- for each search. Each
> request fetches only the uniqueKey or the un
Hi Gora,
The column type in DB is BLOB. It only stores binary data.
If I do not use TikaEntityProcessor, then the following exception occurs:
at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:457)
59163 [Thread-16] ERROR org.apache.solr.handler.dataimport.DocBuil
On 25 February 2014 14:27, Chandan khatua wrote:
> Sir,
>
>
>
> Please send me the data-config file to index binary data that is stored in
> the database as a BLOB type.
Are you paying attention to the follow-ups? I had suggested
possibilities, including the fact that Solr cannot automatically
decide
Sir,
Please send me the data-config file to index binary data that is stored in
the database as a BLOB type.
Thanking you,
Chandan