Mohsin:
As the author of the transient cores stuff I can authoritatively state
that it wasn't designed with SolrCloud in mind, so I'd be a little
careful about extending that functionality, even by analogy ;).
Not to say that it's totally incompatible, but
That said, I may be working on some
How about dynamic loading/unloading of some shards (cores), similar to the
transient cores feature? It should be OK if the unloaded shard has a replica. If
there is no replica, then extending the shards.tolerant concept to use some
timeout/acceptable-latency value sounds interesting.
-Mohsin
bq. We ran into one of the failure modes that only AWS can dream up
recently, where for an extended amount of time, two nodes in the same
placement group couldn't talk to one another, but they could both see
ZooKeeper, so nothing was marked as down.
I had something similar happen with one of my SolrCl
"Last Gasp" is the last message that Sun Storage controllers would send to each
other when things went sideways...
For what it's worth.
> Date: Fri, 21 Nov 2014 14:07:12 -0500
> From: michael.della.bi...@appinions.com
> To: solr-user@lucene.apache.org
> Subject: Re: Dealing with bad apples in a S
Could you add echoParams=all to the query and see what comes back?
Currently you only echo the params you sent; it would be good to see what
they look like after they are combined with the defaults.
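Something like this, with host and collection name as placeholders:

  http://localhost:8983/solr/collection1/select?q=*:*&echoParams=all

With echoParams=all, the effective parameters (including anything pulled in
from defaults, appends and invariants) are listed in the responseHeader.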
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Brian and I are working together to diagnose this issue so I can chime in
quickly here as well. These values are defined as part of the defaults
section of the config.
Sounds like you'll want to use the ScoreCachingWrappingScorer. Your
DelegatingCollector can wrap the ScoreCachingWrappingScorer around the
scorer passed into the setScorer(Scorer) method and pass it down the
collector chain.
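A minimal sketch against the 4.x APIs (the class name is just illustrative):

  import java.io.IOException;
  import org.apache.lucene.search.ScoreCachingWrappingScorer;
  import org.apache.lucene.search.Scorer;
  import org.apache.solr.search.DelegatingCollector;

  // Wraps the incoming Scorer so score() is computed at most once per document.
  public class ScoreCachingDelegatingCollector extends DelegatingCollector {
    @Override
    public void setScorer(Scorer scorer) throws IOException {
      // The base class stores the scorer and passes it on to the delegate,
      // so wrapping here covers the rest of the collector chain below us.
      super.setScorer(new ScoreCachingWrappingScorer(scorer));
    }
  }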
Joel Bernstein
Search Engineer at Heliosearch
On Fri, Nov 14, 2014 at 3:
When I've run an optimize with Solr 4.8.1 (by clicking optimize from the
collection overview in the admin ui) it goes replica by replica, so it is
never doing more than one shard or replica at the same time.
It also significantly slows down operations that hit the replica being
optimized. I've see
What type of section do you define these in: defaults, appends,
or invariants? You didn't mention that, but it might be important.
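For reference, the three section types look roughly like this in solrconfig.xml
(handler name and params are only examples):

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">   <!-- used only when the request omits the param -->
      <str name="facet.field">category</str>
    </lst>
    <lst name="appends">    <!-- added on top of whatever the request sends -->
      <str name="fq">inStock:true</str>
    </lst>
    <lst name="invariants"> <!-- always win, overriding the request -->
      <str name="rows">10</str>
    </lst>
  </requestHandler>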
Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizer
Whew! Thanks for closing this off.
Best,
Erick
On Fri, Nov 21, 2014 at 9:11 AM, nbosecker wrote:
> Good call - we are adding some ACL to the query going in, and using a Map to
> store the original query values; if there are multiple values for the same key,
> only the last one is stored.
>
> My bad
bq: if I can optimize one shard at a time
Not sure. Try putting &distrib=false on the URL, but I don't know
for sure whether that'd work or not. If this works at all, it'll work
on one _replica_ at a time, not shard.
But why would you want to? Each optimization is local and runs
in the background.
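For completeness, the (untested) request being described would look something
like this, with host and core name as placeholders:

  http://localhost:8983/solr/collection1_shard1_replica1/update?optimize=true&distrib=false

Whether distrib=false really keeps the optimize local to that one replica is
exactly the open question above.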
Hmmm, I just tried this with 4.10.2 and can't reproduce this at all.
If I define facets in the /select handler then specify any
facet.field on the URL, the URL completely overrides the defaults
in /select. I even tried specifying the facet.field twice and still only
a single section was returned in
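For reference, the kind of setup being tested, with illustrative handler and
field names:

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="facet">true</str>
      <str name="facet.field">cat</str>
    </lst>
  </requestHandler>

and a request like

  /select?q=*:*&facet=true&facet.field=manu

Since these are "defaults" (not appends or invariants), the facet.field on the
URL replaces the configured one, so only the manu facet block should come back.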
Good discussion topic.
I'm wondering if Solr doesn't need some sort of "shoot the other node in
the head" functionality.
We ran into one of the failure modes that only AWS can dream up recently,
where for an extended amount of time, two nodes in the same placement
group couldn't talk to one another, but they could both see ZooKeeper, so
nothing was marked as down.
bq. esp. since we've set max threads so high to avoid distributed
dead-lock.
We should fix this for 5.0 - add a second thread pool that is used for
internal requests. We can make it optional if necessary (simpler default
container support), but it's a fairly easy improvement I think.
- Mark
On
Just soliciting some advice from the community ...
Let's say I have a 10-node SolrCloud cluster and have a single collection
with 2 shards with replication factor 10, so basically each shard has one
replica on each of my nodes.
Now imagine one of those nodes starts getting into a bad state and st
Good call - we are adding some ACL to the query going in, and using a Map to
store the original query values; if there are multiple values for the same key,
only the last one is stored.
My bad! Thanks for the hint, I wasn't even considering that issue.
Best,
Nancy
It's the "Deleted Docs" metric on the core Statistics page.
I know that eventually the merges will expunge these deletes, but I will run out
of space soon and I want to know the _real_ space that I have.
Actually I have space enough (about 3.5x the size of the index) to do the
optimize.
Other
We've run into an issue during local testing of the 4.10.2 release, where if
the search handler config in solrconfig.xml has facet.field defaults defined, and a
different field is on the request, then the requested facets are included twice
in the response. If the list of default facet fields is remov
Yes, should be no problem.
This should be happening automatically, though: the percentage
of deleted documents in a segment weighs quite heavily when the decision
is made to merge segments in the background.
You say you have "millions of deletes". Is this the difference between
numDocs and maxDoc on the
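A quick way to check those numbers (host and core name are placeholders):

  http://localhost:8983/solr/collection1/admin/luke?numTerms=0

The Luke response reports numDocs and maxDoc; maxDoc minus numDocs is the count
of deleted documents still occupying space. And if the goal is mainly to purge
deletes without a full optimize, a commit with expungeDeletes is another option
to consider:

  http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true

That only merges segments carrying enough deletions (per the merge policy's
deleted-docs threshold), so it is lighter than optimize but may not reclaim
everything.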
In addition to Alexandre's comments, I've occasionally seen something
like this happen when _very_ large packets were being transmitted
back and forth. You might have to up the packet size allowed by the
servlet container. But that's a guess...
Best,
Erick
On Fri, Nov 21, 2014 at 4:04 AM, Alexandre Rafalo
That's all just governed by the analysis chain you've
defined for the field in question.
The admin/analysis page will show you the actual
query generated, and is a great place to get an
understanding of how Solr/Lucene do the index-time
and query-time processing on each separate field.
Best,
Erick
On
Hi,
Is it possible to perform an optimize operation and continue indexing on a
collection?
I need to force expunging deletes from the index; I have millions of deletes
and need to free space.
-
Best regards
I am using Solr 4.2.1. Could someone give me an example of how to create a query
which will be analysed to match shingles?
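One common setup, using a ShingleFilterFactory-based field type (type and field
names here are only illustrative):

  <fieldType name="text_shingle" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.ShingleFilterFactory" minShingleSize="2" maxShingleSize="2"
              outputUnigrams="false"/>
    </analyzer>
  </fieldType>

With the same analyzer at index and query time, a quoted query such as
q=my_shingle_field:"red apple pie" is analysed into the shingles "red apple" and
"apple pie" and matched against the indexed shingles. The admin Analysis page is
the easiest way to verify what the query analyzer actually produces.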
Looks like something reset the connection between the Solr server you are
talking to and the one hosting the other shard
(http://192.168.120.202:8080/solr).
I would check the logs of the _other_ server and see if something is there.
Otherwise, I would look into something like a firewall in betwee
See the if() function example at
https://cwiki.apache.org/confluence/display/solr/Function+Queries
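On a Solr new enough to have sort-by-function plus if() and termfreq() (1.4.1
predates all of these), the idea would translate to roughly the following single
sort parameter, assuming name is an untokenized string field:

  sort=if(termfreq(name,'Banana'),1,if(termfreq(name,'Apple'),2,if(termfreq(name,'Pear'),3,if(termfreq(name,'Orange'),4,5)))) asc, variety asc

i.e. each document gets a rank based on which listed value its name term matches,
and unmatched documents sort last.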
On Fri, Nov 21, 2014 at 2:17 PM, shacky wrote:
> Hi.
> I'm using Solr 1.4.1 (I know it's an old version) and I'm trying to
> find a way to sort the found records using a custom list of field values.
>
> The
Hi.
I'm using Solr 1.4.1 (I know it's an old version) and I'm trying to
find a way to sort the found records using a custom list of field values.
The same as MySQL's "ORDER BY FIELD()" function:
SELECT * FROM fruit ORDER BY FIELD(name, 'Banana', 'Apple', 'Pear',
'Orange'), variety;
Could you help me
Hi
We used Solr 4.6 for search and there was an exception that occurred randomly.
The exception message in the application (using SolrJ) was:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:
org.apache.solr.client.solrj.SolrServerException: IOException occured
when talking to server at: ht
Hi all,
So in my index I've got 2 docs with 1 multivalued
SpatialRecursivePrefixTreeFieldType-field:
[
{"points": ["MULTIPOINT(1 1, 2 2)", "MULTIPOINT(10 10, 20 20)"]},
{"points": ["MULTIPOINT(1 1, 2 2)"]}
]
Query:
q=*:*&fq=points:"IsWithin(POLYGON((0 0, 4 0, 4 4, 0 4, 0 0)))"
Result
Hello,
I am using Solr 4.8.1 with the following fields in schema.xml:
where:
- id: unique vehicle id
- vehicle_year: year when the vehicle was made (ex. 2014)
- vehicle_maker: vehicle manufacturer (ex. BMW)
- vehicle_model: vehicle model (ex. 320i)
- vehicle_trim: vehicle trim (ex. Sedan)