Well, I do tend to go on
As Shawn mentioned, memory is usually the most
precious resource, and splitting into more shards, assuming
they're in separate JVMs and preferably on separate
machines, will certainly relieve some of that pressure.
My only caution there is that splitting into more shards may
Thanks Erick,
I have another index with the same infrastructure setup, but only 10m
docs, and never see these slow-downs; that's why my first instinct was
to look at creating more shards.
I'll definitely make a point of investigating further, though, with all the
things you and Shawn mentioned, t
Be _very_ cautious when you're looking at these timings. Random
spikes are often due to opening a new searcher (assuming
you're indexing as you query) and are eminently tunable by
autowarming. Obviously you can't fire the same query again and again,
but if you collect a set of "bad" queries and, sa
On 3/19/2016 11:12 AM, Robert Brown wrote:
> I have an index of 60m docs split across 2 shards (each with a replica).
>
> When load testing queries (picking random keywords I know exist), and
> randomly requesting facets too, 95% of my responses are under 0.5s.
>
> However, during some random manua
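In case it's useful, here's a minimal sketch of the kind of replay harness suggested above: it assumes the slow queries have been collected one per line into a file called slow_queries.txt, that Solr is on localhost:8983, and that the collection is named collection1 (those names are placeholders, not details from this thread). Running the set once against a cold searcher and again after a commit with autowarming configured makes the searcher-opening effect easy to see.

# Sketch only: replay a file of previously collected "bad" queries against
# Solr and report per-query timings. slow_queries.txt, the host, and the
# collection name are assumed placeholders; adjust for your setup.
import json
import time
import urllib.parse
import urllib.request

SOLR_URL = "http://localhost:8983/solr/collection1/select"  # assumed endpoint

def time_query(q):
    params = urllib.parse.urlencode({"q": q, "wt": "json", "rows": 10})
    start = time.time()
    with urllib.request.urlopen(SOLR_URL + "?" + params) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    wall_ms = (time.time() - start) * 1000.0
    qtime_ms = body["responseHeader"]["QTime"]  # Solr-reported query time in ms
    return wall_ms, qtime_ms

with open("slow_queries.txt") as f:
    for line in f:
        q = line.strip()
        if not q:
            continue
        wall_ms, qtime_ms = time_query(q)
        print("%8.1f ms wall  %6d ms QTime  %s" % (wall_ms, qtime_ms, q))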
Ashwin:
First, if at all possible I would simply set up my new SolrCloud
structure (2 shards, a leader and follower each) and re-index the
entire corpus. 24M docs isn't really very many, and you'll have to
have this capability sometime since someone, somewhere will want to
change the schema in ways
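For what it's worth, the "set up the new 2-shard structure and re-index" route mentioned above can be as little as one Collections API call plus a re-index; here's a rough sketch. The collection name (mycollection), configset name (myconf), host, and maxShardsPerNode value are all placeholders you'd adjust for your own cluster.

# Sketch only: create a fresh 2-shard collection, each shard with 2 replicas
# (a leader and a follower), via the Collections API, then re-index into it.
# "mycollection", "myconf", and the host/port are assumed placeholders.
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr"  # assumed Solr base URL

params = urllib.parse.urlencode({
    "action": "CREATE",
    "name": "mycollection",
    "numShards": 2,
    "replicationFactor": 2,             # leader plus one follower per shard
    "collection.configName": "myconf",  # configset already uploaded to ZooKeeper
    "maxShardsPerNode": 2,              # allow 2 replicas per node on a 2-node cluster
})
with urllib.request.urlopen(SOLR + "/admin/collections?" + params) as resp:
    print(resp.read().decode())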
Were you guys able to fix this issue?
Greg Preston wrote
> [qtp243983770-60] ERROR org.apache.solr.core.SolrCore – java.io.IOException: cannot uncache file="_1.nvm": it was separately also created in the delegate directory
>         at org.apache.lucene.store.NRTCachingDirectory.unCache(NRTCachingDirectory.java:297)
>         a
Hey guys, I filed a JIRA for this, and apparently this problem has been fixed
in Lucene but didn't make it into the 4.4 release. Please see the JIRA for more
info about patches: https://issues.apache.org/jira/browse/SOLR-5144.
I am also getting the same error when performing shard splitting using Solr 4.4.0.
Hi,
I'm getting the same error on 4.4.0 (just downloaded) reproducibly with
these steps:
# curl
'http://localhost:8983/solr/admin/cores?action=CREATE&collection=sockmonkey&name=sockmonkeycore1&numShards=1&shard=shard1&replicationFactor=1'
# curl
'http://localhost:8983/solr/admin/cores?action=CREA
Oops, I somehow forgot to mention that. The errors I'm seeing are with the
release version of Solr 4.4.0. I mentioned 4.1.0 as that's what we
currently have in prod, and we want to upgrade to 4.4.0 so we can do shard
splitting. Towards that end, I'm testing shard splitting in 4.4.0 and
seeing th
The very first thing I'd do is go to Solr 4.4. There have been
a lot of improvements in this code in the intervening 3
versions.
If the problem still occurs in 4.4, it'll get a lot more attention
than 4.1.
FWIW,
Erick
On Fri, Aug 9, 2013 at 7:32 PM, Greg Preston wrote:
> Howdy,
>
> I'm tryi
Beautiful. Thanks!
Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
Performance Monitoring - http://sematext.com/spm/index.html
On Tue, Jun 18, 2013 at 12:34 PM, Mark Miller wrote:
> No, the hash ranges are split and new docs go to both new shards.
>
> - Mark
>
> On Jun 18, 201
No, the hash ranges are split and new docs go to both new shards.
- Mark
On Jun 18, 2013, at 12:25 PM, Otis Gospodnetic
wrote:
> Hi,
>
> Imagine a (common) situation where you use document routing and you
> end up with 1 large shard (e.g. 1 large user with lots of docs).
> Shard splitting wi
Hi Ming,
Yes, that's exactly what I meant. Referring to your last email about
SolrEntityProcessor: if you're trying to migrate from a 3.x installation
to SolrCloud, then I think that you should create a SolrCloud installation
with numShards=1 and copy over your previous (3.x) index. Then you can
Hi Shalin,
Do you mean that we can do 1->2, 2->4, 4->8 to get 8 shards eventually?
After splitting, if we want to set up a SolrCloud with all 8 shards, how
shall we allocate the shards then?
Thanks,
Ming-
On Mon, Jun 10, 2013 at 9:55 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
No, it is hard-coded to split into two shards only. You can call it
recursively on a sub-shard to split into more pieces. Please note that some
serious bugs were found in that command, which will be fixed in the next
(4.3.1) release of Solr.
On Tue, Jun 11, 2013 at 9:43 AM, Mingfeng Yang wrote:
>
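To make the recursive pattern concrete, here's a rough sketch. It assumes a collection named collection1 whose single shard is shard1, and that SPLITSHARD names the sub-shards by appending _0 and _1 to the parent (shard1 becomes shard1_0 and shard1_1). In practice you'd verify after each split that the sub-shards are active and the parent is inactive before splitting further; the sleep below is only a stand-in for that check.

# Sketch only: split one shard into eight by calling SPLITSHARD recursively.
# The collection name "collection1" and starting shard "shard1" are assumed;
# sub-shards are named by appending _0 / _1 to the parent shard name.
import time
import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr"   # assumed Solr base URL
COLLECTION = "collection1"            # assumed collection name

def split_shard(shard):
    params = urllib.parse.urlencode({
        "action": "SPLITSHARD",
        "collection": COLLECTION,
        "shard": shard,
    })
    with urllib.request.urlopen(SOLR + "/admin/collections?" + params) as resp:
        print(shard, "->", resp.read().decode()[:200])

shards = ["shard1"]
for _ in range(3):                    # three rounds: 1 -> 2 -> 4 -> 8
    next_round = []
    for shard in shards:
        split_shard(shard)
        time.sleep(60)                # crude: really, poll clusterstate until
                                      # the sub-shards are active instead
        next_round += [shard + "_0", shard + "_1"]
    shards = next_round
print("resulting shards:", shards)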
You will need to edit it manually and upload it using a ZooKeeper client; you can
use kazoo, it's very easy to use.
--
Yago Riveiro
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)
On Wednesday, May 22, 2013 at 10:04 AM, Arkadi Colson wrote:
> clusterstate.json is now reporting shard3 as
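In case it helps, here's a rough kazoo sketch of the download/edit/upload cycle described above. The ZooKeeper address, the collection name (collection1), and the particular edit (flipping shard3 back to active) are assumptions based on this thread; keep a copy of the original JSON and pause indexing while you do it.

# Sketch only: pull clusterstate.json from ZooKeeper with kazoo, flip a
# shard's state, and write it back. The ZooKeeper address, collection name,
# and shard name are placeholders; back up the original data first.
import json
from kazoo.client import KazooClient

zk = KazooClient(hosts="localhost:2181")   # assumed ZooKeeper host:port
zk.start()
try:
    data, stat = zk.get("/clusterstate.json")
    state = json.loads(data.decode("utf-8"))

    # Example edit: mark shard3 of "collection1" active again.
    state["collection1"]["shards"]["shard3"]["state"] = "active"

    zk.set("/clusterstate.json", json.dumps(state).encode("utf-8"))
finally:
    zk.stop()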
clusterstate.json is now reporting shard3 as inactive. Any idea how to
change clusterstate.json manually from the command line?
On 05/22/2013 08:59 AM, Arkadi Colson wrote:
Hi
I tried to split a shard but it failed. If I try to do it again, it
does not start again.
I see the two extra shards in /co