Hey Erick,
Some thoughts:
Solr should _not_ have to replicate the index or go into peer sync on
> startup.
>
Okay, that's good to know! Tells us that this is a problem that can be
solved.
>
> > are you stopping indexing before you shut your servers down?
>
By indexing, you mean adding new ent
Whenever I hit a problem with SPLITSHARD it's usually because I run out of
disk, as effectively you're doubling the disk space used by the shard.
However for large indexes (and 40GB is pretty large) take a look at
https://issues.apache.org/jira/browse/SOLR-5324
If that's the problem, one possible w
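As a rough illustration of the disk math above (a sketch only; the path and shard size are whatever applies to your node, and the "one extra shard's worth" figure is the rule of thumb from above, not a guarantee):

```python
import shutil

def enough_disk_for_split(index_dir: str, shard_size_bytes: int) -> bool:
    """SPLITSHARD writes the two sub-shard indexes alongside the
    original, so budget roughly the shard's current size again
    in free space before kicking off the split."""
    free_bytes = shutil.disk_usage(index_dir).free
    return free_bytes >= shard_size_bytes

# e.g. a 40 GB shard needs ~40 GB free on its data volume
print(enough_disk_for_split("/tmp", 40 * 1024**3))
```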
Solr should _not_ have to replicate the index or go into peer sync on startup.
> are you stopping indexing before you shut your servers down?
> Be very sure you have passed your autocommit interval after you've stopped
> indexing and before you stop Solr.
> How are you shutting down? bin/solr s
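For reference, the hard-commit interval Erick is referring to is the autoCommit setting in solrconfig.xml; a typical fragment might look like this (the 15-second maxTime is only an example value, not a recommendation):

```xml
<!-- Hard commit: flushes the transaction log and makes updates durable.
     After the last update, wait at least this long before stopping Solr. -->
<autoCommit>
  <maxTime>15000</maxTime>            <!-- milliseconds -->
  <openSearcher>false</openSearcher>  <!-- durability only; no new searcher -->
</autoCommit>
```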
I just repeated the procedure, same effect. I'm an hour in and it's still
recovering. Looked at the autoscaling API, but it's configured not to do
anything, which makes sense given the previous output.
One thing I did see, just now:
solr | 2018-12-05 20:02:37.922 INFO (qtp213195234
Authentication works, and authorization in general works fine. But no
authorization rule works when a specific collection is specified. That's so
frustrating. Oddly, even a simple "path":"/*" rule does nothing once I add
"collection":"a".
But it is silly to base non-heap RAM on the size of the heap. Get the
RAM needed for the non-heap usage. That has nothing to do with the
size of the Java heap.
Non-heap RAM is mostly used for two things: other programs and
file buffers for the Solr indexes. Base the RAM needs on those.
wunder
Wal
3x heap is larger than usual, but significant RAM beyond heap is a good
idea if you can't fit the whole index in 31 GB of memory, since the OS will
cache files in RAM. Note also that heap settings from 32 GB through about 45 GB
give you LESS heap than 31 GB due to an increase in pointer sizes
nee
Hi Kevin,
We do have logs. Grepping for peersync, I can see
solr | 2018-12-05 03:31:41.301 INFO
(coreZkRegister-1-thread-2-processing-n:solr.node2.metaonly01.eca.local:8983_solr)
[c:iglshistory s:shard3 r:core_node12 x:iglshistory_shard3_replica_n10]
o.a.s.u.PeerSync PeerSync: core=
Do you have logs right before the following?
"we notice that the nodes go into "Recovering" state for about 10-12 hours
before finally coming alive."
Is there a peersync failure or something else in the logs indicating why
there is a full recovery?
Kevin Risden
On Wed, Dec 5, 2018 at 12:53 PM
Hi All,
We have a collection:
- solr 7.5
- 3 shards, replication factor 2 for a total of 6 NRT replicas
- 3 servers, 16GB ram each
- 2 billion documents
- autoAddReplicas: false
- 2.1 TB on-disk index size
- index stored on hdfs on separate servers.
If we (gracefully) shut down solr
I’ve never heard a recommendation to have three times as much RAM as the heap.
That doesn’t make sense to me.
You might need 3X as much disk space as the index size.
For RAM, it is best to have the sum of:
* JVM heap
* A couple of gigabytes for the OS and daemons
* RAM for other processes needed on
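As a worked example of that sum (the numbers here are purely illustrative, not a recommendation):

```python
def total_ram_gb(jvm_heap: float, os_and_daemons: float,
                 other_processes: float, index_cache: float) -> float:
    """Add up the components listed above: JVM heap, OS/daemon overhead,
    other processes, and room for the OS to cache index files."""
    return jvm_heap + os_and_daemons + other_processes + index_cache

# e.g. 8 GB heap + 2 GB OS/daemons + 0 GB other + 20 GB index cache
print(total_ram_gb(8.0, 2.0, 0.0, 20.0))  # 30.0
```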
Hi Jay,
In my case, I created a CopyField for this case.
i.e.
And of course define ABC before
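A sketch of that copyField approach, assuming the ABC_* pattern from the question below (the field names and types here are hypothetical):

```xml
<!-- Static catch-all field to query in place of ABC_* -->
<field name="ABC" type="text_general" indexed="true" stored="false"
       multiValued="true"/>
<!-- The dynamic fields themselves -->
<dynamicField name="ABC_*" type="text_general" indexed="true" stored="true"/>
<!-- Funnel every ABC_* value into the catch-all field -->
<copyField source="ABC_*" dest="ABC"/>
```

Queries then target the static field instead, e.g. q=ABC:"myValue", which avoids the SyntaxError a wildcard field name triggers.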
-Original Message-
From: jay harkhani [mailto:jay.harkh...@hotmail.com]
Sent: Wednesday, December 5, 2018 13:29
To: solr-user@lucene.apache.org
Subject: Query regarding Dynamic Fields
Hello
Hello All,
We are using dynamic fields in our collection. We want to use them in queries to
fetch records. Can someone please advise?
i.e.: q=ABC_*:"myValue"
Here "ABC_*" is a dynamic field. Currently, when we try to use the field name as
above, it gives "org.apache.solr.search.SyntaxError".
Hi,
I have a legacy app which runs on Solr 4.4 - I have a 4-node SolrCloud cluster
with 3 ZooKeepers.
curl -v
'http://localhost:8980/solr/admin/collections?action=SPLITSHARD&collection=billdocs&shard=shard1&async=2000'
The response comes back with status 500 and the message "splitshard
the collection time out:300s" - org.apache.solr.common.SolrExcepti
Hello,
I would like to use Solr to index the Cooperative Patent Classification (CPC).
The CPC has a hierarchical structure, and it can have more than 20 levels.
It's a basic structure, without nested documents.
i.e:
A -> A01 -> A01B -> A01B3/00 -> A01B3/40 -> A01B3/4025 .
A -> A01 -> A01L -> A01L1
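One common way to model such a hierarchy in Solr is a path field analyzed with PathHierarchyTokenizerFactory. A sketch under assumptions: each document stores its full classification path, and since CPC symbols themselves contain "/", a "|" separator is used instead; the field and type names are hypothetical:

```xml
<!-- Index "A|A01|A01B|A01B3/00" so each ancestor prefix becomes a token -->
<fieldType name="cpc_path" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="|"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>
<field name="cpc" type="cpc_path" indexed="true" stored="true"/>
```

A query such as cpc:"A|A01" would then match every document whose classification path passes through A01.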
Ok. So with these suggestions, I found
https://lucene.apache.org/solr/guide/6_6/configuring-solrconfig-xml.html#Configuringsolrconfig.xml-ImplicitCoreProperties
So, to test this, I tried to use it in DIH, since DIH has a similar issue with
configsets: every collection needs its own DIH.properties.
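For context, the implicit properties that page documents (such as solr.core.name) can be substituted in solrconfig.xml, so one hedged sketch of the idea, assuming a per-core config file naming convention, would be:

```xml
<!-- solrconfig.xml: point each collection's DIH handler at its own
     config file, named after the implicit core name property -->
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">${solr.core.name}-data-config.xml</str>
  </lst>
</requestHandler>
```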