Thanks Emir,
As mentioned below, I am indexing using two tenants, and my data currently
belongs to only one shard, which makes retrieval much faster, but inserts
seem slower.
Is there any specific reason for that?
Please advise.
Regards,
Ketan.
Hi Ketan
What about SOLR-10619 and SOLR-10983? Of the two, 10619 is probably
the most important in this respect. The way the Overseer consumed
requests from the queue was very inefficient and may particularly
affect this problem. There are a couple of other JIRAs that center
around not creating unnecessary
No luck for me. Did you give it a try in the meantime?
I'm not sure if I may have missed something; my logs are completely gone
after this change.
Wondering what's wrong with them.
-Atita
On Tue, Oct 10, 2017 at 5:58 PM, Atita Arora wrote:
> Sure thanks Emir,
> Let me give them a quick try and I'll
On 10/10/2017 9:11 AM, Erick Erickson wrote:
Hmmm, that page is quite a bit out of date. I think Shawn is talking
about the "old style" Solr (4.x) that put all the state information
for all the collections in a single znode "clusterstate.json". Newer
style Solr puts each collection's state in
/co
Bernd:
bq: ...having 2 tlog files but about 180 open file handles was the
point asking the question...
This doesn't make any sense to me. I can't imagine that what you
describe is intended behavior _if_ all those file handles are open on
tlogs.
I suspect that they _aren't_ open on tlogs however.
Hold it. "date", "tdate", "pdate" _are_ primitive types. Under the
covers date/tdate are just a tlong type, newer Solrs have a "pdate"
which is a point numeric type. All that these types do is some parsing
up front so you can send human-readable data (and get it back). But
under the covers it's sti
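For readers following along, the field types Erick describes would look roughly like this in a Solr schema (a sketch; the field name "unixtime" is taken from the thread, attribute values are illustrative):

```xml
<!-- Trie-based date (Solr 4.x-6.x): a tlong under the covers -->
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6"/>

<!-- Point-based date (newer Solr): a point numeric type -->
<fieldType name="pdate" class="solr.DatePointField" docValues="true"/>

<field name="unixtime" type="pdate" indexed="true" stored="true"/>
```

Either way, the parsing of human-readable dates happens at the edges; the indexed representation is numeric.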
Hmmm, that page is quite a bit out of date. I think Shawn is talking
about the "old style" Solr (4.x) that put all the state information
for all the collections in a single znode "clusterstate.json". Newer
style Solr puts each collection's state in
/collections/my_collection/state.json which has ve
In the Solr Wiki, Shawn Heisey writes the following:
"Regardless of the number of nodes or available resources, SolrCloud begins
to have stability problems when the number of collections reaches the low
hundreds. With thousands of collections, any little problem or change to
the cluster can cause
Following up on this thread. It was finally determined that Solr was being
hard killed. The Windows service was not giving Solr enough time to shut
down and was hard killing it. We fixed this and have not had the issue
since.
On Tue, May 31, 2016 at 1:51 PM, Jon Drews wrote:
> I forgot to add th
Sure thanks Emir,
Let me give them a quick try and I'll update you.
Thanks,
Atita
On Tue, Oct 10, 2017 at 5:28 PM, Emir Arnautović <
emir.arnauto...@sematext.com> wrote:
> Hi Atita,
> I did not try it, but I think that following could work:
>
>
> #logging queries
> log4j.logger.org.apache.solr.h
Hi Atita,
I did not try it, but I think that following could work:
#logging queries
log4j.logger.org.apache.solr.handler.component.QueryComponent=WARN,slow
log4j.appender.slow=org.apache.log4j.RollingFileAppender
log4j.appender.slow.File=${solr.log}/slow.log
log4j.appender.slow.layout=org.apach
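A complete appender along the lines Emir sketches might look like this (log4j 1.2 syntax, as used by Solr 6.x; the layout, rollover sizes, and pattern are illustrative assumptions, not from the original message):

```properties
# Route QueryComponent WARN logs to a dedicated rolling file
log4j.logger.org.apache.solr.handler.component.QueryComponent=WARN,slow
log4j.appender.slow=org.apache.log4j.RollingFileAppender
log4j.appender.slow.File=${solr.log}/slow.log
log4j.appender.slow.MaxFileSize=10MB
log4j.appender.slow.MaxBackupIndex=9
log4j.appender.slow.layout=org.apache.log4j.PatternLayout
log4j.appender.slow.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c %m%n
```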
Hi Ketan,
Is it possible that you are indexing only one tenant and that is causing a
single shard to become a hotspot?
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 10 Oct 2017, at 12:47, Ketan Thanki
Hi Emir,
So I made a few changes to the log4j config; I am able to redirect these
logs to another file as well.
But as these are WARN logs, I doubt whether any logs enabled at WARN level
are going to be redirected to this new log file.
So precisely, I am using Solr 6.1 (in cloud mode) & I have
On 10/10/2017 11:02, Bernd Fehling wrote:
Questions coming to my mind:
Is there a "Resiliency Status" page for SolrCloud somewhere?
How would SolrCloud behave in a Jepsen test?
This has been done in 2014 - see
https://lucidworks.com/2014/12/10/call-maybe-solrcloud-jepsen-flaky-networks/
Ch
Hi,
I need help regarding the query mentioned below.
I have configured 2 collections, each with 2 shards and 2 replicas, and I
have implemented composite document routing for my unique field 'Id', where
I use a 2-level tenant route as mentioned below.
e.g.: projectId:158380 modelId:3606 where
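A two-level compositeId routing key of the kind described above is built by joining the shard keys and the document id with "!" (a sketch; the projectId/modelId values are from the example, the exact key format and the helper function are assumptions):

```python
def composite_id(project_id, model_id, doc_id, project_bits=None):
    """Build a two-level CompositeId routing key of the form
    "level1!level2!docId". An optional /bits suffix on the first
    level spreads one large tenant over more of the hash range,
    which can relieve a single-shard hotspot."""
    level1 = f"projectId:{project_id}"
    if project_bits is not None:
        level1 += f"/{project_bits}"
    return f"{level1}!modelId:{model_id}!{doc_id}"

print(composite_id(158380, 3606, "doc42"))
# projectId:158380!modelId:3606!doc42
print(composite_id(158380, 3606, "doc42", project_bits=2))
# projectId:158380/2!modelId:3606!doc42
```

If all documents share one routing prefix, they hash to one shard, which matches the behavior reported in the thread: fast reads (one shard queried) but that shard carrying the full indexing load.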
Questions coming to my mind:
Is there a "Resiliency Status" page for SolrCloud somewhere?
How would SolrCloud behave in a Jepsen test?
Regards
Bernd
Am 10.10.2017 um 09:22 schrieb Toke Eskildsen:
> On Mon, 2017-10-09 at 20:50 -0700, Tech Id wrote:
>> Being a long term Solr user, I tried to do a
While you're generally right, in this case it might make sense to stick
to a primitive type.
I see "unixtime" as a technical information, probably from
System.currentTimeMillis(). As long as it's not used as a "real world"
date but only for sorting based on latest updates, or choosing which
documen
Inline:
/"1. No zookeeper - I have burnt my hands with some zookeeper issues in the
past and it is no fun to deal with. Kafka and Storm are also trying to
burden zookeeper less and less because ZK cannot handle heavy traffic."/
Where did you get this information? Is it based on some publicly
rep
Some time ago there was a Solr installation which had the same problem, and
the author explained to me that the choice was made for performance reasons.
Apparently he was sure that handling everything as primitive types would
give a boost to the Solr searching/faceting performance.
I never agreed ( and one
I could see different versions of the below entries in the leader and replica.
While indexing, in the replica instance logs we could see that it keeps
receiving update requests from the leader, but it says "no changes, skipping
commit".
Master (Searching)
Master (Replicable)
There is no other error me
Hi Atita,
You should definitely go with log4j configuration, as anything else would be
redoing what log4j can do. You already have slowQueryThresholdMillis to make
slow queries log at WARN, and you can configure log4j to put such logs (class
+ level) into a separate file.
This seems like frequent
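The slow-query threshold Emir refers to is set in solrconfig.xml (a sketch; the 1000 ms value is illustrative):

```xml
<query>
  <!-- Queries slower than this are logged at WARN by SolrCore -->
  <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
</query>
```

With that in place, the log4j side only needs to route the relevant logger at WARN level to a dedicated appender.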
On Mon, 2017-10-09 at 20:50 -0700, Tech Id wrote:
> Being a long term Solr user, I tried to do a little comparison myself
> and actually found some interesting features in ES.
>
> 1. No zookeeper - I have burnt my hands with some zookeeper issues
> in the past and it is no fun to deal with. Kafka
Hi Erik,
the cause of having 2 tlog files but about 180 open file handles
was the point of asking the question "when are transaction logs closed".
Second was the statement you made: "...every hard commit of any
flavor should close the current tlog".
This is somehow true, but I would expect that t
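For context, the hard-commit behavior under discussion is driven by the autoCommit settings in solrconfig.xml; a typical configuration looks like this (a sketch; the values are illustrative, not from the thread):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- A hard commit closes the current tlog and opens a new one -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```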