Yes, it's stored in the directories configured in zoo.cfg.
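For context, a minimal zoo.cfg sketch (values are placeholders, not from this thread); the dataDir line is where ZooKeeper keeps the snapshots and transaction logs that hold the uploaded collection configs:

  # zoo.cfg (standalone example; values are placeholders)
  tickTime=2000
  clientPort=2181
  dataDir=/var/lib/zookeeper   # ZK snapshots and transaction logs live here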
Jeff Courtade
M: 240.507.6116
On Jan 11, 2016 1:16 PM, "Jim Shi" wrote:
> Hi, I have a question regarding collection configurations stored in ZooKeeper
> with SolrCloud.
> All collection configurations are stored at Zoo K
1240.416114498
75thPcRequestTime.value 1614.2324915
95thPcRequestTime.value 3048.37888109
99thPcRequestTime.value 5930.183086690001
--
Thanks,
Jeff Courtade
M: 240.507.6116
Thanks,
I was hoping there was a way without a core reload.
Do you know what is different with cloud? I need to do this in both.
Jeff Courtade
M: 240.507.6116
On Apr 2, 2016 1:37 PM, "Shawn Heisey" wrote:
> On 4/2/2016 11:06 AM, Jeff Courtade wrote:
> > I am putting toge
Thanks very much.
Jeff Courtade
M: 240.507.6116
On Apr 2, 2016 3:03 PM, "Otis Gospodnetić"
wrote:
> Hi Jeff,
>
> With info that Solr provides in JMX you have to keep track of things
> yourself, do subtractions and counting yourself.
> If you don't feel like
.20150815151640598
ps04 shard 2 replica
61G
/opt/solr/solr-4.7.2/example/solr/collection1/data/index.20140820212651780
39G
/opt/solr/solr-4.7.2/example/solr/collection1/data/index.20150815170546642
What can I do to remedy this?
--
Thanks,
Jeff Courtade
M: 240.507.6116
console.
Once it is green, check the version number on ps01 and ps03; they should be
the same now.
Repeat this for shard2 and you are done.
--
Thanks,
Jeff Courtade
M: 240.507.6116
On Mon, Aug 17, 2015 at 10:57 AM, Jeff Courtade
wrote:
> Hi,
>
> I have SOLR cloud running on SOLR 4.7.2
>
InBytes matches on Leader
and replica
curl http://ps01:8983/solr/admin/cores?action=STATUS |sed 's/>
1439974815928
2015-08-19T09:00:15.928Z
43691759309
40.69 GB
If that number, date, and size match on the leader and the replicas, I believe
we are in sync.
Can anyone verify this?
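For what it's worth, a sketch of that check using the CoreAdmin STATUS API with JSON output (host names are the ones from this thread; the grep keys are the standard index fields reported by STATUS, adjust as needed):

  for HOST in ps01 ps03; do
    echo "== $HOST =="
    curl -s "http://$HOST:8983/solr/admin/cores?action=STATUS&wt=json&indent=true" \
      | grep -E '"(name|version|lastModified|sizeInBytes|size)"'
  done

If version, lastModified, and sizeInBytes agree between the leader and each replica, that is the same comparison described above.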
-
number is what I was interested in.
Should the version number be different in SolrCloud then, since it is
deprecated?
--
Thanks,
Jeff Courtade
M: 240.507.6116
On Wed, Aug 19, 2015 at 10:08 AM, Shawn Heisey wrote:
> On 8/19/2015 7:52 AM, Jeff Courtade wrote:
> > We are running S
I am getting failures when trying to split shards on Solr 4.2.7 with
custom plugins.
It fails regularly; it cannot find the jar files for the plugins when creating
the new cores/shards.
Ideas?
--
Thanks,
Jeff Courtade
M: 240.507.6116
DME.txt
4.0K    solr.xml
4.0K    zoo.cfg
[root@dj01 solr]# du -sh
/opt/solr/solr-4.7.2/solr04/solr/collection1_shard1_1_replica2
16G /opt/solr/solr-4.7.2/solr04/solr/collection1_shard1_1_replica2
[root@dj01 solr]# du -sh
/opt/solr/solr-4.7.2/solr03/solr/collection1_shard1_0_replica2
18G /opt/solr/solr-4.7.2/
"state":"active",
"base_url":"http://10.135.2.153:8984/solr",
"core":"collection1_shard2_0_replica1",
"node_name":"10.135.2.153:8984_solr",
"leader":"true"
Simple rmr of the /overseer/queue?
Jeff Courtade
M: 240.507.6116
How does the cluster react to the overseer queue entries disappearing?
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:01 AM, "Hendrik Haddorp" wrote:
> Hi Jeff,
>
> we ran into that a few times already. We have lots of collections and when
> nodes get started too fast th
So ...
Using zkCli.sh I have jute.maxbuffer set up so I can list it now.
Can I
rmr /overseer/queue
or do I need to delete individual entries?
Will
rmr /overseer/queue/*
work?
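For anyone following along, a sketch of the zkCli.sh session being described (connect string and buffer size are placeholders; jute.maxbuffer has to be raised on the client JVM or the oversized queue znode cannot even be listed):

  export CLIENT_JVMFLAGS="-Djute.maxbuffer=10485760"   # ~10 MB, adjust as needed
  bin/zkCli.sh -server zk01:2181
  # inside the shell:
  #   ls /overseer/queue       # list the queued overseer operations
  #   rmr /overseer/queue      # removes the queue node AND all of its children

Whether removing the node itself, rather than only its children, is safe is exactly the question being asked here.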
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:20 AM, "Hendrik Haddorp" wrote:
>
Thanks very much.
I will follow up when we try this.
I'm curious: in the env where this is happening to you, are the ZooKeeper
servers residing on Solr nodes? Are the Solr nodes underpowered on RAM and/or
CPU?
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:30 AM, "Hendrik Haddorp" wro
I set jute.maxbuffer on the ZooKeeper hosts; should this be done on Solr as well?
Mine is happening in a severely memory-constrained environment as well.
Jeff Courtade
M: 240.507.6116
On Aug 22, 2017 8:53 AM, "Hendrik Haddorp" wrote:
> We have Solr and ZK running in Docker containers. There is
that it should not be able to be put into a state that is not
recoverable except destructively.
If you have a very active Solr cluster, this could cause data loss, I am
thinking.
--
Thanks,
Jeff Courtade
M: 240.507.6116
On Tue, Aug 22, 2017 at 1:14 PM, Hendrik Haddorp
wrote:
> - stop
s
I can go...
Maybe something to do with logging things through only the update handler
or something?
Anyone bearing a cluestick is welcome.
--
Thanks,
Jeff Courtade
M: 240.507.6116
m URP, as I mentioned earlier.
>
> Regards,
>Alex.
> P.s. Remember that there are full document updates and partial
> updates. What you want to log about that is your business level
> decision.
>
--
Thanks,
Jeff Courtade
M: 240.507.6116
They wanted out-of-the-box solutions.
This is what I found too, that it would be custom. I was hoping I just was
not finding something obvious.
Jeff Courtade
M: 240.507.6116
On Dec 6, 2016 7:07 PM, "John Bickerstaff" wrote:
> You know - if I had to build this, I would consider sl
Thanks very much, everyone.
They will probably pursue custom code to see if they can get this data and
log it.
J
--
Thanks,
Jeff Courtade
M: 240.507.6116
On Tue, Dec 6, 2016 at 7:07 PM, John Bickerstaff
wrote:
> You know - if I had to build this, I would consider slurping up the
> re
we still need to use CMS?
--
Thanks,
Jeff Courtade
M: 240.507.6116
Thanks for that...
I am just starting to look at this; I was unaware of the license debacle.
Automated testing up to 10 is great.
I am still curious about G1 being supported now...
On Wed, Sep 26, 2018 at 10:25 AM Zisis T. wrote:
> Jeff Courtade wrote
> > Can we use GC1 garbage c
unning 6.6.2. We also run it on our
> 4.10.4 master/slave cluster.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
> > On Sep 26, 2018, at 7:37 AM, Jeff Courtade
> wrote:
> >
> > Thanks for that...
>
around 47.5 GB per server.
APX 2 million docs per shard
--
Jeff Courtade
M: 240.507.6116
running 6.6.2. We also run it on our
> > > 4.10.4 master/slave cluster.
> > >
> > > wunder
> > > Walter Underwood
> > > wun...@wunderwood.org
> > > http://observer.wunderwood.org/ (my blog)
> > >
> > > > On Sep 26,
APX = approximately, sorry.
On Wed, Sep 26, 2018, 2:09 PM Shawn Heisey wrote:
> On 9/26/2018 9:45 AM, Jeff Courtade wrote:
> > We are considering a move to Solr 7.x; my question is, must we use cloud? We
> > currently do not and all is well. It seems all work is done ref
The CMS settings are very nearly what we use. After tons of load testing we
changed NewRatio to 2 and it cut the 10-second pauses way down for us;
huge heap, though.
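As a rough illustration in solr.in.sh terms (only NewRatio=2 comes from this thread; the other flags are a common CMS baseline, not necessarily the poster's exact settings):

  # NewRatio=2 keeps the young generation at one third of the heap
  GC_TUNE="-XX:+UseConcMarkSweepGC \
    -XX:+UseParNewGC \
    -XX:NewRatio=2 \
    -XX:CMSInitiatingOccupancyFraction=70 \
    -XX:+UseCMSInitiatingOccupancyOnly"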
On Wed, Sep 26, 2018, 2:17 PM Shawn Heisey wrote:
> On 9/26/2018 9:35 AM, Jeff Courtade wrote:
> > My concern with us
ituation."
> > >
> > >
> > > On Wed, Sep 26, 2018 at 11:08 AM Walter Underwood <
> wun...@wunderwood.org>
> > > wrote:
> > >
> > > > We’ve been running G1 in prod for at least 18 months. Our biggest
> cluster
> > > > i
256
>
> Jeff,
>
> On 9/26/18 11:35, Jeff Courtade wrote:
> > My concern with using g1 is solely based on finding this. Does
> > anyone have any information on this?
> >
> > https://wiki.apache.org/lucene-java/JavaBugs#Oracle_Java_.2F_Sun_Java_
> .2F_OpenJDK_Bugs
>
We run an old master/slave Solr 4.3.0 cluster:
14 nodes, 7/7.
Indexes average 47.5 gig per shard, around 2 million docs per shard.
We have constant daily additions and a small number of deletes.
We currently optimize nightly and it is a system hog.
Is it feasible to never run optimize?
I ask bec
We use 4.3.0. I found that we went into GC hell as you describe with a small
newgen. We use the CMS GC as well.
Using NewRatio=2 got us out of that; 3 wasn't enough... heap of 32 gig
only.
I have not gone over 32 gig as testing showed diminishing returns over 32
gig. I only was brave enough to go to 4
Hi,
I am working on doing a simple point upgrade from Solr 7.6 to 7.7 cloud.
6 servers
3 zookeepers
One simple test collection using the prepackaged _default config.
I stop all Solr servers, leaving the zookeepers up,
change out the binaries and put the solr.in.sh file back in place with
memory a
I will be happy to update the mailing list when I figure this out for
everyone's mutual entertainment.
--
Jeff Courtade
M: 240.507.6116
On Fri, Feb 15, 2019, 12:33 PM Erick Erickson wrote:
> Hmmm. I'm assuming that "nothing in the logs" is node/logs/solr.log, and
> that
> you'
This particular CVE came out on the mailing list Feb 12th:
CVE-2017-3164 SSRF issue in Apache Solr
I need to know what the exploit for this could be.
Can a user send a bogus shards param via a web request and get a local file?
What does an attack vector look like for this?
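Purely as a hedged illustration (hypothetical host names, not a tested exploit): the concern is that a request like the one below makes the public Solr node issue a GET to an attacker-chosen host, matching the "adjacent web endpoint via a GET request" description in the reply quoted further down, since before the fix there was no whitelist on the shards parameter:

  curl "http://public-solr:8983/solr/collection1/select?q=*:*&shards=internal-host:8080/some/internal/path"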
I am being aske
adjacent
> web endpoint via a GET request.
>
> Note that this can only impact you if your Solr instance can be directly
> accessed by untrusted sources.
>
> HTH
>
> On Thu, Feb 28, 2019 at 11:54 AM Jeff Courtade
> wrote:
>
> > This particular cve came out in the
The only way I found to track GC times was by turning on GC logging, then
writing a cron job data collection script and graphing it in Zabbix.
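A sketch of that setup, assuming a stock solr.in.sh on Java 8 (flag names are standard HotSpot options; the log path, Zabbix server/host/key, and the exact parsing are placeholders):

  # solr.in.sh: enable GC logging (the log typically ends up as solr_gc.log
  # under the Solr logs directory)
  GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
    -XX:+PrintGCApplicationStoppedTime"

  # cron script: push the most recent stop-the-world pause into Zabbix
  grep 'Total time for which application threads were stopped' /var/solr/logs/solr_gc.log \
    | tail -1 \
    | sed 's/.*stopped: \([0-9.]*\) seconds.*/\1/' \
    | xargs -I{} zabbix_sender -z zabbix01 -s solr-host -k solr.gc.stopped -o {}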
On Mon, Mar 18, 2019 at 12:34 PM Erick Erickson
wrote:
> Attachments are pretty aggressively stripped by the apache mail server, so
> it didn’t come throug
So,
I had a problem while at a customer site. They use Zabbix for data
collection and alerting.
The Solr server had been set up to use only JMX metrics.
The JVM was unstable and would lock up for a period of time, and the metrics
and counters would be all screwed up. Because it was using JMX to ale
that give you a good way to navigate the GC
> events, GCViewer is free though.
>
> Best,
> Erick
>
> > On Mar 18, 2019, at 10:17 AM, Jeff Courtade
> wrote:
> >
> > So,
> >
> > I had a problem when at a customer site. They use zabbix for data
> &
just am not
understanding something basic.
J
--
Jeff Courtade
M: 240.507.6116
ything.
>
> On Mon, Jun 4, 2018, 23:45 Jeff Courtade wrote:
>
> > Hi,
> >
> > This I think is a very simple question.
> >
> > I have a solr 4.3 master slave setup.
> >
> > Simple replication.
> >
> > The master and slave were both running
To be clear, I deleted the actual index files out from under the running
master.
On Mon, Jun 4, 2018, 2:25 PM Jeff Courtade wrote:
> So are you saying it should have?
>
> It really acted like a normal function this happened on 5 different pairs
> in the same way.
>
>
> On M
heck for the version and get the modified files.
>
> If replication is configured in slave you will see commands getting
> triggered and you could get some idea from there.
>
> Also you could paste that log if it is not clear.
>
> Regards,
> Aman
>
> On Mon, Jun 4, 201
I am thankful for that!
Could you point me at something that explains this maybe?
J
On Mon, Jun 4, 2018, 4:31 PM Shawn Heisey wrote:
> On 6/4/2018 12:15 PM, Jeff Courtade wrote:
> > This was strange as I would have thought the replica would have
> replicated
> > an empty ind
2018 at 5:44 PM, Walter Underwood
> wrote:
> > Check the logs. I bet it says something like “refusing to fetch empty
> index.”
> >
> > wunder
> > Walter Underwood
> > wun...@wunderwood.org
> > http://observer.wunderwood.org/ (my blog)
> >
>
Nothing in the logs; it's like it didn't happen.
So I think I need to address my log4j logging levels.
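A sketch of the kind of log4j.properties change that would surface replication activity on a Solr 4.x slave (class names as of 4.x; in 5.0 SnapPuller was renamed to IndexFetcher):

  # log4j.properties: make replication polling/fetching visible
  log4j.logger.org.apache.solr.handler.ReplicationHandler=DEBUG
  log4j.logger.org.apache.solr.handler.SnapPuller=DEBUG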
On Tue, Jun 5, 2018, 12:35 AM Jeff Courtade wrote:
> Yes unix.
>
> It was an amazing moment.
>
>
>
> On Mon, Jun 4, 2018, 11:28 PM Erick Erickson
> wrote:
ent...
Generally
--
Jeff Courtade
M: 240.507.6116
On Mon, Apr 15, 2019, 9:33 AM SOLR4189 wrote:
> Hi all,
>
> I have a collection with many shards. Each shard is in a separate SOLR node
> (VM) with a 40Gb index, 4 CPUs and SSD.
>
> When I run performance checking with 50GB RAM (10Gb
Hi, we have a new setup of Solr 7.7 without cloud, in a master/slave setup.
Periodically our core stops responding to queries and must be
restarted on the slave.
Two hosts:
is06 Solr 7.7 master
ss06 Solr 7.7 slave
Simple replication is set up, no SolrCloud.
On the primary, is06, we see this error