e/src/test/org/apache/solr/schema/CopyFieldTest.java?
>
> No need to make this a SolrCloud test.
>
> Best,
> Erick
>
> > On Aug 15, 2019, at 11:31 AM, Chris Troullis
> wrote:
> >
> > Hi all,
> >
> > We recently upgraded from Solr 7.3 to 8.1, an
Hi all,
We recently upgraded from Solr 7.3 to 8.1, and noticed that the maxChars
property on a copy field is no longer functioning as designed. Per the most
recent documentation it looks like there have been no intentional changes
to the functionality of this property, so I assume this is a bug
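For anyone following along, the maxChars limit in question is set on the copyField directive in the schema; a minimal example (field names here are illustrative, not from this thread):

```xml
<!-- Copy at most the first 256 characters of "title" into "text".
     Content beyond maxChars is dropped from the destination field. -->
<copyField source="title" dest="text" maxChars="256"/>
```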
't able to determine the cause, but at least we are back to normal
query times now.
Chris
On Fri, Jun 15, 2018 at 8:06 AM, Chris Troullis
wrote:
> Thanks Shawn,
>
> As mentioned previously, we are hard committing every 60 seconds, which we
> have been doing for years, and have ha
oint it's all we can think of to do.
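For context, a 60-second hard-commit interval like the one described above is normally configured in solrconfig.xml along these lines (the values shown are the common pattern, not necessarily this poster's exact config):

```xml
<autoCommit>
  <maxTime>60000</maxTime>            <!-- hard commit every 60 seconds -->
  <openSearcher>false</openSearcher>  <!-- flush to disk without opening a new searcher -->
</autoCommit>
```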
Thanks,
Chris
On Thu, Jun 14, 2018 at 6:23 PM, Shawn Heisey wrote:
> On 6/12/2018 12:06 PM, Chris Troullis wrote:
> > The issue we are seeing is with 1 collection in particular, after we set
> up
> > CDCR, we are getting extremely slow r
at 2:37 PM, Susheel Kumar
wrote:
> Is this collection drastically different from the others in terms of
> schema, number of fields, total documents, etc.? Is it sharded, and if so,
> can you look at which shard is taking more time with shards.info=true?
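For reference, the per-shard timing check suggested above is done by adding the shards.info parameter to a query; the collection name and query here are placeholders:

```
http://localhost:8983/solr/mycollection/select?q=*:*&shards.info=true
```

The response then includes a shards.info section reporting the elapsed time contributed by each shard, which makes a slow shard easy to spot.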
>
> Thnx
> Susheel
>
> On Wed, Jun 1
ferent.
>
> Now, assuming all that's inconclusive, I'm afraid the next step would
> be to throw a profiler at it. Maybe pull a few stack traces.
>
> Best,
> Erick
>
> On Wed, Jun 13, 2018 at 6:15 AM, Chris Troullis
> wrote:
> > Thanks Erick. A little more info
say that absent configuring CDCR things seem to run fine. So
> I'd look at the tlogs and my commit intervals. Once the tlogs are
> under control then move on to other possibilities if the problem
> persists...
>
> Best,
> Erick
>
>
> On Tue, Jun 12, 2018 at 11:06 AM,
Hi all,
Recently we have gone live using CDCR on our 2 node solr cloud cluster
(7.2.1). From a CDCR perspective, everything seems to be working
fine...collections are staying in sync across the cluster, everything looks
good.
The issue we are seeing is with 1 collection in particular, after we se
> Twitter http://twitter.com/lucidworks
> LinkedIn: https://www.linkedin.com/in/sarkaramrit2
> Medium: https://medium.com/@sarkaramrit2
>
> On Tue, Apr 17, 2018 at 8:58 PM, Susheel Kumar
> wrote:
>
> > DISABLEBUFFER on source cluster would solve this problem.
> >
> > On Tue, Apr 17,
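For anyone searching later: DISABLEBUFFER is an action on the CDCR request handler, issued against the source cluster, e.g. (host and collection name are placeholders):

```
http://source-host:8983/solr/mycollection/cdcr?action=DISABLEBUFFER
```

With buffering disabled, the source stops retaining every update in its transaction logs indefinitely, which is what allows the tlogs to be cleaned up once the target has acknowledged the updates.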
Hi,
We are attempting to use CDCR with solr 7.2.1 and are experiencing odd
behavior with transaction logs. My understanding is that by default, solr
will keep a maximum of 10 tlog files or 100 records in the tlogs. I assume
that with CDCR, the records will not be removed from the tlogs until it ha
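The default retention mentioned above corresponds to the updateLog settings in solrconfig.xml; a sketch with the stock defaults made explicit:

```xml
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
  <int name="numRecordsToKeep">100</int>   <!-- keep at least 100 records per log -->
  <int name="maxNumLogsToKeep">10</int>    <!-- keep at most 10 tlog files -->
</updateLog>
```

As the poster suspects, CDCR overrides this cleanup: updates are retained until the target cluster acknowledges them, and indefinitely while buffering is enabled.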
Nevermind, I found it. The link you posted links me to SOLR-12036 instead
of SOLR-12063 for some reason.
On Tue, Mar 20, 2018 at 1:51 PM, Chris Troullis
wrote:
> Hey Amrit,
>
> Did you happen to see my last reply? Is SOLR-12036 the correct JIRA?
>
> Thanks,
>
> Chris
>
Hey Amrit,
Did you happen to see my last reply? Is SOLR-12036 the correct JIRA?
Thanks,
Chris
On Wed, Mar 7, 2018 at 1:52 PM, Chris Troullis wrote:
> Hey Amrit, thanks for the reply!
>
> I checked out SOLR-12036, but it doesn't look like it has to do with CDCR,
> and
look.
>
> Amrit Sarkar
> Search Engineer
> Lucidworks, Inc.
> 415-589-9269
> www.lucidworks.com
> Twitter http://twitter.com/lucidworks
> LinkedIn: https://www.linkedin.com/in/sarkaramrit2
> Medium: https://medium.com/@sarkaramrit2
>
> On Wed, Mar 7, 2018 at 1:35 AM, C
Hi all,
We recently upgraded to Solr 7.2.0 as we saw that there were some CDCR bug
fixes and features added that would finally let us be able to make use of
it (bi-directional syncing was the big one). The first time we tried to
implement we ran into all kinds of errors, but this time we were able
rying manually it seems to work fine, but
something is going on in some of our test environments.
Thanks,
Chris
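For readers hitting this thread later, the bi-directional CDCR syncing mentioned above is configured by pointing a CdcrRequestHandler at the other cluster on each side; a minimal sketch (the zkHost and collection names are placeholders):

```xml
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">other-cluster-zk:2181</str>
    <str name="source">mycollection</str>   <!-- collection on this cluster -->
    <str name="target">mycollection</str>   <!-- collection on the other cluster -->
  </lst>
</requestHandler>
```

The same handler definition, with the zkHost reversed, goes in the other cluster's solrconfig.xml so that each side forwards updates to the other.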
On Thu, Nov 9, 2017 at 2:52 PM, Chris Troullis wrote:
> Thanks Mike, I will experiment with that and see if it does anything for
> this particular issue.
>
> I implemented S
> improvements in using multiple
> threads to resolve deletions:
> http://blog.mikemccandless.com/2017/07/lucene-gets-
> concurrent-deletes-and.html
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Tue, Nov 7, 2017 at 2:26 PM, Chris Troullis
> wrote:
>
>
through the same optimistic locking that regular updates
> use (i.e. the _version_ field). But I'm kind of guessing here.
>
> Best,
> Erick
>
> On Tue, Nov 7, 2017 at 8:51 AM, Shawn Heisey wrote:
> > On 11/5/2017 12:20 PM, Chris Troullis wrote:
> >> The issue I am
It's worse when it comes to distributed indexes. If the updates
> > were sent out in parallel you could end up in situations where one
> > replica contained 132 and another didn't depending on the vagaries of
> > thread execution.
> >
> > Now I didn't write
plica.
Has anyone else experienced this/have any thoughts on what to try?
On Sun, Nov 5, 2017 at 2:20 PM, Chris Troullis wrote:
> Hi,
>
> I am experiencing an issue where threads are blocking for an extremely
> long time when I am indexing while deleteByQuery is also running.
>
> Se
Hi,
I am experiencing an issue where threads are blocking for an extremely long
time when I am indexing while deleteByQuery is also running.
Setup info:
-Solr Cloud 6.6.0
-Simple 2 Node, 1 Shard, 2 replica setup
-~12 million docs in the collection in question
-Nodes have 64 GB RAM, 8 CPUs, spinni
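A workaround often suggested for this class of problem (not something confirmed in this thread) is to avoid deleteByQuery while indexing: first resolve the query to document IDs, then issue delete-by-id requests, which do not force the same global blocking that deleteByQuery can. A sketch against the JSON update API; the collection, query, and IDs are placeholders:

```
GET  /solr/mycollection/select?q=status:obsolete&fl=id&rows=1000

POST /solr/mycollection/update
{"delete": ["id-1", "id-2", "id-3"]}
```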
terval to something short, but
> that has other problems.
>
> see: SOLR-6606, but it looks like other priorities have gotten in the
> way of it being committed.
>
> Best,
> Erick
>
> On Tue, Aug 1, 2017 at 1:50 PM, Chris Troullis
> wrote:
> > Hi,
> >
>
Hi,
I think I know the answer to this question, but just wanted to verify/see
what other people do to address this concern.
I have a Solr Cloud setup (6.6.0) with 2 nodes, 1 collection with 1 shard
and 2 replicas (1 replica per node). The nature of my use case requires
frequent updates to Solr, a
Shalin,
Thanks for the response and explanation! I logged a JIRA per your request
here: https://issues.apache.org/jira/browse/SOLR-10695
Chris
On Mon, May 15, 2017 at 3:40 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Sun, May 14, 2017 at 7:40 PM, Chris Troullis
Hi,
I've been experimenting with various sharding strategies with Solr cloud
(6.5.1), and am seeing some odd behavior when using the implicit router. I
am probably either doing something wrong or misinterpreting what I am
seeing in the logs, but if someone could help clarify that would be awesome.
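For reference, a collection using the implicit router is created by naming its shards explicitly, e.g. (collection and shard names are placeholders):

```
http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&router.name=implicit&shards=shard1,shard2&replicationFactor=2
```

Each document must then carry its own routing information, either via a _route_ parameter on the update request or a router.field defined at collection creation; unlike the compositeId router, Solr does not hash-route documents automatically.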
> Zookeeper. 300 collections isn't all that much in recent Solr
> installations. All that filtered through how beefy your hardware is of
> course.
>
> Startup is an interesting case, but I've put 1,600 replicas on 4 Solr
> instances on a Mac Pro (400 each). You can config
fast but
> unpredictable. "Documents may take up to 5 minutes to appear and
> searches will usually take less than a second" is nice and concise. I
> have my expectations. "Documents are searchable in 1 second, but the
> results may not come back for between 1 and 10 seconds"
Hi,
I use Solr to serve multiple tenants, and currently all tenants' data
resides in one large collection, and queries have a tenant identifier. This
works fine with aggressive autowarming, but I have a need to reduce my NRT
search latency to seconds as opposed to the minutes it is at now,
whi
Hi!
I am looking for some advice on a sharding strategy that will produce
optimal performance in the NRT search case for my setup. I have come up
with a strategy that I think will work based on my experience, testing, and
reading of similar questions on the mailing list, but I was hoping to run
m