Hello Mikhail,
I am sorry, I forgot to include the following in my main query JSON:
"fields": "* [child limit=-1]".
I am extracting all the fields along with their nested docs.
So I want them as part of the results and not as part of the facet. I would
expect that, out of the docs that I listed, it should return only
"i
Just a couple of points I’d make here. I did some testing a while back which
showed that if no commit is made (hard or soft), there are internal memory
structures holding tlogs, and it will continue to get worse the more docs
that come in. I don’t know if that’s changed in later versions. I’d
recommend doi
Thank you so much Erick.
Will check these out.
Regards,
Rohan Kasat
On Tue, Jun 4, 2019 at 12:54 PM Erick Erickson
wrote:
>
> It’s usually far easier to create a new collection in your upper
> environment and index to _that_. Once the indexing is done, use the
> Collections API CREATEALIAS comm
This should be considered a bug. Feel free to file a JIRA for this.
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, Jun 4, 2019 at 9:16 AM aus...@3bx.org.INVALID
wrote:
> Just wanted to provide a bit more information on this issue after
> experimenting a bit more.
>
> The error I've describe
From what I know, the configuration files need to be already in the test/resource
directory before running. I copy them to the directory using the
maven-antrun-plugin in the generate-test-sources phase. And the framework can
"create a collection” without the configfiles, but it will obviously f
We have occasionally been seeing an error such as the following:
2019-06-03 23:32:45.583 INFO (indexFetcher-45-thread-1) [ ]
o.a.s.h.IndexFetcher Master's generation: 1424625
2019-06-03 23:32:45.583 INFO (indexFetcher-45-thread-1) [ ]
o.a.s.h.IndexFetcher Master's version: 1559619115480
201
On the surface, this znode already exists:
/solr/configs/collection2
So it looks like somehow you're
> On Jun 4, 2019, at 12:29 PM, Pratik Patel wrote:
>
> /solr/configs/collection2
It’s usually far easier to create a new collection in your upper environment
and index to _that_. Once the indexing is done, use the Collections API
CREATEALIAS command to point traffic to the new collection. You can then use
the old one to index to and use CREATEALIAS to point to that one, sw
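As a concrete sketch of that CREATEALIAS call (the alias and collection names here are invented placeholders):

```
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=myalias&collections=my_collection_v2
```

Once the alias points at the new collection, clients keep querying the alias name and never need to know which underlying collection is live.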
Hello Everyone,
I am trying to run a simple unit test using the Solr test framework. At this
point, all I am trying to achieve is to be able to upload some
configuration and create a collection using the Solr test framework.
The following is the simple code which I am trying to run.
private static final Str
Hi,
In our setup we have two SolrCloud environments running Solr 7.5.
Specific to the question: we have one collection with 3 shards and 3
replicas in the lower environment and a newly created mirrored collection
in Production.
I wanted to know about approaches to copy the index for collection
You might want to test with a soft commit interval of hours vs. 5m for heavy
indexing + light query -- even though there is internal memory structure
overhead with no soft commits, in our testing a 5m soft commit (via
commitWithin) has resulted in a very, very large heap usage, which I suspect
is because of other
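For illustration, a 5-minute commitWithin is expressed in milliseconds on the update request; the collection name and document below are made-up placeholders:

```
POST /solr/mycollection/update?commitWithin=300000
[ { "id": "doc1" } ]
```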
Hi,
I am trying to make use of the User Defined cache functionality to optimise a
particular workflow.
We are using Solr 7.4.
Step 1: I noticed that first we would have to add a Custom Cache entry in
solrconfig.xml.
What is its Config API alternative for SolrCloud?
I couldn’t find one at,
https://lucene.ap
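If your version supports it, the Config API's add-cache command is the usual alternative for registering a user-defined cache without editing solrconfig.xml by hand. This is only a sketch (the cache name, class, and sizes are made up; please verify the command's availability against the 7.4 reference guide):

```json
{
  "add-cache": {
    "name": "myUserCache",
    "class": "solr.LRUCache",
    "size": 4096,
    "initialSize": 1024,
    "autowarmCount": 512
  }
}
```

The body would be POSTed to /solr/<collection>/config.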
Correct, do not optimize.
“Optimize” was a bad choice for this action. It is a forced merge.
With master/slave, it means the slaves must always copy the entire
400 GB index. Without optimize, they would only need to copy the
changed segments.
Solr automatically merges segments for you.
wunder
Hello, Jai.
I'm not sure I understand. Where do you need that max child price: in the
parent results or in the facet?
On Tue, Jun 4, 2019 at 11:12 AM Jai Jamba
wrote:
> Hi,
> Below is my document structure and underneath I have mentioned my multi
> select facet query. I have one query related to fi
Thanks for letting us know. Yeah, many thousands of fields is an anti-pattern.
At some point I’d like to put in a limit or log warning or something so people
would get a warning when something like this happens.
And to make matters more “interesting”, the meta-data associated with the
fields does
I need to update that; I didn’t understand the bits about retaining internal
memory structures at the time.
> On Jun 4, 2019, at 2:10 AM, John Davis wrote:
>
> Erick - These conflict, what's changed?
>
> So if I were going to recommend settings, they’d be something like this:
> Do a hard commit
Hi Midas,
Your question will probably attract more useful answers if you provide
better details: what version of Solr, how many nodes, and any associated
error messages from the logs. I see you asking questions that nobody can
answer because we don't know the details of your system, or why you are
Just wanted to provide a bit more information on this issue after
experimenting a bit more.
The error I've described below only seems to occur when I'm
collapsing/expanding on an integer field. If I switch the field type to a
string, no errors occur if there are missing field values within the
do
Hi Edwin,
Thanks for the additional datapoint. It seemed to work for me, but we
don't really understand the problem yet, so maybe it's not as solid a
workaround as I'd hoped. I'm curious to hear whether it works for
Colvin.
To double check though: forwardCredentials is only supported in Solr >
Hi Sotiris,
First off, forget what I said earlier about the "all" permission.
What I said is mostly correct, but I had forgotten about some of the
other behavior here that complicates things some.
I replicated the behavior you're seeing and spent a bit of time
tracing things through on the Solr s
Is a 400GB index okay?
Should we shard it?
When should we start caring about index size?
On Tue, Jun 4, 2019 at 3:04 PM Midas A wrote:
> So we should not optimize our index?
>
> On Tue, Jun 4, 2019 at 2:37 PM Toke Eskildsen wrote:
>
>> On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
>
So we should not optimize our index?
On Tue, Jun 4, 2019 at 2:37 PM Toke Eskildsen wrote:
> On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
> > Index size is 400GB. We use a master/slave architecture.
> >
> > Commit is taking time, while we are not able to perform an optimize.
>
> Why do you want to o
Erick - These conflict, what's changed?
So if I were going to recommend settings, they’d be something like this:
Do a hard commit with openSearcher=false every 60 seconds.
Do a soft commit every 5 minutes.
vs
Index-heavy, Query-light
Set your soft commit interval quite long, up to the maximum la
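For reference, the first set of recommendations maps onto a solrconfig.xml sketch like this (a 60-second hard commit that does not open a searcher, plus a 5-minute soft commit):

```xml
<autoCommit>
  <maxTime>60000</maxTime>           <!-- hard commit every 60 seconds -->
  <openSearcher>false</openSearcher> <!-- do not open a new searcher on hard commit -->
</autoCommit>
<autoSoftCommit>
  <maxTime>300000</maxTime>          <!-- soft commit every 5 minutes -->
</autoSoftCommit>
```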
On Tue, 2019-06-04 at 11:48 +0530, Midas A wrote:
> Index size is 400GB. We use a master/slave architecture.
>
> Commit is taking time, while we are not able to perform an optimize.
Why do you want to optimize in the first place? What are you hoping to
achieve?
There should be an error message in your So
Hi Martin,
What fieldType are you using for the field “Sagstitel”? Is it the same as
other fields?
Regards,
Edwin
On Mon, 3 Jun 2019 at 16:06, Martin Frank Hansen (MHQ) wrote:
> Hi,
>
> I am having some difficulties making highlighting work. For some reason
> the highlighting feature only work
Hi,
Below is my document structure, and underneath I have mentioned my multi
select facet query. I have one query related to filtering where I am not
sure how I can get the child documents whose field value equals the
maximum value of that field among the available child docs ("Price" field
fo
Almost forgot to report back; maybe it helps somebody else. It turned
out to be caused by a feature in our software being used in a way we did
not anticipate.
That resulted in a lot (> 100,000) of different dynamic fields, which
probably is an anti-pattern on its own, but the slow commits were