Hi,
I have been facing an issue on my Solr server. My Solr version is solr-5.5.5. I
have connected slave servers to a master server. Sometimes replication
fails or stops on the slave side.
I get this error on the master side (please help me with a solution):
2019-08-19 22:29:55.573 ERROR
Hi,
we usually run SolrCloud clusters with 5 nodes, up to 2000 collections,
and a replication factor of 2, so we have close to 1000 cores per node.
That is on Solr 7.6 but I believe 7.3 worked as well. We tuned a few
caches down to a minimum as otherwise the memory usage goes up a lot.
The Solr
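A hypothetical solrconfig.xml excerpt of the kind of cache tuning described above; the actual cache names and sizes used are not given in the message, so these values are only illustrative:

```xml
<!-- Illustrative minimal cache sizes for nodes hosting ~1000 cores;
     the message does not state the exact values that were used. -->
<filterCache class="solr.FastLRUCache" size="64" initialSize="0" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="64" initialSize="0" autowarmCount="0"/>
<documentCache class="solr.LRUCache" size="64" initialSize="0" autowarmCount="0"/>
```

With many cores per JVM, per-core cache memory multiplies by the core count, which is why shrinking them matters here.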
Hello, Shawn and Community:
Thank you for the quick response and the useful information.
As far as I understand, the main cause of this problem is something like a
"timeout."
This timeout could perhaps happen due to insufficient memory for our index
size, so the waiting time is not enough to grace
Hi
I have a SolrCloud cluster, but it becomes unstable when the number of
collections is large: about 1000 replicas/cores per Solr node.
To solve this issue, I have read the performance guide:
https://cwiki.apache.org/confluence/display/SOLR/SolrPerformanceProblems
I noticed there is a sentence in the SolrCloud section:
"R
So in Solr I have a data structure made up of Restaurant documents
that have Deals attached as _childDocuments to a Restaurant. However, with my
current query
{!parent which='content_type:restaurant'}_text_=kids
&fl=*,[child parentFilter=content_type:restaurant
childFilter="content_type:dea
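The message above is truncated, but for reference a complete request of that shape might look like the following; the child type value ("content_type:deal") and the `:` in the main query are assumptions based on the snippet:

```text
q={!parent which='content_type:restaurant'}_text_:kids
&fl=*,[child parentFilter=content_type:restaurant childFilter="content_type:deal"]
```

The `{!parent}` parser selects Restaurant parents whose children match, and the `[child]` transformer re-attaches the matching Deal children to each returned parent.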
What it says ;)
My guess is that your configuration mentions the field “features” in, perhaps
carrot.snippet or carrot.title.
But it’s a guess.
Best,
Erick
> On Aug 28, 2019, at 5:18 PM, Joe Obernberger
> wrote:
>
> Hi All - trying to use clustering with SolrCloud 8.2, but getting this err
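Erick's guess can be illustrated with a hypothetical excerpt from the /clustering handler configuration in solrconfig.xml; the field names shown are assumptions, not taken from the actual config:

```xml
<!-- Hypothetical excerpt: if carrot.title or carrot.snippet points at a
     field (here "features") that does not exist in the target collection,
     the clustering request fails with
     "Query Field 'features' is not a valid field name". -->
<str name="carrot.title">title</str>
<str name="carrot.snippet">features</str>
```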
No, you cannot just use the collection name. Replicas are just cores.
You can host many replicas of a single collection on a single Solr node
in a single CoreContainer (there’s only one per Solr JVM). If you just
specified a collection name how would the code have any clue which
of the possibiliti
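The naming convention behind Erick's point can be sketched with plain JDK code. The helper and sample core names below are hypothetical; in a real plugin the list would come from `CoreContainer.getAllCoreNames()`:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CoreResolver {
    // SolrCloud core names follow the pattern
    // <collection>_<shard>_replica_<type><n>, so the cores backing a
    // collection on this node can be picked out by prefix.
    static List<String> coresForCollection(List<String> allCores, String collection) {
        return allCores.stream()
                .filter(c -> c.startsWith(collection + "_shard"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> cores = List.of(
                "collection1_shard1_replica_n4",
                "collection1_shard2_replica_n6",
                "other_shard1_replica_n1");
        // prints [collection1_shard1_replica_n4, collection1_shard2_replica_n6]
        System.out.println(coresForCollection(cores, "collection1"));
    }
}
```

This is only a sketch of the naming scheme; with multiple shards, which core holds a given document still depends on routing, which is why `getCore()` needs the exact core name.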
Wait, would I need to use a core name like collection1_shard1_replica_n4,
etc.? Can't I use the collection name? What if I have multiple shards; how
would I know where the document that I am working with currently lives?
I would prefer to use the collection name and expect the core
informatio
Hi All - trying to use clustering with SolrCloud 8.2, but getting this
error:
"msg":"Error from server at null: org.apache.solr.search.SyntaxError: Query Field
'features' is not a valid field name",
The URL, I'm using is:
http://solrServer:9100/solr/DOCS/select?q=*%3A*&qt=/clustering&clusterin
Hmmm, should work. What is your core_name? There are strings like
collection1_shard1_replica_n4 and core_node6. Are you sure you're using the
right one?
> On Aug 28, 2019, at 3:56 PM, Arnold Bronley wrote:
>
> Hi,
>
> In a custom Solr plugin code,
> req.getCore().getCoreContainer().getCore(core
Wittenberg, Lucas wrote:
> As suggested I switched to using DocValues and SortedDocValues.
> Now QTime is down to an average of 1100, which is much, much better
> but still far from the 30 I had with SOLR 4.
> I suppose it is due to the block-oriented compression you mentioned.
I apologize for be
On 8/28/2019 1:58 PM, Pushkar Raste wrote:
What does this exception really affect.
I believe it is related in some way to how Lucene uses Java's MMAP
capability to access data on disk. The MMAP functionality that Lucene
uses required changes to properly support later Java versions. There
w
It is simply a risk. It is not tested. Any functionality may fail eventually or
have unknown side effects in the long run. It is also not clear to me why you
want to update Java but not Solr. If you want the latest security fixes, bug
fixes, and new features, then I would first go for a new Solr
@Shawn: You are right. In my case, the collection name is the same as the
configuration name, and that is why it works. Do you know if there is some
other property I can use that refers to the collection name instead?
On Wed, Aug 28, 2019 at 3:52 PM Shawn Heisey wrote:
> On 8/28/2019 1:42 PM, Arnold
I understand that the problem will not be fixed. What I am trying to
understand is: even with the exception (the only exception I saw after
running my Solr 4 cluster on JDK 11 for 4 weeks), I am able to index and query
documents just fine.
What does this exception really affect.
On Wed, Aug 28, 2019 at
@Furkan: You might be right. I am getting this permission error when I
start Solr, but it hasn't caused any visible issues yet.
/opt/solr/bin/solr: line 2130: /var/solr/solr-8983.pid: Permission denied
On Wed, Aug 21, 2019 at 6:33 AM Martijn Koster
wrote:
> Hi Arnold,
>
> It’s hard to say wi
Hi,
In a custom Solr plugin,
req.getCore().getCoreContainer().getCore(core_name) is returning null even
though the core named core_name is loaded and up in Solr. req is an instance
of the SolrQueryRequest class. I am using Solr 8.2.0 in SolrCloud mode.
Any ideas on why this might be the case?
On 8/28/2019 1:42 PM, Arnold Bronley wrote:
I have configured the SolrCloud collection-wise only and there is no other
way. The way you have defined 3 zkHosts (comma separated values for zkHost
property), I tried that one before as it was more intuitive. But it did not
work for me. I had to use 3
Hi Erick,
I have configured the SolrCloud collection-wise only and there is no other
way. The way you have defined 3 zkHosts (comma separated values for zkHost
property), I tried that one before as it was more intuitive. But it did not
work for me. I had to use 3 different replica elements each fo
On 8/27/2019 8:22 AM, Pushkar Raste wrote:
I am trying to run Solr 4 on JDK 11. Although this version is not supported
on JDK 11, it seems to be working fine except for the error/exception "Unmap
hack not supported on this platform".
What are the risks/downsides of running into this?
The first version
Can someone help me with this?
On Tue, Aug 27, 2019 at 10:22 AM Pushkar Raste
wrote:
> Hi,
> I am trying to run Solr 4 on JDK11, although this version is not supported
> on JDK11 it seems to be working fine except for the error/exception "Unmap
> hack not supported on this platform".
> What the
Ok, thank you Erick and Toke.
As suggested I switched to using DocValues and SortedDocValues.
Now QTime is down to an average of 1100, which is much, much better but still
far from the 30 I had with SOLR 4.
I suppose it is due to the block-oriented compression you mentioned. Not sure
if it is pos
On 8/28/2019 4:01 AM, Kayak28 wrote:
I use Solr on Windows servers and cannot shut down Solr successfully.
When I try to stop Solr using solr.cmd, which is kicked off from Windows Task
Manager, it "looks" like Solr stops without any problem.
Here "looks" means that at least the log file that Solr wrote
Hi all.
In our team we thought of a tricky solution for queries with long
highlighting times, for example highlighting that takes more than 25 seconds.
So we created a component that wraps Solr's highlighting component in this
way:
public void inform(SolrCore core) {
. . . .
sub
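The component code above is truncated and the Solr classes are not shown, but the timeout idea it describes can be sketched with plain JDK concurrency. The class and method names here are hypothetical, not the actual component:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutHighlight {
    // Run the expensive work (e.g. highlighting) with a deadline and fall
    // back to an un-highlighted result instead of stalling the whole query.
    static <T> T runWithTimeout(Callable<T> task, long timeoutMs, T fallback) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<T> future = pool.submit(task);
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            return fallback; // skip highlighting rather than block the request
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // Fast task completes in time; slow task falls back.
        System.out.println(runWithTimeout(() -> "<em>kids</em>", 1000, "kids"));
        System.out.println(runWithTimeout(() -> { Thread.sleep(5000); return "x"; },
                50, "fallback"));
    }
}
```

A real SearchComponent would do this inside its process()/inform() hooks; this sketch only shows the deadline mechanics.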
Hi Salmaan,
Are you still seeing this behavior, or were you able to figure things out?
I just got a chance to try out the security.json in Solr 7.6 myself,
and I can't reproduce the behavior you're seeing.
It might be helpful to level set here. Make sure that our
security.json settings and our
If I try to add any metadata in a field like this:
doc.addField("meta", metadata.get("dc_creator"));
1. I don't get that field in the results, though it has been created. And
following is the definition in the schema:
2. When I check it in my code for the value using,
System.out.println(me
Yup! I have already made stored=true for _text_. I will see to it. No
worries.
BUT, I really need HELP with the separation of content & metadata. I checked,
but there isn't any field that is copying values into the '_text_' field.
The only definition I have for _text_ is:
For this : do
Not sure. You have an
section and
section. Frankly I’m not sure which one will be used for the index-time chain.
Why don’t you just try it?
change
to
reload and go. It’d take you 5 minutes and you’d have your answer.
Best,
Erick
> On Aug 28, 2019, at 1:57 AM, Bjarke Buur Mortensen
> w
CURRENTLY, I AM GETTING
"_text_" :
[" \n \n date 2019-06-24T09:52:33Z \n cp:revision 5 \n Total-Time 1 \n
extended-properties:AppVersion 15. \n stream_content_type
application/vnd.openxmlformats-officedocument.presentationml.presentation \n
meta:paragraph-count 18 \n meta:word-count
Attachments are aggressively stripped by the mailing list; you'll have to
either post it someplace and provide a link, or paste the relevant sections
into the e-mail.
You’re not getting any metadata because you’re not adding any metadata to the
documents with
doc.addField(“metadatafield1”, value_of_
Attaching managed-schema.xml
-Original Message-
From: Khare, Kushal (MIND) [mailto:kushal.kh...@mind-infotech.com]
Sent: 28 August 2019 16:30
To: solr-user@lucene.apache.org
Subject: RE: Require searching only for file content and not metadata
I already tried this example, I am currently
I already tried this example; I am currently working on it. I have compiled
the code, and it is indexing the documents. But it is not adding anything to
the field _text_. Also, it is not giving any metadata.
doc.addField("_text_", textHandler.toString()); --> here,
textHandler.toString() is blank f
Yes, I have already gone through the reference guide. It's all because of the
guide and documentation that I have reached this stage.
Well, I am indexing rich document formats like .docx, .pptx, .pdf, etc.
The metadata I am talking about is that currently Solr puts all the data like
author
Hello, Community:
I use Solr on Windows servers and cannot shut down Solr successfully.
When I try to stop Solr using solr.cmd, which is kicked off from Windows Task
Manager, it "looks" like Solr stops without any problem.
Here "looks" means that at least the log file that Solr wrote does not seem to
ha
Yes, I am using solr-5.5.5.
This error is intermittent. I don't think there is any issue with master
connection limits. This error is accompanied by this on the master side:
ERROR (qtp1450821318-60072) [ x:sitecore_web_index]
o.a.s.h.ReplicationHandler Unable to get file names for indexCommit
This looks like ample memory to get the index chunk.
Also, I looked at the IndexFetcher code. I remember you were using Solr
5.5.5, and the only reason, in my view, this would happen is when the index
chunk is not downloaded, as can also be seen in the error (Downloaded
0!=123), which clearly states th
On 8/27/2019 7:18 AM, Khare, Kushal (MIND) wrote:
Basically, the problem I am facing is that I am getting the textual content +
other metadata in my _text_ field, but I want only the textual content
written inside the document.
I tried various Request Handler Update Extract configurations, but n
Hi,
Memory details for slave1:
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/xvda1       99G   40G    55G   43%  /
tmpfs           7.8G     0   7.8G    0%  /dev/shm
Memory details for slave2:
Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/xvda1       99G   45G    49G   48%  /
tmpfs
On 8/28/2019 12:55 AM, Vignan Malyala wrote:
I'm planning to create a separate core for each of my clients in Solr.
Can I create around 500 cores in Solr? Is it a good idea?
For each client I have around 10 records on average currently.
There is no limit that I know of to the number of cores.
You need to provide a bit more detail. What is your schema? How is the
document structured? Where do you get the metadata from?
Have you read the Solr Reference Guide? Have you read a book about Solr?
> Am 28.08.2019 um 08:10 schrieb Khare, Kushal (MIND)
> :
>
> Could anyone please help
We run SOLR with replica n=2 and happily torture them with ~1500
cores and above; each core contains at least 10,000 docs, and most of them
are over a million. It works.
We have 256GB RAM in the servers; the allocation for SOLR is 140G.
Mit freundlichem Gruß / kind regards
Wolfgang Freudenberger