Those constraints can be easily set if you are using Docker. The problem,
however, is that at least up to Oracle Java 8, and I believe quite a bit
further, the JVM is not at all aware of those limits. That's why, when
running Solr in Docker, you really need to make sure that you set the
JVM memory limits explicitly.
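For example (a sketch assuming the official Solr Docker image, which passes the SOLR_HEAP variable through to bin/solr; the sizes and tag here are only illustrative):

    # Cap the container at 4 GB and the Solr heap at 2 GB, leaving the
    # remainder for the OS page cache inside the container.
    docker run -d --name solr \
      -m 4g \
      -e SOLR_HEAP=2g \
      -p 8983:8983 \
      solr:7.5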
Bah, I should have said when you create a collection. You get the
following if you create your collection using the default schema:
WARNING: Using _default configset with data driven schema
functionality. NOT RECOMMENDED for production use.
To turn off: bin/solr config -c eoe -p 8982 -act
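For reference, that warning normally continues along these lines ('eoe' and 8982 are just the collection name and port from this example):

    bin/solr config -c eoe -p 8982 \
      -action set-user-property \
      -property update.autoCreateFields \
      -value false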
Hi,
When Solr authentication is enabled, which is better: using ZK ACLs, or
enabling authentication for the whole ZooKeeper ensemble itself? Or is there
any other better option?
Thanks,
Yamuna J
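For reference, if you go the ZK ACL route, Solr's side of it is configured through the credentials/ACL provider system properties. A minimal sketch, assuming the stock VM-params providers (usernames and passwords are placeholders):

    # solr.in.sh - protect the znodes Solr creates with digest ACLs
    SOLR_ZK_CREDS_AND_ACLS="-DzkACLProvider=org.apache.solr.common.cloud.VMParamsAllAndReadonlyDigestZkACLProvider \
      -DzkCredentialsProvider=org.apache.solr.common.cloud.VMParamsSingleSetCredentialsDigestZkCredentialsProvider \
      -DzkDigestUsername=admin-user -DzkDigestPassword=CHANGEME \
      -DzkDigestReadonlyUsername=readonly-user -DzkDigestReadonlyPassword=CHANGEME"
    SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS"

Note that ACLs set this way apply to the znodes Solr itself creates; securing the rest of the ensemble is a ZooKeeper-side configuration.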
On 10/11/2018 4:51 AM, yasoobhaider wrote:
Hi Shawn, thanks for the inputs.
I have uploaded the gc logs of one of the slaves here:
https://ufile.io/ecvag (should work till 18th Oct '18)
I uploaded the logs to gceasy as well and it says that the problem is
consecutive full GCs. According to the
On 10/11/2018 10:07 AM, Mikhail Ibraheem wrote:
Hi Erick, Thanks for your reply. No, we aren't using schemaless mode. The
schemaFactory element is not explicitly declared in our solrconfig.xml.
Schemaless mode is not turned on by the schemaFactory config element.
The default configurations that Solr ships with have s
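For reference, the piece of the default configset that actually drives the data-driven behavior is the add-unknown-fields update processor chain, gated on a user property. A simplified sketch of what recent _default configs contain (not a verbatim copy):

    <!-- solrconfig.xml: the chain that adds unknown fields to the managed
         schema. It is only the default chain while update.autoCreateFields
         is true. -->
    <updateRequestProcessorChain name="add-unknown-fields-to-the-schema"
                                 default="${update.autoCreateFields:true}">
      <processor class="solr.AddSchemaFieldsUpdateProcessorFactory"/>
      <processor class="solr.LogUpdateProcessorFactory"/>
      <processor class="solr.DistributedUpdateProcessorFactory"/>
      <processor class="solr.RunUpdateProcessorFactory"/>
    </updateRequestProcessorChain>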
Erick,
I don't get any such message when I start Solr - could you share what
that curl command should be?
You suggest modifying solrconfig.xml - could you be more explicit on
what changes to make?
Terry
On 10/11/2018 11:52 AM, Erick Erickson wrote:
> bq: Also why solr updates and persists the
Hi Erick, Thanks for your reply. No, we aren't using schemaless mode. The
schemaFactory element is not explicitly declared in our solrconfig.xml. Also
we have only one replica and one shard.
Any help?
Thanks,
Mikhail
On Thursday, 11 October 2018, 17:53:01 EET, Erick Erickson
wrote:
bq: Also why solr updates and pe
bq: Also why solr updates and persists the managed-schema while ingesting data?
I'd guess you are using "schemaless mode", which is expressly
recommended _against_ for production systems. See "Schemaless Mode" in
the reference guide.
I'd disable schemaless mode (when you start Solr there should b
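The curl form of that switch (what the collection-creation warning points at) would look something like this, assuming the Config API on the default port and a collection name of "mycollection":

    curl http://localhost:8983/solr/mycollection/config \
      -H 'Content-type:application/json' \
      -d '{"set-user-property": {"update.autoCreateFields": "false"}}'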
On 10/11/2018 9:06 AM, Bisonti Mario wrote:
I start up the Tika server from the command line:
java -jar /opt/tika/tika-server-1.19.1.jar
With ManifoldCF, I configured a connector to Solr.
When I start ingesting PDF and .xls documents, I see in the Tika server:
so it seems that the Tika server process
Hello.
I start up the Tika server from the command line:
java -jar /opt/tika/tika-server-1.19.1.jar
With ManifoldCF, I configured a connector to Solr.
When I start ingesting PDF and .xls documents, I see in the Tika server:
INFO Setting the server's publish address to be http://localhost:9998/
INFO
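A quick way to check that the standalone Tika server is extracting documents, independent of ManifoldCF and Solr, is to PUT a file at it directly; /tika returns the extracted text and /meta the metadata (the file path is a placeholder):

    # extracted body text
    curl -T /path/to/sample.pdf http://localhost:9998/tika
    # extracted metadata
    curl -T /path/to/sample.pdf http://localhost:9998/meta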
Hi,
We upgraded to Solr 7.5, and we try to ingest into Solr using SolrJ with
concurrent updates (many threads). We are getting this exception:
o.a.s.s.ManagedIndexSchema Bad version when trying to persist schema using 1 due to:
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode =
Ba
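If the fields are known up front, one way to keep many indexing threads from racing to persist the managed-schema is to declare the fields once, before indexing starts, via the Schema API. A sketch using SolrJ (the ZK host, collection name, and field are placeholders):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.schema.SchemaRequest;

    public class PredefineFields {
        public static void main(String[] args) throws Exception {
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("zkhost:2181"), Optional.empty()).build()) {
                Map<String, Object> field = new HashMap<>();
                field.put("name", "title_s");   // hypothetical field
                field.put("type", "string");
                field.put("stored", true);
                // One schema change done once up front, instead of many indexing
                // threads persisting managed-schema while documents arrive.
                new SchemaRequest.AddField(field).process(client, "mycollection");
            }
        }
    }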
Shawn,
On 10/11/18 12:54 AM, Shawn Heisey wrote:
> On 10/10/2018 10:08 PM, Sourav Moitra wrote:
>> We have a Solr server with 8gb of memory. We are using solr in
>> cloud mode, solr version is 7.5, Java version is Oracle Java 9
>> and settings for X
I have to echo what others have said. An 80G heap is waaay outside the norm,
especially when you consider the size of your indexes and the number of docs.
Understanding why you think you need that much heap should be your top
priority. As has already been suggested, ensuring docValues are set for
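For example, docValues are enabled per field (or per field type) in the schema; a hypothetical field used for sorting or faceting would look like:

    <!-- managed-schema: docValues keeps sort/facet data in memory-mapped
         files rather than building it on the Java heap -->
    <field name="manufacturer" type="string" indexed="true" stored="true" docValues="true"/>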
Don't know if this directly affects what you're trying to do. But I
have an 8GB server, and when I run "solr status" I can see what % of the
automatic memory allocation is being used. As it turned out, Solr would
occasionally exceed that (and crash).
I then began starting Solr with the additio
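(Presumably the additional flag in question is the -m option, which sets both the minimum and maximum heap; 4g below is just an example size.)

    bin/solr start -m 4g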
We are relatively far behind with this one. The collections that we
experience the problem on are currently running on 6.3.0. If it's easy
enough for you to upgrade, it might be worth a try, but I didn't see any
changes to the RealTimeGet in either of the 7.4/5 change logs after a
cursory glance.
Hey Chris,
Which version of Solr are you running? I was thinking of maybe trying
another version to see if it fixes the issue.
On Thu, Oct 11, 2018 at 8:11 AM Chris Ulicny wrote:
> We've also run into that issue of not being able to reproduce it outside of
> running production loads.
>
> Howeve
We've also run into that issue of not being able to reproduce it outside of
running production loads.
However, we haven't been encountering the problem in live production quite
as much as we used to, and I think that might be from the /get requests
being spread out a little more evenly over the ru
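(For context, the /get requests in question are real-time get lookups by id, e.g., assuming a collection named "mycollection":

    curl "http://localhost:8983/solr/mycollection/get?id=doc1"
    curl "http://localhost:8983/solr/mycollection/get?ids=doc1,doc2,doc3"
)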
Hi Shawn, thanks for the inputs.
I have uploaded the gc logs of one of the slaves here:
https://ufile.io/ecvag (should work till 18th Oct '18)
I uploaded the logs to gceasy as well and it says that the problem is
consecutive full GCs. According to the solution they have mentioned,
increasing the
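(If the fix ends up being a larger heap and/or different collector settings, on a master/slave setup that is typically done in solr.in.sh; the values below are only illustrative, not a recommendation:

    # solr.in.sh
    SOLR_HEAP="16g"
    # or tune the collector explicitly, e.g. G1 instead of the default CMS flags
    GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250 -XX:+ParallelRefProcEnabled"
)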
Besides the heap, the JVM has other memory areas, like the metaspace:
https://docs.oracle.com/javase/9/tools/java.htm
-> MaxMetaspaceSize
Search for "size" in that document and you'll find tons of further
settings. I have not tried out Oracle Java 9 yet.
regards,
Hendrik
On 11.10.2018 06:08, So
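(If you do need to cap the metaspace, it is just another JVM option you can append in solr.in.sh; the size below is only an example:

    # solr.in.sh
    SOLR_OPTS="$SOLR_OPTS -XX:MaxMetaspaceSize=256m"
)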