> That is not correct as of version 4.0.
>
> The only kind of update I've run into that cannot proceed at the same
> time as an optimize is a deleteByQuery operation. If you do that, then
> it will block until the optimize is done, and I think it will also
> block any update you do after it.
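For context, an optimize is just a request to the update handler. A minimal sketch of building that request, assuming Solr at localhost:8983 and a hypothetical core name "products":

```shell
# Build the optimize request; host, port and core name "products" are
# assumptions -- adjust them before actually running the printed curl line.
SOLR_URL="http://localhost:8983/solr"
CORE="products"
echo "curl \"$SOLR_URL/$CORE/update?optimize=true&maxSegments=1\""
```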
On 10/12/2016 12:18 AM, Reinhard Budenstecher wrote:
> Is my assumption correct that an OPTIMIZE of index would block all
> inserts? So that all processes have to pause when I will start an
> hour-running OPTIMIZE? If so, this would also be no option for the moment.
That is not correct as of version 4.0.
>
> That's considerably larger than you initially indicated. In just one
> index, you've got almost 300 million docs taking up well over 200GB.
> About half of them have been deleted, but they are still there. Those
> deleted docs *DO* affect operation and memory usage.
>
> Getting rid of deleted
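One way to get rid of deleted docs without a full optimize is a commit with expungeDeletes=true, which merges only segments that contain deletions. A sketch under the same assumptions (local Solr, hypothetical core name "products"):

```shell
# expungeDeletes merges away deleted docs in qualifying segments without
# rewriting the whole index. Host/port and core name are assumptions.
SOLR_URL="http://localhost:8983/solr"
CORE="products"
echo "curl \"$SOLR_URL/$CORE/update?commit=true&expungeDeletes=true\""
```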
>
> Just a sanity check. That directory mentioned, what kind of file system is
> that on? NFS, NAS, RAID?
I'm using Ext4 with options "noatime,nodiratime,barrier=0" on a hardware RAID10
with 4 SSD disks
>
> What I have been hoping to see is the exact text of an OutOfMemoryError
> in solr.log so I can tell whether it's happening because of heap space
> or some other problem, like stack space. The stacktrace on such an
> error might be helpful too.
>
Hi,
I did understand what you need; I'm a newbie
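To capture the exact OutOfMemoryError text requested above, a grep with a line of trailing context will pull the error plus the start of its stacktrace. A sketch with an illustrative sample file; in practice point grep at the real solr.log:

```shell
# Illustrative sample only -- the class in the stacktrace line is a
# placeholder. Grep your actual solr.log instead of this temp file.
cat > /tmp/solr-oom-sample.log <<'EOF'
java.lang.OutOfMemoryError: Java heap space
	at org.example.Placeholder.run(Placeholder.java:1)
EOF
# -A 1 also prints the first line after each match (the stacktrace start).
grep -A 1 "OutOfMemoryError" /tmp/solr-oom-sample.log
```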
On 10/10/2016 2:49 AM, Alexandre Rafalovitch wrote:
> Just a sanity check. That directory mentioned, what kind of file
> system is that on? NFS, NAS, RAID?
The original post says it's hardware RAID10 with locally installed SSD
disks. It doesn't mention what filesystem is on it. If I were buildi
> Really, there is nothing in solr.log. I did not change any option
> related to this in config. Solr died again some hours ago and the last
> entry is: 2016-10-09 22:02:31.051 WARN (qtp225493257-1097) [ ]
> o.a.s.h.a.LukeRequestHandler Error getting file length for
> [segments_9102]
That's a
Just a sanity check. That directory mentioned, what kind of file system is
that on? NFS, NAS, RAID?
Regards,
Alex
On 10 Oct 2016 1:09 AM, "Reinhard Budenstecher" wrote:
>
> That's considerably larger than you initially indicated. In just one
> index, you've got almost 300 million docs taking up well over 200GB.
> About half of them have been deleted, but they are still there. Those
> deleted docs *DO* affect operation and memory usage.
>
Yes, that's larger than
On 10/9/2016 1:59 PM, Reinhard Budenstecher wrote:
> Solr 6.2.1 on Debian Jessie, installed with:
> Actually, there are three cores and the UI gives me the following info:
> Num Docs: 148652589, Max Doc: 298367634, Size: 219.92 GB
> Num Docs: 37396140, Max Doc: 38926989, Size: 28.81 GB
> Num Docs: 8601222, Max Do
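Those UI numbers make the deleted-doc count easy to check: deleted docs = Max Doc minus Num Docs. For the largest core:

```shell
# Deleted docs = Max Doc - Num Docs (figures from the largest core above).
NUM_DOCS=148652589
MAX_DOC=298367634
DELETED=$((MAX_DOC - NUM_DOCS))
echo "deleted docs: $DELETED"                     # 149715045
echo "deleted pct: $((100 * DELETED / MAX_DOC))%" # 50%
```

That is the "about half of them have been deleted" observation in the quoted reply.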
> What version of Solr? How has it been installed and started?
>
Solr 6.2.1 on Debian Jessie, installed with:
apt-get install openjdk-8-jre-headless openjdk-8-jdk-headless
wget "http://www.eu.apache.org/dist/lucene/solr/6.2.1/solr-6.2.1.tgz" && tar
xvfz solr-*.tgz
./solr-*/bin/install_solr_ser
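Since the thread is about OOM crashes: with this install method, the service's JVM heap is set in the include file the installer creates (typically /etc/default/solr.in.sh). A sketch of the relevant line; the value is only an example, not a recommendation:

```shell
# /etc/default/solr.in.sh (created by the install script) -- heap size for
# the Solr service. "8g" is an example; size it to your index and RAM.
SOLR_HEAP="8g"
```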
On 10/9/2016 12:33 PM, Reinhard Budenstecher wrote:
> We have an ETL process which updates product catalog. This produces massive
> inserts on MASTER, but there are no reads. Often there are thousands or
> even hundreds of thousands of records per minute that were inserted. But
> sometimes I get a