I am using external file fields with large external files, and I noticed
that a Solr core reload loads the external files twice: once for the
firstSearcher event and once for the newSearcher event.
Does this mean a core reload triggers both events? What is the
benefit of, or reason for, firing both events at the same time? I see this on
v. 4
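For context, both searcher events can register warm-up listeners in solrconfig.xml, which is why work can happen on each event during a reload. A minimal illustrative fragment (the warming query shown is a placeholder, not from my config):

```xml
<!-- solrconfig.xml: firstSearcher fires for the first searcher a core opens;
     newSearcher fires for every searcher opened after that (commit, reload) -->
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">static warming query</str></lst>
  </arr>
</listener>
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries"/>
</listener>
```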
searchers and searchers are per core.
>
> I don't particularly see the benefit of firing them both either. Not
> sure which one makes
> the most sense though.
>
> Best,
> Erick
>
> On Mon, Oct 3, 2016 at 7:10 PM, Jihwan Kim wrote:
> > I am using external file
I read your first reference and ran the following command in the
Solr install directory. I am using v6.2.0 and v4.10.4; both work.
bin/solr start -f -a "-Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=7666"
On Tue, Oct 4, 2016 at 5:26 PM, John Bickerstaff
wrote:
> All,
>
> I've
I would like to submit an enhancement request to the Solr project.
FileFloatSource creates cached values from an external file whenever a core
is reloaded and/or a new searcher is opened. However, the external
files may change much less frequently than that.
With a larger document and larger external f
getFloats.
vals = new float[reader.maxDoc()];
latestCache.put(ffs.field.getName(), vals);
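A hedged sketch of the enhancement I have in mind: reuse the cached array when the external file's modification time is unchanged, and rebuild only when it changes. All names here (`ExternalFloatCache`, `getFloats` parameters) are hypothetical, not Solr's actual FileFloatSource API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: reload external-file floats only when the file changed. */
public class ExternalFloatCache {
    private static final Map<String, CachedVals> CACHE = new ConcurrentHashMap<>();

    static final class CachedVals {
        final long lastModified;
        final float[] vals;
        CachedVals(long lastModified, float[] vals) {
            this.lastModified = lastModified;
            this.vals = vals;
        }
    }

    /** Returns the cached array if the file mtime is unchanged; otherwise rebuilds. */
    static float[] getFloats(String field, long fileLastModified, int maxDoc) {
        CachedVals c = CACHE.get(field);
        if (c != null && c.lastModified == fileLastModified && c.vals.length == maxDoc) {
            return c.vals;                    // cache hit: file unchanged since last load
        }
        float[] vals = new float[maxDoc];     // real code would parse the external file here
        CACHE.put(field, new CachedVals(fileLastModified, vals));
        return vals;
    }

    public static void main(String[] args) {
        float[] a = getFloats("price", 100L, 5);
        float[] b = getFloats("price", 100L, 5);  // same mtime: same array instance
        float[] c = getFloats("price", 200L, 5);  // new mtime: rebuilt
        System.out.println((a == b) + " " + (a != c));
    }
}
```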
Am I missing something? Any feedback would help me understand Solr
better.
Thank you,
Jihwan
On Tue, Oct 4, 2016 at 8:35 PM, Yonik Seeley wrote:
> On Tue, Oct 4, 2016 at 10:09 PM, Jihwan Kim
correctly?
Thanks.
On Tue, Oct 4, 2016 at 8:59 PM, Jihwan Kim wrote:
> "The array is indexed by internal lucene docid," --> If I understood
> correctly, that indexing is done inside the for loop I briefly showed.
>
> In the following code I used, the 'vals' points to an
Got it! Thanks a lot!
On Oct 4, 2016 9:29 PM, "Yonik Seeley" wrote:
> On Tue, Oct 4, 2016 at 11:23 PM, Jihwan Kim wrote:
> > Hi Yonik,
> > I thought about your comment, and I think I now understand what you were saying.
> > The for loop in the getFloats method assigns a
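A minimal sketch of that docid-indexed assignment, with hypothetical input arrays (this is not the actual FileFloatSource parsing code):

```java
/** Sketch: the values array has one slot per internal Lucene docid. */
public class DocidIndexedVals {
    /**
     * Places each external-file value at the slot of the docid it maps to.
     * docids and fileVals are parallel arrays, as if parsed from the file.
     */
    static float[] fill(int maxDoc, int[] docids, float[] fileVals) {
        float[] vals = new float[maxDoc];      // indexed by internal docid
        for (int i = 0; i < docids.length; i++) {
            vals[docids[i]] = fileVals[i];     // value lands at its docid's slot
        }
        return vals;                           // docids absent from the file stay 0.0f
    }

    public static void main(String[] args) {
        float[] vals = fill(5, new int[]{3, 0}, new float[]{1.5f, 2.5f});
        System.out.println(vals[3] + " " + vals[0]);
    }
}
```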
Hi,
We are using Solr 4.10.4 and experiencing an out-of-memory exception. The
problem seems to be caused by the following code and scenario.
This is the last part of the fetchLatestIndex method in SnapPuller.java:
// we must reload the core after we open the IW back up
if (reloadCore) {
searcher process hangs and CPU usage remains high.
We are also using a large external file field.
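A hedged sketch of the reload-after-replication step this code path performs: run the core reload on its own thread and wait for it to finish. The helper names here (`CoreReloader`, `reloadCoreAndWait`) are hypothetical, not Solr's actual API:

```java
import java.util.concurrent.CountDownLatch;

/** Hypothetical sketch of reloading a core from a separate thread. */
public class ReloadAfterPull {
    interface CoreReloader { void reload() throws Exception; }

    /**
     * Runs the reload on its own thread and blocks until it completes,
     * so the replication thread itself is not inside the reload call.
     */
    static void reloadCoreAndWait(CoreReloader reloader) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            try {
                reloader.reload();
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                done.countDown();      // signal completion even on failure
            }
        }).start();
        done.await();                  // wait for the reload to finish
    }

    public static void main(String[] args) throws Exception {
        StringBuilder log = new StringBuilder();
        reloadCoreAndWait(() -> log.append("reloaded"));
        System.out.println(log);
    }
}
```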
On Thu, Oct 20, 2016 at 9:11 AM, Jihwan Kim wrote:
> A little more about "At certain timing, this method also throw "
> SnapPuller - java
count is 0, and it goes on to all the other processing in close().
On Thu, Oct 20, 2016 at 8:44 AM, Jihwan Kim wrote:
> Hi,
> We are using Solr 4.10.4 and experiencing an out-of-memory exception. The
> problem seems to be caused by the following code and scenario.
>
> This is the last
usage
and a slow response time. The attached image shows the hung thread.
On Thu, Oct 20, 2016 at 9:29 AM, Shawn Heisey wrote:
> On 10/20/2016 8:44 AM, Jihwan Kim wrote:
> > We are using Solr 4.10.4 and experiencing an out-of-memory exception. The
> > problem seems to be caused by the
look through any of the memory profilers and try to
> catch the objects (and where they're being allocated). The second is
> to look at the stack trace (presuming you don't have an OOM killer
> script running) and perhaps triangulate that way.
>
> Best,
> Erick
>
> On
SnapPuller.openNewSearcherAndUpdateCommitPoint(SnapPuller.java:680)
On Thu, Oct 20, 2016 at 10:14 AM, Jihwan Kim wrote:
> Good points.
> I am able to reproduce this with the periodic snap puller and only one HTTP
> request. When I load Solr on Tomcat, the initial memory usage was
> between 6
why is there a setting (maxWarmingSearchers) that even lets you have more
than one:
Isn't it also for the case of frequent updates? For example, one update is
committed. While the searcher for that commit is warming, another update is
made, so the new commit also goes through its own warming. If
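For reference, the cap itself is a solrconfig.xml setting; a minimal fragment (2 is, as far as I know, the default in this version, shown here only as an example):

```xml
<!-- solrconfig.xml: upper bound on searchers warming concurrently;
     a commit that would exceed it fails rather than stacking warmers -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```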