On Tue, Oct 29, 2019 at 4:34 AM UMA MAHESWAR wrote:
> hi all,
> SolrCore Initialization Failures
> {{core}}: {{error}}
> Please check your logs for more information
>
>
> {{exception.msg}}
>
> here is my log file:
> 2019-10-29 06:03:30.995 INFO (main) [ ] o.e.j.u.log Logging initializ
Is it possible to join documents and use a field from the "from"
documents to sort the results? For example, I need to search
"employees" and sort on different fields of the "company" each employee
is joined to. What would that query look like? We've looked at various
resources but haven't found an answer.
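For reference, a minimal sketch of the join query parser syntax, assuming a
separate "companies" index whose id is stored as company_id on each employee
document (all field and index names here are hypothetical):

    q={!join fromIndex=companies from=id to=company_id}name:Acme
    sort=hire_date desc

Note that the join parser only returns documents from the "to" side (the
employees here), and sort fields must exist on those documents; fields of the
"from"-side company are not carried across. The usual workaround is to
denormalize the company fields you need to sort on onto each employee
document at index time.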
> se, could mean a re-write of the entire index. So it'd be an
> expensive operation. Usually deletes are removed in the normal course of
> indexing as segments are merged together.
>
> On Sat, Sep 27, 2014 at 8:42 PM, Eric Katherman wrote:
I'm running into memory issues and wondering if I should be using
expungeDeletes on commits. The server in question at the moment has
450k documents in the collection and represents 15GB on disk. There are
also 700k+ "Deleted Docs" and I'm guessing that is part of the disk
space consumption b
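For anyone searching the archives, an explicit commit with expungeDeletes can
be issued against the update handler like this (collection name made up):

    curl 'http://localhost:8983/solr/collection1/update?commit=true&expungeDeletes=true'

This merges away segments carrying deletions without rewriting every segment
the way optimize does, but as noted above it can still be an expensive
operation on a large index.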
e",
"q": "values_field_66_date:[* TO NOW/DAY+1DAY]",
"TZ:'America/Los_Angeles'": "",
"_": "1384487341231",
"wt": "json",
"rows": "25"
}
https://gist.gi
We're still not seeing the proper result. I've included a gist of the query
and its debug result. This was run on a clean index running 4.4.0 with just
one document. That document has a date of 11/15/2013; in the included TZ it
is still the 14th, yet I still get that document returned.
Can anybody provide any insight about using the tz param? It doesn't seem to
be affecting date math and /DAY rounding. What format do the tz values need
to be in? I'm not finding any documentation on this.
Sample query we're using:
path=/select
params={tz=America/Chicago&sort=id+desc&s
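For what it's worth, the parameter is documented as uppercase TZ, and it only
changes the timezone used for date-math rounding (NOW/DAY and the like) and
date faceting; it does not reformat stored dates. A sketch using the field
from the debug output above:

    q=values_field_66_date:[* TO NOW/DAY+1DAY]
    TZ=America/Los_Angeles
    wt=json
    rows=25

With TZ=America/Los_Angeles, NOW/DAY rounds to midnight Pacific time instead
of midnight UTC. In the debug params above, the entire string
TZ:'America/Los_Angeles' appears as a parameter name with an empty value, so
Solr would have ignored it.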
Stats:
Default config for 4.3.1 on a high-memory AWS instance using Jetty.
Two collections, each with fewer than 700k docs.
We seem to hit performance lags when doing large commits. Our front-end
service allows customers to import data, which is stored in Mongo and then
indexed
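One common way to smooth out large commits is to rely on autoCommit instead
of big explicit commits from the client: a hard commit with
openSearcher=false for durability, plus a soft commit (or a single client
commit at the end of the import) for visibility. A sketch of the relevant
solrconfig.xml section, with made-up intervals:

    <autoCommit>
      <!-- hard commit every 60s: flushes to disk, no new searcher -->
      <maxTime>60000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>
    <autoSoftCommit>
      <!-- open a new searcher (visibility) every 5 minutes -->
      <maxTime>300000</maxTime>
    </autoSoftCommit>

Most of the cost of a big commit is opening and warming the new searcher, so
decoupling durability from visibility usually helps.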