Lucene does not use a logging framework, but if you are using Solr you can
route the infoStream logging to Solr's log files by setting an option in
solrconfig.xml. See
http://lucene.apache.org/solr/guide/6_6/indexconfig-in-solrconfig.html#IndexConfiginSolrConfig-OtherIndexingSettings
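For reference, that option lives in the <indexConfig> section of solrconfig.xml;
a minimal sketch (check the linked page for the exact details in your version)
would be:

    <indexConfig>
      <!-- Route Lucene's IndexWriter infoStream output into Solr's log file -->
      <infoStream>true</infoStream>
    </indexConfig>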
Hi,
I am using Solr 6.2.1 in my project, and I need to join more than two cores as
per the requirement:
core1: empid
core2: empid, sid, pid
core3: sid, pid
I want to join core2 and core3 on [core2.sid = core3.sid and core2.pid =
core3.pid], and the resulting records' core2.empid should then be joined with
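Not part of the original question, but for context: Solr's {!join} query parser
joins on a single from/to field pair, so a compound sid+pid join is not directly
expressible. A single-field sketch, assuming core2 and core3 live on the same
node and the query is run against core2, looks like this; the usual workaround
for the compound case is to index a combined sid_pid key in both cores:

    q={!join fromIndex=core3 from=sid to=sid}*:*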
On 7/27/2017 1:30 AM, Atita Arora wrote:
> What OS is Solr running on? I'm only asking because some additional
> information I'm after has different gathering methods depending on OS.
> Other questions:
>
> OpenJDK 64-Bit Server VM (25.141-b16) for linux-amd64 JRE
> (1.8.0_141-b16), built on Jul
On 7/27/2017 7:20 AM, Itay K wrote:
> I'm trying to measure Precision and recall for a search engine which is
> crawling data sources of an organization.
>
> Are there any best practices regarding these indexes and specific
> industries (e.g. for financial organizations, the recommended percentage
On 7/27/2017 10:57 AM, Nawab Zada Asad Iqbal wrote:
> I see a lot of discussion on this topic from almost 10 years ago: e.g.,
> https://issues.apache.org/jira/browse/LUCENE-1482
>
> For 4.5, I relied on 'System.out.println' for writing information for
> debugging in production.
>
> In 6.6, I notice
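For context, plain Lucene enables infoStream on IndexWriterConfig rather than
via a logging framework; a minimal sketch, where the analyzer is a placeholder,
would be:

    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.util.PrintStreamInfoStream;

    IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
    // Send the IndexWriter's diagnostic output to stdout (or any PrintStream)
    iwc.setInfoStream(new PrintStreamInfoStream(System.out));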
Hi,
I am working on a log management tool and am considering using Solr to
index/search the logs.
I have a few doubts about how to organize and create the cores.
The tool should process 200 million events per day with each event containing
40 to 50 fields. Currently I have planned to create a
You're better off just using one core. Perhaps think about pre-processing
the logs to "summarize" them into fewer "documents".
I do this, and in my situation I summarize things like user-hits-item, for
example: I find all the times a certain user had hits on a certain item
in one day and put tha
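As a made-up illustration, a summarized "user-hits-item" document for one day
might look something like this (field names are just examples):

    { "user": "u123", "item": "item42", "date": "2017-07-27", "hits": 17 }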
Hi all,
I’m using Lucene Solr (Lucidworks) as the search index; it’s a schemaless
search engine used to index Outlook files.
I have been using it since version 2.0.* and have an issue with querying
dates.
The older versions were mapping (date_created, mail from_date, etc.) to the
date datatype, making
This question has been asked before. I found a few postings to the Solr user
list and a couple elsewhere on the web.
But I am still not sure which is best.
My project currently has two distinct datasets (documents) with no shared
fields.
But at times, we need to query across both of them.
So we
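One option, assuming the two datasets stay in separate cores, is a distributed
query over both via the shards parameter (hosts and core names below are
placeholders, and both cores need the same uniqueKey field):

    http://localhost:8983/solr/core_a/select?q=foo&shards=localhost:8983/solr/core_a,localhost:8983/solr/core_b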
Thanks Shawn and everyone!
Solved.
2017-07-27 18:29 GMT-04:00 Shawn Heisey :
> On 7/25/2017 5:21 PM, Lucas Pelegrino wrote:
> > Trying to make solr work here, but I'm getting this error from this
> command:
> >
> > $ ./solr create -c products -d /Users/lucaswxp/reduza-solr/products/conf/
> >
Amrit,
Problem solved! My biggest mistake was in my SOURCE-side configuration. The
zkHost field needed the entire zkHost string, including the CHROOT indicator. I
suppose that should have been obvious to me, but the examples only showed the
IP Address of the target ZK, and I made a poor assumpt
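For anyone hitting the same thing, a rough sketch of a source-side /cdcr
handler with the chroot included (hosts, ports, and collection names are
placeholders):

    <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
      <lst name="replica">
        <!-- Include the chroot (here /solr) in the target zkHost string -->
        <str name="zkHost">10.0.0.5:2181,10.0.0.6:2181/solr</str>
        <str name="source">sourceCollection</str>
        <str name="target">targetCollection</str>
      </lst>
    </requestHandler>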
You don't need to start CDCR on the target cluster. The other steps are exactly
what I did. After disabling the buffer on both target and source, the tlog files
are purged according to the specs.
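For reference, the buffer is disabled through the CDCR API on each cluster,
along these lines (host and collection name are placeholders):

    http://localhost:8983/solr/<collection>/cdcr?action=DISABLEBUFFER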
-- Thank you
Sean
From: Patrick Hoeffel <patrick.hoef...@polarisalpha.com>
Date: Friday, Jul 28, 2017, 4