Hi Apache Support,
May I ask what the best practice is for housekeeping archival logs for
Solr & ZooKeeper? I mean the practices recommended by Apache for keeping
the middleware healthy.
The current version of Solr is 6.1.0 and ZooKeeper is 3.4.6. Thanks in advance.
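For what it's worth, ZooKeeper 3.4 already ships a built-in purge facility
for old snapshots and transaction logs. A minimal zoo.cfg sketch; the
retention values are examples, not an official Apache recommendation:

    # Keep only the 3 most recent snapshots and their transaction logs:
    autopurge.snapRetainCount=3
    # Run the purge task every 24 hours (0, the default, disables it):
    autopurge.purgeInterval=24
    # One-off manual cleanup is also possible: bin/zkCleanup.sh -n 3

On the Solr 6.1 side, runtime log rolling is controlled by
server/resources/log4j.properties (MaxFileSize / MaxBackupIndex on the
rolling file appender), and bin/solr archives the previous logs on restart.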
I got it to work by removing 'file:'.
Hi, Rahul.
Thank you for your reply.
I already tried that, and I could see which files were read (via
FileDataSource) and which files were added (via the UpdateLog).
So, by cross-checking both, I could determine the bad files.
But I would like to identify the bad files directly.
Thanks,
Yasufumi
On Mon, Jul 9, 2018 at 12:47, Rahul Singh
Hi guys,
In SolrCloud mode, where should the OpenNLP models be put?
Upload them to ZooKeeper?
Testing on Solr 7.3.1, an absolute path on the local host does not seem to
work, and a model cannot be uploaded into ZooKeeper if its size exceeds 1 MB.
Regards,
Jerome
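A known workaround, assuming the 1 MB limit being hit is ZooKeeper's default
znode size cap (jute.maxbuffer): raise it consistently on every ZK server
and on every client JVM that reads or writes the large node. A hedged
sketch, with 10 MB as an example value and example paths:

    # On each ZooKeeper server (e.g. in conf/java.env):
    export SERVER_JVMFLAGS="-Djute.maxbuffer=10485760"

    # Start Solr with the same property as an extra JVM argument:
    bin/solr start -c -a "-Djute.maxbuffer=10485760"

    # Upload the model (paths are examples); the JVM running zkcli must
    # also be started with the same -Djute.maxbuffer property:
    server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
      -cmd putfile /configs/myconf/opennlp/en-ner-person.bin en-ner-person.bin

Note this is an operational tradeoff rather than a default Apache
recommendation; oversized znodes can slow down ZooKeeper syncing.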
On Wed, Apr 18, 2018 at 9:54 AM Steve Rowe wrote:
> Hi Ale
Still not working; same issue, documents are not getting pushed to the index.
-
Regards
Shruti
Have you tried changing the log level?
https://lucene.apache.org/solr/guide/7_2/configuring-logging.html
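The page above covers the Admin UI route; the same change can be made over
HTTP through the logging endpoint. The DIH logger name below is an
assumption based on the package name:

    # Raise DIH logging to DEBUG at runtime (reverts on restart):
    curl "http://localhost:8983/solr/admin/info/logging?set=org.apache.solr.handler.dataimport:DEBUG"

    # Inspect current logger levels:
    curl "http://localhost:8983/solr/admin/info/logging?wt=json"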
--
Rahul Singh
rahul.si...@anant.us
Anant Corporation
On Jul 8, 2018, 8:54 PM -0500, Yasufumi Mizoguchi wrote:
Hi,
I am trying to index files into Solr 7.2 using the data import handler with
the onError=skip option.
But I am struggling to determine the skipped documents, as the logs do not
say which file was bad.
So, how can I identify those files?
Thanks,
Yasufumi
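For context, a minimal sketch of the kind of data-config.xml in question;
every name and path here is invented for illustration, only onError="skip"
is the point:

    <dataConfig>
      <dataSource type="FileDataSource" encoding="UTF-8"/>
      <document>
        <!-- outer entity lists the files, inner entity parses each one;
             onError="skip" skips a bad file instead of aborting -->
        <entity name="files" processor="FileListEntityProcessor"
                baseDir="/data/docs" fileName=".*\.xml" rootEntity="false">
          <entity name="doc" processor="XPathEntityProcessor"
                  url="${files.fileAbsolutePath}" forEach="/doc"
                  onError="skip">
            <field column="id" xpath="/doc/id"/>
          </entity>
        </entity>
      </document>
    </dataConfig>

With the DIH loggers at DEBUG, the URL being read is logged as each entity
row is processed, which may help correlate a skip with the file that caused it.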
Has anyone had any luck using the Solr 7.3+ exporter for metrics collection on
a Solr instance with the basic auth plugin enabled? The exporter starts without
issue, but I have had no luck specifying the credentials when the exporter tries
to call the metrics API. The documentation does not appear to cover this.
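One thing worth trying, since the exporter is built on SolrJ: SolrJ's
preemptive basic-auth client factory, enabled through system properties.
Whether the 7.3 exporter's HTTP client honors it is an assumption to verify;
the credentials and URLs below are examples:

    export JAVA_OPTS="-Dbasicauth=solr:SolrRocks \
      -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory"

    ./bin/solr-exporter -p 9854 -b http://localhost:8983/solr \
      -f ./conf/solr-exporter-config.xml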
I have also faced this problem when the table has a composite primary key;
below is the workaround I went with.
The deltaQuery retrieves a concatenated key value using the time criteria (so
that it retrieves only the modified rows), and that value is then used in the
deltaImportQuery's where clause, as in the sketch below.
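A minimal sketch of that pattern; table and column names are invented:

    <entity name="item" pk="solr_id"
            query="SELECT CONCAT(col_a, '_', col_b) AS solr_id, t.* FROM my_table t"
            deltaQuery="SELECT CONCAT(col_a, '_', col_b) AS solr_id
                        FROM my_table
                        WHERE last_modified &gt; '${dataimporter.last_index_time}'"
            deltaImportQuery="SELECT CONCAT(col_a, '_', col_b) AS solr_id, t.*
                              FROM my_table t
                              WHERE CONCAT(col_a, '_', col_b) = '${dataimporter.delta.solr_id}'"/>

The deltaQuery only needs to return the concatenated key; the
deltaImportQuery then re-selects the full row for each returned key via
${dataimporter.delta.<pk>}.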
On Sun, Jul 8, 2
Why do you want to add such long strings to your index in the first
place? They are almost useless for search; you want a tokenized field
(text_general is a good place to start) if you want to search for
words within the string.
"The number of bytes limit" is 32K or so, right? What do you want to
do with
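For illustration, assuming a stock managed-schema: Lucene caps a single
indexed term at 32766 bytes, which is what a long string field hits, since
the whole value is one term; a tokenized field avoids the cap because each
word is indexed as its own small term:

    <!-- one untokenized term per value; subject to the ~32K term cap -->
    <field name="body_exact" type="string" indexed="true" stored="true"/>
    <!-- tokenized; searchable by words within the value -->
    <field name="body" type="text_general" indexed="true" stored="true"/>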
OK, I missed the bit about
"...no explicit fl was specified..."
Please raise a JIRA. Also, make sure you highlight the bit about no fl
list being specified.
Patches welcome of course!
Best,
Erick
On Sat, Jul 7, 2018 at 1:29 PM, Ganesh Sethuraman wrote:
> Yes, I have the same problem too. DocValues=
The data config I am using now:
*managed-schema*
data_id
-
Regards
Shruti
My Oracle table doesn't have a primary key, and delta import requires one;
that's why I am creating one by concatenating 2 columns. For now, just for
testing, I am using only one column.
I am using Solr 6.1.0. This is the response I am getting. But every
time I run delta import, i
Mikhail,
Actually, your suggestion worked! I was making a typo in the field name. Thank
you very much!
TK
p.s. I have found a mention of the _query_ "magic field" in the Solr Reference Guide.
On 7/8/18 11:04 AM, TK Solr wrote:
Thank you.
This is more promising because I see the second clause in parsedquery. But it
is hitting zero documents.
The debug query output looks like this; explain is empty:
rawquerystring":"_query_:{!parent which=\"isParent:true\" v='attrname:genre AND
attrvalue:drama'} AND _query_:{!parent
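For reference, a sketch of the query shape being debugged. The first clause
is copied from the debug output above; the second was truncated there, so
its attrname/attrvalue values are placeholders:

    q=_query_:"{!parent which='isParent:true' v='attrname:genre AND attrvalue:drama'}"
      AND _query_:"{!parent which='isParent:true' v='attrname:PLACEHOLDER AND attrvalue:PLACEHOLDER'}"
    debugQuery=true

One common cause of zero hits with {!parent} is a which= filter that does
not match every parent document (and only parent documents), so it is worth
double-checking that isParent:true is set on all parents.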