Hi,
I want to add a parameter to the handler for health checks.
In our case, we want to add a parameter like "failIfEmptyCores" because we
want the state of a node with no core to be an error.
I don't think it will change the existing behavior if the default is set to
false.
What do you think about t
Hi All,
We are using the Luke API in order to get all dynamic field names from our
collection:
/solr/collection/admin/luke?wt=csv&numTerms=0
This worked fine in 6.2.1 but it's no longer deterministic in 8.6.1 - it looks
like it queries a single random shard.
I've tried using /solr/collection/sel
This is my JavaScript code, from which I am calling Solr, which has a
loaded Nutch core (index).
My JavaScript client (running on a Tomcat server) and the Solr
server are on the same machine (10.21.6.100). Maybe it is due to
cross-domain reference issues, or something is missing; I don't know.
I expected Respo
Hi
I am using Solr 6.1.0. My SOLR_TIMEZONE=UTC in solr.in.cmd.
My current Solr server machine time zone is also UTC.
One of my collections has the field below in its schema.
Suppose my current Solr server machine time is 2020-10-01 10:00:00.000. I have
one document in that collection and in that doc
First of all, I’d just use a stand-alone program to do your
processing for a number of reasons, see:
https://lucidworks.com/post/indexing-with-solrj/
1- I suspect your connection will be closed eventually. Since it’s expensive to
open one of these, the driver may keep it open for a while.
2
harjags wrote
> Below errors are very common in 7.6 and we have solr nodes failing with
> tanking memory.
>
> The request took too long to iterate over terms. Timeout: timeoutAt:
> 162874656583645 (System.nanoTime(): 162874701942020),
> TermsEnum=org.apache.lucene.codecs.blocktree.SegmentTermsEnum
I apologize for sending this email again; I don't mean to spam the mailing list,
but I am looking for urgent help.
We are using Apache Solr 7.7 on the Windows platform. The data is synced to Solr
in batches using Solr.Net commits. The
document size is huge (~0.5GB
Hello,
We are using Solr 8.5.2
We are having trouble dealing with network errors between a Solr node and
a client.
In our situation, our Solr nodes and ZooKeeper hosts are healthy and can communicate
with each other, and all our collections are up and healthy.
When we simulate a network proble
Hello,
I am trying to use a Streaming Expression to query only a subset of the shards
of a collection.
I expected to be able to use the "shards" parameter as on a regular query on
"/select", for instance, but this appears not to work, or I don't know how to do it.
Is this somehow a feature/restri
I can’t think of an easy way to do this in Solr.
Do a bunch of string searches on the query on the client side. If any of them
match,
make a “no hits” result page.
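The check described above can be sketched in Python; the banned-term list and
the function name are illustrative, not from this thread:

```python
import re

# Hypothetical banned-term list; substitute your own terms.
BANNED_TERMS = ["cigarette", "tobacco"]

# Compile once: match any banned term, case-insensitively, anywhere in the query.
BANNED_RE = re.compile("|".join(map(re.escape, BANNED_TERMS)), re.IGNORECASE)

def should_block(query: str) -> bool:
    """True when the query contains a banned term and the client
    should render a "no hits" page instead of querying Solr."""
    return BANNED_RE.search(query) is not None

print(should_block("cheap Cigarette holders"))  # True
print(should_block("pipe fittings"))            # False
```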
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Sep 30, 2020, at 11:56 PM, Derek Poh
Well, when not splitting on whitespace you can use a CharFilter for regex
replacements [1] to clear the entire search string if a banned word is found
anywhere in the string:
.*(cigarette|tobacco).*
[1]
https://lucene.apache.org/solr/guide/6_6/charfilterfactories.html#CharFilterFactories-solr.P
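For illustration, a schema fragment along those lines might look like this
(the field type name is made up; pattern and replacement follow the
CharFilterFactories documentation):

```xml
<fieldType name="text_filtered" class="solr.TextField">
  <analyzer>
    <!-- If the whole value contains a banned word, the pattern matches
         the entire string and replaces it with nothing before tokenization. -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern=".*(cigarette|tobacco).*"
                replacement=""/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
</fieldType>
```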
Hi All,
We have 2 collections and we are using basic authentication against Solr,
configured in security.json. Is it possible to configure it in such a way
that we have different credentials for each collection? Please advise if
there is any other approach I can look into.
Example: user1:passwor
On 10/1/2020 3:55 AM, Sunil Dash wrote:
This is my JavaScript code, from which I am calling Solr, which has a
loaded Nutch core (index).
My JavaScript client (running on a Tomcat server) and the Solr
server are on the same machine (10.21.6.100). Maybe it is due to
cross-domain reference issues, or something
On 10/1/2020 6:57 AM, Manisha Rahatadkar wrote:
We are using Apache Solr 7.7 on the Windows platform. The data is synced to Solr
in batches using Solr.Net commits. The
document size is huge (~0.5GB average) and Solr indexing is taking a long
time. Total document
On 10/1/2020 4:24 AM, Nussbaum, Ronen wrote:
We are using the Luke API in order to get all dynamic field names from our
collection:
/solr/collection/admin/luke?wt=csv&numTerms=0
This worked fine in 6.2.1 but it's no longer deterministic in 8.6.1 - it looks
like it queries a single random shard.
https://lucene.apache.org/solr/guide/8_6/authentication-and-authorization-plugins.html
*Authentication* is global, but *Authorization* can be configured to use
rules that restrict permissions on a per collection basis...
https://lucene.apache.org/solr/guide/8_6/rule-based-authorization-plugin.
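As an illustration only, a security.json along those lines might look like the
sketch below. The user, role, and collection names are placeholders, and real
credentials entries are base64-encoded salted SHA-256 hashes, elided here:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": true,
    "credentials": {
      "user1": "<hash> <salt>",
      "user2": "<hash> <salt>"
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "read", "collection": "collection1", "role": "coll1-users" },
      { "name": "read", "collection": "collection2", "role": "coll2-users" }
    ],
    "user-role": {
      "user1": ["coll1-users"],
      "user2": ["coll2-users"]
    }
  }
}
```

With rules like these, both users share the global authentication layer, but
each role is only permitted to read its own collection.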
Manisha,
In addition to what Shawn has mentioned above, I would also like you to
reevaluate your use case. Do you *need to* index the whole document? E.g.,
if it's an email, the body of the email *might* be more important than any
attachments, in which case you could choose to only index the email b