Hi Team,
Greetings of the day.
I would like some guidance on Solr search configuration so that we can provide
a better user experience.
Kindly guide me on how to proceed further.
Regards,
Hi Alex,
Thanks for your reply. How do we integrate Tesseract with Solr? Do we have
to implement a custom update processor or extend the
ExtractingRequestHandler?
Regards
Suresh
On Wed, Oct 23, 2019 at 11:21 AM Alexandre Rafalovitch
wrote:
> I believe Tika that powers this can do so w
files?
Regards
Suresh
The example I need is for Solr 7.4.0 and above.
On Mon, 25 Mar 2019 at 15:31, Suresh Kumar Shanmugavel <
sureshkumar.shanmuga...@lastminute.com> wrote:
> Hi Team,
> I need one sample web application on Solr with master and slave
> configurations having at least one core
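For reference (this is not from the quoted thread), classic master/slave
replication is configured per core through the ReplicationHandler in
solrconfig.xml; a hedged sketch of the two sides, with placeholder host and
core names:

    <!-- on the master core -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="master">
        <str name="replicateAfter">commit</str>
        <str name="confFiles">schema.xml,stopwords.txt</str>
      </lst>
    </requestHandler>

    <!-- on the slave core -->
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="slave">
        <str name="masterUrl">http://master-host:8983/solr/corename/replication</str>
        <str name="pollInterval">00:00:60</str>
      </lst>
    </requestHandler>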
SASL and DIGEST
authentication schemes and it is very hard to follow.
Is there a better resource or documentation that I can follow?
-suresh
Hi Shawn,
Thanks for replying to my questions.
So is it correct to assume that exposing merge metrics is not known to
cause any performance degradation?
-suresh
On Wed, Jan 10, 2018 at 5:40 PM, Shawn Heisey wrote:
> On 1/10/2018 11:08 AM, S G wrote:
>
>> Last comment by Shawn on S
default?
Regards
suresh
On Mon, Jan 8, 2018 at 10:25 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> The merge metrics were enabled by default in 6.4 but they were turned
> off in 6.4.2 because of large performance degradations. For more
> details, see https://issues.
...
...
I would like to know why these metrics are not exposed by default, just like
all the other metrics.
Is there any performance overhead that we should be concerned about?
If there is no particular reason, can we expose them by default?
Regards
Suresh
What is the downside of configuring ramBufferSizeMB to be as large as 5 GB?
Is it only that the window between flushes is larger, so recovery time will
be higher in case of a crash?
Thanks
Suresh
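For context, ramBufferSizeMB lives in the <indexConfig> section of
solrconfig.xml (the shipped default is 100); a minimal sketch of the setting
being discussed:

    <indexConfig>
      <!-- flush the in-memory indexing buffer once it reaches ~5 GB -->
      <ramBufferSizeMB>5120</ramBufferSizeMB>
    </indexConfig>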
On 12/27/17, 1:34 PM, "Erick Erickson" wrote:
You are probably hitting mor
Hi,
I wanted to check whether it is a known issue that the merge metrics are not
exposed as JMX beans.
Has anyone else in the community run into this issue?
Thanks
Suresh
On Sun, Dec 3, 2017 at 4:24 PM, suresh pendap
wrote:
> I see only these metrics in my Jconsole window
>
> [image: Inlin
I see only these metrics in my Jconsole window
[image: Inline image 1]
On Sun, Dec 3, 2017 at 4:19 PM, suresh pendap
wrote:
> Hi,
> I am using Solr version 6.6.0 and am following the document
> https://lucene.apache.org/solr/guide/6_6/metrics-reporting.html#index-
> merge-metri
omething else to be able to expose the Index Merge
metrics as JMX mbeans?
Thanks
Suresh
Hello,
I am using Solr version 6.6 and I am following this document to get the
segment-merge-related metrics:
https://lucene.apache.org/solr/guide/6_6/metrics-reporting.html#index-merge-metrics
I added the configuration to expose the merge-related metrics to my
solrconfig.xml file as below.
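(The configuration the poster pasted did not survive the archive clipping. For
reference, the linked guide enables merge details inside <indexConfig>; a
hedged reconstruction of that documented snippet, to be verified against the
page itself:)

    <config>
      <indexConfig>
        <!-- enable detailed merge metrics; values mirror the ref-guide example -->
        <metrics>
          <majorMergeDocs>524288</majorMergeDocs>
          <bool name="mergeDetails">true</bool>
        </metrics>
      </indexConfig>
    </config>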
Thanks, Lincoln, for your suggestions. They were very helpful.
I am still curious why the original query is taking such a long time; it is
something that Lucene should ideally have optimized.
Is there any way to see the execution plan used by Lucene?
Thanks
Suresh
On Thu, Aug 31, 2017 at 11:11 AM
optimal query execution plan?
Is there any way to look at the query execution plan generated by Lucene?
Regards
Suresh
Thanks Shalin for the reply.
Do I also need to update the query parsers in order to handle the new query
param?
I can build a custom component, but dabbling with the query parsers would be
too much for me to handle.
Thanks
Suresh
On Tue, Aug 8, 2017 at 9:49 PM, Shalin Shekhar Mangar
address of the client
machine which fired this query.
Thanks
Suresh
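A rough sketch of the custom-component route (the parameter name clientIp is
only an illustration, not anything from this thread): unknown request
parameters are ignored by the stock query parsers, so no parser changes are
needed to pass one through.

    // Minimal sketch of a custom SearchComponent that reads a caller-supplied
    // request parameter and echoes it into the response.
    import org.apache.solr.handler.component.ResponseBuilder;
    import org.apache.solr.handler.component.SearchComponent;

    public class ClientIpComponent extends SearchComponent {
        @Override
        public void prepare(ResponseBuilder rb) {
            // custom params pass straight through the request; no parser changes needed
            String clientIp = rb.req.getParams().get("clientIp");
            if (clientIp != null) {
                rb.rsp.add("clientIp", clientIp);
            }
        }

        @Override
        public void process(ResponseBuilder rb) {
            // nothing to do at process time for this example
        }

        @Override
        public String getDescription() {
            return "Echoes the clientIp request parameter";
        }
    }

The component would still need to be registered in solrconfig.xml via a
<searchComponent> element and listed in the components of the relevant request
handler.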
axtime=2 -Dsolr.autoCommit.maxDocs=10
I would like to know the order of precedence in which the
configurations are applied.
Regards
Suresh
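For reference, those -D properties only take effect where solrconfig.xml
references them; a hedged sketch of the usual pattern:

    <updateHandler class="solr.DirectUpdateHandler2">
      <autoCommit>
        <!-- An explicit value written here always wins. The -Dsolr.autoCommit.maxTime
             system property applies only because it is referenced below, and the
             value after the colon is the fallback used when the property is unset. -->
        <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
        <maxDocs>${solr.autoCommit.maxDocs:-1}</maxDocs>
        <openSearcher>false</openSearcher>
      </autoCommit>
    </updateHandler>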
Eric,
Thanks for the link!
-suresh
On Wed, Jul 19, 2017 at 8:11 AM, Erick Erickson
wrote:
> Chris Hostetter has a writeup here that has a good explanation:
>
> https://lucidworks.com/2013/12/12/coming-soon-to-solr-
> efficient-cursor-based-iteration-of-large-result-sets/
>
which had served the first
page?
Thanks
Suresh
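For anyone finding this thread later, a minimal SolrJ sketch of the cursorMark
pattern that the article describes (the collection URL and the uniqueKey field
name are placeholders; SolrJ 6.x/7.x-era API):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    public class CursorPaging {
        public static void main(String[] args) throws Exception {
            SolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(500);
            q.setSort("id", SolrQuery.ORDER.asc);   // sort must include the uniqueKey
            String cursorMark = CursorMarkParams.CURSOR_MARK_START;
            boolean done = false;
            while (!done) {
                q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
                QueryResponse rsp = solr.query(q);
                String nextCursorMark = rsp.getNextCursorMark();
                // process rsp.getResults() here
                done = cursorMark.equals(nextCursorMark); // finished when the mark stops changing
                cursorMark = nextCursorMark;
            }
            solr.close();
        }
    }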
, but the current default values seem to be very low.
Is my understanding of these configuration params correct?
Thanks
Suresh
Thanks Erick for the reply.
When the leader asks the follower to go into recovery status, does it stop
sending future updates to this replica until it becomes fully in sync with
the leader?
Regards
Suresh
On Mon, Jun 5, 2017 at 8:32 PM, Erick Erickson
wrote:
> bq: This means that technica
GW,
Did you mean a separate transaction log on Solr or on Zookeeper?
-suresh
On Tue, Jun 6, 2017 at 5:23 AM, GW wrote:
> I've heard of systems tanking like this on Windows during OS updates.
> Because of this, I run all my updates in attendance even though I'm Linux.
> My N
Hi,
Why and in what scenarios do Solr nodes go into recovery status?
Given that Solr is a CP system, writes to a document index are acknowledged
only after they are propagated to and acknowledged by all the replicas of the
shard.
This means that technically the replica nodes shoul
the strategy of using the OS buffer cache works
better instead of using a large Cache inside the JVM heap?
I was looking for some numbers that people have used for configuring these
caches in the past, and the rationale for choosing those values.
Thanks
Suresh
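For reference, the caches under discussion are configured in the <query>
section of solrconfig.xml; the stock example configuration ships with values
in this range (a sketch, not a sizing recommendation):

    <query>
      <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
      <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
      <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    </query>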
On 5/11/17, 11:13 PM, "Shawn H
query latency?
Any recommendations around Cache size settings or any documentation would
be very helpful.
I would also like to know whether there is any document that provides
guidelines on performing capacity planning for a Solr cluster.
Regards
Suresh
query latency and it is recommended to configure
that Cache?
I would also like to know whether there is any document that provides
guidelines on performing capacity planning for a Solr cluster.
Regards
Suresh
push the CPU usage
further up by putting more load on the cluster?
Thanks
Suresh
On 4/28/17, 6:44 PM, "Shawn Heisey" wrote:
>On 4/28/2017 12:43 PM, Toke Eskildsen wrote:
>> Shawn Heisey wrote:
>>> Adding more shards as Toke suggested *might* help,[...]
>> I
plica1:rack3 and shard2_replica2:
It should not assign the replicas as shard1_replica1:rack1,
shard1_replica2:rack1, shard2_replica1:rack2, shard2_replica2:rack2. In that
case all the replicas of a shard are assigned to the same rack, which is not
ideal for HA reasons.
Regards
Suresh
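One hedged option, not mentioned in the clipped messages above: Solr's
rule-based replica placement can express this constraint, provided every node
is started with a rack system property such as -Drack=rack1 (verify the rule
syntax against the "Rule-based Replica Placement" page of the reference guide):

    http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=2&rule=shard:*,replica:<2,sysprop.rack:*

The rule reads as: for every shard, fewer than two replicas may share the same
value of the rack property.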
that I can extract from this sized cluster and
index size combination?
Any ideas are highly appreciated.
Thanks
Suresh
larger value
and see if that gives me some benefit.
Thanks
Suresh
On 3/24/17 6:05 AM, "Shawn Heisey" wrote:
>On 3/23/2017 6:10 PM, Suresh Pendap wrote:
>> I performed the test with 1 thread, 10 client threads and 50 client
>> threads. I noticed that as I increa
jobs etc?
Thanks
Suresh
On 3/23/17 9:33 PM, "Erick Erickson" wrote:
>I'd check my I/O. Since you're firing the same query, I expect that
>you aren't I/O bound at all, since, as you say, the docs should
>already be in memory. This assumes that your document cach
Disk cache so that maximum documents stay in memory in the buffer cache.
Regards
Suresh
On 3/23/17 7:52 PM, "Zheng Lin Edwin Yeo" wrote:
>I also did find that beyond 10 threads for 8GB heap size , there isn't
>much
>improvement with the performance. But you can increase
I am using version 6.3 of Solr
On 3/23/17 7:56 PM, "Aman Deep Singh" wrote:
>system
seeing is abnormal behavior,
what could be the issue? Is there any configuration that I can tweak to get
better query response times for more load?
Regards
Suresh
Hi,
Will the SolrJ 4.10.3 client work with a Solr 6.3 server? I was trying to
look it up, but nowhere in the documentation is a compatibility matrix
between server and client provided.
Has someone already used this combination?
Regards
Suresh
Hi,
We maintain the Solr index on a central server, and multiple users may be
able to access the index data.
May I know the best practices for securing the Solr index folder, so that
ideally only the application user is able to access it? Even an admin user
should not be able to
Hi All,
We are trying to set up SolrCloud in our team and have been able to set up
multiple nodes on one server as a cloud.
We need clarification on the following.
Is there any good documentation that can help us build a SolrCloud with
multiple physical servers?
Since SolrCloud is distributed
Subject: Re: Exception while loading 2 Billion + Documents in Solr 4.8.0
On 2/4/2015 2:54 PM, Arumugam, Suresh wrote:
>
> Hi All,
>
>
>
> We are trying to load 14+ Billion documents into Solr. But we are
> failing to load them into Solr.
>
>
>
> So
.
Regards,
Suresh.A
From: Arumugam, Suresh [mailto:suresh.arumu...@emc.com]
Sent: Wednesday, February 04, 2015 1:54 PM
To: solr-user@lucene.apache.org
Cc: Habeeb, Anwar
Subject: Exception while loading 2 Billion + Documents in Solr 4.8.0
Hi All,
We are trying to load 14+ Billion documents into Solr. But
Hi All,
We are trying to load 14+ billion documents into Solr, but we are failing to
load them.
Solr version: 4.8.0
Analyzer used: ClassicTokenizer for index as well as query.
Can someone help me get to the core of this issue?
For the 14+ billion document load, we are loading 2B
We are also facing the same problem loading 14 billion documents into Solr
4.8.10.
The data import runs single-threaded, which is taking more than 3 weeks. It
works without any issues, but it takes months to complete the load.
When we tried SolrJ with the below configuration
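(The SolrJ configuration referred to above was clipped from this archive and
is not reproduced here. Purely as an illustration of the kind of
multi-threaded SolrJ loading being discussed, a minimal sketch against the
SolrJ 4.x API, with placeholder URL, queue size, and thread count:)

    import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class BulkLoader {
        public static void main(String[] args) throws Exception {
            // buffer up to 10000 docs and send them with 8 background threads
            ConcurrentUpdateSolrServer server =
                new ConcurrentUpdateSolrServer("http://localhost:8983/solr/collection1", 10000, 8);
            for (long i = 0; i < 1_000_000; i++) {        // stand-in for the real document source
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Long.toString(i));
                doc.addField("text", "document body " + i);
                server.add(doc);                          // buffered and sent asynchronously
            }
            server.blockUntilFinished();                  // wait for the queues to drain
            server.commit();
            server.shutdown();
        }
    }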
So the regex is bound to fail since you're thinking in terms of the
entire input rather than the result of your analysis chain, i.e. tokenization +
filters as defined in schema.xml.
FWIW,
Erick
On Fri, Jan 23, 2015 at 8:58 PM, Arumugam, Suresh
wrote:
> Hi All,
>
>
>
> We
Hi All,
We have indexed the documents into Solr and are not able to query them using a
regex.
Our data looks like the example below in a text field, which is indexed using
the ClassicTokenizer.
1b ::PIPE:: 04/14/2014 ::PIPE:: 01:32:48 ::PIPE:: BMC
Power/Reset action ::PIPE:: Delayed shutdown time
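To make Erick's point concrete, a small stand-alone illustration (not from the
original thread; written against the Lucene 7.x analyzers-common API, so
adjust imports for other versions): the ClassicTokenizer chain breaks the line
into individual terms, and a regex query is matched against those single
indexed terms, never against the raw input string.

    import java.io.StringReader;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.ClassicAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class ShowTokens {
        public static void main(String[] args) throws Exception {
            String line = "1b ::PIPE:: 04/14/2014 ::PIPE:: 01:32:48 ::PIPE:: BMC";
            try (ClassicAnalyzer analyzer = new ClassicAnalyzer();
                 TokenStream ts = analyzer.tokenStream("text", new StringReader(line))) {
                CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                ts.reset();
                while (ts.incrementToken()) {
                    System.out.println(term.toString()); // one indexed term per line
                }
                ts.end();
            }
        }
    }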
Vignesh,
Add the keyword field you defined in your schema file to the fl parameter;
this will return the keywords attached to the document.
Thanks,
SureshKumar.S
From: vignesh
Sent: Wednesday, April 23, 2014 1:04 PM
To: solr-user@lucene.apache.org
Sub
Are you running Solr in the built-in Jetty server or in Tomcat?
First check that http://<host>:8080/ is working.
If that works, then check http://<host>:8080/solr, which will display the
Solr admin page. From that page you can check whether the collection1 core is
available, and you can also view the log
Kaushik,
Before deleting the rows in the table, collect the primary IDs of the rows
related to the Solr index, then issue a Solr delete-by-ID request with the
collected IDs. This will remove the documents from the Solr index.
Thanks,
SureshKumar.S
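A minimal SolrJ sketch of that delete-by-ID step, using a current SolrJ client
and placeholder URL and IDs:

    import java.util.Arrays;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class DeleteByIds {
        public static void main(String[] args) throws Exception {
            SolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
            // ids collected from the database rows *before* they are deleted
            List<String> ids = Arrays.asList("101", "102", "103");
            solr.deleteById(ids);   // removes the matching documents from the index
            solr.commit();
            solr.close();
        }
    }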
From: Ka
Hi Sohan,
The best approach for autosuggest is to use a facet query.
Please refer to this link:
http://solr.pl/en/2010/10/18/solr-and-autocomplete-part-1/
Thanks,
SureshKumar.S
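For completeness, a hedged SolrJ sketch of the facet-prefix technique that the
linked article describes (collection, field, and prefix are placeholders; the
faceted field is assumed to be a lightly tokenized copy of the title):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class Autocomplete {
        public static void main(String[] args) throws Exception {
            SolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);                        // we only want the facet counts
            q.setFacet(true);
            q.addFacetField("title_autocomplete");
            q.setFacetPrefix("sol");             // the characters the user has typed so far
            q.setFacetLimit(10);
            QueryResponse rsp = solr.query(q);
            for (FacetField.Count c : rsp.getFacetField("title_autocomplete").getValues()) {
                System.out.println(c.getName() + " (" + c.getCount() + ")");
            }
            solr.close();
        }
    }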
From: Sohan Kalsariya
Sent: Monday, March 17, 2014 8:14 PM
To: solr-use
Prasi,
It is not possible to use the index files of one Solr instance for a second
instance. The reason is that, while booting, a Solr instance locks the schema
and index files to make sure another instance won't update them.
As you mentioned, you want t
You can go ahead with Tomcat by deploying the Solr WAR in it. It is highly
scalable.
Thanks,
SureshKumar.S
From: Jay Potharaju
Sent: Friday, February 21, 2014 11:10 AM
To: solr-user@lucene.apache.org
Subject: Setting up solr on production server
Hi,
I '
What is a field term vector? How can we use it?
Suresh
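Briefly: a term vector is a per-document record of the terms in a field, with
optional positions and offsets, used by features such as MoreLikeThis, the
TermVectorComponent, and fast highlighting. It is enabled per field in the
schema; a sketch with a placeholder field name:

    <field name="content" type="text_general" indexed="true" stored="true"
           termVectors="true" termPositions="true" termOffsets="true"/>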
I would like to post PDF, DOC, and TXT files into Solr for indexing.
Suresh
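One common way to do this is Solr Cell, the /update/extract handler backed by
Apache Tika; a minimal SolrJ sketch, assuming that handler is enabled in
solrconfig.xml and using placeholder names:

    import java.io.File;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
    import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

    public class ExtractPdf {
        public static void main(String[] args) throws Exception {
            SolrClient solr =
                new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build();
            ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
            req.addFile(new File("report.pdf"), "application/pdf");
            req.setParam("literal.id", "report-1");          // uniqueKey for the extracted document
            req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
            solr.request(req);
            solr.close();
        }
    }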
Thanks Hoss,
I found them in src/scripts, but I don't know how to execute snapshooter,
snappuller, abc, backup, and so on. How can I make one instance of Solr the
master and another the slave? Does it fully depend on rsyncd
-Suresh
- Original Message -
From: "Chris Hostetter&quo
Hi,
I need to configure master/slave servers, so I checked the wiki help
documents. I found that I need to install the solr-tools RPM, but I was not
able to download the files. Can someone please help me with the solr-tools RPM?
Suresh Kannan