Hi All,
I have just started on a new project using SolrCloud, and during
performance testing I have been hitting OOM issues. What I notice
most is that physical memory keeps increasing and never returns to its
original level.
I'm indexing 10 million documents and have 4 nodes as lea
Thanks Shawn. I was looking at the Solr Admin UI and also using the top command
on the server.
I'm running a 4-hour endurance test at 50 TPS, and I see physical
memory keep increasing during that time. We also have a scheduled delta
import in that window, which can import up to 4 million docs. Af
Thanks a lot, Shawn!
Just wanted to clarify: we have SolrCloud, so in my testing I am not hitting a
single server; I have multiple servers. At a time we have 4
leaders and 4 replicas, which are coordinated through ZooKeeper.
So, in total we have 8 servers, and ZooKeeper is installed on
roblem, but how can I overcome the
problem of hanging?
Error occurred during initialization of VM
Could not reserve enough space for object heap
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
t why Solr hangs,
and is there a way to automatically kill and restart it?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: simon [mailto:mtnes...@gmail.com]
Sent: 02 September 2011 14:03
To: solr-user@lucene.apache.org
Subject: Re: Solr Hangs
Hi,
I am running Solr in Tomcat on a Linux machine. Solr hangs after about 40
hrs, and I wanted to generate a thread dump and analyse it. But the command
kill -QUIT PID doesn't seem to do anything.
How else can I generate a dump to see why Solr hangs?
Regards,
Rohit
Nope, not getting anything here either.
Regards,
Rohit
-Original Message-
From: Jerry Li [mailto:zongjie...@gmail.com]
Sent: 08 September 2011 08:09
To: solr-user@lucene.apache.org
Subject: Re: Unable to generate trace
What about the kill -3 PID command?
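For reference, a minimal sketch of the kill -3 approach discussed above (the PID is a placeholder here; signal 3 is SIGQUIT, and Tomcat usually redirects the resulting dump to logs/catalina.out rather than your terminal):

```shell
# SIGQUIT (signal 3) makes the JVM print a full thread dump and keep running.
# SOLR_PID is a placeholder; the dump lands on the JVM's stdout, which
# Tomcat typically redirects to logs/catalina.out.
SOLR_PID="${SOLR_PID:-}"
if [ -n "$SOLR_PID" ]; then
  kill -3 "$SOLR_PID"
  # With a JDK installed, jstack captures the same dump to a file instead:
  jstack "$SOLR_PID" > /tmp/solr-threads.txt
fi
```

If kill -QUIT appears to do nothing, check catalina.out (or wherever the container's stdout goes) rather than the shell you ran the command from.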
On Thu, Sep 8, 2011 at 4:06 PM, Rohit
past 15 days, unless
someone queries for it explicitly. How can this be achieved?
. Will it be better to go for Solr replication or distribution if
there is little option left?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
(Mostly facet queries
based on date, string fields).
After some time, about 18-20 hrs, Solr goes out of memory, and the thread dump
doesn't show anything. How can I improve this besides adding more RAM to
the system?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Ori
a jconsole to my Solr, as suggested, to get a better picture.
Regards,
Rohit
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 14 September 2011 08:15
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
Hi Rohit,
Do you use caching?
How big is your index in
://haklus.com/crssConfig.xml
http://haklus.com/rssConfig.xml
http://haklus.com/twitterConfig.xml
http://haklus.com/facebookConfig.xml
Thanks again
Rohit
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 14 September 2011 10:23
To: solr-user@lucene.apache.org
It's happening more during search, and search has become very slow, particularly on
the core with 69GB of index data.
Regards,
Rohit
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 15 September 2011 07:51
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
Thanks Dmitry, let me look into sharding concepts.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 15 September 2011 10:15
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
If you
idea about sharding right now; if you could point me to some
resource for date-wise sharding.
Regards,
Rohit
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 17 September 2011 00:19
To: solr-user@lucene.apache.org
Subject: RE: Out of memory
: Actually I am
approximately 50,000 documents and optimizing once a
day.
I am using the fq parameter during faceting, since all my queries are
datetime-bound and max/min auto_id-bound. E.g., fq=createdOnGMT:[Date1 TO
Date2]&fq=id:[id1 TO id2]&facet=true&facet.field..
Regards,
Rohit
Mobile: +9
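Spelled out as a raw request, the fq-plus-facet pattern described above looks roughly like this (host, core path, and the concrete dates/ids are placeholders; the brackets and spaces need URL-encoding before the string is actually sent):

```shell
# Assemble the filter-bounded, faceted query described in the post.
BASE="http://localhost:8983/solr/select"
Q="q=*:*"
FQ_DATE="fq=createdOnGMT:[2011-01-01T00:00:00Z TO 2011-02-01T00:00:00Z]"
FQ_ID="fq=id:[1000 TO 2000]"
FACET="facet=true&facet.field=createdOnGMT"
URL="$BASE?$Q&$FQ_DATE&$FQ_ID&$FACET"
echo "$URL"
```

Because both fq clauses are range filters, Solr can cache each filter independently of the main query, which is the point of using fq here rather than folding the ranges into q.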
I have been using Solr 3.1 and am planning to upgrade to Solr 3.4. What are the
steps to follow, and is there anything that needs to be taken care of specifically
for the upgrade?
Regards,
Rohit
https://issues.apache.org/jira/browse/SOLR-1709
The patch is not working on either the 3.1 or the 3.4 version; how else can I apply
the patch?
Regards,
Rohit
I want to query, right now I use it in the following way,
CommonsHttpSolrServer server = new CommonsHttpSolrServer("URL HERE");
SolrQuery sq = new SolrQuery();
sq.add("q", query);
QueryResponse qr = server.query(sq);
Regards,
Rohit
-Original Message-
From: Yur
Yes, the problem is the length of the URL: with a lot of filters coming
in, it goes beyond the allowed length. But I guess extending the URL
length would be a better approach.
Regards,
Rohit
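The usual alternative to raising the container's URL-length limit is to send the query as a POST, so the filters travel in the request body. A sketch (endpoint and parameters are placeholders; the command is only echoed so the example runs without a live server):

```shell
# Build a POST form query; --data-urlencode keeps long fq values intact.
CMD="curl -s http://localhost:8983/solr/select"
CMD="$CMD --data-urlencode q=*:*"
CMD="$CMD --data-urlencode 'fq=createdOnGMT:[2011-01-01T00:00:00Z TO 2011-02-01T00:00:00Z]'"
echo "$CMD"   # run the printed command against a running Solr instance
```

From SolrJ, the equivalent (if I recall the API correctly) is to pass SolrRequest.METHOD.POST as the second argument to query(), which avoids touching the servlet container's limits at all.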
-Original Message-
From: Sujit Pal [mailto:sujit@comcast.net]
Sent: 14 October 2011
I have saved tweets related to some keywords in Solr; can Solr be used to
generate a tag cloud of important words from these tweets?
Regards,
Rohit
,
Rohit
2 0.6 3:52.59 java
3591 root 20 0 19352 1576 1068 R 0 0.0 0:00.24 top
1 root 20 0 23684 1908 1276 S 0 0.0 0:06.21 init
Regards,
Rohit
What are the differences between the various DirectoryFactory implementations?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Yury Kats [mailto:yuryk...@yahoo.com]
Sent: 10 December 2011 12:11
To: solr-user@lucene.apache.org
Subject: Re: Virtual Memory very
Thanks, Yury Kats.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 13 December 2011 11:17
To: solr-user@lucene.apache.org
Subject: Re: Virtual Memory very high
If you allow me to chime in, is
(AprEndpoint.java:1675)
at java.lang.Thread.run(Unknown Source)
Regards,
Rohit
Hi,
We are storing a large number of tweets and blogs feeds into solr.
Now if a user searches for Twitter mentions like @rohit, records which
just contain the word rohit are also returned, even if we do an exact
match on "@rohit". I understand this happens because of
https://issues.apache.org/jira/browse/SOLR-2059
But searching for @username is still returning results for just username, and
#hashtag is returning results for just hashtag. How can I get exact matching here?
Regards,
Rohit
Got the problem: I need to use the "types=" parameter in
WordDelimiterFilterFactory so that characters like # and @ are not treated as delimiters.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
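For anyone finding this thread later, the configuration being described takes roughly the following shape (the field type name and surrounding analyzer are my assumptions; the types file maps the characters to ALPHA so the filter stops splitting on them, which is the documented use of the types attribute added in SOLR-2059):

```xml
<!-- schema.xml sketch: keep @ and # attached to tokens -->
<fieldType name="text_social" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            types="wdftypes.txt"
            generateWordParts="1" generateNumberParts="1"/>
  </analyzer>
</fieldType>

<!-- wdftypes.txt (placed in the core's conf directory):
@ => ALPHA
# => ALPHA
-->
```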
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: 16 February 201
Memory details:
export JAVA_OPTS="$JAVA_OPTS -Xms6g -Xmx36g -XX:MaxPermSize=5g"
Solr Config:
false
10
32
1
1000
1
What could be causing this, as everything was running fine a few days back?
Regards,
Rohit
Mobile: +91-9901768202
About Me: <
fine a few days back?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
Hi Tirthankar,
The average document size would be a few KBs; these are mostly tweets
being saved. The two cores are storing different kinds of data and
nothing else.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Tirth
The operating system is Linux (Ubuntu).
No, not using the spellchecker.
Only language detection in my update chain.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Jan Høydahl [mailto:jan@cominvent.com]
Sent: 12 April 2012 12:50
To: solr-user
Thanks for pointing these out, but I still have one concern: why is the
virtual memory running at 300GB+?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Bernd Fehling [mailto:bernd.fehl...@uni-bielefeld.de]
Sent: 12 April 2012 11:58
To
Thanks for pointing these out, but I still have one concern: why is the
virtual memory running at 300GB+?
Regards,
Rohit
-Original Message-
From: Tirthankar Chatterjee [mailto:tchatter...@commvault.com]
Sent: 12 April 2012 13:43
To: solr-user@lucene.apache.org
Subject: Re: Solr 3.5
The machine has a total RAM of around 46GB. My biggest concern is Solr index
time gradually increasing, and then the commit stops because of timeouts. Our
commit rate is very high, but I am not able to find the root cause of the issue.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http
Hi Shawn,
Thanks for the information, let me give this a try, since this is a live box I
will try it during the weekend and update you.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: 13
increasing the commit time, though I cannot find a reason. Are they
related in any way?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: 13 April 2012 11:01
To: solr-user@lucene.apache.org
Subject
?
Regards,
Rohit
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
at
org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:149)
Actually the replication starts, but is never able to complete and then
restarts again.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
Hi Erick,
Yes I have enabled the following setting,
internal
5000
1
Will try with higher timeouts. I tried the scp command and the link didn't break
once; I was able to copy the entire 300GB of files, so I am not too sure this
is a network problem.
Regards,
Rohit
Mobile
https://issues.apache.org/jira/browse/SOLR-1903, is there any other fix to
solve this problem? I am currently using Solr 3.6.
Regards,
Rohit
needs to be
provided every time, which was not the case earlier. What might be causing
this?
Regards,
Rohit
Hi Jack,
I understand that df would make this work normaly, but why did
defaultSearchField stop working suddenly. I notice that there is talk about
deprecating it, but even then it should continue to work right?
Regards,
Rohit
-Original Message-
From: Jack Krupansky [mailto:j
Thanks for the pointers, Jack. The strange part is that the
defaultSearchField element is present and uncommented, yet not working.
docKey
searchText
Regards,
Rohit
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 11 June 2012 20:35
To: solr-user
but I don't know how to get it to work with Solr.
Has anyone else worked on this earlier?
Regards,
Rohit
gn(classified) as sentiment from
The reason I am doing this timezone conversion is that I need to group results
by the user's timezone. How can I achieve this?
Regards, Rohit
since the result should be
grouped by the user's timezone. Is there any way we can achieve this in Solr?
Regards,
Rohit
-Original Message-
From: Craig Stires [mailto:craig.sti...@gmail.com]
Sent: 06 May 2011 04:30
To: solr-user@lucene.apache.org
Subject: RE: Solr
Thanks Ahmet, let me give this a shot.
Regards,
Rohit
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: 06 May 2011 15:39
To: solr-user@lucene.apache.org
Subject: RE: Solr: org.apache.solr.common.SolrException: Invalid Date
String:
--- On Fri, 5/6/11, Rohit
Running Solr on Jetty right now; the console shows no errors, and the
\Solr\example\logs folder is empty.
Thanks,
Rohit
Hi Erick,
That's exactly how I am starting Solr.
Regards,
Rohit
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 09 May 2011 16:57
To: solr-user@lucene.apache.org
Subject: Re: Total Documents Failed : How to find out why
First you need to find your logs
");
List<FacetField> facets = qr.getFacetFields();
for (FacetField facet : facets)
{
List<FacetField.Count> facetEntries = facet.getValues();
for (FacetField.Count fcount : facetEntries)
{
System.out.pr
0
0
0
+1DAY
2010-01-01T00:00:00Z
2011-05-31T00:00:00Z
1) How can I retrieve these values in Java?
2) Also, is there any way I can convert the JSON response into a Java
object?
Regards,
Rohit
html
If I don't apply the offset, the results match the facet count. Is there
something wrong in my query?
Regards,
Rohit
P.S
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 14 May 2011 05:28
To: solr-user@lucene.apache.org
Subject: Re: Solr Range Facets
: I did try what you suggested, but I am not ge
an index of 13,217,121 documents. Now when I want to get documents
between two dates and then sort them by ID, Solr goes out of memory. This is
with just me using the system; we might also have simultaneous users. How
can I improve this performance?
Rohit
?
-Rohit
-Original Message-
From: rajini maski [mailto:rajinima...@gmail.com]
Sent: 19 May 2011 14:53
To: solr-user@lucene.apache.org
Subject: Re: Out of memory on sorting
Explicit warming of sort fields:
if you do a lot of field-based sorting, it is advantageous to add explicit
warming
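The advice being quoted usually translates into a QuerySenderListener entry in solrconfig.xml; a sketch, with the sort field taken from the earlier messages in this thread as an assumption:

```xml
<!-- solrconfig.xml sketch: warm the sort field whenever a new searcher opens,
     so the first user query after a commit doesn't pay the field-cache cost -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="sort">auto_id desc</str>
      <str name="rows">0</str>
    </lst>
  </arr>
</listener>
```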
ll out test case for moving to Solr has passed, this is proving to be a big
setback. Help would be greatly appreciated.
Regards,
Rohit
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: 19 May 2011 18:21
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
Each core needs to be queried separately:
http://localhost:8983/solr/fund_dih/select?q=
http://localhost:8983/solr/fund_tika/select?q=
Regards,
Rohit
-Original Message-
From: Zhao, Zane [mailto:zane.z...@fil.com]
Sent: 20 May 2011 07:50
To: solr-user@lucene.apache.org
Subject: How can
path to take?
. How can I do this with minimal downtime, given the fact that our
index is huge?
. Can someone point me in the right direction for this?
Thanks and Regards,
Rohit
? Will reducing the
number of commits per hour help?
2. Most of my queries are field- or date-faceting based; how do I improve
those?
Regards,
Rohit
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
Also, if I could change the behaviour on the fly, update based on a flag
and ignore on another flag.
Thanks and Regards,
Rohit
response :
0
18
تأجير الاهلي
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
Thanks Ahmet, this was the problem I guess.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: 15 August 2011 22:20
To: solr-user@lucene.apache.org
Subject: Re: Solr + Arabic Search
> I
Hi,
Just wanted to know how much memory Tomcat running on a Windows Enterprise
RC2 server can effectively utilize. Is there any limitation on this?
Regards,
Rohit
We deleted some data from Solr, after which Solr is not accepting any
commits. What could be wrong?
We don't see any errors in the logs or anywhere else.
Regards,
Rohit
Hi Brendan,
I am not sure I get what's being suggested. Our delete worked fine, but now
no new data is going into the system.
Could you please shed some more light on this?
Regards,
Rohit
-Original Message-
From: Brendan Grainger [mailto:brendan.grain...@gmail.com]
Sent: 19 July 2012 17:33
numFound- 56000
Second time
query=abc&start=4000&rows=4000
numFound- 55998
What can cause this?
Regards,
Rohit
I can cross-check our shards once again, but I am sure this is not the case.
Regards,
Rohit
Mobile: +91-9901768202
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 08 August 2012 21:04
To: solr-user@lucene.apache.org
Subject: Re: numFound changes on
, but that doesn't seem to help either.
Regards,
Rohit
Hi Lance,
Thanks for explaining this, it does push out all other programs.
Regards,
Rohit
Mobile: +91-9901768202
-Original Message-
From: Lance Norskog [mailto:goks...@gmail.com]
Sent: 03 September 2012 01:00
To: solr-user@lucene.apache.org
Subject: Re: Solr Not releasing memory
1) I
I am currently using StandardDirectoryFactory; would switching the directory
factory have any impact on the indexes?
Regards,
Rohit
-Original Message-
From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
Sent: 03 September 2012 10:03
To: solr-user@lucene.apache.org
Subject: RES
I am talking about physical memory here: we start with -Xms of 2GB, but it very
soon goes as high as 45GB. The memory never comes down, even when not a single
user is using the system.
Regards,
Rohit
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: 03 September
aggregation
queries should be very fast.
Is Solr suitable for such use cases?
Thanks
Rohit
data regarding elasticsearch/solr
performance in this area that I can refer to?
Thanks
Rohit
On Thu, Feb 4, 2016 at 11:48 AM, CKReddy Bhimavarapu
wrote:
> Hello Rohit,
>
> You can use the Banana project which was forked from Kibana
> <https://github.com/elastic/kibana>, and wo
client is getting loaded from somewhere in your Maven build. Check the
dependency tree of your pom.xml and see if you can exclude this jar from getting
loaded anywhere else. Just exclude it in your pom.xml. I hope this
solves your issue.
Thanks
Rohit
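A sketch of the exclusion being described (the artifact coordinates below are placeholders; run mvn dependency:tree first and substitute whichever jar it shows as the stray client):

```xml
<!-- pom.xml sketch: stop a transitive client jar from being pulled in -->
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-solrj</artifactId>
  <version>6.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```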
On Tue, Aug 2, 2016 at 9:44 AM, Steve Rowe wrote
helps in solving your problem.
Thanks
Rohit Kanchan
On Tue, Aug 30, 2016 at 5:11 PM, Erik Hatcher
wrote:
> Personally, I don’t think a QParser(Plugin) is the right place to modify
> other parameters, only to create a Query object. A QParser could be
> invoked from an fq, not just a q,
I think it is better to use the ZooKeeper data. SolrCloud updates ZooKeeper
with node status. If you are using Cloud, then you can check the ZooKeeper
cluster API and get the status of a node from there. The ZooKeeper cluster state
can give you information about your SolrCloud. I hope this helps.
Thanks
Rohit
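One concrete way to read that state, assuming a Solr version (4.8+) where the Collections API exposes CLUSTERSTATUS (host and parameters are placeholders; the command is echoed rather than executed so the sketch works offline):

```shell
# CLUSTERSTATUS reports, per replica, the live/active/down state that
# SolrCloud keeps in ZooKeeper, without talking to ZooKeeper directly.
HOST="http://localhost:8983/solr"
CMD="curl -s \"$HOST/admin/collections?action=CLUSTERSTATUS&wt=json\""
echo "$CMD"
```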
With Java 8, you also need to upgrade to a Tomcat that works on Java 8.
I think Tomcat 8.x is compiled using Java 8. I think you can switch your
existing Tomcat to Java 8 as well, but it may break somewhere for the same
reason.
Thanks
Rohit Kanchan
On Sat, Sep 10, 2016 at 2:38 AM, Brendan
I am writing a custom ValueSourceParser. Everything is working fine, except
when I use the ValueSourceParser for sorting: it stops working for calls with
different data.
E.g., if I make a query to sort by func(cost) desc,
it works.
Now if I replace cost with another field, e.g. func(rating) desc,
it sorts
Hi Hoss,
Thanks for the response. Will make the necessary changes and get back to
you.
By the way, this is just test code; the logic is yet to be implemented. What,
in your view, would be the best way to return the hashCode?
Regards,
Rohit
On Oct 5, 2016 5:27 AM, "Chris Hostetter" wrote:
be a good idea to go through the whole Solr code just
to make an external ValueSourceParser. Any ideas?
Regards,
Rohit Agarwal
On Wed, Oct 5, 2016 at 10:11 AM, Rohit Agarwal
wrote:
> Hi Hoss,
>
> Thanks for the response. Will make the necessary changes and get back to
> you.
>
>
our processes would receive them.
Any ideas on how this could be done?
Rohit Jain
be an example of that.
Rohit
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Monday, June 12, 2017 11:56 AM
To: solr-user
Subject: Re: Parallel API interface into SOLR
Have you looked at Streaming Aggregation/Streaming Expressions/Parallel SQL etc?
Best,
Eric
Thanks a lot Joel! No wonder I could not find it :-). I will try to see if
this will work for us.
Rohit
-Original Message-
From: Joel Bernstein [mailto:joels...@gmail.com]
Sent: Monday, June 12, 2017 1:01 PM
To: solr-user@lucene.apache.org
Subject: Re: Parallel API interface into
schema. In solr you can define dynamic fields too. This is all my
understanding.
-
Rohit
On Wed, Nov 23, 2016 at 10:27 AM, Prateek Jain J <
prateek.j.j...@ericsson.com> wrote:
>
> Hi All,
>
> I have started to use mongodb and solr recently. Please feel free to
> correct me w
and because of this there are high GC
pauses, which cause first the replica to go into recovery, and then the leader
and replica crash.
-
Rohit
.
It would be better to know why the old-deletes map is used there. I am still
digging; if I find something, I will share it.
Thanks
Rohit
On Tue, Mar 21, 2017 at 4:00 PM, Chris Hostetter
wrote:
>
> : facing. We are storing messages in solr as documents. We are running a
> : pruning
query Solr before deleting at the client
side. It is possible that there is a bug in this code, but I am not sure,
because when I run tests locally it does not show any issues. I am
trying to remote-debug now.
Thanks
Rohit
On Wed, Mar 22, 2017 at 9:57 AM, Chris Hostetter
wrote:
>
> : O
from a
committer. What do you guys think?
Thanks
Rohit
On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan
wrote:
> For commits we are relying on auto commits. We have define following in
> configs:
>
>
>
> 1
>
> 300
entries or not. I will update this thread with my
findings. I really appreciate your and Chris's responses.
Thanks
Rohit
On Mon, Mar 27, 2017 at 10:47 AM, Erick Erickson
wrote:
> Rohit:
>
> Well, whenever I see something like "I have this custom component..."
> I immediately
copy. It was causing OOM. We have changed that and are now making a deep copy.
Now it seems it restricts the old-deletes map to a capacity of 1K.
After deploying this change, we took another heap dump and did not find
this among the leak suspects. Please let me know if anyone has questions.
Thanks
Rohit
On
My solr config has :
15000
false
1000
The machine is Ubuntu 13, 4 cores, 16GB RAM, with 6GB given to Solr running
on Tomcat.
Still, when I am adding documents to Solr and searching, it returns 0
hits. It takes long before the document actu
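Those three stripped-out values look like an autoCommit block; a guessed reconstruction (the tag placement is an assumption). If the guess is right, it would also explain the 0 hits: with openSearcher set to false, a hard commit makes documents durable but not visible, so nothing shows up in searches until an explicit commit or a soft commit opens a new searcher:

```xml
<!-- solrconfig.xml sketch: assumed reconstruction of the quoted values -->
<autoCommit>
  <maxTime>15000</maxTime>        <!-- hard commit every 15 s -->
  <openSearcher>false</openSearcher>  <!-- durable, but NOT visible to searches -->
  <maxDocs>1000</maxDocs>
</autoCommit>
<!-- adding a soft commit (or sending commit=true) makes new docs searchable -->
<autoSoftCommit>
  <maxTime>5000</maxTime>
</autoSoftCommit>
```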
hey come up at the
> expected frequency?
>
>
> On 4 July 2013 15:35, Rohit Kumar wrote:
>
> > My solr config has :
> >
> >
> >15000
> >false
> >
> >
> >
> >
> > 1000
> &
hanks
*
On Fri, Jul 5, 2013 at 8:30 AM, Jack Krupansky wrote:
> 1. Do you have an update processor chain that doesn't have RunUpdate in it?
>
> 2. Is the solrconfig directive missing?
>
> 3. Is _version_ missing from your schema?
>
> -- Jack Krupansky
>
> -Original
Hey,
I am trying to create a plugin which makes use of a PostFilter. I know that
the collect function is called for every matched document, but is there a
way I can access all the matched documents up to this point before collect
is called on each of them?
Thanks,
Rohit
Hi Amit,
Great article. I tried it and it works well. I am new to developing in Solr
and had a question: do you know if there is a way to access all the matched
ids before collect is called?
Thanks,
Rohit
On Sat, Nov 10, 2012 at 1:12 PM, Erick Erickson wrote:
> That'll teach _me_
Hi,
I have a scenario.
String array = ["Input1 is good", "Input2 is better", "Input2 is sweet",
"Input3 is bad"]
I want to compare the string array against the given input:
String inputarray = ["Input1", "Input2"]
It involves no indexes. I just want to use the power of string search to do
a r
.
Thanks
Rohit Kumar
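Without bringing Solr into it at all, the comparison described above can be done with plain substring search; a minimal sketch using the exact strings from the post:

```shell
# The four sentences and the two query terms from the post.
printf '%s\n' "Input1 is good" "Input2 is better" \
              "Input2 is sweet" "Input3 is bad" > /tmp/docs.txt
for term in Input1 Input2; do
  # grep -c counts the lines (sentences) containing the term
  echo "$term matches $(grep -c "$term" /tmp/docs.txt) sentence(s)"
done
```

This prints "Input1 matches 1 sentence(s)" and "Input2 matches 2 sentence(s)". Solr only starts to pay off over this when you need tokenization, fuzzy matching, or ranking rather than exact substrings.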
?
Thanks,
Rohit
On Wed, Jul 10, 2013 at 6:10 PM, Yonik Seeley wrote:
> On Wed, Jul 10, 2013 at 6:08 PM, Rohit Harchandani
> wrote:
> > Hey,
> > I am trying to create a plugin which makes use of postfilter. I know that
> > the collect function is called for every document
I am facing this problem in Solr 4.0 too. It's definitely not related to
autowarming. It just gets stuck while downloading a file, and there is no
way to abort the replication except restarting Solr.
On Wed, Jul 10, 2013 at 6:10 PM, adityab wrote:
> I have seen this in 4.2.1 too.
> Once replicati