and maximum version is required for APACHE SOLR
8.2.0?
2. A list of all OS minimum and maximum versions is required for Apache Solr
8.2.0?
Regards,
Rohit Rasal | Assistant Manager | NSDL e-Governance Infrastructure Limited |
(CIN U72900MH1995PLC095642)
Direct: 8347 |Email: roh
Thanks a lot Joel! No wonder I could not find it :-). I will try to see if
this will work for us.
Rohit
-Original Message-
From: Joel Bernstein [mailto:joels...@gmail.com]
Sent: Monday, June 12, 2017 1:01 PM
To: solr-user@lucene.apache.org
Subject: Re: Parallel API interface into
be an example of that.
Rohit
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Monday, June 12, 2017 11:56 AM
To: solr-user
Subject: Re: Parallel API interface into SOLR
Have you looked at Streaming Aggregation/Streaming Expressions/Parallel SQL etc?
Best,
Erick
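As a sketch of what a parallel streaming query looks like (collection, field, and worker values here are placeholders, not from the thread):

```
parallel(collection1,
         reduce(search(collection1,
                       q="*:*",
                       fl="id,a_s,a_i",
                       sort="a_s asc",
                       partitionKeys="a_s",
                       qt="/export"),
                by="a_s",
                group(sort="a_i desc", n="10")),
         workers="4",
         sort="a_s asc")
```

The expression is sent to the /stream handler; each of the 4 workers pulls a partition of the /export stream and reduces it, so the aggregation runs in parallel across the cluster.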
our processes would receive them.
Any ideas on how this could be done?
Rohit Jain
copy. It was causing OOM. We have changed that and are now making a deep copy.
Now it seems it is restricting the old deletes map to a capacity of 1K.
After deploying this change, we took another heap dump and did not find
this among the leak suspects. Please let me know if anyone has questions.
Thanks
Rohit
On
entries or not. I will update this thread about my
findings. I really appreciate your and Chris's responses.
Thanks
Rohit
On Mon, Mar 27, 2017 at 10:47 AM, Erick Erickson
wrote:
> Rohit:
>
> Well, whenever I see something like "I have this custom component..."
> I immediately
from a
committer. What do you guys think?
Thanks
Rohit
On Wed, Mar 22, 2017 at 1:36 PM, Rohit Kanchan
wrote:
> For commits we are relying on auto commits. We have define following in
> configs:
>
>
>
> 1
>
> 300
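The mailing-list archive stripped the XML tags from the quoted config, leaving only the values 1 and 300. A standard autoCommit block has this shape (which tag each value belonged to was lost, so the mapping below is a guess):

```xml
<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>300</maxTime>
</autoCommit>
```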
query Solr before deleting on the client
side. It is possible that there is a bug in this code, but I am not sure,
because when I run tests locally it does not show any issues. I am
trying to debug remotely now.
Thanks
Rohit
On Wed, Mar 22, 2017 at 9:57 AM, Chris Hostetter
wrote:
>
> : O
.
It would be better to know why the old deletes map is used there. I am still
digging; if I find something then I will share it.
Thanks
Rohit
On Tue, Mar 21, 2017 at 4:00 PM, Chris Hostetter
wrote:
>
> : facing. We are storing messages in solr as documents. We are running a
> : pruning
and because of this there are long GC
pauses, which first send a replica into recovery and then crash the leader
and replica.
-
Rohit
schema. In solr you can define dynamic fields too. This is all my
understanding.
-
Rohit
On Wed, Nov 23, 2016 at 10:27 AM, Prateek Jain J <
prateek.j.j...@ericsson.com> wrote:
>
> Hi All,
>
> I have started to use mongodb and solr recently. Please feel free to
> correct me w
be a good idea to go through the whole Solr code just
to make an external ValueSourceParser. Any ideas?
Regards,
Rohit Agarwal
On Wed, Oct 5, 2016 at 10:11 AM, Rohit Agarwal
wrote:
> Hi Hoss,
>
> Thanks for the response. Will make the necessary changes and get back to
> you.
>
>
Hi Hoss,
Thanks for the response. Will make the necessary changes and get back to
you.
Btw, this is just test code; the logic is yet to be implemented. What,
according to you, would be the best way to return the hash code?
Regards,
Rohit
On Oct 5, 2016 5:27 AM, "Chris Hostetter" wrote:
I am writing a custom ValueSourceParser. Everything is working fine except
when I use the ValueSourceParser for sorting: it stops working for calls with
different data.
E.g., if I make a query to sort by func(cost) desc,
it works.
Now if I change cost to another field, e.g. func(rating) desc,
it sorts
With Java 8, you also need to upgrade to a Tomcat that can run on Java 8.
I think Tomcat 8.x is compiled with Java 8. You could also switch your
existing Tomcat to Java 8, but that may break somewhere for the same
reason.
Thanks
Rohit Kanchan
On Sat, Sep 10, 2016 at 2:38 AM, Brendan
I think it is better to use the ZooKeeper data. SolrCloud updates ZooKeeper
with node status. If you are using SolrCloud, you can query the cluster
state API and get the status of a node from there; the cluster state
can give you full information about your SolrCloud. I hope this helps.
Thanks
Rohit
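For reference, a hedged sketch of the two usual ways to read that state (endpoint and paths are from the standard SolrCloud layout, not from this thread):

```
# Collections API: one call returns collection, shard, and replica state
GET http://<solr-host>:8983/solr/admin/collections?action=CLUSTERSTATUS

# Or read ZooKeeper directly: live nodes are ephemeral children of
# /live_nodes, and per-collection state lives under /collections
# (state.json per collection in recent versions, a global
# clusterstate.json in older ones)
```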
helps in solving your problem.
Thanks
Rohit Kanchan
On Tue, Aug 30, 2016 at 5:11 PM, Erik Hatcher
wrote:
> Personally, I don’t think a QParser(Plugin) is the right place to modify
> other parameters, only to create a Query object. A QParser could be
> invoked from an fq, not just a q,
client is getting loaded from somewhere in your Maven build. Check the
dependency tree of your pom.xml and see if this jar is being pulled in
from anywhere else; just exclude it in your pom.xml. I hope this
solves your issue.
Thanks
Rohit
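A hedged sketch of that workflow (coordinates are placeholders): first find where the conflicting jar comes from, then exclude it from the dependency that drags it in.

```xml
<!-- locate the source first:  mvn dependency:tree -Dverbose  -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>library-pulling-old-client</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```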
On Tue, Aug 2, 2016 at 9:44 AM, Steve Rowe wrote
data regarding elasticsearch/solr
performance in this area that I can refer to?
Thanks
Rohit
On Thu, Feb 4, 2016 at 11:48 AM, CKReddy Bhimavarapu
wrote:
> Hello Rohit,
>
> You can use the Banana project which was forked from Kibana
> <https://github.com/elastic/kibana>, and wo
aggregation
queries should be very fast.
Is Solr suitable for such use cases?
Thanks
Rohit
Thanks a lot, Shawn!
Just wanted to clarify: we have SolrCloud, so in my testing it is not a
single server I am hitting; I have multiple servers. At a time we have 4
leaders and 4 replicas which communicate using ZooKeeper.
So, in total we have 8 servers, and ZooKeeper is installed on
Thanks Shawn. I was looking at the Solr Admin UI and also using the top
command on the server.
I'm running an endurance test for 4 hours at 50 TPS, and I see the physical
memory keep increasing during that time; we also have a scheduled delta
import in that time frame which can import up to 4 million docs. Af
Hi All,
I have just started on a new project using SolrCloud, and during my
performance testing I have been hitting OOM issues. The thing I
notice most is that physical memory keeps increasing and never comes back
to its original level.
I'm indexing 10 million documents and have 4 nodes as lea
Rohit
On Wed, Oct 9, 2013 at 4:14 PM, Erick Erickson wrote:
> Ah, I think you're misunderstanding the nature of post-filters.
> Or I'm confused, which happens a lot!
>
> The whole point of post filters is that they're assumed to be
> expensive (think ACL calculation)
of ids that the post filter is receiving reduces.
Thanks,
Rohit
On Tue, Oct 8, 2013 at 8:29 PM, Erick Erickson wrote:
> Hmmm, seems like it should. What's our evidence that it isn't working?
>
> Best,
> Erick
>
> On Tue, Oct 8, 2013 at 4:10 PM, Rohit Harchandani
>
seem to work:
&fq={!cache=false cost=200}field:value
Thanks,
Rohit
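For context, a hedged sketch of how cost interacts with caching: non-cached filter queries run in ascending cost order, and a clause whose query implements the PostFilter interface runs as a post filter after all cheaper clauses once its cost is 100 or more, e.g.:

```
q=*:*
&fq={!cache=false cost=50}cheap_field:value
&fq={!cache=false cost=200}expensive_post_filter_field:value
```

Field names above are placeholders; the second clause only behaves as a post filter if the underlying query actually implements PostFilter.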
s.
>
> You must flatten your data to achieve any correspondence.
>
> Multivalued field are a powerful feature of Solr, but you must be
> extremely careful to use them only in moderation.
>
> -- Jack Krupansky
>
> -Original Message- From: Rohit Kuma
i.e. should return me only doc id 1.
Thanks,
Rohit Kumar
*
0.0
0.0
0.0
5.0
143.0
Please help.
Thanks,
Rohit Kumar
year_2004, year_2005, end_2005
schoolNameWithTermOriginal:Canterbury University||2001-2005
Please suggest if this is a correct approach or if there is a better way to
do the same.
I am using Solr 4.3.
Thanks,
Rohit Kumar
fields to
the docs using DocTransformer ?
Thanks,
Rohit
Basically i see it is looking up the cache and getting a hit, but it still
seems to be collecting all the documents again.
Thanks,
Rohit
On Thu, Aug 1, 2013 at 4:37 PM, Rohit Harchandani wrote:
> Hi,
> I did finally manage to this. I get all the documents in the post filter
> and
I am facing this problem in Solr 4.0 too. It's definitely not related to
autowarming. It just gets stuck while downloading a file, and there is no
way to abort the replication except restarting Solr.
On Wed, Jul 10, 2013 at 6:10 PM, adityab wrote:
> I have seen this in 4.2.1 too.
> Once replicati
?
Thanks,
Rohit
On Wed, Jul 10, 2013 at 6:10 PM, Yonik Seeley wrote:
> On Wed, Jul 10, 2013 at 6:08 PM, Rohit Harchandani
> wrote:
> > Hey,
> > I am trying to create a plugin which makes use of postfilter. I know that
> > the collect function is called for every document
.
Thanks
Rohit Kumar
Hi,
I have a scenario.
String array = ["Input1 is good", "Input2 is better", "Input2 is sweet",
"Input3 is bad"]
I want to compare the string array against the given input :
String inputarray= ["Input1", "Input2"]
It involves no indexes. I just want to use the power of string search to do
a r
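Since no index is involved, a plain in-memory scan may be all that is needed; a minimal sketch (class and method names are mine, not from the thread):

```java
import java.util.List;
import java.util.stream.Collectors;

// Plain substring matching, no Solr index involved. At this scale a
// simple scan is enough; Solr/Lucene only pays off once the corpus is
// large or needs analysis, scoring, or ranking.
public class ArrayMatch {
    static List<String> matching(List<String> docs, List<String> terms) {
        // Keep every document that contains at least one of the inputs.
        return docs.stream()
                   .filter(d -> terms.stream().anyMatch(d::contains))
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> docs = List.of("Input1 is good", "Input2 is better",
                                    "Input2 is sweet", "Input3 is bad");
        System.out.println(matching(docs, List.of("Input1", "Input2")));
        // prints [Input1 is good, Input2 is better, Input2 is sweet]
    }
}
```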
Hi Amit,
Great article. I tried it and it works well. I am new to developing in Solr
and had a question: do you know if there is a way to access all the matched
ids before collect is called?
Thanks,
Rohit
On Sat, Nov 10, 2012 at 1:12 PM, Erick Erickson wrote:
> That'll teach _me_
Hey,
I am trying to create a plugin which makes use of a PostFilter. I know that
the collect function is called for every document matched, but is there a
way I can access all the matched documents up to this point before collect
is called on each of them?
Thanks,
Rohit
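To make the answer concrete: a post filter never sees "all matched documents so far", because collection is a one-at-a-time streaming callback. A self-contained sketch of the delegating-collector pattern (plain Java stand-ins, not the actual Solr PostFilter/DelegatingCollector classes):

```java
import java.util.ArrayList;
import java.util.List;

// Documents stream through collect(doc) one at a time, so there is no
// point at which a complete matched-id set exists before the filter runs.
interface Collector {
    void collect(int doc);
}

// Terminal collector: gathers whatever survives the chain.
class ListCollector implements Collector {
    final List<Integer> collected = new ArrayList<>();
    public void collect(int doc) { collected.add(doc); }
}

// Stand-in for a post filter: applies its (expensive) per-document check
// and delegates only the survivors downstream.
class EvenIdPostFilter implements Collector {
    private final Collector delegate;
    EvenIdPostFilter(Collector delegate) { this.delegate = delegate; }
    public void collect(int doc) {
        if (doc % 2 == 0) delegate.collect(doc);
    }
}

public class PostFilterSketch {
    public static void main(String[] args) {
        ListCollector sink = new ListCollector();
        Collector chain = new EvenIdPostFilter(sink);
        for (int doc : new int[] {1, 2, 3, 4, 5, 6}) chain.collect(doc);
        System.out.println(sink.collected); // prints [2, 4, 6]
    }
}
```

The Solr API follows the same shape: the PostFilter's DelegatingCollector decides per document whether to call super.collect, so any "all matched ids" view has to be built by the filter itself as documents arrive.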
hanks
*
On Fri, Jul 5, 2013 at 8:30 AM, Jack Krupansky wrote:
> 1. Do you have an update processor chain that doesn't have RunUpdate in it?
>
> 2. Is the solrconfig directive missing?
>
> 3. Is _version_ missing from your schema?
>
> -- Jack Krupansky
>
> -Original
hey come up at the
> expected frequency?
>
>
> On 4 July 2013 15:35, Rohit Kumar wrote:
>
> > My solr config has :
> >
> >
> >15000
> >false
> >
> >
> >
> >
> > 1000
> &
My solr config has :
15000
false
1000
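With the tags stripped by the archive, these values most plausibly correspond to a hard autoCommit with openSearcher=false plus a soft-commit interval (the tag assignment is an assumption):

```xml
<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>1000</maxTime>
</autoSoftCommit>
```

If that reading is right, it also bears on the 0-hits symptom: with openSearcher=false a hard commit makes documents durable but not visible; they only become searchable when a soft commit (or a commit with openSearcher=true) opens a new searcher.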
The machine is Ubuntu 13 / 4 cores / 16GB RAM, with 6GB given to Solr
running on Tomcat.
Still, when I am adding documents to Solr and searching, it returns 0
hits. It takes long before the document actu
Need help to figure out the error below.
*Code Snippet*:
public class ConnectionComponent extends SearchComponent {
  @Override
  public void process(ResponseBuilder rb) throws IOException {
    NamedList nList = new SimpleOrderedMap();
    NamedList nl = new SimpleOrderedMap();
    List ld =
fields were downloaded to the temp
folder, but it was never pulled into the index directory on the slave. The
only file which made it was the lock file. This problem does not happen
anymore?
Thanks,
Rohit
OK, but what are the problems when bringing up multiple instances reading
from the same data directory?
Also, how do I re-open the searchers without restarting Solr?
Thanks,
Rohit
On Tue, Nov 13, 2012 at 11:20 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi,
>
> I
have all these documents in the same shard? I
went for this approach because the shard which is queried the most is small
and gives a lot of benefit in terms of time taken for all the stats
queries. This shard is only about 5 gb whereas the entire index will be
about 50 gb.
Thanks for the help,
Rohit
ids and
getting the remaining fields is turning out to be really slow. It takes a
while to search for a list of unique ids. Is there any config change to
make this process faster?
Also what does isDistrib=false mean when solr generates the queries
internally?
Thanks,
Rohit
On Fri, Oct 19, 2012 at
"A" which has the smallest index size
(4gb).
The query is made to a "master" shard which by default goes to all 3 shards
for results. (Also, the query that I am trying matches documents only
in shard "A" mentioned above.)
Will try debugQuery now and post it here.
, I apply
an XSLT transformation to the response to get a comma-separated list of
unique keys. Is there a way to improve this speed? Would sharding help in
this case?
I am currently using solr 4.0 beta in my application.
Thanks,
Rohit
Thanks everyone. Adding the _version_ field in the schema worked.
Deleting the data directory works for me, but I was not sure why deleting
using curl was not working.
On Wed, Sep 5, 2012 at 1:49 PM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Rohit:
>
> If
I am talking about physical memory here; we start with -Xms of 2GB, but very
soon it goes as high as 45GB. The memory never comes down, even when not a
single user is using the system.
Regards,
Rohit
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: 03 September
I am currently using StandardDirectoryFactory, would switching directory
factory have any impact on the indexes?
Regards,
Rohit
-Original Message-
From: Claudio Ranieri [mailto:claudio.rani...@estadao.com]
Sent: 03 September 2012 10:03
To: solr-user@lucene.apache.org
Subject: RES
Hi Lance,
Thanks for explaining this, it does push out all other programs.
Regards,
Rohit
Mobile: +91-9901768202
-Original Message-
From: Lance Norskog [mailto:goks...@gmail.com]
Sent: 03 September 2012 01:00
To: solr-user@lucene.apache.org
Subject: Re: Solr Not releasing memory
1) I
, but that doesn't seem to help either.
Regards,
Rohit
Cool. Thanks. I will have a look at this.
But in this case, if all the files on the master are new, will the entire
index on the slave be replaced or will it add to whatever is currently
present on the slave?
Thanks again,
Rohit
On Tue, Aug 14, 2012 at 6:04 PM, Walter Underwood wrote:
> Why
d
not change after updating the symlinks.
(org.apache.lucene.store.MMapDirectory:org.apache.lucene.store.MMapDirectory@/bb/mbigd/mbig2580/srchSolr/apache-solr-4.0.0-ALPHA/example/solr/data/index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@2447e380)
Is there a way to update this dynamically? Thanks a lot
Regards,
Rohit Harchandani
I can cross check our shards once again, but I am sure this is not the case.
Regards,
Rohit
Mobile: +91-9901768202
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 08 August 2012 21:04
To: solr-user@lucene.apache.org
Subject: Re: numFound changes on
numFound- 56000
Second time
query=abc&start=4000&rows=4000
numFound- 55998
What can cause this?
Regards,
Rohit
Hi Brendan,
I am not sure I get what's being suggested. Our delete worked fine, but now
no new data is going into the system.
Could you please shed some more light?
Regards,
Rohit
-Original Message-
From: Brendan Grainger [mailto:brendan.grain...@gmail.com]
Sent: 19 July 2012 17:33
We deleted some data from Solr, after which Solr is not accepting any
commits. What could be wrong?
We don't see any errors in the logs or anywhere else.
Regards,
Rohit
Hi,
Just wanted to know how much memory can Tomcat running on Windows Enterprise
RC2 server effectively utilize. Is there any limitation to this?
Regards,
Rohit
but I don't know how to get it to work with Solr.
Has anyone else worked on this earlier?
Regards,
Rohit
Thanks for the pointers Jack, actually the strange part is that the
defaultSearchField element is present and uncommented yet not working.
docKey
searchText
Regards,
Rohit
-Original Message-
From: Jack Krupansky [mailto:j...@basetechnology.com]
Sent: 11 June 2012 20:35
To: solr-user
Hi Jack,
I understand that df would make this work normally, but why did
defaultSearchField stop working suddenly? I notice that there is talk of
deprecating it, but even then it should continue to work, right?
Regards,
Rohit
-Original Message-
From: Jack Krupansky [mailto:j
needs to be
provided every time, which was not the case earlier. What might be causing
this?
Regards,
Rohit
<https://issues.apache.org/jira/browse/SOLR-1903>
https://issues.apache.org/jira/browse/SOLR-1903, is there any other fix to
solve this problem. I am currently using solr 3.6.
Regards,
Rohit
Hi Erick,
Yes I have enabled the following setting,
internal
5000
1
Will try with higher timeouts. I tried the scp command and the link didn't
break once; I was able to copy the entire 300GB of files, so I am not too
sure this is a network problem.
Regards,
Rohit
Mobile
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:268)
at
org.apache.solr.handler.ReplicationHandler$1.run(ReplicationHandler.java:149)
Actually the replication starts, but is never able to complete and then
restarts again.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
?
Regards,
Rohit
increasing the commit time, though I cannot find a reason. Are they
related in any way?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: 13 April 2012 11:01
To: solr-user@lucene.apache.org
Subject
Hi Shawn,
Thanks for the information, let me give this a try, since this is a live box I
will try it during the weekend and update you.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: 13
The machine has a total RAM of around 46GB. My biggest concern is the Solr
index time gradually increasing until commits stop because of timeouts; our
commit rate is very high, but I am not able to find the root cause of the
issue.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http
Thanks for pointing these out, but I still have one concern: why is the
virtual memory running at 300GB+?
Regards,
Rohit
-Original Message-
From: Tirthankar Chatterjee [mailto:tchatter...@commvault.com]
Sent: 12 April 2012 13:43
To: solr-user@lucene.apache.org
Subject: Re: Solr 3.5
Thanks for pointing these out, but I still have one concern: why is the
virtual memory running at 300GB+?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Bernd Fehling [mailto:bernd.fehl...@uni-bielefeld.de]
Sent: 12 April 2012 11:58
To
The operating system is Linux (Ubuntu).
No, not using the spellchecker.
Only language detection in my update chain.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Jan Høydahl [mailto:jan@cominvent.com]
Sent: 12 April 2012 12:50
To: solr-user
Hi Tirthankar,
The average size of the documents would be a few KB; this is mostly tweets
being saved. The two cores are storing different kinds of data and
nothing else.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Tirth
fine a few days back?
Regards,
Rohit
Mobile: +91-9901768202
About Me: <http://about.me/rohitg> http://about.me/rohitg
Memory details:
export JAVA_OPTS="$JAVA_OPTS -Xms6g -Xmx36g -XX:MaxPermSize=5g"
Solr Config:
false
10
32
1
1000
1
What could be causing this, as everything was running fine a few days back?
Regards,
Rohit
Mobile: +91-9901768202
About Me: <
le.
I have mentioned the first entity as the root entity and have given the
threads parameter as 4. I can attach the file if you need it to
understand better.
Any help would be appreciated.
Regards,
Rohit K
Got the problem: I need to use the "types=" parameter so that characters like
# and @ are not treated as delimiters in WordDelimiterFilterFactory.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
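A sketch of that setup (the file name is conventional, not from the thread): the types file remaps @ and # from delimiter to ALPHA so WordDelimiterFilterFactory keeps them inside the token.

```
<!-- schema.xml, inside the field type's analyzer chain -->
<filter class="solr.WordDelimiterFilterFactory" types="wdfftypes.txt"/>

# wdfftypes.txt (character => type)
@ => ALPHA
# => ALPHA
```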
-Original Message-
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: 16 February 201
https://issues.apache.org/jira/browse/SOLR-2059
But searching for @username is also returning results for just username, and
#hashtag is just returning results for hashtag. How can I make the search
match the symbol too?
Regards,
Rohit
Hi,
We are storing a large number of tweets and blog feeds in Solr.
Now, if the user searches for Twitter mentions like @rohit, records which
just contain the word rohit are also returned, even if we do an exact
match on "@rohit". I understand this happens because
(AprEndpoint.java:1675)
at java.lang.Thread.run(Unknown Source)
Regards,
Rohit
Thanks, Yury Kats.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 13 December 2011 11:17
To: solr-user@lucene.apache.org
Subject: Re: Virtual Memory very high
If you allow me to chime in, is
What are the difference in the different DirectoryFactory?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Yury Kats [mailto:yuryk...@yahoo.com]
Sent: 10 December 2011 12:11
To: solr-user@lucene.apache.org
Subject: Re: Virtual Memory very
[truncated `top` output: the java process consuming CPU and memory, with the
top and init processes near zero]
Regards,
Rohit
,
Rohit
I have saved tweets related to some keywords in Solr; can Solr be used to
generate a tag cloud of important words from these tweets?
Regards,
Rohit
Yes, the problem is the length of the URL; with a lot of filters coming
in, the length goes beyond what is allowed. But I guess extending the URL
length would be a better approach.
Regards,
Rohit
-Original Message-
From: Sujit Pal [mailto:sujit@comcast.net]
Sent: 14 October 2011
I want to query, right now I use it in the following way,
CommonsHttpSolrServer server = new CommonsHttpSolrServer("URL HERE");
SolrQuery sq = new SolrQuery();
sq.add("q",query);
QueryResponse qr = server.query(sq);
Regards,
Rohit
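When the assembled query string gets long (the URL-length problem mentioned earlier in this thread), SolrJ can send the request as a POST instead of a GET; a hedged fragment against the SolrJ API of that era (verify the overload exists in your version):

```
QueryResponse qr = server.query(sq, SolrRequest.METHOD.POST);
```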
-Original Message-
From: Yur
https://issues.apache.org/jira/browse/SOLR-1709
The patch is not working on both the 3.1 and 3.4 versions; how else can I
apply the patch?
Regards,
Rohit
I have been using Solr 3.1 and am planning to upgrade to Solr 3.4. What are
the steps to follow, or anything that needs to be taken care of, specifically
for the upgrade?
Regards,
Rohit
approximately 50,000 documents and optimizing once a
day.
I am using the fq parameter during faceting, since all my queries are
bounded by datetime and by max and min auto_id. E.g., fq=createdOnGMT:[Date1 TO
Date2]&fq=id:[id1 TO id2]&facet=true&facet.field..
Regards,
Rohit
Mobile: +9
idea about sharding right now; if you could point me to some
resource on date-wise sharding.
Regards,
Rohit
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 17 September 2011 00:19
To: solr-user@lucene.apache.org
Subject: RE: Out of memory
: Actually I am
Thanks Dmitry, let me look into sharding concepts.
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 15 September 2011 10:15
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
If you
It's happening more in search, and search has become very slow, particularly
on the core with 69GB of index data.
Regards,
Rohit
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 15 September 2011 07:51
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
://haklus.com/crssConfig.xml
http://haklus.com/rssConfig.xml
http://haklus.com/twitterConfig.xml
http://haklus.com/facebookConfig.xml
Thanks again
Rohit
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 14 September 2011 10:23
To: solr-user@lucene.apache.org
a jconsole to my solr as suggested to get a better picture.
Regards,
Rohit
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: 14 September 2011 08:15
To: solr-user@lucene.apache.org
Subject: Re: Out of memory
Hi Rohit,
Do you use caching?
How big is your index in
(Mostly facet queries
based on date and string fields.)
After some time, about 18-20 hours, Solr goes out of memory, and the thread
dump doesn't show anything. How can I improve this besides adding more RAM
to the system?
Regards,
Rohit
Mobile: +91-9901768202
About Me: http://about.me/rohitg
-Ori
past 15 days, unless
someone queries for it explicitly. How can this be achieved?
Will it be better to go for Solr replication or distribution if
there is little option left?
Regards,
Rohit
Mobile: +91-9901768202
About Me: <http://about.me/rohitg> http://about.me/rohitg
Nope, not getting anything here either.
Regards,
Rohit
-Original Message-
From: Jerry Li [mailto:zongjie...@gmail.com]
Sent: 08 September 2011 08:09
To: solr-user@lucene.apache.org
Subject: Re: Unable to generate trace
what about kill -3 PID command?
On Thu, Sep 8, 2011 at 4:06 PM, Rohit