https://issues.apache.org/jira/browse/SOLR-11739.
The operations would succeed, but you might not be getting the status of the
task you think you are.
Best,
Erick
On Tue, Apr 10, 2018 at 9:25 AM, Petersen, Robert (Contr)
wrote:
> HI Erick,
>
>
> I *just* found that parameter in the guide... it was waaay down at the
he "async" property, see:
https://lucene.apache.org/solr/guide/6_6/collections-api.html
There's also a way to check the status of the backup running in the background.
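For example (backup name, location, and request id below are made up; the parameters are the documented ones):
http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly&collection=addrsearch&location=/backups&async=backup-001
http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=backup-001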
Best,
Erick
On Mon, Apr 9, 2018 at 11:05 AM, Petersen, Robert (Contr)
wrote:
Shouldn't this just create the backup file(s) asynchronously? Can the timeout
be adjusted?
Solr 7.2.1 with five nodes; the addrsearch collection is five shards x five
replicas with "numFound":38837970 docs.
Thx
Robi
http://myServer.corp.pvt:8983/solr/admin/collections?action=BACKUP&name=a
Hi all,
So for an initial CDCR setup, the documentation says a bulk load should be performed
first, otherwise CDCR won't keep up. Does "bulk load" include an ETL
process doing rapid atomic updates one doc at a time (with multiple threads), so
like 4K docs per minute, assuming bandwidth between DCs?
OK, just restarting all the solr nodes did fix it. Since they are in production
I was hesitant to do that.
From: Petersen, Robert (Contr)
Sent: Monday, January 8, 2018 12:34:28 PM
To: solr-user@lucene.apache.org
Subject: solr 5.4.1 leader issue
Hi, got two out of my three servers thinking they are replicas on one shard and
getting exceptions. Can I just restart the solr
instances, the zookeeper instances, or both, or is there another better way
without restarting everything?
Thx
Robi
____
From: Petersen, Robert (Contr)
Sent: Monday, January 8, 2018 12:34:28 PM
To: solr-user@lucene.apache.org
Subject: solr 5.4.1 leader issue
I'm on zookeeper 3.4.8
From: Petersen, Robert (Contr)
Sent: Monday, January 8, 2018 12:34:28 PM
To: solr-user@lucene.apache.org
Subject: solr 5.4.1 leader issue
Hi, got two out of my three servers thinking they are replicas on one shard and
getting exceptions; wondering what is the easiest way to fix this? Can I just restart
zookeeper across the servers? Here are the exceptions:
TY
Robi
ERROR
null
RecoveryStrategy
Error while trying to recover.
core=custsea
I remember when FAST (when it was still FAST) came to our enterprise to pitch
their search when we were looking to replace our AltaVista search engine with
*something*, and they demonstrated that relevance tool for the business side. While
that thing was awesome, I've never seen anything close to it
You are using Cloudera? Sounds like a question for them...
From: Abhi Basu <9000r...@gmail.com>
Sent: Thursday, December 14, 2017 1:27:23 PM
To: solr-user@lucene.apache.org
Subject: SOLR Rest API for monitoring
Hi All:
I am using CDH 5.13 with Solr 4.10. Trying t
From what I have read, you can only upgrade to the next major version number
without using a tool to convert the indexes to the newer version. But that is
still perilous due to deprecations etc.
So I think the best advice out there is to spin up a new farm on 7.1 (especially
from 4.x), make a ne
From: Petersen, Robert (Contr)
Sent: Monday, November 6, 2017 5:05:31 PM
To: solr-user@lucene.apache.org
Subject: Can someone help? Two level nested doc... ChildDocTransformerFactory
syntax...
OK, no faceting, no filtering, I just want the hierarchy to come back in the
results. Can't quite get it... googled all over the place too.
Doc:
{ id : asdf, type_s:customer, firstName_s:Manny, lastName_s:Acevedo,
address_s:"123 Fourth Street", city_s:Gotham, tn_s:1234561234,
_childDocuments_:
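For what it's worth, a sketch of the query side (assuming type_s:customer marks the root docs, as in the doc above):
q=type_s:customer&fl=*,[child parentFilter=type_s:customer limit=100]
Note that in this Solr version the [child] transformer returns all descendants as one flat list under the parent, so a two-level hierarchy comes back flattened; that may be what's making this hard to get.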
Actually I can't believe they're deprecating UseConcMarkSweepGC; that was the
one that finally made solr 'sing' with no OOMs!
I guess they must have found something better, have to look into that...
Robi
From: Chris Hostetter
Sent: Monday, November 6, 2017 3
Hi Guys,
Anyone else been noticing this this msg when starting up solr with java 9?
(This is just an FYI and not a real question)
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was
deprecated in version 9.0 and will likely be removed in a future release.
Java HotSpot(TM)
rst
service that is trying InfluxDB.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 6, 2017, at 1:31 PM, Petersen, Robert (Contr)
> wrote:
>
> Hi Walter,
>
>
> Yes, now I see it. I'm wondering about using Grafana and New Relic
current solr monitoring favorites?
Look back down the string to my post. We use Grafana.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 6, 2017, at 11:23 AM, Petersen, Robert (Contr)
> wrote:
>
> Interesting! Finally a Grafana user...
(https://github.com/prometheus/node_exporter)
- Prometheus JMX exporter to export "Solr metrics" (Cache usage, QPS,
Response times...) (https://github.com/prometheus/jmx_exporter)
- Grafana to visualize all the data scraped by Prometheus (
https://grafana.com/)
Best regards
Daniel Ortega
2017-11-06 20:13 GMT+01:00 Pet
InfluxDB.
>
> I’m still working out the kinks in some of the more complicated queries, but
> the data is all there. I also want to expand the servlet filter to report
> HTTP response codes.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwoo
Hi Guys,
I was playing with payloads example as I had a possible use case of alternate
product titles for a product.
https://lucidworks.com/2017/09/14/solr-payloads/
bin/solr start
bin/solr create -c payloads
bin/post -c payloads -type text/csv -out yes -d $'id,vals_dpf\n1,one|1.0
two|2.0 thr
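The query side in that article uses the payload() function, roughly (field and term taken from the example data above; exact syntax per the 6.x function query docs):
http://localhost:8983/solr/payloads/select?q=*:*&fl=id,p:payload(vals_dpf,one)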
OK I'm probably going to open a can of worms here... lol
In the old old days I used PSI probe to monitor solr running on tomcat which
worked ok on a machine by machine basis.
Later I had a Grafana dashboard on top of Graphite monitoring, which was really
nice looking but kind of complicated t
Thanks guys! I kind of suspected this would be the best route and I'll move
forward with a fresh start on 7.x as soon as I can get ops to give me the
needed machines! 😊
Best
Robi
From: Erick Erickson
Sent: Thursday, November 2, 2017 8:17:49 AM
To: solr-user
S
Hi Guys,
I just took over the care and feeding of three poor neglected solr 5.4.1 cloud
clusters at my new position. While spinning up new collections and supporting
other business initiatives I am pushing management to give me the green light
on migrating to a newer version of solr. The last
Perhaps some people, like those using DIH to feed their index, might not
have that luxury, and copyField is the better way for them. If you have an
application you can do it either way. I have done it both ways in different
situations.
Robi
-Original Message-
From: Steven White [mail
Hi
Overall I think you are mixing up your terminology. What used to be called a
'core' is now called a 'collection' in solr cloud. In the old master slave
setup, you made separate cores and replicated them to all slaves. Now they
want you to think of them as collections and let the cloud ma
-Original Message-
From: Lan [mailto:dung@gmail.com]
Sent: Monday, March 03, 2014 1:24 PM
To: solr-user@lucene.apache.org
Subject: Re: network slows when solr is running - help
How frequently are you committing? Frequent commits can slow everything
Are you on a virtual machine? (Other machines causing slowdowns.)
Another possible option is that the network card is offloading processing
onto the CPU, which is introducing latency when the CPU is under load.
On Fri, Feb 28, 2014 at 4:11 PM, Petersen, Robert <
robert.peter...@mail.rakuten.com> wrote:
> Hi guys,
Hi guys,
Got an odd thing going on right now. Indexing into my master server (solr
3.6.1) has slowed, and it is because when solr runs, ping shows latency. When I
stop solr though, ping returns to normal. This has been happening
occasionally; rebooting didn't help. This is the first time I no
I agree with Erick, but if you want the special characters to count in
searches, you might consider not just stripping them out but replacing them
with textual placeholders (which would also have to be done at indexing time).
For instance, I replace C# with csharp and C++ with cplusplus during
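A sketch of one way to wire that replacement into the analysis chain, using char filters on both the index and query analyzers (patterns here are illustrative, not my exact config):
<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="C#" replacement="csharp"/>
<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="C\+\+" replacement="cplusplus"/>
Putting the same char filters in both analyzers keeps the index and query sides consistent.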
Hi Bryan,
From what I've seen, it will only get rid of the deletes in the segments that
the commit merged, and there will be some residual deleted docs still in the
index. It doesn't do the full rewrite. Even if you play with merge factors
etc., you'll still have lint. In your situation
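For reference, the flag in question rides along on a commit, either in an update message or on the URL (core name made up):
<commit expungeDeletes="true"/>
http://localhost:8983/solr/mycore/update?commit=true&expungeDeletes=true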
Hi Daniel,
How about trying something like this (you'll have to play with the boosts to
tune this), search all the fields with all the terms using edismax and use the
minimum should match parameter, but require all terms to match in the
allMetadata field.
https://wiki.apache.org/solr/Extend
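Roughly like this (field names and the mm value invented for illustration):
q={!edismax qf='title^3 description allMetadata' mm='2<75%'}term1 term2 term3
fq={!lucene q.op=AND df=allMetadata}term1 term2 term3
The fq is what enforces the all-terms-must-match requirement on allMetadata, while edismax scores across the other fields.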
My use case is basically to do a dump of all contents of the index with no
ordering needed. It's actually to be a product data export for third parties.
The unique key is product sku. I could take the min sku and range query up to the
max sku, but the skus are not contiguous because some get turne
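The gaps shouldn't matter if you walk the key instead of computing fixed windows: sort by sku and use the last sku seen as the next exclusive lower bound. A sketch (LAST_SKU is whatever the previous page ended on; fl fields invented):
/select?q=*:*&fq=sku:{LAST_SKU TO *}&sort=sku asc&rows=1000&fl=sku,name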
Hi solr users,
We have a new use case where we need to make a pile of data available as XML to a
client, and I was thinking we could easily put all this data into a solr
collection and the client could just do a star search and page through all the
results to obtain the data we need to give them.
Hi,
I'd go with (2) also but using dynamic fields so you don't have to define all
the storeX_price fields in your schema but rather just one *_price field. Then
when you filter on store:store1 you'd know to sort with store1_price and so
forth for units. That should be pretty straightforward.
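Concretely, something like this in the schema (assuming a stock float type):
<dynamicField name="*_price" type="float" indexed="true" stored="true"/>
and then at query time:
fq=store:store1&sort=store1_price asc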
Hi Erick,
I like your idea, FWIW please also leave room for boost by function query which
takes many numeric fields as input but results in a single value. I don't know
if this counts as a really clever function but here's one that I currently use:
{!boost
b=pow(sum(log(sum(product(boosted,90
This would describe the facet parameters we're talking about:
http://wiki.apache.org/solr/SimpleFacetParameters
Query something like this:
http://localhost:8983/solr/select?q=*:*&fl=id&rows=0&facet=true&facet.limit=-1&facet.field=&facet.mincount=2
Then filter on each facet returned with a filter
Hi
Perhaps you could query for all documents asking for the id field to be
returned and then facet on the field you say you can key off of for duplicates.
Set the facet mincount to 2, then you would have to filter on each facet value
and page through all doc IDs (except skip the first document
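Putting the two replies together, the queries would look roughly like this (dedupeKey_s is a made-up name for the field you key off of):
/select?q=*:*&rows=0&facet=true&facet.field=dedupeKey_s&facet.mincount=2&facet.limit=-1
then, for each facet value returned:
/select?q=*:*&fq=dedupeKey_s:"some value"&fl=id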
I have seen this happen before in our 3.6.1 deployment. It seemed related to
high JVM memory consumption on the server when our index got too big (ie we
were close to getting OOMs). That is probably why restarting solr sort of
fixes it, assuming the file it is stuck on is the final file and i
Hi guys,
We have used an integer as our unique key since solr 1.3 with no problems at
all. We never thought of using anything else because our solr unique key is
based upon our product sku data base field which is defined as an integer also.
We're on solr 3.6.1 currently.
Thanks
Robi
-
Hi Mark
Yes, it is something we implemented also. We just try various subsets of the
search terms when there are zero results. To increase performance for all
these searches we return only the first three results and no facets so we can
simply display the result counts for the various subsets
Shawn Heisey [mailto:s...@elyograg.org]
Sent: Wednesday, July 10, 2013 5:34 PM
To: solr-user@lucene.apache.org
Subject: Re: expunging deletes
On 7/10/2013 5:58 PM, Petersen, Robert wrote:
> Using solr 3.6.1 and the following settings, I am trying to run without
> optimizes. I used to optim
Hi guys,
Using solr 3.6.1 and the following settings, I am trying to run without
optimizes. I used to optimize nightly, but sometimes the optimize took a very
long time to complete and slowed down our indexing. We are continuously
indexing our new or changed data all day and night. After a f
Time Remaining: 88091277s, Speed: 281 bytes/s
-Original Message-
From: Petersen, Robert [mailto:robert.peter...@mail.rakuten.com]
Sent: Tuesday, July 09, 2013 1:22 PM
To: solr-user@lucene.apache.org
Subject: replication getting stuck on a file
Hi
My solr 3.6.1 slave farm is suddenly
Hi
My solr 3.6.1 slave farm is suddenly getting stuck during replication. It
seems to stop on a random file on various slaves (not all) and not continue.
I've tried stopping and restarting tomcat etc. but some slaves just can't get the
index pulled down. Note there is plenty of space on the h
I've been trying it out on solr 3.6.1 with a 32GB heap and G1GC seems to be
more prone to OOMEs than CMS. I have been running it on one slave box in our
farm and the rest of the slaves are still on CMS and three times now it has
gone OOM on me whereas the rest of our slaves kept chugging along
rate a lot. That is OK, because users are
getting faster responses than they would from Solr. A 5% hit rate may be OK
since you have that front end HTTP cache.
The Netflix index was updated daily.
wunder
On Jun 19, 2013, at 10:36 AM, Petersen, Robert wrote:
> Hi Walter,
>
> I used
36 PM, Petersen, Robert
wrote:
> OK thanks, will do. Just out of curiosity, what would having that set way
> too high do? Would the index become fragmented or what?
>
> -Original Message-
> From: Michael McCandless [mailto:luc...@mikemccandless.com]
> Sent: Wednesday, June 19,
Subject: Re: TieredMergePolicy reclaimDeletesWeight
The default is 2.0, and higher values will more strongly favor merging segments
with deletes.
I think 20.0 is likely way too high ... maybe try 3-5?
Mike McCandless
http://blog.mikemccandless.com
On Tue, Jun 18, 2013 at 6:46 PM, Petersen, Robert
your document cache, too. I usually see about 0.75 or better on
that.
wunder
On Jun 18, 2013, at 10:22 AM, Petersen, Robert wrote:
> Hi Otis,
>
> Yes the query results cache is just about worthless. I guess we have too
> diverse of a set of user queries. The business unit h
Hi
In continuing a previous conversation, I am attempting to avoid having to do
optimizes on our continuously updated index in solr 3.6.1, and I came across a
mention of the reclaimDeletesWeight setting in this blog:
http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html
We
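For anyone searching later, the setting goes in the mergePolicy section of solrconfig.xml; on 3.6 something like this (the 3.0 value is just an example in the 3-5 range Mike suggests):
<mergePolicy class="org.apache.lucene.index.TieredMergePolicy">
  <double name="reclaimDeletesWeight">3.0</double>
</mergePolicy>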
to
facet.method=fc, however the JVM heap usage went down from about 20GB to 4GB.
André
On 06/17/2013 08:21 PM, Petersen, Robert wrote:
> Also some time ago I made all our caches small enough to keep us from getting
> OOMs while still having a good hit rate. Our index has about 50
solrconfig.xml files this is right above the merge
factor definition.
Otis
--
Solr & ElasticSearch Support -- http://sematext.com/
On Mon, Jun 17, 2013 at 8:00 PM, Petersen, Robert
wrote:
> Hi Upayavira,
>
> You might have gotten it. Yes we noticed maxdocs was way bigger than
is
--
Solr & ElasticSearch Support -- http://sematext.com/
On Mon, Jun 17, 2013 at 2:21 PM, Petersen, Robert
wrote:
> Hi Otis,
>
> Right I didn't restart the JVMs except on the one slave where I was
> experimenting with using G1GC on the 1.7.0_21 JRE. Also some time ago I
> m
config for the
TieredMergePolicy, and therefore don't get to use it, seeing the old behaviour
which does require periodic optimise.
Upayavira
On Mon, Jun 17, 2013, at 07:21 PM, Petersen, Robert wrote:
> Hi Otis,
>
> Right I didn't restart the JVMs except on the one slave where I wa
hat? How many fields do you
index, facet, or group on?
Otis
--
Performance Monitoring - http://sematext.com/spm/index.html
Solr & ElasticSearch Support -- http://sematext.com/
On Fri, Jun 14, 2013 at 8:04 PM, Petersen, Robert
wrote:
> Hi guys,
>
> We're on solr 3.6.1 and
Hi guys,
We're on solr 3.6.1 and I've read the discussions about whether to optimize or
not to optimize. I decided to try not optimizing our index as was recommended.
We have a little over 15 million docs in our biggest index and a 32gb heap for
our jvm. So without the optimizes the index fo
Hi
It will not be double the disk space at all. You will not need to store the
field you search; only the field being returned needs to be stored.
Furthermore, if you are not searching the XML field you will not need to index
that field, only store it.
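In schema terms (field and type names invented):
<field name="searchable_text" type="text" indexed="true" stored="false"/>
<field name="xml_blob" type="string" indexed="false" stored="true"/>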
Hope that helps,
Robi
-Original Mes
Hey, I just want to verify one thing before I start doing this: function
queries only require fields to be indexed but don't require them to be stored,
right?
-Original Message-
From: Petersen, Robert [mailto:robert.peter...@mail.rakuten.com]
Sent: Tuesday, April 23, 2013 4:39
Good info, Thanks Hoss! I was going to add a more specific fl= parameter to my
queries at the same time. Currently I am doing fl=*,score so that will have to
be changed.
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Tuesday, April 23, 2013 4:18 PM
T
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Tuesday, February 05, 2013 2:53 PM
To: solr-user@lucene.apache.org
Subject: Re: Really bad query performance for date range queries
On 2/5/2013 3:19 PM, Petersen, Robert wrote:
> Hi Shawn,
>
> I've looked at th
Hi Shawn,
I've looked at the Zing JVM before but don't use it. jHiccup looks like a
really useful tool. Can you tell us how you are starting it up? Do you start
it wrapping the app container (i.e. tomcat / jetty)?
Thanks
Robi
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg
Thanks Shawn. Actually now that I think about it, Yonik also mentioned
something about lucene number representation once in reply to one of my
questions. Here it is:
Could you also tell me what these `#8;#0;#0;#0;#1; strings represent in the
debug output?
"That's internally how a number is e
Hi Jamel,
You can start solr slaves with them pointed at a master and then turn off
replication in the admin replication page.
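The same thing can be scripted against the replication handler if you'd rather not click through the admin page (core name made up):
http://slave:8983/solr/mycore/replication?command=disablepoll
and command=enablepoll turns it back on.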
Hope that helps,
-Robi
Robert (Robi) Petersen
Senior Software Engineer
Search Department
-Original Message-
From: Jamel ESSOUSSI [mailto:jamel.essou...@gma
as fast as we can.
Thanks!
Robi
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Tuesday, January 29, 2013 2:24 PM
To: solr-user@lucene.apache.org
Subject: Re: queryResultCache *very* low hit ratio
On 1/29/2013 1:36 PM, Petersen, Robert wrote:
> My queryResultC
real user queries
don't experience increased latency. If you remove all auto-warming of the
query result cache, you may want to add static warming entries for these fields.
-Yonik
http://lucidworks.com
On Tue, Jan 29, 2013 at 3:36 PM, Petersen, Robert wrote:
> Hi solr users,
>
> My
Hi solr users,
My queryResultCache hit ratio has been trending down lately and is now at 0.01%,
and its warmup time was almost a minute. I have lowered the autowarm
count dramatically since there are no hits anyway. I also wanted to lower my
autowarm counts across the board because I am
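For reference, the knob lives in solrconfig.xml, e.g. (sizes here are illustrative):
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>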
(ppm)”. Would there be a problem putting
these in a dynamic field name?
3. Is it possible to query for the possible list of dynamic fieldnames? I
might need this when creating a list of attributes.
Thanks again Robi.
O. O.
------
Petersen, Robert wrote:
Hi O.O.,
You don't need to add them all into the schema. You can use wildcard (dynamic)
fields to hold them. You can then have the attribute name be the * part of the
wildcard and the attribute value be the field contents. So you could have
fields like Function_s:Scanner etc. and then you could
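So a doc might look like (values invented):
{ "id":"123", "Function_s":"Scanner", "Resolution_s":"600 dpi" }
and you can filter with fq=Function_s:Scanner without ever declaring Function_s explicitly in the schema.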
Thanks Hoss, Good to know!
I have that exact situation: a complex function based on multiple field values
that I always run for particular types of searches including global star
searches to aid in sorting the results appropriately.
Robi
-Original Message-
From: Chris Hostetter [
ElasticSearch Support
http://sematext.com/
On Jan 22, 2013 8:08 PM, "Petersen, Robert" wrote:
> Hi guys,
>
> I was wondering if there was a way to pass commonly used boost values
> in with commonly used filter queries in these solrConfig event handler
> sections. Could I
Hi guys,
I was wondering if there was a way to pass commonly used boost values in with
commonly used filter queries in these solrConfig event handler sections. Could
I just append the ^1.5 at the end of the fq value? I.e., can I do this:
taxonomyCategoryTypeId:1^1.5
Or perhaps thi
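For context, the sections in question look like this (a sketch of a newSearcher warming query; values made up):
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="fq">taxonomyCategoryTypeId:1^1.5</str>
    </lst>
  </arr>
</listener>
Worth noting: filter queries don't contribute to scoring, so a boost inside an fq has no effect on ranking; a boost belongs in q or bq instead.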
PS the wt=ruby param is even better! Great tips.
-Original Message-
From: Petersen, Robert [mailto:rober...@buy.com]
Sent: Thursday, January 10, 2013 3:17 PM
To: solr-user@lucene.apache.org
Subject: RE: parsing debug output for readability
Hi Erik,
Thanks, debug.explain.structured
should come out with whitespace and
newlines in the actual XML source (browsers render it ugly though)
Erik
On Jan 10, 2013, at 15:35 , Petersen, Robert wrote:
> Hi Solr Users,
>
> Can someone give me some good parsing rules of thumb to make the debug
> explain output hu
Hi Solr Users,
Can someone give me some good parsing rules of thumb to make the debug explain
output human readable? I found this cool site for visualizing the output but
our queries are too complex and break their parser: http://explain.solr.pl
I tried adding new lines plus indenting after e
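Per the replies above, the param combination ends up being roughly:
/select?q=whatever&debugQuery=true&debug.explain.structured=true&wt=ruby&indent=true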
Hi Uwe,
We have hundreds of dynamic fields but since most of our docs only use some of
them it doesn't seem to be a performance drag. They can be viewed as a sparse
matrix of fields in your indexed docs. Then if you make the
sortinfo_for_groupx an int then that could be used in a function que
they let you!
Otis
--
Performance Monitoring - http://sematext.com/spm On Dec 20, 2012 6:29 PM,
"Petersen, Robert" wrote:
> Hi Otis,
>
> I thought Java 7 had a bug which wasn't being addressed by Oracle
> which was making it not suitable for Solr. Did that get fixed
to get the latest Java 7 or if you have to remain on 6 then use the
latest 6.
Otis
--
SOLR Performance Monitoring - http://sematext.com/spm On Dec 18, 2012 7:54 PM,
"Petersen, Robert" wrote:
> Hi solr user group,
>
>
> Sorry if this isn't directly a Solr
Hi solr user group,
Sorry if this isn't directly a Solr question. Seems like once in a blue moon
the GC crashes on a server in our Solr 3.6.1 slave farm. This seems to only
happen on a couple of the twelve slaves we have deployed and only very rarely
on those. It seems like this doesn't dire
>>
>> You should just prevent deep paging. Humans with wallets don't do that, so
>> you will not lose anything by doing that. It's common practice.
>>
>> Otis
>> --
>> SOLR Performance Monitoring - http://sematext.com/spm
>> On Dec 7, 2012 8:10 PM, &
Hi guys,
Sometimes we get a bot crawling our search function on our retail web site.
The eBay crawler loves to do this (Request.UserAgent: Terapeakbot). They just
do a star search and then iterate through page after page. I've noticed that
when they get to higher page numbers like page 9000
ternative.
Best
Erick
On Wed, Oct 10, 2012 at 3:04 PM, Petersen, Robert wrote:
> You could be right. Going back in the logs, I noticed it used to happen less
> frequently and always towards the end of an optimize operation. It is
> probably my indexer timing out waiting for updates t
What do you want the results to be, persons? And the facets should be
interests or subinterests? Why are there two layers of interests anyway? Can
there be many subinterests under one interest? Is one of those two the name of
the interest, which would look nice as a facet?
Anyway, have you rea
ture that keeps events from happening all at once.
Lately, it doesn't seem to be working. (Anonymous - via GTD
book)
On Wed, Oct 10, 2012 at 11:31 PM, Petersen, Robert wrote:
> Tomcat localhost log (not the catalina log) for my solr 3.6.1 (master)
> instance contains lots of these
Tomcat localhost log (not the catalina log) for my solr 3.6.1 (master)
instance contains lots of these exceptions but solr itself seems to be doing
fine... any ideas? I'm not seeing these exceptions being logged on my slave
servers btw, just the master where we do our indexing only.
Oct 9,
That is a great idea to run the updates thru the LB also! I like it!
Thanks for the replies guys
-Original Message-
From: jimtronic [mailto:jimtro...@gmail.com]
Sent: Thursday, September 20, 2012 1:46 PM
To: solr-user@lucene.apache.org
Subject: Re: some general solr 4.0 questions
I've
Hello solr user group,
I am evaluating the new Solr 4.0 beta with an eye to how to fit it into our
current solr setup. Our current setup is running on solr 3.6.1 and uses 12
slaves behind a load balancer and a master which we index into, and they all
have three cores (now referred to as collec
Regarding URLs
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/stemdict.txt
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/example/solr/collection1/conf/protwords.txt
--- On Tue, 9/18/12, Petersen, Robert wrote:
> From: Petersen, Rob
Hi group,
On this wiki page the two links below are broken, as they are also on
lucidworks' version; can someone point me at the correct locations please? I
googled around and came up with possible good links.
Thanks
Robi
http://wiki.apache.org/solr/LanguageAnalysis#Other_Tips
http://lucidwo
Why not just index one title per document, each having author and specialty
fields included? Then you could search titles with a user query and also
filter/facet on the author and specialties at the same time. The author bio
and other data could be looked up on the fly from a DB if you didn't
I also had this problem on solr/tomcat and finally saw the errors were coming
from my application side disconnecting from solr after a timeout. This was
happening when solr was busy doing an optimize and thus not responding quickly
enough. Initially when I saw this in the logs, I was quite wor
This site is pretty cool also, just filter on solr-user like this:
http://markmail.org/search/?q=list%3Aorg.apache.lucene.solr-user
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, July 02, 2012 5:34 PM
To: solr-user@lucene.apache.org
Subject: Re: