--Thomas
On Mon, Sep 20, 2010 at 7:58 AM, Kjetil Ødegaard
wrote:
> On Thu, Sep 16, 2010 at 11:48 AM, Peter Karich wrote:
>
> > Hi Kjetil,
> >
> > is this custom component (which performs group by + calcs stats)
> > somewhere available?
> > I would like t
I think what you want is a query like
all_text:(opening excellent) AND presentation_id:294 AND type:blob
which will require one of the all_text clauses to be true.
On Tue, Sep 21, 2010 at 12:20 PM, wrote:
> Alright, this is making much more sense now, but there are still some
> problems. Remov
Re: your problems with JIRA
I have no idea what caused it or what resolved it, but I have had the same
problem as you. Assuming, that is, that the problem is that when you click
on a link to an issue, it instead takes you to
https://issues.apache.org/jira/secure/Dashboard.jspa Or perhap
name_de:"das urteil"
the expected document is found.
When I check this through the "Analysis" page of the solr admin it does show me
a match for the first query.
I'm sure I'm missing something obvious. But what?
Regards
Thomas
.
Once the index is optimized (by calling SolrServer.optimize()) the count is
correct again.
Am I missing something or is this a bug in Solr/Lucene?
Thanks in advance
Thomas
.
Regards
Thomas
This may be a bug if you did not change the field or the schema file but the
terms count is changing.
On Fri, Oct 15, 2010 at 9:14 AM, Thomas Kellerer wrote:
Hi,
we are updating our documents (that represent products in our shop) when a
dealer modifies them, by calling
Thanks.
Not really the answer I wanted to hear, but at least I know this is not my
fault ;)
Regards
Thomas
Erick Erickson, 15.10.2010 20:42:
This is actually known behavior. The problem is that when you update
a document, it's deleted and re-added, but the original is marked as
de
couldn't come up
with a viable solution.
Cheers,
Thomas
for now as I'm not a big fan of creating (and
especially maintaining) custom components.
Or is someone with an even better idea out there? ;)
Cheers,
Thomas
On Tue, Dec 4, 2012 at 11:34 PM, Chris Hostetter
wrote:
>
> : But it would be a lot harder than either splitting them out into
>
d with one leader and one replica? Could
this setup tolerate one of the instances going down? Or do I need three
instances because Zookeeper needs a quorum of instances?
Cheers,
Thomas
Thanks a lot guys!
On Thu, Dec 6, 2012 at 4:22 PM, Markus Jelsma wrote:
>
> -Original message-
> > From:Yonik Seeley
> > Sent: Thu 06-Dec-2012 16:01
> > To: solr-user@lucene.apache.org
> > Subject: Re: Minimum HA Setup with SolrCloud
> >
> > On Thu, Dec 6, 2012 at 9:56 AM, Markus Jelsma
Hi,
Simple question, I hope.
Using the nightly build of 4.1 from yesterday (Jan 8, 2013), I started 6 Solr
nodes.
I issued the following command to create a collection with 3 shards, and a
replication factor=2. So a total of 6 shards.
curl
'http://localhost:11000/solr/admin/collections?a
view.
I see a new collection called consumer1 - all of its nodes are green and the
collection consists of 3 shards. Each shard has 1 leader and 1 replica, each
hosted by a different Solr instance.
In other words, it seemed to work for me.
- Mark
On Jan 9, 2013, at 10:58 AM, Jam
Oops, small copy-paste error. Had my i's and j's backwards.
Should be:
--- slice1, rep2 (i=1,j=2) ==> chooses node[1]
--- slice2, rep1 (i=2,j=1) ==> chooses node[1]
-----Original Message-----
From: James Thomas [mailto:jtho...@camstar.com]
Sent: Wednesday, January 09, 2013
:
id
And yet I can compose a query with two hits in the index, showing:
#1: 03405443/v66i0003/347_mrirtaitmbpa
#2: 03405443/v66i0003/347_mrirtaitmbpa
Can anyone give pointers on where I'm screwing something up?
Thomas Dowling
thomas.dowl...@gmail.com
Thanks. In fact, the behavior I want is overwrite=true. I want to be
able to reindex documents, with the same id string, and automatically
overwrite the previous version.
Thomas
On 03/02/2012 04:01 PM, Mikhail Khludnev wrote:
Hello Tomas,
I guess you could just specify overwrite=false
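For reference, the flag can also be set on the XML update message itself; a minimal sketch (field names hypothetical):

```xml
<add overwrite="false">
  <doc>
    <field name="id">product-123</field>
    <field name="name">Example product</field>
  </doc>
</add>
```

With the default overwrite=true, posting a document whose uniqueKey already exists replaces the earlier version, which is the reindexing behavior described above.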
master?
What makes the problem even more difficult is that this isn't reproducible.
Sometimes re-starting the master puts everything back to normal.
Any ideas?
Regards
Thomas
Here is our configuration:
true
commit
startup
stopwords.txt,stopwords_de.txt,stopwords_en.txt,synonyms.txt
Stevo Slavić, 20.01.2011 13:26:
On which events did you configure master to perform replication? replicateAfter
Regards,
Stevo.
On Thu, Jan 20, 2011 at 12:53 PM, Thomas
Thomas Kellerer, 20.01.2011 12:53:
Hi all,
we have implemented a Solr based search in our web application. We
have one master server that maintains the index which is replicated
to the slaves using the built-in Solr replication.
This has been working fine so far, but suddenly the replication
lling on the slave (which doesn't pick up the changes)
Regards
Thomas
We have tried that as well, but the slave still claims to have a higher index
version, even when the index files were deleted completely
Regards
Thomas
Stevo Slavić, 20.01.2011 16:52:
Not too elegant, but a valid check would be to bring the slave down, delete
its index data directory, then to c
ot
find a way to do so.
GOK:IA\ 38* doesn't work with the contents of GOK indexed as text.
Is there a way to index and search that would meet my requirements?
Thomas
solr/conf/schema.xml),
the field is just
BTW, I have another field "DDC" with entries of the form "t1:086643" with
analogous requirements which yields similar problems due to the colon, also
indexed as text.
Here also
DDC:T1\:086643
works, but not
DDC:T1\:08664?
Thanks in
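Since the colon problem above comes down to query-parser escaping, here is a small sketch of escaping a raw value on the client side before appending a wildcard (Python, purely illustrative; this helper is not part of Solr):

```python
import re

# Backslash-escape characters the Lucene query parser treats as syntax
# (colon, space, parentheses, ...). Field values below are from the thread.
_SPECIALS = re.compile(r'([+\-!(){}\[\]^"~*?:\\/ ])')

def escape_term(value: str) -> str:
    return _SPECIALS.sub(r'\\\1', value)

# Escape first, then append the wildcard so it stays unescaped:
print(escape_term("T1:086643"))    # T1\:086643
print(escape_term("IA 38") + "*")  # IA\ 38*
```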
.
>
> It seems that you could do such type of queries :
>
> GOK:"IA 38*"
yes that sounds interesting.
But I don't know how to get it and install it into Solr. Can you give me a hint?
Thanks
Thomas
are already separated as field with
multiValued="true".
But I need to be able to search for IA 310 - IA 319 with one call,
{!complexphrase}GOK:"IA 31?"
will do this now, or even for
{!complexphrase}GOK:"IA 3*"
to catch all those in one go.
Thanks, this helped a lot
Thomas
pache.lucene.search.PhraseQuery" found in phrase
query string "IA620" java.lang.IllegalArgumentException: Unknown query type
"org.apache.lucene.search.PhraseQuery" found in phrase query string "IA620" at
org.apache.lucene.queryParser.ComplexPhraseQueryParser
Cheers
Thomas
Roman,
have you solved the problem? I'm facing a similar problem. Our customer wants to
have separate indexes/cores but still wants to search across them at times without
the IDF limitation that Solr has when using shards.
> Unless I am wrong, sharding across two cores is done over HTTP and has
> the l
is is due to the Tomcat, the logging system of solr itself,
but it is annoying.
And yes, I've seen something like this before and found the error not by
inspecting solr but by opening the suspected files with an appropriate browser
(e.g. Firefox) which tells me exactly where something goes wrong.
All the best
Thomas
Hi,
when I restart my Solr server it performs two warming queries.
When a request occurs within this window there is an exception, and then
always exceptions until I restart Solr.
Logfile:
INFO: Added SolrEventListener:
org.apache.solr.core.QuerySenderListener{queries=[{q=solr,start=0,rows=10},
{q=rocks,start
-2.9.3.jar), nor what it means that
it is 'found in phrase query string "POF15?"'
Can anybody give me a hint how to handle this problem (apart from erasing the
quotes if no whitespace is present)?
Cheers
Thomas
und this error for now? I'd really like
to move from the trunk to the stable 3.3.0 release and this is the only
problem currently keeping me from doing so.
Cheers,
Thomas
I'm pretty sure my original query contained a distance filter as well. Do I
absolutely need to filter by distance in order to sort my results by it?
I'll write another unit test including a distance filter as soon as I get a
chance.
Cheers,
Thomas
On Tue, Jul 5, 2011 at 9:04 AM,
ield=user.uniqueId_s&sfield=user.location_p&pt=48.20927,16.3728&sort=geodist()
> asc
This works without a problem in my trunk build of Solr 4.0 from March 2011.
I use the standard schema.xml packaged with the Solr distribution.
Thomas
On Tue, Jul 5, 2011 at 10:20 AM, Thomas H
How should I proceed with this problem? Should I create a JIRA issue or
should I cross-post on the dev mailing list? Any suggestions?
Cheers,
Thomas
On Wed, Jul 6, 2011 at 9:49 AM, Thomas Heigl wrote:
> My query in the unit test looks like this:
>
> q=*:*&fq=_query_:&quo
Hi Yonik,
I just created a JIRA issue: https://issues.apache.org/jira/browse/SOLR-2642
Thomas
On Fri, Jul 8, 2011 at 4:00 PM, Yonik Seeley wrote:
> On Fri, Jul 8, 2011 at 4:11 AM, Thomas Heigl wrote:
> > How should I proceed with this problem? Should I create a JIRA issue or
>
>
> and get the following errors:
> ---
>
> [javac] warning: [options] bootstrap class path not set in conjunction
> with -source 1.6
> [javac]
> /home/swu/newproject/lucene_4x/lucene/analysis/opennlp/src/java/org/apache/lucene/analysis/opennlp/OpenNLPTokenizer.java:170:
> erro
Hi,
how can I map this complex data structure in Solr?
Document
- Groups
- Group_ID
- Group_Name
- .
- Title
- Chapter
- Chapter_Title
- Chapter_Content
Or
Product
- Groups
- Group_ID
- Group_Name
- .
search the data?
> 2. How do you want to access the fields once the Solr documents have been
> identified by a query - such as fields to retrieve, "join", etc.
>
> So, once the data is indexed, what are your requirements for accessing the
> data? E.g., some sample pseudo-qu
": 15.99,
"size": "XL",
"unit": "1 piece",
"inStore": false
}
]
}
]
}}
2012/8/1 Alexandre Rafalovitch :
> Sorry, tha
19,16.39
> d=50.0}"&sfield=user.location_p&pt=48.19,16.39
Any feedback would be greatly appreciated.
Cheers,
Thomas
On Tue, Sep 11, 2012 at 2:43 PM, mechravi25 wrote:
> Hi,
>
> I would like to know the base lined version of Solr 3.6.1 Source code for
> svn Check out. We tried to check out from the following link and found many
> base lined versions related to Solr 3.6.x version.
>
> https://svn.apache.org/repos
know what to do? How can I find out what fraction of the index is
optimized, and how many nights it will take to finish?
Best regards,
Thomas Koch, http://www.koch.ro
idering a setup like/with Katta?
Thanks for your insights,
Thomas Koch, http://www.koch.ro
Hi Gasol Wu,
thanks for your reply. I tried to make the config and syslog shorter and more
readable.
solrconfig.xml (shortened):
false
15
1500
2147483647
1
1000
1
false
10
1000
2147483647
1
t
Is there a way to POST queries to Solr instead of supplying query string
parameters?
Some of our queries may hit up against URL size limits.
If so, can someone provide an example?
Thanks in advance
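A sketch of the form-encoded approach (host, port, and parameters are illustrative): Solr's select handler also reads parameters from a form-encoded POST body, which sidesteps URL length limits.

```python
from urllib.parse import urlencode

# Build the same parameters you would put in the query string,
# but send them as the POST body instead.
params = {
    "q": "all_text:(opening excellent) AND type:blob",
    "rows": 100,
    "wt": "json",
}
body = urlencode(params)
# POST `body` to http://localhost:8983/solr/select with header
# Content-Type: application/x-www-form-urlencoded
print(body)
```

With curl this would be roughly `curl http://localhost:8983/solr/select --data 'q=*:*&rows=100'`, since `--data` sends a form-encoded POST by default.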
Hoping someone can help -
Problem:
Querying for non-english phrases such as Добавить do not return any
results under Tomcat but do work when using the Jetty example.
Both tomcat and jetty are being queried by the same custom (flash)
client and both reference the same solr/da
er.xml (on
connector element)?
Like:
Hope this helps.
Best regards
czinkos
2009/10/24 Glock, Thomas :
>
> Hoping someone can help -
>
> Problem:
> Querying for non-english phrases such as Добавить do not return any
> results under Tomcat but do work when using the Je
//www.lucidimagination.com
2009/10/24 Glock, Thomas :
>
> Thanks but not working...
>
> I did have the URIEncoding in place and just again moved the URIEncoding
> attribute to be the first attribute - ensured I saved server.xml, shut down
> tomcat, deleted logs and cache and still
ue
Don't use POST. That is the wrong HTTP semantic for search results.
Use GET. That will make it possible to cache the results, will make your HTTP
logs useful, and all sorts of other good things.
wunder
On Oct 24, 2009, at 10:11 AM, Glock, Thomas wrote:
>
> Thanks - I now th
s
based on the user's role. I have only two roles to support, so my case
is very simple, but I could imagine having a multivalued "role" field
that you could perform facet queries on.
Mark
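The multivalued-role idea above can be sketched as a filter query built client-side (Python here; the "role" field name is from the thread, the values are hypothetical):

```python
# Restrict results to documents carrying at least one of the user's roles.
def role_filter(roles):
    return "role:(" + " OR ".join(roles) + ")"

# Passed to Solr as an fq parameter alongside the main query:
print(role_filter(["admin", "editor"]))  # role:(admin OR editor)
```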
Glock, Thomas wrote:
>
> Thanks -
>
> I agree. However my application requires
Hello,
Is there a solution packaged in SOLR that deserializes XML response
documents into Lucene documents?
Hoping someone might help with getting /update/extract RequestHandler to
work under Tomcat.
Error 500 happens when trying to access
http://localhost:8080/apache-solr-1.4-dev/update/extract/ (see below)
Note /update/extract DOES work correctly under the Jetty provided
example.
I think I must ha
non-Lazy loaded handler. Does that help?
On Nov 2, 2009, at 4:37 PM, Glock, Thomas wrote:
>
> Hoping someone might help with getting /update/extract RequestHandler
> to work under Tomcat.
>
> Error 500 happens when trying to access
> http://localhost:8080/apache-solr-1.4-d
Follow-up -
This is now working (sadly I'm not sure exactly why!) but I've
successfully used curl (under windows) and the following examples to
parse content
curl http://localhost:8080/apache-solr-1.4-dev/update/extract?extractOnly=true --data-binary @curl-config.pdf -H "Content-type:applicati
Is it possible to configure Solr to fully load indexes in memory? I
wasn't able to find any documentation about this on either their site or
in the Solr 1.4 Enterprise Search Server book.
Hi Aseem -
I had a similar challenge. The solution that works for my case was to
add "role" as a repeating string value in the solr schema.
Each piece of content contains 1 or more roles and these values are
supplied to solr for indexing.
Users also have one or more roles (which correspond ex
I'm trying to get delta indexing set up. My configuration allows a full index
no problem, but when I create a test delta of a single record, the delta import
finds the record but then does nothing. I can only assume I have something
subtly wrong with my configuration, but according to the wiki,
e been ${dataimporter.delta. product_id}
>
> On Wed, Dec 2, 2009 at 11:52 PM, Thomas Woodard wrote:
> >
> > I'm trying to get delta indexing set up. My configuration allows a full
> > index no problem, but when I create a test delta of a single record, the
> >
lds to get highlighting in every case?
- Isn't it a big waste of hard disk space to store the content twice?
Thanks for any help,
Thomas Koch, http://www.koch.ro
test
s IO intensive. This would give me
the additional benefit, that I could selectively delete the fulltext of older
articles when running out of disc space while keeping the url of the document
in the index.
Do you know whether something like this would be possible?
Best regards,
Thomas Koch, http://www.koch.ro
nl.jteam.search.solrext.spatial.SpatialTierQueryParserPlugin
extends QParserPlugin. I have checked inside the solr.war file (the one
provided on the Solr download webpage) and the class is present.
Do you know if the current version "SSP version 1.0-RC3" is compatible with
solr 1.4 ?
Thanks
--
Thomas Rabaix
Kraus, Ralf | pixelhouse GmbH schrieb:
Hello,
Query:
{wt=json&rows=30&json.nl=map&start=0&sort=RezeptName+asc}
Result :
Doppeldecker
Eiersalat
Curry - Eiersalat
Eiersalat
Why is my second "Curry..." after "Doppeldecker"?
RezeptName is a normal "text" field defined as :
positionInc
https://issues.apache.org/jira/secure/attachment/12394266/apache_solr_b_red.jpg
https://issues.apache.org/jira/secure/attachment/12394314/apache_soir_001.jpg
https://issues.apache.org/jira/secure/attachment/12394264/apache_solr_a_red.jpg
[Standard caveat: I did try checking the solr-user archives, but was
hampered by the fact that there's no search function. The cobbler's
children go barefoot.]
--
Thomas Dowling
Ohio Library and Information Network
tdowl...@ohiolink.edu
essor with enough ram to store
> your index in ram - but that might not be possible with "millions" of
> records. Our 150,000 item index is about a gig and a half when optimized
> but yours will likely be different depending on how much you store.
> Faceting takes more memory than pure searching as well.
>
This is very helpful. Thanks again.
--
Thomas Dowling
Is adding QueryComponent to your SearchComponents an option? When
combined with the CollapseComponent this approach would return the
collapsed and the complete result set.
i.e.:
collapse
query
facet
mlt
highlight
Thomas
Marc Sturlese schrieb:
Hey there,
I have
Hello Matt,
the patch should work with trunk and after a small fix with 1.3 too (see
my comment in SOLR-236). I just made a successful build to be sure.
Do you see any error messages?
Thomas
Matt Mitchell schrieb:
Thanks guys. I looked at the dedup stuff, but the documents I'm adding
a
I assume you are using the StandardRequestHandler, so this should work:
http://192.168.105.54:8983/solr/itas?q=size:7* AND extension:pdf
Also have a look at the follwing links:
http://wiki.apache.org/solr/SolrQuerySyntax
http://lucene.apache.org/java/2_4_1/queryparsersyntax.html
Thomas
Jörg
3.jar in the core's lib directory and it
> worked.
>
> On Wed, Dec 23, 2009 at 8:25 PM, Thomas Rabaix >wrote:
>
> > Hello,
> >
> > I would like to set up the spatial solr plugin from
> > http://www.jteam.nl/news/spatialsolr on solr 1.4. However I am getting
, true);
NamedList result = server.request(up);
UpdateResponse test = server.commit();
But no doc is added if I remove the comment tag from the second addFile.
What's wrong with this?
Thanks,
Thomas
let SOLR start with an empty index
again.
Does anybody have an idea how this could be achieved?
Thanks a lot,
Thomas Koch, http://www.koch.ro
or datadirs that are older than the newest one, and all these
can be picked up for submission to katta.
Now there remain two questions:
- When the old core is closed, will there be an implicit commit?
- How to be sure, that no more work is in progress on an old core datadir?
Thanks,
Thomas Koch, http://www.koch.ro
I've noticed that fields that I define as index="false" in the
schema.xml are still searchable. Here's the definition of the field:
or
I can then add a new document with the field object_id=26 and have the
document returned when searching for "+object_Id=26". On the other hand
if I ad
My schema has always had index="false" for that field. I only stopped and
restarted the servlet container when I added a document to the index using the
Lucene API instead of Solr.
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: Tuesday, March 02, 2010 1:01 PM
To
For testing purposes. I just wanted to see if unindexed fields in documents
added by Lucene API were searchable by Solr. This is after discovering that
the unindexed fields in documents added by Solr are searchable.
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent:
Great catch! Thanks for spotting my error :)
-Original Message-
From: Ahmet Arslan [mailto:iori...@yahoo.com]
Sent: Tuesday, March 02, 2010 2:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Unindexed Fields Are Searchable?
> Again, note that it should be
> index_ed_="false". "ed" -
Is there a setting in the config I can set to have Solr create a new
Lucene index if the dataDir is empty on startup? I'd like to open our
Solr system to allow other developers here to add new cores without
having to use the Lucene API directly to create the indexes.
esday, March 03, 2010 5:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Can Solr Create New Indexes?
On 03/03/2010 07:56 PM, Thomas Nguyen wrote:
> Is there a setting in the config I can set to have Solr create a new
> Lucene index if the dataDir is empty on startup? I'd like to open
Create New Indexes?
I'm guessing the index folder itself already exists?
The data dir can be there, but the index dir itself must not be - that's
how it knows to create a new one.
Otherwise it thinks the empty dir is the index and can't find the files
it expects.
On 03/03/2010 08:15 PM,
http://www.infoq.com/news/2010/03/egit-released
http://aniszczyk.org/2010/03/22/the-start-of-an-adventure-egitjgit-0-7-1/
Maybe, one day, some apache / hadoop projects will use GIT... :-)
(Yes, I know git.apache.org.)
Best regards,
Thomas Koch, http://www.koch.ro
Has anyone implemented a Dismax type solution that also uses a default
operator (or q.op)? I'd like to be able to use OR operators for all the
qf fields but have read that qf=dismax does not support operators.
plications during the day.
Is there by any chance the possibility that you'd rather want to store your
data in HBase than in MySQL? I'm working on a project right now to store
SOLR/Lucene indices directly in HBase too.
I'll be at the webtuesday tomorrow. Maybe I could give an introduction to
Hadoop/HBase on a next webtuesday?
Best regards,
Thomas Koch, http://www.koch.ro
Hi,
could I interest you in this project?
http://github.com/thkoch2001/lucehbase
The aim is to store the index directly in HBase, a database system modelled
after Google's Bigtable to store data in the range of tera- or petabytes.
Best regards, Thomas Koch
Lance Norskog:
> The 2B li
ince it's more advanced.
Best regards,
Thomas Koch, http://www.koch.ro
Is there anything wrong with wrapping the text content of all fields
with CDATA whether they be analyzed, not analyzed, indexed, not indexed
and etc.? I have a script that creates update XML documents and it's
just simple to wrap all text content in all fields with CDATA. From my
brief tests it d
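For the CDATA question, one detail worth noting is the "]]>" terminator; a minimal sketch (illustrative, not from the thread) of a wrapper that stays well-formed even when the text itself contains that sequence:

```python
# CDATA is safe for arbitrary field text as long as any literal "]]>"
# is split, since it would otherwise end the section early.
def cdata(text: str) -> str:
    return "<![CDATA[" + text.replace("]]>", "]]]]><![CDATA[>") + "]]>"

print(cdata("5 < 6 & so on"))  # <![CDATA[5 < 6 & so on]]>
```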
Try the SnowballPorterFilterFactory described here:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
You should use the German2 variant that converts ä and ae to a, ö and oe
to o and so on. More details:
http://snowball.tartarus.org/algorithms/german2/stemmer.html
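A minimal sketch of how the German2 stemmer might be wired into a schema.xml field type (type and analyzer names are illustrative, not from the thread):

```xml
<fieldType name="text_de" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German2"/>
  </analyzer>
</fieldType>
```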
Every document in
in short: use stemming
Try the SnowballPorterFilterFactory with German2 as language attribute
first and use synonyms for combined words i.e. "Herrenhose" => "Herren",
"Hose".
By using stemming you will maybe have some "interesting" results, but it
is much better living with them than having
Martin Grotzke schrieb:
Try the SnowballPorterFilterFactory with German2 as language attribute
first and use synonyms for combined words i.e. "Herrenhose" => "Herren",
"Hose".
so you use a combined approach?
Yes, we define the relevant parts of compounded words (keywords only) as
synon
gs). I mentioned it because it is a very good start when using solr
and especially when dealing with documents in languages other than english.
Tom
Matthias Eireiner schrieb:
Dear list,
it has been some time, but here is what I did.
I had a look at Thomas Traeger's tip
1.2 with no substantial differences between their schema.xml
files.
--
Thomas Dowling
[EMAIL PROTECTED]
Please make sure that you do NOT have a field called "category"
in the documents you would like to add. For example:
camera
I am almost sure you have some documents,
which have this field "category" instead of "cat".
You can also add the field "category" to your schema.xml file and copy
it
Have a look at
http://wiki.apache.org/solr/HighlightingParameters?highlight=%28highlighting%29#head-dbf0474b5b2c0db08f3a464ff3525225a9c71fbc
and set
hl.fragsize=0
Hope this helps.
Christian Wittern said the following on 18/04/2008 09:59:
> Dear Solr users,
>
> Here I am having a problem with hi
On Mon, May 19, 2008 at 2:49 PM, Chris Hostetter
<[EMAIL PROTECTED]> wrote:
>
> : solr release in some time, would it be worth looking at what outstanding
> : issues are critical for 1.3 and perhaps pushing some over to 1.4, and
> : trying to do a release soon?
>
> That's what is typically done whe
ttp://localhost:8983/solr/update ?): java.io.IOException: Server returned
HTTP response code: 400 for URL: http://localhost:8983/solr/update
C:\test\output>
Regards Thomas Lauer
Yes, my file is UTF-8. I have uploaded my file.
Grant Ingersoll-6 wrote:
>
>
> On Jun 11, 2008, at 3:46 AM, Thomas Lauer wrote:
>
>> now I want to add the files to Solr. I have started Solr on Windows
>> in the example directory with java -jar start.jar
>>
&g
12, 2008 at 7:49 AM, Thomas Lauer <[EMAIL PROTECTED]> wrote:
>>
>> Yes, my file is UTF-8. I have uploaded my file.
>>
>>
>>
>>
>> Grant Ingersoll-6 wrote:
>>>
>>>
>>> On Jun 11, 2008, at 3:46 AM, Thomas Lauer wrote:
>>&g
KIS
2.2
My files in the attachment
Regards Thomas
SimplePostTool: COMMITting Solr index changes..
but I can't find the document.
Regards
Thomas
-----Original Message-----
From: Brian Carmalt [mailto:[EMAIL PROTECTED]
Sent: Friday, June 13, 2008 07:36
To: solr-user@lucene.apache.org
Subject: Re: My First Solr
Hello Thomas,
Have you performed
OK, I find my files now. Can I make all files the default search target?
Regards Thomas
-----Original Message-----
From: Brian Carmalt [mailto:[EMAIL PROTECTED]
Sent: Friday, June 13, 2008 08:03
To: solr-user@lucene.apache.org
Subject: Re: RE: My First Solr
Do you see if the