" automatically with letter "t" in the
query chain? Or, in the other direction, to remove "t" letters after "d" letters in
the index chain.
Thanks a lot
Thomas
ave a look into the ColognePhonetic encoder. I'm just afraid I'll have
to reindex the whole content there as well.
Thomas
--
View this message in context:
http://lucene.472066.n3.nabble.com/German-language-specific-problem-automatic-Spelling-correction-automatic-Synonyms-tp3216278p3216414.html
Concerning the downtime, we found a solution that works well for us. We
already implemented an update mechanism so that when authors change
some content in the CMS, the index entry for this piece of content gets
updated (delete, then index again) as well.
All we had to do is:
1. Change the s
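For the archive, a delete-then-add cycle like the one described can be posted to Solr's /update handler as plain XML messages; a rough sketch with made-up ids and field names (each message is posted separately, followed by a commit):

```xml
<delete><id>cms-page-123</id></delete>

<add>
  <doc>
    <field name="id">cms-page-123</field>
    <field name="content">the updated CMS content</field>
  </doc>
</add>

<commit/>
```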
Hi,
As part of my bachelor thesis I'm trying to achieve NRT with Solr 3.6.
I've come up with a basic concept and would be thrilled if I could get
some feedback.
The main idea is to use two different indexes: one persistent on disk
and one in RAM. The plan is to route every added and modified
y be a solution
here and can it somehow be used
with Solr.1.2?
Thanks a lot!
Thomas
there an easier/cleaner solution?
Thanks,
Thomas
actly like #get. Also it's not consistent with
SolrInstance#query behavior, which returns me a SolrDocument containing
values and not Fields.
Sorry if I missed it in the releases notes.
Thanks for your time !
--
Thomas
haracter available for field names
that is usually not allowed in programming languages' identifiers (as a
cheap escape character).
Thanks in advance,
Thomas
y tough call whether to use them or not.
Cheers,
Thomas
On 2015-07-27 21:31, Erick Erickson wrote:
> The problem has been that field naming conventions weren't
> _ever_ defined strictly. It's not that anyone is taking away
> the ability to use other characters, rather it's
x. It didn't mention any
corruption though.
Right now, the index has about 25k docs. I haven't optimized this index in
a while, and there are about 4000 deleted docs. How can I confirm if we
lost anything? If we've lost docs, is there a way to recover them?
Thanks in advance!!
Regards
Thomas
boilerplate code look like?
Any ideas would be appreciated.
Kind regards,
Thomas
.
Do we have to use a similar set-up when using Solr, that is:
1. Create/update the index
2. Notify the Solr client
Guy Thomas
Analyst-Programmer
Provincie Vlaams-Brabant
D
it's not
possible to dump all their contents into a single field for that purpose.
Thanks in advance,
Thomas
t the internal mechanics are
that make one or the other faster? Are there other suggestions for
how to make a "filters-only" search as fast as possible?
Also, can it be that it recently changed that the "q.alt" parameter now
influences relevance (in Solr 5.x)? I could have sworn that wasn't the
case previously.
Thanks in advance,
Thomas
Hi Ahmet,
Brilliant, thanks a lot!
I thought it might be possible with local parameters, but couldn't find
any information anywhere on how (especially setting the multi-valued
"qf" parameter).
Thanks again,
Thomas
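For the archive, a query that sets the parser and a multi-valued qf inline via local parameters might look like this (field names and boosts are illustrative):

```text
q={!edismax qf='title^2.0 description'}near real time search
```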
On 2015-07-10 14:09, Ahmet Arslan wrote:
> Hi Tomasi
>
q' and 'facet.*' to work with my setup?
I am still using SOLR 4.10.5
kind regards
Thomas
when trying to restart a
solr server after activities such as patch.
Is it normal for old tlogs to never get removed in a CDCR setup?
Thomas Tickle
Nothing in this message is intended to constitute an electronic signature
unless a specific statement to the contrary is included in this me
".
So I have to look in the logfile to observe when the import has finished.
In the old Solr, non-cloud and non-partitioned, there was an hourglass while the
import was running.
Any idea?
Best regards
Thomas
I have defined a copied field on which I would like to use clustering. I
understood that the destination field will store the full content despite the
filter chain I defined.
Now, I have a keep word filter defined on the copied field.
If I run clustering on the copied field will it use the resu
This is understood.
My question is: I have a keep words filter on field2. field2 is used for
clustering.
Will the clustering algorithm use "some data" or the result of applying
the keep words filter to "some data"?
Cheers,
Thomas
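As a sketch of the set-up in question (all names are illustrative), the copyField plus keep-word chain would look roughly like this in schema.xml; note that the stored value of field2 remains the full copied text, and only the indexed tokens pass through the filter:

```xml
<fieldType name="text_keep" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- only tokens listed in keepwords.txt survive indexing -->
    <filter class="solr.KeepWordFilterFactory" words="keepwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>

<field name="field2" type="text_keep" indexed="true" stored="true"/>
<copyField source="field1" dest="field2"/>
```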
> On 26.07.2017 at 01:36, Erick Erick
find one of them. How do I set up DIH for
SolrCloud?
Best regards
Thomas
s
solrconfig.xml ...
in the zookeeper directories (installation dir and data dir)
3. Started solr:
bin/solr start -c
4. Created a books collection with 2 shards
bin/solr create -c books -shards 2
Result: I see my books collection with the 2 shards in the web UI. No errors so
far.
However, the Dataimport-entry says:
"Sorry, no dataimport-handler defined!"
What could be the reason?
Thomas
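In case it helps others: the Dataimport screen only appears when a handler is defined in the solrconfig.xml that was uploaded to ZooKeeper; a sketch (the jar path and config file name may differ per installation):

```xml
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-dataimporthandler-.*\.jar"/>

<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">data-config.xml</str>
  </lst>
</requestHandler>
```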
ata/bestand/index/ manually, but
nothing changed.
What is the reason for this CREATE-error?
Thomas
> ANNAMANENI RAVEENDRA wrote on 12 May 2017 at 15:54:
>
>
> Hi ,
>
> If there is a request handler configured in solrconfig.xml and update the
> Co
" is a directory.
Are my steps wrong? Did I miss something important?
Any help is really welcome.
Thomas
calhost:8983/solr/admin/collections?action=CREATE&name=karpfen&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=karpfen
--> error
Thomas
> Susheel Kumar wrote on 15 May 2017 at 14:36:
>
>
> what happens if you create just on
p;collection.configName=heise
{
  "responseHeader":{
    "status":0,
    "QTime":2577},
  "success":{"127.0.1.1:8983_solr":{
    "responseHeader":{
      "status":0,
      "QTime":1441},
    "core":"h
> Tom Evans wrote on 17 May 2017 at 11:48:
>
>
> On Wed, May 17, 2017 at 6:28 AM, Thomas Porschberg
> wrote:
> > Hi,
> >
> > I did not manipulate the data dir. What I did was:
> >
> > 1. Downloaded solr-6.5.1.zip
> > 2. ensured no
> Shawn Heisey wrote on 17 May 2017 at 15:10:
>
>
> On 5/17/2017 6:18 AM, Thomas Porschberg wrote:
> > Thank you. I am now a step further.
> > I could import data into the new collection with the DIH. However I
> > observed the following exception
>
#end
So far my attempts have failed and I am unsure how to access it and the
right syntax for it.
Whatever I put in the url_root macro just ends up as strings in the
generated HTML:
I appreciate any pointer you can give me.
Regards
Thomas
I want to ensure that nested
documents are deleted automatically on update and delete of the parent
document.
Has anyone had to deal with this problem and found a solution?
regards,
Thomas
and has some caveats:
http://blog.griddynamics.com/2013/09/solr-block-join-support.html
regards,
Thomas
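For reference, a block-join parent query and a block-wide delete might look like this (field names are assumed; _root_ is the internal field Solr fills in for document blocks):

```text
q={!parent which="doc_type:parent"}author:Smith

<delete><query>_root_:art0001</query></delete>
```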
On May 18, 2014, at 11:36 PM, Thomas Scheffler
wrote:
Hi,
I plan to use nested documents to group some of my fields
art0001 My first
article art0001-foo Smith, John author
art0001
On 19.05.2014 19:25, Mikhail Khludnev wrote:
Thomas,
The vanilla way to override a block is to send it with the same unique key (I
guess it's "id" in your case; btw, don't you have a unique key defined in the
schema?), but it must have at least one child. It seems like analys
gards
Thomas
ly SOLR is failing fast and I wonder if there is some way to specify
"group by first|any|last|min|max (means all the same here) value of foo".
regards,
Thomas
.
I thought of max(foo) for it, but sadly it doesn't work on multivalued
fields either. I'll wait for other suggestions and start looking at custom
functions (I did not know that option exists) in parallel.
Thanks,
Thomas
.
In this case we get a substantial improvement in both memory use and query
time:
See: https://plus.google.com/+TokeEskildsen/posts/7oGxWZRKJEs
We have tested it on an index with 300M documents.
From,
Thomas Egense
On Wed, Jun 11, 2014 at 5:36 PM, marcos palacios
wrote:
> Hello every
ranch anymore.
Does anybody know how to deal with that situation?
Can I move the administration to a new admin directory?
Best regards
Thomas Fischer
, it was a 1TB index that temporarily used 2.5TB disc space during the
optimize (near the end of the optimization).
From,
Thomas Egense
On Wed, Jun 25, 2014 at 8:21 PM, Markus Jelsma
wrote:
>
>
>
>
> -Original message-
> > From:johnmu...@aol.com
> > Sent: Wed
;
res = cloudSolrServer.query(query);
cloudSolrServer.shutdown();
assertEquals(5, res.getResults().getNumFound()); <-- Assertion failure
Should I create a Jira issue for this?
From,
Thomas Egense
Thanks to both of you for fixing the bug. Impressive response time for the
fix (7 hours).
Thomas Egense
On Wed, Oct 23, 2013 at 7:16 PM, Mark Miller wrote:
> I filed https://issues.apache.org/jira/browse/SOLR-5380 and just
> committed a fix.
>
> - Mark
>
> On Oct 23, 2013,
You can specify the shard in core.properties ie:
core.properties:
name=collection2
shard=shard2
Did this solve it ?
From,
Thomas Egense
On Mon, Feb 25, 2013 at 5:13 PM, Mark Miller wrote:
>
> On Feb 25, 2013, at 10:00 AM, "Markus.Mirsberger" <
> markus.mirsber...@gm
que because on update and on delete I don't know how
many dependent documents exist in the Solr index. Especially for batch
index processes, I need a more efficient way than query before every
update or delete.
kind regards,
Thomas
On 27.11.2013 09:58, Paul Libbrecht wrote:
Thomas,
our experience with Curriki.org is that evaluating what I call the
"related documents" is a procedure that needs access to the complete
content and is thus run at the DB level and not at the Solr level.
For example, if a user changes
4.3.1 with the edismax query parser.
I am looking forward to getting some hints on why the query might be failing.
Best regards and many thanks!
Thomas
--
Thomas Kurz
Forschung & Entwicklung / KMT
Salzburg Research Forschungsgesellschaft mbH
Jakob-Haringer-Straße 5/3 | 5020 Salzburg, Austria
T: +43
, the question is, what kind of plugin would I use (and how would it
have to be configured)? I first thought it'd have to be a
SearchComponent, but I think with that I'd only get the results after
they are sorted and trimmed to the range, right?
Thanks a lot in advance,
Thomas Seidl
on=1544545&view=markup
You probably want to implement the QParserPlugin as a PostFilter.
On Sun, Dec 1, 2013 at 3:46 PM, Thomas Seidl wrote:
Hi,
I'm currently looking at writing my first Solr plugin, but I could not
really find any "overview" information about how a S
olr/schema/ICUCollationField.java
but no corresponding class in solr4.6.1's contrib folder.
Best
Thomas
Hello Robert,
I already added
contrib/analysis-extras/lib/
and
contrib/analysis-extras/lucene-libs/
via lib directives in solrconfig, this is why the classes mentioned are loaded.
Do you know which jar is supposed to contain the ICUCollationField?
Best regards
Thomas
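For the archive, the combination of lib directives that should eventually resolve ICUCollationField is roughly this (paths are relative to the core's instance dir and may need adjusting):

```xml
<lib dir="../../contrib/analysis-extras/lib" regex=".*\.jar"/>
<lib dir="../../contrib/analysis-extras/lucene-libs" regex=".*\.jar"/>
<!-- the ICUCollationField class itself ships in dist/, not in contrib -->
<lib dir="../../dist" regex="solr-analysis-extras-.*\.jar"/>
```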
On 19.02.2014 at 13:54
to add to your SOLR_HOME/lib in order to use it."
is misleading insofar as this README.txt doesn't mention the
solr-analysis-extras-4.6.1.jar in dist.
Best
Thomas
On 19.02.2014 at 14:27, Robert Muir wrote:
> you need the solr analysis-extras jar itself, too.
>
>
>
>
rds together that
shouldn't be; "Äsen" and "Asen" are quite different concepts.
In general, the deprecation of ICUCollationKeyFilterFactory doesn't seem to be
really thought through.
Thanks anyway, best
Thomas
>
> On Wed, Feb 19, 2014 at 9:16 AM, Thomas Fi
ship with to enable latest
features, if the user wishes.
When I send a query to a request handler I can attach a "version"
parameter to tell SOLR which version of the response format I expect.
Is there such a configuration when indexing SolrInputDocuments? I have
not found it so far.
On 27.02.2014 08:04, Shawn Heisey wrote:
On 2/26/2014 11:22 PM, Thomas Scheffler wrote:
I am one developer of a repository framework. We rely on the fact, that
"SolrJ generally maintains backwards compatibility, so you can use a
newer SolrJ with an older Solr, or an older SolrJ with a
range types, too?
kind regards,
Thomas
es like in PostgreSQL
where the problem is solved.
regards,
Thomas
ersists if I start tomcat with
-Dsolr.solr.home=/srv/solr/solr4.6.1, it seems that the system just ignores the
solr home setting.
Can somebody give me a hint what I'm doing wrong?
Best regards
Thomas
P.S.: Is there a way to stop Tomcat from throwing these errors into my face
threefold: once as headi
On 03.03.2014 at 22:43, Shawn Heisey wrote:
> On 3/3/2014 9:02 AM, Thomas Fischer wrote:
>> The setting is
>> solr directories (I use different solr versions at the same time):
>> /srv/solr/solr4.6.1 is the solr home, in solr home is a file solr.xml of the
>> new
lowing the hint from the solr wiki
(http://wiki.apache.org/solr/ConfiguringSolr):
"In each core, Solr will look for a conf/solrconfig.xml file"
But I get the error message:
"Could not load config file /srv/solr/solr4.6.1/cores/geo/solrconfig.xml"
Why? My misunderstanding?
Best
Thomas
the worst approximation. Normalizing them to a range would be best in my
opinion. So a query like "date:2014" would
hit all, but also "date:[2014-01 TO 2014-03]".
kind regards,
Thomas
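(A note for later readers: Solr 5.0 added DateRangeField, which supports exactly this kind of truncated-date query; a sketch schema fragment, field name taken from the example above:)

```xml
<fieldType name="dateRange" class="solr.DateRangeField"/>
<!-- date:2014 then matches any value or range overlapping the year 2014 -->
<field name="date" type="dateRange" indexed="true" stored="true"/>
```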
On 27.02.2014 09:15, Shawn Heisey wrote:
On 2/27/2014 12:49 AM, Thomas Scheffler wrote:
What problems have you seen with mixing 4.6.0 and 4.6.1? It's possible
that I'm completely ignorant here, but I have not heard of any.
Actually, bug reports reach me that sound like
"
On 04.03.2014 07:21, Thomas Scheffler wrote:
On 27.02.2014 09:15, Shawn Heisey wrote:
On 2/27/2014 12:49 AM, Thomas Scheffler wrote:
What problems have you seen with mixing 4.6.0 and 4.6.1? It's possible
that I'm completely ignorant here, but I have not heard of any.
Ac
Hi Grant.
Will there be a Fusion demostration/presentation at Lucene/Solr Revolution
DC? (Not listed in the program yet).
Thomas Egense
On Mon, Sep 22, 2014 at 3:45 PM, Grant Ingersoll
wrote:
> Hi All,
>
> We at Lucidworks are pleased to announce the release of Lucidworks Fus
m-6.2/xwiki-platform-core/xwiki-platform-search/xwiki-platform-search-solr/xwiki-platform-search-solr-api/src/main/resources/solr/xwiki/conf
I know all that is only talking about Lucene classes, but since what I use
on my side is Solr, I thought it was better to ask on this mailing
list.
Thanks,
--
Thomas
er.java:1037)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:355)
at
org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)
Any hint on how to solve this? Google didn't reveal anything useful...
Kind regards
Thomas
--
Thomas Lamy
Cytainment AG & Co
On 12.11.2014 at 15:29, Thomas Lamy wrote:
Hi there!
As we got bitten by https://issues.apache.org/jira/browse/SOLR-6530 on
a regular basis, we started upgrading our 7 node cloud from 4.10.1 to
4.10.2.
The first node upgrade worked like a charm.
After upgrading the second node, two cores no
no Tomcat restart was necessary; it was even
counterproductive, since state changes may overwrite the just-fixed entry.
Best regards
Thomas
On 13.11.2014 at 05:47, Jeon Woosung wrote:
you can migrate zookeeper data manually.
1. connect zookeeper.
- zkCli.sh -server host:port
2. chec
he request I send is this:
foo
ENVELOPE(25.89, 41.13, 47.07, 35.31)
Does anyone have any idea what could be going wrong here?
Thanks a lot in advance,
Thomas
running, which does not have these problems
since upgrading to 4.10.2.
Any hints on where to look for a solution?
Kind regards
Thomas
--
Thomas Lamy
Cytainment AG & Co KG
Nordkanalstrasse 52
20097 Hamburg
Tel.: +49 (40) 23 706-747
Fax: +49 (40) 23 706-139
Sitz und Registerger
dward
www.flax.co.uk
On 7 Jan 2015, at 10:01, Thomas Lamy wrote:
Hi there,
we are running a 3 server cloud serving a dozen
single-shard/replicate-everywhere collections. The 2 biggest collections are
~15M docs, and about 13GiB / 2.5GiB size. Solr is 4.10.2, ZK 3.4.5, Tomcat
7.0.56, Oracle
d once a problem occurs. We're also enabling GC logging for
zookeeper; maybe we were missing problems there while focussing on solr
logs.
Thomas
On 08.01.15 at 16:33, Yonik Seeley wrote:
It's worth noting that those messages alone don't necessarily signify
a problem with the s
n 4.10.3 I think. What version are you on?
- Mark
On Mon Jan 12 2015 at 7:35:47 AM Thomas Lamy wrote:
Hi,
I found no big/unusual GC pauses in the log (at least manually; I found
no free solution to analyze them that worked out of the box on a
headless Debian wheezy box). Eventually I tried with -
>> The problem I am facing is how to read those data from hard disks which are
>> not HDFS
If you are planning to use a Map-Reduce job to do the indexing then the source
data will definitely have to be on HDFS.
The Map function can transform the source data to Solr documents and send them
to So
My understanding is the same that "{!join...}" does not work in SolrCloud (aka
distributed search)
based on:
1. https://issues.apache.org/jira/browse/LUCENE-3759
2. http://wiki.apache.org/solr/DistributedSearch
--- see "Limitations" section which refers to the JIRA above
-- James
-Original
Hi Henrik,
We did something related to this that I'll share. I'm rather new to Solr so
take this idea cautiously :-)
Our requirement was to show exact values but have case-insensitive sorting and
facet filtering (prefix filtering).
We created an index field (type="string") for creating facets
nyone has a good idea how to make this hack work?
From,
Thomas Egense
Hello,
Relatively new to SOLR, I am quite happy with the API.
I am a bit challenged by the faceting response in JSON though.
This is what i am getting which mirrors what is in the documentation:
"facet_counts":{"facet_queries":{},
"facet_fields":{"metadata_meta_last_author":["Nick",330,"
thank you Hoss,
What I would prefer to see, as we do with all other parameters, is a normal
key/value pairing. This might look like:
{"metadata_meta_last_author":[{"value": "Nick", "count": 330},{"value":
"standard user","count": 153},{"value": "Mohan","count":
52},{"value":"wwd","count": 49}…
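(A note for later readers: the JSON Facet API introduced in Solr 5 returns buckets in almost exactly this shape, as val/count pairs; a sketch request parameter, field name taken from the example above:)

```text
json.facet={
  last_author: { type: terms, field: metadata_meta_last_author }
}

response buckets then look like:
"last_author":{"buckets":[{"val":"Nick","count":330}, ...]}
```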
First time on forum.
We are planning to use Solr to house some data mining information, and we are
thinking of using attributes to add some semantic information to indexed
content. As a test, I wrote a filter that adds an "animal" attribute to tokens
like "dog", "cat", etc. After adding a document
Hello,
I am submitting rich documents to a SOLR index via Solr Cell. This is all
working well.
The documents are organized in meaningful folders. I would like to capture
the folder names in my index so that I can use the folder names to provide
facets.
I can pass the path data into the indexi
here:
>
> http://wiki.apache.org/solr/HierarchicalFaceting#PathHierarchyTokenizerFactory
>
>
> Hope that helps.
> Brendan
>
>
>
>
>
>
> On Mon, May 20, 2013 at 4:18 PM, Cord Thomas
> wrote:
>
> > Hello,
> >
> > I am submitting rich docume
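To flesh out the PathHierarchyTokenizerFactory pointer above, a folder-path field type might be sketched like this (all names are illustrative):

```xml
<fieldType name="text_path" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <!-- /a/b/c is indexed as /a, /a/b, /a/b/c -->
    <tokenizer class="solr.PathHierarchyTokenizerFactory" delimiter="/"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>

<field name="folder" type="text_path" indexed="true" stored="true"/>
```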
Are you using IE? If so, you might want to try using Firefox.
-Original Message-
From: sathish_ix [mailto:skandhasw...@inautix.co.in]
Sent: Wednesday, June 05, 2013 6:16 AM
To: solr-user@lucene.apache.org
Subject: Sole instance state is down in cloud mode
Hi,
When i start a core in sol
This may help:
http://docs.lucidworks.com/display/solr/Shards+and+Indexing+Data+in+SolrCloud
--- See "Document Routing" section.
-Original Message-
From: sathish_ix [mailto:skandhasw...@inautix.co.in]
Sent: Friday, June 07, 2013 5:27 AM
To: solr-user@lucene.apache.org
Subject: How to st
FWIW, the Solr included with Cloudera Search, by default, "ignores all but the
most recent document version" during merges.
The conflict resolution is configurable however. See the documentation for
details.
http://www.cloudera.com/content/support/en/documentation/cloudera-search/cloudera-search
This page has some good information on custom document routing:
http://docs.lucidworks.com/display/solr/Shards+and+Indexing+Data+in+SolrCloud
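In short, with the default compositeId router a shard key is encoded in the document id itself; a sketch (ids are made up):

```text
id = customerA!doc42      documents sharing the prefix land on the same shard
...&_route_=customerA!    restrict a query to that shard (shard.keys in older 4.x releases)
```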
-Original Message-
From: Rishi Easwaran [mailto:rishi.easwa...@aol.com]
Sent: Wednesday, June 12, 2013 1:40 PM
To: solr-user@lucene.apache.org
S
Looks like the javadoc on this parameter could use a little tweaking.
From looking at the 4.3 source code (hoping I get this right :-), it appears
the ConcurrentUpdateSolrServer will begin sending documents (on a single
thread) as soon as the first document is added.
New threads (up to thread
Seems to work fine for me on 4.3.0, maybe you can try a newer version.
4.3.1 is available.
-Original Message-
From: Elodie Sannier [mailto:elodie.sann...@kelkoo.fr]
Sent: Friday, June 21, 2013 8:54 AM
To: solr-user@lucene.apache.org >> "solr-user@lucene.apache.org"
Subject: SolrCloud: no
and why, and why solr 3.5 is looking for
solrconfig.xml instead of solr.xml in solr.home
(Am I the only one who finds it confusing to have the three names
solr.solr.home (system property), solr.home (JNDI), solr/home (Environment
name) for the same object?)
Best
Thomas
r.solr.home=/srv/solr"
I get
INFO: No /solr/home in JNDI
INFO: using system property solr.solr.home: /srv/solr
and everything seems to work fine, so there obviously was a tightening of the
syntax somewhere between solr 1.4 and solr 3.5.
Thanks again
Thomas
On 22.12.2011 at 17:06, Sh
back to help on
finding resources about implementing custom types...it would just be more
complicated if I couldn't use the AbstractSubTypeFieldType.
(This is my first time posting to a mailing list, so if I have violated
horribly some etiquette of mailing lists, please tell me).
Regards,
Thomas
Document: {title: My document, from: 8000, to: 9000}
> Query: q=title:"My" AND (from:[* TO 8500] AND to:[8500 TO *])
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> Training in Europe - www.solrtraining.com
>
> On 6. aug. 2010,
ersion, and build it? And if
so, is there a particular revision that I should go with? Or should I just
pull trunk and use that, and last of all, is trunk stable enough to be used
in production?
Regards,
Thomas
On Mon, Aug 9, 2010 at 8:38 AM, Mark Allan wrote:
> On 9 Aug 2010, at 1:01 pm,
ld love to see your code.
Regards,
Thomas
On Mon, Aug 9, 2010 at 8:50 AM, Thomas Joiner wrote:
> I'd love to see your code on this, however what I've really been wondering
> is the following: When did AbstractSubTypeFieldType get added? It isn't in
> 1.4.1 (as far as I c
Thanks you very much.
I know the feeling, I've definitely had times when I just got busy and
didn't reply, but I've had plenty to do that didn't require that to be done
first, so no worries.
Thanks,
Thomas
On Mon, Aug 16, 2010 at 9:14 AM, Mark Allan wrote:
> Hi Th
I am wondering...is there currently any way for queries to properly handle
multiValued polyFields?
For instance, if you have a
And if you added two values to that field such as "1,2" and "3,4", that
would match both "1,4", and "3,2" as well as "1,2" and "3,4".
So I'm wondering if that is somet
Is there any reason you aren't using http://wiki.apache.org/solr/Solrj to
interact with Solr?
On Tue, Aug 24, 2010 at 11:12 AM, Liz Sommers wrote:
> I am very new to the solr/lucene world. I am using solr 1.4.0 and cannot
> move to 1.4.1.
>
> I have to index about 50 fields for each document, t
I don't know about the shards, etc.
However I recently encountered that exception while indexing pdfs as well.
The way that I resolved it was to upgrade to a nightly build of Solr. (You
can find them https://hudson.apache.org/hudson/view/Solr/job/Solr-trunk/).
The problem is that the version of
While you have already solved your problem, my guess as to why it didn't
work originally is that you probably didn't have a
What subFieldType does is it registers a dynamicField for you.
subFieldSuffix requires that you have already defined that dynamicField.
On Tue, Aug 31, 2010 at 8:07 PM, Si
Can someone point me to the mechanism in Solr that might allow me to
roll-up or aggregate records for display. We have many items that are
similar and only want to show a representative record to the user until
they select that record.
As an example - We carry a polo shirt and have 15 records
My guess would be that Jetty has some configuration somewhere that is
telling it to use GCJ. Is it possible to completely remove GCJ from the
system? Another possibility would be to uninstall Jetty, and then reinstall
it, and hope that on the reinstall it would pick up on the OpenJDK.
What distr
If you wish to interface to Solr from PHP, and decide to go with Yonik's
suggestion to use JSON, I would suggest using
http://code.google.com/p/solr-php-client/
It has served my needs for the most part.
On Thu, Sep 16, 2010 at 1:33 PM, Yonik Seeley wrote:
> On Thu, Sep 16, 2010 at 2:30 PM, onlin
Also there is http://lucene.472066.n3.nabble.com/Solr-User-f472068.html if
you prefer a forum format.
On Fri, Sep 17, 2010 at 9:15 AM, Markus Jelsma wrote:
> http://www.lucidimagination.com/search/?q=
>
>
> On Friday 17 September 2010 16:10:23 alexander sulz wrote:
> > Im sry to bother you all