--joe
mponent to support that?
thx much
--joe
how would i go about modifying the dismax parser to treat +/- as regular text?
setting im missing? currently im only sending,
collapse.field=brand and collapse.includeCollapseDocs.fl=num_in_stock
--joe
On Thu, Oct 1, 2009 at 1:14 AM, Martijn v Groningen
wrote:
> Hi Joe,
>
> Currently the patch does not do that, but you can do something else
> that might help you in
does seem to work when the collapsed results fit on one page (10
rows in my case)
--joe
> 2) It seems that you are using the parameters as was intended. The
> collapsed documents will contain all documents (from whole query
> result) that have been collapsed on a certain field value that oc
ive gotten two different out of memory errors while using the field
collapsing component, using the latest patch (2009-09-26) and the
latest nightly,
has anyone else encountered similar problems? my collection is 5
million results but ive gotten the error collapsing as little as a few
thousand
SEVE
M when having an
> index of a few million.
>
> Martijn
>
> 2009/10/2 Joe Calderon :
>> ive gotten two different out of memory errors while using the field
>> collapsing component, using the latest patch (2009-09-26) and the
>> latest nightly,
>>
>> has anyone el
hello *, ive been noticing that /admin/stats.jsp is really slow in the
recent builds, has anyone else encountered this?
--joe
thx much guys, no biggie for me, i just wanted to get to the bottom of
it in case i had screwed something else up..
--joe
On Tue, Oct 6, 2009 at 1:19 PM, Mark Miller wrote:
> I was worried about that actually. I haven't tested how fast the RAM
> estimator is on huge String FieldCache
oking to concatenate tokens
--joe
maybe im just not familiar with the way the version numbers work in
trunk but when i build the latest nightly the jars have names like
*-1.5-dev.jar, is that normal?
On Wed, Oct 14, 2009 at 7:01 AM, Yonik Seeley
wrote:
> Folks, we've been in code freeze since Monday and a test release
> candida
hello *, sorry if this seems like a dumb question, im still fairly new
to working with lucene/solr internals.
given a Document object, what is the proper way to fetch an integer
value for a field called "num_in_stock", it is both indexed and stored
thx much
--joe
hello * , ive read in other threads that lucene 2.9 had a serious bug
in it, hence trunk moved to 2.9.1 dev, im wondering what the bug is as
ive been using the 2.9.0 version for the past weeks with no problems,
is it critical to upgrade?
--joe
i have a pretty basic question, is there an existing analyzer that
limits the number of words/tokens indexed from a field? let say i only
wanted to index the top 25 words...
thx much
--joe
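i dont know of a stock analyzer that did this at the time; conceptually, though, such a filter just truncates the token stream. A toy sketch (the `limit_tokens` helper is made up for illustration, operating on a plain list rather than a real Lucene TokenStream):

```python
def limit_tokens(tokens, max_tokens=25):
    """Keep only the first max_tokens tokens of a stream.

    Sketch of what a token-count-limiting filter would do during
    analysis: everything past the cutoff is simply not indexed.
    """
    return tokens[:max_tokens]

# A 100-token field gets truncated to the first 25 tokens.
print(len(limit_tokens(["w%d" % i for i in range(100)])))
```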
cool np, i just didnt want to duplicate code if that already existed.
On Tue, Oct 20, 2009 at 12:49 PM, Yonik Seeley
wrote:
> On Tue, Oct 20, 2009 at 1:53 PM, Joe Calderon wrote:
>> i have a pretty basic question, is there an existing analyzer that
>> limits the number of words
it with the
relevancy score, rather than add it in. One way to do this is with the
boost query parser.
how exactly do i use the boost query parser along with the dismax
parser? can someone post an example solrconfig snippet?
thx much
--joe
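For what it's worth, one way to wire them together (a sketch; the `popularity` field and the query term are made up): the boost query parser wraps a dismax query and multiplies its score by the function:

```text
q={!boost b=log(popularity) defType=dismax}ipod
```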
seems to happen when sort on anything besides strictly score, even
score desc, num desc triggers it, using latest nightly and 10/14 patch
Problem accessing /solr/core1/select. Reason:
4731592
java.lang.ArrayIndexOutOfBoundsException: 4731592
at
org.apache.lucene.search.FieldComparat
as a curiosity ide like to use a profiler to see where within solr
queries spend most of their time, im curious what tools if any others
use for this type of task..
im using jetty as my servlet container so ideally ide like a profiler
thats compatible with it
--joe
found another exception, i cant find specific steps to reproduce
besides starting with an unfiltered result and then given an int field
with values (1,2,3) filtering by 3 triggers it sometimes, this is in
an index with very frequent updates and deletes
--joe
java.lang.NullPointerException
oose as possible as these searches power an auto suggest feature
i figured if faceted results could be sorted by score, i could simply
boost phrases instead of restricting by them, thoughts?
--joe
up into 3 tokens (aw, root, beer), but seems
like you cant use a tokenizer after a filter ... so whats the best way
of accomplishing this?
thx much
--joe
patch -p0 < /path/to/field-collapse-5.patch
On Tue, Nov 3, 2009 at 7:48 PM, michael8 wrote:
>
> Hmmm, perhaps I jumped the gun. I just looked over the field collapse patch
> for SOLR-236 and each file listed in the patch has its own revision #.
>
> E.g. from field-collapse-5.patch:
> --- src/jav
sorry got cut off,
patch, then ant clean dist, will give you the modified solr war file,
if it doesnt apply cleanly (which i dont think is currently the case),
you can go back to the latest revision referenced in the patch,
On Tue, Nov 3, 2009 at 8:17 PM, Joe Calderon wrote:
> patch -p0 <
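Spelling out the sequence described above (a sketch; REVISION and the patch path are whatever your copy of the patch was generated against):

```text
svn checkout -r REVISION http://svn.apache.org/repos/asf/lucene/solr/trunk solr-trunk
cd solr-trunk
patch -p0 < /path/to/field-collapse-5.patch
ant clean dist
```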
CommonsHttpSolrServer seems to
set the charset to UTF-8. As a workaround I am able to use the
CommonsHttpSolrServer. Being new to Solr, not sure what the bug protocol is,
assuming this is a bug.
Thanks,
Joe
Specifying the file.encoding did work, although I don't think it is a suitable
workaround for my use case. Any idea what my next step is to have a bug
opened?
Thanks,
Joe
> Date: Wed, 18 Nov 2009 16:15:55 +0530
> Subject: Re: UTF-8 Character Set not specifed on OutputStre
I do something very similar and it works for me. I noticed on your URL that
you have a mixed case fetchIndex, which the request handler is checking for
fetchindex, all lowercase. If it is not that simple I can try to see the exact
url my code is generating.
Hope it helps,
Joe
> F
I finally got around to testing the patch and it works well.
Thanks,
Joe
> Date: Mon, 23 Nov 2009 12:32:46 -0800
> From: hossman_luc...@fucit.org
> To: solr-user@lucene.apache.org
> Subject: RE: UTF-8 Character Set not specifed on OutputStreamWriter in
> StreamingU
t seem
to work, and have not had a chance to revisit. I have not found a way to
persist the solrconfig.xml with the updates to the slave list, so the control /
management is within my application.
Hope this highlevel overview helps.
Joe
> Date: Tue, 8 Dec 2009 12:42:12 +0100
>
ead
of being 0 or more its acting like 1 or more
the text im trying to match is "The Gang Gets Extreme: Home Makeover Edition"
the field uses the following analyzers
is anybody else having similar problems?
best,
--joe
It doesn't need to be a copy field, right? Could you create a new field
"ex", extract value from description, delete digits, and set to "ex"
field before add/index to solr server?
-Original Message-
From: Feak, Todd [mailto:[EMAIL PROTECTED]
Sent: Wednesday, O
Could you post fieldType specification for "ex"? What your regex look
like?
-Original Message-
From: Aleksey Gogolev [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 22, 2008 11:39 Joe
To: Joe Nguyen
Subject: Re[6]: Question about copyField
JN> It doesn't need
Hi,
I have two cores. When each core references the same dataDir, I could
access the core admin interface. However, when core1 dirData is
referencing one directory, and core2 another directory, I could not
access the admin interface.
Any idea?
//each core references a different dir
//both cor
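For reference, a minimal solr.xml giving each core its own dataDir might look like this (names and paths are hypothetical):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core1" instanceDir="core1" dataDir="/var/solr/data/core1"/>
    <core name="core2" instanceDir="core2" dataDir="/var/solr/data/core2"/>
  </cores>
</solr>
```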
I have a solr core having 2 million lengthy documents.
1. If I modify datatype of a field 'foo' from string to a sint and
restart the server, what would happen to the existing documents? And
documents added with the new schema? At query time (sort=foo desc),
should I expect the documents sorte
impact query time?
-Original Message-
From: Shalin Shekhar Mangar [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 28, 2008 1:33 Joe
To: solr-user@lucene.apache.org
Subject: Re: Changing field datatype
On Wed, Oct 29, 2008 at 1:55 AM, Nguyen, Joe <[EMAIL PROTECTED]>
wrote:
>
&
SITE is defined as integer. I wanted to select all documents whose SITE=3002,
but SITE of the response was different.
http://localhost:8080/solr/mysite/select?indent=on&qt=standard&fl=SITE&fq:SITE:3002
http://localhost:8080/solr/mysite/select?indent=on&qt=dismax&fl=SITE&fq:SITE:3002
http:/
Never mind. I misused the syntax. :-)
-Original Message-
From: Nguyen, Joe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, October 28, 2008 7:00 Joe
To: solr-user@lucene.apache.org
Subject: Query integer type
SITE is defined as integer. I wanted to select all documents whose SITE=3002,
but
'whatever' as the productName.
However, it seems that the productName statement is not being used,
because I get only records back that match anotherField:"Johnny".
Not sure if this is a bug or expected behavior, and I am able to work
around it, but I'd certainly expect the above query to work.
Thanks!
-Joe
. 2005 and 2011 articles will be boosted by
0%.
Any idea/suggestion how I implement this?
Cheers
Joe
Use synonyms.
Add these lines to your ../conf/synonym.txt
Stephen,Steven,Steve
Bobby,Bob,Robert
...
-Original Message-
From: news [mailto:[EMAIL PROTECTED] On Behalf Of Jon Drukman
Sent: Friday, November 07, 2008 3:19 Joe
To: solr-user@lucene.apache.org
Subject: Handling proper names
Is
Could you elaborate further? 20 synonyms would be translated to 20
booleanQueries. Are you saying each booleanQuery requires a disk
access?
-Original Message-
From: Walter Underwood [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 7:46 Joe
To: solr-user@lucene.apache.org
How about creating a new core, indexing the data, then swapping the cores? The old core
is still available to handle queries till the new core replaces it.
-Original Message-
From: Lance Norskog [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 11:16 Joe
To: solr-user@lucene.apache.org
Subject
PROTECTED]
Sent: Wednesday, November 12, 2008 2:55 Joe
To: solr-user@lucene.apache.org
Subject: RE: FW: Score customization
I effectively need to use a multiplication in the sorting of the items.
Something like score*popularity.
It seems the only way to do this is to use a bf parameter.
However how do
A high number of
segments (implying more files need to be opened) would impact query
response time. In that case, you could run an optimize to
consolidate them into a single segment.
-Original Message-
From: Lance Norskog [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 11:16 J
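An optimize can be triggered with a plain update message, e.g. (host, port and handler path assumed to be the defaults):

```text
curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' -d '<optimize/>'
```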
y, November 17, 2008 6:09 Joe
To: solr-user@lucene.apache.org
Subject: Re: abt Multicore
Are all the documents in the same search space? That is, for a given
query, could any of the 10MM docs be returned?
If so, I don't think you need to worry about multicore. You may however
need to pu
Any suggestions?
-Original Message-
From: Nguyen, Joe
Sent: Monday, November 17, 2008 9:40 Joe
To: 'solr-user@lucene.apache.org'
Subject: RE: abt Multicore
"Are all the documents in the same search space? That is, for a given
query, could any of the 10MM docs be retu
e optimize will consolidate all segments
into a single segment. At the end, you'll have a single segment which
includes the new field.
Would that work?
-Original Message-
From: Jeff Lerman [mailto:[EMAIL PROTECTED]
Sent: Monday, November 17, 2008 12:45 Joe
To: solr-user@lucene.apa
Score = 2.3518934 + 0.2 = 2.5518934
You can specify the function in the request URL
(http://wiki.apache.org/solr/FunctionQuery)
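As a sketch of the additive case above (the popularity field and constant are hypothetical), dismax's bf parameter adds a function's value to the relevancy score:

```text
http://localhost:8983/solr/select?qt=dismax&q=foo&bf=product(popularity,0.2)
```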
-Original Message-
From: Derek Springer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, November 18, 2008 8:39 Joe
To: solr-user@lucene.apache.org
Subject: Re:
Could trigger the commit in this case?
-Original Message-
From: Nickolai Toupikov [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 8:36 Joe
To: solr-user@lucene.apache.org
Subject: Question about autocommit
Hello,
I would like some details on the autocommit mechanism. I
[mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 9:09 Joe
To: solr-user@lucene.apache.org
Subject: Re: Question about autocommit
They are separate commits. ramBufferSizeMB controls when the underlying
Lucene IndexWriter flushes ram to disk (this isnt like the IndexWriter
committing or
-Original Message-
From: Nickolai Toupikov [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 9:51 Joe
To: solr-user@lucene.apache.org
Subject: Re: Question about autocommit
The documents have an average size of about a kilobyte i would say.
bigger ones can pop up,
but not nearly often
: Caligula [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 11:11 Joe
To: solr-user@lucene.apache.org
Subject: No search result behavior (a la Amazon)
It appears to me that Amazon is using a 100% minimum match policy. If
there are no matches, they break down the original search terms
would be
1. q= +A +B +C Match all terms
2. q= +A +B -C Match A and B but not C
3. q =+A -B +C
4. q =
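The enumeration above can be sketched as a toy generator (`fallback_queries` is a made-up helper; it follows the list's order, negating the last term first):

```python
def fallback_queries(terms):
    """Sketch of the fallback enumeration above: first require all
    terms, then retry with one term excluded at a time."""
    queries = ["q= " + " ".join("+" + t for t in terms)]
    for i in reversed(range(len(terms))):
        # Flip exactly one term from required (+) to prohibited (-).
        parts = [("-" if j == i else "+") + t for j, t in enumerate(terms)]
        queries.append("q= " + " ".join(parts))
    return queries

print(fallback_queries(["A", "B", "C"]))
```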
-Original Message-
From: Caligula [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2008 11:52 Joe
To: solr-user@lucene.apache.org
Subject: RE: No search result behavior (a la A
of my test case setup, fill it with interesting documents,
and do some queries, comparing the results to expected results.
Are there wiki pages or other documented examples of doing this? It seems
rather straight-forward, but who knows, it may be dead simple with some unknown
feature.
Thanks!
-Joe
l the
different scenarios in which we use Solr.
Also, when/if we start making solr config changes, I can ensure that they
change nothing from my app's functional point of view (with the exception of
ridding us of dreaded OOMs).
Thanks,
-Joe
-Original Message-
From: Eric Pugh [mail
es on how much memory a particular
document takes up in memory, based on the data types, etc?
I have several stored fields, numerous other non-stored fields, a
largish copyTo field, and I am doing some sorting on indexed, non-stored
fields.
Any pointers would be appreciated!
Thanks,
-Joe
by '2008-08-12' and then by '121826'.
Any other tips/guidance like this would be great!
Thanks,
-Joe
On Mon, 2009-04-06 at 15:43 -0500, Joe Pollard wrote:
> To combat our frequent OutOfMemory Exceptions, I'm attempting to come up
> with a model so that we can determi
Cool, great resource, thanks.
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Tuesday, April 07, 2009 10:13 AM
To: solr-user@lucene.apache.org
Subject: Re: Coming up with a model of memory usage
On Tue, Apr 7, 2009 at 8:25 PM, Joe Pollard wrote:
>
ecide whether or not it's worth the effort
Best
Erick
On Tue, Apr 7, 2009 at 10:55 AM, Joe Pollard wrote:
> It doesn't seem to matter whether fields are stored or not, but I've
> found a rather striking difference in the memory requirements during
> sorting. Sorting o
up
> to you to decide whether or not it's worth the effort
>
> Best
> Erick
>
> On Tue, Apr 7, 2009 at 10:55 AM, Joe Pollard
> wrote:
>
>> It doesn't seem to matter whether fields are stored or not, but I've
>> found a rather striking diff
I see this interesting line in the wiki page LargeIndexes
http://wiki.apache.org/solr/LargeIndexes (sorting section towards the bottom)
Using _val_:ord(field) as a search term will sort the results without incurring
the memory cost.
I'd like to know what this means, but I'm having a bit of trou
ds & sort fields)
2) Merge these into one list of N sorted id fields.
3) Query each shard for the details of these documents (by id), getting
back a field list of id only.
It seems to me that step 3 is overhead that can be skipped.
Any thoughts on this/known patches?
Thanks,
-Joe
fwiw, when implementing distributed search i ran into a similar
problem, but then i noticed even google doesnt let you go past page
1000, easier to just set a limit on start
On Thu, Dec 24, 2009 at 8:36 AM, Walter Underwood wrote:
> When do users do a query like that? --wunder
>
> On Dec 24, 200
name, but how do i specify the user query as the other
parameter?
http://wiki.apache.org/solr/FunctionQuery#strdist
best,
--joe
how can i make the score be solely the output of a function query?
the function query wiki page details something like
q=boxname:findbox+_val_:"product(product(x,y),z)"&fl=*,score
but that doesnt seem to work
--joe
Hello *, im trying to make an index to support spelling errors/fuzzy
matching, ive indexed my document titles with NGramFilterFactory
minGramSize=2 maxGramSize=3, using the analysis page i can see the
common grams match between the indexed value and the query value,
however when i try to do a query
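For reference, a rough sketch of the grams NGramFilterFactory would produce per token with minGramSize=2 and maxGramSize=3 (grouped by gram size here; Lucene's actual emission order and position handling may differ):

```python
def char_ngrams(text, min_gram=2, max_gram=3):
    """All character n-grams of text for sizes min_gram..max_gram."""
    grams = []
    for n in range(min_gram, max_gram + 1):
        for i in range(len(text) - n + 1):
            grams.append(text[i:i + n])
    return grams

print(char_ngrams("beer"))
```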
"if this is the expected behaviour is there a way to override it?"[1]
[1] me
On Thu, Dec 31, 2009 at 10:13 AM, AHMET ARSLAN wrote:
>> Hello *, im trying to make an index
>> to support spelling errors/fuzzy
>> matching, ive indexed my document titles with
>> NGramFilterFactory
>> minGramSize=2 ma
im looking for, the problem im trying to solve is
matching wildcards on terms that can be entered multiple ways, i have
a set of analyzers that generate the various terms, ex wildcarding on
stemmed fields etc
thx much
--joe
'description2'
2. given a set of fields how to return matches that match across them
but on one specific field match as a phrase only, ex im using a dismax
parser currently but i want matches against a field called 'people' to
only match as a phrase
thx much,
--joe
matches
sorry if i was unclear
--joe
On Mon, Jan 11, 2010 at 10:13 AM, Erik Hatcher wrote:
>
> On Jan 11, 2010, at 12:56 PM, Joe Calderon wrote:
>>
>> 1. given a set of fields how to return matches that match across them
>> but not just one specific one, ex im using a
it seems to be in flux right now as the solr developers slowly make
improvements and ingest the various pieces into the solr trunk, i think
your best bet might be to use the 12/24 patch and fix any errors where
it doesnt apply cleanly
im using solr trunk r892336 with the 12/24 patch
--joe
I think you need to use the new trieDateField
On 01/12/2010 07:06 PM, Daniel Higginbotham wrote:
Hello,
I'm trying to boost results based on date using the first example
here:http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_boost_the_score_of_newer_documents
However, I'm getting an er
at we needed replicate after startup as there was no version
information on the master after restarting the instance. Is there something
special that needs to be done when using replicate after startup? Or is this a
bug?
below is the solr portion of the stacktrace.
Thanks,
Joe
open a jira issue.
Thanks,
Joe
> Date: Fri, 15 Jan 2010 19:06:15 -0500
> Subject: Re: OverlappingFileLockException when using <str name="replicateAfter">startup</str>
> From: yo...@lucidimagination.com
> To: solr-user@lucene.apache.org
>
> Interesting... this should be impo
There is no issue here, we had patched our solr to include SOLR-1595 and our
webapps directory contained two wars. With only a single war file there is no
issue with this replication handler.
thanks for the quick response.
Joe
> From: isjust...@hotmail.com
> To: sol
this has come up before, my suggestions would be to use the 12/24
patch with trunk revision 892336
http://www.lucidimagination.com/search/document/797549d29e1810d9/solr_1_4_field_collapsing_what_are_the_steps_for_applying_the_solr_236_patch
2010/1/19 Licinio Fernández Maurelo :
> Hi folks,
>
> i'
Using Solr 1.4 and the StreamingUpdateSolrServer on Weblogic 10.3 and get the
following error on commit. The data seems to load fine, and the same code
works fine with Tomcat. On the client side an Internal Server Error is
reported.
Thanks,
Joe
weblogic.utils.NestedRuntimeException
:8080/solr/core3
query
facet
spellcheck
debug
which works as long as qt=standard, if i change it to dismax it doesnt
use the shards parameter anymore...
thx much
--joe
main reason im creating the new request
handler, or do i put them all as defaults under my new request handler
and let the query parser use whichever ones it supports?
On Thu, Jan 21, 2010 at 11:45 AM, Yonik Seeley
wrote:
> On Thu, Jan 21, 2010 at 2:39 PM, Joe Calderon wrote:
>> hello *
ne on Weblogic, I assume this issue is with the
StreamingUpdateSolrServer.
Thanks,
Joe
Caused by: org.apache.commons.httpclient.ProtocolException: Unbuffered entity
enclosing request can not be repeated.
at
org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeReques
ared config and schema.
Thanks,
Joe
facets are based off the indexed version of your string, not the stored
version, you probably have an analyzer thats removing punctuation,
most people index the same field multiple ways for different purposes,
matching, sorting, faceting etc...
index a copy of your field as string type and facet o
hello *, in distributed search when a shard goes down, an error is
returned and the search fails, is there a way to avoid the error and
return the results from the shards that are still up?
thx much
--joe
by default solr will only search the default field, you have to
either query all fields field1:(ore) or field2:(ore) or field3:(ore)
or use a different query parser like dismax
On Tue, Feb 2, 2010 at 3:31 PM, Stefan Maric wrote:
> I have got a basic configuration of Solr up and running and have
the
lucene query parser as i mentioned before, the syntax can be found at
http://lucene.apache.org/java/2_9_1/queryparsersyntax.html
hope this helps
--joe
On Tue, Feb 2, 2010 at 3:59 PM, Stefan Maric wrote:
> Thanks for the quick reply
> I will have to see if the default query mechanis
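Concretely, the two options look something like this (field names are hypothetical):

```text
# lucene parser, spelling out each field
q=field1:(ore) OR field2:(ore) OR field3:(ore)

# dismax parser, searching several fields at once
qt=dismax&qf=field1 field2 field3&q=ore
```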
a shard has failed
On Wed, Feb 3, 2010 at 10:55 AM, Yonik Seeley
wrote:
> On Fri, Jan 29, 2010 at 3:31 PM, Joe Calderon wrote:
>> hello *, in distributed search when a shard goes down, an error is
>> returned and the search fails, is there a way to avoid the error and
>> re
is it possible to configure the distance formula used by fuzzy
matching? i see there are others listed on the function query page under
strdist but im wondering if they are applicable to fuzzy matching
thx much
--joe
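As far as i know, fuzzy (~) matching was hardwired to a Levenshtein-based similarity at the time, and the strdist metrics (edit, jw, ngram) apply only to function queries. For reference, a sketch of plain Levenshtein edit distance, the metric behind ~:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```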
if there is another lucene tree i should grab to use to build solr?
--joe
hello *, currently with hl.usePhraseHighlighter=true, a query for (joe
jack*) will highlight joe jackson, however after reading the
archives, what im looking for is the old 1.1 behaviour so that only
joe jack is highlighted, is this possible in solr 1.5 ?
thx much
--joe
when i set hl.highlightMultiTerm=false the term that matches the wild
card is not highlighted at all, ideally ide like a partial highlight
(the characters before the wildcard), but if not i can live without it
thx much for the help
--joe
On Fri, Feb 5, 2010 at 10:44 PM, Mark Miller wrote:
>
hello *, quick question, what would i have to change in the query
parser to allow wildcarded terms to go through text analysis?
you can do that very easily yourself in a post processing step after
you receive the solr response
On Wed, Feb 10, 2010 at 8:12 AM, gdeconto
wrote:
>
> I have been able to apply and use the solr-236 patch (field collapsing)
> successfully.
>
> Very, very cool and powerful.
>
> My one comment/conc
sorry, what i meant to say is apply text analysis to the part of the
query that is wildcarded, for example if a term with latin1 diacritics
is wildcarded ide still like to run it through ISOLatin1Filter
On Wed, Feb 10, 2010 at 4:59 AM, Fuad Efendi wrote:
>> hello *, quick question, what would i h
if you use the core model via solr.xml you can reload a core without
having to to restart the servlet container,
http://wiki.apache.org/solr/CoreAdmin
On 02/11/2010 02:40 PM, Emad Mushtaq wrote:
Hi,
I would like to know if there is a way of reindexing data without restarting
the server. Lets sa
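With that setup, a reload is a single HTTP call, e.g. (host, port and core name assumed):

```text
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0
```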
when using solr.xml, you can specify a sharedlib directory to share
among cores, is it possible to reload the classes in this dir without
having to restart the servlet container? it would be useful to be able
to make changes to those classes on the fly or be able to drop in new
plugins
i ran into a problem while using the edgengramtokenfilter, it seems to
report incorrect offsets when generating tokens, more specifically all
the tokens have offset 0 and term length as start and end, this leads
to goofy highlighting behavior when creating edge grams for tokens
beyond the first one
lucene-2266 filed and patch posted.
On 02/13/2010 09:14 PM, Robert Muir wrote:
Joe, can you open a Lucene JIRA issue for this?
I just glanced at the code and it looks like a bug to me.
On Sun, Feb 14, 2010 at 12:07 AM, Joe Calderonwrote:
i ran into a problem while using the
no but you can set a default for the qf parameter with the same value
On 02/15/2010 01:50 AM, Steve Radhouani wrote:
Hi there,
Can the option be used by the DisMaxRequestHandler?
Thanks,
-Steve
no, youre just changing how youre querying the index, not the actual
index, you will need to restart the servlet container or reload the core
for the config changes to take effect tho
On 02/17/2010 10:04 AM, Frederico Azeiteiro wrote:
Hi,
If i change the "defaultSearchField" in the core schem
use the common grams filter, itll create tokens for stop words and
their adjacent terms
On Thu, Feb 18, 2010 at 7:16 AM, Nagelberg, Kallin
wrote:
> I've noticed some peculiar behavior with the dismax searchhandler.
>
> In my case I'm making the search "The British Open", and am getting 0
> resul
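Roughly what CommonGramsFilter emits, as a sketch (`common_grams` is a made-up helper; the real filter also tracks token positions): each stopword gets fused with its neighbors so phrases like "the british" stay matchable.

```python
def common_grams(tokens, stopwords):
    """Emit each token, plus a fused bigram wherever either member
    of an adjacent pair is a stopword."""
    out = []
    for i, tok in enumerate(tokens):
        out.append(tok)
        if i + 1 < len(tokens) and (tok in stopwords or tokens[i + 1] in stopwords):
            out.append(tok + "_" + tokens[i + 1])
    return out

print(common_grams(["the", "british", "open"], {"the"}))
```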
i had to create a autosuggest implementation not too long ago,
originally i was using faceting, where i would match wildcards on a
tokenized field and facet on an unaltered field, this had the
advantage that i could do everything from one index, though it was
also limited by the fact suggestions ca