Okay, I'll start guessing:
- Do we have to write a customized QueryParserPlugin? (sketch below)
- At which point does the RequestHandler/QueryParser/whatever decide which
query analyzer to use?
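If a custom parser really is needed, registering it in solrconfig.xml
should look roughly like this; the name and class here are purely
hypothetical:

  <!-- make the custom parser available under the name "myparser" -->
  <queryParser name="myparser" class="com.example.MyQParserPlugin"/>

It could then be selected per request with defType=myparser, or inline via
{!myparser} in the q parameter.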
10% for every copied field is a lot for us; we're facing terabytes of
digitized book data. So we want to keep the index
Hi!
We've got one index split into 4 shards of 70,000 records each, holding
large full-text data from (very dirty) OCR. Thus we've got a lot of
"unique" terms. Now we're trying to obtain the 400 most common words for
the CommonGramsFilter via the TermsComponent, but the request always runs
out of memory. The VM is
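For context: the top-terms list is meant to feed
solr.CommonGramsFilterFactory. A minimal field type using it could look
like this; the field type name and words file are assumptions:

  <fieldType name="text_ocr" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- commonwords.txt holds the ~400 most frequent OCR terms -->
      <filter class="solr.CommonGramsFilterFactory"
              words="commonwords.txt" ignoreCase="true"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- the query side uses the query variant of the filter -->
      <filter class="solr.CommonGramsQueryFilterFactory"
              words="commonwords.txt" ignoreCase="true"/>
    </analyzer>
  </fieldType>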
Thanks for your suggestion. It seems to be the combination of shards and
the TermsComponent. Now we simply request shard-by-shard, without the
"shards" and "shards.qt" params, and merge the results via XSLT (sketched
below).
Sebastian
Hi,
"if i type complete word (such as "übersicht").
But there are no hits, if i use wildcards (such as "über*")
Searching with wildcards and without umlauts works as well."
I can confirm that.
Greetz,
Sebastian
Ah, BTW,
since the problem seems to be a query-parser issue, a simple workaround
would be to replace all umlauts with ASCII characters (ä = ae, ö = oe,
ü = ue, for example) before sending the query to Solr, and to use a
solr.MappingCharFilterFactory with the same replacements (ä = ae, ö = oe,
ü = ue).
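A sketch of that setup, with the field type and mapping file names assumed:

schema.xml:

  <fieldType name="text_de" class="solr.TextField">
    <analyzer>
      <!-- char filters run before the tokenizer, so indexed terms
           contain ae/oe/ue instead of umlauts -->
      <charFilter class="solr.MappingCharFilterFactory"
                  mapping="mapping-umlauts.txt"/>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

mapping-umlauts.txt (in the same directory as schema.xml):

  "ä" => "ae"
  "ö" => "oe"
  "ü" => "ue"
  # ß is an additional, common German mapping
  "ß" => "ss"

Since the client applies the same replacements before sending the query, a
wildcard query like "ueber*" then matches the indexed "uebersicht" even
though wildcard terms bypass the analyzer.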
I don't follow you. Did I write anything about an Analyzer? Actually, no.
Ah, NOW I get it. It's not a bug, it's a feature.
But that would mean that every character manipulation (e.g.
char mapping/replacement, the Porter stemmer in some cases ...) would cause
a wildcard query to fail. That's too bad.
But why? What's the problem with passing the prefix through the
analyzer?
Hi Developers and Users,
a serious problem occurred:
19.07.2011 10:50:32 org.apache.solr.common.SolrException log
SEVERE: java.io.IOException: seek past EOF
at
org.apache.lucene.store.MMapDirectory$MMapIndexInput.seek(MMapDirectory.java:343)
at org.apache.lucene.index.FieldsReade
Oops, false alarm.
Our CustomSimilarity, combined with a very small set of documents, caused
the problem.
Greetings,
Sebastian
Dear Devs and Users,
it is I!
Okay, it starts with this:
Exception in thread "Lucene Merge Thread #1"
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: Map
failed
at
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.ja
Update.
After adding 1626 documents without doing a commit or optimize:
Exception in thread "Lucene Merge Thread #1"
org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: Map
failed
at
org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMerg
Here we go ...
This time we tried to use the old LogByteSizeMergePolicy and
SerialMergeScheduler:
We did this before, just to be sure ...
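In Solr 3.x those two settings live in the <indexDefaults> section of
solrconfig.xml; a minimal sketch (not our verbatim config):

  <indexDefaults>
    <!-- fall back to the pre-TieredMergePolicy defaults -->
    <mergePolicy class="org.apache.lucene.index.LogByteSizeMergePolicy"/>
    <mergeScheduler class="org.apache.lucene.index.SerialMergeScheduler"/>
  </indexDefaults>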
~300 documents:
SEVERE: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:782)
at
org.apache.lucen
Yeah, indeed.
But since the VM is equipped with plenty of RAM (22 GB) and it has worked
very well with this setup so far (on Solr 3.2), I AM slightly confused.
Maybe we should LOWER the dedicated physical memory? The remaining 10 GB
are used by a second Tomcat (8 GB) and the OS (SUSE). As far as I un
mdz-munich wrote:
>
> Yeah, indeed.
>
> But since the VM is equipped with plenty of RAM (22 GB) and it has worked
> very well with this setup so far (on Solr 3.2), I AM slightly confused.
>
> Maybe we should LOWER the dedicated physical memory? The remaining 10 GB
> are
I was wrong.
After rebooting Tomcat we discovered a new sweetness:
SEVERE: REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@3c753c75
(core.name) has a reference count of 1
22.07.2011 11:52:07 org.apache.solr.common.SolrException log
SEVERE: java.lang.RuntimeException: java.io.IOExcep
Hi Yonik,
thanks for your reply!
> Are you specifically selecting MMapDirectory in solrconfig.xml?
Nope.
We installed Oracle's Runtime from
http://java.com/de/download/linux_manual.jsp?locale=de
java.runtime.name = Java(TM) SE Runtime Environment
sun.boot.library.path = /usr/java/jdk1.6.0
This is what "ulimit -a" says.
Maybe it's important:
- The OS (openSUSE 10) is virtualized on VMware
- The index is stored on Network Attached Storage
Best regards
Sebastian
It seems to work now.
We simply added
  ulimit -v unlimited
to our Tomcat startup script. (That removes the per-process virtual
address-space limit, which MMapDirectory needs in order to memory-map
large index files.)
@Yonik: Thanks again!
Best regards,
Sebastian
Hi Tobias,
try this; it works for us (Solr 3.3).
solrconfig.xml:
word
suggestion
org.apache.solr.spelling.suggest.Suggester
org.apache.solr.spelling.suggest.fst.FSTLookup
wordCorpus
score
./suggester
false
true
0.005
true
true
true
true
suggestion
50
50
suggest
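The XML markup of this snippet did not survive the archive; only the
values did. For Solr 3.3 it plausibly looked like the sketch below. The
mapping of some values ("word", "score", "./suggester", the extra
true/false flags, and the second 50) is ambiguous, so treat all parameter
names here as assumptions:

  <searchComponent name="suggest" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">suggestion</str>
      <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
      <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
      <str name="field">wordCorpus</str>
      <float name="threshold">0.005</float>
      <str name="buildOnCommit">true</str>
    </lst>
  </searchComponent>

  <requestHandler name="/suggest" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="spellcheck">true</str>
      <str name="spellcheck.dictionary">suggestion</str>
      <str name="spellcheck.onlyMorePopular">true</str>
      <str name="spellcheck.count">50</str>
      <str name="spellcheck.collate">true</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>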
Query it like this:
htt
Hi Tobias,
sadly, it seems you are right.
After a bit of investigation we also noticed that some names are missing
(we use it for auto-completing author names). And since it is a
distributed setup ...
But I am almost sure it worked with Solr 3.2.
Best regards,
Sebastian