providing SolrIndexSearcher.java#L522
>
> Would you mind to raise a ticket?
>
> On Wed, Nov 5, 2014 at 6:51 PM, Dirk Högemann wrote:
>
> > Our production Solr-Slaves-Cores (we have about 40 Cores (each has a
> > moderate size about 10K documents to 90K documents)) produc
Our production Solr-Slaves-Cores (we have about 40 Cores (each has a
moderate size about 10K documents to 90K documents)) produce many
exceptions of type:
2014-11-05 15:06:06.247 [searcherExecutor-158-thread-1] ERROR
org.apache.solr.search.SolrCache: Error during auto-warming of
key:org.apache.sol
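Auto-warming behavior is controlled per cache in solrconfig.xml, which is where such warming-time errors originate. A minimal sketch (the cache sizes and autowarmCount values below are illustrative, not taken from the thread):

```
<!-- solrconfig.xml: warming is configured per cache; values illustrative -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
<!-- autowarmCount="0" disables warming entirely, which also silences
     warming-time errors at the cost of cold caches after each commit -->
```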
Hello,
I have implemented a Solr EventListener, which should be fired after
committing.
This works fine on the Solr-Master Instance and it also worked in Solr 3.5
on any Slave Instance.
I upgraded my installation to Solr 4.2 and now the postCommit event is not
fired any more on the replication (S
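For reference, a postCommit listener is registered in the updateHandler section of solrconfig.xml; a minimal sketch, with the listener class a placeholder for your own implementation:

```
<!-- solrconfig.xml: postCommit listener registration;
     com.example.MyCommitListener is a hypothetical class name -->
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="com.example.MyCommitListener"/>
</updateHandler>
```

A likely explanation (an assumption, not confirmed in the thread): on 4.x slaves, replication installs the new index and opens a new searcher without going through a local commit, so postCommit listeners are never invoked there; a newSearcher listener may fire instead.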
Do you really need all of them in the response to show them in the results?
Since you define them as not stored now, this does not seem to be the case.
2012/12/23 Otis Gospodnetic
> Hi,
>
> You can specify them in solrconfig.xml for your request handler, so you
> don't have to specify it for each query unless you
You can define the fields to be returned with the fl parameter:
fl=the,needed,fields - usually the score and the id.
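As suggested above, the field list can also be fixed once in solrconfig.xml as a request-handler default, so clients need not pass fl on every query (handler name and fields below are examples):

```
<!-- solrconfig.xml: default fl for a handler; name and fields illustrative -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="fl">id,score</str>
  </lst>
</requestHandler>
```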
2012/12/23 uwe72
> hi
>
> I am indexing PDF documents to Solr via Tika.
>
> when i do the query in the client with solrj the performance is very bad
> (40
> seconds) to load 100
; escaped or quoted characters which will then be seen by the analyzer
> tokenizer.
>
>
> -- Jack Krupansky
>
> -Original Message- From: Dirk Högemann
> Sent: Monday, December 17, 2012 11:01 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr3.5 PatternTokenizer / Sea
"index" to "query", but that won't change your problem since Solr
> defaults to using the "index" analyzer if it doesn't "see" a "query"
> analyzer.
>
> -- Jack Krupansky
>
> -Original Message- From: Dirk Högemann
> S
+cl2Categories_NACE:bergbau
That is the relevant debug output from the query.
2012/12/17 Dirk Högemann
> Hi,
>
> I am not sure if I am missing something, or maybe I do not exactly
> understand the index/search analyzer definitions and their execution.
>
> I have a field definition like this:
>
>
>
Hi,
I am not sure if I am missing something, or maybe I do not exactly
understand the index/search analyzer definitions and their execution.
I have a field definition like this:
Any field starting with cl2 should be recogni
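A rule keyed on a field-name prefix like cl2 would normally be a dynamicField in schema.xml; a hedged sketch (the type name "text_general" and attributes are assumptions):

```
<!-- schema.xml: any field whose name starts with cl2 matches this rule;
     the type name is an assumption -->
<dynamicField name="cl2*" type="text_general" indexed="true" stored="true"/>
```

Note that index and query analyzers are declared on the fieldType; as mentioned elsewhere in the thread, when no separate query analyzer is defined, Solr uses the index analyzer for both.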
Maybe a custom search component that runs before the QueryComponent and
> does the escaping?
>
> -- Jack Krupansky
>
> -Original Message- From: Dirk Högemann
> Sent: Tuesday, October 30, 2012 1:07 PM
> To: solr-user@lucene.apache.org
> Subject: Forwardslash delimiter.So
Hi,
I am currently upgrading from Solr 3.5 to Solr 4.0
I used to have filter-based restrictions for my search based on the paths
of documents in a content repository.
E.g. fq={!q.op=OR df=folderPath_}/customer/content/*
Unfortunately this does not work anymore, as Lucene now supports regexp
searches delimited by forward slashes.
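Since a bare forward slash now starts a Lucene regexp query in Solr 4, a literal path filter needs either escaping or a parser that takes the value verbatim. Two hedged alternatives (assumptions, not from the thread):

```
fq={!prefix f=folderPath_}/customer/content/
fq=folderPath_:\/customer\/content\/*
```

The prefix parser applies no analysis and matches everything starting with the raw value; the second form backslash-escapes each slash for the standard parser.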
> throw new IllegalArgumentException("maxTokenCount is mandatory.");
> }
> maxTokenCount = Integer.parseInt(args.get(maxTokenCountArg));
>
> Hmmm... try this "workaround":
>
> <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="foo" foo="1"/>
>
> -- Jack Kru
Hi,
I am trying to upgrade from Solr 3.5 to Solr 4.0.
I read the following in the example solrconfig:
I tried that as follows:
...
...
The LimitTokenCountFilterFactory configured like that crashes the startup
of th
> 4.0 you can start indexing new documents into an existing index.
> To get optimal performance, use oal.index.IndexUpgrader
> to upgrade your indexes to latest file format (LUCENE-3082).
>
> -- Jack Krupansky
>
> -Original Message- From: Dirk Högemann
> Sent: Tuesday,
Hello,
I am trying to make our search application Solr 4.0 (Beta) ready and
elaborate on the tasks necessary to accomplish this.
When I try to reindex our documents I get the following exception:
auto commit error...:java.lang.UnsupportedOperationException: this codec
can only be used for readin
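The IndexUpgrader mentioned above ships with Lucene and can be run from the command line against a closed index; the jar name and index path below are illustrative:

```
java -cp lucene-core-4.0.0.jar org.apache.lucene.index.IndexUpgrader \
  -verbose /path/to/solr/data/index
```

Run it only while Solr is stopped, since it rewrites the index segments in place to the latest file format.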
Interesting thing is that the only tool I found to handle my PDF correctly
was pdftotext.
2012/2/10 Robert Muir
> On Fri, Feb 10, 2012 at 6:18 AM, Dirk Högemann
> wrote:
> >
> > Our suggest component and parts of our search are getting hard to use
> > because of this. Any ot
> There's a JIRA for it, but it depends on some Tika 1.1 stuff as far as I
> can understand
>
> https://issues.apache.org/jira/browse/SOLR-2930
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
> Solr Training - www.solrtraining.com
>
Hello,
we use Solr 3.5 and Tika to index a lot of PDFs. The content of those PDFs
is searchable via a full-text search.
Also the terms are used to make search suggestions.
Unfortunately pdfbox seems to insert a space character when there are
soft-hyphens in the content of the PDF
Thus the extrac
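If the soft hyphen (U+00AD) itself survives extraction, it can be stripped at analysis time with a char filter before tokenization; note this does not help once pdfbox has already replaced it with a space. A hedged sketch:

```
<!-- schema.xml: strip soft hyphens before tokenization; only useful if
     the extracted text still contains U+00AD rather than a space -->
<analyzer>
  <charFilter class="solr.PatternReplaceCharFilterFactory"
              pattern="\u00AD" replacement=""/>
  <tokenizer class="solr.StandardTokenizerFactory"/>
</analyzer>
```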
>
> Not sure what that means for the rest of your app though.
>
> Best
> Erick
>
> On Mon, Feb 6, 2012 at 5:44 AM, Dirk Högemann
> wrote:
> > Hi,
> >
> > I have a question on phonetic search and matching in solr.
> > In our application all the
Hi,
I have a question on phonetic search and matching in solr.
In our application all the content of an article is written to a full-text
search field, which provides stemming and a phonetic filter (Cologne
phonetic for German).
This is the relevant part of the configuration for the index analyzer
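The configuration itself is cut off above; a hedged sketch of such an index analyzer chain (the field type name, tokenizer, and exact filter order are assumptions):

```
<!-- schema.xml: stemming plus Cologne phonetic for German;
     type name and filter order are assumptions -->
<fieldType name="text_de_phonetic" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SnowballPorterFilterFactory" language="German"/>
    <filter class="solr.PhoneticFilterFactory" encoder="ColognePhonetic"
            inject="true"/>
  </analyzer>
</fieldType>
```

With inject="true" the original tokens are kept alongside the phonetic codes, so exact matches still score higher than purely phonetic ones.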
Is it better to collect a list of documents to add and commit these,
instead of using the auto-commit function?
Thanks in advance for any help!
Dirk Högemann