rned db record?
Thanks in advance,
Mathias
--
View this message in context:
http://lucene.472066.n3.nabble.com/Where-does-the-value-for-dih-delta-id-come-from-exactly-tp4328678.html
Sent from the Solr - User mailing list archive at Nabble.com.
Yonik Seeley wrote
> On Fri, Oct 21, 2016 at 7:07 AM, Mathias <mathias.mahlknecht@> wrote:
>> With the first version I get the following error:
>>
>> "org.apache.solr.search.SyntaxError: Cannot parse
>> '(type:EM_PM_Timerecord
With the first version I get the following error:
"org.apache.solr.search.SyntaxError: Cannot parse '(type:EM_PM_Timerecord':
Encountered "" at line 1, column 22. Was expecting one of:
... "+" ... "-" ... "(" ... ")" ... "*" ..."
I tested it with solr version 6.1.0 and 6.2.1.
Thanks,
Mathias
--
View this message in context:
http://lucene.472066.n3.nabble.com/OR-two-joins-tp4302415p4302416.html
eated:[142007400
TO 145161000])
Can someone tell me what I'm missing? And what is wrong with the first
statement?
Thanks in advance,
Mathias
--
View this message in context:
http://lucene.472066.n3.nabble.com/OR-two-joins-tp4302415.html
based search
utilizing the standard handler but scoring with a function (it's slower,
but more flexible).
Feel free to test it and let me know what you think :)
http://demo-itec.uni-klu.ac.at/liredemo/
cheers,
Mathias
--
Priv.-Doz. Dr. Dipl.-Ing. Mathias Lux
Associate Professor at
have to ensure reliability by myself.
Thanks.
Mathias
/LireEntityProcessor.java?at=master#cl-56
The EntityProcessor is part of this image search plugin if anyone is
interested: https://bitbucket.org/dermotte/liresolr/
:) It's always the small things that are hard to find
cheers and thanks, Mathias
On Wed, Dec 18, 2013 at 7:26 PM, P Williams
ssume that the nested entity processor will be called
for each of the rows that come out from its parent. I've read
somewhere that the data has to be taken from the data source, and
I've implemented that, but it doesn't seem to change anything.
cheers,
Mathias
On Wed, Dec 18, 201
as the filePath attribute, but it ends up all the
same. However, the FileListEntityProcessor is able to read all the
files according to the debug output, but I'm missing the link from the
FileListEntityProcessor to the LireEntityProcessor.
I'd appreciate any pointer or help :)
cheers,
set the queryResultCache size to 0 in the solrconfig.xml
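For reference, such a cache entry in solrconfig.xml might look like the following (a sketch; the `class` and attribute values shown are the usual defaults, not taken from this thread):

```xml
<!-- In solrconfig.xml: a size of 0 effectively disables the query result cache -->
<queryResultCache class="solr.LRUCache"
                  size="0"
                  initialSize="0"
                  autowarmCount="0"/>
```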
cheers,
Mathias
On Thu, Oct 24, 2013 at 4:51 PM, Joel Bernstein wrote:
> Mathias,
>
> I'd have to do a close review of the function sort code to be sure, but I
> suspect if you implement the equals() method on the Valu
That's a possibility, I'll try that and report on the effects. Thanks,
Mathias
On 24.10.2013 at 16:52, "Joel Bernstein" wrote:
> Mathias,
>
> I'd have to do a close review of the function sort code to be sure, but I
> suspect if you implement the equals(
ounteract?
btw. I'm using Solr 4.4 (so if you are aware of the issue and it has
been resolved in 4.5 I'll port it :) The code I'm using is at
https://bitbucket.org/dermotte/liresolr
regards,
Mathias
--
Dr. Mathias Lux
Assistant Professor, Klagenfurt University, Austria
http://tinyurl.com/mlux-itec
Got it! Just to share with you ... and maybe for inclusion in the Java
API docs of ValueSource :)
For sorting, one needs to implement the method
public double doubleVal(int) of the class ValueSource;
then it works like a charm.
cheers,
Mathias
On Tue, Sep 17, 2013 at 6:28 PM, Chris Hostetter
u can extend and override the Similarity
implementation. You might take a look at
http://lucene.apache.org/core/4_4_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
cheers,
Mathias
On Tue, Sep 17, 2013 at 1:59 PM, Upayavira wrote:
> Have you used debugQuery=true, or fl=*,
g8QEgsgEBAQEBAgEBAQEBA%3D)+asc&fl=id%2Ctitle%2Clirefunc(cl_hi%2CFQY5DhMYDg0ODg0PEBEPDg4ODg8QEgsgEBAQEBAgEBAQEBA%3D)&wt=json&indent=true
cheers,
Mathias
On Tue, Sep 17, 2013 at 1:01 AM, Chris Hostetter
wrote:
> : dissimilarity functions). What I want to do is to search using common
>
t the DocValues for search is handled by a custom
RequestHandler, which works great, but using text as a main search
feature, and my DocValues for re-ranking, I'd rather just add a
function for sorting and use the current, stable, and well-performing
request handler.
cheers,
Mathias
ps. a de
PL-ed) source online at the end of
September (as module of LIRE), after some stress tests, documentation
and further bug fixing.
cheers,
Mathias
On Mon, Aug 12, 2013 at 4:51 PM, Robert Muir wrote:
> On Mon, Aug 12, 2013 at 8:38 AM, Mathias Lux wrote:
>> Hi!
>>
>> I&
Hi!
That's what I'm doing currently, but it ends up in StoredField
implementations, which create an overhead on decompression I want to
avoid.
cheers,
Mathias
On Mon, Aug 12, 2013 at 3:11 PM, Raymond Wiker wrote:
> base64-encode the binary data? That will give you strings, at the
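The base64 route Raymond suggests can be sketched with the JDK's built-in java.util.Base64 (available from Java 8 on); the field content and byte values here are purely illustrative:

```java
import java.util.Arrays;
import java.util.Base64;

public class Base64RoundTrip {
    public static void main(String[] args) {
        byte[] histogram = {12, 34, 56, 78};   // e.g. an image feature vector
        // Encode to a plain ASCII string, safe to store in a Solr string field.
        String encoded = Base64.getEncoder().encodeToString(histogram);
        // Decoding recovers the original bytes exactly.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded);                            // "DCI4Tg=="
        System.out.println(Arrays.equals(histogram, decoded));  // "true"
    }
}
```

The trade-off Mathias mentions still applies: the bytes end up in a stored field, so retrieval pays the string/decompression overhead he is trying to avoid.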
Hi!
I'm basically searching for a method to put byte[] data into Lucene
DocValues of type BINARY (see [1]). Currently only primitives and
Strings are supported according to [1].
I know that this can be done with a custom update handler, but I'd
like to avoid that.
cheers,
Mathias
Values?
cheers,
Mathias
Hi,
I'm using Embedded Solr 4.0 with SolrJ. In solrconfig.xml you can
specify a RunExecutableListener. Is there something similar in SolrJ,
so I can get an event, if the index gets updated?
This can be very useful when using SolrCloud, to get an event if other
shards update the index.
Thanks.
lds is the only
possibility if you can't increase memory.
Thanks.
Mathias
2012/5/14 Erick Erickson :
> But consider what would happen if the cache was cleaned up the next
> query in would require that the terms be re-loaded. I guess it's possible
> that some people would be willing
s not
enough memory left. But instead, the FieldCache always
remains in the "Old Generation" of the GC heap.
Could this be fixed or is the only way out to get more memory?
Thanks.
Mathias
Hi,
I'm looking for a parameter like "group.truncate=true". However, I don't
want to count facets based only on the most relevant document of each
group, but based on all documents. Moreover, if a facet value occurs in
more than one document of a group, it should only count once.
Example:
Doc 1:
type: s
Hi Ahmet,
awesome! Now it works.
2012/2/10 Ahmet Arslan :
>> I'm using the NGramFilterFactory for indexing and querying.
>>
>> So if I'm searching for "overflow" it creates a query like
>> this:
>>
>> mySearchField:"ov ve ... erflow overflo verflow overflow"
>>
>> But if I misspelled "overflow",
Hi,
I'm using the NGramFilterFactory for indexing and querying.
So if I'm searching for "overflow" it creates a query like this:
mySearchField:"ov ve ... erflow overflo verflow overflow"
But if I misspell "overflow", e.g. "owerflow", there are no matches
because of the quotes around the query:
Hi Morten,
thanks, this is a very good solution.
I also found another solution:
Creating a custom ValueSourceParser for price sorting which considered
the standard price and the campaign price.
In my special case I think your approach isn't working, because I also
need result grouping and this c
Sorry, here are some details:
requestHandler: XmlUpdateRequestHandler
protocol: http (10 concurrent threads)
document: 1kb size, 15 fields
cpu load: 20%
memory usage: 50%
But generally speaking, is that normal, or must something be wrong with my
configuration, ...
2011/6/17 Erick Erickson
>
string representation
of the array address will be added to the SolrInputDocument.
BTW: I've tested it with EmbeddedSolrServer and Solr/Lucene trunk.
Why has the string representation changed? From the changed string I cannot
decode the correct ID.
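That string is just Object.toString() applied to the array (the "[B" type tag plus an identity hash code), which describes the reference rather than the contents, so the original bytes cannot be recovered from it. A minimal demonstration:

```java
import java.util.Arrays;

public class ArrayToString {
    public static void main(String[] args) {
        byte[] id = {1, 2, 3};
        // toString() on an array is inherited from Object: "[B@" + hex identity
        // hash. It varies between runs and says nothing about the contents.
        System.out.println(id.toString());        // e.g. "[B@6d06d69c"
        System.out.println(Arrays.toString(id));  // "[1, 2, 3]" — the actual contents
    }
}
```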
--
Kind regards,
Mathias
Hi,
> On Mon, Oct 25, 2010 at 3:41 AM, Mathias Walter
> wrote:
> > I indexed about 90 million sentences and the PAS (predicate argument
> > structures) they consist of (which are about 500 million). Then
> > I try to do NER (named entity recognition) by searching about 5 mi
.
BTW: I made some tests with a smaller index and the ID encoded as string. Using
the field cache improves the hit retrieval
dramatically (from 18 seconds down to 2 seconds per query, with a large number
of results).
--
Kind regards,
Mathias
> -Original Message-
> From: Er
FieldCache with a binary field?
--
Kind regards,
Mathias
"1");
args.put("stemEnglishPossessive", "0");
args.put("language", "English");
wordDelimiter = new WordDelimiterFilterFactory();
wordDelimiter.init(args);
stream = wordDelimiter.create(stream);
--
Kind regards,
Mathias
> -Original Message-
> From: Max Lyn
Hi Robert,
> On Fri, Sep 24, 2010 at 3:54 AM, Mathias Walter wrote:
>
> > Hi,
> >
> > I combined the WordDelimiterFilter with the PositionFilter to prevent the
> > creation of expensive Phrase and MultiPhraseQueries. But
> > if I now parse an es
d, I would expect a PhraseQuery and not a BooleanQuery.
What should be the correct behavior?
--
Kind regards,
Mathias
eld
cache, i.e. could a filter query somehow limit the values loaded in
the FieldCache?
Is there otherwise a WTH (Well Known Hack) to be able to sort on
fields when an index has lots (100s of 1000s) of values for that
field?
Any help appreciated,
Mathias.