I've used this before, by specifying the chain as the default processor chain
by putting the following directly under the entry:
uniq-fields
Not sure if this is the best way, but since our app is the only one using Solr,
we want every update to use the chain across all ou
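(Not from the original thread, just for reference: the chain can also be selected per request with the update.chain parameter instead of being made the default. A minimal SolrJ sketch; the URL and collection name are placeholders.)

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.UpdateRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class ChainPerRequest {
      public static void main(String[] args) throws Exception {
        // "mycollection" is a placeholder; point this at your own core/collection.
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", "1");
          UpdateRequest update = new UpdateRequest();
          update.add(doc);
          // Route this one update through the uniq-fields chain explicitly,
          // instead of relying on it being the default chain in solrconfig.xml.
          update.setParam("update.chain", "uniq-fields");
          update.process(client);
          client.commit();
        }
      }
    }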
Has anyone else observed NPEs attempting to have expired docs removed? I'm
seeing the following exceptions:
2019-02-28 04:06:34.849 ERROR (autoExpireDocs-30-thread-1) [ ]
o.a.s.u.p.DocExpirationUpdateProcessorFactory Runtime error in periodic
deletion of expired docs: null
java.lang.NullPointerException
--
From: Jason Gerlowski [mailto:gerlowsk...@gmail.com]
Sent: Monday, March 11, 2019 1:24 PM
To: solr-user@lucene.apache.org
Subject: Re: ClassCastException in SolrJ 7.6+
Hi Gerald,
That looks like it might be a bug in SolrJ's JSON faceting support.
Do you have a small code snippet that reprod
I'm seeing the following Exception using JSON Facet API in SolrJ 7.6, 7.7,
7.7.1:
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to
java.lang.Integer
at
org.apache.solr.client.solrj.response.json.NestableJsonFacet.<init>(NestableJsonFacet.java:52)
at
org.apache.so
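(Not the poster's code: a minimal sketch of the kind of SolrJ JSON Facet request whose response gets unpacked into NestableJsonFacet, which is where the Long-vs-Integer cast is reported to blow up. Collection and field names are placeholders.)

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.request.json.JsonQueryRequest;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.client.solrj.response.json.NestableJsonFacet;

    public class JsonFacetSketch {
      public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
          // Plain JSON facet DSL: { categories : { type:terms, field:cat, limit:10 } }
          Map<String, Object> termsFacet = new HashMap<>();
          termsFacet.put("type", "terms");
          termsFacet.put("field", "cat");
          termsFacet.put("limit", 10);
          JsonQueryRequest request = new JsonQueryRequest()
              .setQuery("*:*")
              .withFacet("categories", termsFacet);
          QueryResponse response = request.process(client);
          // The parsing into NestableJsonFacet happens here, before any user code
          // touches the facet counts.
          NestableJsonFacet facets = response.getJsonFacetingResponse();
          System.out.println(facets.getBucketBasedFacets("categories").getBuckets());
        }
      }
    }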
Hi,
We have some custom code that extends SearchHandler to be able to:
- do an extra request
- merge/combine the original request and the extra request results
On Solr 5.x, our code was working very well; now with Solr 6.x we
have the following issue: the number of SolrI
Hi,
The custom code we have is something like this:
public class MySearchHandler extends SearchHandler {
    @Override
    public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
        SolrIndexSearcher searcher = req.getSearcher();
        try {
the past to
investigate a performance problem. But it might not help if the problem
only occurs at 165 queries per second (is that true?).
cheers -- Rick
On 2017-01-30 04:02 AM, Gerald Reinhart wrote:
Hello,
In addition to the following settings, we have tried to:
- force Jetty to
,
Gérald Reinhart
On 01/27/2017 11:22 AM, Gerald Reinhart wrote:
Hello,
We are migrating our platform
from
- Solr 5.4.1 hosted by a Tomcat
to
- Solr 5.4.1 standalone (hosted by Jetty)
=> Jetty is 15% slower than Tomcat in the same conditions.
Here are deta
Hello,
We are migrating our platform
from
- Solr 5.4.1 hosted by a Tomcat
to
- Solr 5.4.1 standalone (hosted by Jetty)
=> Jetty is 15% slower than Tomcat in the same conditions.
Here are details about the benchmarks:
Context:
- Index with 9 000 000 docume
ould in fact take longer than copying over all the index files from
leader)
On Thu, Oct 6, 2016 at 5:23 AM, Gerald Reinhart <gerald.reinh...@kelkoo.com> wrote:
Hello everyone,
Our Solr Cloud works very well for several months without any significant
changes: the traffic to s
Hello everyone,
Our Solr Cloud works very well for several months without any significant
changes: the traffic to serve is stable, no major release deployed...
But randomly, the Solr Cloud leader puts all the replicas in recovery at the
same time for no obvious reason.
Hence, we ca
toSoftCommit nodes in the solrconfig.xml.
Set them to reasonable values. The idea is that if you commit too often,
searchers will be warmed up and thrown away. If at any point in time you
get overlapping commits, there will be several searchers sitting on the
deck.
Dmitry
On Mon, Feb 29, 2016 at 4:20 PM, G
Hi,
In short: backup on a recovering index should fail.
We are using the backup command "http:// ...
/replication?command=backup&location=/tmp" against one server of the
cluster.
Most of the time there is no issue with this command.
But in some particular case, the server can be in
Hi,
We are facing an issue during a migration from Solr4 to Solr5.
Given
- migration from solr 4.10.4 to 5.4.1
- 2 collections
- cloud with one leader and several replicas
- in solrconfig.xml: maxWarmingSearchers=1
- no code change
When reloading a collection using /admin/collectio
@Override
public void add(Term term, int position) {
    String termText = term.text();
    // terms matching the synonym pivot marker are skipped, and later
    // positions are shifted back by the number of skipped terms
    if (!termText.matches(Constants.SYNONYM_PIVOT_ID_REGEX)) {
        super.add(term, position - deltaPosition);
    } else {
        deltaPosition++;
    }
}
}
On 02/03/2016 03:05 PM, Erik Hatcher wrote:
reate the query...
so it is not easy to override the behavior: we would need to
override/duplicate the getAliasedQuery() and getQuery() methods. We don't
really want to do this either.
So we don't really know where to go.
Thanks,
Gerald (I'm working with Elodie on the subject)
Ke
tter luck if we upgraded to Solr 4.1?
Thanks in advance.
--
Gerald Blanck
barometerIT
1331 Tyler Street NE, Suite 100
Minneapolis, MN 55413
612.208.2802
gerald.bla...@barometerit.com
Mikhail-
Let me know how to contribute a test case and I will put it on my to do
list.
When your many-to-many BlockJoin solution matures I would love to see it.
Thanks.
-Gerald
On Tue, Nov 13, 2012 at 11:52 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Gerald,
>
t-with.html
> http://blog.griddynamics.com/2012/08/block-join-query-performs.html
>
>
>
>
> On Tue, Nov 13, 2012 at 3:25 PM, Gerald Blanck <
> gerald.bla...@barometerit.com> wrote:
>
>> Thank you. I've not heard of BlockJoin. I will look into it today.
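(Aside, not part of the original exchange: in later Solr releases block join is exposed through the {!parent} query parser, so the parent-by-child lookup discussed in this thread can be issued straight from SolrJ. A sketch with made-up field names; doc_type:parent is a hypothetical marker matching all parent documents.)

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class BlockJoinSketch {
      public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
          // Return parent documents whose nested children match comment_t:performance.
          SolrQuery q = new SolrQuery("{!parent which=\"doc_type:parent\"}comment_t:performance");
          QueryResponse rsp = client.query(q);
          System.out.println("parents found: " + rsp.getResults().getNumFound());
        }
      }
    }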
> explosion, that's
> not a problem.
>
> And the join functionality isn't called "pseudo" for nothing. It was
> written for a specific
> use-case. It is often expensive, especially when the field being joined has
> many unique
> values.
>
> FWIW,
me why the code is written as it is? And if we were
to run with only the else block being executed, what type of adverse
impacts we might have?
Does anyone have other ideas on how to solve this issue?
Thanks in advance.
-Gerald
I don't really have any specific use case in mind; I was just wondering what I
could (or couldn't) do with custom functions
possible reasons for allowing that type of syntax include:
1. in general, to simplify queries, and make them more readable, by
eliminating the need for the _val_ hack (which
Thanks Grant
Am looking forward to the day when I can create a SOLR URL that looks
something like this:
http://mysolrserver:8080/solr/select?q=*:* AND
mycustomstrfunction(mysolrstrfield):'somestringvalue' AND
mycustomintfunction(mysolrintfield):[1 TO 100]
While investigating custom functions in Solr, I noticed LiteralValueSource,
which according to the one-line documentation will "Pass a the field value through
as a String, no matter the type".
How would I use such a value source? If it is a value source, I should be
able to use it in function queries fo
Figured this out about ten minutes after I posted the message, and it was much
simpler than I thought it would be.
I used the SumFloatFunction (which extends MultiFloatFunction) as a starting
point and was able to achieve what I was going for in my test: a simple
string-length function that returns the
using the NvlValueSourceParser example, I was able to create a custom
function that has two parameters; a valuesource (a solr field) and a string
literal. i.e.: myfunc(mysolrfield, "test")
It works well but is a pretty simple function.
What is the best way to implement a (more complex) cust
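(Not the poster's code, and the 1.4-era classes mentioned above have since moved around: a rough sketch of the same two-argument idea against a more recent, 8.x-style API. The parser reads one field ValueSource plus one quoted string literal and yields 1.0 when the field's string value equals the literal. Class name, function name and exact signatures are assumptions.)

    import java.io.IOException;
    import java.util.Map;
    import org.apache.lucene.index.LeafReaderContext;
    import org.apache.lucene.queries.function.FunctionValues;
    import org.apache.lucene.queries.function.ValueSource;
    import org.apache.lucene.queries.function.docvalues.FloatDocValues;
    import org.apache.solr.search.FunctionQParser;
    import org.apache.solr.search.SyntaxError;
    import org.apache.solr.search.ValueSourceParser;

    // Registered in solrconfig.xml under a name such as "myfunc", then used as
    // myfunc(mysolrfield, "test") in function queries.
    public class EqualsLiteralValueSourceParser extends ValueSourceParser {

      @Override
      public ValueSource parse(FunctionQParser fp) throws SyntaxError {
        final ValueSource field = fp.parseValueSource(); // first arg: a field/value source
        final String literal = fp.parseArg();            // second arg: a quoted string literal

        return new ValueSource() {
          @Override
          public FunctionValues getValues(Map context, LeafReaderContext readerContext) throws IOException {
            final FunctionValues vals = field.getValues(context, readerContext);
            return new FloatDocValues(this) {
              @Override
              public float floatVal(int doc) throws IOException {
                // 1.0 when the stringified field value equals the literal, else 0.0
                return literal.equals(vals.strVal(doc)) ? 1f : 0f;
              }
            };
          }

          @Override
          public String description() {
            return "myfunc(" + field.description() + ",\"" + literal + "\")";
          }

          @Override
          public boolean equals(Object other) {
            return other == this;
          }

          @Override
          public int hashCode() {
            return field.hashCode() * 31 + literal.hashCode();
          }
        };
      }
    }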
Collection myFL =
searcher.getReader().getFieldNames(IndexReader.FieldOption.ALL);
will return all fields in the schema (i.e. indexed, stored, and
indexed+stored).
Collection myFL =
searcher.getReader().getFieldNames(IndexReader.FieldOption.INDEXED );
likely returns all fields that are indexed (I
I was looking at SOLR-386 and thought I would try to create a custom
highlighter for something I was doing.
I created a class that looks something like this:
public class CustomOutputHighlighter extends DefaultSolrHighlighter {
@Override
public NamedList doHighlighting(DocL
AWESOME. It may take me some time to understand the regex pattern, but it worked.
And many thanks for looking into RegexTransformer.process(). Nice to know
that splitBy can't be used together with regex or replaceWith etc.
Many thanks Steve.
Thanks guys. Unfortunately, neither pattern works.
I tried various combos including these:
([^|]*)\|([^|]*) with replaceWith="$1"
(.*?)(\|.*) with replaceWith="$1"
(.*?)\|.* with and without replaceWith="$1"
(.*?)\| with and without replaceWith="$1"
As previously mentioned, I have tried many
forgot to mention that I DID use replaceWith="$1" in tests where the pattern
was like "(.*)(\|.*)" in order to only get the first group
I have some delimited data that I would like to import but am having issues
getting the regex patterns to work properly with Solr. The following is just
one example of the issues I have experienced.
The regex required for this example should be very simple (delimited data).
I have some regex pat
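(Editorial note: the patterns listed above do behave as expected in plain java.util.regex, so a quick standalone check like this one, with a made-up input line, helps separate a regex problem from a RegexTransformer configuration problem.)

    import java.util.regex.Pattern;

    public class PipeRegexCheck {
      public static void main(String[] args) {
        String raw = "first|second|third";             // made-up delimited value
        Pattern p = Pattern.compile("([^|]*)\\|.*");    // group 1 = text before the first pipe
        System.out.println(p.matcher(raw).replaceAll("$1"));  // prints: first
      }
    }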
Are you using the field name suffixes like Blacklight? xxx_text,
xxx_facet, xxx_string? With the xxx_string field you can request a
"begins with" search, but you may need some different search term
normalization than with a _text search.
Gerald Snyder
Florida Center f
Thanks for the answer and the alternative idea. --Gerald
Chris Hostetter wrote:
: Reverse alphabetical ordering. The option "index" provides alphabetical
: ordering.
be careful: "index" doesn't mean "alphabetical" -- it means the natural
ordering
Reverse alphabetical ordering. The option "index" provides
alphabetical ordering.
I have a year_facet field, that I would like to display in reverse order
(most recent years first). Perhaps there is some other way to
accomplish this.
Thanks.
--Gerald
Chris Hostetter wrote:
Is there any value for the "f.my_year_facet.facet.sort" parameter that
will return the facet values in descending order? So far I only see
"index" and "count" as the choices.
http://lucene.apache.org/solr/api/org/apache/solr/common/params/FacetParams.html#FA
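(One other way to get most-recent-first, since facet.sort only knows index and count: request index order and reverse the values client side. A sketch with a current SolrJ client; core and field names are placeholders, and note Hoss's caveat that index order is the natural term order, which for string year values is lexical.)

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.FacetField;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class ReverseFacetOrder {
      public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build()) {
          SolrQuery q = new SolrQuery("*:*");
          q.setFacet(true);
          q.addFacetField("year_facet");
          q.setFacetSort("index");   // index (lexical) order; fine for fixed-width year strings
          q.setFacetLimit(-1);       // fetch every value so the reversal sees the full list
          QueryResponse rsp = client.query(q);
          List<FacetField.Count> years = new ArrayList<>(rsp.getFacetField("year_facet").getValues());
          Collections.reverse(years); // most recent years first
          for (FacetField.Count c : years) {
            System.out.println(c.getName() + " (" + c.getCount() + ")");
          }
        }
      }
    }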
url in the request
However this means that you may need to set up a load balancer if a
shard has more than one host.
On Wed, Nov 26, 2008 at 12:25 AM, Gerald De Conto
<[EMAIL PROTECTED]> wrote:
> I wasn't able to find examples/anything via google so thought I'd ask:
>
>
>
I wasn't able to find examples/anything via google so thought I'd ask:
Say I want to implement a solution using distributed searches with many
"shards" in SOLR 1.3.0. Also, say there are too many shards to pass in
via the URL (dozens, hundreds, whatever)
Is there a way to specify in solrcon