It's time to enforce and document field type constraints
https://issues.apache.org/jira/browse/SOLR-14230.
On Mon, Jan 27, 2020 at 4:12 PM Doss wrote:
@ Alessandro Benedetti , Thanks for your input!
@ Mikhail Khludnev , I made docValues="true" for from & to and did a index
rotation, now the score join works perfectly! Saw 7x performance increase.
Thanks!
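For anyone following along, a minimal sketch of the score-join syntax involved (the field names and the wrapped query here are hypothetical, not taken from Doss's setup):

```python
# Building a score-join query string; "from", "to", and the wrapped
# query are placeholders for your actual join fields and filter.
from_field, to_field = "from", "to"
q = f"{{!join from={from_field} to={to_field} score=max}}type:parent"
print(q)  # {!join from=from to=to score=max}type:parent
```

The score join requires docValues on the join keys, which is why enabling docValues="true" and reindexing made the difference above.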
On Thu, Jan 23, 2020 at 9:53 PM Mikhail Khludnev wrote:
On Wed, Jan 22, 2020 at 4:27 PM Doss wrote:
> HI,
>
> SOLR version 8.3.1 (10 nodes), zookeeper ensemble (3 nodes)
>
> Read somewhere that the score join parser will be faster, but for me it
> produces no results. I am using string type fields for from and to.
>
That's odd. Can you try to enable
From the Join Query Parser code:

    // most of these statistics are only used for the enum method
    int fromSetSize;      // number of docs in the fromSet (that match the from query)
    long resultListDocs;  // total number of docs collected
    int fromTermCount;
    long fromTermTotalDf;
    int fromTerm
Ok, you can set it as a system property when starting Solr. Or you can change your
solrconfig.xml to either use the classic schema (schema.xml) or take out the
add-unknown-fields... from the update processor chain. You can also set a
cluster property IIRC. Better to use one of the supported options...
On Fri,
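As a sketch of two of those supported options (the URL and collection name are hypothetical; `update.autoCreateFields` is the property that controls the add-unknown-fields chain):

```
# 1) As a system property at startup:
bin/solr start -c -Dupdate.autoCreateFields=false

# 2) Via the Config API on a running collection:
curl http://localhost:8983/solr/mycollection/config -d \
  '{"set-user-property": {"update.autoCreateFields": "false"}}'
```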
Hi Jörn/Erick/Shawn, thanks for your responses.
@Jörn - much appreciated for the heads up on Kerberos authentication; it's
something we haven't really considered at the moment, though for production this
may well be the case. With regards to the Solr nodes, 3 is something we are looking
at as a minimum, when
If you have a properly secured cluster, e.g. with Kerberos, then you should not
update files in ZK directly. Use the corresponding Solr REST interfaces; then
you are also less likely to mess something up.
If you want to have HA you should have at least 3 Solr nodes and replicate the
collection to all t
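A sketch of creating such a replicated collection through the Collections API (host, names, and counts are illustrative):

```
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=1&replicationFactor=3"
```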
Having custom core.properties files is “fraught”. First of all, that file can
be re-written. Second, the collections ADDREPLICA command will create a new
core.properties file. Third, any mistakes you make when hand-editing the file
can have grave consequences.
What change exactly do you want to
On 9/3/2019 7:22 AM, Porritt, Ian wrote:
We have a schema which I have managed to upload to Zookeeper along with
the Solrconfig, how do I get the system to recognise both a lib/.jar
extension and a custom core.properties file? I bypassed the issue of the
core.properties by amending the update.a
On 8/7/2019 6:39 AM, Khare, Kushal (MIND) wrote:
Hello people!
Hope you are all doing well!
Well, I am new to the Solr server and want to use it for content search in one
of my applications.
I have already been working on it for quite a few days, and have the basics
done.
The issue that
Just what it says. Solr/Lucene like lots of file handles, I regularly
see several thousand. If you run out of file handles Solr stops
working.
Ditto processes. Solr in particular spawns a lot of threads,
particularly when handling many incoming requests through Jetty. If
you exceed the limit, requ
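The limits Erick describes can be inspected from Python's standard library on the machine in question (a sketch; it reports the limits of the current user, which is what matters if Solr runs as that user):

```python
# Inspect the open-file and process/thread limits that bite Solr
# when set too low.
import resource

nofile_soft, nofile_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
nproc_soft, nproc_hard = resource.getrlimit(resource.RLIMIT_NPROC)
print("open files:", nofile_soft, nofile_hard)
print("processes/threads:", nproc_soft, nproc_hard)
```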
A very high rate of indexing documents could cause heap usage to go high
(all temporary objects getting created are in JVM memory and with very high
rate heap utilization may go high)
Having caches not sized/set correctly would also result in high JVM usage
since, as searches are happening, it wil
*:*
10
*,score
on
1
Thanks & Best Regards,
Lulu Paul
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: 27 March 2018 18:01
To: solr-user@lucene.apache.org
Subject: Re: query regarding Solr partial se
This is as much about your schema as it is about your query parser usage.
What’s parsed_query say in your debug=true output? What query parser are you
using? If edismax, check qf/pf/mm settings, etc.
Erik
> On Mar 27, 2018, at 9:56 AM, Paul, Lulu wrote:
>
> Hi ,
>
> Below is m
On 5/11/2017 12:26 PM, Deepak Mali wrote:
> if there is any way to set threshold memory to the solr indexing process.
> My computer is hung and the indexing process is killed by the OS.
>
> So , I was wondering if there is any way to set threshold memory usage to
> solr indexing process in linux en
"The reason I asked about the Cache sizes is I had read that
configuring the Cache sizes of Solr does not provide you enough
benefits"
Where is "somewhere"? Because this is simply wrong as a blanket statement.
filterCache can have tremendous impact on query performance, depending
on the how m
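For illustration, a filterCache is configured in solrconfig.xml like this (sizes are placeholders, not recommendations, and the cache implementation class has varied across Solr versions):

```xml
<!-- solrconfig.xml: each cached filter is a bitset over all documents -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="128"/>
```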
Hi Shawn,
Thanks for the reply, it is useful. The reason I asked about the Cache
sizes is I had read that configuring the Cache sizes of Solr does not
provide you enough benefits, instead it is better to provide a lot of
memory space to the Solr outside the JVM heap.
Is it true that in general the
On 5/11/2017 4:58 PM, Suresh Pendap wrote:
> This question might have been asked on the solr user mailing list earlier.
> Solr has four different types of Cache DocumentCache, QueryResultCache,
> FieldValueCache and FilterQueryCache
> I would like to know which of these Caches are off heap cache?
Specifically answering the _indexing_ part of the question, in
solrconfig.xml there's a ramBufferSizeMB (from memory) that governs how
much RAM is used while indexing before flushing to disk. I think the
default is 100MB or so.
On Thu, May 11, 2017 at 2:34 PM, Rick Leir wrote:
> You could limit th
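The setting Erick recalls looks like this in solrconfig.xml, and 100 is indeed the shipped default:

```xml
<!-- Flush the in-memory index buffer to disk once it reaches ~100MB -->
<ramBufferSizeMB>100</ramBufferSizeMB>
```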
You could limit the Java heap, but that is counter productive. You should have
a look at how much heap it uses. But let Solr use what it needs. My guess is
that your -Xmx or -Xms is too low at the moment.
Apart from that, Solr will mmap large files. When there is not enough RAM for
this, any sw
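For reference, with the standard start scripts the heap is capped in solr.in.sh (the value below is illustrative; SOLR_HEAP sets both -Xms and -Xmx):

```shell
# solr.in.sh: sets both the minimum and maximum JVM heap to the same value
SOLR_HEAP="2g"
```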
It would be better to implement such logic as a separate process - watching
for events or reading from a stream, and then feeding discrete requests (or
modest-sized batches of documents) to Solr in parallel with such processing.
-- Jack Krupansky
On Sat, Apr 11, 2015 at 1:49 AM, vishal dsouza
wr
570806 - simon - 2014-02-22 08:36:25]
>
>
> Thanks & Regards,
> Arjun M
>
>
> -Original Message-----
> From: ext Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
> Sent: Monday, August 04, 2014 11:08 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Q
In order for Atomic Updates to work, all your fields must be stored in the
index. In this case, ClusterId is not stored and is therefore lost on an
atomic update.
--
Regards,
Shalin Shekhar Mangar.
On Mon, Aug 4, 2014 at 11:00 AM, M, Arjun (NSN - IN/Bangalore) <
arju...@nsn.com> wrote:
> Hi,
>
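A sketch of the atomic-update request body Shalin is describing (document id and values are hypothetical): the `set` operation rewrites the whole document from stored field values, which is why every field must be stored, or non-stored fields like ClusterId are lost.

```python
import json

# 'set' replaces one field; Solr reconstructs the rest of the document
# from stored fields, so any non-stored field's value disappears.
update = [{"id": "doc-1", "ClusterId": {"set": "cluster-42"}}]
payload = json.dumps(update)
print(payload)
```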
lucene.apache.org'
Cc: 'Sawant, Amit2 '
Subject: RE: Query regarding solr search
Hi,
I understand that in the solr search considering the field type as text_en
and not as int for the fields.
So how do I convert the field type of a particular field in solr XML as
int so that I can op
type text.
Thanks,
Leena Jawale
From: Leena Jawale
Sent: Tuesday, October 30, 2012 4:58 PM
To: 'solr-user@lucene.apache.org'
Cc: 'Sawant, Amit2 '
Subject: RE: Query regarding solr search
Hi,
I understand that in the solr search considering the field type as text_en
and not as i
-- Jack Krupansky
-Original Message-
From: Leena Jawale
Sent: Tuesday, October 30, 2012 7:27 AM
To: solr-user@lucene.apache.org
Cc: Sawant, Amit2
Subject: RE: Query regarding solr search
Hi,
I understand that in the solr search considering the field type as text_en
and not as int
On 30 October 2012 16:57, Leena Jawale wrote:
> Hi,
>
> I understand that in the solr search considering the field type as text_en
> and not as int for the fields.
> So how do I convert the field type of a particular field in solr XML as int
> so that I can operate that
> field for range queries
Hi,
I understand that the Solr search is treating the field type as text_en and
not as int for the fields.
So how do I convert the field type of a particular field in the Solr XML to int so
that I can use that
field for range queries in Solr?
Thanks,
Leena Jawale
From: Leena Jawale
Sent: T
Hi,
Let me clarify the situation here in details.
The default sort which WebSphere Commerce provides is based on the name & price
of an item, but we have unique values for every item. Hence sorting
works fine either as integer or as string, but during preprocessing we generate
some temporary tables
Hi Uma,
I don't understand what you're looking for.
Do you need to sort on fields of type double with precision 2 or what?
In your example you were talking about
1 2 3 4 5 6 7 8 9 10 11 12 13 14.
Regards,
Bernd
On 06.01.2012 07:11, umaswayam wrote:
Hi Bernd,
The column which comes f
Hi Bernd,
The column which comes from the database is a string only, and that is
populated by default. How do I convert it to double, as the format is 1.00, 2.00, 3.00
in the database? So I need it converted to double only.
Thanks,
Uma Shankar
Hi,
I suggest using the following fieldType for your field:
Regards
Bernd
On 04.01.2012 14:40, umaswayam wrote:
Hi,
We want to sort our records based on some sequence which is like
1 2 3 4 5 6 7 8 9 10 11 12 13 14.
I am using Websphere commerce to retrieve data using solr. When we a
Hi Uma,
Have you declared the type as integer for this field? In case, type is
some form of String (text, string etc.) the sorting will happen
lexicographically.
-param
On 1/4/12 8:40 AM, "umaswayam" wrote:
>Hi,
>
>We want to sort our records based on some sequence which is like
>1 2 3 4 5 6 7
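The lexicographic-vs-numeric behaviour param describes is easy to reproduce outside Solr; a small sketch:

```python
# Sorting the same sequence as strings vs. as numbers
values = ["1", "2", "3", "10", "11", "12", "13", "14"]
print(sorted(values))           # string sort: '1' < '10' < '11' < ... < '2' < '3'
print(sorted(values, key=int))  # numeric sort: 1, 2, 3, 10, 11, ...
```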
You're using a string field type, I imagine. Use a numeric field type instead.
wc-search.xml? That's not a solr config file; must be something specific to
your app.
Erik
On Jan 4, 2012, at 08:40 , umaswayam wrote:
> Hi,
>
> We want to sort our records based on some sequence which i
Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, August 01, 2011 3:21 AM
To: solr-user@lucene.apache.org
Subject: Re: Query regarding Solr ClassCastException on client side for Java
: I am using apache solr 3.3 on windows 7 with JDK 1.6.
First question: what
: I am using apache solr 3.3 on windows 7 with JDK 1.6.
First question: what version of SolrJ are you using? is it also 3.3?
: At the line, QueryResponse queryResponse = solrServer.query(query1);
: Class Cast Exception is thrown and no data is fetched. I suspect this
: has something to do wit
In Solr 1.4.1, for getting the "distinct facet terms count" across shards,
the piece of code added for getting the count of distinct facet terms across the
distributed process is as follows:
Class: FacetComponent.java
Function: finishStage(ResponseBuilder rb)
for (DistribFieldFacet dff : fi.
No such issues. Successfully integrated with 1.4.1 and it works across a
single index.
With the f.2.facet.numFacetTerms=1 parameter it will give the distinct count
result;
with the f.2.facet.numFacetTerms=2 parameter it will give counts as well as
results for facets.
But this is working only across singl
I am pretty sure it does not yet support distributed shards..
But the patch was written for 4.0... So there might be issues with running
it on 1.4.1.
On 5/26/11 11:08 PM, "rajini maski" wrote:
> The patch solr 2242 for getting count of distinct facet terms doesn't
>work for distributedProce
Erick,
Thank you. I could fix the problem. Started from scratch considering your
advice and been successful. Thanks a lot.
Rajani Maski
On Tue, Apr 26, 2011 at 5:28 PM, Erick Erickson wrote:
> Sorry, but there's too much here to debug remotely. I strongly advise you
> back way up. Undo (but
Sorry, but there's too much here to debug remotely. I strongly advise you
back way up. Undo (but save) all your changes. Start by doing
the simplest thing you can, just get a dummy class in place and
get it called. Perhaps create a really dumb logger method that
opens a text file, writes a messa
Thanks Erick. I have added my replies to the points you did mention. I am
somewhere going wrong. I guess do I need to club both the jars or something
? If yes, how do i do that? I have no much idea about java and jar files.
Please guide me here.
A couple of things to try.
1> when you do a 'jar -t
Looking at things more carefully, it may be one of your dependent classes
that's not being found.
A couple of things to try.
1> when you do a 'jar -tfv', you should see
output like:
1183 Sun Jun 06 01:31:14 EDT 2010
org/apache/lucene/analysis/sinks/TokenTypeSinkTokenizer.class
and your stateme
Erick,
Thanks. It was actually a copy mistake. Anyway, I did a redo of all the
below mentioned steps. I had given the class name as
I did it again now following few different steps following this link :
http://help.eclipse.org/helios/index.jsp?topic=/org.eclipse.jdt.doc.user/tasks/tasks-32.ht
First, I appreciate your writeup of the problem; it's very helpful when people
take the time to put in the details.
I can't reconcile these two things:
{{{
as org.apache.solr.common.SolrException: Error loading class
'pointcross.orchSynonymFilterFactory' at}}}
This seems to indicate that your
Hello,
Not quite "lines", but look at the various Highlighter options on the Wiki and
in the example solrconfig.xml.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Silent Surfer
> To: Solr User
> Sent: Tuesday, June 23, 2009 11:04:53