Q1. Is it possible to pass *analyzed* content to the
public abstract class Signature {
  public void init(SolrParams nl) { }
  public abstract String calculate(String content);
}
Q2. The calculate() method uses the concatenated fields from name,features,cat.
Is there any mechanism I could build "fi
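Since the dedup chain hands calculate() the raw, un-analyzed field values (the crux of Q1), one option is to do your own lightweight normalization inside calculate() itself. A minimal sketch against the Signature API quoted above (newer Solr versions changed this API to add()/getSignature()); the normalization rules and the MD5 hashing here are illustrative assumptions, not Solr's built-in MD5Signature:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.update.processor.Signature;

// Hypothetical Signature subclass: normalizes the concatenated field
// values itself, since they arrive un-analyzed.
public class NormalizingSignature extends Signature {

  @Override
  public void init(SolrParams params) {
    // read custom params here if needed
  }

  @Override
  public String calculate(String content) {
    // poor man's "analysis": lowercase and collapse whitespace
    String normalized = content.toLowerCase().replaceAll("\\s+", " ").trim();
    try {
      MessageDigest md5 = MessageDigest.getInstance("MD5");
      byte[] hash = md5.digest(normalized.getBytes(StandardCharsets.UTF_8));
      StringBuilder hex = new StringBuilder();
      for (byte b : hash) {
        hex.append(String.format("%02x", b));
      }
      return hex.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new RuntimeException(e);
    }
  }
}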
Thanks Hoss,
Externalizing this part is exactly the path we are exploring now, and not
only for this reason.
We already started testing Hadoop SequenceFile for write ahead log for
updates/deletes.
SequenceFile supports append now (simply great!). It was a pain to
have to add hadoop into the mix for "
Your best bet is MMapDirectoryFactory; you can come very close to the
performance of the RAMDirectory. Unfortunately, a
Master_on_disk->Slaves_in_ram type of setup is not possible using
solr.
We are moving our architecture to solr at the moment, and this is one
of the "missings" we have
Quick question,
Is there a way with solr to conditionally update a document on its unique
id? Meaning: default add behavior if the id is not already in the index, and
*not to touch the index* if it is already there.
Deletes are not important (no sync issues).
I am asking because I noticed with deduplication turned on,
i
...Using RAMDirectory really does not help performance...
I kind of agree, but in my experience with lucene, there are cases
where RAMDirectory helps a lot, with all its drawbacks (huge heap and
gc() tuning).
We had very good experience with MMAP on average, but moving to
RAMDirectory with prop
Wed, 2011-06-29 at 09:35 +0200, eks dev wrote:
>> In MMAP, you need to have really smart warm up (MMAP) to beat IO
>> quirks, for RAMDir you need to tune gc(), choose your poison :)
>
> Other alternatives are operating system RAM disks (avoids the GC
> problem) and using SSDs (nearly
at 2:01 AM, eks dev wrote:
>
>> Quick question,
>> Is there a way with solr to conditionally update a document on its unique
>> id? Meaning: default add behavior if the id is not already in the index, and
>> *not to touch the index* if it is already there.
>>
>> Deletes are no
On Wed, Jun 29, 2011 at 4:32 PM, eks dev wrote:
>> req.getSearcher().getFirstMatch(t) != -1;
>
> Yep, this is currently the fastest option we have.
>
> -Yonik
> http://www.lucidimagination.com
>
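A sketch of how that check could sit in an update processor that adds a document only if its id is absent; the class is hypothetical (factory wiring omitted) and assumes the Solr 3.x/4.x-era APIs mentioned in the thread (SolrQueryRequest.getSearcher(), SolrIndexSearcher.getFirstMatch(Term)):

import java.io.IOException;
import org.apache.lucene.index.Term;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical "add only if absent" processor: getFirstMatch(term)
// returns -1 when the unique id is not yet in the index.
public class AddIfAbsentProcessor extends UpdateRequestProcessor {

  private final SolrQueryRequest req;
  private final String idField;

  public AddIfAbsentProcessor(SolrQueryRequest req, String idField,
                              UpdateRequestProcessor next) {
    super(next);
    this.req = req;
    this.idField = idField;
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    String id = cmd.getSolrInputDocument().getFieldValue(idField).toString();
    if (req.getSearcher().getFirstMatch(new Term(idField, id)) != -1) {
      return; // id already indexed: leave the index untouched
    }
    super.processAdd(cmd);
  }
}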
Well, Lucid released "LucidWorks Enterprise"
with "Complete Apache Solr 4.x Release Integrated and tested with
powerful enhancements".
Whatever that means for solr 4.0.
On Tue, Aug 2, 2011 at 11:10 PM, David Smiley (@MITRE.org)
wrote:
> My best guess (and it is just a guess) is between December
> they have yet to
> bump that to trunk/4.x; it was only recently updated to 3.2.
>
> On Aug 2, 2011, at 5:26 PM, eks dev wrote:
>
>> Well, Lucid released "LucidWorks Enterprise"
>> with " Complete Apache Solr 4.x Release Integrated and tested with
>> po
I would appreciate some clarifications about DIH
I do not have a reliable timestamp, but I do have an atomic sequence that
only grows on inserts/changes.
You can think of it as a timestamp in some funky timezone unrelated to
wall-clock time; it is an integer type.
Is DIH keeping track of the MAX(comm
ne to do, but I really do not
know a simple and fast way...
cheers,
eks
On Sat, Aug 6, 2011 at 8:32 PM, Shawn Heisey wrote:
> On 8/6/2011 8:49 AM, eks dev wrote:
>>
>> I would appreciate some clarifications about DIH
>>
>> I do not have reliable timestamp, but I do have ato
Thinking aloud and grateful for sparing ..
I need to support a high commit rate (low update latency) in a master
slave setup, and I have a bad feeling about it, even with warmup disabled
and everything that slows down refresh stripped out.
I will try it anyway, but I started thinking about "back
Hi All,
I am trying to tune ramBufferSizeMB and merge factor for my setup.
So, I enabled the Lucene IndexWriter's infoStream logging and started monitoring
the data folder where index files are created.
I started my test with following
Heap: 3GB
Solr 1.4.1,
Index Size = 20 GB,
ramBufferSizeMB=856
Merge Fac
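For reference, Solr's ramBufferSizeMB and mergeFactor map onto Lucene's IndexWriterConfig; a sketch of the same knobs at the Lucene level, using modern Lucene APIs for illustration (Solr 1.4.1's bundled Lucene exposed them differently), with the poster's buffer size and an assumed merge factor and index path:

import java.io.IOException;
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogByteSizeMergePolicy;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.PrintStreamInfoStream;

public class TuningSketch {
  public static void main(String[] args) throws IOException {
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
    // flush in-memory segments once the buffer reaches this size
    config.setRAMBufferSizeMB(856.0);
    // how many segments get merged at once
    LogByteSizeMergePolicy mergePolicy = new LogByteSizeMergePolicy();
    mergePolicy.setMergeFactor(10); // assumed value
    config.setMergePolicy(mergePolicy);
    // the IndexWriter log info stream being monitored in this thread
    config.setInfoStream(new PrintStreamInfoStream(System.out));
    try (IndexWriter writer = new IndexWriter(
        FSDirectory.open(Paths.get("data/index")), config)) {
      writer.commit();
    }
  }
}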
Hi Guys
I am also seeing this problem.
I am using SOLR 4 from Trunk and seeing this issue repeat every day.
Any inputs about how to resolve this would be great
-Saroj
On Thu, Jul 26, 2012 at 8:33 AM, Karthick Duraisamy Soundararaj <
karthick.soundara...@gmail.com> wrote:
> Did you find any m
it was from 4/11/12
-Saroj
On Thu, Jul 26, 2012 at 4:21 PM, Mark Miller wrote:
>
> On Jul 26, 2012, at 3:18 PM, roz dev wrote:
>
> > Hi Guys
> >
> > I am also seeing this problem.
> >
> > I am using SOLR 4 from Trunk and seeing this issue repeat eve
5:12 PM, Mark Miller wrote:
> I'd take a look at this issue:
> https://issues.apache.org/jira/browse/SOLR-3392
>
> Fixed late April.
>
> On Jul 26, 2012, at 7:41 PM, roz dev wrote:
>
> > it was from 4/11/12
> >
> > -Saroj
> >
> > On Thu, Jul 26, 2012 at 6:15 PM, Karthick Duraisamy Soundararaj
> wrote:
> > Mark,
> > We use solr 3.6.0 on freebsd 9. Over a period of time, it
> > accumulates lots of space!
> >
> > On Thu, Jul 26, 2012 at 8:47 PM, roz dev wrote:
> >
> >
Hi All
I am trying to find out the reason for very high memory use, so I ran
jmap -histo.
It is showing that I have too many instances of org.tartarus.snowball.Among
Any ideas what this is for and why I am getting so many of them?
num   #instances   #bytes   class description
--------------------------------------------------
Locked ownable synchronizers:
- None
On Fri, Jul 27, 2012 at 5:19 AM, Alexandre Rafalovitch
wrote:
> Try taking a couple of thread dumps and see where in the stack the
> snowball classes show up. That might give you a clue.
>
> Did you customize the parameters to the stemmer? If so, may
.html
Any inputs are welcome
-Saroj
On Mon, Jul 30, 2012 at 4:39 PM, roz dev wrote:
> I did take couple of thread dumps and they seem to be fine
>
> Heap dump is huge - close to 15GB
>
> I am having hard time to analyze that heap dump
>
> 2012-07-30 16:07:32
> Full thread
Hi All
I am using Solr 4 from trunk and using it with Tomcat 6. I am noticing that
when we are indexing lots of data with 16 concurrent threads, the heap grows
continuously. It remains high and ultimately most of the objects end up
being moved to Old Gen. Eventually, Old Gen also fills up and we start
You are referring to a very old thread.
Did you take any heap dump and thread dump? They can help you get more
insight.
-Saroj
On Tue, Jul 31, 2012 at 9:04 AM, Suneel wrote:
> Hello Kevin,
>
> I am also facing the same problem. After a few hours or a few days my solr
> server crashes.
> I try t
Thanks Robert for these inputs.
Since we do not really need the Snowball analyzer for this field, we will not use
it for now. If this still does not address our issue, we will tweak the thread
pool as per eks dev's suggestion - I am a bit hesitant to make this change yet as
we would be reducing the thread pool which
Changing the Subject Line to make it easier to understand the topic of the
message
Is there any plan to expose IndexDocValues as part of Solr 4?
Any thoughts?
-Saroj
On Thu, Aug 2, 2012 at 5:10 PM, roz dev wrote:
> As we all know, FieldCache can be costly if we have lots of documents
I have seen this happening
We retry and that works. Is your solr server stalled?
On Mon, Sep 24, 2012 at 4:50 PM, balaji.gandhi
wrote:
> Hi,
>
> I am encountering this error randomly (under load) when posting to Solr
> using SolrJ.
>
> Has anyone encountered a similar error?
>
> org.apache.solr.
Transformer is great for augmenting documents before shipping the response,
but what would be a way to prevent a document from being delivered?
I have some search components that draw some conclusions after search
(duplicate removal, clustering) and one Augmenter (solr Transformer)
to shape the response
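For context, an augmenting transformer looks roughly like this; a sketch assuming the Solr 4-era DocTransformer API (getName()/transform(SolrDocument, int)) with made-up field names. Note it can only reshape documents, not drop them, which is why the filtering question above arises:

import org.apache.solr.common.SolrDocument;
import org.apache.solr.response.transform.DocTransformer;

// Hypothetical augmenter: adds a computed field to each returned document.
public class ScoreLabelTransformer extends DocTransformer {

  @Override
  public String getName() {
    return "scoreLabel";
  }

  @Override
  public void transform(SolrDocument doc, int docid) {
    Object popularity = doc.getFieldValue("popularity"); // assumed field
    boolean hot = popularity instanceof Number
        && ((Number) popularity).intValue() > 100;
    doc.setField("score_label", hot ? "hot" : "normal");
  }
}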
Thanks Hoss,
I probably did not formulate the question properly, but you gave me an answer.
I do it already in a SearchComponent; I just wanted to centralise this
control of the depth and width of the response in a single place in the
code [style={minimal, verbose, full...}].
It just sounds logical t
Thanks Hoss.
Yes, that approach would work as I can change the query.
Is there a way to extend the Edismax handler to read a config file at
startup and then use some event like commit to instruct the edismax handler to
re-read the config file?
That way, I can ensure that my boost params are just on
ing
> them on commit sounds like a way to make for a very confusing application!
>
> But if you really need to re-read all this info on a running system,
> consider the core admin RELOAD command.
>
> Best
> Erick
>
>
> On Mon, Nov 5, 2012 at 8:43 PM, roz dev wrote:
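For completeness, the "re-read on commit" idea from the original question maps onto Solr's event-listener hook, though as Erick notes, RELOAD is the simpler route. A sketch, assuming the SolrEventListener interface (init/postCommit/newSearcher) and a made-up properties file for the boost params:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.core.SolrEventListener;
import org.apache.solr.search.SolrIndexSearcher;

// Hypothetical listener that re-reads boost params whenever a commit
// happens; registered via <listener event="postCommit" .../> in solrconfig.
public class BoostConfigReloader implements SolrEventListener {

  private volatile Properties boosts = new Properties();

  @Override
  public void init(NamedList args) { reload(); }

  @Override
  public void postCommit() { reload(); }

  @Override
  public void postSoftCommit() { reload(); }

  @Override
  public void newSearcher(SolrIndexSearcher newSearcher,
                          SolrIndexSearcher currentSearcher) {
    reload();
  }

  private void reload() {
    try (InputStream in = Files.newInputStream(Paths.get("conf/boosts.properties"))) {
      Properties fresh = new Properties();
      fresh.load(in);
      boosts = fresh; // swap atomically; readers see old or new, never partial
    } catch (Exception e) {
      // keep the previous boosts if the file is unreadable
    }
  }
}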
eally, if using
> index-time synonyms. And if you're using search-time synonyms you have
> multi-word synonym issue described on the Wiki.
>
> Otis
> --
> Performance Monitoring - http://sematext.com/spm
> On Nov 6, 2012 11:02 PM, "roz dev" wrote:
>
> > Eri
Hi everybody,
I'm new to Solr, and have been reading through documentation off-and-on for
days, but still have some unanswered basic/fundamental questions that have a
huge impact on my implementation approach.
I am thinking of moving my company's web app's main search engine over to
Solr. My goal
Hi Otis,
First off, thanks for your complete reply! It certainly has a lot of
good info in it.
To address some of the questions you asked, please see below:
On Fri, Sep 26, 2008 at 1:36 AM, Otis Gospodnetic <
[EMAIL PROTECTED]> wrote:
> Hi,
>
> Your questions don't have simple answers,
Hi Otis,
Ah, okay those are all great pointers, thanks. I will certainly have to
do more research, and then I'll certainly have more questions later.
I have thought of using some kind of non-lucene/solr distributed cache
to narrow-down the online search... but the problem comes when ther
Hi everybody,
With regard to RSS feeds; I noticed that there's a stylesheet to
convert the output of a Solr search into RSS format in the
example\solr\conf\xslt directory. My questions are:
1) Where can I find docs on how to get Solr to feed RSS directly?
2) Correct me if I'm wrong here: No
Hello,
I'm indexing HTML files and would like the highlighted fragment to return the
entire element containing the highlight.
For example, one of my documents is
...here's some text...
When the query is "some text" I would like the fragment to be
here's some text
I did see this thr
Hi all,
I have an implementation of solr (rev.708837) running on tomcat 6.
Approx 600,000 docs, 2 fairly content-heavy text fields, between 4 and 7
facets (depending on what our front end is requesting, and mostly low unique
values)
1GB of memory allocated, generally I do not see it using all of t
or lots of CPU
> being used during those times?
>
>
> Otis --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
> > From: hbi dev
> > To: solr-user@lucene.apache.org
> > Sent: Thursday, January 22, 2009 6:39:3
Hi wojtekpia,
That's interesting, I shall be looking into this over the weekend so I shall
look at the GC also. I was briefly reading about GC last night, am I right
in thinking it could be affected by what version of the jvm I'm using
(1.5.0.8), and also what type of Collector is set? What collec
Hi All,
I have a question regarding the dismax handler and minimum match (mm=)
I have an index for which we are setting the default operator to AND.
Am I right in saying that, using the dismax handler, the default operator in
the schema file is effectively ignored? (This is the conclusion I've made
fr
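That conclusion matches how dismax works: it ignores the schema's default operator, and mm (minimum-should-match) plays that role instead, so AND-like behavior comes from mm=100%. A small SolrJ illustration (field list assumed):

import org.apache.solr.client.solrj.SolrQuery;

public class DismaxMmExample {
  public static void main(String[] args) {
    SolrQuery query = new SolrQuery("red leather sofa");
    query.set("defType", "dismax");
    query.set("qf", "name features cat"); // assumed field list
    // mm stands in for the default operator: 100% ~= AND
    query.set("mm", "100%");
  }
}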
Thanks Yonik.
If it is using the enum method then it should also be caching the facet query
for every indexed value of the facet fields.
1) Do I need to add filterCache and hashDocSet entries to solrconfig.xml
for this caching to happen?
I did not find any noticeable difference in query time
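On 1): facet.method=enum computes a filter per indexed value, and those filters are what land in the filterCache, so a filterCache entry in solrconfig.xml is needed for the caching to pay off (hashDocSet only tunes how small doc sets are represented). A SolrJ sketch forcing enum for one field (field name assumed):

import org.apache.solr.client.solrj.SolrQuery;

public class EnumFacetExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    q.setFacet(true);
    q.addFacetField("cat"); // assumed facet field
    // per-field override: each indexed value of "cat" becomes a filter
    // that can be cached in the filterCache
    q.set("f.cat.facet.method", "enum");
  }
}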
+1 vote here. We are based in London.
Regards
Waseem
On Mon, May 18, 2009 at 11:42 AM, Toby Cole wrote:
> I know of a few people who'd be interested, we've got quite a few projects
> using Solr down here in Brighton.
>
>
> On 14 May 2009, at 10:41, Fergus McMenemie wrote:
>
> I was wondering if
I'm running,
solr -version
8.6.3
on
uname -rm
5.8.13-200.fc32.x86_64 x86_64
grep _NAME /etc/os-release
PRETTY_NAME="Fedora 32 (Server Edition)"
CPE_NAME="cpe:/o:fedoraproject:fedora:32"
with
java
Hello,
I want to subscribe to the solr mailing list.
When I sent a request, I got the following message.
Can you add this email address to the mailing list please?
Thank you.
Louis Choi
---
This is the mail system at host n3.nabble.com.
I'm sorry to have to inform you that your message could not
[Happy New Year to all]
Is everything mentioned/recommended in
https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
still valid for Solr 5.x?
- Clemens
mailto:erickerick...@gmail.com]
Sent: Monday, January 4, 2016 17:36
To: solr-user
Subject: Re: Hard commits, soft commits and transaction logs
As far as I know. If you see anything different, let me know and we'll see if
we can update it.
Best,
Erick
On Mon, Jan 4, 2016 at 1:34 AM, Cl
too much time on a
significantly-sized corpus to be feasible. At least that's my fear, I'm mostly
advising you to check this before even trying to scale up.
Best,
Erick
On Wed, Feb 3, 2016 at 11:07 PM, Clemens Wyss DEV wrote:
> Sorry for coming back to this topic:
> You (Erick
Environment: Solr 5.4.1
I am facing OOMs when batch-updating via SolrJ. I am seeing approx 30'000(!)
SolrInputDocument instances, although my batch size is 100. I.e. I call
solrClient.add( documents ) for every 100 documents only. So I'd expect to see
at most 100 SolrInputDocuments in memory at any
ore 'fust-1-fr_CH_1' -3-thread-1 Thread
And there is another byte[] with 260MB.
The logic is somewhat this:
SolrClient solrClient = new HttpSolrClient( coreUrl );
while ( got more elements to index )
{
batch = create 100 SolrInputDocuments
solrClient.add( batch )
}
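A runnable version of that loop, assuming a 5.x-era HttpSolrClient and placeholder core URL and fields; clearing the batch after each add is what keeps already-sent SolrInputDocuments collectable on the client:

import java.util.ArrayList;
import java.util.Collection;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient solrClient =
             new HttpSolrClient("http://localhost:8983/solr/core1")) {
      Collection<SolrInputDocument> batch = new ArrayList<>();
      for (int id = 0; id < 10_000; id++) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", Integer.toString(id));
        doc.addField("name", "element " + id);
        batch.add(doc);
        if (batch.size() == 100) {   // the poster's batch size
          solrClient.add(batch);     // ship the batch to Solr
          batch.clear();             // drop client-side references
        }
      }
      if (!batch.isEmpty()) {
        solrClient.add(batch);       // flush the remainder
      }
    }
  }
}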
-----Original Message-----
ing-transaction-logs-softcommit-and-commit-in-sorlcloud/
'Be very careful committing from the client! In fact, don’t do it'
I would not want to commit "just to flush a client side buffer" ...
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent
rk/is the issue.
java -Xmx4096m
Thanks,
Susheel
On Fri, Feb 19, 2016 at 6:25 AM, Clemens Wyss DEV
wrote:
> Guessing on ;) :
> must I commit after every "batch", in order to force a flushing of
> org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream et al?
>
ceProblems
Thanks,
Susheel
On Fri, Feb 19, 2016 at 9:17 AM, Clemens Wyss DEV
wrote:
> > increase heap size
> this is a "workaround"
>
> Doesn't SolrClient free part of its buffer? At least documents it has
> sent to the Solr-Server?
>
> -
hen batchupdating from SolrJ
On 2/19/2016 3:08 AM, Clemens Wyss DEV wrote:
> The logic is somewhat this:
>
> SolrClient solrClient = new HttpSolrClient( coreUrl ); while ( got
> more elements to index ) {
> batch = create 100 SolrInputDocuments
> solrClient.add( batch )
> }
> solrClient.add( documents ); // [2]
is of course:
solrClient.add( batch ); // [2]
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Monday, February 22, 2016 09:55
To: solr-user@lucene.apache.org
Subject: Re: Re: OutOfMemory when batchupdating f
: Re: Re: Re: OutOfMemory when batchupdating from SolrJ
On 2/22/2016 1:55 AM, Clemens Wyss DEV wrote:
> SolrClient solrClient = getSolrClient( coreName, true );
> Collection<SolrInputDocument> batch = new
> ArrayList<SolrInputDocument>(); while ( elements.hasNext() ) {
> IIndexableElement elem = elements.next();
Looks like not. I get to see
'can not sort on a field which is neither indexed nor has doc values: '
- Clemens
er
Betreff: Re: Is it possible to sort on a BooleanField?
Please share your schema.
On Thu, Dec 3, 2015 at 11:28 AM, Clemens Wyss DEV
wrote:
> Looks like not. I get to see
> 'can not sort on a field which is neither indexed nor has doc values:
> '
>
> - Clemens
>
Does Solr provide a (Java) constant for "the name of the version field" (i.e.
_version_)?
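If no suitable constant turns up in the SolrJ version at hand, defining a local one is harmless, since the reserved field name itself is fixed; a trivial sketch:

// Local constant; no assumption about what SolrJ itself exposes.
public final class SolrFieldNames {
  public static final String VERSION_FIELD = "_version_";
  private SolrFieldNames() { }
}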
Just about to upgrade to Solr5. My UnitTests fail:
13:50:41.178 [main] ERROR org.apache.solr.core.CoreContainer - Error creating
core [1-de_CH]: null
java.lang.ExceptionInInitializerError: null
at
org.apache.solr.core.SolrConfig.getConfigOverlay(SolrConfig.java:359)
~[solr-core.jar:5.0.0
older version of noggit around. You need
version 0.6.
Alan Woodward
www.flax.co.uk
On 23 Feb 2015, at 13:00, Clemens Wyss DEV wrote:
> Just about to upgrade to Solr5. My UnitTests fail:
> 13:50:41.178 [main] ERROR org.apache.solr.core.CoreContainer - Error
> creating
[Solr 5.0]
Whereas in
fq={!tag="facet15"}facet15__d_i:1.8 facet15__d_i:2.2
&q=(*:*)
&facet=true
&facet.mincount=1
&facet.field={!key="facet15" ex="facet15"}facet15__d_i
"facet15" is not affected by the fq (as desired). This does not hold true for
the facet.query
fq={!tag="till2"}facet15__d_i:[
facet15__d_i:[2.0 TO *] don't you?
On Tue, Mar 3, 2015 at 12:00 PM, Clemens Wyss DEV
wrote:
> [Solr 5.0]
> Whereas in
>
> fq={!tag="facet15"}facet15__d_i:1.8 facet15__d_i:2.2
> &q=(*:*)
> &facet=true
> &facet.mincount=1
> &facet.field={!key="fa
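The usual resolution for this: the {!ex=...} exclusion applies to facet.query too, so tagging the fq and excluding it in the facet.query should give the same behavior as for facet.field. A SolrJ sketch of the whole request, assuming standard tag/ex local-param semantics:

import org.apache.solr.client.solrj.SolrQuery;

public class TaggedFacetExample {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("*:*");
    q.addFilterQuery("{!tag=facet15}facet15__d_i:1.8 facet15__d_i:2.2");
    q.setFacet(true);
    q.set("facet.mincount", "1");
    q.addFacetField("{!key=facet15 ex=facet15}facet15__d_i");
    // exclude the tagged fq when computing the facet.query as well
    q.addFacetQuery("{!ex=facet15}facet15__d_i:[2.0 TO *]");
  }
}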
I am seeing the following stacktrace(s):
Caused by: java.lang.IllegalArgumentException: Unknown type of result: class
javax.xml.transform.dom.DOMResult
at
net.sf.saxon.event.SerializerFactory.getReceiver(SerializerFactory.java:154)
~[netcdfAll.jar:4.5.4]
at
net.sf.saxon.Identity
Context: Solr/Lucene 5.1
Adding documents to Solr core/index through SolrJ
I extract PDFs using Tika. The pdf-content is one of the fields of my
SolrDocuments that are transmitted to Solr using SolrJ.
As not all documents seem to be "coming through" I looked into the Solr-logs
and see the follw
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Unknown Source)
What are possible reasons for this?
Thx
Clemens
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Friday, April 24, 2015 14:01
To: solr-user@lucene.apache.org
Thx. It would be helpful if we could see the originating request URL for this error.
- Clemens
PS: Last time I saw "Hoss" was when watching Bonanza as a kid ;)
-----Original Message-----
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Friday, April 24, 2015 19:15
To: solr-user
If I run Solr in embedded mode (which I shouldn't, I know ;) ) how do I know
(event?) that the cores are up-and-running, i.e. all is initialized?
Thx
Clemens
Context: Solr 5.1, EmbeddedSolrServer(-mode)
I have a rather big index/core (>1G). I was able to initially index this core
and could then search within it. Now when I restart my app I am no longer able
to search.
getSearcher seems to "hang"... :
java.lang.Object.wait(long) line: not available [n
> more than 15 minutes
It took 37 minutes!
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Sunday, May 3, 2015 10:00
To: solr-user@lucene.apache.org
Subject: "blocked" in org.apache.solr.core.SolrCore.getSearcher(...) ?
Cont
ed with Tika.
-----Original Message-----
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Sunday, May 3, 2015 17:53
To: solr-user@lucene.apache.org
Subject: Re: "blocked" in org.apache.solr.core.SolrCore.getSearcher(...) ?
What are the other threads doing during this time?
-Yo
Just opened the very core in a "normal" Solr server instance. Same delay until
it's usable, i.e. nothing to do with embedded mode or any other thread slowing
things down.
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Sunday, May 3
about
what the system is doing.
You can get the same information from the command line using
# jstack (pid) > output.log
Best,
Andrea
On 3 May 2015 18:53, "Clemens Wyss DEV"
mailto:clemens...@mysign.ch>> wrote:
> Just opened the very core in a "normal"
rCore.java:1751)
java.util.concurrent.FutureTask.run(Unknown Source)
java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
java.lang.Thread.run(Unknown Source)
-----Original Message-----
From: Clemens Wyss DEV [mailto:cleme
79
If you don't use the suggest component, the easiest fix is to comment it out.
-Yonik
On Sun, May 3, 2015 at 1:11 PM, Clemens Wyss DEV wrote:
> I guess it's the "searcherExecutor-7-thread-1 (30)" which seems to be loadi
Context: Solr/Lucene 5.1
Is there a way to determine which documents occupy a lot of "space" in the index?
As I don't store any fields that contain text, it must be the terms extracted
from the documents that occupy the space.
So my question is: which documents occupy the most space in the inverted index?
One of my fields (the "phrase suggestion" field) has 30'860'099 terms. Is
this "too much"?
Another field (the "single word suggestion") has 2'156'218 terms.
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.
tual terms are, it does take a bit of poking around though.
There's no good way I know of to know which docs are taking up space in the
index. What I'd probably do is use Tika in a SolrJ client and look at the data
as I sent it, here's a place to start:
https://lucidworks.com/blog/dev
I'd like to make use of solr.allow.unsafe.resourceloading=true.
Is the command line "-Dsolr.allow.unsafe.resourceloading=true" the only way to
inject/set this property or can it be done (e.g.) in solr.xml?
Thx
Clemens
Given the following schema.xml
_my_id
When I try to include the very schema from another schema file, e.g.:
<xi:include href="schema-common.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
I get SolrException
copyField source :'_my_title' is not a glob and doesn't match any explicit
f
schema file
<- the included schema-common.xml
Remove the <schema> tags from your schema-common.xml. You won’t be able to use
it alone in that case, but if you need to do that, you could just create
another schema file that includes it inside wrapping <schema> tags.
Steve
> On May 15, 2015, at 4:01
ore/schema?wt=schema.xml&indent=on”:
——
_my_id
——
Steve
> On May 15, 2015, at 8:57 AM, Clemens Wyss DEV wrote:
>
> Thought about that too (should have written ;) ).
> When I remove the schema-tag from the composite xml I get:
> org.apache.solr.common.S
tp://www.lucidworks.com
<http://www.lucidworks.com/>
> On May 13, 2015, at 3:49 AM, Clemens Wyss DEV wrote:
>
> I'd like to make use of solr.allow.unsafe.resourceloading=true.
> Is the commandline "-D solr.allow.unsafe.resourceloading=true" the only way
> to inject
I also noticed that (see my post this "morning")
...
SOLR_OPTS="$SOLR_OPTS -Dsolr.allow.unsafe.resourceloading=true"
...
is not taken into consideration (anymore). Same "bug"?
-----Original Message-----
From: Ere Maijala [mailto:ere.maij...@helsinki.fi]
Sent: Wednesday, April 15, 2015 0
26, 2015 at 9:15 AM, Clemens Wyss DEV wrote:
> I also noticed that (see my post this "morning") ...
> SOLR_OPTS="$SOLR_OPTS -Dsolr.allow.unsafe.resourceloading=true"
> ...
> Is not taken into consideration (anymore). Same "bug"?
>
>
> -
Lucene 5.1:
I am (also) facing
"java.lang.IllegalStateException: suggester was not built"
At the very moment no new documents seem to be added to the index/core. Will a
reboot "sanitize" the index/core?
I (still) have
<str name="buildOnCommit">true</str>
How can I tell Solr to periodically update the s
take many minutes so I recommend against these options for a large
index, and strongly recommend you test these with a large corpus.
Best,
Erick
On Mon, Jun 1, 2015 at 4:01 AM, Clemens Wyss DEV wrote:
> Lucene 5.1:
> I am (also) facing
> "java.lang.IllegalStateException: sug
Context: Lucene 5.1, Java 8 on debian. 24G of RAM, of which 16G is available
for Solr.
I am seeing the following OOMs:
ERROR - 2015-06-03 05:17:13.317; [ customer-1-de_CH_1]
org.apache.solr.common.SolrException; null:java.lang.RuntimeException:
java.lang.OutOfMemoryError: Java heap space
a
ograg.org]
Sent: Wednesday, June 3, 2015 09:16
To: solr-user@lucene.apache.org
Subject: Re: Solr OutOfMemory but no heap and dump and oo_solr.sh is not
triggered
On 6/3/2015 12:20 AM, Clemens Wyss DEV wrote:
> Context: Lucene 5.1, Java 8 on debian. 24G of RAM, of which 16G is available for
>
> File a JIRA issue please. That OOM Exception is getting wrapped in a
> RuntimeException it looks. Bug.
>
> - Mark
>
>
> On Wed, Jun 3, 2015 at 2:20 AM Clemens Wyss DEV
> wrote:
>
>> Context: Lucene 5.1, Java 8 on debian. 24G of RAM whereof 16G
>> available for Sol
t. Don't worry too much, the title etc. can be
changed afterwards as things become clearer.
Best,
Erick
On Wed, Jun 3, 2015 at 5:58 AM, Clemens Wyss DEV wrote:
> Hi Mark,
> what exactly should I file? What needs to be added/appended to the issue?
>
> Regards
> Clemens
I am (seldom) seeing NPEs at line 610 of HttpSolrClient:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error
from server at http://xxx.xxx.x.xxx:8983/solr/core1:
java.lang.NullPointerException
at
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(Http
Context: solr 6.6.0
I'm switching my schemas from the deprecated solr.LatLonType to
solr.LatLonPointSpatialField. Now my sort query (which used to work with
solr.LatLonType):
sort=geodist(b4_location__geo_si,47.36667,8.55) asc
raises the error
"sort param could not be parsed as a query, and is not
Sorry for "re-asking". Anybody else facing this issue (bug?), or can anybody
provide advice on "where to look"?
Thx
Clemens
-----Original Message-----
From: Clemens Wyss DEV [mailto:clemens...@mysign.ch]
Sent: Wednesday, November 1, 2017 11:06
To: 'solr-user@lu
I am still using 5.4.1 and have the following code to create a new core:
...
Properties coreProperties = new Properties();
coreProperties.setProperty( CoreDescriptor.CORE_CONFIGSET, configsetToUse );
CoreDescriptor coreDescriptor = new CoreDescriptor( container, coreName,
coreFolder, corePropertie
Does
http://localhost:8983/solr/admin/cores?action=RELOAD
reload all cores?
Thx
Clemens
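As far as I know it does not: RELOAD acts on a single core named via the core parameter, so reloading everything means listing the cores and reloading each one. A SolrJ sketch, assuming 5.x-era CoreAdminRequest helpers:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

public class ReloadAllCores {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr")) {
      // STATUS with no core name returns the status of all cores
      CoreAdminResponse status = CoreAdminRequest.getStatus(null, client);
      for (int i = 0; i < status.getCoreStatus().size(); i++) {
        String coreName = status.getCoreStatus().getName(i);
        CoreAdminRequest.reloadCore(coreName, client);
      }
    }
  }
}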
I am seeing many exceptions like this in my Solr [5.4.1] log:
null:java.lang.StringIndexOutOfBoundsException: String index out of range: -2
at
java.lang.AbstractStringBuilder.replace(AbstractStringBuilder.java:824)
at java.lang.StringBuilder.replace(StringBuilder.java:262)
My index size is 20 GB and I have issued the solr backup command; the backup
is now in progress and taking too much time, so how can I stop the backup command?
Currently I am exploring hadoop with solr. Somewhere it is written that "This
does not use Hadoop Map-Reduce to process Solr data, rather it only uses the
HDFS filesystem for index and transaction log file storage.",
so what is the advantage of using hadoop over the local file system?
will use