I've got a ConcurrentModificationException during a cron-driven delta-import with
DIH; I'm using the multicore Solr nightly from Hudson, 2009-04-02_08-06-47.
I don't know whether this stack trace is useful to you, but here it is:
java.util.ConcurrentModificationException
at java.util.LinkedHashMap$Linke
You may try setting useCompoundFile to true; that way indexing
should use far fewer file descriptors, but it will slow indexing down; see
http://issues.apache.org/jira/browse/LUCENE-888.
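For reference, a minimal sketch of where that entry lives in solrconfig.xml (element name as in the stock example config; surrounding sections omitted):

```xml
<!-- solrconfig.xml: compound-file format keeps fewer files (and thus
     fewer open file descriptors) per segment, at some indexing cost -->
<useCompoundFile>true</useCompoundFile>
```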
Try to see whether the shortage of descriptors is related only to Solr. How
are you using indexing, by
The only issue you may have will be related to software that writes files in
solr-home, but the only one I can think of is dataimport.properties of DIH;
so if you use DIH, you may want to make the dataimport.properties location
dynamically configurable, like an entry in data-config.xml, otherwise
Noble Paul നോബിള് नोब्ळ्
> wrote:
> > On Wed, Mar 4, 2009 at 5:24 PM, Walter Ferrara
> wrote:
> >> using:
> >>
> >>
> >>
> >>
> >> doesn't work either
> >
> > dataDir="/multicore/core0" means the path
l and it should use this value.
>
> but you can specify it as follows
>
>
> then it should be fine.
>
> can you just paste the log messages as solr starts
> --Noble
>
>
> On Wed, Mar 4, 2009 at 4:15 PM, Walter Ferrara
> wrote:
It also ignores the dataDir directive in solr.xml; in fact adding:
doesn't change the behavior.
This seems to be a bug introduced somewhere after February 2nd.
Any clue?
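The snippet above was stripped by the archive; for context, a hypothetical sketch of the kind of per-core dataDir attribute being discussed in solr.xml (paths are illustrative, not the original attachment):

```xml
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <!-- dataDir is the attribute under discussion; path is made up -->
    <core name="core0" instanceDir="core0" dataDir="/multicore/core0/data"/>
  </cores>
</solr>
```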
On Tue, Mar 3, 2009 at 5:56 PM, Walter Ferrara wrote:
There is a strange behavior which seems to affect hudson today's (March 3rd)
build but not (for example) hudson's February 2nd build.
Basically, when I start the multicore environment, it just creates the dataDir in
the current path.
To replicate:
1. download latest trunk
2. go to example directory
$ ls
READ
Shalin Shekhar Mangar wrote:
> Hi Walter,
>
> Indeed, there's a race condition there because we didn't expect people to
> hit it concurrently. We expected that imports would be run sequentially.
>
> Thanks for noticing this. We shall add synchronization to the next release.
> Do you mind (again) op
I'm using DIH and its wonderful delta-import.
I have a question: is the delta-import synchronized? Shouldn't multiple
calls to delta-import result in all but one being refused because the status
is not idle?
I've noticed, however, that calling dataimport/?command=delta-import
multiple times in a second results in
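The guard being discussed could look something like the following; this is a hypothetical sketch of an atomic idle check (not DIH's actual code; class and method names are made up):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: refuse a delta-import request while one is already running.
// An atomic compare-and-set closes the check-then-act race that a plain
// "if (idle) { idle = false; ... }" would leave open.
class ImportGuard {
    private final AtomicBoolean busy = new AtomicBoolean(false);

    // Returns true only for the caller that actually acquires the import.
    boolean tryStartImport() {
        return busy.compareAndSet(false, true);
    }

    void finishImport() {
        busy.set(false);
    }
}

public class Main {
    public static void main(String[] args) {
        ImportGuard guard = new ImportGuard();
        System.out.println(guard.tryStartImport()); // true: first call wins
        System.out.println(guard.tryStartImport()); // false: refused, still busy
        guard.finishImport();
        System.out.println(guard.tryStartImport()); // true: idle again
    }
}
```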
Shalin Shekhar Mangar wrote:
> Can you please open a JIRA issue for this? However, we may only be able to
> fix this after 1.3 because a code freeze has been decided upon, to release
> 1.3 asap.
>
I've opened https://issues.apache.org/jira/browse/SOLR-726
Walter
Launching a multicore Solr with DataImportHandler using a MySQL driver
(driver="com.mysql.jdbc.Driver") works fine if the MySQL connector jar
(mysql-connector-java-5.0.7-bin.jar) is in the classpath, either the JDK
classpath or inside the solr.war lib dir.
While putting the mysql-connector-java-5.0.7-
Dallan Quass wrote:
I have a situation where it would be beneficial to issue queries in a filter
that is called during analysis. In a nutshell, I have an index of places
that includes possible abbreviations. And I want to query this index during
analysis to convert user-entered places to "stand
Ryan McKinley wrote:
check the "status" action
also, check the index.jsp page
index.jsp does:
org.apache.solr.core.MultiCore multicore =
(org.apache.solr.core.MultiCore)request.getAttribute("org.apache.solr.MultiCore");
which is OK in a servlet, but how should I do the same inside a
handler,
In Solr (latest trunk version in svn), is it possible to access the "core
registry", or what used to be the static MultiCore object? My goal is to
retrieve all the cores registered in a given (multicore) environment.
It used to be MultiCore.getRegistry() initially, at the first stages of
SOLR-350; but n
I've noticed that passing HTML to a field using
HTMLStripWhitespaceTokenizerFactory ends up keeping some JavaScript too.
For example, using an analyzer like:
with a text such as:
title
pre
var time = new Date();
ordval= (time.getTime());
post
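The effect described above can be reproduced with a deliberately naive tag stripper; this is an illustrative sketch (not Solr's actual HTMLStripReader) showing why removing only the markup leaves script bodies behind:

```java
// Sketch: a naive stripper that removes <...> tags but keeps all text
// between them, so the body of a <script> element survives into the
// token stream exactly as reported above.
public class NaiveStrip {
    static String strip(String html) {
        // Remove anything that looks like a tag; keep everything else.
        return html.replaceAll("<[^>]*>", "");
    }

    public static void main(String[] args) {
        String html = "<title>title</title>"
                + "<script>var time = new Date();</script>post";
        // prints: titlevar time = new Date();post
        System.out.println(strip(html));
    }
}
```

A stripper that wants to drop script content entirely has to treat `<script>...</script>` as a region to skip, not just a pair of tags to delete.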
Did you create/modify the index with a newer version of Lucene than the
one you use in Solr?
In that case I doubt you can downgrade your index, but maybe you can
upgrade the Lucene inside your Solr (search this forum, there should be a
thread about this), or try the latest nightly builds.
Pau
See:
http://wiki.apache.org/solr/SchemaXml#head-af67aefdc51d18cd8556de164606030446f56554
indexed means searchable (faceting and sorting also need this); stored, instead,
is needed only when you need the original text (i.e. not
tokenized/analyzed) to be returned.
When stored and indexed are not present, I
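As a sketch, typical schema.xml field definitions for the combinations described above (field and type names here are made up for illustration):

```xml
<!-- searchable and returnable in results -->
<field name="title"    type="text"   indexed="true"  stored="true"/>
<!-- searchable only: cheaper to store, cannot be returned -->
<field name="body"     type="text"   indexed="true"  stored="false"/>
<!-- returned verbatim but not searchable -->
<field name="raw_html" type="string" indexed="false" stored="true"/>
```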
Take a look at:
http://lucene.apache.org/solr/version_control.html#Anonymous+Access+%28read-only%29
and http://www.apache.org/dev/version-control.html#anon-svn
You may want to use an IDE (Eclipse/NetBeans/...) to svn checkout from there (look for
the "trunk" dir); this way you could easily download the trunk and com
Isn't the Xeon 5110 64-bit? Maybe you could just put a 64-bit OS on your box.
Also, take a look at http://www.spack.org/wiki/LinuxRamLimits
--
Walter
Isart Montane wrote:
> I've got a dual Xeon. Here's my cpuinfo. I've read that the limit on
> a 2.6 Linux kernel is 4GB of user space and 4GB for the kernel...
older reader(s) will
be gc-ed when the reader is no longer referenced (i.e. when Solr loads
the new one, after its warmup and so on), is that right?
Thanks
--
J.J. Larrea wrote:
> At 5:30 PM +0200 9/20/07, Walter Ferrara wrote:
>
I have an index with several fields, but just one stored: ID (string,
unique).
I need to access that ID field for each of the top "nodes" docs in my
results (this is done inside a handler I wrote); the code looks like:
Hits hits = searcher.search(query);
for (int i = 0; i < hits.length(); i++) {
    String id = hits.doc(i).get("ID"); // the only stored field
}