Thanks Abe-san!
Your advice is very informative.
Thanks again.
Regards,
Shigeki
2012/12/21 Shinichiro Abe
> You can place the missing JAR files in the contrib/extraction/lib.
>
> For class files: asm-x.x.jar
> For mp4 files: aspectjrt-x.x.jar
>
> FWIW, please see https://issues.apache.org/
This is on Solr 3.5.0.
We are getting a java.lang.NegativeArraySizeException when our webapp
sends a query where the start parameter is set to a negative value.
This seems to set off a denial of service problem within Solr. I don't
yet know whether it's a mistake in coding, or whether some ma
Hi,
I am looking for a token filter that can combine 2 terms into 1. E.g.
the input has been tokenized by white space:
t1 t2 t2a t3
I want a filter that output:
t1 t2t2a t3
I know it is a very special case, and I am thinking about developing a filter
of my own. But I cannot figure out which API
You can place the missing JAR files in the contrib/extraction/lib.
For class files: asm-x.x.jar
For mp4 files: aspectjrt-x.x.jar
FWIW, please see https://issues.apache.org/jira/browse/SOLR-4209
Regards,
Shinichiro Abe
On 2012/12/21, at 15:08, Shigeki Kobayashi wrote:
> Hi,
>
> I use ManifoldC
The issue has been solved and sorry for my negligence.
At 2012-12-21 11:10:53,SuoNayi wrote:
>Hi all, for SolrCloud (Solr 4.0), how do I add a third-party analyzer?
>There is a third-party analyzer jar and I want to integrate it with SolrCloud.
>Here are my steps but the ClassNotFoundException is thrown at
I agree actually (about not surprising the users). But the consequences of
forgetting this value may also lead to some serious debugging issues.
An interesting (not sure if reasonable) compromise would be to look at an
error message for @version=1 and using @multiValued attribute and make sure
it
: On another hand, having @version default to 1.0 is probably an oversight,
: given the number of changes present. Should it not default to latest or
: at least to 1.5 (and change periodically)?
If the default value changed, then users w/o a version attribute in their
schema would suddenly ge
Thank you.
So, the conclusion to me is that @name can be skipped. It is not used in
anything (or anything critical anyway) and there is a default. That's good
enough for me.
On another hand, having @version default to 1.0 is probably an oversight,
given the number of changes present. Should it
Hi,
Have a look at http://search-lucene.com/?q=invalid+version+javabin
Otis
--
Solr Monitoring - http://sematext.com/spm/index.html
Search Analytics - http://sematext.com/search-analytics/index.html
On Wed, Dec 19, 2012 at 11:23 AM, Shahar Davidson wrote:
> Hi,
>
> I'm encountering this erro
On 12/20/2012 4:57 PM, Chris Hostetter wrote:
i just tried running the 4x solr example with the jetty options to allow
remote JMX...
java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.po
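For reference, a complete invocation might look like the sketch below. The port number is an arbitrary assumption (pick any free port), and disabling SSL and authentication like this is only sensible on a trusted network.

```shell
# Dry run: build and print a Solr-example launch command with remote JMX enabled.
# Port 18983 is an arbitrary assumption; adjust it, and only disable SSL/auth
# on a trusted network.
JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=18983"
echo "java $JMX_OPTS -jar start.jar"   # run this from the example/ directory
```

You can then point jconsole at hostname:18983.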
Hi Robi,
Oh that's the thing of the past, go for the latest Java 7 if they let you!
Otis
--
Performance Monitoring - http://sematext.com/spm
On Dec 20, 2012 6:29 PM, "Petersen, Robert" wrote:
> Hi Otis,
>
> I thought Java 7 had a bug which wasn't being addressed by Oracle which
> was making it
Hi,
You can use Solr's DataImportHandler to index files in the file system.
You could set things up in such a way that Solr keeps indexing whatever
you put in some specific location in the FS. This is not the most common
setup, but it's certainly possible. Solr keeps the searchable index in its
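If a DataImportHandler is registered in solrconfig.xml, the import can then be triggered over HTTP. The base URL and the /dataimport handler path below are assumptions based on a default setup:

```shell
# Dry run: print the DataImportHandler full-import request.
# The base URL and the /dataimport handler path are assumptions; adjust to
# your solrconfig.xml.
SOLR_URL="http://localhost:8983/solr"
IMPORT_URL="$SOLR_URL/dataimport?command=full-import"
echo "curl \"$IMPORT_URL\""   # run the printed command against a live Solr
```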
hi there,
I am quite new to Solr and have a very basic question about storing and
indexing the document.
I am trying with the Solr example, and when I run command like 'java -jar
post.jar foo/test.xml', it gives me the feeling that solr will index the
given file, no matter where it is stored, and
: If I connect jconsole to a remote Solr installation (or any app) using jmx,
: all the graphs are populated except 'threads' ... is this expected, or have I
: done something wrong? I can't seem to locate the answer with google.
i just tried running the 4x solr example with the jetty options to
I think you are hitting solr-3589. There is a vote underway for a 3.6.2
that contains this fix
On Dec 20, 2012 6:29 PM, "kirpakaro" wrote:
> Hi folks,
>
> I am having couple of problems with Japanese data, 1. it is not
> properly
> indexing all the data 2. displaying the exact match result on
Hi folks,
I am having a couple of problems with Japanese data: 1. it is not properly
indexing all the data; 2. displaying the exact-match result on top, then the
90% matches, 80% matches, etc., does not work.
I am using Solr 3.6.1 with text_ja as the fieldType; here is the schema
Hi Otis,
I thought Java 7 had a bug that wasn't being addressed by Oracle and that
made it unsuitable for Solr. Did that get fixed now?
http://searchhub.org/2011/07/28/dont-use-java-7-for-anything/
I did see this but it doesn't really mention the bug:
http://opensearchnews.com/2012/04/a
Right, you can store it, but you can't search on it that way, and you
certainly can't do complex searches that take the XML structure into
account (e.g. xpath queries).
Upayavira
On Thu, Dec 20, 2012, at 10:22 PM, Alexandre Rafalovitch wrote:
> What happens if you just supply it as CDATA into a s
What happens if you just supply it as CDATA into a string field? Store, no
index, probably compressed and lazy.
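For clarity, a sketch of what that could look like (the field name "raw_xml" is hypothetical):

```xml
<!-- schema.xml: a stored-only string field; "raw_xml" is a hypothetical name -->
<field name="raw_xml" type="string" indexed="false" stored="true"/>

<!-- update message: wrap the XML payload in CDATA so it survives untouched -->
<field name="raw_xml"><![CDATA[<record><title>…</title></record>]]></field>
```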
Regards,
Alex
On 20 Dec 2012 09:30, "Modou DIA" wrote:
> Hi everybody,
>
> i'm newbie with Solr technologies but in the past i worked with lucene
> and another solution similar to
Yeah... not sure how I missed it, but my search sees it now.
Also, the name will default to "schema.xml" if you leave it out of the
schema.
-- Jack Krupansky
-Original Message-
From: Mikhail Khludnev
Sent: Thursday, December 20, 2012 3:06 PM
To: solr-user
Subject: Re: Where does
Depending on your architecture, why not index the same data into two machines?
One will be your prod, the other your backup.
Thanks.
Alex.
-Original Message-
From: Upayavira
To: solr-user
Sent: Thu, Dec 20, 2012 11:51 am
Subject: Re: Pause and resume indexing on SolR 4 for backup
Jack,
FWIW I've found an occurrence in SystemInfoHandler.java
On Thu, Dec 20, 2012 at 6:32 PM, Jack Krupansky wrote:
> I checked the 4.x source code and except for the fact that you will get a
> warning if you leave it out, nothing uses that name. But... that's not to
> say that a future release mi
You're saying that there's no chance to catch it in the middle of
writing the segments file?
Having said that, the segments file is pretty small, so the chance would
be pretty slim.
Upayavira
On Thu, Dec 20, 2012, at 06:45 PM, Lance Norskog wrote:
> To be clear: 1) is fine. Lucene index updates
To be clear: 1) is fine. Lucene index updates are carefully sequenced so
that the index is never in a bogus state. All data files are written and
flushed to disk, then the segments.* files are written that match the
data files. You can capture the files with a set of hard links to create
a back
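On Unix that capture can be sketched like this (the paths are illustrative; `cp -lr` makes hard links, so the snapshot is instant and nearly free, and since Lucene never rewrites a committed segment file the linked copies stay valid):

```shell
# Snapshot an index directory with hard links (illustrative paths; point
# INDEX at your core's data/index directory in real use).
INDEX=demo-index
SNAP=backup-$(date +%Y%m%d)
mkdir -p "$INDEX"
echo "segment data" > "$INDEX/_0.cfs"   # stand-in for a real segment file
cp -lr "$INDEX" "$SNAP"                 # hard-link clone of every file
```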
Hi Lance,
I am an IT Recruiter in Raleigh, NC. Would you or would anyone you know be
interested in a long term contract opportunity for a Solr/Lucene Engineer with
Cisco here in RTP, NC?
Thanks for your time Lance and have a safe and happy Holiday!
Mark, yes, they have unique ids. Most of the time, after the 2nd json http
post, the query will return complete results.
I believe the data was already indexed by the 1st post, since if I shut down
Solr after the 1st post and restart it, the query returns the complete result
set.
Thanks,
Lili
Are you sure a commit didn't happen between? Also, a background merge
might have happened.
As to using a backup, you are right, just stop solr, put the snapshot
into index/data, and restart.
Upayavira
On Thu, Dec 20, 2012, at 05:16 PM, Andy D'Arcy Jewell wrote:
> On 20/12/12 13:38, Upayavira wro
On 20/12/12 13:38, Upayavira wrote:
The backup directory should just be a clone of the index files. I'm
curious to know whether it is a cp -r or a cp -lr that the replication
handler produces.
You would prevent commits by telling your app not to commit. That is,
Solr only commits when it is *tol
The spellchecker doesn't support checking the individual words against the index
with "fq" applied. This is only done for collations (and only if
"maxCollationTries" is greater than 0). Checking every suggested word
individually against the index after applying filter queries is probably going
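So to get fq-respecting suggestions you have to request collations. A request along those lines might look like this (all parameter values are illustrative):

```shell
# Dry run: print a spellcheck request where collations are re-checked against
# the index with the fq applied. All parameter values are illustrative.
SOLR_URL="http://localhost:8983/solr"
SPELL_URL="$SOLR_URL/select?q=text:helo&fq=category:books&spellcheck=true&spellcheck.collate=true&spellcheck.maxCollationTries=10"
echo "curl \"$SPELL_URL\""
```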
Solr does not support nested structures. You need to flatten your data
before indexing. You can store data in the way you did to be returned to
your users, but you will not be able to search within the XML as XML.
If you can explain the problem you are trying to solve, maybe folks here
can help yo
Does all the data have unique ids?
- Mark
On Dec 19, 2012, at 8:30 PM, Lili wrote:
> We set up SolrCloud with 2 shards and separate multiple zookeepers. The
> data added using http post with json in tutorial sample are not completely
> returned in query.However, if you send the same http
Hi James,
I don't get how the spellcheck.maxResultsForSuggest param helps with making
sure that the suggestions returned satisfy the fq params.
That's the main problem we're trying to solve, how often suggestions are
being returned is not really an issue for us at the moment.
Thanks,
Nalini
On
I checked the 4.x source code and except for the fact that you will get a
warning if you leave it out, nothing uses that name. But... that's not to
say that a future release might not require it - the doc/comments don't
explicitly say that it is optional.
Note that the version attribute is opt
I am sorry, I am still not clear. Do you mean I should use enterpriseid as the ID?
-Original Message-
From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
Sent: Wednesday, December 19, 2012 7:35 PM
To: solr-user@lucene.apache.org
Subject: Re: Putting more weight on particular column.
H
Hi everybody,
I'm a newbie with Solr technologies, but in the past I worked with Lucene
and another solution similar to Solr.
I'm working with Solr 4.0. I use SolrJ to embed a Solr server in
a Cocoon 2.1 application.
I want to know if it's possible to store (without indexing) a field
containin
That's neat, but wouldn't that run on every commit? How would you use it
to, say, back up once a day?
Upayavira
On Thu, Dec 20, 2012, at 01:57 PM, Markus Jelsma wrote:
> You can use the postCommit event in updateHandler to execute a task.
>
> -Original message-
> > From:Upayavira
> >
You can use the postCommit event in updateHandler to execute a task.
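A sketch of that hook in solrconfig.xml (the exe/dir values are placeholders for your own backup script):

```xml
<!-- solrconfig.xml: run an external command after every commit.
     The exe/dir values are placeholders for your own script. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
</updateHandler>
```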
-Original message-
> From:Upayavira
> Sent: Thu 20-Dec-2012 14:45
> To: solr-user@lucene.apache.org
> Subject: Re: Pause and resume indexing on SolR 4 for backups
>
> The backup directory should just be a clone of the
In our system (using 3.6), it is displayed on /solr/admin/. I'd guess that the
value in solr.xml overrides the one in schema.xml, but I'm not sure.
-Michael
-Original Message-
From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
Sent: Thursday, December 20, 2012 12:08 AM
To: solr-user@lu
Personally I have never given it any attention, so I suspect it doesn't
matter much.
Upayavira
On Thu, Dec 20, 2012, at 05:08 AM, Alexandre Rafalovitch wrote:
> Hello,
>
> In the schema.xml, we have a name attribute on the root note. The
> documentation says it is for display purpose only. But f
The backup directory should just be a clone of the index files. I'm
curious to know whether it is a cp -r or a cp -lr that the replication
handler produces.
You would prevent commits by telling your app not to commit. That is,
Solr only commits when it is *told* to.
Unless you use autocommit, in
On 20/12/12 11:58, Upayavira wrote:
I've never used it, but the replication handler has an option:
http://master_host:port/solr/replication?command=backup
Which will take you a backup.
I've looked at that this morning as suggested by Markus Jelsma. Looks
good, but I'll have to work out how
Which strikes me as the right way to go.
Upayavira
On Thu, Dec 20, 2012, at 12:30 PM, AlexeyK wrote:
> Implemented it with http://wiki.apache.org/solr/DocTransformers.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Dynamic-modification-of-field-value-tp4028234p
I cannot see how SolrJ and the admin UI would return different results.
Could you run exactly the same query on both and show what you get here?
Upayavira
On Thu, Dec 20, 2012, at 06:17 AM, Joe wrote:
> I'm using SOLR 4 for an application, where I need to search the index
> soon
> after inserting
I've never used it, but the replication handler has an option:
http://master_host:port/solr/replication?command=backup
Which will take you a backup.
Also something to note, if you don't want to use the above, and you are
running on Unix, you can create fast 'hard link' clones of lucene
indexe
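Triggering that backup might look like this (host, port, and core path are assumptions based on the example defaults):

```shell
# Dry run: print the replication-handler backup request.
# Host, port, and core path are assumptions; adjust to your deployment.
SOLR_URL="http://localhost:8983/solr"
BACKUP_CMD="curl \"$SOLR_URL/replication?command=backup\""
echo "$BACKUP_CMD"   # run the printed command against a live master
```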
On 20 December 2012 16:14, Andy D'Arcy Jewell
wrote:
[...]
> It's attached to a web-app, which accepts uploads and will be available
> 24/7, with a global audience, so "pausing" it may be rather difficult (tho I
> may put this to the developer - it may for instance be possible if he has a
> small
On 20/12/12 10:24, Gora Mohanty wrote:
Unless I am missing something, the index is only being written to
when you are adding/updating the index. So, the question is how
is this being done in your case, and could you pause indexing for
the duration of the backup?
Regards,
Gora
It's attached to a
You can use the replication handler to fetch a complete snapshot of the index
over HTTP.
http://wiki.apache.org/solr/SolrReplication#HTTP_API
-Original message-
> From:Andy D'Arcy Jewell
> Sent: Thu 20-Dec-2012 11:23
> To: solr-user@lucene.apache.org
> Subject: Pause and resume indexi
On 20 December 2012 15:46, Andy D'Arcy Jewell
wrote:
> Hi all.
>
> Can anyone advise me of a way to pause and resume SolR 4 so I can perform a
> backup? I need to be able to revert to a usable (though not necessarily
> complete) index after a crash or other "disaster" more quickly than a
> re-inde
Hi all.
Can anyone advise me of a way to pause and resume SolR 4 so I can
perform a backup? I need to be able to revert to a usable (though not
necessarily complete) index after a crash or other "disaster" more
quickly than a re-index operation would yield.
I can't yet afford the "extravagan
Yeah, I ran into this issue myself with solr-4.0.0.
To fix it, I had to compile my own version from the solr-4x branch. That
is, I assume it's fixed as I have been unable to replicate it after the
switch.
I'm afraid you will have to reindex your data.
--
Med venlig hilsen / Best regards
*John