Hoss, I'm so happy you realized the problem, because I was quite worried
about it!
Let me know if I can provide support with testing it.
The last two days I was busy migrating a bunch of hosts, which
should -hopefully- be finished today.
Then I will again have the infrastructure for running tes
Thank you Lance.
I just found the problem; posting here in case somebody else comes across this.
It turns out that Tomcat does not accept UTF-8 in URLs
by default:
http://wiki.apache.org/solr/SolrTomcat#URI_Charset_Config
I have no idea why that is the case, but after I followed the instructions I
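For reference, the fix described on that wiki page amounts to setting `URIEncoding` on the HTTP connector in Tomcat's `conf/server.xml`; a minimal sketch (the port and other attributes are just common defaults, adjust to your installation):

```xml
<!-- conf/server.xml: add URIEncoding="UTF-8" to the HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"/>
```

Without this, Tomcat decodes query-string bytes as ISO-8859-1, which mangles multi-byte UTF-8 terms before they ever reach Solr.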
I believe that you should remove the Analyzer class name from the field type; I
think it overrides the stack of tokenizer/token filters. The other
declarations do not have both an Analyzer class and tokenizers.
should be:
This may not help with your searching problem.
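In other words, a fieldType should declare either an analyzer class or a tokenizer/filter chain, not both. A sketch of the chain form, using stock Solr factory names rather than the poster's actual configuration:

```xml
<!-- schema.xml: <analyzer> with no class attribute wraps the chain -->
<fieldType name="text_general" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```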
- Original Message -
|
There is another way to do this: crawl the mobile site!
The Fennec browser from Mozilla talks Android. I often use it to get pagecrap
off my screen.
- Original Message -
| From: "Lance Norskog"
| To: solr-user@lucene.apache.org
| Sent: Wednesday, August 29, 2012 7:37:37 PM
| Subject: R
Any thoughts?
It is weird: I can see the words being segmented correctly in Field
Analysis. Almost every website I checked recommends either
CJKAnalyzer, IKAnalyzer or SmartChineseAnalyzer. But if I can see the
words being segmented, then it should not be a problem with the settings of
differen
Thank you Hoss. I imported the KEYS file using *gpg --import KEYS.txt*.
Then I did the *--verify* again. This time I get an output like this:
gpg: Signature made 08/06/12 19:52:21 Pacific Daylight Time using RSA key ID 322D7ECA
gpg: Good signature from "Robert Muir (Code Signing Key) "
*gpg: WARN
I have a data-config.xml with 2 entities, like
...
and
...
entity delta_build is for delta import; the query is
?command=full-import&entity=delta_build&clean=false
and I want to use deletedPkQuery to delete from the index. So I have added those to
entity "delta_build":
deltaQuery="select -1 as ID from d
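A sketch of what such a DataImportHandler entity often looks like; the table and column names below are placeholders, not the poster's actual schema:

```xml
<!-- data-config.xml: deletedPkQuery must return the pk column -->
<entity name="delta_build" pk="id"
        query="SELECT id, title FROM docs"
        deltaQuery="SELECT id FROM docs
                    WHERE updated &gt; '${dataimporter.last_index_time}'"
        deltaImportQuery="SELECT id, title FROM docs
                          WHERE id = '${dataimporter.delta.id}'"
        deletedPkQuery="SELECT id FROM docs WHERE deleted = 1"/>
```

One caveat worth noting: as documented on the DIH wiki, deletedPkQuery is only executed during command=delta-import; a full-import (even with clean=false, as in the URL above) will not run it.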
: I download solr 4.0 beta and the .asc file. I use gpg4win and type this in
: the command line:
:
: >gpg --verify file.zip file.asc
:
: I get a message like this:
:
: *gpg: Can't check signature: No public key*
You can verify the .asc sig file using the public KEYS file hosted on the
main apac
Some extra information. If I use curl and force it to use HTTP 1.0, it
is more visible that Solr doesn't allow persistent connections:
$ curl -v -0 'http://localhost:8983/solr/select?q=*:*' -H 'Connection: Keep-Alive'
* About to connect() to localhost port 8983 (#0)
* Trying ::1... connected
>
Hi,
Running the example Solr from the 3.6.1 distribution, I cannot make it
keep persistent HTTP connections:
$ ab -c 1 -n 100 -k 'http://localhost:8983/solr/select?q=*:*' | grep
Keep-Alive
Keep-Alive requests:0
What should I change to fix that?
P.S. We have the same issue in production
Thanks for posting this!
I ran into exactly this issue yesterday, and ended up deleting the files to
get around it.
Mark
Sent from my mobile doohickey.
On Sep 6, 2012 4:13 AM, "Rohit Harchandani" wrote:
> Thanks everyone. Adding the _version_ field in the schema worked.
> Deleting the data dire
Not sure whether this is a duplicate question; I did try to browse through the
archive and did not find anything specific to what I was looking for.
I see duplicates in the dictionary if I update the document concurrently.
I am using Solr 3.6.1 with the following configurations for suggester:
Solr Co
The replication finally worked after I removed the compression setting
from the solrconfig.xml on the slave. Thanks for providing the
workaround.
Ravi Kiran
On Wed, Sep 5, 2012 at 10:23 AM, Ravi Solr wrote:
> Wow, That was quick. Thank you very much Mr. Siren. I shall remove the
> compression no
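For anyone hitting the same thing: the setting in question is the compression option on the slave side of the replication handler in solrconfig.xml. A sketch (the master URL is a placeholder):

```xml
<!-- solrconfig.xml on the slave -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master:8983/solr/replication</str>
    <!-- the line removed as the SOLR-3789 workaround: -->
    <str name="compression">internal</str>
  </lst>
</requestHandler>
```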
: Subject: Re: use of filter queries in Lucene/Solr Alpha40 and Beta4.0
Günter, this is definitely strange.
The good news is, I can reproduce your problem.
The bad news is, I can reproduce your problem and I have no idea what's
causing it.
I've opened SOLR-3793 to try to get to the bottom of
: Actually, I didn't technically "upgrade". I downloaded the new
: version, grabbed the example, and pasted in the fields from my schema
: into the new one. So the only two files I changed from the example are
: schema.xml and solr.xml.
ok -- so with the fix for SOLR-3432, anyone who tries simila
And when you pasted your 3.5 fields into the 4.0 schema, did you delete the
existing fields (including _version_) at the same time?
-- Jack Krupansky
-Original Message-
From: Paul
Sent: Wednesday, September 05, 2012 4:32 PM
To: solr-user@lucene.apache.org
Subject: Re: Still see docume
I don't see a Jira for it, but I do see the bad behavior in both Solr 3.6
and 4.0-BETA in Solr admin analysis.
Interestingly, the screen shot for LUCENE-3642 does in fact show the
(improperly) incremented positions for successive ngrams.
See:
https://issues.apache.org/jira/browse/LUCENE-3642
Actually, I didn't technically "upgrade". I downloaded the new
version, grabbed the example, and pasted in the fields from my schema
into the new one. So the only two files I changed from the example are
schema.xml and solr.xml.
Then I reindexed everything from scratch so there was no old index
in
: I don't think I changed by solrconfig.xml file from the default that
: was provided in the example folder for solr 4.0.
ok ... well the Solr 4.0-BETA example solrconfig.xml has this in it...
${solr.data.dir:}
So if you want to override the dataDir using a "property" like your second
exampl
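To spell that out: `${solr.data.dir:}` means "use the `solr.data.dir` system property if set, otherwise the default". A sketch of overriding it (the path is an example):

```xml
<!-- solrconfig.xml -->
<dataDir>${solr.data.dir:}</dataDir>
<!-- then start the example with:
     java -Dsolr.data.dir=/var/solr/data -jar start.jar -->
```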
: That was exactly it. I added the following line to schema.xml and it now
works.
:
:
Just to be clear: how exactly did you "upgraded to solr 4.0 from solr 3.5"
-- did you throw out your old solrconfig.xml and use the example
solrconfig.xml from 4.0, but keep your 3.5 schema.xml? Do you in
Nicolas -
Can you elaborate on your use and configuration of Solr on NFS? What lock
factory are you using? (you had to change from the default, right?)
And how are you coordinating updates/commits to the other servers? Where does
indexing occur and then how are commits sent to the NFS mou
That was exactly it. I added the following line to schema.xml and it now works.
On Wed, Sep 5, 2012 at 10:13 AM, Jack Krupansky wrote:
> Check to make sure that you are not stumbling into SOLR-3432: "deleteByQuery
> silently ignored if updateLog is enabled, but {{_version_}} field does not
> e
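The added line is elided above; for Solr 4.0 with the updateLog enabled it is typically the `_version_` field declaration from the example schema:

```xml
<!-- schema.xml: required when <updateLog/> is enabled in solrconfig.xml -->
<field name="_version_" type="long" indexed="true" stored="true"/>
```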
Amazon doesn't have a prebuilt network filesystem that's mountable on
multiple hosts out of the box. The closest thing would be setting up
NFS among your hosts yourself, but at that point it'd probably be
easier to set up Solr replication.
Michael Della Bitta
-
Thanks everyone. Adding the _version_ field in the schema worked.
Deleting the data directory works for me, but was not sure why deleting
using curl was not working.
On Wed, Sep 5, 2012 at 1:49 PM, Michael Della Bitta <
michael.della.bi...@appinions.com> wrote:
> Rohit:
>
> If it's easy, the easi
In the analysis page, the n-grams produced by EdgeNgramTokenFilter are at
sequential positions. This seems wrong, because an n-gram is associated with a
source token at a specific position. It also really messes up phrase matches.
With the source text "fleen", these positions and tokens are gene
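To make the complaint concrete, here is a minimal sketch (plain Python, not Lucene code) of the edge n-grams produced from "fleen"; since every gram derives from the same source token, the argument is that they should share that token's position rather than occupy sequential positions:

```python
def edge_ngrams(token, min_n=1, max_n=5):
    """Leading-edge n-grams of a single token.

    In a Lucene token stream, grams after the first would carry
    positionIncrement=0 if they were stacked at the source token's
    position, which is the behavior the poster expects.
    """
    return [token[:n] for n in range(min_n, min(max_n, len(token)) + 1)]

print(edge_ngrams("fleen"))  # ['f', 'fl', 'fle', 'flee', 'fleen']
```

If instead each gram advances the position by one (as described above), a phrase query spanning the original tokens can no longer match.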
Hi,
We currently share a single Solr read index on an NFS mount accessed by
various Solr instances from various devices, which gives us a highly
performant cluster framework. We would like to migrate to Amazon or
another cloud. Is there any way (compatibility-wise) to have the Solr index on
the Amazon S3 file cloud syste
Rohit:
If it's easy, the easiest thing to do is to turn off your servlet
container, rm -r * inside of the data directory, and then restart the
container.
Michael Della Bitta
Appinions | 18 East 41st St., Suite 1806 | New York, NY 10017
www.appinio
Check to make sure that you are not stumbling into SOLR-3432: "deleteByQuery
silently ignored if updateLog is enabled, but {{_version_}} field does not
exist in schema".
See:
https://issues.apache.org/jira/browse/SOLR-3432
This could happen if you kept the new 4.0 solrconfig.xml, but copied in
Hello!
You can implement your own crawler using Droids
(http://incubator.apache.org/droids/) or use Apache Nutch
(http://nutch.apache.org/), which is very easy to integrate with Solr
and is a very powerful crawler.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch -
Please take a look at the Apache Nutch project.
http://nutch.apache.org/
-Original message-
> From:Lochschmied, Alexander
> Sent: Wed 05-Sep-2012 17:09
> To: solr-user@lucene.apache.org
> Subject: Website (crawler for) indexing
>
> This may be a bit off topic: How do you index an exis
This may be a bit off topic: How do you index an existing website and control
the data going into index?
We already have Java code to process the HTML (or XHTML) and turn it into a
SolrJ Document (removing tags and other things we do not want in the index). We
use SolrJ for indexing.
So I guess
Wow, That was quick. Thank you very much Mr. Siren. I shall remove the
compression node in the solrconfig.xml and let you know how it went.
Thanks,
Ravi Kiran Bhaskar
On Wed, Sep 5, 2012 at 2:54 AM, Sami Siren wrote:
> I opened SOLR-3789. As a workaround you can remove <str name="compression">inter
I think I found the cause of this. It is partially my fault, because I sent
Solr a field with an empty value, but this is also a configuration problem.
https://issues.apache.org/jira/browse/SOLR-3792
-Original Message-
From: Yoni Amir [mailto:yoni.a...@actimize.com]
Sent: Tuesday, Septem
Check to make sure that you are not stumbling into SOLR-3432: "deleteByQuery
silently ignored if updateLog is enabled, but {{_version_}} field does not
exist in schema".
See:
https://issues.apache.org/jira/browse/SOLR-3432
-- Jack Krupansky
-Original Message-
From: Paul
Sent: Wednes
I've recently upgraded to solr 4.0 from solr 3.5 and I think my delete
statement used to work, but now it doesn't seem to be deleting. I've
been experimenting around, and it seems like this should be the URL
for deleting the document with the uri of "network_24".
In a browser, I first go here:
ht
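For comparison, the usual non-browser way to do this in Solr 4.x is to POST a delete message to the update handler; a sketch, taking the field name from the post above and assuming the stock example endpoint:

```xml
<!-- POST to http://localhost:8983/solr/update?commit=true
     with Content-Type: text/xml -->
<delete><query>uri:network_24</query></delete>
```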
> i want to search with title and empname both.
I know; I gave that URL just to convey the idea here.
If you try
suggest/?q="michael b"&df=title&defType=lucene&fl=title
you will see that what you are interested in will be in the result section, not
the suggest section.
> or title or song...). Here (*suggest/?q="michael
> b"&d
I don't think I changed my solrconfig.xml file from the default that
was provided in the example folder for solr 4.0.
On Tue, Sep 4, 2012 at 3:40 PM, Chris Hostetter
wrote:
>
> :
>
> I'm pretty sure what you have above tells solr that core MYCORE_test
> should use the instanceDir MYCORE but
Hi,
At the moment, partitioning with solrcloud is hash based on uniqueid.
What I'd like to do is have custom partitioning, e.g. based on date
(shard_MMYY).
I'm aware of https://issues.apache.org/jira/browse/SOLR-2592, but
after a cursory look it seems that with the latest patch, one might
end up
Hi,
Thanks.
I want to search with title and empname both. For example, when we use any
search engine like Google or Yahoo, we do not specify any type (name,
title, song...). Here (*suggest/?q="michael
b"&df=title&defType=lucene*) we are specifying a title-type search.
I removed said
Thanks for all the information.
> I'm not sure how exactly you are measuring/defining "replication lag" but
> if you mean "lag in how long until the newly replicated documents are
> visible in searches"
That is exactly what I wanted to say.
I've attached the cache statistics.
If you are inter
Hi Markus,
Can you please tell me the exact file name in the Tomcat folder?
That is, where do I have to set the properties?
I am using a Windows machine and I have Tomcat 6.
Thanks,
Guru
Hi,
You are trying to use two different approaches at the same time.
1) Remove the "suggest" and "query" components from your requestHandler.
2) Execute this query URL: suggest/?q="michael b"&df=title&defType=lucene
And you will see my point.
--- On Wed, 9/5/12, aniljayanti wrote:
> F
Set the -DzkHost= property in some Tomcat configuration as per the wiki page
and point it to the Zookeeper(s). On Debian systems you can use
/etc/default/tomcat6 to configure your properties.
-Original message-
> From:bsargurunathan
> Sent: Wed 05-Sep-2012 10:40
> To: solr-user@luce
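A sketch of what that looks like on a Debian system; the ZooKeeper addresses are placeholders for your actual ensemble:

```shell
# /etc/default/tomcat6
JAVA_OPTS="${JAVA_OPTS} -DzkHost=zk1:2181,zk2:2181,zk3:2181"
```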
Hi Rafal,
I got standalone ZooKeeper working, and it starts.
But as the next step I want to configure ZooKeeper with my SolrCloud setup
using Apache Tomcat.
How is that really possible? Can you please tell me the steps I have to
follow to implement SolrCloud with Apache Tomcat. Thank
On Fri, 2012-08-31 at 13:35 +0200, Erick Erickson wrote:
> Imagine you have two entries, aardvark and emu in your
> multiValued field. How should that document sort relative to
> another doc with camel and zebra? Any heuristic
> you apply will be wrong for someone else
I see two obvious choice