Thanks for your reply. Nested boolean queries are a valid concern. I also
realized that isCoordDisabled needs to be considered in
BooleanQuery.hashCode so that a query with coord=false will have a different
cache key in Solr.
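To illustrate the point (this is a minimal sketch, not Lucene's actual BooleanQuery code): if a coord-disabled flag is folded into hashCode and equals, two otherwise-identical queries with different coord settings get distinct cache keys.

```java
// Minimal sketch (not Lucene's real BooleanQuery): mixing a
// coord-disabled flag into hashCode/equals so that a query with
// coord=false gets a different cache key than the same query with
// coord enabled.
class CoordAwareQuery {
    private final String queryText;      // stand-in for the real clause list
    private final boolean coordDisabled;

    CoordAwareQuery(String queryText, boolean coordDisabled) {
        this.queryText = queryText;
        this.coordDisabled = coordDisabled;
    }

    @Override
    public int hashCode() {
        // fold the flag into the hash so coord=false changes the key
        return 31 * queryText.hashCode() + (coordDisabled ? 1 : 0);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof CoordAwareQuery)) return false;
        CoordAwareQuery other = (CoordAwareQuery) o;
        return queryText.equals(other.queryText)
                && coordDisabled == other.coordDisabled;
    }
}
```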
On Thu, Nov 12, 2009 at 12:12 PM, Chris Hostetter
wrote:
>
> : I want
Your first definition of text_fr seems to be correct and should work
as expected. I tested it and it worked fine ("mémé" was highlighted).
What was the output of HTMLStripCharFilterFactory in analysis.jsp?
In my analysis.jsp, I got "ça va mémé ?".
Koji
Kundig, Andreas wrote:
Hello
I indexed an
Can you do:
ps auxwww | grep java
(or whatever you need to do to show us the command-line used for starting the
servlet container)
I assume you are using Solr 1.4?
Otis --
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
I have a Solr instance in a directory at one location on my hard drive,
and I set solr.solr.home to that location. When I open it, I can add
documents and close the instance with no problem, but the data is
written to a new directory, solr/data, in the current working directory.
Has anyone seen this before?
-S
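One way to pin the data directory regardless of the JVM's working directory is an absolute dataDir in solrconfig.xml (the path below is a placeholder; without this setting, Solr resolves ./solr/data relative to the current working directory):

```xml
<!-- Hypothetical absolute path; adjust to your actual Solr home -->
<dataDir>/path/to/solr/home/data</dataDir>
```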
Hi Lance,
Lance Norskog wrote:
> What platform are you using? Windows does not use UTF-8 by default,
> and this can cause subtle problems. If you can do the same thing on
> other platforms (Linux, Mac) that would help narrow down the problem.
My Solr server runs in a Tomcat server on an Ubuntu Linu
Thanks
Could you elaborate on what counts as a compatible schema change?
Do you mean a schema change which affects only query time?
darniz
Otis Gospodnetic wrote:
>
> Darniz,
>
> Yes, if there is an incompatible schema change, you need to reindex your
> documents.
>
> Otis
> P.S.
> Please include the c
Your autocommit settings are still pretty aggressive, causing very frequent
commits, and that is what is using your CPU.
Yes, splitting the servers into a master and slaves tends to be the
performant/scalable way to go. There is no real downside to replication,
really, just a bit of network traffic.
O
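For reference, a less aggressive autocommit configuration in solrconfig.xml might look like the sketch below; the threshold values are purely illustrative and should be tuned to your own latency needs:

```xml
<!-- Illustrative values only: commit at most every 10,000 docs or
     every 5 minutes, instead of every few seconds -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>300000</maxTime> <!-- milliseconds -->
  </autoCommit>
</updateHandler>
```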
Can "boost" attribute really be specified for a field in the schema? I wasn't
aware of that, and I don't see it on http://wiki.apache.org/solr/SchemaXml .
Maybe you are mixing it up with
http://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22field.22
?
Otis
--
Sematext is hirin
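To illustrate the distinction Otis is drawing: an index-time boost is set in the update XML message (per the UpdateXmlMessages wiki page), not in schema.xml. A sketch, with hypothetical field names:

```xml
<!-- boost goes on the <doc> or <field> elements of the update message,
     not on field definitions in schema.xml; "title" and "body" are
     assumed example fields -->
<add>
  <doc boost="2.0">
    <field name="title" boost="3.0">Some title</field>
    <field name="body">Some body text</field>
  </doc>
</add>
```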
Darniz,
Yes, if there is an incompatible schema change, you need to reindex your
documents.
Otis
P.S.
Please include the copy of the response when replying, so the
context/background of your question is easy to figure out.
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene,
Ah, thanks for the tip about swapping out the JDK jar for the
log4j jar. I think I was running into this issue and couldn't
figure out why Solr logging couldn't be configured when running
inside Hadoop, which uses log4j; maybe this was the issue?
On Wed, Nov 18, 2009 at 9:11 AM, Ryan McKinley wr
What platform are you using? Windows does not use UTF-8 by default,
and this can cause subtle problems. If you can do the same thing on
other platforms (Linux, Mac) that would help narrow down the problem.
On Wed, Nov 18, 2009 at 8:15 AM, Sascha Szott wrote:
> Hi Erik,
>
> Erik Hatcher wrote:
>>
Specifying file.encoding did work, although I don't think it is a suitable
workaround for my use case. Any idea what my next step is toward getting a bug
opened?
Thanks,
Joe
> Date: Wed, 18 Nov 2009 16:15:55 +0530
> Subject: Re: UTF-8 Character Set not specifed on OutputStreamWriter in
>
Thanks.
So going by your reply, can I assume that if there is a configuration change
to my schema I have to index my documents again, and that
there is no shortcut for updating the index?
We can't afford to index 2 million documents again and again.
There should be some utility or command line which doe
Solr includes slf4j-jdk14-1.5.5.jar; if you want to use the nop (or
log4j, or loopback) impl you will need to include that in your own
project.
Solr uses slf4j so that each user can decide on their logging
implementation; it includes the JDK version so that something works
off-the-shelf, but
Hi Erik,
Erik Hatcher wrote:
Can you give me a test document that causes an issue? (maybe send me a
Solr XML document in private e-mail). I'll see what I can do once I
can see the issue first hand.
Thank you! Just try the utf8-example.xml file in the exampledocs
directory. After having index
Cheers.. investigating...
2009/11/18 sophSophie :
>
> Hi,
>
> previously I was using a NGramFilterFactory for the completion on my website
> but the EdgeNGramTokenizerFactory seems to be more pertinent.
>
> I defined my own field type but when I start solr I got the error log :
>
> GRAVE: java.la
Hi,
previously I was using an NGramFilterFactory for the completion on my website,
but the EdgeNGramTokenizerFactory seems to be more appropriate.
I defined my own field type, but when I start Solr I get the following error in the log:
GRAVE: java.lang.ClassCastException:
org.apache.solr.analysis.EdgeNGramTokenize
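Since the stack trace is cut off, this is only a guess: a ClassCastException on EdgeNGramTokenizerFactory commonly means a tokenizer factory was declared as a <filter>. A sketch of a field type that should load, with an assumed name and assumed companion filters:

```xml
<!-- "autocomplete" is an assumed name; the key point is that
     EdgeNGramTokenizerFactory is a tokenizer and must appear in
     <tokenizer>, not <filter>. (If you want a filter, use
     solr.EdgeNGramFilterFactory instead.) -->
<fieldType name="autocomplete" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.EdgeNGramTokenizerFactory"
               minGramSize="1" maxGramSize="15"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```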
Sascha,
Can you give me a test document that causes an issue? (maybe send me
a Solr XML document in private e-mail). I'll see what I can do once
I can see the issue first hand.
Erik
On Nov 18, 2009, at 2:48 PM, Sascha Szott wrote:
Hi,
I've played around with Solr's VelocityR
Hi,
I've played around with Solr's VelocityResponseWriter (which is indeed a
very useful feature for rapid prototyping). I've realized that Velocity
uses ISO-8859-1 as its default character encoding. I've changed this setting
to UTF-8 in my velocity.properties file (inside the conf directory), i.e
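For reference, the change described above would look something like this in velocity.properties (a sketch of the setting, assuming Velocity's standard property names):

```properties
# In velocity.properties (inside the conf directory): override
# Velocity's ISO-8859-1 default for template input and output
input.encoding=UTF-8
output.encoding=UTF-8
```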
Erik,
Erik Hatcher wrote:
Andrea,
I'd guess you have json.nl=arrarr set for your dismax handler (or
request).
sigh, you're right, sorry for the noise :/
Andrea
Andrea,
I'd guess you have json.nl=arrarr set for your dismax handler (or
request).
Erik
On Nov 18, 2009, at 12:01 PM, Andrea Campi wrote:
Hi,
not sure if this is something new in Solr 1.4, but I just noticed that
facet results are serialized differently with standard and dismax
I have the following field configured in schema.xml:
Where "text" is the type which came with the Solr distribution. I have
not been able to get this configuration to alter any document scores,
and if I look at the indexes in Luke there is no change in the norms
(compared to an un-boosted equiv
Hello
I indexed an HTML document with decimal HTML entity encodings: the character
é (e with an acute accent) is encoded as &#233;. The exact content of the
document is:
ça va mémé ?
A search for 'mémé' returns no documents. If I put the line above in Solr
admin's analysis.jsp it also doesn't matc
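For context, an analysis chain like the one discussed in this thread might be sketched as follows; the tokenizer and filter choices here are assumptions, but the key piece is the char filter, which decodes &#233; to é before tokenization so that a search for "mémé" can match:

```xml
<!-- Sketch only: HTMLStripCharFilterFactory runs before the tokenizer
     and decodes HTML entities; tokenizer/filter choices are assumed -->
<fieldType name="text_fr" class="solr.TextField">
  <analyzer>
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```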
Hi,
not sure if this is something new in Solr 1.4, but I just noticed that
facet results are serialized differently with standard and dismax when
using wt=ruby.
Standard returns:
'my_facet'=>{'20344'=>1}
Whereas dismax has:
'my_facet'=>['20344',1]
Admittedly this is not a big deal, it's eas
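If the difference comes from a json.nl setting on one handler (as suggested elsewhere in this thread), forcing it explicitly on the request should make both handlers serialize named lists the same way. A hypothetical request (host, core path, and field name are assumptions):

```
http://localhost:8983/solr/select?qt=dismax&q=foo&facet=true&facet.field=my_facet&wt=ruby&json.nl=map
```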
On Wed, Nov 18, 2009 at 6:56 AM, Joe Kessel wrote:
>
> While trying to make use of the StreamingUpdateSolrServer for updates with
> the release code for Solr 1.4, I noticed some characters such as é did not
> show up in the index correctly. The code should set the CharsetName via the
> constructor
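The fix being described, passing an explicit charset name to the OutputStreamWriter constructor instead of relying on the platform default, looks like this in isolation (a sketch, not the actual StreamingUpdateSolrServer code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;

class Utf8WriterSketch {
    // Encodes a string to bytes through an OutputStreamWriter with an
    // explicit charset. Without the second constructor argument the
    // writer falls back to the platform default encoding (e.g.
    // windows-1252 on Windows), which mangles characters like é.
    static byte[] encode(String s) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        Writer w = new OutputStreamWriter(out, "UTF-8");
        w.write(s);
        w.close();
        return out.toByteArray();
    }
}
```

With the explicit charset, "é" always encodes to the two UTF-8 bytes 0xC3 0xA9, regardless of platform.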
Thanks. I see. It seems that slf4j-nop-1.5.5.jar is the only jar file missing
in solrj-lib, so I suggest that it should be included in the next release.
Per Halvor
-----Original message-----
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: 17 November 2009 20:51
To: 'solr-u