First...
: In-Reply-To: <21051199.p...@talk.nabble.com>
: Subject: correct use of copyFields in schema.xml
http://people.apache.org/~hossman/#threadhijack
Thread Hijacking on Mailing Lists
When starting a new discussion on a mailing list, please do not reply to
an existing message, instead start a fresh email.
Hi,
I am prototyping language search using Solr 1.3. I have 3 fields in the
schema: id, content and language.
I am indexing 3 PDF files; the languages are foroyo, Chinese and Japanese.
I use xpdf to convert the PDF content to text and push the text to Solr
in the content field.
What is the ana
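For context, the three-field schema described in this kind of setup might look as follows. This is only a hedged illustration; the field types and attributes are assumptions, not taken from the poster's actual schema.xml:

```xml
<!-- Hypothetical schema.xml fragment: id, content, language -->
<fields>
  <field name="id"       type="string" indexed="true" stored="true" required="true"/>
  <field name="content"  type="text"   indexed="true" stored="true"/>
  <field name="language" type="string" indexed="true" stored="true"/>
</fields>
<uniqueKey>id</uniqueKey>
```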
Hi Yonik,
Thanks for the quick response. Do you know when 1.4 is scheduled to
be released, or if it is possible to backport the NIO
implementation into 1.3? If you could give me a pointer that would be
great. It seems like a huge performance gain that would be of value
to a lot of p
On Wed, Dec 17, 2008 at 10:53 PM, Matthew Runo wrote:
> I'm using Java 6 and it's compiling for me.
>
I believe rhino is included by default in Java 6
--
Regards,
Shalin Shekhar Mangar.
Grant
It's completely crazy to do something like this, I know, but the customer wants
it. I'm really trying to figure out how to do it in a better way, maybe using
the (auto-suggest) filter from Solr 1.3 to get all the words starting with
some letter and cache the letter on the client side; our client is
Love it! Congratulations Michiel.
Matt
On Wed, Dec 17, 2008 at 9:15 PM, Chris Hostetter
wrote:
> (replies to solr-user please)
>
> On behalf of the Solr Committers, I'm happy to announce that the Solr
> Logo Contest is officially concluded. (Woot!)
>
> And the Winner Is...
>
> https://issues.
Hi,
Is there a recommended index size (on disk, number of documents) for when
to start partitioning it to ensure good response time?
Thanks,
S
(replies to solr-user please)
On behalf of the Solr Committers, I'm happy to announce that the Solr
Logo Contest is officially concluded. (Woot!)
And the Winner Is...
https://issues.apache.org/jira/secure/attachment/12394264/apache_solr_a_red.jpg
...by Michiel
We ran into a few hiccups dur
On Wed, Dec 17, 2008 at 7:52 PM, Sammy Yu wrote:
> I read somewhere that there are contention issues with the current
> cache implementation of LRUCache in 1.3 in that it is synchronous,
> could this be the reason why the filter queries are slow?
Probably not. The change is much more likely due to
Hi,
I'm making a simple query that uses the standard query handler to
make constructed query such as
title:iphone OR text:firmware. Next, I use a filter query to
limit the results to documents from within the last year via
fq=+dateCreated:[NOW-1YEAR/MONTH TO NOW/MONTH] which is significa
Hi,
We need to enhance our existing Solr search to support queries with 'AND'
/'OR'. That is, when the user enters "(breast OR liver) AND cancer" in the query
field, the documents with 'liver' & 'cancer' and the documents with 'breast'
& 'cancer' should be returned.
Our existing solr schema includes m
All terms from all docs? Really?
At any rate, see http://wiki.apache.org/solr/TermsComponent May need
a mod to not require any field, but for now you can enter all fields
(which you can get from LukeRequestHandler)
-Grant
On Dec 17, 2008, at 2:17 PM, roberto wrote:
Hello,
I need to g
Don't forget to consider scaling concerns (if there are any). There are
strong differences in the number of searches we receive for each
language. We chose to create separate schema and config per language so
that we can throw servers at a particular language (or set of languages)
if we needed to.
MDC looks like the way to go. Thanks for that info, David.
Erik
On Dec 17, 2008, at 3:13 PM, Smiley, David W. wrote:
You bet it does; that's the point!
Under the covers, I believe it's simply a thread-local hashmap.
Nothing is stored in the loggers the code is using. We just need
I think this depends on your needs.
If you will make one search across many languages, and your docs won't get too
big, you can put all the data in one schema.xml and configure your field
types on a per-language basis.
2008/12/17 Julian Davchev
> Hi,
> From my study on solr and lucene so far it seems t
You bet it does; that's the point!
Under the covers, I believe it's simply a thread-local hashmap. Nothing is
stored in the loggers the code is using. We just need to be careful to remove
the variable from MDC when we're done.
~ David
On 12/17/08 3:09 PM, "Ryan McKinley" wrote:
but does t
but does this work in a multi-threaded environment?
if multiple requests are coming in on multiple threads, would it still
be accurate? Perhaps that depends on the underlying implementation?
adding the core to the MDC within a RequestHandler context seems
reasonable and minimally invasive.
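To make the thread-safety point concrete: an MDC-style context boils down to a static ThreadLocal holding a per-thread map, so entries set on one request thread are invisible to others. A minimal self-contained sketch (class and method names are hypothetical, not the real slf4j internals):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal MDC-style context: one map per thread, via a static ThreadLocal.
public class CoreContext {
    private static final ThreadLocal<Map<String, String>> CTX =
        new ThreadLocal<Map<String, String>>() {
            @Override protected Map<String, String> initialValue() {
                return new HashMap<String, String>();
            }
        };

    public static void put(String key, String val) { CTX.get().put(key, val); }
    public static String get(String key)           { return CTX.get().get(key); }
    public static void remove(String key)          { CTX.get().remove(key); }
}
```

Each thread gets its own map, so two requests handled on different threads can both put a "core" entry without clobbering each other; the one thing to remember is to remove the entry when the request finishes.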
Hi,
From my study of Solr and Lucene so far it seems that I will use a single
schema. At least I don't see a scenario where I'd need more than that.
So the question is how do I approach multilanguage indexing and multilanguage
searching. Will it really make sense for just searching a word.. or rather
I should s
Thank you,
But that's a WebSphere reference. Not a Weblogic one.
2008/12/11 Otis Gospodnetic
> http://wiki.apache.org/solr/SolrWebSphere
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
> > From: Alexander Ramos Jardim
> > To: solr-u
On Dec 17, 2008, at 2:17 PM, Erik Hatcher wrote:
We get the Logger everytime we use it with something like:
Logger log = LoggerFactory.getLogger(classname+":"+core.getName() );
but with a dot separator and the core in front, since the logging
configuration treats dots as hierarchical logging
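A hedged sketch of that naming scheme, core name first with a dot separator so the logging configuration can treat it hierarchically. It uses java.util.logging only to keep the example self-contained (the thread itself is about slf4j), and the helper name is made up:

```java
import java.util.logging.Logger;

// Builds a per-core logger name such as "core0.org.apache.solr.core.SolrCore",
// so a logging-config entry for "core0" captures everything that core logs.
public class PerCoreLoggers {
    public static Logger forCore(String coreName, Class<?> cls) {
        return Logger.getLogger(coreName + "." + cls.getName());
    }
}
```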
I propose that MDC or NDC be used instead. I prefer MDC. I've written some
server-side multi-threaded code where each Thread would run a job and I wanted
the job name in the logs.
http://www.slf4j.org/api/org/slf4j/MDC.html
At some early point when Solr receives a request, you simply store a
Hello,
I need to get all terms from all documents to be placed in my interface,
almost like the facets. How can I do it?
thanks
--
"Without love, we are birds with broken wings."
Morrie
On Dec 17, 2008, at 12:24 PM, Ryan McKinley wrote:
I'm not sure I understand...
are you suggesting that rather than configuring our logger like this:
static Logger log = LoggerFactory.getLogger(SolrCore.class);
We get the Logger everytime we use it with something like:
Logger log = LoggerFact
I was thinking, maybe we should write a patch to fix this issue.
For instance by making a dispatch servlet (with a "core" parameter or
request attribute) that would act the same way as the filter but
provide a cross context addressable entry point.
What do you think ?
Jerome
On Wed, Dec 17, 200
On Wed, Dec 17, 2008 at 10:54 PM, Ryan McKinley wrote:
> I'm not sure I understand...
>
> are you suggesting that rather than configuring our logger like this:
> static Logger log = LoggerFactory.getLogger(SolrCore.class);
>
> We get the Logger everytime we use it with something like:
> Logger l
Maybe there's an 'internal query' concept in J2EE that could be a workaround?
I'm not really a J2EE expert...
Jerome.
On Wed, Dec 17, 2008 at 5:09 PM, Smiley, David W. wrote:
> This bothers me too. I find it really strange that Solr's entry-point is a
> servlet filter instead of a servlet.
>
>
I'm not sure I understand...
are you suggesting that rather than configuring our logger like this:
static Logger log = LoggerFactory.getLogger(SolrCore.class);
We get the Logger everytime we use it with something like:
Logger log = LoggerFactory.getLogger(classname+":"+core.getName() );
That
at the root of the issue is that logging uses a static logger:
static Logger log = LoggerFactory.getLogger(SolrCore.class);
I don't know of any minimally invasive way to get around this...
On Dec 17, 2008, at 10:22 AM, Marc Sturlese wrote:
I am thinking of doing a hack to specify the lo
I'm using Java 6 and it's compiling for me.
I'm doing..
ant clean
ant dist
and it works just fine. Maybe try an 'ant clean'?
Thanks for your time!
Matthew Runo
Software Engineer, Zappos.com
mr...@zappos.com - 702-943-7833
On Dec 17, 2008, at 9:17 AM, Toby Cole wrote:
I came across this too
Thanks Toby.
Alternatively:
Under contrib/javascript/build.xml -> dist target, I removed the
dependency on 'docs' to circumvent the problem.
But maybe it would be great to get js.jar from the Rhino library
distributed (if not for license contradictions) to avoid this altogether.
Toby Cole wrote:
I came across this too earlier, I just deleted the contrib/javascript
directory.
Of course, if you need javascript library then you'll have to get it
building.
Sorry, probably not that helpful. :)
Toby.
On 17 Dec 2008, at 17:03, Kay Kay wrote:
I downloaded the latest .tgz and ran
$ ant di
This bothers me too. I find it really strange that Solr's entry-point is a
servlet filter instead of a servlet.
~ David
On 12/17/08 12:07 PM, "Jérôme Etévé" wrote:
Hi all,
In Solr's web.xml (/lucene/solr/trunk/src/webapp/web/WEB-INF/web.xml),
it's written that
"It is unnecessary, and potentia
Hi all,
In Solr's web.xml (/lucene/solr/trunk/src/webapp/web/WEB-INF/web.xml),
it's written that
"It is unnecessary, and potentially problematic, to have the SolrDispatchFilter
configured to also filter on forwards. Do not configure
this dispatcher as FORWARD."
The problem is that if f
I downloaded the latest .tgz and ran
$ ant dist
docs:
[mkdir] Created dir:
/opt/src/apache-solr-nightly/contrib/javascript/dist/doc
[java] Exception in thread "main" java.lang.NoClassDefFoundError:
org/mozilla/javascript/tools/shell/Main
[java] at JsRun.main(Unknown Source)
The ISOLatin1 filter does most of this; œ and ö are both converted to o.
hth
Paul
On Wed, Dec 17, 2008 at 1:27 AM, Stephen Weiss wrote:
> I believe the german porter stemmer should handle this. I haven't used it
> with SOLR but I've used it with other projects, and basically, when the word
> is par
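For reference, the accent-folding filter mentioned above is usually wired into an analyzer chain along these lines (a sketch; the factory class name is the one shipped with Solr 1.3, while the field type name is made up):

```xml
<fieldType name="text_folded" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.ISOLatin1AccentFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```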
Can't we log with the core as part of the context of the logger,
rather than just the classname? This would give you core logging
granularity just by config, rather than scraping.
Yes?
Erik
On Dec 17, 2008, at 9:47 AM, Ryan McKinley wrote:
As is, the log classes are statically bou
I am thinking of doing a hack to specify the log path of the solr cores in
solr.xml.
Would like to do something like:
...
Any advice about how should I start?
Thanks in advance
ryantxu wrote:
>
> As is, the log classes are statically bound to the class,
Hi everybody,
So I have applied Ivan's latest patch to a clean 1.3.
I built it using 'ant compile' and 'ant dist', got the solr build war
file.
Moved that into the Tomcat directory.
Modified my solrconfig.xml to include the following:
class="org.apache.solr.handler.component.CollapseCo
As is, the log classes are statically bound to the class, so they are
configured for the entire VM context.
Off hand i can't think of any good work around either. The only thing
to note is that most core specific log messages include the core name
as a prefix: [core0] ...
ryan
On Dec 1
Hi,
Is it possible to incrementally index a given document? Meaning, I would
like to index a large field in a separate request so that even if it
fails I would still have the basic document indexed.
Thanks,
Rajesh
Hi,
I need to resolve the following issue:
I need to get the value of all the fields defined in the schema when I
have the value of the unique-key field or the corresponding doc number. I
require this because we need to find the value of a particular field if a
document is duplicated.
*Pro
Hello all,
Reviewing the various examples that come with Solr, I can't make up my
mind whether the copyFields element should be nested within the
fields element or not. The http://wiki.apache.org/solr/SchemaXml
documentation makes it clear it should be outside, yet a number
of examples have it nested.
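For what it's worth, the layout the wiki describes puts copyField as a sibling of the fields element, not inside it. A hedged sketch (field names are placeholders):

```xml
<schema name="example" version="1.1">
  <fields>
    <field name="title" type="text" indexed="true" stored="true"/>
    <field name="body"  type="text" indexed="true" stored="true"/>
    <field name="all"   type="text" indexed="true" stored="false" multiValued="true"/>
  </fields>
  <!-- copyField sits outside <fields>, alongside it -->
  <copyField source="title" dest="all"/>
  <copyField source="body"  dest="all"/>
</schema>
```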
Hi Lance,
Can you tell us what this parameter is and how to set it?
I'm also stuck with the same problem :(
Thanks !!
Jerome
On Mon, Sep 8, 2008 at 6:02 PM, Lance Norskog wrote:
> You can give a default core set by adding a default parameter to the query
> in solrconfig.xml. This is ha
Hey there,
My original app (before getting into Solr) used to have 3 indexes in the same
web app. I used log4j with a log file per index.
Now in Solr I have different cores and I am trying to set up a log file per
core via slf4j but don't know how to do it.
As I understood this thread:
http://www.nabbl
Hey there,
1.- I am trying to use date facets but I am running into trouble. I want to use
the same field for 2 facet classifications: I want to show the count of the
docs of the last week and the count of the docs of the last month.
What I am doing is:
source_date
NOW/DAY-1MONTH
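One sketch of a way to get both counts off the same field is a pair of facet.query parameters with date-math ranges, instead of (or alongside) facet.date; this sidesteps needing two facet.date configurations on one field. The parameter values here are illustrative:

```
facet=true
facet.query=source_date:[NOW/DAY-7DAYS TO NOW]
facet.query=source_date:[NOW/DAY-1MONTH TO NOW]
```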
Hi Lance,
thanks for your feedback!
The problem with your suggestion is that we do not want to exclude the
fields without a date but boost them differently, which Chris has
provided a solution for (see other posts in this thread).
Cheers,
Robert
Norskog, Lance wrote:
> This query:field: