On Wed, May 5, 2010 at 10:23 AM, Ranveer wrote:
>
> How many columns can we define in the schema?
> I already have around 100 columns in the schema.
>
>
You can have as many as you need.
--
Regards,
Shalin Shekhar Mangar.
On Wed, May 5, 2010 at 8:41 AM, dc tech wrote:
> We are using Solr in a production setup with a jRuby on Rails front end,
> with about 20 different instances of Solr running on heavy-duty hardware.
> The setup is a load-balanced front end (jRoR) on a pair of machines and the
> Solr backends on a d
Hey,
I have the same list, and I added the extraction library (the Apache Solr
Cell jar) to it, though you might not need it specifically inside the war file.
Marc
> From: sagar...@opentext.com
> To: solr-user@lucene.apache.org
> Date: Wed, 5 May 2010 10:21:36 +0530
> Subject: RE: Problem with pdf, u
Marc & Sandhya,
Did you use Solr from trunk?
I used the Solr 1.4 distribution, and even after copying all the jars, I still
get the same results for the PDFs I posted here.
Thanks.
On Wed, May 5, 2010 at 1:09 PM, Marc Ghorayeb wrote:
>
> Hey,
> I have the same list, and I added to it the extraction library
Praveen,
I am indeed using a trunk version from last week's svn, I think. You could
always try a version from the Hudson builds. I did not try this procedure with
Solr's 1.4 release, though.
Marc
___
Hi there,
I'm working with the Solr PECL extension and wondering how to permanently
activate spellchecking.
I couldn't find a command in the PECL library to activate it from the client,
like $solrQuery->enableFacet(true) for facets.
Or is it possible to keep spellchecking permanently activa
Praveen,
I got the solr 1.4 release from here,
http://download.filehat.com/apache/lucene/solr/1.4.0/
Thanks,
Sandhya
-Original Message-
From: Praveen Agrawal [mailto:pkal...@gmail.com]
Sent: Wednesday, May 05, 2010 1:52 PM
To: solr-user@lucene.apache.org
Subject: Re: Problem with pdf,
Morning all,
I was wondering if anyone had written an XSD/DTD for schema.xml? A quick look
at the wiki (http://wiki.apache.org/solr/SchemaXml) suggests that this has yet
to be done.
I'm starting to research how our application could create and configure Solr
cores at runtime; and I think schem
Same for me; IMO it would be nice to have that.
Regards,
Andrea
On 05/05/2010 11:58, Jon Poulton wrote:
Morning all,
I was wondering if anyone had written an XSD/DTD for schema.xml? A quick look
at the wiki (http://wiki.apache.org/solr/SchemaXml) suggests that this has yet
to
For this scenario, you'll want to use copyField. Search on the
lowercased field, but facet on a field without lowercasing involved.
Erik
On May 4, 2010, at 7:24 PM, dbashford wrote:
I've looked through the history and tried a lot of things but can't
quite get
this to work.
Used
(10/05/05 16:33), Shalin Shekhar Mangar wrote:
On Wed, May 5, 2010 at 10:23 AM, Ranveer wrote:
How many columns can we define in the schema?
I already have around 100 columns in the schema.
You can have as many as you need.
Many fields consume PermGen, though.
https://issues.apache.
>
> All my fields are stored.
>
> And if my field name is "state" means that your suggestion is appending
> "fl=state", then no, that's not doing anything for me. =(
>
> The above config gets me part of the way to where I need to be. Storing,
> for instance, "Alaska" in such a way that
On May 4, 2010, at 7:20 AM, pointbreak+s...@ml1.net wrote:
> I want to link documents to multiple spatial points, and filter
> documents based on a bounding box. I was expecting that the
> solr.PointType would help me with that, but run into a problem. When I
> create a filter, it seems that Solr
Hi,
Currently there are similar topics active on the mailing list, but I did not
want to hijack those topics.
I have currently indexed 100,000 documents; they are Microsoft Office/PDF
etc. documents that I convert to TXT files before indexing. Files are between
1-500 pages. When I search something
Hi,
I have an existing Lucene application which I want to port to Solr.
A scenario I need to support requires me to use dynamic fields
with Solr, since users can add new fields at runtime.
At the same time, the existing Lucene application is using a
PerFieldAnalyzerWrapper in order to use differe
Paolo,
Solr takes care of associating fields with the proper analysis defined
in schema.xml already. This, of course, depends on which query parser
you're using, but both the standard Solr query parser and dismax do
the right thing analysis-wise automatically.
But, I think you need to el
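To illustrate the dynamic-fields part of the question: per-field analysis in Solr is driven by the fieldType, so runtime-created fields can opt into an analysis chain through naming conventions. A minimal schema.xml sketch (the type names here assume the stock example schema):

```xml
<!-- Sketch: suffix conventions select the analysis chain at runtime. -->
<!-- A user-created field "comments_t" gets the "text" analyzer;      -->
<!-- "tag_s" is indexed verbatim as a string.                         -->
<dynamicField name="*_t" type="text"   indexed="true" stored="true"/>
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
<dynamicField name="*_dt" type="date"  indexed="true" stored="true"/>
```

This plays the role PerFieldAnalyzerWrapper plays in raw Lucene: the field's suffix, rather than a wrapper class, decides which analyzer applies.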
You can't get 'round this without creating a copyField or similar. It's easy
to do in schema.xml.
Store one field (e.g. 'state') using a fieldType that is configured to use a
LowerCaseFilterFactory, and the other not (e.g. 'state_verbatim').
When you search, use the lowercase one for case-insensiti
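A minimal schema.xml sketch of this approach (the field and type names are illustrative, not prescribed by the thread):

```xml
<!-- Lowercased variant for case-insensitive search -->
<fieldType name="string_lc" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="state" type="string_lc" indexed="true" stored="true"/>
<field name="state_verbatim" type="string" indexed="true" stored="true"/>

<!-- Populate the verbatim copy automatically at index time -->
<copyField source="state" dest="state_verbatim"/>
```

Query and filter on `state`, but facet with `facet.field=state_verbatim` so the original casing ("Alaska", not "alaska") comes back in the facet counts.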
This was extremely helpful. Thanks a lot.
On 05/04/2010 05:30 PM, Chris Hostetter wrote:
First off: I would suggest that instead of doing a simple prefix search,
you look into using EdgeNGrams for this sort of thing.
I'm also assuming, since you need custom scoring for this, you aren't going
to
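For readers following along, a sketch of an EdgeNGram-based fieldType for prefix matching (attribute values are illustrative; tune gram sizes to your data):

```xml
<!-- Index-time edge n-grams turn "alaska" into a, al, ala, ... so a   -->
<!-- plain term query on the prefix matches. The query analyzer must  -->
<!-- NOT gram-ify, or every prefix of the query would match too.      -->
<fieldType name="prefix_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory"
            minGramSize="1" maxGramSize="20" side="front"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Unlike wildcard/prefix queries, matches against this field go through normal scoring, which is why it pairs well with the custom-scoring requirement mentioned above.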
Hi Erik,
first of all, thanks for your reply.
The "source" of my problems is that I do not know the field names in advance.
Users are allowed to choose their own field names; they can, at runtime,
add new fields, and different Lucene documents might have
different field names.
So, in addit
Hey Guys,
There is some work on SOLR-17 to track this. I put up a patch that's
incomplete, based on the prior work done by Mike Baranczak and Hoss and others.
I've been meaning to get back to it, but have been swamped.
Contributions/updates welcome!
Cheers,
Chris
[1] http://issues.apache.org/
Hi Peter
A full list of spell check parameters are available here
http://wiki.apache.org/solr/SpellCheckComponent
With the PECL extension, there is currently no special method that handles
the spell check component so you would have to use the SolrParams::set() or
SolrParams::setParam() method a
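If the goal is truly permanent activation rather than per-request parameters, one option is to bake the parameters into the request handler defaults in solrconfig.xml. A sketch, assuming a spellcheck search component named "spellcheck" is already configured:

```xml
<!-- Sketch: spellcheck is now on for every request to this handler, -->
<!-- so the PHP client needs no extra calls at all.                  -->
<requestHandler name="standard" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

For the per-request route, the generic setter mentioned above (e.g. `$query->setParam('spellcheck', 'true')`) achieves the same effect from the PECL client.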
On 5 May 2010 14:19, Erik Hatcher wrote:
> But, I think you need to elaborate on what you're doing in your Lucene
> application to know more specifically.
Hi Erik,
perhaps, this is another way to explain and maybe solve my issue...
At query time (everything here is just an illustrative example):
In the solrconfig, is there any way to have a "fragmenter" that doesn't
escape HTML in the text? We are going to render the full text of the field
and want to render the text as is (with the HTML intact).
--
View this message in context:
http://lucene.472066.n3.nabble.com/Highlighting-Turn-of-Esc
Hey everyone,
I'm curious whether anyone has experience working with the company NStein and
their Solr-based search solution S3. Any comments on performance, usability,
support, etc. would be really appreciated.
Thanks,
-Kallin Nagelberg
Thanks Paul, that will certainly work. I was just hoping there was a way I
could write my own class that would inject this value as needed, instead of
precomputing it and passing it along in the params.
My specific use case: instead of using dataimporter.last_index_time I want
to us
Vishal,
Look at the 'synonyms.txt' file under /example/solr/conf and you can get
the idea.
~Umesh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Alphabetic-range-bucketing-tp739467p779314.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
On 05.05.2010 03:49, Chris Hostetter wrote:
>
> : Are you accidentally building the spellchecker database on each commit?
> ...
> : > This could also be caused by performing an optimize after the commit, or
> it
> : > could be caused by auto warming the caches, or a combination of both
The mail servers are often not too friendly with attachments, so people
either inline configs or put them on a server and post the URL.
HTH
Erick
On Wed, May 5, 2010 at 12:06 PM, Markus Fischer wrote:
> Hi,
>
> On 05.05.2010 03:49, Chris Hostetter wrote:
> >
> > : Are you accidentally building
It reports that Jukka has resolved the issue (Tika-419), and it is now waiting
for Grant to verify (Solr-1902). But it seems the fix will only be available
in the 0.8 version of Tika.
If it solves the problem, is there a way to get it now? Any SVN trunk access,
etc.? All I see there is a 0.7 src zip to downlo
OK, you can't write a variable, but you can write a function (an Evaluator).
It will look something like:
${dataimporter.functions.foo()}
http://wiki.apache.org/solr/DataImportHandler#Custom_formatting_in_query_and_url_using_Functions
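For context, a sketch of the wiring in data-config.xml (the class name, function name, and SQL here are hypothetical; see the wiki page above for the Evaluator contract):

```xml
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/db"/>
  <!-- Hypothetical custom Evaluator, registered so it is callable below -->
  <function name="foo" class="com.example.MyEvaluator"/>
  <document>
    <!-- The function's return value is interpolated into the query -->
    <entity name="item"
            query="SELECT id FROM item WHERE updated > '${dataimporter.functions.foo()}'"/>
  </document>
</dataConfig>
```

This is how a computed value (such as a custom timestamp replacing dataimporter.last_index_time) can be produced inside DIH rather than passed in via request params.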
On Wed, May 5, 2010 at 9:12 PM, Blargy wrote:
>
> Thanks Paul, tha
I followed some of the previous posts; there seems to be a general problem
with trying to use XInclude in the Solr schema.xml. I used several variations
to include my fieldType declarations. I keep getting a file-not-found error.
I put the file first in SOLRHOME, then in CATALINA_HO
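For reference, a sketch of what the XInclude wiring can look like, assuming your Solr version's XML loader has XInclude enabled (paths are illustrative). Note that relative hrefs may be resolved against the servlet container's working directory rather than the conf dir, which commonly produces exactly this file-not-found error; an absolute file URL is the usual workaround:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<schema name="example" version="1.2">
  <types xmlns:xi="http://www.w3.org/2001/XInclude">
    <!-- Illustrative: an absolute URL sidesteps relative-path resolution.
         The included file must itself be well-formed XML with one root. -->
    <xi:include href="file:///path/to/solr/conf/shared-fieldtypes.xml"/>
  </types>
  <fields>
    <field name="id" type="string" indexed="true" stored="true"/>
  </fields>
</schema>
```
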
Thanks for the reply; I was starting to wonder whether I had written it in
such a way that no one would ever reply.
:: Shouldn't all the parameters be added to the solr.xml core2 that were
:
:yep .. it does in fact look like a bug in the solr.xml persistence code.
:please file a bug in Jira.
Will do.
:: passed i
Hi, I'm a long-time lurker, first-time poster. I have an issue with a
search filter I need to resolve and I'm not sure how to handle it. I
have documents like the one below that I am searching against. The
field "editionsarray" is only present in the document if it has
specific editions attached to
> Hi, I'm a long-time lurker, first-time poster. I have an issue with a
> search filter I need to resolve and I'm not sure how to handle it. I have
> documents like the one below that I am searching against. The field
> "editionsarray" is only present in the document if it has specific e
Hello,
I have an issue trying to use DataImportHandler on a MySQL database.
I have set up a multicore installation, and have something like the
following on my disk:
/solr
  /cores
    solr.xml
    /lib
      mysql-connector-java-5.1.12-bin.jar
    /core0
      /conf
      /data
    /core1
Shouldn't the lib folder be in each /coreX folder?
--Robert
-Original Message-
From: Johan Cwiklinski [mailto:johan.cwiklin...@ajlsm.com]
Sent: Wednesday, May 05, 2010 3:23 PM
To: solr-user@lucene.apache.org
Subject: DIH: mysql driver not found
Hello,
I have an issue trying to use DataI
(10/05/05 22:08), Serdar Sahin wrote:
Hi,
Currently there are similar topics active on the mailing list, but I did not
want to hijack those topics.
I have currently indexed 100,000 documents; they are Microsoft Office/PDF
etc. documents that I convert to TXT files before indexing. Files are betw
Thanks, Noble, this is exactly what I was looking for.
What is the preferred way to query Solr from within these sorts of classes?
Should I grab the core from the context that is being passed in, or should I
be using SolrJ?
Can you provide an example and/or some tutorials/documentation?
Once aga
worked like a charm, thanks
On Wed, May 5, 2010 at 2:00 PM, Ahmet Arslan wrote:
>
>
> &fq=-editionsarray:[* TO *] should do the job.
>
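For readers following along, a full request using that filter might look like this (host and core path assumed; note that the brackets and spaces must be URL-encoded when sent literally):

```
http://localhost:8983/solr/select?q=*:*&fq=-editionsarray:%5B*%20TO%20*%5D
```

The negated open-ended range `-editionsarray:[* TO *]` matches exactly the documents in which the field is absent, which is why it solves the "field only present sometimes" case above.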
I know one can create custom event listeners for update or query events, but
is it possible to create one for any DIH event (Full-Import, Delta-Import)?
Thanks
--
View this message in context:
http://lucene.472066.n3.nabble.com/Custom-DIH-EventListeners-tp780517p780517.html
Sent from the Solr -
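As far as I know, DIH does expose import-lifecycle hooks: the `<document>` element of data-config.xml accepts attributes naming a class that implements DIH's EventListener interface. A sketch (the listener class is hypothetical; check that your Solr build includes this feature):

```xml
<dataConfig>
  <!-- The named class is invoked at the start and end of every import,
       full or delta. -->
  <document onImportStart="com.example.MyImportListener"
            onImportEnd="com.example.MyImportListener">
    <entity name="item" query="SELECT id FROM item"/>
  </document>
</dataConfig>
```
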
: There is some work on SOLR-17 to track this. I put up a patch that's
: incomplete, based on the prior work done by Mike Baranczak and Hoss and
: others. I've been meaning to get back to it, but have been swamped.
Actually, SOLR-17 tracks an XSD for the XML "response" format you get from
the X
Hi,
On 06/05/2010 00:24, Robert Risley wrote:
> Shouldn't the lib folder be in each /coreX folder
You are right. Moving the lib folder directly into the core itself does the
trick. But I thought that having it in my solr.xml would allow sharing jars
in the {cores_home}/lib directory between all the cores.
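For what it's worth, solr.xml supports a sharedLib attribute on the `<solr>` element for exactly this purpose; if I recall correctly the path is resolved relative to the solr home. A sketch (paths illustrative, matching the layout above):

```xml
<!-- Jars in {solr.home}/lib are put on the classpath of every core. -->
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
    <core name="core1" instanceDir="core1"/>
  </cores>
</solr>
```
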
Hi,
I recently started working with Solr, so I am still very new to it. Sorry
if I am logically wrong.
I have two tables, a parent and a referenced (child) table.
For that I set up a multivalued field; following are my schema details.
Indexed data details:
student1
student2
stud