> if you don't have any custom components, you can probably just use
> your entire solr home dir as is -- just change the solr.war. (you can't
> just copy the data dir though, you need to use the same configs)
>
> test it out, and note the "Upgrading" notes in the CHANGES.txt for the
> 1.3, 1.4, a
thanks guys
I will try the trunk
as for unpacking the war and changing the lucene... I am not an expert and
this may get complicated for me; maybe over time,
when I am comfortable.
Mambe Churchill Nanje
237 33011349,
AfroVisioN Founder, President,CEO
http://www.afrovisiongroup.com | http://mambenanj
Try to run the "svn co" command by using the console (in case you're
running a UNIX-like OS). Add the following files for Solr (.project and
.classpath) into the solr folder:
http://markmail.org/message/yb5qgeamosvdscao
Then do an "import as an existing project" in Eclipse, and you're done.
On Tue, Feb 1, 2011 at 5:59 PM, Eric Grobler wrote:
> Hi
>
> I am a newbie and I am trying to run solr in eclipse.
>
> From this url
> http://wiki.apache.org/solr/HowToContribute#Development_Environment_Tips
> there is a subclipse example:
>
> I use Team -> Share Project and this url:
> http://sv
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [x] I/we build them from source via an SVN/Git checkout.
>
> [x] Other (someone in your company mirrors them internally or via a
> downstr
Sorry to reply to myself, but I just wanted to see if anyone saw
this/had ideas why MBeans would be removed/re-added/removed.
I tried looking for this in the code but was unable to grok what
triggers bean removal.
Any hints?
On Thu, Jan 27, 2011 at 3:30 PM, matthew sporleder wrote:
> I am usin
Hello list,
I am aware that setting the value of maxFieldLength in solrconfig.xml too high
may result in out-of-memory errors. I wish to provide content extraction on a
number of pdf documents which are large, by large I mean 8-11MB (occasionally
more), and I am also not sure how many terms r
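For context, the setting being discussed lives in solrconfig.xml; a minimal sketch of the relevant fragment (the value shown is the 1.4 example default — raise it cautiously, since every extra token costs heap at index time):

```xml
<!-- solrconfig.xml: a sketch; 10000 is the 1.4 example default -->
<mainIndex>
  <!-- maximum number of tokens indexed per field; tokens beyond this are silently dropped -->
  <maxFieldLength>10000</maxFieldLength>
</mainIndex>
```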
Good Morning,
I am planning to get started on indexing MS office using ApacheSolr -
can someone please direct me where I should start?
Thanks,
Sai Thumuluri
http://wiki.apache.org/solr/ExtractingRequestHandler
On Wednesday 02 February 2011 16:49:12 Thumuluri, Sai wrote:
> Good Morning,
>
> I am planning to get started on indexing MS office using ApacheSolr -
> can someone please direct me where I should start?
>
> Thanks,
> Sai Thumuluri
--
Marku
http://wiki.apache.org/solr/ExtractingRequestHandler
Regards,
Jayendra
On Wed, Feb 2, 2011 at 10:49 AM, Thumuluri, Sai
wrote:
> Good Morning,
>
> I am planning to get started on indexing MS office using ApacheSolr -
> can someone please direct me where I should start?
>
> Thanks,
> Sai Thumulur
Hi,
have a look at Solr's ExtractingRequestHandler:
http://wiki.apache.org/solr/ExtractingRequestHandler
-Sascha
On 02.02.2011 16:49, Thumuluri, Sai wrote:
Good Morning,
I am planning to get started on indexing MS office using ApacheSolr -
can someone please direct me where I should start?
take a look at DIH
http://wiki.apache.org/solr/DataImportHandler
> I always use New...Other...SVN...Checkout Projects from SVN
Thanks, that seemed to work perfectly :-)
On Wed, Feb 2, 2011 at 12:43 PM, Robert Muir wrote:
> On Tue, Feb 1, 2011 at 5:59 PM, Eric Grobler
> wrote:
> > Hi
> >
> > I am a newbie and I am trying to run solr in eclipse.
> >
> > From
[x] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[x] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
Dear all,
I got an exception when querying the index within Solr. It told me that too
many files are open. How can I handle this problem?
Thanks so much!
LB
[java] org.apache.solr.client.solrj.
SolrServerException: java.net.SocketException: Too many open files
[java] at
org.apache.solr.c
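Two common mitigations, sketched below as assumptions rather than a diagnosis of this particular index: raise the OS file-descriptor limit (e.g. `ulimit -n` on Linux), and reduce the number of files Lucene keeps open via the index settings in solrconfig.xml:

```xml
<!-- solrconfig.xml: a sketch of index settings that reduce open file handles -->
<mainIndex>
  <!-- pack each segment's files into a single .cfs compound file -->
  <useCompoundFile>true</useCompoundFile>
  <!-- fewer segments before a merge means fewer files held open -->
  <mergeFactor>5</mergeFactor>
</mainIndex>
```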
On Jan 28, 2011, at 5:38 PM, Andreas Kemkes wrote:
> Just getting my feet wet with the text extraction using both schema and
> solrconfig settings from the example directory in the 1.4 distribution, so I
> might miss something obvious.
>
> Trying to provide my own title (and discarding the one
> I always use New...Other...SVN...Checkout Projects from SVN
And how do you run jetty from the example folder in Eclipse?
Thanks for your help
Ericz
On Wed, Feb 2, 2011 at 12:43 PM, Robert Muir wrote:
> On Tue, Feb 1, 2011 at 5:59 PM, Eric Grobler
> wrote:
> > Hi
> >
> > I am a newbie and I am tr
Hi
In http://wiki.apache.org/solr/SpatialSearch
there is an example of a bbox filter and a geodist function.
Is it possible to do a bbox filter and sort by distance - combine the two?
Thanks
Ericz
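Following the SpatialSearch wiki page's parameter style, the two can be combined in one request; a sketch, where the field name `store` and the coordinates are purely illustrative:

```
q=*:*&sfield=store&pt=45.15,-93.85&d=5&fq={!bbox}&sort=geodist() asc
```

Both `{!bbox}` and `geodist()` pick up the shared `sfield` and `pt` parameters, so the filter and the sort stay in agreement.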
On Jan 30, 2011, at 2:47 AM, Dennis Gearon wrote:
> I would love it if I could use 'latitude' and 'longitude' in all places. But
> it
> seems that solr spatial for 1.4 plugin only works with lat/lng. Any way to
> change that?
What 1.4 plugin are you referring to?
>
> Dennis Gearon
On Wed, Feb 2, 2011 at 11:15 AM, Eric Grobler wrote:
>> I always use New...Other...SVN...Checkout Projects from SVN
>
> And how do you run jetty from the example folder in Eclipse?
>
you can always go to the commandline and use the usual techniques,
e.g. ant run-example, or java -jar start.jar from the
Sorry to re-post, but can anyone help out on the question below of dynamic
custom results filtering using CommonsHttpSolrServer? If anyone is doing
this sort of thing, any suggestions would be much appreciated.
Thanks!
Dave
On 1/31/11 2:47 PM, "Dave Troiano" wrote:
> Hi,
>
> I'm implementing
> I only use eclipse as a fancy text editor!
Eclipse will feel insulted :-)
I will just try to create hot keys to start/stop jetty manually.
Thanks for your feedback
Regards
Ericz
On Wed, Feb 2, 2011 at 4:26 PM, Robert Muir wrote:
> On Wed, Feb 2, 2011 at 11:15 AM, Eric Grobler
> wrote:
> >>
Hi,
I have a question about filters on a multivalued attribute. Is there a way to
filter a multivalued attribute based on a particular value inside that
attribute?
Consider the example below.
DEF_BY
BEL_TO
I want to do a search which returns only the results that have the
relationship DEF_BY a
Hello list,
I've come across a few Google matches indicating that Solr-based servers
implement the Open Archives Initiative's Metadata Harvesting Protocol.
Is there something made to be re-usable that would be an add-on to solr?
thanks in advance
paul
Hello,
I have the following definitions in my schema.xml:
...
...
There is a document "Hippopotamus is fatter than a Platypus" indexed.
When I search for "Hippopotamus" I receive the expected result. When I
search for any partial such as "Hippo"
About this:
The NGrams are going to be indexed on the field "text_ngrams", not on
"text". For the field "text", Solr will apply the text analysis (which I
guess doesn't have NGrams). You have to search on the "text_ngrams" field,
something like "text_ngrams:hippo" or "text_ngrams:potamu". Are yo
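For reference, an ngram field of the kind discussed here is declared in schema.xml; a sketch, where the field/type names and gram sizes are illustrative (note the ngram filter is applied only at index time, so queries match whole partial strings against the indexed grams):

```xml
<!-- schema.xml: a sketch of a separate ngram field; names and sizes are illustrative -->
<fieldType name="text_ngrams" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="8"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="text_ngrams" type="text_ngrams" indexed="true" stored="false"
       multiValued="true"/>
<copyField source="text" dest="text_ngrams"/>
```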
Hi Paul, I don't fully understand what you want to do. The way SolrJ is
intended to be used, I think, is from a client application (outside Solr). If
what you want is something like what's done with Velocity, I think you could
implement a response writer that renders the JSP and sends it in the
respo
Hi,
I don't know whether it fits your need, but we are building a tool
based on Drupal (eXtensible Catalog Drupal Toolkit), which can harvest
with OAI-PMH and index the harvested records into Solr. The records are
harvested, processed, and stored in MySQL, then we index them into
Solr. We creat
Peter,
I'm afraid your service is harvesting and I am trying to look at a PMH provider
service.
Your project appeared early in the Google matches.
paul
Le 2 févr. 2011 à 20:46, Péter Király a écrit :
> Hi,
>
> I don't know whether it fits your need, but we are building a tool
> based on D
Hi Paul,
yes, you are right, the project is about harvesting, and not to be harvestable.
Péter
2011/2/2 Paul Libbrecht :
> Peter,
>
> I'm afraid your service is harvesting and I am trying to look at a PMH
> provider service.
>
> Your project appeared early in the Google matches.
>
> paul
>
>
>
The trick is that you can't just have a generic black-box OAI-PMH
provider on top of any Solr index. How would it know where to get the
metadata elements it needs, such as title or last-updated date?
Any given Solr index might not even have these in stored fields -- and a
given app might w
I already replied to the original poster off-list, but it seems that it may be
worth weighing in here as well...
The next release of VuFind (http://vufind.org) is going to include OAI-PMH
server support. As you say, there is really no way to plug OAI-PMH directly
into Solr... but a tool like
Hello,
Let me give a brief description of my scenario.
Today I am only using Lucene 2.9.3. I have an index of 30 million documents
distributed on three machines and each machine with 6 hds (15k rmp).
The server queries the search index using the remote class search. And each
machine is made to sea
Hi,
I'm using SOLR 1.4.1 and have a rather large index with 800+M docs.
Until now we have, erroneously I think, indexed a long field with the type:
Now the range queries have become slow as there are many distinct terms in the
index.
My question is if it would be possible to just change
On Wed, Feb 2, 2011 at 3:46 PM, Dan G wrote:
> My question is if it would be possible to just change the field to the
> preferred
> type "tlong" with a precision of "8"?
>
> Would this change be compatible with my indexed data or should I re-indexed
> the
> date (a pain with 800+M docs :))?
>
I
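For reference, this is what the trie declaration in question looks like in the example schema; a sketch (the general caveat, which the truncated reply likely makes, is that changing the type changes how terms are encoded in the index, so old segments written with the plain encoding won't carry the extra precision terms — re-indexing is the safe route):

```xml
<!-- schema.xml: the tlong type as shipped in the example schema -->
<fieldType name="tlong" class="solr.TrieLongField" precisionStep="8"
           omitNorms="true" positionIncrementGap="0"/>
```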
Hi, I'm seeing some weirdness when indexing multiple terms to a single field
using a copyField. An example:
For document A
field:contents_1 is a multivalued field containing "cat", "dog" and "duck"
field:contents_2 is a multivalued field containing "cat", "horse", and
"flower"
For document B
field:c
On closer review, I am noticing that the fieldNorm is what is killing
document A.
If I reindex with omitNorms=true, will this problem be "solved"?
On Wed, Feb 2, 2011 at 4:54 PM, Martin J wrote:
> Hi, I'm having a weirdness with indexing multiple terms to a single field
> using a copyField. A
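omitNorms is set per field in schema.xml, and it only affects documents indexed after the change, so yes, a re-index is needed for it to take effect. A sketch, reusing the field name from the example above (the type name is an assumption):

```xml
<!-- schema.xml: a sketch; disabling norms removes length normalization
     and index-time boosts from scoring for this field -->
<field name="contents_1" type="text" indexed="true" stored="true"
       multiValued="true" omitNorms="true"/>
```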
Does something like this work to extract dates, phone numbers, addresses across
international formats and languages?
Or, just in the plain ol' USA?
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn
On 2/2/2011 5:19 PM, Dennis Gearon wrote:
Does something like this work to extract dates, phone numbers, addresses across
international formats and languages?
Or, just in the plain ol' USA?
What are you talking about? There is nothing discussed in this thread
that does any 'extracting' of da
I would think OAI certainly has a trans-national format for dates.
And that probably maps well onto Solr's own date format.
But all of that is non-user-oriented, so... no culture dependency in principle.
paul
Le 2 févr. 2011 à 23:19, Dennis Gearon a écrit :
> Does something like this work to e
Hi,
I am a newbie to Apache Solr.
We are using ContentStreamUpdateRequest to insert into Solr. For example,
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest(
"/update/extract")
req.addContentStream(stream);
req.addContentStream(literal.name, na
Yes, I have tried searching on text_ngrams as well and it produces no results.
On a related note, wouldn't the ngrams produced by the text_ngrams field
definition also be available within the text field?
2011/2/2 Tomás Fernández Löbbe :
> About this:
>
>
>
> The NGrams are going to be
I guess I didn't understand 'meta data'. That's why I asked the question.
Dennis Gearon
2011/2/2 Gustavo Maia
> Hello,
>
> Let me give a brief description of my scenario.
> Today I am only using Lucene 2.9.3. I have an index of 30 million
> documents distributed across three machines, each machine with 6 HDs (15k
> rpm).
> The server queries the search index using the remote class se
For time of day fields, NOT unix timestamp/dates, what is the best way to do
that?
I can think of seconds since beginning of day as integer
OR
string
Any other ideas? Assume that I'll be using range queries. TIA.
Dennis Gearon
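The seconds-since-midnight idea above can be sketched in plain Java (the class and method names are mine, purely illustrative — nothing here is Solr API):

```java
// A sketch of storing time of day as seconds since midnight, so an int
// (or tint) field supports ordinary numeric range queries.
public class TimeOfDay {
    // "HH:mm:ss" -> seconds since midnight
    public static int toSeconds(String hms) {
        String[] p = hms.split(":");
        return Integer.parseInt(p[0]) * 3600
             + Integer.parseInt(p[1]) * 60
             + Integer.parseInt(p[2]);
    }

    // seconds since midnight -> "HH:mm:ss"
    public static String toHms(int s) {
        return String.format("%02d:%02d:%02d", s / 3600, (s % 3600) / 60, s % 60);
    }

    public static void main(String[] args) {
        int open = toSeconds("09:30:00");   // 34200
        int close = toSeconds("17:00:00");  // 61200
        // a range query would then look like: opening_time:[34200 TO 61200]
        System.out.println(open + " " + close + " " + toHms(open));
    }
}
```

A string field would also work, but only if values are zero-padded ("09:30:00", not "9:30:00") so lexicographic order matches time order; the integer encoding avoids that pitfall.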
Got my API to input into both the database and the Solr instance, and search
geographically/chronologically in Solr.
Next is Update and Delete. And then .. and then ... and then ..
Dennis Gearon
So I'm trying to update a single entity in my index using DataImportHandler.
http://solr:8983/solr/dataimport?command=full-import&entity=games
It ends near-instantaneously without hitting the database at all, apparently.
Status shows:
0
0
0
0
Indexing completed. Added/Updated: 0 documents. Del
If you're using DIH you can configure it however you want. Here is a
snippet of my code. Note the DateTimeTransformer.
On Wed, Feb 2, 2011 at 7:28 PM, Dennis Gearon wrote:
> For time of d
On Thu, Feb 3, 2011 at 6:08 AM, Jon Drukman wrote:
> So I'm trying to update a single entity in my index using DataImportHandler.
>
> http://solr:8983/solr/dataimport?command=full-import&entity=games
>
> It ends near-instantaneously without hitting the database at all, apparently.
[...]
* Does th
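One common cause (a guess, since the config isn't shown) is a mismatch between the `entity=` request parameter and the entity's `name` attribute in data-config.xml; a sketch of the expected shape, with driver, URL, table and column names all illustrative:

```xml
<!-- data-config.xml: a sketch; all names here are illustrative -->
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/db"/>
  <document>
    <!-- the entity= request parameter must match this name attribute exactly -->
    <entity name="games" query="SELECT id, title FROM games">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
```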
Dear all,
I am trying to implement an autocomplete system for research. But I am stuck
on some problems that I can't solve.
Here is my problem :
I give text like:
"the cat is black" and I want to explore all 1-grams to 8-grams for all the
text that is passed:
the, cat, is, black, the cat, cat is
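The 1-gram to 8-gram expansion described above can be sketched in plain Java (class and method names are mine, purely illustrative; inside Solr, solr.ShingleFilterFactory is what produces word n-grams like these at analysis time):

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of generating word n-grams (shingles) from min up to max words.
public class WordNGrams {
    public static List<String> ngrams(String text, int min, int max) {
        String[] words = text.split("\\s+");
        List<String> out = new ArrayList<String>();
        for (int n = min; n <= max; n++) {          // each gram length
            for (int i = 0; i + n <= words.length; i++) {  // each start position
                StringBuilder sb = new StringBuilder(words[i]);
                for (int j = 1; j < n; j++) sb.append(' ').append(words[i + j]);
                out.add(sb.toString());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "the cat is black" -> the, cat, is, black, the cat, cat is, ...
        System.out.println(ngrams("the cat is black", 1, 8));
    }
}
```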
This is posted as an enhancement on SOLR-2345.
I am willing to work on it, but I am stuck. I would like to loop through the
lat/long values when they are stored in a multiValue list, but it appears that
I cannot figure out how to do that. For example:
sort=geodist() asc
This should grab the close
Use analysis.jsp to see how your analysis is going.
Also, you can see the parsed queries by adding the parameter debugQuery=on to
the request URL.
-
Thanx:
Grijesh
http://lucidimagination.com
--
View this message in context:
http://lucene.472066.n3.nabble.com/Partial-matches-don-t-work-solr-NGra
Increase the OS limit for the maximum number of open files; the default is
set quite low on some OSes.
Grijesh
Use analysis.jsp to see what is happening at index time and query time with
your input data. You can use highlighting to see if a match is found.
Grijesh
Nevermind.. got the details from here..
http://wiki.apache.org/solr/ExtractingRequestHandler
Thanks..