OK, thanks. So I guess I will set up my own "normal" web server and have
the Solr server act as a sort of private web-based API (or possibly a
front-end that, when a user clicks on a search result link, just
redirects the user to my "normal" web server that has the related
file). That's easy enough.
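For what it's worth, a minimal sketch of that wiring, assuming your
schema has a stored "url" field (the 4.x example schema ships one) and
a hypothetical file server at myfiles.example.com, is to pass the
file's location as a literal field at index time:

curl "http://localhost:8983/solr/update/extract?literal.id=doc1&literal.url=http://myfiles.example.com/docs/mydoc.doc&commit=true" -F "myfile=@mydoc.doc"

Each hit then carries a url field, and the front-end can link (or
redirect) straight to it instead of deriving a link from the ID.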
There's nothing built into the indexing process that stores URLs
allowing you to fetch the document; you have to do that yourself. I'm
not sure how the link is getting into the search results. You're
assigning "doc1" as the ID of the doc, and I think the browse request
handler, aka Solritas, is con[...]
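A quick way to see what the results page actually has to work with (a
sketch, assuming the stock example setup on port 8983) is to dump the
stored fields for the doc:

curl "http://localhost:8983/solr/select?q=id:doc1&fl=*&wt=json&indent=true"

Whatever comes back is everything available for building a link, and
there's no URL in it unless you indexed one.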
Sorry, I thought it was obvious. The links that are broken are the
links that are returned in the search results. Using the example in the
documentation I mentioned below, I load a Word doc via

curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" -F "myfile=@my..."
What links? You haven't shown us what link you're clicking on
that generates the 404 error.
You might want to review:
http://wiki.apache.org/solr/UsingMailingLists
Best
Erick
On Fri, Jun 28, 2013 at 2:04 PM, MA LIG wrote:
Hello,

I ran the Solr example as described in
http://lucene.apache.org/solr/4_3_1/tutorial.html and then loaded some
doc files to Solr as described in
http://wiki.apache.org/solr/ExtractingRequestHandler. The commands I
used to load the files were of the form

curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" -F "myfile=@my..."