I'm now hitting the same problem, but I haven't been able to find the cause yet.

$ bin/nutch solrindex http://localhost:8080/solr crawl/crawldb/0 crawl/linkdb crawl/segments/0/20110418100309
SolrIndexer: starting at 2011-04-18 10:03:40
java.io.IOException: Job failed!
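
The console only shows the IOException, so I assume the underlying stack trace ends up in the Hadoop log of the local runtime; this is roughly where I'd expect to find it (log path is an assumption based on the default conf/log4j.properties):

$ cd ~/nutch-1.3/runtime/local
$ tail -n 100 logs/hadoop.log    # assumed default log location; adjust if log4j.properties points elsewhere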

Everything else seems to have worked[1]. I've also tried the Jetty tutorial example, and that worked too. I can reach Solr through http://localhost:8080/solr/admin/ and query it with curl as well[2].

To debug this, I modified SolrIndexer.java to add a few print statements[3] and redeployed (steps below), but still nothing gets printed. Any clues on how to fix this?

$ cd $SOLR_HOME
$ ant compile
$ ant dist
$ $CATALINA_HOME/bin/catalina.sh stop
$ cp $SOLR_HOME/solr.war $CATALINA_HOME/webapps/solr.war
$ $CATALINA_HOME/bin/catalina.sh start
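
Since SolrIndexer.java is part of Nutch rather than Solr, do I also need to rebuild the Nutch runtime for those prints to show up? Something like the following (only my guess at the right target for the 1.3 layout):

$ cd ~/nutch-1.3
$ ant runtime            # assuming this target rebuilds runtime/local with the modified class
$ cd runtime/local
$ bin/nutch solrindex http://localhost:8080/solr crawl/crawldb/0 crawl/linkdb crawl/segments/0/20110418100309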

[3] 
catch (final Exception e) {
  e.printStackTrace();
  LOG.fatal("hello"); // debug print to confirm this catch block is reached
  LOG.fatal("SolrIndexer: " + StringUtils.stringifyException(e));
  return -1;
}


[2]
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int
name="QTime">217</int></lst>
</response>
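
(For what it's worth, a response of exactly that shape is what a plain commit against the default /update handler would typically return, e.g.:)

$ curl http://localhost:8080/solr/update --data-binary '<commit/>' -H 'Content-Type: text/xml; charset=utf-8'   # example request only, not necessarily the one I ran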

[1]
$ bin/nutch readdb crawl/crawldb/0 -stats
CrawlDb statistics start: crawl/crawldb/0
Statistics for CrawlDb: crawl/crawldb/0
TOTAL urls:     3
retry 0:        3
min score:      1.0
avg score:      1.0
max score:      1.0
status 2 (db_fetched):  3
CrawlDb statistics: done

bin/nutch parse crawl/segments/0/20110418100309
ParseSegment: starting at 2011-04-18 10:03:28
ParseSegment: segment: crawl/segments/0/20110418100309
ParseSegment: finished at 2011-04-18 10:03:31, elapsed: 00:00:03

bin/nutch updatedb crawl/crawldb/0 crawl/segments/0/20110418100309
CrawlDb update: starting at 2011-04-18 10:03:33
CrawlDb update: db: crawl/crawldb/0
CrawlDb update: segments: [crawl/segments/0/20110418100309]
CrawlDb update: additions allowed: true
CrawlDb update: URL normalizing: false
CrawlDb update: URL filtering: false
CrawlDb update: Merging segment data into db.
CrawlDb update: finished at 2011-04-18 10:03:35, elapsed: 00:00:02

bin/nutch invertlinks crawl/linkdb -dir crawl/segments/0
LinkDb: starting at 2011-04-18 10:03:37
LinkDb: linkdb: crawl/linkdb
LinkDb: URL normalize: true
LinkDb: URL filter: true
LinkDb: adding segment: file:/Users/simpatico/nutch-1.3/runtime/local/crawl/segments/0/20110418100309
LinkDb: finished at 2011-04-18 10:03:39, elapsed: 00:00:01

