Hi Erick and All,

The problem is solved: I copied schema-solr4.xml (shipped in Nutch's conf
directory) into my collection's Solr conf directory, renamed to schema.xml.
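
For anyone hitting the same thing, here is roughly what I did, as a sketch
($NUTCH_HOME and $SOLR_HOME are just placeholders for wherever Nutch and the
Solr core live on your machine):

# cp $NUTCH_HOME/conf/schema-solr4.xml $SOLR_HOME/mycollection/conf/schema.xml

Then restart Tomcat (or reload the core) so Solr picks up the new schema.
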
I didn't use Hadoop there; the map/reduce entries in the trace come from
Nutch running locally via Hadoop's LocalJobRunner. Apologies if the Nutch
list would have been a better place; I posted on this Solr list since the
problem first appeared at the Solr indexer step.
Regarding the "/2" option: that was just "e-mail body evolution" (line
wrapping in my message), I thought :)
In my first posting it was the crawl script syntax, which in my case was:

# ./bin/crawl urls/seed.txt TestCrawl http://localhost:8080/solr/ 2

where 2 is the number of crawl rounds.

See here:
http://wiki.apache.org/nutch/NutchTutorial#A3.3._Using_the_crawl_script
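
For completeness, the corrected invocation in my case, with the core name in
the URL and the round count as a separate argument (mycollection is my core's
name; substitute your own):

# ./bin/crawl urls/seed.txt TestCrawl http://localhost:8080/solr/mycollection/ 2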

Again, thanks everyone!


On Mon, Oct 28, 2013 at 5:39 PM, Erick Erickson <erickerick...@gmail.com> wrote:

> This seems like a better question for the Nutch list. I see hadoop
> in there, so unless you've specifically configured solr to use
> the HDFS directory writer factory, this has to be coming from
> someplace else. And there are map/reduce tasks in here.
>
> BTW, it would be more helpful if you posted the URL that you
> successfully queried Solr with... What is the /2 on the end for?
> Do you use that when you query?
>
> Best,
> Erick
>
>
> On Mon, Oct 28, 2013 at 2:37 AM, Bayu Widyasanyata
> <bwidyasany...@gmail.com> wrote:
>
> > On Mon, Oct 28, 2013 at 1:26 PM, Raymond Wiker <rwi...@gmail.com> wrote:
> >
> > > > request: http://localhost:8080/solr/update?wt=javabin&version=2
> > >
> > > I think this url is incorrect: there should be a core name between
> > > "solr" and "update".
> > >
> >
> > I changed the SolrURL option of the crawl script to:
> >
> > ./bin/crawl urls/seed.txt TestCrawl
> > http://localhost:8080/solr/mycollection/2
> >
> > And the result now is "Bad Request".
> > I will look for other misconfigurations...
> >
> > =====
> >
> > org.apache.solr.common.SolrException: Bad Request
> >
> > Bad Request
> >
> > request: http://localhost:8080/solr/mycollection/update?wt=javabin&version=2
> >         at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:430)
> >         at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:244)
> >         at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
> >         at org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:155)
> >         at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:118)
> >         at org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:44)
> >         at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.close(ReduceTask.java:467)
> >         at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:535)
> >         at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
> >         at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:398)
> > 2013-10-28 13:30:02,804 ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
> >         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1357)
> >         at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:123)
> >         at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:185)
> >         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >         at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:195)
> >
> >
> >
> > --
> > wassalam,
> > [bayu]
> >
>



-- 
wassalam,
[bayu]