What have you tried and what doesn't it do that you want it to do?

This works; instantiating the StreamingUpdateSolrServer (server) and
setting up the JDBC connection/SQL statement are left as exercises for
the reader <G>:

    List<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
    int counter = 0;
    long total = 0;

    while (rs.next()) {
      SolrInputDocument doc = new SolrInputDocument();

      String id = rs.getString("id");
      String title = rs.getString("title");
      String text = rs.getString("text");

      doc.addField("id", id);
      doc.addField("title", title);
      doc.addField("text", text);

      docs.add(doc);
      ++counter;
      ++total;
      // The batch size of 100 is completely arbitrary; just batch up
      // more than one document per request for throughput!
      if (counter >= 100) {
        server.add(docs);
        docs.clear();
        counter = 0;
      }
    }

    // Don't forget the final partial batch left over when the loop ends.
    if (!docs.isEmpty()) {
      server.add(docs);
    }
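For what it's worth, the batch-and-flush idea in that loop can be shown as a
minimal, self-contained sketch. All the names here (Batcher, flusher, demo)
are hypothetical, not SolrJ API; in the real code the flusher would be the
server.add(docs) call, and the items would be SolrInputDocuments:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical helper illustrating the batching pattern: accumulate
// items, flush every batchSize, and flush the remainder at the end.
class Batcher<T> {
  private final List<T> buffer = new ArrayList<T>();
  private final int batchSize;
  private final Consumer<List<T>> flusher;

  Batcher(int batchSize, Consumer<List<T>> flusher) {
    this.batchSize = batchSize;
    this.flusher = flusher;
  }

  void add(T item) {
    buffer.add(item);
    if (buffer.size() >= batchSize) {
      flush();
    }
  }

  // Must be called once more after the loop ends, or the last
  // partial batch is silently dropped.
  void flush() {
    if (!buffer.isEmpty()) {
      flusher.accept(new ArrayList<T>(buffer));
      buffer.clear();
    }
  }

  // Demo: 250 items with a batch size of 100 flush as 100, 100, 50.
  static List<Integer> demo() {
    final List<Integer> flushSizes = new ArrayList<Integer>();
    Batcher<String> b = new Batcher<String>(100, batch -> flushSizes.add(batch.size()));
    for (int i = 0; i < 250; i++) {
      b.add("doc" + i);
    }
    b.flush(); // the final, partial batch
    return flushSizes;
  }

  public static void main(String[] args) {
    System.out.println(demo()); // prints [100, 100, 50]
  }
}
```

The point of the pattern is just that one HTTP request per document is slow;
one request per batch amortizes the overhead, at the cost of having to
remember the trailing flush.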

Best
Erick

On Mon, Aug 15, 2011 at 6:25 PM, Shawn Heisey <s...@elyograg.org> wrote:
> Is there a simple way to get all the fields from a jdbc resultset into a
> bunch of SolrJ documents, which I will then send to be indexed in Solr?  I
> would like to avoid the looping required to copy the data one field at a
> time.  Copying it one document at a time would be acceptable, but it would
> be nice if there was a way to copy them all at once.
>
> Another idea that occurred to me is to add the dataimporter jar to my
> project and leverage it to do the heavy lifting, but I will need some
> pointers about what objects and methods to research.  Is that a reasonable
> idea, or is it too integrated into the server code to be used with SolrJ?
>
> Can anyone point me in the right direction?
>
> Thanks,
> Shawn
>