Pesky requirements <G>.... But yep, you can't just use &wt=json...

In fact, it sounds like no matter what you do, you have to transform
things somehow...

You could _consider_ (and I'm not recommending it, just making sure
you know it exists) writing a custom response writer. See
JSONResponseWriter for a model/place to start. You have access
to the incoming request at that point and can do whatever you want.

You could even append a non-Solr parameter to the URL, e.g.
&requestResponseType=whatever, that you have access to in
the response writer so you know which transformation to apply.
Solr ignores params it doesn't know about.
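
Something like this (rough, untested sketch -- the class name, the
package, and the exact imports will depend on your Solr version):

  import java.io.IOException;
  import java.io.Writer;

  import org.apache.solr.common.util.NamedList;
  import org.apache.solr.request.SolrQueryRequest;
  import org.apache.solr.response.JSONResponseWriter;
  import org.apache.solr.response.QueryResponseWriter;
  import org.apache.solr.response.SolrQueryResponse;

  public class CustomSchemaResponseWriter implements QueryResponseWriter {
    // Fall back to the stock JSON writer for anything we don't reshape.
    private final JSONResponseWriter delegate = new JSONResponseWriter();

    public void init(NamedList args) {
      delegate.init(args);
    }

    public void write(Writer writer, SolrQueryRequest req,
                      SolrQueryResponse rsp) throws IOException {
      // The extra, non-Solr parameter says which transformation to apply.
      String type = req.getParams().get("requestResponseType", "default");
      // ... build whatever shape "type" calls for from rsp.getValues() ...
      // For the sketch, just delegate to the normal JSON output.
      delegate.write(writer, req, rsp);
    }

    public String getContentType(SolrQueryRequest req, SolrQueryResponse rsp) {
      return "application/json";
    }
  }

You'd register it in solrconfig.xml with something like
  <queryResponseWriter name="custom"
                       class="com.example.CustomSchemaResponseWriter"/>
and then ask for it with &wt=custom.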

When I say I'm not recommending this, I'm also not saying it's
a bad idea, mostly I'm making sure you're aware you could
do this if it fits your problem space.

Best
Erick

On Thu, Aug 30, 2012 at 1:39 PM, David Martin <dmar...@netflix.com> wrote:
> Erick:
>
> Thanks for your reply. Simply omitting the null fields is an intriguing
> idea, and I will test this out.
>
> Why not use the JSON response writer?  Two reasons:  our clients dictate a
> particular JSON schema that changes on a query-by-query basis.  The
> schemas can be quite complex.  Also, we roll our responses as a JAXB DTO
> so that our web service can supply responses in either JSON or XML.  I
> think either requirement means having to do some "manual" post-processing
> of the SolrJ responses, right?
>
> Thanks,
>
> David
>
> On 8/29/12 6:15 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:
>
>>If I'm reading this right, you're kind of stuck. Solr/DIH don't have any
>>way to reach out to your mapping file and "do the right thing"....
>>
>>A couple of things come to mind.
>>Use a Transformer in DIH to simply remove the field from the document
>>you're indexing. Then the absence of the field in the result set is NULL,
>>and 0 is 0. You could also do this in SolrJ.....
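>>
>>A custom Transformer is only a handful of lines -- rough sketch,
>>with "movie_id" standing in for whichever field you care about and
>>the class/package names made up:
>>
>>  import java.util.Map;
>>
>>  import org.apache.solr.handler.dataimport.Context;
>>  import org.apache.solr.handler.dataimport.Transformer;
>>
>>  public class DropNullFieldTransformer extends Transformer {
>>    public Object transformRow(Map<String, Object> row, Context context) {
>>      Object val = row.get("movie_id");
>>      // JDBC handed back null (which shows up as 0), so drop the field
>>      // entirely and the doc simply has no value for it.
>>      if (val == null || "0".equals(val.toString())) {
>>        row.remove("movie_id");
>>      }
>>      return row;
>>    }
>>  }
>>
>>then reference it on the entity with
>>transformer="com.example.DropNullFieldTransformer". In SolrJ the
>>equivalent is just not calling addField() for that field when the
>>value is null/0.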
>>
>>And I have to ask why you transform output into JSON when you could
>>use the JSON response writer.....
>>
>>Best
>>Erick
>>
>>On Mon, Aug 27, 2012 at 6:04 PM, David Martin <dmar...@netflix.com> wrote:
>>> Smart Folks:
>>>
>>> I use JDBC to produce simple XML entities such as this one:
>>>
>>> <awardtype>
>>>   <entity_type>AWARDTYPE</entity_type>
>>>   <movie_id>0</movie_id>
>>>   <award_id>31</award_id>
>>>   <festivalId>1</festivalId>
>>>   <id>awardtypes::31:1</id>
>>> </awardtype>
>>>
>>> The XML entities are stored in a file and loaded by the
>>> FileListEntityProcessor.
>>>
>>> In this case, the "movie_id" element has a value of zero because the
>>> JDBC getString("movie_id") method returned null.  I can search Solr
>>> for entities of this type (i.e. query on "entity_type:AWARDTYPE") and
>>> get back the appropriate result set.  Then, I want to transform the
>>> result set into JSON objects with fields that map to XML elements.
>>>
>>> Today, I have to teach the JSON mapping that it should convert 0 to
>>> JSONObject.NULL on a case-by-case basis -- I actually keep a mapping
>>> document around that dictates whether a zero should be handled this way.
>>>
>>> In some cases though, a zero may be legitimate where null values are
>>> also legit.  Sure, I could always change the zero to a less likely
>>> integer or such...
>>>
>>> =======
>>> But don't Solr and the Data Import Handler have a better way to read a
>>> null value from an XML entity during import, AND to represent it in
>>> search results?  Do I need a different approach depending on my field's
>>> type?
>>> =======
>>>
>>> I apologize if this is an asked and answered question.  None of my web
>>> searches turned up an answer.
>>>
>>> Thanks,
>>>
>>> David
>>>
>>
>
