The approach that Alfresco/Solr takes here is to store the original
document on the filesystem when it indexes content. That way you can be
frugal about which fields are stored in the index itself. Alfresco/Solr can
then retrieve the original document as part of the results using a doc transformer.
This ma
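
For illustration, a skeletal doc transformer along those lines might look
like the sketch below. The on-disk layout, the field names, and the exact
method signatures (they vary across Solr versions) are my assumptions, not
Alfresco's actual code:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.transform.DocTransformer;
import org.apache.solr.response.transform.TransformerFactory;

// Hypothetical transformer that reads the original document body from
// the filesystem, keyed by the (stored) id field.
public class RawDocTransformerFactory extends TransformerFactory {
  @Override
  public DocTransformer create(String field, SolrParams params, SolrQueryRequest req) {
    return new DocTransformer() {
      @Override
      public String getName() {
        return field;
      }

      @Override
      public void transform(SolrDocument doc, int docid) throws IOException {
        Object id = doc.getFirstValue("id");  // only the id needs to be stored
        if (id == null) {
          return;
        }
        Path path = Paths.get("/data/store", id + ".json"); // assumed layout
        if (Files.exists(path)) {
          doc.setField(field, new String(Files.readAllBytes(path), StandardCharsets.UTF_8));
        }
      }
    };
  }
}

Registered in solrconfig.xml under a name like "rawdoc", it would be
requested with fl=id,[rawdoc], so the index itself only has to store the id.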
You could fetch the data from your application directly :)
Also, streaming expressions have a jdbc() function, but then you would need
to know what to query for. There is also a fetch() function, which enriches
documents with fields from another collection. It would probably be possible to write a
Well, you can always throw more replicas at the problem as well.
But Andrea's comment is spot on. When Solr stores a field, it
compresses it. So to fetch the stored info, it has to:
1> seek the disk
2> decompress at minimum 16K (stored fields are compressed in ~16K blocks,
so even a tiny field pays for the whole block)
3> assemble the response.
All the while perhaps causing memory to be
Hi Sam, I have been in a similar scenario (not recently, so my answer could
be outdated). As far as I remember, caching, at least in that scenario,
didn't help much, probably because of the field size.
So we went with the second option: a custom SearchComponent connected with
Redis. I'm not aware if
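
To make that concrete, a stripped-down component in that spirit could look
roughly like the sketch below. The component name, the Redis key scheme, and
the connection details are placeholders, and the exact Solr APIs differ a
bit between versions:

import java.io.IOException;

import org.apache.lucene.document.Document;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.search.DocIterator;
import org.apache.solr.search.SolrIndexSearcher;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

// Hypothetical component that, after the query has run, looks up the big
// payload in Redis by document id instead of reading a stored Solr field.
public class RedisStoreComponent extends SearchComponent {
  private final JedisPool pool = new JedisPool("localhost", 6379); // assumed host/port

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    // nothing to do before the query runs
  }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    if (rb.getResults() == null) {
      return;
    }
    SolrIndexSearcher searcher = rb.req.getSearcher();
    NamedList<String> payloads = new NamedList<>();
    DocIterator it = rb.getResults().docList.iterator();
    try (Jedis jedis = pool.getResource()) {
      while (it.hasNext()) {
        int docid = it.nextDoc();
        Document doc = searcher.doc(docid);          // only the id field is needed
        String id = doc.get("id");
        payloads.add(id, jedis.get("store:" + id));  // assumed key scheme
      }
    }
    rb.rsp.add("stores", payloads);                  // returned alongside the results
  }

  @Override
  public String getDescription() {
    return "Fetches large response payloads from Redis";
  }
}

Wired into the request handler as a last-components entry, it sidesteps the
stored-field decompression entirely, and Redis does the caching for you.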
Hi everyone,
We at MetaBrainz are trying to scale our SolrCloud instance but are
hitting a bottleneck.
Each of the documents in our Solr index is accompanied by a '_store' field
that stores our API-compatible response for that document (which is
basically parsed and displayed by our custom response writer).