for this
-Yonik
On Fri, Feb 10, 2017 at 4:32 PM, Yonik Seeley wrote:
> On Thu, Feb 9, 2017 at 6:58 AM, Bryant, Michael
> wrote:
>> Hi all,
>>
uses masses of memory.
Cheers,
~Mike
--
Mike Bryant
Research Associate
Department of Digital Humanities
King’s College London
On 10 Feb 2017, at 18:53, Bryant, Michael
<michael.bry...@kcl.ac.uk> wrote:
Hi Tom,
Well the collapsing query parser is… a much better solution
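For context, the collapsing query parser is applied as a filter query, so faceting then runs over one representative document per group. A minimal sketch, where groupField and category are hypothetical field names, not anything from this thread:

    q=*:*&fq={!collapse field=groupField}&facet=true&facet.field=category

Because only one document per groupField value survives the collapse, the facet counts that come back approximate per-group counts, which is roughly what group.facet used to provide.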
hyperloglog function - hll() - instead
of unique(), which should give slightly better performance.
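For reference, swapping the aggregation in the JSON Facet API looks something like this (the field names are placeholders):

    json.facet={
      categories: {
        type: terms,
        field: category,
        sort: "groups desc",
        facet: {
          groups: "hll(groupField)"
        }
      }
    }

hll() computes an approximate distinct count via a HyperLogLog sketch, which is typically cheaper than unique() on high-cardinality fields.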
Cheers
Tom
On Thu, Feb 9, 2017 at 11:58 AM, Bryant, Michael
wrote:
Hi all,
I'm converting my legacy facets to JSON facets and am seeing much better
performance, especially with high cardinality facet fields.
Hi all,
I'm converting my legacy facets to JSON facets and am seeing much better
performance, especially with high cardinality facet fields. However, the one
issue I can't seem to resolve is excessive memory usage (and OOM errors) when
trying to simulate the effect of "group.facet" to sort facet
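The kind of request being described is presumably a terms facet sorted by a per-bucket distinct count over the grouping field; a sketch, again with placeholder field names:

    json.facet={
      category: {
        type: terms,
        field: category,
        limit: 20,
        sort: "groups desc",
        facet: {
          groups: "unique(groupField)"
        }
      }
    }

Computing a distinct count inside every bucket of a high-cardinality facet is what drives the memory usage described above.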