Hi,
I use Solr for distributed indexing in cloud mode. I run Solr in Kubernetes
on a 72-core, 256 GB server. In the work I'm doing, I benchmark index times,
so we are constantly indexing, then deleting the collection, etc., to get
accurate benchmarks at certain sizes in GB. In theory, this should not
Hi,
Is there any support for writing Parquet files from Spark to Solr shards,
or is it possible to customize a DIH (DataImportHandler) to import a Parquet
file into a Solr shard? Let me know if this is possible, or what the best
workaround would be. Much appreciated, thanks
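
In case it helps, here is a minimal sketch of the Spark side, assuming the
Lucidworks spark-solr connector (com.lucidworks.spark:spark-solr) is on the
classpath; the ZooKeeper host string, collection name, and Parquet path below
are placeholders, not values from this thread:

import org.apache.spark.sql.{SparkSession, SaveMode}

object ParquetToSolr {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-to-solr")
      .getOrCreate()

    // Read the Parquet data into a DataFrame.
    val df = spark.read.parquet("hdfs:///path/to/data.parquet")

    // Write the DataFrame to the Solr collection; the connector reads the
    // collection's shard layout from ZooKeeper and routes documents to the
    // appropriate shards.
    df.write
      .format("solr")
      .option("zkhost", "zk1:2181,zk2:2181,zk3:2181/solr") // placeholder ZK ensemble
      .option("collection", "my_collection")               // placeholder collection
      .mode(SaveMode.Overwrite)
      .save()

    spark.stop()
  }
}

Since DIH runs inside Solr and ships with processors for JDBC, XML, and flat
files rather than Parquet, pushing documents from Spark with a connector like
the above (or via SolrJ) is usually simpler than writing a custom DIH entity
processor for Parquet.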
Kevin VL