epotyom commented on code in PR #13568:
URL: https://github.com/apache/lucene/pull/13568#discussion_r1692723491


##########
lucene/sandbox/src/java/org/apache/lucene/sandbox/facet/recorders/CountFacetRecorder.java:
##########
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.sandbox.facet.recorders;
+
+import static org.apache.lucene.sandbox.facet.ordinals.OrdinalIterator.NO_MORE_ORDS;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.internal.hppc.IntCursor;
+import org.apache.lucene.internal.hppc.IntIntHashMap;
+import org.apache.lucene.sandbox.facet.cutters.LeafFacetCutter;
+import org.apache.lucene.sandbox.facet.misc.FacetRollup;
+import org.apache.lucene.sandbox.facet.ordinals.OrdinalIterator;
+
+/**
+ * {@link FacetRecorder} to count facets.
+ *
+ * <p>TODO: add an option to keep counts in an array, to improve performance for facets with a
+ * small number of ordinals, e.g. range facets. Options: - {@link LeafFacetCutter} can inform {@link
+ * LeafFacetRecorder} about the expected number of facet ordinals ({@link
+ * org.apache.lucene.sandbox.facet.FacetFieldCollector} can orchestrate that). If the expected facet
+ * ordinal number is below some threshold, use an array instead of a map? - keep the first 100/1k
+ * counts in an array and the rest in a map; the limit can also be provided in a constructor? This
+ * is similar to what LongValuesFacetCounts does today.
+ *
+ * <p>TODO: We can also consider collecting 2 (3, 4, ..., can be parameterized) slices into a
+ * single sync map, which can reduce thread contention compared to a single sync map for all
+ * slices; at the same time there will be less work for the reduce method. So far reduce hasn't
+ * been a bottleneck for us, but it is definitely not free.
+ *
+ * <p>TODO: If we come back to some form of synchronized count maps, we should be more careful
+ * about what we acquire locks for - we used to lock the addTo method itself, but it could be
+ * faster if we only synchronized after computing the key's hash; or we could lock the entire map
+ * only if we need to insert a key, and lock a single key otherwise?
+ */
+public class CountFacetRecorder implements FacetRecorder {
+  IntIntHashMap values;
+  List<IntIntHashMap> perLeafValues;
+
+  /** Create. */
+  public CountFacetRecorder() {
+    // Has to be synchronizedList as we have one recorder per all slices.
+    perLeafValues = Collections.synchronizedList(new ArrayList<>());
+  }
+
+  /** Get count for provided ordinal. */
+  public int getCount(int ord) {
+    return values.get(ord);
+  }
+
+  @Override
+  public LeafFacetRecorder getLeafRecorder(LeafReaderContext context) {
+    IntIntHashMap leafValues = new IntIntHashMap();
+    perLeafValues.add(leafValues);

Review Comment:
   The only reason we keep them separate is that we experimented, and are planning to run more experiments, with how the maps are assigned to leaves (segments) or slices (threads); see the TODOs in this class. E.g. one of the ideas is to use synchronized hash maps, e.g. one hash map per two slices. We already experimented with a single sync hash map for all segments, and it turned out to be slower than a map per leaf + reduce. But merging maps in `reduce` is still not free, so we can try to find a balance between thread contention and the number of maps to merge?
   
   What I'm saying is, I suggest we keep the maps in a list as we are not sure yet if we want to have one map per leaf, WDYT? I've added a TODO to this method to address your comment when we're done with the experiments.
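   To make the trade-off concrete, here is a minimal, self-contained sketch of the map-per-leaf + reduce pattern discussed above. It uses `java.util.HashMap` in place of Lucene's internal `IntIntHashMap`, and the class/method names (`PerLeafCountSketch`, `newLeafMap`) are illustrative, not the actual PR API:

   ```java
   import java.util.ArrayList;
   import java.util.Collections;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   public class PerLeafCountSketch {
     // One uncontended map per leaf/slice; the list itself is synchronized
     // because a single recorder instance is shared across all slices.
     private final List<Map<Integer, Integer>> perLeafValues =
         Collections.synchronizedList(new ArrayList<>());
     private Map<Integer, Integer> values; // populated by reduce()

     // Called once per leaf: hands out a private map that a single
     // thread can update without locking.
     Map<Integer, Integer> newLeafMap() {
       Map<Integer, Integer> leafValues = new HashMap<>();
       perLeafValues.add(leafValues);
       return leafValues;
     }

     // Merge all per-leaf maps into one; this is the "not free" step
     // the discussion above refers to.
     void reduce() {
       values = new HashMap<>();
       for (Map<Integer, Integer> leaf : perLeafValues) {
         leaf.forEach((ord, count) -> values.merge(ord, count, Integer::sum));
       }
     }

     int getCount(int ord) {
       return values.getOrDefault(ord, 0);
     }

     public static void main(String[] args) {
       PerLeafCountSketch r = new PerLeafCountSketch();
       Map<Integer, Integer> leaf0 = r.newLeafMap();
       Map<Integer, Integer> leaf1 = r.newLeafMap();
       leaf0.merge(7, 2, Integer::sum); // ord 7 seen twice in leaf 0
       leaf1.merge(7, 1, Integer::sum); // ord 7 seen once in leaf 1
       leaf1.merge(9, 4, Integer::sum);
       r.reduce();
       System.out.println(r.getCount(7)); // 3
       System.out.println(r.getCount(9)); // 4
       System.out.println(r.getCount(1)); // 0
     }
   }
   ```

   The alternatives in the TODOs (a shared synchronized map per N slices, or locking at finer granularity) trade the cost of this `reduce` merge against contention during collection.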



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


