mjsax commented on code in PR #14360:
URL: https://github.com/apache/kafka/pull/14360#discussion_r1508421651
##########
docs/streams/developer-guide/config-streams.html:
##########
@@ -300,8 +306,10 @@ <h4><a class="toc-backref" href="#id23">num.standby.replicas</a><a class="header
set by the user or all serdes must be passed in explicitly (see also default.key.serde).</td>
<td><code class="docutils literal"><span class="pre">null</span></code></td>
</tr>
- <tr class="row-even"><td>default.windowed.key.serde.inner</td>
+ <tr class="row-even"><td>default.windowed.key.serde.inner (Deprecated. Use windowed.inner.class.serde instead.)</td>
<td>Medium</td>
+<<<<<<< HEAD
+<<<<<<< HEAD
Review Comment:
It seems you missed deleting some conflict-marker lines when resolving conflicts during the rebase. (more below)
##########
docs/streams/developer-guide/config-streams.html:
##########
@@ -1010,6 +1016,18 @@ <h4><a class="toc-backref" href="#id31">topology.optimization</a><a class="heade
</p>
</div></blockquote>
</div>
+ <div class="section" id="windowed.inner.class.serde">
+ <h4><a class="toc-backref" href="#id31">windowed.inner.class.serde</a><a class="headerlink" href="#windowed.inner.class.serde" title="Permalink to this headline"></a></h4>
+ <blockquote>
+ <div>
+ <p>
+ Serde for the inner class of a windowed record. Must implement the org.apache.kafka.common.serialization.Serde interface.
+ </p>
+ <p>
+ Note that setting this config in KafkaStreams application would result in an error as it is meant to be used only from Plain consumer client.
Review Comment:
I just did some more digging, and now I actually think that @ableegoldman is right: we might want to treat `windowed.inner.serde.class` similarly to `window.size`... (ie, maybe remove it from StreamsConfig -- we could add this to the KIP Lucia started).
I also understand now why the docs say that using it would result in an error (for both configs): Kafka Streams always passes the window-size and inner-serde via the _constructor_, and we also verify that a parameter is not set twice (or zero times), throwing an error inside the `configure()` method of the windowed serdes...
Thus, we might want to not add `windowed.inner.serde.class` to the docs in this PR to begin with...
Sorry for the back and forth. Reading and understanding code is hard...
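For illustration, the "set exactly once" check described above could be sketched roughly like this. This is a hypothetical plain-Java model, not the actual Kafka Streams windowed-serde code; the class name, the `CONFIG_KEY` value, and the exception type are assumptions:

```java
import java.util.Map;

// Hypothetical sketch of the validation described above: the windowed
// serde receives its inner serde either via the constructor (Kafka Streams
// path) or via the config (plain-consumer path), and configure() throws
// if the inner serde was set twice or zero times.
class WindowedSerdeSketch {
    private static final String CONFIG_KEY = "windowed.inner.class.serde"; // assumed key
    private Object innerSerde; // may already be set via the constructor

    WindowedSerdeSketch(Object innerSerde) {
        this.innerSerde = innerSerde;
    }

    void configure(Map<String, ?> configs) {
        Object fromConfig = configs.get(CONFIG_KEY);
        if (innerSerde != null && fromConfig != null) {
            throw new IllegalStateException("inner serde set twice (constructor and config)");
        }
        if (innerSerde == null && fromConfig == null) {
            throw new IllegalStateException("inner serde not set");
        }
        if (fromConfig != null) {
            innerSerde = fromConfig; // plain-consumer path: taken from config
        }
    }
}
```

Under this model, a Kafka Streams application (which always supplies the inner serde via the constructor) that also sets the config would hit the first branch, which matches the error the quoted docs describe.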
##########
docs/streams/developer-guide/config-streams.html:
##########
@@ -257,7 +258,12 @@ <h4><a class="toc-backref" href="#id23">num.standby.replicas</a><a class="header
<td colspan="2">The maximum number of records to buffer per partition.</td>
<td><code class="docutils literal"><span class="pre">1000</span></code></td>
</tr>
- <tr class="row-even"><td>cache.max.bytes.buffering</td>
+ <tr class="row-even"><td>statestore.cache.max.bytes</td>
+ <td>Medium</td>
+ <td colspan="2">Maximum number of memory bytes to be used for record caches across all threads. Note that at the debug level you can use <code>cache.size</code> to monitor the actual size of the Kafka Streams cache.</td>
Review Comment:
> Note that at the debug level you can use <code>cache.size</code> to monitor the actual size of the Kafka Streams cache.

What does this mean? I cannot follow.
##########
docs/streams/developer-guide/config-streams.html:
##########
@@ -326,6 +334,18 @@ <h4><a class="toc-backref" href="#id23">num.standby.replicas</a><a class="header
the <code>org.apache.kafka.streams.state.DslStoreSuppliers</code> interface.</td>
<td><code>BuiltInDslStoreSuppliers.RocksDBDslStoreSuppliers</code></td>
+=======
+ <td colspan="2">Default serializer/deserializer for the inner class of windowed keys, implementing the <code class="docutils literal"><span class="pre">Serde</span></code> interface. Deprecated.</td>
+=======
+ <td colspan="2">Default serializer/deserializer for the inner class of windowed keys, implementing the <code class="docutils literal"><span class="pre">Serde</span></code> interface.</td>
Review Comment:
Duplicate line (the two are not 100% identical) -- it seems a conflict was not resolved correctly.
##########
docs/streams/developer-guide/config-streams.html:
##########
@@ -257,7 +258,12 @@ <h4><a class="toc-backref" href="#id23">num.standby.replicas</a><a class="header
<td colspan="2">The maximum number of records to buffer per partition.</td>
<td><code class="docutils literal"><span class="pre">1000</span></code></td>
</tr>
- <tr class="row-even"><td>cache.max.bytes.buffering</td>
+ <tr class="row-even"><td>statestore.cache.max.bytes</td>
+ <td>Medium</td>
+ <td colspan="2">Maximum number of memory bytes to be used for record caches across all threads. Note that at the debug level you can use <code>cache.size</code> to monitor the actual size of the Kafka Streams cache.</td>
+ <td>10485760</td>
+ </tr>
+ <tr class="row-even"><td>cache.max.bytes.buffering (Deprecated. Use cache.max.bytes instead.)</td>
Review Comment:
If we insert a new row, we need to flip "even" / "odd" for all rows below... super annoying... (otherwise we get two adjacent rows with the same background color instead of nicely interleaved rows)
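The interleaving constraint could be illustrated with a small hypothetical helper: because each row's class depends only on its position, inserting one row changes the class of every row after it. Class names and the starting parity below are assumptions for illustration, not taken from the actual docs build:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: row classes are positional, so inserting a
// row flips "even"/"odd" for every row below the insertion point.
class RowParity {
    static List<String> classesFor(int rowCount) {
        List<String> classes = new ArrayList<>();
        for (int i = 0; i < rowCount; i++) {
            // assumed convention: first row is "row-odd", then alternate
            classes.add(i % 2 == 0 ? "row-odd" : "row-even");
        }
        return classes;
    }
}
```

Comparing `classesFor(3)` with `classesFor(4)` shows why a single inserted row forces edits to all rows below it.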
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]