On Mon, 28 Apr 2025 15:23:18 GMT, kabutz wrote:
> We logged several bugs on the LinkedBlockingDeque. This aggregates them into
> a single bug report and PR.
>
> 1. LinkedBlockingDeque does not immediately throw InterruptedException in
> put/take
>
> The LinkedBlocking
On Tue, 13 May 2025 13:16:45 GMT, kabutz wrote:
>>> > @kabutz I think @AlanBateman might be able to have a look as well.
>>> > As for timing, it seems to me most reasonable if this PR (if it is to be
>>> > integrated) to go in _after_ JDK25 has been forked,
ld
> exit prior to locking by checking the count field.
>
> 4. LinkedBlockingDeque allows us to overflow size with addAll()
>
> In LinkedBlockingDeque.addAll() we first build up the chain of nodes and then
> add that chain in bulk to the existing nodes. We count the nodes in
On Wed, 11 Jun 2025 09:17:57 GMT, Viktor Klang wrote:
>> kabutz has updated the pull request incrementally with two additional
>> commits since the last revision:
>>
>> - Removed sizes from LBD constructors - not necessary
>> - More optimizing volatile reads
>
On Tue, 10 Jun 2025 10:13:20 GMT, Viktor Klang wrote:
>> kabutz has updated the pull request incrementally with one additional commit
>> since the last revision:
>>
>> Whitespace
>
> src/java.base/share/classes/java/util/concurrent/LinkedBlockingDeq
On Thu, 8 May 2025 14:27:07 GMT, kabutz wrote:
>> test/jdk/java/util/concurrent/tck/LinkedBlockingDequeTest.java line 1958:
>>
>>> 1956: q.add(four);
>>> 1957: q.add(five);
>>> 1958: q.add(six);
>>
>> Out of curiosity,
On Mon, 9 Jun 2025 13:11:34 GMT, Viktor Klang wrote:
>> What would you like to do if the invariant fails inside linkFirst() and
>> linkLast()? Should we throw an AssertionError each time?
>
> No, I was more thinking keeping it as it was: (return true/false from
> linkFirst / linkLast depending
riginal returning a boolean?
>>
>> I based the approach on the LBQ enqueue() and dequeue() methods, which also
>> return void and have a comment with the assertion.
>
> @kabutz I'd think maintaining the invariants within linkFirst and linkLast
> would be preferable (`
On Tue, 13 May 2025 12:44:38 GMT, Viktor Klang wrote:
> About a month or so.
Perfect, thanks @viktorklang-ora. I've marked the other issues as closed -
duplicates, and referenced this single umbrella PR.
-
PR Comment: https://git.openjdk.org/jdk/pull/24925#issuecomment-2876484773
On Tue, 8 Apr 2025 18:39:30 GMT, kabutz wrote:
> In the JavaDoc of LinkedBlockingDeque, it states: "Linked nodes are
> dynamically created upon each insertion unless this would bring the deque
> above capacity." However, in the current implementation, nodes are always
>
On Wed, 5 Feb 2025 15:36:15 GMT, kabutz wrote:
> The LinkedBlockingDeque does not behave consistently with other concurrency
> components. If we call putFirst(), putLast(), takeFirst(), or takeLast() with
> a thread that is interrupted, it does not immediately throw an
> Interrup
On Mon, 7 Apr 2025 11:55:08 GMT, kabutz wrote:
> LinkedBlockingDeque.clear() should preserve weakly-consistent iterators by
> linking f.prev and f.next back to f, allowing the iterators to continue from
> the first or last respectively. This would be consistent with how the other
>
On Wed, 9 Apr 2025 06:24:30 GMT, kabutz wrote:
> In LinkedBlockingDeque.addAll() we first build up the chain of nodes and then
> add that chain in bulk to the existing nodes. We count the nodes in "int n"
> and then whilst holding the lock, we check that we haven't
On Fri, 9 May 2025 06:37:40 GMT, Alan Bateman wrote:
> No, it will still use synchronized if there is contention, and this will not
> consume the park permit when it's a virtual thread. I'm not sure if Heinz ran
> into an issue, or just remembers the issue from 2015, Heinz?
I saw this comment in
On Thu, 8 May 2025 08:59:59 GMT, Viktor Klang wrote:
>> We logged several bugs on the LinkedBlockingDeque. This aggregates them into
>> a single bug report and PR.
>>
>> 1. LinkedBlockingDeque does not immediately throw InterruptedException in
>> put/take
>>
>> The LinkedBlockingDeque does no
On Thu, 8 May 2025 08:33:06 GMT, Viktor Klang wrote:
> I'm a bit uneasy about incrementing the `count` in `linkFirst` but not
> enforcing the invariant. What's the benefit to changing linkFirst and
> linkLast to return void instead of keeping the original returning a boolean?
I based the appro
On Thu, 8 May 2025 08:41:52 GMT, Viktor Klang wrote:
> We could likely check if there's any remaining capacity up front, and
> immediately return false?
We could if you like. I wanted to make as few changes as possible, to not
introduce unexpected changes. This particular bug was to stop a siz
On Wed, 7 May 2025 10:43:55 GMT, kabutz wrote:
>> @kabutz Thanks for opening this PR—just confirming that it's on my to-review
>> queue.
>
> @viktorklang-ora any idea whom else we can ask to approve this PR?
> @kabutz I think @AlanBateman might be able to have a
On Mon, 28 Apr 2025 19:48:40 GMT, Viktor Klang wrote:
>> @viktorklang-ora @DougLea @AlanBateman
>
> @kabutz Thanks for opening this PR—just confirming that it's on my to-review
> queue.
@viktorklang-ora any idea whom else we can ask to approve this PR?
On Fri, 2 May 2025 13:19:13 GMT, Doug Lea wrote:
>> We logged several bugs on the LinkedBlockingDeque. This aggregates them into
>> a single bug report and PR.
>>
>> 1. LinkedBlockingDeque does not immediately throw InterruptedException in
>> put/take
>>
>> The LinkedBlockingDeque does not be
On Tue, 29 Apr 2025 17:02:38 GMT, kabutz wrote:
> In 2015, Google discovered a rare disastrous classloading bug in first call
> to LockSupport.park() that occurred in the AppClassLoader using the Java 7
> ConcurrentHashMap, which used ReentrantLock for the synchronization.
>
>
In 2015, Google discovered a rare disastrous classloading bug in first call to
LockSupport.park() that occurred in the AppClassLoader using the Java 7
ConcurrentHashMap, which used ReentrantLock for the synchronization.
Since then, the recommended fix for this bug seems to be this code snippet i
We logged several bugs on the LinkedBlockingDeque. This aggregates them into a
single bug report and PR.
1. LinkedBlockingDeque does not immediately throw InterruptedException in
put/take
The LinkedBlockingDeque does not behave consistently with other concurrency
components. If we call putFirs
On Tue, 8 Apr 2025 08:50:37 GMT, kabutz wrote:
> One of the features of the LinkedBlockingDeque is that it is a doubly-linked
> node queue, with pointers in each node to "prev" and "next", which allows
> remove() in the Iterator to remove the node in constant tim
One of the features of the LinkedBlockingDeque is that it is a doubly-linked
node queue, with pointers in each node to "prev" and "next", which allows
remove() in the Iterator to remove the node in constant time. However, in the
JavaDoc of the class, it lists Iterator.remove() as an example of a
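The doubly-linked structure described above is what makes constant-time removal possible. A minimal sketch of the idea (the `Node` and `Unlinker` classes here are hypothetical, for illustration only, not the JDK's internal types):

```java
// Minimal sketch of why a doubly-linked node allows O(1) removal.
// Node and Unlinker are illustrative names, not the JDK source.
class Node<E> {
    E item;
    Node<E> prev, next;
    Node(E item) { this.item = item; }
}

class Unlinker {
    // With prev and next pointers the node splices itself out directly;
    // no traversal from the head is needed, hence constant time.
    static <E> void unlink(Node<E> x) {
        Node<E> p = x.prev, n = x.next;
        if (p != null) p.next = n;
        if (n != null) n.prev = p;
        x.item = null; // help GC
    }
}
```

A singly-linked list would have to walk from the head to find the predecessor, which is where a linear-time claim would come from.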
>> public HUUUGECollection(long size) {
>>     this.size = size;
>> }
>>
>> @Override
>> public int size() {
>>     return size < Integer.MAX_VALUE ? (int) size : Integer.MAX_VALUE;
>> }
>>
On Wed, 9 Apr 2025 12:36:21 GMT, Chen Liang wrote:
> Thanks! FYI the CSR requires a few more fields to be filled - see
> https://wiki.openjdk.org/display/csr/Fields+of+a+CSR+Request for details. You
> can click the "Edit" button on the CSR to see all those fields; many are not
> available in t
On Wed, 9 Apr 2025 12:36:21 GMT, Chen Liang wrote:
>> One of the features of the LinkedBlockingDeque is that it is a doubly-linked
>> node queue, with pointers in each node to "prev" and "next", which allows
>> remove() in the Iterator to remove the node in constant time. However, in
>> the Ja
On Wed, 9 Apr 2025 12:12:02 GMT, Chen Liang wrote:
>> One of the features of the LinkedBlockingDeque is that it is a doubly-linked
>> node queue, with pointers in each node to "prev" and "next", which allows
>> remove() in the Iterator to remove the node in constant time. However, in
>> the Ja
in constant time. However, in
>> the JavaDoc of the class, it lists Iterator.remove() as an example of a
>> method that takes linear time.
>
> FYI @kabutz you can log in to bugs.openjdk.org and create an issue for your
> patch. This issue can be noreg-doc, but will require
In LinkedBlockingDeque.addAll() we first build up the chain of nodes and then
add that chain in bulk to the existing nodes. We count the nodes in "int n" and
then whilst holding the lock, we check that we haven't exceeded the capacity
with "if (count + n <= capacity)". However, if we pass in a c
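The failure mode is plain int overflow in the guard. A minimal sketch with hypothetical values, chosen only to show the wraparound:

```java
// Sketch of the overflow: with n near Integer.MAX_VALUE, the guard
// "count + n <= capacity" wraps negative and wrongly passes.
int capacity = Integer.MAX_VALUE;
int count = 10;                // elements already in the deque
int n = Integer.MAX_VALUE;     // nodes counted while building the chain
int sum = count + n;           // wraps to a negative int
System.out.println(sum);                // -2147483639
System.out.println(sum <= capacity);    // true - the broken check passes
```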
In the JavaDoc of LinkedBlockingDeque, it states: "Linked nodes are dynamically
created upon each insertion unless this would bring the deque above capacity."
However, in the current implementation, nodes are always created, even if the
deque is full. This is because count is non-volatile, and w
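A sketch of the kind of pre-lock check the text implies (hypothetical class; in the real patch the field would be a volatile int read before locking, here approximated with an AtomicInteger):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch, not the JDK code: check the count before locking so
// that offer() on a full deque allocates nothing. The unlocked read may be
// stale, so the locked path must re-check; staleness only costs a wasted
// allocation, never a correctness violation.
class BoundedSketch {
    final int capacity = 2;
    final AtomicInteger count = new AtomicInteger();

    boolean offer(Object e) {
        if (count.get() >= capacity)
            return false;                      // fast exit, no Node created
        synchronized (this) {
            if (count.get() >= capacity)
                return false;                  // authoritative re-check
            count.incrementAndGet();
            return true;
        }
    }
}
```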
LinkedBlockingDeque.clear() should preserve weakly-consistent iterators by
linking f.prev and f.next back to f, allowing the iterators to continue from
the first or last respectively. This would be consistent with how the other
node-based weakly consistent queues LinkedBlockingQueue LinkedTransf
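As a point of comparison, LinkedBlockingQueue already behaves this way: an iterator taken before clear() keeps working afterwards. A small demonstration (no assertion is made about which elements are seen, only that iteration survives the concurrent clear):

```java
import java.util.Iterator;
import java.util.concurrent.LinkedBlockingQueue;

LinkedBlockingQueue<Integer> q = new LinkedBlockingQueue<>();
q.add(1); q.add(2); q.add(3);
Iterator<Integer> it = q.iterator();
it.next();            // start iterating
q.clear();            // modify while the iterator is live
q.add(4);
int seen = 0;
while (it.hasNext()) { it.next(); seen++; }   // weakly consistent: no CME
System.out.println("iterated without ConcurrentModificationException");
```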
On Wed, 5 Feb 2025 18:38:40 GMT, Doug Lea wrote:
> Thanks for finding this. The only question is whether we believe that any
> existing usage might depend on current behavior, and if so should it be done
> anyway?
Good question - your call Doug.
LinkedBlockingDeque's comment is the same as LI
The LinkedBlockingDeque does not behave consistently with other concurrency
components. If we call putFirst(), putLast(), takeFirst(), or takeLast() with a
thread that is interrupted, it does not immediately throw an
InterruptedException, the way that ArrayBlockingQueue and LinkedBlockingQueue
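The convention the other components follow comes from ReentrantLock.lockInterruptibly(), which throws immediately when the caller arrives already interrupted. A minimal illustration:

```java
import java.util.concurrent.locks.ReentrantLock;

ReentrantLock lock = new ReentrantLock();
Thread.currentThread().interrupt();       // arrive already interrupted
boolean threw = false;
try {
    lock.lockInterruptibly();             // must throw without blocking
    lock.unlock();
} catch (InterruptedException expected) {
    threw = true;                         // interrupt status was consumed
}
System.out.println(threw); // true
```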
On Tue, 19 Nov 2024 19:33:18 GMT, kabutz wrote:
> See internal bug review 9077848
>
> This is a very small unimportant mistake in the naming of the documentation
> that does not match the code.
This pull request has now been integrated.
Changeset: da2d7a09
Author: Dr Hei
See internal bug review 9077848
This is a very small unimportant mistake in the naming of the documentation
that does not match the code.
-
Commit messages:
- Made documentation match state field names WAIT and TIMED_WAIT
Changes: https://git.openjdk.org/jdk/pull/22253/files
Web
On Mon, 18 Nov 2024 15:46:04 GMT, Viktor Klang wrote:
> This change now halves each side of the split regardless, the old code would
> end up in a situation where it wouldn't decrement since unsigned shift right
> twice would lead to decrementing estimated size by 0.
>
> It's worth noting the
On Fri, 1 Nov 2024 10:13:15 GMT, kabutz wrote:
> Since Java 10, spliterators for the ConcurrentSkipListMap were pointing to
> the head, which has item == null, rather than to the first element. The
> trySplit() method no longer worked, and always returned null. Therefore,
> para
On Tue, 12 Nov 2024 15:41:36 GMT, kabutz wrote:
>>> @kabutz I think @DougLea identified some potential edge-cases with the
>>> proposed solution, so I added his suggested diff to the JBS Issue for
>>> reference.
>>
>> Indeed, I didn't check wha
just head.node as "origin". Since the "item" field is
> always null on head.node, we never enter the first if() statement in the
> trySplit() method and thus it always returns null.
kabutz has updated the pull request incrementally with one additional commit
since the las
just head.node as "origin". Since the "item" field is
> always null on head.node, we never enter the first if() statement in the
> trySplit() method and thus it always returns null.
kabutz has updated the pull request with a new target base due to a merge or a
rebase. The
On Mon, 4 Nov 2024 16:14:35 GMT, kabutz wrote:
>> Sure, where should I add that test?
>
>> @kabutz I think @DougLea identified some potential edge-cases with the
>> proposed solution, so I added his suggested diff to the JBS Issue for
>> reference.
>
> Indeed,
On Wed, 30 Oct 2024 08:54:55 GMT, kabutz wrote:
> The ArrayBlockingQueue has had a readObject() method since Java 7, which
> checks invariants of the deserialized object. However, it does not have a
> writeObject() method. This means that the ArrayBlockingQueue could be
> modifi
On Fri, 1 Nov 2024 10:58:09 GMT, kabutz wrote:
>> Since Java 10, spliterators for the ConcurrentSkipListMap were pointing to
>> the head, which has item == null, rather than to the first element. The
>> trySplit() method no longer worked, and always returned null. There
Since Java 10, spliterators for the ConcurrentSkipListMap were pointing to the
head, which has item == null, rather than to the first element. The trySplit()
method no longer worked, and always returned null. Therefore, parallel streams
have not worked for ConcurrentSkipListMap and ConcurrentSki
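Even on affected JDKs the results were correct; only the parallelism was silently lost, which is why the regression went unnoticed for so long. A quick correctness check that holds either way:

```java
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.stream.IntStream;

ConcurrentSkipListMap<Integer, Integer> map = new ConcurrentSkipListMap<>();
IntStream.range(0, 1000).forEach(i -> map.put(i, i));
// On JDKs with the broken trySplit() this runs sequentially despite
// .parallel(); the sum is correct either way, so the bug manifested
// only as a silent performance regression.
long sum = map.keySet().parallelStream().mapToLong(Integer::longValue).sum();
System.out.println(sum); // 499500
```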
The ArrayBlockingQueue has had a readObject() method since Java 7, which checks
invariants of the deserialized object. However, it does not have a
writeObject() method. This means that the ArrayBlockingQueue could be modified
whilst it is being written, resulting in broken invariants. The readOb
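The usual remedy is to serialize while holding the same lock the mutators use. A self-contained sketch of the pattern (`LockedBox` is a hypothetical class for illustration, not the proposed JDK change):

```java
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical class: writeObject() takes the same lock that mutators use,
// so a snapshot with consistent invariants is written.
class LockedBox implements Serializable {
    private final ReentrantLock lock = new ReentrantLock();
    int value = 42;

    void set(int v) {
        lock.lock();
        try { value = v; } finally { lock.unlock(); }
    }

    private void writeObject(ObjectOutputStream s) throws IOException {
        lock.lock();   // excludes concurrent set() during serialization
        try {
            s.defaultWriteObject();
        } finally {
            lock.unlock();
        }
    }
}
```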
FWIW, I've also submitted this as a bug report:
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8301863
Regards
Heinz
--
Dr Heinz M. Kabutz (PhD CompSci)
Author of "The Java™ Specialists' Newsletter" - www.javaspecialists.eu
Java Champion - www.javachampions.o
Hi Roger,
thanks for your quick response. That does seem to work better.
Regards
Heinz
UndecidedClass(merge(restrictLargeArrays,rejectUndecidedClass(merge(allowInteger,rejectUndecidedClass(allowArrayList)
Thus we could never allow any classes except for ArrayList.
Regards
Heinz
On Wed, 1 Feb 2023 15:51:33 GMT, kabutz wrote:
> ThreadLocalRandom.current().doubles().parallel() had a bad regression,
> because it called the superclass methods of the ThreadLocalRandomProxy's
> nextDouble() method instead of delegating to the ThreadLocalRandom.current().
&g
On Thu, 2 Feb 2023 07:03:44 GMT, Alan Bateman wrote:
>> ThreadLocalRandom.current().doubles().parallel() had a bad regression,
>> because it called the superclass methods of the ThreadLocalRandomProxy's
>> nextDouble() method instead of delegating to the ThreadLocalRandom.current().
>>
>> Affe
On Wed, 1 Feb 2023 15:51:33 GMT, kabutz wrote:
> ThreadLocalRandom.current().doubles().parallel() had a bad regression,
> because it called the superclass methods of the ThreadLocalRandomProxy's
> nextDouble() method instead of delegating to the ThreadLocalRandom.current().
&g
ThreadLocalRandom.current().doubles().parallel() had a bad regression, because
it called the superclass methods of the ThreadLocalRandomProxy's nextDouble()
method instead of delegating to the ThreadLocalRandom.current().
Affects all versions of ThreadLocalRandom since Java 17.
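The essence of the fix is that each call must re-fetch ThreadLocalRandom.current() rather than reuse state captured once, since parallel-stream worker threads each need their own generator. A minimal sketch of the delegation pattern (the supplier here is hypothetical, not the JDK's internal proxy):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.DoubleSupplier;

// Re-fetch ThreadLocalRandom.current() on every invocation, instead of
// capturing one instance when the stream is created; each worker thread
// then transparently uses its own generator.
DoubleSupplier perThread = () -> ThreadLocalRandom.current().nextDouble();
double d = perThread.getAsDouble();
System.out.println(d >= 0.0 && d < 1.0); // true
```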
-
C
On Tue, 16 Nov 2021 20:53:26 GMT, kabutz wrote:
>> This is a draft proposal for how we could improve stream performance for the
>> case where the streams are empty. Empty collections are common-place. If we
>> iterate over them with an Iterator, we would have to create one