[
https://issues.apache.org/jira/browse/KAFKA-17345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tim Fox updated KAFKA-17345:
----------------------------
Description:
It appears that the SCRAM server-side implementation in Apache Kafka does not
check that the nonce the client sends in its second message during
SASL/SCRAM-SHA-256 authentication matches the nonce the server returned in its
first response.
The SCRAM RFC is here: [https://datatracker.ietf.org/doc/html/rfc5802]
The part of the RFC stating that the client must return the same nonce it
received from the server:
{quote}The client then responds by sending a "client-final-message" with the
*same nonce* and a ClientProof computed using the selected hash function as
explained earlier.
{quote}
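For concreteness, here is the nonce flow in the example exchange from RFC 5802 section 5 (the values below come from the RFC's example, not from Kafka):

```
C: n,,n=user,r=fyko+d2lbbFgONRv9qkxdawL
S: r=fyko+d2lbbFgONRv9qkxdawL3rfcNHYJY1ZVvWVs7j,s=QSXCR+Q6sek8bf92,i=4096
C: c=biws,r=fyko+d2lbbFgONRv9qkxdawL3rfcNHYJY1ZVvWVs7j,p=v0X8v3Bz2T0CJGbJQyF0X+HI4Ts=
S: v=rmF9pqV8S7suAoZWja4dJRkFsKQ=
```

The "r=" attribute in the client-final-message (third line) must repeat, byte for byte, the full combined nonce from the server-first-message (second line).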
Here's the part of the RFC saying to check nonces:
{quote}The server *_verifies the nonce_* and the proof, verifies that the
authorization identity (if supplied by the client in the first
message) is authorized to act as the authentication identity, and, finally, it
responds with a "server-final-message", concluding the authentication exchange.
{quote}
It appears that the latest Apache Kafka _does not_ verify that the nonces
match, contrary to the RFC:
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/scram/internals/ScramSaslServer.java#L152]
- there is no check on the nonce at that point in the code.
According to the RFC, verifying the nonce on the server is *not* optional:
{quote}
The server MUST verify that the nonce sent by the client in the second message
is the same as the one sent by the server in its first message.
{quote}
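For illustration, here is a minimal sketch (in Java, since ScramSaslServer is Java) of the check the RFC requires the server to perform. The class, method, and parameter names are my own for this sketch, not Kafka's actual identifiers:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative sketch only, not Kafka's code: the RFC 5802 nonce check
// the server should perform on the client-final-message.
public class ScramNonceCheck {

    // Per RFC 5802, the "r" attribute of client-final-message must equal
    // the client's nonce from client-first-message concatenated with the
    // server's nonce extension from server-first-message.
    static boolean nonceMatches(String clientFinalNonce,
                                String clientFirstNonce,
                                String serverNonce) {
        byte[] expected = (clientFirstNonce + serverNonce)
                .getBytes(StandardCharsets.UTF_8);
        byte[] actual = clientFinalNonce.getBytes(StandardCharsets.UTF_8);
        // Constant-time comparison, as is customary for auth material.
        return MessageDigest.isEqual(expected, actual);
    }

    public static void main(String[] args) {
        // Nonce values borrowed from the RFC 5802 example exchange.
        String cNonce = "fyko+d2lbbFgONRv9qkxdawL";
        String sNonce = "3rfcNHYJY1ZVvWVs7j";
        System.out.println(nonceMatches(cNonce + sNonce, cNonce, sNonce)); // prints true
        System.out.println(nonceMatches(cNonce, cNonce, sNonce));          // prints false
    }
}
```

On a mismatch the server would fail the exchange with a SaslException rather than proceed to proof verification.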
I stumbled upon this while building a Kafka-compatible server in Go, using the
xdg-go SCRAM library, which is known to be strictly RFC compliant:
[https://github.com/xdg-go/scram]
That implementation *does* check that the nonce the client sends in its second
message matches the one the server returned in its first message, and this
results in an authentication failure when it is used with a librdkafka client:
it appears that librdkafka *does not* send back the same nonce it received
from the server. I have filed a related issue with librdkafka to cover that:
[https://github.com/confluentinc/librdkafka/issues/4814]
In summary, this means that a client (for example librdkafka) which sends an
incorrect nonce in the second message of SCRAM-SHA-256 authentication
(contrary to the RFC) can still pass authentication with Apache Kafka.
I am not a security expert and do not know whether this enables an exploit or
vulnerability in Apache Kafka, but I have filed this as critical, to be on the
safe side, so you can take a closer look.
I believe there is a significant possibility that not checking the nonce makes
current Apache Kafka installations that use SCRAM susceptible to replay attacks.
> Client nonce is not checked during SASL-SCRAM-256 authentication
> ----------------------------------------------------------------
>
> Key: KAFKA-17345
> URL: https://issues.apache.org/jira/browse/KAFKA-17345
> Project: Kafka
> Issue Type: Bug
> Components: security
> Affects Versions: 3.8.0
> Reporter: Tim Fox
> Priority: Critical
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)