Hello Roman,

Do you know if the reply from the server indicates that it received a
request header that was too large? If so, then you could try tuning the
configuration of the maximum request header size. The mechanism for this
tuning differs depending on which specific Hadoop version you are
running.
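
Since you are already using curl, adding verbose output should show the
full reply, including any message the server puts in the 400 response
body (the host and path here are just placeholders for your
environment):

    curl -v --negotiate -u : \
        "http://httpfs.example.com:14000/webhdfs/v1/tmp?op=LISTSTATUS"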

In Hadoop 2.x, HTTPFS uses Tomcat as the web server. The environment
variable HTTPFS_MAX_HTTP_HEADER_SIZE will pass through to override the
Tomcat default. [1]
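
As a sketch, you could set it in httpfs-env.sh (or export it in the
environment before starting the server); the 65536 here is just an
example value, large enough to leave headroom above your ~10 KB token:

    # httpfs-env.sh: raise Tomcat's maxHttpHeaderSize (value in bytes)
    export HTTPFS_MAX_HTTP_HEADER_SIZE=65536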

In Hadoop 3.x, HTTPFS switched to using Jetty instead of Tomcat. [2] The
configuration property "hadoop.http.max.request.header.size" in
core-site.xml sets the equivalent Jetty configuration.
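
As a sketch, that would look like this in core-site.xml (again, 65536
bytes is just an illustrative value), followed by a restart of the
HTTPFS server:

    <property>
      <name>hadoop.http.max.request.header.size</name>
      <value>65536</value>
    </property>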

HTTPFS originally used the Tomcat default of 8 KB, which would be too
small for your ~10 KB SPNEGO token. That default was increased in
HDFS-10423. [3] If you are running a version that predates this change,
then I'm not sure you'll have any option for tuning it, and you might
find that you need some kind of backport of that patch.

I hope this helps.

[1]
https://github.com/apache/hadoop/blob/rel/release-2.10.1/hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/httpfs-config.sh#L207
[2] https://issues.apache.org/jira/browse/HDFS-10860
[3] https://issues.apache.org/jira/browse/HDFS-10423

Chris Nauroth


On Mon, Jan 10, 2022 at 7:26 AM Roman Savchenko <[email protected]> wrote:

> Dear Hadoop Developers,
>
> I'm seeing an issue with the httpfs server (Cloudera server with Kerberos
> and HTTPFS enabled) when I'm trying to connect to it (via curl) with
> Kerberos authentication and a large negotiation token (~10k bytes) that
> is generated by a large number of Windows Security Groups and Windows
> SSPI. The server just replies with 400 (Bad Request). I'm curious, is it
> possible to handle it?
>
> Thanks for helping with it,
> Roman.
>
