[
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16699333#comment-16699333
]
Maxim Gekk commented on SPARK-23410:
------------------------------------
> Every line has the BOM?
No, a BOM can occur only at the very beginning of a file, not on every line.
> I wonder how we're going to support both UTF-16 and UTF-32 with BOMs.
Before reading the files, we need a pre-processing step that detects the
actual encoding from the BOM (UTF-16BE/LE or UTF-32BE/LE) and skips past it
(seek to the position right after the BOM). At the same time we could detect
the line separator, because that is impossible without knowing the encoding.
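As a rough sketch of the detection step described above (not Spark's actual implementation; class and method names here are hypothetical), the BOM signatures can be matched against the first bytes of a file. Note that the 4-byte UTF-32 BOMs must be checked before the 2-byte UTF-16 ones, because the UTF-32LE BOM begins with the UTF-16LE BOM bytes:

```java
import java.util.Arrays;

// Hypothetical sketch: map the leading bytes of a file to a charset name
// plus the number of BOM bytes the reader should seek past.
public final class BomDetector {

    /** Returns {charsetName, bomLengthInBytes}; falls back to UTF-8 with no BOM. */
    public static String[] detect(byte[] head) {
        // 4-byte UTF-32 BOMs first: UTF-32LE's BOM (FF FE 00 00)
        // starts with UTF-16LE's BOM (FF FE).
        if (startsWith(head, (byte) 0x00, (byte) 0x00, (byte) 0xFE, (byte) 0xFF))
            return new String[]{"UTF-32BE", "4"};
        if (startsWith(head, (byte) 0xFF, (byte) 0xFE, (byte) 0x00, (byte) 0x00))
            return new String[]{"UTF-32LE", "4"};
        if (startsWith(head, (byte) 0xFE, (byte) 0xFF))
            return new String[]{"UTF-16BE", "2"};
        if (startsWith(head, (byte) 0xFF, (byte) 0xFE))
            return new String[]{"UTF-16LE", "2"};
        if (startsWith(head, (byte) 0xEF, (byte) 0xBB, (byte) 0xBF))
            return new String[]{"UTF-8", "3"};
        // No BOM: assume UTF-8, nothing to skip.
        return new String[]{"UTF-8", "0"};
    }

    private static boolean startsWith(byte[] head, byte... bom) {
        if (head.length < bom.length) return false;
        for (int i = 0; i < bom.length; i++) {
            if (head[i] != bom[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] utf16le = {(byte) 0xFF, (byte) 0xFE, 0x61, 0x00};
        System.out.println(Arrays.toString(detect(utf16le))); // prints [UTF-16LE, 2]
    }
}
```

A reader would then open the file, seek past the reported BOM length, and decode the rest with the detected charset.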
This pre-processing can be done on the driver or in a distributed way on
executors. For the former case, we could introduce a threshold for the number
of files processed in parallel.
This pre-processing would be useful independently of UTF-16/UTF-32 with BOMs:
it could detect the lineSep for encodings other than UTF-8 in the per-line
mode. For now we force users to set lineSep explicitly in those cases.
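The reason lineSep detection depends on the encoding is that the same character, e.g. '\n', has a different byte pattern in each charset, so records cannot be split on a fixed byte sequence before the charset is known. A minimal demonstration using the standard `java.nio.charset` API:

```java
import java.nio.charset.Charset;
import java.util.Arrays;

// Demonstrates that the byte representation of the line separator '\n'
// differs per encoding, so splitting on it requires knowing the charset.
public final class LineSepDemo {

    /** Bytes of "\n" in the given charset, e.g. [10, 0] for UTF-16LE. */
    public static byte[] sepBytes(String charsetName) {
        return "\n".getBytes(Charset.forName(charsetName));
    }

    public static void main(String[] args) {
        for (String name : new String[]{"UTF-8", "UTF-16LE", "UTF-16BE", "UTF-32LE"}) {
            System.out.println(name + " -> " + Arrays.toString(sepBytes(name)));
        }
    }
}
```

For example, UTF-8 yields the single byte 0x0A, UTF-16LE yields 0x0A 0x00, and UTF-16BE yields 0x00 0x0A; a UTF-8 splitter applied to UTF-16 data would cut records in the middle of code units.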
> Unable to read jsons in charset different from UTF-8
> ----------------------------------------------------
>
> Key: SPARK-23410
> URL: https://issues.apache.org/jira/browse/SPARK-23410
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.3.0
> Reporter: Maxim Gekk
> Priority: Major
> Attachments: utf16WithBOM.json
>
>
> Currently the JSON parser is forced to read JSON files in UTF-8. This
> behavior breaks backward compatibility with Spark 2.2.1 and earlier
> versions, which could read JSON files in UTF-16, UTF-32 and other encodings
> thanks to the auto-detection mechanism of the Jackson library. We need to
> give users back the ability to read JSON files in a specified charset
> and/or to detect the charset automatically, as before.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]