This is an automated email from the ASF dual-hosted git repository.
dongjoon pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/spark-connect-swift.git
The following commit(s) were added to refs/heads/main by this push:
new 8816b7c [SPARK-53777] Update `Spark Connect`-generated `Swift` source code with `4.1.0-preview2`
8816b7c is described below
commit 8816b7c154d8c0ed2ff3eb90ffe969d9077ff171
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Wed Oct 1 13:07:03 2025 -0700
[SPARK-53777] Update `Spark Connect`-generated `Swift` source code with `4.1.0-preview2`
### What changes were proposed in this pull request?
This PR aims to update Spark Connect-generated Swift source code with
Apache Spark `4.1.0-preview2`.
### Why are the changes needed?
There are many changes in Apache Spark 4.1.0:
- https://github.com/apache/spark/pull/52342
- https://github.com/apache/spark/pull/52256
- https://github.com/apache/spark/pull/52271
- https://github.com/apache/spark/pull/52242
- https://github.com/apache/spark/pull/51473
- https://github.com/apache/spark/pull/51653
- https://github.com/apache/spark/pull/52072
- https://github.com/apache/spark/pull/51561
- https://github.com/apache/spark/pull/51563
- https://github.com/apache/spark/pull/51489
- https://github.com/apache/spark/pull/51507
- https://github.com/apache/spark/pull/51462
- https://github.com/apache/spark/pull/51464
- https://github.com/apache/spark/pull/51442
This lets the client pick up the latest bug fixes and use the new messages when developing features for `4.1.0-preview2`. The sources were regenerated as follows:
```
$ git clone -b v4.1.0-preview2 https://github.com/apache/spark.git
$ cd spark/sql/connect/common/src/main/protobuf/
$ protoc --swift_out=. spark/connect/*.proto
$ protoc --grpc-swift_out=. spark/connect/*.proto
# Remove empty gRPC files
$ cd spark/connect
$ grep 'This file contained no services' *
catalog.grpc.swift:// This file contained no services.
commands.grpc.swift:// This file contained no services.
common.grpc.swift:// This file contained no services.
example_plugins.grpc.swift:// This file contained no services.
expressions.grpc.swift:// This file contained no services.
ml_common.grpc.swift:// This file contained no services.
ml.grpc.swift:// This file contained no services.
pipelines.grpc.swift:// This file contained no services.
relations.grpc.swift:// This file contained no services.
types.grpc.swift:// This file contained no services.
$ rm catalog.grpc.swift commands.grpc.swift common.grpc.swift \
    example_plugins.grpc.swift expressions.grpc.swift ml_common.grpc.swift \
    ml.grpc.swift pipelines.grpc.swift relations.grpc.swift types.grpc.swift
```
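As an illustration of the newly generated API, here is a minimal, hypothetical client-side sketch (not part of this PR; the `requestOptions` wiring is assumed from the existing generated `ExecutePlanRequest` message) of opting in to chunked Arrow results:
```swift
// Hypothetical sketch only. `Spark_Connect_ResultChunkingOptions`,
// `allowArrowBatchChunking`, and `preferredArrowChunkSize` come from the
// sources regenerated in this commit; the `requestOptions` array is assumed
// from the existing generated `ExecutePlanRequest`.
var chunking = Spark_Connect_ResultChunkingOptions()
chunking.allowArrowBatchChunking = true
// The preferred size must fall within [1KB, max batch size on server].
chunking.preferredArrowChunkSize = 4 * 1024 * 1024  // 4 MiB

var option = Spark_Connect_ExecutePlanRequest.RequestOption()
option.resultChunkingOptions = chunking

var request = Spark_Connect_ExecutePlanRequest()
request.requestOptions.append(option)
```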
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Pass the CIs.
### Was this patch authored or co-authored using generative AI tooling?
No.
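For reference, a client consuming the new `ArrowBatch` chunking fields might reassemble a batch along these lines (a sketch assuming the server streams the chunks of one batch contiguously and in order; not part of this PR):
```swift
import Foundation

// Hypothetical sketch only. `chunkIndex`, `numChunksInBatch`,
// `hasNumChunksInBatch`, and `data` come from the regenerated
// `Spark_Connect_ExecutePlanResponse.ArrowBatch`; the reassembly
// strategy itself is an assumption.
var pending = Data()

func consume(_ batch: Spark_Connect_ExecutePlanResponse.ArrowBatch) -> Data? {
  guard batch.hasNumChunksInBatch else {
    return batch.data  // chunking disabled: the response is the whole batch
  }
  pending.append(batch.data)
  if batch.chunkIndex == batch.numChunksInBatch - 1 {
    defer { pending = Data() }
    return pending  // last chunk: the complete Arrow batch
  }
  return nil  // more chunks of this batch are still in flight
}
```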
Closes #250 from dongjoon-hyun/SPARK-53777.
Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
---
Sources/SparkConnect/SparkConnectClient.swift | 2 +-
Sources/SparkConnect/base.pb.swift | 291 +++++++++++++++++++++++++-
Sources/SparkConnect/expressions.pb.swift | 261 ++++++++++++++++++++++-
Sources/SparkConnect/ml.pb.swift | 84 +++++++-
Sources/SparkConnect/pipelines.pb.swift | 91 +++++---
Sources/SparkConnect/types.pb.swift | 90 +++++++-
6 files changed, 782 insertions(+), 37 deletions(-)
diff --git a/Sources/SparkConnect/SparkConnectClient.swift b/Sources/SparkConnect/SparkConnectClient.swift
index 7f033fd..71b1e63 100644
--- a/Sources/SparkConnect/SparkConnectClient.swift
+++ b/Sources/SparkConnect/SparkConnectClient.swift
@@ -1288,7 +1288,7 @@ public actor SparkConnectClient {
defineFlow.dataflowGraphID = dataflowGraphID
defineFlow.flowName = flowName
defineFlow.targetDatasetName = targetDatasetName
- defineFlow.plan = relation
+ defineFlow.relation = relation
var pipelineCommand = Spark_Connect_PipelineCommand()
pipelineCommand.commandType = .defineFlow(defineFlow)
diff --git a/Sources/SparkConnect/base.pb.swift b/Sources/SparkConnect/base.pb.swift
index c2e962a..ec561cc 100644
--- a/Sources/SparkConnect/base.pb.swift
+++ b/Sources/SparkConnect/base.pb.swift
@@ -1136,6 +1136,14 @@ struct Spark_Connect_ExecutePlanRequest: @unchecked Sendable {
set {requestOption = .reattachOptions(newValue)}
}
+ var resultChunkingOptions: Spark_Connect_ResultChunkingOptions {
+ get {
+ if case .resultChunkingOptions(let v)? = requestOption {return v}
+ return Spark_Connect_ResultChunkingOptions()
+ }
+ set {requestOption = .resultChunkingOptions(newValue)}
+ }
+
/// Extension type for request options
var `extension`: SwiftProtobuf.Google_Protobuf_Any {
get {
@@ -1149,6 +1157,7 @@ struct Spark_Connect_ExecutePlanRequest: @unchecked Sendable {
enum OneOf_RequestOption: Equatable, Sendable {
case reattachOptions(Spark_Connect_ReattachOptions)
+ case resultChunkingOptions(Spark_Connect_ResultChunkingOptions)
/// Extension type for request options
case `extension`(SwiftProtobuf.Google_Protobuf_Any)
@@ -1428,11 +1437,35 @@ struct Spark_Connect_ExecutePlanResponse: Sendable {
/// Clears the value of `startOffset`. Subsequent reads from it will return its default value.
mutating func clearStartOffset() {self._startOffset = nil}
+ /// Index of this chunk in the batch if chunking is enabled. The index starts from 0.
+ var chunkIndex: Int64 {
+ get {return _chunkIndex ?? 0}
+ set {_chunkIndex = newValue}
+ }
+ /// Returns true if `chunkIndex` has been explicitly set.
+ var hasChunkIndex: Bool {return self._chunkIndex != nil}
+ /// Clears the value of `chunkIndex`. Subsequent reads from it will return its default value.
+ mutating func clearChunkIndex() {self._chunkIndex = nil}
+
+ /// Total number of chunks in this batch if chunking is enabled.
+ /// It is missing when chunking is disabled - the batch is returned whole
+ /// and client will treat this response as the batch.
+ var numChunksInBatch: Int64 {
+ get {return _numChunksInBatch ?? 0}
+ set {_numChunksInBatch = newValue}
+ }
+ /// Returns true if `numChunksInBatch` has been explicitly set.
+ var hasNumChunksInBatch: Bool {return self._numChunksInBatch != nil}
+ /// Clears the value of `numChunksInBatch`. Subsequent reads from it will return its default value.
+ mutating func clearNumChunksInBatch() {self._numChunksInBatch = nil}
+
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
fileprivate var _startOffset: Int64? = nil
+ fileprivate var _chunkIndex: Int64? = nil
+ fileprivate var _numChunksInBatch: Int64? = nil
}
struct Metrics: Sendable {
@@ -2390,6 +2423,41 @@ struct Spark_Connect_ReattachOptions: Sendable {
init() {}
}
+struct Spark_Connect_ResultChunkingOptions: Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ /// Although Arrow results are split into batches with a size limit according to estimation, the
+ /// size of the batches is not guaranteed to be less than the limit, especially when a single row
+ /// is larger than the limit, in which case the server will fail to split it further into smaller
+ /// batches. As a result, the client may encounter a gRPC error stating "Received message larger
+ /// than max" when a batch is too large.
+ /// If allow_arrow_batch_chunking=true, the server will split large Arrow batches into smaller chunks,
+ /// and the client is expected to handle the chunked Arrow batches.
+ ///
+ /// If false, the server will not chunk large Arrow batches.
+ var allowArrowBatchChunking: Bool = false
+
+ /// Optional preferred Arrow batch size in bytes for the server to use when sending Arrow results.
+ /// The server will attempt to use this size if it is set and within the valid range
+ /// ([1KB, max batch size on server]). Otherwise, the server's maximum batch size is used.
+ var preferredArrowChunkSize: Int64 {
+ get {return _preferredArrowChunkSize ?? 0}
+ set {_preferredArrowChunkSize = newValue}
+ }
+ /// Returns true if `preferredArrowChunkSize` has been explicitly set.
+ var hasPreferredArrowChunkSize: Bool {return self._preferredArrowChunkSize != nil}
+ /// Clears the value of `preferredArrowChunkSize`. Subsequent reads from it will return its default value.
+ mutating func clearPreferredArrowChunkSize() {self._preferredArrowChunkSize = nil}
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
+
+ fileprivate var _preferredArrowChunkSize: Int64? = nil
+}
+
struct Spark_Connect_ReattachExecuteRequest: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
@@ -2918,12 +2986,79 @@ struct Spark_Connect_FetchErrorDetailsResponse: Sendable {
/// Clears the value of `sqlState`. Subsequent reads from it will return its default value.
mutating func clearSqlState() {self._sqlState = nil}
+ /// Additional information if the error was caused by a breaking change.
+ var breakingChangeInfo: Spark_Connect_FetchErrorDetailsResponse.BreakingChangeInfo {
+ get {return _breakingChangeInfo ?? Spark_Connect_FetchErrorDetailsResponse.BreakingChangeInfo()}
+ set {_breakingChangeInfo = newValue}
+ }
+ /// Returns true if `breakingChangeInfo` has been explicitly set.
+ var hasBreakingChangeInfo: Bool {return self._breakingChangeInfo != nil}
+ /// Clears the value of `breakingChangeInfo`. Subsequent reads from it will return its default value.
+ mutating func clearBreakingChangeInfo() {self._breakingChangeInfo = nil}
+
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
fileprivate var _errorClass: String? = nil
fileprivate var _sqlState: String? = nil
+ fileprivate var _breakingChangeInfo: Spark_Connect_FetchErrorDetailsResponse.BreakingChangeInfo? = nil
+ }
+
+ /// BreakingChangeInfo defines the schema for breaking change information.
+ struct BreakingChangeInfo: Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ /// A message explaining how the user can migrate their job to work
+ /// with the breaking change.
+ var migrationMessage: [String] = []
+
+ /// A spark config flag that can be used to mitigate the breaking change.
+ var mitigationConfig: Spark_Connect_FetchErrorDetailsResponse.MitigationConfig {
+ get {return _mitigationConfig ?? Spark_Connect_FetchErrorDetailsResponse.MitigationConfig()}
+ set {_mitigationConfig = newValue}
+ }
+ /// Returns true if `mitigationConfig` has been explicitly set.
+ var hasMitigationConfig: Bool {return self._mitigationConfig != nil}
+ /// Clears the value of `mitigationConfig`. Subsequent reads from it will return its default value.
+ mutating func clearMitigationConfig() {self._mitigationConfig = nil}
+
+ /// If true, the breaking change should be inspected manually.
+ /// If false, the spark job should be retried by setting the mitigationConfig.
+ var needsAudit: Bool {
+ get {return _needsAudit ?? false}
+ set {_needsAudit = newValue}
+ }
+ /// Returns true if `needsAudit` has been explicitly set.
+ var hasNeedsAudit: Bool {return self._needsAudit != nil}
+ /// Clears the value of `needsAudit`. Subsequent reads from it will return its default value.
+ mutating func clearNeedsAudit() {self._needsAudit = nil}
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
+
+ fileprivate var _mitigationConfig: Spark_Connect_FetchErrorDetailsResponse.MitigationConfig? = nil
+ fileprivate var _needsAudit: Bool? = nil
+ }
+
+ /// MitigationConfig defines a spark config flag that can be used to mitigate a breaking change.
+ struct MitigationConfig: Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ /// The spark config key.
+ var key: String = String()
+
+ /// The spark config value that mitigates the breaking change.
+ var value: String = String()
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
}
/// Error defines the schema for the representing exception.
@@ -4773,7 +4908,7 @@ extension Spark_Connect_ExecutePlanRequest: SwiftProtobuf.Message, SwiftProtobuf
extension Spark_Connect_ExecutePlanRequest.RequestOption: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_ExecutePlanRequest.protoMessageName + ".RequestOption"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}reattach_options\0\u{2}f\u{f}extension\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}reattach_options\0\u{3}result_chunking_options\0\u{2}e\u{f}extension\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -4794,6 +4929,19 @@ extension Spark_Connect_ExecutePlanRequest.RequestOption: SwiftProtobuf.Message,
self.requestOption = .reattachOptions(v)
}
}()
+ case 2: try {
+ var v: Spark_Connect_ResultChunkingOptions?
+ var hadOneofValue = false
+ if let current = self.requestOption {
+ hadOneofValue = true
+ if case .resultChunkingOptions(let m) = current {v = m}
+ }
+ try decoder.decodeSingularMessageField(value: &v)
+ if let v = v {
+ if hadOneofValue {try decoder.handleConflictingOneOf()}
+ self.requestOption = .resultChunkingOptions(v)
+ }
+ }()
case 999: try {
var v: SwiftProtobuf.Google_Protobuf_Any?
var hadOneofValue = false
@@ -4822,6 +4970,10 @@ extension Spark_Connect_ExecutePlanRequest.RequestOption: SwiftProtobuf.Message,
guard case .reattachOptions(let v)? = self.requestOption else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 1)
}()
+ case .resultChunkingOptions?: try {
+ guard case .resultChunkingOptions(let v)? = self.requestOption else { preconditionFailure() }
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 2)
+ }()
case .extension?: try {
guard case .extension(let v)? = self.requestOption else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 999)
@@ -5197,7 +5349,7 @@ extension Spark_Connect_ExecutePlanResponse.SqlCommandResult: SwiftProtobuf.Mess
extension Spark_Connect_ExecutePlanResponse.ArrowBatch: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_ExecutePlanResponse.protoMessageName + ".ArrowBatch"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}row_count\0\u{1}data\0\u{3}start_offset\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}row_count\0\u{1}data\0\u{3}start_offset\0\u{3}chunk_index\0\u{3}num_chunks_in_batch\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -5208,6 +5360,8 @@ extension Spark_Connect_ExecutePlanResponse.ArrowBatch: SwiftProtobuf.Message, S
case 1: try { try decoder.decodeSingularInt64Field(value: &self.rowCount) }()
case 2: try { try decoder.decodeSingularBytesField(value: &self.data) }()
case 3: try { try decoder.decodeSingularInt64Field(value: &self._startOffset) }()
+ case 4: try { try decoder.decodeSingularInt64Field(value: &self._chunkIndex) }()
+ case 5: try { try decoder.decodeSingularInt64Field(value: &self._numChunksInBatch) }()
default: break
}
}
@@ -5227,6 +5381,12 @@ extension Spark_Connect_ExecutePlanResponse.ArrowBatch: SwiftProtobuf.Message, S
try { if let v = self._startOffset {
try visitor.visitSingularInt64Field(value: v, fieldNumber: 3)
} }()
+ try { if let v = self._chunkIndex {
+ try visitor.visitSingularInt64Field(value: v, fieldNumber: 4)
+ } }()
+ try { if let v = self._numChunksInBatch {
+ try visitor.visitSingularInt64Field(value: v, fieldNumber: 5)
+ } }()
try unknownFields.traverse(visitor: &visitor)
}
@@ -5234,6 +5394,8 @@ extension Spark_Connect_ExecutePlanResponse.ArrowBatch: SwiftProtobuf.Message, S
if lhs.rowCount != rhs.rowCount {return false}
if lhs.data != rhs.data {return false}
if lhs._startOffset != rhs._startOffset {return false}
+ if lhs._chunkIndex != rhs._chunkIndex {return false}
+ if lhs._numChunksInBatch != rhs._numChunksInBatch {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
@@ -6628,6 +6790,45 @@ extension Spark_Connect_ReattachOptions: SwiftProtobuf.Message, SwiftProtobuf._M
}
}
+extension Spark_Connect_ResultChunkingOptions: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = _protobuf_package + ".ResultChunkingOptions"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}allow_arrow_batch_chunking\0\u{3}preferred_arrow_chunk_size\0")
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeSingularBoolField(value: &self.allowArrowBatchChunking) }()
+ case 2: try { try decoder.decodeSingularInt64Field(value: &self._preferredArrowChunkSize) }()
+ default: break
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every if/case branch local when no optimizations
+ // are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
+ // https://github.com/apple/swift-protobuf/issues/1182
+ if self.allowArrowBatchChunking != false {
+ try visitor.visitSingularBoolField(value: self.allowArrowBatchChunking, fieldNumber: 1)
+ }
+ try { if let v = self._preferredArrowChunkSize {
+ try visitor.visitSingularInt64Field(value: v, fieldNumber: 2)
+ } }()
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_ResultChunkingOptions, rhs: Spark_Connect_ResultChunkingOptions) -> Bool {
+ if lhs.allowArrowBatchChunking != rhs.allowArrowBatchChunking {return false}
+ if lhs._preferredArrowChunkSize != rhs._preferredArrowChunkSize {return false}
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
extension Spark_Connect_ReattachExecuteRequest: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".ReattachExecuteRequest"
static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}session_id\0\u{3}user_context\0\u{3}operation_id\0\u{3}client_type\0\u{3}last_response_id\0\u{3}client_observed_server_side_session_id\0")
@@ -7179,7 +7380,7 @@ extension Spark_Connect_FetchErrorDetailsResponse.QueryContext.ContextType: Swif
extension Spark_Connect_FetchErrorDetailsResponse.SparkThrowable: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_FetchErrorDetailsResponse.protoMessageName + ".SparkThrowable"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}error_class\0\u{3}message_parameters\0\u{3}query_contexts\0\u{3}sql_state\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}error_class\0\u{3}message_parameters\0\u{3}query_contexts\0\u{3}sql_state\0\u{3}breaking_change_info\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -7191,6 +7392,7 @@ extension Spark_Connect_FetchErrorDetailsResponse.SparkThrowable: SwiftProtobuf.
case 2: try { try decoder.decodeMapField(fieldType: SwiftProtobuf._ProtobufMap<SwiftProtobuf.ProtobufString,SwiftProtobuf.ProtobufString>.self, value: &self.messageParameters) }()
case 3: try { try decoder.decodeRepeatedMessageField(value: &self.queryContexts) }()
case 4: try { try decoder.decodeSingularStringField(value: &self._sqlState) }()
+ case 5: try { try decoder.decodeSingularMessageField(value: &self._breakingChangeInfo) }()
default: break
}
}
@@ -7213,6 +7415,9 @@ extension Spark_Connect_FetchErrorDetailsResponse.SparkThrowable: SwiftProtobuf.
try { if let v = self._sqlState {
try visitor.visitSingularStringField(value: v, fieldNumber: 4)
} }()
+ try { if let v = self._breakingChangeInfo {
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 5)
+ } }()
try unknownFields.traverse(visitor: &visitor)
}
@@ -7221,6 +7426,86 @@ extension Spark_Connect_FetchErrorDetailsResponse.SparkThrowable: SwiftProtobuf.
if lhs.messageParameters != rhs.messageParameters {return false}
if lhs.queryContexts != rhs.queryContexts {return false}
if lhs._sqlState != rhs._sqlState {return false}
+ if lhs._breakingChangeInfo != rhs._breakingChangeInfo {return false}
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
+extension Spark_Connect_FetchErrorDetailsResponse.BreakingChangeInfo: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = Spark_Connect_FetchErrorDetailsResponse.protoMessageName + ".BreakingChangeInfo"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}migration_message\0\u{3}mitigation_config\0\u{3}needs_audit\0")
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeRepeatedStringField(value: &self.migrationMessage) }()
+ case 2: try { try decoder.decodeSingularMessageField(value: &self._mitigationConfig) }()
+ case 3: try { try decoder.decodeSingularBoolField(value: &self._needsAudit) }()
+ default: break
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every if/case branch local when no optimizations
+ // are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
+ // https://github.com/apple/swift-protobuf/issues/1182
+ if !self.migrationMessage.isEmpty {
+ try visitor.visitRepeatedStringField(value: self.migrationMessage, fieldNumber: 1)
+ }
+ try { if let v = self._mitigationConfig {
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 2)
+ } }()
+ try { if let v = self._needsAudit {
+ try visitor.visitSingularBoolField(value: v, fieldNumber: 3)
+ } }()
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_FetchErrorDetailsResponse.BreakingChangeInfo, rhs: Spark_Connect_FetchErrorDetailsResponse.BreakingChangeInfo) -> Bool {
+ if lhs.migrationMessage != rhs.migrationMessage {return false}
+ if lhs._mitigationConfig != rhs._mitigationConfig {return false}
+ if lhs._needsAudit != rhs._needsAudit {return false}
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
+extension Spark_Connect_FetchErrorDetailsResponse.MitigationConfig: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = Spark_Connect_FetchErrorDetailsResponse.protoMessageName + ".MitigationConfig"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}key\0\u{1}value\0")
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeSingularStringField(value: &self.key) }()
+ case 2: try { try decoder.decodeSingularStringField(value: &self.value) }()
+ default: break
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ if !self.key.isEmpty {
+ try visitor.visitSingularStringField(value: self.key, fieldNumber: 1)
+ }
+ if !self.value.isEmpty {
+ try visitor.visitSingularStringField(value: self.value, fieldNumber: 2)
+ }
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_FetchErrorDetailsResponse.MitigationConfig, rhs: Spark_Connect_FetchErrorDetailsResponse.MitigationConfig) -> Bool {
+ if lhs.key != rhs.key {return false}
+ if lhs.value != rhs.value {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
diff --git a/Sources/SparkConnect/expressions.pb.swift b/Sources/SparkConnect/expressions.pb.swift
index 0a268b0..c2fc6c1 100644
--- a/Sources/SparkConnect/expressions.pb.swift
+++ b/Sources/SparkConnect/expressions.pb.swift
@@ -218,6 +218,14 @@ struct Spark_Connect_Expression: @unchecked Sendable {
set {_uniqueStorage()._exprType = .subqueryExpression(newValue)}
}
+ var directShufflePartitionID: Spark_Connect_Expression.DirectShufflePartitionID {
+ get {
+ if case .directShufflePartitionID(let v)? = _storage._exprType {return v}
+ return Spark_Connect_Expression.DirectShufflePartitionID()
+ }
+ set {_uniqueStorage()._exprType = .directShufflePartitionID(newValue)}
+ }
+
/// This field is used to mark extensions to the protocol. When plugins generate arbitrary
/// relations they can add them here. During the planning the correct resolution is done.
var `extension`: SwiftProtobuf.Google_Protobuf_Any {
@@ -251,6 +259,7 @@ struct Spark_Connect_Expression: @unchecked Sendable {
case mergeAction(Spark_Connect_MergeAction)
case typedAggregateExpression(Spark_Connect_TypedAggregateExpression)
case subqueryExpression(Spark_Connect_SubqueryExpression)
+ case directShufflePartitionID(Spark_Connect_Expression.DirectShufflePartitionID)
/// This field is used to mark extensions to the protocol. When plugins generate arbitrary
/// relations they can add them here. During the planning the correct resolution is done.
case `extension`(SwiftProtobuf.Google_Protobuf_Any)
@@ -556,6 +565,31 @@ struct Spark_Connect_Expression: @unchecked Sendable {
fileprivate var _storage = _StorageClass.defaultInstance
}
+ /// Expression that takes a partition ID value and passes it through directly for use in
+ /// shuffle partitioning. This is used with RepartitionByExpression to allow users to
+ /// directly specify target partition IDs.
+ struct DirectShufflePartitionID: @unchecked Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ /// (Required) The expression that evaluates to the partition ID.
+ var child: Spark_Connect_Expression {
+ get {return _storage._child ?? Spark_Connect_Expression()}
+ set {_uniqueStorage()._child = newValue}
+ }
+ /// Returns true if `child` has been explicitly set.
+ var hasChild: Bool {return _storage._child != nil}
+ /// Clears the value of `child`. Subsequent reads from it will return its default value.
+ mutating func clearChild() {_uniqueStorage()._child = nil}
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
+
+ fileprivate var _storage = _StorageClass.defaultInstance
+ }
+
struct Cast: @unchecked Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
@@ -835,6 +869,28 @@ struct Spark_Connect_Expression: @unchecked Sendable {
set {literalType = .specializedArray(newValue)}
}
+ var time: Spark_Connect_Expression.Literal.Time {
+ get {
+ if case .time(let v)? = literalType {return v}
+ return Spark_Connect_Expression.Literal.Time()
+ }
+ set {literalType = .time(newValue)}
+ }
+
+ /// Data type information for the literal.
+ /// This field is required only in the root literal message for null values or
+ /// for data types (e.g., array, map, or struct) with non-trivial information.
+ /// If the data_type field is not set at the root level, the data type will be
+ /// inferred or retrieved from the deprecated data type fields using best efforts.
+ var dataType: Spark_Connect_DataType {
+ get {return _dataType ?? Spark_Connect_DataType()}
+ set {_dataType = newValue}
+ }
+ /// Returns true if `dataType` has been explicitly set.
+ var hasDataType: Bool {return self._dataType != nil}
+ /// Clears the value of `dataType`. Subsequent reads from it will return its default value.
+ mutating func clearDataType() {self._dataType = nil}
+
var unknownFields = SwiftProtobuf.UnknownStorage()
enum OneOf_LiteralType: Equatable, Sendable {
@@ -862,6 +918,7 @@ struct Spark_Connect_Expression: @unchecked Sendable {
case map(Spark_Connect_Expression.Literal.Map)
case `struct`(Spark_Connect_Expression.Literal.Struct)
case specializedArray(Spark_Connect_Expression.Literal.SpecializedArray)
+ case time(Spark_Connect_Expression.Literal.Time)
}
@@ -923,6 +980,11 @@ struct Spark_Connect_Expression: @unchecked Sendable {
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
+ /// (Deprecated) The element type of the array.
+ ///
+ /// This field is deprecated since Spark 4.1+. Use data_type field instead.
+ ///
+ /// NOTE: This field was marked as deprecated in the .proto file.
var elementType: Spark_Connect_DataType {
get {return _elementType ?? Spark_Connect_DataType()}
set {_elementType = newValue}
@@ -932,6 +994,7 @@ struct Spark_Connect_Expression: @unchecked Sendable {
/// Clears the value of `elementType`. Subsequent reads from it will return its default value.
mutating func clearElementType() {self._elementType = nil}
+ /// The literal values that make up the array elements.
var elements: [Spark_Connect_Expression.Literal] = []
var unknownFields = SwiftProtobuf.UnknownStorage()
@@ -946,6 +1009,11 @@ struct Spark_Connect_Expression: @unchecked Sendable {
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
+ /// (Deprecated) The key type of the map.
+ ///
+ /// This field is deprecated since Spark 4.1+. Use data_type field instead.
+ ///
+ /// NOTE: This field was marked as deprecated in the .proto file.
var keyType: Spark_Connect_DataType {
get {return _keyType ?? Spark_Connect_DataType()}
set {_keyType = newValue}
@@ -955,6 +1023,12 @@ struct Spark_Connect_Expression: @unchecked Sendable {
/// Clears the value of `keyType`. Subsequent reads from it will return its default value.
mutating func clearKeyType() {self._keyType = nil}
+ /// (Deprecated) The value type of the map.
+ ///
+ /// This field is deprecated since Spark 4.1+ and should only be set
+ /// if the data_type field is not set. Use data_type field instead.
+ ///
+ /// NOTE: This field was marked as deprecated in the .proto file.
var valueType: Spark_Connect_DataType {
get {return _valueType ?? Spark_Connect_DataType()}
set {_valueType = newValue}
@@ -964,8 +1038,10 @@ struct Spark_Connect_Expression: @unchecked Sendable {
/// Clears the value of `valueType`. Subsequent reads from it will return its default value.
mutating func clearValueType() {self._valueType = nil}
+ /// The literal keys that make up the map.
var keys: [Spark_Connect_Expression.Literal] = []
+ /// The literal values that make up the map.
var values: [Spark_Connect_Expression.Literal] = []
var unknownFields = SwiftProtobuf.UnknownStorage()
@@ -981,6 +1057,12 @@ struct Spark_Connect_Expression: @unchecked Sendable {
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
// methods supported on all messages.
+ /// (Deprecated) The type of the struct.
+ ///
+ /// This field is deprecated since Spark 4.1+ because using DataType as the type of a struct
+ /// is ambiguous. Use data_type field instead.
+ ///
+ /// NOTE: This field was marked as deprecated in the .proto file.
var structType: Spark_Connect_DataType {
get {return _structType ?? Spark_Connect_DataType()}
set {_structType = newValue}
@@ -990,6 +1072,7 @@ struct Spark_Connect_Expression: @unchecked Sendable {
/// Clears the value of `structType`. Subsequent reads from it will return its default value.
mutating func clearStructType() {self._structType = nil}
+ /// The literal values that make up the struct elements.
var elements: [Spark_Connect_Expression.Literal] = []
var unknownFields = SwiftProtobuf.UnknownStorage()
@@ -1069,7 +1152,33 @@ struct Spark_Connect_Expression: @unchecked Sendable {
init() {}
}
+ struct Time: Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ var nano: Int64 = 0
+
+ /// The precision of this time, if omitted, uses the default value of MICROS_PRECISION.
+ var precision: Int32 {
+ get {return _precision ?? 0}
+ set {_precision = newValue}
+ }
+ /// Returns true if `precision` has been explicitly set.
+ var hasPrecision: Bool {return self._precision != nil}
+ /// Clears the value of `precision`. Subsequent reads from it will return its default value.
+ mutating func clearPrecision() {self._precision = nil}
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
+
+ fileprivate var _precision: Int32? = nil
+ }
+
init() {}
+
+ fileprivate var _dataType: Spark_Connect_DataType? = nil
}
/// An unresolved attribute that is not explicitly bound to a specific column, but the column
@@ -1865,7 +1974,7 @@ fileprivate let _protobuf_package = "spark.connect"
extension Spark_Connect_Expression: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".Expression"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}literal\0\u{3}unresolved_attribute\0\u{3}unresolved_function\0\u{3}expression_string\0\u{3}unresolved_star\0\u{1}alias\0\u{1}cast\0\u{3}unresolved_regex\0\u{3}sort_order\0\u{3}lambda_function\0\u{1}window\0\u{3}unresolved_extract_value\0\u{3}update_fields\0\u{3}unresolved_named_lambda_variable\0\u{3}common_inline_user_defined_function\0\u{3}call_function\0\u{3}named_argument_expression\0\u{1}common\0\u{3}merge_acti
[...]
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}literal\0\u{3}unresolved_attribute\0\u{3}unresolved_function\0\u{3}expression_string\0\u{3}unresolved_star\0\u{1}alias\0\u{1}cast\0\u{3}unresolved_regex\0\u{3}sort_order\0\u{3}lambda_function\0\u{1}window\0\u{3}unresolved_extract_value\0\u{3}update_fields\0\u{3}unresolved_named_lambda_variable\0\u{3}common_inline_user_defined_function\0\u{3}call_function\0\u{3}named_argument_expression\0\u{1}common\0\u{3}merge_acti
[...]
fileprivate class _StorageClass {
var _common: Spark_Connect_ExpressionCommon? = nil
@@ -2161,6 +2270,19 @@ extension Spark_Connect_Expression: SwiftProtobuf.Message, SwiftProtobuf._Messag
_storage._exprType = .subqueryExpression(v)
}
}()
+ case 22: try {
+ var v: Spark_Connect_Expression.DirectShufflePartitionID?
+ var hadOneofValue = false
+ if let current = _storage._exprType {
+ hadOneofValue = true
+ if case .directShufflePartitionID(let m) = current {v = m}
+ }
+ try decoder.decodeSingularMessageField(value: &v)
+ if let v = v {
+ if hadOneofValue {try decoder.handleConflictingOneOf()}
+ _storage._exprType = .directShufflePartitionID(v)
+ }
+ }()
case 999: try {
var v: SwiftProtobuf.Google_Protobuf_Any?
var hadOneofValue = false
@@ -2273,6 +2395,10 @@ extension Spark_Connect_Expression: SwiftProtobuf.Message, SwiftProtobuf._Messag
guard case .subqueryExpression(let v)? = _storage._exprType else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 21)
}()
+ case .directShufflePartitionID?: try {
+ guard case .directShufflePartitionID(let v)? = _storage._exprType else { preconditionFailure() }
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 22)
+ }()
case .extension?: try {
guard case .extension(let v)? = _storage._exprType else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 999)
@@ -2680,6 +2806,76 @@ extension Spark_Connect_Expression.SortOrder.NullOrdering: SwiftProtobuf._ProtoN
static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{2}\0SORT_NULLS_UNSPECIFIED\0\u{1}SORT_NULLS_FIRST\0\u{1}SORT_NULLS_LAST\0")
}
+extension Spark_Connect_Expression.DirectShufflePartitionID: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = Spark_Connect_Expression.protoMessageName + ".DirectShufflePartitionID"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}child\0")
+
+ fileprivate class _StorageClass {
+ var _child: Spark_Connect_Expression? = nil
+
+ // This property is used as the initial default value for new instances of the type.
+ // The type itself is protecting the reference to its storage via CoW semantics.
+ // This will force a copy to be made of this reference when the first mutation occurs;
+ // hence, it is safe to mark this as `nonisolated(unsafe)`.
+ static nonisolated(unsafe) let defaultInstance = _StorageClass()
+
+ private init() {}
+
+ init(copying source: _StorageClass) {
+ _child = source._child
+ }
+ }
+
+ fileprivate mutating func _uniqueStorage() -> _StorageClass {
+ if !isKnownUniquelyReferenced(&_storage) {
+ _storage = _StorageClass(copying: _storage)
+ }
+ return _storage
+ }
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ _ = _uniqueStorage()
+ try withExtendedLifetime(_storage) { (_storage: _StorageClass) in
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeSingularMessageField(value: &_storage._child) }()
+ default: break
+ }
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ try withExtendedLifetime(_storage) { (_storage: _StorageClass) in
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every if/case branch local when no optimizations
+ // are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
+ // https://github.com/apple/swift-protobuf/issues/1182
+ try { if let v = _storage._child {
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 1)
+ } }()
+ }
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_Expression.DirectShufflePartitionID, rhs: Spark_Connect_Expression.DirectShufflePartitionID) -> Bool {
+ if lhs._storage !== rhs._storage {
+ let storagesAreEqual: Bool = withExtendedLifetime((lhs._storage, rhs._storage)) { (_args: (_StorageClass, _StorageClass)) in
+ let _storage = _args.0
+ let rhs_storage = _args.1
+ if _storage._child != rhs_storage._child {return false}
+ return true
+ }
+ if !storagesAreEqual {return false}
+ }
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
extension Spark_Connect_Expression.Cast: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_Expression.protoMessageName + ".Cast"
static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}expr\0\u{1}type\0\u{3}type_str\0\u{3}eval_mode\0")
@@ -2798,7 +2994,7 @@ extension Spark_Connect_Expression.Cast.EvalMode: SwiftProtobuf._ProtoNameProvid
extension Spark_Connect_Expression.Literal: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_Expression.protoMessageName + ".Literal"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}null\0\u{1}binary\0\u{1}boolean\0\u{1}byte\0\u{1}short\0\u{1}integer\0\u{1}long\0\u{2}\u{3}float\0\u{1}double\0\u{1}decimal\0\u{1}string\0\u{2}\u{3}date\0\u{1}timestamp\0\u{3}timestamp_ntz\0\u{3}calendar_interval\0\u{3}year_month_interval\0\u{3}day_time_interval\0\u{1}array\0\u{1}map\0\u{1}struct\0\u{3}specialized_array\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}null\0\u{1}binary\0\u{1}boolean\0\u{1}byte\0\u{1}short\0\u{1}integer\0\u{1}long\0\u{2}\u{3}float\0\u{1}double\0\u{1}decimal\0\u{1}string\0\u{2}\u{3}date\0\u{1}timestamp\0\u{3}timestamp_ntz\0\u{3}calendar_interval\0\u{3}year_month_interval\0\u{3}day_time_interval\0\u{1}array\0\u{1}map\0\u{1}struct\0\u{3}specialized_array\0\u{1}time\0\u{4}J\u{1}data_type\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -3009,6 +3205,20 @@ extension Spark_Connect_Expression.Literal: SwiftProtobuf.Message, SwiftProtobuf
self.literalType = .specializedArray(v)
}
}()
+ case 26: try {
+ var v: Spark_Connect_Expression.Literal.Time?
+ var hadOneofValue = false
+ if let current = self.literalType {
+ hadOneofValue = true
+ if case .time(let m) = current {v = m}
+ }
+ try decoder.decodeSingularMessageField(value: &v)
+ if let v = v {
+ if hadOneofValue {try decoder.handleConflictingOneOf()}
+ self.literalType = .time(v)
+ }
+ }()
+ case 100: try { try decoder.decodeSingularMessageField(value: &self._dataType) }()
default: break
}
}
@@ -3104,13 +3314,21 @@ extension Spark_Connect_Expression.Literal: SwiftProtobuf.Message, SwiftProtobuf
guard case .specializedArray(let v)? = self.literalType else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 25)
}()
+ case .time?: try {
+ guard case .time(let v)? = self.literalType else { preconditionFailure() }
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 26)
+ }()
case nil: break
}
+ try { if let v = self._dataType {
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 100)
+ } }()
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Spark_Connect_Expression.Literal, rhs: Spark_Connect_Expression.Literal) -> Bool {
if lhs.literalType != rhs.literalType {return false}
+ if lhs._dataType != rhs._dataType {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
@@ -3462,6 +3680,45 @@ extension Spark_Connect_Expression.Literal.SpecializedArray: SwiftProtobuf.Messa
}
}
+extension Spark_Connect_Expression.Literal.Time: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = Spark_Connect_Expression.Literal.protoMessageName + ".Time"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}nano\0\u{1}precision\0")
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeSingularInt64Field(value: &self.nano) }()
+ case 2: try { try decoder.decodeSingularInt32Field(value: &self._precision) }()
+ default: break
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every if/case branch local when no optimizations
+ // are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
+ // https://github.com/apple/swift-protobuf/issues/1182
+ if self.nano != 0 {
+ try visitor.visitSingularInt64Field(value: self.nano, fieldNumber: 1)
+ }
+ try { if let v = self._precision {
+ try visitor.visitSingularInt32Field(value: v, fieldNumber: 2)
+ } }()
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_Expression.Literal.Time, rhs: Spark_Connect_Expression.Literal.Time) -> Bool {
+ if lhs.nano != rhs.nano {return false}
+ if lhs._precision != rhs._precision {return false}
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
extension Spark_Connect_Expression.UnresolvedAttribute: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_Expression.protoMessageName + ".UnresolvedAttribute"
static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}unparsed_identifier\0\u{3}plan_id\0\u{3}is_metadata_column\0")
diff --git a/Sources/SparkConnect/ml.pb.swift b/Sources/SparkConnect/ml.pb.swift
index 1ed6ecc..0acc9ca 100644
--- a/Sources/SparkConnect/ml.pb.swift
+++ b/Sources/SparkConnect/ml.pb.swift
@@ -116,6 +116,14 @@ struct Spark_Connect_MlCommand: Sendable {
set {command = .createSummary(newValue)}
}
+ var getModelSize: Spark_Connect_MlCommand.GetModelSize {
+ get {
+ if case .getModelSize(let v)? = command {return v}
+ return Spark_Connect_MlCommand.GetModelSize()
+ }
+ set {command = .getModelSize(newValue)}
+ }
+
var unknownFields = SwiftProtobuf.UnknownStorage()
enum OneOf_Command: Equatable, Sendable {
@@ -128,6 +136,7 @@ struct Spark_Connect_MlCommand: Sendable {
case cleanCache(Spark_Connect_MlCommand.CleanCache)
case getCacheInfo(Spark_Connect_MlCommand.GetCacheInfo)
case createSummary(Spark_Connect_MlCommand.CreateSummary)
+ case getModelSize(Spark_Connect_MlCommand.GetModelSize)
}
@@ -399,6 +408,28 @@ struct Spark_Connect_MlCommand: Sendable {
fileprivate var _dataset: Spark_Connect_Relation? = nil
}
+ /// This is for query the model estimated in-memory size
+ struct GetModelSize: Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ var modelRef: Spark_Connect_ObjectRef {
+ get {return _modelRef ?? Spark_Connect_ObjectRef()}
+ set {_modelRef = newValue}
+ }
+ /// Returns true if `modelRef` has been explicitly set.
+ var hasModelRef: Bool {return self._modelRef != nil}
+ /// Clears the value of `modelRef`. Subsequent reads from it will return its default value.
+ mutating func clearModelRef() {self._modelRef = nil}
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
+
+ fileprivate var _modelRef: Spark_Connect_ObjectRef? = nil
+ }
+
init() {}
}
@@ -532,7 +563,7 @@ fileprivate let _protobuf_package = "spark.connect"
extension Spark_Connect_MlCommand: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".MlCommand"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}fit\0\u{1}fetch\0\u{1}delete\0\u{1}write\0\u{1}read\0\u{1}evaluate\0\u{3}clean_cache\0\u{3}get_cache_info\0\u{3}create_summary\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}fit\0\u{1}fetch\0\u{1}delete\0\u{1}write\0\u{1}read\0\u{1}evaluate\0\u{3}clean_cache\0\u{3}get_cache_info\0\u{3}create_summary\0\u{3}get_model_size\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -657,6 +688,19 @@ extension Spark_Connect_MlCommand: SwiftProtobuf.Message, SwiftProtobuf._Message
self.command = .createSummary(v)
}
}()
+ case 10: try {
+ var v: Spark_Connect_MlCommand.GetModelSize?
+ var hadOneofValue = false
+ if let current = self.command {
+ hadOneofValue = true
+ if case .getModelSize(let m) = current {v = m}
+ }
+ try decoder.decodeSingularMessageField(value: &v)
+ if let v = v {
+ if hadOneofValue {try decoder.handleConflictingOneOf()}
+ self.command = .getModelSize(v)
+ }
+ }()
default: break
}
}
@@ -704,6 +748,10 @@ extension Spark_Connect_MlCommand: SwiftProtobuf.Message, SwiftProtobuf._Message
guard case .createSummary(let v)? = self.command else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 9)
}()
+ case .getModelSize?: try {
+ guard case .getModelSize(let v)? = self.command else { preconditionFailure() }
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 10)
+ }()
case nil: break
}
try unknownFields.traverse(visitor: &visitor)
@@ -1046,6 +1094,40 @@ extension Spark_Connect_MlCommand.CreateSummary: SwiftProtobuf.Message, SwiftPro
}
}
+extension Spark_Connect_MlCommand.GetModelSize: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = Spark_Connect_MlCommand.protoMessageName + ".GetModelSize"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}model_ref\0")
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeSingularMessageField(value: &self._modelRef) }()
+ default: break
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every if/case branch local when no optimizations
+ // are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
+ // https://github.com/apple/swift-protobuf/issues/1182
+ try { if let v = self._modelRef {
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 1)
+ } }()
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_MlCommand.GetModelSize, rhs: Spark_Connect_MlCommand.GetModelSize) -> Bool {
+ if lhs._modelRef != rhs._modelRef {return false}
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
extension Spark_Connect_MlCommandResult: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".MlCommandResult"
static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}param\0\u{1}summary\0\u{3}operator_info\0")
diff --git a/Sources/SparkConnect/pipelines.pb.swift b/Sources/SparkConnect/pipelines.pb.swift
index 241b5c9..2f8c655 100644
--- a/Sources/SparkConnect/pipelines.pb.swift
+++ b/Sources/SparkConnect/pipelines.pb.swift
@@ -360,28 +360,18 @@ struct Spark_Connect_PipelineCommand: Sendable {
mutating func clearTargetDatasetName() {self._targetDatasetName = nil}
/// An unresolved relation that defines the dataset's flow.
- var plan: Spark_Connect_Relation {
- get {return _plan ?? Spark_Connect_Relation()}
- set {_plan = newValue}
+ var relation: Spark_Connect_Relation {
+ get {return _relation ?? Spark_Connect_Relation()}
+ set {_relation = newValue}
}
- /// Returns true if `plan` has been explicitly set.
- var hasPlan: Bool {return self._plan != nil}
- /// Clears the value of `plan`. Subsequent reads from it will return its default value.
- mutating func clearPlan() {self._plan = nil}
+ /// Returns true if `relation` has been explicitly set.
+ var hasRelation: Bool {return self._relation != nil}
+ /// Clears the value of `relation`. Subsequent reads from it will return its default value.
+ mutating func clearRelation() {self._relation = nil}
/// SQL configurations set when running this flow.
var sqlConf: Dictionary<String,String> = [:]
- /// If true, this flow will only be run once per full refresh.
- var once: Bool {
- get {return _once ?? false}
- set {_once = newValue}
- }
- /// Returns true if `once` has been explicitly set.
- var hasOnce: Bool {return self._once != nil}
- /// Clears the value of `once`. Subsequent reads from it will return its default value.
- mutating func clearOnce() {self._once = nil}
-
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
@@ -389,8 +379,7 @@ struct Spark_Connect_PipelineCommand: Sendable {
fileprivate var _dataflowGraphID: String? = nil
fileprivate var _flowName: String? = nil
fileprivate var _targetDatasetName: String? = nil
- fileprivate var _plan: Spark_Connect_Relation? = nil
- fileprivate var _once: Bool? = nil
+ fileprivate var _relation: Spark_Connect_Relation? = nil
}
/// Resolves all datasets and flows and start a pipeline update. Should be called after all
@@ -410,11 +399,40 @@ struct Spark_Connect_PipelineCommand: Sendable {
/// Clears the value of `dataflowGraphID`. Subsequent reads from it will return its default value.
mutating func clearDataflowGraphID() {self._dataflowGraphID = nil}
+ /// List of dataset to reset and recompute.
+ var fullRefreshSelection: [String] = []
+
+ /// Perform a full graph reset and recompute.
+ var fullRefreshAll: Bool {
+ get {return _fullRefreshAll ?? false}
+ set {_fullRefreshAll = newValue}
+ }
+ /// Returns true if `fullRefreshAll` has been explicitly set.
+ var hasFullRefreshAll: Bool {return self._fullRefreshAll != nil}
+ /// Clears the value of `fullRefreshAll`. Subsequent reads from it will return its default value.
+ mutating func clearFullRefreshAll() {self._fullRefreshAll = nil}
+
+ /// List of dataset to update.
+ var refreshSelection: [String] = []
+
+ /// If true, the run will not actually execute any flows, but will only validate the graph and
+ /// check for any errors. This is useful for testing and validation purposes.
+ var dry: Bool {
+ get {return _dry ?? false}
+ set {_dry = newValue}
+ }
+ /// Returns true if `dry` has been explicitly set.
+ var hasDry: Bool {return self._dry != nil}
+ /// Clears the value of `dry`. Subsequent reads from it will return its default value.
+ mutating func clearDry() {self._dry = nil}
+
var unknownFields = SwiftProtobuf.UnknownStorage()
init() {}
fileprivate var _dataflowGraphID: String? = nil
+ fileprivate var _fullRefreshAll: Bool? = nil
+ fileprivate var _dry: Bool? = nil
}
/// Parses the SQL file and registers all datasets and flows.
@@ -894,7 +912,7 @@ extension Spark_Connect_PipelineCommand.DefineDataset: SwiftProtobuf.Message, Sw
extension Spark_Connect_PipelineCommand.DefineFlow: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_PipelineCommand.protoMessageName + ".DefineFlow"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}dataflow_graph_id\0\u{3}flow_name\0\u{3}target_dataset_name\0\u{1}plan\0\u{3}sql_conf\0\u{1}once\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}dataflow_graph_id\0\u{3}flow_name\0\u{3}target_dataset_name\0\u{1}relation\0\u{3}sql_conf\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -905,9 +923,8 @@ extension Spark_Connect_PipelineCommand.DefineFlow: SwiftProtobuf.Message, Swift
case 1: try { try decoder.decodeSingularStringField(value: &self._dataflowGraphID) }()
case 2: try { try decoder.decodeSingularStringField(value: &self._flowName) }()
case 3: try { try decoder.decodeSingularStringField(value: &self._targetDatasetName) }()
- case 4: try { try decoder.decodeSingularMessageField(value: &self._plan) }()
+ case 4: try { try decoder.decodeSingularMessageField(value: &self._relation) }()
case 5: try { try decoder.decodeMapField(fieldType: SwiftProtobuf._ProtobufMap<SwiftProtobuf.ProtobufString,SwiftProtobuf.ProtobufString>.self, value: &self.sqlConf) }()
- case 6: try { try decoder.decodeSingularBoolField(value: &self._once) }()
default: break
}
}
@@ -927,15 +944,12 @@ extension Spark_Connect_PipelineCommand.DefineFlow: SwiftProtobuf.Message, Swift
try { if let v = self._targetDatasetName {
try visitor.visitSingularStringField(value: v, fieldNumber: 3)
} }()
- try { if let v = self._plan {
+ try { if let v = self._relation {
try visitor.visitSingularMessageField(value: v, fieldNumber: 4)
} }()
if !self.sqlConf.isEmpty {
try visitor.visitMapField(fieldType: SwiftProtobuf._ProtobufMap<SwiftProtobuf.ProtobufString,SwiftProtobuf.ProtobufString>.self, value: self.sqlConf, fieldNumber: 5)
}
- try { if let v = self._once {
- try visitor.visitSingularBoolField(value: v, fieldNumber: 6)
- } }()
try unknownFields.traverse(visitor: &visitor)
}
@@ -943,9 +957,8 @@ extension Spark_Connect_PipelineCommand.DefineFlow: SwiftProtobuf.Message, Swift
if lhs._dataflowGraphID != rhs._dataflowGraphID {return false}
if lhs._flowName != rhs._flowName {return false}
if lhs._targetDatasetName != rhs._targetDatasetName {return false}
- if lhs._plan != rhs._plan {return false}
+ if lhs._relation != rhs._relation {return false}
if lhs.sqlConf != rhs.sqlConf {return false}
- if lhs._once != rhs._once {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
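Note for readers of this hunk: `DefineFlow` now carries a `Relation` in field 4 (formerly `plan`), and the field-6 `once` flag is gone. A minimal, module-internal sketch of the regenerated accessors (the graph id and names below are placeholders, not values from this commit):
```
// Hypothetical usage of the regenerated DefineFlow message.
var flow = Spark_Connect_PipelineCommand.DefineFlow()
flow.dataflowGraphID = "graph-1"            // placeholder id
flow.flowName = "my_flow"
flow.targetDatasetName = "my_dataset"
flow.relation = Spark_Connect_Relation()    // field 4, formerly `plan`
flow.sqlConf["spark.sql.ansi.enabled"] = "true"

// Singular message fields are presence-tracked, mirroring the private
// `_relation` optional storage used in the decode/traverse code above.
assert(flow.hasRelation)
flow.clearRelation()
assert(!flow.hasRelation)
```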
@@ -953,7 +966,7 @@ extension Spark_Connect_PipelineCommand.DefineFlow: SwiftProtobuf.Message, Swift
extension Spark_Connect_PipelineCommand.StartRun: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_PipelineCommand.protoMessageName + ".StartRun"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}dataflow_graph_id\0")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}dataflow_graph_id\0\u{3}full_refresh_selection\0\u{3}full_refresh_all\0\u{3}refresh_selection\0\u{1}dry\0")
mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
while let fieldNumber = try decoder.nextFieldNumber() {
@@ -962,6 +975,10 @@ extension Spark_Connect_PipelineCommand.StartRun: SwiftProtobuf.Message, SwiftPr
// enabled. https://github.com/apple/swift-protobuf/issues/1034
switch fieldNumber {
case 1: try { try decoder.decodeSingularStringField(value: &self._dataflowGraphID) }()
+ case 2: try { try decoder.decodeRepeatedStringField(value: &self.fullRefreshSelection) }()
+ case 3: try { try decoder.decodeSingularBoolField(value: &self._fullRefreshAll) }()
+ case 4: try { try decoder.decodeRepeatedStringField(value: &self.refreshSelection) }()
+ case 5: try { try decoder.decodeSingularBoolField(value: &self._dry) }()
default: break
}
}
@@ -975,11 +992,27 @@ extension Spark_Connect_PipelineCommand.StartRun: SwiftProtobuf.Message, SwiftPr
try { if let v = self._dataflowGraphID {
try visitor.visitSingularStringField(value: v, fieldNumber: 1)
} }()
+ if !self.fullRefreshSelection.isEmpty {
+ try visitor.visitRepeatedStringField(value: self.fullRefreshSelection, fieldNumber: 2)
+ }
+ try { if let v = self._fullRefreshAll {
+ try visitor.visitSingularBoolField(value: v, fieldNumber: 3)
+ } }()
+ if !self.refreshSelection.isEmpty {
+ try visitor.visitRepeatedStringField(value: self.refreshSelection, fieldNumber: 4)
+ }
+ try { if let v = self._dry {
+ try visitor.visitSingularBoolField(value: v, fieldNumber: 5)
+ } }()
try unknownFields.traverse(visitor: &visitor)
}
static func ==(lhs: Spark_Connect_PipelineCommand.StartRun, rhs: Spark_Connect_PipelineCommand.StartRun) -> Bool {
if lhs._dataflowGraphID != rhs._dataflowGraphID {return false}
+ if lhs.fullRefreshSelection != rhs.fullRefreshSelection {return false}
+ if lhs._fullRefreshAll != rhs._fullRefreshAll {return false}
+ if lhs.refreshSelection != rhs.refreshSelection {return false}
+ if lhs._dry != rhs._dry {return false}
if lhs.unknownFields != rhs.unknownFields {return false}
return true
}
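The regenerated `StartRun` distinguishes incremental refreshes from full refreshes and adds dry-run validation. A hedged construction sketch (the dataset names and graph id are illustrative only):
```
// Illustrative only: populating the StartRun fields added in this commit.
var run = Spark_Connect_PipelineCommand.StartRun()
run.dataflowGraphID = "graph-1"                 // placeholder id
run.refreshSelection = ["sales", "customers"]   // datasets to update
run.fullRefreshSelection = ["orders"]           // datasets to reset and recompute
run.dry = true                                  // validate the graph, execute no flows

// The optional bools are presence-tracked, so an unset `fullRefreshAll`
// stays distinguishable from an explicit `false`.
assert(run.hasDry)
assert(!run.hasFullRefreshAll)
```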
diff --git a/Sources/SparkConnect/types.pb.swift b/Sources/SparkConnect/types.pb.swift
index 003d6db..a4394d6 100644
--- a/Sources/SparkConnect/types.pb.swift
+++ b/Sources/SparkConnect/types.pb.swift
@@ -255,6 +255,14 @@ struct Spark_Connect_DataType: @unchecked Sendable {
set {_uniqueStorage()._kind = .unparsed(newValue)}
}
+ var time: Spark_Connect_DataType.Time {
+ get {
+ if case .time(let v)? = _storage._kind {return v}
+ return Spark_Connect_DataType.Time()
+ }
+ set {_uniqueStorage()._kind = .time(newValue)}
+ }
+
var unknownFields = SwiftProtobuf.UnknownStorage()
enum OneOf_Kind: Equatable, Sendable {
@@ -290,6 +298,7 @@ struct Spark_Connect_DataType: @unchecked Sendable {
case udt(Spark_Connect_DataType.UDT)
/// UnparsedDataType
case unparsed(Spark_Connect_DataType.Unparsed)
+ case time(Spark_Connect_DataType.Time)
}
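Because `time` joins the `kind` oneof, exhaustive consumers pick it up like any other case. A small sketch, assuming the generated public `kind` accessor that backs the `_storage._kind` reads below:
```
// Hedged sketch: branching on the new oneof case.
func describe(_ dataType: Spark_Connect_DataType) -> String {
  switch dataType.kind {
  case .time(let t)?:
    // `hasPrecision` distinguishes TIME from TIME(p); see the Time message below.
    return t.hasPrecision ? "time(\(t.precision))" : "time"
  case nil:
    return "<unset>"
  default:
    return "other kind"
  }
}
```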
@@ -451,6 +460,29 @@ struct Spark_Connect_DataType: @unchecked Sendable {
init() {}
}
+ struct Time: Sendable {
+ // SwiftProtobuf.Message conformance is added in an extension below. See the
+ // `Message` and `Message+*Additions` files in the SwiftProtobuf library for
+ // methods supported on all messages.
+
+ var precision: Int32 {
+ get {return _precision ?? 0}
+ set {_precision = newValue}
+ }
+ /// Returns true if `precision` has been explicitly set.
+ var hasPrecision: Bool {return self._precision != nil}
+ /// Clears the value of `precision`. Subsequent reads from it will return its default value.
+ mutating func clearPrecision() {self._precision = nil}
+
+ var typeVariationReference: UInt32 = 0
+
+ var unknownFields = SwiftProtobuf.UnknownStorage()
+
+ init() {}
+
+ fileprivate var _precision: Int32? = nil
+ }
+
struct CalendarInterval: Sendable {
// SwiftProtobuf.Message conformance is added in an extension below. See the
// `Message` and `Message+*Additions` files in the SwiftProtobuf library for
@@ -804,7 +836,7 @@ fileprivate let _protobuf_package = "spark.connect"
extension Spark_Connect_DataType: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = _protobuf_package + ".DataType"
- static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}null\0\u{1}binary\0\u{1}boolean\0\u{1}byte\0\u{1}short\0\u{1}integer\0\u{1}long\0\u{1}float\0\u{1}double\0\u{1}decimal\0\u{1}string\0\u{1}char\0\u{3}var_char\0\u{1}date\0\u{1}timestamp\0\u{3}timestamp_ntz\0\u{3}calendar_interval\0\u{3}year_month_interval\0\u{3}day_time_interval\0\u{1}array\0\u{1}struct\0\u{1}map\0\u{1}udt\0\u{1}unparsed\0\u{1}variant\0\u{c}\u{1a}\u{1}\u{c}\u{1b}\u{1}")
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}null\0\u{1}binary\0\u{1}boolean\0\u{1}byte\0\u{1}short\0\u{1}integer\0\u{1}long\0\u{1}float\0\u{1}double\0\u{1}decimal\0\u{1}string\0\u{1}char\0\u{3}var_char\0\u{1}date\0\u{1}timestamp\0\u{3}timestamp_ntz\0\u{3}calendar_interval\0\u{3}year_month_interval\0\u{3}day_time_interval\0\u{1}array\0\u{1}struct\0\u{1}map\0\u{1}udt\0\u{1}unparsed\0\u{1}variant\0\u{2}\u{3}time\0\u{c}\u{1a}\u{1}\u{c}\u{1b}\u{1}")
fileprivate class _StorageClass {
var _kind: Spark_Connect_DataType.OneOf_Kind?
@@ -1162,6 +1194,19 @@ extension Spark_Connect_DataType: SwiftProtobuf.Message, SwiftProtobuf._MessageI
_storage._kind = .variant(v)
}
}()
+ case 28: try {
+ var v: Spark_Connect_DataType.Time?
+ var hadOneofValue = false
+ if let current = _storage._kind {
+ hadOneofValue = true
+ if case .time(let m) = current {v = m}
+ }
+ try decoder.decodeSingularMessageField(value: &v)
+ if let v = v {
+ if hadOneofValue {try decoder.handleConflictingOneOf()}
+ _storage._kind = .time(v)
+ }
+ }()
default: break
}
}
@@ -1275,6 +1320,10 @@ extension Spark_Connect_DataType: SwiftProtobuf.Message, SwiftProtobuf._MessageI
guard case .variant(let v)? = _storage._kind else { preconditionFailure() }
try visitor.visitSingularMessageField(value: v, fieldNumber: 25)
}()
+ case .time?: try {
+ guard case .time(let v)? = _storage._kind else { preconditionFailure() }
+ try visitor.visitSingularMessageField(value: v, fieldNumber: 28)
+ }()
case nil: break
}
}
@@ -1691,6 +1740,45 @@ extension Spark_Connect_DataType.TimestampNTZ: SwiftProtobuf.Message, SwiftProto
}
}
+extension Spark_Connect_DataType.Time: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
+ static let protoMessageName: String = Spark_Connect_DataType.protoMessageName + ".Time"
+ static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{1}precision\0\u{3}type_variation_reference\0")
+
+ mutating func decodeMessage<D: SwiftProtobuf.Decoder>(decoder: inout D) throws {
+ while let fieldNumber = try decoder.nextFieldNumber() {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every case branch when no optimizations are
+ // enabled. https://github.com/apple/swift-protobuf/issues/1034
+ switch fieldNumber {
+ case 1: try { try decoder.decodeSingularInt32Field(value: &self._precision) }()
+ case 2: try { try decoder.decodeSingularUInt32Field(value: &self.typeVariationReference) }()
+ default: break
+ }
+ }
+ }
+
+ func traverse<V: SwiftProtobuf.Visitor>(visitor: inout V) throws {
+ // The use of inline closures is to circumvent an issue where the compiler
+ // allocates stack space for every if/case branch local when no optimizations
+ // are enabled. https://github.com/apple/swift-protobuf/issues/1034 and
+ // https://github.com/apple/swift-protobuf/issues/1182
+ try { if let v = self._precision {
+ try visitor.visitSingularInt32Field(value: v, fieldNumber: 1)
+ } }()
+ if self.typeVariationReference != 0 {
+ try visitor.visitSingularUInt32Field(value: self.typeVariationReference, fieldNumber: 2)
+ }
+ try unknownFields.traverse(visitor: &visitor)
+ }
+
+ static func ==(lhs: Spark_Connect_DataType.Time, rhs: Spark_Connect_DataType.Time) -> Bool {
+ if lhs._precision != rhs._precision {return false}
+ if lhs.typeVariationReference != rhs.typeVariationReference {return false}
+ if lhs.unknownFields != rhs.unknownFields {return false}
+ return true
+ }
+}
+
extension Spark_Connect_DataType.CalendarInterval: SwiftProtobuf.Message, SwiftProtobuf._MessageImplementationBase, SwiftProtobuf._ProtoNameProviding {
static let protoMessageName: String = Spark_Connect_DataType.protoMessageName + ".CalendarInterval"
static let _protobuf_nameMap = SwiftProtobuf._NameMap(bytecode: "\0\u{3}type_variation_reference\0")
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]