GitHub user wolfboys added a comment to the discussion: Support Spark Deployment

Are you suggesting that we save all the description information of Spark jobs 
in YAML format? For example, the Spark job status (e.g., running, canceled), 
the job's application ID, and the resource parameters required to run the job 
(e.g., memory, CPU)? Why don't we describe the job information in a database 
table instead? Similar to the way StreamPark handles Flink jobs (i.e., with 
the t_flink_app table), we could create a new table to record Spark job 
information (t_spark_app).
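
For illustration only, here is a minimal Java sketch of what a t_spark_app 
record could hold, based on the fields mentioned above (status, application 
ID, memory, CPU). The class and field names are assumptions made for this 
example, not StreamPark's actual schema or code:

```java
// Hypothetical sketch only: a POJO mirroring a possible t_spark_app row,
// analogous to how t_flink_app backs Flink jobs in StreamPark.
// All names here are illustrative assumptions, not the real schema.
public class SparkApplication {
    private Long id;                // primary key of t_spark_app
    private String appId;           // Spark application ID reported by the cluster
    private String state;           // job status, e.g. RUNNING or CANCELED
    private String driverMemory;    // resource parameters needed to run the job
    private String executorMemory;  // e.g. "2g"
    private Integer executorCores;  // CPU cores per executor

    // Getters and setters omitted for brevity.
}
```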

GitHub link: 
https://github.com/apache/incubator-streampark/discussions/3043#discussioncomment-7024822
