GitHub user wolfboys added a comment to the discussion: Support Spark Deployment

Hi,

As mentioned in the background introduction of the proposal, StreamPark 
already provides support for Flink jobs and has a large user base, but work 
on Spark capabilities has not started yet. StreamPark has never abandoned 
the idea of supporting Spark jobs, and I sincerely appreciate your drive on 
this PR. The initial plan for Spark support in StreamPark is as follows:

On one hand, we will provide a development framework for Spark jobs (this 
feature is not part of the current discussion). On the other hand, we will 
provide deployment and management capabilities for Spark jobs. For this 
aspect, I suggest we divide the work into several steps:

1. First, we can implement a minimum viable product (MVP), taking 
inspiration from StreamPark's support for Flink jobs. We can create a table 
for Spark jobs with the essential fields (job name, status, parameters, 
deployment mode, etc.). We will also need a frontend page for adding Spark 
jobs.

Regarding the deployment of Spark jobs, the implementations on YARN and 
Kubernetes differ, so we can begin with the simplest deployment mode. For 
job status handling, we can leverage Spark's API directly. This part 
ensures the minimal functionality: jobs can be added, started, and have 
their status tracked automatically.

2. Based on user feedback, we will gradually improve the functionality, 
e.g. support more deployment modes (each deployment mode involves various 
detailed implementation issues; we can discuss them in depth at the 
appropriate time).

3. Unify the job (Flink|Spark) status: we can redefine the job states 
within StreamPark, completely shielding users from the original Flink and 
Spark states.

GitHub link: 
https://github.com/apache/incubator-streampark/discussions/3043#discussioncomment-7024801

----
This is an automatically sent email for [email protected].