gnehil opened a new pull request, #145:
URL: https://github.com/apache/doris-spark-connector/pull/145

   # Proposed changes
   
   ## Problem Summary:
   
   After the write optimization, upstream data is read through an iterator. Since an iterator can only be traversed once, the current batch cannot be re-read during an internal retry.
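
   As a minimal illustration (not the connector's actual code), once a failed load attempt has consumed the partition iterator, there is nothing left to replay:

   ```scala
   // A Spark partition arrives as an Iterator; a failed load attempt drains it,
   // so an in-place retry would see an empty batch.
   val rows: Iterator[String] = Iterator("a", "b", "c")

   rows.foreach(_ => ())   // first (failed) load attempt consumes every row
   assert(!rows.hasNext)   // exhausted: the batch cannot be re-read internally
   ```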
   
   The solution is to remove the internal retry and let the exception from a failed load propagate out of the task. The Spark scheduler then retries the task according to `spark.task.maxFailures` (default 4) and any other retry-related settings.
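
   For reference, the task-retry budget can be raised when the session is created; `spark.task.maxFailures` is a standard Spark setting, and the application name below is arbitrary:

   ```scala
   import org.apache.spark.sql.SparkSession

   // After this change, load failures surface as task failures, so the
   // scheduler's task-retry budget bounds how often a batch is re-attempted.
   val spark = SparkSession.builder()
     .appName("doris-write")
     .config("spark.task.maxFailures", "8") // allow more attempts than the default 4
     .getOrCreate()
   ```

   The same setting can also be passed on the command line via `--conf spark.task.maxFailures=8`.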
   
   Other changes:
   1. abort the transaction by label when the current load fails (see the sketch after this list)
   2. minor style changes
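
   For change 1, a hedged sketch of what an abort-by-label call could look like against the Doris FE stream-load 2PC endpoint; the host, port, and helper name are placeholders, not necessarily the connector's actual code path:

   ```scala
   import java.net.{HttpURLConnection, URL}
   import java.util.Base64

   // Assumed endpoint and headers; verify against your Doris version's docs.
   def abortByLabel(feHost: String, db: String, label: String,
                    user: String, passwd: String): Int = {
     val url = new URL(s"http://$feHost:8030/api/$db/_stream_load_2pc")
     val conn = url.openConnection().asInstanceOf[HttpURLConnection]
     conn.setRequestMethod("PUT")
     val token = Base64.getEncoder.encodeToString(s"$user:$passwd".getBytes("UTF-8"))
     conn.setRequestProperty("Authorization", s"Basic $token")
     conn.setRequestProperty("label", label)            // identify the txn by its label
     conn.setRequestProperty("txn_operation", "abort")  // abort instead of commit
     conn.getResponseCode                               // 200 on success
   }
   ```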
   
   ## Checklist(Required)
   
   1. Does it affect the original behavior: (Yes/No/I Don't know)
   2. Have unit tests been added: (Yes/No/No Need)
   3. Has documentation been added or modified: (Yes/No/No Need)
   4. Does it need to update dependencies: (Yes/No)
   5. Are there any changes that cannot be rolled back: (Yes/No)
   
   ## Further comments
   
   If this is a relatively large or complex change, kick off a discussion at [d...@doris.apache.org](mailto:d...@doris.apache.org) explaining why you chose this solution and what alternatives you considered.
   

