bajiaolong commented on issue #15637:
URL: https://github.com/apache/dolphinscheduler/issues/15637#issuecomment-2513408912

   > > > > > > > > > @bajiaolong Just FYI, I encountered the same problem and 
resolved it by adding an execution environment for the task and configuring 
"export HADOOP_USER_NAME=your spark user".
   > > > > > > > > 
   > > > > > > > > 
   > > > > > > > > Good trick. I changed some config to make DS use the worker's deploy user to avoid this problem.
   > > > > > > > 
   > > > > > > > 
   > > > > > > > I encountered the same problem with a Flink task. Could you share what config you changed? Thank you!
   > > > > > > 
   > > > > > > 
   > > > > > > There is a tenant option on the task running page, where you can select the tenant to run as. Please ensure that the tenant's environment variables are correct.
   > > > > > 
   > > > > > 
   > > > > > I'm using version 3.2.1, and when the workflow contains subtasks, 
after I specify the workflow's tenant in the task running page, the subtasks 
still run with `default` instead of the specified tenant.
   > > > > 
   > > > > 
   > > > > Could you tell me your task type and subtask type?
   > > > 
   > > > 
   > > > They are all Flink SQL tasks.
   > > 
   > > 
   > > Our preliminary judgment is that this is a Flink parameter problem; please see my other PR: #15708
   > 
   > :( So this problem can not be solved in the current version (3.2.1)?
   
   I also hit this problem when I used version 3.2.0, and the workflow executed correctly after I modified the code. That PR has not been merged yet.
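
   The workaround quoted at the top of this thread can be sketched as a shell fragment. This is a minimal illustration, not the project's official fix: the variable name `HADOOP_USER_NAME` is the standard Hadoop client mechanism for choosing the submitting user, but `spark_user` is a placeholder, and where you put the export (a DolphinScheduler task environment, or the worker's `dolphinscheduler_env.sh`) depends on your deployment:

   ```shell
   # Hypothetical environment entry for a DolphinScheduler task/worker
   # environment. HADOOP_USER_NAME tells the Hadoop client libraries which
   # user to act as when the task submits Spark/Flink jobs to the cluster,
   # instead of the OS user the worker process runs under.
   export HADOOP_USER_NAME=spark_user

   # The variable must be visible to child processes such as spark-submit
   # or the flink CLI; echo it to confirm it is exported.
   echo "${HADOOP_USER_NAME}"
   ```

   The same effect can be achieved, as discussed above, by making the worker's deploy user (or the tenant selected on the task running page) match the user the cluster expects.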


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
