GitHub user Nataneljpwd added a comment to the discussion: Redesign the 
scheduler logic to avoid starvation due to dropped tasks in critical section

I agree with most of what you said; however, instead of a window over every 
concurrency requirement, I think we should window over every holder of that 
requirement. So rather than windowing over concurrency, we window over the 
DAG, which in turn solves the concurrency issue. If we iterate over 
concurrency, we can hit a similar problem with two DAGs that share the same 
pool and concurrency, so it is better to window over the holder of the 
requirement (i.e. the DAG, for concurrency).
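To illustrate the idea of windowing over the holder of the requirement, here is a minimal sketch using an in-memory SQLite database. The schema and column names are invented for illustration and are not Airflow's actual tables; the point is that ranking tasks per `dag_id` caps each DAG at its own limit, so one high-priority DAG cannot starve another:

```python
import sqlite3

# Hypothetical schema (not Airflow's): scheduled task instances plus a
# per-DAG concurrency limit.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task_instance (id INTEGER, dag_id TEXT, priority INTEGER);
CREATE TABLE dag_limit (dag_id TEXT, max_active_tasks INTEGER);
INSERT INTO task_instance VALUES
  (1,'dag_a',10),(2,'dag_a',9),(3,'dag_a',8),
  (4,'dag_b',5),(5,'dag_b',4);
INSERT INTO dag_limit VALUES ('dag_a',2),('dag_b',2);
""")

# Window over the holder of the requirement (the DAG): rank tasks within
# each dag_id and keep only the top max_active_tasks of each partition,
# so dag_a's higher priorities cannot crowd out dag_b entirely.
rows = conn.execute("""
SELECT id, dag_id FROM (
  SELECT ti.id, ti.dag_id,
         ROW_NUMBER() OVER (
           PARTITION BY ti.dag_id ORDER BY ti.priority DESC
         ) AS rn,
         dl.max_active_tasks
  FROM task_instance ti JOIN dag_limit dl USING (dag_id)
) WHERE rn <= max_active_tasks
ORDER BY dag_id, id
""").fetchall()
print(rows)  # → [(1, 'dag_a'), (2, 'dag_a'), (4, 'dag_b'), (5, 'dag_b')]
```

With a window over raw priority instead, all three dag_a tasks would rank ahead of dag_b's, which is exactly the starvation this partitioning avoids.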

About the slow SQL, I do not agree here. The scale at which Airflow is 
typically deployed is not large enough for cardinality to cause a major 
runtime increase for this query. Instead of iterating and sending many 
queries, I think it will be far more efficient to send one single big query: 
it will most likely reduce the time spent in the critical section, since we 
send fewer queries and let the database use an efficient plan to check the 
filters rather than doing it in Python. Windowing can be very fast as long as 
the ordering is chosen carefully to fit the window function.
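As a sketch of the "one big query" point, a running-total window can enforce a pool's slot budget in a single statement instead of filtering candidates one by one in Python. The schema and the pool capacity here are assumptions for illustration; note that the ORDER BY inside the window deliberately matches the scheduling order, so admission is greedy by priority:

```python
import sqlite3

# Illustrative schema (not Airflow's): candidate tasks with a pool,
# slot cost, and priority.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE task_instance (id INTEGER, pool TEXT, slots INTEGER, priority INTEGER);
INSERT INTO task_instance VALUES
  (1,'default',1,10),(2,'default',2,9),(3,'default',2,8),(4,'default',1,7);
""")

POOL_SLOTS = 4  # assumed capacity of the 'default' pool

# Cumulative slot usage per pool, ordered by priority: the database
# admits tasks greedily until the budget is exhausted, in one query.
rows = conn.execute("""
SELECT id FROM (
  SELECT id,
         SUM(slots) OVER (
           PARTITION BY pool
           ORDER BY priority DESC, id
         ) AS used
  FROM task_instance
) WHERE used <= ?
ORDER BY id
""", (POOL_SLOTS,)).fetchall()
print([r[0] for r in rows])  # → [1, 2]
```

Task 3 is rejected because the running total (1+2+2=5) exceeds the budget; a greedy running sum also skips task 4 even though it would fit on its own, which is a trade-off of doing the filter purely in SQL.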

Transforming priority into weight can also be a very good idea, though if the 
query does more and the code does less, it will be harder to implement 
properly in SQL while still allowing different strategies, so that users can 
tailor the prioritization behaviour to their needs.
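To make the trade-off concrete, here is a hedged sketch of what pluggable priority-to-weight strategies could look like if kept in Python rather than SQL. All names here are invented for illustration and are not Airflow's API; the point is that a strategy table like this is trivial to extend in code but awkward to express as SQL:

```python
from typing import Callable, Dict, List, Tuple

# A strategy maps (own_priority, number_of_downstream_tasks) -> weight.
WeightStrategy = Callable[[int, int], int]

STRATEGIES: Dict[str, WeightStrategy] = {
    # weight is just the task's own priority
    "absolute": lambda prio, downstream: prio,
    # boost tasks that unblock many downstream tasks
    "downstream": lambda prio, downstream: prio + downstream,
}

def compute_weights(tasks: List[dict], strategy: str) -> List[Tuple[str, int]]:
    """Apply the chosen strategy and return (task_id, weight), heaviest first."""
    fn = STRATEGIES[strategy]
    return sorted(
        ((t["id"], fn(t["priority"], t["n_downstream"])) for t in tasks),
        key=lambda pair: pair[1],
        reverse=True,
    )

tasks = [
    {"id": "a", "priority": 5, "n_downstream": 0},
    {"id": "b", "priority": 3, "n_downstream": 4},
]
print(compute_weights(tasks, "downstream"))  # → [('b', 7), ('a', 5)]
```

Moving this into the query would mean encoding each strategy as a SQL expression (e.g. a CASE or a join against a downstream-count subquery), which is where the flexibility cost shows up.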

If you have any new suggestions, improvements or possible solutions, I would be 
happy to hear them.

GitHub link: 
https://github.com/apache/airflow/discussions/49160#discussioncomment-12830943
