This is an automated email from the ASF dual-hosted git repository.

pierrejeambrun pushed a commit to branch v2-5-test
in repository https://gitbox.apache.org/repos/asf/airflow.git
commit db628cbf071d1cbb39b39dc986122e27cfd5489f
Author: Ye Cao <[email protected]>
AuthorDate: Wed Mar 15 06:16:50 2023 +0800

    Fix some typos on the kubernetes documentation (#29936)

    * Fix some typos on kubernetes doc.

    Signed-off-by: Ye Cao <[email protected]>

    ---------

    Signed-off-by: Ye Cao <[email protected]>
    Co-authored-by: Tzu-ping Chung <[email protected]>
    (cherry picked from commit feab21362e2fee309990a89aea39031d94c5f5bd)
---
 .../administration-and-deployment/production-deployment.rst    | 8 ++++----
 docs/apache-airflow/administration-and-deployment/scheduler.rst | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/apache-airflow/administration-and-deployment/production-deployment.rst b/docs/apache-airflow/administration-and-deployment/production-deployment.rst
index e07a98dcf7..98cbbb0a49 100644
--- a/docs/apache-airflow/administration-and-deployment/production-deployment.rst
+++ b/docs/apache-airflow/administration-and-deployment/production-deployment.rst
@@ -38,7 +38,7 @@ You can change the backend using the following config
     sql_alchemy_conn = my_conn_string
 
 Once you have changed the backend, airflow needs to create all the tables required for operation.
-Create an empty DB and give airflow's user the permission to ``CREATE/ALTER`` it.
+Create an empty DB and give Airflow's user permission to ``CREATE/ALTER`` it.
 Once that is done, you can run -
 
 .. code-block:: bash
@@ -146,10 +146,10 @@ command and the worker command in separate containers - where only the ``airflow
 access to the Keytab file (preferably configured as secret resource). Those two containers should share
 a volume where the temporary token should be written by the ``airflow kerberos`` and read by the workers.
 
-In the Kubernetes environment, this can be realized by the concept of side-car, where both Kerberos
-token refresher and worker are part of the same Pod. Only the Kerberos side-car has access to
+In the Kubernetes environment, this can be realized by the concept of sidecar, where both Kerberos
+token refresher and worker are part of the same Pod. Only the Kerberos sidecar has access to
 Keytab secret and both containers in the same Pod share the volume, where temporary token is written by
-the side-car container and read by the worker container.
+the sidecar container and read by the worker container.
 
 This concept is implemented in :doc:`the Helm Chart for Apache Airflow <helm-chart:index>`.
 
diff --git a/docs/apache-airflow/administration-and-deployment/scheduler.rst b/docs/apache-airflow/administration-and-deployment/scheduler.rst
index 3c2eee3b5d..dfc97e6120 100644
--- a/docs/apache-airflow/administration-and-deployment/scheduler.rst
+++ b/docs/apache-airflow/administration-and-deployment/scheduler.rst
@@ -62,8 +62,8 @@ In the UI, it appears as if Airflow is running your tasks a day **late**
     The scheduler is designed for high throughput. This is an informed design decision to achieve
     scheduling tasks as soon as possible. The scheduler checks how many free slots available in a pool
     and schedule at most that number of tasks instances in one iteration.
-    This means that task priority will only come in to effect when there are more scheduled tasks
-    waiting than the queue slots. Thus there can be cases where low priority tasks will be schedule before high priority tasks if they share the same batch.
+    This means that task priority will only come into effect when there are more scheduled tasks
+    waiting than the queue slots. Thus there can be cases where low priority tasks will be scheduled before high priority tasks if they share the same batch.
     For more read about that you can reference `this GitHub discussion <https://github.com/apache/airflow/discussions/28809>`__.
 
@@ -215,7 +215,7 @@ Generally for fine-tuning, your approach should be the same as for any performance
 optimizations (we will not recommend any specific tools - just use the tools that you usually use
 to observe and monitor your systems):
 
-* its extremely important to monitor your system with the right set of tools that you usually use to
+* it's extremely important to monitor your system with the right set of tools that you usually use to
   monitor your system. This document does not go into details of particular metrics and tools that
   you can use, it just describes what kind of resources you should monitor, but you should follow
   your best practices for monitoring to grab the right data.
@@ -304,7 +304,7 @@ When you know what your resource usage is, the improvements that you can consider
 simply exchanging one performance aspect for another. For example if you want to decrease the
 CPU usage, you might increase file processing interval (but the result will be that new DAGs will
 appear with bigger delay). Usually performance tuning is the art of balancing different aspects.
-* sometimes you change scheduler behaviour slightly (for example change parsing sort order)
+* sometimes you change scheduler behavior slightly (for example change parsing sort order)
   in order to get better fine-tuned results for your particular deployment.
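For context on the scheduler.rst note patched above: it describes batching by pool slots, where at most the pool's free slots are scheduled per iteration and priority only takes effect when more scheduled tasks are waiting than slots. A rough illustration of that behavior in plain Python (a hypothetical sketch with made-up task dicts, not Airflow's actual scheduler code):

```python
# Hypothetical sketch of the batching behavior the patched note describes:
# the scheduler fills at most `free_slots` per iteration, so priority only
# matters when more tasks are waiting than there are slots.

def schedule_batch(scheduled_tasks, free_slots):
    """Pick up to free_slots tasks, highest priority_weight first."""
    ordered = sorted(
        scheduled_tasks, key=lambda t: t["priority_weight"], reverse=True
    )
    return ordered[:free_slots]

tasks = [
    {"id": "low", "priority_weight": 1},
    {"id": "high", "priority_weight": 10},
    {"id": "mid", "priority_weight": 5},
]

# With only 2 free slots, the low-priority task waits for a later iteration.
print([t["id"] for t in schedule_batch(tasks, 2)])  # -> ['high', 'mid']
```

With enough free slots for every waiting task, all of them are scheduled in one iteration and relative priority makes no observable difference, which is the situation the note warns about.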
