================
@@ -1753,91 +1742,93 @@ Possible Questions
How modules speed up compilation
--------------------------------
-A classic theory for the reason why modules speed up the compilation is:
-if there are ``n`` headers and ``m`` source files and each header is included by each source file,
-then the complexity of the compilation is ``O(n*m)``;
-But if there are ``n`` module interfaces and ``m`` source files, the complexity of the compilation is
-``O(n+m)``. So, using modules would be a big win when scaling.
-In a simpler word, we could get rid of many redundant compilations by using modules.
+A classic theory for the reason why modules speed up the compilation is: if
+there are ``n`` headers and ``m`` source files and each header is included by
+each source file, then the complexity of the compilation is ``O(n*m)``.
+However, if there are ``n`` module interfaces and ``m`` source files, the
+complexity of the compilation is ``O(n+m)``. Therefore, using modules would be
+a significant improvement at scale. More simply, use of modules causes many of
+the redundant compilations to no longer be necessary.
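+
+(As a concrete sketch of the ``O(n+m)`` claim, assuming two hypothetical
+module interfaces ``a.cppm`` and ``b.cppm`` and two importing source files:
+each interface is precompiled into a BMI exactly once, and every importer
+reuses the BMIs instead of reparsing headers, so the total work is roughly
+``n + m`` compilations rather than ``n * m`` header parses.)
+
+.. code-block:: console
+
+  # Compile each of the n module interfaces exactly once.
+  $ clang++ -std=c++20 a.cppm --precompile -o a.pcm
+  $ clang++ -std=c++20 b.cppm --precompile -o b.pcm
+
+  # Each of the m source files consumes the prebuilt BMIs directly.
+  $ clang++ -std=c++20 -fmodule-file=a=a.pcm -fmodule-file=b=b.pcm -c use1.cpp
+  $ clang++ -std=c++20 -fmodule-file=a=a.pcm -fmodule-file=b=b.pcm -c use2.cpp
+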
-Roughly, this theory is correct. But the problem is that it is too rough.
-The behavior depends on the optimization level, as we will illustrate below.
+While this is accurate at a high level, the actual behavior depends greatly on
+the optimization level, as illustrated below.
-First is ``O0``. The compilation process is described in the following graph.
+First is ``-O0``. The compilation process is described in the following graph.
.. code-block:: none
  ├-------------frontend----------┼-------------middle end----------------┼----backend----┤
  │                               │                                       │               │
  └---parsing----sema----codegen--┴----- transformations ---- codegen ----┴---- codegen --┘

-  ┌---------------------------------------------------------------------------------------┐
+  ├---------------------------------------------------------------------------------------┐
  |                                                                                       │
  |                                         source file                                   │
  |                                                                                       │
  └---------------------------------------------------------------------------------------┘

-              ┌--------┐
+              ├--------┐
              │        │
              │imported│
              │        │
              │  code  │
              │        │
              └--------┘
-Here we can see that the source file (could be a non-module unit or a module unit) would get processed by the
-whole pipeline.
-But the imported code would only get involved in semantic analysis, which is mainly about name lookup,
-overload resolution and template instantiation.
-All of these processes are fast relative to the whole compilation process.
-More importantly, the imported code only needs to be processed once in frontend code generation,
-as well as the whole middle end and backend.
-So we could get a big win for the compilation time in O0.
+In this case, the source file (which could be a non-module unit or a module
+unit) would get processed by the entire pipeline. However, the imported code
+would only get involved in semantic analysis, which, for the most part, is name
+lookup, overload resolution, and template instantiation. All of these processes
+are fast relative to the whole compilation process. More importantly, the
+imported code only needs to be processed once during frontend code generation,
+as well as the whole middle end and backend. So we could get a big win for the
+compilation time in ``-O0``.
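+
+(As a minimal, hypothetical illustration: at ``-O0``, compiling the importer
+below only needs declaration-level information about ``f`` from the BMI of
+``m``; the compiler emits a plain call, and the body of ``f`` goes through the
+middle end and backend only once, when the module itself is built.)
+
+.. code-block:: c++
+
+  // m.cppm - hypothetical module interface; its function bodies are
+  // code-generated exactly once, when this unit is compiled.
+  export module m;
+  export int f(int x) { return x + 1; }
+
+  // use.cpp - at -O0, only semantic analysis touches the imported code:
+  // name lookup and overload resolution find f, and a call is emitted
+  // without revisiting the body of f.
+  import m;
+  int g() { return f(42); }
+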
-But with optimizations, things are different:
-
-(we omit ``code generation`` part for each end due to the limited space)
+But with optimizations, things are different (the ``code generation`` part for
+each end is omitted due to limited space):
.. code-block:: none
  ├-------- frontend ---------┼--------------- middle end --------------------┼------ backend ----┤
  │                           │                                               │                   │
  └--- parsing ---- sema -----┴--- optimizations --- IPO ---- optimizations---┴--- optimizations -┘

-  ┌-----------------------------------------------------------------------------------------------┐
+  ├-----------------------------------------------------------------------------------------------┐
  │                                                                                               │
  │                                         source file                                           │
  │                                                                                               │
  └-----------------------------------------------------------------------------------------------┘

-     ┌---------------------------------------┐
+     ├---------------------------------------┐
     │                                       │
     │                                       │
     │             imported code             │
     │                                       │
     │                                       │
     └---------------------------------------┘
-It would be very unfortunate if we end up with worse performance after using modules.
-The main concern is that when we compile a source file, the compiler needs to see the function body
-of imported module units so that it can perform IPO (InterProcedural Optimization, primarily inlining
-in practice) to optimize functions in current source file with the help of the information provided by
-the imported module units.
-In other words, the imported code would be processed again and again in importee units
-by optimizations (including IPO itself).
-The optimizations before IPO and the IPO itself are the most time-consuming part in whole compilation process.
-So from this perspective, we might not be able to get the improvements described in the theory.
-But we could still save the time for optimizations after IPO and the whole backend.
-
-Overall, at ``O0`` the implementations of functions defined in a module will not impact module users,
-but at higher optimization levels the definitions of such functions are provided to user compilations for the
-purposes of optimization (but definitions of these functions are still not included in the use's object file)-
-this means the build speedup at higher optimization levels may be lower than expected given ``O0`` experience,
-but does provide by more optimization opportunities.
+It would be very unfortunate if we end up with worse performance when using
+modules. The main concern is that when a source file is compiled, the compiler
+needs to see the body of imported module units so that it can perform IPO
+(InterProcedural Optimization, primarily inlining in practice) to optimize
+functions in the current source file with the help of the information provided
+by the imported module units. In other words, the imported code is processed
+again and again, by optimizations (including IPO itself), in every unit that
+imports it. The optimizations before IPO and IPO itself are the most
+time-consuming part of the whole compilation process. So from this
+perspective, it might not be possible to get the compile time improvements
+described, but there could be time savings for optimizations after IPO and the
+whole backend.
+
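+(Continuing the hypothetical example above, a sketch of the same importer
+built with optimizations enabled:)
+
+.. code-block:: console
+
+  # The definition of f stored in m.pcm now becomes an input to this
+  # compilation so that IPO can inline it; g() may well compile down to
+  # 'return 43;'. This is the work that gets repeated in every importing
+  # translation unit.
+  $ clang++ -std=c++20 -O2 -fmodule-file=m=m.pcm -c use.cpp
+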
+Overall, at ``-O0`` the implementations of functions defined in a module will
+not impact module users, but at higher optimization levels the definitions of
+such functions are provided to user compilations for the purposes of
+optimization (but definitions of these functions are still not included in the
+user's object file). This means the build speedup at higher optimization
+levels may be lower than expected given ``-O0`` experience, but it does
+provide more optimization opportunities.
Interoperability with Clang Modules
-----------------------------------
-We **wish** to support clang modules and standard c++ modules at the same time,
-but the mixed using form is not well used/tested yet.
-
-Please file new github issues as you find interoperability problems.
+We **wish** to support Clang modules and standard C++ modules at the same time,
+but mixing them together is not yet widely used or tested. Please file new
+GitHub issues as you find interoperability problems.
----------------
AaronBallman wrote:
Good catch! Done.
https://github.com/llvm/llvm-project/pull/90237