================
@@ -243,14 +244,16 @@ class AnnotatingParser {
       // operator that was misinterpreted because we are parsing template
       // parameters.
       // FIXME: This is getting out of hand, write a decent parser.
-      if (InExpr && !Line.startsWith(tok::kw_template) &&
+      if (InExpr && !SeenFatArrow && !Line.startsWith(tok::kw_template) &&
           Prev.is(TT_BinaryOperator)) {
         const auto Precedence = Prev.getPrecedence();
         if (Precedence > prec::Conditional && Precedence < prec::Relational)
           return false;
       }
       if (Prev.isOneOf(tok::question, tok::colon) && !Style.isProto())
         SeenTernaryOperator = true;
+      else if (Prev.is(TT_FatArrow))
----------------
mydeveloperday wrote:

It seems a general rule of thumb that the more complicated the code, the 
harder a job clang-format has.

Whilst I admire the programming prowess of such code, I generally struggle to 
read it no matter how it's formatted. My view has often been that when 
clang-format does fail, it may say more about the code than about clang-format. 
For the cases we butcher (and we will from time to time) there is still 
// clang-format off until we can resolve it.

That said, I believe we can only walk towards clang-format doing the right 
thing one patch at a time, and I don't believe we should let "perfection be 
the enemy of the good".

I personally like to follow the "Beyonce" rule: "if you liked it then you 
should have put a test on it". IMHO, as long as our tests pass, the test cases 
keep growing, we don't allow tests to change without good reason, and all new 
code has tests, then I believe we are good.

Someone, somewhere, with some specific failure will always turn up, given how 
much code is likely clang-formatted daily. But honestly I don't think of it as 
a true regression unless a test is failing, because often regressions are 
caused by us now interpreting better, annotating better, or just fixing 
something, and what previously worked by chance doesn't any more. That's not a 
real regression; it's just bad luck and a lack of a test to keep us honest.

While I don't really like code formatting becoming a glorious game of 
whack-a-mole, I think our intentions are honorable and we should make forward 
progress, even if it's one patch at a time.

When this does happen it means that the previous use case wasn't covered by a 
test. It needs to be, and we can fix it accordingly by adding the missing 
test, which ultimately should give us more robustness against the unintended 
"bad luck".

However, the clang-format user community must remember that we are a core team 
of 4, with @owenca really doing what I consider to be the majority share 
(sorry, I'm so busy with my day job that I only get time for drive-by 
reviewing, but I'm trying to continue to give moral support). As far as I am 
aware, none of us are paid to work on clang-format, and despite all the big 
IDE players using clang-format integrated into their commercial offerings, I 
see little in the way of contribution from them. In fact, at last year's 
CppCon I saw one even openly criticize clang-format with vague, obtuse corner 
cases (a cheap shot in my view).

It's my belief that our modus operandi is as correct as it can be given our 
time and resources. We never intentionally break things, but it's extremely 
hard to find a code base large enough to test against, especially given that 
not even all of LLVM, or even Clang, is clang-format clean enough to uncover 
issues. It is especially difficult to find code using new C++ language 
concepts: modules, complex template scenarios, and lambdas.

Now, if anyone wants to persuade all of LLVM that the entire repo should be 
clang-formatted top to bottom, then maybe we'd have more code examples to 
ensure we introduce fewer issues.



https://github.com/llvm/llvm-project/pull/108671
_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
