On Jul  8, 2022, Richard Biener <richard.guent...@gmail.com> wrote:

> I'm possibly missing the importance of 'redundancy' in -fharden-control-flow

I took "Control Flow Redundancy" as a term of the art and never
questioned it.  I think the "redundancy" refers to the fact that
control flow is generally determined by tests and conditionals, so a
check that an expected path was seemingly taken is redundant with
those.

> but how can you, from a set of visited blocks local to a function,
> determine whether the control flow through the function is "expected"

Hmm, maybe the definition should be in the negated form: what the check
catches is *unexpected* execution flows, e.g. when a block that
shouldn't have been reached (because none of its predecessors was)
somehow was.  This unexpected circumstance indicates some kind of fault
or attack, which is what IIUC this check is about.

Whether the fault was that the hardware took a wrong turn because it was
power-deprived, or that some software exploit returned to an artifact at
the end of a function to get it to serve an alternate purpose, the check
at the end of the function would catch the unexpected execution of a
block that couldn't have been reached under normal circumstances, and
flag the error before further damage occurs.

> Can you elaborate on what kind of "derailed" control flow this catches
> (example?) and what cases it does not?

As in the comments for the pass: at the end of the function, for each
visited block, check that at least one predecessor and at least one
successor were also visited.


> I'm also curious as of how this compares to hardware
> mitigations like x86 indirect branch tracking and shadow stack

I'm not expert in the field, but my understanding is that these are
complementary.

Indirect branch tracking constrains the set of available artifacts one
might indirectly branch to, but if you reach one of them, you'd be no
wiser that something fishy was going on without checking that you got
there from one of the predecessor blocks.  (We don't really check
precisely that, nor do we check at that precise time, but we check at
the end of the function that at least one of the predecessor blocks was
run.)  Constraining the available indirect branch targets helps avoid
bypassing the code that sets the bit corresponding to a block, which
might otherwise enable an attacker to use an artifact without
detection, if there's no subsequent block that would be inexplicably
reached.

Shadow stacks avoid corruption of return addresses, so you're less
likely to reach an unexpected block by means of buffer overruns that
smash the stack and overwrite the return address.  Other means of
landing in the middle of a function, such as corrupting memory or
logic units through power deprivation, remain, and this pass helps
guard against those too.

> and how this relates to the LLVM control flow hardening (ISTR such
> thing exists).

I've never heard of it.  I've just tried to learn about it, but I
couldn't find anything pertinent.

Are you by any chance thinking of
https://clang.llvm.org/docs/ControlFlowIntegrity.html
?

This appears to be entirely unrelated: the control flow nodes it's
concerned with are functions/methods/subprograms in a program, rather
than basic blocks within a function.


Thanks a lot for these questions.  They're going to help me be better
prepared for a presentation about various hardening features (*) that
I've submitted and am preparing for the upcoming Cauldron.

(*) 
https://docs.adacore.com/live/wave/gnat_rm/html/gnat_rm/gnat_rm/security_hardening_features.html

-- 
Alexandre Oliva, happy hacker                https://FSFLA.org/blogs/lxo/
   Free Software Activist                       GNU Toolchain Engineer
Disinformation flourishes because many people care deeply about injustice
but very few check the facts.  Ask me about <https://stallmansupport.org>
