On Mon, Sep 03, 2018 at 03:21:00AM +0000, tide via Digitalmars-d wrote:
[...]
> Any graphic problems are going to stem probably more from shaders and
> interaction with the GPU than any sort of logic code.
[...]
> What he was talking about was basically that, he was saying how it
> could be used to identify possible memory corruption, which is
> completely absurd. That's just stretching it's use case so thin.
You misquote me. I never said asserts could be used to *identify* memory
corruption -- that's preposterous. What I'm saying is that when an assert
fails, it *may* have been caused by memory corruption (among many other
possibilities), and that is one of the reasons why it's a bad idea to keep
going in spite of the assertion failure.

The reason I picked memory corruption is that it's a good illustration of
how badly things can go wrong when code that is known to have programming
bugs continues running unchecked. When an assertion fails, it basically
means the program has a logic error, and what the programmer assumed the
program would do is wrong. Therefore, by definition, you cannot predict
what the program will actually do -- and remote exploits via memory
corruption are a good example of how your program can end up doing
something completely different from what it was designed to do when you
keep going in spite of logic errors.

Obviously, assertions aren't going to catch *all* memory corruption, but
given that an assertion failure *might* be caused by memory corruption,
why would anyone in their right mind want to allow the program to keep
going? We cannot catch *all* logic errors with assertions, but why would
anyone want to deliberately ignore the logic errors that we *can* catch?

T

-- 
If creativity is stifled by rigid discipline, then it is not true creativity.
