On 20/09/2017 03:43, Sebastian Huber wrote:
> ----- On 19 Sep 2017 at 16:40, Gedare Bloom ged...@rtems.org wrote:
>> On Tue, Sep 19, 2017 at 10:09 AM, Joel Sherrill <j...@rtems.org> wrote:
>>> On Tue, Sep 19, 2017 at 8:16 AM, Sebastian Huber
>>> <sebastian.hu...@embedded-brains.de> wrote:
>>>
>>> Newlib/Cygwin folks have been pretty insistent that they do not want to
>>> check for NULL on the shared methods.
>>>
>>> Personally I hate to delete NULL argument pointer checks. Would a
>>> long-term compromise be to move them to argument checking macros
>>> that are enabled with --enable-rtems-debug?
>>>
>>>> Do we want to check for other obviously invalid pointer values, e.g.
>>>> SEM_FAILED?
>>>
>>> IMO Yes
>
> With the move to self-contained objects the storage space management moves
> from the system to the user.
This is sort of correct; the user always has to manage the storage allocation, whether it is system, user or other, and all we are doing is changing the mechanism being used. The user of an object creates the allocation, where previously we needed to specify the quantity and allocate the memory in the workspace, and any failure was only detected at runtime.

I think 'self-contained' objects are a nice model to adopt because they statically manage a number of system space issues; if there are too many objects the linker will tell us. I feel that long term the kernel should move to all resources being self-contained so a user knows the object counts in the configuration are for them. It will also make the test configuration simpler because object resources in drivers, BSP code or services in RTEMS vary from BSP to BSP.

> This makes it hard for the system to validate things. I am not sure if we
> should check for error conditions that shouldn't be present in production
> code.

The fact you say 'should' here means there is some doubt. There can be no doubt, or you need the checks.

> So, making these checks RTEMS_DEBUG dependent is something worth
> considering. Maybe we need a RTEMS_ROBUST option focusing on user
> introduced errors. RTEMS_DEBUG enables a lot of internal consistency
> checks.

I see a contradiction here. If a debug configured environment, and specifically a kernel you test to validate all is OK, is not a production environment, then your testing is only of limited value. Changing the environment, i.e. running the kernel without debug on, invalidates the previous testing. Any difference, without a careful and precise audit, must be treated as a change.

This leaves our users with an awkward question: "Should debugging or robust settings be left on for production?". If all testing shows the system is working and performing to specification, why turn the settings off? If there is any doubt with that last question, it must relate to the tests and the collected results.

I wonder if there is a case of looking too hard. For example, I could see a case where you extrapolate this argument to removing malloc NULL checks in production; after all, the memory is fixed, the initialisation is always the same and malloc will never fail! It is what you do not see, not what you see, that you need to cater for.

I view RTEMS_DEBUG as a development aid for those working within RTEMS. It should only relate to internal consistency checks. I view RTEMS_ROBUST as dangerous; is RTEMS not robust if I do not use it?

Chris
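
P.S. To make the argument checking macro idea concrete, here is a rough sketch of how checks could be compiled in only when --enable-rtems-debug is given. The names _ARG_CHECK and example_sem_wait are made up for illustration; this is not the real newlib or RTEMS code, just the shape of the compromise Joel suggests.

#include <errno.h>
#include <semaphore.h>
#include <stddef.h>

#ifdef RTEMS_DEBUG
  /* Debug build: invalid arguments are caught and reported. */
  #define _ARG_CHECK( cond, err ) \
    do { if ( !( cond ) ) { errno = ( err ); return -1; } } while ( 0 )
#else
  /* Non-debug build: the checks vanish and the caller must be correct. */
  #define _ARG_CHECK( cond, err ) do { (void) 0; } while ( 0 )
#endif

int example_sem_wait( sem_t *sem )
{
  /* The obviously invalid pointer values raised in this thread. */
  _ARG_CHECK( sem != NULL, EINVAL );
  _ARG_CHECK( sem != SEM_FAILED, EINVAL );

  /* ... wait on the self-contained semaphore object ... */
  return 0;
}

With self-contained objects the sem_t is storage the application provides itself, so whether there is room for it is settled by the linker rather than by a runtime allocation, which is the point about the object counts above.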