Can you check the memory immediately after a download? Then after the loop that copies initialized data into place?
I suspect something off there. Could be a linker script issue or the copy gone
crazy.

--joel

On Fri, Jul 28, 2017 at 3:20 PM, Denis Obrezkov <denisobrez...@gmail.com> wrote:
> 2017-07-28 22:16 GMT+02:00 Joel Sherrill <j...@rtems.org>:
>>
>> On Fri, Jul 28, 2017 at 2:50 PM, Denis Obrezkov <denisobrez...@gmail.com>
>> wrote:
>>
>>>>> I can see that during task initialization I have a call:
>>>>> _Thread_Initialize_information (information=information@entry=0x80000ad4
>>>>> <_RTEMS_tasks_Information>, the_api=the_api@entry=OBJECTS_CLASSIC_API,
>>>>> the_class=the_class@entry=1, maximum=124,
>>>>> is_string=is_string@entry=false,
>>>>> maximum_name_length=maximum_name_length@entry=4)
>>>>>
>>>>> And maximum is 124, but I have a configuration parameter:
>>>>> #define CONFIGURE_MAXIMUM_TASKS 4
>>>>
>>>> I can't imagine any standard RTEMS test configuring that many tasks.
>>>> Is there a data corruption issue?
>>>>
>>>> 124 = 0x7c, which doesn't ring any bells for me on odd memory issues.
>>>>
>>>> What are the contents of "Configuration_RTEMS_API"?
>>>
>>> Oh, I changed my configuration options a bit. They are:
>>> #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
>>> #define CONFIGURE_APPLICATION_DISABLE_FILESYSTEM
>>> #define CONFIGURE_DISABLE_NEWLIB_REENTRANCY
>>> #define CONFIGURE_TERMIOS_DISABLED
>>> #define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 0
>>> #define CONFIGURE_MINIMUM_TASK_STACK_SIZE 512
>>> #define CONFIGURE_MAXIMUM_PRIORITY 3
>>> #define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
>>> #define CONFIGURE_IDLE_TASK_BODY Init
>>> #define CONFIGURE_IDLE_TASK_INITIALIZES_APPLICATION
>>> #define CONFIGURE_TASKS 4
>>>
>>> #define CONFIGURE_MAXIMUM_TASKS 4
>>>
>>> #define CONFIGURE_UNIFIED_WORK_AREAS
>>>
>>> Also, it is the test from a cut-down ticker example.
>>> Configuration_RTEMS_API with the -O0 option:
>>> {maximum_tasks = 5, maximum_timers = 0, maximum_semaphores = 7,
>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>> maximum_ports = 0, maximum_periods = 0, maximum_barriers = 0,
>>> number_of_initialization_tasks = 0, User_initialization_tasks_table = 0x0}
>>>
>>> With the -Os option:
>>> {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 7,
>>> maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
>>> maximum_ports = 0, maximum_periods = 0, maximum_barriers = 0,
>>> number_of_initialization_tasks = 0, User_initialization_tasks_table = 0x0}
>>
>> Hmmm... If you look at this structure in gdb without attaching to the
>> target, what is maximum_tasks?
>>
>> --joel
>>
>>>>> It seems that the other tasks are LIBBLOCK tasks.
>>>>>
>>>>> Also, this is my Configuration during the run:
>>>>> (gdb) p Configuration.stack_space_size
>>>>> $1 = 2648
>>>>> (gdb) p Configuration.work_space_size
>>>>> $2 = 4216
>>>>> (gdb) p Configuration.interrupt_stack_size
>>>>> $3 = 512
>>>>> (gdb) p Configuration.idle_task_stack_size
>>>>> $4 = 512
>>>>
>>>> That looks reasonable. Add CONFIGURE_MAXIMUM_PRIORITY and set it to 4.
>>>> That should reduce the workspace.
>>>>
>>>> Long term, we might want to consider lowering it permanently like one
>>>> of the Coldfires had to. Or change the default scheduler to the Simple
>>>> one to save memory.
>>>
>>> I haven't dealt with the Scheduler option yet.
>>>
>>> --
>>> Regards, Denis Obrezkov
>>
>> maximum_tasks = 4
>
> So, is it a linker file issue?
>
> This is it:
> https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/startup/linkcmds
>
> --
> Regards, Denis Obrezkov
_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel