Re: Optimization issue in RISC-V BSP

2017-07-29 Thread Denis Obrezkov
2017-07-29 3:45 GMT+02:00 Joel Sherrill :

>
>
> On Jul 28, 2017 7:11 PM, "Denis Obrezkov"  wrote:
>
> 2017-07-29 1:41 GMT+02:00 Joel Sherrill :
>
>>
>>
>> On Jul 28, 2017 6:39 PM, "Denis Obrezkov" 
>> wrote:
>>
>> 2017-07-29 1:28 GMT+02:00 Joel Sherrill :
>>
>>>
>>>
>>> On Jul 28, 2017 6:14 PM, "Denis Obrezkov" 
>>> wrote:
>>>
>>> 2017-07-29 0:57 GMT+02:00 Joel Sherrill :
>>>


 On Jul 28, 2017 5:55 PM, "Denis Obrezkov" 
 wrote:

 2017-07-28 22:36 GMT+02:00 Joel Sherrill :

> Can you check the memory immediately after a download?
>
> Then after the loop that copies initialized data into place?
>
> I suspect something is off there. Could be a linker script issue or the
> copy gone crazy.
>
> --joel
>
> On Fri, Jul 28, 2017 at 3:20 PM, Denis Obrezkov <
> denisobrez...@gmail.com> wrote:
>
>> 2017-07-28 22:16 GMT+02:00 Joel Sherrill :
>>
>>>
>>>
>>> On Fri, Jul 28, 2017 at 2:50 PM, Denis Obrezkov <
>>> denisobrez...@gmail.com> wrote:
>>>

>> I can see that during task initialization I have a call:
>>  _Thread_Initialize_information 
>> (information=information@entry=0x8ad4
>> <_RTEMS_tasks_Information>, 
>> the_api=the_api@entry=OBJECTS_CLASSIC_API,
>> the_class=the_class@entry=1, maximum=124,
>> is_string=is_string@entry=false,
>> maximum_name_length=maximum_name_length@entry=4)
>>
>> And maximum is 124, but I have a configuration parameter:
>> #define CONFIGURE_MAXIMUM_TASKS 4
>>
>
> I can't imagine any standard RTEMS test configuring that many tasks.
> Is there a data corruption issue?
>
> 124 = 0x7c, which doesn't ring any bells for me on odd memory issues.
>
> What are the contents of "Configuration_RTEMS_API"?
>
 Oh, I changed my configuration options a bit; they are:
   #define CONFIGURE_APPLICATION_NEEDS_CLOCK_DRIVER
   #define CONFIGURE_APPLICATION_DISABLE_FILESYSTEM
   #define CONFIGURE_DISABLE_NEWLIB_REENTRANCY
   #define CONFIGURE_TERMIOS_DISABLED
   #define CONFIGURE_LIBIO_MAXIMUM_FILE_DESCRIPTORS 0
   #define CONFIGURE_MINIMUM_TASK_STACK_SIZE 512
   #define CONFIGURE_MAXIMUM_PRIORITY 3
   #define CONFIGURE_DISABLE_CLASSIC_API_NOTEPADS
   #define CONFIGURE_IDLE_TASK_BODY Init
   #define CONFIGURE_IDLE_TASK_INITIALIZES_APPLICATION
   #define CONFIGURE_TASKS 4

   #define CONFIGURE_MAXIMUM_TASKS 4

   #define CONFIGURE_UNIFIED_WORK_AREAS

 Also, it is the test from the low ticker example.
 Configuration_RTEMS_API with the -O0 option:
 {maximum_tasks = 5, maximum_timers = 0, maximum_semaphores = 7,
 maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
 maximum_ports = 0, maximum_periods = 0, maximum_barriers = 0,
 number_of_initialization_tasks = 0, User_initialization_tasks_table = 0x0}

 with the -Os option:
 {maximum_tasks = 124, maximum_timers = 0, maximum_semaphores = 7,
 maximum_message_queues = 0, maximum_partitions = 0, maximum_regions = 0,
 maximum_ports = 0, maximum_periods = 0, maximum_barriers = 0,
 number_of_initialization_tasks = 0, User_initialization_tasks_table = 0x0}

>>>
>>> Hmmm.. If you look at this structure in gdb without attaching to the
>>> target, what is maximum_tasks?
>>>
>>> --joel
>>>



>
>>
>> It seems that other tasks are LIBBLOCK tasks.
>>
>> Also, this is my Configuration during run:
>> (gdb) p Configuration.stack_space_size
>> $1 = 2648
>> (gdb) p Configuration.work_space_size
>> $2 = 4216
>> (gdb) p Configuration.interrupt_stack_size
>> $3 = 512
>> (gdb) p Configuration.idle_task_stack_size
>> $4 = 512
>>
>
> That looks reasonable. Add CONFIGURE_MAXIMUM_PRIORITY and set it to 4.
> That should reduce the workspace.
>
> Long term, we might want to consider lowering it permanently, like one
> of the ColdFires had to. Or change the default scheduler to the Simple
> one to save memory.
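For illustration, a sketch of what those two suggestions look like in a
confdefs.h-style configuration, assuming the standard option names
(CONFIGURE_SCHEDULER_SIMPLE, CONFIGURE_MAXIMUM_PRIORITY); this is only a
sketch, and the value chosen for CONFIGURE_MAXIMUM_PRIORITY has to be one
the selected scheduler accepts:

/* Illustrative sketch only: the memory-saving knobs suggested above.
 * These lines would sit with the application's other CONFIGURE_* defines,
 * before the final #include <rtems/confdefs.h>. */
#define CONFIGURE_SCHEDULER_SIMPLE     /* use the Simple scheduler */
#define CONFIGURE_MAXIMUM_PRIORITY 3   /* keep the priority range small */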
>
>
 I haven't dealt with the Scheduler option yet.



 --
 Regards, Denis Obrezkov

>>>
>>> maximum_tasks = 4
>> So, is it a linker file issue?
>>
>> This is it:
>> https://github.com/embeddedden/rtems-riscv/blob/hifive1/c/src/lib/libbsp/riscv32/hifive1/startup/linkcmds
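For reference, the "copy initialized data into place" step Joel is suspicious
of usually looks something like the sketch below. The symbol names here are
placeholders and have to match what linkcmds actually defines; if the load
address, run address, or end symbol in the script disagree with the copy loop,
initialized globals such as Configuration_RTEMS_API get overwritten, which
would fit maximum_tasks reading 124 on the target while the ELF says 4.

#include <stdint.h>

/* Sketch of a typical BSP .data copy loop.  The symbol names are
 * placeholders; the real ones come from the linker script above. */
extern uint32_t __data_load_start[];  /* where .data is stored (flash, LMA) */
extern uint32_t __data_start[];       /* where .data must run (RAM, VMA)    */
extern uint32_t __data_end[];

static void bsp_copy_initialized_data(void)
{
  uint32_t *src = __data_load_start;
  uint32_t *dst = __data_start;

  /* If these symbols or the section placement in linkcmds are wrong,
   * this loop copies the wrong bytes over the initialized data. */
  while (dst < __data_end) {
    *dst++ = *src++;
  }
}

Dumping the memory around Configuration_RTEMS_API right after the download and
again after this loop, as suggested earlier in the thread, shows which step
corrupts it.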
>>
>> --
>> Regards, Denis Obrezkov
>>
>
> After do

Checksum failure in sparc tool set building

2017-07-29 Thread Aditya Upadhyay
Hello All,

I have tried a lot to solve this issue but am not able to. I have
changed the Python non-dev version to the dev version and have correctly
set up the prefix.

download: http://ftp.gnu.org/gnu/gdb/gdb-7.12.tar.xz -> sources/gdb-7.12.tar.xz
downloading: sources/gdb-7.12.tar.xz - 18.3MB of 18.3MB (100%)
download: https://gaisler.org/gdb/gdb-7.12-sis-leon2-leon3.diff ->
patches/gdb-7.12-sis-leon2-leon3.diff
downloading: patches/gdb-7.12-sis-leon2-leon3.diff - 165.8kB of 165.8kB (100%)
warning: checksum error: gdb-7.12-sis-leon2-leon3.diff
error: checksum failure file: patches/gdb-7.12-sis-leon2-leon3.diff
Build FAILED
  See error report: rsb-report-sparc-rtems4.12-gdb-7.12-x86_64-linux-gnu-1.txt
error: checksum failure file: patches/gdb-7.12-sis-leon2-leon3.diff
Build Set: Time 1:35:24.577050
Build FAILED
aditya@aditya-Lenovo-ideapad-110-15ACL:~/development/rtems/src/rtems-source-builder/rtems$


Any help will be greatly appreciated.


Thanks & Regards,
Aditya Upadhyay


Re: Checksum failure in sparc tool set building

2017-07-29 Thread Joel Sherrill
On Jul 29, 2017 10:52 AM, "Aditya Upadhyay"  wrote:

Hello All,

I have tried a lot to solve this issue but am not able to. I have
changed the Python non-dev version to the dev version and have correctly
set up the prefix.

download: http://ftp.gnu.org/gnu/gdb/gdb-7.12.tar.xz -> sources/gdb-7.12.tar.xz
downloading: sources/gdb-7.12.tar.xz - 18.3MB of 18.3MB (100%)
download: https://gaisler.org/gdb/gdb-7.12-sis-leon2-leon3.diff ->
patches/gdb-7.12-sis-leon2-leon3.diff
downloading: patches/gdb-7.12-sis-leon2-leon3.diff - 165.8kB of 165.8kB (100%)
warning: checksum error: gdb-7.12-sis-leon2-leon3.diff
error: checksum failure file: patches/gdb-7.12-sis-leon2-leon3.diff
Build FAILED
  See error report: rsb-report-sparc-rtems4.12-gdb-7.12-x86_64-linux-gnu-1.txt
error: checksum failure file: patches/gdb-7.12-sis-leon2-leon3.diff
Build Set: Time 1:35:24.577050
Build FAILED
aditya@aditya-Lenovo-ideapad-110-15ACL:~/development/rtems/src/rtems-source-builder/rtems$


There are a number of potential explanations, including the file changing
on the gaisler.org server or a bad download.

A couple of options are to delete that diff file and try again, or to
comment out the checksum in the RSB file.

Hopefully someone who knows the file will comment.



Any help will be greatly appreciated.


Thanks & Regards,
Aditya Upadhyay

Re: Checksum failure in sparc tool set building

2017-07-29 Thread Jiri Gaisler

Sorry, this is my fault. I updated the diff yesterday but forgot that
the checksum also needs to be changed. I have restored the original diff
so RSB should build OK now. I will send a proper patch for the new diff
on the list later...

Jiri.


On 07/29/2017 05:52 PM, Aditya Upadhyay wrote:
> Hello All,
>
> I have tried a lot to solve this issue but am not able to. I have
> changed the Python non-dev version to the dev version and have correctly
> set up the prefix.
>
> download: http://ftp.gnu.org/gnu/gdb/gdb-7.12.tar.xz -> sources/gdb-7.12.tar.xz
> downloading: sources/gdb-7.12.tar.xz - 18.3MB of 18.3MB (100%)
> download: https://gaisler.org/gdb/gdb-7.12-sis-leon2-leon3.diff ->
> patches/gdb-7.12-sis-leon2-leon3.diff
> downloading: patches/gdb-7.12-sis-leon2-leon3.diff - 165.8kB of 165.8kB (100%)
> warning: checksum error: gdb-7.12-sis-leon2-leon3.diff
> error: checksum failure file: patches/gdb-7.12-sis-leon2-leon3.diff
> Build FAILED
>   See error report: rsb-report-sparc-rtems4.12-gdb-7.12-x86_64-linux-gnu-1.txt
> error: checksum failure file: patches/gdb-7.12-sis-leon2-leon3.diff
> Build Set: Time 1:35:24.577050
> Build FAILED
> aditya@aditya-Lenovo-ideapad-110-15ACL:~/development/rtems/src/rtems-source-builder/rtems$
>
>
> Any help will be greatly appreciated.
>
>
> Thanks & Regards,
> Aditya Upadhyay



Re: Optimization issue in RISC-V BSP

2017-07-29 Thread Joel Sherrill
Sorry to top post, but this thread is too deep to answer inline on a phone.

Try looking at the same code in the erc32 BSP and see how it is done.

Also, you could disable the atexit() call and see how much further you get.


Re: Optimization issue in RISC-V BSP

2017-07-29 Thread Denis Obrezkov
2017-07-30 2:34 GMT+02:00 Joel Sherrill :

>
> Sorry to top post, but this thread is too deep to answer inline on a phone.
>
> Try looking at the same code in the erc32 BSP and see how it is done.
>
> Also, you could disable the atexit() call and see how much further you get.
>
>
> Ok, I will look at the erc32 BSP.
I have removed the atexit() call, and now I can proceed further, up to the
while(1) loop in the low ticker test.
But the dummy clock doesn't tick.
So, I again get the output:
*** LOW MEMORY CLOCK TICK TEST ***
TA1  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:00   12/31/1988
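That output is consistent with a clock driver that never announces ticks: the
time of day only advances when something calls rtems_clock_tick() periodically.
As a sketch (the hardware programming below is a placeholder for whatever the
HiFive1 timer actually needs, and installing the handler is omitted), a real
driver's tick handler does roughly this:

#include <rtems.h>

/* Sketch only: what a non-dummy clock driver's tick handler does. */
static void clock_tick_handler(void *arg)
{
  (void) arg;

  /* ...acknowledge and re-arm the hardware timer here (BSP specific)... */

  rtems_clock_tick();  /* advances the tick count and the time of day */
}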


-- 
Regards, Denis Obrezkov

Re: Optimization issue in RISC-V BSP

2017-07-29 Thread Joel Sherrill
On Jul 29, 2017 8:02 PM, "Denis Obrezkov"  wrote:



2017-07-30 2:34 GMT+02:00 Joel Sherrill :

>
> Sorry to top post, but this thread is too deep to answer inline on a phone.
>
> Try looking at the same code in the erc32 BSP and see how it is done.
>
> Also, you could disable the atexit() call and see how much further you get.
>
>
> Ok, I will look at the erc32 BSP.
I have removed the atexit() call, and now I can proceed further, up to the
while(1) loop in the low ticker test.
But the dummy clock doesn't tick.
So, I again get the output:
*** LOW MEMORY CLOCK TICK TEST ***
TA1  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:00   12/31/1988


Low ticker has its Init task become the idle task. Your clock driver
simulator thread is likely never running.

Try turning that option off in the test.
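A sketch of that change, using only options already shown earlier in the
thread plus the standard CONFIGURE_RTEMS_INIT_TASKS_TABLE (whether the test
then still fits in the reduced memory is a separate question):

/* Sketch only: let Init run as an ordinary task so the idle task can idle
 * and other threads (e.g. a simulated clock tick thread) get CPU time.
 *
 * Remove from the test configuration:
 *   #define CONFIGURE_IDLE_TASK_BODY Init
 *   #define CONFIGURE_IDLE_TASK_INITIALIZES_APPLICATION
 *
 * and declare a regular Classic API init task instead: */
#define CONFIGURE_RTEMS_INIT_TASKS_TABLE
#define CONFIGURE_MAXIMUM_TASKS 4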



-- 
Regards, Denis Obrezkov