writing a new Machine Description file (e.g. cortex-a57.md) in gcc
Hi Linaro Toolchain Group,

I am new to gcc development. I am trying to write a new md file describing the pipeline information for a processor. Please suggest a good reference document for understanding machine description files.

A few questions from the cortex-a53.md file. For the first integer pipeline the following is defined:

(define_cpu_unit "cortex_a53_slot0" "cortex_a53")

Is the name cortex_a53_slot0 a keyword, or is it an arbitrary string? Is there any convention for choosing names for cpu units? If 'cortex_a53_slot0' is an arbitrary string, how does the assembler know it is the first integer pipeline?

How are these *.md files used? When are they compiled, and how are they used?

How can one verify whether an md file for a processor is written correctly? How can it be tested?

What other design considerations must be kept in mind while writing a new md file?

Thanks.

with regards,
Virendra Kumar Pathak
_______________________________________________
linaro-toolchain mailing list
linaro-toolchain@lists.linaro.org
https://lists.linaro.org/mailman/listinfo/linaro-toolchain
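[For readers of the archive: the "Machine Descriptions" chapter of the GCC Internals manual covers this. As a rough illustration of the shape of a pipeline description in the style of cortex-a53.md, the fragment below uses made-up names ("my_cpu", "my_cpu_slot0") precisely because unit names are arbitrary strings chosen by the port author, not keywords; GCC only matches them against the automaton declared with define_automaton and the reservation strings.]

```lisp
;; Illustrative sketch only -- all names here are hypothetical.
(define_automaton "my_cpu")

;; Two issue slots; "my_cpu_slot0" is just a label, not a keyword.
(define_cpu_unit "my_cpu_slot0" "my_cpu")
(define_cpu_unit "my_cpu_slot1" "my_cpu")

;; Simple ALU ops can issue in either slot and complete in 2 cycles.
(define_insn_reservation "my_cpu_alu" 2
  (and (eq_attr "tune" "mycpu")
       (eq_attr "type" "alu_imm,alu_sreg"))
  "my_cpu_slot0|my_cpu_slot1")
```

The assembler never sees any of this: the description is compiled (by genautomata) into a scheduling automaton inside the compiler itself, which uses it to order instructions before assembly output is produced.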
Re: writing a new Machine Description file (e.g. cortex-a57.md) in gcc
Hi Jim,

Thanks for the reply. This is of great help.

Thanks
Virendra

On 7 May 2015 at 23:42, Jim Wilson wrote:
> On Thu, May 7, 2015 at 10:56 AM, Jim Wilson wrote:
> > There are various genX programs that read the md file and generate an
> > insn-X.c file to perform actions based on info in the md file. For
> > instance genrecog creates the insn-recog.c file, which is for
> > recognizing RTL patterns that map to valid instructions on the target.
> > genemit creates the insn-emit.c file which emits the assembly code for
> > an RTL insn. Etc.
>
> I got this wrong. genoutput and insn-output.c are for the assembly
> code. genemit and insn-emit.c are for emitting the initial RTL.
>
> Jim
Re: 502 connecting to abe.tcwglab.linaro.org
Hi,

I tried the abe 'stable' branch, and with the configuration below the build was successful:

../abe/configure --with-fileserver=148.251.136.42 --with-remote-snapshots=http://148.251.136.42/snapshots-ref
../abe/abe.sh --target aarch64-linux-gnu --build all

(Additionally I had to increase wget_timeout=100.)

But on the master branch, with the commands below (as suggested in the above mail), the following error was observed:

../abe/configure --with-fileserver=148.251.136.42 --with-remote-snapshots=/snapshots-ref
../abe/abe.sh --target aarch64-linux-gnu --build all

ERROR (#144): fetch_http (md5sums doesn't exist and you disabled updating.)

It looks like the following commit is causing the problem on the abe master branch. On removing it, the build successfully downloaded the packages from 148.251.136.42.

commit 5a5ab3582851d52d903846d34c79850f5d7ebda5
don't try to fetch anything is updates are disabled.

Thanks
Virendra

On 6 May 2015 at 17:30, wrote:
> Today's Topics:
>
> 1. Re: 502 connecting to abe.tcwglab.linaro.org (Tim Entinger)
> 2. Re: 502 connecting to abe.tcwglab.linaro.org (Victor Chong)
> 3.
> [ANNOUNCE] Linaro GCC 4.9 2015.04 snapshot re-spin (Yvan Roux)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 5 May 2015 12:44:44 +0000 (UTC)
> From: Tim Entinger
> To: linaro-toolchain@lists.linaro.org
> Subject: Re: 502 connecting to abe.tcwglab.linaro.org
>
> Rob Savoye writes:
> >
> > On 04/21/2015 04:23 PM, Christopher Covington wrote:
> > > abe$ ./abe.sh --target aarch64-linux-gnu
> > > NOTE: Downloading md5sums to abe/snapshots
> > > RUN: /usr/bin/wget --timeout=10 --tries=2 --directory-prefix=abe/snapshots/ http://abe.tcwglab.linaro.org/snapshots/md5sums
> >
> > We're having a major security related issue, so the entire TCWG lab
> > got taken offline yesterday till this is fixed and more secured. No idea
> > what the ETA is for all of that. ABE will work regardless other than
> > some warning messages about not being able to download the md5sums file,
> > or anything under 'infrastructure'. Most of what ABE needs is at
> > git.linaro.org, and that's still online. The only files ABE downloads
> > from abe.tcwglab.linaro.org are source tarballs, which don't change very
> > often. As long as you have them already downloaded, you can build a
> > toolchain still. I work offline pretty frequently, so have tried to make
> > ABE work without the upstream access to anything.
> >
> > Try adding '--disable update' to your command line for abe.sh, and
> > it'll stop trying.
> >
> > - rob -
>
> Is there any update for when the TCWG lab will be back online? I was
> trying to use ABE for the first time and got the 503 Service Unavailable
> message. Is there a way to specify different locations for the source
> tarballs in the meantime? I'm hoping to work around the issue if a
> timeline is still unknown.
> ------------------------------
>
> Message: 2
> Date: Tue, 5 May 2015 23:12:28 +0900
> From: Victor Chong
> To: Tim Entinger
> Cc: Linaro Toolchain Mailman List
> Subject: Re: 502 connecting to abe.tcwglab.linaro.org
>
> Hi Tim,
>
> On Tue, May 5, 2015 at 9:44 PM, Tim Entinger wrote:
question on bfd - arch & mach
Hi Linaro Toolchain Group,

I am going through the binutils code base specific to arm & aarch64. Please give some insight on the questions below.

1. In struct bfd_arch_info {...} (in bfd/bfd-in2.h) there are two fields, 'enum bfd_architecture arch' and 'unsigned long mach'. I went through the binutils porting guide (by mr.swami.re...@nsc.com), which says 'arch' is the architecture and 'mach' is the machine value. At present in bfd/bfd-in2.h: arch = bfd_arch_aarch64 and mach = bfd_mach_aarch64 or bfd_mach_aarch64_ilp32. But what do these fields really mean? What is the difference between 'arch' and 'mach'? Let's say the instruction set architecture is ARMv8 (also known as aarch64 for 64-bit, if I am not wrong). Then we have specific implementations of it like Cortex-A53, Cortex-A57, Cavium ThunderX etc. With respect to this, what will be the values of arch and mach?

2. In include/opcode/arm.h, 'arm_feature_set' is defined as a structure, whereas in include/opcode/aarch64.h 'aarch64_feature_set' is defined as unsigned long. Is there any specific reason for this? Why was the structure definition not followed in aarch64?

typedef struct
{
  unsigned long core;
  unsigned long coproc;
} arm_feature_set;

typedef unsigned long aarch64_feature_set;

3. Also, I see that in the case of arm, 'mach' values are derived from the cpu extension value specified in that 'arm_feature_set' structure. For example:

if (ARM_CPU_HAS_FEATURE (cpu_variant, arm_cext_iwmmxt2))
  mach = bfd_mach_arm_iWMMXt2;

Whereas in aarch64, mach is derived based on the ABI type (64 or 32). Any reason for this?

mach = ilp32_p ? bfd_mach_aarch64_ilp32 : bfd_mach_aarch64;

Thanks in advance.

--
with regards,
Virendra Kumar Pathak
Re: question on bfd - arch & mach
Hi Jim,

Thanks for giving the insight.

On 27 May 2015 at 21:37, Jim Wilson wrote:
> On Wed, May 27, 2015 at 1:21 AM, Virendra Kumar Pathak wrote:
> > 1. In the struct bfd_arch_info {...} (in bfd/bfd-in2.h) there are two fields
> > 'enum bfd_architecture arch' and 'unsigned long mach'.
> > I went through the binutils porting guide (by mr.swami.re...@nsc.com) which
> > says 'arch' is for architecture & 'mach' is for machine value.
> > At present in the bfd/bfd-in2.h: arch = bfd_arch_aarch64 and mach =
> > bfd_mach_aarch64 or bfd_mach_aarch64_ilp32.
> > But what do these fields really mean? What is the difference between 'arch'
> > and 'mach'?
>
> arch is for different incompatible architectures, e.g. sparc versus
> mips versus arm. mach is for different incompatible machines within
> an architecture. So for arm we have for instance bfd_mach_arm_2 for
> armv2 and bfd_mach_arm_5T for armv5t, etc. These fields have little
> meaning outside what the rest of the binutils code gives to them, so
> the author of a port can use them however he sees fit, and sometimes
> different ports use them slightly differently. Practical
> considerations will sometimes force particular choices, to get a
> working linux system.
>
> > Let's say the instruction set architecture is ARMv8 (also known as aarch64
> > for 64-bit, if I am not wrong). Then we have specific implementations of
> > this like Cortex-A53, Cortex-A57, Cavium ThunderX etc. With respect to
> > this what will be the value of arch = ? and mach = ?
>
> All of the announced aarch64 parts implement the same instruction set
> (more or less), so they all use the same mach value, bfd_mach_aarch64.
>
> > 2. In include/opcode/arm.h the 'arm_feature_set' is defined as a
> > structure whereas in include/opcode/aarch64.h 'aarch64_feature_set' is
> > defined as unsigned long. Is there any specific reason for this? Why
> > was the structure definition not followed in aarch64?
> > typedef struct
> > {
> >   unsigned long core;
> >   unsigned long coproc;
> > } arm_feature_set;
> >
> > typedef unsigned long aarch64_feature_set;
>
> Ports are free to implement this as they see fit. Often different
> people will do it slightly differently. There is no requirement to do
> it exactly the same way as some other port. So no requirement that
> aarch64 do anything exactly the same as how the arm port did it.
>
> On the practical side, arm is an old architecture, which has many
> variants, and has a definite need to express different feature sets.
> Whereas aarch64 is new, and as yet does not have any specific need for
> different feature sets, since all of the announced parts implement
> mostly the same feature sets. So aarch64 has a simple definition as
> it doesn't need anything complicated here. And arm has a complicated
> definition, as this was necessary to get correct behaviour from the
> arm port.
>
> > 3. Also I see that in the case of arm, 'mach' values are derived from the
> > cpu extension value specified in that 'arm_feature_set' structure.
> > For example:
> > if (ARM_CPU_HAS_FEATURE (cpu_variant, arm_cext_iwmmxt2))
> >   mach = bfd_mach_arm_iWMMXt2;
> > Whereas in aarch64 mach is derived based on the ABI type (64 or 32). Any
> > reason for this?
> > mach = ilp32_p ? bfd_mach_aarch64_ilp32 : bfd_mach_aarch64;
>
> These are effectively working the same way. The only difference is
> that there are many arm variants, but only one aarch64 variant, which
> is why there are many bfd_mach_arm* codes and only one
> bfd_mach_aarch64* code.
>
> As for the ILP32 ABI, it is incompatible with the default LP64 ABI,
> and traditionally ILP32 and LP64 use different ELF formats (ELF32
> versus ELF64), so it is convenient to give the ILP32 ABI its own
> machine code so that we can use the machine code to select the ELF
> format. This is also done for x86, where 32-bit, 64-bit, and x32 ABI
> are 3 different machine codes.
>
> There is a practical consideration here that if you are using mach
> codes for ABIs, and have x ABIs, and are using mach codes for
> implementations, and have y implementations, then you would need x*y
> mach codes to represent every combination of ABI and implementation,
> which would quickly get impractical. So for instance in the x86 port,
> they only have a few mach codes for implementations, even though there
> are dozens of variants of the x86 architecture. A mach code is not
> the only way to express a different implementation. It is an
> implementer's choice whet
Tool for checking coding style in Linaro Toolchain
Hi Linaro Toolchain Group,

Is there any tool for checking the coding style of patches submitted to the Linaro Toolchain? In other words, is there any equivalent of Linux's checkpatch.pl in the Linaro Toolchain (GCC, Binutils etc.)? If yes, please let me know.

If not, then in general how do people check the coding style of the patches they submit to the Linaro Toolchain (GCC, Binutils etc.)? I assume manual inspection (line by line) would be a very tedious thing to do.

Thanks.

--
with regards,
Virendra Kumar Pathak
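[For readers of the archive: GCC does ship a checkpatch-style script in its source tree, contrib/check_GNU_style.sh, which scans a patch for common GNU coding-style violations (over-long lines, spacing, trailing whitespace, and so on). A rough usage sketch, assuming a GCC source checkout and a patch file named mypatch.diff (a placeholder name):]

```shell
# Run from the top of a GCC checkout; mypatch.diff is a placeholder.
./contrib/check_GNU_style.sh mypatch.diff
```

It is advisory rather than authoritative, so reviewers still check style by hand against the GNU Coding Standards, but it catches most mechanical issues.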
abe: cross native toolchain compilation for aarch64-linux-gnu
Hi Linaro Toolchain Group,

I am trying to build a cross-native toolchain for AArch64 using the ABE build framework. By cross-native I mean that the toolchain will be built on x86 (Red Hat Santiago 6.4), will run on AArch64 (Juno), and will produce binaries to be run on AArch64 (Juno). (If I am not mistaken: --build=x86, --host=AArch64, --target=AArch64.)

Steps followed:

1. I built a cross toolchain first:
../abe/configure
../abe/abe.sh --target aarch64-linux-gnu --build all --release 2015.05.29 --tarbin --disable update

2. Added the above cross toolchain (bin path) to PATH.

3. To build the cross-native:
../abe/configure
../abe/abe.sh --host aarch64-linux-gnu --target aarch64-linux-gnu --build all --release 2015.06.01.native --tarbin --disable update

But after some time, I got the following error and compilation hung:

make: Leaving directory `/home/user_name/vpathak/build_abe/builds/aarch64-linux-gnu/aarch64-linux-gnu/gcc.git~linaro-4.9-branch-stage1'
RUN: copy_gcc_libs_to_sysroot "/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc --sysroot=/home/user_name/vpathak/build_abe/sysroots/aarch64-linux-gnu"
/home/user_name/vpathak/abe/lib/make.sh: line 962: ./builds/destdir/aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc: cannot execute binary file

Am I missing something? Please help.

However, with the following hack in abe, I am able to compile a cross-native toolchain for aarch64-linux-gnu. In the abe code base:

--- lib/make.sh, function copy_gcc_libs_to_sysroot()
 gcc_exe="`find -name ${target}-gcc`"
-libgcc="`${gcc_exe} -print-file-name=${libgcc}`"
+#libgcc="`${gcc_exe} -print-file-name=${libgcc}`"
+libgcc="/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu/lib/gcc/aarch64-linux-gnu/4.9.3/libgcc.a"

Since './builds/destdir/aarch64-linux-gnu/bin/aarch64-linux-gnu-gcc' is a native toolchain it will not run on x86, so libgcc would be empty. Therefore I hard-coded the libgcc path.

I also had to disable gdb compilation.
I faced this error only while doing the cross-native compilation; cross compilation is successful on the same x86 machine.

checking for library containing waddstr... no
configure: error: no enhanced curses library found; disable TUI
make: *** [configure-gdb] Error 1
make: Leaving directory `/home/user_name/vpathak/build_abe/builds/aarch64-linux-gnu/aarch64-linux-gnu/binutils-gdb.git~linaro_gdb-7.8-branch-gdb'
WARNING: Make had failures!
ERROR (#159): build_all (Failed building .)

Hacked patch:

--- a/lib/make.sh
+++ b/lib/make.sh
@@ -31,10 +31,10 @@ build_all()
 # to rebuilt the sysroot.
 local builds="infrastructure binutils libc stage2 gdb"
 else
-local builds="infrastructure binutils stage1 libc stage2 gdb"
+local builds="infrastructure binutils stage1 libc stage2"
 fi
 if test "`echo ${target} | grep -c -- -linux-`" -eq 1; then
-local builds="${builds} gdbserver"
+local builds="${builds}"
 fi

Thanks.

--
with regards,
Virendra Kumar Pathak
gcc-5 git branch
Hi Linaro Toolchain Group,

I see two git branches for GCC 5 at https://git.linaro.org:

1. https://git.linaro.org/toolchain/gcc.git/shortlog/refs/heads/gcc-5-branch
2. https://git.linaro.org/toolchain/gcc.git/shortlog/refs/heads/linaro/gcc-5-branch

What is the difference between these two branches? Which branch should be used for development? Is it that branch (1) mirrors the upstream GCC 5 branch, while branch (2) is the Linaro version of branch (1)? Please comment.

--
with regards,
Virendra Kumar Pathak
mfpu=neon and -march=native on aarch64-linux-gnu toolchain
Hi Linaro Toolchain Group,

I have questions regarding -mfpu=neon and -march=native w.r.t. the aarch64 Linaro native toolchain (aarch64-linux-gnu). I found gcc is not accepting these flags.

On the Juno board:

gcc -mfpu=neon hello.c -o hello
gcc: error: unrecognized command line option '-mfpu=neon'

gcc -march=native hello.c -o hello
hello.c:1:0: error: unknown value 'native' for -march

Am I missing something? Please help.

Below are the machine details:

uname -a
Linux juno 3.10.55.0-1-linaro-lt-vexpress64 #1ubuntu1~ci+141022094025 SMP Wed Oct 22 09:41:06 UTC 2014 aarch64 aarch64 aarch64 GNU/Linux

gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/local/apps/gcc-native/gcc-linaro-5.1-2015.06.06.native-i686-mingw32_aarch64-linux-gnu/bin/../libexec/gcc/aarch64-linux-gnu/5.1.1/lto-wrapper
Target: aarch64-linux-gnu
Configured with: '/home/user_name/vpathak/build_abe/snapshots/gcc.git~linaro-4.9-branch/configure' SHELL=/bin/sh --with-bugurl=https://bugs.linaro.org --with-mpc=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu --with-mpfr=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu --with-gmp=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu --with-gnu-as --with-gnu-ld --disable-libstdcxx-pch --disable-libmudflap --with-cloog=no --with-ppl=no --with-isl=no --disable-nls --enable-multiarch --disable-multilib --enable-c99 --with-arch=armv8-a --disable-shared --enable-static --with-build-sysroot=/home/user_name/vpathak/build_abe/sysroots/aarch64-linux-gnu --enable-lto --enable-linker-build-id --enable-long-long --enable-shared --with-sysroot=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu/aarch64-linux-gnu/libc --enable-languages=c,c++,fortran,lto --enable-fix-cortex-a53-835769 --enable-checking=release --disable-bootstrap --with-bugurl=https://bugs.linaro.org --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu --target=aarch64-linux-gnu
--prefix=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu
Thread model: posix
gcc version 5.1.1 (GCC)

Thanks.

--
with regards,
Virendra Kumar Pathak
Re: mfpu=neon and -march=native on aarch64-linux-gnu toolchain
Hi Andrew,

Thanks for the information. So does it mean gcc will not recognize the '-mfpu' option at all for aarch64?

The Linaro Toolchain FAQ says that along with '-mfpu' we also need to specify -mfloat-abi=softfp|hard, otherwise VFP/Neon/Crypto instructions will not be generated. Also, a Linaro Connect presentation (http://www.slideshare.net/linaroorg/lcu14-307-advanced-toolchain-usage-parts-12) suggests passing '-funsafe-math-optimizations' since NEON does not support full IEEE 754. Are such flag restrictions still applicable?

Please also tell me how we can disable NEON/SIMD instructions in gcc for aarch64.

Thanks.

On 10 June 2015 at 11:05, Pinski, Andrew wrote:
> -march=native was not included in the GCC 5.1 that Linaro provided. I don't
> know if Linaro has plans to backport the support but it is already there
> for GCC 6.
>
> You don't need -mfpu=neon for AARCH64 at all. AARCH64 defaults to having
> simd turned on.
>
> Thanks,
> Andrew Pinski
> ----------
> *From:* linaro-toolchain on behalf of Virendra Kumar Pathak
> *Sent:* Tuesday, June 9, 2015 10:26 PM
> *To:* Linaro Toolchain Mailman List
> *Subject:* mfpu=neon and -march=native on aarch64-linux-gnu toolchain
>
> Hi Linaro Toolchain Group,
>
> I have questions regarding -mfpu=neon and -march=native w.r.t aarch64
> linaro native toolchain (aarch64-linux-gnu)
> I found gcc is not accepting these flags.
>
> On juno Board:
>
> gcc -mfpu=neon hello.c -o hello
> gcc: error: unrecognized command line option '-mfpu=neon'
>
> gcc -march=native hello.c -o hello
> hello.c:1:0: error: unknown value 'native' for -march
>
> Am I missing something ? Please help.
>
> Below is the machine details
>
> uname -a
> Linux juno 3.10.55.0-1-linaro-lt-vexpress64 #1ubuntu1~ci+141022094025 SMP
> Wed Oct 22 09:41:06 UTC 2014 aarch64 aarch64 aarch64 GNU/Linux
>
> gcc -v
> Using built-in specs.
> COLLECT_GCC=gcc
> COLLECT_LTO_WRAPPER=/usr/local/apps/gcc-native/gcc-linaro-5.1-2015.06.06.native-i686-mingw32_aarch64-linux-gnu/bin/../libexec/gcc/aarch64-linux-gnu/5.1.1/lto-wrapper
> Target: aarch64-linux-gnu
> Configured with: '/home/user_name/vpathak/build_abe/snapshots/gcc.git~linaro-4.9-branch/configure' SHELL=/bin/sh --with-bugurl=https://bugs.linaro.org --with-mpc=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu --with-mpfr=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu --with-gmp=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu --with-gnu-as --with-gnu-ld --disable-libstdcxx-pch --disable-libmudflap --with-cloog=no --with-ppl=no --with-isl=no --disable-nls --enable-multiarch --disable-multilib --enable-c99 --with-arch=armv8-a --disable-shared --enable-static --with-build-sysroot=/home/user_name/vpathak/build_abe/sysroots/aarch64-linux-gnu --enable-lto --enable-linker-build-id --enable-long-long --enable-shared --with-sysroot=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu/aarch64-linux-gnu/libc --enable-languages=c,c++,fortran,lto --enable-fix-cortex-a53-835769 --enable-checking=release --disable-bootstrap --with-bugurl=https://bugs.linaro.org --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu --target=aarch64-linux-gnu
> --prefix=/home/user_name/vpathak/build_abe/builds/destdir/aarch64-linux-gnu
> Thread model: posix
> gcc version 5.1.1 (GCC)
>
> Thanks.
> --
> with regards,
> Virendra Kumar Pathak

--
with regards,
Virendra Kumar Pathak
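[For readers of the archive, on the open question of disabling NEON/SIMD: on aarch64 the SIMD unit is controlled through extension modifiers on -march/-mcpu rather than through -mfpu, which is a 32-bit ARM option. A sketch, assuming an aarch64 gcc recent enough to accept these (hello.c is a placeholder):]

```shell
gcc -march=armv8-a+nosimd hello.c -o hello   # disable Advanced SIMD code generation
gcc -mcpu=cortex-a57 hello.c -o hello        # tune and schedule for a specific core
gcc -mcpu=native hello.c -o hello            # GCC 6 and later: detect the host core
```

The -mfloat-abi and -funsafe-math-optimizations caveats quoted above are from 32-bit ARM material; aarch64 has no soft-float multilib variant in the same sense, so they do not carry over directly.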
Run time comparison of sin() in libm
Hi Linaro Toolchain Group,

I am comparing the execution time (run time) of the sin() trigonometric function between the following glibc (including libm) libraries for aarch64 (Juno Cortex-A57): Linaro glibc 2.19, Linaro eglibc 2.19, eglibc 2.19 (from http://www.eglibc.org/), and Linaro glibc 2.21.

My observations for the execution time of sin():
with Linaro glibc 2.19 and eglibc 2.19 = 1m24.703s (approx)
whereas with Linaro eglibc 2.19 & Linaro glibc 2.21 = 0m25.243s (approx)

Has Linaro optimized the libm functions for aarch64 in Linaro eglibc 2.19? If yes, please point me to the relevant references where I can find more information on them. Since eglibc development has stopped after version 2.19, will Linaro maintain its own development version of glibc?

I am using the snippet below and the Linux 'time' command to measure the time.

void sin_func(void)
{
    double incr = 0.732;
    double result, count = 0.0;

    printf("%s\n", __func__);
    while (count < 105414350.0) {
        result = sin(count);
        count += incr;
    }
}

Thanks.

--
with regards,
Virendra Kumar Pathak
binutils - question on decoding decision tree for aarch64
Hi Linaro Toolchain Group,

I am trying to learn the 'decoding decision tree' for aarch64 in binutils by adding a new assembly instruction 'addvp'. For example:

addvp x0, x0, 9

For this, I added an entry in struct aarch64_opcode aarch64_opcode_table[] (file opcodes/aarch64-tbl.h) as below:

{"addvp", 0x0100, 0x7f00, addsub_imm, 0, CORE, OP3 (Rd_SP, Rn_SP, AIMM), QL_R2NIL, F_SF},

The ARM manual says bits 27 & 28 are unallocated, so for addvp I am using opcode 0100 (with bits 27 & 28 as 0). With this, generating an object file from the assembly file is successful (test.s --> test.o); but while disassembling with objdump, it says undefined instruction.

From the objdump log:
81002400 .inst 0x81002400 ; undefined
(but the instruction was generated correctly, i.e. 81002400!)

I know that since addvp is a hacked-in instruction it won't execute on a cpu, but the disassembly should still succeed.

1. Please help me understand what I am doing wrong here. What else should I do to add a new instruction in binutils?
2. I also saw some printfs in opcodes/aarch64-gen.c which I guess create the decoding tree (initialize_decoder_tree()). How do I print them? I set debug = 1 but the output still does not appear.
3. There are some auto-generated files like aarch64-asm-2.c and aarch64-dis-2.c. How do I re-generate them?

Thanks.

--
with regards,
Virendra Kumar Pathak
ABE - bug in copy_gcc_libs_to_sysroot() while building native toolchain for aarch64
Hi Linaro Toolchain Group,

I am building a native toolchain for aarch64 with the configuration below:

--build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu --target=aarch64-linux-gnu

In copy_gcc_libs_to_sysroot() - which copies libgcc.a to the sysroot - the current implementation tries to find the absolute path of libgcc.a as below:

libgcc="`${local_builds}/destdir/${host}/bin/${target}-gcc -print-file-name=${libgcc}`"

But this line will not execute (i.e. gcc -print-file-name) on x86_64, as the toolchain is a native toolchain for aarch64-linux-gnu. Thus an infinite loop is created in the copy command, i.e. copying directory x into x. However, when I hard-coded the libgcc.a path on my machine (as below), everything went fine:

libgcc="/home/vpathak/arm/toolchain/build_abe_new/builds/destdir/aarch64-linux-gnu/lib/gcc/aarch64-linux-gnu/5.1.1/libgcc.a"

I think this is a bug in the ABE build infrastructure.

Thanks.

--
with regards,
Virendra Kumar Pathak
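[For readers of the archive: the failure mode above is that the helper queries a ${host}-hosted gcc, which the x86_64 build machine cannot execute. One way to sketch a canadian-cross-safe fix is to query a compiler that does run on the build machine, e.g. the stage-1 cross gcc; the ${build} destdir path below is an assumption about the abe layout, not the actual fix that went in:]

```shell
# Sketch: prefer a ${target}-gcc that the build machine can execute
# (the cross compiler), falling back only if it is missing.
cross_gcc="${local_builds}/destdir/${build}/bin/${target}-gcc"
if [ -x "${cross_gcc}" ]; then
    libgcc="$("${cross_gcc}" -print-file-name=libgcc.a)"
fi
```

The key invariant is that whatever binary answers -print-file-name must be executable on --build, even though the files it reports belong to the --target libraries.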
abe - compiling cross aarch64 toolchain using local directory
Hi Linaro Toolchain Group,

I am trying to compile a cross aarch64-linux-gnu toolchain using a local directory (tar file) for gcc. I am using file:/// as mentioned at https://wiki.linaro.org/ABE.

../abe/abe.sh --target aarch64-linux-gnu --build all --release 20150819 --tarbin gcc=file:///home/vpathak/arm/toolchain/build/snapshots/gcc-2015.11-5.tar.bz2

But I get the following errors:

ERROR (#146): get_URL (not supported for .tar.* files.)
ERROR (#533): get_source (file:///home/vpathak/arm/toolchain/build/snapshots/gcc-2015.11-5.tar.bz2 not a valid sources.conf identifier.)
TRACE(#190): checkout ()
ERROR (#193): checkout (No URL given!)
ERROR (#161): checkout_all (Failed checkout out of gcc.)

Am I missing something? How can we use a local directory (e.g. gcc source code) for building a toolchain using abe? Please help.

Thanks.

--
with regards,
Virendra Kumar Pathak
GNU GCC development plan and its interaction with Linaro
Hi Linaro Toolchain Group,

I am new to GCC development and have some basic questions on its development process. Could you please give some insight on the questions below? (Apologies if they are very trivial.) I have read https://gcc.gnu.org/develop.html.

If I am correct, gcc trunk is on gcc 6.0.0 (stage 3) at present and will become 6.0.1 (regression fixes only) in January 2016. gcc 6.0.1 will be released as gcc 6.1.0 in April 2016, and from there onwards the gcc 6 release branch will start. However, there is also a gcc 5 branch in parallel whose current version is 5.2.1 and which will be released as gcc 5.3 soon. Presumably gcc 5.3 will be the last release in the gcc 5 series. (Please correct me if I am wrong.)

[Questions]
1. What is the difference between the experimental branch (gcc 6.0.0 stage 3) and a gcc release branch (gcc 5.2.1)? Is there any rule which decides which changes go where? In case I have some patches for a new aarch64 processor at present, into which branch would these changes be merged (assuming they pass review)?
2. How are the subversions of release branches decided? Is it correct to say that there will always be 3 subversions of any release branch (e.g. gcc 5.1, gcc 5.2 & gcc 5.3)?
3. What is the working model between GNU GCC and Linaro GCC? Does Linaro directly accept patches, or do they need to go to GNU GCC first?

Thanks in advance for your time.

--
with regards,
Virendra Kumar Pathak
Re: GNU GCC development plan and its interaction with Linaro
Thanks Yvan Roux & Charles Baylis for the help.

On 2 December 2015 at 20:20, Charles Baylis wrote:
> On 2 December 2015 at 14:15, Yvan Roux wrote:
>
> >> 1. What is the difference between the experimental branch (gcc 6.0.0
> >> stage 3) & a gcc release branch (gcc 5.2.1)?
> >
> > trunk (actual gcc 6.0.0) is where everything goes: new features, new
> > optimizations, new targets, bugfixes, etc ... On release branches it
> > is mainly bugfixes, even if maintainers can make some exceptions sometimes.
> >
> >> Is there any rule which decides which changes will go where?
> >> In case I have some patches for a new aarch64 processor at present, into
> >> which branch would these changes be merged (assuming they pass review)?
> >
> > in trunk, but only in stage 1; if you submit it today, it will be in GCC 7.0.0
>
> IIRC, in some circumstances such patches can be acceptable during
> Stage 3 if they are sufficiently self-contained and the port
> maintainer accepts them.

--
with regards,
Virendra Kumar Pathak
Linaro-binutils release git tags
Hi All,

As per the release notes, linaro_binutils-2_25-2015_01-2_release should be present at http://git.linaro.org/toolchain/binutils-gdb.git. But I am not able to see any "linaro_binutils" tags at https://git.linaro.org/toolchain/binutils-gdb.git/tags.

Am I making a mistake, or has the tag been renamed on git to something else (the same as the FSF binutils tags)? Apologies for asking such a trivial question.

Thanks for your time.

--
with regards,
Virendra Kumar Pathak
Linaro LLVM engagement
Hi All,

I am interested in understanding Linaro's LLVM activity. I have already gone through https://wiki.linaro.org/WorkingGroups/ToolChain/LLVM. Could you please guide me on the questions below?

1. On which LLVM & clang versions is Linaro actively working now?
2. Where can I find the latest "linaro-llvm" source code & binaries? I could not find any official git repo for "linaro-llvm" at https://git.linaro.org/.
3. Could you please explain the Linaro LLVM working model? How similar or different is it compared with the Linaro GCC engagement?
4. Certain links (e.g. Roadmap) at https://wiki.linaro.org/WorkingGroups/ToolChain/LLVM ask for login credentials. Any comment on how to obtain permission?

Thanks in advance for your time.

--
with regards,
Virendra Kumar Pathak
question on aarch64 libm
Hi Linaro Toolchain Group,

I have a few questions on glibc+libm w.r.t. aarch64. If possible, please provide some insight; otherwise kindly redirect me to the appropriate person or forum.

1. It seems from the community patches that ARM/Linaro is optimizing glibc functions such as memcpy/memmove and the string routines for aarch64. However, it looks like some of these patches (e.g. memcpy/memmove) are still not merged in glibc. Any comment on their availability in glibc? e.g. https://www.sourceware.org/ml/libc-alpha/2015-12/msg00341.html
2. On the same note, is there any plan for optimizing/tuning libm functions (e.g. trigonometric) for aarch64? I could not find any matching patches on the review board. Please correct me if I am wrong.
3. It looks like ARM has released an independent version of libm for certain trigonometric functions: https://github.com/ARM-software/optimized-routines. Is there any plan for these optimizations going into glibc's libm? Any comment on their performance improvement over GNU libm?

Thanks in advance for your time.

--
with regards,
Virendra Kumar Pathak
ldr instruction selection in the aarch64 backend
Hi Linaro Toolchain Group,

I have a question on ldr instruction selection in the aarch64 backend. Could someone help me in this regard, please?

I am trying to allow only type A instructions while disabling type B.
Type A example: ldr x4, [x20,x1]        ---> allow
Type B example: ldr x1, [x9,x3,lsl #3]  ---> disable

Experiment/My Understanding -
aarch64_classify_address() returns true if rtx X is a valid address. If allow_reg_index_p=true then it calls aarch64_classify_index(). aarch64_classify_index() identifies the addressing mode of the second operand (op1) and accordingly calculates the shift. If shift=0 then type A is generated; otherwise type B is generated.

Thus, if (shift != 0) I am returning 'false' from aarch64_classify_index().

-patch-
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -3586,6 +3586,9 @@ aarch64_classify_index (struct aarch64_address_info *info, rtx x,
   if (GET_CODE (index) == SUBREG)
     index = SUBREG_REG (index);

+  if (shift != 0)
+    return false;
+
   if ((shift == 0
        || (shift > 0 && shift <= 3
            && (1 << shift) == GET_MODE_SIZE (mode)))
---

Result -
Before change:
  ldr x0, [x13,x0,lsl #3]
After change:
  lsl x1, x1, #3
  ldr x0, [x15,x1]

Question -
How does returning 'false' from aarch64_classify_index() result in the selection of type A versus type B? I could not find the function which takes the decision based on the return value of aarch64_classify_address(). Could someone please explain this process or point me to the relevant files or code? Please correct me if my understanding is wrong.

Thanks in advance for your time and patience.

--
with regards,
Virendra Kumar Pathak
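The effect of the patch above can be sketched as a small standalone model. This is a toy rewrite for illustration, not GCC's real code: `classify_index_allows`, the plain-int `shift`, and `mode_size` (standing in for GET_MODE_SIZE (mode)) are all hypothetical names assumed here.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the shift check in aarch64_classify_index() with the
   patch applied.  `shift` models the scale encoded in the index rtx,
   `mode_size` models GET_MODE_SIZE (mode) in bytes.  With the patch,
   any scaled index (shift != 0) is rejected up front, so only the
   "ldr xN, [base, index]" (type A) form survives.  */
static bool
classify_index_allows (int shift, int mode_size)
{
  if (shift != 0)   /* the patch: reject every scaled index */
    return false;

  /* Original upstream condition, now only reachable for shift == 0:
     either no shift, or a shift of 1..3 matching the access size.  */
  return shift == 0
         || (shift > 0 && shift <= 3 && (1 << shift) == mode_size);
}
```

With this toy check, a reg+reg address (shift 0) is accepted while a scaled address such as `[x9,x3,lsl #3]` (shift 3 for an 8-byte access) is rejected, matching the before/after output shown above.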
Re: ldr instruction selection in the aarch64 backend
Hi Andrew,

Thanks for explaining the process.

On 5 February 2016 at 21:49, Pinski, Andrew <andrew.pin...@caviumnetworks.com> wrote:
> aarch64_legitimate_address_hook_p is the place where the result of
> aarch64_classify_address is returned to the middle-end. The middle-end
> then knows that a+b is a legitimate address form, so it forces x3 << 3
> into a register and tries aarch64_legitimate_address_hook_p again.
>
> Thanks,
> Andrew Pinski

--
with regards,
Virendra Kumar Pathak
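The retry behaviour described in this thread can be modelled in a few lines of plain C. Everything here is illustrative: `toy_addr`, `toy_legitimate_address_p`, and `toy_expand_load` are made-up names sketching the flow, not GCC's actual API.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy model of the flow: the middle-end first offers the address
   "base + (index << shift)"; if the target's legitimate-address hook
   rejects it, the scaled index is forced into a temporary register
   and the simpler "base + reg" form is offered again.  */
struct toy_addr
{
  const char *base;
  const char *index;
  int shift;
};

/* Models the patched backend: only unscaled reg+reg is legitimate.  */
static bool
toy_legitimate_address_p (const struct toy_addr *a)
{
  return a->shift == 0;
}

/* Emit assembly for a load from *a into `out`, legitimizing first.  */
static void
toy_expand_load (struct toy_addr *a, char *out, size_t n)
{
  if (!toy_legitimate_address_p (a))
    {
      /* Force "index << shift" into a temp register, then retry
         with the now-legitimate reg+reg address.  */
      snprintf (out, n, "lsl tmp, %s, #%d\nldr x0, [%s,tmp]",
                a->index, a->shift, a->base);
      a->index = "tmp";
      a->shift = 0;
    }
  else
    snprintf (out, n, "ldr x0, [%s,%s]", a->base, a->index);
}
```

Running the toy on a scaled address reproduces the two-instruction sequence seen after the patch (a separate `lsl` followed by a reg+reg `ldr`), while an unscaled address expands to a single `ldr`.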
setting the loop buffer size in gcc (aarch64)
Hi Toolchain Group,

I am trying to study the effect of loop buffer size on loop unrolling and the way gcc (aarch64) handles this. To my understanding, a loop buffer is like an i-cache that holds pre-decoded instructions which can be re-used if a branch instruction loops back to an instruction still present in the buffer. For example, in Intel's Nehalem the loop buffer size is 28 u-ops. In the LLVM compiler, it seems LoopMicroOpBufferSize serves the same purpose. However, I could not find any parameter/variable inside config/aarch64 representing the loop buffer size. I am using Linaro gcc 5.2.1.

[Question]
1. Is there any example inside aarch64 (or in general) which uses the loop buffer size in the loop unrolling decision? If yes, could you please mention the relevant files or code sections?
2. Otherwise, any guidance/input on adding this support in the aarch64 backend, assuming the architecture has loop buffer support?

[My Experiments/Code Browsing]
I have collected the following information from code browsing. Please correct me if I missed or misunderstood something.

TARGET_LOOP_UNROLL_ADJUST - This target hook returns the number of times a loop can be unrolled. It can be used to handle architecture constraints such as the number of memory references inside a loop, e.g. ix86_loop_unroll_adjust() & s390_loop_unroll_adjust(). On the same note, can this be used to handle the loop buffer size too?

Without the above hook, in loop-unroll.c, parameters like PARAM_MAX_UNROLLED_INSNS (default 200) and PARAM_MAX_AVERAGE_UNROLLED_INSNS (default 80) decide the unrolling factor, e.g. nunroll = PARAM_VALUE (PARAM_MAX_UNROLLED_INSNS) / loop->ninsns;

In config/aarch64.c, I found the align_loops variable in the aarch64_override_options_after_change() function. I guess this is an alignment applied before the start of the loop header in the executable, so it should not play any role in loop unrolling. Right?

So, any guidance on how we can instruct the aarch64 backend to utilize the loop buffer size in deciding the loop unrolling factor?
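A TARGET_LOOP_UNROLL_ADJUST-style hook that caps unrolling by loop buffer size could be sketched as below. This is only a toy mirroring the shape of s390_loop_unroll_adjust(): `TOY_LOOP_BUFFER_INSNS` is an assumed figure (borrowing Nehalem's 28), not a real aarch64 parameter, and `toy_loop_unroll_adjust` is a hypothetical name.

```c
#include <assert.h>

/* Assumed loop-buffer capacity in instructions; 28 is borrowed from
   Nehalem's 28-uop loop buffer purely for illustration.  */
enum { TOY_LOOP_BUFFER_INSNS = 28 };

/* Toy analogue of a TARGET_LOOP_UNROLL_ADJUST hook: take the unroll
   factor the middle-end proposes (`nunroll`) and shrink it so that
   ninsns * factor still fits the loop buffer.  */
static unsigned
toy_loop_unroll_adjust (unsigned nunroll, unsigned ninsns)
{
  if (ninsns == 0)
    return nunroll;

  unsigned max_fit = TOY_LOOP_BUFFER_INSNS / ninsns;
  if (max_fit == 0)
    max_fit = 1;   /* body already exceeds the buffer: do not unroll */

  return nunroll < max_fit ? nunroll : max_fit;
}
```

For a 5-instruction loop body this would cap the factor at 5 (5 copies * 5 insns = 25 <= 28), regardless of what PARAM_MAX_UNROLLED_INSNS alone would allow.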
Thanks in advance for your time.

--
with regards,
Virendra Kumar Pathak