New testsuite warnings: mkdir: plugin: File exists

2013-08-19 Thread Gerald Pfeifer
Recently (three days perhaps) I started to notice the following

  mkdir: plugin: mkdir: plugin: File exists
  File exists
  mkdir: mkdir: testsuitetestsuite: : File exists
  File exists

in the log excerpts of my nightly builders that I get by mail.

(For successful builds the build environment is then automatically
removed, but if that helps I could arrange to keep some.)

Any ideas what/who caused that?

Gerald


Re: New testsuite warnings: mkdir: plugin: File exists

2013-08-19 Thread Andreas Schwab
Gerald Pfeifer  writes:

> Recently (three days perhaps) I started to notice the following
>
>   mkdir: plugin: mkdir: plugin: File exists
>   File exists
>   mkdir: mkdir: testsuitetestsuite: : File exists
>   File exists
>
> in the log excerpts of my nightly builders that I get by mail.

That's nothing to worry about; it's just three instances of mkdir being
executed in parallel with the same directory.  One succeeded, the other
two failed.

Andreas.

-- 
Andreas Schwab, SUSE Labs, sch...@suse.de
GPG Key fingerprint = 0196 BAD8 1CE9 1970 F4BE  1748 E4D4 88E3 0EEA B9D7
"And now for something completely different."


Fwd: failure notice

2013-08-19 Thread Uday Khedker


Hi Ilya,

Let me respond to your first question. I am not well versed in the 
requirements of the second question.


Yes, your conclusions are correct. You can find some more details in
slides 39/62 to 61/62 in 
http://www.cse.iitb.ac.in/grc/gcc-workshop-13/downloads/slides/Day2/gccw13-code-view.pdf.


Thanks and regards,


Dr. Uday Khedker
Professor
Department of Computer Science & Engg.
IIT Bombay, Powai, Mumbai 400 076, India.


Email   :   u...@cse.iitb.ac.in
Homepage:   http://www.cse.iitb.ac.in/~uday
Phone   :   
Office -91 (22) 2572 2545 x 7717, 91 (22) 2576 7717 (Direct)
Res.   -91 (22) 2572 2545 x 8717, 91 (22) 2576 8717 (Direct)



On Thursday 15 August 2013 07:06 PM, Ilya Verbin wrote:

Hi All,

I'm trying to figure out how LTO infrastructure works on a high level.
I want to make sure that I understand this correctly.  Could you please
help me with that?

1.  Execution flow.  As far as I understand, there are two modes of
operation - with and without the LTO plugin.  Below are the execution
flows for each mode.

Without LTO plugin:

gcc -flto  # Call GCC driver
  |_ cc1# Compile first source file into asm + intermediate language
  |_ as # Assemble these asm + IL into temporary object file
  |_ ...# Compile and assemble all remaining source files
  |_ collect2   # Call linker driver
  |_ lto-wrapper# Call lto-wrapper directly from collect2
  |   |_ gcc# Driver
  |   |   |_ lto1   # Perform WPA and split into partitions
  |   |_ gcc# Driver
  |   |   |_ lto1   # Perform LTRANS for the first partition
  |   |   |_ as # Assemble this partition into final object file
  |   |_ ...# Perform LTRANS for each partition
  |_ collect-ld # Simple wrapper over ld
  |_ ld # Perform linking

Using LTO plugin:

gcc -flto  # Call GCC driver
  |_ cc1# Compile first source file into asm + intermediate language
  |_ as # Assemble these asm + IL into temporary object file
  |_ ...# Compile and assemble all remaining source files
  |_ collect2   # Call linker driver
  |_ collect-ld   # Simple wrapper over ld
  |_ ld with liblto_plugin.so   # Perform LTO and linking
  |_ lto-wrapper# Is called from liblto_plugin.so
  |_ gcc# Driver
  |   |_ lto1   # Perform WPA and split into partitions
  |_ gcc# Driver
  |   |_ lto1   # Perform LTRANS for the first partition
  |   |_ as # Assemble this partition into final object file
  |_ ...# Perform LTRANS for each partition

Are they correct?

2.  The second question concerns the #pragma omp target implementation.
I'm going to reuse the LTO approach in a prototype that will produce two
binaries - one for the host and one for the target architecture.  The
target binary will contain the functions outlined from omp target
regions and some infrastructure to run them.
To produce the two binaries we need to run gcc and ld twice.  In the
first run gcc will generate an object file that contains optimized code
for the host and GIMPLE for the target.  In the second run gcc will read
the GIMPLE and generate optimized code for the target.

So, the question is: what is the right place for the second run of gcc
and ld?  Should I insert them into liblto_plugin.so?  Or should I create
an entirely new plugin that will only call gcc and ld for the target,
without performing any LTO optimizations for the host?
Suggestions?


Thanks,
Ilya Verbin,
Software Engineer
Intel Corporation




Re: Toolchain Build Robot

2013-08-19 Thread Jan-Benedict Glaw
On Sun, 2013-08-18 12:52:14 +0200, Gerald Pfeifer  wrote:
> On Sun, 11 Aug 2013, Jan-Benedict Glaw wrote:
> > I just added a different view. All results can also be viewed as a
> > timeline, both per-buildhost and per-target:
> > 
> > http://toolchain.lug-owl.de/buildbot/timeline.php
> 
> Nice!  Would you like to add a reference to http://gcc.gnu.org/testing/ ?

Ok?

Index: htdocs/testing/index.html
===
RCS file: /cvs/gcc/wwwdocs/htdocs/testing/index.html,v
retrieving revision 1.29
diff -u -p -r1.29 index.html
--- htdocs/testing/index.html   18 Aug 2013 10:54:32 -  1.29
+++ htdocs/testing/index.html   19 Aug 2013 11:43:47 -
@@ -34,6 +34,10 @@ the test suite directories.
   send their test results to the
   http://gcc.gnu.org/ml/gcc-testresults/";>gcc-testresults
   mailing list.
+
+  Jan-Benedict Glaw is running a
+  http://toolchain.lug-owl.de/buildbot/";>build robot that
+  tries to build various cross-targets (stage1 only) on some machines.
 
 
 Ideas for further testing


MfG, JBG

-- 
  Jan-Benedict Glaw  jbg...@lug-owl.de  +49-172-7608481
Signature of: 17:44 <@uschebit> An evangelist is just a salesperson
the second  :   for unsellable products, right? (#korsett, 20120821)


signature.asc
Description: Digital signature


Re: Toolchain Build Robot

2013-08-19 Thread Jan-Benedict Glaw
On Mon, 2013-08-19 13:45:24 +0200, Jan-Benedict Glaw  wrote:
> On Sun, 2013-08-18 12:52:14 +0200, Gerald Pfeifer  wrote:
> > On Sun, 11 Aug 2013, Jan-Benedict Glaw wrote:
> > > I just added a different view. All results can also be viewed as a
> > > timeline, both per-buildhost and per-target:
> > > 
> > >   http://toolchain.lug-owl.de/buildbot/timeline.php
> > 
> > Nice!  Would you like to add a reference to http://gcc.gnu.org/testing/ ?

...and this on top of that:

Ok?


Index: htdocs/testing/index.html
===
RCS file: /cvs/gcc/wwwdocs/htdocs/testing/index.html,v
retrieving revision 1.29
diff -u -p -r1.29 index.html
--- htdocs/testing/index.html   18 Aug 2013 10:54:32 -  1.29
+++ htdocs/testing/index.html   19 Aug 2013 11:51:09 -
@@ -110,6 +113,13 @@ the test suite directories.
   Build and test packages that are normally available on your
   platform and for which you have access to source.
   Run benchmarks regularly and report performance regressions.
+  Extend the
+  http://toolchain.lug-owl.de/buildbot/";>build robot to also
+  do local builds, run the testsuite, visualize test result differences
+  and probably use something like
+  http://buildbot.net/";>BuildBot. Some of the
+  http://gcc.gnu.org/wiki/CompileFarm";>Compile Farm machines
+  could also be used.
 
 
 


MfG, JBG

-- 
  Jan-Benedict Glaw  jbg...@lug-owl.de  +49-172-7608481
Signature of:   http://www.eyrie.org/~eagle/faqs/questions.html
the second  :


signature.asc
Description: Digital signature


[buildrobot] gcc/config/linux-android.c:40:7: error: ‘OPTION_BIONIC’ was not declared in this scope

2013-08-19 Thread Jan-Benedict Glaw
Hi!

My build robot[1] caught this error[2] while cross-building for
powerpc64le-linux:

g++ -c   -g -O2 -DIN_GCC -DCROSS_DIRECTORY_STRUCTURE  -fno-exceptions -fno-rtti 
-fasynchronous-unwind-tables -W -Wall -Wno-narrowing -Wwrite-strings 
-Wcast-qual -Wmissing-format-attribute -pedantic -Wno-long-long 
-Wno-variadic-macros -Wno-overlength-strings -fno-common  -DHAVE_CONFIG_H -I. 
-I. -I../../../../gcc/gcc -I../../../../gcc/gcc/. 
-I../../../../gcc/gcc/../include -I../../../../gcc/gcc/../libcpp/include  
-I../../../../gcc/gcc/../libdecnumber -I../../../../gcc/gcc/../libdecnumber/dpd 
-I../libdecnumber -I../../../../gcc/gcc/../libbacktrace -I. -I. 
-I../../../../gcc/gcc -I../../../../gcc/gcc/. -I../../../../gcc/gcc/../include 
-I../../../../gcc/gcc/../libcpp/include  -I../../../../gcc/gcc/../libdecnumber 
-I../../../../gcc/gcc/../libdecnumber/dpd -I../libdecnumber 
-I../../../../gcc/gcc/../libbacktrace   \
../../../../gcc/gcc/config/linux-android.c
../../../../gcc/gcc/config/linux-android.c: In function ‘bool 
linux_android_libc_has_function(function_class)’:
../../../../gcc/gcc/config/linux-android.c:40:7: error: ‘OPTION_BIONIC’ was not 
declared in this scope
   if (OPTION_BIONIC)
   ^
make[2]: *** [linux-android.o] Error 1

Probably introduced with r201838
(http://gcc.gnu.org/viewcvs/gcc?view=revision&revision=201838). I
guess it's just a forgotten chunk.

MfG, JBG
[1] http://toolchain.lug-owl.de/buildbot/
[2] http://toolchain.lug-owl.de/buildbot/#job8903

-- 
  Jan-Benedict Glaw  jbg...@lug-owl.de  +49-172-7608481
 Signature of:  http://perl.plover.com/Questions.html
 the second  :


signature.asc
Description: Digital signature


Re: [x86-64 psABI] RFC: Extend x86-64 PLT entry to support MPX

2013-08-19 Thread H.J. Lu
On Wed, Aug 14, 2013 at 8:49 AM, Jakub Jelinek  wrote:
> On Tue, Jul 23, 2013 at 12:49:06PM -0700, H.J. Lu wrote:
>> There are 2 psABI considerations:
>>
>>  1. Should PLT entries in all binaries, with and without MPX, be changed
>> to 32-byte or just the necessary ones?
>
> Ugh, please don't.
>
>>  2. Only branch to PLT entry with BND prefix needs 32-byte PLT entry. If
>> we use 32-byte PLT entry only when needed, it can be decided by:
>> a. A new MPX PLT relocation:
>>i. No new run-time relocation since MPX PLT relocation is
>>   resolved to branch to PLT entry at link-time.
>>ii. Pro: No new section.
>>iii. Con:
>> Need a new relocation.
>> Can't mark executable nor shared library.
>
> I think I prefer new relocation, @mpxplt or similar.  The linker could then
> use the 32-byte PLT slot for both @plt and @mpxplt relocs if there is at
> least one @mpxplt reloc for the symbol, otherwise it would use 16-byte PLT
> slot.  And you can certainly mark executables or PIEs or shared libraries
> this way, the linker could do that if it creates any 32-byte PLT slot.

We don't have to add @mpxplt since we have the "bnd" prefix.  We also
need to handle "bnd call foo" in executables.  We can add new BND
versions of the R_X86_64_PC32 and R_X86_64_PLT32 relocations, instead
of using the GNU attribute section.  Which approach is preferred?

-- 
H.J.


Re: Propose moving vectorization from -O3 to -O2.

2013-08-19 Thread Xinliang David Li
+cc auto-vectorizer maintainers.

David

On Mon, Aug 19, 2013 at 10:37 AM, Cong Hou  wrote:
> Nowadays, SIMD instructions play more and more important roles in our
> daily computations. AVX and AVX2 have extended 128-bit registers to
> 256-bit ones, and the newly announced AVX-512 further doubles the
> size. The benefit we can get from vectorization will be larger and
> larger. This is also a common practice in other compilers:
>
> 1) Intel's ICC turns on vectorizer at O2 by default and it has been
> the case for many years;
>
> 2) Most recently, LLVM turns it on for both O2 and Os.
>
>
> Here we propose moving vectorization from -O3 to -O2 in GCC. Three
> main concerns about this change are: 1. Does vectorization greatly
> increase the generated code size? 2. How much performance can be
> improved? 3. Does vectorization increase  compile time significantly?
>
>
> I have fixed GCC bootstrap failure with vectorizer turned on
> (http://gcc.gnu.org/ml/gcc-patches/2013-07/msg00497.html). To evaluate
> the size and performance impact, experiments on SPEC06 and internal
> benchmarks are done. Based on the data, I have tuned the parameters
> for vectorizer which reduces the code bloat without sacrificing the
> performance gain. There are some performance regressions in SPEC06,
> and the root cause has been analyzed and understood. I will file bugs
> tracking them independently. The experiments failed on three
> benchmarks (please refer to
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56993). The experiment
> result is attached here as two pdf files. Below are our summaries of
> the result:
>
>
> 1) We noticed that vectorization could increase the generated code
> size, so we tried to suppress this problem by doing some tunings,
> which include setting a higher loop bound so that loops with small
> iterations won't be vectorized, and disabling loop versioning. The
> average size increase is decreased from 9.84% to 7.08% after our
> tunings (13.93% to 10.75% for Fortran benchmarks, and 3.55% to 1.44%
> for C/C++ benchmarks). The code size increase for Fortran benchmarks
> can be significant (from 18.72% to 34.15%), but the performance gain
> is also huge. Hence we think this size increase is reasonable. For
> C/C++ benchmarks, the size increase is very small (below 3% except
> 447.dealII).
>
>
> 2) Vectorization improves the performance for most benchmarks by
> around 2.5%-3% on average, and much more for Fortran benchmarks. On
> Sandybridge machines, the improvement can be more if using
> -march=corei7 (3.27% on average) and -march=corei7-avx (4.81% on
> average) (Please see the attachment for details). We also noticed that
> some performance degradations exist, and after investigation we found
> that some are caused by defects in GCC's vectorization (e.g. GCC's SLP
> cannot vectorize a group of accesses if the group size is not divisible
> by VF, http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49955,
> and any data dependence between statements can prevent vectorization),
> which can be resolved in the future.
>
>
> 3) Lastly, we found that introducing vectorization has almost no
> effect on build time.  GCC bootstrap time increase is negligible.
>
>
> As a reference, Richard Biener is also proposing to move vectorization
> to O2 by improving the cost model
> (http://gcc.gnu.org/ml/gcc-patches/2013-05/msg00904.html).
>
>
> Vectorization has great performance potential -- the more people use
> it, the more likely it is to be further improved -- turning it on at O2
> is the way to go ...
>
>
> Thank you!
>
>
> Cong Hou


Re: Propose moving vectorization from -O3 to -O2.

2013-08-19 Thread Richard Biener
Xinliang David Li  wrote:
>+cc auto-vectorizer maintainers.
>
>David
>
>On Mon, Aug 19, 2013 at 10:37 AM, Cong Hou  wrote:
>> Nowadays, SIMD instructions play more and more important roles in our
>> daily computations. AVX and AVX2 have extended 128-bit registers to
>> 256-bit ones, and the newly announced AVX-512 further doubles the
>> size. The benefit we can get from vectorization will be larger and
>> larger. This is also a common practice in other compilers:
>>
>> 1) Intel's ICC turns on vectorizer at O2 by default and it has been
>> the case for many years;
>>
>> 2) Most recently, LLVM turns it on for both O2 and Os.
>>
>>
>> Here we propose moving vectorization from -O3 to -O2 in GCC. Three
>> main concerns about this change are: 1. Does vectorization greatly
>> increase the generated code size? 2. How much performance can be
>> improved? 3. Does vectorization increase  compile time significantly?
>>
>>
>> I have fixed GCC bootstrap failure with vectorizer turned on
>> (http://gcc.gnu.org/ml/gcc-patches/2013-07/msg00497.html). To
>evaluate
>> the size and performance impact, experiments on SPEC06 and internal
>> benchmarks are done. Based on the data, I have tuned the parameters
>> for vectorizer which reduces the code bloat without sacrificing the
>> performance gain. There are some performance regressions in SPEC06,
>> and the root cause has been analyzed and understood. I will file bugs
>> tracking them independently. The experiments failed on three
>> benchmarks (please refer to
>> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56993). The experiment
>> result is attached here as two pdf files. Below are our summaries of
>> the result:
>>
>>
>> 1) We noticed that vectorization could increase the generated code
>> size, so we tried to suppress this problem by doing some tunings,
>> which include setting a higher loop bound so that loops with small
>> iterations won't be vectorized, and disabling loop versioning. The
>> average size increase is decreased from 9.84% to 7.08% after our
>> tunings (13.93% to 10.75% for Fortran benchmarks, and 3.55% to 1.44%
>> for C/C++ benchmarks). The code size increase for Fortran benchmarks
>> can be significant (from 18.72% to 34.15%), but the performance gain
>> is also huge. Hence we think this size increase is reasonable. For
>> C/C++ benchmarks, the size increase is very small (below 3% except
>> 447.dealII).
>>
>>
>> 2) Vectorization improves the performance for most benchmarks by
>> around 2.5%-3% on average, and much more for Fortran benchmarks. On
>> Sandybridge machines, the improvement can be more if using
>> -march=corei7 (3.27% on average) and -march=corei7-avx (4.81% on
>> average) (Please see the attachment for details). We also noticed
>that
>> some performance degrades exist, and after investigation, we found
>> some are caused by the defects of GCC's vectorization (e.g. GCC's SLP
>> could not vectorize a group of accesses if the number of group cannot
>> be divided by VF http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49955,
>> and any data dependence between statements can prevent
>vectorization),
>> which can be resolved in the future.
>>
>>
>> 3) As last, we found that introducing vectorization almost does not
>> affect the build time. GCC bootstrap time increase is negligible.
>>
>>
>> As a reference, Richard Biener is also proposing to move
>vectorization
>> to O2 by improving the cost model
>> (http://gcc.gnu.org/ml/gcc-patches/2013-05/msg00904.html).

And my conclusion is that we are not ready for this.  The benefit does
not outweigh the compile-time cost.

Richard.

>>
>> Vectorization has great performance potential -- the more people use
>> it, the likely it will be further improved -- turning it on at O2 is
>> the way to go ...
>>
>>
>> Thank you!
>>
>>
>> Cong Hou




Re: Propose moving vectorization from -O3 to -O2.

2013-08-19 Thread Xinliang David Li
On Mon, Aug 19, 2013 at 11:53 AM, Richard Biener
 wrote:
> Xinliang David Li  wrote:
>>+cc auto-vectorizer maintainers.
>>
>>David
>>
>>On Mon, Aug 19, 2013 at 10:37 AM, Cong Hou  wrote:
>>> Nowadays, SIMD instructions play more and more important roles in our
>>> daily computations. AVX and AVX2 have extended 128-bit registers to
>>> 256-bit ones, and the newly announced AVX-512 further doubles the
>>> size. The benefit we can get from vectorization will be larger and
>>> larger. This is also a common practice in other compilers:
>>>
>>> 1) Intel's ICC turns on vectorizer at O2 by default and it has been
>>> the case for many years;
>>>
>>> 2) Most recently, LLVM turns it on for both O2 and Os.
>>>
>>>
>>> Here we propose moving vectorization from -O3 to -O2 in GCC. Three
>>> main concerns about this change are: 1. Does vectorization greatly
>>> increase the generated code size? 2. How much performance can be
>>> improved? 3. Does vectorization increase  compile time significantly?
>>>
>>>
>>> I have fixed GCC bootstrap failure with vectorizer turned on
>>> (http://gcc.gnu.org/ml/gcc-patches/2013-07/msg00497.html). To
>>evaluate
>>> the size and performance impact, experiments on SPEC06 and internal
>>> benchmarks are done. Based on the data, I have tuned the parameters
>>> for vectorizer which reduces the code bloat without sacrificing the
>>> performance gain. There are some performance regressions in SPEC06,
>>> and the root cause has been analyzed and understood. I will file bugs
>>> tracking them independently. The experiments failed on three
>>> benchmarks (please refer to
>>> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=56993). The experiment
>>> result is attached here as two pdf files. Below are our summaries of
>>> the result:
>>>
>>>
>>> 1) We noticed that vectorization could increase the generated code
>>> size, so we tried to suppress this problem by doing some tunings,
>>> which include setting a higher loop bound so that loops with small
>>> iterations won't be vectorized, and disabling loop versioning. The
>>> average size increase is decreased from 9.84% to 7.08% after our
>>> tunings (13.93% to 10.75% for Fortran benchmarks, and 3.55% to 1.44%
>>> for C/C++ benchmarks). The code size increase for Fortran benchmarks
>>> can be significant (from 18.72% to 34.15%), but the performance gain
>>> is also huge. Hence we think this size increase is reasonable. For
>>> C/C++ benchmarks, the size increase is very small (below 3% except
>>> 447.dealII).
>>>
>>>
>>> 2) Vectorization improves the performance for most benchmarks by
>>> around 2.5%-3% on average, and much more for Fortran benchmarks. On
>>> Sandybridge machines, the improvement can be more if using
>>> -march=corei7 (3.27% on average) and -march=corei7-avx (4.81% on
>>> average) (Please see the attachment for details). We also noticed
>>that
>>> some performance degrades exist, and after investigation, we found
>>> some are caused by the defects of GCC's vectorization (e.g. GCC's SLP
>>> could not vectorize a group of accesses if the number of group cannot
>>> be divided by VF http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49955,
>>> and any data dependence between statements can prevent
>>vectorization),
>>> which can be resolved in the future.
>>>
>>>
>>> 3) As last, we found that introducing vectorization almost does not
>>> affect the build time. GCC bootstrap time increase is negligible.
>>>
>>>
>>> As a reference, Richard Biener is also proposing to move
>>vectorization
>>> to O2 by improving the cost model
>>> (http://gcc.gnu.org/ml/gcc-patches/2013-05/msg00904.html).
>
> And my conclusion is that we are not ready for this.  The benefit does
> not outweigh the compile-time cost.

Can you elaborate on your reasoning ?

thanks,

David


>
> Richard.
>
>>>
>>> Vectorization has great performance potential -- the more people use
>>> it, the likely it will be further improved -- turning it on at O2 is
>>> the way to go ...
>>>
>>>
>>> Thank you!
>>>
>>>
>>> Cong Hou
>
>


Build problem msg: xgcc: error trying to exec 'cc1': execvp: No such file or directory

2013-08-19 Thread George R Goffe


Hi,

I keep getting this error message while building gcc checked out of the 
repository. Could I get a little help with resolving this problem 
please?

Regards,

George...

xgcc: error trying to exec 'cc1': execvp: No such file or directory


Re: Build problem msg: xgcc: error trying to exec 'cc1': execvp: No such file or directory

2013-08-19 Thread Florian Weimer

On 08/20/2013 05:22 AM, George R Goffe wrote:

I keep getting this error message while building gcc checked out of the
repository. Could I get a little help with resolving this problem
please?


You need to provide more details, like the host and target system and a 
dozen or so lines leading up to the error message.


This is probably a question that is more suitable for the gcc-help 
mailing list.


--
Florian Weimer / Red Hat Product Security Team