[GSoC] How to get started with the isl code generation

2014-05-06 Thread Tobias Grosser

Hi Roman,

as you are already looking very actively through the graphite code, I was
wondering whether you already have an idea of how to start with the
implementation? It is obviously a good idea to first browse the code
and get a general picture of the existing implementation, as it helps to
understand what is needed and to avoid writing unneeded code.
Judging from the questions you ask, you seem to understand precisely what
is going on (you spot many bugs just by inspection), so this is good.


On the other hand, I think it is a good idea to simultaneously keep
track of the design you have in mind and the first steps you are
planning to take. Even though the full design may still need some time,
some basic decisions can probably already be taken and maybe even
implemented. Staying close to the coding you will do may also help to
direct your code inspections to the areas of the code that will be
important for your implementation.

E.g. just setting up a second code generation path in parallel that does
minimal work (generates no code at all), but that can be enabled by a
command line flag, might be a first step.
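
For illustration, such a flag could look roughly like the following common.opt
entry (a sketch only; the option name and variable are made up and not part of
any existing patch):

  fgraphite-isl-codegen
  Common Report Var(flag_graphite_isl_codegen) Init(0)
  Enable the experimental isl-based code generation in graphite.

The new code generation path would then simply test flag_graphite_isl_codegen
and otherwise fall back to the existing CLooG-based path.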


Cheers,
Tobias

P.S.: If your wiki account is still not writeable, could you ask on the 
mailing list for a fix?


Re: Improving Asan code on ARM targets

2014-05-06 Thread Yury Gribov

Andrew Pinski wrote:
> Yury Gribov wrote:
>> Andrew Pinski wrote:
>>> Yury Gribov wrote:
>>>> I've recently noticed that GCC generates suboptimal code
>>>> for Asan on ARM targets. E.g. for a 4-byte memory access check
>>>
>>> Does the patch series located at:
>>> http://gcc.gnu.org/ml/gcc-patches/2014-02/msg01407.html
>>> http://gcc.gnu.org/ml/gcc-patches/2014-02/msg01405.html
>>> fix this code generation issue?  I suspect it does, and improves more
>>> than just the above code.
>>
>> No, they don't help as is.
>
> I think it would be good to figure out how to improve this code gen
> with the above patches rather than changing asan.
> I suspect it might be easy to expand them to handle this case too.

I was indeed able to reuse Zhenqiang's work. After updating the
select_ccmp_cmp_order hook to also return suggestions on how to change
comparisons to allow better code generation (so it now behaves more like a
select_ccmp_cmp_layout), I was able to use this information in
expand_ccmp_expr to generate optimal code.


The patch is still a draft (only supports Asan's case) and I think I'll 
wait until Zhenqiang's conditional compare patches get into trunk before 
going deeper (not sure when this is going to happen though...).


-Y


Re: [GSoC] questions about graphite_clast_to_gimple.c

2014-05-06 Thread Richard Biener
On Tue, May 6, 2014 at 8:57 AM, Tobias Grosser  wrote:
> On 05/05/2014 21:11, Roman Gareev wrote:
>>
>> Hi Tobias,
>>
>> thank you for your reply! I have questions about types. Could you
>> please answer them?
>
>
> I looked through them and most seem to be related to how we derive types in
> graphite. As I said before, this is a _very_ fragile hack
> that works surprisingly well, but which is both too complex and
> in the end still incorrect. Sebastian wrote this code, so I am not familiar
> with the details. I also don't think it is necessary to
> understand the details. Instead of using any code, we should start
> implementing the new code using 64 bit signed integers. This
> should be correct in 99.9% of the cases.

Of course compilers have to work correctly in 100% of the cases, so
if you choose an approach that will be incorrect in > 0% of the cases
then you should make sure to detect those and not apply any transform.

> One of the selling points for the new isl code generation was however,
> that it will be possible to get precise information about the types
> needed for code generation. There existed already a patch for an older
> isl version and there is a partial patch for newer versions that Sven and I
> have been working on. It is not yet stable enough to be tested, but I
> attached it anyway for illustration. The idea is to
> introduce a set of functions
>
> +   int isl_ast_expr_has_known_size(
> +   __isl_keep isl_ast_expr *expr);
> +   int isl_ast_expr_is_bounded(
> +   __isl_keep isl_ast_expr *expr);
> +   int isl_ast_expr_is_signed(
> +   __isl_keep isl_ast_expr *expr);
> +   size_t isl_ast_expr_size_in_bits(
> +   __isl_keep isl_ast_expr *expr);
>
> in isl, where we can precisely compute the minimal legal type. We can then
> use this during code generation to derive good types.

You should be able to do this for all types you need up-front and check
if there is a suitable GIMPLE type available.  For example by using
lang_hooks.types.type_for_size () which will return NULL_TREE if there
isn't one.
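
To illustrate how the two pieces could fit together, here is a rough sketch
(the isl entry points are the proposed ones quoted above; the glue code is
hypothetical and not part of either patch):

  /* Pick a GIMPLE type for an isl AST expression, or fail.  */
  static tree
  type_for_isl_ast_expr (isl_ast_expr *expr)
  {
    if (!isl_ast_expr_is_bounded (expr))
      return NULL_TREE;

    size_t precision = isl_ast_expr_size_in_bits (expr);
    bool uns = !isl_ast_expr_is_signed (expr);

    /* type_for_size returns NULL_TREE if no suitable type exists.  */
    return lang_hooks.types.type_for_size (precision, uns);
  }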

>> Questions related to “type_for_interval”:
>>
>> 1. What happens in these lines?
>>
>> int precision = MAX (mpz_sizeinbase (bound_one, 2),
>>                      mpz_sizeinbase (bound_two, 2));
>> if (precision > BITS_PER_WORD)
>>   {
>>     gloog_error = true;
>>     return integer_type_node;
>>   }
>
>
>>
>> Do we try to count maximum number of value bits in bound_one and
>> bound_two?
>
>
> I believe.
>
>
>> Why can't it be greater than BITS_PER_WORD?
>
>
> No idea.

Looks artificial to me - up to BITS_PER_WORD is certainly fast on
the CPU, but it will reject 'long long' on 32-bit x86, for example.

>
>> 2. Why do we want to generate signed types as much as possible?
>
>
> Because in the code cloog generates, negative values are common. To be safe
> we generate unsigned code.

signed types allow for more optimistic optimization later on (they have
undefined behavior on overflow) - which can be both good and a problem.

>
>> 3. Why do we always have enough precision in case of precision <
>> wider_precision?
>
>
> I have no idea (and did not bother trying to understand)
>
>
>> Questions related to “clast_to_gcc_expression”:
>>
>> 4. What is the idea behind this code?
>>
>> if (POINTER_TYPE_P (TREE_TYPE (name)) != POINTER_TYPE_P (type))
>>   name = convert_to_ptrofftype (name);
>
>
> Sorry, again no idea.

We have special requirements in GIMPLE for pointer + offset arithmetic.
Thus if either of the operands is a pointer the other operand has to be
of pointer-offset type.
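
In other words, something like this (a minimal sketch with made-up variable
names, not the actual graphite code):

  /* POINTER_PLUS_EXPR requires the offset operand to have
     pointer-offset type (sizetype), not pointer type.  */
  if (POINTER_TYPE_P (TREE_TYPE (ptr)) && !POINTER_TYPE_P (TREE_TYPE (off)))
    off = convert_to_ptrofftype (off);
  sum = fold_build2 (POINTER_PLUS_EXPR, TREE_TYPE (ptr), ptr, off);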

>
>> 5. Why do we check POINTER_TYPE_P(type)? (“type” has tree type and the
>> manual says that a tree is a pointer type)
>
>
> Sorry, again no idea.

The type of tree in the GCC implementation is a pointer type, but we
check whether the intermediate language type referred to by the implementation
object 'type' is a pointer type.

>
>> Questions related to “max_precision_type”:
>>
>> 6. Why is type1, for example, the maximal precision type when
>> POINTER_TYPE_P (type1) is true?
>
>
> I don't know. This really lacks comments. This may very well have a good
> reason, but it is hard to see as most of the other stuff for deriving the
> types is very much a hack.
>
>
>> 7. Why do we have enough precision for p2 in case of p1 > p2 and signed
>> type1?
>
>
> No idea.
>
>
>> 8. Why do we always build signed integer type in the line: “type =
>> build_nonstandard_integer_type (precision, false);”?
>
>
> No idea.
>
>
>> Questions related to “type_for_clast_red”:
>>
>> 9. Why do we use this code in case of clast_red_sum?
>>
>> value_min (m1, bound_one, bound_two);
>> value_min (m2, b1, b2);
>> mpz_add (bound_one, m1, m2);
>
>
> We try to derive the bounds for the sum. No idea regarding the actual
> computation.
>
>
>> Can bound_one be greater than bound_two? (We also consider two cases
>> in “type_for_interval”)
>
>
> I would guess not. This may be a bug.
>
>
>> 10. Why do we assume that new bound

Re: [GSoC] questions about graphite_clast_to_gimple.c

2014-05-06 Thread Tobias Grosser

On 06/05/2014 10:19, Richard Biener wrote:

Hi Richi,

thanks for the comments.


On Tue, May 6, 2014 at 8:57 AM, Tobias Grosser  wrote:

On 05/05/2014 21:11, Roman Gareev wrote:


Hi Tobias,

thank you for your reply! I have questions about types. Could you
please answer them?



I looked through them and most seem to be related to how we derive types in
graphite. As I said before, this is a _very_ fragile hack
that works surprisingly well, but which is both too complex and
in the end still incorrect. Sebastian wrote this code, so I am not familiar
with the details. I also don't think it is necessary to
understand the details. Instead of using any code, we should start
implementing the new code using 64 bit signed integers. This
should be correct in 99.9% of the cases.


Of course compilers have to work correctly in 100% of the cases, so
if you choose an approach that will be incorrect in > 0% of the cases
then you should make sure to detect those and not apply any transform.


I agree we want to get to 100%. It is just the way to get there that
needs to be chosen.


Detecting broken cases does not work. During code generation we generate
new expressions, e.g. i + j + 200 * b. To generate code for them we need to
choose a type for the computation.

cloog has zero knowledge about possible types, that's why graphite tries 
to derive types by estimating the minimal/maximal value of
an expression i + j from the knowledge it has about i and j. This 
estimate is very imprecise especially as the initial knowledge we have 
is incomplete. As Roman pointed out, several of the 'estimates' just 
don't make sense at all.


To get it 100% right we need to derive the minimal/maximal value a 
subexpression i + j can take and to use this to find a type that is 
large enough and also fast on our target platform. The best solution I 
see is to compute this information within the isl code generation, where 
we have all necessary information available.
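
As an illustration of the kind of computation needed (hypothetical names, not
the existing graphite code): the bounds of i + j follow from the bounds of i
and j by interval arithmetic, and the wider resulting bound dictates the
precision of the type we have to pick:

  /* Bounds of (i + j), given bounds for i and j.  */
  mpz_add (sum_min, i_min, j_min);
  mpz_add (sum_max, i_max, j_max);
  /* A type is large enough if its precision is at least
     MAX (mpz_sizeinbase (sum_min, 2), mpz_sizeinbase (sum_max, 2)),
     plus a sign bit if either bound can be negative.  */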


Unfortunately, this patch is not finished yet. There are two ways to 
proceed.


1) finish the patch as the very first step

2) Go for 64bits and plug in the patch later

I would obviously prefer to get 1) done as soon as possible, but in case 
it still needs more time, defaulting to 64/128 bit types allows Roman to 
proceed. In the end, 64 bits is almost always large enough.


I am busy for the next 6 weeks, but am planning to work on the isl patch 
after. Sven, do you happen to have any time to work on the isl patch?



One of the selling points for the new isl code generation was however,
that it will be possible to get precise information about the types
needed for code generation. There existed already a patch for an older
isl version and there is a partial patch for newer versions that Sven and I
have been working on. It is not yet stable enough to be tested, but I
attached it anyway for illustration. The idea is to
introduce a set of functions

+   int isl_ast_expr_has_known_size(
+   __isl_keep isl_ast_expr *expr);
+   int isl_ast_expr_is_bounded(
+   __isl_keep isl_ast_expr *expr);
+   int isl_ast_expr_is_signed(
+   __isl_keep isl_ast_expr *expr);
+   size_t isl_ast_expr_size_in_bits(
+   __isl_keep isl_ast_expr *expr);

in isl, where we can precisely compute the minimal legal type. We can then
use this during code generation to derive good types.


You should be able to do this for all types you need up-front and check
if there is a suitable GIMPLE type available.  For example by using
lang_hooks.types.type_for_size () which will return NULL_TREE if there
isn't one.


How could we do this upfront? For each subexpression, we need to know 
what is the minimal legal type. Only after we know this can we derive a
type for it.



2. Why do we want to generate signed types as much as possible?



Because in the code cloog generates, negative values are common. To be safe
we generate unsigned code.


That should have been _signed_ code.


Again, I do not think spending time to understand the heuristics in
type_for_clast is worth it. Some are rather complex and work well, some
are just buggy but have never been triggered, and a small percentage actually
might be reusable later (pointer handling). As the approach
has generally too little information to work reliably, I would not spend
any time on it. I pointed out the correct approach above. Going with 64bit
types will bring us a very long way, and we can finish the isl patch to get
it 100% right.


If ISL can give you for each expression a type precision and signedness
then I'd stick to that if it is available (or else punt).


Not yet, but hopefully soon.

At the moment, we have zero information about the types (the same holds 
for cloog).


I see only three choices:

1) Finish this feature of the isl code generation first
2) Try to 'estimate' the types from the graphite side as
   we did it before.
3) Assume 64/128 bits and plug in th

Re: [GSoC] questions about graphite_clast_to_gimple.c

2014-05-06 Thread Sven Verdoolaege
On Tue, May 06, 2014 at 01:02:09PM +0200, Tobias Grosser wrote:
> I am busy for the next 6 weeks, but am planning to work on the isl patch
> after. Sven, do you happen to have any time to work on the isl patch?

No.  It's going to be difficult enough for me to finish
what I need to finish for the CARP project.
I may have more time in July.  Then again, I may also
have no time at all if I need to go and find a "real job".

skimo


Re: [GSoC] questions about graphite_clast_to_gimple.c

2014-05-06 Thread Richard Biener
On Tue, May 6, 2014 at 1:02 PM, Tobias Grosser  wrote:
> On 06/05/2014 10:19, Richard Biener wrote:
>
> Hi Richi,
>
> thanks for the comments.
>
>
>> On Tue, May 6, 2014 at 8:57 AM, Tobias Grosser  wrote:
>>>
>>> On 05/05/2014 21:11, Roman Gareev wrote:


>>>> Hi Tobias,
>>>>
>>>> thank you for your reply! I have questions about types. Could you
>>>> please answer them?
>>>
>>>
>>>
>>> I looked through them and most seem to be related to how we derive types
>>> in
>>> graphite. As I said before, this is a _very_ fragile hack
>>> that works surprisingly well, but which is both too complex and
>>> in the end still incorrect. Sebastian wrote this code, so I am not
>>> familiar
>>> with the details. I also don't think it is necessary to
>>> understand the details. Instead of using any code, we should start
>>> implementing the new code using 64 bit signed integers. This
>>> should be correct in 99.9% of the cases.
>>
>>
>> Of course compilers have to work correctly in 100% of the cases, so
>> if you choose an approach that will be incorrect in > 0% of the cases
>> then you should make sure to detect those and not apply any transform.
>
>
> I agree we want to get to 100%. It is just the way to get there that needs
> to be chosen.
>
> Detecting broken cases does not work. During code generation we generate
> new expressions, e.g. i + j + 200 * b. To generate code for them we need to
> choose a type for the computation.
>
> cloog has zero knowledge about possible types, that's why graphite tries to
> derive types by estimating the minimal/maximal value of
> an expression i + j from the knowledge it has about i and j. This estimate
> is very imprecise especially as the initial knowledge we have is incomplete.
> As Roman pointed out, several of the 'estimates' just don't make sense at
> all.
>
> To get it 100% right we need to derive the minimal/maximal value a
> subexpression i + j can take and to use this to find a type that is large
> enough and also fast on our target platform. The best solution I see is to
> compute this information within the isl code generation, where we have all
> necessary information available.
>
> Unfortunately, this patch is not finished yet. There are two ways to
> proceed.
>
> 1) finish the patch as the very first step
>
> 2) Go for 64bits and plug in the patch later
>
> I would obviously prefer to get 1) done as soon as possible, but in case it
> still needs more time, defaulting to 64/128 bit types allows Roman to
> proceed. In the end, 64 bits is almost always large enough.
>
> I am busy for the next 6 weeks, but am planning to work on the isl patch
> after. Sven, do you happen to have any time to work on the isl patch?
>
>
>>> One of the selling points for the new isl code generation was however,
>>> that it will be possible to get precise information about the types
>>> needed for code generation. There existed already a patch for an older
>>> isl version and there is a partial patch for newer versions that Sven
>>> and I
>>> have been working on. It is not yet stable enough to be tested, but I
>>> attached it anyway for illustration. The idea is to
>>> introduce a set of functions
>>>
>>> +   int isl_ast_expr_has_known_size(
>>> +   __isl_keep isl_ast_expr *expr);
>>> +   int isl_ast_expr_is_bounded(
>>> +   __isl_keep isl_ast_expr *expr);
>>> +   int isl_ast_expr_is_signed(
>>> +   __isl_keep isl_ast_expr *expr);
>>> +   size_t isl_ast_expr_size_in_bits(
>>> +   __isl_keep isl_ast_expr *expr);
>>>
>>> in isl, where we can precisely compute the minimal legal type. We can
>>> then
>>> use this during code generation to derive good types.
>>
>>
>> You should be able to do this for all types you need up-front and check
>> if there is a suitable GIMPLE type available.  For example by using
>> lang_hooks.types.type_for_size () which will return NULL_TREE if there
>> isn't one.
>
>
> How could we do this upfront? For each subexpression, we need to know what
> is the minimal legal type. Only after we know this can we derive a type
> for it.

I thought that ISL gives you this information.  If not, then of course there
is no way - but then there is no way at any point.

>
>>>> 2. Why do we want to generate signed types as much as possible?
>>>
>>>
>>>
>>> Because in the code cloog generates, negative values are common. To be
>>> safe we generate unsigned code.
>
>
> That should have been _signed_ code.
>
>
>>> Again, I do not think spending time to understand the heuristics in
>>> type_for_clast is worth it. Some are rather complex and work well, some
>>> or just buggy but have never been triggered and a small percentage
>>> actually
>>> might be reusable later (pointer handling). As the approach
>>> has generally too little information to work reliably, I would not spend
>>> any time on it. I pointed out the correct approach above. Going with
>>> 64bit
>>> types will bring us a very long way, and we can finish the isl patc

Re: [GSoC] questions about graphite_clast_to_gimple.c

2014-05-06 Thread Tobias Grosser

On 06/05/2014 13:52, Richard Biener wrote:

On Tue, May 6, 2014 at 1:02 PM, Tobias Grosser  wrote:

On 06/05/2014 10:19, Richard Biener wrote:

Hi Richi,

thanks for the comments.



On Tue, May 6, 2014 at 8:57 AM, Tobias Grosser  wrote:


On 05/05/2014 21:11, Roman Gareev wrote:



Hi Tobias,

thank you for your reply! I have questions about types. Could you
please answer them?




I looked through them and most seem to be related to how we derive types
in
graphite. As I said before, this is a _very_ fragile hack
that works surprisingly well, but which is both too complex and
in the end still incorrect. Sebastian wrote this code, so I am not
familiar
with the details. I also don't think it is necessary to
understand the details. Instead of using any code, we should start
implementing the new code using 64 bit signed integers. This
should be correct in 99.9% of the cases.



Of course compilers have to work correctly in 100% of the cases, so
if you choose an approach that will be incorrect in > 0% of the cases
then you should make sure to detect those and not apply any transform.



I agree we want to get to 100%. It is just the way to get there that needs
to be chosen.

Detecting broken cases does not work. During code generation we generate
new expressions, e.g. i + j + 200 * b. To generate code for them we need to
choose a type for the computation.

cloog has zero knowledge about possible types, that's why graphite tries to
derive types by estimating the minimal/maximal value of
an expression i + j from the knowledge it has about i and j. This estimate
is very imprecise especially as the initial knowledge we have is incomplete.
As Roman pointed out, several of the 'estimates' just don't make sense at
all.

To get it 100% right we need to derive the minimal/maximal value a
subexpression i + j can take and to use this to find a type that is large
enough and also fast on our target platform. The best solution I see is to
compute this information within the isl code generation, where we have all
necessary information available.

Unfortunately, this patch is not finished yet. There are two ways to
proceed.

1) finish the patch as the very first step

2) Go for 64bits and plug in the patch later

I would obviously prefer to get 1) done as soon as possible, but in case it
still needs more time, defaulting to 64/128 bit types allows Roman to
proceed. In the end, 64 bits is almost always large enough.

I am busy for the next 6 weeks, but am planning to work on the isl patch
after. Sven, do you happen to have any time to work on the isl patch?



One of the selling points for the new isl code generation was however,
that it will be possible to get precise information about the types
needed for code generation. There existed already a patch for an older
isl version and there is a partial patch for newer versions that Sven
and I
have been working on. It is not yet stable enough to be tested, but I
attached it anyway for illustration. The idea is to
introduce a set of functions

+   int isl_ast_expr_has_known_size(
+   __isl_keep isl_ast_expr *expr);
+   int isl_ast_expr_is_bounded(
+   __isl_keep isl_ast_expr *expr);
+   int isl_ast_expr_is_signed(
+   __isl_keep isl_ast_expr *expr);
+   size_t isl_ast_expr_size_in_bits(
+   __isl_keep isl_ast_expr *expr);

in isl, where we can precisely compute the minimal legal type. We can
then
use this during code generation to derive good types.



You should be able to do this for all types you need up-front and check
if there is a suitable GIMPLE type available.  For example by using
lang_hooks.types.type_for_size () which will return NULL_TREE if there
isn't one.



How could we do this upfront? For each subexpression, we need to know what
is the minimal legal type. Only after we know this can we derive a type
for it.


I thought that ISL gives you this information.  If not, then of course there
is no way - but then there is no way at any point.


Not yet. We are close, but it is not finished enough to use it in 
production.



2. Why do we want to generate signed types as much as possible?




Because in the code cloog generates, negative values are common. To be
safe we generate unsigned code.



That should have been _signed_ code.



Again, I do not think spending time to understand the heuristics in
type_for_clast is worth it. Some are rather complex and work well, some
or just buggy but have never been triggered and a small percentage
actually
might be reusable later (pointer handling). As the approach
has generally too little information to work reliably, I would not spend
any time on it. I pointed out the correct approach above. Going with
64bit
types will bring us a very long way, and we can finish the isl patch to
get
it 100% right.



If ISL can give you for each expression a type precision and signedness
then I'd stick to that if it is available (or else punt).



Not yet, but hopefully soon.

At the moment, 

Resurrecting -Wunreachable

2014-05-06 Thread Florian Weimer
I would like to resurrect -Wunreachable, using an algorithm which is 
roughly based on the Java rules for reachable statements and normal 
completion, augmented to deal with labels and gotos, no-return 
functions, statement expressions, and whatever else is relevant to C and 
C++.  This analysis would be based mostly on syntax (except that 
constant folding is applied to detect trivially infinite loops), so it 
wouldn't suffer from the dependence on target and optimization levels 
the previous option had to deal with.


I think computing this information is a prerequisite for a high-quality 
switch warning, to detect cases that do not fall through because they do 
not complete normally, but this is a separate matter.
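
For example, a sketch of the distinction (illustrative only, not an existing
diagnostic):

  switch (x)
    {
    case 0:
      return 1;   /* does not complete normally: not a fall-through      */
    case 1:
      y++;        /* completes normally and falls into case 2: the kind
                     of fall-through such a warning should flag          */
    case 2:
      break;
    }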


The question I have is whether it would be feasible right now to implement
this as an early GIMPLE pass, or whether I should bury this project until we
have mostly eliminated early folding (Jeff Law thinks it's too difficult…).


As far as I can tell, the target dependence we have in fold right now
(see ) would not
interfere too much with Java-style reachability analysis: the CFG is
target-dependent, but under the Java rules, both branches of an if
statement are reachable even if the condition is a compile-time constant.
(It makes sense to keep this behavior because, just as in Java,
constant conditions are used for conditional compilation in C and C++;
it's not always the preprocessor that is used for this.)  So I hope that
the known target dependency issues would not impact the generated warnings.
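
For concreteness, a small example of what such rules would accept and flag
(illustrative only):

  int f (int x)
  {
    if (0)       /* constant condition: both branches still count as
                    reachable under Java-style rules, so no warning   */
      return 1;
    return x;
    x++;         /* cannot be reached after the return: warn here     */
  }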


Comments?

--
Florian Weimer / Red Hat Product Security Team


Re: Resurrecting -Wunreachable

2014-05-06 Thread Richard Biener
On Tue, May 6, 2014 at 4:09 PM, Florian Weimer  wrote:
> I would like to resurrect -Wunreachable, using an algorithm which is roughly
> based on the Java rules for reachable statements and normal completion,
> augmented to deal with labels and gotos, no-return functions, statement
> expressions, and whatever else is relevant to C and C++.  This analysis
> would be based mostly on syntax (except that constant folding is applied to
> detect trivially infinite loops), so it wouldn't suffer from the dependence
> on target and optimization levels the previous option had to deal with.
>
> I think computing this information is a prerequisite for a high-quality
> switch warning, to detect cases that do not fall through because they do not
> complete normally, but this is a separate matter.
>
> The question I have is whether it would be feasible right now to implement this as
> an early GIMPLE pass, or if I should bury this project until we have mostly
> eliminated early folding (Jeff Law thinks it's too difficult…).
>
> As far as I can tell, the target dependence we have in fold right now (see
> ) would not interfere
> too much with Java-style reachability analysis: The CFG is target-dependent,
> but under the Java rules, both branches of an if statement are reachable even
> if the condition is a compile-time constant.  (It makes sense to keep this
> behavior because just as in Java, constant conditions are used for
> conditional compilation in C and C++, it's not always the preprocessor that
> is used for this.)  So I hope that the known target dependency issues would
> not impact the generated warnings.
>
> Comments?

As I have suggested in the past, a good point to do this kind of analysis
on the (mostly, as you say) unoptimized IL is right after going into SSA
form, implementing said analysis as an IPA pass (yeah, that somewhat
conflicts).

From there you can run things like the CCP analysis phase (without
doing any transform) and use the converted CCP lattice to optimize
things (you can also run value-numbering).

If you are fine with getting function-local info only you can get away
with not doing an IPA pass (but you might get info from optimized
bodies leaked into the analysis I think).

Basically look for pass_build_ssa in passes.def and insert your pass
after that (then it's not an IPA pass).  If you want to have IPA passes
at that point you need to re-organize things slightly - I'd suggest
moving pass_init_datastructures and pass_build_ssa into
all_lowering_passes so you can add IPA passes to the beginning
of all_small_ipa_passes.
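
Roughly like this in passes.def (the warning pass name is made up and the
surrounding entries are from memory, so take it only as a sketch):

  NEXT_PASS (pass_init_datastructures);
  NEXT_PASS (pass_build_ssa);
  NEXT_PASS (pass_warn_unreachable);   /* new, hypothetical */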

Richard.

> --
> Florian Weimer / Red Hat Product Security Team


we are starting the wide int merge

2014-05-06 Thread Kenneth Zadeck
please hold off on committing patches for the next couple of hours as we 
have a very large merge to do.

thanks.

kenny


Best Affordable Web Hosting

2014-05-06 Thread sales
Hello,

Are you looking for web hosting for your website?

We provide virtual private servers on Linux and Windows for websites that
need a better web hosting environment.

A virtual private server provides an isolated environment and is suitable
for sites that need more resources and custom settings.

If your website is facing problems in a shared hosting environment, do
contact us for a solution to your web hosting needs.

Please reply to this mail.


Warm Regards

Sales Host Net India

website : http://www.hostnetindia.com
Email : sa...@hostnetindia.com
mob : 8875578666
phone: 0091-141-4113929



Re: we are starting the wide int merge

2014-05-06 Thread Mike Stump
On May 6, 2014, at 8:19 AM, Kenneth Zadeck  wrote:
> please hold off on committing patches for the next couple of hours as we have 
> a very large merge to do.
> thanks.

All done…  It is in.

Go frontend no longer includes GCC header files

2014-05-06 Thread Ian Lance Taylor
I'm very pleased to report that, thanks to the hard work of Chris
Manghane, the Go frontend no longer includes any header files from
GCC.  This means that global changes to the GCC middle-end no longer
need to touch any files in gcc/go/gofrontend.  Instead, they will only
need to modify files in gcc/go, files that are under the GPL and can
be maintained using the usual GCC processes.

This finally fulfills a promise I made when the Go frontend was added to GCC.

Ian


GNU Tools Cauldron 2014 - Update

2014-05-06 Thread Diego Novillo

An update to this year's Cauldron:

- We have been able to accept everyone in the registration
  waiting list! If you were in the waiting list but have not yet
  received a confirmation, please contact us at
  tools-cauldron-ad...@googlegroups.com.

- If you need a travel visa for the UK, please contact Kate
  Stewart .

- We will be publishing a presentation schedule in the next few
  days.

The workshop will likely have a similar format to last year's
meeting. It begins on Fri 18/Jul in the evening (no presentations
that day) and runs through Sat and Sun.

We have a pretty packed schedule, so we will have at least two
parallel streams and room for breakout meetings. More details as
we get closer to the conference.


Thanks. Diego.


Re: SH -ml not turning into -EL to ld

2014-05-06 Thread Kaz Kojima
Joel Sherrill  wrote:
> We have a few build failures on the RTEMS target where it appears
> that the -ml argument to make a relocatable is not turned into a
> -EL argument to ld by gcc 4.8.2.
> 
> This is the output of invoking gcc with "-v". Below that I invoked
> the same LD command with -EL on the command line and it
> worked.
> 
> Any ideas or suggestions?
[snip]
> /users/joel/rtems-4.11-work/tools/libexec/gcc/sh-rtems4.11/4.8.2/collect2 -dc -dp -N -o cache.rel -r

Usually some linker emulation option like -m shlelf is used here
to specify the endianness.  It looks like bsp_specs overrides the default
linker options and doesn't pass a linker emulation option.
Could you check *link with -dumpspecs and see if something like

-m %(link_emul_prefix)%{m5-compact*|m5-32media*:32}%{m5-64media*:64}%{!m1:%{!m2:%{!m3*:%{!m4*:%{!m5*:%(link_default_cpu_emul)}%(subtarget_link_emul_suffix)

is there?
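
For example, assuming the cross compiler is installed as sh-rtems4.11-gcc,
something like

  sh-rtems4.11-gcc -dumpspecs | sed -n '/^\*link:/,/^$/p'

shows the built-in link spec.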

Regards,
kaz