Adam Butcher wrote:
> John Freeman wrote:
>>
>> I just inspected my code again. The call to layout_class_type at the
>> beginning of finish_lambda_function_body at semantics.c:5241 was
>> intended to recalculate offsets to members in the case of default captures.
>>
>> Here is the complete order of the pertinent function calls (file
>> location has nothing to do with call order; I supply them to help the
>> reader to follow):
>>
>>   finish_struct @ parser.c:6985
>>
>>   cp_parser_lambda_body @ parser.c:6992
>>   --> finish_lambda_function_body @ parser.c:7380
>>   ----> layout_class_type @ semantics.c:5241
>>
>>   finish_struct_1 @ ???? I don't see this added yet. I've checked out
>>   revision 150396.
>>
> I think Jason's waiting for the formality of copyright assignment to be
> finalized. I attached my patches against the latest lambda branch head in
> the following mail if you want to try them out:
>   http://gcc.gnu.org/ml/gcc/2009-08/msg00058.html
>
I see you've committed the fix. Great. Much better to do the relayout in
semantics.c, where the previous layout stuff was, than in the parser. I take
your point about it potentially being overkill, but at least it means that
user programs that copy can work.
I guess this thread is done with now that the fix has been committed. I
should start another to discuss polymorphic lambda experimentation and
implicit template parameters. BTW I have got the latter working now -- to a
certain (read: limited and buggy) extent.
The 'implicit template parameter via auto' addition is literally a quick hack
for me to investigate what functionally needs to occur to achieve it -- the
implementation is not pleasant by any means as yet.
I've attached my two diffs made against the latest lambda head. The first
adds explicit polymorphic lambda support via the additional template
parameter syntax; the second is the very hacky 'for-discovery-purposes-only'
prototype for typename inference. The examples below demonstrate the
supported syntaxes.
1. [] <typename T, typename U> (T const& t, U u) { return t + u; }
2. [] (auto const& t, auto u) { return t + u; }
3. [] <typename T> (T const& t, auto u) { return t + u; }
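To make the intent of 2. concrete, here is a hand-written approximation of
the closure type it is meant to produce -- each 'auto' parameter becomes an
invented template parameter of the call operator. This is purely
illustrative (the struct and the __AutoTn names here are my own wording, not
what the patch emits verbatim):

  #include <iostream>
  #include <string>

  // Hand-written approximation of the closure type for example 2: each
  // 'auto' parameter turns into an invented template parameter of the
  // call operator.  Names are illustrative only.
  struct closure
  {
    template <typename __AutoT1, typename __AutoT2>
    auto operator() (__AutoT1 const& t, __AutoT2 u) const -> decltype (t + u)
    { return t + u; }
  };

  int main ()
  {
    closure f;
    std::cout << f (1, 2) << '\n';                    // deduces int, int
    std::cout << f (std::string ("a"), "b") << '\n';  // string + char const*
    return 0;
  }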
Currently, for auto typename inference, cv-qualifiers (and other bits like
attributes) are lost, but I'll come to that when I rewrite it all in light of
what I have found out. Just thought I'd share this functional approximation
to a solution. As a result of the aforementioned bug, although 1. and 3.
produce effectively the same code, 2. ends up being equivalent to:
  [] <typename __AutoT1, typename __AutoT2> (__AutoT1& t, __AutoT2 u) { return t + u; }
rather than the expected:
  [] <typename __AutoT1, typename __AutoT2> (__AutoT1 const& t, __AutoT2 u) { return t + u; }
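For reference, the practical difference shows up at call sites: with the
expected 'const&' parameter a temporary argument binds fine, whereas the
plain reference rejects it. A standalone illustration (my own example,
separate from the patches):

  #include <string>

  // 'expected' mirrors the const& signature that should be generated;
  // 'buggy' mirrors the plain-reference signature the bug produces.
  template <typename T, typename U>
  T expected (T const& t, U u) { return t + u; }

  template <typename T, typename U>
  T buggy (T& t, U u) { return t + u; }

  int main ()
  {
    std::string s ("a"), r;
    r = expected (s, "b");                  // OK
    r = expected (std::string ("a"), "b");  // OK: const& binds a temporary
    r = buggy (s, "b");                     // OK: s is a non-const lvalue
    // r = buggy (std::string ("a"), "b");  // error: T& cannot bind a temporary
    return 0;
  }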
There are a number of things I'm not sure about regarding the location of the
implementation (parser.c, semantics.c, decl.c, etc.).
One thing I'm worried about is that I'm using make_tree_vec() with a length
one greater than that of the previous vector in order to grow the template
parameter list whilst parsing function arguments. This seems inefficient and
ugly, not least as there seems to be no way to ditch the old tree-vec. I can
ggc_free it but that won't do any housekeeping of the tree counts and sizes.
It looks like tree-vecs are only supposed to be alloc'd into the pool (zone?)
and never removed. In this case, incrementally adding parameters, you get
allocs like:
[--tree-vec-1--]
[--tree-vec-1--] [-+-tree-vec-2-+-]
[--tree-vec-1--] [-+-tree-vec-2-+-] [-++-tree-vec-3-++-]
And all you want is the last one. I appreciate that it's probably done this
way to avoid full fragmentation management, but I'd expect this sort of thing
may happen often (or maybe it shouldn't!). Off the top of my head, one
solution would be to add a tree_vec_resize() which would realloc the memory
and update the counts/sizes iff the tree-vec were the last in the list. The
fallback behaviour, if it weren't the last, would be to do the previous
manual make_tree_vec() behaviour (a rough sketch of that path follows the
diagram below). Something like:
[--tv1--]
tv1 = tree_vec_resize (tv1, TREE_VEC_LENGTH (tv1) + 1);
[-+-tv1-+-]
tv1 = tree_vec_resize (tv1, TREE_VEC_LENGTH (tv1) + 1);
[-++-tv1-++-]
tv2 = make_tree_vec (n)
[-++-tv1-++-] [--tv2--]
tv1 = tree_vec_resize (tv1, TREE_VEC_LENGTH (tv1) + 1);
[-++-tv1-++-] [--tv2--] [-+++-tv1-+++-]
^^^^^^^^^^^^^
(no longer referenced)
This seems to work optimally in the case of incremental addition to the last
tree-vec -- you are left with the minimum number of tree-vecs necessary to
avoid full fragmentation handling, and it degrades to the effect of the
manual method of calling make_tree_vec (old_size + n).
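To make that concrete, here is roughly what I have in mind for the fallback
path of such a helper, written in terms of the existing TREE_VEC accessors.
tree_vec_resize doesn't exist in the tree API today, the in-place realloc
fast path would need cooperation from the GGC allocator, and the fragment
assumes GCC's internal tree.h -- so treat it as a sketch only:

  /* Hypothetical helper -- not existing API.  Only the copy-and-free
     fallback is shown; the in-place fast path (when VEC is the most
     recent allocation) would need support from the allocator itself.  */
  static tree
  tree_vec_resize (tree vec, int new_len)
  {
    int old_len = TREE_VEC_LENGTH (vec);
    int n = old_len < new_len ? old_len : new_len;
    tree new_vec = make_tree_vec (new_len);
    int i;

    for (i = 0; i < n; ++i)
      TREE_VEC_ELT (new_vec, i) = TREE_VEC_ELT (vec, i);

    /* As noted above, this does not do the tree count/size
       housekeeping; it just lets go of the old vector.  */
    ggc_free (vec);
    return new_vec;
  }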
Maybe doubling the size on realloc and starting at, say, 4 elements could
make realloc'ing more efficient -- but that would require different 'end' and
length handling for tree-vec, which may pervade the code-base (though the
macro front-ends may be able to hide this).
Maybe I've misunderstood tree-vecs and the garbage collection mechanics
completely and it's simply not an issue. Or it shouldn't be an issue because
they shouldn't be used in that way.
The other option would be to keep hold of the original tree-list (which can
be trivially chained) and only build the tree-vec after parsing the function
parameter declaration (rather than building it after each parameter). This
may complicate scoping/lookup of the template arguments though, as the
current_template_parameters value is currently a tree-vec.
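For completeness, the conversion at the end of the parameter clause would be
trivial with the existing list accessors; something like the following (the
function name is made up, and again this assumes the internal tree API):

  /* Sketch only: build the TREE_VEC in one go from a chained TREE_LIST
     of invented template parameters accumulated during parsing.  */
  static tree
  template_parm_list_to_vec (tree parm_list)
  {
    int len = list_length (parm_list);
    tree vec = make_tree_vec (len);
    int i = 0;
    tree t;

    for (t = parm_list; t; t = TREE_CHAIN (t), ++i)
      TREE_VEC_ELT (vec, i) = TREE_VALUE (t);

    return vec;
  }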
In all there's a lot I'm not sure of! It's all just experimentation!
Any feedback is always appreciated.
Regards,
Adam
0001-First-pass-polymorphic-lambda-support.patch
Description: Binary data
0002-Very-hacky-implementation-of-template-typename-infer.patch
Description: Binary data
