Re: Optimizing a 16-bit * 8-bit -> 24-bit multiplication

2006-12-04 Thread David Nicol

here's an ignorant, naive, and very likely wrong attempt:

what happens if you mask off the high and low bytes of the
larger number, do two 8x8->16 multiplies, left-shift the result
of the higher one by eight bits, and add, as a macro?

#define _mul8x16(c,s) ( \
    (long int) ( (c) * (unsigned char) ( (s) & 0x00FF ) ) \
  + (long int) ( \
      (long int) ( (c) * (unsigned char) ( ( (s) & 0xFF00 ) >> 8 ) ) \
      << 8 \
    ) \
)

what would that do?  I don't know which, if any, of the casts are needed,
or what exactly you have to do to suppress promotion of the
internal representations to 32-bit until the left-shift and the add;
one would expect that multiplying a char by a short on this platform
would produce that code, and that the avr-gcc-list would be the
right place to find someone who could make that happen.

Since you know the endianness of your machine, you could reasonably
pull the bytes out of the 16-bit operand directly instead of shifting and
masking.  You could also store the to-be-shifted result directly at an
address one byte up from the address of the integer that you will add the
low result to -- that's what you're proposing the unions for, right?
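
For what it's worth, here is a rough sketch of the kind of mul24_16_8 helper
Shaun describes below; the names and the stdint types are mine, not anything
from his mail:

#include <stdint.h>

/* 16-bit * 8-bit -> 24-bit (held in a 32-bit type), decomposed into the
   two 8x8->16 multiplies the AVR hardware can do directly */
static inline uint32_t mul24_16_8(uint16_t s, uint8_t c)
{
    uint16_t lo = (uint16_t)(uint8_t)(s & 0x00FF) * c;   /* low byte  * c */
    uint16_t hi = (uint16_t)(uint8_t)(s >> 8)     * c;   /* high byte * c */

    /* a union could instead store hi one byte up from lo and let a
       16-bit add propagate the carry -- the trick hinted at above */
    return (uint32_t)lo + ((uint32_t)hi << 8);
}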


On 12/1/06, Shaun Jackman <[EMAIL PROTECTED]> wrote:

I would like to multiply a 16-bit number by an 8-bit number and
produce a 24-bit result on the AVR. The AVR has a hardware 8-bit *
8-bit -> 16-bit multiplier.

If I multiply a 16-bit number by a 16-bit number, it produces a 16-bit
result, which isn't wide enough to hold the result.

If I cast one of the operands to 32-bit and multiply a 32-bit number
by a 16-bit number, GCC generates a call to __mulsi3, the routine that
multiplies a 32-bit number by a 32-bit number to produce a 32-bit result;
it requires ten 8-bit * 8-bit multiplications.

A 16-bit * 8-bit -> 24-bit multiplication only requires two 8-bit *
8-bit multiplications. A 16-bit * 16-bit -> 32-bit multiplication
requires four 8-bit * 8-bit multiplications.

I could write a mul24_16_8 (16-bit * 8-bit -> 24-bit) function using
unions and 8-bit * 8-bit -> 16-bit multiplications, but before I go
down that path, is there any way to coerce GCC into generating the
code I desire?

Cheers,
Shaun




--
perl -le'1while(1x++$_)=~/^(11+)\1+$/||print'


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread David Nicol

On 12/20/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote:

You are apparently using a different definition of an algebra or ring
than the common one.


Fascinating discussion.  Pointers to canonical on-line definitions of
the terms "algebra" and "ring" as used in compiler design please?


Re: GCC optimizes integer overflow: bug or feature?

2006-12-21 Thread David Nicol

On 12/20/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote:

You'd better not. Really! Please just consider, for example, the impact
of the (in)famous 80-bit internal (over)precision of a
very common IEEE 754 implementation...

volatile float b = 1.;

if (1. / 3. == b / 3.) {
    printf("HALLO!\n");
} else {
    printf("SURPRISE SURPRISE!\n");
}


It has always seemed to me that floating-point comparison could be
standardized to regularize the exponent and ignore the few least significant
bits, and that doing so would save a lot of headaches.  Or would it just make
the cases where exact comparison of fp results breaks rarer, making the error
more intermittent and thereby worse?  Could a compiler switch be added that
would alter fp equality?

I have argued for "precision" to be included in numeric types in other fora,
and have been stunned that everyone except people with a background in
chemistry finds the suggestion bizarre and unnecessary.  I realize that GCC
is not really a good place to try to shift norms; but if a patch were
prepared that added a command-line switch (perhaps -sloppy-fpe and
-no-sloppy-fpe) governing whether ((fptype) == (fptype)) gets wrapped in
something that throws away the least significant GCC_SLOPPY_FPE_SLOP_SIZE
bits of the mantissa, would it be accepted or considered silly?
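
To make the idea concrete, here is a minimal sketch of what such a "sloppy"
equality might reduce to, assuming IEEE 754 single precision; the helper name
and the slop size are hypothetical, and it deliberately ignores the boundary
cases (values that straddle an exponent step) that make this hard to get
right:

#include <stdint.h>
#include <string.h>

#define SLOPPY_FPE_SLOP_SIZE 3   /* hypothetical: ignore the 3 lowest mantissa bits */

static int sloppy_float_eq(float a, float b)
{
    uint32_t ua, ub;
    memcpy(&ua, &a, sizeof ua);          /* view the IEEE 754 bit patterns */
    memcpy(&ub, &b, sizeof ub);
    uint32_t mask = ~((UINT32_C(1) << SLOPPY_FPE_SLOP_SIZE) - 1);
    return (ua & mask) == (ub & mask);   /* compare with the slop bits cleared */
}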


Re: [c++] switch ( enum ) vs. default statment.

2007-01-23 Thread David Nicol

On 1/23/07, Paweł Sikora <[EMAIL PROTECTED]> wrote:

typedef enum { X, Y } E;
int f( E e )
{
    switch ( e )
    {
        case X: return -1;
        case Y: return +1;
    }


+ throw runtime_error("invalid value got shoehorned into E enum");


}

In this example g++ produces a warning:

e.cpp: In function 'int f(E)':
e.cpp:9: warning: control reaches end of non-void function

Adding a `default' statement to the `switch' removes the warning, but
in C++ out-of-range values in enums are undefined.


nevertheless, that integer type might get its bits twiddled somehow.


optimizing away parts of macros?

2006-04-17 Thread David Nicol
I am using
gcc (GCC) 4.0.2 20051125 (Red Hat 4.0.2-8)

under the Inline::C perl module

and having a very weird situation.

I have a multi-line macro that declares several variables and then does some
work with them, for use in several functions that have similar invocations,
interfacing to an external library.

I was getting mysterious segfaults, so I went over everything with tweezers,
eventually adding a printf line to the end of the macro so I could verify that
the unpacking of my arguments was proceeding correctly.

To my surprise, the addition of this line, which shouldn't have any
side effects,
has solved the problem.

Adding -O0 to the CCFLAGS makes no difference.

GCC appears to be treating my long macro as some kind of block
and throwing out variables that are not used within it instead of simply
pasting the code in at the macro invocation point.

Is this a known problem with 4.0.2?  Is there a workaround?  Should I upgrade?

--
David L Nicol
Can you remember when vending machines took pennies?


Re: optimizing away parts of macros?

2006-04-17 Thread David Nicol
Thank you.  Nobody is aware of such a problem.


Re: Oops

2006-06-06 Thread David Nicol

On 6/6/06, Peter Michael Gerdes <[EMAIL PROTECTED]> wrote:

Ignore that last email.  It was sent to the wrong address.


Thesis, antithesis, synthesis.


--
David L Nicol
"fans of liza minelli should always be
disconnected immediately" -- Matthew Henry


Re: Highlevel C Backend

2006-06-09 Thread David Nicol

On 6/9/06, Sebastian Pop <[EMAIL PROTECTED]> wrote:

Steven Bosscher wrote:
> 2. Probably GIMPLE, but you can't express all of GIMPLE in ANSI C
> (with GCC extensions you can probably express most of it though).

Theoretically you can express all of GIMPLE in ANSI C,
practically it would require some engineering,
pragmatically it is worthless for GCC.


One imagines that one would construct a virtual machine architecture framework
and then write a back-end that would generate "machine code" for that virtual
machine, which would be a subset of ANSI C.  Hey-presto, you then have a
butt-ugly anything-to-ANSI-C translator that (whoops) loses any
back-end-specific optimizations.

By framework I mean something like a C prelude that would declare all the
registers and define all the ops.
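
To make "prelude" concrete, a hedged sketch -- the register count and the
particular ops are invented for illustration, nothing here comes from GIMPLE
or any real back end:

#include <stdint.h>

/* the virtual machine's register file, declared once in the prelude */
static int32_t vm_reg[32];

/* each virtual "op" is a C macro that generated code can string together */
#define VM_LOADI(dst, imm)   (vm_reg[dst] = (int32_t)(imm))
#define VM_ADD(dst, a, b)    (vm_reg[dst] = vm_reg[a] + vm_reg[b])
#define VM_JNZ(reg, label)   do { if (vm_reg[reg]) goto label; } while (0)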

How could writing to C possibly be more complex, rather than less complex,
than writing to an actual machine instruction set?

Pragmatically, the exercise could be re-used to generate byte-code for other
virtual machines, such as javaVM or Parrot, or as a generic starting
point for writing
back-ends for new architectures.


--
David L Nicol, who has had plenty of coffee this afternoon


what motivates people to submit patches anyway?

2006-06-14 Thread David Nicol

Not off topic, in response to the thread about Goobles and the contest to
collect Goobles towards the purely symbolic end of becoming GCC Grand Poobah
-- does this person get their own parking space?  E-mail for [EMAIL PROTECTED]
forwarded to them for the duration of their reign?

I think that if direct benefits were engineered somehow -- even symbolically,
which they are, but more explicitly symbolically -- there might be more and
better patches.  Until someone comes up with a way to organize open-source
projects as revenue generators -- even if only as t-shirt stores, with
revenues from the t-shirts shared according to a decaying contribution-share
model, an easy way to divide up the pie no matter how small -- apathy will
continue to reign.

speaking very generally here, all volunteer-staffed best-efforts
software projects
share these kinds of problems.


Re: Coroutines

2006-06-16 Thread David Nicol

On 6/16/06, Dustin Laurence <[EMAIL PROTECTED]> wrote:

I'm pretty sure this is stepping into deep quicksand, but I'll ask
anyway...I'm interested in writing an FE for a language that has
stackable coroutines (Lua-style, where you can yield and resume
arbitrarily far down the call stack).  I'm trying to wrap my head around
what would be involved and how awful it would be.
...
is. :-)  OTOH if it is possible I'd consider trying to write it, if my
GCC-fu ever reaches the requisite level (my rank is somewhere below
"pale piece of pigs ear" right now).


Wow.  How many is that in kyu?

As the quicksand closes in above our heads I'll try to answer:

I don't have any GCC credibility either but I once got an approving off-list
reply from Damian Conway while discussing coroutines in the context of
Perl 6 features -- which had me reading the university library's dusty
copy of Knuth's Art of Computer Programming -- so I have done a little
homework in this problem domain, although my ideas about GCC internals
are rough and possibly naive and/or incorrect.

That said, my thoughts on the question of how to implement coroutines
using the GCC system are that it would best be done by abandoning the
idea of using the "native" stack for your program's "logical" stack.  Your
target language has some concept of multiple stacks, and recursion, so
you will have to implement these stacks somehow, unlike TAOCP's
coroutines, which manipulated global variables in a stackless environment.

Why not implement your stacks as linked lists of dynamically allocated
stack frame objects, and coerce your language into primitives that
operate on those objects instead of trying to force things to be C-like?
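
A bare-bones sketch of what I mean, with made-up field names -- one
heap-allocated frame per call, chained per coroutine instead of living on
the native C stack:

#include <stdlib.h>

struct frame {
    struct frame *caller;     /* link back down the logical stack    */
    void         *locals;     /* this frame's local-variable storage */
    int           resume_pt;  /* where to pick up when resumed       */
};

struct coroutine {
    struct frame *top;        /* each coroutine owns its own stack   */
};

/* push a frame onto a coroutine's stack instead of the native C stack */
static struct frame *push_frame(struct coroutine *co, size_t locals_size)
{
    struct frame *f = malloc(sizeof *f);
    f->caller    = co->top;
    f->locals    = malloc(locals_size);
    f->resume_pt = 0;
    co->top      = f;
    return f;
}

Yielding then just means returning control while leaving the coroutine's
frame chain intact, and resuming means walking back to the saved frame.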

This approach might not make sense -- I am sure if I search the
archives I will find out why, for instance, GNU Lisp is a separate
project from GCC instead of being merged into it.

GCC might not be the best back end for your project.
At some point someone asked the perl 6 people why they were designing
Parrot when the scheme48 internals were available -- here is scheme48's
documentation on threading
http://s48.org/1.3/manual/manual-Z-H-8.html#node_chap_6
and the answer was "because nobody knew about it".

So the other thought, or the rest of the thought, is, why try to force
your language into GCC when it might be easier to force it into something else?

--
David L Nicol
"you couldn't bullshit [Bill Gates] for a minute
because he was a programmer. A real, actual,
programmer." -- Joel Spolsky


Re: Boehm-gc performance data

2006-06-23 Thread David Nicol

On 6/23/06, Laurynas Biveinis <[EMAIL PROTECTED]> wrote:

What do you think?


Is it possible to turn garbage collection totally off for a null-case
run-time comparison or would that cause thrashing except for very
small jobs?

--
David L Nicol
"if life were like Opera, this would probably
have poison in it" -- Lyric Opera promotional
coffee cup sleeve from Latte Land


Re: [boehms-gc] Performance results

2006-07-25 Thread David Nicol

On 7/24/06, Laurynas Biveinis <[EMAIL PROTECTED]> wrote:


[How is it that setting pointers] to NULL can
actually increase peak GC memory usage?


I'll guess that during collection phases, the list of
collectible structures becomes longer.

GCC is just too huge to try and implement reference counting
just to see what it would do, right?  All managed structures would have
to include a base class with the count in it and all new references would
have to be through a macro...
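
Hypothetically, something like this -- the names are invented and none of it
is actual GCC code:

#include <stdlib.h>

/* the base every managed structure would have to embed */
struct rc_base {
    unsigned refcount;
};

/* every new reference, and every dropped one, would go through these */
#define RC_REF(p)    ((p)->base.refcount++, (p))
#define RC_UNREF(p)  do { if (--(p)->base.refcount == 0) free(p); } while (0)

struct managed_node {
    struct rc_base       base;   /* the count lives here                */
    struct managed_node *kid;    /* ...plus the structure's real fields */
};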


Re: algol 60 for gcc

2006-08-08 Thread David Nicol

On 8/8/06, Petr Machata <[EMAIL PROTECTED]> wrote:


I'm trying to get the university to GPL the code and documentation, and
give up their copyright, so that it can be used without restriction,
but I won't know the outcome until later this year.


I am not a lawyer, but my understanding from researching university claims
to copyright of student work is that a university policy of holding a copyright
on work you have done in a class that you paid to be in is entirely specious.

If you are being compensated for your work and receiving research grants or
something like that it's different, but if you are doing work for a
class that you
are paying to be in, that work is yours under the Berne convention, and it is
yours to license how you please.  As a paying student, you hire your university
to coach you.

You never hear about universities claiming copyright on the output from authors
who have taken creative writing classes, do you?

Of course, as a doctoral student you may be receiving grant money or stipends
which may make your work work-for-hire, but then again it might not, if your
grant or stipend is financial aid or is for tasks other than writing
software and docs, such as being a teaching assistant.

http://www.google.com/search?q=copyright+on+student+work
shows many links to various policy documents, most of which refer to
international copyright law, which should apply at FIT BUT just as
it applies in Canada or Minnesota.

--
David L Nicol
all your grammar nit are belong to us


Re: RFC: deprecated functions calling deprecated functions

2006-09-29 Thread David Nicol

I think we should continue to warn.  I can see the arguments on both
sides, but I think warning makes sense.  The person compiling the
library should use -Wno-deprecated, and accept that they might be calling
some other deprecated function they don't intend to call.


how about suppressing nested warnings only with -Wno-deprecated-nested
or something like that?
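
For concreteness, the situation under discussion looks something like this
(the names are illustrative; the attribute is GCC's ordinary deprecated
attribute):

/* both functions are deprecated; the question is whether the call inside
   old_api() should still draw a deprecation warning */
__attribute__((deprecated)) int old_helper(void);

__attribute__((deprecated))
int old_api(void)
{
    return old_helper();
}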



--
The Country Of The Blind, by H.G. Wells
http://cronos.advenge.com/pc/Wells/p528.html