Re: Pipe in the "return" statement

2011-07-25 Thread Ian Collins

On 07/26/11 12:00 AM, Archard Lias wrote:

Hi,

Still I don't get how I am supposed to understand the pipe and its
task/idea/influence on the control flow of:
return  |
??


It's simply a bitwise OR.
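
For example (a C-family sketch with made-up names; Python's | on plain
ints behaves the same way):

#include <cstdio>

// '|' simply ORs the bits of its two operands; it has no effect on
// control flow, so "return a | b" just returns the combined value.
unsigned combine(unsigned read_flag, unsigned write_flag) {
    return read_flag | write_flag;
}

int main() {
    std::printf("%u\n", combine(0x1u, 0x2u));   // prints 3
    return 0;
}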

--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: Cpp + Python: static data dynamic initialization in *nix shared lib?

2010-07-09 Thread Ian Collins

On 07/10/10 03:52 AM, Alf P. Steinbach /Usenet wrote:

[Cross-posted comp.lang.python and comp.lang.c++]

I lack experience with shared libraries in *nix and so I need to ask...

This is about "cppy", some support for writing Python extensions in C++
that I just started on (some days ago almost known as "pynis" (not funny
after all)).

For an extension module it seems that Python requires each routine to be
defined as 'extern "C"'. And although e.g. MSVC is happy to mix 'extern
"C"' and C++ linkage, using a routine declared as 'static' in a class as
a C callback, formally they're two different kinds, and I seem to recall
that /some/ C++ compiler balks at that kind of mixing unless specially
instructed to allow it. Perhaps it was the Sun compiler?


Yes, it will (correctly) issue a warning.

As this is a bit OT, contact me directly and we can work through it.  I
have had similar fun and games adding PHP modules!
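
FWIW, the usual workaround is a thin trampoline with C linkage that
forwards to the C++ static member, so the pointer stored in the method
table really is a C function.  A minimal sketch (the names are made up
and it is not a complete module):

#include <Python.h>

// Hypothetical C++ class whose static member does the real work.
struct Greeter {
    static PyObject* hello(PyObject* /*self*/, PyObject* /*args*/) {
        Py_RETURN_NONE;
    }
};

// The C-linkage trampoline is what the method table actually points at,
// so no C/C++ linkage mixing is needed.
extern "C" PyObject* greeter_hello(PyObject* self, PyObject* args) {
    return Greeter::hello(self, args);
}

static PyMethodDef greeter_methods[] = {
    {"hello", greeter_hello, METH_NOARGS, "Forward to Greeter::hello()."},
    {NULL, NULL, 0, NULL}
};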


--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-29 Thread Ian Collins

On 09/30/10 02:17 PM, Seebs wrote:

On 2010-09-30, RG  wrote:

That the problem is "elsewhere in the program" ought to be small
comfort.


It is, perhaps, but it's also an important technical point:  You CAN write
correct code for such a thing.


#include <stdio.h>

int maximum(int a, int b) { return a > b ? a : b; }

int main() {
   long x = 8589934592;
   printf("Max of %ld and 1 is %d\n", x, maximum(x, 1));
   return 0;
}


You invoked implementation-defined behavior here by calling maximum() with
a value which was outside the range.  The defined behavior is that the
arguments are converted to the given type, namely int.  The conversion
is implementation-defined and could include yielding an implementation-defined
signal which aborts execution.

Again, the maximum() function is 100% correct -- your call of it is incorrect.
You didn't pass it the right sort of data.  That's your problem.

(And no, the lack of a diagnostic doesn't necessarily prove anything; see
the gcc documentation for details of what it does when converting an out
of range value into a signed type, it may well have done exactly what it
is defined to do.)


Note that the mistake can be diagnosed:

lint /tmp/u.c -m64 -errchk=all
(7) warning: passing 64-bit integer arg, expecting 32-bit integer: maximum(arg 1)
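
To see what that typically means at run time, here is a small sketch
(assuming 32-bit int and 64-bit long, the usual LP64 case): 8589934592 is
2^33, so the implementation-defined conversion keeps only the low 32 bits
and maximum() is handed 0.

#include <cstdio>

// Assumes 32-bit int and 64-bit long (LP64).  Converting 2^33 to int is
// implementation-defined; keeping the low-order 32 bits is typical.
static int maximum(int a, int b) { return a > b ? a : b; }

int main() {
    long x = 8589934592L;                  // 2^33, not representable in int
    int narrowed = static_cast<int>(x);    // typically 0 (low 32 bits of 2^33)
    std::printf("narrowed = %d\n", narrowed);
    std::printf("maximum(x, 1) = %d\n", maximum(x, 1));  // typically prints 1
    return 0;
}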


--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-29 Thread Ian Collins

On 09/30/10 05:57 PM, RG wrote:


I'm not saying one should not use compile-time tools, only that one
should not rely on them.  "Compiling without errors" is not -- and
cannot ever be -- a synonym for "bug-free."


Which is why we all have run-time tools called unit tests, don't we?
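
For example, a boundary-value test on the calling code catches at run
time what the compiler let through.  A minimal sketch using plain assert
(assuming 64-bit long and 32-bit int; no particular test framework
implied, and the wrapper name is made up):

#include <cassert>

static int maximum(int a, int b) { return a > b ? a : b; }

// Hypothetical caller under test: it silently narrows a long to an int
// before calling maximum(), which is exactly the bug discussed above.
static long max_against_one(long x) { return maximum(x, 1); }

int main() {
    // On LP64 this assertion fires, because maximum() only ever saw 0 and 1.
    assert(max_against_one(8589934592L) == 8589934592L);
    return 0;
}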

--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-30 Thread Ian Collins

On 09/30/10 06:38 PM, Lie Ryan wrote:


The /most/ correct version of the maximum() function is probably one
written in Haskell as:

maximum :: Integer -> Integer -> Integer
maximum a b = if a > b then a else b

Integer in Haskell has infinite precision (like Python's int, only
bounded by memory), but Haskell also has static type checking, so you
can't pass just any arbitrary object.

But even then, it's still not 100% correct. If you pass really large
values that exhaust memory, maximum() could still produce an unwanted
result.

The second problem is that Haskell also has Int, the bounded integer
type, and if some previous calculation in Int overflowed, you can still
get an incorrect result. In practice, a type-agnostic language with
*mandatory* infinite-precision arithmetic wins in terms of correctness.
Any language which only has optional infinite-precision arithmetic can
always produce an erroneous result.

Anyone can dream of a 100% correct program, but anyone who believes they
can write one is just a dreamer. In reality, we don't usually need a
100% correct program; we just need one that runs correctly often enough
that the 0.001% chance of producing an erroneous result becomes
irrelevant.

In summary, in this particular case with the maximum() function, static
checking does not help in producing the most correct code; if you need
to ensure the highest correctness, you must use a language with
*mandatory* infinite-precision integers.


Or using the new suffix return syntax in C++0x.  Something like

template <typename T0, typename T1>
[] maximum(T0 a, T1 b) { return a > b ? a : b; }

Where the return type is deduced at compile time.
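
In the syntax C++0x actually settled on, that becomes (a compilable
sketch; the usage in main is just an illustration):

#include <cstdio>

// Trailing (suffix) return type: the result type is deduced from the
// conditional expression, so maximum(long, int) returns long and nothing
// is silently narrowed.
template <typename T0, typename T1>
auto maximum(T0 a, T1 b) -> decltype(a > b ? a : b) { return a > b ? a : b; }

int main() {
    long x = 8589934592L;                   // 2^33
    std::printf("%ld\n", maximum(x, 1));    // prints 8589934592
    return 0;
}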

--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-30 Thread Ian Collins

On 09/30/10 09:02 PM, Paul Rubin wrote:


 int maximum(int a, int b);

 int foo() {
   int (*barf)() = maximum;
   return barf(3);
 }

This compiles fine for me.  Where is the cast?  Where is the error message?
Are you saying barf(3) doesn't call maximum?


Try a language with stricter type checking:

CC /tmp/u.c
"/tmp/u.c", line 7: Error: Cannot use int(*)(int,int) to initialize 
int(*)().

"/tmp/u.c", line 8: Error: Too many arguments in call to "int(*)()".

--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-30 Thread Ian Collins

On 10/ 1/10 02:57 AM, Pascal Bourguignon wrote:

Nick Keighley  writes:


On 27 Sep, 20:29, [email protected] (Pascal J. Bourguignon)
wrote:

If you start with the mindset of static type checking, you will consider
that your types are checked and if the types at the interface of two
modules matches you'll think that everything's ok.  And six months later
your Mars mission will crash.


do you have any evidence that this is actually so? That people who
program in statically typed languages actually are prone to this "well
it compiles so it must be right" attitude?


Yes, I can witness that it's in the mind set.

Well, the problem is always the same: with time pressure coming from the
sales people (who can sell products for which the first line of the
specification, much less of the code, has not been written yet), it's
always a battle to explain that once the code is written, there is still
a lot of time needed to run tests and debug it.  I've even had technical
managers, who should know better, expecting us to write bug-free code in
the first place (when we didn't even have a specification to begin with!).


Which is why agile practices such as TDD have an edge.  If it compiles 
*and* passes all its tests, it must be right.


--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-30 Thread Ian Collins

On 10/ 1/10 08:21 AM, Seebs wrote:

On 2010-09-30, Keith Thompson  wrote:


IMHO it's better to use prototypes consistently than to figure out the
rules for interactions between prototyped vs. non-prototyped function
declarations.


Yes.  It's clearly undefined behavior to call a function through a
pointer to a different type, or to call a function with the wrong number
of arguments.  I am pretty sure at least one compiler catches this.


Any C++ compiler will refuse to accept it.

C isn't really a strongly typed language, and having to support archaic
non-prototyped function declarations makes thorough type checking
extremely difficult, if not impossible.
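
A small sketch of the contrast: C++ only has fully prototyped
declarations, so the call site is always checked against the parameter
types (the commented-out line shows the kind of call that gets rejected):

// C++ has no old-style, non-prototyped declarations: every declaration
// carries the full parameter list, so every call can be checked.
int maximum(int a, int b) { return a > b ? a : b; }

int ok()  { return maximum(1, 2); }
// int bad() { return maximum(1); }   // error: too few arguments

int main() { return ok() == 2 ? 0 : 1; }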


--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-09-30 Thread Ian Collins

On 10/ 1/10 10:27 AM, Seebs wrote:

On 2010-09-30, Ian Collins  wrote:

Which is why agile practices such as TDD have an edge.  If it compiles
*and* passes all its tests, it must be right.


So far as I know, that actually just means that the test suite is
insufficient.  :)

Based on my experience thus far, anyway, I am pretty sure it's
essentially never the case that the tests and the code are both correct;
usually either the tests fail or there are not enough tests.


Which is why we write the tests first.  The only code written is written 
to pass a test.


Reviewing tests is a lot easier than reviewing the code while working 
out what it is supposed to do.


--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list


Re: "Strong typing vs. strong testing"

2010-10-01 Thread Ian Collins

On 10/ 2/10 05:18 AM, Pascal J. Bourguignon wrote:

Seebs  writes:


On 2010-10-01, Pascal J. Bourguignon  wrote:

                              static              dynamic

compiler detects wrong type   fail at compile     fails at run-time
                                                  (with exception
                                                  explaining this is
                                                  the wrong type)


Unless, of course, the "wrong type" happens to be compatible enough to
pass.  In which case, it's unclear whether it is the "wrong type" or not.


compiler passes wrong type    wrong result        fails at run-time
                              (the programmer     (with exception
                              spends hours        explaining this is
                              finding the         the wrong type)
                              problem)


I have no clue what exact scenario you're talking about here.  I've never
seen a bug that could plausibly be described as "compiler passes wrong
type" which wasn't picked up quickly by running with more warnings enabled.


This is the scenario discussed in this thread: a long is passed to
maximum without a compiler warning.


Which will cause the test for the bit of code doing the call to fail. 
So it fails at run-time with a failed test, just as it would in a 
dynamic language.


--
Ian Collins
--
http://mail.python.org/mailman/listinfo/python-list