Re: Cult-like behaviour [was Re: Kindness]

2018-07-13 Thread dbd
On Friday, July 13, 2018 at 4:59:06 PM UTC-7, Steven D'Aprano wrote:
...
> I think that Marko sometimes likes to stir the ants' nest by looking down 
> at the rest of us and painting himself as the Lone Voice Of Sanity in a 
> community gone mad *wink*
...

You mean he thinks he's Ranting Rick?

Dale B. Dalrymple
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to create a single executable of a Python program

2007-07-25 Thread dbd
On Jul 25, 10:20 am, Larry Bates <[EMAIL PROTECTED]> wrote:
> NicolasG wrote:
> > Dear fellows,
>
> > I'm trying to create an executable file using py2exe. Unfortunately,
> > along with the Python executable it also creates some other files
> > that are needed for the executable to run on a system that doesn't
> > have Python installed. Can someone guide me on how to merge all of
> > the files created by py2exe into a single exe file? If I have a
> > Python program that uses an image file, I don't want this image file
> > to be exposed in the folder but only accessible through the program
> > flow.
>
> > Regards,
> > Nicolas.
>
> You need to tell us why you "think" you need this and perhaps we can make a
> suggestion.  Question: Have you installed ANY applications recently that
> consisted of only a single file on your hard drive?  Answer: No.  Most
> applications install many (sometimes hundreds of) files.  So what is the
> problem?  If you want a single file to distribute, look at Inno Setup.  Use
> it to make a single setup.exe out of all the files that come out of py2exe,
> along with the documentation, shortcuts, etc. that a good modern application
> needs.
>
> -Larry

I use a number of utilities that install as a single executable file.
In fact, that is why I use them. I can install them on systems and
remove them with simple fast tools and I can count on not leaving any
extraneous crap behind.

Utilities can afford to trade away the memory-footprint and
programmer-time efficiencies that giant apps must struggle for, in
exchange for simplicity.

But not everyone writes utilities. Does this multi-file stance mean
there is an automatic assumption that Python is only for Microsoft
wannabes?
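For what it's worth, py2exe itself can get close to a single file
through its bundle_files option. A sketch of such a setup.py (untested
here; myscript.py is a hypothetical entry script):

```python
# setup.py sketch for py2exe: bundle everything into one executable.
# bundle_files=1 folds the Python DLL and library into the exe itself;
# zipfile=None merges the library archive in as well.
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

setup(
    console=['myscript.py'],  # hypothetical entry script
    options={'py2exe': {'bundle_files': 1, 'compressed': True}},
    zipfile=None,
)
```

Run with "python setup.py py2exe". Data files such as images still
need to be embedded or loaded as resources if they are to stay out of
the install folder.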

Dale B. Dalrymple

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Float precision and float equality

2009-12-06 Thread dbd
On Dec 6, 1:12 am, Raymond Hettinger  wrote:
> On Dec 5, 11:42 pm, Tim Roberts  wrote:
>
> > Raymond Hettinger  wrote:
>
> > >   if not round(x - y, 6): ...
>
> > That's a dangerous suggestion.  It only works if x and y happen to be
> > roughly in the range of integers.
>
> Right.  Using abs(x-y) < eps is the way to go.
>
> Raymond

This only works when abs(x) and abs(y) are larger than eps, but not
too much larger.

Mark's suggestion is longer, but it works. The downside is that it
requires you to think about the scale and accuracy of your
application.
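Such a scale-aware test can be sketched as follows (my code, not
Mark's exact suggestion; it is essentially the shape that Python's
later math.isclose standardized):

```python
import sys

def nearly_equal(x, y, rel_tol=1e-9, abs_tol=0.0):
    # Relative test scaled by the larger magnitude, with an optional
    # absolute floor for comparisons against values near zero.
    return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

eps = sys.float_info.epsilon  # ~2.22e-16 for IEEE 754 doubles

# A fixed "abs(x - y) < eps" test gets both of these wrong:
print(nearly_equal(0.5 * eps, 0.0))          # False: distinct small values
print(nearly_equal(2.0**52, 2.0**52 + 1.0))  # True: only one lsb apart
```

The abs_tol floor is what you tune when one operand can legitimately
be exactly zero.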

Dale B. Dalrymple
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Float precision and float equality

2009-12-06 Thread dbd
On Dec 6, 1:48 pm, sturlamolden  wrote:
> On 6 Des, 21:52, r0g  wrote:
>
> > > > Right.  Using abs(x-y) < eps is the way to go.
> > > >
> > > > Raymond
>
> > > This only works when abs(x) and abs(y) are larger that eps, but not
> > > too much larger.
>
> > Okay, I'm confused now... I thought them being larger was entirely the
> > point.
>
> Yes. dbd got it wrong. If both are smaller than eps, the absolute
> difference is smaller than eps, so they are considered equal.

Small x,y failure case:
eps and even eps squared are representable as floats. If you have
samples of a sine wave with a peak amplitude of one half eps, the
"abs(x-y) < eps" test would report all values on the sine wave as
equal to zero. This would not be correct.

Large x,y failure case:
If you have two calculation paths that symbolically should produce the
same value, of size one over eps, valid floating point implementations
may differ by an lsb or more. A single lsb error would be 1, much
greater than the test allows as 'nearly equal' for floating point
comparison.

1.0 + eps is the smallest value greater than 1.0 that is
distinguishable from 1.0. Long chains of floating point calculations
that would symbolically be expected to produce a value of 1.0 may be
expected to produce errors of an eps or more due to the inexactness of
floating point representation. These errors should be allowed for in
floating point equality comparison. The size of the minimum
representable error scales as the floating point value varies, so a
constant comparison value is not appropriate.
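Both failure cases are easy to demonstrate (a sketch, using IEEE 754
double machine epsilon):

```python
import math
import sys

eps = sys.float_info.epsilon  # 2**-52, ~2.22e-16

# Small case: a sine wave with peak amplitude eps/2.  The samples are
# nonzero, yet "abs(x - y) < eps" calls every one of them equal to 0.
samples = [0.5 * eps * math.sin(2 * math.pi * k / 8) for k in range(1, 4)]
print(all(abs(s - 0.0) < eps for s in samples))  # True: all "equal" to 0
print(any(s == 0.0 for s in samples))            # False: none actually is

# Large case: two values of size 1/eps = 2**52 that differ by one lsb
# (the lsb spacing at that magnitude is exactly 1.0) fail the test.
big = 1.0 / eps                      # 2**52
print(abs(big - (big + 1.0)) < eps)  # False: one lsb apart, "not equal"
```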

Mark was right, and DaveA's discussion explains a strategy to use.

Dale B. Dalrymple
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Float precision and float equality

2009-12-07 Thread dbd
On Dec 7, 4:28 am, sturlamolden  wrote:
> ...
>
> You don't understand this at all do you?
>
> If you have a sine wave with an amplitude less than the truncation
> error, it will always be approximately equal to zero.
>
> Numerical maths is about approximations, not symbolic equalities.
>
> > 1.0 + eps is the smallest value greater than 1.0, distinguishable from
> > 1.0.
>
> Which is the reason 0.5*eps*sin(x) is never distinguishable from 0.
> ...

A calculated value of 0.5*eps*sin(x) has a truncation error on the
order of eps squared. 0.5*eps and 0.495*eps are readily distinguished
(well, at least for values of eps << 0.01 :).
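A quick check of the point (a sketch; eps here is IEEE 754 double
machine epsilon):

```python
import sys

eps = sys.float_info.epsilon  # ~2.22e-16

a = 0.5 * eps
b = 0.495 * eps
# Both are ordinary normalized doubles, each carrying ~16 significant
# digits of its own; their difference is resolved far below eps itself.
print(a == b)  # False: readily distinguished
print(a - b)   # ~1.1e-18, well above the float spacing near eps
```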

At least one of us doesn't understand floating point.

Dale B. Dalrymple

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Float precision and float equality

2009-12-10 Thread dbd
On Dec 7, 12:58 pm, Carl Banks  wrote:
> On Dec 7, 10:53 am, dbd  wrote:
> > ...
>
> You're talking about machine epsilon?  I think everyone else here is
> talking about a number that is small relative to the expected smallest
> scale of the calculation.
>
> Carl Banks

When you implement an algorithm supporting floats (per the OP's post),
the expected scale of calculation is the full range of floating point
numbers. Over the normalized range of the floating point
representation, the intrinsic truncation error is proportional to the
value represented. At absolute values below the normalized range, the
truncation error has a fixed absolute value. These are not necessarily
'machine' characteristics but characteristics of the floating point
format implemented.
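These format characteristics are visible directly from Python's
sys.float_info (a sketch; the values shown are for IEEE 754 binary64):

```python
import sys

fi = sys.float_info
eps = fi.epsilon   # relative spacing over the normalized range
tiny = fi.min      # smallest normalized positive double, 2**-1022

# Over the normalized range, truncation error scales with the value:
# the spacing just above 1.0 is eps, just above 2.0 it is 2*eps.
print((1.0 + eps) - 1.0 == eps)  # True: eps is the lsb at 1.0
print((2.0 + eps) - 2.0)         # 0.0: eps is below the lsb at 2.0

# Below the normalized range the spacing is a fixed absolute value,
# the smallest subnormal (2**-1074, about 5e-324):
subnormal_step = tiny * eps
print(subnormal_step > 0.0)         # True: smallest positive double
print(subnormal_step / 2.0 == 0.0)  # True: nothing between it and 0
```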

A useful description of floating point issues can be found at:

http://dlc.sun.com/pdf/800-7895/800-7895.pdf

Dale B. Dalrymple
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Float precision and float equality

2009-12-11 Thread dbd
On Dec 10, 2:23 pm, Carl Banks  wrote:

> ...
> > A useful description of floating point issues can be found:
>
> [snip]
>
> I'm not reading it because I believe I grasp the situation just fine.
> ...
>
> Say I have two numbers, a and b.  They are expected to be in the range
> (-1000,1000).  As far as I'm concerned, if they differ by less than
> 0.1, they might as well be equal.
> ...
> Carl Banks

I don't expect Carl to read. I posted the reference for the OP whose
only range specification was "calculations with floats" and "equality
of floats" and who expressed concern about "truncation errors". Those
who can't find "floats" in the original post will find nothing of
interest in the reference.

Dale B. Dalrymple
-- 
http://mail.python.org/mailman/listinfo/python-list