[Python-Dev] test_zlib.py suggestion

2018-08-01 Thread Michael
I have a buildbot running (yay me!) and was surprised to see
test_zlib fail on AIX.


There is not an issue with test_zlib, but I do have a suggestion.

I was getting an error with test_flushes(). On Python 2.7.15 the test
passes, while on Python 3.8 (and earlier, I expect) it fails. The
difference is the addition of the two modes 'Z_PARTIAL_FLUSH' and 'Z_BLOCK':


sync_opt = ['Z_NO_FLUSH', 'Z_SYNC_FLUSH', 'Z_FULL_FLUSH',
    'Z_PARTIAL_FLUSH', 'Z_BLOCK']

On default AIX installs zlib is, sadly, still at version 1.2.3, while
Z_BLOCK was 'improved' in version 1.2.5 - and the test fails on Z_BLOCK.


My suggestion is to enhance test_library_version() so that it
verifies that the minor (or smaller?) version number is at least '5'.
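
As an illustration of the idea (a sketch on my side - the helper name and
the exact comparison are my assumptions, not a proposed patch), the list of
flush modes could be trimmed based on the runtime zlib version:

    import zlib

    def zlib_runtime_at_least(want=(1, 2, 5)):
        # ZLIB_RUNTIME_VERSION reports the library actually loaded,
        # e.g. '1.2.3' on a default AIX install
        have = tuple(int(p) for p in zlib.ZLIB_RUNTIME_VERSION.split('.')[:3])
        return have >= want

    sync_opt = ['Z_NO_FLUSH', 'Z_SYNC_FLUSH', 'Z_FULL_FLUSH', 'Z_PARTIAL_FLUSH']
    if zlib_runtime_at_least((1, 2, 5)):
        # Z_BLOCK only behaves as the test expects on zlib 1.2.5 and newer
        sync_opt.append('Z_BLOCK')
    print(sync_opt)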


When I statically link Python with zlib-1.2.11, test_zlib passes all
sub-tests.


Again, not a python bug - but a suggestion for improving what is tested.

I can open an issue, and with a bit of assistance/interest from others 
I'll even do a PR.


Michael



[Python-Dev] spwd and AIX

2018-08-01 Thread Michael

a) I am looking at getting spwd support integrated for AIX.

b) my only concern is the sp_pwdp field - AIX really does not want to
reveal the encrypted password. Rather, AIX will return '!' (meaning there
is, or should be, a shadow password) or '*' (no user password). Would it
horribly break things if only '!' was returned, rather than the shadow
password?


It does not look terribly hard - but how do you deal with defaults such 
as 0 or -1 for the numeric values?
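
To make the question concrete, here is a rough sketch of what I imagine an
AIX result could look like (the field names are those of the existing spwd
module; the '!' value and the -1 defaults are my assumptions, not
implemented behaviour):

    import collections

    # same field order as spwd.struct_spwd
    struct_spwd = collections.namedtuple(
        "struct_spwd",
        "sp_namp sp_pwdp sp_lstchg sp_min sp_max sp_warn sp_inact "
        "sp_expire sp_flag")

    # '!' = a shadow password exists but is not revealed;
    # numeric values AIX does not provide default to -1
    entry = struct_spwd("guest", "!", -1, -1, -1, -1, -1, -1, -1)
    print(entry.sp_pwdp)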


Regards,

Michael



Re: [Python-Dev] test_zlib.py suggestion

2018-08-02 Thread Michael

On 01/08/2018 18:03, Brett Cannon wrote:

> Open an issue as this will surely get forgotten otherwise, then people can
> discuss on the issue how to handle this.
Will do. I have more homework - as I just tested AIX 7.1 TL5 as a build
system. The good news is libz.a is finally updated to something newer.
The bad news is they introduced libreadline-6, with no support, and it
seems it is going to be difficult to get it to link.


I so wish they would include the header ('include') files with these
standard libraries. Not doing so is one of the reasons things break -
duplicate files, overwritten files and/or mixed versions. Sigh.



Re: [Python-Dev] [python-committers] [RELEASED] Python 3.4.9 and Python 3.5.6 are now available

2018-08-05 Thread Michael

On 03/08/2018 03:22, Larry Hastings wrote:



On 08/02/2018 07:17 AM, Victor Stinner wrote:

3.4.9 and 3.5.6 have no more known security vulnerabilities :-)


Well, not to be a complete pill, but...

   https://bugs.python.org/issue17180
   https://bugs.python.org/issue17239
   https://bugs.python.org/issue19050

Sadly, just because they're languishing on bpo doesn't mean they 
aren't valid security vulnerabilities.


+1 - Sadly, not fixed after 5 years. Why? Because it isn't sexy, or for
fear of breaking things?


Breaking things could be a valid concern when it is a feature/design
change, but the whole point of a security fix is that the vulnerability
itself is the breakage. Not fixing it keeps everything that depends on it
(intentionally or not) broken as well. Any app that depends on the 'broken'
behavior needs to be fixed - rather than letting a known vulnerability go
from a 0-day to an 1825-day vulnerability (or is it 2000 already?).


I have only read the discussion for 17180 - but it seems anything old does
not get fixed simply because it did not get fixed years ago.


my two cents!

On a side note: I have been trying to test Python on different
"enterprise" distros of Linux and am amazed to see Python 2.7.5 as the
'standard'. Rather disheartening for all the good work that gets
done. I.e., I am amazed that CVEs like the ones fixed in 3.4.9 and
3.5.6 (and maybe already/later in 2.7.X) do not motivate distributions
to update to current levels.


oh my - up to 4 cents! :)

Thanks for the work - I'll get to packaging them for AIX.



//arry/





[Python-Dev] AIX and python tests

2018-08-05 Thread Michael

As I have time, I'll dig into these.

I have a couple of PRs already 'out there', which I hope someone will be
looking at when/as he/she/they have time. My time will also be intermittent.


My next test - and I hope not too difficult - would be test_utf8_mode.
The test:

FAIL: test_cmd_line (test.test_utf8_mode.UTF8ModeTests)

fails, and I am wondering if it is as simple as the AIX default locale
being ISO8859-1 while the test compares UTF-8 output with the locale
default. If that is the case, this test will obviously never succeed
as-is. Am I understanding the test properly? If yes, then I'll see what I
can come up with for a patch to the test for AIX. If no, I'll need some
hand-holding to help me understand the test.
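
A quick check of my reading (a sketch only; the values in the comments are
what I expect on a default AIX locale, not verified output):

    import locale
    import sys

    # a stock AIX locale should report something like 'ISO8859-1' here,
    # which can never equal 'UTF-8' unless the locale is overridden
    print(locale.getpreferredencoding(False))
    # 1 when the interpreter runs with -X utf8 (or PYTHONUTF8=1), else 0
    print(sys.flags.utf8_mode)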
A bigger challenge, and I think a major issue behind many of the test
failures, is test_ssl. Here I already know I'll need some assistance. I am
quite lost. I know AIX at an expert level, but I do not know Python
(especially Python internals, macros, etc.) and after about 3 levels I am
lost. I also find it hard to get 'artifacts' from the tests to know what is
expected.

Looking forward to assistance from various people - in understanding the
tests, and probably also to criticism of my Python coding.

Michael




Re: [Python-Dev] [issue11191] test_search_cpp error on AIX (with xlc)

2018-08-09 Thread Michael
On 09/08/2018 00:52, Michael Felt wrote:
> Change by Michael Felt :
>
>
> --
> pull_requests: +8196
>
> ___
> Python tracker 
> <https://bugs.python.org/issue11191>
> ___
>
Don't know why it says +8196. I would have expected +8709.
https://github.com/python/cpython/pull/8709

I would appreciate someone looking at/reviewing this. It corrects 6 or 7
errors from test_distutils (when not using gcc on AIX). And it is very
small - maybe smaller if the review asks for the comments to be removed.

Thx.



Re: [Python-Dev] [python-committers] Winding down 3.4

2018-08-20 Thread Michael
On 20/08/2018 14:52, Victor Stinner wrote:
>> "shutil copy* unsafe on POSIX - they preserve setuid/setgit bits"
>> https://bugs.python.org/issue17180
> There is no fix. A fix may break the backward compatibility. Is it really
> worth it for the last 3.4 release?
>
My idea would be to focus on a "fix" for 3.8, and then decide if it can,
in one form or another, be backported - and also how far. IMHO, the
discussion about breakage is holding back even an attempt at a
resolution for 3.8.

Michael





[Python-Dev] make patchcheck and git path

2018-08-24 Thread Michael
I am trying to be a 'good scout' and run "make patchcheck" more
regularly. However, I generally am not successful because I build and
test in separate directories.

There is access to git - just no ready reference to it in the build area.

So, not calling it a bug - but if someone else also experiences this,
and feels capable of making it more flexible - you will get thanks from
me, in any case!

Michael






[Python-Dev] AIX, test_ssl in particular, but also AIX failed tests in general - getting to the 'STABLE' side

2018-08-27 Thread Michael
Dear all,

Last week I experimented with test_ssl. My expectation was that the test
failures were caused by the openssl.base provided by IBM for AIX not
having a default certificate file (CApath). However, that is not the case.

The tests that fail are similar to:
self.assertRaisesRegex(ssl.SSLError, "PEM lib")

I started out by testing with something like:

if not AIX:
    with self.assertRaisesRegex(ssl.SSLError, "PEM lib"):
        ctx.load_cert_chain(BADCERT)
else:
    with self.assertRaises(ssl.SSLError):
        ctx.load_cert_chain(BADCERT)

This is after an analysis where I saw that calls to SSL were returning
a non-success status (!= 1) while ERR_peek_last_error() regularly
returned 0. Hence the frequent 'AssertionError: "PEM lib" does not
match "unknown error ..."', with "unknown error" being the string Python
provides.

While the above might remove the failure messages, it did not satisfy me.
So I downloaded openssl (1.0.2p) and compiled it - with no optimization!
And now, even on Python 3.6, I see:

test_ssl passed in 1 min 23 sec

== Tests result: SUCCESS ==

1 test OK.

In short, the failures of test_ssl may be ignored - as far as raising an
exception goes.

a) I am running a bot for Python, and once the argument
"--with-openssl=/opt/aixtools" is added my bot will stop showing these
errors. I mention this so that it is clear why they suddenly disappear
on my bot (but not elsewhere). Also, to alert Python-Dev that the AIX
platform, regarding ssl.py, _ssl.c and test_ssl.py, functions 'stable' -
but is not as friendly about saying why when (my guess) a heavily
optimized (I am thinking -O3 to -O5) library is used.

b) With this feedback - MAYBE - the team from IBM might review the way
they package openssl and make sure the messages are visible via
ERR_peek_last_error() et al. Ideally, IBM will notice and work on it
without prompting. One can dream :)

c) In the meantime - I am curious to know what this 'proof' means to
Python-Dev.

I have a simple goal - work through the tests that AIX has been failing
historically, figure out why they fail, and fix the tests. To that end
I have submitted several PRs - starting back in January (then nothing,
as no one ever seemed to notice), and in the last weeks several additional
ones. Victor has been kind enough to say he will look at the tests as he
has time (and is back from vacation). But we are all, or most of us,
working on our own time. My goal, rephrased, is to see AIX in the 'stable'
column so that when a test fails it is because there is a regression that
needs addressing - either in the test or in the proposed code change. So I
would be grateful if others were also looking.

I am not trying to re-invent the wheel and will not be surprised if my
'test fix' is not done in the 'Python' way. I'll learn over time - but
this calls for instructive (and critical) comments. "bij voorbaat dank"
aka Thanks in Advance.

So, hoping this helps - I'll continue as I can. Time and resources are
limited. And, I am very curious re: point c) above.

Great Days! everyone,

Michael






Re: [Python-Dev] make patchcheck and git path

2018-09-01 Thread Michael
On 28/08/2018 09:57, Stephen J. Turnbull wrote:
> Michael Felt (aixtools) writes:
>
>  > When building out of tree there is no .git reference. If I
>  > understand the process it uses git to see what files have changed,
>  > and does further processing on those.
>
> Just guessing based on generic git knowledge here:
>
> If you build in a sibling directory of the .git directory, git should
> "see" the GITDIR, and it should work.  Where is your build directory
> relative to the GITDIR?
I work in "parallel":
/data/prj/python/python-version
/data/prj/python/git/python-version

I suppose I should try setting GITDIR - but I think it would be better, at
least nicer, if "patchcheck" as a target did some checking for git early
on, rather than bailing out at the end. The result of the check might be
just a message to set GITDIR, e.g. something like the sketch below.
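
Purely a sketch on my side (the helper name and wording are made up, and I
have not checked how Tools/scripts/patchcheck.py is actually structured):

    import os
    import subprocess
    import sys

    def ensure_git_checkout(srcdir):
        """Fail early, with a hint, instead of bailing out at the end."""
        probe = subprocess.run(["git", "-C", srcdir, "rev-parse", "--git-dir"],
                               capture_output=True, text=True)
        if probe.returncode != 0:
            sys.exit("patchcheck: no git checkout found under %r; "
                     "set GITDIR (or run from the source tree)" % srcdir)

    ensure_git_checkout(os.getcwd())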
> I suspect you could also set GITDIR=/path/to/python/source/.git in
> make's process environment, and do "make patchcheck" outside of the
> Python source tree successfully.
I'll give this a try next time around. (vacation, so not really 'active'
atm).

Thanks for the suggestions.
>
> Regards,
>






[Python-Dev] debugging test_importlib.test_bad_traverse - script status is SUCCESS - but FAIL is expected.

2018-09-17 Thread Michael
I read the discussion related to issue32374. It seems intended to make
sure that other events that could cause the test to fail (i.e., the
program executing successfully) are caught early and/or ignored, so that
the program crashes - and the test succeeds.

I am having trouble figuring out why the script below does not fail on
AIX, and would appreciate your assistance in debugging what is
happening, i.e., getting deeper.

Many thanks!

    @unittest.skipIf(not hasattr(sys, 'gettotalrefcount'),
                     '--with-pydebug has to be enabled for this test')
    def test_bad_traverse(self):
        ''' Issue #32374: Test that traverse fails when accessing per-module
            state before Py_mod_exec was executed.
            (Multiphase initialization modules only)
        '''
        script = """if True:
                try:
                    from test import support
                    import importlib.util as util
                    spec = util.find_spec('_testmultiphase')
                    spec.name = '_testmultiphase_with_bad_traverse'

                    with support.SuppressCrashReport():
                        m = spec.loader.create_module(spec)
                except:
                    # Prevent Python-level exceptions from
                    # ending the process with non-zero status
                    # (We are testing for a crash in C-code)
                    pass"""
        assert_python_failure("-c", script)

To make sure the full debug info is loaded I added "-X dev", and for your
reading I added some additional print statements - and, for speed, ran the
command directly. Regardless of how I run it (calling it as a test, or
directly) the end result is the same.

# Note: I was not able to find the default "loader.create_module()" code
# to add debugging statements to.
# A pointer for that is welcome!

./python -X dev '-X' 'faulthandler' '-I' '-c' "if True:
    try:
        from test import support
        import importlib.util as util
        spec = util.find_spec('_testmultiphase')
        spec.name = '_testmultiphase_with_bad_traverse'

        m = spec.loader.create_module(spec)
        print(m)
        print(dir(m))
        print(m.__doc__)
        print(m.__loader__)
        print(m.__name__)
        print(m.__package__)
        print(m.__spec__)
    except:
        # Prevent Python-level exceptions from
        # ending the process with non-zero status
        # (We are testing for a crash in C-code)
        print('in except')"

['__doc__', '__loader__', '__name__', '__package__', '__spec__']
Test module _testmultiphase_with_bad_traverse
None
_testmultiphase_with_bad_traverse
None
None
root@x066:[/data/prj/python/git/Python3-3.8.0]echo $?
0

To get some additional idea of what is happening I added some fprintf
statements:

The additional debug info is: (see diff below)
1. bad_traverse:0
2. bad_traverse:0
1. bad_traverse:0
2. bad_traverse:0
1. bad_traverse:0
2. bad_traverse:0

*** To my SURPRISE *** only one routine with these print statements is
ever called. I was expecting more: only bad_traverse(...) gets called,
while I was expecting both bad_traverse_test (Objects/moduleobject.c) and
some kind of initialization of m_state->integer.

Since the macro Py_VISIT includes a return statement, and my debug
statements always print the second line, I assume Py_VISIT(m_state->integer)
is not doing anything (i.e., vret == 0):

/* Utility macro to help write tp_traverse functions.
 * To use this macro, the tp_traverse function must name its arguments
 * "visit" and "arg".  This is intended to keep tp_traverse functions
 * looking as much alike as possible.
 */
#define Py_VISIT(op)                                        \
    do {                                                    \
        if (op) {                                           \
            int vret = visit((PyObject *)(op), arg);        \
            if (vret)                                       \
                return vret;                                \
        }                                                   \
    } while (0)


Is this what it should be?

root@x066:[/data/prj/python/git/Python3-3.8.0]git status
On branch aix-pr
Changes not staged for commit:
  (use "git add ..." to update what will be committed)
  (use "git checkout -- ..." to discard changes in working directory)

    modified:   Modules/_testmultiphase.c
    modified:   Objects/moduleobject.c

no changes added to commit (use "git add" and/or "git commit -a")
root@x066:[/data/prj/python/git/Pytho

[Python-Dev] Nearly - all tests PASS for AIX

2018-09-17 Thread Michael
Dear all,

The last two months I have spent nearly all my free time cleaning up 'a
frustration' of mine - the long list of failing tests for AIX
(there were nearly 20 when I started).

atm - I am stuck on one - test_importlib (mail elsewhere), and the one I
just finished (test_httpservers) may be overly simplified (just skipping
the trailing-slash tests) - see issue34711 for a discussion. I would be
grateful for feedback before I post it as a PR - to avoid working in
circles.

I hope you, the developers and development-minded community, consider it
a useful contribution.

Currently - with all my proposed patches combined I have:

393 tests OK.

1 test failed:
    test_importlib

25 tests skipped:
    test_dbm_gnu test_devpoll test_epoll test_gdb test_idle
    test_kqueue test_lzma test_msilib test_ossaudiodev test_readline
    test_spwd test_sqlite test_startfile test_tcl test_tix test_tk
    test_ttk_guionly test_ttk_textonly test_turtle test_unicode_file
    test_unicode_file_functions test_winconsoleio test_winreg
    test_winsound test_zipfile64

1 re-run test:
    test_importlib

Awaiting comments and suggestions. Many thanks for your time.

Michael Felt






Re: [Python-Dev] debugging test_importlib.test_bad_traverse - script status is SUCCESS - but FAIL is expected.

2018-09-18 Thread Michael
On 17/09/2018 09:39, Michael wrote:
> I read the discussion related to issue32374. That seems to be sure that
> other events that could
> cause the test to fail (i.e., the program executes successfully) are
> caught early, and/or ignored
> so that the program fails - and the test succeeds.

After reading the below, I would appreciate knowing whether I should ask
that issue32374 be reopened and the test adjusted so that it is 'SkipIf'
AIX - or something else? I'll work on something else if needed, but I do
not want to guess the current intent of this test module.
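
For clarity, this is the kind of adjustment I mean (only a sketch, and only
my assumption of how it might look - not a decided fix):

    import sys
    import unittest

    class BadTraverseTests(unittest.TestCase):

        @unittest.skipIf(sys.platform.startswith("aix"),
                         "AIX tolerates reads through a NULL pointer, "
                         "so the expected crash never happens")
        def test_bad_traverse(self):
            # placeholder for the real assert_python_failure("-c", script)
            self.assertTrue(True)

    if __name__ == "__main__":
        unittest.main()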

+++

In: Modules/_testmultiphase.c - found where AIX and Linux differ
in their response to accessing a NULL pointer, in this case
m_state->integer

static int
bad_traverse(PyObject *self, visitproc visit, void *arg) {
    testmultiphase_state *m_state;
    FILE *err = fopen("/tmp/err", "a");

    m_state = PyModule_GetState(self);

    fprintf(err, "%s:%d\n", __FILE__, __LINE__); fflush(err);
    fprintf(err, "m_state:08%lx &m_state->integer:%08lx\n",
            m_state, &(m_state->integer));
    fclose(err);
    Py_VISIT(m_state->integer);
    /*
    #define Py_VISIT(op)
        do {
            if (m_state->integer) {
                int vret = visit((PyObject *)(m_state->integer), arg);
                if (vret) {
                    return vret;
                }
            }
        } while (0);
    */
    return 0;
}

The "m_state" and m_state->integer values are identical, but the
response is not.

root@x066:[/data/prj/python/git]uname
AIX
/data/prj/python/git/python3-3.8/Modules/_testmultiphase.c:631
m_state:080 &m_state->integer:

root@x074:/data/prj/python/git# uname
Linux
/data/prj/python/git/Python3-3.8.0/Modules/_testmultiphase.c:631
m_state:080 &m_state->integer:

++ Test program to demonstrate +++
AIX does not segmentfault on access of a NULL pointer
++
root@x074:/data/prj/python/git# cat nullpr.c
#include <stdio.h>
main()
{
    int *vpt = NULL;

    fprintf(stdout, "vpt = %08lx\n", vpt);
    if (*vpt)
        fprintf(stdout, "True\n");
    else
        fprintf(stdout, "False\n");
}

root@x074:/data/prj/python/git# rm -f nullpr; make nullpr; ./nullpr
make: Warning: File 'nullpr.c' has modification time 387 s in the future
cc nullpr.c   -o nullpr
nullpr.c:2:1: warning: return type defaults to 'int' [-Wimplicit-int]
 main()
 ^
make: warning:  Clock skew detected.  Your build may be incomplete.
vpt = 
Segmentation fault


++ AIX does not 'Segmentation fault' +
root@x066:[/data/prj/python/git]rm -r nullpr; make nullpr; ./nullpr
cc nullpr.c   -o nullpr
vpt = 
False






Re: [Python-Dev] Nearly - all tests PASS for AIX

2018-09-18 Thread Michael
On 17/09/2018 12:50, Michael wrote:
> Dear all,
>
> The last two months I have spent nearly all my free time to cleanup "a
> frustration" - from my side - the long list of failing tests for AIX
> (there were nearly 20 when I started).

== Tests result: SUCCESS ==

393 tests OK.

1 test altered the execution environment:
    test_threading

25 tests skipped:
    test_dbm_gnu test_devpoll test_epoll test_gdb test_idle
    test_kqueue test_lzma test_msilib test_ossaudiodev test_readline
    test_spwd test_sqlite test_startfile test_tcl test_tix test_tk
    test_ttk_guionly test_ttk_textonly test_turtle test_unicode_file
    test_unicode_file_functions test_winconsoleio test_winreg
    test_winsound test_zipfile64

Total duration: 13 min 30 sec
Tests result: SUCCESS

May I put this up as a PR - not for merging - but to see how it
performs, or does not perform, with the Travis CI etc. tests?

Regards,

Michael

p.s. - most of the time test_threading just passes. Going to rinse and
repeat!






Re: [Python-Dev] Nearly - all tests PASS for AIX

2018-09-20 Thread Michael
On 20/09/2018 15:18, Nick Coghlan wrote:
> That seems like a reasonable approach to me - it will also allow folks
> to give the changes a quick skim and provide suggestions for splitting
> it up into more easily reviewed PRs.
I already have them as individual PRs (8 open) -
https://github.com/python/cpython/pulls?q=is%3Aopen+is%3Apr+author%3Aaixtools+sort%3Aupdated-desc

But I'll add the combined one to get it through the grinder and see if
there are unexpected surprises.

Michael
>
> Cheers,
> Nick.







Re: [Python-Dev] AIX to stable, what does that take?

2018-10-05 Thread Michael
On 05/10/2018 16:15, Michael Haubenwallner wrote:
> Hi Michael,
>
> being on a similar road with Gentoo Prefix, I really do appreciate
> your AIX related work!
>
> However, for two (not so minor) topics I've got a little different
> experience, which I think should be mentioned here for completion:
Always.
>
> On 10/04/2018 11:13 AM, Michael Felt wrote:
>> On 10/4/2018 10:30 AM, INADA Naoki wrote:
>>> Hello,
>>>
>>> First of all, congratulations on passing all test on AIX.
>>> As a one of core developer, I don't know anything about AIX.
>>> If my change breaks AIX build, I can't investigate what's happened.
>>>
>>> So I think we need following in devguide:
>>>
>>> * Brief description about AIX, from developer's point of view.
>> This I might be able to do. Bullet form:
>> ... I build everything myself, using xlc
>> (gcc introduces the need for a GNU RTE, e.g., glibc).
> Using gcc does *not* require to use glibc or even GNU binutils at all.
> Except for gcc's own runtime libraries, there's no need for a GNU RTE.
> In fact, in Gentoo Prefix I do use gcc as the compiler, configured to
> use AIX provided binutils (as, ld, nm, ...), with AIX libc as RTE.
Well, this is something I learned - second hand - from someone who
worked hard to make much more OSS available than I have. Probably wrong,
then, in how I came to my conclusion - but the few things I tried to
compile 'as-is' into shared libraries would not work without also a lot of
the gcc compiler libraries. While I could have bitten the bullet and just
found a way to add those, I was warned that different versions of gcc need
different levels of supporting files.
>> * finally, a bit deeper: while the AIX linker loader supports svr4
>> shared libraries (it is the data, not the file name) it also supports
>> having multiple shared libraries in a classic archive. So, rather that
>> .../lib/libxxx.so and .../lib64/libxxx.so AIX prefers .../lib/libxxx.a
>> with two so-called members, with same or different names. The one found
>> is not it's name, but the symbol name and size of the ABI (32-bit or 64-bit)
> While this all is true, having multiple *versions* of one shared library in
> one single file is a PITA for package managers - both human or software.
Yes, it is a necessary pain. My secret is: a) do not touch /usr/lib -
leave what is there as it is; and in the few situations where something
must be in /usr/lib I add/replace the named archive members with my new
ones - and all the other members get extracted and a flag in their
respective headers modified, so that the linker knows they are only to be
used by applications that expect them - not by new applications.
> But fortunately, the AIX linker does support so called "Import Files",
> allowing for *filename based* shared library versioning like on Linux,
> while still allowing for both ABIs in a single library archive file.
>
> For example, libtool provides the --with-aix-soname={aix|svr4|both}
> configure flag since libtool-2.4.4.  Although the default will stay
> at 'aix' here, in Gentoo Prefix I do use 'svr4' only.  This actually
> is a package manager's decision, ideally for all depending packages.
> As gcc does use libtool, for more information please refer to
> https://gcc.gnu.org/install/configure.html#WithAixSoname
> But note that "Import Files" should work with xlc as well.
Actually, more detail than I really want to know. I recall the day when
a library was a collection of 'static' .o files, and ranlib was nearly
always needed - or you told the linker to link against that library
multiple times. And I recall going to my first conference where "RPC"
was the next greatest thing, and shared libraries were going to be such
a space saver - both on disk and in memory. And I was always more of a
bsd fan, having schooled myself on UNIX v7, then bsd (2.9 iirc) and bsd
4.1, 4.2 (in comes tcpip) and 4.3. But I diverge :p

To return briefly to the question of what AIX is like for the developer -
very flexible. You can choose your architecture and it generally just
works. I wrote some scripts as a front-end for packaging, and most
packages are a one-liner - which runs configure, make, make install (using
DESTDIR) and then packages the DESTDIR.

As far as python development and AIX goes I am open to helping others.
Been doing that for more years than I care to count. :)
>
> Thanks!
> /haubi/





Re: [Python-Dev] AIX to stable, what does that take?

2018-10-05 Thread Michael
On 05/10/2018 22:01, Rob Boehne wrote:
> On 10/5/18, 10:33 AM, "Python-Dev on behalf of Michael Haubenwallner" 
>  michael.haubenwall...@ssi-schaefer.com> wrote:
>
> >
> >... I build everything myself, using xlc
> >(gcc introduces the need for a GNU RTE, e.g., glibc).
> 
> Using gcc does *not* require to use glibc or even GNU binutils at all.
> Except for gcc's own runtime libraries, there's no need for a GNU RTE.
> In fact, in Gentoo Prefix I do use gcc as the compiler, configured to
> use AIX provided binutils (as, ld, nm, ...), with AIX libc as RTE.
> 
> I think the author was referring to the dependency on libgcc_s when using gcc.
> It's typical for native UNIX package builders to use gcc only when necessary 
> because the correct runtime is always installed (if the os running it is 
> newer) and therefore won't clash when something else in the process space is 
> using a different version of libgcc_s (I'm not sure what the ABI guarantees 
> are with libgcc_s specifically, and neither are UNIX packagers - not 
> necessarily anyway)
Thank you Rob. My core mistake was calling it glibc (that is the GNU
libc, now that I think back), and libgcc* is something else entirely.

In any case, I need to get my facts more accurate.
> It also eliminates the need to ship a version of libgcc_s as a shared library.
That would make life easier. I would probably have to package gcc on my
own to get it to work that way though.
>
>
>



[Python-Dev] Thanks - come a long way with getting tests resolved!

2019-01-15 Thread Michael
Many thanks to all who assisted with feedback. Back at the start of
August my make tests had a nasty block like this:

29 tests failed:
    test__xxsubinterpreters test_array test_asyncio test_cmath
    test_compile test_complex test_ctypes test_distutils test_embed
    test_float test_fractions test_getargs2 test_httplib
    test_httpservers test_imaplib test_importlib test_math test_poplib
    test_shutil test_signal test_socket test_ssl test_statistics
    test_strtod test_struct test_subprocess test_time test_timeout
    test_utf8_mode

And now that is down to:


4 tests failed:
test_eintr test_importlib test_multiprocessing_forkserver
test_multiprocessing_spawn
While I am still trying to figure out where the "multiprocessing" errors
come from, there is only one "test" left from the original list - and that
one, plus test_eintr, have PRs waiting for your approval.

I could not have gotten this far without help!
Sincerely,
Michael aka aixtools





Re: [Python-Dev] Question - Bug Triage for 3.4 & 3.5

2019-02-21 Thread Michael
On 20/02/2019 18:58, Victor Stinner wrote:
> If Python 3.4 was the current version when a bug was reported, I would
> expect the version field of the bug set to Python 3.4. Maybe the bug
> has been fixed in the meanwhile, maybe not. Closing all bugs affected
> to 3.4 is a risk of loosing useful information on real bugs: closed
> bugs are ignored by default in "Search" operation.

Short: add a version "3.X" when an issue is discovered in the latest
branch, versus issues reported against a "binary packaged" version.

Maybe the instructions re: setting the version (for new issues) should be
to leave it blank (especially if it is still valid on the latest, e.g. 3.X,
rather than an officially numbered branch), or to only indicate the
branches that will be considered for a fix.

Where "version" could be useful would be when someone finds something in
a "binary" release at say level 3.6, while testing shows it works fine
on 3.7 (or 3.8-alpha).

In other words, I see little value in a bug/issue reported when, e.g.,
3.4 was fully supported (or better: when it was becoming the latest
branch, comparable to labeling something 3.8 today). Maybe have a label
"3.X" that just goes with the flow - in addition to 3.4 (I am thinking
maybe it is not bad to know it was first reported against 3.4, but does
that also mean it wasn't there in 3.3?).


>
> Changing the version field: well, I don't think that it's useful. I
> usually ignore this field. And it would send like 3000 emails... I
> don't see the point.
>
> It's not uncommon that I fix bugs which 5 years old if not longer.
> Sometimes, I decide to look at all bugs of a specific module. And most
> of old bugs are still relevant nowadays. Sometimes, closing the bug as
> WONTFIX is the right answer, but it can only be done on a case by case
> basis.
>
> Note: Same rationale for Python 3.5, Python 2.6, or another other old
> Python version ;-)
>
> Bug triage is hard and requires plenty of time :-)
Again, if early on an issue could (also) be flagged as 3.X - this may
make it easier to track 'ancient' bugs - and to automate keeping them in
sight.







[Python-Dev] before I open an issue re: posix.stat and/or os.stat

2019-02-21 Thread Michael
My focus is AIX - and I believe I found a bug in the AIX include files in
64-bit mode. I'll take that up with IBM and AIX support. However, this
issue might also be valid for Python3.


The following is from Centos, not AIX

Python 2.7.5 (default, Jul 13 2018, 13:06:57)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.maxsize
9223372036854775807
>>> import posix
>>> posix.stat("/tmp/xxx")
posix.stat_result(st_mode=33188, st_ino=33925869, st_dev=64768L,
st_nlink=1, st_uid=0, st_gid=0, st_size=0, st_atime=1550742595,
st_mtime=1550742595, st_ctime=1550742595)
>>> st=posix.stat("/tmp/xxx")
>>> dev=st.st_dev
>>> min=posix.minor(dev)
>>> maj=posix.major(dev)
>>> min,max
(0, <built-in function max>)
>>> min
0
>>> max
<built-in function max>
>>> maj
253
>>> posix.minor(dev)
0
>>> posix.major(655536)
2560
>>> posix.major(65536)
256
>>> posix.major(256)
1
>>> import os
>>> os.major(256)
1
>>>

In AIX - 64-bit mode

Python 3.8.0a1+ (heads/master:e7a4bb554e, Feb 20 2019, 18:40:08) [C] on aix7
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys,os,posix
>>> sys.maxsize
9223372036854775807
>>> posix.major(256)
0
>>> posix.major(65536)
1
>>> posix.stat("/tmp/xxx")
os.stat_result(st_mode=33188, st_ino=12, st_dev=-9223371993905102841,
st_nlink=1, st_uid=202, st_gid=1954, st_size=0, st_atime=1550690105,
st_mtime=1550690105, st_ctime=1550690105)

AIX 32-bit:

root@x066:[/data/prj/python/git/python3-3.8.0.66]./python
Python 3.8.0a1+ (heads/master:e7a4bb554e, Feb 19 2019, 11:22:56) [C] on aix6
Type "help", "copyright", "credits" or "license" for more information.
>>> import os,sys,posix
>>> sys.maxsize
2147483647
>>> posix.major(65536)
1
>>> posix.stat("/tmp/xxx")
os.stat_result(st_mode=33188, st_ino=149, st_dev=655367, st_nlink=1,
st_uid=0, st_gid=0, st_size=0, st_atime=1550743517, st_mtime=1550743517,
st_ctime=1550743517)


To make it easier to view:

buildbot@x064:[/home/buildbot]cat osstat.c
/* the header names below were eaten by the archive; this is my reconstruction */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

main()
{
    dev_t dev;
    char *path = "/tmp/xxx";
    struct stat st;
    int minor,major;

    lstat(path,&st);

    printf("size: %d\n", sizeof(st.st_dev));
    dev = st.st_dev;
    minor = minor(dev);
    major = major(dev);
    printf("%016lx %ld %ld\n",dev,dev, (unsigned) dev);
    printf("%d,%d\n",major,minor);
}

buildbot@x064:[/home/buildbot]OBJECT_MODE=32 cc osstat.c -o osstat-32 &&
./osstat-32
size: 4
000a0007 655367 655367
10,7

And here is the AIX behavior (and bug - major() macro!)

buildbot@x064:[/home/buildbot]OBJECT_MODE=64 cc osstat.c -o osstat-64 &&
./osstat-64
size: 8
800a0007 -9223371993905102841 7
0,7

The same on AIX 6 (above is AIX7) - and also with gcc:

root@x068:[/data/prj]gcc -maix64 osstat.c -o osstat-64 && ./osstat-64
size: 8
800a0007 -9223371993905102841 42949672967
0,7
root@x068:[/data/prj]gcc -maix32 osstat.c -o osstat-32 && ./osstat-32
size: 4
000a0007 655367 0
10,7
root@x068:[/data/prj]

So, the AIX 'bug' with the major() macro has been around for ages - but
so has setting the MSB of st_dev.
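
To spell out the masking idea with the numbers from the 64-bit run above
(just arithmetic as an illustration, not proposed code):

    st_dev = -9223371993905102841             # os.stat() value on 64-bit AIX above
    unsigned_dev = st_dev & (2**64 - 1)       # 0x8000000A00000007 - note the MSB flag
    masked_dev = unsigned_dev & ~(1 << 63)    # 0x0000000A00000007 -> major 10, minor 7
    print(hex(unsigned_dev), hex(masked_dev))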

+

Now my question:

Will this continue to be enough space - i.e., is the dev_t size going to
be enough?

#ifdef MS_WINDOWS
    PyStructSequence_SET_ITEM(v, 2, PyLong_FromUnsignedLong(st->st_dev));
#else
    PyStructSequence_SET_ITEM(v, 2, _PyLong_FromDev(st->st_dev));
#endif

#define _PyLong_FromDev PyLong_FromLongLong

It seems so. However, is there something such as PyUnsignedLong, and is
that large enough for a "long long"? And if it exists, would that make
the value positive (for the first test)?

posix.major and os.major will need to mask away the MSB, and
posix.makedev and os.makedev will need to add it back.

OR - do I need to make the stat values "the same" in both 32-bit and
64-bit mode?

I am puzzled as to what you think is the correct approach.

Michael







Re: [Python-Dev] Making PyInterpreterState an opaque type

2019-02-21 Thread Michael
On 16/02/2019 23:34, Steve Dower wrote:
> I like that we're taking (small) steps to reduce the size of our API. 

I consider myself an "outsider", so an outsider's view: anything that
makes it clearer what is intended (aka supported) as the Python API is an
improvement.

Without clarity there is a chance (read: risk) that someone starts using
something and forces either a long and difficult process to make it part
of the official API, or an admission that it never should have been
done "that way".

Shorter: promote clarity.

IMHO: it is easier to move something from 'internal' to public than vice
versa, and whenever there is not a compelling reason to not put something
into some form of 'internal', do it - before not doing so bites you in a
nasty way.

My two (outsider) bits - :)






Re: [Python-Dev] [RELEASE] Python 2.7.16

2019-03-04 Thread Michael
On 04/03/2019 04:30, Benjamin Peterson wrote:
> Hello all,
> I'm pleased to announce the immediate availability of Python 2.7.16 for 
> download at https://www.python.org/downloads/release/python-2716/.

Congratulations!






[Python-Dev] How to (best) organize platform dependent blocks of Python code

2019-06-17 Thread Michael
CONCERN: How to organize platform dependent blocks of code.
POLICY/RECOMMENDATION (today) seems to be: arbitrary, although
sys.platform seems to be the favorite.

* I, as a 'python-consumer', was very surprised when I learned that
sys.platform is "set" when Python is built - and that platform.system()
is the call to make to get the run-time value. And, of course, they
answer the "question" - what platform am I 'running' on - quite
differently.

* There are several - too many - issues in the past where, in one form or
another, sys.platform, platform.system(), os.name (and I expect more, but
cannot think of them atm) are used throughout all of Python.

* A Python expert (e.g., a core dev) may understand - inherently - when
each of these is the best option.
As an 'experienced novice' I am surprised that a "build-time constant"
seems to be the preference. This preference is not new (maybe
platform.system() is "new", and os.name too broad).
* Zooming back to 2012, when "linux3" first appeared, it became apparent
that the build-time constant 'linux2' was making things difficult when
the packaging was built on linux3 but was running applications
developed on linux2 systems - and early on the move was made to just
return "linux" (issue13236). For years the recommendation was (and maybe
is?) to use sys.platform.startswith('linux'). (The run-time versus
build-time difference is illustrated just after this list.)

* In between there have been other discussions - and the concern remains
unresolved.

* Personally, not "satisfied", and reluctant to "give up" or "ignore"
something that continues to come up - I opened issue36624 to serve as a
discussion point, and worked through what would be needed to have "most"
platforms tested against a CONSTANT defined in test.support - to
provide an example of what it could look like.

* As a non-expert - I EXPECT guidance from the people who know best why
and when the different approaches are preferred. (I am hoping for
something better than 'historical' or 'personal preference'.)

* So, I am using this mail to python-dev to re-open the discussion. My
suggested focus is NOT to look at specific (test) code as it is now, but
to FOCUS from the perspective of Python3 and beyond - answering the
'concern': how should 'platform dependencies' be organized? In other
words, do not focus on an exception (although "listing" perceived
exceptions will be helpful). Focus on "policy" aka "good practice". With
that as a starting point, discussing, understanding and validating
exceptions to the guidelines will be much easier.
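
The run-time versus build-time difference mentioned above, illustrated (the
AIX values in the comments are what I expect on my systems - treat them as
assumptions, not authoritative output):

    import os
    import platform
    import sys

    print(sys.platform)       # build-time constant: 'aix' on 3.8+, 'aix6'/'aix7' on older builds
    print(platform.system())  # run-time uname value: 'AIX'
    print(os.name)            # only the broad family: 'posix'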

-- as a 'mentoring' project - I am willing to do as much as I am able to
"implement" something.
My first PR was a 'stab in the dark' and I have already learned "other
things", such as how to git rebase (read: merge!). The discussion in the
issue tracker and the PR conversation indicate to me that documentation of
the different approaches to organizing code dependencies is vital - and
that it is not easily available. The devguide is one area - but the 'core'
documentation is more important imho; I read, and compare against, the
Python documentation much more often than the devguide docs.

Sincerely,
Michael







[Python-Dev] Re: Is "%zd" format is portable now?

2019-08-05 Thread Michael
On 02/08/2019 04:12, Inada Naoki wrote:
> On Thu, Aug 1, 2019 at 10:21 PM Victor Stinner  wrote:
>> Hi INADA-san,
>>
>> Is it supported on macOS, FreeBSD, AIX, Android, etc.?
>>
>> My notes on platforms supported by Python:
>> https://pythondev.readthedocs.io/platforms.html
>>
>> For example, xlc C compiler seems to be commonly used on AIX. I don't
>> know how is its C99 support.
>>
>> Can we write an unit test somewhere to ensure that %zd works as expected?
>>
>> Victor
>>
> I don't know about AIX too.  I googled, but I can not find even man manual for
> snprintf(3) on AIX.
>
> I'm frustrated I wasted a few hours to read some PDFs and searching
> but I can not
> find any official document about snprintf(3).
> I feel it's impossible to support such platforms...

AIX man pages group the printf() functions together on one page.

During a quick search I found a man page for AIX 4.3.3 (which is roughly
1997), and also for AIX 7.2. So the core function is there (in libc.a).

I do not have any systems older than AIX 5.3 - and a much too simplified
program shows that printf(), sprintf() and snprintf() all support the same
formatting arguments.

In the documentation I do not see any reference to %z... support
(although I do see support for vectors using %v...).

As to XLC and c99, etc.: FYI, when called as "cc" the compiler behavior
is "pre-c99", whatever that may be. For strict c99 you call it as c99.

I call it as xlc and xlc_r to get c99 and some other extensions by
default. The only thing you need to remember is that XLC has supported
various standards - including c99 - for years. Just not, by default, when
called as "cc".

Further, (imho) it is not the compiler that "supports" a particular
printf() format - the compiler verifies the syntax and does the work for
the call - the actual support is provided by libc.a, which is maintained
by core AIX, not by the compiler developers/maintainers.


OK - a too-simple program, run on AIX 5.3, seems to indicate that XLC
(version 11, so quite old!) accepts "%zd".

If someone would be kind enough to mail me a better example of what
needs to be verified - I am happy to compile and publish the results.
root@x065:[/data/prj/aixtools/tests]cat printf-test.c
#include <stdio.h>

main()
{
    printf("Hello World - testing for %%zd support\n");
    printf("%%zd of 1001 == %zd\n", 1001);
    printf("\nTest complete\n");
}

root@x065:[/data/prj/aixtools/tests]./a.out
Hello World - testing for %zd support
%zd of 1001 == 1001

Test complete
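
If a Python-level check is wanted, something like this ctypes probe might be
a starting point (a sketch under my own assumptions - I have not tried it on
AIX, and variadic calls through ctypes can be platform-sensitive):

    import ctypes

    libc = ctypes.CDLL(None)          # dlopen(NULL): use the already-loaded libc
    buf = ctypes.create_string_buffer(64)
    # pass the size_t arguments with an explicit ctypes type so the width is right
    libc.snprintf(buf, ctypes.c_size_t(len(buf)), b"%zd", ctypes.c_size_t(1001))
    print(buf.value)                  # b'1001' if %zd is honoured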

fyi: AIX 4.3
https://sites.ualberta.ca/dept/chemeng/AIX-43/share/man/info/C/a_doc_lib/libs/basetrf2/snprintf.htm

and AIX 7.2
https://www.ibm.com/support/knowledgecenter/ssw_aix_72/p_bostechref/printf.html

> Except AIX, I believe all platforms supports size_t and %zd because
> it's very basic
> C99 feature.
>
> Regards,
> --
> Inada Naoki  
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/O7H4FBLDQBHSKGSEJQ2TU7IRNKUAPJDV/





signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/O55OVT5ORYQGBGVTJXIGQMXDO4D3F5Y4/


[Python-Dev] AIX XL C knows/supports C99: was Re: Re: Is "%zd" format is portable now?

2019-08-05 Thread Michael
On 01/08/2019 15:14, Victor Stinner wrote:
> For example, xlc C compiler seems to be commonly used on AIX. I don't
> know how is its C99 support.

FYI - this is from the XL C compiler reference document for V10, with a
copyright date of 2008.
Just an excerpt - c99 is supported. You just have to say CC=c99 or CC=xlc
or one of the other variants.

If more documentation is desired - just let me know.

Michael


Standards and specifications

XL C is designed to support the following standards and specifications. You
can refer to these standards for precise definitions of some of the
features found in this information.

* Information Technology – Programming languages – C, ISO/IEC 9899:1990,
also known as C89.

* Information Technology – Programming languages – C, ISO/IEC 9899:1999,
also known as C99.






[Python-Dev] Re: What to do about invalid escape sequences

2019-08-05 Thread Michael
On 05/08/2019 06:22, raymond.hettin...@gmail.com wrote:
I have read through (most of) the thread, and visited the issue referenced.
> We should revisit what we want to do (if anything) about invalid escape 
> sequences.
IMHO - revisit is okay, generally - but that was actually done a long
time ago. Now it is, to me, just another example of "Python" being
indecisive.
>
> For Python 3.8, the DeprecationWarning was converted to a SyntaxWarning which 
> is visible by default.  The intention is to make it a SyntaxError in Python 
> 3.9.
Sounds like this has been discussed in depth - and decided.
>
> This once seemed like a reasonable and innocuous idea to me; however, I've 
> been using the 3.8 beta heavily for a month and no longer think it is a good 
> idea.  The warning crops up frequently, often due to third-party packages 
> (such as docutils and bottle) that users can't easily do anything about.  And 
> during live demos and student workshops, it is especially distracting. 

Because it is not innocuous? My experience with developers (you mention
3rd party) is that they are lazy. If something is not up there, "in your
face", they will always have a reason to say "tomorrow". Or, perhaps,
since this has been a silent issue (and they are too lazy to read "What's
New"), they do not even know. The "head buried in the sand" sort of thing.

As to demos and workshops - YOU know this - so use it as an example to
explain how Python development works and DEPENDS on 3rd-party developers
paying attention. Yes, I am sure you are concerned about speeding up
adoption of Python3-latest-is-greatest, but that is not the world.

For example, RHEL8 is (coming) out. iirc, the way it comes out is what
they intend to support for 10 years - so chances are it will be Python
3.7 (at best) for several years. I have a system with CentOS 7, and
its default python is python2:

[root@t430 ~]# python3
bash: python3: command not found...
Similar command is: 'python'
[root@t430 ~]# python
Python 2.7.5 (default, Jun 20 2019, 20:27:34)
...

> I now think our cure is worse than the disease.  If code currently has a 
> non-raw string with '\latex', do we really need Python to yelp about it (for 
> 3.8) or reject it entirely (for 3.9)?   If someone can't remember exactly 
> which special characters need to be escaped, do we really need to stop them 
> in their tracks during a data analysis session?  Do we really need to reject 
> ASCII art in docstrings: ` \---> special case'?  
Simply put - yes, reject. You decided. There is a solution - perhaps
boring to implement - but, as mentioned, there are 'linters', so an
automated approach is likely possible. If not today, someone will write
a module.
> IIRC, the original problem to be solved was false positives rather than false 
> negatives:  filename = '..\training\new_memo.doc'.  The warnings and errors 
> don't do (and likely can't do) anything about this.
For "filenames" you could, perhaps, make an exception in the calls that
use them, e.g. when they are hard-coded in something such as
open("..\training\new_memo.doc") - see the example below. iirc, Windows
can (and does) accept forward slashes in file names for system calls like
open. The "shell" command.exe does not, because it uses "/" the way POSIX
shells use "-" (as in /h and -h for the "option" h).
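
Spelling out the example from the quote (nothing new, just the spellings
side by side):

    path_bad = '..\training\new_memo.doc'   # \t and \n ARE valid escapes: no warning, silently wrong path
    art      = 'more \latex markup'         # \l is NOT valid: SyntaxWarning in 3.8, SyntaxError planned for 3.9
    path_raw = r'..\training\new_memo.doc'  # a raw string keeps every backslash literally
    path_fwd = '../training/new_memo.doc'   # forward slashes also work for open() on Windows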
>
> If Python 3.8 goes out as-is, we may be punching our users in the nose and 
> getting almost no gain from it.  ISTM this is a job best left for linters.  
> For a very long time, Python has been accepting the likes of 'more \latex 
> markup' and has been silently converting it to 'more \\latex markup'.  I now 
> think it should remain that way.  This issue in the 3.8 beta releases has 
> been an almost daily annoyance for me and my customers. Depending on how you 
> use Python, this may not affect you or it may arise multiple times per day.

IMHO - Python will not be punching anyone. Python will be delivering on
"a promise", being decisive, being clear. Not following through only
creates insecurity - will they ever do it? Nah - no guts (that is the
3rd-party developers chatting). Users are your friend. If they really
want Python 3.8+ and they get lots of warning messages - THEY will
complain - and be heard - in ways CPython never will (or was).

Again - revisiting is fine - and I hope my 2 cents helps you stay the course!

Michael







[Python-Dev] Re: Is "%zd" format is portable now?

2019-08-06 Thread Michael
On 05/08/2019 11:16, Inada Naoki wrote:
> https://github.com/python/cpython/blob/1213123005d9f94bb5027c0a5256ea4d3e97b61d/Include/pyport.h#L158-L168
>
> This can be changed to this:
>
> #ifndef PY_FORMAT_SIZE_T
> /* "z" is defined C99 and portable enough.  We can use "%zd" instead of
>"%" PY_FORMAT_SIZE_T "d" for now.
> */
> # define PY_FORMAT_SIZE_T "z"
> #endif
>
Just in case you are curious - I made the change manually in pyport.h
and AIX passes all tests.

Hope this helps!






[Python-Dev] It all works - wheels for AIX - up to a point

2019-08-15 Thread Michael
Spent some hours - experimenting...

So - short story - it works. There remains a big chance of failure,
though, as the current tag ignores differences in TL level, build date,
etc. Another source of failure is that there is no recognition of 32-bit
versus 64-bit.
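
For illustration, this is the kind of extra information I would like the
tag to carry (purely my own sketch - not what pip/wheel produce today, and
the exact uname fields used are an assumption):

    import os
    import sys

    u = os.uname()
    bits = 64 if sys.maxsize > 2**32 else 32
    # today's tag is e.g. 'aix_6_1'; something like 'aix_7_1_64' would also
    # distinguish 32-bit from 64-bit builds (TL level/build date still missing)
    print("aix_{}_{}_{}".format(u.version, u.release, bits))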


However, focusing on the good news - the wheels install - without a
compiler being called.

Looking Forward!!
AIX has a long history of letting binaries from the past be reused: See:
https://www.ibm.com/support/knowledgecenter/ssw_aix_72/install/binary_compatability.html

++ The gritty details, for the curious/interested

After having used devpi (client) to upload to a test devpi-server I
pointed my pip at the devpi-server to download the packages.

Note: using the default PyPI "source" I would get two .gz files.

The initial upload is from an AIX 6.1 server, the download here is to an
AIX 7.1 server.

(testpi) devpi@x064:[/home/devpi/testpi]pip download -i 
http://x064:8441/aixtools/dev cffi
Looking in indexes: http://x064:8441/aixtools/dev
Collecting cffi
  Downloading
http://x064:8441/aixtools/dev/%2Bf/504/244e6fc36188b/cffi-1.12.3.tar.gz
(452kB)
 || 460kB 28.7MB/s
  Saved ./cffi-1.12.3.tar.gz
Collecting pycparser (from cffi)
  Downloading
http://x064:8441/aixtools/dev/%2Bf/d9d/f2f3597ea75ab/pycparser-2.19-py2.py3-none-any.whl
(111kB)
 || 112kB 28.4MB/s
  Saved ./pycparser-2.19-py2.py3-none-any.whl
Successfully downloaded cffi pycparser

So, a wheel (for pycparser) and a .gz (for cffi).

Phase 2:

repeat the upload of cffi, but now from an AIX 7.1 server.

Preview:

(piserv) devpi@x064:[/home/devpi/testpi/cffi-1.12.3]devpi list pycparser
http://x064:8441/aixtools/dev/+f/d9d/f2f3597ea75ab/pycparser-2.19-py2.py3-none-any.whl
http://x064:8441/aixtools/dev/+f/2a8/90514e40f6d1c/pycparser-2.19.tar.gz
(piserv) devpi@x064:[/home/devpi/testpi/cffi-1.12.3]devpi list cffi
http://x064:8441/aixtools/dev/+f/53d/50f8d231592dc/cffi-1.12.3-cp37-cp37m-aix_6_1.whl
http://x064:8441/aixtools/dev/+f/504/244e6fc36188b/cffi-1.12.3.tar.gz
http://x064:8441/aixtools/dev/+f/4d2/0d8ee28c40fe7/cffi-1.12.3-cp37-cp37m-aix_7_1.whl

(testpi) devpi@x064:[/home/devpi/xyz]pip download -i 
http://x064:8441/aixtools/dev cffi
Looking in indexes: http://x064:8441/aixtools/dev
Collecting cffi
  Downloading
http://x064:8441/aixtools/dev/%2Bf/4d2/0d8ee28c40fe7/cffi-1.12.3-cp37-cp37m-aix_7_1.whl
(197kB)
 || 204kB 20.7MB/s
  Saved ./cffi-1.12.3-cp37-cp37m-aix_7_1.whl
Collecting pycparser (from cffi)
  Downloading
http://x064:8441/aixtools/dev/%2Bf/d9d/f2f3597ea75ab/pycparser-2.19-py2.py3-none-any.whl
(111kB)
 || 112kB 23.3MB/s
  Saved ./pycparser-2.19-py2.py3-none-any.whl
Successfully downloaded cffi pycparser

Regards,
Michael



signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LABQ2YBCN2CN6FLHLTZYXV5Y2HXHK37B/


[Python-Dev] Re: Inline links in Misc/NEWS entries

2019-08-16 Thread Michael
On 16/08/2019 05:31, Kyle Stanley wrote:
> Yeah definitely, it was my intention to mention this in the devguide,
> particularly with adding an example of the Sphinx roles being used and
> explaining appropriate usage. I hadn't thought of linking to the list of
> roles (https://devguide.python.org/documenting/#id4), but that's definitely
> a good idea to include. I was just waiting for everyone to get a chance to
> provide feedback on the topic before expanding the devguide.
>
> After the devguide is updated, I was also planning on adding the markup to
> 3.8's Misc/NEWS entries (in the appropriate branch, as Ned recommended),
> and then work on the 3.9. I'll probably split it into several smaller PRs
> so it's easier to review.

There has been "a lot" of discussion re: things for new contributors to
do and learn.

a) this seems to be "well-defined", and imho, suitable as "easy", etc..
b) isn't this something we want new people to be more aware of (as you
said, you have been working with this for a year)
c) it is an area (Documentation) I have clearly 'missed' as I focused on
'other things', and, with myself and many projects I have worked on over
the years - Documentation seems to come in last. Getting new (and
newish, as myself) working here only makes us better suited for review
in the future.

So, I guess this is an area where you could "mentor", perhaps create
"issues" that specify the "paragraphs", or whatever you think are
appropriate 'chunks' to make review sensible (if not also easier).

Michael




signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/T5YQW2I624S7UU63AXNBNGOH7ASF6OWC/


[Python-Dev] Re: [WARNING] Some users who downloaded the Python 3.5.8 .xz tarball got the wrong version

2019-10-31 Thread Michael
On 31/10/2019 00:17, Larry Hastings wrote:
>
>
> Due to awkward CDN caching, some users who downloaded the source code
> tarballs of Python 3.5.8 got a preliminary version instead of the
> final version.  As best as we can tell, this only affects the .xz
> release; there are no known instances of users downloading an
> incorrect version of the .tgz file.
>
> If you downloaded "Python-3.5.8.tar.xz" during the first twelve hours
> of its release, you might be affected.  It's easy to determine this
> for yourself.  The file size (15,382,140 bytes) and MD5 checksum
> (4464517ed6044bca4fc78ea9ed086c36) published on the release page have
> always matched the correct version.  Also, the GPG signature file will
> only report a "Good signature" for the correct .xz file (using "gpg
> --verify").
>
> What's the difference between the two?  The only difference is that
> the final version also merges a fix for Python issue tracker #38243:
>
> https://bugs.python.org/issue38243
>
> The fix adds a call to "html.escape" at a judicious spot, line 896 in
> Lib/xmlrpc/server.py.  The only other changes are one new test, to
> ensure this new code is working, and an entry in the NEWS file.  You
> can see the complete list of changes here:
>
> https://github.com/python/cpython/pull/16516/files
>
> What should you do?  It's up to you.
>
>   * If you and your users aren't using the XMLRPC library built in to
> Python, you don't need to worry about which version of 3.5.8 you
> downloaded.
>   * If you downloaded the .tgz tarball or the Git repo, you already
> have the correct version.
>   * If you downloaded the xz file and want to make sure you have the
> fix, check the MD5 sum, and if it's wrong download a fresh copy
> (and make sure that one matches the known good MD5 sum!).
>
> To smooth over this whole sordid mess, I plan to make a 3.5.9 release
> in the next day or so.  It'll be identical to the 3.5.8 release; its
> only purpose is to ensure that all users have the same updated source
> code, including the fix for #38243.
>
>
> Sorry for the mess, everybody,
>
a) "Congratulations" on the 3.5.8 release

b) excellent solution - to up the release number!

c) Thanks!!

>
> //arry/
>
>
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/OYNQS2BZYABXACBRHBHV4RCEPQU5R6EP/
> Code of Conduct: http://python.org/psf/codeofconduct/




signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/LKXNJQLSNAOONGVAGUXMTBEF77QXEY65/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP proposal to limit various aspects of a Python program to one million.

2019-12-06 Thread Michael
On 03/12/2019 17:15, Mark Shannon wrote:
> Hi Everyone,
>
> I am proposing a new PEP, still in draft form, to impose a limit of
> one million on various aspects of Python programs, such as the lines
> of code per module.
>
> Any thoughts or feedback?
>
> The PEP:
> https://github.com/markshannon/peps/blob/one-million/pep-100.rst
>
> Cheers,
> Mark. 

Shortened the mail - as I want my comment to be short. There are many
longish ones, and have not gotten through them all.

One guiding principle I learned from a professor (whose name I sadly forget).

A program has exactly zero (0) of something, one (1) of something, or
infinitely many. The moment a limit gets set to X, the case for X+1 appears.

Since we are not talking about zero or one - I guess my comment is: make
sure it can be used to infinity.

Regards,

Michael

p.s. If this has already been suggested - my apologies for any noise.




signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IDACDRYRQDYUB3YZFANONLKZFR6LPOE7/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Python Documentation and AIX specifics - how to proceed?

2019-12-26 Thread Michael
First - best wishes all for a happy and healthy 2020!

As my nickname implies - my primary means to contribute to Python is
with regard to AIX. One of the things I recently came across is
Misc/README.AIX which was last updated sometime between 2010 and 2014. I
am thinking a facelift is in order. Assuming 2010-2011 as the original
date - much has changed and many of the comments are no longer accurate.

Before doing so, I would like to check here whether having all tests
pass on the 3.8 bot implies that there are no outstanding issues. As to
the historical issues in the current document - these could either be
deleted, or replaced by a short text describing when they were resolved
(e.g., Python 3.7, or just "resolved", but no one knows exactly how or
when - whether it was a Python change or a platform (AIX) change).

What I see as being more relevant is the description re: how to build
Python for AIX - for the "do it yourself"-ers.

So, besides the direct question re: what to say about the old "known
issues" and whether there are no known issues aka, no issues identified
via the standard tests - I would appreciate feedback on what is
considered appropriate - anno 2020 - for any Misc/README.platform text.

Additionally, I am willing to work on other areas of the standard
documentation where it is either needed or considered appropriate for
platform specific details and/or examples.

This is not something I would try to get done in a single PR. Instead I
am thinking of a single, longer-term issue - and multiple PRs to work
through corrections and additions during 2020. The focus would be Python
3.9 and beyond, yet where appropriate backported to Python 3.8 or even 3.7.

Sincerely,

Michael




signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/6BMGBUK4A54N56UOEE3OSXXU2TSU3MIN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Python Documentation and AIX specifics - how to proceed?

2020-01-07 Thread Michael
On 03/01/2020 00:53, Brett Cannon wrote:
> That whole directory is mostly outdated (e.g. those Wing files are two major 
> versions behind and we have ripped out all other editor-specific files in the 
> repo). I think the first question is what do we want for that directory to 
> be? Based on that we can decide if something like AIX build instructions 
> makes sense or if we should just gut the directory.
>
> For me personally, I'm torn. While helping out other folks using AIX through 
> that text file might be good due to the work you put in, Michael, while not 
> being an officially supported OS, I'm also fine with emptying that folder out 
> down to the bare minimum.

Yes, I am sad that my support is not sufficient for closing the gap to
being a recognized platform. No additional discussion. IMHO - while AIX
may not be important to Python - Python working properly is important
for AIX users who need applications developed using Python. My start
came about because I was asked to help to get a Python application
working - and that continues to be my primary area of interest -
supporting users.

In that sense - supporting users (and I see someone doing a self-build
as a user) I am willing to re-work the information needed to build
Python3 on AIX. Where the file is stored is not exciting. IMHO, not
having that directory may make it easier to find. I only found it by
using find, xargs, and grep.

Fast forward - thank you Petr - I'll get something started and add a
link (read: replace the current contents with a link) once I have
something - and whether this info ever becomes part of the distribution
or not is something that can be determined later.

> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at 
> https://mail.python.org/archives/list/python-dev@python.org/message/O5KPNXWTFLGRB6XAAS3WXYYYHNFDXVG7/
> Code of Conduct: http://python.org/psf/codeofconduct/





signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/OC7ITKBZYSOHBNR6HUYJWNYIMKFPOLUN/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Differences in what lint (e.g., mypy, flake8, etc) require from new code and what Cpython uses/returns - e.g., strings with single/double quotes.

2020-01-09 Thread Michael
Hi all,

Last year I was struggling to get some code to pass CI in
pypa/packaging. There were other issues, but one that surprised me most
was learning to ALWAYS use double quotes (") to get the code to pass the
lint check (type checking). Anything using single quotes (') as string
delimiters was not accepted as a STRING type.

I am not questioning the demands of the lint checker - rather, I am
offering my services (aka time) to work through core to make CPython
pass its own (I assume) PEP recommendations or guidelines.

Again, this is not a bug-report - but an offer to assist with what I see
as an inconsistency.

If my assistance in this area is appreciated/desired I would appreciate
one or more core developers to assist me with managing the PR process.
Not for speed, but I would not want to burden just one core developer.

My approach would entail opening a number of related 'issues' for
different pieces: e.g., Lib, Modules, Python, Documentation where some
Lib|Modules|Python pieces might need to be individual 'issues' due to
size and/or complexity.

I do not see this as happening 'overnight'.

so-called simple things to fix:

In Documentation:

The return value is the result of the evaluated expression. Syntax
errors are reported as exceptions. Example:

>>> x = 1
>>> eval('x+1')
2

Change the text above to state eval("x+1") - assuming the lint process
would no longer accept eval('x+1') as proper typed syntax. And, then,
hoping this is still regarded as 'simple', make sure code such as
ctypes.util.find_library() is consistent, returning strings terminated
by double quotes rather than the single quotes as of now:

>>> import ctypes.util
>>> ctypes.util.find_library("c")
'libc.a(shr.o)'
>>> ctypes.util.find_library("ssl")
'libssl.a(libssl.so)'

Something more "complex" may be the list of names dir() returns:

>>> import sysconfig
>>> dir(sysconfig)
['_BASE_EXEC_PREFIX', '_BASE_PREFIX', '_CONFIG_VARS', '_EXEC_PREFIX',
 '_INSTALL_SCHEMES', '_PREFIX', '_PROJECT_BASE', '_PYTHON_BUILD',
 '_PY_VERSION', '_PY_VERSION_SHORT', '_PY_VERSION_SHORT_NO_DOT',
 '_SCHEME_KEYS', '_USER_BASE', '__all__', '__builtins__', '__cached__',
 '__doc__', '__file__', '__loader__', '__name__', '__package__',
 '__spec__', '_expand_vars', '_extend_dict', '_generate_posix_vars',
 '_get_default_scheme', '_get_sysconfigdata_name', '_getuserbase',
 '_init_non_posix', '_init_posix', '_is_python_source_dir', '_main',
 '_parse_makefile', '_print_dict', '_safe_realpath', '_subst_vars',
 '_sys_home', 'get_config_h_filename', 'get_config_var',
 'get_config_vars', 'get_makefile_filename', 'get_path',
 'get_path_names', 'get_paths', 'get_platform', 'get_python_version',
 'get_scheme_names', 'is_python_build', 'os', 'pardir',
 'parse_config_h', 'realpath', 'scheme', 'sys']

Regards,
Michael



signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DZGPIUWBI75KMIG6VPRXKDZBN3L4MQKF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Differences in what lint (e.g., mypy, flake8, etc) require from new code and what Cpython uses/returns - e.g., strings with single/double quotes.

2020-01-09 Thread Michael
On 09/01/2020 13:16, Steven D'Aprano wrote:
> Hi Michael, and welcome!
>
>
> On Thu, Jan 09, 2020 at 11:37:33AM +0100, Michael wrote:
>
>> I am not questioning the demands of the lint checker - rather - I am
>> offering my services (aka time) to work through core to make CPython
>> pass it's own (I assume) PEP recommendations or guidelines.
> That would create a lot of code churn for no good reason. The CPython 
> project doesn't encourage making style changes just for the sake of 
> changing the style.

Code churn is not my goal. Passing pypa/packaging CI (as Paul commented)
is only required by them - afaik. And a tool such as `black` can fix
these things auto/magic/ally. But it got me thinking, as this is not the
first time I have been forced to make a style change because there is a
new knob that can be turned - while everything else is ignored. Further,
this is something I would expect to be extremely boring to someone
wanting to make functional improvements rather than just "sweeping the
floor".

So thank you for your clear responses! Discussion closed.

Best wishes for 2020!

>
> The most important part of PEP 8 is this:
>
> https://www.python.org/dev/peps/pep-0008/#a-foolish-consistency-is-the-hobgoblin-of-little-minds
>
> Going through the entire stdlib creating bug reports for style issues is 
> not a productive use of anyone's time. There are large numbers of open 
> bug reports and documentation issues that are far more important.
>
> Regarding strings, PEP 8 doesn't recommend either single quotes or 
> double quotes for the std lib, except that doc strings should use triple 
> double-quotes. Each project or even each module is free to choose its 
> own rules.
>
> https://www.python.org/dev/peps/pep-0008/#string-quotes
>
>
>



signature.asc
Description: OpenPGP digital signature
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DEYPS6RM3IZ5LAWAVSZPLIFTZOHFU25M/
Code of Conduct: http://python.org/psf/codeofconduct/


Re: [Python-Dev] [edk2] Official port of Python on EDK2

2017-11-03 Thread Michael Zimmermann
> FYI, this library adds thread support to UEFI:
>
> https://github.com/Openwide-Ingenierie/GreenThreads-UEFI

IMO this library has some crucial problems like changing the TPL
during context switching.
For my project "EFIDroid" I've invested many months analyzing, testing
and implementing my own threading implementation based on
LK(LittleKernel, a MIT licensed project) threads and get/set -context.

The result is a pretty stable implementation which can even be used in
UEFI drivers:
https://github.com/efidroid/uefi_edk2packages_EFIDroidLKLPkg/tree/master/UEFIThreads
I'm currently using this lib for my LKL(LinuxKernelLibrary) port to be
able to use linux touchscreen drivers in UEFI - so you could say it
has been well tested.

The only "problem" is that it only supports ARM right now and that the
get/set context implementation was copied (and simplified) from glibc
which means that this part is GPL code.

Thanks
Michael Zimmermann

On Thu, Nov 2, 2017 at 8:37 PM, Blibbet  wrote:
> On 11/02/2017 09:41 AM, Jayaprakash, N wrote:
>> Would you consider adding thread support in this port of Python for
> EDK2 shell?
>
> FYI, this library adds thread support to UEFI:
>
> https://github.com/Openwide-Ingenierie/GreenThreads-UEFI
>
> Note that the library is GPLv2, ...but the author (a 1-person project)
> could be asked to relicense to BSD to fit into Tianocore.
>
> Note that library is currently Intel x64-centric, and contains a bit of
> assembly. Will need some ARM/RISC-V/x86 contributions.
>
> HTH,
> Lee Fisher
> ___
> edk2-devel mailing list
> edk2-de...@lists.01.org
> https://lists.01.org/mailman/listinfo/edk2-devel
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Add __reversed__ methods for dict

2018-06-08 Thread Michael Selik
Am I correct in saying that the consensus is +1 for inclusion in v3.8?

The last point in the thread was INADA Naoki researching various
implementations and deciding that it's OK to include this feature in 3.8.
As I understand it, Guido was in agreement with INADA's advice to wait for
MicroPython's implementation of v3.7. Since INADA has changed his mind, I'm
guessing it's all in favor?
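
For anyone catching up on the thread, a minimal sketch of the behaviour
being proposed, assuming it lands as discussed for 3.8:

d = {'a': 1, 'b': 2, 'c': 3}
list(reversed(d))           # ['c', 'b', 'a'] - keys in reverse insertion order
list(reversed(d.items()))   # [('c', 3), ('b', 2), ('a', 1)]
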
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-22 Thread Michael Selik
On Fri, Jun 22, 2018 at 8:09 AM Antoine Pitrou  wrote:

> Thank you.  Personally, I'd like to see feedback from
> educators/teachers after they take the time to read the PEP and take
> some time to think about its consequences.
>

I've started testing the proposed syntax when I teach. I don't have a large
sample yet, but most students either dislike it or don't appreciate the
benefits. They state a clear preference for shorter, simpler lines at the
consequence of more lines of code. This may partially be caused by the
smaller screen real estate on a projector or large TV than a desktop
monitor.

My intuition is that one strength of Python for beginners is the relative
lack of punctuation and operators compared with most other languages. This
proposal encourages denser lines with more punctuation. Because of the
order of operations, many uses of ``:=`` will also require parentheses.
Even relatively simple uses, like ``if (match := pattern.search(data)) is
not None:`` require doubled parentheses on one side or the other. Beginners
are especially prone to typographical errors with mismatched parentheses
and missing colons and get easily frustrated by the associated syntax
errors.


Given the following options:

A.

if (row := cursor.fetchone()) is None:
raise NotFound
return row


B.

row = cursor.fetchone()
if row is None:
raise NotFound
return row


C.

if (row := cursor.fetchone()) is not None:
return row
raise NotFound


D.

row = cursor.fetchone()
if row is not None:
return row
raise NotFound


The majority of students preferred option B. I also tested some regex match
examples. Results were similar.




> My main concern is we're introducing a second different way of doing
> something which is really fundamental.
>

The few students who like the proposal ask why it requires creating a new
operator instead of repurposing the ``=`` operator.

I'll reserve my personal opinions for a different thread.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-22 Thread Michael Selik
On Fri, Jun 22, 2018 at 10:02 AM Michael Selik  wrote:

> On Fri, Jun 22, 2018 at 8:09 AM Antoine Pitrou 
> wrote:
>
>> Thank you.  Personally, I'd like to see feedback from
>> educators/teachers after they take the time to read the PEP and take
>> some time to think about its consequences.
>
>
I forgot to add that I don't anticipate changing my lesson plans if this
proposal is accepted. There's already not enough time to teach everything
I'd like. Including a new assignment operator would distract from the
learning objectives.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-22 Thread Michael Selik
On Fri, Jun 22, 2018 at 10:19 AM Chris Angelico  wrote:

> On Sat, Jun 23, 2018 at 3:02 AM, Michael Selik  wrote:
> > On Fri, Jun 22, 2018 at 8:09 AM Antoine Pitrou 
> wrote:
> >>
> >> Thank you.  Personally, I'd like to see feedback from
> >> educators/teachers after they take the time to read the PEP and take
> >> some time to think about its consequences.
> >
> >
> > I've started testing the proposed syntax when I teach. I don't have a
> large
> > sample yet, but most students either dislike it or don't appreciate the
> > benefits. They state a clear preference for shorter, simpler lines at the
> > consequence of more lines of code.
>
> This is partly because students, lacking the experience to instantly
> recognize larger constructs, prefer a more concrete approach to
> coding. "Good code" is code where the concrete behaviour is more
> easily understood. As a programmer gains experience, s/he learns to
> grok more complex expressions, and is then better able to make use of
> the more expressive constructs such as list comprehensions.
>

I don't think that's the only dynamic going on here. List comprehensions
are more expressive, but also more declarative and in Python they have nice
parallels with SQL and speech patterns in natural language. The concept of
a comprehension is separate from its particular expression in Python. For
example, Mozilla's array comprehensions in Javascript are/were ugly [0].

Students who are completely new to programming can see the similarity of
list comprehensions to spoken language. They also appreciate the revision
of certain 3-line and 4-line for-loops to comprehensions. I didn't get the
same sense of "Oh! That looks better!" from my students when revising code
with an assignment expression.

Despite my best efforts to cheerlead, some students initially dislike list
comprehensions. However, they come around to the idea that there's a
tradeoff between line density and code block density. Comprehensions have a
3-to-1 or 4-to-1 ratio of code line shrinkage. They're also often used in
sequence, like piping data through a series of transforms. Even if students
dislike a single comprehension, they agree that turning 15 lines into 5
lines improves the readability.

In contrast, an assignment expression only has a 2-to-1 code line
compression ratio. It might save a level of indentation, but I think there
are usually alternatives. Also, the assignment expression is less likely to
be used several times in the same block.

A good pitch for an assignment expression is refactoring a cascade of
regular expressions:


for line in f:
mo = foo_re.search(line)
if mo is not None:
foo(mo.groups())
continue

mo = bar_re.search(line)
if mo is not None:
bar(mo.groups())
continue

mo = baz_re.search(line)
if mo is not None:
baz(mo.groups())
continue


Here the assignment operator makes a clear improvement:

for line in f:
if (mo := foo_re.search(line)) is not None:
foo(mo.groups())
elif (mo := bar_re.search(line)) is not None:
bar(mo.groups())
elif (mo := baz_re.search(line)) is not None:
baz(mo.groups())


However, I think this example is cheating a bit. While I've written similar
code many times, it's almost never just a function call in each if-block.
It's nearly always a handful of lines of logic which I wouldn't want to cut
out into a separate function. The refactor is misleading, because I'd
nearly always make a visual separation with a newline and the code would
still look similar to the initial example.


[0]
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Array_comprehensions
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-24 Thread Michael Selik
This thread started with a request for educator feedback, which I took to
mean observations of student reactions. I've only had the chance to test
the proposal on ~20 students so far, but I'd like the chance to gather more
data for your consideration before the PEP is accepted or rejected.



On Sun, Jun 24, 2018 at 11:09 AM Steven D'Aprano 
wrote:

> Remember, the driving use-case which started this (ever-so-long)
> discussion was the ability to push data into a comprehension and then
> update it on each iteration, something like this:
>
> x = initial_value()
> results = [x := transform(x, i) for i in sequence]
>

If that is the driving use-case, then the proposal should be rejected. The
``itertools.accumulate`` function has been available for a little while now
and it handles this exact case. The accumulate function may even be more
readable, as it explains the purpose explicitly, not merely the algorithm.
And heck, it's a one-liner.

results = accumulate(sequence, transform)
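
Spelled out a little more - a sketch that assumes the initial= parameter
accumulate grew in 3.8, since the running total needs a starting value:

from itertools import accumulate

x = initial_value()
# accumulate yields x first, then transform(x, s0), then transform of
# that result and s1, and so on - dropping the leading x matches the
# comprehension quoted above.
results = list(accumulate(sequence, transform, initial=x))[1:]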


The benefits for ``any`` and ``all`` seem useful. Itertools has
"first_seen" in the recipes section. While it feels intuitively useful, I
can't recall ever writing something similar myself. For some reason, I
(almost?) always want to find all (counter-)examples and aggregate them in
some way -- min or max, perhaps -- rather than just get the first.

Even so, if it turns out those uses are quite prevalent, wouldn't a new
itertool be better than a new operator? It's good to solve the general
problem, but so far the in-comprehension usage seems to have only a handful
of cases.



On Fri, Jun 22, 2018 at 9:14 PM Chris Barker via Python-Dev <
python-dev@python.org> wrote:

> again, not a huge deal, just a little bit more complexity
>

I worry that Python may experience something of a "death by a thousand
cuts" along the lines of the "Remember the Vasa" warning. Python's greatest
strength is its appeal to beginners. Little bits of added complexity have a
non-linear effect. One day, we may wake up and Python won't be recommended
as a beginner's language.




On Fri, Jun 22, 2018 at 7:48 PM Steven D'Aprano  wrote:

> On Fri, Jun 22, 2018 at 10:59:43AM -0700, Michael Selik wrote:
>
> Of course they do -- they're less fluent at reading code. They don't
> have the experience to judge good code from bad.
>

On the other hand, an "expert" may be so steeped in a particular subculture
that he no longer can distinguish esoteric from intuitive. Don't be so fast
to reject the wisdom of the inexperienced.



> The question we should be asking is, do we only add features to Python
> if they are easy for beginners? It's not that I especially want to add
> features which *aren't* easy for beginners, but Python isn't Scratch and
> "easy for beginners" should only be a peripheral concern.
>

On the contrary, I believe that "easy for beginners" should be a major
concern.  Ease of use has been and is a, or even the main reason for
Python's success. When some other language becomes a better teaching
language, it will eventually take over in business and science as well.
Right now, Python is Scratch for adults. That's a great thing. Given the
growth of the field, there are far more beginner programmers working today
than there ever have been experts.


Mozilla's array comprehensions are almost identical to Python's, aside
> from a couple of trivial differences:
>

I can't prove it, but I think the phrase ordering difference is not trivial.


> Students who are completely new to programming can see the similarity of
> > [Python] list comprehensions to spoken language.
>
> I've been using comprehensions for something like a decade, and I can't
>

Python: any(line.startswith('#') for line in file)
English: Any line starts with "#" in the file?
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-24 Thread Michael Selik
On Sun, Jun 24, 2018 at 4:57 PM Guido van Rossum  wrote:

> On Sun, Jun 24, 2018 at 2:41 PM Michael Selik  wrote:
>
>> This thread started with a request for educator feedback, which I took to
>> mean observations of student reactions. I've only had the chance to test
>> the proposal on ~20 students so far, but I'd like the chance to gather more
>> data for your consideration before the PEP is accepted or rejected.
>>
>
> Sure. Since the target for the PEP is Python 3.8 I am in no particular
> hurry. It would be important to know how you present it to your students.
>

Absolutely. Since this has come up, I'll make an effort to be more
systematic in data collection.




> On Sun, Jun 24, 2018 at 11:09 AM Steven D'Aprano 
>> wrote:
>>
>>> Remember, the driving use-case which started this (ever-so-long)
>>> discussion was the ability to push data into a comprehension and then
>>> update it on each iteration, something like this:
>>>
>>> x = initial_value()
>>> results = [x := transform(x, i) for i in sequence]
>>>
>>
>> If that is the driving use-case, then the proposal should be rejected.
>> The ``itertools.accumulate`` function has been available for a little while
>> now and it handles this exact case. The accumulate function may even be
>> more readable, as it explains the purpose explicitly, not merely the
>> algorithm. And heck, it's a one-liner.
>>
>> results = accumulate(sequence, transform)
>>
>
> I think that's a misunderstanding. At the very least the typical use case
> is *not* using an existing transform function which is readily passed to
> accumulate -- instead, it's typically written as a simple expression (e.g.
> `total := total + v` in the PEP) which would require a lambda.
>

> Plus, I don't know what kind of students you are teaching, but for me,
> whenever the solution requires a higher-order function (like accumulate),
> this implies a significant speed bump -- both when writing and when reading
> code. (Honestly, whenever I read code that uses itertools, I end up making
> a trip to StackOverflow :-).
>

Mostly mid-career professionals, of highly varying backgrounds. The
higher-order functions do require some cushioning getting into, but I have
some tricks I've learned over the years to make it go over pretty well.


On Fri, Jun 22, 2018 at 7:48 PM Steven D'Aprano  wrote:
>>
> On Fri, Jun 22, 2018 at 10:59:43AM -0700, Michael Selik wrote:
>>>
>>> Of course they do -- they're less fluent at reading code. They don't
>>> have the experience to judge good code from bad.
>>>
>>
>> On the other hand, an "expert" may be so steeped in a particular
>> subculture that [they] no longer can distinguish esoteric from intuitive.
>> Don't be so fast to reject the wisdom of the inexperienced.
>>
>
> Nor should we cater to them excessively though. While the user is indeed
> king, it's also well known that most users when they are asking for a
> feature don't know what they want (same for kings, actually, that's why
> they have advisors :-).
>
>
>> The question we should be asking is, do we only add features to Python
>>> if they are easy for beginners? It's not that I especially want to add
>>> features which *aren't* easy for beginners, but Python isn't Scratch and
>>> "easy for beginners" should only be a peripheral concern.
>>>
>>
>> On the contrary, I believe that "easy for beginners" should be a major
>> concern.  Ease of use has been and is a, or even the main reason for
>> Python's success. When some other language becomes a better teaching
>> language, it will eventually take over in business and science as well.
>> Right now, Python is Scratch for adults. That's a great thing. Given the
>> growth of the field, there are far more beginner programmers working today
>> than there ever have been experts.
>>
>
> I'm sorry, but this offends me, and I don't believe it's true at all.
> Python is *not* a beginners language, and you are mixing ease of use and
> ease of learning. Python turns beginners into experts at an unprecedented
> rate, and that's the big difference with Scratch.
>

By saying "Scratch for adults" I meant that Python is a language that can
be adopted by beginners and rapidly make them professionals, not that it's
exclusively a beginner's language.

Also, Scratch and similar languages, like NetLogo, have some interesting
features that allow beginners to write some sophisticated parallelism. I
don't mean "beginner's language" in that it's overly simplistic, but that
it enables what would be complex in other languages.

I realize that my phrasing was likely to be misunderstood without knowing
the context that I teach working professionals who are asked to be
immediately productive at high-value tasks.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-16 Thread Michael Foord
On Tuesday, 14 July 2015, Christie Wilson  wrote:
>> If people do misspell it, I think they do learn not to in after it
happens a few times.
>
> Unless the line silently executes and they don't notice the mistake for
years :'(

Indeed. This has been a problem with mock: misspelled (usually
misremembered) assert methods silently did nothing.

With this fix in place several failing tests were revealed in code bases!
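
A minimal sketch of that failure mode - the method names here are
deliberately misspelled:

from unittest import mock

m = mock.Mock()
m.method(1, 2)

# Before the safety check, a typo such as m.assert_called_wiht(1, 2)
# just created a child mock and "passed", asserting nothing at all.
# With the check in place, attribute names matching the assert/assret
# prefixes raise AttributeError instead of passing silently:
try:
    m.assret_called_with(1, 2)
except AttributeError as exc:
    print("caught:", exc)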

As for assret, it's the common misspelling people have told me about. It
seems a ridiculous thing for people to get worked up about, but people
enjoy getting worked up.

Michael


> On Tue, Jul 14, 2015 at 9:15 AM, Ron Adam  wrote:
>>
>>
>> On 07/14/2015 09:41 AM, Steven D'Aprano wrote:
>>>
>>> On Tue, Jul 14, 2015 at 02:06:14PM +0200, Dima Tisnek wrote:
>>>>
>>>> >https://bugs.python.org/issue21238  introduces detection of
>>>> >missing/misspelt mock.assert_xxx() calls on getattr level in Python
>>>> >3.5
>>>> >
>>>> >Michael and Kushal are of the opinion that "assret" is a common typo
>>>> >of "assert" and should be supported in a sense that it also triggers
>>>> >AttributeError and is not silently ignored like a mocked user
>>>> >attribute.
>>>> >
>>>> >I disagree
>>>
>>> I must admit I don't use mock so don't quite understand what is going on
>>> in this bug report. But I don't imagine that anything good will come out
>>> of treating*one*  typo differently from all the other possible typos.
>>> Why should "assret" be treated differently from other easy-to-make typos
>>> like "asert", "assrt", "asset"? Or "assort", which is not only a
>>> standard and common English word, but "e" and "o" are right next to each
>>> other on Dvorak keyboards, making it an easy typo to make.
>>>
>>> Surely this is an obvious case where the Zen should apply. "Special
>>> cases aren't special enough..." -- either all such typos raise
>>> AttributeError, or they are all silent.
>>
>> I agree with Steven that it doesn't seem correct to not raise
AttributeError here.
>>
>> For what it's worth, I have a life long sleep disorder and am a tarrable
(<-- like this)  speller because of it.   I still don't want spell, or
grammar, checkers to not report my mistakes.  And I don't recall ever
making the particular error of using "assret" in place of "assert".  I'd be
more likely to mispell it as "assirt" if I wasn't already so familiar with
"assert".
>>
>> If people do misspell it, I think they do learn not to in after it
happens a few times.
>>
>> Regards,
>>Ron
>>
>> ___
>> Python-Dev mailing list
>> Python-Dev@python.org
>> https://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe:
https://mail.python.org/mailman/options/python-dev/bobcatfish%40gmail.com
>
>
>
> --
> Christie Wilson

-- 

http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] How far to go with user-friendliness

2015-07-16 Thread Michael Foord
On Wednesday, 15 July 2015, Robert Collins 
wrote:
> On 15 July 2015 at 12:59, Nick Coghlan  wrote:
>>
>> There is zero urgency here, so nothing needs to change for 3.5.
>> Robert's plan is a fine one to propose for 3.6 (and the PyPI mock
>> backport).
>
> Right - the bad API goes back to the very beginning. I'm not planning


I disagree it's a bad api. It's part of why mock was so easy to use and
part of why it was so successful. With the new check for non-existent
assert methods it's no longer dangerous and so doesn't need fixing.

So -1 from me.

Michael


> on writing the new thing I sketched, though it should be straight
> forward if someone wishes to do so. I'll probably file a ticket in the
> tracker asking for it once this thread quiesces.
>
> -Rob
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
https://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>

-- 

http://www.voidspace.org.uk/

May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing http://www.sqlite.org/different.html
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Rationale behind lazy map/filter

2015-11-04 Thread Michael Selik
> I'm not suggesting restarting at the top (I've elsewhere suggested that
> many such methods would be better as an *iterable* that can be restarted
> at the top by calling iter() multiple times, but that's not the same
> thing). I'm suggesting raising an exception other than StopIteration, so
> that this situation can be detected. If you are writing code that tries
> to resume iterating after the iterator has been exhausted, I have to
> ask: why?

The most obvious case for me would be tailing a file. Loop over the lines
in the file, sleep, then do it again. There are many tasks analogous to
that scenario -- anything querying a shared resource.
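
A minimal sketch of what I mean, assuming a plain text file that some
other process keeps appending to (the helper name is just illustrative):

import time

def follow(path):
    with open(path) as f:
        while True:
            for line in f:       # exhausts the iterator...
                yield line
            time.sleep(1.0)      # ...then resumes it once new data may have arrived
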
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] collections.Counter __add__ implementation quirk

2015-11-24 Thread Michael Selik
Raymond,
I think you made a typographical error in your Counter.update example.

>>> from collections import Counter
>>> c = Counter(a=4, b=2, c=0, d=-2)
>>> d = Counter(a=1, b=-5, c=-2, d=6)
>>> c.update(d)
>>> c
Counter({'a': 5, 'd': 4, 'c': -2, 'b': -3})

Pair programming ;-)


On Tue, Nov 24, 2015 at 1:02 AM Raymond Hettinger <
raymond.hettin...@gmail.com> wrote:

>
> > On Nov 23, 2015, at 10:43 AM, Vlastimil Brom 
> wrote:
> >
> >> Is there any particular reason counters drop negative values when you
> add
> >> them together?  I definitely expected them to act like ints do when you
> add
> >> negatives, and had to subclass it to get what I think is the obvious
> >> behavior.
> >> ___
> >> Python-Dev mailing list
> > ...
> > Hi,
> > this is probably more appropriate for the general python list rathere
> > then this developers' maillist, however, as I asked a similar question
> > some time ago, I got some detailed explanations for the the current
> > design decissions from the original developer; cf.:
> > https://mail.python.org/pipermail/python-list/2010-March/570618.html
> >
> > (I didn't check possible changes in Counter since that version (3.1 at
> > that time).)
>
> In Python3.2, Counter grew a subtract() method:
>
> >>> c = Counter(a=4, b=2, c=0, d=-2)
> >>> d = Counter(a=1, b=2, c=3, d=4)
> >>> c.subtract(d)
> >>> c
> Counter({'a': 3, 'b': 0, 'c': -3, 'd': -6})
>
> The update() method has been around since the beginning:
>
> >>> from collections import Counter
> >>> c = Counter(a=4, b=2, c=0, d=-2)
> >>> d = Counter(a=1, b=-5, c=-2, d=6)
> >>> c.update(d)
> >>> d
> Counter({'d': 6, 'a': 1, 'c': -2, 'b': -5})
>
>
> So, you have two ways of doing counter math:
>
> 1. Normal integer arithmetic using update() and subtract() does straight
> addition and subtraction, either starting with or ending-up with negative
> values.
>
> 2. Saturating arithmetic using the operators: + - & | excludes
> non-positive results.  This supports bag-like behavior (c.f. smalltalk) and
> multiset operations (https://en.wikipedia.org/wiki/Multiset).
>
>
> Raymond
>
>
>
>
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mike%40selik.org
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-02 Thread Michael Droettboom
You may also be interested in a project I've been working on, airspeed
velocity, which will automatically benchmark historical versions of a git
or hg repo.

http://github.com/spacetelescope/asv

astropy, scipy, numpy and dask are already using it.
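
A benchmark, as I understand asv's conventions, is just a Python method
whose name starts with time_ (or mem_/peakmem_) in the project's
benchmarks/ directory - for example:

# benchmarks/benchmarks.py
class TimeSorting:
    def setup(self):
        self.data = list(range(100000))

    def time_sorted(self):
        sorted(self.data)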

Cheers,
Mike
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Question about sys.path and sys.argv and how packaging (may) affects default values

2016-03-02 Thread Michael Felt

Hello all,

1) There are many lists to choose from - if this is the wrong one for 
questions about packaging - please forgive me, and point me in the right 
direction.


2) Normally, I have just packaged python, and then moved on. However, 
recently I have been asked to help with packaging an 'easier to install' 
python by people using cloud-init, and more recently people wanting to 
use salt-stack (on AIX).


FYI: I have been posting about my complete failure to build 2.7.11 ( 
http://bugs.python.org/issue26466) - so, what I am testing is based on 
2.7.10 - which built easily for me.


Going through the 'base documentation' I saw a reference to both 
sys.argv and sys.path. atm, I am looking for an easy way to get the 
program name (e.g., /opt/bin/python, versus ./python).
I have my reasons (basically, looking for a compiled-in library search 
path to help with http://bugs.python.org/issue26439)


Looking on two platforms (AIX, my build, and debian for power) I am 
surprised that sys.argv is empty in both cases, and sys.path returns 
/opt/lib/python27.zip with AIX, but not with debian.


root@x064:[/data/prj/aixtools/python/python-2.7.10]/opt/bin/python
Python 2.7.10 (default, Nov  3 2015, 14:36:51) [C] on aix5
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.argv
['']
>>> sys.path
['', '/opt/lib/python27.zip', '/opt/lib/python2.7', 
'/opt/lib/python2.7/plat-aix5', '/opt/lib/python2.7/lib-tk', 
'/opt/lib/python2.7/lib-old', '/opt/lib/python2.7/lib-dynload', 
'/opt/lib/python2.7/site-packages']


michael@ipv4:~$ python
Python 2.7.9 (default, Mar  1 2015, 13:01:00)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.argv
['']
>>> sys.path
['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-powerpc-linux-gnu', 
'/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', 
'/usr/lib/python2.7/lib-dynload', 
'/usr/local/lib/python2.7/dist-packages', 
'/usr/lib/python2.7/dist-packages', 
'/usr/lib/python2.7/dist-packages/PILcompat', 
'/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7']


And I guess I would be interested in getting
'/opt/lib/python2.7/dist-packages' in there as well - or learning a way
to add it later for pre-compiled packages such as cloud-init - AND
having those also look 'first' in
/opt/lib/python2.7/dist-packages/cloud-init for modules added to support
cloud-init, should I so choose (mainly in case of compatibility issues
between, say, cloud-init and salt-stack that have common modules BUT may
have conflicts). Hopefully never needed for that reason, but it might
also simplify packaging applications that depend on python.


Many thanks for your time and pointers into the documentation - it is a
bit daunting :)


Michael
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL

2016-03-08 Thread Michael Felt
As a relative newcomer I may have missed a long previous discussion re: 
linking with OpenSSL and/or LibreSSL.
In an ideal world this would be rtl linking, i.e., underlying 
complexities of *SSL libraries are hidden from applications.


In short, when I saw this http://bugs.python.org/issue26465 Title:
Upgrade OpenSSL shipped with python installers, it reminded me I need to
start looking at LibreSSL again - and that, if not already done, might
be something "secure" for python as well.


Michael
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New OpenSSL - has anyone ever looked at (in)compatibility with LibreSSL

2016-03-09 Thread Michael Felt
Can look at it. There has been a lot of discussion, iirc, between 
OpenSSL and LibreSSL re: version identification.

Thx for the reference.
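
For what it is worth, the ssl module already reports which library it
was built against (the output below is only illustrative):

>>> import ssl
>>> ssl.OPENSSL_VERSION
'LibreSSL 2.2.4'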

On 08-Mar-16 14:55, Hasan Diwan wrote:


On 8 March 2016 at 00:49, Michael Felt <mich...@felt.demon.nl> wrote:


As a relative newcomer I may have missed a long previous
discussion re: linking with OpenSSL and/or LibreSSL.
In an ideal world this would be rtl linking, i.e., underlying
complexities of *SSL libraries are hidden from applications.

In short, when I saw this http://bugs.python.org/issue26465 Title:
Upgrade OpenSSL shipped with python installers, it reminded me I
need to start looking at LibreSSL again - and that, if not already
done - might be something "secure" for python as well.


According to the libressl website, one of the project's primary goals
is to remain "backwards-compatible with OpenSSL", which is to say, to 
either have code work without changes or to fail gracefully when it 
uses the deprecated bits. It does seem it ships with OpenBSD. There is 
an issue open on bugs to address whatever incompatibilities remain 
between LibreSSL and OpenSSL[1]. Perhaps you might want to take a look 
at that? -- H

1. https://bugs.python.org/issue23177


Michael
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/hasan.diwan%40gmail.com




--
OpenPGP: http://hasan.d8u.us/gpg.asc
Sent from my mobile device
Envoyé de mon portable


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Summary of Python tracker Issues

2016-03-11 Thread Michael Felt
I guess I should have never changed the title - apparently the tracker 
loses track - there are more than 5 messages.


On 2016-03-04 18:08, Python tracker wrote:

#26439: ctypes.util.find_library fails when ldconfig/glibc not availab
http://bugs.python.org/issue26439   5 msgs
And, while I do not want to ping the list in a rude way: I submitted a
patch - not perfect of course (it seems to work as expected stand-alone,
but not in a 'build' attempt, whereas a previous 'half' patch that was
the 'work in progress' did build). As I hope to have time in the coming
days to dig further, some hints on how to debug the failed 'build'
moment during 'make install' would be greatly appreciated.


Basically, the make install ends with:

...

Compiling 
/var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/xml/sax/xmlreader.py 
...
Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/xmllib.py ...
Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/xmlrpclib.py 
...
Compiling /var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/zipfile.py 
...
make: 1254-004 The error code from the last command is 1.


Stop.
root@x064:[/data/prj/aixtools/python/python-2.7.11.2]


So, my question: how do I make the 'compile' of 
/var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/zipfile.py more 
verbose?

I tried "make V=1 DESTDIR=/var/aixtools/aixtools/python/2.7.11.2 install". but 
the output was identical.
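
One way I know to reproduce that failing step by hand - a sketch, not
what the Makefile itself runs - is to byte-compile the file directly and
let it raise:

import py_compile

# doraise=True turns the silent failure into a PyCompileError carrying
# the underlying SyntaxError for the offending file.
py_compile.compile(
    "/var/aixtools/aixtools/python/2.7.11.2/opt/lib/python2.7/zipfile.py",
    doraise=True)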

Thanks,
Michael
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Question about sys.path and sys.argv and how packaging (may) affects default values

2016-03-13 Thread Michael Felt

On 2016-03-02 18:45, Thomas Wouters wrote:

On Wed, Mar 2, 2016 at 3:50 AM, Michael Felt  wrote:


Hello all,

1) There are many lists to choose from - if this is the wrong one for
questions about packaging - please forgive me, and point me in the right
direction.


It's hard to say where this belongs best, but python-list would probably
have done as well.



2) Normally, I have just packaged python, and then moved on. However,
recently I have been asked to help with packaging an 'easier to install'
python by people using cloud-init, and more recently people wanting to use
salt-stack (on AIX).

FYI: I have been posting about my complete failure to build 2.7.11 (
http://bugs.python.org/issue26466) - so, what I am testing is based on
2.7.10 - which built easily for me.

Going through the 'base documentation' I saw a reference to both sys.argv
and sys.path. atm, I am looking for an easy way to get the program name
(e.g., /opt/bin/python, versus ./python).
I have my reasons (basically, looking for a compiled-in library search
path to help with http://bugs.python.org/issue26439)


I think the only way to get at the compiled-in search path is to recreate
it based on the compiled-in prefix, which you can get through distutils.
Python purposely only uses the compiled-in path as the last resort.
Instead, it searches for its home relative to the executable and adds a set
of directories relative to its home (if they exist).

It's not clear to me why you're focusing on these differences, as (as I
describe below) they are immaterial.



Looking on two platforms (AIX, my build, and debian for power) I am
surprised that sys.argv is empty in both cases, and sys.path returns
/opt/lib/python27.zip with AIX, but not with debian.


When you run python interactively, sys.argv[0] will be '', yes. Since
you're not launching a program, there's nothing else to set it to. 'python'
(or the path to the executable) wouldn't be the right thing to set it to,
because python itself isn't a Python program :)

The actual python executable is sys.executable, not sys.argv[0], but you
shouldn't usually care about that, either. If you want to know where to
install things, distutils is the thing to use. If you want to know where
Python thinks it's installed (for debugging purposes only, really),
sys.prefix will tell you.
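
Concretely (the values shown are only illustrative, for an /opt install
like the one above):

>>> import sys
>>> sys.executable        # full path of the running interpreter binary
'/opt/bin/python'
>>> sys.prefix            # where Python thinks it is installed
'/opt'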



root@x064:[/data/prj/aixtools/python/python-2.7.10]/opt/bin/python
Python 2.7.10 (default, Nov  3 2015, 14:36:51) [C] on aix5
Type "help", "copyright", "credits" or "license" for more information.

>>> import sys
>>> sys.argv
['']
>>> sys.path
['', '/opt/lib/python27.zip', '/opt/lib/python2.7',
'/opt/lib/python2.7/plat-aix5', '/opt/lib/python2.7/lib-tk',
'/opt/lib/python2.7/lib-old', '/opt/lib/python2.7/lib-dynload',
'/opt/lib/python2.7/site-packages']

michael@ipv4:~$ python
Python 2.7.9 (default, Mar  1 2015, 13:01:00)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.

>>> import sys
>>> sys.argv
['']
>>> sys.path
['', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-powerpc-linux-gnu',
'/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages/PILcompat',
'/usr/lib/python2.7/dist-packages/gtk-2.0', '/usr/lib/pymodules/python2.7']


In sys.path, you're seeing the difference between a vanilla Python and
Debian's patched Python. Vanilla Python adds $prefix/lib/python27.zip to
sys.path unconditionally, whereas Debian removes it when it doesn't exist.
Likewise, the dist-packages directory is a local modification by Debian; in
vanilla Python it's called 'site-packages' instead. The subdirectories in
dist-packages that you see in the Debian case are added by .pth files
installed in $prefix -- third-party packages, in other words, adding their
own directories to the module search path.
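As a side note, the .pth mechanism is easy to poke at directly;
site.addsitedir() is what processes those files (a minimal sketch, the
directory name here is illustrative):

import site, sys

before = len(sys.path)
# addsitedir() appends the directory itself, then reads any *.pth files in it:
# directory lines are added to sys.path if they exist, and lines starting
# with "import" are executed instead.
site.addsitedir("/opt/lib/python2.7/site-packages")
print(len(sys.path) - before, "entries added")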



And I guess I would be interested in getting '/opt/lib/python2.7/dist-packages' 
in there as well (or learning a way to add it later for pre-compiled packages 
such as cloud-init). Those packages would then also look 'first' in 
/opt/lib/python2.7/dist-packages/cloud-init for modules added to support 
cloud-init - should I so choose (mainly in case of compatibility issues 
between, say, cloud-init and salt-stack, which have common modules but may 
have conflicts). Hopefully never needed for that reason, but it might also 
simplify packaging applications that depend on python.


A vanilla Python (or non-Debian-built python, even) has no business looking
in dist-packages. It 

[Python-Dev] bitfields - short - and xlc compiler

2016-03-19 Thread Michael Felt
a) I hope this is not inappropriate for the list - if so, my apologies!


Getting this message (here using c99 as compiler name, but same issue 
with xlc as compiler name)
c99 -qarch=pwr4 -qbitfields=signed -DNDEBUG -O -I. -IInclude -I./Include 
-I/data/prj/aixtools/python/python-2.7.11.2/Include 
-I/data/prj/aixtools/python/python-2.7.11.2 -c 
/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c 
-o 
build/temp.aix-5.3-2.7/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.o
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field M must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field N must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field O must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field P must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field Q must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field R must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field S must be of type signed int, 
unsigned int or int.


for:

struct BITS {
int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9;
short M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7;
};

In short, xlC v11 does not like short (xlC v7 might have accepted it, but 
"32-bit machines were common then"). I am guessing that 16-bit is not 
well liked on 64-bit hardware now.


reference for xlC v7, where short was (apparently) still accepted: 
http://www.serc.iisc.ernet.in/facilities/ComputingFacilities/systems/cluster/vac-7.0/html/language/ref/clrc03defbitf.htm


I am taking this from the xlC v7 documentation at that URL, not because 
I know it personally.


So - my question: if "short" is unacceptable for POWER, or maybe only for 
xlC (not tried with gcc) - how terrible is this, and is it possible to 
adjust the test so that it is accurate?


I am going to modify the test code so it is
struct BITS {
   signed  int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9;
   unsigned int M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7;
};

And see what happens - BUT what impact does this have on python, 
assuming that "short" bitfields are not supported?
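For context, the Python-level counterpart that _ctypes_test.c is backing is 
roughly this (a minimal sketch, not the actual test code):

from ctypes import Structure, c_int, c_short

class BITS(Structure):
    _fields_ = [("A", c_int, 1), ("B", c_int, 2), ("C", c_int, 3),
                ("M", c_short, 1), ("N", c_short, 2), ("O", c_short, 3)]

b = BITS()
b.M = 1
print(b.M)   # -1 with a signed 1-bit field, 1 if the field is unsigned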


p.s. not submitting this as a bug (for now), as it may just be that "you" 
consider it a bug in xlC to not support (signed) short bit fields.


p.p.s. Note: xlc, by default, considers bitfields to be unsigned. I was 
trying to force them to signed with -qbitfields=signed - and I still got 
messages. So, going back to defaults.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bitfields - short - and xlc compiler

2016-03-19 Thread Michael Felt

Update:
Is this going to be impossible?

test_shorts fails on AIX when using xlC in any case. How terrible is this?

==
FAIL: test_shorts (ctypes.test.test_bitfields.C_Test)
--
Traceback (most recent call last):
  File 
"/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", 
line 48, in test_shorts
self.assertEqual((name, i, getattr(b, name)), (name, i, 
func(byref(b), name)))

AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1)

First differing element 2:
-1
1

- ('M', 1, -1)
?  -

+ ('M', 1, 1)

--
Ran 440 tests in 1.538s

FAILED (failures=1, skipped=91)
Traceback (most recent call last):
  File "./Lib/test/test_ctypes.py", line 15, in 
test_main()
  File "./Lib/test/test_ctypes.py", line 12, in test_main
run_unittest(unittest.TestSuite(suites))
  File 
"/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", 
line 1428, in run_unittest

_run_suite(suite)
  File 
"/data/prj/aixtools/python/python-2.7.11.2/Lib/test/test_support.py", 
line 1411, in _run_suite

raise TestFailed(err)
test.test_support.TestFailed: Traceback (most recent call last):
  File 
"/data/prj/aixtools/python/python-2.7.11.2/Lib/ctypes/test/test_bitfields.py", 
line 48, in test_shorts
self.assertEqual((name, i, getattr(b, name)), (name, i, 
func(byref(b), name)))

AssertionError: Tuples differ: ('M', 1, -1) != ('M', 1, 1)

First differing element 2:
-1
1

- ('M', 1, -1)
?  -

+ ('M', 1, 1)
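The -1 vs 1 mismatch looks like a signedness disagreement: ctypes reads the
1-bit c_short field as signed (so a stored 1 comes back as -1), while xlc's
default-unsigned bit fields make the C accessor return 1. A minimal sketch of
the signed reinterpretation (my own illustration, not test code):

def as_signed(value, bits):
    """Reinterpret an unsigned bit pattern as an n-bit two's-complement int."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

print(as_signed(1, 1))   # -1 : what a signed 1-bit field gives back for 1
print(as_signed(3, 2))   # -1 : same idea for a 2-bit field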




On 17-Mar-16 23:31, Michael Felt wrote:
a) hope this is not something you expect to be on -list, if so - my 
apologies!


Getting this message (here using c99 as compiler name, but same issue 
with xlc as compiler name)
c99 -qarch=pwr4 -qbitfields=signed -DNDEBUG -O -I. -IInclude 
-I./Include -I/data/prj/aixtools/python/python-2.7.11.2/Include 
-I/data/prj/aixtools/python/python-2.7.11.2 -c 
/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c 
-o 
build/temp.aix-5.3-2.7/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.o
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field M must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field N must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field O must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field P must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field Q must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field R must be of type signed int, 
unsigned int or int.
"/data/prj/aixtools/python/python-2.7.11.2/Modules/_ctypes/_ctypes_test.c", 
line 387.5: 1506-009 (S) Bit field S must be of type signed int, 
unsigned int or int.


for:

struct BITS {
int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9;
short M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7;
};

in short xlC v11 does not like short (xlC v7 might have accepted it, 
but "32-bit machines were common then". I am guessing that 16-bit is 
not well liked on 64-bit hw now.


reference for xlC v7, where short was (apparently) still accepted: 
http://www.serc.iisc.ernet.in/facilities/ComputingFacilities/systems/cluster/vac-7.0/html/language/ref/clrc03defbitf.htm 



I am taking this is from xlC v7 documentation from the URL, not 
because I know it personally.


So - my question: if "short" is unacceptable for POWER, or maybe only 
xlC (not tried with gcc) - how terrible is this, and is it possible to 
adjust the test so - the test is accurate?


I am going to modify the test code so it is
struct BITS {
   signed  int A: 1, B:2, C:3, D:4, E: 5, F: 6, G: 7, H: 8, I: 9;
   unsigned int M: 1, N: 2, O: 3, P: 4, Q: 5, R: 6, S: 7;
};

And see what happens - BUT - what does this have for impact on python 
- assuming that "short" bitfields are not supported?


p.s. not submitting this a bug (now) as it may just be that "you" 
consider it a bug in xlC to not support (signed) short bit fields.


p.p.s. Note: xlc, by default, considers bitfields to be unsigned. I 
was trying to force them to signed with -qbitfields=signed - and I still 
got messages. So, going back to defaults.

Re: [Python-Dev] bitfields - short - and xlc compiler

2016-03-20 Thread Michael Felt



On 2016-03-18 05:57, Andrew Barnert via Python-Dev wrote:

Yeah, C99 (6.7.2.1) allows "a qualified or unqualified version of _Bool, signed int, unsigned 
int, or some other implementation-defined type", and same for C11. This means that a compiler 
could easily allow an implementation-defined type that's identical to and interconvertible with 
short, say "i16", to be used in bitfields, but not short itself.

And yet, gcc still allows short "even in strictly conforming mode" (4.9), and 
it looks like Clang and Intel do the same.

Meanwhile, MSVC specifically says it's illegal ("The type-specifier for the 
declarator must be unsigned int, signed int, or int") but then defines the semantics 
(you can't have a 17-bit short, bit fields act as the underlying type when accessed, 
alignment is forced to a boundary appropriate for the underlying type). They do mention 
that allowing char and long types is a Microsoft extension, but still nothing about 
short, even though it's used in most of the examples on the page.

Anyway, is the question what ctypes should do? If a platform's compiler allows "short M: 1", 
especially if it has potentially different alignment than "int M: 1", ctypes on that platform had 
better make ("M", c_short, 1) match the former, right?

So it sounds like you need some configure switch to test that your compiler 
doesn't allow short bit fields, so your ctypes build at least skips that part 
of _ctypes_test.c and test_bitfields.py, and maybe even doesn't allow them in 
Python code.



>>  test_shorts fails on AIX when using xlC in any case. How terrible is this?
a) this does not look solvable using xlC, and I expect from the comment 
above re: MSVC that it will, or should, also fail there. And, imho, if 
anything is to be done, it is a decision to be made by "Python".

b) aka - it sounds like a defect, at least in the test.
c) what danger is there to existing Python code if "short" is expected, 
per legacy when compilers did support it (and GCC still does - verified: 
when I compile with gcc the test does not signal failure)?


So, more with regard to c) - is there something I could/should be 
looking at in Python itself, in order to report that the code is not 
supported by the compiler?


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [RELEASED] Python 3.4.0b2

2014-01-06 Thread Michael Urman
On Mon, Jan 6, 2014 at 9:43 AM, Guido van Rossum  wrote:
> Since MSIEXEC.EXE is a legit binary (not coming from our packager) and
> Akamai is a legitimate company (MS most likely has an agreement with
> them), at this point I would assume that there's something that
> MSIEXEC.EXE wants to get from Akamai, which is unintentionally but
> harmlessly triggered by the Python install. Could it be checking for
> upgrades?

Here's some more guesswork. Does it seem possible that msiexec is
trying to verify the revocation status of the certificate used to sign
the python .msi file? Per
http://blogs.technet.com/b/pki/archive/2006/11/30/basic-crl-checking-with-certutil.aspx
it looks like crl.microsoft.com is the host; this is hosted on akamai:
   crl.microsoft.com is an alias for crl.www.ms.akadns.net.
   crl.www.ms.akadns.net is an alias for a1363.g.akamai.net.

There are various things you could try to verify this. You could test
with simpler .msi files where one is signed and another is not signed
(I'll leave it up to you to find such things, but ORCA is a common
"test" .msi file). Or you could take a verbose log of the installation
process (msiexec /l*v python.log python.msi OR
http://support.microsoft.com/kb/223300), sit on the prompt for network
access so you can uniquely identify the log's timestamps, and try to
identify at what point of the installation the network access occurs.
Once that is known, more steps can be taken to identify and resolve
any actual issues.

Michael
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 461 - Adding % and {} formatting to bytes

2014-01-16 Thread Michael Urman
On Thu, Jan 16, 2014 at 8:45 AM, Brett Cannon  wrote:
> Fine, if you're worried about bytes.format() overstepping by implicitly
> calling str.encode() on the return value of __format__() then you will need
> __bytes__format__() to get equivalent support.

Could we just re-use PEP-3101's note (easily updated for Python 3):

Note for Python 2.x: The 'format_spec' argument will be either
a string object or a unicode object, depending on the type of the
original format string.  The __format__ method should test the type
of the specifiers parameter to determine whether to return a string or
unicode object.  It is the responsibility of the __format__ method
to return an object of the proper type.

If __format__ receives a format_spec of type bytes, it should return
bytes. For such cases on objects that cannot support bytes (i.e. for
str), it can raise. This appears to avoid the need for additional
methods. (As does Nick's proposal of leaving it out for now.)
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 461 - Adding % and {} formatting to bytes

2014-01-16 Thread Michael Urman
On Thu, Jan 16, 2014 at 11:13 AM, Neil Schemenauer  wrote:
> A TypeError exception is what we want if the object does not support
> bytes formatting.  Some possible problems:
>
> - It could be hard to provide a helpful exception message since it
>   is generated inside the __format__ method rather than inside the
>   bytes.__mod__ method (in the case of a missing __ascii__ method).
>   The most common error will be using a str object and so we could
>   modify the __format__ method of str to provide a nice hint (use
>   encode()).

The various format functions could certainly intercept and wrap
exceptions raised by __format__ methods. Once the core types were
modified to expect bytes in format_spec, however, this may not be
critical; __format__ methods which delegate would work as expected,
str could certainly be clear about why it raised, and custom
implementations would be handled per comments I'll make on your second
point. Overall I suspect this is no worse than unhandled values in the
format_spec are today.

> - Is there some risk that an object will unwittingly implement a
>   __format__ method that unintentionally accepts a bytes argument?
>   That requires some investigation.

Agreed. Some quick armchair calculations suggest to me that there are
three likely outcomes:
 - Properly handle the type (perhaps written with the 2.x clause in mind)
 - Raise an exception internally (perhaps ValueError, such as from
format(3, 'q'))
 - Mishandle and return a str (perhaps due to if/else defaulting)
The first and second outcome may well reflect what we want, and the
third could easily be detected and turned into an exception by the
format functions.

I'm uncertain whether this reflects all the scenarios we would care about.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Call for Python Developers for our humanoid Robot NAO

2014-03-18 Thread Michael Foord


On 18/03/14 16:44, Antoine Pitrou wrote:


Hello Xavier,

It is not obvious your message is appropriate for python-dev. It looks 
like mere advertising; if it is not, please explain.


To clarify what this mailing-list is about: "On this list the key 
Python developers discuss the future of the language and its 
implementation. Topics include Python design issues, release 
mechanics, and maintenance of existing releases."


(from https://mail.python.org/mailman/listinfo/python-dev)



Unless you're offering all the core-devs free robots. In which case it's 
fine.


Michael


Regards

Antoine.


Le 17/03/2014 23:05, Xavier Salort a écrit :

Hi,

We are the manufacturer of the humanoid robot NAO :
http://www.youtube.com/watch?v=nNbj2G3GmAo
We are now offering great opportunities for developers to use NAO and
would like to get in touch with you and your members :
http://www.youtube.com/watch?v=_AxErdP0YI8

Let me know if you may be interested.
Thank you!

Xavier
--
Xavier Salort
Area Sales Manager
M + 1 857 247 






___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Fwd: Jython Report

2014-04-12 Thread Michael Foord
Below is the Jython "status update" report I received from Jim Baker 
and summarised at the Language Summit. It comes with one addendum from Frank:

Jim's list is fantastic - the one bit I'd like to add to the list:

Jython now supports a buffer protocol that parallels CPython's C API buffer 
protocol. This provided the basis for support of buffer() and memoryview(). The 
work was done with Jython3 in mind and will be a huge boost to that eventual 
effort.

Begin forwarded message:

> From: Jim Baker 
> Subject: Re: Jython Report
> Date: 7 April 2014 06:42:51 BST
> To: Michael Foord 
> Cc: Frank Wierzbicki 
> 
> Recent changes to trunk (last 6 months)
> 
> * Recently tagged a soft beta 2!
> * Java 7 JVM is now the minimum version, which gives a larger base of 
> functionality to work with (such as using Java 7's AutoCloseable to imply 
> corresponding context manager support in using Python code)
> * Enable mixing Python and Java types in the bases of a class when using a 
> metaclass
> * Added support for buffer, memoryview, although not complete yet with 
> respect to Java integration
> * Console and encoding support, such as unicodedata/idna updates
> * Many, many small fixes
> 
> About to be in trunk, to support beta 3
> 
> * socket-reboot reimplements socket/select/ssl on top of Netty 4, a popular 
> event loop networking framework for the JVM (used by a large number of 
> performant projects in Java space and originally part of JBoss). There was no 
> ssl support before, but now socket and especially select semantics are much 
> closer to CPython as well (basically close to the Windows socket model). 
> * socket-reboot in turn enables requests and thereby pip. A branch of pip 
> currently works, actually modifying an upstream vendor lib (html5lib) so that 
> it doesn't use isolated UTF-16 surrogates in literals, since this is not 
> actually legal unicode, nor does it work in Jython's UTF-16 based 
> representation. Ironically this usage is to detect such illegal use in input 
> streams.
> * Relative star imports, which seems to impact a number of interesting 
> projects.
> * Performance tuning of sre. Jython has a port of CPython's sre, however our 
> use of UTF-16 requires expansion into an array of codepoints. Currently this 
> is done on demand, which can potentially add another O(n) factor in 
> evaluating regexes. A pull request we will apply memoizes. In the future, we 
> will rewrite the logic in sre so that it does next/prev, much like JRuby 
> currently does for similar encoding issues.
> 
> Related work
> 
> * Other PyPA tooling including virtualenv and wheel needs more diagnosis to 
> see why they currently fail on Jython, but our hope is that this is minor. 
> * New project jythontools by a number of Jython developers (including Frank 
> and Jim). This includes a number of projects that will help evolve Jython, 
> but outside the usual release schedule and the usual problem of being in core 
> (such as eventual deprecation):
>   - Clamp - precise integration with Java, enabling such capabilities as 
> Java directly importing Python modules without explicitly initializing the 
> Jython runtime or using object factories. Future work will enable Java 
> annotation integration, as decorators. Integrates with setuptools; future 
> integration as well with Maven via Aether.
>   - Jiffy - provide a CFFI backend for Jython. Right now it is pure 
> vaporware, but cursory examination of cffi.backend_ctypes suggests that it 
> should be straightforward and of modest effort to provide a similar backend 
> by using JFFI, which Jython and JRuby both use to access native runtime 
> services (such as Posix API) as part of the Java native runtime project.
> * The Patois project has been started to collect examples for 
> cross-implementation support, as seen in surrogate support, but it will be a 
> good question to get that really going, vs just talking about it.
> * JyNI - simply adding this jar to the classpath enables C extension API 
> support. Note that this project has been licensed by its developer (not a 
> Jython committer) under an LGPL license.
> 
> Release schedule
> 
> * Complete beta 2
> * Beta 3 is forthcoming, likely in 2 weeks
> * For beta 4, need to perform a comprehensive bug triage - what will be in, 
> not in for 2.7.0
> * EuroPython sprint to finalize a release candidate for 2.7.0?
> 
> Future
> 
> * Mostly around performance, Java integration, and of course the usual bug 
> fixes
> * Python bytecode compiler remains important, including for support targeting 
> Android and removing restriction on getting too large a method for the JVM
> * More hooks for Java integration, includi

Re: [Python-Dev] cpython: Minor clean-ups for heapq.

2014-05-27 Thread Michael Urman
On Tue, May 27, 2014 at 4:05 AM, Chris Angelico  wrote:
> On Tue, May 27, 2014 at 6:58 PM, Serhiy Storchaka  wrote:
>> On 26.05.14 10:59, raymond.hettinger wrote:
>>>
>>> +result = [(elem, i) for i, elem in zip(range(n), it)]
>>
>>
>> Perhaps it is worth to add simple comment explaining why this is not
>> equivalent to just list(zip(it, range(n))). Otherwise it can be
>> unintentionally "optimized" in future.
>>
>
> Where is the difference? I'm very much puzzled now. My first thought
> was based on differing-length iterables in zip, but the docs say it
> stops at the shortest of its args.

Due to how zip stops, it leaves the longer iterable in different places:

>>> it = iter(string.ascii_letters); list(zip(range(3), it)); next(it)
[(0, 'a'), (1, 'b'), (2, 'c')]
'd'
>>> it = iter(string.ascii_letters); list(zip(it, range(3))); next(it)
[('a', 0), ('b', 1), ('c', 2)]
'e'

This seems like a potentially nasty gotcha, but I'm unclear what real
use cases would be impacted.
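The underlying rule: zip() pulls from its arguments left to right and stops at
the first StopIteration, so whatever sits to the left of the exhausted argument
has already given up one extra element. A small sketch restating the idiom from
the commit, with that assumption spelled out:

import string

def take_pairs(n, it):
    # range(n) comes first, so zip() hits StopIteration on the counter
    # *before* touching `it` again -- `it` is not over-consumed.
    return [(elem, i) for i, elem in zip(range(n), it)]

it = iter(string.ascii_letters)
print(take_pairs(3, it))   # [('a', 0), ('b', 1), ('c', 2)]
print(next(it))            # 'd' -- nothing was silently discarded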

Michael
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] use cases for "python-config" versus "pkg-config python"

2014-05-28 Thread Michael Haubenwallner
Hello!

Stumbling over problems on AIX (Modules/python.exp not found) building libxml2 
as a python module made me wonder about the intended use-cases for 
'python-config' and 'pkg-config python'.

FWIW, I can see these distinct use cases here, and I'm kindly asking if I got 
them right:

* Build an application containing a python interpreter (like python$EXE itself):
  + link against libpython.so
  + re-export symbols from libpython.so for python-modules (platform-specific)
  + This is similar to build against any other library, thus
  = 'python.pc' is installed (for 'pkg-config python').

* Build a python-module (like build/lib.-/*.so):
  + no need to link against libpython.so, instead
  + expect symbols from libpython.so to be available at runtime, 
platform-specific either as
  + undefined symbols at build-time (Linux, others), or
  + a list of symbols to import from "the main executable" (AIX)
  + This is specific to python-modules, thus
  = 'python-config' is installed.

Thank you!
/haubi/
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] use cases for "python-config" versus "pkg-config python"

2014-06-02 Thread Michael Haubenwallner
Hi,

following up myself with a patch proposal:

On 05/28/2014 04:51 PM, Michael Haubenwallner wrote:
> Stumbling over problems on AIX (Modules/python.exp not found) building 
> libxml2 as python module
> let me wonder about the intended use-cases for 'python-config' and 
> 'pkg-config python'.
> 
> FWIW, I can see these distinct use cases here, and I'm kindly asking if I got 
> them right:
> 
> * Build an application containing a python interpreter (like python$EXE 
> itself):
>   + link against libpython.so
>   + re-export symbols from libpython.so for python-modules (platform-specific)
>   + This is similar to build against any other library, thus
>   = 'python.pc' is installed (for 'pkg-config python').
> 
> * Build a python-module (like build/lib.-/*.so):
>   + no need to link against libpython.so, instead
>   + expect symbols from libpython.so to be available at runtime, 
> platform-specific either as
>   + undefined symbols at build-time (Linux, others), or
>   + a list of symbols to import from "the main executable" (AIX)
>   + This is specific to python-modules, thus
>   = 'python-config' is installed.
> 

Based on these use-cases, I'm working towards a patch improving AIX support 
here; the attached one is a draft against python-tip (the next step is to have 
python-config not print $LIBS, but $LINKFORMODULE only).
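For anyone wanting to see which link-related values a given build already
exposes, sysconfig will print them (a minimal sketch; which variables are
present is build-dependent):

import sysconfig

for var in ("LINKFORSHARED", "LDSHARED", "BLDSHARED", "LIBS", "LDFLAGS"):
    print(var, "=", sysconfig.get_config_var(var))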

Thoughts?

Thank you!
/haubi/

diff -r dc3afbee4ad1 Makefile.pre.in
--- a/Makefile.pre.in   Mon Jun 02 01:32:23 2014 -0700
+++ b/Makefile.pre.in   Mon Jun 02 19:57:54 2014 +0200
@@ -87,6 +87,9 @@
 SGI_ABI=   @SGI_ABI@
 CCSHARED=  @CCSHARED@
 LINKFORSHARED= @LINKFORSHARED@
+BLINKFORSHARED=@BLINKFORSHARED@
+LINKFORMODULE= @LINKFORMODULE@
+BLINKFORMODULE=@BLINKFORMODULE@
 ARFLAGS=   @ARFLAGS@
 # Extra C flags added for building the interpreter object files.
 CFLAGSFORSHARED=@CFLAGSFORSHARED@
@@ -540,7 +543,7 @@
 
 # Build the interpreter
 $(BUILDPYTHON):Modules/python.o $(LIBRARY) $(LDLIBRARY) $(PY3LIBRARY)
-   $(LINKCC) $(PY_LDFLAGS) $(LINKFORSHARED) -o $@ Modules/python.o 
$(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)
+   $(LINKCC) $(PY_LDFLAGS) $(BLINKFORSHARED) -o $@ Modules/python.o 
$(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)
 
 platform: $(BUILDPYTHON) pybuilddir.txt
$(RUNSHARED) $(PYTHON_FOR_BUILD) -c 'import sys ; from sysconfig import 
get_platform ; print(get_platform()+"-"+sys.version[0:3])' >platform
@@ -666,7 +669,7 @@
fi
 
 Modules/_testembed: Modules/_testembed.o $(LIBRARY) $(LDLIBRARY) $(PY3LIBRARY)
-   $(LINKCC) $(PY_LDFLAGS) $(LINKFORSHARED) -o $@ Modules/_testembed.o 
$(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)
+   $(LINKCC) $(PY_LDFLAGS) $(BLINKFORSHARED) -o $@ Modules/_testembed.o 
$(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)
 
 
 # Importlib
@@ -1310,7 +1313,7 @@
 # pkgconfig directory
 LIBPC= $(LIBDIR)/pkgconfig
 
-libainstall:   all python-config
+libainstalldirs:
@for i in $(LIBDIR) $(LIBPL) $(LIBPC); \
do \
if test ! -d $(DESTDIR)$$i; then \
@@ -1319,6 +1322,16 @@
elsetrue; \
fi; \
done
+
+# resolve Makefile variables eventually found in configured python.pc values
+$(DESTDIR)$(LIBPC)/python-$(VERSION).pc: Misc/python.pc Makefile 
libainstalldirs
+   @echo "Resolving more values for $(LIBPC)/python-$(VERSION).pc"; \
+   if test set = "$${PYTHON_PC_CONTENT:+set}"; \
+   then echo '$(PYTHON_PC_CONTENT)' | tr '@' '\n' > $@; \
+   else PYTHON_PC_CONTENT="`awk -v ORS='@' '{print $0}' < Misc/python.pc`" 
$(MAKE) $@ `grep = Misc/python.pc`; \
+   fi
+
+libainstall:   all python-config libainstalldirs 
$(DESTDIR)$(LIBPC)/python-$(VERSION).pc
@if test -d $(LIBRARY); then :; else \
if test "$(PYTHONFRAMEWORKDIR)" = no-framework; then \
if test "$(SHLIB_SUFFIX)" = .dll; then \
@@ -1338,7 +1351,6 @@
$(INSTALL_DATA) Modules/Setup $(DESTDIR)$(LIBPL)/Setup
$(INSTALL_DATA) Modules/Setup.local $(DESTDIR)$(LIBPL)/Setup.local
$(INSTALL_DATA) Modules/Setup.config $(DESTDIR)$(LIBPL)/Setup.config
-   $(INSTALL_DATA) Misc/python.pc $(DESTDIR)$(LIBPC)/python-$(VERSION).pc
$(INSTALL_SCRIPT) $(srcdir)/Modules/makesetup 
$(DESTDIR)$(LIBPL)/makesetup
$(INSTALL_SCRIPT) $(srcdir)/install-sh $(DESTDIR)$(LIBPL)/install-sh
$(INSTALL_SCRIPT) python-config.py $(DESTDIR)$(LIBPL)/python-config.py
@@ -1540,6 +1552,7 @@
-rm -rf build platform
-rm -rf $(PYTHONFRAMEWORKDIR)
-rm -f python-config.py python-config
+   -rm -f M

Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-30 Thread Michael Selik
On Sat, Jun 30, 2018 at 9:43 AM Tim Peters  wrote:

> The attractions are instead in the areas of reducing redundancy, improving
> clarity, allowing to remove semantically pointless indentation levels in
> some cases, indeed trading away some horizontal whitespace in otherwise
> nearly empty lines for freeing up a bit of vertical screen space, and in
> the case of comprehensions/genexps adding straightforward ways to
> accomplish some conceptually trivial things that at best require trickery
> now (like emulating a cell object by hand).
>

The examples you provided (some were new in this thread, I think) are
compelling. While my initial reaction to the proposal was mild horror, I'm
not troubled by the scoping questions.

Issues still bothering me:
1. Initial reactions from students were confusion over := vs =
2. This seems inconsistent with the push for type hints

To be fair, I felt a similar gut reaction to f-strings, and now I can't
live without them. Have I become a cranky old man, resistant to change?
Your examples have put me into the "on the fence, slightly worried"
category instead of "clearly a bad idea".

On scoping, beginners seem more confused by UnboundLocalError than by
variables bleeding between what they perceive as separate scopes. The
concept of a scope can be tricky to communicate. Heck, I still make the
mistake of looking up class attributes in instance methods as if they were
globals. Same-scope is natural. Natural language is happy with ambiguity.
Separate-scope is something programmers dreamed up. Only experienced C,
Java, etc. programmers get surprised when they make assumptions about what
syntax in Python creates separate scopes, and I'm not so worried about
those folks. I remind them that the oldest versions of C didn't have block
scopes (1975?) and they quiet down.
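A minimal illustration of both reactions (nothing PEP-specific here):

# Same-scope feels natural: a for-loop target leaks out of the loop.
for i in range(3):
    pass
print(i)        # 2 -- there is no separate block scope

# UnboundLocalError is the real surprise: assigning anywhere in a function
# makes the name local for the whole function body.
total = 0
def bump():
    total += 1  # UnboundLocalError when called: 'total' is local here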

The PEP lists many exclusions of where the new := operator is invalid [0].
I unfortunately didn't have a chance to read the initial discussion over
the operator. I'm sure it was thorough :-). What I can observe is that each
syntactical exclusion was caused by a different confusion, probably teased
out by that discussion. Many exclusions means many confusions.

My intuition is that the awkwardness stems from avoiding the replacement of
= with :=. Languages that use := seem to avoid the Yoda-style comparison
recommendation that is common to languages that use = for assignment
expressions. I understand the reluctance for such a major change to the
appearance of Python code, but it would avoid the laundry list of
exclusions. There's some value in parsimony.

Anyway, we've got some time for testing the idea on live subjects.

Have a good weekend, everyone.
-- Michael

PS. Pepe just tied it up for Portugal vs Uruguay. Woo! ... and now Cavani
scored again :-(

[0] https://www.python.org/dev/peps/pep-0572/#exceptional-cases
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-07-01 Thread Michael Selik
On Sun, Jul 1, 2018 at 12:39 AM Tim Peters  wrote:

> So, ya, when someone claims [assignment expressions will] make Python
> significantly harder to teach, I'm skeptical of that claim.
>

I don't believe anyone is making that claim. My worry is that assignment
expressions will add about 15 to 20 minutes to my class and a slight
discomfort.

As Mark and Chris said (quoting Mark below), this is just one straw in the
struggle against piling too many things on the haystack. Unlike some
changes to the language, this change of such general use that it won't be
an optional topic. Once widely used, it ain't optional.


On Sun, Jul 1, 2018 at 2:19 AM Mark Dickinson  wrote:

> There's a constant struggle to keep the Python portion of the course large
> enough to be coherent and useful, but small enough to allow time for the
> other topics.
>
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-07-01 Thread Michael Selik
On Sun, Jul 1, 2018 at 5:28 PM Steven D'Aprano  wrote:

> On Sun, Jul 01, 2018 at 08:35:08AM -0700, Michael Selik wrote:
> > On Sun, Jul 1, 2018 at 12:39 AM Tim Peters  wrote:
> >
> > > So, ya, when someone claims [assignment expressions will] make Python
> > > significantly harder to teach, I'm skeptical of that claim.
> > >
> >
> > I don't believe anyone is making that claim. My worry is that assignment
> > expressions will add about 15 to 20 minutes to my class and a slight
> > discomfort.
>
> How do people who teach other languages deal with this?
>

Python may be in a unique situation in the history of programming. It
wouldn't surprise me if more people learned Python last year than any other
programming language.



> Assignment expressions are hardly a new-fangled innovation of Python's.
> They're used in Java, Javascript, Ruby, Julia, R, PHP and of course
> pretty much the entire C family (C, C++, C# at least). What do
> teachers of those languages do?
>

Assignment expressions are not the issue. The real question is: How do
open-source projects balance the addition of new features against the
growth of complexity? It's the same as that "Remember the Vasa" thread.


[...] R [has] *four* different ways of doing assignment.
>

I think that's a good explanation of why I teach Python and not R. The
first time someone asked me to teach a data science course, Python wasn't
the clear winner. In fact, R may have been more popular among
statisticians. I picked Python for the same reason it's more popular in the
industry -- it's the easiest* to use.

* Easiest that gets the job done well.


> As Mark and Chris said (quoting Mark below), this is just one straw in the
> > struggle against piling too many things on the haystack. Unlike some
> > changes to the language, this change of such general use that it won't be
> > an optional topic. Once widely used, it ain't optional.
>
> Without knowing the details of your course, and who they are aimed at,
> we cannot possibly judge this comment.


I disagree. I think the sentiment holds for a great variety of courses and
audiences.



> Decorators are widely used, but surely you don't teach them in a one day
> introductory class aimed at beginners?
>

Most of the time, no. Once, yes, because that's what the team needed. I was
pretty proud of myself for handling that one. Because I had to teach
decorators early, many other important topics were excluded.


Here is the syllabus for a ten week course:
> https://canvas.uw.edu/courses/1026775/pages/python-100-course-syllabus
>
> Note that decorators and even regular expressions don't get touched
> until week ten. If you can't fit assignment expressions in a ten week
> course, you're doing something wrong. If you can't fit them in a two
> hour beginners course, there is so much more that you aren't covering
> that nobody will notice the lack.
>

It's not about any one particular topic, but the trade-offs between topics.
A 10-week lecture course might be 30 hours of lecture, comparable to a
4-day "bootcamp" style course. I assure you that 4 days doesn't feel long
enough when those last few hours are winding down. There's always more to
say.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-07-02 Thread Michael Selik
On Sun, Jul 1, 2018 at 11:36 PM Tim Peters  wrote:

> [Michael Selik]
> > My worry is that assignment expressions will add about 15 to 20
> > minutes to my class and a slight discomfort.
>
> So not intractable - which is my high-order bit ;-)
>
> For those who want more bits of precision (perhaps Guido), while
> quantification is good, it needs context to provide insight.  Like, out of
> how many class hours total?
>

Generally between 20 and 40 hours.


Is 15-20 minutes a little, a lot, par for the course ... compared to other
> topics?
>

I guessed 15-20 minutes, because I'm mentally comparing it to things like
ternary expressions. Odds and ends that make the code better, but not a
major concept that deserves hours.


Will it require you to drop other topics?
>

Yes. It might not seem like much, but every minute counts. I'd probably try
to ignore := unless some pesky student brings it up. It's like someone
saying, "Hey, I heard that Python can't do threads?!" I always say, "Good
question," but internally I'm thinking, "there goes a half hour. What can I
cut today?"



> Would you _save_ twice as much class time if we got rid of "is"? ;-)
>

Ha. You joke, but ``is`` takes about 5 minutes. About 5 or 10 minutes more
if some clever student notices that ``1 is 1`` and I need to explain
Singletons and interpreter optimizations versus language spec.


If it's accepted, do read the PEP
>

I've read it a few times now. I hope I didn't sound like I haven't read it.
That'd be embarrassing.


Meta: About the Vasa, I'm not concerned.
>

Matt Arcidy brought up an interesting point, which I'll quote here: "... I
don't see any importance to the position of educators right now, especially
since these educators in the thread are complaining about an increase in
their personal work, for which it appears they were compensated."

From my brief observations, it seems that the nattering nabobs of
negativism, such as myself, are mostly educators. I recently started to
wonder if I'd care so much about the language if I didn't teach. I suspect
that if I didn't worry about teaching new features, Python 4 could be
announced tomorrow and I wouldn't really mind.

I suppose it is selfish. But I hope that you [Tim], Guido, and the so many
others who have poured energy into this project will appreciate that it's
not the current users, but the next billion (?!) Pythonistas that will
really keep the language going. Maintaining popularity among educators is a
big part of that.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-07-02 Thread Michael Selik
On Sun, Jul 1, 2018 at 8:21 PM Matt Arcidy  wrote:

> [...] Can anyone adequately explain why this specific modality of
> learning,  a student-in-a-seat based educator, must outweigh all other
> modalities [...]?


1. It doesn't.
2. It's a proxy for the other modes.

I hope this was an adequate explanation.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Fuzzing the Python standard library

2018-07-17 Thread Michael Selik
On Tue, Jul 17, 2018 at 4:57 PM Jussi Judin  wrote:

> Quick answer: an undocumented billion laughs/exponential entity expansion
> type of attack that is accessible over the web through any library that
> uses the fractions module to parse user input (and such libraries are
> actually available on GitHub).
>

Are you suggesting a warning in the fractions documentation to mention that
large numbers require large amounts of memory?
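For concreteness, the kind of expansion being described (a small sketch; the
exponent is kept modest here, but the cost grows with it):

from fractions import Fraction

f = Fraction("1e500000")          # an 8-character input string...
print(f.numerator.bit_length())   # ...becomes an integer of ~1.66 million bits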
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Exporting Python functions on AIX

2018-07-27 Thread WILSON, MICHAEL
All,

My apologies if this is not the appropriate list for a question essentially 
concerning the AIX port of Python.

The current port of Python for AIX includes composing an export file 
(/lib/python2.7/config/python.exp) in which there are a number of functions 
starting "Py_" or "_Py_".

The Vim package for AIX is built referencing the python.exp file and 
unfortunately, when functions are removed from libpython, even those which are 
not called, the vim command detects missing symbols.

The most recent case (May 2017), functions _Py_hgidentity, _Py_hgversion and 
_Py_svnversion were replaced/removed, see "bpo-27593: Get SCM build info from 
git instead of hg (#1327)".

Is it correct to assume that the "_Py_" functions are internal (Python name 
space) and should/must not be called by or made visible to application code?

Could you indicate a URL to the authoritative API documentation?

Thanks for your replies.

Mike Wilson

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Accessing mailing list archives

2018-07-31 Thread Michael Selik
Would it be possible to normalize by the number of mailing list members and
also by "active" members? The latter would be tricky to define.

On Mon, Jul 30, 2018 at 3:29 PM Victor Stinner  wrote:

> Hi Bob,
>
> I wrote a basic script to compute the number of emails per PEP. It
> requires downloading the gzipped mbox files from the web page of archives per
> month, then ungzipping them:
>
> https://github.com/vstinner/misc/blob/master/python/parse_mailman_mbox_peps.py
>
> Results:
> https://mail.python.org/pipermail/python-committers/2018-April/005310.html
>
> Victor
>
> Le lundi 30 juillet 2018, Bob Purvy  a écrit :
> > hi all,
> > I've been trying to figure out how to access the archives
> programmatically. I'm sure this is easy once you know, but googling various
> things hasn't worked.  What I want to do is graph the number of messages
> about PEP 572 by time.  (or has someone already done that?)
> > I installed GNU Mailman, and downloaded the gzip'ed archives for a
> number of months and unzipped them, and I suspect that there's some way to
> get them all into a single database, but it hasn't jumped out at me.  If I
> count the "Message-ID" lines, the "Subject:" lines, and the "\nFrom " lines
> in one of those text files, I get slightly different numbers for each.
> > Alternatively, they're maybe already in a database, and I just need API
> access to do the querying?  Can someone help me out?
> > Bob ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> https://mail.python.org/mailman/options/python-dev/mike%40selik.org
>
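For reference, once a monthly archive is un-gzipped, the standard mailbox
module reads it directly; a minimal sketch (the file name is illustrative):

import mailbox

count = 0
for msg in mailbox.mbox("2018-July.txt"):   # un-gzipped pipermail archive
    if "PEP 572" in (msg.get("Subject") or ""):
        count += 1
print(count, "messages about PEP 572 in this month's archive")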
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [python-committers] [RELEASED] Python 3.4.9 and Python 3.5.6 are now available

2018-08-07 Thread Michael Felt


On 8/6/2018 11:38 AM, Charalampos Stratakis wrote:
> A side note on your side note. Different distro's have different
> standards, use/customer cases to address etc. In enterprise
> distributions the usual scheme is that the version that you see is the
> minimum one and many fixes coming from upstream or the redistributor
> are incorporated on top of that version. Just check the package
> changelogs. :) CVE's do get fixed and there is actually cooperation
> with upstream on different levels in regards to those. And speaking
> here as one of the people doing that for one of the enterprise
> distros.
>
a) good to hear
b) On AIX they stayed with ssh at version 6.0 for so long that, even
with all the CVEs et al included, it was still extremely weak compared to
6.7 and later, when they tightened the default ciphers. And yes, I fell
over the change - but was glad, in the end, to be rid of weak ssh clients.
c) read package changelogs. The :) is because they are hard to read or
non-existent.

I do not mean to criticize any "enterprise" methods. My "enterprise" of
choice is AIX and when it comes to OSS I dare say everyone else does a
better job (which is why I got started with packaging in the first place
- but only what I need and/or someone requests). However, I do find it
very very hard to know what python 2.7.5 has or does not have that 2.7.15
now has. There are, iirc, quite a few important changes. The "hard" freeze
seems to have come at roughly 2.7.8 or 2.7.9 (just a guess).

Also, as I am trying to test on other platforms it gets a bit
frustrating when the latest python3 I can find is a v3.4.X.

It might be good for project developers (in general, not meant as specific to
python) to understand that version number changes are not followed blindly
by enterprise patch management, and that being too quick with version number
changes will make it more difficult for users to know what they have.

p.s. I do not do this (packaging/patch management) for any "distro". In
that sense I am "just a consumer" who "rolls his own" when/if needed.




___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] AIX and python tests

2018-08-08 Thread Michael Felt
Try again on this.

a) Victor has said he will look, from time to time - after his vacation.
b) our vacations do not overlap
c) a comment was also made privately, re: my starting a worker for
buildbot, that there is not much sense in a bot if no one is working on
the tests.

I'll do my best, in the (limited) time I have to work on c) - but alone
I cannot get anything done.

So, Victor suggested I just ask for others to review for now - so I can
have some semblance of moving forward - before my vacation starts (about
when Victor gets back from his).

In advance - many thanks.

On 8/5/2018 10:59 PM, Michael wrote:
>
> As I have time, I'll dig into these.
>
> I have a couple of PR already 'out there', which I hope someone will
> be looking at when/as he/she/they have time. My time will also be
> intermittent.
>
> My next test - and I hope not too difficult - would be the test_utf8.
> The test:
>
> FAIL: test_cmd_line (test.test_utf8_mode.UTF8ModeTests) fails - and I
> am wondering if it is as simple as the AIX default mode being ISO8859-1
> while the test compares UTF-8 with the locale default. If that
> is the case, obviously this test will never succeed as-is. Am I
> understanding the test properly? If yes, then I'll see what I can come
> up with for a patch to the test for AIX. If no, I'll need some hand
> holding to help me understand the test.
>
> A bigger challenge, and I think a major issue with many of the test
> failures, is test_ssl. Here I already know I'll need some assistance. I
> am quite lost. I know AIX at an expert level, but I do not know python
> (especially python internals, macros, etc.) and after about 3 levels I
> am lost. I also find it hard to get 'artifacts' from the tests to know
> what is expected. Looking forward to assistance from various people - in
> understanding the tests, and probably better python coding criticism.
>
> Michael
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/aixtools%40felt.demon.nl



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [issue17180] shutil copy* unsafe on POSIX - they preserve setuid/setgit bits

2018-08-16 Thread Michael Felt
Traceback (most recent call last):
  File "/data/prj/python/git/python3-3.8/Lib/test/test_shutil.py", line
1491, in test_copy_remove_setuid
    self.assertEqual(oct(mode), oct(harmless_mode))
AssertionError: '0o4500' != '0o500'
- 0o4500
?   -
+ 0o500


--


On 8/15/2018 1:01 PM, Michael Felt wrote:
> Michael Felt  added the comment:
>
> I am looking at this.



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Comparisions for collections.Counters

2018-09-05 Thread Michael Selik
On Wed, Sep 5, 2018 at 3:13 AM Evpok Padding 
wrote:

> According to the [doc][1], `collections.Counter` convenience intersection
> and union functions are meant to help it represent multisets. However, it
> currently lacks comparisons, which would make sense and seems
> straightforward to implement.
>

x = Counter(a=1, b=2)
y = Counter(a=2, b=1)
x > y
?
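The snag is that multiset comparison is only a partial order. A minimal sketch
of the subset test (not something Counter provided at the time):

from collections import Counter

def issubset(a, b):
    """Multiset containment: every count in a is <= the matching count in b."""
    return all(a[k] <= b[k] for k in a)

x = Counter(a=1, b=2)
y = Counter(a=2, b=1)
print(issubset(x, y), issubset(y, x))   # False False -> x and y are incomparable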
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] switch statement

2018-09-21 Thread Michael Selik
First, this sounds like it belongs on python-ideas, not python-dev.
Second, when you do send a message to python-ideas, it'll help to
accompany it with a realistic example usage that motivates your
proposal.
On Fri, Sep 21, 2018 at 11:18 AM  wrote:
>
> Hi,
>
> A humble proposal for a switch-like statement syntax for Python:
>
> - - -
> switch blah in (100, 2, 30, 'bumm'):
>   dosomething1()
>   x = 88
> case blah in (44, 55):
>   otherstuff(9)
> case blah in (8):
>   boo()
> else:
>   wawa()
> - - -
>
> So, let's use, and allow only *tuples*.
> As early as possible, build a jump table, based on (foreknown) small integer 
> values. As in other languages.
> Strings may have to be hashed (in "compile time"), to obtain small integer 
> value. Some secondary checking may
> have to be done for exact content equality. (Alternative: do not allow strings 
> at all.)
> For gaps in the integer range: maybe apply some very basic dividing/shifting 
> to "compact" the range. (As
> compilers optimize in other languages, I guess -- but I may be totally 
> wrong.) (For example find "unused bits"
> in the numbers (in 2-base representation). newnum = orignum >> 3 & 6 | 
> orignum & ~6. newnum is smaller (roughly
> 1/8) than orignum.)
> The (achievable) goal is to be faster than hash table lookup. (A hash table 
> with keys 100, 2, 30, 'bumm' etc.)
> And approach the speed of single array-index lookup. (Or even faster in some 
> cases as there will be just jumps
> instead of calls?)
> (I am not an "expert"!)
>
> Let allow fallthrough or not? - To be decided. (Either is compatible with the 
> above.)
>
>
> I know about PEP 3103 and
> https://docs.python.org/3.8/faq/design.html?highlight=switch#why-isn-t-there-a-switch-or-case-statement-in-python
>
> (I do not know how to comment on a PEP, where to discuss a PEP. If this is 
> inappropriate place, please forward it.)
>
> --
>
>
>
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: 
> https://mail.python.org/mailman/options/python-dev/mike%40selik.org
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [RELEASE] Python 3.7.1rc1 and 3.6.7rc1 now available for testing

2018-09-27 Thread Michael Felt
Not critical - but I note a difference between Python3 3.6.7 and 3.7.1 -
no support for the configure option --with-openssl.

On AIX I was able to run both configure and "make install" without incident.

I also ran the "make test" command.

v3.7.1:

9 tests failed again:
    test_ctypes test_distutils test_httpservers test_importlib
    test_site test_socket test_time  test_utf8_mode test_venv
 

There are, for most of above, a PR for these waiting final review and merge.

test_utf8_mode: I thought this was already merged. Will research.

test_venv, test_site: new test failures (I am not familiar with). Will
need more research.

v3.6.7:
16 tests failed:
    test_asyncio test_ctypes test_distutils test_ftplib test_httplib
    test_httpservers test_importlib test_locale
    test_multiprocessing_fork test_multiprocessing_forkserver
    test_multiprocessing_spawn test_socket test_ssl test_strptime
    test_time test_tools

FYI: again, there are PR for many of these, but, for now, I'll assume
they will not be considered for backport. FYI only.

On 9/27/2018 4:21 AM, Ned Deily wrote:
>  Assuming no
> critical problems are found prior to 2018-10-06, no code changes are
> planned between these release candidates and the final releases. These
> release candidates are intended to give you the opportunity to test the
> new security and bug fixes in 3.7.1 and 3.6.7. We strongly encourage you
> to test your projects and report issues found to bugs.python.org as soon
> as possible.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-09-28 Thread Michael Selik
On Fri, Sep 28, 2018 at 2:11 PM Sean Harrington  wrote:
> kwarg on Pool.__init__ called `expect_initret`, that defaults to False. When 
> set to True:
> Capture the return value of the initializer kwarg of Pool
> Pass this value to the function being applied, as a kwarg.

The parameter name you chose, "initret" is awkward, because nowhere
else in Python does an initializer return a value. Initializers mutate
an encapsulated scope. For a class __init__, that scope is an
instance's attributes. For a subprocess managed by Pool, that
encapsulated scope is its "globals". I'm using quotes to emphasize
that these "globals" aren't shared.
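For reference, the usual recipe looks roughly like this (an untested
sketch; the dict literal just stands in for whatever expensive object you
want each worker to hold, and init_worker/work are placeholder names):

    from multiprocessing import Pool

    _resource = None   # each worker process gets its own copy of this "global"

    def init_worker():
        # Runs once per worker; "returns" by mutating the worker's own globals.
        global _resource
        _resource = {"big": "object"}

    def work(key):
        return _resource[key]

    if __name__ == "__main__":
        with Pool(4, initializer=init_worker) as pool:
            print(pool.map(work, ["big"] * 3))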


On Fri, Sep 28, 2018 at 4:39 PM Sean Harrington  wrote:
> On Fri, Sep 28, 2018 at 6:45 PM Antoine Pitrou  wrote:
>> 3. If you don't like globals, you could probably do something like
>> lazily-initialize the resource when a function needing it is executed
>
> if initializing the resource is expensive, we only want to do this ONE time 
> per worker process.

We must have a different concept of "lazily-initialize". I understood
Antoine's suggestion to be a one-time initialize per worker process.
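i.e., something along these lines (a minimal, untested sketch; the sqlite
connection is only a stand-in for an expensive resource, and the function
names are made up):

    import functools
    import sqlite3
    from multiprocessing import Pool

    @functools.lru_cache(maxsize=None)
    def get_connection():
        # Created at most once per worker process, on first use.
        return sqlite3.connect(":memory:")

    def work(n):
        conn = get_connection()
        return conn.execute("SELECT ?", (n,)).fetchone()[0]

    if __name__ == "__main__":
        with Pool(4) as pool:
            print(pool.map(work, range(10)))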


On Fri, Sep 28, 2018 at 4:39 PM Sean Harrington  wrote:
> My simple argument is that the developer should not be constrained to make 
> the objects passed globally available in the process, as this MAY break 
> encapsulation for large projects.

I could imagine someone switching from Pool to ThreadPool and getting
into trouble, but in my mind using threads is caveat emptor. Are you
worried about breaking encapsulation in a different scenario?


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-09-29 Thread Michael Selik
On Sat, Sep 29, 2018 at 5:24 AM Sean Harrington  wrote:
>> On Fri, Sep 28, 2018 at 4:39 PM Sean Harrington  wrote:
>> > My simple argument is that the developer should not be constrained to make 
>> > the objects passed globally available in the process, as this MAY break 
>> > encapsulation for large projects.
>>
>> I could imagine someone switching from Pool to ThreadPool and getting
>> into trouble, but in my mind using threads is caveat emptor. Are you
>> worried about breaking encapsulation in a different scenario?
>
> >> Without a specific example on-hand, you could imagine a tree of function 
> >> calls that occur in the worker process (even newly created objects), that 
> >> should not necessarily have access to objects passed from parent -> 
> >> worker. In every case given the current implementation, they will.

Echoing Antoine: If you want some functions to not have access to a
module's globals, you can put those functions in a different module.
Note that multiprocessing already encapsulates each subprocesses'
globals in essentially a separate namespace.
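A quick way to see that (an untested sketch; VALUE and work are just
placeholder names):

    import os
    from multiprocessing import Pool

    VALUE = "parent"

    def work(_):
        # Rebinding the module global here only affects this worker's copy.
        global VALUE
        VALUE = "worker %d" % os.getpid()
        return VALUE

    if __name__ == "__main__":
        with Pool(2) as pool:
            print(pool.map(work, range(4)))
        print(VALUE)   # still "parent" in the parent process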

Without a specific example, this discussion is going to go around in
circles. You have a clear aversion to globals. Antoine and I do not.
No one else seems to have found this conversation interesting enough
to participate, yet.


[Python-Dev] LDLAST variable in configure.ac

2018-10-01 Thread Michael Felt
Hi all,

Before I submit a patch to increase the default MAXDATA setting for AIX
when in 32-bit mode - I want to know if I can put this LDFLAG setting in
LDLAST, or if I should introduce a new AC_SUBST() variable (e.g.,
LDMAXDATA).

I have not looked yet, but I was thinking that MAYBE! LDLAST is intended
as a last resort variable that can be modified in Makefile.

Thanks!

Michael






Re: [Python-Dev] Change in Python 3's "round" behavior

2018-10-01 Thread Michael Felt


On 9/30/2018 2:17 PM, Steven D'Aprano wrote:
>  (It's also called Dutch Rounding.)

Ah - as to why - and from school! (as so-called intuitive! rather desired!).

A test score goes from 5.5 to 6.0 - which becomes passing.

Oh, do I recall my children's frustrations when they had a X.4Y score -
that became X.0. Tears!



[Python-Dev] dear core-devs

2018-10-01 Thread Michael Felt
Dear core-devs,

I have some bad characteristics.

I can be extremely enthusiastic - and write too much. I have been trying
to not write - anything - worried that my enthusiasm is not matched by
yours, or worse was a reason to ignore my work to get AIX passing all tests.

FYI: since the end of July I have dedicated 16 to 24 hours of my free
time to get this done. All for Python; all in my freetime. My employer
does not care - I do, or did.

I am grateful to Martin Panter - who helped me graciously when I knew
absolutely nothing when I first got started; Victor was kind enough to
answer some emails and help me along but also clear that he has zero
interest in AIX and my questions were taking too much of his time.
Regretfully for me.

Again - Victor - thank you for your time. I appreciated the assistance
and feedback.

(Others have helped from time to time, my apologies for not naming you
specifically.)

I am, to put it lightly, extremely frustrated, at this point.

I am sorry, for myself obviously - but also for Python. Obviously, I am
doing it all wrong - as I see lots of other issues being picked up
immediately.

All volunteers need some level of recognition to keep moving on.

And, while you may not give a damn about anything other than Windows,
macos and/or Linux - there are other platforms that would like a stable
Python.

Sincerely,

Michael






Re: [Python-Dev] LDLAST variable in configure.ac

2018-10-02 Thread Michael Felt


[PGP-encrypted attachment (sent encrypted by mistake); the plain-text version follows in the next message]


Re: [Python-Dev] dear core-devs

2018-10-02 Thread Michael Felt
I am willing to assist as best I can with AIX - I seem to have the core
requirements re: time available (i.e., over-committed at work, but I
'work' evenings and weekends on OSS :p)


On 10/2/2018 6:41 PM, Simon Cross wrote:
> Are there any core devs that Michael or Erik could collaborate with?
> Rather than rely on adhoc patch review from random core developers.
>
> Michael and Eric: Question -- are you interested in becoming core
> developers at least for the purposes of maintaining these platforms in
> future?


Re: [Python-Dev] dear core-devs

2018-10-02 Thread Michael Felt



On 10/2/2018 4:45 PM, Erik Bray wrote:
> Michael, if there are any PRs you want to point me to that I might be
> able to help review please do.
A little trick I learned:
https://github.com/python/cpython/pulls?q=is%3Aopen+is%3Apr+author%3Aaixtools+sort%3Aupdated-desc
lists them all.

What "flipped my switch" yesterday was discovering that a PR I was
gifted (by an ex? core-dev) and put into the system back in January is now
broken by a patch merged about two weeks ago. Worse, pieces of
test_ctypes (bitfields) that previously worked when using __xlc__ seem to
be broken. This highlighted the "time pressure" of getting tests to
pass so that regressions can be seen.

If you let me know what info you would need (I gave lots of debug info
two years ago to get that initial fix).

And, I guess the other "larger" change re: test_distutils. Also, some
issues specific to xlc being different from gcc.

Those two do not show on the gccfarm buildbot.

Many thanks for the offer! I'll try to not take more than the hand offered!
>   I don't know anything about AIX either
> and am not a core dev so I can't have a final say.  But I've been
> hacking on CPython for a long time anyways, and might be able to help
> at least with some initial review.



Re: [Python-Dev] LDLAST variable in configure.ac

2018-10-02 Thread Michael Felt
Yes, unintended. It was only supposed to be signed, but "Send Later" 
encrypts it.

Unpacked version:



On 10/2/2018 1:07 AM, Benjamin Peterson wrote:
> On Mon, Oct 1, 2018, at 12:12, Michael Felt wrote:
>> Hi all,
>>
>> Before I submit a patch to increase the default MAXDATA setting for AIX
>> when in 32-bit mode - I want to know if I can put this LDFLAG setting in
>> LDLAST, or if I should introduce a new AC_SUBST() variable (e.g.,
>> LDMAXDATA).
> I think you should just put it in LDFLAGS.
I was hoping to avoid that, as LDFLAGS is an environment variable.

At the surface, it appears Python is using PY_LDFLAGS (with
CONFIGURE_LDFLAGS coming from LDFLAGS during the ./configure run).

A reason for a separate variable is that this particular option is only
relevant for the python EXE, and not for shared libraries and "other
things". IMHO, a reason for LDMAXDATA is because LDLAST is actually
already too widely used:

root@x066:[/data/prj/python/git/cpython-master]grep LDFLAGS *.in
Makefile.pre.in:CONFIGURE_LDFLAGS=  @LDFLAGS@
Makefile.pre.in:# Avoid assigning CFLAGS, LDFLAGS, etc. so users can use
them on the
Makefile.pre.in:# Both CPPFLAGS and LDFLAGS need to contain the shell's
value for setup.py to
Makefile.pre.in:PY_LDFLAGS= $(CONFIGURE_LDFLAGS) $(LDFLAGS)
Makefile.pre.in:LDSHARED=   @LDSHARED@ $(PY_LDFLAGS)
Makefile.pre.in:BLDSHARED=  @BLDSHARED@ $(PY_LDFLAGS)
Makefile.pre.in:OPENSSL_LDFLAGS=@OPENSSL_LDFLAGS@
Makefile.pre.in:    $(MAKE) @DEF_MAKE_RULE@ CFLAGS_NODIST="$(CFLAGS)
$(PGO_PROF_GEN_FLAG)" LDFLAGS="$(LDFLAGS) $(PGO_PROF_GEN_FLAG)"
LIBS="$(LIBS)"
Makefile.pre.in:    $(MAKE) @DEF_MAKE_RULE@ CFLAGS_NODIST="$(CFLAGS)
$(PGO_PROF_USE_FLAG)" LDFLAGS="$(LDFLAGS)"
Makefile.pre.in:    $(LINKCC) $(PY_LDFLAGS) $(LINKFORSHARED) -o $@
Programs/python.o $(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)
Makefile.pre.in: $(CC) -dynamiclib -Wl,-single_module
$(PY_LDFLAGS) -undefined dynamic_lookup
-Wl,-install_name,$(prefix)/lib/libpython$(LDVERSION).dylib
-Wl,-compatibility_version,$(VERSION) -Wl,-current_version,$(VERSION) -o
$@ $(LIBRARY_OBJS) $(SHLIBS) $(LIBC) $(LIBM) $(LDLAST); \
Makefile.pre.in:    $(CC) -o $(LDLIBRARY) $(PY_LDFLAGS) -dynamiclib \
Makefile.pre.in:    $(LINKCC) $(PY_LDFLAGS) $(LINKFORSHARED) -o $@
Programs/_testembed.o $(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)
Makefile.pre.in:    $(LINKCC) $(PY_LDFLAGS) -o $@
Programs/_freeze_importlib.o $(LIBRARY_OBJS_OMIT_FROZEN) $(LIBS)
$(MODLIBS) $(SYSLIBS) $(LDLAST)
Makefile.pre.in:    $(CC) $(OPT) $(PY_LDFLAGS) $(PGENOBJS)
$(LIBS) -o $(PGEN)

The ONLY line that needs $LDMAXDATA is:

Makefile.pre.in:    $(LINKCC) $(PY_LDFLAGS) -o $@
Programs/_freeze_importlib.o $(LIBRARY_OBJS_OMIT_FROZEN) $(LIBS)
$(MODLIBS) $(SYSLIBS) $(LDLAST) $(LDMAXDATA)

or set $(LDLAST) at the end rather than append $(LDMAXDATA)
>> I have not looked yet, but I was thinking that MAYBE! LDLAST is intended
>> as a last resort variable that can be modified in Makefile.
> LDLAST looks vestigial from OSF/1 support and should probably be removed.


On 10/2/2018 2:51 PM, Łukasz Langa wrote:
>> On 2 Oct 2018, at 12:29, Michael Felt  wrote:
>>
>> 
> Michael, this message looks encrypted on my end. For people without your 
> public key, it's impossible to read. This was probably unintentional on your 
> end. In either case I'd avoid encrypting messages that go to public mailing 
> lists.
>
> - Ł



Re: [Python-Dev] dear core-devs

2018-10-02 Thread Michael Felt


On 10/2/2018 11:34 PM, Terry Reedy wrote:
> On 10/2/2018 12:41 PM, Simon Cross wrote:
>> Are there any core devs that Michael or Erik could collaborate with?
>> Rather than rely on adhoc patch review from random core developers.
>
> You two might collaborate with each other to the extent of reviewing
> some of each other's PRs. 
Might be difficult. We both, or at least I, claim ignorance of the
other's platform. I still have a lot of PEP to learn, and my idea of a
bug-fix (for Python2) was seen by core-dev as a feature change. I would
not feel comfortable trying to mentor someone in things PEP, etc..
> That still leaves the issue of merging.
How much confidence is there in all the "CI" tests? Does that not offer
sufficient confidence for a core-dev to press merge?
How about "master" continuing to be what it is, but insert a new
"pre-master" branch that the buildbots actually test on (e.g., what is
now the 3.X) and have a 3.8 buildbot - for what is now the "master".

PR would still be done based on master, but an "initial" merge would be
via the pre-master aka 3.X buildbot tests.

How "friendly" git is - whether it would not become too much of a
workload to keep clean - I cannot say. Still learning to use git. Getting
better, but I still do not want to assume it would be easy.

My hope is that it would make it easier to consider a "merge" step that
gets all the buildbots involved for even broader CI tests.

>
>> Michael and Eric: Question -- are you interested in becoming core
>> developers at least for the purposes of maintaining these platforms in
>> future?
>
> Since adhoc is not working to get merges, I had this same suggestion.
> Michael and Erik, I presume you have gotten some guidelines on what
> modifications to C code might be accepted, and what concerns people have.
imho: guidelines - paraphrased - as little as possible :)

I have many assumptions, and one of those is that my assumptions are
probably incorrect.
Goal: have AIX recognized as a Stable platform, even if not in the
highest supported category.
And that implies, support as far as I am able, to keep it "Stable".
>
> I think for tests, a separate test_aix.py might be a good idea for
> aix-only tests
Unclear to me how this would work. Too young in Python I guess (or just
a very old dog), but what test would be needed for AIX, or any other
platform, that would not need to be tested in some fashion for the
'other' platforms? At a hunch, where many platform.system() dependencies
are expected (e.g., test_posix), maybe something could be done in the
class definition. Is there a "root" object/class that all tests inherit
from? A (read-only) "root" attribute (or is a property better?) could
hold the value of platform.system() and, iirc, might be used as a
@property in unittest (so, if not in a "root" class, then in something
like unittest/__init__.py).

I hope to be "close" in "Python thinking" - enough that someone who
actually knows how the pieces fit together could come with a better, and
more appropriate guideline/implementation.

> , while modification of other tests might be limited to adding skips. 
> The idea would be to make it easy to remove aix stuff in the future if
> it again became unsupported.
IMHO: IBM and AIX do not mention it, but for OpenStack cloud management
(very specifically cloud-init) AIX needs a recognized stable Python
implementation. I am "surprised" at the level of communication of IBM
with the Python community.

Personally, I do not see AIX as a specialized platform. Feels more like
the "last-standing" fully supported (commercial OEM) 'POSIX-UNIX'. Of
course my focus is narrow - so maybe there is a lot of support for
commercial platforms such as HPUX, Solaris, and other mainstream UNIXes.
Feel free to correct me!!
> Ditto for other specialized platforms.
>
>
>
>



Re: [Python-Dev] dear core-devs

2018-10-03 Thread Michael Felt


On 10/3/2018 2:48 AM, Terry Reedy wrote:
> On 10/2/2018 7:16 PM, Michael Felt wrote:
>>
>>
>> On 10/2/2018 11:34 PM, Terry Reedy wrote:
>>> On 10/2/2018 12:41 PM, Simon Cross wrote:
>>>> Are there any core devs that Michael or Erik could collaborate with?
>>>> Rather than rely on adhoc patch review from random core developers.
>>>
>>> You two might collaborate with each other to the extent of reviewing
>>> some of each other's PRs.
>
>> Might be difficult. We both, or at least I, claim ignorance of the
>> other's platform.
>
> Partial reviews, short of accept/change are better than no review and
> can make a merge decision easier for a core dev.  You should each be
> or become familiar with PEP 7 and somewhat familiar with local C
> idioms. Do names follow local standards.  Do C-API calls make sense.
Sounds simple enough. The tricky part is "the details".
>
> >>  I still have a lot of PEP to learn, and my idea of a
> >> bug-fix (for Python2) was seen by core-dev as a feature change.
>
> Failures of current tests would seem to me to be bugs.  However, some
> bug fixes require a feature change.  It is an awkward situation.  We
> are increasingly reluctant to patch 2.7.
Some are quite simple to fix, even if hard to find: such as:
"elif cmd is None:" -> "elif notcmd orcmd is None:"

Some are not bugs at all - very hard to find! Instead, "textual"
differences because a library is overly optimized - the expected
exception occurs, but with no error message. Linking with less optimized
libraries (libssl.a and libcrypto.a) resolved many reported test "failures".

Nearly three years ago I was keen to see things in Python2(.7), but not
so much now. I also feel it is time to push hard towards current
Python3 versions.
>
>>> That still leaves the issue of merging.
>> How much confidence is there in all the "CI" tests? Does that not offer
>> sufficient confidence for a core-dev to press merge?
>
> Code for new features or bugs that escaped the tests should have new
> tests.  AIX-specific code should (as in must ;-) be tested before
> being submitted, since it will not be properly tested by CI.  With CI
> now covering Windows twice, Linux twice, and Mac, I believe it has
> become rarer for buildbots to fail after CI passes.  Victor would know.
>
> I  believe that you are initially dealing with bugs that do not pass
> current tests.
I am dealing with tests that do not pass. The dilemma: what is wrong -
the test, or what it is testing? Generally speaking, I cannot call
Python3 (master) broken. So I look for a "root cause" in a test
assumption that is wrong, and find a way to correct that.

Sometimes, it is a bit of both - and those are very hard to resolve
without feedback.

See the discussion, elsewhere, regarding MACADDR. It has never been that
platform Y does not have a MACADDR - rather, platform Y formats it
differently than (all) other platforms.

>
>> How about "master" continuing to be what it is, but insert a new
>> "pre-master" branch that the buildbots actually test on (e.g., what is
>> now the 3.X) and have a 3.8 buildbot - for what is now the "master".
>>
>> PR would still be done based on master, but an "initial" merge would be
>> via the pre-master aka 3.X buildbot tests.
>>
>> How "friendly" git is - that it not become such a workload to keep it
>> clean - I cannot say. Still learning to use git. Better, but still do
>> not want to assume it would be easy.
>
> Too complicated.
>
>> My hope is that it would make it easier to consider a "merge" step that
>> gets all the buildbots involved for even broader CI tests.
>
> I considered the wider buildbot fleet to be post-merge CI ;-).
>
>>> I think for tests, a separate test_aix.py might be a good idea for
>>> aix-only tests
>
> I may be wrong on this.
>


Re: [Python-Dev] dear core-devs

2018-10-03 Thread Michael Felt


On 10/3/2018 1:46 AM, Neil Schemenauer wrote:
> On 2018-10-02, Michael Felt wrote:
>> I am sorry, for myself obviously - but also for Python. Obviously, I am
>> doing it all wrong - as I see lots of other issues being picked up
>> immediately.
> I'm not sure that's the case.  There are a lot of PRs or bugs that
> sit there without getting reviews.  The problem is that few (or no)
> core developers get paid to work on Python.  So, the time they spend
> is motivated by their specific "itch".  Getting reviews on any PR is
> difficult, even for core developers.  In their case, they have to
> option of forcing the issue, I guess.
>
> This is a problem we should try to deal with somehow.  Turning off
> valuable contributors like you is bad.  I'm not sure how to do it
> though.  At the core Python sprint in September there was some talk
> about how CPython developers might get funding.  Maybe that could
> help deal with the backlog of reviews required.
>
>> And, while you may not give a damn about anything other than Windows,
>> macos and/or Linux - there are other platforms that would like a stable
>> Python.
> There is probably some truth in not caring about other platforms.
> The problem from the reviewer perspective is the question of "what
> is the potential downsides of this PR vs what are the benefits?".
> The safest thing is to not approve the PR.  No core developer wants
> to be the person who broke CPython.  You must admit, AIX is an
> extremely niche platform at this point.  I bet if you picked 1000
> software developers at random, it would be likely that zero of them
> have ever used AIX.  So, it's not that we don't care at all about
> AIX but that the cost/benefit equation makes accepting AIX specific
> changes more difficult.
Nods. However - this is a chicken/egg issue (imho). AIX is seen as a weak
platform because no one has ever tackled these failures. When I started on
this I never expected to find a resolution for them all.

Platforms have differences, and when the tests miss such a difference
they give a false result. E.g., one accepted PR was because AIX libc
printf() output for printf(NULL) is "" while other platforms output
"(null)".


>
> One specific suggestion I have about your PR is to try to make your
> changes not AIX specific.  Or at least, make the AIX checking as
> localized as possible.  So, as an example, in test_uuid you have:
>
> _notAIX = not sys.platform.startswith("aix")
a) I thought/hoped this was better practice and performance - calling
sys.platform.startswith("aix") only once, rather than X times.
b) more maintainable (e.g., easy to change to use platform.system())
c) iirc - this got changed to AIX = ..., and throughout the test it is "if
not AIX"...
>
> then later in the module you check that flag.  While that is the
> most direct approach to fixing the issue and making the test pass,
> it is not good for the long term maintainability of the code.  You
> end up with boolean flags like _notAIX spread about the logic.  Over
> time, code like that becomes a nightmare to maintain.
>
> Instead, I would suggest test_uuid is making platform specific
> assumptions that are not true on AIX and possibly other platforms.
> So, do something like:
>
> 
> _IS_AIX = sys.platform.startswith("aix")
better name.
>
> _HAVE_MACADDR = (os.name == 'posix' and not _IS_AIX)
AIX has MACADDR, but formatted with '.' rather than ':' and uses a
single hex-digit when value between dots is < 16 (decimal)
>
> @unittest.skipUnless(_HAVE_MACADDR, 'requires Posix with macaddr')
> def test_arp_getnode(self):
> ...
>
> The _HAVE_MACADDR test is relatively simple and clear, does this
> platform have this capability.  Later in the code, a check for
> _HAVE_MACADDR is also quite clear.  If someone comes along with
> another platform that doesn't support macaddr, they only have to
> change one line of code.
>
> This kind of capability checking is similar to what happened with
> web browsers.  In that case, people discovered that checking the
> User Agent header was a bad idea.  Instead, you should probe for
> specific functionality and not assume based on browser IDs.  For the
> macaddr case, is there some way to you probe the arp command to see
> if supports macaddr? 
I suppose if someone had written the original test with "check program
to see if ..." it would have worked already.
I am trying to get current tests to work with minimal changes.

I am certainly not "blaming" anyone for not knowing this unique behavior
of this platform. Before debugging this I did not know of the difference
either.
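
Just to illustrate the difference (an illustrative sketch only, not what
my actual PR does; the address value is hypothetical): AIX arp prints
something like 0.9.6b.dd.15.3f, and normalizing it to what the test
expects is a one-liner:

    def normalize_aix_mac(addr):
        # e.g. "0.9.6b.dd.15.3f" -> "00:09:6b:dd:15:3f"
        return ":".join("%02x" % int(part, 16) for part in addr.split("."))

    print(normalize_aix_mac("0.9.6b.dd.15.3f"))
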

[Python-Dev] AIX to stable, what does that take?

2018-10-04 Thread Michael Felt
In the buildbots, AIX is marked as "unstable". What is needed to get it
marked as a "stable" platform? That is one of my short-term goals.

My assumption is that it needs to (at least) pass all tests - and that
is why I keep asking for attention. All the PRs to fix individual tests
mean less if they are not merged, for whatever reason.

However, maybe there is another way, or even something additional
needed. Maybe something I cannot provide and then I can adjust my
expectations and goals.

Regards,

Michael



Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-04 Thread Michael Selik
You don't like using Pool.starmap and itertools.repeat or a comprehension
that repeats an object?


On Wed, Oct 3, 2018, 6:30 PM Sean Harrington  wrote:

> Hi guys -
>
> The solution to "lazily initialize" an expensive object in the worker
> process (i.e. via @lru_cache) is a great solution (that I must admit I did
> not think of). Additionally, in the second use case of "*passing a large
> object to each worker process*", I also agree with your suggestion to
> "shelter functions in a different module to avoid exposure to globals" as a
> good solution if one is wary of globals.
>
> That said, I still think "*passing a large object from parent process to
> worker processes*" should be easier when using Pool. Would either of you
> be open to something like the following?
>
>def func(x, big_cache=None):
>return big_cache[x]
>
>big_cache =  { str(k): k for k in range(1) }
>
>ls = [ i for i in range(1000) ]
>
> with Pool(func_kwargs={"big_cache": big_cache}) as pool:
>
> pool.map(func, ls)
>
>
> It's a much cleaner interface (which presumably requires a more difficult
> implementation) than my initial proposal. This also reads a lot better than
> the "initializer + global" recipe (clear flow of data), and is less
> constraining than the "define globals in parent" recipe. Most importantly,
> when taking sequential code and parallelizing via Pool.map, this does not
> force the user to re-implement "func" such that it consumes a global
> (rather than a kwarg). It allows "func" to be used elsewhere (i.e. in the
> parent process, from a different module, testing w/o globals, etc...)..
>
> This would essentially be an efficient implementation of Pool.starmap(),
> where kwargs are static, and passed to each application of "func" over our
> iterable.
>
> Thoughts?
>
>
> On Sat, Sep 29, 2018 at 3:00 PM Michael Selik  wrote:
>
>> On Sat, Sep 29, 2018 at 5:24 AM Sean Harrington 
>> wrote:
>> >> On Fri, Sep 28, 2018 at 4:39 PM Sean Harrington 
>> wrote:
>> >> > My simple argument is that the developer should not be constrained
>> to make the objects passed globally available in the process, as this MAY
>> break encapsulation for large projects.
>> >>
>> >> I could imagine someone switching from Pool to ThreadPool and getting
>> >> into trouble, but in my mind using threads is caveat emptor. Are you
>> >> worried about breaking encapsulation in a different scenario?
>> >
>> > >> Without a specific example on-hand, you could imagine a tree of
>> function calls that occur in the worker process (even newly created
>> objects), that should not necessarily have access to objects passed from
>> parent -> worker. In every case given the current implementation, they will.
>>
>> Echoing Antoine: If you want some functions to not have access to a
>> module's globals, you can put those functions in a different module.
>> Note that multiprocessing already encapsulates each subprocesses'
>> globals in essentially a separate namespace.
>>
>> Without a specific example, this discussion is going to go around in
>> circles. You have a clear aversion to globals. Antoine and I do not.
>> No one else seems to have found this conversation interesting enough
>> to participate, yet.


Re: [Python-Dev] dear core-devs

2018-10-04 Thread Michael Felt


On 10/4/2018 9:55 AM, Petr Viktorin wrote:
> On 10/4/18 9:34 AM, Victor Stinner wrote:
>> Hi,
>>
>> If IBM wants a better Python support, it would help a lot if IBM pays
>> for this development.
I agree. If IBM ...
>> ... Antoine Pitrou has been paid in the past to enhance Python
>> support in Solaris and it worked well.
>
FYI - as I now have access to the gccfarm, and in the spirit of more
generalized "posix" additions, I looked for an HPUX and a Solaris system
to build master on.

make test never finished (one test was still hanging after over 20
minutes, and I had to go). Of the 419 tests, 17 or 18 had failed - roughly
where AIX plus xlc was last July, without my PRs for the tests.

So, while it worked - the money stopped, and Solaris is in no better
numerical shape (test-wise) than AIX.
> Michael explicitly said this is a personal effort. IBM or other big
> money is not involved.
IBM is my employer. As I am not a developer (merely a systems and
management consultant) I do not face losing my job by working on OSS. I
have been called off certain OSS projects because IBM was providing
money and/or developers. This is one of the reasons (being called off
elsewhere) that I have been hesitant to be more involved than I was in
2015-2017.

So, let me be explicit - I can only speak for myself. And as long as no
manager says "No, you cannot work on that", I have given a commitment to
work on this. "Some things cannot be bought" - such as un-biased (I call
it "maverick" rather than merely independent) involvement. On the one
hand, IBM policy is to encourage independent thought. The core goal is to
help customers succeed. But individual managers up and down the line
occasionally have additional business needs, and then workers such as
myself apologize and take a step back - in a word - adjust.

Short answer: my involvement is mine to give at no price. I am
considered one of the world's AIX experts on matters of integration,
performance and security.

So, I have just simple questions for you? Do you value my expertise? May
I assist?

>
> Is paying the best way to get features into Python? Does becoming a
> core dev mean you can now get paid for approving changes? Some of the
> implications are quite disturbing :(
>



Re: [Python-Dev] AIX to stable, what does that take?

2018-10-04 Thread Michael Felt


On 10/4/2018 10:30 AM, INADA Naoki wrote:
> Hello,
>
> First of all, congratulations on passing all test on AIX.
>
>> My assumption is that it needs to (at least) pass all tests - and that
>> is why I keep asking for attention. All the PRs to fix individual tests
>> mean less if they are not merged, for whatever reason.
>>
>> However, maybe there is another way, or even something additional
>> needed. Maybe something I cannot provide and then I can adjust my
>> expectations and goals.
> As a one of core developer, I don't know anything about AIX.
> If my change breaks AIX build, I can't investigate what's happened.
>
> So I think we need following in devguide:
>
> * Brief description about AIX, from developer's point of view.
This I might be able to do. Bullet form:
* Committed to POSIX standard (valid when release came out, so AIX 5.3
conforms to a different standard than AIX 7.2)
* While Linux affinity is recognized, GNU (or GNP - GNU, not POSIX)
integration is not guaranteed - the GNU RTE is not provided under support.
There is a so-called Toolbox of GNU and other OSS utilities, supplied by
many providers and packaged as RPMs. Unfortunately, different RPM
providers (Michael Perzl, BULL Freeware, IBM, and others) use different
numbering (the part after the package version, e.g., python-2.7.10-XXX),
so they do not mix well. A headache for both admins and developers trying
to develop in a GNU-like environment.
* As a consultant, fed up with what many AIX admins call the "RPM hell" -
I do not use any RPMs. I build everything myself, using xlc (gcc
introduces the need for a GNU RTE, e.g., glibc). I package using installp
(the "native" AIX package manager), and strive to make the packages
independent (these days). When there are dependencies I try to build them
as static libraries so that they do not become an additional install
dependency.
* Finally, a bit deeper: while the AIX linker/loader supports svr4 shared
libraries (it is the data, not the file name, that matters), it also
supports having multiple shared libraries in a classic archive. So, rather
than .../lib/libxxx.so and .../lib64/libxxx.so, AIX prefers
.../lib/libxxx.a with two so-called members, with the same or different
names. What is found is determined not by the file name, but by the
symbol name and the ABI size (32-bit or 64-bit).
* Hope that is enough of the key issues for now.
** In general, GNU autotools (autoreconf -f -v) works well, as does
configure.ac et al. for creating OSS Makefiles.
> * How to run AIX on (VirtualBox, AWS EC2, Azure, GCP) easily.
Easily! ? :) - well, on a POWER server it was easy enough for me to
follow the step by step instructions for creating a buildbot. If I had a
POWER server with more resources I would examine using the AIX internal
WPAR - however, a POWER server configured to use PowerVC uses "EC2" aka
cloud-init for creating a virtual machine. With that environment it
should be "easy" to provide additional instructions to cloud-init-ec2.

Or, I provide you a login on my personal server that I run the buildbot
on. etc. etc. - Where there is a will, there is a way.
> * How to set up a development environment for Python.
Again, follow the instructions for setting up a buildbot.
> * How to build Python.
git clone ...
autoreconf -v -f (as needed)
./configure --with-pydebug  #gcc compiler
./configure --with-pydebug --without-computed-gotos # xlc compiler
make
make test
> * How to debug C code.
I learned, 40 years ago, using adb (a debugger) - I do a lot of
single-stepping. gdb is not the default debugger. If I were a developer
I would probably dig into the AIX debuggers (there are at least two: kdb,
the kernel debugger, which I do use occasionally for performance issues,
and another I try to avoid). I add fprintf statements and am looking at
learning how to use probevue.

In short, you probably have many much better ideas on how to debug C
than I do :)
>
> And even though there is a developer guide, fixing issues on AIX will
> take much more time, compared to Linux, macOS, and Windows.
>
> But without this guide, it feels almost impossible to me to maintain
> the AIX build.
IMHO: The AIX build is stable, but this is unrecognized because it does
have differences that cause tests to fail. I can think of one test that
PASSes, but should fail. And another test that passes, but should have
failed (in test_uuid); I have submitted a PR.

I tried to fix "all" in one PR, which confused people - so I redid it as
two (got _uuid working in Python 3.7! yes!!), but the "original" to fix
uuid.py and test_uuid.py is still "awaiting change review".

My gut feeling to maintaining AIX is: a) all test pass so a potential
regression is flagged; b) someone such as myself who knows the platform
and can establish a "root cause" on why it is failing with AIX so that
c) a developer become

Re: [Python-Dev] AIX to stable, what does that take?

2018-10-05 Thread Michael Haubenwallner
Hi Michael,

being on a similar road with Gentoo Prefix, I really do appreciate
your AIX related work!

However, for two (not so minor) topics I've got a little different
experience, which I think should be mentioned here for completion:

On 10/04/2018 11:13 AM, Michael Felt wrote:
> On 10/4/2018 10:30 AM, INADA Naoki wrote:
>> Hello,
>>
>> First of all, congratulations on passing all test on AIX.

>> As a one of core developer, I don't know anything about AIX.
>> If my change breaks AIX build, I can't investigate what's happened.
>>
>> So I think we need following in devguide:
>>
>> * Brief description about AIX, from developer's point of view.
> This I might be able to do. Bullet form:

> ... I build everything myself, using xlc
> (gcc introduces the need for a GNU RTE, e.g., glibc).

Using gcc does *not* require to use glibc or even GNU binutils at all.
Except for gcc's own runtime libraries, there's no need for a GNU RTE.
In fact, in Gentoo Prefix I do use gcc as the compiler, configured to
use AIX provided binutils (as, ld, nm, ...), with AIX libc as RTE.

> * finally, a bit deeper: while the AIX linker loader supports svr4
> shared libraries (it is the data, not the file name) it also supports
> having multiple shared libraries in a classic archive. So, rather that
> .../lib/libxxx.so and .../lib64/libxxx.so AIX prefers .../lib/libxxx.a
> with two so-called members, with same or different names. The one found
> is not it's name, but the symbol name and size of the ABI (32-bit or 64-bit)

While this all is true, having multiple *versions* of one shared library in
one single file is a PITA for package managers - both human or software.

But fortunately, the AIX linker does support so called "Import Files",
allowing for *filename based* shared library versioning like on Linux,
while still allowing for both ABIs in a single library archive file.

For example, libtool provides the --with-aix-soname={aix|svr4|both}
configure flag since libtool-2.4.4.  Although the default will stay
at 'aix' here, in Gentoo Prefix I do use 'svr4' only.  This actually
is a package manager's decision, ideally for all depending packages.
As gcc does use libtool, for more information please refer to
https://gcc.gnu.org/install/configure.html#WithAixSoname
But note that "Import Files" should work with xlc as well.

Thanks!
/haubi/


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-16 Thread Michael Selik
Would this change the other pool method behavior in some way if the user,
for whatever reason, mixed techniques?

imap_unordered will only block when nexting the generator. If the user
mingles nexting that generator with, say, apply_async, could the change
you're proposing have some side-effect?

On Tue, Oct 16, 2018, 5:09 AM Sean Harrington  wrote:

> @Nataniel this is what I am suggesting as well. No cacheing - just storing
> the `fn` on each worker, rather than pickling it for each item in our
> iterable.
>
> As long as we store the `fn` post-fork on the worker process (perhaps as
> global), subsequent calls to Pool.map shouldn't be effected (referencing
> Antoine's & Michael's points that "multiprocessing encapsulates each
> subprocesses globals in a separate namespace").
>
> @Antoine - I'm making an effort to take everything you've said into
> consideration here.  My initial PR and talk
>  was intended to shed light
> on a couple of pitfalls that I often see Python end-users encounter with
> Pool. Moving beyond my naive first attempt, and the onslaught of deserved
> criticism, it seems that we have an opportunity here: No changes to the
> interface, just an optimization to reduce the frequency of pickling.
>
> Raymond Hettinger may also be interested in this optimization, as he
> speaks (with great analogies) about different ways you can misuse
> concurrency in Python . This
> would address one of the pitfalls that he outlines: the "size of the
> serialized/deserialized data".
>
> Is this an optimization that either of you would be willing to review, and
> accept, if I find there is a *reasonable way* to implement it?
>
>
> On Fri, Oct 12, 2018 at 3:40 PM Nathaniel Smith  wrote:
>
>> On Fri, Oct 12, 2018, 06:09 Antoine Pitrou  wrote:
>>
>>> On Fri, 12 Oct 2018 08:33:32 -0400
>>> Sean Harrington  wrote:
>>> > Hi Nathaniel - this if this solution can be made performant, than I
>>> would
>>> > be more than satisfied.
>>> >
>>> > I think this would require removing "func" from the "task tuple", and
>>> > storing the "func" "once per worker" somewhere globally (maybe a class
>>> > attribute set post-fork?).
>>> >
>>> > This also has the beneficial outcome of increasing general performance
>>> of
>>> > Pool.map and friends. I've seen MANY folks across the interwebs doing
>>> > things like passing instance methods to map, resulting in "big" tasks,
>>> and
>>> > slower-than-sequential parallelized code. Parallelizing "instance
>>> methods"
>>> > by passing them to map, w/o needing to wrangle with staticmethods and
>>> > globals, would be a GREAT feature! It'd just be as easy as:
>>> >
>>> > Pool.map(self.func, ls)
>>> >
>>> > What do you think about this idea? This is something I'd be able to
>>> take
>>> > on, assuming I get a few core dev blessings...
>>>
>>> Well, I'm not sure how it would work, so it's difficult to give an
>>> opinion.  How do you plan to avoid passing "self"?  By caching (by
>>> equality? by identity?)?  Something else?  But what happens if "self"
>>> changed value (in the case of a mutable object) in the parent?  Do you
>>> keep using the stale version in the child?  That would break
>>> compatibility...
>>>
>>
>> I was just suggesting that within a single call to Pool.map, it would be
>> reasonable optimization to only send the fn once to each worker. So e.g. if
>> you have 5 workers and 1000 items, you'd only pickle fn 5 times, rather
>> than 1000 times like we do now. I wouldn't want to get any fancier than
>> that with caching data between different map calls or anything.
>>
>> Of course even this may turn out to be too complicated to implement in a
>> reasonable way, since it would require managing some extra state on the
>> workers. But semantically it would be purely an optimization of current
>> semantics.
>>
>> -n
>>


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-17 Thread Michael Selik
If imap_unordered is currently re-pickling and sending func each time it's
called on the worker, I have to suspect there was some reason to do that
and not cache it after the first call. Rather than assuming that's an
opportunity for an optimization, I'd want to be certain it won't have edge
case negative effects.


On Tue, Oct 16, 2018 at 2:53 PM Sean Harrington 
wrote:

> Is your concern something like the following?
>
> with Pool(8) as p:
> gen = p.imap_unordered(func, ls)
> first_elem = next(gen)
> p.apply_async(long_func, x)
> remaining_elems = [elem for elem in gen]
>

My concern was passing the same function (or a function with the same
qualname). You're suggesting caching functions and identifying them by
qualname to avoid re-pickling a large stateful object that's shoved into
the function's defaults or closure. Is that a correct summary?

If so, how would the function cache distinguish between two functions with
the same name? Would it need to examine the defaults and closure as well?
If so, that means it's pickling the second one anyway, so there's no
efficiency gain.

In [1]: def foo(a):
   ...:     def bar():
   ...:         print(a)
   ...:     return bar
In [2]: f = foo(1)
In [3]: g = foo(2)
In [4]: f
Out[4]: <function __main__.foo.<locals>.bar()>
In [5]: g
Out[5]: <function __main__.foo.<locals>.bar()>

If we say pool.apply_async(f) and pool.apply_async(g), would you want the
latter one to avoid serialization, letting the worker make a second call
with the first function object?


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-18 Thread Michael Selik
On Thu, Oct 18, 2018 at 8:35 AM Sean Harrington 
wrote:

> The most common use case comes up when passing instance methods (of really
> big objects!) to Pool.map().
>

This reminds me of that old joke: "A patient says to the doctor, 'Doctor,
it hurts when I ...!' The doctor replies, 'Well, don't do that.'"

Further, let me pivot on my idea of __qualname__...we can use the `id` of
> `func` as the cache key to address your concern, and store this `id` on the
> `task` tuple (i.e. an integer in-lieu of the `func` previously stored
> there).
>

Possible. Does the Pool keep a reference to the passed function in the main
process? If not, couldn't the garbage collector free that memory location
and a new function could replace it? Then it could have the same qualname
and id in CPython. Edge case, for sure. Worse, it'd be hard to reproduce as
it'd be dependent on the vagaries of memory allocation.


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-18 Thread Michael Selik
One idea would be for the Pool method to generate a uuid and slap it on the
function as an attribute. If a function being passed in doesn't have one,
generate one. If it already has one, just pass that instead of pickling.
The child process will keep a cache mapping uuids to functions.
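
Something like this toy sketch (not actual Pool internals; _WORKER_CACHE,
tag and lookup are made-up names):

    import uuid

    _WORKER_CACHE = {}          # lives in each worker process: tag -> callable

    def tag(func):
        # Parent side: attach a uuid the first time we see this callable.
        # (Plain functions only; a bound method would need the tag on its
        # __func__, since method objects reject new attributes.)
        if not hasattr(func, "_pool_tag"):
            func._pool_tag = uuid.uuid4().hex
        return func._pool_tag

    def lookup(tag_value, func=None):
        # Worker side: keep the first callable seen for a tag, reuse it later.
        if tag_value not in _WORKER_CACHE and func is not None:
            _WORKER_CACHE[tag_value] = func
        return _WORKER_CACHE[tag_value]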

I'm still worried about unintended consequences.


On Thu, Oct 18, 2018 at 9:00 AM Michael Selik 
wrote:

> On Thu, Oct 18, 2018 at 8:35 AM Sean Harrington 
> wrote:
>
>> The most common use case comes up when passing instance methods (of
>> really big objects!) to Pool.map().
>>
>
> This reminds me of that old joke: "A patient says to the doctor, 'Doctor,
> it hurts when I ...!' The doctor replies, 'Well, don't do that.'"
>
> Further, let me pivot on my idea of __qualname__...we can use the `id` of
>> `func` as the cache key to address your concern, and store this `id` on the
>> `task` tuple (i.e. an integer in-lieu of the `func` previously stored
>> there).
>>
>
> Possible. Does the Pool keep a reference to the passed function in the
> main process? If not, couldn't the garbage collector free that memory
> location and a new function could replace it? Then it could have the same
> qualname and id in CPython. Edge case, for sure. Worse, it'd be hard to
> reproduce as it'd be dependent on the vagaries of memory allocation.


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-19 Thread Michael Selik
On Fri, Oct 19, 2018 at 5:01 AM Sean Harrington 
wrote:

> I like the idea to extend the Pool class [to optimize the case when only
> one function is passed to the workers].
>

Why would this keep the same interface as the Pool class? If its workers
are restricted to calling only one function, that should be passed into the
Pool constructor. The map and apply methods would then only receive that
function's args and not the function itself. You're also trying to avoid
the initializer/globals pattern, so you could eliminate that parameter from
the Pool constructor. In fact, it sounds more like you'd want a function
than a class. You can call it "procmap" or similar. That's code I've
written more than once.

results = poolmap(func, iterable, processes=cpu_count())

The nuance is that, since there's no explicit context manager, you'll want
to ensure the pool is shut down after all the tasks are finished, even if
the results generator hasn't been fully consumed.
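
One possible shape (an untested sketch; the cleanup-on-early-exit detail
is the part that needs care, and poolmap is just the name used above):

    from multiprocessing import Pool, cpu_count

    def poolmap(func, iterable, processes=None):
        # One-shot helper: build a pool, stream results, and make sure the
        # pool is closed down even if the caller abandons the generator early.
        pool = Pool(processes or cpu_count())
        try:
            yield from pool.imap(func, iterable)
        finally:
            pool.close()
            pool.join()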


Re: [Python-Dev] bpo-34837: Multiprocessing.Pool API Extension - Pass Data to Workers w/o Globals

2018-10-22 Thread Michael Selik
This thread seems more appropriate for python-ideas than python-dev.


On Mon, Oct 22, 2018 at 5:28 AM Sean Harrington 
wrote:

> Michael - the initializer/globals pattern still might be necessary if you
> need to create an object AFTER a worker process has been instantiated (i.e.
> a database connection).
>

You said you wanted to avoid the initializer/globals pattern and have such
things as database connections in the defaults or closure of the
task-function, or the bound instance, no? Did I misunderstand?


Further, the user may want to access all of the niceties of Pool, like
> imap, imap_unordered, etc.  The goal (IMO) would be to preserve an
> interface which many Python users have grown accustomed to, and to allow
> them to access this optimization out-of-the-bag.
>

You just said that the dominant use-case was mapping a single
task-function. It sounds like we're talking past each other in some way.
It'll help to have a concrete example of a case that satisfies all the
characteristics you've described: (1) no globals used for communication
between initializer and task-functions; (2) single task-function, mapped
once; (3) an instance-method as task-function, causing a large
serialization burden; and (4) did I miss anything?



> Having talked to folks at the Boston Python meetup, folks on my dev team,
> and perusing stack overflow, this "instance method parallelization" is a
> pretty common pattern that is often times a negative return on investment
> for the developer, due to the implicit implementation detail of pickling
> the function (and object) once per task.
>

I believe you.


> Is anyone open to reviewing a PR concerning this optimization of Pool,
> delivered as a subclass? This feature restricts the number of unique tasks
> being executed by workers at once to 1, while allowing aggressive
> subprocess-level function cacheing to prevent repeated
> serialization/deserialization of large functions/closures. The use case is
> s.t. the user only ever needs 1 call to Pool.map(func, ls) (or friends)
> executing at once, when `func` has a non-trivial memory footprint.
>

You're quite eager to have this PR merged. I understand that. However, it's
reasonable to take some time to discuss the design of what you're
proposing. You don't need it in the stdlib to get your own work done, nor
to share it with others.



  1   2   3   4   5   6   7   8   9   10   >