se commits* and let me know ASAP if we are missing
something you would like to include on the 3.11.0 final release.
You have until 15:00 UTC+0 today to let me know, otherwise, your changes
will need to wait until 3.11.1.
Thanks for your help!
Regards from sunny London,
Pablo
hon
Software Foundation.
https://www.python.org/psf/
If you have any questions, please reach out to me or another member of the
release team :)
Your friendly release team,
Ned Deily @nad https://discuss.python.org/u/nad
Steve Dower @steve.dower https://discuss.python.org/u/steve.dower
Pablo Gali
code unfortunately.
Hope this helps.
Regards from rainy London,
Pablo Galindo Salgado
> On 26 Oct 2022, at 19:12, David J W wrote:
>
>
> I am writing a Rust version of Python for fun and I am at the parser stage of
> development.
>
> I copied and modified a PEG grammar
on's PEG grammar.
>
> On Wed, Oct 26, 2022 at 12:51 PM Pablo Galindo Salgado <
> pablog...@gmail.com> wrote:
>
>> Hi,
>>
>> I am not sure I understand exactly what you are asking but NEWLINE is a
>> token, not a parser rule. What decides when NEW
my mistakes are not too obvious to end users :P
Being your release manager for 3.11 and 3.10 has been a privilege and an
honor (and it will continue for a couple
of years of bugfixes and security releases, I'm not going anywhere).
Regards from rainy London,
Pablo Galindo Sa
wrote:
I wonder if David may be struggling with the rule that a newline is significant in the grammar unless it appears inside matching brackets/parentheses/braces? I think that's in the lexer. Similarly, multiple newlines are collapsed.
On Wed, Oct 26, 2022 at 1:19 PM Pablo Galindo Salgado <
m running into problems where the parser crashes any time there is some double like NL & N or Newline & NL but I want to nail down NEWLINE's behavior in CPython's PEG grammar.
On Wed, Oct 26, 2022 at 12:51 PM Pablo Galindo Salgado <pablog...@gmail.com> wrote:
Hi,
I am not sure I
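As a side note on the token/parser-rule distinction above, the behavior is easy to observe from the stdlib `tokenize` module: a line break inside brackets becomes the insignificant NL token, while the logical line only ends with a NEWLINE token. The sample source below is illustrative:

```python
import io
import tokenize

# Line break inside brackets -> NL (insignificant); the logical line
# ends with NEWLINE only after the closing bracket.
src = "x = [1,\n     2]\n"
names = [tokenize.tok_name[tok.type]
         for tok in tokenize.generate_tokens(io.StringIO(src).readline)]
print(names)  # NL appears inside the brackets, NEWLINE at the end
```

Multiple blank lines similarly show up as repeated NL tokens rather than extra NEWLINEs, which matches the "collapsed" behavior described above.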
ecided that this will be a discussion point with the rest of the core devs
in the core dev sprint.
In representation of the Steering Council,
Pablo Galindo Salgado
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an e
acting us.
Regards from snowy London,
Pablo Galindo Salgado
> On 15 Dec 2022, at 07:12, wonsuk yang wrote:
>
> Hi! Thank you for this amazing community!
>
> The Python community is the light and the salt for me!
>
> the site https://buildbot.python.org/all/#/grid
>
>
ime!
Regards from rainy London,
Pablo Galindo Salgado
against the different options.
Thanks a lot for your help!
Regards from cloudy London,
Pablo Galindo Salgado
On Mon, 19 Dec 2022 at 17:59, Pablo Galindo Salgado
wrote:
> Hi everyone,
>
> I am very excited to share with you a PEP that Batuhan Taskaya, Lysandros
> Nikolaou and myse
Software Foundation.
https://www.python.org/psf/
Your friendly release team,
Ned Deily @nad
Steve Dower @steve.dower
Pablo Galindo Salgado @pablogsal
Łukasz Langa @ambv
Thomas Wouters @thomas
I am not
contemplating?
Regards from rainy London,
Pablo Galindo Salgado
nparse' I think is unnecessary as my view is
that we should
not guarantee anything other than roundtrip over the generated source
initially.
> I would prefer to keep a separated module, like "import ast.unparse"
or "import unparse".
Why? I think ast.unparse is a natural
Opened https://bugs.python.org/issue38870 to track this.
On Tue, 19 Nov 2019 at 00:40, Pablo Galindo Salgado
wrote:
> Hi,
>
> What do people feel about exposing Tools/parser/unparse.py in the standard
> library? Here is my initial rationale:
>
> * The tool already needs to
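For context, the tool discussed here eventually shipped as `ast.unparse` in Python 3.9. A minimal sketch of the roundtrip guarantee mentioned above (the sample source is illustrative):

```python
import ast

# Parse source into an AST, then generate equivalent source back.
src = "x = (1 + 2)\nprint(x)\n"
tree = ast.parse(src)
regenerated = ast.unparse(tree)  # available since Python 3.9

# The guarantee is a roundtrip over the generated source: re-parsing
# the regenerated code yields an equivalent AST (formatting may differ).
assert ast.dump(ast.parse(regenerated)) == ast.dump(tree)
```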
loudy London,
Pablo Galindo Salgado
, Guido van Rossum wrote:
> How do I find the refleak buildbots? I went to the devguide and searched
> for "buildbot", which pointed to https://www.python.org/dev/buildbot/ --
> but searching there for "refleak" finds nothing.
>
> On Tue, Dec 3, 2019 at 1:16 PM Pab
dbots, and it's frustrating that they are so hard to find.
>
> On Tue, Dec 3, 2019 at 2:17 PM Pablo Galindo Salgado
> wrote:
>
>> > How do I find the refleak buildbots?
>>
>> In this page:
>>
>> https://buildbot.python.org/all/#/builders
>>
>
> Just to clarify, this means that 3.9 will ship with the PEG parser as default,
> right? If so, this would be a new feature, post beta. Since that is counter
> to our
> general policy, we would need to get explicit RM approval for such a change.
The idea is to merge it *before beta* and make it
> About the migration, can I ask who is going to (help to) fix projects
which rely on the AST?
I think you misunderstood: the AST is exactly the same between the old and the new
parser. The only
thing the new parser does differently is not generate an intermediate CST (Concrete
Syntax Tree), and that
is onl
> About the migration, can I ask who is going to (help
to) fix projects which rely on the AST?
Whoops, I sent the previous email before finishing it by mistake. Here is the
extended version of the answer:
I think there is a misunderstanding here: The new parser generates the same AST
as the old p
> That paragraph seems rather confused. I think what it might be
> trying to say is that a PEG parser allows you to write productions
> with overlapping first sets (which would be "ambiguous" for an
> LL parser), but still somehow guarantees that a unique parse tree
> is produced. The latter sugges
>The only thing I'm missing from the PEP is more detail about how the
> cross-language nature of the parser actions are handled. The example covers
> just C, and the description of the actions says they're C expressions. The
> only mention of Python code generation is for alternatives without actio
> The only thing I'm missing from the PEP is more detail about how the
cross-language nature of the parser actions are handled.
Expanded the "actions" section in the PEP here:
https://github.com/python/peps/pull/1357
After the feedback received in the language summit, we have made a modification
to the
proposed migration plan in PEP 617 so the new parser will be the default in
3.9alpha6:
https://github.com/python/peps/pull/1369
> But again this is for PyObjects only.
Not really, we also check memory blocks:
https://github.com/python/cpython/blob/master/Lib/test/libregrtest/refleak.py#L72
as long as you don't directly call malloc and instead use one of the Python-specific
APIs like PyMem_Malloc,
then the refleak code should catc
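The block-count check referred to above is observable from pure Python via `sys.getallocatedblocks()`; a minimal illustration of the idea (absolute numbers are environment-dependent):

```python
import sys

# refleak-style check: compare allocator-level memory block counts
# before and after some work. Allocations made through Python's
# allocator (PyMem_Malloc and friends) are visible here; raw malloc()
# calls made by an extension are not.
before = sys.getallocatedblocks()
held = [object() for _ in range(1000)]  # deliberately hold allocations
after = sys.getallocatedblocks()
print(after > before)
del held
```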
Just some comments on the GC stuff as I added them myself.
> Shouldn't GC track *all* objects?
No, extension types need to opt in to the garbage collector and, if they do,
implement the interface.
> Even if it were named PyObject_Cycle_GC_IsTracked() it would be exposing
internal implementation details
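The opt-in tracking behavior is already observable from Python through `gc.is_tracked`; a small sketch (the exact results reflect CPython implementation details):

```python
import gc

# Atomic immutables can never be part of a reference cycle, so the
# cycle collector does not track them.
print(gc.is_tracked(42))       # False
print(gc.is_tracked("hello"))  # False

# Containers that could participate in cycles are tracked.
print(gc.is_tracked([1, 2]))   # True
```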
I was talking with a colleague today about the PEP and he raised a couple
of questions regarding the match protocol and the proxy result.
One question is that taking into account that 'case object(x)' is valid for
every object, but it does (could do) something different for objects
that have a non-
I like the proposal in general but I am against removing lnotab. The reason
is that many tools rely on reading this attribute to figure out the Python
call stack information. For instance, many sampler profilers read this
memory by using ptrace or process_vm_readv and they cannot execute any code
o
> In theory, this table could be stored somewhere other than the code
object, so that it doesn't actually get paged in or occupy cache unless
tracing is on.
As some of us mentioned before, that will hurt the ecosystem of profilers
and debugger tools considerably
On Thu, 23 Jul 2020 at 18:08, Jim
Thanks for the proposal Mark!
I wanted to make some comments regarding converting AST nodes to PyObjects
internally. I see some challenges here:
* Not using an arena allocator for the nodes can introduce more challenges
than simplifications. The first is that deleting a deep tree currently is
jus
>
>
> * Not using an arena allocator for the nodes can introduce more challenges
>> than simplifications. The first is that deleting a deep tree currently is
>> just freeing the arena block, while if the nodes were PyObjects it will
>> involve recursive destruction. That could potentially segfault
> > Don't we need to do all of this in the _ast module, already?
> > We already have an AST composed of Python objects
We have an AST of Python objects, but the Python versions are not used
internally, especially in the parser, where they are created. The parser
and the compiler
currently use exclus
void it.
On Wed, 16 Sep 2020 at 12:48, Mark Shannon wrote:
>
>
> On 16/09/2020 12:22 pm, Pablo Galindo Salgado wrote:
> > > Don't we need to do all of this in the _ast module, already?
> > > We already have an AST composed of Python objects
> >
>
As someone who went through doing a release just now and knows what it
entails: thanks a lot for all the work, Larry! :)
On Mon, 5 Oct 2020 at 19:39, Barry Warsaw wrote:
> They say being a Python Release Manager is a thankless job, so the Python
> Secret Underground (PSU), which emphatically do
Our experience with automatic testing is that it is unfortunately very
difficult to surface real problems with it. We tried some of the new
experimental source generators on top of hypothesis (
https://pypi.org/project/hypothesmith/) and sadly we could not catch many
important things that parsing exis
ink IIRC.
Do you know if the link to the file you mentioned used to be there?
Thanks,
Pablo
On Thu, 8 Oct 2020, 09:25 Miro Hrončok, wrote:
> On 05. 10. 20 22:22, Łukasz Langa wrote:
> > In fact, our newest Release Manager, Pablo Galindo Salgado, prepared the
> first
> > alpha r
s" and "Timeline" tabs (
https://speed.python.org/timeline/).
* Once the daily builds are working as expected, I plan to work on trying
to automatically comment on PRs or on bpo if
we detect that a commit has introduced some notable performance regression.
Regards from sunny Lo
ssumed I'd misread
> the figures, and moved on, but maybe I was wrong to do so...
>
> Paul
>
> On Wed, 14 Oct 2020 at 14:17, Pablo Galindo Salgado
> wrote:
> >
> > Hi!
> >
> > I have updated the branch benchmarks in the pyperformance serve
on/pyperformance/blob/master/pyperformance/benchmarks/bm_unpack_sequence.py
>
> https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_regex_dna.py
>
> Thanks.
>
> On 14.10.2020 15:16, Pablo Galindo Salgado wrote:
> > Hi!
> >
> > I have updat
ing :)
That's why from now on I am trying to invest in daily builds for master,
so we can answer that exact question if we detect regressions in the future.
On Wed, 14 Oct 2020 at 15:04, M.-A. Lemburg wrote:
> On 14.10.2020 16:00, Pablo Galindo Salgado wrote:
> >> Would it be possible
because
the micro-benchmarks published in the What's New of 3.9 were confusing a
lot of users, who
were wondering whether 3.9 was slower.
On Wed, 14 Oct 2020 at 15:14, Antoine Pitrou wrote:
>
> Le 14/10/2020 à 15:16, Pablo Galindo Salgado a écrit :
> > Hi!
> >
> > I have
18:58, Chris Jerdonek
wrote:
> On Wed, Oct 14, 2020 at 8:03 AM Pablo Galindo Salgado <
> pablog...@gmail.com> wrote:
>
>> > Would it be possible rerun the tests with the current
>> setup for say the last 1000 revisions or perhaps a subset of these
>> (e.g. every