Re: revise "([^/]+)$" into '([^/]+)$' in a lot of files under a directory.

2019-11-18 Thread Pieter van Oostrum
Hongyi Zhao  writes:

> On Sun, 17 Nov 2019 20:28:55 +0100, Pieter van Oostrum wrote:
>
>> To be honest, I myself would use Emacs, with rgrep and wgrep to do this.
>
> Are these tools superior to grep?

They are based on grep. But rgrep does a grep through a whole directory tree, 
or a selection thereof, specified by a file pattern.
wgrep then allows you to edit the grep output, for example just changing 
"([^/]+)$" to '([^/]+)$'.
And then you let it write the changes back to those files.
The advantage is that you see what you are doing.
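
If you want to do it from Python instead, a rough sketch of the same bulk
edit could look like this (the directory, the file pattern and the encoding
are assumptions):

import pathlib

for path in pathlib.Path("some/directory").rglob("*.txt"):
    text = path.read_text(encoding="utf-8")
    new_text = text.replace('"([^/]+)$"', "'([^/]+)$'")
    if new_text != text:
        path.write_text(new_text, encoding="utf-8")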

But now this has become off-topic with regard to Python.
-- 
Pieter van Oostrum
WWW: http://pieter.vanoostrum.org/
PGP key: [8DAE142BE17999C4]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Launching a Script on the Linux Platform

2019-11-18 Thread Wildman via Python-list
On Sun, 17 Nov 2019 18:27:45 +, Barry Scott wrote:

>> On 12 Nov 2019, at 20:24, Wildman via Python-list  
>> wrote:
>> 
>> Yes, I prefer to envoke env in the shebang line instead of
>> depending on the path.  Paths can change especially in a
>> multi-user system but env will always know where to find
>> the executable.
> 
> The path to python will not change surely?

In Linux, being a multi-user OS, the path is not global or
system wide.  The path can be different for different users.
This is done by adding an EXPORT command to ~/.bashrc.

> Because you are installing from a deb you know the exact path to the python 
> you
> need to use. There is no need to use the /usr/bin/env to search the path and
> potential break your code, because a version of python that you do not expect 
> is on
> the path.
> 
> Barry

I don't understand.  The deb does not install python so I
fail to see how I would know the exact path.

As to env breaking my code, never heard of such a thing.

-- 
 GNU/Linux user #557453
"There are only 10 types of people in the world...
those who understand Binary and those who don't."
  -Spike
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Launching a Script on the Linux Platform

2019-11-18 Thread Chris Angelico
On Tue, Nov 19, 2019 at 5:06 AM Wildman via Python-list
 wrote:
>
> On Sun, 17 Nov 2019 18:27:45 +, Barry Scott wrote:
>
> >> On 12 Nov 2019, at 20:24, Wildman via Python-list  
> >> wrote:
> >>
> >> Yes, I prefer to envoke env in the shebang line instead of
> >> depending on the path.  Paths can change especially in a
> >> multi-user system but env will always know where to find
> >> the executable.
> >
> > The path to python will not change surely?
>
> In Linux, being a multi-user OS, the path is not global or
> system wide.  The path can be different for different users.
> This is done by adding an EXPORT command to ~/.bashrc.
>
> > Because you are installing from a deb you know the exact path to the python 
> > you
> > need to use. There is no need to use the /usr/bin/env to search the path and
> > potential break your code, because a version of python that you do not 
> > expect is on
> > the path.
> >
> > Barry
>
> I don't understand.  The deb does not install python so I
> fail to see how I would know the exact path.
>
> As to env breaking my code, never heard of such a thing.
>

The deb should depend on an appropriate Python package. Then you can
assume and expect that this version of Python is installed.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Writing a CPython extension - calling another sibling method ?

2019-11-18 Thread MRAB

On 2019-11-18 07:52, R.Wieser wrote:

Hello all,

I'm trying to edit a binary extension to Python, and have a situation where
I would like to create a method which adds a single argument and then jumps
to / calls another method.  Like this:

static PyObject *py_proc1(PyObject *self, PyObject *args)
{

    Py_RETURN_NONE;
}

static PyObject *py_proc2(PyObject *self, PyObject *args)
{
// call py_proc1 with "foo" prepended to "args"
}

I have no idea how I should do either the call or the adding of that
argument (and going through a few examples I found while googling didn't
show me the answer either).

Then again, I'm not even sure if the "foo" needs to be prepended, or if that
"py_proc1" method can receive more than a single "args" argument ...  Like
this perhaps:

static PyObject *py_proc1(PyObject *self, int MyNewArgument, PyObject *args)
{

}

I've also tried to go the C way (just calling, from "py_proc2", a C function
containing "py_proc1"'s code), but I got lots of errors, most of them in the
realm of the different returns not being of the same type (no idea why it
doesn't complain about it in the original "py_proc1" code itself though).

tl;dr:
I could use some examples that show how to work with PyObject subfunctions.

Regards,
Rudy Wieser

P.s.
Yes, this is related to my earlier questions and problems.

One possibility is to refactor the code so that py_proc1 and py_proc2 
themselves just handle their arguments and then call the function that 
does the actual work.


A clunkier way would be to make a new tuple that consists of the 
prepended item and the items of args and pass that to py_proc1 as its 
args. When py_proc1 returns its result object, DECREF the new tuple to 
clean up and then return the result object.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Writing a CPython extension - calling another sibling method ?

2019-11-18 Thread R.Wieser
MRAB,

> One possibility is to refactor the code so that py_proc1 and py_proc2 
> themselves just handle their arguments and then call the function that 
> does the actual work.

The thing is that the arguments of py_proc1 and py_proc2 are the same, but 
for a single argument.  Which means that letting both of them first parse 
their own arguments means duplicated code, which I do not really want and 
thus try to evade.

But yes, that is a possibility too.  The "function that does the actual 
work" part is what I tried to describe with my second example.

> A clunkier way would be to make a new tuple that consists of the prepended 
> item and the items of args and pass that to py_proc1 as its args.

That is what I tried to describe with my first example.

The thing is I have no clue about what the above calling should look like 
(though I think I already found how to append my argument to the "args" 
string-object).

In other words, do you have any idea of what either of those calling methods 
should look like ?   An example perhaps ?   Having only encountered the 
CPython API two days ago I'm still fumbling in the dark I'm afraid.

Regards,
Rudy Wieser


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Launching a Script on the Linux Platform

2019-11-18 Thread Wildman via Python-list
On Tue, 19 Nov 2019 05:09:07 +1100, Chris Angelico wrote:

> On Tue, Nov 19, 2019 at 5:06 AM Wildman via Python-list
>  wrote:
>>
>> On Sun, 17 Nov 2019 18:27:45 +, Barry Scott wrote:
>>
>> >> On 12 Nov 2019, at 20:24, Wildman via Python-list 
>> >>  wrote:
>> >>
>> >> Yes, I prefer to envoke env in the shebang line instead of
>> >> depending on the path.  Paths can change especially in a
>> >> multi-user system but env will always know where to find
>> >> the executable.
>> >
>> > The path to python will not change surely?
>>
>> In Linux, being a multi-user OS, the path is not global or
>> system wide.  The path can be different for different users.
>> This is done by adding an EXPORT command to ~/.bashrc.
>>
>> > Because you are installing from a deb you know the exact path to the 
>> > python you
>> > need to use. There is no need to use the /usr/bin/env to search the path 
>> > and
>> > potential break your code, because a version of python that you do not 
>> > expect is on
>> > the path.
>> >
>> > Barry
>>
>> I don't understand.  The deb does not install python so I
>> fail to see how I would know the exact path.
>>
>> As to env breaking my code, never heard of such a thing.
>>
> 
> The deb should depend on an appropriate Python package. Then you can
> assume and expect that this version of Python is installed.
> 
> ChrisA

Yes, of course, python(3) is listed as a "depends" in the deb
control file.  That does ensure that python is installed but
in no way does that tell me the path of the python executable.

-- 
 GNU/Linux user #557453
"Be at war with your vices, at peace with your neighbors
and let every new year find you a better man."
  -Benjamin Franklin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Writing a CPython extension - calling another sibling method ?

2019-11-18 Thread MRAB

On 2019-11-18 20:15, R.Wieser wrote:

MRAB,

One possibility is to refactor the code so that py_proc1 and py_proc2 
themselves just handle their arguments and then call the function that 
does the actual work.


The thing is that the arguments of py_proc1 and py_proc2 are the same, but
for a single argument.  Which means that letting both of them first parse
their own arguments means duplicated code, which I do not really want and
thus try to evade.

But yes, that is a possibility too.  The "function that does the actual
work" part is what I tried to describe with my second example.

A clunkier way would be to make a new tuple that consists of the prepended 
item and the items of args and pass that to py_proc1 as its args.


That is what I tried to describe with my first example.

The thing is I have no clue about what the above calling should look like
(though I think I already found how to append my argument to the "args"
string-object).

In other words, do you have any idea of what either of those calling methods
should look like ?   An example perhaps ?   Having only encountered the
CPython API two days ago I'm still fumbling in the dark I'm afraid.


It could be something like this:


static PyObject *py_proc2(PyObject *self, PyObject *args)
{
    /*** TODO: Add error checking. ***/
    PyObject* prepend_arg;
    PyObject* prepend_tuple;
    PyObject* new_args;
    PyObject* result;

    /* The object to be prepended. */
    prepend_arg = PyUnicode_FromString("foo");

    /* Make a tuple from the prepended object. */
    prepend_tuple = Py_BuildValue("(O)", prepend_arg);

    /* No longer need prepend_arg. */
    Py_DECREF(prepend_arg);

    /* Make the new argument list. */
    new_args = PySequence_Concat(prepend_tuple, args);

    /* No longer need prepend_tuple. */
    Py_DECREF(prepend_tuple);

    /* Call the other method. */
    result = py_proc1(self, new_args);

    /* No longer need new_args. */
    Py_DECREF(new_args);

    return result;
}
--
https://mail.python.org/mailman/listinfo/python-list


Re: Launching a Script on the Linux Platform

2019-11-18 Thread Peter J. Holzer
On 2019-11-18 15:01:57 -0600, Wildman via Python-list wrote:
> On Tue, 19 Nov 2019 05:09:07 +1100, Chris Angelico wrote:
> > On Tue, Nov 19, 2019 at 5:06 AM Wildman via Python-list
> >  wrote:
> >> On Sun, 17 Nov 2019 18:27:45 +, Barry Scott wrote:
> >> > Because you are installing from a deb you know the exact path to the 
> >> > python you
> >> > need to use. There is no need to use the /usr/bin/env to search the path 
> >> > and
> >> > potential break your code, because a version of python that you do not 
> >> > expect is on
> >> > the path.
> >>
> >> I don't understand.  The deb does not install python so I
> >> fail to see how I would know the exact path.
> >>
> >> As to env breaking my code, never heard of such a thing.
> >>
> > 
> > The deb should depend on an appropriate Python package. Then you can
> > assume and expect that this version of Python is installed.
> 
> Yes, of course, python(3) is listed as a "depends" in the deb
> control file.  That does insure that python is installed but
> in no way does that tell me the path of the python executable.

The Debian packaging guidelines tell you where the executable has to be.
If you install the python package you can be very sure that the
executable will be in /usr/bin. And this is the executable you want to
use. You don't want to use some other random program called "python"
(which may or may not be an interpreter for some version of the Python
language) which just happens to be in the user's path.
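
Concretely, a script shipped in such a deb (one that depends on Debian's
python3 package) can simply hard-code the interpreter in its shebang line,
e.g.:

#!/usr/bin/python3
# ... rest of the packaged script ...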

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | [email protected] |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Friday finking: TDD and EAFP

2019-11-18 Thread Peter J. Holzer
On 2019-11-13 15:16:55 +1300, DL Neil via Python-list wrote:
> On 4/11/19 9:44 AM, Peter J. Holzer wrote:
> > TDD does in my opinion encourage EAFP thinking.
> > 
> > The TDD is usually:
> > 
> >  1 Write a test
> >  2 Write the minimal amount of code that makes the test pass
> >  3 If you think you have covered the whole spec, stop, else repeat
> >from 1
> > 
> > This is often (e.g. in [1]) exaggerated for pedagogic and humoristic
> > reasons. For example, your first test for a sqrt function might be
> >  assert(sqrt(4) == 2)
> > and then of course the minimal implementation is
> >  def sqrt(x):
> >  return 2
> 
> I have seen this sort of thing in spreadsheet training - someone pulling-out
> a calculator, summing a column of numbers, and typing 'the answer' in the
> "Total" cell (instead of using the Sigma button or @SUM() ).
> 
> However, I've never seen anyone attempt to pass-off this sort of code
> outside of desperate (and likely, far too late) floundering during a 101
> assignment - FAILED!
> 
> Who would begin to believe that such code implements sqrt, or that it meets
> with the function's objectives as laid-out in the spec AND the docstring?
> So, anyone can prove anything - if they leave reality/reason far-enough
> behind.

I'm not a TDD expert, but my understanding is that this kind of thing is
meant seriously.

But of course it is not meant as a finished program. It is meant as a
first step. And there is a reason for starting with an obviously
incomplete solution: It makes you aware that your test suite is
incomplete and your program is incomplete, and that you will have to
improve both.

If you write this simple test and then write a complete implementation
of sqrt, there is a strong temptation to say "the code is complete, it
looks correct, I have a test and 100 % code coverage; therefore I'm
done". But of course you aren't - that one test case is woefully
inadequate. As is demonstrated by writing a completely bogus
implementation which passes the test.
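
For instance, the obvious next test (the values are picked just for
illustration) already fails against that bogus implementation and forces a
real one:

    assert(sqrt(4) == 2)   # still passes
    assert(sqrt(9) == 3)   # fails against "return 2"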

You say you write all the tests in advance (I read that as "try to write
a reasonably complete test suite in advance").

That prevents the pitfall of writing only a few alibi tests. It also has
the advantage that you are in a different mindset when writing tests
than when writing code (almost as good as having someone else write the
code).

However, it means that you consider your tests to be complete when you
start to write the code. So there is no feedback. If you forgot to
include tests with non-integer results in your test suite (yes, I'm
aware you wrote that quickly for a mailing-list posting and probably
wouldn't make that mistake if you really wanted to implement sqrt), you
probably won't think of it while writing the code, because now you are
in the code-writing mindset, not the test-devising mindset.

I think that tight feedback loop between writing a test and writing the
*minimal* code which will pass the test has some value: You are
constantly trying to outsmart yourself: When you are writing tests you
try to cover a few more additional potential mistakes and when you are
writing code you try to find loop-holes in your tests.

> Great joke, but its proponents are delaying a proper consideration of TDD.

I don't know what "proper" TDD is (and even less "proper consideration"
of TDD), but TDD is in my opinion very much rooted in the agile mindset.
And that means frequent iteration and improvement. So I think the
micro-iteration technique is closer to philosophically pure TDD (if such
a thing exists) than your waterfally "write complete spec, then write
all tests, then write code" technique (That doesn't mean that your
technique is bad - it's just not what I think people are talking about
when they say "TDD").

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | [email protected] |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to delay until a next increment of time occurs ?

2019-11-18 Thread Peter J. Holzer
On 2019-11-15 12:11:31 +0100, R.Wieser wrote:
> Dennis,
> > No, that addition is a fixed increment on the initial starting
> > time, and is NOT relative to the ending of a sleep.
> 
> > No, that addition is a fixed increment
> 
> Yep.
> 
> > on the initial starting time
> 
> Nope.
> 
> If pythons sleep is worth its salt it ends exactly on the time provided in 
> "t".

No. There are many reasons why sleep() might return after t (If I wanted
to be picky, I'd point out that sleep() will never end *exactly* at t,
but let's read "exactly" as "so close it doesn't make a difference").
Very few of those reasons have anything to do with Python: they range from
the OS (which simply doesn't guarantee that, and may not be able to switch
to any process at any time) to the hardware (there may not be a timer with
sufficient resolution, or the interrupt may be masked, and of course there
are context-switch times) to the state of the system (the system may be
busy with a higher-priority task, or the page of your program may not be in
RAM and may have to be read from disk first), etc.


> Thus the subsequent addition to that "t" is releative to the end of 
> the sleep just before it.

No, because you get the actual time at some point between the previous and
the next sleep and use that to compute how long to sleep.  So even if
sleep() always returned at exactly the intended time, the time used to
compute the sleep time would be a little later.  And of course sleep is
never that punctual, but that doesn't matter, since it just adds to that "a
little later".
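
A minimal sketch of that fixed-increment pattern (the one-second interval
and the print() stand-in are assumptions, not code from this thread):

import time

interval = 1.0                    # seconds between ticks
t = time.monotonic()              # start of the fixed schedule
while True:
    t += interval                 # next point on the grid, not "now + interval"
    time.sleep(max(0.0, t - time.monotonic()))
    print("tick at", time.monotonic())   # stand-in for the periodic work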

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | [email protected] |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Launching a Script on the Linux Platform

2019-11-18 Thread Wildman via Python-list
On Mon, 18 Nov 2019 22:15:31 +0100, Peter J. Holzer wrote:

> On 2019-11-18 15:01:57 -0600, Wildman via Python-list wrote:
>> On Tue, 19 Nov 2019 05:09:07 +1100, Chris Angelico wrote:
>> > On Tue, Nov 19, 2019 at 5:06 AM Wildman via Python-list
>> >  wrote:
>> >> On Sun, 17 Nov 2019 18:27:45 +, Barry Scott wrote:
>> >> > Because you are installing from a deb you know the exact path to the 
>> >> > python you
>> >> > need to use. There is no need to use the /usr/bin/env to search the 
>> >> > path and
>> >> > potential break your code, because a version of python that you do not 
>> >> > expect is on
>> >> > the path.
>> >>
>> >> I don't understand.  The deb does not install python so I
>> >> fail to see how I would know the exact path.
>> >>
>> >> As to env breaking my code, never heard of such a thing.
>> >>
>> > 
>> > The deb should depend on an appropriate Python package. Then you can
>> > assume and expect that this version of Python is installed.
>> 
>> Yes, of course, python(3) is listed as a "depends" in the deb
>> control file.  That does insure that python is installed but
>> in no way does that tell me the path of the python executable.
> 
> The debian packaging guidelines tell you where the execuable has to be.
> If you install the python package you can be very sure that the
> executable will be in /usr/bin. And this is the executable you want to
> use. You don't want to use some other random program called "python"
> (which may or may not be an interpreter for some version of the Python
> language) which just happens to be in the user's path.
> 
> hp

Yes, /usr/bin is the likely place to find the python executable
but a guideline is not a guarantee.  I have always been taught
it is never a good idea to use a hard path unless it is something
installed with your program or something created by your program.
That approach has not failed me.

-- 
 GNU/Linux user #557453
The cow died so I don't need your bull!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Friday finking: TDD and EAFP

2019-11-18 Thread Mark Turner


> On Nov 18, 2019, at 4:07 PM, Peter J. Holzer  wrote:
> 
> On 2019-11-13 15:16:55 +1300, DL Neil via Python-list wrote:
>> On 4/11/19 9:44 AM, Peter J. Holzer wrote:
>>> TDD does in my opinion encourage EAFP thinking.
>>> 
>>> The TDD is usually:
>>> 
>>> 1 Write a test
>>> 2 Write the minimal amount of code that makes the test pass
>>> 3 If you think you have covered the whole spec, stop, else repeat
>>>   from 1
>>> 
>>> This is often (e.g. in [1]) exaggerated for pedagogic and humoristic
>>> reasons. For example, your first test for a sqrt function might be
>>> assert(sqrt(4) == 2)
>>> and then of course the minimal implementation is
>>> def sqrt(x):
>>> return 2
>> 

I think a simple test like this has value. It's just not where you expect it to 
be. In order to get this test to pass you have to have your development 
environment set up, your testing environment set up and perhaps some basic 
dependencies resolved. If this test doesn’t pass then it’s not the code that 
needs debugging, it’s the environment. Later on, after this module of code is 
finished, if a lot of tests start failing for some reason and this simple test 
is one of them, then a good place to start debugging is the environment.

 - Mark
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Writing a CPython extension - calling another sibling method ?

2019-11-18 Thread Michael Torrie
On 11/18/19 1:15 PM, R.Wieser wrote:
> The thing is that the arguments of py_proc1 and py_proc2 are the same, but 
> for a single argument.

Does this have to be done in the C API?  Depending on how this class is
used in your Python code, I would just create a new Python class that
extends this class defined in the C code.  Then it's more a matter of:

import cmodule

class NewClass(cmodule.OldClass):

    def my_proc2(self, *args):
        b = 3
        self.my_proc1(*((b,) + args))  # OldClass.my_proc1

Now any instance of NewClass has a method called my_proc2 which calls
the my_proc1 from the C API defined class with the extra argument prepended.

The "*" notation is to unpack the tuple, which is used when calling
another function that takes positional arguments.
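
A hypothetical usage sketch, reusing the names from the example above:

obj = NewClass()
obj.my_proc2("x", "y")   # ends up calling my_proc1(3, "x", "y")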
-- 
https://mail.python.org/mailman/listinfo/python-list


Any socket library to communicate with kernel via netlink?

2019-11-18 Thread lampahome
As the title says, I am trying to communicate with the kernel via netlink,
but I fail when I receive a msg from the kernel.

The weird point is that sending from user space to the kernel succeeds, but
receiving from the kernel fails.
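
For reference, a rough sketch of the plain-socket approach (the protocol
number, the payload and the message layout are placeholder assumptions for
whatever the kernel module expects):

import os
import socket
import struct

NETLINK_TEST = 31   # hypothetical custom netlink family number

sock = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW, NETLINK_TEST)
sock.bind((os.getpid(), 0))            # (pid, multicast groups)

payload = b"hello kernel\x00"
# struct nlmsghdr: length, type, flags, sequence, pid
header = struct.pack("=LHHLL", 16 + len(payload), 0, 0, 0, os.getpid())
sock.send(header + payload)            # sending works

reply = sock.recv(65535)               # receiving is where it fails
print(reply)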

So I want to check the code of a 3rd-party library and dig in, but the
libraries I keep finding under the netlink name actually do something like
modifying the network address or checking the network card...

Any idea is welcome
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Friday finking: TDD and EAFP

2019-11-18 Thread boB Stepp
On Mon, Nov 18, 2019 at 3:23 PM Peter J. Holzer  wrote:
>
> I don't know what "proper" TDD is (and even less "proper consideration"
> of TDD), but TDD is in my opinion very much rooted in the agile mindset.
> And that means frequent iteration and improvement. So I think the
> micro-iteration technique is closer to philosophically pure TDD (if such
> a thing exists) than your waterfally "write complete spec, then write
> all tests, then write code" technique (That doesn't mean that your
> technique is bad - it's just not what I think people are talking about
> when they say "TDD").

Your comments brought to mind a book recommended to me, "Test-Driven
Development by Example" by Kent Beck.  On the intro page to part one,
he says:

"...My goal is for you to see the rhythm of Test-Driven Development
(TDD), which can be summed up as follows.

1.  Quickly add a test.
2.  Run all tests and see the new one fail.
3.  Make a little change.
4.  Run all tests and see them all succeed.
5.  Refactor to remove duplication.

The surprises are likely to include

-- How each test can cover a small increment of functionality
-- How small and ugly the changes can be to make the new tests run
-- How often the tests are run
-- How many teensy-weensy steps make up the refactorings"



-- 
boB
-- 
https://mail.python.org/mailman/listinfo/python-list