Currently, list.reverse() only works on an entire list. If one wants to
reverse a section of it in-place, one needs to use slicing, which means the
space complexity is no longer O(1). One can also write a manual loop and do
the reversal, but that is even slower than slicing. List.reverse() does not t
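For concreteness, the two workarounds described above might be sketched as follows: slice assignment, which allocates a temporary copy, versus a manual two-pointer loop, which stays O(1) in extra space (function names here are illustrative, not an existing API):

```python
def reverse_slice(lst, start, stop):
    """Reverse lst[start:stop] via slicing; builds a temporary
    copy, so extra space is O(stop - start)."""
    lst[start:stop] = lst[start:stop][::-1]

def reverse_inplace(lst, start, stop):
    """Reverse lst[start:stop] with pairwise swaps; O(1) extra space."""
    i, j = start, stop - 1
    while i < j:
        lst[i], lst[j] = lst[j], lst[i]
        i += 1
        j -= 1

data = [0, 1, 2, 3, 4, 5]
reverse_inplace(data, 1, 4)  # reverse indices 1..3
print(data)  # [0, 3, 2, 1, 4, 5]
```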
Sorry for not explaining the background of my idea. I'm involved in the
research area of sorting algorithms. Reversals are part of sorting, and
correct me if I'm wrong, `list.reverse()` is the fastest way to reverse an
entire list, and it is also in-place. Yet, it doesn't work for a subsection of i
[This is the revised version of the previous reply, which contained mistakes]
Sorry for not explaining the background of my idea. I'm involved in the
research area of sorting algorithms. Reversals are part of sorting, and
correct me if I'm wrong, `list.reverse()` is the fastest way to reverse an en
I see. I do agree that my reply gives off that 'verbose, repeated' feeling,
haha. But for the record, it's not about having something in hand now for the
future; it's more of a paradigmatic approach to the implementation. Python
has changed for the better in terms of necessity:
- map() re
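The bullet above is cut off, but it presumably refers to map() becoming lazy in Python 3 (in Python 2 it returned a list). Assuming that reading, a quick illustration:

```python
# In Python 3, map() returns a lazy iterator rather than a list,
# so no intermediate list is materialized until values are consumed.
squares = map(lambda x: x * x, range(5))
print(type(squares).__name__)  # 'map' -- an iterator, not a list
print(list(squares))           # [0, 1, 4, 9, 16]
```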
I see.
You have coined the term exactly: partial-reverse. Nice. You have also put
forward a realistic question of 'why do we need it'. Well, surely not everyone
needs it, and it's definitely not urgently needed, but it's just the
counterintuitive incompleteness such that 'it works for a whole, but n
Indeed, I understand.
Thanks for the reply.
___
Python-ideas mailing list -- [email protected]
To unsubscribe send an email to [email protected]
https://mail.python.org/mailman3/lists/python-ideas.python.org/
Insightful! You mentioned terms like 'memory_view' and 'lazy slice'. You felt
the pulse of the situation. But the most elegant thing (I had it in mind a long
time ago, but you brought it up first, haha) is that you noticed the downside
of copies - you indicated how a lazy slice is the magic wand that el
Indeed, it's not directly related; perhaps a misunderstanding, but I'm just
drawing a parallel between the two situations of not claiming memory up front.
If you slice, you make a copy, and that takes space. So the space complexity is
no longer O(1). It's just that, not that it has any direct rel
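As a small check that list slicing really does copy rather than alias:

```python
lst = [1, 2, 3, 4]
piece = lst[1:3]      # slicing a list allocates a new list
piece[0] = 99         # mutating the copy...
print(lst)            # [1, 2, 3, 4] -- ...leaves the original untouched
print(piece)          # [99, 3]
```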
Interesting. Just to comment, Mr. Mertz is realistic in the algorithmic sense.
Running time is highly affected by various factors. For example, let's just
assume that an insertion sort with O(N^2) time on a quantum computer is fast
enough for all kinds of tasks in the world. So, naturally, there sho
Depends on the implementation. If, instead of swapping pair by pair one by
one, you rewrite that sequence in the opposite direction, and that sequence is
longer than 3, it already fits the situation. A block swap algorithm swaps two
elements of an array. If out-of-place, you can specify more than
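The reply above is cut off, but a minimal sketch of a block swap in the usual sense, exchanging two equal-length, non-overlapping regions of a list in place, might look like this (the helper name is illustrative):

```python
def block_swap(lst, a, b, n):
    """Swap the n-element blocks starting at indices a and b, in place.
    Assumes the two blocks do not overlap."""
    for k in range(n):
        lst[a + k], lst[b + k] = lst[b + k], lst[a + k]

data = [0, 1, 2, 3, 4, 5]
block_swap(data, 0, 3, 3)  # exchange the first and second halves
print(data)  # [3, 4, 5, 0, 1, 2]
```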
Indeed, making a slice a view does pose painful challenges. For a slice
iterator, I wonder whether the bigger overhead is in being an iterator or in
building one. I wholeheartedly agree that 'adding ad-hoc functionality' is
slightly toy-ish, but I brought up the idea of 'start' and
'stop'
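For reference, the 'start'/'stop' idea would presumably mirror the signature of list.index(x, start, stop). A hypothetical sketch as a free function (this is not an existing CPython API):

```python
def reverse(lst, start=0, stop=None):
    """Hypothetical list.reverse(start, stop): reverse lst[start:stop]
    in place with O(1) extra space; defaults reverse the whole list."""
    if stop is None:
        stop = len(lst)
    i, j = start, stop - 1
    while i < j:
        lst[i], lst[j] = lst[j], lst[i]
        i += 1
        j -= 1

data = list(range(6))
reverse(data, 2, 5)   # reverse indices 2..4
print(data)  # [0, 1, 4, 3, 2, 5]
```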
Indeed, from the previous replies, I have already learnt that use cases are the
primary driver around here. In fact, that should be the general case.
I do admit that my assessment is too abstract for any feasible
considerations. I was looking at it from the algorithmic sense: that if a
function i