Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Marine Kelley
I'm naive here, I don't know the server side of it. But how can a sim
know when a script hits a threshold, and not be able to report the
actual memory used? Since it can check it against a threshold...




On 8 March 2010, at 18:46, Kelly Linden wrote:

We are not out to write a new malloc for mono.  What we have is a  
system that throws an exception when the memory used by the script  
hits a certain threshold (64k currently).  This exception is caught  
so we can "crash" the script.  The future small scripts and big  
scripts projects will add new functions to set and get this  
threshold value, allowing scripters to effectively control how much  
memory is reserved for their script.  We will continue to use mono's  
default memory management within the reserved memory thresholds.  It  
is a much simpler problem to solve.
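
(As a rough illustration of the mechanism described above - not Linden
Lab's actual implementation, and with made-up names throughout - the
accounting could look like this in C++: charge every allocation the VM
makes for a script against a per-script threshold and throw when it would
be exceeded, so the engine can catch the exception and "crash" the script.)

#include <cstddef>
#include <stdexcept>

class ScriptMemoryExceeded : public std::runtime_error {
public:
    ScriptMemoryExceeded()
        : std::runtime_error("script memory threshold exceeded") {}
};

class ScriptMemoryAccount {
    std::size_t used_ = 0;
    std::size_t threshold_ = 64 * 1024;   // current default: 64 kB
public:
    std::size_t threshold() const { return threshold_; }          // the planned "get"
    void set_threshold(std::size_t bytes) { threshold_ = bytes; } // and "set"

    // Called for every allocation the VM makes on behalf of the script.
    void charge(std::size_t bytes) {
        if (used_ + bytes > threshold_)
            throw ScriptMemoryExceeded();  // caught by the engine, which crashes the script
        used_ += bytes;
    }
    void release(std::size_t bytes) { used_ -= bytes; }
};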


 - Kelly

On Sun, Mar 7, 2010 at 5:50 AM, Carlo Wood  wrote:
Let's assume that the *average* script uses
8 to 16 kB of real memory. LL's design allocates
64 kB regardless, leading to an overhead of
300 to 700% (meaning they need to buy 4 to
8 times the memory that is theoretically needed).

In that light, I gave it some more thought, and
think something as complex as my rmalloc's algorithm,
with its 19% overhead, isn't needed (note that
it's both faster than gmalloc as well as three
times more efficient; so complexity is not always
a bad thing ;).

Nevertheless, in this case, since the scripts
use a maximum of 64 kB, you can use the
following design:

Let each allocated block be a power-of-two
number of kilobytes, with a maximum of 64 kB
(and a minimum of 1 kB, or 4 kB if even the
tiniest script needs that).

It is easy to see that this would lead
to an overhead of 25% on average per
allocated block (on average, a quarter of
each block goes unused).

We'd still have to deal with "holes" of a
full 64 kB where blocks became totally
unused, but normally scripts in objects are
started all at once when a sim reboots, and
only seldom stopped. The scripts that will
most likely contribute to random starting
and stopping of scripts will be the scripts
in attachments. A worst-case scenario would
be one where there are 50 avies in a sim
(during a meeting), then a new avie enters
with some scripts which need to be allocated
at the top of the heap; then the previous
50 avies leave. That would create a hole
in the heap of the size of all the scripts
of those 50 avies. If script memory were
relocatable, this problem wouldn't exist,
of course. I can't easily estimate
the average overhead caused by this, but
because the algorithm described here is
basically the one used by gmalloc (which
in my test used 62% overhead) I'm pretty
sure that it will be less than, say, 100%
overhead; still 4 to 8 times more efficient
than the current design on the table.

The API for this design would be something
like the following:

namespace script_memory_management {

void* ll_sbrk(ptrdiff_t);   // Increment the size of the heap.
int   ll_brk(void*);        // Set the size of the heap explicitly.

void* ll_malloc64(void);    // Get a new 64 kB block.
void  ll_free64(void*);     // Free such a block.

void* ll_malloc(size_t s);  // Allocate s bytes of memory for a script.
void  ll_free(void*);       // Free it again.

...

Assuming here that scripts cannot deal with
relocation; otherwise one should also have:

void* ll_realloc(void* p, size_t s); // Reallocate p to a different size of memory.



ll_malloc then will round the number of requested bytes up
to the nearest power of two (in kB) and retrieve a block from one
of the free lists (maintained for 32, 16, 8, 4, 2 and 1 kB).
Note that if scripts only seldom use 1 or 2 kB, it might
be more efficient to use a minimum of 2 or 4 kB instead of 1.
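
As a minimal sketch of that rounding step in C++ (illustrative only - the
function name, the 1 kB minimum and the free-list indexing are assumptions,
not part of any proposed interface):

#include <cstddef>

// Round a request up to the nearest power-of-two number of kilobytes,
// between an assumed 1 kB minimum and the 64 kB maximum, and return the
// index of the matching free list (0 = 1 kB, 1 = 2 kB, ... 6 = 64 kB).
inline int size_class(std::size_t bytes) {
    std::size_t kb = 1;
    int index = 0;
    while (kb * 1024 < bytes && kb < 64) {
        kb *= 2;
        ++index;
    }
    return index;   // the caller hands out a block of (1 << index) kB
}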

Each 64 kB block would contain either one 64 kB allocation,
or two 32 kB allocations, or four 16 kB allocations,
etc., but never allocations of mixed sizes, making it
easy to find a free block of a given size.

A free list is usually maintained by adding pointers inside
the (unused) memory blocks, linking them together into a linked
list of free memory blocks of a given size. That makes allocation
instant, but freeing memory requires traversing this
list in order to update its pointers. With the number of
scripts that normally run in a sim, this will not be a problem.
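
A minimal sketch of such an intrusive free list in C++ (the type and
function names are illustrative, not a real LL/mono interface); note that
with a plain singly-linked push-front, freeing a block is also O(1), and
the traversal mentioned above is then only needed when deciding whether a
whole 64 kB block has become free again:

#include <cstddef>

// Each unused block stores a pointer to the next free block of its size class.
struct FreeBlock {
    FreeBlock* next;
};

struct FreeList {
    FreeBlock* head = nullptr;

    // Allocation is instant: pop the first free block of this size class.
    void* take() {
        if (head == nullptr)
            return nullptr;        // caller must carve up a fresh 64 kB block
        FreeBlock* block = head;
        head = block->next;
        return block;
    }

    // Push a freed block back on the front of the list.
    void give_back(void* p) {
        FreeBlock* block = static_cast<FreeBlock*>(p);
        block->next = head;
        head = block;
    }
};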

--
Carlo Wood 
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting  
privileges



Re: [opensource-dev] Script Memory Limits UI

2010-03-09 Thread Ambrosia
> So, cool, wouldn't it be nice to only allocate what is actually
> requested?

That -is- in the works, with the Small Scripts and Big Scripts
projects. It will allow you to reserve just as much memory as you
need for mono scripts, less than the 64k... or even more. But alas, yes,
time will have to be invested into rewriting things.
Why the process simply doesn't allow you to slip a new
llMaxMemory(amounthere); into the state_entry of a script, and let
the sim runtime dynamically allocate new memory as the script grows up
to the max... I don't know. It'd be much simpler.

What I personally want is not only a script memory display, but also
script -time-. People need to see how much script time is used up by
the things they wear, the things they create, and the things they have
on their mainland parcels. It's ridiculous that one needs to be
friends with a full-fledged Estate Manager to check those things, as
script time affects sim performance just as much as script memory
does.
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script Memory Limits UI

2010-03-09 Thread Marine Kelley
That's right. Computer programs are constantly trading off two
competing resources, space and time. In theory we need control
over script time as much (no more, no less) as we need control over
script memory. But let's not ask for even more additional workload
than we already have.



On 9 March 2010, at 10:05, Ambrosia wrote:

>> So, cool, wouldn't it be nice to only allocate what is actually
>> requested?
>
> That -is- in the works, with the Small Scripts and Big Scripts
> projects. it will allows you to reserve just as much memory as you
> need for mono scripts, less than the 64k..or even more. But alas, yes,
> time will have to be invested into rewriting things.
> Why the process simply doesn't allow you to slip a new
> llMaxMemory(amounthere); into the state_entry of a script, and lets
> the sim runtime dynamically allocate new memory as the script grows up
> to the max..I don't know. it'd be much simpler.
>
> What I personally want is not only a script memory display, but also
> script -time-. People need to see how much script time is used up by
> the things they wear, the things they create, the things they have on
> their mainland parcels uses up. It's ridiculous that one needs to be
> friends with a full-fledged Estate Manager to check those things, as
> script time affects sim performance just as much as script memory
> does.
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Morgaine
I believe that they're seeking better-informed legal counsel on GPL
compliance first, before redrafting the TPV.   The first version conflated
users, viewers and developers so terribly that GPLv2 clause 6 was left in
tatters.  Joe's phrasing is the only one that makes the necessary separation
so far.  I hope he has a hand in the redraft. :-)

Regarding commencement, that was just typical product manager silliness,
announcing release or commencement dates before something is ready.  File
under ignore.


Morgaine.






On Tue, Mar 9, 2010 at 2:11 PM, Boy Lane  wrote:

> It has been 14 days since the initial draft of the 3PVP was published and
> we
> were told it will be reworked to include comments, concerns and
> suggestions.
> Two weeks have passed since and besides a FAQ that also says the policy is
> being worked on there have been no news.
>
> As this is a mission critical question for everybody involved in client
> development:
> What is the status of the Third Party Viewer Policy? Do we have to assume
> that the current version is binding and/or when will an updated version be
> available?
>
> Thanks!
>
>
> ___
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting
> privileges
>
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges

Re: [opensource-dev] Script Memory Limits UI

2010-03-09 Thread Carlo Wood
It's not impossible... it's actually rather simple.

That being said, I wouldn't be surprised if LL feels it's too
difficult for them.

[ I suppose remarks like this (that it is simple) usually carry no
weight, which is why I already added the fact that I wrote a malloc
library myself in the past that is faster and three times more efficient
than gmalloc (never released though), and already posted a rough outline
of how it could be done. Now, reluctantly, let me add that the concept
of some individual here knowing something that Linden Lab can't do is
also not impossible. When I deleted my home directory a few years
ago, the ONLY thing I could find on the net, from the FAQ to the
developers of the filesystem itself, was: you CANNOT undelete files
on an ext3 filesystem.  Well, I did; I recovered all 50,000 files
completely, and wrote a (free, open source) program to prove it
(ext3grep) (in case you never heard of that, then I guess you never
deleted a file from an ext3 filesystem ;). The HOWTO webpage that I
wrote at the same time has been translated into Japanese, Russian,
and so on. My English version got 50,000 hits in the first three
days after release. I didn't take "it's impossible" for granted
then, and thousands of people thanked me for that (literally, by
email). I'm not going to take "it's impossible" in this case
either, because this is way, way more simple :/. Sorry, but LL is
just lazy. That is the reason. You're right, let them say that
and I'll crawl back under my rock: "We're just lazy". ]

On Tue, Mar 09, 2010 at 08:54:45AM +0100, Marine Kelley wrote:
> supposed to do themselves. Oh of course this is a hard job, allocating  
> memory dynamically in an environment like this. Perhaps it is  
> impossible. I have yet to hear a Linden say, in all honesty, "sorry  
> guys, we can't do it as initially planned, we have to ask you to  
> participate by tailoring your scripts, because we can't do it from our  
> side". What I have heard so far is "you will be provided tools to  
> adapt to the change that is taking place". The two mean exactly the  
> same thing, but a little honesty does not hurt. This additional  
> workload was not planned, is a shift of work that we were not supposed  
> to take in charge in the first place, with no compensation, so I'd  
> have liked a little explanation at least.

-- 
Carlo Wood 
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Morgaine
Carlo +1.

Explicit pre-allocation of memory by users has to be one of the silliest and
most regressive "improvements" appearing in SL for a long time.  It's a
waste of memory, it takes us back decades, and it's a burden on users.

So why do it?

"Because we've decided to, full stop." -- seems to be the prevailing M.O.


Morgaine.








On Tue, Mar 9, 2010 at 1:26 PM, Carlo Wood  wrote:

> This is exactly the kind of reaction that drives me away from here.
>
> I propose a simple way get FOUR times the memory for all the scripts, at
> no other cost than adding some malloc code to your mono engine.
>
> And you simply say, "This is what we ARE doing, we're not going to change
> that".
>
> Why this immovable stubbornness about internal development decisions?
>
> In case with the below you mean to say "allowing people to set ammount
> of memory at which their scripts crash, up front, is as good," then
> read the past posts on this list again.
>
> People say that it is NOT, by FAR not, as good. Scripters shouldn't
> have to manually figure out the maximum ammount of memory their scripts
> can possibly use and use that as the fixed ammount of memory that
> their script reserves. That was last done 10 years ago. Just have the
> server take care of this: give a script a minimum, and when it needs
> more, give it more. No hassle for the users.
>
> On Mon, Mar 08, 2010 at 09:46:44AM -0800, Kelly Linden wrote:
> > We are not out to write a new malloc for mono.  What we have is a system
> that
> > throws an exception when the memory used by the script hits a certain
> threshold
> > (64k currently).  This exception is caught so we can "crash" the script.
>  The
> > future small scripts and big scripts projects will add new functions to
> set and
> > get this threshold value, allowing scripters to effectively control how
> much
> > memory is reserved for their script.  We will continue to use mono's
> default
> > memory management within the reserved memory thresholds.  It is a much
> simpler
> > problem to solve.
> >
> >  - Kelly
> >
> > On Sun, Mar 7, 2010 at 5:50 AM, Carlo Wood  wrote:
> >
> > Lets assume that the *average* script uses
> > 8 to 16 kB of real memory. LL's design allocates
> > 64 kB regardless, leading to an overhead of
> > 400 to 800% (meaning they need to buy 5 to
> > 9 times the memory that is theoretically needed).
> >
> > In that light, I gave it some more thought, and
> > think something as complex as my rmalloc's algorithm,
> > with it's 19% overhead, isn't needed (note that
> > it's both faster than gmalloc as well as three
> > times more efficient; so complexity is not always
> > a bad thing ;).
> >
> > Nevertheless, in this case, since the scripts
> > use a maximum of 64 kB, you can use the
> > following design:
> >
> > Let each allocated block be a power of 2
> > kilobytes, with a maximum of 64 kB (and a
> > minimum of 1 KB, or 4 if even the tiniest
> > script needs that).
> >
> > It is easy to see that this would lead
> > to an overhead of 25% on average per
> > allocated block.
> >
> > We'd still have to deal with "holes" of a
> > full 64 kB where blocks became totally
> > unused, but normally scripts in objects are
> > started all at once when a sim reboots, and
> > only seldom stopped. The scripts that will
> > most likely attribute to random starting
> > and stopping of scripts will be the scripts
> > in attachments. A worst case scenario would
> > be one where there are 50 avies in a sim
> > (during a meeting), then a new avie enters
> > with some scripts which need to be allocated
> > at the top of the heap; then the previous
> > 50 avies leave. That would create a hole
> > in the heap of the size of all the scripts
> > of those 50 avies. If script memory would
> > be relocatable, then this problem doesn't
> > exist of course. I can't simply estimate
> > the average overhead caused by this, but
> > because the algorithm described here is
> > basically the one used by gmalloc (which
> > in my test used 62% overhead) I'm pretty
> > sure that it will be less than -say- 100%
> > overhead; still 4 to 8 times more efficient
> > than the current design on the table.
> >
> > The API for this design would be something
> > like the following:
> >
> > namespace script_memory_management {
> >
> > void* ll_sbrk(ptrdiff_t);   // Increment the size of the heap
> > int   ll_brk(void*);// Set the size of the heap
> explicitely
> >
> > void* ll_malloc64(void);// Get a new 64 kB block.
> > void  ll_free64(void*); // Free such a block.
> >
> > void* ll_malloc(size_t s);  // Allocate s bytes of memory for a
> script.
> > void  ll_free(void*);   // Free it again.
> >
> > ...
> >
> > Assuming here that scrip

Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Argent Stonecutter
On 2010-03-09, at 07:26, Carlo Wood wrote:
> This is exactly the kind of reaction that drives me away from here.
>
> I propose a simple way get FOUR times the memory for all the  
> scripts, at
> no other cost than adding some malloc code to your mono engine.

I don't think you have established that.

___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


[opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Boy Lane
It has been 14 days since the initial draft of the 3PVP was published and we
were told it would be reworked to include comments, concerns and suggestions.
Two weeks have passed since then, and besides a FAQ that also says the policy
is being worked on, there has been no news.

As this is a mission-critical question for everybody involved in client
development:
What is the status of the Third Party Viewer Policy? Do we have to assume
that the current version is binding, and/or when will an updated version be
available?

Thanks!


___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Carlo Wood
This is exactly the kind of reaction that drives me away from here.

I propose a simple way to get FOUR times the memory for all the scripts, at
no other cost than adding some malloc code to your mono engine.

And you simply say, "This is what we ARE doing, we're not going to change that".

Why this immovable stubbornness about internal development decisions?

In case you mean to say with the below that "allowing people to set, up front,
the amount of memory at which their scripts crash is just as good," then
read the past posts on this list again.

People say that it is NOT, by FAR, as good. Scripters shouldn't
have to manually figure out the maximum amount of memory their scripts
can possibly use and use that as the fixed amount of memory that
their script reserves. That was last done 10 years ago. Just have the
server take care of this: give a script a minimum, and when it needs
more, give it more. No hassle for the users.

On Mon, Mar 08, 2010 at 09:46:44AM -0800, Kelly Linden wrote:
> We are not out to write a new malloc for mono.  What we have is a system that
> throws an exception when the memory used by the script hits a certain 
> threshold
> (64k currently).  This exception is caught so we can "crash" the script.  The
> future small scripts and big scripts projects will add new functions to set 
> and
> get this threshold value, allowing scripters to effectively control how much
> memory is reserved for their script.  We will continue to use mono's default
> memory management within the reserved memory thresholds.  It is a much simpler
> problem to solve.
> 
>  - Kelly
> 
> On Sun, Mar 7, 2010 at 5:50 AM, Carlo Wood  wrote:
> 
> Lets assume that the *average* script uses
> 8 to 16 kB of real memory. LL's design allocates
> 64 kB regardless, leading to an overhead of
> 400 to 800% (meaning they need to buy 5 to
> 9 times the memory that is theoretically needed).
> 
> In that light, I gave it some more thought, and
> think something as complex as my rmalloc's algorithm,
> with it's 19% overhead, isn't needed (note that
> it's both faster than gmalloc as well as three
> times more efficient; so complexity is not always
> a bad thing ;).
> 
> Nevertheless, in this case, since the scripts
> use a maximum of 64 kB, you can use the
> following design:
> 
> Let each allocated block be a power of 2
> kilobytes, with a maximum of 64 kB (and a
> minimum of 1 KB, or 4 if even the tiniest
> script needs that).
> 
> It is easy to see that this would lead
> to an overhead of 25% on average per
> allocated block.
> 
> We'd still have to deal with "holes" of a
> full 64 kB where blocks became totally
> unused, but normally scripts in objects are
> started all at once when a sim reboots, and
> only seldom stopped. The scripts that will
> most likely attribute to random starting
> and stopping of scripts will be the scripts
> in attachments. A worst case scenario would
> be one where there are 50 avies in a sim
> (during a meeting), then a new avie enters
> with some scripts which need to be allocated
> at the top of the heap; then the previous
> 50 avies leave. That would create a hole
> in the heap of the size of all the scripts
> of those 50 avies. If script memory would
> be relocatable, then this problem doesn't
> exist of course. I can't simply estimate
> the average overhead caused by this, but
> because the algorithm described here is
> basically the one used by gmalloc (which
> in my test used 62% overhead) I'm pretty
> sure that it will be less than -say- 100%
> overhead; still 4 to 8 times more efficient
> than the current design on the table.
> 
> The API for this design would be something
> like the following:
> 
> namespace script_memory_management {
> 
> void* ll_sbrk(ptrdiff_t);       // Increment the size of the heap
> int   ll_brk(void*);            // Set the size of the heap explicitely
> 
> void* ll_malloc64(void);        // Get a new 64 kB block.
> void  ll_free64(void*);         // Free such a block.
> 
> void* ll_malloc(size_t s);      // Allocate s bytes of memory for a 
> script.
> void  ll_free(void*);           // Free it again.
> 
> ...
> 
> Assuming here that scripts cannot deal with
> relocation, otherwise one should also have:
> 
> void* ll_realloc(size_t s);     // Allocate a different size of memory.
> 
> 
> ll_malloc then will round the number of requested bytes up
> to the nearest power of 2 (kBs) and retrieve a block from one
> of the free lists (maintained for 32, 16, 8, 4, 2 and 1 kB)
> (note that if scripts only seldom use 1 or 2 kB it might
> be more efficient to use a minimum of 2 or 4 kB instead of 1).
> 
> Each 64 kB block would contain either one 64 kB allocation,
> or two 32 kB block allocations, or four 16 kB block allocati

Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Dickson, Mike (ISS Software)
I'm inclined to agree.  Not to mention that the cost of maintaining a
forked version of mono, and the effort to forward-port it into new releases,
hasn't been addressed.

I suppose it's possible that a change to malloc could make script memory
allocation problems much simpler, but that seems highly unlikely.  More likely
you're trivializing a complex problem (in addition to allocation there are the
issues of migrating the heap across simulators if a script moves, etc.).
Mike

-Original Message-
From: opensource-dev-boun...@lists.secondlife.com 
[mailto:opensource-dev-boun...@lists.secondlife.com] On Behalf Of Argent 
Stonecutter
Sent: Tuesday, March 09, 2010 9:06 AM
To: Carlo Wood
Cc: server-b...@lists.secondlife.com; opensource-dev@lists.secondlife.com
Subject: Re: [opensource-dev] Script Memory Management Algorithm

On 2010-03-09, at 07:26, Carlo Wood wrote:
> This is exactly the kind of reaction that drives me away from here.
>
> I propose a simple way get FOUR times the memory for all the  
> scripts, at
> no other cost than adding some malloc code to your mono engine.

I don't think you have established that.

___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Erik Anderson
So...

If the script hits a memory wall, there's no way to transparently increase
that wall and start reporting that the script is taking more RAM?  Or have
the stack and heap collided with each other at that point, so there's no way
to reform the memory space?  Isn't this already something that scripts are
currently doing with their one-way llGetFreeMemory function?

On Tue, Mar 9, 2010 at 7:51 AM, Dickson, Mike (ISS Software) <
mike.dick...@hp.com> wrote:

> I'm inclined to agree.  Not to mention addressed the cost of maintaining a
> forked version of mono and the effort to forward port it into new releases.
>
> I suppose it's possible that a change to malloc could make script memory
> allocation problems much simpler but that seems highly unlikely.  More
> likely you're trivializing a complex problem (in addition to allocation
> there are the issues of migrating the heap across simulators if a script
> moves, etc).
>
> Mike
>
> -Original Message-
> From: opensource-dev-boun...@lists.secondlife.com [mailto:
> opensource-dev-boun...@lists.secondlife.com] On Behalf Of Argent
> Stonecutter
> Sent: Tuesday, March 09, 2010 9:06 AM
> To: Carlo Wood
> Cc: server-b...@lists.secondlife.com; opensource-dev@lists.secondlife.com
> Subject: Re: [opensource-dev] Script Memory Management Algorithm
>
> On 2010-03-09, at 07:26, Carlo Wood wrote:
> > This is exactly the kind of reaction that drives me away from here.
> >
> > I propose a simple way get FOUR times the memory for all the
> > scripts, at
> > no other cost than adding some malloc code to your mono engine.
>
> I don't think you have established that.
>
> ___
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting
> privileges
> ___
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting
> privileges
>
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges

Re: [opensource-dev] Script Memory Limits UI

2010-03-09 Thread Imaze Rhiano
I don't think that dynamic memory would be hard to implement, but the
problem is that an avatar/parcel has (or is going to have) a limited
amount of memory available.

1) It is not possible to swap memory to the server's hard drive - because that
would cause lag - and that is actually the reason why memory limits are
coming to the scripted world called SL.
2) It is not possible to use your neighbors' unused memory - because then
your scripts would randomly crash when your neighbors claim their memory
back - and it would be a bit inconsistent - you can't use your neighbors'
primitives either!

The second problem is that average SL users have a limited number of brain
cells and a limited amount of patience. They are not going to be happy if
their shop's unique visitor counter "that has been working for the last few
years" suddenly stops working and throws a stack overflow exception
because they used a sex bed with their girlfriend. They need stability
and deterministic behavior.

I would be happy to convert to a dynamic memory supporter if you could
present a realistic memory management algorithm that (these are not laws -
common sense works here):
1) works in limited memory (doesn't try to use swapping or neighbors'
unused memory)
2) doesn't need user actions after successful rezzing (the user doesn't need
to set quotas, prioritize scripts or reset/delete scripts that are using
too much memory)
3) lets a script developer be sure that when an object is successfully rezzed
it has enough memory for running and basic operations (the user, for example,
can't set a script's memory limit so small that it can't run)
4) keeps the scripts in an object running, once the object is rezzed
successfully, until the object is derezzed (assuming that the script's
developer was careful with scripting)
5) is feasible to implement with current LL hardware - it doesn't need things
like large statistics databases to forecast a script's future memory usage

On 9.3.2010 15:47, Carlo Wood wrote:
> It's not impossible... it's actually rather simple.
>
> That being said, I wouldn't be surprised if LL feels it's too
> difficult for them.
>
> [ I suppose remarks like this (that it is simple) have usually not got
> any weight, therefore I already added the fact that I wrote a malloc
> library myself in the past that is faster and three times more efficient
> than gmalloc (never released though), and already posted a rough outline
> of how it could be done. Now I, reluctantly, let me add that the concept
> of some individual here knowing something that Linden Lab can't do, is
> also not impossible. When I deleted my home directory a few years
> ago, then ONLY thing I could find on the net; from the FAQ to the
> developers of the filesystem itself, was: you CANNOT undelete files
> on an ext3 filesystem.  Well, I did; I recovered all 50,000 files
> completely; and wrote a (free, open source) program to prove it
> (ext3grep) (in case you never heard of that, then I guess you never
> deleted file from an ext3 filesystem ;) The HOWTO webpage that I
> wrote at the same time, has been translated to Japanese, Russian,
> and so on. My English version got 50,000 hits in the first three
> days after release). I didn't take "it's impossible" for granted
> then, and thousands of people thanked me for that (literally, by
> email). I'm not going to take "it's impossible" in this case
> either, because this is way way way more simple :/. Sorry, but LL is
> just lazy. That is the reason. You're right, let them say that
> and I'll crawl back under my rock: "We're just lazy". ]
>
> On Tue, Mar 09, 2010 at 08:54:45AM +0100, Marine Kelley wrote:
>
>> supposed to do themselves. Oh of course this is a hard job, allocating
>> memory dynamically in an environment like this. Perhaps it is
>> impossible. I have yet to hear a Linden say, in all honesty, "sorry
>> guys, we can't do it as initially planned, we have to ask you to
>> participate by tailoring your scripts, because we can't do it from our
>> side". What I have heard so far is "you will be provided tools to
>> adapt to the change that is taking place". The two mean exactly the
>> same thing, but a little honesty does not hurt. This additional
>> workload was not planned, is a shift of work that we were not supposed
>> to take in charge in the first place, with no compensation, so I'd
>> have liked a little explanation at least.
>>  
>

___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] [server-beta] Script Memory ManagementAlgorithm

2010-03-09 Thread Kitty
> It might be possible to add display of memory currently used 
> as well, but what's the use case for it?

It would allow residents to independently review the imposed script limits.

Another use case would be that people *are* going to start banning
residents based on what the script limits UI says (just like people are
ejected/banned over ARC). If some random resident hasn't spent hours and
hours checking whether everything they've bought in the past few years
has an update that uses the upcoming "llReserveMemory", they will
end up banned from random places for wearing the wrong item.

Simply reporting "actually in use" for *other* residents even though the
individual limit is against a "hypothetical maximum limit" would do a whole
lot to help with that. A Mono script would still count as 64Kb against that
avie's personal limit, but anyone else trying to look for information would
be seeing say 8Kb because that's how much it's actually using and actual
usage is really the only thing parcel and sim owners are (or should) care
about when it comes to other people's attachments.

> Likewise, it would be useful for verifying sellers' signs' 
> claims immediately after purchase, should a scripted object 
> be displayed as a static, unscripted object. The ability to 
> see the number of prims in a linked set helps similarly for 
> advertised prim counts.

It likely won't take very long for the sellers of scripted gadgets to
include "script usage" information, but it will be many months before
someone selling - for example - earrings with a texture change script
does the same, or before they even find out that there is such a thing
as script limits.

The examples of "troublesome" cases that require script limits in the first
place never seem to be the highly technical kind, but rather the everyday
"mundane" things like hair, shoes, jewelry, prim skirts, etc. In other words,
the people who are least likely to include that information, or to even know
that something like script limits exists.

I already have literally hundreds of "script-in-every-prim" things I've
bought - where I can't even delete the scripts because of a change to "no
modify" some time back - that might just be good for relocating to the trash
6 months from now. Moving forward, no one should find out *after* purchase
whether or not what they just bought is something they'll be able to wear,
or whether they'll see the dreaded "Not enough script resources available to
attach object!" and know that they just threw money out the window.

When someone picks "Buy", the script limit/usage for each individual object
inside of the prim should really just be right there alongside the
permissions the item will have.
Use of green/yellow/red from ARC would make things even simpler still for
the average resident: anything using less than 1/38th of the per-avatar
limit (limit divded by # possible attachments) shows green, several
multiples is yellow and anything more is red.
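
A minimal sketch of that classification in C++ (illustrative only - the 38
attachment slots and the 4x cutoff for yellow are assumptions here, not real
viewer constants):

#include <cstddef>

enum class UsageColor { Green, Yellow, Red };

UsageColor classify(std::size_t object_bytes, std::size_t per_avatar_limit) {
    const std::size_t fair_share = per_avatar_limit / 38;  // limit / attachment slots
    if (object_bytes < fair_share)
        return UsageColor::Green;      // well under its share of the limit
    if (object_bytes < 4 * fair_share)
        return UsageColor::Yellow;     // "several multiples": assumed 4x here
    return UsageColor::Red;
}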

Kitty

___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


[opensource-dev] Script memory limit vs server CPU utilization as a key metric

2010-03-09 Thread Joel Foner
Many apologies if this has been discussed at length in a place that I've
missed...

I'm a bit baffled by the continuing strong focus on memory utilization of
scripts rather than CPU load on the host servers. If (maybe I'm missing an
important issue here) the goal is to prevent a resident or scripted item from
causing performance problems on a region, wouldn't the relative CPU load
imposed by that script be the critical item?

I understand that if the total active memory size for a server goes above
its physically available RAM, then paging would increase and potentially
create issues. Is there some objective analysis of servers running the Second
Life simulator code showing that they go into continuous swap mode in
this case, or is it occasional "blips" of performance degradation on a
slower interval? It seems to me that continuously excessive CPU load
would generate an ongoing low simulator frame rate, which would be more
frustrating than occasional hits from swapping.

This line of thinking makes me wonder if a better metric for managing the
user's perception of performance would be script CPU load rather than memory
size.

Thanks in advance, and again if this has already been addressed please feel
free to point me at the thread so that I can read up.

Best regards,

Joel
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges

Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Tayra Dagostino
On Tue, 9 Mar 2010 14:47:35 +
Morgaine  wrote:

> I believe that they're seeking better-informed legal counsel on GPL
> compliance first, before redrafting the TPV.   The first version
> conflated users, viewers and developers so terribly that GPLv2 clause
> 6 was left in tatters.  Joe's phrasing is the only one that makes the
> necessary separation so far.  I hope he has a hand in the redraft. :-)


I think you've misread the TPV policy: there is no GPL violation. The viewer
code is GPL; you can take a copy from svn, manipulate it, patch or
mod it, rename it - the GPL lets you do all of that (with the consequent
obligations for a developer who works on GPL code).

The TPV is like an addendum to the TOS: if you want to use Linden Lab's grids
you have to follow some rules... this is server side, no viewer code
involved... the Linden services aren't GPL...
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Argent Stonecutter
On 2010-03-09, at 14:12, Tayra Dagostino wrote:
> I think yoiu've misreaded the TPV policy, no GPL violation, viewer
> code is GPL, you can take a copy from svn, manipulate it, patch or
> mood, rename it, all GPL let u do with it (and consequential charges
> for a developer who work on a GPL code)
>
> TPV is like an addendum to TOS, if you want use Linden Lab grids you
> should follow some rules... this is server side, no viewer code
> involved... the Linden services aren't GPL...

Doesn't matter. LL can't impose a restriction like "you have to stop  
distributing your ripper viewer" through any contract, license, or  
policy if they're going to use the GPL for the viewer.

Now, they don't HAVE to use the GPL. Since they own 100% of the  
copyright in the viewer they can even use a "modified GPL that isn't  
really the GPL". But they can't use the GPL and impose restrictions  
that the GPL doesn't allow, whether they're spelled out in the viewer  
code license, the TPV, or magic license fairies.

They get this, and are working on fixing it, it's just taking longer  
than anticipated to get it through their hot and spicy attorneys.
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Tayra Dagostino
On Tue, 9 Mar 2010 14:23:33 -0600
Argent Stonecutter  wrote:

> On 2010-03-09, at 14:12, Tayra Dagostino wrote:
> > I think yoiu've misreaded the TPV policy, no GPL violation, viewer
> > code is GPL, you can take a copy from svn, manipulate it, patch or
> > mood, rename it, all GPL let u do with it (and consequential charges
> > for a developer who work on a GPL code)
> >
> > TPV is like an addendum to TOS, if you want use Linden Lab grids you
> > should follow some rules... this is server side, no viewer code
> > involved... the Linden services aren't GPL...
> 
> Doesn't matter. LL can't impose a restriction like "you have to stop  
> distributing your ripper viewer" through any contract, license, or  
> policy if they're going to use the GPL for the viewer.

uhm... I read it as 'you cannot connect your modded viewer to our grid if
it contains "word1" or "word2" or "etc."'

The whole TPV is about interconnectivity between the Linden grid and other
viewers, and complying viewers will be listed. I don't see any attack on
the source code license...

___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Gareth Nelson
Read sections 4b, 7a, 7c, 8c and 8d for a start - references to
distributing viewers and how you must not do so under certain
circumstances.

All of these restrictions contradict the rights granted by the GPL. LL
could argue that any releases after this policy constitute a release
under a new license of "GPL+TPV modifications", but they can not
retract the license of the earlier releases.

For that reason, it would be wise not to make use of any official LL
source in a viewer which may violate this policy.


On Tue, Mar 9, 2010 at 8:38 PM, Tayra Dagostino
 wrote:
> On Tue, 9 Mar 2010 14:23:33 -0600
> Argent Stonecutter  wrote:
>
>> On 2010-03-09, at 14:12, Tayra Dagostino wrote:
>> > I think yoiu've misreaded the TPV policy, no GPL violation, viewer
>> > code is GPL, you can take a copy from svn, manipulate it, patch or
>> > mood, rename it, all GPL let u do with it (and consequential charges
>> > for a developer who work on a GPL code)
>> >
>> > TPV is like an addendum to TOS, if you want use Linden Lab grids you
>> > should follow some rules... this is server side, no viewer code
>> > involved... the Linden services aren't GPL...
>>
>> Doesn't matter. LL can't impose a restriction like "you have to stop
>> distributing your ripper viewer" through any contract, license, or
>> policy if they're going to use the GPL for the viewer.
>
> uhm... i read 'you cannot connect your modded viewer to our grid if
> contain "word1" or "word2 or "etc."'
>
> all tpv is related to interconnectivity between Linden grid to other
> viewers. and viewer complying will be listed, i don't see any
> terroristic action against source code license...
>
> ___
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting privileges
>



-- 
“Lanie, I’m going to print more printers. Lots more printers. One for
everyone. That’s worth going to jail for. That’s worth anything.” -
Printcrime by Cory Doctorow

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Argent Stonecutter

On 2010-03-09, at 14:38, Tayra Dagostino wrote:

> On Tue, 9 Mar 2010 14:23:33 -0600
> Argent Stonecutter  wrote:
>
>> On 2010-03-09, at 14:12, Tayra Dagostino wrote:
>>> I think yoiu've misreaded the TPV policy, no GPL violation, viewer
>>> code is GPL, you can take a copy from svn, manipulate it, patch or
>>> mood, rename it, all GPL let u do with it (and consequential charges
>>> for a developer who work on a GPL code)
>>>
>>> TPV is like an addendum to TOS, if you want use Linden Lab grids you
>>> should follow some rules... this is server side, no viewer code
>>> involved... the Linden services aren't GPL...
>>
>> Doesn't matter. LL can't impose a restriction like "you have to stop
>> distributing your ripper viewer" through any contract, license, or
>> policy if they're going to use the GPL for the viewer.
>
> uhm... i read 'you cannot connect your modded viewer to our grid if
> contain "word1" or "word2 or "etc."'

To promote a positive and predictable experience for all Residents of  
Second Life, we require users of Third-Party Viewers ***and those who  
develop or distribute them*** (“Developers”) to comply with this  
Policy and the Second Life Terms of Service.

***If you are a Developer who distributes Third-Party Viewers to  
others***, you must also provide the following disclosures and  
functionality

You acknowledge and agree that we may require you to stop using ***or  
distributing*** a Third-Party Viewer for accessing Second Life

etc etc etc etc etc etc...

I'm not arguing that these are not, perhaps, reasonable requirements.

I am simply pointing out that they are NOT compatible with the GPL.
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script memory limit vs server CPU utilization as a key metric

2010-03-09 Thread Kelly Linden
Hi Joel.

This is an interesting issue.  You would think CPU would be the big issue,
but really it isn't.

* We actually do a decent job of time-slicing scripts.  You add a lot of
scripts and in general just the scripts run slower; sim performance isn't
that impacted.
* WAIT! Before you yell at me for that statement, read this one: most of the
script sources of lag are *specific* and actually caused by LSL-triggered
non-LSL events. For example, temp-rezzers can kill sim performance.  It isn't
the script that kills it though, it is the rezzing of the object, and the
derezzing.  Yes the script causes that, but it isn't script CPU time that is
hindering the performance. (BTW we are working on reducing the impact of
rezzing! ssshhh)
* ALL RIGHT! Yes there are other exceptions, scripts with high script time
that affect the sim. But these are generally exceptions.  The summary,
though, is that script CPU time is relatively well handled, though I admit it
could be better.
* Yes, better allocation of script CPU time so that one user can't impact the
scripts of everyone else is also a great goal - it just isn't as high a
priority as script memory right now.

Moving on:
* Sims going into swap is one of the biggest issues with sim performance we
have right now.  It is significantly more of a problem than script CPU
time.  And when it happens, all stats tank.
* The fact that script memory is unbounded, that you can have a single prim
object with thousands of scripts (technically unlimited, but the viewer
behaves weird after a while), is a real problem.  Someone built a
GoogleFS-like system in SL that could in theory hold many gigabytes of
data.  They thankfully never deployed the full system.
* When sims use too much memory they don't just affect themselves, they
affect their neighbors - the other regions running on the same host.

So yeah, it is kind of weird, but memory is the bigger performance issue so
we have chosen to address it first.

 - Kelly

PS - nice to chat with you again, feel free to contact me directly if you
wanna chat more. Hope things are going well.

On Tue, Mar 9, 2010 at 10:49 AM, Joel Foner  wrote:

> Many apologies if this has been discussed at length in a place that I've
> missed...
>
> I'm a bit baffled by the continuing strong focus on memory utilization of
> scripts rather than CPU load on the host servers. If (maybe I'm missing an
> important issue here) the issue is to avoid a resident or scripted item from
> causing performance problems on a region, wouldn't the relative CPU load
> imposed by that script be a critical item?
>
> I understand that if the total active memory size for a server goes above
> it's physical available RAM, then paging would increase and potentially
> create issues. Is there some objective analysis of servers with the Second
> Life simulator code on to show that they go into continuous swap mode in
> this case, or is it occasional "blips" of performance degradation on a
> slower interval? It seems to me that having continuing excessive CPU load
> would generate an on-going low simulator frame rate, which would be more
> frustrating than occasional hits from swapping.
>
> This line of thinking makes me wonder if a better metric for managing the
> user's perception of performance would be script CPU load rather than memory
> size.
>
> Thanks in advance, and again if this has already been addressed please feel
> free to point me at the thread so that I can read up.
>
> Best regards,
>
> Joel
>
> ___
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting
> privileges
>
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges

Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Gareth Nelson
Many of the requirements are in fact unreasonable unless they are
rephrased to apply ONLY when connecting to LL's servers.

On Tue, Mar 9, 2010 at 8:54 PM, Argent Stonecutter
 wrote:
>
> On 2010-03-09, at 14:38, Tayra Dagostino wrote:
>
>> On Tue, 9 Mar 2010 14:23:33 -0600
>> Argent Stonecutter  wrote:
>>
>>> On 2010-03-09, at 14:12, Tayra Dagostino wrote:
 I think yoiu've misreaded the TPV policy, no GPL violation, viewer
 code is GPL, you can take a copy from svn, manipulate it, patch or
 mood, rename it, all GPL let u do with it (and consequential charges
 for a developer who work on a GPL code)

 TPV is like an addendum to TOS, if you want use Linden Lab grids you
 should follow some rules... this is server side, no viewer code
 involved... the Linden services aren't GPL...
>>>
>>> Doesn't matter. LL can't impose a restriction like "you have to stop
>>> distributing your ripper viewer" through any contract, license, or
>>> policy if they're going to use the GPL for the viewer.
>>
>> uhm... i read 'you cannot connect your modded viewer to our grid if
>> contain "word1" or "word2 or "etc."'
>
> To promote a positive and predictable experience for all Residents of
> Second Life, we require users of Third-Party Viewers ***and those who
> develop or distribute them*** (“Developers”) to comply with this
> Policy and the Second Life Terms of Service.
>
> ***If you are a Developer who distributes Third-Party Viewers to
> others***, you must also provide the following disclosures and
> functionality
>
> You acknowledge and agree that we may require you to stop using ***or
> distributing*** a Third-Party Viewer for accessing Second Life
>
> etc etc etc etc etc etc...
>
> I'm not arguing that these are not, perhaps, reasonable requirements.
>
> I am simply pointing out that they are NOT compatible with the GPL.
> ___
> Policies and (un)subscribe information available here:
> http://wiki.secondlife.com/wiki/OpenSource-Dev
> Please read the policies before posting to keep unmoderated posting privileges
>



-- 
“Lanie, I’m going to print more printers. Lots more printers. One for
everyone. That’s worth going to jail for. That’s worth anything.” -
Printcrime by Cory Doctorow

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html
___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Michael Schlenker

On 09.03.2010 at 02:54, Lear Cale wrote:

> huh?  Can you point to existing technology for this analyzer?  Seems
> to me like it would require an oracle.

It might require an oracle to reach 100%, but if you go for the easy part, it's
not that hard (assuming a sane GC). LSL is a pretty simple language without
many of the problems you have with C, like pointers and memory aliasing and
stuff like that. Not sure if one could reuse parts of the LLVM
analyzers/optimizers for Mono to do this (Mono 2.6 can compile for an LLVM
backend).

1. All scripts that only handle integer, float, vector, key and rotation
variables, without recursive calls, can be handled easily.
--> The memory size for those types is fixed.
--> One could probably compute a complete callgraph and add up the maximum
number of used locals too (a minimal sketch of that bound follows after this list).

2. If the script uses recursion, it depends. If the compiler can figure out the
maximum depth, it works as in 1; if it can eliminate tail calls, it can do 1 too.
If it cannot, it's a lost cause and it must assume the worst, aka 64 kB.

3. If the script uses strings, it depends on what operations are used on those
strings. The only critical operation is appending strings to other strings;
comparing, getting substrings etc. can all be done in constant memory. As all
functions that can provide strings to a script have an upper bound on parameter
size, one could calculate a 'worst case' scenario in many cases. Just assume any
LSL function or event that returns/provides a string provides a unique string
of maximum size. Sum up and you have a worst-case limit.

4. If the script uses lists of other data types, use similar techniques as for
strings. As LSL does not have any reference types and you cannot nest lists,
this is not too hard to do, because lists are essentially flat, like strings.
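
A minimal sketch of the bound from point 1 in C++, assuming fixed per-variable
sizes and a known, non-recursive call graph (the data structures and names are
illustrative, not the output of any real analyzer):

#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct ScriptInfo {
    std::size_t globals_bytes = 0;                          // fixed-size globals
    std::map<std::string, std::size_t> locals_bytes;        // per-function locals
    std::map<std::string, std::vector<std::string>> calls;  // static call graph
};

// Locals of f plus the worst of its callees (deepest call chain).
std::size_t worst_case(const ScriptInfo& s, const std::string& f) {
    std::size_t deepest_callee = 0;
    auto edges = s.calls.find(f);
    if (edges != s.calls.end())
        for (const std::string& callee : edges->second)
            deepest_callee = std::max(deepest_callee, worst_case(s, callee));
    auto own = s.locals_bytes.find(f);
    std::size_t own_bytes = (own != s.locals_bytes.end()) ? own->second : 0;
    return own_bytes + deepest_callee;
}

// Suggested threshold for a script, starting from one event handler.
std::size_t script_bound(const ScriptInfo& s, const std::string& entry_event) {
    return s.globals_bytes + worst_case(s, entry_event);
}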

This of course assumes some small glitches are allowed, like the overhead of
copying a string/list, because the GC can free that memory at once if needed.

For simple scripts, e.g. the ugly hundreds of 'resizer' scripts you find in
hair, shoes etc., this could work pretty well, as those are usually just one
link_message handler with trivial code and one or two functions that call
llSetPrimitiveParams(). You might overestimate by one or two kB, because you
don't know how big the link_message string and key parameters are, but other
than that it should work pretty well.

Michael

> 
> On Mon, Mar 8, 2010 at 2:03 PM, Michael Schlenker
>  wrote:
>> 
>> Am 08.03.2010 um 18:46 schrieb Kelly Linden:
>> 
>>> We are not out to write a new malloc for mono.  What we have is a system 
>>> that throws an exception when the memory used by the script hits a certain 
>>> threshold (64k currently).  This exception is caught so we can "crash" the 
>>> script.  The future small scripts and big scripts projects will add new 
>>> functions to set and get this threshold value, allowing scripters to 
>>> effectively control how much memory is reserved for their script.  We will 
>>> continue to use mono's default memory management within the reserved memory 
>>> thresholds.  It is a much simpler problem to solve.
>>> 
>> While your at it, how about a static analyzer for mono/LSL that determines 
>> guaranteed lowest memory consumption for a script and sets the threshold 
>> there.
>> 
>> That would have the benefit of easing scripters work by providing useful 
>> defaults for all the easy cases without them having to do anything at all.
>> 
>> The scheme should only break down if the mono GC behaves weird. In that case 
>> scripters have a huge problem anyway as they cannot set any threshold 
>> without being crashed at random by a lazy GC.
>> 
>> Michael
>> ___
>> Policies and (un)subscribe information available here:
>> http://wiki.secondlife.com/wiki/OpenSource-Dev
>> Please read the policies before posting to keep unmoderated posting 
>> privileges
>> 
> 

___
Policies and (un)subscribe information available here:
http://wiki.secondlife.com/wiki/OpenSource-Dev
Please read the policies before posting to keep unmoderated posting privileges


Re: [opensource-dev] Script Memory Management Algorithm

2010-03-09 Thread Michael Schlenker

On 09.03.2010 at 23:57, Michael Schlenker wrote:

> 
> Am 09.03.2010 um 02:54 schrieb Lear Cale:
> 
>> huh?  Can you point to existing technology for this analyzer?  Seems
>> to me like it would require an oracle.

For an example of such a technique for Java bytecodes:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.100.1873

> 
> It might require an oracle to reach 100%, but if you go for the easy part, 
> its not that hard (assuming a sane GC). LSL is a pretty simple language 
> without many of the problems you have with C like pointers and memory 
> aliasing and stuff like that. Not sure if one could reuse parts of the LLVM 
> analyzers/optimizers for Mono to do this (Mono 2.6 can compile for a LLVM 
> backend). 
> 
> 1. All scripts that only handle integer, float, vector, key and rotation 
> variables without recursive calls can be handled easily.
> --> The memory size for those types is fixed.
> --> One could probably compute a complete callgraph and add up the maximum 
> number of used locals too.
> 
> 2. If the script uses recursion, it depends. If the compiler can figure out 
> the maximum depth, works as 1, if it can eliminate tailcalls, it can do 1. 
> too. If it cannot, its a lost cause and it must assume the worst aka 64kB.
> 
> 3. If the script uses strings, it depends on what operations are used on those 
> strings. The only critical operation is appending strings to other strings; 
> comparing, getting substrings, etc. can all be done in constant memory. As 
> all functions that can provide strings to a script have an upper bound on 
> parameter size, one could calculate a 'worst case' scenario in many cases. 
> Just assume any LSL function or event that returns/provides a string provides 
> a unique string of maximum size. Sum up and you have a worst-case limit.
> 
> 4. If the script uses lists of other data types, use techniques similar to those 
> for strings. As LSL does not have any reference types and you cannot nest lists, 
> this is not too hard to do, because lists are essentially flat, like strings.
> 
> This of course assumes some small glitches are allowed, like the overhead of 
> copying a string/list, because the GC can free that memory at once if needed. 
> 
> For simple scripts, e.g. the ugly hundreds of 'resizer' scripts you find in 
> hair, shoes, etc., this could work pretty well, as those are usually just one 
> link_message handler with trivial code and one or two functions that call 
> llSetPrimitiveParams(). You might overestimate by one or two kB, because you 
> don't know how big the link_message string and key parameters are, but other 
> than that it should be accurate enough.
> 
> Michael
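
To make points 1-4 above concrete, here is a minimal sketch (Python, purely
illustrative) of how such a worst-case bound could be computed from declared
variable types and an acyclic call graph. The byte sizes, the MAX_STRING_BYTES
cap, the script model and every name below are assumptions for illustration
only; this is not an existing analyzer and not how mono actually accounts memory.

# Hypothetical worst-case memory estimator in the spirit of points 1-4 above.
# All sizes and the script representation are illustrative assumptions.

FIXED_TYPE_SIZES = {"integer": 4, "float": 4, "vector": 12, "rotation": 16, "key": 36}
MAX_STRING_BYTES = 1024  # assumed upper bound on any string an event/function hands in

def size_of(type_name):
    if type_name in FIXED_TYPE_SIZES:
        return FIXED_TYPE_SIZES[type_name]
    if type_name == "string":
        return MAX_STRING_BYTES      # point 3: assume every string is maximum size
    raise ValueError("unbounded case: fall back to the 64 kB default")

def worst_case_bytes(globals_, functions, call_graph, entry_points):
    # globals_: {name: type}; functions: {fname: {"locals": {name: type}, "string_temps": n}}
    # call_graph: {fname: set of callees}, assumed acyclic (point 2 bails out on recursion)
    total = sum(size_of(t) for t in globals_.values())

    def frame(fname):
        f = functions[fname]
        own = sum(size_of(t) for t in f.get("locals", {}).values())
        own += f.get("string_temps", 0) * MAX_STRING_BYTES  # pessimistic copy overhead
        deepest = max((frame(callee) for callee in call_graph.get(fname, ())), default=0)
        return own + deepest          # the deepest call chain dominates

    return total + max((frame(e) for e in entry_points), default=0)

# e.g. a resizer-style script: one link_message handler calling one helper
# worst_case_bytes({"scale": "float"},
#                  {"link_message": {"locals": {"msg": "string", "id": "key"}},
#                   "resize":       {"locals": {"s": "vector"}}},
#                  {"link_message": {"resize"}}, ["link_message"])

Anything such a sketch cannot bound (recursion of unknown depth, unbounded list
growth) would fall back to the current 64 kB default, matching the 'lost cause'
case in point 2.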


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Armin Weatherwax

> I am simply pointing out that they are NOT compatible with the GPL.
GPL compatible or not - the sentence "The Snowglobe Viewer [...] this 
viewer may be somewhat less stable than the official Second Life 
viewer" (http://viewerdirectory.secondlife.com/ at 2010/03/10 00:06 
GMT+1) is a slap in the face of anybody contributing bugfixes to the 
Second Life codebase.

Armin
 


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Thomas Grimshaw
It's the truth. Snowglobe is unstable.

~Tom



Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Morgaine
At any given point in time, one viewer is more stable than another, and at
another point in time, it's the other way around.  This is perfectly normal,
and blanket statements about superior stability make no sense ... especially
when they share common code! :-)

If anything, Snowglobe could well be more stable over time, since any bugs
probably won't last long: they tend to get patched rapidly and a new
tagged version released.  In contrast, the official LL viewer gets released
infrequently.

One shouldn't read too much into PR or advocacy statements anyway.


Morgaine.






Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Gareth Nelson
Don't new features get into Snowglobe faster too? Thus more potential for bugs.




-- 
“Lanie, I’m going to print more printers. Lots more printers. One for
everyone. That’s worth going to jail for. That’s worth anything.” -
Printcrime by Cory Doctorow

Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Tateru Nino

On 10/03/2010 10:09 AM, Armin Weatherwax wrote:
>   
>> I am simply pointing out that they are NOT compatible with the GPL.
>> 
> GPL compatible or not - the sentence "The Snowglobe Viewer [...] this 
> viewer may be somewhat less stable than the official Second Life 
> viewer" (http://viewerdirectory.secondlife.com/ at 2010/03/10 00:06 
> GMT+1) is a slap in the face of anybody contributing bugfixes to the 
> Second Life codebase.
>   
It might be true at some times and not at others (right now, Snow2 is a
whole lot more stable than V2 beta in my experience), but it reads like
a calculated slur.

-- 
Tateru Nino
http://dwellonit.taterunino.net/



Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Morgaine
*If only* new features got into Snowglobe faster.  :-)  The only things that
seem to get in fast are bug fixes.  And of course IBM-sponsored code.

Admittedly, the rate is somewhat faster than into LL's main viewer ... but
then, it could hardly be slower! :-)

Talking about "LL's main viewer" ... why doesn't it have a name?


Morgaine.





==

On Wed, Mar 10, 2010 at 12:26 AM, Gareth Nelson wrote:

> Don't new features get into Snowglobe faster too? Thus more potential for
> bugs.
>

Re: [opensource-dev] Potential inventory problem?

2010-03-09 Thread John Hurliman
Finally nailed this one down. If you have items in your inventory that have
completely empty permissions (basemask 0, everyonemask 0, currentmask 0,
etc.), the viewer will freeze when you open the texture picker dialog. This
appears to affect all released versions of the viewer. The next obvious
question is whether this can be triggered by giving someone a bad item,
either directly or through a group attachment or notecard.

I put a sanity check in my OpenSim connector code to look for the red flag
of items with a basemask of 0.

John
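
A minimal sketch of the kind of sanity check described above, purely
illustrative: the dictionary field names ("base_mask", "version", "id") are
assumptions, not the actual OpenSim connector API, and the folder-version rule
comes from the quoted discussion further down.

# Illustrative pre-flight check for inventory data before it is sent to the viewer.

def has_red_flag(item):
    # Per the report above: an item with a base mask of 0 (completely empty
    # permissions) can freeze the viewer's texture picker.
    return item.get("base_mask", 0) == 0

def sanitize_inventory(items, folders, log=print):
    clean = []
    for item in items:
        if has_red_flag(item):
            log("red flag: item %s has a base mask of 0" % item.get("id"))
        else:
            clean.append(item)
    for folder in folders:
        if folder.get("version", 0) < 1:
            # Folder versions should start at 1, not 0 (see the quoted exchange below).
            log("warning: folder %s has version %s" % (folder.get("id"), folder.get("version")))
            folder["version"] = 1
    return clean, folders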

On Mon, Mar 8, 2010 at 6:28 PM, John Hurliman wrote:

> That certainly sounds like the symptoms I'm seeing. One of my two CPUs is
> pegged, memory allocation for the Snowglobe process is all over the place
> (jumping up and down by 10s or 100s of MB at a time), and the hang seems to
> be indefinite. I haven't left it for more than three minutes, but by that
> time the server has decided you are timed out since the client isn't sending
> any network traffic.
>
> We also just confirmed that the current OpenSim master isn't triggering
> this bug, so it's a subtle difference between what the stock OpenSim
> inventory connector is doing and what mine is doing. It would be really nice
> to know what that difference is, but it sounds like Qarl wasn't sure either.
> I just found out I'm sending folders with Version = 0 and I'm told that
> version should start at 1. I fixed that and it hasn't resolved the issue,
> but I'll keep plugging away.
>
> John
>
>
> On Mon, Mar 8, 2010 at 4:23 PM, Nexii Malthus wrote:
>
>> Ah, the comment by Qarl Linden in the JIRA issue nails the issue for that
>> specific freeze bug:
>> "i'm not sure how our external svn is configured - but i can explain the
>> root of the problem. it's with our memory allocator (malloc/new/etc.) we
>> switched back at 1.21, and the behavior of the inventory system seemed to
>> trigger some kind of horrible worst-case scenario. with my demo above, i
>> switched to dmalloc. for the real fix in 1.23, we invoked some windows magic
>> to improve the allocator performance."
>>
>> Might that help for this bug?
>>
>> - Nexii
>>
>> On Tue, Mar 9, 2010 at 12:19 AM, Nexii Malthus wrote:
>>
>>> It does happen on the SL grid. At least it did for a long while. I have to
>>> admit I don't use the texture picker that often these days, but it has burned
>>> me pretty badly.
>>>
>>> How bad is the freeze? How long is it?
>>>
>>> Edit: Found this JIRA, might be relevant to the problem I think, the
>>> freeze bug used to exist in 1.21 and 1.22 and was fixed with 1.23.
>>> http://jira.secondlife.com/browse/VWR-8818
>>>
>>>
>>> - Nexii
>>>
>>>
>>> On Mon, Mar 8, 2010 at 10:33 PM, Rob Nelson <
>>> nexisentertainm...@gmail.com> wrote:
>>>
 Hahaha, I remember back when Open Grid Services was written in PHP;  I
 still have it on my webserver somewhere.  I think I submitted a patch to
 you guys several years ago (regarding a SQL injection exploit) where I
 accidentally left in a bunch of cursing and racial slurs.

 If I contribute again, I think I'll be a little more careful.

 Also, I'll take a peek into what's causing that problem;  I'm having to
 redo the UI in Lua anyway, so I may as well figure out what is causing
 your issues.

 Fred Rookstown

 On Mon, 2010-03-08 at 11:17 -0800, John Hurliman wrote:
 > Yes, I'm trying to push out the first release of a set of PHP grid
 > services for OpenSim and the texture picker freeze bug is a
 > showstopper. It obviously doesn't happen on SLGrid and it wasn't
 > happening in the past with OpenSim. A packet trace didn't show
 > anything interesting (doesn't appear to be triggered by a specific
 > packet at all). My current guess is that the client is making a bad
 > assumption about information the server sent earlier on. It might have
 > to do with the library since I am not sending a library skeleton at
 > the moment.
 >
 > Any other clues on the freezing issue would be much appreciated. The
 > warning is more of a curiosity but it could uncover another hidden
 > expectation the open source server software is not meeting.
 >
 > John
 >
 > On Mon, Mar 8, 2010 at 9:21 AM, Nexii Malthus 
 > wrote:
 > Happens just as well on any grid. I'm just as confused as
 > anyone here what this error message means, but this was
 > brought up before and I vaguely remember a linden saying that
 > wasn't the source of the lag problems related to inventory.
 >
 >
 > Are you trying to track down the texture picker freezing bug?
 >
 >
 > - Nexii
 >
 >
 > On Mon, Mar 8, 2010 at 12:45 AM, John Hurliman
 >  wrote:
 >
 >
 > When logging into OpenSim with Snowglobe I see a lot
 > (dozens) of these messages in the log file:
 >
 >

Re: [opensource-dev] Potential inventory problem?

2010-03-09 Thread Argent Stonecutter

On 2010-03-09, at 19:06, John Hurliman wrote:

> Finally nailed this one down. If you have items in your inventory  
> that have completely empty permissions (basemask 0, everyonemask 0,  
> currentmask 0, etc) the viewer will freeze when you open the texture  
> picker dialog. This appears to affect all released versions of the  
> viewer. The next obvious question is whether this can be triggered  
> by giving someone a bad item either directly or through a group  
> attachment or notecard.
>
> I put a sanity check in my OpenSim connector code to look for the  
> red flag of items with a basemask of 0.

Crikey, you shouldn't be able to create such objects!


Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Morgaine
Tayra, the GPL is about a lot more than merely providing modifiable sources.

GPL licenses specifically provide the freedom to modify and distribute GPL
sources *without "further restrictions"* being placed on developers beyond
the restrictions declared in the license itself.  This is completely
distinct from restrictions being placed on the *use of viewers* upon
connection to a service, in which the GPL has no interest at all.  Clause 6
of GPLv2 is particularly demanding:


   - *6. Each time you redistribute the Program (or any work based on the
   Program), the recipient automatically receives a license from the original
   licensor to copy, distribute or modify the Program subject to these terms
   and conditions. You may not impose any further restrictions on the
   recipients' exercise of the rights granted herein. You are not
   responsible for enforcing compliance by third parties to this License.*


These are fundamental freedoms provided by the GPL, and they cannot be
denied at the start, taken away, or restricted.  You cannot provide
sources while imposing *"further restrictions"* on GPL developers and still
release under GPL --- that would be non-compliant with the license.

This is totally unrelated to LL placing restrictions on what viewers or
users can do when they connect to SL.  LL is entirely within its rights to
do that.  What they cannot do is to place restrictions directly on the
developers modifying and distributing the code under GPL --- that's not GPL
compliant.

The reason for this is easy to understand --- it's about allowing the
distribution chain to work freely.  Let me give you an example to illustrate
the point.


   1. SL developer A takes LL sources, compiles them with extra optimization
   turned on, and publishes binaries and sources on her website in full
   compliance with GPLv2 and also in full compliance with LL's rules.
   2. An SL user likes this release and wants it in his Linux distribution,
   so he sends it to his distro maintainers, B.  They give it a quick once-over
   with a security tool, notice a typical buffer overflow bug, fix it, and add
   the package and sources to their distro in .rpm format.  (They don't use
   SL.)
   3. A gaming distro developer C sees this client appear, rebuilds it and
   adds it to the packages in his distro in .deb format.  (He doesn't use SL.)
   4. User D of the gaming distro thinks this is a great base for his
   warfare game, adds a pile of ways to attack people, and submits it back to
   the gaming distro as a GPL derivative for warfare fans.  (He doesn't use
   SL.)
   5. Opensim user E notices this new viewer, and thinks the new weaponry is
   a great way of fuzz testing his private sims to find security holes.  He
   adds some more test-related menus to the code and sends the modified sources
   to a testers distro.  (He doesn't use SL.)
   6. Finally, SL developer F finds the viewer in the testers distro,
   notices that it still works with SL despite its odd path through the Linux
   world, recompiles it, and places binaries and sources on his website for
   users to try out.


So what does the GPL have to say about all this?  Easy:  if the code is
GPL-licensed, then all developers have the freedom to modify and distribute
the software *without further restrictions other than stated in the GPL
license*.   All of the above is therefore perfectly fine.

But what does LL have to say about all this?  According to the TPV document,
all of these developers have restrictions placed on them, because the TPV
document is so badly written that it talks about developers, viewers, and
users all mixed up together.  Also, it doesn't have a clear clause narrowing
the *scope* of all TPV terms to *usage* alone.  As a result, when describing
restrictions that apply to users and viewers in SL, these restrictions end up
applying to GPL developers too.  That is simply not GPL compliant, because
it is a "*further restriction*" on the developer's right to modify and
distribute.

It doesn't even make *practical* sense, since people B-E above don't even
connect to SL.  However, it must be noted that the freedoms provided by the
GPL are not conditional on not using a particular service.  GPL developers
are granted those freedoms even if, as users, they later connect to SL
themselves (and maybe get themselves banned for it, which is fine).  Being a
developer and being a user are separate things, even if you perform both
roles.  (The GPL is not concerned with restrictions placed on you as a user
of course, those are not its business.)

What's more, LL states that they may request that developers modify their
software to suit the SL terms of service, with an implied threat of
termination if they don't comply.  This might seem reasonable to LL, but if
it is a "*further restriction*" on the developer's freedom to modify and
distribute GPL code, then it might well not be GPL compliant at all.  This is
a fine point that might have to be tested in court.

Re: [opensource-dev] Third party viewer policy: commencement date

2010-03-09 Thread Lance Corrimal
On Wednesday, 10 March 2010, Morgaine wrote:
> *If only* new features got into Snowglobe faster.  :-)  The only
> things that seem to get in fast are bug fixes.  And of course
> IBM-sponsored code.
> 
> Admittedly, the rate is somewhat faster than into LL's main viewer
> ... but then, it could hardly be slower! :-)
> 
> Talking about "LL's main viewer" ... why doesn't it have a name?
> 


I usually refer to a stock LL-supplied viewer as "that other thing".
And try not to use it.