[opensource-dev] So what happens if....
First I want to thank LL for its efforts to create ethical guidelines for respectable viewers. Unfortunately I don't think these rules are going to work in the reality we live in. (And as others have already pointed out, they might be incompatible with the GPL and other licenses.) Let's imagine for a moment that instead of being an ebil bitch I were a respectable software developer and created my own viewer that fully conforms to the policy. I would register my viewer in your viewer directory. Now suppose one of the following scenarios happens. What should I do, and what would LL's reaction be?

1) My viewer is open source. Some evil person(s) take the source code and add functionality that breaks the policy. They don't bother to change the viewer's ID or other identification data.

2) My viewer has a plugin framework that allows 3rd party developers to create their own plugins. One evil person then writes a plugin that breaks the rules of the viewer policy.

3) Evil persons develop a proxy or software hook that steals data directly from the data stream between the client and LL's servers, or from the communications between the viewer's host module and its DLLs. The proxy/hook is completely transparent: neither the client nor the server can detect it.

4) I develop a closed source viewer and evil persons develop their own evil viewer. Then they decide to fake my viewer's identification data so that the server thinks their evil viewer is my viewer.

___ Policies and (un)subscribe information available here: http://wiki.secondlife.com/wiki/OpenSource-Dev Please read the policies before posting to keep unmoderated posting privileges
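Scenario 4 is worth spelling out, because it needs no skill at all. Any identification data a viewer sends (channel name, version string, whatever) is entirely under the client's control, so an impostor simply copies another viewer's values. A toy sketch, with made-up field names (this is not SL's actual login protocol):

```python
# Hypothetical illustration of scenario 4: the server only ever sees
# whatever identification fields the client chooses to send, so an
# "evil" viewer can claim to be a respectable one. Field names here
# are invented for illustration.

def build_login_params(channel: str, version: str) -> dict:
    """Build the identification part of a login request."""
    return {"channel": channel, "version": version}

# A respectable viewer identifies itself honestly:
honest = build_login_params("RespectableViewer", "1.0.0")

# An evil viewer just claims to be the respectable one; from these
# fields alone the server cannot tell the difference:
impostor = build_login_params("RespectableViewer", "1.0.0")

assert honest == impostor  # identical on the wire
```

Nothing short of cryptographic attestation the client cannot tamper with would change this, and the client owns its own binary, so it can always tamper.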
Re: [opensource-dev] FAQ posted for Third Party Viewer Policy
On 28.2.2010 10:34, Marine Kelley wrote:
> I'd like to remind people of my proposed solution, back when LL asked
> everyone about how to set their third party viewer policy, a few
> months ago. I had proposed to make it so that only viewers built on a
> LL-owned dedicated machine would be accepted. Such binaries would be
> the result of the build of committed sources, with the addition of a
> small code (unknown to the devs of the viewer) that would transfer a
> hash to the grid upon connecting (and possibly regularly afterward
> while online). The binaries would be hosted on LL's website, along
> with the sources, and everyone would have been able to consult the
> sources while being sure there would not be any difference between
> these sources and the resulting binaries (with the exception of the
> code I mentioned). Granted, this is an expensive solution, and
> potentially difficult while testing (there has to be some temporary
> code for that purpose, for instance a code that allows only 4 or 5
> viewers using it at the same time), but the only solution that
> formally guarantees that Build = Source, and that the source can be
> reviewed, instead of testing every viewer, which takes much longer.

This approach wouldn't work, and LL's third party viewer policy is not going to work either. There is nothing to stop a skillful coder from decoding this "secret hashing component", a skillful hacker from writing a proxy that does its ebil things between client and server, or a skillful user from installing a certain program that gives access to OpenGL data and gathers the necessary information. Moving security/DRM to the client side is not going to work. Big companies like EA have tried this approach with rootkits and such; the result: total, absolute failure and a huge PR loss (just google "DRM spore").
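The proxy objection to the secret-hash proposal can be made concrete: whatever token the trusted binary sends, a man-in-the-middle can forward it unchanged while tampering with everything else, so the server learns nothing about the software actually driving the connection. A toy sketch, not SL's actual protocol, with all names invented:

```python
# Toy illustration: a transparent proxy preserves the "secret" build
# hash while rewriting the rest of the traffic, so the hash check on
# the server side proves nothing.

def trusted_client_handshake() -> dict:
    # The secret hash baked into the LL-built binary (hypothetical).
    return {"hash": "s3cret-build-hash", "payload": "normal request"}

def evil_proxy(message: dict) -> dict:
    # Pass the authentication token through untouched, but do evil
    # things to the rest of the message.
    tampered = dict(message)
    tampered["payload"] = "copybot everything"
    return tampered

def server_accepts(message: dict) -> bool:
    # The server can only check the hash, which the proxy preserved.
    return message["hash"] == "s3cret-build-hash"

assert server_accepts(evil_proxy(trusted_client_handshake())) is True
```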
Microsoft tried to support different DRM schemes in their multimedia player; the result: a player that is very slow to start, a media format that requires internet access and works on a single computer only, and complex encryption/verification/obfuscation schemes. Intel and the media companies introduced HDCP; the result: honest customers required to upgrade their working hardware, while pirates still release movies to the net before their official release day, without the annoying "you wouldn't steal a car" ads and unskippable trailers (http://www.makeuseof.com/tech-fun/wp-content/uploads/2010/02/pirateddvd1.png).

Next year, 28 February 2011, assuming the world doesn't end and everything follows my grand plan: 1) Nyx Linden still doesn't have a bear, 2) you still need to fake-bake specular lighting for latex clothes, 3) content creators are going to whine about how their content was copybotted and how "LL doesn't do enough to stop copybotters", and 4) there are fewer SL-compatible open source viewer developers and more non-SL-compatible ones.

IMHO: instead of wasting valuable bytes on lawyers (don't feed the lawyers, they just get bigger and hungrier) and trying to move security/DRM to the client's responsibility, LL should do the following:
1) organize a "build Nyx's bear" competition,
2) add support for clothing materials and custom avatar meshes that finally allow proper latex clothing,
3) create a paranoid server that is not hopelessly in love with the client and verifies the client's requests and actions,
4) streamline the process for posting copyright notices (it should be a two-click process),
5) allow content creators to post additional proof that they are the creators of content (to avoid constant copyright-griefing attacks):
- higher resolution textures
- non-watermarked textures
- high polycount models
- etc.
6) improve the asset server so that it better tracks who uploaded/created an asset, when, and who is using it, so that all copybotted material can be instantly deleted from the server and the avatars distributing it banned,
7) change from a passive mode (waiting for copyright notices) to an active mode, where you actively seek out copyright violations through automatic processes and perhaps allow other users to tip you off about possible violations,
8) make the process more transparent: allow creators to see inside the process and give them feedback,
9) make the process more visible: publish reports on how many you have banned, write the occasional blog post about the topic, and offer rewards for copyright tips.

Ultimately you could someday render the scene on the server, and thus avoid having to transfer texture and object assets to the client at all, but I guess there are currently no users ready to pay for the expensive hardware, software, and bandwidth that server-side rendering would need.

I think the third party viewer policy is a great ethical guide for Second Life compatible viewer developers, and the directory gives a good listing of respectable viewers and correct download addresses. But otherwise it is a complete waste of time and money, and is going to drive some developers away from Second Life.
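The "paranoid server" idea in item 3 of my list is the only one of these that actually closes the hole the viewer policy is trying to paper over: permissions live in the server's own records, and every client request is checked against them instead of trusted. A minimal sketch, with a hypothetical permission model and names (not SL's actual asset system):

```python
# A "paranoid" server validates every client request against its own
# authoritative state, ignoring whatever capability flags the client
# claims to have. All names here are hypothetical.

PERMISSIONS = {
    # (avatar, asset) -> set of allowed operations, kept server-side
    ("alice", "texture-42"): {"view"},
    ("bob", "texture-42"): {"view", "copy"},
}

def handle_request(avatar: str, asset: str, operation: str) -> bool:
    """Allow an operation only if the server's own records permit it,
    regardless of anything the client sent along."""
    allowed = PERMISSIONS.get((avatar, asset), set())
    return operation in allowed

# The client may *claim* copy permission, but the server decides:
assert handle_request("alice", "texture-42", "copy") is False
assert handle_request("bob", "texture-42", "copy") is True
```

This does not stop a client from keeping data it was legitimately sent (nothing can), but it does stop the server from honoring requests a modified client has no right to make.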
Re: [opensource-dev] Script Memory Limits UI
I don't think that dynamic memory would be hard to implement, but the problem is that an avatar/parcel has (or is going to have) a limited amount of memory available.

1) It is not possible to swap memory to the server's hard drive, because that would cause lag, and lag is actually the reason why memory limits are coming to the script world called SL in the first place.

2) It is not possible to use your neighbors' unused memory, because then your scripts would randomly crash when your neighbors claim their memory back. It would also be a bit inconsistent: you can't use your neighbors' primitives either!

The second problem is that the average SL user has a limited number of brain cells and a limited amount of patience. They are not going to be happy if their shop's unique visitor counter "that has been working for the last few years" suddenly stops working and throws a stack overflow exception because they used a sex bed with their girlfriend. They need stability and deterministic behavior.

I would be happy to convert into a dynamic memory supporter if you could present a realistic memory management algorithm that (these are not laws; common sense works here):

1) works within limited memory (doesn't try to use swapping or neighbors' unused memory),
2) doesn't need user actions after successful rezzing (the user doesn't need to set quotas, prioritize scripts, or reset/delete scripts that use too much memory),
3) lets a script developer be sure that a successfully rezzed object has enough memory for running and basic operations (the user, for example, can't set a script's memory limit so small that it can't run),
4) once an object is rezzed successfully, keeps the scripts in the object running until the object is derezzed (assuming the script's developer was careful with scripting),
5) is feasible to implement on current LL hardware (it doesn't need things like large statistical databases to forecast a script's future memory usage).

On 9.3.2010 15:47, Carlo Wood wrote:
> It's not impossible... it's actually rather simple.
>
> That being said, I wouldn't be surprised if LL feels it's too
> difficult for them.
>
> [ I suppose remarks like this (that it is simple) have usually not got
> any weight, therefore I already added the fact that I wrote a malloc
> library myself in the past that is faster and three times more efficient
> than gmalloc (never released though), and already posted a rough outline
> of how it could be done. Now, reluctantly, let me add that the concept
> of some individual here knowing something that Linden Lab can't do is
> also not impossible. When I deleted my home directory a few years
> ago, the ONLY thing I could find on the net, from the FAQ to the
> developers of the filesystem itself, was: you CANNOT undelete files
> on an ext3 filesystem. Well, I did; I recovered all 50,000 files
> completely, and wrote a (free, open source) program to prove it
> (ext3grep). (In case you never heard of that, then I guess you never
> deleted a file from an ext3 filesystem ;) The HOWTO webpage that I
> wrote at the same time has been translated to Japanese, Russian,
> and so on. My English version got 50,000 hits in the first three
> days after release.) I didn't take "it's impossible" for granted
> then, and thousands of people thanked me for that (literally, by
> email). I'm not going to take "it's impossible" in this case
> either, because this is way, way more simple :/. Sorry, but LL is
> just lazy. That is the reason. You're right, let them say that
> and I'll crawl back under my rock: "We're just lazy". ]
>
> On Tue, Mar 09, 2010 at 08:54:45AM +0100, Marine Kelley wrote:
>
>> supposed to do themselves. Oh of course this is a hard job, allocating
>> memory dynamically in an environment like this. Perhaps it is
>> impossible. I have yet to hear a Linden say, in all honesty, "sorry
>> guys, we can't do it as initially planned, we have to ask you to
>> participate by tailoring your scripts, because we can't do it from our
>> side".
>> What I have heard so far is "you will be provided tools to
>> adapt to the change that is taking place". The two mean exactly the
>> same thing, but a little honesty does not hurt. This additional
>> workload was not planned, is a shift of work that we were not supposed
>> to take in charge in the first place, with no compensation, so I'd
>> have liked a little explanation at least.
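For what it's worth, a scheme satisfying my five constraints above doesn't need to be dynamic at all: a fixed per-object reservation made at rez time is enough. Rezzing fails up front if the parcel cannot fit the reservations, and once rezzed, a script's memory is guaranteed until the object is derezzed. A minimal sketch (purely illustrative, not LL's actual design; all names are hypothetical):

```python
# Reservation scheme meeting constraints 1-5: no swapping, no
# borrowing from neighbors, no user tuning after rez, deterministic
# failure at rez time rather than random crashes later.

class Parcel:
    def __init__(self, memory_limit: int):
        self.memory_limit = memory_limit
        self.reserved = 0      # total memory promised to rezzed objects
        self.objects = {}      # object name -> reservation size

    def rez(self, name: str, script_quotas: list[int]) -> bool:
        """Rez an object whose scripts need the given quotas.
        Either all reservations succeed, or the rez is refused."""
        need = sum(script_quotas)
        if self.reserved + need > self.memory_limit:
            return False       # refused up front (constraint 3)
        self.reserved += need  # no swapping or borrowing (constraint 1)
        self.objects[name] = need
        return True

    def derez(self, name: str) -> None:
        """Release a reservation; until this point the object's
        scripts keep their memory (constraint 4)."""
        self.reserved -= self.objects.pop(name)

parcel = Parcel(memory_limit=1000)
assert parcel.rez("visitor counter", [256, 128]) is True
assert parcel.rez("sex bed", [512]) is True
assert parcel.rez("greedy gadget", [256]) is False  # refused, no crash later
parcel.derez("sex bed")
assert parcel.rez("greedy gadget", [256]) is True
```

The price is that unused headroom inside a reservation is wasted, which is exactly the trade the "limited brain cells" crowd would happily take: the visitor counter never stops working because of what happens on the sex bed.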