Re: What is Expressiveness in a Computer Language
Chris Smith wrote:
> > Easy, any statically typed language is not latently typed.
>
> I'm actually not sure I agree with this at all. I believe that
> reference values in Java may be said to be latently typed. Practically
> all class-based OO languages are subject to similar consideration, as
> it turns out.

Quite probably true of GC-ed statically typed languages in general, at least up to a point (and provided you are not using something like a tagless ML implementation). I think Rob is assuming a rather too specific implementation of statically typed languages.

> I'm unsure whether to consider explicitly stored array lengths, which
> are present in most statically typed languages, to be part of a "type"
> in this sense or not.

If I understand your position correctly, wouldn't you be pretty much forced to reject the idea of the length of a Java array being part of its type ? If you want to keep the word "type" bound to the idea of static analysis, then -- since Java doesn't perform any size-related static analysis -- the size of a Java array cannot be part of its type.

That's assuming that you would want to keep the "type" connected to the actual type analysis performed by the language in question. Perhaps you would prefer to loosen that and consider a different (hypothetical) language (perhaps producing identical bytecode) which does do compile time size analysis. But then you get into an area where you cannot talk of the type of a value (or variable) without relating it to the specific type system under discussion. Personally, I would be quite happy to go there -- I dislike the idea that a value has a specific inherent type.

It would be interesting to see what a language designed specifically to support user-defined, pluggable, and perhaps composable, type systems would look like. Presumably the syntax and "base" semantics would be very simple, clean, and unrestricted (like Lisp, Smalltalk, or Forth -- not that I'm convinced that any of those would be ideal for this), with a defined result for any possible sequence of operations. The type-system(s) used for a particular run of the interpreter (or compiler) would effectively reduce the space of possible sequences. For instance, one could have a type system which /only/ forbade dereferencing null, or another with the job of ensuring that mutability restrictions were respected, or a third which implemented access control...

But then, I don't see a semantically critical distinction between such space reduction being done at compile time vs. runtime. Doing it at compile time could be seen as an optimisation of sorts (with downsides to do with early binding etc). That's particularly clear if the static analysis is /almost/ able to prove that a program is legal (by its own rules) but has to make certain assumptions in order to construct the proof. In such a case the compiler might insert a few runtime checks to ensure that its assumptions were valid, but do most of its checking statically.

There would /be/ a distinction between static and dynamic checks in such a system, and it would be an important distinction, but not nearly as important as the distinctions between the different type systems. Indeed I can imagine categorising type systems by /whether/ (or to what extent) a tractable static implementation exists.

-- chris

P.S. Apologies Chris, btw, for dropping out of a conversation we were having on this subject a little while ago -- I've now said everything that I /would/ have said in reply to your last post if I'd got around to it in time...
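P.P.S. To make the null-dereference example a little more concrete, here is a rough Java-flavoured sketch of what such a plug-in, null-safety-only checker might make of some ordinary code. The checker is entirely hypothetical; the comments merely record judgements it could plausibly reach, and the explicit test stands in for the residual runtime check it might insert where its static reasoning runs out.

class NullOnlyExample {
    static int length(String s) {
        // The hypothetical checker cannot prove s non-null from this
        // method alone, so it would either reject the dereference below
        // or insert a residual runtime check along these lines.
        if (s == null) throw new IllegalArgumentException("s is null");
        return s.length();          // provably safe once past the check
    }

    public static void main(String[] args) {
        String greeting = "hello";              // provably non-null
        System.out.println(length(greeting));   // statically acceptable
    }
}

Every other judgement -- arithmetic, arity, "ordinary" types -- would be left to whichever other pluggable systems were loaded for that particular run.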
-- http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Anton van Straaten wrote:
> But a program as seen by the programmer has types: the programmer
> performs (static) type inference when reasoning about the program, and
> debugs those inferences when debugging the program, finally ending up
> with a program which has a perfectly good type scheme. It's may be
> messy compared to say an HM type scheme, and it's usually not proved to
> be perfect, but that again is an orthogonal issue.

I like this way of looking at it.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Darren New wrote:
[me:]
> > Personally, I would be quite happy to go there -- I dislike the idea
> > that a value has a specific inherent type.
>
> Interestingly, Ada defines a type as a collection of values. It works
> quite well, when one consistantly applies the definition.

I have never been very happy with relating type to sets of values (objects, whatever). I'm not saying that it's formally wrong (but see below), but it doesn't fit with my intuitions very well -- most noticeably in that the sets are generally unbounded so you have to ask where the (intentional) definitions come from.

Two other notions of what "type" means might be interesting; both come from attempts to create type-inference mechanisms for Smalltalk or related languages. Clearly one can't use the set-of-values approach for these purposes ;-) One approach takes "type" to mean "set of classes", the other takes a finer-grained approach and takes it to mean "set of selectors" (where "selector" is Smalltalk for "name of a method" -- or, more accurately, name of a message).

But I would rather leave the question of what a type "is" open, and consider that to be merely part of the type system. For instance the hypothetical nullability analysis type system I mentioned might have only three types: NULLABLE, ALWAYSNULL, and NEVERNULL.

It's worth noting, too, that (in some sense) the type of an object can change over time[*]. That can be handled readily (if not perfectly) in the informal internal type system(s) which programmers run in their heads (pace the very sensible post by Anton van Straaten today in this thread -- several branches away), but cannot be handled by a type system based on sets-of-values (and is also a counter-example to the idea that "the" dynamic type of an object/value can be identified with its tag).

([*] if the set of operations in which it can legitimately partake changes. That can happen explicitly in Smalltalk (using DNU proxies for instance if the proxied object changes, or even using #becomeA:), but can happen anyway in less "free" languages -- the State Pattern for instance, or even (arguably) in the difference between an empty list and a non-empty list).

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote:
> When people talk
> about "types" being associated with values in a "latently typed" or
> "dynamically typed" language, they really mean *tag*, not type.

I don't think that's true. Maybe /some/ people do confuse the two, but I am certainly a counter-example ;-)

The tag (if any) is part of the runtime machinery (or, if not, then I don't understand what you mean by the word), and while that is certainly a reasonable approximation to the type of the object/value, it is only an approximation, and -- what's more -- is only an approximation to the type as yielded by one specific (albeit abstract, maybe even hypothetical) type system.

If I send #someMessage to a proxy object which has not had its referent set (and assuming the default value, presumably some variant of nil, does not understand #someMessage), then that's just as much a type error as sending #someMessage to a variable holding a nil value. If I then assign the referent of the proxy to some object which does understand #someMessage, then it is not a type error to send #someMessage to the proxy. So the type has changed, but nothing in the tag system of the language implementation has changed.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Chris Smith wrote:
> > It would be interesting to see what a language designed specifically to
> > support user-defined, pluggable, and perhaps composable, type systems
> > would look like. [...]
>
> You mean in terms of a practical programming language? If not, then
> lambda calculus is used in precisely this way for the static sense of
> types.

Good point. I was actually thinking about what a practical language might look like, but -- hell -- why not start with theory for once ? ;-)

> I think Marshall got this one right. The two are accomplishing
> different things. In one case (the dynamic case) I am safeguarding
> against negative consequences of the program behaving in certain non-
> sensical ways. In the other (the static case) I am proving theorems
> about the impossibility of this non-sensical behavior ever happening.

And so conflating the two notions of type (-checking) as a kind of category error ? If so then I see what you mean, and it's a useful distinction, but am unconvinced that it's /so/ helpful a perspective that I would want to exclude other perspectives which /do/ see the two as more-or-less trivial variants on the same underlying idea.

> I acknowledge those questions. I believe they are valid. I don't know
> the answers. As an intuitive judgement call, I tend to think that
> knowing the correctness of these things is of considerable benefit to
> software development, because it means that I don't have as much to
> think about at any one point in time. I can validly make more
> assumptions about my code and KNOW that they are correct. I don't have
> to trace as many things back to their original source in a different
> module of code, or hunt down as much documentation. I also, as a
> practical matter, get development tools that are more powerful.

Agreed that these are all positive benefits of static declarative (more or less) type systems. But then (slightly tongue-in-cheek) shouldn't you be agitating for Java's type system to be stripped out (we hardly /need/ it since the JVM does latent typing anyway), leaving the field free for more powerful or more specialised static analysis ?

> (Whether it's possible to create the same for a dynamically typed
> language is a potentially interesting discussion; but as a practical
> matter, no matter what's possible, I still have better development tools
> for Java than for JavaScript when I do my job.)

Acknowledged. Contrary-wise, I have better development tools in Smalltalk than I ever expect to have in Java -- in part (only in part) because of the late binding in Smalltalk and its lack of insistence on declared types from an arbitrarily chosen type system.

> On
> the other hand, I do like proving theorems, which means I am interested
> in type theory; if that type theory relates to programming, then that's
> great! That's probably not the thing to say to ensure that my thoughts
> are relevant to the software development "industry", but it's
> nevertheless the truth.

Saying it will probably win you more friends in comp.lang.functional than it loses in comp.lang.java.programmer ;-)

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Andreas Rossberg wrote:
[me:]
> > It's worth noting, too, that (in some sense) the type of an object can
> > change over time[*].
>
> No. Since a type expresses invariants, this is precisely what may *not*
> happen. If certain properties of an object may change then the type of
> the object has to reflect that possibility. Otherwise you cannot
> legitimately call it a type.

Well, it seems to me that you are /assuming/ a notion of what kinds of logic can be called type (theories), and I don't share your assumptions. No offence intended.

Actually I would go a little further than that. Granted that whatever logic one wants to apply in order to prove things about a program execution is abstract -- and so timeless -- that does not (to my mind) imply that it must be /static/. However, even if we grant that additional restriction, that doesn't imply that the analysis itself must not be cognisant of time. I see no reason, even in practice, why a static analysis should not be able to see that the set of acceptable operations (for some definition of acceptable) for some object/value/variable can be different at different times in the execution. If the analysis is rich enough to check that the temporal constraints are [not] satisfied, then I don't see why you should want to use another word than "type" to describe the results of its analysis.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
I wrote:
> It would be interesting to see what a language designed specifically to
> support user-defined, pluggable, and perhaps composable, type systems
> would look like.

Since writing that I've come across some thoughts by Gilad Bracha (a Name known to Java and Smalltalk enthusiasts alike) here:

http://blogs.sun.com/roller/page/gbracha?entry=a_few_ideas_on_type

and a long, and occasionally interesting, related thread on LtU:

http://lambda-the-ultimate.org/node/1311

Not much discussion of concrete language design, though.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is a type error?
Chris Smith wrote:
> Some people here seem to be
> saying that there is a universal concept of "type error" in dynamic
> typing, but I've still yet to see a good precise definition (nor a good
> precise definition of dynamic typing at all).

How about this, at least as a strawman:

I think we're agreed (you and I anyway, if not everyone in this thread) that we don't want to talk of "the" type system for a given language. We want to allow a variety of verification logics. So a static type system is a logic which can be implemented based purely on the program text without making assumptions about runtime events (or making maximally pessimistic assumptions -- which comes to the same thing really). I suggest that a "dynamic type system" is a verification logic which (in principle) has available as input not only the program text, but also the entire history of the program execution up to the moment when the to-be-checked operation is invoked.

I don't mean to imply that an operation /must/ not be checked until it is invoked (although a particular logic/implementation might not do so). For instance an out-of-bound array access might be rejected:

    When the attempt was made to read that slot.
    When, in the surrounding code, it first became unavoidable that the read /would/ be reached.
    When the array was first passed to a function which /might/ read that slot.
    ...and so on...

Note that not all errors that I would want to call type errors are necessarily caught by the runtime -- it might go happily ahead never realising that it had just allowed one of the constraints of one of the logics I use to reason about the program to be violated. What's known as an undetected bug -- but just because the runtime doesn't see it, doesn't mean that I wouldn't say I'd made a type error. (The same applies to any specific static type system too, of course.)

But the checks the runtime does perform (whatever they are, and whenever they happen), do between them constitute /a/ logic of correctness. In many highly dynamic languages that logic is very close to being maximally optimistic, but it doesn't have to be (e.g. the runtime type checking in the JVM is pretty pessimistic in many cases).

Anyway, that's more or less what I mean when I talk of dynamically typed languages and their dynamic type systems.

> I suspect you'll see the Smalltalk version of the objections raised in
> response to my post earlier. In other words, whatever terminology you
> think is consistent, you'll probably have a tough time convincing
> Smalltalkers to stop saying "type" if they did before. If you exclude
> "message not understood" as a type error, then I think you're excluding
> type errors from Smalltalk entirely, which contradicts the psychological
> understanding again.

Taking Smalltalk /specifically/, there is a definite sense in which it is typeless -- or trivially typed -- in that, in that language, there are no[*] operations which are forbidden[**], and none which might not be invoked deliberately (e.g. I have code which deliberately reads off the end of a container object -- just to make sure I raise the "right" error for that container, rather than raising my own error). But, on the other hand, I do still want to talk of type, and type system, and type errors even when I program Smalltalk, and when I do I'm thinking about "type" in something like the above sense.

-- chris

[*] I can't think of any offhand -- there may be a few.

[**] Although there are operations which are not possible, reading another object's instvars directly for instance, which I suppose could be taken to induce a non-trivial (and static) type logic.
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Joe Marshall wrote:
> What we need is an FAQ entry for how to talk about types with people
> who are technically adept, but non-specialists. Or alternatively, an
> FAQ of how to explain the term `dynamic typing' to a type theorist.

You could point people at "a regular series on object-oriented type theory, aimed specifically at non-theoreticians." which was published on/in JoT from:

http://www.jot.fm/issues/issue_2002_05/column5

to

http://www.jot.fm/issues/issue_2005_09/column1

Only 20 episodes ! (But #3 seems to be missing.)

Actually the first one has (in section four) a quick and painless overview of several kinds of type theory. I haven't read the rest (yet, and maybe never ;-)

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Andreas Rossberg wrote:
> Chris Uppal wrote:
> >
> > > > It's worth noting, too, that (in some sense) the type of an object
> > > > can change over time[*].
> > >
> > > No. Since a type expresses invariants, this is precisely what may
> > > *not* happen. If certain properties of an object may change then the
> > > type of the object has to reflect that possibility. Otherwise you
> > > cannot legitimately call it a type.
> >
> > Well, it seems to me that you are /assuming/ a notion of what kinds of
> > logic can be called type (theories), and I don't share your
> > assumptions. No offence intended.
>
> OK, but can you point me to any literature on type theory that makes a
> different assumption?

'Fraid not. (I'm not a type theorist -- for all I know there may be lots, but my suspicion is that they are rare at best.)

But perhaps I shouldn't have used the word theory at all. What I mean is that there is one or more logic of type (informal or not -- probably informal) with respect to which the object in question has changed its categorisation. If no existing type /theory/ (as devised by type theorists) can handle that case, then that is a deficiency in the set of existing theories -- we need newer and better ones.

But, as a sort of half-way, semi-formal, example: consider the type environment in a Java runtime. The JVM does formal type-checking of classfiles as it loads them. In most ways that checking is static -- it's treating the bytecode as program text and doing a static analysis on it before allowing it to run (and rejecting what it can't prove to be acceptable by its criteria). However, it isn't /entirely/ static because the collection of classes varies at runtime in a (potentially) highly dynamic way. So it can't really examine the "whole" text of the program -- indeed there is no such thing. So it ends up with a hybrid static/dynamic type system -- it records any assumptions it had to make in order to find a proof of the acceptability of the new code, and if (sometime in the future) another class is proposed which violates those assumptions, then that second class is rejected.

> > I see no reason,
> > even in practice, why a static analysis should not be able to see that
> > the set of acceptable operations (for some definition of acceptable)
> > for some object/value/variable can be different at different times in
> > the execution.
>
> Neither do I. But what is wrong with a mutable reference-to-union type,
> as I suggested? It expresses this perfectly well.

Maybe I misunderstood what you meant by union type. I took it to mean that the type analysis didn't "know" which of the two types was applicable, and so would reject both (or maybe accept both ?).

E.g. if at instant A some object, obj, was in a state where it responds to #aMessage, but not #anotherMessage; and at instant B it is in a state where it responds to #anotherMessage but not #aMessage. In my (internal and informal) type logic, I make the following judgements:

    In code which will be executed at instant A
        obj aMessage.          "type correct"
        obj anotherMessage.    "type incorrect"

    In code which will be executed at instant B
        obj aMessage.          "type incorrect"
        obj anotherMessage.    "type correct"

I don't see how a logic with no temporal element can arrive at all four of those judgements, whatever it means by a union type.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is a type error?
Eliot Miranda wrote:
[me:]
> > Taking Smalltalk /specifically/, there is a definite sense in which it
> > is typeless -- or trivially typed -- in that in that language there are
> > no[*] operations which are forbidden[**],
>
> Come one Chris U. One has to distinguish an attempt to invoke an
> operation with it being carried out. There is nothing in Smalltalk to
> stop one attempting to invoke any "operation" on any object. But one
> can only actually carry-out operations on objects that implement them.
> So, apart from the argument about inadvertent operation name overloading
> (which is important, but avoidable), Smalltalk is in fact
> strongly-typed, but not statically strongly-typed.

What are you doing /here/, Eliot, this is Javaland ? Smalltalk is thatta way -> ;-)

But this discussion has been all about /whether/ it is ok to apply the notion of (strong) typing to what runtime-checked languages do. We all agree that the checks happen, but the question is whether it is reasonable/helpful/legitimate to extend the language of static checking to the dynamic case. (I'm on the side which says yes, but good points have been made against it).

The paragraph you quoted doesn't represent most of what I have been saying -- it was just a side-note looking at one small aspect of the issue from a different angle.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Anton van Straaten wrote:
> In that case, you could say that the conceptual type is different than
> the inferred static type. But most of the time, the human is reasoning
> about pretty much the same types as the static types that Haskell
> infers. Things would get a bit confusing otherwise.

Or any mechanised or formalised type system, for any language. If a system doesn't match pretty closely with at least part of the latent type systems (in your sense) used by the programmers, then that type system is useless.

(I gather that it took, or maybe is still taking, theorists a while to get to grips with the informal type logics which were obvious to working OO programmers.)

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
David Hopwood wrote:
> > But some of the advocates of statically
> > typed languages wish to lump these languages together with assembly
> > language a "untyped" in an attempt to label them as unsafe.
>
> A common term for languages which have defined behaviour at run-time is
> "memory safe". For example, "Smalltalk is untyped and memory safe."
> That's not too objectionable, is it?

I find it too weak, as if to say: "well, ok, it can't actually corrupt memory as such, but the program logic is still apt to go all over the shop"...

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is a type error?
Chris Smith wrote:
[me:]
> > I think we're agreed (you and I anyway, if not everyone in this thread)
> > that we don't want to talk of "the" type system for a given language.
> > We want to allow a variety of verification logics. So a static type
> > system is a logic which can be implemented based purely on the program
> > text without making assumptions about runtime events (or making
> > maximally pessimistic assumptions -- which comes to the same thing
> > really). I suggest that a "dynamic type system" is a verification
> > logic which (in principle) has available as input not only the program
> > text, but also the entire history of the program execution up to the
> > moment when the to-be-checked operation is invoked.
>
> I am trying to understand how the above statement about dynamic types
> actually says anything at all. So a dynamic type system is a system of
> logic by which, given a program and a path of program execution up to
> this point, verifies something. We still haven't defined "something",
> though.

That was the point of my first sentence (quoted above). I take it, and I assumed that you shared my view, that there is no single "the" type system -- that /a/ type system will yield judgements on matters which it has been designed to judge. So unless we either nominate a specific type system or choose what judgements we want to make (today) any discussion of types is necessarily parameterised on the class(es) of judgements involved. So, I don't -- can't -- say /which/ judgements my "dynamic type systems" will make. They may be about nullability, they may be about traditional "type", they may be about mutability...

When we look at a specific language (and its implementation), then we can induce the logic(s) that whatever dynamic checks it applies define. Alternatively we can consider other "dynamic type systems" which we would like to formalise and mechanise, but which are not built into our language of choice.

> We also haven't defined what happens if that verification
> fails.

True, and a good point. But note that it only applies to systems which are actually implemented in code (or which are intended to be so).

As a first thought, I suggest that a "dynamic type system" should specify a replacement action (which includes, but is not limited to, terminating execution). That action is taken /instead/ of the rejected one. E.g. we don't actually read off the end of the array, but instead a signal is raised. (An implementation might, in some cases, find it easier to implement the checks by allowing the operation to fail, and then backtracking to "pretend" that it had never happened, but that's irrelevant here). The replacement action must -- of course -- be valid according to the rules of the type system.

Incidentally, using that idea, we get a fairly close analogy to the difference between strong and weak static type systems. If the "dynamic type system" doesn't specify a valid replacement action, or if that action is not guaranteed to be taken, then it seems that the type system or the language implementation is "weak" in very much the same sort of way as -- say -- the 'C' type system is weak and/or weakly enforced.

I wonder whether that way of looking at it -- the "error" never happens since it is replaced by a valid operation -- puts what I want to call dynamic type systems onto a closer footing with static type systems. Neither allows the error to occur. (Of course, /I/ -- the programmer -- have made a type error, but that's a different thing.)
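As a toy sketch of the "replacement action" idea (the names are invented, and the chosen action is deliberately boring): the rejected out-of-bounds read simply never happens, and a caller-supplied fallback value is used in its place.

class ReplacementActionSketch {
    // A checked read: the out-of-bounds access is never performed; the
    // "replacement action" here is to return a caller-supplied default.
    static int readOr(int[] a, int i, int fallback) {
        if (i >= 0 && i < a.length) {
            return a[i];        // the operation the check licenses
        }
        return fallback;        // the replacement action
    }

    public static void main(String[] args) {
        int[] xs = { 1, 2, 3 };
        System.out.println(readOr(xs, 1, -1));   // 2
        System.out.println(readOr(xs, 9, -1));   // -1 -- no bad read happens
    }
}

Raising a signal, or terminating execution, would do just as well, so long as the substituted action is itself valid by the system's rules.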
> In other words, I think that everything so far is essentially just
> defining a dynamic type system as equivalent to a formal semantics for a
> programming language, in different words that connote some bias toward
> certain ways of looking at possibilities that are likely to lead to
> incorrect program behavior. I doubt that will be an attractive
> definition to very many people.

My only objections to this are:

a) I /want/ to use the word "type" to describe the kind of reasoning I do (and some of the mistakes I make)

b) I want to separate the systems of reasoning (whether formal or informal, static or dynamic, implemented or merely conceptual, and /whatever/ we call 'em ;-) from the language semantics. I have no objection to them being used as part of a language specification, but I don't want to restrict types to that.

> > Note that not all errors that I would want to call type errors are
> > necessarily caught by the runtime -- it might go happily ahead never
> > realising that it had just allowed one of the constraints of one of the
> > logics I use to reason about the program. What's known as an
> > undetected bug -- but just because the runtime doesn't see it, doesn't
> > mean that I wouldn't say I'd made a type error. (The same applies to
> > any specific static type system too, of course.)
>
> In static type system terminology, this quite emphatically does NOT
> apply. There may, of course, be undetected bugs, but they are not
Re: Saying "latently-typed language" is making a category mistake
Pascal Costanza wrote:
> Sorry, obviously I was far from being clear. ACL2 is not
> Turing-complete. All iterations must be expressed in terms of
> well-founded recursion.

How expressive does that end up being for real problems ? I mean obviously in some sense it's crippling, but how much of a restriction would that be for non-academic programming. Could I write an accountancy package in it, or an adventure game, or a webserver, with not too much more difficulty than in a similar language without that one aspect to its type system ?

Hmm, come to think of it those all have endless loops at the outer level, with the body of the loop being an event handler, so maybe only the handler should be required to be guaranteed to terminate.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Andreas Rossberg wrote:
> So what you are suggesting may be an interesting notion, but it's not
> what is called "type" in a technical sense. Overloading the same term
> for something different is not a good idea if you want to avoid
> confusion and misinterpretations.

Frivolous response: the word "type" has been in use in the general sense in which I wish to use it for longer than that. If theorists didn't want ordinary people to abuse their technical terms they should invent their own words not borrow other people's ;-)

Just for interest, here's one item from the OED's entry on "type" (noun):

    The general form, structure, or character distinguishing a particular
    kind, group, or class of beings or objects; hence transf. a pattern or
    model after which something is made.

And it has supporting quotes, here are the first and last:

    1843 Mill Logic iv. ii. §3 (1856) II. 192 When we see a creature
    resembling an animal, we compare it with our general conception of an
    animal; and if it agrees with that general conception, we include it in
    the class. The conception becomes the type of comparison.

    1880 Mem. J. Legge vi. 76 Every creature has a type, a peculiar
    character of its own.

Interestingly the first one is from: John Stuart Mill A System of Logic (but it's not mathematical logic ;-).

OK, I admitted I was being frivolous. But I really don't see why a type system (as understood by type theorists) /must/ have no runtime element. Just as I wouldn't stop calling a type system a "type system" if it were designed to work on incomplete information (only part of the program text is available for analysis) and so could only make judgements of the form "if X then Y".

[JVM dynamic class loading]
> Incidentally, I know this scenario very well, because that's what I'm
> looking at in my thesis :-).
> All of this can easily be handled
> coherently with well-established type machinery and terminology.

I'd be interested to know more, but I suspect I wouldn't understand it :-(

> A type system states
> propositions about a program, a type assignment *is* a proposition. A
> proposition is either true or false (or undecidable), but it is so
> persistently, considered under the same context. So if you want a type
> system to capture temporal elements, then these must be part of a type
> itself. You can introduce types with implications like "in context A,
> this is T, in context B this is U". But the whole quoted part then is
> the type, and it is itself invariant over time.

But since the evolving type-attribution that I'm thinking of also includes time as a static element (at each specific time there is a specific attribution of types), the dynamic logic can also be viewed as invariant in time, since each judgement is tagged with the moment(s) to which it applies.

But there we go. I don't expect you to agree, and though I'll read any reply with interest, I think I've now said all I have to say in this particular subthread. Cheers.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is Expressiveness in a Computer Language
Marshall wrote:
[me:]
> > But, as a sort of half-way, semi-formal, example: consider the type
> > environment in a Java runtime. The JVM does formal type-checking of
> > classfiles as it loads them. In most ways that checking is static --
> > it's treating the bytecode as program text and doing a static analysis
> > on it before allowing it to run (and rejecting what it can't prove to
> > be acceptable by its criteria). However, it isn't /entirely/ static
> > because the collection of classes varies at runtime in a (potentially)
> > highly dynamic way. So it can't really examine the "whole" text of the
> > program -- indeed there is no such thing. So it ends up with a hybrid
> > static/dynamic type system -- it records any assumptions it had to make
> > in order to find a proof of the acceptability of the new code, and if
> > (sometime in the future) another class is proposed which violates those
> > assumptions, then that second class is rejected.
>
> I have to object to the term "hybrid".
>
> Java has a static type system.
> Java has runtime tags and tag checks.

It has both, agreed, but that isn't the full story. I think I explained what I meant, and why I feel the term is justified, as well as I can in the second half of the paragraph you quoted. I doubt if I can do better. Maybe you are thinking that I mean that /because/ the JVM does verification, etc, at "runtime" the system is hybrid ? Anyway that is /not/ what I mean. I'm (for these purposes) completely uninterested in the static checking done by the Java to bytecode translator, javac. I'm interested in what happens to the high-level, statically typed, OO, language called "java bytecode" when the JVM sees it. That language has a strict static type system which the JVM is required to check. That's a /static/ check in my book -- it happens before the purportedly correct code is accepted, rather than while that code is running.

I am also completely uninterested (for these immediate purposes) in the run time checking that the JVM does (the stuff that results in NoSuchMethodException, and the like). In the wider context of the thread, I do want to call that kind of thing (dynamic) type checking -- but those checks are not why I call the JVM's type system hybrid either.

Oh well, having got that far, I may as well take another stab at "hybrid". Since the JVM is running a static type system without access to the whole text of the program, there are some things that it is expected to check which it can't. So it records preconditions on classes which might subsequently be loaded. Those are added to the static checks on future classes, but -- as far as the original class is concerned -- those checks happen dynamically. So part of the static type checking which is supposed to happen, has been postponed to a dynamic check. It's that, and /only/ that which I meant by "hybrid".

Of course, /if/ we grant that runtime checking of the sort done by Smalltalk or Lisp also constitutes a "type system" in some sense that puts it on a par with static type checking, then that would be another, very different, reason to claim that Java had a hybrid type system (though, in fact, I'd rather claim that it had two independent type systems). But that's the bigger question under discussion here and I wasn't trying to beg it by using the word "hybrid".

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: What is a type error?
Chris Smith wrote:
> But then, I
> dislike discussion of strong/weak type systems in the first place. It
> doesn't make any sense to me to say that we verify something and then
> don't do anything if the verification fails. In those cases, I'd just
> say that verification doesn't really exist or is incorrectly
> implemented.
Hmm, that seems to make what we are doing when we run some tests dependent on
what we do when we get the results. If I run a type inference algorithm on
some of my Smalltalk code, and it tells me that something is type-incorrect,
then I think I'd like to be able to use the same words for the checking
regardless of whether, after due consideration, I decide to change my code, or
that it is OK as-is. Minor point, though...
Anyway:
> if (x != 0) y = 1 / x;
> else y = 9;
>
> is not all that much different from (now restricting to Java):
>
> try { y = 1 / x; }
> catch (ArithmeticException e) { y = 9; }
>
> So is one of them a use of a dynamic type system, where the other is
> not?
My immediate thought is that the same question is equally applicable (and
equally interesting -- i.e. no simple answer ;-) for static typing. Assuming
Java's current runtime semantics, but positing different type checkers, we can
get several answers.
A] Trivial: neither involves the type system at all. What we are seeing
is a runtime check (not type check) correctly applied, or made unnecessary by
conditional code. And, obviously we can take the same line for the "dynamic
type checking" -- i.e. that no type checking is involved.
B] More interesting, we could extend Java's type system with a NotZero
numeric type (or rather, about half a dozen such), and decree that 1 / x was
only legal if
x : NotZero
Under that type system, neither of the above would be type legal, assuming x is
declared int. Or both would be legal if it was declared NotZero -- but in both
cases there'd be a provably unnecessary runtime check. The nearest "dynamic
type system" equivalent would be quite happy with x : int. For the first
example, the runtime checks would never fire (though they would execute). For
the second the runtime "type" might come into play, and if it did, then the
assignment of 9 would take place /because/ the illegal operation (which
can be thought of as casting 0 to NotZero) had been replaced by a specific
legal one (throwing an ArithmeticException in this case).
C] A different approach would be to type the execution, instead of the value,
so we extend Java's type system so that:
1 / x
had type (in part) <may throw ArithmeticException>. The
first example as a whole then would also have (in part) that type, and the
static type system would deem that illegal. The second example, however, would
be type-legal. I hope I don't seem to be fudging the issue, but I don't really
think this version has a natural "dynamic type system" equivalent -- what could
the checker check ? It would see an attempt to throw an exception from a place
where that's not legal, but it wouldn't make a lot of sense to replace it with
an IllegalExceptionException ;-)
D] Another possibility would be a static type checker after the style of [B]
but extended so that it recognised the x != 0 guard as implicitly giving x :
NotZero. In that case, again, the first example is static type-correct but the
second is not. In this case the corresponding "dynamic type system" could, in
principle, avoid testing x for zero the second time. I don't know whether that
would be worth the effort to implement. In the second example, just as in
[B], the reason the program doesn't crash when x == 0 is that the "dynamic type
system" has required that the division be replaced with a throw.
One could go further (merge [B] and [C], or [C] and [D] perhaps), but I don't
think it adds much, and anyway I'm getting a bit bored (And also getting
sick of putting scare quotes around "dynamic type system" every time ;-)
Personally, of the above, I find [B] most appealing.
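For what it's worth, something in the spirit of [B] can be approximated even in
unextended Java by a wrapper class whose constructor performs the one runtime
check -- NotZero below is invented for the purpose, not a real library type,
and the static checker described above remains imaginary:

final class NotZero {
    private final int value;
    private NotZero(int value) { this.value = value; }

    static NotZero of(int x) {
        if (x == 0) throw new ArithmeticException("0 is not a NotZero");
        return new NotZero(x);      // the one place the check happens
    }

    int get() { return value; }
}

class NotZeroDemo {
    public static void main(String[] args) {
        int x = 5;
        // As in [B]: with the guard present, the check inside of() is
        // provably redundant, but it executes anyway.
        int y = (x != 0) ? 1 / NotZero.of(x).get() : 9;
        System.out.println(y);
    }
}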
-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: Saying "latently-typed language" is making a category mistake
Chris Smith wrote:
> Perhaps, if you wanted to be quite careful about your distinctions, you
> may want to choose different words for the two. However, then you run
> into that same pesky "you can't choose other people's terminology" block
> again.

There may be more flexibility in this area than you'd think. I get the impression that (after only a few years' hard work) the terms "weak typing" and "strong typing", used as synonyms for what we are here calling "dynamic typing" and "static typing", are starting to die out. So be of good hope !

I may yet be persuaded to talk of "static type checks" vs "dynamic checks" (with a slight, veiled, sneer on the word "type" -- oh, so very limiting ;-)

> I'd be more careful in the analogy, there. Since type theory (as
> applied to programming languages, anyway) is really just a set of
> methods of proving arbitrary statements about program behaviors,

Not "arbitrary" -- there is only a specific class of statements that any given type system can address. And, even if a given statement can be proved, then it might not be something that most people would want to call a statement of type (e.g. x > y).

It seems to me that most (all ? by definition ??) kinds of reasoning where we want to invoke the word "type" tend to have a form where we reduce values (and other things we want to reason about) to equivalence classes[*] w.r.t the judgements we wish to make, and (usually) enrich that structure with more-or-less stereotypical rules about which classes are substitutable for which other ones. So that once we know what "type" something is, we can short-circuit a load of reasoning based on the language semantics, by translating into type-land where we already have a much simpler calculus to guide us to the same answer.

([*] Or representative symbols, or... The point is just that we throw away the details which don't affect the judgements.)

> a
> corresponding "informal type theory" would just be the entire set of
> informal reasoning by programmers about their programs.

I think that, just as for static theorem proving, there is informal reasoning that fits the "type" label, and informal reasoning that doesn't. I would say that if informal reasoning (or explanations) takes a form more-or-less like "X can [not] do Y because it is [not] a Z", then that's taking advantage of exactly the short-cuts mentioned above, and I would want to call that informal type-based reasoning. But I certainly wouldn't call most of the reasoning I do type-based.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: languages with full unicode support
Joachim Durchholz wrote:
> > This is implementation-defined in C. A compiler is allowed to accept
> > variable names with alphabetic Unicode characters outside of ASCII.
>
> Hmm... that code would be nonportable, so C support for Unicode is
> half-baked at best.

Since the interpretation of characters which are yet to be added to Unicode is undefined (will they be digits, "letters", operators, symbols, punctuation ?), there doesn't seem to be any sane way that a language could allow an unrestricted choice of Unicode in identifiers. Hence, it must define a specific allowed sub-set. C certainly defines an allowed subset of Unicode characters -- so I don't think you could call its Unicode support "half-baked" (not in that respect, anyway). A case -- not entirely convincing, IMO -- could be made that it would be better to allow a wider range of characters.

And no, I don't think Java's approach -- where there /is no defined set of allowed identifier characters/ -- makes any sense at all :-(

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: (was Re: Xah's Edu Corner: Criticism vs Constructive Criticism)
Tagore Smith wrote:
> It's much easier to use a killfile than to complain to an ISP, and I
> think that that should be the preferred response to messages you don't
> like.

I'm inclined to agree. The problem is not Xah Lee (whom I have killfiled), but the people who insist on making my killfile useless by posting loads of follow-ups saying things amounting to "stop this insane gibberish". Every bloody time. Wake up, people ! You are not the victims, you are the problem. Shut up, /please/.

(And yes, I do realise that I'm adding to the problem here, and indeed that I'm not following my own advice, nor heeding my own request.)

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: A critic of Guido's blog on Python's lambda
Bill Atkins wrote:
> But why should I have to worry about any of this? Why can't I do:
>
> (with-indentation (pdf (+ (indentation pdf) 4))
>   (out-header)
>   (out-facts))
>
> and then within, say out-facts:
>
> (with-indentation (pdf (+ (indentation pdf) 4))
>   (write pdf "some text"))
>
> More readable, and no bookkeeping to worry about. This is great! And
> here's the macro: [...]

Can you explain to a non-Lisper why macros are needed for this ? I'm a Smalltalker, and Smalltalk has no macros, nor anything like 'em, but the equivalent of the above in Smalltalk is perfectly feasible, and does not require a separate layer of semantics (which is how I think of true macros).

    aPdf withAdditionalIndent: 4 do: [ aPdf writeHeader; writeFacts ].

and

    aPdf withAdditionalIndent: 4 do: [ aPdf write: '... some text...' ].

Readers unfamiliar with Smalltalk may not find this any easier to read than your Lisp code, but I can assure them that to any Smalltalker that code would be both completely idiomatic and completely transparent. (Although I think a fair number of Smalltalkers would choose to use a slightly different way of expressing this -- which I've avoided here only in order to keep things simple).

> Macros rock.

I have yet to be persuaded of this ;-)

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: A critic of Guido's blog on Python's lambda
Alex Martelli wrote:
> I think it's reasonable to make a name a part of functions, classes and
> modules because they may often be involved in tracebacks (in case of
> uncaught errors): to me, it makes sense to let an error-diagnosing
> tracebacks display packages, modules, classes and functions/methods
> involved in the chain of calls leading to the point of error _by name_.
It seems to me that there's something of a circularity here. Either in your
logic if applied to language design in general, or as a self-perpetuating habit
when applied to Python in particular.
The assumption is that functions are in some sense rather "big" things -- that
each function performs a well-defined action which can be understood in
isolation. That may well be true in idiomatically written Python (I've never
cared for the look of the language myself, so I don't know what's considered
"normal"), but it isn't true in general. With the assumption, it makes sense
to say that every function /could/ have a name, and so, why not /give/ it a
name ? But without the assumption, when many little anonymous functions are
used, the idea of giving them all names appears pettifogging to the point of
idiocy. If the language and/or culture expects that all/most functions will be
named, then "little" functions (in my sense) won't be used; hence the
self-perpetuating habit. But I don't think that the underlying logic supports
that habit independently of its own self-perpetuating nature.
E.g. consider the Smalltalk code (assumed to be the body of a method):
aCollection
do: [:each |
each > 0 ifTrue: [^ true]].
^ false.
which iterates over a collection checking to see if any element is > 0. If so
then the method answers true ("^" -- spelled "return" in Java), otherwise it
answers false. In that code,
[^ true]
is syntactically and semantically an anonymous function, which is only invoked
if the antecedent is true (in point of fact the compiler inlines that function
away but I don't think that's relevant here). The passage beginning
[:each | ...
and reaching to the matching ] is also an anonymous function with one parameter
(each) which is applied to each element of the collection in turn. (In this
case it really is an anonymous function, even at the implementation level.)
What "name" would you give to either of them ? I don't believe that /any/ name
is possible, and certainly that no name is desirable.
In my working Smalltalk environment today, there are 60099 methods defined
across 3369 classes. In that codebase there are 38112 anonymous functions. Do
you really want to have to find names for them all ?
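For comparison, here is roughly how the Smalltalk example above comes out in today's Java, where the anonymous function has to be written as an anonymous inner class (the IntPredicate interface is declared here only to keep the sketch self-contained). The body passed to any() is exactly the sort of "little" function for which no name seems either possible or desirable.

interface IntPredicate { boolean test(int x); }

class AnySatisfies {
    // Answers true if any element of xs satisfies the predicate.
    static boolean any(int[] xs, IntPredicate p) {
        for (int x : xs) {
            if (p.test(x)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] xs = { -3, 0, 7 };
        boolean result = any(xs, new IntPredicate() {
            public boolean test(int x) { return x > 0; }   // the nameless bit
        });
        System.out.println(result);   // true
    }
}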
-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: A critic of Guido's blog on Python's lambda
Pisin Bootvong wrote:
> Slippery Slope::
>   "Argumentation that A is bad, because A might lead to B, and B
>    to C, and we all know C is very bad."

For the Slippery Slope criticism to be applicable, there would have to be some suggestion that removing anonymous functions /would actually/ (tend to) lead to removing anonymous values in general. There was no such suggestion.

The form of the argument was more like reasoning by analogy: if context A has features like context B, and in B some feature is known to be good (bad) then the analogous feature in A is also good (bad). In that case an attack on the validity of the argument would centre on the relevance and accuracy of the analogy.

Alternatively the argument might be seen as a generalisation/specialisation approach. Functions are special cases of the more general notion of values. We all agree that anonymous values are a good thing, so anonymous functions should be too. If you parse the argument like that, then the attack should centre on showing that functions have relevant special features which are not shared by values in general, and so that we cannot validly deduce that anonymous functions are good.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: A critic of Guido's blog on Python's lambda
Petr Prikryl wrote:
> for element in aCollection:
>     if element > 0:
>         return True
> return False

[I'm not sure whether this is supposed to be an example of some specific language (Python ?) or just a generic illustration. I'll take it as the latter, since it makes my point easier to express. I'll also exaggerate, just a little...]

But now, in order to hack around the absence of a sensible and useful feature -- /only/ in order to do so -- you have added two horrible new complications to your language. You have introduced a special syntax to express conditionals, and (worse!) a special syntax to express looping. Not only does that add a huge burden of complexity to the syntax, and semantics, of the language (and, to a lesser extent, its implementation), but it also throws out any semblance of uniformity. Once you've started down that route then you've lost all hope of user-defined control structures which operate on a par with the built-in ones.

And please note that "control structures" are not limited to C-style "for" "while" etc. E.g. in Java there's an unresolved, and irresolvable, tension between whether a failing operation should return an error condition or throw an exception -- the problem is that exceptions are (IMO and most other peoples' here) intended for exceptional conditions, and for many operations "failure" is not exceptional. One, highly effective, resolution to the problem is for the operation to be parameterised with the action to take if it fails (defaulting to code to raise an exception). In Java that approach, though technically possible, is totally infeasible due to the pathetic overuse of unwanted syntax, and the underuse of simple and powerful primitives.

E.g. can you add three-way comparisons (less-than, same-as, greater-than) to, say, Python with corresponding three-way conditional control structures to supplement "if" etc ? Are they on a semantic and syntactic par with the existing ones ? In Smalltalk that is trivial (too trivial to be particularly interesting, even), and I presume the same must be true of Lisp (though I suspect you might be forced to use macros).

I should say that if your example /is/ in fact Python, then I believe that language allows fairly deep hooks into the execution mechanism, so that at least the "for" bit can be mediated by the collection itself -- which is better than nothing, but nowhere near what I would call "good".

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: A critic of Guido's blog on Python's lambda
Bill Atkins wrote:
> My favorite macro is ITERATE [...]

Thanks for the examples.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
Re: John Bokma harassment
Fred Gilham wrote:
> BTW, one time I tried a little social engineering to get rid of an
> irrelevant cross-posted thread. I replied to the messages in the
> thread (an irrelevant political thread posted in rec.audio.tubes) with
> (somewhat) inflammatory replies but deleted my newsgroup from the
> follow-up line. I kept doing this for a day or two to every message
> that showed up in rec.audio.tubes. The result was that the threads
> actually died out pretty fast in that newsgroup.

Clever idea. Evil, but clever ;-)

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
OT: Quote ? [was: John Bokma harassment]
[apologies to the whole flaming crowd for sending this to the whole flaming crowd...]

Geoffrey Summerhayes wrote:
> After you kill Navarth, will it be nothing but gruff and deedle
> with a little wobbly to fill in the chinks?

Where does that come from ? It sounds like a quote, and Navarth is a Jack Vance name (and /what/ a character), but I don't remember the rest of it occurring in Vance.

-- chris
--
http://mail.python.org/mailman/listinfo/python-list
