On 09/01/2018 03:47 PM, Everlast wrote:

It's because programming is done completely wrong. All we do is program like it's 1952, all wrapped up in a nice box and bow tie. We should have tools and a compiler design that all work interconnected, with complete graphical interfaces that aren't based in the text GUI world (an IDE is just a fancy text editor). I'm talking about 3D code representation using graphics, so projects can be navigated visually in a dynamic way, and many other things.

The current programming model is reaching diminishing returns. Programs cannot get much more complicated, because the environment in which they are written cannot support them (complexity != size).

We have amazing tools available to do amazing things but programming is still treated like punch cards, just on acid. I'd like to get totally away from punch cards.

A total rewrite of all aspects of programming should be done: from "object" files (no more; they are not needed, at least not in their current form), to the IDE (it should be more like a video game in its use of graphics, and provide extensive information and debugging support all a fingertip away), to the tools, to the design of applications, etc.

One day we will get there...


GUI programming has been attempted many times. (See Scratch for one of the latest, and possibly most successful, attempts.) But there are real, practical reasons it has never made significant inroads (yet).

There are really two main, but largely independent, aspects to what you're describing: Visual representation, and physical interface:

A. Visual representation:
-------------------------

By visual representation, I mean "some kind of text, or UML-ish diagrams, or 3D environment, etc".

What's important to keep in mind here is: The *fundamental concepts* involved in programming are inherently abstract, and thus equally applicable to whatever visual representation is used.

If you're going to make a diagram-based or VR-based programming tool, it will still be using the same fundamental concepts that are already established in text-based programming: Imperative loops, conditionals and variables. Functional/declarative immutability, purity and higher-order functions. Encapsulation. Pipelines (like ranges). Etc. And indeed, all GUI-based programming tools have worked this way. Because how *else* are they going to work?
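To make that concrete, here's a minimal sketch (in Python, purely for illustration; the same point holds for D-style ranges or any other language). The pipeline below is one of those fundamental concepts: a diagram-based editor would depict the very same filter/map/reduce stages as boxes and arrows, changing only the representation, not the concept.

```python
# A filter -> map -> reduce pipeline, expressed as text.
# A visual editor would draw these same three stages as connected nodes.
data = range(1, 11)

# Keep the even numbers, square them, then sum the results.
result = sum(x * x for x in data if x % 2 == 0)

print(result)  # 4 + 16 + 36 + 64 + 100 = 220
```

Whether this appears as a line of text or as a chain of nodes on a canvas, the underlying concept (a pipeline of transformations) is identical.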

If what you're really looking for is something that replaces or transcends all of those existing, fundamental programming concepts, then what you're *really* looking for is a new fundamental programming concept, not a visual representation. And once you DO invent a new fundamental programming concept, being abstract, it will again be applicable to a variety of possible visual representations.

That said, it is true that some concepts may be more readily amenable to certain visual representations than others. But, at least for all the currently-known concepts, any combination of concept and representation can certainly be made to work.

B. Physical interface:
----------------------

By this I mean both actual input devices (keyboards, controllers, pointing devices) and also the mappings from their affordances (ie, what you can do with them: push button x, tilt stick's axis Y, point, move, rotate...) to specific actions taken on the visual representation (navigate, modify, etc.)

The mappings, of course, tend to be highly dependent on the visual representation (although, theoretically, they don't strictly HAVE to be). The devices themselves, less so: For example, many of us use a pointing device to help us navigate text. Meanwhile, 3D modelers/animators find it's MUCH more efficient to deal with their 3D models and environments by including heavy use of the keyboard in their workflow instead of *just* a mouse and/or Wacom alone.

An important point here is that a keyboard tends to be much more efficient across a much wider range of interactions than, say, a pointing device like a mouse or touchscreen. There are some things a mouse or touchscreen is better at (ie, pointing, and learning curve), but even on a touchscreen, pointing takes more time than pushing a button, and is somewhat less composable with additional actions than, again, pushing/holding a key on a keyboard.

This means that while pointing, and indeed, direct manipulation in general, can be very beneficial in an interface, placing too much reliance on it will actually make the user LESS productive.

The result:
-----------

For programming to transcend the current text/language model, *without* harming either productivity or programming power (as all attempts so far have done), we will first need to invent entirely new high-level concepts which are simultaneously both simple/high-level enough AND powerful enough to obsolete most of the nitty-gritty lower-level concepts we programmers still need to deal with on a regular basis.

And once we do that, those new super-programming concepts (being the abstract concepts that they inherently are) will still be independent of visual representation. They might finally be sufficiently powerful AND simple that they *CAN* be used productively with graphical non-text-language representation...but they still will not *require* such a graphical representation.

That's why programming is still "stuck" in last century's text-based model: Because it's not actually stuck: It still has significant deal-winning benefits over newer developments. And that's because, even when "newer" does provide improvements, newer still isn't *inherently* superior on *all* counts. That's a fact of life that is easily, and frequently, forgotten in fast-moving domains.
