What are the Next Programming Models?
jg21 writes "In this opinion piece, Simeon Simeonov contemplates what truly new programming models have emerged recently, and nominates two: RIAs and what he calls 'composite applications' (i.e. using Java, .NET or any other programming language). He notes that Microsoft will be trying to achieve RIAs in Avalon, but that it's late out of the gate. He also cites David Heinemeier Hansson's Ruby on Rails project as showing great promise. 'As both a technologist and an investor I'm excited about the future,' Simeonov concludes. It's a thoughtful piece, infectious in its quiet enthusiasm. But what new models are missing from his essay?"
What about Small (Score:3, Interesting)
Don't forget the ways of Apple (Score:3, Interesting)
Web as new platform (Score:3, Interesting)
The trend towards RIAs/webapps has traditionally been restricted to those in a database-centric role, but with the increasing use of AJAX and the like, the webapp is pushing further into the desktop application space. Obviously the centralization and server-side nature of the applications helps deployment and maintenance, but developers are basically trading the platform of an operating system for the platform of a web browser, with all the intricacies and compatibility issues that follow both.
Webapps are a good direction to take for data access apps, but where the line becomes less clear cut and extreme amounts of javascript/dhtml are needed to achieve behaviours, the apps can become somewhat clunky and difficult to use. To me, it's essential that the designers of today's webapps realise the limitations of what they're working with and when to use traditional desktop apps.
funny AND interesting, but yeah FP... (Score:5, Interesting)
See Beating the averages [paulgraham.com] for a well-written and thoughtful essay.
In a nutshell, languages themselves vary in power. No one disputes that. All things being equal, you should generally choose the most powerful language you can, all the time. As we move more and more to server-hosted software, your choice of language is incredibly important because a) it's your choice, not forced on you by being the language of the OS and b) it can be a huge competitive advantage.
Matz (Ruby's creator) acknowledges ripping off ideas from Lisp (but putting a friendlier face to it). Python is Lispy. Javascript has been called Lisp in C's clothing. These are all functional languages, or can be used functionally.
Graham noted how all languages are trending more towards Lisp in terms of features (see the essay linked above). Want further proof? C# 2.0 is getting lexical closures. Innovation from Microsoft! These were available in Lisp for 30 years, javascript for 10 (since it was created), they're in Perl 5, Ruby, I can go on...
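For anyone who hasn't bumped into them yet, a lexical closure is just a function that captures variables from the scope it was created in. A minimal sketch, written here in Haskell (any of the languages above would do just as well):

    -- makeAdder returns a new function that "closes over" n:
    -- the returned function still remembers n after makeAdder has returned.
    makeAdder :: Int -> (Int -> Int)
    makeAdder n = \x -> x + n

    main :: IO ()
    main = do
      let addFive = makeAdder 5
      print (addFive 10)                    -- 15
      print (map (makeAdder 3) [1, 2, 3])   -- [4,5,6]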
If languages continue to become higher and higher level, wouldn't we need to investigate this weird AI language from 1958 and see what features it doesn't have in order to do more meaningful research? 'cause these days, all the "new" features of today's languages are decades old...
Functional Programming: Haskell (Score:5, Interesting)
Functional programming greatly simplifies the task of the programmer by removing execution order from the things that programmers have to keep track of. Just as garbage collection in Java got rid of the need to recycle memory manually, so in Haskell the execution order is a matter for the compiler to optimise rather than for the programmer to worry about.
Historically functional programming has had problems doing IO: languages have had to admit impure side effects to do IO. Haskell has a wonderful solution to this problem, which unfortunately this post is too small to contain (really: go see!).
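Not to spoil where that pointer leads, but the flavour of it is that IO actions are ordinary first-class values whose sequencing is explicit, while everything else stays pure and order-free. A toy sketch (mine, not the parent's explanation):

    -- A pure function: no side effects, so the compiler is free to pick
    -- the evaluation order (or skip work entirely, thanks to laziness).
    average :: [Double] -> Double
    average xs = sum xs / fromIntegral (length xs)

    -- An IO action: the type advertises the side effects, and do-notation
    -- pins down the order in which they happen.
    main :: IO ()
    main = do
      putStrLn "How many readings?"
      n <- readLn :: IO Int
      print (average (map fromIntegral [1 .. n]))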
Paul.
Erlang (Score:3, Interesting)
In any case, I was left with a feeling of "yeah, I like this and would use it again, but it's not something that is going to wipe the floor with older models".
Also, I have some doubts as to how much FP "Scales Down" [dedasys.com] in the sense that it initially confuses people who are used to "normal" languages. I think perhaps FP might be more successful if someone were to take a more bottom up approach - let it "escape from the ivory tower". Languages like Erlang are doing this already, and of course people will be able to provide links to this or that Haskell or ML system used commercially, but to really make inroads, you've got to bridge the gap...
Just some musings.
PLOP (Score:4, Interesting)
Most programs can be written practically in most languages, since all you really need is "if", "decrement" and "goto". Some problems aren't a good fit for a given language. That's why there's more than one.
Any program that breaks its problem into chunks is in effect creating its own mini-language. Whether you call it Abstract Data Typing or Object Orientation or Functional Programming or even Top Down Design, what it comes down to is dividing the problem into manageable chunks and working with those chunks until done.
I wish all CS students were taught from day one, or maybe day fifteen, how to create their own programming language. Usually you have to take a compilers course to get that.
Creating a new language is not that hard. It gets a bad rap because people think they have to write a backend for a given architecture, but writing the backend to generate C++ or some other HLL is just as good, since they've already done the heavy lifting and you can automate the compile chain with your favorite make tool.
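As a rough illustration of how little machinery that takes, here's a toy sketch in Haskell (emitting C rather than C++, and obviously nowhere near a real compiler):

    -- A four-construct toy language and a "backend" that just emits C.
    data Expr = Num Int
              | Var String
              | Add Expr Expr
              | Mul Expr Expr

    emitC :: Expr -> String
    emitC (Num n)   = show n
    emitC (Var v)   = v
    emitC (Add a b) = "(" ++ emitC a ++ " + " ++ emitC b ++ ")"
    emitC (Mul a b) = "(" ++ emitC a ++ " * " ++ emitC b ++ ")"

    -- Prints: int result = (x + (2 * y));
    main :: IO ()
    main = putStrLn ("int result = " ++ emitC (Add (Var "x") (Mul (Num 2) (Var "y"))) ++ ";")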
Domain Specific Languages/Language Oriented Prgmng (Score:3, Interesting)
I think DSLs are going to radically change the way that people code. DSLs potentially provide the meta-programming capabilities of LISP with the transparency and idiot-proofing of a language like Java. We may even see a hierarchy of software engineering develop, with one type of high-level coder developing DSLs and others able to use these languages easily within their own areas of expertise. For more, check the following links (and see the small sketch after them):
http://www.jetbrains.com/mps// [jetbrains.com]
http://www.martinfowler.com/articles/languageWork
http://intentsoft.com/ [intentsoft.com]
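To give a hand-wavy taste of the embedded-DSL idea (a made-up order-rules mini-language sketched in Haskell; the projects linked above go much further, with real tooling and editors):

    -- A miniature "order rules" DSL: the rules read close to the spec,
    -- while the combinators underneath are just ordinary functions.
    data Order = Order { total :: Double, items :: Int, vip :: Bool }

    type Rule = Order -> Bool

    totalOver :: Double -> Rule
    totalOver amount order = total order > amount

    itemsAtLeast :: Int -> Rule
    itemsAtLeast n order = items order >= n

    isVip :: Rule
    isVip = vip

    orAlso, andAlso :: Rule -> Rule -> Rule
    orAlso  f g order = f order || g order
    andAlso f g order = f order && g order

    -- The "program" written in the DSL:
    freeShipping :: Rule
    freeShipping = isVip `orAlso` (totalOver 100 `andAlso` itemsAtLeast 2)

    main :: IO ()
    main = print (freeShipping (Order { total = 120, items = 3, vip = False }))  -- True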
Good Design - but sometimes it emerges... (Score:2, Interesting)
Good Design (aka Big Design Up Front [c2.com]) is very effective when the problem domain is well understood or there exists a reasonable number of known solutions to choose from. Text editing is a good example of this: people have been writing text editors for over 40 years, so there shouldn't be too many surprises and there are lots of examples. (Telephone signal exchange is similar...) For these problems a very formal approach should work well and result in a well documented and well designed system.
Other problems, usually in newer fields of endeavor, lend themselves to more dynamic software creation strategies with less stringent design phases such as hacking, exploratory programming, prototyping and good old XP [c2.com]. It's very hard to write requirements, functional specifications or even UML diagrams for a system that does things nobody has even dreamed about.
In an ideal world both approaches will result in a good design. What started as a hack can turn into a prototype and evolve into a design; the trick is to document it all... but that would require infinite time and infinite resources. This might occur in large open source projects, where the user and development communities are large enough to represent statistical universes, but in the corporate world, where the bottom line drives everything and time and resources are therefore limited, shortcuts are taken. Sometimes this results in brilliantly designed but undocumented applications, but just as often the result is a giant ball of mud that will scare the willies out of the first intern or student hired to maintain the code.
Re:Multi-Core (Score:3, Interesting)
These have been around for ages, but mainly for scientific computing. For example Fortran 90 and later versions, but there are also variants of C++ and others. Usually they take advantage of obvious parallelism in the data, for example matrix multiplication, and make the processors handle the separate bits without bothering the programmer with threads etc. It's also the kind of computation that takes place in graphics cards with their multiple pipelines.
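For a feel of what it looks like when the parallelism lives in the data rather than in explicit threads, here's a rough sketch in Haskell using the parallel package's evaluation strategies (a Fortran 90 array expression would make the same point):

    import Control.Parallel.Strategies (parList, rdeepseq, using)
    import Data.List (transpose)

    type Matrix = [[Double]]

    -- Naive matrix product. Each result row is independent of the others,
    -- so a strategy can evaluate the rows in parallel; the algorithm itself
    -- never mentions threads. (Build with -threaded and the parallel package.)
    mmult :: Matrix -> Matrix -> Matrix
    mmult a b =
        [ [ sum (zipWith (*) row col) | col <- cols ] | row <- a ]
          `using` parList rdeepseq
      where
        cols = transpose b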
I don't see any easy way to do the same for general programming. For example, separate threads for user interface and the actual processing is a good idea, but a very high-level one, not the kind of thing that would be done automatically by a compiler.
I hope the existing parallel programming languages will become more widely used for the computationally intensive parts. It seems so silly that home computers have focused on pushing single-processor performance for all this time, while 'real computer science' has been reaping the benefits of parallel processing for years.
Re:Don't forget the ways of Apple (Score:3, Interesting)
NeXTStep was a lot more primitive - no Foundation, just AppKit and C-based libraries.
It really is depressing watching demos of C# and .NET/Mono, and seeing them being touted as new and shiny, and seeing how far they are behind where NeXT was a decade ago.
*sigh* (Score:1, Interesting)
Ruby on Rails is great, not because it's something NEW, but because it wraps up all these best practices with a friendly face.
Creating simple domain-specific languages is how talented programmers do things already, with powerful languages like Lisp. However, languages like PHP, Python, and Java TOOK AWAY this ability because language designers thought it was "unnecessary" or "too complicated" for the average programmer.
Along comes Ruby, which gives you back some of that power. And a talented programmer took it and "did the right thing" by creating a tight domain-specific language. Now everybody is so excited. Great, whatever makes programs simpler and more expressive is fine by me.
But can we please stop talking about the "next" great thing, when hardly anybody remembers the great things from the past?
If there's any problem in this industry, it's that programmers have ZERO knowledge of fundamentals. Instead of standing on the shoulders of giants, they constantly re-invent wheels.
Re:Afraid of parenthesis? Stay away from XML! (Score:3, Interesting)
Some very sick people [apache.org] disagree...
Re:We need a way to avoid duplicating work (Score:3, Interesting)
Formal method advocates (and I am one!) need to realise that claiming formal methods as the ultimate solution is actually counterproductive. I would suggest you'll find far more converts by simply arguing that formal methods and formal specification are fabulous tools that developers ought to learn for those projects that happen to need or benefit from them. Once people actually learn a bit about it they'll see how many projects could benefit from some level of formal specification that aren't currently using it.
"Formal methods aren't right for everything, but if you're a serious software engineer you owe it to yourself to learn how to use them for when you strike a project that can benefit"
is a much better tack to take IMHO.
Jedidiah.
Django (Python) (Score:2, Interesting)
http://it.slashdot.org/article.pl?sid=05/08/02/00
Re:The best web dev framework you've never heard o (Score:4, Interesting)
Re:Mod parent up. (Score:3, Interesting)
I just wrote some Haskell code to manipulate formal power series [wikipedia.org]. One thing that was cool was that I was able to take propositions that I could prove mathematically and simply rewrite them as code. It was pretty mind-blowing. Things that were traditionally hard [wolfram.com] to compute became one-liners because of lazy evaluation. On the other hand, almost trivial changes to the code that still resulted in true mathematical propositions didn't result in working code. Essentially the problem was to do with what precisely depended on what. Pick the wrong mathematical proposition and a1 depends on a2, which depends on a3, and so on up to infinity, so the code never terminates. In fact, it's very easy to write code that looks provably correct but doesn't terminate. I just came across a paper on this very subject - the fact that some things we take for granted in mathematics aren't true of guaranteed-to-terminate functional programs. Pity I can't find the link.
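For readers who haven't seen the trick, here's a cut-down sketch of the general idea (mine, not the parent's code): a formal power series is just its lazy, infinite coefficient list, and the algebra falls out in a few lines.

    -- A formal power series, represented by its (infinite) coefficient list.
    type Series = [Rational]

    addS :: Series -> Series -> Series
    addS = zipWith (+)

    -- Cauchy product: (a0 + x*F) * g  =  a0*b0 : (a0*bs + F*g)
    mulS :: Series -> Series -> Series
    mulS (a : as) g@(b : bs) = a * b : addS (map (a *) bs) (mulS as g)
    mulS _        _          = []

    -- The geometric series 1/(1-x) = 1 + x + x^2 + ...
    geometric :: Series
    geometric = 1 : geometric

    -- take 6 (mulS geometric geometric) gives the first coefficients of
    -- 1/(1-x)^2, namely [1,2,3,4,5,6], even though the lists are infinite.
    main :: IO ()
    main = print (take 6 (mulS geometric geometric))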
Re:Things will always change (Score:3, Interesting)
Math has nothing to do with it. COSA solves the nastiest problem of complex software programs: blind code or unresolved data dependencies. That is, something is modified in one part of the program unbeknownst to another. This makes it almost impossible to modify complex code without introducing dangerous and unforeseen side effects. The second greatest cause of unreliability is non-deterministic timing. COSA solves this problem through synchronicity.
All other programming problems are, as Fred Brooks put it, accidental, and can be easily dealt with using traditional methods.
Re:Good Design (Score:3, Interesting)
I can agree with this, as long as you actually do throw it away once the learning process is done. I realize you said this yourself, but it bears repeating. Too many people are afraid to "throw away all that effort" (translation - don't make me think about it again, or don't make me pay for it again) and want to reuse a prototype as production. It takes a lot to talk them out of it.
A good development plan can tolerate a learning curve to get the job done right. A bad development plan is just trying to push some crap out the door, hopefully AFTER it stops stinking (but that isn't a requirement).
I personally worked on a hw/sw development project where we did build a prototype just to learn the real problem. Cost about $1.5M. We learned a lot and sent it off to graceful retirement, then built two full production designs for about $3M each. They worked great, even got shown on the History Channel (for about 3 seconds :-) ) Sure enough, management and the customer came back and asked how we could reuse the prototype. Threw another $100k at it for upgrades and never did get matching performance. It was like trying to upgrade an old VW Beetle into a Porsche 911 or 959. They just couldn't let a good decision that "wastes" money go.
Re:Afraid of parenthesis? Stay away from XML! (Score:5, Interesting)
)
Now tell me what it means. Specifically, tell me what expression it ends. In contrast, take this XML example:
</p>
Now tell me what expression it ends. See how much easier it is? See, that's the difference: in XML, the angle brackets aren't really units of syntax in and of themselves; tags as a whole are. Moreover, in XML these units of syntax are self-describing. Also, angle brackets are never nested; they always occur in matched <...> pairs without any more brackets between them.
Design first, language second (Score:2, Interesting)
Once the rules for how to structure logic have been determined (to the best of one's ability), then a language that formalizes those constructs should be created.
We have plenty of languages that are great for defining logic, but none for structuring it.
So how do we do this? I suggest looking to the universe for answers.
In the hardware world, there is no need for garbage collectors. The laws of the universe restrict hardware engineers, and so they must decide which hardware device is going to exclusively own a resistor or a capacitor.
The reason for garbage collection is because of the unrestricted nature of the state-of-the-art languages. Any old object can point to any other object.
If hardware was designed like software, then a circuit board plugged into your motherboard would share a resistor or two with the circuit board on your hard drive. It wouldn't take a genius to see that this is a bad idea. But in software it's done all the time.
There are many other problems that garbage collection causes, like breaking encapsulation, memory leaks (i.e. objects aren't collected because some other object is incorrectly still referencing them), slowing down your program, non-deterministic destruction of objects, etc.
There are many other problems with programming that need to be solved, e.g. how to easily develop multithreaded programs with little or no extra coding. (I've personally developed such a model, one that also gives the developer the speed of manual allocation of objects, the automatic deallocation of a garbage collector with no extra CPU cycles wasted, AND deterministic destruction of objects.)
Also, imagine only needing a single collection class instead of dozens. (I've also achieved this.)
With proper models, we can achieve such things.
We need to reconsider everything to innovate. Nothing is sacred. Everything should be up for a redesign.
Unfortunately, all we seem to do is evolve languages to add a special flavor of a loop construct.
Pendulum (Score:3, Interesting)
OOP is perhaps the pinnacle of the behavior-oriented approach, in that interfaces tend to be thought of as behaviors applied to things. It is about time we swung back to the data side for a while.
I would also like to see more exploration of "separate meaning from presentation" such that syntax or views of logic can be customized to the developer and/or a particular need. Microsoft's CLR (or is it CRL?) is a baby step in that direction because it allows multiple syntaxes on top of more or less the same interpreter.
Re:funny AND interesting, but yeah FP... (Score:4, Interesting)
Of course I'm doing some handwaving here about the writing it correctly part. Until you memorize the major idioms, you'll often find yourself starting something with a single paren when it really needs to start with two, for example, and you'll get weird behavior that ends up driving you to randomly adding and removing parens until it seems to work. Admittedly that's a bit of a hurdle at first, but after some experience, that part gets easy. (Like glancing at for (int x = 0; x < 10; x++) and reading "do it ten times" without having to think about it. A lot of people forget how much thinking a newbie has to do to parse such an expression the first few times.)
The real problem with Lisp isn't the parens. Once you get over the initial hurdle, you just look at the indentation. The problem is that dev platforms these days are so much more than just a language. The basic concepts underlying a Lispy language are almost timeless. The whole rest of the dev system, though, has a shelf life of about a decade or less, after which time the way it is made available, the libraries, the editors you have to use, the string model, the constraints it's optimized for, the compromises it has made, its interaction with other technologies, etc., are all out of touch with current realities. Such is Lisp today.
(Paul Graham once seemed like the guy who could rejuvenate Lisp, but each year that passes makes that less likely. Speaking of out of touch with current realities.... Even Microsoft's secret projects are more open.)