What Is The Future Of Programming Languages?

MrProgrammer asks: "With hybrid languages like C# coming down the pike, what do you see as the next advances to be made in programming? We have languages from Assembly to Visual Basic, covering what would appear to be the entire spectrum. Is there anything else to be added? Is there anything beyond OOP?"
  • In perl they aren't required, and then you have to keep doing use strict to find bugs

    But "use strict" doesn't force the programmer to declare types, it just requires the programmer to declare variables. The variables remain varients with implicit type conversions. For a description of strongly typed languages without explicit type conversions (and how it contrasts to Perl) take a look at Strong Typing [plover.com]

    Why shouldn't one of the primitive types of a language be that of an address?

    Pointers also prevent a lot of compiler optimizations due to aliasing issues. I'd rather have a highly optimizing compiler that knows how to do all the neat pointer tricks than one that is kept intentionally dumb because it can't tell when I'm doing neat pointer tricks. (Of course I'd also like more communication between the compiler and the linker so that virtual methods could be converted to static if not overridden.)
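
    To make the aliasing point concrete, here's a rough sketch in C (it uses C99's restrict qualifier, which is newer than most of the compilers being discussed here, so treat it as illustrative):

    #include <stddef.h>

    /* Without restrict, the compiler must assume dst and src might overlap,
     * so every store through dst could change *src; it can't keep values in
     * registers or vectorize freely. */
    void scale_aliased(float *dst, const float *src, float k, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }

    /* restrict promises that dst and src never overlap, which hands the
     * optimizer back the freedom the paragraph above is asking for. */
    void scale_restrict(float *restrict dst, const float *restrict src,
                        float k, size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            dst[i] = src[i] * k;
    }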

    • What I mean is this: Microcode allows you to soft-code each "instruction" in the instruction set. There is therefore nothing to stop you making those as high-level as you like.
    That's true. But keep in mind that microcode existed before RISC instruction sets ..... I think it was the VAX that had an assembly-level instruction for "find the roots of this polynomial".

    To bring this back to the root of the thread, the Crusoe:

    RISC was a step away from microcode, in that if you look at how a MIPS processor executed an instruction, the instruction directly sets the datapaths through the processor, with no decode necessary. <opinion>The reason RISC chips no longer have an edge over CISC chips is that the RISC instruction sets no longer match up with how a processor executes the code. A PIII is a superscalar core with a CISC decoder on the front end - a MIPS R12000 is a superscalar core with a RISC decoder on the front end. No difference. In the Crusoe & Itanium, the VLIW instruction set directly matches the way the core works, in the same way that the actual bit patterns of the MIPS instruction set matched the MIPS hardware.</opinion> (Okay, the memory/processor speed ratio has an impact on the efficiency of RISC, too.)

    The whole idea you suggest about massively parallel reprogrammable hardware - very cool, very interesting, but I'm not holding my breath to see it in the short-term future. I have actually heard of one system similar to this in existence, though. It was a machine based on Xilinx FPGAs. As I recall, it was the size of a mini-tower PC, and as powerful as your average supercomputer. Cost about half a million bucks, I think.

    As for the stuff about high-level instructions in microcode, the last project I know of to design a processor this high-level was a stack-based processor from Sun that natively executed Java bytecode. I believe this has since been canned, for the same reasons that people moved from processors as CISC as those in the VAX to more RISCy machines. The processors people seem to be designing now are RISC/VLIW processors.

    I believe that it is possible to flash-update the Pentium Pro/II/III microcode on the fly, to fix bugs. Presuming that you can modify the microcode involved in instruction decode, I guess you could make a Pentium run a completely different instruction set :-)
    This would be a *very* cool, if impractical, way to write a JVM.

    Finally - to get back to the original Ask Slashdot - my opinion on the future of programming languages is that in execution they should be architecture neutral, in that they compile to a bytecode. Why? The difference between a RISC/CISC processor and a VLIW processor is that RISC/CISC processors have a decoder on the front end for an outdated instruction set. We should buffer against the hardware's changing requirements of the instruction set by adopting a three-tier approach to execution - in the same way that databases buffer against change in the way data is stored. That means an intermediate step, such as bytecode.

    End rant :-P

  • I think this is where design patterns really shine, and I hope pattern libraries become more standard across languages.

    I don't think this will happen. Well, maybe it will in the context of mainstream OO (static languages, virtual-method message passing, inheritance - i.e., C++-type languages), but not generally... Most of the patterns in the GoF book are about getting over the limitations of an inherently static language like C++. The problems the patterns solve might simply not exist in other languages.

  • I can't really see the big deal. I think it's meant to be sort of an extensible 5GL, i.e. a user-defined "intention" language combined with some user-defined heuristics to generate execution plans, and, presumably, a solution generator which applies rules and attempts to meet the goal or "intention" supplied by the user. This sounds a lot like just about every AI program ever written, which is why things like Scheme were developed.

    For a mature example, look at Macsyma or Mathematica. These are extensible algebraic tools which have extensible rule sets, and "intentions" which are stated in mathematical expressions. A massive search engine powers the system by applying rules and attempting to construct a reduced or simplified expression from the original. These are handy (in fact I think calculators now use a simplified version), but not exactly a dream to program. For one thing, the algorithm to generate solutions meets the textbook definition of "intractable".

    I think in the example of legacy code, the idea is to treat the FORTRAN or whatever as natural language and re-create all of the semantics of the compiler in some sort of rule base, and then go to town by adding to or modifying the ruleset to extend the domain. This seems like a hopeless endeavor.

    As a kicker let me leave you with this quote, the result, apparently, of 4-5 years of intense IP research:

    Many simple defines, such as #define foo 2, which would have been classified
    as a constant definition by Notkin et al, instead turned out to be instances of parameter passing!
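
    For what it's worth, here's a contrived C fragment (names made up, not taken from the study) showing how a "constant" define can end up behaving like parameter passing:

    /* WIDTH looks like a constant definition, but because MAKE_ROW is
     * expanded twice under different definitions of WIDTH, WIDTH really
     * acts as an argument to a little code template. */
    #define MAKE_ROW(name) int name[WIDTH]

    #define WIDTH 2
    MAKE_ROW(narrow_row);       /* expands to: int narrow_row[2]; */
    #undef WIDTH

    #define WIDTH 80
    MAKE_ROW(wide_row);         /* expands to: int wide_row[80]; */
    #undef WIDTH
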
  • Personally I like Common Lisp. It is completely different from the other languages I've used. One of the most interesting things is that it encourages practices that the profs usually tell people to avoid like self-modifying code and bottom-up programming.

    Having the compiler available at run-time makes for some very nice opportunities for optimization, and the macro system allows for overhead-free abstractions. Using a suitable library can seem like moving from C to C++ in terms of added functionality to the language. For instance, CLOS, the Common Lisp Object System, was written in Common Lisp on top of Common Lisp.

    This ability to extend the language is probably something we'll also see in new languages, because building abstractions to suit the problem might just be better than trying to fit the problem to a certain set of preselected abstractions.

    BTW, CMUCL is just fine. I use it myself and I don't know of better free lisps (clisp - no native compiler, gcl - poor standards support, sbcl - fork of CMUCL, should be as good). It includes a good optimizing native code compiler. Note that CMUCL is freeware (as in totally free for any purpose, with no obligations), not GPL. You can get it at www.cons.org.

  • The difference between a RISC/CISC processor and a VLIW processor is that RISC/CISC processors have a decoder on the front end for an outdated instruction set. We should buffer against the hardware's changing requirements of the instruction set by adopting a three-tier approach to execution - in the same way that databases buffer against change in the way data is stored. That means an intermediate step, such as bytecode.
    Yes. I've felt for a while that a future direction will be CPUs designed to run some intermediate bytecode that's explicitly designed for runtime optimisation. If you look at what HP is doing with Dynamo [arstechnica.com] (which dynamically recompiles HP-PA instructions back into HP-PA instructions before execution by the hardware, typically gaining 10-15% execution speed), it's clear that a split static/dynamic compilation scheme is a win - there are simply too many things in a modern superscalar design that can't be known at static compile time, and thus need to be put off until runtime, a sort of "lazy compilation". One can only imagine that it'd be a bigger win if instead of originally compiling to some legacy instruction set architecture (like HP-PA, PPC, or the old groaning x86), code was first compiled to a bytecode architecture that was specifically designed to mesh with a final dynamic compilation implemented in microcode (or even pure hardware, though I imagine acting on the amount of metainfo needed to do this well would make the design too complex for that).

    When (hopefully not if) someone does pursue this idea, it's reasonable to postulate that some programming languages or even paradigms might be better suited than others to the first-stage static compilation to bytecode. So I guess that's what my own answer to "what is the future of programming languages" depends on. One thing I'll say is that I'm not sure it's Java, which makes me doubt Sun's MAJC design for the purposes outlined above.

    There's another nice article [arstechnica.com] (companion to the one linked above) at Ars Technica that talks about some of this, though it actually ends up being a sort of review of CPU evolution from hardware lock-in through ISAs to microcode emulation of ISAs. It doesn't really talk about a bytecode-oriented CPU, but it sure seems a natural next step to me. It's an idea in many ways similar to Crusoe or even (shudder) EPIC, but doing things at different times and in different places.

  • What I mean is this:

    Microcode allows you to soft-code each "instruction" in the instruction set. There is therefore nothing to stop you making those as high-level as you like.

    (There have been microcode processors, where each machine-level opcode equalled a Pascal command.)

    What I'm suggesting is that the compiler does NOT compile the "program" to suit the processor(s), but changes the processors to suit the program. In this case, what you'd do, for each object, is replace the microcode for each opcode on the corresponding processor with the compiled code for a given method.

    eg: Let's say you have a class "btree", with methods for creating a new root node, checking if a tree is empty, adding a node, deleting a node, and returning the value of a node.

    You program these as your new opcodes for your processor. You now have a RISC instruction set, with 5 opcodes and a pseudo-register for the value of the node.

    (You actually have a further opcode for selecting an instance.)

    The concept of a program no longer exists, in this model. As your "program" is now just a series of opcodes, spread over a number of processors, ALL you have, in the way of a program, is an initial call to one opcode on one processor. Nothing else has any meaning.

    Further, because all your data is held in pseudo-registers, there is no need to access main memory. Your processors =BECOME= memory.

    This concept calls for the elimination of the traditional model of separate memory, main processor and dedicated supplementary processors for various devices and tasks. What you would have is a collection of totally programmable processors and nothing else. Your motherboard becomes a grid of processors and communications lines. Maybe one EEPROM to hold the bootstrap. But that's it. There would be no need for anything else.

    This means that OO programming becomes parallel programming. Each instance of each object can talk to any instance of any object on the system, in parallel, WITHOUT messy scheduling code, time sharing, or any other such rubbish. This means that whatever OS you used would not need to handle any of this. It would all be done at the hardware level, inherently.

    By using this kind of massive parallelism, you are moving from traditional models of parallel programming (which are generally blocking, and based on layers upon layers of fluff to get anything done), and moving closer to the Internet's concept of nodes which exist and operate in an inter-dependent to independent manner.

    You also move OO from a merely artistic language to a practical language. If you can develop an application in a simpler, more testable manner, using the OO paradigm, AND have its execution time superior to a functional program, THEN you have reached the level of usable OO technology.

    (Note: It'd be faster, because you might only have 10-15 opcodes on a processor, which is much more efficient to handle than the 200-300 instructions typical on a CISC or CRISC hybrid.)

  • A while ago I posted some C code about the value of xor in swapping ints, etc., but some of the followups noted that most of it wouldn't compile. The reason is that ANSI C and C++ have moved away from K&R and made most type abuses illegal.
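
    For the record, here's a version of the xor swap that sticks to one unsigned type, so it compiles cleanly under ANSI C (a rough reconstruction, not the code from my original post):

    #include <stdio.h>

    /* XOR swap with no pointer casts or other type abuses.  The self-swap
     * guard matters: *a ^= *a would zero the value if a and b alias. */
    static void xor_swap(unsigned int *a, unsigned int *b)
    {
        if (a != b) {
            *a ^= *b;
            *b ^= *a;
            *a ^= *b;
        }
    }

    int main(void)
    {
        unsigned int x = 3, y = 5;
        xor_swap(&x, &y);
        printf("%u %u\n", x, y);    /* prints: 5 3 */
        return 0;
    }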

    That got me to thinking about C in general, and the values that got me interested in it in the first place.

    When I took up C I had already learned several high-level languages like BASIC, Pascal, and even APL and some Lisp. C immediately stood out, because it had tight integration with hardware, based on an abstract, but very accurate, von Neumann-type processor and memory model. The basic goal of C was to do everything that assembler did, with similar tightness of code and familiarity with registers, bit sizes, and machine addresses. Unfortunately, the prevailing trend was to take C and make it a higher-level and not a lower-level language.

    However, these days, just about all processors are pipelined or multi-pipelined, and C code is hacked to pieces by the compiler or the on-chip optimizer. That's handy, but, in effect, C and even many assembly languages have become interpreted! The IA-64 will pull C code into separate, independent instruction streams, based on dependency and dataflow analysis. Crusoe will do speculative execution and even backtracking on unsuccessful branches, going so far as to do a rollback on memory locations, in cache.

    That's all fine and dandy, but I wonder if there are features there that could be put under more programmer control. Maybe there are some new #pragmas that I don't know about, but still I see C as getting farther and farther behind the hardware model.

    As a few naive suggestions, here are some ideas I have:

    Tagging statements to designate which pipeline they go into. This is what IA-64 compilers do anyway, so why not us! (maybe we can, but I haven't checked).

    Adding cache control statements to variables - e.g. lock in cache, pageable, or page immediately (useful for scans where, say, 2G of RAM is being scanned one byte at a time - there's no reuse; a rough sketch of this one follows the list). Oddly enough, I got this one from Oracle, which has this option for certain table operations.

    More machine-level operators. I don't know what these would be, but anything bitwise would qualify.
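
    As a rough sketch of the cache control idea: gcc has a __builtin_prefetch(addr, rw, locality) extension (gcc-specific, and newer than what most of us are compiling with) that comes close to the "page immediately / no reuse" hint for a one-pass scan over a big buffer:

    /* Scan a large buffer once; locality 0 tells the hardware we don't
     * expect to touch the prefetched line again, so it shouldn't displace
     * data we actually care about. */
    unsigned long checksum(const unsigned char *buf, unsigned long n)
    {
        unsigned long sum = 0;
        unsigned long i;

        for (i = 0; i < n; i++) {
            if (i + 64 < n)
                __builtin_prefetch(&buf[i + 64], 0 /* read */, 0 /* no reuse */);
            sum += buf[i];
        }
        return sum;
    }
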
  • pointers allow memory access to anywhere which requires the OS to sandbox applications and lose performance. i saw an interesting draft of an operating system that has some pretty neat ideas relating to this at http://ftp.rook.com.au/
  • OOP (well, C++) is not efficient when it comes to memory utilisation - this is due to many factors, including the automatic inlining of a lot of code. But how many developers actually care about program size?

    I've got a Solaris machine here which is constantly down due to many huge binaries wanting to run simultaneously. Each process consumes 8-19 MB of RAM, and it is possible that there could be 100 or more running at once. Not one of the processes is that complex in design. Neither the machine nor the code are under my control in any form, but I've still got to interact with the thing. Grr!!

  • by nellardo ( 68657 ) on Tuesday August 08, 2000 @06:11AM (#873162) Homepage Journal
    What I mean is this: Microcode allows you to soft-code each "instruction" in the instruction set. There is therefore nothing to stop you making those as high-level as you like.
    That's true. But keep in mind that microcode existed before RISC instruction sets. You could program in a high-level byte-code, but almost no one ever did (the most recent exception that I'm aware of is SGI's propensity for microcoding the hell out of their graphics accelerators to run GL real fast). I think it was the VAX that had an assembly-level instruction for "find the roots of this polynomial". Essentially no one programmed in assembly (except kernel hackers, who didn't usually need polynomial roots....) but no compiler was smart enough to figure out when it could use that instruction. Even if the instruction had a library call interface, people that really cared about the accuracy of their roots would want to choose their own algorithm (and if they weren't coding in assembly, there was no way they were going to touch microcode).
    What I'm suggesting is that the compiler does NOT compile the "program" to suit the processor(s), but change the processors to suit the program.
    At one level, there's no theoretical difference - do you transform A to use B or B to use A? If the program and processor are both correct, and the transformers are both correct then this kind of transform should be associative through composition - i.e., (trans A) -> B == A -> (trans' B). On another level, transforming machine types (in the Turing sense of machine) can change the run-time, sometimes exponentially.
    In this case, what you'd do is, for each object, is replace the microcode for each opcode on the corresponding processor with the compiled code for a given method.
    Realistically, what's the difference here between what you've proposed and what presently happens? RISC instruction sets are pretty close to microcode already - that's intentional, and how they were designed, for just the kinds of purposes you propose. Register windows on modern processors were in fact designed for object-oriented programming (see David Ungar's Ph. D. thesis on a processor designed for Smalltalk - Ungar is now a researcher at Sun and one of the pioneers in just-in-time compilation).
    eg: Let's say you have a class "btree", with methods for creating a new root node, checking if a tree is empty, adding a node, deleting a node, and returning the value of a node. You program these as your new opcodes for your processor. You now have a RISC instruction set, with 5 opcodes and a pseudo-register for the value of the node. (You actually have a further opcode for selecting an instance.)
    Self's bytecodes are in fact much like this. Self has eight bytecodes - three bits. If I remember correctly:
    • Send a message. This used five bits to index into a table of message names.
    • Put a message string in the table.
    • Extend identifier. This is used to have more than a five bit index, either for putting in the message or retrieving it.
    • Non-local return.
    • Primitive assign. Used five bits for the slot name.
    and a couple others I don't recall - non-local return probably, and possibly one to push "self" onto a stack.
    The concept of a program no longer exists, in this model. As your "program" is now just a series of opcodes, spread over a number of processors, ALL you have, in the way of a program, is an initial call to one opcode on one processor. Nothing else has any meaning.....
    I suggest you look into the Actor programming model, by Gul Agha. This is exactly what it is. Even simpler, really - each actor (processor) can send and receive messages, and has one register's worth of local storage and a script describing what messages to send when other messages are received.
  • The AmigaDOS command-line language implemented something very close to this, perhaps even better than the way you describe. All the following commands are equivalent:

    COPY src dest
    COPY TO dest src
    COPY FROM src TO dest
    COPY TO dest FROM src

    The command has a default argument order, but also has keywords that can be used to rearrange the order. I really wish a couple UNIX commands had this option. Like grep -- I can never remember if the search string goes at the beginning or at the end.
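
    Here's a toy C sketch of the same idea - positional by default, with FROM/TO keywords that can rebind either slot in any order (purely illustrative; nothing to do with the real AmigaDOS implementation):

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        const char *src = NULL, *dest = NULL;
        int i;

        for (i = 1; i < argc; i++) {
            if (strcmp(argv[i], "FROM") == 0 && i + 1 < argc)
                src = argv[++i];                  /* keyword form */
            else if (strcmp(argv[i], "TO") == 0 && i + 1 < argc)
                dest = argv[++i];
            else if (src == NULL)                 /* positional form */
                src = argv[i];
            else
                dest = argv[i];
        }

        if (src != NULL && dest != NULL)
            printf("copy %s -> %s\n", src, dest);
        else
            fprintf(stderr, "usage: copy [FROM] src [TO] dest\n");
        return 0;
    }
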
  • Did you try these tools out? I'm curious what your or anyone's experience is. Too bad these threads run out so fast.

    I've played with comparable tools but I find myself going back to writing code because I have to tell the computer explicitly about so much of what I want done.

    I also think a lot of it has to do with the fact that I've been working in C since 1985 and there's a lot of calcified resistance within the old pathways to trying a new approach... Also I have to watch the size of my binaries, and a higher level language always seems to bloat these up. Perhaps with you it's even worse, since coding in assembly gets you right down to the smallest possible footprint for your applications.

  • I imagine we're one generation on from the assembler vs. high level language argument. With optimising compilers getting better, the advantages of assembler seem to be diminishing. Now we'll have people on one side saying "Use these libraries, it's easier!" and the other lot will be saying "Yes, but it's more efficient if I write my own code". Until someone brings out a library handling thingy that optimises to the extent that you can't tell the difference... Of course by then we'll be moving onto something else which isn't quite as efficient but is much easier... ;-) Or am I getting too cynical?
  • Component Oriented Programming is the future. COP is not COM, EJB or CORBA. Those are heavyweight components. I'm talking about very lightweight components down at the language level. COP uses containment to reuse functionality.

    My idea of a good COP language would have the following attributes: uses containment and interfaces (instead of inheritance), has a restrictive referencing model where encapsulation and proper coupling are enforced and guaranteed by the language, automatic reclamation of memory eliminating the need for garbage collectors but still freeing the developer from explicit deletes (best of both worlds), deterministic destruction of components (since there is no garbage collector), P-code compiled once when transferred to a new platform (not every time the component is loaded into memory), a single outer component that replaces the virtual machine which is now unnecessary (it is this outer component that is ported from machine to machine), coupling of components done only at the interface level by their common container (siblings never call each other directly), etc.

    Well, so much for that pipe dream. Until the economics of language development change, there will be very few advancements made. Since Java is now free, the economics have changed drastically. Another harsh reality is that most languages still allow the developer to easily violate some pretty basic laws of good design, e.g. low coupling and encapsulation. What we need is a language that supports known design laws.

    What we don't need is more design patterns. Design patterns are mostly mechanisms to get around bad language design. Many of the "pitfalls" covered in books on a particular language stem from inheritance. If you don't believe me then pick one up and count them. A good 80% of all rules/laws of C++ or Java stem from inheritance.

    Another major problem is programmers. We are all object oriented programmers now. Unfortunately, most of us object-oriented programmers still think Java is an object-oriented language. Well, I hate to be the one to break the news, but it is not. You need three things to be an OO language: inheritance (yuck), polymorphism and encapsulation. Since every object is a reference in Java, you'd have to clone every object going in and out of an object to properly protect that object from outside access (encapsulation). Since cloning cannot be guaranteed to be a deep copy (most are shallow copies), you have to serialize the object to a memory stream and back into the new object, assuming all objects are serializable, which many are not.

    Most "programmers" aren't. We think that inheritance is the new silver bullet. And we are quite religious about this. Most programmers don't know what coupling is, don't understand what the fragile base class is, don't even think about encapsulation and wouldn't know a black-box approach to coding if it bit them in the ass.

    Good design practices should dictate the language requirements. Low coupling is the most important design concept ever. Every language I've worked with has intrinsically supported high coupling. Many design patterns that have been developed should be part of the language, e.g. delegation, publisher/subscriber, singletons. Most design patterns would go away with a properly designed language.

    It is extremely difficult and many times impossible to develop systems that follow proper design practices because of the brain-dead languages that we are forced to use. Until some company with a lot of money (not Microsoft because they have never innovated anything) who has a lot to gain by creating a language that produces robust systems exists (if ever), we will be stuck using plastic spoons to dig our graves.

    I am not holding my breath.

  • I think things like GLADE are the future. You describe what the UI is supposed to look like. Then you add the functionality using a different language.

    Actually, with languages like Haskell and Prolog, you can describe the result that you want, and give the system enough rules to figure out how to implement it. These languages are more descriptive than imperative. They have a lot of power, but I don't think they'll ever catch on much.
  • A major reason for the development of Ada was that the Department of Defense was wasting large amounts of money on the support of hundreds of programming languages used in embedded systems. Maybe we need more than one programming language, but we don't need hundreds of them.
  • I think Components, difficult as they can sometimes be, are definitely where the action is starting to move. I think a large part of this is being pushed by the Internet -> I have watched the dominant paradigm shift slowly as the demands that ecommerce places on high availability, scalability and maintainability drive technology.

    Java for example was originally released with the intention of adding interaction to web pages... BUT after the 5 minutes of applet antics faded, Java began to shift toward components, middle tiers, transactions, etc. etc. Java provides a platform independent base around which all sorts of development can occur. Perhaps it could be termed what M$ delicately calls a 'framework'.

    Although you guys will all hate me for saying so (you can all get focked and stop being fundamentalists), C# offers an interesting perspective on the whole programming language issue. Why? For a start M$ has ripped large chunks from the three most successful languages of the last 10-20 years (C, C++, and Java), so we see a large number of the features that make these languages so effective. Secondly, and this is the Java influenced part, C# is integrated with the NGWS or whatever it is called -> this is really a JIT compiler, with the interesting caveat -> it is essentially language independent. You can use C# to create apps, or C++, or VB, or Perl, or Python, or even COBOL, as the delightful interview that was posted a couple of days ago attests to. What does this mean? I have no idea, save that the platform (the .net platform, anyway) starts to become irrelevant as an influence on language choice.

  • I've had a glance over the articles you linked - very interesting, and I'll read them properly later.

    If you look at what HP is doing with Dynamo [...]

    Cool stuff - this is also about the same way that Sun's Hotspot JVM works. The problem with most JIT compilers is that there is a trade off between the speed the code runs at and the time the system takes to load. The more heavily optimizing your compiler is, the longer compile times are. With Hotspot, bytecode is initially interpreted. The system looks for hotspots in the code, and compiles these into very tight code. Code that rarely runs may always be interpreted, but this doesn't matter.

    The key difference between the two presumably comes down to memory - the fact that Dynamo is compiling from HP-PA to HP-PA means that any assumptions that the code makes about the nature of memory only have to *remain* correct. The fact that memory regions may actually be memory mapped IO makes cross compilation from one instruction set to another a lot harder. This is why Crusoe needs such a smart MMU, and one of the reasons why Java limits memory access to memory references as opposed to pointers. Harder but not impossible - I know of a project to provide fast software emulation of ISAs for old mainframe hardware, which is no longer available, using this kind of cross compilation. (Plus of course, stuff like SoftWindows for the Mac.) Like I say, cool stuff.

    One thing that you may find interesting - the article on Dynamo ends by saying imagine this for x86... well, I know someone working on a system for Linux that kind of does this. Unlike Dynamo, it doesn't do it at runtime, though. With Dynamo, you interpret all the code, and as you are doing this you can analyze what parts of the code are hotspots. The project he is involved in makes use of the performance profiling tools already available in Linux. The idea is that as a program runs you compile statistics of where the program is spending most of its time. You then regularly optimize your system, ensuring that the most common path is the one with the least jumps (& therefore stalls), but recompiling code on your hard disk, not in memory (compiling x86 to x86). It is a different approach, not optimizing at runtime, but with similar results.

    there are simply too many things in a modern superscalar design that can't be known at static compile time

    Very good point. I read something a while back pointing out that Java (or any other bytecode) programs may run faster than native code on Intel Itaniums. The Itanium is an interesting chip. Skip on if you know this, but for those who don't.....

    The Pentium II/III chips have a number of floating point units. When the processor hits a floating point instruction it allocates an FPU to the task: if none are available it stalls until one is. [This description may not be perfect - I'm into low-level code, but not hardware :-)] In the Itanium, specialist execution units have to be allocated for every type of instruction, so you have units like floating point units, memory units, maths units, etc. Operations like multiplication require a maths unit, but things like addition can be done by a maths unit or a memory unit (which is a basic RISC ALU). Different versions of the chip may have differing numbers of the units (e.g. an Itanium for a graphics workstation may have more floating point units, one for a server may have more memory units). A static compiler cannot know how many units of each different type the processor it will run on will have - a JIT can.

    So I guess that's what my own answer to "what is the future of programming languages" depends on. One thing I'll say is that I'm not sure it's Java,

    Why the hell not?!
    ;-)
    Seriously, are you talking about the language, or the bytecode instruction set here?
    What do you feel java is lacking?

    It's an idea in many ways similar to Crusoe or even (shudder) EPIC, but doing things at different times and in different places.

    Why the shudder?
    If there is a draft in here I can shut the door... Seriously though, why don't you like EPIC? Can't say that I know much about any other VLIW processors myself.

    cheers,
    G

  • by Zurk ( 37028 ) <zurktech AT gmail DOT com> on Monday August 07, 2000 @08:35AM (#873172) Journal
    the languages now are really CRUDE. programmers have to build infrastructure whenever we code. thats prolly the most annoying part (and sometimes the most fun) of coding. what i'd like to see is a language which would provide basic functions off the shelf. if i want an HTML editor stuck between two translucent animated buttons which pull up a hex editor and a MP3 player..i shouldnt have to *code* all that. three function calls and a few lines of code should do that for me in (insert language of choice). i shouldnt have to build stuff to parse files...if i want a XML tag called weather in a file somewhere on the disk, it should be able to retrieve it for me in one functional call. Java & the GTK stuff has been trying to do that but it doesnt go far enough. I want a seamless environment to manipulate all the functions available, plus i should be able to cut and paste bits from a library of examples available. And i should have a choice of writing bits in assembly and controlling the machines registers at the same time. tall order huh ?
  • I believe that programming, as it transitions from a science to a field of engineering, will become more of a COP (Component Oriented Programming) exercise. In the future, 5 to 15 years, I believe that programming will become more a matter of putting/gluing components together rather than developing applications from scratch, or developing apps from scratch and including a few light components.

    But for this paradigm to be accepted a good, reliable, easy to use, and fast component architecture must be created. Microsoft has COM, *NIX has CORBA, but the overhead for understanding and implementing these technologies is just too big. VB with COM tries to make an effort at making COP a viable solution but as everybody knows Microsoft cannot innovate. The system is kludgy and still very difficult to understand. What I feel a good COP environment would be is 90% of development time drag and drop and 10% of development time scripting, to glue everything together, i.e. passing data back and forth and implementing some dynamic interfaces.

    I'm hoping that Bonobo with the GNOME project will provide the ease of use portion of this but I have not had the time to look into it. Until then we will be stuck with the current form of programming.

  • by satch89450 ( 186046 ) on Monday August 07, 2000 @08:48AM (#873174) Homepage

    When you look at the progression of programming languages over the years, you see a growth in complexity followed by a simplification and maturation. This cycle, I believe, will continue.

    The B language is a perfect example. Its atomic elements mapped one-for-one with the DEC PDP-7 instruction set, and included interesting shorthand that sped development. The C language fixed some of the growth pains of B; the standard mandated strong typing so that the compiler could do better what "lint(1)" tried to do. C++ tried to extend the C syntax into the object world, with some consequences good and bad.

    What I see, though, is the creation of new languages to solve specific problems in ways that are natural to the particular problem-solver. You see this with business languages that abstract several thousand lines of COBOL code in a single statement. The further abstraction makes writing certain code easier, faster, and more bug-free.

    Emphasis needs to be increased on program accuracy. Debugging has been bolted on for years; it's time for significant debugging aids to be included in languages.

  • Umm... What does a language have to do with the libraries available for it? Why on earth would you want to include a web browser in the language? Should C have an official web browser? Don't confuse the language with its libraries. Java is a great language and it also sports a large collection of high level classes that come with the default JVM.
  • Umm... no. A language is syntax and semantics. It may be that a particular vendor provides a nice library for a particular language, but that doesn't mean jack for the language itself. Syntax and semantics.

    And if you think that syntax and semantics are irrelevant: you're wrong.

    (Sorry to sound like a curmudgeon.)
    --
    -jacob
  • OO, for example, will NEVER surpass, or even equal traditional functional programming

    OOP isn't about run-time efficiency -- it's about abstracting a complex problem into logical units that are simpler to visualize and manipulate; it is about programmer efficiency.

    Almost every decision in computer science involves a trade off, because there are so many variables that are inversely proportional to one another. Run time performance is only one variable in a complex equation; to optimize for maximum run-time efficiency a language designer has to sacrifice something else.

    C compilers produce efficient code because they intentionally do not do things like automatic bounds-checking and garbage collection at run time. Java does do bounds-checking and garbage collection at run time, at the expense of run-time performance. Debating if C is better than Java or vice-versa is as pointless as arguing if a crescent wrench is better or worse than a socket wrench: both do pretty much the same job, but one may be better suited for one particular task than the other.

    OOP exists because it makes it easier to solve certain types of problems. Not all problems are best modeled using an OOP model, just as all problems are not best modeled using a procedural model. Granted, everything eventually ends up as machine code (which, under existing processor architectures, is procedural). If you really wanted to be pedantic, you could say that you don't really need any other languages besides assembler. Of course, programming everything in assembler would be as silly as building a house by baking your own bricks and milling your own lumber.


    "The axiom 'An honest man has nothing to fear from the police'

  • languages ARE collections of libraries, like it or not. and its going to get much better once we have decent libraries. the core language is nothing more than a wrapper to access the library. are you telling me C is useful without stdlib/stdio ? Syntax is now becoming increasingly irrelevant since most languages use a C or C like syntax (java included).
  • heh. and i used to code in 8085 assembly where there was no such thing as a printf. :) ..or a OS or monitor for that matter. looking up hex codes and typing em into a 16 digit keypad connected to an LCD..fun fun fun.
    FORTRAN sucked syntactically, C doesnt have this problem, which is why C++/Java/Modula-2 try and copy it.
    VB doesnt qualify as a language. :)
    ok..you have a point that the language isnt about libraries, but now that C is around and programmers have used > 1 language, the language has shifted to becoming irrelevant. libraries are all important as is portability, security and performance. drag and drop is *not* the way to go, but neither is the "i'll write volumes of code to manipulate XML/HTML/media files".
    The paradigm seems to be shifting to a robust language base with extensive support libraries type thing (GUI/text mode development is the programmers choice - i still code in java with pico) with emphasis on portability, performance and security. which is a *good* thing i might add.
    of course theres always M$ doing its usual..but since theyre pushing C# instead of VB, they seem to be learning too. :)
  • Actually you're talking to an embedded guy who programmed in C for 2 years before he used a printf. Come on, a language is NOT a collection of libraries. If that were the case, why wouldn't we have just created a few more libraries and stuck with FORTRAN?

    I will admit, though, that a bunch of nifty libraries will allow a bad language to prosper as people overlook syntactical and structural problems because of its "ease of use".

    I hear Visual Basic has a lot of nifty built-in stuff, maybe you should try doing all of your coding in that.
  • I think the largest revolution in software engineering in the last 5 years or so has been design patterns. Components are great for GUI stuff, and pre-packaged classes are fine when they do just what you want. When something falls short you need to be able to develop your own stuff without re-inventing the wheel. I think this is where design patterns really shine, and I hope pattern libraries become more standard across languages.
  • . . . is lisp.

    Ok, well I don't really know, that's just what I wanted to try out next. I thought I'd share my reasons and see what other people had to say.

    Basically, I've observed the following characteristics about the best code I've worked with:

    • Code generates code. A good example of this is the Fastest Fourier Transform in the West [fftw.org]. You can re-run the code generator to generate code optimized for the size of transforms you will be using. The other common use of this is in interfaces. Most programming shops have a set of code in house that generates reader code for ascii parameter files (like .rc files or .ini files). The one I worked with used lex and yacc; it would parse a C struct definition and generate an example file and the reader code which would create the struct. The macros that any complicated cross-platform program seems to end up needing are another example.
    • Small chunks of "interpreted" instructions are generated at runtime. I think the "plan" that fftw makes you generate for a given transform and size, and then pass in with the data, is like this but I'm not sure. Some of the stuff I wrote was "generators" for certain classes, that essentially stored the arguments to a constructor, to be called (perhaps repeatedly) at another time. I think a lot of CORBA based interfaces basically pass along the code to interpret the data along with the data.
    It seems to me that a lot of the modern buzzwords essentially boil down to these issues once you figure out what they are really doing. I think the most useful part of C++ lies with the template mechanism, not object orientation or inheritance or whatever other buzzword you pick; and templates are essentially just a macro on type. This CORBA stuff seems to me to be just generating interface code on the fly and passing it around. (The Component Oriented Programming stuff seems to me to be just shell scripting, sorry.)
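
    To make the "code generates code" item above concrete, here's a toy C generator (nothing like FFTW's planner, but the same basic move): run it, and it prints an unrolled routine specialized for one size, which you then compile into your program.

    #include <stdio.h>

    int main(void)
    {
        const int n = 8;    /* the size the generated code is specialized for */
        int i;

        /* Emit a fully unrolled dot product for vectors of length n. */
        printf("double dot_%d(const double *a, const double *b)\n{\n", n);
        printf("    return ");
        for (i = 0; i < n; i++)
            printf("a[%d]*b[%d]%s", i, i, i + 1 < n ? " + " : ";\n");
        printf("}\n");
        return 0;
    }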

    My experience with Lisp is limited; Scheme was my language of choice for any projects I did as an undergrad, and I haven't really dabbled in it much since. The reasons why I think Lisp (well, Common Lisp in particular) might address these issues well are the relative ease of generating code on the fly, and the good reputation of the macro system.

    From a little bit of reading newsgroups, and from talking to older Lisp hacker types, it seems that one of the problems Lisp suffers from is that the commercial Lisp implementations are too expensive and not compatible with each other, so people get locked into using one Lisp because the effort to port to another is too expensive, and once these companies have you by the balls they will essentially calculate your profit margin and charge that. So I plan to only use one of the GPL'd versions, probably CMU. Is the CMU version so bad that I'm dooming myself from the start by doing this?

  • by alangmead ( 109702 ) on Monday August 07, 2000 @12:54PM (#873183)
    By putting things in terms of "Assembly language to Visual Basic" you are defining the terms of language design really narrowly.

    Basically you're just analyzing one axis of the evolution of programming languages (the pure imperative branch), from Assembly to Fortran to Basic to VB (with a few backwards steps with GWBASIC and QBASIC in there). It's not that far of a stretch. I plan on taking a close look at functional programming, logical programming, 4GLs, etc. before I ask "is that all there is".

    I worry about the way you dismiss object oriented programming so quickly too. The way objects work in C++ vs. Smalltalk vs. Python vs. Dylan shows a huge range of study just to decide what objects are and how they work.

  • Karma seems to be stuck for people above 50. Last time I checked moderation _does_ affect karma in the hidden sids, but there's a chance that that has been disabled to prevent abuses (creating 50 accounts that post slowly in hidden sids to gain moderation tokens, then using those to moderate the person's main alias up).
    --
  • As rgristroph mentioned, Common Lisp (CL) is an excellent language for code generation. Most developers are not capable of writing compilers, but CL is the only language I know of that lets you come close (actually, it lets you go the whole way, pretty much). Every language has its strong points. I think the strong points of CL, _some_ of which were mentioned, are (in no particular order):

    1) access to all CL functionality at compile-time
    2) the code itself is data, stored in basic Lisp data structures (lists, atoms).
    3) ability to write code-generating macros using (1) and (2)
    4) no _required_ type declarations
    5) massively dynamic
    6) very powerful object system, also completely dynamic
    7) integrated compiler/debugger/read-eval-print loop and development system (that's a simple but powerful command-line evaluator)
    8) friendly and helpful community of CL programmers
    9) several free implementations
    10) ANSI standardized
    11) no pointers
    12) automatic garbage collection
    13) platform- and implementation-independent code for the most part (except sockets, and some other stuff)

    Actually, there's more, but it just goes on and on. I hit the most salient features, I think. As a Common Lisp user and developer I confirm that many of the most important language features are available in CL, and are done right. I constantly see CL language features being done (and done wrong) in lots of other languages, including (but not limited to) C#. Java sucks, C of course sucks, and C++ is just a disaster. Dylan has some features, but just sold out half-way, and only works on MS, and has an unappealing syntax, limitations that make it not worthwhile, and is just a 2nd-rate rip-off of CL.
  • if i want an HTML editor stuck between two translucent animated buttons which pull up a hex editor and a MP3 player..i shouldnt have to *code*
    And that's precisely why you won't find those features in libraries... it's an organized effort among programmers to prevent people like you from writing programs with translucent animated buttons and built-in MP3 players (and no capital letters.)

    You know, I was joking when I wrote that, but now that I think about it... Flashy and unusable GUI's are almost always written in Visual Basic, which happens to be a language that's very extensible. I would never suggest that a language be crippled to keep it from being misused by idiots (*cough* Windows Scripting Host *cough*), but I wonder how much more useful our computers would become if OCX's went away?
  • I know this is off topic but...

    I've just recently found these hidden discussions or whatever you want to call them. I got modded up on a couple of posts and my karma didn't go up. Anybody have any insight? I don't mind if my karma doesn't go up but I'm just curious.

    Anyway it might be a good idea to have moderations in these boards not affect karma. That would keep the karma whores out and hopefully keep many of the trolls out too.

  • by jd ( 1658 ) <imipak@ y a hoo.com> on Monday August 07, 2000 @10:53AM (#873188) Homepage Journal
    The efficiency of programming languages is heavily dictated by the hardware that they run on.

    OO, for example, will NEVER surpass, or even equal traditional functional programming, until the hardware is itself OO. (The Crusoe is one step in this direction, where the end result is placing one method in one instruction, one class on one processor, and one instance in a selectable set of registers.)

    The reason is simple. OO languages need to be translated to a functional form (which is going to be inefficient) and then compiled to a low-level form (which also loses efficiency).

    By designing hardware that is NOT based on the functional concept, you can improve the quality of non-functional languages to the point where they become practical under stress.

    Then, there are other developments that badly need to be made. Abstract Data Type compilers are not exactly thick on the ground, but would make writing complex data structures much easier.

    Then, too, there is an assumption in computing that you have to move from beginning to end. Yet when you draw up a specification, you don't give a flow, you give mathematical rules which are valid. Nor do you say -how- something is to be done, you simply define the consequences of performing a given operation. Extend the concepts a bit, and you totally separate the "whats" from the "hows", and the content from the presentation. Allowing the user to control THEIR end, and the machine ITS end, will be the next major step forward in computing.

  • Does anyone with more hands-on experience than I (read "more than none") with Cocoa and Objective-C [apple.com] have an idea of how it fits in with all the others here?

    It sounds like an awesome development platform to move to, but is it "the future"?

  • I believe that in the reasonably near future programming languages will evolve from systems in which Use is dictated by Definition to systems in which the opposite is true: Definition will result from Use. I call these systems Adaptive Natural Programming Environments.

    Of course I just made up that name but that's neither here nor there. It does help to identify ideas in concise terms. So ANPE or Anne P. (whoever she may be) if you please, it is.

    The simple fact is that ALL programming languages in their deployed forms (i.e. programs), be they interpreted or compiled, are restricted in use by the definitions applied at construction time. In other words and by example, the verb, or let's call it a function to be mathematical for a moment, "Move", no matter what its encapsulation (function library or Object Class), has a definition that is determined by the programmer and is applicable only to that which the programmer has decided it is applicable to. So I may be able to "move" a MoveableObject in Object terms or execute the function "move()" given the appropriate parameters, but these operations have pre-determined scope.

    What then is the abstract definition of "move" and does it exist as an action outside of its prescribed use? Cognitively this is (pardon the pun) a no-brainer. Of course "move" has meaning. Does that mean in the programmatic analog all things which are to be moved are required to subscribe to the appropriate protocol in order to be moved? What if something needs to be moved that the programmer has not ever anticipated moving?

    Given that "move" implies displacement and certain well-known variables (lets call 'em X,Y and Z) need to be "adjusted" what is to be done with the object that has been moved and does not reference any variables that are displacable? Surely the user knows what they are doing. Why not let the object that needs to be moved adopt displacability as a feature? Why not, further to the point, allow for this instance of use to determine the degree to which the object is displacable? This is useful info. A house for example largely stays put but every now and then it needs to get moved (Damn highway keeps getting closer every year!) so why not give it membership in the Displaceable world but not the same degree of membership as say a wheelbarrow? By maintaining this history of use and adoption of characteristics a very powerful meta info system necessarily evolves.

    So why haven't these ideas been explored in traditional programming languages? For one, most applications are not appropriate for this level of functional fuzziness (thank you very much, but I like the intransigence of my ATM), but as computer applications encroach into more complex areas (simulation systems, gaming, virtual environments) the resistance to dynamic runtime change that is evident in even the most abstractly defined Object-Oriented system will disappear. I believe that adaptive systems will prove to be the theoretical basis for non-adaptive systems and will become the paradigm for the future. Moreover I believe that this will result from the application of linguistic concepts (beyond the encapsulation of noun and verb) to programming languages, which will result in enriched machine to machine communication, the "automation" of complex simulations and virtual environments, and even more work for me.

  • History seems to teach that there are strong institutional factors opposing programming language change. I guess the economics of the investment in existing code and existing skills makes the cost of changing programming language high, at least for most purposes. Otherwise, we'd all have adopted a programming language like Dylan well before now (take a look at http://www.functionalobjects.com for more on Dylan, or the recent Byte columns http://www.byte.com/column/BYT20000601S0003 and http://www.byte.com/column/BYT20000628S0007 for more on this neat language that gives you efficiency along with improved OOP and functional programming). The other thing I find interesting is that most of the innovations on the desktop are now being driven by games. This applies in both hardware and software. Unrealscript has some neat ideas, and Tim Sweeney wrote an essay (I can't find the URL) where he discusses a lot of pretty smart ideas about where programming languages should be going, and he's apparently going to do a newer and better language for his next major project.
  • Quick answer:

    No required type declarations: most errors will be caught anyway when calling functions with type declarations (the built-in ones), as long as you're not using unsafely optimized code.

    ANSI standard issues: IMO most notably no standard multiprocessing (threading) support, no standard GUI (there's CLIM, not on free CL's), no standard support for hard real-time programming (needs realtime GC), no standard networking... but should all of these really be a part of the language standard ? Maybe not, since it is quite big already.

    Pointers: Without pointers the compiler can optimize certain things, because the program can't poke around memory. For instance, caching a variable's value in a register in C is dangerous, since it may be modified through a pointer. Also, you do get pointers with the FFI (at least in CMUCL).

  • > 4) no _required_ type declarations

    Why is this good? In Perl they aren't required, and then you have to keep doing use strict to find bugs, so you end up just declaring them anyway. Making things easy for people is only good if you are making it easy for them to do the right thing, not if you are making it easy for them to screw up.

    > 10) ANSI standardized

    I understand from some people that one of the big problems with lisp is that not enough of it is standardized, and important parts like the foreign function interface are implementation dependent.

    > 11) no pointers

    Why is this good? Why shouldn't one of the primitive types of a language be that of an address? It allows you to do some cool stuff like pointer-pushing, which speeds things up.
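
    To be clear about what I mean by pointer-pushing, something like this - stepping a pointer through the data instead of recomputing base + index each time (a small sketch; a modern compiler will often emit the same code for both forms):

    #include <stddef.h>

    /* Count zero bytes by bumping a pointer rather than indexing. */
    size_t count_zero_bytes(const unsigned char *buf, size_t n)
    {
        const unsigned char *p = buf;
        const unsigned char *end = buf + n;
        size_t count = 0;

        while (p != end) {
            if (*p == 0)
                count++;
            p++;
        }
        return count;
    }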

  • The features of the language are very important for this. The improvements to OOP that a language like Dylan brings (its concern for efficiency along with the modules and multimethods and a few other things) mean that you can build libraries once for most design patterns, for example, and then just use them. It's also interesting how much easier Dylan makes plugging into something like CORBA or COM. Until our languages improve, there are a lot of compromises in trying to widen use of components.
  • I'm sure a lot of people think it should be called C@*#&$@!!!. :-)
  • The future of languages lies in the future of methodologies.

    Intentional programming [microsoft.com] is a pretty neat idea. IP basically provides an environment for programmers to develop extensions to a given language and to abstract code into "intentions". (It is a little unfortunate that a technology coming from Charles Simonyi at Microsoft is called "IP".) With an IP environment, you don't edit lines of code; rather, you edit (sort of) the abstract syntax tree, and you can express code in a notation of your own devising (which later gets translated into a target language).

    Generative programming (see generative-programming.org [generative...amming.org]) is another new wave in SE. The idea behind generative programming is that you create metaprograms which can then generate programs or components, allowing you to build adaptable (meaning you can change their domain) or adaptive (meaning they adjust themselves to a new domain) programs fairly easily.
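    As a small, hedged illustration of the generative idea (C++ templates used here only as a familiar stand-in, not as the GP authors' own example): the template is the metaprogram, and each instantiation is a distinct component the compiler generates for you.

    #include <cstddef>

    // The "metaprogram": a parameterized description of a component.
    template <typename T, std::size_t Capacity>
    class FixedStack {
        T data_[Capacity];
        std::size_t top_ = 0;
    public:
        bool push(const T& value) {
            if (top_ == Capacity) return false;
            data_[top_++] = value;
            return true;
        }
        bool pop(T& out) {
            if (top_ == 0) return false;
            out = data_[--top_];
            return true;
        }
    };

    // Each declaration below makes the compiler generate a concrete component:
    FixedStack<int, 16>    small_int_stack;
    FixedStack<double, 64> sample_buffer;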

    Also look for eXtreme Programming [xprogramming.com] and refactoring [refactoring.com] to become major new forces in traditional OO SE. XP is a new way to develop software; refactoring is a new way to maintain it. XP has end-users specify the requirements of an application by talking about how they'd like to use it; then developers, working in teams, code the application, designing as they go (except for some Class-Responsibility-Collaboration cards, which they make at the beginning of the development cycle). XP involves rapid development, frequent releases, and stringent unit tests. Refactoring provides a set of rubrics for improving the design of working code without breaking it, making code easier to maintain and understand while also providing better granularity for profiling and other instrumentation.

    Out of all of these, I'm currently using XP, some GP, and refactoring -- and my productivity has soared.


    ~wog

  • As the Department of Defense painfully learned in the 80's, when they tried to mandate that all software development be done in Ada, no single programming language is adequate for all programming tasks.

    We have dozens of programming languages because each one does some specific task better than its alternatives. C and assembly put you very close to the machine, giving you performance and low-level control over the hardware at the expense of maintainability, reliability, and development time. SQL works well for manipulating large sets of data and for reliable transaction processing. Scripting languages like Perl let you easily manipulate text streams. Java gives you portability and code maintainability. The list goes on.

    Software tools are just like physical tools: you have tools like a Swiss Army knife, which do a lot of things well enough for some jobs, and you have tools which are designed to do one specific job and do it well. Just as in meatspace, you have to pick the proper tool for the job at hand. A Corvette, a Jeep, and a pickup truck will all get you where you are going; but depending on the particular circumstances, one might be better than the others -- the Corvette is going to be better if you're driving on the highway, the Jeep is best if you have to go on unpaved roads, and the pickup is best if you have to haul a lot of stuff along with you.

    The future (and present) of programming, as I see it, is systems where different bits are implemented in the most suitable language, then tied together with a glue language. The days when a programmer could make a career out of knowing one language are, for the most part, gone and buried.


    "The axiom 'An honest man has nothing to fear from the police'

  • OOP is not meant to be a Holy Grail, despite what many people claim. OOP is meant to be efficient, which is not the same thing as fast during run time.

    Sometime, write a basic program in C which accepts input from a user (like their name, for instance) and then writes "Hello, <username>!". Make sure that it does bounds checking; that it has no arbitrary limits on how long a user's name can be; that it's well-behaved in the case of a crash; that it's this, that and the other. It takes pages and pages of code to write a simple, basic program well in C, because C is an inherently low-level language.

    C++, on the other hand, simplifies all of this for you. Just do a "string username; cin >> username" and you get a boatload of functionality.
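    To flesh that one-liner out into the whole exercise, here is a minimal C++ sketch (std::getline is used so names containing spaces work too; the prompt text is just an assumption):

    #include <iostream>
    #include <string>

    int main() {
        std::string username;                  // grows as needed -- no fixed buffer to overflow
        std::cout << "What is your name? ";
        if (std::getline(std::cin, username))  // copes with arbitrarily long names and EOF
            std::cout << "Hello, " << username << "!\n";
        return 0;
    }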

    C++ is more efficient in that case, because processor time is cheap and my time is worth a hell of a lot. In that case, it's better to minimize my time rather than the processor's time. This is the way it is for most software projects, by the by; programmer time costs much more than computer time.

    There are situations when this is reversed, where it's more efficient to have teams of programmers working for years on a program than it is to trust the processor to do the Right Thing at run time. The Space Shuttle is a good example of this. The people who write code for the Shuttle's flight computers use HAL/S as a development language ("High-order Assembly Language/Shuttle", if I recall). HAL/S is ugly. It's just about the last language I'd use for any given engineering task. But it works perfectly fine for the Shuttle, because it's more important that the programmers do their job right than it is that the language is perfect.

    The cost of Shuttle programming is astronomical (pardon the pun). However, in almost 25 years of operation they've never had a single catastrophic crash (save 1986, when a catastrophic failure of other systems interrupted computer operation). Try getting that sort of uptime from a UNIX box.

    Moral of the story: OOP isn't a panacea. Neither is traditional procedural programming. Saying that "OO ... will NEVER surpass, or even equal traditional functional programming" just shows that you don't know OO very well.

    OO isn't competing with procedural programming. It doesn't need to. The two paradigms exist completely independently of each other, and neither one needs to justify its existence. They just are, and wise programmers use whichever one is proper for a given application.
  • Let's see:

    Intentional programming: Building your own abstractions that will get translated into efficient code.

    Generative programming: Write code that will write code.

    Hmm. Isn't this Lisp?

    Seriously... it will be packaged differently, but they have a lot in common. It is quite common practice in Lisp to build abstractions with macros that make the language easier to program in, and usually the macros translate the code (codewalking, rewriting, reordering, generating extra code) into a more efficient form (the syntax tree is minimal -- it's all lists, and they're easy to manipulate). For instance, look up the LOOP macro or the SERIES package.

    Cheers,

  • I've got a few ideas. I throw a lot of them out. One that I've sort of liked is structural flexibility. Right now, in most languages, you specify a function with its arguments like so:

    strcpy( dest, src );

    I like the idea that a proper parser can extract parameters from any place in a function call. Consider:

    copy( src )to( dest );

    Now, your function is called "copyto". There are a few more characters to type, but isn't the purpose of the function much more apparent? And since your function calls can be written in a more natural grammar, you don't have to remember the order in which a bunch of parameters are given. (I, for one, still look at man pages for an awful lot of simple C functions.) A rough sketch of how you can fake this today appears below.

    Maybe someone will extend C in such a way, and call it "Natural C". (?? C Natural ??)
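    Nobody has written "Natural C", but you can approximate the copy( src )to( dest ) reading in present-day C++ with a small helper object. This is just a hedged sketch with made-up names, not a proposal for the language itself:

    #include <cstring>

    // A tiny builder that lets the call site read almost like the proposal:
    //   copy(src).to(dest);
    struct CopyFrom {
        const char* src;
        void to(char* dest) const { std::strcpy(dest, src); }
    };

    inline CopyFrom copy(const char* src) { return CopyFrom{src}; }

    int main() {
        char buffer[32];
        copy("hello, world").to(buffer);   // reads as "copy src to dest"
        return 0;
    }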
  • This is a very good point. There are new input devices for programming becoming available. I am an embedded systems programmer, and I recently got a CD from IAR Systems [iar.com] which contains a demo of a state-chart graphing program [visualstate.com] that also creates source code to match the structures drawn on-screen. A company called TogetherSoft [togethersoft.com] makes a product called Together, which draws UML documents (diagrams?) including state charts and umm... whatever you call those diagrams that look like state charts but show objects instead of states. I believe it also does some of the (most of the? all of the?) source code generation. IAR works on products for embedded systems, TogetherSoft works on products for larger systems. (I don't work for either IAR Systems or TogetherSoft, BTW.)

    The flow of graphs is a lot easier to perceive (imagine, create, manipulate, debug) than a one-dimensional sequence of statements. Productivity can be boosted significantly by using such tools. I don't think there's a tool out there that can, for all applications and under all circumstances, surpass the power, directness, and control that you get from hand-writing assembly code (which is what I spend most of my time doing). But I definitely believe these can be very useful, powerful, time-saving tools in themselves, under the right circumstances.
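    For readers who haven't seen these tools, the code generated from a state chart is typically ordinary switch-based dispatch along these lines -- a hypothetical, hand-written C++ sketch, not actual visualSTATE or Together output:

    // Hypothetical traffic-light chart: three states, one "tick" event.
    enum class State { Red, Green, Yellow };

    State next_state(State current) {
        switch (current) {
            case State::Red:    return State::Green;   // Red    --tick--> Green
            case State::Green:  return State::Yellow;  // Green  --tick--> Yellow
            case State::Yellow: return State::Red;     // Yellow --tick--> Red
        }
        return current;  // unreachable; keeps compilers quiet
    }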

  • > OO, for example, will NEVER surpass, or even equal traditional functional programming, until the hardware is itself OO. (The Crusoe is one step in this direction, where the end result is placing one method in one instruction, one class on one processor, and one instance in a selectable set of registers.)

    Interesting, but I've lost what you mean by this. As I understand it, the Crusoe is just a standard VLIW processor - smart, but nothing unusual in the way it executes code. The only really neat bit that I am aware of is the MMU, which is capable of mapping certain memory regions as memory-mapped IO regions, helping support the code morphing. How are you suggesting that the Crusoe supports OO better? Placing one method in one instruction - uh? What are you saying here? Memory-indirect function calls? You can get that functionality out of the x86. One class on one processor - urgh? A class is just a description of how a region of memory is being used.

    Not wanting to offend - just not quite understanding your point.

    > Then, too, there is an assumption in computing that you have to move from beginning to end. Yet when you draw up a specification, you don't give a flow, you give mathematical rules which are valid. Nor do you say -how- something is to be done, you simply define the consequences of performing a given operation. Extend the concepts a bit, and you totally separate the "whats" from the "hows", and the content from the presentation. Allowing the user to control THEIR end, and the machine ITS end, will be the next major step forward in computing.

    Yeah - have you ever coded in a functional language like ML? Just like this. For those who haven't, the difference between a procedural and a functional language goes something like this:

    procedural:
    get_pan();
    fill_with_water();
    boil();
    place_egg_in_water();
    wait();
    remove_egg();
    shell();

    functional:
    shell(remove_egg(wait(place_egg_in_water(boil(fill_with_water(get_pan()))))));

  • What I'm hoping for in the future is a language which surpasses Smalltalk in its simplicity, elegance, and splendid design. Smalltalk does have its issues (slowish runtime), but in many ways language design reached its peak in 1980. Twenty years have passed since Smalltalk-80 was made public, and about thirty years since the research that would become Smalltalk started at Xerox PARC.

    And after all this time, and all the pioneering research done at Xerox PARC that led to Smalltalk and the modern GUI, nothing new has come up. We've got Java, and now C#: nothing but half-assed attempts at Smalltalk's flexibility, catering to those who cannot think past C's syntax.

    It's a shame, and it doesn't seem to be getting better - Stanford's Self project was interesting, and also responsible for a lot of new ideas and techniques, but it has not really bloomed. I often look around for new languages, new ideas in language design, and it seems it's all just the same ideas being rehashed.

    I love Smalltalk as a language, but I'm ready for something just plain better to come along. Perhaps Smalltalk is just so good that it'll be another twenty years before the rest of the world catches up with it.
  • No kidding. What's new about IP is that someone has built a system (a la the lofty plans for Guile) that does the translations for you AND incorporates literate programming; and GP is a little more than "write code that will write code". However, the state of SE is just now catching up to where Symbolics was in the 70s. :-) Having used Lisps for several medium-size projects, I'm always a little depressed when some new^H^H^H 30-year-old "innovation" comes out, but these will bring the ideas to a wider audience, and will eventually result in better code.


    ~wog
  • This syntax looks a lot like Smalltalk:
    foo at: dest copy: src
    The method calling syntax of Objective-C has strong Smalltalk overtones, so you might want to look there.
  • For the work that I do, C and Perl work very nicely. I guess it depends on what you want to achieve.

    I believe the very next great improvement in programming is not going to be some great development environment allowing us to glue together components - you're still stuck with learning each component and making it do what you want, and often it doesn't let you do exactly what you need (ssh/scp instead of telnet/rcp).

    No, I think it's in the interface we use to program. My source code is basically one-dimensional lines of text expressing logic with a lot of interacting variables. The operational structure is in my head, and the keyboard is a poor tool for translating the mental/logical into the practical/computer. I don't know what type of input device would bring an improvement - I certainly don't think voice recognition will help programming - but I do believe that current languages are limited by the input devices we use to translate what is in our minds. Once such a device becomes available, programming languages will start to reflect it.
