As Languages Evolve... (81 comments)

naph writes "It seems that as programming languages have developed, there has been a steady increase in the level of abstraction they offer. Early languages were all very low-level, but successive generations have become higher- and higher-level. Is this trend going to continue, or do you think we've reached a kind of happy medium between power and abstraction? Would developers prefer still higher-level languages, or is direct control over the machine a good thing? I was just wondering what other developers out there thought of this."

  • One thing that has happened in PL design is that newer languages constrain the programmer more. Counterintuitively, this tends to increase expressiveness, unlike what one would expect at first.
    • by gaj ( 1933 ) on Sunday October 27, 2002 @12:38PM (#4541743) Homepage Journal
      I disagree. Limiting the degrees of freedom does allow a programmer to concentrate on the problem they're trying to solve. To the degree that the language fits the problem domain, this is certainly a good thing. However, to say the constraints in and of themselves tend to increase expressiveness is patently absurd. Expressiveness comes from having language constructs that are a good fit to a) the problem domain and b) your ways of thinking.

      In addition, Stroustrup was correct in saying that a language affects the ways you think about a problem. His language (C++) is certainly expressive, if terminally ugly and wart-ridden.

      In addition, I think that there are plenty of counterexamples to your assertion. Python and Perl are both (relatively) young languages that allow many degrees of freedom. A further example would be OCaml, which is also quite dynamic.

      • Python and Perl are both (relatively) young languages...

        I wonder what your definition of young is. In your post it sounds as if you think of C++ as old, or at least not young. C++ was developed around 1983, making it almost 20 years old. The first release of Perl was in 1987, making it only 4 years younger, and Python was first publicly released in 1991, another 4 years after Perl. While you can perhaps make a case for Python being "young" at 12, I would think 16 is close enough to 20 to consider Perl not-young. After all, Perl is 80% as old as C++. It's not as if it's Java [sun.com] or C# [microsoft.com].
        • No, I consider C++ to be somewhere in the middle. C is old. Lisp, Fortran and Pascal are all most definitely old.

          All of this is, obviously, somewhat subjective. The maturity of a language depends on more than a simple count of years. By any measure, though, C is more mature than Python, Java more mature than C#, and (I hope) we are more mature than this line of logic. I perhaps could have worded my comment better; I certainly should have proofed it more thoroughly. I guess I just mean that in the grand history of programming languages, Python and Perl 5 (which I consider quite distinct from earlier Perl) are young.

  • by Guspaz ( 556486 ) on Sunday October 27, 2002 @12:08PM (#4541602)
    As long as the compiler is efficient and very good at optimizing, more abstraction is OK. But if abstraction comes at the price of too much speed (or executable bulk), then the compiler should not exist.
    • by Directrix1 ( 157787 ) on Sunday October 27, 2002 @12:41PM (#4541761)
      Two words: Moore's Law. Even heavy abstraction will not keep up with the speed increases from that. And hardware is cheap. IT guys who program app servers aren't. Instead of paying an IT crew a lot of money to optimize a server, you could just invest in more/better hardware. And code size is not an issue any more. The data most programs sift through is usually the only thing to consider as far as storage goes, since the data is usually so much larger than the program anyway.
      • If I use an O(N^2) algorithm instead of an O(N) algorithm, I don't care how fast hardware gets: O(N) will always win on a sufficiently large data set.
        • Not always: if the constant-factor cost of the O(N) algorithm is much higher, then for small enough N the O(N^2) algorithm will be faster.

          While complexity analysis is extremely useful, never forget that ultimately the individual program, machine and input will draw the line between which algorithm is truly faster.
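
          For illustration, a minimal Java sketch of that crossover (the constant factors and class name here are invented for the example): an O(N) routine that does 1000 units of work per element against an O(N^2) routine that does one unit per step.

          public class Crossover {
              // O(N), but with a large (invented) constant factor per element
              static long linearExpensive(int n) {
                  long work = 0;
                  for (int i = 0; i < n; i++) work += 1000;
                  return work;
              }

              // O(N^2), but each step is cheap
              static long quadraticCheap(int n) {
                  long work = 0;
                  for (int i = 0; i < n; i++)
                      for (int j = 0; j < n; j++) work += 1;
                  return work;
              }

              public static void main(String[] args) {
                  // Below N = 1000 the quadratic loop does less total work;
                  // past that point the linear one wins, as argued above.
                  for (int n : new int[]{10, 100, 1000, 10000})
                      System.out.println(n + ": linear=" + linearExpensive(n)
                              + " quadratic=" + quadraticCheap(n));
              }
          }
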
          • Perhaps, but then that's looking at specific examples. There are cases where it makes a difference, and in those cases the complexity is more important than the machine running it. E.g. searching a sorted list of everyone in the USA: I'd rather use a 386 and a binary search than a random search on a Pentium 4.
            • I'd rather use some indexing on such a large amount of data, like building an array keyed on the first 4 letters and using that as an index into the rest of the data. That should give us a ~4 MB index file for the first search [2^(5 bits per letter * 4 letters) = 2^20 entries]. And that reduces the overall data file by a decent amount by removing duplicates, so it's not wasted.
              If the data is already well compressed it won't save much, but anyway.

              But I wouldn't use a 386 for it, as it would require some EXCELLENT variable-length compression in order to fit the data on the hard drives for it to search.
              400M names to search...

              If we assume the average name length is ~10 bytes
              [6 bits per letter],
              that comes to somewhat less than 4 GB. A P4 with 4 GB of RAM could fit that data in memory and get a name in 0.6 seconds average, 1.3 s worst case
              [random search].
              Your 386 with binary search should require
              log2(400,000,000) ≈ 29 random disk accesses.
              That's ~1 second, with the HD delays of the time.

              So the P4 has the better average case while the 386 has the better worst case.
              If the 386 cannot handle enough disks to hold the required data, you should add the network delay for the cluster that handles it, and suddenly your 386 with the better algorithm is going to lose.
              Isn't Moore's Law wonderful? These days people can get far more RAM than they could get hard drive space 15 years ago.
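
              For reference, the binary search being argued over is only a few lines; a sketch over a small sorted array of names (the data is invented):

              import java.util.Arrays;

              public class NameSearch {
                  // ~log2(N) probes: about 29 for 400 million entries,
                  // which is where the disk-access estimate above comes from.
                  static int binarySearch(String[] sorted, String target) {
                      int lo = 0, hi = sorted.length - 1;
                      while (lo <= hi) {
                          int mid = (lo + hi) >>> 1;          // avoids overflow
                          int cmp = sorted[mid].compareTo(target);
                          if (cmp < 0)      lo = mid + 1;
                          else if (cmp > 0) hi = mid - 1;
                          else              return mid;        // found
                      }
                      return -1;                               // not found
                  }

                  public static void main(String[] args) {
                      String[] names = {"Adams", "Baker", "Clark", "Davis"};
                      Arrays.sort(names);                      // must be sorted first
                      System.out.println(binarySearch(names, "Clark")); // prints 2
                  }
              }
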
          • The fastest algorithm is always going to be the one which exploits known features of the data. For example, if you know your data is almost sorted, insertion sort will beat heap sort any time, though merge sort will probably do even better, if you can generate initial runs cleverly.

            The second fastest algorithm is the one which can recognise common cases and optimise for them, and fall back to a reasonably efficient general algorithm if necessary.

            Incidentally, constant factors really matter, especially on modern machines where you have to take into account virtual memory working sets, buffer cache algorithms, cache coherency and so on. An O(N) algorithm with lots of mutex locking (even if there's no contention) may run significantly slower than an O(N log N) algorithm with not as much locking, simply because of the cost of cache synchronisation.
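
            A sketch of the almost-sorted case (the input is invented): insertion sort does work proportional to N plus the number of out-of-place pairs, so a nearly sorted array costs barely more than one pass.

            public class NearlySorted {
                static void insertionSort(int[] a) {
                    for (int i = 1; i < a.length; i++) {
                        int key = a[i], j = i - 1;
                        while (j >= 0 && a[j] > key) {
                            a[j + 1] = a[j];   // shift larger elements right
                            j--;
                        }
                        a[j + 1] = key;
                    }
                }

                public static void main(String[] args) {
                    // Only two pairs out of order, so almost no shifting happens.
                    int[] nearlySorted = {1, 2, 4, 3, 5, 6, 8, 7, 9};
                    insertionSort(nearlySorted);
                    System.out.println(java.util.Arrays.toString(nearlySorted));
                }
            }
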

        • If I use an O(N^2) algorithm instead of an O(N) algorithm, I don't care how fast hardware gets: O(N) will always win on a sufficiently large data set.

          The point of a highly abstract language is that you don't need to care about the algorithm, because an algorithm expert at the vendor has already done that and provided a package for you to use. Example: if I want to retrieve a sorted set of rows from a database, I just add an "ORDER BY" clause and I don't care how it's done, because Oracle probably has a PhD computer scientist whose only job for the last 20 years has been optimizing sorting algorithms. I certainly would not retrieve an unsorted result set and try to sort it myself, since there's no way I could do it any better.
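
          A minimal JDBC sketch of that idea (the connection URL, credentials, table and columns are all invented for the example); the sort happens inside the database, never in the application:

          import java.sql.*;

          public class SortedRows {
              public static void main(String[] args) throws SQLException {
                  Connection con = DriverManager.getConnection(
                          "jdbc:oracle:thin:@//dbhost:1521/orcl", "user", "pass");
                  Statement st = con.createStatement();
                  // ORDER BY delegates the sorting to the database engine;
                  // the client never sees an unsorted result set.
                  ResultSet rs = st.executeQuery(
                          "SELECT name, salary FROM employees ORDER BY salary DESC");
                  while (rs.next())
                      System.out.println(rs.getString("name") + " " + rs.getLong("salary"));
                  con.close();
              }
          }
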
      • And as long as your IT guys aren't coding too many O(n^2) algorithms...

        Using more efficient algorithms will always be a better use of coding time than the choice of language.
      • In the non-realtime graphics industry (I used to work in visual effects), they have an equal and opposite law to Moore's Law known as Blinn's Law. It states that the expectation of audiences rises at the same rate as hardware speed increases, making the amount of time taken to compute a frame basically constant.

        I no longer work in visual effects. I now hack a Z39.50 server for a living, and the same is true. Machines get faster, but the amount of text that people want to index increases at the same (or greater) speed.

        Note that this doesn't negate your point. My desktop machine spends most of its time waiting for me to press the next key (though it does get a lot of protein folding in while it's waiting). However, Moore's Law is rarely a saving grace, especially on the high end.

      • All I'm talking about are the costs of abstraction. I'm not dissing optimization; I'm dissing forgoing modelling a system well. Things should be optimized where it won't hurt reusability, or in the rare circumstances when you really need every last drop of speed (which in my experience is really rare, and a lot of the time certain non-portable optimizations can be bypassed by making a more distributed, generic version of the same system). Just my experience, though.
    • While it's a general trend that higher abstraction results in lower performance, this isn't always the case. Sometimes you get to have your cake and eat it too. Templates in C++ are my favorite example... you can get much of the usefulness of virtual classes while not paying as high a price. OCaml would be another example: it's extremely fast, on par with g++, and yet allows a very high level of abstraction.

      I think picking the right tool for the job and the coder/team is most important. This is why Java and C++, while covered with warts and sore spots, really are a good thing.
  • Even though... (Score:2, Insightful)

    by davisshaver ( 583015 )
    Even though I am just starting programming, I would prefer a language that is more abstract. I think it allows for faster programming. Just tell me if I am wrong.
    • Re:Even though... (Score:3, Insightful)

      by Hollinger ( 16202 )
      Well, you're being clever or naive, I can't tell which. By faster programming, I'm guessing you meant that you can whip the code out quickly and efficiently. That's true. However, there are just some cases where a higher level language will cost you, such as Embedded Programming, or high performance drivers and software (e.g. the Detonator series of drivers from NVidia). If you're trying to push as much data around as physically possible in a clock cycle to get that extra 2 FPS, low level is the way to go.

      With each level of abstraction, you, as a coder or designer, add overhead and unneeded operations to the code (compare the assembler output of a C++ program with hand-written assembler).

      The human mind is still the best code optimizer out there today.
      • Re:Even though... (Score:3, Interesting)

        by ttfkam ( 37064 )
        On the contrary, I believe it is you who is being clever or naive. Embedded programming, device drivers, etc. account for maybe one tenth of one percent of all source code written today. I fear that I may even be overestimating that number.

        For smaller projects, using low-level facilities can get you big gains. Even for larger projects, optimizing the hotspots in code with low-level constructs from within a high-level framework can show great results. In fact, this is how many high-level languages are implemented.
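
        One hedged sketch of that pattern in Java: declare the hot spot as a native method and implement it in C or Assembly via JNI (the library and method names here are invented, and the native library itself is assumed to exist).

        public class FastMath {
            // Implemented in C/Assembly and compiled into a native library;
            // the JVM binds this declaration to it at load time.
            public static native double dot(double[] a, double[] b);

            static {
                System.loadLibrary("fastmath");  // hypothetical libfastmath
            }

            public static void main(String[] args) {
                double[] x = {1, 2, 3}, y = {4, 5, 6};
                System.out.println(dot(x, y));   // the hot loop runs natively
            }
        }
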

        However, while the human mind is the best code optimizer out there, it is also the most frail and inconsistent. While you can write very tight routines in Assembly on a good day, what happens when you are sleep deprived, up against a hard deadline, and stuck trying to figure out why the program keeps crashing? You know, in the real world. Compilers, while maybe not producing the absolute best code for a particular instance, are very consistent about producing pretty damned good binary output billions of times, 24 hours a day, 7 days a week.

        Code production time is also a factor. If you are working on a project that's a few tens of thousands of lines of code, Assembly language -- with a competent code author behind it -- can show amazing results over the Python version (for example). But the Python version was finished and debugged days or weeks before the Assembly language version was code complete.

        Your C++ example is a bit misleading though. Chances are that you were looking at the assembler's output of iostreams. iostreams implementations, while getting better, are not anywhere near the small size of stdio.h or raw assembly output to the console. Then again, iostreams is far more portable and flexible than either of the previous two. It's a tradeoff, just like everything else in the world: convenience/specificity. Take out iostreams and replace it with a home-grown implementation or stdio.h and you'll notice some "tighter" code.

        In addition, compilers keep getting better. A good optimizing compiler is nothing to sneeze at nowadays. As a whole, compilers are measurably better than five years ago, and worlds better than they were fifteen years ago. Some of the best human minds are writing general code optimizers out there today.

        Add in the final tidbit that assembly isn't portable. If you are targeting a particular embedded platform with strict space requirements -- a small minority of all development projects out there -- C with Assembly fits the bill. As soon as your platform changes because a vendor went under or requirements demand a faster processor or whatever, all of that Assembly is basically useless. You might be able to use some of the same general algorithms, but you're basically talking about a rewrite. Then again, if you wrote some Assembly code that is more generic, it's not heavily optimized is it? It also doesn't work too well if your target includes multiple platforms from the start.

        Want to write a portable, network-aware program? If you use C, be sure to eat your Wheaties in the morning, because you're gonna have to spend a while typing in all of the #ifdefs. But you'll just make it clean and compile for Linux, FreeBSD, Solaris right? What if Windows or BeOS are requirements for your project? #ifdef #ifdef #ifdef

        OR!

        You could write it in Perl, Python, Java, or any of the other "dirty" high level languages and worry about those clock cycles after you've profiled it and seen the need. This of course doesn't remove the need for proper design before you start, but we were talking about implementation.

        Remember: Premature optimization is the root of all evil.
        • by pb ( 1020 )
          I agree with you that it's possible to speed up programs written in higher-level languages immensely by finding the bottlenecks and rewriting them in a lower-level language (profiling your code, the old-fashioned way), but I find much of the rest of your bashing to be quite excessive.

          I find that in higher-level languages, I spend less time debugging my code and more time debugging the system libraries and the language. In any complicated and changing system, there will be bugs, and there's far more code behind your average higher-level programming language, especially when it's new, not extensively tested, and not well-understood. i.e., your average C compiler will behave much more predictably than your average Java compiler, and if you screw something up in Assembler, rest assured, it's probably your fault.

          How are iostreams more portable than stdio.h? Do you mean portable in that it hides more implementation details, or portable in that it runs on more platforms? I'll agree with you on the former, although that isn't always a good thing, but never on the latter. :)

          As for C's #ifdef facility, I like it a lot. In fact, I sincerely wish Java had some form of macro preprocessing, because occasionally, I need it. I'm sick of Java programmers whining about how "macros are confusing"--if they lack the intelligence to run the source through the preprocessor in the first place, then... it might explain a lot about their Java coding.

          But this goes right back to using the right tool for the job, and not reinventing the wheel--if I needed to implement some sort of portable, network-aware program in C, I'd probably look for some sort of portable, network-aware LIBRARY that's already ported to the intended platforms.

          There's nothing wrong with quickly testing out an implementation in a higher-level language, but depending on what you're doing, you still might want to re-code all or parts of it in C. And all languages have their warts, especially depending on what you're used to doing or having. I wish C compilers had a standard facility for hashes and did proper tail-recursion optimizations--but then I also wish Java wasn't so crippled, bloated, and slow (to say nothing about typecasting, scope problems, enforcing its own naming conventions, etc., etc.), OOP in C++ wasn't so whacky, and I wouldn't mind if Perl's type system were a bit cleaner, or if PHP implementations could be more consistent between versions, etc., etc., etc. :)

          So, yes, use the right tool for the job. But don't kid yourself--you'll be banging your head against that higher-level language a lot anyhow, and likely for different reasons, sometimes completely unrelated to your program. After doing that for a while, I'd much rather know I'm shooting my own foot in C or Assembler instead of having my shot foot anonymously instantiated by some cryptic and unnecessary higher-level language 'feature'.
          • Re:eh? (Score:3, Interesting)

            by ttfkam ( 37064 )
            i.e., your average C compiler will behave much more predictably than your average Java compiler, and if you screw something up in Assembler, rest assured, it's probably your fault.

            This is FUD. There is far more consistency among Java compilers than C compilers in the real world. The difference in output between C compilers is far greater than the difference in output between Java bytecode compilers. As for debugging system libraries and the language: in my years of Java programming I have come across bugs in JVM implementations, and once(!!!) I even came across a compiler bug. I was able to work around the problem in all cases. This is impressive considering that I have done Java development on OS/2, Windows, Linux, and Solaris. I have had far more headaches from C compilers (yes, I code in C as well -- learned it years before Java came along) when bouncing from platform to platform than I have ever had from Java bugs. Come to think of it, I've even come up against a C library bug or two. To suggest that higher-level languages are somehow tainted in this respect and C or Assembly is the cleaner answer is laughable.

            How are iostreams more portable than stdio.h? Do you mean portable in that it hides more implementation details, or portable in that it runs on more platforms? I'll agree with you on the former, although that isn't always a good thing, but never on the latter. :)

            I misspoke. My intention was to say that iostreams are more portable than an Assembly solution and more flexible than either. You are correct.

            As for C's #ifdef facility, I like it a lot. In fact, I sincerely wish Java had some form of macro preprocessing, because occasionally, I need it. I'm sick of Java programmers whining about how "macros are confusing"--if they lack the intelligence to run the source through the preprocessor in the first place, then... it might explain a lot about their Java coding.

            Why? Why does Java need a hack that -- irrespective of the language syntax -- drops in text segments like a preprocessor? Do you need it for plugging in a native implementation when available and a Java one when not? Or conditional compilation of some functionality? Java doesn't need a preprocessor for that. You can do it in Java and still take advantage of the Java syntax validators -- which a preprocessor would cripple. It isn't about macros being confusing (which would be a #define and not an #ifdef); it's about them being unnecessary and in this case harmful. Since we're talking about macros and Java, what good would they do? If I write a sufficiently small or simple method/function, the Java compiler will inline it for me. For that matter, so will an optimizing C compiler. #define is of limited use today.
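
            The usual Java idiom being alluded to, as a sketch (class and flag names invented): a compile-time constant does the job of #ifdef DEBUG, because javac drops code guarded by a constant false.

            public class Log {
                private static final boolean DEBUG = false; // flip to enable

                static void debug(String msg) {
                    if (DEBUG) {                  // dead code when false:
                        System.err.println("[debug] " + msg);  // compiled away
                    }
                }

                public static void main(String[] args) {
                    debug("costs nothing in the shipping build");
                    System.out.println("done");
                }
            }
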

            if I needed to implement some sort of portable, network-aware program in C, I'd probably look for some sort of portable, network-aware LIBRARY that's already ported to the intended platforms.

            And C is one of the only languages left that DOESN'T have a standard, portable networking API. Java, Python and Perl have all had one for years. Java recently gained a secondary API for non-blocking I/O that takes care of a lot of the speed issues plaguing network apps in the past. And it's supported by all current JVMs. And any Java programmer can look up how to use it in any recent Java tutorial or quick reference.

            You want a library? So do I. Those libraries are called java.net, java.nio, and IO::Socket. How are they implemented? Probably in C and/or some Assembly. Do I care? Not if the API is stable and the speed fits my minimum requirements for a job (and they almost always do).
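
            For a taste of the java.net API in question, a minimal client sketch (the host and request are placeholders); the same code runs unchanged wherever a JVM runs, with no #ifdefs:

            import java.io.*;
            import java.net.Socket;

            public class TinyClient {
                public static void main(String[] args) throws IOException {
                    Socket s = new Socket("example.com", 80);
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(s.getInputStream()));
                    OutputStream out = s.getOutputStream();
                    // A bare HTTP/1.0 HEAD request, CRLF line endings
                    out.write("HEAD / HTTP/1.0\r\n\r\n".getBytes());
                    out.flush();
                    System.out.println(in.readLine());   // e.g. "HTTP/1.0 200 OK"
                    s.close();
                }
            }
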

            Nothing wrong with "testing out an implementation in a higher-level language", eh? What happens when that implementation is plenty fast and/or memory efficient enough to do the job? Why recode in C?

            Java is crippled how? By its standard GUI libraries? How can you compare that to C's lack of any standard GUI library? As for the rest of Java, given the non-blocking I/O libraries, how is Java substantially slower? By all means, give me an example larger and more complex than "Hello World."

            Study after study that I have read has demonstrated that the algorithm makes far more difference than the language in speed contests. The worst problems I have seen in Java code were when arrogant C programmers tried to code it like C and then whined about how it wouldn't work right for them. I have seen Java code where people made classes called Get_Channels and created instances of those objects in order to call an instance method, instead of just making a static method and calling it off the class definition. This is why Java has gotten a reputation for being slow in the last few years, not because of some inherent limitation of the platform. As far as naming conventions go, thank god there's a standard. Any Java programmer who follows the naming standard is pretty well assured that some other programmer will recognize and understand his constructs with a minimum of time and effort. It isn't until C programmers come in with their need to call classes get_channels that things go haywire.

            Scope problems? What scope problems? Can you be more specific because I'm not even sure what you have a problem with here.

            you'll be banging your head against that higher-level language a lot anyhow, and likely for different reasons, sometimes completely unrelated to your program. After doing that for a while, I'd much rather know I'm shooting my own foot in C or Assembler instead of having my shot foot anonymously instantiated by some cryptic and unnecessary higher-level language 'feature'.

            Yes, and people never bang their head against C or Assembly. I have had many more problems with the limitations of C in the past seven years than I have ever had with Java or Perl. Sometimes C fits the bill. When speed is truly an issue and every little cycle counts, sometimes C (or more often C++) comes to my rescue. For almost every other problem imaginable, C is my problem, not my solution.

            Java and C++ have very well defined behavior for object instantiation. For someone who complains that Java programmers just aren't smart enough to handle preprocessor macros, you seem awfully dismissive of your own ability to see what happens in a higher level language when I and others barely even blink.
            • Java needs a preprocessor because Sun has decided that operator overloading is a bad thing, and with a preprocessor it would be a reasonably easy hack to overload operators (after all, it's just syntax magic).

              That's one of the things that really annoys me about Java: it's darn hard to make it fit into 80 columns when using containers other than arrays, because of the long syntax on containers.

              • Of course no one is stopping you from using a preprocessor, but this is not true operator overloading. One of the primary features of operator overloading in C++ (which is the language from which I assume you get your love of the practice) is type safety. There's more to it in practice than syntax magic.

                You lose type safety when you invoke a preprocessor as you are no longer under the auspices of the syntax validator.

                And as a point of history, operator overloading was omitted because when Gosling et al. actually did the research, more bugs were introduced as a result of operator overloading than were prevented by its concision. It wasn't a personal grudge or offhand bias. There were many arguments, comparisons, and trials made before the final decision to leave it out.

                If you feel that you can handle it and -- more important -- that everyone that will ever touch your code and APIs will make better code because of it, by all means have at it.

                As far as the 80 columns difficulties, quit blaming the language and get a better editor -- you know, the kind that can fit more than 80 columns per row or intelligently wrap the lines. Why anyone codes with console-based vi or emacs these days, I will never understand.
                • by pb ( 1020 )
                  Yes, it's obvious that they had some leanings towards operator overloading simply due to the way they also use + for string concatenation. (never mind that anyone who knows what + actually does there will tell you not to use it anyhow because it's slow...) But naturally this is, again, one of those features that is handy enough that they don't mind doing it themselves, because they know what they're doing, but they wouldn't dare unleash it upon the world.

                  Think of the consequences--someone might write an arbitrary precision number library that allows for code that looks like "2 * 3 + 4 / 5" -- no, that must never be, it would be too useful and obvious!

                  As far as the 80 columns difficulties, quit blaming the language and get a better editor -- you know, the kind that can fit more than 80 columns per row or intelligently wrap the lines. Why anyone codes with console-based vi or emacs these days, I will never understand.


                  This explains so much--thank you. I have often wondered why Java programmers can't seem to indent their code properly. I've even had trouble finding a good program to (re-)indent Java code. The main reason why I'd like to see programmers stick to 80 columns (or 120 even? please??) is because sometimes I might want to PRINT SOMETHING OUT.

                  However I realize that when you have code that looks like
                  fooObject.barSomething( blahSomethingElse.stuffInstantiator(aVariable, bObject).toString()).bazMemberFunction()
                  it might be difficult to indent without introducing awkward spaces or unnecessary temporary variables, and you wouldn't want to disrupt the straightforward purity of the Java code itself! Actually, if you write all of your classes correctly, and your iterator functions and whatnot, maybe you can get your main program to one big long line that does everything you need. Say, 10,000 characters long.

                  But while we're enforcing semantics on the programmers, let's enforce specific editors or IDEs too! Do you have any suggestions, or an approved list of allowable console text editors? Does the editor have to be written in Java?
                  • (never mind that anyone who knows what + actually does there will tell you not to use it anyhow because it's slow...)

                    String foo = "This " + "is " + "a " + "segmented " + "string.";
                    and
                    String foo = "This is a segmented string.";

                    are equivalent in recent (the last few years) Java compilers. Like any good compiler technology, better choices are made over time. It's not a blanket performance dog.

                    It's also useful for simple cases like:

                    for (int i = 0; i < 5; ++i) {
                        String foo = "String" + i;
                        // Do something with it
                    }

                    The only real case where you should avoid it is where you are concatenating multiple variables whose contents can only be known at runtime:

                    Date d = new Date();
                    int x = 10;
                    String foo = "String" + x + ": " + d;

                    In cases like this, you should be using StringBuffers explicitly. Then again, this is common knowledge for Java programmers. Might I suggest the book Java Performance Tuning [oreilly.com] to help with your speed issues in Java. I consider it in the same league as Meyers' "Effective C++" series for avoiding common language pitfalls. It talks about common usage of containers, string manipulation, threads, and other nice tidbits.
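
                    A sketch of the explicit StringBuffer form of that last case (same variables as above, wrapped in a throwaway class so it compiles standalone):

                    import java.util.Date;

                    public class Concat {
                        public static void main(String[] args) {
                            Date d = new Date();
                            int x = 10;
                            // One buffer, appended in place: no intermediate
                            // String objects created per + operator.
                            StringBuffer sb = new StringBuffer();
                            sb.append("String").append(x).append(": ").append(d);
                            String foo = sb.toString();
                            System.out.println(foo);
                        }
                    }
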

                    As for numerics, use the right tool for the job. In many cases, it isn't Java. But then, neither is C.

                    I have often wondered why Java programmers can't seem to indent their code properly. I've even had trouble finding a good program to (re-)indent Java code. The main reason why I'd like to see programmers stick to 80 columns (or 120 even? please??) is because sometimes I might want to PRINT SOMETHING OUT.

                    Define "properly." C and C++ programmers are hardly a breed to talk. As for code beautifiers, try this one [tiobe.com]. I've never used it as I'm fine with my and my team's formatting habits, but it was the first link in a Google search for me. :-/

                    Try searching for 'Java "code beautifier"'. Worked for me.

                    As for printing, every printer I've used in the last ten years has handled more than 80 characters per line. Mmmmm... Progress...

                    However I realize that when you have code that looks like


                    fooObject.barSomething( blahSomethingElse.stuffInstantiator(aVariable, bObject).toString()).bazMemberFunction()

                    it might be difficult to indent without introducing awkward spaces or unnecessary temporary variables, and you wouldn't want to disrupt the straightforward purity of the Java code itself!

                    Or it could be written as:

                    BlahSomethingElse bse = BlahSomethingElse.getInstance(aVariable, bObject);
                    BazMember bm = fooObject.barSomething(bse.toString());
                    bm.bazMemberFunction();

                    Is this what you were talking about? Or were you talking about the "getInstance()" static method commonly found in the Java API? It is of course for instantiation with singletons and the like, just as it is in C++. I'm surprised this is new to you. In addition, what you see is what you get: it is actually harder to have object instantiations happen without your noticing in Java than it is in C++. Incidentally, Java compilers produce the same bytecode for this as for your example, and mine is (a) easier to read and (b) prints just fine in 80 columns. And no, I don't advocate 10,000-character lines. I also don't think the 80-column hard limit is necessary anymore either.

                    And I said nothing about enforcing specific editors or IDEs. Please point out where I did. All I said was that specifically console-based vi and emacs were at issue. Actually, now that I think of it, you can usually stretch console windows out. Or is 80 columns the one true width? Quick quiz: why is 80 columns such a magic number? Answer: because that's how many characters fit on the terminal screens and early line printer platens. Move on.

                    And while we're on the topic of printing stuff out, you know all of those automatic documentation generators for C and C++? They weren't quite so popular until javadoc came out, were they? Show me one for C or C++ that predates Java. Go on. I dare you.

                    For what it's worth, I have used gvim, vim, emacs, xemacs, moleskine, textpad, BBEdit, Visual Studio, JBuilder, and many others for development. I have even written Java in notepad although I don't recommend the experience to others.
                    • by pb ( 1020 )
                      My numeric example wasn't meant to say that you should use Java to do high-performance arbitrary-precision number manipulation, but rather to mention that this is an area where operator overloading makes the interface much more natural. I already knew that the + operator has its shortcomings in concatenation, but it is nice to see that this is (still) common knowledge.

                      I haven't seen that particular code beautifier--when I was searching for one, I found one that understood and formatted many different languages (at least somewhat), primarily for printing. And it did an ok job with Java, except when it couldn't manage to decently break a line of code up in the first place--this is primarily due to the godAwfullyLongIdentifiers that are commonly used in Java.

                      You're right that my example could be broken up through the judicious use of temporary variables, as I've already mentioned. But I know of no code beautifier that actually modifies your source code so as to change its meaning, and if I found one, I probably wouldn't want to use it in the first place.

                      I actually wasn't referring to any particular Java method in that example by the way, but attempting to emulate the sort of identifiers commonly found in a Java program. I think that writing code like that (all on one line, and accessing the results of functions as objects, etc., etc.) is a particularly ugly way to write Java code, but it's common enough, and IMHO it makes the code harder to read, indent, and understand.

                      I don't think that 80 columns is necessarily some magic number that *has* to be enforced, but I think it's a nice rule of thumb, especially for printing. I'd go as far as perhaps 120 columns, but with 80 columns as a strong preference. Still, even 120 columns isn't wide enough for some code...

                      I often use console-based editors, sometimes on actual consoles. And I wouldn't ask anyone to change their preferred editor, yea, even for Java programming, unless I had some awesome development environment to suggest. Usually I use nano or pico, actually, due to an old preference for simple console editors, probably a hold-over from my DOS programming days. For an IDE, I'd have to recommend RHIDE, again due to my days using TurboVision-based IDEs. But of course this is entirely my personal preference.

                      The automatic documentation generators in the style of JavaDoc did get quite popular, and started springing up on every language, almost overnight. I think that's one great contribution Java has made to the community, intentionally or not. I guess you need good documentation when you can't decently indent, format, and print your code... ;)
                    • The automatic documentation generators in the style of JavaDoc did get quite popular, and started springing up on every language, almost overnight.

                      I suggest that Common Lisp's docstrings and the browsers and information screens available in every widespread Common Lisp environment are far more useful and convenient. Also C-z a in ILISP [technion.ac.il] rocks.

            • by pb ( 1020 )
              As always, with Java or C, YMMV--sometimes it just depends on what you're trying to do with the language in the first place, and if you haven't run into problems, well, congratulations. I admit I have little experience programming in Java, perhaps because I object to its particular variety of brain damage, but I'd be happy to go into a few of my pet peeves, just so you understand where I'm coming from.

              Scope issues--have you ever used protected variables in C++? I think they're remarkably handy for both keeping a level of OO encapsulation and allowing descendants to get real work done later on. Let's say you want the same functionality in Java. I'll call the different kinds of scope here public (anyone can access a variable), private (only the class itself can access its own variable), package (one of the things Java added--anything in the package can access the variable) and class (that class and its descendants can access the variable).

              Now, in C++ (since there are no packages) public variables have public scope, private variables have private scope, and protected variables have class scope. That is to say, to get a variable that only has class scope, all you have to do is make it protected. But in Java, protected variables have class AND package scope. Therefore, to make a class have protected variables, you have to give each class (and its descendants) its own unique package, which you'd have to keep track of manually.

              But wait, it gets better... remember what I was saying about Java enforcing its own conventions on the programmer? Well, to create a package, you have to create a directory for that package. So, whereas before you could have one file that contained, say, your class with the protected variable and your main() routine and whatnot, now you have an extra package and directory to worry about... just for that variable! Want another class with protected variables? Make another package, create another directory, etc., etc., etc....
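
              A small sketch of the complaint (the package and class names are invented): in Java, protected grants access to the whole package as well, so a neighbour that is not a subclass can still touch the field.

              // File animals/Animal.java -- hypothetical package for illustration
              package animals;

              public class Animal {
                  protected int legs = 4;  // visible to subclasses AND to animals.*
              }

              // File animals/Keeper.java -- same package, NOT a subclass
              package animals;

              class Keeper {
                  int legCount(Animal a) {
                      return a.legs;       // compiles fine: package access rides
                  }                        // along, unlike C++ 'protected'
              }
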

              To reply to your questions about why Java would need macro preprocessing, I'd like to go into some of the philosophy that (to me, at least) seems to underlie a lot of Java's design. You said it yourself--"Why does Java need...". Well, the answer is that for a particular feature, maybe Java doesn't need it at all 99% of the time. And all of the features that the developers thought did more harm than good were purposely left out of Java. With prejudice. Macros are a good example of this. Because Java has such silly, long and inane naming conventions, I'd love to #define them all away and use something short and easy to remember. Even iostreams in C++ are better here--you can just use cin and cout for I/O; convenient, isn't it? I agree with you that using #define for inlining is of limited use today--most of the time the compiler does a pretty good job with that now.

              A better example of this philosophy is goto. Even Dijkstra never advocated eliminating goto entirely from the language--he acknowledged that it has its uses. This is another case where 99% of the time you really shouldn't use it at all, but the other 1% of the time you'll miss it enough to switch to C++ or C. But the Java people didn't just not implement a goto statement and leave it at that. Oh no. They actually RESERVED THE KEYWORD AND LEFT IT UNIMPLEMENTED. To me, that just seems completely gratuitous, unnecessary, and unprofessional.

              I agree with you that choosing the right algorithm can make far more difference than which language you choose, because the orders of magnitude difference in performance can make up for much sloppy programming in a higher-level language. And this is one of those reasons why Java is so confusing and dangerous. It has a lot of implementations of different, handy algorithms, that can all do the same tasks. And this has a high learning curve associated with it, I'd say even higher than learning the STL in C++. Since you didn't actually write this code yourself, you have to know (i.e., read the documentation or benchmark the implementation or both) how fast each operation is for a given task in each implementation to pick the right one. Also, it helps if you know what it does to your data along the way. For instance, the fact that Strings are UNICODE in Java might explain why a simple change can double your memory requirements.
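
              A sketch of that trap (sizes invented): two containers behind the identical List interface, with very different costs per operation.

              import java.util.*;

              public class ListCost {
                  public static void main(String[] args) {
                      int n = 50000;
                      List array = new ArrayList();
                      List linked = new LinkedList();
                      for (int i = 0; i < n; i++) {
                          Integer boxed = new Integer(i);
                          array.add(boxed);
                          linked.add(boxed);
                      }
                      // Same-looking calls: get(i) is O(1) on an ArrayList but
                      // O(i) on a LinkedList, so the second loop is O(N^2)
                      // overall and crawls for large N.
                      long sum = 0;
                      for (int i = 0; i < n; i++)
                          sum += ((Integer) array.get(i)).intValue();
                      for (int i = 0; i < n; i++)
                          sum += ((Integer) linked.get(i)).intValue();
                      System.out.println(sum);
                  }
              }
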

              Ok, how about graphics. Graphics are fun--Java has some built-in facilities for drawing primitives to the screen, something C doesn't have built into the language. Sure, you could use X, or SDL, or whatnot, but that's a whole 'nother kettle of fish. Well, you can draw lines, and boxes and circles in Java... but not pixels. There's no drawing primitive for a pixel. I think you could convert from one type of graphics object to another (that has pixels and nothing else) but only at a hefty penalty in speed and memory usage, and obviously if you wanted both, converting between the two would be highly silly. So the usual work-around is to draw a degenerate line or square or something, surely at great expense, since there is no pixel primitive, and no good way to add one. So much for OOP.

              I like Perl quite a bit; it has a lot of higher-level language features, but it doesn't impose any bizarre semantics on me, or hide any features from me. It's a great tool for a C programmer to gain a lot of higher-level language features without losing the power and flexibility of C. Perl has something for everyone, even proper closures and lexically scoped variables, if you want them!

              But yes, I agree that using the right tool at the right time is very important. My experience with Java has been far more trouble than it could ever possibly be worth, but perhaps I wasn't designing a networked GUI interface or something. I admit that there are many things that Java can do for you that are handy, and would require much work to implement in C. But there seem to be just as many things that it does that I would rather not have it do, and can find no easy way to make it stop.

              Also, my foot shooting example wasn't meant to be taken literally, but rather in the spirit of the classic list of getting your feet shot in various different languages. If that wasn't clear, I apologize.
              • According to Sun sources:
                The private protected access was removed from the language because it violated a nesting relationship that was valuable. You can list the existing accesses in an ordered relationship from least to most accessible as follows:


                private
                package (i.e., when no access modifier is specified)
                protected
                public

                The "private protected" access does not fit cleanly into that ordered list. This ordering capability keeps certain parts of the language simple, and is therefore valuable.

                The assumption is that packages contain "related" code, and so encapsulation within a package is less critical. If you throw code randomly into a package you will end up with unusual exposures (although with inner classes this can be controlled better), but if it hurts, don't do that.

                Take that as you will. I personally don't have a problem with it and it hasn't been a real impediment to my code. Why you have objects in the same package that have no usage guidelines, I don't know. There are a lot of people who agree with you, so I'll leave that as a matter of opinion.

                So directories are an issue eh? You are aware that it's possible to make classes without subdirectories right? If you don't specify a package, they get put into the default package which is at the root level (of your source tree). As opposed to C and C++, if I have a group of source files in a directory, I know for a fact that they are related to one another (in the same package). In C/C++, I am at the whim of the coder as to what namespace the classes are in (C++ only) and also into which directory s/he saw fit to drop them.

                If you have twenty directories for a small project in Java, you've got bigger issues than the directory structure; your project organization is completely screwed. If you've only got one, and the path is com/mycompany/, I hardly think that is cause for panic and excessive teeth gnashing.

                When using a class, because of the package designation, I know exactly where it exists on the source tree if I need to fix something. Makes things simpler for the classloader too and the expectations the developer has of that classloader.

                Even iostreams in C++ are better here--you can just use cin and cout for I/O; convenient, isn't it?

                PrintStream o = System.out;
                o.print("Is");
                o.print(" this");
                o.print(" short");
                o.print(" enough");
                o.print(" for");
                o.println(" you?");

                Approaching the length of your typical print statement in C. No shortcuts possible in Java indeed.

                It has a lot of implementations of different, handy algorithms, that can all do the same tasks. And this has a high learning curve associated with it, I'd say even higher than learning the STL in C++. Since you didn't actually write this code yourself, you have to know (i.e., read the documentation or benchmark the implementation or both) how fast each operation is for a given task in each implementation to pick the right one. Also, it helps if you know what it does to your data along the way. For instance, the fact that Strings are UNICODE in Java might explain why a simple change can double your memory requirements.

                Harder than the STL? Ha ha! You're kidding, right? I love the STL dearly, but to say that it's easier to use and more intrinsically intuitive than the Collections package is silly. While I wish Java had generics so the Collections would need fewer casts, it's still easier to use.

                As far as relative efficiency of algorithms, are you complaining about the fact that you're not diving into the source? You are aware that there are STL implementations out there that only give you headers and library against which to link?

                That said, are you talking about the constant factor to each of the containers or the relative cost of using a LinkedList as opposed to a HashMap? Go read a CS algorithms book. These aren't that tough. Also, there is nothing that says that one JVM will be implemented exactly the same as another (same as the STL). The only thing guaranteed is the API. Whether you are using the STL or Java Collections, you still need to test and benchmark. If you need help deciding which situations call for a List instead of a Set, C and C++ aren't gonna bail you out either. I'm assuming that this is not the case as your comments to this point have been well informed.

                As for strings being Unicode, I fail to see why that's a mark against Java. I18n and l10n aren't afterthoughts as they are in C and C++; they are designed in. Codepages, character encodings, etc. are things I don't much care about. But thanks to Java, I don't have to. It's a solved problem. Programs today need to be made for more than just English speakers, and Java makes that easy. Hell yes it has built-in Unicode support! Thank god they did the right thing!

                Well, you can draw lines, and boxes and circles in Java... but not pixels. There's no drawing primitive for a pixel. I think you could convert from one type of graphics object to another (that has pixels and nothing else) but only at a hefty penalty in speed and memory usage, and obviously if you wanted both, converting between the two would be highly silly.

                Absolutely, positively, blatantly, patently, obviously false [sun.com]. Part of the Java2D API, which has been around since Java 1.2 (four years ago?).
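
                A pixel-plotting sketch with that API (the class and file names here are invented): BufferedImage exposes setRGB for individual pixels, no degenerate lines required.

                import java.awt.image.BufferedImage;
                import java.io.File;
                import javax.imageio.ImageIO;

                public class Pixels {
                    public static void main(String[] args) throws Exception {
                        BufferedImage img =
                                new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
                        for (int x = 0; x < 64; x++)
                            img.setRGB(x, x, 0xFF0000);   // plot single red pixels
                        ImageIO.write(img, "png", new File("pixels.png"));
                    }
                }
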

                I like Perl quite a bit; it has a lot of higher-level language features, but it doesn't impose any bizarre semantics on me, or hide any features from me.

                Perl? No bizarre semantics? Please tell me you are joking. Next you're going to tell me that OO programming in Perl is clear and intuitive.

                And I never said anything about a networked GUI. I personally love the Swing API but it is far too big and slow to use for any UI of significant size -- something Sun really needs to work on. Networking and database work however is my bread and butter. C has nothing as clean, elegant, mature, and easy to use as Java's networking and database layers. Period.

                As for things to "make it stop," give me another example. Maybe there's something in the API you aren't familiar with that could make your life easier.
                • by pb ( 1020 )
                  First, to get "private protected" semantics, you can't just use an anonymous package, you have to create another package to encapsulate the class in the first place.

                  Re: assigning System.out to an object, sure that works, but you have an extra temporary variable; not horrible, but a bit ugly.

                  I don't mind reading documentation; I'm merely pointing out that when you have as many container implementations, it'll take a while to figure out which one is best for the task at hand, and what the pros and cons of each one are. I personally don't have a problem with this, but it's easy for a programmer to pick the wrong one--in fact, it happens all the time. By using completely different algorithms under the hood but maintaining the same interface, Java makes it easy for programmers to use the wrong algorithm and never know it.

                  There's nothing wrong with Strings being UNICODE provided the only thing you're storing in them is text that's written in some human-readable language. A char* or char[] in C often contains data as opposed to Chinese. I see nothing wrong with promoting data to UNICODE as needed, but I don't see why I should double my storage requirements otherwise.

                  In that link you oh-so-cleverly pointed out, do you see any way to DRAW it? No? Well, that's because you found a class that REPRESENTS the idea of a pixel, but isn't ACTUALLY a "picture element". Yes, Java has a lot of classes, so it's confusing when you're searching for the right one. Try again and get back to me when you find something that can draw to a computer screen. (hint: it can also draw lines, squares, circles, etc., etc., as I said before)

                  OO Programming in Perl is some bizarre hack they added on later. I don't entirely understand what they're doing, but then, I don't have to use it either. By "bizarre semantics", I'm not talking about syntax here--you can get rid of that with a tokenizer. The problems I have with Java go far deeper than that.

                  I agree with you about the networking--networking in C can get pretty nasty. I haven't done any database access in Java, but I know that a lot of work has been done in that area. My take on Java is that it's sort of a modern-day cross between Pascal and Visual Basic. It imposes bizarre restrictions on the programmer (in the sense that people used to complain about Pascal's strong typing, but far worse) and it tries to have tons of ready-made algorithms and classes built-in to make life easier, apparently to make up for deficiencies in the language that make it a pain to do these things yourself.

                  There probably are some things in the API that would make my life easier (but what are the odds that I would find them :), or more likely some third-party system library-like classes that hide some of the nastiness. But fortunately I don't have to program in Java for a living, and I don't plan on doing too much more dabbling with it if I can help it. However, I'll leave you to it. :)
          • The C preprocessor is quite possibly better than nothing, but it has some serious deficiencies, deficiencies which in my opinion make it little more than a handy toy for programmers.

            • Recursive macros are not allowed.
            • The entire macro falls into the caller's scope.
            • Macro expansion takes place on a textual basis, not a semantic one.
            • You have to be extremely careful about bracketing so as not to affect the meaning of the code which calls your macro.
            • There is AFAIK no equivalent to gensym.
            • Arguments to macros are multiply evaluated.
            • The C preprocessor does not provide a programming language, but a syntax for textual replacement.

            Several things on this list may have changed since the last time I worked with C macros, but the basic premise of the preprocessor is flawed. Instead of merely replacing properly formatted strings in source code (a facility which is occasionally useful), it should offer a way to process already-tokenized source code, take it apart and modify its functionality, producing new code as the result. These two may seem similar, but in reality are widely differing expectations.

            I suggest reading Paul Graham's On Lisp for an example of a much more effective macro system.

            • by pb ( 1020 )
              Sounds powerful, like C++'s templates, or closures.

              Of course, Java doesn't have any of the above, so even C's macro system would be an improvement, let alone LISP's macro system.
  • I like being able to mess with the hardware like I can in C/C++. Don't get me wrong, you need abstractions, but I think that C/C++ have it just about right. I can program without learning anything about hardware, or I can cast null pointers and crash it.
    • Try learning to use something like Prolog for real.

      Why should I have to specify what order to run the instructions in, how to do concurrency, and so on?
  • IMO (Score:4, Insightful)

    by Tumbleweed ( 3706 ) on Sunday October 27, 2002 @12:19PM (#4541660)
    The entire point of computers is to do things much faster than we can do them manually (or even make things possible that weren't before).

    Making programming easier and faster with higher-level languages contributes to that goal.

    There IS a trade-off in speed by doing things at such a high level versus, say, machine language, but considering the scope of most apps these days, it wouldn't be economically viable to create everything that way. Optimising certain bottlenecks in low-level languages is probably about the only common use programmers will have for low-level stuff in the future, except for certain special cases or very small applications.

    If you have a project that needs super-duper optimization, it might be better to concentrate on improving the compiler's optimization rather than writing your app in a low-level language. Keep in mind you have to maintain your code!
    • If you're a customer and you're stuck with a particular tool chain for a given device, then you're stuck with whatever your vendor provides for optimization. For many embedded devices, there are only one or two compilers available.

      Thus, your suggestion to improve the compiler isn't all that useful. The vendor needs to do that. Meanwhile, the customer either attempts compiler black-magic (eg. tweaking their C code in obscure ways to try to help the compiler) or reduces their code to assembly.

      For the vendor, to accurately measure the performance of the tools relative to what can be achieved with a given device, it is extremely useful to have hand-optimized benchmarks that represent "optimal" and measure versus them. (This is especially true for 'heavy compute' functions.) That is an activity I actually get to participate in -- I get to write some of the highly optimized assembly code that we benchmark our compiler against. I get to do that, though, because I work for the vendor. Our customers are stuck optimizing their code with no chance to improve the optimizer.

      --Joe
    • There IS a trade-off in speed by doing things at such a high level versus, say, machine language, but considering the scope of most apps these days, it wouldn't be economically viable to create everything that way. Optimising certain bottlenecks in low-level languages is probably about the only common use programmers will have for low-level stuff in the future, except for certain special cases or very small applications.

      I think that steps being taken towards using multiple languages in the same application (whether it be through library interfaces or something like Visual Studio.Net which lets you use multiple languages in the same executable or library) really help with this sort of thing. That way, you can use more abstract languages for the things that don't really affect your speed, and then go down to the nuts & bolts when you have to (even ILASM in .Net, though that's not AS close to the machine as x86 asm, which you could still put in an unmanaged dll if you needed it). Even developers like John Carmack, quite well known for breaking into assembly in the source code of his games, have said that newer applications need this less and less as the compilers get better and the processors get faster.

      That being said, I doubt Carmack's going to break out Visual Basic for DoomQuake4.
  • It's not as much about high/low level as about code reusability. Nowadays it's almost impossible to start anything completely from scratch, so languages offering better code reusability win. And high level languages generally offer much better code reusability (combined with good separation -- I can use other's people code together with my code easily).

    This doesn't necessarily mean high level languages will win, at least not quickly. There has to be enough code to reuse -- libraries, modules, or whatever it's called in the particular language. Thus older languages have the great advantage of the amount of existing code -- look at Fortran: it's ugly as hell, but people still code in it ;-)
  • by ajuda ( 124386 ) on Sunday October 27, 2002 @12:21PM (#4541671)
    It seems to me that any language, be it Java, C++ or plain old C will become more abstract on their own as people begin to use libraries and reuse classes and methods. Once someone writes some basic classes, he will write classes which use those, and so on until the classes which he writes are many steps above the original class in abstraction.
  • Program complexity (Score:3, Insightful)

    by smallfries ( 601545 ) on Sunday October 27, 2002 @12:25PM (#4541686) Homepage
    I don't really think that it's a trade-off between power and abstraction. It's more a case of expressiveness vs efficiency. All languages have the same power -- as long as they're universal and not some subset of a universal language.

    Expressiveness is slightly different, though. As you move up through the levels of languages, from machine code to imperative languages like C++ and then to functional or logic languages like Haskell or Prolog, you lose control over telling the machine how to do something and focus more on what it should do.

    Potentially this gives the compiler more scope for optimisation and leaves the programmer able to reason about more complex systems. Yes, you could write something like, say, a datamining or visualisation app in assembly language. But how much more effort would it be than doing the same in Haskell?

    These nicer abstractions actually make it easier and quicker to write more complex code (there's a hell of a lot less of it, for a start). I would think that there's still a level higher that we could go that would give a useful impact on productivity. The holy grail of language research is an abstract specification of what a program should do, from which an actual program can be generated automatically. This would allow complex systems to be verified more easily (and correctly), which are the kind of qualities that you need to move software from a scientific (artistic?) discipline into a mature school of engineering.

    Hopefully that would lead to more reliable systems, but this comes back to my original point about efficiency. In the long term it may be more efficient, rather than less, to use these levels of abstraction, as the sheer complexity of the systems we will be designing will stop anyone from 'coding them by hand'.
  • Tools (Score:5, Insightful)

    by Trusty Penfold ( 615679 ) <jon_edwards@spanners4us.com> on Sunday October 27, 2002 @12:31PM (#4541717) Journal

    When it comes to languages, the answer is use the right tool for the job. Low level languages will always coexist with high level ones.
    • Low level languages will always coexist with high level ones.

      Well, always is a pretty strong word, but otherwise you've really hit the nail on the head here. I think that the issue is not abstract vs. low-level, but rather how to keep the programmer focused on the task at hand (telling the computer what to do), while eliminating the mundanities (semicolon here, attach function A to class B using GUI tool blah...).

      I think the answer is layers (which is what we have currently) taken to the extreme. I'd love to see an environment where one can design the application in purely abstract terms, perhaps with a visual representation. When a program unit needs to be modified for enhanced performance, one could "drill down" into the next lower-level language to write custom code for the job. If that's not enough, drop down another level, etc.

      In a sense, this relationship already exists between many VM-based languages and the platforms they run on (Java and Lisp both have C/C++ at their core, for example). I'd like to see these relationships further fleshed-out and more accessible to the programmer.

      Now there are tools out there that are supposed to provide this sort of flexibility. If anyone has experience with any of them, I'd be interested in hearing about it...

        • It's hard to figure Lisp having C/C++ at its core when it existed long before C was invented.
        • by pb ( 1020 )
          Many Lisp implementations are now written in C--Film at 11!

          Believe it or not, we've gone a long way from car and cdr being register mnemonics.
          • CLISP is written in C. As far as I know, CMUCL and SBCL are written in Lisp (granted, CMUCL and SBCL are pretty intimately related). So two of the three big free Lisp implementations are not in C. To the best of my knowledge, the big commercial Lisps such as Allegro and Lispworks are also written in Lisp.
      • One of the benefits of C++ is that you can drill down from a level of objects sending messages to each other to void* shuffling bytes and operators that twiddle bits when necessary.

        C++ is a great idea, but it's a monstrosity of an implementation for historical reasons.

        Working in layers is the only sensible way to go, but I don't think you need to call out of one language and into another to do so.

        Both C++ and Lisp are models of building in layers within the same language.
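        As a minimal sketch of that kind of single-language layering (the Image class here is hypothetical, invented purely for illustration):

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // high level: objects sending messages to each other
        class Image {
        public:
            explicit Image(std::size_t pixels) : data_(pixels, 0) {}
            void invert() {
                // low level, same language: raw bytes and bit twiddling
                std::uint8_t* p = data_.data();
                for (std::size_t n = data_.size(); n != 0; --n, ++p)
                    *p ^= 0xFF; // flip every bit of every byte
            }
        private:
            std::vector<std::uint8_t> data_;
        };

        int main() {
            Image img(64);
            img.invert(); // callers never leave the object level
            return 0;
        }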

    • A single language designed from the start to do things at a high level by default, but that lets you drill down to C-level where needed -- but not to actual C -- to a replacement for C. Something intended to be compiled down to highly optimized machine code, not to a higher-level bytecode. Something cleanly designed from the start to span the range of high-level to low, rather than evolved to be that way, like C++. Something that you really *can* develop device drivers and operating systems and embedded apps as well as big GUI client apps with.

      C was quite elegant for its time, but it is no longer even close to the best we could do at that level of abstraction.

      And C++ was built by taking one good idea at a time and finding a way to shoehorn it into an existing framework that never anticipated the new stuff, resulting in the monstrosity of a design we have today, with more exceptions and gotchas than the US Federal Tax Code.

      For example, the utter confusion of arrays of bytes and strings of text characters is a fundamental flaw (today).

      C++ can't correct the flaw and remain a superset of C, so it just added a single-byte string class and then a wide char string class, which gives us byte arrays and char* and const char* and string and wstring, all with different APIs and rules and gotchas and no consistent conversion between any given pair...all in the same language.
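      A short sketch of that zoo, using only what ships with standard C++ (each OS toolkit then piles its own CString/QString/etc. on top):

      #include <cstring>
      #include <string>

      int main() {
          const char* a = "hello";   // C string: raw bytes plus a NUL terminator
          char b[6];
          std::strcpy(b, a);         // C API: manual buffers, no bounds checking
          std::string c = a;         // C++ narrow string: its own API entirely
          std::wstring d = L"hello"; // wide string: yet another type, and no
                                     // standard conversion between c and d
          return c.size() == d.size() ? 0 : 1;
      }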

      But wait! there's more. Since C's approach to strings (modern text data) is so poor, and C++ did something about it so late, every OS, class library, home-grown "portability layer", etc. in the world also invented its own string class, string APIs, etc. Something so utterly fundamental to almost every program is an utter mass of confusion in C/C++.

      Compare that to the purity of strings in Java or C#. Yet, unfortunately, you can't compile either of those down to the sort of nugget of machine code you can compile C++ into.

      Or consider something as fundamental as numerical types. How many bugs are caused by C's (and therefore C++'s) use of different definitions of char, int, wchar_t, etc., on every platform? I know why it was done, but I think fixed data types is the right choice for a general purpose language for today. Again, you can do this in Java, and have an even better selection of types in .Net, but those platforms aren't suitable for low-level coding.
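      For what it's worth, C99's <stdint.h> (later <cstdint> in C++) did eventually bolt fixed-width types onto the C family, though as an opt-in header rather than the default integer model being asked for here:

      #include <cstdint>

      int main() {
          std::int8_t   tiny  = -128;       // exactly 8 bits wherever it exists
          std::int32_t  count = 100000;     // exactly 32 bits
          std::uint64_t big   = 1ULL << 40; // exactly 64 bits
          // contrast: plain int/long silently vary between 16, 32, and
          // 64 bits across platforms -- the bug source described above
          return (tiny < 0 && count > 0 && big > 0) ? 0 : 1;
      }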

      Something like ECMA-334 (that's ECMA and soon to be ISO C#), which the spec allows to be compiled straight to machine code, could get pretty close to this. A small runtime to provide GC (even C and C++ have runtimes these days) could still be used, but could be overridden to do full byte-by-byte manual memory handling where needed. Bounds-checked arrays by default that let you turn off the bounds checking where you're sure you ought to, etc.

      If ISO C# can't be modified to allow this, then it could be forked into a language that could. You would have the high-level constructs combined with low-level byte manipulation of memory, files, and streams, capable of being compiled down to roughly what you would get if you wrote in C, but much more reliable and easy to maintain.

      A language that was designed from the start to span a range from high-level to low-level with a single string type, fixed numeric types, byte array manipulation primitives, regular expressions, gc that could be overridden, a syntax that is already widely known (C#/Java-ish), etc., would be a great replacement for C & C++.

      • We need a ... single language designed from the start to do things at a high level by default, but that lets you drill down to C-level where needed

        Why do we need a single language to do this? You can build a large project using a number of languages right now. In fact, I suspect most projects use more than one language.

        If you have a single language that can be used for both high and low level stuff then it will either be crippled at one or both of these tasks by the demands placed on it by the other(s). Or it will be so fragmented that you might as well be learning more than one language anyway.
        • Why do we need a single language to do this?

          It's not needed in the sense that development can't be done without it, of course, but it's needed in the sense of convenience and usefulness.

          Having a single language that spans high- to low-level abstraction allows you to choose the appropriate level in a fine-grained way. In other words, you may decide to drop down to "C-level" only inside the innermost loop of one compound loop within a single method. Having to pop out and call an external C routine or whatever is very clumsy in comparison.

          C developers are very reluctant to give up their byte-level fine-grained control, but as an initially small and simple project becomes successful and needs to grow, they almost always begin to create higher levels of abstraction. The small functions get called by larger functions that later get called by even larger functions (assuming development is kept clean and refactored.)

          I've seen a lot of C projects that eventually evolved their own homegrown varieties of OOP, generic programming, and all sorts of things.

          C++ has become very popular because a lot of the C programmers knew that they would eventually reinvent parts of it anyway, and not as well, but they weren't willing to give up the low-level control of C.

          Replacing C++ with a suite of languages, each with a different string type, different programming approaches, etc., is not a very attractive option for making life easier for C++ programmers. They'd rather have a "cleaner" C++.

          That's not to say that you can escape multiple languages. We'll still be using Java in the server to generate JavaScript for browsers, while talking to the database in SQL, etc., while doing system admin chores in Perl.

          I'm not proposing a single language that will do it all. I'm really looking for a better C, to be used in the problem space where C is the best choice today (drivers, high-performance video, database engines, OSes, etc.)

          But a better C really needs to be able to reach up to high-level abstraction easily, without requiring each app to reinvent things like OOP, which is one big reason for the usefulness of C++. For that reason the "modern C" I'm looking for really ought to replace C++ as well.

          If you have a single language that can be used for both high and low level stuff then it will either be crippled at one or both of these tasks by the demands placed on it by the other(s).

          I don't believe that's true, but I'd have to design and implement the language to prove it. ;-)

          Or it will be so fragmented that you might as well be learning more than one language anyway.


          No, I think you could have a single string type, a unified set of tools for manipulating byte sequences, a single regex syntax, etc. and cover most of the range covered by C++, and that would be much less fragmented than writing in multiple languages.

      • "For example, the utter confusion of arrays of bytes and strings of text characters is a fundamental flaw (today)."

        So write your apps in a higher level language.
        What makes C/C++ so great is the relationship between your char arrays and pointers. You have so many ways to work with them, and if you know what you're doing, you love the way C handles them. And why are you complaining about a string class? Write your own, and use it in your programs. That way you can write every high level string function you want in your class, and do basically anything with it. You are not supposed to write a homepage with C, there are other languages for that. If you use C/C++ in the way it was meant to, you'll find it very powerful and very "just the way you would like it to be".
        • What makes C/C++ so great is the relationship between your char arrays and pointers

          No, one thing that makes it great is the relationship between byte arrays and pointers. Those are not characters. They are bytes. They may be the encoding of any type of data: text, numbers, audio, video....

          A C-type way of dealing with bytes is needed, of course. That's separate from the need to deal with text. Things inside quotation marks ("hello, world") should be text, and treated as an abstraction one layer above raw bytes.
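          Later C++ did eventually grow vocabulary for exactly this split; a minimal sketch (std::byte arrived in C++17, long after this thread):

          #include <cstddef>
          #include <string>
          #include <vector>

          int main() {
              std::vector<std::byte> raw(16);    // bytes: pure storage, no text semantics
              raw[0] = std::byte{0xFF};          // bit patterns, not characters
              std::string text = "hello, world"; // text: an abstraction above raw bytes
              // the type system now keeps the two concepts apart; you cannot
              // accidentally treat the buffer as a string or vice versa
              return text.empty() ? 1 : 0;
          }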

          And why are you complaining about a string class? Write your own, and use it in your programs.

          And everyone does, because it was so badly fumbled by C's built-in "string" design. The fact that anyone who needs to go beyond 7-bit ASCII has to reinvent such a fundamental language feature is a dead giveaway that C's built-in approach is deficient.

          If you use C/C++ in the way it was meant to, you'll find it very powerful and very "just the way you would like it to be".

          I've done so for years, and my long experience with it and a wide range of other languages makes it abundantly clear that while the need for a bit- and byte-manipulation level language is still very real (C), and the need to be able to build up from that low level to rather high levels of abstraction is also very important (C++), the design of C/C++ is absurdly far from "just the way you would like it to be".

  • by Kevin Stevens ( 227724 ) <kevstev&gmail,com> on Sunday October 27, 2002 @12:37PM (#4541737)
    As I recall from my software engineering class, programmers program at the same rate in lines of code regardless of the language (I believe IBM did the study in the 80's, but don't quote me on that). Therefore, programming languages SHOULD be more abstract to increase productivity. It also comes down to the "reinventing the wheel" factor. The more bug-free features/libraries we can stuff into a language, the quicker we can produce bug-free code. The only problem is of course that abstraction comes at the cost of speed. How much more enjoyable is it to program in Java and not have to worry about cleaning up memory than, say, C or even assembly, where everything is a battle. I don't know about you, but I would much rather type create_new_window() than worry about framebuffers and things of that nature. Hopefully this can be accomplished while keeping speed up and code bloat down.
  • by Phouk ( 118940 ) on Sunday October 27, 2002 @12:41PM (#4541757)
    "Higher abstraction" for a programming language means it's farther away from the requirements and constraints of the cpu, and closer to the problem domain.

    cpu ------> abstraction ------> problem

    As a result, the more abstract language is often less efficient for the computer to execute, but allows the programmer to describe the problem to the computer faster. That is, it makes him more productive, in the sense of...

    productivity = features developed / time spent

    Now,
    • the amount of functionality expected of a program keeps rising and rising,
    • the cost of spending additional cpu cycles on more abstraction keeps going down and down,
    • programmer time stays at (very roughly) the same price.


    As a result, the "sweet spot" in the tradeoff between programmer time and cpu cycles now is with more abstract languages than it was 10 years ago.
    This is also why in the past the abstraction level of "mainstream" languages has steadily increased (machine language -> assembler -> macro assembler -> COBOL/FORTRAN -> modular/structured languages -> object-oriented languages).

    This is also why I firmly believe the abstraction level will keep going up, through stuff like:
    • stronger influence of functional abstractions on mainstream languages (e.g. having closures and higher-order functions; see the sketch after this list)
    • support for "stronger" abstractions through features such as design by contract, aspect-oriented programming, generics etc.
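    As one concrete example of that first bullet, here is a closure passed to a higher-order function in C++ (lambdas landed in C++11, well after this discussion, which rather proves the point):

    #include <algorithm>
    #include <vector>

    int count_over(const std::vector<int>& v, int limit) {
        // the lambda closes over 'limit'; std::count_if is a
        // higher-order function that takes the closure as an argument
        return static_cast<int>(std::count_if(v.begin(), v.end(),
                                              [limit](int x) { return x > limit; }));
    }

    int main() {
        std::vector<int> v{1, 5, 9, 12};
        return count_over(v, 6) == 2 ? 0 : 1; // 9 and 12 are over the limit
    }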


    I also expect more new languages to have dynamic instead of static typing, which is also a way to attain higher programmer productivity (especially for refactoring) at the cost of compiler/runtime efficiency.

    One more note: Before you argue against higher abstraction, please check if your line of reasoning could have been used as an argument for assembler and against higher-level languages. If so, maybe something is wrong with it...
    • Before you argue against higher abstraction, please check if your line of reasoning could have been used as an argument for assembler and against higher-level languages. If so, maybe something is wrong with it...


      But there are places where assembler is the correct choice. Each language has its place, yea and verily, even unto Visual Basic. (That place is nowhere near me, fortunately...)

    • I also expect more new languages to have dynamic instead of static typing, which is also a way to attain higher programmer productivity (especially for refactoring) at the cost of compiler/runtime efficiency.

      well, i agree with most of your well-reasoned post but disagree with you here.

      static typing actually improves programmer productivity by reducing error rates. however, it requires an advanced language for it to really work. for instance, in C one often has to "break" strong typing in order to implement a generic callback (for instance, g_hash_table_foreach [gnome.org] from glib). modern languages that allow for closures and multiple inheritance make strong typing viable.
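      To make that concrete: the glib function is real, but the toy rewrite below is hypothetical, showing how templates and closures keep a generic callback fully typed where the C version must erase types through void*:

      #include <map>
      #include <string>

      // C style: state is smuggled through void* and cast blindly; the
      // compiler cannot check that user_data really points at an int
      typedef void (*foreach_fn)(const char* key, int value, void* user_data);

      // C++ style: the callback's type travels with it, so passing the
      // wrong functor or the wrong state is a compile-time error
      template <typename F>
      void for_each_entry(const std::map<std::string, int>& m, F f) {
          for (const auto& kv : m) f(kv.first, kv.second);
      }

      int main() {
          std::map<std::string, int> m{{"a", 1}, {"b", 2}};
          int sum = 0;
          for_each_entry(m, [&sum](const std::string&, int v) { sum += v; });
          return sum == 3 ? 0 : 1;
      }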

      -- p
  • Abstraction in PLs is always the simplification of powerful extant patterns of current programming. For example, object-oriented programming was already in heavy use in systems such as windowing environments long before languages like Java and C++ incorporated it into their grammar. Polymorphism, templates, etc. also follow this paradigm.

    So as long as there are powerful programming systems in use that are not already part of a language, PL's are bound to become more and more abstract. Of course the abstractions of these systems will tend to bloat programs and may inhibit low-level control; so, PL's capable of low-level optimization (akin to C) will always remain when performance is a requisite.

    In keeping with the history of PL's, we will see the evolving languages charging ahead while their compilers try to keep pace: leaving their lower-level predecessors behind to tend to more obscure and specialized tasks.
  • by metacosm ( 45796 ) on Sunday October 27, 2002 @12:54PM (#4541826)
    I think the hard tie to a single language for a project is slowly going away. I think you are going to see more and more projects done in 3+ languages. You build your first revision in a scripting language (Python) and have it calling your database (PL/SQL); then, once most everything is working, you go in and check how it performs. You profile the slow bits and port them over to a quicker language (C++), using a tool to help you tie it all together (SWIG).

    I could have written this little made up example using (perl), (xslt+xml), (C), (h2xs) or one of a dozen other combos.

    The power of using a scripting language as a major component of your project is that you get rapid prototyping and easy extensibility. The advantage of using a lower-level language is speed and "access" to APIs of hardware you might need. Why anyone would feel the need to limit themselves to one group is beyond me.
    • ...and of course there's the often-overlooked facet of this, which is that oftentimes, for a given load and/or userbase, the higher-level abstraction is more than fast enough to do the job.

      If your current implementation can potentially handle ten times your current requirements, there's no need to rush into the lower levels.
  • Halted for some time (Score:3, Interesting)

    by photon317 ( 208409 ) on Sunday October 27, 2002 @01:11PM (#4541913)

    The progressive abstraction of computer languages slowed to a creep a long time ago. OO has been around for a very long time, just not necessarily in the form of C++. Essentially the CS world has settled on some mutual understanding that the range of abstraction around C++, Java, and Perl is a pretty good place to be, depending on how OO and whatnot you want to be (and of course we will always have the ever-enduring C for simpler and systems programming), and we can't seem to come up with anything better that's got more useful abstraction than that.

    There's been a dream of a useful and successful 4GL for many many years now, and from time to time someone claims they've done it, but it's a shoddy system that isn't flexible enough and too proprietary (comes with its own crappy OS just for that programming language, etc). A 4GL (4th generation language) is supposed to supremely abstract away the need for code altogether, or at least try to. In my idea of a proper 4GL, programming would consist of composing one well-structured XML document describing the objects your problem domain deals with, what they can do, and your business rules for dealing with them. It should be something a non-technical person who understands the business can write with a little help from a helper GUI. From there, 3GL (C++, Java) source code, GUI elements in whatever, middleware servers, database design and SQL code should all spring forth on their own. But like I said, so far the 4GL has been a pipe dream; we seem to have reached a point where it's going to be very difficult to get much further without figuring out true AI first, which is some ways off.
    • Well, the academic programming language community doesn't agree with you. We still publish papers and do research and develop new programming languages and new programming language ideas. You might say, "computer programmers are very slow to adopt new programming languages and ideas," and that would be true. But that doesn't mean that computer scientists aren't still working on more abstract, more expressive languages!

  • Programming languages are just going through the normal stage of evolution. Early oral cultures had little to no abstraction because the 'language' didn't support it. Then the invention of writing offered a quantum leap in understanding, letting people realize that both an oak and pine were 'trees' (a previously meaningless word).

    Programming languages, while not following exactly the same paradigm, are definitely following the same thread. The more advanced they become, the more advanced people who use them become... allowing things to be done in multiple ways and bringing in a new wave of thinking about programming.
  • by gregor-e ( 136142 ) on Sunday October 27, 2002 @01:28PM (#4541997) Homepage
    Programming languages are human languages that allow the communication of a solution from one human to another in a language that computers are just barely smart enough to understand. As computers grow more complex, we can expect that they will be better able to grasp increasingly human-comfortable languages. It is easy to imagine a programming environment in which a developer talks to the computer, saying "remember that inventory program we did for SparkleCorp last year? Well we need to do one just like it, only they want data warehoused audit trails between them and their sister company." and the computer will hack something up that is a first-order approximation.

    Computers double in complexity every 18 months. Programmer productivity doesn't. The only way to get programmer productivity to keep pace is by augmenting programmer intelligence with computer intelligence.

  • Only small and simple processing platforms are good candidates for an all-Assembly solution nowadays.

    Show me someone who can effectively handle all of the ins and outs of modern Athlons, P4s, Xeons, and G4s, and I'll show you someone who's smart enough to spend their talents on writing compilers instead of individual applications.

    I'm sorry, but normal humans cannot beat optimizing compilers on modern processors. Coding to the metal on the high end isn't a viable option anymore.
  • Even now, with highly abstract languages, we still have computer languages that only trained users can understand. In my company, we are always trying to make applications where the business logic can be changed by business users, so that, say, the finance guy can log into the system and change the process for paying bills. This is because as highly regulated/complex industries move more processes to computers, they need business-person control over their systems. It's very easy for a bank to have a business analyst change an online form because a regulation changed; it's harder to call up the program vendor and rewrite the program. The problem is that there needs to be an easy-to-use language for describing the new form. Currently many such languages are business domain only; expect them to grow and become more powerful; soon a business person will be able to put together an entire web application from "scratch" with a good framework.
  • by MarkusQ ( 450076 ) on Sunday October 27, 2002 @02:48PM (#4542370) Journal

    there has been a steady increase in the level of abstraction they use

    So, C is more abstract than LISP? And C# is more abstract than Haskell? Or is this "steady increase" just an artifact of how you choose your examples?

    Imagine you are hosting a party. The first person to show up will either be taller than you, or shorter. And (if the people are showing up in a random order) there is a non-zero chance that the next person to show up will either be shorter than both of you, or taller than both of you. As new guests continue to arrive, we should expect the height of the tallest person present to go up, and the height of the shortest person present to go down.

    I would argue that the same thing is happening with programming languages. We are not just seeing higher and higher level languages, we are also seeing lower and lower level languages (e.g. so-called programmable hardware, PLAs, etc.) at the low end. More than anything, we are seeing a steady increase in the number of FORTRANistic languages that dwell somewhere between BASIC and C.

    But no grand trend in any particular direction.

    -- MarkusQ

    • So, C is more abstract than LISP? And C# is more abstract than Haskell?

      Right - high-level languages such as LISP have been in existence for a long time. Mostly just in the research community, though.

      On the other hand, look at the "mainstream" languages that get chosen most often in a business context. Compared to COBOL / FORTRAN, (even) Java and Perl represent progress...
  • Abstract languages have been around for a long time. Kernighan calls them little languages, and he has described them several times in his books. The examples of abstract languages appear endless: snobol, awk, troff, tbl, pic, eqn, ampl, ... When it comes to problems in their domain, I would always rather use a little language.
  • Bad Thesis. (Score:3, Insightful)

    by bellings ( 137948 ) on Sunday October 27, 2002 @04:48PM (#4543091)
    Early languages were all very low-level, but successive generations have become higher and higher.

    The first language was FORTRAN. The second was LISP. Your premise is fundamentally flawed -- languages have not been getting higher and higher level. And before I get any spelling flames, I should point out that back in 1959, the names of both languages were still capitalized like that.

    What has been happening is that generic support for useful abstractions has been slowly creeping into our languages. It seems that about once every 10 or 15 years the limitations of the current languages to express those abstractions becomes severe enough that people are willing to make a jump to the next generation of languages.

    During the 70's and early 80's, ALGOL-like languages, like Pascal, C, and FORTRAN 77 predominated. From the mid 80's through the late 90's, C++ apparently reigned supreme. Now, in the late 90's and early part of the 00's, we're seeing Java and C# move into the forefront of the developer's mind.

    I am loath to call any of those languages high level. C++ added generic support for several OO ideas. Java and C# have added garbage collection and much better support for runtime linking.

    But in the end, all of these languages are still fairly low level. The biggest thing that has changed is the overwhelming size of the languages' standard libraries, and each operating system's runtime libraries. We've learned a lot about what programmers need to do in the last 50 years, and we've encapsulated a lot of that knowledge into standard, reusable libraries. In return, those libraries have grown huge.

    Think about the size of the libraries available to us -- the KDE libraries, the Win32 runtime library, the suite of standard ActiveX controls available on Windows, the huge Java standard library, CPAN, or the new DOT.NET framework. These are where the advances have been made in the last 50 years, but with a price.

    In the 70's, one programmer working a few months could have implemented an entire robust optimized copy of the C library himself, down to the syscall level. A good programmer could intimately understand the entire library in a matter of weeks. Today, it would take dozens of programmers working years to implement the Java or DOT.NET libraries. There is probably no-one who can honestly claim an intimate understanding of any of them. We've reached a point where the standard library is bigger than any one person can understand. And that is probably the biggest thing that is going to impede the development of more complex, useful libraries in the near future...
  • Opinion... (Score:3, Insightful)

    by Pseudonym ( 62607 ) on Sunday October 27, 2002 @07:06PM (#4543737)

    OK, wild speculation follows...

    I think we're going to see languages move in two directions: higher and wider.

    The "higher" languages will be designed for bigger abstractions. A lot of these will compile down to today's high- and medium-level languages. Most will be domain-specific. We've always had examples of this (e.g. parser compilers such as yacc, ASN.1 compilers and so on; there are plenty of these for high-level languages like Haskell, such as happy and Strafunski too), but I think we'll see more as we go on.

    The "wider" languages will be designed to support higher-level abstractions directly by providing the basic building blocks to the library writer. We see this in C++ template libraries, Haskell combinator libraries, Lisp macro libraries and so on. They will not be as good as "high" languages in their specific domains, but they will be generic enough for "normal" applications, plus they will have the benefit of not requiring a whole other compi.

    • Re:Opinion... (Score:2, Interesting)

      by Vader82 ( 234990 )
      Very correct. There exists a project now with the goal of making it easy to create PHP websites. It is called Enzyme [sourceforge.net].

      It is essentially some source code (the XML and templates) that compiles (gets looked at by the set of PHP scripts that write scripts) into PHP and is then run on the target machine. The "compiled" PHP is then again compiled and run when it is needed.

      All this abstraction comes at a price. It does slow things down. The benefit is that once the "compiler" is written, many generic PHP sites can be created with just an afternoon spent thinking about the requirements, writing the specs, and testing the system. The next day, off to the usability study, and within 2 or 3 days the customer has their site, ready to go.

      The other benefit is that rather tedious but necessary steps can be tossed in quite easily. It's much easier to use cleartext passwords, but once someone has written the JavaScript and proper PHP to handle client-encrypted passwords, it gets incorporated into every project, not just the ones written by the guy who really knows his stuff.

      Essentially what is happening is that really smart people are making it easier for less smart people to make computers do exactly what they want, and that is a good thing(tm).
  • I remember the good old days when I coded small games for my Atari. It had a 68k CPU with 1MB of RAM (yes, I bought the top notch, I even had it upgraded to 4MB later on).
    I coded in asm and C; in order to save RAM I had to use several sprites and compose them into one character (for example, a gnome and an elf are holding a torch, a sword or a bag: simply put the hands of the characters in fixed locations, use one torch, one sword and one bag to overlay on the character sprite, and save loads of memory, i.e. a few kBs at most :P )
    A proper use of structs even provided a primitive OO interface (I had a C++ book, but no compiler)...
  • by vsync64 ( 155958 ) <vsync@quadium.net> on Monday October 28, 2002 @04:04AM (#4545760) Homepage
    Early languages were all very low-level, but successive generations have become higher and higher.

    I don't buy this. Explain why Common Lisp lets me do this, for example:

    (defparameter *settings-file-location*
      (make-pathname :directory '(:absolute "etc" "monkey")
                     :name "settings"
                     :type "conf"))

    (defun save-settings ()
      (with-open-file (settings-file *settings-file-location*
                                     :if-does-not-exist :create
                                     :if-exists :rename
                                     :direction :output)
        (prin1 *settings* settings-file)))

    Java, 18 years later, requires code like the following to approach the functionality of the previous snippet:

    import java.io.*;

    /* We have to shove EVERYTHING into a class.
       A singly-inherited one, no less. */
    class Settings {
        static File settingsFile = new File("etc" + File.separator +
                                            "monkey" + File.separator +
                                            "settings.conf");
        // is static initialization order even guaranteed?
        static File settingsFileBackup = new File(settingsFile.getPath() +
                                                  ".bak");

        // the actual serialization step was left undefined in the original post
        void dumpSettings(OutputStream out) throws IOException {
        }

        public void saveSettings() {
            boolean backedUpSettings = false;

            if (settingsFile.exists()) { // get the old one out of the way
                settingsFile.renameTo(settingsFileBackup);
                backedUpSettings = true;
            }

            FileOutputStream fow = null;
            BufferedOutputStream bow = null;
            try {
                fow = new FileOutputStream(settingsFile);
                bow = new BufferedOutputStream(fow);
                dumpSettings(bow);
                bow.close();
            } catch (Exception e) {
                if (bow != null) {
                    try {
                        bow.close(); // file descriptors aren't garbage collected
                    } catch (IOException ignored) {}
                }
                // now put back the old file
                if (backedUpSettings) {
                    settingsFileBackup.renameTo(settingsFile);
                }
            }
        }
    }

    Note that this Java code loses on systems like Mac OS, which store the file type somewhere besides the filename.

    Or how about Common Lisp's condition system, which allows execution to actually continue where it left off once an error is corrected? What about MAPCAR, or DO and DO*? Heck, what about first-class function objects?

    Of course, try getting a job using Common Lisp, or any other decently abstracted general-purpose programming language today...

    BTW, Slashdot inserted the spurious semicolons in this post, not me.
  • I believe that the next generation of languages will contain the following new(ish) abstractions.

    Data encapsulation:
    All data on the system is encapsulated with some metadata describing its type, origin, rights, source, destination, etc.
    This will give a huge increase in security, and allow for radically different programming models.

    Profiling JIT compilers:
    JIT compilers that profile your code, and use the profile data to occasionally perform a re-optimized compile.

    MPP:
    Applications will be written to take advantage of MPP. As chip manufacturers start adding things like hyperthreading to their CPUs, and it becomes more efficient to build a PC with 8 CPUs than with one fast one, coding practices will change to allow applications to run more efficiently in MPP environments.


  • I tend to find the current movements too "code-centric". I find that one can factor (move) much of the complexity of an application into a relational database.

    You can get virtual and ad-hoc views of your "noun model" without physically shuffling around code and code structures. Code is too physical, rigid, and one-dimensional for my tastes.

    Note that I honed most of my database techniques on "nimble" desktop database systems of the 80's. The "big iron" DB's like Oracle can learn something from them to make their systems (optionally) more nimble IMO. Their formality has scared off a lot of people into the code-centric realm. I am not saying go back to those 80's systems, but simply look at what they did well and carry the lessons over to the big-iron DB's.
