
The Fastest Web Language On The 'Net?

Posted by Cliff
from the and-they're-off dept.
TheCorporal asks: "Our company has come to the point in our development where we feel it is time for a recode. We are rewriting code for a large multiplayer, browser/text, turn-based strategy game (like Utopia), and would like to know the best language solution in terms of speed. A rapid development platform would be nice, but most important is the speed of execution." There's more below, but the question is simple: which language is the swift hare of the 'net, which one is the toiling tortoise, and where do the others fall in between?

"Basically, we are not experts in any one language, but have quite a bit of C/C++/Perl experience. The target platform will be Apache on *nix, but a portable solution would be good. At the moment we have the engine coded in C for CGI, which then interfaces with MySQL to store game data. We are thinking of hacking in FastCGI support for a good boost in speed, but we feel a complete recode will be necessary, as the number of players in the game will soon be hitting 5 figures."

"At this point we pretty much know CGI alone is out of the question from a speed standpoint, so we are looking for something a bit more robust. We have heard that mod_perl may be a good solution, but have also heard the same for Python, PHP, C++, etc, so if anyone has experience with dynamic content like this, and has some suggestions and comments as to the merits of your choice, we would appreciate it."

Meanwhile, on the other side of the galaxy, slartiblartfast asks of his improbability computator, a similar question: "I have been wondering for a while if anyone has some really good metrics on the relative performance capabilities of the different scripting languages. By scripting languages I mean Perl, PHP, JSP, ColdFusion, ASP etc and by performance I mean how many pages can each one serve per second for the same hardware and load test? Every benchmark I have seen was commissioned by the creators of the technology that eventually won the test. i.e. The guys implementing the technology that won just happened to be on the core development team for the product. Now I just can't swallow that sort of thing, so I thought I'd ask here. Has a truly independent test been done that didn't favour one technology over another, or that at least invited the best from each area to build and optimize the site to be tested?"

Careful. There are lies, damned lies, statistics... and then there are benchmarks. It's a quote that's been seen often enough here on Slashdot, but it still has its own bit of wisdom to impart.

  • I know how fast it runs. I wrote an HTTP server, and in its early versions the file-sender was a constantly restarting CGI-like process, as in FTP. Performance increased significantly when I switched to a process pool model, so I added a module interface that allows modules to have their own process pools when necessary, isolating modules from each other.
  • As a module, of course -- either for Apache (native or through FastCGI) or for my fhttpd (which I wrote with a protocol that looks a bit similar to FastCGI, but allows more complex processing models).
  • by Nate Fox (1271)
    May I suggest using a shell interpreter of some type? I like bash, but csh may be your calling since you guys have some C/C++ background. Forget the PHP/Perl/Python thing. Just use the URL and/or cookies to store variables! No need for a database, either! I say make the client do all the work. :-D

    If Bill Gates had a nickel for every time Windows crashed...
  • ed is the standard text editor, after all.
  • Out of curiosity, were you trying to write monolithic applications using thousands of lines of VBScript within ASP?

    Or were you doing the intelligent thing, writing all your logic into compiled VB/VC components configured within MTS and then called by a few dozen lines of ASP code?

    There's more to it than that, but...

    Can you say DUH?
  • Actually, both mod_perl (available from ActiveState) and mod_php (as of PHP version 4+) are available for Win32 now.

    Check out ActiveState's web site for the perl module, and for the Win32 binary download.

    Of course, it's been my experience that both run much better in a Unix environment.
  • Some people that call themselves the esperanto group use JSP and Corba [].

    Since we are joining database languages and Esperanto, I'll mention Visionyze's Esperant [], a tool for generating database queries, but there are no promises of speed.

  • OK, for raw speed I'm going to keep clear of scripted languages and leave that to others...

    Yes, Java can be very fast, especially with some of the latest Hotspot JVMs. In fact Java can be faster than C++.

    C is generally much faster than Java. C++ can be just as fast, providing you don't use certain constructs, such as virtual functions. If you do then you will be slower than Java on Hotspot.

    Of course Assembler is the fastest if you concentrate on the coding (as others have pointed out). However, this becomes quickly infeasible if a group of programmers are working on the project. A good compiler is MUCH better to use. Hence you should just use assembler for bottlenecks (meaning that you're pretty much stuck with C/C++. Let's not get into JNI).

    As for the framework, others have pointed out the cost of CGI. mod_almostanything will probably be faster than the tightest assembler (depending on the ratio of hits to the complexity of the task). For sheer invocation speed, you can't go past modules. That usually means Java or a scripting language, but there's nothing to prevent you from writing the program as a binary module.

    So it's an engineering decision between trade-offs.

    • If the program is long and complex enough then the cost of CGI is probably small. If instead, it is a simple program then modules will allow for more hits.
    • If you want a module then is the time and complexity of writing a binary module in C or C++ worth the speed advantage provided over scripted or Java modules?
    • If you go with a module framework then will Java really provide the speed increase over mod_perl, mod_php, etc, that you want? Some tests here are in order.
    • If you use CGI (because the application is complex enough that invocation cost is negligible) then will the OO advantages of Java and C++ make it worthwhile for the team to program in these languages? Otherwise C is the fastest option (with assembler tuning of bottlenecks).
    • If using CGI and you choose Java or C++, then are you after code-reuse, etc? If so then you're possibly going to use the C++ features that make it slower than Java. Alternatively, C++ also allows for easy Assembler optimisation of bottlenecks.

    If speed outweighs everything, including complexity, development time, cross-platform issues, and cost, then I recommend building your own Apache module in C and optimising bottlenecks in assembler. If complexity, development time, cross-platform support, etc, are still important, then I'd probably go with Java with a fast module, but I'd do some performance tests first.

    All of these languages are great (including the scripting languages I haven't mentioned), but they need to be used where they are most appropriate.

    Just my 2 bits worth.

  • Actually, the best would be in assembler as an Apache module. Apache modules are nice in that they don't have to be forked/execed - they just run right in the Apache process.
  • [] is an opensource EJB2.0 container.
  • Yes, but using gcj eliminates all of the benefits of JSPs. Namely, you lose the benefit of having a common JVM across multiple calls to the code. This means you can't pool objects (JSP does this automatically) or pool resources, such as database connections (DB pooling comes for "free" with J2EE implementations, such as the free one from JBoss []).

    All told, if you pick a decent JSP/Servlet engine (JRun or Resin perform about 3x better than Tomcat), you'll find the JSPs run just a hair (about 10%) slower than the fastest dynamic page systems, mod_perl & mod_php. With the object oriented benefits of Java, and the ability to separate presentation logic from business logic using JSP tag libraries, JSP/Servlet engines are an excellent choice.

    By the way, I wouldn't discount ASP+, either. While ASP has a lot of problems, ASP+ has learned from a lot of previous mistakes, as well as from all the benefits achieved from JSP/Servlets. Of course, it will tie you in to MS solutions (bad! bad!), but it's a very good piece of technology, nonetheless.

  • So you compile your app server with gcj? OK. So I suppose you need an open-source app server. Try JBoss, or Enhydra. Ooops. They both depend upon libraries that gcj doesn't support.

    OK. Let's say you get past that hurdle. So now you want to rework the servlet framework so that it is compatible with .so libraries. Hmmm. While you're at it, why don't you simply rewrite an HTTP server that loads these mythical .so servlets? Your suggestion is ridiculous.

    Apps can't tell if they're running natively vs. running in a VM. But the servlet spec depends upon a dynamic class loader. And, lo and behold! gcj doesn't support that (a dynamic class loader is different than a shared archive, I'm afraid).
  • Yep. I run Tomcat 3.2 (yes, that is the current version, and yes, Tomcat 3.1 was still about 1/3 as fast as JRun & Resin) on my website, where all pages are dynamically generated. Mainly because I'm too cheap to buy JRun or Resin (Resin, btw, is free for personal use).

    But JRun & Resin both outperform Tomcat 3.1 & 3.2. But Tomcat 4.0 is supposedly a lot better. I simply can't rely upon it until the development is a little further along.

    And, I can't say enough good things about JSP tags. They are the "right" way to do dynamic web authoring.
  • The link: JBoss [].

  • Excellent post, thank you. It's Insightful, Interesting, Informative, and Underrated!
  • You might be surprised how fast, and in how little memory, a C-language CGI runs. Of course, if
    you are doing lots of DB queries each instance, you would benefit from a persistent app.
  • Even assuming developers can even agree what a "reasonably intelligent" program is, I don't think most applications are reasonably intelligent, if for no other reason than design goals change.

    For example, what they have right now may have been a great design if it was intended for a small audience. Small, simple and fast. But if the goal shifts, all of the sudden the current design is no longer right, even though it's still "reasonably intelligent".

    Even assuming two similar designs, the language doesn't make that big of a difference. Even going from the slowest to the fastest is only a factor of two or three. And if it's only a factor of two or three, generally the effort is better spent on hardware, which speeds up everything and not just one application.

    We live in an age where orders-of-magnitude growth is common. This will change, but until then we have to plan for incredible growth, and a simple language change alone isn't going to cut it.

  • To make up for my transgression, and horrible editing in my post, I'll add my few:

    * We're running a site with a few million hits per day, and we're considering switching from Apache to IIS because we heard it was faster. Thoughts?

    * Should we move from BSD to Linux because we can buy CDs at our local drug store?

    * Bruce Perens or Natalie Portman?

    * What's the better breakfast cereal: Hot grits or Glorious MEEEPT?

  • Put the database and web server on different machines, then monitor which one gets more load.
    You will also want to monitor the amount of traffic
    between the database and web server, in case you are
    "selecting too much" and processing too much in the CGI.

  • Seriously, what kind of advice is 'use good design'? ... Given two reasonably intelligent programs, the choice of language makes a huge difference in the speed of an application.
    Umm, I think you're missing Sheldon's point. He's not really saying "use good design", he's saying "know your design first". This is vital when determining what language to code your program in.

    Chapter 3 of The Practice of Programming by Brian W. Kernighan and Rob Pike is devoted to exactly this kind of situation. Knowing how your project is/will be structured can help you choose the right language. This can also lead to shorter, crisper programs and more understandable code. Why write a simple one-line regexp in Perl when you can write the same thing in a few hundred lines of C?

    A program designed well but written in the wrong language can run horribly slowly. The same is true if you're using a language that you're not very familiar with. Use a language you know, and use a language which fits the design of your program. Taking these two things into account, I feel, yields the best results.

    So, I think the best solution comes when you add your comment with Sheldon's. Given two reasonably designed programs, the choice of the language makes a huge difference in the speed of the resulting application.

  • JSPs are pretty robust, but slower than pure servlets (which are also robust)


    JSPs are compiled into servlets. Admittedly, they may be slower than intentionally-coded servlets, but after the first call, they are servlets.

  • I believe the database server and other libraries that you call out to make a much larger impact on speed.

    We have an application constructed in a similar fashion: JSP/Servlet in the presentation tier, custom appserver for object persistence and business logic at the backend, talking to Oracle. Under load, the cpu profile was 70%/25%/5%.
    Apparently, all the string manipulation in the presentation layer (parsing requests and generating responses) had a much higher resource requirement than the pure-object or database components. Oracle wasn't even sweating, but we've worked hard not to make it do so, given the expense of JDBC.

  • If you're dealing with a web application interfaced with Apache, communicating with clients through the network, with data coming through a relational database, then it's almost certain that the speed difference of C or C++ simply won't make any difference.

    You've got to focus on what causes the performance bottleneck, and address that.

    In web based applications, there are generally three bottlenecks:

    1. CGI. The time that it takes to invoke a CGI script is generally quite significant. The load time generally dwarfs the execution time. If you can use a system that allows you to avoid the loading cost, you're winning, even if your computation is a couple of orders of magnitude slower.
    2. Database Access. Performing an operation against a relational database is IO intensive, and thus time consuming. Again, the time spent on database related IO and marshalling/demarshalling usually dwarfs the time spent computing using the data.
    3. Network communication. Often the most significant bottleneck of all. If your user is submitting stuff via http, then they see round-trip times for each submission that are seconds long. For most applications, the time savings of C vs Perl (which are quite large) are utterly unimportant when you realize that they don't matter once considered against the intrinsic latency.

    So the real advice is: choose a language that handles your bottlenecks best. If you are really doing something computation intensive that takes a long time in Perl, then it's worth writing in a highly efficient language, like C++, or OCaml.

    But if your bottleneck isn't really computation, then you might be much better off using something else. If it's CGI load time killing you, then writing in Java (with Jakarta), or Perl (with mod_perl), or Python (with mod_python) will be significantly faster than a C++ program loaded through CGI. And any of the other languages will likely be easier to use than C++.

    If it's database IO time, then leaving the language alone, and redesigning your tables to optimize database access may payoff much better than changing languages.

  • The only problem with this is that you need to deploy at least two, maybe more, languages. For the ASP pages you need to write JScript or VBScript, and for your COM objects you need to write them in C/C++ and then deploy them on MTS. You are most likely going to need two teams to do this project.
    Using Java or mod_perl you could do it all in one language and save the cost of the other team.

  • Actually php 4.x has native session methods too. I use them and they work very well.
  • I just wanted to point out that..

    You can write loadable modules for PHP in C. This gives you the tremendous advantage of native C speed while using a scriptable interface with a very rapid development cycle. The functions you write in C will be visible to the scripter like any other PHP native function. Combine this with the built-in database pooling and you've got a killer solution.

    If the PHP scenario is not scalable, go to WebObjects. It's also a very robust, well tested, easy-to-develop environment that handles scalability for you. By using Objective-C you get compiled speed with a framework where all the hard work has been done for you.

    As long as you are thinking outside the box, consider Kylix or Delphi. Go talk to the astatech [] folks and they will sell you the middleware from hell for a dirt cheap price. This thing truly rocks (I am not associated with them in any way, just a happy customer). You get RAD development, native compilation, great database support, the ability to write Apache modules, a killer IDE and debugger. Really now, what else could you ask for?

  • "With CISC complex instruction, often 1 long to execute instructive can get into the chip, and halt everything until it finishes."

    There are CISC machines with ILP. Pentiums have 2 pipes, and Athlons have (IIRC) 5.

    "Either way, the coder is only able to access the 8(AIX,BIX... etc...)"

    That's EAX, EBX, etc. But, there are more available: mm0-mm7 (which are really the floating point registers), and xmm0-xmm7, which are 128-bit registers for SSE and SSE2.
  • So if everyone really knows what they're doing (cross fingers), go with Perl, because you cannot get that much expressiveness in any other language. If you think your development skills would benefit from additional structure, go with C++.

    Generally good advice. But this is a special case.

    The questioner already has working code, and wants to recode it in another language to speed it up and perhaps "iron" it into cleaner form for future enhancements.

    In such a situation you can inherit much of the organization and concentrate on speed. Porting to a language that's enough different from the original to bring your attention to things as you port (rather than making a mechanical translation), but not so different that you have to totally reorganize or implement a LOT of replacements for language builtins, will probably give you the cleanest result. Or if you're already in the likely fastest target language, sit back and look over the existing organization to see what can be improved.

    C/C++ tends to be the fastest in those environments where it's appropriate, and IF you can find the right stuff in the standard libraries/class libraries you probably won't have to implement a lot of replacements. Let's assume for the moment it would be a good choice. Since this is a port for speed, it would be a good time to come to a real understanding of the underpinnings of the language, so you can squeeze the most out of it.

    But before you dig in, try instrumenting the existing system and find out where its bottlenecks are. You may find the real problem isn't the language, but some other aspect (like API delays or database time). If that's where your time consumption or latencies are, you'll have to fix or replace them to get your improvement anyhow. If the bottlenecks aren't inherent in the existing language (neither the language itself or its API requirements), you might find you can fix them and leave the bulk of the code just as it is.
  • Were you lucky enough that when you were finished you got to dump it to one of those comically large 8" floppy disks? Or did you have to output your results to paper tape?
  • by passion (84900)

    If you add the Zend optimization engine, it's even faster if you're doing a lot of loops and such.

    Yeah, I like Zend too, but there are other PHP caches available as well, such as:

    Haven't played with them myself, but I've heard plenty of good things about APC.

  • I have a question: how do you test the speed of the script vs the database vs the web server?

    The main problem I have is knowing what the load looks like. Should I just pull back pages as fast as possible and measure CPU use?

  • To "ed" simply reply "ex"

    That way you get all the vi people on your side by default as well. ;-)

    "I may not have morals, but I have standards."
  • mod_perl and mod_php are most likely going to be your optimal solutions. They are very well optimized for high loads, and they retain the code in memory for faster execution. In terms of Perl, mod_perl is MUCH faster than FastCGI, but there is a larger memory requirement. mod_php is IMHO the better way of going, but then again, mod_php is only available on the Unix platforms, which defeats your cross-platform desires (I'm not sure if mod_perl is on Win32 boxes or not). PHP alone is not capable of handling the demands that you are talking about, but because of the way that mod_php caches and retains the PHP scripts in memory, it is also a faster solution: you make fewer requests to the hard drive, which causes small but necessary speed increases. However, your overall best bet is probably some kind of content distribution, pushing the content closer to the users regionally; server clustering is probably a must.
  • Don't forget that Zend costs money in a commercial environment; that is where the speed lies with PHP. I would say write the whole thing in C and set it up as an Apache mod. A good book is Writing Apache Modules with Perl and C, Stein & MacEachern, published by O'Reilly. Good luck.
  • And you should also try to write your software with fewer bugs. And if you're in a boxing match, try to land more punches than the other guy.

    The question though is analogous to a boxer wanting to know what gloves to use to win boxing matches. Maybe there's an advantage to lighter (or heavier) gloves, or a specific stitching, or the leather used or whatever, but the fact that the boxer asked the question suggests that maybe they don't know they should be landing more punches.

    So, the advice "use good design" is excellent. It's a tactful way of suggesting that the code they're using isn't all that crucial. Moving to a 5 figure user base (and congratulations to the article author) requires a better design, of which a production plan (of which language choice is just a part) is just a part. I think the best part of the advice given is that it doesn't sound as much like chiding a fledgling as it probably should.

  • OK, maybe an exaggeration. Programming in straight machine code will give you the fastest and tightest code...when you finish in 2007!

    C is probably the best compromise, but design is crucial--go read "Mastering Algorithms with Perl" or something similar to remind you how important GOOD coding for the situation is, rather than concentrating on the language.

  • Apache is a great general purpose web server, but if you really need all-out performance, it is not the answer.

    Probably the highest performance architecture for an all-dynamic-content site is a very small, simple, single-process httpd (say based on thttpd), which connects (via sockets, shared memory, or fd-passing) to one or more persistent server process(es) dedicated to your task (probably written in C).

  • In the last 6 months or so, eWeek had a shootout between scripting languages... PHP spit out 61 pages per second, I believe, putting it on top in the speed category, above ASP and JSP. Can't seem to find the article on their site, though...
  • More matters than just language. If you were running on an NT/2000 machine, you would be best off using either ASP/COM or ASP.Net (crippleware, anyone?). But being on a *nix platform, you would probably be best off going with a Java servlet solution.
    Java and the J2EE are not inherently any faster than any other technology, but they have a huge advantage because of vendor support. Everyone's scrambling over each other to provide the best and fastest implementation of the J2EE platform, and the results are trickling down to free servers.
    A good example would be the Orion J2EE server, which, if benchmarks are to be believed, is far and away the fastest solution. Find it at: []
    I use it myself, and can vouch for its ease of use and speed, though I haven't done any benchmarking.
    Check out some amazing benchmarking figures for it at: []
  • Ooooh, we all bow in awe of your fantastic insight. Not bothering to listen to people with experience, or to back up your claims with any sort of benchmarks, you use the well-known technique of prejudice. Because Java is interpreted (which it isn't), and even though you admit it is not interpreted, then it is effectively interpreted. Not bothering to explain what this means, you go on to mention another language that you (wrongly) say is interpreted, namely Basic. Then, without mentioning dialect, or implementation, you want us all to "feel intuitively " that Basic is interpreted, and also that Basic is slow, and that therefore all interpreted languages are slow. Now, since Java is (by your claims) effectively interpreted, it must also be slow!

    I feel so in awe of your grasp of logic!

  • You'd have to be stupid, then. Perhaps you aren't stupid enough!
  • Nope it isn't. Hand-coded assembler will always win. A human can look at the assembler output from the optimizing compiler, find one spot to improve, and win. Also, by coding in assembler, you will often find some tweaks and clever tricks that it would be hard to describe in a high-level language, and that requires modification of the algorithm used, and therefore will never be discovered by an optimizing compiler.

    It is a misconception that compilers are now so good that people can't beat them. If most programmers had spent as much time with assembler as they do with high-level languages, most programmers would easily beat an optimizing compiler.

    But of course, writing anything substantial in assembler is just stupid. Almost as stupid as having this discussion, which also reduces (at least my) productivity...

  • First off, I just need to reiterate that RISC stands for (Reduced Instruction) Set, not Reduced (Instruction Set): the instructions themselves are quick to execute. With a complex CISC instruction, one long-to-execute instruction can get into the chip and halt everything until it finishes. I believe this is why overclockers get burned. But with RISC commands, the chip is able to quickly handle each one, and on most modern architectures it is able to handle 3-4 RISC commands at the same time. It is able to re-order some of the assembly commands during runtime to maximize the number of things going on at once. Additionally, it is also able to optimize register allocation.

    I was recently told that the PII chip has 40 registers, not 8. I believe the chip dynamically looks ahead and tries to optimize the best use of those 40 from the visible 8, but it *might* occur in the compiler, though that seems less likely. Either way, the coder is only able to access the 8 (AIX, BIX... etc...)

    If the same register is being used for many commands, but the output of one command is not needed for a future command that uses the same register, the chip will switch the future command to use a different register. So mastery of assembly does not guarantee faster programs; one also needs to know what chip they are working on, and how it will attempt to optimize the code.

    Yes, I'm offtopic, but compilers are cool. Now for a little on-topic stuff. I come from the school that says when hitting web pages, the database hit tends to be the thing that slows things down. The solution I have used for this problem is to load as much of the database into memory as possible. Max max max out the RAM on yer server cluster, and fill up those JavaBeans. Naturally, with a large game you can't keep everything in memory, but just by keeping the most frequently used things in memory you should be able to increase performance. Once somebody logs in, store their relevant info in a Bean. I have been brainwashed into using Java as the solution, but I'm sure you could do it in another language like C++ or Perl. But I don't think it can be done in a scripting language such as PHP, ASP or Python.

    Worst episode ever.
  • I could not agree more. I just spent 18 months developing high-speed online database access using binary CGI (not a script, but an executable). There are so many varieties of options available out there. After my first dev cycle I was very distracted by trying to find something faster. I spent a total of about four months trying to find a better language/system/drag-n-drop solution. I found that they all have strengths and weaknesses. In the process I wound up completely redesigning my original CGI in its original C, and the end result came out way better than the other options could have done. It's way more than ten times faster after a good redesign. My only regret is that I wasted four months chasing a red herring.

    Also, as a result of redesigning it, I am intimately familiar with the entire system now. I also designed it so it scales easily: just get another box and another IP. It even registers itself with the DNS. I haven't even turned on compiler optimizations yet; I'll save that for a rainy day. And then I'll hand-roll assembly if I have to panic. Which won't happen, because of the scalability.

    Which brings me to a point that I'd like to contribute. I find that it makes me a lot more comfortable to know that my code has room to improve, so I can get it when I need it. As opposed to, "there's no way I can make the system go any faster, I already have it optimized as much as it can be".
  • the speed solution is not to recode as a script, but to move the app from cgi to a loadable module.
  • Here is a link to a pdf [] that has real performance comparisons between scripting languages and c/c++. It has productivity figures as well.
  • You can pretty much eliminate any interpreted language (e.g. Tcl) and web script (e.g. PHP, ASP, ColdFusion).

    I wouldn't rule out interpreted languages, because you're not going to be doing any "heavy lifting" in them anyway.

    For example, with ASP, the goal is to use ASP *only* for formatting and user interface. All the "heavy lifting" (i.e., computationally expensive stuff) should be done in another tier, written in precompiled COM components in C/C++ or some other language. Sure, ASP is slow, but if it's only doing 1% of the total work, you're okay. Also, it's a good way to separate interface and design from the game logic.

    A similar strategy could be used with any interpreted language that generates web pages... Java, Cold Fusion, etc. []
  • [Man, what a bitch it would be to try to code in Magyar...]
    Man, at first you said "Marglar" - now THAT would be a bitch!!

        $marglar = "Marglar!";
        marglar "Hello there. What is your marglar?\n";
        $you = <STDMARGLAR>;
        marglar "Hello, $you. Marglar to $marglar.\n";

    (Slightly obscure South Park reference...)
  • If you're also looking at using a traditional database behind the scenes, then take a good look at Delphi / Kylix. Delphi creates the fastest web apps while still allowing applications to be developed quickly.

    I'm neither an expert on web development nor objective (I work for Borland). But this point needs to be amplified.

    TheCorporal wants RAD, as long as it doesn't affect speed. Delphi is the Windows RAD tool, and it generates tiny, fast native executables. Its Linux sibling, Kylix, is also fast and has a particular emphasis on Apache support. Both products support apps-as-shared-libraries, so you get the extra oomph of an in-process server.

    Both have first rate database support, but plenty of Delphi programmers write non-database apps. I expect Kylix to be the same.

    Oh yeah, and if you want fast web apps, you want Internet Direct [], AKA Winshoes. Now open source, works with Delphi and Kylix. Naturally, it's on the Kylix CD.

    You also get a fast change-and-test cycle. Both Delphi and Kylix use Pascal, a language that only takes one pass to compile. Plus the Borland Pascal compiler is the fastest on the block, being the current incarnation of the venerable Turbo Pascal compiler. So a Delphi/Kylix build is shockingly fast -- so fast that new users often think the compiler is simply broken.

    If you can't live without multiple inheritance and operator overloading, Borland C++Builder is the way to go. Same technology, but of course compiling C++ takes a lot more work! No, I can't tell you when the Linux version will be out.

    I hear someone saying, "Yeah, but Borland won't be around in six months." I've been hearing that for 10 years! Enough already.


  • I've heard that. Not my department, but I believe somebody is working on it. In the meantime the fix is available from the link I put in my previous post.


  • Compile it into Apache if you want to break speed records. Create an Apache module to run your stuff and compile it right in. It is not going to run any faster than that.
  • heh.. when circumstances forced me to code on a Windows box (not my choice), I used notepad.exe long enough to write my own primitive editor. Then I used my new editor to make itself better. Now I have a nice multi-document graphical editor with cut/copy/paste and find/replace with regexps.
  • Why stop there?

    Assembler CGI scripts IMO are the fastest most efficient web scripts that you can write.

    You can write them, because I wouldn't want to write Assembler CGI scripts, but they would be efficient.

  • What do you mean zend costs in a commercial environment? The underlying zend engine used by PHP is free! The zend cache product costs money, but is definitely not needed by PHP, and there are free open source alternatives to boot. PHP/Zend is most definitely free in commercial settings. We've got loads of them running, and didn't need to pay a dime in licensing fees.
  • The fastest language in execution speed will probably be the one your programmers are most familiar with. C or Assembly would be the fastest when optimized, but it is also possible to code a really slow implementation. If your programmers start with a language they are familiar with, then learn a bit about optimizing that language, then they should do alright.

    Having said that, it sounds like several languages would support your application, and different parts would be best implemented in different languages. A glue language with the ability to call other languages (C, C++) may be good for the central parts, and you can farm database work out to a database engine (MySQL). This way, you can take advantage of speed gains in either domain (C++ speed optimizations for low-level execution, MySQL for quantum database improvements).

    This will also force you to think about the design a little more, as well as the interfaces. With good interfaces, the implementation can thrash around a bit, but everyone can still be doing their work, and you can prototype a lot faster.

    Of course, I haven't experienced your situation. Why don't you write the developers at Utopia, see what they use, and if they would change anything about their language of choice?

  • by PHr0D (212586)
    Let's not forget about Fortran! [].. Oh, you said for the web.. right.. *cough*.. Oh look, here comes my bus..

  • It's a common misconception that Assembler is faster than C. Good compilers know how to group instructions together so that they execute faster on the given processor. It's quite hard to do by hand.

    Compilers are not so dumb as in the times of 8088, and the processors are not so simple anymore.

  • When I have a really critical piece of code, I write it in both C and assembler, then look at the compiler output and mix and match.

    I generally come up with a more efficient overall representation of the algorithm, and can do other optimisations based on extra knowledge, and the compiler comes up with a better way to use registers, etc.

    The combined code is usually about 15% faster than either of the originals.

  • I agree, Java has to be a serious contender for your short list, the portability/platform independence will be a serious advantage longer term.

    When you consider the convergence between STBs, consoles and web-pads, Java is the only realistic platform for these, and medium term you'll get support for plenty of other client devices for very little or even no effort.

    The ability to use the same language across the development, on the server side and a range of clients, will make resourcing the various elements of the system easier. All your Coders can work equally well on any of your tiers, Server, middle or Client.

    Some will disagree, claiming Java is too slow; however, this is not really true (I use Java to implement an interactive digital TV service).

    This is especially the case when compared with the realistic alternatives, particularly server side.

    The very lightweight nature of Java threads makes it highly responsive (low latency) compared to CGI processes. Add a decent JIT and it can even out-perform C/CGI.

    Java skills are pretty common and increasing (and look good on your CV :)

  • hell, do it in 0's and 1's....


    Most software shops can't realistically develop in assembly. It's simply too difficult to trace bugs in the code, and there are too many ways for bad developers to shoot themselves in the foot. Odds are that the percentage gain they'd get in performance from coding in assembly would be offset by the time/cost of writing the software.

    Whatever it's written in, good design is most important anyway.

    All being said and done, it sounds like you should write it in C++ with an optimizing compiler for the best performance, although Java servlets or a Java server would be the easiest to write, and quicker to develop without losing too much performance.
  • Errr, I have been coding in Java for the last 2.5 years, and I have to say, Java has some advantages, but it has some pretty serious drawbacks as well:
    anytime you use Java, you lose speed. JSPs are pretty robust, but slower than pure servlets (which are also robust); applets (over the net) tend to be slow, because it takes an age and then some to download a medium-sized applet; and EJBs (the last time I used them) were less than reliable.
    The advantages in design, implementation, deployment, and reusability / consistency are all valid.
  • Bollocks. Java GUIs run like a three-legged lame pig in mud, but Java server-side code (which obviously does not need a GUI in it) runs comparable in speed (if not faster in some cases) to C++ server-side code. Plus it is easier/faster to code and maintain.
  • As of Win32, I thought C++ was just as fast as Assembly.
  • by root (1428) on Thursday March 15, 2001 @01:44PM (#361369) Homepage
    Machine code? Geez, you wanna go part-time on us, you just say the word...

    I hand-filed gears, sprockets, cogs and pistons for my own Babbage Difference Engine, arranged for shipping for thirteen metric tonnes of high-grade coal from China, and blew my own glass cooling jackets from Nova Scotia beach sand. The result is the fastest goddamned shopping cart program on the net.

    Wheels and gears? Bah! I have ancient texts filled with spells and incantations to do my problems. Other answers can be found in tea leaves, wax, reading sticks tossed on the ground, or in tarot cards.

  • by PCM2 (4486) on Thursday March 15, 2001 @11:25AM (#361370) Homepage
    I concur. But you should point out that developer efficiency is a good thing as well. For rapid application development, he should seriously consider using a wire-wrap kit, rather than etching the circuit boards himself.
  • by ewhac (5844) on Thursday March 15, 2001 @10:56AM (#361371) Homepage Journal

    Assembly? Geez, kids these days. Back in my day, we entered machine code directly, entered in octal by toggling address and data switches on the front panel and hitting DEPOSIT NEXT.

    (Better mod this down; "Can You Top This?" cascades can get out of hand...)
    (And no, I'm not kidding, I really did fiddle with IMSAI and Altair boxes...)

  • by Lumpish Scholar (17107) on Thursday March 15, 2001 @11:05AM (#361372) Homepage Journal

    "An empirical comparison of C, C++, Java, Perl, Python, Rexx, and Tcl" []

    Kernighan and Pike's The Practice of Programming [] (reviewed here []), especially chapter 7 on performance

    This comparison [] (just popped up from a Google search).

    Obvious advice: Measure your current system, find out where it's really spending its time.

    If programmer productivity is irrelevant, you'll be hard pressed to beat well-written C. (And if wishes were horses, beggars would fly.-)
  • by jilles (20976) on Thursday March 15, 2001 @11:29AM (#361373) Homepage
    And the people who do know about it don't realize they are not addressing any performance bottlenecks by using it. Get over it: native compilation does not solve many Java performance problems. There's a reason native compilers such as TowerJ and gcj are not used that often: the performance gain is not that big, and sometimes not even there!
  • by austad (22163) on Thursday March 15, 2001 @11:01AM (#361374) Homepage
    PHP is fast and runs on just about any platform. With the use of the ADODB abstraction layer, you can easily switch databases if needed instead of changing a ton of code. It has built in session management, but you can easily store your session data in your database, then you can just add identical servers as your load goes up, and you don't have to worry about connection persistence across servers, so it's extremely scalable.

    If you add the Zend optimization engine, it's even faster if you're doing a lot of loops and such.

  • by hugg (22953) on Thursday March 15, 2001 @11:06AM (#361375)
    There's a language called Moto [] that compiles directly into Apache C modules. If you cache your database calls properly, you can supposedly get 1000's of hits per second. Probably not for everybody, but if you need extreme speed... you need it!
  • by SEWilco (27983) on Thursday March 15, 2001 @11:46AM (#361376) Journal
    Better yet, design your own microprocessor for the game and write the entire thing in microcode.

    Well, it certainly makes for fast software development time when the entire program is:

    But the firmware development and maintenance will take some time.
  • by Ted V (67691) on Thursday March 15, 2001 @11:00AM (#361377) Homepage
    Having spent 10 years developing software, let me assure you that your greatest speed gains come from the algorithm design, not the language used.

    The best example I have is from 2 years ago when I worked for Motorola. I wrote a simulator that performed a large file transfer with a real device on the other side. The simulator was also responsible for multithreading other tasks from the real device at the same time (although the program only used one unix thread to do this). We wrote our simulator in Perl [] and the actual device ran compiled C code.
    It turned out that our interpreted Perl code sent packets to the C program so fast that the hardware running the C code crashed. We literally had to cripple our Perl code so it sent the data at a slower rate.

    That said, I firmly believe that it's far more important to choose a language that best suits your development abilities and choose language speed second. C++ and Java are great languages if you want to be forced into object oriented development, and sometimes that's what you need. Personally I love perl, but learning how to write clean perl code is extremely difficult (though rewarding).

    So if everyone really knows what they're doing (cross fingers), go with Perl, because you cannot get that much expressiveness in any other language. If you think your development skills would benefit from additional structure, go with C++.

  • by stuce (81089) on Thursday March 15, 2001 @11:05AM (#361378)

    There was an article [] here on slashdot that compared four different scripting languages. From the standpoint of speed PHP came in first. PHP has a reputation of being the fastest web scripting language and, to be honest, is a joy to program in as well. If this is not enough speed Zend [] sells a PHP cache that will precompile all your pages to speed things up even more. I believe there is a free version of the PHP cache out there but I don't know it by name.

    And before you use MySQL please read this []. MySQL has a reputation of being the fastest open source database but it really can't scale like Postgres can.

  • by smoondog (85133) on Thursday March 15, 2001 @10:57AM (#361379)
    If your tables are *huge*, MySQL may not be the best solution; not sure on performance, but it is something to check. Your language is not the only thing you need to consider. You should also consider the DB engine and the server platform. Why recode when you can purchase more hardware? :) Might be cheaper.

    Servlets are quick, well supported and popular.

  • by shaper (88544) on Thursday March 15, 2001 @12:06PM (#361380) Homepage

    * Is there any kind of text-mode visual editor on unix ?

    Must resist... must resist... Ahh, to heck with it, the obvious answer is

    vi, of course. :-)

    FLAME ON! Ducks, runs...

  • by cs668 (89484) <> on Thursday March 15, 2001 @10:57AM (#361381)
    People say java is slow. I can write really fast Java code :-)

    It really depends on how well you know the language and environment you are working with. If you pick up Java and go to town it might be slow; same with Perl, or C. As an example, you are trying to change your C execution environment by using FastCGI. That has nothing to do with the language C, but with the way that the client communicates with that C code.

    You need to come up with a good plan from front to back and then pick the language or languages that will make it happen.
  • by tokengeekgrrl (105602) on Thursday March 15, 2001 @11:04AM (#361382)
    I have worked with both of them and they are both resource hogs. They work fine if your traffic is somewhat limited, but neither scales very well in my experience.

    And I would definitely consider upgrading the database to something more robust than MySQL.

    - tokengeekgrrl

  • by Darlok (131116) on Thursday March 15, 2001 @10:59AM (#361383)
    The question you're asking about what is the fastest CGI language is sort of a loaded one. Different languages excel at different things. Will you be accessing a relational database (which one?), will there be only one server or will it be load-balanced over several, etc etc etc? Heaven forbid you're storing something of this magnitude in on-disk flat files, but if you are, well, that needs to be considered too.

    Perl is multipurpose, but won't win many road races for much of anything. PHP has ease of use, but its database support (even with pconnect) and performance in general are not the quickest unless you're hacking the Zend optimizer by hand. Python is getting closer, but it's still not the fastest. ASP isn't either.

    You're identifying the right problem, but IMHO, asking the question wrong. I'd identify and measure the speed of your underlying technology first. Depending on what you're doing, the script may not even be the bottleneck! (Though it's hard to say with the amount of info provided.)

    Either way, good luck!

  • by joto (134244) on Thursday March 15, 2001 @09:21PM (#361384)
    Well, that's pretty scary. I don't have the report here at home, but I'll certainly look it up later. However, from my experience Java is certainly not such a hog. While the JIT enthusiasts might be overdoing it a bit (sometimes even claiming performance is better than C++), the java-bashers (wow, there I go ad hominem again...) who usually only rely on prejudice are a lot worse. Until I've actually read the article, I presume this must be a very extreme case that they were lucky to find in the JIT implementation (and that would easily be spotted by a profiler, so that it could be rewritten).

    My experience with Java is limited to some (very minor) work on a very large data-acquisition system (and I really do mean large, both in terms of code and in terms of data). Sure, we had some performance problems, most of them due to programmer faults, some of them due to the scale of the project (the previous version was written in C++ and had performance problems as well, and here we experienced the all too typical second-system effect), and a few extremely annoying problems with the Java implementation, mostly related to garbage collection.

    Well, to be fair, we didn't really use Java for the data-intensive stuff, but it was pretty amazing what we found ourselves doing in Java. Often prototype implementations turned out to be good enough, and never needed a rewrite in C++. The main realization was that Java is certainly not as slow as some people would have it. A rule of thumb would probably be 2-3 times slower at worst (unless we are talking about Swing, which really is a hog), but this was two years ago, and the situation might have improved even more.

    I am not a big java bigot. But that's not because of performance. I'm perfectly happy with java's performance (although such extreme cases as you pointed out definitely needs fixing). What I don't like about java is mostly syntactical (it's too much like C/C++, and it doesn't allow you to use macros for abbreviating common constructs). I also miss complex numbers, generics, and easier interfacing with code written in C, C++, assembly, or Fortran. What I do like about Java is that it usually results in readable code (surprisingly often, don't really know why...), relatively ok performance, garbage collection, and javadoc. Which basically means that java probably would be one of my favourite choices for a language when working as part of a large team, or taking over someone else's code, but not for my personal hacking pleasure. Hmmm, come to think of it, that's a pretty good recommendation, but I think I still prefer Ocaml, Beta, Common Lisp, Smalltalk, Mercury, Haskell, C, Python, or Ruby for hacking pleasure...

  • by f5426 (144654) on Friday March 16, 2001 @03:00AM (#361385)
    Bob wrote:

    [snipped explanation about why java and OO is best solution there, and why it can't work on bsd due to kernel limitations]

    You are clueless. OO is not a silver bullet for development. What fits you well may not be good for this guy. Hell, we don't even know what its problems are !

    Anyway, stop this insanity about the non-preemptive FreeBSD scheduler lacking spinlock reference counting in its module destructor, as it was solved months ago!

    (And no, the myth that unix command names were chosen based on intestinal noises has already been debunked elsewhere. For instance, fsck was chosen for a different reason.)

    Maybe I just bit an ignorant troll, but I can't let you spread your bullshit all over slashdot.


  • by f5426 (144654) on Thursday March 15, 2001 @11:03AM (#361386)
    Put it in the kernel. Writing it in assembly would be a plus too.

    Okay, this is a stupid answer, but the question was stupid already:

    > we feel it is time for a recode

    The poll:

    "Everything is running like crap. You've got a cold feeling". What is it?
    * time for a recode ?
    * time for an ask slashdot ?
    * time for an upgrade ?
    * time to check that the 'turbo' button is pressed ?
    * time for hiring good engineers ?
    * time for a profiling session [<- hint] ?
    * CowboyNeal ?


  • by Saint Aardvark (159009) on Thursday March 15, 2001 @11:04AM (#361387) Homepage Journal
    Machine code? Geez, you wanna go part-time on us, you just say the word...

    I hand-filed gears, sprockets, cogs and pistons for my own Babbage Difference Engine, arranged for shipping for thirteen metric tonnes of high-grade coal from China, and blew my own glass cooling jackets from Nova Scotia beach sand. The result is the fastest goddamned shopping cart program on the net.

  • The biggest problem that I have with JSP/Servlet/EJB development is that very, very few people using those APIs actually do a decent job of implementing a "great OO design," and all the additional overhead built in to the J2EE platform just bogs the thing down. The whole point of the J2EE spec is to create an environment in which the programmer can't screw up a transaction, write non-thread-safe (or even multi-threaded) code, or hack together their own improvised patch for a certain type of database, browser, etc. You might as well be writing in Cold Fusion for all the freedom and flexibility you get.

    For a really large application, with dozens/hundreds of developers, hundreds of thousands of users, and millions of transactions being processed regularly, the over-engineering of the J2EE framework can pay off. For anything that totals less than about 50,000 lines of code, or that doesn't need a lot of built-in industrial-strength transaction processing and legacy system integration, though, it's just overkill. And every time your JSP wants a single variable from an EJB, something like the following happens on the backend:

    1. Client makes request for http://someserver/content/dyn_page.jsp?sessionId=" xjfoi490fijs"&username="foobar"...
    2. Web server forwards request to J2EE app server
    3. App server checks security context, last update time of JSP source file, request data, etc.
    4. JSP, running as a servlet, receives the request data
    5. The output contains a value pulled from an EJB, so a JNDI lookup and CORBA call is performed to locate the EJB server
    6. A stub proxy for the EJB is loaded, and a CORBA connection is opened between the servlet and EJB servers
    7. The method request is made, and its result is serialized, passed over IIOP to the servlet container, and deserialized
    8. Finally, the servlet finishes writing the output, and returns control to the web server, which replies to the client

    And yes, in theory J2EE apps are portable between application and web servers, as well as underlying operating system. However, that assumes that every vendor supports the full spec, (which almost no one does) that they use the same version, (which they certainly don't) and that the developers can resist using any of the oh-so-tempting add-ons, native libraries, and convenience methods that each of the app server vendors dangle in front of them.

    Finally, JSPs are just about the biggest letdown of any dynamic web tech I've used. They actually discourage the separation of static content, dynamically-updated portions, and application logic. You get an equal amount of support for OO design in ePerl, and have to jump through far fewer hoops. If you want compiled "add-on" components, use the Apache module APIs (in C, Perl, Python, etc.). Both the development process and the finished application will be faster and easier to maintain, and won't require a wall of brand-new Sun Enterprise boxes to run.

    And yes, I know whereof I speak. My last major programming project was a J2EE-based web application that, though fairly well optimized (with a lot of quick shortcut code, PL/SQL procedures handling much of the business logic, and Apache providing all of the static content), could bring a brand-new four-processor Sun to its knees when all ten people in the office tried to "load test" it.

    My advice to those who want a high-performance web application toolkit is to do what developers have been doing for a long time: find a starting point that already does some of what you need, and build on it. Don't drop $50k on a license for WebLogic if 85% of its functionality is going to go wasted.

  • by vodoolady (234335) on Thursday March 15, 2001 @11:22AM (#361389) Journal
    And you should also try to write your software with fewer bugs. And if you're in a boxing match, try to land more punches than the other guy.

    Seriously, what kind of advice is 'use good design'? I've heard so many people spout this pretty obvious goal as wisdom, and then go on to point out that stupid solutions run slowly no matter what language you use. Given two reasonably intelligent programs, the choice of language makes a huge difference in the speed of an application.

  • by mongooze (316829) on Thursday March 15, 2001 @10:57AM (#361390)
    While CGI is fine and dandy, the absolute quickest solution would be to generate a collection of static HTML pages for every possible combination of variables... granted, this could take some time ;)
  • Yeah, that thought just crossed my mind. You can use the apache module API to make a module, in C, C++ (recommended), or Perl (using mod_perl as the means) so that requests don't leave Apache's memory space. The CGI slowdowns are 1) starting a new process (if needed), 2) passing data to the process, or 3) "interpretation" of a scripting language (whether in a separate process/memory space or embedded in apache).

    Writing your own Apache module will get rid of two of those three, and the third if you stick with C/C++ for your module. Note that you should still follow good design principles; the mod_whatever should just be a mechanism for getting data into and out of Apache and the code that implements your application. The module is not the application; it's the means to get Apache to exchange data with your application.

  • by SpanishInquisition (127269) on Thursday March 15, 2001 @10:48AM (#361392) Homepage Journal
    it is the fastest
  • by Natak (199859) on Thursday March 15, 2001 @10:59AM (#361393)
    Well, if you want top speed you can only get that from a compiled development platform. Most web environments have grown up as interpreted solutions in order to make changes easier (good old internet time). So if you care most about speed you want to look at a couple of options: first is creating your own ISAPI extension if you're looking at NT, or your own DSO if you're looking at Apache. You can code either of these in C. If you're also looking at using a traditional database behind the scenes, then take a good look at Delphi / Kylix. Delphi creates the fastest web apps while still allowing applications to be developed quickly. There are tradeoffs if you take a compiled approach (like having to restart the web server when you make a code change). There are many in-betweener type solutions you may want to look at, like ASP or FastCGI.
  • by Anonymous Coward on Thursday March 15, 2001 @11:01AM (#361394)
    It's obvious from your post that you have no idea where the bottlenecks are. Before you go making arbitrary changes, do some real profiling of your app. What does gprof say? What does your network testing show? How much time is spent in each component?

    Without real tests, your changes are likely to have little or no effect on overall performance.

    Texas: all your electricity are belong to us

  • by mo (2873) on Thursday March 15, 2001 @11:02AM (#361395)
    Although it's nice to speed up your program execution with changes like cgi to fast-cgi, good design will benefit you the most.

    What's a good design? Write your code in a way that you can run it on multiple servers with a web redirector in front of it. Try not to depend too much on fancy SQL logic, as it is difficult to scale your databases. Instead, try to stay out of the database as much as possible, and when you do have to use the database, split up your schema such that it wouldn't be that hard to run multiple database servers. Another good thing to keep in mind with MySQL is not to do overly complex queries. MySQL flies with simple selects on indexed fields. Extremely complex updates can really tie up your database.

    Now that you understand good design, how do you code your cgi end? For ultimate speed, you could do apache modules written in C, but mod_perl is only trivially slower and much easier to develop. One stipulation is that if you are getting deep into the guts of apache with things like internal redirects or many layered handlers I'd advise using C, but it doesn't look like you'll be doing that.
  • by the red pen (3138) on Thursday March 15, 2001 @11:20AM (#361396)
    I'd like to combine this recommendation with the other high-rated recommendation about design.

    Many "web languages" are page-centric. PHP, and ASP are like this. Other "web languages" take application languages and tie them to a page-centric mode. Mod-perl does with as does ASP+COM. For a lot applications, this isn't really a problem because the application flow maps nicely to the page flow. When the application does things which can be presented on a web page, but whose behavior is not easily modelled in a page-view manner, then you start to see kludgy implementations.

    Java allows you to code in a manner appropriate to the part of the problem you are solving. If you have, for example, a game-play engine that runs in the background, you can easily spawn a Thread for it that will run just like any other Java Thread without any limitations due to being a "web program."

    This allows a design where the game engine is nicely abstracted and isolated from the front end. This also makes it easier to have a team of people in charge of making the game cool for users and another team making the gameplay itself cool.

    On a side note, EJBs can impose a lot of infrastructure and programming overhead that's unnecessary if you don't need the services of a full-blown Component Transaction Monitor. You can frequently do what you want by using regular Java classes or Java RMI.

  • It all depends on what you're serving. If there's a lot of static pages, or pages in different languages, then Apache is the best mediator involved. This is certainly true of serving unchanging images.

    But if everything you do is going through the equivalent of "CGI", then forget Apache. HTTP is far too easy a protocol to implement (hell, it's the protocol used for lots of "embedded" servers in stuff like Napster and Shoutcast). Implement your own HTTP server where all requests automatically go to an engine for processing directly, and take Apache and all that configuration out of the loop. You'd effectively have two servers running: an Apache server to handle throwing images and static pages around, and a second home-grown server that directly serves up the application data. Doing this won't change the fact that your database engine is your primary bottleneck, but it will reduce all other bottlenecks by quite a bit.

    Apache is a general-purpose system, and does its job pretty damn fast, but for a true special-purpose system it's best to implement your own special-purpose server.

    The "embedded server" for Java follows the same principle. Maybe W3C [] has some implementation code in C that may prove useful.

  • by Moe Yerca (14391) on Thursday March 15, 2001 @10:54AM (#361398) Journal
    I don't know about speed numbers (everything I've done server side has been extremely fast) but development time is great with JSP/Servlet/EJB. It's easy to build a great OO design, implement it, and deploy it on gobs of web/app servers. It's really a shame Sun is giving Java such a bad name around hard core GNU/Linux peeps. It's such a pretty, robust, fun environment to code in. Try it. You'll like it. Or you'll vomit.
  • by SheldonYoung (25077) on Thursday March 15, 2001 @10:55AM (#361399)
    In reality, language choice has much less of an impact on the speed of an application than the design does. Even a language that's twice as fast can be ten times as slow with a bad design. Some languages make certain designs easier to express, so just pick a language that lets you design the way you want.

    The *first* thing you need to do is make sure the design is right. No matter how fast the language is, the number of new users and new features will outstrip any incremental improvements. Even if you make it three times as fast, eventually there will be three times as many users.

    The only lasting solution is to design it so it scales. If you don't, you'll be chasing the increasing load, praying that incremental optimization and faster new hardware will keep you ahead of the curve. If you build a successful site, they probably won't.

    Consider Slashdot a classic case to study.

  • It's a common misconception that assembler is faster than C. Good compilers know how to group instructions together so that they execute faster on the given processor. That's quite hard to do by hand.

    In fact, it was research to that effect, a few years ago, that led to the development of RISC machines.

    A good assembly programmer could still outdo a compiler when he really focused. But the compilers knew MOST of the tricks, and applied them consistently everywhere. In competition with assembly programmers - even good ones - the program that had been through the compiler normally came out significantly ahead.

    Given this, and the greater portability of things like Unix (which was mostly in C with some minimal assembly where needed), assembly code was mostly dropped except where it was unavoidable (like OS routines to get the stack arranged after an interrupt so you could get back into C).

    But given that the compiler was generating essentially all the code anyhow, it made sense to design computers with simplified ("reduced") instruction sets, rather than extended ("complex") sets of feature-prone instructions. Sometimes it would take several RISC instructions to do the work of a CISC instruction. But the compiler could generate it, so it was no skin off the programmer's nose.

    With the compiler to do the work, a RISC computer could be very simple internally. This meant it could be very small. That meant the parts could be close together, so it could run faster with a given technology, and that it could be moved to a faster technology sooner, when the production yield for a BIG chip was still too low but the yield on a SMALL one was adequate.

    The extra instruction fetches were a problem. But instruction caching kept the inner loops in the cache, so there was still a big net gain.
  • by fayd (143105) on Thursday March 15, 2001 @11:35AM (#361401)
    Design is a major issue when talking performance, but there's more to it than that. The poster mentioned using MySQL on the backend. That means there's quite a bit of work to do before we even start mentioning design.

    Someone with a fair understanding of data analysis needs to go through and figure out what the data storage needs are. Now pick your database: MySQL is reasonably fast for small databases on small machines, but it reaches its breaking point relatively quickly. My experience indicates that PostgreSQL is the next step up the ladder. With a user base in the 5-figure range, I would run Postgres on its own machine and watch it closely. If it seems to have problems keeping up (and you're not on too small a machine), you'll have to start looking at a big database (e.g. Oracle).

    Also, the hardware you're running on has some performance implications. Do you have a large amount of physical memory? The more information you can keep in memory, the faster your system. How are your disk file systems laid out (NFS? RAID?)? Then, when you do have to go to the file system, how you've resolved these questions will affect performance.

    Now we can talk about languages and delivery mechanisms.

    You mentioned keeping an eye towards portability. Unfortunately, there are trade-offs there as well. If you want speed, portability is your enemy. Java and Perl are great languages (I use them and recommend them often), but they are relatively poor performers. You can pretty much eliminate any interpreted language (e.g. Tcl) and web scripting language (e.g. PHP, ASP, ColdFusion).

    The heavy lifters are still C and C++. But even if you write your CGI in C, you're still incurring the CGI penalty (which is very expensive).

    If you insist on using Apache, then start by writing an Apache module in C or C++. Even faster than that is to skip Apache altogether and write the entire server yourself. You want this to be web delivered still, which is fine as the HTTP protocol isn't too difficult to implement.

    Once you've figured all this out ... NOW you can start your design.
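    The "CGI penalty" mentioned above is mostly process start-up cost, and it's easy to measure. A rough sketch (the request count and the trivial handler are invented for illustration; plain CGI forks and execs a fresh process per request, while an Apache module, FastCGI, or servlet just calls a function in an already-running process):

    ```python
    import subprocess
    import sys
    import time

    def handle_in_process():
        # What a persistent server does per request: call a function.
        return "ok"

    N = 20  # simulated requests

    start = time.perf_counter()
    for _ in range(N):
        handle_in_process()
    in_process = time.perf_counter() - start

    start = time.perf_counter()
    for _ in range(N):
        # What plain CGI does per request: spawn a whole new interpreter.
        subprocess.run([sys.executable, "-c", "print('ok')"],
                       capture_output=True, check=True)
    per_request = time.perf_counter() - start

    print("process-per-request was %.0fx slower here" % (per_request / in_process))
    ```

    The exact ratio depends on the machine, but the gap is large enough that it dominates any language-level speed difference.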
  • by f5426 (144654) on Thursday March 15, 2001 @11:11AM (#361402)
    You're spoiling the fun. It was the most trollesque askslashdot ever.

    They *asked* for a language war.

    The next ones should be something along the lines of:

    * We're a big company porting its stuff to linux. What is the best desktop environment to use ?
    * When redoing our corporate backend, we decided to go to unix. But which unix should we choose ?
    * Is there any kind of text-mode visual editor on unix ?


  • by Erasmus Darwin (183180) on Thursday March 15, 2001 @12:33PM (#361403)
    vi, of course.

    Nonono. There is but one answer to the editor flamewar:


    If someone says, "vi", someone else will inevitably reply, "emacs".
    If someone says, "emacs", someone else will inevitably reply, "vi".
    If someone says, "ed", everyone else tends to get quiet and assume that the person is either a Unix guru, an escaped mental patient, or both. Either way, they realize that they probably shouldn't argue the point further.

  • by TOTKChief (210168) on Thursday March 15, 2001 @11:03AM (#361404) Homepage

    Esperanto--the universal language!

    Oh, you meant programming language.

    [Man, what a bitch it would be to try to code in Magyar...]

  • by tweakt (325224) on Thursday March 15, 2001 @10:55AM (#361405) Homepage

    Server-side script rarely consumes a lot of processor cycles. I believe the database server and the other libraries you call out to have a much larger impact on speed.

    It's all a matter of optimizing the slowest part for the largest gain. Optimizing the script will result in much less improvement than say, switching to a faster database server.

    Next in line would be the web server that is hosting the application. Some scripting languages are possibly more efficient than others, but that only matters if you're doing a lot of processor-intensive things within the script (mathematical calculations, etc.), which is rarely the case.
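    A toy illustration of that point: profile where the time actually goes before optimizing anything. Here the database latency is simulated with a sleep, and the timings are invented, but the shape is typical of a page that does a little templating and one query over the wire:

    ```python
    import time

    def run_page():
        """Toy request handler: some scripting work plus one
        simulated database round-trip."""
        timings = {}

        start = time.perf_counter()
        # "Script" work: build a chunk of HTML in pure Python.
        page = "".join("<tr><td>%d</td></tr>" % i for i in range(1000))
        timings["script"] = time.perf_counter() - start

        start = time.perf_counter()
        time.sleep(0.05)  # stand-in for a database query's network latency
        timings["database"] = time.perf_counter() - start

        return page, timings

    _, timings = run_page()
    for part, secs in sorted(timings.items(), key=lambda kv: -kv[1]):
        print("%-8s %.4fs" % (part, secs))
    # The simulated database call dwarfs the scripting work, so that's
    # where optimization effort would actually pay off.
    ```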
