
Who Still Codes In Assembler?

rednax asks: "We see a lot of discussion on /. regarding many 'high level' languages (PERL, Python and JAVA, for example, are all well covered) rather than assembly language. There are a few exceptions, such as this discussion from waaaay back when, which touched on it. Assembly-level languages obviously have a place in all systems at the lowest level, to provide basic services, but what about other areas? Obviously there are trade-offs. Speed and compact object code are the two main arguments for assembler, but how much do these matter when we can get 1GHz processors and large amounts of RAM? How many /.ers use assembler, and what for?"

"Obviously assembly code is needed in embedded systems - I do not mean embedded Linux systems here, but rather the specialised, dedicated processor systems that do control work. Gibson Research is one of the few advocates for programming down at the bare-metal level that I have seen recently, and I think his products show what can be done in an incredibly small space when assembler is used. This too is one of his works."

This discussion has been archived. No new comments can be posted.

Do We Still Need To Program In Machine Code?

  • I am NOT arguing here. I'm wondering how good modern compilers really are. Years ago, I was programming in VAX assembler, and in C on a VAX. The VAX C compiler, with "max" optimization, could beat me 7 out of 8 times. Later, on a Sun, using SunOS 4.03, I could easily "beat" that compiler. The GNU C compiler has been, for quite a few years, one of the best ever written. I doubt that we could write better assembler than it can, now. With RISC chips, superscalar designs, etc., I doubt any human could outperform a "top notch" compiler today.
  • no GNU tool, or anything that goes with a regular Linux distro should be coded in assembly.

    I fear the day when this poster's "painfully obvious" statement comes true and distributions are afraid to ship the kernel, because parts of it are in assembly. I await the day (soon, hopefully) when distros can ship with a DVD player program which has some ASM optimizations so that it can work on a system that mere mortals can own.
    --
    // mlc, user 16290

    • How many C programmers know, (really KNOW) how, say, malloc(); works?

    So do you know exactly what sort of signals your opcode generates in the CPU? You better track those electrons or you don't know what you are doing. And them orbitals! Are you sure you know where your orbitals are?

    I know that

    • malloc() allocates size bytes and returns a pointer to the allocated memory. The memory is not cleared.

    If I need to know more, I'll dig up that glibc source, but if I don't and find more enjoyment from pretty perl scripts than obscure platform-specific tricks, don't call me a loser.

  • You low-level folks are hard to find and are exactly the type of people who should be reading -- and contributing to -- Kerneltrap.com [kerneltrap.com].
  • Embedded systems programmers do a *lot* of assembly.
  • assembly '2k [assembly.org]
    SE '2k [se2k.dk]
    dream hack [dreamhack.org]
    Scene [scene.org]

    There are more..
    I only wish that NAID (Montreal, where I live) still existed.. ):
  • I take it back. Of course you'd need pointer arithmetic. Sometimes, I astound myself with the statements I make.
  • The locking code for an RDBMS needs to operate in fewer than 100 instructions...Furthermore, the "latches", or internal locks on the buffer pool, etc., must execute in under 10 instructions.

    Are these some fundamental laws of computer science? Please clarify these (seemingly) odd requirements.

  • Unless Linux/BSD is your main server/workstation, quit ignorantly commenting!

    Cool. BSD is my main server, so now I can make all the ignorant comments I want!

  • Many people also use assembly because it is also great for people who want to not only code a program, but actually know how it works. You can write a program in C, you say? Or PERL, Eiffel, or whatever? I consider using function calls cheating in a way. Did you write any of those standard C functions that you are using? If not, did you really write that program, or did you simply arrange the smaller programs of others (functions) in such a way that they work differently than the particular arrangement of another person's program?

    In a way, assembly is also just rearranging "smaller programs of others" into larger programs. How? On x86 (and other CISC-style architectures) assembly is not the lowest level of programming, microcode is. It's just like using functions in C, except that your functions are written in extremely low level microcode rather than another HLL.

  • Speed-wise, machine optimized code won't overtake Very Smart Geeks

    True, but that's if you're having VSG(in Assembler) write your code. What do you do if the code is written by physicists who aren't expert coders ?

    PS - To the spelling pillock. Get a life. Yes, I'm a Brit, and I spell things with an "ess" not a "zed" because I don't still use a 17th century spelling that we abandoned to you colonials centuries ago.

  • when you're running calculations which would take ASCI Red a couple weeks.

    I used to write that sort of thing, and I used Algol and Fortran, not assembler.

    The Very Smart Geeks who had written the Fortran's code generator knew far more about optimising assembler than I ever would. Speed-wise, machine optimised code overtook hand-crafted some decades ago.

  • With all due respect to the frighteningly-evolved world of compiler research, even the best compiler is unlikely to match hand-written assembly for lots of tasks.

    For one thing, a higher-level language is simply too vague for conveying processor-specific instructions, and there are plenty of times when an astute programmer can apply her understanding of an algorithm to prefetch data intelligently, structure code to jive with a particular processor's branch prediction rules, exploit pairability of certain simple operations, and so forth.

    Mind you, an intelligent compiler will try to employ these same strategies, but its ability to do so is dependent on the choice of higher-level instructions given by the programmer. For instance, a particular processor might have an instruction that can test for zero and jump in a single clock cycle. In this case, the difference between iterating up versus down to zero in a for loop might determine if the compiler can actually exploit this.

    A programmer who is intimately familiar with the vagaries of a particular compiler will have better luck, but this generally ends up being more trouble than it's worth.

    For some applications, the importance placed on performance justifies the time necessary to hand-optimize code in assembly. This is not exclusive to people writing, say, memory allocation routines for an os. If any piece of code is called often enough, a difference in execution time of a couple dozen clock cycles may make the difference between a usable application and one that sucks. (Of course, this all assumes that the overall algorithm/design is sensible in the first place. Assembly will not save you from shitty code.)
  • Most of the programmers out there are just people in the industry out there to make money

    Really? I got into programming for one thing: THE WOMEN!!

  • I'm a web developer that started as a graphics guy. I have since learned about databases and JavaScript and ASP because I wanted to be able to do stuff. Don't get me wrong - the money is nice, but it was never my motivation. I have looked at C++ and Perl and Java and VB out of curiosity and some need, and one day after I stop being terrified I might look at Assembler.
  • I program mostly in Borland Delphi, so dropping to Assembler is as simple as
    asm

    end;

    So, if I need to, I do. My main reason is when Borland's handcoded assembly code sucks... For instance, string searching uses REPNE SCASB, which was the best way of doing searching on 8086 and possibly 80286 processors. But on today's Pentiums, a tight loop is faster. So, if I have a lot of string searching to do....

  • What a nice post Sivar!

    I still code in assembly (x86 'natch) and I find it a great deal of fun on two levels: it focuses my perception of how HLL's "work" and (well to put it frankly:) it is Power (humbling too at times).

    I would recommend learning asm to anyone programming in C/C++ or Java or even coding stuff for Linux (setting up the parameters on the stack for your C calls (and not crashing the program) is a transcendental experience); it may not help on your job immediately but do it in your spare time for a little while and you'll see improvement in how you approach your coding. Just be careful not to reinvent the wheel too much (I happened to love writing my own utilities and macros but you may not)!

    Hmmm, does anyone know how much, if any, of the Quake III Arena engine is in asm??

  • YEEEEE-HAW!!!!!!!!!

    linnucks is my firstest favorite workserver to. I can be all ignorant n stuff.

    Bingo Foo

    ---

  • Wohoo! Debug.com!
  • The British did not abandon the spellings to us Americans; we took your old spellings and used them for a while, then Webster came along and wrote his dictionary, simplifying some of the spellings in the process. Therefore we improved upon your system; you just stuck with the old, more complex one.
  • I consider using function calls cheating in a way. Did you write any of those standard C functions that you are using? If not, did you really write that program, or did you simply arrange the smaller programs of others (functions) in such a way that they work differently than the particular arrangement of another person's program?

    When you build a house, did you create all the building materials yourself? If not, did you really build that house, or are you just rearranging things (lumber, concrete, etc) that were already there?

    When you write a song, did you invent all the notes that you used? If not, did you really create that song, or did you just re-use notes that other songs have already used before you?

    I'm sorry, but saying that function calls are "cheating" is missing the point entirely. The true creative exercise in programming is deciding what your program is going to do, and then finding/writing/using the appropriate functions to do it. When you think about it, I didn't invent any of the words I've used in this post, someone else did, I merely arranged them in a way that gets my point across. As did you.


    --

  • Airlines. Computer Reservation Systems. Banks. Assembly is crucial to the execution of TPF (Transaction Processing Facility) in extremely high volume, high load environments. For example, one CRS for which I worked runs at a nominal request arrival rate of 5200 messages a second! Unfortunately, most "higher level" environments simply cannot withstand that onslaught of traffic.
  • But if you're going to have a calculation running for several months, it is worth it.

    If I want a calculation to run for several months, I'll code it in Visual Basic and run it on Windoze 2000 - assuming the pc stays up that long.

  • I know this sounds like flamebait, but either way....

    I'd say the most important reason why people do not code in asm is just that it is difficult.
    Most of the programmers out there are just people in the industry out there to make money. I do not mean to offend all the guys out there, but the truth is that most of the programmers who give in to the hype of learning the 'latest a.k.a. coolest' language are those who do it hoping that they'd be able to make more money out of it.

    Are you telling me that the web-designers and Java guys out there are actually interested in learning something for the heck of it? Hell, they do it because it helps fatten their wallets. Period.

    And programmers out there who do it because they like it would continue to do it whether or not it pays.

    This goes especially for assembly language programmers. Had people coded the same way they did back in the days of the PDPs/286/386, optimising those lines of assembly code, we'd really be making use of the awesome computing power we have today.

    I still just hope that there are Mels still out there.

    "...Fear the people who fear your computer"
  • If you honestly can't see why statements such as "X has to be written in N instructions" are inherently platform specific, then I can only conclude that you are, in fact, mildly retarded.

    As much as I regret to feed this troll again, allow me to rebut. Firstly, my "mild retardation" must make my D. Phil (that's "Ph.D." for you Yanks) in computer science all the more impressive. Secondly, neither Gray nor I said "X has to be written in N instructions"; rather, the statement was "fewer than 100 instructions" and "under 10 instructions". For those of you who are having trouble following, those numbers are what people who have at least high school maths experience refer to as "orders of magnitude". So that's "a lock must be set in 'on the order of' 100 instructions, as opposed to a latch, which must be an order of magnitude faster."

    Hopefully, this won't burst your small head.

  • Are these some fundamental laws of computer science?
    No, they're remarks from a SIGMOD talk given by Jim Gray -- perhaps you've heard of him. While the vast majority of RDBMS code is written in C, the locking routines are ALL assembly, because they are so time-critical.
  • You have to point out that assembler isn't necessarily programming on bare metal. Borland's Turbo Assembler 5.0 even provides some object-oriented features, such as classes with one level of inheritance. And the IA64 assembler has a very nice syntax, and is close to high level languages, as the entire platform was designed for better support of compiled languages. It seems paradoxical that this makes building good compilers for the platform harder, as it is a very complex instruction set that heavily relies on dependencies.

    And I'm a big fan of embedded assembler code, as the main advantage of higher level languages is the organisation framework of the code. Writing some methods in assembler combines the structure of the higher level language and the control of the assembler. You can support different platforms by #ifdefs, as is done in the Linux kernel, even for some normal C code lines.

    For a lot of simple standard tools there should be an assembler version, as I cannot understand why a "one bitter" like true should take more than 4k.

    And there are of course these low level system initialisation routines, device drivers and all those classical examples.

  • Speed is not the one and only metric for a software project, as some tend to think. I develop games for a living, and we use 95% C++ all over the project without notably affecting the overall speed of the game, but cutting costs effectively... On a team project, readability is essential, for instance because of turnover. Some programmers manage to make C code hard to read; no doubt re-reading asm would be a complete loss of time. Moreover, portability is affected, and in a game 80% of C++ code is directly buildable on any platform (even if we are lucky we only develop on little-endian platforms and such things because some people... well...). Now learn MIPS and microcode on PS2; it might take a bit long. It does, for me at least! And last, but not least, with pipeline issues, optimising asm is a real pain, so my motto is: don't do it if they don't force you with sharp weapons! And use it in a high level language framework, through inline. Just my 2
  • You say? Well, if I'd been there bowing, I'd never have had to admit I didn't know how a long since non-existent nation (mis)spelled 'optimize', now would I?

    Exactly. [dilbertzone.com]

  • yeah, the 'Very Smart Geeks' might even be able to spell to 'optimize'!!
  • well, the smart geeks would not speak 'British English'
  • With the more complex processors, pipelining can get very complex. Try programming the TI 6701 DSP in assembly. It is a Very Long Instruction Word (VLIW) processor with software pipelining and 'flying registers'. For a non-trivial loop, the C compiler almost always generates better code than a human can.

    In fact, on that DSP, they have a special 'optimizing linear assembler' so you don't have to think about parallel instructions and pipelining. And it STILL is very very complex.

    The time of hand coded raw assembly is ending soon....

  • I lied. There IS text in here. MWAHAHAHAHA!
  • In addition to the other gentleman's links, try: http://www.hornet.org/ and do a search for "in2k" (no space, make sure you include the www. in the URL) Have fun. Some of the demos are very humbling.
  • Well, I look at your example like this: the boards, nails, etc. are the processor opcodes. You CAN go lower level (building your own processor/making your own materials) but that's getting a bit ridiculous. Then it eventually gets down to "Did you really /MAKE/ that silicon, or was it produced in the nuclear fire of stars billions of years ago?" Now, if you build a house and have an outside company make your walls and floors for you, then truck them in, that (IMHO) is more representative of using premade functions. The problem is, you don't really KNOW how those floors and walls were made. How many C programmers know (really KNOW) how, say, malloc() works? (Especially in DOS.) Those that think you do, try to implement it in a compiler. (That's not to say that no /.'er could, of course.)

    I agree with your comment about the true creative process of writing the program. I was more referring to understanding how everything works and truly being able to take full credit for the program. I understand that writing C without any #includes is impractical and that many (most) truly excellent programmers use other people's functions. If you don't feel that using standard lib functions is, in a light sort of way, cheating... I have no problem with that. I use standard library calls all the time. They are what makes development in C so much faster than development in assembly. I just personally feel that a program is not truly mine if I have done so. (It's nothing that I lose sleep over, though.)
  • Assembly is great for neat loops and optimizations etc... But, in the end what you have is really specific code. If you keep in mind that it will be a one off type endevour, more power to ya'. Great for demo's, and neato graphics. Though, no GNU tool, or anything that goes with a regular Linux distro should be coded in assembly.

    Did I just point out something painfully obvious? Sorry...

    __

  • by Anonymous Coward

    I work for Midway Games.

    Many programmers here have only used assembly. Virtually all of the coin ops have been 100% assembly up until very recently.

    Assembly is still used on all machines on all platforms in the rendering system - there's no other way to get the performance you need.

    Some systems, like the Playstation 2, don't even have a C compiler available for two of the four CPUs.

  • yeah, the 'Very Smart Geeks' might even be able to spell to 'optimize'!!

    Well, are you smart enough to spell "British English"?

  • Except, perhaps, the ones at Bletchley Park who are part of the reason you aren't spending your life bowing and scraping to some S.S. thug.
  • Infidel! This only works on DOS/Windows systems!
  • I have to admit that I've not followed the link, so apologies if I am making incorrect or redundant statements here.

    I suspect that a human's insight might be mainly which parts are worth optimising.

    Given the devilish difficulty of hand scheduling and hinting assembly code for modern processors, I wonder if the proper thing isn't to write a superoptimiser. Optimal register allocation and instruction choice is NP-complete, so the only way to solve it in some cases is exhaustive search.

    I'm envisaging a system where human input is basically which code blocks to superoptimize and leave the computer to chug away at it overnight. Perhaps genetic algorithms (with test data to check for valid executions) would work?

    Has this ever been attempted?
  • I thought c-- was pointer arithmetic-less, which would make it impossible to write self modifying code? Unless you're suggesting some staged programming techniques?

    Is there even a compiler for c-- yet? I was under the impression that it was still at the proposal stage.
  • But the human always has the advantage that he can use the computer's insights, so the human's code should always be at least as fast as the compiler's, if he's humble enough to use the compiler's output when it makes sense to do so...
    Read The Art of Assembly Language Programming for a good intro to this debate:

    http://webster.cs.ucr.edu/Page_asm/ArtofAssembly/fwd/fwd.html

    I personally know assembly, and only use it when doing intensive 2D graphics stuff, but just knowing how the machine works, so that all your coding is improved, is the best reason to know assembly. Use it when it makes sense (which for me is after C coding, optimizing algorithms, and profiling... Coding in straight assembly just ain't worth it for the machines I work on [well, except for my TI-85 programming endeavours ;-])
  • I started programming originally on a 48k, rubber-keyed, Spectrum. After finding the BASIC too limiting I progressed to Z80 machine code.

    After that when I was first exposed to PC's I wanted to find an assembler straight away. (Not having ever heard of C/C++/Perl by then - I was only 16 or so ;).

    I wrote major, single-person, projects in pure x86 assembly language. Nowadays I wouldn't dream of doing that any more - but I have written Win32 programs in pure assembly; just to see if it could be done easily...

    One of the few times I ever bother with assembly language nowadays is when I'm decompiling drivers - to reverse engineer protocols, and the like. (eg. My attempt at a Linux driver for the MPIO [steve.org.uk] MP3 player).

    As I'm interested in security I occasionally find an exploit of my own, and being able to code in assembler is pretty essential for this.

    I have to say that even if I never used assembly language again I'm so glad I learnt it, because it really has helped me understand how computers work; something that the current generation of programmers, fresh out of college, miss .. IMHO.

    (Recent graduates seem to think the computer's talking Greek when presented with a stack dump/core file... I find that very depressing.)


    Steve
    ---
  • do you have a link to these "Finnish assembly competitions"? sounds pretty cool..

    ----
  • I'd say the most important reason why people do not code is asm is just that it is difficult.
    Try again. Three biggest reasons not to even think about using assembler:
    1. The compiler yields code of sufficient speed and compactness for the intended application.
    2. Precise control of execution time is not required.
    3. The additional difficulty of designing, coding, documenting and testing an assembly-language implementation is not warranted by the benefits; it is far cheaper to throw more CPU cycles at the problem than more man-hours.
    I can write code that's tighter than an IRS auditor's asshole, and I'm damn proud of it. I still don't pretend that what I do is the way everything should be done. There is a time to throw hardware at the problem because hardware is cheaper than software, and that cut-off point in the PC world changes by half every 18 months. If you aren't in the PC world, your tradeoffs are different. That's life.
    --
    Knowledge is power
    Power corrupts
    Study hard
  • Assembly is very much alive and well with microcontrollers, especially the 8 bit micros (PIC, AVR, Scenix/Ubicomm, etc.). C is making inroads, but for any sort of optimization, you still need to be able to look at the code the compiler generates and understand what it is doing. This is very important when you are trying to deal with low latency interrupts, tight timing loops, and small memory sizes.

    For the most part, developing in a higher level language is faster, more robust, and a heck of a lot easier. Knowing assembly greatly enhances the understanding of what the computer is actually doing, and how the computer is doing it. If speed and size are important, do assembly. The speed of many applications is strongly limited by I/O - it doesn't make as much sense to take on assembly's considerable difficulty for those types of applications - but if you're trying to do real time DSP, the speed of assembly, especially when dealing directly with fast I/O, is very valuable.

    It really is a matter of using the most appropriate tool for the job. High level languages do a lot of things very well. Assembly also does many things very well. C can map very closely to assembly in some circumstances. Any sort of programming language is just a layer between your ideas and the op-codes that the computer executes. Assembly is just a bit more precise, like a surgeon's scalpel, compared to the dull broadsword that is Visual Basic. Insert your own cutting implement metaphor for your favorite language.
  • Having worked professionally in real-time visualization, signal processing, and compression for over ten years, I stopped using assembler about 1995, except for porting and maintaining my older code. The speedup possible with changes in algorithm (10x to 200x) is much greater than the speedup possible with assembler (2x to 4x). Once one section or piece of software is sped up by 10x to 200x, the economics dictate that you speed up another section or piece of software by 10x to 200x rather than speeding up the first piece by another 2x to 4x. Each type of speedup -- algorithm improvement or conversion to assembly -- takes about the same amount of time. The worst part these days is that the optimization rules change drastically every year -- MMX, SSE, SSE2, P-IV, IA-64 -- making a commercial investment in assembly code a high expense for short-lived results.

    I'm sure assembler still makes sense for mass-distributed desktop video and mega-production videogames, but the uses for it are getting more limited. A good algorithm, data alignment strategy, caching strategy, and pre-calculation strategy can be expressed well enough in C/C++.

    And as a side note, even for my modern rare uses of assembly I try to use the in-line assembler since assemblers are no longer the staple tool that every developer has installed.

    And as another side note, I laugh at people who try to get their processor faster by another 10%, 50%, or 100%. If these people would learn to optimize code, they could speed up by 20,000%.

  • I know that in the days of forking webservers, perl cgi scripts, and relational databases the idea of performance is considered somewhat unimportant, but there are cases where it really does matter.

    For example, when you're running calculations which would take ASCI Red a couple weeks.

    Yes, there are optimizing compilers out there. Yes they do a good job on some systems. No, they don't come within a factor of two of the performance well-written assembly code achieves on x86 processors.

    Programming in assembly is slow work. To optimize code well it will often take a few hours per instruction. (I've spent weeks optimizing 50-instruction loops). But if you're going to have a calculation running for several months, it is worth it.
  • I know that Quake 3 uses assembly code to optimize some things... I know that on the Mac they (either Graeme and/or John C.) have rewritten the assembly portions, which supposedly has resulted in a significant speedup. (They just need to release the code so WE can find out for ourselves :-) )
  • On x86:

    0CA5:0100 B402     MOV AH,02     ; What does it take
    0CA5:0102 B249     MOV DL,49     ; to convince the
    0CA5:0104 CD21     INT 21        ; lameness filter
    0CA5:0106 B44C     MOV AH,4C     ; that this is ok?
    0CA5:0108 CD21     INT 21        ; (Too many caps indeed)

  • I remember back in high school I was having fun with JMP FFFF:0000 in debug. I once sneaked the generated .COM file into my teacher's computer's boot sequence. He couldn't for his life figure out what was wrong, so he released the class for the rest of the day.

    I grew up some years later.

  • A blanket statement such as the above can't possibly be what he actually said. Now if he were talking about some specific RDBMS on some specific platform it might be a perfectly reasonable thing to say.
    I was there, "Pete". You weren't. You also apparently do not have any computer science training, or you would know that a "specific RDBMS on a specific platform" is not only likely incredibly far from the state of the art (if it is a commercial system), but is also not interesting in general to actual researchers, unless they are writing a benchmarking paper to show why they aren't interested in commercial systems.
    In the RDBMS case, the maximum size of the locking code would depend on the computer, OS, and lots o' other stuff.

    Actually, you're quite wrong. First, note that I said "instructions", not "cycles". All modern processors provide a test-and-set or compare-and-swap instruction. So it does not depend at all on the architecture. RDBMS locking is not done at the OS level -- that is FAR too much overhead. An RDBMS maintains its own locks, its own file structure, etc. Typical instruction-count costs for seeking through a highly-optimized B+-tree, doing a hash join, or marking a page as "dirty" have not changed since the introduction of virtual memory support in hardware, and neither has the cost of setting and releasing locks. Furthermore, for a scalable system, the cost of updating those locks needs to be as Gray said.

    Kindly find someone else to troll, or perhaps ask your parents to buy you the newest Quake game if you're really frustrated.

  • I have not touched assembly-type language since 1989. But the above article points out very important issues. My father taught me to work on cars from when I was 6. One of the most important aspects of rebuilding a motor was to label all your parts in great detail so that you never had to wonder (and yes, connecting rods should be rematched to their original pistons), and always sub-assemble to make installation easier. When I got my C-64 (1981) I coded the same way - very detailed and in subroutines. I first started in BASIC, then I bought a BASIC compiler, then an assembler program/compiler. I designed everything under one platform, so I was not too worried about cross-compatibility. And after a while each subroutine in BASIC was recoded in assembler (variable passing was a pain, but well worth the speed-ups) and calls to the routines were used.

    Code optimizing is best done on well-documented code that has been tested and had its subroutines optimized.

    I would think that with the current state of tech, most software houses would create the program, then issue "addons/upgrades" for specific operating systems (Mac OS or Win or Linux) and specific processors. If I recall right, I think AutoCAD did this back in the late '80s and early '90s.



    spambait e-mail
    my web site artistcorner.tv hip-hop news
    please help me make it better
  • Although the upcoming AGB is likely to diminish that pool of coders back to near extinction, the Color Game Boy has probably been one last "hurrah" for the bare-metal game programmer. I've optimized cycles and opcode bytes on the thing (older gb first) for a decade - I hand converted line-by-line the 6809 source code for Defender & Joust to run them on the Color Game Boy. So I say, we're still out here. For a little while. (The AGB supports C++, woo hoo!)
  • by BitMan ( 15055 ) on Friday February 16, 2001 @08:15AM (#427699)

    In most modern, general-purpose programs, assembler has rarely been used to "increase performance" over traditional programming-language code -- with rare exceptions for heavily used functions that a profiler might pick up. And even then, most assembly is part of a larger program and a fraction of the total code -- usually in-line in the high-level language, or a self-contained object referenced by the other high-level-language code. But even then, today's compilers are good optimizers and can best your hand-written assembler if you don't know what you are doing.

    No, for the past decade, assembler is still in use for raw, system-level details. Just like the Linux kernel, some things need to get done with GAS. Even NT's init loader is like this as well.

    Most CS and engineering programs still teach assembler not to make you proficient, but to convey the machine code v. man code relationship. It's not a full-up course where you write programs only in assembler -- in fact, I had a class on system-level programming where I wrote an assembler in C and standard Unix tools (lex, awk, etc.). Why? It taught me not only assembler fundamentals, but the concepts of machine code and organization, basic parsing, hash tables, referencing and other concepts -- all in one course, in a very practical manner!

    No, people who say "why code in assembler anymore?" are the people who continue to fail to understand why assembler ever existed. Heck, even Cobol and Fortran were available in the 50s! It was rarely a matter of "speed", but the fact that speed and size are "real-world constraints" at the system level.
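    The "in-line in the high-level language" pattern mentioned above can be sketched with GCC's extended asm. A minimal, hypothetical example (x86 AT&T syntax; the add is deliberately trivial -- a real use would be something the compiler can't emit itself):

    ```c
    #include <stdio.h>

    /* In-line assembly embedded in a C function via GCC extended asm.
     * "=r" names an output register; "0" ties input a to that same
     * register; "r" puts b in any free register. */
    static int asm_add(int a, int b) {
        int result;
        __asm__ ("addl %2, %0"
                 : "=r"(result)      /* output */
                 : "0"(a), "r"(b));  /* inputs */
        return result;
    }

    int main(void) {
        printf("%d\n", asm_add(40, 2));  /* prints 42 */
        return 0;
    }
    ```

    The high-level code around the asm block stays portable C; only the one function is tied to the ISA.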

    -- Bryan "TheBS" Smith

  • by BitMan ( 15055 ) on Friday February 16, 2001 @08:34AM (#427700)

    With the Quake series, idSoftware found the Intel 32-bit integer loads from memory into a 32-bit general-purpose register to be piss-poor slow. With a little tinkering, they found that on the Pentium, loading two 32-bit integers into a 64-bit floating-point register, and then moving those integers from register to register, was much faster. Why? With the Pentium, Intel really worked hard to bring the FPU up to more common RISC-implementation performance (the 80x87 ISA has always been a poor FPU implementation, remember Weitek? ;-), and created a well-pipelined FPU in the Pentium (over other x86 processors at the time). Pipelining means that more than one instruction can be worked on by a unit simultaneously -- if similar instructions follow one another.

    Most fast, interpolated 3-D geometry usually involves sets of regular, well-timed memory loads of four to sixteen (4-16) 32-bit reads per operand into registers (usually a 1x4 or 4x4 matrix of 32-bit, interpolated integer coordinates in 3-D space). Again, loading through the traditional integer opcode was piss-poor slow -- and id, again, found that the new Pentium chip offered a "workaround" via its new, pipelined FPU.

    Because this was "unorthodox", most compilers simply did not implement it. If you tried to read integers from memory, the compiler would use the integer load opcode. So idSoftware had to write their own replacement memory read/write routines and functions. Undoubtedly, this was done easiest in assembler.

    BTW, this "Pentium optimization" was why the first generation of AMD and Cyrix processors lagged in Quake performance. Not because their FPUs "sucked", but because the idSoftware optimizations relied on the performance of a pipelined FPU -- not something AMD or Cyrix had in their first generation of Pentium ISA-compatible products. In fact, AMD's K6 (Nx686 core) had an excellent, non-pipelined FPU that was faster at execution -- provided your program was comprised of random integer/ALU, FPU and control opcodes. And if idSoftware had just left the regular integer load in, the K6's integer memory load was up to 3x faster than the non-MMX Pentium's and would have smacked the Pentium silly.

    -- Bryan "TheBS" Smith

  • by Tau Zero ( 75868 ) on Friday February 16, 2001 @08:03AM (#427701) Journal
    There are hordes of devices and sub-devices (modules in larger devices) where a MB is the designer's wet dream. Look at your car. Going forward from the trunk, you might have a CD player in it with a micro with between 4K and 32K of program ROM in it. Beneath the driver's seat is a memory-seat controller module which runs in 8K of ROM and 224 bytes of RAM, plus a little EEPROM. In the front doors there are a pair of modules which read switch inputs, control the window, door lock and mirror motors, and read back status/failure information on all of them. The instrument cluster has several microcontrollers, the overhead console trip computer has another, the electronic climate control another (probably 32K for a sophisticated one), the body controller another one... The larger modules are increasingly programmed in C, but the smaller applications still have parts (if not the entirety) written in assembly code getting down and dirty with the metal.

    Appliances may have teensy-weensy little microcontrollers in them. Some of them have as little as 256 bytes of program ROM. You aren't going to get much of a C program into that, but assembly can do quite a bit.

    Okay, I'll quit the mine-is-smaller-than-yours code-size contest now. ;)
    --
    Knowledge is power
    Power corrupts
    Study hard

  • by Socializing Agent ( 262655 ) on Friday February 16, 2001 @05:36AM (#427702)
    The Solaris kernel is between 10-15% fine-tuned assembly language. You think Duff's Device [lysator.liu.se] is odd? Wait until you see the Solaris bcopy() routine. It's like a switch statement without labels, except harder to understand.

    Also, RDBMS vendors will likely always use assembly. The locking code for an RDBMS needs to operate in fewer than 100 instructions to guarantee the possibility of concurrency -- and that includes deadlock detection. Furthermore, the "latches", or internal locks on the buffer pool, etc., must execute in under 10 instructions.

    Assembly language is important, especially since highly-optimized C code is faster than unoptimized C code by a factor of 5 or more. How do you think that those optimizations get into the compiler? Sure, no one writes applications -- or even, at this point, games -- in assembly, but today's processors are so complex that

    • You NEED to hand-optimize some speed-critical things
    • A simple syntax-directed translation scheme is more inadequate than ever for today's compilers -- if you're writing compilers, you really get to worry about speed.
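    The latch described above -- an internal lock that must execute in a handful of instructions -- can be sketched in portable C11 with an atomic test-and-set. This is an illustration only (the `latch_t` names are made up here); real RDBMS latches are hand-tuned assembly:

    ```c
    #include <stdatomic.h>
    #include <stdio.h>

    /* A minimal spinlock latch built on atomic test-and-set.
     * atomic_flag_test_and_set returns the flag's previous value,
     * so it tells us in one instruction-like step whether we won. */
    typedef struct { atomic_flag locked; } latch_t;

    static void latch_init(latch_t *l) { atomic_flag_clear(&l->locked); }

    /* returns 1 if we acquired the latch, 0 if it was already held */
    static int latch_try(latch_t *l) {
        return !atomic_flag_test_and_set_explicit(&l->locked,
                                                  memory_order_acquire);
    }

    static void latch_release(latch_t *l) {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

    int main(void) {
        latch_t l;
        latch_init(&l);
        printf("%d\n", latch_try(&l));  /* 1: acquired */
        printf("%d\n", latch_try(&l));  /* 0: already held */
        latch_release(&l);
        printf("%d\n", latch_try(&l));  /* 1: acquired again */
        return 0;
    }
    ```

    On most ISAs the acquire path compiles down to a single locked exchange plus a branch, which is why the instruction budgets quoted above are achievable at all.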
  • by ksheff ( 2406 ) on Friday February 16, 2001 @09:51AM (#427703) Homepage

    Your tolower code doesn't work. ORing a value with itself produces itself (that, and you are assigning ch[0] to be 'Z' in the if statement). This is probably what you wanted (assuming input is ASCII):

    if ((c >= 'A') && (c <= 'Z')) c |= 0x20;
    or as a macro:
    #define TOLOWER(c) ((((c) >= 'A') && ((c) <= 'Z')) ? ((c) | 0x20) : (c))

    However, you wouldn't want to use the macro if the argument is a function call or a math expression with side effects, since (c) would be evaluated more than once. The original mistake was probably due to your 4:45am posting w/o sleep. =)
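    The multiple-evaluation pitfall is easy to demonstrate. A short sketch (the `next_char` helper is invented here purely to make the side effect countable):

    ```c
    #include <stdio.h>

    #define TOLOWER(c) ((((c) >= 'A') && ((c) <= 'Z')) ? ((c) | 0x20) : (c))

    /* a helper with a visible side effect, for illustration only */
    static int calls = 0;
    static int next_char(void) { calls++; return 'Q'; }

    int main(void) {
        int lower = TOLOWER(next_char());
        /* the macro expands next_char() three times on this path:
         * once per comparison and once in the OR */
        printf("result=%c calls=%d\n", lower, calls);  /* result=q calls=3 */
        return 0;
    }
    ```

    An inline function avoids the problem entirely, since its argument is evaluated exactly once.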

  • Most programmers that I consider truly skilled still use assembly for a variety of tasks. Many things are MUCH easier to do in assembler than in any high-level language (just try writing self-morphing code in C -- and yes, that is useful: it is very difficult to crack self-morphing programs).

    Assembly language is control. Control over every instruction that the processor receives. Control over every byte that is shifted through memory.
    You can control the exact number of bytes that your program occupies.
    You can make TINY programs.
    How about a webserver in under 600 bytes? (Granted, it can't outperform Apache...) How about a GLibC clone in under a meg? I have seen demos from the Finnish assembly competitions that included such feats as real time raytracing of a translucent sphere on a 386. How about much of the first level of Interplay's Descent in exactly 4096 bytes? (Including music!)

    Want to talk to hardware, directly, without any overhead? Try accessing the 64-bit register that counts the number of clocks so far on modern IA CPUs without ASM. You can't do that in an HLL because nobody has done it for you yet. HLLs are largely about piecing together others' work into a new pattern. In ASM, it's just you and the CPU.
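    That 64-bit clock counter is the x86 time-stamp counter, read with the RDTSC instruction. A minimal sketch using GCC inline assembly (x86/x86-64 only; modern compilers also provide an `__rdtsc()` intrinsic, but the asm form shows the "just you and the CPU" point):

    ```c
    #include <stdio.h>
    #include <stdint.h>

    /* RDTSC leaves the low 32 bits of the counter in EAX
     * and the high 32 bits in EDX. */
    static uint64_t rdtsc(void) {
        uint32_t lo, hi;
        __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void) {
        uint64_t t0 = rdtsc();
        uint64_t t1 = rdtsc();
        printf("elapsed cycles: %llu\n", (unsigned long long)(t1 - t0));
        return 0;
    }
    ```

    Back-to-back reads like this are how cycle-level micro-benchmarks were done on the Pentium before high-level timing APIs existed.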

    Many people also use assembly because it is great for people who want to not only code a program, but actually know how it works. You can write a program in C, you say? Or PERL, Eiffel, or whatever? I consider using function calls cheating in a way. Did you write any of those standard C functions that you are using?

    If not, did you really write that program, or did you simply arrange the smaller programs of others (functions) in such a way that they work differently than the particular arrangement of another person's program.

    (I'm not saying that function calls are bad--they sure speed up development--I am trying to make a point about assembly)

    As an added benefit, those who code in assembly are generally better at coding in HLLs because they know better how those HLLs work. Knowing how they work means being able to write faster code. For example, if you are using a language that has no tolower() equivalent, you can write something like:

    switch(ch[0]) {
    case 'A':
    ch[0] = 'a'; break;
    case 'B':
    ch[0] = 'b'; break;
    ...etc...
    }

    The coder that learned how his system works at a binary level might choose the much faster:

    if((ch[0] >= 'A') && (ch[0] = 'Z')) ch[0] = (ch[0] | ch[0]);

    The bitwise OR doesn't even take a clock cycle to execute (but admittedly it is harder to read)

    discLamer: I'm not trying to tell anybody that their HLL coding skills are unimportant or easy. I'm not saying that HLLs are bad. (My favorite language is C.) I am saying that many very skilled programmers still use assembly because it is very good at certain things that no other language can do well or at all. I am also saying that assembly is not dead. Just unpopular. I am also posting this at 4:45AM and have had no sleep or caffeine for quite some time. Just ignore me.

    Here are some answers to some common complaints about assembly language, blatantly stolen from Randall Hyde's book, "The Art of Assembly Language Programming." If you are interested, the whole book is freely available at http://webster.cs.ucr.edu/Page_asm/ArtofAssembly/fwd/fwd.html
    -=-=-=-

    Assembly is hard to learn.
    So is any language you don't already know. Try learning (really learning) APL, Prolog, or Smalltalk sometime. Once you learn Pascal, learning another language like C, BASIC, FORTRAN, Modula-2, or Ada is fairly easy because these languages are quite similar to Pascal. On the other hand, learning a dissimilar language like Prolog is not so simple. Assembly language is also quite different from Pascal. It will be a little harder to learn than one of the other Pascal-like languages. However, learning assembly isn't much more difficult than learning your first programming language.

    Assembly is hard to read and understand.
    It sure is, if you don't know it. Most people who make this statement simply don't know assembly. Of course, it's very easy to write impossible-to-read assembly language programs. It's also quite easy to write impossible-to-read C, Prolog, and APL programs. With experience, you will find assembly as easy to read as other languages.

    Assembly is hard to debug.
    Same argument as above. If you don't have much experience debugging assembly language programs, it's going to be hard to debug them. Remember what it was like finding bugs in your first Pascal (or other HLL) programs? Anytime you learn a new programming language you'll have problems debugging programs in that language until you gain experience.

    Assembly is hard to maintain.
    C programs are hard to maintain. Indeed, programs are hard to maintain period. Inexperienced assembly language programmers tend to write hard to maintain programs. Writing maintainable programs isn't a talent. It's a skill you develop through experience.

    Assembly language is hard.
    This statement actually has a ring of truth to it. For the longest time assembly language programmers wrote their programs completely from scratch, often "re-inventing the wheel." HLL programmers, especially C, Ada, and Modula-2 programmers, have long enjoyed the benefits of a standard library package which solves many common programming problems. Assembly language programmers, on the other hand, have been known to rewrite an integer output routine every time they need one. This book does not take that approach. Instead, it takes advantage of some work done at the University of California, Riverside: the UCR Standard Library for 80x86 Assembly Language Programmers. These subroutines simplify assembly language just as the C standard library aids C programmers. The library source listings are available electronically via Internet and various other communication services as well as on a companion diskette.

    Assembly language programming is time consuming.
    Software engineers estimate that developers spend only about thirty percent of their time coding a solution to a problem. Even if it took twice as much time to write a program in assembly versus some HLL, there would only be a fifteen percent difference in the total project completion time. In fact, good assembly language programmers do not need twice as much time to implement something in assembly language. It is true using a HLL will save some time; however, the savings is insufficient to counter the benefits of using assembly language.

    Improved compiler technology has eliminated the need for assembly language.
    This isn't true and probably never will be true. Optimizing compilers are getting better every day. However, assembly language programmers get better performance by writing their code differently than they would if they were using some HLL. If assembly language programmers wrote their programs in C and then translated them manually into assembly, a good C compiler would produce equivalent, or even better, code. Those who make this claim about compiler technology are comparing their hand-compiled code against that produced by a compiler. Compilers do a much better job of compiling than humans. Then again, you'll never catch an assembly language programmer writing "C code with MOV instructions." After all, that's why you use C compilers.

    Today, machines are so fast that we no longer need to use assembly.
    It is amazing that people will spend lots of money to buy a machine slightly faster than the one they own, but they won't spend any extra time writing their code in assembly so it runs faster on the same hardware. There are many raging debates about the speed of machines versus the speed of the software, but one fact remains: users always want more speed. On any given machine, the fastest possible programs will be written in assembly language.

    If you need more speed, you should use a better algorithm rather than switch to assembly language.
    Why can't you use this better algorithm in assembly language? What if you're already using the best algorithm you can find and it's still too slow? This is a totally bogus argument against assembly language. Any algorithm you can implement in a HLL you can implement in assembly. On the other hand, there are many algorithms you can implement in assembly which you cannot implement in a HLL.

    Machines have so much memory today, saving space using assembly is not important.
    If you give someone an inch, they'll take a mile. Nowhere in programming does this saying have more application than in program memory use. For the longest time, programmers were quite happy with 4 Kbytes. Later, machines had 32 or even 64 Kilobytes. The programs filled up memory accordingly. Today, many machines have 32 or 64 megabytes of memory installed and some applications use it all. There are lots of technical reasons why programmers should strive to write shorter programs, though now is not the time to go into that. Let's just say that space is important and programmers should strive to write programs as short as possible regardless of how much main memory they have in their machine.

    Assembly language is not portable.
    This is an undeniable fact. An 80x86 assembly language program written for an IBM PC will not run on an Apple Macintosh. Indeed, assembly language programs written for the Apple Macintosh will not run on an Amiga, even though they share the same 680x0 microprocessor. If you need to run your program on different machines, you'll have to think long and hard about using assembly language. Using C (or some other HLL) is no guarantee that your program will be portable. C programs written for the IBM PC won't compile and run on a Macintosh. And even if they did, most Mac owners wouldn't accept the result.

    Don't flame me too much... This is my first post on /.!

    sivar@NoSpam.email.REMOVE.com
