
Is the x86 Architecture Less Secure?

Posted by Cliff
from the oh-my-buffers dept.
An anonymous reader asks: "Paul Murphy at CIO Today reports that a specific Windows buffer overflow vulnerability 'depends on the rigid stack-order execution and limited page protection inherent in the x86 architecture. If Windows ran on Risc, that vulnerability would still exist, but it would be a non-issue because the exploit opportunity would be more theoretical than practical.' He also implies that other Windows vulnerabilities are actually facilitated by having an x86 chip." How does the x86 processor compare with other architectures when it comes to processor-based vulnerabilities? How well have newer additions, like the Execute Disable Bit, helped in practical situations?
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Friday April 29, 2005 @07:00PM (#12388726)
    all x86 processors have an evil bit
  • by rossifer (581396) * on Friday April 29, 2005 @07:01PM (#12388732) Journal
    Paul Murphy [cio-today.com], I'd like you to meet Paul Graham [paulgraham.com]. What we have here is an Apple press release being printed up as a trade journal article.

    Good for Apple's PR firm. I guess.

    Not that I have anything against Macs or PowerPC hardware; I just don't like disingenuous authors (or their articles).

    Regards,
    Ross
    • Maybe Apple paid him too. [washingtonpost.com]


    • by ErikTheRed (162431) on Friday April 29, 2005 @07:20PM (#12388889) Homepage
      Something about news articles in general (as I learned from one of my clients, a PR agency) - many "reporters" create "stories" by basically doing some light editing (if that) on a press release. If you want to get your product or service in a newspaper, magazine, etc., the best thing to do is to have a pre-written piece that the "reporter" can slap their name on. It's a reasonable bet, for instance, that any positive story in your local paper about some local business was written either by that business or their PR agency. That doesn't necessarily mean it contains untrue information, but it certainly colors whatever facts are included.

      This is the actual main reason for many people's complaints that news sources lean too far left or right or whatever - much of the "news" is generated by PR firms, advocacy groups, political parties, etc., given a very thin coat of paint, and slapped on the page. Some actual work is done on the editorial page, and in the reviews (although there have been some "reviews" done along these lines for things like restaurants - caveat emptor), but by and large you should take most newspaper and magazine stories with an appropriate grain of salt (unless you have a particularly high level of confidence in a specific writer or publication).
      • >> is generated by PR firms, advocacy groups, political parties, etc., given a very thin coat of paint, and slapped on the page

        Writers are people. People are lazy. Small wonder PR firms are smart enough to exploit it...
      • Media Watch (Score:5, Informative)

        by xixax (44677) on Friday April 29, 2005 @09:43PM (#12389736)
        Our public broadcaster has a show called Media Watch [abc.net.au] that routinely busts journalists for plagiarising press releases. Not to mention even more blatant things like running advertisements as news [abc.net.au].

        Xix.

      • Re: Biased (not) (Score:3, Insightful)

        by screenrc (670781)
        Nowadays, what I see on the news are stories about Michael Jackson, Mrs. Stuart, and the Pope. This is what gets passed off as "news". Is the media biased towards the left or towards the right? When all they do is talk about the unimportant, the media is not biased at all! They are just silly.
      • And wire services... (Score:3, Informative)

        by jfengel (409917)
        You'll tend to see rewritten press releases in the business section. The front page of most newspapers originates in wire service articles: AP, Reuters, AFP, sometimes big national papers like the Washington Post or the New York Times.

        If you click through a news story from a news aggregator like Google News, you'll note that many of them have identical text, because they're literally repeating the AP wire service story, crediting the original AP writer and all.

        Actual reporters are used primarily for local
  • by Anonymous Coward on Friday April 29, 2005 @07:01PM (#12388736)
    What, is there only one tech writer in the world? (See article two or three down on SCO)
  • RISCy (Score:5, Insightful)

    by FidelCatsro (861135) <fidelcatsro.gmail@com> on Friday April 29, 2005 @07:02PM (#12388741) Journal
    If Windows ran on a RISC arch then it would be just as insecure.
    The fact is not that this issue is an insecurity in x86, but that Windows uses it. If you know of a flaw in your architecture, then why are you programming to that flaw?
    • Re:RISCy (Score:5, Insightful)

      by nocomment (239368) on Friday April 29, 2005 @07:14PM (#12388844) Homepage Journal
      Bingo. Well said. OpenBSD runs on x86, does it have this flaw?
      • Re:RISCy (Score:4, Interesting)

        by NutscrapeSucks (446616) on Saturday April 30, 2005 @02:04AM (#12390744)
        To be fair, OpenBSD doesn't really care much about performance, and is willing to take a big speed hit for security. They have implemented workarounds for the architecture that have been deemed unacceptable elsewhere (Linux). All of this is pretty recent -- a few years ago, they had all the same fundamental problems as everyone else.
    • Re:RISCy (Score:3, Interesting)

      Exactly.

      The security advantage of MacOS X is a lack of braindead design decisions, it has nothing to do with PowerPC.
    • Think about it... if a virus could never read its code out of memory, you would never suffer from it... worms would die, and all malware would stop working! It would be great!

      Better yet, you can use the Signetics chips in both PCs *AND* Macs!

      ttyl
      Farrell
  • by winkydink (650484) * <sv.dude@gmail.com> on Friday April 29, 2005 @07:02PM (#12388743) Homepage Journal
    2 articles in under 4 hours submitted by an "anonymous reader" that point to Paul Murphy at CIO Today. Hmmmm... Astroturf anybody?
    • ... He's a busy boy, apparently. Perhaps a coincidence... or sponsorship...
      This issue itself is interesting, though I am fairly sure it was discussed a couple of years back when AMD introduced the non-executable area in the Athlon and Opteron chips.
    • Draw dubious conclusions from circumstantial evidence that question the anti-wintel open source orthodoxy, get cited on Slashdot!

      Right this instant, I'm working on my "Windows better for pirating media files" opinion piece. It's a surefire winner.

  • by HotNeedleOfInquiry (598897) on Friday April 29, 2005 @07:04PM (#12388750)
    Blame the machine or blame the programmer? You can write x86 code without buffer overflows, period. That you can be more sloppy on other architectures and not get overflows seems silly. Like "everyone should drive Volvos cause they are safer."

    Lots of things can be laid at the feet of x86 architecture, but not that it seduces programmers into writing code with buffer overflows.
    • by Anonymous Coward

      Blame the machine or blame the programmer?

      How about blaming both?

      A machine can make it more difficult for extremely common types of attack to be successful. If it doesn't, then it shares some of the blame.

      A programmer can avoid troublesome functions and coding styles, can test with bad data more thoroughly, and can use automated tools to catch these problems before they are a security issue. If they don't, then they share some of the blame.

      A programming language can mitigate these issues by p

    • You are correct that you can write x86 code without buffer overflows. I've always thought that dynamically-assigned buffers were trouble since I first learned them.

      What the author of this article is saying is that PowerPC-based computers would only have a 1-in-6 chance of being able to execute code arbitrarily spilled over actual code via buffer overflow.

      Moreover, data and code "segments" (I'm using the x86 term here) just don't work the same way on PowerPC. This essentially prevents arbitrary

    • by Anonymous Coward
      This is complex. Simply adding NX to x86 doesn't mean much; the platform still has to know when the heap holds code, when it holds data, and when it switches from one to the other, and that's not easy to retrofit. x86 platforms would be different, probably not substantially, but still different, and there'd be no legacy problem had NX been there a long time ago.

      As for bugs, I agree with you but I also know how easy and how common it is. We need to use multiple tools, just saying hire better coders or something to th

  • Funny... (Score:5, Insightful)

    by scovetta (632629) on Friday April 29, 2005 @07:05PM (#12388761) Homepage
    If Windows ran on Risc, that vulnerability would still exist, but it would be a non-issue because the exploit opportunity would be more theoretical than practical.

    Funny how exploits that are "just theoretical" don't stay that way forever...
    • Not so... (Score:4, Insightful)

      by dhowells (251561) <slashdot@domhowells.com> on Friday April 29, 2005 @07:37PM (#12389011) Homepage Journal
      Although the insecurity of code that is only 'theoretically' exploitable ought to be fixed (we all prefer bug-free code, right?), many theoretical exploits will never be practically exploited, for technical reasons.

      There is a distinction to be made between code which is exploitable but for which no public exploit code or method has been released -- in which case it 'won't stay that way forever' -- and code wherein the calculation of an arbitrary or runtime offset (e.g. for a buffer overflow) is impossible and guesswork is impractically unlikely to succeed. Theoretical insecurities of the latter type are very likely to 'stay that way forever'.
    • Yeah, coz OpenBSD on x86 is soooooo insecure....

      -Slak
    • Re:Funny... (Score:4, Interesting)

      by kasperd (592156) on Saturday April 30, 2005 @05:22AM (#12391224) Homepage Journal
      Funny how exploits that are "just theoretical" don't stay that way forever...

      I always liked this Phrack article about how to exploit an apparently unexploitable bug [phrack.org]. After reading it, I would be very cautious about classifying a bug as unexploitable.
  • Stack (Score:4, Interesting)

    by Sloppy (14984) * on Friday April 29, 2005 @07:06PM (#12388766) Homepage Journal
    The x86 has always been known to be inferior. But the most blatant problem isn't lack of execution permission bits by page, or anything subtle. The biggest problem is something so huge and obvious, that people have stopped being able to see it, because they're completely immersed in it.

    On x86, the stack grows backwards. Backwards! A stack overflow ought to overwrite unallocated space, not earlier stack frames and return addresses. It's totally insane.

    But I guess when you live with insanity year after year, you get used to it.

    • Re:Stack (Score:2, Insightful)

      by kernel_dan (850552)
      The stack grows downwards and the heap grows upwards. They grow in opposite directions to minimize wasted space. Would you prefer heap overflows to overwrite the stack frames and return addresses?

      Careful programming when dealing with memory in a language without built-in bounds checking is the solution to this problem.
    • Re:Stack (Score:5, Interesting)

      by CajunArson (465943) on Friday April 29, 2005 @07:18PM (#12388877) Journal
      Bzzztt... wrong, thank you for playing. As I learned firsthand when coding buffer overflows in a security class, it is a simple, easy, and wrong assumption to think that the stack growing down is the main reason you can do buffer overflows. The main reason is that you are allowed to write where you're not supposed to, no matter what direction the stack grows. Hint: remember what a stack is exactly... a buffer overflow can just as easily write up into another function's space and execute the overflow from there.
      It also turns out that the various random relocation and offset tricks, while helpful, can still be defeated, so simply growing the stack in a different direction is not a real solution.
    • It's totally insane.

      It's not totally insane. The stack and data areas both grow into unallocated space. In a system without paging (such as the 8080, from which the 386+ is ultimately descended), this is the easiest scheme to allocate. It only becomes a problem on stack overflow or memory exhaustion. It's also the way most architectures work (at least the 8080, 8086, 80286, 80386+, 6502, 6800+, 68000+, VAX, etc., which is to say every architecture I've ever programmed in assembly). I have a PPC assembly book

    • The stack grows downward on the vast majority of architectures, including PowerPC, ARM, SPARC, and MIPS. The only architectures I've come across where stacks grow up are PA-RISC and i960.

      So if you consider this a sign of CPU inferiority, well, I hope you're an HP-UX fanboy.
    • Re:Stack (Score:5, Insightful)

      by pclminion (145572) on Friday April 29, 2005 @07:36PM (#12389000)
      A stack overflow ought to overwrite unallocated space, not earlier stack frames and return addresses. It's totally insane.

      Not really. You assume that all buffer overflows overflow in the "upward" direction. It's just as easy, in C, to code a loop that progresses backward through memory. I've had many reasons and occasions to do it. Simply making the stack grow upward instead of downward won't solve the underlying basic issue, which is that without proper bounds checking, the program can overwrite memory it's not supposed to.

      Besides, it's incredibly convenient for the stack to grow downward. Program code and data starts at the bottom of virtual memory, and the stack starts at the top. You just map in new page frames as necessary. If the stack grew the other direction, it would either have to be limited in size, or you'd have to shift it in memory if it grew too large. Shifting it is practically impossible, since you'd have to find all program pointers into the stack and update them all to reflect the new stack. Gad, I don't even want to think about it.

      • Here's a novel question: Why even put the stack and heap in the same virtual page on modern operating systems?

        I mean, there were a lot of design decisions that made sense back in the day, but I'm always wondering if a fresh ground-up processor design might not have some advantages...
          I think everyone agrees about this, but it's the same thing that keeps the steering wheel as the standard control mechanism for the car:

          why change something so low-level when it works so far?

          Note, I'm not saying that it shouldn't be changed, but refactoring code is one of the most ignored aspects of software and hardware development, simply because you've got to include backwards compatibility.
      • Re:Stack (Score:4, Insightful)

        by ebuck (585470) on Friday April 29, 2005 @10:29PM (#12389941)
        Up and down mean nothing in a computer, that is, they mean just as much as the stack growing left to right, or right to left. Or even upper-right corner to lower-left corner, diagonally.

        0x00000000 isn't the mathematical number 0, and neither is 0xFFFFFFFF, unless you assign that meaning to it. A perfect example is floating point numbers, which mean something totally different from the same-sized integer, which in turn is totally different from the same-sized memory address.

        As others have already said, it's not the direction; it's the ability to do something that you shouldn't be able to do.
    • Bollocks! (Score:5, Informative)

      by EmbeddedJanitor (597831) on Friday April 29, 2005 @08:44PM (#12389449)
      On many/most RISC architectures you can grow the stack in either direction. By convention, most ARM code runs a down-growing stack.

      The stack direction has nothing to do with security. You can still have stack protection running up or down stacks. Once you have a reasonable MMU, all further problems are due to software design.

  • by HotNeedleOfInquiry (598897) on Friday April 29, 2005 @07:06PM (#12388774)
    But doesn't pretty much everyone use a compiler? And doesn't the compiler pretty much insulate you from such issues? What am I missing?
    • The issue is whether the stack grows downwards (from higher memory address to lower) or upwards (from lower memory addresses to higher). If the stack grows downwards, then overrunning an array allocated on the stack (due to missing bounds check, which is bad programming) can overwrite a return address on the stack. Then the function can return to an arbitrary address. If the stack grew upwards, this would not be possible. No, the compiler cannot insulate you from basic CPU design. On the other hand, not bou
  • by ikewillis (586793) on Friday April 29, 2005 @07:08PM (#12388786) Homepage
    I have administered dozens of Solaris/SPARC systems over the years. While implementing a buffer overflow on this architecture may be less trivial, I can't tell you how many buffer overrun patches I've installed over the years in various patch clusters.

    The real disadvantage of x86 over a RISC architecture like SPARC is that it doesn't have page protections (not to be confused with real mode segmentation, which essentially disables the protected mode i386 MMU) where pages containing data and code are marked differently, so data pages are non-executable. sparcv9 implements a non-executable user stack by default, whereas it's a configurable option for sparcv8 binaries.

    This has all changed with x86-64/AMD64/EM64T/x64/WHATEVER, which has brought a noexec bit to memory pages and allows hardware buffer overflow protection similar to SPARC. This still isn't a silver bullet for buffer overflows, but it's certainly better than nothing.

    • Interestingly, PowerPC lacks a per-page execute disable as well. It has nothing to do with whether an architecture is RISC or not.
    • The interesting thing here is that you're totally welcome to use segmentation in protected mode; it's just that no one does. The paging mechanisms almost, but not quite, duplicate the protection offered by enforcing segmentation.

      Right now modern operating systems simply set the segmentation registers to have a base of 0 and a limit of the end of memory, thus creating one big segment. You could have segments as well as paging, but that would be a pain in the ass. Of course we now see the problem with fai

  • Well... (Score:4, Funny)

    by autocracy (192714) <slashdot2007NO@SPAMstoryinmemo.com> on Friday April 29, 2005 @07:08PM (#12388796) Homepage
    F00FC7C8

    Still here? Dammit...

  • big deal.. (Score:2, Funny)

    by pixas (711468)
    ..so what if I have 0.999997 viruses in my CPU?
  • Thanks, Slashdot -- I actually read that boatload of ignorant gibberish, and now I'm measurably dumber than I was before I clicked the link. Keep this up and I too will be making specious arguments about "RISC" and "CISC".
  • Now really, we all know that x86 sucks for various reasons (A20, anyone?), so why does it draw attention when somebody says it out loud? It wasn't so much designed as cobbled together, kludge upon kludge, to retain backwards compatibility. It's all known!

    Granted, if it kills x86 once and for all before yet another actually usable arch like Alpha is eradicated, it's not bad.

  • by carcosa30 (235579) on Friday April 29, 2005 @07:12PM (#12388831)
    SO! We now know the truth: Microsoft is blameless for the shoddy security of their products. It's all the fault of the x86 architecture.

    After all, how could Microsoft be expected to learn the intricacies of their primary platform and write code that does what it's supposed to?

    We have been lied to.
  • by ArbitraryConstant (763964) on Friday April 29, 2005 @07:13PM (#12388835) Homepage
    For starters, Windows does run on RISC.

    The stack behaviour of PowerPC is just as predictable as x86, and it's just as easy to perform a buffer overflow attack on vulnerable code.

    PowerPC doesn't offer more per-page protection than x86, and it offers less than x86-64, as x86-64 can disable execution on a per-page basis.

    It's possible to do things on both architectures like adding a random offset to the stack or loading libraries at random locations. This makes attacks much more difficult, and OSes like OpenBSD do them on both architectures. OSes like Linux or MacOS don't do them on any architecture. Stack protections like propolice are a compile-time option and can be used on any OS on any architecture.

    The mainstream architectures of today do very little to distinguish themselves from each other security-wise. One of the few features that does differ from one architecture to another, per-page execute protection, is not available on PowerPC.

    This guy doesn't know what he's talking about.
    • OSes like Linux or MacOS don't do them on any architecture. Stack protections like propolice are a compile-time option...

      And no Linux distribution allows you to make use of your own compiler flags?
      • "And no Linux distribution allows you to make use of your own compiler flags?"

        Of course you can set your own compile flags. That would be why I said "Stack protections like propolice are a compile-time option and can be used on any OS on any architecture." You clipped that part of my sentence in your quote.

        The other features I mentioned require OS support, as they involve small but significant changes to the internals of the OS.
    • OSes like Linux or MacOS don't do them on any architechtures.

      Linux does [lwn.net] support limited stack and library randomization. However, there are questions [stanford.edu] as to the effectiveness of these techniques.

    • This guy doesn't know what he's talking about.

      Probably. Dunno since I stopped RTFAs a while ago.

      However, the IBM PowerPC 970FX aka Apple G5 processors have had NX for a while. Partial Linux support already exists. Check it out.

      http://lwn.net/Articles/126862/ [lwn.net]

      I like the 970FX (apart from its tiny cache). Shame Apple has a monopoly on the desktop systems, and you have to buy their OS to run Linux on one.
    • by pammon (831694) on Saturday April 30, 2005 @03:05AM (#12390880)
      > The stack behaviour of PowerPC is just as
      > predictable as x86, and it's just as easy to
      > perform a buffer overflow attack on vulnerable
      > code.

      No it's not.

      For example, here's a function vulnerable to a classic buffer overflow:

      void security_hole(char* s) {
          char buff[128], *ptr = buff;
          while (*ptr++ = *s++);
      }

      It's more difficult to turn this buffer overflow into arbitrary code execution on PowerPC because the link register isn't spilled to the stack (so you have to overwrite some function's return address higher up in the call chain), which takes more work and requires a larger payload; larger instruction sizes mean you need a still larger payload, and make it trickier to build an instruction stream with no zero bytes; and in any case you may have to flush the instruction cache to force it to see your changes -- no easy task.

      Leaf functions, functions that take advantage of tail-call optimizations, and functions that move the link register into a GPR rather than the stack don't let you overwrite the return address at all, which is never the case on x86.
  • by rice_burners_suck (243660) on Friday April 29, 2005 @07:20PM (#12388890)
    In all, I don't think the processor is really responsible for most of these problems. I think it is the design and implementation of software. Things simply must be done correctly, or computers will go haywire no matter what kind of processor they have.

    Linux, BSD, and other *nix systems are reasonably well protected through the design of the system and the widespread use of common server programs, which are checked and re-checked for these problems by a variety of people and organizations. Also, GCC provides ProPolice, which can help lock things down a bit more. I think this particular problem mostly applies to systems running Windows.

    Microsoft's business problem in this regard is that they have no control over the applications running in Windows, and they provide a default Windows install that is quite open and vulnerable. Locking down the ports and getting rid of the most dain-bramaged policies in Windows is only one part of the solution. Vulnerabilities in application programs can still be used to break into these systems, and Microsoft will ultimately be blamed.

    Perhaps the best thing Microsoft can do is integrate something like ProPolice into the C and C++ libraries used to compile programs for Windows. This would make a big difference in protecting the stack of running programs that are not designed with security as a priority.

    If x86 really is less secure by nature, it probably wouldn't hurt if they'd put their virtualization engine (similar in function to VMware but not even half as good) right into the core OS. Under such a design, only the Windows kernel would run directly on the processor; the rest of the operating system and all of the application programs would run with the same x86 instruction set, but through the virtualization engine. There, checks could be made to prevent the most common vulnerabilities of the x86 processor from being utilized.

  • NX bit (Score:2, Interesting)

    by BinaryJono (546830)
    While the NX bit can help prevent the execution of malicious code on the stack after a buffer overflow, it doesn't solve the security problem posed by overflows. Return-into-libc attacks can easily be executed and will become much more prevalent as NX-enabled PCs filter into the mainstream. Address space randomization can help make return-into-libc attacks harder on 64-bit architectures but is pretty useless on 32-bit archs.
  • Where have we heard that [atstake.com] before?
  • I just read this article recently in Embedded Systems Programming magazine. http://www.embedded.com/showArticle.jhtml?articleID=55301875 [embedded.com] After a detailed explanation of the hardware protection features built into the x86 (since the 80386), the author makes the following statement towards the end of the article: "Too bad Microsoft doesn't use this feature. Windows has been plagued by buffer-overflow bugs that could easily be prevented by the processor's segmentation features. Alas, even though these fe
    • "Too bad Microsoft doesn't use this feature. Windows has been plagued by buffer-overflow bugs that could easily be prevented by the processor's segmentation features. Alas, even though these features have been built into every x86 chip for more than 15 years, Microsoft has never used them. Instead, Windows creates a "flat" memory system with no segmentation, no tasking, no bounds checking, and no privilege protection, and then struggles to duplicate all those features in software. The result has been famo

    • Using a segmented address space, where the Stack and Code are kept in what are effectively different address spaces, would do much to mitigate the effect of buffer overruns. On the other hand, the NX bit on x86-64 accomplishes basically the same thing, without the overhead of having to use long pointers to access data on the stack.

      Neither of them are really all that robust though, since any time you can overwrite the return address on the stack, you can cause execution to veer off to somewhere else. Maybe
  • However, all other non-Harvard architectures will suffer the same fate, as they all have the same flaw. RISC isn't a magic trick to fix all evils.

    It's simple: don't mix code and data.
  • A longtime friend has a Sun 360 that's permanently on the net. A 25 MHz MIPS processor isn't exactly a desirable target for skript kiddez, nor does it have the ability to saturate a network link... and you'd need 40 times as many to create your vast Zombie Horde.

    Christ, creating an SSH key takes a good 30 seconds.
  • I think a future version of X86 should have virus execution assistance in hardware.

    Given that you just can't stop the things, why not offload the burden of running them from the processor?

    BIPs (Bots Infected per Second) could be the metric for performance.
  • by Branka96 (628759) on Friday April 29, 2005 @09:33PM (#12389687)
    CAN-2004-1134 is a buffer overflow issue. The Mac is susceptible to buffer overflows.
    Take e.g. the iSync issue [apple.com]. Apple doesn't go into details, but if you do a Google search on "isync vulnerability" you will find:
    "The vulnerability is caused due to a boundary error in the handling of the "-v" and "-a" command line options. This can be exploited to cause a buffer overflow by supplying an overly long argument (over 4096 bytes). Successful exploitation allows execution of arbitrary code with the privileges of the mRouter application."
    A proof of concept exploit [linuxsecurity.com] has been published; it opens a root shell.
    When the PowerPC jumps to a subroutine, the return address is stored in the lr register. The first thing the prologue code in the subroutine does is put the address on the stack (freeing up the register for further function calls). So, a would-be hacker can overwrite the return address. For a description of how to take advantage of buffer overflows on the Mac, see "Smashing The Mac For Fun & Profit" [phathookups.com].
  • by Anonymous Coward on Friday April 29, 2005 @10:53PM (#12390072)
    Every once in a while there is somebody who claims a certain CPU type, operating system, etc. is more secure simply by its basic structure. The main point made here is that x86 is less secure because of its process memory layout. Let's take a look at a few known and popular high-impact vulnerabilities and examine the reasons why they could be exploited:
    • The telnetd AYT heap overflow (2002) could be exploited on x86/*BSD systems specifically because of their memory layout and little-endianness, while MIPS and SPARC systems were saved by their big-endian, 64-bit addresses. Yet on x86/Linux it was not exploitable, because of a different memory layout within the heap.
    • The Solaris login heap overflow (2001) could be exploited on both x86 and SPARC. The reason was that the addresses were created by the vulnerable code itself.
    • The SSH1 CRC32 overflow (2000) has been exploited on every known architecture -- x86, SPARC, MIPS, etc. -- because the data used to overwrite memory was created by the vulnerable code itself, hence endianness and order did not matter.
    Now, there are cases where RISC architecture makes exploitation more difficult, or even impossible. But there is roughly an equal number of cases where x86 is saved. The reason is not to be found within the architecture alone, but within differences in the whole chain from CPU to process memory layout to ABI and runtime environment. The following are especially important in determining whether a vulnerability can be exploited on a given system:
    • CPU, word width and endianness
    • process address layout
    • stack frame handling and layout; how registers are saved (register windows?); the order of registers/parameters/locals/alloca
    • heap handling (i.e. what malloc allocation system is used. For example, most *BSD systems use an out-of-chunk management to control the heap structure itself, while glibc uses an inband management, which is by nature more likely to allow exploitation)
    • compiler optimizations, e.g. whether small functions are inlined, stack frames omitted, etc.
    • ...
    Speaking with more than eight years of exploit development experience, there is much more to consider than just the CPU type.
  • by Animats (122034) on Friday April 29, 2005 @11:07PM (#12390152) Homepage
    It's certainly possible to build machines which prevent buffer overflows. Burroughs did that from 1958 to about 1990, quite successfully. Every array is in its own segment. Memory addresses aren't numbers; they're a sequence of descriptors, more like a pathname than a pointer. The last element of the pathname is the array subscript. A descriptor that goes off the end of an array generates a subscript-out-of-range exception.

    But it's tough to run C on that kind of architecture. [brighton.ac.uk] C wants pointers to be addresses. The "array is a pointer" convention is a bad fit for a true segmented architecture. You can run Pascal just fine, but running C is tough. It can be done, but it basically requires allocating all the variables in one big "array" at the hardware level, so you lose the protection. When C came in, the Burroughs machines (by then the Unisys A series) died off.

    So it's quite possible to fix this, but you have to dump C. This may happen as Java and C# get more traction.

    C++ doesn't help. It's part of the problem.

  • by 44BSD (701309) on Friday April 29, 2005 @11:39PM (#12390274)
    http://cvs.openbsd.org/papers/auug04/ [openbsd.org]

    Theo talks about how OpenBSD uses various available processor features to increase system attack resilience, with minimal performance impact. The design choices made for architectures with differing degrees of per-page protection are presented. The concepts are not at all OpenBSD-specific, although the implementation discussed is, of course, OpenBSD's.
  • by haggar (72771) on Saturday April 30, 2005 @03:53AM (#12391013) Homepage Journal
    I would have said that the most obvious hardware feature protecting against stack overflows is the Harvard architecture (vs. the von Neumann architecture present in almost all CPUs today).

    In a Harvard architecture, data and program memory are separate and separately accessed. This has a speed benefit, as you can access data in the same cycle you access program memory, but the other advantage is that a stack overflow will not corrupt your program code. For example, the Atmel AVR RISC microcontroller family uses a Harvard architecture.
  • by Glock27 (446276) on Saturday April 30, 2005 @09:06AM (#12391704)
    The author of the cited article is clueless on several fronts, but he does have a good basic point: if you're choosing between Windows, Mac and Linux for the "best" computing platform, Mac is looking awfully attractive these days.

    In another article on Slashdot today it's mentioned that Eric Raymond recommends that Microsoft "open document formats" and "adhere to standards". Document formats aren't really an issue with Apple, and Apple is doing a very nice job of adhering to open standards these days: BSD Unix, Java, OpenGL, PDF, TCP/IP, X11... Apple is much more programmer-friendly than it has ever been. The G5 machines are also very competitive on performance.

    If you need access to commercial applications, or would rather spend money instead of time to accomplish your computing tasks, Mac makes a lot of sense compared with Linux. Windows, for me, is a distant third due to the time lost dealing with security issues, and a general distaste for programming something that inelegant. Besides, I can target Windows using Java with very little pain.

    Just my $.02.

  • by generationxyu (630468) on Saturday April 30, 2005 @11:06AM (#12392137) Homepage
    Yes, it's true, NX will protect you from the simple char buf[512]; strcpy(buf, untrusted_data) case. But that doesn't mean you're secure. What if the return address the attacker supplies isn't on the stack? What if it's in a predictable malloc() buffer? OK, set NX on malloc()ed pages too. What if it's in the code segment? You can't make that NX. What if it's in libc? Once again, can't make that NX. Lots of undesirable stuff can be done without executing stack code.

    Random offsets won't help much -- they'll help some, but what if you can write a LOT of data into that buffer? Give it a LARGE NOP sled.

    Detect when a process is doing a lot of NOPs in a row and kill it? Ok. Use "AIAIAIAIAIAIAIAI..." 'A' = 0x41 = inc %ecx, 'I' = 0x49 = dec %ecx. Together, they are an effective NOP. Hell, most of the time, "AAAAAAAAA..." is an effective NOP. Does an attacker really care what's in ECX?

    The problem is NOT the architecture, NOT the OS, and NOT the language. It's not a problem with libc, stdio, strcpy, or anything else. If you haven't figured this out by now, you might want to read about computer architecture -- computers do what you tell them to. I can write secure code in which I strcpy() from untrusted data into a fixed-size buffer on the stack, on an x86 running Windows with no NX. Hell, I'll even do it in real mode.

    I'm not a DJB fanboy, but he does have quite a few good points. Programmers are lazy. Write secure code.
