Is the x86 Architecture Less Secure?

An anonymous reader asks: "Paul Murphy at CIO Today reports that a specific Windows buffer overflow vulnerability 'depends on the rigid stack-order execution and limited page protection inherent in the x86 architecture. If Windows ran on Risc, that vulnerability would still exist, but it would be a non-issue because the exploit opportunity would be more theoretical than practical.' He also implies that other Windows vulnerabilities are actually facilitated by having an x86 chip." How does the x86 processor compare with other architectures when it comes to processor-based vulnerabilities? How well have newer additions, like the Execute Disable bit, helped in practical situations?
This discussion has been archived. No new comments can be posted.

  • RISCy (Score:5, Insightful)

    by FidelCatsro ( 861135 ) <.fidelcatsro. .at. .gmail.com.> on Friday April 29, 2005 @07:02PM (#12388741) Journal
    If Windows ran on a RISC arch, then it would be just as insecure.
    The issue is not that this is an insecurity in x86, but that Windows uses it. If you know of a flaw in your architecture, then why are you programming to that flaw?
  • by winkydink ( 650484 ) * <sv.dude@gmail.com> on Friday April 29, 2005 @07:02PM (#12388743) Homepage Journal
    2 articles in under 4 hours submitted by an "anonymous reader" that point to Paul Murphy at CIO Today. Hmmmm... Astroturf anybody?
  • Funny... (Score:5, Insightful)

    by scovetta ( 632629 ) on Friday April 29, 2005 @07:05PM (#12388761) Homepage
    If Windows ran on Risc, that vulnerability would still exist, but it would be a non-issue because the exploit opportunity would be more theoretical than practical.

    Funny how exploits that are "just theoretical" don't stay that way forever...
  • by ArbitraryConstant ( 763964 ) on Friday April 29, 2005 @07:13PM (#12388835) Homepage
    For starters, Windows does run on RISC.

    The stack behaviour of PowerPC is just as predictable as x86, and it's just as easy to perform a buffer overflow attack on vulnerable code.

    PowerPC doesn't offer more per-page protection than x86, and it offers less than x86-64, as x86-64 can disable execution on a per-page basis.

    It's possible on both architectures to do things like add a random offset to the stack or load libraries at random locations. This makes attacks much more difficult, and OSes like OpenBSD do them on both architectures. OSes like Linux or MacOS don't do them on any architecture. Stack protections like ProPolice are a compile-time option and can be used on any OS on any architecture.

    The mainstream architectures of today do very little to distinguish themselves from each other security-wise. One of the few features that differs from one architecture to another, per-page execute protection, is not available on PowerPC.

    This guy doesn't know what he's talking about.
  • Re:RISCy (Score:5, Insightful)

    by nocomment ( 239368 ) on Friday April 29, 2005 @07:14PM (#12388844) Homepage Journal
    Bingo. Well said. OpenBSD runs on x86, does it have this flaw?
  • by Locke2005 ( 849178 ) on Friday April 29, 2005 @07:14PM (#12388849)
    The issue is whether the stack grows downwards (from higher memory addresses to lower) or upwards (from lower memory addresses to higher). If the stack grows downwards, then overrunning an array allocated on the stack (due to a missing bounds check, which is bad programming) can overwrite a return address on the stack. The function can then return to an arbitrary address. If the stack grew upwards, this would not be possible. No, the compiler cannot insulate you from basic CPU design. On the other hand, not bounds-checking array accesses should always be considered a bug.
  • Re:Stack (Score:2, Insightful)

    by kernel_dan ( 850552 ) <slashdevslashtty ... m minus math_god> on Friday April 29, 2005 @07:16PM (#12388861)
    The stack grows downwards and the heap grows upwards. They grow in opposite directions to minimize wasted space. Would you prefer heap overflows to overwrite the stack frames and return addresses?

    Careful programming when dealing with memory in a language without builtin bounds checking is the solution to this problem.
  • Re:Stack (Score:2, Insightful)

    by Sloppy ( 14984 ) * on Friday April 29, 2005 @07:18PM (#12388874) Homepage Journal
    why would up equal non-allocated?
    Well, some direction needs to be unallocated.
    Overflowing memory is bad, period.
    Hey, I can't say "bugs are OK." It's just a question of how catastrophic you want the bugs to be. Maybe having them always be a disaster (because the return address gets overwritten) has some advantages, in that it makes bugs less subtle, so the developer is more likely to find them. But judging by history, that seems not to have worked out, given that even non-programmers now know what "buffer overflow" means.
  • Re:Stack (Score:5, Insightful)

    by pclminion ( 145572 ) on Friday April 29, 2005 @07:36PM (#12389000)
    A stack overflow ought to overwrite unallocated space, not earlier stack frames and return addresses. It's totally insane.

    Not really. You assume that all buffer overflows overflow in the "upward" direction. It's just as easy, in C, to code a loop that progresses backward through memory; I've had many reasons and occasions to do it. Simply making the stack grow upward instead of downward won't solve the underlying issue, which is that without proper bounds checking, the program can overwrite memory it's not supposed to.

    Besides, it's incredibly convenient for the stack to grow downward. Program code and data starts at the bottom of virtual memory, and the stack starts at the top. You just map in new page frames as necessary. If the stack grew the other direction, it would either have to be limited in size, or you'd have to shift it in memory if it grew too large. Shifting it is practically impossible, since you'd have to find all program pointers into the stack and update them all to reflect the new stack. Gad, I don't even want to think about it.

  • Not so... (Score:4, Insightful)

    by dhowells ( 251561 ) <slashdot@domhowells.com> on Friday April 29, 2005 @07:37PM (#12389011) Homepage Journal
    Although the insecurity of code that is only 'theoretically' exploitable ought to be fixed (we all prefer bug-free code, right?), many theoretical exploits will never be practically exploited, for technical reasons.

    There is a distinction to be made here between code which is exploitable but for which no public exploit code or method has been released - in which case it 'won't stay that way forever' - and code wherein the calculation of an arbitrary or runtime offset (e.g. for a buffer overflow) is impossible and guesswork is impractically unlikely. Theoretical insecurities of the latter type are very likely to 'stay that way forever'.
  • Re:RISCy (Score:1, Insightful)

    by Anonymous Coward on Friday April 29, 2005 @07:37PM (#12389014)
    Yep, that and the BSD core ;) hehe.
    Linux and the BSDs (Free, Darwin, Open, Net) don't have these problems.
    If you choose a specific architecture such as x86, then you don't program to its weaknesses; you go for its strengths, like the BSDs and Linux do.

    Fcat!
  • ms supporter? (Score:4, Insightful)

    by Blitzenn ( 554788 ) * on Friday April 29, 2005 @07:39PM (#12389032) Homepage Journal
    Did you happen to actually read the article? The guy ends by blatantly stating that there is no sane reason to choose a PC over a Mac. How can you possibly see this guy as an MS supporter... unless, of course, you didn't really read the article.
  • Re:virtulization (Score:2, Insightful)

    by Anonymous Coward on Friday April 29, 2005 @07:40PM (#12389039)
    Except by VMware and Virtual PC. Oh wait, I guess it can be virtualized!
  • by Anonymous Coward on Friday April 29, 2005 @07:42PM (#12389050)

    Blame the machine or blame the programmer?

    How about blaming both?

    A machine can make it more difficult for extremely common types of attack to be successful. If it doesn't, then it shares some of the blame.

    A programmer can avoid troublesome functions and coding styles, can test with bad data more thoroughly, and can use automated tools to catch these problems before they are a security issue. If they don't, then they share some of the blame.

    A programming language can mitigate these issues by providing standard library functions that aren't vulnerable to being misused in this way, bounds-check, provide higher-level libraries, etc. If it doesn't, then it shares some of the blame.

    A manager can reduce risk by giving the programmers the resources to do their jobs properly, mandating safer languages, instituting code reviews, pushing back schedules instead of skipping QA, etc. If they don't, they share some of the blame.

    Security is only as strong as the weakest link, and nobody in the chain is ever perfect.

  • Re:RISCy (Score:5, Insightful)

    by Michalson ( 638911 ) on Friday April 29, 2005 @07:49PM (#12389105)
    Microsoft isn't the one relying on it; they are just supporting it to a degree because they understand the marketing importance of having backwards compatibility for all the stuff people use (from a Joe User/Bob Company perspective, what's the point of "upgrading" to the latest version if suddenly your software/hardware stops working). Microsoft has actually taken a lot of flak for making things tighter; a big one being the 9X->NT path, which made a lot of API calls do a better job of checking parameters, resulting in sloppy programs being broken. More recently, the SP2 update broke programs that mess with memory the way a virus/exploit does. So make up your mind - are they bad for maintaining backward compatibility that is less secure/less stable, or are they bad for tightening things up and thus breaking a few badly written 3rd-party programs people rely on?
  • by AaronD12 ( 709859 ) on Friday April 29, 2005 @08:19PM (#12389303)
    You are correct that you can write x86 code without buffer overflows. I've thought that dynamically-assigned buffers were trouble ever since I first learned about them.

    What the author of this article is saying is that PowerPC-based computers would have only a 1-in-6 chance of executing arbitrary code spilled over actual code via a buffer overflow.

    Moreover, data and code "segments" (I'm using the x86 term here) just don't work the same way on PowerPC. This essentially prevents arbitrary code from being executed on this particular RISC processor.

    This is not a Mac-specific thing. Any computer (RS6000, AS/400, IBM xSeries, etc.) with a PowerPC family processor will have this benefit.

    Windows might still be insecure, but it would be less insecure running on a PowerPC RISC processor.

    -Aaron-

  • by HotNeedleOfInquiry ( 598897 ) on Friday April 29, 2005 @08:43PM (#12389442)
    I admire a troll that's not afraid to tell a troll that he's trolled a troll.
  • Re: Biased (not) (Score:3, Insightful)

    by screenrc ( 670781 ) on Friday April 29, 2005 @10:19PM (#12389900)
    Nowadays, what I see on the news are stories about Michael Jackson, Mrs. Stuart, and the Pope. This is what passes for "news". Is the media biased towards the left or towards the right? When all they do is talk about the unimportant, the media is not biased at all! They are just silly.
  • Re:Stack (Score:4, Insightful)

    by ebuck ( 585470 ) on Friday April 29, 2005 @10:29PM (#12389941)
    Up and down mean nothing in a computer, that is, they mean just as much as the stack growing left to right, or right to left. Or even upper-right corner to lower-left corner, diagonally.

    0x00000000 isn't the mathematical number 0, and neither is 0xFFFFFFFF, unless you assign that meaning to it. A perfect example is floating point numbers, which mean something totally different from the same-sized integer, which in turn is totally different from the same-sized memory address.

    As others have already said: it's not the direction, it's the ability to do something that you shouldn't be able to do.
  • Re: Biased (not) (Score:2, Insightful)

    by Brandybuck ( 704397 ) on Friday April 29, 2005 @11:07PM (#12390150) Homepage Journal
    Mr. Jackson and Mrs. Stuart are not that important in the grand scheme of things, but the Pope is. That's because he's the head of the largest and most influential denomination of the largest and most influential religion in the world. I would have to check a current almanac, but I suspect he's a leader to more people than any other leader in the world.

    Just because he doesn't have armies and navies, or platinum albums, or a line of towels at KMart, doesn't make him unimportant. He may not be important to you, but he's important to half a billion people or more. That's significant.

    If he's only a tenth as influential as his predecessor was, his election is more than newsworthy.
  • by Anonymous Coward on Friday April 29, 2005 @11:40PM (#12390280)
    And the solution is FORTRAN.

    No seriously, flame all you want, but FORTRAN, even FORTRAN 77, is perfectly suited to development in a modern environment. I don't think you can prove otherwise.
  • by betasam ( 713798 ) <betasam@@@gmail...com> on Saturday April 30, 2005 @12:44AM (#12390498) Homepage Journal
    It looks like most operating systems rely on C. Wouldn't C# or Java require a VM, and hence a little shakedown of the OS architecture? And wait, what would the VM be implemented in? There is a strong case that a good hardware architecture can only help; a bad one, irrespective of what runs on top of it, will always be a source of trouble. Application developers have always tried to rely on a nice language + compiler + framework as these evolve.
  • by pammon ( 831694 ) on Saturday April 30, 2005 @03:05AM (#12390880)
    > The stack behaviour of PowerPC is just as
    > predictable as x86, and it's just as easy to
    > perform a buffer overflow attack on vulnerable
    > code.

    No it's not.

    For example, here's a function vulnerable to a classic buffer overflow:

    void security_hole(char* s) {
        char buff[128], *ptr = buff;
        while ((*ptr++ = *s++)); /* copies s into buff with no bounds check */
    }

    It's more difficult to turn this buffer overflow into arbitrary code execution on PowerPC: the link register isn't spilled to the stack here, so you have to overwrite some function's return address higher up in the call chain, which takes more work and requires a larger payload; the larger instruction size means you need a still larger payload; the larger instruction size also makes it trickier to build an instruction stream with no zero bytes; and in any case you may have to flush the instruction cache to force it to see your changes - no easy task.

    Leaf functions, functions that take advantage of tail-call optimizations, and functions that move the link register into a GPR rather than the stack don't let you overwrite the return address at all, which is never the case on x86.
  • I call BS (Score:3, Insightful)

    by mangu ( 126918 ) on Saturday April 30, 2005 @10:37AM (#12392002)
    FORTRAN, even FORTRAN 77, is perfectly suited to development in a modern environment.


    Oh, yeah? Try getting data from an Oracle database in FORTRAN. They used to have something called, IIRC, pro*fortran, but no more. It took me about six months of interaction with people deeper and deeper in the Oracle organization to find out that that product is "deprecated" and no longer supported. Have you ever tried porting a FORTRAN program from VAX/VMS to whatever modern environment you use? Or from a PDP-11? So here is one reason why FORTRAN is dead: important software companies no longer support it.


    Another reason: try finding programmers who are experienced in it. Where I work they have a 20-year-old system entirely written in FORTRAN. In the last twelve months, three junior engineers have quit their jobs because an old dinosaur insists that they must keep doing everything in FORTRAN instead of calling the old functions from C programs. What's the point of having "FORTRAN" in your resume, if the job market for that skill is so restricted?


    But these are practical reasons, you wanted technical reasons, I guess. So try this: how do you do string manipulations? Functions that are one-liners in C become two pages long in FORTRAN. Or how about dynamic memory allocation?


    I have used FORTRAN a lot in the past. I have seen its long and slow agony. I have seen the countless different standards, the many people and organizations who have said, "sure, you can do that in FORTRAN, do it like this" and have come with a solution that's incompatible with everything else.


    Maybe FORTRAN could have evolved differently, if it wasn't so much a "commercial" software. All companies did incompatible improvements to FORTRAN so their marketing people could say "ours is the best FORTRAN in the market". Endless forking while C evolved in a standardized way. Today, to link a VAX FORTRAN library with an Oracle-accessing FORTRAN program originally written in AIX, for instance, is so hard that the easiest solution is to rewrite everything in C.


    But I know people like you who believe FORTRAN is still the solution. As I mentioned, they are running through junior engineers at a fast rate. Luckily, that's not my department, here we do everything in either C/C++ or PHP.

  • by generationxyu ( 630468 ) on Saturday April 30, 2005 @11:06AM (#12392137) Homepage
    Yes, it's true, NX will protect you from the simple char buf[512]; strcpy(buf, untrusted_data). But that doesn't mean it's secure. What if the return address the attacker supplies isn't on the stack? What if it's in a predictable malloc() buffer? Ok, set NX on malloc()s. What if it's in the code segment? You can't make that NX. What if it's in libc? Once again, can't make that NX. Lots of undesirable stuff can be done without executing stack code.

    Random offsets won't help much -- they'll help some, but what if you can write a LOT of data into that buffer? Give it a LARGE NOP sled.

    Detect when a process is doing a lot of NOPs in a row and kill it? Ok. Use "AIAIAIAIAIAIAIAI..." 'A' = 0x41 = inc %ecx, 'I' = 0x49 = dec %ecx. Together, they are an effective NOP. Hell, most of the time, "AAAAAAAAA..." is an effective NOP. Does an attacker really care what's in ECX?

    The problem is NOT the architecture, NOT the OS, and NOT the language. It's not a problem with libc, stdio, strcpy, or anything else. If you haven't figured this out by now, you might want to read about computer architecture -- computers do what you tell them to. I can write secure code in which I strcpy() from untrusted data into a static buffer on the stack, on an x86 running Windows with no NX. Hell, I'll even do it in real mode.

    I'm not a DJB fanboy, but he does have quite a few good points. Programmers are lazy. Write secure code.

  • Re:Media Watch (Score:3, Insightful)

    by Nutria ( 679911 ) on Saturday April 30, 2005 @01:42PM (#12392851)
    if the press release is well enough written, balanced and has correct facts

    What's the point of a balanced press release?

    If you're not pumping your "side", you're not doing a good job.
