Is the x86 Architecture Less Secure?
An anonymous reader asks: "Paul Murphy at CIO Today reports that a specific Windows buffer overflow vulnerability ' depends on the rigid stack-order execution and limited page protection inherent in the x86 architecture. If Windows ran on Risc, that vulnerability would still exist, but it would be a non-issue because the exploit opportunity would be more theoretical than practical.' And implies that other Windows vulnerabilities are actually facilitated by having an x86 chip." How does the x86 processor compare with other architectures when it comes to processor based vulnerabilities? How well have newer additions, like the Execute Disable Bit, helped in practical situations?
RISCy (Score:5, Insightful)
The issue is not that this is an insecurity in x86, but that Windows is written in a way that exposes it to that flaw.
Is this the Astrodome? (Score:5, Insightful)
Funny... (Score:5, Insightful)
Funny how exploits that are "just theoretical" don't stay that way forever...
This guy doesn't know what he's talking about. (Score:5, Insightful)
The stack behaviour of PowerPC is just as predictable as x86, and it's just as easy to perform a buffer overflow attack on vulnerable code.
PowerPC doesn't offer more per-page protection than x86, and it offers less than x86-64, as x86-64 can disable execution on a per-page basis.
It's possible to do things on both architectures like add a random offset to the stack or load libraries at random locations. This makes attacks much more difficult, and OSes like OpenBSD do them on both architectures. OSes like Linux or MacOS don't do them on any architecture. Stack protections like ProPolice are a compile-time option and can be used on any OS on any architecture.
The mainstream architectures of today do very little to distinguish themselves from each other security-wise. One of the few features that differs from one architecture to another, per-page execute protection, is not available on PowerPC.
This guy doesn't know what he's talking about.
Re:RISCy (Score:5, Insightful)
Re:Maybe I'm missing something here... (Score:3, Insightful)
Re:Stack (Score:2, Insightful)
Careful programming when dealing with memory in a language without built-in bounds checking is the solution to this problem.
Re:Stack (Score:2, Insightful)
Re:Stack (Score:5, Insightful)
Not really. You assume that all buffer overflows overflow in the "upward" direction. It's just as easy, in C, to code a loop that progresses backward through memory. I've had many reasons and occasions to do it. Simply making the stack grow upward instead of downward won't solve the underlying basic issue, which is that without proper bounds checking, the program can overwrite memory it's not supposed to.
Besides, it's incredibly convenient for the stack to grow downward. Program code and data starts at the bottom of virtual memory, and the stack starts at the top. You just map in new page frames as necessary. If the stack grew the other direction, it would either have to be limited in size, or you'd have to shift it in memory if it grew too large. Shifting it is practically impossible, since you'd have to find all program pointers into the stack and update them all to reflect the new stack. Gad, I don't even want to think about it.
Not so... (Score:4, Insightful)
There is a distinction which needs to be made between code which is exploitable but for which no public exploit code or method has been released -- in which case it "won't stay that way forever" -- and code wherein the calculation of an arbitrary or runtime offset (e.g. for a buffer overflow) is impossible and guesswork is impractically unlikely. Theoretical insecurities of the latter type are very likely to "stay that way forever."
Re:RISCy (Score:1, Insightful)
Linux, FreeBSD, Darwin:
if you choose a specific architecture such as x86, then you don't program to its weaknesses; you go for its strengths, like the BSDs and Linux do.
ms supporter? (Score:4, Insightful)
Re:virtulization (Score:2, Insightful)
Re:I gotta call bullshit on this one... (Score:2, Insightful)
Blame the machine or blame the programmer?
How about blaming both?
A machine can make it more difficult for extremely common types of attack to be successful. If it doesn't, then it shares some of the blame.
A programmer can avoid troublesome functions and coding styles, can test with bad data more thoroughly, and can use automated tools to catch these problems before they are a security issue. If they don't, then they share some of the blame.
A programming language can mitigate these issues by providing standard library functions that aren't vulnerable to being misused in this way, bounds-check, provide higher-level libraries, etc. If it doesn't, then it shares some of the blame.
A manager can reduce risk by giving the programmers the resources to do their jobs properly, mandating safer languages, instituting code reviews, pushing back schedules instead of skipping QA, etc. If they don't, they share some of the blame.
Security is only as strong as the weakest link, and nobody in the chain is ever perfect.
Re:RISCy (Score:5, Insightful)
Re:I gotta call bullshit on this one... (Score:3, Insightful)
What the author of this article is saying is that PowerPC-based computers would only have a 1-in-6 chance of executing arbitrary code spilled over actual code via a buffer overflow.
Moreover, data and code "segments" (I'm using the x86 term here) just don't work the same way on PowerPC. This essentially prevents arbitrary code from being executed on this particular RISC processor.
This is not a Mac-specific thing. Any computer (RS6000, AS/400, IBM xSeries, etc.) with a PowerPC family processor will have this benefit.
Windows might still be insecure, but it would be less insecure running on a PowerPC RISC processor.
-Aaron-
Re:1993 called - they want their flamewar back (Score:3, Insightful)
Re: Biased (not) (Score:3, Insightful)
about Michael Jackson, Mrs. Stuart, and the Pope.
This is what passes as "news". Is the media biased toward the left or toward the right? When all they do is talk about the unimportant, the media is not biased at all! They are just silly.
Re:Stack (Score:4, Insightful)
0x00000000 isn't the mathematical number 0, nor is 0xFFFFFFFF, unless you assign that meaning to it. A perfect example is floating-point numbers, which mean something totally different from the same-sized integer, which is totally different from the same-sized memory address.
As others have already said, it's not the direction; it's the ability to do something that you shouldn't be able to do.
Re: Biased (not) (Score:2, Insightful)
Just because he doesn't have armies and navies, or platinum albums, or a line of towels at KMart, doesn't make him unimportant. He may not be important to you, but he's important to half a billion people or more. That's significant.
If he's only a tenth as influential as his predecessor was, his election is more than newsworthy.
I have found the solution. (Score:1, Insightful)
No seriously, flame all you want, but FORTRAN, even FORTRAN 77, is perfectly suited to development in a modern environment. I don't think you can prove otherwise.
Operating Systems and C (Score:2, Insightful)
Re:This guy doesn't know what he's talking about. (Score:5, Insightful)
> predictable as x86, and it's just as easy to
> perform a buffer overflow attack on vulnerable
> code.
No it's not.
For example, here's a function vulnerable to a classic buffer overflow:
void security_hole(char *s) {
    char buff[128], *ptr = buff;
    while ((*ptr++ = *s++))   /* no bounds check: overruns buff past 128 bytes */
        ;
}
It's more difficult to turn this buffer overflow into arbitrary code execution on PowerPC, for several reasons. The link register isn't spilled to the stack, so you have to overwrite some function's return address higher up in the call chain, which takes more work and requires a larger payload. Larger instruction sizes mean you need a still larger payload, and they also make it trickier to build an instruction stream with no zero bytes. And in any case you may have to flush the instruction cache to force it to see your changes -- no easy task.
Leaf functions, functions that take advantage of tail-call optimizations, and functions that move the link register into a GPR rather than the stack don't let you overwrite the return address at all, which is never the case on x86.
I call BS (Score:3, Insightful)
Oh, yeah? Try getting data from an Oracle database in FORTRAN. They used to have something called, IIRC, pro*fortran, but no more. It took me about six months of interaction with people deeper and deeper in the Oracle organization to find out that that product is "deprecated" and no longer supported. Have you ever tried porting a FORTRAN program from VAX/VMS to whatever modern environment you use? Or from a PDP-11? So here is one reason why FORTRAN is dead: important software companies no longer support it.
Another reason: try finding programmers who are experienced in it. Where I work they have a 20-year-old system entirely written in FORTRAN. In the last twelve months, three junior engineers have quit their jobs because an old dinosaur insists that they must keep doing everything in FORTRAN instead of calling the old functions from C programs. What's the point of having "FORTRAN" in your resume, if the job market for that skill is so restricted?
But these are practical reasons, you wanted technical reasons, I guess. So try this: how do you do string manipulations? Functions that are one-liners in C become two pages long in FORTRAN. Or how about dynamic memory allocation?
I have used FORTRAN a lot in the past. I have seen its long and slow agony. I have seen the countless different standards, the many people and organizations who have said, "sure, you can do that in FORTRAN, do it like this" and have come with a solution that's incompatible with everything else.
Maybe FORTRAN could have evolved differently, if it wasn't so much a "commercial" software. All companies did incompatible improvements to FORTRAN so their marketing people could say "ours is the best FORTRAN in the market". Endless forking while C evolved in a standardized way. Today, to link a VAX FORTRAN library with an Oracle-accessing FORTRAN program originally written in AIX, for instance, is so hard that the easiest solution is to rewrite everything in C.
But I know people like you who believe FORTRAN is still the solution. As I mentioned, they are running through junior engineers at a fast rate. Luckily, that's not my department, here we do everything in either C/C++ or PHP.
NX provides little protection (Score:3, Insightful)
Random offsets won't help much -- they'll help some, but what if you can write a LOT of data into that buffer? Give it a LARGE NOP sled.
Detect when a process is doing a lot of NOPs in a row and kill it? Ok. Use "AIAIAIAIAIAIAIAI..." 'A' = 0x41 = inc %ecx, 'I' = 0x49 = dec %ecx. Together, they are an effective NOP. Hell, most of the time, "AAAAAAAAA..." is an effective NOP. Does an attacker really care what's in ECX?
The problem is NOT the architecture, NOT the OS, and NOT the language. It's not a problem with libc, stdio, strcpy, or anything else. If you haven't figured this out by now, you might want to read about computer architecture -- computers do what you tell them to. I can write secure code in which I strcpy() from untrusted data into a static buffer on the stack, on an x86 running Windows with no NX. Hell, I'll even do it in real mode.
I'm not a DJB fanboy, but he does have quite a few good points. Programmers are lazy. Write secure code.
Re:Media Watch (Score:3, Insightful)
What's the point of a balanced press release?
If you're not pumping your "side", you're not doing a good job.