Why Not Solid State Hard Drives?
waterlogged asks: "I was just wondering if anybody has heard of a cheap RAM-based network drive? It seems strange to me, with RAM prices at about US $12.00 for 128 megs, that someone hasn't developed a battery-backed version of this to plug into a network or even a bus. A gig worth of 8ns seek-time storage for $120, anyone? That would just about eliminate any wait in loading programs."
BigSlowTarget asks: "There are some previous articles on Slashdot about vendors selling solid state drives, but they all seem to be quite expensive - particularly given the slide in the cost of memory. Has anyone hacked together a solid state drive to take advantage of $60/GB memory prices? I'd really like to be able to boot and run at solid state speed without spending thousands."
Jah-Wren Ryel asks: "In case you haven't noticed, RAM is incredibly cheap, you can put a gigabyte of PC133 RAM into your machine for less than $60. A year ago, that would have cost more like $600. So now it is feasible for one to have a 10-15GB RAM disk, except for one thing - most motherboards won't support more than 2GB total (4 dimm slots x 512MB per dimm). It seems like it wouldn't be too hard to design a PCI card to hold 20-30 dimms and make that available through a hardware windowing scheme (like EMS/EMM back in the old 16-bit days). With the right drivers it could be used as a big RAM disk or for buffercache. Is there such a product out there? The closest I have seen are solid-state disks that sit on the other end of a scsi bus, are too expensive, and aren't anywhere near as fast as a PCI implementation could be."
So, beyond the issues of volatile data and price, what technical details may be preventing the construction of RAM-based drives, and is there anything else that may be keeping some entrepreneurial soul from bringing such a thing to market?
Needs constant power (Score:3, Insightful)
Re:Needs constant power (Score:5, Funny)
Cute. (Score:2, Flamebait)
We need a UPS instead. And the "u" part is the tough part.
-Kasreyn
Re:FLASH file system.. (Score:3, Informative)
Ummm CMOS? (Score:2)
Re:Ummm CMOS? (Score:4, Informative)
Re:Ummm CMOS? (Score:3, Informative)
Re:Needs constant power (Score:2, Informative)
Re:Needs constant power (Score:2, Interesting)
Computers could also be designed to bypass the hard drives when the system power is off. I doubt a hard drive would utilize much energy.
Re:Needs constant power (Score:2, Interesting)
I'd suggest getting a smallish (1gb or so) flash drive for your windows/linux/amiga/whatever partition, and use some monstrous drive to store all your media files.
Flash no good. (Score:3, Informative)
Flash is slow to write to, and it is limited in the number of write cycles: flash wears out.
Re:Needs constant power (Score:2, Interesting)
Re:Not a problem (Score:5, Insightful)
Given the plummeting price of high density/small footprint hard drives, you could have both the volatile drive and the nonvolatile drive in a single low-price unit, with backup to/recovery from the nonvolatile drive occurring automatically on startup and shutdown.
It needs to be more often than startup/shutdown! Many of us don't shut down for weeks at a time. You would want it to continually copy things to the disk when there is idle time. But then you're essentially using the RAM as a really big disk cache, which is where we are already today.
As I read the article, the whole point is to shift to RAM and save money at the same time. If you're buying the hard disk anyhow then you're shifting to RAM but not saving any money. And you may not be improving performance much over a massive RAM cache either. So I find it hard to be enthusiastic about this idea of backing up the RAM to hard disk.
Which brings us back to ... (Score:3, Insightful)
Now explain to me how this is different from using main memory as a VM cache in unix?
Re:Needs constant power (Score:2)
A long time coming (Score:3, Funny)
Re:A long time coming (Score:2, Insightful)
Hopefully, this technology will still be made available to those of us who don't need a 100GB hard drive.
Unfortunately, that is very unlikely due to economies of scale, and price compression. Take a look at hard drive prices. Try to find a 10GB hard drive these days, and you are likely going to pay more for it than for a 40GB drive.
Along the same lines, look at the prices for currently manufactured drives and processors. The price for the newest is astronomical. The price for older stuff drops, but not linearly; it slowly hits a plateau (it's nice that the whole curve seems to be lowering) currently near the $30 mark.
There just isn't a market for "slow" (less than 600MHz) processors or "small" (less than 20GB) hard drives. While I really like the idea of a UDMA/100 4GB drive (solid state would be even better), there just isn't a viable market for such devices.
Re:A long time coming (Score:3, Insightful)
That's only considering a total replacement of one technology for another. In the same way that hard drives didn't make tape drives obsolete I doubt that solid state would make something else less desirable. For example, a 4 Gig solid state drive would be plenty for the vast majority of users to load their software onto. Data could then go to the old platter style hard drive. With a combination of the two you would see some truly astounding system performance increases.
The good news is that the Unix directory structure already provides a great deal of separation between user data and the programs that access it. The bad news is that Windows does no such thing across the board. Whether you care about Windows or not, it is the OS that's driving the majority of the hardware market out there.
I'm no fan of Apple, but they may be the only folks out there who might pull something like this off. Assuming OS X utilizes a similar separation between software and data, they have the hardware and software ability to make something like this work.
Re:A long time coming (Score:5, Interesting)
Your point is correct, but the parent's point is correct as well. We may have 40GB drives, but we are only using a small amount at any given time. Using the strengths of RAM together with the strengths of HDs, we could see some really interesting hardware. It seems like the middle road (similar to what another poster mentioned) is to substantially boost the amount of RAM used as a disk cache. Add some pseudo-AI drivers and you end up with a situation like this.
User starts Word. As the application is loading and initializing and as the user is working, the hard drive is automatically loading all dictionaries, the other Office programs, the equation editor, the charting program, the clip art, the help files, all .docs you've ever edited, all .txt files, local .html files into your 2 GB RAM buffer on the hard drive. You may never, ever use Word to edit html files, but since RAM is so cheap it doesn't matter.
A complete directory of all files is also stored in the drive's RAM buffer. Searches become instantaneous.
As you save files, the saved files are mirrored back to the platter to ensure against power failures, but they are also saved in RAM (with a battery backup) to ensure against head crashes.
Now that the hard drive has memory to burn (so to speak), it stops being a mere storage device and becomes an "autonomous storage unit" that has its own CPU to assist the computer in its search for information. Seagate, Maxtor, and all the other drive manufacturers who are about to declare bankruptcy start marketing "ASU: Storage for the 22nd Century" in partnership with the struggling memory companies (who would love to have another market for the slower/cheaper memory technologies).
The technology companies are saved thanks to my idea (until, of course, we find out that Rambus actually owns the patent on ASUs and they start suing everyone ;-)
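To make the prefetching idea above concrete, here is a minimal Python sketch of that kind of associative prefetch. Everything in it is an illustrative assumption: the application-to-extension table, the 2GB buffer budget, and the directory walk are stand-ins, not a real drive feature or firmware API.
import os

PREFETCH_MAP = {  # hypothetical: application -> related file extensions
    "winword.exe": [".doc", ".txt", ".html", ".dic"],
}
RAM_BUDGET = 2 * 1024**3  # pretend 2GB RAM buffer on the drive

def prefetch(app_name, search_root, ram_buffer):
    """Copy files associated with app_name into an in-memory cache."""
    used = sum(len(data) for data in ram_buffer.values())
    for dirpath, _, files in os.walk(search_root):
        for name in files:
            if os.path.splitext(name)[1].lower() in PREFETCH_MAP.get(app_name, []):
                path = os.path.join(dirpath, name)
                size = os.path.getsize(path)
                if used + size > RAM_BUDGET:
                    return
                with open(path, "rb") as f:
                    ram_buffer[path] = f.read()  # later reads hit RAM, not the platter
                used += size
The "pseudo-AI" part is just the association table; a smarter version would learn it from observed access patterns instead of hard-coding it.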
What you are describing.. (Score:3, Insightful)
The only difference is things aren't cached until they are loaded the first time.
If your computer had 80GB of memory, you would invariably end up with most of your HD (at least, what you were using) cached.
Solid state drives. (Score:4, Informative)
Cenatek [cenatek.com] seems to be on a good track with these. They offer a PCI card with a handful of DIMM slots and a slap-on rechargeable battery panel, which holds enough power to run a connected hard drive of appropriate size that dumps out the contents of what is essentially a RAM disk in the event of a shutdown or power loss. A little spendy still for consumer use, but seeing something like this backing busy websites, or storing database file structures, would be pretty slick.
Re:Solid state drives. (Score:4, Interesting)
According to their website, the sustained data transfer rate is 80-100MB/s (umm, WHY would it vary if it's all solid state?). Add to that the fact that the PCI bus is limited to 133MB/s and there's more than just one device using the PCI bus (and a lot of them aren't conservative when it comes to bus usage)...
Or, for 1/4 the price you can pack together 2x75GB drives in a raid 0 array, get 30x as much space AND get the same bandwidth.
No, right now there's not much point to solid state drives. Iff (sorry, math hangover: If and Only If) hard drive prices were to stay the same, and memory prices were to fall by an order of magnitude (let's say 10x), THEN I could see there being a market for this. But you'd also need to use either PCI-64 (533MB/s+) or some other purpose-designed bus to support the much higher throughputs.
But then again this just begs the question, what do you need that much more speed for?
To take advantage of RAMdisks, you pretty much need to have your computer on all the time, or in standby mode when you're not using it. At this point, what do you need much higher disk bandwidth for?
Loading your mp3s or movies?
Loading office in 2s instead of 6s?
running your games (oh wait, that's CPU/GPU intensive not HD).
Quite frankly I don't see the technology or the market right now to create solid state HDs.
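For reference, the 133MB/s ceiling mentioned above falls straight out of the width and clock of classic 32-bit/33MHz PCI. A quick sanity check (the 80-100MB/s card figure is the one quoted above; MB here means 10^6 bytes):
pci_width_bits = 32
pci_clock_hz = 33_330_000                       # classic 33MHz PCI
pci_peak = pci_width_bits / 8 * pci_clock_hz    # bytes/sec
print(f"PCI peak: {pci_peak / 1e6:.0f} MB/s")   # ~133 MB/s, shared by every PCI device

card_rate = 100e6                               # top of the quoted 80-100 MB/s range
print(f"The card alone could eat {card_rate / pci_peak:.0%} of the shared bus")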
Re:Solid state drives. (Score:2, Insightful)
That may be better for some applications. No amount of RAID magic, though, can reduce the latency (seek time). So the RAM disk might be good for some database apps, but a RAID would be better for streaming data. Then again, I can't think of very many apps that require a single 80MB/s stream.
finally -- a use for AGP! (Score:4, Interesting)
This is exactly what AGP was designed for -- high-bandwidth I/O to main memory, without blocking the PCI bus. Plus, the AGP GART can do most of the address translation you would need. All modern PC (and even Apple) chipsets have an AGP interface, which is wasted on a headless server... until now. AGP even provides extra power (even the obscene AGP Pro), so an onboard battery/HDD could be used for backup.
> To take advantage of RAMdisks, you pretty much need to have your computer on all the time, or in standby mode when you're not using it.
This is true. *Or* you could have your computer net-boot from a server with one of these. Even 100-megabit transfers from memory will feel faster than a local hard disk. And gigabit over copper is becoming very affordable these days.
Re:Solid state drives. (Score:5, Interesting)
To take advantage of RAMdisks, you pretty much need to have your computer on all the time, or in standby mode when you're not using it. At this point, what do you need much higher disk bandwidth for?
Loading your mp3s or movies?
Loading office in 2s instead of 6s?
running your games (oh wait, that's CPU/GPU intensive not HD).
--
FORGET ABOUT HOME USE, think a bit.
There are limiting factors with hard drives, mainly LATENCY issues, this might not be a problem for you or any home users, but for some specific scenarios, it is, and a BIG one. I give you a specific case where I could benefit from such a system:
Without going into too much detail: I work with a lot of files. My workstation generates over 200,000+ files for a single simulation. No, it can't be put in a database for now; it has to be accessed from different software with no database support. Every other part of the software is optimised to know exactly which file to open, using the maximum of memory, cropping useless data, etc. -- everything is already tuned to a more than good level. The only bottleneck I have in my system right now is the drive's latency, and believe me, if I could go down from milliseconds to nanoseconds or microseconds, it would be over a tenfold increase in speed and I wouldn't need 10 machines running in parallel to do the job in one day (which unfortunately I don't have).
Most applications are bandwidth hungry, but there is some stuff out there requiring LOW LATENCY, and heck, if there wasn't a need for that, you wouldn't see $60,000 solid state drives out there. There's a need, but sometimes you're limited by your R&D budget, and you'd gladly take an emerging technology or home-made stuff if it means saving 80% of the cost of the equivalent part and increasing your efficiency by a factor of 10.
I can already see the answer: "if you need it, and it slows down your R&D, buy it, for the sake of the company." Sometimes it doesn't work like that, for cashflow reasons, and you have to work with what you can get in your specific budget. The issue here (and the title of this discussion) is about cheap storage offering low latency and high bandwidth (with loads of capacity). I'm sure I am not the only one who would GLADLY grab a 30GB solid state drive for a fraction of what it would cost me with the current systems (which are way overpriced considering the price of RAM right now).
There's a need for solid state. While I understand that the gap between a home user and a workstation/server-class machine is blurring more and more, the fact that a home user wouldn't benefit much from such a device doesn't mean it isn't needed at the corporate or R&D level. Current solutions wouldn't be selling for $50K+ if there wasn't a need for them... heck, they wouldn't exist.
Two words: (Score:3, Insightful)
Re:Solid state drives. (Score:2, Insightful)
80-100MB/s sustained data output.
Which is what 2 or 3 HDs on a software controlled raid can give you for MUCH MUCH cheaper.
Re:Solid state drives. (Score:4, Insightful)
I'm sorry but you really need to go back to drive technology 101
Idiots like you shouldn't talk out their ass so much...
You must be one of those who
I'd call you stupid names back in return, but I don't stoop that low. Anybody who needs to do that (a) needs a lot more fiber in their diet and (b) needs to lighten up.
I HAVE a 4x75GB IDE RAID 0 array, and can get a max of 98MB/sec read off of it, and a good 75MB/sec sustained. Off of a single drive I can get 45MB/sec max, 25MB/sec sustained.
And I was implying that there are very few applications that need the use of that specific RAM disk over a much cheaper IDE raid array. If you had 4GB RAM on the mainboard, or 8GB or 16, then you would see a few more apps that would benefit from that performance. However just about any home user, and the vast majority of corporate users wouldn't benefit one bit from the use of that. There are very few uses that would benefit from a sustained 90MB/sec, however the very low latency is a big help.
So I wasn't "talking out of my ass". Go shove your nasty attitude up someone else's ass. Like we don't have enough problems to stress over as it is. Lighten up.
Re:Solid state drives. (Score:2)
What I was stating is that the Cenetek is pointless. If they offered something that had the speed of RAM, then yes it would be pointful. But at 80-100MB/sec maximum, hard drives are the far cheaper and easier solution.
Also the device said DRAM (and not SDRAM), I dunno if that was an oversight or if they're actually using the older ram model on that board (which might explain the speed problem).
Re:Solid state drives. (Score:3, Informative)
That was uncalled for...
I HAVE a 4 disk IDE raid that gives me 75-90MB/sec sustained performance. At peak it can hit just shy of 100MB/sec. 4x75GB IBM drives on a HPT 380 IDE Raid controller.
So I don't know where you're getting your "stats" from. I can also get a 20MB/sec sustained transfer rate off the 40GB IBM drive that I have right here in my system, single drive; I just did a file transfer yesterday to prove the same point (a copy from one HD to another at 19.8MB/sec for a 450MB file). That wasn't under optimal conditions -- the files and free space on both drives were fragmented. Under "optimal" conditions I can get a 32MB/sec raw read rate off the drive itself. Off each of the 75GB drives I can get a 45MB/sec raw read rate.
And the Cenatek solution that was posted gave 80-100MB/sec and was also extremely expensive. Setting that up for 4GB would be about two-thirds of the cost of setting up my 300GB RAID 0 array. 4x1GB SDRAM modules (if it uses SDRAM; the info only said DRAM) are $500 according to pricewatch [pricewatch.com], and the controller itself is an unknown (I can't find any vendors selling it), but I'd assume somewhere in the $100-$200 range. So say $600 for the 4GB RAM-drive solution, $900 for the 4x75GB RAID solution. That makes it roughly 50x more expensive (per MB), and the only thing it gives me is lower access time.
And the "data sheet" (LOL!) reports that the rates (80-100MB/sec) is "thousands of times faster than standard hard drives" (exact quote)... So apparently they think that 80kb/sec is the usual read rate for a hard drive these days. Even in their actual breakdown they conpare "100,000 sector reads/writes per second compared to 5,000 to 6,000 I/Os per second for a standard disk drive". Oh, they're talking about FLOPPY DRIVES... Well OK then, yeah then it is thousands of times faster...
Re:Solid state drives. (Score:3, Informative)
The issues fixed by solid state disks are rotational latency and seek latency. When faced with a heavy random-seek load, platter-based drives waste immense amounts of time waiting for either the head or the disk to be in the correct position to read data. Combined, this takes about 12 ms on a good IDE drive. By contrast, "finding" the correct spot on a solid state disk takes about 10 ns. Thus a random seek pattern on a solid state drive should run about 1,000 times faster. This is the sort of load placed by heavy use of database servers. Slashdot, for instance, would benefit from this. Your Quake game would not, as most of its reads are sequential, not random.
Check out Storage Review [storagereview.com] to see some i/o performance of platter based storage.
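To put rough numbers on the point above: only the 12 ms and 10 ns figures come from that post; the 8KB request size and the ~30MB/s sequential rate below are illustrative assumptions for an IDE-era drive.
block = 8 * 1024            # one random 8KB read (illustrative size)
transfer_rate = 30e6        # ~30 MB/s sequential transfer, assumed

def random_read_rate(latency_s):
    per_read = latency_s + block / transfer_rate
    return block / per_read  # effective bytes/sec under a purely random load

print(f"platter     : {random_read_rate(12e-3) / 1e3:.0f} KB/s")
print(f"solid state : {random_read_rate(10e-9) / 1e6:.0f} MB/s")
Under a purely random load the platter drive delivers well under 1MB/s, while the solid state device is limited only by its transfer rate; sequential loads show almost no difference, which is why the benefit is workload-dependent.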
Re:Solid state drives. (Score:3, Interesting)
Would it be? How long does a compile take? Do you do anything else during compile time that would take away time from the other parts of your day (like, oh, reading Slashdot)?
My full compiles take about 10 minutes for my module, and I do them maybe 5 times a day, max 10. Saving 2 minutes per compile will save me 10-20 minutes a day, which is nothing. I also do many spot compiles of individual files which take very little time at all.
And during those 10 minutes I read my slashdot, I go to the can, I gab with some coworkers and impact their performance, I surf the web, I gab with my boss, I keep happy. I'm very HAPPY to be given a good excuse for many 10 minute breaks a day, I dunno about you =)
If it's DRAM (Score:2)
Okay, add a UPS and all, but wouldn't this still be much less stable than a HD that you can pull out and ship across the country without it losing data?
Re:If it's DRAM (Score:2)
The biggest use of RAM drives on Macs was for PowerBook users. With a lightweight word processor (Word 5 *cough*) and a lightweight System Folder, you could spin down your hard drive, dim the screen, and get gobs of battery time out of those old machines. And oh, the blissful silence!
You'd just want to save your files to the hard drive every now and then to prevent Murphy from visiting.
Re:If it's DRAM (Score:2)
RAM Drives. (Score:5, Interesting)
Seagate developed a standard years ago called IPI, I think. It was for the 30 and 40 megabyte RAM drives that they had developed. I know it never took off, but it was specifically for static RAM drives.
What would be really cool would be RAM storage with an InfiniBand interface. It's possible to use that for storage or for regular memory.
You already have a RAM disk - file system cache (Score:3, Informative)
Linux, FreeBSD, and MacOSX (I dunno about Windows) all have excellent VM and file system caches (sometimes they're tightly integrated). If you have 4GB of RAM in your system, and your running processes have 64MB resident, then it's like having a 3.94GB RAM disk. That is, of course, unless you routinely access more than 3.94GB of files.
This is why having lots of RAM is good, even if your processes don't use much.
It's not perfect -- I know that on FreeBSD 4, for example, if you have zillions of small frequently used files in the cache and then you do a big tar, all those important little files will get pushed out of the cache in favor of the new file, which might only be accessed once. Also, the kernel will swap processes out to make room for file system cache, and there aren't a lot of knobs for tuning all of this. E.g., I don't think you can tell the kernel "keep *all* my processes resident, even if they're idle... no really, I *do* have enough RAM!"
Anyway I just don't see any use for standalone RAM disks. There are very few real-world applications that need *deterministic* 1ms seek times. If you rely on the OS you will generally get the best performance.
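You can watch the file system cache doing the "RAM disk" job with a trivial timing test: the second read of the same file is typically served from memory. The path below is a placeholder, and the result obviously depends on free RAM and on the OS.
import time

def timed_read(path):
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()
    return len(data), time.perf_counter() - start

size, cold = timed_read("/tmp/bigfile.bin")   # first read: hits the disk
size, warm = timed_read("/tmp/bigfile.bin")   # second read: served from cache
print(f"{size / 1e6:.0f} MB: cold {cold:.2f}s, warm {warm:.2f}s")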
Re:RAM Drives. (Score:5, Interesting)
The main performance benefit of a RAID is in reducing the impact of seek time on overall throughput. You pay a little extra in transaction overhead to send commands to multiple drives (instead of a single drive) to gain the dual benefits of cutting the average cost of a seek down, and increasing your linear access bandwidth. (In other words, you do seeks 1/N as often, and your bandwidth for a linear read within a track is N times what it would be for 1 drive, for a RAID with N drives. At least, this is true for striping.)
With a RAM disk, the cost of seeking is zero. Also, the bandwidth of the RAM already exceeds the available bandwidth of the drive cable. So, if you were to RAID your RAM drives, you'd still have the performance penalty of the additional overhead, but no gain due to hiding seeks or striping your bandwidth. The result would be a net loss in performance.
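A toy model of that trade-off, with illustrative figures only (9 ms seek, 30MB/s per drive, 0.5 ms of command overhead per drive): striping amortizes the seek over N times more data per request, at the cost of a little extra overhead, and with seek time set to zero the overhead is all that remains.
def striped_throughput(n_drives, request_bytes,
                       seek_s=0.009, bandwidth=30e6, overhead_s=0.0005):
    # each drive serves 1/N of the request; seeks overlap, overhead adds up
    per_drive = request_bytes / n_drives
    time_per_request = seek_s + per_drive / bandwidth + n_drives * overhead_s
    return request_bytes / time_per_request

for n in (1, 2, 4):
    print(n, "drives:", f"{striped_throughput(n, 1_000_000) / 1e6:.1f} MB/s")
# With seek_s = 0 (a RAM "drive"), adding stripes only adds overhead --
# the point made above about RAIDing RAM disks.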
Now, what might be interesting is a mirrored RAID, where one side of the mirror was a physical HD, and the other was RAM. Modify the RAID software to send all reads to the RAM drive by default. Ta-da! Instant hardware-backed RAM drive! Performance would be lower than a pure RAM drive, but you wouldn't need to do anything unusual to make the RAM's contents persistent. A power loss looks like a drive failure -- just replicate the other drive back to the RAM.
--Joe
Huh? (Score:4, Insightful)
Huh? Unless I'm completely out to lunch, I don't see this....
Is my math wrong, or is Cliff's?
Re:Huh? (Score:5, Informative)
Given his figure of 128MB for $12, that's 10.66MB per dollar.
From western-digital.com I can get a 40GB 7200RPM UATA/100 caviar harddrive for $117.00. That's 341.88MB per dollar.
This puts harddrives into the lead by a factor of 32. So, until it's at the point where 128MB of RAM costs $0.375, harddrives still have the lead.
Justin Dubs
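Re-running that arithmetic with the prices quoted above (2001-era street prices):
ram_mb_per_dollar = 128 / 12        # 128MB for $12  -> ~10.7 MB per dollar
hd_mb_per_dollar = 40_000 / 117     # 40GB drive for $117 -> ~342 MB per dollar
print(f"RAM : {ram_mb_per_dollar:.2f} MB per dollar")
print(f"Disk: {hd_mb_per_dollar:.2f} MB per dollar")
print(f"Disk leads by {hd_mb_per_dollar / ram_mb_per_dollar:.0f}x")
# Break-even would need 128MB of RAM to cost about 128 / 342 = $0.37.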
Re:Huh? [OT] (Score:4, Insightful)
Example 1:
but RAM is now cheaper when it comes to memory-per-unitofcurrency than hard drives -- cliff
RAM is 30-40x more expensive than HDs, I don't know WHAT he was smoking when he thought that...
Example 2:
I suspect a fair number of people never try Linux or one of the BSDs because they're moderately happy with AOL as an ISP -- timothy
how many people do you know who would be running Linux if it wasn't for the fact that they were using AOL? (Let me rephrase, how many tech savvy people are using AOL (that aren't forced to)?)
And the anti-Microsoft hysteria has been especially harsh over the past few days. That article about File Extensions And Monopolies [slashdot.org] was so pathetic it didn't even qualify as satire. It should never have seen the light of day on either
And
Re:Huh? (Score:3, Funny)
This comment will be ranked +3, Funny.
Cenatek (Score:4, Informative)
From their site:
The Rocket Drive stores data in memory modules (standard dynamic random access memory, or DRAM) rather than on magnetic media.
Re:Cenatek (Score:2)
Maybe they are comparing it to floppy disks?
The illegal use potential (Score:4, Funny)
This would also work for War3z fiends. *again, yanks plug* "What do you mean piracy, I don't even have an OS on there."
Seriously, I think it would only be useful if you could couple it with a RAID-like system (I know it wouldn't be true RAID) so that if the power fails for whatever reason (power outage, UPS goes bad, battery dies) your info would still be there -- maybe a RAM drive that does nightly/hourly backups...
Re:The illegal use potential (Score:2)
k.
OS Stability (Score:2)
of course, a system crash or a reboot would do about the same thing.
This by itself would preclude many script kiddies using notoriously unstable OSen, never mind systems that get infected by trojans etc.
"issue the reboot command now!"
heh
Re:The illegal use potential (Score:2)
Why not just make a 40GB HD with a 40GB cache? When an access is made to data already accessed, it would just be found in the cache on the device, and (depending on your write-through, etc. technique) this should be the same as a platter-based device in "RAID" with a RAM-based device. You would have the same lag at initial load as the platter-based device, but your load times from that point on should only decrease. The data in the HD cache should be able to remain cached through a system soft-reboot, and possibly, with a switch on the side, remain during a hard reboot (useful if you want to change the sound card and don't mind the pennies' worth of electricity used) or be turned OFF for when you go on vacation and there is no need...
Heck, I'm sure you could get a nice cache hit ratio with only 10GB of cache on the 40GB HD. Those of you with 40 gigers, think about how much of that data is just mp3s and iso's and how much is OS, browser, etc...
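The value of a big on-drive cache comes down to the usual hit-rate arithmetic. Latency figures below are illustrative assumptions (roughly 100 ns for RAM, roughly 12 ms average access for a platter drive):
ram_latency = 100e-9
disk_latency = 12e-3

def effective_latency(hit_rate):
    return hit_rate * ram_latency + (1 - hit_rate) * disk_latency

for hr in (0.50, 0.90, 0.99):
    print(f"hit rate {hr:.0%}: average access {effective_latency(hr) * 1e3:.2f} ms")
Even a 90% hit rate still leaves the average access dominated by the misses, which is why the hit ratio on that last 10% matters so much.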
Re:The illegal use potential (Score:3, Interesting)
The other (slightly less secure) way is to use a network filesystem for storage, of encrypted files, and decrypt the files in memory on the diskless desktop computer as you were using them. That way the decrypted files couldn't be written out in swap, or any of the other common problems. Once the power was turned off, it'd all be gone. But unlike most systems, the decryption would all be done locally, preventing clear-text from ever being transmitted.
Ideally your BIOS's POST routines would involve multiple writes to RAM, of patterns and pseudo-random data. So you'd just hit RESET and it'd perform a thorough wipe. (Theoretically, data can be recovered from RAM once the computer is off.)
New Math? (Score:2, Redundant)
$20 gets you about 256 MB of ram. $200 gets you about 75,000 MB of HD space. Ten times the price gets you 300 times the MBs. What are you smoking, and can you give some of it to my credit card companies?
Re: (Score:2)
Re:New Math? (Score:2)
Of course, if you are hitting that dreaded 4 Gbyte limit, you can't do that (how could they have been so stupid to design a chip that could only address 4 Gbytes :-).
Ram drives, nothing new (Score:3, Interesting)
it would have to be SRAM or Flash ROM...... (Score:2)
Re:it would have to be SRAM or Flash ROM...... (Score:3, Insightful)
Huh? (Score:5, Informative)
RAM is now cheaper when it comes to memory-per-unitofcurrency than hard drives.
According to pricewatch [pricewatch.com], a 40 gig hard drive is $78. Let's say $120 for a good one. That makes RAM 20 times more expensive, at $60/gig.
It's still really cheap, but let's not get crazy. :)
Re:perUNITofCurrency is the key word. (Score:2)
If, on the other hand, you need 100 GB of storage, you're not going to stock up on RAM, are you? In that case, both you and Cliff are still wrong: a usable amount of RAM is not cheaper than any new drive you could buy today to do the SAME job.
- A.P.
Linux is good at that... (Score:3, Interesting)
How about integrated buffers? (Score:5, Interesting)
Re:How about integrated buffers? (Score:2)
Re:How about integrated buffers? (Score:2, Funny)
Don't we sync disks in Linux/BSD/Unix before shutting down or unmounting a disk to flush the buffers?
There is even an NT resource kit utility that causes these buffers to be flushed as well.
The AT&T System V manuals describe a table to indicate what was in the buffers to insure files didn't get out of sync.
Welcome to the technology of the late 70's... =)
Size does matter... (Score:3, Insightful)
Ramdrives Cheaper to Make In Software (Score:2)
If there is such a huge speed-up, why not make devices that act like drives but are really memory? Because the software has already been written (ramdrive drivers), and it is faster and cheaper than implementing a completely separate piece of hardware and driver.
Also consider that you would not only have to create hardware to plug into the SCSI/IDE system, but SCSI and IDE channel bandwidths aren't nearly as good as straight memory. Plus, it is nice not to eat up space in sometimes-crowded cases with another piece of hardware.
Recovery (Score:3, Interesting)
flash drives (Score:3, Interesting)
i have been very pleased with my sandisk [sandisk.com] flashdrives. basically they are IDE-interface drives with flash memory instead of spinning platters. 0 ms seek time is nice, so is -silent- and -very very low power- storage. not to mention that you don't have to treat it like an egg.
i've used both the flashdrive [sandisk.com] from sandisk, and the IDE flash drives [simpletech.com] from simpletech [simpletech.com].
the sandisk flashdrives have sizes from very small (4 MB) to big enough for your MP3s (2 GB). of course they get expensive at the high end :) best things about them are (1) can get them semi-cheap from ebay [ebay.com] and (2) standard IDE interface.
-sam
Re:flash drives (Score:3, Informative)
I hope this was helpful.
Re:flash drives (Score:2)
MO? (Score:2)
Who the hell has an MO hard drive? MO WORM drives used to be pretty popular . . .
Hm.
-Peter
ATTO SiliconDisk (Score:3, Informative)
Re:ATTO SiliconDisk (Score:2)
With hard drive speeds where they are nowadays, there's really no point to RAM disks, except in very specialized high-end applications (i.e. databases). Even in those cases, you're probably better off with a machine that can handle huge amounts of RAM (Alpha, SPARC, and Itanium can all handle terabytes of address space, I think) and an OS that can do decent filesystem buffering.
R-A-I-D or R-A-I-D ? (Score:2, Funny)
or
Redundant Array of Inexpensive DIMMs
They are available... (Score:2, Interesting)
Solid state...why? (Score:2)
For solid state hard drives to catch on, they must be more desirable than platter HDs. All that solid state has going for it is speed. It's far more expensive, holds less data, and unless you get the expensive chips, loses all data when the power is turned off.
Current HD tech has HDs maxing out at 400GB. I'd prefer the robustness of solid state, but platter drives are simply better at this time.
Imagine a solid state file server though! Sigh.
You don't necessarily need a RAM disk (Score:3, Insightful)
There are two ways you can do this.
Way 1 -- Use a PCI card with 4GB of RAM on it as primary storage. At the end of the day, or week, or whatever, copy all of the data to more "permanent" storage. Like hard disks. This way a power loss (or battery failure) isn't too much of a nightmare.
The drawbacks are that you need special hardware and you could lose days of work.
Way 2 -- Cram your machine with as much RAM as possible. Which probably means 4GB. Configure your OS so that it uses about 95% of RAM as a buffer-cache.
Data will be loaded from disk initially on demand (which means slow startup) but will almost always stay memory resident thereafter. The OS will also commit dirty pages back to disk from time to time ensuring that you don't lose anything important.
This may be less doable with systems that insist on synchronous writes during file operations, but you can often disable these things if you want to take the risk.
The benefit of this approach is that you don't need special hardware and you're less likely to lose data than with Way 1. Which basically means you can be -- and probably already are -- experiencing this now.
If your system grinds disk consistently after several hours of use, it's a good indication that you should get more RAM considering how cheap it is.
CD-RW Technology (Score:2, Interesting)
Flash Drives (Score:2)
Sure these [yahoo.com] are not cheaper by the MB, but they are incredibly cool!!
Flash RAM is getting there (Score:2)
The last CompactFlash card I bought for my digital camera was well under $1/MB (actually about $0.67/MB).
The first SCSI hard disk I bought for my Mac Plus was over $10/MB, and held less than 1/4 the capacity of that CF card. And it weighed 14 lb.
Flash isn't cheaper than current technology disks, certainly; for the price of a 1/4 GB CF card you can get an 80GB IDE drive. But the growth of the digital camera and PDA markets has driven the cost/MB of flash down, and will continue to do so.
What would be cool is a RAID controller for CompactFlash; plug in 6 CF cards in a space the size of a standard hard drive and have it do RAID-5 in hardware. Slower than stock RAM, but non-volatile. The catch there is the number of read/write cycles...and I'm not sure how much work has been done on improving that side of flashRAM performance.
Fast virtual memory! (Score:2)
Oh, wait.. why am I recalling the joke about a solar powered flashlight?
SSD's aren't new (Score:3, Informative)
We actually got our Alpha vendor to let us try an SSD for 30 days. The drive was fast, but we found that we quickly saturated the controller (something a couple U160 drives can easily do). In that regard, it wasn't that fast at all.
And, as has been said in other posts, it's not really economically feasible. We tested a 3.2GB SSD last Christmas that cost $25,000. For that application, we thought it was a good fit. But if you're concerned about capacity, we just bought some 180GB drives for our SAN for about $5,000.00 each.
While the RAM and disk capacity available now is amazing, I don't think we'll ever see the cost per gigabyte of RAM beat the cost per gigabyte of disks.
In 1994, when I had a 486/DX2 66 (which came with 4MB RAM), I bought 16MB of RAM for $560.00. Quake was 15MB, so I could load it into a RAM drive and play from there. Guess what? It wasn't noticeably faster than my IDE hard drive, but Windows screamed. =)
Polymer memory might drive RAM/HDD's away.. (Score:2, Interesting)
The swedish R&D site:
http://www.thinfilm.se/
The Norwegian parent company:
http://www.opticomasa.com/
Article about it (in Swedish, however):
http://www.nyteknik.se/pub/pub26_3.asp?art_id=1
More material can be found by searching for Opticom, plastic memory, Thinfilm, etc.
Interfaces should not be a big bottleneck, whatever technology is used to create the RAM disk. ATA-100 (100MB/s) and SCSI U160 (160MB/s) should be sufficient, and U320 and U640 will come within years.
If the current number of RAM sockets is a limit, one can always network some motherboards stuffed with RAM.
pbRemove(a)ludd.NospamherEluth.RemovEthisse
Anyone in need of computer consulting with unix or programming btw?
Missing the point of SSDD (Score:3, Interesting)
In order to explain I'll have to do a quick primer on RDBMS' and how they handle memory management.
As you're probably aware, there are a multitude of different operations you can perform on a RDBMS; UPDATE, DELETE, SELECT, etc.
For more efficient queries the RDBMS will cache physical data structures in memory. It may cache parts of the index or recently accessed data. If the cache is full it will kick out the oldest, least used parts to make some room for the new stuff.
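A minimal sketch of that "kick out the oldest, least used" behavior, as a plain LRU policy in Python (real RDBMS buffer caches use fancier replacement schemes, so this is illustrative only):
from collections import OrderedDict

class BufferCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()            # page_id -> data, oldest first

    def get(self, page_id, read_from_disk):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # mark as recently used
            return self.pages[page_id]
        data = read_from_disk(page_id)        # cache miss: go to storage
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)    # evict the least recently used page
        return data

cache = BufferCache(capacity_pages=1000)
page = cache.get(42, read_from_disk=lambda pid: b"page data")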
To make a long story short, most servers have way more disk space than RAM. As such, it will use a designated 'temp' or scratch area for some of those sorts (and temporary tables) if there are more important things in RAM or it cannot all fit. In Sybase / MS SQL you create a special database for this called 'tempDB'. I'm sure DB2 / Oracle have similar data structures.
Here is where solid-state disks enter the picture. You can buy a small solid-state disk (9GB or less) for cheap. You then 'create' tempDB on the solid state device. That way you can completely eliminate the relatively slow disk drive for things like sorting, temp tables, etc. and devote all of your RAM to caching database information.
To me, this seems a lot better than using solid-state devices exclusively as a storage medium. Initially, when you start up your RDBMS, the cache is clean. After people run a couple of queries, the important (and most-hit) indexes and data are cached anyway, so you do not have to touch the disk unless you perform a write. And since most OLTP workloads (online transaction processing, a la web apps) are mostly selects, the solid-state device would only help on the occasional cache miss.
Most SSDD have a battery-backup in them in case of power failure and are generally mated to a corresponding hard drive. When the SSDD is idle it will flush the writes to the HD to keep the HD up-to-date. On a power failure it will immediately dump changed data to the HD (also battery-powered).
For 'home' systems I can't imagine anyone using SSDD as their primary storage. It doesn't make sense - rarely does anyone perform anything that 'demanding' as to require solid-state drives. Plus, if you have a single memory error you would lose the entire thing (break one of your DIMMs and tell me what happens when you try and boot.)
The price/performance sweet spot (Score:2)
A disk array with a big front-end RAM cache effectively gives you RAM-like access speeds for cache hits. You can basically adjust the amount of cache to get as close as you want to RAM speed overall for your workload, while also taking advantage of rotating media's price and durability advantages. Ideally, either the cache is battery-backed or the array has enough of an internal power reserve to dump cache to disk even when external power is lost. This use of a large but safe RAM cache is the main thing that differentiates a Symmetrix or a Shark or a Lightning from some low-end POS that's really no more than a stack of disks with a plain old PC bolted on the front... and don't even get me started on the abomination that is host-based RAID.
Market saturation?? (Score:2)
Personally I'm thinking just packing my system full of memory would be the best solution. As others have mentioned, an OS with good disk caching built in can be as good if not better than a RAM disk. It might be useful to have some way to expand memory through a PCI slot but it seems like, for now, solid state storage just isn't worth it.
its called (Score:2)
Just load your programs into ramdisk.
Have the data that needs saving tossed onto the hard drive periodically by a script that copies it from a ramdisk directory to the HDD.
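A minimal sketch of such a script in Python; the mount point, backup directory, and interval are placeholders you would adjust for your own setup (needs Python 3.8+ for dirs_exist_ok):
import shutil, time

RAMDISK_DIR = "/mnt/ramdisk/data"        # assumed mount point of the RAM disk
BACKUP_DIR = "/home/user/ramdisk-backup" # assumed location on the hard drive
INTERVAL_S = 600                         # copy every 10 minutes

while True:
    shutil.copytree(RAMDISK_DIR, BACKUP_DIR, dirs_exist_ok=True)
    time.sleep(INTERVAL_S)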
Holy fuck memory is cheap (Score:2)
(Oh yeah, RAM disks, cool. etc.)
Volume differences: GB/cc (Score:2)
I just put two 100GB magnetic disk drives into my TiVo.
I think 200 of the 1 GB SDRAMs would take up quite a bit more space, even if the slots were there.
There IS a SSD for PCI bus machines now. (Score:2, Informative)
http://platypustechnology.com
"Platypus Technology has designed a range of storage innovations that free applications from the bottlenecks caused by hard drives.
You can run mission critical files from silicon, rather from rotating platters".
The design appears to be quite nice.
The price appears to be outrageous.
From www.cdw.com
"Platypus QikDRIVE8 1GB
1GB PCI solid state hard drive card for PC and Mac workstations and servers $3229."
OS Redesign (Score:3, Interesting)
We would have to do some serious OS and user interface redesign. If the PC is used for video editing, the samples could be kept in memory; this would speed things up a bit, but you would have to save the data to the HD eventually.
Another great application for this would be cache servers. Imagine an organization that does video editing where all the clients have gigabit ethernet: put servers with 1TB of RAM in front of the data storage server, and at night they could sync the data back.
Seriously, we have to think about this. Our current view of the PC assumes that HD storage is far larger than RAM. Diskless clients could make a comeback...
Solidisks, and other Solid State Technology (Score:5, Interesting)
As such, they are fairly old technology, and most of the problems have been ironed out. The problem with power can be solved in a number of ways, for example. You can have battery-backed RAM, or you can have the "RAM" non-volatile by using a design that does not decay rapidly with time. (Flash RAM works this way.)
Another problem has been the capacity of a solid-state hard drive. This, as has been mentioned, has largely been overcome. I =STILL= believe that wafer-scale chips are the way to go for this, though. You should be able to make wafers that are tens of terabytes in capacity by now.
(The problem with making wafers has always been the purity and the defect levels. Purity just requires you to use something better than skimming. Double distillation, or atomic mass separation, would give you near 100% purity. You then just cool the resultant in a vacuum flask, so that the defect rate is negligible.)
Getting back to the modern day, though - how to turn cheap RAM into quality solidisk. This involves making a card, with a whole load of RAM on it. Since you're using conventional RAM, you can't rely on modern-day core memory. This means the fall-back of using battery-backed RAM.
You want TWO batteries, for this. One will be in discharge/recharge mode, the other will be in operational mode. When the batteries switch over, you want the recharged one to be switched first, so that the batteries are in parallel, BEFORE switching over the other. That way, there's no loss of power.
When switching to discharge/recharge mode, the battery must be fully drained, to prevent "memory", where a rechargable battery fails to recharge correctly from a semi-charged state. Once drained, you recharge it to capacity.
The switch-over should happen on one of two events:
This guarantees that you have 175% - 200% of any one battery's lifetime, which should be ample for most purposes. The recharger should tap off the bus' power supply, with the batteries directly powering the RAM at all times. This avoids any problems of messy spikes somehow getting into the computer.
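A toy model of that make-before-break switchover in Python, purely to illustrate the ordering described above; the Battery and Bus classes are invented stand-ins, not real hardware interfaces.
class Battery:
    def __init__(self, name): self.name, self.charge = name, 1.0
    def drain_fully(self): self.charge = 0.0
    def recharge(self): self.charge = 1.0

class Bus:
    def __init__(self): self.sources = set()
    def connect(self, b): self.sources.add(b)        # parallel sources are allowed
    def disconnect(self, b): self.sources.discard(b)
    def powered(self): return any(b.charge > 0 for b in self.sources)

def switch_over(bus, standby, active):
    bus.connect(standby)       # bring the freshly charged battery online first...
    assert bus.powered()       # ...so there is never a gap in power
    bus.disconnect(active)     # only then take the old one offline
    active.drain_fully()       # full discharge before recharge, to avoid "memory"
    active.recharge()
    return standby, active     # roles swap for the next cycle

bus, a, b = Bus(), Battery("A"), Battery("B")
bus.connect(a)
active, standby = a, b
active, standby = switch_over(bus, standby, active)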
If you want "extra-long-life" SSD technology, you are probably best off using very low-power RAM for the main disk, and using higher-power fast RAM for the cache. The lower the power of the main disk, the better. Static RAM is worth a glance, for this - I think it's usually more efficient than dynamic.
Of course, the =ULTIMATE= solution is to go back to using core memory. (For those who never went to computer science classes, "core memory" is one of the earliest non-volatile digital storage systems. It was a form of magnetic storage and used semi-permanent magnets to retain the data. Data could only be read by destroying the copy in storage, which meant that a read cycle also required a write cycle. It was slow, but when you had RAM that was guaranteed to retain data for over a century, who cared?)
How I would do it (i.e., properly) (Score:2)
2. This bus could be HyperTransport from the NB to a HyperTransport enabled memory controller that can control up to 16GB of memory. This will give you massive bandwidth and low latency - the best of all worlds.
3. 16 DIMM slots in a drive bay somewhere, or whatever, connected to the memory controller. A battery is connected to power the DIMMs in case of power-down. Use DDR DIMMs, as they use less power. A large laptop battery should power 16 DIMMs for well over a day on its own.
Alternatively, just set up a massive RAM drive and cache the HD into it... it rewards uptime, of course!
What's the point? (Score:2, Insightful)
The only reasonable purpose I can think of for a fast ram disk is if you can get some relatively slow ram on that device, which is cheap, but won't fit on your motherboard due to it requiring faster/more expensive ram, such as RDRAM or other exotica like ECC Registered SDRAM. But it's still cheaper to get a few hard drives.
Solid State Hard Drives vs Ram Disks (Score:2, Insightful)
The way I see it, the kernel is smart enough to use ram for buffering when it can - certainly smarter than a user creating a ram disk.
If you need more performance, give your system more ram and let the kernel decide how much of that ram should go to a ram disk.
Try DiskOnChip (Score:2, Informative)
Why not use a software raid-like solution? (Score:2)
Re:Sorry, you must mean (Score:3, Informative)
Re:Reliable? (Score:2, Insightful)
Well, it can't be worse than the MTBF of IBM GXP drives [slashdot.org]..
Plus, it's pretty much a given that MTBF(device_with_moving_parts) is less than MTBF(device_with_no_moving_parts). You probably had more hard drives fail on you than memory chips, right?
So I think the only problem regarding reliability is solving the power issue to the satisfaction of the average induhvidual.
I think 10 more years max, and then it's the way of the dodo for our spinning friends.
Yan
Re:Compact Flash (Score:2)
Compact flash already acts just like a hard drive, it just doesn't use the same connectors. From a logical/electrical standpoint they are identical.
Re:PerUNITofCURRENCY (Score:2)
Re:Me! Me! (Score:3, Insightful)
All you people who keep talking about power loss, think rechargeable battery...
Magnetic storage can sit (unconnected to any power source) for years and years and still maintain data integrity. Keeping several GB of RAM powered reliably and cheaply for that long may not be as practical.
Better yet - put the RAM on the drive (Score:3, Informative)
More info on the Western Digital drive is available at storage review. [storagereview.com]