Data Storage Hardware

Can SSDs Be Used For Software Development? 480

hackingbear writes "I'm considering buying a current-generation SSD to replace my external hard disk drive for use in my day-to-day software development, especially to boost the IDE's performance. Size is not a great concern: 120GB is enough for me. Price is not much of a concern either, as my boss will pay. I do have concerns about the limited write cycles as well as write speeds. As I understand it, current SSDs overcome this with wear leveling, spreading writes across the drive. That would be good enough for regular users, but in software development one may have to update 10-30% of the source files from Subversion and recompile the whole project several times a day. I wonder how SSDs will hold up under this usage pattern. What's your experience developing on SSDs?"
This discussion has been archived. No new comments can be posted.

  • Swap? (Score:4, Interesting)

    by qoncept ( 599709 ) on Friday March 06, 2009 @04:23PM (#27096469) Homepage
    Do you have a swap file/partition? You're talking hundreds of writes a day, tops. That sounds like a big number, but in reality it just ain't. I would question why you feel the need for an SSD, though. I know the difference between $300 and $50 isn't that big in the grand scheme of things, but what benefit are you expecting?
  • by wjh31 ( 1372867 ) on Friday March 06, 2009 @04:31PM (#27096597) Homepage
    Could a RAID array give the performance boost I assume you're after? I've no experience with them, but I gather they can offer higher read/write rates. Can someone with more experience say exactly how much of a performance boost they give? A set of small HDDs could be the same price without the concerns over write-cycle limits.
  • SSDs = productivity (Score:5, Interesting)

    by Civil_Disobedient ( 261825 ) on Friday March 06, 2009 @04:42PM (#27096805)

    I use SSDs in both of my development systems--the first was for the work system, and after seeing the improvements I decided I would never use spinning-platter technology again.

    The biggest performance gains are in my IDE (IntelliJ). My "normal" sized projects tend to link to hundreds of megs of JAR files, and the IDE is constantly performing inspections to validate the code is correct. No matter how fast the processor, you quickly become IO-bound as the computer struggles to parse through tens of thousands of classes. After upgrading to SSD, I no longer find the IDE struggling to keep up.

    I ended up going with SSD after reading this suggestion [jexp.de] for increasing IDE performance. The general gist: the only way to improve the speed of your programming environment is to get rid of your file access latency.

  • by Zebra_X ( 13249 ) on Friday March 06, 2009 @04:46PM (#27096891)

    The real key here is this: when an SSD can no longer execute a write, the disk will let you know. Reads do not cause appreciable wear, so you will end up with a read-only disk when the drive has reached the end of its life. This is vastly superior to the drive just dying because it's had enough of this cruel world.

    I'd be interested to see some statistics on electrical failure of these drives though... but it seems that isn't as much of an issue.

  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday March 06, 2009 @04:53PM (#27097053) Journal

    Just got one in a Dell laptop, came with Ubuntu. A subjective overview:

    I have no idea how well it performs with swap. I'm not even really sure why I have swap -- I don't have quite enough to suspend properly, but I also never seem to run out of my 4 gigs of RAM.

    It's true, the write speed is slower. However, I also frequently transfer files over gigabit, and the bottleneck is not my SSD, it's this cheap Netgear switch, or possibly SSH -- I get about 30 megabytes per second either way.

    So, is there gigabit between you and the SVN server? If so, you might run into speed issues. Maybe. Probably not.

    Also worth mentioning: Pick a good filesystem if a lot of small files equals a lot of writes for you. A good example of this would be ReiserFS' tail packing -- make whatever "killer FS" jokes you like, it really isn't a bad filesystem. But any decent filesystem should at least be trying to pack writes together, and I only expect the situation to improve as filesystems are tuned with SSDs in mind.

    It also boots noticeably faster than my last machine. This one is 2.5 ghz with 4 gigs of RAM; last one was 2.4 ghz with 2 gigs, so not much of a difference there. It becomes more obvious with actual use, like launching Firefox -- it's honestly hard to tell whether or not I've launched it before (and thus, it's already cached in my massive RAM) -- it's just as fast from a cold boot. The same is true of most things -- for another test, I just launched OpenOffice.org for the first time this boot, and it took about three seconds.

    It's possible I've been out of the loop, and OO.o really has improved that much since I last used it, but that does look impressive to me.

    Probably the biggest advantage is durability -- no moving parts to be jostled -- and silence. To see that in action, just pick out a passively-cooled netbook -- the thing makes absolutely no discernible noise once it's on, other than out of the speakers.

    All around, I don't see much of a disadvantage. However, it may not be as much of an advantage as you expect. Quite a lot of things will now be CPU-bound, and there are even the annoying bits which seem to be wallclock-bound.

  • Re:Umm... (Score:1, Interesting)

    by Joce640k ( 829181 ) on Friday March 06, 2009 @04:59PM (#27097165) Homepage

    Show me a manufacturer which makes a drive which simultaneously:

    a) Competes with hard drives for speed
    b) Uses the cheapest possible MLC memory in it

    Grandparent is correct: If you're not clever enough to figure out if this will be a problem, you shouldn't be a programmer.

    Scary thought: Hard drives don't last forever either....

  • RAM disk ? (Score:3, Interesting)

    by smoker2 ( 750216 ) on Friday March 06, 2009 @04:59PM (#27097179) Homepage Journal
    Can't you just load up on RAM and create a RAM drive for working stuff and keep the slow HDD for shutdown time ? Cheaper than SSD and no write cycle issues. You can also get RAM based IDE and SATA drives.
  • How about ramdisks? (Score:3, Interesting)

    by ultrabot ( 200914 ) on Friday March 06, 2009 @05:01PM (#27097217)

    Sometimes I wonder whether it would make sense to optimize the disk usage for flash drives by writing transient files to ramdisk instead of hard disk. E.g. in compilation, intermediate files could well reside on ramdisk. If you rely on "make clean" a lot (e.g. when you are rebuilding "clean" .debs all the time), you won't have that much attachment to your object files.

    Of course this may require more work than what it's really worth, but it's a thought.
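    A minimal sketch of the idea in Python (the obj/ directory name and paths are made up for illustration; /dev/shm is a tmpfs mount on most Linux systems, with a fallback so the sketch runs elsewhere):

```python
import os
import tempfile

# RAM-backed location for build intermediates. /dev/shm is tmpfs on most
# Linux distributions; fall back to the ordinary temp dir otherwise.
RAMDISK = "/dev/shm" if os.path.isdir("/dev/shm") else tempfile.gettempdir()

def use_ramdisk_objdir(project_dir, name="objs"):
    """Replace project_dir/obj with a symlink into the ramdisk, so object
    files from the build never touch the SSD/HDD."""
    ram_objs = os.path.join(RAMDISK, name)
    os.makedirs(ram_objs, exist_ok=True)
    link = os.path.join(project_dir, "obj")
    if os.path.islink(link):
        os.remove(link)
    os.symlink(ram_objs, link)
    return link

proj = tempfile.mkdtemp()          # stand-in for a real source tree
link = use_ramdisk_objdir(proj)
print(os.readlink(link))
```

    Since a "make clean" throws the object files away anyway, losing them on reboot costs nothing but one extra rebuild.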

  • Re:Swap? (Score:2, Interesting)

    by Anonymous Coward on Friday March 06, 2009 @05:06PM (#27097283)

    Would you care to explain your opinion that MLC SSDs are junk? I know some people have gotten a bad impression of MLC SSDs because Windows' default configuration doesn't play nicely with them. However, if you tune Windows, MLCs work great. If you use OS X, just about everything is, by accident, properly tuned and they work great. My guess is that with Linux they will just work great too.

    Three days in with my new SSD and OS X, and I love it. The almost total elimination of disk latency has made it a whole new experience. I can't even measure launch times in icon bounces any more; on average the windows appear before the icon has even finished its first jump off the dock.

  • Re:Umm... (Score:5, Interesting)

    by Joce640k ( 829181 ) on Friday March 06, 2009 @05:18PM (#27097539) Homepage

    Before we start, let me make a prediction: You never asked about the MTBF of your hard disk, right...?

    http://www.intel.com/design/flash/NAND/mainstream/ [intel.com]

    a) When Intel says "new level of ... reliability", maybe it means they thought about this problem when they designed the drive.

    b) When they say "NAND flash", maybe it means they're not using the cheapest MLC memory as mentioned in that scary Wikipedia article.

    c) When their datasheet says "Minimum useful life of five years, assuming 20GB/day of writing", maybe they got those numbers from real engineers, with degrees.

    d) When their datasheet also says, "Should the host system attempt to exceed 20 GB writes per day by a large margin for an extended period, the drive will enable the endurance management feature to adjust write performance, this feature enables the device to have, at a minimum, a five year useful life", maybe they were really really paranoid about saying 'five years' because they know people will start class-action lawsuits if it doesn't work out.

    So, um, how this even got greenlighted in 2009 is beyond me. It's like 1999 called wanting its flash-myths thread back.

  • by Daimanta ( 1140543 ) on Friday March 06, 2009 @05:28PM (#27097693) Journal

    yet, but I am eager to learn. What happens if you exceed the limit of writes? How does usage degrade the disks? Is heat bad? Does using the SSD as virtual memory degrade the disk fast?

    What about bad sectors -- how do they compare with HDDs? Are SSDs generally more sturdy (longer lifespans) than HDDs?

    Inquiring minds want to know.

  • by Stephen Ma ( 163056 ) on Friday March 06, 2009 @05:33PM (#27097821)
    As I understand it, flash drives use wear leveling to spread the writing burden over many sectors of the disk. So each time I overwrite the same sector, say logical sector 100, the data goes to a different spot on the drive. That makes sense.

    However, suppose I fill up the drive with data, then free half of it. My question is: how does the drive know that half its sectors are free again for use in wear leveling? As far as the drive knows, all of its sectors still hold data from when the drive was full, and no sectors are available for leveling purposes.

    Is there some protocol for telling the drive that "sectors x, y, z are now free"? Or does the drive itself understand the disk layout of the zillions of different filesystems out there?
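    For what it's worth, a protocol along exactly those lines -- the ATA TRIM command -- was being standardized around this time: the filesystem tells the drive which logical sectors no longer hold live data. A toy sketch of the problem (class and names made up for illustration):

```python
class ToyFTL:
    """Toy flash translation layer: maps logical sectors to physical blocks."""

    def __init__(self, nblocks):
        self.mapping = {}                  # logical sector -> physical block
        self.free = list(range(nblocks))   # blocks available for new writes

    def write(self, sector):
        if not self.free:
            raise RuntimeError("no free blocks: nothing to wear-level with")
        old = self.mapping.get(sector)
        self.mapping[sector] = self.free.pop()
        if old is not None:
            self.free.append(old)          # old copy can be erased and reused

    def trim(self, sector):
        # The "sectors x, y, z are now free" hint the parent asks about.
        block = self.mapping.pop(sector, None)
        if block is not None:
            self.free.append(block)

ftl = ToyFTL(nblocks=4)
for s in range(4):
    ftl.write(s)       # drive now looks full to the FTL, even if the
                       # filesystem later deletes every file
ftl.trim(0)            # filesystem explicitly releases a sector...
ftl.write(9)           # ...and the reclaimed block absorbs a new write
```

    Without the trim() hint, overwrites of existing sectors still work (the old copy is reclaimed), but the drive can never enlarge its pool of spare blocks -- which is exactly the parent's worry.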

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Friday March 06, 2009 @05:42PM (#27097981) Homepage

    Everyone's going SSD-crazy, but I'm not yet convinced. They're not _that_ much faster than spinning platters of death, at least not yet, and I'd much rather throw a ton of RAM at the disk cache for the same amount of money.

    If you're really worried about performance, invest in a true RAM disk - the kind that has DDR memory slots on one side and a SATA connector on the other. You can write a 2-line script to mount and format it on boot, and even back up its contents upon shutdown (if needed). That's the ultimate /tmp drive, and it will not wear out no matter how hard you pound it.

  • by BobSixtyFour ( 967533 ) on Friday March 06, 2009 @05:48PM (#27098093)

    Serious Long-Term Fragmentation Problems...

    Potential buyers BEWARE, and do some research first. Google the term "intel ssd fragmentation" before purchasing this drive to understand this potential long-term issue. Chances are it won't impact most people, but if you plan on using this drive to house lots of smaller files, think again.

    Also
    Absolutely avoid using defragmentation tools on this drive! They will only decrease the life of the drive.

  • by rossz ( 67331 ) <ogre@@@geekbiker...net> on Friday March 06, 2009 @05:57PM (#27098265) Journal

    I worked in the game industry in the past and I felt this was one of their problems. The developers all had the latest greatest processors and the cutting edge overpriced video cards. The games ran just fine, of course. On a typical system, however, the game performance would suck big time. I refuse to replace my computer every year just to play the latest game.

    You can continue to give the developers cutting edge hardware, but make sure your QA people are running "typical" systems.

    My experience was from years ago when a 386 system was standard. I don't know what it's like today.

  • Re:Umm... (Score:3, Interesting)

    by adisakp ( 705706 ) on Friday March 06, 2009 @07:07PM (#27099357) Journal

    One thing about flash in general is that in order to rewrite a small amount of data, you need to (at the low level) erase and rewrite a relatively large amount of data.

    The technical term for small write requests actually causing large writes is "Write Amplification". This is one reason the Intel SSD drives are so fast. They have a Write Amplification (WA) factor of 1.1 [tomshardware.com] (done by combining small writes) while many other drives have a WA as high as 20. They also use an "intelligent" wear-leveling algorithm that can reduce spurious writing by nearly a factor of 3.
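    The effect of that WA factor on drive life is easy to put numbers on. A back-of-envelope sketch using the figures quoted above (1.1 vs. 20), with a made-up 80GB drive rated for 10,000 erase cycles and 20GB of host writes per day:

```python
# Write amplification (WA): every GB the host writes costs WA GB of actual
# NAND program/erase work, so lifetime shrinks by the same factor.

def days_until_worn(drive_gb, rated_cycles, host_gb_per_day, wa):
    total_nand_budget_gb = drive_gb * rated_cycles   # total NAND write budget
    nand_gb_per_day = host_gb_per_day * wa           # actual daily NAND wear
    return total_nand_budget_gb / nand_gb_per_day

good = days_until_worn(80, 10_000, 20, 1.1)   # well-behaved controller
bad  = days_until_worn(80, 10_000, 20, 20)    # naive controller
print(f"WA 1.1: {good / 365:.0f} years, WA 20: {bad / 365:.1f} years")
```

    With those assumptions the difference is roughly a century of headroom versus about five and a half years -- same NAND, same workload, different controller.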

  • by Anonymous Coward on Friday March 06, 2009 @07:14PM (#27099469)

    Warning: I'm an Intel employee

    But I've been using the 80GB Intel MLC drive since mid-2008 and it's great. Very fast and silent -- I refuse to go back to a mechanical drive again. It's perfect for a client workload (99.9% of users) but not for a transaction-heavy server (use the SLC drive for that).

    My workload is writing code and generating/parsing very large data sets from fab (1 - 4 GB).

    Here is the "insider" information from my drive:

    6.3TB written total (roughly 9 months of usage)
    58 cycles (average) on each block of Nand

    Given that the component Nand is qualified out to 10K, that's clearly long enough for at least 5 years of usage.
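    The arithmetic behind that claim, as a quick sanity check (numbers taken straight from the parent post):

```python
# Extrapolate drive life from observed average erase cycles per month.
qualified_cycles = 10_000   # what the component NAND is qualified to
observed_cycles = 58        # average cycles per block so far
months_elapsed = 9          # time it took to accumulate those cycles

months_to_exhaust = qualified_cycles / observed_cycles * months_elapsed
print(f"~{months_to_exhaust / 12:.0f} years at this rate")
```

    At that rate the qualified cycle count would take on the order of a century to exhaust, so "at least 5 years" is a comfortable understatement -- though the linear extrapolation assumes the workload stays the same.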

  • Re:Umm... (Score:3, Interesting)

    by shutdown -p now ( 807394 ) on Friday March 06, 2009 @09:37PM (#27100999) Journal

    Perl is hard. Let's use brainf*ck.

    That's not Funny, that's Insightful. Brainfuck by itself is indeed very easy - why, just 8 basic operators! The irony is that Java is "simpler" than C++ by the same measure (fewer language features). In practice, this just shows how pointless the measure is in general.

  • by tytso ( 63275 ) * on Friday March 06, 2009 @09:38PM (#27101017) Homepage

    So interested people want to know --- how do you get the "insider" information from an X25-M (i.e., total amount of data written, and number of cycles for each block of NAND)?

    I've added this capability to ext4, and on my brand-spanking-new X25-M (paid for out of my own pocket because Intel was too cheap to give one to the ext4 developer :-), I have:

    <tytso@closure> {/usr/projects/e2fsprogs/e2fsprogs} [maint]
    568% cat /sys/fs/ext4/dm-0/lifetime_write_kbytes
    51960208

    Or just about 50GB written to the disk (I also have a /boot partition which has about half a GB of writes to it).

    But it would be nice to be able to get the real information straight from the horse's mouth.

  • Yes (Score:2, Interesting)

    by lnxpilot ( 453564 ) on Friday March 06, 2009 @10:04PM (#27101235)

    I've been using a Patriot Warp V2 64GB SSD for a relatively large project (~400k lines of C code).
    The "write stutter" is a bit annoying, especially when I do a full "make clean", but it's not too bad.

  • by virtue3 ( 888450 ) on Friday March 06, 2009 @10:15PM (#27101309)
    I read all my slashdot during "build time". The worst part is, my work machine is so crappy, I'm in C# and I still have time to at least read the summary before it finishes JITing.
  • Re:Umm... (Score:2, Interesting)

    by AllynM ( 600515 ) * on Friday March 06, 2009 @11:27PM (#27101775) Journal

    d) When their datasheet also says, "Should the host system attempt to exceed 20 GB writes per day by a large margin for an extended period, the drive will enable the endurance management feature to adjust write performance, this feature enables the device to have, at a minimum, a five year useful life"...

    You make many good points, but I should point out that the quoted feature never made it into the retail product. When conducting the testing for my article, I wrote several TB per day to my X25-M and experienced no drop in write speeds - provided those writes were more sequential than random.

    Constantly hitting an X25-M with small writes will net you at most an average 50% drop in sequential write speeds. The drive will eventually reach an equilibrium based on the mix of write sizes you hit it with. The M has larger flash blocks and has to track a relatively higher level of write combining, and it is possible for it to get 'stuck' at some very low write speeds (see the article for more detail). This is a unique condition that Intel is currently looking into.

    Getting back to the quoted section: The write speed slow downs seen in my testing resulted only from the ratio of small/large files written and had nothing to do with the rate / volume of data written over any particular time period.

    Article in question:
    http://hardware.slashdot.org/article.pl?sid=09/02/13/2337258 [slashdot.org]
    http://www.pcper.com/article.php?aid=669 [pcper.com]

    In response to the post, I would recommend either an MLC SSD with very high IOPS (Intel), one of the newer MLC SSDs with on-board SRAM cache (OCZ Vertex / 3rd gen Samsung), or, for the highest overall read/write throughput, a pair of SLC SSDs in RAID-0. For SLC, the Intel drive is very good, but there are much cheaper alternatives out there (e.g. G.Skill rebranded Samsung SLC). Note that the X25-E uses write combining, and will take that same 50% worst-case sequential write hit. Other SLC units are not as fast at small writes (no combining), but their performance remains rock steady regardless of what you hit them with.

    The G.Skill SLC drive I mentioned:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820231186 [newegg.com]

    Regards,
    Allyn Malventano
    Storage Editor, PCPer.com

  • by wrook ( 134116 ) on Saturday March 07, 2009 @01:49AM (#27102509) Homepage

    A friend of mine was a sniper. He told me that he only ever carried 3 bullets. The first was for the target. The second was in case he missed with the first shot. The third was for himself: if he had to use the second bullet he didn't have enough time to get away.

  • Re:Swap? (Score:3, Interesting)

    by nabsltd ( 1313397 ) on Saturday March 07, 2009 @02:38AM (#27102685)

    I have 3 running right now because I have two VPN connections to different networks using the Cisco VPN client

    The security rules of some VPN connections (that force everything through the VPN and effectively cut you off from the local network) meant that you had to have a lot of boxes just to make up your "workstation".

    Now, with VMs, you can have the VPN connection, get to the local network, and be able to transfer data from the local network to the other end of the VPN. This has basically restored my sanity (and made the flash drives I had purchased for sneakernet much less useful).

  • Re:Swap? (Score:1, Interesting)

    by Anonymous Coward on Saturday March 07, 2009 @02:50PM (#27106125)

    You're quite clueless and/or you only develop small, wee-tiny assembler or microcontroller-related projects.
    Most of the stuff I program can easily eat up more than half of my 8 gigs of RAM.

    Example: I'm writing a Pythonic archive management system. The indexing process alone (I keep the index in memory, because the HDD would be a) in use by the process itself and b) way too slow anyway) eats roughly 2 gigs of RAM for a 2TB dataset.

    Then I have Firefox open with lots of current news, documentation and (I admit) webcomics. It currently eats about 700 megs.

    RAM is like screen real estate - you cannot have enough. And I wasn't even talking about HPC or distributed stuff.

    You clueless twit - please stop claiming that you're a "developer".
