Data Storage Hardware

Can SSDs Be Used For Software Development? 480

hackingbear writes "I'm considering buying a current-generation SSD to replace my external hard disk drive for day-to-day software development, especially to boost the IDE's performance. Size is not a great concern: 120GB is enough for me. Price is not much of a concern either, as my boss will pay. I do have concerns about the limitations on write cycles as well as write speeds. As I understand it, current SSDs overcome this by heuristically spreading writes across the drive. That would be good enough for regular users, but in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day. I wonder how SSDs will do under this usage pattern. What's your experience developing on SSDs?"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Umm... (Score:1, Insightful)

    by addaon ( 41825 ) <(addaon+slashdot) (at) (gmail.com)> on Friday March 06, 2009 @04:21PM (#27096409)

    If you're not good enough at arithmetic to understand that this isn't an issue, should you really be developing software?

  • I'm not sweating it (Score:5, Insightful)

    by timeOday ( 582209 ) on Friday March 06, 2009 @04:23PM (#27096465)
    I'm using the Intel SSD and I think it's great - fast and silent. Will it last? I'd argue you never know about any particular model of hard drive or SSD until a few years after it is released. On the other hand, I'd also argue it doesn't matter much. Say one drive has a 3% failure rate in the 3rd year and another has a 6% rate. That's a huge difference percentage-wise (100% increase). And yet it's only a 3% extra risk - and, most importantly, you need a backup either way.
  • by vlad_petric ( 94134 ) on Friday March 06, 2009 @04:26PM (#27096509) Homepage

    If they're good enough for databases (frequent writes), they should be just fine for development.

    OTOH, you should be a lot more concerned about losing data because of a) software bugs or b) mechanical failures in a conventional drive

  • Answer: (Score:2, Insightful)

    by BitZtream ( 692029 ) on Friday March 06, 2009 @04:35PM (#27096653)

    Yes, an SSD can be used for development.

    A better question to ask is whether you should use an SSD for development.

  • Re:Umm... (Score:5, Insightful)

    by Tetsujin ( 103070 ) on Friday March 06, 2009 @04:45PM (#27096881) Homepage Journal

    If you're not good enough at arithmetic to understand that this isn't an issue, should you really be developing software?

    Maybe you can explain why it isn't an issue, then?

    One thing about flash in general is that in order to rewrite a small amount of data, you need to (at the low level) erase and rewrite a relatively large amount of data. So depending on how extensively the filesystem is cached, where the files are located, etc., rebuilding a medium-sized project could wind up re-writing a large portion of the SSD...
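    The erase-block effect described above can be put in rough numbers. The sketch below uses hypothetical figures (a 512 KiB erase block and a 4 KiB filesystem write; real drives and controllers vary) to show the pathological upper bound on write amplification:

    ```python
    # Back-of-envelope write-amplification sketch (illustrative numbers only).
    # NAND flash erases in large blocks even when the host writes a small
    # amount, so each small random write can cost a full block erase.

    ERASE_BLOCK = 512 * 1024   # bytes erased per program/erase cycle (assumed)
    HOST_WRITE = 4 * 1024      # a typical small filesystem write (assumed)

    amplification = ERASE_BLOCK / HOST_WRITE
    print(f"worst-case write amplification: {amplification:.0f}x")
    ```

    A wear-leveling controller batches and remaps writes, so real amplification is far lower than this worst case; the point is only that naive small writes are the expensive pattern.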

  • make backups? (Score:2, Insightful)

    by bzipitidoo ( 647217 ) <bzipitidoo@yahoo.com> on Friday March 06, 2009 @04:47PM (#27096927) Journal

    You do back up your work, don't you? You know, in case it's lost, stolen, destroyed, etc.? An SSD going bad is hardly the only danger. So why not try out an SSD, and if you're especially worried, back up more frequently and keep more backups?

  • by petes_PoV ( 912422 ) on Friday March 06, 2009 @04:51PM (#27097007)
    That way it'll encourage them to write efficient implementations.

    If you give your programmers an 8-way 4GHz m/b with 64GB of memory (if such a thing exists yet), they'll use all the processing power in dumb, inefficient algorithms, just because the development time is reduced. Meanwhile, those of us in the real world have to get by on "normal" machines.

    When we complain about poor performance, they just shrug and say "well, it works fine on my nuclear-powered, warp-10, so-fast-it-can-travel-back-in-time machine"

    However, if they were made to develop the software on boxes that met the minimum recommended spec. for their operating system, they'd have to give some thought to making the code run efficiently. If it extended the development time and reduced the frequency of updates, well that wouldn't be a bad thing either.

  • Re:should be fine (Score:3, Insightful)

    by clone53421 ( 1310749 ) on Friday March 06, 2009 @04:52PM (#27097031) Journal

    how many GB total the drive can write over its lifetime vs how much you produce each day

    It's not as simple as that. Make a small change (insertion or deletion) near the beginning of a large source code file, and the entire file – from the edit onward – must be written over. Then, any source file that has been modified must be read and rebuilt, overwriting the previous binaries for those source files. Finally, all the binary files must be re-linked into the executable.

    So you're not just writing ___ bytes of code. You're writing ___ bytes of code, re-writing ___ bytes of code because it followed code that was added or modified, and overwriting ___ of the object, library, debug, executable, etc. etc. files that are created when the project is built. In a large project that's probably on the order of megabytes. That is what TFS meant by:

    in software development, one may have to update 10-30% of the source files from Subversion and recompile the whole project, several times a day. I wonder how SSDs will do in this usage pattern.

  • by Mysticalfruit ( 533341 ) on Friday March 06, 2009 @04:54PM (#27097073) Homepage Journal
    If price is no object, then he should get himself 4 ANS-9010's and set them up as hardware RAID 0 hanging off the back of a good fast RAID controller.

    If he filled each of them with 4GB DIMMs he'd have 128GB of storage space.

    Volatile? Hell yeah... But also just crazy fast...
  • Re:Umm... (Score:5, Insightful)

    by blueg3 ( 192743 ) on Friday March 06, 2009 @04:55PM (#27097077)

    Neither he nor you have attempted to answer the question quantitatively. Look at how big a block is, a bit about their write-leveling strategy, how large your source files are, the quantity of data you overwrite and how frequently, and what the lifetime of SSD blocks is, and figure out how long the SSD should last. Even an order-of-magnitude calculation would be better than nothing.

    You both are approaching the problem qualitatively: SSDs have limited rewrite lifetimes, and I'm doing a lot of rewriting -- isn't that bad? You don't know! Figure it out!

  • Simple arithmetic (Score:5, Insightful)

    by MathFox ( 686808 ) on Friday March 06, 2009 @04:56PM (#27097093)
    A typical flash cell easily lasts 10,000 writes. Let's assume that every compile (or svn update) touches only 10% of your SSD space; that gives you 100,000 "cou" (compiles or updates). If you do 20 cou per day, the SSD will last 5,000 working days, or 20 years.

    Now find a hard disk that'll last that long.
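    The arithmetic above checks out in a few lines. All the inputs are the parent's assumptions (10,000 write cycles per cell, 10% of the drive rewritten per compile-or-update, 20 of those per day, ~250 working days per year):

    ```python
    # Sketch of the parent's endurance estimate; every input is an assumption.
    CELL_WRITE_CYCLES = 10_000  # erase/program cycles a typical flash cell survives
    FRACTION_PER_COU = 0.10     # share of the SSD rewritten per compile-or-update
    COU_PER_DAY = 20
    WORKING_DAYS_PER_YEAR = 250

    total_cou = CELL_WRITE_CYCLES / FRACTION_PER_COU   # compiles/updates before wear-out
    working_days = total_cou / COU_PER_DAY
    years = working_days / WORKING_DAYS_PER_YEAR
    print(f"{total_cou:.0f} cou -> {working_days:.0f} working days -> {years:.0f} years")
    # -> 100000 cou -> 5000 working days -> 20 years
    ```

    The 10% figure is the load-bearing assumption here: with perfect wear leveling it effectively means the whole drive's cells share the wear of each compile.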

  • Re:Swap? (Score:5, Insightful)

    by afidel ( 530433 ) on Friday March 06, 2009 @05:06PM (#27097287)
    The best bet, if your project is smaller than about 20GB, is to buy a box full of RAM and use a FAT32-formatted ramdrive. Orders of magnitude faster than even an SSD.
  • by Anonymous Coward on Friday March 06, 2009 @05:07PM (#27097309)
    compile time has nothing to do with inefficient algorithms slowing down programs.
  • Re:Umm... (Score:3, Insightful)

    by gandhi_2 ( 1108023 ) on Friday March 06, 2009 @05:14PM (#27097471) Homepage
    You are confusing programming and computer science.
  • Re:Umm... (Score:2, Insightful)

    by thetoadwarrior ( 1268702 ) on Friday March 06, 2009 @05:15PM (#27097491) Homepage
    Java only appears to be easier because its syntax isn't complete shit.

    Considering that a bad thing is like considering using your hands to hold a pencil inferior to using your ass cheeks.
  • by vadim_t ( 324782 ) on Friday March 06, 2009 @05:23PM (#27097607) Homepage

    Disagree. This problem went away for the most part.

    First, performance isn't nearly the problem it used to be. We aren't using the kind of hardware anymore that needs the programmer to squeeze every last drop of performance out of it. In fact, we can afford to be massively wasteful by using languages like Perl and Python and still get things done, because for most things the CPU is more than fast enough.

    Second, we're not coding as much in C anymore. In C I could see this argument: a lazy programmer writing bubble sort or something dumb like that because, for him, waiting half a second on his hardware isn't such a problem. But most of this has been abstracted away these days. Libraries and high-level languages contain highly optimized algorithms for sorting, searching and hashing. It's rare to need to code your own implementation of a basic data structure.

    Third, the CPU is rarely the problem anymore, I/O is. Programs spend most of their time waiting for user input, the database, the network, or in rare cases, the hard disk. A lot of code written today is shinier versions of things written 20 years ago, and which would run perfectly fine on a 486. Also for web software the performance of the client is mostly meaningless, since heavy lifting is server-side.

    Also, programming has a much higher resource requirement than running the result. People code on 8GB boxes because they want to: run the IDE, the application, the build process with make -j4, and multiple VMs for testing. On Windows you're going to want to test your app on XP and Vista, on Linux you may need to try multiple distributions. VMs are also extremely desirable for testing installers, as it's easy to forget to include necessary files.

    I'd say that giving your developers a 32-core box would actually be an extremely good idea: multicore CPUs have massively caught on, but applications capable of taking advantage of them are few. Since writing threaded code is not lazy but actually takes effort, giving programmers reasons to write it sounds like a very good idea to me.

  • Re:Umm... (Score:3, Insightful)

    by dgatwood ( 11270 ) on Friday March 06, 2009 @05:31PM (#27097757) Homepage Journal

    Perl is hard. Let's use brainf*ck.

  • Re:Umm... (Score:5, Insightful)

    by bluesk1d ( 982728 ) on Friday March 06, 2009 @05:36PM (#27097871)
    This is why it's almost pointless to ask a question on Slashdot. You get 100s of replies in a 50/50 distribution of random tech-word ramblings and flat out useless contempt, leaving you feeling stupid and your question unanswered.
  • by glwtta ( 532858 ) on Friday March 06, 2009 @05:37PM (#27097887) Homepage
    That way it'll encourage them to write efficient implementations.

    That's just stupid - I'm going to write better code because my compiles take longer?

    There seem to be a lot of these posts on Slashdot with down-home folk wisdom on how to educate the smug and indifferent programmer, who is so clearly divorced from reality that he doesn't even know what computers his customers use. I get the sneaking suspicion that the authors know very little about actual programming.

    There are two reasons for bad software:

    a) incompetent programmers
    b) bad project management

    The latter includes things like unrealistic timelines and ill-defined scope and requirements. I'm not sure which one is the bigger culprit, but both are pervasive.

    In neither case, though, are you going to fix the problem with gimmicky bullshit like inadequate equipment.
  • Re:Umm... (Score:3, Insightful)

    by berend botje ( 1401731 ) on Friday March 06, 2009 @05:37PM (#27097891)
    Please also factor in the amount of static files on the drive, which has historically been forgotten. You do not have the whole drive to do 'swap-the-crappy-block' on.
  • Re:Swap? (Score:2, Insightful)

    by Dan Ost ( 415913 ) on Friday March 06, 2009 @05:42PM (#27097985)

    Holy crap! If you think a developer needs 16G of RAM, you're NUTS!

    Graphic artists and people editing videos need that kind of RAM, but a developer doesn't. I've got 2G of RAM in my machine and according to top, that's about twice what I use (and most of that is firefox and evolution). Granted, I don't use a heavy-weight IDE, but I hardly think Eclipse would require 14G of RAM to function (please correct me if I'm wrong).

  • by Anonymous Coward on Friday March 06, 2009 @05:47PM (#27098069)

    Developers should use *slow* machines...That way it'll encourage them to write efficient implementations.

    Signed,

    Someone who doesn't do any sort of real software development and who has no insight into the practical performance issues of modern software.

  • by pebs ( 654334 ) on Friday March 06, 2009 @05:47PM (#27098079) Homepage

    That way it'll encourage them to write efficient implementations.

    If you give your programmers an 8-way 4GHz m/b with 64GB of memory (if sucha thing exists yet), they'll use all the processing power in dumb, inefficient algorithms, just because the development time is reduced. While those of us in the real world have to get by on "normal" machines.

    I hear this all the time, and it's completely silly, because it only applies to a subset of software being developed. For example, most of the software I develop at my current job is deployed to hardware that actually has much higher specs than the hardware I am developing on, because I write mostly server-side software that is deployed to servers with generous resources. I work on the client side as well (though a much smaller percentage), but the performance bottlenecks are not there anyway; they show up when hitting the database. Of course, my development environment is unrealistic for a different reason: it doesn't simulate the load that occurs in production. I don't think using a machine with pitiful specs for development is going to accurately simulate what happens when more than one user uses a system.

    In any case, we developers need fast machines because we actually have to build/rebuild the software constantly. And sometimes our tools are resource-hungry, like servers that we have to restart frequently, or heavyweight IDEs that are fucking slow even on the badass hardware we have (yes, there are different choices that can be made here, but not everyone gets to make those choices). The end users don't have this problem; the software is already built for them, the servers started up, and the caches warmed.

  • by psnyder ( 1326089 ) on Friday March 06, 2009 @05:48PM (#27098107)
    A similar argument was used in World War II to keep bolt action sniper rifles in use in some countries instead of 'upgrading' to 'auto-loading' rifles. With bolt action, after shooting, you had to physically lift the bolt, cock it in place, and push it down again before you could fire another shot.

    The argument was, if the snipers knew they couldn't fire again immediately, they would be more careful lining up and aiming that first shot. With an 'auto-loading' rifle, you could keep your eye in the scope and fire off more rounds.

    It seems quite obvious that if you're in the field, the seconds after that first shot are very important. If you need to take your eye away from the scope and spend the time reloading the chamber, the outcome could be completely different than if you were able to fire off a few rounds immediately.

    A good sniper would have aimed that first shot up carefully no matter what rifle they were using, in the same way a good programmer will make efficient, elegant algorithms no matter what machine they're using. You'd only have to 'limit' your programmers if you think they're bad programmers. If a supervisor is thinking along these lines, they've already hired bad programmers and are setting both themselves and their team up for failure. The faster the machines, the less time wasted. You don't need forced limits reminding them about efficiency, because any decent programmer will already be thinking about it.
  • by Zebra_X ( 13249 ) on Friday March 06, 2009 @05:53PM (#27098183)

    "Anyway, the make believe part is your thinking that by failing a write then your data is still readable which in fact majority of cases its dead Jim"

    Are you sure about this - based on your previous flow:
    "4) Chip reports back to controller erase success or fail"
    is when the OS is notified by the drive that the write failed. Presumably, the drive or the OS might try another part of the bank, sector, or what have you. At no point are you erasing non-free sectors.

    It is fundamentally the write operation that causes the bits to fail, not the read. So the rest of the contents of the disk are fine - make an image and transfer to a new drive. Easy.

  • Re:Umm... (Score:3, Insightful)

    by kelnos ( 564113 ) <[bjt23] [at] [cornell.edu]> on Friday March 06, 2009 @06:03PM (#27098369) Homepage
    No, he's confusing software development with basic math skills.
  • by merreborn ( 853723 ) on Friday March 06, 2009 @06:12PM (#27098519) Journal

    Developers should use *slow* machines
    That way it'll encourage them to write efficient implementations.
    If you give your programmers an 8-way 4GHz m/b with 64GB of memory (if sucha thing exists yet), they'll use all the processing power in dumb, inefficient algorithms, just because the development time is reduced. While those of us in the real world have to get by on "normal" machines.

    No, developers should develop on fast machines... and test on slow machines.

    It's a waste of money to pay your programmers $50/hr to sit and wait for compiles to complete, IDEs to load, etc. That hurts the employer, and the additional cost gets passed on to the customer. It's in everyone's best interest that developers are maximally productive.

    Give them fast development environments, and realistic test environments.

  • by Haeleth ( 414428 ) on Friday March 06, 2009 @06:20PM (#27098643) Journal

    "Anecdotal evidence" is an oxymoron.

    Point is, I could just as easily claim that SSDs last ten years, and since neither of us has provided a shred of evidence to support our assertions, neither of us has any credibility whatsoever.

  • Re:Umm... (Score:1, Insightful)

    by Anonymous Coward on Friday March 06, 2009 @06:33PM (#27098863)

    brainf*ck.

    What's with the auto-censorshit? Or are you just a sissy too shy to use expletive? And why the fuck should I care?

  • by billstewart ( 78916 ) on Friday March 06, 2009 @08:56PM (#27100603) Journal

    If your main problem is speeding up your development environment's use of temporary disk storage (Linux is already caching a lot), use tmpfs, which stores files in virtual memory; if the system needs to page them out, it does. It's really useful for files that get created for short periods but don't need to be kept for long.
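    On most Linux distributions, /dev/shm is already a tmpfs mount, so pointing build scratch space at it needs no root access. A minimal sketch (the /dev/shm path and the TMPDIR convention are assumptions about the environment; many, but not all, build tools honor TMPDIR):

    ```python
    # Sketch: direct a build's temporary files at RAM-backed tmpfs when present.
    # /dev/shm is a tmpfs mount on most Linux distributions (assumed here);
    # fall back to the normal temp directory otherwise.
    import os
    import tempfile

    RAM_TMP = "/dev/shm"
    tmpdir = RAM_TMP if os.path.isdir(RAM_TMP) else tempfile.gettempdir()
    os.environ["TMPDIR"] = tmpdir  # many build tools route scratch files here

    # Scratch files created under tmpfs live in RAM and are paged out
    # only under memory pressure.
    with tempfile.NamedTemporaryFile(dir=tmpdir) as f:
        f.write(b"short-lived build scratch data")
        f.flush()
        print("temp files go to:", tmpdir)
    ```

    This keeps the churn of intermediate compiler output off the drive entirely, which sidesteps the wear question for the noisiest class of writes.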

    Windows Vista's ReadyBoost does something fancy and semi-automatic with caching on USB flash drives: get yourself a USB2 memory stick and turn it on. The stuff is so cheap these days that you might as well buy a large fast ReadyBoost stick, but you'll probably get a lot of payoff even from adding small drives: 8GB is now $20-40, and 32GB is ~$60-120, depending on how extreme you want to get.

  • Re:Swap? (Score:2, Insightful)

    by Anonymous Coward on Friday March 06, 2009 @09:22PM (#27100845)

    Where do you work? I had to piss and moan to get 2GB of RAM! I would kill for 16GB.

    I think the better question is where do YOU work that 2GB was such an ordeal. 4GB of desktop RAM is $50 at Newegg. If you work in the US... it's a shitty tech environment. That said, I bought a 24" LCD 3 years ago for myself and brought it into work when I wanted one that matched my home display :)

  • Re:Swap? (Score:3, Insightful)

    by jaavaaguru ( 261551 ) on Friday March 06, 2009 @09:41PM (#27101039) Homepage

    Virtual machines.

    I have 3 running right now because I have two VPN connections to different networks using the Cisco VPN client, and another VM for testing client software on. Even then, I'm using just over half of the 4GB RAM the computer has.

  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Saturday March 07, 2009 @10:45AM (#27104413)
    Comment removed based on user account deletion
  • Re:Umm... (Score:3, Insightful)

    by raynet ( 51803 ) on Saturday March 07, 2009 @01:54PM (#27105725) Homepage

    I did calculate the worst-case scenario once; going to try it again on a 128GB flash drive.

    So, a 128GB SSD has 128GiB of flash, but the user usually gets 128GB or 120GB, so that there are cells that can be used for wear leveling and also for bad blocks, giving better yields (SSDs can ship with several broken cells). Let's assume a 128GB SSD; thus it has 8.79GiB reserved for wear leveling.

    First we need to fill up the drive, otherwise it can use the unused cells for wear leveling. So, first we need to write 119GiB.

    Now we can begin killing the drive: we write 1 byte to random sectors. Assuming an Intel SSD, each 1-byte write requires the SSD to erase a 512KiB block (an erase always wipes multiple pages; on Intel SSDs, it is 512KiB). There are 18+ million blocks to wear-level over.

    MLC can handle 10k writes, SLC 100k writes. Thus the minimum amount of 1-byte random writes needed to kill the flash is:

    MLC: 171GiB + initial 119GiB
    SLC: 1716GiB + initial 119GiB

    For 120GB SSD the write amounts are about twice as much.

    Comparing to eg. 50% full drive:

    MLC: 13TiB + initial 59GiB
    SLC: 130TiB + initial 59GiB

    This of course assumes a brain-dead wear-leveling algorithm, whereas e.g. the Intel SSD will wait until it has 512KiB of pages in the cache before committing them to disk, so the drive will last even longer.

    And of course the OS will cache writes, and when compiling apps you rarely write 1-byte blocks, as files are usually much larger than that. Just assuming couple-KiB files created by the compilation, you would have to write 2671TiB to the drive before it fails, and even at continuous advertised 130MB/s speeds it would take 249 days to kill the drive (at random 2KiB writes).

    Phew, I hope I remembered it all correctly and didn't make any math errors.
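    The parent's full-drive figures reproduce if the "18+ million blocks" are read as the 8.79 GiB spare area divided into 512-byte units, each surviving 10k (MLC) or 100k (SLC) cycles; that reading and those endurance figures are assumptions carried over from the post, not drive specifications:

    ```python
    # Reproduction of the parent's full-drive worst case (1-byte random writes).
    # Assumption (following the parent's arithmetic): the 8.79 GiB spare area
    # is the only room for wear leveling, counted in 512-byte units, and each
    # unit survives 10k (MLC) or 100k (SLC) program/erase cycles.
    GIB = 2**30
    spare_bytes = 8.79 * GIB
    units = spare_bytes / 512  # ~18.4 million, the "18+ million blocks"

    mlc_writes_gib = units * 10_000 / GIB   # 1 byte per write, 10k cycles
    slc_writes_gib = units * 100_000 / GIB  # 1 byte per write, 100k cycles
    print(f"MLC: ~{mlc_writes_gib:.0f} GiB of 1-byte writes before wear-out")
    print(f"SLC: ~{slc_writes_gib:.0f} GiB of 1-byte writes before wear-out")
    ```

    The results land within rounding of the post's 171GiB and 1716GiB figures, which suggests this is the model the parent used; a real controller with 512KiB erase blocks and static wear leveling would behave differently, as the parent notes.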
