Ask Slashdot: Do Any Development Shops Build-Test-Deploy On A Cloud Service?

Posted by Unknown Lamer
from the raining-dev-builds dept.
bellwould (11363) writes "Our CTO has asked us to move our entire dev/test platform off of shared, off-site hardware onto Amazon, Savvis or the like. Because we don't know enough about this, we're nervous about the costs. CPU: Jenkins tasks check out 1M lines of source, then build, test and test-deploy 23 product modules 24/7; as well, several Glassfish and Tomcat instances run integration and UI tests 24/7. Disk: large database instances packed with test and simulation data. Of course, it's all backed up too. So before we start an in-depth review of what's available, what experiences are dev shops having doing stuff like this in the cloud?"
  • by Anonymous Coward on Wednesday April 02, 2014 @12:32PM (#46640931)

    Were you asked to do something or were you asked if doing something is a good idea?

    If you were asked to do something, then fucking do it. Any sticker shock is the CTO's problem to explain.

    If you're asked by your CTO whether moving to AWS is a good idea: for organizations where money is an issue, the answer is typically NO. You can drink all of "the cloud" Kool-Aid you want... you just have to pay for it.

  • We do this (Score:5, Informative)

    by CimmerianX (2478270) on Wednesday April 02, 2014 @12:33PM (#46640937)
    I'm IT for a company that does this for 95% of dev/test/qa systems. It's worked out pretty well. Most servers are spun up, chef'ed, used, then deleted after the tests or whatever are complete. We do keep our code in house: SVN/Git and Jenkins, along with the server build farms, are all in house. The cloud services are expensive, but since IT has automated the deployment process for the cloud hosts, it works out better than keeping enough hardware in house to meet all test/qa needs. Plus, less hardware in house equals less admin time, which is a plus for us.
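    As a rough illustration of that spin-up/chef/tear-down cycle (not their actual tooling; boto3 is assumed, and the AMI, role name and bootstrap script below are hypothetical), the automation can be as small as:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Hypothetical bootstrap: install chef-client via user data and hand the
        # node a QA run list; a real setup would point it at the in-house Chef
        # server with proper validation keys.
        USER_DATA = """#!/bin/bash
        curl -L https://omnitruck.chef.io/install.sh | bash
        chef-client -r 'role[qa-node]'
        """

        def spin_up_qa_node():
            resp = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",   # hypothetical base image
                InstanceType="m3.large",
                MinCount=1, MaxCount=1,
                UserData=USER_DATA,
            )
            return resp["Instances"][0]["InstanceId"]

        def tear_down(instance_id):
            # Delete the node once the tests are done so it stops costing money.
            ec2.terminate_instances(InstanceIds=[instance_id])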
  • by Todd Knarr (15451) on Wednesday April 02, 2014 @12:51PM (#46641145) Homepage

    Amazon charges for instances by the hours they're running and the type of instance. Think of an instance as a server, because that's what it is: an instance of a VM. You can find the prices for the various services at http://aws.amazon.com/pricing/ [amazon.com]. What you want are EC2 pricing (for the VM instances) and EBS pricing (for the block storage for your disk volumes). For EC2 pricing, figure out what size instances you need, then assume they'll be running 720 hours a month (30 days at 24 hours/day) and calculate the monthly cost. For EBS pricing, take the number of gigabytes for each disk volume (each EC2 instance will need at least one volume for its root filesystem) and multiply by the price (in dollars per gigabyte per month) to get your cost. You can manage instances the same way you would any other machine, other than usually needing to use SSH to get access and having to worry about firewalling (these are publicly-accessible machines, so you can't shortcut on security by having them accessible only from within your own network).
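    To make that concrete, the arithmetic is simple enough to script; this is just a sanity-check sketch, and the rates in the example call are placeholders, not quotes from the pricing page:

        HOURS_PER_MONTH = 720  # 30 days x 24 hours, as assumed above

        def monthly_cost(num_instances, hourly_rate, total_ebs_gb, gb_month_rate):
            compute = num_instances * HOURS_PER_MONTH * hourly_rate  # EC2: hours x $/hour
            storage = total_ebs_gb * gb_month_rate                   # EBS: GB x $/GB-month
            return compute + storage

        # e.g. 5 always-on build/test servers at a made-up $0.28/hr plus
        # 500 GB of EBS at a made-up $0.10/GB-month:
        print(monthly_cost(5, 0.28, 500, 0.10))  # ~$1058/month, before IO charges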

    The cost isn't actually too bad. For generic Linux, the largest general-purpose instance will, for a reserved instance on a 1-year commitment, cost you $987 up front and $59.04/month for runtime in the US West (Oregon) data center. An 8GB regular EBS volume will cost you $0.40/month for the space and $50/month for 1 billion IO requests. And not all instances need to be running all the time. You can, for instance, use on-demand instances for your testing systems and only start them when you're actually doing release testing; you'll need to pay for the EBS storage for their root volumes, but you won't have any IO operations or run-time charges while the instance is stopped.
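    That start-only-when-testing pattern is easy to automate rather than click through; a minimal sketch, assuming boto3 and with hypothetical instance IDs:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-west-2")

        TEST_INSTANCE_IDS = ["i-0123456789abcdef0"]  # hypothetical release-test boxes

        def start_test_env():
            ec2.start_instances(InstanceIds=TEST_INSTANCE_IDS)
            ec2.get_waiter("instance_running").wait(InstanceIds=TEST_INSTANCE_IDS)

        def stop_test_env():
            # Stopped instances accrue only EBS storage charges, not run-time.
            ec2.stop_instances(InstanceIds=TEST_INSTANCE_IDS)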

    The downside, of course: if Amazon has an outage, you have an outage, and you won't be able to do anything about it. This isn't as uncommon an occurrence as the sales guys would like you to believe. Your management has to accept this and agree that you guys aren't responsible for Amazon's outages, or the first time an outage takes everything down it's going to be a horrible disaster for you. Note that some of the impact can be mitigated by having your servers hosted in different regions, but there's a cost impact from transferring data between regions. Availability zones... theoretically they let you mitigate problems, but it seems every time I hear of an AWS outage it's one where either the failure itself took out all the availability zones in the region or the outage was caused by a failure in the availability-zone failover process. This all isn't as major as it sounds; outages and failures happen when you run your own systems too, after all, and you've dealt with that. It's more a matter of keeping your management in touch with the reality that, despite what the salescritters want everyone to believe, there is no magic AWS pixie dust that makes outages and failures just vanish into thin air.

  • by Cyberax (705495) on Wednesday April 02, 2014 @12:56PM (#46641191)
    Why would you care about reliability for continuous integration?

    We use Amazon EC2 with spot nodes for our CI. After all, if a node dies, you can just restart the whole process on a new one. Sure, you'll lose some time, but given that a 32-CPU node with 64 GB of RAM can be had for about $0.30 per hour, we simply don't care.
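    For anyone curious what that looks like in practice, here's a bare-bones sketch (boto3 assumed; the AMI, key name, security group and bid price are hypothetical, sized to roughly the 32-CPU class of machine mentioned above):

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.request_spot_instances(
            SpotPrice="0.35",                      # cap the bid a little above the going rate
            InstanceCount=1,
            LaunchSpecification={
                "ImageId": "ami-0123456789abcdef0",
                "InstanceType": "c3.8xlarge",      # 32 vCPUs, ~60 GB RAM
                "KeyName": "ci-builder",
                "SecurityGroupIds": ["sg-0123456789abcdef0"],
            },
        )
        request_id = response["SpotInstanceRequests"][0]["SpotInstanceRequestId"]
        # If the spot node is reclaimed mid-build, the CI job just re-queues and a
        # fresh request like this one brings up a replacement.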
  • Don't do it. period. (Score:2, Informative)

    by The Joe Kewl (532609) on Wednesday April 02, 2014 @01:27PM (#46641495)

    We did this at my last job.
    In short, it sucked.

    More descriptive: It really sucked!
    The boss didn't want to manage servers in house, to save costs. So as developers, we had to show up every day, boot our cloud instance up, sync the latest code to that instance, and begin development. Then before going home, you needed to check in your code and shut down the instance.

    Doesn't sound so bad, except for the time you had to waste EVERY DAY logging into AWS, booting the EC2 instance, restoring the RDS instance, syncing the code, and doing basic readiness tests BEFORE you could even begin working.
    Then there was always the fun part where the dev team leader changed which RDS instance you needed to use and forgot to tell you about it.

    Not to mention the time you (the developer) had to waste every day shutting down (syncing / checking in code, creating snapshots, closing RDS instances down, etc, etc, etc).

    Then there were always the fun times when the cloud was down (yes, it DOES happen, people!) or the internet connection was down (ISP issue, internal LAN issues, etc.)... All of the time wasted managing the cloud instances (starting up, shutting down each day) could have been spent actually fixing things and writing code, but I guess that wasn't cost-effective enough for them.
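    (For what it's worth, most of that morning/evening ritual is scriptable; here's a sketch of the kind of thing that could replace the console clicking, with boto3 assumed and every identifier below made up:)

        import datetime
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")
        rds = boto3.client("rds", region_name="us-east-1")

        DEV_INSTANCE = "i-0123456789abcdef0"   # hypothetical dev box

        def morning(snapshot_id):
            ec2.start_instances(InstanceIds=[DEV_INSTANCE])
            rds.restore_db_instance_from_db_snapshot(
                DBInstanceIdentifier="dev-db",
                DBSnapshotIdentifier=snapshot_id,   # e.g. last night's snapshot
                DBInstanceClass="db.m3.medium",
            )
            ec2.get_waiter("instance_running").wait(InstanceIds=[DEV_INSTANCE])
            rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="dev-db")
            # code sync and smoke tests would follow here

        def evening():
            # Snapshot the database on the way out, then stop paying for run-time.
            final_snap = "dev-db-" + datetime.date.today().isoformat()
            rds.delete_db_instance(
                DBInstanceIdentifier="dev-db",
                FinalDBSnapshotIdentifier=final_snap,
            )
            ec2.stop_instances(InstanceIds=[DEV_INSTANCE])
            return final_snap   # feed this to tomorrow's morning()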
