
Ask Slashdot: Do Any Development Shops Build-Test-Deploy On A Cloud Service?

bellwould (11363) writes "Our CTO has asked us to move our entire dev/test platform off of shared, off-site hardware onto Amazon, Savvis or the like. Because we don't know enough about this, we're nervous about the costs. CPU: Jenkins tasks check out 1M lines of source, then build, test and test-deploy 23 product modules 24/7; as well, several Glassfish and Tomcat instances run integration and UI tests 24/7. Disk: large database instances packed with test and simulation data. Of course, it's all backed up too. So before we start an in-depth review of what's available, what experiences are dev shops having doing stuff like this in the cloud?"
  • Bamboo OnDemand (Score:5, Interesting)

    by Anonymous Coward on Wednesday April 02, 2014 @01:23PM (#46640819)

    Atlassian is already trying to push their customers in this direction. Their Bamboo OnDemand offering spins up AWS instances as needed for builds. In this case, you could still host a local Bamboo instance and use elastic remote agents.

    One thing I do like about this sort of setup is that it keeps you honest about deployment. Your build environment stands up a fresh instance every time your remote agent goes stale and gets reclaimed to reduce costs.

    • I think the results of pilot efforts should be looked at before mandating that all development move to this model based on a reference architecture...

      • by Anonymous Coward

        My company has migrated all development to the cloud for development, quality assurance, pre-production (staging), and in some cases production. The convenience of standardised environments, quick build-up and tear-down, as well as access from practically any network-connected device should not be underestimated.

    • We're using OnDemand for CI of everything and CD for some. We use spot instances for the workers because we don't mind waiting a bit for the test to happen. We typically have to wait ~3.5 minutes to get an instance but are only paying $0.07/hr for that instance. It's ridiculously cheap for us to do it this way.
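      A minimal sketch of what requesting a disposable spot worker like this could look like with boto3; the AMI, key pair, security group and instance type are hypothetical placeholders rather than this poster's actual setup, and the bid simply mirrors the $0.07/hr figure above:

      ```python
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Ask for one cheap, one-time spot instance to act as a throwaway CI worker.
      response = ec2.request_spot_instances(
          SpotPrice="0.07",              # maximum bid per hour; the actual rate is usually lower
          InstanceCount=1,
          Type="one-time",               # no need to keep the request alive after the build
          LaunchSpecification={
              "ImageId": "ami-12345678",             # hypothetical pre-baked worker AMI
              "InstanceType": "c3.large",
              "KeyName": "ci-worker-key",            # hypothetical key pair
              "SecurityGroupIds": ["sg-12345678"],   # hypothetical security group
          },
      )

      request_id = response["SpotInstanceRequests"][0]["SpotInstanceRequestId"]

      # Block until the request is fulfilled -- roughly the "wait a bit" described above.
      ec2.get_waiter("spot_instance_request_fulfilled").wait(SpotInstanceRequestIds=[request_id])
      print("Spot worker request", request_id, "fulfilled")
      ```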

    • by zuzulo ( 136299 )

      One of my good buddies has been working on this for a while. Continuous build, test, deploy infrastructure in the cloud - pretty cool stuff, and it's a pain to do it right in house, so a good candidate for outsourcing. Disclaimer - I don't work for these guys, nor have I actually used their services. Yet. ;-)

      https://circleci.com/ [circleci.com]

  • by Anonymous Coward

    It is 50%+ cheaper if you use in-house hardware. This assumes that you are a trained system administrator and you purchase energy- and cost-efficient hardware. Also, your data will be yours and not Amazon's.

    • by ackthpt ( 218170 )

      It is 50%+ cheaper if you use in-house hardware. This assumes that you are a trained system administrator and you purchase energy- and cost-efficient hardware. Also, your data will be yours and not Amazon's.

      With reliability of the Cloud we're not considering it ... yet.

      Once Cloud is considered reliable and secure, we'll look at it.

      Until then .. spinning drives are way cheap.

      • Better off renting/leasing your own dedicated servers in a colo. AWS is designed to nickel-and-dime you unless you really, really know what you're doing and have the right use case.
      • by Cyberax ( 705495 ) on Wednesday April 02, 2014 @01:56PM (#46641191)
        Why would you care about reliability for continuous integration?

        We use Amazon EC2 with spot nodes for our CI. After all, if a node dies - you can just restart the whole process on a new one. Sure, you'll lose some time, but given that a 32-CPU node with 64GB of RAM can be had for $0.30 per hour - we simply don't care.
    • Dedicated hardware also performs better per dollar, especially when it comes to disk I/O. I've experienced minutes of waiting just creating a MySQL database of around 15 tables, none with more than 5,000 rows, whereas it was nearly instantaneous on my local machine. I've also experienced a lack of consistency on EC2. If someone is watching an episode of the smurfs where it is raining, you will notice a difference.
      • ha.. i meant to say on NetFlix
      • by lgw ( 121541 )

        Cloud servers have terrible IOPS. For anything but a DB, it probably doesn't matter, but trying to run a DB on a cloud VM will be painfully slow.

        OTOH, most cloud services actually offer DB-as-a-service, in one form or another, and if you can use that then performance will be much better. Not good, mind you, but no longer painful.

        • Um, LOTS of stuff requires high IO.

          Think of a qa VM. It has to do snapshots, installs, reverts. All of which are high IO. Especially if the build is a large install.

          • Re: high IO... IOPS tend to be bottlenecked by random access (think seek time).

            Snapshots, installs, and so on are not so much random-access limited as sustained sequential throughput; more streaming than random... SSDs tend to saturate SATA ports, so you end up with tricks like RAID to get more speed.

            *shrug* So... there are different kinds of "high IO". Depends on what your app needs from a storage point of view.
            The take home message is that if storage performance matters to your app(s) be sure you
            • Depends on the type of install - decompress & scatter lots of small files or just copy one large app.

              But yes, I'm very familiar with IOPS - I have several 24xRAID of STEC s840 drives ~= 1M IOPS. Large dbs love them.

              • by lgw ( 121541 )

                Of course, the best way to get a fast DB is 10,000 low-end servers. But if you're stuck in the legacy world of trying to scale up, you can't really use a cloud server for a DB, except for testing with token load.

                You mentioned snapshotting et al, but you don't do that stuff explicitly with cloud servers, you just stand up servers with the requested image on demand, and leave the implementation to the provider. If you need to move quickly, you just have a pool waiting, or just go parallel. E.g., if you need

              • by Fubari ( 196373 )
                NatasRevol: All valid points; sounds like we're vigorously agreeing :-)
                r.e. the s840 raids, I feel a bit jealous - that sounds like fun to work with.
      • If someone is watching an episode of the smurfs where it is raining, you will notice a difference.

        Could performance be improved if Amazon were to offload rendering of the raindrops to the client? I'm thinking if the Roku or AppleTV box had to compute the transparency of the raindrop over the skin of the blue smurfs, it might alleviate these performance hits you're seeing on EC2. Either that or replace the raindrops with snowflakes which are more complex due to every single one being unique, but not having

        • You'll notice when you watch cartoons on NetFlix that when there is scene scrolling or lots of changes across frames, the quality goes down, which is probably due to the MPEG compression they use to minimize the amount of data transferred, so it can't be processed on the client side.
    • I would beg to differ on this. For CI, you can easily use spot instances, which are dirt cheap. We pay $0.07/hr for ours. Assuming we had a build running 24x7x365, that's $613.20 per year in costs. You'd be hard-pressed to find a decent box for that price. Additionally, builds are not happening 24x7x365, but rather only when changes are made, so your costs are even better than hardware, which sits idle using power and rack space in the interim.

    • This is why I browse slashdot. "Insightful" posts like this backed up by solid data.
  • If it's all Java / JVM, then look at the Cloudbees offering, or the Waratek JVM (high-density) on something cheaper than EC2. Unless you have a decent grasp of when your environment can be shut down, EC2 is almost certain to be a waste of money, especially for dev / test.
    • by Anonymous Coward

      Amazon just did a huge price drop on all of their AWS services, it might actually have gotten to an affordable level.

      • by ackthpt ( 218170 )

        Amazon just did a huge price drop on all of their AWS services, it might actually have gotten to an affordable level.

        There are two "affordabilities"

        1. Nearly free, as in beer.

        2. It's up when we need it to be. If an outage costs us, that's factored into "affordable" and may be a cost we can't afford.

    • Says who? (Score:4, Interesting)

      by kervin ( 64171 ) on Wednesday April 02, 2014 @02:09PM (#46641303)

      AWS has some of the lowest cloud prices I've found anywhere. You can get AWS instances for under $3/month reserved, depending on what you need. 'Small' Linux instances cost about $15/month reserved last I checked. In fact, they'll even give you a Micro instance free for a year as part of their 'free tier'.

      How did you come to the conclusion AWS was expensive?

    • by Slashdot Parent ( 995749 ) on Wednesday April 02, 2014 @08:48PM (#46645175)

      EC2 likely too expensive.. [...] If it's all Java / JVM, then look at the Cloudbees offering

      You do realize that Cloudbees runs in EC2, right?

  • We do this (Score:5, Informative)

    by CimmerianX ( 2478270 ) on Wednesday April 02, 2014 @01:33PM (#46640937)
    I'm IT for a company that does this for 95% of dev/test/qa systems. It's worked out pretty well. Most servers are spun up and then chef'ed, used, then deleted after tests/whatever are complete. We do keep our code in house. SVN/Git and Jenkins along with server build farms are all in house. The cloud services are expensive, but since IT has automated the deployment process for the cloud hosts, it works out better than keeping enough hardware in house to meet all test/qa needs. Plus less hardware in house equals less admin time, which is a plus for us.
    • I'm IT for a company that does this for 95% of dev/test/qa systems. It's worked out pretty well. Most servers are spun up and then chef'ed, used, then deleted after tests/whatever are complete. We do keep our code in house. SVN/Git and Jenkins along with server build farms are all in house. The cloud services are expensive, but since IT has automated the deployment process for the cloud hosts, it works out better than keeping enough hardware in house to meet all test/qa needs. Plus less hardware in house equals less admin time, which is a plus for us.

      We do something similar. We need a machine up 24/7 to do checkins, builds, automated tests. For that use case, it's better to have your own machine. When we need to spin up multiple machines to do integration testing of our networked app, then it makes sense to use EC2, since we get clean machines that can be set up, run, and then torn down again.

  • Security concerns (Score:3, Insightful)

    by Anonymous Coward on Wednesday April 02, 2014 @01:35PM (#46640961)

    If the stuff (data, processes, etc.) you put in the cloud is in any way sensitive, I would be very hesitant to put it in the hands of another company because of privacy and security concerns. Particularly depending on your terms of service agreements with your users. I would avoid putting your source control system in the cloud too, because then it's more accessible to nefarious actors than if it's locked down internally. This is of course assuming you have good security standards and practices in place.

  • by Anonymous Coward
    Forget trying to figure out the cost; it isn't worth your time. Just move it and look at the bill. With Amazon and Google, there is no commitment, so try it. Take the bill to the other guys and see if you can beat it; if so move, if not stay. If everyone is too expensive, switch back to hosting it yourself. If cost is the most important issue, then the cheapest is building your own cloud with commodity hardware and OpenStack. You get the redundancy of the cloud with the cheapness of whitebox hardware. Think
    • LOL.

      Maybe if you don't have a complex, co-dependent environment.

      Otherwise, you're just going to get fired.

  • We do (Score:3, Interesting)

    by Anonymous Coward on Wednesday April 02, 2014 @01:38PM (#46640995)

    We're not at your scale, but we do everything with AWS and have found that it works well.

    One thing you might want to do is reexamine your mentality around 24/7...you need to evaluate what really does need to run 24/7 and what needs to be available 24/7 (i.e. something that can tolerate the time it takes to spin up from an AMI).

    For example, your Jenkins server could be configured with a master/slave arrangement that allows the main Jenkins server to be a small or medium instance that runs 24/7; then, when a build needs to happen, spin up a beefier slave to rip through it as fast as possible and shut it down when done (see the sketch after this comment). Each build then has a fixed cost, regardless of whether it runs serially or in parallel.

    Our main reasons for choosing to use the cloud were:
    - We have remote workers, both permanent remote staff and people on a WFH policy...cloud makes it not matter where you're working from.
    - Less maintenance...stuff mostly just works and most things are scripted rather than configured.
    - We like the mentality of thinking of computing as a resource, not a collection of discrete machines. Running 5 builds in parallel is expensive when you think in machines, but costs the same as 5 serial builds when you follow the spin-up, build, spin-down philosophy.
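
    A rough sketch of the "small always-on master, big on-demand slave" idea above, assuming boto3 and a hypothetical pre-built, EBS-backed slave instance; while the slave is stopped you pay only for its EBS volume, not for instance hours:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    SLAVE_INSTANCE_ID = "i-12345678"   # hypothetical beefy build slave

    def start_build_slave():
        """Called by the master (or a pre-build hook) right before a heavy build."""
        ec2.start_instances(InstanceIds=[SLAVE_INSTANCE_ID])
        ec2.get_waiter("instance_running").wait(InstanceIds=[SLAVE_INSTANCE_ID])

    def stop_build_slave():
        """Called once the build finishes so the expensive instance stops billing."""
        ec2.stop_instances(InstanceIds=[SLAVE_INSTANCE_ID])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[SLAVE_INSTANCE_ID])
    ```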

    • Re:We do (Score:5, Insightful)

      by Lumpy ( 12016 ) on Wednesday April 02, 2014 @01:46PM (#46641103) Homepage

      "cloud makes it not matter where you're working from."

      Competent IT and a VPN do that as well.

    • The 24/7 is a good point - we have East and West coast N.A. teams and South Asia and Middle East teams; so there's dev going on around the clock. But I like the idea of making the slaves work harder than the master (ugh, such terminology). I also like the remote-worker "anywhere" has a consistent, accessible environment.
  • by Anonymous Coward

    It's pretty easy to go with something like Atlassian Cloud to handle all your build stuff. It will fire off the various EC2 instances when you need them. It handles the basic deploys. Fairly reasonably priced.

    What's not easy is setting up a secure EC2 environment that your production code will run in. I'm not saying EC2 isn't secure, I'm just saying you need to wear a lot of hats to really set it up well. You need to know network, firewall, unix, chef (or similar suite), messaging, storage NAS, and ap

  • Your CTO is an idiot (Score:3, Interesting)

    by Gothmolly ( 148874 ) on Wednesday April 02, 2014 @01:44PM (#46641081)

    He doesn't want to manage stuff in house because it's hard. But wait, that's his job, and why he draws C-level pay. If you are not just occasionally using it, the whole advantage of "cloud" goes away, unless you replace it with the concept of "outsource". Which might be his goal all along. Either way, I would look for a new job. Cloud would be great if you needed to load test from 1000 machines or something, but even for that there are simulators.

    • by Anonymous Coward on Wednesday April 02, 2014 @01:57PM (#46641201)

      I think you're just failing to on-board the new cloud paradigm going forward.
      You probably haven't accounted for the synergized trending advantages.

      • Re: (Score:3, Funny)

        by Anonymous Coward

        I feel like I just read a week's worth of posts from LinkedIn connections.

  • Just do it (Score:4, Interesting)

    by hawguy ( 1600213 ) on Wednesday April 02, 2014 @01:47PM (#46641113)

    Amazon has a detailed AWS cost estimator:

    http://calculator.s3.amazonaws... [amazonaws.com]

    When we migrated to the cloud, our actual costs were within 15% of the estimated costs.

    But really, the easiest thing to do is just build a test environment and try it -- you only pay for the time you use.

    When we migrated to AWS we knocked 70% off our colocation bill (we had more space at the coloc than we needed, but it's hard to move production hardware to a smaller space without downtime, plus we had significant savings in equipment leases and maintenance contract costs).

    Our dev/test hardware was aging and becoming unreliable (and no longer matched production since we moved to AWS), so we moved that up to AWS as well, but even after that migration our total AWS bill is less than half what we paid at the colocation center. We only run the dev/test hardware during business hours, or on-demand as needed -- we set up a simple web interface that lets developers spin up test instances as needed. AWS keeps dropping prices, so even as we've grown, our costs have remained relatively constant.

  • I was just explaining this to someone the other day who thought AWS was going to save them money. It's not cheaper than running your own shop. The only advantage I see is that you don't have to house/cool/maintain hardware. You can just move your application to higher-capacity, faster servers. You get additional power and network reliability.

    If your dev/test platform is already off-site and working, then what is the compelling reason to interrupt everything and do the move? Where I am working today, the t

  • by Todd Knarr ( 15451 ) on Wednesday April 02, 2014 @01:51PM (#46641145) Homepage

    Amazon charges for instances by the hours they're running and the type of instance. Think of an instance as a server, because that's what it is: an instance of a VM. You can find the prices for various services at http://aws.amazon.com/pricing/ [amazon.com]. What you want are EC2 pricing (for the VM instances) and EBS pricing (for the block storage for your disk volumes). For EC2 pricing, figure out what size instances you need, then assume they'll be running 720 hours a month (30 days at 24 hours/day) and calculate the monthly cost. For EBS pricing, take the number of gigabytes for each disk volume (each EC2 instance will need at least one volume for its root filesystem) and multiply by the price (in dollars per gigabyte per month) to get your cost. You can manage instances the same way you would any other machine, other than usually needing to use SSH to get access and having to worry about firewalling (these are publicly-accessible machines; you can't shortcut on security by having them accessible only from within your own network).

    The cost isn't actually too bad. For generic Linux, the largest general-purpose instance will, for a reserved instance on a 1-year commitment, cost you $987 up front and $59.04/month for runtime in the US West (Oregon) data center. An 8GB regular EBS volume will cost you $0.40/month for the space and $50/month for 1 billion IO requests. And not all instances need to be running all the time. You can, for instance, use on-demand instances for your testing systems and only start them when you're actually doing release testing; you'll need to pay for the EBS storage for their root volumes, but you won't have any IO operations or run-time charges while the instances are stopped. (A rough estimator for this arithmetic is sketched after this comment.)

    The downside, of course: if Amazon has an outage, you have an outage and you won't be able to do anything about it. This isn't as uncommon an occurrence as the sales guys would like you to believe. Your management has to accept this and agree that you aren't responsible for Amazon's outages, or the first time an outage takes everything down it's going to be a horrible disaster for you. Note that some of the impact can be mitigated by having your servers hosted in different regions, but there's a cost impact from transferring data between regions. Availability zones... theoretically they let you mitigate problems, but it seems every time I hear of an AWS outage it's one where either the failure itself took out all the availability zones in the region or the outage was caused by a failure in the availability-zone failover process. This isn't as major as it sounds; outages and failures happen when you run your own systems too, after all, and you've dealt with that. It's more a matter of keeping your management in touch with the reality that, despite what the salescritters want everyone to believe, there is no magic AWS pixie dust that makes outages and failures just vanish into thin air.
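
    A back-of-the-envelope estimator for the arithmetic in this comment; the rates are illustrative placeholders rather than current AWS prices, so plug in the numbers from the pricing page for your own region and instance mix:

    ```python
    HOURS_PER_MONTH = 720  # 30 days * 24 hours, as above

    def monthly_cost(num_instances, ebs_volume_gb,
                     hourly_rate=0.082,            # assumed $/hour per instance
                     ebs_rate_per_gb=0.05,         # assumed $/GB-month for EBS space
                     io_requests=0,
                     io_rate_per_million=0.05):    # assumed $ per million IO requests
        compute = num_instances * hourly_rate * HOURS_PER_MONTH
        storage = sum(ebs_volume_gb) * ebs_rate_per_gb
        io = (io_requests / 1000000.0) * io_rate_per_million
        return compute + storage + io

    # Example: four always-on build/test instances, each with a 100 GB root volume.
    print("$%.2f/month" % monthly_cost(4, [100, 100, 100, 100]))
    ```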

    • For this particular use case, it would be better to skip the EBS disks and use ephemeral disks with instances that are spawned purely for the build and test, check their results back into the build system, and self-destruct (a rough sketch of this follows below). You could even request spot instances, since the workload isn't particularly time-dependent.

      You're right, if Amazon goes down, you're down without much recourse. But if you've designed your system to use instances that are launched on demand, you just launch them in a differen
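
      A minimal sketch of the "spawn, build, self-destruct" idea above, assuming boto3; the AMI, build script and results bucket are hypothetical placeholders, and a spot request (as elsewhere in this thread) could be used instead of an on-demand instance:

      ```python
      import boto3

      # User data runs at boot; the build script and S3 bucket are assumed to exist in the AMI/account.
      USER_DATA = """#!/bin/bash
      /opt/build/run_build_and_tests.sh
      aws s3 cp /opt/build/results.tar.gz s3://my-ci-results/
      shutdown -h now   # with the shutdown behavior below, this terminates the instance
      """

      ec2 = boto3.client("ec2", region_name="us-east-1")
      ec2.run_instances(
          ImageId="ami-12345678",            # hypothetical build AMI with tools pre-installed
          InstanceType="c3.xlarge",
          MinCount=1, MaxCount=1,
          UserData=USER_DATA,
          InstanceInitiatedShutdownBehavior="terminate",   # self-destruct: terminate, not stop, on shutdown
      )
      ```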

      • Non-EBS-backed instances aren't good for test systems. To run them you need to have an AMI built with everything you need, and you need to keep that AMI updated with current test cases and so on. That's more work than just maintaining an EBS-backed instance would be. Especially considering that you're going to need the test instance to persist for anywhere from several days to several weeks while testing is in progress. We aren't talking unit tests, remember, we're talking about a complete release test of

    • The downside, of course: if Amazon has an outage, you have an outage and you won't be able to do anything about it.

      Not just Amazon - what if your ISP has an outage? Checked your connectivity SLA recently?

      What's your plan if some joker puts a back-hoe through a fibre trunk 10 miles away? Road-trip to Starbucks?

  • by holophrastic ( 221104 ) on Wednesday April 02, 2014 @01:57PM (#46641205)

    I've spent over ten years on dedicated servers, and have been very happy. Over the next year, I'll be moving into a private cloud scenario -- (not amazon or google, yuck. A local datacentre rolling their own.) I'll have some dedicated hardware (physical servers: CPU, RAM), and be sharing the rest of the cloud (storage, power, network, et cetera.).

    It's interesting because there are no actual benefits to me in terms of performance, capacity, stability, or price by moving -- even backups aren't any more fluid. Of course, my platform and business model have been well-tuned over the years, and my sub-industry doesn't have the fluctuations that are typically heralded by cloud services.

    So why am I moving? Abstracted hardware. I've reached that point where migrating from one dedicated server to another is a major undertaking. It's days of work, weeks of testing, and a huge risk to my business if I were to move any significant number of clients at one time; that means spreading it out over a year which means paying for the old and the new at the same time with zero additional revenue.

    I've got no problem with resource management and capacity planning. I just have trouble actually growing through the transition points. Moving to a private cloud is likely to give me the convenience of being able to upgrade physical servers instantly without any worries -- it's the virtualization layers and load balancing mostly.

    Wish me luck.

    • Oh, in case that wasn't clear (it wasn't), my business is a web development business that builds-tests-deploys all live, and hence would be doing so on the forthcoming cloud service.

  • The OpenStack infrastructure team is running the largest cloud-based continuous deployment environment I've ever seen, and they're more than happy to give people introductions to it.

  • We moved our development and production systems to AWS in 2008 and have been quite happy. It has allowed us to grow and scale with load on production and quickly test things in development. There are a few things to keep in mind. First, if you know your usage pattern and can drop some money upfront, then utilize reserved instances to save some money. Second, you will need a script management system of some kind to run on the virtual servers at boot/shutdown; I recommend using something like Rightscale or Sca

  • Here is a recent video from Google on how they are doing it: Google Cloud Platform Live: DevOps at Google Speed & Tools for You [youtube.com]
  • by Anonymous Coward

    Have your CTO talk to Skytap. (I don't work for them, I've proposed this at my company.) This is one company, there must be others, whose main product is virtual dev environments. I presume they run at Amazon. The whole environment can apparently be saved and immediately cloned. So it seems that they take server instances one step further, where there can be several of them in a private network and the whole thing can be versioned and cloned. (Sorry for the anonymous, I'll get my login now.)

  • Cloud is good for reliability, scalability, and, if your particular scenario meets certain criteria, sometimes cost. Overall the cloud would usually be more expensive, but it can be cheaper to use the cloud and only pay for what you need if you have short periods of high load combined with long periods of little load. Thus cloud might be cheaper because, rather than paying for, cooling, powering, and maintaining a lot of high-end servers waiting to handle a large load only occasionally, you pay for what you nee

  • Hi, I work at Netflix, you may have heard of us.

    dev, test, build, and prod run on Amazon (leaving aside the actual streaming, which comes from cache boxes closer to the customer). We've been pretty public about the process, and some of the issues.

  • Yes.

    But as someone else suggested, it sounds like you need a CTO upgrade to go along with your migration to the cloud.

  • We did this at my last job.
    In short, it sucked.

    More descriptive: It really sucked!
    The boss didn't want to manage servers in house, to save costs. So as developers, we had to show up every day, boot our cloud instance up, sync the latest code to that instance, and begin development. Then before going home, you needed to check in your code, shut down the instance, and go home.

    Doesn't sound so bad, except for the time you had to waste EVERY DAY logging into AWS, booting the EC2 instance, restoring the RDS instanc

    • by bmajik ( 96670 )

      Why didn't you script all of the activities you just described?
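
      For what it's worth, a minimal sketch of what "scripting it" could look like with boto3 for the morning/evening routine described above; the instance ID, database identifier and snapshot name are hypothetical placeholders, not the actual environment being described:

      ```python
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")
      rds = boto3.client("rds", region_name="us-east-1")

      DEV_INSTANCE_ID = "i-12345678"              # hypothetical dev EC2 instance
      DB_SNAPSHOT_ID = "dev-db-snapshot-latest"   # hypothetical RDS snapshot to restore each morning

      def morning():
          ec2.start_instances(InstanceIds=[DEV_INSTANCE_ID])
          rds.restore_db_instance_from_db_snapshot(
              DBInstanceIdentifier="dev-db",
              DBSnapshotIdentifier=DB_SNAPSHOT_ID,
          )
          ec2.get_waiter("instance_running").wait(InstanceIds=[DEV_INSTANCE_ID])

      def evening():
          rds.delete_db_instance(DBInstanceIdentifier="dev-db", SkipFinalSnapshot=True)
          ec2.stop_instances(InstanceIds=[DEV_INSTANCE_ID])
      ```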

      • For starters, it was on a winblows environment (at least in the beginning).
        Mostly though, we were not given the time, and we were not allowed to work on un-approved projects.

        Besides, can you really script around the asshat of a lead dev changing RDS images/instances without telling you?
        Scripting is no help for the cloud going down either, not to mention connectivity issues.

        In short, I am happy to be outta there, that could have possibly been the worst communicating "team" I have ever had the displeasure of wo

        • by bmajik ( 96670 )

          Funny you mention that.

          Early in my Microsoft career, I built a system that provisioned thousands of windows machines on an as needed basis, differing by SKU level, language type, windows version, etc.

          I was proficient in scripting the installs of Windows machines -- even back when Windows didn't natively support that sort of thing very well (e.g. NT4)

          To be honest, Windows looks pretty good compared to any Linux distro I've worked with when it comes to automated provisioning and post configuration. That's a

  • If you are nervous about costs and the overhead time required, you can try out some small apps on a PaaS site such as Heroku https://www.heroku.com/ [heroku.com], then graduate to doing it on Amazon or whatever once you figure out what you need or don't need, or don't want to be bothered with.
  • by msobkow ( 48369 ) on Wednesday April 02, 2014 @02:48PM (#46641697) Homepage Journal

    When working for companies, everything was "in the cloud" already: on remote servers. It's not like I was running the stuff on my desktop.

    SSH to Amazon or SSH to a box in the closet. Pretty much no difference to me.

  • So, cloud hosting is expensive versus standard hosting, you pay a premium for scalability. Simple as that.
  • by Kimomaru ( 2579489 ) on Wednesday April 02, 2014 @02:57PM (#46641795)
    Data center operations are expensive when you factor in power, gear and staff. But I don't think cloud solves those problems particularly well, and it actually adds some more. Cloud data sits on someone else's secondary storage, and if you don't understand the implications of this you are not thinking hard enough. I think the decision to use cloud varies on a case-by-case basis and that you just have to measure it for yourself. It might make sense for development, but maybe you don't want your code on someone else's systems, for whatever reason. I like to write code on my Cubieboard, an SoC platform that runs on 5 volts - it runs Debian, can mount a laptop hard drive, and has a dual-core proc on it. Runs great for its purpose. If I try to do the same thing on a cloud system (and I have), the cost rises dramatically. But you can't run a high-traffic web site off of a Cubieboard. There's a line where cloud begins to make more sense. Depends on what you're looking for. But it won't replace the data center.
  • by mbaGeek ( 1219224 ) on Wednesday April 02, 2014 @03:02PM (#46641829) Homepage

    didn't I just read somewhere about Google [slashdot.org] doing something with this enterprise cloud thing?

    the answer to the question is "it depends" - my gut says "no" but as others have pointed out, if you want to know if something will be a cost effective solution, you need to test

    the game changing benefit of the "cloud" is the ability to scale up/down as needed ... SO from a financial viewpoint the question is similar to "Should we buy a building or rent office space?"

    BTW my headline is from a Dilbert gobbledygook generator - which I'm 90% sure that 100% of CTO's use an undefined % of the time

  • I would recommend doing both. There are significant advantages to moving some of the development/test tools out to the cloud. However, it should only go as far as development and perhaps first-stage testing, which is probably what your CTO has in mind.

    There's no reason why each development project has to pay for the physical space taken up by *shared* development tooling such as Jenkins, Common Git Repository, JIRA/Redmine/Trac and some database and application server that is used for functional testing.

    Ho

  • I'm in the process of standing up a new cloudy little provider and we don't count hours or minutes. Is that so wrong?

    The assumption is that the Internet is open 24/7 so why should we be marking time when we know you want it 24/7? We would rather cultivate the developers and geeks as customers. We'll soon have one portal for instant gratification but we're also happy to hand-craft VMs in a private place for you too. And it's built around CloudStack4 so it should feel familiar to many.

    Come talk to us whil

  • Do not do it, period.

    Network connections are not reliable enough for that, cloud services can (and will) go offline, and worst of all, you will be putting your code on an external server over which you have no control. Your secret, revolutionary code, free for NSA industrial espionage or worse. NEVER put important/sensitive data in a cloud.
  • Because the CTO can spin this to the CEO as being technically better somehow while getting kickbacks from the Amazon sales rep ;-)

  • by turgid ( 580780 ) on Wednesday April 02, 2014 @03:53PM (#46642339) Journal

    Yes, we use Visual Studio 365 Azure Edition for our C++ projects. Our compile times are a little longer, but we're riding the latest wave of post-Enterprise active data web cloud assured technology.

    This gives us all the advantages of future web technology developments as they happen with Microsoft's world-leading Software Engineering/Code ARTezan(R)(TM) Cratfperson paradigm.

    As a bonus, all of our best-shored development consultants were able to migrate their legacy Visual Source Safe projects seamlessly using cloud-aware IE plugins.

  • Amazon is not cheap at all for your task and I am not sure you are looking in the right direction. In the last year I was doing feasibility studies for software in the cloud and I have already implemented a system that relies on Amazon 100%. I have evaluated various technical solutions and various providers. My findings are the following: 1. Amazon is second to none when it comes to being elastic. If you want to scale from 10 to 100 (you name what you count: CPU/Memory/Storage/Band/DB...) you will have it in
  • They are very new, but I've worked with the founder, Jay Moorthi, and have had nothing but good experiences with him. Maybe I'll ping him to come and give an overview of how they help. https://www.solanolabs.com/ [solanolabs.com]
  • I've seen advantages and disadvantages in both scenarios. It depends on the application and the profile of your production systems. As a rule of thumb, your test/dev systems should be as close as possible to the production machines; If you're deploying to cloud services, you should have your test and staging system running on the same platform/provider; If you're deploying to bare metal, you should have dedicated servers for testing and staging. The applications don't work by themselves, and eg. controller/
  • We tried Amazon and hated it. It was too expensive and too slow, and individual servers weren't as reliable and predictable as we expected. I think it probably works OK for someone like Netflix, distributing across hundreds of servers, but I wouldn't recommend it for someone with less than a couple dozen servers. We switched to Storm on Demand (aka Liquid Web) and have been much, much happier. They have solid-state drives, which help with I/O, which is one of Amazon's weaknesses. They also have dedicated cloud servers

  • Given the way so much stuff works - including stuff internal to companies I've worked for - no way. Links work... if you're on the internal network, not outside. Software runs SO FAST... until you're not on the intranet, and then it's a dog.

    And developers always develop and test on the hottest machines or servers... never mind that 95% of the folks going to that site, or using that software, are 1-2 generations of hardware back, and again, it runs like a dog, or requires you to buy new hardware.

    So if you did it on superdoope
