Ask Slashdot: Capacity Planning and Performance Management?

An anonymous reader writes: When shops mostly ran on mainframes, it was relatively easy to do capacity planning because systems and programs were mostly monolithic. Today is very different: we use a plethora of technologies, and systems are more distributed. Many applications are decentralized, running on multiple servers either for redundancy or because of a multi-tier architecture. Some companies run legacy systems alongside bleeding-edge technologies. We're also seeing many innovations in storage, like compression, deduplication, clones, snapshots, etc.

Today, with many projects, the complexity makes it pretty difficult to foresee resource usage. This makes it hard to budget for hardware that can fulfill capacity and performance requirements in the long term. It's even tougher when the project is still in the planning stages. My question: how do you do capacity planning and performance management for such decentralized systems with diverse technologies? Who is responsible for capacity planning in your company? Are you mostly reactive in adding resources (CPU, memory, IO, storage, etc.), or are you able to plan it out well beforehand?

  • by Anonymous Coward

    Pay attention. This is VERY complicated....

    We ask our users what their plans are.

    • by Anonymous Coward

      You don't even have to ask your users. Just use NoSQL, for everything. Use it for storing the data, use it for the back end business logic, use it for the middleware, and use it for the front end. Thanks to the CAP Theorem and JavaScript (all NoSQL uses JavaScript), you don't need to worry about scaling at all. That's the beauty of NoSQL: effortless and infinite scalability.

    • According to ancient lore, a pair of low-level HP engineers put in a request for a Saturn V rocket and a vice president killed their request. HP would have been a different company if it had listened to its users.
  • is this a trick question?
  • It is simple: I use a mainframe and buy capacity when needed.
  • by ErikTheRed ( 162431 ) on Monday August 10, 2015 @01:59PM (#50286281) Homepage

    Speed of implementation in various organizations (or even departments, divisions, etc.) runs a spectrum from "do stuff on more or less a whim" to "go through eight years of planning meetings to discuss the possibility of actually doing something." At the former end of the spectrum, you buy extra capacity. At the latter end of the spectrum it doesn't matter, because you won't get the budget to buy extra capacity.

  • by mlts ( 1038732 ) on Monday August 10, 2015 @02:04PM (#50286335)

    There are a lot of tools you can use to help with capacity, be it VM farms, SANs/NASes, cloud providers, or chassis/blades. Just a few points of advice:

    1: Everyone will sell their product as the silver hammer for which every target is a nail. The VNX guys will sell their SAN as the be-all and end-all, even if you just use CIFS/SMB. The security vendor will be selling you exotic appliances for encrypting your tape silos. The PC guy will be selling you tons of 1U servers; if they don't have a SAN product, they'll try to convince you that the onboard drive array is better than a SAN, and otherwise they'll tout how slick their HBAs work with their SAN.

    2: Don't forget security. It may be cheaper to have one VM cluster for everything, but it may be wiser to keep one client's hyper-sensitive stuff in one VMware datacenter [1], while another client who is running some backend stuff for an app goes in a different container.

    3: Before committing to purchase something, grab the manuals and documentation and read up on the device. You might find it doesn't do what you want. Don't forget to take into account the type of I/O and other factors. I have had to deal with a terabyte per hour of random writes, and the only solutions were either a caching HBA with enough SSD to turn the random writes into an easy-to-digest sequential stream for the SAN, or going pure SSD. Sequential I/O is a lot easier and a lot cheaper to deal with than random I/O. The same goes for I/O that is often cached versus I/O that is never reused.

    [1]: A datacenter is a VMware object type. You can't vMotion across it, and it is intended to provide distinct separation between items.
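
    As a back-of-the-envelope illustration of why that terabyte-per-hour random-write workload is so demanding, the sketch below converts it into sustained throughput and IOPS; the 8 KiB average write size is an assumption for illustration, not something stated in the comment.

    ```python
    # Rough sizing for the "terabyte/hour of random writes" workload above.
    # The 8 KiB average write size is an assumed value for illustration.

    TB_PER_HOUR = 1          # stated workload: 1 TB of random writes per hour
    BLOCK_SIZE_KIB = 8       # assumed average write size

    bytes_per_second = TB_PER_HOUR * 10**12 / 3600
    mb_per_second = bytes_per_second / 10**6
    iops = bytes_per_second / (BLOCK_SIZE_KIB * 1024)

    print(f"Sustained throughput: ~{mb_per_second:.0f} MB/s")
    print(f"Random-write IOPS at {BLOCK_SIZE_KIB} KiB: ~{iops:,.0f}")
    # ~280 MB/s is easy as a sequential stream, but tens of thousands of
    # random-write IOPS is why a caching HBA (to sequentialize the writes)
    # or an all-SSD tier were the only workable options.
    ```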

    • Why not do a calibration between Sales, Inventory, Purchasing, and Logistics as x = a*S + b*I + c*P + d*L, with a + b + c + d = 100% and all of a, b, c, d > 0, against the number of CPUs + network + transactions + support staff + some critical resource at a known level of x? Do not assume linear growth, but growth proportional to x**3 or x**5, where x is the calibrated value above.

      Typically you will have to include some measures such as a maximum response time of 1 second (or 0.1 seconds). Don't try to split hairs four ways endwise.
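
      A minimal sketch of that kind of calibration, in Python; the weights, the baseline numbers, and the choice of exponent are illustrative placeholders, not values from the comment.

      ```python
      # Weighted business-driver calibration: x = a*S + b*I + c*P + d*L,
      # with a + b + c + d = 1 and all weights > 0. Weights and inputs below
      # are placeholders for illustration.

      def business_driver_index(sales, inventory, purchasing, logistics,
                                a=0.4, b=0.2, c=0.2, d=0.2):
          assert abs(a + b + c + d - 1.0) < 1e-9 and min(a, b, c, d) > 0
          return a * sales + b * inventory + c * purchasing + d * logistics

      def projected_resource_need(x, x_baseline, resource_baseline, exponent=3):
          """Scale a resource (CPUs, network, transactions, support staff)
          superlinearly, e.g. proportional to x**3 rather than assuming
          linear growth."""
          return resource_baseline * (x / x_baseline) ** exponent

      x_now = business_driver_index(sales=100, inventory=80, purchasing=60, logistics=40)
      x_next = business_driver_index(sales=130, inventory=90, purchasing=70, logistics=50)
      print(projected_resource_need(x_next, x_now, resource_baseline=16))  # CPUs needed
      ```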

  • by davecb ( 6526 ) <davecb@spamcop.net> on Monday August 10, 2015 @02:05PM (#50286347) Homepage Journal

    I used to work for the (late, lamented) Sun Microsystems, and when we needed to give a credible answer to a price-sensitive customer, we used TeamQuest Model. It pulls time-based info out of production systems' stats, so it doesn't add to the load, and then builds a classic queuing-system model of the system off-line, working entirely in time units. That then allows the customer (really meaning me!) to ask what to expect from some specific configuration, and to compare different systems for their price/performance tradeoffs.

    For common setups, we have spreadsheets based on what Model said, so the salespeople typically don't know there's a cool mathematical model behind the scenes (;-)). That's probably true of other vendors who use TQ models: it runs on anything modern, so lots of vendors use it.

    I have nothing to do with the company: they just allowed me to save $1.2 million once for a new datacenter, so I'm really really impressed by them.

    --dave
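
    For a sense of the kind of queueing math such tools are built on, here is a minimal sketch; it is a textbook M/M/1 approximation, not TeamQuest's actual algorithm, and the 20 ms service time is an assumed example value.

    ```python
    # Textbook single-queue approximation: mean response time R = S / (1 - U),
    # where S is service time and U is utilization. Illustrative only.

    def mm1_response_time(service_time_s, utilization):
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return service_time_s / (1.0 - utilization)

    # Show how response time degrades as a proposed configuration runs hotter.
    for util in (0.5, 0.7, 0.9, 0.95):
        print(f"U={util:.2f}  R={mm1_response_time(0.020, util) * 1000:.0f} ms")
    # The knee of this curve is why "just run everything at 95%" breaks SLAs,
    # and why modeling beats guessing when comparing configurations on price.
    ```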

    • allowed me to save $1.2 million once for a new datacenter

      That's a lot. Or that isn't much. It depends on what the whole build out actually was. In other words, was that 50% cost reduction, or 1%.

      • "allowed me to save $1.2 million once for a new datacenter

        That's a lot. Or that isn't much. It depends on what the whole build out actually was. In other words, was that 50% cost reduction, or 1%."

        That's a lot in any case. Even if it's 1%, it's still $1.2 million against an investment of (whatever that guy's wages were X the time expended).

        You might question whether that kind of savings was the biggest at hand - whether he could have invested his time to save $2.4 million somewhere else - but the way you did, it remai

      • It was a customer's center so I'm being vague, but it was more than 10%
  • by Stone316 ( 629009 ) on Monday August 10, 2015 @02:16PM (#50286475) Journal

    That depends: are you getting the information you need?

    Are your business analysts/architects even able to answer questions such as how many net new users and concurrent users to expect, or to summarize the typical workload? Back in the day (and I'm only in my early 40s), this stuff used to be well defined. We used to have large documents that went down to the level of expected network load. So either, as you said, it's too difficult given the diversity of the systems, or they just don't know how to do their jobs anymore. I honestly think it's about a 20/80 split. Yes, the environments are more difficult to manage, but BAs/architects haven't adapted or frankly just don't care.

    BAs can't give me any information that would help me foresee or estimate how much load a project/change is going to put on the environment. So when I'm asked if we need new hardware, I usually just tell them to make sure they plan a proper load test and be prepared to spend money.

    In my company, it's my job to make sure lights-on operations run well and to highlight any issues related to capacity. For new projects, it falls to the project team, which I may or may not be a part of.

    Storage, for us, seems to be the largest constraint, with memory and CPU coming in behind. Since we can't get much information, we just make sure we have all our servers hooked up to a large SAN so we can quickly provision more space.

  • by Anonymous Coward

    Translation:

    "Dear Product^WReaders, we at Dice want to know if 'Capacity Planning' is the new buzzword. Please provide anecdotal evidence to the approximate value of resume candidates with these keywords."

  • by wezelboy ( 521844 ) on Monday August 10, 2015 @02:39PM (#50286687)
    You should be using your monitoring system to gather performance data, and then analyzing that data.

    I am partial to check_mk right now, but I've done this kind of thing on Nagios with pnp4nagios. When you have your monitoring system gathering network interface data, disk usage, CPU utilization, etc., and storing it in some kind of database like RRD, InfluxDB, or Graphite, it isn't that much of a stretch to examine that data as an aggregate and graph trends. It really is amazing how much you can figure out with this technique.
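
    As a rough sketch of turning that monitoring history into a projection, the snippet below fits a linear trend to disk usage and estimates when the volume fills; the CSV layout, column names, and 2 TB volume size are hypothetical.

    ```python
    # Project when a volume fills, given a time series exported from a
    # monitoring backend (e.g. Graphite, InfluxDB, or RRD). The file name,
    # column names, and capacity figure are hypothetical.

    import csv
    from datetime import datetime, timedelta

    import numpy as np

    timestamps, used_gb = [], []
    with open("disk_used_history.csv") as f:          # hypothetical export
        for row in csv.DictReader(f):
            timestamps.append(datetime.fromisoformat(row["time"]).timestamp())
            used_gb.append(float(row["used_gb"]))

    # Fit a straight line; the slope is GB consumed per second.
    slope, intercept = np.polyfit(timestamps, used_gb, 1)

    capacity_gb = 2000.0                               # assumed volume size
    if slope > 0:
        secs_to_full = (capacity_gb - used_gb[-1]) / slope
        eta = datetime.fromtimestamp(timestamps[-1]) + timedelta(seconds=secs_to_full)
        print(f"At the current trend the volume fills around {eta:%Y-%m-%d}")
    else:
        print("Usage is flat or shrinking; no fill date at the current trend.")
    ```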
  • by swb ( 14022 ) on Monday August 10, 2015 @02:45PM (#50286743)

    Too much network bandwidth? Too much storage capacity? Too much CPU?

    Usually the drill seems to involve a lot of begging and pleading for money from management. The intermediate levels get dinged if they have to go back to the well, but they don't seem to get dinged if budgeted money ends up buying unused capacity.

    I don't doubt there are places that do heavy audits, ask hard questions about why you have a SAN with a bunch of free space or why your 10-gig NICs are running at sub-gigabit utilization, and cause all manner of pain and suffering over excess capacity already budgeted and bought.

    But usually it doesn't seem to happen that way. Management barely supplies enough resources to meet their running demands and line management buys as much excess capacity as they can beg, borrow or steal because they know they will be punished for buying too little.

  • by dave562 ( 969951 ) on Monday August 10, 2015 @02:46PM (#50286749) Journal

    These days capacity planning comes down to having the right tool set for the job. I like VMTurbo. There are a few others out there that will get the job done. VMTurbo is nice because it is platform agnostic and can help you decide where to place workloads based not only on pure performance numbers, but also on resource cost. (For example, Hyper-V is likely less expensive than VMware in most situations.)

    It is also worth considering an Application Performance Monitoring (APM) tool. Being able to identify exactly where the application is slow, and whether it is an issue with the code or with the underlying OS / infrastructure, will save a lot of time during troubleshooting and will also help identify where to proactively allocate resources to head off potential bottlenecks.

    On a similar subject, a tool that provides deep visibility into the database layer helps a lot for the same reasons. A lot of junior admins make the mistake of assuming that high database server utilization is indicative of under-provisioned hardware. In reality, poorly written queries will bring down even the beefiest of database servers. While you can get information with the built-in management tools, a dedicated monitoring platform (like Spotlight from Dell, for example) will help you develop historical trends while at the same time providing real-time troubleshooting capabilities.

    Most of the time the network is the last bottleneck. In Cisco shops you can use NetFlow to figure out where the problems are. Or, if the company you are working for has money to burn, the UCS infrastructure stack is very robust and comes with a whole slew of management and monitoring tools that can be leveraged to discover latencies before they impact production environments too severely.

  • by Cyberax ( 705495 ) on Monday August 10, 2015 @02:47PM (#50286755)
    Just use AWS and scale out as needed. Your capacity planning then becomes more a question of which reserved instances to buy. AWS is not suitable for 100% of applications, but if you are asking how to do capacity planning, your use case is unlikely to be in the 1% that doesn't fit the AWS model.
    • by Jaime2 ( 824950 )

      Not a magic bullet.

      Often what you want is a projection of the next five years' cost. Sure, AWS is good at making sure you only pay for what you need, but it doesn't help you make the go/no-go decision on a project. You can still easily get into a "this thing costs me more in usage than it's saving me" situation.

      • by Cyberax ( 705495 )
        You can buy reserved instances for 3-year periods; this locks in the price and guarantees availability. And 5-year projections actually don't make much sense - hardware is likely to go through a couple of generations in that time. I have worked for many companies, and all the long-range cost projections I've seen were nothing but a pipe dream (or really a checkbox document written by engineers who were eager to do actual work instead of generating tons of useless paperwork).
        • by Jaime2 ( 824950 )

          You can buy reserved instances for 3-year periods; this locks in the price and guarantees availability.

          But that doesn't guarantee that the capacity you reserved will provide the performance you need. See the TeamQuest Model posts above for an example of a tool that can help you predict how much capacity you'll need to scale up from a pilot to a full implementation.
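
          To make the go/no-go cost question above concrete, here is a hedged sketch of the reserved-vs-on-demand break-even; all prices are made-up placeholders, not actual AWS rates.

          ```python
          # Break-even utilization for a reserved instance vs. paying on demand.
          # The hourly rate and upfront price are made-up placeholder values.

          HOURS_PER_YEAR = 8766

          on_demand_per_hour = 0.20      # hypothetical on-demand rate
          reserved_upfront = 2500.0      # hypothetical 3-year, all-upfront price
          term_years = 3

          on_demand_cost = on_demand_per_hour * HOURS_PER_YEAR * term_years
          breakeven_utilization = reserved_upfront / on_demand_cost

          print(f"3-year on-demand cost:  ${on_demand_cost:,.0f}")
          print(f"Reservation breaks even at {breakeven_utilization:.0%} utilization")
          # Below that utilization the reservation costs more than it saves,
          # which is exactly the trap described a few posts up.
          ```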

  • Not sure how it is done elsewhere, but I think we tend to balance load tenuously: add more applications and users slowly; when things start slowing down and users start to complain, bring on a few more servers; repeat.

    However, one thing I will say: there is a danger in overcapacity. Managing multiple servers can be difficult (apparently), and it can cause some pretty hard-to-nail-down issues in applications, particularly in legacy applications.

    I've had a couple of instances where a couple (not all) servers were configured differently, and applications would perform differently depending on what server you were currently randomly connected to.

    • by digsbo ( 1292334 )

      I've had a couple of instances where a couple (not all) servers were configured differently, and applications would perform differently, depending on what server you were currently randomly connected to

      That's a system management issue, not a capacity planning issue.

      • Agreed. However, I was just saying that it seems the more capacity you add, the more complex a system can get, which can make management more difficult, so simply adding capacity for capacity's sake isn't all that great an idea either.

  • Today, with many projects, the complexity makes it pretty difficult to foresee resource usage. This makes it hard to budget for hardware that can fulfill capacity and performance requirements in the long term.

    How is it any harder than ten years ago, twenty years ago, thirty years ago? Either you know what you have and how it is performing today or you don't. Either you know what the user demand(s) is/are or you don't. Either you know what options you have for hardware, software and services needed or you don't. Heaven forbid you do your job and find out BEFORE you start a new project plan.

    I know why this person posted this Ask /. anonymously. Either they're completely incompetent or their organization is when i

  • by Arijit Mukherji ( 4216435 ) on Monday August 10, 2015 @04:38PM (#50287725)
    This is exactly the situation we ran into when we launched our SaaS platform SignalFx to general availability. Internally it is composed of 15-20 different micro-services, making capacity planning a big challenge. We blogged about our experience here: Metrics based approach to capacity planning [signalfx.com]. SignalFx is a metrics-based monitoring platform, so in a meta way, we used SignalFx to do the capacity planning for SignalFx's launch.

    tl;dr version of our lessons and suggestions:

    1. Design your architecture to be loosely coupled, so that it is possible to capacity-plan for each sub-component independently. Break a complex problem into N simpler ones.
    2. Identify the 'limiting system resource' for each component individually (i.e. what will hit the wall first - CPU, memory, network, etc.). You can do this through a combination of experimentation and plain and simple reasoning based on an understanding of how the component works.
    3. Identify a business metric that correlates with the utilization of the limiting resource (e.g. API calls per second, number of logged-in users, or whatever).
    4. Use analytics/math to project the capacity of the system and how much free capacity you have (make sure to leave enough buffer; e.g. most services won't run very well at 99.99% CPU).

    At the end, you'll have something like this for each component of the system - e.g. "if I'm CPU bound on component X, and the CPU of X goes up linearly with API_calls/s, and I'm currently at 5000 API/sec at 50% CPU, then I have total capacity for 9000 API/sec (with a 10% buffer) and free capacity for another 4000 API/sec."
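
    That arithmetic, written out as a small sketch (the 5000 API/sec, 50% CPU, and 10% buffer figures are just the example numbers above):

    ```python
    # Linear scaling from the current operating point, minus a safety buffer.

    def capacity_from_limiting_resource(current_rate, current_utilization, buffer=0.10):
        """Total and free capacity for a component whose limiting resource
        scales linearly with a business metric (e.g. API calls per second)."""
        max_rate = current_rate / current_utilization   # rate at 100% of the resource
        total = max_rate * (1.0 - buffer)               # keep some headroom
        return total, total - current_rate

    total, free = capacity_from_limiting_resource(current_rate=5000, current_utilization=0.50)
    print(total, free)   # 9000.0 4000.0
    ```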

    Now divide and conquer - give each component owner the responsibility to manage the capacity of their system based on the business needs provided to them.

  • Take a look around your organization... do you have anyone who has a firm understanding of the breadth of the technologies used in your systems? Does that same person have experience with performance testing and capacity analysis? Since you're asking Slashdot, the answer is very likely NO, so go find a consultant who has that knowledge and hire him/her ASAP. Capacity analysis of distributed systems is not something to learn on the fly. Hire a pro.

  • I ran the Capacity Management practice for a leading provider of financial data servers for 10 years. We had a dedicated team doing the capacity planning for the company. However, a key part of my practice was making the performance data available to the application analysts as well as the centralized capacity planning team. By fostering this partnership with the application teams, we were able to understand exactly what was going on with each system and develop capacity plans together. We settle
