Ask Slashdot: Capacity Planning and Performance Management? 64
An anonymous reader writes: When shops mostly ran on mainframes, capacity planning was relatively easy because systems and programs were largely monolithic. Today is very different: we use a plethora of technologies, and systems are more distributed. Many applications are decentralized, running on multiple servers either for redundancy or because of multi-tier architectures. Some companies run legacy systems alongside bleeding-edge technologies. We're also seeing many innovations in storage, like compression, deduplication, clones, snapshots, etc.
Today, with many projects, the complexity makes it pretty difficult to foresee resource usage. This makes it hard to budget for hardware that can fulfill capacity and performance requirements in the long term. It's even tougher when the project is still in the planning stages. My question: how do you do capacity planning and performance management for such decentralized systems with diverse technologies? Who is responsible for capacity planning in your company? Are you mostly reactive in adding resources (CPU, memory, I/O, storage, etc.), or are you able to plan it out well beforehand?
We use a very very high tech bleeding edge system (Score:2, Funny)
Pay attention. This is VERY complicated....
We ask our users what their plans are.
Or just use NoSQL. (Score:2, Funny)
You don't even have to ask your users. Just use NoSQL, for everything. Use it for storing the data, use it for the back end business logic, use it for the middleware, and use it for the front end. Thanks to the CAP Theorem and JavaScript (all NoSQL uses JavaScript), you don't need to worry about scaling at all. That's the beauty of NoSQL: effortless and infinite scalability.
Re: (Score:2)
Also, it's webscale.
Oh hang on, I'm confusing it with MongoDB.
Umm, look, a CLOUD!
Re: (Score:2)
Re: (Score:2)
That's backwards: data first, then everything else flows from that.
Enterprise Architecture (Score:1)
Re:Enterprise Architecture (Score:5, Interesting)
It isn't hard to convince the business end; it is a function of money. IT is a cost center; it doesn't generate revenue. Therefore, by default there is a desire to hold costs down, which means limited IT budgets. Trust me, the business end understands, they just don't care about IT the way IT cares about IT.
That being said, it is EASY to get either money or absolution for the problems the business end creates by not funding IT properly. You get them to sign off on the responsibility for when the shit hits the fan because of shortsighted budget concerns.
"If airplanes crash into this building and 9/11 happens to us, how much data can you afford to lose?"
"If a hacker gains access to our database, how much would that cost the company?"
"How much does IT downtime cost this company?"
People incapable of answering these questions (and a thousand more) should not be making IT decisions until they can.
"Good IT is expensive. Bad IT is costly"
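The downtime question above really is just arithmetic once you have the inputs. A minimal sketch, with entirely made-up numbers (revenue, staff count, and rates are placeholders; substitute your own):

```python
# Back-of-the-envelope downtime cost estimate.
# All figures below are hypothetical placeholders.
def downtime_cost(revenue_per_hour, affected_fraction, staff_count,
                  loaded_hourly_rate, hours_down):
    """Lost revenue plus idle-staff cost for an outage."""
    lost_revenue = revenue_per_hour * affected_fraction * hours_down
    idle_staff = staff_count * loaded_hourly_rate * hours_down
    return lost_revenue + idle_staff

# Example: $50k/hr revenue, 40% of it blocked, 200 staff at $60/hr, 4-hour outage
cost = downtime_cost(50_000, 0.40, 200, 60, 4)
print(f"${cost:,.0f}")  # $128,000
```

Even a crude model like this gives the business end a number to sign off on, which is the whole point of the questions above.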
Re: (Score:2)
This is a good point. I have come to realize that as an IT professional, oftentimes the only thing I have the power to do is generate options, build the business case for those options (including the risks of not doing them; that step is very key), and then present them to the business. My job is to help the business leaders make informed decisions.
If they ultimately decide to avoid the costs and accept the risks of doing so, they have only themselves to blame if/when the risk materializes.
Most
Re: (Score:2)
A good idea seems to be incorporating IT, in some form, into risk management. Risk Management people who understand IT will fight for you, and the finance department will certainly listen to Risk Management if you can't get through to them yourselves.
I always wondered if there's some sort of cost model out there that uses multiplier factors. Like, yes, IT may be a cost center, but IT effectiveness is essentially a multiplier on every other department. If it sucks, it slows down every department and if it
Re: (Score:2)
Re: (Score:2)
My powers of prediction are inadequate; can anyone sell me a good, working crystal ball? Or, better, maybe /. can just give me one...
Oh, plenty of vendors are more than willing to take your money and claim to give you a crystal ball... That's not unique.
Uses a mainframe (Score:1)
Depends on the corporate culture involved (Score:5, Interesting)
Speed of implementation in various organizations (or even departments, divisions, etc.) runs a spectrum from "do stuff on more or less a whim" to "go through eight years of planning meetings to discuss the possibility of actually doing something." On the former end of the spectrum, you buy extra capacity. At the latter end, it doesn't matter, because you won't get the budget to buy extra capacity.
Only you can decide that. (Score:5, Insightful)
There are a lot of tools you can use to help with capacity, be it VM farms, SANs/NASes, cloud providers, or chassis/blades. Just a few points of advice:
1: Everyone will sell their product as the silver hammer for which every target is a nail. The VNX guys will sell their SAN as the be-all and end-all, even if you just use CIFS/SMB. The security vendor will sell you exotic appliances for encrypting your tape silos. The PC guy will sell you tons of 1U racks and try to convince you that the onboard drive array is better than a SAN if they don't have a SAN product; otherwise, it's all about how slick their HBAs work when used with their SAN.
2: Don't forget security. It may be cheaper to have one VM cluster for everything, but it may be wiser to keep one client's hyper-sensitive stuff in one VMware datacenter [1], while another client running some backend stuff for an app sits in a different container.
3: Before committing to a purchase, grab the manuals and documentation and read up on the device. You might find it doesn't do what you want. Don't forget to take into account the type of I/O and other characteristics. I have had to deal with a terabyte per hour of random writes, and the only solutions were either a caching HBA with enough SSD to turn the random writes into an easy-to-digest sequential stream for the SAN, or going pure SSD. Sequential I/O is a lot easier and a lot cheaper to deal with than random I/O. The same goes for I/O that is often cached versus I/O that is never reused.
[1]: A datacenter is a VMware object type. You can't vMotion across it, and it is intended to provide distinct separation between items.
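To put rough numbers on that terabyte-per-hour random-write workload: the 8 KiB I/O size and the per-spindle IOPS figure below are assumptions for illustration, not from the comment.

```python
# Rough sizing for a terabyte/hour of random writes.
TB = 10**12
bytes_per_sec = 1 * TB / 3600          # ~278 MB/s sustained
io_size = 8 * 1024                     # assume 8 KiB random writes
iops_needed = bytes_per_sec / io_size  # ~33,900 IOPS

hdd_random_iops = 100                  # rough figure for a 7.2k-rpm spindle
print(f"{bytes_per_sec/1e6:.0f} MB/s, {iops_needed:,.0f} IOPS, "
      f"~{iops_needed/hdd_random_iops:.0f} spindles if served as raw random I/O")
```

Hundreds of spindles versus a handful of SSDs (or one cache that sequentializes the stream) is exactly why the poster's two options were the only sane ones.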
Re: (Score:2)
Why not do a calibration between Sales, Inventory, Purchasing, and Logistics, as x = a*s + b*i + c*p + d*l, versus the number of CPUs + network + transactions + support staff + some critical resource at a known level of x, where a + b + c + d = 100% and all of a, b, c, d > 0. Do not assume linear growth, but some growth proportional to x**3 or x**5.
Typically you will have to include some measures, such as a maximum response time of 1 second (or 0.1 seconds). Don't try to split hairs four ways.
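A guess at what that calibration might look like in practice: weight the business drivers into one index x, then map x to resource load with a superlinear growth assumption. The weights, baseline figures, and the choice of exponent 3 below are illustrative assumptions, not from the comment.

```python
# Weighted business-driver index: x = a*s + b*i + c*p + d*l,
# with a + b + c + d = 100% and all weights positive.
def demand_index(s, i, p, l, weights=(0.4, 0.2, 0.2, 0.2)):
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w > 0 for w in weights)
    a, b, c, d = weights
    return a * s + b * i + c * p + d * l

def projected_load(x, baseline_load, baseline_x, exponent=3):
    """Superlinear scaling: load grows like x**exponent, per the comment."""
    return baseline_load * (x / baseline_x) ** exponent

x = demand_index(100, 80, 60, 40)                         # = 76.0
print(projected_load(x, baseline_load=1000, baseline_x=76))  # 1000.0 at baseline
print(projected_load(x * 1.5, baseline_load=1000, baseline_x=76))  # ~3375 at 1.5x demand
```

The x**3 assumption is the important part: 50% more business volume triples the projected load, which is why "do not assume linear growth" matters.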
For anything expen$ive, we use TQ (Score:5, Interesting)
I used to work for the (late, lamented) Sun Microsystems, and when we needed to give a credible answer to a price-sensitive customer, we used TeamQuest Model. It pulls time-based info out of production-system stats, so it doesn't add to the load, and then offline builds a classic queuing-system model, working all in time units. That then allows the customer (really meaning me!) to ask what to expect from a specific configuration and to compare different systems for their price/performance tradeoffs.
For common setups, we have spreadsheets based on what Model said, so the salespeople typically don't know there's a cool mathematical model behind the scenes (;-)) That's probably true of other vendors who use TQ models: it runs on anything modern, so lots of vendors use it.
I have nothing to do with the company; they just allowed me to save $1.2 million once on a new datacenter, so I'm really, really impressed by them.
--dave
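TeamQuest Model itself is proprietary, but the principle behind such tools can be shown with the simplest possible queueing model. This toy M/M/1 sketch (my own illustration, not TeamQuest's actual model, which uses much richer multi-class queueing networks) shows why response time explodes as utilization approaches saturation:

```python
# Toy M/M/1 queueing model: predicts mean response time from
# measured service time and utilization, all in time units.
def mm1_response_time(service_time, utilization):
    """Mean response time R = S / (1 - U) for an open M/M/1 queue."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# A 20 ms service time degrades sharply as the box approaches saturation:
for u in (0.5, 0.8, 0.9, 0.95):
    print(f"U={u:.2f}: R={mm1_response_time(0.020, u) * 1000:.0f} ms")
```

This is why a box that "still has 10% CPU free" can already be missing its response-time targets, and why a modeling tool beats eyeballing utilization graphs.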
Re: (Score:2)
allowed me to save $1.2 million once for a new datacenter
That's a lot. Or that isn't much. It depends on what the whole build out actually was. In other words, was that 50% cost reduction, or 1%.
Re: (Score:2)
"allowed me to save $1.2 million once for a new datacenter
That's a lot. Or that isn't much. It depends on what the whole build out actually was. In other words, was that 50% cost reduction, or 1%."
That's a lot in any case. Even if it's 1%, it's still $1.2 million against an investment of (whatever that guy's wages times the time expended).
You might have questioned whether that kind of savings was the biggest at hand (if he could have invested his time to save $2.4 million somewhere else), but the way you did, it remai
Re:For anything expensive, we use TQ (Score:2)
Re: (Score:2)
That depends, are you getting the information ... (Score:5, Interesting)
That depends: are you getting the information you need?
Are your business analysts/architects even able to answer questions such as: how many net new users? How many concurrent users? Can they summarize the typical workload? Back in the day (and I'm only in my early 40s), this stuff used to be well defined. We used to have large documents that went down to the level of expected network load. So it's either, as you said, too difficult given the diversity of the systems, or they just don't know how to do their jobs anymore. I honestly think it's about a 20/80 split. Yes, the environments are more difficult to manage, but BAs/architects haven't adapted or frankly just don't care.
BAs can't give me any information that would help me foresee or estimate how much load a project/change is going to put on the environment. So when I'm asked if we need new hardware, I usually just tell them to make sure they plan a proper load test and be prepared to spend money.
In my company, it's my job to make sure lights-on operations run well and to highlight any issues related to capacity. For new projects, that falls to the project team, which I may or may not be a part of.
Storage, for us, seems to be the largest constraint, with memory and CPU coming in behind. Since we can't get much information, we just make sure all our servers are hooked up to a large SAN so we can quickly provision more space.
Translation: resumes with Capacity Planning (Score:1, Funny)
Translation:
"Dear Product^WReaders, we at Dice want to know if 'Capacity Planning' is the new buzzword. Please provide anecdotal evidence to the approximate value of resume candidates with these keywords."
A good monitoring system helps (Score:4, Interesting)
I am partial to check_mk right now, but I've done this kind of thing on Nagios with pnp4nagios. When you have your monitoring system gathering network interface data, disk usage, CPU utilization, etc., and storing it in some kind of database like RRD, InfluxDB, or Graphite, it isn't much of a stretch to examine that data in aggregate and graph trends. It really is amazing what you can figure out with this technique.
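Once the samples are in RRD/InfluxDB/Graphite, trend projection is a small amount of code. A sketch of the idea with invented data points (a real version would pull the series from the monitoring backend instead of hard-coding it):

```python
# Sketch: project when a disk fills, from time-series samples like those
# a monitoring system would store. Data points are invented for illustration.
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

days     = [0, 7, 14, 21, 28]          # sample times (days)
used_gb  = [500, 520, 545, 560, 580]   # disk usage at each sample
capacity = 1000                        # total capacity in GB

slope, intercept = linear_fit(days, used_gb)
days_left = (capacity - intercept) / slope
print(f"Growing {slope:.1f} GB/day; full in about {days_left:.0f} days")
```

The same fit works for interface throughput, memory, or IOPS; the hard part is picking a window long enough to smooth out noise but short enough to catch a change in trend.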
Anybody get fired for buying too much? (Score:4, Insightful)
Too much network bandwidth? Too much storage capacity? Too much CPU?
Usually the drill seems to involve a lot of begging and pleading for money from management. The intermediate levels get dinged if they have to go back to the well, but they don't seem to get dinged if budgeted money ends up buying unused capacity.
I don't doubt there are places that do heavy audits, ask hard questions about why you have a SAN with a bunch of free space or why your 10-gig NICs are running at sub-gigabit utilization, and cause all manner of pain and suffering over excess capacity already budgeted and bought.
But usually it doesn't seem to happen that way. Management barely supplies enough resources to meet their running demands and line management buys as much excess capacity as they can beg, borrow or steal because they know they will be punished for buying too little.
Spend Money on the Right Tools (Score:4, Informative)
These days capacity planning comes down to having the right tool set for the job. I like VMTurbo. There are a few others out there that will get the job done. VMTurbo is nice because it is platform agnostic and can help you decide where to place workloads based not only on pure performance numbers but also on resource cost. (For example, Hyper-V is likely less expensive than VMware in most situations.)
It is also worth considering an Application Performance Monitoring (APM) tool. Being able to identify exactly where the application is slow, and whether it is an issue with the code or with the underlying OS/infrastructure, will save a lot of time during troubleshooting and also help identify opportunities to proactively allocate resources to head off potential bottlenecks.
On a similar subject, a tool that provides deep visibility into the database layer helps a lot for the same reasons. A lot of junior admins make the mistake of assuming that high database server utilization is indicative of under-provisioned hardware. In reality, poorly written queries will bring down even the beefiest of database servers. While you get some information with the built-in management tools, a dedicated monitoring platform (like Spotlight from Dell, for example) will help you develop historical trends while also providing real-time troubleshooting capabilities.
Most of the time the network is the last bottleneck. In Cisco shops you can use NetFlow to figure out where the problems are. Or, if the company you work for has money to burn, the UCS infrastructure stack is very robust and comes with a whole slew of management and monitoring tools that can be leveraged to discover latencies before they impact production environments too severely.
Just use AWS (Score:3)
Re: (Score:2)
Not a magic bullet.
Often what you want is a projection of the next five years' cost. Sure, AWS is good at making sure you only pay for what you need, but it doesn't help you make the Go/No-Go decision on a project. You can still easily get into a "this thing costs me more in usage than it's saving me" situation.
Re: (Score:3)
Re: (Score:2)
You can buy reserved instances for three-year periods; this locks in the price and guarantees availability.
But that doesn't guarantee that the capacity you reserved will provide the performance you need. See the TeamQuest Model posts above for an example of a tool that can help you predict how much capacity you'll need to scale up from a pilot to a full implementation.
Capacity Pros and Cons (Score:2)
Not sure how it is done elsewhere, but I think we tend to balance load tenuously: add more applications and users slowly; when things start slowing down and users start to complain, bring on a few more servers; repeat.
However, one thing I will say: there is a danger in overcapacity. Managing multiple servers can be difficult (apparently), and it can cause some pretty hard-to-nail-down issues in applications, particularly in legacy applications.
I've had a couple of instances where a couple (not all) sev
Re: (Score:2)
I've had a couple of instances where a couple (not all) servers were configured differently, and applications would perform differently depending on what server you were currently randomly connected to
That's a system management issue, not a capacity planning issue.
Re: (Score:2)
Agreed. However, I was just saying that the more capacity you add, the more complex a system can get, which can make management more difficult; so simply adding capacity for capacity's sake isn't all that great an idea either.
OFFS (Score:2)
Today, with many projects, the complexity makes it pretty difficult to foresee resource usage. This makes it hard to budget for hardware that can fulfill capacity and performance requirements in the long term.
How is it any harder than ten years ago, twenty years ago, thirty years ago? Either you know what you have and how it is performing today or you don't. Either you know what the user demand(s) is/are or you don't. Either you know what options you have for hardware, software and services needed or you don't. Heaven forbid you do your job and find out BEFORE you start a new project plan.
I know why this person posted this Ask /. anonymously. Either they're completely incompetent or their organization is when i
Re: (Score:2)
Simplify the problem, use a metrics based approach (Score:3, Informative)
tl;dr version of our lessons and suggestions:
In the end, you'll have something like this for each component of the system, e.g.: "if I'm CPU bound on component X, and CPU of X goes up linearly with API_calls/s, and I'm currently at 5000 API/sec at 50% CPU, then I have total capacity for 9000 API/sec (with a 10% buffer) and free capacity for another 4000 API/sec."
Now divide and conquer: give each component owner the responsibility to manage the capacity of their system based on the business needs provided to them.
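The headroom arithmetic in that example works out like this, assuming (as the comment does) that CPU scales linearly with request rate, which each component owner must verify:

```python
# Per-component headroom from one utilization sample, assuming linear scaling.
def free_capacity(current_rate, current_util, buffer=0.10):
    """Requests/sec available before hitting (1 - buffer) of full utilization."""
    rate_at_full = current_rate / current_util   # extrapolate to 100% CPU
    total = rate_at_full * (1.0 - buffer)        # keep a safety buffer
    return total, total - current_rate

# The comment's numbers: 5000 API/sec at 50% CPU, 10% buffer
total, free = free_capacity(5000, 0.50)
print(f"total {total:.0f}, free {free:.0f}")  # total 9000, free 4000
```

Run per component and per bottleneck resource (CPU, memory, I/O), taking the minimum; the scarcest resource sets the component's real capacity.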
Re: (Score:2)
Re: Simplify the problem, use a metrics based appr (Score:1)
Hire a specialist (Score:2)
Take a look around your organization: do you have anyone with a firm understanding of the breadth of the technologies used in your systems? Does that same person have experience with performance testing and capacity analysis? Since you're asking Slashdot, the answer is very likely NO, so go find a consultant who has that knowledge and hire him/her ASAP. Capacity analysis of distributed systems is not something to learn on the fly. Hire a pro.
Re: (Score:2)
Migrate to the cloud, either public or private.
Yes, and I heard it would make you rich, bring your wife back home, and moreover cure cancer.
if you don't know your needs up front... (Score:2)
then cloud computing is pretty much aimed at you. (At least if your need is likely to be variable.)
The whole point of the cloud is that you only pay for what you use. If your needs are wildly variable from one month (or day) to the next, it might make sense to rent time/storage/throughput via the cloud.
If your needs generally only increase, and increase at a predictable pace, then it probably makes more sense to buy your own hardware.
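The variable-versus-steady break-even can be sketched in a few lines. All prices below are hypothetical placeholders, just to show the shape of the comparison:

```python
# Cloud vs. owned-hardware monthly cost comparison (hypothetical prices).
def monthly_cloud_cost(hours_used, rate_per_hour):
    """Pay only for what you use."""
    return hours_used * rate_per_hour

def monthly_owned_cost(purchase_price, amortize_months, ops_per_month):
    """Amortized hardware plus power/space/admin overhead."""
    return purchase_price / amortize_months + ops_per_month

cloud = monthly_cloud_cost(hours_used=200, rate_per_hour=2.00)    # bursty use
owned = monthly_owned_cost(20_000, amortize_months=36, ops_per_month=300)
print(f"cloud ${cloud:.0f}/mo vs owned ${owned:.0f}/mo")  # cloud $400/mo vs owned $856/mo
```

Flip `hours_used` toward 24x7 (about 730 hours/month) and cloud loses; that crossover is exactly the "variable vs. predictable growth" distinction the comment draws.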
Planning Tips (Score:1)