Education

Six Sigma-fying Your IT Department? 96

Saqib Ali asks: "These days all the major corporations are looking at Six Sigma methodology to improve their processes. I am planning to take a Six Sigma Green/Brown belt class in March. I work for the IT department, and I have a statistics background from my university studies. I can understand Six Sigma being used in production/manufacturing facilities, but it is hard for me to figure out how to apply Six Sigma in IT. Are any other readers using Six Sigma methodology for IT? If so, what are some of the things it can be applied to? As part of the training class, I have to come up with an idea for a Six Sigma analysis project. The project doesn't have to be IT-related, but I would like it to be, so that I can see its application in real life. Any ideas for the project?"
This discussion has been archived. No new comments can be posted.

  • by Woodblock ( 22276 ) on Monday January 20, 2003 @09:00PM (#5123011) Homepage
    Posted by Cliff on 07:58 PM January 20th, 2003 from the we-can't-link dept.
    Woodblock asks: "These days all the major corporations are looking at Six Sigma methodology to improve their processes. Should I ask Slashdot for a project idea using Six Sigma methodology without explaining it or providing any descriptive links? Sincerely, Six Sigma Lover in Colorado."
    • by missing000 ( 602285 ) on Monday January 20, 2003 @09:06PM (#5123055)
      link [isixsigma.com]
      • Buzzword alert (Score:4, Informative)

        by Basje ( 26968 ) <bas@bloemsaat.org> on Tuesday January 21, 2003 @03:41AM (#5125187) Homepage
        From the site:

        Six Sigma is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating "defects" in manufacturing and service-related processes. Commonly defined as 3.4 defects per million opportunities, Six Sigma can be defined and understood at three distinct levels: metric, methodology and philosophy...

        With such a description, I wouldn't touch it with a ten foot pole.

        • Depending on the industry the original poster works in, not doing it might not be an option. In the automotive industry, it is a requirement by the Big 3 to do business.

          The description does sound like a business fad, but in practice it is a very effective way to improve manufacturing processes. I'm not sure if it translates well to IT.
    • by Anonymous Coward
      It's this thing that companies (like Sun Microsystems) bombard their employees with for months at a time, removing them from their work and other productive efforts. Nobody is *totally* sure what it really is because even though everyone spends weeks of their life training for it, nobody ever implements any of it and it is soon forgotten.

      It doesn't matter in the long run, because major corporations revamp their processes and trends every few years when some yahoo in the company wants to impress everyone and go on a power trip by changing the dynamics of the company (or so they think). And as soon as it has been changed, trained, and implemented, someone else comes along and changes it again.
  • by PhysicsGenius ( 565228 ) <<moc.oohay> <ta> <rekees_scisyhp>> on Monday January 20, 2003 @09:01PM (#5123016)
    I think you might be able to utilize the Six Sigma backbone to leverage the synergy of your expected net gain while remaining focused on your core competencies. But it might be smarter to start printing out resumes.
    • by Anonymous Coward
      I am intrigued by your ideas. Would it be possible to use these ideas in concert with XML and B2B?
    • Thank you, Wally.

      p.s. At first glance I thought the title of the article was "Is Six-Sigma frying Your IT Department?" Mental Freudian slip?

      • At first glance I thought the title of the article was "Is Six-Sigma frying Your IT Department?"

        That's what I thought I saw too, which is why I bothered loading the story, figured it was a new worm or virus or something.

        Now that I see what it really is I'm betting a lot of IT departments would have preferred the malware.

    • Bingo, sir.
    • I can then track KPI's in my CRM to make sure I meet my SLA's. Then plug all that into the STFU to achieve total FUBAR compliance, all without RTFM, but I bet my job gets EOL'd and I get RIF'd

  • For the uninitiated (Score:3, Insightful)

    by bolix ( 201977 ) <bolix.hotmail@com> on Monday January 20, 2003 @09:05PM (#5123048) Homepage Journal

    I've always been of the opinion that project and process methodologies are the last resort worthless middle management use to justify their existence. Leadership, aptitude and competence being fearsome skills, they prefer to outsource them.

    Should your institution decide on that course, it's a good idea to start [motorola.com] with the founders of the "process".
    • So true. The major problem arises when 'tools' like this one are accepted or used as a performance measurement by an executive panel, though.

      Being part of middle management doesn't necessarily imply incompetence, and if the organisation is fairly large, clever middle management is a necessity.

      Oh yes, thanks for the link - it will be useful to determine if Six Sigma looks like just another meaningless exercise to spend funds on.

        Philosophies like this, measuring what you do and getting people involved in using that information to improve what they do, lead to things like longer-lasting transmissions. When I first learned about it, it was called statistical process control. An American idea, from W. Edwards Deming, that was shunned by corporate America, and that the Japanese car companies used to kick the hell out of the American auto industry.

        You want LCD screens without dead pixels? You want something like this in place. Cheaper chips? This is for you.

        The problem Boeing had rolling out a program like this came in the way of resistance of lower level managers.
    • I've always been of the opinion that project and process methodologies are the last resort worthless middle management use to justify their existence. Leadership, aptitude and competence being fearsome skills, they prefer to outsource them.

      People before process, yes. It doesn't matter what is in some document if the people aren't any good -- or are flat out dishonest.

      That applies to any sized project, and the smaller the project the more people matter and the less process does.

      I worked on one project where nobody on the customer's side would talk to me for 2 weeks strictly because of office politics and a manager that felt threatened by the contracting company I worked for. We had a great process...and it was entirely ineffective for months at a time.

      That said, programmers and tech-only specialists don't make large projects work. Good managers do. Think about it: ineffective management will kill any work that good programmers could do. Worse yet, bad management tends to bring in bad workers. Good managers will get rid of bad workers and bring in good ones, improving success.

      For large projects, process methodologies are just as critical as the people, since management has to be able to say "we will do this". Without them: no specs, no design documents, nobody to check the code...

      Process doesn't guarantee a thing, but without standards and paperwork, how do you know that you've done anything? How do you know the difference between an unreasonable user and one that has a real, legitimate gripe?

      Document the hell out of everything? YES! Don't allow any flexibility? NO! Just make it obvious what is being done and what is expected. If you deal with reasonable people, it'll work out. Work with bad people (managers or not) and it'll be a struggle. If you don't document anything, you have completed nothing.

  • by jsimon12 ( 207119 ) on Monday January 20, 2003 @09:13PM (#5123115) Homepage
    Hmmm, lets see, I have been through:

    Mil-Spec (which sort of actually has a point, but is a bit overdone; well, "a bit" is an understatement, and you are at the whim of the government auditors, so better keep them happy).

    QA (Quality Assurance, uhh WTF? can we say paperwork)

    ISO 9001 and ISO 9002 (really, is there a difference? Yes, I know the textbook answer, but it seems simply to amount to more paperwork, less productivity, and surprise inspections; it is like Mil-Spec without the fat government contracts, sort of like eating healthy and still having a heart attack).

    Now the latest, Six Sigma: some tool writes a book on applying standard deviation to business practice and now everyone is on board. Give me a break, yet another engineer's paperwork hell. Anyone who says it works probably has an MBA.
    • Uh... you'd better hope you go through QA. Or would you rather be held accountable for production problems because your software doesn't scale, or does something "wrong"?

      Programmers are NOT the best people to test their own code. Other programmers, maybe, but even that is debatable.
      • No, he's not talking about QA, as in testing, but QA, Quality Assurance, the business practice...

        It basically comes down to having a procedure for everything.

        You can be a QA accredited company, and churn out as much crap code as you like, as long as you have a procedure that states that's what you're supposed to do.

        Of course, whether or not you have any customers is another matter....
      • Quality assurance better not be the first time you realize that your software doesn't scale...
        • Yes, it should be, because QA (better: VV&T, verification, validation and test) starts before coding.

          Test/QA/VV&T starts in the general documents needed during sales -- "this system will do these major tasks" -- and is followed by the initial rough specifications. Specifications -- even poor ones -- do come first, right?

          Design (how the specs will be implemented) and coding (the implementation) come after QA/VV&T have been working on the project and have written up initial, if sketchy, test plans.

          By the time that coding starts, serious scaling issues should already be addressed. QA/VV&T/Test are there to make sure that things work as specified. If the ability to smoothly scale up a project wasn't included in the specs or design documents, it's unlikely to end up in the implementation.

      • I'll bet he is talking about TQM: Total Quality Management.
    • Crap. I'm going through ISO 900x right now. They said that they wanted processes that are totally repeatable through IT. To improve the way we do things... best practices throughout the sites, etc, etc. Then it came down to actually implementing it.

      Now the mission has changed. It is now, "We want the ISO certificate to show our customers to impress them." And now the process is all about documenting what we do on a 10,000 foot level (which really provides no value at all).

      And now the internal pre-testing has been pushed back again. (The entire timeline keeps getting pushed back and back and back.) I figure if we wait long enough, a reorg will probably take care of this whole thing, and we won't have to worry about it.

      PS: I think ISO 9001 in IT is just part of a big ploy to document everything that you do, so that they can turn it over to trained monkeys in third-world countries to attempt to operate half as well, for a quarter of the price.

      I understand what a six sigma deviation is (at least, I think I do), but I don't know the actual process you're going through. I'll assume it is an ISO9001 with a heavy emphasis on refining and best practices? Please don't tip off my management to this. Please?
    • by Anonymous Coward
      you left out TQM. Then there's GMP. Every industry has its "thing du jour." For IT, there was client-server, multimedia, that Internet dealy, web services, Java, thin clients, etc. Each one tends to die off in a few years, but portions of it linger on forever.
  • by tunah ( 530328 ) <sam&krayup,com> on Monday January 20, 2003 @09:21PM (#5123173) Homepage
    This is essentially the same idea as the "five nines" reliability that was touted for uptimes. A failure requires a deviation of six standard deviations from the mean, which (under the usual Six Sigma convention allowing a 1.5-sigma long-term shift) corresponds to a 99.99966% success rate, or 3.4 defects per million.

    It's just another metric. Don't get too excited.
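The arithmetic behind these sigma levels is easy to check with Python's standard library. Note that the oft-quoted 3.4 defects per million for "six sigma" builds in a conventional 1.5-sigma long-term drift of the process mean; that shift is an industry convention, not something stated in this thread:

```python
from statistics import NormalDist

STD_NORMAL = NormalDist()  # mean 0, standard deviation 1

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a one-sided spec limit.

    The conventional 1.5-sigma long-term drift is subtracted, which is
    why "six sigma" is usually quoted as 3.4 DPMO rather than the
    ~0.001 DPMO a literal six-sigma tail would give.
    """
    return (1 - STD_NORMAL.cdf(sigma_level - shift)) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):,.1f} DPMO")
# 6 sigma comes out to ~3.4 DPMO, i.e. a 99.99966% success rate
```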

  • by plsuh ( 129598 ) <plsuh&goodeast,com> on Monday January 20, 2003 @09:40PM (#5123314) Homepage
    One thing that all of the various "quality assurance" regimes miss entirely is the value of being able to make mistakes. Risk-averse managers love this "zero defects" kind of environment, because they like the predictability. However, for the organization in a rapidly changing environment, such predictability is often deadly. Achieving the goals takes too long, and by the time you have perfected a process it's obsolete. Customers have moved beyond what you are doing and demand something else. Your perfect buggy whip is no longer useful in an age of automobiles.

    Six Sigma and the like were developed in a manufacturing environment, where the same processes are used for years, and there is time to get it perfect. In the IT industry, two or three years yields a radically different environment. People need the room to take risks in order to deal with such a dynamic environment. The corollary is that in taking risks, sometimes you roll snake eyes and crap out. People make mistakes and if you focus only on the down side, you miss out on the greatly increased upside that comes from taking those risks.

    Six Sigma in the IT department sounds like a loser. In IT, you want a management style that is looser and more free-flowing, able to shift quickly, and not stuck on not making mistakes. (Perhaps the biggest mistake to learn from would be to try to apply Six Sigma to your department!)

    --Paul
    • In the IT industry, two or three years yields a radically different environment.

      This is a generalization about the IT field that holds true sometimes (most times?), but not always. In the health-care provider world, for example, they expect data feeds on ...get this... tape reels.

      IT is just a tool, it's not the application of that tool. ATM machines, NASDAQ, the FAA, and hospital systems all use ancient IT platforms. But I for one am really glad that they took the time to get it right, and that they almost never fail.

      Would you want your bank's ATM system to be rolling out .net web services today?

    • Repeatability (Score:5, Insightful)

      by GCP ( 122438 ) on Tuesday January 21, 2003 @12:03AM (#5124146)
      I agree with your response, but I'd like to state it somewhat differently.

      It's all a question of repeatability.

      The idea of six sigma is a statistical thing, where you have a huge number of instances of the same thing, and they are almost all identical: almost completely repeatable. The fewer the exceptions, the more sigmas.

      I feel this is quite inappropriate in something like IT app development, because of the one-off nature of most IT apps. It may be a good idea for other aspects of IT that need to be repeated a huge number of times without any glitches, such as phone connections, server backups, etc., and maybe that's all that 6-sigma is trying to address here. But IT app dev is custom craftsmanship. You have a few things that are approximately repeated, such as putting up yet another web form, but most apps are not clones of anything. If they were, it would be "installation", not app dev. Most apps don't even share much in the way of success metrics, and there are far too few of them to talk about 6-sigma.

      I believe in statistical process control for repeatable processes, but for custom crafted items like apps, I think other software methodologies make a lot more sense.

    • I think that's an exaggeration at best.

      IT basically moves back and forth between distributed computing and centralized computing. Six Sigma might cause somebody to question all of the bonehead decisions made in pursuit of the latest "right" way to do things.

      In 1981, VAX and Mainframe ruled IT. Users poked at terminals. IT Gurus talked about JCL and Serial ports.

      In 1991, PCs were in. All the bigshots demanded PCs so they could type memos with TrueType fonts and draw graphs with Excel or 1-2-3. Companies spent hundreds of thousands of dollars hiring IT people to allow these people to share a printer. IT gurus talked about QBasic and Thinnet cables.

      2001... centralized server apps rule IT. Users poke at web browsers. IT gurus talk about XML and security.

    • by nbvb ( 32836 ) on Tuesday January 21, 2003 @01:33AM (#5124638) Journal
      One thing that all of the various "quality assurance" regimes miss entirely is the value of being able to make mistakes. Risk-averse managers love this "zero defects" kind of environment, because they like the predictability.


      Right on.

      I work in an environment where "zero impact" is the big buzzword. The end goal is to have our change control list at the end of the year list "Impacting changes: 0". Our "emergency" changes aren't allowed to be above 15% of our totals.

      So you know what that does? It makes everyone do the tasks that _should_ be change-controlled cowboy-style. Nobody wants to submit the forms to replace a failed disk drive, because you're going to get beaten up for it. So everyone just DOES IT and hopes for the best. Application teams roll out new code and bugfixes all the time without change controls; they don't want to sit on the phone with the VPs yelling at them for being over their emergency change numbers...

      That sort of "zero defect" management style does nothing but drive the business processes underground.

      It's sickening.
      • they don't want to sit on the phone with the VP's yelling at them for being over their emergency change numbers

        VP: Why are 20% of your changes for emergencies?
        Sysadmin: Because you won't buy us [X].

        This is your chance to send any message to senior management that you want, and have them listen to it. Dilbert cartoons notwithstanding, senior management are generally pretty smart guys. They may not know the difference between bash and sh, but that's != stupid. They put those metrics in place for a reason. If the limits are being exceeded, they want to know why: not because they enjoy hitting people on the head, but because they want to fix it. And if the way to fix it is to spend some money, then you have a ready-made business case.

        Paul.

        • See, that's where the problem is.

          If we all reported to the same VP, I'd agree.

          But the network, sysadmin, and application teams each report to a different VP.

          So no matter what happens, there's always 2 VP's yelling.

          If it's a network problem, the sysadmin and app VP's gang up on the network team. If it's a system problem, the network & app team gangs up on the sysadmins.

          And it's _never_ an application problem ;)

          ~NBVB (one of the aforementioned sysadmins)
      • What you are describing is not Six Sigma. Six Sigma is about determining why the disk drives are failing, and/or why failing disk drives are an "emergency," and fixing that. It's about determining why bug fixes are needed in the application code at all, and trying to address the underlying issues. As for new processes, it has a whole system.

        Let's take disk drives. Are you sure that disk drives on crucial systems shouldn't be "hot swappable," with disk drives treated as an operations consumable rather than an infrastructure repair? That is, a failed disk drive swap would always have zero impact.

        As for VPs beating people up, try raising the issue this way: "You are absolutely correct, we are having way too many emergency disk drive swaps. Can I meet with some of the Six Sigma green belts overseeing hardware failures for the networking team, to see which aspects of their methods we could apply in systems administration?" The networking VP then has to either put his people on the problem (and the easiest way to defuse criticism is to assign people tasks) or he won't raise the issue again.

        And if "zero impact" is the buzzword, work with it. That looks to me like the best cash cow you could ever want. Zero impact forces:

        a) Redundant staff
        b) Redundant equipment
        c) A full testing environment

        I'd bring up non-zero impacts every chance I got.
      • A little off topic, but that reminds me of this allegedly true story about unintended consequences:

        I worked as an accountant in a paper mill where my boss decided that it would improve motivation to split a bonus between the two shifts based on what percentage of the total production each one accomplished.

        The workers quickly realized that it was easier to sabotage the next shift than to make more paper. Coworkers put glue in locks, loosened nuts on equipment so it would fall apart, you name it. The bonus scheme was abandoned after about ten days, to avoid all-out civil war.

        This was from Scott Adams's Dilbert Newsletter #44 [unitedmedia.com], so you might want to take it with a grain of salt.

      • So you know what that does? That makes everyone do the tasks that _should_ be change controlled cowboy-style. Nobody wants to submit the forms to replace a failed disk drive, because you're going to get beaten up for it. So everyone just DOES IT and hope for the best. Application teams roll out new code and bugfixes all the time without change controls -- they don't want to sit on the phone with the VP's yelling at them for being over their emergency change numbers .......


        That sort of management style "zero defect" does nothing but drive the business processes underground.
        Absolutely BRILLIANT summary of what is wrong in the IT world. I am a sysadmin working for a large Canadian financial institution (yeah, I know, kind of an oxymoron). The change process here is insane.

        The systems I work with are mostly AIX. One of the great things about AIX is the "chfs" command: change a filesystem. It allows dynamic increases of filesystems. If a filesystem needs to be increased, the command is "chfs -a size=XXXXXX /fs_name". Wonderful, isn't it? One simple, non-intrusive command to get the job done. Unfortunately, change management kicks in.

        1) Provide detailed justification and procedures for implementing the change.

        2) Provide detailed back-out procedures on how to recover from the change.

        3) Schedule the change so as not to conflict with other changes. Note: change window for ALL systems is Sunday 03:00 to 07:00 and our group is responsible for 300 systems. Note 2: You cannot have two changes on one system nor on multiple systems providing the same function for the same line of business.

        4) Get the change record approved. Guess what, it's not as easy as it sounds. For web servers the list includes seven groups, application servers eight, database 11, other infrastructure systems up to 16. Oh and don't forget the person approving the record cannot read so you must repeat the justification, procedures and back out plan for each group. Repeat this step up to 16 times.

        5) The change MUST be presented to at least one and usually two "Change Review Groups".

        6) If the record is not fully approved by end of day Tuesday the week BEFORE the change, a special exemption must be obtained (two VPs must approve). This is a wonderful rule, since most client requests are received on the Thursday before they want the change implemented.

        So guess what: changes are implemented on the fly. The process could EASILY be streamlined to have TWO groups approve the change, the requestor and the implementor. All that is needed for documentation is a copy of the e-mail pasted into the change record. Five minutes of work instead of at least six hours, and you don't have employees whining about internal bullshit on /.
  • by bill_mcgonigle ( 4333 ) on Monday January 20, 2003 @09:58PM (#5123421) Homepage Journal
    Last Fall I was searching for a new refrigerator. I had settled on a GE model, and was combing their website looking for specs and ran into dozens of html/web app errors.

    Having just read an article about how GE pioneered six-sigma, and thinking about the statistical distribution of the 6th sigma, I recall figuring that there must have been millions of web pages on that site, and I must have just hit all the ones with errors by chance.

    Well, it was either that or GE hadn't managed to apply Six-Sigma to their web development, and I knew that couldn't be true because I had just read it was used throughout the organization. ;)
    • Ever stop to think that maybe GE doesn't do the development on their website?
      • > Ever stop to think that maybe GE doesn't do the development on their website?

        I thought about that when I read the original post, but the quality has to come from the whole, right? So if they don't know how to get quality out-sourcing (web development, in this case), then they don't have quality. Simple as that. I think that "We bring good things to life" (GE's slogan) is a myth. I'm not saying that GE is a crappy company; they know how to deliver the numbers. I'm simply saying that GE is just good enough that they don't fall apart.
    • You left out the part where you started re-thinking your choice of refrigerator brand.

      Of course nowadays the presence of the GE (or RCA) logo on a product in no way guarantees that it was actually designed and/or manufactured by them.

  • while eight balls can yield better result? Tell your boss. :)
  • by Breakerofthings ( 321914 ) on Monday January 20, 2003 @10:11PM (#5123518)
    Soon, I will be implementing an automated monitoring system for our web servers, etc. I plan on using a Six Sigma approach to monitor response times. After all, they do vary with load, and Six Sigma is more adaptive than establishing a hard limit. Plus, it is trivial to implement (our web servers log to a Postgres server, so 90% of the work can be done with a well-written select). Six Sigma is given a bad name by MBA types; it really is an extremely useful technique. Don't let these naysayers discourage you! (But be sure you don't succumb to the 'every problem is a nail' dementia.)
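A minimal sketch of that adaptive-threshold idea, assuming the response times have already been pulled out of the server logs. The sample data and function name here are invented for illustration, and real response-time distributions are often heavy-tailed, so the normality assumption behind a sigma-based limit should be checked first:

```python
from statistics import mean, stdev

def control_limits(samples, k=6.0):
    """Lower/upper control limits k standard deviations from the mean.

    With k=6, an observation outside the limits counts as a "defect"
    in the Six Sigma sense, assuming roughly normal data.
    """
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

# Hypothetical response times (milliseconds) pulled from a web-server log.
response_ms = [120, 135, 118, 142, 130, 125, 138, 129, 133, 127]
lo, hi = control_limits(response_ms)
slow = [t for t in response_ms if not (lo <= t <= hi)]
print(f"limits: {lo:.1f}..{hi:.1f} ms, outliers: {slow}")
```

In practice the limits would be recomputed over a sliding window so the threshold tracks load, which is the "more adaptive than a hard limit" point above.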
  • especially when most companies would be lucky to achieve one sigma of successes.

    You also have to be seriously wary of a "concept" when its website has, as its first link, a place to buy logo tee-shirts. This sounds like a Dilbert cartoon.

    I hope the program doesn't include large banners that say "Quality".
  • The new Scott Adams Dilbert book ( The Way of the Weasel - and no, I'm not going to look it up on Amazon for you! ) has a small section on six sigma madness, if you need perspective.
  • by daviddennis ( 10926 ) <david@amazing.com> on Monday January 20, 2003 @10:43PM (#5123733) Homepage
    The owner of my company read the book 'The Six Sigma Way' and got very excited about it. He asked me to read it, and he asked the manufacturing guy to read it. Whenever we would have a problem, he would say stuff like "That's not Six Sigma" in email.

    I read about half of the book. It seemed to be a bunch of generalities, complete with meaningless charts and graphs, and I am not actually sure what implementing it would do in a concrete sense.

    The basic idea of doing a ground-up analysis of your business to determine what needs to be done to make things more reliable is, in my opinion, something every business should do periodically. However, I don't think giving it a trendy name and insisting on hiring expensive consultants is going to help quality as much as just, well, periodically scrutinizing your own processes and looking for ways to improve quality.

    One of the key things the book said is that if you don't have buy-in from management and employees, Six Sigma is useless. So if nobody in your company wants to drink the Kool-Aid, it's pretty worthless. But if people are enthusiastic about improving the way their company works, I'm not sure the Six Sigma framework says much that common sense doesn't.

    Hope this helps; I welcome dissent from the better informed.

    D
    • I agree that you have to have buy-in to a certain extent, but one of the things in Six Sigma is having all of your processes documented. That alone is very handy. Think of it as documentation for the job instead of the code. So if someone takes off, there is documentation on what to do when things go wrong. It is much more applicable to larger corporations, where there are "pockets of excellence," which also usually means there are "large bags of crap performance." If you can take the documented processes from the good groups and move them to the bad groups, then the company benefits.

      I also agree that Six-Sigma often translates as "give money to consultants".
  • Six Sigma (Score:4, Informative)

    by the eric conspiracy ( 20178 ) on Monday January 20, 2003 @11:02PM (#5123827)
    It's called process capability analysis in the textbooks.

    Six sigma arose out of a rigorous statistical method of analysis of manufacturing data. Basically you plot some measurement of some characteristic of a device being manufactured over a large number of samples. This provides a characterization of the manufacturing process. Hopefully the data you chose to gather is normally (in the formal sense) distributed and you can assign a mean and standard deviation (sigma) to the data. You also examine where the actual specification for the measured characteristic lies. A simple way to characterize the location of the specification is in terms of the number of standard deviations (sigmas) that the specification is away from the mean measurement. Six sigma is simply a statement that the specification is six standard deviations away from the mean. Measurements outside the six sigma range denote defects.
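Those mechanics can be sketched in a few lines of Python; the measurements and specification limits below are invented for illustration:

```python
from statistics import mean, stdev

def sigma_level(samples, lsl, usl):
    """Distance, in standard deviations, from the sample mean to the
    nearer specification limit (the "number of sigmas" described above).
    Cpk, the usual process-capability index, is this distance divided by 3.
    """
    m, s = mean(samples), stdev(samples)
    z = min(usl - m, m - lsl) / s
    return z, z / 3

# Invented measurements of a manufactured dimension with spec limits 9.0..11.0.
parts = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.0]
z, cpk = sigma_level(parts, lsl=9.0, usl=11.0)
print(f"sigma level: {z:.2f}, Cpk: {cpk:.2f}")
```

A "six sigma" process in this sense is one where the nearer spec limit sits at least six sample standard deviations from the mean.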

    All of this is very cut-and-dried stuff that has been used in manufacturing for decades. Six Sigma was "popularized" when Motorola adopted it as a quality goal for their products company-wide.

    How does this apply to software development? I don't have a clue. How the hell do you establish a process capability measurement for a software development process in a rigorous fashion? Good Luck.

    IMHO this is just the latest quality fad that is rippling through the management/consultant community. What next? Re-engineering rises from the ashes again?

    • I learned something today from your message.

      Our team (and many others) has been moved from being a "service" or a "support" to being a "capability". That's right. Instead of providing service, or supporting a customer, we're now a Systems Administration Capability. As in, "We're capable of meeting your needs, but we'd rather take a nap."

      Nice to see where that process capability lingo snuck in there.
  • I was only able to attend one day of the six-day Six Sigma training.

    1/6th Sigma, Leanest of the Lean.

    SuperGlue
  • Step 1 - Have a meeting to discuss how important Six Sigma is. Use buzzwords, diagrams and sleight-of-hand to confuse everyone in the room.
    Step 2 - Wait for an IT group to complete a project... For example, wait until the network staff has completed migration to a new network infrastructure.
    Step 3 - Claim real and imaginary (don't be afraid to invent the figures) cost savings as a Six Sigma savings.
  • by Derek ( 1525 ) on Monday January 20, 2003 @11:56PM (#5124111) Journal
    I work for a large software development company that is trying to implement "Six Sigma" and it is a joke. Management has all the best intentions, but applying the Six Sigma model to a process as complex and as poorly understood as software development is a waste of time. The reality is, when a person in our organization picks a "green belt" or "black belt" project, they already know what needs to be done. Nothing new is learned through the Six Sigma process, and it adds a non-trivial amount of work onto the organization. I keep hoping that it will fizzle and die out soon, but this program is lasting longer than most. Good luck with the IT-related project; I would be very interested to read a followup post if Six Sigma tells you anything you didn't already know.

    -Derek
    • I work for a large software development company that is trying to implement "Six Sigma" and it is a joke.

      There are several replies to the article mentioning that Six Sigma is for manufacturing processes, and other replies mentioning Six Sigma in the context of software development. If the software project managers are really hungry for a formalized process, why not implement the CMM-SW or CMMI [cmu.edu]? They are designed from the ground up for software and not for manufacturing.
    • I think that six sigma is overkill for just about every software project. That said, most managers I've dealt with start out to document and codify the current way things are being done and make minor corrections. The idea is that the way things are being done is good, and that it should now be consistent. By introducing consistency, you can balance resources and not rely on any one person, since things are now documented and can be repeated.

      This usually fails for existing projects since it's difficult to engineer something that has grown up organically.

      The successful managers don't waste time on these existing, organic projects. Instead, they start with a new project and allow for change. They drag in the users of the systems first and learn what they do. They document failures of the deployed system. They try to make using it more natural, so that the delivered system matches the process and silently enforces consistency.

      Unfortunately, six sigma or CMM are check boxes for many managers, since they are told by their bosses to get it. Real improvements are usually secondary. I try to drag along useful processes under the guise that if we don't do them, we aren't going to get that check box. To me, that can make quite a bit of difference. Manage your managers.

  • Six Sigma (Score:5, Informative)

    by bwt ( 68845 ) on Monday January 20, 2003 @11:59PM (#5124117)
    "Six Sigma" is a buzzword name given to the methods of W. Edwards Deming, who advocated a methodology that depends on having objective, numerical measures of results. In a nutshell, the methodology tells us to understand the causes of variability in product and process design, and to work systematically to identify, understand, and eliminate causes of variation that propagate into the metric for goodness that we choose. Beware trying to apply these techniques when the metric for goodness isn't even clear.

    These techniques are used heavily in machining and electronics because they provide an objective, rationally based methodology for improvement. If you are building car cylinders, you use these techniques because if your competitor has more perfectly round cylinders than you do, cars built with them get better mileage and are more reliable and durable.

    The Six Sigma methods are difficult to apply in settings where an objective numerical measure of "goodness" isn't available. This is often the case in software design, when features and "ease of use" are the objective. One area where it can be applied quite well in software is in performance monitoring and tuning. Run duration is quantifiable, repeatable, and "faster is better" translates run times into a raw quality measure.
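    The run-duration case is the one place where instrumenting is genuinely easy. A minimal sketch, where the workload function and sample count are placeholders standing in for the real job being tuned:

```python
import statistics
import time

def timed_runs(fn, n=50):
    """Run fn n times and return the per-run durations in seconds."""
    durations = []
    for _ in range(n):
        start = time.perf_counter()
        fn()
        durations.append(time.perf_counter() - start)
    return durations

def workload():
    # Stand-in for the job being tuned
    sum(i * i for i in range(10_000))

runs = timed_runs(workload)
mean = statistics.mean(runs)
sigma = statistics.stdev(runs)
print(f"mean = {mean * 1e3:.2f} ms, sigma = {sigma * 1e3:.2f} ms")
```

    A tuning change "works" in this framework if it shifts the mean down without inflating the variance.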

    Beware of trying to use statistical methods on metrics that aren't objective or aren't easily quantified. Beware of using bug or defect counts unless the failure mode is extremely well defined. Crap like "lines of code" or "number of methods" may be objective, but there is no way to translate them into an overall measure of quality.

    Statistical techniques are a very powerful tool, but they are just that -- a tool. Just like a hammer isn't useful if you need to sand wood, don't expect "Six Sigma" to be the solution to every quality problem.
  • Like TQM, kanbans (Japanese for note card or ticket), and a host of prior management fads, six sigma will undergo something similar to a bubble. They are all adopted by knowledgeable and good management teams at the beginning, and because they really are good ideas they provide excellent results. However, following this, more and more marginal management teams begin adopting them as the great white hope that will transform their company or department from a flailing failure into an efficient production force. This too shall pass; give it a few years and we will have some new and better management gimmick that will revolutionize the art of getting people to produce as much as possible.
    IMHO, great managers are born and not made. There are just some people who have the right mix of charisma, leadership, and care for their employees to cajole the best mix of production with morale, and no amount of schooling will take a bad manager and turn them into a good one.
    Don't get me wrong, six sigma is a good idea; just like all of its predecessors it certainly has a place in many operations, but it is not a panacea that will solve all of our current management problems. Anyway, I have rambled enough for the night.
  • First off I put together a summary of six sigma [attbi.com] a few years back which might be helpful (especially for the "what is six-sigma" lurkers). A lot of the stuff in six-sigma can apply to IT:
    The DMAIC model outlines a software construction process.
    From a high-level project management standpoint, bugs and non-included features in a program are "defects," and the defect reduction models would apply.
    The unity between six-sigma management structures and corporate management structures would apply.
    The focus on the customer would apply ...

    So yes, six-sigma works perfectly well for IT work. But instead of measuring the number of defective widgets per million, you are measuring the number of defective lines of code per million, or the number of requirements that failed to be met per million...

    As far as stupid management philosophies go, this one is relatively harmless. TQM is actually excellent, but a great deal of the really good stuff in TQM has been taken out of six-sigma.

    So in answer to your question I'd apply it to an important software project you are working on and track bugs in the program. You'll get to find out what your customers really want rather than what you think they want, or what you promised to deliver. You'll get to find out how over the years the software has been meeting their needs better (or not)...

    Anyway, cheer up; this isn't so bad. You are actually going to learn manager-speak to get time to do projects right.

    • I like your summary of six sigma, but you fall into the trap that many do at the "measure" stage. It is a fact of life that good metrics aren't always available. Forcing the use of bad metrics so that your quality improvement approach will fit nicely into a six sigma framework is a good way to drive insane behavior that actually guarantees poor quality.

      But instead of measuring the number of defective widgets per million, you are measuring the number of defective lines of code per million, or the number of requirements that failed to be met per million...

      As I said above, using metrics like these will result in waste because they A) are unknowable and non-objective, B) don't actually translate to "quality," and C) drive behaviour aimed at manipulating meaningless numbers instead of improving the product.

      A metric based on lines of code is not likely to have any relation to actual end-user perception of quality. Writing clear, maintainable code often involves writing MORE lines of code. This reduces waste over the full product lifecycle, yet the metric punishes it. The metric is also not very well defined: if the program doesn't do X, how many lines of code are defective? All of them? None of them? The number that I change to add it? If I refactor my code to add the new feature and change 500 lines of code, is that always worse than hacking in a 20-line fix that is much more fragile and nobody but me understands?

      Anything based on "number of requirements" is even worse. I have never once worked on a software project where requirements were sufficiently well defined and static to base an objective measurement on. This "problem" is inherent in software development because an application does different things for different people, whose understanding of what the application is and could do is constantly evolving. Two people will count the number of requirements not met differently, even for the same "bug". If you change the metric to "documented requirements" not met, then the most common answer for bugs is (and should be) "zero". If you count undocumented requirements, then people will always answer "one" to minimize the defect metric, and your requirements document will evolve into a bug tracking sheet.
      • Excellent points regarding the metrics I suggested. My point was much more surface-level, about the fact that IT work could utilize six sigma methods, and the correction is well taken that both examples I gave are quite poor in showing this.

        As for the number of requirements, here I somewhat disagree. What I was considering is a scenario like this:

        1) All end users make up a wish list
        2) All wish lists are combined into a complete set of requirements
        3) As time progresses end users and only end users can by unanimous consent drop requirements
        4) As time progresses any end user can add requirements

        It's this set that we track bugs against. Clearly:

        a) Some requirements contradict other requirements
        b) New requirements get added
        c) Some requirements get dropped
        d) Many requirements are cut in very early stages as "out of scope"

        I think with a program over a period of years it gets pretty close to the "it does what I want it to do" stage. There should be a drop off over time. I can't imagine a software system getting to 6-sigma (which would probably mean something like all but 1 requirement) but I can see an upgrade effort aimed at going from 2 to 3 sigma.

        ______________

        I also completely concur with you that this system is likely not to prioritize well. "What gets rewarded is what gets done." I'd be much happier with a TQM-type approach. I agree with Deming completely that management by objective, quotas, etc. tends to be destructive. Software defects are most often caused by

        1) Poor leadership in management and executive positions in terms of trade offs (particularly time related)

        2) Unwillingness to train staff

        3) Poor choice of tools

        IMHO Six Sigma in practice is likely to do a great job on #3; and make #1 slightly worse. Real six sigma would attack all 3; but a low level IT guy isn't going to be in those meetings.

  • Break it down. (Score:4, Interesting)

    by pete-classic ( 75983 ) <hutnick@gmail.com> on Tuesday January 21, 2003 @01:36AM (#5124654) Homepage Journal
    Okay, your department does stuff, right? Break that stuff up into categories. Look at where things can (and do) go wrong. Try to figure out why. Figure out a way to measure success vs. failure. Then apply those measurements.

    Some concrete examples.

    What percentage of updates to "internet facing" services are applied within a set time frame? Maybe you decide they need to be applied within four business hours, or maybe 12 hours. When a patch is released it is a "moment of truth."

    What percentage of help desk calls are resolved (i.e. the employee is back to work) within x minutes (30? 60?). Why aren't they all?
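    Once tickets are logged with resolution times, the help desk metric above is a one-liner to compute. The ticket data here is invented, and the 60-minute threshold is just the example target from above:

```python
# Hypothetical ticket resolution times, in minutes
resolution_minutes = [12, 45, 30, 95, 22, 61, 18, 34, 120, 27, 40, 55]

threshold = 60   # service-level target: employee back to work within an hour

within = sum(1 for m in resolution_minutes if m <= threshold)
rate = within / len(resolution_minutes)
print(f"{rate:.1%} of calls resolved within {threshold} minutes")

# The Six Sigma question is then *why* the rest missed the target
misses = [m for m in resolution_minutes if m > threshold]
print(f"missed the target: {misses}")
```

    The percentage itself is the easy part; the payoff is in chasing down the misses.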

    Six Sigma is full of buzzwords, but IMO it has some great potential. If I were a guy "in the trenches" (sadly, I'm out of the technical field right now) I'd be taking advantage of a Six Sigma push to help my boss (and his boss) feel my "chronic pain." I.e. "The reason we can't resolve 99.992% of help desk calls in 60 min is that we don't have the parts we are supposed to have." or whatever.

    PS: Feel free to show this to your boss. Make sure he knows my email address is peter at fpcc net ;-) I'm available for consulting, or save a bundle and hire me outright!

    -Peter
    • "The reason we can't resolve 99.992% of help desk calls in 60 min is that we don't have the parts we are supposed to have." or whatever.

      I've used the same tactic...it's effective to a point. The worse the manager, the more likely they will foot drag on these types of requests or lead you into 1000 what-if type questions that lead nowhere.

      The better the manager the less likely it's necessary to do this because they already know. If they can't do it (budget), they'll say so but only after they have gone on a money hunt.

      PS: Feel free to show this to your boss. Make sure he knows my email address is peter at fpcc net ;-) I'm available for consulting, or save a bundle and hire me outright!

      Same here...Washington DC area (local), national, or international; active.consulting at metamark com .

  • There are a couple of good articles on the web concerning NASA's software development process. All in all, it is very similar to Six Sigma, with lots of statistical self-evaluation.

    The bottom line is that this sort of method does work, and can lead to impressive software reliability. But, it also raises the cost per line of code by an order of magnitude or more.
  • We've done six sigma (Score:2, Interesting)

    by cassidyc ( 167044 )
    As part of a companywide drive (Brought to you by the letters E and G though not necessarily in that order) where we were told that not completing a six sigma project would "affect" our performance review.

    Anyways, come the end of March last year (six sigma deadline, and also our stop-coding, software test, and release date), we all stopped production to complete our six sigma projects. And for the most part we swindled it something awful. We picked changes and additions to the software that we had performed ages ago. Our premise being that before the change our software was 0% compliant (worse than six sigma) and now with this functionality we were 100% compliant (better than six sigma). So we all did that, passed a test on basic statistics, and we all got our little certificates and a pat on the back.

    And with the "millions" saved through our six sigma projects, we still couldn't afford any training last year.

    It was the rollout and attitude from management and the "six sigma blackbelts" that was the worst part. Six sigma was clearly aimed at production and manufacturing and there was no leeway for software engineering where it is a very loose fit at best.

    I hear rumours that "design for six sigma" is better suited to the software industry, but that would require more training...

    So as you can see, not too impressed with six sigma.
  • by NigelJohnstone ( 242811 ) on Tuesday January 21, 2003 @06:10AM (#5125576)
    1. You have a problem
    2. Collect stats about it
    3. Analyse those stats to identify the problem
    4. Some sort of magic goes here to get from problem to solution
    5. Apply solution
    6. Collect stats again
    7. Calculate saving and claim it as a 6Sigma saving
    8. Claim that the solution can be applied elsewhere for some vague future saving

    The problem is that the saving comes from applying the solution (Step 4), not from the process of Six Sigma.

    Step 4 is done by the skilled engineers who know what solution fixes what problem, not the Six Sigma MBA who has no special expertise in the process.

    If the fix can't be applied because the guy is still collecting statistics in Step 2, then he is causing the company damage by delaying the fix.

    His own salary is also a cost and the load he puts on the skilled engineers while trying to 'learn' their skill also costs money.

    In order to obtain statistics, you have to know what the possible causes are, so in the real world they go to the engineer who already knows the problem and contrive a set of stats to collect that proves that solution.
    Then there's step 8, claim it will be re-used.
    If you look at the examples Six Sigma people give, its stuff like a leaking airconditioner pipe.

    If you have a leaking air-conditioning pipe, you hire a plumber or buy a book on plumbing; you don't look through Six Sigma projects looking for one that might turn up useful information.
    Quite simply, the chances of someone re-using this information are negligible, and it's the fix in step 4 that would be reused, which isn't a Six Sigma step.

    So no, it's just fluff to keep middle managers employed. That is why every company that uses it continues to increase costs. GE's increased profit came from increased *sales* and economies of scale, not Six Sigma.

  • Misapplication (Score:2, Insightful)

    For a process to be meaningfully based upon statistics, one must be dealing with something which is meaningfully quantifiable, and which occurs in a sufficient number of comparably quantifiable instances to be statistically significant.

    What about an IT process is meaningfully quantifiable?

    What about an IT process occurs in statistically significant numbers of comparably quantifiable instances?

    What do you have a million of, to see where you stand relative to 3.4 defects? Transaction processing response times. What else? Not much that I can think of.

    • Well, now, you see, you've hit the nail on the head.

      We got around this by implying that new functionality is quantifiable.

      If we didn't have the functionality, then we had 1 million defects out of 1 million; every time you tried to use the nonexistent functionality, it wasn't there.

      So when we implemented the function, we were at 0 defects out of 1 million; now every time you use the functionality it is there. It doesn't matter if it works...

      And yet the "Quality" never improved, which I believe is what six sigma is all about.
  • I mean, if they are sending you on courses that no-one can figure out how they apply to the work you do... ;-)

    The answer would presumably depend on the size of your IT department. If you are vast, receiving thousands of calls each day, then sure; statistical analysis could perhaps identify weaknesses in the processes. If you aren't vast, then I can't see it being much use.

    Sounds like you have the hammer problem: if all you have is a hammer, then everything looks like a nail. If you are a statistician, then you approach everything as a statistical problem.
  • If Six Sigma allows 3.4 errors per million opportunities... how do you count the opportunities? I don't see a good way out of this... It's like using more dietary fiber to prevent automobile accidents... just plain silly.

    If you count runtime execution, and write the following code:
    for i := 1 to 1000000 do
      WriteLn('Hello, World!');

    How could this fail 3 times in its lifetime?

    Would it be ok for an Open Dialog to fail that often?
    Would it be ok for a Save option to fail that often?
    Would it be ok for any common option to fail that often?

    If you count source code instead of execution: if the program is perfect, with zero defects, but is hard to use, is it good enough?

    The only acceptable metric would be how often the final user doesn't get what they need or expect... and that is one hard-to-measure quantity, but I believe it's the only one worth measuring.

    --Mike--

  • Since the defect rate is the ratio of defects to "something", just define the "something" in terms you can live with. I suggest "defects per bytes of the binary program file". Then statically link everything to make the code huge (if you're working on Windows, compile with .Net to ensure bloated code).

    To lower the defect rate, just add more code bloat so the denominator gets larger. Problem solved in a way that makes the PHB's think you're really making progress.

    Now that I think about it, this explains so much . . .
  • At the very large software company I work for they are implementing this (although I'm a bit worried about the difference in what they're calling it). Our head-honcho, Bill, insists that it will make our software rule for eternity. And this will reap great dividends for our followers in the form of monetary and productivity gains.
  • Here's a hint. You want data. Lots and lots of beautiful data. Continuous is better, but discrete is ok if you have enough. Here's another hint. A big part of Six Sigma is reducing variation. Do you have a process that
    • is expensive, or
    • annoys your customer and
    • is highly variable, and
    • is easy to instrument?
    Projects I've worked on in the last year include LDAP server response times, OS image size, OS image development time, application server capacity planning, database server consolidation...
  • by TBone ( 5692 ) on Tuesday January 21, 2003 @01:27PM (#5127779) Homepage

    As many of the people have already commented, your responses are going to break down into one of two types:

    • Tech people: WTF, I don't do business process crap
    • Management people: Six Sigma is a useful tool


    The question is, what exactly are you hoping to make use of 6S for? Six Sigma is a methodology for analyzing business processes - customer interactions, process flow, things like that. It's not a process for developing programs or websites or business tasks.

    The majority of the audience here on /. (judging from comments already posted) probably does not know the difference between "business processes" and "business tasks". Tasks should be done in the manner which is most effective. However, the framework around those tasks is something that can be analyzed and improved...

    • You don't apply Six Sigma to your coders, you apply it to the process by which the business comes up with requirements and specifications, which are then handed off to Project managers, who work with the developers and endusers to come up with the product.
    • You don't apply Six Sigma to the people in your call center; you apply it to the process by which they answer calls, then collect and present the data the customer needs, then handle the call themselves or hand off the call to an area which has more expertise
    • You don't apply Six Sigma to website development, you apply it to the process of having testers or endusers use the site to perform tasks that the site is designed to handle, then give feedback to the developers, who fix the site.


    If you are an end-of-the-line technician/programmer/coder/etc, then Six Sigma will not necessarily be of help to you. Six Sigma will not help your company with the people that do the business. However, if your position gives you access to be a project manager, or department leader, or something with some kind of management-level overview, where you can influence the way various groups work with each other, then you will find the methodologies helpful.

  • As some have pointed out, statistical measurements can be useful in IT in the proper situation. Here's a concrete example.

    In a particular national network, all DNS requests went to one of two data centers, one on the East Coast and one on the West. During an exercise which measured the response time for a particular end-user service which included a DNS lookup, we discovered that the distributions of response times for DNS lookup for the two data centers were different. Further investigation (had to write some custom data collection software) showed that the problem was that about 1% of the DNS requests sent to the West Coast data center produced no response. The East Coast center produced no response in about 0.1% of the attempts, which was consistent with the known packet-loss rate in the backbone network.

    I believe the problem turned out to be a faulty load balancing box sitting in front of the multiple DNS servers. As a result of the episode, however, the service-level agreement for the DNS service was expanded to include response rate and response time, as well as server availability.
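    A comparison like the one described boils down to contrasting no-response rates between the two centers. A sketch with invented counts, in the spirit of the incident above:

```python
# Hypothetical query logs (counts are assumptions, echoing the rates above)
centers = {
    "west": {"queries": 500_000, "no_response": 5_000},   # ~1% silent failures
    "east": {"queries": 500_000, "no_response": 500},     # ~0.1%, matching backbone loss
}

for name, c in centers.items():
    rate = c["no_response"] / c["queries"]
    print(f"{name}: {rate:.3%} of DNS queries got no response")

# A ten-fold gap between supposedly identical data centers is the signal
# that something local (here, the load balancer) is eating queries.
ratio = (centers["west"]["no_response"] / centers["west"]["queries"]) / \
        (centers["east"]["no_response"] / centers["east"]["queries"])
print(f"west fails {ratio:.0f}x more often than east")
```

    The statistics don't identify the faulty box; they just tell you unambiguously which haystack to search.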

  • 6 sigma works great for some things, but it is expensive to reach. Do not lose sight of the fact that you don't always need such aggressive goals. There is nothing wrong with deciding that 3 sigma is good enough for you, and a lot cheaper to reach.

    Remember the goal is to reduce costs. If you spend 6 billion to reach 5 sigma, and save 15 million per year, you will never pay off your investment. (Unless your company can honestly say that their products will still be in demand 400 years from now, something history doesn't support.) When talking to management, do not forget that costs are what they should be thinking about. Those who understand 6 sigma will readily agree with you when you suggest that you don't have to reach as high.

    That said, do not try to kill legitimate projects. Real money can be saved by following 6 sigma methods. Perhaps not everything will apply to you, but take the good and make something work. There is no need for not-invented-here syndrome.

  • I worked for GE IT Solutions, and even we took six sigma in IT with a grain of salt. Basically, some of the methodology could be applied, but the actual goal of six sigma could never be obtained; IT systems are just too complex.
  • As others have pointed out, six sigma is an unrealistic goal on the soft targets. It's a laudable goal when you're manufacturing widgets (e.g., if you're running a soft drink production line and crank out a million cans a month, having only 3 manufacturing defects is a reasonable (if high) goal), but WTF does it mean when you're talking about a help desk that has 1000 widely varying tickets per month?

    On the other hand, there are some concrete numbers that you can use a six-sigma standard on.

    Uptime: over a year, your total downtime should be under 107 seconds. That's right, less than two minutes of downtime per year, for all causes! Since no system can reboot that fast, that means redundancy with fast cutovers.

    Connectivity. Again, over a year your total network downtime should be under 107 seconds. That means multiple upstream providers, multiple gateways, etc.
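    The 107-second figure falls straight out of applying the conventional Six Sigma defect rate to a year's worth of seconds:

```python
defect_rate = 3.4e-6   # 3.4 defects per million opportunities

seconds_per_year = 365.25 * 24 * 3600
downtime_budget = defect_rate * seconds_per_year
print(f"allowed downtime: {downtime_budget:.0f} seconds/year")   # ~107 seconds
```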

    Virus protection. Out of every million virus-laden messages (say, a typical afternoon when the latest MSTD is in full swing :-) only three should make it past your virus protection measures.

    I'm sure you can come up with your own metrics. These are also far more useful measures than the "process driven" ones, since they're what the IT "customers" really care about. Think about the analogy to a commercial product: when you get an empty can of soda, you don't care how well the can was manufactured or how great the unseen soda would have tasted; you only care about the final product.
  • Have a look at the discussion forum on iSixSigma.com as well. The general consensus is that Six Sigma will only be successful in a company that already has processes in place. If processes are in place, then these must be measured and improved. Steer clear of opinions about applying Six Sigma to the actual software: the number of defects cannot be determined, because an application has unknown bugs as well as known ones.
  • I worked on an industrial robot many years ago - its purpose was to cut components to a specified length.

    Due to the resolution of the stepping motor that controlled the cutter, the resulting component lengths, when plotted, formed a uniform distribution from N-(delta/2) to N+(delta/2), where N was the desired length and delta the minimum step distance. Within that range, all lengths were equally probable.

    The customer got angry, because he needed a less than 1ppm out of tolerance rate (tolerance was about 5*delta), and by his calculations we failed.

    His calculations? Measure 100 parts, compute sigma, is 6*sigma < tolerance?

    The problem is, the relation P(|x - mean| > 6*sigma) < 10^-6 only holds true for a Gaussian (a.k.a. normal) distribution (in other words, the probability that x lies more than 6 sigma from the mean is less than one part per million only for a Gaussian curve).

    Asserting that if you meet 6 sigma, you will be under 1ppm only holds true for a Gaussian curve.

    So applying this to something without first demonstrating that the something follows a Gaussian curve is WRONG.

    In my case, the failure rate was so very much less than 1ppm it was pathetic, but since this person did not understand the statistical relationships he was trying to apply, he caused us no end of grief.
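    The parent's point is easy to demonstrate by simulation. A sketch (target, step size, and tolerance are made-up stand-ins for the robot's actual numbers): a uniform distribution has bounded support, so its true out-of-tolerance rate is exactly zero for any tolerance wider than half the step, no matter what Gaussian tail arithmetic on the measured sigma would predict.

```python
import math
import random
import statistics

random.seed(42)

delta = 1.0        # minimum step distance of the cutter (units arbitrary)
target = 100.0     # desired component length
tolerance = 2.0 * delta   # hypothetical spec tolerance (assumption)

# Component lengths are uniform on [target - delta/2, target + delta/2]
parts = [random.uniform(target - delta / 2, target + delta / 2)
         for _ in range(100)]
sigma = statistics.stdev(parts)   # the customer's "measure 100 parts" step

# Actual out-of-tolerance rate: bounded support means no part can ever
# fall outside +/- delta/2 of the target, so zero defects here.
defects = sum(1 for x in parts if abs(x - target) > tolerance)
print(f"sample sigma = {sigma:.3f} (theory: delta/sqrt(12) = {delta / math.sqrt(12):.3f})")
print(f"actual defects beyond tolerance = {defects}")

# What Gaussian tail math would claim for a spec 6 sigma from the mean:
p = 2 * (1 - 0.5 * (1 + math.erf(6 / math.sqrt(2))))
print(f"two-sided Gaussian tail beyond 6 sigma ~ {p:.1e}")
```

    The sample sigma looks perfectly ordinary, which is exactly why plugging it into Gaussian-only formulas misleads.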
