
Measuring Coder Performance?

An Anonymous Coward asks: "Our company is a small web development firm that focuses on dynamic, database-driven websites. We are thinking about creating a bonus/recognition program for our coders based on their performance. The question is, how exactly does one measure programmer performance? We don't want to use the number of lines of code per day (why encourage 10 lines when 5 will do?). We don't want to use the time it took to finish the project, since lack of bugs, speed, security, and so on are just as important as time. Has anyone had experience with a measurement system they thought was a fair and accurate measure of the quality and speed of coding?" While I'm sure some standard can be applied to answer this question, I don't see it being extremely accurate. Some things just can't be quantified; I'd be interested in knowing whether you all think this is one of them.
  • by Anonymous Coward on Monday March 19, 2001 @01:33PM (#353360)
    You are looking for an objective way to measure a subjective thing. Good luck.

    My advice: fuggedaboudit. Use peer review. Allow developers to nominate each other for recognition.

    That's the only kind of recognition that counts, anyway.
  • Posted by xxmacdaddyxx:

    Screw that! Allow developers to secretly nominate each other for public flogging - it won't help productivity, but it's way more satisfying...
  • As another writer said, peer review is definitely the way to go. If you were a large company and you had a lot to automate and justify, then maybe you would have to turn to metrics, but you don't.

    Look at it this way: If your metrics contradict what your small band of hackers know is right, your hackers will be unhappy and quit. In other words, metrics would be, at best, as good as peer review, and at worst, they might do a lot of damage.

  • > Player 1: "I can code that in 50 lines."

    > Player 2: "I can code that in 20 lines."

    Phil Carmody: "I can code that in one prime number!"

    --
  • Decide what behaviors you want to reward. Examples:
    • Peer recognition. You want people who are respected by their teammates.
    • Role model. Do you have certain processes or paperwork that need to get done as part of projects? Do you have some people that do all the paperwork, and others that always skip it?
    • Being well rounded. Recognition often goes to coders who also have the communication skills to talk to the customer. Or who can write up a proposal. Or run a brown-bag lunch on a new tech topic for their teammates.
    • Hitting deadlines is good; don't blow it off just because it might encourage bad code. If your process is in place, sloppy code won't pass your QA, and your developers will know this. They need to hit the deadline AND pass inspection.
  • Forget analyzing code or time spent working or any of those, and use a decent "real-life" parameter. Measure how much coffee the coders drink!

    If you want a rough outline, just visually monitor the coders. Whoever *seems* to be drinking the most coffee wins.

    Of course, if you're into statistics (and what self-respecting geek isn't?), you're going to have to go high-tech. Each coder is issued a magnetic-swipe card that uniquely identifies them-- to the coffee machines! You'll instantly be able to weed the slackers from the true workers, and also help identify those with "problems" early. If you set it up right, you can even determine who works better at what part of the day, and prioritize access to the coffee accordingly.

    Hell, this seems like such a good idea that we've got to patent it! Anyone know of any prior art?
  • Your post mentions several typical computer-geek or engineer solutions, i.e., measure productivity somehow objectively. Programming productivity, like most engineering productivity, isn't something that can be easily measured objectively. The problem is compounded by the fact that you are a small shop. The measurement criteria have to be constantly evaluated and adjusted themselves, and that takes time and manpower that just isn't available unless you are a big shop.

    Counter examples to most "objective" measurement schemes are easy to create.

    Bonus based on number of bugs fixed? - see the old Dilbert strips on "I'm going to code me a new car today" by inserting bugs to be fixed.

    Bonus based on lack of bugs in code? Testers and coders quickly collaborate to fix minor bugs via water-cooler talk rather than through the bug-tracking system. Bugs also tend to get reported against the user-interface people rather than the deeper-level logic people, even when the deeper logic is the root cause and the interface coders end up coding workarounds.

    Bonus based on lines of code produced? What about the coder who takes a large bunch of code with hairy logic and cuts the number of lines down to a tenth? Was that programmer negatively productive? And what about the original programmer, who artificially inflated his lines-of-code count (or function points - they are only slightly harder to inflate)?

    The solution - especially in a small shop - is a manager who knows the people and actually does management. They will know who is churning out good stuff and who is slowing progress. This means a good manager - not a pointy-haired boss. Of course, this isn't a popular thing with techies (myself included), but experience has shown me that it is true.
  • by gbnewby ( 74175 ) on Monday March 19, 2001 @04:28PM (#353367) Homepage
    Read Fred Brooks's "The Mythical Man-Month." Read Beck's "Extreme Programming Explained." Then turn around and read Taylor's (1911) "The Principles of Scientific Management."

    The lesson learned is that programming is more like art than like an industry. "Regular" measures of productivity don't apply well, nor do standard reward systems. I agree with AC: take a page from sourceforge.net [sourceforge.net] and other locations, and implement peer-review. Bug tracking etc. are other approaches.

    The biggest danger I see is turning the reward system into a popularity contest. Careful that the evaluation measures are consistent with what you want (e.g., if there's no customer interaction, then things like working hours and dress shouldn't play a role. But if there IS customer interaction... get the idea?).

  • Player 1: "I can code that in 50 lines."

    Player 2: "I can code that in 20 lines."

    Player 1: "I can code that in 10 lines."

    Player 2: "I can code that in 2 lines."

    Player 1: "Hmmm. Code that program."

    Player 2: (Aside) "Wait-- did you mean in Perl, or RPG?"
  • Personally, I think you'd be better off not doing this calculation, because you have to pay a person to work all these things out. I'd rather have that person looking for bugs/testing the code.
  • Probably won't work, but I thought the idea was funny anyway. :)

    If you can divide the work into smaller tasks, you could announce these tasks (say) a week before they start. Then ask the developers to bid on them, giving a price in the number of hours they think it will take them.

    The lowest bidder gets the task, and you'll have some (sloppy) way to tell how difficult the various developers thought the task was. Then you can look afterwards at how long it actually took, and base the reward points on that.

    Yes, I know this is *not* a very precise method. But at least it looks objective. And to me (not working that way) it looks like fun. :)
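
    A minimal sketch in C of how the reward points above might be computed. The scoring rule (full marks for beating your own bid, scaled down linearly as the overrun grows) is an assumption; the post leaves it open.

        /* Hypothetical reward rule, assumed for illustration: full marks
         * for coming in on or under your own bid, scaled down linearly
         * as the overrun grows. */
        #include <stdio.h>

        double reward_points(double bid_hours, double actual_hours)
        {
            if (actual_hours <= bid_hours)
                return 100.0;                        /* on or under bid */
            return 100.0 * bid_hours / actual_hours; /* penalize overruns */
        }

        int main(void)
        {
            /* A developer bid 20 hours; the task actually took 25. */
            printf("points: %.1f\n", reward_points(20.0, 25.0)); /* 80.0 */
            return 0;
        }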

  • I presume you are working with outside clients. If that is the case, you should probably be creating design documents and defining acceptance criteria in them.

    Acceptance criteria should include delivery date, important functionality, stability, etc.

    So use this as your metric. Determine some weighting for each of these criteria and then get the customer and the manager to rate performance at the end.

    Did they deliver early? In that case, they get over 100% for coding speed. Is the application running at an acceptable speed (but not particularly fast)? Then they get all the points for application speed. Does it have significant bugs (but was still accepted by the client)? If so, they lose points there.

    After all, if you are pleasing your customers (and your managers), can you really ask anything more from your coders?

    In addition, it provides feedback for where the coder is having problems and how they can improve.
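
    A small sketch in C of such a weighted score. The criteria, weights, and ratings here are made-up examples; the comment above leaves the exact weighting to the reader.

        #include <stdio.h>

        struct criterion {
            const char *name;
            double weight;  /* fraction of the total; weights sum to 1.0 */
            double rating;  /* customer/manager rating; may exceed 100,
                               e.g. for early delivery */
        };

        int main(void)
        {
            struct criterion c[] = {
                { "delivery date", 0.30, 110.0 }, /* delivered early   */
                { "functionality", 0.40, 100.0 }, /* all must-haves in */
                { "stability",     0.30,  85.0 }, /* a few known bugs  */
            };
            double score = 0.0;
            for (size_t i = 0; i < sizeof c / sizeof c[0]; i++)
                score += c[i].weight * c[i].rating;
            printf("weighted score: %.1f\n", score); /* 98.5 */
            return 0;
        }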

    --

  • This is exactly how this sort of thing would have worked with my old team:
    "I'm almost finished with the new module, but I need to pad the lines-of-code so I can get my bonus."
    "Oh, I would fix that bug, but it's only a 1-line fix."

    Instead, focus on projects that went well. Ask peers and team leads who should get a bigger bonus -- they'll know and they'll be right.

    Also, pay a bonus to everyone you don't want to leave.

  • Quoting from Peopleware [fatbrain.com], second edition, chapter 28 "Competition", under the "Teamicide Re-visited" heading (pg. 183 in my copy):

    "Internal competition has the direct effect of making coaching difficult or impossible. Since coaching is essential to the workings of a healthy team, anything the manager does to increase competition within a team has to be viewed as teamicidale."

    DeMarco & Lister quote W. Edwards Deming's "14 Points", where point 12B says that annual or merit ratings and management by objectives should be abolished. Alfie Kohn [alfiekohn.org]'s work focuses on the harm caused by the "Do this and you'll get this" mentality. Joel Spolsky's essay, "Incentive Pay Considered Harmful [editthispage.com]" is a quick read on the subject.

  • The only problem with the advice mentioned above is that the quantification is of a negative - that is, "one bug/pass is better than three".

    This isn't going to help when it comes to granting bonuses - I mean, who'd like to hear "Great job, here's an extra $3k... oops, found a bug, give us $1k back..." :)

    Using function points is a good idea, but as with most metrics here: who decides what weighting a function point gets? It's all subjective...

  • Just a thought, but staff should be encouraged to share knowledge with other staff, and I can see a scheme like this encouraging people not to help each other out, talk through how to do something, etc. Any bonuses for hard work should be given at promotion/pay-review time, and kept in confidence. Just my 5c
  • Peer review - This one has already been mentioned. IMHO this should be informal, developers are not grading each other. Rather, make sure that they are all code reviewing each other's work. Developers should take pride in their work and put their names on code they write and contribute to. (This makes it easier to get a feel for the quality of their code and also helps developers and managers know who to turn to when they have problems.)

    Ownership - Make sure that developers have clear ownership boundaries. They work on other pieces, of course, but they are primarily responsible for delivering the component that they own. If a piece falls behind or is low quality you know who is responsible. You must still take into account other factors. If an inexperienced developer falls behind because he took on too much, that doesn't necessarily reflect badly on him.

    Set goals - Developers should set their own goals (guided by their manager(s)) and be judged by how well they accomplish them.

    Ranking - When all is said and done, management should rank the developers and give out bonuses accordingly. The best get the highest bonuses; it goes down from there. Ranking is easier than you think, especially when you have a variety of data to use. This ranking should not be published!

    Spot bonuses - If you can, give out spot bonuses when someone is doing particularly good work. This is purely a motivation thing. Again, don't make it public or make a big deal over it. You don't want to demotivate everybody that didn't get the bonus.

    There are many traps and pitfalls. It's very subjective; you need to work especially hard to ensure that it does not become a popularity contest.

  • This is harder than you make it sound.

    The number of bugs will be higher for proactive developers who take on more work. Plus, the time it takes to fix a bug varies greatly with the underlying code. Some bugs that sound trivial are actually quite hard to fix, while big features can be done quickly. And what happens when developer A writes tons of bad code, moves on to something else, and developer B gets to clean up?

    Bug tracking software will give you a small part of the overall picture, but don't rely on it entirely.
  • I recommend that only bug-free programs get any recognition at all. The reason is that bugs usually increase development time and cost a lot, and also greatly decrease the value of the program in the customer's eyes.

    Second point: such a bonus system can lead to a situation where people help each other less than before (especially if there are financial benefits), which is of course counterproductive. Maybe you can reduce this by not having valuable prizes... let's say just a free lunch or so - something that increases the reputation of the coder but not his wallet.
  • You can measure the number of bugs per developer

    You mean the number of bugs they fix, or the number of bugs they cause?
    ---

  • I once made a bet that I could program my functional unit in a style where every function (in C) had only _one line_ and that had to be a return statement

    Someone was actually foolish enough to bet money that you could not do this? No fair betting on technical issues with PHBs...
    ---

  • This is only half sarcastic.

    Run an internal slashcode (or similar) system to allow your newbie employees to ask the experienced ones for help. Theoretically, those who prove the most helpful should gain karma in this system. Use this karma as the basis for raises, promotions, etc.

    Watch out for people who spend time karma-whoring instead of doing their real jobs, tho.
    ---

  • I would give some (perhaps most) weight to achieving objectives as a team. Then the team can discuss how a given incentive should be shared among its members.

  • What you say is fine - and it actually fits in with my original reply - what you did can be seen as part of the QA/testing phase. This is, however, one of the weaknesses I see in FPA (remember, I said I did not agree with all of it). All phases of a program have to be analysed, from initial concept and design onwards - yes, that falls into the "peer review" camp. This has to be done to ensure that the FPs assigned are realistic and that excess 'logic' has not been built into a flawed design.

    No theory / metrics / methodology is perfect. You always have to use whatever is appropriate to the situation, which will depend on the size and importance of the program being developed. There's an old management saying that "you can't manage what you can't measure". FPs are only one way of giving management something to measure - they are not a complete solution in their own right - which is kinda what I meant in part of the original reply.
  • by rednax ( 305483 ) on Monday March 19, 2001 @01:37PM (#353384)
    This question has bugged (if you'll pardon the pun) developers and development managers since Grace Hopper was pulling moths out from between the relays! You are right that lines of code is not a good measure, as it encourages verbose coding; nor is time taken to complete the code suitable, as you must take quality into account.

    In short, you need to look at your whole development methodology. One of the more successful ones I have come across was based on Function Point Analysis - whole tomes have been written on that subject (not all of which I agree with, but it may help you get some ideas). The basis of it is that each project / program is split into a number of function points, where each function of the program is awarded a value - eg drop down boxes might rate a "2" whereas a text input might only be a "1". Complex SQL might get a 5 - and so on (there are probably point charts somewhere on the net if you search for them). Total up the program, and the score for the program is what the coder is being asked to deliver.

    Obviously there is a whole lot that can come out of this - if you know the average FPs delivered by a coder, you can work out the time required, etc. - but what you are looking for is a measurement of productivity, so you need to look at how many FPs a coder can deliver in a day (or whatever timeframe you want). The important thing is that, in your case, you cannot 'award' any points to the coder until your QA / testing has assured you that each FP is working and can therefore be counted for that coder (see the toy sketch at the end of this comment).

    Sort out your overall development methodology and all the rest will fall into place.
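
    A toy illustration in C of the tally described above. The point values (drop-down = 2, text input = 1, complex SQL = 5) come from the comment; the form contents and timeframe are invented.

        #include <stdio.h>

        /* Per-feature point values, as suggested in the comment above. */
        enum { TEXT_INPUT = 1, DROP_DOWN = 2, COMPLEX_SQL = 5 };

        int main(void)
        {
            /* A hypothetical form: 4 text inputs, 2 drop-downs, 1 hairy
               query, all of which have passed QA and therefore count. */
            int total = 4 * TEXT_INPUT + 2 * DROP_DOWN + 1 * COMPLEX_SQL;
            double days_taken = 3.0;

            printf("function points delivered: %d\n", total);           /* 13  */
            printf("productivity: %.1f FPs/day\n", total / days_taken); /* 4.3 */
            return 0;
        }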
  • If you are using issue-tracking or workflow software, tracking performance is very, very easy. You can measure the number of bugs per developer, time-to-complete, the complexity of the developer's work, etc.

    The only problem with a system (any system) like that is that you have to make people use it.
  • Both...more complex code has more bugs...but it's complex code.

    Bugs are just part of the measurement, you really need to consider everything that's being done. No one metric is going to tell you what somebody is worth to an organization.
  • I hate to sound like an echo, but: tracking, tracking, tracking.

    One of the steps in your issue-management process is a review of all changes/bugs/enhancements/etc. If an issue is going to make it into your product (and not get rejected), part of the review is breaking it down into its component parts and measuring the difficulty of those tasks. When you assign an ER (engineering resource) to a given task, you have just assigned them a certain amount of difficulty/workload/points.

    If they take 3 iterations to complete the task, and another coder can finish the same amount of difficulty/work load/points in a single pass, you have something that differentiates the two come 'bonus' time.

    That said, this kind of system needs to be really, really closely monitored or else it's just a bullshit way of knocking people around come review time. The real advantage is that if you are closely watching the system, you can see where you need to step in and help someone before they come to the review and find themselves with a laundry list of mistakes.
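
    A bare-bones version of that comparison in C. The formula (difficulty points divided by iterations) is one reading of the comment, not something it spells out.

        #include <stdio.h>

        /* Assumed metric: difficulty points completed per review pass. */
        double points_per_pass(double difficulty_points, int iterations)
        {
            return difficulty_points / iterations;
        }

        int main(void)
        {
            /* Same task difficulty, different number of iterations. */
            printf("coder A: %.1f\n", points_per_pass(12.0, 3)); /* 4.0  */
            printf("coder B: %.1f\n", points_per_pass(12.0, 1)); /* 12.0 */
            return 0;
        }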
  • "
    eg drop down boxes might rate a "2" whereas a text input might only be a "1". Complex SQL might get a 5 - and so on
    "

    One of the last bugs I had to fix was written by a guy who didn't know the language (C/old-style-C++). His design was overcomplicated, and wrong. His implementation was overcomplicated and wrong. I removed 200 lines of code which did the supposedly difficult (and thus high scoring) bit of logic, and replaced it with two lines of code which remembered a value as it was being passed on elsewhere - undoubtedly a 1, or even a 0 on your scale.

    I'm a consultant; I'm not here to be popular, and when you let the managers know that the guys they employed were idiots, you indeed don't become popular. However, I've now outlived my manager on this project, so I don't give a toss.

    However, the bug fixed was a show stopper, and the _peer review_ yielded a pat on the back from the other engineers who still work on the project, as it finally lets them get their bonuses!

    I work for a flat rate. No metric has been applied. Managers want metrics. Engineers want 'working'. In my utterly biased opinion anyway.

    THL
    --
  • Oh man! Someone moderate the above up!

    Hahah - I once made a bet that I could program my functional unit in a style where every function (in C) had only _one line_ and that had to be a return statement. I won several cakes from that bet. Magic - almost every function was recursive!

    To show 'good faith' I also provided the code in a 'straight line' form, with remarks on how to map between the two (it was only 200 lines of code max, no more than 25 functions, so it was a fairly straightforward mapping). However, no bug reports were ever raised, so I assume that to this day the system runs with my recursive code.

    For reference - I was rewriting the company's C coding standards at the time, and I decided that I didn't like the sentence "Every function must have exactly one return statement". I deliberately mis-interpreted that as "... and no other statements at all".

    Sometimes I'm an asshole... But my mates had a laugh to see the (working) rubbish I produced with that restriction!
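
    For the curious: anything loop-shaped can honor that restriction by going recursive and replacing branches with ?: - this is a reconstruction in that spirit, not the poster's actual code.

        #include <stdio.h>

        /* Each function body really is a single return statement:
           iteration becomes recursion, if/else becomes ?: */
        static unsigned long factorial(unsigned n)
        {
            return n <= 1 ? 1UL : n * factorial(n - 1);
        }

        static size_t my_strlen(const char *s)
        {
            return *s == '\0' ? 0 : 1 + my_strlen(s + 1);
        }

        int main(void)
        {
            /* The comma operator keeps even main() down to one return. */
            return printf("%lu %zu\n", factorial(5), my_strlen("hello")), 0;
        }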

    THL
    --
  • Have I worked with you in the past?
    I've sure had chats across partitions with people like you in the past. (If your attitude is reflected by your links, that is.)

    I've had more jobs than hot meals, and I've seen teamicide many times. Hahah, I just didn't know that was what it was called. I always left first, so I don't know what the death throes look like!

    THL

    --
  • Being both an employer and a coder, I definitely see the complexities before you. What I have found to work best is to give bonuses based on three things: #1, the skill/title of the programmer; #2, whether they accomplished their final goal (no bugs, etc.); and #3, having them complete online tests... This will encourage them to learn more, get things accomplished, and do it right.
