What Makes a Good Design Document? 461
dnnrly asks: "I've been writing software professionally for a couple of years now, for more than one company, and I've noticed a recurring pattern: I get put on a new project that already has a bit of history, and I get told to read the design documents and then implement XYZ. What happens is I read the document and find that it gives me a lot of information about certain aspects of the system we are building, but leaves huge gaps in others. We're going to be rewriting some of the procedures very soon and I'll be able to influence the software side, so I wanted to ask Slashdot readers: what sort of things have you seen in design documents that you've liked or thought were a good idea? What have you found works, and what doesn't? If all else fails, where's a good place to find all this out?"
"There's usually a very defined and rigid format for every design document, and the writers have obviously tried very hard to make sure that procedure has been followed, generally leading to an almost unreadable doc or a design for its own sake. Part of the issue is that these guys have written the design after two or more years' exposure to the problem, so they tend to forget just how much they know."
Don't use one (Score:1, Interesting)
What's the purpose of the design doc? (Score:5, Interesting)
You Already Know (Score:3, Interesting)
What happens is I read the document, find that it gives me a lot of information about certain aspects of the system we are building, but leaves huge gaps in others.
If you can already identify gaps in previous design documents, then you are already qualified to write the next design document.
Apart from that, talk to some other experienced people in your organization and get their take on previous projects' failures and delays. Then see if there is any way to preemptively incorporate measures into the design document to improve your project's chances of success.
[Although, a lot of project success really boils down to getting the right people on the team.]
Re:Duh (Score:2, Interesting)
I dare say that I haven't seen much in the way of design documents in anything I do. A "design", to me, typically means a braindump committed to email. It's not too surprising, either; I doubt many software engineers got into the business to write documents.
Re:Duh (Score:5, Interesting)
Then burn it. The methods are all nice in theory, but in practice it's often a crapshoot. Then again, I have a bit of history on this one, as my software engineering class was more of a class on how to deal with the PHB (clueless professor).
We learned how to make all the pretty diagrams, while we discovered that such pretty diagrams could never have relevance to our particular project. It ended up a game of "make the diagrams the customer wants, then make much simpler and more sensible ones for yourself that don't follow proper software engineering protocol but actually tell us how the darn thing works."
Wiki? (Score:2, Interesting)
Model Driven Development (Score:2, Interesting)
The basic process is: create a model that encapsulates the three big domains:
1) Analysis (i.e. requirements, actors, and use cases)
2) Components (object models, system models)
3) Interactions (interfaces and sequencing)
Once your model contains a good description of these three domains, expressing a design document from the model is straightforward (indeed, many good modellers will provide excellent document generators). XDE works fine, but my particular favorite is Enterprise Architect [sparxsystems.com]
The beauty of treating the design document as an expression of the model is that by changing the model, you change the document.
In a situation where you're doing large scale code-generation from the model, you're living high on the hog - one repository for your solution information, and any number of expressions of that information into the formats you need (requirements docs, design artifacts, codebase, etc...). By actually including the analysis elements of the solution (the requirements, particularly), you can link those requirements to system components that fulfill the requirements. As the requirements change (and, of course, they will), you can evaluate the impact of those changes quickly by tracing the associations.
Decent article on MDD [ibm.com]
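The traceability idea above (linking requirements to the components that fulfill them, then tracing impact when requirements change) can be sketched very simply. This is a minimal illustration, not any particular modelling tool's API; the requirement IDs and component names are invented.

```python
# Hypothetical sketch: one repository of requirement-to-component links,
# queried to assess the impact of a requirement change.

traceability = {
    "REQ-101 User login": ["AuthService", "SessionStore"],
    "REQ-102 Password reset": ["AuthService", "MailGateway"],
    "REQ-103 Audit logging": ["AuditLog"],
}

def impacted_components(changed_requirements):
    """Return every component touched by any changed requirement."""
    impacted = set()
    for req in changed_requirements:
        impacted.update(traceability.get(req, []))
    return sorted(impacted)

# Changing the password-reset requirement touches two components.
assert impacted_components(["REQ-102 Password reset"]) == [
    "AuthService", "MailGateway"
]
```

Real modelling tools keep these associations inside the model itself, but the payoff is the same: change the requirement, trace the links, and you know what to re-evaluate.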
Pictures are good! (Score:5, Interesting)
Amen. I do Quality Assurance (and for those who don't know, that isn't just testing). I use design docs to figure out how something is supposed to work before I get it. Pictures are good. You can (intentionally) bury information in a 50 page document. It is hard to bury information in pictures. I say "intentionally" above because in the past I worked with a guy who was the director of a development group. He didn't like to design things, or tell people about how things were going to work. So his requirements and design documents were vast containers of information. His standard answer to questions was: It's in the document.
Me: What is the flow of events from beginning to end?
Him: It is in the document.
Me: I couldn't find it. Where is it?
Him: It's in there, you just have to find it. See, here on page 3, and on page 10, and...ummm... You just have to piece the information together, but it is all in there.
Talk about information hiding. In meetings, people would ask questions, and he would say "It's all in the document, should I go get it?" Nobody wanted to spend meeting time sifting through it for answers. And sometimes, the answers weren't there, and we would always get the "I'll add it". Of course, nobody ever checked to make sure he added it.
I fought for months to get him to add a flow diagram in a doc. He kept insisting that all the information was there and that a diagram was useless. After months and months, he finally added it. The FIRST thing that someone said at the next meeting was how useful that diagram was, and they pointed out some improvements to it. It turned out those comments sparked conversations that led to the discovery of flaws that went unnoticed for months. I'll leave it up to the reader to guess who got credit for the diagram in the document. (hint: Senior QA person or director of development)
Let me re-iterate: pictures are good.
good code (Score:2, Interesting)
Good design documents show intent, but it is the code itself that determines the process. It is like a factory: one has draft and official procedures, but it is the marked-up copies on the floor that indicate what is actually going on.
Amazingly, I find this somewhat harder to do in OO languages. The flow is often not as clear due to polymorphism and the like. Makes coding easier, but sometimes makes reading harder. I guess it is just a matter of manners.
RFCs are requirements documents, not design (Score:5, Interesting)
An RFC specifies what behavior MUST, MUST NOT, SHOULD, SHOULD NOT, or MAY exist. It doesn't say jack about how that behavior is supposed to come into being. I could write an OpenPGP application that did all of its work by hiring Bruce Schneier to manually do the RSA computations, and it'd pass the RFC.
RFCs aren't design documents. RFCs specify behavior; design documents specify how that behavior is achieved.
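The behavior-versus-design distinction above can be sketched in a few lines: a requirements-style check constrains only observable behavior, so two completely different designs both conform. The functions and the conformance check here are invented for illustration, not from any RFC.

```python
# Two different "designs" for the same specified behavior:
# count the vowels in a string.

def count_vowels_loop(s):
    # Design 1: explicit loop with an accumulator.
    total = 0
    for ch in s.lower():
        if ch in "aeiou":
            total += 1
    return total

def count_vowels_genexpr(s):
    # Design 2: generator expression; same behavior, different mechanism.
    return sum(1 for ch in s.lower() if ch in "aeiou")

def conforms(impl):
    """A MUST-style behavioral check: only the output is constrained."""
    return impl("Hello World") == 3

# Both designs pass, because the "RFC" never said how to do it.
assert conforms(count_vowels_loop)
assert conforms(count_vowels_genexpr)
```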
Good design documentation (Score:2, Interesting)
I solve all these problems by using the Business Architecture Method from Business Architects [businessar...ctsllc.com] The site is new - online examples are coming later today (Monday).
Yes, I'm biased... I work for them.
That Jack Reeves is Real Smart (Score:1, Interesting)
Re:Duh (Score:3, Interesting)
Amen.
It ended up a game of "make the diagrams the customer wants, then make much simpler and more sensible ones for yourself that don't follow proper software engineering protocol but actually tell us how the darn thing works."
Actually sounds like you learned your lesson well.
Re:Duh (Score:2, Interesting)
The software engineer, even if he or she writes code, is not doing that. The software engineer has to design, whether it's done in their head alone or whether it gets committed to paper.
Is that coding software engineer less of a software engineer by skipping over the design documentation stage? Perhaps. We are a profession that doesn't have strict rules. How to write a good design document is a massive question because I don't think we've figured it out yet.
If you're going to construct a building, you know you need floors, walls, windows and such. You have to do a site plan, blueprints, environmental impact studies and such. Good design documentation, I think, would be more of a given.
Software ranges from games to scientific applications, distributed apps and so on. Can a cookie-cutter approach work? Is software still advancing at such a pace that a more formal technique is still out of our grasp? Or are we at a point where patterns are emerging and we can point to a design as a "typical" design?
Don't just document the design (Score:5, Interesting)
The single biggest mistake most design documents make is that they document the design.
That's nice, to a small extent, but generally of relatively little use.
It falls down completely when
- the designer made bad assumptions that subsequently don't hold
- the users change the requirements
- someone actually writes the system
- the system goes through years of maintenance
So what is of use in these circumstances? The ideas, concepts, approaches and general thrust of the design.
Where did this design come from? Why has this approach been taken. What are the concepts embodied here?
Don't tell me that the Widget is round and talks to the Doodah.
Tell me _why_ the Widget is round (and why square wasn't good enough), explain what the Doodah does and why the Widget needs to talk to it, and what the contract (informal or formal) between the two is.
If the Doodah works with hexagonal Thingamies then explain that. If there aren't any Thingamies yet but it's possible they may be added give the guidance on where they'll be added.
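The "document the why, not just the what" advice above applies to code as well as to design docs. Here is a toy sketch using the post's made-up Widget/Doodah names; the rationale in the docstrings is invented purely to show the shape of the idea.

```python
# Illustrative only: the docstrings record the *why* and the contract,
# not just the *what*. All names and rationale are hypothetical.

class Doodah:
    """Accepts round parts only.

    Why: downstream Thingamies are hexagonal and mate with round faces;
    square parts were tried and rejected because they jam the feed.
    """

    def accept(self, part):
        # Contract: part must expose a truthy `is_round` attribute.
        if not getattr(part, "is_round", False):
            raise ValueError("Doodah contract: only round parts accepted")
        return "accepted"

class Widget:
    """Round by design; see Doodah's docstring for the rationale."""
    is_round = True

assert Doodah().accept(Widget()) == "accepted"
```

A maintainer who reads only the `accept` signature learns the what; the docstrings carry the why, which is the part that outlives the implementation.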
A good piece of software design is a vision, a pure and beautiful concept in the mind of its creator. What gets written on paper has to share that vision with others, so that they can understand it, and share it going forwards.
Then you have design documentation that makes sense, that outlives the initial implementation, that's useful to people in years to come.
~Cederic
This is how you accommodate it. (Score:5, Interesting)
Re:Testing the design -- traceability (Score:5, Interesting)
They're all important points. If I had to rank them in order, I'd sort them as follows:
Items 1, 2, and 4 are the most critical, followed by 5 and then 3.
I'd add subitem 4a: during design, don't add features that aren't in the requirements, unless you can show a derived requirement (usually one of what I call the "X-ability" requirements -- scalability, maintainability, testability, and reliability). Don't add it just because it would be "a cool feature to have".
Re:Duh (Score:5, Interesting)
Computing is a creative technical discipline that has little to do with engineering and even less to do with science. It is an art, a craft and sometimes a trade.
Just because they're older buzzwords doesn't mean they're accurate.
Testable Requirements (Score:3, Interesting)
It's simple: make sure that each requirement stated in your design document meets the testability test. If you can't think of a simple way to test the requirement, then it isn't properly defined.
Along with this simple idea, the person who is specifying the system has to be willing to put in the time to make sure all requirements are testable. You also have to have a good programming manager, one who will make sure each new requirement is checked for testability and that all changes are checked to make sure they don't mess things up.
With this combination of factors, I was able to reduce the number of errors discovered after release in one system from over 400 (taking 4 months to fix) to 4, which took three days to fix.
Make note of the tests you envisage for each requirement. Ideally, this should be done by a very sharp-eyed QA Analyst.
Finally, build code reviews into your schedule. That way you have a good chance of meeting your deadlines. The code reviews not only find many bugs, they are also good places for mentoring members of the programming team who have less expertise.
Hopefully, some of this is useful to you.
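The testability test described above can be made concrete: pair each requirement with an executable check. A minimal sketch, with an invented example requirement (the field name and limits are assumptions, not from the post):

```python
# Hypothetical requirement: "Age MUST be an integer in the range 0..130."
# If you can write checks like the ones below, the requirement is
# defined well enough to implement; if you can't, it's too vague.

def validate_age(value):
    """Implements the invented age requirement."""
    return isinstance(value, int) and 0 <= value <= 130

# The checks double as the requirement's test cases.
assert validate_age(42)        # in range: accepted
assert not validate_age(-1)    # below range: rejected
assert not validate_age(131)   # above range: rejected
assert not validate_age("42")  # wrong type: rejected
```

Compare that with an untestable phrasing like "age should be reasonable": there is no check you could write, which is exactly the warning sign the comment describes.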
Very simple approach. (Score:3, Interesting)
Second trick: Brevity. Put NOTHING in there unless it communicates the design and is required by the target audience. Get rid of Cut & Paste boilerplate just as you would in your code.
I guess finally I'd have to say --be complete. If you find yourself saying "We'll work out the details of that later", it's going to be one of the more difficult parts of your project.
The problem is that these standards are difficult to quantify, so the most important point would be Hire a good architect. Look at design docs they have written and see how many questions you might have if you were implementing that project. Let them train the rest of your staff.
The difference between a typical programmer and a good architect is about the same difference as that between a house painter and a classic artist. Even if a group of house painters could paint the Sistine Chapel, they would have to have some pretty good instructions to follow--and they would be completely incapable of making those instructions themselves.
Re:Testing the design -- traceability (Score:5, Interesting)
This meets your requirement; the program didn't "dump". *grin* Hopefully that illustrates the dangers of most "not" requirements.
The best way to write the requirement you're looking for in this instance is something like:
"Ignore alpha values where a numeric is expected."
Or, "If an alpha is given where a numeric was expected, ignore all following input up to [some delimiter]."
Or, perhaps, "If an alpha value is given when a numeric is expected, display an error and get the input again." The general idea is you want to define the requirement and the bounds of operation; "not" requirements are unbounded.
I will grant, however, that sometimes you want a "not" requirement; this discussion [slashdot.org] has more details on 'not requirements' and rjh [slashdot.org] points out that sometimes folks use not requirements to prohibit things but purposefully allow anything else; in my opinion this is very dangerous and should only be used very rarely.
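The bounded requirement suggested above ("ignore alpha values where a numeric is expected") is concrete enough to implement directly, which is the whole point of preferring it to an unbounded "not" requirement. A minimal sketch; the token-based input format is an assumption for illustration:

```python
# Implements the bounded requirement: keep numeric tokens, silently
# ignore alpha (or otherwise non-numeric) tokens. Because the
# requirement says exactly what happens to bad input, the behavior
# below is fully specified -- unlike "the program must not dump".

def read_numerics(tokens):
    """Return the integer values of the numeric tokens, in order."""
    values = []
    for tok in tokens:
        if tok.lstrip("-").isdigit():
            values.append(int(tok))
        # Non-numeric input is ignored, as the requirement specifies.
    return values

assert read_numerics(["12", "abc", "7", "x9", "-3"]) == [12, 7, -3]
```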
Re:Duh (Score:3, Interesting)
You make an excellent point. It's the difference between being a developer vs being a "programmer", something that Eric Sink (founder of a small ISV called SourceGear) wrote a very nice article [ericsink.com] about.
If the product doesn't change the requirements... (Score:3, Interesting)
Think about that. If the software you are creating isn't altering the business methods, the usage patterns, the very opportunities available, why are you bothering?
New software should be disruptive. It should enable things that were never present before. It should create opportunities that simply weren't even on the horizon before.
If it doesn't, then why bother? Make do with the old version. Patch it, kludge it. Software is incredibly expensive to write, so unless it is really disruptive, don't do it.
So your system is worth creating; it is disruptive. THEN ITS PRESENCE WILL CHANGE THE REQUIREMENTS.
Thus the requirements gathering, design, coding, testing, and deployment must occur in as tight a loop as possible. The system must be as flexible as possible to keep pace with those changing requirements.
This is what extreme programming is about. It is "Extremely Conservative Programming". It is performing the conservative best practices that traditionally give us quality systems, at a sufficiently high rate to cope with the disruption our software produces.
You mention the designer is the coder is the tester in XP. Wrong.
The coder / customer team are the requirements gathering, priority setting team. The tester is the designer, forcing the designer to design testable systems.
The coder is the implementor constrained by superbly tight design / spec (the test), earning the greatest value to the customer soonest.
The coder then changes hats and becomes the refactorer, who can in the light and hindsight of the evolving system create a superbly designed and crafted system. Why? Because the system was designed to be testable, it is deeply tested so refactoring is safe. It won't unwittingly break the system. And being superbly designed, it is flexible, ready to earn the next greatest chunk of value for the customer.
And having come out of implementation with a higher level of testing than most traditional systems, it is ready for deployment to the customer. It can start to disrupt his business, start changing the way he does things, start changing requirements and priorities.
Re:Testing the design -- traceability (Score:3, Interesting)
This viewpoint seems to be largely tautological - two forms of an algorithm are equivalent because their input/output behavior is equivalent.
Of course, the counterargument is complex and more than I care to handle at work, but let me point out two things:
1) a binary running quicksort and my hand stepping through bubble sort aren't equivalent, even though one being run by the machine and the other being run by the human have the same end-result; input/output equivalence isn't the same as identity equivalence, and
2) various compilers don't output the same binaries for the code, and in the case of a naive and a cutting edge compiler can generate vastly different binaries with majorly different underlying behavior; hell, with optimizations, even the same compiler won't generate the same binary.
The source code most certainly is a generative set of instructions, rather than an equivalent form of output; take a look at compiler optimizations that prove whole sections of code unnecessary. That the generative instructions are significantly different from the output is in fact the crucial basis of some techniques like SFINAE.
This argument is roughly the same as arguing that a mansion and a home are equivalent just because you can enter them, receive housing and protection services from them, and because they have the same interface (doors, spigots, electrical outlets, garage, etc.)
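Point 1) above -- that input/output equivalence is not identity equivalence -- can be demonstrated directly. In this sketch, bubble sort and insertion sort stand in for the two algorithms (insertion sort substituted for the hand-stepped example); both produce identical output on every input, yet instrumenting them shows different internal behavior.

```python
# Two sorts with identical input/output behavior but different
# internals, shown by counting comparisons on an already-sorted list.

def bubble_sort(xs):
    xs, comparisons = list(xs), 0
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            comparisons += 1
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs, comparisons

def insertion_sort(xs):
    xs, comparisons = list(xs), 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break
    return xs, comparisons

data = [1, 2, 3, 4, 5]            # already sorted
out_a, cmp_a = bubble_sort(data)
out_b, cmp_b = insertion_sort(data)
assert out_a == out_b == data     # same input/output behavior...
assert cmp_a != cmp_b             # ...different internal behavior
```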
In this sense, the code *IS* the final product; the Reeves article incorrectly equates compiling to building. The "building" is the writing of the code. Compiling is more like translating a book from English to some other language.
I just don't think this is correct. The purpose of Reeves' article is to equate programming to design as opposed to construction; all you've done is to take the opposed viewpoint. Whereas the case can be made for either, in the context of Reeves' arguments the latter viewpoint doesn't make sense.
To wit, if you want to discuss building, you should observe that the output in best possible world should be identical every time; that's a reasonable goal for architecture, but not for implementation. Why do you suggest that building is coding? Saying so without rationalization isn't discussion, it's argument.
When I use the blueprints for an automobile to build the automobile, I get something different than the blueprints.
Uh, no, you don't. That's kind of what mass production is about: within the limits of machine technology, every car is the same. If you go to a junkyard, you can use pieces from other cars. You can get replacement pieces at a dealership without anyone measuring your car.
Cars from a single line are effectively identical. If you'd like to learn what the difference is, read about the problems WW1 doughboys had because their Hotchkiss weapons could not exchange parts, then consider that that's just a question of low-quality manufacture; the reason you want to look that up is the glaring contrast with the well-manufactured Browning, which the military didn't deploy even though it was the superior weapon of the day, because they didn't want its design to fall into enemy hands.
Cars most certainly are identical per design, unless you want to go off onto non-issues like micrometer differences due to machining, scratches in paint, et cetera. The average educated adult cannot look at four of the same make and model of new car, have them rearranged, and still tell them apart, provided they're clean; The Price Is Right relies heavily on that fact for more th…
Re:Testing the design -- traceability (Score:3, Interesting)
I've been on projects where the RTM became God and we ended up spending more time documenting how X -> Y -> Z throughout the document tree than we did coding it. There is some sort of happy medium.
OTOH, I'm working on a project now with:
No design docs.
No requirements in written form unless my company writes them for the client, and even then they are abbreviated severely.
The great habit of getting a bug report from QC, wanting to say "Show me the requirement", then realizing there isn't one, then having to discuss, argue, etc. over what the 'requirement' that isn't really there should have been....
That's going waaaaaay too far the other way.
Also, unspoken in what you said: a design that meets requirements can still be a bad design. One that meets requirements does at least have a chance of being a good design. One without requirements is effectively FUBAR from the beginning.
Lastly, I've been on teams where the dev *team* tested things. Now, you tended not to test your own work, but you were involved in testing the product, so it gave you a lot of experience with the look and feel of the product, the workflow, and just the kinds of issues users would hit. Sometimes you had to be the one testing your stuff, because only you could reasonably set up the failure conditions that needed to be tested (for instance, you're the network/dial-up expert, and you need to generate a lot of failure conditions there). Of course, we also always had a separate QC pass, but nothing went to QC without a pass from developer smoke testing. It led to much smoother QC iterations and a much better and more stable product.
On the topic of design docs:
1. A document topology roadmap is nice. Ours used to show where the document inter-related with other documents in the project tree. That proved handy on more than one occasion.
2. PICTURES! Message flow diagrams, client-server network layouts, cloud diagrams showing software layers and major internal components/organizations, etc. A diagram plus some reasonable number of words is worth ten times as many words with no diagram. Visio is your friend. So are modelling tools that produce good diagrams. A lot of times, a diagram alone will tell you a lot of what you need to know about a system's architecture or the interactions between components or computers.
3. Be as specific as you need to be, but no more. It's tough to know, but consider your audience and don't go into excess detail (i.e., don't make your DD a regurgitation of the code). You need enough detail to make it quite clear what the implementation must do, but not to *be* the implementation.
4. XRef to various other documents like the requirements documents and to any background docs people might need to clarify something they read in the DD (mostly background).