
Software Dev - Why Rebuild When We Can Retool?

basic70 asks: "There seems to be a strong preference for developing new systems all the time, instead of just refactoring and improving existing ones. Why is that? Modifications such as moving to a new operating system, modifying the business logic, adding a web interface, or moving to Unicode shouldn't affect more than perhaps 10-20% (to grab figures at random) of a decently built software system. I can think of two reasons myself. The first is that consulting firms make more money developing new systems, and the second is that existing systems are so badly layered and modularized that any larger improvements are impossible. The second reason is scary, because it means that the modern way of building things with short lifespans is starting to make its way into the software business as well. I saw a system written in 1995 that couldn't handle the new millennium. Can't we do any better than that? The GNU suite says we can, so why is it so hard with commercial software?"
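As a concrete picture of the layering the question assumes, here is a minimal, hypothetical Java sketch (none of these names come from any system mentioned in the discussion): the business logic depends only on a small output interface, so adding a web front end or changing the text encoding means writing a new adapter rather than touching the core.

    // Minimal sketch of the layering the question assumes (all names hypothetical).
    // The core business logic depends only on a small interface; presentation and
    // encoding concerns live in adapters, so swapping them touches a small slice
    // of the system rather than the whole thing.

    import java.io.PrintStream;
    import java.io.UnsupportedEncodingException;

    interface ReportSink {                       // the seam between core and presentation
        void emit(String line);
    }

    final class InvoiceReport {                  // core business logic, unaware of any UI
        private final ReportSink sink;
        InvoiceReport(ReportSink sink) { this.sink = sink; }
        void run() {
            sink.emit("Invoice 42: 100.00 EUR"); // stand-in for real business rules
        }
    }

    final class ConsoleSink implements ReportSink {   // the original text-terminal front end
        public void emit(String line) { System.out.println(line); }
    }

    final class HtmlSink implements ReportSink {      // later: a web interface, core untouched
        private final PrintStream out;
        HtmlSink(PrintStream out) { this.out = out; }
        public void emit(String line) { out.println("<p>" + line + "</p>"); }
    }

    public class LayeringSketch {
        public static void main(String[] args) throws UnsupportedEncodingException {
            new InvoiceReport(new ConsoleSink()).run();
            // moving the output to UTF-8 is likewise confined to the adapter layer
            new InvoiceReport(new HtmlSink(new PrintStream(System.out, true, "UTF-8"))).run();
        }
    }

In a system built this way, the 10-20% figure is plausible; when the core calls System.out (or its moral equivalent) directly, the same change can touch nearly everything.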
  • by Anonymous Coward
    Most of the programs that are available from the GNU library are based upon the idea that a piece of software should do one thing and do it well. This is the "Unix philosophy", and it shows when using Unix-like operating systems. Taken alone, these programs exhibit a single-mindedness. However, lay them side by side and you can see large discrepancies between seemingly identical features (e.g. the help system).

    Software created for business, OTOH, necessarily needs to be large, complex, and single-minded. The Unix philosophy is still at work, but once the "one thing" that a piece of software does becomes more complicated than grepping text or communicating through a pipe, the applicability of the philosophy begins to waver (a toy illustration of this composition follows this comment).

    So why start from scratch when software already exists? Any number of reasons:

    1) Poor documentation of the old project
    2) Unnecessarily difficult code in old project
    3) New programmers don't know old implementation language
    4) New programmers suffer from "Not created here" syndrome
    5) Contractor wants to bill more hours than would be billed otherwise

    Obviously it's a lot of useless work to reinvent the wheel every time a project comes along, but think of it as job security. If the only programming being done was stuff that no one ever did before, the developer profession would be filled by only a handful of folks.

    Dancin Santa
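A rough illustration of the composition idea appealed to above (purely hypothetical; these tiny filters merely stand in for grep-like tools chained through a pipe): each piece does one narrow job and chaining them is trivial at this scale, which is exactly what stops working once the "one thing" is an entire business application.

    // Toy illustration of "do one thing and do it well" composition (hypothetical
    // names; the filters stand in for small Unix tools chained through a pipe).
    import java.util.List;
    import java.util.Locale;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    public class PipelineSketch {
        // each "tool" does exactly one narrow job
        static Stream<String> grep(Stream<String> in, String needle) {
            return in.filter(line -> line.contains(needle));
        }
        static Stream<String> toUpper(Stream<String> in) {
            return in.map(line -> line.toUpperCase(Locale.ROOT));
        }

        public static void main(String[] args) {
            Stream<String> input = Stream.of("error: disk full", "ok", "error: no route");
            // composing the pieces is easy here; the comment's point is that this
            // style stops scaling once the "one thing" is a whole application
            List<String> result = toUpper(grep(input, "error")).collect(Collectors.toList());
            result.forEach(System.out::println);
        }
    }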
  • by bluGill ( 862 ) on Wednesday May 30, 2001 @05:29AM (#190236)

    So said Fred Brooks in his masterpiece on software development.

    There is no way to really figure out how something should be designed without designing it wrong the first time. If you have experience, it means you can throw away parts (i.e. your ASCII module gets thrown away for a Unicode module), but the only way to get experience is to do it wrong.

    The system I'm working on now has some problems, but they are not bad overall. Surprisingly, the areas we thought would be a problem aren't so bad, and the areas we all worried about (what will happen when the customer does X?) turned out not to be a problem. Until software hits the real world you don't know how it will be used, and thus you don't know how it will need to be changed.

  • Exceptions exist, but bear in mind that even Open Source and Free Software projects will feel some of these pressures, so they won't necessarily be immune.

    The advantage of Open Source and Free Software is that you do have the source in case you need to modify the existing system.

  • The advantage of Open Source and Free Software is that you do have the source in case you need to modify the existing system.

    You do if you're developing software in-house, too. Non-extensible software is still non-extensible, and making it extensible and documenting it properly is still a large short-term expense with little tangible to show for it (however valuable it may be in the long term). Look at how much people griped while Mozilla was in rewrite-limbo for an example of the usual reaction to this.
  • The reason why most software isn't flexible enough to be extended easily is that it's usually easier in the short term to write non-flexible software and to document its design poorly.

    In the real world, you are almost never given enough time or resources to finish a software project. Part of this is third-party influence - the nature of the market, trade show deadlines, customer deadlines if you're contracting, etc - and part of this is management that believes in short-term profit above all else.

    Sometimes you're lucky enough to avoid one or both of these factors, or to be in a position where you can force software development to take as much time as it needs. Most of the time you can't.

    It's hard enough getting a working, tested, and documented product to ship. Getting an extensible product on top of that is a task of Herculean magnitude.

    Thus, I think that most software will continue to be difficult to adapt to new tasks.

    Exceptions exist, but bear in mind that even Open Source and Free Software projects will feel some of these pressures, so they won't necessarily be immune.
  • by CharlieG ( 34950 ) on Wednesday May 30, 2001 @11:20AM (#190240) Homepage
    Guess what? Most systems are that bad!

    They don't start that way - they start with a clean, well-structured design, and then they end up in maintenance mode. Guess what happens then? Yep, it all breaks down. The biggest reason for this is feature creep. We all say it won't happen, but when the end user (who pays the bills - remember, programmers are overhead) says "I need X, and the fact that I don't have it is a bug", it doesn't do any good to show them a spec; they want the feature. So it gets added. When the end user says "we need this new feature in 3 days, OR ELSE", what do you do?

  • Well, you answered your question pretty well yourself.

    The first, at least with respect to commercial software, is absolutely true. Vendors always want the customers to spend more money on the latest/greatest/biggest/best new version. They don't care about the customer, as long as they're making a profit.

    The second point you made has a few aspects. While commercial software vendors have their own code to build on, the free software community must start from scratch when putting together an alternative to a commercial product. There's also the simple fact that people learn from experience. The coders who put a system together initially may have been relatively new then. When it comes time for a major rework of the system, they have much more experience - they're familiar with the system, know how it works, and how it should be. If the methods used when putting it together in the first place weren't as good as they could have been, it makes sense to just start over and do it right instead of wasting time cleaning up a mess.

    Still, there really isn't much excuse for not creating a well planned, extensible system in the first place. If things were done better the first time around, there would be no need to start over again. Perhaps we as programmers need to take a look at our design processes and see what can be done better to make more 'future proof' code.

    My $0.02

    I saw a system written in 1995 that couldn't handle the new millennium. Can't we do any better than that? The GNU suite says we can, so why is it so hard with commercial software?

    But the GNU "developers/managers" don't make a boatload of cash for releasing multiple versions of the code with new features and bug fixes in '95, '98, 2000 (and XP whenever THAT is (-: ).
  • Thanks for posting that website -- that article is very good, and there is plenty of other good stuff!

  • Yeah, this site is good. Thanks
  • A few scenarios I've come across in my career:

    1. Architectural Differences: My old batch COBOL application running on an ICL mainframe, which works OK, isn't much of a goer when all my competitors have gone WWW.

    2. Open Standards: Same COBOL system - who'd like to list the difficulties of XML-enabling it so that I can speak to my customer's XML B2B procurement portal?

    3. Choices: I'm being stiffed by the mainframe supplier. Can I change? Nope, because my on-line app is written in their proprietary Application Master. But if I had a nice shiny new J2EE app it could run on most servers using any J2EE app server... well, maybe soon, and even now it wouldn't be too bad a job to shift it.

    4. Sheer Complexity: My business users want Customer Relationship Management, Enterprise Resource Planning, and Supply Chain Management. Do I: a. buy CRM/ERP etc. packages and integrate them with my batch COBOL system (gibber gibber); b. build my own CRM/ERP etc., which isn't my primary business / I wouldn't know where to start / I can't find a COBOL developer, at huge cost; or c. start from scratch using off-the-shelf packages, building bespoke only where it is my core business and I know what I'm doing, integrating the lot with J2EE/messaging solutions and a layered architecture which allows new channels and services to be added with ease (a toy sketch of that kind of channel layering follows this comment)?

    5. Reduced Risk: My systems were written quite recently using the XYZ GUI development tool with an Oracle 7.3 database, all running on an ABC server with Dodah PC clients. But XYZ have been taken over by Microsoft/Oracle/IBM/CA..., who've ripped out the core clever bits and given notice that support will cease soon; Oracle support for 7.3 ends soon and XYZ doesn't have drivers for 8i; ABC are moving to a thin-client model in their new range and support for the current range will cease in a few years. And finally, Dodah have gone bust - aha, but any PC will do, so we can migrate to Dell/Compaq/IBM... We just need to sort out the back-end stuff before we have a problem and my business collapses.

    PS: No specific gripes about ICL, just an example I'm familiar with. Similar arguments would apply to IBM, VAX, Tandem, etc.
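One way to picture the "new channels without touching the core" idea in scenario 4c is the hypothetical sketch below (these names are invented for illustration, not taken from the poster's systems): the existing pricing logic stays where it is, and the XML B2B channel is just another thin adapter in front of it.

    // Hypothetical sketch of adding an XML channel in front of existing core logic
    // (illustrating scenario 4c above; none of these names come from a real system).
    public class ChannelSketch {
        // the existing "business core" -- imagine this wraps the old COBOL-era rules
        static final class QuoteService {
            double priceFor(String sku, int quantity) {
                return quantity * 9.99;           // stand-in for the real pricing rules
            }
        }

        // a new channel is only a thin translation layer over the same core
        static final class XmlQuoteChannel {
            private final QuoteService core;
            XmlQuoteChannel(QuoteService core) { this.core = core; }
            String handle(String sku, int quantity) {
                double price = core.priceFor(sku, quantity);
                return "<quote sku=\"" + sku + "\" qty=\"" + quantity + "\">"
                        + price + "</quote>";
            }
        }

        public static void main(String[] args) {
            QuoteService core = new QuoteService();
            System.out.println(core.priceFor("A-100", 3));                    // old channel
            System.out.println(new XmlQuoteChannel(core).handle("A-100", 3)); // new channel
        }
    }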
  • I'll second that. It is a site worth returning to.
  • by lgas ( 143053 )
    Modifications such as moving to a new operating system, modifying the business logic, adding a web interface, moving to Unicode etc, shouldn't affect more than perhaps 10-20% (to grab figures at random) of a decently built software system.

    Ok, well, first of all, grabbing figures at random is not the best way to build a solid argument, but I'll let that go for now.

    Figures aside, I think this blanket statement is just plain wrong. I agree that most (or at least a lot of) COTS software should meet some minimal standards: it should have a modular mechanism for handling internationalization, the code should be written so that it can be ported to new operating systems with minimal effort, the user interface should be abstracted from the guts of the code, and so on. However, not all commercial software is COTS software.

    When I am developing commercial software for a client, I typically spend as much time as needed gathering requirements from that client until I feel that I have a complete understanding of the client's needs. Then I provide that client with an estimate that approximates as closely as possible the costs I anticipate as I develop a good, solid piece of software that does exactly what the client wants and nothing more. If I think the client may have neglected to consider something that could be useful now or in the future, such as a generic XML data gateway module or a web interface, then I will certainly bring it up and try to explain why I feel it would be better to include it now. But if the client tells me they don't want these things, then I'm not going to develop them, because it would end up costing the client more money for me to include functionality they are not interested in.

    ...and the second that existing systems are so badly layered and modularized that any larger improvements are impossible. [...] I saw a system written in 1995 that couldn't handle the new millennium.

    This statement doesn't contain enough detail to really carry any weight. Was the "system" a script to back up software on a weekly basis, or was it an air-traffic-control system? How much effort was it to add the Y2K support? Was the system originally developed under tight deadlines and with a tight budget? Was it so difficult that spending the extra 800 man-hours to add that functionality up front would've meant the death of the entire project? Or was it so easy to add in November of 1999 that it was done by an intern sysadmin modifying one 3-line perl script - hey, she even got it right the first time (often such a fix is just a date-windowing tweak; see the sketch after this comment)? There are a lot of good reasons why software written in 1995 might not have been Y2K compatible (hell, there are a lot of good reasons why software written in late December 1999, or even after the new millennium, might not be Y2K safe). There are also a lot of bad reasons, but mentioning this offhandedly with no context is like saying "I saw a car developed in 2001 that runs on leaded gasoline". Most people wouldn't want that car in most situations, but without knowing why or under what conditions that car was developed, you can't automatically claim that it's "bad".

    Can't we do any better than that? The GNU suite says we can, so why is it so hard with commercial software?

    This is useless rhetoric. I've seen plenty of commercial software that works well too. Not all commercial software is bad, and implying otherwise is just silly.

    I don't mean to be merely disagreeable. I actually think the root argument Cliff is making - which I interpret as "there are a lot of problems in the software development industry that don't need to be there", or perhaps more succinctly (and more relevantly to a larger audience) "software should be better" - is a valid one, and the state of software development is in fact a big problem these days. However, I think it's important that anyone interested in solving the problem stay focused and try to understand what the real problems are. There are plenty of them, and going off on a wild goose chase after red herrings certainly isn't the best way to skin the cat. Or something.
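For readers who never had to do one of these patches, here is roughly what the "small Y2K fix" case alluded to above looks like (a hypothetical sketch in Java; the thread does not describe the actual system or its language): two-digit years are mapped onto a pivot window instead of being blindly treated as 19xx.

    // Hypothetical sketch of a small Y2K "windowing" fix (not the system from the
    // comment above): two-digit years are interpreted against a pivot instead of
    // always being prefixed with 19.
    public class Y2KWindowSketch {
        // the pre-fix behaviour: "01" became 1901 and sorted before 1995
        static int naiveYear(int twoDigitYear) {
            return 1900 + twoDigitYear;
        }

        // the fix: years below the pivot are taken as 20xx, the rest as 19xx
        static int windowedYear(int twoDigitYear, int pivot) {
            return (twoDigitYear < pivot ? 2000 : 1900) + twoDigitYear;
        }

        public static void main(String[] args) {
            System.out.println(naiveYear(1));         // 1901 -- the bug
            System.out.println(windowedYear(1, 50));  // 2001 -- after the fix
            System.out.println(windowedYear(87, 50)); // 1987 -- old data still correct
        }
    }

Whether such a change is a 3-line tweak or an 800 man-hour slog depends entirely on how well the date handling was isolated in the first place, which is the poster's point about context.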
  • by cooldev ( 204270 ) on Tuesday May 29, 2001 @09:24PM (#190248)

    "We're programmers. Programmers are, in their hearts, architects, and the first thing they want to do when they get to a site is to bulldoze the place flat and build something grand. We're not excited by incremental renovation: tinkering, improving, planting flower beds."

    The article: Things You Should Never Do [editthispage.com]

  • Modifications such as moving to a new operating system, modifying the business logic, adding a web interface, moving to Unicode etc, shouldn't affect more than perhaps 10-20% (to grab figures at random) of a decently built software system.

    You assume most software is decently built. More often than not that's not the case. Many existing systems are not flexible enough to withstand modifications.

    This also assumes that the existing system is documented, so that the programmers can figure out which part does what, how a change will affect the rest of the system, and why things were written the way they are. Once again, this is often not the case. If there are docs, they are almost always out of date.

    When faced with such a mess as this, it's often just easier to rebuild from scratch.

  • I think that you were pointing in (3) towards one of the most important economic arguments for occasionally ripping it all out and starting over: the code moves out of the "maintainability window." Anyone who has seen the bathtub curve will know what I mean. This applies to all systems-- code isn't special in this way. After a certain point, it starts becoming hard to find people who know how to maintain the code base; vendors no longer support it; and there are so many layers of cruft that breakage is the inevitable consequence of even minor changes. And every system contains undocumented features and long-forgotten tradeoffs that increase in proportion to the number of hands that have touched it. Each of these is a booby trap for the next maintainer. At some point it just doesn't make sense to train any more new tightrope walkers instead of fixing the fucking bridge.

    Usually, when someone writes a piece of software, they sit down and design it (assuming they are smart).
    They build a list of requirements and then write code that answers those requirements.

    Making the design extensible is a nice requirement to have, but more often than not it's *not* there.

    And "Do it *fast*" requirement, is *always* there.

    So you get an application that does what the client says they need.

    Then you need a new version with more stuff in it, so you build on top of the old design, and the next version builds on top of that, etc.

    There is a limit to how much you can extend a design without a major re-write.

    Especially since it takes a *lot* of time & effort to get the design right in the first place. And even then, a rewrite from scratch means that you get a better design (well, usually).

    Take Netscape, frex. Descended from Mosaic, it was an excellent browser for 3 versions, and with 4.0 it flopped. Why? Because NS got caught up in hype. (Not to mention the bad stuff that happens when you do cross-platform development.)

    They rewrote large parts of their browser in Java, realized too late that it was too slow, and rewrote it *again* in C++.
    NS 4 is something of a masterclass in how *not* to extend a design.

    By 4.0, NS engineers realized that this code base was going nowhere. They could have invested a tremendous amount of effort cleaning up their act, but that would be stupid. It was much better to rewrite from scratch: you still get to reuse the old code for a lot of things, but you don't have to adapt a design that may no longer be applicable to your needs.

    It certainly didn't help that NS tried to be everything all at once.

    In contrast, take IE. Built on the same Mosaic lineage (via the Spyglass code), but apparently with much better separation & modularization, which has allowed MS to keep extending IE.
    I would hazard a guess that using COM helped very much.

    IE is a much more complex project than NS, and I don't think that NS's decision to throw everything but the kitchen sink into the browser was wise.

    I think that Mozilla made the same design mistake; they should've focused on getting to 1.0 with a usable *browser*. Email programs, WYSIWYG HTML editors, and news clients are plentiful, and they are quite good.

    Luckily, Mozilla separated things much better than NS did, so you can take just the rendering engine (the heart of a browser) and make just a browser (see the sketch after this comment). Still, I cannot help but grieve over all those man-hours spent on programming email & news clients.

    BTW, one of MS's most prized possessions, Win2K, is about a 40% rewrite (but then, you have to consider what they put in).
    And XP is little more than a shell & compatibility update.
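A loose illustration of the separation being praised above (hypothetical interfaces; this is not how Gecko or Trident actually expose themselves): if the rendering engine sits behind its own small interface, a shell application can embed just that piece without dragging in mail, news, and the editor.

    // Hypothetical sketch of "take just the rendering engine": the engine lives
    // behind its own interface, so a minimal shell can embed it without pulling in
    // mail, news, or an HTML editor. This is not the real Gecko/Trident API.
    public class EngineSeparationSketch {
        interface RenderingEngine {
            String render(String html);   // the only capability the shell depends on
        }

        static final class ToyEngine implements RenderingEngine {
            public String render(String html) {
                return html.replaceAll("<[^>]+>", "");   // crude "rendering": strip tags
            }
        }

        // the shell knows nothing about mail/news/editor components
        static final class MinimalBrowserShell {
            private final RenderingEngine engine;
            MinimalBrowserShell(RenderingEngine engine) { this.engine = engine; }
            void open(String html) { System.out.println(engine.render(html)); }
        }

        public static void main(String[] args) {
            new MinimalBrowserShell(new ToyEngine()).open("<p>Hello, <b>world</b></p>");
        }
    }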

  • And, sometimes the change someone wants is fundamentally opposed to design criteria.

    Has anyone ever been involved in a design in which you ask the clients, "Is it true that in all cases this never happens?" They usually say yes. Then two weeks later you have something that takes that to be axiomatic. Then some clever guy says, "Oh, no. They always do that."

    Then you get something that goes against a fundamental design issue and you're fsck'd.

    It's usually more of a problem with capturing initial customer requirements. Having worked on installed codebases, sometimes it *seems* like it's easier to rewrite. In truth, you have bigger headaches than just regression-testing your changes - you have to come up with entirely new regression tests (see the sketch after this comment).

    Glenn
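To make the regression-testing point concrete, here is a generic sketch using plain asserts rather than any particular test framework (the function and expected values are invented for illustration): checks like these pin down the existing system's observable behaviour and keep working when you retool it, whereas a rewrite forces you to re-derive every expectation from scratch.

    // Generic sketch of characterization/regression checks (plain asserts, no
    // framework assumed; the formatting function is a hypothetical stand-in).
    // They capture the existing system's behaviour so a retooled version can be
    // checked against it; a rewrite has to decide all of these values again.
    public class RegressionSketch {
        // stand-in for a function in the existing system
        static String formatCustomerId(int id) {
            return String.format("CUST-%06d", id);
        }

        static void check(String expected, String actual) {
            if (!expected.equals(actual)) {
                throw new AssertionError("expected " + expected + " but got " + actual);
            }
        }

        public static void main(String[] args) {
            // captured from the behaviour of the current system, warts and all
            check("CUST-000042", formatCustomerId(42));
            check("CUST-001000", formatCustomerId(1000));
            System.out.println("regression checks passed");
        }
    }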
  • really
