Environment Variables - Dev/Test/Production?

Woody asks: "It's common knowledge that the development environment should be the same as the test environment, which should mimic the production environment whenever possible. I'm currently working on a project where we develop on Dell, test on IBM, and the production system is up in the air. For embedded systems, the differences can be running on a VM versus running on the actual hardware. What I want to know is: what kinds of problems have Slashdot readers faced because of different environments being used for different phases of their projects, and how have they been overcome?"
  • by BigLinuxGuy ( 241110 ) on Wednesday January 19, 2005 @01:40PM (#11409798)
    I've worked on several similar projects, and one of the things that bit us was developers making assumptions based on characteristics of the development platform that were radically different from the production platform. Assumptions about the size of data (not just word size, but buffer sizes for some text retrieved from "standard" libraries) caused a number of problems on several Java projects I've worked on (not to mention the ones in other languages). Data size seems to be one of the more commonly overlooked items in design and implementation, so do your due diligence up front to make sure you don't discover those issues after the fact.

    Another area is performance. For reasons I have yet to understand, there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset). The truth of the matter is that performance has to be engineered in from the beginning or you simply spend a lot of time and money rewriting code that should never have been written in the first place. Sadly, educational institutions don't appear to place any emphasis on actual performance or teach the principles of performance tuning as part of their curricula.
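    A minimal Java sketch of the data-size trap described above, using an invented example: a string's byte length depends on the JVM's default charset, so a buffer sized on the development box can turn out to be too small in production.

        import java.nio.charset.Charset;
        import java.nio.charset.StandardCharsets;

        public class EncodingSizeDemo {
            public static void main(String[] args) {
                String text = "Größe"; // 5 characters, but not 5 bytes on every platform

                // Risky: the byte length depends on the platform default charset,
                // which can differ between the development and production hosts.
                byte[] platformBytes = text.getBytes(Charset.defaultCharset());

                // Safer: pin the charset so every environment agrees on the size.
                byte[] utf8Bytes = text.getBytes(StandardCharsets.UTF_8);

                System.out.println("default charset " + Charset.defaultCharset()
                        + ": " + platformBytes.length + " bytes");
                System.out.println("UTF-8: " + utf8Bytes.length + " bytes");

                // A buffer sized by character count is too small whenever a
                // character encodes to more than one byte.
                byte[] naiveBuffer = new byte[text.length()];
                System.out.println("naive buffer fits UTF-8 form: "
                        + (utf8Bytes.length <= naiveBuffer.length));
            }
        }

    The same class of bug shows up with word sizes and struct padding in other languages; the cure is the same either way: never let an environment-dependent size leak into the data layout.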
  • Cost? (Score:1, Insightful)

    by Anonymous Coward on Wednesday January 19, 2005 @01:42PM (#11409814)
    What if you can't afford a second machine to cover the "duplicate" test environment?
  • by FroMan ( 111520 ) on Wednesday January 19, 2005 @02:27PM (#11410491) Homepage Journal
    Here I would disagree. Performance should rarely even be a consideration until the product works.

    That does not mean using braindead implementations; it means worrying about a working product first.

    The first step in a serious project is to work through the design. That means looking at the interfaces the software will provide, whether UI or API; those are the targets your users will work with.

    Then you will need to have test cases for the interfaces that you have agreed on. These test cases should validate the accuracy of the system. Accuracy is key, far more than performance.

    Finally, work on implementation. The implementation is the most fluid part of the system. You do not want to be changing APIs or UIs after they have been agreed upon unless absolutely necessary.

    Performance, while not exactly an afterthought, should only be worried about once a problem is known to exist. Often more or better hardware can be thrown at the situation, if not now, then 6 months down the road. If there is a problem with the algorithm, you can change the implementation without affecting the interface.
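    A rough sketch of that interface-first approach (all of the names below are invented): agree on the interface, ship a simple implementation, and later swap in a faster one without touching callers or test cases.

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        // The agreed-upon interface: this is what users and test cases depend on.
        interface DuplicateFinder {
            List<String> findDuplicates(List<String> input);
        }

        // "Cheap" first implementation: obviously correct, O(n^2), good enough
        // to validate the interface and pass the accuracy tests.
        class NaiveDuplicateFinder implements DuplicateFinder {
            public List<String> findDuplicates(List<String> input) {
                List<String> dups = new ArrayList<>();
                for (int i = 0; i < input.size(); i++) {
                    for (int j = i + 1; j < input.size(); j++) {
                        String candidate = input.get(i);
                        if (candidate.equals(input.get(j)) && !dups.contains(candidate)) {
                            dups.add(candidate);
                        }
                    }
                }
                return dups;
            }
        }

        // "Expensive" replacement, written only if the naive version turns out
        // to be a real bottleneck. The interface -- and therefore every caller
        // and every test case -- stays exactly the same.
        class HashingDuplicateFinder implements DuplicateFinder {
            public List<String> findDuplicates(List<String> input) {
                Set<String> seen = new HashSet<>();
                Set<String> dups = new HashSet<>();
                for (String s : input) {
                    if (!seen.add(s)) {
                        dups.add(s);
                    }
                }
                return new ArrayList<>(dups);
            }
        }

    The same accuracy tests that validated the naive version validate the replacement, which is what makes the swap cheap.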
  • by _LORAX_ ( 4790 ) on Wednesday January 19, 2005 @02:56PM (#11410898) Homepage
    Repeat after me: it's not about the platform, unless the production units are seriously constrained in some way that cannot be replicated in development or testing. The one thing that has consistently hamstrung projects in the past is not being able to get a dataset into development and testing that comes close to reproducing production conditions. Without a full production-like dataset in development or testing, you push something out that works, only to find that the data causes some completely unrelated thing to break.

    This is, of course, assuming that the software platforms are compatible enough not to introduce stupid bugs because of differences between servers.
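    The ideal, as the comment above says, is the real production dataset. When that can't be copied into test, one fallback (a sketch only; the row count and field names below are invented) is to generate data at production volume with production-style messiness, so development and testing at least see the same scale and the same ugly values.

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Random;

        public class TestDataGenerator {
            public static void main(String[] args) throws IOException {
                long rows = 5000000L;        // match production volume, not a toy sample
                Random rnd = new Random(42); // fixed seed so test runs are reproducible

                // Edge cases real data always contains: empty names, non-ASCII
                // text, punctuation, and zero or negative amounts.
                String[] names = { "Alice", "", "Grüße GmbH", "O'Brien Ltd." };

                try (BufferedWriter out = Files.newBufferedWriter(
                        Paths.get("customers.csv"), StandardCharsets.UTF_8)) {
                    out.write("id,name,balance\n");
                    for (long i = 0; i < rows; i++) {
                        String name = names[rnd.nextInt(names.length)];
                        long balance = rnd.nextLong() % 1000000; // includes negatives and zero
                        out.write(i + "," + name + "," + balance + "\n");
                    }
                }
            }
        }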
  • by dubl-u ( 51156 ) * <2523987012&pota,to> on Wednesday January 19, 2005 @03:00PM (#11410956)
    For reasons I have yet to understand, there seems to be a prevalent myth that performance can be bolted on after the fact (the "make it work, then make it work fast" mindset). The truth of the matter is that performance has to be engineered in from the beginning or you simply spend a lot of time and money rewriting code that should never have been written in the first place.

    Myth, eh? Personally, I do no "bolting". The steps I happily follow are:
    • make it work
    • make it right
    • make it fast
    The notion here is that you get something basic working. Then you refactor your design to be the right one for where you ended up: a simple, clean design. Then you do performance testing, discover what the real bottlenecks are, and find ways to get the speed with minimal design distortion. Then you go back to step 1 and add more functionality.

    There are a couple of benefits to doing the optimization last. One is that a clean design is relatively easy to optimize. But the big one is that by waiting until you have actual performance data, you get to spend your optimization time on the small number of actual bottlenecks, rather than the very large number of potential bottlenecks. That in turn means that you don't have a lot of premature optimization code of unknown value cluttering up your code base and retarding your progress.

    Of course, this is not a license to be completely stupid. Before you start building, you should have a rough, plausible architecture in mind. If there are substantial performance risks at the core of a project's architecture, it's worth spending a day or two hacking together an experiment to see if your basic theories are sound.

    As Knuth says, "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil."
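    A bare-bones sketch of measuring before optimizing (the pipeline stages below are stand-ins): time each stage against realistic input, and only then decide where the optimization effort goes.

        import java.util.function.Supplier;

        public class BottleneckProbe {
            // Times one stage and reports it; crude, but usually enough to tell
            // which of several suspects actually dominates the run time.
            static <T> T timed(String label, Supplier<T> stage) {
                long start = System.nanoTime();
                T result = stage.get();
                long millis = (System.nanoTime() - start) / 1000000;
                System.out.println(label + ": " + millis + " ms");
                return result;
            }

            public static void main(String[] args) {
                // The three stages stand in for whatever the real pipeline does.
                String raw = timed("load", BottleneckProbe::loadInput);
                String cooked = timed("transform", () -> transform(raw));
                timed("write", () -> { writeOutput(cooked); return null; });
            }

            static String loadInput() { return "some representative data"; }
            static String transform(String s) { return s.toUpperCase(); }
            static void writeOutput(String s) { System.out.println(s); }
        }

    A real profiler gives finer-grained answers, but even this much keeps the optimization effort pointed at measured bottlenecks rather than guessed ones.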
  • Re:Cost? (Score:2, Insightful)

    by dubl-u ( 51156 ) * <2523987012&pota,to> on Wednesday January 19, 2005 @03:07PM (#11411046)
    What if you can't afford a second machine to cover the "duplicate" test environment?

    About 95% of the time I hear this, it's false economy. Most hardware is pretty cheap these days, and good developers are very expensive. It takes very little time savings to justify the purchase of new hardware.

    In the few cases where it's too expensive to duplicate hardware, then you can fall back on careful profiling and simulation. For example, if you know that your production hardware has X times the CPU and Y times the I/O bandwidth, you can set performance targets on your development environment that are much lower. Or if you can't afford a network of test boxes to develop your distributed app, then things like VMWare or User Mode Linux will let you find some things out.

    Of course, every time your tests diverge from your production environment, you add risk. A classic mistake is to develop a multithreaded app on a single-processor box and then deploy it on a multiple-processor box. So as you get cost savings by reducing hardware, it's good to keep in mind the added cost of inadequate testing.
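    A trivial sketch of the scaled-target idea (the ratios and budgets below are invented numbers): if production is known to be several times faster, the development box gets a correspondingly looser time budget, scaled by whichever resource gives production the smallest advantage.

        public class ScaledPerfTarget {
            public static void main(String[] args) {
                double productionBudgetMs = 200.0; // the real target on production hardware
                double cpuRatio = 4.0;             // production CPU is ~4x the dev box
                double ioRatio = 2.0;              // production I/O is ~2x the dev box

                // Be pessimistic: scale by the smallest ratio, since the workload
                // might be bound by the resource where production gains the least
                // over the development box.
                double scale = Math.min(cpuRatio, ioRatio);
                double devBudgetMs = productionBudgetMs * scale;

                System.out.printf("Stay under %.0f ms on the dev box to have a chance "
                        + "of meeting %.0f ms in production.%n",
                        devBudgetMs, productionBudgetMs);
            }
        }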
  • by SunFan ( 845761 ) on Wednesday January 19, 2005 @03:24PM (#11411267)
    Performance should rarely even be a consideration until the product works.

    ...until the prototype works. The final product has to perform well. Otherwise, people will find trivial excuses to say it sucks and needs to be replaced, even if, on the whole, it is a decent product.
  • by SunFan ( 845761 ) on Wednesday January 19, 2005 @03:30PM (#11411353)
    Why wouldn't you have asserts in the production code?

    If the code were properly designed to fail gracefully in production, a failed assertion isn't very graceful.
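    A small sketch of that distinction, with a made-up validation example: a bare Java assert is disabled at runtime unless the JVM is started with -ea, and when it does fire it just throws AssertionError, while the graceful version always runs, reports the bad input, and keeps the request alive.

        public class GracefulVsAssert {
            // Assert-style check: a no-op unless the JVM runs with -ea, and
            // when enabled it kills the caller with an AssertionError.
            static double assertStyle(double balance) {
                assert balance >= 0 : "negative balance: " + balance;
                return balance;
            }

            // Graceful-style check: always active, reports the problem, and
            // degrades to a safe value instead of dying.
            static double gracefulStyle(double balance) {
                if (balance < 0) {
                    System.err.println("WARN: negative balance " + balance + ", clamping to 0");
                    return 0.0;
                }
                return balance;
            }

            public static void main(String[] args) {
                System.out.println(gracefulStyle(-12.5)); // warns, then prints 0.0
                System.out.println(assertStyle(-12.5));   // without -ea: silently prints -12.5
            }
        }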
  • by FroMan ( 111520 ) on Wednesday January 19, 2005 @04:13PM (#11411913) Homepage Journal
    It's a bad assumption that you are taking a terribly long time implementing. The only painstakingly slow part of the process should be the design of the interfaces. The code after that point should be modular enough that a poorly performing module can be replaced while the interface stays the same.

    This makes the best use of developer time by producing "cheap" code for most of the work. The "expensive" code can then be written only where it is needed. Certainly there will be some rewriting as the expensive code replaces the cheap code, but that only happens where necessary. Also, a second pass through the implementation usually helps on its own, because by then the problem domain is better understood.
  • Yes...and yet, no. (Score:3, Insightful)

    by oneiros27 ( 46144 ) on Wednesday January 19, 2005 @04:59PM (#11412409) Homepage
    There's another myth about projects -- that the requirements were actually correct.

    Odds are, if someone is rushing you to get a project done on an unrealistic timeline, they haven't done their analysis of the project correctly either. Having _any_ prototype up and running can help drive the requirements analysis, so that you can figure out what needs to be changed.

    But yes, then you scrap that entire thing, so you can do it correctly.

    If you're making minor modifications to an existing system, then yes, you most likely wouldn't need a whole new prototype, but then again, you'd not be designing from the ground up either, I would hope. [unless you get some idiot manager who decides a new language is better, or you have to make some sort of fundamental internal change]

    Oh -- and if an outside contractor asks for only a couple of weeks of logs from the former systems, get rid of them -- even a couple of _months_ will not identify cyclic trends that may be present. [especially when you work for a university, and it's the summer]

    But be realistic about your goals for the project -- sometimes you're optimizing for CPU, memory, disk usage, or the programmer's time. Until you get _everything_ running, you won't know which one will be the bottleneck. [although prior experience can give clues]
  • by Godeke ( 32895 ) on Wednesday January 19, 2005 @06:38PM (#11413549)
    So, which "performance" are we optimizing for? Memory footprint? Disk access? CPU utilization? Network utilization?

    It turns out you rarely know which will bite your face off until you get a representative data set running. If you made the wrong choice, you probably made things *worse* than if you had opted for working code that could be re-factored from "easy to read" to "mostly easy to read and performs where it counts".

    When you are working on server-based solutions that will be hit hard, any of those could be your bottleneck -- or none of them.
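    A crude sketch of finding out which resource actually bites (the workload below is a stand-in): run a representative data set and watch wall time and heap together before deciding what to optimize for; disk and network usage need OS-level tools or a profiler.

        import java.util.ArrayList;
        import java.util.List;

        public class ResourceProbe {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long heapBefore = rt.totalMemory() - rt.freeMemory();
                long start = System.nanoTime();

                // Stand-in workload: replace with code driven by a
                // representative, production-sized data set.
                List<String> rows = new ArrayList<>();
                for (int i = 0; i < 1000000; i++) {
                    rows.add("row-" + i);
                }

                long elapsedMs = (System.nanoTime() - start) / 1000000;
                long heapAfter = rt.totalMemory() - rt.freeMemory();

                System.out.println("wall time : " + elapsedMs + " ms");
                System.out.println("heap delta: "
                        + (heapAfter - heapBefore) / (1024 * 1024) + " MB");
                System.out.println("rows held : " + rows.size());
            }
        }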
