How Do You Audit Science and Engineering Software?

Hilbert the Spherical Chicken asks: "I'm looking for any advice someone might have about how to review and audit software in use at an engineering firm. I've recently been asked to come up with a process for performing audits on analysis software, scientific and engineering models, etc. Our software is provided by both internal programmers and external vendors. Any ideas or experiences?"
  • I work for a consultancy producing engineering-based software. We follow the ISO 9000 procedures, and in particular the TickIT [tickit.org] procedures.

    I would say that you should follow these procedures for internally written software (you'll probably need to write your own layer of procedures to satisfy the TickIT recommendations; that's what we've done). For externally supplied software, wherever possible that software should be TickIT or ISO 9000 compliant. If not, you'll need to define your own testing procedures, especially if the software is critical to the performance of your product.

  • by bluGill ( 862 ) on Wednesday December 20, 2000 @05:26AM (#547018)

    I'm assuming you're looking for ways to prove the code does what it is supposed to and is bug-free. Your question was unclear about what you intended to achieve.

    Code reviews are a must. Several people who know the product get together for one hour a day to read and understand as much of the code as they can. (Experience has shown my boss that more than an hour at a time is too long; I believe he is right, but I've never felt like trying longer.) Code reviews are hard work. Everyone in the meeting should have looked over the code first, and they should understand what it should do. (Not necessarily how; that is what the meeting is for.)

    Six people seems about right for a code review. The programmer should be there to answer questions, but should otherwise keep his mouth shut. (This is hard; few programmers can resist "Yeah, I know that's a problem, I'll fix it," which leaves open the possibility that it won't be fixed right.) You need someone who can record minutes. We use a system where one person attempts to explain the entire program line by line (this can be the programmer) and the rest find fault. The meeting should run according to the needs of the reviewers. Sometimes we get through only 20 lines, sometimes over 1,000. Sometimes we start at the top and go down; other times we start at main() and skip around.

    This is a technical meeting, not a management meeting. Management should avoid attending, and when they do they should sit in a corner and watch.

    Programmers hate doing these reviews, and beforehand they NEVER recognize the value of them. Afterwards they agree that many problems were found.

    We find the best time for a code review is the moment the code first compiles. That is, instead of
    >make
    >./a.out
    we follow up the make with a code review. Yes, we find obvious bugs that would have been fixed in the first day anyway, but the code review will find several per hour, while the normal debugging process can take a day for each.

    Finally, code reviews appear to be what you were asking about. However, no amount of code review will solve the problems a poorly run project will create. You will still need several layers of testing, with internal and external people.

  • by deacon ( 40533 ) on Wednesday December 20, 2000 @09:36AM (#547019) Journal
    Find some (a lot of) basic problems for which the longhand solution is known, or for which you can calculate the solution. As a heat-transfer example, calculate heat conduction through spheres, radiant heat transfer between cylinders at odd angles, etc.

    Make sure that the program gives you an answer which agrees with the longhand solution. Vary the boundary conditions of your problem, and in particular check the limits of the boundary conditions (zero and, if appropriate, infinite values) to see if the program blows up in these cases (or gives bad answers).
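
    As a minimal sketch of the idea in Python (assuming NumPy; the little finite-difference solver below is only a stand-in for whatever package is actually being audited), compare a computed temperature profile through a spherical shell against the closed-form solution:

    import numpy as np

    # Closed-form steady-state temperature in a spherical shell with fixed
    # wall temperatures: T(r) = T1 + (T2 - T1)*(1/r1 - 1/r)/(1/r1 - 1/r2)
    def analytic_T(r, r1, r2, T1, T2):
        return T1 + (T2 - T1) * (1.0 / r1 - 1.0 / r) / (1.0 / r1 - 1.0 / r2)

    # Conservative second-order finite differences for (r^2 T')' = 0,
    # standing in here for the package under audit.
    def fd_solve(r1, r2, T1, T2, n=200):
        r = np.linspace(r1, r2, n)
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = 1.0; b[0] = T1      # Dirichlet condition at the inner wall
        A[-1, -1] = 1.0; b[-1] = T2   # Dirichlet condition at the outer wall
        for i in range(1, n - 1):
            rm = 0.5 * (r[i - 1] + r[i])   # face radii for the flux form
            rp = 0.5 * (r[i] + r[i + 1])
            A[i, i - 1] = rm ** 2
            A[i, i] = -(rm ** 2 + rp ** 2)
            A[i, i + 1] = rp ** 2
        return r, np.linalg.solve(A, b)

    r, T_num = fd_solve(0.05, 0.10, 500.0, 300.0)
    T_ref = analytic_T(r, 0.05, 0.10, 500.0, 300.0)
    err = np.max(np.abs(T_num - T_ref))
    print(f"max abs error vs longhand solution: {err:.3e} K")
    assert err < 0.1, "solver disagrees with the longhand solution"

    The assertion at the end is the point: the longhand check reruns identically every time the software changes.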

    If you have access to competing analysis software, you can create much more complex problems and check that the answers match between software packages.
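
    The comparison itself can be as simple as a tolerance check over matched result arrays. A sketch, assuming both packages can export nodal results as CSV sampled at the same points (the file names here are made up):

    import numpy as np

    # Hypothetical exports from the two packages, run on the same model
    # and sampled at the same locations.
    a = np.loadtxt("results_package_a.csv", delimiter=",")
    b = np.loadtxt("results_package_b.csv", delimiter=",")

    # 0.1% relative tolerance: far looser than machine precision, because
    # the packages use different meshes, solvers, and convergence criteria.
    if np.allclose(a, b, rtol=1e-3, atol=1e-9):
        print("packages agree within tolerance")
    else:
        worst = np.max(np.abs(a - b) / np.maximum(np.abs(b), 1e-30))
        print(f"packages disagree; worst relative difference = {worst:.3e}")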

    Try to create models with deliberate errors, and see if the software will find and remove those errors (zero-size cells, etc.); in particular, try to put in invalid boundary conditions and see if the software will point them out to you.
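
    One way to keep these deliberate-error checks from being a one-off exercise is to pin them down as regression tests. A sketch using pytest, where validate_model is a hypothetical stand-in for whatever pre-run checker the package exposes:

    import pytest

    # Hypothetical stand-in for the package's own model checker; a real
    # test would call the actual validation entry point instead.
    def validate_model(cell_volumes, boundary_temps_kelvin):
        if any(v <= 0.0 for v in cell_volumes):
            raise ValueError("zero or negative cell volume")
        if any(t < 0.0 for t in boundary_temps_kelvin):
            raise ValueError("boundary temperature below absolute zero")

    def test_rejects_zero_size_cell():
        with pytest.raises(ValueError):
            validate_model([1.0, 0.0, 2.0], [300.0])

    def test_rejects_invalid_boundary_condition():
        with pytest.raises(ValueError):
            validate_model([1.0, 2.0], [-10.0])

    If the software silently accepts the bad model, the test fails, and you have a concrete finding for the audit.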

    There are undoubtedly case histories around where certain types of problems caused numerical errors in the solution (the errors in the initial calculations of the stress profile of roller bearings come to mind). You should seek these out and try that type of problem, to see if the creators of your software learned from the mistakes of others.

    Is this a vast amount of work? Oh yes. But by choosing a well-rounded test suite of problems, you should find most errors.

  • While I kinda agree with this post, there is a high-level description that can be given without telling the questioner *how* to do it.

    Typically an audit is interested in two things: 1) do we have the right software/product, and 2) are we using it legally?

    2) you can handle with an inventory and license check.
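
    That part is mostly bookkeeping and is easy to script. A sketch in Python, assuming a hypothetical CSV inventory with name, version, license_id, and expires columns:

    import csv
    from datetime import date

    # Flags anything with no license on file, or a lapsed one, for the
    # audit report. The inventory format is an assumption for this sketch.
    def audit_inventory(path):
        findings = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                name = f"{row['name']} {row['version']}"
                if not row["license_id"].strip():
                    findings.append(f"{name}: no license on file")
                elif date.fromisoformat(row["expires"]) < date.today():
                    findings.append(f"{name}: license expired {row['expires']}")
        return findings

    if __name__ == "__main__":
        for finding in audit_inventory("inventory.csv"):
            print(finding)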

    1) is more complicated, but fortunately the auditor rarely has to do it (literally). He just has to prove that someone else has made that certification by a) proper tests in the case of internally developed software, or b) analysis in the case of externally developed software.
