Programming

Ask Slashdot: What Tools To Clean Up a Large C/C++ Project? 233

An anonymous reader writes I find myself in the uncomfortable position of having to clean up a relatively large C/C++ project. We are talking ~200 files, 11MB of source code, 220K lines of code. A superficial glance shows that there are a lot of functions that seem to be doing the same things, a lot of 'unused' stuff, and a lot of inconsistency between what is declared in .h files and what is implemented in the corresponding .cpp files. Are there any tools that will help me catalog this mess and make it easier for me to locate/erase unused things, clean up .h files, and find functions with similar names?
This discussion has been archived. No new comments can be posted.


  • rm (Score:4, Funny)

    by Anonymous Coward on Thursday February 05, 2015 @02:12PM (#48990779)

    How about "rm"?

  • Or laser from orbit!

  • by underqualified ( 1318035 ) on Thursday February 05, 2015 @02:13PM (#48990795)
    If your company is willing to pay for it, you can get something like Coverity. On the free (as in beer) side there are Cppcheck and Clang.
    • by Z00L00K ( 682162 )

      Programmers combined with food and beverage.

      Just declare which standard you want to reach first.

      But sometimes it's way easier to analyze how the current system works and then write a new one. Just figure out which parts actually contain useful stuff and use those as a template.

    • by tjb6 ( 3421769 )

      Coverity will certainly tell you a lot of things that are broken, but probably won't help you decide how to fix them.
      Brain power is probably the best approach to this one, although some automated detection of unused code and paths won't hurt.
      Any number of other static analysers will do the same job.

  • CLion (Score:3, Interesting)

    by Anonymous Coward on Thursday February 05, 2015 @02:14PM (#48990821)

    https://www.jetbrains.com/clion/

  • by Anonymous Coward on Thursday February 05, 2015 @02:14PM (#48990823)

    Seriously, that's mid-sized at best.

  • by Anonymous Coward on Thursday February 05, 2015 @02:15PM (#48990829)

    scan-build and scan-view from Clang will show you what is being used and what isn't, as far as static code analysis goes.

  • Easy (Score:2, Funny)

    by Anonymous Coward

    cd Large_Cplusplus_project

    sudo rm -r *

    sudo apt-get install java

    • Step 4: call the sysadmin and complain about the slowness of the new Java app running on hardware that was sized for the C++ code.
  • Document first (Score:5, Insightful)

    by gbjbaanb ( 229885 ) on Thursday February 05, 2015 @02:16PM (#48990855)

    So, figure out the layers or logical components between the modules, and then you will be able to chew on smaller chunks.

    Then, doxygen the whole lot, making sure to use dot to create the graphs for callers and callees. This will let you see the interaction points so you can see what impact a change in one method will have (i.e. which callers you have to check).
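
    A minimal Doxyfile along these lines turns those graphs on. This is just a sketch - the INPUT path is a placeholder for your tree, but the option names are standard doxygen settings:

    # Doxyfile sketch; INPUT is a placeholder for your source tree.
    PROJECT_NAME  = LegacyCleanup
    INPUT         = ./src
    RECURSIVE     = YES
    # Document everything, even code with no comments yet:
    EXTRACT_ALL   = YES
    # Use Graphviz dot to draw per-function caller/callee graphs:
    HAVE_DOT      = YES
    CALL_GRAPH    = YES
    CALLER_GRAPH  = YES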

    Some people will say "write unit tests" but frankly, it never works with a legacy code base; to effectively unit test you have to write your code differently from how you'd normally do it. You don't have that luxury here. So a good integration test suite should be developed to test the functionality of the whole thing; then you can repeat it to make sure your changes still work. It's not as instant as unit testing (but more effective), so you'll have to invest in a build system that regularly builds and runs the (automated) integration tests and tells you the results - and commit changes reasonably regularly so you can isolate changes that end up breaking the system.

    The rest of the task is simply hard work running through how it works and understanding it. There are no short-cuts around hard work, sorry.

    • Re:Document first (Score:4, Informative)

      by laughingskeptic ( 1004414 ) on Thursday February 05, 2015 @02:40PM (#48991233)
      This will find the static interaction points, but will miss the dynamic interaction points. He also has to watch for callbacks and methods present to satisfy oddball templates in C++, methods that will be invoked as a result of casts, etc.
    • Re:Document first (Score:5, Insightful)

      by bmajik ( 96670 ) <matt@mattevans.org> on Thursday February 05, 2015 @05:12PM (#48992791) Homepage Journal

      This.

      One of my first professional programming projects was to take a look at the custom C++ billing software our company had purchased from a contract programmer.

      I had a long unix and programming background, and was back for a summer job after doing 1 semester of C++ in college.

      My boss told me, since I was the only one who had C++ experience, to start documenting the system.

      At the time, we were using IRIX, and so I was using the SGI compiler and tools suite, which were, I believe, licensed from EDG. The point is that there was a very nice call graph visualizer. This was helpful for understanding things at a superficial level.

      However, what was even better was just running the program a bunch of times on test data and seeing what it did while under the debugger.

      While my summer began with the task of documenting the system, as I learned things I'd report them to my boss.

      By the end of the summer, I had re-written some fundamental parts of the system; I'd moved some of the processing outside, and I pre-processed and pre-sorted the data.

      The overall execution time went from many hours to about 45 minutes to calculate monthly bills. The key innovation was replacing the inner loop of the charge tabulation -- which was 2 or 3 levels of nested linked list traversal.

      Instead, I used the standard unix sort tools to pre-sort the data files before being loaded into the system, and I changed the system to use a data structure that supported a binary search.

      The majority of the code got left alone. By understanding the code under a debugger, and realizing that how it worked on production data was much different than how it performed on the test data it was originally delivered with, I was able to make a critical set of changes that had a huge impact.

      In general, I spend as much time as I can not writing code, but instead, understanding how the existing system works. For a current project, I've spent the last two weeks playing with somebody else's code, and now I've expanded it so that it can also operate on my data sets, and I've probably changed fewer than 100 lines across about 5 different projects.

  • by guruevi ( 827432 ) on Thursday February 05, 2015 @02:19PM (#48990893)

    Any decent IDE has the capability of pointing at least towards unused blocks of code and will generate a tree of function calls. I've worked with Eclipse and Xcode both of which have these capabilities. Even GCC (or another C compiler) can warn you about chunks of unused code or missing/bad header files. You can also rename functions across the entire codebase if necessary.

    If your code has warnings or errors, keep fixing until the warnings are gone. As for functions that do similar things but are named differently, that is a bit harder, because 'looks like they are doing the same thing' doesn't always mean they ARE doing the same thing (if they have the exact same code, you could perhaps solve that with statistical analysis or simply a text finder).

    Make sure that if you replace a function, it has the same behavior in all cases. Even mediocre developers have learned that reusing existing code is a "good thing", and different functions that do "the same thing" often have (often undocumented) edge cases where they behave differently (especially in C/C++, e.g. differences in signedness, memory mapping method, characters, etc.)
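
    For instance, two lookalike helpers that differ only in signedness can disagree on edge cases. A contrived sketch (not from any real code base):

    // Two "duplicate" clamp-to-byte helpers that are NOT interchangeable.
    unsigned char clamp_a(int v) {            // signed input
        if (v < 0) return 0;
        return v > 255 ? 255 : (unsigned char)v;
    }

    unsigned char clamp_b(unsigned int v) {   // unsigned input
        return v > 255u ? 255 : (unsigned char)v;
    }

    // clamp_a(-1) returns 0, but clamp_b(-1) returns 255: the -1 is
    // implicitly converted to 4294967295u, which is greater than 255.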

    • I haven't used the others much, but here I must recommend KDevelop for its code browsing capabilities. I have worked on several big C++ projects (mostly small changes, not full-on refactoring), and it really helps you get into the code quickly. It doesn't have much in the way of refactoring tools that I would know of, but it's _great_ for looking at code.

  • Risky (Score:2, Insightful)

    by Anonymous Coward

    This strikes me as a very risky undertaking. If there are a lot of functions/modules doing similar things, any attempt to combine many similar functions into one runs a huge risk of introducing bugs if you can't wrap your head around the entire program (which is unlikely imo). There is a huge time and budget risk in this endeavor.

  • by BlueKitties ( 1541613 ) <bluekitties616@gmail.com> on Thursday February 05, 2015 @02:22PM (#48990945)

    Seriously, you never know when some previous programmer made a "duplicate" function to do something bizarre, like force a particular initialization order of static class member variables between translation units. Sometimes deleting pointless code can do... terrible things. Just be careful, test your changes, etc.
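
    The construct-on-first-use idiom is a classic example: it exists precisely to force an initialization order, and naively "deduplicating" it into a plain global reintroduces the static initialization order fiasco. A generic sketch (not from anyone's code base):

    #include <map>
    #include <string>

    // A plain global map could be used before it is constructed if a
    // static initializer in another translation unit calls into it;
    // this function-local static cannot be.
    std::map<std::string, int>& registry() {
        static std::map<std::string, int> the_registry; // built on first call
        return the_registry;
    }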

    • Beware of 'cleanups' (Score:5, Interesting)

      by plopez ( 54068 ) on Thursday February 05, 2015 @03:53PM (#48992063) Journal

      Anecdote from the mists of time:

      There was this C program which had been around a while and had undergone some evolution and maintenance. The decision was made to 'clean it up'. There was a data structure, an array I think, which was unused in a subroutine; let's call it subroutine A. So it was removed. On the next test runs of the application, the program suddenly started core dumping. After some agonizing debugging, the crash was discovered to come from another subroutine; let's call it subroutine B.

      There had been an array in subroutine B which a loop had run over the end of. But subroutine A had been loaded just prior to B and had allocated memory for the unused data structure. This had provided enough slack to absorb the out-of-bounds accesses in subroutine B, but once the unused structure was removed, subroutine B began overwriting memory belonging to the rest of the program, causing the crashes.

      It was good that the crashes were easily reproducible, or it could have been one of those intermittent things that drive people insane. An automated tool may not catch things like that, since they may not show up until run time. It is C/C++ we are talking about, isn't it?
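
      In miniature, the trap looks something like this (a hypothetical reconstruction; all names invented):

      /* Hypothetical reconstruction of the anecdote; names invented. */
      static int padding[64]; /* the "unused" array that got cleaned up */
      static int table[16];

      void subroutine_b(void) {
          /* Off-by-one loop writes past table[15]. While `padding` sat
             next to `table`, the stray write landed harmlessly inside it;
             remove `padding` and the write corrupts whatever the linker
             places there instead. */
          for (int i = 0; i <= 16; i++)
              table[i] = 0;
      }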

      • by rrohbeck ( 944847 ) on Thursday February 05, 2015 @04:22PM (#48992297)

        I still have some superfluous debugging code in a project that literally does nothing in the production version but without it the code crashes randomly after a week or so; a classic Heisenbug. It's clearly data trashed by a wild pointer but I could never find who did it since it's a large multithreaded program that depends on hardware behavior. Neither valgrind nor Coverity were of any help. It's too big to be rewritten so we just have to live with it.

        • by Paul Fernhout ( 109597 ) on Thursday February 05, 2015 @07:35PM (#48994031) Homepage

          Debugging code that prints or logs may act to synchronize access to some data structure. Sometimes that can prevent a deadlock or illegal pointer access as a side effect:
          http://stackoverflow.com/quest... [stackoverflow.com]
          http://en.wikipedia.org/wiki/D... [wikipedia.org]

          So yes, complex programs can act in strange ways from seemingly minor changes.
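
          A toy illustration (nothing here is from the grandparent's program): the mutex that keeps log lines from interleaving also, as a side effect, serializes access to the shared counter, so deleting the "useless" logging re-exposes the race:

          #include <cstdio>
          #include <mutex>
          #include <thread>

          long counter = 0;
          std::mutex log_mutex;

          void log_and_inc() {
              // The lock is only "meant" to protect the log stream, but it
              // accidentally protects counter as well.
              std::lock_guard<std::mutex> lock(log_mutex);
              ++counter;
              std::fprintf(stderr, "counter=%ld\n", counter);
          }

          int main() {
              std::thread t1(log_and_inc), t2(log_and_inc);
              t1.join();
              t2.join();
              // Drop the logging (and with it the lock), and the two
              // ++counter operations become an unsynchronized data race.
          }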

          I spent a couple years helping maintain a large, complex multi-threaded app (which included message passing between the apps, for another layer of fun) which supported 24x7 operations where a minute's downtime could cost millions of dollars in some situations, and it was not easy. The code base was easily 10X to 100X of what the poster of the story is tasked with maintaining. Versions of the code had been in production for over fifteen years. Much of the code had been ported from C++ & Tcl to Java (although C++/Tcl systems remained), but the threading model was somewhat different between the two, and the port had not taken account of all the differences.

          It would have been nice to be able to rewrite some key parts of the system to make them more maintainable, but there was never enough time for that in a big way -- and realistically, bigger rewrites likely introduce new issues. Still, eventually we got most of the worst deadlocks and memory leaks and similar such things fixed, and the system got to the point where people stopped even remembering off-hand the last time a core part of the system needed to be rebooted (previously a fairly frequent event). But each deadlock could involve days, weeks, or even months of study and discussion, adding log statements, writing tests, lab tests, analyzing quite a few multi-gigabyte log files (and writing tools to help with that, including visualizing internal message flow), and so on. And, same as you mention, hardware and OS issues could interact with it all, making some things hard to duplicate under virtual machines for developers.

          One thing to note: to the end user, a system that is more stable may not look that different from one that is less so -- there are no new features, so it is not obvious what is being paid for.

          Although obviously if the program you support core dumps from a bad address or stack overflow, rather than just freezing up, it is probably something else. Still, even then, a bad pointer address can sometimes come from one thread freeing a data structure when another thread is still using it. The original C++ in the above-mentioned project was generally highly reliable, but it still had some odd issues too. In one rare case, memory was freed in an unexpected way under certain conditions by other code running in the same thread, but in code nested way deep in essentially recursive calls processing complex messages. I finally also traced part of that to what looked like maybe a bug in a supporting third-party library (a RogueWave data structure). Because that C++ code had been in production for years, and we were loath to change it at the risk of introducing new issues, we mostly "fixed" that issue by making changes elsewhere in the system to prevent that component from getting the pattern of data it had trouble handling. But we would not have known exactly what to change elsewhere without a lot of analysis.

          Sadly, just as we got it mostly working well, the new shiny thing of a mostly COTS system that did something similar came along to replace much of it (at a much bigger expense than maintaining the old, but granted with some nice new features).

          As I saw someone else comment recently about a "stable" OS, the end user generally cares more about how much work a system lets them get done than about how "stable" it is. A reboot can be acceptable, depending on the situation and the alternatives, even if not desirable. Erlang is probably the master of that approach of restarting code when it fails. :-)

  • Unit tests (Score:5, Interesting)

    by Midnight Thunder ( 17205 ) on Thursday February 05, 2015 @02:22PM (#48990947) Homepage Journal

    While I dislike writing unit tests, I have to admit they are useful in protecting your butt when something breaks, since the tests should catch it first. Of course, you need to decide whether in a particular scenario they add value or just make your manager happy.

    In a case like yours, you can make code modifications and hope nothing breaks, or build unit tests and ensure that you don't break any of them when refactoring. Initially, rather than just ripping out the seemingly duplicate methods, rip out/tweak their implementations and have them point to what seems like the right method to provide the common functionality. If your unit tests show breakage, then you know that you missed something.
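
    Concretely, the first pass can be as small as this (hypothetical names; [[deprecated]] is C++14, so drop the attribute on older compilers):

    // Collapse a duplicate by forwarding it; don't delete it yet.
    double computePrice(double base, double taxRate) {   // the keeper
        return base * (1.0 + taxRate);
    }

    [[deprecated("use computePrice")]]
    double calcPrice(double base, double taxRate) {
        return computePrice(base, taxRate);              // forward only
    }
    // If the tests still pass with every caller going through the
    // forwarder, the two really were duplicates and callers can migrate.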

    If you do things wholesale, then you are likely to break something in an unmanageable way. Oh and make sure things are version controlled ;)

    • Re:Unit tests (Score:5, Interesting)

      by gstoddart ( 321705 ) on Thursday February 05, 2015 @02:52PM (#48991387) Homepage

      I've maintained several legacy code bases over the years.

      And I will flat out tell you that unit tests have VERY limited utility in terms of understanding a mess of code you inherited. At least, in the beginning.

      Sure, you can start with a couple of basic premises, and you can convince yourself those basic premises still work.

      But the initial grokking of your code, understanding all places where a function may be used, understanding all of the tricky bits and gotchas, trying to understand why there are 9 functions which look like they do the same thing? That takes some time and effort, and quite possibly some tools.

      Unit tests are great for starting to build up a few things, and move towards better stuff ... but in a system which has several hundred (or several thousand) functions and interactions, resulting in really large numbers of code paths ... having a few unit tests describing the stuff you understand doesn't mean all of the stuff you don't understand wasn't broken, simply because you don't know what you don't know.

      So it is important to understand your new unit tests on legacy code are, at best, a VERY incomplete view of your code. That will improve over time, but you could potentially need to write a few thousand of them to be sure you're not breaking anything in the big picture.

      If you do things wholesale, then you are likely to break something in an unmanageable way. Oh and make sure things are version controlled ;)

      Oh, yes .... This .. for the love of god, this.

      You should learn how to tag branches and the like in your version control so you can identify a baseline of "before I ever touched anything" and then be able to cleanly build everything which predates you, as well as building your "after refactoring this part".

      Branching/tags/whatever your version control calls it -- that doesn't take up much space, so use them often, and consistently. Let the tool do the heavy lifting of keeping track of what you've changed.

      You do NOT want to find yourself unable to build it as it existed, or identify all of the diffs between what you started with and what you have.

  • graphviz (Score:3, Informative)

    by Anonymous Coward on Thursday February 05, 2015 @02:24PM (#48990977)

    graphviz can visualize the inter-functional and inter-file dependencies.

    It's free and built into the functionality of doxygen.

    I'd recommend re-commenting all the functions using doxygen - because to clean up a large project you need to know it.
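
    Even minimal doxygen markup pays off once the cross-references are generated. For example (a hypothetical function):

    /**
     * @brief Convert timer ticks to milliseconds.
     * @param ticks Elapsed ticks since start-up.
     * @return Elapsed time in milliseconds.
     */
    long ticks_to_millis(long ticks);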

  • by prefec2 ( 875483 ) on Thursday February 05, 2015 @02:26PM (#48991025)

    Modularize the software. There are a lot of tools which can help you analyze static dependencies in the code and identify components. You could also use a run-time analysis tool, for example Kieker, which was initially for Java, but there is an extension for C/C++.

    • Finally. Finally someone popped up who pronounced the word "component". Mod parent up into fucking heaven. Finafuckingly.
  • Git then doxygen (Score:4, Informative)

    by Ultra64 ( 318705 ) on Thursday February 05, 2015 @02:34PM (#48991161)

    You didn't mention a version control system, so assuming you aren't using one:

    Turn it into a git repository so you can easily back out of changes.

    Then run doxygen and start reading through the documentation.

    • by JustNiz ( 692889 )

      If you run doxygen on an existing codebase that was developed without doxygen support already built-in, all you get is a giant list of classes and member names, and empty spaces where any descriptions would go.
      This has little to no value in trying to understand existing architecture or functionality.

    • DOXYGEN ?? You MacOS punks ! Now get off my lawn, before I hose you with my emacs-generated documentation !
  • To be quite frank, what you need are man-hours. There are many tools out there that can help you find corners or edges to start working on, but you could do the same with a coin toss; no tool will significantly reduce the amount of man-hours that will have to be spent fixing, re-factoring and re-organizing. Take a good loooooong look, devise a simple strategy and then jump in somewhere. From personal experience: add lots of assertions as you go.

  • Few ideas (Score:5, Informative)

    by postmortem ( 906676 ) on Thursday February 05, 2015 @02:42PM (#48991271) Journal

    1. A modern IDE with a good gcc parser: Eclipse, Netbeans, 3rd-party paid ones. Not Visual Studio. You want it to build a call hierarchy tree for you, so that you can find methods that are unused. It will require some manual steps.
    1a. If you have $, Understand for C/C++ is a proprietary tool that will map a hierarchy of your code.
    2. Perform structural coverage analysis of the code in live action; it will help map the dead code. gcov is free if you can use it.

    • Call hierarchy is the word. I use that all the time with NetBeans; the built-in function for that is really awesome. Yields a lot of insight.
  • And crank up the warning level to help you find inconsistencies between headers and declarations. In fact, you might need to start by cleaning up header files.

    Doxygen can help you find truly dead code.

    Cloned code is a pain to deal with - I don't know how you fix that. I guess it depends on how much of it there is.

  • by Codeyman ( 1098807 ) on Thursday February 05, 2015 @02:45PM (#48991313) Homepage
    Along with Coverity, as one of the commenters suggested, you can compile the code with stricter compilation options (like -Wall plus -Werror in gcc, which will error out if variables/functions are unused, etc.); you would then need to go through each of these files manually and resolve all the issues. Tools like bcpp can help you make sure your complete code base follows a common coding standard. Apart from that, if the name of a function is not indicative of what the function actually does, there are no tools smart enough to help you with that. You'd need to do a lot of cleanup manually, by hand.
  • by OrangeTide ( 124937 ) on Thursday February 05, 2015 @02:53PM (#48991401) Homepage Journal

    You need to write a test suite to confirm what works and what does not work.

    Once you have tests, you can start running coverage tools (like gcov or Coverity).
    If your tests are not covering parts, you need more tests or need to consider removing that part of the code.

    When the tests are complete, then you can think about how to clean it up (refactor, rewrite, reorganize, or whatever word the cool programmers are using nowadays). You can use your compiler warnings as a lint, and start to work through the spammy build logs to eliminate all the warnings. A good goal is to have zero warnings, and after that to build with -Werror, which will cause builds to fail if any new warnings are introduced. (If you have 3rd parties or customers that build these libraries, you might not want to do that.)

    Another option that becomes available after writing proper tests is that you can make the decision to discard the entire project and start over from scratch. This is good if the requirements have changed dramatically over the years and a lot of messy hacks exist to support obsolete requirements. I must warn you though, usually rewriting is a waste of time -- time that is better spent understanding and fixing the existing code. After all, source code is just a text file, and you know how to edit a text file, right?

    • You need to write a test suite to confirm what works and what does not work.

      No, before you do anything you need to spend some time understanding what it does and sifting through the code for a LOT of hours. You need to understand the layout, the coding style, start to identify the bits which look like duplicates but which might not be.

      You need to be prepared to document the hell out of it, and be able to walk someone else through it -- if only as an exercise of "this is what I think I see, do you think yo

      • No, before you do anything you need to spend some time understanding what it does and sifting through the code for a LOT of hours. You need to understand the layout, the coding style, start to identify the bits which look like duplicates but which might not be.

        I don't agree that any of this is immediately important. It's putting the cart before the horse. You can review it when you are ready to make modifications. The existing software, documentation (we hope) and maybe header files are a good start. It's more important to understand how to use something before you try to understand the internal details, but really these two things tend to happen in parallel, in fits and starts, as we try to grasp the complexities of a large project.

        No, if that's even an option, you need to review, understand, and document it first. If you go off half cocked writing a test suite only to decide you are going to scrap the whole thing ... you've wasted your time writing the test suite.

        The new code must pass the

  • by plcurechax ( 247883 ) on Thursday February 05, 2015 @03:18PM (#48991677) Homepage

    See the Working Effectively with Legacy Code [slashdot.org] book review (2008) for a book [c2.com] of that title by Michael Feathers [objectmentor.com] (PDF article) on that very topic.

    There is even a summary of key points [stackexchange.com] at Programmers @ StackExchange. Hundreds if not thousands of programmers' blogs address [codebetter.com] this very topic.

    You're welcome. Now get back to work.

  • by Grincho ( 115321 ) on Thursday February 05, 2015 @03:24PM (#48991753) Homepage

    Wow, what an easy pitch. :-) At Mozilla, we've put together a tool called DXR ( https://github.com/mozilla/dxr... [github.com] ). It indexes your code and lets you do text and regex searches. But if you can get your project to build under clang, you can really have some fun, with queries that find...

    * Calls of a function (great for dead code removal)
    * Uses of a type
    * Overrides of a method
    * Uses and definitions of macros
    * etc., etc., etc. There are something like 24 different structural queries you can do.

    Because all of this is informed by the internal data structures of the clang compiler, it's nigh on 100% accurate (aside from more dynamic behaviors like sticking function pointers in a table and passing them around). You can also explore a hyperlinked version of the source, bouncing from #include to #include and drilling into methods.

    Here's how to set it up: https://dxr.readthedocs.org/en... [readthedocs.org]
    Here's our production instance you can play with: https://dxr.mozilla.org/mozill... [mozilla.org]

    If you run into trouble, pop into #static on irc.mozilla.org, and we'll be happy to help you.

  • by WinstonWolfIT ( 1550079 ) on Thursday February 05, 2015 @03:35PM (#48991873)

    First off, 220k lines of source isn't that big.

    You're not going to solve this with a big bang so get that idea out of your head. You're going to solve it gradually, and for a code base of that size it's going to take maybe a year of relatively slow improvement. Everyone on the team has to be on board, and every code review must include "What has been improved?" and "Did anything get worse? If so, that's not okay."

    1) Pick your battles. The code you're not changing is code that doesn't need to be looked at. Address your pain points as they come up.
    2) When you find a pain point while making a change, MAKE IT TESTABLE. Since you're in here making a usually simple fix, a single nominal test verifying that fix is fine. Testing anything else is a waste of time. Testable code will improve over time.
    3) If you can't make code testable because of an intractable dependency graph, welcome to the hell of "Design Dead". The only way out of this scenario is a rewrite of that area.
    4) Find your comfort level with regard to time boxing refactoring work. On my engagements, they just happen automatically, without explanation outside the team, nor apology to anyone. When estimating a piece of work, pad it with some extra time for cleanup. Only actually create work items for design dead areas. Your definition of done must include testable, tested and improved code.
    5) Duplicate code in itself isn't evil, and inconsistencies are simply inevitable. If you find duplicate code, pick one and deprecate the rest. However, code that is tightly coupled to the deprecated code will need to be refactored and if the coupling traverses an extended dependency graph, you'll simply have to live with the duplication and just stop adding to it.

    • In addition to those excellent suggestions, remember that grep is your friend. Nifty code indexers are all well and good, and might even be all you need if *everything* is c/c++/headers. I find that the larger the code base, the less likely that that's true. Write yourself some grep wrappers if the relevant files are spread around in some awkward manner.

  • Few suggestions (Score:3, Informative)

    by Anonymous Coward on Thursday February 05, 2015 @03:45PM (#48991983)

    -1-
    Install "OpenGrok" ( https://github.com/OpenGrok/OpenGrok ) and index your code.
    OpenGrok is the best source-code browsing option out there.
    Use OpenGrok to extensively read and understand your code base.
    Examples:
    Which files in the linux kernel call 'printk':
          http://lingrok.org/search?q=printk&defs=&refs=&path=fs%2F&hist=&project=linux-next
    Where is 'printk' defined?
          http://lingrok.org/search?q=&defs=printk&refs=&path=&hist=&project=linux-next

    -2-
    Use Clang's static code analyzer, 'scan-build' : http://clang-analyzer.llvm.org/scan-build.html .
    Depending on how good/bad the code is, there could be many false positives,
    but it will give you a sense of what's going on, and what to focus on.

    -3-
    Enable all possible compilation warnings (in either GCC or Clang).
    The more the better. Use "-Werror" to ensure you don't ignore them.
    Do it iteratively if needed by enabling more warnings, fixing what breaks, and repeating.
    A good list is here:
        http://git.savannah.gnu.org/cgit/gnulib.git/tree/m4/manywarnings.m4#n103

    Especially eliminate unused code and variables.

    -4-
    Analyze the McCabe complexity ( http://en.wikipedia.org/wiki/Cyclomatic_complexity ) of your code, using pmccabe ( https://people.debian.org/~bame/pmccabe/pmccabe.1 ).
    Focus on functions with a too-high score, and re-factor them.

    -5-
    Add automated tests to your program, and combine it with code coverage (lcov/gcov).
    In addition to the general good advice of 'try to increase coverage', focus specifically on code sections
    which are critical but not covered at all - write tests specifically for them.
    Having some tests is better than having no tests at all.

    -6-
    Decide on a code style (e.g. Linux kernel style, GNU style, any other style) and build shell commands to test it (i.e. a combination of grep/awk etc.).
    Newly committed code should adhere to the style. Use git hooks to enforce it.
    Existing code should be (slowly) refactored to the new style.
    Which style is a matter of personal preference, but having a consistent style across all code really helps.

    Ideally, it should be something as easy as 'make syntax-check' in GNU Coreutils.

    -7-
    With all of the above, integrate the tests into an automated system (e.g. autotools or cmake or just makefiles) that will allow you to run and re-run and re-run these checks easily.
    If it takes 10 shell commands to do static analysis - you'll be too lazy/busy/whatever to do it more than once.
    It should be as easy as 'make static-scan' or 'make coverage'.
    Investing in writing a good makefile is worth the effort.

    Good luck.
      - gordon

  • ....Is NOT anything like a large project.
    It's almost small.
    • by Ksevio ( 865461 )
      But it's large enough that you'd want to do some automated stuff to it first, not manually read over the whole thing.
  • by Dan Askme ( 2895283 ) on Thursday February 05, 2015 @03:49PM (#48992033) Homepage

    You should be running at warning level 4 when coding. It's good practice and helps prevent the issue you have now.
    It will give you a crapload of warnings (which are all worth fixing if you have the time), but it will highlight any unused variables and/or functions.

    in Visual Studio 2008-2013:
    - Project > Properties
    - Configuration Properties > C/C++ > General
    - Change "Warning Level(W3)" to W4

  • It's the only way to be sure....

    Seriously though. C++ is one of the most powerful, complete commercial languages.... with a code interface and syntax designed by Satan. You couldn't have *designed* a coding system that would better encourage missteps, fuck-ups, obfuscation and a plethora of errors.

    It's a product of 90s math nerds whose machismo came from knowing more and better than regular folks. It was never designed to get work done efficiently; it was designed to feed the egos of C++ programmers.

    Better

  • It's not a tool trick, but I have found it valuable in some projects to rename functions and variables to make them really say what they do. It's not rare that the name was a poor choice, or that its semantics changed over the evolution of the project. From my point of view, it's a kind of documentation.

  • That's one person's project for a year to write that volume of code.

    • by iamacat ( 583406 )

      Maybe that volume of crappy code! I would rather that person wrote low tens of thousands of lines which were good.

    • You must be joking (I half suspect you are); that's 1000 lines of code per day. The mythical man-month figure is 10 lines. Of course it depends on the language and the domain area, and whether you're hacking or following a depressive production line like Agile, but the larger a project becomes, the more time you spend on the inter-relationships to keep it well-architected, and the fewer lines you can add.

  • Small, for most people, is something with tens of kLOC or less, medium projects have hundreds of kLOC and large projects have millions of LOC.

    A large project would be something like the Linux kernel which has around 16 million LOC.

    I would advise using doxygen to have a global view of the codebase, some kind of lint like g++ -Wall, and a good editor preferably with refactoring support as tools. Plus static code analyzers and valgrind.

    First thing you should do is backups. Save the old codebase source reposito

    • Then you start working: remove dead code, fix indentation, do static and run-time code analysis to find bugs, then merge duplicate code. Start with the more mechanical parts first. Once you get that working and bug free, you basically have the new version 1.0.

      I didn't see anything about tests? How will you know it is "working and bug free" without them?

      In my experience, one of the most dangerous things you can do is to change working code in a mechanical way that should be safe. Whenever I do that, I always use something to make sure the code is unchanged. If the change is strictly cosmetic, something like Beyond Compare [scootersoftware.com] can be used for that. Otherwise, you need module tests that fully characterize the existing code via functional testing with complete condi

      • You assume the software is bug free to begin with.

        Of course you need to do minimal testing. As for complete test coverage good luck doing that when you don't even know what the code is supposed to do in the first place. Unit testing is fine when you are starting a project from scratch but not on something like this.

        Theoretically it should be possible to test if two pieces of code are functionally equivalent but I know of no tool which does this without annotating the code all over by hand first. I have expe

        • You assume the software is bug free to begin with.

          No, I assume that making ostensibly non-functional changes to functioning code is more likely to introduce bugs than to accidentally remove them. The primary goal at that stage is to not make it worse.

          Anyway, with all that experience you have with theorem proving and compiler design, etc, I can see why you don't bother with any automatic aids to assure that changes don't accidentally make things worse.

          In my own case, I often do what I call "code algebra", which are small refactorings that are intended to le

  • Easy!

  • a lot of inconsistency between what is declared in .h files and what is implemented in the corresponding .cpp files

    That's impossible unless you're talking about comments in the header files, or the implementation (.cpp) files don't include their own headers. Generally speaking, every .cpp file should include its own header as the first non-comment line of the file, so the compiler verifies the header is self-contained.

    Good:

    // foo.cpp, Copyright (c) 2015 Dynabone LLC
    #include "foo.h"
    #include <cmath>
    ...

    Bad:

    // foo.cpp, Copyright (c) 2015 Dynabone LLC
    #include <cmath>
    ...
    #include "foo.h"
    ...

  • That is about the only thing that will really help.

  • There are probably other tools, maybe even better tools, but it is what I know. I'd say try adding the whole thing to a C++ Visual Studio project. You can then turn options on to give you build errors for all unreferenced junk, find all references, etc. Other IDEs can probably do it too, but at least entry-level VS is free and I know it will do it, so ... The only issue you might have is if it is a *nix app or whatever; perhaps you'd get a lot of false errors because it won't conform to VC++. But I'm guessing their cl

  • Cleanup-for-the-sake-of-cleanup projects never work. The current code performs some function, and nobody can keep up enthusiasm reading bad code for months just to have it perform the same function in the end.

    Instead, you can gradually raise code quality by setting a high bar for new changes. For example, have each change reviewed by a couple of developers other than the author who are known for good style. If a new utility method is added, ensure that the code was searched for existing similar facilities. When legacy

  • I'm assuming you're here because this code is critical to your business, it works well enough today, and it can't be easily replaced. You need to keep it working as you go, but you desperately need to modernize it. There's a lot you can do to set yourself up for success, and it's not just tools.

    First, get it building in the most current environment available. Is it Visual Studio? Port it to VS2013. Is it Eclipse? Get it into 4.4. Is it not even in an IDE? Get it into one - they're a great timesaver.

"If it ain't broke, don't fix it." - Bert Lantz

Working...