Git Adoption Soaring; Are There Good Migration Strategies? 346

Got To Get Me A Git writes "Distributed version control systems (DVCS) seem to be the next big thing for open source software development. Many projects have already adopted a DVCS and many others are in the process of migrating. There are a lot of major advantages to using a DVCS, but the task of migrating from one system to another appears to be a formidable challenge. The Perl Foundation's recent switch to Git took over a year to execute. The GNOME project is planning its own migration strategy right now after discovering that a significant majority of the project's developers favor Git. Perhaps some of the projects that are working on transitions from other mainstream version control systems can pool their resources and collaborate to make some standardized tools and migration best practices documentation. Does such a thing already exist? Are any folks out there in the Slashsphere working on migrating their own project or company to a DVCS? I'd appreciate some feedback from other readers about what works and what doesn't."
  • Git links (Score:4, Informative)

    by Anonymous Coward on Saturday January 10, 2009 @06:24AM (#26397085)
    Why Git Is Better Than X [whygitisbetterthanx.com]
    YouTube - Tech Talk: Linus Torvalds on git [youtube.com]
    (yeah, I'm a convert)
    • Re:Git links (Score:4, Interesting)

      by Bananenrepublik ( 49759 ) on Saturday January 10, 2009 @07:51AM (#26397389)
      whygitisbetterthanx.com claims that mercurial doesn't have cheap branching -- the only advantage he sees git having over hg, leaving aside github. I'm surprised by this statement because I use hg branches every day. The things he describes can all be done straightforwardly with hg, so I'm asking: can anybody in the know tell me if and how git branches are in any way more powerful than hg branches?
      FTR I love hg, and I see no reason to switch to git, even though the whole bandwagon movement seems to have jumped on the git train.
      • Re:Git links (Score:4, Insightful)

        by Anonymous Coward on Saturday January 10, 2009 @08:01AM (#26397435)

        If you love Hg, there are no strong reasons to switch to Git. None whatsoever. By the same token, if you love Git, there are no strong reasons to bother switching to Hg, either. Both are *really good* at what they do, and they do it very well. Unlike, say, SVN, which really is for the i'm-too-dumb/lazy-to-use/learn-something-better crowd.

        And it is *trivial* to learn to use both git and Hg at the same time if you already know one of them, so you can work properly on projects that use either one without much fuss.

        That said, git is evolving a lot faster than Hg. Which could be a reason to either avoid git over Hg, or to avoid Hg over git, depending on your own view of such matters :-)

        • Re:Git links (Score:5, Insightful)

          by Anonymous Coward on Saturday January 10, 2009 @10:49AM (#26398159)

          Unlike, say, SVN, which really is for the i'm-too-dumb/lazy-to-use/learn-something-better crowd.

          I'm part of the "SVN is working just fine" crowd. If you want people to switch, being a condescending asshole is generally a great way to prevent that and make them more entrenched.

          Frankly, no one, including yourself, ever mentions a reason WHY I should switch. I think you should eat pickles for breakfast! Why? *silence* ... dumb ass.

          Until this article, I didn't even know it was a distributed system. I'm still not sure why I should care. I haven't cared enough to look into git because I'm busy, my time is important, and SVN works flawlessly for me. As far as I know, you're just the type of person that needs a high performance sports car to drive back and forth to work.

          So, the next time you're trying to convert people to your new religion by calling them imbeciles, you might want to consider throwing in at least ONE selling point before your vanishing attention span drags you off to a new shiny. K? Thx

          • Re:Git links (Score:5, Informative)

            by EsbenMoseHansen ( 731150 ) on Saturday January 10, 2009 @06:39PM (#26402309) Homepage

            In case you were looking for answers rather than abuse: I have used both. For me git does what svn does, plus the following, in order of importance (a rough command sketch follows the list).

            1. Local checkins, that is, you can check in even when offline (on the train, net down, whatever) and you can commit without the rest of the developers getting it. The latter is nice if, say, you have to write an email to tell the other developers first.
            2. If you want to check out your code on some meaty machine to test, without pushing the commit to the central server just yet
            3. Having a full tree locally, which means that history-dependent operations work offline and fast
            4. Local branches - the advantages are quite obvious

            Of course, this is for me, and all points might be irrelevant for you.
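
            A rough sketch of what points 1 and 4 look like in practice (commands from memory; the branch and file names are just placeholders):

                # point 1: commit locally while offline, publish later
                git add src/foo.c
                git commit -m "Fix the frobnicator"   # recorded locally, nobody else sees it yet
                # ...later, when online and ready to share:
                git push origin master

                # point 4: a cheap throwaway local branch
                git checkout -b experiment            # create and switch to a local branch
                # ...hack and commit as often as you like...
                git checkout master
                git branch -D experiment              # drop it if the experiment didn't pan out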

            • Re: (Score:3, Insightful)

              by ckaminski ( 82854 )
              SVK. And you get compatibility with SourceForge repos, and tools like TortoiseSVN and subclipse.
          • Re: (Score:3, Informative)

            by prockcore ( 543967 )

            Frankly, no one, including yourself, ever mentions a reason WHY I should switch.

            I can give you the reason why I switched.. which may or may not help.

            With SVN, I found that branching was so involved that I wouldn't do it. Instead, I would check out code, work on it, and wouldn't check it back in until I had completed whatever I was doing... which may be days or weeks away.

            Checking code back in with SVN almost became a "release".

            With Hg, I pull down a copy of the code, make changes, commit those changes to m

          • Re: (Score:3, Informative)

            Ignore the fanboys. If anything, use them as statistical evidence that there might be something worthwhile here. :-)

            Why git for a SVN user? There's nothing better than trying it for yourself (git-svn clone svn://whatever, then hack on it with git, then git-svn dcommit). But until then, two big points:

            1. It's distributed. I can make lots of commits without pushing them somewhere public, which is good for the same reasons that hitting "Save" often is good, without being worried that I've broken the build for

    • Re:Git links (Score:5, Informative)

      by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Saturday January 10, 2009 @08:17AM (#26397501) Homepage

      Some things git is bad at:

      - no partial cloning, so a big history means lots of stuff to download; this is especially bad when it comes to binary files
      - no way to download just a single file or directory, a user always has to clone the whole repository

      • Re:Git links (Score:4, Informative)

        by Drinking Bleach ( 975757 ) on Saturday January 10, 2009 @08:45AM (#26397635)

        For problem one: git clone --depth 1 (or however far back you want your history to go); note this severely cripples git's abilities and isn't very useful at all unless you're still on dialup.
        For problem two: this isn't a real problem with git, but rather with your organization. Multiple projects don't belong in the same repository, it's as simple as that.
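
        For what it's worth, a shallow clone looks something like this (the URL is just a placeholder):

            # grab only the most recent revision's worth of history
            git clone --depth 1 git://example.org/some/project.git
            # --depth 5 would keep the last five commits instead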

        • Re:Git links (Score:5, Informative)

          by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Saturday January 10, 2009 @08:59AM (#26397687) Homepage

          Problem two *is* a problem with git; it has nothing to do with how you organize a project, since you can never guess what a user might want. Simple example: I would like to look at the latest version of a file in the linux kernel, but with git I have to download the whole beast when all I want is a single file, which is neither pretty nor fast.

          • Re: (Score:3, Interesting)

            Comment removed based on user account deletion
          • Re:Git links (Score:4, Informative)

            by Drinking Bleach ( 975757 ) on Saturday January 10, 2009 @09:21AM (#26397777)

            I hadn't really thought of that; I had assumed you were referring to Subversion's rather common case where multiple projects are stored in the same repo, and you check out different directories to access one of them.

            Anyhow, most, but not all, public git servers have a gitweb or similar attached, which will at least let you browse and download files from the tree if you need to. For example, grabbing the latest README of Linus' Linux tree can be had via http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob_plain;f=README;hb=HEAD [kernel.org]

            Git itself doesn't provide any mechanism for it, but it's fairly unusual to be interested in a specific file rather than the entire project.

            • by CarpetShark ( 865376 ) on Saturday January 10, 2009 @10:04AM (#26397953)

              You're quite right. It seems to me that Git isn't intended to provide access to the latest versions of individual files. Git, like all DVCS's I know of, is essentially a version-control plugin for your filesystem. The filesystem itself provides the current version you're working with, and so it's only a matter of providing an http server over the directory or something like that. Which is exactly what GitWeb and Trac-Git provide, only they're better.

            • Re: (Score:3, Interesting)

              by fahrbot-bot ( 874524 )

              it's fairly unusual to be interested in a specific file rather than the entire project.

              Except if one is simply reviewing a specific file or files - for a code review, debugging, or copying pieces to another project. I do this all the time when helping others on their projects. I don't need (or want) the whole hot-mess...

          • Re: (Score:2, Informative)

            by maxume ( 22995 )

            Unless you are intensely involved in a project, browsing for the single file using some alternative interface is probably going to be easier. E.g.:

            http://github.com/github/linux-2.6/tree/master/kernel/fork.c [github.com]

            That doesn't work if the repository doesn't have an alternative interface though (but for projects you are involved in, the download only needs to be the difference between the internal and external repositories, not the entire history).

        • Re:Git links (Score:4, Interesting)

          by Bromskloss ( 750445 ) <auxiliary,address,for,privacy&gmail,com> on Saturday January 10, 2009 @11:02AM (#26398245)

          For problem two: this isn't a real problem with git, but rather with your organization. Multiple projects don't belong in the same repository, it's as simple as that.

          I have been wanting to start with Git, but I find it too hard to know what should go into different repositories and what should be in the same.

          First example: I might be writing a book in book/ and keep all images in a subdirectory book/images/. I think it is not far-fetched that I might want to work on only the images without downloading all the other, possibly huge, subdirectories.

          Second example: Say I write a scientific article for which I compute a lot of numerical data. Then I write a second article, which builds upon the same data. Should the two articles go into the same repository, so that I can easily pull and compile everything at once with all dependencies in place, or should they be kept separately, so that I can work on the first article without dragging the other one along?
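
          For the first example, one possible layout (assuming the images really do belong in a repository of their own; the paths and URL here are made up) is to keep book/images/ as its own repository and pull it into the book as a submodule:

              # in the book repository
              git submodule add git://example.org/book-images.git images
              git commit -m "Track the images as a submodule"

              # someone who only cares about the images clones just that repository;
              # someone who wants everything clones the book and then runs:
              git submodule update --init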

    • Re: (Score:3, Interesting)

      by mgiuca ( 1040724 )

      I'm a Bazaar fan. That isn't to say I'm not a Git fan, I just prefer Bazaar (by a small margin, for a handful of reasons).

      That website makes a really good case, but I think they should remove "Bzr" from the "Cheap Local Branching" section. I could s/git/bazaar that entire section and it would still be almost correct.

      Bazaar has a totally different view of branches, but it gives you all the same flexibility as Git. The only thing is that Bzr branches are full copies of the entire repository - so they aren't "

  • by MichaelSmith ( 789609 ) on Saturday January 10, 2009 @06:25AM (#26397089) Homepage Journal
    In my day job we are migrating to something totally new... ClearCase.

    (shit).
    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Saturday January 10, 2009 @08:07AM (#26397451)
      Comment removed based on user account deletion
      • Re: (Score:3, Funny)

        by GooberToo ( 74388 )

        How to identify managers promoted beyond their capability?

        They recommend, or worse, force a migration to ClearCase; usually based on lies to their management about the existing solution.

    • by JamesP ( 688957 )

      The solution to clearcrap is a baseball bat

      I swear to God, last time someone suggested Clearcase at my job I said "OVER MY DEAD BODY"

      • Re: (Score:3, Insightful)

        Comment removed based on user account deletion
        • by JamesP ( 688957 )

          I agree. This reminds me of that false myth: "The USA spent $1M on a pen that works in space; the Russians used a pencil."

          Another thing I tend to notice: software companies that use CC WILL GO BANKRUPT

          Not maybe, WILL.

          Companies where software is not what the company essentially sells, or only supports their operation, can stand it (pretty much like a benign cancer)

      • by David Gerard ( 12369 ) <slashdot.davidgerard@co@uk> on Saturday January 10, 2009 @09:24AM (#26397787) Homepage
        I foolishly touched Clearcase at work in 2001 and left it on my CV until 2004. I still get calls from pimps desperate for a Clearcase admin to work in deepest suburban Berkshire for GBP30k. (I'm currently a sysadmin in London on GBP40k.) It's one of those cursed words on a resume - if it's ever on your resume, it'll taint it forever and those are the only calls you'll get.
    • Have you put together a clear, serious, constructive proposal for an alternative that spells out the pros and cons? If you haven't, it's your own fault really. But if you have, and they don't listen, then yeah, "shit" about sums things up. At any other time, I'd say find better bosses and quit in that situation, but few have that option at the moment.

    • Re: (Score:3, Interesting)

      by ThePhilips ( 752041 )

      I feel your pain. I'm in the same boat. You can't work with CC effectively without 20-30 helper scripts. Hijacked/checked-out files are a major pain. Dynamic views are a great feature yet completely useless.

      Though that still doesn't mean you can't use Git as a local tool.

      I used RCS before (the ci and co commands) to preserve the history of my modifications locally. Now, due to various circumstances, I've moved to using Git locally and it works quite well.

      After "ct update" (alias ct=cleartool), you go to directory

      • Re: (Score:3, Interesting)

        by jgrahn ( 181062 )

        You can't work with CC effectively without 20-30 helper scripts.

        Where do people get ideas like this? I use CC effectively with one trivial Perl script. It converts "my feature is on this branch off this label" descriptions into config specs -- raw config specs are too complicated to handle, so you need a layer above them which matches your CM process. Yes, IBM/Rational should explain that to their customers. Or maybe make UCM not suck.

        Hijacked/checked-out files are a major pain.

        Then you're not branching, lik

        • Re:Meanwhile... (Score:4, Interesting)

          by ThePhilips ( 752041 ) on Saturday January 10, 2009 @05:00PM (#26401387) Homepage Journal

          You can't work with CC effectively without 20-30 helper scripts.

          Where do people get ideas like this? I use CC effectively with one trivial Perl script. It converts "my feature is on this branch off this label" descriptions into config specs -- raw config specs are too complicated to handle, so you need a layer above them which matches your CM process. Yes, IBM/Rational should explain that to their customers. Or maybe make UCM not suck.

          What about a normal diff? CC still doesn't allow you to use an external diff program. And "ct diff" insists on two files - it can't diff a hijacked file against the original.

          What about a normal recursive diff of two branches?

          What about a patch generator, so that you can back up your unchecked-in changes?

          What about a change log? A recursive change log showing changes for all files in a directory?

          How about converting the change history into a set of patches, to allow easier investigation of regressions?

          The moronism with R/O files? All extracted/"ct get -to" files are marked R/O.

          And this is off the top of my head. For all of that I have scripts. And with the scripts, I'd say, CC isn't half bad.

          But to the point of original question, with Git I would not need any of the scripts.

          Hijacked/checked-out files are a major pain.

          Then you're not branching, like you're supposed to do, and a hijacked file is the *least* of your problems. You cannot use CC as if it was CVS; a dynamic view is not a sandbox if you set it up to silently show other people's possibly incomplete changes.

          We do branching, and hijacked files are not a problem per se. It is just that the better half of the CC tools, when given a hijacked file as a parameter, simply say "f-off, this is a view-private file."

          In some situations checked-out files are even worse, since CC treats checked-out files like files on a special branch. Consequently half of the CC tools accept the file as a parameter, yet show no information about it whatsoever.

          Git doesn't draw any distinction between your files and the files in the repo. At any time you can do whatever you like with any accessible file/revision.

          Dynamic views are a great feature yet completely useless.

          Use them correctly for a few years, then report back.

          Care to elaborate on "correct" usage pattern then?

          People tried them in the company a few years ago and pretty much abandoned them. They are still accessible, yet generally unused. Our CC admins would be happy to know the "correct" usage for them.

          You can't index a dynamic view - because it contains all possible VOBs and all possible files. And I do not want to deal with the 150K files of the whole project; I need only the 3.5K files belonging to my part.

          You can't compile in a dynamic view - because even if only a dozen people compile simultaneously, the CC server simply dies under the load.

          Heck, a simple "ls" spits a bunch of errors on screen every time, because a dynamic view can't properly show a branch, but shows all files on all branches (readdir() lists all of them). And if a file happens not to be on the branch of the dynamic view, stat()ing it gives you an error.

          If you can't do development with them, what else can you do with dynamic views?

          In the past I used dynamic views solely for semi-automatically porting (with a script) trivial fixes onto many branches. For more than that, dynamic views are useless.

          Please reveal the secret: how do I use a dynamic view "correctly"? Many people in my company would be happy to know it too.

    • Re: (Score:3, Informative)

      by ewhac ( 5844 )
      We had ClearCase at MOTO. Complete mess. Nobody -- not even the admins -- could explain how it worked or how to use it. It was also supported only on an ancient version of RedHat Enterprise Linux, since it required a binary filesystem blob to support its version-tracking filesystem. If you didn't have exactly the kernel version it was looking for, ClearCase was simply not available to you. (You'd think, given the sums of money involved in procuring and deploying ClearCase, that Rational/IBM would offer
  • by TapeCutter ( 624760 ) on Saturday January 10, 2009 @06:45AM (#26397159) Journal
    In the UK and to a lesser extent here in Australia a "git" is akin to a moron.
    • Re:Adopt a git... (Score:5, Informative)

      by WarwickRyan ( 780794 ) on Saturday January 10, 2009 @06:51AM (#26397175)

      It's more than just a moron, it's a nasty, stubborn, self-centred and selfish moron.

      "Our neighbour is a right old git" could be used to describe an elderly neighbour who, say, regularly blocked your driveway because your car got in the way of the sunlight on his garden.

      The old neighbour from Dennis, or Victor Meldrew from One Foot in the Grave, are both fine examples of gits.

      It's like a weaker version of the c-word.

    • Re:Adopt a git... (Score:5, Informative)

      by Wild Bill TX ( 787533 ) on Saturday January 10, 2009 @07:05AM (#26397233) Homepage
      Quoth Linus Torvalds, "I'm an egotistical bastard, and I name all my projects after myself. First Linux, now git." :)
    • Not a moron really, at least in terms of UK usage. Stubborn and irritating, yes, but there is not really an implication of idiocy. The phrase "stupid git" is quite common and isn't a tautology. The OED rather bloodlessly defines "git" as "a worthless person".

    • by arevos ( 659374 )

      In the UK and to a lesser extent here in Australia a "git" is akin to a moron.

      Actually, git is more akin to "bastard" or "son of a bitch". You can say to someone "he was a clever old git" without it being considered an oxymoron.

      Incidentally, Linus claims he named Git after himself.

      • Re: (Score:3, Informative)

        by TapeCutter ( 624760 )
        GP here, or "useless git" as my father used to say. The word originally comes from foundry workers in the north of England: a "git" is the bit of metal left on the cast where the hole in the mold was. The original metallic git was useless and in the way. - At least that's how the BBC's Time Team tells it.

        I wonder if Linus knows?
      • by ettlz ( 639203 )
        I think it's a corruption of "get" implying "ill-gotten" or something. See http://en.wiktionary.org/wiki/git#Etymology [wiktionary.org]
    • We also have GIMP. I think there's some kind of competition going on ;)

  • by James Youngman ( 3732 ) <jay.gnu@org> on Saturday January 10, 2009 @06:57AM (#26397205) Homepage

    If the system you are migrating from manages trees, you should be fine. CVS migration is pretty easy and I understand that Perforce works quite well too (in both directions!). Most of the migration tools are listed in the GIT FAQ [git.or.cz].

    The place where people are likely to have trouble is migrating from tools that don't understand that there's more than one file. For example, RCS and SCCS both support branches, but in a completely different way to git (branches are per-file, not for the whole repository). This means that during conversion, something useful has to happen with them, but the right answer isn't clear to a program. If versions 1.1, 1.1.1.1, 1.1.1.2 and 1.2 of file "foo" exist, then versions 1.1.1.2 and 1.2 are on different branches and either may be the older revision. It's not clear whether revision 1.43.1.3 of file "bar" is on the same "branch" as "foo" 1.1.1.2 or not. Because RCS and SCCS deal with single files only, it's not possible to find an answer to these questions in the history files at all - if there is an answer, it's just a convention of the user. Essentially what's happening here is that the git import process requires information which isn't represented in the files you're converting from. For what it's worth, migrating from SCCS or RCS to CVS has a similar problem.

    Personally, I've migrated from CVS to git for findutils (well, Jim Meyering did the actual migration; he migrated coreutils too). I haven't regretted migrating at all. It took me a long time of using git locally before I was comfortable migrating the master repo, though. As a git beginner the thing I found most worrying was that I found it hard to envisage the effect of the git command I was typing. The thing it took me a long time to figure out is that with a distributed version control system, it's safe to screw up your local copy, as long as you don't push the result.
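
    Just for reference (this isn't necessarily how findutils was converted): the cvsimport route, one of the tools listed in that FAQ, looks roughly like this for a small module; the CVSROOT and module name below are placeholders.

        # one-shot import of a CVS module into a fresh git repository
        git cvsimport -v \
            -d :pserver:anonymous@cvs.example.org:/cvsroot/project \
            -C project-git \
            modulename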

    • by IamTheRealMike ( 537420 ) on Saturday January 10, 2009 @08:10AM (#26397469)

      Right, that was always the weakness of git, and although it's improved I still have problems with its usability (or lack of it). For all the dumping Linus does on Subversion/Perforce and its ilk, they are easy to understand and it's basically always clear what you're doing. I haven't used git for a while, but last time I did it was like a box of sharp knives. Although hard to mess up the remote copy, messing up your local copy was much easier.

  • strategy (Score:4, Informative)

    by carnicer ( 1449311 ) on Saturday January 10, 2009 @07:11AM (#26397251)
    I have migrated many repos to svn from older stuff, like SCCS and VSS. Migration strategies are important, and to decide on one you need to answer a few questions. First of all, ask yourself if git or a DVCS is the best option for you, your project and your company. Just don't be led by hype. It may be that a centralized VCS like svn is a better option. There are tools to make centralized systems perform as a DVCS, like the git-svn and bzr-svn plugins. Second, ask yourself how much of the project history is needed, if any is needed at all. That may save you lots of time, disk, chaos and entropy. When migrating, it is very important to tidy up the repo. Purge unnecessary files, binaries, archives, branches, etc. I have seen people who use VCSs like a trashcan. Bad practices can sink the repository's performance. After migrating, make sure users know how to use the repo and that they understand the basic VCS concepts, either generic or specific to the VCS of choice. Try to wean them off practices and concepts from the older VCS. As you mentioned, best practices are very important, and they are not easily found in the literature. There is more I could say, but I guess by now it's ok.
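
    For instance, the git-svn route mentioned above looks roughly like this (the repository URL is just a placeholder):

        # work with git locally against a central Subversion server
        git svn clone -s svn://svn.example.org/project project
        cd project
        # ...hack and commit locally as often as you like...
        git svn rebase     # fetch new svn revisions and rebase local work on top
        git svn dcommit    # push local commits back to svn, one svn revision per commit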
    • Comment removed (Score:4, Informative)

      by account_deleted ( 4530225 ) on Saturday January 10, 2009 @08:02AM (#26397441)
      Comment removed based on user account deletion
      • Re:strategy (Score:5, Insightful)

        by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Saturday January 10, 2009 @08:26AM (#26397549) Homepage

        DVCS is in concept always better than centralized VCS, since it offers all the same features plus a lot more. However there are things that git handles pretty badly. Binary files are such a thing: you really wouldn't want to keep large binary files in git, since git forces you to download their entire history, unlike SVN which allows you to download only the latest version. Another thing missing from git is a way to check out just a single directory or file; when I am just interested in the latest version of a single kernel module it's kind of annoying being forced to download the whole kernel source tree.

    • Re: (Score:2, Troll)

      by drinkypoo ( 153816 )

      As just some schmuck who downloads sources over the internets for compilation, I will give my POV: git is shit! So far I have done pulls from cvs, cvsnt, svn, git, and mtn that I know of. Of these, the only one worth one tenth of one shit from the user's perspective is svn. Why is that? Because every other remote source code control system will happily corrupt your copy of the repository the majority of the time that there is some communications problem. In the case of svn, you can at least almost always re

  • by alexibu ( 1071218 ) on Saturday January 10, 2009 @07:38AM (#26397331)
    I have watched Linus talk about git on google tech talks, and am inspired to use it.
    Unfortunately I think I need a tool like TortoiseSVN for git because I am a git.
  • Git + Eclipse (Score:5, Interesting)

    by david.given ( 6740 ) <dg@cowlark.com> on Saturday January 10, 2009 @08:14AM (#26397487) Homepage Journal

    I'm trying to use git as much as possible --- I'm still pretty crappy at doing anything even slightly complicated with it, but even with minimal skills it's brilliant at keeping track of changes to local directories.

    The only problem is that I'd really, really like a decent Eclipse git plugin. I'm used to using Subclipse for SVN, which is fantastic: I can point at a file or directory, say 'Synchronise with repository', and then get a graphical diff of every change and the ability to quickly and easily revert or commit changes on a per-change, per-file, per-group-of-files basis, etc. (And you can do this with any revision, which makes backing out one specific change very easy.) Doing the same with git's command line tools seems terribly clunky by comparison, especially when I'm struggling to remember the syntax and the fundamentally unfamiliar workflow.

    I do use the Eclipse git plugin from git.or.cz, but it's still very crude. The file decoration, which lets me see at a glance which files are new/changed/pristine in the Eclipse project view, is invaluable, but actually trying to *do* anything with it is deeply unpleasant --- no synchronise view, no graphical diff, and some weird behaviour: if you point at a file and say 'commit this', you get a dialogue prompting you to commit *all* files. Which is not what I want. And there's lots of UI clunkiness all round, due to simple immaturity.

    I've had some luck with giggle, but the UI is pretty bad, and some changes (I forget what; new files, perhaps) don't show up in it, which is a bit awkward. I've had a play with some other GUI frontends but they're all pretty nasty by comparison with Subclipse. Still, the git plugin is getting better with time --- I'm just hoping that Synchronise shows up soon...

    • Re: (Score:2, Informative)

      by Jraregris ( 829815 )
      When using git with Eclipse I've found it to be most useful to use the git plug-in, but use the command line for actual git action. The plug-in displays useful information about file changes and branches, while the command line offers quick, clean and expressive execution of git commands.
  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Saturday January 10, 2009 @08:15AM (#26397491)
    Comment removed based on user account deletion
  • by JensR ( 12975 ) on Saturday January 10, 2009 @08:24AM (#26397529) Homepage

    I used to use cvs, subversion and perforce. After switching to git, it feels a lot more powerful, at the cost of more things that can go wrong.
    My workflow with subversion was:
    - regular update: update, check/fix conflicts, continue work
    - commit: update, pick files I want to commit with TortoiseSVN, verify the changes in the diff view, write log message, commit, continue work
    With Git (a rough command sketch follows below):
    - regular update: stash my changes, change to master branch, pull, check for errors or dirty files (mostly endian problems), switch to work branch, rebase from master, check for errors or dirty files, unstash my changes, check for errors or dirty files, continue work
    - commit: update, stage the files I want to commit, commit them, verify the changes, push
    At several stages some obscure thing could go wrong that I needed to look up in the manual or on the internet, or ask someone who had used it for longer. That doesn't mean I think Git is bad, I just feel it takes more time to become fully productive with compared to older systems. And I miss a few minor things from svn, like keyword expansion or properties.
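
    The "regular update" above, spelled out as commands (the branch names are just examples):

        git stash                # park uncommitted changes
        git checkout master
        git pull                 # update master from the central repo
        git checkout work        # back to the topic branch
        git rebase master        # replay local commits on top of the new master
        git stash pop            # restore the parked changes
        # ...checking for conflicts/dirty files after each step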

  • by Qbertino ( 265505 ) <moiraNO@SPAMmodparlor.com> on Saturday January 10, 2009 @08:30AM (#26397565)

    I've just gotten fluent with SVN versioning after using it for a few years now. I understand that part of what bothers me about it is what bothers Linus Torvalds as well and had him write Git; I can follow his reasoning in his famous Google talk on Git, *and* I understand the hype that has ensued around Git and welcome it. That said, until Git has a neat set of stable, mature GUI tools to use with it, I'm sticking with Subversion, TortoiseSVN and/or Subclipse - a mature, working toolkit that I know how to handle and that works more or less flawlessly.
    Even a toolset like that would've cost thousands ten years ago. That goes to show how far we have come in some areas of the software field.

  • Mercurial vs. Git (Score:5, Interesting)

    by rpp3po ( 641313 ) on Saturday January 10, 2009 @09:50AM (#26397915)
    I'm working on the OpenJDK source tree through Mercurial. I couldn't be more satisfied. The tools are well structured, very easy to use, stable, fast and well documented. I don't miss any feature. Could anybody who has tried both and prefers Git list some advantages of it over Mercurial? To me Mercurial just seems like Git done right, without the hype and the overly complex UI.
    • Re: (Score:3, Interesting)

      by dozer ( 30790 )

      Last I used Mercurial, I couldn't create a feature branch. I had to clone the whole damn repo. Well, either that or I could create a branch that lives forever, not a good idea if I'm doing a crazy experiment.

      In Git, if I want to try something out, I create a feature branch (takes basically no time or disk space) and hack away. If it sucks, I just delete the branch. It's a very nice way to work once you get used to it.

      Have they improved branching in hg?
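
      For the curious, the git side of that looks roughly like this (the branch name is made up):

          git checkout -b crazy-experiment   # cheap: just a new pointer, no copying
          # ...hack and commit freely...
          git checkout master
          git branch -D crazy-experiment     # experiment failed? it's gone
          # or, if it worked out: git merge crazy-experiment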

  • I am using Darcs [darcs.net] and it seems to do the job. It is strange that it isn't even mentioned, because it has been around for quite some time and is pretty mature. The only problem I am having with Darcs is huge resource consumption (a copy of the repository is on a VPS with 256MB RAM, no swap), but you can move a repository by just copying it somewhere else (even across systems) without problems. What are the advantages of using Git/Mercurial/Bazaar? I think I need to mention that I am developing on OSX (but a copy
  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Saturday January 10, 2009 @11:30AM (#26398419)
    Comment removed based on user account deletion
  • by Anonymous Coward on Saturday January 10, 2009 @11:39AM (#26398477)

    Many of us develop on Windows, pushing out code to Linux platforms. Git just isn't an option. Poor support for IDEs, especially Eclipse.

    Bazaar has been working great for me. Used Mercurial with success as well.

  • by e40 ( 448424 ) on Saturday January 10, 2009 @12:51PM (#26399129) Journal

    Neither worked on my 18-year-old CVS repo (which was populated with 7-year-old RCS files). What I did find was fromcvs [dragonflybsd.org]. I found a couple of bugs, which the author fixed very quickly. It is also fast: my 3.5G CVS repo was converted in about an hour. Both of the others took 10+ hours (and didn't produce usable output). The biggest reason I love it: it allows incremental updates from CVS to Git. You can run it any number of times and it imports the new stuff. You do need to leave the git repo you are importing into alone (no commits other than the import commits).

    I still have more testing to do before we go live, but it's looking very, very nice.

  • by Qubit ( 100461 ) on Saturday January 10, 2009 @01:17PM (#26399365) Homepage Journal

    At work we're trying to get all of our repos moved to Git. We moved from CVS to SVN a year or so ago (which was a huge improvement), but now that all of the non-programmers in the office are used to using TortoiseSVN [tigris.org], the lack of a good Windows GUI for Git has been a bit of a roadblock.

    The msysgit [google.com] folks started work on the Tortoise-inspired GitCheetah [google.com] GUI, but that project basically fizzled out. Lots of people wanted a Windows GUI, but no one had both the resources and the drive to step up and do it.

    Then, exactly two months ago, Frank Li started working on TortoiseGit [google.com]. From what I can tell, this is a fork of TortoiseSVN with most of the Subversion guts pulled out and replaced with git commands. TortoiseGit is not done yet: 'git add' has some issues, Submodules don't seem to work at all, etc..., but development on the tool is in high gear and the primary developer is going the extra mile to help users [google.com].

    If you're looking to deploy tools right now, gitk is a bit more powerful than the log in TortoiseGit, but might be more confusing for naive users.

  • by pauljlucas ( 529435 ) on Saturday January 10, 2009 @01:36PM (#26399523) Homepage Journal
    The thing I don't understand about any distributed VCS is how (for example) others could pull stuff from my repository if I don't have a static IP. Also note that I don't mean "don't have a static IP" only in the usual sense of "having a dynamic IP at home": I also mean in the sense that my development machine is my laptop and I often work at coffee shops and other places and so I'm often behind NATs.
    • Re: (Score:3, Informative)

      by NTmatter ( 589153 )

      There aren't any strict rules saying that people have to pull straight from your laptop.

      In terms of non-distributed VCSes, would you ever host your repository on a machine that other people couldn't access? It would always be somewhere publicly accessible.

      For this kind of situation, you'd probably have a public development repo that's separate from your official repository. This would give you a set of repositories that looks like:

      official - The authoritative repository, controlled by some kind of integra

    • Re: (Score:3, Informative)

      Take a page out of Freedesktop.org's process. Any user can create and maintain user repositories in their own space. For example, http://cgit.freedesktop.org/~csimpson/mesa [freedesktop.org] is my personal Mesa repo. Then, anybody that wants can pull from there. Very rarely do FD.O people pull and push directly to each other, and I doubt that it happens that way in larger organizations, either.
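
      In practice that just means adding the public repository as a remote on the laptop and pushing to it whenever you happen to be online (the host and paths below are made up):

          # one-time setup on the laptop
          git remote add public ssh://me@git.example.org/srv/git/myproject.git
          # whenever there's connectivity, publish the local commits
          git push public master
          # others pull from the public address, never from the laptop:
          git clone git://git.example.org/srv/git/myproject.git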

  • @#$@#$ git! (Score:5, Informative)

    by Anonymous Coward on Saturday January 10, 2009 @02:30PM (#26400033)

    I curse more when I use git than when I use Windoz (and those are the only times I curse). Git's design is really that bad (from a user perspective).

    Git is fully distributed (with no "authoritative" source), but it doesn't give you any tools to understand/manage the distribution of files. If you have a work group with more than a few people, you are constantly asking which repo (and which access method to it), which branch, and which (bombastically long) revision. It's fine for 1-2 people, but then any version control system is fine for a small enough group.

    The documentation helps little. When you do "git help merge", you don't @#$@# care that this is the front end to multiple merge methods. You just want the stupid thing to work. If it's a special case, then you'll look for an advanced technique; but you are stuck reading through all this crap trying to figure out what really matters. No offense to the people working on git docs. It used to be awful, now it just sucks. The problem is more in the user interface design than the docs.

    There are over 100 git commands, and a command can do radically different things depending on the switches and target syntax. It's more confusing than any other revision control system that I have worked with.

    I use git because I have to, not because I want to (like Windoz). After using it for months, I still routinely get stuck trying to figure out the right mix of commands, arguments, and target syntax needed to get common things done.

    Git can do some (nice) things that subversion can't, but it creates so many problems that you haven't gained anything.

    I've heard good things about Mercurial and Bazaar. I wouldn't recommend git to anyone I liked (but it's perfect for perl :-).

  • complexity (Score:3, Informative)

    by XO ( 250276 ) <blade.eric@NospAM.gmail.com> on Saturday January 10, 2009 @04:29PM (#26401077) Homepage Journal

    The complexity of git robs it of quite a bit of the value of its features. For God only knows what reason, a 5-6 person project that I'm working on is using git instead of subversion, and only the person who set up the project actually has any idea how to use git. The rest of us are just cruising along, not really having any idea of what we are doing with it, and are stopped completely whenever it does anything weird.

    It's awesome to have the whole thing where it merges all the changes in the same file together, fairly intelligently, but even the GUI version for Windows has no functional interface for dealing with conflicts (which should be easily done as a "which bit of code is the proper piece to use here?" instead of jamming diffs into a file). Also, the Windows and Linux versions of Git have several problems interoperating with each other.

    In short, Git appears to have been designed entirely with features in mind, and not one bit of usability for anyone other than Linus himself. It is a nightmare for people who only have the need for version control and a handful of people working together. It reminds me very, very much of early Linux, before anyone else besides Linus had been hacking on it.
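
    For what it's worth, the command-line way of dealing with those jammed-in diffs is either to edit the conflict markers by hand or to hand the file to a graphical 3-way merge tool, roughly:

        git merge otherbranch        # conflict: the file now contains <<<<<<< / ======= / >>>>>>> markers
        git mergetool                # opens whatever merge tool is configured (kdiff3, meld, ...)
        git add the_conflicted_file  # mark the conflict as resolved
        git commit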

    • Re: (Score:3, Interesting)

      by Qubit ( 100461 )

      Lots to talk about here!

      The complexity of git robs it of quite a bit of the value of its features. For God only knows what reason, a 5-6 person project that I'm working on is using git instead of subversion, and only the person who set up the project actually has any idea how to use git.

      It sounds like the first person set up the project, and now expects everyone else to just "make it work", even if they're not programmers and don't have a good understanding of Git. Fair 'nuff.

      Now I don't know your situation, but if you're actually in a work situation, the lead programmer (or user, if you're not storing code in this repository) should be giving you guys some kind of help or crash course in using Git. The Git model is quite a bit different than SVN, and it has taken me some
