
Build From Source vs. Packages? 863

mod_critical asks: "I am a student at the University of Minnesota and I work with a professor performing research and managing more than ten Linux-based servers. When it comes to installing services on these machines I am a die-hard build-from-source fanatic, while the professor I work with prefers to install and maintain everything from packages. I want to know what Slashdot readers tend to think is the best way to do things. How do you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?"
This discussion has been archived. No new comments can be posted.

Build From Source vs. Packages?

Comments Filter:
  • Support (Score:5, Insightful)

    by ackthpt ( 218170 ) * on Tuesday March 30, 2004 @01:48PM (#8715993) Homepage Journal
    After you've gone it will be easier for the prof to get support on a package than on something custom. From experience, the less your setup resembles what tech support expects, the more finger-pointing happens and the less gets done.

    As often as I've lamented how much employers spend on PCs versus building them from parts, they would rather not have to rely on someone in-house to support hardware.

  • by Novanix ( 656269 ) * on Tuesday March 30, 2004 @01:49PM (#8716000) Homepage
    Gentoo [gentoo.org] is a great OS: instead of shipping binary packages, it builds everything from source, and it does so efficiently and automatically. It can also just manage the source while you compile things yourself. If you were dealing with many systems you could set up your own Gentoo sync server and distribute custom copies of various packages built exactly to your specs and compile settings. It can also easily determine dependencies, and even install them for you if needed. Gentoo is kind of a bare-bones OS that makes it easy to install whatever you want, shortcutting the whole process by compiling things for you.
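
    For a concrete sense of the workflow, a minimal sketch (the package atom is only an example; emerge syntax as of early 2004):

        emerge sync                      # pull the portage tree from the official or your own sync server
        emerge --pretend net-www/apache  # preview what would be built, dependencies included
        emerge net-www/apache            # fetch, compile, and install it plus anything it needs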
  • by bperkins ( 12056 ) * on Tuesday March 30, 2004 @01:49PM (#8716004) Homepage Journal

    While building from source can be fun, and sometimes necessary, I don't think it usually makes sense. You spend far too much time tweaking minor issues, and lose sight of major problems.

    One problem that I've noticed is that build-from-source people tend to install things in a way that's completely different from anyone else. This means that anyone who tries to maintain the machine is hopelessly lost trying to figure out what the previous person did. OTOH, when (e.g.) RedHat does something weird, the explanation and fix is usually just a few google queries away.

    Most (all?) package formats have source packages that can be modified and rebuilt in case you need some really special feature.

  • by DR SoB ( 749180 ) on Tuesday March 30, 2004 @01:49PM (#8716007) Journal
    You're installing an OS from a package, so why not applications? Old programmer's motto: "Don't re-invent the wheel."
  • yup (Score:2, Insightful)

    by rendelven ( 687323 ) on Tuesday March 30, 2004 @01:50PM (#8716027) Homepage
    I personally try to use the packages when I can. It makes it a bit easier for me to keep track of everything.

    It's all in what you need to do. If you need those optimizations or special build options that aren't in the package, go ahead; that's what the source is there for.
  • Simply (Score:5, Insightful)

    by AchilleTalon ( 540925 ) on Tuesday March 30, 2004 @01:51PM (#8716038) Homepage
    build packages from source!

    Many sources include the SPEC file required to build the package.
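
    For example, when the tarball ships its own SPEC file, one command does it (assuming an rpm 4.x toolchain; the tarball name is illustrative):

        rpmbuild -ta foo-1.0.tar.gz   # build binary and source RPMs straight from the tarball
        rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm   # then install it like any other package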

  • --No-Deps (Score:5, Insightful)

    by Doesn't_Comment_Code ( 692510 ) on Tuesday March 30, 2004 @01:52PM (#8716055)
    My biggest grievance against packages is the dependency fiasco. For instance, I have Red Hat at work, and the majority of the programs are .rpms. There was a certain program that I could only get as source, so I compiled and installed it. It turned out to be required as a basis for other packages I wanted to install. But when I tried to install those, rpm didn't recognize the prerequisite programs because they weren't installed via rpm.

    I don't care for the dependency model of packages, and I'd much rather install programs myself. That way I know I'm getting the program compiled most efficiently for my computer, and I don't have to worry about dependency databases.
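
    To sketch the mismatch being described (package and library names are made up): rpm consults only its own database, so a from-source install is invisible to it:

        rpm -ivh some-addon-1.0.rpm
        # error: failed dependencies: libfoo.so.1 is needed by some-addon-1.0
        # libfoo was compiled from source, so rpm has no record of it; the blunt workaround:
        rpm -ivh --nodeps some-addon-1.0.rpm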
  • Run Debian (Score:3, Insightful)

    by jchawk ( 127686 ) on Tuesday March 30, 2004 @01:53PM (#8716066) Homepage Journal
    Run Debian. If you absolutely must install from source you can use APT to grab the source that you need, compile it, and then build a deb for it so you're still using the Debian tracking system. It really is the best of both worlds.

    For most packages though there really isn't a big need to compile from source.
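
    When you do need it, the round trip looks roughly like this (the package name is arbitrary; debuild comes from the devscripts package):

        apt-get build-dep exim   # pull in whatever the build needs
        apt-get source exim      # fetch and unpack the Debian source package
        cd exim-*/
        debuild -us -uc          # or dpkg-buildpackage -rfakeroot; produces an ordinary .deb
        dpkg -i ../exim_*.deb    # installed and tracked like any other Debian package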
  • by Evanrude ( 21624 ) <david.fattyco@org> on Tuesday March 30, 2004 @01:53PM (#8716076) Homepage Journal
    I used to be a die-hard build from source person myself back when I ran slackware.
    Since that time I have gained more experience with production Linux systems.
    When it comes to managing production servers, I use Debian and typically only install programs that are in the stable tree.
    Every once in a while I have to build a deb from source, but only in rare circumstances.

    Now, when it comes to my development systems I am more likely to compile from source rather than rely on the packages to supply me with the latest and greatest.

    It really all just depends on what kind of stability vs. "new" features you need, as well as ease of management. Installing a package takes 30 seconds, while compiling and installing from source takes longer and requires more hands-on work.
  • by Hyperkinetic ( 142875 ) on Tuesday March 30, 2004 @01:53PM (#8716077)
    The only thing I use prepackaged is the GNU tools. Everything else is built from source. There are too many compile-time options, and building from source eliminates the problem of binaries being linked against a different lib than the one on the system. Plus, auditing the configure script and makefile before compilation ensures everything goes where *you* want it to.
  • Depends (Score:5, Insightful)

    by Richard_at_work ( 517087 ) * on Tuesday March 30, 2004 @01:54PM (#8716081)
    I use OpenBSD, which like most of the BSDs has the ports tree, and also has packages. Most of the ports tree is also built as packages available on the FTP sites, letting you either install 3rd-party applications from source prepared for the job, or install the package that has already been produced from that port. Best of both worlds; and indeed, if you are after customisation and have a number of systems, you can make the changes on one system and bingo - you have the package ready to roll out to the other systems.

    As for what I use? I used to use solely ports, but now I usually grab all the packages when I do a fresh install, and only use ports for what isn't available as a package, as the packages give me no disadvantage.
  • by FreakyControl ( 751781 ) on Tuesday March 30, 2004 @01:54PM (#8716092)
    I can tell you as a grad student with 3 years' experience working in an engineering lab, packages are the way to go. Not just in software, but generally in most situations. As others have mentioned, you have the ease of use, tech support, and the time savings. While you may eke out a little bit of performance, your time costs the lab real money that could be spent on more valuable work. Also, as a student, you will likely only be there for a couple of years. When you leave, and something goes wrong, someone else has to sort through what you did to try and fix it.
  • by mod_gurl ( 562617 ) on Tuesday March 30, 2004 @01:54PM (#8716095)
    If you're responsible for the machines you run, how can you abdicate that responsibility by using whatever some package maintainer decides to give you? At the University of Michigan we use Linux From Scratch to manage hundreds of machines that provide everything from web servers to IMAP servers to user desktops & laptops. The trick is leveraging the work of administering one machine well out to hundreds of machines. The tool for this is radmind [radmind.org]. Radmind doesn't require that you build your software from source, but it leverages the work you put into one machine to manage all of your machines. It also integrates a tripwire with your management software, which means you can detect unwanted filesystem changes in addition to managing software.
  • The answer is .... (Score:4, Insightful)

    by Archangel Michael ( 180766 ) on Tuesday March 30, 2004 @01:55PM (#8716100) Journal
    It depends.

    If you are advanced enough to compile source code so that it performs better, or in a more tightly controlled manner that suits your purposes better than off-the-shelf builds (packages), then by all means build it from source.

    If on the other hand, you don't have a compelling reason to compile the source, then use the packaged product.

    I don't know about you, but for most of my servers, the extra configuration options needed to squeeze out a few extra percentage points of performance aren't enough to bother running my own compile.

    Those that say they review ALL code before compiling for security (backdoors, holes etc) problems are probably lying. I am sure there are a couple people who do.

    Basically if you do it just so you can be 1337, you are just vain, as I doubt that most people would see/feel the difference.
  • by KenSeymour ( 81018 ) on Tuesday March 30, 2004 @01:55PM (#8716102)
    I would have to agree about using packages. One gripe I have about building from source is that most source trees do not have a "make uninstall" target.

    With packages, you have a much better chance of removing all the files that were installed with the packages when you need to.
  • by Roadkills-R-Us ( 122219 ) on Tuesday March 30, 2004 @01:59PM (#8716183) Homepage
    I agree. What the professor wants is a readily supportable production environment, and that's what you should supply. That means packages wherever possible. IFF there is a clear need, build from source: a 5% speed optimization may not be worth it (that's the prof's call); a 50% speed improvement (unlikely, but possible) probably would be (also the prof's call). Otherwise, I'd only build from source when there was no trustworthy package available, or to add features, fix bugs, etc.

    I've been in both your and the prof's position, and this is generally the best bet. It'll make the prof's life a lot easier when you're gone, too.
  • by October_30th ( 531777 ) on Tuesday March 30, 2004 @02:00PM (#8716216) Homepage Journal
    If you'd make your case simply "It's the right thing to do" that would definitely not convince me - in fact, such argumentation would only aggravate me. It smells like an ideological argument.

    If you could demonstrate that installing/upgrading from source results in a quantifiable improvement in maintenance or performance over a pure binary distribution, I would consider it. If there are no existing reliable benchmarks, but you made a good case, perhaps I'd let you turn your own workstation into a demonstration system.

    Anything else? No way. If it works, don't mess with it.

    I run Gentoo at home and, while updating with "emerge" is kind of nice, I've yet to find any compelling reasons why it'd be better than up2date or apt-get. There really are no measurable performance or reliability advantages.

  • Re:--No-Deps (Score:5, Insightful)

    by idontgno ( 624372 ) on Tuesday March 30, 2004 @02:01PM (#8716225) Journal
    I don't care for the dependency model of packages, and I'd much rather install programs myself. That way I know I'm getting the program compiled most efficiently for my computer, and I don't have to worry about dependency databases

    That just means that you'll have to store the dependency databases in your head. A release of a particular software package, whether it's a package or a tarball of source, depends on other software. Always. "configure" goes a long way towards working that out, but if it doesn't work automagically you're going to have to take it by the hand and lead it to wherever your copy of libfoobar.so.17 might happen to be.

    I've just started using yum for RPM management and I'm already liking it a lot. At least dependency management seems a bit cleaner and more automatic.

  • by JM ( 18663 ) on Tuesday March 30, 2004 @02:02PM (#8716247) Homepage
    I used to run an ISP and built everything from source, but eventually it got to the point where it was unmanageable.

    You end up with different versions, different compile options, upgrades are a mess, and it's hard to support.

    Another problem is filesystem pollution. When you do your "make install", it's hard to track what files are installed, and when you upgrade to a new version, you can't be sure it's clean, since you might have configuration files or binaries anywhere on your system.

    So, one day, I started to make RPM packages of stuff I needed, and modified existing RPMS, and sent all the patches to the community.

    What happened is that Mandrake accepted all my packages, so all I had to do was to install the standard distro, and all I needed was there.

    And eventually, I made so many packages that they hired me ;-)

    But even if I didn't work for Mandrake, I'd still be sold on RPMs. You have a clean SPEC file alongside the pristine source code and the patches - basically all the instructions to build the stuff. You can specify the requirements, and you can easily rebuild on another machine, uninstall the old stuff, or upgrade, with a single rpm command.
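
    To make that concrete, here is a skeletal SPEC file of the kind being described (every name, version, and path is a placeholder):

        Name:      foo
        Version:   1.0
        Release:   1
        Summary:   An example package
        License:   GPL
        Group:     Applications/System
        Source0:   foo-1.0.tar.gz
        Patch0:    foo-1.0-local.patch
        BuildRoot: %{_tmppath}/%{name}-root

        %description
        A placeholder package to show the moving parts.

        %prep
        %setup -q
        %patch0 -p1

        %build
        ./configure --prefix=/usr
        make

        %install
        make install DESTDIR=$RPM_BUILD_ROOT

        %files
        /usr/bin/foo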

  • by lazy_arabica ( 750133 ) on Tuesday March 30, 2004 @02:04PM (#8716275) Homepage
    Yes, I know, it is a great distro (it is mine too); it compiles everything from scratch, lets you optimize the produced code for your machine, and does it automatically and nearly flawlessly. But I don't think enterprises that have to manage dozens of Linux servers will ever be really excited about this. Why? Because compiling simply takes *time*, and that is exactly what most serious system administrators are trying not to lose. However, I agree Gentoo is an excellent distro for geeks and advanced users, especially because of its powerful, BSD-ports-like source-based packaging system. But it is ridiculous to stand up and say Gentoo combines "the best of both binary and source packages". It doesn't.
  • Re:Support (Score:5, Insightful)

    by vrTeach ( 37458 ) on Tuesday March 30, 2004 @02:05PM (#8716304)
    This is very much the case. I have managed 15-20 Linux machines for the past seven years, and have moved from largely building from source to largely depending on packages. The porting of apt to rpm systems has completely changed my work for the better, so if at all possible I use the packages and a small subset of apt repositories. My next step is probably to develop our own apt repository.

    In some cases, the packaged version won't play well with something that I need, or I particularly don't want upgrades to disturb something. In that case I put together a pseudo-script that gets and builds the source and dependencies, and mark the packages as "Ignore" in my apt configuration.
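
    One way to do that kind of "Ignore" marking at the dpkg level (the package name is only an example):

        echo "openssl hold" | dpkg --set-selections   # tell dpkg to keep the locally built version
        apt-get upgrade                               # held packages are now kept back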

    eks
  • by drdanny_orig ( 585847 ) * on Tuesday March 30, 2004 @02:06PM (#8716313)
    I use fedora, and most often I get the *.src.rpm versions, then tweak the SPEC files as required, build my own binary rpms, and use those. Best of both worlds, IMO.
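
    That workflow, sketched out (the package name and the Red Hat-era build path are illustrative):

        rpm -ivh foo-1.2-3.src.rpm                    # unpack source, patches, and SPEC under /usr/src/redhat
        vi /usr/src/redhat/SPECS/foo.spec             # tweak flags, patches, file lists
        rpmbuild -ba /usr/src/redhat/SPECS/foo.spec   # build your own binary (and source) RPMs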
  • by multipartmixed ( 163409 ) * on Tuesday March 30, 2004 @02:08PM (#8716335) Homepage
    ..of time.

    It's like the programmer who spends six hours hand-optimizing the inside of a loop that gets called once a day and already executes in 10ms... but ignores the fact that the program takes 20 times longer to run than it should because of an inefficient algorithm. This programmer doesn't know *why* his program is slow, he's guessing, and he will almost always guess badly. This is why profiling was invented.

    Look at it this way. Installing from the packages you get the following benefits:
    - You save time compiling (multiply this by the number of patches you have to add over the box's lifetime)
    - You save time tracking down dependencies
    - You have a standard platform you can re-deploy at will
    - You have something that another administrator can work on without asking where you shoved shit.
    - You have a package database you can query for version information, dependencies, etc.
    - You have an easily available source of "known good" binaries if you have a suspected intrusion problem.
    - Depending on the package system you use, you might be able to stay on top of security vulnerabilities with very little (or no) work.

    Now, installing from source, you get the following benefits:
    - You can pick where the files go (whoopie)
    - You tune the performance for your platform
    - You can select specific features
    - You can de-select specific features to save disk space

    The only one that gains you much 99% of the time is being able to select specific features which are turned off in the standard package. If you need those options, you build it from source. If you're doing ten machines, though, you build it from source on *one* machine, package it up, burn it, and install it from YOUR package on all ten machines.

    Saving a few CPU cycles is never worth spending a man-hour. You can use the man-hour more productively at the macro-optimization level. Similarly, you can take the dollars that you would pay the man and buy a new CPU with them.

    The same argument goes for saving a kilobyte of disk space. If I found out that any of my guys spent *any* significant time trying to cut less than a gigabyte out of our application footprint, I would give him a footprint of my own, right in the middle of his colon. Disk is cheap. People are not.

    If you have an application which is CPU-bound and running too slow, find out why (profile the system or binary), and build from source only what you need to make your application conform to the target specification. Or, if that will take too long, just buy more CPU.

    Long story short -- tuning of ANY kind should not be done at the micro level across the board; that's just a waste of time. Tuning should be done by profiling the system as a whole, identifying the constraints, and relieving them. If that requires micro-tuning a few things, that's fine... but squeezing every last little bit of performance out of absolutely everything is either impossible or incredibly time-prohibitive. And, of course, if you were going to spend that kind of time, you could either buy new hardware with the money (remember Moore's law), or examine the system more closely at the macro level and come up with a better way to do things.
  • academia! (Score:2, Insightful)

    by cybin ( 141668 ) on Tuesday March 30, 2004 @02:08PM (#8716336) Homepage
    i worked at a university in virginia in the music technology lab, where we had two linux servers that did everything from serve web pages to run netatalk. my boss (also a professor) liked the RPMs too, simply because after i left there was no guarantee he'd get any help from the IT department, and he understood how to use RPM from the command line.

    i guess in academia they are used to having funding for some things some of the time -- your professor probably wants to keep those machines running as long as he possibly can, because money has to be used for other things.

    and besides, compiling programs is a hard thing for the "sorta unix geek" to get his head around :) for a while i would recompile the kernel and he flipped out -- so i started using those crappy RPMs.

    fortunately, i think this will change when people realize there is an ample supply of knowledgeable folks out there who can do this stuff. it's easier to find a geek now than it was even 5 years ago!

  • by cmg ( 31795 ) on Tuesday March 30, 2004 @02:08PM (#8716346) Homepage
    If you have an application that you need performance out of, spend time compiling that once and then packaging it once and installing it on your 10 machines.

    When looking from the prof's view, it will be easier to get someone else up to speed after you have graduated if your machines stick closely to standard packages.

    Use the time that you'd spend compiling/installing doing more CS related activities.

    Most people (including myself) that have gone through the phase of wanting to compile everything get out of it as soon as they have some real problems to solve.
  • Re:Gentoo (Score:2, Insightful)

    by maximilln ( 654768 ) on Tuesday March 30, 2004 @02:09PM (#8716367) Homepage Journal
    Oh stop already. Unless you're building _every_ library from source then the optimization of later libraries is lost on the precompiled libraries they're dependent on.

    It's a nifty feature of Gentoo but how many users really want to wait for glibc? If they don't wait for glibc then are they really gaining anything significant when they build Mozilla manually as opposed to using a nightly build?

    Think Tetris. If you don't optimize from the very first row then optimization at row 15 isn't going to save your backside.
  • Context (Score:3, Insightful)

    by Second_Derivative ( 257815 ) on Tuesday March 30, 2004 @02:10PM (#8716384)
    For servers, go with something like Debian: good clean integrated system with timely and automatic security updates. Not bleeding edge, but if it's at all a serious server you really don't want it to be.

    Desktops: ports-based system all the way. Why? Because with something like Gentoo, it might take several days to compile, but you can be assured you're not going to dependency hell anytime soon when you want to try the latest and greatest. Headers and such are installed by default, so you can usually compile something by hand and it will Just Work. Whereas if you're using three different unofficial package streams and you need to upgrade a simple library somewhere that has an anal-retentive versioning and dependency specification, attempting to apt-get that new version will cause your entire house of cards to come crashing down. I lived with Debian on a desktop like that for god knows how many years until I decided "No more". Yeah, I have to wait a while with Gentoo, but at least I only have to do it once.
  • because (Score:3, Insightful)

    by mgkimsal2 ( 200677 ) on Tuesday March 30, 2004 @02:13PM (#8716423) Homepage
    I'm guessing it's a bit harder to rebuild and duplicate environments exactly. If I build 3 machines today, it's not easy to ensure I can rebuild the exact same machines 3 months from now, at least not with the standard 'gentoo' approach. At least, not as easy as saying 'pop this mdk10 in and install'. You at least know what base everything is starting from.
  • by bwy ( 726112 ) on Tuesday March 30, 2004 @02:14PM (#8716435)
    You spend far too much time tweaking minor issues, and lose sight of major problems.

    Good point. There are probably very few cases where spending the extra hours of tweak time ever ends up being something that adds a significant amount of value to anybody, except yourself of course. I can think of a couple exceptions, but they are exactly that- exceptions to the rule. IMHO the ability to standardize installation packages is an important aspect of modern computing.

    If time didn't matter, I suppose we could all go so far as writing all our own software that would do exactly what we wanted.
  • by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Tuesday March 30, 2004 @02:15PM (#8716449) Homepage Journal
    the best solution is to do whatever is most efficient at performing those tasks

    And if you've ever had to pick up and maintain a system from someone who left you will know that this is just about 100% wrong.

    The best solution is one that works and is maintainable. If you are willing to put in the extra work involved in making your from-source installations clearly maintainable and upgradable, so that the next guy isn't going to have to spend 6 hours learning how everything works when he needs to upgrade foobnitz to version 2.0, then great. If not, think about letting someone else do that work for you.
  • by Kourino ( 206616 ) on Tuesday March 30, 2004 @02:17PM (#8716485) Homepage
    If you're responsible for the machines you run, how can you abdicate that responsibility by using whatever some package maintainer decides to give you?

    While in principle I can agree with what you're saying, this is a pretty insulting view to take of all the people who work on GNU/Linux distributions. (Or put another way, how am I better than every Debian developer combined? (Substituting Debian for your distribution of choice, of course.))
  • by bplipschitz ( 265300 ) on Tuesday March 30, 2004 @02:22PM (#8716562)
    this is coming entirely from a *BSD perspective [especially FreeBSD], but the older and slower your hardware, the more you might depend upon packages, just because they take less time to install.

    That said, I routinely build stuff from source on a Pentium Pro 200 MHz dual CPU machine at work. It's not our main server, so the performance hit is never noticed.

    Portupgrade is an absolute must on this machine, as we have all kinds of software running on it. Without portupgrade, I'm sure it would be a nightmare.
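
    For the curious, the routine is short (portupgrade flags as of FreeBSD 4.x/5.x):

        pkgdb -F           # fix up the package database first
        portupgrade -arR   # upgrade everything installed, plus its dependencies and dependents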

    In the end, it's whatever works best in your situation, and to have this as 'news' on slashdot seems really freakin' ridiculous.
  • by BenRussoUSA ( 454940 ) <ben...russo@@@gmail...com> on Tuesday March 30, 2004 @02:26PM (#8716604)
    I've been a UNIX sys-admin for about a decade.
    My advice is that for a workstation that is managed by an individual you can let the admin do whatever they want, but for any server that has to be stable and maintainable you want to stick with a well maintained package repository and try to avoid 3rd party packages and tarballs if possible.

    You have to understand that there is a software stack in most services.
    With the kernel and core libs (like glibc) and such at the bottom of the stack, and applications like Evolution at the top of the stack. In between you can have gdb and openssl and various perl modules (in AMAVIS for example) and you have sasl stuff which may be related to pam and openldap and cyrus or wu.... etc..

    The thing is that even though all of those various pieces of the software stack may be linked against different libraries on the box, the maintainer of the library code may not have a QA group to coordinate regression testing and compatibility testing before the latest CVS commit goes out to fix a bug referenced in a CERT alert.

    RedHat and Debian and SUSE and all the others have package repositories, the repository maintainers do an amazingly fantastic job of QA and testing to make sure that new patches don't break your software stack. As an individual you simply can't keep up with that.

    For example the Development team that takes care of OpenSSL doesn't backport their bug fixes and security patches to old versions of the code. They just maintain the latest release version and the current CVS version. If you have an old server running IMAPs and HTTPs and SSH and SMTP/TLS and such, and CERT announces a bug in openssl vX.Y, then the OpenSSL development team will certainly release a patch for the latest version which may be version Z!

    That might cause you to have to upgrade APACHE or wu-IMAP or OpenSSH or Postfix etc... Those things might then have divergent dependencies that would cause you to go and rebuild half a dozen other packages, and so on and so on. Also, do you remember all the magic flags you used for configure and make? Do you have the same environment variables set today that you did the last time you built PostFix? The possibilities for problems are endless. And if you do have a problem you are kind of on your own since your system will be a unique box. Whereas if there is a problem with a standard RedHat or Debian package, then you can always go to the general newsgroups and chances are there are a dozen other "me too" posts with answers already.

    It is much easier to use apt or up2date.

    So, unless you have a very good reason for using a tarball on a production server that requires reliability and security and high availability, then you should stick with packages.

    If you want to build the packages from source, feel free! RedHat and Debian and SuSE make the SOURCE packages available so that you can dig in and read all about'em. I'm sure the Debian team could use a new package maintainer, if you are addicted to compiling and testing things, check them out.
  • by Shakrai ( 717556 ) on Tuesday March 30, 2004 @02:26PM (#8716606) Journal
    Otherwise, I'd only build from source when there was no trustworthy package available, or to add features, fix bugs, etc.

    If you can't find a site with a trustworthy package what makes you think you can find a site with trustworthy source code? Or are you going to review every line of code to make sure it wasn't tampered with?

    The paranoia works both ways :(

  • by tmoertel ( 38456 ) on Tuesday March 30, 2004 @02:26PM (#8716607) Homepage Journal
    Packages and package managers solve a real problem: Keeping track of software installations, their files, and their interdependencies is hard, hard work. By packaging software and using good, "higher-level", package managers (like yum [duke.edu] and apt-get) you can delegate most of this problem to the computer. That's a smart move.

    It's still a smart move if you're building from source. Just package your source. Then you can build the sources under the control of a package manager (like RPM), and install the resulting packages. You get the full benefits of build-from-scratch and the full benefits of using packages.

    This is exactly the approach I use. In fact, I'm a bit more strict about it: My policy is that I don't install any software that isn't packaged. If I need to install something that isn't packaged, I'll package it first. If I don't like the way a packager built an already existing package, I'll repackage it.

    The bottom line is that creating your own packages (or fixing packages you don't like) is much easier than maintaining a from-scratch, unpackaged installation. Or ten of them.
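
    Once everything is packaged, the package database can answer the questions that matter during maintenance. A few stock rpm queries, for example:

        rpm -qf /usr/sbin/httpd    # which package owns this file?
        rpm -qR httpd              # what does this package require?
        rpm -qa --last | head      # what was installed or upgraded most recently?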

    To get you started, here are a couple of RPM-building references:

    Don't give up the benefits of source. Don't give up the benefits of packaging. Have them both.

  • by jd142 ( 129673 ) on Tuesday March 30, 2004 @02:27PM (#8716614) Homepage
    Ah, but you see you're asking for support from the mod_perl list. If you are using the package from Red Hat, you should try Red Hat support or Red Hat specific mailing lists.

  • by Hal-9001 ( 43188 ) on Tuesday March 30, 2004 @02:32PM (#8716680) Homepage Journal
    how am I better than every Debian developer combined?
    Because you are more likely to know your exact hardware configuration than some nameless packager, so you can optimize your compile flags accordingly.
  • by Brazilian Joe ( 514100 ) on Tuesday March 30, 2004 @02:33PM (#8716701)
    Actually, you can 'emerge --buildpkg foo' and share packages between machines. If you are managing multiple machines, chances are that you will not have each one with a unique configuration, but only a few profiles.
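
    Sketched out (the package atom is a placeholder; $PKGDIR defaults to /usr/portage/packages):

        emerge --buildpkg app-misc/foo   # build from source and also save a binary .tbz2 in $PKGDIR
        # copy or NFS-export $PKGDIR to the other machines, then on each of them:
        emerge --usepkg app-misc/foo     # install the prebuilt package instead of recompiling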
  • by JAgostoni ( 685117 ) on Tuesday March 30, 2004 @02:35PM (#8716736) Homepage Journal
    In the parent post's defense, you can almost always get the source code from the "source" or author. However, sometimes you rely on some other guy to produce a .deb or .rpm or whatever, which you might not trust as much as the author.

    I almost always trust packages from the vendor and the distro and only trust "3rd party" packages when there's been tons of anecdotal evidence that they work.

  • by Anonymous Coward on Tuesday March 30, 2004 @02:36PM (#8716743)
    There are plenty of *BSD users that wouldn't touch building a port even if their life depended on it. It's probably true that there are more people with this attitude in the RedHat camp than in the *BSD camps, relatively and absolutely, but that's beside the point. And there are plenty of people that build their own RPMs, especially people that run large farms and need easy rollout.

    IME, though, building through the ports system (including building packages for rollout) is easier than building RPMs. This might influence how many actually started to build from sources themselves.

    Regardless of what you do, a good packaging system is valuable for scaling up. It helps document procedure and settings and whatnot. For certain things building a package from source yourself may even be required.
  • Build from source (Score:2, Insightful)

    by mabu ( 178417 ) on Tuesday March 30, 2004 @02:40PM (#8716803)
    I always build from source. IMO, it's the only way to go. A smart admin does not trust anyone else's executables when the alternative exists of building your own code on your own system.

    More importantly, when you build your own from source, you're often reminded of outdated dependencies that need to be upgraded. I recently compiled a new version of OpenSSH and found out that I had a vulnerable copy of zlib on my system. Had I installed a package, I might not have known.
  • Re:Personally (Score:5, Insightful)

    by Frymaster ( 171343 ) on Tuesday March 30, 2004 @02:41PM (#8716812) Homepage Journal
    In my opinion if Gentoo wants to gain a larger user base it needs one.

    and why does gentoo need or want a larger user base? gentoo is geared towards a niche market and those people will be attracted to the distro whizzy installer or no.

    porsche has a tiny market share - but nobody suggests they should make a k-car version to get a bigger slice of the pie!

  • by plcurechax ( 247883 ) on Tuesday March 30, 2004 @02:41PM (#8716816) Homepage
    How do you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?

    Humans do not scale well: they have very low bandwidth for information sharing, and high latency (i.e. you can't get hold of them). Humans are also expensive, wander off into different jobs, graduate or drop out of college, etc. So I tend to prefer reducing the human cost of system administration complexity as a default position.

    So my gut feeling is that unless there is a major time or dollar savings in the optimization from building from source (i.e. it avoids buying 10+ new CPUs for the systems, or computation runs take a day less), go with reducing administration complexity by using a package management system, so that you can concentrate on your actual goals (research, profit, or whatever).

  • by jc42 ( 318812 ) on Tuesday March 30, 2004 @02:42PM (#8716823) Homepage Journal
    One problem that I've noticed is that build-from-source people tend to install things in a way that's completely different from anyone else.

    While I'd agree with you in general, I've found one curious case where I've learned to install from the source to make all my machines the same: apache.

    For some reason, every vendor (and sometimes every release ;-) seems to have apache installed in a clever way that's different from everyone else. They put the pieces in different directories; they munge the config files in gratuitous ways; they even change what a user's public_html directory is called. In a mixed network, figuring out where the web server is hidden on a disk can be a real nightmare.

    So I just grab the latest stable source kit from apache.org, and compile it. That takes maybe 10 minutes on current hardware. I spend 5 minutes or so munging httpd.conf, changing only what I know has to be changed. I get as close to a default install in /usr/local/apache as I can. I run the command it tells me to run to fire up the server. If it works, I copy that command to wherever it belongs in some boot script. Total time, usually about 20 minutes - far less than what I'd waste repeatedly trying to figure out the installation on a collection of machines.
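
    That 20 minutes looks roughly like this (the version number is a placeholder):

        wget http://www.apache.org/dist/httpd/httpd-2.0.49.tar.gz
        tar xzf httpd-2.0.49.tar.gz && cd httpd-2.0.49
        ./configure --prefix=/usr/local/apache   # stay as close to the defaults as possible
        make && make install
        /usr/local/apache/bin/apachectl start    # then copy this line into the right boot script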

    I had a fun case a few years back on a project where the management had decreed a Netscape web server. Nobody could get it to run right. The usual reason was that it was installed and managed entirely through a web interface. Sounds fine, right? Yeah, until someone misconfigged something slightly and the web server was a zombie. Now the management interface is dead, and all you can do is reinstall from scratch.

    One day I said "The hell with this", grabbed the current kit from apache.org, and 20 minutes later I had a live server. The developers could go crazy building their site.

    Occasionally, I'd say "You know, we really should work on that Netscape server that we're supposed to be running." The reaction was always along the lines of "Yeah; we should, but in the meantime apache is running fine and we have stuff to do today." I talked to them recently, and they're still running apache. They also have an unofficial policy of always installing it from source, because they've been frustrated by the packaged installs that they find on linux distros.

    The apache gang has done a fine job of making an install from source fast and painless. In such cases, a package usually just makes your life more difficult. If you take the defaults whenever possible, you end up in a better situation than with most packages. All your installations for all vendors come out the same, and managing a lot of machines is very easy.

  • by bee-yotch ( 323219 ) on Tuesday March 30, 2004 @02:50PM (#8716927) Homepage
    "you CAN use binary packages with Gentoo"

    I don't know how you can argue this in any situation other than when you're initially building your gentoo system.

    Once you run your first emerge sync, where do you get packages from? I don't know of any mirrors that supply binary packages, and I'd be willing to bet that there aren't any. So unless you're referring to the limited selection on the gentoo cd's, then no, you can't use binary packages unless you've pre-built them yourself.
  • Amen. (Score:5, Insightful)

    by Sevn ( 12012 ) on Tuesday March 30, 2004 @02:52PM (#8716951) Homepage Journal
    What some people don't seem to understand about Gentoo or the BSDs is that not everyone is hell-bent on world domination and market share. Some people want something specific, and Gentoo and the BSDs are there for them. It's not like they are ever going anywhere. BSD, "despite the rumors", has never done anything but grow in usership, with a steady yet slow trickle of new users and the fiercely dedicated long-time users. Gentoo is growing rather fast, but will no doubt plateau off and settle in the same way the BSDs have. But by all means, continue to have your OS flame wars and make your comparisons and talk about market share or other things that aren't important or even remotely interesting to the majority of Gentoo and BSD users. It's very humorous. :) HAVE FUN STORMING THE CASTLE!!!
  • by codegen ( 103601 ) on Tuesday March 30, 2004 @02:56PM (#8716998) Journal
    I've been in both positions, too. As a graduate student, and then during my 6 years in industry, I was extremely interested in building from source (custom kernels, custom libraries, webservers, the whole nine yards).

    However, now as a professor, I've become more interested in focusing on building the tools that are part of my research. These I publish (or will publish once they are ready) as open source. But for the other elements such as development libraries, servers, etc.: I just want them to work.

  • Comment removed (Score:3, Insightful)

    by account_deleted ( 4530225 ) on Tuesday March 30, 2004 @02:57PM (#8717010)
    Comment removed based on user account deletion
  • by AciDive ( 543624 ) on Tuesday March 30, 2004 @02:58PM (#8717020)
    If the program is available for my favorite distro (Debian) as a package, I will use the package. But if there isn't a package available then I will compile from source. As most of the other posters have pointed out, it also depends on the program, and on whether I am testing it or it is for a production system. If it is for testing then I take the package over the source, if the package is available. But I, like many others here, will usually compile from source if I am going to use the program in a production environment, so I can get the best performance for my system.
  • Different stages (Score:5, Insightful)

    by matt-fu ( 96262 ) on Tuesday March 30, 2004 @02:58PM (#8717028)
    As far as I can tell, there are four stages of sysadmin as it relates to installed software:

    1) I am a newbie and have to use packages for *.
    2) I know my way around. I like the level of control I get with compiling/know how to code/read far too much Slashdot. I compile by default.
    3) I manage more than three boxes in my basement now. Having the ability to back out of system changes without a full OS reinstall is a necessity. I build my own packages from source that I've compiled.
    4) I manage more than just three boxes in a department now. Now I have to deal with politics, ordering hardware, the freakin' network, and I generally have no time for sysadmin. On top of all that I now have a family, so spending two or three extra hours per day on my Unix hobby is no longer feasible. Precompiled packages work just fine.

  • Re:Personally (Score:2, Insightful)

    by bee-yotch ( 323219 ) on Tuesday March 30, 2004 @03:02PM (#8717082) Homepage
    Well, I'm not meaning to imply that they "need" a larger user base. But I'm sure it wouldn't hurt them.

    Anyway, look at it from a different perspective, user base aside: if gentoo really is about choice and control, then why not let gentoo users choose between a gui installer and a manual install?
  • by tjwhaynes ( 114792 ) on Tuesday March 30, 2004 @03:13PM (#8717206)

    I use fedora, and most often I get the *.src.rpm versions, then tweak the SPEC files as required, build my own binary rpms, and use those. Best of both worlds, IMO.

    And the tweaking need not be that tricky or time consuming either. Decent defaults for building RPMS can be placed in your ~/.rpmrc file (or /etc/rpmrc, etc.). Once you have set your optimising settings, architectural preferences and packager name and cryptographic signature (if you want to submit them to other people), that's done for all future packages.

    I used to run a mix of RPM packages and tarballs (./configure --prefix=/usr/local && make && su -c "make install") so I could tell what was under RPM control and what was not, but it became annoying when I wanted to build a Source RPM with dependencies on a package I had built from tarballs. These days I usually try and wrap any install up in an RPM - it's not difficult once you get hold of a skeleton spec file for your distro, and it saves much hair-pulling later on. Also, the dependency requirements of RPMs actually save time in the long run, because you know when removing a package will hose your system (or part of it).

    Cheers,
    Toby Haynes

  • Listen to Him (Score:2, Insightful)

    by jmhodges ( 571316 ) on Tuesday March 30, 2004 @03:17PM (#8717256) Homepage

    Do what your professor wants. Why, you ask? Because it's your damn professor. He will be happier with a package management system that he feels comfortable with. This will make him happier with you. Do not trifle with the grey beards; they have powers you do not yet comprehend.

    I, myself, am working with a professor on a momentum problem generator in Perl (we're physics people) and I was given a nice equation-solving library that he wrote for another issue. I've shown it to a number of people with years of Perl experience (one or two near a decade) and they said that it was some of the worst code they had ever seen. I thought the way I had to interact with it was stupid and klunky. One giant kludge. I fought it in my own head but tried not to let my emotions about it out in front of him. So I worked at it again and again, and you know what? A few months later he, a peer of mine, and I will be doing a seminar for our department on it this April. The code wasn't as awful to work with as I thought (though to this day I wish it weren't so klunky) and it worked. I just had to suck up my pride and get it done.

    Don't argue with them. Make their lives easier and you get to see the grey beards' happy side. May you have many publications in your future.

  • Packages (Score:3, Insightful)

    by Chanc_Gorkon ( 94133 ) <gorkon&gmail,com> on Tuesday March 30, 2004 @03:20PM (#8717300)
    I use packages when available, with whatever package management system is available, be it apt, fink, darwinports or whatever, and whatever package format the management system needs. I have used rpm, darwinports, deb and lpp (AIX). Packages with a management system allow you to easily install and uninstall items when you need to. They also ease upgrades.
  • The point? (Score:5, Insightful)

    by Anthony Boyd ( 242971 ) on Tuesday March 30, 2004 @03:21PM (#8717320) Homepage

    Wow. There sure are a lot of posts about which is better, but I don't see any comments that deal with the underlying problem. And that is this: don't get into a pissing match with your professor. Seriously, what are you hoping to accomplish here?

    If you were thinking that you'd get tons of pro-compiling comments, and then put that in front of the professor, stop right there. Coming to Slashdot for validation of your side of the argument is about as helpful as those wives who write to Dear Abby about their husbands. Because no husband on Earth is going to appreciate getting chastised by Dear Abby, and if Abby sides with him, he's going to gloat. It's lose-lose for the wife, just like it's lose-lose for you if you try to use Slashdot as leverage. Screw with the computers that the professor relies on, and he'll find a way to "thank" you for it. Don't sabotage yourself.

  • by rwa2 ( 4391 ) * on Tuesday March 30, 2004 @03:21PM (#8717324) Homepage Journal
    Hey, you get the best of both worlds... easy install, maintenance, uninstall; plus everything is optimized and you still get to say that you build from source "just because you can".

    We'll make a Debian package maintainer out of you yet!
  • Re:--No-Deps (Score:3, Insightful)

    by FyRE666 ( 263011 ) * on Tuesday March 30, 2004 @03:25PM (#8717396) Homepage
    Yes, dependencies with RPM are its Achilles' heel. I tend to start off installing via RPM until I inevitably encounter something that needs about 600 other RPMs installed first. Then I switch to source builds, at which point you can either forget RPM or use --nodeps --force for each new RPM install. Mind you, Gentoo can be as bad - if you don't constantly keep up to date then a single package update can pull in hundreds of (seemingly) pointless other package upgrades - many of these will offer questionable improvement (often you're forced to upgrade from fuzzlePack.1.2.33.3.r4 to fuzzlePack.1.2.33.3.r5, etc). So you might well end up pulling down 40MB of stuff you don't want in order to build a 200k library. (Yes, I know you can just force portage to build a single package and ignore deps, but the maintainers tend to frown on that.)

    So, in summary, stick with packages until you have to switch over to source to get anything done!
  • Re:Personally (Score:1, Insightful)

    by WwWonka ( 545303 ) on Tuesday March 30, 2004 @03:31PM (#8717477)
    porsche has a tiny market share - but nobody suggests they should make a k-car version to get a bigger slice of the pie!

    Would you buy that Porsche if it came in a thousand pieces and took a whole year to build?

    FUNK DAT!

    Here's my 70 grand, gimme the fast car, and I'll be cruising with your girlfriend while you're in the garage doing a "man porsche"!
  • Re:--No-Deps (Score:5, Insightful)

    by Zathrus ( 232140 ) on Tuesday March 30, 2004 @03:32PM (#8717484) Homepage
    I need to write a compressed bitmap? Okay, then I include code to do so. I need to read a wav file ripped from a CD? Yup, my own code. Calculate an MD5? Inspired heavily on the RFC reference code, but essentially my own.

    Man, I'm glad other industries aren't as stupid as the software engineering industry. Otherwise car manufacturers would have to have steel foundries, cloth weaving, a slaughterhouse and tannery (for leather), and innumerable other ancillary businesses on site just to build a car. And, of course, everyone would have to know how to do absolutely everything.

    What you're preaching is directly contrary to the practice of reusing code -- and not just your own. It's insane to reinvent the wheel every time you need to drive to the store -- but that's exactly what you're doing. It's one thing to understand the physics behind the wheel, or the foundry, or the paint shop. It's another to rebuild them from scratch.

    I hope there's never a bug in your code... because if there is you're going to have to patch every single code base, and re-issue every single binary (since you prefer to link statically). All because you felt it was better to not trust others and do it yourself. Not to mention the vast amount of time burnt re-implementing that which already works, and works extremely well.

    The code I'm working on uses a multitude of libraries -- STL, Boost (primarily for its shared_ptr's; we'd use more but much of it doesn't compile on our platform), OTL, libcurl, libxml, pcre, openssl, and others. In some cases we've ditched libraries and implemented our own solution (in particular, MQSeries, which sucked deeply). But to re-implement all of those libraries would literally add years to development. And to what purpose? To have a less feature complete, more buggy, less supportable code base?

    And, yes, we've even used libraries sometimes when the library pretty much sucks. Case in point is cgicc, which we used because it's one of the few C/C++ libraries that interfaces "properly" with fastcgi. It's full of bugs, full of really idiotic #define's, and doesn't implement things quite right... but fixing it took much less time than rewriting it from scratch. Because it doesn't do everything wrong, and there's no reason to toss the baby out with the bath water.

    No thanks. I'll happily replicate what's been done in every other scientific and engineering discipline -- to stand on the shoulders of giants while adding my own knowledge to the repository.

    But when a package links against it for the sake of using a single function that the programmer could have reproduced in under ten lines of code... Well, that just screams "laziness" to me.

    Sure. But that situation is pretty rare, at least among competent developers. If you're seeing it commonly, then you're using crap packages (and god knows there's a ton out there... I've ditched many packages because they had too many esoteric dependencies).
  • Re:Personally (Score:3, Insightful)

    by Woody77 ( 118089 ) on Tuesday March 30, 2004 @03:37PM (#8717544)
    It's why I recommend Gentoo to those that are computer-literate and interested in trying Linux. You learn so much about how the computer works during the installation.

    I grew up with PCs as they grew up, and learned DOS/Windows through all of its incarnations (well, Windows 3.1 and later). And I realize that I can handle XP MUCH better than most people I know that came into it later and don't understand how the low levels of the OS fit together, and what does what.

    I once saw the definition of an expert as someone who knew the low level so well that all of the high-level stuff was obvious. I'm nowhere near that (I don't think anyone is with Windows, at this point), but that's the route that I like to go. It's so much easier to debug things when you understand that the computer is a system, what the parts are, and what the core required things are to get it functional.

    Gentoo's install steps are essentially a how-to guide for bringing up a box after it falls on its face. Something often learned the hard way. It's really quite simple, and most of it could be automated, but I think they have intentionally left it manual:

    A) It requires you to learn to use it
    B) It raises the bar on the quality of noobs.

    I'd rather start someone on an OS where they need to learn how it works than on one where it's all magic. Because magic only goes so far.
  • by ArchAngelQ ( 35053 ) on Tuesday March 30, 2004 @03:37PM (#8717548) Homepage Journal
    the OP should be modded -5 flamebait?
  • Try FreeBSD Unix (Score:1, Insightful)

    by Anonymous Coward on Tuesday March 30, 2004 @03:50PM (#8717700)
    So if you like compiling, you would like FreeBSD Unix and/or Slackware Linux. Your professor, since he or she loves binary packages, would like Debian GNU/Linux. I recommend FreeBSD, as IMHO it is faster than any Linux distro (even Slack) and tends to be more stable as well. Thanks to the FreeBSD ports you can install anything you want from source. They have the largest collection of open source software (larger than Debian's). www.freebsd.org ;)
  • by macdaddy ( 38372 ) on Tuesday March 30, 2004 @04:09PM (#8717918) Homepage Journal
    Here's another perspective. I've been admining Linux systems for years and I have never had to pay for Linux support. I contend that a competent sysadmin using an open source piece of software needs no support other than what's available freely online in FAQs, web forums, newsgroups, mailing lists and his own professional experience. They certainly don't need commercial support. Hardware is a different story. How many people do you know who can still perform component-level repair? How likely do you think it is that a competent component-level repair technician can get a schematic for an IBM mobo? First, I suspect they'll not know what you're asking for. Then, once they do realize what you're asking for, they'll laugh out loud. After that they'll probably suspect you have bad intentions for the data you're asking for and sic their legal team on you.

    In short, Support? Who needs it? Not me. Do you?

  • Re:Personally (Score:2, Insightful)

    by jrnchimera ( 558684 ) on Tuesday March 30, 2004 @04:09PM (#8717925) Homepage
    The problem is that Gentoo's Portage is made specifically for Gentoo, with some configurations that only work out of the box on a Gentoo system. So yes, you can use Portage on any distribution, but it will not work as well, and you will most likely have problems getting stuff to compile without constantly having to tweak ebuild files, etc. The YOS Linux distribution uses Portage, and though YOS is nice, the Portage system rarely works as well as it does on a real Gentoo box.
  • by Gilk180 ( 513755 ) on Tuesday March 30, 2004 @04:20PM (#8718051)
    I agree completely for vi, grep, etc.

    However for glibc or other common libraries you gain much more than if you hacked sendmail or any other service.

    If you have a backdoor in glibc, nearly ANY program will activate it. You just wait until a setuid root program accesses something in the library, and you have your exploit.

    Or if you need something that stays resident, have it insert a kernel module that hides its own existence and does whatever you need, or launches and hides another process that does what you want.

    In the end, putting a backdoor in a common library has many advantages over putting it in any single program or service.
  • by karlandtanya ( 601084 ) on Tuesday March 30, 2004 @04:41PM (#8718285)
    If the "standard" package gets the job done, leave it alone.


    I know there is temptation to make things a little bit better, but support after you're gone is the issue.


    The genius who designs a system that only (s)he can maintain is a poor engineer.


    Find out what your customer's (the prof sounds like the customer in this context) requirements truly are. Is good enough good enough for the prof? If you give him what he wants and he finds out next week that it could have all been optimized to perform 0.5% better, will he be pissed? Functionality? Optimization? Robustness? Maintainability? Look & feel? Thorough documentation? Easy transfer of support (to the next slave^H^H^H^H^H student)?


    Meet those requirements with the minimum customization.


    Document the system. This may be a nightmare if the system has already been "tweaked" by the previous maintainers. If that's the case, it's even MORE important to simplify and document.


    Provide recovery tools--as simple as a set of drive backup images, or as complex as a set of scripts that rebuild the system from source. At a minimum, supply a system administrator's manual.


    Building a system for a customer to use is a completely different endeavor from elaborately tweaking your own box so it is just exactly the way you like it.
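
    Even a dumb wrapper script counts as a recovery tool, as long as it's documented. A minimal sketch, assuming a Debian-ish box -- every path and file name below is hypothetical:

        #!/bin/sh
        # rebuild.sh -- reinstall this server's packages and restore its configs
        set -e
        PKGLIST=/root/package-list.txt    # one package name per line (hypothetical)
        BACKUP=/root/etc-backup.tar.gz    # snapshot of /etc taken at install time
        xargs apt-get -y install < "$PKGLIST"
        tar -xzpf "$BACKUP" -C /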

  • by rawg ( 23000 ) <phill@kenoyer. c o m> on Tuesday March 30, 2004 @04:53PM (#8718438) Homepage
    I am currently migrating to FreeBSD from Debian. The main reason is the ease of installing and maintaining software. With the FreeBSD Ports system, installing is easy.

    I get the latest stable software. I don't have to worry about crazy dependencies (I don't want MySQL dammit, I use Postgres). The software is in a standard place. It's easy to tweak things.

    I also find that FreeBSD is much faster than my Linux system... Especially RedHat.
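
    On the "easy to tweak" point: most ports take build knobs right on the make command line. Knob names vary per port, so the ones below are purely illustrative:

        cd /usr/ports/lang/php4
        make WITH_PGSQL=yes WITHOUT_MYSQL=yes install clean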
  • by Sleepy ( 4551 ) on Tuesday March 30, 2004 @06:11PM (#8719280) Homepage
    Usually when one builds from source, they install it to wherever the original developer has it set to by default. Unless you did some heavy patching, the software will very likely be more "true" to the original software than many packages.

    Correct me if I am wrong, but are you contradicting yourself here? Gentoo DOES use developer source, but they ALSO do what you call "heavy patching".

    I interpret this "source vs package" debate to be something different: what is the NORM for your distribution, and are you using the OS in ways that were not tested by the vendor's SQA team?

    For example, ANY of these distros can get borked if you install Ximian on top of them and THEN go back to the vendor for updates. It wouldn't matter if you did it from source or packages.

    Same with Alien packages on Debian, or "Redhat centric" rpms on Mandrake or SuSE.

    Bottom line is don't mix oil and water. :-)

    I agree with your comments about what is good with Gentoo. I happen to like Gentoo and FreeBSD for the very reason that there's a BAZILLION source packages that all have cross-testing against each other. Same for Debian I suppose.

    Best thing RedHat ever did for their desktop distro was set it free. They NEVER wanted to be in the business of supporting user-borked desktops when they install random stuff from the net, and they never wanted to manage and QA a large repository. Now it looks like there's a Fedora community (two actually) addressing the package distribution issue. Good for them.
  • Re:Amen. (Score:2, Insightful)

    by dotKAMbot ( 444069 ) on Tuesday March 30, 2004 @06:29PM (#8719532) Homepage
    Seriously dude, you are being ridiculous.

    What are you saying? That because a majority of the people in the world use Windows, Gentoo should have a flashy installer?

    If we give all the distros flashy installers and gear them to be simple and not as powerful, I will be in chains with the rest of them, so let's cut the nonsense.

    People use Windows/Mac/Fedora/Gentoo/BSD/Amiga/etc because they want to, and that's what fits them best. It makes sense, and there is nothing wrong with any of those choices. Stop trying to save those that don't want saving.

    daniel
  • by hak1du ( 761835 ) on Tuesday March 30, 2004 @08:13PM (#8720617) Journal
    Packages in a distribution like Debian update and uninstall cleanly, you can build every one from source if you want to, and someone else has worried about (1) testing the binary and (2) getting all the dependencies right.

    Build from source if you need the software and no package exists, or if you really, really need a processor-specific version. But for most applications, go with the pre-packaged version: as a system manager, there are a lot more useful things you can do than recompile "ls" on a dozen machines.
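
    And if you do occasionally need a local rebuild, Debian keeps the package manager in the loop (package name illustrative; requires deb-src lines in sources.list):

        apt-get build-dep postfix         # install everything needed to compile it
        apt-get source --compile postfix  # fetch the source package and build .debs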
  • My two cents (Score:3, Insightful)

    by digitalhermit ( 113459 ) on Tuesday March 30, 2004 @09:36PM (#8721201) Homepage
    You probably won't ever see this because of how late I'm posting... However:

    Building from source is great if you want to tweak a system and get it running exactly how you imagine. Be prepared for configuration work and all the various issues associated with source builds. I'm assuming that even if you build from source, you are using some sort of package/file management system to alert you to dependencies and file modifications. This is easy to do with binary packages, not so easy when managing sources. I regularly rebuild *on my test machines* all manner of software from source, including the kernel, KDE, glibc and a bunch of other libraries.

    Now for the problems with source builds:
    1) You need a development machine. I.e., you need the compiler tools and libraries. For a regular workstation this is no problem, but you DO NOT want these tools accessible on a server even if they're 'chmod 700' or otherwise locked away. This means you'll build on another machine and create a binary package and... well, you're back where you started except you lost some time.

    2) There's no easy way to create snapshots of packages. Differences in libraries and config files can make or break software. The best errors are those that prevent the software from compiling; the worst are those that compile, but whose errors or weirdness don't show up until a month later. Now, RPM is much maligned, but it does allow you to keep the build instructions, dependency information, etc. inside the package. Once you've learned RPM, you get lots of control over where things get installed.

    3) Backouts are not as easy. You can often do a 'make uninstall' but this requires the sources be kept around in some cases. Tools like checkinstall can ease the burden, however.

    4) Duplication of effort. Source builds are good for customizing, as I mentioned. It's a myth, however, that rebuilding from source will dramatically improve performance except in a few, somewhat rare cases. E.g., rebuilding a 2.4 kernel with a preemptible patch can make your desktop faster; rebuilding a stock 2.4 from kernel.org or your distro's sources will likely make no noticeable difference.
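
    To make points 2 and 3 concrete: rebuilding from a source RPM keeps the package manager in the loop, and checkinstall does the same for plain tarballs. Names and paths below are illustrative:

        rpmbuild --rebuild foo-1.0-1.src.rpm
        rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.0-1.i386.rpm

        ./configure && make
        checkinstall    # wraps "make install" and produces a trackable package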
  • by More Trouble ( 211162 ) on Tuesday March 30, 2004 @10:43PM (#8721624)
    When it comes to testing the software, it must be done against a baselined distribution.

    Sure, but by the same token you wouldn't have one baseline system. In our software development lab, we have a couple of supported RH versions, SUSE, Debian, Mandrake, FreeBSD, OpenBSD, and Solaris.

  • Re:--No-Deps (Score:3, Insightful)

    by Zathrus ( 232140 ) on Wednesday March 31, 2004 @03:03AM (#8723023) Homepage
    Why can't we just include the dependencies? Is that so hard?

    Good point, but it may actually be hard if your code is under a different license from a library you're using (namely, your code is more restrictive). But I'm not sure that's an issue either.

    This breaks a lot of programs, and in an RPM system only one version of a library or program can be installed??

    Well, there are ways around that. But if the program was linked against libfoo.so instead of libfoo.so.17 then you're pretty well screwed.

    This, BTW, is the exact same thing as "DLL hell" on Windows systems, where multiple copies of a DLL may be installed, or a program may rampantly overwrite the existing version with its own (even if it's older!). Same story, different name...

    But still, it's a bandaid solution for a big problem

    Well, Gentoo's emerge system is essentially the same as ports, from what I understand. But you're right -- it's a bandaid solution. What's the real fix? I dunno. If programs were linked against the full versioned name of their libraries (instead of the symlink/hardlink shortened name) then it'd probably fix itself. Package managers are certainly capable of telling when a library is still required by a package, and there's no reason to remove old versions of a library until they're no longer required. It'd take a lot of packages being reworked, and probably a lot of devtools as well (like autoconf, automake, etc).
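
    (To make the naming point concrete -- "libfoo" is of course hypothetical. The linker records the soname, not the file name you pointed it at:)

        gcc -shared -Wl,-soname,libfoo.so.17 -o libfoo.so.17.3 foo.o
        ln -s libfoo.so.17.3 libfoo.so.17   # runtime name, used by the loader
        ln -s libfoo.so.17 libfoo.so        # build-time name, used by -lfoo
        gcc -o app main.c -L. -lfoo         # app now depends on libfoo.so.17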
  • by Karora ( 214807 ) on Wednesday March 31, 2004 @05:00AM (#8723459) Homepage

    As the numbers of machines you manage increases, you will find the meaning of the word "control" changes. We only manage a couple of hundred, but the pressure to standardise, as far as is practicable, is a strong one.

    Look at the people running clusters, and you can see where that gets to in the end.

    The reason we (primarily) use Debian is that its potential architectures for distributing change, and for customisation with binary releases, seem to be much greater.
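
    One concrete example of that kind of control: cloning one box's package set onto many (file name illustrative):

        dpkg --get-selections > selections.txt    # on the reference machine
        dpkg --set-selections < selections.txt    # on each target
        apt-get dselect-upgrade                   # install/remove to match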
