Build From Source vs. Packages? 863

mod_critical asks: "I am a student at the University of Minnesota, and I work with a professor performing research and managing more than ten Linux-based servers. When it comes to installing services on these machines, I am a die-hard build-from-source fanatic, while the professor I work with prefers to install and maintain everything from packages. I want to know what Slashdot readers tend to think is the best way to do things. How do you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?"
This discussion has been archived. No new comments can be posted.

  • Personally (Score:5, Interesting)

    by jwthompson2 ( 749521 ) * on Tuesday March 30, 2004 @01:48PM (#8715987) Homepage
    I do a bit of both. I predominantly install items from packages, when available, for testing and review of something new that I am interested in. Once I establish whether what I have been playing with may be useful for some particular purpose, I will research the source build options. If there are specific optimizations that can be made for my system's hardware or pre-installed software, I will then look at installing from source in order to leverage those optimizations; but if compiling offers no worthwhile optimizations, I will install from packages any time I want that software.

    That is my way of handling things; do what fits your needs best. That's why we have the option.
  • Both? (Score:2, Interesting)

    by jjares ( 141954 ) on Tuesday March 30, 2004 @01:49PM (#8715997) Homepage
    I actually make my own Gentoo ebuilds and build everything by emerging them... so, both.
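
    For anyone curious, a bare-bones ebuild really isn't much work. Something along these lines (package name and URL made up for illustration):

        # foo-1.0.ebuild -- hypothetical example
        DESCRIPTION="An example package"
        HOMEPAGE="http://example.org/"
        SRC_URI="http://example.org/${P}.tar.gz"
        LICENSE="GPL-2"
        SLOT="0"
        KEYWORDS="x86"
        IUSE=""

        src_compile() {
            econf || die "configure failed"
            emake || die "make failed"
        }

        src_install() {
            make DESTDIR="${D}" install || die "install failed"
        }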
  • by untermensch ( 227534 ) * on Tuesday March 30, 2004 @01:50PM (#8716023)
    If you are working for someone else, maintaining servers that are intended for performing specific tasks, then I think the best solution is to do whatever is most efficient at performing those tasks. If you really don't need the performance gains brought by compiling from source (and you probably don't) and it's going to take you a long time to do the compiling, time that could be better spent actually doing the research, then it's not worth your effort. If, however, the compiling doesn't affect the users' ability to be productive, and that is what you as sysadmin are most comfortable with, then it seems reasonable that you should be able to maintain the boxes however you like.
  • OSX (Score:4, Interesting)

    by artlu ( 265391 ) <artlu@art[ ]net ['lu.' in gap]> on Tuesday March 30, 2004 @01:51PM (#8716041) Homepage Journal
    I used to be a huge Debian fan because of apt-get and the direct install of packages, but I have migrated to OSX and find myself needing to build packages from scratch to get them to work correctly. That said, I never hesitate to use Fink as much as possible. I think for 90% of what gets installed the packages should be fine, but if you know there are certain optimizations you can implement, why not build from scratch?
  • My experience (Score:3, Interesting)

    by Jediman1138 ( 680354 ) on Tuesday March 30, 2004 @01:52PM (#8716048) Homepage Journal
    Disclaimer: I'm only 15 and am a semi-newbie to Linux.

    Anyways, I've found that by far the easiest, simplest, and most time-saving method is to use RPMs or debs. But of any distro, Lindows has it down to one or two clicks... though their software database subscription is a serious money leech.

    If it were up to me, source would always be an option, and the install process for RPMs and debs would be one click, with programs automatically adding themselves to menus and such..

    Just a few thoughts..

  • by imbaczek ( 690596 ) <imbaczekNO@SPAMpoczta.fm> on Tuesday March 30, 2004 @01:54PM (#8716082) Journal

    Whenever a binary package for Debian is available, I prefer it to hand-compiled source. First, it has all the Debian patches it needs. Second, it probably installs without a hassle. Third, it's easy to get rid of, and last but not least, apt resolves dependency problems without human intervention in 99.9% of cases.

    In other words, binary packages work for me :)

  • by Anonymous Coward on Tuesday March 30, 2004 @01:56PM (#8716117)
    Speaking of RedHat doing something weird... RedHat managed to _rename_ p_pptr to parent in task_struct in the kernel. How did they manage to get away with something like that? If there are custom kernel modules that happen to want to use p_pptr, then everything breaks!
  • Re:Personally (Score:2, Interesting)

    by LoveTheIRS ( 726310 ) on Tuesday March 30, 2004 @01:56PM (#8716131) Homepage Journal
    I run Fedora Core 2. The packages I have downloaded have not always been compiled to do everything that I need. Also, the packages are sometimes a couple of revisions behind, so in that case I tend to build from source. I am ambivalent about Gentoo because on one hand you can get code optimized just for your machine, but on the other hand you have to wait for days for a working system. I'd say the best approach for you would be to start your system out with the packages and then compile your own rpms (or whatever else you are using; I have never been successful at doing so, but it is supposed to be easy). Then you have the best of both worlds: compiled code with everything you need, and all the installed files managed by an rpm database. My 2 cents.
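
    (P.S. For reference, the recipe everyone quotes for rebuilding is just this; package name made up:

        rpmbuild --rebuild foo-1.2-3.src.rpm
        rpm -Uvh /usr/src/redhat/RPMS/i386/foo-1.2-3.i386.rpm

    --rebuild unpacks the SRPM, compiles it on your own box, and leaves a binary RPM behind.)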
  • by Theovon ( 109752 ) on Tuesday March 30, 2004 @01:58PM (#8716160)
    I've often had a lot of trouble building programs from downloaded tarballs. Besides mysterious dependencies that I can't track down, sometimes things just don't compile, or they crash, or they produce errors of other sorts. But in many of those cases, I could download, say, an RPM of supposedly the same package, and it would install just fine.

    With Gentoo, on the other hand, I've never had any problems. Emerging new packages deals properly with all dependencies, and things always compile correctly. And there's a review process where packages are first added to portage as "unstable" and then, once they have passed everyone's criticism, they are added to "stable". So far, the only "unstable" package I've decided to emerge was Linux kernel 2.6.4, and that all worked out brilliantly.

    Also, if you have a cluster of computers, you can do distributed compiles with distcc (and, I think, at least one other package). Gentoo documents this VERY well. Plus, if your cluster is all identical machines, you can build binary packages once and then install them onto all the other machines.
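
    If memory serves, the distcc setup on Gentoo is roughly this (the host addresses are whatever your helper boxes happen to be):

        # on the box doing the emerging
        distcc-config --set-hosts "192.168.0.2 192.168.0.3 localhost"

        # and in /etc/make.conf:
        FEATURES="distcc"
        MAKEOPTS="-j8"    # rule of thumb: about 2x the total number of CPUs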

    BTW, Gentoo isn't for everyone. The learning curve is STEEP. I had to start from scratch and do it all a second time before I got everything right. (Although I am a bit of a dolt.) Setting up is complex but VERY WELL documented. Only once you've finished building your base system does the extreme convenience of portage become evident.

    Also, there are still a few minor unresolved issues that no one seems to have a clue about.
  • Re:Personally (Score:5, Interesting)

    by allyourbasebelongtou ( 765748 ) on Tuesday March 30, 2004 @01:58PM (#8716163) Homepage
    In short: I have to agree--I do a bit of both, too.

    The main thing I encounter that keeps me from using them all the time is the need for specific add-ons that aren't available as part of packages but are available when rolling my own.

    As an aside, there are certain bits that I just prefer to compile myself, for any number of reasons.

    That said, there are other bits of software that are pretty generic items that the packages make *trivially* easy to work with, and where compiling those same things from scratch--particularly on older hardware--makes you get a bit long-in-the-tooth waiting for the compile to return.

    To me, this is truly one of the ultimate beauties of open source: you're not stuck with pre-built, but you can leverage it when it makes sense.
  • yes, hybrid (Score:2, Interesting)

    by CrudPuppy ( 33870 ) on Tuesday March 30, 2004 @01:59PM (#8716193) Homepage

    build packages from source exactly how you want them, make a tarball of that, and then use ssh and key trusts to shoot them out everywhere (this coming from a person who maintains almost 1000 servers)
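
    something like this, with made-up hostnames and filenames:

        for host in $(cat prod-hosts.txt); do
            scp httpd-custom.tar.gz ${host}:/tmp/ &&
            ssh ${host} "tar xzpf /tmp/httpd-custom.tar.gz -C /"
        done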

    it works very well.
  • Do both (Score:2, Interesting)

    by bigbadbob0 ( 726480 ) on Tuesday March 30, 2004 @02:03PM (#8716258)
    Why not build from source on machine 1, then have machine 1 build a package to use on machines 2->n? Yahoo! Best of both worlds.
  • Ports or Portage (Score:5, Interesting)

    by iiioxx ( 610652 ) <iiioxx@gmail.com> on Tuesday March 30, 2004 @02:04PM (#8716290)
    As a FreeBSD user, I build almost everything from source using ports. I never install from packages. My reasons for this are many and varied, but basically, I prefer to build software myself, with the precise options I need. When you use packages, you are at the mercy of the packager and their preference for options and optimizations. Several years ago, when I used Linux, I often encountered problems with pre-built packages lacking a particular build option, sometimes installing to odd places, or other strangeness.
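
    "Precise options" in ports-speak just means passing knobs to make; each port defines its own in its Makefile, so the knob name below is purely illustrative:

        cd /usr/ports/www/apache13
        make WITH_SOME_OPTION=yes install clean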

    And once you've started using packages and package management, it gets harder to introduce source-built software into the same environment without screwing up your dependency databases, or worse - breaking things. So if a package lacks a required option, you really have to build your own package with the option included in order to keep things orderly. That's a lot more work than just installing from source.

    I'm not a Linux user anymore (several reasons), but if I were to go back to Linux, I would use Gentoo, specifically for its Portage system.

    So, in my opinion, building from source may be a little more time and CPU consuming, but it is the better option for a controlled, tailored environment.
  • by whoever57 ( 658626 ) on Tuesday March 30, 2004 @02:08PM (#8716332) Journal
    I have one Gentoo machine that is my "compile server". On this, I build binary packages, which go into the portage tree. All other Gentoo machines on the network then sync against the compile server (instead of using "emerge sync"), and thus also get the binary packages.

    Then, on the other machines, I install from the binaries.

    This allows me to test the installs first, resolve any problems, etc.

    Furthermore, to speed up the process, several machines run DISTCC and are used as clients of the compile server.
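
    In practice that boils down to something like this (the hostname is made up, and "foo" stands in for whatever package):

        # on the compile server: build and keep a binary package
        emerge --buildpkg foo            # drops a .tbz2 under ${PKGDIR}

        # on each client: pull the packages, then install from the binaries
        rsync -a buildbox:/usr/portage/packages/ /usr/portage/packages/
        emerge --usepkg foo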

  • by adamjaskie ( 310474 ) on Tuesday March 30, 2004 @02:09PM (#8716350) Homepage
    I run Slackware. Most of the major stuff I need is available as official packages from Pat, and quite a bit of other stuff is available on LinuxPackages.net. I will usually look first to see if there is an official package, and if not, I will take a quick look on LinuxPackages.net, but those are usually a bit out of date, so I usually end up just downloading the source and compiling it. I see nothing wrong with compiling my own stuff, as it doesn't take much longer. With checkinstall, I can even enter it into the package management system, making it easier to uninstall in the future.
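
    For those who haven't seen it, checkinstall just stands in for "make install" at the end of the usual dance:

        ./configure && make
        checkinstall -S    # -S emits a Slackware .tgz instead of an RPM or .deb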
  • by Anonymous Coward on Tuesday March 30, 2004 @02:09PM (#8716354)
    It has been my experience that packages don't always put things where they should. When building from source, you typically leave the "prefix=" option at its default, which is what the software writer intended.

    Qt is a good example.

    When installing Qt from source, you are told in the install doc where everything is going to go and you are asked to set the QTDIR environment variable by hand. This variable is nowhere to be found with a package. Without this variable it is difficult to find where Qt is installed if you want to do anything with it.
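
    From the Qt install doc, the by-hand setup is roughly this (assuming Qt landed in /usr/local/qt, which IIRC is the source-install default):

        export QTDIR=/usr/local/qt
        export PATH=$QTDIR/bin:$PATH
        export LD_LIBRARY_PATH=$QTDIR/lib:$LD_LIBRARY_PATH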

    Also, I have found that installing packages that are dependencies of other packages does not always guarantee that they will be recognized by the depending package, whereas they almost always are when building things from source.

    my 2 cents
  • Re:Personally (Score:5, Interesting)

    by Aneurysm9 ( 723000 ) on Tuesday March 30, 2004 @02:11PM (#8716392)
    You don't have to wait days to get a working Gentoo system. With the GRP CDs you can have a working system up and running in a few hours. It's still going to take more time than Fedora or SuSE, but it will be optimized more for your platform, with the option of recompiling for further optimization. That's how I set up Gentoo on my laptop, as it's hideously slow. Over time it's had almost everything recompiled a piece at a time, but I didn't have to wait for it to do everything from glibc up at once.
  • by Stonent1 ( 594886 ) <`ten.kralctniop.tnenots' `ta' `tnenots'> on Tuesday March 30, 2004 @02:14PM (#8716426) Journal
    But why would you ever use Gentoo in a production environment?

    Security updates w/o waiting for them to be packaged?
  • by thinkninja ( 606538 ) on Tuesday March 30, 2004 @02:15PM (#8716445) Homepage Journal
    Duck and cover, incoming Gentoo zealots :P

    Personally, I install from packages (apt) wherever possible. If something is unpackaged and looks new and shiny, then I'll install from source. I really can't imagine managing a large number of applications without a package manager, even if it's something you've written yourself.

    If installing everything from source is your thing, you're probably already using Gentoo, with its package management. So the question is moot.
  • by idontgno ( 624372 ) on Tuesday March 30, 2004 @02:15PM (#8716456) Journal
    (And try installing the native Java from BSD ports - several hours of pure joy!)

    I'm not sure whether to mod you -2 BSDTroll or +1 BSDFunny. However, I'll comment instead. (Commented earlier downthread, so it's already a foregone decision, but what the hey, you only offtopic once.)

    The only joy I get watching compiler messages scroll by is laughing my butt off watching all the warnings. Don't these people use lint?

    And that's funny only if I'm already in a good mood. Otherwise, I hate having to actually watch the unavoidable visible indicators of the quality of the software I'm about to start using. Just like most people don't like watching sausage being made...from live pigs...

    Yeah, I know, if I know so much, why don't I fix it? Because I didn't sign up for indentured servitude; I just want to use the damn software. I realize that violates the canon of Open Source ethics in the minds of the extremists, but I have a job to do, and it's not fixing your damn object cast mismatches.

    OK, ok, cooling down now.

    Thank you, in all sincerity, to the authors of those software packages. Please forgive me if watching 2423 warnings per compile cycle makes me a little crazy.

    And that's why it was the best summer ever!

  • Not always the case (Score:5, Interesting)

    by Anonymous Coward on Tuesday March 30, 2004 @02:21PM (#8716549)
    Sometimes the exact opposite is true, especially in terms of "community support". For instance, mod_perl, of which Red Hat for some reason decided to ship a very early version. The typical response on the mailing lists for mod_perl or any other alpha/beta package RH included usually goes "try it from source, then email us" (that's after someone submits a reasonably complete bug report).

    Let's not forget the GCC fiasco and probably dozens of other examples where RH decided to "lead the pack" in terms of version numbers but not stability.

    Of course, then there's Debian woody, living in circa-2001 land.
  • by Kourino ( 206616 ) on Tuesday March 30, 2004 @02:22PM (#8716560) Homepage
    Optimization? Control?

    Man, what is this, Gentoo?

    Any sane distributor these days builds binary packages with reasonable optimizations that won't break across architecture submodels, and occasionally releases binaries targeting submodels (e.g. PentiumPro-specific packages). On many machines, for many workloads, however, the model-specific optimizations just aren't that helpful. Obvious exceptions are floating-point math on most platforms (especially x86, where x87 math code is a dog and should be replaced with SSE code if possible) and - I'm told - really slow hardware. (I'll be able to test that once I get these Indys running GNU/Linux.) In my experience, Debian hasn't really felt any slower than my LFS systems for personal use.

    So, I'll say this: if you have enough time to build everything you're using, do some careful speed comparisons between your self-built packages and the vendor's binaries. If there's really a significant speed increase, and you need that increase, source is the only way to go for the packages that need the speed increase. Otherwise, it's probably not worth your time.

    Unless whatever you're doing is extremely security critical, you can probably deal with the fact that server app foo has features bar and baz installed that you won't use. If you can't, you're probably auditing the source of everything you use anyway, and that doesn't sound like the case, so "control" probably isn't a real issue here either. Control can be found in config files as well as in the configure script.

    People say, "but package dependencies suck!" Well, yes, rpm (the program) isn't built to deal with dependencies that gracefully. If it annoys you that much, go install apt-rpm or something, or even Debian (gods forbid). Package management isn't rocket science.
  • Re:Personally (Score:2, Interesting)

    by JustinMWard ( 456415 ) on Tuesday March 30, 2004 @02:25PM (#8716593) Homepage
    Not to knock Gentoo, but don't expect the install to take longer just because you're compiling things. The install process itself is very, very unfinished. While some of this might be in the name of customization, some of it is just the result of being a very unfinished process. My favorite example is that you have to link the correct timezone files by hand, instead of choosing your timezone out of a list. Sure, it's a little detail, but it adds up. You've also got to make your own partitions (via fdisk) and do your own formatting (via mkfs): again, no more choice, just more grunt work. The installer is full of things like this.

    It's not a bad little distro, IMO. But the installer has a *long* way to go.
  • by jellomizer ( 103300 ) on Tuesday March 30, 2004 @02:29PM (#8716639)
    If the professor has some sort of grant, he may prefer packages because they are quicker to set up and save time, so you can be more productive in other areas. If it is some sort of continuing income, then you might as well try to encourage recompiling from source, because you get more out of it educationally.
  • by cbreaker ( 561297 ) on Tuesday March 30, 2004 @02:31PM (#8716663) Journal
    No way.

    Usually when one builds from source, they install it to wherever the original developer has it set by default. Unless you did some heavy patching, the software will very likely be more "true" to the original software than many packages.

    RPMs for distributions such as RedHat or Fedora often have to move configuration files all over the place to mesh with the OS properly.

    You're more likely to be able to sit down at a strange Linux box and troubleshoot whatever program when it's compiled from source tarballs versus an RPM. Unless of course, you know the RPM, or the RPM doesn't do anything funky.

    Considering the stuff is Open Source, and chances are the programs are not under a paid-for support contract, it's pretty safe to say that BOTH methods would have to be supported "in house". And if not, your support contract could very well cover the source-compiled versions anyway.

    I choose the Gentoo way. Everything is compiled from source; it's just nice and automated. Almost never have I run into something where the program had to be modified to fit the distribution.
  • by MerlynEmrys67 ( 583469 ) on Tuesday March 30, 2004 @02:31PM (#8716673)
    I would choose a distribution based on whether you want source or binary packaging. Don't bother fighting your distribution (you'll get the worst of both worlds).

    That said - for a work machine, I prefer binary packages. I just want the damned thing to work, work well, and not have to futz with it.

    For a hobby/play/research machine - I prefer source packages. I have found there are many compilers out there that will massively outperform GCC, especially when you turn on those crazy optimizations that most binary distributions won't (plus optimize for the EXACT processor I am running on, etc.)

  • by Poppa_Chubby ( 263725 ) on Tuesday March 30, 2004 @02:32PM (#8716678)
    I'm pretty much just the opposite of where you're at. I generally prefer to use packages on a workstation and src on servers. The reason is that workstations generally have a vast amount of software installed, with the accompanying dependency hell. Servers, on the other hand, usually only need one or two applications installed, and it's easy and preferable to maintain that by hand.

    However, this goal is difficult at best to undertake with most linux distributions, since everything is maintained through packages and the whole concept of third party software is very blurry. In the BSD world, that line is strongly delineated, so maintaining BSD servers with src installations tends to be much easier.

  • by strider( corinth ) ( 246023 ) on Tuesday March 30, 2004 @02:35PM (#8716729) Homepage
    My arguments on why to use a source-based distribution have been covered in other posts, so I won't repeat them here. I think Gentoo provides a solution that will satisfy both you and your professor: you can use a source-based, custom-built binary distribution.

    As you probably know, Gentoo is a source-based distribution, but it also allows binary packages. Many (such as Mozilla Firefox) are distributed by Gentoo as both source and binary; you can choose to install either. Building a binary package from a source .ebuild (the file that tells the system where to find the source and how to build it) requires adding only a single flag to the package build command, ebuild.
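
    Concretely, that looks something like this (the Firefox version here is just an example):

        emerge --buildpkg mozilla-firefox             # install it AND write a binary .tbz2
        # or, driving the ebuild directly:
        ebuild mozilla-firefox-0.8.ebuild package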

    Additionally, since (if I read you correctly) you're probably using similar hardware for each of your machines, it would be trivial to set up a compile box which would produce binary packages for your other boxen. Packages compiled for your architecture would be faster than most binary-only distributions (many are still compiled for the i386 architecture), and writing a new ebuild is trivial compared to writing a new spec file. (Trust me; I spent a quarter writing a paper on the topic while I was in school, not to mention having had to do it myself in the Real World.)

    Finally, Gentoo integrates and tests its packages. Ebuilds come with Gentoo-specific patches, so you don't have to spend the time to make each source package work with the rest. This is probably one reason why your professor likes binary distributions: they all work together, and enough people rely on them that if something breaks, it gets fixed. A package-based Gentoo distribution would allow you to leverage that, while keeping your machines unified in their versioning (as much as you want them to be, at least) and also provide all of the benefits of a source-based distribution.
  • Re:Personally (Score:5, Interesting)

    by jobsagoodun ( 669748 ) on Tuesday March 30, 2004 @02:38PM (#8716772)
    BUT

    The really great thing is how well it wears. I've got RH8 and RH9 installations that have lots of other bits & bobs installed, mainly from tgz's I've pulled down and built. It's an arseabout, and both boxes are cluttered with stuff - and as soon as you go off piste with an installed package, you're on your own.

    OTOH I also have a couple of gentoo installations, and for nearly everything I want, I can just 'emerge xyz' and presto, it's there. It was a pain getting it installed, but now that it's there it is really, really good. Upgrading it was piss easy, too.

    If only I could get portage/emerge for redhat...
  • Depends. (Score:2, Interesting)

    by WWWWolf ( 2428 ) <wwwwolf@iki.fi> on Tuesday March 30, 2004 @02:48PM (#8716907) Homepage

    My general idea is that if a pre-built binary is available, I use it unless there's a good reason not to. The pre-built binaries are not always 100% cool, at least according to some people, but they tend to work for me in most cases.

    I usually use prepackaged binaries if they're out there in a reasonably well-documented repository - that is, included in Debian; in some rare cases I might even consult apt-get.org.

    For stuff that Debian doesn't yet have, or that absolutely insists that I build from CVS, there's always GNU Stow for easy management of stuff. I also build the kernel from source using make-kpkg (because, once upon a time, it was a great Heresy to use the Pre-Packaged, Unoptimal Kernel, and building the kernel seemed to be everyone's baptism by fire, so to speak).
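
    The Stow trick, for reference: configure each program into its own subdirectory of /usr/local/stow, then let stow symlink it into place (package name made up):

        ./configure --prefix=/usr/local/stow/foo-1.0
        make && make install
        cd /usr/local/stow && stow foo-1.0    # symlinks it all into /usr/local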

    The reason I often rely on pre-built binaries is that I'm a very patient person except when installing software (having had my share of installing proggies for friends and relatives tends to hurt one's very being), and I just prefer a quick and easy installation.

    Building from source always seems to involve installing required development kits, and then a million and one little bits and packages in semi-random order. There have been some pathological cases, like mp1e / rte / whatever the hell it was, that seemed so complex and convoluted that I needed a week's rest afterward.

    Then there have been cases where I haven't even been able to build the things, due to system constraints. Back in the early days of GNOME, it was hell trying to compile MICO on my Pentium 166MHz when I had a meager 32 megs of physical memory and was trying to grab the last available bits of swap space from my 6-gigabyte disk... Oh, and this happens occasionally even in recent times: I was unable to build Ardour on my current machine. Glad I found it on apt-get.org, and it's now in the main Debian tree too.

    I'm just secretly hoping that Debian goes i586 instead of i386 some time...

  • by infra-red ( 121451 ) on Tuesday March 30, 2004 @03:04PM (#8717098)
    If you want to rebuild a package to get the optimizations out of it, you should probably learn how to build the packages.

    Build once, deploy everywhere makes things easy to maintain. The last time we had to do a massive openssh upgrade on our equipment, the rpm-based boxes were done in 15 minutes, while the source-based boxes took about 2 hours. The real kicker is that we had (at that point) about 3 times as many rpm-based systems as source-based ones.

    Source is great for the hobbyist, but as a sysadmin, I won't touch it.
  • by petrus4 ( 213815 ) on Tuesday March 30, 2004 @03:15PM (#8717229) Homepage Journal
    >- You have an easily available source
    >of "known good" binaries if you have a
    >suspected intrusion problem.

    A rather dangerous assumption to my mind, this one. I've heard of Red Hat releases in particular making it to the shelves while still containing at least the odd security flaw. Of course you're not going to have time to go over everything with a fine-toothed comb, but if you know how to read code, I'd give at least the really critical apps a cursory once-over. It's better than your system going down or being invaded by some anarchistic 14-year-old, anywayz IMHO.

    As well as the security/stability issue, one of my main reasons for changing to Linux has been the level of customisability. I suppose we can let overworked corporate sysadmins off the hook for wanting to use predigested distros, particularly if they have to deploy to a lot of machines (even the most broken distro release is likely to be infinitely more secure than the IE+OE knock-out punch ;-)), but I'm not sure anyone else wanting to call themselves a respectable Linux user has an excuse.

    To me, compiling from source is one of the main reasons for using Linux: the ability to compile exactly for your CPU and particular environment, coupled with the security of knowing that what you're getting is exactly what you think it is, and not something that's going to turn your system into a script kiddie gang's next 0-day ircd.

    If you need something that can be deployed on a lot of machines, buy standard hardware that you know Linux supports (avoid exotic Winmodems, onboard cards, etc.), prototype from source on one machine, and then mirror it to the rest. To me, a secure, stable, well-configured system is something that cannot and should not be attained in five minutes, and any corporate sysadmin who thinks it should be possible ought to look for a career change. Just as in the rest of life there is no such thing as a free lunch, when it comes to security the emphasis should NOT be on shortcuts.

  • Re:Amen. (Score:1, Interesting)

    by Anonymous Coward on Tuesday March 30, 2004 @03:54PM (#8717729)
    and this is why I use Slackware.

    I find it immensely easier than any other distro when it comes to running bleeding-edge things. (Go ahead and try to install the latest ALSA beta releases and/or recent JACK releases on RedHat anything or Fedora... it's a frigging nightmare.)

    I don't use the "easy" distros for the same reason I don't use Windows...

    I have yet to try gentoo; I tend to like having my distro frozen in time on a set of CDs until I'm ready to jump to the next release. (Yes, I build my Slackware ISOs from slackware/current.)
  • by Kleedrac2 ( 257408 ) <{kleedrac} {at} {hotmail.com}> on Tuesday March 30, 2004 @04:38PM (#8718252) Homepage
    As there are over 500 comments, I'm assuming I'm being "-1 Redundant", but I'm also assuming moderators probably won't get this far ... come to think of it, neither will readers! Oh well. Anywho, I've always been a build-from-source kind of guy, but that's due (at least in part) to my FreeBSD background. In FreeBSD I had the best of both worlds: the ports list, which made it very easy to install a software package, and the fact that the ports list downloaded source and installed it! Nowadays I use Suse, and as such I can use RPMs; however, I usually find myself building from source whenever possible. One of those "just because I can doesn't mean I should" type of things. I think that until there is a universally accepted and implemented package format that simply works in all Linuxes, I'll stick to source, not packages.

    Kleedrac
  • by dotKAMbot ( 444069 ) on Tuesday March 30, 2004 @06:18PM (#8719371) Homepage
    Personally, I think you are silly if you use packages or ebuilds when it comes to Apache + modules. Your best bet is to just do it from source.

    I run gentoo, redhat or FreeBSD, and I never use any of their packages/ports/portage for Apache or MySQL anymore; it just rarely works out right if you have complex needs.

  • Numbers Matter (Score:2, Interesting)

    by smwalker ( 197560 ) on Tuesday March 30, 2004 @06:55PM (#8719842)
    1 Server, 1 Admin - Build from source
    5 Servers, 1 Admin - Build Packages and install
    1 Server, 5 Admins - Use Standard Packages
    5 Servers, 5 Admins - Build Packages with custom names/versions and install

    Seriously, I have 7 admins managing a mix of 160 servers.
    The simplest way I've found to get the best of both worlds is to D/L the source RPM (SRPM), customize to taste, modify the name slightly, rebuild, and distribute.

    For instance,
    Needed customized apache to support a couple of things we're doing.
    D/L apache SRPM
    Modify config files with our own patch
    modify configure line in SPEC file to suit
    modify package name (!Important!)
    rebuild
    uninstall old packages
    install our packages
    WA-LA
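
    In actual commands, the recipe is roughly this (package names and versions made up; adjust to taste):

        rpm -ivh apache-1.3.29-1.src.rpm          # unpacks into /usr/src/redhat
        cd /usr/src/redhat/SPECS
        vi apache.spec                            # add our patch, tweak the configure
                                                  # line, and change the package name
        rpmbuild -ba apache.spec
        rpm -e apache
        rpm -ivh /usr/src/redhat/RPMS/i386/apache-smw-1.3.29-1.i386.rpm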

    Advantages
    - still get to run up2date/autorpm/fav-update-package with no worries of breaking your own custom stuff
    - Know which packages you've modded by running rpm -q -a | grep "myinitials" or whatever.

    Disadvantage
    - Auto Update doesn't fix the stuff you're behind on...gotta keep up!

  • by ArsonSmith ( 13997 ) on Tuesday March 30, 2004 @07:13PM (#8720030) Journal
    Exactly why developers shouldn't be systems admins. All too often, source tarballs put things in the most f-ed up places. At least when I install a prepackaged Debian-supplied .deb, I know it will fit the system layout. Conf files, documentation, binaries, and libs will all be in the expected places, and not where the programmer felt like putting them. Some programmers would rather just run everything out of your home directory.

  • by Anonymous Coward on Tuesday March 30, 2004 @10:23PM (#8721508)
    Which is so much fun when you need to remove files splattered across 50% of your file tree, 5 directories deep, with the log entries mixed up with misc. garbage from libtool or whatever. But yeah, on the upside, at least you can do it. :)

    Personally, when I use source for maintenance, I just keep the source directory around in /usr/src, with all the object files, so I can quickly patch and upgrade later. This can be a benefit for deinstalling, as well, as you can usually figure out how the install worked, or just rerun, or save your install log files there, or whatever.

    Of course, most makefiles use the install command. I wonder why the install command is so spartan; there's really no reason it couldn't maintain a database of files installed, by what process, and at what time/date (although maybe time/date would be redundant, since that's already tracked in the file system).
  • by chegosaurus ( 98703 ) on Wednesday March 31, 2004 @09:45AM (#8724494) Homepage
    I do ./configure --prefix=/usr/local/pkg_name plus whatever other options, then make. When make finishes, I mkdir /usr/local/pkg_name-version, make /usr/local/pkg_name a symlink pointing to it, and then make install.

    I get all my applications in their own directory, and it's only a matter of changing a link to roll back a version or two. It's also easy to copy an app to another host.

    Some discretion is necessary here: I just dump a lot of small stuff straight into /usr/local (GNU utils like groff, less, stuff like that); only things like gimp, gcc, TeX, python etc. get their own directory. This keeps the PATH sensible.

    My main OS is Solaris, but I employ this technique on HP-UX, Linux, BSD, whatever I'm working on at the time. Keeps things simple for me, and it's easy to tell someone else just where things are.

    The only time I go outside the app dir is for things like logs, which always live in /var/log/app_name, and tablespaces. I always try to keep /usr/local as static as possible.

    As for maintaining consistency across a network - NFS?
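
    For the record, the whole dance with a concrete (made-up) example:

        ./configure --prefix=/usr/local/gimp
        make
        mkdir /usr/local/gimp-2.0
        ln -s gimp-2.0 /usr/local/gimp      # make install writes through the link
        make install

    Rolling back later is just re-pointing the link:

        rm /usr/local/gimp && ln -s gimp-1.2 /usr/local/gimp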
