Build From Source vs. Packages?
mod_critical asks: "I am a student at the University of Minnesota, and I work with a professor performing research and managing more than ten Linux-based servers. When it comes to installing services on these machines, I am a die-hard build-from-source fanatic, while the professor I work with prefers to install and maintain everything from packages. I want to know what Slashdot readers tend to think is the best way to do things. How do you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?"
Personally (Score:5, Interesting)
That is my way of handling things, do what fits your needs best, that's why we have this option.
Both? (Score:2, Interesting)
Whatever get the job done (Score:5, Interesting)
OSX (Score:4, Interesting)
My experience (Score:3, Interesting)
Anyways, I've found that by far the easiest and most time-saving method is to use rpms or debs. But of any distro, Lindows has it down to one or two clicks...though their software database subscription is a serious money leech..
If it was up to me, source would always be an option to use, and the install process for rpms and debs would be one click and automatically update themselves into Menus and such..
Just a few thoughts..
Packages, definitely. (Score:2, Interesting)
Whenever a binary package for Debian is available, I prefer it to hand-compiled source. First, it has all the Debian patches it needs. Second, it probably installs without a hassle. Third, it's easy to get rid of it, and last but not least, apt resolves dependency problems without human intervention in 99.9% of cases.
In other words, binary packages work for me :)
Re:Who are these people? (Score:5, Interesting)
Re:Personally (Score:2, Interesting)
Building from tarballs can be problematic (Score:4, Interesting)
On the other hand, I've never had any problems. Emerging new packages deals properly with all dependencies, and things always compile correctly. And there's like a review process where packages are first added to portage as "unstable" and then once they have passed everyone's criticism, they are added to "stable". So far, the only "unstable" package I've decided to emerge was Linux kernel 2.6.4, and that all worked out brilliantly.
Also, if you have a cluster of computers, you can do distributed compiles with, I think, distcc and/or some other package. Gentoo documents this VERY well. Plus, if your cluster is all identical machines, you can build binary packages once and then install them onto all other machines.
BTW, Gentoo isn't for everyone. The learning curve is STEEP. I had to start from scratch and do it all a second time before I got everything right. (Although I am a bit of a dolt.) Setting up is complex but VERY WELL documented. Only once you've finished building your base system does the extreme convenience of portage become evident.
Also, there are still a few minor unresolved issues that no one seems to have a clue about.
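The distcc/binary-package setup the parent describes mostly boils down to a few make.conf lines. A hedged sketch — the host names are hypothetical, and on Gentoo of that era the distcc host list could also be managed with distcc-config:

```shell
# /etc/make.conf fragment -- a sketch, not a drop-in config.
# 'distcc' farms compile jobs out to the hosts below; 'buildpkg' keeps a
# binary package of everything emerged, for reuse on identical machines.
FEATURES="distcc buildpkg"
MAKEOPTS="-j8"
DISTCC_HOSTS="localhost node1 node2"
```

On the clone machines, `emerge --usepkg <package>` then installs from the prebuilt binaries instead of recompiling.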
Re:Personally (Score:5, Interesting)
The main thing I encounter that keeps me from using them all the time is the need for specific add-ons that aren't available as part of packages but are available when rolling my own.
As an aside, there are certain bits that I just prefer to compile myself for any number of reasons
That said, there are other bits of software that are pretty generic items that the packages make *trivially* easy to work with, and where compiling those same things from scratch--particularly on older hardware--makes you get a bit long-in-the-tooth waiting for the compile to return.
To me, this is truly one of the ultimate beauties of open source: you're not stuck with pre-built, but you can leverage it when it makes sense.
yes, hybrid (Score:2, Interesting)
build packages from source exactly how you want them, make a tarball of that, and then use ssh and key trusts to shoot them out everywhere (this coming from a person who maintains almost 1000 servers)
it works very well.
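A minimal sketch of that workflow, assuming the build was staged with something like `make install DESTDIR=...`; the paths and host names are hypothetical:

```shell
#!/bin/sh
# Sketch: roll a staged build into a tarball, then fan it out over ssh.
set -eu

make_tarball() {
    stage=$1 tarball=$2
    # Pack the staged install tree and record a checksum so each host
    # can verify what it received.
    tar -czf "$tarball" -C "$(dirname "$stage")" "$(basename "$stage")"
    sha256sum "$tarball" > "$tarball.sha256"
}

push_everywhere() {
    tarball=$1; shift
    # Key trusts mean no password prompts; each host unpacks into /opt.
    for host in "$@"; do
        scp -q "$tarball" "$host:/tmp/"
        ssh "$host" "tar -xzf /tmp/$tarball -C /opt"
    done
}
```

With a HOSTS file, the fan-out is just `push_everywhere myapp-1.0.tar.gz $(cat HOSTS)`.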
Do both (Score:2, Interesting)
Ports or Portage (Score:5, Interesting)
And once you've started using packages and package management, it gets harder to introduce source-built software into the same environment without screwing up your dependency databases, or worse - breaking things. So if a package lacks a required option, you really have to build your own package with the option included in order to keep things orderly. That's a lot more work than just installing from source.
I'm not a Linux user anymore (several reasons) but if I were to go back to Linux, I would use Gentoo, specifically for its Portage system.
So, in my opinion, building from source may be a little more time and CPU consuming, but it is the better option for a controlled, tailored environment.
Re:Gentoo is something of a middle ground. (Score:5, Interesting)
Then, on the other machines, I install from the binaries.
This allows me to test the installs first, resolve any problems, etc.
Furthermore, to speed up the process, several machines run DISTCC and are used as clients of the compile server.
Re:Who are these people? (Score:4, Interesting)
packages put things in unusual places (Score:1, Interesting)
Qt is a good example
When installing Qt from source, you are told in the install doc where everything is going to go and you are asked to set the QTDIR environment variable by hand. This variable is nowhere to be found with a package. Without this variable it is difficult to find where Qt is installed if you want to do anything with it.
Also, I have found that installing packages that are dependencies of other packages does not always guarantee that they will be recognized by the depending package, whereas they almost always are when building things from source.
my 2 cents
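For the QTDIR case specifically, a shell function can at least guess where a package dropped Qt. A hedged sketch — the candidate paths are assumptions, not a definitive list:

```shell
#!/bin/sh
# Sketch: probe likely install prefixes for a Qt tree (headers + libs).
find_qtdir() {
    for d in "$@"; do
        if [ -d "$d/include" ] && [ -d "$d/lib" ]; then
            echo "$d"      # first plausible QTDIR wins
            return 0
        fi
    done
    return 1
}

# Spots a package might have used (hypothetical; check your distro):
QTDIR=$(find_qtdir /usr/lib/qt3 /usr/share/qt3 /usr/local/qt) || \
    echo "Qt not found in the usual places" >&2
export QTDIR
```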
Re:Personally (Score:5, Interesting)
Re:One word for you... (Score:5, Interesting)
Security updates w/o waiting for them to be packaged?
He took a Gentoo in the face at 250 knots. (Score:3, Interesting)
Personally, I install from packages (apt) wherever possible. If something is unpackaged and looks new and shiny, then I'll install from source. I really can't imagine managing a large number of applications without a package manager, even if it's something you've written yourself.
If installing everything from source is your thing, you're probably already using Gentoo with its package management. So the question is moot.
Re:This is BSD vs Linux argument (Score:5, Interesting)
I'm not sure whether to mod you -2 BSDTroll or +1 BSDFunny. However, I'll comment instead. (Commented earlier downthread, so it's already a foregone decision, but what the hey, you only offtopic once.)
The only joy I get watching compiler messages scroll by is laughing my butt off watching all the warnings. Don't these people use lint?
And that's funny only if I'm already in a good mood. Otherwise, I hate having to actually watch the unavoidable visible indicators of the quality of the software I'm about to start using. Just like most people don't like watching sausage being made...from live pigs...
Yeah, I know, if I know so much, why don't I fix it? Because I didn't sign up to indentured servitude, I just want to use the damn software. I realize that violates the canon of Open Source ethics in the minds of the extremists, but I have a job to do and it's not fixing your damn object cast mismatches.
OK, ok, cooling down now.
Thank you, in all sincerity, to the authors of those software packages. Please forgive me if watching 2423 warnings per compile cycle makes me a little crazy.
And that's why it was the best summer ever!
Not always the case (Score:5, Interesting)
Let's not forget the GCC fiasco and probably dozens of other examples where RH decided to "lead the pack" in terms of version numbers but not stability.
Of course, then there's Debian woody, living in circa-2001 land.
On building from source (Score:5, Interesting)
Man, what is this, Gentoo?
Any sane distributor these days builds binary packages with reasonable optimizations that won't break across architecture submodels, and occasionally releases binaries targeting submodels (e.g. PentiumPro-specific packages). On many machines, for many workloads, however, the model-specific optimizations just aren't that helpful. Obvious exceptions are floating point math on most platforms (especially x86, where x87 math code is a dog and should be replaced with SSE code if possible) and - I'm told - really slow hardware. (I'll be able to test that once I get these Indys running GNU/Linux.) In my experience, Debian hasn't really felt any slower than my LFS systems for personal use.
So, I'll say this: if you have enough time to build everything you're using, do some careful speed comparisons between your self-built packages and the vendor's binaries. If there's really a significant speed increase, and you need that increase, source is the only way to go for the packages that need the speed increase. Otherwise, it's probably not worth your time.
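Those comparisons don't need anything fancy — a rough wall-clock harness like this sketch will do (the `myapp --selftest` binaries are hypothetical stand-ins for whatever workload you actually care about):

```shell
#!/bin/sh
# Sketch: wall-clock the same workload under two different builds.
bench() {
    label=$1; shift
    start=$(date +%s)
    "$@" > /dev/null 2>&1   # ignore output; we only want the duration
    end=$(date +%s)
    echo "$label: $((end - start))s"
}

# e.g. the vendor package vs. your own -march-tuned build:
bench vendor /usr/bin/myapp --selftest
bench custom /usr/local/bin/myapp --selftest
```

Run each a few times so caches warm up, and only bother with source builds if the gap is consistently worth it.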
Unless whatever you're doing is extremely security critical, you can probably deal with the fact that server app foo has features bar and baz installed that you won't use. If you can't, you're probably auditing the source of everything you use anyway, and that doesn't sound like the case, so "control" probably isn't a real issue here either. Control can be found in config files as well as in the configure script.
People say, "but package dependencies suck!" Well, yes, rpm (the program) isn't built to deal with dependencies that gracefully. If it annoys you that much, go install apt-rpm or something, or even Debian (gods forbid). Package management isn't rocket science.
Re:Personally (Score:2, Interesting)
It's not a bad little distro, IMO. But the installer has a *long* way to go.
Depends on who is paying. (Score:3, Interesting)
Re:Support: Naa, that's not true at all. (Score:5, Interesting)
Usually when one builds from source, they install it to wherever the original developer has it set to by default. Unless you did some heavy patching, the software will very likely be more "true" to the original software than many packages.
RPMs for distributions such as Red Hat or Fedora often have to move configuration files all over the place to mesh with the OS properly.
You're more likely to be able to sit down at a strange Linux box and troubleshoot whatever program when it's compiled from source tarballs versus an RPM. Unless of course, you know the RPM, or the RPM doesn't do anything funky.
Considering the stuff is Open Source, and chances are the programs are not under a paid-for support contract, it's pretty safe to say that BOTH methods would have to be supported "In House." And if not, your support contract could very well support the source compiled versions anyways.
I choose the Gentoo way. Everything is compiled from source; it's just nice and automated. Almost never have I run into something where the program had to be modified to fit the distribution.
Depends on your distribution (Score:3, Interesting)
That said - for a work machine, I prefer binary packages. I just want the damned thing to work, work well, and not futz with it.
For a hobby/play/research machine - I prefer source packages. I have found there are many compilers out there that will massively outperform GCC, especially when you turn on those crazy optimizations that most binary distributions won't (plus optimize for the EXACT processor I am running on, etc.)
Re:Depends in Hardware and Purpose of Machine (Score:2, Interesting)
However, this goal is difficult at best to undertake with most Linux distributions, since everything is maintained through packages and the whole concept of third-party software is very blurry. In the BSD world, that line is strongly delineated, so maintaining BSD servers with src installations tends to be much easier.
Source-based, binary-packaged Gentoo (Score:4, Interesting)
As you probably know, Gentoo is a source-based distribution, but it also allows binary packages. Many (such as Mozilla Firefox) are distributed by Gentoo as source and binary; you can choose to install either. The ability to build a binary package from a source build gives you the best of both worlds.
Additionally, since (if I read you correctly) you're probably using similar hardware for each of your machines, it would be trivial to set up a compile box which would produce binary packages for your other boxen. Packages compiled for your architecture would be faster than most binary-only distributions (many are still compiled for the i386 architecture), and writing a new ebuild is trivial compared to writing a new spec file. (Trust me; I spent a quarter writing a paper on the topic while I was in school, not to mention having had to do it myself in the Real World.)
Finally, Gentoo integrates and tests its packages. Ebuilds come with Gentoo-specific patches, so you don't have to spend the time to make each source package work with the rest. This is probably one reason why your professor likes binary distributions: they all work together, and enough people rely on them that if something breaks, it gets fixed. A package-based Gentoo distribution would allow you to leverage that, while keeping your machines unified in their versioning (as much as you want them to be, at least) and also provide all of the benefits of a source-based distribution.
Re:Personally (Score:5, Interesting)
The really great thing is how well it wears. I've got RH8 and RH9 installations that have lots of other bits & bobs installed, mainly from tgz's I've pulled down & built. It's an arseabout, and both boxes are cluttered with stuff - and as soon as you go off piste with an installed package, you're on your own.
OTOH I also have a couple of Gentoo installations, and for nearly everything I want, I can just 'emerge xyz' and presto, it's there. It was a pain getting it installed, but now it's there it is really, really good. Also upgrading it was piss easy too.
If only I could get portage/emerge for redhat...
Depends. (Score:2, Interesting)
My general idea is that if a pre-built binary is available, unless there's a good reason not to use it, I use it. The pre-built binaries are not always 100% cool, at least according to some people, but they tend to work for me in most of the cases.
I'm usually using prepackaged binaries if they're out there in a reasonably well-documented repository - that is, included in Debian, in some rare cases I might even consult apt-get.org.
For stuff that Debian doesn't yet have, or that absolutely insists that I build from CVS, there's always GNU Stow for easy management of stuff. I also build kernel from source using make-kpkg (because, once upon a time, it was a great Heresy to use the Pre-Packaged, Unoptimal Kernel, and building the kernel seemed to be everyone's baptism by fire so to speak).
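For those who haven't met it, the core of what Stow does is just a symlink farm. A hand-rolled sketch of the idea (real Stow also handles conflicts, directory folding, and removal; the paths are hypothetical):

```shell
#!/bin/sh
# Sketch: link every file of a per-package tree into a shared target,
# the way GNU Stow links /usr/local/stow/foo-1.0 into /usr/local.
stow_pkg() {
    pkgdir=$1   # e.g. /usr/local/stow/foo-1.0
    target=$2   # e.g. /usr/local
    ( cd "$pkgdir" && find . -type f ) | while read -r f; do
        rel=${f#./}
        mkdir -p "$target/$(dirname "$rel")"
        ln -sf "$pkgdir/$rel" "$target/$rel"
    done
}
```

Uninstalling is then just deleting the symlinks that point into the package's own directory; the package tree itself never gets scattered.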
The reason I'm often relying on pre-built binaries is that I'm a very patient person except when installing software (having had a share of installing proggies for friends and relatives tends to hurt one's very being), and I just prefer to have a quick and easy installation.
Building from source always seems to involve installing required development kits, and then a million and one little bits and packages in semi-random order. There have been some pathological cases like mp1e / rte / whatever the hell it was that seemed so complex and convoluted that I needed a week's rest afterwards.
Then there have been cases where I haven't even been able to build the things due to system constraints. Back in the early days of GNOME, it was hell to try to compile MICO on my Pentium 166MHz when I had a meager 32 megs of physical memory, and trying to grab the last available bits of swap space from my 6 gigabyte disk... Oh, and this happens occasionally even in recent times: I was unable to build Ardour on my current machine. Glad I found it on apt-get.org, and it's now in the main Debian tree too.
I'm just secretly hoping that Debian goes i586 instead of i386 some time...
Re:Beyond personally - professionally (Score:3, Interesting)
Build once, deploy everywhere makes it easy to maintain. Last time we had to do a massive openssh upgrade on our equipment, the rpm-based boxes were done in 15 minutes, while the source-based boxes took about 2 hours. The real kicker is that we had (at that point) about 3 times the number of rpm-based systems compared to source-based.
Source is great for the hobbyist, but as a sysadmin, I won't touch it.
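The 15-minute figure is believable once the per-host step is just an rpm freshen you can run in parallel. A hedged sketch — the host names and rpm filename are made up, and DRY_RUN=1 (the default here) only prints what would run:

```shell
#!/bin/sh
# Sketch: parallel openssh upgrade across rpm-based hosts.
DRY_RUN=${DRY_RUN:-1}

upgrade_host() {
    host=$1 rpmfile=$2
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ scp $rpmfile $host:/tmp/ && ssh $host rpm -Fvh /tmp/$rpmfile"
    else
        scp -q "$rpmfile" "$host:/tmp/" && ssh "$host" "rpm -Fvh /tmp/$rpmfile"
    fi
}

for h in web1 web2 db1; do
    upgrade_host "$h" openssh-3.8p1-1.i386.rpm &   # one background job per host
done
wait
```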
Building from source has its uses (Score:2, Interesting)
>of "known good" binaries if you have a
>suspected intrusion problem.
A rather dangerous assumption to my mind, this one. I've heard of Red Hat releases in particular making it to the shelves while still having at least the odd security flaw. Of course you're not going to have time to go over it with a fine-toothed comb, but if you know how to read code I'd give at least really critical apps a cursory once-over. It's better than your system going down or being invaded by some anarchistic 14-year-old, anywayz IMHO.
As well as the security/stability issue, one of my main reasons for changing to Linux has been the level of customisability. I suppose we can let overworked corporate sysadmins off the hook for wanting to use predigested distros, particularly if they have to deploy to a lot of machines (even the most broken distro release is likely to be infinitely more secure than the IE+OE knock-out punch ;-)) but I'm not sure anyone else wanting to call themselves a respectable Linux user has an excuse.
To me, compiling from source is one of the main reasons for using Linux: the ability to compile exactly for your CPU and particular environment, coupled with the security of knowing that what you're getting is exactly what you think it is, and not something that's going to turn your system into a script kiddie gang's next 0-day ircd.
If you need something that can be deployed on a lot of machines, buy standard hardware that you know Linux supports, (avoid exotic Winmodems, onboard cards etc) prototype from source on one machine, and then mirror it to the rest. To me, a secure, stable, well-configured system is something that cannot and should not be attained in five minutes, and any corporate sysadmin who thinks it should be possible, ought to look for a career change. Just as it's true that in the rest of life there is no such thing as a free lunch, when it comes to security, the emphasis should NOT be on short cuts.
Re:Amen. (Score:1, Interesting)
I find it immensely easier than any other distro when it comes to running bleeding edge things (Go ahead and try to install the latest ALSA beta releases and/or JACK recent releases on redhat anything or Fedora.. it's a frigging nightmare)
I don't use the "easy" distros for the same reason I don't use Windows...
I have yet to try gentoo, I tend to like having my distro frozen in time on a set of CD's until I'm ready to jump to the next release (Yes, I build my Slackware ISO's from slackware/current)
Source is the best ... for now (Score:3, Interesting)
Kleedrac
Re:Gentoo + PHP = infinite pain (Score:2, Interesting)
I run Gentoo, Red Hat, or FreeBSD, and I never use any of their packages/ports/portage for Apache or MySQL anymore; it just rarely works out right if you have complex needs.
Numbers Matter (Score:2, Interesting)
5 Servers, 1 Admin - Build Packages and install
1 Server, 5 Admins - Use Standard Packages
5 Servers, 5 Admins - Build Packages with custom names/versions and install
Seriously, I have 7 Admins managing a mix of 160 Servers.
The simplest way I've found to have the best of both worlds is to D/L the source RPM (SRPM), customize to taste, modify the name slightly, rebuild, and distribute.
For instance,
Needed customized apache to support a couple of things we're doing.
D/L apache SRPM
Modify config files with our own patch
modify configure line in SPEC file to suit
modify package name (!Important!)
rebuild
uninstall old packages
install our packages
Voilà!
Advantages
- still get to run up2date/autorpm/fav-update-package with no worries of breaking your own custom stuff
- Know which packages you've mod-ed by running rpm -q -a | grep "myinitials" or whatever.
Disadvantage
- Auto Update doesn't fix the stuff you're behind on...gotta keep up!
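A sketch of that recipe as a script — the package names, patch file, and initials suffix are hypothetical, and DRY_RUN=1 (the default here) only prints the steps rather than running them:

```shell
#!/bin/sh
# Sketch of the customize-and-rename SRPM workflow described above.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

SRPM=httpd-2.0.49-1.src.rpm
TOPDIR=/usr/src/redhat   # newer setups use ~/rpmbuild instead

run rpm -ivh "$SRPM"                        # unpack sources + spec
run cp our-tweaks.patch "$TOPDIR/SOURCES/"  # drop in the local patch
# (edit the spec by hand: add the PatchN: line, adjust %configure,
#  and rename the package, e.g. httpd -> httpd-jd, so auto-updaters
#  won't clobber the custom build)
run rpmbuild -ba "$TOPDIR/SPECS/httpd.spec"
run rpm -e httpd                            # out with the stock package
run rpm -ivh "$TOPDIR/RPMS/i386/httpd-jd-2.0.49-1.i386.rpm"
```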
Re:Support: Naa, that's not true at all. (Score:3, Interesting)
Re:Source and un-install (Score:1, Interesting)
Personally, when I use source for maintenance, I just keep the source directory around in
Of course, most makefiles use the install command. Wonder why the install command is so spartan; there's really no reason why it couldn't maintain a database of files installed, and by what processes on what time/date (although maybe time/date would be redundant, since that's already tracked in the file system).
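In the meantime, a thin wrapper gets you that database. A sketch, with a made-up manifest path:

```shell
#!/bin/sh
# Sketch: wrap install(1) so every install is logged to a manifest.
MANIFEST=${MANIFEST:-/var/log/install-manifest.log}

logged_install() {
    src=$1 dest=$2
    install -m 0755 "$src" "$dest"
    # Record when, what, and where -- the database install(1) lacks.
    printf '%s\t%s\t%s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$dest" "$src" \
        >> "$MANIFEST"
}
```

Uninstalling is then a matter of grepping the manifest for the files you want gone and feeding the second column to rm.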
build from source -but with a strategy (Score:3, Interesting)
I get all my applications in their own directory, and it's only a matter of changing a link to roll back a version or two. It's also easy to copy an app to another host.
Some discretion is necessary here: I just dump a lot of small stuff into
My main OS is Solaris, but I employ this technique on HP-UX, Linux, BSD, whatever I'm working on at the time. Keeps things simple for me, and it's easy to tell someone else just where things are.
The only time I go outside the app dir is for things like logs, which always live in
As for maintaining consistency across a network - NFS?
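The per-app layout above amounts to a couple of shell functions. A sketch, with a hypothetical /opt-style root directory:

```shell
#!/bin/sh
# Sketch: one directory per app version, plus a symlink named after the
# app that can be repointed to roll back (or forward) in one step.
install_version() {
    root=$1 app=$2 ver=$3
    mkdir -p "$root/$app-$ver"
    ln -sfn "$root/$app-$ver" "$root/$app"   # point the link at this version
}

rollback_to() {
    root=$1 app=$2 ver=$3
    ln -sfn "$root/$app-$ver" "$root/$app"   # just move the link back
}
```

With $PATH and configs referencing only the unversioned link, switching versions never touches the app trees themselves, and copying an app to another host is a single directory copy.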