Build From Source vs. Packages?
mod_critical asks: "I am a student at the University of Minnesota and I work with a professor performing research and managing more than ten Linux-based servers. When it comes to installing services on these machines I am a die-hard build-from-source fanatic, while the professor I work with prefers to install and maintain everything from packages. I want to know what Slashdot readers tend to think is the best way to do things. How do you feel about the ease and simplicity of installing and maintaining packaged programs versus the optimization and control that can be achieved by building from source? What are your experiences?"
Support (Score:5, Insightful)
As often as I've lamented how much employers spend on PCs versus building them from parts, they would rather not have to rely on someone in-house to support hardware.
Gentoo is something of a middle ground. (Score:5, Insightful)
Who are these people? (Score:5, Insightful)
While building from source can be fun, and necessary sometimes, I don't think it makes sense. You spend far too much time tweaking minor issues, and lose sight of major problems.
One problem I've noticed is that build-from-source people tend to install things in a way that's completely different from everyone else's. This means that anyone who later tries to maintain the machine is hopelessly lost trying to figure out what the previous person did. OTOH, when (e.g.) RedHat does something weird, the explanation and fix is usually just a few Google queries away.
Most (all?) package formats have source packages that can be modified and rebuilt in case you need some really special feature.
OS is from a package (Score:2, Insightful)
yup (Score:2, Insightful)
It's all in what you need to do. If you need those optimizations or special build options that aren't in the package, go ahead, it's what it's there for.
Simply (Score:5, Insightful)
Many sources include the SPEC file required to build the package.
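For instance, if the tarball ships a spec file (as many do), rpmbuild can turn it straight into packages; the filename here is made up:

    # build binary and source RPMs directly from a tarball that contains a .spec
    rpmbuild -ta foo-1.0.tar.gz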
--No-Deps (Score:5, Insightful)
I don't care for the dependency model of packages, and I'd much rather install programs myself. That way I know I'm getting the program compiled most efficiently for my computer, and I don't have to worry about dependency databases.
Run Debian (Score:3, Insightful)
For most packages though there really isn't a big need to compile from source.
depends on the system (Score:5, Insightful)
Since that time I have gained more experience with production Linux systems.
When it comes to managing production servers, I use Debian and typically only install programs that are in the stable tree.
Every once in a while I have to build a deb from source, but only in rare circumstances.
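When I do, it's usually just this (package name and version made up):

    apt-get source foobar                   # fetch the Debian source package
    cd foobar-1.2.3
    dpkg-buildpackage -rfakeroot -us -uc    # build an unsigned .deb
    dpkg -i ../foobar_1.2.3-1_i386.deb      # install it like any other package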
Now, when it comes to my development systems I am more likely to compile from source rather than rely on the packages to supply me with the latest and greatest.
It really all just depends on what kind of stability vs. "new" features you need, as well as ease of management. Installing a package takes 30 seconds, while compiling and installing from source takes longer and requires more hands-on work.
It's gotta be from source. (Score:1, Insightful)
Depends (Score:5, Insightful)
As for what I use? I used to use solely ports, but now I usually grab all the packages when I do a fresh install, and only use ports for what isn't available as a package, as the packages give me no disadvantage.
From another student... (Score:2, Insightful)
From source, definitely. (Score:5, Insightful)
The answer is .... (Score:4, Insightful)
If you are advanced enough to compile source code so that it performs better, or in a more tightly controlled manner that suits your purposes better than off-the-shelf builds (packages), then by all means build it from source.
If on the other hand, you don't have a compelling reason to compile the source, then use the packaged product.
I don't know about you, but for most of my servers, the extra configuration options needed to squeeze out a few more percentage points of performance aren't enough to justify running my own compile.
Those who say they review ALL code for security problems (backdoors, holes, etc.) before compiling are probably lying. I am sure there are a couple of people who actually do.
Basically, if you do it just so you can be 1337, you're just being vain, as I doubt most people would see or feel the difference.
Source and un-install (Score:5, Insightful)
The catch is that most source packages do not have a "make uninstall" target.
With packages, you have a much better chance of removing all the files that were installed with the packages when you need to.
Beyond personally - professionally (Score:5, Insightful)
I've been in both your and the prof's position, and this is generally the best bet. It'll make the prof's life a lot easier when you're gone, too.
If I were your prof (Score:2, Insightful)
If you could demonstrate that installing/upgrading from source results in a quantifiable improvement in maintenance or performance over a pure binary distribution, I would consider it. If there are no existing reliable benchmarks, but you made a good case, perhaps I'd let you turn your own workstation into a demonstration system.
Anything else. No way. If it works, don't mess with it.
I run Gentoo at home and, while updating with "emerge" is kind of nice, I've yet to find any compelling reasons why it'd be better than up2date or apt-get. There really are no measurable performance or reliability advantages.
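Day to day, the update loop looks about the same either way, which is kind of my point:

    # Gentoo: sync the portage tree, then rebuild what's out of date (from source)
    emerge sync
    emerge -u world

    # Debian: same idea, but with prebuilt binaries
    apt-get update
    apt-get upgrade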
Re:--No-Deps (Score:5, Insightful)
That just means that you'll have to store the dependency databases in your head. A release of a particular software package, whether it's a package or a tarball of source, depends on other software. Always. "configure" goes a long way towards working that out, but if it doesn't work automagically you're going to have to take it by the hand and lead it to wherever your copy of libfoobar.so.17 might happen to be.
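With the usual autoconf conventions, "taking it by the hand" looks something like this (the /opt/foobar prefix is made up):

    # point configure at a library installed outside the default search paths
    CPPFLAGS="-I/opt/foobar/include" LDFLAGS="-L/opt/foobar/lib" ./configure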
I've just started using yum for RPM management and I'm already liking it a lot. At least dependency management seems a bit cleaner and more automatic.
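The basic loop is pleasantly short (package name made up):

    yum check-update       # list available updates
    yum update             # apply them, resolving dependencies automatically
    yum install somepkg    # install a package plus everything it needs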
Source Builds = Administration Nightmares (Score:5, Insightful)
You end up with different versions, different compile options, upgrades are a mess, and it's hard to support.
Another problem is filesystem pollution. When you do your "make install", it's hard to track what files are installed, and when you upgrade to a new version, you can't be sure it's clean, since you might have configuration files or binaries anywhere on your system.
So, one day, I started to make RPM packages of stuff I needed, and modified existing RPMS, and sent all the patches to the community.
What happened is that Mandrake accepted all my packages, so all I had to do was to install the standard distro, and all I needed was there.
And eventually, I made so many packages that they hired me.
But even if I didn't work for Mandrake, I'd still be sold on RPMs. You have a clean SPEC file that contains the pristine source code, plus the patches, and basically all the instructions to build the stuff. You can specify the requirements, and you can easily rebuild on another machine, uninstall the old stuff, or upgrade, with a single rpm command.
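Rebuilding from a source RPM really is a one-liner, or three if you want to edit the spec first (the paths are distro defaults and may differ on your box):

    rpmbuild --rebuild foo-1.0-1.src.rpm    # straight rebuild
    # or unpack it, tweak the spec, and rebuild:
    rpm -ivh foo-1.0-1.src.rpm
    rpmbuild -ba /usr/src/RPM/SPECS/foo.spec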
Gentoo is not good for everybody... (Score:2, Insightful)
Re:Support (Score:5, Insightful)
In some cases, the packaged version won't play well with something that I need, or I don't want upgrades to disturb something in particular. In that case I put together a pseudo-script that gets and builds the source and dependencies, and mark the packages as "Ignore" in my apt configuration.
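On a Debian-ish system, one way to do the "Ignore" part (package name made up) is:

    # pin the locally-built version so apt-get upgrade leaves it alone
    echo "foobar hold" | dpkg --set-selections
    dpkg --get-selections | grep hold    # verify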
eks
make your own "packages" (Score:5, Insightful)
Building from source is often just a bloody waste (Score:5, Insightful)
It's like the programmer who spends six hours hand-optimizing the inside of a loop that gets called once a day and already executes in 10ms... but ignores the fact that the program takes 20 times longer to run than it should because of an inefficient algorithm. This programmer doesn't know *why* his program is slow, he's guessing, and he will almost always guess badly. This is why profiling was invented.
Look at it this way. Installing from the packages you get the following benefits:
- You save time compiling (multiply this by the number of patches you have to add over the box's life time)
- You save time tracking down dependencies
- You have a standard platform you can re-deploy at will
- You have something that another administrator can work on without asking where you shoved shit.
- You have a package database you can query for version information, dependencies, etc.
- You have an easily available source of "known good" binaries if you have a suspected intrusion problem.
- Depending on the package system you use, you might be able to stay on top of security vulnerabilities with very little (or no) work.
Now, installing from source, you get the following benefits:
- You can pick where the files go (whoopie)
- You tune the performance for your platform
- You can select specific features
- You can de-select specific features to save disk space
The only one that gains you much 99% of the time is being able to select specific features that are turned off in the standard package. If you need those options, you build from source. If you're doing ten machines, though, you build from source on *one* machine, package it up, burn it, and install it from YOUR package on all ten machines.
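A sketch of that workflow, with all names invented:

    # on the build machine: compile with the options you need, wrap it in a package
    rpmbuild -bb myfeature.spec
    # on each of the ten servers: install YOUR package like any other
    rpm -Uvh myfeature-1.0-1.i386.rpm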
Saving a few CPU cycles is never worth spending a man-hour. You can use the man-hour more productively at the macro-optimization level. Similarly, you can take the dollars you would pay the man and buy a new CPU with them.
The same argument goes for saving a kilobyte of disk space. If I found out that any of my guys spent *any* significant time trying to cut less than a gigabyte out of our application footprint, I would give him a footprint of my own, right in the middle of his colon. Disk is cheap. People are not.
If you have an application which is CPU-bound and running too slow, find out why (profile the system or binary), and build from source only what you need to make your application conform to the target specification. Or, if that will take too long, just buy more CPU.
Long story short -- tuning of ANY kind should not be done at the micro level across the board; that's just a waste of time. Tuning should be done by profiling the system as a whole, identifying the constraints, and relieving them. If that requires micro-tuning a few things, that's fine... but squeezing every last little bit of performance out of absolutely everything is either impossible or incredibly time-prohibitive. And, of course, if you were going to spend that kind of time, you could either buy new hardware with the money (remember Moore's law), OR you could examine the system more closely at the macro level and come up with a better way to do things.
academia! (Score:2, Insightful)
i guess in academia they are used to having funding for some things some of the time -- your professor probably wants to keep those machines running as long as he possibly can, because money has to be used for other things.
and besides, compiling programs is a hard thing for the "sorta unix geek" to get his head around
fortunately, i think this will change when people realize there is an ample supply of knowledgeable folks out there who can do this stuff. it's easier to find a geek now than it was even 5 years ago!
Use Packages where possible (Score:2, Insightful)
When looking from the prof's view, it will be easier to get someone else up to speed after you have graduated if your machines stick closely to standard packages.
Use the time that you'd spend compiling/installing doing more CS related activities.
Most people (including myself) that have gone through the phase of wanting to compile everything get out of it as soon as they have some real problems to solve.
Re:Gentoo (Score:2, Insightful)
It's a nifty feature of Gentoo but how many users really want to wait for glibc? If they don't wait for glibc then are they really gaining anything significant when they build Mozilla manually as opposed to using a nightly build?
Think Tetris. If you don't optimize from the very first row then optimization at row 15 isn't going to save your backside.
Context (Score:3, Insightful)
Desktops: a ports-based system all the way. Why? Because with something like Gentoo, it might take several days to compile, but you can be assured you're not headed to dependency hell anytime soon when you want to try the latest and greatest. Headers and such are installed by default, so you can usually compile something by hand and it will Just Work. Whereas if you're using three different unofficial package streams and you need to upgrade a simple library somewhere that has an anal-retentive versioning and dependency specification, attempting to apt-get that new version will cause your entire house of cards to come crashing down. I lived with Debian on a desktop like that for god knows how many years until I decided "No more". Yeah, I have to wait a while with Gentoo, but at least I only have to do it once.
because (Score:3, Insightful)
Re:Who are these people? (Score:5, Insightful)
Good point. There are probably very few cases where spending the extra hours of tweak time ever ends up being something that adds a significant amount of value to anybody, except yourself of course. I can think of a couple exceptions, but they are exactly that- exceptions to the rule. IMHO the ability to standardize installation packages is an important aspect of modern computing.
If time didn't matter, I suppose we could all go so far as writing all our own software that would do exactly what we wanted.
Re:Whatever get the job done (Score:5, Insightful)
And if you've ever had to pick up and maintain a system from someone who left you will know that this is just about 100% wrong.
The best solution is one that works and is maintainable. If you are willing to put in the extra work involved in making your from-source installations clearly maintainable and upgradable, so that the next guy isn't going to have to spend 6 hours learning how everything works when he needs to upgrade foobnitz to version 2.0, then great. If not, think about letting someone else do that work for you.
Re:From source, definitely. (Score:5, Insightful)
While in principle I can agree with what you're saying, this is a pretty insulting view to take of all the people who work on GNU/Linux distributions. (Or put another way, how am I better than every Debian developer combined? (Substituting Debian for your distribution of choice, of course.))
Depends upon your hardware. . . (Score:3, Insightful)
That said, I routinely build stuff from source on a Pentium Pro 200 MHz dual CPU machine at work. It's not our main server, so the performance hit is never noticed.
Portupgrade is an absolute must on this machine, as we have all kinds of software running on it. Without portupgrade, I'm sure it would be a nightmare.
In the end, it's whatever works best in your situation, and having this as 'news' on Slashdot seems really freakin' ridiculous.
Use RPM's or DEB's if at all possible. (Score:5, Insightful)
My advice is that for a workstation that is managed by an individual you can let the admin do whatever they want, but for any server that has to be stable and maintainable you want to stick with a well maintained package repository and try to avoid 3rd party packages and tarballs if possible.
You have to understand that there is a software stack in most services.
With the kernel and core libs (like glibc) and such at the bottom of the stack, and applications like Evolution at the top of the stack. In between you can have gdb and openssl and various perl modules (in AMAVIS for example) and you have sasl stuff which may be related to pam and openldap and cyrus or wu.... etc..
The thing is that even though all of those various pieces of the software stack may be linked against different libraries on the box, the maintainer of the library code may not have a QA group to coordinate regression testing and compatibility testing before the latest CVS commit lands to fix a bug referenced in a CERT alert.
RedHat and Debian and SUSE and all the others have package repositories, and the repository maintainers do an amazingly fantastic job of QA and testing to make sure that new patches don't break your software stack. As an individual you simply can't keep up with that.
For example the Development team that takes care of OpenSSL doesn't backport their bug fixes and security patches to old versions of the code. They just maintain the latest release version and the current CVS version. If you have an old server running IMAPs and HTTPs and SSH and SMTP/TLS and such, and CERT announces a bug in openssl vX.Y, then the OpenSSL development team will certainly release a patch for the latest version which may be version Z!
That might cause you to have to upgrade Apache or wu-IMAP or OpenSSH or Postfix, etc. Those things might then have divergent dependencies that would cause you to go and rebuild half a dozen other packages, and so on and so on. Also, do you remember all the magic flags you used for configure and make? Do you have the same environment variables set today that you did the last time you built Postfix? The possibilities for problems are endless. And if you do have a problem you are kind of on your own, since your system will be a unique box. Whereas if there is a problem with a standard RedHat or Debian package, you can always go to the general newsgroups, and chances are there are a dozen "me too" posts with answers already.
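You can at least measure the blast radius before you start (the binary path is just an example):

    # what actually links against the SSL libraries on this box?
    ldd /usr/sbin/httpd | grep ssl
    # or ask the package database:
    rpm -q --whatrequires openssl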
It is much easier to use apt or up2date.
So, unless you have a very good reason for using a tarball on a production server that requires reliability and security and high availability, then you should stick with packages.
If you want to build the packages from source, feel free! RedHat and Debian and SuSE make the SOURCE packages available so that you can dig in and read all about 'em. I'm sure the Debian team could use a new package maintainer; if you are addicted to compiling and testing things, check them out.
Re:Beyond personally - professionally (Score:5, Insightful)
If you can't find a site with a trustworthy package what makes you think you can find a site with trustworthy source code? Or are you going to review every line of code to make sure it wasn't tampered with?
The paranoia works both ways :(
Have your cake AND eat it, too! (Score:5, Insightful)
It's still a smart move if you're building from source. Just package your source. Then you can build the sources under the control of a package manager (like RPM), and install the resulting packages. You get the full benefits of build-from-scratch and the full benefits of using packages.
This is exactly the approach I use. In fact, I'm a bit more strict about it: My policy is that I don't install any software that isn't packaged. If I need to install something that isn't packaged, I'll package it first. If I don't like the way a packager built an already existing package, I'll repackage it.
The bottom line is that creating your own packages (or fixing packages you don't like) is much easier than maintaining a from-scratch, unpackaged installation. Or ten of them.
To get you started, here are a couple of RPM-building references:
Don't give up the benefits of source. Don't give up the benefits of packaging. Have them both.
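And once everything is packaged, the payoff is queries like these (package name made up):

    rpm -ql myapp    # exactly which files it installed, and where
    rpm -V myapp     # have any of them been modified since install?
    rpm -e myapp     # clean removal, no orphaned files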
Re:Not always the case (Score:3, Insightful)
Re:From source, definitely. (Score:3, Insightful)
Re:Many words for you... (Score:3, Insightful)
Re:Beyond personally - professionally (Score:5, Insightful)
I almost always trust packages from the vendor and the distro and only trust "3rd party" packages when there's been tons of anecdotal evidence that they work.
Re:This is BSD vs Linux argument (Score:1, Insightful)
IME, though, building through the ports system (including building packages for rollout) is easier than building RPMs. This might influence how many actually started to build from sources themselves.
Regardless of what you do, a good packaging system is valuable for scaling up. It helps document procedure and settings and whatnot. For certain things building a package from source yourself may even be required.
Build from source (Score:2, Insightful)
More importantly, when you build your own from source, you're often reminded of outdated dependencies that need to be upgraded. I recently compiled a new version of OpenSSH and found out that I had a vulnerable copy of zlib on my system. Had I installed a package, I might not have known.
Re:Personally (Score:5, Insightful)
and why does gentoo need or want a larger user base? gentoo is geared towards a niche market and those people will be attracted to the distro whizzy installer or no.
porsche has a tiny market share - but nobody suggests they should make a k-car version to get a bigger slice of the pie!
What would a time/dollar evaluation say? (Score:4, Insightful)
Humans do not scale well: they have very low information-sharing bandwidth and high latency (i.e., you can't always get ahold of them). Humans are also expensive, wander off into different jobs, graduate or drop out of college, etc. So my default position is to prefer reducing the human cost of system administration complexity.
So my gut feeling is that unless building from source yields a major time or dollar savings (i.e., it avoids buying 10+ new CPUs for the systems, or computation runs take a day less), go with reducing administration complexity by using a package management system, so that you can concentrate on your actual goals (research, profit, or whatever).
Re:Who are these people? (Score:3, Insightful)
While I'd agree with you in general, I've found one curious case where I've learned to install from the source to make all my machines the same: apache.
For some reason, every vendor (and sometimes every release from the same vendor) installs and configures apache differently.
So I just grab the latest stable source kit from apache.org, and compile it. That takes maybe 10 minutes on current hardware. I spend 5 minutes or so munging httpd.conf, changing only what I know has to be changed. I get as close to a default install as I possibly can.
I had a fun case a few years back on a project where the management had decreed a Netscape web server. Nobody could get it to run right. The usual reason was that it was installed and managed entirely through a web interface. Sounds fine, right? Yeah, until someone misconfigured something slightly and the web server became a zombie. Now the management interface is dead, and all you can do is reinstall from scratch.
One day I said "The hell with this", grabbed the current kit from apache.org, and 20 minutes later I had a live server. The developers could go crazy building their site.
Occasionally, I'd say "You know, we really should work on that Netscape server that we're supposed to be running." The reaction was always along the lines of "Yeah; we should, but in the meantime apache is running fine and we have stuff to do today." I talked to them recently, and they're still running apache. They also have an unofficial policy of always installing it from source, because they've been frustrated by the packaged installs that they find on linux distros.
The apache gang has done a fine job of making an install from source fast and painless. In such cases, a package usually just makes your life more difficult. If you take the defaults whenever possible, you end up in a better situation than with most packages. All your installations for all vendors come out the same, and managing a lot of machines is very easy.
Re:Gentoo is something of a middle ground. (Score:1, Insightful)
I don't know how you can argue this in any situations other than when you're initially building your gentoo system.
Once you run your first emerge sync, where do you get packages from? I don't know of any mirrors that supply binary packages, and I'd be willing to bet that there aren't any. So unless you're referring to the limited selection on the Gentoo CDs, then no, you can't use binary packages unless you've pre-built them yourself.
Amen. (Score:5, Insightful)
Re:Beyond personally - professionally (Score:3, Insightful)
However, now as a professor, I've become more interested in focusing on building the tools that are part of my research. These I publish (or will publish once they are ready) as open source. But for the other elements such as development libraries, servers, etc.: I just want them to work.
Comment removed (Score:3, Insightful)
Depends on the program (Score:2, Insightful)
Different stages (Score:5, Insightful)
1) I am a newbie and have to use packages for *.
2) I know my way around. I like the level of control I get with compiling/know how to code/read far too much Slashdot. I compile by default.
3) I manage more than three boxes in my basement now. Having the ability to back out of system changes without a full OS reinstall is a necessity. I build my own packages from source that I've compiled.
4) I manage more than just three boxes in a department now. Now I have to deal with politics, ordering hardware, the freakin' network, and I generally have little time left for sysadmin work. On top of all that I now have a family, so spending two or three extra hours per day on my Unix hobby is no longer feasible. Precompiled packages work just fine.
Re:Personally (Score:2, Insightful)
Anyway, look at it from a different perspective. User base aside, if Gentoo really is about choice and control, then why not let Gentoo users choose between a GUI installer and a manual install?
Re:make your own "packages" (Score:5, Insightful)
I use fedora, and most often I get the *.src.rpm versions, then tweak the SPEC files as required, build my own binary rpms, and use those. Best of both worlds, IMO.
And the tweaking need not be that tricky or time consuming either. Decent defaults for building RPMs can be placed in your ~/.rpmrc file (or /etc/rpmrc, etc.). Once you have set your optimising settings, architectural preferences, packager name and cryptographic signature (if you want to submit packages to other people), that's done for all future packages.
I used to run a mix of RPM packages and tarballs (./configure --prefix=/usr/local && make && su -c "make install") so I could tell what was under RPM control and what was not, but it became annoying when I wanted to build a source RPM with dependencies on a package I had built from tarballs. These days I usually try to wrap any install up in an RPM - it's not difficult once you get hold of a skeleton spec file for your distro, and it saves much hair-pulling later on. Also, the dependency requirements of RPMs actually save time in the long run, because you know when removing a package will hose your system (or part of it).
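On newer rpm versions the per-user macro file is ~/.rpmmacros rather than ~/.rpmrc; either way the idea is just a few lines, something like this (values obviously made up):

    # ~/.rpmmacros
    %packager   Your Name <you@example.com>
    %vendor     home-built
    %_topdir    /home/you/rpmbuild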
Cheers,
Toby Haynes
Listen to Him (Score:2, Insightful)
Do what your professor wants. Why, you ask? Because he's your damn professor. He will be happier with a package management system that he feels comfortable with. This will make him happier with you. Do not trifle with the grey beards; they have powers you do not yet comprehend.
I, myself, am working with a professor on a momentum problem generator in Perl (we're physics people), and I was given a nice equation-solving library that he wrote for another problem. I've shown it to a number of people with years of Perl experience (one or two near a decade), and they said it was some of the worst code they had ever seen. I thought the way I had to interact with it was stupid and klunky. One giant kludge. I fought it in my own head but tried not to let my emotions about it out in front of him. So I worked at it again and again, and you know what? A few months later he, a peer of mine, and I will be giving a seminar on it for our department this April. The code wasn't as awful to work with as I thought (though to this day I wish it weren't so klunky), and it worked. I just had to swallow my pride and get it done.
Don't argue with them. Make their lives easier and you get to see the grey beards happy side. May you have many publications in your future.
Packages (Score:3, Insightful)
The point? (Score:5, Insightful)
Wow. There sure are a lot of posts about which is better, but I don't see any comments that deal with the underlying problem. And that is this: don't get into a pissing match with your professor. Seriously, what are you hoping to accomplish here?
If you were thinking that you'd get tons of pro-compiling comments, and then put that in front of the professor, stop right there. Coming to Slashdot for validation of your side of the argument is about as helpful as those wives who write to Dear Abby about their husbands. Because no husband on Earth is going to appreciate getting chastised by Dear Abby, and if Abby sides with him, he's going to gloat. It's lose-lose for the wife, just like it's lose-lose for you if you try to use Slashdot as leverage. Screw with the computers that the professor relies on, and he'll find a way to "thank" you for it. Don't sabotage yourself.
Build optimized packages from source (Score:3, Insightful)
We'll make a Debian package maintainer out of you yet!
Re:--No-Deps (Score:3, Insightful)
So, in summary, stick with packages until you have to switch over to source to get anything done!
Re:Personally (Score:1, Insightful)
Would you buy that Porsche if it came in a thousand pieces and took a whole year to build?
FUNK DAT!
Here's my 70 grand, gimme the fast car, and I'll be cruising with your girlfriend while you're in the garage doing a "man porsche"!
Re:--No-Deps (Score:5, Insightful)
Man, I'm glad other industries aren't as stupid as the software engineering industry. Otherwise car manufacturers would have to have steel foundries, cloth weaving, a slaughterhouse and tannery (for leather), and innumerable other ancillary businesses on site just to build a car. And, of course, everyone would have to know how to do absolutely everything.
What you're preaching is directly contrary to the practice of reusing code -- and not just your own. It's insane to reinvent the wheel every time you need to drive to the store -- but that's exactly what you're doing. It's one thing to understand the physics behind the wheel, or the foundry, or the paint shop. It's another to rebuild them from scratch.
I hope there's never a bug in your code... because if there is you're going to have to patch every single code base, and re-issue every single binary (since you prefer to link statically). All because you felt it was better to not trust others and do it yourself. Not to mention the vast amount of time burnt re-implementing that which already works, and works extremely well.
The code I'm working on uses a multitude of libraries -- STL, Boost (primarily for its shared_ptr's; we'd use more but much of it doesn't compile on our platform), OTL, libcurl, libxml, pcre, openssl, and others. In some cases we've ditched libraries and implemented our own solution (in particular, MQSeries, which sucked deeply). But to re-implement all of those libraries would literally add years to development. And to what purpose? To have a less feature complete, more buggy, less supportable code base?
And, yes, we've even used libraries sometimes when the library pretty much sucks. Case in point is cgicc, which we used because it's one of the few C/C++ libraries that interfaces "properly" with fastcgi. It's full of bugs, full of really idiotic #define's, and doesn't implement things quite right... but fixing it took much less time than rewriting it from scratch. Because it doesn't do everything wrong, and there's no reason to toss the baby out with the bath water.
No thanks. I'll happily replicate what's been done in every other scientific and engineering discipline -- to stand on the shoulders of giants while adding my own knowledge to the repository.
But when a package links against it for the sake of using a single function that the programmer could have reproduced in under ten lines of code... Well, that just screams "laziness" to me.
Sure. But that situation is pretty rare, at least among competent developers. If you're seeing that commonly, then you're using crap packages (and god knows there's a ton out there... I've ditched many packages because they had too many esoteric dependencies).
Re:Personally (Score:3, Insightful)
I grew up with PCs as they grew up, and learned DOS/Windows through all of its incarnations (well, Windows 3.1 and later). And I realize that I can handle XP MUCH better than most people I know who came to it later and don't understand how the low levels of the OS fit together, and what does what.
I once saw the definition of an Expert as someone that knew the low-level so well that all of the high level stuff was obvious. I'm nowhere near that (I don't think anyone is with Windows, at this point), but that's the route that I like to go towards. It's so much easier to debug things when you understand the computer is a system, and what the parts are, and what are the core required things to get it functional.
Gentoo's install steps are essentially a how-to guide for bringing up a box after it falls on its face. Something often learned the hard way. It's really quite simple, and most of it could be automated, but I think they have intentionally left it manual.
A) It requires you to learn to use it
B) It raises the bar on the quality of noobs.
I'd rather start someone on an OS where they need to learn how it works than on one where it's all magic. Because magic only goes so far.
Am I the only one who thinks... (Score:2, Insightful)
Try FreeBSD Unix (Score:1, Insightful)
Support? What's that? (Score:3, Insightful)
In short, Support? Who needs it? Not me. Do you?
Re:Personally (Score:2, Insightful)
Re:Convenience vs. optimization/security/features (Score:2, Insightful)
However, for glibc or other common libraries, you gain much more than if you hacked sendmail or any other service.
If you have a backdoor in glibc, nearly ANY program will activate it. You just wait until a setuid root program accesses something in the library, and you have your exploit.
Or if you need something that stays resident, have it insert a kernel module that hides its own existence and does whatever you need, or launches and hides another process that does what you want.
In the end, putting a backdoor in a common library has many advantages over putting it in any single program or service.
Start from packages; document the modifications (Score:3, Insightful)
I know there is temptation to make things a little bit better, but support after you're gone is the issue.
The genius who designs a system that only (s)he can maintain is a poor engineer.
Find out what your customer's (the prof sounds like the customer in this context) requirements truly are. Is good enough good enough for the prof? If you give him what he wants, and he finds out next week that it could all have been optimized to perform a bit better, will he even care?
Meet those requirements with the minimum customization.
Document the system. This may be a nightmare if the system has already been "tweaked" by the previous maintainers. If that's the case, it's even MORE important to simplify and document.
Provide recovery tools--as simple as a set of drive backup images, or as complex as a set of scripts that rebuild the system from source. At a minimum, supply a system administrator's manual.
Building a system for a customer to use is a completely different endeavor from elaborately tweaking your own box so it is just exactly the way you like it.
FreeBSD is the way to go. (Score:2, Insightful)
I get the latest stable software. I don't have to worry about crazy dependencies (I don't want MySQL dammit, I use Postgres). The software is in a standard place. It's easy to tweak things.
I also find that FreeBSD is much faster than my Linux system... Especially RedHat.
source vs package & packaging, or just support (Score:4, Insightful)
Correct me if I am wrong, but aren't you contradicting yourself here? Gentoo DOES use developer source, but they ALSO do what you call "heavy patching".
I interpret this "source vs package" debate as something different: what is the NORM for your distribution, and are you using the OS in ways that were not tested by the vendor's SQA team?
For example, ANY of these distros can get borked if you install Ximian on top of them and THEN go back to the vendor for updates. It wouldn't matter if you did it from source or packages.
Same with Alien packages on Debian, or "Redhat centric" rpms on Mandrake or SuSE.
Bottom line is don't mix oil and water.
I agree with your comments about what is good with Gentoo. I happen to like Gentoo and FreeBSD for the very reason that there's a BAZILLION source packages that all have cross-testing against each other. Same for Debian I suppose.
Best thing RedHat ever did for their desktop distro was set it free. They NEVER wanted to be in the business of supporting user-borked desktops when they install random stuff from the net, and they never wanted to manage and QA a large repository. Now it looks like there's a Fedora community (two actually) addressing the package distribution issue. Good for them.
Re:Amen. (Score:2, Insightful)
What are you saying? That because a majority of the people in the world use Windows, Gentoo should have a flashy installer?
If we give all the distros flashy installers and gear them to be simple and not as powerful, I will be in chains with the rest of them, so lets cut the nonsense.
People use Windows/Mac/Fedora/Gentoo/BSD/Amiga/etc. because they want to, and that's what fits them best. It makes sense, and there is nothing wrong with any of those choices. Stop trying to save those who don't want saving.
daniel
packages--the only way to go (Score:3, Insightful)
Build from source if you need the software and no package exists, or if you really, really need a processor-specific version. But for most applications, go with the pre-packaged version: as a system manager, there are a lot more useful things you can do than recompile "ls" on a dozen machines.
My two cents (Score:3, Insightful)
Building from source is great if you want to tweak a system and get it running exactly how you imagine. Be prepared for configuration and all the various issues associated with source builds. I'm assuming that even if you build from source that you are using some sort of package/file management system to alert you of dependencies and file modifications. This is easy to do with binary packages, not so easy managing sources. I regularly rebuild *on my test machines* all manner of software from source, including the kernel, KDE, glibc and a bunch of other libraries.
Now for the problems with source builds:
1) You need a development machine. I.e., you need the compiler tools and libraries. For a regular workstation this is no problem, but you DO NOT want these tools accessible on a server even if they're 'chmod 700' or otherwise locked away. This means you'll build on another machine and create a binary package and... well, you're back where you started except you lost some time.
2) There's no easy way to create snapshots of packages. Differences in libraries and config files can make or break software. The best errors are those that prevent the software from compiling; the worst are those where the build succeeds but the errors or weirdness don't show up until a month later. Now, RPM is much maligned, but it does allow you to keep the build instructions, dependency information, etc. inside the package. Once you've learned RPM, you get lots of control over where things get installed.
3) Backouts are not as easy. You can often do a 'make uninstall', but in some cases this requires that the sources be kept around. Tools like checkinstall can ease the burden, however (see the sketch after this list).
4) Duplication of effort. Source builds are good for customizing, as I mentioned. It's a myth, however, that rebuilding from source will dramatically improve performance except in a few, somewhat rare cases. E.g., rebuilding a 2.4 kernel with a pre-emptible patch can make your desktop faster. Rebuilding a stock 2.4 from kernel.org or your distro's sources will likely not be noticeable.
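As promised above, the checkinstall route is about as short as it gets (the prefix is just an example):

    ./configure --prefix=/usr/local
    make
    # instead of "make install": -R builds an RPM, -D a Debian package
    checkinstall -R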
Re:From source, definitely. (Score:3, Insightful)
Sure, but by the same token you wouldn't have one baseline system. In our software development lab, we have a couple of supported RH versions, SUSE, Debian, Mandrake, FreeBSD, OpenBSD, and Solaris.
Re:--No-Deps (Score:3, Insightful)
Good point, but it may actually be hard if your code is under a different license from a library you're using (namely, your code is more restrictive). But I'm not sure that's an issue either.
This breaks a lot of programs, and in an RPM system only one version of a library or program can be installed??
Well, there are ways around that. But if the program was linked against libfoo.so instead of libfoo.so.17, then you're pretty well screwed.
This, BTW, is the exact same thing as "DLL hell" on Windows systems, where multiple copies of a DLL may be installed, or a program may rampantly overwrite the existing version with its own (even if it's older!). Same story, different name...
But still, it's a bandaid solution for a big problem.
Well, Gentoo's emerge system is essentially the same as ports, from what I understand. But you're right -- it's a bandaid solution. What's the real fix? I dunno. If programs were linked against the full name of their libraries (instead of the symlink/hardlink shortened name) then it'd probably fix itself. Package managers are certainly capable of telling when a library is still required by a package, and there's no reason to remove old versions of a library until they're no longer required. It'd take a lot of packages being reworked, and probably a lot of devtools as well (like autoconf, automake, etc.).
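For the curious, you can see both names involved with stock binutils (paths and library names made up):

    # which sonames a binary asks for at runtime:
    objdump -p /usr/local/bin/myprog | grep NEEDED
    # the soname a particular library advertises:
    objdump -p /usr/lib/libfoo.so.17 | grep SONAME
    # what the runtime linker actually resolves them to:
    ldd /usr/local/bin/myprog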
As the numbers increase, the meaning changes (Score:3, Insightful)
As the numbers of machines you manage increases, you will find the meaning of the word "control" changes. We only manage a couple of hundred, but the pressure to standardise, as far as is practicable, is a strong one.
Look at the people running clusters, and you can see where that gets to in the end.
The reason we (primarily) use Debian is that its potential architecture for distributing change, and for customisation with binary releases, seems much greater.