Linux Software

Calling for Smaller Kernel Sources? 69

FrozedSolid asks: "I can understand that the kernel contains many drivers and support for a lot of platforms, but the fact that the full kernel download can amount to 32MB doesn't make it any easier to download over a 56k modem. Kernel patches are nice, but they obviously only apply when you already have an entire kernel tree on hand. Is there a way to download a leaner Linux kernel source? Is there a place that carries sources for x86 only, or possibly sources without some of the less popular drivers?"
This discussion has been archived. No new comments can be posted.

  • Actually, (Score:5, Interesting)

    by GreyWolf3000 ( 468618 ) on Monday October 21, 2002 @11:54PM (#4501607) Journal
    32MB is not too much to ask. On 56k, that amounts to a few hours of download. Let's assume 5KB/s (I used to get 2 in Windows, but 7 in Linux for some reason). That's 300KB/min, or roughly 1MB every 3.5 minutes, which puts 32MB at just under two hours.

    If you haven't gotten cable and you're using Linux, the distros themselves are an order of magnitude greater in size; I doubt that kernel sources are the real problem.

    • Well, a dialup user could have bought a Linux distro retail, borrowed or been given a burned copy of a distro, or gotten it bundled with some other product.

      Although I do agree with your point: 32 megs isn't that much. I've downloaded 100+ MB files on a 56k with little discomfort... (well, there was the instance when my ISP tried to overcharge me for making use of their no-time-restrictions plan...)

      While you sleep is a wonderful time to download.
    • by King of the World ( 212739 ) on Tuesday October 22, 2002 @12:09AM (#4501666) Journal
      You do realise that you're turning your back on the potentially brilliant kernel developers who are physically unable to leave a download going overnight and start "hacking" the next day, don't you?!?

      What sane person would ask such a thing?

      • by jag164 ( 309858 )
        Hell, by the way you talk, it seems you couldn't contribute to a Roman orgy with a twelve-inch cock. Get with the times, Jethro, and get broadband. Don't be so damn cheap. Hell, if broadband isn't available in your area there are other alternatives. Methinks rsync'ing the kernel sources would work wonders. Worked for me when I was stuck at 38.4 for four months. If you were so 'brilliant' you would have thought of this already.

        Have a nice day.

    • Re:Actually, (Score:3, Interesting)

      Sometimes, every byte counts.
      I'm in a university residence where we're allowed 500MB a week. That alone can get eaten up pretty quickly when you're downloading software updates. Or, for example, a DSL provider around here charges overage beyond 5GB/month. In these examples 10-20 megs won't make or break you, but it does show that these things can matter.
      • This is nothing (Score:3, Interesting)

        by 0x0d0a ( 568518 )
        Incredibly, Mozilla 1.2 is going to have built-in, enabled-by-default prefetching [mozilla.org]. The amount of bandwidth this will waste boggles the mind. Imagine every single Joe on the Net suddenly using up 20 times as much bandwidth downloading stuff that he will *never see*. The intermittent activity on lines turns into constant load -- and ISPs rely on being able to oversell their lines.

        Back in the day when the now-unpopular "web accelerators" were getting big, I always brushed them off as tools for network abusers who didn't know what they were doing. Now this abuse has been legitimized.

        The people who are going to be the real losers in all this are the techies, the ones who tend to have several browser windows loading at once, or an ssh connection, or a server running. Up until now, they've been somewhat subsidized by the fact that ISPs can charge cheap prices because the other 98% of users only use their line 10% of the time. Now that everyone's lines are going to be under continuous load... goodbye Quake.

        The entire idea of single window browsing is simply awful. It places extremely tight constraints on bandwidth and latency. When the user clicks a link, they want the new page there, now, and damn anything that has to be done to get it there. If you work with several windows downloading at once, so that you're reading one while another is coming in, you never run into this problem, since even a modem is easily enough to comfortably handle web browsing of nearly any site...as long as you're not waiting around staring at a progress bar while the image loads. Prefetching simply feeds this flawed single-window user-behavior model.

        For once, a Microsoft program (IE) is actually less of a network abuser than its competitors. Awful.
        • Re:This is nothing (Score:2, Informative)

          by Anonymous Coward
          It only prefetches next/previous link rels, which amounts to very little bandwidth. The only downside is that it could be abused to prefetch ads in the background to fake ad impressions.
        • Re:This is nothing (Score:3, Informative)

          by jensend ( 71114 )
          This is false. The Mozilla prefetching is only for pages which explicitly request to be prefetched by a <link rel="prefetch"> or <link rel="next"> type construction. A slideshow, for instance, might use Moz's link prefetching (since the probability that someone will proceed to the next slide is rather high), but most sites won't.

          Of course, they ideally ought to implement blacklist blocking for prefetching so people could exclude sites which use it in ways which affect network traffic adversely enough to be a worry, but my guess is that people won't start abusing it until IE does it as well.

          I had the same feeling of shock when I first heard about it a week ago- until I read the FAQ. Remember- any large project like this is unlikely to make highly visible stupid decisions. You linked to the FAQ; please read it.
          • by 0x0d0a ( 568518 )
            You are correct, but you made one bad assumption -- that people designing web pages are interested in responsible use of the network.

            You've got a web designer who can "improve his user experience" by marking everything as prefetchable -- what do you think is going to happen?

            And the overwhelming majority of web designers these days use GUI tools. After word gets out that websites designed with tool "foo" are snappier (because it uses prefetch by default), and it becomes a selling point...

            After all, what web designer wants to believe that people won't delve deeper into his site, and hit those pages *anyway*?
        • Ack. Slashdot stripped the tags (even though I told it to format as Plain Old Text). That would be a "link rel=prefetch" or "link next=" construction.
    • The problem is the base assumption that everyone has broadband. That isn't the case. You don't have to live in the sticks to _only_ have a modem line... and sometimes that modem line isn't that great (connects at 28.8)

      That's why I'm glad Phoenix came out of the Mozilla project. I don't need a web browser for email / news / IRC / horoscopes / making Belgian waffles. The source is a little smaller than the main product, and it's all that I would need in the first place.

      The one advantage I do have is access to a co-lo Linux box: I can download there, then rsync to my home machine. Unfortunately not everyone has this option available. I've noticed the huge size of the Linux source tree, and I'm hoping they'll start to consider modularizing the source build, similar to how XEmacs did with its packages.

    • I agree. Back when I had 56k, I would have given a lot if some of the things I was trying to download were as SMALL as 32MB. Ever tried to download an update for a game like Counter-Strike? Each update is 70-100MB.

      For what the kernel does, it's worth every byte.
    • If I were a 56k modem user, I would buy a Linux distro CD, because that's easier than downloading 650MB. Then if I wanted to update my kernel, or do anything else with the source code, without waiting for a distro release and buying another CD, the easiest way would be downloading a huge 32MB file. That doesn't look good to me, especially considering that the majority of Internet users are on 56k.
  • by gmhowell ( 26755 ) <gmhowell@gmail.com> on Tuesday October 22, 2002 @12:10AM (#4501671) Homepage Journal
    Part of this has already been answered. If you just pick x86 (or PPC or Alpha etc.) the size does not change that much. The vast majority of the kernel is not architecture specific. That's a good thing!

    I don't know of any such sites, but let me say a few things. First, your distro probably has a binary kernel package with almost everything either compiled in or available as a module. Barring that, when I was stuck on dialup, I'd get the most recent kernel once and then just download the patch each time after that (see the sketch below). It was a pain in the butt, but not as bad as downloading the full sources each time.
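
    A sketch of that routine, with the version numbers and paths purely illustrative (each patch-2.4.N from kernel.org applies on top of a pristine 2.4.(N-1) tree):

    # grab only the incremental patch instead of the whole ~32MB tarball
    cd /usr/src
    wget http://kernel.org/pub/linux/kernel/v2.4/patch-2.4.19.gz
    cd linux-2.4.18
    gzip -dc ../patch-2.4.19.gz | patch -p1     # the tree now holds 2.4.19 source
    cd .. && mv linux-2.4.18 linux-2.4.19       # rename the directory to match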

  • RSYNC!!! (Score:5, Informative)

    by Jeremiah Cornelius ( 137 ) on Tuesday October 22, 2002 @12:12AM (#4501677) Homepage Journal
    You need rsync.

    It was devised to combat just the problem you cite.

    rsync://rsync.kernel.org/pub/[wherever you want to go]

    Thank you, TRIDGE!
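
    For the curious, a minimal invocation looks something like this (the path under pub/ is illustrative; -P lets an interrupted transfer pick up where it left off, which matters a lot on dialup):

    # resumable fetch from the kernel.org rsync server
    rsync -avP rsync://rsync.kernel.org/pub/linux/kernel/v2.4/linux-2.4.19.tar.bz2 .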

    • rsync is great when what you are downloading is a newer version of something old and is uncompressed (ISOs tend to work quite well). Unfortunately, things like a compressed kernel tarball (or an RPM) tend to compress differently each time, leaving relatively little in common between versions, so the speed-up from rsync is very small (if it's there at all).
      • by robin ( 1321 )
        I thought that recent gzip-compressed files were "rsyncable", in that the blocks they consist of are designed to remain as invariant as possible given slight differences in the content of the files. See, for example, this patch [samba.org].
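
        Roughly, and assuming a gzip build that carries that --rsyncable patch, you'd repack a tarball like so:

        gzip -dc linux-2.4.19.tar.gz > linux-2.4.19.tar    # decompress, keeping the original
        gzip --rsyncable -9 linux-2.4.19.tar               # recompress with rsync-friendly block resets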
        • I was unaware of this; thanks for sharing! Do you know if this also applies to bzip2?
          • bzip2 seems to use 900k blocks (see ref [redhat.com] et seq.), so even if it does this kind of thing, I'm not sure it would be very useful for rsyncing updates to kernel source. I guess you have to make the trade-off between smaller .bz2 files and larger but rsyncable .gz files.
  • by burns210 ( 572621 )
    I have dialup as well, and I got Download Accelerator Plus (DAP [downloadaccelerator.com]). It allows you to start a download overnight and will disconnect for you whenever the download completes; this, along with a pause/resume feature, makes it invaluable for slow access over the net.
    • FreshDownload does much the same things as DAP, but it's free (and can do _8_ streams instead of 7, or 4 in the newest version of DAP):

      Fresh Download website [freshdevices.com]
    • Just a quick look at the suggested software, but you are asking a GNU/Linux user to:

      1. buy and install some W32 OS
      2. buy and install DAP
      3. download the Linux kernel sources
      4. reboot and compile

      I am aware of vmware, wine, ... but I guess you see the problem here... (not to mention that it's not even certain you are using an ia32 architecture, something W32 prophets always assume, obviously).
      I personally do not like step 4 ;) because rebooting is for installing new hardware ;)
      • I'm not saying this to advocate the solution proposed by the grandparent, but you are:

        1. Assuming we are kids living in our 'room' with one crappy box that we dual-boot
        2. Assuming we religiously refuse to run any other OS besides Linux, as if the OSes we run were some exercise in political correctness.
        3. Assuming we are uptime fetishists, the kind of people the Electric Utility investors like having around because we pay for their yachts.


        Free software is not just about zealotry anymore.
    • Or you can use wget.

      wget -c http://kernel.org/pub/linux/kernel/v2.4/linux-2.4.19.tar.gz

      Poof -- no Windows required.

  • Currently Gentoo has a kernel-source package (actually, several), but they don't really get much out of portage/ebuild. If Gentoo had USE support within the kernel, and separated the sources into PPC/x86, driver, etc. parts, it would be almost like a normal Gentoo package and almost like what you're looking for.
  • by cybermace5 ( 446439 ) <g.ryan@macetech.com> on Tuesday October 22, 2002 @12:24AM (#4501726) Homepage Journal
    Seems like a person could set up a few webservers, let people select kernel configuration options, and send the much smaller bzImage (and compiled modules) through email. Sure, the size would vary wildly based on how many modules were selected, the architecture, etc., but on average I'd say it would be much smaller.

    The benefit is less bandwidth wasted on people downloading 35 megs of source to recompile a 900K kernel image. The disadvantage is the processor time required: how many Athlons do you have to buy to serve the same number of kernels per day, and how does that compare to bandwidth costs?

    Yes, *I* would like to have the sources to myself, I have a few source files I need to tweak to get my machine working properly. But many people just have the burned CD from the friend of a friend, and would appreciate a recent kernel without a mammoth download.

    Maybe someone's already doing this, I don't know.
    • by FueledByRamen ( 581784 ) <sabretooth@gmail.com> on Tuesday October 22, 2002 @01:03AM (#4501856)
      Hey, this does sound interesting. I have a few spare computers around and a little knowledge of Perl... I think I'll look into this tomorrow. That would be _really_ neat. If I do manage to get something working, I will (of course) ensure that it won't remain working for long by posting it back up here...

      It really shouldn't be too hard. I've been staring at your post thinking for a bit, and the best way to do it that I can think of is to read in the config template file from the kernel source tree (have several selectable versions) and generate a huge page full of radio-select buttons. Once that is submitted, an MD5 hash is applied to the generated config file. If it matches an existing package (unlikely at best), simply serve that up. Otherwise, make a new build tree named "builds/$VERSION/$MDSUM" and copy the config file into it. Build the kernel, tar.gz the resulting modules and kernel image, and email the links to the person.

      This would require quite a bit of CPU horsepower, but it would make for a nice, small kernel download and a sort of set-and-forget build. Set the options and press the button on your lunch break, and have the link sitting in your inbox when you get home (unless it's slashdotted, in which case I'll come home to a hole melted in my floor where a server and accompanying cable modem used to be).
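
      A rough sketch of the hash-and-cache step on the server side (every name and path here is made up for illustration; a real service would sit behind a CGI and some kind of job queue):

      #!/bin/sh
      # hypothetical build-on-demand back end: $1 = kernel version, $2 = submitted .config
      # assumes an x86 tree already unpacked as linux-$VERSION next to a builds/ directory
      VERSION=$1
      CONFIG=$2
      SUM=`md5sum "$CONFIG" | cut -d' ' -f1`        # identical configs hash to the same build
      DEST=builds/$VERSION/$SUM
      if [ ! -f "$DEST/bzImage-$SUM.tar.gz" ]; then
          mkdir -p "$DEST"
          cp "$CONFIG" "linux-$VERSION/.config"
          # 2.4-style build; modules_install and friends left out to keep the sketch short
          ( cd "linux-$VERSION" && make oldconfig && make dep && make bzImage ) || exit 1
          cp "linux-$VERSION/arch/i386/boot/bzImage" "$DEST/"
          tar czf "$DEST/bzImage-$SUM.tar.gz" -C "$DEST" bzImage
      fi
      echo "mail the requester a link to $DEST/bzImage-$SUM.tar.gz"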
      • Go for it. Coming up with ideas like this is important, and it's great that cybermace did. But it's through people taking ideas and running with them that great things get done.
        • Aren't you proposing the distribution of binaries without the source?

          Be careful. You'll rile up the GPL pedants.

          Besides which, you would be AMAZED at how many kernels some DoS adventurer can request in a matter of a few seconds.
          • Aren't you proposing the distribution of binaries without the source?
            Include a link to kernel.org, or mirror the kernel on your site.
            Besides which, you would be AMAZED at how many kernels some DoS adventurer can request in a matter of a few seconds.
            Keep a cache of pre-built kernels. Odds are in your favor that multiple people will request the same configuration. Use that to your advantage.
          • Set bandwidth limits. Limit each IP to, say, one kernel per week. Same for each email address. Require registration, with a secret number in an image so it can't be scripted. Etc.
      • You could offer even more functionality, such as:

        remember the configuration the user entered, to make it possible for him to automatically get the next kernel version;

        send reminders whenever a new version is available (you could also send the kernel itself, but I don't think that's a good idea).

        Of course, CPU utilization will be your biggest problem (especially if this becomes successful).

      • What this project needs is a distributed gcc.

        Wow, that pie sure is way up there.

        But, imagine a huge network of computers all compiling your kernel...in a few seconds. You could even have redundancy and checksums to guard against the security concerns some are having about this idea.

        Again, 6000% more work, but interesting nonetheless.
        • Well, the kernel-webconfig backend (CGI or whatever) could do the delegation, and send the job to some idle machine whose owner has volunteered for the project (like distributed.net etc.)

          There are a couple of projects to support this kind of task (sometimes called "grid", "distributed", or "meta-" computing), such as Globus (http://www.globus.org/), Legion (http://legion.virginia.edu/), Globe (http://www.cs.vu.nl/~steen/globe), and many others; a web search for distributed or grid computing should turn up a bunch.

          Another solution is just to redirect the web page to a random or idle mirror, either with a DNS random/round-robin trick or by serving the home page from a CGI which links to the mirror.

          If you start a project, please post a link to your web page or mailing list, I would be interested in helping.

          reed

      • If you do this, make a note in your journal. I'll be watching.
    • Seems like a person could set up a few webservers, let people select kernel configuration options, and send the much smaller bzImage

      Hmm...yes....

      Well, I have a kernel for you to run. Just send me your config, and I'll send you a bzImage. Problem solved!

      All that's missing is the web server part, but I can fix that once a few people start using my kernels :)

    • Don't forget to install the Trojan code, too.

      It's a good idea, but how do I *trust* the webserver that just compiled a new kernel for me? I trust kernels from the distro, kernels I build myself, and maybe a few other places, but that's about it.
      • Ultimately you have to trust the people who develop any source you compile. Unless, of course, you want to pore through the files and hunt for every possible vulnerability.

        If a trusted entity set up a service like this, I see no reason it would be more vulnerable to abuse than a source distribution. Unless you always check all the kernel source before you compile it, there is a possibility of compromise either way.

        Since the kernel's track record on this is pretty good, I'd say it would be possible to do this without too much risk. I mean, how many of you have used one of those one-floppy-wonder images? Is there any way to be sure a trojan isn't installed in that kernel?
        • So far my kernels and kernel sources have come from "well known" places. For that matter, there are even well known sources for boot/root floppies. You're right, I don't check the source for everything, but I guess I just have to trust my sources.

          Anyone who puts up a kernel compiler should also have a way to show trustworthiness. Perhaps one aspect would be to invite inspection by the (much needed) paranoid fringe.
    • Interesting idea. You'd have to remember to set ARCH depending on the URL you used (e.g., /cgi-bin/menuconfig?ARCH=sparc64 or something) and to have the 16 gcc installations you'd need to cross-compile for anything the user wanted (a rough cross-build invocation is sketched below). ("Why does my new kernel keep crashing at boot with an 'Invalid Instruction' fault??")

      And of course, this would be much easier if we just made a linux kernel conf [xs4all.nl] frontend for this. :)
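
      For what it's worth, each cross build would presumably boil down to something like this, assuming the matching cross toolchain is installed (the sparc64 prefix is just an example):

      # ARCH selects the arch/ subtree; CROSS_COMPILE prefixes gcc, ld, and friends
      make ARCH=sparc64 CROSS_COMPILE=sparc64-linux- oldconfig
      make ARCH=sparc64 CROSS_COMPILE=sparc64-linux- dep
      make ARCH=sparc64 CROSS_COMPILE=sparc64-linux- vmlinux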
    • One interesting problem that would need to be solved in this system is trust, which is an important benefit of downloading the (possibly signed) source from kernel.org: the sites providing this service would need to be tied into some kind of trust system, with verifiable authentication, and it would need to be a no-brainer for users to do the verification before they install their new custom kernel (a tiny sketch of such a check follows below).

      reed
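
      Part of the no-brainer bit could simply be a detached signature published next to each custom package, checked with stock gpg (the file names here are hypothetical):

      # verify the detached signature before trusting the kernel package
      gpg --verify custom-kernel.tar.gz.sign custom-kernel.tar.gz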

    • Another option (lighter on the server side) would be to give back not the compiled kernel, but the part of the source tree needed to compile it.
      • It shouldn't be too hard (or large) to get signed MD5 sums of each file from some reputable source. For all I know, this is already available, but I've not seen it at the granularity of single files.
  • by toybuilder ( 161045 ) on Tuesday October 22, 2002 @12:28AM (#4501738)
    Never underestimate the bandwidth of a plane full of CDs... or the CD-RW drive at your nearest Internet cafe.
  • The last time I was poking around the kernel source SRPM for a Mandrake install, the stock linux-2.4.? tarball was in there (along with all the other patches they apply to it). Your distribution may have the same thing. Unpack the SRPM to get the Linux tarball, unpack that to get the source tree, then download and apply patches to bring it up to the version you want to run (rough commands sketched below).

    Granted, this doesn't answer your question, but it may ease the download times a bit..
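
    A rough sketch of that dance, with package and tarball names varying by distro and release:

    # pull the pristine tarball out of the distro's kernel source RPM without installing it
    rpm2cpio kernel-2.4.19.src.rpm | cpio -idmv      # spills out linux-2.4.19.tar.bz2 plus the distro patches
    tar xjf linux-2.4.19.tar.bz2                     # unpack the stock tree
    # ...then apply kernel.org patches from here to reach the version you actually want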
  • There is this thing called CD distributions, you know. Back before I had my cable modem, I had a subscription to the InfoMagic Linux developer set. It contained (back then) at least Red Hat and Slackware, as well as a dump of some popular FTP sites with active development.

    I don't think making a smaller kernel tree just for those people who unnecessarily want to be bleeding edge, don't like CD distributions, don't like downloading patches, won't or can't have broadband at home, and don't have access to broadband at a friend's place, library, workplace, school or anywhere else is worth the trouble. But if you feel otherwise, just go ahead and make it.

  • Warning: BitKeeper has a strange license; please consider it very carefully. I do not agree with the use of this license, but Linus likes it and it helps development.


    Just an FYI for people getting into kernel stuff with RedHat-ish systems:

    Getting Linux via bitkeeper.

    First, get BitKeeper:


    http://www.bitmover.com/cgi-bin/download.cgi [bitmover.com]

    Follow the instructions and it will tell you how to download and install BitKeeper.

    Then, clone the main Linux tree using BitKeeper:
    $ cd /usr/src
    (or wherever you would like your stuff)
    $ bk clone bk://linux.bkbits.net/linux-2.5 linux-2.5.40
    $ ln -s linux-2.5.40 linux
    $ (optional if needed: ln -s linux-2.5.40 linux-2.4 ; ln -s linux-2.5.40 linux-2.5) - sometimes dists and weird driver SRPMs look for linux/include in all sorts of places
    $ cd linux
    $ bk -r co
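
    Later on, updating is where the clone pays off; something like this (run from inside the tree) should fetch only the changesets added upstream since your last pull, rather than another full tarball:

    $ cd /usr/src/linux-2.5.40
    $ bk pull
    $ bk -r co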

    Also don't forget.

    - /usr/src/linux, /usr/src/scsi, /usr/src/asm, and /usr/src/asm-generic should all be re-linked to the right places in /usr/src/linux/include [if this is no longer necessary, let me know]
    - make install doesn't work with grub, so you have to do your thing manually now
    - The recommended compiler is gcc 2.95.3 [for both 2.4 and 2.5 right now]; I always have extra compilers ready to go just in case. Make sure all the tools are the proper versions, and that you have a recent ksymoops (if you need to do any poking around looking for problems), modutils, etc.

    If the build fails, find the offending code and remove it from your selection, or try to hack it if you need it.

    I would also like to mention cvsup and FreeBSD. I like cvsup quite a bit, and it's free and open. I only wish the Linux kernel used the same method FreeBSD does. I like FreeBSD for its coherency, speed, and ease of maintenance, and for the fact that the kernel is released together with the system for a very smooth ride. If you haven't tried FreeBSD, please try it.

    rsync is also very good; use it. I would also like to promote the purchase of very cheap CD-ROMs to get you started, and the FreeBSD CD is great because you can use cvsup to diff the whole thing with minimal bandwidth.

  • Use the patches (Score:2, Informative)

    by Spacelord ( 27899 )
    download once
    use the patches for incremental upgrades
    problem solved
  • I've been thinking about this same problem for a while. Especially before I had broadband.

    Checking the posts, I see someone mentioned a solution similar to what I am thinking about, but for binary kernel downloads. Whilst nice, I think few people actually trust a binary they didn't build themselves.

    I've been wondering why kernel.org didn't create a download configurator somewhat like the one that existed for djgpp in the delorie days.

    The poster is right in a sense: there are a multitude of drivers in the kernel that a large percentage of people will never use, but which are still invaluable for the people who do. So why not allow a custom download, somewhat like make menuconfig? Select the packages you *could* end up building, download them, and then build your kernel on the local machine.

    Given that I have absolutely zero knowledge about the kernel source, it's highly likely that I am overlooking something rather basic... is there something I am missing?
    • Well, uncompressed source for 2.4.19 is about 160 megs. Having the source for the entire 2.4 series available this way would be in the area of 3 gigs. Since people still use 2.2 and 2.0, we should also allow for configuring those. Disk space considerations, for one, would be a pain. (I'm leaving out 2.5 since I think anyone would accept the argument that if you can't compile it, you shouldn't be running it :3 )

      Also, someone would have to write some sort of script that could parse all the kernel config files in the source tree, which wouldn't necessarily be difficult, but it would be a pain to do unless you really wanted this feature. Other than that, I think hpa would have a fit if you suggested that a lot of users should be running a bunch of CML1 parsers, tar, and gzip processes on master.kernel.org, since it's been a somewhat problematic machine in previous incarnations ... and since the mirrors likely wouldn't pick up on it, everyone would end up doing it on the main site. ^^;

  • by Anonymous Coward
    Has anyone considered the possibility of modular kernel sources? Break the sources into several packages: things like kernel-base, kernel-scsi, kernel-reiser. Download the ones you need and extract them all to the same location. Things like `make menuconfig` would scan for your "modules" and give you options based on which ones you have extracted. Is this a possible method for making smaller downloads?
    • Yes, of course they have.

      As the kernel maintainers say every single time this comes up: if someone wants to come up with a sane system to do this and package kernels using it, go ahead. No one's stopping you.

  • What about something similar to what the MSIE and Netscape installers do - choose what parts you want to download? Have the equivalent of a 'make menuconfig' that you run on your local machine, which then downloads just the required/selected source, or better yet calls some kind of CGI that custom-packs your source tree into a tgz file for you?

    I think 'make depend' may kill it though... unless that information can somehow be downloaded ahead of time as well.

    - RR
  • I don't think it should be left up to third parties compiling sources... why not go to kernel.org, either upload your config or select everything you need, and download just the source you need? However, all this could be avoided if people would read the changelog and realize they don't need to upgrade to the latest and greatest kernel, especially when none of the pieces they use have had any changes at all.
