MBONE for Software Distribution?
Warren Vosper asks: "As I sit here twiddling my thumbs, waiting for the RedHat mirror sites to finish pulling down RH7, I ponder the need for this. Why can't we use the MBONE to update the mirrors? I could satisfy my burning need for instant gratification *so* much sooner. Hell, why couldn't I tune in to an MBONE broadcast from RedHat and get it at the same time as the mirror sites? Looking over the ancient (5-6 year old) online info regarding MBONE, I understand that it's used mostly for video and audio, but why not software distribution?"
Comparisons between cable modem, dsl and other (Score:1)
You could certainly multicast it (Score:1)
And then it might take 2 or 3 times longer to get, because if you miss a packet you'll have to wait for the retransmit to pick it up. By that time, enough mirrors will already have it that you can usually get through, so the sex appeal of the multicast is gone.
I think the idea is good, though. Redhat could just keep spooling ISOs at 56k, so that even modem users could dip into it, and then maybe at 256k, which most cable and DSL subscribers should be able to get. You could have a background process that plucks the packets off the wire and assembles the ISO on your disk. It might take a week from the day they release it, but it's possible. They could do errata that way too: there could simply be a Redhat channel and a daemon that runs to monitor it.
It wouldn't be "quick", but it would be an efficient use of bandwidth. I also think that if you made it automatic, more users might be willing to deal with it. There are still a lot of low-bandwidth internet users too; it might be a great way for them to get in on the action. Since the daemon would have built-in error recovery, they could leave their modem on all night and get part of it, and just keep doing that for a week or so until they had the whole thing.. Of course there are plenty of dirt cheap CDs..
MBONE for software distribution (Score:2)
and yes, for large numbers of receivers it would be considerably more efficient
look at the SRM protocol [lbl.gov] and variants for an example.
Why don't we use them, then? Because MBONE penetration is still dismally low... because no one has thought to ask for it. Which is strange, given the new unicast (wasteful) emphasis on streaming media... the internet had live video and audio more than 8 years ago, with open source tools.
Even if you have MBONE traffic, it's still very much a second-class citizen... it's not uncommon to see loss rates in excess of 30% due to provider-imposed rate limits.
Re:Multicasting and the MBONE (Score:3)
Re:Isn't MBONE a huge bandwidth hog? (Score:4)
Just a couple of points. You were using multicasting, not MBONE - there's a difference, one is a protocol, the other is a network established to test large scale multicasting.
And secondly, Ghost killed your network because your switch is dumb, misconfigured or both. In order for multicasting to work well on a switched network, the switch has to listen in on the multicast (usually referred to as IGMP snooping) to determine which ports are part of the multicast. If your switch is unable to do this, whether due to design deficiency or misconfiguration, the multicast session devolves into a bandwidth sucking broadcast storm.
Multicast & What Not (Score:2)
The first is very few people have MBONE access. Most dialups certainly don't have it due to a good portion of the terminal servers simply not supporting IGMP. There is also the issue of ISPs getting access from their uplinks which is like pulling teeth with the bulk of tier 1 providers out there. Most routers have multicast disabled by default which hasn't helped much.
The second is the need for a reliable multicast protocol. The problem isn't getting a reliable multicast protocol; the problem is choosing between the 30 or so that were tailor-made for specific configurations. The reliable multicast protocols are complex... they have to deal with slow clients, retransmits to individual hosts, ordering, etc. Getting a complete and open general-purpose reliable multicast protocol for use on the Internet which can be made into a standard is the problem.
The complete lack of a 'killer application' to force end users to request multicast capable access to the internet is one of the reasons this stuff hasn't taken off. If it had, I'd like to think I'd be streaming HBO to my laptop and watching it right now (which could use existing UDP multicast streams).
As a side note, IPv6 has mandatory multicast support from what I've read which if it ever is actually deployed fully might finally make some of this stuff a reality.
MBONE is for unreliable connections.. (Score:2)
Use Reliable Multicast Protocol (Score:2)
Multicasting is a wonderful method of distributing data, and it is tragically under-used. Mostly because it's harder to tell which adverts are being read and by whom.
(Though, also because many backbone providers are elitist uber-snobs, who prefer keeping the fun stuff for themselves.)
reliable multicast (Score:2)
There are a couple of efforts going though. Check out the Lightweight Reliable Multicast Protocol (LRMP). You may also want to search for data fountains.
multicast (Score:2)
--
Re:No, MBONE Actually uses Much LESS bandwidth (Score:1)
The streaming formats are actually MADE to work in a packet-loss environment, which is not the case for a regular file transfer.
The difference was that the networks our product ran on were private (mostly satellite) links
As a satellite-distribution system is 'one-way', the cost of having a remote node request a missing part of the file from the server is quite high (usually a phone call from somewhere 'in the bushes'). This cost is much smaller on the internet, which is (in itself) bidirectional.
Cheerio! Kr. Bonne
the bandwith bottleneck: netiquette ??? (Score:1)
In the 'old days', netiquette said that when you want to download a large file, you should first look for a mirror site closer to you, so as not to unnecessarily use trans-continental bandwidth and bandwidth towards the 'central server'.
People seem to do this less and less.
Cheerio! Kr. Bonne.
MBONE file distribution, etc. (Score:3)
I actually wrote a broadcast-oriented ftp around 6 or 7 years ago; it's currently used by a Fortune 500 company to xmit large chunks of data to 3000 locations simultaneously. This is in a satellite WAN environment.
As for multicast: this is something I've been looking into lately. For those interested, a few pointers: Cisco [cisco.com] has a few articles [cisco.com], and there is an RFC draft [ten-34.net], which has expired.
For a general discussion of IP Multicast, check out IPMulticast.com [ipmulticast.com], especially the tech section, which discusses reliable multicast protocols.
I should mention that there is a lot of work going into multicast protocols these days, for various reasons, and that multicast protocols tend not to be very generally applicable. Some, for instance, are useful for gaming, where the data is time-sensitive but the reliability of data transport doesn't have to be perfect. Data transport is obviously more important for an ftp-like system. Besides the more traditional use of UDP as a protocol base, some folks are implementing their own protocols on top of raw IP.
For a vendor's perspective on a product/implementation, check out Talarian [talarian.com], which has a reliable multicast product. [ You can even get code, if you register ] Note: this relies on the PGM protocol.
Check out Vaccine [sourceforge.net] for an effort to create a multicast distro image distribution tool.
Re:Multicast FTP (MFTP) (Score:1)
Just wanted to say, OmniWeb rocks! Keep up the good work.
--
Re:Configurable Kernel Downloads? (Score:2)
YES, it is!
How did I install Windows 2000 on my Cheap-O(TM) computers with crappy parts? I just dumped the installation files onto the hard drive and ran the installer. It found all of my hardware, with no problems. It even supported my Intel webcam (courtesy of DigiMarc, thanks) although Intel said that they don't work together.
How did I install Linux on a fairly standard computer with good two-year-old hardware? Most of the Debian install was very painless, even with two floppies and the rest being pulled over FTP. But to get it to work with my 3Com NIC (3Com!), I had to manually install kernel modules.
Granted, things were somewhat different after the install process, with the expected results...
--
Re:Multicast FTP (MFTP) (Score:5)
Fcast seems way more complicated than it needs to be.
Say that you have a file that you want to send to a lot of people. These people are going to want to get the file as fast as possible, but they are also all going to have differing speed connections.
Now, as the sender of the file, we would like to minimize the number of packets we send, but we don't have to ensure that we only send each packet once, we just need to be better than sending every packet once to every recipient.
So, instead of using one multicast channel, use a bunch. Each channel broadcasts at some lowest-common-denominator speed, which can be picked based on your intended recipients' networks (if you don't know, use 14.4kbps or something like that). Then, compute the time it will take to transmit the entire file at that speed. Time-shift each channel by channelNumber*totalTime/numberOfChannels and start broadcasting all of them continuously, at the prechosen speed.
Now, as a receiver, you know how much bandwidth the sender has to you (or at least you can figure it out). Simply subscribe to the largest number of channels you can w/o getting dropped packets over some threshold. You might get some duplicate packets from wrap around between the beginning and ending of the transmission, but those can be tossed.
If packets are lost, you could either request specific packets from the server (if you have only a few and the server isn't too loaded), or you could just jump on the channel that will have that packet soonest (and onto another channel if you miss it again, rinse, wash, repeat).
Assuming a constant base channel speed (which seems reasonable until broadband access is more widespread), the trade-off here is the number of channels. By increasing the number of channels, the sender has to repeat each packet more times, but the clients can have better maximum throughput and less time to wait to replace dropped packets.
There is probably some additional cost at the routing layer for all these people subscribing and unsubscribing from extra channels, but I assume (maybe incorrectly) that the routing layer would be able to handle this problem since it would be distributed across a whole bunch of routers.
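The time-shifted channel scheme above can be sketched in a few lines. This is only a toy model of the scheduling math (all names are made up; a real receiver would be joining actual multicast groups rather than calling functions):

```python
import math

def packet_on_channel(channel, t, num_packets, num_channels, pps):
    """Packet index that `channel` is sending at time t (seconds).

    Every channel cycles through the whole file at the same base rate
    (pps packets/sec), but channel i starts i*totalTime/num_channels later."""
    total_time = num_packets / pps
    offset = channel * total_time / num_channels
    return math.floor((t - offset) * pps) % num_packets

def packets_received(channels, duration, num_packets, num_channels, pps):
    """Distinct packets collected by listening to `channels` for `duration` seconds."""
    got = set()
    for tick in range(int(duration * pps)):
        for ch in channels:
            got.add(packet_on_channel(ch, tick / pps, num_packets, num_channels, pps))
    return got
```

Under this model, a receiver with four times the base bandwidth subscribes to four evenly spaced channels out of eight and has the whole file in a quarter of one cycle time, while a modem user subscribes to one channel and waits a full cycle.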
Great... but how do I get MBONE? (Score:1)
It would be better if ISPs supported MBONE natively. The problem is that I went looking for MBONE, but I can't get it!! I posted a message on the MBONE Engineering mailing list asking for a tunnel and got no responses. It used to be that I would see lots of requests for MBONE tunnels on the mailing list, but not anymore. What happened? I've had MBONE withdrawals since '94, when I worked at a company that had a decent ISP at the time (InternetMCI). InternetMCI supported MBONE for any customer that asked.
I want my M(ulticast)TV! Give me a (M)BONE here!
Re:Digital Fountain (Score:2)
Re:Digital Fountain (Score:2)
FCP - File Cast Protocol (Score:1)
It's really not a protocol per se, but a layer on top of multicast that makes it reliable without increasing the amount of bandwidth on the network (i.e. the server doesn't have to wait to rebroadcast). It's a pretty simple idea that I haven't really heard anyone else propose. I would like to have had a prototype by now, but I'm just too busy. Here is the basic premise:
1. Client connects to special query port on server and asks if file XYZ.tar.gz is available for FCP transfer. If not, drops back to normal FTP.
2. Client requests to be added to multicast distribution list. Server responds with length of the file and where in the file the server is currently multicasting. Say it is 75% through the file already.
3. Client zeros a file to length of file requested and begins receiving data and writing it to the location in the file that the server specified. Client keeps track of bits that were dropped or corrupted.
4. When the server gets to the end of the file it simply begins multicasting back at the beginning and continues.
5. Once client registers a complete pass, it can either request dropped chunks via regular FTP or wait for the next pass.
It would probably only work on local nets or something like the MBONE, but it would still be useful, and it gets around some of multicast's nastier problems. If anyone likes the idea, feel free to start an open project to implement it. I would be willing to help, but I can't take the load of starting the project at this time. Email me at handle@visto.com.
If an equivalent has already been done, great! Please let me know where I can grab it!
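For anyone who wants a starting point, here is a rough sketch of the client-side state machine from steps 1-5 (all names are hypothetical; the real thing would sit on a multicast socket and write to disk):

```python
class FcpClient:
    """Toy model of the FCP client: join the carousel mid-stream, record
    which chunks arrived intact, and report what to re-fetch afterwards."""

    def __init__(self, file_len, chunk_size, start_chunk):
        self.num_chunks = (file_len + chunk_size - 1) // chunk_size
        self.start = start_chunk   # where the server was when we joined (step 2)
        self.have = set()
        self.seen = 0              # chunks broadcast since we joined

    def on_chunk(self, index, ok):
        """Called for each multicast chunk; ok=False means dropped/corrupt (step 3)."""
        if ok:
            self.have.add(index)
        self.seen += 1

    def pass_complete(self):
        """True once the carousel has wrapped all the way past our join point (step 4)."""
        return self.seen >= self.num_chunks

    def missing(self):
        """Chunks to fetch via regular FTP, or to wait another pass for (step 5)."""
        return sorted(set(range(self.num_chunks)) - self.have)
```

A usage sketch: joining a 100-byte file in 10-byte chunks at 70% through, the client consumes chunks 7, 8, 9, 0..6 and then asks for whatever it dropped along the way.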
Re:Perhaps (Score:1)
Sure, this would decrease the best possible download time - the first mirrors would have to wait 2x (assuming you get everything on the second dump). But it would take that amount of time for everyone.
Re:No, MBONE Actually uses Much LESS bandwidth (Score:1)
In that case, couldn't the recipient just queue up a list of all the packets that got lost, wait until the end of the broadcast, then just request those packets (maybe giving up if lost > a few percent)? This would solve the dropped packet problem, and this would still imply a significant bandwidth savings. All you'd need is an algorithm built in to make sure that not all nodes on the broadcast shoot the "repost" request at the same time (which might Slashdot the server).
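The "don't all shoot at once" part is roughly the trick SRM uses: each receiver waits a random delay before sending its repost request, and stays quiet if it overhears another receiver ask for the same packets first. A toy sketch of that suppression, with made-up names:

```python
import random

def schedule_nacks(receivers_missing, max_delay=5.0, rng=random):
    """Assign each receiver a random send time for its repost request."""
    return {r: rng.uniform(0, max_delay) for r in receivers_missing}

def requests_actually_sent(missing_by_receiver, send_times):
    """Replay requests in time order; a receiver only asks for packets
    no earlier request has already covered, so the server sees each
    lost packet requested roughly once instead of thousands of times."""
    sent = []
    covered = set()
    for r in sorted(send_times, key=send_times.get):
        need = missing_by_receiver[r] - covered
        if need:
            sent.append((r, need))
            covered |= need
    return sent
```

With everyone missing the same packet, only the receiver whose random timer fires first actually bothers the server.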
MCAST could be used, ISPs don't care (Score:3)
First, what it really is. Multicast on a local level uses a special range of IP addresses (class D addresses) that are mapped into NIC addresses using a sort of hash. Modern NIC cards have filters so they can listen in to only traffic sent to them or to a multicast address placed in the filter. This keeps the card from having to process every single packet on the network.
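The "sort of hash" is actually a fixed mapping: the low 23 bits of the class D address are glued under the reserved 01:00:5e Ethernet prefix, and that is what the NIC's hardware filter matches on. A minimal sketch (Python used here purely for illustration):

```python
import ipaddress

def multicast_mac(ip):
    """Map an IPv4 multicast (class D) address onto its Ethernet MAC.

    Only the low 23 bits of the IP survive, so 32 different group
    addresses collide onto the same MAC and software still filters."""
    addr = ipaddress.IPv4Address(ip)
    if not addr.is_multicast:
        raise ValueError("not a class D address")
    low23 = int(addr) & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)
```

So the all-hosts group 224.0.0.1 lands on 01:00:5e:00:00:01, and because of the 23-bit truncation, 225.127.0.1 and 239.255.0.1 end up on the same filter entry.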
It is possible to route multicast traffic by tracking multicast group subscribership and TTL. With this, traffic will only go onto a branch of a network if someone is there to listen to it. That is the whole concept of multicast: eliminating redundant or unneeded traffic streams.
The MBONE, or Multicast Backbone, is a way of tying together networks running multicast traffic over a non-multicast-capable intermediate provider (usually your ISP). MBONE works by creating tunnels through the unicast network to carry multicast traffic. It's very inefficient, but it was a way to bootstrap people up. MBONE should have been a temporary stopgap measure. It would have gone away once the service providers upgraded their equipment to support multicast routing. Unfortunately, that has never happened.
Next, a little background. I first saw multicast and MBONE demonstrated at Networld+Interop in 1994. The demo was casting a stereo-quality radio signal. I fell in love with the technology then and kept an eye on it for the next six years. It still amazes me it hasn't gone further. The biggest reason I can see for the lack of progress is a complete non-interest by Internet service providers. That is odd, since it could save them on bandwidth costs in the long run.
As mentioned before, the most common uses for multicast are audio and video, which can support a little bit of loss. Groups like broadcast.com would all but go out of business if multicast came into universal use. If you are sending a 28Kbps audio stream, you only need 28Kbps of outgoing bandwidth. The signal will just divide at routing points until it reaches the end subscribers. If nobody is subscribing to a particular broadcast down one branch, no traffic will go down that branch. Perhaps this scares the service providers, since just about anyone could set up their own audio or video broadcasting service with an ISDN (or DSL) line for bandwidth.
With a little work, non-loss-tolerant content can be moved via multicast as well. Forward error correction is one way. You could also have the routing points cache a piece of the stream to answer rebroadcast requests. In a worst-case scenario, you connect clear back to the sender using unicast to request just the blocks of data you missed. From there, it would be possible to send them back down via unicast or via the multicast channel, so anyone else who missed that block could pick it up. It's all a matter of how creative the programmers want to get with their file transfer service.
Software updates are one fine use for multicast. It would also be possible to come up with a multicast FTP that could allow several receivers to tap into a stream once the sender is already going. If someone comes in halfway through, the sender would just start transmitting from the beginning again once the end of the file is reached. This combines on-demand transfer with streaming broadcast. Best of both worlds.
Usenet news is another good place where multicast could cut down on bandwidth use. Give major branches of the usenet tree their own multicast address. This way, if someone doesn't want the "alt.binaries.pictures.erotica" subtree, they just don't pick up that transmission.
Time sync was a planned use for multicast. ntpd has provisions for receiving packets via multicast. There is even a special address set aside (and defined in the "mcast.net" domain). A few master servers could keep the whole Internet on common time.
Personally, I'm looking forward to the day when the entire Washington Post is pushed into my set-top box via multicast at night. It could then bounce over to my PDA via Bluetooth and be ready for me to take along to the train in the morning. Since the content is all advertising-supported, the paper from a digital source could be free.
There is so much possibility here, it just needs to be tapped.
Mirrors Could RSync after receiving via MBONE (Score:3)
distributed content systems instead! (Score:1)
Instead, try something like Mojo Nation [mojonation.net] or Freenet [sourceforge.net] to distribute your popular content widely so that you can get it when you want without all having to access the same server.
Re:Perhaps (Score:1)
You would then also send a packet number with each packet of data, and at the end of the transmission, the client could request any dropped/missing packets from the server.
Ok, a crude way to deal with it, but it's sort of like ACK-ing packets with UDP, except it's after the fact instead of realtime. Maybe the server could also send out the responses to the ACKs on multicast... after waiting a few minutes to gather all of the needed packets, and then repeat that until everyone's happy.
Use real broadcast channel: Digital Radio (Score:1)
Digital Audio Broadcast (DAB) (available in more and more parts of Europe, Canada, Near & Far East, Australia) provides for data casting over radio channels (one channel does 24-384 Kbps). There's provision for Broadcast Web sites in the DAB specs and some stations are already doing that (e.g., BBC in the UK). Psion is releasing a DAB receiver for the PC (USB attached) this month/early next month (WaveFinder [star.co.uk]).
Broadcasting an FTP site is really not that different...
You could also broadcast those Linux security patches as RPMs/package-format-of-your-choice and have them installed on your machine automatically...
In the US companies like XMRadio and Sirius Radio seem to be looking into data casting via satellite digital audio radio
Re:Perhaps (Score:1)
The Digital Fountain approach is one example of a system designed for large-scale software distribution over multicast.
Digital Fountain (Score:3)
Use Proper Switching equipment. (Score:2)
(www.foundrynet.com [foundrynet.com])
Financial data. (Score:2)
Re:Isn't MBONE a huge bandwidth hog? (Score:2)
Multicast can be tailored to not smash your network, but the only reason this is fairly effective in the office is because it's not lossy
Perhaps (Score:4)
And no guaranteed delivery?
This is perfectly acceptable for media broadcast, where the codec can deal with dropped bytes.. but..
Reliable multicast (Score:1)
Just normal IP multicast does not have any guaranteed delivery, so that is not good enough if you need a received copy where every bit is exactly the same as the original. For that you need Reliable Multicast, a protocol placed above IP multicast that takes care of retransmissions etc.
There are actually some attempts at distributed filesystems using multicast; one example is JetFile [www.sics.se]. I guess something like this could be used to sync ftp mirrors etc.
Not Mbone - Multicast (Score:2)
But I digress. Onto the question: what you need is a reliable multicast protocol. Most of these are based on a FEC (Forward Error Correction) scheme for making sure that all of the data gets to the intended recipients. You have programs like Kencast that enable you to do this (there are others, but lately we are working on creating our own). One of the other posts suggested a multicast version of TCP/IP. This makes no sense, as TCP is a connection-oriented protocol, whereas multicast is based on UDP addressed to a class D internet address (224.0.0.0-239.255.255.255).
Any way hope that helps...
-doon
Re:Digital Fountain (Score:1)
MBONE - Unreliable (Score:1)
Current multicast technologies do not include reliable messaging. I'm assuming MBONE uses UDP/IP (User Datagram Protocol). UDP drops packets. You need something like a TCP/IP version of multicast, which doesn't currently exist. Of course, it would probably require quite a bit of ACKing for each outgoing packet. I don't think that would work very well. Not to mention that multicast packets pretty much flood destination LANs (if they aren't properly switched with IGMP-snooping switches).
Er, um but not RH... (Score:1)
Whats the rush? (Score:1)
already have downloaded and installed.
Re:Perhaps (Score:3)
really slow (28kbit/s, perhaps), and dropped packets handled with forward error correction. I seem to remember an old mbone tool that did basically that, but only for still pictures. (I don't remember what it was called, though.) A more complex protocol might include receivers sending (unicast) requests to the source to repeat parts of the multicast.
HTTP? (Score:1)
Surely bandwidth problems could be resolved to an extent by simply making the file(s) available via HTTP and relying on ISPs' HTTP caches. I don't know how common it is in the USA, but in Australia most large ISPs transparently proxy all HTTP.
Of course, many caches may not bother to keep a 600Mb file, but they should, and if the proxying were done at all levels of ISP then the original sites would hurt a lot less.
Re:It's been done... (Score:2)
Oh, I dunno - Descent Operating System possibly - although I tend to use mine as a DOOM2 operating System
--
Re:Anycast ? (Score:1)
A packet sent to an anycast address goes to only one of the members. I think that this would be pretty much useless for mass file transfers.
The point behind multicast is to reduce the load on the server by ensuring that it only has to send any given packet once no matter how many receivers there are. That makes it a marvelous method for reducing the peak server load caused by high-demand events like the release of new distributions of Linux. To be sure, there is a lot of work involved in making sure that the packets all get to every destination in order and intact, but in the multicast protocols discussed so far, the bulk of the additional work takes place at the receiver's end and, therefore, does not constitute a performance bottleneck. Think of it as a distributed client for file transfers.
I consider the suggestion to make use of a multicast network (not necessarily the MBONE) to distribute software of general interest to be extremely interesting whether it uses "Class-D" addresses or IPv6.
Why not use USENET (Score:1)
I know this isn't multicasting, but it has many other advantages.
- Each ISP pulls the file over its external link only once; all the rest of the traffic is internal.
- USENET is a proven network for distributing LARGE files all over the internet. (I know USENET was never meant to do this, but hey, it's used to 'illegally' distribute DVD movies every day, so why not use it legally to distribute free software?)
my two cents,
Johan V.
Re:que? (Score:1)
Namely, CVS and CVSup.
Re:Blah (Score:1)
Lets see, 600 Megs at 1200 baud... hhmmmm
But how many channels?
TCP and Multicasting different (Score:1)
With file distribution, it will only work over LANs or with a limited, agreed subset of MBONE servers, because if one packet doesn't reach one of its destinations, it has to be multicast to everyone else again.
NBone (Score:1)
2. Select the same file on all the servers. Some may be on a different path.
3. The client program verifies that the size and version of all the files are the same.
4. The client initiates the transfer.
5. The transfer algorithm would dissect the file across the network, much like what Gozilla does.
6. Every server gets to transfer only a portion of the file, thereby reducing its bandwidth requirement.
7. The client is responsible for reassembly.
8. Once a server has completed its portion of the file transfer, it could continue with another chunk.
9. This could also be accomplished using a Napster kind of approach to file transfer:
10. the server would do the source selection and advise the client.
11. At the end of a transfer, the annealing process would pick up pieces dropped by various servers and tie up the missing links.
Biggest advantage: no need for new infrastructure on the server side.
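Steps 4-8 amount to a chunk scheduler plus client-side reassembly. A toy sketch, ignoring the network entirely (all names hypothetical):

```python
def assign_chunks(file_size, chunk_size, servers):
    """Initial round-robin assignment of chunks to servers (step 6)."""
    num_chunks = (file_size + chunk_size - 1) // chunk_size
    plan = {s: [] for s in servers}
    for i in range(num_chunks):
        plan[servers[i % len(servers)]].append(i)
    return plan

def reassemble(file_size, chunk_size, received):
    """Client-side reassembly (step 7): stitch received chunks back
    together and report which chunk ids the annealing pass (step 11)
    still has to chase down from some server."""
    num_chunks = (file_size + chunk_size - 1) // chunk_size
    missing = [i for i in range(num_chunks) if i not in received]
    data = b"".join(received.get(i, b"") for i in range(num_chunks))
    return data, missing
```

A server that finishes its list early would simply be handed the next chunk still listed as missing (step 8).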
Just Multicast it the Old-Fashioned Way! (Score:1)
Of course there's usually too many dropped packets for reliability using this method too!
Re:Multicast Killer App (Score:1)
Re:Perhaps (Score:1)
I played with developing this a few years back, building a rough prototype of an MFTP-like multicast file distribution program in Perl. It worked well in the test phase, but I never took the software anywhere beyond a simple test scenario... The amazing thing was that with the speed-up of xmitting to 20 hosts simultaneously, the prototype had acceptable speed.
A pipe to suck it through (Score:5)
|--| == tarball size. 70s
|-| == bandwidth to move it.
|----| == tarball size. 80s
|--| == bandwidth to move it.
|--------| == tarball size. 90s
|----| == bandwidth to move it.
Over the last 30 years of computing, we've always had more program than pipe to push it through. The only way to get ahead of the slowly increasing speed of affordable bandwidth is to pay big bucks for a line that will be outdated in a few years.
In the not too distant past, today's game emulator ROMs used to be moved across the country in a game console containing a huuuuge amount of graphics hardware. For the day, having a game that totaled more than 1 Meg in run time size was just gigantic, and now we zip these same ROM images around the net in seconds.
In the very near future, IPv6 over multi-Gbps fixed wireless will make mirroring Linus' balls of tar a trivial task. But, of course, by then, the "kernel" will be 200G ;).
The lesson here is that affordable bandwidth has slowly and steadily increased over the history of computing, and I see no reason why it should suddenly jump.
Re:Digital Fountain (Score:2)
Configurable Kernel Downloads? (Score:2)
The make system could still be aware of the other pieces you don't have, so if you ever do need to grab some other stuff, it will tell you exactly which code to download. If the kernel gets to 200G, this will be a necessity.
MBone could be a great boon for distribution (Score:1)
Step one: Major software release announced. Multicast time/date announced.
Step two: Software sent over Mbone at some reasonably slow bandwidth for general users to collect. Transfer mechanism allows each end station to identify which parts of the package they have missed.
Step three: Millions of end stations send unicast messages back to the source informing it which parts have been missed.
Step four: Source distribution site retransmits missed portions, prioritized by the number of stations that need them.
Step five: Iterate steps three and four a couple more times.
Step six: Major mirror sites automatically inform source distribution site that the package is complete and give a URL (or similar handle) for access on their mirror.
Step seven: Source site multicasts a message which indicates end of multicasts and lists mirror URLs. End stations still missing portions of the package automatically begin unicast retrieval from mirrors (of just the missing portions).
I believe that this could dramatically reduce the bandwidth clog associated with major releases and could be handled automatically. Further, I don't think there is an RFC (yet) which covers all aspects of this process.
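Step four above is essentially a demand-sorted retransmission queue: tally the unicast "I missed chunk X" reports, then resend the most-wanted chunks first. A minimal sketch (names made up; a real source would also cap each iteration's retransmit budget):

```python
from collections import Counter

def retransmit_order(nack_reports):
    """nack_reports: iterable of sets of missed chunk ids, one per station.

    Returns chunk ids ordered by how many stations still need them,
    most-wanted first, ties broken by chunk id for a stable order."""
    demand = Counter()
    for missed in nack_reports:
        demand.update(missed)
    return [c for c, _ in sorted(demand.items(), key=lambda kv: (-kv[1], kv[0]))]
```

Iterating this (steps three through five) shrinks the report sets each round, so later passes retransmit less and less.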
Re:HTTP? (Score:1)
MBone wasn't designed for this (as so many people have pointed out), but Squid [squid-cache.org] was. Ideally, the primary source should be a front-ended by a squid cache that only peers with the secondary mirrors. The secondary mirrors wouldn't even have to synchronize; client requests would automatically force a sync with the primary. And becoming a (tertiary) mirror would be as simple as adding the secondaries to your peering list in /etc/squid.conf.
However, in regards to transparent proxying in the US, I can speak from experience. It doesn't pay off. I used to sysadmin for a smallish ISP (2,000 customers, 400 lines) and we experimented with transparent proxying. With 16 gigs of cache, the proxy was serving about 30% of the requests out of cache. However, after some VIP customers noticed that their real-time stock quotes weren't real-time anymore, I had to turn it off.
Multicast FTP (MFTP) (Score:3)
que? (Score:1)
Re:Perhaps (Score:1)
I think we can do better than that: slice the package into multiple streams - say, 256 of them, each at 20 Kbit/sec. Big mirror sites could just follow all 256 streams at once, and get the package at 5 Mbit/sec; modem and ISDN users would receive two or three streams at once. Just piece together all 256 strands of data, and you've got the whole tarball/MPEG/whatever!
I don't think on-demand retransmission is a good idea, though. Instead, just have the multicast repeated continuously by a handful of big servers. The stuff nobody wants wouldn't be transmitted anywhere, so it doesn't take any bandwidth - but it's there if you want it.
In fact, many years ago Acorn Computers used a similar technique for loading software over their network, Econet, known as the Broadcast Loader: a client would request a file from the server. If any other clients wanted a copy, they'd ask for one, and the file would be broadcast around that network segment. Clever stuff - and the user manual had a copy of the transceiver circuit diagram in the back! Now that's open ;-)
clients (Score:1)
-foxxz
Re:Configurable Kernel Downloads? (Score:1)
Besides, the original poster was just talking about selecting the source modules you need, I think. There's no real need for me to be downloading driver source for devices which are *never* going to be compiled. However, selecting your options, then waiting and getting the resultant compiled kernel, would be pretty cool. I first thought of this due to my 24+ hour compiles as mentioned above (with kernels which matched previous configs being cached, of course). This would be very workable with 5-minute compile times, but now that I have 5-minute compiles, the need goes away. It might be worth doing just for the coolness factor, as you say, but I don't have a permanent connection (yet).
Rich
Re:You could certainly multicast it (Score:2)
The client could also be smart: once it has most of a file, it could only tune into channels when the bits it is missing are "playing". As it gets towards the end, it could even get more aggressive and tune into faster channels to get the chunks it needs (at the expense of your other bandwidth).
Rich
mbone domain names hijacked (Score:1)
mbone.com is some totally random "portal" site.
mbone.ORG is yahoo trying to make a buck out of it. How the hell do they justify
interestingly though, mbone.net isn't taken.
Some good samaritan who knows how mbone SHOULD be used, want to snag that domain?
Re:Multicast FTP (MFTP) (Score:1)
For the OmniGroup dude, Got any idea when you'll be releasing the Cocoa port of Q3? I'm waiting...
Another system imager (Score:2)
http://systemimager.sourceforge.net/
The page says "Now supports: ext2, ext3, reiserfs, use without DHCP, and more!"
Worthwhile guys!
Isn't MBONE a huge bandwidth hog? (Score:1)
Re:reliable multicast (Score:1)
And for software this could be a convenient way to ensure that your Microsoft OS is already damaged before it even gets installed! Why waste time.
Re:socially responsible use of resources (Score:1)
Otherwise I agree with you. But what are you gonna do? It's been bred into the geek culture. Wait a couple of days to upgrade our stable old Redhat 6.2 systems and traffic will be mostly back to normal -- or find another mirror.
satellite, cable (Score:1)
Re:satellite, cable (Score:1)
Re:No, MBONE Actually uses Much LESS bandwidth (Score:1)
Use RAIP (redundant array of inexpensive packets), such that many packets can be lost, but there is enough redundancy to recreate the lost packets from parity bits. If you spread the parity bits across packets, you can get very efficient results.
MBONE wasn't designed for this... (Score:2)
socially responsible use of resources (Score:1)
Tonight I went to ftp.freesoftware.com to find a couple hundred kilobytes of updates to the stable old Redhat 6.2 systems I maintain. I had to retry about 10 times because its 5000 user limit was reached. While I waited 30 seconds for a small directory listing, I was wondering how many of you are wrecking the availability of the Internet in general.
Here's some advice for the general masses, aside from the great technical commentary already shmoozing around on the topic here. Most of you don't need this software this soon. Think about how your actions affect everyone else. You don't need to download an operating system, especially not to do a single installation straight off the remote host rather than mirror it locally. Ask around your neighborhood, ISP, or workplace about setting up a local mirror.
And buy the cdrom at http://cheapbytes.com because it's not expensive, it gets to you pretty quickly, and most of you just didn't need it that soon anyway. Many of you are wrecking the availability for those who do need it fast.
Ask not only "Can I?" but "Should I?". Thanks.
===
Problem with Multicast (Score:2)
So if the server sends 10,000 packets for the whole file and you lose packet number 5,555, you'll probably have to wait until packet number 5,555 comes around again on the next pass. The server isn't going to resend the packet the way it would over a direct TCP connection. This is tolerable in video and sound, where you'll only lose a second or two. But with a file, you can't lose too many bytes before it becomes unusable.
One solution is for the listener to absorb as many bytes as possible for the duration of the multicast, and later initiate a TCP connection to request the packets it missed.
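A rough sketch of that two-phase scheme in Python: soak up whatever multicast packets arrive, note the holes, then fetch just the holes afterwards. All the names and packet shapes here are invented for illustration; a real client would use actual multicast and TCP sockets.

```python
# Phase 1: keep whichever (seq, payload) pairs happened to arrive
# during the multicast, and work out which sequence numbers are missing.
def absorb(packets, total):
    received = dict(packets)
    missing = [seq for seq in range(total) if seq not in received]
    return received, missing

# Simulate a 10-packet file where packets 3 and 7 were dropped en route.
arrived = [(i, ("chunk%d" % i).encode()) for i in range(10) if i not in (3, 7)]
received, missing = absorb(arrived, 10)

# Phase 2: `missing` is what the listener would now request over a
# unicast TCP connection; here we just pretend the retransmit arrived.
for seq in missing:
    received[seq] = ("chunk%d" % seq).encode()
whole_file = b"".join(received[i] for i in range(10))
```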
Gary Cho
Re:No, MBONE Actually uses Much LESS bandwidth (Score:1)
Using this scheme, most hosts get the file correctly on the first try, and those which don't usually only need a few extra packets to finish the job. Only a very small number of hosts need a full retransmit, which is equivalent to a single FTP session. The savings are phenomenal.
The difference was that the networks our product ran on were private (mostly satellite) links. I've never really tried to use multicast over the IPV4 internet, nor the MBONE. I'm not sure how well they would work.
Re:No, MBONE Actually uses Much LESS bandwidth (Score:1)
Yes, two-way access over satellites is a problem. In our case, it didn't work at all, and is what caused the architecture I described--the receiver would wait until the transmission had finished, and then make a request for the missing parts, but over a different network, and the reply would come back over this same network. The satellite links were only used for the multicast distribution.
Am I missing something here? (Score:2)
Anyway whether that's the case or not, there still must be an awful lot of excess data floating about if you only want to mirror the file onto say 100 sites. Given that people are also talking about tunnelling mbone connections too - it makes me cringe.
Point blank, the problem is that RedHat doesn't have enough bandwidth.
Perhaps a better system of conventional FTP mirroring would be the answer. First of all, put the file onto a restricted FTP server at redhat.com. Then only allow each continent's two fastest servers to pull it; they can then distribute it to smaller ones, and finally (like 3 hours later) it's open to the world.
That way I'd probably find that SunSITE at Imperial College London grabs the file from RedHat, my ISP downloads it to their local mirror archive, and then I can pull it over my cable modem at 55 Kbytes/s.
Maybe an open source project (Score:1)
Re:Casper: The Linux-friendly Ghost (Score:1)
Re:HTTP? (Score:1)
-Nev
Re:socially responsible use of resources (Score:1)
-Nev
Re:Limitations ? (Score:1)
If you were building your own multicast backbone across internal sites, you can easily do away with this limit so that your software and audio/video distribution could exceed that. Most companies that use software like Cisco IP/TV or RealNetworks RealServer and do multicast broadcasts do raw MPEG-1 broadcasts between live feeds and servers that unicast streams out to watchers/listeners. That is how live concerts and events can be scalably streamed to a large Internet viewer base.
However, out on the Internet, traffic flows of this magnitude would be unrealistic. Not to mention that once it leaves your network you cannot ensure quality of service, and packets may (and will) be dropped as they flow through commercial networks.
Multicasting and the MBONE (Score:4)
Most multicast-native customers that make use of the MBONE have quite a bit of bandwidth to toss around for video and data broadcasts, or it is part of their business model (broadcast.com, NASA JPL, US DOE, etc.)
Now, in regards to software distribution, it would not be feasible for RedHat to multicast a 600 MB ISO using the Internet multicast backbone, as each provider that wanted access to that data would also subject their providers, and their providers' providers, to receiving that data as well. So essentially you would have 600 MB flying through 6 transit networks to reach you. Imagine the waste of bandwidth. Do you think multicast providers would take this with an enthusiastic grin?
Currently, there are a few providers that use multicast for stream distribution to multiple servers on live events. You can be assured this is the case for large scalable video distribution houses like broadcast.com, possibly Akamai and others. Hope that provides some insight. I'm not an expert, I've just been performing a lot of multicasting research as of late. Cheers.
Re:Blah (Score:1)
Perhaps not as cheap tho
Leo Howell M5AKW
What about CODA, which is built in to the kernel? (Score:1)
Re:A pipe to suck it through (Score:2)
Re:Great... StarBurst or StarDust? (Score:1)
According to my memory of 2 1/2 years ago, it was called StarBurst. But it appears to have renamed itself StarDust.com [stardust.com]. Perhaps Mars (owners of the StarBurst candy) beat them up for the domain name. Or my memory fails me.
Blah (Score:1)
Let's see, 600 Megs at 1200 baud... hhmmmm
KG4JHX
-
you mean like (Score:1)
No, MBONE Actually uses Much LESS bandwidth (Score:4)
The data then branches off from there. This would be quite suitable for updating mirror sites, since
One problem I could see is that this method of distribution for data files (versus video and audio) wouldn't scale well. Imagine one site drops a packet. Well it can't very well start over, since that same packet did possibly reach all the other listening parties. They are all expecting the NEXT packet, not a retransmit.
On a fast, fault-tolerant network (major backbones, and obviously intranets) this works great (we use ImageCast at work to simulcast drive images to multiple systems); the bandwidth used is no more than if a single system were done at a time. But on any network where packet loss and latency are a problem, this would seriously hamper the practicality of the system.
So I say, multicast to a few hundred major FTP mirrors from the master server (redhat in this case), and then good ol' traditional FTP from there.
Apply older technology for unreliable "connection" (Score:1)
I think the idea is good though. Redhat could just keep spooling ISOs at 56k so that even modem users could dip in to it and then maybe at 256k which most cable and dsl subscribers should be able to get and you could have a background process that plucks them off the wire and assembles the ISO on your disk.
That's a good start, but even better would be to use a protocol that assumes packets will be lost, like the one that some tape drives use:

|--A--|--B--|--C--| ... |--N--|--O--|--P--| ... |-A^N-|-B^O-|-C^P-| ...

(where ^ means exclusive-or). Depending on the typical dropout rate, the number of packets checksummed together (here it's 2) could be adjusted. For example, if we're transmitting 1024 packets with 4 packets per parity group, then there would be 256 packets of XORs:
the 1st would be from packets 1, 257, 513, and 769;
the 2nd from packets 2, 258, 514, and 770; ...
the 256th from packets 256, 512, 768, and 1024.
For those with a perfect connection, there is no waiting at all. For those with tolerable dropouts, only a 25% increase in the total size of the transmission, which would of course repeat. You could pick up the channel at any point, (including during the parity portion), and pick up the whole show without waiting to sync (because once 4 of the 5 packets in a parity group are received, the 5th can be reconstructed).
Perhaps a second channel could use different redundancy (like 8 packets per parity group?)
Now that I think about it, when you burn a CD, some sort of redundancy like this is built in, although I don't know the details....
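Here's a small Python sketch of that parity-group scheme, scaled way down from 1024 packets to 8 data packets with 2 parity packets (4 data packets per group); the packet contents and helper names are invented for illustration:

```python
# Parity packet j is the XOR of data packets j, j+2, j+4, j+6 -- the
# same striping as the 1/257/513/769 example above, just smaller.
def xor_packets(packets):
    """XOR a list of equal-sized packets byte by byte."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

N_PARITY = 2
data = [bytes([k] * 4) for k in range(8)]            # 8 equal-sized packets
parity = [xor_packets(data[j::N_PARITY]) for j in range(N_PARITY)]

# Suppose packet 5 never arrived. Its parity group is {1, 3, 5, 7},
# covered by parity[1], so XORing the surviving three data packets
# with the parity packet reconstructs it -- no retransmit needed.
survivors = [data[k] for k in (1, 3, 7)]
rebuilt = xor_packets(survivors + [parity[1]])
```

Since XOR is its own inverse, any single loss per group is recoverable, at the cost of the 25% overhead mentioned above.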
Re:It's been done... (Score:1)
That's an expansion of DOS that would never have occurred to me...
CRC (Score:1)
Interdomain multicast today (and tomorrow...) (Score:3)
For a good survey of the past, present, and future of interdomain multicast routing, try this paper [ucsb.edu].
As others have mentioned, yes, flow control is a problem, and, yes, reliability is a problem. There are many solutions to both problems that have been researched (largely in academia), but flow control can't really be solved if you're trying to distribute RedHat over multicast to half a million people who are on disparate links. Someone a few posts up mentioned the Digital Fountain idea, but neglected to provide a link [digitalfountain.com]. Digital Fountain aims to solve the problems that are being discussed here for exactly the kinds of applications that are being discussed here. The paradigm is that random bytes are constantly flowing from the fountain (the multicaster) and the recipients fill up their buckets with the random bytes until a file is formed. Read their papers for a more mathematically rigorous explanation...
Re:Multicast FTP (MFTP) (Score:2)
It's been done... (Score:5)
Best Regards,
--
Durval Menezes.
Freenet. (Score:2)
#for PART in *.rar ; do echo "$(./freenet_insert CHK@ $PART 2>&1 | grep "Inserted Key" | cut -b 18-) $PART" >>parts.txt ; done
This command will insert all the .rar files into Freenet and record their keys to parts.txt.
An example of inserting a KSK: #./freenet_insert KSK@my_inserted_data parts.txt
Say you only request 25 files at once, and they all come in at 3 kilobytes/sec (everyone else has a 56k modem). You thus download the file at 75kbyte/sec! Of course, most nodes run on fatter pipes, so speeds will be even better.
And did I mention that the files cache and mirror themselves automatically as demand increases?
Re:Multicast FTP (MFTP) (Score:1)
Moderators are slacking (Score:1)
Casper: The Linux-friendly Ghost (Score:3)