Making BitTorrent Clients Prioritize By Geography? 227
Daengbo writes "While I live in S.Korea and have virtually unlimited bandwidth in and out of the country, not all my Asian friends are so lucky. Many of the SE Asian and African countries have small international pipes. Even when a user has a high-speed local connection, downloads from abroad will trickle in.
BitTorrent clients apparently don't prioritize other users on the same ISP, or at least in the same country. Why is that? Is it difficult to manage? If I were to write a plug-in for, say, Deluge, what hurdles would I be likely to come across? If this functionality is available in other clients or through plug-ins, please chime in."
"Prioritizing" (Score:5, Insightful)
Why don't the ISPs help? (Score:5, Insightful)
Re:Stop It (Score:5, Insightful)
Prioritizing by network topology is a better way to put it; that just happens to coincide with physical AND political geography in many cases. In the case where you can get 10Mb over a 10-hop connection, or 8Mb over a 3-hop connection, which do you pick? If you pick the latter, the long 10-hop path stays free, and there is a good chance that two other users can get similar speeds over it, making total throughput (theoretically) 24Mb.
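To make that concrete, here's a toy calculation of the idea. All peer rates and hop counts are made-up numbers, not measurements:

```python
# Toy illustration of the hop-count tradeoff: a rough proxy for the
# network resources a transfer consumes is the rate it carries
# multiplied by every hop it crosses. Peer data is hypothetical.
peers = [
    {"name": "far-peer", "rate_mbit": 10, "hops": 10},
    {"name": "near-peer", "rate_mbit": 8, "hops": 3},
]

def link_cost(peer):
    # Rate carried across each hop on the path, summed over the path.
    return peer["rate_mbit"] * peer["hops"]

for p in peers:
    print(p["name"], link_cost(p))
# The 3-hop peer consumes 24 "link-Mb" of network capacity instead
# of 100, leaving the long path free for other users.
```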
Because it's a pointless thing to do (Score:5, Insightful)
If your ISP's international pipe is flooded, then BitTorrent will automatically prefer local peers, as they'll be the only people who can send you data at a fast enough rate. If you notice local peers you're not connected to, it's most likely just because they've already reached their connection limit.
Most BitTorrent clients will connect to many peers and try to saturate your downstream bandwidth. They don't care where in the world those peers are, as long as they're fast. If, in your part of the world, local peers are faster, then you should end up connected to mostly local peers automatically.
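That fastest-peers-win behaviour is easy to sketch. The peer names and rates below are invented for illustration:

```python
# Minimal sketch of rate-based peer selection, the behaviour
# described above: keep whoever delivers bytes fastest, wherever
# they happen to be. Peer names and rates are hypothetical.
def pick_fastest(peer_rates, keep):
    """Return the `keep` peers with the highest observed download rate."""
    return sorted(peer_rates, key=peer_rates.get, reverse=True)[:keep]

rates_kbps = {"local-a": 900, "local-b": 750, "overseas-a": 60, "overseas-b": 45}
print(pick_fastest(rates_kbps, keep=2))
# With a flooded international pipe, the local peers win on rate
# alone -- no geography logic needed.
```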
Re:"Prioritizing" (Score:5, Insightful)
Just remember - prioritizing is one thing, but it's a slippery slope to peer exclusion.
Not really. Prioritize who you request data from, but not who you send it to. If all incoming requests are treated equally, but you only request from the optimal peers, you get all the benefits without any of the omgcensorship fud.
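A rough sketch of that asymmetric policy, with hypothetical peer names and nearness scores: rank only outgoing requests, and serve incoming ones strictly in arrival order.

```python
# Sketch of the asymmetric policy suggested above: prioritize whom
# you REQUEST pieces from, but treat all incoming requests equally.
# Peer names and nearness scores are made up for illustration.
from collections import deque

def choose_request_targets(peers, nearness):
    # Outgoing requests: prefer topologically near peers (lower score).
    return sorted(peers, key=lambda p: nearness[p])

def serve_queue(incoming_requests):
    # Incoming requests: strict FIFO -- no peer is excluded or demoted.
    q = deque(incoming_requests)
    return [q.popleft() for _ in range(len(q))]

print(choose_request_targets(["kr-peer", "us-peer"], {"kr-peer": 1, "us-peer": 9}))
# Near peer is asked first, but both peers' requests get served in order.
```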
Re:Geolocation libraries (Score:3, Insightful)
Re:Stop It (Score:2, Insightful)
I'm, I'll, can't, it's?
Autonomous System Number. I don't think it helps much either way. You either know what it means or you don't. Also, Border Gateway Protocol.
Won't work (Score:3, Insightful)
There is little to no support for multicast by last-mile ISPs.
It would be nice - ISPs keep bitching about how P2P is eating their bandwidth, but they don't bother implementing multicast which would make P2P use a fraction of the bandwidth it currently does.
Admittedly, in addition to lack of support, IPv4 multicast is pretty "meh" - there aren't many multicast addresses available and I have yet to see a good way of choosing/assigning them on a global network.
Re:Won't work (Score:3, Insightful)
It would be nice - ISPs keep bitching about how P2P is eating their bandwidth, but they don't bother implementing multicast which would make P2P use a fraction of the bandwidth it currently does.
I believe you mean P2P uses a fraction of the bandwidth it would if we had multicast. I have a constant upstream of 200k/s to 5 clients, each getting 40k/s down from me. With multicast, I could have a constant upstream of 200k/s to 5 clients, each getting 200k/s from me. That means I would send 200k/s up; my router would route it for about 6 hops, then send duplicate packets out to 5 different routers from that hop, which would each send 200k/s down to various points on the Internet. Instead of consuming 200k/s + 200k/s, I'd consume 200k/s + 1000k/s. Further, if 100 people want to download that file at once, I can handle a 200k/s transfer to all 100 of them; three other people can do the same, anyone pulling from those three gets 600k/s, and we really clog the tubes.
It'd be faster, but it'd chew a hell of a lot more bandwidth at once. The DoS potential would be massive.
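For what it's worth, the parent's arithmetic checks out. Spelled out with the same numbers:

```python
# Verifying the unicast vs. multicast numbers from the parent post.
up_kbps = 200
clients = 5

# Unicast today: one 200k/s upstream split five ways.
per_client_unicast = up_kbps / clients           # 40 k/s each
total_unicast = up_kbps + up_kbps                # 200 up + 200 down = 400

# Multicast: routers duplicate the stream, so every client receives
# the full upstream rate and total downstream is five times it.
per_client_multicast = up_kbps                   # 200 k/s each
total_multicast = up_kbps + up_kbps * clients    # 200 up + 1000 down = 1200

print(per_client_unicast, total_unicast, per_client_multicast, total_multicast)
```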
Re:uTorrent (Score:5, Insightful)
That just means that ws_ftp's GUI is a pile of shit, it doesn't mean that GUIs are inherently slow.
Re:ISP's are against local serving (Score:3, Insightful)
Extending a business partnership with them, and convincing them that they CAN allow users to serve content without choking their already oversold bandwidth
This is fairly easy to do right now with US ISPs... it's called a "business account".
They do cost more, but in my case I get 5 static IPs, guaranteed bandwidth, and no interference of any kind with my data (no port blocking, no throttling, and no caps) for $140. Compared to the non-business version with one dynamic IP, port blocking and some throttling (but no caps yet) at $70, it's not a bad deal.
Re:uTorrent (Score:3, Insightful)
Re:Azureus already has a plugin for this (Score:3, Insightful)
The "barrage" of pings may not have been necessary. A good first step is simply running the IPs through a geolocation database. There are various free ones available, and they're effective at narrowing things down: very good at getting the country right, and decent at getting you to at least a nearby city.
From there, if you need finer precision, a single ICMP echo is enough to estimate the number of hops to a host by checking the TTL of the reply. Combine these two things and it shouldn't take much work to get a list of close-by IPs.
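The TTL trick looks roughly like this. A hedged sketch: it assumes the remote host started its TTL at one of the common defaults (64, 128, or 255) and that nothing rewrote the TTL in transit.

```python
# Estimate hop count from the TTL of a received ping reply.
# Hosts typically send with an initial TTL of 64, 128, or 255, and
# each router decrements it by one, so the nearest common default at
# or above the observed TTL gives the path length.
COMMON_INITIAL_TTLS = (64, 128, 255)

def estimate_hops(received_ttl):
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

print(estimate_hops(52))   # likely 12 hops from a TTL-64 host
print(estimate_hops(115))  # likely 13 hops from a TTL-128 host
```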
If you're considering connecting to 1000 hosts, it would take just a few seconds (or a fraction of a second) to run them all through a geolocation database, and then a few more seconds to send off the thousand pings (1000 pings only amount to about 56KB of data, though you might want to pace them rather than fire them all at once).
All in all, I don't see why you couldn't evaluate all the IPs provided by the tracker in a matter of seconds, getting both geographic and network distances to each one. That should then be more than enough information to make decent guesses about what the best IPs to connect to are.
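Putting both signals together might look something like this; the peer list, countries, and hop counts are all invented:

```python
# Rank a tracker's peer list by country match first, then by the
# estimated hop count -- the two signals described above. All peer
# data here is hypothetical.
def rank_peers(peers, my_country):
    # False sorts before True, so same-country peers come first,
    # with nearer (fewer-hop) peers ahead within each group.
    return sorted(peers, key=lambda p: (p["country"] != my_country, p["hops"]))

candidates = [
    {"ip": "a", "country": "KR", "hops": 4},
    {"ip": "b", "country": "US", "hops": 14},
    {"ip": "c", "country": "KR", "hops": 7},
]
print([p["ip"] for p in rank_peers(candidates, "KR")])  # ['a', 'c', 'b']
```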