How Would You Make a Distributed Office System?

Necrotica writes "I work for a financial company which went through a server consolidation project approximately six years ago, thanks to a wonderful suggestion by our outsourcing partner. Although originally hailed as an excellent cost-cutting measure, management has finally realized that martyring the network performance of 1000+ employees in 100 remote field offices wasn't such a great idea after all. We're now looking at various solutions to help optimize WAN performance. Dedicated servers for each field office are out of the question, due to the price gouging of our outsourcing partner. Wide area file services (WAFS) look like a good solution, but they don't address other problems, such as authenticating over a WAN, print queues, etc. 'Branch office in a box' appliances look ideal, but they don't implement WAFS. So what have your companies done to move the data and network services closer to the users, while keeping costs down to a minimum?"

  • erm.. (Score:5, Funny)

    by Anonymous Coward on Monday January 21, 2008 @06:42PM (#22131542)
    Or, in other words, how do I put servers in branch offices without putting servers in branch offices?

    If you solve that one, let me know... it's been bothering me for a while too...
    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Monday January 21, 2008 @07:03PM (#22131794) Journal
      Either consolidate your servers, or don't.

      Exactly what costs were you thinking of saving by consolidating? If it's just the cost of building and maintaining those physical servers, then here is the cold, hard truth: You are paying less for less service. Put servers at each branch office if you'd rather pay more for more service.

      You get what you pay for.

      Now, if it's other problems that are keeping you from setting up those dedicated boxes, realize that these are other problems. Identify them, and bring them back to Ask Slashdot. We're Slashdot, we're not psychic.

      If it's your outsourcing partner gouging prices, dump them for an outsourcing partner which doesn't gouge prices, or do it in-house.

      If it's the inability to manage all those servers, get them to talk to each other, etc, that's a more interesting technical problem that Slashdot might be able to help solve.

      There are a few exceptions -- you might be able to get away with something like Coda or AFS, though I don't know how well that scales to crappy bandwidth. But if so, that would imply that your only problem is managing strictly filesystem data -- it doesn't help at all if the problem is access to, say, an intranet webapp. So again, we need details, if we are to find the clever exceptions.

      Otherwise, upgrade your bandwidth, and/or outsource your actual application servers to someone who can scale. If it's just web/email/docs, Google can do that. Otherwise, find someone who specializes in what you're doing (our SVN is run by cvsdude.com), or bite the bullet and buy some virtual servers.
      • Call Citrix.
      • You get what you pay for.
        In my experience, that's exactly the opposite of what most executives think about IT.
        • by OnlineAlias ( 828288 ) on Monday January 21, 2008 @07:28PM (#22132046)

          I'm an executive in IT with almost 20 years in. I have learned, without a doubt, that in IT what one pays is usually quite unrelated to what one gets.
          • Re: (Score:3, Interesting)

            Well, sure, if you have to deal with Microsoft and people
            that worship Microsoft. If that is not the case, then
            maybe you don't get what you pay for because you don't
            have the budget to hire good people.
            • May I just ask why you insert your own newlines into your comments? I have seen a few people do that online, and I had to read your comment about 8 times to get any sense out of it (I'm guessing because newlines usually signify a new idea or sentence, or maybe I just need some coffee)

          • In software and package deals, maybe, but in hardware?
          • by Ajehals ( 947354 ) on Monday January 21, 2008 @08:40PM (#22132664) Journal
            I totally agree.

            In my experience the only way to ensure value comes down to the processes involved in the planning, acquisition and implementation of any given project.

            Ensure you have a process for identifying the requirements of any new service or equipment acquisition, and do it without focusing on a specific system or product. If you limit yourself at the outset because you have formed a preconception of what you think you need, or you simply copy what others have done before, you will not get a solution that meets your needs.

            Acquisitions of any type should always solve a business problem, whether you are addressing poor or suboptimal communications, the lack of external access, the rigidity of an existing system, scalability, security or stability issues, or the lack of proper redundancy and disaster planning. You should not be buying things for the sake of it, or because someone simply thinks it might be a good idea; most of all, don't buy things because other people have them. Justification is everything. Otherwise you end up with things you don't need or want (but need to support) that provide no business benefit but do drain budgets, which in turn makes it harder to address real issues. The identification of problems should come from within the business (that's what management is there for, to a degree) or from independent consultants brought in for that purpose; it should never come from a vendor who (as it happens) also provides a solution. If a vendor makes a suggestion, then assess the need and see if there is a business requirement, but do it independently.

            Make sure you have a decent tendering process when you are sourcing equipment or services (for smaller businesses, that basically means you need to shop around, and tell your existing suppliers that you are doing so). Make sure that there is input not only from management and finance but also from end users and IT staff (sounds basic, but it's not always the case...). You should also have a well thought-out budget (after all, you are solving a problem, and problems should be quantifiable in cash terms); stick to it.

            I don't even want to think about the number of times I have seen needless upgrades, additions, and total changes to IT infrastructures for no good reason and, more importantly, with no real benefit. Resist it if you can (but don't resist change for the sake of resisting change; that is just as bad as doing the opposite).

            As the parent suggests, price is not an indicator of performance. If your specifications and requirements are met and you are within budget, then great; if you are under budget, then you are ahead of the game! With that in mind, though, do thoroughly check out your suppliers (it's inexpensive and easy enough to do). If a supplier is cheap and has a bad reputation, avoid them, and make sure your suppliers can deliver before you sign contracts. Sure, you may be able to sue them after the event (if you have all the information and the budget to do so), but it will be much cheaper to get it right the first time.

            Finally, I have found that the law of diminishing returns seems rather applicable to IT: as things get more and more expensive, the benefit from obtaining them becomes less and less. For example, an email system of some kind is a necessity in most businesses, and generally speaking they are fairly inexpensive (relatively, at least), whilst electronic whiteboards (my pet hate) or upgrading cat5 to cat6 cable (without changing anything else - something suggested to me by a vendor recently to improve network performance...) bring only marginal benefits but are relatively expensive.

            Hmm, that was probably all totally off-topic - never mind.
            • As the parent suggests, price is not an indicator of performance.

              While that is true, as soon as you find an IT guy who has as much expertise as the parent post here, you do want to pay them quite a bit to retain them.

              For example, a email system of some kind in a necessity in most businesses and generally speaking they are fairly inexpensive (relatively at least), whilst electronic whiteboards (my per hate) or upgrading cat5 to cat6 cable (without changing anything else, - something suggested to me by a ve

          • It's funny because it's true...
          • I have learned, without a doubt, that in IT what one pays is usually quite unrelated to what one gets.

            Furthermore: The quality of software is often related to the size of the software's userbase.

            That $10 million ERP package designed specifically for your industry? You'll be the very first person to hit hundreds of bugs. Guaranteed.

            • by funfail ( 970288 )
              rcw-home: You'll be the very first person to hit hundreds...
              Lex Luthor: (interrupts) Thousands!
            • If only someone would point that out to Microsoft... the most obvious exception to your rule. I'd say the size of the userbase is more likely to be related to the quality of the software; that's what you'd hope, at least. For some reason, the larger a userbase becomes, the worse the software gets, as the creators try to expand on it just for the sake of expanding and making more money.
      • by Lumpy ( 12016 ) on Monday January 21, 2008 @09:15PM (#22132880) Homepage
        Exactly. The moron IT director at my last company decided to consolidate and had us remove all dedicated servers at the offices. We "saved" money.

        Then six months later, we had a T1 outage to one of our larger offices, and that office ground to a halt. No BDC, file server, or print server means that as long as the T1 is offline, that entire OFFICE IS OFFLINE. Zero work was getting done, and we spent 5X what we saved by consolidating to undo what he had us do. It is the wrong thing not to have servers in every office. You have to plan for outages, and the performance of having a local server cannot be beat. (Well, you could have OC3s installed to each office, or run fiber to every office from your central location; 1000Mbit point-to-point fiber connections would do it...)
        • by duffbeer703 ( 177751 ) * on Monday January 21, 2008 @11:22PM (#22133606)
          The problem is that regulatory/compliance issues make it difficult to place resources in the field, because they are difficult and costly to maintain. One lost backup tape could be a real disaster. You have to balance business needs against cost, security, etc. There's no "one size fits all" solution.

          Here's how we're moving ahead with centralization in a large distributed environment with about 50,000 users and 1,000 branches. We're reducing the server count by about 40%, and the cost by 70% versus a couple of years ago:
          - Most sites with 10-75 people get a headless, stripped down box (~$2,000) that runs our desktop management software
          - Medium/Large sites (75+) get a file server, which fulfills some other roles as well
          - Large and VIP sites get a domain controller, mainly for availability purposes.
          - A few "very large" (800+) sites get a 100Mbps WAN connection and use the data center services.

          We looked at a few other solutions, with mixed results:
          - WAFS/WAAS looked great, but the solution cost was almost the same as rolling out servers. Additionally, most of our applications are "thin" already, so we weren't really gaining much.
          - Distributed AD servers are purely an availability play. (If your circuits/core servers are sized correctly)
          - NAS also looked promising, but the cheap solutions weren't very manageable at our scale, and the manageable solutions weren't cheap.
          - No backups are done on site; we're rolling out a distributed backup system that de-dupes data globally and backs up to a data center. If you're using old backup software like TSM, Legato, etc., you MUST shop around; the newer solutions are way, way better and probably have lower administrative costs.
          - Networks are getting faster and cheaper. We're seeing 3Mbps connections available to replace 512k frame relay connections at a slightly lower cost. We'll be switching as our network infrastructure gets upgraded.
          - If your network supports it, multicast can make it much cheaper and easier to provision your workstations. Most management tools (Altiris, SMS, Tivoli, LANDesk, etc) support it.
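
          For what it's worth, that multicast point is easy to prototype. Below is a minimal Python sketch of the idea: one datagram from the provisioning server reaches every joined workstation as a single copy on the wire. The group address, port, and TTL are placeholder assumptions; real provisioning tools (Altiris, SMS, etc.) layer reliability and sequencing on top of this.

            import socket
            import struct

            # Placeholder group/port; pick these to match your network policy.
            MCAST_GRP, MCAST_PORT = "239.1.1.1", 5007

            def receiver():
                """Join the multicast group and print incoming provisioning chunks."""
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                sock.bind(("", MCAST_PORT))
                mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
                sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
                while True:
                    data, addr = sock.recvfrom(65535)
                    print(f"got {len(data)} bytes from {addr}")

            def sender(payload: bytes):
                """Send one datagram that all joined receivers get at once."""
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
                # TTL > 1 so the datagram can cross routers between branch subnets.
                sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
                sock.sendto(payload, (MCAST_GRP, MCAST_PORT))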

      • There are a few exceptions -- you might be able to get away with something like Coda or AFS, though I don't know how well that scales to crappy bandwidth.

        You realize that AFS was designed in the late 80s, when all bandwidth was crappy?
        • In the late 80s, we were sharing little ASCII files, not big powerpoint presentations. And we were talking about a much smaller "scale" in terms of the sheer number of machines.

          That does give me a bit more confidence in at least giving them a shot if I end up needing them, though.
    • by markov_chain ( 202465 ) on Monday January 21, 2008 @08:42PM (#22132674)
      Just have everyone telecommute to the central office. Problem solved!
  • Global file system (Score:4, Interesting)

    by Colin Smith ( 2679 ) on Monday January 21, 2008 @06:42PM (#22131548)
    Such as OpenAFS.

    Something like Coda might be nicer, but progress on global filesystems seems to have pretty much stalled.
     
    • It's a dead FS (Score:2, Informative)

      by emj ( 15659 )
      It's a no-go. OpenAFS plus Kerberos is a very nice idea, but it doesn't work; the client software for most platforms is very bad.
    • Re: (Score:3, Interesting)

      by tgatliff ( 311583 )
      Better idea... IPCop... You could put out a bunch of low-cost servers and run a VPN gateway to each remote office with IPCop. It is very doable and the most cost-effective way there is...

      • by bakes ( 87194 )
        I'm using IPCop for this - connecting 2 remote offices back to the central office. IPCop works well, is reliable, and simple to set up. Will have another 2-3 nodes added to the network this year.

        It doesn't solve the OP's REAL problem though - this infrastructure (or OpenSWAN, or OpenVPN, or similar) provides an interconnection between the offices, but what next? Do you get everyone in the remote offices to use terminal servers in the head office? Or do you put servers in each office and have t
        • Re: (Score:3, Interesting)

          by snuf23 ( 182335 )
          Just a question but on Windows couldn't you use DFS for file replication? Or does that not work in a WAN situation...
  • Financial. Liability.
    • Don't askslashdot.

      The only responsible answer to this question is to bring in someone who has a track record of fixing problems like this. Don't expect to get a reasonable answer from a sketchy problem definition in a place like Slashdot.

      • I agree, to a point. Slashdot cannot produce guaranteed-reliable information. However, the information produced by an Ask Slashdot article can lead to insight or serve as a staging point for further research. With a modicum of effort, the information from this site could even aid in evaluating an expert; after all, technical experts do frequent the site. (I consider myself one, although this is outside my area of expertise.)

        Identifying those experts is left as an exercise to the reader ;)

  • No Good Solution (Score:5, Interesting)

    by maz2331 ( 1104901 ) on Monday January 21, 2008 @06:44PM (#22131580)
    There is no good and cheap solution to this one.

    You can try the application accelerators that are out there now from Cisco. They basically use smoke and mirrors to keep traffic off the WAN and act as local proxies for different services.

    Otherwise, your choices are limited. Citrix servers would be good for some apps, but get god-awful expensive fast. And an organization too cheap to build out a decent system to begin with isn't likely to make the investment in writing efficient apps.

    If you're running on slow lines, bump them to at least fractional T3.

    It sounds like the system was designed to serve 5 gallons of water through a swizzle stick. Ain't gonna work unless something is radically changed.

    Or better....

    Fire the outsourcing partner and the management that buys their bull, and build out a proper distributed architecture.

    • by wish bot ( 265150 ) on Monday January 21, 2008 @06:46PM (#22131620)
      He should tell us who their outsourced partner is. This sounds very similar to a strategy I'm hearing about for our company right now.
    • Re:No Good Solution (Score:5, Interesting)

      by chappel ( 1069900 ) on Monday January 21, 2008 @06:56PM (#22131716) Homepage
      I was really impressed with the improvements we got by implementing some 'smoke and mirrors' from Riverbed (http://www.riverbed.com/). Granted, we've got some reasonably adequate bandwidth to start with, but it dropped the WAN traffic to our large (500-user) remote site by a good 80%. They seemed mighty expensive for a plain Dell server with CentOS, but there's no arguing with results. /reminds self to look into Riverbed stock
      • by bhmit1 ( 2270 ) on Monday January 21, 2008 @07:38PM (#22132148) Homepage
        I've done a light evaluation of Riverbed's Steelhead appliances in the past (less from the efficiency standpoint and more for manageability). To call it a Dell server with CentOS is an understatement, since there's a lot of software intelligence intercepting various protocols and caching the data that may be transmitted. Handling file locking, handling multiple email recipients of the same large attachment, and staying transparent to the network aren't easy problems to solve at the protocol level, so I'd say they deserve a few kudos. They weren't a simple WAFS: multiple protocols were included, it would simulate the reply from the remote server when possible, and all traffic to another data center or office with a Steelhead would be compressed regardless of protocol (it's been a few years, so feel free to double-check those facts). I believe they also included some physical bypass hardware so that if the box completely died or needed to be rebooted, you wouldn't lose your network. All in all, I thought it was a nice solution. And no, I have no affiliation with the company.
        • Re: (Score:3, Interesting)

          by Amouth ( 879122 )
          I am wondering... it sounds like they did a good job, but from the upstream provider's view, what do the access logs look like? If the transparent proxy is acting as a middleman for the client, does it pass info upstream for the logs?
          • by bhmit1 ( 2270 )
            From my impression of how it worked, yes, you'll still see your logs. Even when it preempts a reply from the server, the request is still sent to the server, but you may get your simulated reply before the server generates it, so your log timestamps may be a tad off. Otherwise, it's just doing intelligent compression by looking at the protocol when possible, and doing general compression when an unknown protocol is being used. They even compress SSL data if you're willing to give it your encryption keys (
    • Re: (Score:3, Insightful)

      by Tuoqui ( 1091447 )
      I'd mod parent up if I had the points...

      Yes, fire the damn outsourcing partner. They obviously did not have your needs in mind when they suggested it. Most likely they thought they could save themselves money by having one location to go to when shit goes wrong.
      • Re: (Score:2, Insightful)

        by bepo ( 709117 )
        Or, more likely, they have A solution. It doesn't matter what the problem is; they are going to shoehorn their one solution in to fit it.
    • by eazeaz ( 1224430 ) on Monday January 21, 2008 @07:07PM (#22131824)
      We use Riverbed appliances at all our remote offices. They take about an hour to install and are damn near like magic. I just pulled some statistics from one of our remote offices: over the last 30 days, we had a reduction in data flow of 95%; 6.3GB of data went over the T1 instead of 129.3GB. We can run applications over a T1 and users do not know that they are not local. They allowed us to go from DS-3 to T1 lines without any user complaints.
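
      A quick sanity check on those figures in Python (the GB numbers are from the post above; the line rate below is the standard 1.544Mbps T1):

        # Check the claimed reduction and what it means on a T1 (1.544 Mbps).
        sent_gb, original_gb = 6.3, 129.3      # figures quoted above
        print(f"reduction: {1 - sent_gb / original_gb:.1%}")   # ~95.1%

        t1_bytes_per_sec = 1.544e6 / 8         # ~193 KB/s of raw line rate
        for label, gb in (("raw", original_gb), ("optimized", sent_gb)):
            hours = gb * 1e9 / t1_bytes_per_sec / 3600
            print(f"{label}: {hours:.0f} h of T1 line time")   # ~186 h vs ~9 h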
    • by 222 ( 551054 )
      For what it does, the Cisco solution (Wide Area Application Services) is actually pretty affordable. It's more than just smoke and mirrors imho. Using DRE (Data Redundancy Elimination, a sort of digital shorthand), working outside the TCP spec for larger packet sizes (requires an appliance at each site) and as you mentioned, caching of local files, I've managed around a 2x increase in bandwidth efficiency since rolling it out across 5 locations. When I look at what it would actually cost to double my networ
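
      DRE is essentially chunk-level deduplication shared between appliance pairs. Here is a toy Python illustration of the core idea; it is not Cisco's actual algorithm (real DRE uses content-defined chunking and a persistent on-disk signature cache), just a sketch of why repeated payloads shrink to almost nothing on the wire:

        import hashlib
        import os

        CHUNK = 8 * 1024     # fixed 8 KB chunks for the toy example
        peer_cache = set()   # signatures the far-side appliance is assumed to hold

        def dre_encode(data: bytes):
            """Emit ('raw', chunk) for unseen chunks, ('ref', digest) for repeats."""
            out = []
            for i in range(0, len(data), CHUNK):
                chunk = data[i:i + CHUNK]
                sig = hashlib.sha1(chunk).digest()
                if sig in peer_cache:
                    out.append(("ref", sig))    # 20 bytes stand in for up to 8 KB
                else:
                    peer_cache.add(sig)
                    out.append(("raw", chunk))
            return out

        # The same attachment sent twice: the second pass is all references.
        doc = os.urandom(200_000)               # a fresh 200 KB attachment
        for attempt in ("first", "second"):
            raw = sum(len(c) for kind, c in dre_encode(doc) if kind == "raw")
            print(f"{attempt} send: {raw} raw bytes on the wire")  # 200000, then 0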
    • Re: (Score:3, Funny)

      by davidsyes ( 765062 ) *
      One good stragety is to add oil to the pipes. You know, to increase teh horsespowers, you have to add more viciouscosity to pump the datas through the tubes.

      Your Senator...
    • by mchawi ( 468120 )
      I agree with a lot of the posts that said without knowing your exact infrastructure (data, bandwidth, office size, budget, etc) it would be difficult to give accurate answers that aren't overkill.

      For all of our branch offices we use Packeteer iShared/iShaper devices with a larger box at the hub. This allows for WAFS, AD/DNS/DHCP/DFS, compression and traffic management all from one box. It isn't going to be cheap and it is a server at the branch office, but we find we save enough in bandwidth and backup ta
  • by Anonymous Coward on Monday January 21, 2008 @06:46PM (#22131612)
    Financial companies, at least in my State, have very specific requirements for storing and transmitting data. Without knowing what your specific needs are, I have no answer other than "Define your problem".

    The reality is that other companies like yours exist and probably function better. If that is indeed the case, perhaps a friendly lunch with another IT staff member might help you.

    I've consolidated offices and I've also pushed out servers to remote offices. It all depends on the needs of the client. Examples:

    1. Client wanted 99.999% uptime and the only way I could get that was to have their servers in a data center. We moved them and uptime has been great.

    2. Client wanted fast file access. We set up DFS with Windows 2003 over a WAN link (T1), and the client has never been happier.

    So, to answer your question, it depends on your needs.
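
    Quick arithmetic on the 99.999% target in example 1: each extra nine cuts the annual downtime budget roughly tenfold, and five nines leaves about five minutes a year, which is why it effectively forces a data center.

      # Annual downtime allowed by common availability targets.
      MIN_PER_YEAR = 365.25 * 24 * 60

      for target in (99.9, 99.99, 99.999):
          allowed = MIN_PER_YEAR * (1 - target / 100)
          print(f"{target}% -> {allowed:7.1f} min/year")
      # 99.9% -> ~525.9 min (about 8.8 h); 99.999% -> ~5.3 min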
    • Re: (Score:3, Informative)

      by OzRoy ( 602691 )
      We used DFS as well. When it works, it works really well. Unfortunately, it does seem to be a bit temperamental sometimes, so you have to keep an eye on it; if it gets out of sync, it can take ages to catch up. The other disadvantage is that there is no file locking between sites, so it is possible for one user to overwrite the changes made by a user at another site. While you can retrieve this data, it can't be done by the user, and it's up to the user to realise what has happened. We have also found its reporting
      • I'll second that. I've also had to do complete rebuilds a few times. 2003R2 seems to have improved the situation a bit, but overall the reliability of DFS is still rather flaky. (This is running over very reliable fractional T3s and T1s, btw.) Since then we've started to move to Citrix WANScalers (previously Orbital, I believe). Haven't had a bit of trouble with them yet, and they really speed things up. They're basically Dells with CentOS plus their magic software.
  • Hmm (Score:5, Insightful)

    by moogied ( 1175879 ) on Monday January 21, 2008 @06:46PM (#22131618)

    Dedicated servers for each field office is out of the question, due to the price gouging of our outsourcing partner

    Find a new partner.

    • Re:Hmm (Score:4, Insightful)

      by MightyMartian ( 840721 ) on Monday January 21, 2008 @06:55PM (#22131710) Journal
      No kidding. This sounds to me like someone somewhere sold this guy's company down the river. The short answer is that there's no cheap solution. Any way you look at it, there are two choices: beefing up the lines or getting new servers. I can't speak to the costs of the former, but I'll wager that for what this guy needs, the latter is going to be cheaper.

      In short, this guy had better tell management to get out their chequebooks, because the stupidity of trying to save a buck by cramming a Buick through a pinhole was a costly mistake with only one solution: putting in lots of money.

      To my mind, unless the branch offices are really small, I think servers in each are in order.

      I'm the network admin for a company with three offices: a main branch with about 25 workstations, a branch with 7 workstations, and one with a couple. Because of the flakiness of connections, I can't rely on VPN. In the larger branch I have a Win2K AD domain controller running all the local apps, with some mirroring of the file store. Still, the branch office can function even if the VPN goes down. For the smaller office, we have some Terminal Services licenses. It does mean that if the VPN goes down, they're hosed. If it gets bigger, I'll put a server in. To keep costs down, I'll probably just put a Samba server in place.
  • Put WSUS servers at the offices to keep update bandwidth down.
    • Re: (Score:3, Insightful)

      by nick0909 ( 721613 )
      WSUS servers out at all locations are fairly costly, as each requires a decent server and Win2K3. That could be a lot of extra hardware and licenses to buy/support. Unless your company needs to run full bandwidth 24/7, just schedule your updates for the middle of the night, and it doesn't matter that there is only one server pushing them out. I currently do this for my company, which has 30 branches, half overseas, and all on slower connections than I would like. Windows Updates are the lowest bandwidth concern of mine n
    • by nurb432 ( 527695 )
      Updates will be the least of this guy's problems
  • Amazing (Score:5, Insightful)

    by obeythefist ( 719316 ) on Monday January 21, 2008 @06:48PM (#22131642) Journal
    Some basic truths.

    IT costs money. I'm sorry that your outsourcer had some bad ideas. But your management must understand that IT services aren't free, and the health of your company depends on its infrastructure.

    Without knowing the specifics, the only low-cost suggestion I can provide is converting desktop PCs into Linux servers, thus providing you with the distributed server network you need. Of course, the boxes will be underpowered and fall over all the time (yay desktop hardware), but if you really want to cut costs, there you have it. For backups, put in an extra hard disk and back up to disk; it beats nothing at all.
    • Re: (Score:2, Interesting)

      by sco_robinso ( 749990 )
      Agreed. I actually work for an IT outsourcing company. We don't gouge by any means, but we always come to the table with the 'top drawer solution' right off the mark. If the customer wants XYZ results, we tell them exactly what they need to get there and stay there for a 3-year period. If they don't like the costs, fine by us; we'll put in whatever they want or can afford. But if they come back to us in 6 months or a year and say the solution isn't delivering the expected results, we can always fall back on
  • Pixie dust (Score:5, Funny)

    by c0d3h4x0r ( 604141 ) on Monday January 21, 2008 @06:55PM (#22131708) Homepage Journal
    Think happy thoughts, and sprinkle some pixie dust over your IT infrastructure, and all your problems will be solved.

    But whatever you do, don't fire your incompetent outsourcing partner or actually invest in beefing up your IT resources. Both of those paths are DOOMED, DOOOOOOMED, I say!

  • by magarity ( 164372 ) on Monday January 21, 2008 @06:55PM (#22131714)
    Dedicated servers for each field office is out of the question ... such as authenticating over a WAN, print queues, etc
     
    Print queues over the WAN is taking the consolidation thing a little to the extreme, isn't it? Login authentications and print jobs really want to be local. Sorry about your predicament, but you're going to get a lot of comments telling you to switch outsourcers or bite the bullet on their prices. What is the other traffic (as if that isn't bad enough)? One assumes email, but are there big apps hosted on remote servers with lots of data traffic to DB servers and the like? Simple document file sharing shouldn't be that much of a problem, or is it? You're going to get a lot of guesses without knowing the exact needs of your remote traffic. Good luck!
    • You're going to get a lot of guesses without knowing the exact needs of your remote traffic. Good luck!


      We've all got the excuse that we don't know what exactly this guy or his company needs. The question I'd be posing is why the partner didn't, because, regardless of what the next step is, I'd be giving them a swift, unceremonious kick out the door.
  • Having your Cake (Score:2, Insightful)

    by deadeye766 ( 1104515 )

    and eating it too? Is it just me, or is this one of those situations where upper management makes a design decision from something they glanced over in some IT mag, then decided to implement without consulting anyone with any IT background?

    I don't see how you can create an insanely diffuse network, then turn around and expect it to perform like a network that has a centralized "HQ" with file services etc and a fat WAN connection.

    Of course, you could just ask the execs to spring for ~100 WAN accelerato

  • Too little too late (Score:5, Interesting)

    by armada ( 553343 ) on Monday January 21, 2008 @06:56PM (#22131726)
    I suggest you pay more attention to the data itself. Do a comprehensive and brutally unbiased audit of what data/resources are needed by whom. You would be amazed at how much of your infrastructure is either superfluous or capricious. Once you do this, you at least have a smaller mountain to climb.
  • Just follow this simple formula:

    1. Call your helpful friends in Distributed Applications at Google;
    2. Let Google's gnomes install distributed apps branded with your company's logo;
    3. ???????
    4. Profit!

    Any application that won't run in a Firefox window is unneeded and merely distracts from the company's core mission. You won't believe how much of a performance boost you will get when you shut down those apps.

  • We don't know which country you're in (and hence which set of regulations you have to adhere to).

    We don't know how much data needs to be made available to each office - is it everything? Or is it just a different subset of the total in each office?

    We don't know if you're talking about megabytes, gigabytes or terabytes of data. We also don't know how much that data changes on a daily basis.

    We don't know if there are any existing factors to consider - be they political or technical (eg. "management almost c
  • WAN Accelerators (Score:4, Informative)

    by mark99 ( 459508 ) on Monday January 21, 2008 @07:07PM (#22131834) Journal
    Check out Riverbed, Cisco, and many others. Basically they do caching, compress traffic, do TCP/IP traffic control the way it should be done (with the hindsight of 30+ years of experience), and some application-specific round-trip optimization (some even do voodoo optimization :).

    Not cheap - but easy.

  • I have been looking at this product for a similar situation I am in: http://www.packeteer.com/products/ishaper/ [packeteer.com]
    Basically it is a WAFS box, with WAN traffic shaping, caching, etc, plus it acts as a Domain Controller, print server, authentication, dns/dhcp, etc.
    If it works like they say it will, it would be a good solution for you based on the problem description. Basically it is a server, plus WAFS, without being a server...
    I wonder if anyone here has some hands-on experience they could share?
  • Might well be a nice solution, assuming that your remote users are frequently throwing large files around.
  • ICA, RDP, and some X variants work well over slow connections. Do applications need to be executed locally, or can you run a farm of application servers with fast connections to the storage? Then put in diskless, fanless thin clients (I typically use Wyse V50s), which are DHCP-configured to give them a config file to load on each startup. This gives you data security (no data is stored locally, or even at a branch office like your situation - someone steals a thin client, you are only out the hardware, application
  • We're a largeish company with one HQ (and associated data center), about 400 field offices, and four regional field service centers. Our approach was to centralize everything but printing, and that means EVERYTHING: people use Terminal Services to go into HQ. This means that once they've done the TS hop, everything is local, because they're accessing their files, running their apps, and accessing databases local to where the terminal server is. Printing is, of course, still done in the office, via
  • by rickb928 ( 945187 ) on Monday January 21, 2008 @07:24PM (#22132006) Homepage Journal
    ... seems to be that your outsourcing partner has you on the Merry-Go-Round. They work it like this...

    1. Propose a WAN-based solution.

    2. When that slows to a crawl, propose a branch server solution.

    3. When that proves to be too expensive to administer, propose a centralized solution.

    4. When that proves to be difficult, unproductive, or slow, propose a branch office solution with accelerators, DFS, and all the goodies.

    5. When that proves too expensive to administer, propose a thin client/remote app solution.

    6. Repeat steps 2-5 as needed, substituting current technology for at least three iterations.

    7. If you still have this client, you may now feel free to propose ANYTHING, including cans and string, or gerbils. They will buy it. Change your technical onsite staff every 6 months, rotating in fresh and untrained candidates. Rotate out those who show promise to be re-deployed at newer clients who are at step 4 or earlier in the process.

    It's kinda sad. Consulting outfits can rarely make a living by doing right by a large client. Sooner or later, they either get replaced when the client starts 'analysing' the operation, or get replaced when some other outfit has a stronger line of bull to offer management.

    Of course, there's incompetence, but my former boss isn't involved. He's busy screwing people in a different business, when he's not busy screwing his employees.

  • There would be two major paths I would investigate.

    If you're in a Windows environment, look at getting Citrix (or something similar) set up. Centralized files, centralized management, and it works very well. The one major issue is printing, although we use a product called Uniprint at work that is fucking fabulous. We went from 60% of helpdesk calls being "reset print spooler" down to 0% when we rolled out Uniprint. Very impressive stuff. We use Citrix at work primarily for our DB-intensive apps (so we
  • Dedicated servers for each field office is out of the question, ...

    Well, how about just an old workstation at each remote site running Linux with Samba (assuming you're supporting M$ clients) and CUPS for file and printing services, while using rsync to synchronize the data with your centralized servers? You can even make additional automatic local backups to disk with things like faubackup or dirvish. It worked for me, and you don't have to use hardware as cheap as I did.

    But seriously, it so
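
    The rsync leg of a setup like that is a few lines to script. A minimal sketch, assuming passwordless SSH from the branch box to the central server; the paths and hostname below are placeholders:

      import subprocess

      # Placeholder paths/host; adjust to your share layout and central server.
      BRANCH_SHARE = "/srv/samba/office"
      HQ_TARGET = "backup@hq.example.com:/srv/branches/office42/"

      def sync_to_hq() -> None:
          """Push the branch file share to HQ over SSH.

          -a         preserve permissions/ownership/times
          -z         compress on the wire (matters on a T1)
          --partial  resume interrupted transfers on flaky links
          """
          subprocess.run(
              ["rsync", "-az", "--partial", "--delete",
               f"{BRANCH_SHARE}/", HQ_TARGET],
              check=True,
          )

      if __name__ == "__main__":
          sync_to_hq()   # typically invoked from cron overnight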

    • A lot of this really depends on what else is going over those lines. If it's just files and email, and maybe lightweight web and database apps, then your solution will work. But there are apps out there (I have to deal with one) that are really disk intensive, and running over any kind of network file system is just plain slow. In that case, you really have to consider running each branch semi-independent with some sort of batch merges to and from the central database. At that point, you have to have a
  • Here where I work, we replaced pretty much all the conventional applications (the ones which are required globally within the organization) with web-based ones. No, it didn't happen from one day to the next.

    We have pretty much everything centralized, except for cases where you simply cannot escape from .doc/.xls/etc. documents and stuff like that. Such cases are processed locally and only the relevant files are sent (either through FTPS or e-mail); SMB shares are not transported over the WAN at all.
    It helps our str
  • What would Google do (Score:2, Interesting)

    by rossy ( 536408 )
    I used to work in the high-tech industry with companies that made lots and lots of money. These companies had the fastest bandwidth and the most creative people coming up with cool solutions to solve problems. But basically, the point was that everyone made lots of money, so if IT infrastructure was a problem, they threw money at the problem, and it was solved... period. Since that time, I have seen general compression of the $$ side of things, the bright people go somewhere else, and the people outsource the sm
  • Sounds like you need another IT partner, at the least.

    And good luck having branch offices with no server. The only way I can think of doing that is 100% terminal services.

    Oh, what's the difference between a "branch office in a box" and a branch server? I bet nil.
    • I'm assuming what "branch office in a box" means is some sort of fileserver/VPN black box. And you're right, it's just a server, but one with some of the legwork done for you.
  • Thank God, I'm not the only one grappling with this problem.

    Astronomical real estate prices in Vancouver have made it difficult to justify consolidating our two offices into one location. So management has come up with the great idea of running our two offices as a single LAN. It sounds like a great idea at first, but when you get down to the nitty gritty it becomes decidedly less practical. We deal with big files and need a speedy ODBC database connection, so our IPSec over WAN tunnel just isn't cutting it
    • The problem with moving infrastructure around is that management quite often only looks at fixed costs like rent, leases, electricity, telephone, Internet pipes, and the like, without considering the work and costs involved in modifying network infrastructure. The other thing I blame is all those computer and business management rags with their bullshit reviews and advertising (is there a difference any more) which make it sound like magic black boxes make all the problems disappear.
    • Management was surprised to find that my estimates of several thousand dollars a month for leasing a dedicated fiber connection were, in fact, entirely accurate.

      If there's one thing that management doesn't like, which horrifies them, which makes them stick their fingers in their ears and yell "LALALALALALA", it's when the IT guy is proven right.
  • Speaking of WAFS, Brocade had a product suite based on an architecture called FANs (file area networks). Originally it was several cobbled-together, disparate bits of software and an "appliance" running Windows Server 2003, though I believe the components that make up Tapestry now look more like they belong together, rather than the way they used to look, where it was very obvious the products were all from different vendors and had different design paradigms. Take a look though, http://www.brocade.com/produ [brocade.com]
  • Well, there are a few ways to make this work. You can set up something like terminal services, or a web portal structure, so that all you're transmitting is presentation-layer stuff, which can be run on less bandwidth. You can make sure the pipes going out to your remote offices are as fast as a LAN would be. There are also some things that can be done with some of the fancier network hardware you can buy from folks like Cisco.

    That said unless your remote offices barely use the LAN, you already have a really f

  • Then they can't price gouge you on the local servers, which is the best idea.

    Seriously though.

    Actually, put WAFS servers or in-router devices in each office with decent-sized disks. They are Linux devices and can be configured to do local auth as well as file and print.
  • "I work for a financial company which went through a server consolidation project approximately six years ago, thanks to a wonderful suggestion by our outsourcing partner. Although originally hailed as an excellent cost cutting measure, management has finally realized that martyring the network performance of 1000+ employees in 100 remote field offices wasn't such a great idea afterall. We're now looking at various solutions to help optimize WAN performance. Dedicated servers for each field office is out of
  • by moorley ( 69393 ) on Monday January 21, 2008 @10:16PM (#22133222)
    Of a good idea that worked well in one area but is not ready for full adoption. Wide area networks have too much latency to simply turn local office systems global.

    Your company is trying to cheat its development model. Rather than set up a distributed IT application, they have simply tried to distribute a small office network worldwide. Look back at the tried-and-true OSI model: 7 layers. The 7-layer model doesn't speak of network file sharing; it speaks of hardware and application. TCP/IP (which we have taken quite for granted) sits around/below the application level. If you have an application that runs at the TCP/IP level, you are good to go.

    I set up distributed systems for several ISPs in the late 90's. We didn't think about what we were doing or why it worked. It looked like we could long-haul anything we wanted. A little lag in sending mail or a few extra milliseconds to authenticate against LDAP is no big thang. The Internet is distributed by nature. Sometimes DNS was a little slow, but that was acceptable for 56k modem and DSL customers. But we spent 2 years working on a central web-based administration/billing/customer support application with 1 SQL base in the center. We didn't distribute the application and have it write to the SQL base directly or move files around.

    But you can't distribute the file layer. SANs in a local building have had some of the same problems: any lag affects all applications, and you solve it by throwing a big fat fiber backbone into the local building, but that breaks down when you try to long-haul over WAN links.

    If your company thinks it can sneak around coming up with a decent workflow model and then implementing it in an application, by simply giving MS Office and Exchange (or whatever they have deployed) to everybody, it is sadly mistaken.

    But worry not. You are not alone. Many business execs scratch their heads as to why they simply can't share out MS Project and their Excel spreadsheets to teams of 25-plus people and have it work fine. You still need to do the leg work of figuring out the workflow and reducing that to a transaction-based system, centrally located. That's it. All we've done in the last 20 years is replace printouts with emails and spreadsheets, and the night operator (a job I used ta do) with scripts (or procedures) that dynamically update or run every 10-15 minutes. You still need a central system, and then you distribute parts of it, or have a slimmed-down interface that everyone can use remotely. Look at how a bank does it: just good ole dumb terminals.

    No magic bullets yet. We need faster broadband and much lower latency before you can share out at the file layer using a network stack meant for transaction-based applications.

    Let yourself off the hook. No mortal IT person can turn this tide....

    You need local servers to reduce the latency. You need some decent thought on the application, not the OS and Office Suite. Good luck!
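
    The latency arithmetic behind that advice is brutal even at modest round-trip counts. The figures below are illustrative assumptions (chatty SMB file opens really can run to hundreds of serialized round trips):

      # Why chatty file protocols die over the WAN: serialized round trips.
      ROUND_TRIPS = 400    # hypothetical count for opening an Office doc over SMB
      LAN_RTT = 0.0005     # 0.5 ms on a local switch
      WAN_RTT = 0.060      # 60 ms cross-country link

      print(f"LAN: {ROUND_TRIPS * LAN_RTT:.2f} s of pure latency")   # 0.20 s
      print(f"WAN: {ROUND_TRIPS * WAN_RTT:.2f} s of pure latency")   # 24.00 s
      # Bandwidth upgrades don't touch this number; only fewer round trips do
      # (local servers, caching appliances, or transaction-style protocols).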
  • This is a topic near and dear to my heart.

    1. First off, you dismissed WAFS-style accelerator solutions - I wouldn't. I think that's going to go a long way toward your solution.

    3. Get more bandwidth bang for your buck by consolidating all your connections through one carrier (realistically it probably isn't possible, but you might get close). Something like Megapath. See if you can find someone to build you an MPLS network so you can guarantee layer 3 throughput. Build QoS policies on that. By going with 1
  • Wait. Are you looking for print queues over the WAN? What happens after the document has printed? Does the head office FedEx it to you? Or is it quicker when they use the fax?
  • Are the magic words, but please do prepare your brain for a roller-coaster ride.

    OpenAFS [openafs.org]
    and
    Kerberos [kerberos.org]
