Networking

How Would You Make a Distributed Office System? 218

Necrotica writes "I work for a financial company that went through a server consolidation project approximately six years ago, thanks to a wonderful suggestion by our outsourcing partner. Although originally hailed as an excellent cost-cutting measure, management has finally realized that martyring the network performance of 1000+ employees in 100 remote field offices wasn't such a great idea after all. We're now looking at various solutions to help optimize WAN performance. Dedicated servers for each field office are out of the question, due to the price gouging of our outsourcing partner. Wide area file services (WAFS) look like a good solution, but they don't address other problems, such as authenticating over a WAN, print queues, etc. 'Branch office in a box' appliances look ideal, but they don't implement WAFS. So what have your companies done to move the data and network services closer to the users, while keeping costs down to a minimum?"
This discussion has been archived. No new comments can be posted.


  • Global file system (Score:4, Interesting)

    by Colin Smith ( 2679 ) on Monday January 21, 2008 @06:42PM (#22131548)
    Such as OpenAFS.

    Something like Coda might be nicer, but progress on global filesystems seems to have pretty much stalled.
     
  • No Good Solution (Score:5, Interesting)

    by maz2331 ( 1104901 ) on Monday January 21, 2008 @06:44PM (#22131580)
    There is no good and cheap solution to this one.

    You can try the application accelerators that are out there now from Cisco. They basically use smoke and mirrors to keep traffic off the WAN and act as local proxies for different services.
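The "local proxy" idea behind these accelerators is simple to sketch. The class and URLs below are illustrative stand-ins, not Cisco's actual implementation; the point is just that repeated requests are served from a branch-side cache instead of crossing the WAN:

```python
# Minimal sketch of a branch-office caching proxy: repeated requests for
# the same resource are answered locally instead of crossing the WAN.
# fetch_origin and the URL are invented for illustration.

class CachingProxy:
    def __init__(self, fetch_origin):
        self.fetch_origin = fetch_origin  # callable that goes over the WAN
        self.cache = {}                   # url -> response body
        self.wan_requests = 0

    def get(self, url):
        if url not in self.cache:
            self.wan_requests += 1
            self.cache[url] = self.fetch_origin(url)
        return self.cache[url]

def slow_origin(url):
    # Stand-in for a fetch across the WAN to headquarters.
    return f"contents of {url}"

proxy = CachingProxy(slow_origin)
for _ in range(100):
    proxy.get("http://hq.example.com/report.pdf")
print(proxy.wan_requests)  # 1 -- one WAN round trip for 100 local requests
```

Real accelerators add protocol-specific tricks (CIFS read-ahead, compression, dedup), but the caching structure is the core of the "smoke and mirrors."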

    Otherwise, your choices are limited. Citrix servers would be good for some apps, but get god-awful expensive fast. And an organization too cheap to build out a decent system to begin with isn't likely to make the investment in writing efficient apps.

    If you're running on slow lines, bump them to at least fractional T3.

    It sounds like the system was designed to serve 5 gallons of water through a swizzle stick. Ain't gonna work unless something is radically changed.

    Or better....

    Fire the outsourcing partner and the management that buys their bull, and build out a proper distributed architecture.

  • Re:No Good Solution (Score:5, Interesting)

    by chappel ( 1069900 ) on Monday January 21, 2008 @06:56PM (#22131716) Homepage
    I was really impressed with the improvements we got by implementing some 'smoke and mirrors' from Riverbed (http://www.riverbed.com/). Granted, we've got some reasonably adequate bandwidth to start with, but it dropped the WAN traffic to our large (500 user) remote site by a good 80%. They seemed mighty expensive for a plain Dell server running CentOS, but there's no arguing with results. /reminds self to look into Riverbed stock
  • Too little too late (Score:5, Interesting)

    by armada ( 553343 ) on Monday January 21, 2008 @06:56PM (#22131726)
    I suggest you pay more attention to the data itself. Do a comprehensive and brutally unbiased audit of what data/resources are needed by whom. You would be amazed at how much of your infrastructure is either superfluous or capricious. Once you do this, you at least have a smaller mountain to climb.
  • by eazeaz ( 1224430 ) on Monday January 21, 2008 @07:07PM (#22131824)
    We use Riverbed appliances at all our remote offices. They take about an hour to install and are damn near like magic. I just pulled some statistics from one of our remote offices. Over the last 30 days, we had a reduction in data flow of 95%: 6.3 GB of data went over the T1 instead of 129.3 GB. We can run applications over a T1, and users do not know that they are not local. They allowed us to go from DS-3 to T1 lines without any user complaints.
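The quoted figures are internally consistent; a quick arithmetic check on the numbers given above:

```python
# Sanity check of the quoted figures: 6.3 GB crossed the T1 instead of
# 129.3 GB, which works out to roughly a 95% reduction in WAN traffic.
before_gb = 129.3
after_gb = 6.3
reduction = (1 - after_gb / before_gb) * 100
print(f"{reduction:.1f}% reduction")  # ~95.1%
```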
  • Re:Amazing (Score:2, Interesting)

    by sco_robinso ( 749990 ) on Monday January 21, 2008 @07:43PM (#22132206)
    Agreed. I actually work for an IT outsourcing company. We don't gouge by any means, but we always come to the table with the 'top drawer solution' right off the mark. If the customer wants XYZ results, we tell them exactly what they need to get there and stay there for a 3 year period. If they don't like the costs, fine by us, we'll put in whatever they want or can afford. But if they come back to us in 6 months or a year and say the solution isn't delivering the expected results, we can always fall back on our initial recommendation. We always say, IT costs money and you have to pay the piper eventually. I actually deal with this a fair bit, and my best recommendation would be to spec out the best and most appropriate solution, costs completely aside. Think of it like 'if I was responsible for the whole setup, and cost wasn't an issue, how would it be done?' Then present it to management as 'This is how it should be done. Period. Here are the costs.' It's not rocket science.

    Don't let yourself get caught up in the financials and politics of it before you begin. Simply spec out what is needed given the demands and needs. If the management isn't comfortable with the costs, fine, but at least you can now rest on the laurels of having recommended what was needed in the first place.

    More specifically, a basic server in each branch office with DFS over Win2K3 is a good starting point. DFS has decent WAN optimization technologies out of the box, so it's usually a good starting point. Either way, there will be an investment at one end or the other, be it a server at each office or a big data center in the middle with a decently fat pipe to each office.
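The useful property of a DFS namespace here is that one UNC path can resolve to whichever replica is closest to the client. The toy sketch below illustrates that referral idea only; the site names and share layout are invented, and real DFS does this via Active Directory site costing, not a Python dict:

```python
# Toy sketch of the idea behind DFS namespace referrals: a client asks for
# a single logical UNC path and is referred to the replica in its own
# site, falling back to headquarters. Sites and shares are invented.

REPLICAS = {
    r"\\corp\shared\docs": {
        "chicago": r"\\chi-fs1\docs",
        "dallas":  r"\\dal-fs1\docs",
        "hq":      r"\\hq-fs1\docs",
    },
}

def refer(unc_path: str, client_site: str, home_site: str = "hq") -> str:
    """Return the replica a client in client_site should be sent to."""
    targets = REPLICAS[unc_path]
    # Prefer a replica in the client's own site; otherwise go to HQ.
    return targets.get(client_site, targets[home_site])

print(refer(r"\\corp\shared\docs", "dallas"))  # local replica
print(refer(r"\\corp\shared\docs", "boston"))  # no local replica -> HQ
```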
  • What would Google do (Score:2, Interesting)

    by rossy ( 536408 ) on Monday January 21, 2008 @07:52PM (#22132268) Homepage
    I used to work in the high tech industry with companies that made lots and lots of money. These companies had the fastest bandwidth, and the most creative people coming up with cool solutions to solve problems. But basically the point was, everyone made lots of money, so if IT infrastructure was a problem, they threw money at the problem, and it was solved...period. Since that time, I have seen general compression of the $$ side of things, the bright people go somewhere else, and the people outsource the smart clever IT folks that worked at the tech company to some outsourcing firm...
    and all the call centers are shipped off to India.
    So... I think... where is all the money now, and clever people?
    Google.
    Just ask Google to host your IT applications, they already index the rest of the damn web anyway.
    This would beat Google to their next big thing anyway... why not just host the world at Google?
    Storing your sensitive financial information will be just a speck of content compared to the rest of the web. Then buy some good fiber connections from Verizon. (I'm spoiled with my FIOS service at home...better than the DSL at my company's remote office)... and voilà, problem solved. Besides, then anyone can get to your data from anywhere.... the security issue is a myth... who has time to look up all this financial information anyway... most people are reading Dilbert cartoons about how your company outsourced the network.
    Plus, you can tell all your clients to buy Google stock, prior to handing over all the data.
    -- R
  • by SpaceLifeForm ( 228190 ) on Monday January 21, 2008 @08:19PM (#22132502)
    Well, sure, if you have to deal with Microsoft and people that worship Microsoft. If that is not the case, then maybe you don't get what you pay for because you don't have the budget to hire good people.
  • by Amouth ( 879122 ) on Monday January 21, 2008 @08:35PM (#22132622)
    I am wondering... that sounds like they did a good job, but from the upstream provider's view, what do the access logs look like? If the transparent proxy is acting as a middleman for the client, does it pass info upstream for logs?
  • by tgatliff ( 311583 ) on Monday January 21, 2008 @09:13PM (#22132872)
    Better idea... IPCop... You could put in a bunch of low-cost servers and run a VPN gateway to each remote office with IPCop. It is very doable and the most cost-effective way there is...

  • by snuf23 ( 182335 ) on Tuesday January 22, 2008 @02:50AM (#22134942)
    Just a question, but on Windows couldn't you use DFS for file replication? Or does that not work in a WAN situation...
  • by Anonymous Coward on Tuesday January 22, 2008 @12:23PM (#22138782)
    We have several heavy use remote locations and were running into the same issues as the poster. We selected the iShaper and iShared devices from Packeteer after trialling them for the following reasons:
    1) The iShaper is two devices in one: the regular Packeteer shaper to QoS WAN traffic, and a Windows 2003 Storage Server blade connected with an internal gigabit switch. The Windows side can be set up as a Domain/DHCP/DNS/print and app server.
    2) The device can be placed inline.
    3) With an iShared in your datacenter, the iShapers can pull file share content via DFS and a specific protocol they have been working on with MS. They use a "hot/cold" system for you to prepopulate the device with the user shares and other file shares, and then the protocol tracks the changes to make file shares uber fast over 128k and above connections. In our lab testing, it has been at least a 2x to 10x improvement in file load time.
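Shorn of the proprietary protocol, the prepopulate-then-track-changes idea boils down to shipping a full copy once and then only the files that changed. A rough illustration follows; the file names and content-hash scheme are my own for the sketch, not Packeteer's actual mechanism:

```python
import hashlib

# Rough illustration of prepopulate-then-delta sync: the branch cache is
# seeded once ("cold"), and afterwards only files whose content hash
# changed cross the WAN. Names and hashing scheme are illustrative only.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sync(central: dict, branch: dict) -> list:
    """Copy changed files to the branch; return what crossed the WAN."""
    transferred = []
    for name, data in central.items():
        if name not in branch or fingerprint(branch[name]) != fingerprint(data):
            branch[name] = data
            transferred.append(name)
    return transferred

central = {"q4.xls": b"v1", "memo.doc": b"hello"}
branch = {}
sync(central, branch)          # cold seed: everything transfers once
central["q4.xls"] = b"v2"      # one file changes at HQ
print(sync(central, branch))   # only the changed file crosses the WAN
```

Real products work on sub-file chunks rather than whole files, which is what makes small edits to big shares cheap over a 128k line.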
