How Would You Make a Distributed Office System?
Necrotica writes "I work for a financial company which went through a server consolidation project approximately six years ago, thanks to a wonderful suggestion by our outsourcing partner. Although originally hailed as an excellent cost-cutting measure, management has finally realized that martyring the network performance of 1000+ employees in 100 remote field offices wasn't such a great idea after all. We're now looking at various solutions to help optimize WAN performance. Dedicated servers for each field office are out of the question, due to the price gouging of our outsourcing partner. Wide area file services (WAFS) look like a good solution, but they don't address other problems, such as authenticating over a WAN, print queues, etc. 'Branch office in a box' appliances look ideal, but they don't implement WAFS. So what have your companies done to move the data and network services closer to the users, while keeping costs down to a minimum?"
It's a dead FS (Score:2, Informative)
WAN Accelerators (Score:4, Informative)
Not cheap - but easy.
Re:Not enough information. (Score:3, Informative)
Riverbed is a decent Solution (Score:5, Informative)
Re:So, here's your answer: (Score:5, Informative)
In my experience, ensuring value comes down to the processes involved in the planning, acquisition, and implementation of any given project.
Ensure you have a process for identifying the requirements of any new service or equipment acquisition, and do it without focusing on a specific system or product. If you limit yourself from the start because you have formed a preconception of what you think you need, or you simply copy what others have done before, you will not get a solution that meets your needs.
Acquisitions of any type should always solve a business problem, whether you are addressing poor or suboptimal communications, a lack of external access, the rigidity of an existing system, scalability, security or stability issues, or the lack of proper redundancy and disaster planning. You should not be buying things for the sake of it, or because someone simply thinks they might be a good idea; most of all, don't buy things because other people have them. Justification is everything. Otherwise you end up with things you don't need or want (but need to support) that provide no business benefit but do drain budgets, which in turn makes it harder to address real issues. The identification of problems should come from within the business (that's what management is there for, to a degree) or from independent consultants brought in for that purpose; it should never come from a vendor who (as it happens) also provides a solution. If a vendor makes a suggestion, assess the need and see if there is a business requirement, but do it independently.
Make sure you have a decent tendering process when you are sourcing equipment or services (for smaller businesses, that basically means you need to shop around, and tell your existing suppliers that you are doing so). Make sure there is input not only from management and finance but also from end users and IT staff (sounds basic, but it's not always the case...). You should also have a well-thought-out budget (after all, you are solving a problem, and problems should be quantifiable in cash terms), and stick to it.
I don't even want to think about the number of times I have seen needless upgrades, additions, and total changes to IT infrastructure for no good reason and, more importantly, with no real benefit. Resist it if you can (but don't resist change for the sake of resisting change; that is just as bad as doing the opposite).
As the parent suggests, price is not an indicator of performance. If your specifications and requirements are met and you are within budget, then great; if you are under budget, you are ahead of the game! With that in mind, do thoroughly check out your suppliers (it's inexpensive and easy enough to do). If a supplier is cheap and has a bad reputation, avoid them, and make sure your suppliers can deliver before you sign contracts. Sure, you may be able to sue them after the event (if you have all the information and the budget to do so), but it will be much cheaper to get it right the first time.
Finally, I have found that the law of diminishing returns seems rather applicable to IT: as things get more and more expensive, the benefit from obtaining them becomes less and less. For example, an email system of some kind is a necessity in most businesses, and generally speaking they are fairly inexpensive (relatively, at least), whilst electronic whiteboards (my pet hate) or upgrading Cat5 to Cat6 cable without changing anything else (something a vendor recently suggested to me to improve network performance...) bring only marginal benefits but are relatively expensive.
Hmm, that was probably all totally offtopic - never mind.
Re:So, here's your answer: (Score:4, Informative)
Then, six months later, we had a T1 outage to one of our larger offices, and that office ground to a halt. With no BDC, file server, or print server, as long as the T1 is offline that entire OFFICE IS OFFLINE: zero work gets done. We spent 5x what we spent to consolidate just to undo what he had us do. Not having servers in every office is the wrong thing. You have to plan for outages, and the performance of having a local server cannot be beat. (Well, you could have OC3s installed to each office, or run fiber to every office from your central location; 1000Mbit point-to-point fiber connections would do it...)
Welcome to the cross roads... (Score:5, Informative)
Your company is trying to cheat its development model. Rather than set up a distributed IT application, they have simply tried to distribute a small office network worldwide. Look back to the tried-and-true seven-layer OSI model: it doesn't speak of network file sharing, it speaks of everything from hardware up to applications. TCP/IP (which we have taken quite for granted) sits below the application layer. If you have an application that runs over TCP/IP, you are good to go.
I set up distributed systems for several ISPs in the late '90s. We didn't think much about what we were doing or why it worked; it looked like we could long-haul anything we wanted. A little lag in sending mail or a few extra milliseconds to authenticate against LDAP is no big thang. The Internet is distributed by nature. Sometimes DNS was a little slow, but that was acceptable for 56k modem and DSL customers. But we spent two years working on a central web-based administration/billing/customer-support application with one SQL database in the center. We didn't distribute the application and have it write to the SQL database directly, or move files around.
But you can't distribute the file layer. SANs within a single building have had some of the same problems: any lag affects all applications. Locally you solve it by throwing in a big fat fiber backbone, but that approach breaks down when you try to long-haul over WAN links.
If your company thinks it can sneak around coming up with a decent workflow model, and then implementing that model in an application, by simply handing MS Office and Exchange (or whatever they have deployed) to everybody, they are sadly mistaken.
But worry not. You are not alone. Many business execs scratch their heads as to why they simply can't share out MS Project and their Excel spreadsheets to teams of 25-plus people and have it work fine. You still need to do the legwork of figuring out the workflow and reducing it to a transaction-based system, centrally located. That's it. All we've done in the last 20 years is replace printouts with emails and spreadsheets, and the night operator (a job I used to do) with scripts (or procedures) that dynamically update or run every 10-15 minutes. You still need a central system and then distribute parts of it, or have a slimmed-down interface that everyone can use remotely. Look at how a bank does it: just good ole dumb terminals.
No magic bullets yet. We need faster broadband and much lower latency before you can share out at the file layer using a network stack meant for transaction-based applications.
Let yourself off the hook. No mortal IT person can turn this tide....
You need local servers to reduce latency. You need some decent thought on the application, not the OS and office suite. Good luck!
Re:So, here's your answer: (Score:4, Informative)
Here's how we're moving ahead with centralization in a large distributed environment with about 50,000 users and 1,000 branches. We're reducing the server count by about 40%, and the cost by 70% versus a couple of years ago:
- Most sites with 10-75 people get a headless, stripped down box (~$2,000) that runs our desktop management software
- Medium/Large sites (75+) get a file server, which fulfills some other roles as well
- Large and VIP sites get a domain controller, mainly for availability purposes.
- A few "very large" (800+) sites get a 100Mbit WAN connection and use the data center services.
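The tiering above can be sketched as a simple sizing rule. The headcount thresholds come from the list; the VIP flag and the gear descriptions are illustrative labels, not any vendor's product names:

```python
def site_tier(headcount: int, vip: bool = False) -> list:
    """Return the gear a branch gets under the tiering described above.

    Thresholds (10/75/800) come from the post; the `vip` flag and the
    return strings are illustrative, not real product names.
    """
    if headcount >= 800:
        # very large sites: fast pipe straight to the data center, no local servers
        return ['100Mbit WAN link', 'data center services']
    gear = []
    if headcount >= 10:
        gear.append('headless management box (~$2,000)')
    if headcount > 75:
        gear.append('file server')
    if headcount > 75 or vip:
        gear.append('domain controller')
    return gear

assert site_tier(40) == ['headless management box (~$2,000)']
assert 'domain controller' in site_tier(30, vip=True)
assert site_tier(900) == ['100Mbit WAN link', 'data center services']
```

The point of encoding it this way is that the rule stays auditable: when the 75-person cutoff changes, it changes in one place.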
We looked at a few other solutions, with mixed results:
- WAFS/WAAS looked great, but the solution cost was almost the same as rolling out servers. Additionally, most of our applications are "thin" already, so we weren't really gaining much.
- Distributed AD servers are purely an availability play (assuming your circuits and core servers are sized correctly).
- NAS also looked promising, but the cheap solutions weren't very manageable at our scale, and the manageable solutions weren't cheap.
- No backups are done on site; we're rolling out a distributed backup system that de-dupes the data globally and backs up to a data center. If you're using older backup software like TSM, Legato, etc., you MUST shop around; the newer solutions are way, way better and probably have lower administrative costs.
- Networks are getting faster and cheaper. We're seeing 3Mbit connections available to replace 512k frame-relay circuits at a slightly lower cost. We'll be switching as our network infrastructure gets upgraded.
- If your network supports it, multicast can make it much cheaper and easier to provision your workstations. Most management tools (Altiris, SMS, Tivoli, LANDesk, etc) support it.
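To make the global de-dupe idea from the backup point concrete, here is a minimal sketch of a content-addressed store that keeps each unique chunk once, no matter which branch sends it. Fixed-size chunks and the class/method names are simplifying assumptions; real products use variable-size (content-defined) chunking:

```python
import hashlib

CHUNK = 8192  # fixed-size chunks for simplicity; real systems chunk on content boundaries

class DedupeStore:
    """Content-addressed store: identical chunks from any site are stored once."""
    def __init__(self):
        self.chunks = {}   # fingerprint -> chunk bytes
        self.backups = {}  # backup name -> ordered list of fingerprints

    def backup(self, name: str, data: bytes) -> int:
        """Store a backup; return how many bytes actually had to be shipped."""
        new_bytes, fps = 0, []
        for i in range(0, len(data), CHUNK):
            piece = data[i:i + CHUNK]
            fp = hashlib.sha256(piece).hexdigest()
            if fp not in self.chunks:      # only unseen chunks cross the WAN
                self.chunks[fp] = piece
                new_bytes += len(piece)
            fps.append(fp)
        self.backups[name] = fps
        return new_bytes

    def restore(self, name: str) -> bytes:
        """Reassemble a backup from its fingerprint list."""
        return b''.join(self.chunks[fp] for fp in self.backups[name])

store = DedupeStore()
site_a = b'Q3 ledger ' * 2000           # 20,000 bytes at the first branch
site_b = site_a + b'branch-42 delta'    # mostly the same data at a second branch
sent_a = store.backup('site-a', site_a)
sent_b = store.backup('site-b', site_b)
assert store.restore('site-b') == site_b
assert sent_b < sent_a  # the second site ships only what the first didn't
```

This is why de-dupe changes the economics of centralized backup: the hundredth branch running the same OS image and the same spreadsheets contributes almost no new bytes.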
Re:No Good Solution (Score:2, Informative)
For example, if two Excel spreadsheets are 90% similar, it references the "cached" copy and sends only the 10% difference. The appliance reassembles the file on the other side and passes it on to the user.
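That differencing step can be sketched in a few lines. This toy version assumes fixed-size chunks and SHA-1 fingerprints; real WAN optimizers use rolling hashes and variable-size segments, but the send-only-the-new-chunks idea is the same:

```python
import hashlib

CHUNK = 4096  # fixed-size chunks; real appliances use rolling-hash, variable-size segments

def chunk_hashes(data: bytes) -> dict:
    """Map chunk fingerprint -> chunk bytes for a file already in the cache."""
    return {hashlib.sha1(data[i:i + CHUNK]).hexdigest(): data[i:i + CHUNK]
            for i in range(0, len(data), CHUNK)}

def encode(new: bytes, cache: dict) -> list:
    """Encode a file as ('ref', fingerprint) or ('raw', bytes) tokens."""
    out = []
    for i in range(0, len(new), CHUNK):
        piece = new[i:i + CHUNK]
        fp = hashlib.sha1(piece).hexdigest()
        # cached chunks go over the WAN as a tiny reference, not the data itself
        out.append(('ref', fp) if fp in cache else ('raw', piece))
    return out

def decode(tokens: list, cache: dict) -> bytes:
    """Reassemble on the far side from the cache plus the raw deltas."""
    return b''.join(cache[t[1]] if t[0] == 'ref' else t[1] for t in tokens)

# Example: two "spreadsheets" that are 90% identical
old = b'A' * 40960
new = old[:36864] + b'B' * 4096
cache = chunk_hashes(old)
tokens = encode(new, cache)
raw_bytes = sum(len(t[1]) for t in tokens if t[0] == 'raw')
assert decode(tokens, cache) == new
assert raw_bytes == 4096  # only the changed 10% crosses the WAN
```

The same trick is what makes repeated transfers of near-identical Office files cheap: the second copy costs a handful of fingerprints plus the edited chunks.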
They work so well that Riverbed (we used NetDirect Systems) will ship eval units for you to try for free. We plugged our eval units in and wrote a check the next day.