What Goes into an Enterprise Network?
Komi asks: "I work for a big semiconductor company, and I'm part of a group that is spearheading the Linux movement here. Right now everyone uses Sun machines to design, but you can get a cheaper Linux x86 machine that is four times faster. So it is my job to prove that Linux works. The problem is that I'm an analog circuit designer stuck in the role of sysadmin. So I need some advice on what goes into a network. It won't be that large right now, but it has to be scalable up to a couple of hundred machines. If this works, then hopefully we'll convince all designers at my company to make the switch."
"Here's the hardware that I am planning on getting:
- 2 servers:
These would hold the home accounts and tools, as well as serve out NIS, NTP, etc. I know I'll need a lot of hard drive space (2x72GB SCSI each), but do I need a lot of memory? (It's 4GB RDRAM max.) Should the processor be fast, or dual?
- 3 batch machines:
These would be a small compute farm running LSF or something similar. Jobs would get queued up and run continuously, so these should be dual-CPU with lots of memory, probably 4GB each. Any other particular details?
- 10 desktop machines:
These would be on the designers' and developers' desktops. They should be reasonably fast (~2GHz) single-CPU machines and will probably need at least 2GB RAM. The simulations we run do not benefit from dual CPUs, and they probably don't even need SCSI. I'm thinking a $2k PC should work.
- 1 Itanium server:
This would be a machine to play around on to test our 64-bit applications. The only advantage of 64-bit is for applications using huge amounts of data.
What Goes into an Enterprise Network? (Score:4, Funny)
Dilithium?
Re:What Goes into an Enterprise Network? (Score:3, Funny)
Isolinear chips. Lots of them. All over the ship. Even in places where a teenager intoxicated by a recycled plot device (cough! [slashdot.org]) can get to them and sabotage the ship. Hence the importance of locking the server room.
Re:What Goes into an Enterprise Network? (Score:1)
Think bio-neural gelpacks [codefusion.org]
I wonder what kind of throughput they get...
What goes into a network? (Score:1)
2. ?????
3. Big Profits!!!!
Sorry, had to be done, go easy on me!!!
Biggest troll ever? (Score:3, Insightful)
*HIRE A REALLY GOOD SYSADMIN*
You're horrendously out of your depth and there are shedloads of really good sysadmins around who need jobs. Take someone on for three months to look at the problem properly. Advice 2:
*DON'T BUY AN ITANIUM MACHINE*
There is simply no point, particularly if you don't really know what you're going to use it for.
Cheers,
Dave
Re:Biggest troll ever? (Score:3, Informative)
Hire a good sysadmin and the job will get done much better and faster.
Itanium also doesn't sound like the way to go for you.
Think about it. Hammer is going to be out in a few weeks, which will give you better 64-bit and 32-bit performance than the Itanium for a fraction of the price. It's not the solution for everyone, but for you it sounds like a good fit.
Now, about Red Hat... You could consider other distros as well, like Gentoo [gentoo.org], which gives you added benefits like better package management, especially if you are going to have a lot of your own source around. You can write ebuilds for it and easily install your source packages on all your machines. It could also give you a nice performance benefit.
But the distro might best be picked by the sysadmin you hire (who needs to be a specialist in Linux, but maybe not already tied down to a specific distribution).
Re:Biggest troll ever? (Score:1)
Re:Biggest troll ever? (Score:5, Informative)
1) There's a difference between PCs and 'server class' hardware. The biggest is testing. It will work, and it can be supported easily. Drivers are nice and available (generally speaking). Usually dual-processor, usually RAID-enabled. You can use RAID to speed up read access, but almost no one does; they use it for redundancy (in case a disk flakes out on you). How much money you spend depends on how much downtime costs. If it really costs, you need RAID 1+0 or 0+1. Go with hardware-based RAID if you can.
2) Sun hardware. There are many more advantages to Sun hardware than what's obvious. Never overlook what a good support organization can do. You pay for it, but if something fries, I can have a part in my hands four hours later. Sun's low-end desktops are nothing to write home about. However, if you've got Ultra 60s or SunBlade 1000s or 2000s, that's some really classy hardware. You can do some surprising things with it. [1]
3) Dual procs. On desktops, even if your simulations don't benefit from dual processors, if they take a while and eat that one CPU, you'll be happy to have a second (for web browsing, etc.). On your servers, it's effectively a must.
4) RAM. On the servers, crank it. On the desktops, you should probably crank it.
5) Cost. If your work is anything like mine, you have 'capital' money and 'O and M' money. When in doubt, over-spec the machines so that you're less likely to have to request more money from the 'capital' pool than you initially quoted. "Going back to the well" is viewed poorly.
6) NIS. NIS is evil and the plague. If you're in a relatively local office with good connectivity, it's all right. If you try to spread it over WAN links, you're going to get hurt at some point.
7) NTP? Why run a separate server when you don't have to? Leverage what's already in use in the company. This leads to my last point (and what was the best point of the parent).
8) Get yourself a real sysadmin. These are decisions that s/he is experienced in, and paid to make. The trial by fire that would come from this will probably drive you insane. Good sysadmins are a rarish breed. I know, I am one. There are a fair number of good ones out of work now. Find one.
[1] The reason largely has to do with cache. Sun chips made in the last two or so years have 8 megs of cache on them (it's mirrored, so that's 16 in total, but you can only use 8). We built a GIS app and field-tested it on Sun and Intel hardware. The Intel hardware could deal with 1 to 4 users with fewer resources than the Sun box could. However, the Sun box kept scaling up to several hundred users, while the Intel box started thrashing hard after 10 or so. We compared a dual US3 box to a dual Xeon P4.
Re:Biggest troll ever? (Score:3, Informative)
Er, every commonly-used RAID level will speed up read access...
If it really costs, you need RAID 1+0 or 0+1.
You want RAID 10 (1+0, stripes over mirrors), not 0+1.
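The redundancy difference is easy to put numbers on with a back-of-envelope model (simplified: it assumes the second failure strikes a uniformly random surviving disk, which real failure statistics don't quite obey):

```python
# Given one disk already dead, what is the chance that a second random
# disk failure destroys the array? Simplified model with n_pairs
# mirrored pairs, i.e. 2*n_pairs disks total.

def second_failure_kills(level: str, n_pairs: int) -> float:
    remaining = 2 * n_pairs - 1          # disks still alive
    if level == "1+0":                   # stripe over mirrors
        # fatal only if the dead disk's mirror partner dies too
        return 1 / remaining
    if level == "0+1":                   # mirror over stripes
        # one whole stripe is already offline; any disk in the
        # surviving stripe is now fatal
        return n_pairs / remaining
    raise ValueError(level)

for n in (2, 4, 6):
    print(n, second_failure_kills("1+0", n), second_failure_kills("0+1", n))
```

With 8 disks (4 pairs), a degraded 0+1 array dies to a second failure 4 times in 7, versus 1 in 7 for 1+0, which is why stripes-over-mirrors is the usual recommendation.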
There are many more advantages to Sun hardware than what's obvious. Never overlook what a good support organization can do. You pay for it, but if something fries, I can have a part in my hands four hours later.
Dell can have a part in my hands in two hours - often with an on-site tech included. There are certainly reasons to buy Sun hardware, but hardware support isn't a major one.
However, the Sun box kept scaling up to several hundred users, while the Intel box started thrashing hard after 10 or so. We compared a dual US3 box to a dual Xeon P4.
And how did the Intel boxes go when you spent the same amount of money on them (purchasing multiple machines and scaling horizontally) as you had spent on the Sun box(es)? Or didn't your app scale horizontally? (That *would* be a good reason to buy Sun, or some other lots-of-CPUs-in-a-machine vendor.) Similarly, if you have an app that really benefits from massive amounts of L2/L3 cache, then machines that have CPUs with lots of cache might give disproportionately better results. "It all depends."
The idea with going Intel over Sun is to use the enormous price difference to buy lots of Intel machines and cluster them. This works only if your application can efficiently scale across multiple machines and doesn't benefit disproportionately from things that simply aren't available on Intel, like massive amounts of fast cache memory.
0+1 vs. 1+0 and horizontal scaling (Score:2)
I mean, if you're using 0+1 and a single drive fails, the array degrades to RAID 0. However, it's only slightly ahead of a 1+0 system in performance, so you have to determine whether that performance is worth the reduced redundancy. So if you're more interested in keeping the data, because the data is rapidly changing [databases], you'd most likely go with 1+0.
For something that's all about performance but does want some redundancy [high-load file servers], 0+1 might be better for you.
And as for the whole concept of horizontal scaling: there are times for more machines... however, in this case, if it's 10 users vs. hundreds, that'd mean at least a 20:1 difference. I don't know what your background is, but I'm personally not big on having to perform maintenance on 19 machines I don't have to. [Not to mention that most support contracts are priced per system, that you have to have 20 times the disk space allocated for OS and applications, and similar overhead for RAM, etc...]
Cost calculations need to be performed on the TCO (Total Cost of Ownership), not on the initial cost of the hardware. (Oh, and if you're buying software, you might have to buy 20x the number of licenses; that's a little problem when not all software is free.) Sure, you might save some cash up front, and it may be easier to have rolling outages for upgrades later, but when it comes time to do some app upgrade that takes an hour of donut-spinning and an hour of interactive configuration, do you want to do it overnight, or do you want it to burn your entire weekend? (And don't think that after a long weekend you get to take the upcoming week off: you've got to be there extra hours in case something went wrong that won't show up till under load. You don't get time off in advance either, since you're prepping everything for the upgrade and going to meetings to assure people it's not going to be a problem, because they weren't told the upgrade that's been planned for months is happening in three days.)
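To make the TCO point concrete, here is a toy cost model. Every number in it is a made-up placeholder (machine prices, support rates, admin hours); the only point it illustrates is that recurring per-machine costs like licenses can dwarf the hardware savings of a 20-box cluster:

```python
# Toy TCO comparison: one big box vs. N small boxes over a few years.
# All figures are hypothetical placeholders -- plug in real quotes.

def tco(hardware, n_machines, support_per_machine, license_per_machine,
        admin_hours_per_machine_yr, admin_rate, years=3):
    """Up-front hardware cost plus per-machine recurring costs over `years`."""
    recurring = n_machines * (support_per_machine + license_per_machine
                              + admin_hours_per_machine_yr * admin_rate)
    return hardware + years * recurring

big_box = tco(hardware=150_000, n_machines=1, support_per_machine=12_000,
              license_per_machine=8_000, admin_hours_per_machine_yr=100,
              admin_rate=50)
cluster = tco(hardware=40_000, n_machines=20, support_per_machine=500,
              license_per_machine=8_000, admin_hours_per_machine_yr=20,
              admin_rate=50)
print(big_box, cluster)   # -> 225000 610000
```

With these (invented) numbers, the cluster's cheap hardware is swamped by buying 20 licenses a year, which is exactly the "20x the number of licenses" trap.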
Re:0+1 vs. 1+0 and horizontal scaling (Score:2)
I'd be interested in benchmarks to quantify how much faster 0+1 is. My guess is the difference would be insignificant, even more so given that it would only apply to throughput, not latency; and really, how many of your fileservers are maxing out on throughput?
I don't know what your background is, but I'm personally not big on having to perform maintenance on 19 machines I don't have to. [Not to mention that most support contracts are priced per system, that you have to have 20 times the disk space allocated for OS and applications, and similar overhead for RAM, etc...]
I sysadmin at a fairly large Australian Uni, so I do get to deal with a nice wide range of machines. With decent configuration management tools, maintaining lots of machines (particularly when they are configured nearly identically) is not overly taxing. Heck, compared to the mess a large domain can get into after having a dozen different sysadmins and service admins banging away on it for a few years, a bunch of small, single-purpose boxes is a godsend, IMHO. Indeed, we are hoping to move away from our big machines (E10Ks) to lots of Intel boxes, simply because of the phenomenal cost of Sun equipment. A single processor board for an E10K costs nearly as much as a rack full of 2.4GHz/2GB Dell 2650s, so it makes a fairly compelling argument in terms of hardware costs.
And as for the whole concept of horizontal scaling: there are times for more machines... however, in this case, if it's 10 users vs. hundreds, that'd mean at least a 20:1 difference.
It depends on what you're doing. Your GIS app probably gained enormous benefits from the large CPU cache, whereas some other workloads would not gain (relatively speaking) as much of an improvement, and so benefit more from being able to spread the load across a bunch of machines with smaller caches (that cost 1/10th as much). Something like webserving (for example) benefits enormously from horizontal scaling.
Re:Biggest troll ever? (Score:4, Informative)
Sysadmins will be hired for this once money gets freed up and we can prove to groups that Linux works. A later post was correct that there are really two issues: a) getting everyone to switch to Linux, and b) getting designers to put Linux on their desktops. We really only care about b), but by the nature of the deal, we have to prove a) and b). Also, we don't care about the cost of switching a design group over to Linux either; that's someone else's job. We just show that the end result works.
And finally, we do need 64-bit machines. Some of the programs we run use huge amounts of data that need 64-bit addressing. So if we're getting free loaner equipment, then why not play with an Itanium? :)
I appreciate the advice from everyone.
Thanks,
Komi
Re:Biggest troll ever? (Score:1)
1. OS selection: RedHat 7.3 or 8.0. Don't bother considering anything else.
2. Skip the file servers. You already have a large network of Suns, so I'm assuming you already have Sun, Netapp, or some other enterprise file servers. Don't mess with this. Your cheap Linux boxes are best used as fast compute servers. If one dies, chuck it into the dumpster and swap in a new one.
3. Skip the expensive graphics card. Any sub-$100 2D card is fine for the EDA apps used for IC design.
4. Skip the multiprocessor servers. I don't think the memory systems of most PCs are up to the task. Buy single-processor systems, and run one job at a time on them. Use Gridware from Sun for free batch processing, or pay for LSF if you're already using it in your existing network. Set up correctly, either can dispatch jobs to a Sun or a Linux box transparently to the user.
5. Skip the CD drive in the compute servers; you can install the OS from the network. Make your desktop users happy by including a CD drive so they can listen to tunes...
6. Put small IDE hard drives (30-40 Gig) in all of your Linux boxes. Discourage using the local drives for anything but transient data. Keep some spares on hand; some of these drives are going to croak.
7. Pay for fast processors and lots of RAM. We bought Athlons, but P4s are probably faster today. There's a 3GB-per-process limit on 32-bit Linux, so more than 3GB per job is not useful.
8. Forget the Itanium except as a science project. Run any jobs that are too big (>3GB) for the Linux boxes on a 64-bit Sun. You won't find (m)any commercial EDA apps for Itanium. If you have some in-house applications that you really want to port to Itanium, go for it. I wouldn't bother...
9. Install Win4Lin to run those pesky windows apps. It's cheap, and gets the job done.
10. Prepare for people to start fighting over who gets the new Linux boxes. We don't have a single person here who would go back to their Sun.
Good luck. EDA on Linux works. It's as simple as that.
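The 3GB-per-process figure mentioned above falls straight out of 32-bit addressing: a pointer can only span 4GiB, and the kernel reserves part of that range for itself. The 3/1 user/kernel split shown here is a common Linux default; the exact split is a kernel configuration choice:

```python
# Why a 32-bit process tops out around 3 GB: the address space is
# 2**32 bytes, and the kernel keeps a slice of it mapped.
# (3 GiB user / 1 GiB kernel is a typical Linux split; it varies.)

address_space = 2 ** 32                       # 4 GiB addressable
kernel_reserved = 1 * 2 ** 30                 # 1 GiB reserved for the kernel
user_limit = address_space - kernel_reserved  # ~3 GiB left for the process

print(user_limit / 2 ** 30)   # -> 3.0
```

A dataset bigger than that simply cannot be mmap'd or malloc'd by one 32-bit process, which is the whole argument for 64-bit hardware above.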
Backup (Score:3, Informative)
Re:Backup (Score:2)
Feasibility Study... (Score:3, Informative)
1. Time savings versus average hourly rates for computing and employee time costs. This would be an aggressive ROI metric.
2. A more conservative metric would be the cost of replacing Sun systems over time versus the cost of, say, a small farm of Dell Optiplex PCs.
You could then also compute the value of gigaflops per dollar, showing the clear advantage of the PCs.
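Gigaflops per dollar is just a division; a minimal sketch, with throughput and price numbers that are purely illustrative placeholders rather than measured figures for any real machine:

```python
# Price/performance sketch. The price and GFLOPS figures below are
# illustrative placeholders -- substitute measured numbers from your
# own benchmarks before showing this to management.

systems = {
    "x86 desktop": {"price": 2_000,  "gflops": 4.0},
    "RISC wkstn":  {"price": 15_000, "gflops": 2.0},
}

for name, s in systems.items():
    ratio = s["gflops"] / s["price"] * 1000   # GFLOPS per $1000
    print(f"{name}: {ratio:.2f} GFLOPS per $1000")
```

The caveat from elsewhere in this thread applies: measure the GFLOPS figure with your actual simulations, not a synthetic benchmark.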
Re:Feasibility Study... (Score:3, Insightful)
QUALITY!
Because of the reduced price and increased performance of our Dell Xeon boxes, we are able to run more simulations on our circuits. This allows us to check a greater range of operating conditions, thus improving the overall quality of our product.
Your facts aren't quite straight. (Score:5, Informative)
1) You had better find some damn fine PCs to replace those Suns, because a couple hundred PCs can make your life miserable due to lots of random breakage.
2) This is not true (unless you found Pentiums with SPECfp of over 3000!). If you buy the right-sized computers for your task, the hardware costs won't be a dominating part of your budget. Human costs and non-OS commercial licensing will be, regardless of your platform choice.
Whenever people say that Linux is absolutely, outright cheaper than commercial UNIX, I'm pretty convinced they haven't figured out all the costs involved. Also, I'm not convinced they understand just how simple maintaining a Solaris box can be, for example, thanks to sunsolve.sun.com, ample documentation, optional support out the wazoo, etc.
Before you go blazing these new trails, just stop and think for a minute. Put aside the zealotry and really think hard about what is and is not cost effective. Regardless of your choice, you really need to be convinced it is the right one.
Re:Your facts aren't quite straight. (Score:3, Interesting)
Like the author of this email, I work at a large semiconductor company. We are in the middle of switching from Sun to Linux. The price/performance difference is huge: there is more than a 10x difference in price between a Dell box and a slower Sun machine.
We have been up on Linux for over a year and so far haven't had many issues. Ok, well we have one issue.
Re:Your facts aren't quite straight. (Score:3, Interesting)
As far as your concerns over reliability go, we buy name-brand PCs that are meant to be servers, not crappy integrator machines or desktops. I find that the hardware reliability on these machines is at least as good as the Suns'.
That said, we *have* to have the Suns for those jobs. We use Suns for infrastructure stuff as well. And some of the engineers cannot *live* without a Sun on their desktop, though they all whine that their desktops are too slow to run a browser.
My feeling is that until Hammer hits the streets it's going to take a mixed environment to get the job done. Certainly when we have an option to run Linux servers with many GBs of RAM we won't be buying more Suns.
$2000 a PC? (Score:2, Interesting)
A 2.4GHz chip is $160. 2GB of memory is around $500 (1.5GB? More like $250). $85 for a DVD/CD-RW, $150 for a board with onboard sound, $60 for a decent video card, $80 for a good case, $15 for a floppy, $30 for a keyboard and optical mouse.
Figure $1400-1500 a PC, even from a major OEM, tops. Any more and you're getting hosed.
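Summing the quoted parts makes the point; a quick check (component prices as quoted in the post, circa 2002):

```python
# Adding up the component prices quoted above.
parts = {
    "2.4GHz CPU": 160,
    "2GB RAM": 500,
    "DVD/CD-RW": 85,
    "motherboard w/ onboard sound": 150,
    "video card": 60,
    "case": 80,
    "floppy": 15,
    "keyboard + optical mouse": 30,
}
total = sum(parts.values())
print(total)   # -> 1080
```

The bare parts come to $1,080, so the $1,400-1,500 figure leaves room for OEM assembly, an OS license, and a warranty, and $2,000 is well past that.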
Re:$2000 a PC? (Score:1)
The Linux movement, not the Windows XP Pro movement. You can now take off $140. $80 for a case? jeebus, that must be one mighty fine case...
Re:$2000 a PC? (Score:2, Insightful)
A SOHO server case with a lockable front and side panel and a 430-watt power supply. Good cases and power supplies are worth it when you have expensive hardware you are powering and housing. Cases with no filters lead to dirty parts inside; that is why most Sun hardware has filters on it.

As for $2,000 for these machines, I agree, that is WAY overpriced if you shop around on http://www.pricewatch.com

With the money saved, build an extra server or two and several extra desktop boxes for rapid replacement, and house all the data on the servers on a RAID 5 array with hot-swappable drive trays and hot standby spares.

The cool thing is you can do this with IDE drives now and save a small fortune. IDE drives tend not to last as long according to factory testing, so keep a few spares handy and you should be fine.

Well, I am re-writing what I have already posted, my apologies.

Peace...

Ex-MislTech
I think you missed something.... (Score:1)
Re:$2000 a PC? (Score:1)
19" Monitor??? (Score:1)
Yes, take away their Suns and give them CRAP. (Score:3, Insightful)
The poster of this article claims eventually all user workstations will be replaced with Linux boxes (in his fantasy world at any rate.) So, let's say he has 200 highly-paid CAD engineers whose workstations might be replaced. Let's figure it will take them each one week (40 hours) to re-learn everything on the Linux side, which IMO is not unrealistic at all assuming he simply plops a new Linux box down on their desk one day. So, when all is said and done, his "cheaper" solution will have cost the company 8000 man-hours in wasted labor. Let's say each engineer or developer makes $40 an hour (again, not unrealistic.)
So, that's $320,000 out the door on top of hardware costs, which I am sure will be more than you quoted because you can't do CAD on a machine with the kind of parts you listed (hint: a $60 video card will not work for CAD. Don't even attempt it. No, AutoCAD does not count. Real pro-level CAD cards cost more than the PC you specced out.) A truly realistic figure for a PC capable of being used by engineers would cost closer to $4000, especially if you want someone else to do hardware support (trust me, with 200 or more users, this is what you want.)
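The retraining figure above is straightforward to check, using the post's own assumptions (200 engineers, one 40-hour week of re-learning each, $40/hour):

```python
# Re-learning cost estimate from the post's stated assumptions.
engineers = 200
hours_each = 40          # one work week of re-learning per engineer
hourly_rate = 40         # dollars per hour

retraining_cost = engineers * hours_each * hourly_rate
print(retraining_cost)   # -> 320000
```

Whether a full week per engineer is realistic is the debatable assumption; halve it and the figure is still $160,000 on top of hardware.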
Then there's the countless hours (hundreds, perhaps thousands) spent porting any custom applications and simulation software you may have to Linux.
Then there's the Sun machines you'll still have to keep around for the applications which don't run on Linux.
Then there's the two or three full-time sysadmins you will need to hire to at least oversee this tremendous effort (there's no way in hell I'd trust anything like this to an amateur who read about Linux and thought, hey, it would be awesome to switch everyone over to that at work.)
Then, unless you hire sysadmins permanently for this, there's the 40 more hours you'll be working every week as one.
Enjoy your Linux boxes!!
- A.P.
The BIG Never in Enterprise Networks (Score:4, Funny)
DAMMIT Jim! I'm a Doctor not a UNIX admin!
Re:Think things over... (Score:1)
It is running on x86 hardware.
http://www.intel.com/eBusiness/casestudies/snapshots/google.htm
It seems to work well. Granted, these are two different scenarios, but it proves it can and will be done.

If people spent more time figuring out how to do things, rather than taking the default path of devil's advocate, the US would not be in the shape it is in now. It is soooo much easier to say it is all crap, and type in CAPS, and puff up like a toad. Loud does not equal right. I am not even saying I am right, but give it some equal consideration. Corporate loyalty to corporate royalty is like a horse with blinders following the buggy-whip commands off the edge of a cliff.

Fact is, x86 hardware is unreliable, but then again Sun equipment breaks too, unless you buy their ultra-high-end, money-is-no-object systems. Google even talks about this on their site. They just pop another box in and send the broken one in for diagnostics and batch reload via imaging; it's all automated. On a 1-for-1 basis the Sun boxes are better, if you are willing to pay MUCH more for each one of them.

Try telling the people at http://www.mosix.com that Linux is just not good enough. They have been doing this for 20 years, most recently on x86 hardware. The collective computing power of all those Linux boxes will amaze you. This is the new frontier in silicon and in code; give it a chance before you condemn it because you are old-school Sun and own thousands of shares of tanked stock.

Peace...

Ex-MislTech
Hardware doesn't make the enterprise software does (Score:3, Interesting)
Think TASKS not BOXES!! (Score:4, Interesting)
Now, back up and think about this:
In your case, you're talking primarily about engineers, and they are primarily (as a job function) going to be doing engineering.
Now, on your EXISTING network, measure what a few users do for at least a few days. If you've got accounting turned on, you should be able to extract that information from the logs. This will give you a chance to see how much load there really is.
Next task: establish some of your "non-functional" requirements. In particular, how long can response time be for your most important tools, how long can you afford to have the system as a whole be unavailable, and how much work (an hour, half a day, a week?) can you afford to lose. Divide all of those by two and make them your basic "service level agreement" -- which is simply a statement of the service you promise the users, it doesn't have to be fancy.
Here are some reasonable values, from experience, but YMMV: most people will put up with the whole system being unavailable for an hour, they want half-second response time from specialized tools and more like about 4 seconds on a web page, and engineers hate losing ANYTHING but usually don't get too pissed off if it's less than a couple of hours work and doesn't happen very often.
Next: what's the environment? Do you have to think about firewalling yourself from the rest of the network? (Don't assumme just because you're inside the corporate firewall that you're protected. Get AND READ the corporate security policy, as well as talking with the admins who own the network as a whole.) How will you do backups? How do you fit into the corporate disaster planning scheme? (Lots of people forget that one, but just look into what happened to the Wall Street Journal on 9/11 to see how essential it really is.) This analysis will give you a good idea what you need.
And now, having said all that, it will turn out that what you're going to need is (1) a "big enough" file server with RAID 5 and a good periodic backup onto "archival media" like tapes or writeable CDs; (2) one workstation good enough for all your applications, with at least a year's room for growth, for each desktop (plan to buy at least one spare, and set it up "hot" so a single failure doesn't slow anyone down); (3) a smallish box as a print server (if you manage your own email, it can often go onto this); and (4) a firewall box or a router (betcha 50 cents Canadian that the company will insist on this).
Plan for a full week, plus one day per user workstation, for installation. That is, with 4 users, plan on 5 + 4 = 9 days for two people.
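The scheduling rule of thumb above (a full week of setup plus one day per user workstation) can be written down directly; a trivial sketch:

```python
# Installation estimate: one base week of infrastructure setup plus
# one day per user workstation, per the rule of thumb above.

def install_days(workstations: int, base_week: int = 5) -> int:
    """Working days needed for the rollout."""
    return base_week + workstations

print(install_days(4))    # -> 9  (the 4-user example from the post)
print(install_days(10))   # -> 15 (Komi's 10-desktop plan)
```

For Komi's ten desktops that is three working weeks for two people, before anything goes wrong.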
All the other stuff, like using NIS, NFS, Kerberos, etc, will more or less fall out if you get these steps right first.
Why fight the whole war at once (Score:4, Interesting)
1) Getting the whole company moved over to Linux for everything
2) Getting engineer workstations running on x86s so you can get 4x the speed.
(2) is a much easier battle to fight than (1). Don't spec a whole Linux solution for everything, spec out a Linux solution for the workstations that allows them to work with the Suns. There you can make the cost difference really obvious. Reliability isn't a big deal.... Your software vendor might even give you the test software in hopes of the license switch down the line. In the back of your mind you can keep the total Linux solution but your strategy should be to take out the Suns piece by piece by piece.
Total overhauls come down from above, not up from below. Incremental change that over time turns into a total overhaul comes up from below. You don't sound like you have anywhere near the juice to get a total overhaul through the company, regardless of how good your analysis is.
Re:Why fight the whole war at once (Score:2, Interesting)
Agreed. You're biting off more than you can chew. You'd be absolutely insane to throw away your Solaris infrastructure on day one. Quality will sell your ideas, and consistency is the one true measure for quality. Work with your sysadmin staff to make Linux a first-tier quality desktop. Don't go cheap. Let Linux and Solaris compete on equal terms, and it will be easy to pick a winner.
Do you have all of your ISVs lined up? Getting all of the software pulled together that you need to be productive at your job is the hardest part. Are you there? If you're not, you might not want to even consider taking another step until you convince your ISVs to support Linux.
We've replaced hundreds of SGI's with Linux workstations and seen huge gains in performance and employee output. We started with a single specialized application on about thirty systems, and three years later we're down to our last 20-30 IRIX boxes out of about 1000 systems in house.
Re:Why fight the whole war at once (Score:1)
A total switch-out is usually very painful, and people will fight it like crazy. A lot of the resistance is not baseless, either.

Verify that all aspects of the software will work under Linux, and set up some kind of training for the people, similar to an intranet course or just a video CD. This training can be a project unto itself.

Make it a progressive, gradual switch-out, perhaps with volunteers at first, or people you are comfortable with making the switch. Make the needy, emotional, neurotic freaks the last ones to get switched.

Good Luck,

Peace...

Ex-MislTech
Re:Why fight the whole war at once (Score:2)
This is probably the crux of the matter: management is setting up the Linux thing for failure, predominantly so that they can say they "tried the Linux thing but it didn't work for us." This poor guy seems to be locked into a death spiral, and if he even gets close to success they'll either cut his resources or increase his requirements. I'm sure any help he gets from us would be greatly appreciated.
#1 advice (Score:2)
People buy Sun's for a reason (Score:2, Interesting)
You can get a cheaper Linux machine, yes. It might be four times faster than a Sparc 10, but new x86s aren't anywhere near as reliable or powerful as a new Sun. As I said, people do buy Sun stuff for a reason, and pay a hefty premium.
4x faster, pah! If you plonked a PC four times faster than the one I'm using in front of me, I wouldn't notice during the bulk of my work, because the machine is 90% idle on average. Processor speeds go up and up, and some OSes just bloat and bloat to make up for it.
So it is my job to prove that Linux works.
This is already done for you. What you need to do is convince the management that you can use it to save them money, and at the end of the project you might find that this wasn't the case. Just because the OS installs for free doesn't mean it doesn't cost anything.
Methinks you've just started at the job, been using Linux at home for a while and think you can plonk it anywhere and go. On a production machine, I can't see the argument for Linux over (presumably) Solaris, and definitely not x86 over SPARC. I admit, I was guilty of the same Linux zealotry three years ago. Now I only want to replace every NT server w/ Linux, and leave the Solaris machines well alone, for I've learned a lot about them now, and it just can't be beat.
BTW, what Linux distribution were you thinking of, because that makes all the difference too. It's hard to find one with a name that management will take seriously, and that doesn't suck at the same time.
Doh! (Score:1)
> What Goes into an Enterprise Network?
Prise, of course.
not only hardware (Score:4, Informative)
What I would concentrate on is:
Just a quick overview; to sum it up, I would second the advice somebody else gave you in a previous posting: hire a decent sysadmin and plan things with him.
x86 faster? (Score:2)
Most engineering work, whether it's CFD, FEA, or ICE, is bound by memory bandwidth, not CPU speed. It requires the construction of very large in-memory data structures, which see a combination of random access and sequential traversal. Before you assert that an x86 machine is 4x faster, benchmark it with the actual applications you use; don't rely on SPECmarks and the like, which can run entirely in cache, because such benchmarks aren't representative of real applications. And if you've got UPA in your workstation (like, say, the old Ultra 1), then no bus-based x86 can match you for I/O. If you want workstation-class hardware, it costs more than a PC.
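A minimal harness for the "benchmark your actual applications" advice, sketched in Python; the simulator binary and input file at the bottom are placeholders for whatever tools you actually run:

```python
# Minimal wall-clock benchmark harness: time the real workload, not a
# synthetic kernel that fits in cache. Run it a few times and take the
# median to smooth out cold caches and background noise.
import statistics
import subprocess
import time

def bench(cmd, runs=3):
    """Run `cmd` several times; return the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical usage -- substitute your real simulator and input deck:
# print(bench(["./my_simulator", "testcase.ckt"]))
```

Run the same command on the Sun and on the candidate x86 box; the ratio of the two medians is the only "4x faster" number worth quoting.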
This is my experience: for benchmarks, my 1GHz P3 beats my 225MHz Octane easily, but for real work, the Octane runs rings around the PC. One I/O-bound task and the PC is almost unusable 'til it completes, but the Octane can max out its disks, run at 100% CPU, and still remain responsive. I see similar results comparing PCs with Suns.
Secondly, when you buy Sun, you aren't just buying a piece of hardware, you're buying a service. Support and maintenance you can get are far, far beyond what you can expect from an x86 vendor. If a part goes bad, you can get it replaced in a few hours. All the components in your machine have been certified as working together and working with the OS. And can your Linux vendor do this [sun.com]? (No, they can't even stabilise on one libc!) Running a network of workstations for your company's core business is a completely different game than running a network of PCs for ordinary office workers.
Re:x86 faster? (Score:1)
x86 servers and workstations (Score:2, Informative)
They need to get with the future. I have built some killer boxen that have worked well for years and have been passed on through the hands of other people.
PC hardware, as one poster pointed out, is cheap and is going to break. If you use identical hardware, build up several extra PCs ready to go with an image. Keep them working in a storage closet out of sight, and keep their existence little known, or else they will get appropriated just because people "feel" they need an extra box. Don't tell anyone, either; they will slip and tell someone, and then those people will never stop pestering you until you have no extra boxen left. They will even stoop to calling in favors from people in authority to scrounge themselves an extra box. They are snakes!
Users should keep all their files on the servers, because if one server catches on fire, the other has been backing it up during low-load periods or at pre-scheduled cron times. Monitor the load on your network and servers, and schedule backups and similar tasks off-peak.
IDE-based RAID is now cheap and reliable, and you can get awesome amounts of storage for reasonable money.
Example: a 12-channel IDE RAID 5 controller with 12 x 120 GB drives gives you about 1.44 TB raw; RAID 5 costs you one drive's worth of space for parity, so roughly 1.3 TB usable.
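To see where the usable figure comes from, here's a quick back-of-the-envelope check in Python (sizes in the drive maker's decimal gigabytes):

```python
# RAID 5 stripes parity across the array, but the overhead always adds up
# to exactly one drive's worth of capacity, however many drives you use.
def raid5_usable_gb(num_drives: int, drive_gb: int) -> int:
    """Usable capacity of a RAID 5 array, in GB."""
    if num_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (num_drives - 1) * drive_gb

raw = 12 * 120                        # 1440 GB raw
usable = raid5_usable_gb(12, 120)     # 1320 GB usable
overhead = 1 - usable / raw           # about 8.3% lost to parity
print(raw, usable, round(overhead * 100, 1))
```

The overhead fraction is 1/N, so it shrinks as you add drives: a 3-drive array gives up a third of its raw space, a 12-drive array less than a tenth.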
Keep several extra IDE drives lying around, all the same size, and order them in bulk, factory direct if you can.
Hot-swap trays are essential; read reviews and get the best RAID controller you can.
A lot of people on Slashdot have used 3ware and Promise, and Adaptec is always damn good too.
Example: order several cases of drives from the manufacturer. In IDE, stay away from Seagate, and from the Maxtor drives that were Quantum's.
A lot of people I know generally like Western Digital, IBM, and the better Maxtors.
Again, read reviews online and learn to form your own opinion. Learn from the pain of others; search newsgroups for the model numbers you are considering buying.
Never buy the newest, just-hit-the-shelf products; a lot of the time they are buggy and need BIOS updates.
I know, I just bought one.
Tried and true is what should go in a server. If it is not a pillar of praise, you do not want it in your server.
If you want to be 100% sure, go with SCSI, but be prepared to pay hideous amounts of money for equal storage.
Set the two RAID 5 arrays to snapshot each other every day; that way you can restore a backup in minutes, or incrementally.
The sheer volume of storage will let you do these monster backups cheaply and quickly if you use 64-bit controllers and 64-bit PCI slots.
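The daily server-to-server snapshot can be as simple as a cron job; this is a sketch only, with a placeholder hostname and paths:

```
# /etc/crontab fragment (sketch): mirror the primary array to the second
# server at 3 a.m., during the low-load window. "server2" and the paths
# are placeholders for your own layout.
0 3 * * * root rsync -a --delete /export/home/ server2:/export/home-mirror/
```

rsync only transfers changed files, so after the first full copy the nightly run is effectively an incremental backup.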
Dual Xeons for the servers is most likely best. As for waiting for AMD's Hammer, that has been postponed damned near indefinitely; I have heard third or fourth quarter.
When I worked for Cisco this is how they did it, and they snapshotted the desktops too.
Build the servers to the teeth: max RAM, dual or quad Ethernet NICs. Then bond the NIC ports and load-balance as needed. Set up a basic SNMP package with an e-mailer to let you know when boxen are burping.
Be careful not to overdo the SNMP; it can burden your servers or your network. Stick to the essential info; the books will clue you in on this.
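A minimal version of that "e-mail me when a box is burping" idea can be sketched in Python; the hostnames are placeholders, and a real shop would still want a proper SNMP package:

```python
# Tiny "are my boxes up?" monitor (sketch). Run it from cron and pipe the
# output to mail(1). HOSTS are hypothetical names for your own machines.
import subprocess

HOSTS = ["server1", "server2", "batch1"]

def ping_ok(host: str) -> bool:
    """Send one ICMP echo; True if the host answered within 2 seconds."""
    return subprocess.call(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

def alert_body(down: list[str]) -> str:
    """Format the alert message for a list of unreachable hosts."""
    if not down:
        return "all hosts up"
    return "DOWN: " + ", ".join(sorted(down))

if __name__ == "__main__":
    down = [h for h in HOSTS if not ping_ok(h)]
    print(alert_body(down))
```

Keeping the message formatting in its own function makes the logic easy to test without actually pinging anything.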
Don't bother with the expense of RDRAM; go DDR and use the extra money to buy more of it.
Hell, use the extra money for an extra server.
Fast RDRAM costs almost triple what DDR does, and RDRAM only outperforms in select apps.
Price-compare here: http://www.pricewatch.com
I'd recommend a top-of-the-line Ethernet switch; after all, what good are your servers if the network is crap?
Consider fiber GBICs from the servers to a blade on a nice Cisco switch.
Gigabit Ethernet over fiber is a beautiful thing to behold.
Consider a Gigabit link from server to server for the backups so they do not load the network.
You can just use a crossover cable if you use Gigabit over copper.
Cisco is expensive as hell, but they are good. Juniper and Extreme are good too, as long as you are running one protocol and not trying to build a hybrid multi-protocol network.
The "hire a real sysadmin" advice is true, unless you are one to like huge new challenges.
If you are stuck with this, you need to do a lot of reading. O'Reilly has some good books, but there are others you will need as well.
Don't skimp here: read the highly recommended Unix bible and any books it recommends.
A Unix admin's guide too, but these alone will not be enough.
You are about to read several thousand pages of material; you might point that out to the people who dumped this on you.
As for software for the servers, I'd do a lot of research; I have no recommendations, I am a hardware guy. Linux of course, and I am partial to Red Hat.
As for Sun boxes beating x86 boxes: you can build many x86 boxes and use something like a Beowulf cluster or www.mosix.com.
When it comes down to dollars, x86 is going to win. If you want support, and someone to hold your hand 24x7x365, go Sun.
Sun support, parts, and just about everything else is mucho dinero. I think if you get your learn on, you can better spend the money elsewhere.
The learning curve on this is going to look like the combined elliptical orbits of every planetary body in our galaxy.
Network security? Call in a well-known expert, have them set up a plan, and follow it religiously, or get hacked.
Security is almost becoming a science unto itself: a good firewall, well set up and maintained, plus IP access lists in your Cisco or other managed router or layer-3 switch.
Oh, and if you're religious, you might pray.
If you have any specific questions, just e-mail me at my addy on the webpage below.
If I do not know the answer, the *nix wizards who taught me will for sure. I am still learning myself, but if you're a real IT person, you always will be.
Peace...
Ex-MislTech
http://www.geocities.com/duanenavarre
Just get an A-brand... (Score:1)
But you'll spend a lot of time keeping up and matching parts together. Just get your x86 hardware from an A-brand and buy their server-grade stuff (good examples: NEC (good service!), HP/Compaq, even Dell).
Nowadays, in many applications, they can even outperform brands like Sun (with quality RAID controllers!).
Summary: (Score:2)
Why Linux is better ... (Score:1)
Linux is cheaper and more flexible. It is open source, and it's free except for the learning curve and the cost of migration. Migration even in the M$ world can be painful with their own damn OS; I have done a few of those as well.
1. Cost up front; TCO is variable (if you KNOW Linux well you can get low TCO; if you do not know Linux well, you won't).
2. More flexible (fewer lawyers).
3. Open source (you get the community).
Sun has five- and six-nines-reliability machines for telecom networks that cost as much as a Porsche apiece. If you want to compare $1,500 x86 boxes to those high-end, high-reliability hardware/software systems, Linux is going to lose. But dollar for dollar, the Linux system is going to win. There is PC hardware out there that is more reliable than most but does not carry a huge markup for it.
www.google.com runs on x86 hardware and Linux, and www.mosix.com does as well now. There are a lot of Squid boxes out there now, and no telling how many Linux/BSD firewalls and routers. Linux is building a pretty strong case here.
The city of Houston chose Linux over Sun, and so did Oracle just recently; for hardware, they went with Dell.
Sun's day in the sun may be at an end. The bottom dollar may get them laid off, just like it has many of my friends. I wish them the best of luck.
Peace...
Ex-MislTech
NIS ? Try LDAP (Score:1)
As for your NIS question, I would be very tempted to use LDAP. NIS is horrible, whereas LDAP is much easier to understand, implement, support, and interoperate with.
Check out LDAPGuru [ldapguru.com] and OpenLDAP [openldap.org].
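For the client side, the switch from NIS to LDAP mostly comes down to pointing the name service at an LDAP server. A rough sketch (the base DN and server name are placeholders, and the exact files vary by distribution):

```
# /etc/nsswitch.conf (sketch): resolve accounts from LDAP, with local
# files as fallback, replacing the old "nis" entries.
passwd:  files ldap
group:   files ldap
shadow:  files ldap

# /etc/ldap.conf (sketch): server and base DN are placeholders.
uri     ldap://ldap1.example.com/
base    dc=example,dc=com
```

You can sanity-check the server independently with `ldapsearch` before touching nsswitch.conf on the clients.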
As for the hardware, go for the biggest, baddest you can. Assuming you use RAID on your servers (make it hardware RAID) can you survive with only 72 GB of storage ?
Anyway, I'll have a think about your hardware some more.
cheers, Tim
"analog circuit designer...sysadmin" (Score:3, Funny)
"Hi, I was a desktop support tech, now I have been thrown into the job of managing our Windows network, how do I install that Active Directory thing?"
Windows has borne the burden of bad, inexperienced sysadmins for years; now Linux can share in the joy as it's more widely deployed.
one machine at a time (Score:2)
And if it isn't broken, don't fix it. That little Sun sitting in a corner running some server all by itself, just leave it alone until it starts causing problems (bad hardware, can't access NFS, whatever). Then, deal with it.
The other thing that goes into an "enterprise network" is lots of diversity. Enterprise networks pretty much always have a diverse mix of machines in them.
Upgrade what you need... (Score:3, Interesting)
As for the desktops, if you're careful, you can -stay- with Solaris _AND_ switch to fast, cheap x86 hardware for the workstations. You might be stuck with Linux on the Itanium compute server (which is only really going to be useful if you get >4GB RAM...), but you can keep the desktops virtually the same (assuming your software has Solaris x86 support).
If you're not really a 'qualified' admin, I'd try to change as little as possible. Doing LFS for a handful of compute servers is pointless; Take slackware or debian, do a custom kernel compile, remove some unneeded packages and services and then recompile a few key apps with excessive optimizations. You'll save yourself a load of time, have a system that actually works right and the engineers won't notice the difference. It might be different if you were building a cluster or something, but it's not worth your (or the company's) time in this situation.
Re:Upgrade what you need... (Score:2)
The old saying - Linux is only free if your time has no value.
Fundamental Points, sorry I'm late with 'em... (Score:1)
1. Don't buy an Itanic. If you're going with Opteron for its ultra-fast RAM (compared with Itanic) and drastic cost-effectiveness (ditto), an Itanic won't show you whether Opteron would be a good match: the architectures are totally different.
2. RAID storage: don't buy Promise 'raid' cards (and DON'T do 'raid' 0/1, do RAID-5). Why?
1. It ISN'T possible to use S.M.A.R.T. diagnostics on your drives with the Promise ones, at least; you'll crash the PCI bus and fatally hang the machine using Promise chips (I don't know about Highpoint or Adaptec), and...
2. They oppose open source drivers, and coders, for their own products [kerneltraffic.org].
Highpoint has only SuSE 7.3-8.0/Redhat-whatever (IIRC) drivers for their fast 1520 cards, but if you want compute performance, you want Gentoo... (and SuSE has been at 8.1 for ages now...)
Adaptec? I don't know if their cards have the same issues as the Promise/Highpoint ones, but their cards compete with Promise's, and so probably cut corners in similar ways (I'd love to see hard data on that point, though)...
3ware [3ware.com] are the only cheap (compared with SCSI) RAID controllers I know of that offer bootable, real, actual, S.M.A.R.T.-able RAID on ATA drives.
(I'd stick scads of 120GB IBM 180GXPs on 'em, because they run cooler than the 180GB versions and are better than most other drives: fast, quiet, reliable-looking, etc. Quiet means, to me, that wear and tear isn't happening as much, though I wonder at the no-Seagate rule expressed earlier... is it that fluid bearings fail soon, or that Seagate support is worthless from our perspective?)
3. SuSE or Gentoo are really your only choices, as far as I can see. Why?
1. Redhat's trying to microsoft Linux, ignoring standards and making its own way law, and Mandrake's a flaky (though fast) variant originally based on Redhat... I'm fed up with both, but YKMV (metric, here)...
2. SuSE includes damn near every program capability one could imagine, and has excellent hardware support (beyond any others')...
3. Gentoo's compiled specifically for the hardware you are running, and with --buildpkg you get to build on one machine, copy all the built tbz2's to all of the other (identical) machines, and just install 'em, and voilà: ultra-performance.
Misc Links:
Chassis [calpc.com], suitable for a lots-of-drives NAS type thing... or this one [skyhawkusa.com] for a well-cooled system (thick aluminum is a good conductor of heat, and that makes for a longer-living, less-downtime machine).
I'd use Athlons, but that's just me (Intel's murdered/crippled WAY too many CPUs and chipsets for me to be loyal to them), and I would use these HSFs [thermalright.com] with Verax.de [verax.de] (or Panasonic Panaflo) fans on 'em, because the noise machines make increases sick time and reduces health/sanity/productivity so damn much.
Consider using power supplies like these [enermax.com.tw], remembering that 1. they're REALLY quiet only when running at about 50% load, and 2. the UPS VA rating you need for each one is DOUBLE the delivered-watts rating of the power supply.
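That VA rule of thumb is easy to apply; a tiny Python helper, taking the doubling factor from the post as given (it's a conservative margin for power factor, not a law of nature):

```python
# UPS sizing rule of thumb from the post: VA rating = 2x the power
# supply's delivered-watts rating, to cover power factor plus headroom.
def ups_va_needed(psu_watts: int, factor: float = 2.0) -> int:
    """VA rating to shop for, given a PSU's delivered-watts rating."""
    return int(psu_watts * factor)

print(ups_va_needed(430))   # a 430 W supply: shop for roughly 860 VA
```
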
Also, you want LINE-INTERACTIVE UPSs on all machines (NO data corruption due to brownouts or other glitches).
I'd consider dual-CPU machines standard for the desktop, simply because even if one CPU were saturated, the machine would still respond; I'd stuff as much quick RAM into it as I possibly could (3GB per desktop, for engineers), and I'd ALWAYS use ECC RAM.
Consider this board [tyan.com] as something to compare against, with something like this KVR266X72C25/1G [valueram.com] or this [crucial.com], times three per motherboard.
Like the Marines: Capability-based, not capability-choked, right?
The best advice I've seen on this page is:
1. get a GOOD admin (character more than anything: values, sanity, cultural harmony with you; you CAN change someone's skillset, you CANNOT change their nature),
2. metrics: understand precisely what 'success' means, what the context is, etc., and
3. do it one unit at a time.
Oh, yeah, here's [amdboard.com] an Opteron-board news link... ( I'm waiting for lots-of-SATAs-on-board )...
Finally, change the ferro-resonant ballasts in your fluorescent lighting to RF ballasts, and switch to Philips TL-930 4' fluorescent tubes (Colour Rendition Index of 95, rather than the cheap cool-white's CRI of 50!), and your health will improve significantly (you can then ask for a raise for your increased effectiveness, see)... if you find the warm white of the TL-930s (3000K) not brilliant/awakening enough, then mix in a couple of TL-950s (5000K, midday sunshine/sky colour) to punch up your alertness.
More info here [www.akva.sk]
Re:Fundamental Points, sorry I'm late with 'em... (Score:1)
Damn, sorry I forgot:
IF you CAN find 'em, you can also use the Silicon Image SATA-chip-based motherboards/add-on cards with Linux (the 2.6 kernel is going to support them fully, though for the 2.4 kernel 3ware is your only open source choice, it seems, UNLESS you can get drivers specifically for that SATA board from somewhere).
The reason for using SATA rather than normal/parallel ATA? Very low CPU usage, that's why...
Re:Fundamental Points, sorry I'm late with 'em... (Score:1)
Fucked-In-The-Head mistakes I make when annoyed/tired:
THIS [tyan.com] is the board I was trying to recommend you try in your prototyper machine...
Why?
Athlon's floating-point-optimized CPUs are, I gather, drastically faster than Intel's streaming-multimedia-optimized CPUs for most engineering work, and the DUAL-CPU board will mean the machine still responds even when one CPU is saturated.
Why did I recommend 3GB? Because you can't functionally get 4GB into 'em: the PCI devices eat about 0.5GB of address space, so 3.5GB is as high as you can sanely go.
Sorry I can't provide the link to the quotes/benchmarks from that chip-designer who compared Intel boards and AMD boards, but damn, it was a significant difference between 'em.
Also, I'm REALLY recommending/seconding the advice to take it one unit at a time, but amplifying on it: build prototypes so you understand the gotchas involved and can get hard data on the different subsystems in your intended answer.
SCSI versus IDE (Score:2)
Reasons why people prefer SCSI:
Avoid cheap IDE RAID cards: they are often just conventional IDE cards with software RAID drivers. Look at the 3ware cards instead, and see if you can get Serial ATA cards and drives to cut down on the ribbon cables.
BTW, if you do the sums, you will find that the most cost-effective backup solution per gigabyte, counting media only (never mind the cost of the tape streamer), is a collection of IDE drives.
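Doing those sums is trivial; a sketch in Python with made-up placeholder prices (plug in real quotes):

```python
# Cost-per-gigabyte comparison (sketch). Prices below are hypothetical
# placeholders, not real quotes from this era or any vendor.
def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Dollars per gigabyte of raw capacity."""
    return price_usd / capacity_gb

ide_drive = cost_per_gb(110.0, 120.0)   # hypothetical 120 GB IDE drive
tape      = cost_per_gb(45.0, 40.0)     # hypothetical 40 GB tape, media only
print(round(ide_drive, 3), round(tape, 3))
```

Whichever way the real numbers fall, the point stands: run this comparison with current street prices before committing to a backup medium.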
Paul.