
SAN, NAS, Cost and Benefits?

luetin asks: "Our company is at the point where our storage and backup infrastructure is OK, but it won't be for much longer. We are looking into SAN, NAS, and variations thereof. We are a small IT department, with two sysadmins and two programmers. Right now we have about 2TB of data stored or circulating, and that will increase steadily in the coming years. Does Slashdot have experience setting up SANs? Tales of the costs and benefits of a SAN versus a gaggle of NAS boxes? Can a SAN be implemented by reasonably seasoned IT people, or is it too dark an art?"
  • by Dancin_Santa ( 265275 ) <DancinSanta@gmail.com> on Tuesday September 16, 2003 @11:54PM (#6982763) Journal
    Don't waste your time doing this kind of thing yourself. The final storage system will end up being cheaper, and likely more robust, if you go with an off-the-shelf solution.

    Companies specialize in building these types of systems. Hire one of them to help you set it up. If you have that much data and expect it to grow even faster in the future, don't bet on your own rinky-dink implementation; get a professional.
  • Don't do your own. It can be disastrous. XIOtech [xiotech.com] is awesome.
    • Counterexample:

      A coworker built a Linux machine, with a simple RAID setup using an "IDE splitter" card to mirror the two disks. It ran Samba, and was used as the CAD/CAE archive for the Electronic Design Automation department.

      Two years after he left that company, I asked a friend in the IT department how well that server was working. "Oh, it's great, we just reboot it once every few months." Unlike the proprietary massive RAID box (>8U of rack space) from a (fairly) well-known company, which had 27 or so
  • by Anonymous Coward
    You should consider automation for managing your storage deployment. It eliminates error-prone manual steps but still gives you the control you want as a storage administrator.

    There are some new players in this area, such as Invio [inviosoftware.com]. I think it's worth checking out, especially for a new deployment.
  • Opposing opinion (Score:5, Interesting)

    by GigsVT ( 208848 ) on Wednesday September 17, 2003 @12:50AM (#6983076) Journal
    I'd have to go against the well-funded flow here.

    Right now you can get 3TB+ of storage in a single SATA RAID5 unit from www.acnc.com, for about $11,000.

    You can get it with a SCSI or FC external interface. Use two of them hooked to two computers in two locations (preferably 300+ miles apart) with rdiff-backup if you want extra redundancy; a sketch follows this thread. We use local and remote mirrors for maximum protection. The space is so cheap, it's easy to keep extra mirrors.

    We've finally eliminated our last major SCSI and FC arrays, and I couldn't be happier. We're up to about 6 TB of total ATA and SATA storage now. Because it will be obsolete in less than 3 years, get cheap storage unless you absolutely need something that a cheaper system can't offer. That isn't much these days, now that 10K RPM SATA drives are out.

    As far as single-drive reliability goes, the first ATA unit we installed has been in service 2 years this month. We've only replaced two drives out of 48, and even then, the drives passed the manufacturer's factory recertification tests when they were run through them. And even if you think that's a higher failure rate than your experience with SCSI/FC, keep in mind that the cost is so much lower that it lets you have more mirroring redundancy, so individual drive failures are much less of an incident.
    • Dunno much about rdiff-backup, but there are other solutions, e.g. rsync. If you want the enterprise-level solution, Veritas Volume Manager can set up volume replication over a WAN to perform updates almost in sync (provided your WAN link stays up and doesn't get swamped!). EMC, Hitachi, NetApp & others will all sell you similar solutions for their arrays.
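    A minimal sketch of the mirroring approach described in this thread, driving rdiff-backup from a Python cron job. rdiff-backup's host::path remote syntax and --remove-older-than flag are real; the paths, hostname, and retention window here are hypothetical.

        # Sketch only: push an incremental mirror to a remote box, then
        # prune old increments. Paths and hostname are hypothetical.
        import subprocess

        SRC = "/export/data"                       # local array
        DEST = "backup@remote-site::/mirror/data"  # rdiff-backup remote syntax

        def mirror():
            # rdiff-backup stores reverse diffs, so older versions of
            # every file remain recoverable after each run.
            subprocess.run(["rdiff-backup", SRC, DEST], check=True)

        def prune():
            # Discard increments older than four weeks; --force is
            # required when more than one increment would be removed.
            subprocess.run(
                ["rdiff-backup", "--remove-older-than", "4W", "--force", DEST],
                check=True,
            )

        if __name__ == "__main__":
            mirror()
            prune()

    The same shape works with rsync -a --delete in place of the rdiff-backup call, at the cost of losing the recoverable history.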
  • I'd go this route (Score:4, Informative)

    by Sevn ( 12012 ) on Wednesday September 17, 2003 @01:12AM (#6983175) Homepage Journal
    Used Netapp brand Network Appliances [zerowait.com]

    We used to use NetApps at MindSpring to serve 80,000+ commercial webhosting clients. They are tough as hell, easy to maintain, last forever, and do everything right. You can mount the shares with NFS or CIFS. There's a web-based interface for configuration, plus a simple command-line interface. You can add drives and change volume sizes, inodes, etc. without shutting it off or losing a connection to it. The snapshot feature will eventually save your butt like it saved mine on many occasions (a sketch of a snapshot restore follows this thread). Hope this helps.

    • I remember that NetApp (and other appliance companies such as EMC) take the position that you must buy a new software license if you plan to use second-hand hardware. Just something to think about; it's probably worth negotiating that with them before buying the hardware.

      The machines are pretty good at what they do, but the software and support licenses are fairly expensive.

      • Not always. My company has bought several TB of used NetApp gear, and our vendor is able to transfer licenses with full authorization from NetApp. My guess is that this isn't that rare; they still get our money for support anyway.

        • The issues NetApp were having seem to stem from the sale of their hardware on eBay. Knowing what we use NetApp for at my employer and how much it's saved us on countless occasions, I see it as a worthwhile cost IF your company is big enough to take advantage of 1TB+ of active/non-taped data.

          We're about to implement several of their 24TB arrays, and wow, are they great.
          • I like 'em, but disks die way more often than I would think. Much of our storage is way offsite, as in, I couldn't get to it in an emergency within about 2 days. Their support is good, but they have serious problems with address changes. We've had about 6 or 7 instances of disks being shipped to an old address, and we just keep updating them. Gets old.

            ITMLS [itmls.com] can transfer licenses. I'm sure other places can too. They've always been good to us.
    • While I LOVE NetApps, even a used F820 cluster with the kind of storage he needs will be around $80K. That's a serious investment for a small IT department. And an F820 cluster is the minimum configuration I would recommend for that much data, as the smaller clusters would be maxed out on storage from the get-go.
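    A minimal sketch of the snapshot restore praised above, assuming a NetApp volume mounted over NFS. Data ONTAP exposes read-only snapshots in a hidden .snapshot directory at the root of the export; the mount point, snapshot name, and file path here are hypothetical.

        # Sketch only: copy a deleted or overwritten file back out of
        # the most recent hourly snapshot. All paths are hypothetical.
        import shutil

        MOUNT = "/mnt/filer/projects"
        SNAP = "hourly.0"            # typical scheduled-snapshot name
        LOST = "cad/board_rev3.dsn"

        shutil.copy2(f"{MOUNT}/.snapshot/{SNAP}/{LOST}",
                     f"{MOUNT}/{LOST}")

    The point of the feature is exactly this: no tape restore, no admin ticket, just copy the file back yourself.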
  • by maunleon ( 172815 ) on Wednesday September 17, 2003 @01:15AM (#6983185)
    1. What is the use of this data? Who accesses it? How many concurrent users? What transfer rates do you need? Will you need funky network cards (for example, 10Gb NICs, which may not work in some solutions)?

    2. Can you accept downtime? If not, how much redundancy do you need? How fast can you get replacement parts?

    3. Do you need specialized apps running on the machine (such as virus checkers, management tools, etc)?

    For a professional installation, I would say you at least want some redundancy. For example, an HP ProLiant DL360 G2 or G3 with redundant power supplies, redundant fans, and a drive array or two. The server itself is fairly cheap; what will cost you money is all the drives you will need to buy. It's a sturdy box, though.

    I don't mean to single out HP; you can look at other alternatives as well. We run an HP/Compaq shop, and I am familiar with them.

    All this redundancy helps in ways you don't expect. For example, tonight I was able to move the server from one rack to another without losing service: I disconnected one power supply and connected it to the new rack, then disconnected one network cable (the two onboard NICs were teamed) and rerouted it, dropped the other NIC and cable, mounted the server in the new rack, and connected the remaining cables. The users at the other end had no idea anything happened.

    This may not sound like a big deal to many, but for us to schedule a 30-minute shutdown of a critical server requires up to a month advance notice.

    You could of course accomplish the same thing using a cluster setup, but not without some major headaches. Clusters are cool on paper but for most users the bang-to-headache ratio is too low to justify it.

    • by Zapman ( 2662 ) on Wednesday September 17, 2003 @07:56AM (#6984420)
      These are all very important questions. The other thing that people forget about is the service agreement.

      My company paid too much money for an EMC array. We don't need the performance, and we don't need all the whiz-bang features.

      That doesn't mean it sucks, though. EMC calls us within 5 minutes of any hardware issue on the box. It's fault-tolerant to an amazing degree. The only downtime we've taken was for a scheduled major firmware update. The support agreement is amazing: if something dies, they call us and say, 'We can be on site with an engineer and the part in 2 hours. When do YOU want us to show up?'

      Aside from our database servers, none of the boxes we've put on EMC come close to pushing the throughput limits. We would have been much wiser to use NAS for most of the servers: much cheaper, good reliability, reasonable performance, and they can talk all the major file-sharing protocols (SMB, CIFS, NFS, etc.), and we could have spent less on the SAN.

      A poster above suggested either building the SATA arrays yourself or going with a small vendor. I poked around their website, and their support agreement says 'parts next day.' If the drive backplane blows out, 'next day' may or may not be good enough. People scream really loudly if email is down.

      The other thing is making sure you find someone who can reverse engineer your 'rsync' solution. This is one reason companies tend to prefer out of the box apps for system level stuff. They can send someone to training for it.

      • The other thing is making sure you find someone who can reverse engineer your 'rsync' solution. This is one reason companies tend to prefer out of the box apps for system level stuff. They can send someone to training for it.
        Yeah, that "rsync" is so rare and unusual that it takes a $10K certification to run it. Hint: anyone on the Unix side of the fence knew exactly what he was talking about. It's not real-time replication, though, just differential backups.
    • > the bang-to-headache ratio is too low to justify it.

      Just like my last girlfriend!
  • by yancey ( 136972 ) on Wednesday September 17, 2003 @01:41AM (#6983284)

    You're on the extremely low end of where a SAN becomes practical... and it may not be practical in your environment. If the SAN is really going to stretch your budget thin, don't do it!

    Xiotech's products are very easy to use, but costly. You'd be lucky to get the whole setup for under $80,000. The SAN hardware itself is reasonably priced, but you pay a license to use 1-8 servers, you pay more for 9-16, and so on; the software part gets expensive quickly. Also, be aware that Xiotech charges about three times the off-the-shelf price for drives, and they won't let you use off-the-shelf drives without voiding your warranty and losing support. They certainly make their money on the drives.

    HP/Compaq has the EVA series of disk arrays, which use "virtual array" technology very similar to Xiotech's. Again, very flexible, but expensive. At least HP doesn't make you pay more to connect more servers, and the charge for drives is a little more reasonable.

    It certainly seems that someone along the way forgot that RAID means Redundant Array of Inexpensive Disks. The whole point is to make a bunch of relatively unreliable disks into a very reliable whole, unless you just want speed and don't need the reliability.

    If you're thinking about a two- or three-server cluster and just need shared storage where you can add more drives, look at some of the small, inexpensive rackmount IDE RAID solutions that are available with Fibre Channel and an FC hub, but be sure to get references and find out who's using these things and what their experience has been. You can get two terabytes for under $15,000, but while some of these are good yet inexpensive, some of them are just cheap junk.

    These IDE RAID solutions do not provide the advanced features of a virtualized SAN, like changing RAID types on the fly (from RAID 5 to RAID 10, for example). However, you could easily spend $60,000+ on a Xiotech SAN, or you could spend the same amount and have eight terabytes in four IDE RAID modules. Your choice, but for a small shop, I say get the eight terabytes and set up mirroring across two of these RAID boxes (a capacity sketch follows this thread).

    • You're on the extremely low end of where a SAN becomes practical...

      I can second that. In our company we have 2 EMCs (50km apart, 1 was free, leftover from somewhere), but the switches and FC cards took a surprisingly large amount of money. And of course the disks are expensive too. No way to put in usual SCSI disks. It all still makes sense in our case, because we gain flexibility: adding some GB to that server, take some way here, change RAID levels (not a 1-step process though), copy the data to the re
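    A back-of-the-envelope sketch of the trade-off above. The module size and price are assumptions taken loosely from the parent post; the arithmetic is the point.

        # Sketch only: four IDE RAID modules used flat vs. mirrored in
        # pairs. Unit capacity and price are assumed from the post.
        UNIT_TB = 2.0        # usable capacity per module (assumed)
        UNIT_COST = 15_000   # dollars per module (assumed)
        MODULES = 4

        flat = MODULES * UNIT_TB        # all modules as plain storage
        mirrored = flat / 2             # module-level mirroring halves it
        cost = MODULES * UNIT_COST      # roughly one entry-level SAN

        print(f"flat: {flat:.0f} TB, mirrored: {mirrored:.0f} TB, cost: ${cost:,}")
        # flat: 8 TB, mirrored: 4 TB, cost: $60,000

    Even mirrored, that's twice the storage of the $60,000 SAN, with whole-box redundancy the single SAN doesn't give you.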

  • by Anonymous Coward
    Well, I would start with staying away from Compaq (now HP) SAN products. I was on the QA team for their SAN, and it was the worst software I have ever encountered in my 15 years as an engineer. Total crap.
    • I'll second this. We deployed an HP SAN in 2001, and I'm now trying to get funding to replace it with an EMC. It's been down several times, and it's always some horrendously complicated SAN thing (like the switch can't log on to the array). It's very hard to manage, and even experienced UNIX guys can easily do something fatally wrong without realising it.
    • Now that you mention it....

      I was almost contracted by LARGE MULTINATIONAL (convenience stores, all numbers in the name) in my country to do benchmarking of an HP/Compaq SAN product. They had moved everything to Oracle/Linux, and they just wanted assurance that the SAN was going to behave under the kind of demand they were going to generate (about 4,000 sales points, plenty of realtime applications on each, enough bandwidth to every one of those).

      When they saw the price on the benchmarks (about 15% the cost of the
  • The company I work for, InoStor [inostor.com], has the ValuNAS [inostor.com] line of products, and, as a Tandberg Data company, it can provide excellent integration of near-line and offline storage (tape).

    The 2.25-terabyte ValuNAS is only around $16K [dealtime.com]. It gives you a 2.4 GHz P4 processor, a gigabit connection, and multiple RAID levels, including multiple-disk redundancy (RAIDn [inostor.com]). It uses SATA technology to allow hot-swap drives at a fraction of the cost of SCSI.

    The iceNAS software is very easy to use, and supports SMB/CIFS (thro
  • I'm looking at rolling my own around a Dell server (dual power supplies, etc.; the brand isn't that important) and transtec's (www.transtec.de) SCSI-based direct-attached storage.

    It runs ATA disks with an internal Linux OS and a SCSI connection.

    That way I can control which protocols I run by putting *nix (FreeBSD being my choice, but a GNU/Linux distro is equally well placed) on the front-end server. It also links in with the rest of our systems, which are still *nix.

  • by Neck_of_the_Woods ( 305788 ) * on Wednesday September 17, 2003 @08:53AM (#6984747) Journal
    Do not, I repeat, do not go with a cheap-ass, nobody-supports-you solution of white boxes with IDE RAID... Don't FUCKING DO IT!!!

    You like your job? You want to keep it? Buy something with a 4-hour gold support contract, or close to it.

    I have done SANs for the last 5 years, and when you lose an MS module, or your HBA goes nuts, or your switch drops its config, you're in a world of hurt if you don't already know how to fix it. GET THE SUPPORT. You skimp on this and lose 4 terabytes of data, every SQL server in your network goes down, and you're screwed.

    I don't know how to say this any more strongly... NO NO NO NO... do not go cheap on this. Do not listen to people that say something cheap will work. Do not skip the support contract. If you do, the unforgivable will happen: Murphy will come down and take a giant shit right on top of you.

    Does a SAN work? Is it worth it? Hell yes it is. I have used one in a 40-million-hits-a-month webfarm back-ended by MS SQL, I have used it with Oracle, I have used it with AIX, and it improved everything everywhere, with everything from MS clusters to crappy little Novell file feeders. It is wonderful... when it is working. It works 99.9% of the time, but when it goes, it goes big, and if you can't fix it, you're standing there holding the bag for all your data. Keep in mind you have moved it all to a single point of failure... now are you really going to trust that to some Rube Goldberg POS some Linux zealots told you would work just as well?

    Go buy a big name if you're pushing around that much data, and buy a contract. Ask your boss whether it would be an issue if that data vanished overnight. Think about using Super DLT or LTO 10+ tape drives backing up over the fiber switch to reduce your backup time. Think about snapshot technology that basically does a slice-to-slice copy and then backs up offline. Think about near-line storage. These are the things you think about when you get this deep into it. Money will always be an object; ask them to put a price on the data. How much is it really worth? Then move forward to protect it as if you were holding cash.

    ---- wow, end rant here. Sorry guys, you just don't mess around with crap when you get to the level where you need a SAN. Cobbling together a bunch of junk is the fastest way out the door. You don't believe me? Then you have never put together a real system, with real-world money and real-world problems. They don't wait for you to find a driver, or a web page with the solution. They just show you the door.



  • B.O.M.

    3Ware 12-channel SATA controller
    9 × 250 GB drives
    9 sets of SATA cables
    4 sets of 3-drive 3Ware enclosures
    Big, huge Antec server case with dual power supplies
    Old Intel BX motherboard and Pentium II, ~1000 MHz
    New fans for the Pentium II
    Intel Gigabit Ethernet card
    FreeBSD

    Should be around $6000 for 2TB of storage with 1 hot spare.

    Damn noisy, but it works *very* well for us. Share it with the network via CIFS or NFS. (A quick capacity check follows.)
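    A quick capacity check for the B.O.M. above, assuming the drives are 250 GB (the original "250 Meg" reads as a typo, since the post claims 2TB total) and the 8 active drives sit in a single RAID 5 set; the RAID level is an assumption, since the post doesn't state it.

        # Sketch only: usable capacity of the B.O.M., under the
        # assumptions stated above (250 GB drives, one spare, RAID 5).
        DRIVES = 9
        SIZE_GB = 250
        SPARES = 1

        in_array = DRIVES - SPARES               # 8 drives active
        usable_gb = (in_array - 1) * SIZE_GB     # RAID 5 yields n-1 drives

        print(f"{in_array} active drives, ~{usable_gb / 1000:.2f} TB usable")
        # 8 active drives, ~1.75 TB usable (the quoted 2TB is the raw size)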

  • by Bravo_Two_Zero ( 516479 ) on Wednesday September 17, 2003 @09:44AM (#6985111)
    We use our SAN extensively. Most of our HP-UX systems have the preponderance of their disks on it, and a number of Windows systems do the same. However, the configuration and support of our HP XP disk systems is complex and expensive. In power alone, they have steep requirements.

    But we have technical needs that require consolidated storage. It's not for the understaffed or underfunded (we're on the edge of being both, too). It is a hassle, and it's difficult to do without a healthy support contract from your disk-subsystem vendor. Also, cheaper secondary-market devices don't get supported without a big "recertification" fee to the vendor.

    But how much of that 2TB needs to be online? Can it live on 99.9%-uptime systems? Can it be near-line? Does it need to be copied offsite for DR? Depending on those answers, you might be very happy with devices like the ones Raidzone sells (not an endorsement, since I have no hands-on experience with them).

    On the other hand, a NAS device on a reasonably affordable Fast or Gigabit Ethernet backbone (my network gurus assure me that those are two separate things... I think they might be high) could allow iSCSI, NFS, or CIFS mounting with no issue. In addition, you might be just as happy to have the device serve its own files rather than be mounted by other servers. It depends on what the data needs to do.

    The only big caveat is to find something structured around what you need to do with the data, and buy two or three of them. Even if redundancy isn't simple or obvious, you'll find a way to do it eventually. And you'll be much happier that you did.
  • Watch the standards (Score:3, Informative)

    by bluGill ( 862 ) on Wednesday September 17, 2003 @10:18AM (#6985400)

    I've been out of the SAN business for a year or two now. Back then, standards were a big thing, and some of the big names didn't interoperate well with anyone else. Ask the standards question, and don't buy anything until you have an answer you like.

    That doesn't mean you have to go with standard gear, but know what you are getting if you don't. Don't buy an old Brocade switch, for example, because it won't work (without upgrades, which may or may not exist) with anything but Brocade switches. Likewise, EMC isn't standard on all points, but they make some popular gear for a reason. It may or may not be worth the expense. (I'd say no, but I worked for their competitor.)

  • There is a new breed of SAN out there: iSCSI. No more expensive Fibre Channel. For *most* uses the speed difference is not worth it, and the cost savings are tremendous.

    1) No buying fibre hardware (hidden costs).
    2) The actual SAN is an order of magnitude cheaper as well.

    Microsoft has a software-based iSCSI initiator, and Intel and several others sell hardware initiators (think a NIC meets a SCSI card).

    Yes, it is new tech. Obviously many here haven't heard about it. I know some people have allergies to ne
  • by rakerman ( 409507 ) on Wednesday September 17, 2003 @03:54PM (#6988469) Homepage Journal

    Every machine directly connected to the SAN will need a Fibre Channel HBA. If you have more than a couple of machines, you may also need a Fibre Channel switch. The FC gear is pretty expensive.

    Also consider diagnostics - you're no longer monitoring an Ethernet network - how are you going to track down problems in the storage network?
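    The monitoring point deserves emphasis: FC switches and array controllers do at least expose an Ethernet management port, so you can watch those even when you can't see the fabric itself. A minimal sketch, assuming hypothetical management IPs and ports:

        # Sketch only: poll the Ethernet management ports of hypothetical
        # SAN gear and report anything unreachable.
        import socket

        GEAR = {
            "fc-switch-1": ("192.0.2.10", 23),   # telnet management port
            "array-ctrl-a": ("192.0.2.20", 80),  # web management port
        }

        def reachable(host, port, timeout=3.0):
            try:
                with socket.create_connection((host, port), timeout):
                    return True
            except OSError:
                return False

        for name, (ip, port) in GEAR.items():
            status = "ok" if reachable(ip, port) else "UNREACHABLE"
            print(f"{name} ({ip}:{port}): {status}")

    This only proves the management interface answers; real fabric diagnostics still mean learning your switch vendor's tools.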

