Storage Area Network Solutions?

TJPile asks: "I work for a large advertising company with offices all over the US and soon Europe and Asia. Due to our growth in the past year, our current archive/storage system cannot fulfill our needs. Others on my IT team have been talking with Dell about a storage area network. I will be the one administering this system, and I was wondering if anyone in the Slashdot community has dealt with this before. I know we will be needing some heavy metal along the lines of an SMP Sun or SGI box. We need a system that can support (at max) about 100 simultaneous users working on large image files stored on the server. We also need cataloging software that will allow PC/Mac users to browse documents via thumbnails and job numbers. What do you guys think?" The previous two articles that touched on this subject didn't get much traffic and were posted at least six months ago. Has the intervening time brought advancements in this area?
  • by jfrisby ( 21563 ) on Wednesday October 25, 2000 @01:55PM (#675751) Homepage
    We have a bunch of NetApps where I work (don't ask) -- they are blindingly fast (something like 6,000 NFS ops/sec with 10ms latency), with oodles of space (hundreds of GB or more) and loads of redundancy. You can get them with dual fibre channels from each head unit to the disk shelves, and you can cluster two of them together so that if a head unit fails, the other just takes over...

    Very impressive, relatively cheap... And oh yeah -- no Sun box required since they just hook straight up to your LAN. (100Mb, or 1,000Mb Ethernet...)
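    Quick back-of-the-envelope on those numbers (my own arithmetic, not NetApp's spec sheet) -- by Little's law, throughput times latency gives the number of ops in flight:

        # Little's law sanity check on the quoted figures
        ops_per_sec = 6000    # quoted NFS ops/sec
        latency_s = 0.010     # quoted 10ms per op
        print(ops_per_sec * latency_s)  # ~60 ops in flight at once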

    -JF
  • by human bean ( 222811 ) on Wednesday October 25, 2000 @02:06PM (#675752)
    Designing a storage-area-network that has better performance than a traditional client-server storage model is not a trivial task. Yes, you can put all the pieces together in the right ways, but can you make it go faster and hold more?

    A major part of the operation of this sort of system is tuning it to the actual data and use patterns of your particular users. Note that not all of your users are the same, and that you will probably have to compromise on the tuning. Even though systems like this are supposed to be redundant, don't forget to build in enough network capacity for data backup servers.

    Best bet is to buy a Network Appliance or one of the Sun arrays, and get it over with. Of the two, I would go with the NetApp boxes.

    The other factor is the networking outside of your SAN; that is, now that you have the data online, how do your processors get to it? If you are doing image-at-a-time load-work-save sorts of work patterns, you will have less of a traffic load. If you are doing batch image processing one-right-after-another, you are going to need all the bandwidth you can get. Doing larger geo models, we found that gigabit Ethernet was needed from the workstations (SGI Octanes) down to the storage farm. Trunked Ethernet just didn't do it.

    YMMV. Get a sniffer and do a study on your traffic to get a feel for what you need.
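    If you want a starting point for that study, here's a rough Python sketch that tallies bytes per source host from a plain-text "tcpdump -n" capture. The line format is an assumption (it varies by tcpdump version), so treat the regex as a starting point:

        # Tally bytes per source host from a text "tcpdump -n" capture on stdin.
        # Assumes lines like: "... IP 10.0.0.5.1023 > 10.0.0.9.2049: ... length 1448"
        # Usage: tcpdump -n -r capture.dump | python tally.py
        import re
        import sys
        from collections import Counter

        LINE = re.compile(r"IP (\S+?)\.\d+ > (\S+?)\.\d+:.* length (\d+)")

        bytes_by_src = Counter()
        for line in sys.stdin:
            m = LINE.search(line)
            if m:
                bytes_by_src[m.group(1)] += int(m.group(3))

        for host, nbytes in bytes_by_src.most_common(20):
            print("%-16s %8.1f MB" % (host, nbytes / 1e6))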

  • NetApps are NAS (network attached storage), not SAN (storage area network) -- different things. A NAS stores all the data in one node, whereas a SAN shares the data between multiple nodes. A SAN provides redundancy in that data is duplicated within the nodes, so if one node goes down, the data is still accessible.

    However, I think NetApps are GREAT, and should work well in the setting in question, provided that lag from the other side of the ocean isn't going to be an issue. NetApps are Alpha boxes with tons of disks on them; they can be clustered for redundancy and are fast and reliable. I've gotten 12,000 ops/sec from them.

    But since the situation involves two continents, I believe a true SAN should be the solution. Maybe outsourcing to a company like StorageNetworks [storagenetworks.com] would be an option.

  • Any firm *serious* about shared storage uses EMC [emc.com]. This isn't snake-oil zealotry or whatever. If any of you know engineers at big-name firms, or are at one yourself, ask them what they use for shared storage, and the majority answer will be EMC. You simply cannot beat the performance, reliability, and scalability. If this interests you, contact me via email and I'll put you in touch with my EMC rep, a very knowledgeable, kind, and no-nonsense guy.

    Regards

    PS: I know this sounds like a lot of marketing, but when you have a situation as complex as massive shared storage that needs uber uptime and so on, I can't really convey how happy it makes you to find a solution that Just Plain Works. I call it like I see it, no sunshine blowing.
  • SAN provides redundancy in that data is duplicated within nodes and if one node goes down, the data is accessible.
    NetApps are Alpha boxes, with tons of disks on them, they can be clustered for redundancy and are fast and reliable

    Perhaps you might want to give a clearer explanation of the differences? ;-)

  • I'll second that. We've got a NetApp and a Digital (Compaq?) StorageWorks array. It's definitely worth the money to buy these. The Sun vs. Alpha argument is always fun, too!
  • I have been researching this very topic big time for my company. We have about 1 terabyte of storage needs. That doesn't sound like a lot, but we need I/O speed, not capacity, so I need to spread my storage across many devices. From the sounds of what you need, it doesn't look like you really need a storage area network. SANs these days require Fibre Channel networks; they can't run over IP. I would suggest you set up the following:

    A Sun Enterprise 4500 with 4 A5500 storage arrays, mirrored, connected to the 4500 via Fibre Channel. The 4500 would have 16 processors and 4 gig of RAM. That should easily handle 100 high-end graphics users and permit the PC users to browse thumbnails via CIFS/NFS.
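    For a rough feel of the aggregate bandwidth those 100 users imply, this is the kind of arithmetic I'd run first (all numbers hypothetical -- plug in your own):

        # Back-of-the-envelope aggregate bandwidth (hypothetical numbers)
        users = 100            # peak simultaneous users
        file_mb = 70.0         # typical working image size, MB
        open_time_s = 10.0     # tolerable time to open a file
        loading_frac = 0.10    # guess: ~10% of users loading at any instant

        mb_per_s = users * loading_frac * file_mb / open_time_s
        print("~%.0f MB/s aggregate (~%.0f Mb/s)" % (mb_per_s, mb_per_s * 8))
        # ~70 MB/s, i.e. gigabit territory on the server side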

    Malice95
  • I'll have to second his post. I've used NetApps before, and they're very kickass. They use a 250 MHz MIPS chip (speed might have gone up, but it's definitely MIPS, not Alpha), and have gigabit or 100bT interfaces.

    Everything jfrisby said is straight-up good.
    -----
  • The most closely related articles to this story could be Affordable Backup Hardware for Today's Systems? [slashdot.org] and Hardware To Archive/Manage Large Collection Of Images? [slashdot.org].
  • The biggest thing about EMC I didn't like was that they won't let you near the box. You have absolutely no access to its configuration. Need to rebind your disks into different RAID sets? You've got to call EMC to come down and do it (at a hefty cost, too).

    An alternate solution is their Data General division, which makes the Clariion [emc.com] disk arrays and SAN gear. We actually bought one of these puppies, and it should be delivered sometime next week (at which point no one will ever read this thread again, since /. threads have a half-life of about two hours, so my following up with status is useless...).

  • OK.. I guess I wasn't clear enough...

    The big difference is that a NAS is basically a fileserver, and a SAN is a network of disks. A SAN needs to be attached via a traditional disk attachment (SCSI or Fibre Channel), while a NAS is Ethernet based (be it 10/100/1000, etc.).

    The advantage of a NAS is definitely cost: it's cheaper and way easier to install and maintain.
    The advantage of a SAN is that it is a network of "disks" or nodes, with data replicated between those nodes. You can have the nodes all in one place, or in separate datacenters (London, NY, LA, etc.). The SAN takes care of making the data available to the servers attached to it, no matter where the location is, but a server -must- be attached to the SAN in some way to access or share the data.

    A cheaper alternative to NetApps are the new IDE NAS boxes. Maxtor and Quantum have made these with IDE disks, based on Linux, RAID 5 in most cases, going up to 480GB in a 1U or 2U case. Pretty cheap and quick storage, considering it's plug and use.

    The advantage of the NetApps is that they have a very cool filesystem/operating system with snapshot capabilities. It literally takes a snapshot of the disk at a given time, say every midnight, and keeps the snapshot accessible from anywhere in the filesystem under .snapshot. This makes for easier backups (no file-locking problem) and easy file retrieval when things change.
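    To give a feel for why that's handy, here's a sketch of the self-service restore .snapshot makes possible (Python; the snapshot name "nightly.0" is a common convention, not a given -- check your filer's actual snapshot names):

        # Restore a file from its directory's .snapshot subtree.
        # Snapshot layout and names are illustrative; check your filer.
        import shutil
        from pathlib import Path

        def restore(path, snapshot="nightly.0"):
            p = Path(path)
            snap_copy = p.parent / ".snapshot" / snapshot / p.name
            if not snap_copy.exists():
                raise FileNotFoundError(str(snap_copy))
            shutil.copy2(snap_copy, p)

        # e.g. restore("/mnt/filer/jobs/12345/hero.tif")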
  • A friend of mine tells a story about a guy he knew who worked at a TV station in Florida when that plane (ValuJet?) crash happened about 4 years ago. Anyhow, their SGI server that housed all the video they were streaming over the Internet was filling up, and the admin had a flight to catch. The TV people wanted him to add new disk space and grow the XFS filesystem after the newscast. Since the guy had that plane to catch, he did it DURING the newscast: dynamically grew the XFS filesystem while it was mounted and serving data all over the net under heavy usage, and NO ONE NOTICED! Pretty schweet.
  • If you're going to buy an SGI box anyway, SGI can sell you the software too. It sounds like you need something like SGI's StudioCentral [sgi.com] software. It is a digital asset-management package that can run on SGI servers, and it supports Windows and Mac clients. You can extend it with C++ or Perl. It can do versioning and thumbnails.
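    To illustrate the thumbnails-plus-job-numbers idea in the abstract -- this is a hypothetical Python sketch using the Pillow imaging library, not StudioCentral's actual API, and it assumes job numbers lead the filename (e.g. "12345_hero.tif"):

        # Hypothetical thumbnail catalog keyed by job number (not StudioCentral's API).
        # Assumes filenames like "12345_hero.tif"; requires the Pillow library.
        from pathlib import Path
        from PIL import Image

        def build_catalog(src_dir, thumb_dir, size=(128, 128)):
            """Return a dict mapping job number -> list of thumbnail paths."""
            catalog = {}
            out = Path(thumb_dir)
            out.mkdir(parents=True, exist_ok=True)
            for img in Path(src_dir).glob("*_*.tif"):
                job = img.name.split("_", 1)[0]
                thumb = out / (img.stem + "_thumb.jpg")
                with Image.open(img) as im:
                    im.thumbnail(size)                # shrink in place, keep aspect
                    im.convert("RGB").save(thumb, "JPEG")
                catalog.setdefault(job, []).append(str(thumb))
            return catalog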

    ccg

  • Check out Data Direct Networks [datadirectnet.com]. The SDD is an I/O monster -- 20 pipes of Fibre Channel going to the disks, 8 channels of Fibre Channel going to hosts (or switches, etc.). It can handle GIGs of cache too -- very cool stuff. A lot of Internet media serving, TV/film media production, etc., uses this gear. The SANds side provides filesystem sharing and the like, and works great with the hardware and SAN features of the SDD.
  • we will be needing some heavy metal along the lines of an SMP Sun or SGI box. We need a system that can support (at max) about 100 simultaneous users working on large image files stored on the server.

    The newspaper I work for has 75-85 ad builders (30 or so a shift) working on Macs. They regularly work with full page ads that are more than 70 meg each (color doubletruck runs 230 meg or so). For the past four years, they've been using a single processor (486DX-66) Novell server (hardware by Tricord) with 270 gig of SCSI disk space and 512 meg of RAM. It has a pair of 10mbps NICs. It has an uptime of more than two years. This machine is probably half of what you need. It's slow but rock solid.

    We're replacing it before the end of the year with a big IBM Netfinity with four PIII processors, 320 gig of disk space, four 100mbps NICs (one per ad subnet and a hot spare) and a gig of RAM. I suspect that this would do what you need it to do and then some.

    We also need cataloging software that will allow PC/Mac users to browse documents via thumbnails and job numbers.

    CCI's AdDesk [ccieurope.com] is your overall solution. We (the Orlando Sentinel [orlandosentinel.com] who I am not speaking for) have used it for several years now. If you look at the top 25 newspapers in the world, more than half will be using CCI's products for either Editorial or Advertising.

    AdDesk ain't great, but it's the best available in terms of a full-featured, highly expandable, highly customizable solution. It's built on top of standard applications (Photoshop, Illustrator, etc.) held together by common tools (Oracle, TCL, etc.), running on either AIX or Solaris.

    What do you guys think?

    I think you have two choices. You can go cheap, buy some heavy hardware and put an operating system on it. Or, you can go with an AdDesk-like solution, spend a bunch of money and have a real advertising creation environment. It all depends on the size of your budget.

    InitZero

  • I'll add to this one. I run XFS, and it's got to be the BEST filesystem ever. XFS gives you real-time *guaranteed* transfers from the disk... they call it REACT or something silly. I'll trust XFS over any other kind of filesystem; it's bailed me out more times than I can remember. I've had loads of problems with SANs, but XFS and AFS have always worked.
  • Strictly speaking, Storage Area Networks are not the same as TCP/IP attached NFS or CIFS storage (which are typically referred to as Network Attached Storage - NAS).

    NAS is nice since there are a lot of simple off-the-shelf solutions that let you put a bunch of disks behind a server that many computers can read and write at the same time. NFS is simple old technology with support in any $500 Linux box with a $20 Ethernet card. The disadvantage is that it is slow... as much as 100x slower than local hard drives, due to all of the networking overhead.

    True SAN gets rid of the TCP/IP and NFS and just directly attaches disks to computers using something like a Fibre Channel network (SCSI-3). This is blazingly fast (approximately local HD speeds), but requires more complex networking. Since each computer is basically mounting SCSI devices, you also don't have an easy way for multiple computers to read and write the same SAN storage. Shared-SAN software is in the pipeline from Tivoli and Veritas, but you might want to take a look at the Global File System [sistina.com], which lets multiple Linux boxes on a Fibre Channel SAN (or a SCSI bus!) read and write the same disks.
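    If you want to see the gap for yourself, a crude probe like this (Python), timing a big sequential read from a local path versus an NFS mount, will show it. Use a file larger than RAM, or caching will flatter the numbers:

        # Crude sequential-read throughput probe (MB/s).
        import time

        def read_mb_per_s(path, block=1 << 20):
            total = 0
            start = time.time()
            with open(path, "rb") as f:
                while True:
                    chunk = f.read(block)
                    if not chunk:
                        break
                    total += len(chunk)
            return total / (time.time() - start) / 1e6

        # e.g. compare read_mb_per_s("/tmp/big.img") and read_mb_per_s("/mnt/nfs/big.img")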

  • Visit FileNET's webpage (www.filenet.com). They have been in the business of LARGE-scale imaging and document management for over 10 years. Their server products run reliably on several different platforms.

    I have installed and supported systems of 250+ users that ran fine on a few (2 or 3) NT boxen; I can just imagine how they scream on HP-UX, AIX, or Solaris (no Linux support yet ;( ).

    Their software scales better than any other imaging product out there, and an imaging solution is really what you are after, not a SAN.

  • My NetApp NetFiler 720 runs an Alpha

    "Now, I hope and pray that I will, but, today I am still just a bill"

  • Wow... hehe, okay. Perhaps the older ones use MIPS, the newer ones Alpha...
    -----
  • We had a Clariion installed the day before yesterday; it kicks ass -- pretty simple to configure, and it performs like a dream. Formatted a 35GB RAID 1+0 UFS filesystem on a Sun box in 1 minute. How sweet is that? The box was like 600kg, though ;)
  • Like several other people posted, you do not want a SAN. A SAN requires a really high-speed, low-latency link; it's almost always a fibre setup.

    What you're -really- looking for is a combination of technologies. You need fileservers at each of your locations to serve the local clients, and then replication between the fileservers. You do not want to try to run NFS over a WAN, especially not over the distances you are talking about; the timeouts will drive you mad.

    So, I recommend NetApp. They're a finalist in our search for home-directory storage, so we don't own any yet, but I've read a lot. They have a technology that keeps multiple NetApps in sync over long distances. Go this route and you'll have high-speed storage for your local clients while keeping the data in sync between your locations (they send block-level deltas, if you care).
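    The block-delta idea, reduced to a toy in Python (nothing like NetApp's actual wire format, just the concept of shipping only the blocks that changed):

        # Toy block-level delta: yield (block_number, data) for blocks that differ.
        BLOCK = 4096

        def changed_blocks(old_path, new_path):
            with open(old_path, "rb") as old, open(new_path, "rb") as new:
                n = 0
                while True:
                    a, b = old.read(BLOCK), new.read(BLOCK)
                    if not a and not b:
                        break
                    if a != b:
                        yield n, b
                    n += 1

        # A mirror target would seek(n * BLOCK) and write each changed block.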

    So, check NetApp. Read their white papers. This has already been solved. Don't reinvent the wheel.
  • EMC is the choice of dot-coms with more money than sense, as well as easily frightened IT execs from the "you can't get fired for buying IBM" school of thought.

    EMC will sell you an incredibly expensive system that is not particularly fast (for what you pay) and does not play well with any other type of storage system (especially in SAN environments).

    I can think of very few situations where EMC's advanced software features, bulletproof uptime and excellent service organization make sense and are worth the obscene price premium that EMC charges.

    Nine times out of ten, especially when thinking about Fibre Channel and SAN gear, it makes much more sense to deal with a vendor that makes interoperable hardware based on open systems and standards. You can save tons of money, get better performance, and avoid vendor lock-in all at once.

    My 7TB+ SAN needed to play nicely with Alpha, Sun, SGI, HP, Linux & Wintel all at once. I ended up going with Brocade FC switches and Compaq StorageWorks disks & controllers.

    Mind you, I like the EMC product line, but my personal opinion is that EMC is not worth the price in most cases -- especially where the customer has no need for or interest in the advanced software and hardware features that EMC sells as options.

    Just my $.02.

"If I do not want others to quote me, I do not speak." -- Phil Wayne

Working...