AMD Hardware Technology

How Many Desktop PCs Can One Server Replace?

NZheretic asks: "HP has just announced that they have upgraded a four-processor server with Advanced Micro Devices' new dual-core Opteron. The amount of processing power a multi-processor, multi-core system can deliver seems like a waste for most traditional servers, which are more likely to suffer from disk access bottlenecks before lack of processing power becomes a problem. But what if that power could be delivered directly to desktop users? The HP ProLiant DL585 supports eight 64-bit PCI-X I/O slots (six 100 MHz, two 133 MHz). The ATI FireMV(TM) 2400 supports quad DVI/VGA displays on PCI Express. Assuming that you leave one PCI-X slot for a multiport USB card, that's up to twenty-eight displays with USB keyboards, mice and headsets that could theoretically replace twenty-eight networked desktop PCs. Using DVI and USB extenders, not all of the user stations would have to be within the 7.5 meter distance imposed by the DVI cable limit. The only OS currently capable of supporting this many displays is Linux. What limits would be imposed by the hardware and PCI-X bottlenecks? Taking into account the added cost of the HP and ATI hardware, could it deliver a great reduction in the total cost of ownership over both traditional PCs and thin client systems? How many desktops is it practical for a high-end server to directly replace?"
  • by TripMaster Monkey ( 862126 ) * on Friday April 22, 2005 @04:34PM (#12317276)

    I'm sorry, but this is one of the dumber products I've seen out there.

    The software retails for $99 per workstation, and this gets you only one year...additional years are $29, again per station.

    Add to that cost the cost of all those dual-headed video cards, USB cards and hubs, and DVI and USB extenders, and your total cost is not at all inconsequential. And for all this work, you have a maximum of 10 users to a server? Plus, those users are physically tethered to the server, severely restricting your network design.

    It seems to me that all this and a lot more could be accomplished with less money and less hassle via some very low-end systems and VNC. In fact, that's how I'm accomplishing it right now.
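
    Roughly, the setup looks like this (a minimal sketch, assuming a stock vncserver/vncviewer install; the hostname is a placeholder):

    # on the shared box: one VNC session per user
    vncserver :1 -geometry 1280x1024 -depth 24
    # on each low-end client: attach to your session
    vncviewer server.example.com:1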

    • Actually, it might be cheaper than buying 10 computers. Depends on the actual prices.

      For a regular setup, assume a 3-year upgrade cycle and a $1200 computer (not unreasonable).

      As long as you can do it for under $12k, you're going to come out ahead. I don't know enough about the equipment to price it all out, but I don't think it sounds that unreasonable.

      • $1200??? I said low-end!

        If you think that's a good price for low-end, I've got some Celeron 300s laying around here you might be interested in...
        • I meant $1200 for the PCs that the server was supposed to replace.

          • OK...that makes more sense...kinda defeats the whole idea of the thin-client/dumbterm solution, but that raises an important question...is the thin-client/dumbterm model even viable anymore in today's era of low-priced desktop systems?

            • OK...that makes more sense...kinda defeats the whole idea of the thin-client/dumbterm solution, but that raises an important question...is the thin-client/dumbterm model even viable anymore in today's era of low-priced desktop systems?


              Last I saw, you could pick up a new box (no display) for 250 dollars. That included Windows XP.

              That's retail at CompUSA.

              • Say it with me:

                "TCO > Cost of hardware and software."

                The thin client model mainly works because, properly done, the users cannot break their PC. Even if they throw it out the window, or set fire to it, they cannot break it. (If they do that you get another thin client out of the cupboard and plug it in, presto, back online in 5 minutes)

                However, the thin client model does mean that the direct attach model has a hard time

                Also, there are other reasons for using thin client. For example, I have several
        • by llefler ( 184847 ) on Friday April 22, 2005 @05:16PM (#12317770)
          Why not new equipment? I recently purchased a brand new thin client (rdesktop, xterm) for $150. It includes keyboard and mouse and NO moving parts. I figure the useful life will be in the 7-10 year range. Connect that to a Linux server using X or a Windows box with Terminal circus.

          It gives the same end result without messing with exotic hardware and configurations, and you only have to be as close as your nearest ethernet port.
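
          For the curious, the two flavours of connection look roughly like this (a sketch; hostnames are placeholders, and XDMCP has to be enabled on the Linux box):

          # full-screen RDP session to a Windows terminal server
          rdesktop -f termserver.example.com
          # or a remote X login from a Linux/Unix host via XDMCP
          X :1 -query linuxserver.example.com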
          • Which model did you get, and are you happy with it?

            I've been considering an option like that for the bedroom for a while now, but there's a morass of info to wade thru. Couple $150 with the cost of a cheap 15" LCD and it's in my price range :)

            SB
            • Which model did you get, and are you happy with it?

              NTAVO [ntavo.com]

              I've used it with TS and X, and it works pretty good. The fit and finish on the hardware is excellent. It boots linux and has an interface that is really familiar to Windows users. (start button, status bar, icons on the desktop for various server connections)

              I bought the first one to demo to clients, but I think I'll get a couple LCDs and put one in the living room and another in the kitchen.
              • That is some sexy hardware! I can't even build my own Via rig that cheaply.

                My one gripe is that the LCD model would be better replaced with a notebook at that kind of pricing. Still damn sexy though.
                • My one gripe is that the LCD model would be better replaced with a notebook at that kind of pricing.
                  What is a notebook going to do for you if you have to have an infrastructure to be able to use it? I do agree that the price is getting up there for that, though.
    • The cost sounds pretty reasonable for a corporate installation, and I think it's safe to assume that with a subscription, you get the software updates and patches as part of the deal. For the type of business that will buy this thing, these numbers just are not that bad. HP products are not "free as in beer".
    • I'm running a single-CPU 3.0GHz Xeon with an SATA RAID mirror under VMWare Workstation 5 serving 3 copies of Windows using UltraVNC.

      Works OK. $2000 box. Snapshots and backups make it worthwhile.

      So a 4x2-way HP with Ultra320 SCSI drives ought to be able to handle a dozen users easily.
      • I'm running a single-CPU 3.0GHz Xeon with an SATA RAID mirror under VMWare Workstation 5 serving 3 copies of Windows using UltraVNC.

        Have you tried using Remote Desktop instead ? It's probably a *lot* faster.

        • Have you tried using Remote Desktop instead ? It's probably a *lot* faster.

          Are you familiar with UltraVNC? It works as a device driver which helps with the latency tremendously. It's different than regular VNC.

          Over a modem I've been pretty happy with Remote Desktop, but this is over a switched 100 Mbit full-duplex network, and I don't see any downside in this case to using the open standard, always my first choice.
          • Will UltraVNC:
            mount local hard drives to the 'serving' machine automatically?

            load local printers to the apps running on the serving machine automatically, then remove them when disconnected?

            Not tie up the serving pc from being used locally?

            At my work (I'm the IT guy, in addition to a lot of other stuff) the web is LOCKED DOWN via a 3com router with a short 'allowed' URL list.

            however, I can RDC into my XPPRO 'chine at home, and do all my webbrowsing,email/whatever- while my wife plays DiabloII on t
            • All VNCs I've seen, just like pcAnywhere, require tying up the computer from local use...

              Well, when you're running several virtual machines on a single server there's not much point to local access, now is there? Terminal Services is good for certain problems, but that doesn't mean it's the right choice for every problem.

              But yes, someone at the console could interact with the remote users, should it be needed.
    • It's much better than 10 machines because if only one person is using it, she has 100% of the server's power, much like cable internet or shared hosting.

      But VNC is a better idea. We provide ~20 users with a terminal services desktop on a Windows 2000 server (dual PIII 1.4GHz), and it has been very snappy and impressive for three-odd years now. No plans to upgrade it. At one time we had ~20 Pentium 1 workstations using the server; savings and easy administration.

      The only issue appears with games and movies. Movi
    • Making Linux support multiple screen/keyboard/mouse sets does not require an expensive product. There are plenty of projects out there that do this.
      • I thought there was a standard one involving lots of USB keyboards and mice. I use it here on a machine with four keyboard/mouse/screen combos.

        Section "InputDevice"
        Identifier "Keyboard0"
        Driver "evdev"
        Option "Device" "/dev/input/event0"
        Option "AutoRepeat" "500 50"
        Option "XkbLayout" "us"
        EndSection

        Section "InputDevice"
        Identifier "Keyboard1"
        Driver "evdev"
        Option "Device" "/dev/input/event1"
    • I'd have to agree. At home, I've replaced five PCs with one dual PII running Red Hat 9. The remote desktop sessions are handled by GDM calling Xvnc instead of X. If I were given a dual-core Opteron system, I could probably easily do at least 28 sessions without any of that extra hardware. The only thing required would be cheap thin clients for the users.
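
      A minimal sketch of that GDM arrangement (assuming the old gdm.conf [servers] syntax; the geometry and depth are just example values):

      [servers]
      0=Standard
      1=VNC
      2=VNC

      [server-VNC]
      name=VNC session
      command=/usr/X11R6/bin/Xvnc -geometry 1024x768 -depth 16

      Each thin client then just points vncviewer at host:1, host:2, and so on.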
  • by invisik ( 227250 ) * on Friday April 22, 2005 @04:43PM (#12317375) Homepage
    ..and on my desk, then I'd say 1.

    -m
  • Can't you already set up Xorg to configure which monitors go with which keyboards/mice, etc.?

    The only issue would be the USB/user thing... but with hald/gnome-volume-manager (not sure of the KDE equivalent), this can be worked around...

    Why buy this product when I _THINK_ you can do this already?
  • ltsp (Score:3, Insightful)

    by tka ( 548076 ) on Friday April 22, 2005 @04:48PM (#12317441)
    The power can be delivered directly to users via the Linux Terminal Server Project. Use a gigabit network and you can have lots of users. But why would someone buy it if it has too much processing power for their needs?
  • Haven't we gone through multiple iterations of this idea already? Dumb terminals, thin clients and now well...dumb terminals again. I mean you could do it but isn't this just a rehash of a really old idea?
    • You're right.

      The economic problem with these approaches is that you'd have to sell millions of devices before you'd get the economy of scale that PCs enjoy.

      You end up with compromised functionality at about the same cost per user.
  • PCI-X != PCI-Express (Score:4, Informative)

    by anderm7 ( 68050 ) on Friday April 22, 2005 @04:49PM (#12317468) Homepage
    PCI-Express and PCI-X are not interchangeable. PCI-X is really fast PCI, whereas PCI-Express is different altogether (although a PCIe to PCI/PCI-X bridge is supported).

    Depending on how these systems are configured, it may not be possible to use that many monitors.
  • ...a beowolf cluster of these?
    • Easy enough to imagine.

      Since this is the anti-beowulf solution (it's one PC pretending to be lots of PCs, while beowulf is a lot of PCs pretending to be one), then if you clustered them, they would cancel themselves out in a flash of gamma radiation.

      You should try it, a bit of hard radiation might do your spelling some good.

  • PCI Express is normally shortened to PCI-E or PCIe

    Your plan will not work with this motherboard.
  • Linux is no good (Score:2, Insightful)

    by keesh ( 202812 )
    Linux treats all keyboards and mice as a single input source, so you'll need to get patching if you want more than one active user at a time...
    • There are separate device nodes for each mouse, but by default they are multiplexed into /dev/input/mice. You can certainly tell X to read just one of them. And you can run multiple instances of X on different (or even the same) video card. USB keyboards (maybe PS/2 now, but good luck having multiple PS/2 keyboards anyway) also have separate device nodes. From the standpoint of X, it's possible and has been done (google for it). The one thing that would need major patching is virtual consoles (i.e. Ctrl-
      • Yep. I've done this in X. But the issue is moot as the OP screwed up and assumed that PCI Express is the same as PCI-X, which it's not.

        While I like the advances in bus technology, there have gotten to be too many incompatible buses now. It's horrible. Looking at the ASUS web site, we now have motherboards with 5 slots, but all of them different! That's just insane. Why can't they come up with ONE bus that works at high speeds (64 bit), and works for both video and other devices? Is it totally impossible to
    • I recall there being a post on /. a few months ago about how this has already been done to a lesser extent -- on Linux -- in (South?) Africa, where one PC is providing displays, keyboards and mice for four simultaneous users to cut down the cost per seat. I'm sure the information on how they pulled it off is readily available online with a little bit of digging.
    • Re:Linux is no good (Score:4, Informative)

      by homer_ca ( 144738 ) on Friday April 22, 2005 @08:26PM (#12319354)
      It'll work with Linux, but you need a patched kernel. See here [c3sl.ufpr.br].
    • Each X11 server (which is what people would be running) can be configured to use whatever collection of keyboards, mice, and displays you want it to use.

      Of course, that makes it no less of a stupid idea to do that (you should be using an X terminal and set the thing up as a server). But, in principle, Linux will support this sort of insanity if you must.
    • Here's your patch:

      - /dev/mouse
      + /dev/input/mouse0

      Phew. That's some hard-core hacking required to make it work. Of course, to make keyboards work, something similar will need to be done with /dev/input/keyboard... :)
  • God that's dumb. (Score:4, Interesting)

    by Apreche ( 239272 ) on Friday April 22, 2005 @04:56PM (#12317556) Homepage Journal
    Take that giant server and put it in a back room under lock and key. The only things that should plug into it are a single power cable and network. Put a single KVM in your rack to access all the servers in it.

    Now buy 30 thin clients. Each one gets a KVM and a network card. Good. Now plug in the power on all the thin clients and plug their network cables into a switch. To remove clutter, if you want, you can use 802.11 and then all the thin clients will only need power.

    Ta-da! Welcome to intelligence.
    • Do these hypothetical thin clients support encryption? Because I don't particularly like the idea of people's passwords getting sent in plaintext between the thin client and the server.

      We threw away a bunch of X terminals back in the late '90s for exactly that reason. Kinda scary to see it's being suggested in 2005.

      • Implementing X over SSL would be pretty easy for these doohickeys, so I imagine they are available in secure flavours.

        Heck, half these things run a cut down Linux as the thin client's local OS.
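
        In practice the easy route is tunnelling X over SSH rather than raw SSL; a minimal sketch (the hostname and application are placeholders):

        # run the app on the server, display it on the thin client's local X server
        ssh -X user@appserver.example.com firefox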
        • Uhh, X can cause heavy network traffic, especially if the user is browsing the web or something else that is graphics-intensive. If your CPU can handle the encryption at a high enough rate, it might as well be a desktop machine.

          Of course, this is how I use windows now. It's just a cheap desktop that provides a web browser and an X server for my real work (all done on unix).

  • WAY too expensive. (Score:4, Informative)

    by FreeLinux ( 555387 ) on Friday April 22, 2005 @04:56PM (#12317564)
    OK. Given that a two-processor version of the DL585 is $16,000 US and does not include any storage, we can assume that a fully loaded box (processors, memory and some storage) is going to run $28,000-plus, and that doesn't include monitors. That's more than $1,000 per user just for hardware. Since the average business PC runs under $1,000, the server solution that you suggest just isn't cost effective.

    Now add to that cost, the single point of failure issue. Even if the hardware never fails, all you need is for some malicious or clueless user to run :(){ :&:;};: at a bash prompt and you're fired.
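
    (For anyone wondering, that string is the classic bash fork bomb; here is the same thing as a sketch with a readable name instead of ':')

    bomb() {   # define a function that...
      bomb &   # ...launches a copy of itself in the background
      bomb     # ...and another in the foreground, recursing forever
    }
    bomb       # kick it off; the process table fills almost instantly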
    • Addendum (Score:3, Informative)

      by FreeLinux ( 555387 )
      I forgot to mention that the same super server could service hundreds of concurrent users if you used Linux or Citrix with thin clients. Depending on your applications, disk I/O is not the big bottleneck. Usually when it comes to terminal server bottlenecks it is memory, processors, and then disk I/O.

      When you do it this way, the cost goes down in dramatic fashion. A $50,000 server setup is only $250 per user when you have 200 users running off it, and a server as large as you suggest could easily run 500 or mor
    • Also, the person asking the question doesn't seem to understand the difference between PCI-X and PCI Express. PCI-X is just a faster/wider version of normal parallel PCI, while PCI Express is a serialized version. Those cards will not work in that server, or *any* server I've seen.

      That said, I have personally done the multi-seat thing, with the appropriate X patches (built into Ubuntu's x.org, had to patch Debian's xfree86) and the right hardware. I'm going to be deploying quad-seat machines to a small
    • Being that none of my users would have that type of access to files that they wouldn't already have in our 'normal' current configuration, I don't see this as being a problem with this setup.

      This is a permissions problem, not a multi-seat issue.

    • I tried to 'google' for more on what that character sequence does but can't find anything. Do you have a link to what this( ":(){ :&:;};:" ) does?

      Sounds like pulling access to a CLI would be required.

      Thanks.

      LoB
  • PCI-X (Score:4, Informative)

    by bartjan ( 197895 ) * <bartjan.vrielink@net> on Friday April 22, 2005 @05:04PM (#12317647) Homepage
    This is a server, and it does not have any PCI Express slots. Those shiny ATI cards won't fit. I believe Matrox has some cards that support quad head on PCI.

    Why do you need a USB card? The server already comes with 2 USB ports, and a USB bus supports up to 127 devices.

  • Cat got your tongue? (Score:4, Interesting)

    by Harik ( 4023 ) <Harik@chaos.ao.net> on Friday April 22, 2005 @05:06PM (#12317667)
    Wow, lots of factual mistakes. PCI-X != PCI Express. PCI is a bus, PCI-X is a faster version of that bus. PCI-Express is next-gen AGP. I don't know of many PCI-X video cards. As for input, separate devices are marked separately in the kernel; if you just use /dev/input it conglomerates all the inputs. You still need a decent X server/servers to handle all the separate sessions, though. And since X is monolithic, you'll need to run separate X servers per display, or one idiot going to a website with a thousand animated .gifs will stop everyone. In short: Bad Idea.
  • by gtrubetskoy ( 734033 ) * on Friday April 22, 2005 @05:07PM (#12317687)

    How many desktops is it practical for a high end server to directly replace?

    None, just like a big truck doesn't replace any passenger cars.

    You could, however, try something like OpenVPS [openvps.org] to replace a couple o' dozen servers with it...

  • by eht ( 8912 ) on Friday April 22, 2005 @05:08PM (#12317694)
    No one makes PCI-X display adapters, only regular PCI ones, and those are getting harder to find and unsuitable for what you desire.

    This machine has 0 AGP and 0 PCI Express slots, only "Graphics Integrated 1280 x 1024, 16M color on PCI local bus, 8 MB of SDRAM video memory".

    Neat idea, but sorry.
  • PCI-X != PCI-Express (Score:4, Informative)

    by photon317 ( 208409 ) on Friday April 22, 2005 @05:09PM (#12317710)

    PCI-X is not PCI-Express, and the two technologies don't even have compatible pinouts or signals. PCI-X is the follow-on to traditional parallel PCI, with speeds of 100 and 133 MHz and a 64-bit wide data path (compared to previous parallel PCI standards of 32/64 bits at 33/66 MHz). PCI-Express is PCI re-done serially instead of in parallel, in an attempt to be fashionable like the new Serial ATA standard. It's also potentially faster than PCI-X, and not at all compatible.

    You'll notice just about every communications standard that doesn't go long haul alternates back and forth between parallel and serial methods every few years just to sound new and exciting and better.
    • You'll notice just about every communications standard that doesn't go long haul alternates back and forth between parallel and serial methods every few years just to sound new and exciting and better.

      I don't think this is true. In fact, what communications standards have alternated back and forth at all? I'm not a hardware guy, but I think the main expansion bus on desktop PCs (ISA/EISA/PCI) has always been parallel, right?

      I think that most communications mediums, if they have alternated at all, have
      • the next step is probably a multiple serial setup

        i.e. more than one line, but they aren't forced to run in precise lock-step
        • the next step is probably a multiple serial setup

          i.e. more than one line, but they aren't forced to run in precise lock-step


          That's what PCI-Express already does. Each card can use multiple "lanes", each lane is serial, and lanes are not synched. I don't know how many lanes the bus supports.

    • You'll notice just about every communications standard that doesn't go long haul alternates back and forth between parallel and serial methods every few years just to sound new and exciting and better.

      That's awfully cynical. I don't think it's done to "sound new and exciting"; I think it's driven by the available technology. When advances are made that permit higher clock rates, we tend to see things shift toward serial interfaces; meanwhile, when such advances have not been made in a while, we tend t

  • Actually, one quad Opteron server can be a terminal server for about 600 clients... :) Doing it now! Saves about 40 hours a week in "spyware" issues.
  • When you have a separate system for each user, you're pretty much relegated to using NFS/SMB for network data storage. But with a single system for a couple of dozen users, it suddenly becomes much more practical to put a fibre channel card in the system with a direct connection to a storage system. So the assumption that I/O would become a bottleneck in such a system may not only be wrong; it might actually be faster.
  • by crow ( 16139 )
    What about using VMWare to give each user their own system? Then not only can you run different OSes for each user, including ones that don't support multiple keyboards, but you also can let users move virtual machines between physical work areas (or even between different servers).

    I know that the higher-end VMWare products support migrating between servers. I don't know how well they support multiple physical input and display devices, but I suspect that if a major customer requested it, it could happen
  • Mega-systems like this make sense for big SQL and application servers where CPU is a bottleneck. Most of the other uses are solutions looking for a problem. Intel and AMD would dearly love to corral more of the dollars spent on building systems; these mega-chips have high margins. But the reality is that it is going to be more cost-effective for most setups to use a large number of inexpensive PCs with modest CPUs.
  • SunRay + V480 (Score:3, Insightful)

    by Tsunayoshi ( 789351 ) <tsunayoshi&gmail,com> on Friday April 22, 2005 @05:40PM (#12318027) Journal
    How the hell is this any different/better than using SunFire servers and SunRay thin clients? A SunFire V480: four 900 MHz UltraSPARC III processors, 16 GB RAM, mirrored 73 GB disks. This system ran 100+ SunRay thin clients, all running continuously updating graphical simulation displays plus 3 or 4 other semi-rigorous processes, on a 100 Mbit LAN. All data and programs were NFS mounted. The V480 was ~$40-50K + $300/SunRay (monitors and the file server were already owned).

    The system was spec'd by Sun to handle those 100 sessions. The head engineer bought two and set them up to load-balance and provide redundancy.

    This isn't anything new...move along.
    • Re:SunRay + V480 (Score:3, Informative)

      by SunFan ( 845761 )

      For the "but the V480/V490 can't be ordered for $999 from Dell" trolls, there's also the v40z Opteron server that now sells with 4 dual-core CPUs.

      However, for supporting 100 desktops, something as robust as the V490 might be a good thing.
      • Re:SunRay + V480 (Score:1, Interesting)

        by Anonymous Coward
        However, for supporting 100 desktops, something as robust as the V490 might be a good thing.

        Exactly. Why some people on here suggest using commodity parts to support a hundred users is beyond me. Perhaps they've never been in a business where if 100 people aren't working for 5 minutes they've just lost thousands of dollars of productivity.

        It sounds like for $50k server + $30k clients = $80k you can get a robust system from Sun. Throw in another $20k a year in maintenance and it's only $100k for the fi
        • Re:SunRay + V480 (Score:3, Informative)

          by SunFan ( 845761 )
          people will soon see that Sun is a bargain.

          Sun is definitely a bargain, now, but they have to overcome the baggage of having the UltraSPARC II stuck at 480MHz while the UltraSPARC III was being delayed. That is the source of most of the "but my Pentium is five times faster for 1/20 the cost" trolls. Fortunately, that is _not_ a problem, anymore.
      • In the heat of the moment I forgot to mention that the configuration I described was something I used over 2 years ago. Now I would definitely go with the v40z, although off the top of my head I don't remember if the SunRay Server could run on Solaris x86. The "looks like only Linux can support this many displays" comment on the front page got me irked, since Solaris (for example) has been doing a number of things long before Linux was mature enough to do so
  • by Myself ( 57572 ) on Friday April 22, 2005 @05:43PM (#12318064) Journal
    There's a product called Buddy [thinsoftinc.com] that's been doing this for many years. Originally, the Buddy card was a combined PCI video card and PS/2 keyboard+mouse controller, which spit all the signals out an 8-position modular jack (RJ45 for the cretins), and a little breakout box at the other end of a (long, shielded cat-5) cable accepted the monitor and input devices. The software gave two Windows95 users the impression that they were the only one on the machine, and I'm still not sure how they did that on a non-NT architecture, but it worked and worked well. Only trouble is, the video bandwidth of the cable was limited, and the RAMDAC in the video card didn't support sync rates over 60Hz, so the flicker on the slave station was pretty obnoxious.

    In the years since Buddy was first released, PCI video cards have learned to play nice with their neighbors, and USB has provided a way to connect oodles of keyboards and mice to the same machine. Thus, Buddy is reincarnated as BeTwin, a software-only product that associates specific keyboards, mice, and video cards with specific sessions on the machine. (I'm not sure how it deals with sound. Multiple soundcards would seem easy.)

    They say it only supports 5 users, but that sounds like an arbitrary limit and I'm sure they'd tweak a 28-user version if you felt like it.

    Related links... I'm going off-topic here, but playing stupid tricks with virtual hardware is fun.

    Check out MaxiVista [maxivista.com], a "virtual video card" which Windows treats as a second monitor, allowing you to do multi-head tricks. The data for the second display goes out over the network (a la VNC [google.com]) to a client machine, which simply pipes it into the video buffer. Turn that scrap laptop into a second monitor! I stuffed a 10base-T card in my old lappy and it was perfectly usable for everything except fullscreen video. At 100 or gigabit, it'd be worth a try.

    Xinerama [sourceforge.net] is the X-side equivalent, stitching several physical screens into one large virtual X display; combined with Distributed Multihead X (Xdmx), some or all of those screens can even be driven across the network.
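
    For the local case, a minimal sketch of turning it on in xorg.conf (assuming two Screen sections, Screen0 and Screen1, are already defined):

    Section "ServerLayout"
      Identifier "multihead"
      Screen 0 "Screen0"
      Screen 1 "Screen1" RightOf "Screen0"
      Option "Xinerama" "true"
    EndSection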

    As long as we're doing silly tricks with virtual hardware, you should be aware of Virtual Audio Cable [ntonyx.com], which enables digitally-perfect audio patching between applications' outputs and inputs, even if the apps themselves think they have exclusive control over the soundcard. (Also enables multiclient sound output under 9x, even if your card doesn't support it, because it does software mixing.)

    If video is your thing, try Softcam [softcam.com], to feed your videoconferencing software any old source you feel like. Switch between actual cameras, use your desktop screenshot as a "camera input", add effects, etc. Their WaveMux tool is a nice complement to VAC, too.
    • You stole my comment. And you probably did a better job of it than I would have.

      As I type I'm working on a PC with Thinsoft's "BeTwin [thinsoftinc.com]" software installed. Two video cards, two monitors, two keyboards, two mice -- two stations that you can log onto independently.

      At work, I have Maxivista, though I haven't used it for a while since the power supply for a little network switch died. I must get that replaced.

      Other interesting stuff to screw around with monitors includes; Margi's Display-to-go [margi.com] PCMCIA video ca

      • I'd been looking for that device a while ago, but I didn't find anything promising. It still seems like vaporware, I'm unable to find anyone actually selling it. For those whose VGA ports are on the docking station, the USB2VGA would be a better way of driving a projector from a small laptop. Or driving a whole pile of monitors at once.

        Being USB 2.0, I'm surprised there's a bandwidth problem. I was running MaxiVista at 10Mbps, and it was tolerable. At USB1.1's 12Mbps, I'd expect similar. But at 480Mbps? It
    • I love to flog [freesearch.co.uk] Windows!

      verb {T} -gg-
      to beat someone very hard with a whip or a stick


      (yes, I know you meant the British definition [freesearch.co.uk])
  • by Anonymous Coward
    Huh? It depends on the application. You could run 1000 dumb terminals in a call center with a single 900 MHz P3-based machine. An old Sun SPARC 2 can handle 64 dumb terminals without breaking a sweat.
  • Mainframe? (Score:5, Insightful)

    by angst_ridden_hipster ( 23104 ) on Friday April 22, 2005 @06:43PM (#12318560) Homepage Journal
    In the old days, there was one big (relatively) powerful computer with a bunch of terminals hanging off of it. This computer was called a Mainframe.

    As time went on and miniaturization progressed, people wanted their own department computer, so they could have more CPU time available.

    Then they wanted their own desktops.

    Then they wanted to network their desktop machines, so they could share data.

    Then some applications started sharing CPU and other resources over the network.

    But all these networked machines were a big configuration hassle for IT. They envisioned "thin clients" and similar solutions.

    Now machines are so powerful that users can have their own virtual PCs running on a central server, so they can just have dumb terminals on their desks.

    There's a lesson in here somewhere. As soon as the network comes back, I'll google for it and find out what that lesson is.
    • Cluster the cheap desktop computers, add some expensive glue and storage that keeps things reliable and run stuff like OpenSSI.

      Simplistically speaking you split the desktop computer into two. One is part of a "Big Server", the other is the "Thin Client".

      If multicore CPUs and virtualization becomes common this isn't going to be that hard.

      Of course if users randomly pull the plug on their nodes that does make things a bit problematic. So I suspect the current "thick desktop" stuff is going to be around for
    • Then they wanted their own desktops.

      No, they didn't. They didn't want to become system managers, they didn't want to have to spend their time defragging disks or dealing with viruses or any of those other problems.

      A lot of managers found it easier to dump a bunch of cheap hardware on their staff and have them be their own system managers than to make a big up-front investment in a server and staff to deal with the server.

      Only a few nerds preferred their own desktops. And some users that were saddled w
  • I guess it depends on how narrowly you define "traditional servers". Maybe you meant file servers. But plenty of database servers, and most compute servers, are reasonable candidates for this.

    And there are plenty of those around.

    We use lots of compute servers. After reliability, we care most about TCO per CPU/RAM set. A dual Opteron with 16GB is cheaper than two Opterons with 8GB each. But even if it was slightly more to buy, we'd take two in 1U over two in 2U. This scales forever, or until we hit som
    • I guess it depends on how narrowly you define "traditional servers"

      Indeed; the original question is how many PCs a server can replace, but that's the wrong question. It should be "How many servers can a server replace?" Using VMWare, you can have what would otherwise be a rack full of little servers in one large machine. It costs less (when buying enterprise-class hardware) and it's easier to manage. Dual-core CPUs are a tremendous benefit when doing this.
  • I've wanted to do something like this for a long time. The big kludge of the suggestion in the post is that it involves all the different wires you need to run (sound, DVI, USB) from the server to the places where people work. Oh, how I wish there were a standard one-cable solution to do all this, and without needing repeaters! How about a card you plug into the server that sends out all this over fiber, and then a standardized fiber-to-signal translator box?

    I've thought about this a lot. Now that we have SMP and

  • by TheSHAD0W ( 258774 ) on Saturday April 23, 2005 @02:05PM (#12323756) Homepage
    Never mind running multiple users off it, I want one for a dedicated flight simulator.
  • I used to build systems a long time ago that ran PC-MOS on 386 boxes to run multi-user MS-DOS systems. Combined with a (at the time) super sexy Maxpeed ( http://www.maxspeed.com/ [maxspeed.com] ) it would allow you to run DOS programs on serial terminals like the Link MC5 and similar.

    Pretty cool stuff for the time. Also awesome for running multi-node BBS systems.
