The Gimp

High Octane Hardware For GIMP Use?

green pizza asks: "My research group will soon be purchasing several workstations for telescope image analysis. We are currently planning on going with dual Pentium III systems from VA running Redhat 7. We have a good deal of custom filters and scripts that will be churning away in The Gimp and would like the best performance possible. Is our current choice the best one? Should we consider moving up to a Xeon system or perhaps a high-end, multiple processor Sun Blade 1000, SGI Onyx 3000, or Alpha?"
Comments:
  • I have been doing satellite oceanography analysis for 3 years now, as part of a research group. The main machine we have been using for image processing and analysis is an SGI Octane; however, we use IDL and SeaDAS for most of our analysis, not GIMP. We have had very few problems even analyzing gigabytes of flat files simultaneously. While I know this may not help you much, I can attest that the folks at SGI have been very helpful to us, even loaning us a couple of extra machines when we hosted a small conference/course. We have also been using a set of O2s as terminals and to handle other services like DNS, file and print sharing, etc. While you do pay a bit of overhead for the SGI name, they are great machines with great service and a great company backing them up.
  • I would go with AMD Thunderbirds, as they have a better cache system and can do the kind of math the GIMP needs faster.
    Or I would go with Alphas if you had the money, but I have no experience with them.
  • I'll leave the hardware decisions up to you (because the performance and cost constraints your application(s) face are best judged by you). But if you go with a Linux solution, DON'T USE RH 7. Don't get me wrong, I used RH 5.2 through 6.2 very happily (before that I was a Slackware fan). RH 7.0 is FULL of bugs, from little annoying ones to great big monsters.

    So either "downgrade" to RH 6.2 or use some other distro (I switched to Debian potato for a change of pace).

    Speaking of the GIMP, is it the best solution for batch image processing? I suspect that something like PDL (the Perl Data Language, a module that gives Perl blazingly fast data manipulation, matrix math, image processing, et al.) or Numerical Python would have lower overhead.
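
    For what it's worth, here is a minimal sketch of the array-level batch filtering the Numerical Python route allows, written with present-day NumPy and Pillow rather than the Numeric/PIL modules of this era; the input glob pattern, file format and 3x3 box blur are placeholder assumptions, not anything from the original post:

        # Hypothetical batch filter: box-blur every image in a directory at the
        # array level, with no per-image GIMP process.
        import glob
        import numpy as np
        from PIL import Image   # assumption: inputs are in a PIL-readable format

        def box_blur(pixels):
            """Average each pixel with its 8 neighbours (edges wrap; fine for a sketch)."""
            acc = np.zeros_like(pixels)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += np.roll(np.roll(pixels, dy, axis=0), dx, axis=1)
            return acc / 9.0

        for path in sorted(glob.glob("frames/*.png")):   # placeholder input pattern
            img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
            out = box_blur(img).clip(0, 255).astype(np.uint8)
            Image.fromarray(out).save(path + ".blurred.png")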

    If you'd like my two cents on the hardware: if money isn't an object, go with the SGI hardware. If money is slightly important, go with an Alpha solution. If money matters as much as performance, go with MP Athlon systems in a few months once the 760MP chipset comes out (excellent FP performance, and cheaper than Intel).


  • Go Alphas. Go with Debian. Alphas don't have the 2 GB file-size limitation that 32-bit systems have, and don't require nasty hacks to get files above 4 GB working. I recommend Atipa or Microway, and use Debian or RedHat 6.2, not 7. Automatic upgrades will save your ass a few times: I know Debian's apt-get update && apt-get dist-upgrade and RedHat's up2date -b in a cron job have saved my bacon lots of times. Alphas, especially the new 21264As in the CS20 1U form factor or dual 833 MHz UP2000+ boards in 4U, kick Intel butt all over the place. I run a few Alphas and am planning to get loads more.
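
    If you want to sanity-check the large-file story on whatever box you end up with, a quick probe like the one below (a latter-day Python sketch; the scratch file name is arbitrary) shows whether the platform and filesystem let you past the classic 2 GB ceiling:

        # Probe for large-file support: seek past 2 GiB in a sparse scratch file.
        import os

        TEST_PATH = "lfs_probe.bin"   # arbitrary scratch file in the current directory
        LIMIT = 2**31                 # 2 GiB, the classic 32-bit signed off_t ceiling

        ok = False
        try:
            with open(TEST_PATH, "wb") as f:
                f.seek(LIMIT + 1)     # jump past 2 GiB; the file stays sparse
                f.write(b"\0")        # commit the offset
            ok = os.path.getsize(TEST_PATH) > LIMIT
        except (OSError, OverflowError):
            pass                      # no large-file support on this build/filesystem
        finally:
            if os.path.exists(TEST_PATH):
                os.remove(TEST_PATH)

        print("large files OK" if ok else "stuck at the 2 GiB limit")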
  • I totally agree. Even Linus was pretty critical of RH7. Heh.. it's too bad distros like this give Linux a bad name.

    Btw, there are various graphics libraries. One I like to use (which also has PHP bindings) can be found at www.boutell.com/gd.

    Moz.
  • Thanks everyone for the input. I will certainly check out some of the suggestions. Does anyone happen to know just how well GIMP scales? SGI's Origin/Onyx 3000 currently scales to 512 processors (though it's possible to go to 1024) in a single machine with ungodly bandwidth. While we could not afford such a beast, we could probably go for a 64-processor system, since it could serve many purposes: batch jobs, rendering, etc., plus multiple IR3 graphics pipes could drive several different "workstation/terminal" interfaces for interactive work. Though if GIMP doesn't scale beyond one processor very well, we may as well just buy a bunch of whiz-bang 1.X GHz PCs. Will investigate other options as well. Thanks again for all of the help!
  • I read somewhere...don't remember where, that the Onyx gets smoked by Voodoo cards. So, I think the Onyx would be kind of a waste of money. Just beowulf some 486's together. It'll be much better. Not to mention, enterprise scalable.
  • SGI needs to die as quickly as possible. People like you need to stop sending them money, so that they go out of business, and the engineers, marketeers, managers, etc of SGI go out and find economically and socially useful work.

    In my experience, SGI doesn't charge a single price for a machine; they want to know details about the purchaser so they can decide whether to charge more or less, and they bargain with each purchaser separately, making it difficult to get a quote for a proposal. Things like memory have to be bought from third parties to save money, and the SGI salesman behaves all nasty, like he worked for IBM back in their heyday. The machines themselves are rarely better than cheap PC hardware costing one tenth as much and running Linux or FreeBSD (where they do shine is fast access to memory and disk, but I don't think the payoff is great enough; you can often rethink your task to run just as fast on cheap Intel hardware).

    I've also had problems with IRIX. There's no port of glibc; if you use a window manager other than that 4dwm piece of crap, annoying problems happen; and setting the MANPATH variable to anything seems to make it so you can't get any man pages at all, so how are you supposed to add man pages for a package you just installed in some non-standard place?

    The final kicker is that there seem to be precious few answers to these things on the SGI newsgroups. Just a bunch of morons posting "Do you think SGI can survive by embracing Linux?" and "My work gave me an old broken Indy. Can I install Windows and IE on this thing?" and "Hey guys, I love SGI! Am I cool because I don't like Microsoft or what?" It just rubs in the fact that all the smart people have already abandoned the platform.

    I will never work on a project that uses anything SGI again. As far as I am concerned, they are already deader than SCO.
  • SGI actually does have a set price list for each country they do business in. SGI sales offices and VARs, however, rarely charge list price. Depending on the type of order (single item, multiple items, educational, promotional, etc.) they have various discounts. Confusing, yes, but if you call or email a sales office you can generally get a nicely detailed quote within the day, from which you can see their discount schedule (e.g., 15% off hardware, 20% off software, etc.). I keep in contact with the same sales rep that my former university used over six years ago; really can't complain too much.
  • So you're saying you haven't actually used an Onyx, yet you're offering this wonderful "information." Having used several generations of SGI Onyx systems, I can honestly say that a Voodoo does NOT "smoke" them. Not even close. You might want to check into the specs of an Onyx 3000 before spouting random nonsense.
  • It sounds like you don't like SGI because it's not enough like BSD or Linux for you. Yes, their sales model is different than purchasing off-the-shelf parts, but so what? You get what you pay for, and like it or not, SGI machines continue to be relevant workhorses in many markets.

    BTW - the 'smart people' are probably all off doing interesting things with these machines.
  • In any case, the Alpha, MIPS, and SPARC are much better designed for high-end engineering work than your garden variety AMD/Intel CPU.

    And there is always the option of rewriting your code in portable C++ that can be run on a MIPS, Alpha, or SPARC system if the Gimp needs to be whipped. ;)

    The only problem with SGI is the questionable company future. :/
  • Take a look at what your processing will be doing. If it is mostly integer, nothing out there will beat the hot x86 chips (P III/IV, Athlon) this side of a supercomputer. If you're doing lots of FP-intensive stuff, _and_ you don't have space for twice as many x86 boxes, the Alpha is the chip of choice. The Suns have excellent reliability and bandwidth (nice for ISPs), but suck for price/performance. SGI MIPS hardware is very specialized. I'd go for the VA Linux boxes: SMP PIII. And btw -- running RH 7 happily on a production system.
  • by BitMan ( 15055 ) on Thursday December 28, 2000 @09:19AM (#539689)

    Felt this was appropriate given the posts here on RedHat 7.0 (since /. rejected it as a feature previously, which is why it is reposted here).

    An open letter to RedHat ... (2000 Dec 17)

    OVERVIEW:

    • Introduction
    • The Most Trusted Distribution Release Model
    • Version Marketing v. Neurotic Upgrading
    • RedHat 7: Why They Did That
    • RedHat 7.0: The Worst Dot-Zero Yet
    • Not Just the Waiting Game
    • RedHat Dot-Zero: Have the Rules Changed?
    • Just Say No To YARBD
    • Accommodate With Marketing?
    • Legal

    INTRODUCTION

    As the system administrator at a well-known semiconductor design firm for the past year and a half, my current employment includes a heavy dose of primarily Linux-based services and numerous workstations in a production engineering environment. This will eventually turn into a completely Linux-centric model, with Linux replacing both the traditional NT engineering desktop and the Sun workstation in the computing farm or remote X-application server; a direct result of most electronic design automation (EDA) software vendors porting their software to the cost-effective Lintel platform (ports which, in some cases, already exist and are in use on desktops at my firm).

    Yet another sysadmin in the crowd, known fondly as "TheBS" to some. An avid user of GNU/Linux/OSS solutions since 1993, I have spent the past five years using them in production at various traditional engineering firms. From Samba for file services back when people were debating NetWare v. NT (and how they could bridge them to each other, let alone to UNIX), to introducing Linux supercomputing clusters at aerospace firms that could not afford SGI or Sun-based solutions (which I still do on a consulting basis), Linux is an extremely potent OS for traditional engineering environments.

    For these past five years, I have been totally RedHat distribution-centric. As such, this letter will commence with a bit of praise.

    THE MOST TRUSTED DISTRIBUTION RELEASE MODEL

    I have tried numerous RPM-based distributions over these same five years, after discovering RedHat in late 1994. Although people have obvious feelings for various distributions (usually with some disgust of RedHat in minor or major form), and many articles and reviews focus on the install, GUI configuration and "ease-of-use," there is a simple and basic software engineering law that keeps me set with RedHat. And that very important law involves the very careful attention to the release model and versioning of their products.

    Unlike Caldera, Mandrake and many other RPM-based distros, any release revision of almost any major RedHat version will tell me:

    • C Library version
    • GNU toolchain version
    • Kernel (or kernel headers) version

    For the most part, binary compatibility is easily achieved within a major version (e.g., 5, 6 and now 7). The same goes for headers, source and the like. I can upgrade from the .0 or .1 revision to the .2 revision within a version without major worry about incompatibilities. Furthermore, these upgrades can be made piecemeal with the simple command-line (CLI) RPM utility, on the fly, with little or no downtime and no major issues. No other RPM distribution that I know of maintains this simple but quite fundamental release model, which gives sysadmins comfort in upgrading.

    This same model also gives me a reference point for major version maturity. A RedHat .0 revision is obviously a distribution that does the following:

    • Introduces library issues
    • Introduces build/development issues
    • Introduces less mature/tested packages

    As such, RedHat .0 revision releases should be avoided for production systems. RedHat, in fact, went out of their way to announce version 7.0 as an "experimental release," which was also echoed on most Linux enthusiast hangouts like /. Unfortunately, I seriously doubt that same statement was echoed on the RedHat 7.0 box at my local CompUSA.

    VERSION MARKETING v. NEUROTIC UPGRADING

    Before I can start addressing RedHat's 7.0 release in greater detail, I first have to do a final round of education. A common end-user issue seems to plague many and causes the most trouble. Although vendor "version marketing" is annoying, the root problem with software adoption is what I like to call "neurotic upgrading." Although it may be commonplace when using vendor products that limit choice, users should be responsible enough to avoid this, if they can, in a software industry with so much GNU/Linux/OSS choice.

    Most users, at least most enthusiasts, are neurotic upgraders, often failing to read simple READMEs or to properly evaluate a new release of software before installation. Furthermore, they fail to listen to the comments of their peers, either dismissing them as not applicable or, more arrogantly, as coming from less capable users. Quite ironically, many users sometimes do not even realize that, despite their efforts, they have not in fact acquired the latest version of the software. [ And this is common in the case of computer hardware as well. ]

    Most of all, I want to dispel, before I start, any misconception that RedHat 7.0 is the result of "version marketing," because it is not. The best example I can think of regarding "version marketing" was Mandrake's jump from version 6.1 to 7.0 about a year ago, just because of DrakX. I was quite relieved when RedHat released 6.2 a few months later, instead of trying to play Mandrake's continuing game. RedHat waited to release 7.0 until some time well after, when the changes were appropriate, although we will discuss whether or not that was long enough.

    REDHAT 7: WHY THEY DID THAT

    Applying the elementary list of three key components of any RedHat release, version 7 is built upon:

    • GLibC 2.2
      (to GLibC 2.1 in v6, 2.0 in v5, LibC 5 in v4, LibC 4 in v3?)
    • GCC 3 toolchain
      (to EGCS/GCC 1.1.2/2.9 in v6, GCC 2.7 in v5/4)
    • Kernel 2.4
      (to kernel 2.2 in v6, 2.0 in v5/4)

    Of these, GLibC 2.2 was near release when RH 7.0 hit in late August (2.2 was released in October). GCC 2.96 is much more in line with the GCC 3 API than 2.95 was. And kernel 2.4 was in test release, with the headers and other key structures well defined.

    After six months without a new release, I am sure RedHat was itching to get a new distribution out the door with newer components like XFree86 4.0, a kernel with USB support, and more of the like. Whether or not RedHat rushed out version 7 to meet competitors who already had these components is a matter of discussion, if not speculation. But there is no doubt that some bad judgments were made.

    REDHAT 7.0: THE WORST DOT-ZERO YET

    To itemize the specifics:

    • GLibC 2.1.92
    • GCC 2.96 (as of September)
    • Including GCC 2.95.2 for kernel compiles
    • Ripping other, non-release quality components from CVS trees

    First off, the core of any RedHat major release is LibC, without any doubt since even RedHat version 3 (LibC 4, I believe). This library needs to be stable and of release quality. Unfortunately, both the 2.1.92 and the subsequent 2.1.94 releases had numerous bugs and issues. Furthermore, the completed 2.2 version was only about a month and a half away from 7.0's release (and even then RedHat took over a month to get it out, although that looks like it might have been due to the Alpha port holding it up). I expect to see unstable pre-releases of core components in Rawhide (RedHat's latest-and-greatest package archive), or even in a beta, but not in a boxed release.

    Now I know most people are questioning the inclusion of GCC 2.96 and the API switch towards GCC 3, away from the EGCS 1.1.2 (which subsequently became GCC 2.91 and was refined into 2.95) used by RedHat/Rawhide releases (among other current distros). That is the wrong view to have, IMHO. In fact, RedHat is on the money, again IMHO, in moving to GCC 2.96, as the forthcoming GCC 3 release is going to force source code changes anyway (everyone complained when RedHat started using EGCS 1.1.2 as well, let alone GLibC 2.0).

    The main problem with GCC 2.96 was the fact that, as of September, it was way too new to include in a production release. In fact [and I may not be remembering correctly], I believe GCC 2.96 was so new that RedHat included GCC 2.95.2 in the Pinstripe RedHat 6.9 (7.0 beta) release. Many in RedHat's own Cygnus division (which is highly respected by anyone who has used its software in mission-critical spacecraft and military hardware, NASA included, let alone myself at my previous employer) voiced concerns about this switch. I know RedHat was rightly gung-ho to push towards GCC 3 with 2.96 in RedHat 7.0, but GCC 2.96 was far from stable at the time of the release.

    And the true testament to this was RedHat's inclusion of GCC 2.95.2, disguised as "kgcc," in the release. If RedHat wished to push to a new compiler, they should have readied kernel headers and code for it. Instead, the world has been left with a new level of mass confusion regarding kernel and driver compilation, on top of the normally semi-understandable nags about the new 2.4 headers and includes (i.e., thanks for complicating things much more than you had to, RedHat). If RedHat was serious about the quality of GCC 2.96, they should have gotten the kernel to compile with it. [ Although this is nothing new; I believe the same issue arose with EGCS' inclusion in a much older .0 or .2 revision. ]

    Lastly, GLibC 2.1.92 and GCC 2.96 were not the only CVS snapshots shipped; a few other projects' CVS trees were pillaged as well. While some components, like the pre-release KDE 2, were tucked safely away in a non-installation directory, others were not (including the Qt 2.2 beta, which is installed by default).

    NOT JUST THE WAITING GAME

    Now, one could easily argue about waiting on a release for components to mature until we were all blue in the face. Understand that one always, at some point, has to put one's foot down and release.

    But it is quite obvious to me that RedHat failed to keep its core component at release quality for a .0 revision. Namely, RedHat broke its unwritten rule of always shipping a .0 revision with a release-quality C library. By waiting another month and a half on the 7.0 release, RedHat would not only have had a release-quality GLibC, but GCC 2.96 would have matured considerably, a better 2.4.0 test kernel would have existed, and KDE 2 and KOffice would have been out! [ NOTE: I actually am a Gnome bigot. ]

    Now, one might say hindsight is 20/20. But the focus is on the release quality of GLibC 2.1.92 and GCC 2.96 at the end of August, when RedHat 7.0 was released. I am still shocked to see a full RedHat version released with such experimental components at its core.

    REDHAT DOT-ZERO: HAVE THE RULES CHANGED?

    The FreeBSD gurus among us are probably sneering now at my comments on a .0 revision. Okay, they are probably sneering at so much discussion about a Linux release regardless. But in all seriousness, the FreeBSD world regards the .0 revision as obviously a non-production release. And those same FreeBSD'ers have a universal, strict tagging system of -CURRENT, -STABLE and -RELEASE to keep one in line as well. As such, the FreeBSD .0 revision is a perfect time to take new components, get them integrated and say, "hey, this is where we are heading, it works for us, now try it out."

    So my question, after seeing the RedHat 7.0 release take form, is whether or not this is where RedHat is moving (regardless of whether they want to or not). It may, in fact, be a fine move, as it seems to work quite well for the FreeBSD folk. But RedHat 7.0 was obviously not a typical RedHat .0 revision by any means, and the rules have now, if only temporarily, changed.

    If so, RedHat is going to have to realize that there is more to the FreeBSD release model than just how to approach .0 revisions or a set of known tags for their prereleases (like Rawhide). So far, RedHat has maintained a single path of releases, moving on to release 6 when 6.0 was released, and now to 7, since 7.0 has been released. Sure, there are some updates for older releases, but it still seems that much of the support for a previous product is dropped when a new major version is released.

    With FreeBSD, and most GNU/Linux/OSS projects, there are multiple versions in development at the same time. These are called branches (and a project without a second version is said to have a single branch). For example, FreeBSD released 4.0-RELEASE in March and 3.5-RELEASE in June, and the 5.0-CURRENT branch has been in the CVS tree since the FreeBSD (c/o Walnut Creek) and BSDi merger (not to mention the current 4.2-RELEASE and the branch leading to 4.3).

    [ NOTE: Please do not comment that a branch is the same thing as a fork. Non-developers will take this as the newfound negative meaning of "fork," and not the use of it in this example/proposal. ]

    So maybe it is time RedHat looks at doing something similar?

    JUST SAY NO TO YARBD

    What I'm driving at here is the avoidance of YARBD: Yet Another RedHat-Based Distro. With both RedHat 5.2, and now 6.2, I have maintained my own network installation server for the latest .2 release with tons of patches, updates, independent updates, mix-ins of custom kernels, 3rd party packages (that may be standard in the latest .0), etc... Now I do not expect RedHat to continually release new revisions every time a patch is released, but going almost nine months on a .2 release with outdated components is a nightmare to use on production workstations and servers.

    [ Side request: RedHat, please, please include all run-time compatibility libraries for all previous versions in your releases. Keeping only one major version back is not enough. You do not have to ship the full development compatibility packages, just the run-time libraries that existing binaries need. I know I can get them from the previous distribution version, but that is not only a pain, it can be confusing for the newbie. At a minimum, could you at least include them on the Powertools CD? Thank you. ]

    Furthermore, I could easily see RedHat releasing a fourth version 6 revision, just as I thought they should have with version 5. Now, I know this is not the status quo for RedHat, and not what most people are looking for. But RedHat had an updated Rawhide kernel, 2.2.16-8, with support for USB, I2C, AGP, various chipsets, etc., about three months after RedHat 6.2's release (yet 2.2.16-3 is what they have in the 6.2 upgrade folder). In addition, they could have released XFree86 4.0 for GLibC 2.1.3, instead of pushing everyone to 2.1.9 well before it was ready.

    Finally, I submit what every production UNIX sysadmin wants: journaling and stable NFS support (for us UNIX-to-UNIX network admins). Since June, I have been running both Ext3 and NFS3 on my main file and application servers (serving Linux and Solaris clients) with great success. Much of this comes thanks to H. J. Lu at VA Linux (who builds kernels from VA's tarball in RedHat's RPM set layout, plus adds a number of goodies), among several other required packages. If RedHat was serious about a production revision, at least for servers, I submit they should have seriously considered adopting the NFS3 kernel backport (from 2.4) in their kernels (starting with 2.2.14), as well as possibly opening up Ext3 as a filesystem option. And it was quite humorous to find out RedHat even left these out of the "experimental" (in their words) 7.0 release.

    [ Technical Note: Ext3 is currently the only option for a journaling filesystem with the newer NFS server code on kernel 2.2. ReiserFS runs into a performance race condition. ]

    Packages and services like these, I believe, warrant a .3 revision, especially if it is released at the same time as the next version's .0 revision. This expands the choices for the consumer, so one is not stuck running a distribution with drivers and services some nine months old, or forced to go to an experimental release just to get them -- a la RedHat 7.0.

    Otherwise, sysadmins like myself with production systems will keep maintaining their own "Dot Three" distros. And as much as I would like to share my repository of RPM selections with everyone around the world, understand that I do not have time to be a YARBD vendor and support them. RedHat should consider releasing a .3 revision at the same time they release a .0 revision of the next version.

    ACCOMMODATE WITH MARKETING?

    Which brings me to a final point: there is a good marketing spin on a .3 revision released at the same time as a new .0. Taking a lesson from Microsoft (yes, Microsoft) -- a marketing lesson, not a technical one -- RedHat could use this to its advantage.

    Whenever I see a user go to my local CompUSA to talk about buying a new computer, or just an OS upgrade, I always hear something of the sort (in recent times), "Should I run Windows 98 or Windows 2000?"

    Worse yet is the fact that I actually stick around to hear the $8/hour CompUSA technician (if halfway competent) explain how "Windows 98 is a consumer PC operating system and Windows 2000 is a workstation/server operating system." This is something that, quite routinely, causes me to break out into a virtual, western-style quick draw, where I shoot both the user and the sales rep squarely in the forehead to put a pair of no-good, brainwashed intelligences to a quick and painless death where they can do no further harm.

    My Linux does both, and I am better off running just that single, powerful OS. But neither of them knows that.

    Now, the point I am making is that Microsoft is not out there arguing whether Windows is better than Linux for the consumer or (God help them) the workstation/server; they are selling the two-sided story as "one needs either this product or that product." Both are, of course, conveniently owned and sold by them, hence it is really a one-sided argument, just with two choices (instead of sides). Sure, Microsoft is starting to play a [losing] game by arguing against Linux on the corporate front, but one does not see much of that going on in the consumer arena.

    So if we want to get the word out to "neurotic upgrading" users, while promoting GNU/Linux/OSS at the same time, maybe we should take Microsoft out of the comparison as well. Maybe RedHat should create an advertisement display that asks, "Which Linux version is right for me? 6.3 (or even 6.2 for now) or 7.0?" We educate users about the RedHat release model. We save them from themselves. We act like Microsoft does not exist. We sell (or make) more GNU/Linux/OSS-powered systems.

    Okay, okay, maybe I am stretching this all a bit, trying to add some sort of "new, revolutionary perspective" at the end of a commentary. I mean, do not worry, ESR, I do not see myself quitting my day job to become a world-class Linux advocate. Then again, maybe it is because it is around 8am EST and I have chosen to spend my Saturday night / Sunday morning babbling endlessly about RedHat 7.0. Either way, I hope I shed some light on the RedHat release model, what works (and has worked) and what does not (and has seemingly taken a major wrong turn, IMHO).

    Until this happens, 'Fester (my installation box, as in "InstallFest'er") is going to stay active for as long as RedHat's .0 revisions lag six months behind a .2 revision. And who knows, maybe "VaporWare Labs"(TM) might give RedHat some competition with its "Dot Three Linux" release for those companies who need that extra revision.

    [ NOTE: Anyone who knows me knows I am the king of VaporWare(TM) ]

    Bryan "TheBS" Smith
    RedHat Bigot(TM)

    LEGAL

    You are free to repost, retransmit or reprint the above commentary provided that you include an entire copy of the message (including this LEGAL section), with proper reference and clear indication of the author at the beginning of the thread, page or view. You may respond to, analyze, make fun of, or otherwise trash this commentary in any way you see fit, including inserting any aforementioned types of commentary, jokes and puns in between portions of the original post, as long as there is some clearly defined way to separate additions from the original as posted above. In the spirit of sharing, giving, flaming and overall end-user abuse, I leave the implementation of all this to you in the hope that you will do it no grave injustice.

    In no written or unwritten way, shape or form do these opinions, views, comments or self-hypocrisy reflect, project or reform those of my employer, associates, family, relatives, society (well, maybe society), etc. All statements are original unless otherwise noted. If you even try to blame someone other than myself, I will personally take issue with the offender, to the point of a UT deathmatch.

    All rights reserved to the Free Software Foundation.

    Seriously now, I hate lawyers and see no business for them here. But I do have a day job and everyone should fully comprehend my employer plays no part in nor sanctions anything written above.

    -- Bryan "TheBS" Smith

  • by StandardDeviant ( 122674 ) on Thursday December 28, 2000 @10:44AM (#539690) Homepage Journal

    The GIMP is not intended to scale across CPUs at all (memory is a different story). It's a Photoshop clone. So maybe if you had a 64-CPU box you could have 64 GIMP instances, each working on a different data set.

    Video performance in the machine is almost a non-issue for the task you've described. Heavy-duty image manipulation is very CPU dependent (speed and cache size), but since you aren't rendering this stuff in real-time 3D, it's just a bunch of numbers on disk. So PDL, numpy, C/C++/Fortran with LAPACK/BLAS, Matlab, et al. would be better. Put your money into CPU and I/O bandwidth (like somebody said, if your data sets are >2 GB each you need a 64-bit platform like Alpha/MIPS/SPARC).
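
    To make that concrete, here is a sketch of the one-job-per-dataset approach in Python; process_one is a stand-in for whatever filter you actually run, and the flat-binary input pattern is made up:

        # Run one worker per CPU, each chewing on a different data set, since a
        # single GIMP (or NumPy) job will not use more than one processor.
        import glob
        from multiprocessing import Pool, cpu_count
        import numpy as np

        def process_one(path):
            """Placeholder per-file job: load a raw float32 array, threshold it, save a mask."""
            data = np.fromfile(path, dtype=np.float32)   # assumption: flat binary files
            mask = data > data.mean()                    # stand-in for the real filter
            np.save(path + ".mask.npy", mask)
            return path

        if __name__ == "__main__":
            files = sorted(glob.glob("datasets/*.dat"))  # placeholder input pattern
            with Pool(processes=cpu_count()) as pool:
                for done in pool.imap_unordered(process_one, files):
                    print("finished", done)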


  • But I do most of my work on Solaris, and I don't have the same complaints about it. True, they don't give out the source code to the system, but I've never really had problems getting free software packages to compile on it; the machines are pretty much a commodity with a standard (if high) price; if you go to the newsgroups, everybody uses Solaris and can help you; and things like fvwm and man actually work.

    One of the complaints I left out above is that when I use fvwm instead of 4dwm, the "K" key stops working on the keyboard. What's up with that? If something like that happened on a Solaris machine, a Deja or Google search would find the answer, and you wouldn't have to post. If you did, you would get an answer: your "K" key works if you turn off CDE.

    I know SGI has some good stuff, but I think they have a layer of crappy sales and crappy software hiding it. The problem is, for the price and effort, someone can just buy more PCs and spend more time organizing the system to get around the disk and memory bottlenecks.
