High Octane Hardware For GIMP Use?
green pizza asks: "My research group will soon be purchasing several workstations for telescope image analysis. We are currently planning on going with dual Pentium III systems from VA running Redhat 7. We have a good deal of custom filters and scripts that will be churning away in The Gimp and would like the best performance possible. Is our current choice the best one? Should we consider moving up to a Xeon system or perhaps a high-end, multiple processor Sun Blade 1000, SGI Onyx 3000, or Alpha?"
My Experience (Score:2)
AMD (Score:1)
I would go with Alphas if you had the money, but I have no experience with them.
Avoid RH 7 like the plague!!! (Score:2)
I'll leave the hardware decisions up to you (because the performance and cost constraints your application(s) face are best decided by you). But if you go with a Linux solution, DON'T USE RH 7. Don't get me wrong, I used RH 5.2 through 6.2 very happily (before that I was a Slack fan). RH 7.0 is FULL of bugs, from little annoying ones to great big monsters.
So either ``downgrade'' to RH 6.2 or use some other distro (I switched to debian potato for a change of pace).
Speaking of the GIMP, is it the best solution for batch image processing? I suspect that something like PDL (Perl Data Language, a module that provides Perl with blazing-fast data manipulation, matrix math, image processing, et al.) or numerical Python would have lower overhead.
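To make the lower-overhead point concrete, here is a minimal sketch in that numerical-Python style (written against modern NumPy; the filter, function name and frame sizes are invented for illustration, not taken from the poster's actual pipeline) that batch-applies a simple smoothing filter to a stack of frames with no GUI in the loop:

```python
import numpy as np

def boxcar_smooth(image, size=3):
    """Smooth a 2-D image with a size x size boxcar (mean) filter,
    using reflective padding so the output matches the input shape."""
    pad = size // 2
    padded = np.pad(image, pad, mode="reflect")
    out = np.zeros(image.shape, dtype=float)
    # Sum the size*size shifted copies, then divide by the window area.
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (size * size)

# Batch-process a whole stack of (synthetic) frames in one pass.
frames = [np.random.rand(64, 64) for _ in range(10)]
smoothed = [boxcar_smooth(f) for f in frames]
```

Each frame is just an array in memory, so a loop like this is limited by CPU and I/O rather than by application startup or scripting-bridge overhead.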
If you'd like my two cents on the hardware: if money isn't an object, go with the SGI hardware. If money is slightly important, go with an Alpha solution. If money is important enough to equal performance, go with MP Athlon systems in a few months once the 760MP chipset comes out (excellent FP performance and cheaper than Intel).
--
alphas alphas alphas. (Score:1)
Re:Avoid RH 7 like the plague!!! (Score:1)
Btw, there are various graphics libraries. One I like to use (and also has PHP bindings) can be found at www.boutell.com/gd.
Moz.
Thanks (Score:1)
modern PC video cards can SMOKE the onyx (Score:1)
Re:My Experience (Score:2)
In my experience, SGI doesn't charge a single price for a machine; they want details about the purchaser so they know whether they can charge more or less, and they bargain with each purchaser separately, making it difficult to get a quote for a proposal. Things like memory have to be bought from third parties to save money, and the SGI salesman behaves as nastily as if he worked for IBM back in their heyday. The machines themselves are rarely better than one-tenth the money spent on cheap PC technology running Linux or FreeBSD. (Where they do shine is fast access to memory and disks -- but I don't think the payoff is great enough; you can often re-think your task to run as fast on cheap Intel architecture.)
I've also had problems with IRIX. There's no port of glibc; if you use a window manager other than that 4dwm piece of crap, annoying problems happen; and setting the MANPATH variable to anything seems to make it so you can't get any man pages at all -- so how are you supposed to add man pages for a package you just installed in some non-standard place?
The final kicker is, there seem to be precious few answers to these things on the SGI newsgroups. Just a bunch of morons posting "Do you think SGI can survive by embracing linux?" and "My work gave me an old broken Indy. Can I install Windows and IE on this thing?" and "Hey guys I love SGI! Am I cool cause I don't like Microsoft or what?" It seems to just rub in the fact that all the smart people have already abandoned the platform.
I will never work on a project that uses anything SGI again. As far as I am concerned, they are already deader than SCO.
SGI Pricing (Score:1)
Re:modern PC video cards can SMOKE the onyx...NOT! (Score:1)
Re:My Experience (Score:1)
BTW - the 'smart people' are probably all off doing interesting things with these machines.
In any case... (Score:1)
And there is always the option of rewriting your code in portable C++ that can be run on a MIPS, Alpha, or SPARC system if the Gimp needs to be whipped.
The only problem with SGI is the questionable company future.
x86 is good price/performance (Score:1)
An Open Letter To RedHat ... (Score:3)
Felt this was appropriate given the posts here on RedHat 7.0 (since /. rejected it as a feature previously -- hence why it is reposted here).
An open letter to RedHat ... (2000 Dec 17)
OVERVIEW:
INTRODUCTION
As the system administrator at a well-known semiconductor design firm for the past year and a half, my current employment includes a heavy dose of primarily Linux-based services and numerous workstations in a production engineering environment. This will eventually turn into a completely Linux-centric model, with Linux replacing both the traditional NT engineering desktop and the Sun workstation in the computing farm or remote X-application server -- a direct result of most electronic design automation (EDA) software vendors releasing ports of their software to the cost-effective Lintel platform (some of which may already exist and be in use on the desktops of my firm).
Yet another sysadmin in the crowd, known fondly as "TheBS" to some. An avid user of GNU/Linux/OSS solutions since 1993, I have spent the past five years putting them to production use at various traditional engineering firms. From Samba for file services back when people were debating NetWare vs. NT (and how they could bridge them to each other, let alone to UNIX), to introducing Linux supercomputing clusters at aerospace firms that could not afford SGI- or Sun-based solutions (which I still do on a consulting basis), Linux is an extremely potent OS for traditional engineering environments.
For these past five years, I have been totally RedHat distribution-centric. As such, this letter will commence with a bit of praise.
THE MOST TRUSTED DISTRIBUTION RELEASE MODEL
I have tried numerous RPM-based distributions over these same five years, after discovering RedHat in late 1994. Although people have obvious feelings for various distributions (usually with some disgust of RedHat in minor or major form), and many articles and reviews focus on the install, GUI configuration and "ease of use," there is a simple and basic software engineering practice that keeps me set with RedHat: their very careful attention to the release model and versioning of their products.
Unlike Caldera, Mandrake and many other RPM-based distros, any release revision of almost any arbitrary major RedHat version will tell me:
For the greater part, binary compatibility is easily achieved within a major version (e.g., 5, 6 and now 7). The same goes for headers, source and the like. I can upgrade from the .0 or .1 revision to the .2 revision within a version without major worry of incompatibilities. Furthermore, these upgrades can be made piecemeal with the simple command-line interface (CLI) RPM utility, on-the-fly, with little or no downtime and no major issues. No other [RPM] distribution that I know of maintains this simple but quite rudimentary release model that gives sysadmins comfort in upgrading.
This same model also gives me a reference point for major version maturity. A RedHat .0 revision is obviously a distribution that does the following:

As such, RedHat .0 revision releases should be avoided for production systems. RedHat, in fact, went out of their way to announce version 7.0 as an "experimental release," which was also echoed on most Linux enthusiast hangouts like /. Unfortunately, I seriously doubt that same statement was echoed on the RedHat 7.0 box at my local CompUSA.
VERSION MARKETING v. NEUROTIC UPGRADING
Before I can start addressing RedHat's 7.0 release in greater detail, I first have to do a final round of education. A common end-user issue seems to plague many and cause the most trouble. Although vendor "version marketing" is annoying, the root problem with software adoption is what I like to call "neurotic upgrading." Although it may be commonplace when using vendor products that limit choice, users should be responsible enough to avoid this, if they can, in a software industry with so much GNU/Linux/OSS choice.
Most users, at least most enthusiasts, are neurotic upgraders, often failing to read simple READMEs or to properly evaluate a new release of software before installation. Furthermore, they fail to listen to the comments of their peers, either dismissing them as not applicable or, more arrogantly, as coming from less capable users. Quite ironically, many users sometimes do not even realize that, despite their efforts, they have not in fact acquired the latest version of the software. [ And this is common in the case of computer hardware as well. ]
Most of all, before I start, I want to dispel any misconceptions about RedHat 7.0 being the result of "version marketing," as it is not. The best example I can think of regarding "version marketing" was Mandrake's jump from version 6.1 to 7.0 about a year ago, just because of DrakX. I was quite relieved when RedHat released 6.2 a few months later, instead of trying to play Mandrake's continuing game. RedHat waited to release 7.0 until some time well after, when changes were appropriate, although we will discuss whether or not that was long enough.
REDHAT 7: WHY THEY DID THAT
Applying the elementary list of three key components of any RedHat release, version 7 is built upon:

GLibC 2.2 (vs. GLibC 2.1 in v6, 2.0 in v5, LibC 5 in v4, LibC 4 in v3?)
GCC 2.96 (vs. EGCS/GCC 1.1.2/2.9 in v6, GCC 2.7 in v5/4)
Kernel 2.4 (vs. kernel 2.2 in v6, 2.0 in v5/4)
Of these, GLibC 2.2 was near release when RH 7.0 hit in late August (2.2 was released in October). GCC 2.96 is much more in line with the GCC 3 API than 2.95 was. And kernel 2.4 was in test release, with the headers and other key structures well defined.
After six months without a new release, RedHat was obviously itching to get a new distribution out the door with newer components like XFree86 4.0, a kernel with USB support and more of the like. Whether or not RedHat rushed out version 7 to meet competitors who already had these components is a matter of discussion, let alone speculation. But there is no doubt that some bad judgments were made.
REDHAT 7.0: THE WORST DOT-ZERO YET
To itemize the specifics:
First off, the core of any RedHat major release is LibC -- without any doubt, since even RedHat version 3 (LibC 4, I believe). This library needs to be stable and release quality. Unfortunately, both the 2.1.92 and the subsequent 2.1.94 releases had numerous bugs and issues. Furthermore, the completed 2.2 version was released only about a month and a half after 7.0 (and even then RedHat took over a month to get it out, although that looks like it might have been due to the Alpha port holding it up). I expect to see unstable pre-releases of core components out of Rawhide (RedHat's latest-and-greatest package archive), or even a beta, but not a boxed release.
Now I know most people are questioning the inclusion of GCC 2.96 and the API switch towards GCC 3, away from the EGCS 1.1.2 (which subsequently became GCC 2.91 and was refined into 2.95) used by prior RedHat/Rawhide releases (among other current distros). That is the wrong view to have, IMHO. In fact, RedHat is on the money, again IMHO, in moving to GCC 2.96, as the forthcoming GCC 3 release is going to force source code changes anyway (everyone complained when RedHat started to use EGCS 1.1.2 as well, let alone GLibC 2.0).
The main problem with GCC 2.96 was the fact that, as of September, it was way too new to include in a production release. In fact [and I may not be remembering correctly], I believe GCC 2.96 was so new that RedHat included GCC 2.95.2 in the Pinstripe RedHat 6.9 (7.0 beta) release. Many in RedHat's own Cygnus division (which is highly respected by anyone who has used their software in mission-critical spacecraft and military hardware, like NASA, let alone myself at my previous employer) voiced concerns about this switch. I know RedHat was rightly gung-ho to push towards GCC 3 with 2.96 in RedHat 7.0, but GCC 2.96 was far from stable at the time of its release.
And the true testament to this was RedHat's inclusion of GCC 2.95.2, disguised as "kgcc," in the release. If RedHat wished to push to a new compiler, they should have readied kernel headers and code for it. Instead, the world has been left with a new level of mass confusion regarding kernel and driver compilation, on top of the normally semi-understandable nags of new 2.4 headers and includes (i.e., thanks for complicating things much more than you had to, RedHat). If RedHat was serious about the quality of GCC 2.96, they should have gotten the kernel to compile with it. [ Although this is nothing new; I believe the same issue arose with EGCS' inclusion in a much older .0 or .2 revision? ]
Lastly, GLibC 2.1.92 and GCC 2.96 were not the only CVS trees ransacked; a few other projects' CVS trees were pillaged as well. While some components, like the pre-release KDE 2, were tucked safely away in a non-installation directory, others were not (including the Qt 2.2 beta, installed by default).
NOT JUST THE WAITING GAME
Now, one could easily argue about waiting on a release for components to mature until we were all blue in the face. Understand that one always, at some point, has to put one's foot down and release.
But it is quite obvious to me that RedHat failed to keep its core component at release quality for a .0 revision. Namely, RedHat broke its unwritten rule of always releasing a .0 revision with a release-quality C library. By waiting another month and a half on the 7.0 release, RedHat could not only have had a release-quality GLibC, but GCC 2.96 would have almost exponentially matured, a better 2.4.0 test release would have existed, and KDE 2 and KOffice would have been out! [ NOTE: I actually am a Gnome bigot. ]
Now one might say hindsight is 20/20. But the focus is on the release quality of GLibC 2.1.92 and GCC 2.96 at the end of August, when RedHat 7.0 was released. I am still shocked to see the release of a full RedHat version with such experimental components at the core.
REDHAT DOT-ZERO: HAVE THE RULES CHANGED?
The FreeBSD gurus among us are probably sneering now at my comments on a .0 revision. Okay, they are probably sneering at so much discussion about a Linux release regardless. But in all seriousness, the FreeBSD world regards the .0 revision as obviously a non-production release. And those same FreeBSD'ers have a universal, strict tagging system of -CURRENT, -STABLE and -RELEASE to keep one in line as well. As such, the FreeBSD .0 revision is a perfect time to take new components, get them integrated and say "hey, this is where we are heading, it works for us, now try it out."
So my question, after seeing the RedHat 7.0 release take form, is whether or not this is where RedHat is moving (regardless of whether they want to or not). It may, in fact, be a fine move, as it seems to work quite well for the FreeBSD folk. But RedHat 7.0 was obviously not a typical RedHat .0 revision by any means, and the rules have now, if only temporarily, changed.
If so, RedHat is going to have to realize that there is more to the FreeBSD release model than just how to approach .0 revisions or a set of known tags for their prereleases (like Rawhide). So far, RedHat has maintained a single path of releases, moving onto version 6 when 6.0 was released, and now 7, since 7.0 has been released. Sure, there are some updates for older releases, but it still seems that much support for a previous product is dropped when a new major version is released.
With FreeBSD, and most GNU/Linux/OSS projects, there are multiple versions in development at the same time. These are called branches (and a project without a second version is said to have a single branch). For example, FreeBSD released 4.0-RELEASE in March, 3.5-RELEASE in June and 5.0-CURRENT branch has been in the CVS tree since after the FreeBSD (c/o Walnut Creek) and BSDi merge (not to mention the current 4.2-RELEASE and the 4.3-CURRENT branch).
[ NOTE: Please do not comment that a branch is the same thing as a fork. Non-developers will take this as the newfound negative meaning of fork, and not its use in this example/proposal. ]
So maybe it is time RedHat looks at doing something similar?
JUST SAY NO TO YARBD
What I'm driving at here is the avoidance of YARBD: Yet Another RedHat-Based Distro. With both RedHat 5.2, and now 6.2, I have maintained my own network installation server for the latest .2 release with tons of patches, updates, independent updates, mix-ins of custom kernels, 3rd-party packages (that may be standard in the latest .0), etc. Now, I do not expect RedHat to continually release new revisions every time a patch is released, but going almost nine months on a .2 release with outdated components is a nightmare on production workstations and servers.
[ Side request: RedHat, please, please include all run-time compatibility libraries for all previous versions in your releases. Keeping only one major version back is not enough. You do not have to keep the entire compatibility libraries for development, just the run-time libraries for binaries that need them. I know I can get them from the previous distribution version, but it is not only a pain, it can be confusing for the newbie. At a minimum, could you at least include them on the Powertools CD? Thank you. ]
Furthermore, I could easily see RedHat releasing a fourth version-6 revision, just like I thought they should have with version 5. Now, I know this is not status quo for RedHat, and not what most people are looking for. But RedHat had an updated Rawhide kernel, 2.2.16-8, with USB, I2C, AGP and various chipset support, about three months after RedHat 6.2's release (yet 2.2.16-3 is what they have in the 6.2 upgrade folder). In addition, they could have released XFree86 4.0 for GLibC 2.1.3, instead of pushing everyone to 2.1.9 well before it was ready.
Finally, I submit what every production UNIX sysadmin wants: journaling and stable NFS support (for us UNIX-to-UNIX network admins). Since June, I have been running with both Ext3 and NFS3 on my main file and application servers (serving Linux and Solaris clients) with great success. Much of this comes thanks to H. J. Lu at VA Linux (who builds kernels from VA's tarball in RedHat's RPM-set layout, plus adds a number of goodies), among several other required packages. If RedHat was serious about a production revision, at least for servers, I submit they should have seriously considered adopting the NFS3 kernel backport (from 2.4) in their kernels (starting with 2.2.14), as well as possibly opening up Ext3 as a filesystem option. And it was quite humorous to find out that RedHat even left these out of the "experimental" (in their words) 7.0 release.
[ Technical Note: Ext3 is currently the only option for a journaling filesystem with the newer NFS server code on kernel 2.2. ReiserFS runs into a performance race condition. ]
Packages and services like these, I believe, warrant a .3 revision, especially if it is released at the same time as the next version's .0 revision. This expands the choices for the consumer, so one is not stuck running a distribution with drivers and services some nine months old, or forced to go to an experimental release just to get them. A la RedHat 7.0.
Otherwise, sysadmins like myself with production systems maintain their own "Dot Three" distro. And as much as I would like to share my repository of RPM selections with everyone around the world, understand I do not have time to be a YARBD vendor and support them. RedHat should consider releasing a .3 revision at the same time they release the .0 revision of the next version.
ACCOMMODATE WITH MARKETING?
Which brings me to a final point: there is a good marketing spin on a .3 revision released at the same time as a new .0. Taking a marketing lesson from Microsoft (yes, Microsoft) -- not a technical one, but a marketing one -- RedHat could use this to their advantage.
Whenever I see a user go to my local CompUSA to talk about buying a new computer, or just an OS upgrade, I always hear something of the sort (in recent times), "Should I run Windows 98 or Windows 2000?"
Worse yet is the fact that I actually stick around to hear the $8/hour CompUSA technician (if halfway competent) explain how "Windows 98 is a consumer PC operating system and Windows 2000 is a workstation/server operating system." This is something that quite routinely causes me to break out into a virtual, Western-style quick draw, where I shoot both the user and the sales rep squarely in the forehead to put a pair of no-good, brainwashed minds to a quick and painless death where they can do no further harm.
My Linux does both and I am better off running just that single, powerful OS. But neither of them know that.
Now, the point I am making is that Microsoft is not out there arguing whether Windows is better than Linux for the consumer, or (God help them) the workstation/server; they are selling a two-sided story as "one needs either this product or this product." Both are, of course, conveniently owned and sold by them -- hence really a one-sided argument, just with two choices (instead of sides). Sure, Microsoft is starting to play a [losing] game by arguing against Linux on the corporate front, but one does not see much of that going on in the consumer arena.
So if we want to get the word out to "neurotic upgrading" users, while promoting GNU/Linux/OSS at the same time, maybe we should take Microsoft out of the comparison as well. Maybe RedHat should create an advertisement display such as "What Linux version is right for me? 6.3 (or even 6.2 for now) or 7.0?" We educate users about the RedHat release model. We save them from themselves. We act like Microsoft does not exist. We sell (or make) more GNU/Linux/OSS-powered systems.
Okay, okay, maybe I am stretching this all a bit, trying to add some sort of "new, revolutionary perspective" at the end of a commentary. I mean, do not worry, ESR; I do not see myself quitting my day job to become a world-class Linux advocate. So maybe it is because it is around 8am EST and I have chosen to spend my Saturday night / Sunday morning babbling endlessly about RedHat 7.0. Either way, I hope I shed some light on the RedHat release model: what works (and has worked) and what does not (and has seemingly taken a major wrong turn, IMHO).
Until this happens, 'Fester (my installation box, as in "InstallFest'er") is going to be active for as long as RedHat .0 revisions lag six months behind a .2 revision. And who knows, maybe "VaporWare Labs"(TM) might give RedHat some competition with its "Dot Three Linux" release for those companies who need that extra revision release.
[ NOTE: Anyone who knows me knows I am the king of VaporWare(TM) ]
Bryan "TheBS" Smith
RedHat Bigot(TM)
LEGAL
You are free to repost, retransmit or reprint the above commentary provided that you include an entire copy of the message (including this LEGAL section), proper reference and clear indication of the author at the beginning of a thread, page or view. You may respond, analyze, make fun of, or otherwise trash this commentary in any way you see fit, including inserting any aforementioned types of commentary, jokes and puns in between portions of the original post, as long as there is some clearly defined way to separate additions from the original as posted above. In the spirit of sharing, giving, flaming and overall end-user abuse, I leave the implementation of all this to you in the hope that you will do it no grave injustice.
In no written or unwritten way, shape or form do the opinions, views, comments or self-hypocrisy above reflect, project or reform those of my employer, associates, family, relatives, society (well, maybe society), etc. All statements are original unless otherwise noted. If you even try to blame someone other than myself, I will personally take issue with the offender to the point of a UT deathmatch.
All rights reserved to the Free Software Foundation.
Seriously now, I hate lawyers and see no business for them here. But I do have a day job and everyone should fully comprehend my employer plays no part in nor sanctions anything written above.
-- Bryan "TheBS" Smith
Re:Thanks (Score:4)
The GIMP is not intended to scale across CPUs at all (memory is a different story). It's a Photoshop clone. So maybe if you had a 64-CPU box, you could have 64 GIMP instances, each working on a different data set.
Video performance in the machine is almost a non-issue for the task you've described. Heavy-duty image manipulation is very CPU-dependent (speed and cache size), but since you aren't rendering this stuff in real-time 3D, it's just a bunch of numbers on disk. So PDL, numpy, C/C++/Fortran with LAPACK/BLAS, Matlab, et al. would be better. Put your money into CPU and I/O bandwidth (like somebody said, if your data sets are >2GB each, you need a 64-bit platform like Alpha/MIPS/SPARC).
--
Re:My Experience (Score:1)
One of the complaints I left out above is that when I use fvwm instead of 4dwm, the "K" key stops working on the keyboard. What's up with that? If something like that happened on a Solaris machine, a Deja or Google search would find the answer, and you wouldn't have to post. If you did, you would get an answer: your "K" key works if you turn off CDE.
I know SGI has some good stuff, but I think they have a layer of crappy sales and crappy software hiding it. The problem is, for the price and effort, someone can just buy more PCs and spend more time organizing the system to get around the disk and memory bottlenecks.