What is Mainframe Culture?
An anonymous reader asks: "A couple years ago Joel Spolsky wrote an interesting critique of Eric S. Raymond's The Art of Unix Programming wherein Joel provides an interesting (as usual) discussion on the cultural differences between Windows and Unix programmers. As a *nix nerd in my fifth year managing mainframe developers, I need some insight into mainframe programmers. What are the differences between Windows, Unix, and mainframe programmers? What do we all need to know to get along in each other's worlds?"
Cats (Score:4, Interesting)
Everything Old Is Old Again (Score:4, Interesting)
Mainframe programmers are *old* (Score:5, Interesting)
The reason mainframes are interesting, to the extent that they are, is that they can handle very large databases with very high reliability, which is not the same as being fast (though some of IBM's newer mainframe products are also quite fast). That means there's a heavy emphasis on building and following processes for deployment and operations so that things won't break, ever, at all, even when the backup system's down for maintenance, and on building processes to feed data in and out of this rather hostile box so every bit gets bashed like it's supposed to. The programming environments have gotten better, but you're looking at a level of flexibility like Debian's oldest-and-most-stable releases, not like Debian sid.
Programming in COBOL (Score:5, Interesting)
M/F is just a job (Score:5, Interesting)
On the other side of the coin, I think that *nix and Windows programmers tend to enjoy what they do. To them, programming is not just their job, it's enjoyable.
Honestly, I don't blame them. M/F sucks. As soon as you get your first compile error because your command isn't in the right column, or have JCL spit out a bunch of random nonsense because you didn't allocate the correct blocksize for your file you'll hate your job too.
Re:One difference (Score:4, Interesting)
I agree (Score:3, Interesting)
a few observations (Score:4, Interesting)
The following example might be interesting; not sure if it's helpful. On a batch system you have many jobs executing concurrently. MVS (at that time) didn't have anything like preemptive multitasking. COBOL didn't have async I/O either, so when a program issued I/O it just went into a wait state and another task was scheduled. So the bottom line was that your program wouldn't be very efficient (e.g., it wouldn't overlap I/O and CPU activities), but that would create a nice (from MVS's perspective) mix of jobs. Some are doing I/O, some are doing CPU, so MVS can accommodate many concurrent tasks.
Well, at that time I was a budding assembly language programmer and even took a course at university where we had to write our own operating system, entirely in BAL/370, including the Initial Program Loader (boot, if you wish). I was working at the same time, and there was a problem at my job. They (John Hancock Insurance) had hundreds and hundreds of COBOL programs, and nothing like a cross-referencing dictionary to tell you, say, which programs modify some common record field. So when something unexpected happened, they had to search through the source code to find all the instances of such references, and that took something like 5-6 hours. I'd learned async I/O at school, and how to overlap I/O and CPU activities, and I ended up writing a fairly efficient program. It read large chunks of disk data into several buffers. As soon as the first buffer was full, that event was detected, and the program started parsing that buffer for keywords --- while continuing to read tracks into the other buffers (it was a circular list of buffers). After some trials I got the execution time down to less than 20 minutes. Everyone in my area was happy.
Everyone except the mainframe operators. I've been told they HATED my program's guts. The problem was that it didn't behave nicely as far as a 'good mix' is concerned. It grabbed resources and held them for a long time, because it went into the wait state only occasionally. But it was a great help for production problems, so they had to let it run.
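The overlap trick described above maps neatly onto modern primitives. Here's a minimal sketch in Python (not BAL, obviously): a reader thread keeps a bounded ring of buffers full while the main thread parses. The file contents, keyword, and buffer sizes are invented for illustration, and matches straddling a chunk boundary are ignored for simplicity.

```python
import io
import queue
import threading

def overlapped_scan(src, keyword, chunk_size=192, nbuffers=4):
    """Count occurrences of `keyword` in `src`, overlapping I/O with parsing."""
    buffers = queue.Queue(maxsize=nbuffers)   # bounded queue = circular buffer list

    def reader():
        while True:
            chunk = src.read(chunk_size)
            buffers.put(chunk)                # blocks when all buffers are full
            if not chunk:                     # empty read signals end of data
                return

    threading.Thread(target=reader, daemon=True).start()

    hits = 0
    while True:
        chunk = buffers.get()                 # parse while the reader reads ahead
        if not chunk:
            return hits
        hits += chunk.count(keyword)

# Stand-in for the disk tracks full of COBOL source:
data = io.BytesIO(b"MOVE BAL-DUE TO WS-OUT.\n" * 100)
print(overlapped_scan(data, b"BAL-DUE"))      # prints 100
```

The bounded queue plays the role of the circular buffer list: when all buffers are full, the reader blocks, exactly the back-pressure the original program got from its channel programs.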
That was many years ago. I don't know if MVS was ever changed to introduce preemptive multitasking. At that time it was a strictly batch-oriented system. All I/O was executed in separate subsystems (channels). Running something interactive (like CICS) wasn't trivial at all. The best strategy was to dedicate an entire mainframe to such a task; mixing CICS and batch jobs in the same machine was a suboptimal solution. Of course, the MVS scheduler has since been improved to provide better balancing between batch and interactive tasks, and yet, as I understand it, MVS fundamentally remains a batch operating system.
Re:Everything Old Is Old Again (Score:1, Interesting)
The ITSM "industry" is growing at 50-100% CAGR, and there are standards emerging (BS15000, AS8018, ISO20000) in this area.
I guess the key ideas in ITSM are that the primary focus is on the service, not on the technology used to deliver it, and that good, consistent processes maximise service quality. These are ideas that have existed in the MF world for as long as I've worked in IT (30+ years), but are sometimes sadly lacking in "newer" environments.
Patrick Keogh posting as AC because I can't remember my password...
Re:A couple of comments (Score:3, Interesting)
Re:Good question. (Score:4, Interesting)
I'm on Mainframe and I'm "Young" (Score:2, Interesting)
Re:An idea... (Score:4, Interesting)
windows geek : unix geek
they've generally been around the block a few more times, know shit most unix geeks don't, and have quirks unix geeks don't.
"PUNK! when i was your age, i was writing operating systems... in binary... on punchcards!"
Re:Don't reboot (Score:3, Interesting)
Mainframe programmers as geeks (Score:4, Interesting)
But I bet you'll notice a core psychology that's pretty familiar to most geeks...
Re:I agree (Score:2, Interesting)
1) using the C language. There will always be a place for C, but it is an increasingly small place.
2) not understanding the difference between programmer and developer, or between code and product. A programmer writes code, a developer delivers products. Code is a widget, not a product. A product is the totality of what a customer buys. An apple is a widget; but the whole product consists of you buying that apple from a nice display in clean supermarket from a courteous cashier in a convenient location with adequate parking.
3) underestimating the importance of personal skills. Mediocre developers who communicate clearly, coordinate their actions with others, and have good hygiene are much more valuable than brilliant programmers who would rather invent their own solution than follow standards, lack manners, and act/dress weird.
Coded on mainframes, code in *nix now (Score:5, Interesting)
I've also had the opportunity to train mainframers in shops where MVS platforms were displaced by *nix based platforms. So, here is a subject that, no doubt, I can speak about:
The major factors/differences:
First, most of the mainframe programmer contingent has been moved offshore or is being done by NIV programmers. Really not much of a career path here, but OTOH, a great deal of critical systems (charge card processing, airline reservations, utility company systems) are still coded in MVS COBOL/DB2 (or IMS, a hierarchical mainframe database platform for IBM MVS). Converting these systems means you need to be able to understand them, and please don't give me a business analyst -- the days of their expertise are long gone, and the metamorphosis of systems over time means the business knowledge is embedded in the code.
Mainframers don't get grep. I've tried so many ways to impart this wonderful tool, but all I get back is puzzled stares and bewilderment for anything more complex than what could be accomplished with a non-regex, simple wildcard search.
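For the curious, here's the kind of thing a regex buys you over a wildcard search -- a small sketch using Python's re module (the COBOL-ish field names and the pattern are made up for illustration):

```python
import re

# A wildcard search can find lines containing "WS-", but a regular
# expression can express structure: here, any field name that starts
# with WS- and ends in -AMT or -DATE (pattern invented for illustration).
source = """\
05  WS-PREM-AMT       PIC 9(7)V99.
05  WS-POLICY-ID      PIC X(10).
05  WS-RENEW-DATE     PIC 9(8).
"""

pattern = re.compile(r"\bWS-[A-Z-]*-(AMT|DATE)\b")
matches = [m.group(0) for m in pattern.finditer(source)]
print(matches)                # ['WS-PREM-AMT', 'WS-RENEW-DATE']
```

A wildcard search would need one pass per suffix; the alternation does it in one.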
Globals. This is something that took me aback 6-7 years ago, when I made the leap into Unix programming and traded C/REXX/CLIST for C/Perl/etc. COBOL is structured into divisions, and all your data declarations are laid out and globally accessible. Many COBOL systems are quite complex, too, with a "program" actually being a driver for a whole hierarchy of 20-40 sub-programs, and the need to restart at a given point in processing can make things quite involved.
Approvals, checkoffs, signoffs, and procedures. They're largely absent in the Unix (and most webdev) world, but mainframers have grown accustomed to reams of authorization and approvals for even simple changes. Lead times of a week or more, along with VP signoff, QA signoff, user group signoff, fellow developer signoff, etc. Even getting a downstream system to agree to test changes may take a formal request process and a budgetary allocation of thousands of dollars. This is probably the biggest divide, and future schisms will be prevalent as data center leadership tries to impose these kinds of checks and balances on developers not accustomed to such obstacles. IBM's trouble and difficulty in the web server world offer a prime example -- a business being told that it'll take 3-4 months to get a server online, and folks who know better just can't understand that.
Lack of user tools. A big part of what I did as a mainframer was building tools, using BTS and File-Aid to allow developers and testers to create their own test beds and automate the test process. On the Unix side, the tools come with the OS, and awk, Perl, and all the other CLI goodies make automating testing a snap.
File in/file out vs. piping. Mainframers have a tendency to see everything as file-in/file-out. In a way so do *nix coders, but a seasoned *nix programmer sees the tools as all being able to feed each other, rather than step 1: filein, fileout; step 2: sort filein, fileout; step 3: filein, reportout; etc.
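The contrast can be sketched in a few lines of Python (invented records; sorted() stands in for sort(1)). The batch style materializes every intermediate "file", while the pipe style streams records from stage to stage:

```python
records = ["ZEBRA,3", "APPLE,9", "MANGO,1"]

# Batch style: each "step" produces a complete intermediate "file".
step1 = [r.split(",") for r in records]              # parse step
step2 = sorted(step1, key=lambda r: r[0])            # sort step, new "file"
report_batch = [f"{name}: {qty}" for name, qty in step2]

# Pipe style: each stage feeds the next, one record at a time
# (sort still has to buffer everything, just as the real sort(1) does).
def parse(lines):
    for line in lines:
        yield line.split(",")

def fmt(rows):
    for name, qty in rows:
        yield f"{name}: {qty}"

report_piped = list(fmt(sorted(parse(records), key=lambda r: r[0])))

print(report_piped)           # ['APPLE: 9', 'MANGO: 1', 'ZEBRA: 3']
assert report_batch == report_piped
```

Same result either way; the pipe version just never names its intermediates.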
On the age thing, most of the really skilled mainframers now, like myself, do Unix or migrated to Java. Others are awaiting retirement, or head over to six sigma teams, business analyst roles, or seek refuge in management, escaping the axe that clears the way for the offshore coder.
Paper over softcopy. Got to have that printed listing, and the sticky notes (and before that, paper clips). I still remember a senior manager telling me when I first broke in how his appraisal of a programmer was how many fingers he needed to act as placeholders when he perused a program listing.
weirdest thing: mainframers turning to 'doze (Score:3, Interesting)
You'd think they'd run from Windoze as fast as they can. But no -- perhaps because of some vague VMS gene still running around in 'doze -- they occasionally take to it like babes to the teat.
These guys do exist. I've heard one recently defend VSS as a reasonable source code control system -- when Micro$oft themselves won't touch it, and the following remark has been attributed [wadhome.org] to a M$ employee:
Another one of these mainframer-turned-M$-nut dudes tried to explain to me that M$ is "redesigning the internet to use binary protocols" because "text formats obviously don't work" and are "breaking everything". He also believes Apple should be annihilated because they stand in the way of a total monoculture -- and he sees monoculture as necessary to achieve our "Star Trek future". The fact that he foresees a smoothly running galaxy running Windoze Everywhere is just plain amusing.
Buddy, if the future is like Star Trek, I don't want any damn part of it. Diversity is Life.
Re:How did you get a mod of 5? (Score:3, Interesting)
Apparently you have no experience with the UNIX way.
What you don't seem to know is that MS Windows is utterly missing the wonderful collection of little tools available on every UNIX platform (Well, without installing cygwin -- but that's UNIX, right?). Each little tool does one little job, and does it well, and all of the tools can be connected in standard ways. So, I *can* use C++ or C or PERL or Python, but I don't *have* to -- many times all I need is sh and that wonderful collection of utilities...
And yea, I do write huge programs sometimes -- but only when that's really required to get the job done.
I don't think I ever heard of a MF being used for the types of things I get involved with, though. (back to the topic
Re:I agree (Score:5, Interesting)
1) While most programs today should probably not be written in C, I think it's still an important language to learn and understand as a beginning programmer. Most applications today use C at some level. If you understand it, you get a chance to understand how the application/framework/library you are using works, which lets you use it better. See Joel Spolsky's "Back to Basics" [joelonsoftware.com] for more on this.
3) More on this in Robert L. Read's How to be a Programmer [mines.edu].
Re:How did you get a mod of 5? (Score:3, Interesting)
Oooh.. touch a nerve did we?
You missed the point (and the humor) of the OP.
Firstly, was that a question? Secondly, nothing. Yes, you can do it all in Windows. The point you originally missed is that (s)he was referring to the whiz-bangetry of
Bottom line is, older UNIX folks are much more likely to have written the vast majority of their own code, where today we see CASE tool technicians masquerading as software engineers. I see this at work all the time with the "new generation" of IS/IT types rolling through the door who couldn't code a b-tree to save their lives and rely on prebuilt everything to give the illusion that they do something "difficult".
Got news for ya folks, my Subaru tech has more skill than most of these chumps.
Re:An idea... (Score:5, Interesting)
All of the programming I do starts out in a graphical environment, whether that is Visual Studio or Dreamweaver.
I really have no need to 'program' boxes, fonts, text areas, etc. (Which of course don't exist in a CLI anyway...)
But using something like Visual Studio you get to draw out your 'forms' and make them look pretty. Then double-click an object to put in the corresponding code.
Of course most projects require about 99% of your time in code-view- but that 1% in design view would probably take me 5-20 times longer if I was using code to lay things out.
I thought the best line from the article was:
Unix culture values code which is useful to other programmers, while Windows culture values code which is useful to non-programmers.
I've never seen it written so succinctly- but that is basically it. I only value two things when creating a program:
A- can a neophyte use it...preferably, someone who doesn't even really understand the purpose of the program.
B- will it be easy for me to come back and modify it.
I stopped worrying about 'efficiencies' and 'cycles' years and years ago. It is so nice to live in an era where it is nearly impossible for me to tax the hardware.
But that's me... maybe you're doing some video editing, or rendering, or something like that. But when I am mostly dealing with data storage and retrieval, nothing should take more than a few milliseconds.
Re:I agree (Score:4, Interesting)
I feel the same way whenever I look at the SMTP spec, the MIME spec, the SMTP email format spec, pretty much any on-the-wire specs actually...
At the very least people could prefix strings they're transmitting with the # of bytes in them, so that memory access is efficient.
Look at HTML - all ASCII. ASN.1 was invented so that you didn't have to use all ASCII for this kind of data (look at the SNMP spec if you want more details). But does anyone use it for the on-the-wire format? No.
Unixheads seem to claim that it's perfectly admirable to hack around the ASCII format for everything because it makes it easier to debug, whereas all I see is wasted entropy and bandwidth.
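The parent's length-prefix suggestion looks like this in practice -- a minimal sketch using Python's struct, assuming a 4-byte big-endian length field (one common convention, not anything the specs above mandate):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with a 4-byte big-endian byte count."""
    return struct.pack(">I", len(payload)) + payload

def unframe(buf: bytes, offset: int = 0):
    """Read one length-prefixed payload; return (payload, next_offset)."""
    (length,) = struct.unpack_from(">I", buf, offset)
    start = offset + 4
    return buf[start:start + length], start + length

# A receiver can jump straight to each field without scanning for
# delimiters -- the efficiency argument made above.
wire = frame(b"MAIL FROM:<a@example.com>") + frame(b"RCPT TO:<b@example.com>")
first, pos = unframe(wire)
second, _ = unframe(wire, pos)
print(first, second)
```

The trade-off is exactly the one in the thread: you gain O(1) field access and lose the ability to debug the stream with your eyeballs and telnet.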
Anyway...
Re:On the difference (Score:3, Interesting)
When will the unix admins learn that just because Windows doesn't do as much piping as *nix doesn't mean it's not fully scriptable? The paradigm is different, is all. In *nix, if I want to programmatically kill a process I grep for it, cut or awk out the pid, and pipe that into kill (ignoring killall). In Windows, I query the process table with a WMI query object, retrieve the returned Process object, and call Kill (actually, I'm not sure what the name of the method is, but the idea is that you call methods on objects rather than piping text to processes). They both get the job done, and there's very little that piping or object automation can't do. I'd even argue that the object method is more robust because it doesn't have to infer information from presentation-formatted data (what do you do if the column of data you want changed positions because you're using different command-line options to ps?). In either case, you're relying on developers to support the interface mechanism (stdin/stdout in *nix, IDispatch in Windows), and it's not the system's fault if an application's implementation is sub-standard.
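The fragility of inferring from presentation-formatted data is easy to show with a toy scraper over canned ps-style text (the column layout and process names here are invented). Change the column order, as a different set of ps options might, and the scrape silently finds nothing:

```python
def pids_by_name(ps_text: str, name: str) -> list:
    """Scrape PIDs from ps-style output, assuming columns 'PID COMMAND'.

    This is the grep/awk approach from the comment above: it works only
    as long as the presentation format holds.
    """
    pids = []
    for line in ps_text.splitlines()[1:]:    # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] == name:
            pids.append(int(fields[0]))      # column 1 assumed to be the PID
    return pids

# Canned output in the layout the parser assumes:
good = "PID COMMAND\n101 httpd\n102 sshd\n103 httpd\n"
print(pids_by_name(good, "httpd"))           # [101, 103]

# Same data, columns swapped (as if ps were run with other options) --
# the scrape silently returns nothing instead of failing loudly:
swapped = "COMMAND PID\nhttpd 101\nsshd 102\nhttpd 103\n"
print(pids_by_name(swapped, "httpd"))        # []
```

An object interface like WMI sidesteps this class of bug by returning typed fields rather than a formatted report.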
The widely-held belief that a *nix administrator can adequately perform the job of a knowledgeable windows administrator is just wrong. You'd laugh if I tried to suggest that a Windows admin could do the job of a *nix admin, so why is it assumed that the other way around works? And no, you're not going to install cygwin and bash and perl on my production systems unless they're absolutely critical (ie, a web server that needs to serve perl-based CGI scripts). No, helping you do your job the wrong way (*nix admin attempting to be a windows admin) is not "mission critical".
Re:Everything Old Is Old Again (Score:3, Interesting)
Eventually CMS went away - the programmers moved to TSO, the functional users (SCT Banner's term for "end users" or "lusers") moved to a combination of TSO/ISPF, Model204 applications, and web apps. But our mainframe change control processes still worked.
Enter Unix, in the guise of SCT Banner. Don't let anyone kid you: it's an ERP, and a big hairy mess. Many of our programmers are "back to square 1", having to deal with C, shell scripts, perl, etc. rather than JCL, PL/I, and User Language. Cobol is somewhat familiar ground, although "make" is a weird construct to them, and shell script drivers are copied by rote (they're used to DMS/CMS or ISPF applications building compile JCL).
Change control? Hah. We're so busy trying to get acclimated to the environment that the closest we have right now is ".old" and ".save" files lying around. I've installed SVN, but I'm too busy fighting fires to write the documentation, particularly since I'm one of the few people here who truly *is* Unix-literate (but I can still build a CP nucleus if I have to!). I spent 2.5 hours tonight unsnarling Banner Job Submission and its interaction with Appworx.
Doug
Good Mainframe Programmers... (Score:2, Interesting)
Mainframes, on the other hand, have no fancy interfaces; if any, it's a TEXT (EBCDIC) world. The mainframe is a no-frills world and strictly a business proposition. In a word: strictly no nonsense for you to hack with.
Re:#1 Cultural Difference (Score:5, Interesting)
They don't call it reboot, they call it a "re-IPL" [Initial Program Load] and depending on the machine it takes up to 30+ people, each with specialized knowledge about a specific part of the process. [you can mod me funny, but THIS IS NOT A JOKE]
Unix guys reboot the system occasionally.
Only because of a hardware upgrade, and only because the technician convinces them it REALLY DOES need to be turned off to add more RAM or a (non-hot-swap) disk drive.
Windows guys reboot their machine several times a week.
"Several" in this context is a number greater than ten. A boot often lasts through the day, but not always. But I remember the 3.1 days (it shudda been called "three point one over six point two two"), it was boot-in-the-morning and reboot-after-lunch, as well as many other times.
Re:a few observations (Score:2, Interesting)
Need proof? Grab yourself a copy of the Hercules mainframe emulator and MVT (google for it). MVT is MVS's daddy. Give it a go yourself.
Long live Poughkeepsie (but watch out for the submarines in the Hudson river)! Now, where did I leave my cane???
Difference is scope of understanding (Score:1, Interesting)
Unix programmers can and often do have some understanding of every aspect of the Operating System, and have mastered several current and useful computing languages.
Windows programmers can and often do have some understanding of every Microsoft product and have mastered several GUI-based integrated development environments.
Re:Don't reboot (Score:3, Interesting)
Seems an old support person, a former mainframe operator, told me a story of a largish corporation that had a whole *CLUSTER* of mainframes, and an odd issue that kept crashing a mainframe every now and then. Apparently the in-house people couldn't figure it out, and the vendor wanted to take things down for a little while to work on it. Of course, being a 7x24 shop, management wouldn't have it. So they added another machine to the cluster. Don't ask me how or why, but the extra machine made sure that the processes kept going while one node of the cluster fell to its knees and re-IPLed. Cheaper than shutting down a production shift.
not only was he insightful, I'd mod YOU down (Score:3, Interesting)
Insightful, how about idiotic. What can you program in Unix that you can't in Windows.
That wasn't the point the original poster was trying to make. The point is HOW you program in Un*x vs. Windows. Nobody will argue that you can do anything you want with either platform. However, a great many people would argue that the "UNIX way" is FAR more elegant.
In Windows you have C and C++ just like Unix. Java, Perl they are all there as well.
This statement really demonstrates your inability to comprehend the differences. To extend the "building toys" analogy, C/C++, Java, Perl et al are NOT the pieces, they are the plastic/wood/metal with which the pieces are made. You could make lego bricks out of the latest space-age carbon fibre composites, but they would be useless if the "bumps and holes" on each brick were different sizes and wouldn't lock together.
Now the platforms may be different, but largely they are more similar than not in a programmer's ability to make a program perform a required task.
There I'd really have to disagree with you. There are things that Un*x style architectures do easily that are arduous to perform in the Windows environment. Similarly, there are things Windows excels at. IPC was really much more refined under UN*X--some might say Windows works with threads so well because it has to since its IPC abilities have historically sucked--really in UN*X it is much easier to get various components to play nicely with each other yet keep their resources separate and protected. OTOH, there are reasons Windows-based games are so far ahead besides simple market share--graphics interfaces are one of those "funny shaped blocks" in Windows that is very well suited to its task.
Really, that Lego analogy is very apt indeed. UN*X is very uniform in how it works, just like a bucket of classic Lego bricks. You have a library of pipes, sockets, shared memory, etc. that is very standard across all programs and extends all the way to the user interface (you can pipe all manner of programs' input and output together right on the command line, to a degree not yet seen in production releases of Windows). Once you get the hang of the UN*X Way you can snap these blocks together easily to suit your needs.
For all the "object orientedness" of Windows, there is not that level of uniformity of interfaces to make those reusable objects work together. Instead, you have an overly complex framework in the form of DDE/OLE1/OLE2/COM/DCOM that was largely designed to accommodate disjointed, inconsistent interfaces between various components/applications. This is something like the "licensed from the movie" Technic sets with all the little odd-sized rods/axles, funny-shaped blocks, special wheels and so on. There are many little sets where the pieces fit together very nicely in a few commonly required configurations, but when the time comes to make your own creation, not in the instruction booklet, you become frustrated with the useless pieces. For many kids the six or so really cool things you can build are good enough; for the 10% "most geeky" kids it would get boring quickly.
I can't say I really know for sure what a "mainframe toy" would be -- mainframes don't seem like fun at all. I think "mainframers" may have forgotten what childhood was like, or perhaps hatched from a pod fully grown, who knows. I do not have a lot of exposure to that philosophy/culture. If I HAD to pick a toy that was most mainframe-like I might say Meccano, because, like UN*X, the pieces are very uniform in structure, but you have to tediously fiddle with those little screws to put anything together, just like a mainframe -- you have your "special screwdrivers" (arcane knowledge) and have to follow tedious processes to get things done. Or perhaps it is like building a birdhouse with popsicle sticks, where you have to tediously glue the pieces together with Elmer's glue, wait for it to dry bef
Windows Admin has bad name from NT 4.0 Days (Score:5, Interesting)
When I worked at a decent-sized MS Partner, the MS Way was "point-and-click." They were going to do a 10,000-user migration by hand, because that was the MS Way. I grabbed the NT 4 Resource Kit and whipped up some batch scripts to do the parsing, and the Windows guys were amazed.
Windows has some very intelligent scripting, but it's somewhat hidden because of the NT 4 days, which weren't short, and which caused a problem. Older PC guys knew batch scripting, which kind of disappeared in the NT 4 days because the tools weren't readily available (being buried in the Resource Kit meant you couldn't count on them being on the machine). The newer object-oriented programming method is cool (and absolutely preferable to parsing text streams, which as you said depends on an unchanging text output from a program, which is very constraining), but you need a new generation of Windows geeks.
Unfortunately, hacking on Windows is about as "cool" as a Mac was 10 years ago, so your computer geeks just aren't learning it. This doesn't change the fact that good admins are critical, but there is a perception problem. Just like Novell became perceived as "dead" because nobody saw it, because the machines didn't crash.
The WMI/AppleScript approach (as in, thick self contained apps that are callable) is perfectly legitimate.
The other problem you have here is what happened to the MCSE in the late NT 4.0 days. When I was just finishing my MCSE, all the MCSE study guides were coming out... teaching to the test, and MS didn't upgrade the tests fast enough. Stuff that took me weeks reading the NT 4 Resource Guide was available in a condensed 4-hour book. Combine that with the MCSE courses, which also taught to the test, and the whole industry got messed up. People hired cheap "paper" MCSEs, and people got used to admins not being able to program. Finding a Windows admin who truly gets it is rare, because there is too much dependence on unknowledgeable paper admins, so people assume all Windows admins suck.
Alex
Re:The Difference (Score:2, Interesting)
Mainframe coding reborn on the web (Score:2, Interesting)
Re:The Difference (Score:2, Interesting)
Now, allow me to provide THE final word on the difference between a thread and a process:
Process
An address space with one or more threads executing within that address space, and the required system resources for those threads.
Thread
A single flow of control within a process. Each thread has its own thread ID, scheduling priority and policy, errno value, thread-specific key/value bindings, and the required system resources to support a flow of control. Anything whose address may be determined by a thread, including but not limited to static variables, storage obtained via malloc(), directly addressable storage obtained through implementation-defined functions, and automatic variables, are accessible to all threads in the same process.
Those would be quotes from The Open Group Base Specifications Issue 6, Definitions.
How does that apply to this discussion? It tells us that you're confusing the specification with the implementation. NT can also create processes that share address spaces, but you can't do so with any API available to programmers (did you know that NT can fork, too? but you also can't do that with any API available to programmers); this is no different than Linux and other flavors of Unix. A process by definition has a separate address space, while threads of the same process by definition share an address space. In other words, even on Unix, processes (if they do any IPC) will be MUCH slower than threads running in the same process (for further details of why this is, see my longer post from earlier).
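The spec language above ("anything whose address may be determined by a thread ... is accessible to all threads in the same process") is easy to demonstrate directly. A minimal Python sketch: four threads mutating one module-level variable, all seeing the same storage.

```python
import threading

counter = 0                      # ordinary global: one copy per process
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:               # shared memory needs explicit locking
            counter += 1

threads = [threading.Thread(target=bump, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four threads mutated the same variable because they share the
# process's address space. Separate processes would each have had a
# private copy and needed IPC to combine results.
print(counter)                   # prints 4000
```

That shared address space is also why thread communication is cheap and process IPC is not, which is the performance point made above.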
The difference IMHO (Score:5, Interesting)
The first difference is the work running on a system. Unix & Windows development typically takes place on dedicated machines; the changes are then applied to a separate production machine. On a mainframe, development & production are often the same LPAR (Logical Partition) or the same physical box. Because of this, development gets low priority. If you run out of juice on a Unix/Windows box, you either get a bigger one or you cluster them together. On the mainframe, you either redesign to run more efficiently or you start shelling out $$$ for a bigger machine. Normally your only choice is the redesign.
Software on a mainframe is horribly expensive, and the faster the machine, the more it usually costs. This is an old way of spreading the pain of software development: the big guys pay more because their machines are faster, while the smaller guys get to pay less. Imagine if Microsoft decided to charge a lot less for Office if you ran it on a P5 instead of the newest processor. Some software on Windows is licensed by the CPU, but I've never heard of the speed of the CPU being a factor. Do you think you'd get that fancy new PC if the software would cost 10x as much?
On a mainframe software development is a slow process with lots of checks along the way. Nobody just "slams in a change" unless they are either 100% sure it will work and it fixes a critical problem that is impacting business, or they want to be fired. Banks frown heavily on downtime. Unix & Windows systems seem to be more tolerant of this (with the odd exception being email - how email became the most important application is beyond me).
Once you develop, debug, and get a mainframe program running, you can usually forget about it. There are programs running on mainframes today that haven't changed in 30 years. That is a pretty good return on investment. I've dealt with both, and it seems to boil down to "pay me now or pay me later". Installing stuff on a mainframe takes a lot of up-front work, but if you do it correctly you can expect it to work well when you are done. Windows programs are easier to install and develop, but you have the constant reboot issues, memory leaks, and just plain annoying mysteries to deal with.
Mainframes (in my opinion) have far far far superior system diagnostic tools. If a program is running slow I can determine if it is CPU, disk, database contention, or any other resource shortage. This is mainly because there is so much running on any given mainframe that system diagnostic tools need to be very good. The tools on Unix and Windows are good but they don't need to be as complete because the environments are far less complex.
Program debugging tools on a mainframe can be awful. Interactive debuggers are the exception, not the norm. They tend to take up CPU, which drives up software costs, which the finance department hates. I've seen good interactive debuggers, but they suck CPU and make the finance department hate you.
Batch controls on a mainframe are far superior to Unix or Windows. This is mainly because the mainframe started life as a batch system. Once you understand and master JCL, it is really a good system. Batch on Unix, and especially Windows, is more of an afterthought. You can run batch, but the tools to monitor failures, schedule dependencies, and validate results are not as good.
A programmer must know how a program is going to run on a mainframe long before running it. You need to know how much disk, CPU, and memory you need, and how many lines of output you are going to produce. If you exceed this by too much, your program will be automatically cancelled. This is because you are not the only one using the system, and if you exceeded what you said you needed, your program could have a problem. That can be painful, but it stops program loops if done properly.
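Unix does have a rough analogue of those declared limits in setrlimit(). A speculative sketch (POSIX only; the one-second allowance is invented) in which the kernel cancels a runaway child once it exceeds its declared CPU budget, loosely like a job blowing past its estimate and being cancelled:

```python
import resource
import signal
import subprocess
import sys

def limit_cpu():
    # Declare up front how much CPU the job may use -- loosely the role
    # of the declared limits described above. One CPU-second soft limit,
    # two seconds hard.
    resource.setrlimit(resource.RLIMIT_CPU, (1, 2))

# A runaway loop, standing in for a program that exceeded its estimate:
proc = subprocess.run(
    [sys.executable, "-c", "while True: pass"],
    preexec_fn=limit_cpu,                  # applied in the child, POSIX only
)

# The kernel delivers SIGXCPU when the soft limit is exceeded, cancelling
# the job automatically -- no operator intervention needed.
print(proc.returncode == -signal.SIGXCPU)
```

Unlike the mainframe's scheduler, though, nothing here makes you *estimate* the budget honestly before the job runs; the limit is opt-in.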
The "just reb
Re:Cats (Score:4, Interesting)
Bits of it are marvellously elegant and I struggle to think of clean ways of implementing equivalent things within a UNIX-like OS. Other bits seem oddly like DOS or embedded OSs such as vxWorks (more precisely, DOS and vxWorks sometimes look a bit like VMS). And then, if you install UNIX-originated software such as TCPware on VMS, bits of it /do/ start looking like UNIX.
I was able to support TCPware on UNIX purely because many of the tools were ports or recreations of key parts of the BSD IP stack. I was even able to help a customer set up PPP when none of our experienced TCPware engineers could, because it was using pppd, as on Linux.
The most annoying things about VMS - to a UNIX geek - are a) no 'cd' command b) apparent lack of relative paths c) system-wide date/time a la Windows, except in parts, when TCPware is installed (making for a very confusing experience around DST changeover days, especially if you have NFS in the mix too).
Re:I agree (Score:3, Interesting)
As are ASN.1 parsing vulnerabilities [google.com] because ASN.1 is so hard to parse that nearly everyone who uses it ends up using the same flawed ASN.1 parsing codebase.
Re:Programming in COBOL (Score:3, Interesting)
When they learned to code, people did not have computers with compilers available at home. You learned COBOL because that was what your company's application was written in. It was written in COBOL because, when development began, COBOL (or RPG) was the only language suitable for an application that handled valuable data rather than calculations or low-level hardware control.
You didn't play around or experiment, because you were working with data that was very valuable to other people.
On top of that comes what compares to a Mac mentality: being in love with an increasingly marginal phenomenon, and staying with the once handsome, but now bitter and abusive, spouse, blind to the limitations, or even seeing them as strengths. You can't teach modern programming practices to people who run 5250 or 3270 emulators on 3 GHz Pentium IV PCs and try to explain the superiority of EBCDIC to you.
Re:The Difference (Score:3, Interesting)
No, the whole point of an operating system is to provide a stable programming target and perform resource management.
That's what resource management is. If programmers cooperate with each other to share resources, then resource management is done automatically. But with any modern operating system this isn't necessary. Each program is separated from the other programs so that it can behave as though it is alone in the computing ether, as though it owns all the memory, all the processor, all the hard drive, etc.
I suppose saying that it protects individual programmers from the actions of each other was a little bit overboard. Really it is designed to protect individual programs from the actions of each other. As for protecting the individual programmers, in a project with multiple programmers, that's really the job of the programming language and development tools.