
What is Mainframe Culture? (691 comments)

An anonymous reader asks: "A couple of years ago Joel Spolsky wrote an interesting critique of Eric S. Raymond's The Art of Unix Programming, wherein Joel provides an interesting (as usual) discussion of the cultural differences between Windows and Unix programmers. As a *nix nerd in my fifth year managing mainframe developers, I need some insight into mainframe programmers. What are the differences between Windows, Unix, and mainframe programmers? What do we all need to know to get along in each other's worlds?"
This discussion has been archived. No new comments can be posted.

  • simple to explain (Score:2, Informative)

    by Anonymous Coward on Monday July 18, 2005 @07:59PM (#13099389)
    Windows developers half-ass everything. They curl up in a ball and cry if they can't use an IDE to do everything.

    Unix programmers have to separate the program into 60 different modules that each do their own thing and are called by a main program that strings them all together to attempt the task; they AVOID GUIs like walking death.

    Mainframe programmers will take weeks to decide how to start the project: endless flowcharts, arguments about the architecture, and finally, when code is written, it will take months on end of testing, well beyond reason, before they let you even see it run.

    good luck
  • Typical Spolsky (Score:1, Informative)

    by Anonymous Coward on Monday July 18, 2005 @08:00PM (#13099399)
    It's a lot of great insights into something the author wasn't saying. He rips the idea that a program should output well-formatted, parsable text and be generous with what it accepts as a general rule, and pretends it's presented as an absolute rule.
    But this isn't always considered good, examples:
    BSD::useradd
    linux::mke2fs
    linux::rpm #provides a lot of pretty output options
    linux::wget
    linux::proz

    In fact, in other chapters Raymond talks about the 7/10ths-of-a-second rule, which says that the longest your program should stay quiet, usually, is 7/10ths of a second. It makes sense, especially on the command line and slightly less so in the GUI, because that's about how long the most impatient people can stand to wait. And it's about how long it takes people to think "teh omg it's teh uberlox0red."
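The 7/10ths rule is easy to demo. Here's a minimal sketch (my own illustration, not code from TAOUP): a wrapper that prints a reassurance line whenever the wrapped call stays silent for longer than 0.7 seconds. The name `with_progress` and the message text are made up for the example.

```python
import sys
import threading
import time

QUIET_LIMIT = 0.7  # seconds of silence before reassuring the user

def with_progress(func, *args, **kwargs):
    """Run func; if it stays quiet past QUIET_LIMIT, print a notice."""
    done = threading.Event()

    def nag():
        if not done.wait(QUIET_LIMIT):  # timed out: func is still busy
            print("working...", file=sys.stderr)

    watcher = threading.Thread(target=nag)
    watcher.start()
    try:
        return func(*args, **kwargs)
    finally:
        done.set()
        watcher.join()

# A fast call stays silent; a slow one gets the reassurance line.
print(with_progress(sum, range(1000)))   # 499500, no notice printed
with_progress(time.sleep, 0.9)           # prints "working..." mid-sleep
```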

    I've read Joel's book, by the way, and he seems to contradict himself a lot amid many great insights. In fact, he's a very smart guy with amazing insight; it's connecting it all into a final conclusion, and removing the thoughts that were just wrong, that he's terrible at.
    In other words: Provide your own conclusion for Joel's ideas; his conclusion is almost invariably based on an incomplete set of facts.

    The differences between Unix and Windows cultures are many, and the technical differences show up. In fact, Joel talks about that in his book (which is just his blog on paper). But of course, here he says they're technically the same (sure, if you only look at kernel-level things and gloss over higher-level stuff at such a distance that a dog looks like a cat).

    By the way, this is so old it's in his book!
  • Re:Interoperability (Score:2, Informative)

    by $RANDOMLUSER ( 804576 ) on Monday July 18, 2005 @08:10PM (#13099485)
    In my VAX/VMS days, we'd type these incredible "FOO /INPUT=BAR /OUTPUT=BAZ /NOEVERLASTINGGOBSTOPPER /COKEBOTTLE /SINCE=10-17-82" type commands, and when the DCL prompt came back, we'd scream "It Loves It!!!!!".
  • A couple of comments (Score:1, Informative)

    by Anonymous Coward on Monday July 18, 2005 @08:24PM (#13099584)
    If you're from Windows or Unix, you think of systems as: I have this much data space, I write these programs, I execute this, etc.

    MF world is very different.

    Dataspaces are shared by default - and are often owned by the application type rather than a user. And space is measured very differently (often blocks rather than kbytes).

    Not that this bothers the MF programmer. Their job is to write a script (it's closer to a script than a program) to rip through dataspace A, do something, and print the results to B.

    Tools? Provided by the OS. And they take more study to master than Unix tools (IMO).

    If you think about data structures (or classes), you're in a different world than the MF. They think about records (rows in that dataset).

    Ever write a sort routine? Know the difference between bubble sort and quicksort? The average MF programmer doesn't. He calls the system-level command SORT and he's done.
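The SORT point in a nutshell: the CS grad writes the O(n^2) loop, the MF programmer calls the system facility. A hedged Python sketch (using `sorted` only as a stand-in for a utility like DFSORT; no real JCL here):

```python
def bubble_sort(records):
    """The hand-rolled O(n^2) sort a CS class teaches you to write."""
    recs = list(records)
    n = len(recs)
    for i in range(n):
        for j in range(n - 1 - i):
            if recs[j] > recs[j + 1]:
                recs[j], recs[j + 1] = recs[j + 1], recs[j]
    return recs

def mainframe_sort(records):
    """The MF approach: don't write it, call the system facility.
    sorted() stands in here for invoking a vendor SORT utility."""
    return sorted(records)

data = [42, 7, 19, 3]
print(bubble_sort(data))      # [3, 7, 19, 42]
print(mainframe_sort(data))   # [3, 7, 19, 42]
```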

    And don't get me started on EBCDIC
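On the EBCDIC front, the pain is easy to show: Python ships IBM EBCDIC code page 037 as the `cp037` codec, so you can compare byte values directly (a minimal sketch):

```python
# EBCDIC vs ASCII: the same text, very different byte values.
# Python ships IBM EBCDIC code page 037 as the "cp037" codec.
text = "HELLO 123"

print(text.encode("ascii").hex())   # 48454c4c4f20313233
print(text.encode("cp037").hex())   # c8c5d3d3d640f1f2f3

# A classic gotcha: letters are NOT contiguous in EBCDIC --
# 'I' is 0xC9 but 'J' jumps to 0xD1, so naive range checks break.
assert "A".encode("cp037") == b"\xc1"
assert "J".encode("cp037")[0] - "I".encode("cp037")[0] != 1
```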
  • by BrynM ( 217883 ) * on Monday July 18, 2005 @08:30PM (#13099626) Homepage Journal
    Mainframe culture and rigorous "change control,"
    The best example of this is Documentation. From operator logs to the big IBM books - here you will find everything. Something named ICKDSF messing with your process? Go into the computer room or grab an IBM CD and look it up. Why did your process crash last night? Look at the operator log and find out it had to be killed because of a tape problem.

    Lack of documentation is what irks me most about the PC world.

    Now don't IEFBR14 reading Slashdot right after work so much ;)

  • by DynaSoar ( 714234 ) * on Monday July 18, 2005 @08:31PM (#13099631) Journal
    Windows programmers work from the assumption that their job is to protect users from the machine.

    Mainframe programmers work from the assumption that their job is to protect the machine from users.

    Unix programmers work from the assumption that they're the users and the only protection they or anyone else needs is knowing enough about what they're doing. They also work from the assumption that "enough" means "as much as I know", no matter how much or little they know.

    2/3 of Macintosh programmers think the same as Windows programmers. The other guy doesn't think about it.

    I'm still an Apple II programmer. I still think it's a good idea, and necessary, for everyone to be able to program down to bare metal, because it's only for showing off what you can do since everyone is going to do their own programming anyway. At this point I believe that the only way I'll ever see any Apple II op code coming from anybody else would be if that's what they decode from the SETI signals.
  • by WindBourne ( 631190 ) on Monday July 18, 2005 @08:44PM (#13099714) Journal
    First, note that Windows/mainframe programmers tend to be CISers, while Unix ppl are CS/EE. The differences between CIS, CS, and EE for any given project:
    • The project will take 1 year with 40 ppl working on it.
    • After 1 year, it will take another year if they double the team; otherwise, cancel the project.
    • At the end of 2 years, the program's run time will be 1 day to 1 week (just a relative measure).
    • A number of bugs will be found which, by hiring more ppl, will take only another 1-2 years to iron out.
    • Every line of code will be documented, but many of the comments will be incorrect, such as i += 5; // call subroutine a()
    • In addition, somebody will have attempted to use Hungarian notation, but most of it will not match the code.
    • For the CISers on the mainframe, there is more of a mindset that the system cannot be allowed to be down for any reason.
    • For the CISers on the Windows boxes, the mindset will be that if the users want 24x7 uptime, they need to talk to the system admins/operators or the maintenance coders.

    For the CSers, mostly on Unix, the same project goes like this:
    • It will take 3 ppl exactly 1 month.
    • At the end of that month, it will take just another month, repeated about 3 more times (a grand total of 4).
    • At the end of this, it will run in under 1 minute (relative to the above).
    • However, just before the end of that 1 minute, it will crash.
    • Upon your complaining, they will tell you to write a bug report. Of course, when you do, they announce that they will get to it when they have completed their current 1-month project (see above for the timeline).
    • The docs will be few but accurate. Spartan is very much the word until a documenter is hired.
    • The code will be the cleanest and have the fewest bugs, but all the bugs will be wicked hard to find.

    EEs are interesting.
    • After studying the situation for several months, they will tell you it will take 15 ppl 4 months. On the 4th month, at the stroke of midnight, they will deliver the code.
    • It will run in about 10 min. - 1 hour of time. There will be some minor bugs.
    • If you read it, it will be sloppy. In addition, several sections will have been moved to horribly designed assembler (or possibly even a hardware solution). That lets them get around bad designs.
    • There will be reams of docs. They will be very accurate, but they will go into a 5-paragraph description of why i += 1 was used in place of ++i (SHORT ANSWER: because ++ was a shortcut on the old PDP-7 and was only used for pointers, whereas += conveys the sense of a non-pointer).
    • When you want to file a bug report, it will cost you money to file it. Then the actual bug fix will cost more than the original code, but slightly less than having somebody else do it.

    So how do you manage them?
    CISers: lousy design/code, but good rapport with customers. Politicians.

    CSers: great design/code, lousy timelines/documents. Lousy with customer support.

    EEers: great timelines, lousy code design, but will code around the issues. Long-term maintenance is bad. Professional with customers (like mainframers).
  • Re:my 2 cents (Score:5, Informative)

    by Anthony Liguori ( 820979 ) on Monday July 18, 2005 @08:50PM (#13099750) Homepage
    Unix is process-centric. Windows is thread-centric. This is also an artifact of GUI programming. For example, the GUI should never stall by processing a request. Instead it should fork off a thread.

    This has nothing to do with GUI programming and everything to do with the cost of creating a process on NT. People began abusing threads because it was so painful to use processes.

    Most unix apps don't use threading. This is not for lack of threading or knowledge of how to use threads. It's simply that processes are as cheap as threads and offer more protection.
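The processes-are-cheap point can be sketched with `os.fork` (POSIX-only, so this is a Unix-side illustration; `run_in_child` and `corrupt` are names invented for the example): a buggy worker in a forked child cannot scribble on the parent's memory, while the same bug in-process can.

```python
import os

state = {"balance": 100}

def corrupt():
    state["balance"] = -1  # a buggy worker stomping on shared data

def run_in_child(mutate):
    """Fork a child, run mutate() there, wait for it. POSIX-only sketch."""
    pid = os.fork()
    if pid == 0:              # child: its own (copy-on-write) address space
        mutate()
        os._exit(0)           # exit immediately, skip parent cleanup
    os.waitpid(pid, 0)        # parent: reap the child

# Same bug, two isolation models:
run_in_child(corrupt)
print(state["balance"])       # 100 -- the child's write never reached us

corrupt()
print(state["balance"])       # -1 -- in-process, the damage is shared
```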

    Nearly all Windows development involves a GUI. This is usually done with an event-driven API. On the other hand, many Unix geeks probably never program in the event-driven paradigm.

    Long before Windows existed, X-Windows had a callback, event driven mechanism for GUI programming. This resulted in considerably better performance than the message mechanism used in Win16 (which was carried over to Win32).

    The reason for using messages in Win16 was simple: there was no real multitasking. Context switches didn't exist, so there was no difference between having a process handle only the events it cared about versus every possible event (with a standard default handler).
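The message-pump-with-default-handler idea reads roughly like this in miniature (a toy sketch; the event names and `default_handler` are invented, loosely mirroring how unhandled Win16 messages fall through to a default window procedure):

```python
# A toy message pump: register handlers for the events you care about;
# anything else falls through to a default handler, the way unhandled
# window messages fall through to a default window procedure.
def default_handler(event):
    return f"ignored {event}"

HANDLERS = {
    "click": lambda event: "button pressed",
    "close": lambda event: "shutting down",
}

def pump(queue):
    """Dispatch each queued event to its handler, defaulting the rest."""
    return [HANDLERS.get(event, default_handler)(event) for event in queue]

print(pump(["click", "resize", "close"]))
# ['button pressed', 'ignored resize', 'shutting down']
```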

    The problem with most Windows developers is that they don't understand the history of Windows. They pick up things like "event-driven paradigm" as if it was some great innovation that makes their lives easier. That my friend, is the power of marketing :-)
  • Mainframe culture (Score:5, Informative)

    by asdef ( 261823 ) on Monday July 18, 2005 @09:07PM (#13099876)
    First, I am not your typical mainframe admin/programmer, as I am 27 and a relative expert on mainframe constructs like JES, JCL, SMF, SMS, and RACF. From my 5 years of experience working in the mainframe operations group, I've noticed the following differences and similarities compared with Linux (I'm a home user) and Windows (my work laptop):

    - The mainframe is highly structured in its change management procedures. This is an artifact of how long mainframes have been around. The procedures support the mainframe's goal of 24x7x365 uptime.

    - Due to the high level of structure, there are usually at least 3 groups (oftentimes many more, depending on the size of the organization) responsible for the mainframe: system programmers, operators, and application programmers. Each fills a very specific role in the operation of a mainframe system.
    System programmers are typically responsible for the health of the operating system and for installing new system-wide applications from vendors. The nearest match for a system programmer is a Unix admin or Windows admin.
    Operators provide the 24x7x365 support aspect, making sure that the hardware is healthy, jobs are running, and important business applications remain available or come up on schedule. Operators may also be responsible for the scheduling package and security. Again, in the Unix world this is equivalent to the system administrator. The operator position originated because mainframes at one time required people to run around and physically mount tapes and disk drives, and despite automation that now takes care of these tasks, the position remains.
    The final group, application programmers, are what people most frequently think of when talking about a mainframe. They tend to work in languages like COBOL, CICS, DB2 stored procedures, and on occasion Assembly. Their role is to produce the online and batch applications that process the transactions that make the company money. App developers on the MF tend to be very careful about testing code to ensure the proper result, because first it could hurt the bottom line, but more importantly the operations group won't let it run in production without assurances that it will run smoothly.

    - Mainframes have been built from the beginning for reliability, availability, scalability, and performance. IBM accomplished this by virtualizing everything. This virtualization allowed IBM to have duplicate pieces of hardware internally double-checking each other. For example, every instruction is run through two physical CPUs at the same time, and if the results differ, the diagnostic code kicks in, disables the CPU that's incorrect, and calls IBM to replace it. This approach to RASP is very different from what you see in the Windows and Unix world, where multiple machines are load-balanced with geographic redundancy, and if 1 box fails, the others pick up the load.

    - Operationally, in a Windows or Unix/Linux world, if you need to run something you just run it. In the mainframe context, you submit it as a job to JES. JES (Job Entry Subsystem) is a resource manager that manages all the mainframe resources for executing jobs and tasks. The biggest difference is that on a mainframe your job or task may not start running immediately if resources are not available, unlike Unix or Windows, where it will start taking time away from already-running tasks.

    - Development on the mainframe is usually given very low priority for resources, in order to ensure that the production onlines and batch get everything they need. Where Linux and Unix have 40 levels of priority (20 to -20), the mainframe has virtually unlimited priorities, because the system programmer juggles CPU, DASD (disk, to the uninitiated), tape, and resource-wait information to determine the real-time priority of a particular task, using relatively sophisticated algorithms to do so. Because of this, the system can be tuned very specifically to give the most resources to the tasks that earn the company the most money.
  • by Chanc_Gorkon ( 94133 ) <gorkon&gmail,com> on Monday July 18, 2005 @09:11PM (#13099896)
    I beg to differ. We have some pretty kickass pSeries machines and they don't come close to what our old Multiprise could turn out. Got to drop some students for non-payment? 2,000 to 4,000 records dropped in 20 min on the mainframe vs. 3-4 hours on the big UNIX box. Mainframe systems EXCEL at I/O. They SUCK when they try to do computationally heavy tasks. The mainframe we bought in 2000 did not even have floating point processors and it still performed better than our current solution. Mainframes DO kick very much butt in I/O-bound processing.
  • Re:The Difference (Score:4, Informative)

    by schon ( 31600 ) on Monday July 18, 2005 @09:39PM (#13100075)
    Isn't the whole point of an operating system to allow programmers to make that assumption?

    No, the whole point of an operating system is to provide a stable programming target and perform resource management.
  • Two good sources (Score:3, Informative)

    by jbolden ( 176878 ) on Monday July 18, 2005 @10:28PM (#13100353) Homepage
    Thought I'd mention two sources for this that I think are worthwhile.

    The first is a great article [perl.com] about the differences between mainframe programmers and Unixy programmers. The second is a book [amazon.com] designed to teach mainframers to operate in a Unix environment. The article is definitely worth a look for anyone interested in this topic.
  • Mainframe hacks (Score:1, Informative)

    by Anonymous Coward on Monday July 18, 2005 @11:02PM (#13100503)
    Take a magic marker and draw a diagonal line across the top of your deck of punch cards so that if you drop them you can put them back in order.
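The magic-marker trick had a software twin: fixed-format decks conventionally reserved columns 73-80 of each 80-column card for a sequence number, so a dropped deck could be re-sorted mechanically. A hedged sketch of that convention (the COBOL lines are just filler):

```python
import random

# An 80-column card: code in cols 1-72, sequence number in cols 73-80.
def make_card(text, seq):
    return f"{text:<72}{seq:08d}"

deck = [make_card(line, (i + 1) * 100) for i, line in enumerate([
    "       IDENTIFICATION DIVISION.",
    "       PROGRAM-ID. PAYROLL.",
    "       PROCEDURE DIVISION.",
    "           STOP RUN.",
])]

dropped = deck[:]
random.shuffle(dropped)                             # oops -- dropped the deck

restored = sorted(dropped, key=lambda c: c[72:80])  # the card sorter's job
assert restored == deck
```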

    Versioning was a lot easier on mainframes. You just applied the updates when and where you wanted them. Merging something from a branch to the main trunk in CVS is a nightmare if you didn't get the tags just right or forget to put tags in to begin with.

    Mainframe programmers generally never get root privileges so they learned how to get along without it. Windows programmers are on the other extreme. They can't do anything without root. Unix is somewhere in the middle.

    Mainframes have something called system programmers which is something like a unix admin but different. It's the same in the sense that what they did required special privileges. Mainframes always had much better security and much more scheduling and resource control than unix. Unix still looks like a toy in that regard.

  • by Anonymous Coward on Monday July 18, 2005 @11:30PM (#13100644)
    This post was so full of hooey I don't know where to begin.
    MVS (batch oriented)
    Have you never heard of TSO? CICS? IMS/DC?
    COBOL provided some basic routines, but do to something interesting (like asynch I/O
    Merde. The access methods (I/O services provided by the operating system) have ~always~ supported multiple buffers, chained scheduling and other goodies. MVS and IBM big iron in general is really really good at I/O overlap.
    MVS (at that time) didn't have anything like preemptive multitasking
    Complete rubbish. MVS has a variety of dispatching algorithms including time-slicing, MTTW, task/address space priority, and others. The preemptive dispatcher goes all the way back to MVS's predecessor twice removed, OS/360 -- comfortably older than you likely are, Porky.
    I was budding assembly language programmer and even took a course at university where we had to write our own operating system, entirely in BAL/370
    Curious, since there was no such thing as "BAL/370". There was a BAL -- "Basic Assembly Language" (no macros) for BPS, one of the first early monitors for S/360.
    All I/O was executed in a separate subsystems (channels)
    You idiot! Of COURSE all I/O is executed outboard, in the channels. That's why the m/f systems are so good at it.
    To run something interactive (like CICS) wasn't trivial at all. The best strategy was to dedicate entire mainframe to such task. Mixing CICS and batch jobs int the same machine was suboptimal solution
    Okay, I've figured it out. You took a summer internship at an insurance company, and you augmented your l33t Apple skillz with some 370 assembly language. The workaday COBOL guys lionized you and you became quite full of yourself. The insurance company was running storage constrained, or was running their channels above 80% utilization (post-XA) and response time was an issue -- and you naively assumed that the MVS dispatcher was somehow at fault.
    as I understand, MVS fundamentally remains batch operating system
    You understand nothing.
  • Re:The Difference (Score:5, Informative)

    by afidel ( 530433 ) on Monday July 18, 2005 @11:50PM (#13100733)
    Unisys sells 32-CPU Windows boxes, HP sells up to 128-CPU Superdomes capable of running Windows 2003 Datacenter Edition, and there are probably some others I'm not aware of. Since quite a few companies have high-end systems using Itanium 2 processors, there's very little reason not to support Windows Server; it just might sell some more units =)
  • Re:Whoops! (Score:3, Informative)

    by iggymanz ( 596061 ) on Tuesday July 19, 2005 @12:10AM (#13100835)
    the mainframes I've worked on ran from 3 phase 480VAC flywheeled motor-generator sets, so the rotational inertia would keep the juice going while emergency generator or another utility could be switched in. Power never failed. The only scheduled power-downs were to make major upgrade reconfigurations such as replacing stacks of circuit boards, or wiring in whole new set of disk drives or peripheral processing units. Then a couple days to boot the motherfucker, 2.5 GB of OS didn't load fast then.
  • Re:The Difference (Score:2, Informative)

    by Quantam ( 870027 ) on Tuesday July 19, 2005 @12:37AM (#13100959) Homepage
    Now to you last paragraph, windows is was NOT designed to be SMP, and if it was it sucked ass at it. linux has had SMP support for a long long time. And the fact windows is 4CPUs at most, and linux is running on 64+ CPU machines all the time, and on huge clusters of 1000s of boxes shows me that linux was designed pretty well.

    And as always linux is ahead of the curve overall then windows (in the kernal) at all times. and runs on more system.


    Correct me if I'm wrong, but the Linux kernel was funneled until fairly recently (like 2.4 or something), was it not? OS X was dual-funneled until 10.4, IIRC. As far as I'm aware, NT has never had any kind of funnels, and the kernel has always (back in 1993, or whenever it was that 3.1 came out) been reentrant and SMP-capable. Could you perhaps be a bit more specific than "sucked ass at it"?

    Oh, and it's not really important, but NT was originally intended (and designed) to support up to 32 processors, although I've not heard of any NT machines with more than 8.
  • Re:Simple (Score:3, Informative)

    by homer_ca ( 144738 ) on Tuesday July 19, 2005 @01:11AM (#13101090)
    Well, the stability and disaster-recovery side of mainframes isn't really a result of the programmer. To the application programmer, the system "just works", which it should for the price you're paying. Backups and disaster recovery are something for the operators (they're not called administrators). If the applications themselves are stable, it's probably a result of COBOL being a straightforward procedural language without all the trickiness of C pointers.

    Now as far as security, I'd say mainframe security is 99% security by obscurity. The mainframe programmers I know are hopelessly naive about network security policy, basic things a Windows or Unix admin would know from working in a hostile environment like the Internet. You know, things like password policy, IP networking, etc.
  • Re:Simple (Score:3, Informative)

    by Nutria ( 679911 ) on Tuesday July 19, 2005 @02:29AM (#13101384)
    Backups and disaster recovery is something for the operators (they're not called administrators).

    And the System Programmers and Operations Managers who buy packages like Fastpath and Harbor NSM.

    If applications themselves are stable, it's probably a result of COBOL being a straighforward procedural language without all the trickiness of C pointers.

    Thank $DEITY I'm not the only person to think so...

    Now as far as security, I'd say mainframe security is 99% security by obscurity. The mainframe programmers I know are hopelessly naive about network security policy, basic things a Windows or Unix admin would know from working in a hostile environment like the Internet. You know, things like password policy, IP networking, etc.

    How many black hats can get into a mainframe, anyway, and know the mainframe utilities?
  • by franois-do ( 547649 ) on Tuesday July 19, 2005 @02:37AM (#13101411) Homepage
    Terminals in mainframe culture (CMS, TSO, CICS...) were mostly used in full-screen mode. You could do whatever you wanted on your screen using its local capabilities (insert, replace, jump to next field, etc.); it was strictly a local action, and you could be sure that nothing would be transmitted to the mainframe until you pressed either ENTER or a PF/PA key. The 3270 interacted with its control unit.

    When the screen was ready, the whole page was transmitted to the computer. This scheme sometimes allowed 8,000 terminals and more on an 8 MB (yes!) machine.

    Incidentally, terminals did have lowercase letters and dead keys for national languages from 1978 on, with the 3278 line. This was not hard to implement: just an extended ROM to display the characters on the 3278, and a slight change in microcode to handle the dead keys on the 3274 or 3174 control unit.

  • by logpoacher ( 662865 ) on Tuesday July 19, 2005 @04:45AM (#13101723)
    Looks like the mainframe guy needs to read this:

    http://www.straightdope.com/classics/a4_220.html [straightdope.com]

    Keep washing those hands, kids!

  • Re:Cats (Score:3, Informative)

    by Nutria ( 679911 ) on Tuesday July 19, 2005 @06:23AM (#13101922)
    Other bits seem oddly like DOS or

    Heh. When I first used VMS, it was after being an MS-DOS user for a few years and, separately, an MS-DOS & DOS/VSE programmer for a couple of years.

    I thought I had died & gone to heaven! The interactivity of MS-DOS & the richness of the m/f all wrapped into one perfect package.

    The most annoying things about VMS - to a UNIX geek - are
    • a) no 'cd' command -
      SET DEF
    • b) apparent lack of relative paths
      DIR [---.FOO.BAR.SNIVLE]SNAGGLE.BAZ
      Each "-" sends you up one level, and ".", if you remember, is the subdirectory delimiter.
    • c) system-wide date/time a la Windows,
      Huh?
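For the curious, that bracket syntax translates mechanically to POSIX paths. A hedged sketch (handles only simple specs like the one above; real VMS file specs also carry devices, versions, and more):

```python
# Translating a VMS relative directory spec into POSIX terms:
# each leading "-" climbs one level, "." separates subdirectories.
def vms_to_posix(spec):
    """Convert e.g. '[---.FOO.BAR]BAZ.TXT' to '../../../FOO/BAR/BAZ.TXT'."""
    dirpart, _, filename = spec.partition("]")
    dirpart = dirpart.lstrip("[")
    ups = len(dirpart) - len(dirpart.lstrip("-"))   # count leading "-"
    rest = dirpart.lstrip("-").strip(".")
    parts = [".."] * ups + [p for p in rest.split(".") if p]
    parts.append(filename)
    return "/".join(parts)

print(vms_to_posix("[---.FOO.BAR.SNIVLE]SNAGGLE.BAZ"))
# ../../../FOO/BAR/SNIVLE/SNAGGLE.BAZ
```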


    Unix was very strange to me, with its cryptic commands, and *ix could definitely learn a thing or 20 from the VMS command-line parser, like only having to type a maximum of 4 characters for each command and option, even when it's a long command like DIRECTORY.
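That abbreviation behavior can be sketched as prefix matching against a command table (illustrative only; the command list is a made-up subset, and real DCL's rules are richer than this):

```python
COMMANDS = ["DIRECTORY", "DELETE", "DEFINE", "SHOW", "SET"]

def resolve(abbrev):
    """DCL-style matching: any unambiguous prefix selects the command.
    (A sketch -- real DCL guarantees 4 characters always suffice.)"""
    matches = [c for c in COMMANDS if c.startswith(abbrev.upper())]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"{abbrev!r} is ambiguous or unknown: {matches}")

print(resolve("DIR"))    # DIRECTORY
print(resolve("sh"))     # SHOW
# resolve("DE") raises -- could be DELETE or DEFINE
```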

    But bash & grep have grown on me, and DCL is really showing its age.
  • Re:Mainframe culture (Score:1, Informative)

    by Anonymous Coward on Tuesday July 19, 2005 @12:15PM (#13104228)
    (I'm not a mainframer, I just read the IBM Systems Journal from time to time.)

    Each instruction runs on two CPUs sharing the same die. When the CPUs disagree, the die/module is flagged as faulty and the OS re-schedules the task on a new CPU pair.

    Since all CPUs come in pairs, it doesn't matter which execution unit failed: both get replaced.
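The paired-execution idea boils down to "run it twice, compare, flag on mismatch". A toy model (names invented; real lockstep happens in hardware per instruction, not per function call):

```python
import random

def lockstep(op, *args):
    """Run op on two 'execution units' and compare the results.
    A toy model of the paired-CPU checking described above."""
    first, second = op(*args), op(*args)
    if first != second:
        raise RuntimeError("units disagree: retire this pair, call IBM")
    return first

# Healthy units always agree:
assert lockstep(lambda x, y: x + y, 2, 3) == 5

# A flaky 'unit' that randomly bit-flips gets caught quickly:
def flaky_add(x, y):
    return x + y + (1 if random.random() < 0.5 else 0)

caught = False
for _ in range(100):
    try:
        lockstep(flaky_add, 2, 3)
    except RuntimeError:
        caught = True
        break
print("fault detected:", caught)
```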
