
What is Mainframe Culture? 691

An anonymous reader asks: "A couple years ago Joel Spolsky wrote an interesting critique of Eric S. Raymond's The Art of Unix Programming wherein Joel provides an interesting (as usual) discussion on the cultural differences between Windows and Unix programmers. As a *nix nerd in my fifth year managing mainframe developers, I need some insight into mainframe programmers. What are the differences between Windows, Unix, and mainframe programmers? What do we all need to know to get along in each other's worlds?"
  • Cats (Score:4, Interesting)

    by infonography ( 566403 ) on Monday July 18, 2005 @07:57PM (#13099366) Homepage
    Herding Cats. Some are big, some are small, some aren't cats at all.
  • by JeiFuRi ( 888436 ) on Monday July 18, 2005 @07:58PM (#13099374)
    The thing that's really preserved the mainframe over the past couple of years has not been performance; it hasn't been throughput, because those things turn out to be terrible. It's been the set of operational practices that have been codified around it. Mainframe culture, with its rigorous "change control," contrasts with the PC server culture of "whiz kids" who never learned the basic operational rules necessary to avoid costly mistakes.
  • by billstewart ( 78916 ) on Monday July 18, 2005 @08:03PM (#13099431) Journal
    Hey, I was a TSO wizard back in ~1980, but fortunately I haven't had to use that stuff in ages :-) However, you'll find that mainframe programmers mostly look like Sid in Userfriendly.org - either grey hair or no hair. Mainframe programmers, like Unix and Windows programmers, range from the old wizard who can answer really arcane questions about JCL syntax from memory to some Cobol drone who went to trade school, like the Visual Basic trade school drones of today.

    The reason mainframes are interesting, to the extent that they are, is that they can handle very large databases with very high reliability, which is not the same as being fast (though some of IBM's newer mainframe products are also quite fast.) That means there's a heavy emphasis on building and following processes for deployment and operations so that things won't break, ever, at all, even when the backup system's down for maintenance, and on building processes to feed data in and out of this rather hostile box so every bit gets bashed like it's supposed to. The programming environments have gotten better, but you're looking at a level of flexibility like Debian's oldest-and-most-stable releases, not like Debian sid.

  • Programming in COBOL (Score:5, Interesting)

    by cratermoon ( 765155 ) on Monday July 18, 2005 @08:09PM (#13099479) Homepage
    I've never actually written any COBOL myself, but here's what I've learned from trying to teach Java to former mainframe developers.
    • COBOL is actually remarkably like wordy assembly language
    • The typical mainframe programmer, being steeped in COBOL, will think of everything in "records".
    • Mainframes are case-preserving but case-insensitive. Like DOS, a token can be any mixture of case and it will still work. Thus, a mainframer might wonder why 'PRINTF' doesn't work for 'printf'.
    • On the same topic, a mainframer will assume that something like the Java type 'Account' and the variable 'account' actually don't distinguish anything, and will be confused when the compiler refuses to assign to a type.
    • MOVE CORRESPONDING is COBOL's big hammer. It will take all the values of the elements of one record and copy them to the "corresponding" fields of another record. There is nothing like type-checking for this. This will cause mainframers to be confused about why you can't assign a linked list to an array and have it "just work".
    • Not that mainframers will grasp "linked list" or "array". Actually, they won't really get any of what we call the standard data structures and algorithms learned in the first year of any CS program.
    • COBOL programs have no scoping rules. EVERYTHING is global. Thus, a mainframe programmer won't understand why an assignment way over in some little function in another library to a variable called "date" won't affect the "date" value in the code everywhere.
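    To make the MOVE CORRESPONDING point concrete for non-COBOL readers, here is a very loose sketch in Python: copy every field of one "record" whose name also appears in the other, with no type checking at all. The record layouts are invented for illustration, and real COBOL matches fields declared under group items, not dictionary keys:

```python
def move_corresponding(source, target):
    """Copy every field whose name appears in both records -- no type checking."""
    for field in source:
        if field in target:
            target[field] = source[field]
    return target

# Two "records" with partially overlapping field names (made up for the example)
customer = {"name": "SMITH", "balance": 120.50, "branch": "EAST"}
invoice  = {"name": "", "balance": 0.0, "due-date": "2005-07-18"}

move_corresponding(customer, invoice)
print(invoice)  # {'name': 'SMITH', 'balance': 120.5, 'due-date': '2005-07-18'}
```

    Note that 'branch' is silently ignored and nothing checks whether the copied values make sense for the target -- which is exactly the behavior that confuses people coming the other way.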
  • M/F is just a job (Score:5, Interesting)

    by ThePolkapunk ( 826529 ) on Monday July 18, 2005 @08:10PM (#13099480) Homepage
    As a *nix programmer forced into the mainframe world, I'd have to say that m/f programmers do not look at computers as a hobby or thing of interest. To them, programming and computers are just what they do to get paid. To the m/fers, a computer is just a tool that they have to use to do their job. They take no joy or pleasure in programming, it's just what they do.

    On the other side of the coin, I think that *nix and Windows programmers tend to enjoy what they do. To them, programming is not just their job, it's enjoyable.

    Honestly, I don't blame them. M/F sucks. As soon as you get your first compile error because your command isn't in the right column, or have JCL spit out a bunch of random nonsense because you didn't allocate the correct blocksize for your file, you'll hate your job too.
  • Re:One difference (Score:4, Interesting)

    by BrynM ( 217883 ) * on Monday July 18, 2005 @08:11PM (#13099499) Homepage Journal
    Unix and mainframe programmers are more likely to know multiple systems, out of necessity, and consequently have a more general understanding of the commonalities of all computer systems.
    You know, this is something that I have taken for granted for years. Thanks for making this point. Having done big iron, desktop and server programming has given me a definite edge in the past, and I couldn't put my finger on it until your comment. The period I spent integrating some Alpha NT boxen with an S390 system (showing my age a little) really taught me a lot about versatility.
  • I agree (Score:3, Interesting)

    by DogDude ( 805747 ) on Monday July 18, 2005 @08:12PM (#13099503)
    I didn't get into the industry until 10 years ago, and I was amazed at this difference between the windows kids and the mainframe guys. I was a Windows/Oracle developer, but luckily I learned good practices from old MVS/greenscreen guys who taught me things that hold true no matter what kind of computer platform you're working with. I'm blown away to see some of the stupid things that new programmers/admins do. Blown away.
  • a few observations (Score:4, Interesting)

    by porky_pig_jr ( 129948 ) on Monday July 18, 2005 @08:21PM (#13099566)
    I was working with IBM MVS (batch oriented) and VM (interactive) for quite a while. At that time the main choice was between COBOL and Assembly Language (BAL/370). COBOL provided some basic routines, but to do something interesting (like async I/O, your own memory management, etc.) you had to use BAL.

    The following example might be interesting, not sure if helpful. On a batch system you have many jobs executing concurrently. MVS (at that time) didn't have anything like preemptive multitasking. COBOL didn't have async I/O either, so when a program issues I/O it just goes into a wait state, and another task is scheduled. So the bottom line was that your program won't be very efficient (e.g., it won't be overlapping I/O and CPU activities), but that would create a nice (from MVS's perspective) mix of jobs. Some are doing I/O, some are doing CPU, so MVS can accommodate many concurrent tasks.

    Well, at that time I was a budding assembly language programmer and had even taken a course at university where we had to write our own operating system, entirely in BAL/370, including the Initial Program Loader (boot, if you wish). I was working at the same time, and there was a problem at my job. They (John Hancock Insurance) had hundreds and hundreds of COBOL programs, and nothing like a cross-referencing dictionary to tell you which program modifies some common record field. So when something unexpected happened, they had to search through the source code to find all the instances of such references, and that took something like 5-6 hours. I had learned async I/O at school, and how to overlap I/O and CPU activities, and I ended up writing a fairly efficient program. It read large chunks of disk data into several buffers. As soon as the first buffer was full, that event was detected and the program started parsing that buffer for keywords -- while continuing to read tracks into the other buffers (it was a circular list of buffers). After some trials I got the execution time down to less than 20 minutes. Everyone in my area was happy.
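    The overlapped-I/O trick described here -- keep reading into spare buffers while scanning a full one -- can be sketched in modern terms with a reader thread and a bounded queue standing in for the circular buffer list. This is only an illustration: the chunk size, data, and keyword are invented, and real channel programs worked nothing like Python threads.

```python
import io
import queue
import threading

CHUNK = 16  # deliberately tiny so the example actually exercises the buffering

def reader(src, filled):
    """Producer: keep filling buffers from src; None signals end of input."""
    while True:
        buf = src.read(CHUNK)
        if not buf:
            filled.put(None)
            return
        filled.put(buf)

def scan(src, keyword):
    """Consumer: count keyword hits while reads happen on another thread."""
    filled = queue.Queue(maxsize=4)  # bounded queue plays the circular buffer list
    t = threading.Thread(target=reader, args=(src, filled))
    t.start()
    hits, leftover = 0, b""
    while True:
        buf = filled.get()
        if buf is None:
            break
        data = leftover + buf
        hits += data.count(keyword)
        # keep a short tail in case the keyword straddles a chunk boundary
        leftover = data[-(len(keyword) - 1):] if len(keyword) > 1 else b""
    t.join()
    return hits

print(scan(io.BytesIO(b"xxPAYROLLxx" * 10), b"PAYROLL"))  # 10
```

    The scanner never waits for the whole file: each chunk is searched while the next one is being read, which is the same overlap of I/O and CPU the story describes.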

    Everyone except the mainframe operators. I've been told they HATED my program's guts. The problem was that the program didn't behave nicely as far as the 'good mix' was concerned. It grabbed resources and held them for a long time, because it went into the wait state only occasionally. But it was a great help for production problems, so they had to let it run.

    That was many years ago. I don't know if MVS was later changed to introduce preemptive multitasking. At that time it was a strictly batch-oriented system. All I/O was executed in separate subsystems (channels). Running something interactive (like CICS) wasn't trivial at all; the best strategy was to dedicate an entire mainframe to the task, since mixing CICS and batch jobs on the same machine was a suboptimal solution. Of course, the MVS scheduler has improved since then, to provide better balancing between batch and interactive tasks, and yet, as I understand it, MVS fundamentally remains a batch operating system.
  • by Anonymous Coward on Monday July 18, 2005 @08:22PM (#13099572)
    Fully support you JeiFuRi. In the management of UNIX and Windows world, a hot topic is ITSM (IT Service Management). It is about managing the quality, predictability and cost effectiveness of IT services. The key approach is based on some best practice material which was codified back in the '80s in mainframe environments (Google ITIL). It seemed "common sense" back then, but many Win/Unix environments have grown up without decent capacity management, change management, problem management etc. etc. etc.


    The ITSM "industry" is growing 50-100%CAGR, and there are standards emerging (BS15000, AS8018, ISO20000) in this area.


    I guess the key ideas in ITSM are that the primary focus is on the service, not on the technology used to deliver it, and that good, consistent processes maximise service quality. These are ideas that have existed in the MF world for as long as I've worked in IT (30+ years), but are sometimes sadly lacking in "newer" environments.

    Patrick Keogh posting as AC because I can't remember my password...

  • by BrynM ( 217883 ) * on Monday July 18, 2005 @08:37PM (#13099666) Homepage Journal
    Ever write a sort routine? Know the difference between bubble-sort and quick-sort? The average MF doesn't. He calls the system level command SORT and he's done.
    You're right except for the Sysprog. Working directly on a MF at the systems level is akin to kernel programming. All of those utilities have to be maintained as well as the JES and JCL "scripting" and new utilities are needed all the time to save resources and optimize performance.
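    A toy illustration of the contrast in Python: the application programmer's one-liner delegating to a built-in sort, versus knowing how a sort actually works. The bubble sort here is the textbook version, not anything IBM's SORT utility does.

```python
def bubble_sort(items):
    """O(n^2) textbook bubble sort: repeatedly swap adjacent out-of-order pairs."""
    items = list(items)  # don't mutate the caller's list
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items

records = [42, 7, 19, 3]
print(sorted(records))       # the "call SORT and you're done" approach
print(bubble_sort(records))  # the roll-your-own approach; same result
```

    Both print [3, 7, 19, 42] -- which is exactly the point: the average application programmer never needs to see inside, while the sysprog maintains the insides.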
  • Re:Good question. (Score:4, Interesting)

    by Knetzar ( 698216 ) on Monday July 18, 2005 @08:44PM (#13099710)
    As a windows programmer turned unix programmer turned unix operations support who just recently started working with mainframe operations, I would like to say your post seems to be right on. In addition, in the mainframe world, CPU utilization is everything. If your CPU is not above 90% utilization, then something is wrong. This is different from the Unix world, where capacity planning is done so that expected peak CPU utilization is anywhere from 40-80%.
  • by itunes keith ( 900814 ) on Monday July 18, 2005 @08:45PM (#13099718)
    I grew up using Mac OS and now am the youngest, by far, in our datacenter to be an MVS DB2 DBA. My 25th birthday is coming in a few months :P I studied CE in school and the move to 'old school' has been an interesting change. Working for 2 1/2 years now.
  • Re:An idea... (Score:4, Interesting)

    by CrudPuppy ( 33870 ) on Monday July 18, 2005 @08:50PM (#13099753) Homepage
    the best way I can think of it is with an analogy:

    windows geek : unix geek :: unix geek : mainframe geek

    they've generally been around the block a few more times, know shit most unix geeks dont, and have quirks unix geeks dont.

    "PUNK! when i was your age, i was writing operating systems... in binary... on punchcards!"
  • Re:Don't reboot (Score:3, Interesting)

    by Detritus ( 11846 ) on Monday July 18, 2005 @08:58PM (#13099818) Homepage
    That's assuming that you have access to the machine. I've run jobs on mainframes that I've never seen. I'd just drop off the job at the service desk and pick it up the next day. The mainframe was in a restricted area, where users and programmers were not allowed without an escort and a good reason to be there.
  • by dhuff ( 42785 ) on Monday July 18, 2005 @09:01PM (#13099833)
    I worked around some mainframe programmers for several years at a major bank, and they strike me as being much more like older Unix geeks than anything else, with some important differences:

    • They're probably much more politically conservative
    • They may wear much more conservative clothing, incl. stuff like neckties, but it'll still be grubby and ill-fitting
    • They probably drive a midsize, American sedan like a Ford or GM product
    • They generally have poor health habits, but lean more towards the old fashioned vices like coffee (from the office coffeemaker, not Starbucks) and cigarettes - diet is also poor
    • They obsess over uptime and reliability way more than Windows or even Unix geeks
    • They most likely don't have the typical geek interests like Star Trek, computer games or reading Slashdot :)


    But I bet you'll notice a core psychology that's pretty familiar to most geeks...
  • Re:I agree (Score:2, Interesting)

    by Anonymous Coward on Monday July 18, 2005 @09:02PM (#13099844)
    what kinds of mistakes you see new programmers making, especially with regards to C programming

    1) using the C language. There will always be a place for C, but it is an increasingly small place.

    2) not understanding the difference between programmer and developer, or between code and product. A programmer writes code, a developer delivers products. Code is a widget, not a product. A product is the totality of what a customer buys. An apple is a widget; but the whole product consists of you buying that apple from a nice display in a clean supermarket, from a courteous cashier, in a convenient location with adequate parking.

    3) underestimating the importance of personal skills. Mediocre developers who communicate clearly, coordinate their actions with others, and have good hygiene are much more valuable than brilliant programmers who would rather invent their own solution than follow standards, lack manners, and act/dress weird.
  • by Naum ( 166466 ) on Monday July 18, 2005 @09:02PM (#13099853) Homepage Journal
    My experience dates back to Burroughs machines and managers who didn't know how to use a text editor, preferring punched cards and "punched card emulation" -- yet they could do a randomizer calculation in their heads, without a calculator, within a second or two.

    I've also had the opportunity to train mainframers in shops where MVS platforms were displaced by *nix based platforms. So, here is a subject that, no doubt, I can speak about:

    The major factors/differences:

    First, most of the mainframe programmer contingent has been moved offshore, or the work is being done by NIV programmers. There's really not much of a career path here, but OTOH a great deal of critical systems (charge card processing, airline reservations, utility company systems) are still coded in MVS COBOL/DB2 (or IMS, a hierarchical mainframe database platform for IBM MVS). To convert these systems you need to be able to understand them, and please don't give me a business analyst -- the days of their expertise are long gone, and the metamorphosis of systems over time means the business knowledge is embedded in the code.

    Mainframers don't get GREP. I've tried so many ways to impart this wonderful tool, but all I get back is puzzled stares and bewilderment for anything more complex than what could be accomplished with a non-regex, simple wildcard search.

    Globals. This is something that took me aback 6-7 years ago, when I made the leap into Unix programming and traded C/REXX/CLIST for C/Perl/etc... COBOL is structured into divisions, and all your data declarations are laid out and globally accessible. Many COBOL systems are quite complex, with a "program" actually being a driver for a whole hierarchy of 20-40 sub-programs, and the need to restart at a given point in processing can make things quite complex.

    Approvals, checkoffs, signoffs, and procedures. They're largely absent in the Unix (and most webdev) world, but mainframers have grown accustomed to reams of authorization and approvals for even simple changes. Lead times of a week or more, along with VP signoff, QA signoff, user group signoff, fellow developer signoff, etc.... Even getting a downstream system to agree to test changes may take a formal request process and a budgetary allocation of thousands of dollars. This is probably the biggest divide, and future schisms will be prevalent as data center leadership tries to impose these kinds of checks and balances on developers not accustomed to such obstacles. IBM's trouble and difficulty in the web server world offers a prime example -- business being told that it'll take 3-4 months to get a server online, and folks who know better just can't understand that.

    Lack of user tools. A big part of what I did as a mainframer was building tools, using BTS and File-Aid to allow developers and testers to create their own test bed and automate the test process. On Unix side, the tools come with the OS, and awk, Perl, and all the other CLI goodies make automating testing a snap.

    File in/file out vs. piping. Mainframers have a tendency to see everything as file-in/file-out. In a way so do *nix coders, but a seasoned *nix programmer sees the tools as all being able to feed each other, rather than step 1: filein, fileout; step 2: sort filein, fileout; step 3: filein, reportout; etc...
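    The contrast might be sketched like this in Python, with generators standing in for pipes (the records and processing steps are made up for the example):

```python
records = ["30 carol", "10 alice", "20 bob", "05 mallory"]

# Batch "file-in/file-out" style: each step fully materializes its output
# (think of step1/step2/step3 as intermediate datasets on disk).
step1 = [r for r in records if not r.endswith("mallory")]   # filter step
step2 = sorted(step1)                                       # sort step
step3 = [r.upper() for r in step2]                          # report step

# Pipe style: lazy stages composed end to end, each feeding the next.
def keep(lines):
    for line in lines:
        if not line.endswith("mallory"):
            yield line

def report(lines):
    for line in lines:
        yield line.upper()

piped = list(report(sorted(keep(records))))
print(piped == step3)  # True -- same job, streamed instead of staged
```

    Same filter-sort-format job either way; the pipe version just never names an intermediate file.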

    On the age thing, most of the really skilled mainframers now, like myself, do Unix or migrated to Java. Others are awaiting retirement, or head over to six sigma teams, business analyst roles, or seek refuge in management, escaping the axe that clears the way for the offshore coder.

    Paper over softcopy. Got to have that printed listing, and the sticky notes (and before that, paper clips). I still remember a senior manager telling me when I first broke in how his appraisal of a programmer was how many fingers he needed to act as placeholders when he perused a program listing.

  • by toby ( 759 ) * on Monday July 18, 2005 @09:17PM (#13099933) Homepage Journal
    One of the strangest (and scariest) things I've observed has been old "mainframe" guys who've embraced "The Micro$oft Way" when it clearly goes against every principle enshrined in the old days. You know, old-fashioned ideas like efficiency, reliability, availability.

    You'd think they'd run from Windoze as fast as they can. But no -- perhaps because of some vague VMS gene still running around in 'doze -- they occasionally take to it like babes to the teat.

    These guys do exist. I've heard one recently defend VSS as a reasonable source code control system -- when Micro$oft themselves won't touch it, and the following remark has been attributed [wadhome.org] to a M$ employee:

    "Visual SourceSafe? It would be safer to print out all your code, run it through a shredder, and set it on fire." -Fitz

    Another one of these mainframer-turned-M$-nut dudes tried to explain to me that M$ is "redesigning the internet to use binary protocols" because "text formats obviously don't work" and are "breaking everything". He also believes Apple should be annihilated because they stand in the way of a total monoculture -- and he sees monoculture as necessary to achieve our "Star Trek future". The fact that he foresees a smoothly running galaxy running Windoze Everywhere is just plain amusing.

    Buddy, if the future is like Star Trek, I don't want any damn part of it. Diversity is Life.

  • by J.Random Hacker ( 51634 ) on Monday July 18, 2005 @09:28PM (#13100003)
    Don't feed the trolls, Don't feed the trolls... Uh, OK.

    Apparently you have no experience with the UNIX way.

    What you don't seem to know is that MS Windows is utterly missing the wonderful collection of little tools available on every UNIX platform (Well, without installing cygwin -- but that's UNIX, right?). Each little tool does one little job, and does it well, and all of the tools can be connected in standard ways. So, I *can* use C++ or C or PERL or Python, but I don't *have* to -- many times all I need is sh and that wonderful collection of utilities...

    And yea, I do write huge programs sometimes -- but only when that's really required to get the job done.

    I don't think I ever heard of a MF being used for the types of things I get involved with, though. (back to the topic :)
  • Re:I agree (Score:5, Interesting)

    by Krunch ( 704330 ) on Monday July 18, 2005 @09:31PM (#13100025) Homepage

    1) While most programs today should probably not be written in C, I think it's still an important language to learn and understand as a beginning programmer. Most applications today use C at some level. If you understand it, you get a chance to understand how the application/framework/library you are using works, which makes you able to use it better. See Joel Spolsky's "Back to Basics" [joelonsoftware.com] for more on this.

    3) More on this in Robert L. Read's How to be a Programmer [mines.edu].

  • by Anonymous Coward on Monday July 18, 2005 @09:36PM (#13100058)
    Insightful, how about idiotic.


    Oooh.. touch a nerve did we?
    You missed the point (and the humor) of the OP.

    What can you program in Unix that you can't in Windows.


    Firstly, was that a question? Secondly, nothing. Yes, you can do it all in Windows. The point you originally missed is that (s)he was referring to the whiz-bangetry of .Net (and admittedly Jbeans) style programming, where you find an object that quacks sort of like the duck just before you extend and instantiate it to a bull. I'm not saying there isn't a place for this (user interfaces, database connectors, etc), but people now apply OO with assloads of add-in toolboxes even where it doesn't make sense.

    Bottom line is, older UNIX folks are much more likely to have written the vast majority of their own code, where today we see CASE tool technicians masquerading as software engineers. I see this at work all the time with the "new generation" of IS/IT types rolling through the door who couldn't code a b-tree to save their lives and rely on prebuilt everythings to give the illusion that they do something "difficult".

    Got news for ya folks, my Subaru tech has more skill than most of these chumps.
  • Re:An idea... (Score:5, Interesting)

    by bigman2003 ( 671309 ) on Monday July 18, 2005 @09:44PM (#13100113) Homepage
    Actually, a mildly funny comment.

    All of the programming I do starts out in a graphic environment, whether that is Visual Studio or Dreamweaver.

    I really have no need to 'program' boxes, fonts, text areas, etc. (Which of course don't exist in a CLI anyway...)

    But using something like Visual Studio you get to draw out your 'forms' and make them look pretty. Then double-click an object to put in the corresponding code.

    Of course most projects require about 99% of your time in code-view- but that 1% in design view would probably take me 5-20 times longer if I was using code to lay things out.

    I thought the best line from the article was:

    Unix culture values code which is useful to other programmers, while Windows culture values code which is useful to non-programmers.

    I've never seen it written so succinctly- but that is basically it. I only value two things when creating a program:

    A- can a neophyte use it...preferably, someone who doesn't even really understand the purpose of the program.

    B- will it be easy for me to come back and modify it.

    I stopped worrying about 'efficiencies' and 'cycles' years and years ago. It is so nice to live in an era where it is nearly impossible for me to tax the hardware.

    But that's me...maybe you're doing some video editing, or rendering, or something like that. But when I am mostly dealing with data storage and retrieval, nothing should take over a few milliseconds.
  • Re:I agree (Score:4, Interesting)

    by spectecjr ( 31235 ) on Monday July 18, 2005 @09:45PM (#13100117) Homepage
    I didn't get into the industry until 10 years ago, and I was amazed at this difference between the windows kids and the mainframe guys. I was a Windows/Oracle developer, but luckily I learned good practices from old MVS/greenscreen guys who taught me things that hold true no matter what kind of computer platform you're working with. I'm blown away to see some of the stupid things that new programmers/admins do. Blown away

    I feel the same way whenever I look at the SMTP spec, the MIME spec, the SMTP email format spec, pretty much any on-the-wire specs actually...

    At the very least people could prefix strings they're transmitting with the # of bytes in them, so that memory access is efficient.

    Look at HTML - all ASCII. ASN.1 was invented so that you didn't have to use all ASCII for this kind of data (look at the SNMP spec if you want more details). But does anyone use it for the on-the-wire format? No.

    Unixheads seem to claim that it's perfectly admirable to hack around the ASCII format for everything because it makes it easier to debug, whereas all I see is wasted entropy and bandwidth.
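    For what it's worth, the length-prefix idea mentioned above looks like this in a Python sketch. The 4-byte big-endian count is just one common framing convention, not any particular protocol's wire format, and the SMTP-ish payloads are only sample bytes:

```python
import io
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def read_frame(stream) -> bytes:
    """Read one length-prefixed message: fixed-size header, then exactly n bytes."""
    (n,) = struct.unpack(">I", stream.read(4))
    return stream.read(n)

# Two framed messages back to back on a simulated wire
wire = io.BytesIO(frame(b"MAIL FROM:<a@example.com>") + frame(b"RCPT TO:<b@example.com>"))
print(read_frame(wire))  # b'MAIL FROM:<a@example.com>'
print(read_frame(wire))  # b'RCPT TO:<b@example.com>'
```

    The receiver never scans for a terminator; it knows the exact byte count up front, which is the efficiency argument -- at the cost of the debuggability the "Unixheads" prize.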

    Anyway...
  • Re:On the difference (Score:3, Interesting)

    by Osty ( 16825 ) on Monday July 18, 2005 @09:57PM (#13100179)

    Hopefully this can help shed some lite for some of the GUI addicted PFY's out there missing out on a decent tool kit 'cause their OS of choice doesn't have a shell or basic programming (Pun Intended) facility.

    When will the unix admins learn that just because Windows doesn't do as much piping as *nix doesn't mean it's not fully scriptable? The paradigm is different, is all. In *nix, if I want to programmatically kill a process I grep for it, cut or awk out the pid, and pipe that into kill (ignoring killall). In Windows, I query the process table with a WMI query object, retrieve the returned Process object, and call Kill (actually, I'm not sure what the name of the method is, but the idea is that you call methods on objects rather than piping text to processes). They both get the job done, and there's very little that piping or object automation can't do. I'd even argue that the object method is more robust because it doesn't have to infer information from presentation-formatted data (what do you do if the column of data you want changed positions because you're using different command line options to ps?). In either case, you're relying on developers to support the interface mechanism (stdin/stdout in *nix, IDispatch in Windows), and it's not the system's fault if an application's implementation is sub-standard.
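    The fragility being described -- inferring fields from presentation-formatted text -- can be shown with a small Python sketch. The `ps`-style sample lines are made up, and this is not how WMI works; it just illustrates positional parsing versus asking for a named field:

```python
# Two invented samples of `ps`-style output, same data, different column order
ps_default = "  PID TTY      TIME     CMD\n 4242 pts/0    00:00:01 httpd\n"
ps_custom  = "  TTY      PID  TIME     CMD\n pts/0    4242  00:00:01 httpd\n"

def pid_by_position(text):
    """Fragile: assumes the PID is always the first column."""
    return text.splitlines()[1].split()[0]

def pid_by_header(text):
    """Sturdier: locate the PID column by name from the header row."""
    header, row = (line.split() for line in text.splitlines()[:2])
    return row[header.index("PID")]

print(pid_by_position(ps_custom))  # 'pts/0' -- wrong field!
print(pid_by_header(ps_custom))    # '4242'
```

    The object-automation approach takes this one step further: there is no text to parse at all, only a named property on a process object.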

    The widely-held belief that a *nix administrator can adequately perform the job of a knowledgeable windows administrator is just wrong. You'd laugh if I tried to suggest that a Windows admin could do the job of a *nix admin, so why is it assumed that the other way around works? And no, you're not going to install cygwin and bash and perl on my production systems unless they're absolutely critical (ie, a web server that needs to serve perl-based CGI scripts). No, helping you do your job the wrong way (*nix admin attempting to be a windows admin) is not "mission critical".

  • by dougsyo ( 84601 ) on Monday July 18, 2005 @09:57PM (#13100182)
    You got that right... for years we had a VS1-MVS-XA-ESA-OS/390-z/OS environment with a parallel VM/370-VM/SP-VM/ESA environment for doing development. Our change control was simple, our sources were based on 80-byte records, etc. Our online environments were CICS and a few home-grown DMS/CMS applications, batch was predominantly PL/I with ISAM (and later VSAM) files. Then we brought in a DBMS with a 4GL and built-in TP monitor. (Model 204). Life was still pretty good, although we started writing REXX applications and had to develop change control for that and the DMS/CMS code.

    Eventually CMS went away - the programmers moved to TSO, the functional users (SCT Banner's term for "end users" or "lusers") moved to a combination of TSO/ISPF, Model204 applications, and web apps. But our mainframe change control processes still worked.

    Enter Unix, in the guise of SCT Banner. Don't let anyone kid you, it's an ERP, and a big hairy mess. Many of our programmers are "back to square 1", having to deal with C, shell scripts, perl, etc. rather than JCL, PL/I and User Language. Cobol is somewhat familiar ground, although "make" is a weird construct to them and shell script drivers are copied by rote (they're used to DMS/CMS or ISPF applications building compile JCL).

    Change control? Hah. We're so busy trying to get acclimated to the environment that the closest we have right now is ".old" and ".save" files lying around. I've installed SVN, but I'm too busy fighting fires to write the documentation, particularly since I'm one of the few people that truly *is* Unix-literate (but I can still build a CP nucleus if I had to!) I spent 2.5 hours tonight unsnarling Banner Job Submission and its interaction with Appworx.

    Doug
  • by SluttyButt ( 264722 ) on Monday July 18, 2005 @10:32PM (#13100371)
    ...are aware of the intrinsic I/O between CPU, HD and peripherals. It is well published in the manuals anyway. PC programmers (generally) have no idea how, or at what rate, data is moved between peripherals. Thus they have little control over the inner workings of the language they program in. They are forced to work within the sculpted interfaces provided by the Windows world.
    Mainframes, on the other hand, have no such interfaces, and if any, it's a TEXT (EBCDIC) world. The mainframe is a no-frills world and strictly a business proposition. In a word -- strictly no nonsense for you to hack with.
  • by antispam_ben ( 591349 ) on Monday July 18, 2005 @10:36PM (#13100386) Journal
    Mainframe guys don't reboot their system.

    They don't call it reboot, they call it a "re-IPL" [Initial Program Load] and depending on the machine it takes up to 30+ people, each with specialized knowledge about a specific part of the process. [you can mod me funny, but THIS IS NOT A JOKE]

    Unix guys reboot the system occasionally.

    Only because of a hardware upgrade, and only because the technician convinces them it REALLY DOES need to be turned off to add more RAM or a (non-hot-swap) disk drive.

    Windows guys reboot their machine several times a week.

    "Several" in this context is a number greater than ten. A boot often lasts through the day, but not always. But I remember the 3.1 days (it shudda been called "three point one over six point two two"), it was boot-in-the-morning and reboot-after-lunch, as well as many other times.
  • by Anonymous Coward on Monday July 18, 2005 @10:45PM (#13100423)
    No multitasking in MVS??? Au contraire... take it from an OLD MFT, MVT, MVS, OS/390, and z/OS systems programmer who wrote some of this archaic IBM operating system stuff... preemptive multitasking has been there since nearly the beginning.

    Need proof? Grab yourself a copy of the Hercules mainframe emulator and MVT (google for it). MVT is MVS's daddy. Give it a go yourself.

    Long live Poughkeepsie (but watch out for the submarines in the Hudson river)! Now, where did I leave my cane???
  • by Anonymous Coward on Monday July 18, 2005 @11:04PM (#13100511)
    Mainframe programmers can and often do have at least some level of understanding of every aspect of the computer system, and have mastered several ancient but useful computing languages.

    Unix programmers can and often do have some understanding of every aspect of the Operating System, and have mastered several current and useful computing languages.

    Windows programmers can and often do have some understanding of every Microsoft product and have mastered several GUI-based integrated development environments.
  • Re:Don't reboot (Score:3, Interesting)

    by Genady ( 27988 ) <gary.rogers@NOSPaM.mac.com> on Monday July 18, 2005 @11:25PM (#13100609)
    This actually reminds me of a story I remember when I was just a lowly little Desktop Support Tech.

    Seems an old support person, a former Mainframe Operator, told me a story of a largish corporation that had a whole *CLUSTER* of Mainframes, and an odd issue that kept crashing a Mainframe every now and then. Apparently the in-house people couldn't figure it out, and the vendor wanted to take things down for a little while to work on it. Of course, being a 7x24 shop, management wouldn't have it. So they added another machine to the cluster. Don't ask me how or why, but the extra machine made sure that the processes kept going while one node of the cluster fell to its knees and re-IPLed. Cheaper than shutting down a production shift.
  • by WebCowboy ( 196209 ) on Monday July 18, 2005 @11:44PM (#13100708)
    ...but I won't. Rather, I'll explain why you have no clue.

    Insightful, how about idiotic. What can you program in Unix that you can't in Windows.

    That wasn't the point the original poster was trying to make. The point is HOW you program in Un*x vs Windows. Nobody will argue that you can do anything you want with either platform. However, a great many people would argue that the "UNIX way" is FAR more elegant.

    In Windows you have C and C++ just like Unix. Java, Perl they are all there as well.

    This statement really demonstrates your inability to comprehend the differences. To extend the "building toys" analogy, C/C++, Java, Perl et al are NOT the pieces; they are the plastic/wood/metal with which the pieces are made. You could make Lego bricks out of the latest space-age carbon fibre composites, but they would be useless if the "bumps and holes" on each brick were different sizes and wouldn't lock together.

    Now the platforms may be different but largerly they are more similar than not from a progammers ability to make a program perform a required task.

    There I'd really have to disagree with you. There are things that Un*x-style architectures do easily that are arduous to perform in the Windows environment. Similarly, there are things Windows excels at. IPC was really much more refined under UN*X--some might say Windows works with threads so well because it has to, since its IPC abilities have historically sucked. Really, in UN*X it is much easier to get various components to play nicely with each other yet keep their resources separate and protected. OTOH, there are reasons Windows-based games are so far ahead besides simple market share--graphics interfaces are one of those "funny shaped blocks" in Windows that is very well suited to its task.

    Really, that Lego analogy is very apt indeed. UN*X is very uniform in how it works, just like a bucket of classic Lego bricks. You have a library of pipes, sockets, shared memory, etc. that is very standard across all programs and that extends all the way to the user interface (you can pipe all manner of programs' input and output together right on the command line to a degree not yet seen in production releases of Windows). Once you get the hang of the UN*X Way you can snap these blocks together easily to suit your needs.
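    The uniform-interface point can be sketched in a few lines of Python. This is just the shell pipeline `printf | sort | uniq -c` glued together programmatically, nothing more; each stage sees only a byte stream, which is exactly what lets unrelated programs snap together.

    ```python
    import subprocess

    # Equivalent of the shell pipeline:  printf 'b\na\nb\n' | sort | uniq -c
    # Each stage knows only "bytes in, bytes out" -- that single uniform
    # interface is what lets arbitrary programs interlock like bricks.
    p1 = subprocess.Popen(["printf", "b\na\nb\n"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["sort"], stdin=p1.stdout, stdout=subprocess.PIPE)
    p3 = subprocess.Popen(["uniq", "-c"], stdin=p2.stdout, stdout=subprocess.PIPE)
    p1.stdout.close()  # let SIGPIPE propagate if a downstream stage dies
    p2.stdout.close()
    out = p3.communicate()[0].decode()  # "1 a" and "2 b", with uniq's padding
    ```

    Swap `sort` for any other filter and nothing else changes; that is the whole trick.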

    For all the "object orientedness" of Windows, there is not that level of uniformity in interfacing to make those reusable objects work together. Instead, you have an overly complex framework in the form of DDE/OLE1/OLE2/COM/DCOM that was largely designed to accommodate disjointed, inconsistent interfaces between various components/applications. This is something like the "licensed from the movie" Technic sets with all the little odd-sized rods/axles, funny-shaped blocks, special wheels and so on. There are many little sets where the pieces fit together very nicely in a few commonly required configurations, but when the time comes that you want to make your own creation not in the instruction booklet, you become frustrated with the useless pieces. For many kids, the six or so really cool things you can build are good enough; for the 10% "most geeky" kids it would bore you quickly.

    I can't say I really know for sure what a "mainframe toy" would be--mainframes don't seem like fun at all. I think "mainframers" may have forgotten what childhood was like, or perhaps hatched from a pod fully grown, who knows. I do not have a lot of exposure to that philosophy/culture. If I HAD to pick a toy that was most mainframe-like I might say Meccano, because like UN*X the pieces are very uniform in structure; however, you have to tediously fiddle with those little screws to put anything together, just like a mainframe--you have your "special screwdrivers" (arcane knowledge) and have to follow tedious processes to get things done. Or perhaps it is like building a birdhouse with popsicle sticks, where you have to tediously glue the pieces together with Elmer's glue, wait for it to dry bef
  • by alexhmit01 ( 104757 ) on Tuesday July 19, 2005 @12:12AM (#13100843)
    In NT4 and earlier, those systems weren't there (WSH came out around Option Pack 2, right? It's been a while). However, up until recently, the majority of Windows network systems were NT 4.0. The W2K+ scripting environment is quite impressive and has come a long way (I've been doing my first Windows work in a while recently, mostly Excel/VBA programming, but I played with the scripting capability for fun).

    When I worked at a decent-sized MS Partner, the MS Way was "point-and-click." They were going to do a 10,000-user migration by hand, because that was the MS Way. I grabbed the NT 4 Resource Kit and whipped up some Batch scripts to do the parsing, and the Windows guys were amazed.

    Windows has some very intelligent scripting, but it's somewhat hidden because of the NT 4 days, which weren't short, and which caused a problem. Older PC guys knew Batch scripting, which kinda disappeared in the NT 4 days because the tools weren't readily available (buried in the Resource Kit meant that you couldn't count on them being on the machine). The newer object-oriented programming method is cool (and absolutely preferable to parsing text streams, which as you said depends on an unchanging text output from a program, which is very constraining), but you need a new generation of Windows Geeks.
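    The text-scraping vs. structured-query contrast can be made concrete on the Unix side too (the WMI way queries objects in the same spirit). A hedged little sketch; the scraping half is exactly the kind of code that breaks when output formats drift:

    ```python
    import os
    import subprocess

    # Fragile approach: scrape a report meant for humans. Even with -P
    # (POSIX format, so no line wrapping) we are betting on column positions.
    report = subprocess.run(["df", "-Pk", "/"], capture_output=True, text=True).stdout
    fields = report.splitlines()[1].split()   # [fs, blocks, used, avail, use%, mount]
    scraped_avail_kb = int(fields[3])

    # Structured approach: ask the kernel for the numbers directly.
    # Nothing to parse, nothing to drift out from under you.
    st = os.statvfs("/")
    real_avail_kb = st.f_bavail * st.f_frsize // 1024
    ```

    Both values describe the same thing, but only the second survives a change in how `df` chooses to print its table.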

    Unfortunately, hacking on Windows is about as "cool" as a Mac was 10 years ago, so your computer geeks just aren't learning it. This doesn't change the fact that good admins are critical, but there is a perception problem. Just like Novell became perceived as "dead" because nobody ever saw it; the machines didn't crash.

    The WMI/AppleScript approach (as in, thick self contained apps that are callable) is perfectly legitimate.

    The other problem you have here is what happened to the MCSE in the late NT 4.0 days. When I was just finishing my MCSE, all the MCSE study guides were coming out... teaching to the test, and MS didn't upgrade the tests fast enough. Stuff that took me weeks reading the NT 4 Resource Guide was available in a condensed 4-hour book. Combine that with the MCSE courses, which taught to the test, and the whole industry got messed up. People hired cheap "paper" MCSEs, and people got used to admins not being able to program. Finding a Windows admin who truly gets it is rare, because there is too much dependence on unknowledgeable paper-admins, so people assume all Windows admins suck.

    Alex
  • Re:The Difference (Score:2, Interesting)

    by Quantam ( 870027 ) on Tuesday July 19, 2005 @12:56AM (#13101031) Homepage
    ...besides the fact that "NT" refers to the kernel, which is still used in Windows Server 2003 (lay off the crude jokes, will ya?), people DO still deploy NT 4 machines. They still have NT 4 machines up and running (and replace them with new NT 4 machines if the old ones die) at my dad's work, a chemical testing laboratory. NT 4 is popular there among machines connected to laboratory instruments: mass spectrometers, chromatographs, etc.
  • by Anonymous Coward on Tuesday July 19, 2005 @03:14AM (#13101506)
    I was of the vintage that started my CS degree using punched cards and ended it with Unix and Windows. What I learned from coding on the MVS system is that the programmer should batch up as much of the request as possible, because each user gets only a tiny slice of the processor's attention, and that it is a Good Thing to let the client side have responsibility for some state information. Later, when I started to learn to program on the web, I was able to reuse most of that orientation to patch together stateless web pages into a coherent application workflow. I guess that is why I still write my web pages in a text editor and never ever use an "integrated development environment" for web coding... I guess I just don't like to have the physical processes hidden away from my analysis process.
  • Re:The Difference (Score:2, Interesting)

    by Quantam ( 870027 ) on Tuesday July 19, 2005 @03:37AM (#13101570) Homepage
    *actual

    Now, allow me to provide THE final word of the difference between a thread and a process:
    Process

    An address space with one or more threads executing within that address space, and the required system resources for those threads.

    Thread

    A single flow of control within a process. Each thread has its own thread ID, scheduling priority and policy, errno value, thread-specific key/value bindings, and the required system resources to support a flow of control. Anything whose address may be determined by a thread, including but not limited to static variables, storage obtained via malloc(), directly addressable storage obtained through implementation-defined functions, and automatic variables, are accessible to all threads in the same process.

    Those would be quotes from The Open Group Base Specifications Issue 6, Definitions.

    How does that apply to this discussion? It tells us that you're confusing the specification with the implementation. NT can also create processes that share address spaces, but you can't do so with any API available to programmers (did you know that NT can fork, too? but you also can't do that with any API available to programmers); this is no different than Linux and other flavors of Unix. A process by definition has a separate address space, while threads of the same process by definition share an address space. In other words, even on Unix, processes (if they do any IPC) will be MUCH slower than threads running in the same process (for further details of why this is, see my longer post from earlier).
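    For anyone who would rather see the address-space difference than read spec-ese, this POSIX-only Python sketch shows a thread's write landing in the shared address space while a forked child's write stays in its own copy (`os.fork` is unavailable on Windows, which is rather the point of the thread above):

    ```python
    import os
    import threading

    counter = [0]

    def bump():
        counter[0] += 1

    # A thread shares the process address space: its mutation is visible here.
    t = threading.Thread(target=bump)
    t.start()
    t.join()

    # fork() gives the child a *copy* of the address space: its mutation
    # never reaches the parent.
    pid = os.fork()
    if pid == 0:              # child
        counter[0] += 100
        os._exit(0)
    os.waitpid(pid, 0)        # parent: counter holds only the thread's bump
    ```

    After both run, the parent sees the thread's increment but not the child's, exactly as the Open Group definitions predict.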
  • The difference IMHO (Score:5, Interesting)

    by FJ ( 18034 ) on Tuesday July 19, 2005 @03:44AM (#13101580)
    I'm a mainframe sysprog but I've coded on Unix & Windows. I'm also rather young (33) for a mainframe sysprog. Here are the differences.

    The first difference is the kind of work running on a system. Unix & Windows development typically takes place on dedicated machines. The changes are then applied to a separate production machine. On a mainframe, development & production are often the same LPAR (Logical Partition) or the same physical box. Because of this, development gets the low priority. If you run out of juice on a Unix/Windows box you either get a bigger one or you cluster them together. On the mainframe you either redesign the application to run more efficiently or you start shelling out $$$ for a bigger machine. Normally your only choice is the redesign.

    Software on a mainframe is horribly expensive, and the faster the machine, the more it usually costs. This is an old way of spreading the pain of software development. The big guys pay more because their machines are faster, but the smaller guys get to pay less. Imagine if Microsoft decided to charge a lot less for Office if you ran it on a P5 instead of the newest processor. Some software on Windows is licensed by the CPU, but I've never heard of the speed of the CPU being a factor. Do you think you'd get that fancy new PC if the software would cost 10x as much?

    On a mainframe software development is a slow process with lots of checks along the way. Nobody just "slams in a change" unless they are either 100% sure it will work and it fixes a critical problem that is impacting business, or they want to be fired. Banks frown heavily on downtime. Unix & Windows systems seem to be more tolerant of this (with the odd exception being email - how email became the most important application is beyond me).

    Once you develop, debug, and get a mainframe program running, you can usually forget about it. There are programs running on mainframes today that haven't changed in 30 years. That is a pretty good return on investment. I've dealt with both, and it seems to boil down to "pay me now or pay me later". Installing stuff on a mainframe takes a lot of up-front work, but if you do it correctly you can expect it to work well when you are done. Windows programs are easier to install and develop, but you have the constant reboot issues, memory leaks, and just plain annoying mysteries to deal with.

    Mainframes (in my opinion) have far far far superior system diagnostic tools. If a program is running slow I can determine if it is CPU, disk, database contention, or any other resource shortage. This is mainly because there is so much running on any given mainframe that system diagnostic tools need to be very good. The tools on Unix and Windows are good but they don't need to be as complete because the environments are far less complex.

    Program debugging tools on a mainframe can be awful. Interactive debuggers are the exception, not the norm. I've seen good interactive debuggers, but they suck CPU, which drives up software costs, which makes the finance department hate you.

    Batch controls on a mainframe are far superior to Unix or Windows. This is mainly because the mainframe started life as a batch system. Once you understand and master JCL it is really a good system. Batch on Unix, and especially Windows, is more of an afterthought. You can run batch, but the tools to monitor failures, schedule dependencies, and validate results are not as good.
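    For flavor, here's a toy sketch (the job names are invented) of the dependency ordering that a JCL-plus-scheduler setup gives you for free; real scheduling products layer restart points, calendars, and operator alerts on top of this bare idea:

    ```python
    # Each hypothetical job lists the jobs that must finish before it may run.
    jobs = {
        "EXTRACT": [],
        "SORTTX":  ["EXTRACT"],
        "POSTGL":  ["SORTTX"],
        "REPORT":  ["SORTTX", "POSTGL"],
    }

    def run_order(jobs):
        """Return the jobs in an order that satisfies every dependency."""
        done, order = set(), []

        def visit(name):
            if name in done:
                return
            for dep in jobs[name]:   # run prerequisites first
                visit(dep)
            done.add(name)
            order.append(name)

        for name in jobs:
            visit(name)
        return order

    order = run_order(jobs)
    ```

    The Unix equivalent is usually cron plus hand-rolled lock files, which is precisely the "afterthought" being described.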

    A programmer must know how a program is going to behave on a mainframe long before running it. You need to know how much disk, CPU, and memory you need and how many lines of output you are going to produce. If you exceed this by too much, your program will be automatically canceled, because you are not the only one using the system, and a program that blows past its stated requirements probably has a problem. That can be painful, but it stops program loops if done properly.
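    The nearest everyday Unix analogue to "declare your resources or be cancelled" is setrlimit. The hedged sketch below caps a deliberately runaway child at one CPU-second, after which the kernel cancels it much as the mainframe cancels a job that blows its stated budget:

    ```python
    import resource
    import subprocess
    import sys

    # A runaway loop, standing in for a batch job stuck spinning.
    runaway = "while True: pass"

    def declare_limits():
        # "This job needs at most 1 CPU-second" -- exceed it and the
        # kernel delivers SIGXCPU, cancelling the job.
        resource.setrlimit(resource.RLIMIT_CPU, (1, 1))

    proc = subprocess.run([sys.executable, "-c", runaway],
                          preexec_fn=declare_limits)
    # proc.returncode is nonzero: the child was killed for exceeding its limit.
    ```

    The difference in culture is that on the mainframe such a declaration is mandatory on every job card, not an optional nicety.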

    The "just reb
  • Re:Cats (Score:4, Interesting)

    by cowbutt ( 21077 ) on Tuesday July 19, 2005 @05:28AM (#13101824) Journal
    As a fairly dyed-in-the-wool UNIX type (or more precisely, POSIX, since I started with the Amiga, which is more POSIX-like than anything else, IMHO), VMS seemed very odd to me when I picked up bits and pieces from an ex-DEC greybeard VMS geek.

    Bits of it are marvellously elegant and I struggle to think of clean ways of implementing equivalent things within a UNIX-like OS. Other bits seem oddly like DOS or embedded OSs such as vxWorks (more precisely, DOS and vxWorks sometimes look a bit like VMS). And then, if you install UNIX-originated software such as TCPware on VMS, bits of it /do/ start looking like UNIX.

    I was able to support TCPware on UNIX purely because many of the tools were ports or recreations of key parts of the BSD IP stack. I was even able to help a customer set up PPP when none of our experienced TCPware engineers could, because it was using pppd, as on Linux.

    The most annoying things about VMS - to a UNIX geek - are a) no 'cd' command, b) the apparent lack of relative paths, and c) system-wide date/time a la Windows, except in parts, when TCPware is installed (making for a very confusing experience around DST changeover days, especially if you have NFS in the mix too).

  • Re:I agree (Score:3, Interesting)

    by cowbutt ( 21077 ) on Tuesday July 19, 2005 @05:34AM (#13101838) Journal
    ASN.1 encoding is used all around you.

    As are ASN.1 parsing vulnerabilities [google.com] because ASN.1 is so hard to parse that nearly everyone who uses it ends up using the same flawed ASN.1 parsing codebase.

  • by rve ( 4436 ) on Tuesday July 19, 2005 @06:27AM (#13101936)
    In my opinion (as someone who does both Unix and mainframe programming for a living), the problems you describe have a lot to do with the fact that COBOL programmers tend to be older people who rolled into the programming field from something else, such as engineering or accounting, a long time ago. They didn't study CS or IT in college, because there was not really such a field 20 years ago.

    When they learned to code, people did not have computers with compilers available at home. You learned COBOL because that was what your company's application was written in. It was written in COBOL because, when development began, COBOL (or RPG) was the only language suitable for an application that handled valuable data rather than calculations or low-level hardware control.

    You didn't play around or experiment, because you were working with data that was very valuable to other people.

    On top of that comes what compares to a Mac mentality: being in love with an increasingly marginal phenomenon, and staying with the once handsome, but now bitter and abusive, spouse, blind to the limitations, or even seeing them as strengths. You can't teach modern programming practices to people who run 5250 or 3270 emulators on 3 GHz Pentium IV PCs and try to explain the superiority of EBCDIC to you.
  • Re:The Difference (Score:3, Interesting)

    by anthony_dipierro ( 543308 ) on Tuesday July 19, 2005 @07:43AM (#13102134) Journal

    No, the whole point of an operating system is to provide a stable programming target and perform resource management.

    That's what resource management is. If programmers cooperate with each other to share resources, then resource management is done automatically. But with any modern operating system this isn't necessary. Each program is separated from the other programs so that it can behave as though it is alone in the computing ether, as though it owns all the memory, all the processor, all the hard drive, etc.

    I suppose saying that it protects individual programmers from the actions of each other was a little bit overboard. Really it is designed to protect individual programs from the actions of each other. As for protecting the individual programmers, in a project with multiple programmers, that's really the job of the programming language and development tools.
