What is Mainframe Culture?

An anonymous reader asks: "A couple of years ago Joel Spolsky wrote an interesting critique of Eric S. Raymond's The Art of Unix Programming, wherein Joel provides a characteristically interesting discussion of the cultural differences between Windows and Unix programmers. As a *nix nerd in my fifth year managing mainframe developers, I need some insight into mainframe programmers. What are the differences between Windows, Unix, and mainframe programmers? What do we all need to know to get along in each other's worlds?"
This discussion has been archived. No new comments can be posted.

  • Easy (Score:1, Insightful)

    by Anonymous Coward on Monday July 18, 2005 @07:55PM (#13099346)
    Windows programmers program as fast as possible to maximize profit, ignoring the repercussions of bad programming, while Unix programmers take pride in their product.
  • The Difference (Score:2, Insightful)

    by Anonymous Coward on Monday July 18, 2005 @07:58PM (#13099373)
    The difference is single-threaded versus multi-threaded... Unix programmers know that they have to assume they could be walking over someone else's session info.

    Windows programmers always seem to assume they are alone in the computing ether.
  • Faith Machine (Score:5, Insightful)

    by Doc Ruby ( 173196 ) on Monday July 18, 2005 @07:59PM (#13099384) Homepage Journal
    This "anonymous poster" has been managing mainframers for five years, is a Unix nerd, and doesn't already know how the three cultures are different? Or are they just a Windows troll, stoking the flames of the OS holy wars?
  • One difference (Score:5, Insightful)

    by Prof. Pi ( 199260 ) on Monday July 18, 2005 @07:59PM (#13099386)

    Unix and mainframe programmers are more likely to know multiple systems, out of necessity, and consequently have a more general understanding of the commonalities of all computer systems. Windows-only programmers are more likely to know The Microsoft Way, and only The Microsoft Way. They're less likely to know standard terms, and will only know Microsoft's replacement terms. At least in my experience (and these are tendencies with plenty of exceptions).

  • The difference (Score:5, Insightful)

    by Jeffrey Baker ( 6191 ) on Monday July 18, 2005 @08:00PM (#13099391)
    What are the differences between Windows, Unix, and mainframe programmers? What do we all need to know to get along in each other's worlds?

    The difference is one programs Windows, one Unix, and one mainframes. As a fifth-year geek, you should take the rantings of Joel, ESR, and any other pointless windbag and send them to the bit bucket.

  • Good question. (Score:5, Insightful)

    by BrynM ( 217883 ) * on Monday July 18, 2005 @08:00PM (#13099395) Homepage Journal
    I'm a bit rusty on the mainframe side, but I'll give this a stab.

    The main difference is one of resources. The mainframe folk utilize a shared resource: the Mainframe System. You may have parallel hardware, but from their point of view it's a single system. There's no ability to install a quick machine to use as a test server. Sure, you can have test CICS regions or test OS partitions, but if you bring the hardware down you bring the datacenter to a screeching panic. Worse, you can piss off the operators and have 0.00001% CPU for the rest of your tenure. This keeps a certain unspoken level of panic about. Don't worry if you notice it bubble up when one of your coders fucks up. The panic symptoms will pass as it goes back down to its normal level. It won't go away though. ;-)

    Which brings me to scheduling. Remember that production=batch and batch knows no sleep. When code goes to production, it's just as bad for the stress level as a major version release of other software or a website launch. Unfortunately for the MF coder it happens a lot more often. Having to talk to your operators when you can't even see straight (from sleep or other things) takes something that is unique to this kind of coder. On-call programming takes talent and some craziness. If you can find where that is for each of them, you will relate to them well.

    One last thing: make your coders work in operations for at least a week. They will have a better understanding of the hardware end and productivity will go up. There's a reason that the best coders are in the computer room a lot.

  • by rbarreira ( 836272 ) on Monday July 18, 2005 @08:01PM (#13099403) Homepage
    can easily program for all of those systems.

    So there is no difference. There are programmers and non-programmers. Some non-programmers don't program at all; others pretend they do. Programmers will quickly adapt to any operating system. One of those groups has a future, and the other one does not.
  • by ScentCone ( 795499 ) on Monday July 18, 2005 @08:01PM (#13099409)
    Why, in my day, we used stone punch cards we had to mine ourselves from the limestone quarry! Planning ahead made a lot of sense back then. Tell that to kids today, and they don't believe you!

    Seriously, I think the real problem is management addicted to immediate change in production systems. This started when it was web content, and now they expect back-office stuff to change just as quickly.
  • On the difference (Score:5, Insightful)

    by Tsiangkun ( 746511 ) on Monday July 18, 2005 @08:07PM (#13099465) Homepage
    Unix programmers like their code like the old legos. Each piece might be a different size or shape, but the bottom of one snaps onto the top of another, and the ordering and number of pieces used is left as an exercise for the reader. With experience, anything can be built with the pieces, and yet each piece is simple and easy to understand.

    Windows is like the new lego sets. You get specialized premolded parts suitable for one specific task, plus two or three additional add-on pieces that give the illusion of being fully configurable for any task. You can build anything you want with the new legos, as long as you only want to build what is on the cover of the package.

    Yeah, that's it in a nutshell.
  • by Jay Maynard ( 54798 ) on Monday July 18, 2005 @08:11PM (#13099492) Homepage
    You got it. Unix shops are learning lessons now the hard way that mainframe shops learned the hard way 40 years ago, and they're evolving the same answers.
  • by el-spectre ( 668104 ) on Monday July 18, 2005 @08:13PM (#13099508) Journal
    Whoa... MS folks are better balanced? Not trying to fan flames here, but I work with a lot of MS guys who don't understand the basics of technology, only the bloody MS API.

    For example (I'm a web geek) we're trying to figure out why a HTTP request is getting garbled.

    My first response: "ok, lets look at the whole request -it's just text- and see what it says"

    MS-Guy's response: "I don't know... there's no method in the API for that..."

    And that, kiddies, is why I try to remain skilled cross platform :)
  • Re:The Difference (Score:4, Insightful)

    by Quantam ( 870027 ) on Monday July 18, 2005 @08:14PM (#13099519) Homepage
    The word you're looking for is "user", not "threaded". From my experience, Windows coders are much more knowledgeable about threads than Unix programmers. Back when I was just learning some POSIX programming (I've been doing Windows programming forever) I'd ask even halfway experienced Unix programmers how to create a second thread in my program's process, and the usual response was "why on earth would you ever need to do that?"
  • by SuperKendall ( 25149 ) * on Monday July 18, 2005 @08:14PM (#13099521)
    From the article:

    *Unix Programmers* don't like GUIs much, except as lipstick painted cleanly on top of textual programs, and they don't like binary file formats. This is because a textual interface is easier to program against than, say, a GUI interface, which is almost impossible to program against unless some other provisions are made, like a built-in scripting language.

    I would disagree with this assessment; instead I would say people who prefer textual interfaces do so because they often offer a much denser display of information. You can get a lot of information packed into text that may be quite spread out in a GUI.

    Also I would say that people eventually come to favor programs with scripting interfaces.

    It seems to me that as users grow more sophisticated, eventually all users become programmers in at least a specific domain, or at least desire to. All users grow used to a tool, and after a while they start wanting denser and more informative displays.

    Just look at Photoshop, probably one of the longest-running commercial applications (I'm sure there are others that elude me, but it's just a really good example). Does it even follow any kind of UI guideline? No, it does not; there are so many users that have used it for so long that they demand a richer and more complex interface. Over time they demanded plugins and then, of course, scriptability (through actions).

    Yes, Windows was a way to bring many people into computers who could not have come through UNIX. But in the long run users grow into wanting more flexible uses of the computer, and they start leaning towards the "UNIX Way" and looking for apps that are pluggable and scriptable.

  • by ScentCone ( 795499 ) on Monday July 18, 2005 @08:21PM (#13099564)
    I'm guessing that you're only hearing these stories because people have actually experienced them (I know I have). Of course, these stick out because they are trouble, and the places that do it right are the ones you never hear about because there are no war stories involved (or PHBs).
  • Re:The Difference (Score:5, Insightful)

    by cratermoon ( 765155 ) on Monday July 18, 2005 @08:21PM (#13099565) Homepage
    "why on earth would you ever need to do that?"

    That IS a good question. In Unix, creating a new process and using IPC is so simple, you almost don't need threads. In fact, before POSIX threads, Linux threads WERE processes. The advantage you got over threads was cleaner separation of memory and variables -- always worth a lot when programming in three-star C. The disadvantage, of course, was that same separation meant that everything you wanted to share had to go through IPC.
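    The fork-and-IPC pattern the parent describes can be sketched like this (Python for brevity; os.fork is Unix-only, so this assumes a Unix system):

```python
import os

r, w = os.pipe()   # one-way channel between the two processes
pid = os.fork()    # child gets its own copy of the address space

if pid == 0:
    # child process: memory is separate, so sharing goes through the pipe
    os.close(r)
    os.write(w, b"42")
    os._exit(0)

# parent process: read the child's result, then reap it
os.close(w)
data = os.read(r, 16)
os.waitpid(pid, 0)
print(data.decode())  # 42
```

    The separation the parent mentions is visible here: nothing the child does to its own variables is seen by the parent; only what is explicitly written to the pipe crosses over.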

  • by Wyatt Earp ( 1029 ) on Monday July 18, 2005 @08:26PM (#13099601)
    And...

    The Windows devs need a P4 with a gig of RAM.
    The Unix programmers can do it on a P4, but it'll work just fine on a Mot 68K or a 486.
    The mainframe programmers think a TI-92 has too much horsepower.
  • by Detritus ( 11846 ) on Monday July 18, 2005 @08:34PM (#13099644) Homepage
    There are differences, and ignoring them can be a career-limiting move.

    Have you ever written a program in an environment where if it malfunctions once during operations, the incident will be investigated by a review board? The board will want to know why it failed, and what is being done to prevent it from happening again. Then there is configuration control, requirements traceability, test plans, software build procedures, security audits, etc.

  • by mattdm ( 1931 ) on Monday July 18, 2005 @08:44PM (#13099712) Homepage
    You work at a University. Not a corporation. Things are very very much different even though you don't know that yet. Some day you'll leave, and then you'll realize how bad it really sucked there.

    Yeah, man, 35-hour standard work weeks with flexible hours, seventeen paid holidays plus four weeks of vacation a year, classes for free, and great retirement benefits, plus an environment where experimentation and individual initiative are encouraged. Working for a university sucks!

    Oh, and my own office instead of a cube -- oh, life is hard.
  • by gvc ( 167165 ) on Monday July 18, 2005 @08:50PM (#13099752)
    I've done a lot of mainframe development and a lot of Unix/Linux development; scarcely any Windows.

    The main difference I see between mainframe development and *ix development culture is respect. With the mainframe you have to book time days in advance and work in the wee hours to make any changes. And you make damned sure that, when you're done, things work as they should.

    With *ix development, things are laissez-faire. You send out a message a few hours/days/minutes in advance of some monumental change. Then you blame the users when they can't sign on to their system in the morning. Quote some recently-adopted standards if they argue.

    Of course, I'm speaking of the early days of *ix. These systems are more and more critical, and the admins are trying to learn respect. But they're playing catch-up. There's nothing like the fear of taking down a $500/minute system to make you careful.

    Windows development follows a similar pattern. The whole culture is so "personal computer" based that the concept of a year's continuous uptime is foreign.
  • by symbolic ( 11752 ) on Monday July 18, 2005 @09:02PM (#13099845)
    or have JCL spit out a bunch of random nonsense because you didn't allocate the correct blocksize for your file you'll hate your job too.

    That's all it takes to hate your job? Ever get an error compiling a C++ app using templates, or a highly abstracted Java class where an error generated in one place causes a problem somewhere else? These don't exactly put the joy *into* programming.
  • Re:my 2 cents (Score:3, Insightful)

    by radish ( 98371 ) on Monday July 18, 2005 @09:10PM (#13099890) Homepage
    This has nothing to do with GUI programming and everything to do with the cost of creating a process on NT. People began abusing threads because it was so painful to use processes.

    Most unix apps don't use threading. This is not for lack of threading or knowledge of how to use threads. It's simply that processes are as cheap as threads and offer more protection.


    How is using threads "abusing" them? To counter your point, the problem with threads in Unix is that they are as expensive to create as processes.

    Why use threads? Well maybe you don't WANT as much protection - you know having multiple threads of execution through a common memory space is actually really useful sometimes. I personally write apps which often have over 1000 threads running at a time, and it's not because I'm running on Windows (I'm not).

    I'm not saying that (for example) X doesn't have an event model similar to Windows - of course it does. But your blatant assumption that threads are for some reason bad while having many processes is magically good is complete balderdash - they are two different things and are good for different situations.
  • by jinzumkei ( 802273 ) on Monday July 18, 2005 @09:34PM (#13100045)
    Uhm, I'd get a refund from whoever taught you programming if this is a problem.
  • by sinewalker ( 686056 ) on Monday July 18, 2005 @09:52PM (#13100158) Homepage
    Apart from the obvious religious stuff about GUIs (or lack thereof) and user-centred interfaces (or lack thereof), the biggest difference, and the biggest advantage, that Mainframe brings is its culture of process and change control. It is something you should strive to let your Mainframe masters pass on to the *nix/windoze padawans before they die of old age.

    I am a *nix padawan, but, crocky technology aside, I'm frequently impressed by my Mainframe elders: their ability to deploy code to Production environments that works *the first time* nearly every time, and their ability to communicate the technical changes necessary to fix broken code in the middle of the night in the 0.1% of cases where they failed to get it working first time.

    Key values that I have picked up from my masters, and which should be inherited by both *nix and PC/Mac enclaves, are focused around engineering principles. Mainframe gurus program like a civil engineer builds a bridge. No shortcuts are taken unless it can be proven that it is safe to do so. Testing is carried out in stages, and test results must be submitted with the change request before a program migrates to Production. If a program must "abend" (Abnormal End) then it should do so noisily and with as much information as possible. If it finishes cleanly, little information is needed other than this fact.

    These closely follow the advice Raymond has encoded in his book, but there is probably much more that your Mainframe gurus know that you should cherish and extend to your newer team members.

    Forget about the religious wars, the technology changes, and the "focus" of your programmers on users or other programmers. Get the real truth from your Mainframe masters, who have seen it all pass before them and have learned the hard way how to make a stable computer environment that stays up, even on cruddy mainframe technology. If their attitudes were adopted by people fluent in today's fantastic systems, everyone would benefit.

  • Re:Whoops! (Score:1, Insightful)

    by Anonymous Coward on Monday July 18, 2005 @10:19PM (#13100302)
    Why the fuck did the power go out? That doesn't happen if the people running the place are worth anything.
  • Re:An idea... (Score:2, Insightful)

    by bit trollent ( 824666 ) on Monday July 18, 2005 @11:04PM (#13100512) Homepage
    Looks like I have been modded down as AC a bit too much lately so I will have to post logged in.

    Bottom line: Dreamweaver is a very useful tool for laying out and designing asp.net pages. You are such a smug son of a bitch that you can't even recognize this obvious fact. Let me lay it out for you:

    You use Dreamweaver to make well-formatted, attractive web sites. You (I) then use Visual Studio and C#/SQL for what you referred to as real computer work. I did this for a (now defunct) startup, and I'm not even out of college.

    I guess you can go on telling me about how smart you are (salary != brains) and feel free to ignore everything I have said. If you can be simultaneously condescending and ignorant, that will certainly be a plus.
  • by GomezAdams ( 679726 ) on Monday July 18, 2005 @11:09PM (#13100531)
    Most of the mainframers that I have worked with are nine-to-fivers, and many don't have a PC at home or do much more than check email when they do. They don't code routines at home to expand their work capabilities, and many think that if it doesn't weigh six tonnes and need 10 keepers and an air-conditioning plant, you can't call it a computer. Most depend on the fact that their arcane skills aren't taught any more, and that's all the job security they need.
  • Re:The Difference (Score:4, Insightful)

    by jericho4.0 ( 565125 ) on Monday July 18, 2005 @11:33PM (#13100664)
    "Unix really seems like it was designed for a computer with a single CPU (and it probably was; but even current Unix implementations don't seem to have adapted very well to the new capabilities of computers, in this respect), whereas NT was designed to run on SMP systems with many threads and/or processes running truly simultaneously."

    Whatever. Unix has been on 64+ CPUs for a long time now. Is anyone selling an NT machine that comes close?

  • UNIX vs Windows (Score:2, Insightful)

    by justine_avalanche ( 546756 ) on Monday July 18, 2005 @11:34PM (#13100668)
    "UNIX programs tend to solve general problems rather than special cases."

    Brian K. & Rob P.

  • by Quixadhal ( 45024 ) on Monday July 18, 2005 @11:58PM (#13100778) Homepage Journal
    Windows/PC users fix problems by rebooting until it goes away.

    Mainframe users fix problems by going away.

    *grin*

    Seriously, imagine a one-player game on a console, where you turn it on, play a while, turn it off, and when you come back you start over, or perhaps start at the last level you finished. That's a PC.

    Now imagine a multi-user game where lots of people connect and disconnect all the time, some of them keep playing while they're offline, others don't. The world itself is always there, and there are very few "resets". That's a mainframe.

    On a PC, programming tends to be sloppy because it's generally assumed (at least in the world of application development) that it won't run for more than 8 hours a day, and that the PC itself will probably reboot every day. So, a small memory leak, or a resource that gets "lost" is not going to be a major disaster. Even data corruption is likely to only affect a handful of workers and their local files.

    On a mainframe, if memory somehow gets lost, it stays lost for months or years at a time. A faulty driver can destroy the entire company data-store (hope you make backups!). But because of this, most software is checked with a bit more care.

    I hated the VAX/VMS cluster we had when I was in college... but after 10 years of dealing with the hardware annoyances of PCs, the software incompatibilities of Linux, and the general unreliability of Windows... I think I'd rather be back typing those big long DCL commands. At least that thing never crashed, and it was totally predictable (more users == slower, in a nice linear fashion).
  • Re:The Difference (Score:4, Insightful)

    by Fallen_Knight ( 635373 ) on Tuesday July 19, 2005 @12:05AM (#13100806)
    ... in Windows that is true: threads are better, period. That is NOT because of processes vs. threads; it's because of how Windows handles processes and threads. Threads are efficient and multiple processes aren't.

    as said by someone who knows more than I do:
    "There are no threads in Linux.
    All tasks are processes.
    Processes can share any or none of a vast set of resources.

    When processes share a certain set of resources, they have the same
    characteristics as threads under other OSes (except the huge performance
    improvements, Linux processes are already as fast as threads on other OSes). "
    read
    http://www.uwsg.iu.edu/hypermail/linux/kernel/0103.0/0935.html

    Which one is faster and better for a task depends on programmer skill, style, and design. Do you want shared memory and harder debugging for a bit of a speed increase, or a clean modular design with simple processes working together and smaller project sizes, at possibly a bit of a speed cost? It's all about trade-offs, and in Linux you can pick either, so it usually comes down to the programmer; in Windows, threads are the only way to go.

    Now, to your last paragraph: Windows was NOT designed to be SMP, and if it was, it sucked ass at it. Linux has had SMP support for a long, long time. And the fact that Windows runs on 4 CPUs at most, while Linux is running on 64+ CPU machines all the time, and on huge clusters of 1000s of boxes, shows me that Linux was designed pretty well.

    And as always, Linux is ahead of the curve compared to Windows (in the kernel) at all times, and runs on more systems.

    and the other reply so far is from a total idiot.
  • Re:The Difference (Score:3, Insightful)

    by jericho4.0 ( 565125 ) on Tuesday July 19, 2005 @12:40AM (#13100967)
    By "NT" I was referring to the kernel, which, AFAIK, still goes by that name.
  • by Anonymous Coward on Tuesday July 19, 2005 @01:25AM (#13101135)
    While I agree with your last 2 sentences I cannot agree at all with
    The thing that's really preserved the mainframe over the past couple of years has not been performance; it hasn't been throughput, because those things turn out to be terrible.

    The mainframe still rules in performance and especially in throughput. Otherwise they would have disappeared a long time ago.

  • Re:Answer: (Score:4, Insightful)

    by Technician ( 215283 ) on Tuesday July 19, 2005 @02:12AM (#13101333)
    What do we all need to know to get along in each other's worlds?"

    What is needed is open specs on anything that enters or leaves the machine whether it be a file, protocol, or hardware handshake.

    The biggest areas of contention are printers that won't work, or this sound card, video card, input device, etc. that won't work, because there's no published standard for how to talk to it. We need the end of closed drivers, closed files, and secret interfaces.

    Open standard items work great on all platforms. Take for example Ethernet and TCP/IP. The cable is standard as well as the low level signals. Plug in a cable, follow the spec for the address and it works.

    Anything that uses TCP/IP on Ethernet also just works.

    Plugging my printer into a Centronics port takes care of the low-level hardware connection, but there are big problems after that. The printer should not require anything beyond a Centronics, USB, or other standard connection. The idea of "Requires Windows 2000 or Windows XP" is for the birds. Saying it is PostScript is 100% OK. I can connect that to anything with the proper hardware port (Centronics, USB, Ethernet, FireWire, etc.) that supports PostScript.
  • Re:I agree (Score:3, Insightful)

    by PakProtector ( 115173 ) <cevkiv@@@gmail...com> on Tuesday July 19, 2005 @02:14AM (#13101337) Journal

    I said Java is useless except for extremely specific things, one of which would be Web Applications, and the other would be ease of portability.

  • Re:I agree (Score:1, Insightful)

    by Anonymous Coward on Tuesday July 19, 2005 @02:57AM (#13101461)
    um.
    tcp/ip does that in the next layer down.
  • by deaddrunk ( 443038 ) on Tuesday July 19, 2005 @03:11AM (#13101492)
    As someone who was a COBOL permie then contractor for many years all I can say to that is AHAHAHAHAHAHAHA. My god it must be really awful in the Unix/Windows world if the horrendous shambles that are 90% of mainframe projects are being held up as an example of how to do it.

    The way to run a perfect project (at least to my mind) is:
    1) The senior managers are there to say how they want the app to interface with the business. They have no say in how the application looks since they aren't the ones going to be using it.
    2) Some end-users need to be involved early in the process so that the developers see exactly how they do their jobs, not how the PHBs think they do
    3) Any non-trivial change to a signed-off business spec should require a good justification just like IT people have to provide a cost justification when they want something from management and if the justification isn't good enough then they'll have to wait (and maybe learn to get their requirements right before starting the design and coding the next time)
    4) Non-IT people DO NOT get to set the deadlines. They can request one, but if we say it'll take that long then usually it will, and any cutting of deadlines usually means it takes longer, has less functionality, and is an utter bear to maintain/enhance.

    I've worked in maybe 12 different organisations and only 2 even came close to any of those. Most didn't even do one of the above.
  • Re:I agree (Score:3, Insightful)

    by jadavis ( 473492 ) on Tuesday July 19, 2005 @03:26AM (#13101543)
    using the C language.

    Why is there so much animosity toward C on /. (OK, so maybe it has something to do with the buffer overflow track record, but still...)?

    It may not even be his first language. Maybe he's already learned some python and wants to gain better understanding.

    C language concepts are not isolated to the C language. If you're programming in Java, you may not have to allocate memory directly, but you have to manage the resources you do have.

    If you're programming a web application that caches frequently-accessed pages, you need to manage your cache memory as a resource. That's a hard problem, and if you're not careful there are all kinds of pitfalls that can leave you with an inconsistent cache, starving clients, or a useless waste of memory and CPU time. No magical language or library knows enough about your application's caching needs to be any help. And you're going to be in real trouble if you've never run into an easier resource problem, like doing malloc/free for dynamic data structures.

    Trying to prevent the buffer overflows by discouraging the study of resource management is OK for anyone at the "code monkey" level or below. Code monkeys solve problems that have already been solved many times before, and frequently have no need to manage resources. However, anyone who wants to be able to solve new and difficult problems (like the poster to whom you responded) should certainly be aware of resource management concepts.

    That being said, you should use the right tool for the job. If you are actually planning on connecting your software to the internet, it's nothing short of irresponsible to allow rampant security problems, particularly if another tool makes it so easy to be secure.
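    The cache-as-a-managed-resource point above can be made concrete with a deliberately tiny sketch (Python; BoundedCache is a hypothetical name, not any real library's API). Bounding the cache and evicting the least recently used entry is the minimum discipline that stops it becoming "a useless waste of memory":

```python
from collections import OrderedDict

class BoundedCache:
    """Toy LRU cache: capacity is treated as a managed resource."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)         # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = BoundedCache(2)
cache.put("/index", "<html>index</html>")
cache.put("/faq", "<html>faq</html>")
cache.put("/news", "<html>news</html>")     # capacity exceeded: "/index" is evicted
print(cache.get("/index"))  # None
```

    The hard parts the comment warns about (invalidation, starvation, consistency) are exactly what this toy leaves out; the point is only that the resource has to be bounded and accounted for somewhere.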
  • Re:An idea... (Score:3, Insightful)

    by dbIII ( 701233 ) on Tuesday July 19, 2005 @03:44AM (#13101583)
    I stopped worrying about 'efficiencies' and 'cycles' years and years ago.

    Some people still have to worry - when you run an operation on some data that takes three days on a dual 3GHz Xeon, and you have different datasets doing the same stuff on another eleven machines, time savings of a couple of percent make a bit of a difference.

    When we get more computing power we just get it to do more stuff, it's no excuse to be slack.

  • Re:I agree (Score:3, Insightful)

    by Cardinal Biggles ( 6685 ) on Tuesday July 19, 2005 @03:47AM (#13101596)
    Look at HTML - all ASCII. ASN.1 was invented so that you didn't have to use all ASCII for this kind of data (look at the SNMP spec if you want more details). But does anyone use it for the on-the-wire format? No.

    Actually, I think HTML and HTTP are a good example of how it should work. First, you make an easily understood, easily implemented and easily debugged format and protocol. Then, you can use something like gzip as a transfer encoding, and you've optimized for bandwidth in the correct place.

    So ASN.1 should be replaced by XML-type things ASAP as far as I'm concerned. Unfortunately you're wrong where you say that nobody uses ASN.1 -- think LDAP, SSL, SNMP, ... ASN.1 encoding is used all around you.
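    The "gzip in the correct place" argument is easy to demonstrate: keep the format as readable, debuggable text, and compress only at the transfer layer. A sketch in Python:

```python
import gzip

# a readable text format (repetitive, like real markup)...
page = b"<html><body>Hello, world</body></html>" * 50

# ...compressed only as a transfer encoding, where bandwidth matters
wire = gzip.compress(page)

# the round trip is lossless, so debuggability costs nothing on the wire
assert gzip.decompress(wire) == page
print(len(wire) < len(page))  # True
```

    The redundancy that makes text easy to read is exactly what a general-purpose compressor squeezes out, which is why optimizing at this layer beats inventing a binary format.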

  • Re:An idea... (Score:5, Insightful)

    by GCP ( 122438 ) on Tuesday July 19, 2005 @05:44AM (#13101850)
    Come back to me when you've graduated and have >10 years under your belt.

    Maybe he doesn't have that kind of experience, but I have more than twice that, and I think you have all the earmarks of a 15 yr old Slashdot troll, with the bragging, profanity, "hahaha" in place of argument, "I make X dollars/year" blather, posting as AC, etc.

    Normally I wouldn't waste time on an adolescent troll, but I want to make sure something is clear. I was one of the architects involved in the creation of Dreamweaver, and we designed it to be used by programmers, not just Web designers. Among other things, it was designed to be used just as BitTrollent is describing: as a code generator for GUI elements. Typically, you'll lay out a page, or some page element such as a form, graphically in the GUI view. Then you'll switch to the embedded code editor and tweak it. Then you can either use the code editor to embed inline code in a language like PHP, or "code behind" (as in ASP.Net, using VS as he described to write the C#), or do what I've done on occasion and copy the generated HTML into your source in some other language (e.g. a "HERE document" in Perl or a C++ header file, perhaps after running it thru a preprocessor to turn tokens into method calls or whatever).

    A troll who can't understand that real programmers who create GUI apps sometimes start with a GUI layout and code generation tool--or even with a simple drawing program--doesn't really understand the process. If he really has ten years of experience himself, it must have been pretty narrow experience to be so unfamiliar with such a common form of development.

  • Re:I agree (Score:4, Insightful)

    by blane.bramble ( 133160 ) on Tuesday July 19, 2005 @06:07AM (#13101896)

    Unixheads seem to claim that it's perfectly admirable to hack around the ASCII format for everything because it makes it easier to debug, whereas all I see is wasted entropy and bandwidth.

    Wait until you have to troubleshoot issues with SMTP, POP3 and the like, then you will absolutely love the fact you can simply fire up telnet, connect to a server and manually test things by typing the protocol handshakes in. Not only are they all ASCII, they are easy to remember and make lots of sense.
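    For anyone who hasn't tried it, a hand-typed SMTP session really is that simple. The transcript below is illustrative (hostnames and addresses are placeholders); lines starting with a numeric code are the server's replies, everything else is typed by hand:

    ```
    $ telnet mail.example.com 25
    220 mail.example.com ESMTP ready
    HELO client.example.org
    250 mail.example.com
    MAIL FROM:<alice@example.org>
    250 OK
    RCPT TO:<bob@example.com>
    250 OK
    DATA
    354 End data with <CRLF>.<CRLF>
    Subject: test

    Hello from telnet.
    .
    250 OK: queued
    QUIT
    221 Bye
    ```

    The three-digit reply codes (2xx success, 3xx continue, 5xx failure) are what make eyeball-debugging a broken mail server practical.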

    Take it from this SysAdmin/Programmer, you'll never want to go back to a binary protocol again.

  • Re:I agree (Score:3, Insightful)

    by aaronl ( 43811 ) on Tuesday July 19, 2005 @08:28AM (#13102306) Homepage
    Oh yes, prefixing the length seems really smart. Then somebody improperly prefixes and things start going to hell. You also have to worry more about converting endianness, char widths, etc. What if there was a transmission error? What will your program do if I say a string will be 1000 chars long, and then only send you 50? By the time you finish with all the checking, you would've been better off not trying to rely on precalculated string lengths at all. Also, how much additional data do you end up using if you start sending everything in multibyte chars and such?

    If you didn't generate the data, then you don't trust the data. Hell, even if you did generate the data, you might not want to trust it.

    If you want to have Pascal strings, then just calculate the length and hold onto it yourself. Then after the first time, you don't take the performance hit. There are very good libraries that do all this if you don't want to write your own code library.
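    The "declared 1000 chars, sent 50" failure mode above can be made concrete. Below is a defensive reader for a length-prefixed string; the 4-byte big-endian prefix, the sanity cap, and the function name are my own illustrative choices, not any particular protocol's format:

    ```python
    import struct

    MAX_LEN = 4096  # arbitrary sanity cap; a real protocol would specify one

    def read_prefixed_string(buf: bytes) -> str:
        """Parse a 4-byte big-endian length prefix followed by UTF-8 payload."""
        if len(buf) < 4:
            raise ValueError("truncated: no length prefix")
        (n,) = struct.unpack(">I", buf[:4])
        if n > MAX_LEN:
            raise ValueError(f"declared length {n} exceeds cap {MAX_LEN}")
        payload = buf[4:4 + n]
        if len(payload) != n:  # sender promised n bytes but delivered fewer
            raise ValueError(f"declared {n} bytes, got {len(payload)}")
        return payload.decode("utf-8")

    msg = struct.pack(">I", 5) + b"hello"
    print(read_prefixed_string(msg))  # hello
    ```

    Note that by the time every check is in place, the parser is doing as much work as a delimiter-scanning ASCII parser would, which is the poster's point.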

    You ended up epitomising exactly what the GP was talking about with new programmers.
  • by fitten ( 521191 ) on Tuesday July 19, 2005 @09:18AM (#13102591)
    Raymond invents an amusing story to illustrate this which will ring true to anyone who has ever used a library in binary form.

    Unfortunately, I can't tell you how many Open Source libraries I've thrown away after trying to use them for the exact same reasons. The APIs are poorly documented (if documented at all), the APIs don't work as described, the APIs don't work like it seems they should, the APIs just don't work, or the APIs are way too complicated for what I/we need done. So what if I can debug through the source -- maybe it gets me to the point of throwing away the OSS library faster, because I can see that it is useless, but the end result is the same.

    You can also chalk some of it up to the GPL. I've found libraries that seemed OK but were GPL'd, and the software we write can't include GPL'd source, thus forcing me/us to reinvent the wheel.

    Neither model is perfect and programming never will be the utopian ideal of being able to always reuse everyone else's code. Sometimes others' code just isn't written to fit the way someone else thinks so it seems unnecessarily complicated and/or contrived.
  • by Keck ( 7446 ) on Tuesday July 19, 2005 @09:30AM (#13102691) Homepage
    As a *nix nerd in my fifth year managing mainframe developers, I need some insight into mainframe programmers.

    So you have been managing a group for five years, and have no idea what makes them tick? Sounds like you're definitely management material :)
  • Re:Cats (Score:3, Insightful)

    by EvilTwinSkippy ( 112490 ) <yoda AT etoyoc DOT com> on Tuesday July 19, 2005 @03:04PM (#13105969) Homepage Journal
    cd is standard?

    Last I checked, Unix dates from the late 60's and VMS from the late 70's -- both long before anyone expected different systems to behave alike. The idea that everything should look the same and act the same didn't appear until the Macintosh in 1984.

    You don't exactly walk around Europe and say "narrow streets are so difficult to drive around." Ok, Americans do, but last I checked the streets were there first.

    And before you get all highfalutin' about how MS-DOS has used CD since the beginning, it hasn't. There were no subdirectories before the release of DOS 2.0 in 1983. (I started using DOS with version 2.1, so I don't know how it worked before then.)

We are each entitled to our own opinion, but no one is entitled to his own facts. -- Daniel Patrick Moynihan

Working...