Why Haven't Special Character Sets Caught On?

theodp asks: "Almost forty years after Kenneth Iverson's APL\360 employed neat Selectric hacks to implement Special Character Sets to express operators with a single symbol, we're still using clunky notation like '<>', '^=', or 'NE' to represent inequality and cryptic escape sequences like '\n' to denote a new line, even though the Mac brought GUI's to the masses more than twenty years ago. Why?"

  • And special characters wouldn't be?
  • Why? (Score:5, Insightful)

    by wowbagger ( 69688 ) on Monday October 17, 2005 @04:55PM (#13812019) Homepage Journal
    Why are we not using characters that are:
    1. Hard to generate on a standard keyboard
    2. Not standardized in the specifications of the language.
    3. Not standardized in the character sets of most non-bitmapped displays.
    4. Not standardized in HTML markup.


    Gosh, I don't know!

    Now, if you will excuse me, I need to create a local variable named <The Symbol for the Artist Formerly Known as "The Artist Formerly Known As Prince">
    • It's easy now to implement input methods, HTML does standardize most of the characters, and large Unicode fonts are generally available, so most of these considerations are obsolete. It would be a useful and trivial hack to make gcc accept &ne; et alia, for example. The result would be fewer bugs and more readable code. It really is about time. Usually a new level of technique does not become predominant until it offers substantial advantage at marginal cost. Now that international communication,
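
      For what it's worth, C never did grow a real ≠ token, but the standard headers have shipped keyword spellings of the ASCII operators since C95. A minimal sketch of that (the variable names are just for illustration):

      /* <iso646.h> (C95) defines not_eq, and, or, not, etc. as macros for
         the usual ASCII operators -- about as close as standard C gets. */
      #include <iso646.h>
      #include <stdio.h>

      int main(void)
      {
          int a = 1, b = 2;
          if (a not_eq b)          /* expands to: a != b */
              printf("a and b differ\n");
          return 0;
      }
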
      • Re:Why? (Score:2, Interesting)

        by usrusr ( 654450 )
        fewer bugs?

        i think you remember what happened when "they" introduced extended characters in the DNS: the only people who really used them were the phishers, who could now create domain names, with the new characters, that looked very much like the names they were trying to imitate, so browsers had to do a 180 for security reasons.

        source code is a slightly different environment, but even there it can already be difficult enough to visually distinguish between l and 1 in many fonts, or the various variations
        • > You may argue that nobody willingly uses variable names like
          > ll1O0, l1100 and lllOO

          I would argue that no programmer worth his salt willingly uses fonts that don't make those characters easy to distinguish. I certainly don't. However, unicode characters are another matter; I only care about how my fonts display safe and sane characters (i.e., ASCII characters with decimal values from 32 through 126). Unicode characters can show up as a character-sized box as far as I'm concerned, because they don
    • I need to create a local variable named

      I thought he'd changed his name to "The Artist who until recently was known as the Artist formerly known as Prince"? :o)

      (Bonus: a cookie for anyone who gets the reference. :o)
      • Does he say "Ni!"?
      • by Bake ( 2609 )
        Monty Python and the Holy Grail - The Knights Who Say Ni.

        Sorry, the Knights Who say Ecky Ecky patong zoooop *mumble* ... err ... The Knights who until recently said Ni.

        Now. Where's my cookie? :-)
    • Hard to generate on a standard keyboard

      Well, most of them are, but some of them are not. It has been a while since I used OS X, but if I recall correctly it only took [some modifier key] + "/=" to type the not-equals sign. Quick as well as intuitive. Just because it is possible to imagine completely useless characters doesn't mean we ought to limit ourselves to ASCII.

      Not standardized in the specifications of the language.

      Wasn't the original question more along the lines of "why are they not standardized i

    • I wouldn't say it's hard to generate special characters on a standard keyboard - AppleScript uses them, and it's just hitting key combinations like "option =" and "option enter." The real issue for me wasn't the typing; it's that these are characters I have to enter which aren't printed on the keyboard. "!=" or "<>" are easy: I know what they mean, and I know how to type them. A real mathematical "not-equals" symbol is non-obvious, and figuring out how to type it requires consulting the manual.

      Apple's solution is
  • by Kelson ( 129150 ) * on Monday October 17, 2005 @04:56PM (#13812025) Homepage Journal
    In programming? Most languages seem to be designed with ASCII in mind, so you have to stick with what's available there.

    In general? I think it's a matter of input methods. Give me an input method where it takes only two keystrokes to type "" and I'll use it instead of "NE" or "". If I need to use a vulcan death grip, remember a code, or find it in a character map, I'm only going to bother when I have motivation: either making a point, like earlier in this paragraph, or making a polished document. Why go to the effort in a casual email, or a forum post, when it's much easier to type "" instead?
    • by Kelson ( 129150 ) * on Monday October 17, 2005 @04:59PM (#13812044) Homepage Journal
      I entered an actual not-equal sign in that post, and Slashcode stripped it out!
    • If I need to use a vulcan death grip

      If you think emacs editing sequences are obscure now, imagine how much more fun they'd be with all those "special characters"...

      If you're a touch typist, you really want to minimize the number of keys you have to press simultaneously to get something done, especially if you can't use both hands to do it. Typing two or more normal characters one after another is much easier.

      Eric

      • Just because Emacs takes up every possible key combination all by itself doesn't mean the rest of us have to deal with that limitation. :-)

        Realistically the control key should be used for control functions, and alt (or option) key should be somehow involved in accessing alternate characters. Doesn't the labeling make it obvious?

        And if you need more control keys than you can get with just control, then maybe you need to be using a command line. Using esc from "insert mode" to get to a command line makes re
    • How about dead key input or keyword replacement? You can try them in Vim with :imap != <C-k>!= or :abbreviate != [not-equals sign here].

      Try one and type if(a != b); Vim will turn it into a nice \ne sign.

      The dead key approach could easily be included in X (X.org doesn't seem to support making arbitrary keys dead now). The keyword approach (which I prefer) would have to be on an editor-by-editor basis.
  • I mean, they're better, right?
  • by DrSkwid ( 118965 ) on Monday October 17, 2005 @05:02PM (#13812074) Journal
    my OS [bell-labs.com] is where UTF8 [pdf] [bell-labs.com] was invented.
  • by LeninZhiv ( 464864 ) * on Monday October 17, 2005 @05:06PM (#13812101)
    \n is cryptic and APL isn't?

    I'd say it's more a question of 'choose your poison'. There is a learning curve whether one aims at mathematics-based notation schemes or historical computer science notations, and the market has already chosen (30 years ago) which one it prefers.

    And not without cause. Human language looks a lot more like modern programming languages than mathematical notation, and a major goal of programming language design is to make it as straightforward as possible to tell the computer what you want it to do. One might object that by that argument Cobol is better than C, but humans, especially experts working in a specific domain, like abbreviations too. Cobol is hated because it doesn't allow you to abbreviate, not because it is hard to read, after all. APL or other such specialised syntaxes are hard to read and they don't fit closely enough with the way non-mathematicians think to be intuitive.
    • COBOL is hated because it was a language designed for programmers, not by programmers. Same as Java.

      Try C, LISP, Smalltalk or Ruby and you start to feel that the language is helping you. Cobol & Java feel like they are getting in the way.

      Personally, I love Ruby, but that might just be me.
      • I don't mean to be difficult, son, but:
        • Grace Hopper was, and James Gosling is, most definitely a programmer. If you don't think COBOL was an advance at its time, you've just never coded 701 machine code.
        • If you think Java's syntax is radically different from C's syntax, you've never coded in one or both of them.
        • In fairness to the OP, if, like many programmers these days, you've never programmed or seen anything *but* C/C++ and Java, you might think the syntax is very different.
        • If you think Java's syntax is radically different from C's syntax

          Syntax is one of the least important features of a language, and OP never said anything about them having different syntax.

          OP wrote:
          "Try C, LISP, Smalltalk or Ruby and you start to feel that the language is helping you. Cobol & Java feel like they are getting in the way."

          One can infer that he's in favor of strongly, dynamically typed languages with the exception of C (weakly, statically typed), and against strongly, statically typed langua
        • I ain't your son, got that pal?

          Compare COBOL to LISP: which one came first? Which one can be applied to solving the most problems? Which one provides the most abstractions to the programmer? COBOL was in no way a step forward from LISP.

          Below is some history of the languages, pretty accurate IMHO.

          http://en.wikipedia.org/wiki/COBOL [wikipedia.org]
          http://en.wikipedia.org/wiki/LISP [wikipedia.org]
          http://en.wikipedia.org/wiki/Java_programming_lang uage [wikipedia.org]

          You'll see that COBOL and Java were designed by committee before being in general use. Most succ
          • No, you're not my son, you're just another young moron who thinks links reflect knowledge. Of course, if you read your links you'll see that COBOL was driven by FLOW-MATIC; Java wasn't designed by a committee, but the version of C you've most certainly used was; and that LISP, FORTRAN, and COBOL are in fact exactly contemporary.

            If you had much deep knowledge of programming languages --- or had read the links you posted --- you'd also realize that Java has more in common with Smalltalk than pretty much any
            • Ahh,

              my mistake, I took your original post as having some slight oversights that I could gently point in the direction of understanding. My mistake, won't happen again with you. Apologies for wasting your time.
            • Actually, I'd say Java has a lot more in common with Ada than it does with Smalltalk. Especially the way packages and threads work.
              • Well, of course Smalltalk doesn't really have either one, or even separate compilation, so there are similarities to Ada, C, C++, Eiffel, and so on, that distinguish Java from smalltalk. On the other hand, Java has the bytecode interpreter basis, closures of a sort, garbage collection, and a large standard class library with lots of GUI and network richness.

                Perhaps more to the point, Jim Gosling has been quoted as saying that Java was based on trying to bring Smalltalk to C++ programmers.

    • There aren't so many of us in this thread with the background to actually do so. I learned APL when I took a calculus course at the local university, because I was applying to schools out of province where calculus was taught in the terrible idea known as "grade 13". The math course allowed me access to the CS lab, and I soon started to thrive on being able to write programs in APL that ran a *lot* faster (sometimes factors of 10) than any of the programs written in compiled Pascal by the CS undergraduates
      • Let's assume we have a C language implementation from the day when an integer was 16 bits and a long was 32 bits. For this C language host, the expression -32768 does *not* do what one first supposes: represent the least possible 16-bit signed integer value, because this is parsed as the negation operator applied to the long integer constant 32768. How stupid. Give me back my high minus sign!

        Eh? I'm not an old fogey, so maybe C implementations back then sucked, but -32768 does indeed mean the least poss
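
        A minimal C sketch of the parsing point above, under the grandparent's assumption of a C90 compiler with 16-bit int and 32-bit long (on a modern 32-bit-int system it compiles the same, the distinction is just invisible):

        /* "-32768" is not a negative constant: it is unary minus applied to
           the constant 32768.  With 16-bit ints, 32768 does not fit in int,
           so the constant gets type long and the whole expression is a long. */
        #include <stdio.h>

        int main(void)
        {
            long parsed_as_long = -32768;      /* long under C90 16-bit-int rules */
            int  true_int_min   = -32767 - 1;  /* how 16-bit <limits.h> headers spell INT_MIN */

            printf("%ld %d\n", parsed_as_long, true_int_min);
            return 0;
        }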

  • Listen to me (Score:5, Interesting)

    by Profane MuthaFucka ( 574406 ) * <busheatskok@gmail.com> on Monday October 17, 2005 @05:08PM (#13812123) Homepage Journal
    Now sonny, sit down a second and listen to grandpa rant about the good old days. The truth is, when I talk about the good old days, it's not because the days were actually good. It's because I have a sucky memory and questionable taste.

    Now it is TRUE that I once did do programming in APL. This was on an old Zenith 8088 based PC clone with 640K of memory, a CGI display, and a 20 meg hard drive. The system itself worked rather well. If you could work a line editor, the development environment was all you could want. The problem was all the little stickers that went on the keys. Every key mapped to about three other symbols besides the normal ones, and just about every key had a little sticker on it. It was NOT fun. Just because your computers can display characters that look like Chinese doesn't mean that it's a good idea.
    • Hey, was that a Z-100 [retrotechnology.com] (with the S-100 bus) or one of their later straight clones?

      Only reason I ask is that I remember the Z-100 as having a very nice keyboard, dished like a Selectric and with a good feel to it.

    • I've actually programmed for an 8088 and an 8086; you should check some of your facts. Only the amount of memory is accurate, and even that's not completely true, because some of it was allocated for the DOS shell. Then again, you did say that your memory was sucky. Also, WordPerfect was not a line editor.
    • by Anonymous Coward
      It's a CBM SuperPET. It does APL, and it still works. All the keys have the freaky APL symbols on them, and as you probably know there were quite a few other symbols that you had to create by overstriking.

      Now, really APL is just functional programming in disguise, with a pathetic lack of flow control constructs. Now, if you're familiar with Scheme or LISP, you end up wondering just what the hell the point is of doing a bizarre sequence of keystrokes to produce a strange glyph, when you could instead ju

    • APL on a Zenith?!?

      That's the problem... APL was best on IBM terminals-- Like the 3270, or even a 5100.

      Took the CS language survey course in college - APL on a Cyber 170 on ASCII terminals -- nasty three char tags in place of the operators.

      Later I took a calculus course (Calculus of Vectors and Spaces, otherwise known as Magic Math) -- we used APL on a 370... much easier to understand.

  • Efficient (Score:2, Insightful)

    by Threni ( 635302 )
    Because we don't need to change, just for the sake of it, to a system which isn't supported by a lot of software and hardware. Why not just change your software to interpret the characters as an image, like some already do with smilies?
  • Simple (Score:5, Insightful)

    by fm6 ( 162816 ) on Monday October 17, 2005 @05:11PM (#13812154) Homepage Journal
    Same reason the Dvorak keyboard has never caught on -- nobody wants to learn to type all over again.

    Display was never the issue with APL. There are implementations of APL that use keywords instead of symbols. It's just that turning everything into an operator makes for really dense, hard-to-maintain code.

    I'm reminded of Forth [wikipedia.org], which lacks APL's weird symbols, but shares its reputation for dense code. In its heyday, Forth programmers justified using it by claiming it made them more productive. And that's true — if you define "productivity" as "number of lines of new code hacked out per day". But code isn't just written, it's maintained, and dense languages are not maintenance friendly.

    • I tried to learn Forth once... I thought it was made for people who thought asm was too straightforward.
      There does seem to be a sweet spot for code density vs. readability.
      C and C++ seem to have found it for most people.
      At one end you have COBOL, Pascal, and Ada.
      At the other you have Forth and APL.
      C, C++, C#, Python, Java, and the other C-like languages seem to be in the middle.
      • It depends on how you look at it, but I think you could arrange for many variations of this.

        Most people would agree that Java, C#, Perl, and Python have high-level constructs that are fairly core to their language implementation.

        C, C++, and to some extent Pascal (modern-day Pascal) seem to be on a similar tier, with the ability for inline code, memory management, and inline assembler. Also, the generated code is fairly straightforward when compared to the assembler output.

        Almost nobody uses cobol,
        • Hmmm... interesting. I've never heard that division before; it sounds pretty much like an HLL/low-level language distinction.

          I tend to avoid such divisions when I'm speaking to others. It is far too easy to fall into the Blub paradox [paulgraham.com]. You tend to evaluate programming languages by going down the power scale from whatever you've learned, and therefore fail to recognize that there's a range up from what you've learned. If somebody generates a brilliant new language that has the potential to make progr

        • Actually, Ada and COBOL are still used a lot. I hope no new programs are written in COBOL, but there is still a lot of old code being used.
          Ada was popular in Europe and is popular enough to have a GNU compiler available.
          That being said, my divisions were based on how verbose the actual source code is, as opposed to the features of the language.
    • by epine ( 68316 )

      The concept that APL code is "hard to maintain" is correct to first approximation, but it's more myth than reality when one digs deeper into the question. Most of the densest lines of code I once concocted in APL were 100% maintenance free: efficient and correct over the entire usable operand range. The density of the code squeezed out many degrees of freedom for making stupid errors even before you began.

      There were other factors, having little to do with code density, that made APL systems hard to mainta
    • Actually, Forth programmers pride themselves on how few lines they write each day. A favorite Chuck Moore quote:
      Another aspect of Forth is analogous to Ziff compression. Where you scan your problem you find a string which appears in several places and you factor it out. You factor out the largest string you can and then smaller strings. And eventually you get whatever it was, the text file, compressed to arguably what is about as tightly compressed as it can be.
  • by metamatic ( 202216 ) on Monday October 17, 2005 @05:13PM (#13812165) Homepage Journal

    Because standardization of extended character sets, via Unicode, is a relatively recent development. Hence, there's a lot of software around that still doesn't handle Unicode.

    For example, I switched to bash because tcsh didn't cope with Unicode. Mozilla's Unicode support is incomplete--card symbols defined in the HTML 4.01 standard don't show up properly on the Mac, even though it definitely has them in its standard fonts. Many text editors don't support Unicode. And so on.

    In fact, it's only recently that Slashdot was fixed to allow us to use words like "cliché" and enter amounts of money in Pounds Sterling like £5.99, even though those 'special' characters were part of HTML 1.0. Forget about using the aforementioned card symbols on Slashdot—we got 1996's CSS a couple of months ago, maybe we'll get 1999's HTML 4 in 2008?

    Next you add in the fact that most people are too lazy to even learn to spell correctly, far less learn how to type an e with an acute accent, and you have a recipe for today's state of the web.

  • by torpor ( 458 ) <ibisum.gmail@com> on Monday October 17, 2005 @05:17PM (#13812199) Homepage Journal
    {disclaimer: i'm a closet fontographer.}

    i've thought about this question since 1978, as i have encountered over the years since then a grand litany of different ways of describing symbols in such a way that they can be standardly used, and i have come to a very simple answer. humans are stuck on a symbol treadmill with infinitely smooth bearings.

    fontography is a lesson of symbols .. and the description of these symbols is limited by strict hardware limits: economic, social, cultural elements all have a part to play in the definition of input devices. where i say QWERTYZXCV, you say QWERTZYXCV.

    we haven't seen terribly wide-spread specialization of symbols because of the producer-/consumer- cults of USKEY101, and people's unfamiliarity with alt-numkeypad chops, and Mac vs. PC, and ASCII vs. UTF-8, and XML vs. .bin, and "X" vs. "Y", blah blah, ad infinitum..

    the fact is, perhaps deep down inside we know we should be grateful for what we've got, and let the "!=" and ">=" expressions, 2 lonely bytes in a vast nasty sea, stand as testament to the human desire to at least, a little bit, get along on the same key. they may not be pretty, but pretty much everyone can get to those two bytes and use them when they need to .. it's only a tiny clique that can do the alt-numpad thing, and even fewer who choose to jump out of the ASCII pool and towel off..
    • One possibility is to define not-equal-to as a special Unicode symbol that can be produced by typing '!' and '=' one after another. A bit like how you'd produce umlauts and all those cool European characters, and Indic characters, on a US 101 keyboard.
  • The real question here is 'Gosh, languages don't use all the same syntax to represent mathematical ideas. Isn't there some way we could force them to do so?' And the answer is, succinctly, no.

    And for non-visual characters like 'newline'.... what other idea, exactly, did you have? \n is pretty straightforward, once you know how it works. I submit that some random symbol would be worse than what we have now.

    The musician Prince tried the glyph substitution trick, if you recall, and it wasn't tremendously s
    • And for non-visual characters like 'newline'.... what other idea, exactly, did you have? \n is pretty straightforward, once you know how it works.

      Yes, it is of course completely silly to want a special character for newline, since you already have one: it is generated by the enter key. The idea of \n is that you can see that there is a linebreak without clobbering the layout of your code. If you want to have some symbol X for newline, then you will have to escape X if you want to display an X instead of a
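
      The escaping trade-off is the same one C already has with the backslash itself; a tiny, self-contained illustration:

      #include <stdio.h>

      int main(void)
      {
          printf("line one\nline two\n");  /* \n stands in for the invisible newline */
          printf("literal \\n here\n");    /* to show the two characters themselves, escape the backslash */
          return 0;
      }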

    • Re: \n as newline (Score:4, Insightful)

      by some guy I know ( 229718 ) on Tuesday October 18, 2005 @03:44AM (#13815083) Homepage
      And for non-visual characters like 'newline'.... what other idea, exactly, did you have?
      How about U+2424?
      Actually, that's the symbol for a graphic representing a newline (a slightly raised N next to a slightly lowered L, shrunk and crammed together into an area approximately a single em-space wide), so maybe that's not such a good idea (as how would you represent the graphic itself in a string?).
      OTOH, a \ followed by U+2424 could better represent a newline graphically in a string.

      The reason that \n seems "pretty straightforward" is that most of us are used to it.
      The concept of a backslash followed by a letter representing a control character started in C in the early 1970s (or possibly in earlier languages), and has been copied into dozens of other languages, along with other things like using % in printf strings to format variables (although some languages, like Ruby, are starting to offer alternative representations to %).
      Note that, in Common LISP, a newline is represented by ~% and ~& in formatting strings, and #\Newline (spelled just that way) represents a newline character outside of formatting strings.
      In Object Pascal/Delphi, a newline is represented by its decimal or hexadecimal equivalent, #10 or #$0A.
      Some languages, like Python and sh/ksh/bash/etc., allow an actual newline in a string itself (in Python's case, in triple-quoted strings), so no representation is necessary (although Python allows \n as well, in its non-raw strings).
      Other representations that I have seen in the past include ^J and ^M^J (for line feed and carriage return/line feed as control characters), $ for end-of-line in regular expressions (although the $ doesn't usually match the actual newline itself), and the $ shown at end-of-line in vi's "list" mode.
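
      For comparison, in C all of those spellings come down to the same code point; a small sketch (assuming an ASCII-based system, where newline is 10):

      #include <stdio.h>

      int main(void)
      {
          /* '\n', '\x0A', '\012' and plain 10 are all the same value here */
          printf("%d %d %d %d\n", '\n', '\x0A', '\012', 10);   /* prints: 10 10 10 10 */
          return 0;
      }
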
      • So are you seriously asserting that \Unicode 2424 should be used in place of \n? Sure, it's pretty and all, but A) it takes a hell of a lot longer to type/specify using a keyboard, and B) common functions should be mapped to common characters. Newline is EXCEEDINGLY common, so it should be very, very fast to specify, not mapped to some obscure graphic buried somewhere in Unicode. (at least 2424 would be pretty easy to remember.)

        Your observations of the alternate newline syntaxes were interesting, but I su
        • So are you seriously asserting that \Unicode 2424 should be used in place of \n? Sure, it's pretty and all, but A) it takes a hell of a lot longer to type/specify using a keyboard, and B) common functions should be mapped to common characters. Newline is EXCEEDINGLY common, so it should be very, very fast to specify, not mapped to some obscure graphic buried somewhere in Unicode. (at least 2424 would be pretty easy to remember.)

          No, I'm speculating that, visibly, it may be more indicative of an actual newl

          • That's actually not a bad idea. Wonder if someone will pick it up and run with it?
            • Well, ":imap ^Q^M \n" works in vim to insert (the two-character sequence) "\n" when <return> is hit, but I can't figure out how to detect when the cursor is within a string.
              It would be nice if autocommand had a mode that would fire when entering or leaving a syntactic region, so that I could map and unmap the key that way.
              Oh, well; I will play with it some more when I have time.
              If I can figure it out, I will also set it up so that "^I" (tab) inserts "\t", etc.
  • data entry (Score:3, Insightful)

    by TheSHAD0W ( 258774 ) on Monday October 17, 2005 @05:30PM (#13812298) Homepage
    I think a large part of it is because, even if we have the ability to display the characters, we don't have a convenient way to enter them. The keyboard doesn't have a Sine symbol key. Further, expanding the keyboard to include these symbols will just make it unwieldy. I suppose one could have the display automatically convert sequences into special characters, much like modern word processors perform auto-superscript, but this might cause problems when editing. I personally prefer it as-is.
  • Two questions: (Score:1, Offtopic)

    by Just Some Guy ( 3352 )
    <smartass>

    1. Do you own Wikipedia stock?
    2. even though the Mac brought GUI's to the masses -- brought GUI's what to the masses? Poor GUI - what if he didn't want his possessions to be widespread?

    </smartass>

  • by Craig Maloney ( 1104 ) * on Monday October 17, 2005 @05:31PM (#13812312) Homepage
    It's pretty simple: lowest common denominator. Creating special character sets creates incompatibilities with other machines out there. That's why ASCII was such a boon, and why character sets like PETSCII, ATASCII, and others fell by the wayside. (And if you really want some character-set fun, try EBCDIC sometime [natural-innovations.com]).
  • All you need to do is have an ASCII chart handy and you can deal with text on any platform. Plus, the special symbols aren't on my keyboard. If I have to go hunting through obscure key combinations I need a pretty damn good reason!
  • Troll Article? (Score:1, Offtopic)

    by slashflood ( 697891 )
    That's one of the worst Ask Slashdot articles ever - it almost tops this one [slashdot.org]. Is it meant to be a troll? What exactly is the connection between your question about special character sets and the link to Wikipedia's "Apple Macintosh" entry? Apple fanboyism?

    Back to your question: What should be included in the special character sets? Do we need a set for every programming/markup language?
  • Suppose you go to the Unicode folks and say "lets use a spare set of codepoints to encode programming language constructs".

    OK, so, which constructs?

    Well, we've got the basic operators of C. Java and C++ can share a lot of those. Then we've got the stuff in Ada, they have a few of their own. And ocaml has a few more. Haskell can use some of the ocaml ones, but we'll distinguish them with a diacritic to mark them as lazy...

    Oh drat, someone sent me a program in Perl, and I haven't got the right font. It just l
  • From the Wikipedia entry linked in the original post...

    "APL, in which you can write a program to simulate shuffling a deck of cards and then dealing them out to several players in four characters, none of which appear on a standard keyboard." - David Given (?)

    "APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums." - Edsger Dijkstra, 1975

  • Long ago, the Lisp Machines roamed the Earth. They have a pretty wide set of characters available, as well as a swath of bucky bits. When people were writing for just Lispms, they'd use the extra characters sometimes. But then the code couldn't be brought over to other machines easily, or sent to a line printer, or... well, you get the idea. When Common Lisp came out, it defined a "standard character set"... a minimal set of characters that a Lisp implementation must support, and portable programs could

  • by mnmn ( 145599 ) on Monday October 17, 2005 @06:35PM (#13812772) Homepage
    Have a large number of individual characters rather than a few characters that can be combined in many ways?

    Why, you sound like you're in favor of CISC.
  • by Quarters ( 18322 ) on Monday October 17, 2005 @06:47PM (#13812837)
    \r\n, =, !=, etc... make sense to programmers. They understand the language. Just like the design of 32nd, 16th, 8th, 1/4, 1/2, and whole notes, along with extra notation to modify their true length of play and volume, makes sense to musicians. Why waste time and effort to make it readable for the masses when the masses probably don't care? If they did they'd learn to read the language.
  • by sharkey ( 16670 )
    Because nobody wants to be seen typing on the "short" bus with a "special" character set.
  • What might be interesting is if you can have your keyboard switch modes.

    I could put the keyboard into a math-notation mode and the keys would automatically display math symbols in a standardised pattern (like QWERTY is for letters, but for math). Other modes could be added later.

    On Slashdot a few months back there was a keyboard on which the labels on the keys are dynamic. I think that is going in an interesting direction.

    It reminds me of maybe how the computers in Star Trek Next Gen might behave. Where th
  • You may as well ask why we don't use specialized languages for specific tasks, such as using C for pointers, Java for objects, FORTRAN for mathematics, etc. all within the same project, perhaps even in the same source file. Why should the compiler care what language we use? The computer can handle all those languages just as the computer can handle special symbols. Another similar question would be: Why don't we have specialized processors dedicated to certain tasks, like a speech processor, speech recognit
    • It's not the compiler programmers have to worry about.

      Sure, a compiler could in essence sort out a file written in a dozen different programming languages, but imagine a team of developers, all with different programming backgrounds, trying to figure out what each of them coded. Software design would cease to work.

      Software language is like spoken language in general: we all need a set of syntax and grammar rules so we can simply understand each other and effectively communicate. If you write a book using a random a
  • we're still using clunky notation like '<>', '^=', or 'NE' to represent inequality

    Unicode contains characters such as U+2260 (NOT EQUAL TO). Unicode has certainly caught on; all HTML documents use that character set, for instance. So why the need for a special character set?

    Perhaps you are asking why people don't choose to use such characters - I guess it's just ignorance. After all, if somebody who has gone to the trouble of submitting an Ask Slashdot doesn't know about these characters, why woul
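
    If you do want the real sign in C source, C99 universal character names are one route; a sketch (assuming a compiler and terminal using UTF-8, the usual gcc setup):

    #include <stdio.h>

    int main(void)
    {
        /* \u2260 is U+2260 NOT EQUAL TO, written without leaving ASCII;
           with a UTF-8 execution character set it prints as the real sign. */
        printf("a \u2260 b\n");
        return 0;
    }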

  • . . . that it's much faster to write out an equation in LaTeX than using any WYSIWYG editor on earth.

    If there aren't enough buttons on the keyboard for all the symbols you need, it's much easier to represent the new ones as an obvious and transparent combination of the ones you already have than it is to construct complicated schemes to generate new ones.

    Until we experience a revolution in computer interface design, it's going to take at least two keys to create a not-equal-to sign. If people are going to
  • The simple answer is: nothing is really gained by this. Why is English the language of the world? Because it is rather simple. Chinese won't replace English for the same reason. I would never ever consider writing a program with non-ASCII characters, besides perhaps accented characters in strings. The reason is also very simple: have you ever tried to port a program to a different platform? I did it many times. Only the ASCII part survived. Sure, you may create a better standard than ASCII, but if you use i
  • I am using expanded character sets. I've been using "≠" and friends in AppleScript for years. In Scheme, I use "λ" instead of typing "lambda". I use the native2ascii program that comes with the JDK to use Kanji or Esperanto characters in identifiers. I wrote a similar preprocessor to expand "≠" & friends in C source.

    I can't tell you why it hasn't caught on, but there's nothing stopping anyone from doing it today.

    (Although, it seems Slashdot doesn't like those characters.)

  • I disagree (Score:3, Insightful)

    by gurps_npc ( 621217 ) on Tuesday October 18, 2005 @11:31AM (#13817586) Homepage
    Most of you people are listing problems related to keyboards.

    That demonstrates a lack of vision.

    MAKE A NEW KEYBOARD.

    Not that hard to do. Almost all computers have function keys on top. The majority of users DON'T USE THEM.

    Just print up some new keyboards that have single symbols representing the major programmer stuff, such as >=, <=, and so on. To use them, print them above F1, F2, F3, etc., and access them by typing Shift-F1 and so on. Allow them to be overridden by programs that want to override them.

    If Apple did this, it would catch on instantly. In one year, Microsoft would steal the idea.

    • While what you have said is right, I think you're missing part of the keyboard "issue". Many of us know how to touch-type: I learned years ago on a [manual!] typewriter. For us, the number of keystrokes (within reason) is not really a big issue. I will type this post without ever looking at the keyboard; although a Dvorak keyboard might be theoretically "better", the putative benefit is not worth the aggravation of the change for me.

      If someone really wanted to do this, there is not all that much standing

    • Back in 1967 I first used APL. It was an IBM 360 at the Watson Research Center in Yorktown Heights, NY. I connected to it from Belmont, MA, with some kind of dedicated line. The interface was an IBM Selectric typewriter with a special type ball. We either had a special key layout or sticky labels for all the keys. This way we had a WYSIWYG interface: the key looked the same as what was typed. I spent every waking moment programming APL for the next 4 years. It was great. I believe that any problem could be programm
    • And which symbols is it really useful to have symbol keys for across all languages?
      • Re:I disagree (Score:3, Insightful)

        by gurps_npc ( 621217 )
        Our job is not to design a new keyboard for all languages. Non-English speakers already make their own keyboards. But for English speakers, there are a bunch of simple symbols that should definitely go in.

        ...Math...

        Greater than or equal to

        Less than or equal to.

        Not equal to.

        ...Programming...

        New line symbol.

        Is it alphabetically equal to (does not set, only used for asking; equivalent to EQ, could co-opt the wavy equal sign)

        Is it numerically equal to (does not set, only used for asking; equivalent

  • If we had a distinct assignment operator, nobody (?) would have confused = with == inside a C if-statement, and we surely would have saved the cost many times over by now.

    For crying out loud, we coders have contributed a lot to computing(!) At least give us an assignment operator and a new equality operator on the keyboard. It's not as if we aren't sure we're going to need them in a few years!
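
    The slip in question, as a minimal sketch (the variable name is made up; compilers such as gcc with -Wall will warn about the first test):

    #include <stdio.h>

    int main(void)
    {
        int logged_in = 0;

        if (logged_in = 1)         /* assignment: always "true" here, and it clobbers the flag */
            printf("everyone is logged in now\n");

        if (logged_in == 1)        /* the comparison that was actually meant */
            printf("proper comparison\n");

        return 0;
    }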

  • If the compilers were able to process Unicode, and would accept &ne; where we now type !=

    Dang slashdot won't display the &ne; properly!

    Then the text editors could gradually migrate to converting != on input to &ne; in the text and displaying it. For instance, Notepad on Windows will display Unicode properly. Most other editors will too, like Eclipse...

  • Actually, I think it's worse than that. Perl 6 is adding Unicode support in Perl source, so you'd think they'd be able to add support for all sorts of new operators. But from what I can tell from the mailing lists, they are just overloading the hell out of the question mark.

    @list = ? 1 2 3 4 5 ?;
    @list ? grep {$_ % 2}
    ? map {$_ ? 2}
    ? @newlist; # 1, 9, 25
    @list3 = -? @newlist; # -1, -9,