Unix Operating Systems Software

Unicode and the Unix Console?

Phactorial asks: "In its current state, most UNIX consoles (not graphical terminal emulators; mlterm is out for this) that I have dealt with do not handle Unicode properly. This is essential when it comes to dealing with languages that require characters outside the current ASCII set. I was wondering if anyone out there is developing a solution for non-Linux platforms. I know the Arabeyes project is currently working on a project called 'Akka' which provides UTF-8 (kinda) support and even shaping and bidirectional code, essential for many languages in the East; the program works fine and I am working on getting a FreeBSD port out. However, I was pondering: how are other UNIX consoles doing? Do any of them fully support Unicode, even bidirectional characters? Shaping? (A great many of today's UNIX applications lack many if not all of these ;(.) If you know of such applications or are working on support for a platform, could you give feedback as to your experiences and thoughts on the current state of the UNIX console?"
  • by Boiotos ( 139179 )
    RH 8's Gnome 2 terminal can deal with any TrueType Unicode font, even proportionally spaced ones such as the luscious, but now under-wraps, 'Arial Unicode MS'. RH 8's vim is also Unicode-savvy.

    A major improvement for my line of work.
    • Uhh... (Score:5, Insightful)

      by jensend ( 71114 ) on Thursday December 19, 2002 @09:06PM (#4927158)
      From text of question:

      (not graphical terminal emulators, mlterm is out for this)
      I was wondering if anyone out there is developing a solution for non-Linux platforms.


      The answer "Sure, there's this graphical terminal emulator in a recent linux distro!" seems somewhat inappropriate to the question.
    • RH 8.0 handles unicode, but the implementation is awkward and doesn't display everything quite correctly. If you've ever logged into a RH 8.0 machine and run something like man, you'll see garbage for special characters.

      The solution is to change LANG in /etc/sysconfig/i18n from LANG="en_US.UTF-8" (I think that's what it was) to just LANG="en_US".

  • by Mr. Piddle ( 567882 ) on Thursday December 19, 2002 @09:50PM (#4927366)
    Solaris 2.6 supports 56 "locales" and is six or so years old now [sun.com]. Is this what you were asking about? I don't have experience with non-USA locales, but it seems the UNIX people have realized that there are countries outside of North America and have tried to accommodate them.
  • BeOS is Unicode native. I'd expect that FreeBeOS (or whatever it's called) is the same, and I think it's also Unix compatible?
  • Use something else (Score:2, Insightful)

    by dentin ( 2175 )
    (This will be considered flamebait, but someone has to say it.)

    The way I see it, we shouldn't be cluttering a clear, simple and sane interface like the unix console with complexity like unicode. Unix is inherently byte based, and unix terminals are byte based. If it's not a byte, don't put it in a unix terminal.

    This isn't to say that we shouldn't have other mechanisms for supporting foreign languages - but this particular path has been travelled before and it's not pretty. Look at the AS/400 - tables stored in the DB/FS are marked as being in a particular character set, and the OS tries real hard to fix up and convert from set to set as needed. This causes countless problems in the infrequent cases where there is no possible mapping between sets.

    Another way to look at it - why don't we have unicode support for grep? Why aren't all files tagged with an appropriate character set, so we know what they're really supposed to look like? When you 'tail -n 20' a file, how does tail know that those line feeds and carriage returns aren't part of some unicode char?

    In short, unix is byte based. All the unix tools are byte based. If you want to use unicode, build a unicode layer on top of the bytes, but don't screw with the existing stuff that already works perfectly well.
    • In short: Yikes! UNIX is a timesharing system for TTY terminals from 1979*.

      That's a rather depressing outlook. We need to do better. This is supposed to be a discussion about that, not just another 'UNIX is UNIX because it is UNIX' polemic.

      (* Just stating the facts. I connected to a SPARCstation with a VT220 terminal as a serial console just last week -- it was handy and it's cool that it works.)
    • Nonsense (Score:5, Insightful)

      by GCP ( 122438 ) on Friday December 20, 2002 @03:31AM (#4928290)
      Not flamebait, just nonsense.

      Unix isn't byte based, it's text based. Of course one layer deeper, it's byte based, but so is every other OS, and below that it's transistor based, etc.

      What distinguishes Unix from other OSes is an emphasis on working in text with text utilities, often thru windows (telnet clients) on other machines -- windows whose only supported datatype is text.

      In Unix, as in XML, text is sort of considered the ultimate data type. Bytes are just the medium used to represent the text under the surface. If the bytes were what mattered, people would usually work in a hex editor and do hex I/O, but they don't. They work at the text level of abstraction most of the time. It's the text that matters, not the bytes used to digitize it.

      For text to reach its full potential, though, you have to say goodbye to grampa's ASCII and move on to a rich, universal form of text: Unicode. It's ludicrous for someone to say that speakers of non-Western languages should never have the ability to use the full range of Unix the way Westerners can. People who make comments like that are usually unaware of the problems that even English speakers have with single-byte encodings. (The second most powerful currency on earth is the Euro. Where is the Euro sign in Latin-1? Where are the curly quotes used by almost all English-language press? What happens when a press release destined for Time Magazine gets piped thru a series of single-byte Unix utilities? Undefined!)

      XML, another system that considers text the universal data type, is Unicode based. They understand the concept of "universal". Same for HTML now. More and more Web pages are going to UTF-8, even for English, to avoid weird problems with Macs vs. PCs, Euro signs, curly quotes, embedded non-English text, etc. Are such pages really supposed to be out of reach of standard Unix utilities?

      Java and .Net are 100% Unicode. Windows and Macintosh are now all Unicode based.

      IETF and W3C have made it clear that no non-Unicode-based text protocols will be considered from now on.

      Oracle is recommending Unicode as the format for all database text for new databases. So what happens when you cripple Unix so that it can't handle Oracle data in default form?

      AT&T considers Unicode the future of Unix (cf. Plan 9), Sun has made the conversion to full Unicode fundamental to the future of Solaris, and as we speak the Free Standards Organization is preparing to do the same for an upcoming version of the LSB (Linux Standards Base) common core that all major Linux vendors have committed to.

      It's unfortunate that so many Unix users still think that ASCII was good enough for grampa, so it should be good enough for every Unix user on earth from now on, but fortunately those who drive the standards have abandoned that kind of thinking forever.

    • by Meowing ( 241289 )
      You can keep the byte orientation and still have Unicode support. See this. [bell-labs.com]
      • by divbyzero ( 23176 ) on Friday December 20, 2002 @06:21PM (#4932679) Journal

        People who fear that a switch from US-ASCII to UTF-8 will break their existing programs should really read the Bell Labs document linked above, section 2.3 of the Unicode spec [unicode.org], or RFC 2044 [ietf.org]. UTF-8 was designed very carefully to make life extremely easy for people making that exact migration. There are amazingly few circumstances where it even matters that it is variable width. Those people who are suggesting UCS-2, UCS-4, etc. as alternatives in order to solve the nonexistent problem of UTF-8's variable width nature should really take a closer look at it.
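
        To make the "amazingly few circumstances" point concrete, here is a minimal C sketch (the buffer contents are made up; nothing beyond the standard library) of the property that keeps byte-oriented tools working: every byte of a UTF-8 multibyte sequence has the high bit set, so an ASCII delimiter like '\n' can never occur inside another character.

          /* Minimal sketch: ASCII bytes never appear inside UTF-8
           * multibyte sequences (lead bytes are 0xC2-0xF4, continuation
           * bytes 0x80-0xBF), so a plain byte-level search still finds
           * real newlines in multilingual text. */
          #include <stdio.h>
          #include <string.h>

          int main(void) {
              const char *buf = "h\xC3\xA9llo\nw\xC3\xB6rld"; /* "hello\nworld" with accents, in UTF-8 */
              const char *nl = strchr(buf, '\n');             /* byte search, no decoding */
              printf("first line is %d bytes long\n", (int)(nl - buf));
              return 0;
          }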

        • Well... it isn't quite that simple. UTF-8 is a fine compromise, but it has real limitations when compared to a constant-width Unicode encoding like UCS-2 or UCS-4.

          UTF-8 is much better than other MBCS systems because backspace is not O(n) in the length of the string. That's good. That said, UTF-8 is inefficient for multilingual operation. First, many characters that are two bytes in UCS-2 wind up three bytes long in UTF-8. That means that Far East (FE) systems require 50% more memory to do string ops than they would in UCS-2, which is itself not as compact as the individual code pages are for each of the languages. UCS-2 is a better compromise, in that case.

          RAM is cheap, though -- cycles are not. UTF-8 is inefficient:

          (a) backspace through a string still involves repeated calls to back-search functions and
          (b) worse, forward space in logical order through a string requires repeated calls to multistep logical functions.

          Considering the frequency with which strings are searched for tokens, there is a significant performance hit to using UTF-8.
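
          To see the cost being described, here is a rough C sketch (the helper names are hypothetical, not from any particular library) of forward and backward stepping in UTF-8: a few byte tests per character instead of the fixed pointer bump that UCS-2/UCS-4 allow.

            /* Sketch of UTF-8 cursor movement: continuation bytes match
             * 10xxxxxx, so stepping in either direction means skipping
             * them until the next lead byte, rather than p++ / p--. */
            #include <stdio.h>
            #include <string.h>

            static const char *utf8_next(const char *p) {
                p++;                              /* past the lead byte */
                while ((*p & 0xC0) == 0x80) p++;  /* skip continuation bytes */
                return p;
            }

            static const char *utf8_prev(const char *p) {
                p--;                              /* back at least one byte */
                while ((*p & 0xC0) == 0x80) p--;  /* until we reach a lead byte */
                return p;
            }

            int main(void) {
                const char *s = "a\xC3\xA9\xE3\x81\x82z"; /* 'a', e-acute, hiragana 'a', 'z' */
                int n = 0;
                for (const char *p = s; *p != '\0'; p = utf8_next(p))
                    n++;
                printf("%d characters in %d bytes\n", n, (int)strlen(s));
                printf("last character starts at byte %d\n",
                       (int)(utf8_prev(s + strlen(s)) - s));
                return 0;
            }
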
  • Go all the way (Score:3, Insightful)

    by Anonymous Coward on Friday December 20, 2002 @03:27AM (#4928275)
    If you start handling Unicode in files, then you need unicode in file names, because users will try to name them that way. If you allow unicode in file names, then you need to have make understand unicode, because someone will name all their .c files with Cyrillic characters. The shells then need it for completion. Soon you realize that you need the C compiler to understand unicode as well, so that you can have unicode variable names, etc.

    So I think it would be best to bite the bullet and go all the way. It will require some planning -- the gcc folks would have to decide gcc will read unicode in 4.0, and Linus would have to decide linux 3.0 will be in unicode. Then the various distributions will have to come out with "unstables" or "rawhides" or whatever they call them, and slowly beat thousands of little apps each with their own presumptions on the size of a text character into submission.

    Plan 9 is Unicode inside and out. I'm not advocating it over simply improving good old Linux, but it can be examined for lessons and ideas.
    • by Anonymous Coward
      I agree. But which Unicode encoding? UTF-8 is a no-no, in my book -- variable-length encodings suck. I vote for UCS-4 -- space is cheap these days, and there's actually no reason that a byte has to be defined as 8 bits in the C spec -- so why not declare bytes 32-bit, use UCS-4, and be done with it?

    • Re:Go all the way (Score:4, Informative)

      by Samrobb ( 12731 ) on Friday December 20, 2002 @01:22PM (#4930536) Journal

      Already done, at least in part. Take a look at the UTF-8 and Unicode FAQ for Unix/Linux [cam.ac.uk]

      I've seen make work just fine with UTF-8 and other character encodings. You can build gcc with "--enable-c-mbchar" to turn on MBCS support. The kernel would need little or no modification to work properly - take a look at the "How do I have to modify my software?" [cam.ac.uk] and "What is UTF-8?" [cam.ac.uk] entries in the FAQ mentioned above:

      Any Unix-style kernel can do fine with soft conversion and needs only very minor modifications to fully support UTF-8.

      UTF-8 was originally called UTF-FSS (for "UCS transformation format, file system safe").
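
      The "file system safe" part is easy to demonstrate. Here is a minimal sketch (the filename is made up) of why a kernel that treats names as opaque byte strings already copes: '/' (0x2F) and NUL (0x00), the only bytes a Unix pathname lookup interprets, never occur inside a UTF-8 multibyte sequence.

        /* Sketch: a UTF-8 filename passes through open() untouched,
         * because no byte of a multibyte sequence collides with '/'
         * or NUL -- the kernel never needs to decode the name. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            const char *name = "r\xC3\xA9sum\xC3\xA9.txt"; /* "resume" with accents, in UTF-8 */
            int fd = open(name, O_CREAT | O_WRONLY, 0644);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            close(fd);
            puts("created a UTF-8-named file; the kernel treated it as bytes");
            return 0;
        }
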
    • So I think it would be best to bite the bullet and go all the way. It will require some planning -- the gcc folks would have to decide gcc will read unicode in 4.0, and Linus would have to decide linux 3.0 will be in unicode. Then the various distributions will have to come out with "unstables" or "rawhides" or whatever they call them, and slowly beat thousands of little apps each with their own presumptions on the size of a text character into submission.

      gcc is a poor place to start the unicode revolution. A more reasonable starting point would be a --utf flag for the gnu text utils, [ef]grep, etc.

      GCC does support using L"..." to generate Unicode/wide string constants, and has for quite a while; a small example follows.
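
      For instance (a minimal sketch; the 4-byte wchar_t shown is typical of Unix ABIs, not guaranteed by the standard):

        /* Wide string literals: L"..." yields an array of wchar_t.
         * 0xE9 is the code point of e-acute; with a 4-byte wchar_t the
         * string is effectively fixed-width UCS-4 in memory. */
        #include <stdio.h>
        #include <wchar.h>

        int main(void) {
            const wchar_t *w = L"caf\xE9";  /* "cafe" with an accent, as wide characters */
            printf("%d wide characters, wchar_t is %d bytes\n",
                   (int)wcslen(w), (int)sizeof(wchar_t));
            return 0;
        }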

  • by DaphneDiane ( 72889 ) <tg6xin001@sneakemail.com> on Friday December 20, 2002 @04:29AM (#4928416)
    I just tried a test in the standard Terminal in Jaguar and it works. (In case the characters don't display in the post... I tried typing a i u e o in hiragana.)
    bash-2.05a$ echo "AあIいUうEえOお" | perl -ne 'print join(",",map { sprintf("%04X",$_) } unpack("U*",$_))."\n";'
    0041,3042,0049,3044,0055,3046,0045,3048,004F,304A,000A
  • Good luck (Score:4, Funny)

    by sql*kitten ( 1359 ) on Friday December 20, 2002 @05:46AM (#4928555)
    However, I was pondering: how are other UNIX consoles doing? Do any of them fully support Unicode, even bidirectional characters? Shaping? (A great many of today's UNIX applications lack many if not all of these ;(.) If you know of such applications or are working on support for a platform, could you give feedback as to your experiences and thoughts on the current state of the UNIX console?"

    Whoa there, cowboy. Let's work on getting the delete key to work properly before we try any of that fancy stuff! If I never have to type stty erase again, I'll be a happy bunny!
  • I don't want a BSD port. I want my OpenBSD to be native UTF-8, nothing else. Currently it is not locale/NLS aware (which I consider A Good Thing(tm)), but handles eight-bit I/O as if it were iso-8859-1. I want it to change to UTF-8, because more characters (the Euro sign comes to mind) can be handled that way.
    • It's not "as if it was iso-8859-1", it's "byte-value transparent handling of data", and it is a good thing to have -- software not directly involved in displaying data (what is pretty much everything in /usr/bin) should not make assumptions about it other than that it's a sequence of bytes. If UTF-8 will be declared "native" charset, it will have to be enforced/handled in every utility, and those utilities will lose the ability to pass anything else -- even binary data. Want dd to handle blocks? No, can't do that, utilities are not allowed to split multibyte sequences! Want wc to count bytes? Same problem, it will count "characters" instead. And so on -- in the end everything that Unix is built on will have to be ruined just because someone wants to enforce a poorly designed enormous charset that can't be used (or written, or remembered, or designed a complete font) by any single person anyway.

      I'll rather use fonts of charsets that I use, and thank the underlying layer and utilities for not messing up my data by assuming that everything they handle is a "text" (as opposed to "binary" -- hi, DOS idiocy, long time no see!) and that I love a semi-proprietary monstrosity of a charset (Unicode) in a variable-length nightmare of encoding (UTF-8).
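
      For what it's worth, the byte-versus-character tension here is visible right at the libc level. A small sketch (it assumes an installed en_US.UTF-8 locale) showing that both views stay available: strlen() keeps counting bytes no matter what, while mbstowcs() counts characters under the current locale.

        /* Byte count vs. character count: strlen() is locale-blind and
         * counts bytes; mbstowcs() with a NULL destination returns the
         * number of characters under the current locale's encoding. */
        #include <locale.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void) {
            setlocale(LC_CTYPE, "en_US.UTF-8"); /* assumed to be installed */
            const char *s = "na\xC3\xAFve";     /* "naive" with a diaeresis: 6 bytes, 5 chars */
            printf("bytes: %lu, characters: %lu\n",
                   (unsigned long)strlen(s),
                   (unsigned long)mbstowcs(NULL, s, 0));
            return 0;
        }
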
  • Kterm is xterm with double-byte support. It's been available since before Unicode, but you shouldn't have any trouble hacking it to use a Unicode font. http://packages.debian.org/stable/x11/kterm.html
  • Plan 9 is Unicode throughout -- even the C source code.

    http://plan9.bell-labs.com/plan9

    OK OK, it's a graphical OS, but bitmap terminals are hardly hard to come by.

  • Seriously, I have yet to see a person (other than Martin Duerst, who apparently made a career of stuffing Unicode into everything he notices) willingly using Unicode, as opposed to being forced to do so by some software that requires it. The "internationalization" of documents is a straw man -- at this point in history no non-linguistic document contains more than two languages, local charsets handle that perfectly, and linguists have already gone far beyond what Unicode can provide, so they have to use different formats anyway. If and when true internationalization becomes necessary, people will need one simple thing -- language/charset tagging. Tagging is also important because it makes texts "machine-readable" -- programs will know which parts of the text they should interpret using the rules that apply to different languages and charsets, and pass through "as-is" everything in the languages they don't know.

    XML already allows a language attribute on all tags, and if a charset attribute becomes valid everywhere the language attribute is, the problem will go away immediately, without mandatory Unicode adoption everywhere, because everyone who can read a language has a font for the charset used with it, and everyone who doesn't shouldn't have a problem with the occasional

    "can't display this section of text, ( ) download "klingon fixed" font to make it readable, (x) show as block, do not edit, ( ) display/edit in hex".

    Obviously, a console, or any other kind of program, can easily be modified to do that if necessary, and there will be no loss for people who, like myself, simply use their native language + ASCII charset, and switch to all other charsets using the nice xterm font menu.

  • If I remember correctly, the text modes have the font stored in a 2048-byte array, with every character having a byte per line, and 8 lines per character. I don't think there's any way of squeezing more characters into a text mode, unless video card designers come up with some extension.

    So if Linux is made to support Unicode correctly, this will probably only work in framebuffer modes, where it's possible to have as many characters as you want. That would mean a lot of improvement is needed in this area. For example, the rivafb driver and the nVidia X driver would need to be fixed to coexist. Sure, vesafb can be used, but it's painfully slow, and some really old cards don't support it.
  • Can AC post a pithy reply in time?

    I prefer the Unix console, myself.

  • Back then, it was very well publicized, but hardly anyone used it. Unfortunately, I feel we are in the same boat today.

    ac, you fail it.

