Unicode and the Unix Console?
Phactorial asks: "In its current state, most UNIX consoles (not graphical terminal emulators; mlterm is out for this) that I have dealt with do not handle Unicode properly. This is essential when dealing with languages that require characters outside the current ASCII set. I was wondering if anyone out there is developing a solution for non-Linux platforms. I know the Arabeyes project is currently working on a project called 'Akka', which provides UTF-8 (kinda) support and even shaping and bidirectional code, essential for many Eastern languages; the program works fine and I am working on getting a FreeBSD port out. However, I was pondering: how are other UNIX consoles doing? Do any of them fully support Unicode, even bidirectional characters? Shaping? (A great many of today's UNIX applications lack many if not all of these ;(.) If you know of such applications or are working on support for a platform, could you give feedback on your experiences and thoughts on the current state of the UNIX console?"
RH 8.0, out of the box (Score:2, Informative)
A major improvement for my line of work.
Uhh... (Score:5, Insightful)
(not graphical terminal emulators, mlterm is out for this)
I was wondering if anyone out there is developing a solution for non-Linux platforms.
The answer "Sure, there's this graphical terminal emulator in a recent Linux distro!" seems somewhat inappropriate to the question.
Re:RH 8.0, out of the box (Score:1)
The solution is to set
It's been around quite a while... (Score:3, Interesting)
Re:It's been around quite a while... (Score:1)
hehe, that's not so strange, since the vast majority of UNIX people live outside of the USA anyway :)
Re:It's been around quite a while... (Score:1)
Isn't (Score:1)
Use something else (Score:2, Insightful)
The way I see it, we shouldn't be cluttering a clear, simple and sane interface like the unix console with complexity like unicode. Unix is inherently byte based, and unix terminals are byte based. If it's not a byte, don't put it in a unix terminal.
This isn't to say that we shouldn't have other mechanisms for supporting foreign languages - but this particular path has been travelled before and it's not pretty. Look at the AS/400 - tables stored in the DB/FS are marked as being in a particular character set, and the OS tries real hard to fix up and convert from set to set as needed. This causes countless problems in the infrequent cases where there is no possible mapping between sets.
Another way to look at it - why don't we have unicode support for grep? Why aren't all files tagged with an appropriate character set, so we know what they're really supposed to look like? When you 'tail -n 20' a file, how does tail know that those line feeds and carriage returns aren't part of some unicode char?
In short, unix is byte based. All the unix tools are byte based. If you want to use unicode, build a unicode layer on top of the bytes, but don't screw with the existing stuff that already works perfectly well.
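It's worth noting that UTF-8 answers the `tail` question above directly: no byte of a UTF-8 multibyte sequence falls in the ASCII range, so a 0x0A byte is always a genuine line feed. A minimal sketch (Python here purely for illustration; the property itself is part of the UTF-8 design):

```python
# In UTF-8, lead and continuation bytes of multibyte sequences are all
# >= 0x80, so ASCII delimiters like "\n" can never appear inside an
# encoded character.
text = "naïve\ncafé\n€uro\n"
data = text.encode("utf-8")

# Byte-level line splitting (what tail effectively does) yields exactly
# the same lines as character-level splitting:
byte_lines = data.split(b"\n")
char_lines = [line.encode("utf-8") for line in text.split("\n")]
assert byte_lines == char_lines

# And no byte of any multibyte sequence is below 0x80:
assert all(b >= 0x80 for b in "é€".encode("utf-8"))
```

This is exactly why byte-based tools like grep and tail keep working on UTF-8 data without modification, as long as the patterns they search for are themselves well-formed UTF-8.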
Re:Use something else (Score:3, Insightful)
That's a rather depressing outlook. We need to do better. This is supposed to be a discussion about that, not just another 'UNIX is UNIX because it is UNIX' polemic.
(* Just stating the facts. I connected to a SparcStation with a VT220 terminal as a serial console just last week -- it was handy, and it's cool that it works.)
Nonsense (Score:5, Insightful)
Unix isn't byte based, it's text based. Of course one layer deeper, it's byte based, but so is every other OS, and below that it's transistor based, etc.
What distinguishes Unix from other OSes is an emphasis on working in text with text utilities, often thru windows (telnet clients) on other machines -- windows whose only supported datatype is text.
In Unix, as in XML, text is sort of considered the ultimate data type. Bytes are just the medium used to represent the text under the surface. If the bytes were what mattered, people would usually work in a hex editor and do hex I/O, but they don't. They work at the text level of abstraction most of the time. It's the text that matters, not the bytes used to digitize it.
For text to reach its full potential, though, you have to say goodbye to grampa's ASCII and move on to a rich, universal form of text: Unicode. It's ludicrous for someone to say that speakers of non-Western languages should never have the ability to use the full range of Unix the way Westerners can. People who make comments like that are usually unaware of the problems that even English speakers have with single-byte encodings. (The second most powerful currency on earth is the Euro. Where is the Euro sign in Latin-1? Where are the curly quotes used by almost all English-language press? What happens when a press release destined for Time Magazine gets piped thru a series of single-byte Unix utilities? Undefined!)
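The Euro example is easy to check concretely. A small sketch (Python used purely for illustration):

```python
# U+20AC EURO SIGN exists in Unicode but has no Latin-1 code point;
# UTF-8 encodes it as a three-byte sequence.
euro = "\u20ac"
assert euro.encode("utf-8") == b"\xe2\x82\xac"
try:
    euro.encode("latin-1")
except UnicodeEncodeError:
    pass  # as expected: no Euro sign in ISO 8859-1
else:
    raise AssertionError("Latin-1 should not encode the Euro sign")
```

ISO 8859-15 later added the Euro sign, but plain Latin-1 has no code point for it, which is the poster's point.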
XML, another system that considers text the universal data type, is Unicode based. They understand the concept of "universal". Same for HTML now. More and more Web pages are going to UTF-8, even for English, to avoid weird problems with Macs vs. PCs, Euro signs, curly quotes, embedded non-English text, etc. Are such pages really supposed to be out of reach of standard Unix utilities?
Java and
IETF and W3C have made it clear that no non-Unicode-based text protocols will be considered from now on.
Oracle is recommending Unicode as the format for all database text for new databases. So what happens when you cripple Unix so that it can't handle Oracle data in default form?
AT&T considers Unicode the future of Unix (cf. Plan 9), Sun has made the conversion to full Unicode fundamental to the future of Solaris, and as we speak the Free Standards Group is preparing to do the same for an upcoming version of the LSB (Linux Standard Base) common core that all major Linux vendors have committed to.
It's unfortunate that so many Unix users still think that ASCII was good enough for grampa, so it should be good enough for every Unix user on earth from now on, but fortunately those who drive the standards have abandoned that kind of thinking forever.
Re:Nonsense (Score:2)
May I offer this guide to all things unicode:
Unicode terms, FAQs, and mistakes [jbrowse.com]?
It helps clear up confusion between things like 'character sets' and 'encodings' and 'code points'.
Re:Nonsense (Score:1, Interesting)
The most common example for me: In Unix consoles that do not support Unicode, I can't (easily) move between directories that were created with Unicode characters on an OS that supports it. Typically the Unicode characters are converted to unprintable, or at least, untypeable, characters.
Some programmers forget that the point of the program is to serve the user, not some idiotic notion of what the underlying implementation should be.
Also, those mods that gave dentin [slashdot.org] an Insightful might want to look at his other (recent) brilliant reasons why Unicode should not be supported:
The Internet will be broken up into cliques because some people don't know how to type an umlaut, and therefore won't have access to a site they can't read anyway [slashdot.org]
Forcing programmers to support languages that cannot use ASCII is unfair to computer science and all those programmers who have spent years investing in ASCII [slashdot.org]
"In a hundred years, there will be a global language anyway - if anything we should be vehemently refusing to pointlessly break perfectly good code to support local quirks" [slashdot.org]
It's the future, but not without its pains (Score:1)
Re:Use something else (Score:3, Interesting)
Re:Use something else (Score:4, Informative)
People who fear that a switch from US-ASCII to UTF-8 will break their existing programs should really read the Bell Labs document linked above, section 2.3 of the Unicode spec [unicode.org], or RFC 2044 [ietf.org]. UTF-8 was designed very carefully to make life extremely easy for people making that exact migration. There are amazingly few circumstances where it even matters that it is variable width. Those people who are suggesting UCS-2, UCS-4, etc. as alternatives, in order to solve the nonexistent problem of UTF-8's variable-width nature, should really take a closer look at it.
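For a concrete sense of why the migration is gentle, here is a sketch (Python, illustrative only) of two of the design properties those documents describe:

```python
# Property 1: ASCII text is already valid UTF-8, byte for byte.
assert "hello, world".encode("utf-8") == b"hello, world"

# Property 2: UTF-8 is self-synchronizing. Lead bytes have high bits
# 0 or 11, continuation bytes have high bits 10, so from any byte
# offset you can find the start of the current character locally:
def sync_to_char_start(buf: bytes, i: int) -> int:
    """Back up from offset i to the start of the enclosing character."""
    while i > 0 and (buf[i] & 0xC0) == 0x80:
        i -= 1
    return i

b = "a€b".encode("utf-8")              # 0x61, 0xE2 0x82 0xAC, 0x62
assert sync_to_char_start(b, 2) == 1   # mid-Euro -> start of the Euro
assert sync_to_char_start(b, 4) == 4   # 'b' is already a char start
```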
Re:Use something else (Score:2)
UTF-8 is much better than other MBCS systems because backspace is not O(n) in the length of the string. That's good. That said, UTF-8 is inefficient for multilingual operation. First, many characters that are two bytes in UCS-2 wind up three bytes long in UTF-8. That means Far East (FE) systems require 50% more memory to do string ops than they would in UCS-2, which is itself not as compact as the individual code pages are for each of those languages. UCS-2 is the better compromise in that case.
RAM is cheap, though -- cycles are not. UTF-8 is inefficient:
(a) backspace through a string still involves repeated calls to back-search functions and
(b) worse, forward space in logical order through a string requires repeated calls to multistep logical functions.
Considering the frequency with which strings are searched for tokens, there is a significant performance hit to using UTF-8.
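For reference, stepping through well-formed UTF-8 in either direction is bounded work per character -- at most three extra byte reads -- though the parent's point stands that it is still more work than fixed-width indexing. A sketch (Python, illustrative only):

```python
# Forward step: the lead byte alone tells you the sequence length
# (assumes a valid lead byte, not a continuation byte).
def utf8_len(lead: int) -> int:
    if lead < 0x80:
        return 1    # 0xxxxxxx  ASCII
    if lead < 0xE0:
        return 2    # 110xxxxx
    if lead < 0xF0:
        return 3    # 1110xxxx
    return 4        # 11110xxx

# Backward step: skip at most three 10xxxxxx continuation bytes.
def prev_char(buf: bytes, i: int) -> int:
    i -= 1
    while (buf[i] & 0xC0) == 0x80:
        i -= 1
    return i

b = "aé€".encode("utf-8")              # 1 + 2 + 3 = 6 bytes
assert utf8_len(b[0]) == 1             # 'a'
assert utf8_len(b[1]) == 2             # 'é'
assert utf8_len(b[3]) == 3             # '€'
assert prev_char(b, len(b)) == 3       # back from end -> start of '€'
assert prev_char(b, 3) == 1            # back again -> start of 'é'
```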
Go all the way (Score:3, Insightful)
So I think it would be best to bite the bullet and go all the way. It will require some planning -- the gcc folks would have to decide that gcc will read Unicode in 4.0, and Linus would have to decide that Linux 3.0 will be in Unicode. Then the various distributions would have to come out with their "unstables" or "rawhides" or whatever they call them, and slowly beat thousands of little apps, each with its own presumptions about the size of a text character, into submission.
Plan 9 is Unicode inside and out. I'm not advocating it over simply improving good old Linux, but it can be examined for lessons and ideas.
Re:Go all the way (Score:1)
Re:Go all the way (Score:4, Informative)
Already done, at least in part. Take a look at the UTF-8 and Unicode FAQ for Unix/Linux [cam.ac.uk]
I've seen make work just fine with UTF-8 and other character encodings. You can build gcc with "--enable-c-mbchar" to turn on MBCS support. The kernel would need little or no modification to work properly -- take a look at the "How do I have to modify my software?" [cam.ac.uk] and "What is UTF-8?" [cam.ac.uk] entries in the FAQ mentioned above.
Re:Go all the way (Score:1)
gcc is a poor place to start the Unicode revolution. A more reasonable starting point would be a --utf flag for the GNU text utils, [ef]grep, etc.
GCC does support using L".." to generate unicode/wide string constants, and has for quite a while.
Looks like OS X does... (Score:4, Informative)
Re:Looks like OS X does... (Score:2)
Re:Looks like OS X does... (Score:1)
Good luck (Score:4, Funny)
Whoa there, cowboy. Let's work on getting the delete key to work properly before we try any of that fancy stuff! If I never have to type stty erase again, I'll be a happy bunny!
Re:Good luck (Score:1)
Giggle. That's the second best thing I've read on
Port? Nah, base! (Score:2)
I want my OpenBSD to be natively utf-8, nothing else. Currently it is not locale/NLS aware (which I consider A Good Thing(tm)), but it handles eight-bit I/O as if it were iso-8859-1. I want that to change to utf-8, because more characters can be handled that way.
Re:Port? Nah, base! (Score:2)
I'd rather use fonts for the charsets I actually use, and thank the underlying layer and utilities for not messing up my data by assuming that everything they handle is "text" (as opposed to "binary" -- hi, DOS idiocy, long time no see!) and that I love a semi-proprietary monstrosity of a charset (Unicode) in a variable-length nightmare of an encoding (UTF-8).
kterm (Score:2)
xterm isn't console (but does utf8 on its own) (Score:2)
Re:xterm isn't console (but does utf8 on its own) (Score:2)
plan9 - unicode through and through (Score:2)
http://plan9.bell-labs.com/plan9
okok, it's a graphical OS, but bitmap terminals are hardly hard to come by.
Unicode sucks, no one uses it (Score:2)
Seriously, I have yet to see a person (other than Martin Duerst, who apparently made a career of stuffing Unicode into everything he notices) willingly using Unicode, as opposed to being forced into it by some software that requires it. The "internationalization" of documents is a strawman -- at this point in history no non-linguistic document contains more than two languages, local charsets handle that perfectly, and linguists have already gone far beyond what Unicode can provide, so they have to use different formats anyway. If and when true internationalization becomes necessary, people will need one simple thing -- language/charset tagging. Tagging is also important because it makes texts "machine-readable": programs will know which parts of the text they should interpret using the rules that apply to particular languages and charsets, and pass "as is" everything in the languages they don't know.
XML already allows a language attribute in all tags, and if a charset attribute becomes valid everywhere the language attribute is, the problem will go away immediately, and without mandatory Unicode adoption everywhere, because everyone who can read a language has a font for the charset used with it, and everyone who doesn't shouldn't have a problem with the occasional
"can't display this section of text, ( ) download "klingon fixed" font to make it readable, (x) show as block, do not edit, ( ) display/edit in hex".
Obviously, a console or any other kind of program can easily be modified to do that if necessary, and there will be no loss for people who, like myself, simply use their native language + ASCII charset and switch to all other charsets using the nice xterm font menu.
Re:Unicode sucks, no one uses it (Score:3, Interesting)
Well, you could start by looking at everybody who wrote that software you mention.
People who write that software never use their "internationalization" -- they see it as a "feature" to add to the list of marketing checkboxes.
Then add everybody who has to deal with more than just Ameri^H^H^H^H^HEnglish text on a day-to-day basis.
That will be me -- and I hate Unicode.
You probably took too small a survey, then. People in my lab write them every day. We write mostly in English (sometimes German), and refer to people, locations, and events in a dozen European countries. Using some pre-Unicode technique, like "codepages", would be a nightmare.
Almost all European languages, including English, are in the single iso8859-1 charset -- which happens to coincide with the beginning of the Unicode table. People who use iso8859-1 can "switch to Unicode" and continue using just the same thing with wider characters, getting no benefit whatsoever while pretending to have "internationalized" their software. For everyone else, Unicode causes nothing but trouble, wasted resources, and incompatibilities.
As for "code pages", that is a DOS/Windows kludge that is a dumb idea in its own way -- everyone else uses _charsets_, and those can be displayed easily in pretty much everything. The only problem is that no one has bothered to make a usable (that means: not XML) tagged format that can include information about the languages and charsets used in a document. MIME has charset information for the parts of a document, and for substrings in the header but not substrings in the body, so it isn't really usable either; however, it can serve as a proof of viability -- most mail clients have it all implemented, therefore metainformation with charsets can easily be used.
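The MIME point is easy to see with any mail library. A sketch using Python's stdlib email package (Python chosen purely for illustration):

```python
# MIME labels each text part with its charset, which is exactly the
# kind of metainformation the parent is asking for.
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("Grüße aus Wien", charset="iso-8859-1")

# The charset travels in the Content-Type header...
assert "iso-8859-1" in msg["Content-Type"]
# ...and the payload decodes back with it:
assert msg.get_content().strip() == "Grüße aus Wien"
```

The charset label travels with the part, so the receiving program never has to guess -- which is the kind of tagging the poster wants for substrings too.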
Re:Unicode sucks, no one uses it (Score:3, Insightful)
ISO 8859-1 is "West European". A quick web search seems to indicate it covers about half of the European languages.
And this is who in Europe actually "uses" it.
As a software developer, it pays off in spades, because I don't have to answer any questions about languages: if they're in Unicode, they'll always be there.
At the expense of crippling the software.
No more wondering how to get this language to display in that web browser; it just works.
Software is not supposed to "wonder how it will look in a web browser"; it is supposed to operate on data. Most operations can be absolutely byte-value-transparent, and those must never depend on charsets and languages in the first place. The operations that do depend on them usually have to either use tags with metainformation (and there Unicode is no better than anything else) or do some horrible guesswork -- say, deciding which language is used in a certain chunk of text (which simply should never be done in the first place, and which Unicode accomplishes nothing for anyway, because characters are shared between languages). Displaying pretty characters is easy -- so easy that one should never have to think about it. But a computer is not a typewriter, and therefore charsets and encodings should never be designed to slightly simplify the simple task of display while turning any complex text processing into a complete hell.
If I want a character, I can look up the bytes; if I know the bytes, I can look up the character.
How can a computer LOOK UP the bytes? There is nothing BUT bytes in the computer's memory, so it certainly can't look them up. You can use the bytes as an index into a font, or you can pass them to some processing routine.
No more "incompatibilities" than you'd get from bugs in any other library on your system.
There are no bugs in the font renderers already -- that is a non-issue. The problem is entirely in forcing people to make a huge number of assumptions about the data's content in otherwise byte-value-transparent operations, just to accommodate UTF-8 where it should not matter. It's a design issue, not an implementation one.
Wouldn't a framebuffer be needed? (Score:1)
So if Linux is made to support Unicode correctly, this will probably only work in framebuffer modes, where it's possible to have as many characters as you want. That would mean a lot of improvement is needed in this area; for example, the rivafb driver and the nVidia X driver would need to be fixed to coexist. Sure, vesafb can be used, but it's painfully slow, and some really old cards don't support it.
This is the event horizon... (Score:1)
I prefer the Unix console, myself.
I researched UNICODE several years ago... (Score:2)
ac, you fail it.