Sub-Pixel Rendering on CRTs?
rst2003 asks: "Is it possible (in theory) to do sub-pixel rendering (e.g. cleartype) on a CRT monitor using a triangular dot matrix instead of a 1x3-aspect rectangular one? If so, has it been done? I'm fairly sure it's not been done by Microsoft or Adobe, but is it available on X?"
Re:Fustus Postus Maximus (Score:1)
Since a few LCD panels have their sub-pixels arranged in B-G-R instead of R-G-B order, any industrial strength delivery of sub-pixel rendering technology will require a user-settable (or operating system readable) option to inform the system's LCD rendering engine whether the sub-pixels are arranged in 'forward' or 'reverse' order.
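A minimal sketch of what such an option might look like (the function and parameter names here are made up for illustration; it assumes glyph coverage already sampled at three times the horizontal resolution):

```python
# Hypothetical sketch: packing a 3x-horizontal-resolution coverage buffer
# into pixels, honouring a user-settable sub-pixel order.
def pack_subpixels(coverage_row, order="RGB"):
    """coverage_row: floats in 0..1, length = 3 * pixel width."""
    pixels = []
    for x in range(0, len(coverage_row), 3):
        triple = coverage_row[x:x + 3]
        if order == "BGR":          # reversed panels need the channels swapped
            triple = triple[::-1]
        r, g, b = (int(255 * c) for c in triple)
        pixels.append((r, g, b))
    return pixels
```

On a BGR panel, lighting the leftmost stripe means driving the blue channel, which is exactly the swap the option would control.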
Yes but no (Score:1)
Flatscreens don't have a defined order either
They don't have a standard order, but any given LCD screen does have a defined order that is consistent across the screen. The problem with a CRT is that any pixel can end up on any old splotch of phosphor in no defined, consistent, or useful order. Most people run their monitors at resolutions so high that multiple pixels actually overlap on the same phosphor patches... which really is like free antialiasing already. :)
Re:What is sub-pixel antialiasing? (Score:1)
Sure! But I'm not clear on what happens if you want to do something that DPS doesn't support (like, 3d bump-mapped whatsahooey). You'd either require a protocol extension (losing your device independence and requiring certain hardware) or render it locally and send an image (possibly inefficient, and requires knowing a target resolution, again losing some device-independence).
On the plus side, the future coming of 200dpi monitors would be a heckuvalot easier to deal with - the monitor would automagically render the higher-resolution fonts and vector graphics, and images would be scaled to an appropriate size instead of limited to some particular pixel count that doesn't match the real world; all without changing a thing on the computer.
Re:What is sub-pixel antialiasing? (Score:2)
Is it like super-sampling, where the OS and Video card generate 4 times the information and then downsample and anti-alias?
Basically, but when pixels have (known, ordered) components to them such as in color LCD displays - red, green, and blue chunks - you can go farther and treat those three chunks as having separate positions instead of pretending the pixel is a single solid chunk. Thus instead of ooo in the middle of a diagonal line you can have .oO where . is red, o is green, and O is blue.
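A rough sketch of that idea, assuming an RGB-ordered panel and black text on a white background (all names are illustrative, not from any real rendering engine):

```python
# Sketch: sub-pixel rendering treats each pixel's R, G, B stripes as three
# separately addressable horizontal positions on an RGB-ordered LCD.
def subpixel_row(coverage):
    """coverage: glyph coverage sampled at 3x horizontal resolution (0..1).
    Returns (r, g, b) tuples for black text on a white background."""
    row = []
    for x in range(0, len(coverage), 3):
        r, g, b = (int(255 * (1 - c)) for c in coverage[x:x + 3])
        row.append((r, g, b))
    return row

# A hard edge covering only the leftmost third of a pixel darkens
# just the red stripe, giving a third-of-a-pixel edge position:
subpixel_row([1.0, 0.0, 0.0])
```

(Real implementations also filter across neighbouring stripes to tame the color fringing mentioned elsewhere in this thread; that step is omitted here.)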
Why couldn't monitors come with enough RAM to store 2 frames at the highest resolution at 32 bpp? At 1600x1200, that'd be 7.5 MB. 8MB of RAM coupled with a processor that runs fast enough to blend pixels for an 85Hz refresh rate.
That could be done, sure, but... why? You could put the same ability in the graphics card/chip, and avoid sending 4x redundant data to the monitor. And, you wouldn't have to buy a new monitor to get there - monitors are much more expensive than graphics cards in general.
Make monitors like printers.
This is a much more interesting idea, the device-independent sound of it intrigues me... but basically you're integrating the graphics card into the monitor. Now, if you want the latest 3d bump-texture-particle gizmo feature, you have to buy a new monitor instead of just a new video card.
But I'm off topic.
Hey, this is slashdot! If you don't have a goatse.cx link, you're about as on-topic as they come here.
It cannot be done on a CRT, period. (Score:2)
I'm surprised nobody said this sooner.
An LCD has distinctly addressable colours within each pixel. There is a red, a green and a blue. Each pixel (unless under screen-expansion or something else stupid) uses one and only one red, one and only one green, one and only one blue. The geometry of these is clearly defined and immutable.
A CRT, on the other hand, depends on the angle at which each electron beam passes through the shadow mask. The position is not clearly defined (just adjust the width or height of your image and tell me otherwise). Each pixel is composed of multiple phosphor dots, and the shape of the electron beam defines the shape of the pixel; that shape is not very precise either.
If you see a benefit from subpixel rendering on a CRT, it is probably the colourful antialiasing you are noticing more than anything else, although straight antialiasing would work better.
There is no super-precise alignment of individual phosphors to the electron gun.
Re:Short answer is no.... (Score:1)
Re:Short answer is no.... (Score:2)
Short answer is no.... (Score:2)
Check out the white papers [microsoft.com] on ClearType filtering. (Some nice pictures/samples, too) It's so simple, you'll think, "Why didn't I think of that?"
The real tragedy in this is that we didn't think of this before M$. So, being covered by M$ patents, it'll probably never make it into any standard distro of X.
To do sub-pixel (cleartype) rendering, you need some way of addressing the subpixels in your monitor. LCD panels have a nice, well-defined "sub-pixel RGB" location. This is not really true in CRTs using Trinitron or Invar shadow mask technology (almost every single monitor you'll use). Try taking a magnifying glass to your monitor, and look at the "RGB" dots. They're not nearly as well-defined as an LCD's.
Sony has some special technology for their monitors that uses an octahedral "sub-pixel". You *might* be able to get some sort of sub-pixel rendering going on that, but it would be far more complex than an LCD panel, and would probably yield less pleasant results. But, having never examined a Sony monitor to that degree, I can't tell you if it's possible or not.
Re:How subpixel rendering works... (Score:3)
I'm not sure about it being done 26 years ago, but I do remember doing this on the Apple II.
Basically, by setting the colour of the pixel you were plotting, you could move it by a third of a column's width on the screen.
Of course, with CRTs of the day and with Apple's hi-res graphics mode, this looked hideous unless you turned the colour all the way down. Text and lines looked smoother, but had freakish colour halos around them. In monochrome, though, it was pretty good. Fast too, if done in assembly (vs. BASIC!)
Sigh. Enough reminiscing for today...
Re:Fustus Postus Maximus (Score:1)
I've tried it on my laptop a few times, and while it does make the text a lot nicer looking... it also makes black not exactly black.
It seems kinda like I'm on a messed-up CRT where the different colors don't line up well.
What is sub-pixel antialiasing? (Score:1)
Is it like super-sampling, where the OS and Video card generate 4 times the information and then downsample and anti-alias?
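Plain super-sampling, for comparison, works roughly like this: render at double resolution in each axis, then average each 2x2 block down to one output pixel (a box-filter sketch; the names are illustrative):

```python
# Sketch of ordinary 4x super-sampling: the scene is rendered at 2x
# resolution in each axis, then each 2x2 block is averaged down to
# one output pixel.
def downsample_2x2(img):
    """img: 2D list of grayscale values rendered at 2x resolution."""
    out = []
    for y in range(0, len(img), 2):
        row = []
        for x in range(0, len(img[y]), 2):
            block = (img[y][x] + img[y][x + 1] +
                     img[y + 1][x] + img[y + 1][x + 1])
            row.append(block // 4)   # box filter: average of four samples
        out.append(row)
    return out
```

Sub-pixel rendering goes one step further by keeping the extra horizontal samples and steering them into the R, G, and B stripes instead of averaging them away.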
Why couldn't monitors come with enough RAM to store 2 frames at the highest resolution at 32 bpp? At 1600x1200, that'd be 7.5 MB. 8MB of RAM coupled with a processor that runs fast enough to blend pixels for an 85Hz refresh rate.
Actually, this might be a bad idea. Can someone really build a device to antialias a bitmap and actually gain detail? I remember seeing articles in old MacWorld issues about software that did just that, but I never saw it beyond paper.
Make a monitor that has some sort of alternate interface, like IEEE 1394 or USB 2.0, that could be used to transmit a vector language. Make monitors like printers. (Although the pages per minute would be nearer to 5100 at an 85 Hz refresh, the resolution is considerably less!)
Give these screens the ability to autoswitch input, perform video mixing and such.
But I'm off topic.
Re:Short answer is no.... (Score:1)
On the other hand, with an LCD display, there's a one to one mapping between pixels and RGB triads on the screen.
P.S. I'm running a WinXP beta on my laptop, and have ClearType turned on... it does look nicer than the standard grayscale antialiasing (IMO), although occasionally I do see color fringes. I also notice that there doesn't seem to be any setting for RGB vs. BGR subpixel ordering (maybe it's some hidden registry setting; I dunno).
Re:Short answer is no.... (Score:2)
MSB  bits  phase (deg)  color
---  ----  -----------  -------
 0    00       n/a      black 1
 0    01        0       magenta
 0    10       180      green
 0    11       n/a      white 1
 1    00       n/a      black 2
 1    01        90      blue
 1    10       270      orange
 1    11       n/a      white 2
Voilà, color graphics for cheap :) (albeit with some rather annoying limitations)
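That mapping, written out as a lookup (a sketch of my reading of the table above; the names are just labels, not anything from Apple's documentation):

```python
# The Apple II hi-res color mapping sketched above:
# (palette MSB, bit pair) -> color name.
HIRES_COLORS = {
    (0, "00"): "black 1", (0, "01"): "magenta",
    (0, "10"): "green",   (0, "11"): "white 1",
    (1, "00"): "black 2", (1, "01"): "blue",
    (1, "10"): "orange",  (1, "11"): "white 2",
}

def hires_color(msb, bits):
    """Look up the displayed color for a palette bit and pixel bit pair."""
    return HIRES_COLORS[(msb, bits)]
```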
Now a side effect of adding the delay for the 90 degree phase shift was that it actually moved the pixels in that byte to the right by half a pixel. On a color TV or monitor, this wasn't particularly noticeable for the colors, but it was noticeable for white. If you had a pixel of white 1 at (5, 0) and a pixel of white 2 at (5, 1), the pixel at (5, 1) was shifted over to the right a bit.
Some programs used this to good effect when drawing fonts on the graphics screen... a couple of years back, I had beaten this game (Robot Odyssey... pretty tough game, but way cool :) and wanted to make a GIF of the final screen for posterity. I wrote my own program to convert the video RAM dump into a PPM file, and assumed that a 280x192 bitmap would do the trick, since that was the resolution of the Apple II's "hi-res" graphic mode. When I tried it out, the "I"s turned out lopsided, more like a "[". It turns out the stem of the I was shifted over half a pixel, so I had to make a 560x192 picture to accurately capture it (which I then scaled to 560x384 so the aspect ratio would be more square).
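The half-pixel shift described above can be sketched like this (my own illustrative helper, assuming one hi-res byte holds 7 pixels in its low bits and the high bit is the palette/phase bit; not the program from the story):

```python
# Sketch of the half-pixel shift: each hi-res byte holds 7 pixels; its
# high bit delays the whole byte by half a pixel, so an accurate
# rendering needs 2 columns per pixel (560 wide, not 280).
def expand_byte(byte):
    """Return half-pixel columns for one byte: 14 of them, or 15 when
    the palette bit adds a half-pixel lead."""
    shift = (byte >> 7) & 1          # palette bit: 90-degree phase delay
    cols = [0] * shift               # shifted bytes start half a pixel later
    for bit in range(7):             # bits 0..6 are pixels, left to right
        pixel = (byte >> bit) & 1
        cols.extend([pixel, pixel])  # each pixel spans two half-pixel columns
    return cols
```

Rendering with one column per pixel is what makes the stem of an "I" drawn with the shifted palette land lopsided, as in the story above.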
So... while you can get more horizontal res out of the Apple II, it doesn't work the same way as ClearType.
How subpixel rendering works... (Score:4)
Fustus Postus Maximus (Score:1)
I have read that it's because - unlike flatscreens - there's no defined order of RGB so you can't lighten the red of a pixel and know it's the left.
Re:Fustus Postus Maximus (Score:1)
doesn't it cause funny colors? (Score:1)
Re:What is sub-pixel antialiasing? (Score:1)
This is a much more interesting idea, the device-independent sound of it intrigues me...
Isn't this one of the inevitable uses for Display PostScript? You can render the PS description at any point in the imaging chain; you can either do it immediately, or pipe it over the net to some other display device.