Graphics Software

Sub-Pixel Rendering on CRTs?

rst2003 asks: "Is it possible (in theory) to do sub-pixel rendering (e.g. cleartype) on a CRT monitor using a triangular dot matrix instead of a 1x3-aspect rectangular one? If so, has it been done? I'm fairly sure it's not been done by Microsoft or Adobe, but is it available on X?"
  • by Anonymous Coward
    Flatscreens don't have a defined order either (the majority of LCDs use the same order, but some don't). From http://grc.com/ctwhat.htm:

    Since a few LCD panels have their sub-pixels arranged in B-G-R instead of R-G-B order, any industrial strength delivery of sub-pixel rendering technology will require a user-settable (or operating system readable) option to inform the system's LCD rendering engine whether the sub-pixels are arranged in 'forward' or 'reverse' order.

  • Flatscreens don't have a defined order either

    They don't have a standard order, but any given LCD screen does have a defined order that is consistent across the screen. The problem with a CRT is that any pixel can end up on any old splotch of phosphor in no defined, consistent, or useful order. Most people run their monitors at resolutions so high that multiple pixels actually overlap on the same phosphor patches... which really is like free antialiasing already. :)

  • Isn't this one of the inevitable uses for Display Postscript? You can render the PS description at any point in the image-- you can either do it immediately or you can pipe it over the net to some other display device.

    Sure! But I'm not clear on what happens if you want to do something that DPS doesn't support (like, 3d bump-mapped whatsahooey). You'd either require a protocol extension (losing your device independence and requiring certain hardware) or render it locally and send an image (possibly inefficient, and requires knowing a target resolution, again losing some device-independence).

    On the plus side, the future coming of 200dpi monitors would be a heckuvalot easier to deal with - the monitor would automagically render the higher-resolution fonts and vector graphics, and images would be scaled to an appropriate size instead of limited to some particular pixel count that doesn't match the real world; all without changing a thing on the computer.

  • Is it like super-sampling, where the OS and Video card generate 4 times the information and then downsample and anti-alias?

    Basically, but when pixels have known, ordered components - the red, green, and blue chunks of a color LCD - you can go further and treat those three chunks as having separate positions instead of pretending the pixel is a single solid chunk. Thus, instead of ooo in the middle of a diagonal line you can have .oO, where . is red, o is green, and O is blue (there's a rough sketch of the idea at the end of this comment).

    Why couldn't monitors come with enough RAM to store 2 frames at the highest resolution at 32 bpp? At 1600x1200, that'd be 7.5 MB. 8MB of RAM coupled with a processor that runs fast enough to blend pixels for an 85Hz refresh rate.

    That could be done, sure, but... why? You could put the same ability in the graphics card/chip, and avoid sending 4x redundant data to the monitor. And, you wouldn't have to buy a new monitor to get there - monitors are generally much more expensive than graphics cards.

    Make monitors like printers.

    This is a much more interesting idea, the device-independent sound of it intrigues me... but basically you're integrating the graphics card into the monitor. Now, if you want the latest 3d bump-texture-particle gizmo feature, you have to buy a new monitor instead of just a new video card.

    But I'm off topic.

    Hey, this is slashdot! If you don't have a goatse.cx link, you're about as on-topic as they come here.
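
    To make the .oO idea above concrete, here's a rough sketch of the trick (my own illustration, not any shipping implementation): compute coverage at three times the horizontal resolution, then treat each run of three samples as the red, green and blue brightness of one physical pixel. The channel order is a parameter, since - as noted elsewhere in this discussion - a few panels are BGR rather than RGB.

        # Rough sketch of horizontal sub-pixel rendering, assuming a panel whose
        # sub-pixels sit side by side in a known order ("rgb" or "bgr").
        # Coverage is computed at 3x horizontal resolution; each group of three
        # samples becomes the per-channel brightness of one output pixel.

        def subpixel_line(width, height, slope, order="rgb"):
            """Render a thin diagonal line x = slope * y as rows of (r, g, b) tuples."""
            image = []
            for y in range(height):
                coverage = [0.0] * (width * 3)      # one cell per sub-pixel
                centre = slope * y * 3              # ideal line position in sub-pixel units
                for sx in range(width * 3):
                    # One-sub-pixel-wide triangle filter: nearby sub-pixels get
                    # partial coverage, so the edge position survives.
                    coverage[sx] = max(0.0, 1.0 - abs(sx - centre))
                row = []
                for px in range(width):
                    r, g, b = coverage[px * 3: px * 3 + 3]
                    if order == "bgr":
                        r, b = b, r
                    # Dark line on a light background: more coverage -> darker channel.
                    row.append(tuple(int(255 * (1.0 - c)) for c in (r, g, b)))
                image.append(row)
            return image

        if __name__ == "__main__":
            for row in subpixel_line(width=6, height=4, slope=0.4):
                print(row)

    A real renderer would also filter across neighbouring sub-pixels to tame the colour fringes, but the positional gain is the same.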

  • I'm surprised nobody said this sooner.

    An LCD has distinctly addressable colours within each pixel. There is a red, a green and a blue. Each pixel (unless under screen-expansion or something else stupid) uses one and only one red, one and only one green, one and only one blue. The geometry of these is clearly defined and immutable.

    A CRT on the other hand depends on the angle at which each electron beam penetrates the shadow mask. The position is not clearly defined (just adjust the width or height of your image and tell me otherwise). Each pixel is comprised of multiple phosphors. The shape of the electron beam defines the shape of the pixel. That shape is not very complex either.

    If you see a benefit from subpixel rendering on a CRT, it is probably the colourful antialiasing you are noticing more than anything else, although straight antialiasing would work better.

    There is no super-precise alignment of individual phosphors to the electron gun.

  • Subpixel rendering on LCDs is _already_ part of the Xft library. Set Xft.rgba: rgb in your X resources....
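
    For example, a minimal ~/.Xresources sketch (the exact values are just a guess at a typical setup; rgba can also be bgr, vrgb, vbgr or none depending on your panel):

        Xft.antialias: true
        Xft.hinting:   true
        Xft.rgba:      rgb

    Merge it with xrdb -merge ~/.Xresources and restart your Xft-aware applications.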
  • Every phosphor triad corresponds to a single opening (through which the 3 electron beams pass at different angles at a given instant in time) in the shadow mask. How much more well-defined than that can you get?
  • The long answer is: maybe, but for all intents and purposes, it's not feasible.

    Check out the white papers [microsoft.com] on ClearType filtering. (Some nice pictures/samples, too) It's so simple, you'll think, "Why didn't I think of that?"

    The real tragedy in this is that we didn't think of this before M$. So, being covered by M$ patents, it'll probably never make it into any standard distro of X. :(

    To do sub-pixel (ClearType) rendering, you need some way of addressing the subpixels in your monitor. LCD panels have a nice, well-defined "sub-pixel RGB" location. This is not really true of CRTs using Trinitron or Invar shadow-mask technology (almost every single monitor you'll use). Try taking a magnifying glass to your monitor and look at the "RGB" dots. They're not nearly as well-defined as an LCD's.

    Sony has some special technology for their monitors that uses an octahedral "sub-pixel". You *might* be able to get some sort of sub-pixel rendering going on that, but it would be far more complex than an LCD panel, and would probably yield less pleasant results. But, having never examined a Sony monitor to that degree, I can't tell you if it's possible or not.
  • by dead_penguin ( 31325 ) on Sunday May 13, 2001 @08:48AM (#226559)
    ...Microsoft didn't invent subpixel rendering! Instead, this technique was used on Apple computers 26 years ago...



    I'm not sure about it being done 26 years ago, but I do remember doing this on the Apple //e in the early 80s. In fact, Nibble magazine published some assembly routines that did exactly this, although simply doing some calculations from Applesoft Basic (ugh) and plotting the points using hplot gave you the same effect for lines and shapes.
    Basically, by setting the colour of the pixel you were plotting, you could move it by a third of a column's width on the screen.

    Of course with CRTs of the day and with Apple's hi-res graphics mode, this looked hideous unless you turned the colour all the way down. Text and lines looked smoother, but had freakish colour halos around them. In monochrome, though, it was pretty good. Fast too, if done in assembly (vs. basic!)

    Sigh. Enough reminiscing for today...

  • so that's what ClearType does...

    I've tried it on my laptop a few times, and while it does make the text a lot nicer looking... it also makes black not exactly black.

    It seems kinda like I'm on a messed-up CRT where the different colors don't line up well.

  • I think I know what it is... maybe.

    Is it like super-sampling, where the OS and Video card generate 4 times the information and then downsample and anti-alias?

    Why couldn't monitors come with enough RAM to store 2 frames at the highest resolution at 32 bpp? At 1600x1200, that'd be 7.5 MB. 8MB of RAM coupled with a processor that runs fast enough to blend pixels for an 85Hz refresh rate.

    Actually, this might be a bad idea. Can someone really build a device to antialias a bitmap and actually gain detail? I remember seeing articles in old MacWorlds about software that did just that, but I never saw it beyond paper.

    Make a monitor that has some sort of alternate interface, like IEEE 1394 or USB 2.0, that could be used to transmit a vector language. Make monitors like printers. (Although the pages-per-minute would be nearer to 5100 at an 85 Hz refresh, the resolution is considerably lower!)

    Give these screens the ability to autoswitch input, perform video mixing and such.

    But I'm off topic.
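
    For what it's worth, here's the back-of-the-envelope arithmetic on the frame-buffer idea above (just a sanity check, nothing more):

        # Rough numbers for an in-monitor frame buffer at 1600x1200,
        # 32 bits per pixel, refreshed 85 times per second.
        width, height, bytes_per_pixel, refresh_hz = 1600, 1200, 4, 85

        frame_bytes = width * height * bytes_per_pixel
        print("one frame:  %.1f MiB" % (frame_bytes / 2**20))                  # ~7.3 MiB
        print("two frames: %.1f MiB" % (2 * frame_bytes / 2**20))              # ~14.6 MiB
        print("at 85 Hz:   %.0f MiB/s" % (frame_bytes * refresh_hz / 2**20))   # ~623 MiB/s

    So 8 MB is roughly one frame at that resolution; holding two would take closer to 15 MB, and keeping it fed at 85 Hz is a fair chunk of bandwidth either way.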
  • Ah, but you can't control which triad (or even how many triads) get illuminated via software. All someone has to do is fiddle with the horizontal size or vertical size setting of the monitor, and your pixel to phosphor triad mapping changes.

    On the other hand, with an LCD display, there's a one to one mapping between pixels and RGB triads on the screen.

    P.S. I'm running a WinXP beta on my laptop, and have ClearType turned on... it does look nicer than the standard grayscale antialiasing (IMO), although occasionally I do see color fringes. I also notice that there doesn't seem to be any setting for RGB vs. BGR subpixel ordering (maybe it's some hidden registry setting; I dunno).

  • Anyone know exactly what M$ patented? 'cuz while the Apple II trick did get you more resolution, it's not the same thing as Cleartype. The Apple II video output got color by doing a rather clever trick... for each byte of video RAM, it shifted the lower 7 bits out at twice the NTSC colorburst frequency, and sent that to the composite video output jack. So alternating 1s and 0s would get turned into a square wave that was either 0 or 180 degrees out of phase with the colorburst signal, giving you two colors (I don't know NTSC well enough to say exactly which two colors those would be, but for the purpose of the discussion, I'll say magenta and green). Now if the MSB of the byte of video RAM was set, a small delay would be added before shifting the lower 7 bits out, so alternating 1s and 0s would make a square wave either 90 or 270 degrees out of phase with the colorburst, giving you blue and orange:

    MSB  bits  phase  color
     0    00    n/a   black 1
     0    01      0   magenta
     0    10    180   green
     0    11    n/a   white 1
     1    00    n/a   black 2
     1    01     90   blue
     1    10    270   orange
     1    11    n/a   white 2

    Voilà, color graphics for cheap :) (albeit with some rather annoying limitations)

    Now a side effect of adding the delay for the 90 degree phase shift was that it actually moved the pixels in that byte to the right by half a pixel. On a color TV or monitor, this wasn't particularly noticeable for the colors, but it was noticeable for white. If you had a pixel of white 1 at (5, 0) and a pixel of white 2 at (5, 1), the pixel at (5, 1) was shifted over to the right a bit.

    Some programs used this to good effect when drawing fonts on the graphics screen... a couple of years back, I had beaten this game (Robot Odyssey... pretty tough game, but way cool :) and wanted to make a GIF of the final screen for posterity. I wrote my own program to convert the video RAM dump into a PPM file, and assumed that a 280x192 bitmap would do the trick, since that was the resolution of the Apple II's "hi-res" graphic mode. When I tried it out, the "I"s turned out lopsided, more like a "[". It turns out the stem of the I was shifted over half a pixel, so I had to make a 560x192 picture to accurately capture it (which I then scaled to 560x384 so the aspect ratio would be more square).

    So... while you can get more horizontal res out of the Apple II, it doesn't work the same way as ClearType.
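
    A rough sketch of the conversion described above (my own reconstruction, so the details - in particular the low-bit-first ordering - are an assumption): each hi-res byte carries seven pixel bits, and when its high bit is set the whole group lands half a dot to the right, so a 40-byte row is best expanded into 560 half-dot columns:

        # Expand one row of Apple II hi-res bytes (40 bytes -> 280 dots) into a
        # 560-wide monochrome row of half-dots, preserving the half-pixel shift
        # caused by the high bit of each byte.  Illustrative only; no attempt at
        # the NTSC colour artifacts discussed above.

        def hires_row_to_halfdots(row_bytes):
            half = [0] * 560
            for i, b in enumerate(row_bytes):
                shift = 1 if (b & 0x80) else 0       # high bit delays the byte half a dot
                for j in range(7):                   # bits 0..6 are pixels, low bit leftmost
                    if (b >> j) & 1:
                        x = (i * 7 + j) * 2 + shift  # position in half-dot units
                        half[x] = 1
                        if x + 1 < 560:
                            half[x + 1] = 1          # each dot spans two half-dots
            return half

        # The same bit pattern with and without the high bit set: the second row
        # comes out shifted one half-dot to the right - the lopsided-"I" effect.
        plain   = hires_row_to_halfdots([0x2A] + [0x00] * 39)
        shifted = hires_row_to_halfdots([0xAA] + [0x00] * 39)
        print("".join(map(str, plain[:16])))     # 0011001100110000
        print("".join(map(str, shifted[:16])))   # 0001100110011000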

  • by DevTopics ( 150455 ) on Sunday May 13, 2001 @05:11AM (#226564) Homepage
    This is explained, in depth, at Gibson Research [grc.com]. If you don't get through to the site, it's not because they are slashdotted, but because someone is inflicting a DDoS on them (which has the same impact, though). For this reason, I can't give you the complete address, but it's just a few mouse clicks away (as soon as the attack stops). Yes, it may be amazing, but (to some extent) sub-pixel rendering improves quality on a CRT, too! If you don't believe me (and even if you do), check out the small program offered on the web page above. But notice why it works: it does so because antialiasing and sub-pixel rendering are similar. And contrary to some beliefs, Microsoft didn't invent subpixel rendering! Instead, this technique was used on Apple computers 26 years ago (this has been a topic on /. AFAIR).
  • Nope, CRTs can't do sub-pixel anti-aliasing.

    I have read that it's because - unlike flatscreens - there's no defined order of RGB, so you can't lighten the red of a pixel and know it's on the left.

  • I can't foresee a single slashdotter disagreeing with me when I make such a common sense assertion as "sub pixel antialiasing is a fundamentally stupid concept" and "the pixels suX0rz" and "toast0 collects funny little pixels and tortures them".


  • If I want gray between white and black... won't I actually end up with edge pixels on one side being slightly one color, and the other side being slightly another color?
  • Make monitors like printers.

    This is a much more interesting idea, the device-independent sound of it intrigues me...

    Isn't this one of the inevitable uses for Display Postscript? You can render the PS description at any point in the image-- you can either do it immediately or you can pipe it over the net to some other display device.
