Technology

Digital Video Capture and High Frame Rates?

Jeff asks: "So the folks at a place called Conniption Films (great name) developed a camera called the Millisecond Camera which can shoot 12,000 frames of film a second. I read the article and thought 'Hmm, that's neat,' but then realized they were still using an analog process for shooting this high-speed film. Being a geek, not necessarily into the film side of things but curious nonetheless, I wonder: shouldn't a computer be able to do a better job of such a thing? They say the film runs around a spindle going 500 mph (!). Wouldn't that be prone to failure and use a lot of energy? Wouldn't it be more appropriate, easier, and overall cheaper to just hook up a high-res CCD to a beowulf </duck> cluster of 2 GHz+ machines and capture high-speed images that way? Why hasn't it been done yet? Or has it, and I just haven't seen it?" I did a double-take when I first read this question, then got curious and did a little digging. It turns out high frame rates are not exclusive to the analog photography world, and to illustrate my point, I provide this link. It's woefully short on details, including any explanation of why a camera that can record 1M frames per second is limited to a playback of only 103 frames, but the technology is out there. Has anyone seen any other digital cameras with high frame rates? What visual mischief could you aspiring photographers get into with such a camera?
This discussion has been archived. No new comments can be posted.

  • CCDs (Score:2, Informative)

    by Anonymous Coward
    CCDs are often a bit delayed. They're slow. That's it.
  • Slow CCDs (Score:1, Insightful)

    by Anonymous Coward
    Film is still much faster than CCDs. Given enough light, film is faster and better quality.
    • Re:Slow CCDs (Score:2, Insightful)

      Is a CCD the ONLY way to capture light digitally???
      • Re:Slow CCDs (Score:2, Informative)

        by whovian ( 107062 )
        There is another form of image sensor called a CMOS (complementary metal oxide semiconductor) sensor. It is usually used in cheaper devices like webcams or digital cameras under $100, since it's much cheaper to manufacture. As a result, CMOS images are generally worse.

        • This ain't true anymore.
          The flagship prosumer digital SLR from Canon, the D60, uses a Canon-made CMOS sensor which renders incredible pictures.
          Take a look at some sample images on www.dpreview.com [dpreview.com] and see for yourself! Things have changed in the last two years...
          On another note, CMOS sensors tend to be slower. That's probably why their pro SLRs like the 1D use a CCD.
    • Re:Slow CCDs (Score:3, Informative)

      by egomaniac ( 105476 )
      You were probably correct a few years ago, but you need to get with the times.

      There is no film in the world that can outshoot a high-sensitivity CCD nowadays. Cameras like the Kodak 760x can shoot at ISO 6400 with reasonable quality, which film is utterly incapable of matching, and CCDs are only getting better.

      Yes, crappy consumer digicams suck at anything over ISO 100. But a serious professional digicam beats the pants off of film at high ISOs. (In case you were wondering, my wife is a professional photographer who shoots with a Nikon D1X. I do know a bit about this.)
  • Bandwidth (Score:2, Insightful)

    by WPIDalamar ( 122110 )
    The problem is the bandwidth.

    A small, 8-bit color uncompressed movie at 300x300 pixels would require something like 8.6 billion bits per second (300 * 300 * 12000 * 8).

    Now we probably want more resolution and a higher bit depth, so multiply appropriately.

    What are we going to use to transfer that much data around a cluster? Or even just from the camera to the cluster?
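
    Here's that arithmetic as a quick Python sketch (the 300x300 and 8-bit figures are the toy numbers from above):

    # Raw bit rate of an uncompressed stream: pixels * depth * frame rate.
    def raw_bitrate(width, height, bits_per_pixel, fps):
        return width * height * bits_per_pixel * fps

    bps = raw_bitrate(300, 300, 8, 12000)
    print(f"{bps:,} bits/s")               # 8,640,000,000 -> ~8.6 Gbit/s
    print(f"{bps / 8 / 2**30:.2f} GiB/s")  # ~1.01 GiB/s to move, every second
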
    • [8.6 gigabits/second for a tiny 300x300 image .... ] What are we going to use to transfer that much data around a cluster?

      From reading the article, the bandwidth problem was solved by giving each pixel of the camera its own memory. One problem I can see: this is going to eat space on the chip that would normally be used for imaging. If you put too much memory around a pixel, you're going to start suffering in the quality of the image (and they already had to increase the size of each pixel to be able to capture the light fast enough).

      It would seem that they pegged the usable tradeoff at 103 samples per pixel, so that's how many images you can store.

    • Most beowulfs use a gigE interconnect... perhaps have two or three NICs per node, one for the system interconnect, and the other two for connection to the CCD. The CCD module could easily be broken down into a grid of virtual segments, each with its own cache and a gigE interconnect.

      Another option is to wait for 10gigE (along with the rest of the supercomputing world) or go with Myrinet, which has recently broken the 1 gigabit barrier.
      • Another option is to wait for 10gigE (along with the rest of the supercomputing world) or go with Myrinet, which has recently broken the 1 gigabit barrier.

        Recently? Myrinet has been doing 2 gigabits full-duplex since May 2001 when it started using fiber. [myri.com] Not to mention that full link utilization only uses a few percent of the host CPU. What's the point of a fast cluster interconnect when you use half your CPU sending packets through the TCP/IP stack?
        • I haven't been keeping up with Myrinet... until they license their design to other manufacturers and/or drop their prices significantly, I'm not interested. If I wanted to be locked into a product built by only one company, I would have bought a Cray or SGI in the first place.
          • I can definitely appreciate the desire to avoid vendor lock. However Myricom has not placed any artificial barriers in the way to keep people from competing with them. Myrinet is an ANSI standard, [myri.com] just as open as Ethernet. It just happens to be much more expensive to produce hardware and software that performs the way Myrinet does than to make Ethernet NICs and drivers.
  • It's woefully short on details, and the explanations as to why a camera that can record 1M frames per second is limited to a playback of only 103 frames [...]

    Memory problems, I suppose. They say each pixel has its own memory. I guess that getting 1 million frames per second through any kind of bus to any kind of memory is going to be tough. AGP isn't going to cut it. ;)

  • re... (Score:1, Interesting)

    by Anonymous Coward
    You bet. A FOAF (friend of a friend) is working with some folks developing some very cool high-speed cams for all kinds of research. They're using CMOS sensors instead of CCDs. These allow you to capture images as fast as you want (tens of thousands of fps), with a corresponding reduction in resolution.

    If you could get hold of a CMOS image sensor you could probably rig up something similar, but remember those data rates are INCREDIBLY high. Also, that means the length of the shot tends to be fairly short.
  • by MadCow42 ( 243108 ) on Saturday August 24, 2002 @12:43PM (#4133345) Homepage
    The explanation as to why it can only play back 103 frames is QUITE clear... the chip has 103 "on-chip" memory buffers per sensor, and they get cyclically overwritten with the last 103 frames.

    This overcomes the bottleneck of trying to transfer data off the CCD at such high frame rates in real time, but limits you to "downloading" the last 103 frames from the chip after the fact.
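
    In software terms, that's just a per-pixel ring buffer. A minimal Python sketch (the 103-frame depth is the figure from the article; the code itself is invented):

    # Keep only the most recent `depth` samples, overwriting the oldest --
    # the same trick as the 103 on-chip buffers behind each sensor pixel.
    class RingBuffer:
        def __init__(self, depth=103):
            self.buf = [None] * depth
            self.i = 0           # next slot to overwrite
            self.n = 0           # how many slots are filled
        def push(self, sample):
            self.buf[self.i] = sample
            self.i = (self.i + 1) % len(self.buf)
            self.n = min(self.n + 1, len(self.buf))
        def readout(self):
            # The after-the-fact "download": oldest to newest.
            start = (self.i - self.n) % len(self.buf)
            return [self.buf[(start + k) % len(self.buf)] for k in range(self.n)]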

    MadCow.
    • Seems kind of arbitrary -- why would the cycle time have to be exactly one second? C'mon, go easy on Cliff.
    • The way I read it, the CCD stores the 103 frames slightly after each image is taken. Therefore, to me, the 103 frames are simply a caching buffer.

      However, the really strange part is that the article says the playback is actually 10 frames per second, which if true is really sucky playback.

  • Bandwidth (Score:5, Insightful)

    by Rosmo ( 2521 ) on Saturday August 24, 2002 @12:43PM (#4133346)
    A quick calculation on the bandwidth of capturing 12000 SVGA-resolution full color frames per second:

    1024 (width) * 768 (height) * 4 (32-bit color) * 12000 (fps) = 37,748,736,000 bytes/second (~35 GBytes/s)

    So no wonder they use film...
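
    The same sum in code, plus how long a RAM buffer would last at that rate (a sketch; the 16 GiB buffer is an invented figure):

    # Uncompressed 1024x768, 32-bit, 12,000 fps -- and how fast it fills RAM.
    rate = 1024 * 768 * 4 * 12000      # 37,748,736,000 bytes/s
    ram = 16 * 2**30                   # hypothetical 16 GiB capture buffer
    print(f"{rate / 2**30:.1f} GiB/s")          # ~35.2 GiB/s
    print(f"buffer lasts {ram / rate:.2f} s")   # ~0.46 s
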
    • As I pointed out in another post, a good option would be to use several gigE interconnects to connect the CCD module to the many nodes of a beowulf cluster. Besides, you're going to need a cluster to manage that much data anyway.
      • by AlecC ( 512609 )
        No point in going to a cluster -- beowulf or otherwise. You just want to stream straight to disk. If, say, InfiniBand takes off, you could stream straight out of the camera to lots of disks. Don't shove the data into a CPU until you need to process it.
      • With a bandwidth requirement of 35Gbps even GigE would be inadequate. You'd need the newer 10GigE and you would still suffer a performance hit.

        • Note that he said 35 GBytes/s. That's 280 Gbps. Even 10GigE would be woefully inadequate, and even 100GigE wouldn't cut it. You'd need terabit ethernet to do it properly.
      • For 35 Gbytes/sec (280 Gbit), you'd need 280 gigE interconnects, and that's assuming you can perfectly divide up the data amongst them in realtime with no performance hit.
    • They say the film runs around a spindle going 500 mph (!). Wouldn't that be prone to failure and use alot of energy?

      Well, the principle behind the parent post is summarised in the famous quote:

      "Never underestimate the bandwidth of a lorry full of tapes hurtling down the motorway"
    • Re:Bandwidth (Score:2, Informative)

      by agallagh42 ( 301559 )
      That's not so out of reach with today's technology. There's certainly no reason to use a cluster, since it could be done internally with the proper (custom, expensive) hardware. I believe the highest-bandwidth consumer DRAM is PC1066 RDRAM, which has a bandwidth of approx. 3.2GBytes/s per channel. You'd need eleven RDRAM channels to reach 35GBytes/s, and one second of video costs you roughly 3.2GB of RDRAM on each of those channels.

      The number of required channels could be reduced if higher-bandwidth DRAM were used, which I'm sure exists somewhere.

      Yes, it would be frighteningly expensive, but these high speed film cameras aren't exactly cheap either.
      • Hell, that would be just 22 512MB PC1066 RDRAM sticks and 11 pieces of RDRAM interface glue. Total price around $10K -- not much at all. The real problem would be getting someone to design it for you; then again, maybe you could get a graduate EE student to do it as a master's project =) I bet a couple of runs of film would cost at least $10K, not to mention a camera whose spool runs that fast.
    • Re:Bandwidth (Score:4, Insightful)

      by timeOday ( 582209 ) on Saturday August 24, 2002 @01:31PM (#4133511)
      Look at it this way: 12000 / 30 / 60 = 6 2/3, so it would take over 6 1/2 minutes to watch 1 second of video at regular framerate. The events of interest here are likely to last much less than one second. You could fit 0.1 seconds of video into a 32-bit address space. 0.1 seconds doesn't sound like a lot, but it's way more than enough time to watch a bullet pierce a playing card.
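
      The same arithmetic in Python (frame size per the grandparent's 1024x768, 32-bit assumption):

      # Playback slowdown, and whether 0.1 s of capture fits in 4 GiB.
      slowdown = 12000 / 30                    # 400x slower than realtime
      print(f"{slowdown / 60:.2f} min per recorded second")   # ~6.67
      capture = 1024 * 768 * 4 * 12000 * 0.1   # bytes for 0.1 s of video
      print(f"{capture / 2**30:.2f} GiB")      # ~3.52 GiB -- fits in 32 bits
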
    • Several points... (Score:4, Informative)

      by My Third Account ( 78496 ) on Saturday August 24, 2002 @03:15PM (#4133858)
      1024 (width) * 768 (height) * 4 (32-bit color) * 12000 (fps) = 37,748,736,000 bytes/second (~35 GBytes/s)

      Well, for one thing, nobody records at that resolution. As another reply stated, DV is 720x480.

      Another problem with your simple calculation is that video is never stored as 32-bit color. That's totally unrealistic. The common way to store video is not RGB, but YUV. Because of the way the human visual system works, the color components (U,V) are typically stored at 1/4 the resolution of the luminance (Y), meaning that an image of X pixels, which takes 3X samples in RGB, is stored as X + X/4 + X/4 = 1.5X samples in YUV -- half the data.

      More significant, though, is the fact that just about every digital image recording mechanism stores information compressed on the storage media. This is true from consumer digital cameras to DV cameras to the Sony HDTV cameras Lucas used for Star Wars.

      Consider what it means to take 12,000 frames per second. You're probably recording a single nearly-instantaneous event, or getting many images of a very fast event. In the former case, there will be a series of frames before the event in which nothing is going on, and the difference between frames is close to zero, which compresses extremely well with MPEG-style compression. Your data rate could be 1/100th of the uncompressed rate. When the event occurs, the instantaneous data rate goes up, but buffering can absorb this, since the event probably lasts only a few frames.

      In the latter case, recording a fast event at a fast framerate is essentially the same as recording a normal-speed event at normal frame rates. In this domain as well, MPEG-style compression is extremely effective. At worst you would need 1/5th or 1/10th the uncompressed rate, but 1/100th is a pretty reasonable number given current technology.

      The only challenge with realtime compression at this speed, of course, is sufficiently fast hardware. I think it could be done in parallel -- capture several GOPs' worth of data (15-45 frames, perhaps), send it to a compressor, and then switch the buffer output to a new compressor, round-robin style.

      In any case, video is usually stored at rates many factors smaller than the uncompressed rate. So if you change the variables of your equation to a more realistic resolution and color depth, then divide that number by 10 or 100, you'll have a more realistic data rate.

      720(w)*480(h)*1.5(color)*12000(fps) = 6.2GB/s; divide by 10 for compression and you get ~620MB/s (the aggressive 100:1 ratio above would bring it down to ~62MB/s).

      Still too fast, but not completely unrealistic if you've got a healthy budget. ;-)
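
      The revised estimate as code (a sketch; the 10:1 and 100:1 ratios are the guesses from above, not measured numbers):

      # DV-size 4:2:0 frames (1.5 bytes/pixel on average) at 12,000 fps,
      # before and after hypothetical MPEG-style compression.
      raw = 720 * 480 * 1.5 * 12000          # ~6.2e9 bytes/s uncompressed
      for ratio in (10, 100):
          print(f"{ratio:>3}:1 -> {raw / ratio / 1e6:.0f} MB/s")
      #  10:1 -> 622 MB/s
      # 100:1 ->  62 MB/s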

      • Another problem with your simple calculation is that video is never stored as 32-bit color. That's totally unrealistic. The common way to store video is not RGB, but YUV. Because of the way the human visual system works, the color components (U,V) are typically stored at 1/4 the resolution of the luminance (Y), meaning that an image of X pixels, which takes 3X samples in RGB, is stored as X + X/4 + X/4 = 1.5X samples in YUV -- half the data.

        The original idea behind YUV was to enable colour broadcast TV that was compatible with existing monochrome broadcasts.
        It's not uncommon to do high-speed filming in monochrome, so you may need only the Y component.
      • by aibrahim ( 59031 )
        Clearly you are unaware that 720x480 NTSC DV is NOT the ultimate frame size. For one thing, the D1 digital standard is 720x486 for NTSC. Then there's PAL D1, which is 720x576. Of course, since it is likely that a high-speed camera like this will be used for either scientific or film work, you have to consider higher frame sizes as essential.

        HDTV comes in a variety of flavors, but it seems to me that 1920x1080 resolution is becoming the acquisition standard. Frames can be downconverted from this size to D1 or any other HD resolution. This is one possibility, and it seemed to work well for Star Wars Episode 2.

        Film is another beast entirely. It holds a LOT more frame resolution than HD, but even in the most lavish productions we rarely get to see more than 4k resolution. (That is ~4000 pixels by whatever height your aspect ratio requires, typically near 2000 pixels.)

        Next: for the vast majority of professional uses, video is UNCOMPRESSED. Nobody would ever dare use 100:1 compression for any project. DV uses 5:1 compression, and that is BARELY acceptable in a LOT of circumstances.

        Next, color depth. DV is 4:1:1, roughly the equivalent of 17-bit color. Most video is 4:2:2, which is roughly like 24-bit color. Rarely do people deal with 4:4:4 video, but that is roughly like 32-bit color. Video for film projects like Star Wars is most often handled during post at 4:4:4, but by the time we see it, it is 4:2:2 again. Film follows a similar path through production.

        Now you may think that DVD is just great, but from a production standpoint, material originated as DVD-style MPEG-2 is next to useless. The MPEG-2 being considered for acquisition is very different in both data rate and IBP frame composition from what you have on DVD.

        With that little smidgen of knowledge, I think you'll find that the parent post's criticisms are ALMOST entirely unfounded.

        I say almost because nobody uses SVGA for video capture. They either use a D1 or HD frame size and rate. SVGA is thus meaningless, except mathematically, in that it is close to the mean of available resolutions.

        So... what we have here is another Slashdot post by someone who knows little or nothing about the topic they are discussing, rated Informative by others who also know little or nothing about the topic.

        I really wish you people would just keep your fingers still when you don't know what you are talking about. Disinformation (regardless of actual intent) is worse than no information.
        • Mod Parent up... At last someone who knows wtf he's talking about.

          Gigabit interconnect, heh... I work with bandwidth/video/crazy-ass data transfers every day, and I keep reading people saying "transfer your 1024x768x32bits @ whatever framerate over gigabit ethernet", as if it were SOOO much faster -- since they've never touched it, they think it's the solution to world hunger. Sheesh...

          (BTW, saying "32-bit color" is totally lame; it's 24 bits plus 8 bits of alpha you're probably talking about... so how do you plan to get ALPHA information out of a CCD again? X-ray cam? Sheesh.) And transferring something that is over 10 gigaBYTES a second in realtime over a gigabit interconnect... heck, even with a beowulf, even with any of the technologies mentioned, OVERKILL is the keyword here; you're way overbuilding and overspending. Local buffers would cost a lot less: building a few-gigabyte DRAM buffer module would be a lot simpler and cheaper than going crazy interconnecting and streaming data over multiplexed gigabit links with loads of RAID drives to receive the data and remultiplex it and so on. I hope the people coming up with such ideas are not project leaders or R&D directors, because it's like planning to build a road and having the first construction worker say "we need to fill in a lake here, and put a ski resort there, and..."

          ok I need to go to bed :)

      • Ok, how about this addition too...

        You could multiply the number of fps by using a prism and having a different CCD for each color (I think some digicams do this already). Or have a rotating mirror that reflects the image to a different CCD for each frame, for a cycle of, say, 10, and that should get you another 10x capture speed, roughly. Is that even possible? It sounds easy enough. A mirror that rotates in circles isn't too much wear; 15k rpm is easy with a good bearing.

        Doesn't save the bandwidth, though. 600MB/sec seems reasonable with an expensive disk array. Or heck, even get gobs of RAM (it's cheap too!)

        I never said this was gonna be cheap.

      • To be precise, digital video is not stored in the YUV colorspace. YUV is the colorspace used by analog PAL hardware. Digital video is converted from the RGB colorspace (used by CCDs on input and phosphors on output) into the Y'CbCr colorspace, which is almost always incorrectly called "YUV".

        The benefit of converting to this Y'CbCr colorspace is that you can get cheap, easy compression by simply subsampling the 2 color-difference channels, Cb and Cr, and humans will probably not notice.

        Now let's assume that your "healthy budget" means "professional" and not "consumer"... That 1/4 chroma resolution implies a 4:1:1 or 4:2:0 sampling format, which is fine for consumer-level products like DV and DVDs, but any professional-level hardware will use at least 4:2:2, bringing you up to 16bpp.

        Let's try that calculation again:
        720(w)*480(h)*16(color)*12000(fps)= 66.4 gigabits per second.

        Now, for those of you who want to compress this monstrous stream with a beowulf cluster, I'd like you to show me one that can suck a 66.4 Gbps data stream out of a camera. :) Right.

        That leaves us with the only option of compressing in the camera hardware as the previous poster suggested with an array of encoders, each working on 1 GOP. Assuming we use realtime hardware encoders, we'll need about 401 of them: 12000(camera fps)/(30000/1001)(NTSC fps) = 400.4. I'd recommend setting the 401 encoders to produce 5Mbps MPEG-2 streams to achieve a decent quality. That gives us about 2 Gbps of MPEG-2 output.

        BTW, to hold a 30-frame GOP in memory for each encoder while they encode, we'll need almost 8GB of RAM: 720(w)*480(h)*2(bytes)*30(GOP)*401(encoders) = 7.74GB (let's try 8-frame GOPs for 2GB of RAM).

        To store that 2Gbps video stream, a single SCSI Ultra320 channel will do. (not bad)

        Now go build it! :)
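
        Checking those numbers in Python (parameters as in the post above):

        # 4:2:2 at 12,000 fps: raw rate, realtime encoder count, GOP buffers.
        raw_bps = 720 * 480 * 16 * 12000     # 66,355,200,000 bits/s
        encoders = 12000 / (30000 / 1001)    # NTSC-rate encoders needed
        gop_ram = 720 * 480 * 2 * 30 * 401   # 30-frame GOP per encoder, bytes
        print(f"{raw_bps / 1e9:.1f} Gbit/s")   # 66.4
        print(f"{encoders:.1f} encoders")      # 400.4 -> round up to 401
        print(f"{gop_ram / 2**30:.2f} GiB")    # 7.74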

      • 1024 (width) * 768 (height) * 4 (32-bit color) * 12000 (fps) = 37,748,736,000 bytes/second (~35 GBytes/s)
        Well, for one thing, nobody records at that resolution. As another reply stated, DV is 720x480.

        While I agree that most people do not record at this resolution, there are some people who would like to be able to do so. I am in contact with several research projects through work where people use video as a data source. The video is data, just like readings from a sensor or output from a file. Compression is undesirable because it modifies the video signal.

        I think your back-of-the-envelope calculations about bandwidth are correct, however. The groups with whom I work are trying to determine ways to address the massive bandwidth requirements. There is some thought of using ultra-fast networks (fiber out of the camera), distributed storage (for the huge file sizes), and then off-line editing to eliminate/compress the stretches of video that are not interesting. MPEG-2 and MPEG-4 seem to be popular/potential options for compression, and DV is another option for a more COTS solution (as some cameras have FireWire interfaces). I have seen some discussion of DV in the comments already.

        Some of the areas where high-speed video is useful include investigations where physical anomalies need to be induced in real time (physical structures such as architectural loads, specific points on automotive chassis, specific points on aeronautical joints, etc.) and where it is not possible to slow down vibrations or reduce loads so that physical deformations can be analyzed/simulated in real time. Note that this type of work is almost always done in conjunction with computational simulations. It is quite easy to slow down the computational visualizations. However, if you want to compare the simulation to the real object, there has to be a way to slow down the real event. (I guess this is a bit obvious.)

    • Keep in mind that in many 32-bit color systems, only 24 bits of color are actually used (8 bits per RGB channel). They store as 32-bit so that the data stays aligned on word boundaries in memory (greatly increasing efficiency for image processing, but the padding would be unnecessary and unwanted when streaming data off the CCD).

      If I were trying to design something like this, I would use an array of low-resolution CCDs and put some sort of extremely slick real-time hardware-based compression either on the CCD or someplace within a few cm, so I could use an extremely high-speed bus to move the data. Then it's just a matter of keeping the data from the multiple CCDs in sync...
  • CCD (Score:5, Informative)

    by The Moving Shadow ( 603653 ) on Saturday August 24, 2002 @12:44PM (#4133348)
    CCDs simply need a few milliseconds to regain their 0-volt signal level before they can emit a new pulse. This recovery time makes them unsuitable for high-speed filming. Alas.
  • What visual mischief could you aspiring photographers get into with such a camera?

    I have to say it: the obligatory ultra-slo-mo pron!
    Actually, fact is, I've read that the adult industry often drives the need for newer technologies.
    • Actually, fact is, I've read that the adult industry often drives the need for newer technologies.
      They helped the VCR get where it is, that's true. But I think that's about it.
      • They helped ecommerce along too. For-pay porn, adult entertainment, and related products were well-established in the online world before the rest of the world caught up.

        Of course, ecommerce would have happened without them, but they were the trailblazers.

        siri
  • Bandwidth (Score:4, Insightful)

    by MarcoAtWork ( 28889 ) on Saturday August 24, 2002 @12:46PM (#4133356)
    Also consider that most of the time, people who are interested in such frame rates are also *very* interested in having detailed, high-resolution frames of the event at "interesting" times.

    This probably means having to shoot images of around 4-6 megapixels, and I really don't see any way of doing that at the speed needed for this kind of application.

    The only way might be exactly what the poster of the topic didn't grasp: have a camera that can take 100-1000 pictures at a 1Mpics/sec frame rate, store them in ultra-fast local memory, and transfer them out at leisure. With a good triggering setup, 100-1000 microseconds worth of data might just be enough for certain applications.

  • It may shoot at a rate of 12,000 frames per second, but the film is only 120 frames long.
  • To achieve this sort of bandwidth, memory and CCD would have to be on the same die! We would have architectures similar to CPUs, with the CCD as the core and first- and second-level caches storing one or two seconds of film, to be transferred after recording to the outside world -- i.e. the main CPU of the computer, which would transfer the data to disk. Evolving designs would stretch the capacity of this architecture: higher resolution or longer recordings. But be sure of one thing: it will take several years to achieve the same quality you get today from analog devices.
  • I have a Sony DSC-F707. It takes beautiful pictures but only has enough buffer memory for 3 burst pictures. With higher-resolution images (akin to film-level quality) you'd need way more memory and throughput than can be supported with traditional flash memory. An external storage link won't work either (e.g. Bluetooth) because of the throughput necessary to sustain something at the rate discussed. I mean, cameras today can't even do MPEG compression decently.
  • Current image sensor technology simply can't offer the same resolution as film at a rate of 12,000fps. Off the top of my head, the closest image sensor I can think of is from Silicon Imaging [siliconimaging.com]. Their CMOS camera head can do 2056x32 images at 700fps (or so), and to even approach the quality of film, you would need to shoot HD (1920x1080).

    As any /. reader knows however, it's only a matter of time before silicon catches up with whatever it's chasing.

    • Hell, FILM technologies can't offer rates of 12,000 fps: once you expose a piece of film, it takes thousands upon thousands of years before it settles back so that you can take another frame!

      The movie industry, of course, solves this by having tons of pieces of film, and rotating between them. This is, of course, directly applicable to CCDs/digital camera solutions: have a LOT of sensors, and a prism to shift between them all.

      Take, for instance, two of those SI sensors: they'd then be able to do 1400 fps. Take 20 of them, and you've got 14K fps.
  • bus speed? (Score:4, Insightful)

    by n9hmg ( 548792 ) <n9hmg@@@hotmail...com> on Saturday August 24, 2002 @12:49PM (#4133370) Homepage
    At 640x480x24, you're talking 7,372,800 bits per frame. At 1000 frames/second, we'd need to be transferring about 7Gbps. That would be a bit hard to handle. You could cut the rate by dropping colors. At B/W, it'd be pretty manageable, but that's probably not what you want. You probably also want higher resolution. No matter what, you wouldn't be able to swallow the stream for long.

    Oh, and by the way: the confusion about a million frames/second versus 103 was just poor word choice in the article. What they mean is a 1 microsecond shutter speed - 1 microsecond frames with 9707 microsecond gaps. Great stop-action to cut blurring, but manageable transfer rates.
    • What they mean is a 1 microsecond shutter speed - 1 microsecond frames with 9707 microsecond gaps

      Wrong: the camera takes 1 million frames per second (but only for about 103 microseconds), and then it can play back those 103 frames at 10 frames/second.
      (It's great for some applications, but it's obviously not going to do anything useful if you're trying to do a time-lapse sunset :-)

  • 10,000 FPS Camera (Score:5, Interesting)

    by ralfp ( 519069 ) on Saturday August 24, 2002 @12:49PM (#4133373)
    A. El Gamal, et al. published a 10,000fps imager with a 352x288 pixel resolution. This one can maintain full speed indefinitely. Unfortunately it is not a commercial device, but something similar will probably be available within a few years.

    Kleinfelder, S., SukHwan Lim, Xinqiao Liu, and El Gamal, A., "A 10000 frames/s CMOS digital pixel sensor", IEEE Journal of Solid-State Circuits, vol. 36, no. 12, pp. 2049-2059, Dec. 2001.

    The abstract is as follows:
    A 352 x 288 pixel CMOS image sensor chip with per-pixel single-slope ADC and dynamic memory in a standard digital 0.18um CMOS process is described. The chip performs "snapshot" image acquisition, parallel 8-bit A/D conversion, and digital readout at continuous rate of 10000 frames/s or 1 Gpixels/s with power consumption of 50 mW. Each pixel consists of a photogate circuit, a three-stage comparator, and an 8-bit 3T dynamic memory comprising a total of 37 transistors in 9.4x9.4 um with a fill factor of 15%. The photogate quantum efficiency is 13.6%, and the sensor conversion gain is 13.1uV/e. At 1000 frames/s, measured integral nonlinearity is 0.22% over a 1-V range, rms temporal noise with digital CDS is 0.15%, and rms FPN with digital CDS is 0.027%. When operated at low frame rates, on-chip power management circuits permit complete powerdown between each frame conversion and readout. The digitized pixel data is read out over a 64-bit (8-pixel) wide bus operating at 167 MHz, i.e., over 1.33 GB/s. The chip is suitable for general high-speed imaging applications as well as for the implementation of several still and standard video rate applications that benefit from high-speed capture, such as dynamic range enhancement, motion estimation and compensation, and image stabilization.
  • by Anonymous Coward
    Many people seem to think that digital image quality is superior to analog. This is untrue. So for anything you want to analyze by eye, you want analog film. Digital's advantages are computer analysis and reproducibility.

    Now to your question, the primary digital disadvantage is bandwidth.

    Digital images have a very specific size: 1024x768x32 = 3 MegaBytes

    Analog images have virtually unlimited sizes (infinite x infinite x infinite). Some people have tried to estimate the resolution of analog images, and the best they come up with is a vertical and horizontal resolution in the thousands, however this is unreasonable. Analog images are more detailed than that.

    Now bandwidth calculation:
    (size of single frame)(frame rate) = (bandwidth)
    (3MB)(12000) = 36000 MBps

    So, we are looking at processing and storing about 36 gigabytes per second. I say processing because we want to use lossless compression, and handling that framerate would require some very specialized hardware. This cannot be processed or stored in real time on any modern general-purpose computer. It should be possible to build a specialized machine to accomplish it.

    I have discounted limitations in CCD speed, the possibility of using multiple cameras, and high-end hardware I don't know about.

    Conclusions: "Digital" is not the panacea. Visual image analysis should always be done with analog film. Digitization is good for reproducing images, and transporting them intact. A camera that does 12k fps is mostly for image analysis of high velocity and high acceleration objects, for analysis in a lab. There are applications of high speed digital imagery, but I don't know any offhand.

    Finally, using a computer to process the resulting data takes a substantial amount of processing time. So the answer to the question "why not use digital cameras" is "why would you need to?" If you can justify the need, do it. It will, however, require substantial resources which also need to be justified.

    For amateur photography, don't worry about a 12k fps camera, stick with the 30fps DV handicams.

    Torsten
    • vertical and horizontal resolution in the thousands, however this is unreasonable. Analog images are more detailed than that.

      There is a limit; analog isn't infinite as you assert. If it were, you could take a picture and zoom in forever and keep seeing new detail.

      I'm sure there are scientific ways to get a good measure of the resolution on the film, applying something like Nyquist principles to minimize data loss.

      I don't think it would be too complicated either. Just take a picture that contains dots that get progressively smaller and are exactly measured. Other test patterns could be used, like lines that get closer and closer together.

      If you use enough oversampling, then it is possible to say "I have 99.999 percent of the data in that image" and be able to back it up scientifically.

      A hundred pixels per mm or so is often cited for 35mm film. Analog imaging on a different medium with different equipment might need more or fewer pixels per mm. There isn't one analog-digital magic number.

      In the printing industry where I work, most material is printed with a 150 lpi line screen. Our digital images are at 300x300 dpi. On our printing presses, due to ink spreading and things like that, much higher res isn't possible. The printed material still looks pretty good at that low res.

      Anyway, my point is, you can't say analog is infinite... You almost seem like a Luddite who is afraid of digital imaging.

      I have discounted [... the] possibility of using multiple cameras, high-end hardware I don't know about.

      Just because you don't know about it doesn't mean it won't eventually make you obsolete.
  • Limits (Score:3, Informative)

    by russianspy ( 523929 ) on Saturday August 24, 2002 @12:59PM (#4133400)
    There are a lot of limits when it comes to cameras connected to PCs. I've worked in a lab where we used cameras that generated 640x480x4 (32-bit color) frames at 60 Hz. Guess what: you can't even buy a hard drive that can sustain that kind of transfer rate for any period of time. Good thing those computers had about a gig of RAM each ;-)

    There are actually a few limitations. Bandwidth is the most important one: the connection between the camera and the computer. We used special frame grabber boards, FireWire, or USB -- and nothing I know of can handle 12,000 Hz. A somewhat smaller limitation is the bandwidth to memory; at 12,000 Hz that too becomes a factor. And of course, unless you've got about 40 gigs of RAM (at least), you would want to save the stream somewhere. There are video vaults, which are basically RAID arrays, but again, they can't handle this kind of data stream.

    Technology is coming along, though. The new CMOS-based cameras can have fairly high frame rates, and you can actually trade resolution against framerate. Last time I checked, the fastest they could go was about 500fps (at low resolution), the limit again being the link between the camera and the PC. I believe the theoretical limit of the CMOS-type camera is either 5000 or 8000 fps (I don't really remember which -- sorry).
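
    To put numbers on the lab setup (resolution and rate from the post; the 1 GiB figure is the RAM mentioned above):

    # 640x480, 32-bit, 60 Hz: data rate, and how long 1 GiB of RAM lasts.
    rate = 640 * 480 * 4 * 60               # 73,728,000 bytes/s
    print(f"{rate / 2**20:.1f} MiB/s")      # ~70.3 MiB/s, too fast for one disk
    print(f"{2**30 / rate:.1f} s in 1 GiB") # ~14.6 s of footage
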
    • I've worked in a lab where we used cameras that generated 640x480x4 (32-bit color) frames at 60 Hz. Guess what: you can't even buy a hard drive that can sustain that kind of transfer rate for any period of time.

      Sure you can. Get 4 U160 disks and run them in RAID 0 -- instant 80-120MB/s sustained bandwidth. If you want 10kHz, it's a bit different, but 60 Hz is doable.

  • As some have said already... the bandwidth is the problem, but even after that, you still have to write it pretty damn fast... so here is the obvious solution... a cluster... per pixel.

    Rather than just have RAM per pixel, as the article says they did, for digital, have a single computer per pixel. So say you want 4-megapixel full-motion video: then you get 4 million computers, each processing a single pixel. That should be plenty fast enough to get some very high speeds (assuming the CCD can handle it).

    Of course, the problem now is to tie all that data together into a single video... and even then to find a machine that can play something like that, though I suppose you could take the 4 million machines and have each play its data into a single pixel on an LCD.

    But then, why not just use film?
  • by Animats ( 122034 ) on Saturday August 24, 2002 @01:08PM (#4133425) Homepage
    You can rent an 11,000 FPS camera [alangordon.com] right now, for $200 per day. Photek [visiblesolutions.com] makes a camera that can reach 40,000 FPS, although only with 8mm film frame size. Rotating-prism cameras like this have been around since at least the 1940s. The film advances continuously, and a rotating prism synchs the image to the moving film. Typically the synchronization has been mechanical, which means major problems at very high speed. An obvious upgrade to current technology is to feed the film with rollers or air jets rather than sprockets, detect the sprocket holes or some other form of clock track, and synchronize the rotating mirror prism electronically.

    On the pure digital front, there are units that can record 1000 FPS continuous [visiblesolutions.com] at 512 x 512 pixels. The system is data-rate limited. The imager can go much faster; if you cut the image size down to 32x128 pixels, you can get 32K frames/sec. At 128 x 128, you can get 11.2K frames/sec. The data goes into a buffer in the control unit (1 GB, typically), and is read out via FireWire. So this system can take a lot more frames than the device described in the article, which stores the images in memory within the imager and can only store 100 images or so.

    • repost
      Yep, I built an electronic video camera that had megahertz frame rates 8 years ago. I patented it too. Actually, two different designs.
      C.E.M. Strauss, "Synthetic Array Heterodyne Detection: A single Element Detector Acts As An Array", Optics Letters, Vol 19, No. 20, 1609(1994)
      and
      B.J. Cooke, A.E. Galbraith, B.E. Laubscher, C.E.M. Strauss, N.L. Olivias, and A.W. Grubler, "Laser Field Imaging Through Fourier Transform Heterodyne", Proc. of SPIE, 3707, 390-408 (1999)

      The problem with pixelated detectors is reading out the darn pixels fast enough. Normally this is done by some sort of bucket brigade across the CCD, or some sort of serial memory access across a CMOS array -- very slow. And parallel access to an entire array is absurdly complicated and expensive.

      In my approach I solved this problem by multiplexing all of the pixel signals onto a single wire. Each pixel, when activated, creates an oscillatory signal at a unique frequency. All of these are combined on a single-wire output (amplified by a single fast amplifier), and then the AC signal is digitized by a single fast digitizer and streamed to a hard disk. The frame rate is determined by the frequency separation between the pixels, so if the oscillation frequency is a megahertz then a frame can be resolved every microsecond. This process is continuous and can go on for as long as you have disk space.

      The other cool feature is that the chip you do this on is a single-pixel chip, not a pixelated array. The pixels come from painting the chip with a rainbow of light. For a 1-D example, imagine red light on the left edge and blue light on the right. When a reference signal comes in, it beats with the light. The beat frequency that gets output is determined by where (left to right) the incoming beam hit.

      Of course, the good news and the bad news is that this is intended for active remote sensing, where one is illuminating a target with a single-frequency laser. It does not work with ambient light (note: the second article referenced above will work with polychromatic light). The good news is that the detection method is heterodyne detection, which has shot-noise-limited sensitivity even on a crappy photodetector; thus the system is capable of detecting a single photon of light.

      Another cool feature is that one can do Doppler detection with this too, since any frequency shift in the target's reflection shifts the pixel frequency. This could be used, for example, to image blood flowing in veins, find moving objects in noisy scenes (e.g. submarines, airplanes), or any number of flow-imaging concepts. The heterodyne detection means it's sensitive enough to work at very long distances (say, from space), or to image through very dense media (for example, through the side of a vein, or through breast or brain tissue).

      A description of how it works in stilted patent language can be read on line here [uspto.gov]
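
      The demultiplexing idea is easy to sketch in Python: give each pixel its own carrier frequency, sum everything onto one wire, and a single FFT recovers the per-pixel intensities. A toy model (not the patented design; all numbers invented):

      import numpy as np

      # Four "pixels", each tagged with its own carrier, summed on one wire.
      N, fs = 4096, 1_000_000                  # samples, digitizer rate (Hz)
      t = np.arange(N) / fs
      bins = np.array([400, 600, 800, 1000])   # one FFT bin per pixel
      freqs = bins * fs / N                    # carriers landing exactly on bins
      bright = np.array([0.1, 0.9, 0.5, 0.3])  # per-pixel intensities to recover

      wire = (bright[:, None] * np.cos(2 * np.pi * freqs[:, None] * t)).sum(axis=0)

      # One FFT of the single-wire signal separates the pixels again.
      spectrum = np.abs(np.fft.rfft(wire)) * 2 / N
      print(spectrum[bins].round(3))           # [0.1 0.9 0.5 0.3]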

    • Are there any parallels to the FFT? An FFT goes between the frequency and temporal domains of a signal. These cameras allow us to trade off resolution in the spatial domain for resolution in the temporal domain.

      I'm wondering if we couldn't basically reduce the camera to one _very_ fast pixel, and then FFT to retrieve a sequence of high-resolution images. Of course, just one pixel wouldn't work, but I'm throwing that out there as an extreme to illustrate what I lack the vocabulary to express.
  • by mkcmkc ( 197982 ) on Saturday August 24, 2002 @01:09PM (#4133428)
    I was looking into this question a while back, thinking that it would be a fast way to OCR or copy a book without lengthy manual placement or having to cut the binding off the book. The idea is that you could fan the book (i.e., a two-second flip through the pages with your thumb) under such a camera and then postprocess the results to capture the original. It might work, anyway.

    --Mike

    • Been there, done that. Ever seen "Short Circuit"? Johnny 5 did exactly that when studying the encyclopedias. Let's just ask the movie producers how they did that? :)
    • That's a great example of how movies can contaminate one's mind and screw up perceptions of what's possible and impossible.

      Assuming it's done the way most people do it in movies, it's basically not possible. Take a stop-motion picture of the process, and you'll find that the majority of each page is never even visible. You can't read through paper without a damaging amount of light.

      Now, you might be able to page through the book quickly, in a minute or two, because then each page is visible. Lots of processing power and probably some novel techniques would be necessary, but at least the data is present, so it's theoretically possible.
  • Picture an image sensor as a one-inch-square array of pixels. If the frame rate is 30 per second, then 1/30 (3.3%) of the light that falls on the array makes up each frame.

    If the frame rate is 12k/second then only 1/12000 (0.0083%) of the light can be used to make each frame. That means that the CCD must be 400 times more sensitive to light, or you must use a light source that is 400 times brighter, to get the same results.

    And that ignores the fact that solid-state light-to-electricity converters like CCDs have a certain "latency" or "stickiness". Like the after-image the eye sees after a flashbulb, CCDs suffer from after-images, and the brighter the light the worse the problem. Film doesn't have that problem because each frame is exposed on a new "receptor", i.e. a new piece of film.
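
    The exposure arithmetic behind that (frame rates as discussed in the thread):

    # Per-frame exposure shrinks linearly with frame rate, so sensitivity
    # (or lighting) has to make up the difference.
    normal_fps, fast_fps = 30, 12000
    print(f"{1 / normal_fps * 1e3:.1f} ms vs {1 / fast_fps * 1e6:.0f} us")
    print(f"{fast_fps // normal_fps}x more light or sensitivity needed")  # 400x
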
    • You're right in the last part of your analysis: it's the latency that kills CCD detectors. The light sensitivity (i.e., the quantum efficiency) of CCDs is definitely not the problem, though: CCDs are as close to ideal photodetectors as you can get. They capture virtually 100% of the light that falls on them (their QE is >90%: film is usually quoted at 20%).

      This is actually a point in favor of high-speed CCDs: in order to get the same level of contrast, you need about 5 times less light than with a normal high-speed film camera. Remember that the same argument you made about light sensitivity/light levels also applies to film: it would need a light source 400 times brighter as well, as the film is only exposed for a small fraction of the time.

      You might be able to do something cool that mixes film and CCDs: have a "film" made of CCDs that are read out after being exposed to light. This solves the bandwidth problem as well, because you could have multiple systems reading out the data from multiple CCDs -- it's not hard to aggregate GB/s worth of bandwidth from slower sources. The main problem, of course, would be flexible silicon. That'd take some work. :)
  • A high-res CCD could be viewed as a collection of low-res CCDs. So, design a high-res CCD that has multiple output paths. Each output path would go to a separate computer, and the data could then be recombined to reconstruct the original frames.

    The data from a 1280x1024 CCD could be split into 16 320x256 segments.

    Of course someone's got to make the CCD, and I imagine having 16 computers connected to the same CCD poses some interesting problems. But I'm sure it is solvable.
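
    The bookkeeping for the split is trivial; a sketch of the 16-way tiling from above:

    # Carve a 1280x1024 frame into a 4x4 grid of 320x256 tiles, one per
    # output path; recombination just reverses the slicing.
    def tiles(width=1280, height=1024, nx=4, ny=4):
        tw, th = width // nx, height // ny
        return [(x * tw, y * th, tw, th)     # (x0, y0, w, h) per segment
                for y in range(ny) for x in range(nx)]

    segs = tiles()
    print(len(segs), "tiles of", segs[0][2], "x", segs[0][3])  # 16 of 320 x 256
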
  • </duck>? (Score:3, Funny)

    by pivo ( 11957 ) on Saturday August 24, 2002 @01:21PM (#4133470)
    Shouldn't it be <duck/>?
  • You can get around any bandwidth issues with a sufficiently large amount of cabling. The whole idea of doing this in parallel implies that. Anyway, compare the bandwidth of digital photography with the physical bandwidth of looping film through an eyepiece at 12,000 frames per second and you come up with a very different problem -- you've got to use TINY film, with an effective resolution much lower than what some of you linux numbercrunchers are assuming. "SVGA resolutions?" Think more like 320x240 -- and don't expect more than a few seconds per canister, high costs, etc.

    No, the problem is light itself. You don't capture much of it with a shutter speed of .000083 s. With low light, you need extremely sensitive equipment to even detect it, and even more sensitive equipment to detect the subtle variations in wavelength that make up colors. Today's CCD cameras are very slow to register light intensity -- much slower than film. The chemical reaction in film triggered by exposure can be controlled much better, simply by changing the tolerance of the film -- which is why high-end, high-shutter-speed digital cameras are so godawful expensive. The $2500 Canon I've been looking at has roughly the same shutter speed as an equivalent $300 film camera. The extra price is NOT a "coolness" tax... it's for the set of three extremely high-res CCD sensors and the chips capable of processing their information at that speed. My film prof used to say "digital ain't digital"... there's a quality factor in all digital electronics that can be boiled down to the quality of the interpolation, the quality of the ADC, and of the transistors leading up to it.

    CCD kind of sucks, man. For all its glorious promise, the best CCD chipsets aren't all that much better than the wonderful X-10 spycam.
    • Are you sure about this? CCDs have near-perfect QEs, so they capture virtually all of the light that falls on them (as opposed to 20% or so for film, I think). The process for generating the charge is the photoelectric effect, which is basically instant. I think you're really talking about the latency of the CCD rather than its response time -- that is, the amount of time it takes to read out the actual frame and let everything settle back to zero.

      This problem is solvable: after all, film has the same problem, much much worse -- the settling time for film is millions of years (heh)! They solve it by placing a huge run of film on a loop and exposing each piece for a fraction of the time. You could do the exact same thing with a CCD (if you could make flexible silicon, or something like that), and that would solve all of these problems.

      CCD most distinctly does not suck: you can prove this by looking at astrophotography, which is without a doubt one of the hardest photographic problems that exists: extremely low light levels, and moving targets. Astrophotography is completely dominated by CCDs, because the sensitivity is just so much better, so you can get far more light in a shorter time.
  • by Nogami_Saeko ( 466595 ) on Saturday August 24, 2002 @01:24PM (#4133486)
    There's also a different sort of CCD highspeed camera that's used in various types of racing.

    That system uses a single row of pixels which can be scanned at extremely high rates - the picture is built from objects moving in front of the pickup row, rather than the camera actually taking a full-resolution image.

    Sort of a high-tech slit-camera.

    Perhaps not 100% on-topic, but still interesting.

    The other factor when talking about extreme high-speed photography (when people are calculating bandwidth):

    Most really high-speed cameras shoot in black and white afaik.

    If you drop the calculations from 32bpp down to 8bpp for a nice greyscale image, you're starting to get to manageable numbers... Also, adding cheap hardware-based compression (RLE or the like) would reduce the data stream to even more manageable levels.

    You're not going to be able to shoot 6 megapixel pictures that fast, but 320x240 or 640x480 images should be possible at high framerates. I doubt it would replace film, but it might be handy for quick playback without having to get negs developed.

    If you watch the "Bad Boys" DVD (the Will/Martin ver of Bad Boys), they have some very cool high-speed photography of different guns being fired into different objects. They used some sort of kodak high speed imager afaik - around 2000fps.
  • cheaper alternatives (Score:3, Interesting)

    by green pizza ( 159161 ) on Saturday August 24, 2002 @01:26PM (#4133492) Homepage
    There have been many good replies to this thread, though most are talking theory, some experimental at best. A camera module with the ability to capture 12,000 high-resolution frames per second is bound to cost a fortune, and I really doubt there will be much competition for a long time. Perhaps a cheaper alternative would be to purchase several currently-available high-speed CCD/CMOS camera modules and use a series of mirrors and lenses to let the cameras work together in round-robin fashion to achieve a much higher framerate. This would also keep the project from being locked into proprietary hardware -- be it a single interface type, manufacturer, or other monopolistic attribute.
    The idea of "parallel" components is nothing new; we've already seen success with drives, clusters, and even arrays of projectors creating a high-resolution projected wall.
    Just a thought...
  • by small_dick ( 127697 ) on Saturday August 24, 2002 @01:29PM (#4133503)
    Anyone else remember EG&G's high-speed nuke cameras?

    Rapatronic Camera Shots [vce.com]

    • That brings back memories... in 1962 I took a seminar with Dr. Edgerton and, as a matter of fact, he showed us some pictures like those. They look like an abstract painting of a kohlrabi...

      I'm not sure this really counts, however, since each physical camera could only take one picture, so it wasn't really a motion-picture process--to get ten frames, you needed ten cameras. It was really like Muybridge's original technique (recently used for that "bullet-time" sequence in _The Matrix_). I'm sure you could use the same technique with digital cameras and get very high frame rates for very short sequences.
  • Quantum Mechanics (Score:3, Interesting)

    by cperciva ( 102828 ) on Saturday August 24, 2002 @01:34PM (#4133521) Homepage
    Let's do some arithmetic:

    The wavelength of visible light (in a vacuum) is between 4x10^(-7) and 7x10^(-7) m.
    The speed of light is 3x10^8 m/s (in a vacuum). Planck's constant is 6.6x10^(-34) J s.

    Put these together, and a single photon of visible light has an energy of between 2.8x10^(-19) and 5x10^(-19) J.

    Suppose you want to get 24-bit colour. As an absolute minimum, you'll want to be able to detect 4096 photons per colour per pixel per frame. CCDs are typically 50% efficient, which means you need 4096*3*2 incoming photons per pixel. At, say, 1024x1024 pixels and a million frames per second, that means 3*4096*2*1024*1024*1000000 = 2.6x10^16 photons per second, at an average energy of 3.9x10^(-19) J each.

    That's an absolute minimum of 1.0x10^(-2) W of incoming radiation.

    How much light is available? Well, bright sunlight provides approximately 30 W/m^2 of visible light.

    That means that you'd need an aperture roughly 28mm across... which isn't impossible, but is certainly not going to be desirable.

    So how does ultra-fast photography work? They use really bright flashes of light... which is why you don't want to be filmed for more than a fraction of a second at once.
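
    The whole photon budget in one place (constants roughly as above; I've taken a single mid-visible 550 nm photon instead of the 400-700 nm range):

    # 24-bit colour at 1024x1024 and 10^6 fps through a 50%-efficient CCD.
    h, c, lam = 6.6e-34, 3e8, 5.5e-7    # Planck (J s), c (m/s), wavelength (m)
    energy = h * c / lam                # ~3.6e-19 J per photon
    photons = 4096 * 3 * 2 * 1024 * 1024 * 1_000_000   # photons/s needed
    print(f"{photons:.1e} photons/s")   # ~2.6e+16
    print(f"{photons * energy * 1e3:.0f} mW minimum")  # ~9 mW, i.e. ~1.0e-2 W
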
    • Ever seen the inside of a car impact test facility? You've seen this on some commercials, even if you didn't realize what you were looking at. There's a huge bank of lights on during a test. Literally hundreds of lights, each of them on par with the lights used to light stadiums.

      They have to be turned on in sequence, because if you tried to turn them all on at once, the current draw would kill the power grid. This despite the extra-hyper-ultra-industrial strength wiring into the grid.

      You also can't leave them on for more than a few seconds because the heat generated burns them out. (It is actually a challenge to balance these two conflicting priorities: to turn them on quickly, yet slowly.)

      The power draw for a single test is enough that they actively try to minimize the amount of time these lights are on. This despite the fact that electricity is normally so cheap we really don't think about it much. (Think in business terms: it's worth paying someone $50+/hour to actively spend time worrying about how to minimize the time these lights are drawing current.)

      All of this for the ultra-high-speed photography that takes place. I don't recall exactly how bright they said it was in the facility I was in, but I think it blows sunlight away by several times.

      I mention this as an example application where "bright flashes of light" (emphasis mine) aren't practical, so they have to go whole hog. Kinda cool.
      • Ever seen the inside of a car impact test facility? [snip] I mention this as an example application where "bright flashes of light" (emphasis mine) aren't practical, so they have to go whole hog. Kinda cool.

        Yes, there are always going to be exceptions. But you'll note that they don't have people inside those facilities when they have all the lights turned on.

        I guess I should have said "you can't get high quality 10^6 fps video for more than a fraction of a second at a time unless you evacuate the area first".
        • Very true! I literally meant I was just pointing it out as an exception.

          In fact, the most popular question on the tour was, "Can you turn them on for us?" and the answer was basically "Do you want to be blind?" (They had some clever, prepared joke, but I don't recall it well enough.)
    • CCDs (or back-illuminated CCDs, to be specific) are typically 100% efficient - or close enough (90-95 or so). Just search on the Web for "quantum efficiency CCD" - it's a strong function of the energy of the photon, but there are plenty that are virtually 1 around visible wavelengths. Front-illuminated CCDs do have a QE of about 50%, but why would you use one of them?

      Hence the reason that CCDs are way cool compared to film: film is only about 20% efficient, so a CCD needs roughly five times less light than film to capture the same image.

    • Re:Quantum Mechanics (Score:3, Interesting)

      by gerardrj ( 207690 )
      I must admit I'm somewhat lost in your assumptions, but aren't you discounting the possibility of amplifying the signal after capture, and the standard practice of super-cooling CCDs for better performance? How did you arrive at needing 4096 photons/color/frame? I don't see the support for it in your message, and it doesn't seem to be common sense.

      With a high-efficiency CCD cooled to around -300F (liquid nitrogen), you can reliably distinguish a collection well hit by a single photon from one with no photon strikes. So if you were going to do 3-CCD imagery with prism splitting, you would need what? Three photons per pixel? Or do photons split in a prism to 1/3 of their initial energy, so you'd only need one?

      I can't recall the exact show now, but I think TechTV not too long ago did a segment on a very high-speed digital camera that did something like 300,000 frames per second. The thing was pretty small, about the size of two tower-style computer cases.
      • Ok, I'll admit that I'm out by a factor of two on the CCD efficiency thing -- I wasn't sure how good they were, so I googled and took the first number that came up.

        As for the number of photons: 8-bit grayscale usually means 64 (not 256!) distinguishable levels. That is, on the 0-255 scale, you'll usually be able to distinguish between a 100 and a 104. In order to resolve N different levels when you're counting statistically arriving photons, you need N^2 data points, because shot noise scales as the square root of the count -- so "white" would be at least 4096 photons. Because you want that same quality on each of the three colour planes, multiply by three.
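
        To make the statistics concrete, here's a hedged little simulation (numpy; the code and specific numbers are mine, chosen to match the 4096-photon figure): with Poisson-distributed arrivals, the noise at mean count N is sqrt(N), so a 4096-photon full scale supports about 64 separable levels.

        import numpy as np

        FULL_SCALE = 4096                    # photons at "white"
        LEVELS = int(np.sqrt(FULL_SCALE))    # ~64 resolvable levels
        STEP = FULL_SCALE // LEVELS          # 64 photons between adjacent levels

        # Sample two adjacent mid-grey levels and compare the step to shot noise.
        a = np.random.poisson(FULL_SCALE // 2, 100_000)
        b = np.random.poisson(FULL_SCALE // 2 + STEP, 100_000)
        sep = (b.mean() - a.mean()) / a.std()
        print(f"sigma ~ {a.std():.1f} photons, step = {STEP} photons (~{sep:.1f} sigma)")
        # sigma ~ sqrt(2048) ~ 45, so the 64-photon step is only ~1.4 sigma:
        # adjacent levels are just barely distinguishable, as argued above.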
        • Perciva--

          On what grounds would 2^8 levels of data "imply" only 2^6 levels of actual information?

          2^4 systems don't magically degrade to black or white, after all.

          Though I'm a bit rusty on the stats, I do believe the N-through-N^2 process matters only during calibration. Once you've established a given correspondence between inputs and outputs, further samples may share in the results of the previous calibration. I do suspect that a large standard deviation would require greater sampling levels to achieve a given level of accuracy.

          --Dan
        • But really that's not the case. In most video applications green is oversampled, such that it comprises as much as 2/3 of the incoming data. The remaining pixels are then spread across red and blue in decreasing proportion.
          Granted, for scientific purposes they may use equally balanced colour input, i.e. 1:1:1.

          I would also argue that this isn't a statistical issue. If you can detect a one-photon difference across CCD collection wells, then you might only need 255 photons for white (assuming 24-bit colour, i.e. 8 bits per channel). Again, whether you could split one photon across three CCDs, or would need three incoming photons, I don't know. I'd feel ashamed, but given the ongoing debate over wave/particle/etc. theories, I think my ignorance is tolerable.
          Anyway, that seems to mean we'd only need either 255 or 765 photons per image pixel in such a case.

          The limitations are all in the quality of the read-out logic of the CCD and the amplifier and A/D once the information is off-chip.
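
          (For what it's worth, the common Bayer colour-filter arrangement puts green on half the photosites rather than two-thirds. A purely illustrative sketch, with the RGGB layout as my assumption:)

          import numpy as np

          def bayer_mask(h, w):
              """Channel labels for an RGGB Bayer mosaic (assumed layout)."""
              mask = np.empty((h, w), dtype="<U1")
              mask[0::2, 0::2] = "R"
              mask[0::2, 1::2] = "G"
              mask[1::2, 0::2] = "G"
              mask[1::2, 1::2] = "B"
              return mask

          m = bayer_mask(4, 4)
          for c in "RGB":
              print(c, (m == c).mean())    # G -> 0.5, R -> 0.25, B -> 0.25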
  • The problems abound and are covered quite clearly in other posts.
    How I would tackle the problem is to set up a series of CCDs (Foveon's X3 would be my choice), with each pixel element fed directly into a huge RAM cache from which the data could be offloaded to slower storage.
    Since we'd have to deal with a charge time (hence the first C in CCD) of 1 ms, we'd need 1000 CCDs, each with their own data cache and so on.
    Then comes the problem of making an image: since we'd be dealing with 1000 CCDs, we'd have to figure out how to place each pixel so that when that pixel's series fires, we capture an image that looks like any other series' image. So this is what you'd be dealing with:
    DATA RATE = 3 * (W*H) * 1000 * Time Duration
    For giggles:
    3*(1024*768) * 1000 * 1s = 2,359,296,000 (about 2.4 GB per second at one byte per sample).[1]
    Thank you, I'd like a stiff drink now, and film looks mighty good.
    [1] If I didn't fuck up my math...
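
    A toy sketch of the staggered-trigger bookkeeping (my framing of the idea above, not a worked design): with 1000 sensors each limited to a 1 ms cycle, firing sensor k on microsecond k interleaves to a million frames per second.

    from itertools import islice

    N_SENSORS = 1000
    BASE_RATE = 1000                # frames/s each sensor sustains (1 ms cycle)

    def trigger_times(duration_s):
        """Yield (time_s, sensor_index) pairs for an interleaved capture."""
        interval = 1.0 / (N_SENSORS * BASE_RATE)     # one microsecond per frame
        total = int(duration_s * N_SENSORS * BASE_RATE)
        for i in range(total):
            yield i * interval, i % N_SENSORS

    for t, k in islice(trigger_times(1.0), 4):       # 10^6 interleaved fps
        print(f"t = {t * 1e6:3.0f} us -> sensor {k}")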
    • That's only 2.4 GB/s: that's not that bad - you can get memory nowadays that can sustain that. You're also talking about 1M frames per second (1000 CCDs, each firing 1000 times/second): if you move to a more manageable, say, 12K fps (as in the post), you're only talking about 30 MB/s or so per CCD, which is just plain slow. Granted, you'd need a thousand of them, but it's just money.

      It'd be expensive, yes, but over time it'd pay for itself in the added sensitivity (5-fold), the recurring film cost it avoids, and the absence of many moving pieces. From what I've been reading around here, it looks like there are several companies already working on it.
    • I did fuck up my math, forgot to multiply by the color depth.

      So

      DATA RATE = 3 * (W*H) * 1000 * Time Duration

      Should be:

      DATA RATE = 3 * CDepth * (W*H) * 1000 * Time Duration

      For giggles:
      3 * 8 * (1024*768) * 1000 * 1s = 18,874,368,000 bits per second (2,359,296,000 bytes per second, roughly 18.9 Gbps)

      Ouchie.
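
      For anyone who wants to sanity-check the corrected formula, a small helper (mine, not the poster's):

      def data_rate_bits(w, h, fps, bits_per_sample=8, channels=3):
          """Raw bandwidth in bits/s for uncompressed RGB capture."""
          return channels * bits_per_sample * w * h * fps

      per_ccd = data_rate_bits(1024, 768, 1_000)        # one CCD at 1000 fps
      aggregate = data_rate_bits(1024, 768, 1_000_000)  # 1000 CCDs interleaved
      print(f"per CCD:   {per_ccd / 8e9:.2f} GB/s")     # ~2.36 GB/s
      print(f"aggregate: {aggregate / 8e12:.2f} TB/s")  # ~2.36 TB/s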
  • by mr_burns ( 13129 ) on Saturday August 24, 2002 @04:12PM (#4133987)
    From an artistic point of view, the problem isn't which medium to develop...it's how to improve both technologies such that cost/energy/latency is not too different. I should have the freedom to choose the technology which best serves the intent of the piece free from those constraints. It could be film, it could be video. It really depends on how I want it to turn out in the end.

    So more substance, less rant: here's how I think these technologies would be useful to end users, and thus what we should be thinking about here.

    Video Tap: A major video breakthrough in the feature-film-making process was Jerry Lewis's video tap. This puts a prism or split-field diopter between the lens and the film plane, splitting the image in two: one path goes to the film plane, the other to a video camera. This is how a director is able to get immediate feedback on how the scene went (instead of waiting for the dailies the next night). A high-framerate video tap for high-framerate film would be extremely handy. The quality wouldn't have to be great; it would just need enough fidelity to tell the director and cinematographer how well composed the take was, and to make sure everything that's supposed to be in the take is there... and nothing else (like a boom mic).

    Internet/NLE: This also would help in modern, internetworked digital non-linear processes, where takes are digitized as they are shot (if not already captured in DV) and dropped via a 3-point edit into the timeline of a non-linear edit suite (Avid, Cinelerra, Final Cut Pro) whose project files are shared in an internetworked data store (film crews on other ends of the world, and the CG shop, are instantly able to see their shot in the context of the other units' shots... in realtime). Even with a film process, the tap could digitize the footage and insert it into the timeline; the print of the footage could later be scanned and conformed to the timeline. Very handy. So this ties into the throughput problem: you have to consider that the bottleneck isn't CCD voltage intervals, cache tomfoolery, or writing to a non-volatile medium. It could be a crappy ADSL connection or a satellite uplink set up by people who scarcely understand how that stuff works.

    Noise and heat: One of the banes of film making, and one of the big advantages of digital video, is the noise that all those ratchet/crank/shutter-type mechanisms in a camera create. A lot of the sound work in a film is dealing with the noise from the camera; sometimes the sound is recorded later, after discarding the sound from the set wholesale. Now, in order for a CMOS imager to be effective at these speeds, we'll need to keep it cool. Heat is more likely to degrade throughput than buffer speed or size. Hence, we're going to need to build hardware to cool the CMOS. That hardware is likely going to be more exotic than the CMOS itself, take more energy than the motor for a high-speed film device, and potentially create a lot of noise on its own. So the advantages of the high-speed DV cam over film are only possible if the apparatus that supports the camera doesn't reintroduce the same problems on an equal or greater scale than existed with film.

    Personally, I feel that the single greatest and most useful application of this technology, from a creative standpoint is the high speed video tap. It would liberate crews from the burden of dailies and integrate high speed footage into modern production processes.

    For non-creative uses (scientific/research), this technology can free users from the latency and toxic chemistry of film-processing infrastructure.
  • by Rui del-Negro ( 531098 ) on Saturday August 24, 2002 @04:47PM (#4134064) Homepage
    All image capture is analog. It can be electronic (CCDs) or chemical (film), but there's always an element that "charges up" as it's hit by photons.

    Compared to film, CCDs are extremely low-res (top quality 35 mm film has resolution equivalent to a 50 megapixel CCD) but, more importantly, they're slow. At very short exposure times, CCDs have so much noise that the final result is useless. The problem isn't the transfer rate, it's the time the CCDs take to "charge up" to meaningful values.

    There is one alternative: use very large CCDs. The larger the CCD, the more light hits it, and the faster it can charge. But larger CCDs are more expensive and require special lenses.

    Recording directly to digital does have one big advantage: you don't have to pay for the film. But the CCDs simply aren't up to film quality yet (and probably won't be for another 5 years or so). So the solution is simple: shoot on film, then digitise it.

    RMN
    ~~~
  • There is an Astronomy project called Ultracam [shef.ac.uk] that uses special CCDs to capture astronomical events at highspeed.
    Interestingly, the computer interface they use for the special CCDs runs Linux.

    I am sure that you can get an idea of what is involved from Ultracam and use it in other real-world applications (patents notwithstanding).
  • by Snuffub ( 173401 ) on Saturday August 24, 2002 @05:09PM (#4134116) Homepage
    I can think of one good reason why this isn't a good idea. As still digital cameras push the limits of how many megapixels they can fit into an image, some professionals are noticing an interesting problem: at a certain point, adding more resolution to a camera actually decreases the image quality. That's because when you decrease the size of the sensor element which records the data for a pixel, you decrease the amount of light that will hit that element.

    This means that the signal-to-noise ratio for each sensor element goes down. At 3 megapixels you won't see any degradation from this, but as the resolution increases you're going to see more and more noise and less accurate images. Now, this is for a camera taking photos at ISO 50 with around a 1/50th-of-a-second exposure. If you want a camera that takes 12,000 images per second, each image has to be captured in about 1/36,000th of a second (allowing for the shutter being open only part of each frame), or 720 times faster, so 1/720th of the light will hit each pixel. In order to maintain accuracy at that speed, you're going to have to drop the resolution to around 4,000 pixels per image (100x40?).
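
    A quick sketch of that light-budget arithmetic (the numbers are the poster's; the 1/36,000 s figure presumably allows for the shutter being open only part of each frame):

    BASE_EXPOSURE = 1 / 50      # s, the ISO-50-ish reference exposure
    FAST_EXPOSURE = 1 / 36000   # s, per-frame exposure at 12,000 fps
    BASE_PIXELS = 3_000_000     # 3-megapixel reference sensor

    light_ratio = BASE_EXPOSURE / FAST_EXPOSURE    # 720x less light per frame
    same_snr_pixels = BASE_PIXELS / light_ratio    # pixels you can keep
    print(f"{light_ratio:.0f}x less light -> ~{same_snr_pixels:.0f} pixels "
          f"(about 100 x 40) for the same photons per pixel")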
  • %s/Slashdot/google/g

    Search for high framerate digital cameras. Let the poor slashdotters get back to writing code.
  • Stupid write up (Score:4, Insightful)

    by Apotsy ( 84148 ) on Saturday August 24, 2002 @10:17PM (#4135035)
    The submitter of the article says "being a geek" he figured there just had to be a non-mechanical solution. Thing is, his definition of "geek" isn't exactly all that worldly. Truly talented engineering types are also mechanically inclined.

    Mechanically inclined.

    When was the last time you even heard that phrase? We live in a physical world. A mechanical device is a perfectly acceptable solution to a problem. Not everything needs to be done with software. Just look at the guy's level of disappointment. "But there has to be a way to do this with electronics! Electronics are always better than mechanics, aren't they? It's impossible for mechanics to do something electronics cannot, isn't it? Hello?"

    And Cliff's additional writeup is no help either. The reason the video in the example he found can only be played back at 103fps is fully explained in the link he provides (which he apparently didn't bother to read). Also, the 12,000fps film camera that got everyone talking in the first place is not the first of its kind. High-speed film cameras have been around for decades. The real kicker is Cliff's silly statement at the end, which makes it sound as though an electronic high-speed camera would be the first high-speed camera ever. He says, "What visual mischief could you aspiring photographers get into with such a camera?" Gee, I dunno Cliff, how about the exact same things people have been doing with high-speed film cameras for the past 50 years, eh?

    Sheesh. The world goes beyond the bits in a CPU. Turn off the computer and take a look around at the tangible, physical world.

  • Over at Micron Technology [micron.com] in the imaging department [micron.com] we are working on the next generation of digital imaging sensors. All the processes are CMOS based instead of CCD.
    Of particular interest for high frame rates might be the MI-MV13 [micron.com], the Micron Imaging - Machine Vision 1.3-megapixel CMOS digital image sensor. This particular sensor can do 500 fps at the full 1.3 megapixels, but can also be windowed to do, for example, 4,000 fps at 1280 x 128.
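
    The interesting thing about that windowing trade-off is that pixel throughput stays roughly constant (assuming the full frame is 1280 x 1024, which the 1.3-megapixel figure suggests):

    # (width, height, fps) figures from the post above.
    modes = [(1280, 1024, 500), (1280, 128, 4000)]
    for w, h, fps in modes:
        print(f"{w}x{h} @ {fps} fps -> {w * h * fps / 1e6:.0f} Mpixel/s")
    # Both land near 655 Mpixel/s: readout bandwidth, not optics, sets the
    # limit, so a narrower window buys a proportionally higher frame rate.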

  • Bandwidth.
    Pixel Depth.
    Image Dimensions.

    Bitch, bitch, bitch, bitch, bitch. To quote someone who dearly needs to be heeded in this case (Denis Leary), "Shut the fuck up, NEXT!" I've heard enough crap; why don't we just call up Nikon and ask them for one of their explosive-imaging cameras? If I remember my Guinness Book of World Records, that unit is a digital camera performing in the MILLIONS of frames per second! 12,000? Feh!

    Gee, how about a simple Google search, even? Let's try that, shall we (since the Guinness world record site SUCKS!):

    • "Fastest Camera" search [google.com]
      First 3 links are about the same camera! A half-million dollars, 200 million frames per second.

    • "Ultrahigh-speed Imaging" search [google.com]
      Grab the .PDF [aip.org] in that first hit -- it's from "The Industrial Physicist", and has some nice info on a "gated still-video camera." A quote:

      • "Multisensor, ultrahigh-speed electronic imaging systems (such as that shown in Figure 1) are capable of recording sequences of discrete images at frame rates of up to 100 million pictures per second. They incorporate compact, intensified charge coupled device (CCD) modules that exhibit virtually no geometric distortion or intensity variation and provide the user with digital images that can be analyzed using a personal computer."

      Oh, one other thing: The article is from December 1997 when Pentium IIs were hot stuff, and you counted yourself lucky to have 64 MB of RAM and a 9.1 GB F/W SCSI-2 hard drive!
    Another point I quickly found is that high-speed (million-plus FPS) imagery has been around since the late 80s. Most of it is digital. (Imagine that.) You can thank the US military for funding that.
