Digital Video Capture and High Frame Rates?
Jeff asks: "So the folks at a place called Conniption Films (great name) developed a camera called the Millisecond Camera, which can shoot 12,000 frames of film a second. I read the article and thought 'Hmm, that's neat,' but then realized they were still using an analog process for shooting this high-speed film. Being a geek, not necessarily into the film side of things but curious nonetheless, I wonder: shouldn't a computer be able to do a better job of such a thing? They say the film runs around a spindle going 500 mph (!). Wouldn't that be prone to failure and use a lot of energy? Wouldn't it be more appropriate, easier, and overall cheaper to just hook up a high-res CCD to a beowulf </duck> cluster of 2 GHz+ machines and capture high-speed images that way? Why hasn't it been done yet? Or has it, and I just haven't seen it?" I did a double-take when I first read this question, then got curious and did a little digging. It turns out high frame rates are not exclusive to the analog photography world, and to illustrate my point, I provide this link. It's woefully short on details, and on the explanations as to why a camera that can record 1M frames per second is limited to a playback of only 103 frames, but the technology is out there. Has anyone seen any other digital cameras with high frame rates? What visual mischief could you aspiring photographers get into with such a camera?
CCDs (Score:2, Informative)
Slow CCDs (Score:1, Insightful)
Re:Slow CCDs (Score:2, Insightful)
Re:Slow CCDs (Score:2, Informative)
Re:Slow CCDs (Score:2)
The flagship prosumer digital SLR from Canon, the D60, uses a Canon-made CMOS which renders incredible pictures.
Take a look at some sample images on www.dpreview.com [dpreview.com] and see for yourself! Things have changed in the last two years...
On another note, CMOS sensors tend to be slower. That's probably why their pro SLR, the 1D, uses a CCD.
Re:Slow CCDs (Score:3, Informative)
There is no film in the world that can outshoot a high-sensitivity CCD nowadays. Cameras like the Kodak 760x can shoot at ISO 6400 with reasonable quality, which film is utterly incapable of matching with any sort of quality, and CCDs are only getting better.
Yes, crappy consumer digicams suck at anything over ISO 100. But a serious professional digicam beats the pants off of film at high ISOs. (In case you were wondering, my wife is a professional photographer who shoots with a Nikon D1X. I do know a bit about this.)
Bandwidth (Score:2, Insightful)
A small, 8-bit-color uncompressed movie at 300x300 pixels would require something like 8.6 billion bits per second (300 * 300 * 12000 * 8).
Now we probably want more resolution and a higher bit depth, so multiply appropriately.
What are we going to use to transfer that much data around a cluster? Or even just from the camera to the cluster?
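The arithmetic above is easy to parameterize. A quick sketch, using the post's own example figures (not any real camera's specs):

```python
# Back-of-the-envelope data rates for uncompressed video, using the post's
# own example figures (not any real camera's specs).

def raw_bandwidth_bps(width, height, fps, bits_per_pixel):
    """Uncompressed video data rate in bits per second."""
    return width * height * fps * bits_per_pixel

small = raw_bandwidth_bps(300, 300, 12000, 8)
print(f"{small / 1e9:.2f} Gbit/s")  # 8.64 -- the "8 billion bits" above

bigger = raw_bandwidth_bps(1024, 768, 12000, 32)
print(f"{bigger / 1e9:.0f} Gbit/s")  # 302 -- after "multiplying appropriately"
```

Even the small example is an order of magnitude beyond gigabit ethernet, which is the point of the post.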
Re:Bandwidth (solved internally) (Score:2)
From reading the article, the bandwidth problem was solved by giving each pixel of the camera its own memory. One problem I can see: this is going to eat space on the chip that would normally be used for imaging. If you put too much memory around a pixel, image quality will start to suffer. (And they already had to increase the size of each pixel to capture enough light fast enough.)
It would seem that they pegged the usable tradeoff at 103 samples per pixel, so that's how many images you can store.
gigE from each segment of the CCD (Score:2)
Another option is to wait for 10gigE (along with the rest of the supercomputing world) or go with Myrinet, which has recently broken the 1 gigabit barrier.
Re:gigE from each segment of the CCD (Score:3, Informative)
Recently? Myrinet has been doing 2 gigabits full-duplex since May 2001 when it started using fiber. [myri.com] Not to mention that full link utilization only uses a few percent of the host CPU. What's the point of fast cluster interconnect when you use half your CPU sending packets through the TCP/IP stack?
Myrinet (Score:2)
Re:Myrinet (Score:2)
Memory? (Score:1)
It's woefully short on details, and on the explanations as to why a camera that can record 1M frames per second is limited to a playback of only 103 frames [...]
Memory problems, I suppose. They say each pixel is its own memory. I guess that getting 1 million frames per second through any kind of bus to any kind of memory is going to be tough. AGP isn't going to cut it. ;)
re... (Score:1, Interesting)
If you could get hold of a CMOS image sensor, you could probably rig up something similar, but remember that those data rates are INCREDIBLY high. Also, that means the length of the shot tends to be fairly short.
Cliff, did you READ it? (Score:5, Informative)
This overcomes the bottleneck of trying to transfer data off the CCD at such high frame rates in real time, but limits you to "downloading" the last 103 frames after-the-fact from the chip.
MadCow.
Re:Cliff, did you READ it? (Score:1)
Re:Cliff, did you READ it? (Score:2)
However, the really strange part is that the article says the playback is actually 10 frames per second, which, if true, is really sucky playback.
MATH ERROR, DOH! (Score:2)
I divided 2^20 by 24, not by 3. Heh.
Make that 349,525 frames with 1 megabyte of storage per pixel.
Quite frankly with that much storage space I would say store 64bits for the visible spectrum and use another 24 for the infrared spectrum, and if CCDs ever get advanced enough, another 16bits or so for the ultraviolet.
With the visible + infrared though, that is 11 bytes, or a mere 95325 frames per individual pixel, hehe.
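The frames-per-pixel figures above are quick to verify, assuming the hypothetical 1 MiB of storage attached to each pixel:

```python
# Quick check of the frames-per-pixel figures above, assuming a
# hypothetical 1 MiB of storage attached to each pixel.
per_pixel_mem = 2**20

print(per_pixel_mem // 3)   # 349525 frames at 3 bytes/sample (24-bit color)
print(per_pixel_mem // 11)  # 95325 frames at 11 bytes/sample (visible + IR)
```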
Bandwidth (Score:5, Insightful)
1024 (width) * 768 (height) * 4 (32-bit color) * 12000 (fps) = 37,748,736,000 bytes/second (~35 GB/s)
So no wonder they use film...
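The parent's figure, recomputed as a two-liner:

```python
# The parent's figure, recomputed: 1024x768 at 32-bit color, 12,000 fps.
bytes_per_sec = 1024 * 768 * 4 * 12000
print(f"{bytes_per_sec:,} bytes/second")     # 37,748,736,000
print(f"{bytes_per_sec / 2**30:.1f} GiB/s")  # 35.2
```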
gigabit ethernet (Score:2)
Re:gigabit ethernet (Score:2, Insightful)
Re:gigabit ethernet (Score:2)
worse than that (Score:2)
several? (Score:2)
Re:Bandwidth (Score:2)
Well, the principle behind the parent post is summarised in the famous quote:
"Never underestimate the bandwidth of a lorry full of tapes hurtling down the motorway"
Re:Bandwidth (Score:2, Informative)
The number of required channels can be reduced if higher-bandwidth DRAM is used, which I'm sure exists somewhere.
Yes, it would be frighteningly expensive, but these high speed film cameras aren't exactly cheap either.
Re:Bandwidth (Score:2)
Re:Bandwidth (Score:4, Insightful)
Several points... (Score:4, Informative)
Well, for one thing, nobody records at that resolution. As another reply stated, DV is 720x480.
Another problem with your simple calculation is that video is never stored as 32-bit color; that's totally unrealistic. The common way to store video is not RGB but YUV. Because of the way the human visual system works, the color components (U, V) are typically stored at 1/4 the resolution of the luminance (Y), meaning that an RGB image taking 3X bytes would take X + X/4 + X/4 = 1.5X bytes in YUV: half the data.
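The subsampling arithmetic works out like this (a quick sketch; the 720x480 frame size is taken from the DV example elsewhere in the thread):

```python
# Bytes per frame for RGB vs. 4:2:0 chroma-subsampled YUV: full-resolution
# Y plus quarter-resolution U and V gives 1.5 bytes/pixel instead of 3.

def frame_bytes_rgb(w, h):
    return w * h * 3                            # 3 bytes per pixel

def frame_bytes_yuv420(w, h):
    return w * h + 2 * (w // 2) * (h // 2)      # Y + subsampled U + V

w, h = 720, 480
print(frame_bytes_rgb(w, h))     # 1036800
print(frame_bytes_yuv420(w, h))  # 518400 -- exactly half
```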
More significant, though, is the fact that just about every digital image recording mechanism stores information compressed on the storage media. This is true from consumer digital cameras to DV cameras to the Sony HDTV cameras Lucas used for Star Wars.
Consider what it means to take 12,000 frames per second. You're probably recording a single nearly-instantaneous event, or getting many images of a very fast event. In the former case, there will be a series of frames before the event in which nothing is going on; the difference between those frames is close to zero, which compresses extremely well with MPEG-style compression. Your data rate could be 1/100th of the uncompressed rate. When the event occurs, the instantaneous data rate goes up, but buffering can solve this, since the event probably lasts only a few frames.
In the latter case, recording a fast event at a fast frame rate is essentially the same as recording a normal-speed event at a normal frame rate. In this domain as well, MPEG-style compression is extremely effective. At most you would need 1/5th or 1/10th the uncompressed rate, but 1/100th is a pretty reasonable number given current technology.
The only challenge with realtime compression at this speed, of course, is sufficiently fast hardware. I think it could be done in parallel: capture several GOPs' worth of data (15-45 frames, perhaps), send it to a compressor, and then switch the buffer output to a new compressor, round-robin style.
In any case, video is usually stored at rates many factors smaller than the uncompressed rate. So if you change the variables of your equation to a more realistic resolution and color depth, then divide that number by 10 or 100, you'll have a more realistic data rate.
720(w) * 480(h) * 1.5(bytes/pixel) * 12000(fps) = 6.2GB/s; divide by 10 for conservative compression and you get ~620MB/s (an aggressive 100:1 would bring it down to 62MB/s).
Still too fast, but not completely unrealistic if you've got a healthy budget. ;-)
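The "nothing happening before the event" argument is easy to demonstrate. This is a toy stand-in for MPEG-style temporal compression (zlib instead of DCT plus motion compensation), with made-up frame data:

```python
# Toy stand-in for MPEG-style temporal compression: delta-encode each
# frame against the previous one, then deflate. The frame data here is
# made up (a static scene), purely to illustrate the argument above.
import zlib

def compress_sequence(frames):
    """Delta-encode against the previous frame, then zlib-compress."""
    chunks = [zlib.compress(bytes(frames[0]))]
    for prev, cur in zip(frames, frames[1:]):
        delta = bytes((c - p) % 256 for p, c in zip(prev, cur))
        chunks.append(zlib.compress(delta))
    return chunks

static_scene = [[128] * 1000 for _ in range(50)]  # 50 identical "frames"
chunks = compress_sequence(static_scene)
raw, packed = 50 * 1000, sum(len(c) for c in chunks)
print(f"{raw / packed:.0f}:1 on static frames")
```

On this all-static input the ratio comes out far beyond 10:1; a real encoder exploits the same temporal redundancy far more cleverly than zlib does.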
Re:Several points... (Score:2)
The original idea behind YUV was to enable colour broadcast TV which is compatible with existing monochrome broadcasts.
It's not uncommon to do high speed filming in monochrome so you may need only the Y component.
Re:Several points... (Score:3, Insightful)
HD TV comes in a variety of flavors but it seems to me that 1920x1080 resolution is becoming the acquisition standard. Frames can be downconverted to D1 or any other HD resolution from this size. This is one possibility, and it seemed to work well for Star Wars EP2.
Film is another beast entirely. It holds a LOT more frame resolution than HD, but even in the most lavish productions we rarely get to see more than 4k resolution. (That is ~4000 pixels by whatever height your aspect ratio requires, typically near 2000 pixels.)
Next: for the vast majority of professional uses, video is UNCOMPRESSED. Nobody would ever dare use 100:1 compression for any project. DV uses 5:1 compression, and that is BARELY acceptable in a LOT of circumstances.
Next, color depth. DV is 4:1:1, which is the rough equivalent of 17-bit color. Most video is 4:2:2, which is roughly like 24-bit color. Rarely do people deal with 4:4:4 video, but that is roughly like 32-bit color. Video for film projects like Star Wars is most often handled during post at 4:4:4, but by the time we see it, it is 4:2:2 again. Film follows a similar path through production.
Now, you may think that DVD is just great, but from a production standpoint, material originated as DVD-style MPEG-2 is next to useless. The MPEG-2 being considered for acquisition is very different in both data rate and IBP frame composition from what you have on DVD.
With that little smidgen of knowledge, I think you'll find that the parent post's criticisms are ALMOST entirely unfounded.
I say almost because nobody uses SVGA for video capture. They either use a D1 or HD frame size and rate. SVGA is thus meaningless, except mathematically, in that it is close to the mean of available resolutions.
So... what we have here is another Slashdot post by someone who knows little or nothing about the topic they are discussing, rated informative by others who also know little or nothing about the topic.
I really wish you people would just keep your fingers still when you don't know what you are talking about. Disinformation (regardless of actual intent) is worse than no information.
Re:Several points... (Score:2)
Gigabit interconnect, heh... I work with bandwidth/video/crazy-ass data transfers every day, and I cringe when I read people saying "transfer your 1024x768x32bits @ whatever frame rate over gigabit ethernet", as if it were SOOO much faster; since they've never touched it, they think it's the solution to world hunger. Sheesh...
(BTW, saying "32-bit color" is totally lame; you're probably talking about 24 bits plus 8 bits of alpha... so how do you plan to get ALPHA information out of a CCD again? An x-ray cam? Sheesh.)
ok I need to go to bed
Re:Several points... (Score:2)
You could multiply the number of fps by using a prism and having a different CCD for each color (I think some digicams do this already). Or have a rotating mirror that reflects the image to a different CCD for each frame, in a cycle of, say, 10, and that should get you roughly another 10x capture speed. Is that even possible? It sounds easy enough. A rotating mirror isn't subject to much wear; 15k rpm is easy with a good bearing.
Doesn't save the bandwidth, though. 600MB/sec seems reasonable with an expensive disk array. Or heck, even get gobs of RAM (it's cheap too!)
I never said this was gonna be cheap.
Several corrections... (Score:2, Interesting)
The benefit of converting to this Y'CbCr colorspace is that you can get cheap, easy compression by simply subsampling the 2 color-difference channels, Cb and Cr, and humans will probably not notice.
Now lets assume that your "healthy budget" means "professional" and not "consumer"... That 1/4 chroma resolution implies a 4:1:1 or 4:2:0 sampling format which is fine for consumer level products like DV and DVDs, but any professional-level hardware will use at least 4:2:2, bringing you up to 16bpp.
Let's try that calculation again:
720(w)*480(h)*16(color)*12000(fps)= 66.4 gigabits per second.
Now, for those of you who want to compress this monstrous stream with a Beowulf cluster, I'd like you to show me one that can suck a 66.4 Gbps data stream out of a camera. :) Right.
That leaves us with the only option of compressing in the camera hardware as the previous poster suggested with an array of encoders, each working on 1 GOP. Assuming we use realtime hardware encoders, we'll need about 401 of them: 12000(camera fps)/(30000/1001)(NTSC fps) = 400.4. I'd recommend setting the 401 encoders to produce 5Mbps MPEG-2 streams to achieve a decent quality. That gives us about 2 Gbps of MPEG-2 output.
BTW, to hold a 30-frame GOP in memory for each encoder while they encode, we'll need almost 8GB of RAM: 720(w)*480(h)*2(color)*30(GOP)*401(encoders) = 7.74GB. (Let's try 8-frame GOPs for 2GB of RAM.)
To store that 2Gbps video stream, we'll need a single SCSI Ultra320 hard drive. (not bad)
Now go build it! :)
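The arithmetic in this post checks out in a few lines, under the same assumptions the poster made (NTSC-rate MPEG-2 hardware encoders, 4:2:2 frames at 2 bytes/pixel, 5 Mbps per output stream):

```python
# Encoder-array arithmetic from the post, same assumptions as the poster:
# NTSC-rate MPEG-2 encoders, 4:2:2 at 2 bytes/pixel, 5 Mbps per stream.
import math

camera_fps = 12000
ntsc_fps = 30000 / 1001
encoders = math.ceil(camera_fps / ntsc_fps)
print(encoders)  # 401

frame_bytes = 720 * 480 * 2
gop30_ram = frame_bytes * 30 * encoders        # 30-frame GOP per encoder
print(f"{gop30_ram / 2**30:.2f} GiB")          # 7.74

mpeg2_out_bps = encoders * 5e6
print(f"{mpeg2_out_bps / 1e9:.1f} Gbit/s")     # 2.0 aggregate output
```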
Re:Several points... (Score:2)
While I agree that most people do not record at this resolution, there are some people who would like to be able to do so. I am in contact with several research projects through work where people use video as a data source. The video is data, just like readings from a sensor or output from a file. Compression is undesirable because it modifies the video signal.
I think your back-of-the-envelope calculations about bandwidth are correct, however. The groups with whom I work are trying to determine ways to address the massive bandwidth requirements. There is some thought of using ultra-fast networks (fiber out of the camera), distributed storage (for the huge file sizes), and then off-line editing to eliminate/compress stretches of video that are not interesting. MPEG-2 and MPEG-4 seem to be popular/potential options for compression, and DV is another option for a more COTS solution (as some cameras have FireWire interfaces). I have seen some discussion of DV in the comments already.
Some of the areas where high-speed video is useful include investigations where physical anomalies need to be induced in real time (physical structures such as architectural loads, specific points on automotive chassis, specific points on aeronautical joints, etc.) and where it is not possible to slow down vibrations or reduce loads so that physical deformations can be analyzed/simulated in real time. Note, this type of work is almost always done in conjunction with computational simulations. It is quite easy to slow down the computational visualizations. However, if you want to compare the simulation to the real object, there has to be a way to slow down the real event. (I guess this is a bit obvious.)
Re:Bandwidth (Score:2)
If I was trying to design something like this, I would use an array of low-resolution CCDs, put some sort of extremely slick real-time hardware-based compression either on the CCD or someplace within a few cm so I can use an extremely high-speed bus to move the data. Then it's just a matter of keeping your data from the multiple CCDs in sync...
CCD (Score:5, Informative)
Well you asked for it... (Score:2, Funny)
I have to say the obligatory ultra slo mo pron!
Actually, fact is, the adult industry often drives the need for newer technologies, I've read.
Re:Well you asked for it... (Score:1)
Re:Well you asked for it... (Score:2, Insightful)
Of course, ecommerce would have happened without them, but they were the trailblazers.
siri
Bandwidth (Score:4, Insightful)
This probably means having to shoot images of around 4-6 megapixels, and I really don't see any way of doing that at the speed needed for this kind of application.
The only way might be exactly what the poster of the topic didn't grasp: have a camera that can take 100-1000 pictures at a 1Mpics/sec frame rate, store them in ultra-fast local memory, and transfer them out at leisure. With a good triggering setup, 100-1000 microseconds' worth of data might just be enough for certain applications.
Clarify (Score:1)
New Architecture !! (Score:1)
It's all about buffer memory (Score:2, Informative)
Image Sensors aren't good enough (yet) (Score:1)
As any /. reader knows however, it's only a matter of time before silicon catches up with whatever it's chasing.
Re:Image Sensors aren't good enough (yet) (Score:2)
The movie industry, of course, solves this by having tons of pieces of film, and rotating between them. This is, of course, directly applicable to CCDs/digital camera solutions: have a LOT of sensors, and a prism to shift between them all.
Take, for instance, two of those SI sensors: they'd then be able to do 1400 fps. Take 20 of them, and you've got 14K fps.
bus speed? (Score:4, Insightful)
Oh, and by the way. The confusion about a million frames/second versus 103 was just poor word choice in the article. What they mean is a 1 microsecond shutter speed - 1 microsecond frames with 9707 microsecond gaps. Great stop-action to cut blurring, but manageable transfer rates.
Re:bus speed? (Score:2)
Wrong: the camera takes 1 million frames per second (but only for about 103 microseconds), and then it can play back those 103 frames at 10 frames/second. :-)
(It's great for some applications, but it's obviously not going to do anything useful if you're trying to film a time-lapsed sunset.)
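The numbers in this sub-thread fit together like so (1M fps capture, 103 stored frames, 10 fps playback):

```python
# How the numbers in this sub-thread fit together.
capture_fps = 1_000_000
stored_frames = 103
playback_fps = 10

print(f"{stored_frames / capture_fps * 1e6:.0f} microseconds of recording")  # 103
print(f"{stored_frames / playback_fps:.1f} seconds of playback at 10 fps")   # 10.3
```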
10,000 FPS Camera (Score:5, Interesting)
Kleinfelder, S., Lim, S., Liu, X., and El Gamal, A., "A 10,000 frames/s CMOS digital pixel sensor," IEEE Journal of Solid-State Circuits, vol. 36, no. 12, pp. 2049-2059, Dec. 2001.
The abstract is as follows:
A 352 x 288 pixel CMOS image sensor chip with per-pixel single-slope ADC and dynamic memory in a standard digital 0.18um CMOS process is described. The chip performs "snapshot" image acquisition, parallel 8-bit A/D conversion, and digital readout at continuous rate of 10000 frames/s or 1 Gpixels/s with power consumption of 50 mW. Each pixel consists of a photogate circuit, a three-stage comparator, and an 8-bit 3T dynamic memory comprising a total of 37 transistors in 9.4x9.4 um with a fill factor of 15%. The photogate quantum efficiency is 13.6%, and the sensor conversion gain is 13.1uV/e. At 1000 frames/s, measured integral nonlinearity is 0.22% over a 1-V range, rms temporal noise with digital CDS is 0.15%, and rms FPN with digital CDS is 0.027%. When operated at low frame rates, on-chip power management circuits permit complete powerdown between each frame conversion and readout. The digitized pixel data is read out over a 64-bit (8-pixel) wide bus operating at 167 MHz, i.e., over 1.33 GB/s. The chip is suitable for general high-speed imaging applications as well as for the implementation of several still and standard video rate applications that benefit from high-speed capture, such as dynamic range enhancement, motion estimation and compensation, and image stabilization.
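The readout figures quoted in the abstract can be sanity-checked in a couple of lines:

```python
# Sanity-checking the readout numbers quoted in the abstract.
bus_bits, clock_hz = 64, 167e6
print(f"{bus_bits * clock_hz / 8 / 1e9:.2f} GB/s")  # 1.34 -- "over 1.33 GB/s"

pixels = 352 * 288
print(f"{pixels * 10000 / 1e9:.2f} Gpixel/s")       # 1.01 -- the "1 Gpixels/s"
```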
It's all about bandwidth (Score:2, Interesting)
Now to your question, the primary digital disadvantage is bandwidth.
Digital images have a very specific size: 1024 x 768 x 32 bits = 3 megabytes.
Analog images have virtually unlimited sizes. Some people have tried to estimate the resolution of analog images, and the best they come up with is a vertical and horizontal resolution in the thousands; however, this is unreasonable. Analog images are more detailed than that.
Now bandwidth calculation:
(size of single frame)(frame rate) = (bandwidth)
(3MB)(12000) = 36000 MBps
So, we are looking at processing and storing about 36 gigabytes per second. I mean processing because we want to use lossless compression, and this would require some very specialized hardware to handle this frame rate. This cannot be processed or stored in real time on any modern general-purpose computer. It should be possible to build a specialized machine to accomplish it, though.
I have discounted limitations in CCD speed, the possibility of using multiple cameras, and high-end hardware I don't know about.
Conclusions: "digital" is not a panacea. Visual image analysis should always be done with analog film. Digitization is good for reproducing images and transporting them intact. A camera that does 12k fps is mostly for lab analysis of high-velocity and high-acceleration objects. There are applications of high-speed digital imagery, but I don't know any offhand.
Finally, using a computer to process the resulting data takes a substantial amount of processing time. So the answer to the question "why not use digital cameras" is "why would you need to?" If you can justify the need, do it. It will require, however, substantial resources which also need to be justified.
For amateur photography, don't worry about a 12k fps camera; stick with the 30fps DV handycams.
Torsten
Re:It's all about bandwidth (Score:2)
There is a limit; analog isn't infinite as you assert. If it were, you could take a picture and zoom in forever and keep seeing new detail.
I'm sure there are scientific ways to get a good measure of the resolution on the film, applying something like Nyquist principles to minimize data loss.
I don't think it would be too complicated, either. Just take a picture of a target containing dots that get progressively smaller and are exactly measured. Other test patterns could be used, like lines that get closer and closer together.
If you use enough oversampling, then it is possible to say "I have 99.999 percent of the data in that image" and be able to back it up scientifically.
A hundred pixels per mm or so is often cited for 35mm film. Analog imaging on a different medium with different equipment might need more or fewer pixels per mm. There isn't one analog-to-digital magic number.
In the printing industry where I work, most stuff is printed with a 150 dpi line screen. Our digital images are at 300X300dpi. On our printing presses, due to ink spreading and things like that, much higher res isn't possible. The printed material still looks pretty good at that low res.
Anyway, my point is, you can't say analog is infinite... You almost seem like a Luddite who's afraid of digital imaging.
I have discounted [... the] possibility of using mulitple cameras, high-end hardware I don't know about.
Just because you don't know about it doesn't mean it won't eventually make you obsolete.
Limits (Score:3, Informative)
Re:Limits (Score:2)
I've worked in a lab where we used cameras that generated 640x480x4 (32 bit color) frames at 60 Hz. Guess what. You can't even buy a HD that can sustain that kind of transfer rate for any period of time.
Sure you can. Get 4 u160 disks and run them in raid0 - instant 80-120MB/s sustained bandwidth. If you want 10khz, it's a bit different, but 60 is doable.
The easy solution (Score:2)
Rather than just have RAM per pixel, as the article says they did, have a single compute unit per pixel. So say you want 4-megapixel full-motion video. Then you get 4 million computers, each processing a single pixel. That should be plenty fast enough to reach some very high speeds (assuming the CCD can handle it).
Of course, the problem now is to tie all that data together into a single video... and even then, to find a machine that can play something like that, though I suppose you could take the 4 million machines and have them each play their data into a single pixel on an LCD.
But then, why not just use film?
12,000 FPS isn't a breakthrough (Score:5, Informative)
On the pure digital front, there are units that can record 1000 FPS continuous [visiblesolutions.com] at 512 x 512 pixels. The system is data-rate limited. The imager can go much faster; if you cut the image size down to 32x128 pixels, you can get 32K frames/sec. At 128 x 128, you can get 11.2K frames/sec. The data goes into a buffer in the control unit (1 GB, typically), and is read out via FireWire. So this system can take a lot more frames than the device described in the article, which stores the images in memory within the imager and can only store 100 images or so.
I designed and built one. (Score:3, Interesting)
Yep, I built an electronic video camera with megahertz frame rates 8 years ago. I patented it, too. Actually, two different designs.
C.E.M. Strauss, "Synthetic Array Heterodyne Detection: A Single Element Detector Acts as an Array," Optics Letters, Vol. 19, No. 20, p. 1609 (1994)
and
B.J. Cooke, A.E. Galbraith, B.E. Laubscher, C.E.M. Strauss, N.L. Olivias, and A.W. Grubler, "Laser Field Imaging Through Fourier Transform Heterodyne," Proc. of SPIE, Vol. 3707, pp. 390-408 (1999)
The problem with pixelated detectors is reading out the darn pixels fast enough. Normally this is done by some sort of bucket brigade across the CCD, or some sort of serial memory access across a CMOS array: very slow. And parallel access to an entire array is absurdly complicated and expensive.
In my approach I solved this problem by multiplexing all of the pixel signals onto a single wire. Each pixel, when activated, creates an oscillatory signal at a unique frequency. All of these are combined on a single wire output (amplified by a single fast amplifier), and then the AC signal is digitized by a single fast digitizer and streamed to a hard disk. The frame rate is determined by the frequency separation between the pixels, so if the oscillation frequency is a megahertz, then a frame can be resolved every microsecond. This process is continuous and can go on for as long as you have disk space.
The other cool feature is that the chip you do this on is a single-pixel chip, not a pixelated array! The pixels come from painting the chip with a rainbow of light. For a 1-D example, imagine red light on the left edge and blue light on the right. When a reference signal comes in, it beats with the light. The beat frequency that gets output is determined by where (left to right) the incoming beam hit.
Of course, the good news and the bad news is that this is intended for active remote sensing, where one is illuminating a target with a single-frequency laser. It does not work with ambient light (note: the second article referenced above will work with polychromatic light). The good news is that the detection method is heterodyne detection, which has shot-noise-limited sensitivity even on a crappy photodetector; thus the system is capable of detecting a single photon of light.
Another cool feature is that one can do Doppler detection with this too, since any frequency shift in the target's reflected light shifts the pixel frequency. This could be used, for example, to image blood flowing in veins, find moving objects in noisy scenes (e.g. submarines, airplanes), or any number of flow-imaging concepts. The heterodyne detection means it's sensitive enough to work at very long distances (say, from space), or to use for imaging through very dense media (for example, imaging through the side of a vein or through breast or brain tissue).
A description of how it works in stilted patent language can be read on line here [uspto.gov]
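The single-wire frequency-multiplexing idea above can be demonstrated numerically. This is a toy sketch with made-up carrier frequencies and intensities, not the patented hardware: each "pixel" is a sinusoid at its own frequency, everything is summed onto one "wire", and a single FFT demultiplexes all pixel intensities at once.

```python
# Toy demo of single-wire frequency multiplexing: one FFT recovers every
# "pixel" intensity from the combined signal. All values are made up.
import numpy as np

fs = 4_096_000                  # digitizer sample rate (assumed)
n = 4096                        # samples per frame window
t = np.arange(n) / fs
pixel_freqs = [100_000, 120_000, 140_000]   # one carrier per "pixel"
intensities = [0.2, 0.9, 0.5]               # light level each pixel sees

# Everything rides on one wire: the sum of all pixel oscillations.
wire = sum(a * np.sin(2 * np.pi * f * t)
           for a, f in zip(intensities, pixel_freqs))

# One FFT demultiplexes the wire back into per-pixel intensities.
spectrum = np.abs(np.fft.rfft(wire)) / (n / 2)
for f, a in zip(pixel_freqs, intensities):
    k = int(f * n / fs)         # carriers sit on exact FFT bins here
    print(f"{f/1000:.0f} kHz -> {spectrum[k]:.2f}")  # 0.20, 0.90, 0.50
```

The frame rate in this scheme is set by the frequency spacing, just as the post says: resolving carriers 20 kHz apart needs a window on the order of 1/20 kHz = 50 microseconds.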
Re:12,000 FPS isn't a breakthrough (Score:2)
I'm wondering if we couldn't basically reduce the camera to one _very_ fast pixel, and then FFT to retrieve a sequence of high-resolution images. Of course, just one pixel wouldn't work, but I'm throwing that out there as an extreme to illustrate what I lack the vocabulary to express.
OCR a book by fanning it under a camera (Score:3, Interesting)
--Mike
Re:OCR a book by fanning it under a camera (Score:3, Funny)
Re:OCR a book by fanning it under a camera (Score:2)
Assuming it's fanned the way most people do it in movies, it's basically not possible. Take a stop-motion picture of the process and you'll find that the majority of each page is never even visible. You can't read through paper without a damaging amount of light.
Now, you might be able to page through the book quickly, in a minute or two, because then each page is visible. Lots of processing power and probably some novel techniques would be necessary, but at least the data is present, so it's theoretically possible.
Bandwidth is not the only problem (Score:2, Interesting)
If the frame rate is 12k/second then only 1/12000 (0.0083%) of the light can be used to make each frame. That means that the CCD must be 4000 times more sensitive to light, or you must use a light source that is 4000 times brighter, to get the same results.
And that ignores the fact that solid state light-to-electricity convertors like CCDs have a certain "latency" or "stickiness". Like the effect that the eye sees after a watching a flashbulb, CCDs suffer from after-images, and the brighter the light the worse the problem. Film doesn't have that problem because each frame is exposed on a new "receptor", i.e. a new piece of film.
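The per-frame light budget is a one-liner (12,000 fps as in this thread; note the "4000 times" figure above depends on what baseline exposure you compare against):

```python
# Light budget per frame at 12,000 fps: each frame can integrate at most
# 1/12,000 of a second of light.
fps = 12000
exposure = 1 / fps  # longest possible integration time per frame
print(f"{exposure * 1e6:.1f} microseconds of light per frame")  # 83.3
print(f"{100 / fps:.4f}% of each second's light per frame")     # 0.0083
```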
Re:Bandwidth is not the only problem (Score:3, Informative)
This is actually a point in favor of high speed CCDs : in order to get the same level of contrast, you need about 5 times less light than a normal high-speed camera. Remember that the same argument you made for light sensitivity/light levels also applies to film. They'd need a light source 4000 times brighter as well, as the film is only exposed for a small fraction of the time.
You might be able to do something cool that mixes film and CCDs: have a film made of CCDs that are then read out after being exposed to light. This solves the bandwidth problem as well, because you could have multiple systems reading out the data from multiple CCDs - it's not hard to aggregate GB/s worth of bandwidth from slower sources. The main problem, of course, would be flexible silicon. That'd take some work.
Re:Bandwidth is not the only problem (Score:2)
Do it in parallel (Score:1)
The data from a 1280x1024 CCD could be split into 16 320x256 segments.
Of course, someone's got to make the CCD, and I imagine having 16 computers connected to the same CCD probably poses some interesting problems. But I'm sure it is solvable.
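A sketch of the tiling itself, with pure-Python lists standing in for real frame buffers and the segment size taken from the post:

```python
# Split a 1280x1024 frame into a 4x4 grid of 320x256 tiles, one per
# downstream machine. Lists stand in for real frame buffers.
def tile(frame, tile_w=320, tile_h=256):
    """frame: list of rows; returns (x, y, sub-image) tuples."""
    tiles = []
    for ty in range(0, len(frame), tile_h):
        for tx in range(0, len(frame[0]), tile_w):
            sub = [row[tx:tx + tile_w] for row in frame[ty:ty + tile_h]]
            tiles.append((tx, ty, sub))
    return tiles

frame = [[0] * 1280 for _ in range(1024)]
tiles = tile(frame)
print(len(tiles))                             # 16
print(len(tiles[0][2]), len(tiles[0][2][0]))  # 256 320
```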
</duck>? (Score:3, Funny)
The problem is NOT bandwidth... (Score:5, Insightful)
No, the problem is light itself. You don't get much of it captured with a shutter speed of less than 1/12,000th of a second.
CCD kind of sucks, man. For all its glorious promise, the best CCD chipsets aren't all that much better than the wonderful X-10 spycam.
Re:The problem is NOT bandwidth... (Score:3, Informative)
This problem is solvable: after all, film has the same problem, much, much worse: the settling time for film is millions of years (heh)! They solve this by putting huge amounts of film on a loop and exposing each piece for only a fraction of the time. You could do the exact same thing with a CCD (if you could make flexible silicon, or something like that), and that would solve all of these problems.
CCD most distinctly does not suck: you can prove this by looking at astrophotography, which is without a doubt one of the hardest photographic problems that exists: extremely low light levels, and moving targets. Astrophotography is completely dominated by CCDs, because the sensitivity is just so much better, so you can get far more light in a shorter time.
A different sort of camera (Score:3, Interesting)
That system uses a single row of pixels which can be scanned at extremely high rates - the picture is built from objects moving in front of the pickup row, rather than the camera actually taking a full-resolution image.
Sort of a high-tech slit-camera.
Perhaps not 100% on-topic, but still interesting.
The other factor when talking about extreme high-speed photography (when people are calculating bandwidth):
Most really high-speed cameras shoot in black and white afaik.
If you drop the calculations from 32bpp down to 8bpp for a nice greyscale image, you're starting to get to manageable numbers... Also, adding cheap hardware-based compression (RLE or the like) could reduce the data stream to even more manageable levels.
You're not going to be able to shoot 6 megapixel pictures that fast, but 320x240 or 640x480 images should be possible at high framerates. I doubt it would replace film, but it might be handy for quick playback without having to get negs developed.
If you watch the "Bad Boys" DVD (the Will/Martin ver of Bad Boys), they have some very cool high-speed photography of different guns being fired into different objects. They used some sort of kodak high speed imager afaik - around 2000fps.
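The greyscale savings above are easy to sanity-check; this sketch borrows the ~2000fps figure from the Kodak imager mentioned and compares raw, pre-compression rates:

```python
def data_rate_bytes(width, height, bits_per_pixel, fps):
    """Raw uncompressed stream rate in bytes per second."""
    return width * height * bits_per_pixel // 8 * fps

# 32bpp colour vs 8bpp greyscale at 2000 fps, before any RLE compression.
colour = data_rate_bytes(640, 480, 32, 2000)
grey = data_rate_bytes(640, 480, 8, 2000)
print(colour, grey)  # ~2.46 GB/s vs ~0.61 GB/s: a 4x reduction
```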
cheaper alternatives (Score:3, Interesting)
The idea of "parallel" items is nothing new, we've already seen success with drives, clusters, and even an array of projectors to create a high resolution projected wall.
Just a thought...
Rapatronic -- technology from the 1950's. (Score:3, Interesting)
Rapatronic Camera Shots [vce.com]
Re:Rapatronic -- technology from the 1950's. (Score:2)
I'm not sure this really counts, however, since each physical camera could only take one picture, so it wasn't really a motion-picture process--to get ten frames, you needed ten cameras. It was really like Muybridge's original technique (recently used for that "bullet-time" sequence in _The Matrix_). I'm sure you could use the same technique with digital cameras and get very high frame rates for very short sequences.
Quantum Mechanics (Score:3, Interesting)
The wavelength of visible light (in a vacuum) is between 4x10^(-7) and 7x10^(-7) m.
The speed of light is 3x10^8 m/s (in a vacuum). Planck's constant is 6.6x10^(-34) J s.
Put these together, and a single photon of visible light has an energy of between 2.8x10^(-19) and 5x10^(-19) J.
Suppose you want to get 24-bit colour. As an absolute minimum, you'll want to be able to detect 4096 photons per colour per pixel per frame. CCDs are typically 50% efficient, which means you need 4096*3*2 incoming photons per pixel. At, say, 1024x1024 pixels and a million frames per second, that means 3*4096*2*1024*1024*1000000 = 2.6x10^16 photons per second, at an average energy of 3.9x10^(-19) J each.
That's an absolute minimum of 1.0x10^(-2) W of incoming radiation.
How much light is available? Well, "bright sunlight" provides approximately 30 W/m^2 of visible light.
That means that you'd need an aperture roughly 21mm across... which isn't impossible, but is certainly not going to be desirable.
So how does ultra-fast photography work? They use really bright flashes of light... which is why you don't want to be filmed for more than a fraction of a second at once.
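For the curious, the whole chain above can be reproduced in a few lines, using the post's own figures (4096 photons per colour per pixel per frame, 50% efficiency, 30 W/m^2 of visible sunlight); the aperture comes out near 21mm, in the same ballpark as the figure above:

```python
import math

h = 6.6e-34          # Planck's constant, J*s
c = 3.0e8            # speed of light in vacuum, m/s
wavelength = 5.1e-7  # mid-visible wavelength, gives ~3.9e-19 J per photon

photon_energy = h * c / wavelength                   # joules per photon
photons_per_s = 4096 * 3 * 2 * 1024 * 1024 * 10**6   # per the post's budget
power = photons_per_s * photon_energy                # watts of light needed
area = power / 30.0                                  # m^2 at 30 W/m^2
diameter = 2 * math.sqrt(area / math.pi)             # circular aperture, m
print(power, diameter * 1000)  # ~0.01 W, ~21 mm
```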
Car Smashing (Score:2)
They have to be turned on in sequence, because if you tried to turn them all on at once, the current draw would kill the power grid. This despite the extra-hyper-ultra-industrial strength wiring into the grid.
You also can't leave them on for more than a few seconds because the heat generated burns them out. (It is actually a challenge to balance these two conflicting priorities: turning them on quickly, yet slowly.)
The power draw for a single test is enough that they actively try to minimize the amount of time these lights are on. This despite the fact that electricity is normally so cheap we really don't think about it much. (Think in business terms: it's worth paying someone $50+/hour to actively spend time worrying about how to minimize the time these lights are drawing current.)
All of this for the ultra-high-speed photography that takes place. I don't recall exactly how bright they said it was in the facility I was in, but I think it blows sunlight away by several times.
I mention this as an example application where "bright flashes of light" (emphasis mine) aren't practical, so they have to go whole hog. Kinda cool.
Re:Car Smashing (Score:2)
Yes, there are always going to be exceptions. But you'll note that they don't have people inside those facilities when they have all the lights turned on.
I guess I should have said "you can't get high quality 10^6 fps video for more than a fraction of a second at a time unless you evacuate the area first".
Re:Car Smashing (Score:2)
In fact, the most popular question on the tour was, "Can you turn them on for us?" and the answer was basically "Do you want to be blind?" (They had some clever, prepared joke, but I don't recall it well enough.)
Re:Quantum Mechanics (Score:2)
Hence the reason that CCDs are way cool compared to film: film is only about 20% efficient or so, so it takes about 5 times less light to get the same image out of a CCD than it does out of film.
Re:Quantum Mechanics (Score:3, Interesting)
With a high-efficiency CCD cooled to around -300F (near liquid nitrogen temperatures) you can reliably distinguish a collection well hit by a single photon from one with no photon strikes. So if you were going to do 3-CCD imagery with prism splitting, you would need what? Three photons per pixel? Or do photons split in a prism to 1/3 their initial energy, so you'd only need one?
I can't recall the exact show now, but I think on TechTV not too long ago they did a segment on a very high-speed digital camera that did something like 300,000 frames per second. The thing was pretty small, about the size of two tower-style computer cases.
Re:Quantum Mechanics (Score:2)
As for the number of photons: 8 bit grayscale usually means 64 (not 256!) distinguishable levels. That is, on the 0-255 scale, you'll usually be able to distinguish between a 100 and a 104. In order to resolve N different levels when you're receiving statistical inputs, you need N^2 data points, ie "white" would be at least 4096 photons. Because you want to have that same quality on each of the three colour planes, multiply by three.
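The N-levels-need-N^2-photons claim follows from Poisson (shot) noise, where the spread of a count n is about sqrt(n); a toy check:

```python
import math

# Shot-noise sketch: with n photons at full scale, the noise is sqrt(n),
# so the number of reliably distinguishable levels is n / sqrt(n) = sqrt(n).
def distinguishable_levels(full_scale_photons):
    return int(math.sqrt(full_scale_photons))

print(distinguishable_levels(4096))  # 64 levels, matching the post
print(distinguishable_levels(256))   # only 16 levels from 256 photons
```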
Re:Quantum Mechanics (Score:2)
On what grounds would 2^8 levels of data "imply" only 2^6 levels of actual information?
2^4-level systems don't magically degrade to black or white, after all.
Though I'm a bit rusty on the stats, I do believe the N-to-N^2 process matters only during calibration. Once you've established a given correspondence between inputs and outputs, further samples may share in the results of the previous calibration. I do suspect that a large standard deviation would require greater sampling levels to achieve a given level of accuracy.
--Dan
Re:Quantum Mechanics (Score:2)
Granted for scientific purposes they may use equally balanced color input, ie 1:1:1
I would also argue that this isn't a statistical issue. If you can detect a one-photon difference across CCD collection wells, then you might only need 255 photons for white (assuming 24-bit color). Again, whether you could split the one photon to three CCDs, or need three incoming photons, I don't know. I'd feel ashamed, but given the ongoing debate over wave/particle/etc. theories I think my ignorance is tolerable.
Anyway, that seems to mean that we only need either 255 or 765 photons per image pixel in such a case.
The limitations are all in the quality of the read-out logic of the CCD and the amplifier and A/D once the information is off-chip.
How it could work (Score:2)
How I would tackle the problem: set up a series of CCDs (Foveon's X3 would be my choice), with each pixel element fed directly into a huge RAM cache, from which the data could be offloaded to yet slower storage.
Since we'd have to deal with a charge time (hence the first C in CCD, for charge-coupled) of 1ms, we'd need 1000 CCDs, each with its own data cache and so on.
Then comes the problem of making an image - since we'd be dealing with 1000 CCDs, we're going to have to figure out how to place each pixel so that when that pixel's series fires, we can capture an image which would look like any other series' image. So this is what you'd be dealing with:
DATA RATE = 3 * (W*H) * 1000 * Time Duration
For giggles:
3*(1024*768) * 1000 * 1s = 2,359,296,000 colour samples per second (about 2.36 GB/s at one byte per sample).[1]
Thank you, I'd like a stiff drink now, and film looks mighty good.
[1] If I didn't fuck up my math...
Re:How it could work (Score:2)
It'd be expensive, yes, but it'd pay for itself over time in the added sensitivity (5-fold), the saved recurring film cost, and the lack of many moving pieces. From what I've been reading around here, it looks like there are several companies already working on it.
Re:How it could work (Score:2)
So
DATA RATE = 3 * (W*H) * 1000 * Time Duration
Should be:
DATA RATE = 3 * CDept * (W*H) * 1000 * Time Duration
For giggles:
3 * 8 * (1024*768) * 1000 * 1s = 18,874,368,000 bits per second (2,359,296,000 Bytes per second, roughly 18.9 Gbps)
Ouchie.
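A quick script makes the arithmetic easy to check (a sketch with hypothetical names; 3 colours at 8 bits each, 1024x768, 1000 exposures per second):

```python
def data_rate_bits(colours, bit_depth, width, height, fps):
    """Raw data rate in bits per second for an uncompressed stream."""
    return colours * bit_depth * width * height * fps

bits = data_rate_bits(3, 8, 1024, 768, 1000)
print(bits, bits // 8)  # ~18.9 Gbit/s, i.e. ~2.36 GB/s
```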
from a creative point of view (Score:5, Interesting)
So more substance, less rant: here's how I think these technologies would be useful to end users, and thus what we should be thinking about here.
Video Tap: A major video breakthrough in the feature-film-making process was Jerry Lewis's video tap. This puts a prism or split-field diopter between the lens and the film plane, splitting the image in two: one path goes to the film plane, the other to a video camera. This is how a director gets immediate feedback on how the scene went (instead of waiting for the dailies the next night). A high-framerate video tap for high-framerate film would be extremely handy. The quality wouldn't have to be great; it would just need enough fidelity to tell the director and cinematographer how well composed the take was, and to make sure all the stuff that's supposed to be in the take is there...and nothing else (like a boom mic).
Internet/NLE: This would also help in modern, internetworked digital non-linear processes, where takes are digitized as they are shot (if not already captured in DV) and dropped via a 3-point edit into the timeline of a nonlinear edit suite (Avid, Cinelerra, Final Cut Pro) whose project files are shared in an internetworked data store (film crews on other ends of the world, and the CG shop, can instantly see their shot in the context of the other units' shots...in realtime). Even with a film process, the tap could digitize the footage and insert it into the timeline; the print of the footage could later be scanned and conformed to the timeline. Very handy. This ties into the throughput problem: you have to consider that the bottleneck isn't CCD voltage intervals, cache tomfoolery, or writing to a non-volatile medium. It could be a crappy ADSL connection or satellite uplink set up by people who scarcely understand how that stuff works.
Noise and heat: One of the banes of film making, and one of the big advantages of digital video, is the noise that all those ratchet/crank/shutter-type mechanisms in a camera create. A lot of the sound work in a film is dealing with the noise from the camera; sometimes the sound is recorded later after discarding the sound from the set wholesale. Now, in order for a CMOS imager to be effective at these speeds, we'll need to keep it cool. Heat is more likely to degrade throughput than buffer speed or size, so we're going to need to build hardware to cool the CMOS. That hardware is likely to be more exotic than the CMOS itself, take more energy than the motor of a high-speed film device, and potentially create a lot of noise of its own. So the advantages of the high-speed DV cam over film are only possible if the apparatus that supports the camera doesn't reintroduce the same problems on an equal or greater scale than existed with film.
Personally, I feel that the single greatest and most useful application of this technology, from a creative standpoint is the high speed video tap. It would liberate crews from the burden of dailies and integrate high speed footage into modern production processes.
For non-creative uses (scientific/research), this technology can free users from the latent and toxic nature of film processing infrastructure.
Image capture is always analog. (Score:3, Interesting)
Compared to film, CCDs are extremely low-res (top quality 35 mm film has resolution equivalent to a 50 megapixel CCD) but, more importantly, they're slow. At very short exposure times, CCDs have so much noise that the final result is useless. The problem isn't the transfer rate, it's the time the CCDs take to "charge up" to meaningful values.
There is one alternative: use very large CCDs. The larger the CCD, the more light hits it, and the faster it can charge. But larger CCDs are more expensive and require special lenses.
Recording directly to digital does have one big advantage: you don't have to pay for the film. But the CCDs simply aren't up to film quality yet (and probably won't be for another 5 years or so). So the solution is simple: shoot on film, then digitise it.
RMN
~~~
Yes, it's being done (Score:2)
Interestingly, the computer interface they use for the special CCDs runs Linux.
I am sure that you can get an idea of what is involved from Ultracam and use it in other real-world applications (patents notwithstanding).
why this wouldnt be a good idea. (Score:3, Interesting)
This means that the signal-to-noise ratio for each sensor goes down. At 3 megapixels you won't see any degradation from this, but as the resolution increases you're going to see more and more loss of accuracy and less accurate images. Now, this is for a camera which takes photos at the equivalent of 50 ISO (1/50th of a second). If you want a camera that takes 12,000 images per second, each image has to be captured in about 1/36000th of a second, or 720 times faster, so only 1/720th of the light will hit each pixel. In order to maintain accuracy at that speed, you're going to have to drop the resolution down to about 4000 pixels per image (100x40?) in order to maintain image quality.
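The scaling argument above can be sketched in a couple of lines (hypothetical function name; assuming light-per-pixel must stay constant as exposure time shrinks):

```python
# If the exposure time shrinks by a factor k, keep photons-per-pixel
# constant by shrinking the pixel count by the same factor k.
def pixel_budget(base_pixels, base_exposure_s, new_exposure_s):
    """Pixel count that keeps light-per-pixel constant."""
    return base_pixels * new_exposure_s / base_exposure_s

pixels = pixel_budget(3_000_000, 1 / 50, 1 / 36000)
print(round(pixels))  # ~4167, close to the "4000 pixels" figure above
```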
%s/Slashdot/google/g (Score:2)
Search for high framerate digital cameras. Let the poor slashdotters get back to writing code.
Stupid write up (Score:4, Insightful)
Mechanically inclined.
When was the last time you even heard that phrase? We live in a physical world. A mechanical device is a perfectly acceptable solution to a problem. Not everything needs to be done with software. Just look at the guy's level of disappointment. "But there has to be a way to do this with electronics! Electronics are always better than mechanics, aren't they? It's impossible for mechanics to do something electronics cannot, isn't it? Hello?"
And Cliff's additional writeup is no help either. The reason the video in the example he found can only be played back at 103fps is fully explained in the link he provides (which he apparently didn't bother to read). Also, the 12,000fps film camera that got everyone talking in the first place is not the first of its kind. High-speed film cameras have been around for decades. The real kicker is Cliff's silly statement at the end, which makes it sound as though an electronic high-speed camera would be the first high-speed camera ever. He says, "What visual mischief could you aspiring photographers get into with such a camera?" Gee, I dunno Cliff, how about the exact same things people have been doing with high-speed film cameras for the past 50 years, eh?
Sheesh. The world goes beyond the bits in a CPU. Turn off the computer and take a look around at the tangible, physical world.
CMOS digital image sensors (Score:2)
Of particular interest for high frame rates might be the MI-MV13 [micron.com], Micron Imaging's Machine Vision 1.3-megapixel CMOS digital image sensor. This particular sensor can do 500fps at 1.3 megapixels, but can also be windowed to do, for example, 4000fps at 1280 x 128.
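The two quoted modes are consistent with a roughly constant readout throughput in pixels per second; windowing just trades rows for frame rate (a quick check, assuming the spec figures above):

```python
# MI-MV13 spec figures: readout throughput (pixels/s) in both modes.
full_frame = 1280 * 1024 * 500   # 500 fps at the full 1.3 MP resolution
windowed = 1280 * 128 * 4000     # 4000 fps at a 1280x128 window
print(full_frame, windowed)      # both come out to 655,360,000 pixels/s
```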
Re:CMOS digital image sensors (Score:2)
For crying out loud! (Score:2)
Bandwidth.
Pixel Depth.
Image Dimensions.
Bitch, bitch, bitch, bitch, bitch. To quote someone who dearly needs to be heeded in this case (Denis Leary), "Shut the fuck up, NEXT!" I've heard enough crap; why don't we just call up Nikon and ask them for one of their explosive-imaging cameras? If I remember my Guinness Book of World Records, that unit is a digital camera performing in the MILLIONS of frames per second! 12,000? Feh!
Gee, how about a simple Google search, even? Let's try that, shall we (since the Guinness world record site SUCKS!):
First 3 links are about the same camera! A half-million dollars, 200 million frames per second.
Grab the
bullet from a gun (Score:2)
Say your company does crash tests or you want to find out what happens when something explodes.
If your frame size was 30cm square (about a foot), then 12,000 frames per second would allow you to capture about 10 frames of something travelling at the speed of sound (I think my math is correct!),
or a few frames of a bullet.
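The estimate checks out; a sketch (hypothetical function name, assuming a 30cm field of view and straight-line motion across the frame):

```python
# Frames captured while an object crosses the field of view at 12,000 fps.
def frames_in_view(fov_m, speed_m_s, fps):
    return fov_m / speed_m_s * fps

sound = frames_in_view(0.30, 343.0, 12000)   # the speed of sound
bullet = frames_in_view(0.30, 900.0, 12000)  # a fast rifle round
print(round(sound), round(bullet))  # ~10 frames and ~4 frames
```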
Re:Why? (Score:5, Interesting)
Because certain events, despite what you might think, *do* occur within 1000ths of a second. (The fireball from a nuclear blast, for instance.)
Good cameras' shutter speeds tend to go up to 1/1000th of a second, and some go up to 1/8000th.
As far as the human eye comment goes: just because you record at 12000 fps doesn't mean you play it back at 12000 fps...
Re:Why? (Score:3, Funny)
think, *do* occur within 1000ths of a second
Yes, like when I get fragged playing quake.
Michael
Re:Why? (Score:2)
Well, don't tell that to the hard-core gamers - I'm sure some of them would pay a lot of money for a video card that does even 1000 fps.
Actually, motion looks fairly fluid above 25 frames per second (although your monitor will need to refresh at least 3 times that speed to avoid flicker).
Re:Why? (Score:2, Interesting)
Re:Why? (Score:2)
Capability drives application.
Re:Why? (Score:1)
Answer to second question: No, you didn't
Re:Quantum Limits (Score:3, Informative)
For high speed photography, you need lots of light. This is just generically true. But the quantum efficiency of CCDs is virtually 100%, as opposed to film which is much lower. So, this is a strong point in favor of a CCD system - you'd need less light. There's a poster above talking about how in car crash tests they need massive lighting systems to be able to see things. This'd cut down on their power bill quite a lot.