Increasing Video Detail Using Super-Resolution?
Cecil Esquivel asks: "I'm looking for ways to increase the quality of video by using super-resolution algorithms, which use the visual information across multiple frames of video to increase the resolution of individual frames. I have found very little on the web that can do this effectively for the entire length of a video. There is commercial software, VideoFOCUS, which produces hi-res stills from video, but there doesn't seem to be a product for producing hi-res video from video. (There is a thesis from Duke U., but it is six years old, monochrome only, and mostly proof of concept.) Anybody out there have more information, or is willing to help me develop some software that can do this? A Darwin/Mac OS X solution that can work with QuickTime DV is preferred." Typically, super-resolution uses image samples generated from low-resolution and high-resolution versions of the same source, which are then converted into source-independent information that can be used to increase detail in other low-resolution sources. Has anyone seen programs that use super-resolution techniques to increase the resolution of a typical digital video clip?
Some deinterlacers do similar work already (Score:5, Interesting)
Looking at the state of deinterlacing technology and some of the "detail enhancing" resizing filters would be a good area for study.
I'd *love* to see this used to help correct data errors in video streams as well. A DirecTV receiver with this built in would be cake++.
multiple sources? (Score:1)
I know many web sites have a video available in multiple formats; e.g. Real, MPEG-4, and Sorenson, or whatever.
I'd like something that takes the different streams and assembles a high-res final product for me, based on the idea that different details will survive in each compression scheme... does this make sense? Does it sound like a reasonable idea?
Re:multiple sources? (Score:1)
Think of a badly compressed video with lots of blocky shapes, combine several like that and you'll probably end up with something resembling a brick wall.
I've thought that a better method wou
Next generation imaging technology from X-10 (Score:3, Funny)
Just imagine, you too could have a Beowulf cluster of X-10 cameras...
Spatial vs. Temporal (Score:3, Informative)
Re:Spatial vs. Temporal (Score:1)
Kinda, but not quite. You're bounded by the available information content. Total information is spatial information × temporal information. If you increase one, you decrease the other, at least if you hold the bitrate constant.
You're right that it's like Heisenberg's Uncertainty Principle, in that its formulation is similar. (delta-X * delta-T == constant, so as you make delta-X smaller, delta-T gets bigger.) The difference is that Heisenberg's Uncertainty Principle is based on a single
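As a back-of-the-envelope illustration of that constant-product tradeoff (the bitrate budget and bits-per-pixel figures below are made up for illustration, not taken from any real codec):

```python
# Illustrative only: treat a fixed raw bitrate budget as
# pixels_per_frame * frames_per_second * bits_per_pixel.
BITRATE = 124_416_000  # bits/s budget (hypothetical)
BITS_PER_PIXEL = 12    # e.g. 4:2:0 chroma subsampling

def max_fps(width, height, bitrate=BITRATE, bpp=BITS_PER_PIXEL):
    """Frames per second affordable at a given spatial resolution."""
    return bitrate / (width * height * bpp)

# Doubling the linear resolution quarters the affordable frame rate:
print(max_fps(720, 480))    # -> 30.0
print(max_fps(1440, 960))   # -> 7.5
```

At a fixed budget, every factor of two you add spatially costs you a factor of four temporally, which is the delta-X * delta-T == constant relationship in concrete numbers.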
Re:Spatial vs. Temporal (Score:4, Interesting)
If my programming skills were better, I'd be VERY interested in seeing if this can be done with MPEG-2. Noise originating from the original video has always been my primary gripe with those kinds of sequences. If this could be compensated for and exported to MPEG-4... MMM, that would be tasty.
Re:Spatial vs. Temporal (Score:4, Interesting)
Not necessarily, though it'd require two passes.
In the first pass, you generate a high-res image by using motion tracking to figure out how far the camera moved. Then, using that motion, you can read the sub-pixel data. That's semi-easy to do; it's been done before. Before long, you have a high-resolution image.
Then, you do the second pass, where you take that high-resolution image and paste it on top of the low-resolution footage, using the motion tracking data to move the new pixels into the right positions. As long as the motion tracking data is reasonably accurate, you could theoretically create a higher-resolution video without losing temporal resolution. It's not clear to me, though, that there wouldn't be situations where that would break down.
Man I hope I expressed that fairly clearly. I've got a little experience with digitally painting video to change the details of a scene.
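The first pass described above can be sketched as "shift-and-add" super-resolution. This toy version assumes the per-frame sub-pixel shifts are already known (in practice they'd come from the motion tracking step, which is the hard part this sketch skips):

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Naive shift-and-add super-resolution.

    frames: list of low-res 2-D arrays.
    shifts: per-frame (dy, dx) sub-pixel offsets in low-res pixels,
            assumed known (from motion tracking in practice).
    scale:  upsampling factor of the output grid.
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each low-res sample into the nearest bin at its
        # sub-pixel position on the high-res grid.
        ys = np.rint((np.arange(h) + dy) * scale).astype(int) % (h * scale)
        xs = np.rint((np.arange(w) + dx) * scale).astype(int) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    weight[weight == 0] = 1.0  # unfilled bins stay zero instead of NaN
    return acc / weight
```

With four frames whose shifts tile the half-pixel grid, the output recovers the full high-resolution image exactly; with real footage and estimated motion, the result is only as good as the tracking, as the comment says.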
Wow.. (Score:1)
The script... is $#!+. (Score:2)
Is this the first time software can actually be made to dramatically increase the quality of a movie, even if the source sucks?
Image processing systems have contained "noise reduction" processes for a long time. However, no software can save a bad script.
Surely this should be evident (Score:1, Informative)
You can't figure out why that is?
Re:Surely this should be evident (Score:2)
No. Why don't you explain to us the magical quality of the universe that prevents us from doing that?
If you can enhance a single frame, you can enhance all the frames and re-encode an enhanced video.
algorithms which use the visual information across multiple frames of video to increase the resolution of individual frames.
Isn't that what the poster asks for?
Why this is possible (Score:2)
Suppose I show three consecutive frames of an eggbeater. Now, if you knew nothing about this video, if the frames had no correlation between each other, and were just each random stati
Use VideoFocus (Score:3, Insightful)
http://www.salientstills.com/product/pr
So... just run it through VideoFOCUS, make an AVI with all the frames, add the original audio track, then compress it with some video codec... you have yourself a high-res video!
I'm guessing something will look strange, like funny blurs, or background motion, or something. Who knows. I don't think it would do well with fast-moving objects in the video... it'd probably blur them real bad.
It's worth a try though.
Re:Oh yea. I've seen this. (Score:2)
This principle applies to astronomy. If you want to find faint galaxies, you simply increase the exposure time for your video device
Re:Oh yea. I've seen this. (Score:3, Informative)
There is nevertheless a form of super-resolution which works on standalone single frames. It depends on what you mean by "additional detail". Normally when magnifying images you would use nearest-neighbor, or better yet, bilinear interpolation. But a magnification using these met
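For contrast, the interpolation baselines mentioned above can be sketched in a few lines. This is a plain-NumPy bilinear magnifier; it makes the image bigger and smoother, but every output pixel is a weighted average of existing pixels, so no new detail appears (which is exactly what single-frame super-resolution tries to improve on):

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Bilinear magnification: smooth, but adds no new information."""
    h, w = img.shape
    # Fractional sample positions in the source image.
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend the four neighboring source pixels for each output pixel.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Nearest-neighbor is even simpler (just repeat pixels) and blockier; bilinear trades the blocks for blur.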
Found some more information (Score:4, Informative)
http://www.ai.mit.edu/~brussell/research/sres/dat
Anyways, it seems that without proper filtering, the output looks REALLY weird (look at the guy walking in a circle in front of the quilt). It seems that the motion vectors from the MPEG somehow get taken in as part of the detected edges! Thus, this would be most useful with uncompressed analog video as the input.
VideoFOCUS may already do that (Score:1)
This isn't much help, but.... (Score:1, Informative)
Covered on slashdot last year (Score:1)
What kind of video? (Score:2)
I've been working on this for some time and have had a decent amount of success using multiple captures of digital broadcast (broadcasts are still generally encoded on the fly, so each stream will be different). This can also be applied (with great success) to analog captures of broadcast video. Just sum each capture with equal weighting and you'll be surprised how mu
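The equal-weighted summing described in this comment is noise averaging: independent noise across N captures shrinks roughly as 1/√N. A toy demonstration of that effect (synthetic frames and noise levels, just to show the mechanism):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 128.0)  # the "real" underlying frame

# Sixteen captures of the same frame, each with independent noise.
captures = [truth + rng.normal(0, 10, truth.shape) for _ in range(16)]

single_err = np.std(captures[0] - truth)
avg = np.mean(captures, axis=0)   # equal-weight sum divided by N
avg_err = np.std(avg - truth)

# Averaging 16 captures should cut the noise to roughly a quarter.
print(single_err / avg_err)
```

This only helps with noise that differs between captures (analog snow, independently encoded digital streams); it does nothing for detail that was never broadcast in the first place.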
Visar (Score:1)
Although, as it's apparently only available through Intergraph, I'm sure it's rather costly.