Choosing Better-Quality JPEG Images With Software?
kpoole55 writes "I've been googling for an answer to a question and I'm not making much progress. The problem is image collections, and finding the better of near-duplicate images. There are many programs, free and costly, CLI or GUI oriented, for finding visually similar images — but I'm looking for a next step in the process. It's known that saving the same source image in JPEG format at different quality levels produces different images, the one at the lower quality having more JPEG artifacts. I've been trying to find a method to compare two visually similar JPEG images and select the one with the fewest JPEG artifacts (or the one with the most JPEG artifacts, either will serve.) I also suspect that this is going to be one of those 'Well, of course, how else would you do it? It's so simple.' moments."
ImageMagick can give you EXIF data. (Score:5, Informative)
The ImageMagick package includes a command called identify, which can read the EXIF data in the JPEG file. You can use it like this:
identify -verbose creek.jpg | grep Quality
In my example, it gave " Quality: 94".
This will not work on images from very old cameras (ca. 2002 or earlier?), because they lack EXIF data. It's also different information from what you'd get by just comparing file sizes: the JPEG quality setting is only one of the factors influencing file size, along with resolution and manipulations such as blurring, sharpening, or adjusting brightness levels.
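If you want to script the comparison, here's a minimal Python sketch, assuming ImageMagick's identify is on the PATH; the file names are placeholders:

import re
import subprocess

def jpeg_quality(path):
    # Ask ImageMagick's identify for its Quality figure for this JPEG.
    out = subprocess.run(["identify", "-verbose", path],
                         capture_output=True, text=True, check=True).stdout
    m = re.search(r"Quality:\s*(\d+)", out)
    return int(m.group(1)) if m else None

# Keep whichever near-duplicate reports the higher Quality.
a, b = "creek.jpg", "creek_copy.jpg"  # hypothetical file names
qa, qb = jpeg_quality(a), jpeg_quality(b)
print(a if (qa or 0) >= (qb or 0) else b, "reports the higher Quality")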
Re:File size (Score:5, Informative)
File size doesn't tell you everything about quality.
For instance, saving an image directly as a JPEG vs. first saving it as a dithered GIF and _then_ saving that as a JPEG: the second one will have much worse actual quality even at the same file size (it may well have worse quality AND a larger file size).
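You can see this for yourself with a short Python script, assuming Pillow is installed and "original.png" stands in for some lossless source image:

from io import BytesIO
from PIL import Image

src = Image.open("original.png").convert("RGB")  # hypothetical lossless source

def jpeg_size(im, q=75):
    # Encode in memory and measure the resulting JPEG size.
    buf = BytesIO()
    im.save(buf, "JPEG", quality=q)
    return len(buf.getvalue())

# Round-trip through a dithered 256-color palette, as a GIF save would do.
dithered = src.convert("P", palette=Image.ADAPTIVE, colors=256,
                       dither=Image.FLOYDSTEINBERG).convert("RGB")

# Dither noise compresses badly: the second file is often bigger AND uglier.
print("direct:", jpeg_size(src), "via dithered GIF:", jpeg_size(dithered))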
DCT (Score:5, Informative)
Just look at the manner in which JPEGs are encoded for your answer!
Take the DCT (discrete cosine transform) of blocks of pixels throughout the image. Examine the frequency content of each of these blocks and determine the amount of spatial frequency suppression. This will correlate with the quality factor used during compression!
use a "difference matte" (Score:4, Informative)
Load up both images in Adobe After Effects or some other image compositing program and apply a "difference matte."
Any differences in pixel values between the two images will show up as black on a white background, or vice versa...
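A rough scripted equivalent, assuming Pillow is installed and that the two hypothetical files have the same dimensions (with a difference image, identical pixels come out black and differences bright):

from PIL import Image, ImageChops

# Hypothetical near-duplicate file names; both must be the same size.
a = Image.open("photo_a.jpg").convert("RGB")
b = Image.open("photo_b.jpg").convert("RGB")

# Per-pixel absolute difference; identical pixels come out black.
diff = ImageChops.difference(a, b)
diff.save("difference_matte.png")  # eyeball this for blocky JPEG artifacts

# A crude single-number summary of how far apart the two images are.
print("summed pixel difference:", sum(sum(px) for px in diff.getdata()))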
adam
BOXXlabs
Try ThumbsPlus (Score:3, Informative)
ThumbsPlus is an image management tool. It has a feature called "find similar" that should do what you want as far as identifying two pictures that are the same except for the compression level. Once the similar picture is found, you can use ThumbsPlus to look at the file sizes and see which one is bigger.
Found it a while ago (Score:5, Informative)
I mean, you don't want second rate pictures in your pr0n stash?
I had problems building it back then, not to mention writing the scripts for it and the hassle of figuring out which images were duplicates, but this utility [schmorp.de] seems to fit the bill.
image quality measures (Score:5, Informative)
Hosaka, K., "A new picture quality evaluation method," Proc. International Picture Coding Symposium, Tokyo, Japan, 1986, pp. 17-18.
Re:File size (Score:5, Informative)
http://linux.maruhn.com/sec/jpegoptim.html [maruhn.com]
No. You can compress JPEGs losslessly.
Blur Detection? (Score:2, Informative)
I wonder if out-of-focus or blur detection methods would give you a metric that varies with the level of JPEG artifacts. After all, JPEG artifacts should make it harder to do things like edge detection, which are the same things made more difficult by blurry and out-of-focus images.
A Google search for blur detection should bring up things you can try. Here [kerrywong.com] is a series of posts that do a good job of explaining some of the work involved.
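One common blur metric is the variance of the Laplacian; here's a quick Python sketch assuming OpenCV's Python bindings (a stand-in for the methods in those posts, not taken from them, with placeholder file names):

import cv2

def sharpness(path):
    # Variance of the Laplacian: higher = more edge detail survives.
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

# Heavier JPEG smoothing tends to score lower, but blocking artifacts can
# add spurious edges, so treat the number as a hint rather than a verdict.
for f in ("photo_a.jpg", "photo_b.jpg"):  # hypothetical near-duplicates
    print(f, sharpness(f))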
Re:File size (Score:1, Informative)
Actually, one of the metadata values that is stored is a quality indicator.
Re:File size (Score:5, Informative)
Except for Lossless JPEG [wikipedia.org], standardized in 1993. But other than that, no, there is no lossless JPEG.
Re:ImageMagick can give you EXIF data. (Score:3, Informative)
See http://www.imagemagick.org/script/compare.php [imagemagick.org]
Also, GIMP [www.gimp.org] is able to look at an image and approximate what JPEG compression quality setting was used, and then use that same quality setting to save an output JPEG copy of the image. So they have some algorithm inside their application which takes an image and returns (a good guess of) the corresponding JPEG quality value.
Of course, this does not help you if the image was saved with a lousy JPEG quality value, like 10/100, and later re-saved at a much higher value, like 98/100. Since the algorithm only sees the last save, it would tell you the quality value is 98/100, even though the contents of the image would still show the results of the 10/100 compression, because of multi-generational lossy compression.
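For a scriptable version of the same guess, here's a minimal sketch assuming Pillow, which exposes a JPEG's quantization tables; coarser tables (larger entries) mean stronger quantization, so comparing summed table entries gives a rough quality ranking of two near-duplicates. File names are placeholders, and the multi-generation caveat above applies just the same:

from PIL import Image

def quant_coarseness(path):
    # Sum the JPEG quantization table entries; bigger = coarser = lower quality.
    tables = Image.open(path).quantization  # dict: table id -> 64 entries
    return sum(sum(t) for t in tables.values())

a, b = "photo_a.jpg", "photo_b.jpg"  # hypothetical near-duplicates
print(a if quant_coarseness(a) > quant_coarseness(b) else b,
      "was saved with the coarser (lower-quality) tables")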
Look at the DCT coefficients (Score:4, Informative)
JPEG works by breaking the image into 8x8 blocks and doing a two-dimensional discrete cosine transform on each of the color planes for each block. At this point, no information is lost (except possibly by some slight inaccuracies converting from RGB to YUV as is used in JPEG). The step where the artifacts are introduced is in quantizing the coefficients. High-frequency coefficients are considered less important and are quantized more than low-frequency coefficients. The level of quantization is raised across the board to increase the level of compression.
Now, how is this useful? The reason heavy quantization results in higher compression is that the coefficients get smaller. In fact, many become zero, which is particularly good for compression, and the high-frequency coefficients in particular tend towards zero. So partially decode the images and look at the DCT coefficients. The image with more high-frequency coefficients that are zero is likely the lower-quality one.
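A rough sketch of that comparison, assuming NumPy, SciPy, and Pillow: rather than truly partially decoding the JPEG, it re-takes the DCT of the decoded luma plane on the 8x8 grid and counts near-zero high-frequency coefficients, which approximates the same signal. File names and the threshold are placeholders:

import numpy as np
from PIL import Image
from scipy.fft import dctn

# High-frequency mask for an 8x8 DCT block: entries far from the DC corner.
U, V = np.meshgrid(range(8), range(8), indexing="ij")
HF = (U + V) >= 6

def zero_hf_fraction(path, thresh=0.5):
    # Fraction of high-frequency DCT coefficients that are near zero.
    y = np.asarray(Image.open(path).convert("L"), dtype=np.float64) - 128.0
    h, w = y.shape[0] - y.shape[0] % 8, y.shape[1] - y.shape[1] % 8
    zeros = total = 0
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            c = dctn(y[i:i+8, j:j+8], norm="ortho")
            zeros += int((np.abs(c[HF]) < thresh).sum())
            total += int(HF.sum())
    return zeros / total

# More zeroed high frequencies = heavier quantization = lower quality.
for f in ("photo_a.jpg", "photo_b.jpg"):  # hypothetical near-duplicates
    print(f, zero_hf_fraction(f))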
Re:Check the quantization (Score:4, Informative)
http://www.cs.dartmouth.edu/farid/research/tampering.html [dartmouth.edu]
http://www.cs.dartmouth.edu/farid/publications/tr06a.html [dartmouth.edu]
Try jpgQ - JPEG Quality Estimator (Score:1, Informative)
jpgQ - JPEG Quality Estimator
http://www.mediachance.com/digicam/jpgq.htm
Re:AI problem? (Score:2, Informative)
Or you could just measure the amount of data in the DCT space. Duh.
That'd be a Discrete Cosine Transform [wikipedia.org]
(for the confused like me. Crazy what they can do with math these days)
Re:File size (Score:5, Informative)
Lossless JPEG and lossless JPEG2000 are both exactly that: lossless. Not perceptually lossless, which is what people often use to refer to high-quality lossy JPEG/JPEG2000, or to JPEG-LS. Lossless JPEG uses a DPCM-style predictive encoder, not DCT, AFAIR. Both formats are lossless at least with regard to image data in supported color spaces. This is in part a result of *not* converting to YCbCr, since that conversion is lossy, of course. Not all Lossless JPEGs are 8-bit YCbCr.
Accusoft, for one, has a toolkit for building lossless JPEG applications which supports 16-bit RGB and greyscale lossless JPEG modes.
The near-lossless JPEG you're thinking of is JPEG-LS, which is perceptually lossless and guarantees a maximum error that is generally negligible for almost all applications. This format gets better compression ratios than Lossless JPEG, of course.
Neither the lossless nor the near-lossless JPEG modes are common, though, outside of niche apps. Lossless JPEG2000 is, however, since almost all JPEG2000 libraries support it alongside the lossy modes.
Re:AI problem? (Score:4, Informative)
http://www.jhnc.org/findimagedupes/
There are a bunch, but I know you can construct command-line operations with this one. I imagine you could build a system from this and the parent program that finds the dupes, then nukes the poorer-quality one of each pair, or whatever.
Re:ImageMagick can give you EXIF data. (Score:1, Informative)
ImageMagick does not need EXIF data. It estimates the quality by looking at the JPEG quantization table.
$ convert logo: jpeg:- | identify -verbose - | grep Quality
Quality: 92
Sorting steps to find originals (Score:3, Informative)
You probably don't necessarily want to find the "best quality" image, but rather the image that was closest to the original.
I take it you're either trying to eliminate the low-quality duplicates or thumbnails from a really large collection of pr0n, or trying to write an image search engine that tries to present the "best" rendition of a particular image first.
Either way, you'd likely want to scan through the metadata first, especially stuff exposed by EXIF. So you'd want to give higher scores to EXIF data that makes it sound like the file came directly off a digital camera or scanner, and bump down the desirability of pictures that appear to have been edited by any sort of photo editing software.
Then maybe you want to look at something that would rank down watermarks or other modifications.
Another step would be to compare compression quality, but I think that's what most of the other posts are concentrating on. This is a difficult step because it can be easily fooled: idiots can re-save a low-quality image with the compression quality cranked all the way up, so the file size becomes high even though the actual image quality is worse than the original. You probably need to run it through one of those "photoshop detectors" that could tell you whether the image has been through smoothing or other filters in a photo editor. The originals (especially in raw format, and maybe high-quality JPEG) will have a certain type of CCD noise signature that your software might be able to detect. In the same vein, a poorly compressed JPEG will have lots of JPEG quantization artifacts that your software might be able to detect as well (a crude sketch of one such detector follows below). Otherwise, you're kinda left with zooming in on pics and eyeballing it.
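A crude sketch of a JPEG blockiness detector, assuming NumPy and Pillow: it compares luma jumps across the 8-pixel block grid against jumps inside blocks, and a noticeably higher ratio suggests heavier quantization. File names are hypothetical:

import numpy as np
from PIL import Image

def blockiness(path):
    # Ratio of luma jumps at 8-pixel block seams vs. jumps elsewhere.
    y = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    d = np.abs(np.diff(y, axis=1))  # horizontal neighbor differences
    at_seams = d[:, 7::8].mean()    # columns 7|8, 15|16, ...: the block grid
    mask = np.ones(d.shape[1], dtype=bool)
    mask[7::8] = False
    inside = d[:, mask].mean()      # differences within blocks
    return at_seams / inside        # ~1.0 for clean images, higher when blocky

for f in ("photo_a.jpg", "photo_b.jpg"):  # hypothetical near-duplicates
    print(f, blockiness(f))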
Finally you might be left with a group of images that are exactly the same but have different file names... you probably want some way to store some of the more useful bits of descriptive text as search/tag metadata, but then choose the most consistent file naming convention or slap on your own based on your own metadata.
Hopefully this gives you a start on the important parts of the process that you might have overlooked...