
Using The GIMP (or Photoshop) to Improve Photos?

Nom du Keyboard asks: "Is it possible to use The GIMP (or Photoshop) to improve my digital photos? I have a mid-range 7.1MP Olympus camera capable of shooting in Raw mode. When I inspected a section of clear blue sky on a bright, sunny day (which I've long believed to be a relatively good reference for uniform color and brightness), I was surprised (disappointed, since I expect digital perfection) at the variance in adjacent pixels. It's also a quick way to identify any bad pixels. Surprisingly, actual photos from this camera have looked pretty good so far despite this variance. That led me to wonder: if you shot a uniform white surface, perhaps blurred as much as possible to avoid any imperfections in the surface itself, could a correction (adjustment) layer be created in GIMP or Photoshop, tuned exactly to your camera, that fixed the variations in your CCD sensor and improved image quality in the process? Any thoughts?"
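What the poster describes is essentially flat-field correction. Below is a minimal sketch of the idea in Python (NumPy + Pillow), assuming both frames are 8-bit RGB files shot at identical settings; the file names and the function name are hypothetical. GIMP's Divide layer mode performs roughly the same per-pixel division, with the flat frame as the top layer.

```python
import numpy as np
from PIL import Image

def flat_field_correct(photo_path, flat_path):
    """Divide a photo by a normalized flat frame so that pixels the
    sensor consistently reads dark are boosted and pixels it reads
    bright are attenuated."""
    photo = np.asarray(Image.open(photo_path), dtype=np.float64)
    flat = np.asarray(Image.open(flat_path), dtype=np.float64)

    # Per-channel gain map with mean 1.0, so overall brightness is kept.
    gain = flat / flat.mean(axis=(0, 1), keepdims=True)
    corrected = photo / np.clip(gain, 1e-3, None)

    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical file names:
Image.fromarray(flat_field_correct("photo.png", "flat.png")).save("out.png")
```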
  • try it (Score:2, Insightful)

    by tverbeek ( 457094 ) * on Saturday January 27, 2007 @01:45PM (#17784094) Homepage
    I don't know. Why don't you try it?
  • Interesting idea (Score:4, Insightful)

    by Aladrin ( 926209 ) on Saturday January 27, 2007 @01:45PM (#17784096)
    That's an interesting idea, but it assumes some pretty clean conditions. The light has to be absolutely uniform over the entire surface, it would probably need to be blurred as you said, the surface would have to be exactly the same color everywhere (no dust, no marks), the surface would have to be completely non-reflective, and there are probably other requirements I haven't thought of. It would be extremely hard.

    It also assumes that the variations are always the same, and that the variations in your photos come from defects rather than from natural color differences in the real world and the digital camera's attempt to map them onto a very restricted color palette. (One way to test that assumption is sketched below.)
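    One way to test whether the variations really are fixed: shoot several flat frames under identical conditions and compare the frame-to-frame scatter at each pixel with the pattern common to all frames. A sketch, assuming the frames are already loaded as a single NumPy stack:

    ```python
    import numpy as np

    def split_noise(flat_stack):
        """flat_stack: float array of shape (N, H, W), N flat frames
        shot back to back. The per-pixel mean keeps whatever is fixed
        (sensor pattern, dust); the frame-to-frame standard deviation
        is the random component."""
        fixed = flat_stack.mean(axis=0)
        random_noise = flat_stack.std(axis=0)
        return fixed, random_noise
    ```

    If the random component is comparable to the spread of the fixed pattern around its own mean, the variation is mostly shot and read noise, and no single correction layer can remove it.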
  • by Gothmolly ( 148874 ) on Saturday January 27, 2007 @02:04PM (#17784228)
    a) If you are using anything above ISO 50 on a cheap digital (like yours), you will get ISO noise
    b) blue sky is not uniformly blue; you can't expect 7.1 million pixels to all agree (you can measure this yourself - see the sketch below)
    c) there may have been microscopic dust on your lens

    Basically, you're looking for your camera to be Adobe Illustrator, and it isn't.
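    To put a number on this, you can measure the scatter in a nominally uniform patch yourself. A sketch, with a hypothetical file name and crop position:

    ```python
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("sky.jpg"), dtype=np.float64)
    patch = img[500:600, 500:600]  # a 100x100 crop of "uniform" sky

    # The standard deviation is the pixel-to-pixel variation the poster
    # noticed; shoot the same scene at several ISO settings and watch it grow.
    print("per-channel mean:", patch.mean(axis=(0, 1)))
    print("per-channel std: ", patch.std(axis=(0, 1)))
    ```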
  • by Overzeetop ( 214511 ) on Saturday January 27, 2007 @02:25PM (#17784346) Journal
    You'd be just devastated if you blew a film image up to the level where you could see the grain.

    Here are two questions for you:

    1) Do you find that you are printing your images at sizes larger than 12x18?

    If you are, then you probably ought to have more pixels (i.e., a better camera). I'm okay with digital pictures down to about 150 dpi; others swear that you need 300+ (the arithmetic is sketched below). Then again, there are people who swear that $3000 unobtainium-coated silver strands wrapped in virgin PTFE and assembled when the planets are in alignment make their music sound better.

    2) Presuming you are actually printing at at least 200 dpi, can you really see the difference without a loupe on the final prints? I'm not worried about your monitor, because I'm going to bet that if you have a consumer-level camera, you're not doing photo editing on a 7.1MP monitor.

    You see, if you can't tell, don't worry about it. Let your geek side go, and spend more time in the field and less time in the darkroom. Seriously - unless you have significant image problems you can see in your final output, the camera and imaging are good enough. Go take some great pictures, and worry a bit less about having digitally perfect pixels.
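    The arithmetic behind question 1, assuming the 7.1MP sensor is the usual 4:3 grid of roughly 3072 x 2304 pixels:

    ```python
    # Print resolution is pixels divided by inches on each edge;
    # the smaller of the two values limits the print.
    width_px, height_px = 3072, 2304  # ~7.1MP at 4:3

    for w_in, h_in in [(6, 4), (12, 8), (18, 12)]:
        dpi = min(width_px / w_in, height_px / h_in)
        print(f"{w_in}x{h_in} in: {dpi:.0f} dpi")

    # 6x4:   512 dpi - far more than the eye resolves
    # 12x8:  256 dpi - comfortably past 200 dpi
    # 18x12: 171 dpi - right around the 150 dpi floor mentioned above
    ```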
  • by dlevitan ( 132062 ) on Saturday January 27, 2007 @02:58PM (#17784556)
    Yes, your $1000 digital camera is not going to have a perfect CCD. There is no such thing as a perfect CCD. And I don't understand why you care unless you're trying to do science work with it. Look at it this way: no one is ever going to look at your picture and say it's horrible because one pixel is slightly different from the one next to it. You look at the content of the whole photograph, not three pixels.

    If you are trying to do science, then a DSLR is not what you need. DSLRs use Bayer interpolation to create a color image. This inherently kills your accuracy since not every pixel in the image is actually a pixel on the camera. CCDs used for astronomy (which cost more than your whole camera) do not do this and they still suffer from the effects you mentioned. Every exposure used for scientific work goes through a whole data reduction process that tries to remove as much noise as possible. Others have mentioned most of the process (bias frames, dark frames, and flat fields), but most astronomical CCDs also have an overscan region which is part of the CCD that is not exposed to light and is used to record the thermal noise on the CCD. This changes from exposure to exposure and from temperature to temperature (and yes I am a researcher in astronomy).

    In short, there's no reason for you to care about this, and there's no chance of fixing it completely (CCDs are not digital - they're analog). There's also no way of applying the same solution to every photograph (and CCDs can change over time). Don't worry about pixel-to-pixel variations and just take photographs for their content. If you're really interested in how CCDs work, read the Handbook of CCD Astronomy by Steve Howell. It's a great introduction to CCDs and how to use them for astronomy. The basic reduction formula is sketched below.
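    The reduction steps named above compose into one formula: corrected = (raw - bias - scaled dark) / normalized flat. A sketch, assuming single calibration frames rather than averaged masters and skipping the overscan correction:

    ```python
    import numpy as np

    def reduce_frame(raw, bias, dark, flat, t_exp, t_dark):
        """Basic CCD reduction. All image inputs are float64 arrays of
        the same shape; t_exp and t_dark are exposure times in seconds."""
        # Dark current accumulates roughly linearly with exposure time.
        dark_current = (dark - bias) * (t_exp / t_dark)
        science = raw - bias - dark_current
        # Normalize the flat so the division preserves overall counts.
        flat_norm = (flat - bias) / np.median(flat - bias)
        return science / np.clip(flat_norm, 1e-6, None)
    ```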
  • Your post was a little offtopic; and now mine is WAY offtopic, but I have to respond. Hopefully the mods will look kindly on this, and my "Offtopic" mods will equal my "Insightful" ones for a break-even ;)

    I disagree with your basic premise here completely. Everything you say about KNOWLEDGE is correct, but that doesn't address stupidity, which "Overzeetop"'s sig is about. There are indeed many more stupid people in the world than there used to be, and I put it down to many factors; a noticeable one that sets today's society apart from the recent past is personal responsibility.

    First let me define intelligence (and therefore also stupidity): the very definition of intelligence is debated, and in some people's definition it does include such things as knowledge. But if we're going for a "purist" definition, then it boils down to "the ability to figure stuff out" (reasoning). Naturally, those WITH more knowledge are likely to be more intelligent, and those who are intelligent will likely gain more knowledge; however, knowledge is not itself a factor in the definition of intelligence.

    Now, back to personal responsibility: Once upon a time, if someone did something stupid, they'd suffer the consequences for it. These days, they can blame others for their own mistakes. Because of this, they generally don't learn "the hard lessons" and will continue to do stupid things. So we can see from this that personal responsibility has a direct effect on learned intelligence. Now, there is also a direct effect the other way as well - those who are intelligent are less likely to blame others when they do something stupid once in a while, and they will learn "the hard lesson" from it. I put this down to innate intelligence (be it genetic or learned at a young age, that's a debate I won't discuss here).

    Other factors which I'll mention, but not go into such great depth on, include: less practical and more faith-based adherence to religious ideals (somewhat related to personal responsibility); less importance placed on intelligence in many modern education systems (it's okay to be stupid; we'll teach you how to get by as you are); and games that don't require as much critical thinking in order to win (games of chance or reaction vs. games of skill - I'm thinking mainly of non-computer games here, but it applies to both).



    To try and save my "on-topicness" a little, I'll just say I agree completely with your analysis of people's lack of desire to learn about the things they need to know in order to be good at what they want to do - they want an "easy fix". Some of this may actually fall back to my definition of stupidity above, but it probably falls more to sheer laziness, which is closely related to stupidity and shares many of the same factors, though I'd class it as a mostly independent phenomenon.
