Using The GIMP (or Photoshop) to Improve Photos? 111

Nom du Keyboard asks: "Is it possible to use The GIMP (or Photoshop) to improve my digital photos? I have a mid-range 7.1MP Olympus camera capable of shooting in Raw mode. When I inspected a section of clear blue sky on a bright, sunny day (which I've long believed to be a relatively good reference for uniform color and brightness), I was surprised (disappointed, since I expect digital perfection) at the variance in adjacent pixels. It's also a quick way to identify any bad pixels. Surprisingly, actual photos from this camera look pretty good so far despite this variance. That led me to wonder: if you shot a uniform white surface, perhaps blurred as much as possible to avoid any imperfections in the surface itself, could a correction (adjustment) layer be created in GIMP or Photoshop, exactly tuned to your camera, that fixed the variations in your CCD sensor and improved the image quality in the process? Any thoughts?"
This discussion has been archived. No new comments can be posted.
  • try it (Score:2, Insightful)

    by tverbeek ( 457094 ) *
    I don't know. Why don't you try it?
    • Re: (Score:3, Informative)

      by smallfries ( 601545 )
      I'd try it twice, with two different but supposedly uniform surfaces. I'll bet that the fluctuations in pixel intensities aren't uniform across both surfaces as they're not caused by a systematic bias in the CCD. Rather they are caused by random noise in the circuit.

      If it turns out that there is a systematic bias (i.e. one that you can correct in the GIMP with a static image) then you would be best off taking a picture of something as black as you can make it. The inside of a bag should do. And then as light
        • You're right, the bias isn't systematic and it won't work. My favourite way of getting an exposure as dark as possible is to use a camera that can shoot with the shutter closed; some can do that automatically right after a normal exposure in order to detect hot pixels. However, if you take an exposure that is "as light as possible", every pixel will be over-exposed to the max, so you'll get zero information. (Unless you have a sensor with a really unusual fault.)
        • Re: (Score:3, Interesting)

          by fbjon ( 692006 )
          Many cameras do this automatically as you say, like my FZ30, for instance. It's called a dark frame, and it simply closes the shutter and takes a dark image right after the actual shot, using the same shutter speed, then subtracts it from the original using some algorithm. This will take out hot spots that are mostly consistent over a short period of time, but won't touch any other noise.

          Replying to TFA:

          surprised (disappointed, since I expect digital perfection) at the variance in adjacent pixels.


          • by mi ( 197448 )

            You (the submitter) are taking images of the real world, where light moves around somewhat randomly in energy packets called photons, not in perfect rays.

            Are you really suggesting a digital camera is capable of capturing fluctuations among photons? They really are "perfect rays" as far as today's (and tomorrow's) equipment is concerned...

    • Re: (Score:1, Interesting)

      by Anonymous Coward
      Because this is Ask Slashdot, where users try not to do any real thinking, instead seeing if people have done it for them.

      I think that this section has had 1 good question in the last month.
      • While you are absolutely right in your diagnosis of Ask Slashdot: what is wrong with that? Isn't it wonderful that you can get other people's opinions on questions and you don't have to figure out all the answers yourself?

        If our culture were based on figuring everything out ourselves, we'd still be living in caves learning to catch deer for a living, unless we had already gone extinct trying to figure out how to raise children by ourselves.

        As to the subject at hand: is it realistic to expect all the pixels to
  • Interesting idea (Score:4, Insightful)

    by Aladrin ( 926209 ) on Saturday January 27, 2007 @01:45PM (#17784096)
    That's an interesting idea, but it assumes some pretty clean conditions. The light has to be absolutely the same over the entire surface, it would probably need to be blurred as you said, the surface would have to be absolutely the same color everywhere (no dust, no marks), the surface would have to be completely non-reflective, and probably some other things that I haven't thought of. It would be extremely hard.

    It also assumes that the variations are always the same, and that the variations in your photos are from defects and not from the natural color differences in the real world and the digital camera's attempt to map them to a very restricted color palette.
    • Re: (Score:3, Informative)

      by Goeland86 ( 741690 )
      That's not too hard. Get a projection screen with a spotlight aimed at it; there's your uniform surface. Then you can create your layer fairly easily, no?
      Any meeting room, or multimedia classroom will have one of those, anywhere with a projector will work. You just need a pair of tripods, your camera and a fairly powerful wide range spotlight. Done!
      • Spots (at least theater lighting instruments, which is what I have experience with) aren't even at all. You can easily see variances if you look close. Most of theater is about not looking too close, so it works out.

        Evan "I have a follow spot in my kitchen right now"

    • How do you get a surface that is white, but is completely non-reflective? Doesn't the fact that you see it as white mean it reflects white light (or all colours, if you want to get technical)? A surface that is completely non-reflective would be black, although just about every surface I've seen will reflect some light.
  • It's out there. (Score:1, Informative)

    by Anonymous Coward
    My digital rebel xti has software that lets you do what you describe. It's used to correct blemishes caused by dust particles on the sensor. In a regular imaging tool you could probably work out a similar fix by creating a mask that does some enhancements and whatnot where there are darker pixels. Just a guess, I've never done it.
  • by linuxbert ( 78156 ) on Saturday January 27, 2007 @01:53PM (#17784158) Homepage Journal
    What you describe is normal, and your question exhibits a lack of understanding about white balance.
    Essentially, if your white is right, then all the other colors will be as well. Your camera has several settings to compensate for various light types (Tungsten, Fluorescent, Daylight). Yours is probably set to AWB (Auto), which is easy, as the camera will figure it out pretty well. There is also a Custom setting, which you can configure for the lighting by shooting a grey card - a card that is 15% grey (or thereabouts) that the camera can then use to figure out what true white is.
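The grey-card idea above can be sketched in a few lines of NumPy. This is a toy illustration, not any camera's actual white-balance algorithm, and all the RGB values are invented: the card is sampled from a shot with a warm cast, and each channel is scaled so the card patch comes out neutral.

```python
import numpy as np

# Grey card as shot, with a warm (reddish) cast. Values are invented.
card_rgb = np.array([140.0, 128.0, 110.0])

# Per-channel gains that make the card patch neutral (all channels equal).
gains = card_rgb.mean() / card_rgb

# Apply the same gains to any other pixel from the same shot.
pixel = np.array([200.0, 180.0, 150.0])
balanced = pixel * gains
```

Real raw converters work on linear sensor data and use more robust statistics than a single patch, but the per-channel-gain idea is the same.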

    The variation in pixels can also be the result of the ISO setting you are using. 100 has the least noise, but also requires longer exposures. Higher settings (400, 800, 1600) react faster, but have more noise. This is a tradeoff between desired exposure and ambient light.

    I would suggest reading Strobist [] for more on lighting. There are also several other sites dedicated to post-processing images that you may find helpful. It also might be worth looking at the various pool discussion groups on Flickr.

    • by Frobnicator ( 565869 ) on Saturday January 27, 2007 @02:53PM (#17784538) Journal
      You are right that light balance and natural noise are both very important.

      Take for example the camera-assistant production slates (those little boards you see movie makers use, with the clapper on top). They do a lot more than just show the script location and film location: they also have little black and white (and gray) lines on the clapper. Those are deceptively simple but amazing tools. The clapper makes a sharp noise that lets you sync and balance the audio, digital boards will record the sync for individual film frames, and the lines provide for image calibration.

      The black, gray, and white markings allow you to balance the brightness in post production exactly the way the original poster was asking for.

      Most boards also have calibrated colors to help balance those, as well.

      Shooting a slate is a very important step in good photography, both for stills and motion pictures.

      And to the posters suggesting trying to eliminate all natural noise in photos, you don't really understand what you are talking about. Your eye expects noise in the real world.

      Photos need natural noise, they look unnatural or cartoonish without it. Traditional photographs are full of noise because the silver halide gelatin and other chemicals are not perfectly uniform. The chemicals naturally clump up and form noise. (This property makes it easy to identify tampered photos since the natural noise is different between two areas.) Even digital photos get noise when you print them or display them on your screen. If your camera automatically smoothed out all the noise, the image would look like a cartoon or a naively ray-traced image.

      As far as using image editing apps such as the GIMP or Photoshop, yes they are able to do a great job with digital images but they are limited by the knowledge and skill of the human using them.

      • by gwait ( 179005 )
        True, real images have noise in them, but you don't need the camera adding extra noise that is not there in the "real" image.
        There are many types of noise from an image sensor. If the camera has fixed-pattern noise, then yes, one could take a shot of a uniform white background, lower its digital intensity until the lowest value is zero, and then subtract that noise-pattern image from a digital photo (at the same exposure duration as the noise-pattern image) to clean out the pattern noise. Perhaps some of
        • by flewp ( 458359 )

          True, real images have noise in them, but you don't need the camera adding extra noise that is not there in the "real" image.

          So true. The problem I've seen a lot of people struggle with is distinguishing between camera noise (the added noise you speak of) and the real, natural noise. I've seen more than one example of someone applying a filter like Noise Ninja to an image, only to get the cartoony effect that Frobnicator was referring to. For instance, think of a picture of a road. The concrete/asphalt/etc. surface is going to have what appears to be noise (the natural bumps, color variations, etc. in the road surface), while

    • Quick technical correction: usually light meters are looking for 18% grey. At least that's what I have always been taught.

      However, there is an interesting post I just uncovered here [] that discusses the true standard as being 12%.

      Of course perhaps you knew this and picked the middle ground? :)
  • Noise ninja (Score:5, Informative)

    by Illusion ( 1309 ) on Saturday January 27, 2007 @01:55PM (#17784164) Homepage
    See Noise Ninja [] for well-known commercial software that does this. There's apparently now also a Linux version.

    While playing with it a while ago, I found that JPGs compress something like 25-33% better after you remove the CCD noise. Improving the image quality while making the images take less space seems like a nice combination. :)

    This seems like it would be great to get in the hands of more people as a free software app or plugin, but I'm not aware of any.

    -- Aaron

    • Or GREYCstoration (Score:5, Interesting)

      by dschl ( 57168 ) on Saturday January 27, 2007 @02:18PM (#17784296) Homepage
      GREYCstoration []. Ugly name, but does the same job, and is open source. Haven't tried it, but there appear to be several plugins for various open source digicam programs and image editors (bottom of their downloads [] page).
    • Bibble [] (runs on Win/Linux/Mac) includes Noise Ninja and has a host of other features. Lightning quick on raw formats too... One of the few bits of software that's actually worth paying for!

    • Just a small tidbit: cutting off high-frequency mess is exactly what setting your JPEG encoder to a decent quality should already do. If you use another tool to do it first, you're effectively encoding your image as something very much like JPEG, decoding it, and re-encoding it at "higher quality", which does nothing useful with that data: the encoder just figures out it's a string of 0's and rips them out.
      • by Illusion ( 1309 )
        That would be true if Noise Ninja or similar were indiscriminately blurring the image by stripping all high-frequency data, such as just using a lower-quality JPG setting. However, by removing just the CCD noise and leaving the rest alone, JPG actually encodes more of the high-frequency data that I care about. The resulting image turns out much more detailed than doing what you suggest.
  • by NereusRen ( 811533 ) on Saturday January 27, 2007 @01:55PM (#17784166)
    Using the sky or a white piece of paper may be interesting, but it probably won't give you anything you can use to calibrate the rest of your photos.

    A better bet for isolating the noise your camera generates is to take completely black photos, using the lens cap and some extra covering (and a dark room) to make sure absolutely no light hits the sensors. This will let you make raw images of the "dark noise" and "bias noise" that your camera generates, and subtract those images from your real photos before doing any other processing.

    Details of this method can be found here: [].
    • Re: (Score:3, Informative)

      by zippthorne ( 748122 )
      Better cameras already do this, by taking a picture with the shutter closed. You can sometimes select between taking a "dark frame" for every picture and taking a single "dark frame" to apply to all subsequent frames. Best to read the manual for your particular model to see if you have these features.
  • Yes. (Score:4, Interesting)

    by oskard ( 715652 ) on Saturday January 27, 2007 @01:56PM (#17784168)
    If we saw a sample of the photos, it would be easy to determine if they could be fixed. It's hard to understand what the exact problem is from a text description, but the general answer is: Yes, anything can be done with The GIMP / PS.
    • by qzulla ( 600807 )
      Anything? Then show me how to do Marshall Oils in GIMP. I still have not figured it out.


    • Even if you left the lens cap on, you can repair the image in The GIMP. It does take some manual editing of the RGB values at each pixel location though.
  • by DonnarsHmr ( 230149 ) on Saturday January 27, 2007 @01:57PM (#17784172) Homepage
    You're almost right. The method you're describing is called Dark Frame Subtraction. The idea is that you photograph the non-random noise inherent in the sensor and then take that out of the captured images. To do this, you make an image that is completely black (i.e. body cap on the front of the camera and viewfinder cover on the back) at the same temperature conditions and for the same shutter speed as the image you are trying to fix. Then you add that as a layer in Photoshop, subtract it from the real image, and the non-random noise disappears.

    However, it is MUCH more likely that the noise you are complaining about is random thermal noise, which is not treatable via Dark Frame Subtraction. Because it's, well, random noise, it'll be different in every shot. There are several photoshop plugins that can address this issue. In my opinion, the most effective and easiest to use of them is Noise Ninja.
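The dark-frame subtraction described above can be sketched with NumPy arrays. This is a minimal illustration, not real raw processing: the "sensor" is a tiny made-up array, and the fixed pattern is a single invented hot pixel.

```python
import numpy as np

# Invented fixed-pattern noise: one hot pixel at row 1, column 2.
fixed_pattern = np.zeros((4, 4))
fixed_pattern[1, 2] = 50.0

# Light frame: a uniform scene plus the sensor's fixed pattern.
light_frame = np.full((4, 4), 100.0) + fixed_pattern

# Dark frame: lens cap on, same exposure, so only the pattern remains.
dark_frame = np.zeros((4, 4)) + fixed_pattern

# Subtract the dark frame and clip to the valid pixel range.
corrected = np.clip(light_frame - dark_frame, 0.0, 255.0)
```

With real frames you would do this on raw (linear) data before any other processing, since the pattern is additive at the sensor, not in the gamma-encoded output.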
    • by hankwang ( 413283 ) * on Saturday January 27, 2007 @02:47PM (#17784500) Homepage

      Then you add that as a layer in photoshop, subtract it from the real image, and the non-random noise disappears.

      I doubt that that will work. Once in the computer, the pixel values are not proportional to the absolute brightness; see gamma correction [] on Wikipedia. You would need to do the subtraction on linearly encoded data (12 or more bits rather than 8). Maybe Photoshop can indeed do this, provided you find the right settings, but GIMP as far as I know doesn't.
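The point about linear encoding can be demonstrated numerically. Below, a plain power-law gamma of 2.2 stands in for the real sRGB transfer curve (an approximation, not the actual piecewise sRGB formula), and the pixel values are invented. Subtracting the gamma-encoded values gives a visibly different answer than subtracting linear light.

```python
import numpy as np

GAMMA = 2.2  # simple power-law stand-in for the sRGB curve

def to_linear(x8):
    """8-bit gamma-encoded values -> linear light in [0, 1]."""
    return (np.asarray(x8) / 255.0) ** GAMMA

def to_encoded(lin):
    """Linear light -> 8-bit gamma-encoded values."""
    return np.clip(lin, 0.0, 1.0) ** (1.0 / GAMMA) * 255.0

light = np.array([120.0, 200.0])  # gamma-encoded pixel values
dark = np.array([10.0, 10.0])     # gamma-encoded dark-frame values

naive = light - dark                                     # subtract in the wrong space
linear = to_encoded(to_linear(light) - to_linear(dark))  # subtract linear light
```

Here `naive[0]` is 110, while the linear-space result stays near 120: an encoded dark level of 10 corresponds to very little actual light, so it should barely change the image.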

      • Well, I find it hard to believe that GIMP can't do something that photoshop can [].
        • Re: (Score:3, Interesting)

          by tigersha ( 151319 )
          Ok here is a list to quell your doubts:

          Photoshop has:

          Adjustment layers which allow you to change filters after the fact
          Filter layers which allow you to switch a stack of filters on and off and season to taste after the fact (in CS3)
          These two features allow you to view image processing more like a spreadsheet in the same way that Excel is better than a calculator

          Can do filters on the GPU in hardware (in CS3)
          Save for web
          Absolute color systems (Lab color)
          Capability to do color proofing fo
          • by r00t ( 33219 )
            The scaling just got fixed. It'll be in the next release AFAIK.
          • by Lehk228 ( 705449 )
            Gimp allows you to see your text on the image as you type.

            The GIMP interface is better than Photoshop's, unless you have learned Photoshop first.
            The price comparison does hold, because by being free GIMP can be acquired trivially on any internet-connected computer. Whatever version of Photoshop you use, if you are visiting someone across the country and need/want to do some image work, you are SOL if you depend on Photoshop.

            also the vast majority of photoshop users are criminals.
            • by Phisbut ( 761268 )

              also the vast majority of photoshop users are criminals.

              Over here on /. we call them "copyright infringers", not "criminals". They're civil offenders.

              • by Mr Z ( 6791 )

                Not after the DMCA. Circumventing an access control is a criminal act under the DMCA. Using a hacked key or similar is sufficient to meet that threshold.

              • by Lehk228 ( 705449 )
                If it was downloaded through BitTorrent, or traded for other warez, you are exchanging something of value (such as your parts in a torrent) to obtain that copy, making you a criminal under the NET (No Electronic Theft) Act, signed into law in 1997 by President Clinton, and facing up to 5 years in prison.
        • by Goaway ( 82658 )
          Well, I find it hard to believe that GIMP can't do something that photoshop can.

          Wow. You don't get out much, do you? So to speak?
      • Take a look at cinepaint []; it handles 8-, 16-, and 32-bit color channels and the Kodak Cineon (CIN), Digital Picture Exchange (DPX), and OpenEXR (EXR) file formats.
  • Sure Can. (Score:3, Informative)

    by philibob ( 132105 ) on Saturday January 27, 2007 @01:58PM (#17784178) Journal
    You already can. Some cameras let you shoot against a blank white area to compensate for dust particles on the CCD. It's called "Dust Reference" in Nikon Capture, which works with most of their DSLRs.
    • Even better, some cameras have anti-dust sensor coating and can remove dust from the sensor by vibrating it. Nikon's Dust Reference just masks the symptoms; the dust is still there. And technically, if you shoot raw, you can also use Dust Reference with any other camera, as a post-processing step.
  • yes.. (Score:3, Informative)

    by slashkitty ( 21637 ) on Saturday January 27, 2007 @01:59PM (#17784188) Homepage
    I don't know about your particular problem, but other camera flaws have been fixed with processing. For example, if your camera adds a vignette, you can shoot a piece of white paper, then remove that shading from all the photos. This gives you an automatic, scriptable way to do that with ImageMagick:

    Vignettation Removal []

    • Re: (Score:3, Informative)

      Note to anyone that plans on doing this - good digital SLRs have this kind of function built in and you should only consider this if your camera doesn't. The quality of the adjustment will actually be significantly worse unless you ensure:

      1. The light is hitting the white paper evenly
      2. The white paper is a nice bright and clean white (don't even THINK of using standard office copy paper)
      3. The paper is of a very short grain
      4. There is no curving or folding in the paper

      It might be better to consider the alterna

      • by r00t ( 33219 )
        The camera is actually crap as far as this goes.

        The problem will vary with zoom and aperture, and maybe even with focus. If a flash is involved (yuck), you have to deal with even worse problems from that.
  • Not quite... (Score:4, Informative)

    by Joe Decker ( 3806 ) on Saturday January 27, 2007 @02:02PM (#17784210) Homepage
    Discounting truly bad pixels, variations in the sensor readings on an even sky have two sources: pure sampling noise, from the fact that each pixel is only collecting a finite number of photons, and a more constant, but still varying, per-pixel offset. It's likely with a daylight shot that you're primarily seeing the former; the latter effects tend to be more significant during long exposures in astrophotography. Check out the "Digital Rebel" astrophotography page here []; it outlines a procedure for measuring and subtracting off this varying per-pixel offset, but note that you need to essentially compute the "dark frame" (or offset) for a particular set of conditions (temperature, ISO, exposure time). That subtraction could be done in PS, but again, you really need a new "dark frame" for each shot.

    It is possible to smooth rough skies and such in Photoshop; I can't speak from personal experience with the GIMP, but I'd expect something similar would work. I'd take the image, duplicate a regular (non-adjustment) layer on top of the main image (call that second one "smoothed"), blur it (Gaussian blur; fiddle with the radius to keep the effect gentle), add a layer mask to "smoothed" and paint it so that it only targets the sky in the shot. You may end up finding that you want to leave a little noise in the resulting image to avoid posterization []; if your results are too smooth you can always adjust the opacity of the smoothed layer downward.

  • Though I've never tried it myself, I have heard that you can take long-exposure photos with the lens cap on to reveal any consistent noise in your camera and filter it out. By using a long exposure, you can highlight which pixels are brighter than others, then use that image to mask out the same noise in your other photos.

  • by Gothmolly ( 148874 ) on Saturday January 27, 2007 @02:04PM (#17784228)
    a) If you are using anything above ISO50 on a cheap digital (like yours), you will get ISO noise
    b) blue sky is not really blue, you can't expect 7.1 million pixels to all agree
    c) there may have been microscopic dust on your lens

    Basically, you're looking for your camera to be Adobe Illustrator, and it isn't.
  • Yes (Score:4, Informative)

    by YGingras ( 605709 ) <> on Saturday January 27, 2007 @02:05PM (#17784230) Homepage
    Scale down the picture, choose cubic interpolation, and you're done. You can't fix the original, the information is scrambled already, but you can use the information of the larger image to average the pixels of the small image to get something clean. When you read X megapixels, you should know that this is a scam. There are no cameras out there that will give you an image usable at X resolution, but you can still have pretty pictures at X/2 (which is roughly 3/4 of the side length of the original).
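The noise-averaging effect of downscaling is easy to show. The sketch below uses simple 2x2 binning (averaging each 2x2 block) as a crude stand-in for cubic-interpolation resampling, on a made-up flat scene with random noise: the variance of independent noise drops by roughly a factor of four.

```python
import numpy as np

# A flat "scene" at level 100 with invented Gaussian sensor noise on top.
rng = np.random.default_rng(42)
image = 100.0 + rng.normal(0.0, 8.0, size=(8, 8))

# 2x2 binning: reshape so each output pixel averages a 2x2 block.
h, w = image.shape
binned = image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Averaging four independent noisy samples per output pixel cuts the noise variance by about 4x, which is why a half-resolution image can look much cleaner than the original.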
  • If you're using Windows you can download the .NET framework and grab Paint.NET. I have it and it's great if you're familiar with MS Paint. That, with the ability to add different plugins, makes it a wonderful free program, like GIMP.
    • Re: (Score:3, Informative)

      Paint.NET is really great for those who need a quick and dirty image editor with a lot more power than MSPaint. However be careful - on most systems, it's a SERIOUS resource hog when dealing with large images (such as the 8 megapixel images from my camera). I find Paint.NET is great for anything that fits on my screen without scaling to less than about 50%, but go above that and my poor little work laptop (Dell Latitude D510 - 512MB RAM, 1.73GHz Pentium-M) will choke and die with lots of swap file use. P
  • Something similar (Score:5, Informative)

    by ceoyoyo ( 59147 ) on Saturday January 27, 2007 @02:06PM (#17784236)
    Something similar is done in astrophotography. There are two kinds of fields you can remove from your images. A dark frame (taken with the lens cap on) is subtracted to remove things like pattern noise, hot pixels and amp glow that appears in images. A flat frame is then used to remove multiplicative effects, like vignetting and dust spots. Acquiring a flat frame can be tricky. One of the best ways is to use a translucent lens cap and a fairly bright light that provides a fairly uniform illumination.

    However, the effects (unless there's something seriously wrong with your camera) are really only noticeable for long exposures.
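The flat-frame step described above removes multiplicative effects by division rather than subtraction. Here is a toy NumPy sketch with an invented vignette profile standing in for real optics; the flat frame is normalised to mean 1 before dividing it out, so the overall exposure is preserved.

```python
import numpy as np

# A uniform scene and an invented vignette (brightness fall-off across the frame).
scene = np.full((4, 4), 100.0)
vignette = np.linspace(1.0, 0.5, 16).reshape(4, 4)

light_frame = scene * vignette   # what the sensor records: scene times fall-off
flat_frame = 200.0 * vignette    # a uniform target shot through the same optics

# Normalise the flat to mean 1, then divide it out of the image.
corrected = light_frame / (flat_frame / flat_frame.mean())
```

The corrected frame comes out perfectly uniform, because the same multiplicative fall-off appears in both the image and the flat. A real pipeline would subtract dark frames from both before the division.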
    • Re: (Score:3, Interesting)

      by dougmc ( 70836 )

      One of the best ways is to use a translucent lens cap and a fairly bright light that provides a fairly uniform illumination.
      We used to just point the telescope up during the day and take a picture of a nice blue sky -- it worked very well. (Of course, this was 15 years ago, and maybe things have changed somewhat and there are better ways to do it now.)
      • Re: (Score:3, Interesting)

        by gsn ( 989808 )
        Still do skyflats :-) but it does depend on what passband you care about for imaging - twilight sky flats work pretty well in B. These are sort of bothersome on larger telescopes, because you don't want to saturate, but you do want good statistics, and you don't want to cut into observing time, and you have to slew between each one to reject any bright early-rising stars. A lot of big telescopes use quartz lamps to illuminate a screen and image that. Dome flats are pretty common these days, especially in spec
  • Yes (Score:5, Informative)

    by Ankh ( 19084 ) * on Saturday January 27, 2007 @02:07PM (#17784242) Homepage
    First, the biggest improvement you are likely to see in the Gimp is if you go to Colours->Levels (in older versions of the Gimp it's Layers->Colours->Levels) and click Auto. For pictures that should contain some black and some white this will usually make a noticeable improvement.

    Second, yes, Canon (for example) includes (Windows only, proprietary, secret, closed-source) software to compensate: you shoot a 25% grey surface. You can also use this inside the camera itself: there it will use the data for white balance correction.

    In practice, though, it's fairly hard to do this yourself. One difficulty is that the amount and position of colour aberrations will probably vary depending on the lens you use, or, with a fixed lens, the amount of zoom and the aperture size. I know I found that when my Casio developed some dark spots.

    There are some programs that are used with hugin, the panorama stitching UI, that help with some lens corrections; it might be that you could ask those people. However, a lot of the variation you are seeing is likely to be digital noise. Try taking 3 shots using a tripod and timer or remote, and comparing them.
  • Others have alluded to it already, but what you're asking for sounds exactly like noise reduction. And there's plenty of software out there that does that. The problem with noise reduction is that it reduces fine detail as well. (Although some software does a respectable job, it can't perform miracles.)

    If you're concerned about noise, what nobody has pointed out yet is that you may want to consider a camera with fewer pixels, a physically larger sensor, or both. Cramming 7 million photosites on a tiny 1/2.5
    • I got only 6.3 MP (6.0 MP active, 6.1 MP claimed) on a 23.5x15.7 mm chip. Your example was 7 MP on 5.8x4.3 mm.

      Going by the 6.3 MP figure, mine is thus 58.56 square micrometers per pixel, while your example is only 3.56 square micrometers per pixel. That is a factor of 16.44 difference.
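The photosite-area arithmetic above checks out; here it is as a small Python calculation (dimensions in mm, pixel counts in megapixels, 1 mm^2 = 1e6 um^2):

```python
def area_per_pixel_um2(width_mm, height_mm, megapixels):
    """Sensor area divided by pixel count, in square micrometres per pixel."""
    return width_mm * height_mm * 1e6 / (megapixels * 1e6)

dslr = area_per_pixel_um2(23.5, 15.7, 6.3)    # large APS-C-sized chip: ~58.6 um^2
compact = area_per_pixel_um2(5.8, 4.3, 7.0)   # small compact-camera chip: ~3.56 um^2
ratio = dslr / compact                        # ~16.4x more area per photosite
```

Larger photosites collect more photons per exposure, which directly improves the signal-to-noise ratio at a given ISO.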
  • by Technician ( 215283 ) on Saturday January 27, 2007 @02:23PM (#17784336)
    Pixels are not identical in their dark current and light sensitivity.

    For information on correcting these issues, which compound in long exposures, find a good astrophotography forum. They discuss taking long exposures of various times with the camera capped to identify bright (high dark current) pixels. They use these corrections in their star shots of the same exposure time to subtract out the brightness caused by high-dark-current pixels. In bright scenes the same thing can be done to correct for low-sensitivity pixels. A way-out-of-focus shot of a white screen with primary color filters or lighting should be able to give you some good sensor correction factor data. Remember that the errors are temperature sensitive, so a full correction may be hard to get.
  • by Overzeetop ( 214511 ) on Saturday January 27, 2007 @02:25PM (#17784346) Journal
    You'd be just devastated if you blew a film image up to the level where you could see the grain.

    Here are two questions for you:

    1) Do you find that you are printing your images at sizes larger than 12x18?

    If you are, then you probably ought to have more pixels (i.e., a better camera). I'm okay with digital pictures down to about 150dpi; others swear that you need 300+. Then again, there are people who swear that $3000 unobtainium coated silver strands wrapped in virgin PTFE and assembled when the planets are in alignment make their music sound better.

    2) Presuming you are actually printing at at least 200dpi, can you really see the difference without a loupe on the final prints? I'm not worried about your monitor, because I'm going to bet that if you have a consumer-level camera, you're not doing photoediting on a 7.1MP monitor.

    You see, if you can't tell, don't worry about it. Let your geek side go and spend more time in the field and less time in the darkroom. Seriously - unless you have significant image problems you can see in your final output, the camera and imaging is good enough. Go take some great pictures, and worry a bit less about having digitally perfect pixels.
    • Responding to your sig, it's not that people are particularly stupid, it's that the world is getting more complicated faster than most people can keep up. Also technology is getting democratised. "Audiophilia" that you mention arises because the scientifically illiterate want to seem more capable and knowledgeable around audio equipment than they actually are, and there are more people than before who can afford this stuff. Once upon a time the average middle class joe could afford one hobby, so he became p
      • Your post was a little offtopic; and now mine is WAY offtopic, but I have to respond. Hopefully the mods will look kindly and my "Offtopic" mods will equal my "insightful" for a break-even ;)

        I disagree with your basic premise here completely. Everything you say about KNOWLEDGE is correct, but that doesn't address stupidity, which "Overzeetop"'s sig is about. There are indeed many more stupid people in the world than there used to be, and I put it down to many factors - a noticeable one that is differen

    • Re: (Score:3, Informative)

      by Andy Dodd ( 701 )
      "If you are, then you probably ought to have more pixels (i.e., a better camera). I'm okay with digital pictures down to about 150dpi, others swear that you need 300+. Then again, there are people who swear that $3000 unobtainium coated silver strands wrapped in virgin PTFE and assembled when the planets are in alignement make their music sound better."

      More pixels is not necessarily better. More sensor area is usually more important.

      This is why high-end DSLRs with only 4-5 megapixel resolution deliver bett
      • All true, but in this case, in order to get more pixels, he/she will have to go to a "better" camera - i.e. a DSLR. There aren't many cheap cameras with significantly more than 7 MP (haven't been in the market in the last few months, but most P&S go about 8 max, iirc).

        I totally agree about the sensor area. It's one reason that my P&S was selected for sensor size/efficiency, but I know that a DSLR at 2 stops faster and the same pixel count will still produce better pictures.

        Unfortunately, the OP has
  • 1. Open image file.
    2. Duplicate layer.
    3. Select the subject of your photo using the lasso tool. It doesn't need to be perfect, just outline it.
    4. Go to Select -> Feather. Give it about 30px when it asks.
    5. Go to Layer -> New -> Layer Via Copy.
    6. Go to the second layer, this one should be called "Background copy"...or whatever you renamed it.
    7. Go to Filter -> Blur -> Gaussian Blur, and then blur that layer such that you can still make out shapes.
    8. Save new image.
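The eight steps above can be sketched as a short script, here using Pillow in place of the GIMP UI (the rectangular "subject" box and the parameter values are made up for the demo; in GIMP you would draw the selection by hand with the lasso):

```python
from PIL import Image, ImageDraw, ImageFilter

def blur_background(img, subject_box, feather=30, blur_radius=8):
    """Blur everything outside subject_box, keeping a feathered edge
    around the subject (mirrors steps 2-7 of the recipe above)."""
    # Steps 2, 6, 7: a blurred copy of the whole image.
    blurred = img.filter(ImageFilter.GaussianBlur(blur_radius))
    # Step 3: a rough selection of the subject (a rectangle here).
    mask = Image.new("L", img.size, 0)
    ImageDraw.Draw(mask).rectangle(subject_box, fill=255)
    # Step 4: feather the selection by ~30px.
    mask = mask.filter(ImageFilter.GaussianBlur(feather))
    # Step 5: composite the sharp subject over the blurred background.
    return Image.composite(img, blurred, mask)

# Hypothetical usage; any RGB image works.
demo = Image.new("RGB", (200, 150), (120, 180, 240))
result = blur_background(demo, (60, 40, 140, 110))
```

The feathering is what hides the seam: without it, the sharp subject sits on the blurred background with a hard edge, which is exactly the cardboard-cutout look the reply below complains about.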
    • This will simply make your subject look like a cardboard cutout. It's a half-decent gimmick if you're doing webpage design, but useless for real photography.

      Folks, if you want to isolate a subject, use a lens with a larger opening, narrower field of view, or just get closer to your subject.

      • Instead of blurring, save as a minimum-quality JPEG and then load it again.

        As long as you maintain alignment with the 8x8 JPEG compression blocks (possibly 8x16, 16x8, or 16x16 in the chroma channels) you'll get very little additional loss from subsequent recompression. The high-frequency information is simply gone.

        Now the non-critical parts of the image will compress really well.
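A rough sketch of that round trip with Pillow (an in-memory buffer stands in for the saved file):

```python
from io import BytesIO
from PIL import Image

def jpeg_roundtrip(img, quality=1):
    """Save at minimum JPEG quality and reload. High-frequency detail
    in each 8x8 DCT block is discarded on the first pass, so later
    recompressions at the same block alignment add little extra loss."""
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

# Hypothetical usage: crush a frame, then keep editing the result.
softened = jpeg_roundtrip(Image.new("RGB", (64, 64), (200, 30, 30)))
```

Note the caveat from the comment above: cropping or shifting the image between passes breaks the 8x8 block alignment and reintroduces compression loss.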
      • Folks, if you want to isolate a subject, use a lens with a larger opening, narrower field of view, or just get closer to your subject.
        Please oh great guardian of wisdom, how do I get closer to the SKY?
  • by dlevitan ( 132062 ) on Saturday January 27, 2007 @02:58PM (#17784556)
    Yes, your $1000 digital camera is not going to have a perfect CCD. There is no such thing as a perfect CCD. And I don't understand why you care unless you're trying to do science work with it. Look at it this way: no one is ever going to look at your picture and say it's horrible because one pixel is slightly different from the one next to it. You look at the content of the whole photograph, not three pixels.

    If you are trying to do science, then a DSLR is not what you need. DSLRs use Bayer interpolation to create a color image. This inherently kills your accuracy since not every pixel in the image is actually a pixel on the camera. CCDs used for astronomy (which cost more than your whole camera) do not do this and they still suffer from the effects you mentioned. Every exposure used for scientific work goes through a whole data reduction process that tries to remove as much noise as possible. Others have mentioned most of the process (bias frames, dark frames, and flat fields), but most astronomical CCDs also have an overscan region which is part of the CCD that is not exposed to light and is used to record the thermal noise on the CCD. This changes from exposure to exposure and from temperature to temperature (and yes I am a researcher in astronomy).

    In short, there's no reason for you to care about this, and there's no chance of fixing this completely (CCDs are not digital - they're analog). There's also no way of applying the same solution to every photograph (and CCDs can change over time). Don't worry about pixel-to-pixel variations and just take photographs for their content. If you're really interested in how CCDs work, read the Handbook of CCD Astronomy by Steve Howell. It's a great introduction to CCDs and how to use them for astronomy.
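The data reduction the parent describes (bias frames, dark frames, flat fields) boils down to per-pixel arithmetic, sketched here with NumPy; the frame contents in the usage example are invented for illustration:

```python
import numpy as np

def reduce_frame(raw, bias, dark, flat):
    """Standard CCD reduction: subtract the bias and dark frames,
    then divide by a bias-subtracted, normalized flat field to remove
    pixel-to-pixel sensitivity variations."""
    flat_norm = (flat - bias) / np.mean(flat - bias)
    return (raw - bias - dark) / flat_norm

# Synthetic example: a uniform scene seen through uneven pixel gains.
gain = np.array([[0.9, 1.0], [1.1, 1.0]])   # per-pixel sensitivity
bias = np.full((2, 2), 100.0)
dark = np.full((2, 2), 10.0)
flat = bias + 5000.0 * gain                  # flat field exposure
raw = bias + dark + 1000.0 * gain            # uniform 1000-count scene
scene = reduce_frame(raw, bias, dark, flat)  # recovers a flat 1000
```

This is also why a single canned correction layer in GIMP can't do the job: the dark frame depends on exposure time and temperature, so it has to be measured near the time of each shot rather than once per camera.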
  • I suggest reading this book for color management:
    Color Management for Photographers: Hands on Techniques for Photoshop Users by Andrew Rodney.

    The short gist is that you want to get a color calibrator like an Eye One Display II or a Colorvision Spyder2Pro to calibrate your monitor to a standard.

    Second, you will want to get these two books for color correcting your images.
    Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Colorspace by Dan Margulis
    Professional Photoshop: The
  • ight_Spots_01.htm []

    This should help. It's for long exposure shots, but the same concept applies. Keep in mind that your camera sensor won't always show the same noise, so you'll probably end up doing this for every shot.
  • The best way to improve your pictures is to learn something about composition and lighting. If the subject matter is good, you have a good picture.

    You want better data? Get a better camera. Ditch that point-and-shoot for a DSLR, or even (gasp!) a film camera. My 50 year old Crown Graphic [] takes pictures that the very best DSLRs can only dream about.


    • You want better data? Get a better camera. Ditch that point-and-shoot for a DSLR, or even (gasp!) a film camera. My 50 year old Crown Graphic [] takes pictures that the very best DSLRs can only dream about.

      A view camera? What size is the film? I've been thinking of getting a 645 medium format with both film and digital backs. I think this size would be good for both large landscapes and photojournalism; I want to do both.


      • The old press cameras, the 1950s newspaper photographer cameras, are the ancestors of the modern field/technical cameras. They fold up into a portable little box and take sheet film. They can do lots of view camera things, but they're not really view cameras. They were made in different models to take different size film. Mine, in particular, takes 4 by 5 inch film. Since an 8 by 10 print is only a 2x enlargement, you can just about get away with murder. :-)

        Press cameras were intended to be used handheld

        • It's helpful if you can do your own darkroom work.

          I have worked in darkrooms developing film and prints, but it's been too long since I have. There's a photographer association in the area, IFP Minnesota [], that I've been thinking of joining. It has classes and a darkroom I would be able to use after taking a darkroom orientation. Eventually I'd like to build and set up a darkroom in my basement

          You mentioned medium format, and used medium format gear is cheap nowadays

          I've loo

          • Another thing I'd like to do is get a good telescope with a camera mount so I can photo the stars.

            One word of advice on this.


            While astrophotography can be enormously rewarding, it can also be very expensive, and you really do have to know what you're doing to get anything other than blank film with vague blurs on it. Learn the sky. Learn the stars and planets and stuff. If you can't point your finger at, say, M31, how can you point a telescope at it, let alone photograph it?

            Too many people

            • Too many people attempt astrophotography, find it's far harder than it looks, and give up in frustration. Please don't be one of those people.


              Thanks for the warning, I'll try not to take it as a challenge. Now that you mention it, I couldn't point out and identify any stars now other than the North Star. I used to know some, but not now. Then again, growing up I was in a model rocketry club. I'd also go out at night and lie on the ground staring at the stars. Occasionally I'd get to see a

    • by n3k5 ( 606163 )
      If the sensor of your digital camera matches the size of your film, and price is irrelevant, you can get one that at least matches the quality of the analog one. Pictures that the very best DSLRs can only dream about? No, definitely not. Gear that your wallet can only dream about? Yes. :-)
    • My 50 year old Crown Graphic takes pictures that the very best DSLRs can only dream about.

      I don't know, I've seen some pretty nice shots out of digital backs like this one: Phase One P 45 [], 39mp, 4:3 sensor, medium/large format. That being said, my next baby is to be this: EOS-1Ds Mark II [], 16.7mp full frame. That plus my slowly growing collection of L lenses (currently 17-40, 24-105, and 70-200) will keep me happy for quite a while (as I'd hope, for the investment).

  • I've done a lot of work with scientific-grade CCDs, and as other people are pointing out, there are unavoidable limits to the noise in your image. A $50,000 scientific-grade CCD can be cooled via solid-state (Peltier) cooling down to around -50 degC, or, using an LN2 dewar-based unit, down to 77 K. The rule of thumb is that dark current (thermal noise) reduces by a factor of 2 for every 5 degree (Kelvin) drop in the temperature of the CCD. The LN2 cooled cameras are how astro people ge
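That rule of thumb is easy to put numbers on (a quick illustrative helper, not from any vendor's spec sheet):

```python
def dark_current_ratio(delta_t_kelvin):
    """Fraction of dark current remaining after cooling the CCD by
    delta_t Kelvin, per the factor-of-2-per-5-K rule of thumb above."""
    return 2 ** (-delta_t_kelvin / 5.0)

# Cooling from ~293 K (room temperature) to ~223 K (-50 degC, Peltier)
# is a 70 K drop, so dark current falls to 2**-14 of its room-temperature
# value, roughly a 16000x reduction.
remaining = dark_current_ratio(70)
```

An uncooled consumer camera gets none of this benefit, which is one more reason its sensor noise varies from shot to shot and can't be profiled once and subtracted forever.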
  • If all else fails, why not use Gimp (or Photoshop) to fix your pictures? One of the tools that I use almost too much is the selective Gaussian blur. You could select the sky with the magic wand and apply it as many times as needed. Or, if you don't have any clouds, why not just blur it?

    • Fixing a clear sky is actually pretty easy as long as it's not TOO messed up, but it sounds like you've got a decent camera so I'll assume it's alright. This is GIMP BTW. Here's how I would do it: isolate those nasty splotches with the magic wand tool, leaving about a 10 pixel border around them. Copy them to a new layer. GBlur them a bit (I find anything larger than 3x3 pixels usually doesn't look right). OK, now select the transparent area with the select by color tool, then invert it (So that the
  • As many have already pointed out, you don't want all your pixels to be perfectly the same when you take a picture of the sky. A certain level of 'noise' (or 'grain' in film photography) is part of the picture. There are circumstances where too much noise can be annoying and it is possible to have faulty/dusty CCDs - but it doesn't sound like that is your problem. Both the GIMP and Photoshop have cloning tools etc that allow you to copy existing areas. This includes the noise from wherever you get the so
  • I'm a bit baffled by your question. If I were you I'd pick up Professional Photoshop by Dan Margulis. It's the best color correction book out there. He'll answer your questions. As for what you said about the sky, cameras are different than the human eye, and with a wide angle lens the sky SHOULD have a variation in color. You can fix this with a mask. Refer to Margulis to see when this is appropriate and how it should be done.
  • There is software that does this already. What you want is Noise Ninja and DxO. Noise ninja can build a custom noise profile for your camera, and DxO can correct standard errors with lenses. DxO might not be available for your consumer camera.

    Noise Ninja will compensate for the parts of your sensor that are naturally, always, noisy. DxO will correct vignetting and distortion from lenses.
  • Canon has some software where you can calibrate your camera by taking a picture of a white screen... that is mainly for SLRs, where you can get dust on the sensor or somewhere between the lens and the sensor.

    I think what you want is something that removes what is called noise. For that I would use neatimage, noise ninja or GREYCstoration.
  • If you want high quality digital pictures, you must get a better digital camera. A point and shoot (such as an Olympus 7.1MP) will produce significantly lower image quality than a DSLR. You may be surprised, but a lower MP DSLR will take better pictures than a higher MP point and shoot.

    Check out the reviews and especially the sample photos of the following:
    Olympus E-330 Point and Shoot (7.4MP) [] Similar to yours
    Nikon D50 DSLR (6MP) []
    Canon XTi DSLR (10MP) []
  • by n3k5 ( 606163 )
    No, it can't be done; the artifacts you want to eliminate aren't so consistent that you could prepare a canned antidote beforehand. There's special anti-noise software that works on RAW images and can make use of calibration shots. GIMP and Photoshop are best suited to laborious manual retouching, but of course you can integrate specialised anti-noise software into your Photoshop/GIMP workflow. (I'm not recommending a particular product because I don't have personal experience with such tools.) Either way,
  • it lacks 16-bit support and color management. Most people won't need 16-bit support but if you plan on printing your photos or need to do drastic adjustments it's a must. And without color management your photos will look very different on other people's monitors or printers.

    And let's not forget the atrocious printing with GIMP, compounded with both matters above.

    There's a reason why Photoshop is the most asked-for Linux application.
  • It is not really clear just what the problem with your image might be from what you write. For example, the higher the ISO value you shoot with, the noisier an image will be. Increased sensitivity comes at the price of more cross talk between pixel sites on the CCD, resulting in noise in the image. ISO 100 should be quite clean on most modern cameras, but ISO 1600 is likely to be problematic. Another possible problem could be your lens. If the "variance" you mention is distributed around the edges and espe
  • > When I inspected a section of clear blue sky on a bright, sunny day (which I've
    > long believed to be relatively good reference of uniform color and brightness)

    Wow, this internet thing is great. I love the fact that we're able to communicate with one another, despite the fact that we apparently inhabit completely different worlds, if not alternate universes.

    On Earth, the sky is nothing if not variegated.

    If you want uniform color and brightness, photograph a natural cavern several levels down from th

"I'm not afraid of dying, I just don't want to be there when it happens." -- Woody Allen