
Human Eyes as Digital Cameras?

Posted by Cliff
from the right-out-of-a-cyberpunk-novel dept.
Mad Dog Kenrod asks: "A recent ad campaign for a digital camera had the slogan (something like) 'imagine being able to take a picture from your head and show it to people' - it was basically showcasing how small the camera was. This got me thinking: most people simply want to 'snap what they see'. Given that the human eye already has a very workable lens, and a retina which (I assume) is similar in technology to a digital camera, how feasible would it be to 'tap into' the optic nerve (not the brain, because by then the 'image' is probably something else entirely) and turn the signals from all those rods and cones into pixels?"

"Given we can do C.A.T. scans, would it even be feasible to do this from outside the head (say, with sufficient miniaturization, from the arm of your glasses)?

Of course, you would lack other things like zooms and filters and even an ability to 'frame' the picture (and there'd be problems for people with eye disease), but I propose that, for the majority of us who just want to quickly 'snap what we see' this would make for the smallest, lightest camera possible.

I know nothing about what would be involved in making this happen, so would be interested in people's thoughts."

This discussion has been archived. No new comments can be posted.

  • It would be possible.

    But much sooner than that, perhaps real soon if not already, you could simply build a digital camera into, say, a pair of sunglasses...
  • That can be answered by popping the question into Google and looking at the first 10 results.

    God I hate slashdot!
  • Nearly Impossible (Score:5, Informative)

    by lliiffee (413498) on Tuesday April 01, 2003 @01:49PM (#5639409) Homepage
    Just a few cells back from the retina, the visual signals have already been 'encoded' in a way that would make a straight pixel map hard to attain. (Each neuron here corresponds to a weird Gaussian thing centered around a given point.) Furthermore, the signals aren't sent down a single neural train; they go all over the place, all willy-nilly. Theoretically, these things could be overcome, but the most serious problem is that our eyes at any given moment only look at a tiny, tiny bit of space. The illusion of a continuous field of vision is created by the brain in an amazing process which is not very well understood.
    • Easy solution: don't tap the optic nerve. Tap the retina. See the post about the cat elsewhere in the comments for this story.
      • Re:Nearly Impossible (Score:3, Interesting)

        by MacJedi (173)
        The problem is that if you tap in at that point (and let's pretend that you could sink enough electrodes into the retina; if you're tapping in at that level, you'd have to hit a significant percentage of them), the raw image would be very poor. You'd have to do all the processing yourself, in hardware, and the required processing is not fully understood.

        I'd suggest that you'd be better off letting the brain do most of the processing and take output from the visual cortex. I believe there has been some succ

          There is also the problem that the image at the back of the eye is flipped, so every recovered image would need to be flipped over.

          It would probably be easier to monitor the activity in the visual portion of the brain and translate that activity into an image than to try to understand the mess of nerves in the optic nerve bundle.

            Wow, that's a really big problem. They haven't solved that yet, you know. Whenever you get an upside-down picture, the print shops have to throw it away because it's useless. It would take years on a massively parallel cluster to flip the image back...

            Daniel
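The "weird Gaussian thing centered around a given point" described above is commonly modeled as a difference-of-Gaussians (center-surround) receptive field. A minimal NumPy sketch, with the grid size and sigmas chosen purely for illustration:

```python
import numpy as np

def dog_receptive_field(size=21, sigma_center=1.5, sigma_surround=3.0):
    """Difference-of-Gaussians (center-surround) receptive field weights."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

def ganglion_response(image, field):
    """Response of one model cell whose receptive field covers this patch."""
    return float(np.sum(image * field))

rf = dog_receptive_field()
flat = np.ones((21, 21))                        # uniform illumination
spot = np.zeros((21, 21)); spot[10, 10] = 1.0   # small bright spot at center
print(ganglion_response(flat, rf))  # near zero: the surround cancels the center
print(ganglion_response(spot, rf))  # positive: center excitation dominates
```

The point of the sketch: each such cell reports local contrast, not a pixel value, which is why a straight pixel map is hard to read off the optic nerve.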
  • by rask22 (144831) on Tuesday April 01, 2003 @01:50PM (#5639418)
    There has already been research in this area using cats. The researchers were able to reconstruct images of what the cat was actually seeing. Pretty amazing stuff if you ask me.

    link: http://www.berkeley.edu/news/media/releases/99legacy/10-15-1999.html
  • Perception (Score:4, Interesting)

    by xyzzy (10685) on Tuesday April 01, 2003 @01:51PM (#5639422) Homepage
    The problem is that you'd probably get a *shitty* picture. Or at best, it wouldn't reproduce "what you saw" any more than a regular camera does.

    The majority of what you "see" is exactly because of the post-processing your brain does, as well as your eye and optic nerve. This occurs both in the optic realm (shading, motion, etc), and because your brain applies all kinds of cognitive processes to the visual signal. It isn't simply a passive sensor like a CCD.
  • Assuming this isn't a dumb April Fools joke (are they lame this year, or what?)...

    No.

    What you see is the result of a whole lot of post-processing by a supercomputer called 'your brain'. The input from the optic nerve is quite inferior to the image you see.

    For instance, your digital camera would have a blind spot [colostate.edu] in every picture. It's also upside down, and probably non-uniform in its curvature.
  • [sabac.co.yu]
    Here you can see where they were on this not so long ago

    Short version: They hooked up to 177 cells about halfway down a cat's optic path and were able to create images/movies from the information they received. One problem is how hard it is to connect to all the nerves without disrupting their message. The other problem is that the image information changes as it moves from the eye to the brain; it gets processed as it travels. They were only able to interpret the image information at a certain spot on the way to the brain.

    [harvard.edu]
    You can
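The reconstruction technique in that cat work was essentially a linear decoder fit to paired stimulus/response data. A toy sketch of the idea, assuming (hypothetically) that each of 177 cells responds as a noisy linear filter of the stimulus; all sizes and the noise level are made up, and the real study fit its decoder to measured spike trains, not a simulated model:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_cells, n_frames = 16, 177, 2000        # 177 cells, as in the cat study
encoding = rng.normal(size=(n_cells, n_pixels))    # assumed model: fixed linear filter per cell

# Training data: random "movies" and the simulated noisy responses to them
stimuli = rng.normal(size=(n_frames, n_pixels))
responses = stimuli @ encoding.T + 0.1 * rng.normal(size=(n_frames, n_cells))

# Fit a linear decoder W minimizing ||responses @ W - stimuli||^2
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Decode a held-out frame from responses alone
test_stim = rng.normal(size=(1, n_pixels))
test_resp = test_stim @ encoding.T + 0.1 * rng.normal(size=(1, n_cells))
reconstruction = test_resp @ decoder

err = np.linalg.norm(reconstruction - test_stim) / np.linalg.norm(test_stim)
print(f"relative reconstruction error: {err:.3f}")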
  • by termos (634980)
    Porn will never be the same again... after shooting it with grandma's eyes!
  • Imagine that you are an Electrical Engineer and you are given a new camera technology, the specs are that the red, green, and blue sensors are randomly distributed. The distribution of sensors is also non-uniform spatially, most are in the middle. The number of sensors for each color also varies. Also, the responses from each sensor also overlap in irregular ways, no two sensors have the exact same response to the same stimulus. Oh yeah, the signals from the sensors are unmapped, we have no idea which sign
  • It could be done (someone else mentioned the cat experiments) but long before you got the resolution of the eye, you'd run into something we used to call the "pincushion problem" -- by the time you've got enough electrodes to capture the information, you no longer have the tissue of interest, you have a pincushion -- and pincushions don't act like normal tissue.

    But let's assume you did it somehow (nanotech, maybe -- everyone knows nanotech can do ANY magic desired). The eye isn't really like a digital cam
  • The way I remember it from biology (which is a stretch at this point ;), each person interprets the information from the cones and rods differently.

    In other words, each picture taken off the optic nerve would be relative to the person who saw it.

    We learn to associate a color with the information we get, but one person might see "red" when a cone is active, another might see "red" when a rod is active.

    If you could tap into the light coming direclty into the eye, maybe, but that is a hardware mod, not a si
    • So does this mean that people who get eyeball or brain transplants are going to see things in weird colors?
      • Or more important, Jahf, does this mean that the color know as "red" could be associated by someone else to what I associate to "green" for example?

        I've been pondering about this for realy some time. I don't know if i'm making myself clear. But in other words, could it be that "my" red = "your" green or other color??
  • in the book cyborg by martin caidin (the book the $6e+6 man was based on) our hero had a camera for an eye instead of the spiffy telephoto ir one he had on tv. he had to pop it out to change the film. today he could just stick a usb cable in his eye. also the bionic woman's ear would have ogg playback capability.

  • Why don't you lay down right here and I'll give it a try? *pulling out a scapel*

  • It is conceivable that eyeglasses could be made, ala- Tom Cruise era Mission Impossible. They wouldn't be the greatest, but you could record what the eyeglass-wearer sees.
  • ...I think whoever cracks this one is going to die richer than Bill Gates. The amount of pictures I'd take on a summer day's walk around town looking at the barely-dressed ladies would necessitate a 20Gb hard disk stuffed up my ass. :-)

    (Of course, doing it with a camera behind some with sunglasses would be a good start.)

    And I'm sure there'd be significant applications in the medical and military fields. I've been thinking how cool this would be for years...

    68K.
  • I would say that only spys, perverts etc. who wanted totally concealed cameras would find that mounting a camera on your glasses wasn't many orders of magnitude better. And slivers of slow glass will be easier for that. ;-)

    Anyhow, technology to do this thru glasses will be needed to enable all the various fabulous things we will do once glasses become a favoured ocmputer interface; HUD overlays, for example, will gain tremendously from knowing what it is you are seeing.

    As Scott Adams puts it, we all want
  • What if someone could intercept those images? Talk about "Being John Malcovich"! That's the last thing I need, someone getting a hold of pictures of me doing obscene things while dressed like Scooby Doo. Talk about humiliating!

The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation. -- Lew Mammel, Jr.

Working...