
Ask Slashdot: Tips On 2D To Stereo 3D Conversion?

An anonymous reader writes "I'm interested in converting 2D video to stereoscopic 3D video, the red/cyan anaglyph type in particular (to ensure compatibility with cardboard anaglyph glasses). Here are my questions: Which software or algorithms can currently do this, and do it well? Are there any 3D TVs on the market with a high-quality realtime 2D-to-3D conversion function built in? And finally, if I were to try to roll my own 2D-to-3D conversion algorithm, where should I start? Which books, websites, blogs, or papers should I look at?" I'd never even thought about this as a possibility; now I see there are some tutorials available. If you've done it, though, what sort of results did you get? And do you have any tips for those using Linux?
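
For what it's worth, the red/cyan composition step itself is trivial once you have two aligned views; the hard part, as the comments below hammer home, is producing the second view from a single 2D frame. A minimal sketch in Python with OpenCV, assuming you already have a left/right pair (the file names are placeholders):

    import cv2
    import numpy as np

    left = cv2.imread("left.png")    # view for the left (red) eye, BGR
    right = cv2.imread("right.png")  # view for the right (cyan) eye, BGR

    # Classic color anaglyph: red channel from the left view, green and
    # blue channels from the right view (OpenCV stores images as BGR).
    anaglyph = np.empty_like(left)
    anaglyph[..., 2] = left[..., 2]   # red
    anaglyph[..., 1] = right[..., 1]  # green
    anaglyph[..., 0] = right[..., 0]  # blue

    cv2.imwrite("anaglyph.png", anaglyph)
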
This discussion has been archived. No new comments can be posted.


  • Here's a tip (Score:4, Insightful)

    by Hatta ( 162192 ) on Tuesday January 24, 2012 @11:44AM (#38805807) Journal

    Don't do it.

  • by TheGratefulNet ( 143330 ) on Tuesday January 24, 2012 @11:48AM (#38805873)

    We were all suckered. We tried it, hated it, and moved on.

    Each time they try to reinvent this, it's still just an effects gimmick.

    You'll soon grow bored.

    Don't invest anything in this. It's a recurring cash grab born of industry boredom.

    And as a full-time glasses wearer, I'd never be caught dead with cardboard glasses over my regular ones. An absurd concept if there ever was one.

  • Re:Here's a tip (Score:4, Insightful)

    by spidercoz ( 947220 ) on Tuesday January 24, 2012 @11:49AM (#38805885) Journal
    Exactly. People with the proper equipment and money fail at this regularly.
  • Re:Here's a tip (Score:5, Insightful)

    by vlm ( 69642 ) on Tuesday January 24, 2012 @12:06PM (#38806177)

    Exactly. People with the proper equipment and money fail at this regularly.

    I think it is important to arrive at the correct mindset. This has never stopped people from snapping pix at weddings and sporting events and tourist traps, even if their pix look like garbage compared to a pro photo on a postcard or whatever.

    If you want to do it for fun, heck yes, go for it. Go, go, go. You don't need help; just try it.

    If you think you'll turn out something that means anything to anyone else in the world, you'll probably be disappointed. Insert the stereotypical groans when someone wants to show you old-fashioned slides of their vacation. Although that old tech is getting kind of retro-cool now.

  • Re:Here's a tip (Score:5, Insightful)

    by Jappus ( 1177563 ) on Tuesday January 24, 2012 @03:11PM (#38809055)

    Don't do it.

    Well, there is a way to do it, a very elegant way even. One that can be, for all intents and purposes, as good as you can get with the raw material; even to the point where the average human will not be able to tell the difference.

    The thing is: that solution has a big catch. How big? Well, to put it mildly, you will most likely win the Turing Award in the process of doing so, and at some point you will end up with a Nobel Prize in your hand, too. As you can imagine, the solution is artificial intelligence; and if you really want to do it, only strong artificial intelligence will do.

    The fact is, as others have quite succinctly pointed out, that the issue is in determining what is "in front" and what is "in the background" on top of how far away everything is. This is, quite simply, impossible to do right if you approach it as a purely algorithmic picture-to-picture problem. There is just not enough information inside the frames/movie to do it well enough even at the best of times.

    So, what do you do? Easy, you import external information. Things like: "This is a tree; That is a human. A tree is bigger than a human. Both take up the same space in the picture. Assumption: The human is closer than the tree. Proof: The tree casts a shadow on the human and the only light source is behind the tree. Angles point to a distance of 20 meters between human and tree. Etc. pp."

    This line of reasoning imports lots of information from the outside; essential things like "What does a tree/human look like?" and "How do their sizes relate to each other?". But if you grant that this information can be derived and used by an AI, the result can be a very precise derivation of the distances between objects. And once you have those distances as a depth map, warping out a second view is the comparatively easy, mechanical part (see the sketch after this comment).

    It is exactly the same line of reasoning the human brain uses for large distances (where the parallax between your eyes is too small, focus is uninformative, and the difference between eye positions is negligible), or when you have lost vision in one eye (or simply covered it). Even though your brain suddenly has only half the information, it is capable of giving you a good feeling for distance and depth.

    Of course, it doesn't always work, as far too many optical illusions like the Ames room show, but it works significantly better than a "pure" picture-to-picture approach and is the sole reason why almost everyone here feels that 2D-to-3D conversions are so horrible:

    Their brain tells them that what they see just can't be correct, even if their eyes have actually seen it.

    But of course, just using 2 cameras is much simpler. So good luck with (strong) AI. I would be surprised if you solved this issue all by yourself. :)
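
Once a depth map exists for a frame, whether hand-rotoscoped, captured by a sensor, or estimated by the kind of AI Jappus describes, warping out the second view and composing the anaglyph is the comparatively mechanical part. A rough sketch in Python with OpenCV, assuming an 8-bit depth map where 255 means "near"; the file names and the max_disparity constant are placeholders, and the hole filling is deliberately crude:

    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")                        # original 2D frame (BGR)
    depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE)  # 0 = far, 255 = near

    h, w = depth.shape
    max_disparity = 20  # horizontal shift, in pixels, for the nearest objects

    # Disparity is roughly proportional to 1/distance (d = f*b/Z); with an
    # already-normalized depth map we can just scale it linearly.
    disparity = (depth.astype(np.float32) / 255.0) * max_disparity

    # Forward-warp each row to synthesize the right-eye view: near pixels
    # shift left relative to the original (left-eye) image.
    right = np.zeros_like(frame)
    filled = np.zeros((h, w), dtype=bool)
    xs = np.arange(w)
    for y in range(h):
        new_x = np.clip((xs - disparity[y]).astype(int), 0, w - 1)
        right[y, new_x] = frame[y, xs]
        filled[y, new_x] = True

    # Crude disocclusion filling: reuse the last filled pixel in each row.
    # (A real converter would fill from the background side instead.)
    for y in range(h):
        last = frame[y, 0]
        for x in range(w):
            if filled[y, x]:
                last = right[y, x]
            else:
                right[y, x] = last

    # Red channel from the original (left-eye) view, green and blue from
    # the synthesized right-eye view; OpenCV arrays are BGR.
    anaglyph = frame.copy()
    anaglyph[..., 1] = right[..., 1]
    anaglyph[..., 0] = right[..., 0]
    cv2.imwrite("converted_anaglyph.png", anaglyph)

The smearing this naive fill produces at depth edges is exactly where professional conversions spend their labor, and it is a big part of why quick automatic conversions tend to look wrong.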
