Ask Slashdot: How Did Real-Time Ray Tracing Become Possible With Today's Technology?

dryriver writes: There are occasions where multiple big tech manufacturers all announce the exact same innovation at the same time -- e.g. 4K UHD TVs. Everybody in broadcasting and audiovisual content creation knew years in advance that 4K/8K UHD and high dynamic range (HDR) were coming, and that all the big TV and screen manufacturers were preparing 4K UHD HDR product lines because FHD was beginning to bore consumers. It came as no surprise when everybody had a 4K UHD product announcement and demo ready at the same time. Something very unusual happened this year at GDC 2018, however. Multiple graphics and GPU companies, like Microsoft, Nvidia, and AMD, as well as game developers and game engine makers, all announced that real-time ray tracing is coming to their mass-market products, and by extension, to computer games, VR content and other realtime 3D applications.

Why is this odd? Because for many years any mention of 30+ FPS real-time ray tracing was thought to be utterly impossible with today's hardware technology. It was deemed far too computationally intensive for today's GPU technology and far too expensive for anything mass market. Gamers weren't screaming for the technology. Technologists didn't think it was doable at this point in time. Raster 3D graphics -- what we have in DirectX, OpenGL and game consoles today -- was very, very profitable and could easily have kept evolving the way it has for another 7 to 8 years. And suddenly there it was: everybody announced at the same time that real-time ray tracing is not only technically possible, but also coming to your home gaming PC much sooner than anybody thought. Working tech demos were shown. What happened? How did real-time ray tracing, which until recently only a few 3D graphics nerds and researchers in the field talked about, suddenly become so technically possible, economically feasible, and so guaranteed-to-be-profitable that everybody announced this year that they are doing it?

  • Simple: Hardware got more powerful.

    • Re: (Score:3, Interesting)

      by KingMotley ( 944240 )

      I would guess they figured with graphics cards having 3500+ cores and ample memory for massive lookup tables, suddenly it seems feasible. That or a patent just expired, or both.

    • by Roger W Moore ( 538166 ) on Monday March 26, 2018 @09:22PM (#56331415) Journal
      According to this video [youtube.com] it is not just more powerful hardware, but also that they came up with the idea of using only a fraction of the rays normally required and then running a powerful denoising algorithm to generate the final image.
      • by Anonymous Coward on Monday March 26, 2018 @11:00PM (#56331737)

        This pegs it. Ray tracing is being used selectively as part of the standard raster pipeline we have today. It's still not feasible to ray trace every pixel in real time, and handling reflections perfectly may never be doable.

        • by Pseudonym ( 62607 ) on Tuesday March 27, 2018 @03:05AM (#56332289)

          Just to be clear, it's not "reflections" (as in mirrors) that's the problem necessarily, it's the fuzzy effects: diffuse, glossy, translucent, aerosols, etc.

          It may never be doable because of Blinn's Law.

          • It may never be doable because of Blinn's Law.

            Blinn's law says that the time to do something remains constant, with the usual result that improvements go into the quality of the result. This is pretty self-evident with real-time video since your time budget is fixed by the need to maintain the frame rate, and for any given frame rate you might as well use all the time available. Hence, it says nothing about whether "true" real-time ray tracing will ever be possible. The real question is not whether it will become possible but whether it will become practical.

        • by Anonymous Coward

          Handling reflections perfectly is trivially easy with raytracers, and there have been plenty of real-time raytracers even for low-end consumer hardware in the 90's.

          People tend to have a misunderstanding of what raytracing is. There are many different rendering techniques with different advantages and drawbacks. Many have the idea that raytracing is a holy grail that simulates the physics of light to produce photorealistic images but is too expensive to run in real-time on consumer hardware. It's weird becau

          • Handling reflections perfectly is trivially easy with raytracers, and there have been plenty of real-time raytracers even for low-end consumer hardware in the 90's.

            Even with a simplistic ray model, the more objects there are in the scene, the more complex the trace becomes. Until you hit an object, you don't know at what angle the ray diverges from that object; it may, for instance, go through several partially transparent objects before it reaches a light source, or it may reflect off of several items that

            • by guruevi ( 827432 )

              It's not just simplifying the scene, it's also simplifying the ray tracing model. Instead of tracing every pixel, it takes a group of pixels, renders them all the same, and then applies some filters after the fact to make it look better. So instead of tracing all 1920x1080 pixels, it traces perhaps a few hundred 32x32 or 64x64 sections and then applies a denoiser.
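
              A rough sketch of that trade-off in Python (illustrative only; trace_pixel is a made-up stand-in for a real per-pixel ray trace, and the box filter stands in for the far smarter denoisers actually used): trace a sparse subset of pixels, fill in the gaps, then smooth the result.

                  import numpy as np

                  def trace_pixel(x, y):
                      # Stand-in for an expensive per-pixel ray trace (toy pattern).
                      return 0.5 + 0.5 * np.sin(0.05 * x) * np.cos(0.07 * y)

                  def render_sparse(width, height, step=4):
                      # Trace only every step-th pixel block, then "denoise" the result.
                      img = np.zeros((height, width))
                      # 1) Trace a sparse subset of pixels (roughly 1/step^2 of the full cost).
                      for y in range(0, height, step):
                          for x in range(0, width, step):
                              img[y:y + step, x:x + step] = trace_pixel(x, y)  # nearest fill
                      # 2) "Denoise": a simple box filter standing in for a real denoiser.
                      k = step
                      padded = np.pad(img, k, mode='edge')
                      out = np.empty_like(img)
                      for y in range(height):
                          for x in range(width):
                              out[y, x] = padded[y:y + 2 * k + 1, x:x + 2 * k + 1].mean()
                      return out

                  frame = render_sparse(160, 90, step=4)
                  print(frame.shape, float(frame.min()), float(frame.max()))

              Real pipelines use temporal accumulation and learned filters rather than a box blur, but the cost structure is the same: pay for a fraction of the rays, then spend far cheaper filtering time reconstructing the frame.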

              • by fyngyrz ( 762201 )

                Which is another (rather, a further) way of saying "it's not really a ray tracer at all" and therefore no, we don't have realtime ray tracing. :)

    • by Anonymous Coward

      They have been working on it for 10+ years [slashdot.org] now.

    • by mikael ( 484 )

      They started using neural networks and machine learning to optimize their algorithms -- things like anti-aliasing. Normally they had to super-sample every pixel hundreds of times; in most cases the result is the same. An ML algorithm lets them figure out the most important cases and reduce the number of samples needed. Instant speedup.

    • It's just used for effects, not for the full scene. They are not shooting a ray (or multiple rays) per pixel and following it wherever it goes. For most of the scene they don't need to do that, and for the reflective surfaces they only need to do it once for every so many pixels.

      People were doing partial, limited realtime raytracing for some effects back on 100 MHz Pentiums. Using raytracing as part of an effect is just... well, it's not the same as raytracing with POV-Ray, not at all.

    • Re: (Score:2, Insightful)

      Simple: Hardware got more powerful.

      Simple: Graphics card vendors required some new buzzword to sell their most expensive hardware.

    • Don't feed the trolls. ;)
  • by kiminator ( 4939943 ) on Monday March 26, 2018 @08:36PM (#56331205)

    The short answer is that it isn't brand-new. As the article mentions, nVidia has been doing real-time ray-tracing demos for about a decade. Various tricks and approximations are used to make this a reality. Game developers have largely shied away from it because similar results can be achieved with typically better performance using other methods.

    This announcement, particularly with the involvement of Microsoft, indicates that the companies finally feel that the technology is mature enough to actually be used in a game. My guess is that it will likely still be some time before it is put to use. The fact that it was announced at the same time likely indicates that the three companies have been working together behind closed doors for some time to agree upon the DirectX Raytracing API.

    • by Anonymous Coward

      Indeed, they could do this many, many years ago; the problem was always having enough rays to make it worthwhile. Ray tracing a couple of rays in real time is relatively easy, but tracing enough of them to get results that are better than the current system was hard.

      I remember messing around with ray tracing programs back in the '90s and they were great, but it took huge amounts of time to render a scene. Far too long for even simple board games, let alone a typical 3D game.

      At some point, it was inevitable th

      • Re: (Score:3, Interesting)

        by Shinobi ( 19308 )

        Even if ray-tracing isn't used for graphics, you can use it for the sound engine, with the benefit of making it hardware accelerated. Would make a whole lot of games more interesting, for example Stealthers like Thief, horror games etc etc.

        • by Shinobi ( 19308 )

          So apparently there's a mod out there that doesn't know that you can actually do ray-traced audio propagation.

          Here's a bit of light viewing on how it's been used in one field since the 80's: https://youtu.be/ZY1Kiih8sTU [youtu.be]

        • by Megol ( 3135005 )

          AFAIK Aureal did a limited form of this for their 3D soundcards. IIRC tracing a few "rays" in a simplified world model to simulate reflections etc.

      • I recall a Quake demo done with ray tracing in the 90s. They estimated it would take well over a 30,000 Hz Pentium whatever to do it in real time. It looked great, with light projecting through stained glass windows.

    • Also, apparently their current demos are running on some severely beefy hardware (e.g. here [arstechnica.com]), and they're only using raytracing for a portion of the scene. This may play a part in gaming eventually, and it's good that the APIs are getting out there, but it probably will be a while before it makes it to the next Call of Duty game.
    • Game developers have largely shied away from it because similar results can be achieved with typically better performance using other methods.

      True, but it was also a self-reinforcing trend. Raster-based graphics was how it all started, so chip-makers and API providers were optimizing for that, which in turn entrenched this way of rendering further. Twenty-five years on, graphics chips and APIs are an amalgamation of super-optimized functions, tricks and hacks to squeeze every ounce of performance from that rendering methodology. And the fact is that this way of rendering has very little to do with the real world.
      I would welcome the trend towards ray-tracing

  • by Anonymous Coward

    More specifically, Nvidia made custom hardware to do the calculations. [nvidia.com]

  • Terminology (Score:5, Interesting)

    by B.Stolk ( 132572 ) on Monday March 26, 2018 @08:39PM (#56331217) Homepage

    Be careful with your terminology.

    Real-time ray tracing with one primary ray and one shadow ray for each pixel was viable last year as well (at 1080p).
    But this will not render indirect light, thus no Global Illumination.

    You may be referring to Real Time Path Tracing, where you need to shoot a lot of rays for every pixel.
    This is currently not possible, and also not possible in this year's GDC demos: I think most of the demos were hybrids (rasterizing+tracing), and definitely not full Global Illumination.
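
    Rough numbers make the distinction concrete (assuming 1080p at 60 fps; the per-pixel sample counts below are illustrative, not taken from any particular demo):

        # Back-of-the-envelope ray budgets (illustrative numbers only).
        pixels = 1920 * 1080          # 1080p frame
        fps = 60

        # Ray tracing as described above: one primary + one shadow ray per pixel.
        rt_rays = pixels * 2 * fps
        print("ray tracing : %.2f billion rays/s" % (rt_rays / 1e9))

        # Path tracing for global illumination: many samples per pixel, each
        # path bouncing several times (e.g. 64 spp, ~4 rays per path).
        spp, rays_per_path = 64, 4
        pt_rays = pixels * spp * rays_per_path * fps
        print("path tracing: %.2f billion rays/s" % (pt_rays / 1e9))

    Under those assumptions the gap is roughly two orders of magnitude (about 0.25 versus about 32 billion rays per second), which is why the GDC demos are hybrids rather than full path tracers.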

    • Re:Terminology (Score:4, Interesting)

      by ganv ( 881057 ) on Monday March 26, 2018 @08:58PM (#56331301)
      You are touching on an important issue here. 'Real-time' and 'Ray-Tracing' are both open to definition: At what screen resolution, frame rate, and number of rays? And is it a hybrid or full ray tracing solution? That said, I am very interested in the answer to the original post's question: Is this a tipping point where they finally decided the hardware is good enough to market the ray tracing they have been working on, or is there some substantial improvement in algorithms or in custom hardware dedicated to ray tracing?
      • by natex84 ( 706770 )
        I think the current demos were also done on some insanely high-end quad-GPU workstation that costs around $60k ( https://arstechnica.com/gaming... [arstechnica.com] and https://www.nvidia.com/en-us/d... [nvidia.com] ).

        And even with all of the above hardware, I think they are just running at 24fps?

        It looks like this is a long way off for a reasonably priced high-end home gaming machine (i.e. a $4k budget).
    • Yeah, this is the answer. They showed off hybrid systems, so they'll do SOME things using ray tracing while most of the heavy lifting is still going to be rasterized for the foreseeable future.

  • by Anonymous Coward on Monday March 26, 2018 @08:41PM (#56331227)

    Real-time ray tracing has been a topic since I was in college 20 years ago. A lot of PhD students have done their research. Algorithms got optimized. Hardware kept advancing. Futurists at major corporations (a fancy name for the people responsible for monitoring tech) saw hardware evolving.

    So Microsoft probably saw back in 2013 that it was going to be viable in 5 years (or whenever). They probably started developing an API in conjunction with major hardware manufacturers. So they all worked together to bring ray tracing to the masses eventually.

    Now all this work is paying off and consumers get the end product.

    This is no different than major corporations using augmented reality to validate construction designs. Ya, they are doing that, at least in labs. Take the 3D model as built and validate that it is as built. Or quickly look at something in the real world, find the corresponding 3D model data, and figure out in a second which vendor supplied the broken part. Ya, this stuff is happening!

    • Or even 30 years ago ... well, ray tracing was, anyway. Real-time ray tracing was a topic in '87 much like the holodeck on ST:TNG was: a "won't it be cool when?" kind of topic.

      It's been a long time coming and it's still not quite here as posters further up have laid out, but it's getting closer.

      • by epine ( 68316 ) on Tuesday March 27, 2018 @01:37PM (#56335111)

        Or even 30 years ago ... well ray tracing was anyway.

        Back in 1980, the University of Waterloo mathematics and computer science building had a locked public display case featuring artifacts from the senior-level computer graphics course, most of which involved ray tracing (standard chessboard-reflected-in-shiny-sphere kinds of things, but with the scenes aggressively simplified—like only three chess pieces of a dozen polygons each and the board reduced to sixteen squares).

        What separated the great from the good was treatment of subtleties such as getting the specular highlighting right. I was young and naive and didn't have Wikipedia at my fingertips, so I don't recall much more.

        They also had a public information kiosk back in 1980 which used some kind of polygon-fill graphics language to render an interactive dial-up information browser in all the best oversaturated colours. Just like the Internet, if the Internet consisted of exactly one host, and it was dead slow. But shiny! Everyone tried it out—for exactly three minutes (that would be about your third screen rendered).

        They also had rooms full of card readers, which they couldn't bother themselves to eliminate until some fancy anniversary shindig a few years later.

        In one terminal room you'd have IBM 3270 block-oriented displays, in the next you'd have the god-awful WIDJET terminals, in the next you'd have IBM PCs running APL (for statistics students), in the next you had Commodore SuperPETs, custom tweaked by the Computer Systems Group or related ecosystem (these people were later responsible for the Watcom C++ compiler).

        WIDJET (Waterloo interactive direct job-entry terminals) was basically a JCL [wikipedia.org] front-end with the ability to store about six whole files on non-card storage, where mostly you sat twiddling your thumbs waiting for your runs to pass through the job submission queue.

        One such room was more advanced and you could *edit* your next assignment with your previous run still in the queue, but you had to sign up way ahead of time to actually get a seat in this room.

        In the other room, you might submit a compile, sit there for ten minutes watching it bounce up and down the queue (upper-year jobs had priority) and then decide "oh, I did something wrong in the previous assignment" and you'd click "edit" on some previous source file, only to discover the message "job submission cancelled" popping up on the status line of your display and since you'd already spent years with the immense power of the TRS-80 or Apple II at your fingertips, you'd bash your head into the desk and wonder how you'd become mired in this technological institution of hazing, abuse, and mediocrity.

        The SuperPETs were okay, but the custom Pascal did some kind of partial compile to catch syntax errors before running the interpreter. The error reporting was beyond horrible. 50% of all syntax errors produced the same message: "syntax error near or before end of file." It was much like Donald Trump tweeting "WRONG!" at you if you got a single brace out of alignment.

        "Sorry! I'll hunt through my entire program looking for the one I messed up." This worked for localized changes. But generally what you actually did after a big edit was added or removed braces at random until you found the problem through a tedious process of bisection (which, however, was an order of magnitude less tedious than dealing with WIDJET, even in the good room).

        My first year at Waterloo was the biggest computer science mindfuck of my entire life.

        Eventually I discovered you could kind of get into a groove with the IBM 3270 terminals—as archaic as they seemed—if you blinked longer than normal after each press of the ENTER key. Plus, these had chat, so you could send "nice sunrise" messages to all your friends at 05:00.

        Fro

  • by Junta ( 36770 ) on Monday March 26, 2018 @08:46PM (#56331251)

    Demos of real-time raytracing date back to 2009 or earlier, albeit with various limitations. Raster-based rendering is faster, and going to raytracing means much better lighting at the expense of some resolution/geometric detail/hardware requirements.

    I think what is being seen is that we've been well beyond the point of diminishing returns for how much better raster can reasonably get in terms of quality. Sure, we can cram in more and more polygons, and sure, we can raise the bar to 4k resolution, but the bang for the buck is small. Given that situation, the video card market has an issue: they need to do demand generation.

    So pushing 4k for gaming helps, and 8k would be nice, but it's just really hard to tell the difference at this point.

    If that's a hard sell, then VR certainly can knock things back if it gets traction. With wider FOV, stereoscopic rendering, and the optimal experience being at least 90Hz, that would certainly deliver. However, as much as I am a fan of it, it's far from a given that VR is ever going to be large enough to drive adoption at volumes that can sate the business needs of the GPU vendors.

    So raytracing is a third option: make a best-of-breed graphics card noticeably struggle with something that's very visually apparent. Suddenly the market, content with the status quo of ever more refined raster graphics, simply must make the leap to have some marketable advancement.

    The people pessimistic about any such advancement just continuously have their expectations calibrated to how fast it can perform raster graphics. Raytracing does mean having to step back, but we have enough headroom to take the hit.

    • by Anonymous Coward

      Actually, VR is probably the reason these guys are switching to ray tracing. Here is a very interesting article on how they manage to reduce latency by using ray tracing instead of rasterizing: https://www.roadtovr.com/exclusive-nvidia-research-reinventing-display-pipeline-future-vr-part-2/ [roadtovr.com]
      In short, with ray tracing you get the lens warping at no cost, and they can throw more rays at the center of the image (where it really matters because the retina has a rather small high-resolution zone), and fewer rays toward the periphery.
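
      A toy illustration of that ray-distribution idea (the falloff curve and all the numbers here are invented for the example, not taken from the linked article):

          import math

          def samples_per_pixel(angle_deg, fovea_deg=5.0, max_spp=16, min_spp=1):
              # Hypothetical foveated schedule: full sample count inside the foveal
              # cone, smoothly decaying toward a minimum in the periphery.
              if angle_deg <= fovea_deg:
                  return max_spp
              falloff = math.exp(-(angle_deg - fovea_deg) / 15.0)  # arbitrary decay rate
              return max(min_spp, round(max_spp * falloff))

          for a in (0, 5, 10, 20, 40, 60):
              print("%2d deg from gaze -> %d samples/pixel" % (a, samples_per_pixel(a)))

      Because rays are generated per pixel anyway, skewing their density toward the gaze point is essentially free, which is much harder to do in a fixed raster pipeline.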

    • by ( 4475953 )

      I agree that it's about demand generation to sell more hardware, but I disagree about the quality of rasterization. I've bought an AMD 1800X with a GeForce GTX 1080, which is not the latest highest high end but still fairly high end, and the high-end games I've looked at since then still smear everything with post-shaders and don't have enough level of detail at a distance (with crap like "depth of field" turned off, of course). I don't know, maybe game designers are blind but when I look out of my windows I s

      • by Junta ( 36770 )

        I think in that scenario, there may be differences to be had, but there's much less chance a consumer will look at far-away scenery and think a marginal increase in that is worth a couple hundred more dollars.

        Contrasted with lighting/reflection/refraction improvements, which can be pretty dramatic.

        That was simply my point, that going further may be possible without pulling in some raytracing, but the subjective impression isn't going to differ nearly as much.

  • This story reminded me of https://www.euclideon.com/ [euclideon.com] ... They compose everything as "atoms" instead of triangles. If I ever make it to Oz, I'd like to check it out.
  • by Anonymous Coward on Monday March 26, 2018 @08:48PM (#56331265)

    It is using rasterization for the scene, and then hybrid ray tracing for the reflective, shadow, and opacity components where it applies. This is not true ray tracing, but rather another rendering "trick" to optimize for speed. Most of our rendering for games today is approximation, because it is good enough. Same thing here.
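
    Structurally, the kind of hybrid frame being described looks something like the sketch below (every function here is a hypothetical stand-in, not the DXR API; the point is only which passes rasterize and which ones spawn rays):

        # Hypothetical hybrid frame loop; all helpers are stand-ins, not a real API.
        def rasterize_gbuffer(scene, camera):
            # Ordinary raster pass: per-pixel depth, normals, albedo, material flags.
            return {"depth": [], "normal": [], "albedo": [], "reflective": []}

        def trace_reflection_rays(gbuffer, scene):
            # Rays spawned only from pixels flagged as reflective.
            return {}

        def trace_shadow_rays(gbuffer, lights):
            # One (or a few) shadow rays per pixel toward each light.
            return {}

        def composite(gbuffer, reflections, shadows):
            # Combine the raster G-buffer with the ray-traced terms.
            return "final image"

        def render_frame(scene, camera, lights):
            g = rasterize_gbuffer(scene, camera)        # cheap, covers the whole screen
            refl = trace_reflection_rays(g, scene)      # ray traced, only where needed
            shad = trace_shadow_rays(g, lights)         # ray traced, only where needed
            return composite(g, refl, shad)

        print(render_frame(scene=None, camera=None, lights=[]))

    The ray-traced passes work from the rasterized G-buffer, which is why they can be kept to a small fraction of the total frame budget.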

    • by AmiMoJo ( 196126 )

      That's how most ray-tracing works these days. Ray-tracing plus the same kind of hacks for things like sub-surface scattering, radiosity, motion blur, air/heat effects, material simulation without having to simulate every fibre of clothing etc.

      On top of all that you have artistic considerations. Few games want to look like reality, they want to look stylized and hyper-real. They want to use tricks that focus the player's attention, just like films and TV shows do.

  • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Monday March 26, 2018 @09:03PM (#56331323) Homepage

    Ray tracing is an impressive technical feat, but the argument against it for gaming still stands:

    However fast you get at ray tracing, you can instead use that power for rasterization and do far far more.

    The day may come when that gap doesn't matter anymore, or when we find a way to overcome it... but for now, I think this technology will primarily be used to help accelerate very simple environments in real time, with more complex ones left to offline rendering.

    • by shaitand ( 626655 ) on Monday March 26, 2018 @09:28PM (#56331437) Journal
      "However fast you get at ray tracing, you can instead use that power for rasterization and do far far more."

      Not when you are rendering the graphics in real time to be integrated on the fly alongside real-time light vectors flying at the retina. Think Pokemon Go, without the phone, or the screen, and with the pokemon actually sitting on your counter with its legs hanging down.
    • Comment removed based on user account deletion
    • The main benefit is we get realistic refractive and reflective effects - which tend to be pretty slow with rasterisation as well. Not sure if we'll be seeing some sort of hybrid technique soon.
  • by rsilvergun ( 571051 ) on Monday March 26, 2018 @09:06PM (#56331333)
    Remember the jump from the SNES/Genesis to the Playstation/Saturn? How about the first time you saw a Dreamcast game? Those were big leaps. But PS2 to PS3? If you'd been playing PC games you'd already seen stuff on par with the PS3. And PS3 to PS4 was hardly a leap at all.

    The trouble is modern graphics have gotten _hard_ to make. Pixel shaders are a bitch. They're too labor intensive. What's needed is something that lets you do great graphics with less man hours and fewer bugs. If ray tracing isn't gonna do that then it might as well be PC's answer to 3D TVs. Especially if it's only kicking out 30 FPS.
    • by Kjella ( 173770 )

      The trouble is modern graphics have gotten _hard_ to make. Pixel shaders are a bitch. They're too labor intensive. What's needed is something that lets you do great graphics with less man hours and fewer bugs. If ray tracing isn't gonna do that then it might as well be PC's answer to 3D TVs.

      I think that problem extends far beyond graphics. If you're going for realistic graphics, everything also has to behave and interact like it's real. Like if the wind is blowing, everything has to be fluttering in the wind. If you're making a dog it can't just look like a dog, it has to move like an actual dog, with bones and muscles dragged down by gravity, and leave paw prints in the mud. To say nothing of humans, uncanny valley here we come. Heck, I think it'd be difficult just to properly simulate ta

  • You misuse the term "raster graphics", as that refers to bitmaps, not vector-based real-time 3D rendering. Real-time ray tracing has been around for almost two decades, but it wasn't mainstream until recently. As the parallel computational power of GPUs increases, ray tracing becomes the inevitable future.
    • by Megol ( 3135005 )

      Well current solutions do use raster graphics - but I'd be very surprised if realtime raytracing didn't use it too. Representation -> Renderer -> Framebuffer -> Screen.
      You are of course right about the misuse of the term.

      There has been at least one polygon renderer that didn't use a framebuffer (I don't remember the name/company), but it had severe limitations: for every rendered pixel a hit test had to be done with every polygon, which strongly reduced the number of renderable polygons. Add the proble

  • We don't know how many of them are real real-time ray tracing and how many are me-too announcements for PR.

    Second, just because they call it real-time ray tracing does not mean it really is. It could be a technique that does a low-res ray trace to calculate radiosity, reflectivity, etc., and then uses those results to adjust the texture maps of a standard raster renderer.

    So let us wait and see if what they call real-time ray tracing and what we call real-time ray tracing are one and the same.

  • All the companies that announced this, and the media reporting on it, gloss over the fact that the ray tracing being talked about is supplementary to the standard rasterized shading engines. We aren't going to be getting 100% ray-traced scenes. We'll get rasterized scenes with things like reflections and shadows being ray traced. This will fix the issue that rasterization has where objects outside of the viewing frustum are computationally ignored. 100% ray tracing is still a pipe dream with today's hardware.

  • What the fuck is the poster on about? Ray tracing has been a known, researched thing that was on the horizon for years. It hasn't suddenly, miraculously appeared; it was always expected to reach general consumption once the hardware crossed the performance threshold.
  • For as long as Magic Leap has been vaporware, the graphics vendors have known real-time ray tracing was going to be required in a big way, and they have been working on it behind the scenes.
  • by locater16 ( 2326718 ) on Monday March 26, 2018 @10:22PM (#56331629)
    The first thing to understand is that the Nvidia and MS press releases are complete crap. Oh it "works". The UE4 Star Wars thing is real enough. It's also running on $12,000 worth of GPUs at 30fps in 1080p, and is only partially raytraced.

    Now does this mean realtime raytracing isn't here? Well the answer is no, it is here. The game Claybook is entirely raytraced, unlike the DirectX demos. It's in early access and runs on nothing more than an Xbox One or PS4, and does so at 60 frames per second. Here's the trailer: https://www.youtube.com/watch?... [youtube.com] Now THIS is possible because graphics programmers have gotten quite clever over the years. The cleverest bit is called signed distance fields. This can be thought of as a volume of points, or boxes, that all store the nearest distance to a solid "surface". Going through this structure allows you to raytrace very, very quickly, as you know how much empty space you can skip each time without hitting anything. And since this data is relatively small for each point it doesn't use up a lot of memory either. It's so fast and low-memory you can run a demo in your browser here: https://www.shadertoy.com/view... [shadertoy.com]

    Obviously there's a bunch of other clever programming going on in Claybook and other titles. But SDF's are the biggest thing to understand. That and that the MS Raytracing API is totally uninteresting from a performance perspective. In fact it's rather awful.
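
    A minimal sphere-tracing loop against a signed distance field shows why that empty-space skipping is so cheap (toy single-sphere SDF, hit test only, nothing like Claybook's actual code):

        import math

        def sdf(p):
            # Signed distance to a unit sphere at the origin: negative inside,
            # positive outside, zero on the surface.
            x, y, z = p
            return math.sqrt(x * x + y * y + z * z) - 1.0

        def sphere_trace(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
            # March along the ray; the SDF value tells us how far we can safely
            # step without crossing any surface, so empty space is skipped quickly.
            t = 0.0
            for _ in range(max_steps):
                p = tuple(o + t * d for o, d in zip(origin, direction))
                d = sdf(p)
                if d < eps:
                    return t          # hit
                t += d                # safe step: nothing is closer than d
                if t > max_dist:
                    break
            return None               # miss

        print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))  # ~2.0 (front of sphere)
        print(sphere_trace((0.0, 2.0, -3.0), (0.0, 0.0, 1.0)))  # None (ray misses)

    Because each step advances by the stored distance, rays cross large empty regions in a handful of iterations, which is what makes SDF tracing viable on console-class hardware.
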
    • by UnknownSoldier ( 67820 ) on Tuesday March 27, 2018 @04:28AM (#56332475)

      > The UE4 Star Wars thing is real enough. It's also running on $12,000 worth of GPUs at 30fps in 1080p

      Uhm, try $50K for the Nvidia DGX Station running on four Volta GPUs.

      > Well the answer is no, it is here.

      Outcast, back in 1999, did real-time ray tracing and voxel rendering [shamusyoung.com]

      The only difference is that 20 years later we can do it in hardware.

      • by Bobtree ( 105901 )

        Outcast, back in 1999

        That summary is crap. Outcast was only raycasting a 2d heightfield. No raytracing, no voxels.
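
        For what it's worth, raycasting a 2D heightfield of that sort fits in a few lines (toy version with a made-up procedural heightmap; each screen column is marched front to back and only terrain that rises above what has already been drawn becomes visible):

            import math

            def height_at(x, y):
                # Toy procedural heightmap standing in for Outcast-style terrain data.
                return 40 + 25 * math.sin(x * 0.05) * math.cos(y * 0.05)

            def raycast_column(cam_x, cam_y, cam_h, angle, screen_h=120, max_dist=300):
                # March one screen column away from the camera; each terrain sample
                # that projects higher than anything drawn so far becomes visible.
                column = [0] * screen_h       # 0 = sky, 1 = terrain (bottom-up rows)
                highest = 0                   # tallest screen row filled so far
                dx, dy = math.cos(angle), math.sin(angle)
                for dist in range(1, max_dist):
                    h = height_at(cam_x + dx * dist, cam_y + dy * dist)
                    # Perspective: project the terrain height to a screen row.
                    row = int((h - cam_h) / dist * 100 + screen_h // 2)
                    row = max(0, min(screen_h - 1, row))
                    if row > highest:         # only newly visible rows get drawn
                        for r in range(highest, row):
                            column[r] = 1
                        highest = row
                return column

            col = raycast_column(0.0, 0.0, 60.0, angle=0.0)
            print(sum(col), "of", len(col), "rows are terrain")

        One ray per screen column against a heightmap is a very different workload from one ray per pixel against arbitrary geometry, which is the crux of the disagreement above.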

        • The original Art Director disagrees.

          He used the word voxel 11 times and referred to the 2D heightfield as "voxel tiles", but technically that's correct: a 2D heightfield is just one way to represent a subset of 3D voxels.

          http://francksauer.com/index.p... [francksauer.com]

          You are right about the ray casting though.

          Was Quake Wars: Ray Traced the first ray-traced game?

    • by Bobtree ( 105901 )

      The game Claybook is entirely raytraced

      No, it isn't. Here are their GDC 2018 slides: https://www.dropbox.com/s/s9tz... [dropbox.com]

      They are using Unreal Engine 4 for shadow cascades, ambient occlusion, lighting, motion blur, and presumably the background scene. The SDFs are raytraced for first-hit surface intersections, soft shadows, and extra ambient occlusion. The visual giveaway is that there are no reflective surfaces.

  • Full realistic ray tracing to provide real global illumination, be it real time or otherwise, has always been impossible. Ray tracing is only a simulation and has always employed artificial cutoffs and other hacks. Why? Well, light is analogue; to simulate it properly would require a literally infinite amount of processing power. I doubt there'll be anything different about this tech; it's just a new API.

  • by Joviex ( 976416 ) on Monday March 26, 2018 @11:43PM (#56331847)
    You fundamentally misunderstood the technology; partial solutions for partial rendering with interpolation are not 100% raytracing.
  • It sounds like OP is asking "Why are GPUs so damn powerful so soon?"

    In a word: Cryptocurrency mining

    People were willing to pay for faster and faster GPUs for their mining farms and their profits allowed it. Look at the overwhelming demand for dedicated space heaters, er, miners. If there was no money in it, it wouldn't have happened.

    It has been said that porn built the internet. Based on traffic share, that's a safe bet. There was money in being a 1 frame per second camgirl and there was money in video, whi

  • It's still not possible (at high resolution/high framerate) on current consumer GPUs. The Star Wars demo, for instance, was done on a set of very, VERY expensive GPUs (not available to the public) whose equivalents will not reach consumers for another few years.
  • Raytracing and rasterizing behave differently in memory, making the tradeoff between them less about processing and more about memory. Rasterization and raytracing perform the same function, but with the loops in a different order: rasterization iterates over every triangle and determines which pixels it covers, while raytracing iterates over every pixel, testing for intersection with each scene primitive.

    The big difference is how memory is used in this process
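
    In sketch form (stub helpers so the contrast in loop order is the only thing on show; real GPUs parallelize both loops massively):

        # Stand-ins so the sketch runs; a real renderer replaces all of these.
        def pixels_covered_by(tri, width, height):
            return []                       # triangle-to-pixel coverage (stub)

        def shade(obj, x, y):
            return 1.0                      # shading (stub)

        def closest_intersection(ray, triangles):
            return None                     # usually a BVH traversal (stub)

        class Camera:
            def ray_through_pixel(self, x, y):
                return (x, y)               # stub ray

        # Rasterization: outer loop over triangles, inner loop over covered pixels.
        # Geometry is streamed through once; the framebuffer gets the random access.
        def rasterize(triangles, width, height):
            fb = [[0.0] * width for _ in range(height)]
            for tri in triangles:
                for (x, y) in pixels_covered_by(tri, width, height):
                    fb[y][x] = shade(tri, x, y)
            return fb

        # Ray tracing: outer loop over pixels, inner search over scene primitives.
        # Any ray may touch any part of the scene, so the whole scene (plus its
        # acceleration structure) has to be resident and randomly accessible.
        def ray_trace(triangles, width, height, camera):
            fb = [[0.0] * width for _ in range(height)]
            for y in range(height):
                for x in range(width):
                    ray = camera.ray_through_pixel(x, y)
                    hit = closest_intersection(ray, triangles)
                    fb[y][x] = shade(hit, x, y) if hit is not None else 0.0
            return fb

        print(len(rasterize([], 4, 4)), len(ray_trace([], 4, 4, Camera())))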

  • People who are good at what they do are typically monitoring or communicating with other people who are good at the same thing.

    Or they're at least all watching the same thing and great minds think alike and all that. Kind of like telescopes. No one had even thought of a telescope, yet when optics became good enough, the telescope was independently invented by several different people within a year of each other, but all far enough apart that communication among them would have taken longer than a year, s
  • I remember back in 2004, when Beowulf clusters were still a cool thing, Quake III getting the real-time raytracing treatment. Great to see it in individual machines now, but I also wonder what cluster tech has replaced the old Beowulf...

    https://games.slashdot.org/story/04/06/07/2350243/quake-iii-gets-real-time-ray-tracing-treatment
