
 




Ask Slashdot: Why Are 3D Games, VR/AR Still Rendered Using Polygons In 2019?

dryriver writes: A lot of people seem to believe that computers somehow need polygons, NURBS surfaces, voxels or point clouds "to be able to define and render 3D models to the screen at all." This isn't really true. All a computer needs to light, shade, and display a 3D model is to know the answer to the question "is there a surface point at coordinate XYZ or not." Many different mathematical structures or descriptors can be dreamed up that can tell a computer whether there is indeed a 3D model surface point at coordinate XYZ or behind a given screen pixel XY. Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics. The brains who invented the technique back in the late 1960s probably figured that by the 1990s at the latest, their method would be replaced by something better and more clever. Yet here we are in 2019 buying pricey Nvidia, AMD, and other GPUs that are primarily polygon/triangle accelerators.

Why is this? Creating good-looking polygon models is still a slow, difficult, iterative, and money-intensive task in 2019. A good chunk of the $60 you pay for an AAA PC or console game covers the sheer amount of time, manpower, and effort required to make everything in a 15-hour-long game experience using unwieldy triangles and polygons. So why still use polygons at all? Why not dream up a completely new "there is a surface point here" technique that makes good 3D models easier to create and may render much, much faster than polygons/triangles on modern hardware to boot? Why use a 50-year-old approach to 3D graphics when new, better approaches could be pioneered?
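The submitter's "is there a surface point at coordinate XYZ" framing can be made concrete with an implicit surface: a scalar function whose zero set is the surface. A minimal sketch in Python (the function names are illustrative, not from any engine):

```python
def sphere_f(x, y, z, r=1.0):
    """Implicit function for a sphere of radius r at the origin:
    negative inside, zero on the surface, positive outside."""
    return x * x + y * y + z * z - r * r

def on_surface(f, x, y, z, eps=1e-9):
    """The submitter's question: is there a surface point at (x, y, z)?"""
    return abs(f(x, y, z)) < eps
```

Renderers that consume such implicit descriptions (ray marchers, for example) repeatedly evaluate f along a ray instead of intersecting explicit triangles.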
This discussion has been archived. No new comments can be posted.

  • So... (Score:5, Interesting)

    by Anonymous Coward on Friday April 26, 2019 @08:49PM (#58498904)

    Which mathematical model would you LIKE to use?

    All of these solve one key issue: it's fucking expensive to represent detail at the molecular, atomic, subatomic level... how far down do you want to go?

    Instead, I define a construct to represent that set of points (which are by definition infinite) as a single entity, vastly decreasing my computational workload.

    Something like... say, a bounded plane. Maybe with three vertices, each with 3 coordinates. That means I only have to keep track of 9 numbers instead of an infinite number of points.
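Those 9 numbers are in fact enough to answer the same "is there a surface point here" query, for example via barycentric coordinates. A rough sketch (assumes the query point already lies in the triangle's plane):

```python
def point_in_triangle(p, a, b, c, eps=1e-9):
    """Barycentric point-in-triangle test. Each of a, b, c is an
    (x, y, z) tuple, so the whole triangle is exactly 9 stored numbers.
    Assumes p is coplanar with the triangle."""
    def sub(u, v):
        return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

    v0, v1, v2 = sub(b, a), sub(c, a), sub(p, a)
    d00, d01, d11 = dot(v0, v0), dot(v0, v1), dot(v1, v1)
    d20, d21 = dot(v2, v0), dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom  # weight of vertex b
    w = (d00 * d21 - d01 * d20) / denom  # weight of vertex c
    return v >= -eps and w >= -eps and v + w <= 1 + eps
```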

    • Re:So... (Score:5, Insightful)

      by Anonymous Coward on Friday April 26, 2019 @09:04PM (#58498968)

      A lot of people seem to believe that computers somehow need polygons . . . . . This isn't really true.

      Yes, it is true. You do need polygons.

      Unfortunately, the author of this piece is a dumbass who doesn't know what he is talking about. Notice he doesn't say that there is a better way to do things, without polygons, he just says "somebody should invent a better way".

      No shit, Sherlock.

      Many times, the answer to a question like this is very simple: If there was a better way, somebody would have thought of it and we would be doing it.

      • Re:So... (Score:4, Interesting)

        by Narcocide ( 102829 ) on Friday April 26, 2019 @09:13PM (#58499002) Homepage

        Well, there are still hardware limitations he's clearly not aware of. Computers got much faster and storage got much denser, but we're not quite at the point where we can gain any type of performance or production cost benefit by ditching polygons in favor of a raw 3d pixel map.

        • by raymorris ( 2726007 ) on Friday April 26, 2019 @10:28PM (#58499206) Journal

          The author wonders why we've been building 3D objects from triangles for the last 50 years. Apparently he doesn't know that we've been building 3D objects from triangles for at least 4,000 years - because it's a really good way to do it.

          • by jellomizer ( 103300 ) on Saturday April 27, 2019 @09:13AM (#58500336)

            Not to mention that modern computers are digital, not analog. Every point will need to be approximated to an integer value.
            We still have problems with curves, so it is better to just use more triangles than other wobbly shapes, as a computer can draw a thousand triangles faster than it can draw a proper sine curve. But using those thousand triangles we can get a good approximation of the sine curve at less cost.
            Also, as you have pointed out, we have been measuring the 3D world with triangles for thousands of years. In terms of programming, it is easier to code a technique you can comprehend than to simulate an extremely abstract concept.
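The triangles-versus-curves point can be quantified: the worst-case error of a piecewise-linear (chord) approximation of sin(x) shrinks roughly quadratically as you add segments. A quick illustration (plain Python, no graphics API assumed):

```python
import math

def max_chord_error(n, lo=0.0, hi=math.pi):
    """Worst deviation between sin(x) and a piecewise-linear
    approximation using n equal segments, sampled densely."""
    worst = 0.0
    for i in range(n):
        x0 = lo + (hi - lo) * i / n
        x1 = lo + (hi - lo) * (i + 1) / n
        y0, y1 = math.sin(x0), math.sin(x1)
        for k in range(1, 50):
            t = k / 50
            x = x0 + (x1 - x0) * t
            chord = y0 + (y1 - y0) * t  # linear interpolation
            worst = max(worst, abs(math.sin(x) - chord))
    return worst
```

Doubling the segment count roughly quarters the error, which is why throwing more cheap triangles at a curve works so well.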

        • The author does mention voxels; he was implying those were bad and old too.

      • Re:So... (Score:5, Insightful)

        by Shikaku ( 1129753 ) on Friday April 26, 2019 @09:19PM (#58499024)

        Reminds me of the Sega Saturn. One of the first 3D consoles, but horrible for devs to use and build for, for many reasons, the biggest being one simple fact: its "polygons" were all quadrilaterals, not triangles. That really hindered any 3D that had been planned. This goes to the same reason that will keep coming up on this topic: triangle polygons work better and faster than everything else people have tried.

        • One of the first 3D consoles, but horrible to use and make for devs for many reasons but the biggest was one simple reason: "polygons" are all quadrilaterals, not triangles.

          This seems a strange perspective by today's standards. I think most 3D modellers today would say that quads are much easier to work with for a variety of reasons: easier loop cuts, easier subdivision when you need more detail, cleaner lines when binding an animated character or other object to a skeleton or other rig, etc. Of course those quads still get converted to triangles at a lower level for rendering purposes, but I don't know anyone in the industry today who works with triangles as their preferred approach.

          • by spitzak ( 4019 )

            It is correct that subdivision likes quads much more than triangles, and it is desirable to use as few triangles as possible, but you can't get to none: you always need a few triangles on the model to make the shape you desire while keeping the density of quads fairly even across the surface.

      • Re:So... (Score:5, Informative)

        by wierd_w ( 1375923 ) on Friday April 26, 2019 @10:00PM (#58499126)

        Not exactly true AC.

        Take NURBS surfaces. These are basically 3D vector boundary profiles. They are still infinitely thin skins that you apply one or more textures to, but they are not polygonal meshes. They are actual curved surfaces ruled by mathematical definitions. As such, you can zoom in on them endlessly close and have no sharp edges-- unless you want one.

        However, they are tricky to use, and can cause renderers to flip their shit when you feed them mathematically possible but physically dubious structures like Möbius strips, because of their unorientability. (The renderer has difficulty determining which parts of the strip to occlusion-cull, which side to apply the texture to, etc.) They are also more resource-intensive to use en masse, compared to triangle polygon meshes.

        • by Anonymous Coward

          Plus it can be darn hard to get a flat surface with NURBS, and some things actually are flat.

          • Re: So... (Score:3, Funny)

            by Anonymous Coward

            NURBS - Nobody Understands Rational B-Splines

        • by ceoyoyo ( 59147 )

          Fundamentally, 3D surface rendering is about taking a point cloud and interpolating the missing points when you need to. To do that you need a connectivity model (triangles are simplest) and an interpolation method. Most rendering uses linear interpolation because it's fast and easy.

          NURBS is spline interpolation, but it still uses a polygon mesh (in 2D).

      • Re:So... (Score:4, Interesting)

        by irchans ( 527097 ) on Saturday April 27, 2019 @09:03AM (#58500300)

        Yes, it is true. You do need polygons.

        Unfortunately, the author .... doesn't know what he is talking about .... If there was a better way, somebody would have thought of it

        I agree that polygons are very useful for graphics. I also agree that many people have thought about how to represent 3D objects, so most of the simple ways to represent 3D objects have been explored. I also think that for most 3D objects, polygons are the most computationally efficient representation.

        I disagree with the statement "You need polygons." For example, a sphere can be represented by x^2 + y^2 + z^2 = 1, no polygons.

        One (inefficient) way to represent any 3D volume would be as a union of spheres or ellipsoids (assuming no infinitely thin 3D objects and also assuming a finite level of resolution). Generalizing that approach, you could represent any 3D object as the union of objects with each object represented as {(x,y,z) | f(x,y,z) less than or equal to 0} where f(x,y,z) is a polynomial in x, y, and z.

        There are many other ways. You could represent a 3D object as the interior of a 2-dimensional manifold (a 2D manifold is just the precise mathematical term for a 2D surface), where the manifold is represented by overlapping patches of homeomorphic mappings from the interior of a circle into 3D space.

        The thing is that the information in a 3D scene needs to be represented somehow and polygons are one of the easiest ways to represent 2D surfaces. If the surfaces have no corners, then the information can be encoded efficiently with NURBS or B-splines which warp the flat polygons.
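The union-of-implicit-volumes idea in the comment can be sketched as a toy point-membership test (illustrative only; a real renderer needs far more than membership queries):

```python
def inside_union(point, fns):
    """True if point is inside any of the implicit solids, where each
    f maps (x, y, z) to a scalar and f <= 0 means 'inside', matching
    the {(x,y,z) | f(x,y,z) <= 0} sets described above."""
    x, y, z = point
    return any(f(x, y, z) <= 0 for f in fns)

# Two unit spheres, at the origin and at (2, 0, 0):
two_spheres = [
    lambda x, y, z: x**2 + y**2 + z**2 - 1,
    lambda x, y, z: (x - 2)**2 + y**2 + z**2 - 1,
]
```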

        • Re:So... (Score:5, Informative)

          by Savantissimo ( 893682 ) on Saturday April 27, 2019 @03:00PM (#58501542) Journal

          "One (inefficient) way to represent any 3D volume would be as a union of spheres..."

          Representing 3-D models as spheres can be quite efficient using Conformal Geometric Algebra, which also uses the same representation for points (0-radius spheres) and planes (infinite-radius spheres). It also has point pairs (1-D spheres), flat points (flat point : point pair :: point : sphere), circles (2-D spheres) and lines (infinite-radius circles). It does this by using two additional dimensions and Clifford Algebra, but using it is quite simple; even middle schoolers should be able to use it. 3D Euclidean Geometry through Conformal Geometric Algebra (a GAViewer tutorial) [science.uva.nl]

          This technology applied by the British company Geomerics and incorporated in game engines enabled real-time radiosity lighting in games, for instance letting arbitrarily-placed fireballs light up the scene. Some of the best papers on Geometric Algebra are by the Cambridge professor founders of Geomerics such as Chris Doran. See University of Cambridge Geometric Algebra Resources [cam.ac.uk]

    • For fun, I once tried making a ray tracer based on spheres instead of triangles. A nice part is that calculating the normal is trivial. A bad part is that you often want shapes with straight edges, and those are painful to model with intersections of very large spheres (like one to define the surface and 4 more to define the bounding box for that surface). And hardware is really good at triangle math now.
      • The sphere is only one of the primitives a raytracer needs, and it is the only finite one you need. The sphere of course divides the universe into an inside and an outside.
        The infinite plane is another. The plane divides the universe up into an inside and outside just as the sphere does.

        The flat surface you wanted will normally come from the infinite plane. All convex objects with flat sides can be represented as the intersection of a set of infinite planes. All non-convex objects with flat sides can be represented
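A convex solid as an intersection of half-spaces, per the comment above, can be sketched like this (the sign convention is an arbitrary choice here):

```python
def inside_convex(point, half_spaces):
    """half_spaces is a list of ((nx, ny, nz), d) pairs; a point p is
    inside a half-space when dot(normal, p) <= d. The intersection of
    all the half-spaces defines the convex solid."""
    px, py, pz = point
    return all(nx * px + ny * py + nz * pz <= d
               for (nx, ny, nz), d in half_spaces)

# The unit cube [0, 1]^3 as six bounding planes:
cube = [
    ((1, 0, 0), 1), ((-1, 0, 0), 0),
    ((0, 1, 0), 1), ((0, -1, 0), 0),
    ((0, 0, 1), 1), ((0, 0, -1), 0),
]
```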
    • by Immerman ( 2627577 ) on Saturday April 27, 2019 @01:24AM (#58499638)

      I like that they answered their own question without noticing:

      Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics.

      Polygons are used because they're a very efficient way to put the limited resources available to dramatic effect. It doesn't matter how many resources you have; they're still limited. Unless your alternative is something truly, incredibly clever that can leverage larger amounts of resources beyond some inflection point in a better-than-linear manner, it will probably deliver less impressive results than sticking with polygons. At least so long as you're dealing with objects with clearly defined surfaces.

      They seem to be conflating *rendering* and *creating* models though. For rendering it's hard to beat the performance of polygons and the various tricks that can be done with them. For creation though there have long been many alternatives - voxels, (pseudo)fractal algorithms, virtual clay, etc., many of which are rendered, even in the modeling tools, as polygonal approximations, but are internally described in a very different fashion. Perhaps that distinction is confusing them.

      • by AmiMoJo ( 196126 )

        If you look at the history of graphics one thing is clear: brute force always wins.

        Lots of different, very clever techniques for improving rendering speed have been created over the years. In the end they all fell by the wayside as the next generation of GPUs just threw more and more power at the problem. Thousands of parallel cores, insane memory bandwidth.

        What theoretical benefits exist from these alternative schemes tend to fall away when you consider two factors.

        1) All the tools for creating games use polygons

        • I'm trying to think of clever techniques that fell by the wayside. Seems like most of the good ones are still in use, they just migrated from "clever technique" to "standard technique", since they wrung even more impressive results from all that ever-increasing brute force.

          • by Scoth ( 879800 )

            Most of the ones I can think of faded more due to the march of technology than to them not working well, like flat shading to Gouraud shading to Phong shading. I guess not a lot ended up using full voxel engines after some big hype in the 90s, although they may be making something of a comeback with the increase in computing power. Maybe using quads instead of tris, a la the Sega Saturn, although one could argue whether that was clever or not?

            • Unless I'm very much mistaken, those techniques (or their descendants) are still widely used for lighting, they're just combined with texture-mapping to provide surface detail. What used to be cutting-edge "this is all the hardware can handle" techniques are now just one aspect of a much more sophisticated rendering pipeline.

              Quads though - I'm not sure there was ever a good argument for them, other than being able to conveniently piggyback on a lot of the existing 2D sprite-rendering hardware. If they're

      • That last part is key. The OP talks about the problems of rendering and then switches to the fact that game cost is driven by design. Well, a button press can turn any design into polygons, so he's conflating some incredibly different things.

    • by jrumney ( 197329 )

      Triangles and polygons are a low overhead way to simplify the problem, and computation has advanced to the point where we have a choice:

      1) Use our increased computing power to change to a more complex model.
      2) Use our increased computing power to get more detail, smoother motion etc using the same model.

      Probably choosing option 1 will give us a bit of 2 - smoother shading without changing the detail level, for example - but if 1 was the best option for older GPUs, it is likely the best option for current GPUs

      • by djinn6 ( 1868030 )

        Or 3) Use our increased computing power to do what we did previously, but with less effort.

        If it continues, one day we'll see Hollywood-tier graphics from indie devs.

    • Re:So... (Score:5, Funny)

      by Hognoxious ( 631665 ) on Saturday April 27, 2019 @06:45AM (#58500026) Homepage Journal

      Which mathematical model would you LIKE to use?

      Duh! Blockchains, of course.

    • by spitzak ( 4019 )

      Actually we are using polygons *more* than ever before in computer graphics. Very early 3D would use spheres and cones and conic sections (and CSG of these). Somewhat later NURBS were very popular (kind of a 2-d square patch defined by curves). And there were triangles. Everything has been replaced by triangles and quads now, along with subdivision surfaces (which are a method of generating finer smoother triangles from a rougher mesh).
      There is a reason for this: polygons allow arbitrary topology. If you in

  • False premise ? (Score:5, Insightful)

    by Anonymous Coward on Friday April 26, 2019 @08:52PM (#58498918)

    > "A good chunk of the $60 you pay for an AAA PC or console game is the sheer amount of time, manpower and effort required to make everything in a 15-hour-long game experience using unwieldy triangles and polygons"

    This doesn't seem true at all. The time is spent making beautiful art. The translation to polygons is mostly automated by the toolsets, now, I believe.

    • by cob666 ( 656740 )

      > "A good chunk of the $60 you pay for an AAA PC or console game is the sheer amount of time, manpower and effort required to make everything in a 15-hour-long game experience using unwieldy triangles and polygons"

      This doesn't seem true at all. The time is spent making beautiful art. The translation to polygons is mostly automated by the toolsets, now, I believe.

      This is so far from the truth. Most modern games are built on already-established engines and SDKs. In addition to the modelling part of any game, you have people working on the story and character scripts, voice actors, sound effects, music, artists creating textures and a myriad of other graphics, testers, marketing and advertising that all go into a AAA title. The modelling is usually handled by the engine SDK, unlike every other facet of putting out a game.

      • The modelling is usually handled by the engine SDK, unlike every other facet of putting out a game.

        Do you mean the "rendering"? Because as a game dev I can assure you: the modeling is handled by modelers, who are people.

        Your point about the many other costs of games is true--askagamedev.tumblr.com has many posts breaking down the many costs--but, like, I think OP (who is also off-base) is trying to make the point that there are increasingly convenient ways to transfer models into game-ready assets, whereas the original question is less about the human cost of making the art, and more about the method used

      • The modelling is usually handled by the engine SDK, unlike every other facet of putting out a game.

          Really? All the people I know on that end of things use tools like 3ds Max, ZBrush, Cinema 4D, Houdini and even Blender.

  • by iCEBaLM ( 34905 ) on Friday April 26, 2019 @08:53PM (#58498924)

    Jesus fuck, how is the computer supposed to know where the surfaces are if it doesn't have some kind of spatial map of the world it's trying to render? That spatial map is made out of polygons until you figure out some new way to do it.

  • by 140Mandak262Jamuna ( 970587 ) on Friday April 26, 2019 @08:53PM (#58498936) Journal
    ... smearing patterns of ink on ground up wood pulp, that is over 700 years old.

    That round thing all these vehicles are riding on is 3000 years old. And using combustion to make raw food edible and digestible ... why, that is at least 500,000 years old.

    Out with the old, and in with the new. That's the mantra.

  • by Gravis Zero ( 934156 ) on Friday April 26, 2019 @09:00PM (#58498956)

    NURBS are great and all, but they require significant computation while saving on data. Polygons, on the other hand, need only some fast matrix manipulation but plenty of data. Add some texturing tricks and your polygons are data-heavy but good enough to fool the user's perception. Currently it costs less to add a boatload of RAM than to add trigonometric computational power.

    • by furiousgeorge ( 30912 ) on Friday April 26, 2019 @09:24PM (#58499044)

      NURBS are not great. They are a pain in the ass. Try making a character out of NURBS one day: getting continuity on edges, seams, cracking, adding local detail (here's a hint - you can't; you have to add isolines across the entire surface). Adding details after a texturing pass will make you want to kill yourself.
      Subdivs - maybe.

        But the premise of this entire post sure comes across as somebody with no idea what he's talking about.

      • by Rockoon ( 1252108 ) on Friday April 26, 2019 @10:47PM (#58499264)
        Agreed, but perhaps restated: NURBS are continually an issue because the volume they describe is non-linear and implicit rather than linear and explicit. Practical rendering may violate the linearity or the explicitness, but not both. I've only seen NURBS used extensively when raycast or raytraced. Part of this may have been due to the marching squares patents, but that's another issue.

        The demo scene used to regularly throw some NURBS in, and they did so because it wasn't at all easy to do in realtime. Now that it's easy (20 years of Moore's law) and the patents on the polygon version have expired, nobody is doing it.

        The developers have voted.

        On the geometry side, the endgame is and always has been constructive solid geometry (CSG) and you really only need to support a couple core primitives. Stuff like polygons can be emulated but carry all the same big data hassles that the regular ol' list-of-polygons geometry that we currently use does.

        Fundamentally it is obvious that polygons aren't it. Polygons are both not primitive enough and too primitive at the same time. Their strengths are also their weaknesses. Polygons cannot be optimal.

        I would call polygons the vertex centric view of the geometry-describing problem. A list of "neighboring" vertices describe a face that the renderer uses.
        On the other hand CSG is the volume centric view of the geometry-describing problem. A list of "neighboring" volumes describe the faces that the renderer uses.

        NURBS are interesting and cool, but they are a terrible rendering primitive; nobody uses them unless they have to. CSG is superior with a casting renderer, and polygons are still the core of a rasterizing renderer, so if you are bothering with this abstraction it's for something only it offers.
      • Comment removed based on user account deletion
  • Surface meshing could be done another way? Go ahead and publish your results or shop them around.
  • by auzy ( 680819 ) on Friday April 26, 2019 @09:07PM (#58498976)

    What does the author propose is used to determine if there is a surface point there? Magic?

    Great question... unfortunately, it's pointless if the author doesn't have a single idea of his own. It sounds more like they're a journalist trying to gather material for a new article.

    • I'd prefer the journalist aspect. As it is, the submission has less value than used toilet paper.
    • What does the author propose is used to determine if there is a surface point there? Magic?

      You are making the same fundamental error the first question in the summary makes.

      There is always a surface.

      The backward solutions to the rendering problem ask "which pixels?" for every surface. These are the rasterizers.
      The forward solutions to the rendering problem ask "which surface?" for every pixel. These are the raycasters and raytracers.

      The question "Is there a surface point there?" in all its forms doesn't make any sense at all given these facts. It's a fundamental error. A gross misunderstanding
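The forward/backward split described above is essentially a swap of loop order. A toy sketch (the `covers` predicate and the interval "scene" are made up for illustration):

```python
def rasterize(surfaces, pixels, covers):
    """Backward: for every surface, ask 'which pixels does it cover?'"""
    hit = set()
    for s in surfaces:
        for p in pixels:
            if covers(s, p):
                hit.add(p)
    return hit

def raycast(surfaces, pixels, covers):
    """Forward: for every pixel, ask 'does any surface cover it?'"""
    hit = set()
    for p in pixels:
        if any(covers(s, p) for s in surfaces):
            hit.add(p)
    return hit
```

Both visit the same (surface, pixel) pairs and agree on the result; the practical difference lies in which loop you can accelerate, cull, or parallelize.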

    • Actually that is about the only thing the author got right. It's the premise he got wrong. All the techniques he talks about are already used, but in the end a button press converts the result to a polygon.

  • by Aardappel ( 50644 ) on Friday April 26, 2019 @09:08PM (#58498978) Homepage

    The primary reason (for AAA titles at least) is that games compete on visual quality, and that the best quality is still easiest to achieve with polygons.

    Today's GPUs are perfectly capable of alternate rendering methods: ray-tracing / ray-marching of voxels or signed distance fields, point clouds, volume rendering, etc. You can find endless examples of these online, and it would be totally possible to make a game with some of these. In fact there are plenty of examples of ray-traced older games (the various Quakes, Minecraft, etc.) looking pretty cool. But here's the thing: given the same amount of GPU power, you will not achieve the same visual fidelity as can be achieved with more traditional methods. They are not competitive, as the effort spent per pixel is much higher.

    Then there's the issue of tooling and training: the AAA games industry is full of people who are really good at creating these polygon meshes with their very familiar professional tools. Want to change to a different rendering style? Now you need to train your artists to compose models out of mathematical functions. Good luck!

    These should be less of an issue for indie games, which are totally fine not maxing out the GPU in exchange for a unique look. But even for them the switch may be daunting.

  • I'm pretty sure AMD and Nvidia would love to hear better alternatives. And while it is strictly true that the first phase of the graphics pipeline involves geometry and ultimately its projection onto a 2D plane where that surface question is asked, the final stage where it is asked does not strictly need geometry as an input to answer it. The compute facilities of modern graphics cards are pretty amazing, provided you know how to work with them.

    In other words, most of these accelerators are capable of

  • by UnknownSoldier ( 67820 ) on Friday April 26, 2019 @09:13PM (#58499004)

    Your "new" rendering method _already_ exists.

    First, go read Inigo Quilez's Rendering Worlds with two triangles PDF [iquilezles.org]

    Recall that there are two ways to render surfaces:

    * Explicit (Triangles)
    * Implicit (Signed Distance Fields, etc)

    The problem is how do you easily texture implicit surfaces?

    GPUs accelerate triangles (explicit surfaces) because:

    * Vertices guaranteed to be co-planar, which simplifies the math,
    * The perspective divide for Texture Mapping is relatively straightforward,
    * Easy for artists to model a high-polygon model (1 million+ triangles) and create a "low-poly" version (~50,000 triangles).
    * Fast
    * Easy to understand

    Ray marching signed distance fields (implicit surfaces) is dog slow because you need to keep stepping and calculating intersection points over and over again.

    Ray Tracing is dog slow on GPUs because it touches VRAM (GPU memory) in random order. GPUs were NOT designed for that like CPUs are.

    Path Tracing is also hideously slow because you need to shoot MILLIONS of rays to avoid "speckle" artifacts of under-sampling.

    More so with 3 bounce lighting to "properly" calculate Global Illumination.

    Every methodology has strengths and weaknesses.

    - Non polygonal surfaces like clouds look like ass with polygon blending
    - Ray Marching requires N steps PER ray. Where N is 64 to 128 steps.

    There is no "Holy Grail" of rendering because simulating reality requires a crap ton of math and computers are still slow at it.

    Yes, Real Time ray tracing is finally here, after 20+ years of failed hardware, but that will take a few years to go mainstream.
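For reference, the ray-marching loop described above (N distance-field evaluations per ray) looks roughly like this sphere-tracing sketch; a real renderer would also compute normals and shading:

```python
import math

def sphere_sdf(p):
    """Signed distance from p to a unit sphere at the origin."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def ray_march(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance along the ray by the SDF value, the
    largest step guaranteed not to overshoot the surface. Returns the
    hit distance, or None if the ray escapes. The repeated stepping is
    exactly the per-ray cost (up to max_steps SDF evaluations) that
    makes this slower than rasterizing triangles."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > max_dist:
            return None
    return None
```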

    • Thank you, thank you, thank you. I posted the same too. I should have read the comments first...
  • that were basically distorted sprites, and you can see how well that worked out. Not saying I don't love me some Panzer Dragoon (especially Saga) or Radiant Silvergun, but polygons + shaders pretty much work.

    Also, I think we're all massively overestimating the role of polygons and underestimating pixel shaders. This isn't 1995. We don't build scenes out of triangles all that much. There's an article [adriancourreges.com] on what it takes to render a frame of Deus Ex and there's a hell of a lot more going on than texturing some wi
    • The Saturn's problem wasn't rectangles vs. triangles. It was not having hardware for transparency. Developers described the platform as a pile of chips on a board, because they had to do everything. A small handful of games had transparency effects, but they had to do it by brute force and wound up dedicating a CPU to the task. Not putting transparency into the GPU meant having to have two CPUs, and that made the Saturn expensive.

      The Playstation also had a much, much better controller than anything on the S

      • See here [youtube.com] for how it worked. And on most TVs dithering could do a convincing effect for the few instances where the Saturn's oddball design meant that transparencies couldn't be done by the hardware. I didn't realize, for example, that the life bars in Street Fighter Alpha 2 weren't transparent until I got my hands on an S-Video cable. You couldn't tell over composite, which is what 90% of folks had.
        • I guess you're talking about 2D, where the Saturn could do a very bad transparency effect that only worked on one sprite [mattgreer.org]. But I'm talking about 3D, where there was no transparency effect at all. Meanwhile, the Playstation could do transparent 3D objects. I didn't specify, so your confusion is understandable. The fact is that this one feature was drastically important, and Sega just skipped it. And 2D transparency was also important, and Sega mostly skipped it — sprites were composited against the background

  • by Anonymous Coward

    Fun fact. Before Microsoft created the Direct3D standard, nVidia produced a curve rendering chip. D3D came out, and nearly bankrupted them. They chose to make their next chip a hardware accelerated version of the D3D spec. And now we're here.

    If you tried to change it, nobody will have art assets or experience with the other modelling techniques. Shaders will have to work differently. If one of the big 3 of gaming (Sony, Nintendo, or MS) tried to force a change, and somehow convinced one of the big 2 o

  • Because asking "why don't they do it a different way" isn't the same as creating a different way and doing all the additional work to bring the new way up to the same quality and productivity as the old way. The old way was refined and improved 1000 times, and it has great software tools to support its use and smooth out its problems.

    Knowing a "better way" doesn't get you very far without all the support work being done. And you don't even propose a "better way". It's a thousand-mile walk and you haven't taken the first step.

  • Comment removed (Score:5, Insightful)

    by account_deleted ( 4530225 ) on Friday April 26, 2019 @09:46PM (#58499082)
    Comment removed based on user account deletion
    • by guruevi ( 827432 )

      I was about to say the same thing: rendering polygons is cheap; everything else is expensive. You also have to remember that not everyone can afford an RTX 2080 in their rig. I play AAA games on a laptop with an Intel GPU. Sure, it's slower and doesn't run 300 fps at 4K, but it satisfies my need for an occasional shooting match with friends.

  • by Quakeulf ( 2650167 ) on Friday April 26, 2019 @09:48PM (#58499086)
    Here's an example at ShaderToy: https://www.shadertoy.com/view... [shadertoy.com]
  • Yeah, good luck with that. You seem to be completely clueless how this works. Also, what do you think mathematicians do all the time?

    This must be one of the most stupid /. stories in quite a while.

  • You asked why??? Well... X,Y,Z coordinates represent a point in space. With two coordinates you can make a line. With three coordinates you can make a triangle, which is the minimum number of coordinates needed to create a surface. That is why!!! It seems efficient. Perhaps if the point in space were a spherical object? Or maybe if the line were a circle?
  • Triangles are Easy (Score:5, Informative)

    by cdecoro ( 882384 ) on Friday April 26, 2019 @09:53PM (#58499118)

    Graphics cards use triangles (not general polygons, but usually triangles) because they are easy to render quickly. All the GPU needs to do is an easy 4x4 matrix multiplication per vertex to get the screen space coordinates, and rasterize the points that fall inside. (And in most cases, the vertices are shared and indexed between multiple triangles, so fewer than three vertex transformations are required per triangle.) Moreover, the transformations are highly parallelizable, as the GPU is doing the same matrix multiplication to a large number of vertices, simultaneously and independently.
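    That per-vertex transform can be sketched in a few lines. This is a minimal illustration, not any particular API: the `perspective` matrix layout and all names below are assumptions chosen for clarity (a column-vector convention; real graphics APIs differ in sign and depth-range details).

```python
import numpy as np

def perspective(fov_y, aspect, near, far):
    """Build an illustrative 4x4 perspective projection matrix
    (column-vector convention; real APIs differ in details)."""
    f = 1.0 / np.tan(fov_y / 2.0)
    return np.array([
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def project(vertices, mvp):
    """One 4x4 matrix multiply per vertex, then the perspective divide."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # append w = 1
    clip = homo @ mvp.T                                        # the per-vertex multiply
    return clip[:, :3] / clip[:, 3:4]                          # normalized device coords

tri = np.array([[0.0, 0.0, -5.0],
                [1.0, 0.0, -5.0],
                [0.0, 1.0, -5.0]])
ndc = project(tri, perspective(np.pi / 3.0, 16.0 / 9.0, 0.1, 100.0))
print(ndc)   # three screen-space points, ready to rasterize
```

    Because every vertex goes through the same matrix, the whole batch is one data-parallel operation, which is exactly the workload GPUs are built around.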

    A triangle mesh is a linear approximation of an arbitrary surface. You can get arbitrarily close to that surface by subdividing the triangles, and if you subdivide finely enough, the triangles are well below pixel size. (FYI, this idea of "micropolygons" was the basis for Pixar's original RenderMan software.)

    It's important to realize that most games and other 3D applications are not geometry-bound, they are fill-bound. A significantly larger amount of time is spent on shading. For example, assume that you have a scene with 10 shadow-mapped lights. This means that for every pixel that is ultimately rendered to the screen, the GPU performed *at least* 10 lookups into a shadow map to compute lighting visibility, and then evaluated and summed the reflectance function for each of those (each of which may itself involve multiple texture lookups). And depending on the draw ordering, it may be the case that those evaluations were wasted, because the fragment ended up being occluded by a later-drawn fragment that was in front of it. Multiply that by the number of pixels. And if you're using supersampled antialiasing, then multiply that by the sampling rate.
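    The multiplication in that example is easy to make concrete. The resolution, light count, sampling rate, and overdraw factor below are illustrative assumptions, not measurements:

```python
# Back-of-envelope fill cost (all numbers illustrative).
width, height = 1920, 1080   # output resolution
lights = 10                  # shadow-mapped lights, as in the example above
msaa = 4                     # supersampling rate
overdraw = 2                 # average fragments shaded per surviving pixel

lookups = width * height * lights * msaa * overdraw
print(f"{lookups:,} shadow-map lookups per frame")   # 165,888,000
```

    At 60 frames per second that is on the order of ten billion texture lookups per second from shadow mapping alone, which is why fill rate, not triangle count, is usually the bottleneck.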

    There has been a lot of work on non-polygonal representations of geometry. In the early 2000's, point-based rendering was a very active area of interest (search for "QSplat" for a prominent example). NURBS (or parametric surfaces generally) have long been used in modeling applications. There are also plenty of examples of implicit surfaces or voxel-based rendering. But given the rate at which GPU speed has increased, it is often faster and easier to just use more triangles.

    Of course, there are things that are not well-represented by triangles. As noted, a triangle mesh approximates a surface. If you have something that doesn't have a surface (fog or smoke, for example), a non-triangle representation could well be preferred. But for the most part, the objects that we want to render can be sufficiently approximated with triangles.

  • by vux984 ( 928602 ) on Friday April 26, 2019 @10:03PM (#58499132)

    "All a computer needs to light, shade, and display a 3D model is to know the answer to the question 'is there a surface point at coordinate XYZ or not.'"

    False.

    And with that out of the way, the rest of the thesis put forward can be discarded.

    Seriously, if all you could do was query {x,y,z} for true/false to a 'surfaceHere' test, you'd be unable to render anything.

    The question that actually needs to be answered is not "is there a surface point at {X,Y,Z}" but the far more difficult one: "what surface in the scene does a ray cast at a particular angle in 3-D space intercept first, at what {X,Y,Z} coordinate does the interception take place, and at what angle?"

    It turns out that spheres and triangles are the two easiest things to test. This is just the mathematics involved; it has nothing to do with limited CPU power or RAM. For a sphere, you determine the ray's closest approach to the sphere -- by solving the equation of a line perpendicular to the ray passing through the center of the sphere and calculating its length -- and if that length is less than the radius, you know the ray intersects the sphere.

    For a triangle, you solve the equation for the point at which the ray intersects the plane defined by the triangle (which it will, unless the plane is parallel to the ray), and then check whether that interception point is within the bounds of the triangle. This is also pretty straightforward.
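    Both tests really are just a handful of dot and cross products. Here is a sketch: the sphere case is the closest-approach test described above, and the triangle case follows the well-known Möller-Trumbore algorithm. All function names and the test geometry are illustrative.

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Closest-approach test: project the center onto the (unit) ray and
    compare the perpendicular distance to the radius."""
    oc = center - origin
    t_closest = np.dot(oc, direction)        # ray parameter of closest approach
    d2 = np.dot(oc, oc) - t_closest ** 2     # squared distance from center to ray
    return t_closest >= 0.0 and d2 <= radius ** 2

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: plane intersection and bounds check in one pass.
    Returns the distance t along the ray, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                       # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv
    if u < 0.0 or u > 1.0:                   # outside the triangle's bounds
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

d = np.array([0.0, 0.0, -1.0])
print(ray_sphere_hit(np.zeros(3), d, np.array([0.0, 0.0, -5.0]), 1.0))  # True
t = ray_triangle(np.array([0.25, 0.25, 0.0]), d,
                 np.array([0.0, 0.0, -2.0]),
                 np.array([1.0, 0.0, -2.0]),
                 np.array([0.0, 1.0, -2.0]))
print(t)  # 2.0: the ray hits the triangle at (0.25, 0.25, -2.0)
```

    Every branch here maps naturally onto fixed-function or highly parallel hardware, which is the point of the parent comment.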

    Couple that with purpose built hardware for specifically solving this triangle problem, with multiple pipelines, each capable of solving this problem in parallel for different triangles... and you get GPUs.

    Further, the majority of the shapes we want to model can be very efficiently approximated to arbitrary precision with a triangle mesh, so we use triangle meshes for those things. They are the right tool for the job.

    There are a few exceptions -- things like trees and hair blowing in the wind, or fur, or smoke, for example. And all kinds of special processing is done specifically for these cases.

    But go back to the original question ... how is "is there a surface point at {X,Y,Z}" going to handle fur or wind-blown hair? Honestly, if that's the only heuristic you had, you'd be hard pressed to render a rotating cube under a single light.

  • Find a more computationally efficient method and we can talk. I think we are starting to get there with ray tracing hardware though.

  • In other words, Poly ain't gone.

  • If you invent a fundamentally new rendering approach for virtual environments, you're also rebuilding the entire ecosystem of tools and skills needed to create high-quality content for it on an industrial scale. We have multiple generations of digital artists at this point who have iterated and learned on a triangle-geometry rendering theory. The content creation tools are made for it, the material creation tools are made for it, the project management tools are made for it, even the terms we use are built arou

  • The vast majority of what needs to be rendered is solid. Solids are bounded by surfaces. Surfaces are manifolds. Manifolds can be decomposed into polygons to any desired accuracy and rendered efficiently. Clear enough?

  • Wow, these comments are horrific. Zero creativity or imagination. And the flames! Relax, guys!

    Here's something to think about: When you visualize a scene in your mind's eye, how is your brain representing and rendering the geometry? Is it using triangles? NURBs? Or is it using something novel that you haven't thought about yet?

    Nvidia is currently using neural nets to enhance rendering ( https://blogs.nvidia.com/blog/2017/07/31/nvidia-research-brings-ai-to-computer-graphics/ ), and so is deep mind ( https://

    • Trees don't look realistic in games because they are typically just scenery, so it's not worth the effort.

      If an Ents of Middle Earth game ever gets made, where the main characters are trees, the trees will look great.

  • Why Are 3D Games, VR/AR Still Rendered Using Polygons In 2019?

    Because 3D games are rendered identically to 2D games. There is absolutely no difference in how you have to display a 3D game; it's still LCD screens powered by the same graphics cards either way.

    Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics.

    Who told you that games no longer stress system resources and that we have processing power to spare on inefficient methods of rendering?

  • He's either a) using really bad terminology to ask why we're using rasterization over ray tracing techniques or b) using somewhat bad terminology to ask why we're using polygons over implicit surfaces. In both cases the answer is really the same: modern GPU hardware is designed and optimized for doing lots of rasterization work using polygons, and it's a far more efficient use of the GPU hardware than either the use of implicit surfaces or ray tracing. Both ray tracing and the use of implicit surfaces
  • makes good 3D models easier to create

    Remember that gaming isn't the only use of 3D; there are plenty of product designers, plenty of VFX editors, plenty of 3D animators, and countless other users of 3D modellers and raytracers, all completely unconstrained by the triangle rendering of GPUs.

    Anyone who could considerably improve 3D modelling / rendering as you suggest would be very rich, and alternatives already do exist like parametric shapes for CAD and whatever modelling tools like ZBrush use internally.

  • Polys are fast, they're sooooo fast. Hooray linear algebra!

    Voxels are, uhh, well, they're cubes. And you need so many of them to not look blocky that they're not worth it. Distance fields don't scale to tiny details, as they're affected both by surface area and volume, and so they take up way too much memory if they get detailed. Point clouds and surfels don't have that nice mathematical property of implicit relationships to neighboring points/surfels that makes triangles so compact in memory and
  • by AHuxley ( 892839 ) on Saturday April 27, 2019 @04:09AM (#58499804) Journal
    Very skilled people thought of different methods before GPU use was common.
    Then different GPU products attempted their own math and developer support. They failed in the marketplace.
    Finally the consumers and developers settled on a type of math and code the GPU could support.
    Everyone is happy: the people making the OS, the GPU designers, the people working on advanced computer game code.
    The people buying a new GPU know their games will work on the GPU they have.
    They know the next few years of games they buy will work on their new GPU.
    What are the options? Back to a new VR version of what was the Glide API, 3dfx, DirectX, OpenGL for VR, and per-game GPU support?
  • My background is in physics, and one of the most important themes I learned during the studies was making approximations. Don't waste time on something that only affects the 6th significant digit, because nobody will notice it. It's a lot like avoiding early optimizations in coding. Sure, in some cases you may need to refine the result, but often the only way to get started is by very crude approximations. These things are being used constantly for real work, and the linear interpolation in polygon graphic

  • "Nvidia, AMD, and other GPUs that are primarily polygon/triangle accelerators. "

    They are making the games to be played using these types of devices.

  • Why use a 50-year-old approach to 3D graphics when new, better approaches can be pioneered?

    What's stopping you from inventing it?

    So why still use polygons at all? Why not dream up a completely new "there is a surface point here" technique that makes good 3D models easier to create and may render much, much faster than polygons/triangles on modern hardware to boot?

    Finding a point in a polygon is really fast, and the math is really easy to implement in hardware. Got lots and lots of polygons? Sounds like an embarrassingly parallel problem, which is why GPUs exist. Too bad polygons are poor at approximating many real-world scenes.

  • There are many other techniques, I particularly like signed distance fields.
    However, polygons are still the best when it comes to combining ease of modeling, artistic freedom and reasonable rendering time and quality.

    There are countless impressive demos showing alternatives techniques but in these cases, the scene is made to make the technique look good. In practice, what we want is the opposite.
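    To make the signed-distance-field idea concrete, here is a minimal sketch: a distance function for a sphere, plus the "sphere tracing" loop that steps a ray forward by the distance the field guarantees is empty. The names, the scene (one unit sphere), and all constants are illustrative assumptions.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, -5.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: advance by the field value, which is a safe step size."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t                 # close enough to the surface: a hit
        t += dist
        if t > max_dist:
            break
    return None                      # ray escaped without hitting anything

hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), sphere_sdf)
print(hit)   # ~4.0: the near side of the unit sphere centered at z = -5
```

    The appeal is that complex scenes are just compositions of distance functions (min for union, max for intersection); the drawback, as the parent notes, is that detailed assets are hard to author and store in this form.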

  • ... in 2.5D normal-mapping pseudo-3D simulations or similar environments. Yeah, sure, the viewport is orders of magnitude faster and you get some neat effects dirt cheap, but add in any meaningful 3D interaction or multiplayer and your neat 3D "replacement" will fall flat on its nose.

    The reason we use polygons is precisely because it is the most effective way of doing virtual 3D. It's the only way to cut corners (quite literally actually) whilst maintaining a feasible game environment.

    That's why we have e

  • With "3D" rendered to a regular display, you can get away with using complex textures to approximate 3D detail (like hair on a head, treasure in a chest, etc). With stereo-viewing 3D, the illusion falls flat on its face. Things like voxel-depth can help... a little... but VR/AR really DOES raise the bar.

  • Do a search for "Euclideon rendering", and then evaluate the approach, value proposition, and challenges faced by the Euclideon rendering engine. Then ask yourself why they've pivoted their business model from rendering artificial environments to scientific and medical point clouds.

  • Same reason. They are cheap and convenient, easy to compute.

  • In calculus, a common technique is to subdivide a region into smaller and smaller polygons, until the polygons are so small that you can't distinguish between the mass of polygons and the true shape itself. This allows math to come up with reasonable answers.
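    That convergence is easy to check numerically: the perimeter of a regular n-gon inscribed in a unit circle approaches the circumference 2π as n grows. A small illustrative computation (the function name is made up for this sketch):

```python
import math

def inscribed_ngon_perimeter(n, r=1.0):
    """Perimeter of a regular n-gon inscribed in a circle of radius r."""
    return n * 2.0 * r * math.sin(math.pi / n)

for n in (6, 24, 96, 1536):
    approx = inscribed_ngon_perimeter(n)
    print(n, approx, 2.0 * math.pi - approx)   # error shrinks roughly as 1/n^2
```

    Triangle meshes exploit exactly this: quadratic convergence means doubling the triangle count buys roughly four times the geometric accuracy.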

  • It may be a valid question, but the poster answers it in the last line:
    "Why use a 50-year-old approach to 3D graphics when new, better approaches can be pioneered?"

    Precisely. New approaches can be pioneered, so can nuclear fusion power and intercontinental freight rockets, so why do we still use ships and hydro power?

    Asking why we don't use a mechanism yet to be invented is really kinda dumb. Now asking who is working on that future mechanism, and when they realistically expect it to be usa
