Ask Slashdot: Why Are 3D Games, VR/AR Still Rendered Using Polygons In 2019? 230
dryriver writes: A lot of people seem to believe that computers somehow need polygons, NURBS surfaces, voxels or point clouds "to be able to define and render 3D models to the screen at all." This isn't really true. All a computer needs to light, shade, and display a 3D model is to know the answer to the question "is there a surface point at coordinate XYZ or not." Many different mathematical structures or descriptors can be dreamed up that can tell a computer whether there is indeed a 3D model surface point at coordinate XYZ or behind a given screen pixel XY. Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics. The brains who invented the technique back in the late 1960s probably figured that by the 1990s at the latest, their method would be replaced by something better and more clever. Yet here we are in 2019 buying pricey Nvidia, AMD, and other GPUs that are primarily polygon/triangle accelerators.
Why is this? Creating good-looking polygon models is still a slow, difficult, iterative and money intensive task in 2019. A good chunk of the $60 you pay for an AAA PC or console game is the sheer amount of time, manpower and effort required to make everything in a 15-hour-long game experience using unwieldy triangles and polygons. So why still use polygons at all? Why not dream up a completely new "there is a surface point here" technique that makes good 3D models easier to create and may render much, much faster than polygons/triangles on modern hardware to boot? Why use a 50-year-old approach to 3D graphics when new, better approaches can be pioneered?
So... (Score:5, Interesting)
Which mathematical model would you LIKE to use?
All of these solve one key issue: it's fucking expensive to represent detail at the molecular, atomic, subatomic level... how far down do you want to go?
Instead, I define a construct to represent that set of points (which are by definition infinite) as a single entity, vastly decreasing my computational workload.
Something like... say a bounded plane. Maybe with three vertices, each with 3 coordinates. That means I only have to keep track of 9 numbers instead of an infinite number of points.
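To put numbers on that, here is a minimal C++ sketch (names illustrative) of a triangle stored as nine floats, standing in for the infinite set of surface points it covers:

    // A bounded plane (triangle) as 9 numbers. The three vertices implicitly
    // describe every surface point on the triangle, so none of them is stored.
    struct Vec3 { float x, y, z; };

    struct Triangle {
        Vec3 a, b, c; // 3 vertices x 3 coordinates = 9 floats total
    };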
Re:So... (Score:5, Insightful)
A lot of people seem to believe that computers somehow need polygons . . . . . This isn't really true.
Yes, it is true. You do need polygons.
Unfortunately, the author of this piece is a dumbass who doesn't know what he is talking about. Notice he doesn't say that there is a better way to do things, without polygons, he just says "somebody should invent a better way".
No shit, Sherlock.
Many times, the answer to a question like this is very simple: If there was a better way, somebody would have thought of it and we would be doing it.
Re:So... (Score:4, Interesting)
Well, there are still hardware limitations he's clearly not aware of. Computers got much faster and storage got much denser, but we're not quite at the point where we can gain any type of performance or production cost benefit by ditching polygons in favor of a raw 3d pixel map.
Also, 4,000 years, not 50 (Score:5, Interesting)
The author wonders why we've been building 3D objects from triangles for the last 50 years. Apparently he doesn't know that we've been building 3D objects from triangles for at least 4,000 years - because it's a really good way to do it.
Re:Also, 4,000 years, not 50 (Score:5, Insightful)
Not to mention that modern computers are digital, not analog. Every point will need to be approximated to a finite, discrete value.
We still have problems with curves, so it is better to just use more triangles than other wobbly shapes. A computer can draw a thousand triangles faster than it can draw a proper sine curve, but with those thousand triangles we can get a good approximation of the sine curve at far less cost.
Also to note, as you have pointed out, we have measured the 3D world with triangles for thousands of years. In terms of programming, it is easier to code an approach you can comprehend than to try to simulate an extremely abstract concept.
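As a rough illustration of the curve point above, a C++ sketch (purely illustrative) that approximates one period of a sine curve with straight segments:

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979323846;
        const int N = 16; // more segments = closer fit to the true curve
        const double step = 2.0 * PI / N;
        for (int i = 0; i < N; ++i) {
            double x0 = i * step, x1 = (i + 1) * step;
            // Each segment is just two endpoints; hardware can rasterize
            // thousands of these faster than evaluating a "true" curve.
            std::printf("segment (%.3f, %.3f) -> (%.3f, %.3f)\n",
                        x0, std::sin(x0), x1, std::sin(x1));
        }
        return 0;
    }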
Re: (Score:2)
The author does mention voxels, he was implying those were bad and old too.
Re:So... (Score:5, Insightful)
Reminds me of the Sega Saturn. One of the first 3D consoles, but horrible for devs to use and make games for, for many reasons, the biggest being one simple fact: its "polygons" were all quadrilaterals, not triangles. That really hindered any 3D that had been planned. This goes to the same reason that will keep coming up on this topic: triangle polygons work better and faster than everything else people have tried.
Re: (Score:2)
One of the first 3D consoles, but horrible to use and make for devs for many reasons but the biggest was one simple reason: "polygons" are all quadrilaterals, not triangles.
This seems a strange perspective by today's standards. I think most 3D modellers today would say that quads are much easier to work with for a variety of reasons: easier loop cuts, easier subdivision when you need more detail, cleaner lines when binding an animated character or other object to a skeleton or other rig, etc. Of course those quads still get converted to triangles at a lower level for rendering purposes, but I don't know anyone in the industry today who works with triangles as their preferred a
Re: (Score:2)
You always need a few triangles on the model to make the shape you desire with a fairly even distribution of density of the quads on the surface.
It is correct that subdivision likes quads much more than triangles and it is desirable to get as few triangles as possible, but you can't get none.
Re: (Score:2)
There's a little overhead in recalculating the polygon properties for each tile, but from an application programmer point of view, it's just a 3D graphics API.
Re:So... (Score:5, Informative)
Not exactly true AC.
Take NURBS surfaces. These are basically 3D vector boundary profiles. They are still infinitely thin skins that you apply one or more textures to, but they are not polygonal meshes. They are actual curved surfaces ruled by mathematical definitions. As such, you can zoom in on them endlessly and see no sharp edges, unless you want one.
However, they are tricky to use, and can cause renderers to flip their shit when you feed them mathematically possible but physically dubious structures like Möbius strips, because of their non-orientability (the renderer has difficulty determining which parts of the strip to occlusion-cull, which side to apply texture to, etc.). They are also more resource-intensive to use en masse, compared to triangle polygon meshes.
Re: So... (Score:1)
Plus it can be darn hard to get a flat surface with NURBS, and some things actually are flat.
Re: So... (Score:3, Funny)
NURBS - Nobody Understands Rational B-Splines
Re: (Score:2)
Fundamentally, 3D surface rendering is about taking a point cloud and interpolating the missing points when you need to. To do that you need a connectivity model (triangles are simplest) and an interpolation method. Most rendering uses linear interpolation because it's fast and easy.
NURBS is spline interpolation, but it still uses a polygonal control mesh (in 2D parameter space).
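For the curious, linear interpolation across a triangle is just a weighted sum of the three vertex values; a minimal C++ sketch (names illustrative):

    struct Vec3 { float x, y, z; };

    // Interpolate a per-vertex attribute (color, normal, UV, ...) at a point
    // inside a triangle, given the point's barycentric weights.
    // Inside the triangle: w0 + w1 + w2 == 1 and all weights >= 0.
    Vec3 lerpAttribute(Vec3 v0, Vec3 v1, Vec3 v2,
                       float w0, float w1, float w2) {
        return { w0 * v0.x + w1 * v1.x + w2 * v2.x,
                 w0 * v0.y + w1 * v1.y + w2 * v2.y,
                 w0 * v0.z + w1 * v1.z + w2 * v2.z };
    }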
Re:So... (Score:4, Interesting)
Yes, it is true. You do need polygons.
Unfortunately, the author .... doesn't know what he is talking about .... If there was a better way, somebody would have thought of it
I agree that polygons are very useful for graphics. I also agree that many people have thought about how to represent 3D objects, so most of the simple ways to represent 3D objects have been explored. I also think that for most 3D objects, polygons are the most computationally efficient representation.
I disagree with the statement "You need polygons." For example, a sphere can be represented by x^2 + y^2 + z^2 = 1, no polygons.
One (inefficient) way to represent any 3D volume would be as a union of spheres or ellipsoids (assuming no infinitely thin 3D objects and also assuming a finite level of resolution). Generalizing that approach, you could represent any 3D object as the union of objects with each object represented as {(x,y,z) | f(x,y,z) less than or equal to 0} where f(x,y,z) is a polynomial in x, y, and z.
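A minimal sketch of that idea in C++ (illustrative only): a solid as a union of implicit spheres, queried through a single inside/outside predicate, with no polygons anywhere:

    #include <cmath>
    #include <vector>

    struct Sphere { double cx, cy, cz, r; };

    // The object *is* this predicate: true where f(x,y,z) <= 0 for any
    // sphere in the union, false everywhere else.
    bool inside(const std::vector<Sphere>& spheres,
                double x, double y, double z) {
        for (const Sphere& s : spheres) {
            double dx = x - s.cx, dy = y - s.cy, dz = z - s.cz;
            if (dx*dx + dy*dy + dz*dz <= s.r * s.r)
                return true;
        }
        return false;
    }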
There are many other ways. You could represent a 3D objects as the interior of a 2 dimensional manifold (a 2D manifold is just a precise mathematical term for a 2D surface) where the manifold is represented by overlapping patches of homeomorphic mappings from the interior of a circle into 3D space.
The thing is that the information in a 3D scene needs to be represented somehow and polygons are one of the easiest ways to represent 2D surfaces. If the surfaces have no corners, then the information can be encoded efficiently with NURBS or B-splines which warp the flat polygons.
Re:So... (Score:5, Informative)
"One (inefficient) way to represent any 3D volume would be as a union of spheres..."
Representing 3-D models as spheres can be quite efficient using Conformal Geometric Algebra, which also uses the same representation for points (0-radius spheres) and planes (infinite-radius spheres). It also has point pairs (1-D spheres), flat points (flat point : point pair :: point : sphere), circles (2-D spheres), and lines (infinite-radius circles). It does this by using two additional dimensions and Clifford Algebra, but using it is quite simple; even middle schoolers should be able to use it. 3D Euclidean Geometry through Conformal Geometric Algebra (a GAViewer tutorial) [science.uva.nl]
This technology applied by the British company Geomerics and incorporated in game engines enabled real-time radiosity lighting in games, for instance letting arbitrarily-placed fireballs light up the scene. Some of the best papers on Geometric Algebra are by the Cambridge professor founders of Geomerics such as Chris Doran. See University of Cambridge Geometric Algebra Resources [cam.ac.uk]
Intersections of sphere surfaces aren't horrible. (Score:3)
Re: (Score:3)
The infinite plane is another. The plane divides the universe up into an inside and outside just as the sphere does.
The flat surface you wanted will normally come from the infinite plane. All convex objects with flat sides can be represented as the intersection of a set of infinite planes. All non-convex objects with flat sides can be represente
The answer is in the question (Score:5, Insightful)
I like that they answered their own question without noticing:
Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics.
Polygons are used because they're a very efficient way to use the limited resources available to dramatic effect. It doesn't matter how many resources you have, they're still limited. Unless your alternative is something truly, incredibly clever that can leverage larger amounts of resources beyond some inflection point in a better-than-linear manner, it will probably deliver less impressive results than sticking with polygons. At least so long as you're dealing with objects with clearly defined surfaces.
They seem to be conflating *rendering* and *creating* models though. For rendering it's hard to beat the performance of polygons and the various tricks that can be done with them. For creation though there have long been many alternatives - voxels, (pseudo)fractal algorithms, virtual clay, etc., many of which are rendered, even in the modeling tools, as polygonal approximations, but are internally described in a very different fashion. Perhaps that distinction is confusing them.
Re: (Score:3)
If you look at the history of graphics one thing is clear: brute force always wins.
Lots of different, very clever techniques for improving rendering speed have been created over the years. In the end they all fell by the wayside as the next generation of GPUs just threw more and more power at the problem. Thousands of parallel cores, insane memory bandwidth.
What theoretical benefits exist from these alternative schemes tend to fall away when you consider two factors.
1) All the tools for creating games use p
Re: (Score:2)
I'm trying to think of clever techniques that fell by the wayside. Seems like most of the good ones are still in use, they just migrated from "clever technique" to "standard technique", since they wrung even more impressive results from all that ever-increasing brute force.
Re: (Score:2)
Most of the ones I can think of fell away due to the march of technology rather than because they didn't work well, like flat shading to Gouraud shading to Phong shading. I guess not a lot ended up using full voxel engines after some big hype in the '90s, although they may be making something of a comeback with the increase in computing power. Maybe using quads instead of tris, à la the Sega Saturn, although one could argue whether that's clever or not?
Re: (Score:2)
Unless I'm very much mistaken, those techniques (or their descendants) are still widely used for lighting, they're just combined with texture-mapping to provide surface detail. What used to be cutting-edge "this is all the hardware can handle" techniques are now just one aspect of a much more sophisticated rendering pipeline.
Quads though - I'm not sure there was ever a good argument for them, other than being able to conveniently piggyback on a lot of the existing 2D sprite-rendering hardware. If they're
Re: (Score:2)
That last part is key. The OP talks about the problems of rendering and then switches to the fact that game cost is driven by design. Well, a button press can turn any design into polygons, so he's conflating some incredibly different things.
Re: (Score:2)
Triangles and polygons are a low overhead way to simplify the problem, and computation has advanced to the point where we have a choice:
1) Use our increased computing power to change to a more complex model.
2) Use our increased computing power to get more detail, smoother motion etc using the same model.
Probably choosing option 1 will give us a bit of 2 - smoother shading without changing the detail level for example, but if 1 was the best option for older GPUs, it is likely the best option for current
Re: (Score:2)
Or 3) Use our increased computing power to do what we did previously, but with less effort.
If it continues, one day we'll see Hollywood-tier graphics from indie devs.
Re:So... (Score:5, Funny)
Duh! Blockchains, of course.
Re: (Score:2)
Actually we are using polygons *more* than ever before in computer graphics. Very early 3D would use spheres and cones and conic sections (and CSG of these). Somewhat later NURBS were very popular (kind of a 2-d square patch defined by curves). And there were triangles. Everything has been replaced by triangles and quads now, along with subdivision surfaces (which are a method of generating finer smoother triangles from a rougher mesh).
There is a reason for this: polygons allow arbitrary topology. If you in
Re:So... (Score:5, Interesting)
> It's vector graphics, especially the text.
That's NOT true for games at all.
People are using 2D textures to store SDF (Signed Distance Fields) after Valve literally wrote the white paper on it. Improved Alpha-Tested Magnification for Vector Textures and Special Effects [akamaihd.net]
There is a summary of the various approaches, bitmap vs SDF vs Vector Glyphs on the GPU here. [aras-p.info]
Here is one SDF demo [amazonaws.com].
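The core of the technique is tiny: sample the distance field and threshold it with a narrow smooth band around the glyph edge. A C++-style sketch of the per-pixel math (the real version lives in a fragment shader; names here are illustrative):

    #include <algorithm>

    // dist: SDF texture sample in [0,1], where 0.5 marks the glyph edge.
    // smoothing: half-width of the anti-aliasing band (roughly fwidth(dist)/2).
    // Returns coverage alpha in [0,1].
    float sdfAlpha(float dist, float smoothing) {
        float t = (dist - (0.5f - smoothing)) / (2.0f * smoothing);
        return std::clamp(t, 0.0f, 1.0f); // linear ramp across the edge
    }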
Re:So... (Score:4)
That was some really interesting reading, and quite solid work. Thanks for the links.
I also found Shape Decomposition for Multi-channel Distance Fields [dspace.cvut.cz] by Viktor Chlumsky to be informative.
Re: (Score:3)
Yup, Viktor wrote his Ph.D. on how to get sharp corners with an SDF texture.
One of the weaknesses of an SDF texture is that sharp corners are NOT preserved. Since SDF textures are traditionally monochrome, you can use the extra channels (green & blue) to store "sharpness".
False premise ? (Score:5, Insightful)
> "A good chunk of the $60 you pay for an AAA PC or console game is the sheer amount of time, manpower and effort required to make everything in a 15-hour-long game experience using unwieldy triangles and polygons"
This doesn't seem true at all. The time is spent making beautiful art. The translation to polygons is mostly automated by the toolsets now, I believe.
Re: (Score:3)
> "A good chunk of the $60 you pay for an AAA PC or console game is the sheer amount of time, manpower and effort required to make everything in a 15-hour-long game experience using unwieldy triangles and polygons"
This doesn't seem true at all. The time is spent making beautiful art. The translation to polygons is mostly automated by the toolsets now, I believe.
This is so far from the truth. Most modern games are designed using already-established engines and SDKs. In addition to the modelling part of any game, you have people working on the story and character scripts, voice actors, sound effects, music, artists creating textures and a myriad of other graphics, testers, marketing and advertising, all of which go into a AAA title. The modelling is usually handled by the engine SDK, unlike every other facet of putting out a game.
Re: (Score:3)
The modelling is usually handled by the engine SDK, unlike every other facet of putting out a game.
Do you mean the "rendering"? Because as a game dev I can assure you: the modeling is handled by modelers, who are people.
Your point about the many other costs of games is true--askagamedev.tumblr.com has many posts breaking down the many costs--but, like, I think OP (who is also off-base) is trying to make the point that there are increasingly convenient ways to transfer models into game-ready assets, whereas the original question is less about the human cost of making the art, and more about the method use
Re: (Score:2)
The modelling is usually handled by the engine SDK, unlike every other facet of putting out a game.
Really? All the people I know on that end of things use tools like 3ds max, ZBrush, cinema4D, Houdini and even Blender.
Idiots pontificating... (Score:3)
Jesus fuck, how is the computer supposed to know where the surfaces are if it doesn't have some kind of spatial map of the world it's trying to render? That spatial map is made out of polygons until you figure out some new way to do it.
Re: (Score:1)
Re: Idiots pontificating... (Score:2)
Re: (Score:2)
It isn't just endemic on Slashdot: it is the entire tech industry. Guys who make six figures coding websites think they know everything.
And don't even get me started on ... (Score:5, Insightful)
That round thing all these vehicles are riding on is 3,000 years old. And using combustion to make raw food edible and digestible... why, that is at least 500,000 years old.
Out with the old, and in with the new. That's the mantra.
Simple: they are cheaper (Score:5, Informative)
NURBS are great and all, but they require significant computation while saving on data. Polygons, on the other hand, only need some fast matrix manipulation but plenty of data. Add some texturing tricks and your polygons are data-heavy but good enough to fool the user's perception. Currently it costs less to add a boatload of RAM than to add trigonometric computational power.
Re:Simple: they are cheaper (Score:5, Informative)
NURBS are not great. They are a pain in the ass. Try making a character out of NURBS one day: getting continuity on edges, seams, cracking, adding local detail (here's a hint: you can't; you have to add isolines across the entire surface). Adding details after a texturing pass will make you want to kill yourself.
Subdivs - maybe.
But the premise of this entire post sure comes across as somebody with no idea what he's talking about.
Re:Simple: they are cheaper (Score:5, Informative)
The demo scene used to regularly throw some NURBS in, and they did so precisely because it wasn't at all easy to do in realtime. Now that it's easy (20 years of Moore's law) and patents on the polygon version have expired, nobody is doing it.
The developers have voted.
On the geometry side, the endgame is and always has been constructive solid geometry (CSG), and you really only need to support a couple of core primitives. Stuff like polygons can be emulated, but that carries all the same big-data hassles as the regular ol' list-of-polygons geometry we currently use.
Fundamentally it is obvious that polygons aren't it. Polygons are both not primitive enough and too primitive at the same time. Their strengths are also their weaknesses. Polygons cannot be optimal.
I would call polygons the vertex-centric view of the geometry-describing problem. A list of "neighboring" vertices describes a face that the renderer uses.
On the other hand, CSG is the volume-centric view of the geometry-describing problem. A list of "neighboring" volumes describes the faces that the renderer uses.
NURBS are interesting and cool, but they are a terrible rendering primitive. Nobody uses them unless they have to. CSG is superior with a casting renderer, and polygons are still the core of a rasterizing renderer, so if you are bothering with this abstraction it's for something only it offers.
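For what it's worth, CSG falls out almost for free once primitives are signed distance functions: union is a min, intersection is a max, and subtraction is a max against a negated operand. A common sketch in C++ (illustrative):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Signed distance to a sphere of radius r at the origin:
    // negative inside, zero on the surface, positive outside.
    float sdSphere(Vec3 p, float r) {
        return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - r;
    }

    // CSG operators on signed distances:
    float csgUnion(float d1, float d2)        { return std::min(d1, d2); }
    float csgIntersection(float d1, float d2) { return std::max(d1, d2); }
    float csgSubtract(float d1, float d2)     { return std::max(d1, -d2); }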
Re: (Score:2)
Subdivision surfaces, on the other hand, aren't being "moved to"
Re: (Score:2)
surface meshing could be done another way... (Score:1)
What is this rubbish? (Score:5, Insightful)
What does the author propose is used to determine if there is a surface point there? Magic?
Great question. Unfortunately, it's pointless if the author doesn't have a single idea of his own. It sounds more like they're a journalist trying to get info to write a new article.
Re: (Score:2)
Re: (Score:2)
What does the author propose is used to determine if there is a surface point there? Magic?
You are making the same fundamental error the first question in the summary makes.
There is always a surface.
The object-order solutions to the rendering problem ask "which pixels?" for every surface. These are the rasterizers.
The image-order solutions to the rendering problem ask "which surface?" for every pixel. These are the raycasters and raytracers.
The question "Is there a surface point there?" in all its forms doesn't make any sense at all given these facts. It's a fundamental error. A gross misunderstan
Re: (Score:2)
Actually that is about the only thing the author got right. It's the premise he got wrong. All the techniques he talks about are already used, but in the end a button press converts the result to a polygon.
best bang for the buck (Score:5, Insightful)
The primary reason (for AAA titles at least) is that games compete on visual quality, and that the best quality is still easiest to achieve with polygons.
Today's GPUs are perfectly capable of alternate rendering methods: ray-tracing / ray-marching of voxels or signed distance fields, point clouds, volume rendering, etc. You can find endless examples of these online, and it would be totally possible to make a game with some of them. In fact there are plenty of examples of ray-traced older games (the various Quakes, Minecraft, etc.) looking pretty cool. But here's the thing: given the same amount of GPU power, you will not achieve the same visual fidelity as can be achieved with more traditional methods. They are not competitive, as the effort spent per pixel is much higher.
Then there's the issue of tooling and training: the AAA games industry is full of people who are really good at creating these polygon meshes with their very familiar professional tools. Want to change to a different rendering style? Now you need to train your artists to compose models out of mathematical functions? Good luck!
These should be less of an issue for indie games: these games are totally fine not maxing out the GPU in exchange for a unique look. But even for those the switch may be daunting.
Lots of money awaits a good answer... (Score:2)
I'm pretty sure AMD and Nvidia would love to hear better alternatives. And while it is strictly true that the first phase of the graphics pipeline does involve geometry, and ultimately its projection onto a 2D plane where that surface question is asked, the final stage where it is asked does not strictly need geometry as an input to answer it. The compute facilities of modern graphics cards are pretty amazing, provided you know how to work with them.
In other words, most of these accelerators are capable of
Re: (Score:2)
The article is wrong, really. It's not about knowing whether there's a surface at (x,y,z) at all. The only question you need to answer is "what colors do I need to shade the pixels in my one or more 2D pixel grids (i.e. displays)?" You can do that using very many methods: you can decompress a JPEG and shade the pixels based on their position within that JPEG image (or MPEG), you can write a pixel shader that calculates the value of every pixel based on whether the pixel is even or odd, you can of course utilize the
New render method ALREADY exists (Score:5, Informative)
Your "new" rendering method _already_ exists.
First, go read Inigo Quilez's Rendering Worlds with two triangles PDF [iquilezles.org]
Recall that there are two ways to render surfaces:
* Explicit (Triangles)
* Implicit (Signed Distance Fields, etc)
The problem is how do you easily texture implicit surfaces?
GPUs accelerate triangles (explicit surfaces) because:
* Vertices guaranteed to be co-planar, which simplifies the math,
* The perspective divide for Texture Mapping is relatively straightforward,
* Easy for artists to model a high-polygon model (1 million+ triangles) and create a "low-poly" version (~50,000 triangles).
* Fast
* Easy to understand
Ray-marching Signed Distance Fields (implicit surfaces) is dog slow because you need to keep stepping and calculating the intersection points over and over again.
Ray Tracing is dog slow on GPUs because it touches VRAM (GPU memory) in random order. GPUs were NOT designed for that like CPUs are.
Path Tracing is also hideously slow because you need to shoot MILLIONS of rays to avoid "speckle" artifacts of under-sampling.
More so with 3 bounce lighting to "properly" calculate Global Illumination.
Every methodology has strengths and weaknesses.
- Non-polygonal surfaces like clouds look like ass with polygon blending
- Ray Marching requires N steps PER ray. Where N is 64 to 128 steps.
There is no "Holy Grail" of rendering because simulating reality requires a crap ton of math and computers are still slow at it.
Yes, real-time ray tracing is finally here, after 20+ years of failed hardware, but it will take a few years to go mainstream.
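For readers who haven't seen it, the ray-marching loop described above really is just a few lines, which also makes the cost obvious: up to 128 distance evaluations per ray. A C++ sketch with a one-sphere scene (illustrative only):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

    // Scene SDF: a unit sphere at the origin. Swap in any distance function.
    float sceneSDF(Vec3 p) {
        return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.0f;
    }

    // Sphere tracing: step along the ray by the distance to the nearest
    // surface until we hit it or give up.
    bool raymarch(Vec3 origin, Vec3 dir /* normalized */, float& tHit) {
        float t = 0.0f;
        for (int i = 0; i < 128; ++i) {               // the "N steps per ray"
            float d = sceneSDF(add(origin, scale(dir, t)));
            if (d < 1e-4f) { tHit = t; return true; } // close enough: a hit
            t += d;                         // safe step: no surface is nearer
            if (t > 100.0f) break;          // ray escaped the scene
        }
        return false;
    }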
Re: (Score:2)
Re: (Score:2)
No worries!
That's a good shadertoy by the "god" of SDF rendering: i.q. :-)
Have you seen this neat HOWTO: Raymarching [shadertoy.com] tutorial?
Re: (Score:2)
Re: (Score:3)
Yes.
1. Positional audio has been done for about 20 years.
2. You can accelerate the audio calculation by shooting rays in a full 360-degree sphere and seeing how the "sound rays" reflect off and interact with the environment.
3. You can accelerate this with CUDA / OpenCL.
4. Ray tracing for graphics and audio can share data.
Re: (Score:3)
It can be done, and has already been done.
Pretty sure I read some argument for letting audio pass through walls too, though: the equivalent of visual transparency. Say you are outside a door/room with no opening facing you; with rays alone, the sound would bounce away through some other room and back to the corridor you are standing in, without any sound passing through the wall/door. You'd of course hear it down the corridor, but that may not be all that realistic.
The Sega Saturn used rectangles (Score:2)
Also, I think we're all massively overestimating the role of polygons and underestimating pixel shaders. This isn't 1995. We don't build scenes out of triangles all that much. There's an article [adriancourreges.com] on what it takes to render a frame of Deus Ex, and there's a hell of a lot more going on than texturing some wi
Re: (Score:3)
The Saturn's problem wasn't rectangles vs. triangles. It was not having hardware for transparency. Developers described the platform as a pile of chips on a board, because they had to do everything. A small handful of games had transparency effects, but they had to do it by brute force and wound up dedicating a CPU to the task. Not putting transparency into the GPU meant having to have two CPUs, and that made the Saturn expensive.
The Playstation also had a much, much better controller than anything on the S
The Saturn had hardware transparency (Score:2)
Re: (Score:2)
I guess you're talking about 2d, where the saturn could do a very bad transparency effect that only worked on one sprite [mattgreer.org]. But I'm talking about 3d, where there was no transparency effect at all. Meanwhile, the Playstation could do transparent 3d objects. I didn't specify, so your confusion is understandable. The fact is that this one feature was drastically important, and Sega just skipped it. And 2d transparency was also important, and Sega mostly skipped it — sprites were composited against the back
It's happening. It's just slow. (Score:1)
Fun fact: before Microsoft created the Direct3D standard, Nvidia produced a curve-rendering chip. D3D came out and nearly bankrupted them. They chose to make their next chip a hardware-accelerated version of the D3D spec. And now we're here.
If you tried to change it, nobody would have art assets or experience with the other modelling techniques. Shaders would have to work differently. If one of the big 3 of gaming (Sony, Nintendo, or MS) tried to force a change, and somehow convinced one of the big 2 o
Because it is a solved problem (Score:2)
Because asking "why don't they do it a different way" isn't the same as creating a different way and doing all the additional work to bring the new way up to the same quality and productivity as the old way. The old way was refined and improved a thousand times over, and it has great software tools to support its use and smooth out its problems.
Knowing a "better way" doesn't get you very far without all the support work being done. And you don't even propose a "better way". It's a thousand-mile walk and you haven't
Comment removed (Score:5, Insightful)
Re: (Score:2)
Raymarching and distance fields to the rescue (Score:3)
So just "dream up" some new mathematics? (Score:2)
Yeah, good luck with that. You seem to be completely clueless how this works. Also, what do you think mathematicians do all the time?
This must be one of the most stupid /. stories in quite a while.
You asked why? (Score:1)
Triangles are Easy (Score:5, Informative)
Graphics cards use triangles (not general polygons, but usually triangles) because they are easy to render quickly. All the GPU needs to do is an easy 4x4 matrix multiplication per vertex to get the screen-space coordinates, and rasterize the points that fall inside. (And in most cases, the vertices are shared and indexed between multiple triangles, so fewer than three vertex transformations are required per triangle.) Moreover, the transformations are highly parallelizable, as the GPU is doing the same matrix multiplication to a large number of vertices, simultaneously and independently.
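That per-vertex work is literally one 4x4 matrix times a 4-vector; a minimal C++ sketch (illustrative names):

    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[4][4]; }; // row-major

    // The entire per-vertex geometry cost described above: one matrix-vector
    // product, applied to millions of vertices independently and in parallel.
    Vec4 transform(const Mat4& M, Vec4 v) {
        return {
            M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
            M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
            M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
            M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w
        };
    }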
A triangle mesh is a linear approximation of an arbitrary surface. You can get arbitrarily close to that surface by subdividing the triangles. And if you subdivide finely enough, you are well below the pixel size. (FYI, this idea of "micropolygons" was the basis for Pixar's original RenderMan software.)
It's important to realize that most games and other 3D applications are not geometry-bound, they are fill-bound. A significantly larger amount of time is spent on shading. For example, assume that you have a scene with 10 shadow-mapped lights. This means that for every pixel that is ultimately rendered to the screen, the GPU performed *at least* 10 lookups into a shadow map to compute lighting visibility, and then evaluated and summed the reflectance function for each of those (each of which may itself involve multiple texture lookups). And depending on the draw ordering, it may be the case that those evaluations were wasted, because the fragment ended up being occluded by a later-drawn fragment that was in front of it. Multiply that by the number of pixels. And if you're using supersampled antialiasing, then multiply that by the sampling rate.
There has been a lot of work on non-polygonal representations of geometry. In the early 2000's, point-based rendering was a very active area of interest (search for "QSplat" for a prominent example). NURBS (or parametric surfaces generally) have long been used in modeling applications. There are also plenty of examples of implicit surfaces or voxel-based rendering. But given the rate at which GPU speed has increased, it is often faster and easier to just use more triangles.
Of course, there are things that are not well-represented by triangles. As noted, a triangle mesh approximates a surface. If you have something that doesn't have a surface (fog or smoke, for example), a non-triangle representation could well be preferred. But for the most part, the objects that we want to render can be sufficiently approximated with triangles.
Sometimes there are stupid questions (Score:5, Insightful)
"All a computer needs to light, shade, and display a 3D model is to know the answer to the question "is there a surface point at coordinate XYZ or not."
False.
And with that out of the way, the rest of the thesis put forward can be discarded.
Seriously, if all you could do was query {x,y,z} for true/false to a 'surfaceHere' test, you'd be unable to render anything.
The question that actually needs to be answered is not "is there a surface point at {X,Y,Z}" but the far more difficult one: "what surface in the scene does a ray cast at a particular angle in 3-D space intercept first, at what {X,Y,Z} coordinate does the interception take place, and at what angle?"
It turns out that spheres and triangles are the two easiest things to test. This is just the mathematics involved; it has nothing to do with limited CPU power or RAM. For a sphere, you determine the ray's closest approach to the sphere's center (by solving the equation of a line perpendicular to the ray passing through the center of the sphere) and then calculate that distance; if it is less than the radius, you know the ray intersects the sphere.
For a triangle, you solve for the point at which the ray intersects the plane defined by the triangle (which it will, unless the plane is parallel to the ray), and then check whether that interception point is within the bounds of the triangle. This is also pretty straightforward.
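The sphere test above fits in a few lines; a C++ sketch of exactly that closest-approach check (illustrative, direction assumed normalized; the triangle case follows the same pattern, a plane hit plus an inside-the-bounds test):

    #include <cmath>

    struct Vec3 { float x, y, z; };
    Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Does the ray (origin, dir) hit the sphere (center, radius)?
    bool hitsSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
        Vec3 oc = sub(center, origin);
        float t = dot(oc, dir);          // ray parameter of closest approach
        if (t < 0.0f) return false;      // sphere is behind the ray
        Vec3 closest = sub(oc, {dir.x * t, dir.y * t, dir.z * t});
        return dot(closest, closest) <= radius * radius;
    }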
Couple that with purpose built hardware for specifically solving this triangle problem, with multiple pipelines, each capable of solving this problem in parallel for different triangles... and you get GPUs.
Further, the majority of the shapes we want to model can be very efficiently approximated to arbitrary precision with a triangle mesh, so we use triangle meshes for those things. They are the right tool for the job.
There are a few exceptions -- things like trees and hair blowing in the wind, or fur, or smoke, for example. And all kinds of special processing is done specifically for these cases.
But go back to the original question: how does he propose "is there a surface point at {X,Y,Z}" is going to handle fur or wind-blown hair? Honestly, if that's the only heuristic you had, you'd be pretty hard-pressed to render a rotating cube under a single light.
Re: (Score:2)
I didn't go into nearly enough detail, which isn't possible in a couple paragraphs.
The type of rasterization you are describing is basically a simplified special case of ray tracing, and it works because of all the simplifying assumptions you can make when working with triangles, resulting in degenerate cases. Even the final rasterization steps can be *thought* of as still basically a ray cast orthogonally to the camera coordinate system. (Granted, it's not implemented as a raycast, but the resulting pixel color
Because they're efficient... (Score:1)
Find a more computationally efficient method and we can talk. I think we are starting to get there with ray tracing hardware though.
Wanna Cracker (Score:1)
In other words, Poly ain't gone.
It's production tool pipelines as well (Score:2)
If you invent a fundamentally new rendering approach for virtual environments, you're also rebuilding the entire ecosystem of tools and skills needed to create high-quality content for it on an industrial scale. We have multiple generations of digital artists at this point who have iterated and learned on a triangle-geometry rendering theory. The content creation tools are made for it, the material creation tools are made for it, the project management tools are made for it; even the terms we use are built arou
Simple: manifolds (Score:2)
The vast majority of what needs to be rendered is solid. Solids are bounded by surfaces. Surfaces are manifolds. Manifolds can be decomposed into polygons to any desired accuracy and rendered efficiently. Clear enough?
Why is nobody talking about neural networks? (Score:2)
Wow, these comments are horrific. Zero creativity or imagination. And the flames! Relax, guys!
Here's something to think about: When you visualize a scene in your mind's eye, how is your brain representing and rendering the geometry? Is it using triangles? NURBs? Or is it using something novel that you haven't thought about yet?
Nvidia is currently using neural nets to enhance rendering ( https://blogs.nvidia.com/blog/2017/07/31/nvidia-research-brings-ai-to-computer-graphics/ ), and so is deep mind ( https://
Re: (Score:2)
If an Ents of Middle Earth game ever gets made, where the main characters are trees, the trees will look great.
Nonsense (Score:2)
Why Are 3D Games, VR/AR Still Rendered Using Polygons In 2019?
Because 3D games are rendered identically to 2D games. There is absolutely no difference in how you have to display a 3D game; it's still LCD screens powered by the same graphics cards either way.
Polygons/triangles are a very old approach to 3D graphics that was primarily designed not to overstress the very limited CPU and RAM resources of the first computers capable of displaying raster 3D graphics.
Who told you that games no longer stress system resources and that we have cycles to spare on inefficient methods of rendering?
Re: (Score:2)
The graphics cards you use to run 3D games are the same cards used to run 2D games. You don't need a special card with specific 3D features, as no extra 3D features are used to render two screens instead of just one.
Some friends put together a virtual-reality setup in high school 20 years ago using even older graphics cards.
Re: (Score:2)
I think, based on the context, "3D" here means VR-headset-style technology. They don't mean sprites vs. models. I don't think polygons were ever used in sprite-based 2D games. I remember 2D vs. 3D graphics cards. But what this article is talking about is rendering two screens instead of one to fool your brain into seeing a 3D image.
Because you get more bang for the transistor (Score:2)
Gaming isn't the only 3D (Score:2)
makes good 3D models easier to create
Remember that gaming isn't the only use of 3D; There are plenty of product designers, plenty of VFX editors, plenty of 3D animators, and countless other users of 3D modellers and raytracers all completely unconstrained by the triangle rendering of GPUs.
Anyone who could considerably improve 3D modelling / rendering as you suggest would be very rich, and alternatives already do exist like parametric shapes for CAD and whatever modelling tools like ZBrush use internally.
They're fast, and it's not what's expensive (Score:2)
Voxels are, uhh, well, they're cubes. And you have to have so many of them to not look like squares that they're not worth it. Distance fields don't scale to tiny details, as they're affected by both surface area and volume, and so they take up way too much memory if they get detailed. Point clouds and surfels don't have that nice mathematical property of implicit relationships to neighboring points/surfels that makes triangles so compact in memory and
Math and GPU (Score:3)
Then different GPU products attempted their own math and developer support. They failed in the marketplace.
Finally the consumers and developers settled on a type of math and code the GPU could support.
Every one is happy. The people making the OS, the GPU designers, the people working on advanced computer game code.
The people buying a new GPU know their games will work on the GPU they have.
They know the next few years of games they buy will work on their new GPU.
What are the options? Back to a new VR version of what was the Glide API, 3dfx, DirectX, or OpenGL for VR, with per-game GPU support?
Approximations, they just work (Score:2)
My background is in physics, and one of the most important themes I learned during the studies was making approximations. Don't waste time on something that only affects the 6th significant digit, because nobody will notice it. It's a lot like avoiding early optimizations in coding. Sure, in some cases you may need to refine the result, but often the only way to get started is by very crude approximations. These things are being used constantly for real work, and the linear interpolation in polygon graphic
You have answered your own question (Score:2)
They are making the games to be played using these types of devices.
We're waiting (Score:2)
Why use a 50-year-old approach to 3D graphics when new, better approaches can be pioneered?
What's stopping you from inventing it?
So why still use polygons at all? Why not dream up a completely new "there is a surface point here" technique that makes good 3D models easier to create and may render much, much faster than polygons/triangles on modern hardware to boot?
Finding a point in a polygon is really fast. The math is really easy to implement in hardware. Got lots and lots of polygons? Sounds like an embarrassingly parallel problem, which is why GPUs exist. Too bad polygons are poor at approximating many real-world scenes.
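For reference, the 2D point-in-triangle test alluded to above is three sign checks on "edge functions", which is essentially what rasterizer hardware parallelizes. A C++ sketch (illustrative):

    struct Vec2 { float x, y; };

    // Signed area of the parallelogram spanned by (a->b) and (a->p);
    // its sign says which side of edge ab the point p lies on.
    float edge(Vec2 a, Vec2 b, Vec2 p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // p is inside triangle abc (counter-clockwise winding) iff it lies on
    // the same side of all three edges.
    bool inTriangle(Vec2 a, Vec2 b, Vec2 c, Vec2 p) {
        return edge(a, b, p) >= 0 && edge(b, c, p) >= 0 && edge(c, a, p) >= 0;
    }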
Because they are the best compromise (Score:2)
There are many other techniques, I particularly like signed distance fields.
However, polygons are still the best when it comes to combining ease of modeling, artistic freedom and reasonable rendering time and quality.
There are countless impressive demos showing alternative techniques, but in those cases the scene is made to make the technique look good. In practice, what we want is the opposite.
Good luck doing plausible physics ... (Score:2)
... in 2.5D normal-mapping pseudo-3D simulations or similar environments. Yeah, sure, the viewport is orders of magnitude faster and you get some neat effects dirt cheap, but add in any meaningful 3D interaction or multiplayer and your neat 3D "replacement" will fall flat on its nose.
The reason we use polygons is precisely because it is the most effective way of doing virtual 3D. It's the only way to cut corners (quite literally actually) whilst maintaining a feasible game environment.
That's why we have e
VR/AR has texture issues, too (Score:2)
With "3D" rendered to a regular display, you can get away with using complex textures to approximate 3D detail (like hair on a head, treasure in a chest, etc). With stereo-viewing 3D, the illusion falls flat on its face. Things like voxel-depth can help... a little... but VR/AR really DOES raise the bar.
It's already been done (Score:2)
Do a search for "Euclideon" and its "unlimited detail" rendering, then evaluate the approach, value proposition, and challenges faced by that engine. Then ask yourself why they've pivoted their business model from rendering artificial environments to scientific and medical point clouds.
Why do video screens still use pixels? (Score:2)
Same reason. They are cheap and convenient, easy to compute.
Calculus does this too (Score:2)
In calculus, a common technique is to subdivide a region into smaller and smaller polygons, until the polygons are so small that you can't distinguish between the mass of polygons and the true shape itself. This allows math to come up with reasonable answers.
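A quick worked example of that convergence (C++, illustrative): the total area of N inscribed triangles fanned from a circle's center approaches pi*r^2 as N grows.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.14159265358979323846;
        const double r = 1.0;
        for (int n = 8; n <= 2048; n *= 4) {
            // n isosceles triangles, each with two sides r and apex angle
            // 2*pi/n; each has area r^2 * sin(2*pi/n) / 2.
            double area = n * 0.5 * r * r * std::sin(2.0 * PI / n);
            std::printf("n = %4d  area = %.6f  (true: %.6f)\n",
                        n, area, PI * r * r);
        }
        return 0;
    }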
Ask Slashdot: Why ask and answer your own question (Score:2)
So it may be a valid question, but the poster answers it in the last line:
"Why use a 50-year-old approach to 3D graphics when new, better approaches can be pioneered?"
Precisely. New approaches can be pioneered; so can nuclear fusion power and intercontinental freight rockets, so why do we still use ships and hydro power?
Asking why we don't use a mechanism that has yet to be invented is really kinda dumb. Now, asking who is working on that future mechanism, and when they realistically expect it to be usa
Re: Distance functions (Score:1)
It's the problem of projection into two dimensions. You could have a triangle where corner A may be further from corner B than corner C is from corner A, but for any given triangle XYZ, there is a projection of ABC that appears identical to XYZ. Sort of like you might see a bunch of projections that make B seem far from A, but in the end it wasn't B at all, it was C that was far away from A. In the real world this would manifest in a cost function in which B appears much more costly to render than C, but the opposi
Re:Distance functions (Score:4)
Great idea, Yogi! How about you propose all these trivial-to-find much-better replacements for existing ways of doing things, and we can go out and implement them.
Just answer the bloody question (Score:5, Insightful)
If this single post doesn't show what a cesspool of idiots and posers this place has become, I don't know what does. Long ago this would have been laughed off the queue because it shows a basic lack of technical understanding.
You know what's even worse than a dumb question? An idiot who doesn't answer it because he wants to demonstrate how smart he is by complaining how dumb the question is. Anyone who has to make someone else look dumber to look smart...
Now, to answer the question. Polygons are doing for computers what nature already does, which is quantize things into readily processable chunks. If you want continuous curvature as a method of modelling, say a bunch of Bézier surfaces, sure, it's probably doable, but then calculating what is hiding what becomes monstrous, and in the end it doesn't actually gain you anything. And the reason for this is entropy.
Every object has an amount of entropy. The more complex the object, the more entropy it has and the more complex its model has to be to render it accurately. How that entropy is modeled is irrelevant. The more accurately you want to represent it, the more of its entropy you have to model and process. There is no magic wand around this. Computers just aren't fast enough to process the amount of entropy in fully realistic models, so they have to simplify the models. And whether you simplify the real-world object by breaking its infinite series of continuous curvatures down into a finite set of flat "quanta" (triangles) that more or less tangent the surface, or by reducing the infinity of continuous curves down to a smaller number of simpler but smoother curves, makes no difference to the actual accuracy of the end result. It's a matter of information theory. Real-world objects contain a certain amount of entropy. You can't represent that entropy with a simplification without loss of fidelity. And the more accurately you want to model it, the more of that entropy you have to process. You can call that a law of nature, because it is.
A method of simplifying a real-world object's entropy is what I will call its modelling method. And when I say modelling method, I mean both the underlying method of modelling and the way those models are processed and manipulated. Each modelling method has an associated efficiency. The same way heat pumps move heat entropy with a certain efficiency, a 3D modelling method moves information with a certain efficiency. We have gotten better over the years at moving heat, but we are nowhere near the theoretical limits of how efficient we can get. Still, we are refining it year by year and getting better; we can move heat around at lower energy cost as we refine the method. If we were to totally abandon the heat-pump method we are using now (compression and adiabatic expansion) for something different, a century of refinement of that method would be lost, and we would likely find our heat-pump efficiencies dropped before they got better.
It's the same thing for triangle/polygon modelling. It's not a 100% efficient method of simplifying informational entropy, but we've gotten really good with it and are getting even better year over year. That doesn't mean some new method won't be invented tomorrow that is more efficient. But even if one were discovered today, it still wouldn't have all that much effect, because the problem with modelling today isn't that our method is all that inefficient; it's that the amount of entropy it takes to make a model look realistic is just a whole lot, and there are literally laws of nature that prevent you from handling that any way other than with a certain amount of brute force.
So abandoning a method whose efficiency we've refined for half a century doesn't even have the potential to buy us much. In the end, all a different modelling method (method of simplification of a real-world object into a model) does is change the way a simplified model differs from rea
Re: (Score:2)
The OP is clueless about Ray Tracing, Ray Marching, Path Tracing, Point Clouds, etc.
See my answer to the OP's question here. [slashdot.org]