10 Things That Need To Die For Next-Gen

Lately I’ve been thinking about things in graphics that have long worn out their welcome, and I started a list of techniques that I hope will be nowhere in sight once everyone moves on to next-gen console hardware (or starts truly exploiting high-end PC hardware). Here they are, in no particular order:

  1. Phong/Blinn-Phong - we need more expressive BRDFs for our materials, and these guys are getting in the way. Phong is just plain bad, since it doesn’t even produce realistically stretched highlights at glancing angles (it uses the reflection vector rather than the halfway vector, as microfacet-based BRDFs do). Energy-conserving Blinn-Phong with a proper Fresnel factor is a huge step in the right direction, but we can still do better. Personally I’m a big fan of Cook-Torrance for isotropic materials. It requires quite a bit more math than Blinn-Phong, but if there’s one thing modern GPUs are good at, it’s crunching through ALU-heavy shader code. Anisotropic BRDFs are also really important for a lot of materials, and I think we need to start getting them working side-by-side with Cook-Torrance in our deferred renderers. Another important hurdle is making sure our pre-rendered specular environment maps match our BRDF: blurring the mip levels is a decent approximation for Phong, less so for Blinn-Phong or Cook-Torrance, and for anisotropic BRDFs it’s not even close.
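
To make the comparison concrete, here’s roughly what the Cook-Torrance specular term boils down to: a minimal scalar sketch (plain C++ rather than shader code) using a Beckmann distribution, Schlick’s Fresnel approximation, and the standard microfacet normalization. The function and parameter names are mine:

```cpp
#include <algorithm>
#include <cmath>

// Cook-Torrance specular term for a single light, operating on precomputed
// dot products (N = normal, H = halfway vector, V = view, L = light).
// "m" is the Beckmann roughness, "F0" the reflectance at normal incidence.
float CookTorranceSpecular(float NdotL, float NdotV, float NdotH, float VdotH,
                           float m, float F0)
{
    const float pi = 3.14159265f;

    // Beckmann normal distribution function D
    float m2 = m * m;
    float NdotH2 = NdotH * NdotH;
    float D = std::exp((NdotH2 - 1.0f) / (m2 * NdotH2))
            / (pi * m2 * NdotH2 * NdotH2);

    // Schlick's approximation to the Fresnel term F
    float F = F0 + (1.0f - F0) * std::pow(1.0f - VdotH, 5.0f);

    // Cook-Torrance geometry (shadowing/masking) term G
    float G = std::min(1.0f, std::min(2.0f * NdotH * NdotV / VdotH,
                                      2.0f * NdotH * NdotL / VdotH));

    // Standard microfacet normalization
    return (D * F * G) / (4.0f * NdotL * NdotV);
}
```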

  2. Specular Aliasing - during this generation we got pretty good at making things bumpy and shiny. What we didn’t get good at was making sure all of that bumpy, shiny stuff didn’t turn into aliasing hell. Stephen Hill gave a great summary of the current lay of the land when it comes to specular antialiasing techniques, as well as a new technique for pre-computing them into gloss maps. Unfortunately those techniques are either formulated in terms of Blinn-Phong (in the case of Toksvig AA) or aim to replace Blinn-Phong entirely (in the case of LEAN/CLEAN), which means we still have a bit more work to do if we want to move on to Cook-Torrance. However, I think these techniques have given us a great starting point (see the sketch below), and I’m fairly confident that with some information about the variance of a normal map we can tackle the problem for other BRDFs. Selective supersampling is another (expensive) possibility for problematic scenarios, and it can even be seamlessly integrated with MSAA on DX11 hardware.
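
As an example of what the Blinn-Phong formulation looks like in practice, this is the Toksvig exponent adjustment as I understand it from the work mentioned above (a scalar C++ sketch, names are mine):

```cpp
// Toksvig anti-aliasing for Blinn-Phong: shorten the specular exponent based
// on the length of the filtered normal. "normalLen" is the length of the
// averaged normal fetched from the (mipmapped) normal map, which drops below
// 1 as the underlying normals diverge; "specPower" is the original exponent.
float ToksvigPower(float normalLen, float specPower)
{
    // Toksvig factor: 1 when the filtered normals agree, smaller as the
    // variance grows.
    float ft = normalLen / (normalLen + specPower * (1.0f - normalLen));

    // The adjusted, anti-aliased exponent. The specular amplitude should
    // also be rescaled by (1 + ft * specPower) / (1 + specPower) to roughly
    // conserve the highlight's energy.
    return ft * specPower;
}
```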

  3. SSAO - I don’t think too many would agree with me on this one, but I’ve just never been a big fan of SSAO. It was a tremendously clever idea when it came out, and it has certainly improved quite a bit since its original inception. However, I don’t think I’ll ever get over the fact that the technique is fundamentally handicapped by the information it has to work with. I’d really like to see us aggressively pursue alternative techniques based on primitive shapes and/or low-resolution representations of scene meshes, in the hope of getting AO that’s more stable and captures larger-scale occlusion. But I’m sure we’ll still end up using SSAO to fill in the cracks (pun intended).

  4. DXT5 Normal Maps - this one is a no-brainer: we just need to ditch the current consoles and embrace BC5.
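
Both DXT5nm and BC5 store only X and Y and reconstruct Z in the shader; BC5 just gives both channels full precision instead of abusing the green and alpha channels. A quick C++ sketch of the reconstruction, for anyone who hasn’t seen it:

```cpp
#include <algorithm>
#include <cmath>

struct Normal { float x, y, z; };

// Reconstruct a unit normal from the two stored channels of a BC5
// (or DXT5nm) normal map texel.
Normal ReconstructNormal(float texX, float texY)
{
    Normal n;
    n.x = texX * 2.0f - 1.0f; // unpack from [0, 1] to [-1, 1]
    n.y = texY * 2.0f - 1.0f;
    n.z = std::sqrt(std::max(0.0f, 1.0f - n.x * n.x - n.y * n.y));
    return n;
}
```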

  5. Geometry Aliasing - we’ve been fighting this one for a long time now, but it still lingers. In fact you could almost argue that this problem has gotten worse due to the widespread use of deferred rendering and HDR rendering. Screen-space techniques like MLAA and FXAA have given us a great big band-aid to throw over the problem, but they are exactly that: a band-aid. They’re never going to completely solve the problem on their own, which means that we need to find better ways to make use of MSAA if we really want to solve some of the tougher cases. For deferred rendering this means being smart about which subsamples we shade, as well as how we shade them. Andrew Lauritzen’s work has given us a great starting point, but I’d imagine we’ll need to specifically tailor our approach for whatever target hardware we’re working with. For dealing with HDR, we need to be aware of the problems caused by tone mapping and make sure that we effectively work around them. Humus’s approach of individually tone mapping each subsample produces the desired result, but it also means keeping subsamples around until you perform tone mapping (which is the last stop in your post-processing pipeline if you’re doing everything in HDR). Andrew Lauritzen suggested a clever idea over on the Beyond3D forums: apply tone mapping, resolve, then apply the inverse tone mapping operator to get an HDR value back. I tried it and it does work…at least as long as you’re using a tone mapping operator that’s easily invertible. The inverse operator can get really nasty in some cases, and won’t exist at all if you end up clamping to 1.0 as part of your tone mapping. Speaking of tone mapping…
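
Here’s that trick in sketch form, using a simple Reinhard-style curve precisely because it has a clean inverse. This is illustrative C++, not production code:

```cpp
// Tonemap -> resolve -> inverse tonemap, per channel for clarity.
float Tonemap(float x)        { return x / (1.0f + x); }
float InverseTonemap(float y) { return y / (1.0f - y); } // blows up as y -> 1!

// Resolve n MSAA subsamples in tone-mapped space, then return to HDR.
float ResolveHDR(const float* subsamples, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
        sum += Tonemap(subsamples[i]);
    return InverseTonemap(sum / n);
}
```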

  6. Crappy Tone Mapping - no more linear, no more Reinhard. Filmic tone mapping is awesome, and easy to integrate if you use the variant proposed by John Hable.
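
For reference, here’s Hable’s curve with the constants he published, as a small per-channel C++ sketch. Apply it to exposed linear HDR color (his full operator also includes an exposure bias before the curve), then gamma-correct the result:

```cpp
// John Hable's filmic tone mapping curve.
float HableCurve(float x)
{
    const float A = 0.15f; // shoulder strength
    const float B = 0.50f; // linear strength
    const float C = 0.10f; // linear angle
    const float D = 0.20f; // toe strength
    const float E = 0.02f; // toe numerator
    const float F = 0.30f; // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Normalize so that the linear white point maps to 1.0.
float FilmicTonemap(float x)
{
    const float W = 11.2f; // linear white point
    return HableCurve(x) / HableCurve(W);
}
```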

  7. No GI - I don’t think that we all need to go for real-time GI, since most games don’t need completely dynamic lighting or geometry. However, it’s time to stop faking GI with ambient lights/fill lights/bounce lights/whatever. A good GI bake produces smoother, more realistic lighting that’s also easier for artists to author. And if people do go for real-time GI techniques, I hope they can do so without a severe drop in quality compared to offline solutions.

  8. Crappy Depth of Field and Motion Blur - both of these guys have a lot of room for improvement. Depth of field has gotten some recent attention from heavy hitters like Crytek, DICE, and Epic, mostly focused on reproducing iris-shaped blur patterns for realistic bokeh effects. This has produced some promising research, but I think the jury is still out on the best way to approach the problem. There’s also the issue of foreground blur with proper transparency, which is still something of an elephant in the room. Motion blur, on the other hand, isn’t getting quite as much attention. Working only in screen space is extremely limiting for motion blur; in fact, I’d say it’s even more limiting than it is for depth of field (see the sketch below for why). There have been attempts at supplementing screen-space approaches with fins or geometry stretching, but personally I’m still not satisfied with the results of my own experiments. What I’d really love to do is some sort of cheap, scaled-down version of stochastic rasterization combined with screen-space blurring to help remove the noise. Nvidia has some recent research in this area, so I’m holding out hope.
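
For context, the baseline screen-space approach amounts to averaging samples along each pixel’s 2D velocity vector. A minimal CPU-side C++ sketch (all names are mine, for illustration; in a real renderer this runs in a pixel shader with the velocity fetched from a G-buffer):

```cpp
#include <vector>

struct Color { float r, g, b; };

// Blur one pixel along its screen-space velocity (velX, velY), in pixels.
Color MotionBlurPixel(const std::vector<Color>& image, int width, int height,
                      int px, int py, float velX, float velY, int numSamples)
{
    Color sum = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < numSamples; ++i)
    {
        // Step across the velocity vector, centered on the pixel.
        float t = (i + 0.5f) / numSamples - 0.5f;
        int sx = px + static_cast<int>(velX * t);
        int sy = py + static_cast<int>(velY * t);

        // Clamp to the frame: anything off-screen (or occluded) is simply
        // unavailable, which is exactly the limitation described above.
        sx = sx < 0 ? 0 : (sx >= width  ? width  - 1 : sx);
        sy = sy < 0 ? 0 : (sy >= height ? height - 1 : sy);

        const Color& c = image[sy * width + sx];
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    sum.r /= numSamples; sum.g /= numSamples; sum.b /= numSamples;
    return sum;
}
```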

  9. Sprite-based Lens Flares - these looked cheesy when they first showed up in games, and they still do. Using FFT to perform convolutions in frequency space lets you convolve highlights with arbitrary (low-resolution) kernels, which is still limiting but in many ways looks a lot better than sprites. But what we really need is for someone to get this working in a game, so that we can all have awesome physically-based flares. :D
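
For the record, the convolution theorem is what makes this practical: convolving the image with a kernel in image space is equivalent to a pointwise multiply in frequency space,

$$\mathcal{F}\{I * K\} = \mathcal{F}\{I\} \cdot \mathcal{F}\{K\}$$

so flaring the whole frame against an aperture kernel $K$ costs two forward FFTs, a per-texel multiply, and an inverse FFT, independent of the kernel’s footprint.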

  10. Simple Fog - we’ve got shaders now…we don’t need to do the same old depth-based fog. The time has come to upgrade to a physically-based scattering model, such as this one.
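
The linked article has the details; as a rough illustration of the general shape of such a model (not the article’s exact math, and all names here are mine), fog becomes extinction along the view ray plus light scattered into it:

```cpp
#include <cmath>

struct Color3 { float r, g, b; };

// Attenuate the surface color by extinction along the view ray, and add
// light scattered into the ray, weighted by the angle to the light.
Color3 ScatteringFog(Color3 surface, float distance, float density,
                     float cosSunAngle, Color3 inscatterColor)
{
    // Beer-Lambert extinction along the view ray.
    float extinction = std::exp(-density * distance);

    // Rayleigh-style phase term: more inscattering when looking toward (or
    // directly away from) the light. Real models would use proper
    // Rayleigh/Mie phase functions and wavelength-dependent coefficients.
    float phase = 0.75f * (1.0f + cosSunAngle * cosSunAngle);

    float inscatter = (1.0f - extinction) * phase;
    Color3 result;
    result.r = surface.r * extinction + inscatterColor.r * inscatter;
    result.g = surface.g * extinction + inscatterColor.g * inscatter;
    result.b = surface.b * extinction + inscatterColor.b * inscatter;
    return result;
}
```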


Comments:

#### [Mantis]( "prayingmantis@toothfairy.com") -

Now, picture this: what if, instead of the above, we made better games? Imagine if Bethesda diverted $50k from their “artists” to the QA department; who knows, the main quest might work half the time and the PS3 port might be stable for more than about 20 hours!


#### [Mantis]( "prayingmantis@toothfairy.com") -

Not that better graphics are a bad thing, obviously.


#### [default_ex]( "default_ex@live.com") -

A really good solution to most of these problems would be to develop a new color space, one which can carry energy information (like xyY) but is additive in nature (like RGB). Think about it: so many of these effects are energy-dependent, and yet we substitute luminance for energy: HDR, inscatter, lens flare, self-illumination. Though I think specular is the saddest of all our rendering tech so far. Specular light in the real world is just a reflection of light energy, its shape deformed by the surface geometry. Yet we continue the travesty of only conveying the light’s color, never conveying the light source (bulb) itself in the specular reflection.


#### [djmips](http://gravatar.com/djmips "david.c.galloway@gmail.com") -

How about just get rid of lens flares period? I never liked them in games or in movies… Why do they have to show up in first person views especially? I’ve never had my eyes flare like that. :)


#### [MJP](http://mynameismjp.wordpress.com/ "mpettineo@gmail.com") -

Nice additions, Nathan! I’ve always wanted to experiment with Sample Distribution Shadow Maps, although I worry about temporal aliasing as you adjust the split depths. Calculating the adjusted split depths on the GPU also concerns me, since it limits your ability to cull on the CPU. However, I really like the general approach, and I definitely agree that we need to make better use of shadow resolution rather than just using wide filtering kernels (especially if we want plausible soft shadows).


#### [default_ex]( "default_ex@live.com") -

Even something like a pixel shader version of smart filtering (http://web.archive.org/web/20070624082603/http://www.hiend3d.com/smartflt.html) would be a massive improvement for shadow mapping. Blocky, unfiltered shadow maps convey the shape of the shadow well enough; we just need to enhance that shape to fit the screen resolution. I’ve tried using HQX filters recently, but that tends to introduce stray pixels in the better-defined portions of the shadow map, which is likely a flaw in my implementation.


#### [Rim]( "remigius@netforge.nl") -

“I’ve just never been a big fan of SSAO.” I did a paper ages ago concluding SSAO likely wouldn’t take off as a common technique. Boy was I wrong, so I’ll agree with you here out of spite :)


#### [u2bleank](http://u2bleank.tumblr.com/ "dtoyou@bleank.com") -

11. Simple shader for blending operations


#### [Nathan Reed](http://reedbeta.com/ "nathaniel.reed@gmail.com") -

I’d like to add: 11. Shadow Aliasing. The trend has been to blur the $%#! out of the shadow maps to hide it, but that’s not always desirable. There’s some interesting work on that front from Andrew Lauritzen, always a heavy hitter. 12. Visible polygonization of shapes that should be smooth—especially at silhouette edges. We have hardware tessellation now; let’s use it! 13. Characters whose only hair options are bald, short or tight buns/ponytails/dreadlocks. There has been some great work lately on hair simulation and rendering which has yet to be put to use in a game, to my knowledge.


#### [What I’ve been working on for the past 2 years | The Danger Zone](http://mynameismjp.wordpress.com/2013/06/11/what-ive-been-working-on/ "") -

[…] most out of next-gen hardware. By my count we’ve crossed off around 8 or so of the things on my list, and hopefully the entire industry will collectively figure out how to make all of them extinct. […]


#### [Louis Castricato](http://wirezapp.net "ljcrobotic@yahoo.com") -

We could just replace SSAO with SSDO or HBAO. Both produce much better results and still maintain great framerates. Also, I implemented the improved fog in under 45 minutes. New record =P



Graphics


2011-12-06 01:54 -0800