Material Layering

How do you shade a shiny metal object with a layer of dirt or rust on top (or underneath)? Every application seems to have an opinion on this. The problem they solve is called Material Layering, and while it seems simple as a concept, it has quite a few catches, and quite a few solutions.

Coco packs an incredible level of detail in each sequence. Each material is made of many layers.

As physically based approaches are now pervasive, material layering has once again become a topic of active development, and efforts are under way to standardize both its description and its desired look.

I have been meaning to write about this for a while, as a lot of questions come up, from how to flatten a material to make it more efficient to render, to why blending between shiny and rough items doesn't always look right. I'll try to cover some of the answers I have seen in offline renderers as well as in some authoring tools.

Color Layering

If we go back far enough, RenderMan had Reyes, a renderer that let you literally ask for lighting at any point in your RSL (RenderMan Shading Language) shader.

A popular way to calculate material layering was to get the result of each material, complete with its illumination, turn it into an RGBA layer, or ColorOpacity, and alpha-composite the layers. We also passed along just enough information to do displacement layering. The structure was called ColorDisp (1 color, 1 float, 1 int, 2 vectors). Simple enough, this approach was quite good at getting realistic results. As we started being able to package and publish material layers for later use, the average number of layers grew over time, and we ran into this technique's main drawback: the cost of computing complex, complete lighting at each material layer. We didn't have physically based rendering or importance sampling yet; in fact, even after we introduced physically based rendering on Monsters University, we still relied on this quite a bit. So 5 or 6 layers meant calculating all the lights 5 or 6 times, which quickly became a bottleneck.
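The heart of the technique is just an "over" composite of fully lit layers. Here is a minimal Python sketch of the idea (the names are illustrative, not our actual RSL structures):

```python
def over(top, bottom):
    """Alpha-composite premultiplied RGBA 'top' over 'bottom'."""
    inv = 1.0 - top[3]
    return tuple(t + b * inv for t, b in zip(top, bottom))

def layer_colors(lit_layers):
    """lit_layers: bottom-to-top premultiplied RGBA tuples, each the FULL
    illumination result of one material layer. N layers means N complete
    lighting calculations, which is exactly the bottleneck described above."""
    result = (0.0, 0.0, 0.0, 0.0)
    for layer in lit_layers:
        result = over(layer, result)
    return result
```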

Pattern Layering

Starting on Cars, around 2003, we experimented with blending not the results of each layer but rather their inputs. Albedo got layered with albedo, specular color with specular color. The final composed parameters were used to calculate the illumination lobes and a final color.
Not all materials use the same illumination system or the same parameters, so this was not a complete solution. If you grouped your layers carefully, though, you could get good results with this technique and reduce, say, 8 layers into only 2 or 3 pre-composed materials with unique illumination properties.
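A minimal sketch of the difference, with illustrative parameter names rather than a real shader API: the inputs are blended first, and illumination is computed once on the blended result.

```python
def lerp(a, b, t):
    """Blend scalars or per-channel tuples."""
    if isinstance(a, tuple):
        return tuple(x + (y - x) * t for x, y in zip(a, b))
    return a + (b - a) * t

def blend_inputs(base, over, presence):
    """Pattern layering: blend the *inputs* of two materials, then
    light the single blended material once."""
    return {key: lerp(base[key], over[key], presence) for key in base}

metal = {"albedo": (0.9, 0.9, 0.9), "specularColor": (1.0, 0.8, 0.6)}
dirt  = {"albedo": (0.3, 0.2, 0.1), "specularColor": (0.2, 0.2, 0.2)}
blended = blend_inputs(metal, dirt, presence=0.5)   # lit once, not twice
```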

Pixar Cars, 2006, was rendered with a mix of ColorDisp and pattern layering techniques.

If you use this blending for some or all of your layers, with a bit of hand-holding, results still look good with much reduced overhead. So after we switched to physically based rendering, with improvements on Monsters University and Cars 2, this became the main layering technique used at Pixar.
Used carelessly, however, the technique is far from perfect: not all parameters blend equally well. If you blend related parameters separately across two materials, you will get badly blended pixels around the layering zone (where presence is not a solid 0 or 1). Why is that? Say you have a grey, RGB = (.4, .4, .4) material with .5 diffuseGain. Its overall diffuse response is (.2, .2, .2). Now let's say you mix it with a black material: (0, 0, 0) albedo and 0 diffuseGain. If the presence of the black overlay material is exactly half, or 0.5, you will end up with a color of (.2, .2, .2) and a diffuseGain of .25. The resulting diffuse response is... RGB (.05, .05, .05)! The accurate average of the two responses should have been (.1, .1, .1), double that.
That is because we are blending separately two aspects that the renderer always considers as one. It only ever sees the product of diffuseColor * diffuseGain, so that is what we should be blending: premultiplied results. The same goes for any color that has a multiplier applied to it.
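Here is the example above as a worked snippet; only the premultiplied version reproduces the accurate average:

```python
def lerp(a, b, t):
    return a + (b - a) * t

grey_color,  grey_gain  = 0.4, 0.5   # overall diffuse response 0.2
black_color, black_gain = 0.0, 0.0   # overall diffuse response 0.0
presence = 0.5                       # coverage of the black overlay

# Naive: blend color and gain separately, then multiply.
naive = lerp(grey_color, black_color, presence) * lerp(grey_gain, black_gain, presence)
print(naive)     # 0.2 * 0.25 = 0.05 -- too dark

# Premultiplied: blend the products the renderer actually uses.
premult = lerp(grey_color * grey_gain, black_color * black_gain, presence)
print(premult)   # lerp(0.2, 0.0, 0.5) = 0.1 -- the accurate average
```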

This same premultiplication gets trickier when you are dealing with factors that affect the resulting pixels non-linearly, like the dmfp (diffuse mean free path) in subsurface scattering.

Surface normals present their own challenges. We have historically preferred to layer bump vectors (the equivalent of displacement vectors, they create a virtual point from which we later compute a final normal). Bump vectors have the advantage of blending more correctly than surface normals, and allow for more complex layering behaviors, such as displacement/bump accumulation, erosion and softening. They rendered essentially for free in good old Reyes, where the derivatives needed for normal calculations cost nothing extra; in a path tracer, unfortunately, their extra cost is measurable.
Recently, development focus has shifted towards pre-computed maps we call BumpRoughness, based on LEADR mapping. While they share the same issues as layering normal maps, they are more accurate, faster to render, and they behave better when the shader details get filtered away.
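A toy illustration of why offset vectors blend more gracefully than unit normals (not production code):

```python
def lerp3(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def blend_bump(bump_a, bump_b, presence):
    """Bump/displacement offset vectors blend linearly and stay meaningful;
    the final normal is derived from the displaced surface afterwards."""
    return lerp3(bump_a, bump_b, presence)

def blend_normals(n_a, n_b, presence):
    """Unit normals need renormalizing after the blend, and the result can
    degenerate when the two normals nearly oppose each other."""
    v = lerp3(n_a, n_b, presence)
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)   # unstable as length -> 0
```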

An interesting parameter to interpolate is specular roughness, for at least two reasons. One: in most classical descriptions of physically based microfacet models, such as Beckmann or GGX, roughness is not a linear quantity. Double the roughness will not give you a highlight that is twice as large, and most useful values tend to be crammed between 0 and 0.2. Some shading models, such as Disney's BRDF (PxrDisney in RenderMan), address this by exposing a normalized, linearized parameter that behaves well across the whole 0 to 1 range, and blending that instead.
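A sketch of this kind of remap, assuming the common alpha = roughness² convention (individual models differ in the exact mapping):

```python
def to_alpha(linear_roughness):
    """Perceptually linear artist roughness -> microfacet alpha."""
    return linear_roughness * linear_roughness

def blend_roughness(r_a, r_b, presence):
    r = r_a + (r_b - r_a) * presence   # blend in the linearized space...
    return to_alpha(r)                 # ...and convert once for the BRDF
```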

But that's not all. Simply put, the average of the renders of two speculars is not, in fact, the render of the average of those same two speculars. This becomes quite obvious when there is a lot of variance in the roughness (e.g. a shiny metal and a dull layer of dust) combined with not-quite-solid coverage (e.g. the presence of the dust is 50%). The artifacts you get are rougher-looking surfaces with little detail, and even some visible errors in the blending areas.
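You can see this numerically with a single GGX distribution evaluation (a toy demonstration with made-up roughness values):

```python
import math

def ggx_ndf(cos_theta_h, alpha):
    """GGX normal distribution term, evaluated at the half-vector angle."""
    a2 = alpha * alpha
    d = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

shiny, dusty, presence = 0.05, 0.6, 0.5   # shiny metal, dull dust, 50% dust
cos_h = math.cos(math.radians(5.0))       # near the highlight peak

avg_of_renders = (1 - presence) * ggx_ndf(cos_h, shiny) + presence * ggx_ndf(cos_h, dusty)
render_of_avg  = ggx_ndf(cos_h, (1 - presence) * shiny + presence * dusty)
print(avg_of_renders, render_of_avg)   # ~4.3 vs ~2.7: visibly different highlights
```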

On this RenderMan preview shaderBall, mixing roughness makes the underlying shiny layer look rougher, rather than dimmer with a rough layer on top. Note the roughness artifacts on the highlights of the base, too.

Mixing the resulting values from the speculars directly yields a result that is closer to reality.

We work around the roughness blending errors by having more than one specular lobe. We have three, sorted by their recommended roughness range: ClearCoat (0 to 0.1), Specular (0.1 to 0.3) and RoughSpecular (0.3 to 1). These three allow us to reach interesting, rich sheen looks, but especially when layering, they ensure that the blend between speculars does not create interpolated roughnesses too far from either material being layered.
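Conceptually, each material feeds the lobe whose range matches its roughness, and layering blends per-lobe gains instead of inventing a mid-roughness lobe. A hypothetical sketch:

```python
# Hypothetical routing: each material's specular goes into the lobe whose
# recommended range contains its roughness.
LOBE_RANGES = {"ClearCoat": (0.0, 0.1), "Specular": (0.1, 0.3), "RoughSpecular": (0.3, 1.0)}

def route_specular(roughness):
    """Pick the lobe whose recommended range contains this roughness."""
    for lobe, (lo, hi) in LOBE_RANGES.items():
        if lo <= roughness <= hi:
            return lobe
    return "RoughSpecular"

# A shiny metal and a dusty overlay populate different lobes entirely,
# so blending their gains never creates a mid-roughness highlight:
metal_gains = {"ClearCoat": 0.0, "Specular": 1.0, "RoughSpecular": 0.0}
dust_gains  = {"ClearCoat": 0.0, "Specular": 0.0, "RoughSpecular": 1.0}
presence = 0.5
blended = {lobe: (1 - presence) * metal_gains[lobe] + presence * dust_gains[lobe]
           for lobe in metal_gains}
# {'ClearCoat': 0.0, 'Specular': 0.5, 'RoughSpecular': 0.5}
```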

Rust shader by Harsh Agrawal, rendered in RenderMan


Without getting into more exotic surface effects such as metal flakes, scratches, fuzz or iridescence, layering these three lobes separately has, in practice, solved 95% of our visible layering issues, giving us accurate results.


Cars 3's Storm, closeup of metal flakes

At Pixar we still layer patterns, compose the results into a virtual structure of premultiplied values (V-Struct) and feed the resulting values directly to PxrSurface. We are not alone in this: many studios do their look development in procedural and/or painting applications where a lot of the visual complexity is baked, and at the end, pre-layered values are fed to an individual uber-shader with a largely predetermined number of illumination lobes (or Bxdfs).

Bxdf Layering

In a perfect path tracing world, with accurate prediction of where most of the light will be coming from, layering Bxdf results directly is no longer that much more expensive than pattern layering. The layering logic needs to know what is covering what, and how, since that affects the Probability Density Functions (PDFs) and their resulting samples.
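A minimal mixture-style sketch of the idea (not RenderMan's implementation): coverage drives both which layer gets sampled and the combined PDF.

```python
import random

class LayeredBxdf:
    """Toy two-layer Bxdf: 'presence' is how much of the surface the
    top layer covers."""

    def __init__(self, top, base, presence):
        self.top, self.base, self.presence = top, base, presence

    def sample(self, wo):
        # Pick a layer proportionally to its coverage...
        layer = self.top if random.random() < self.presence else self.base
        wi = layer.sample(wo)
        # ...but the PDF must account for both layers' chances of
        # generating this direction, or the estimator is biased.
        return wi, self.pdf(wo, wi)

    def pdf(self, wo, wi):
        return (self.presence * self.top.pdf(wo, wi)
                + (1.0 - self.presence) * self.base.pdf(wo, wi))

    def eval(self, wo, wi):
        return (self.presence * self.top.eval(wo, wi)
                + (1.0 - self.presence) * self.base.eval(wo, wi))
```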

We did try this approach during Finding Dory, but did not publish it with RenderMan (not to be confused with PxrLayerSurface, which relies on pattern layering). While more work was needed to make it really efficient, there are advantages to it: blending is more accurate, and complex pipeline solutions like V-Structs become unnecessary. Leaner and more diverse Bxdfs also become feasible, with less reliance on one UberShader to rule them all. It will be exciting to see what happens in this field in the coming years.

Illumination lobe based layering

Bxdfs, in an UberShader world, are composed of multiple illumination lobes: seemingly simple ones, like diffuse or specular, as well as complex ones, like path-traced subsurface scattering. Oftentimes the light coming out of these lobes is simply added together into a final color.
In the real world, however, these illumination properties often come from distinct layers of the physical material. In a car paint, for example, there will be a metal layer, a paint layer with metal flakes embedded in it, and a clear coat on top of it all. The more light the top lobe reflects, the less light the underlying layers will receive, and therefore re-emit. The top layer might also modify the light's properties and diffuse it in a way that affects what happens in the lower layers. This can get complicated quickly, beyond what pattern layering can do.

An easy way to begin emulating this is to take the Fresnel coefficient that drives your specular lobes, invert it, and apply it to all the lobes beneath.
For example, if light is largely reflected by the specular clear coat at grazing angles, we should dim the response of the other layers accordingly. This keeps the overall response from blowing out beyond physically valid coefficients, which matters if you care about your render times and quality. If you have more than one specular lobe, you may have to do this multiple times.
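A sketch using Schlick's Fresnel approximation, with an illustrative lobe list: each specular lobe reflects its share, and only the remainder is passed down to the lobes beneath.

```python
def schlick_fresnel(cos_theta, f0):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def layer_lobes(cos_theta, specular_lobes):
    """specular_lobes: top-to-bottom list of (f0, raw_response) pairs;
    names and structure are illustrative. Returns the attenuated specular
    total plus the transmission left over for the diffuse/base lobes."""
    transmission, total = 1.0, 0.0
    for f0, response in specular_lobes:
        f = schlick_fresnel(cos_theta, f0)
        total += transmission * f * response
        transmission *= 1.0 - f   # the inverted Fresnel, applied to what's below
    return total, transmission
```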



There are a few good papers on this topic, like "A Comprehensive Framework for Rendering Layered Materials" (2014) by Jakob, d'Eon, Jakob and, of course, Steve Marschner. While the science is good, I haven't seen a full-featured implementation in commercial renderers.

This is something that will be fun to watch evolve over the coming years, as will the way standard descriptions of layered materials take shape in Usd, MaterialX, and MDL, to name a few.
