Custom Volume Post-Processing in URP
Original post: https://www.jianshu.com/p/a5456036ab95
URP's built-in post-processing works like this: you place a Volume in the scene, add the common effects to it (Bloom, Tonemapping, and so on), tune their parameters, and tick Post Processing on the main camera. That is all it takes to get post-processing.
If you want custom post-processing, the usual approach is to write a RenderFeature. The advantage is that each custom effect is self-contained and easy to reorder: you adjust it directly in the URP Renderer asset. The downside is that each effect typically has to copy the current RT out, then render back onto the color RT with its own algorithm. Every effect pays for that copy, which wastes a lot of bandwidth: a pair of ping-pong buffers would suffice, yet we end up with many extra Blits. That is probably the price of the RenderFeature's flexibility.
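To make that cost concrete, here is a minimal sketch of the RenderFeature approach (my own illustration for the ForwardRenderer-era URP API, not code from URP itself; the class, field, and RT names are hypothetical). Note the copy-out Blit that every such feature pays before it can render back:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class MultiplyColorFeature : ScriptableRendererFeature
{
    class CustomPass : ScriptableRenderPass
    {
        public Material material;
        RenderTargetIdentifier m_CameraColor;
        static readonly int s_TempId = Shader.PropertyToID("_TempCopy");

        public void Setup(RenderTargetIdentifier cameraColor) => m_CameraColor = cameraColor;

        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            var cmd = CommandBufferPool.Get("MultiplyColorFeature");
            var desc = renderingData.cameraData.cameraTargetDescriptor;
            desc.depthBufferBits = 0;
            // The cost: copy the camera color out to a temporary RT...
            cmd.GetTemporaryRT(s_TempId, desc, FilterMode.Bilinear);
            cmd.Blit(m_CameraColor, s_TempId);
            // ...then render the effect back onto the camera color RT.
            cmd.Blit(s_TempId, m_CameraColor, material, 0);
            cmd.ReleaseTemporaryRT(s_TempId);
            context.ExecuteCommandBuffer(cmd);
            CommandBufferPool.Release(cmd);
        }
    }

    public Material material;
    CustomPass m_Pass;

    public override void Create()
    {
        m_Pass = new CustomPass { renderPassEvent = RenderPassEvent.AfterRenderingPostProcessing };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        m_Pass.material = material;
        m_Pass.Setup(renderer.cameraColorTarget);
        renderer.EnqueuePass(m_Pass);
    }
}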
URP's own post-processing instead Blits back and forth between two buffers, and it also makes some optimizations for the TBR architecture of mobile GPUs. As I understand it, when an RT is set as the render target you can declare whether its color and depth contents matter, i.e. whether they must be loaded from memory into the GPU's on-chip tile SRAM right away (which also forces deferred rendering commands to execute immediately), and likewise, on store, whether the results must be written back to memory.
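These hints are expressed through load/store actions when setting the render target. A minimal sketch, assuming cmd is a CommandBuffer and target an RT we are about to fully overwrite (the same pattern appears verbatim in PostProcessPass later in this article):

// DontCare on load tells a tiled GPU it need not read the old color contents
// from main memory into tile SRAM; Store means the result must be written back.
// The depth buffer here is neither loaded nor stored.
cmd.SetRenderTarget(target,
    RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,      // color
    RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);  // depth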
Step 1: Define the settings panel
Defining a post-processing panel is fairly simple. Say I want to multiply the screen by a fixed color; it can be written like this:
namespace UnityEngine.Rendering.Universal
{
    [System.Serializable, VolumeComponentMenu("CustomVolume/MultiplyColor")]
    public sealed class MultiplyColor : VolumeComponent, IPostProcessComponent
    {
        // The material that performs the multiply; overrideState defaults to true
        public MaterialParameter material = new MaterialParameter(null, true);
        public ColorParameter color = new ColorParameter(Color.white, false);

        // The effect only runs when a material has been assigned
        public bool IsActive() => material.value != null;

        public bool IsTileCompatible() => false;

        public override void Override(VolumeComponent state, float interpFactor)
        {
            base.Override(state, interpFactor);
        }
    }
}
ColorParameter here is the parameter type we declare for the panel; besides it there are FloatParameter, IntParameter, ClampedFloatParameter, and more.
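For example (hypothetical fields, just to show the declaration pattern):

public FloatParameter intensity = new FloatParameter(1f);
public IntParameter iterations = new IntParameter(4);
// ClampedFloatParameter additionally draws a min/max slider in the inspector
public ClampedFloatParameter blend = new ClampedFloatParameter(0.5f, 0f, 1f);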
When a parameter type we need isn't available, such as an enum or a Material, we have to define our own. The MaterialParameter above is one I defined:
using System;
using UnityEngine;
using UnityEngine.Rendering;

[Serializable]
public sealed class MaterialParameter : VolumeParameter<Material>
{
    public MaterialParameter(Material value, bool overrideState = false)
        : base(value, overrideState) { }
}
VolumeParameter has several methods you can override; step into the source to see them.
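One example is Interp, which controls how a parameter blends between overlapping volumes. Here is a sketch of a custom float parameter with a smoothed blend, my own illustration rather than anything from URP:

using System;
using UnityEngine;
using UnityEngine.Rendering;

[Serializable]
public sealed class EasedFloatParameter : VolumeParameter<float>
{
    public EasedFloatParameter(float value, bool overrideState = false)
        : base(value, overrideState) { }

    // Called by the volume system when blending two volumes;
    // ease the blend factor instead of interpolating linearly.
    public override void Interp(float from, float to, float t)
    {
        m_Value = Mathf.Lerp(from, to, t * t * (3f - 2f * t));
    }
}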
Now this component can be added as an override on a Volume in the inspector.
If you need a custom inspector for the component, inherit from VolumeComponentEditor; see how ChannelMixerEditor is written:
namespace UnityEditor.Rendering.Universal
{
    [VolumeComponentEditor(typeof(ChannelMixer))]
    sealed class ChannelMixerEditor : VolumeComponentEditor
    {
        //something
    }
}
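Following that pattern, a custom editor for the MultiplyColor component above might look like this (a sketch modeled on URP's built-in volume editors, not tested code from the original post):

using UnityEngine.Rendering.Universal;

namespace UnityEditor.Rendering.Universal
{
    [VolumeComponentEditor(typeof(MultiplyColor))]
    sealed class MultiplyColorEditor : VolumeComponentEditor
    {
        SerializedDataParameter m_Material;
        SerializedDataParameter m_Color;

        public override void OnEnable()
        {
            // Fetch serialized fields from the component
            var o = new PropertyFetcher<MultiplyColor>(serializedObject);
            m_Material = Unpack(o.Find(x => x.material));
            m_Color = Unpack(o.Find(x => x.color));
        }

        public override void OnInspectorGUI()
        {
            PropertyField(m_Material);
            PropertyField(m_Color);
        }
    }
}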
Step 2: Write the post-processing logic
Defining the panel alone obviously produces no effect, and URP currently exposes no way to add a custom Volume pass, so implementing your own effect means modifying part of the URP source.
Fortunately, that code is not particularly hard to read.
URP's built-in Volume post-processing includes the following effects:
Effect | Implemented in
---|---
Bloom | PostProcessPass
Channel Mixer | ColorGradingLutPass
Chromatic Aberration | PostProcessPass
Color Adjustments | ColorGradingLutPass & PostProcessPass
Color Curves | ColorGradingLutPass
Color Lookup | PostProcessPass
Depth Of Field | PostProcessPass
Film Grain | PostProcessPass
Lens Distortion | PostProcessPass
Lift, Gamma, Gain | ColorGradingLutPass
Motion Blur | PostProcessPass
Panini Projection | PostProcessPass
Shadows, Midtones, Highlights | ColorGradingLutPass
Split Toning | ColorGradingLutPass
Tonemapping | ColorGradingLutPass & PostProcessPass
Vignette | PostProcessPass
White Balance | ColorGradingLutPass
Both of these files inherit from SRP's ScriptableRenderPass. I mainly care about PostProcessPass, since most effects are referenced from that file.
A few definitions appear near the top:
public class PostProcessPass : ScriptableRenderPass
{
    RenderTextureDescriptor m_Descriptor;
    RenderTargetHandle m_Source;
    RenderTargetHandle m_Destination;
    //...
m_Source is the original image we will be processing; m_Destination will be _AfterPostProcessTexture, as can be seen in the Frame Debugger.
The pass then declares a member for each post-processing component and fetches them in Execute:
DepthOfField m_DepthOfField;
MotionBlur m_MotionBlur;
PaniniProjection m_PaniniProjection;
Bloom m_Bloom;
LensDistortion m_LensDistortion;
ChromaticAberration m_ChromaticAberration;
Vignette m_Vignette;
ColorLookup m_ColorLookup;
ColorAdjustments m_ColorAdjustments;
Tonemapping m_Tonemapping;
FilmGrain m_FilmGrain;

//Execute
public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
{
    // Start by pre-fetching all builtin effect settings we need
    // Some of the color-grading settings are only used in the color grading lut pass
    var stack = VolumeManager.instance.stack;
    m_DepthOfField = stack.GetComponent<DepthOfField>();
    m_MotionBlur = stack.GetComponent<MotionBlur>();
    m_PaniniProjection = stack.GetComponent<PaniniProjection>();
    m_Bloom = stack.GetComponent<Bloom>();
    m_LensDistortion = stack.GetComponent<LensDistortion>();
    m_ChromaticAberration = stack.GetComponent<ChromaticAberration>();
    m_Vignette = stack.GetComponent<Vignette>();
    m_ColorLookup = stack.GetComponent<ColorLookup>();
    m_ColorAdjustments = stack.GetComponent<ColorAdjustments>();
    m_Tonemapping = stack.GetComponent<Tonemapping>();
    m_FilmGrain = stack.GetComponent<FilmGrain>();
    //do something
The main logic lives in Execute, so that is where we focus.
It starts by asking whether this is the final pass:
if (m_IsFinalPass)
{
    var cmd = CommandBufferPool.Get(k_RenderFinalPostProcessingTag);
    RenderFinalPass(cmd, ref renderingData);
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}
else if (CanRunOnTile())
{
    // TODO: Add a fast render path if only on-tile compatible effects are used and we're actually running on a platform that supports it
    // Note: we can still work on-tile if FXAA is enabled, it'd be part of the final pass
}
else
{
    var cmd = CommandBufferPool.Get(k_RenderPostProcessingTag);
    Render(cmd, ref renderingData);
    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
}
URP runs FXAA after all cameras, Base and Overlay alike, have rendered. With FXAA enabled you can see that the last call is the final pass; without it, the last step is just an ordinary Blit.
The ordering is decided in ForwardRenderer.cs's Setup: if this is the last camera in the stack and FXAA is on, the final post-process pass is enqueued:
bool lastCameraInTheStack = renderingData.resolveFinalTarget;
bool hasCaptureActions = renderingData.cameraData.captureActions != null && lastCameraInTheStack;
bool applyFinalPostProcessing = anyPostProcessing && lastCameraInTheStack &&
    renderingData.cameraData.antialiasing == AntialiasingMode.FastApproximateAntialiasing;
//something
if (applyFinalPostProcessing)
{
    m_FinalPostProcessPass.SetupFinalPass(sourceForFinalPass);
    EnqueuePass(m_FinalPostProcessPass);
}
Back in PostProcessPass:
public void SetupFinalPass(in RenderTargetHandle source)
{
    m_Source = source;
    m_Destination = RenderTargetHandle.CameraTarget;
    m_IsFinalPass = true;
    m_HasFinalPass = false;
    m_EnableSRGBConversionIfNeeded = true;
}
m_IsFinalPass is set to true, so RenderFinalPass runs the final rendering path; when it is not the final pass, Render runs the ordinary post-processing path.
Render declares two locals that act as a double buffer:
int source = m_Source.id;
int destination = -1;
Two accessors retrieve the buffers:
int GetSource() => source;

int GetDestination()
{
    if (destination == -1)
    {
        cmd.GetTemporaryRT(ShaderConstants._TempTarget, GetStereoCompatibleDescriptor(), FilterMode.Bilinear);
        destination = ShaderConstants._TempTarget;
        tempTargetUsed = true;
    }
    else if (destination == m_Source.id && m_Descriptor.msaaSamples > 1)
    {
        // Avoid using m_Source.id as new destination, it may come with a depth buffer that we don't want, may have MSAA that we don't want etc
        cmd.GetTemporaryRT(ShaderConstants._TempTarget2, GetStereoCompatibleDescriptor(), FilterMode.Bilinear);
        destination = ShaderConstants._TempTarget2;
        tempTarget2Used = true;
    }
    return destination;
}
And a method swaps them:
void Swap() => CoreUtils.Swap(ref source, ref destination);
What follows is a series of blocks like the one below: check that the effect is enabled and the camera is not the Scene view camera, call the effect's method, then swap the buffers.
// Motion blur
if (m_MotionBlur.IsActive() && !cameraData.isSceneViewCamera)
{
    using (new ProfilingScope(cmd, ProfilingSampler.Get(URPProfileId.MotionBlur)))
    {
        DoMotionBlur(cameraData.camera, cmd, GetSource(), GetDestination());
        Swap();
    }
}
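To make the ping-pong concrete, here is my trace of source/destination across the first two effects, assuming MSAA is off:

// Initial state: source = m_Source.id, destination = -1
// Effect 1: GetDestination() allocates _TempTarget; blit m_Source -> _TempTarget; Swap()
//           now source = _TempTarget, destination = m_Source.id
// Effect 2: destination is m_Source.id and MSAA is off, so it is reused;
//           blit _TempTarget -> m_Source; Swap()
//           now source = m_Source.id, destination = _TempTarget
// From here on the two RTs simply alternate -- no per-effect copies are needed.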
A few effects are special, though: Bloom, LensDistortion, ChromaticAberration, Vignette, ColorGrading and so on are merged into a single uber shader, with individual features toggled through keywords, and the combined result is Blitted directly to m_Destination (_AfterPostProcessTexture).
So if we want to render after those effects, we have to find a way to break that step (the direct Blit to m_Destination) apart; how to do so comes later.
With the overall structure clear, we continue writing in the style of the URP source.
First declare the component:
// Custom effects settings
MultiplyColor m_MultiplyColor;
Then fetch it in Execute:
//Custom
m_MultiplyColor = stack.GetComponent<MultiplyColor>();
Then, in Render, check IsActive and run the pass:
enum CustomProfileId
{
    MultiplyColor,
}

// In the Render method
//
// Custom Effect_______________________________________________________________________
//
if (m_MultiplyColor.IsActive() && !cameraData.isSceneViewCamera)
{
    using (new ProfilingScope(cmd, ProfilingSampler.Get(CustomProfileId.MultiplyColor)))
    {
        DoMultiplyColor(cameraData.camera, cmd, GetSource(), GetDestination());
        Swap();
    }
}

void DoMultiplyColor(Camera camera, CommandBuffer cmd, int source, int destination)
{
    var material = m_MultiplyColor.material.value;
    cmd.SetGlobalTexture("_BlitTex", source);
    cmd.SetGlobalColor("_Color", m_MultiplyColor.color.value);
    Blit(cmd, source, BlitDstDiscardContent(cmd, destination), material, 0);
}
That is enough to produce an actual effect.
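As a quick way to try it out, the component can also be attached and overridden from script; here is a minimal test sketch of mine (MultiplyColorTest and multiplyColorMaterial are hypothetical names, not from the original post):

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class MultiplyColorTest : MonoBehaviour
{
    public Material multiplyColorMaterial; // material using the multiply-color shader

    void Start()
    {
        var volume = gameObject.AddComponent<Volume>();
        volume.isGlobal = true;
        // The profile getter clones the shared profile, so edits stay local
        var mc = volume.profile.Add<MultiplyColor>();
        mc.material.Override(multiplyColorMaterial);
        mc.color.Override(new Color(1f, 0.5f, 0.5f));
    }
}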
Post-processing after the uber pass
Scenario
Art reported that when the main camera has FXAA enabled, the UI also turns blurry. A look at the Frame Debugger showed that FXAA happens in the final pass, i.e. anti-aliasing runs once after all Overlay cameras have rendered, whereas we only want the scene anti-aliased. So I defined a custom FXAA Volume component, extracted the FXAA portion of the final pass shader into its own shader and material, and added the code to PostProcessPass following the steps above. But the resulting image had no Bloom. On inspection, the FXAA shader contains this kind of computation:
// Five taps
half3 color = Load(positionSS, 0, 0).xyz;
half3 rgbNW = Load(positionSS, -1, -1);
half3 rgbNE = Load(positionSS, 1, -1);
half3 rgbSW = Load(positionSS, -1, 1);
half3 rgbSE = Load(positionSS, 1, 1);
// Clamp the colors to [0, 1]
rgbNW = saturate(rgbNW);
rgbNE = saturate(rgbNE);
rgbSW = saturate(rgbSW);
rgbSE = saturate(rgbSE);
color = saturate(color);
Bloom is usually driven by emissive materials with color values above 1; once colors are clamped to 1 or below, no bloom can be produced. So I needed FXAA to render after Bloom.
Code analysis
The order of URP's post-processing is fixed in code: however the overrides are arranged on the Volume, the execution order stays the same.
So since the code decides that Bloom, LensDistortion, ChromaticAberration and the rest are combined into the uber shader and rendered last, the uber shader is always the final draw, and Unity optimizes specifically for that: when rendering last, it sets particular RT load/store actions. The source looks like this:
var colorLoadAction = RenderBufferLoadAction.DontCare;
if (m_Destination == RenderTargetHandle.CameraTarget && !cameraData.isDefaultViewport)
    colorLoadAction = RenderBufferLoadAction.Load;

RenderTargetIdentifier cameraTarget = (cameraData.targetTexture != null) ? new RenderTargetIdentifier(cameraData.targetTexture) : BuiltinRenderTextureType.CameraTarget;
cameraTarget = (m_Destination == RenderTargetHandle.CameraTarget) ? cameraTarget : m_Destination.Identifier();

cmd.SetRenderTarget(cameraTarget, colorLoadAction, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
bool finishPostProcessOnScreen = renderingData.resolveFinalTarget || (m_Destination == RenderTargetHandle.CameraTarget || m_HasFinalPass == true);

// On VR use Blit; otherwise draw a fullscreen quad with DrawMesh
if (m_IsStereo)
{
    Blit(cmd, GetSource(), BuiltinRenderTextureType.CurrentActive, m_Materials.uber);
    if (!finishPostProcessOnScreen)
    {
        cmd.SetGlobalTexture("_BlitTex", cameraTarget);
        Blit(cmd, BuiltinRenderTextureType.CurrentActive, m_Source.id, m_BlitMaterial);
    }
}
else
{
    cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
    if (m_Destination == RenderTargetHandle.CameraTarget)
        cmd.SetViewport(cameraData.pixelRect);
    cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_Materials.uber);
    if (!finishPostProcessOnScreen)
    {
        cmd.SetGlobalTexture("_BlitTex", cameraTarget);
        cmd.SetRenderTarget(m_Source.id, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
        cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_BlitMaterial);
    }
    cmd.SetViewProjectionMatrices(cameraData.camera.worldToCameraMatrix, cameraData.camera.projectionMatrix);
}
The branching looks involved, but simplified it comes down to a few points:
1. Set the RT and its load/store actions;
2. Check whether there is a final pass;
3. Blit the source buffer through the uber shader into the overall destination, m_Destination;
4. If there is a final pass, copy m_Destination back into the overall source, m_Source.
The copy back to m_Source exists because the final pass (an MSAA resolve or FXAA) fetches its color directly from m_Source; see RenderFinalPass -> cmd.SetGlobalTexture("_BlitTex", m_Source.Identifier()).
When we have extra work that must happen after the uber shader effects such as Bloom, and don't want to use a RenderFeature, these steps need to be pulled apart.
Rewriting the code
Unity assumes the uber shader is always the last post-process. I want to treat it as an ordinary one instead, so a plain Blit is enough:
cmd.SetGlobalTexture("_BlitTex", GetSource());
Blit(cmd, GetSource(), GetDestination(), m_Materials.uber, 0);
Swap();
Then run our FXAA:
// Declaration: FXAA m_FXAA;
// Initialization: m_FXAA = stack.GetComponent<FXAA>();
if (m_FXAA.IsActive() && !cameraData.isSceneViewCamera)
{
    using (new ProfilingScope(cmd, ProfilingSampler.Get(CustomProfileId.FXAA)))
    {
        DoFXAA(cameraData.camera, cmd, GetSource(), GetDestination());
        Swap();
    }
}
Then copy the source into the overall target RT, m_Destination:
// Set the Blit source to source
cmd.SetGlobalTexture("_BlitTex", GetSource());

// Same as before
var colorLoadAction = RenderBufferLoadAction.DontCare;
if (m_Destination == RenderTargetHandle.CameraTarget && !cameraData.isDefaultViewport)
    colorLoadAction = RenderBufferLoadAction.Load;

RenderTargetIdentifier cameraTarget = (cameraData.targetTexture != null) ? new RenderTargetIdentifier(cameraData.targetTexture) : BuiltinRenderTextureType.CameraTarget;
cameraTarget = (m_Destination == RenderTargetHandle.CameraTarget) ? cameraTarget : m_Destination.Identifier();

cmd.SetRenderTarget(cameraTarget, colorLoadAction, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
bool finishPostProcessOnScreen = renderingData.resolveFinalTarget || (m_Destination == RenderTargetHandle.CameraTarget || m_HasFinalPass == true);

// Slightly modified
if (m_IsStereo)
{
    Blit(cmd, GetSource(), BuiltinRenderTextureType.CurrentActive, m_BlitMaterial);
}
else
{
    cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
    if (m_Destination == RenderTargetHandle.CameraTarget)
        cmd.SetViewport(cameraData.pixelRect);
    cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_BlitMaterial);
    cmd.SetViewProjectionMatrices(cameraData.camera.worldToCameraMatrix, cameraData.camera.projectionMatrix);
}

if (!finishPostProcessOnScreen)
{
    if (m_IsStereo)
    {
        cmd.SetGlobalTexture("_BlitTex", cameraTarget);
        Blit(cmd, BuiltinRenderTextureType.CurrentActive, m_Source.id, m_BlitMaterial);
    }
    else
    {
        cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
        cmd.SetGlobalTexture("_BlitTex", cameraTarget);
        cmd.SetRenderTarget(m_Source.id, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
        cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_BlitMaterial);
        cmd.SetViewProjectionMatrices(cameraData.camera.worldToCameraMatrix, cameraData.camera.projectionMatrix);
    }
}
This successfully splits the uber shader and the copy-to-destination into two separate steps.
My Combined post-processing stack section now looks like this overall:
// Combined post-processing stack
using (new ProfilingScope(cmd, ProfilingSampler.Get(URPProfileId.UberPostProcess)))
{
    // Reset uber keywords
    m_Materials.uber.shaderKeywords = null;

    // Bloom goes first
    bool bloomActive = m_Bloom.IsActive();
    if (bloomActive)
    {
        using (new ProfilingScope(cmd, ProfilingSampler.Get(URPProfileId.Bloom)))
            SetupBloom(cmd, GetSource(), m_Materials.uber);
    }

    // Setup other effects constants
    SetupLensDistortion(m_Materials.uber, cameraData.isSceneViewCamera);
    SetupChromaticAberration(m_Materials.uber);
    SetupVignette(m_Materials.uber);
    SetupColorGrading(cmd, ref renderingData, m_Materials.uber);

    // Only apply dithering & grain if there isn't a final pass.
    SetupGrain(cameraData, m_Materials.uber);
    SetupDithering(cameraData, m_Materials.uber);

    if (RequireSRGBConversionBlitToBackBuffer(cameraData) && m_EnableSRGBConversionIfNeeded)
        m_Materials.uber.EnableKeyword(ShaderKeywordStrings.LinearToSRGBConversion);

    cmd.SetGlobalTexture("_BlitTex", GetSource());
    Blit(cmd, GetSource(), GetDestination(), m_Materials.uber, 0);
    Swap();

    if (m_FXAA.IsActive() && !cameraData.isSceneViewCamera)
    {
        using (new ProfilingScope(cmd, ProfilingSampler.Get(CustomProfileId.FXAA)))
        {
            DoFXAA(cameraData.camera, cmd, GetSource(), GetDestination());
            Swap();
        }
    }

    cmd.SetGlobalTexture("_BlitTex", GetSource());

    var colorLoadAction = RenderBufferLoadAction.DontCare;
    if (m_Destination == RenderTargetHandle.CameraTarget && !cameraData.isDefaultViewport)
        colorLoadAction = RenderBufferLoadAction.Load;

    RenderTargetIdentifier cameraTarget = (cameraData.targetTexture != null) ? new RenderTargetIdentifier(cameraData.targetTexture) : BuiltinRenderTextureType.CameraTarget;
    cameraTarget = (m_Destination == RenderTargetHandle.CameraTarget) ? cameraTarget : m_Destination.Identifier();

    cmd.SetRenderTarget(cameraTarget, colorLoadAction, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
    bool finishPostProcessOnScreen = renderingData.resolveFinalTarget || (m_Destination == RenderTargetHandle.CameraTarget || m_HasFinalPass == true);

    if (m_IsStereo)
    {
        Blit(cmd, GetSource(), BuiltinRenderTextureType.CurrentActive, m_BlitMaterial);
    }
    else
    {
        cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
        if (m_Destination == RenderTargetHandle.CameraTarget)
            cmd.SetViewport(cameraData.pixelRect);
        cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_BlitMaterial);
        cmd.SetViewProjectionMatrices(cameraData.camera.worldToCameraMatrix, cameraData.camera.projectionMatrix);
    }

    if (!finishPostProcessOnScreen)
    {
        if (m_IsStereo)
        {
            cmd.SetGlobalTexture("_BlitTex", cameraTarget);
            Blit(cmd, BuiltinRenderTextureType.CurrentActive, m_Source.id, m_BlitMaterial);
        }
        else
        {
            cmd.SetViewProjectionMatrices(Matrix4x4.identity, Matrix4x4.identity);
            cmd.SetGlobalTexture("_BlitTex", cameraTarget);
            cmd.SetRenderTarget(m_Source.id, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
            cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_BlitMaterial);
            cmd.SetViewProjectionMatrices(cameraData.camera.worldToCameraMatrix, cameraData.camera.projectionMatrix);
        }
    }

    // Cleanup
    if (bloomActive)
        cmd.ReleaseTemporaryRT(ShaderConstants._BloomMipUp[0]);

    if (tempTargetUsed)
        cmd.ReleaseTemporaryRT(ShaderConstants._TempTarget);

    if (tempTarget2Used)
        cmd.ReleaseTemporaryRT(ShaderConstants._TempTarget2);
}