One of Unity's great strengths is its Asset Store: a healthy ecosystem naturally produces high-quality plugins. Today let's talk about Lighting Box, a plugin that has been recommended to me many times.
Let me first quote the product blurb: this plugin lets users achieve AAA-quality lighting within seconds to minutes, with no prior lighting experience required. It ships with a high-quality tessellated terrain shader and an easy-to-use terrain controller component; it supports both indoor and outdoor scenes, provides true volumetric lighting and global fog, and works under both forward and deferred rendering. It also offers auto focus, which makes depth of field easy to use. Its main features:

1. A lighting solution that can be saved in real time.
2. Saving and loading of lighting profiles.
3. Post Processing Stack v2 support.
4. Presets built on the best post-processing effect settings.
5. A built-in terrain shader.
6. Built-in POM and tessellation shaders.
7. A distance-based particle shader.
8. A high-quality sun lens-flare effect.
9. Night and snow sample scenes.
All right, let's dig into the details.
In an actual project there is a GameObject named LightingBox_Helper carrying LB_LightingBoxHelper.cs, and another GameObject, Global Volume, carrying the PPS volume component. Behind those two sits the real boss, LB_LightingBox.cs, a subclass of EditorWindow. Let's start with a quick read of the LB_LightingBox.cs source.
[MenuItem("Window/Lighting Box 2 %E")] static void Init() { // Get existing open window or if none, make a new one: //// LB_LightingBox window = (LB_LightingBox)EditorWindow.GetWindow(typeof(LB_LightingBox)); System.Type inspectorType = System.Type.GetType("UnityEditor.InspectorWindow,UnityEditor.dll"); LB_LightingBox window = (LB_LightingBox)EditorWindow.GetWindow("Lighting Box 2", true, new System.Type[] {inspectorType} ); //这是个简单的创建窗口的代码,首先通过EditorWindow.GetWindow来取得窗口实例,然后展现 window.Show(); window.autoRepaintOnSceneChange = true; //当这个变量为true时,如果unity编辑视窗(注意不只是scene视窗,其他窗口)只要有变动,就会重画窗口,为false就不会。 window.maxSize = new Vector2 (1000f, 1000f); window.minSize = new Vector2 (387f, 1000f); }
```csharp
void OnEnable()
{
    arrowOn = Resources.Load("arrowOn") as Texture2D;
    arrowOff = Resources.Load("arrowOff") as Texture2D;
    if (!GameObject.Find("LightingBox_Helper"))
    {
        GameObject helperObject = new GameObject("LightingBox_Helper");
        // This is why LightingBox_Helper keeps coming back like a virus: delete it and it is recreated automatically.
        helperObject.AddComponent<LB_LightingBoxHelper>();
        helper = helperObject.GetComponent<LB_LightingBoxHelper>();
    }
    EditorApplication.hierarchyWindowChanged += OnNewSceneOpened;
    currentScene = EditorSceneManager.GetActiveScene().name;
    if (System.String.IsNullOrEmpty(EditorPrefs.GetString(EditorSceneManager.GetActiveScene().name)))
        LB_LightingProfile = Resources.Load("DefaultSettings") as LB_LightingProfile;
    else
        LB_LightingProfile = (LB_LightingProfile)AssetDatabase.LoadAssetAtPath(EditorPrefs.GetString(EditorSceneManager.GetActiveScene().name), typeof(LB_LightingProfile));
    OnLoad();
    // OnLoad() fetches mainCamera and LightingBox_Helper, applies the settings, and then calls the three main
    // functions: UpdatePostEffects, UpdateSettings, and Update_Sun.
    // LB_LightingBoxHelper.cs contains these three functions too; UpdatePostEffects and Update_Sun are identical in
    // both. Only UpdateSettings differs: the LB_LightingBox.cs version does some extra setup, namely
    // Update_LightingMode, Update_LightSettings, Update_ColorSpace, Update_RenderPath, Update_Shadows,
    // Update_LightProbes and Update_AutoMode. In my opinion these really do belong in the editor rather than being
    // adjusted at runtime.
}
```
Since this is, after all, an EditorWindow subclass, there is an OnGUI. I have to say the editor UI of this plugin is nicely done: it is split into four main tabs, Scene, Effects, Color, and Screen, each containing several sub-sections. The first tab is open by default, and every sub-section starts folded; each can be expanded independently, and each carries a help note (the notes can be hidden or shown with one click). All of this lives in OnGUI; the details follow below, and the pattern is worth borrowing when you write your own plugins (see the sketch right after this paragraph).
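As an aside, here is a minimal, hypothetical sketch of that foldout pattern. The names (`lightingFold`, `showHelp`, and so on) are mine, not the plugin's; it just shows the EditorWindow + Foldout + HelpBox combination the window is built from:

```csharp
using UnityEditor;
using UnityEngine;

public class FoldoutWindowSketch : EditorWindow
{
    bool lightingFold, fogFold;   // one bool per collapsible sub-section
    bool showHelp = true;         // global toggle for the inline help notes

    [MenuItem("Window/Foldout Sketch")]
    static void Open() => GetWindow<FoldoutWindowSketch>("Foldout Sketch");

    void OnGUI()
    {
        showHelp = GUILayout.Toggle(showHelp, "Show Help");

        // Each sub-section is a Foldout; its body is only drawn while expanded.
        lightingFold = EditorGUILayout.Foldout(lightingFold, "Lighting");
        if (lightingFold)
        {
            if (showHelp) EditorGUILayout.HelpBox("Per-section help note.", MessageType.Info);
            EditorGUILayout.Slider("Sun Intensity", 1f, 0f, 8f);
        }

        fogFold = EditorGUILayout.Foldout(fogFold, "Fog");
        if (fogFold)
            EditorGUILayout.ColorField("Fog Color", Color.gray);
    }
}
```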
All of LightingBox's configuration is stored here: Camera, Lightmap, LightingMode, LightSettings, ColorSpace, RenderPath, light probes, Shadows, skybox, ambient light, sun flare, sun color, Volumetric Light, Sun Shaft, Fog, DOF (Auto Focus), Bloom, Color Grading, Foliage, Snow, AA, AO, Vignette, Motion Blur, Chromatic Aberration, SSR, and Stochastic SSR. Once you have read this function, you have effectively read the whole of LightingBox.
This part changes the ambient lighting through Unity's built-in API, RenderSettings.ambientMode: UnityEngine.Rendering.AmbientMode.Skybox, UnityEngine.Rendering.AmbientMode.Flat (Color), or UnityEngine.Rendering.AmbientMode.Trilight (Gradient).
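For reference, a minimal sketch of driving those three modes yourself through the same RenderSettings API (the method names and color values are placeholders of mine):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class AmbientSketch
{
    // Call one of these; each corresponds to one AmbientMode the plugin can select.
    public static void UseFlat(Color color)
    {
        RenderSettings.ambientMode = AmbientMode.Flat;
        RenderSettings.ambientLight = color;            // one uniform ambient color
    }

    public static void UseTrilight(Color sky, Color equator, Color ground)
    {
        RenderSettings.ambientMode = AmbientMode.Trilight;
        RenderSettings.ambientSkyColor = sky;           // gradient: sky ...
        RenderSettings.ambientEquatorColor = equator;   // ... horizon ...
        RenderSettings.ambientGroundColor = ground;     // ... ground
    }

    public static void UseSkybox(float intensity)
    {
        RenderSettings.ambientMode = AmbientMode.Skybox;
        RenderSettings.ambientIntensity = intensity;    // ambient sampled from the skybox
    }
}
```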
This part changes the sun's intensity, bounceIntensity, and flare through Unity's built-in Light API.
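Again a hedged sketch, this time of the corresponding Light properties (the flare asset and values are placeholders):

```csharp
using UnityEngine;

public static class SunSketch
{
    public static void Apply(Light sun, Flare sunFlare)
    {
        sun.intensity = 1.2f;        // direct light strength
        sun.bounceIntensity = 1.0f;  // indirect (GI bounce) multiplier
        sun.flare = sunFlare;        // lens flare drawn at the light's position
    }
}
```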
This part changes the Light Settings through Unity's built-in API.
This part changes a few other assorted properties through Unity's built-in API.
For Volumetric Lighting, LightingBox uses its own bundled volumetric lighting implementation.
LightingBox's bundled Volumetric Lighting script
The code originates from: https://github.com/SlightlyMad/VolumetricLights
Without further ado, let's read the code. The entry point of LightingBox is LB_LightingBox.cs; OnEnable() is initialization, which we skip for now. In OnGUI you can find the Volumetric Light section: it is all parameter plumbing, the key parameters being VolumetricLightType and VLightLevel, and then `helper.Update_VolumetricLight (mainCamera, VL_Enabled, vLight, vLightLevel);` makes volumetric lighting take effect. Looking at Update_VolumetricLight in LB_LightingBoxHelper.cs, it actually does very little: it attaches a VolumetricLightRenderer component to the camera and a VolumetricLight component to every qualifying light.
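A minimal sketch of what that amounts to. The component types are the plugin's; the light-filtering condition is my guess at "qualifying", so treat it as illustrative only:

```csharp
using UnityEngine;

public static class VolumetricSetupSketch
{
    public static void Enable(Camera cam)
    {
        // One renderer on the camera drives the whole effect.
        if (cam.GetComponent<VolumetricLightRenderer>() == null)
            cam.gameObject.AddComponent<VolumetricLightRenderer>();

        // Each participating light gets its own VolumetricLight component.
        foreach (Light light in Object.FindObjectsOfType<Light>())
            if (light.enabled && light.GetComponent<VolumetricLight>() == null)
                light.gameObject.AddComponent<VolumetricLight>();
    }
}
```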
Now let's read VolumetricLightRenderer.cs.
```csharp
void Awake()
{
    _camera = GetComponent<Camera>();
    if (_camera.actualRenderingPath == RenderingPath.Forward)
        _camera.depthTextureMode = DepthTextureMode.Depth; // a depth pass is required

    _currentResolution = Resolution;

    Shader shader = Shader.Find("Hidden/BlitAdd");
    if (shader == null)
        throw new Exception("Critical Error: \"Hidden/BlitAdd\" shader is missing. Make sure it is included in \"Always Included Shaders\" in ProjectSettings/Graphics.");
    _blitAddMaterial = new Material(shader);

    shader = Shader.Find("Hidden/BilateralBlur");
    if (shader == null)
        throw new Exception("Critical Error: \"Hidden/BilateralBlur\" shader is missing. Make sure it is included in \"Always Included Shaders\" in ProjectSettings/Graphics.");
    _bilateralBlurMaterial = new Material(shader);

    _preLightPass = new CommandBuffer();
    _preLightPass.name = "PreLight";

    ChangeResolution();
    // Depending on the Resolution setting, this creates a full-size _volumeLightTexture plus a half-size set
    // (_halfVolumeLightTexture & _halfDepthBuffer) and/or a quarter-size set (_quarterVolumeLightTexture &
    // _quarterDepthBuffer).

    if (_pointLightMesh == null)
    {
        GameObject go = GameObject.CreatePrimitive(PrimitiveType.Sphere);
        _pointLightMesh = go.GetComponent<MeshFilter>().sharedMesh; // grab a sphere mesh
        Destroy(go);
    }

    if (_spotLightMesh == null)
    {
        _spotLightMesh = CreateSpotLightMesh();
        // Builds a spot mesh: essentially three rings linked together into a spotlight cone shell.
    }

    if (_lightMaterial == null)
    {
        shader = Shader.Find("Sandbox/VolumetricLight");
        if (shader == null)
            throw new Exception("Critical Error: \"Sandbox/VolumetricLight\" shader is missing. Make sure it is included in \"Always Included Shaders\" in ProjectSettings/Graphics.");
        _lightMaterial = new Material(shader);
    }

    if (_defaultSpotCookie == null)
        _defaultSpotCookie = DefaultSpotCookie;

    LoadNoise3dTexture();     // reads the NoiseVolume settings and builds the 3D texture _noiseTexture
    GenerateDitherTexture();  // builds _ditheringTexture
}
```
```csharp
void OnEnable()
{
    //_camera.RemoveAllCommandBuffers();
    if (_camera.actualRenderingPath == RenderingPath.Forward)
        _camera.AddCommandBuffer(CameraEvent.AfterDepthTexture, _preLightPass); // inject a CommandBuffer before lighting is computed
    else
        _camera.AddCommandBuffer(CameraEvent.BeforeLighting, _preLightPass);    // inject a CommandBuffer before lighting is computed
}
```
```csharp
// Keeps the RT dimensions in sync with the resolution setting and the camera's pixel size.
void Update()
{
    //#if UNITY_EDITOR
    if (_currentResolution != Resolution)
    {
        _currentResolution = Resolution;
        ChangeResolution();
    }

    if ((_volumeLightTexture.width != _camera.pixelWidth || _volumeLightTexture.height != _camera.pixelHeight))
        ChangeResolution();
    //#endif
}
```
```csharp
public void OnPreRender()
{
    // use very low value for near clip plane to simplify cone/frustum intersection
    Matrix4x4 proj = Matrix4x4.Perspective(_camera.fieldOfView, _camera.aspect, 0.01f, _camera.farClipPlane);
    proj = GL.GetGPUProjectionMatrix(proj, true);
    _viewProj = proj * _camera.worldToCameraMatrix;

    _preLightPass.Clear();

    bool dx11 = SystemInfo.graphicsShaderLevel > 40;

    if (Resolution == VolumtericResolution.Quarter)
    {
        Texture nullTexture = null;
        // down sample depth to half res
        _preLightPass.Blit(nullTexture, _halfDepthBuffer, _bilateralBlurMaterial, dx11 ? 4 : 10);
        // Pass 4 (or 10 on non-DX11) of _bilateralBlurMaterial: collapse each 2x2 block of _CameraDepthTexture into
        // one texel by taking the min/max, and write it into _halfDepthBuffer.

        // down sample depth to quarter res
        _preLightPass.Blit(nullTexture, _quarterDepthBuffer, _bilateralBlurMaterial, dx11 ? 6 : 11);
        // Pass 6 (or 11): same min/max 2x2 reduction, from _HalfResDepthBuffer into _quarterDepthBuffer.

        _preLightPass.SetRenderTarget(_quarterVolumeLightTexture);
    }
    else if (Resolution == VolumtericResolution.Half)
    {
        Texture nullTexture = null;
        // down sample depth to half res
        _preLightPass.Blit(nullTexture, _halfDepthBuffer, _bilateralBlurMaterial, dx11 ? 4 : 10);
        _preLightPass.SetRenderTarget(_halfVolumeLightTexture);
    }
    else
    {
        _preLightPass.SetRenderTarget(_volumeLightTexture);
    }

    _preLightPass.ClearRenderTarget(false, true, new Color(0, 0, 0, 1));

    UpdateMaterialParameters();
    // Hands the RTs above, plus _noiseTexture and _ditheringTexture, to the shader.

    if (PreRenderEvent != null)
        PreRenderEvent(this, _viewProj);
    // This is where the volumetric light actually gets drawn, into _quarterVolumeLightTexture /
    // _halfVolumeLightTexture / _volumeLightTexture; without it these RTs would stay empty.
}
```
```csharp
[ImageEffectOpaque] // once opaques are drawn, the volumetric light result can be composited in
public void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    if (Resolution == VolumtericResolution.Quarter)
    {
        RenderTexture temp = RenderTexture.GetTemporary(_quarterDepthBuffer.width, _quarterDepthBuffer.height, 0, RenderTextureFormat.ARGBHalf);
        temp.filterMode = FilterMode.Bilinear;

        // horizontal bilateral blur at quarter res
        Graphics.Blit(_quarterVolumeLightTexture, temp, _bilateralBlurMaterial, 8);
        // Pass 8: sample QUARTER_RES_BLUR_KERNEL_SIZE * 2 + 1 = 13 texels of _quarterVolumeLightTexture horizontally,
        // fetch the 13 matching depths from _QuarterResDepthBuffer, weight each sample by how close its depth is to
        // the center depth, and write the weighted color into temp.

        // vertical bilateral blur at quarter res
        Graphics.Blit(temp, _quarterVolumeLightTexture, _bilateralBlurMaterial, 9);
        // Pass 9: same 13-tap depth-weighted blur, vertically, from temp back into _quarterVolumeLightTexture.

        // upscale to full res
        Graphics.Blit(_quarterVolumeLightTexture, _volumeLightTexture, _bilateralBlurMaterial, 7);
        // Pass 7: compare the full-res depth (_CameraDepthTexture) against the 4 surrounding low-res depths in
        // _QuarterResDepthBuffer. If they are close, sample _quarterVolumeLightTexture directly; otherwise take the
        // color at the nearest-depth texel. The result goes into _volumeLightTexture.
        RenderTexture.ReleaseTemporary(temp);
    }
    else if (Resolution == VolumtericResolution.Half)
    {
        RenderTexture temp = RenderTexture.GetTemporary(_halfVolumeLightTexture.width, _halfVolumeLightTexture.height, 0, RenderTextureFormat.ARGBHalf);
        temp.filterMode = FilterMode.Bilinear;

        // horizontal bilateral blur at half res
        Graphics.Blit(_halfVolumeLightTexture, temp, _bilateralBlurMaterial, 2);
        // Pass 2: HALF_RES_BLUR_KERNEL_SIZE * 2 + 1 = 11-tap horizontal depth-weighted blur against
        // _HalfResDepthBuffer, written into temp.

        // vertical bilateral blur at half res
        Graphics.Blit(temp, _halfVolumeLightTexture, _bilateralBlurMaterial, 3);
        // Pass 3: same 11-tap blur, vertically, back into _halfVolumeLightTexture.

        // upscale to full res
        Graphics.Blit(_halfVolumeLightTexture, _volumeLightTexture, _bilateralBlurMaterial, 5);
        // Pass 5: depth-aware upsample, as in the quarter-res path, into _volumeLightTexture.
        RenderTexture.ReleaseTemporary(temp);
    }
    else
    {
        RenderTexture temp = RenderTexture.GetTemporary(_volumeLightTexture.width, _volumeLightTexture.height, 0, RenderTextureFormat.ARGBHalf);
        temp.filterMode = FilterMode.Bilinear;

        // horizontal bilateral blur at full res
        Graphics.Blit(_volumeLightTexture, temp, _bilateralBlurMaterial, 0);
        // Pass 0: FULL_RES_BLUR_KERNEL_SIZE * 2 + 1 = 15-tap horizontal depth-weighted blur against
        // _CameraDepthTexture, written into temp.

        // vertical bilateral blur at full res
        Graphics.Blit(temp, _volumeLightTexture, _bilateralBlurMaterial, 1);
        // Pass 1: same 15-tap blur, vertically, back into _volumeLightTexture.
        RenderTexture.ReleaseTemporary(temp);
    }

    // add volume light buffer to rendered scene
    _blitAddMaterial.SetTexture("_Source", source);
    Graphics.Blit(_volumeLightTexture, destination, _blitAddMaterial, 0);
    // Composites the volumetric light like a translucent layer:
    // result = source.xyz * _volumeLightTexture.a + _volumeLightTexture.xyz
}
```
Next, VolumetricLight.cs, the component attached to each light.
```csharp
void Start()
{
#if UNITY_5_5_OR_NEWER
    if (SystemInfo.graphicsDeviceType == GraphicsDeviceType.Direct3D11 ||
        SystemInfo.graphicsDeviceType == GraphicsDeviceType.Direct3D12 ||
        SystemInfo.graphicsDeviceType == GraphicsDeviceType.Metal ||
        SystemInfo.graphicsDeviceType == GraphicsDeviceType.PlayStation4 ||
        SystemInfo.graphicsDeviceType == GraphicsDeviceType.Vulkan ||
        SystemInfo.graphicsDeviceType == GraphicsDeviceType.XboxOne)
    {
        _reversedZ = true;
    }
#endif

    _commandBuffer = new CommandBuffer();
    _commandBuffer.name = "Light Command Buffer";

    _cascadeShadowCommandBuffer = new CommandBuffer();
    _cascadeShadowCommandBuffer.name = "Dir Light Command Buffer";
    _cascadeShadowCommandBuffer.SetGlobalTexture("_CascadeShadowMapTexture", new UnityEngine.Rendering.RenderTargetIdentifier(UnityEngine.Rendering.BuiltinRenderTextureType.CurrentActive));

    _light = GetComponent<Light>();
    //_light.RemoveAllCommandBuffers();
    if (_light.type == LightType.Directional)
    {
        _light.AddCommandBuffer(LightEvent.BeforeScreenspaceMask, _commandBuffer);
        _light.AddCommandBuffer(LightEvent.AfterShadowMap, _cascadeShadowCommandBuffer);
    }
    else
        _light.AddCommandBuffer(LightEvent.AfterShadowMap, _commandBuffer);

    Shader shader = Shader.Find("Sandbox/VolumetricLight");
    if (shader == null)
        throw new Exception("Critical Error: \"Sandbox/VolumetricLight\" shader is missing. Make sure it is included in \"Always Included Shaders\" in ProjectSettings/Graphics.");
    _material = new Material(shader); // new Material(VolumetricLightRenderer.GetLightMaterial());
}
```
```csharp
void OnEnable()
{
    VolumetricLightRenderer.PreRenderEvent += VolumetricLightRenderer_PreRenderEvent;
    // This is the important bit: it hooks into VolumetricLightRenderer's PreRenderEvent, which fires in OnPreRender.
    // The handler is what actually draws the volumetric light (for point/spot lights it draws the meshes built
    // earlier) into _quarterVolumeLightTexture / _halfVolumeLightTexture / _volumeLightTexture via _material. The
    // shader ray-marches, with the loop count capped by the controllable _SampleCount, so it is genuinely usable.
    // Roughly: along the camera-to-mesh segment, accumulate each sample's lighting contribution (a dot of the ray
    // direction and light direction, plus scattering, extinction, and so on).
}
```
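The renderer/light split rests on a plain C# static event. A minimal sketch of the pattern, with hypothetical class names (only the event wiring is shown; the real handlers fill command buffers):

```csharp
using UnityEngine;

public class RendererSketch : MonoBehaviour
{
    // One shared event; the renderer fires it once per camera, each light listens.
    public delegate void PreRenderHandler(RendererSketch sender, Matrix4x4 viewProj);
    public static event PreRenderHandler PreRenderEvent;

    void OnPreRender()
    {
        Matrix4x4 viewProj = Matrix4x4.identity; // stand-in for proj * worldToCameraMatrix
        PreRenderEvent?.Invoke(this, viewProj);  // every subscribed light draws itself now
    }
}

public class LightSketch : MonoBehaviour
{
    void OnEnable()  => RendererSketch.PreRenderEvent += OnPreRenderEvent;
    void OnDisable() => RendererSketch.PreRenderEvent -= OnPreRenderEvent; // avoid dangling subscriptions

    void OnPreRenderEvent(RendererSketch sender, Matrix4x4 viewProj)
    {
        // In the real plugin this is where the light-volume mesh draw is recorded.
    }
}
```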
```csharp
void Update()
{
    _commandBuffer.Clear();
    // Since _commandBuffer is repopulated once per frame during VolumetricLightRenderer's OnPreRender, it is cleared
    // once per frame here as well. Room for optimization, perhaps?
}
```
To sum up: first obtain the depth texture (downsampled to different sizes depending on the chosen precision); then, per light, draw the actual volumetric light (a sphere mesh for point lights, a cone for spot lights, ray-marched in the shader); run a depth-weighted neighborhood blur over that volumetric RT; upsample it back to full resolution guided by the same depth texture; and finally composite the result like a translucent layer, so the final color is source.xyz * _volumeLightTexture.a + _volumeLightTexture.xyz. That yields the scene with volumetric lighting.
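As a conceptual aid, here is the core ray-march loop re-expressed as plain C#. This is my paraphrase of the idea, not the plugin's shader; the phase and extinction terms are simplified placeholders:

```csharp
using UnityEngine;

public static class RayMarchSketch
{
    // Accumulate in-scattered light along the segment from the camera to the far side of the light volume.
    public static float March(Vector3 rayDir, float rayLength, Vector3 lightDir,
                              int sampleCount, float scattering, float extinction)
    {
        float stepSize = rayLength / sampleCount;
        float transmittance = 1f; // how much light still survives back to the eye
        float result = 0f;

        for (int i = 0; i < sampleCount; i++)
        {
            // Simplified anisotropic phase term: scattering peaks when looking toward the light.
            float phase = Mathf.Max(0f, Vector3.Dot(rayDir, -lightDir));

            transmittance *= Mathf.Exp(-extinction * stepSize);       // Beer-Lambert falloff per step
            result += phase * scattering * transmittance * stepSize;  // in-scattering at this sample
        }
        return result;
    }
}
```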
Possible optimizations:
For Sun Shaft, LightingBox uses its own bundled sun shaft implementation.
LightingBox's bundled Sun Shaft script
SunShaft can run with or without the depth texture. The difference: with a depth texture, detecting where the skybox shows through is more accurate. But if you are not using a regular skybox and built your own sky dome instead, neither method is accurate.
OK, moving on. The entry point is again in LB_LightingBox.cs, where the parameters are set up, chiefly SunShaftsResolution, shaftDistance, shaftBlur, and shaftColor; then `helper.Update_SunShaft (mainCamera, SunShaft_Enabled, shaftQuality, shaftDistance, shaftBlur, shaftColor, sunLight.transform);` makes the sun shaft take effect for sunLight. Update_SunShaft in LB_LightingBoxHelper.cs again does very little: it attaches a SunShafts component to the camera, creates a child GameObject named Shaft Caster under the sun GameObject, sets its localPosition, and assigns it to the sunTransform property of the camera's SunShafts component.
Now let's read SunShafts.cs.
This class derives from PostEffectsBase, so its Start function calls CheckResources.
```csharp
public override bool CheckResources()
{
    CheckSupport(useDepthTexture); // SunShafts may also need a depth pass
    sunShaftsMaterial = CheckShaderAndCreateMaterial(sunShaftsShader, sunShaftsMaterial);
    simpleClearMaterial = CheckShaderAndCreateMaterial(simpleClearShader, simpleClearMaterial);
    if (!isSupported)
        ReportAutoDisable();
    return isSupported;
}
```
```csharp
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    if (CheckResources() == false)
    {
        Graphics.Blit(source, destination);
        return;
    }

    // we actually need to check this every frame
    if (useDepthTexture)
        GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth; // redundant with CheckResources above, isn't it?

    int divider = 4;
    if (resolution == SunShaftsResolution.Normal)
        divider = 2;
    else if (resolution == SunShaftsResolution.High)
        divider = 1;

    Vector3 v = Vector3.one * 0.5f;
    if (sunTransform)
        v = GetComponent<Camera>().WorldToViewportPoint(sunTransform.position);
    else
        v = new Vector3(0.5f, 0.5f, 0.0f);

    int rtW = source.width / divider;
    int rtH = source.height / divider;

    RenderTexture lrColorB;
    RenderTexture lrDepthBuffer = RenderTexture.GetTemporary(rtW, rtH, 0);

    // mask out everything except the skybox
    // we have 2 methods, one of which requires depth buffer support, the other one is just comparing images
    sunShaftsMaterial.SetVector("_BlurRadius4", new Vector4(1.0f, 1.0f, 0.0f, 0.0f) * sunShaftBlurRadius); // never used; reassigned below
    sunShaftsMaterial.SetVector("_SunPosition", new Vector4(v.x, v.y, v.z, maxRadius));
    sunShaftsMaterial.SetVector("_SunThreshold", sunThreshold);

    if (!useDepthTexture)
    {
        var format = GetComponent<Camera>().allowHDR ? RenderTextureFormat.DefaultHDR : RenderTextureFormat.Default;
        RenderTexture tmpBuffer = RenderTexture.GetTemporary(source.width, source.height, 0, format);
        RenderTexture.active = tmpBuffer;
        GL.ClearWithSkybox(false, GetComponent<Camera>()); // paint the skybox colors into tmpBuffer

        sunShaftsMaterial.SetTexture("_Skybox", tmpBuffer);
        Graphics.Blit(source, lrDepthBuffer, sunShaftsMaterial, 3);
        // Pass 3 of sunShaftsMaterial: compare the skybox color with the current source color (similar colors mean
        // the skybox is unoccluded there). Where the skybox shows through, output (skybox color - _SunThreshold) with
        // the rgb channels summed, scaled by a factor that falls off with distance from the sun-shaft position.
        // Everywhere the skybox is hidden, output 0.
        RenderTexture.ReleaseTemporary(tmpBuffer);
    }
    else
    {
        Graphics.Blit(source, lrDepthBuffer, sunShaftsMaterial, 2);
        // Pass 2 of sunShaftsMaterial: same idea, but unoccluded skybox is detected via the depth texture instead.
    }

    // paint a small black small border to get rid of clamping problems
    DrawBorder(lrDepthBuffer, simpleClearMaterial); // clears the border texels of lrDepthBuffer to (0,0,0,0)

    // radial blur:
    radialBlurIterations = Mathf.Clamp(radialBlurIterations, 1, 4);

    float ofs = sunShaftBlurRadius * (1.0f / 768.0f);

    sunShaftsMaterial.SetVector("_BlurRadius4", new Vector4(ofs, ofs, 0.0f, 0.0f));
    sunShaftsMaterial.SetVector("_SunPosition", new Vector4(v.x, v.y, v.z, maxRadius)); // redundant reassignment

    for (int it2 = 0; it2 < radialBlurIterations; it2++)
    {
        // each iteration takes 2 * 6 samples
        // we update _BlurRadius each time to cheaply get a very smooth look
        lrColorB = RenderTexture.GetTemporary(rtW, rtH, 0);
        Graphics.Blit(lrDepthBuffer, lrColorB, sunShaftsMaterial, 1);
        // Pass 1 of sunShaftsMaterial: for each texel of lrDepthBuffer, march toward _SunPosition with step size
        // derived from sunShaftBlurRadius, taking SAMPLES_INT samples (this could be optimized the way bloom is).
        // Where an object covers the skybox, lrDepthBuffer holds 0, so this blur also darkens the region pointing
        // away from the sun behind it, which is exactly what produces the occluded-ray look.
        RenderTexture.ReleaseTemporary(lrDepthBuffer);
        ofs = sunShaftBlurRadius * (((it2 * 2.0f + 1.0f) * 6.0f)) / 768.0f;
        sunShaftsMaterial.SetVector("_BlurRadius4", new Vector4(ofs, ofs, 0.0f, 0.0f));

        lrDepthBuffer = RenderTexture.GetTemporary(rtW, rtH, 0);
        Graphics.Blit(lrColorB, lrDepthBuffer, sunShaftsMaterial, 1); // blur again with a larger step
        RenderTexture.ReleaseTemporary(lrColorB);
        ofs = sunShaftBlurRadius * (((it2 * 2.0f + 2.0f) * 6.0f)) / 768.0f;
        sunShaftsMaterial.SetVector("_BlurRadius4", new Vector4(ofs, ofs, 0.0f, 0.0f));
    }

    // put together:
    if (v.z >= 0.0f)
        sunShaftsMaterial.SetVector("_SunColor", new Vector4(sunColor.r, sunColor.g, sunColor.b, sunColor.a) * sunShaftIntensity);
    else
        sunShaftsMaterial.SetVector("_SunColor", Vector4.zero); // no backprojection !

    sunShaftsMaterial.SetTexture("_ColorBuffer", lrDepthBuffer);
    Graphics.Blit(source, destination, sunShaftsMaterial, (screenBlendMode == ShaftsScreenBlendMode.Screen) ? 0 : 4);
    // Pass 0 of sunShaftsMaterial: lrDepthBuffer * _SunColor + source (note _SunColor already carries
    // sunShaftIntensity from the assignment above).
    // Pass 4 of sunShaftsMaterial: lrDepthBuffer * _SunColor * (1 - source) + source.

    RenderTexture.ReleaseTemporary(lrDepthBuffer);
}
```
Overall, the sun shaft effect is quite simple: wherever the skybox shows through, within the circular max radius around the sun-shaft position, write a factor that fades from the inside outward; then radially blur (bloom-style) that factor RT toward the sun-shaft position; finally multiply the blurred factor by the sun-shaft color and blend it into the final image.
Possible optimizations:
LightingBox's Global Fog comes in three flavors: Global, Height, and Distance. Under the hood it borrows three implementations: LightingBox's own bundled GlobalFog, Unity's built-in fog, and the fog in PPS.
LightingBox's bundled GlobalFog script
GlobalFog requires the depth texture.
OK, moving on again. The entry point is still LB_LightingBox.cs, where the parameters are set up, chiefly fDistance, fHeight, fheightDensity, fColor, and fogIntensity; then `helper.Update_GlobalFog (mainCamera, Fog_Enabled, vFog, fDistance, fHeight, fheightDensity, fColor, fogIntensity);` makes the fog take effect. Update_GlobalFog in LB_LightingBoxHelper.cs applies some settings derived from the input parameters and also configures Unity's built-in fog (RenderSettings.fog / fogColor / fogMode / fogDensity).
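For reference, a minimal sketch of driving Unity's built-in fog directly through that same RenderSettings API (the values are placeholders):

```csharp
using UnityEngine;

public static class BuiltinFogSketch
{
    public static void Enable()
    {
        RenderSettings.fog = true;
        RenderSettings.fogColor = new Color(0.6f, 0.7f, 0.8f);
        RenderSettings.fogMode = FogMode.ExponentialSquared; // Linear / Exponential / ExponentialSquared
        RenderSettings.fogDensity = 0.02f;                   // used by the two exponential modes

        // For FogMode.Linear you would set the range instead:
        // RenderSettings.fogStartDistance = 10f;
        // RenderSettings.fogEndDistance = 200f;
    }
}
```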
Now let's read GlobalFog.cs.
```csharp
// This class derives from PostEffectsBase, so its Start function calls CheckResources.
public override bool CheckResources()
{
    CheckSupport(true);
    // CheckSupport: this effect needs the depth texture, so if depth textures are unsupported, that's the end of it.
    // If supported, the camera's depthTextureMode is switched to DepthTextureMode.Depth, meaning an extra depth pass
    // is rendered first. That used to be prohibitively expensive on older phones, but now that mobile games are
    // chasing console-level visuals, an extra depth pass is no longer surprising.
    fogMaterial = CheckShaderAndCreateMaterial(fogShader, fogMaterial);
    // Verifies the shader and material this script relies on. The shader is "Hidden/GlobalFog", assigned by the
    // LB_LightingBox script; the material starts out null, so one is created from the shader here. Note that the
    // created material has hideFlags = HideFlags.DontSave, i.e. it survives scene changes and can only be destroyed
    // with DestroyImmediate.
    if (!isSupported)
        ReportAutoDisable();
    return isSupported;
}
```
OnRenderImage is called once rendering finishes and post-processing begins. Here it is additionally marked with the [ImageEffectOpaque] attribute, meaning it runs right after all opaque geometry has been drawn: the post-process applies to opaques only and leaves transparents alone. (These days you can hook post-processing at many more points via CommandBuffer, or reorder drawing outright via SRP, so this approach will probably fade out over time.)
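A minimal sketch of an opaque-only image effect using that attribute, assuming a full-screen material you supply yourself:

```csharp
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class OpaqueEffectSketch : MonoBehaviour
{
    public Material effectMaterial; // any full-screen post-process material

    // [ImageEffectOpaque] makes Unity call this after opaques and before transparents.
    [ImageEffectOpaque]
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (effectMaterial == null) { Graphics.Blit(source, destination); return; }
        Graphics.Blit(source, destination, effectMaterial);
    }
}
```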
```csharp
[ImageEffectOpaque] //DIMA FOR WATER
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    if (CheckResources() == false || (!distanceFog && !heightFog))
    {
        Graphics.Blit(source, destination);
        // If CheckResources found no depth-texture support or an unsupported "Hidden/GlobalFog" shader, or the
        // selected Global Fog flavor is neither Height nor Distance (the Global flavor goes through PPS fog and
        // Unity's built-in fog instead of this script), skip everything and present the frame as-is.
        return;
    }

    Camera cam = GetComponent<Camera>();
    Transform camtr = cam.transform;
    float camNear = cam.nearClipPlane;
    float camFar = cam.farClipPlane;
    float camFov = cam.fieldOfView;
    float camAspect = cam.aspect;

    Matrix4x4 frustumCorners = Matrix4x4.identity;

    float fovWHalf = camFov * 0.5f;

    Vector3 toRight = camtr.right * camNear * Mathf.Tan(fovWHalf * Mathf.Deg2Rad) * camAspect;
    Vector3 toTop = camtr.up * camNear * Mathf.Tan(fovWHalf * Mathf.Deg2Rad);

    Vector3 topLeft = (camtr.forward * camNear - toRight + toTop);
    float camScale = topLeft.magnitude * camFar / camNear;

    topLeft.Normalize();
    topLeft *= camScale;

    Vector3 topRight = (camtr.forward * camNear + toRight + toTop);
    topRight.Normalize();
    topRight *= camScale;

    Vector3 bottomRight = (camtr.forward * camNear + toRight - toTop);
    bottomRight.Normalize();
    bottomRight *= camScale;

    Vector3 bottomLeft = (camtr.forward * camNear - toRight - toTop);
    bottomLeft.Normalize();
    bottomLeft *= camScale;

    frustumCorners.SetRow(0, topLeft);
    frustumCorners.SetRow(1, topRight);
    frustumCorners.SetRow(2, bottomRight);
    frustumCorners.SetRow(3, bottomLeft);

    var camPos = camtr.position;
    float FdotC = camPos.y - height;
    float paramK = (FdotC <= 0.0f ? 1.0f : 0.0f);
    float excludeDepth = (excludeFarPixels ? 1.0f : 2.0f);
    fogMaterial.SetMatrix("_FrustumCornersWS", frustumCorners);
    fogMaterial.SetVector("_CameraWS", camPos);
    fogMaterial.SetVector("_HeightParams", new Vector4(height, FdotC, paramK, heightDensity * 0.5f));
    fogMaterial.SetVector("_DistanceParams", new Vector4(-Mathf.Max(startDistance, 0.0f), excludeDepth, 0, 0));

    var sceneMode = RenderSettings.fogMode;
    var sceneDensity = RenderSettings.fogDensity;
    var sceneStart = RenderSettings.fogStartDistance;
    var sceneEnd = RenderSettings.fogEndDistance;
    Vector4 sceneParams;
    bool linear = (sceneMode == FogMode.Linear);
    float diff = linear ? sceneEnd - sceneStart : 0.0f;
    float invDiff = Mathf.Abs(diff) > 0.0001f ? 1.0f / diff : 0.0f;
    sceneParams.x = sceneDensity * 1.2011224087f; // density / sqrt(ln(2)), used by Exp2 fog mode
    sceneParams.y = sceneDensity * 1.4426950408f; // density / ln(2), used by Exp fog mode
    sceneParams.z = linear ? -invDiff : 0.0f;
    sceneParams.w = linear ? sceneEnd * invDiff : 0.0f;
    fogMaterial.SetVector("_SceneFogParams", sceneParams);
    fogMaterial.SetVector("_SceneFogMode", new Vector4((int)sceneMode, useRadialDistance ? 1 : 0, 0, 0));
    // Passes the camera frustum's four corner vectors, the camera position, the fogIntensity/startDistance set in
    // LB_LightingBox, and the height-fog parameters (height, heightDensity, ...) to the material, then draws via
    // CustomGraphicsBlit: pass 1 for Distance, pass 2 for Height, pass 0 for Distance + Height.
    // (I expected Graphics.Blit here, but it uses OpenGL-ES-1.1-style Begin/End instead. Looking closer, the Z value
    // of each submitted vertex is a trick: it tells the shader which corner (BL/BR/TL/TR) the vertex is, so the
    // shader can pick the matching frustum ray without per-pixel branching, which would hurt performance. Neat.)

    int pass = 0;
    if (distanceFog && heightFog)
        pass = 0; // distance + height
    else if (distanceFog)
        pass = 1; // distance only
    else
        pass = 2; // height only
    CustomGraphicsBlit(source, destination, fogMaterial, pass);
    // A closer look at the "Hidden/GlobalFog" shader: all three passes share the same vertex shader. It uses the
    // vertex Z as an index to pick the matching interpolatedRay (one of the four corner vectors), resets Z to 0.1,
    // computes the clip-space position, and passes it to the pixel shader together with the UV. The pixel shader
    // samples the depth texture at that UV, converts it to linear depth via Linear01Depth, multiplies by
    // interpolatedRay and adds the camera's world position to reconstruct the pixel's world position. Distance fog
    // is then simple: take the length and subtract the camera near plane. Height fog instead combines the camera
    // height, height, heightDensity, and the pixel's world position. Finally the fog factor fogFac is computed
    // according to the fog mode (linear/exp/exp2), and the fog color is lerped with the scene color (the skybox keeps
    // the scene color), giving the fogged result.
}
```
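A quick check of those two magic constants: shaders evaluate $2^x$ (`exp2`) cheaply, so the density is pre-scaled on the CPU to turn the $e$-based fog formulas into `exp2` form. With $d$ the fog distance and $\rho$ the density:

$$f_{\mathrm{exp}} = e^{-\rho d} = 2^{-(\rho/\ln 2)\,d}, \qquad \tfrac{1}{\ln 2} \approx 1.4426950408$$

$$f_{\mathrm{exp2}} = e^{-(\rho d)^2} = 2^{-\left((\rho/\sqrt{\ln 2})\,d\right)^2}, \qquad \tfrac{1}{\sqrt{\ln 2}} \approx 1.2011224087$$

Likewise, the linear mode's z/w terms fold $f = (\mathrm{end} - d)/(\mathrm{end} - \mathrm{start})$ into a single multiply-add $f = z\,d + w$ with $z = -1/(\mathrm{end}-\mathrm{start})$ and $w = \mathrm{end}/(\mathrm{end}-\mathrm{start})$.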
Unity's built-in fog
The fog in PPS
For Depth Of Field, LightingBox uses its own bundled DepthOfField, even though PPS also provides one.
LightingBox's bundled Depth Of Field script
The code originates from: https://github.com/Brackeys/Efaround/blob/master/Efaround%20Unity%20Project/Assets/Standard%20Assets/Effects/CinematicEffects(BETA)/DepthOfField/DepthOfField.cs
For Bloom, LightingBox simply uses the Bloom provided by PPS.
For Color Grading, LightingBox uses the ColorGrading provided by PPS, together with PPS's AutoExposure (automatic exposure).
LightingBox's Foliage feature swaps the SpeedTree shaders for LightingBox's own, then adjusts wind speed, color, and so on. Since our mobile project doesn't use SpeedTree, I'll skip it for now.
LightingBox's Snow feature swaps the Standard shader for LightingBox's own Snow Standard shader, then adjusts the snow's albedo, normal, and intensity. Our mobile game does have snow, but we use our own implementation, so I'll skip this too.
For AA, LightingBox uses the AA provided by PPS.
For AO, LightingBox uses the AO provided by PPS.
For Vignette, LightingBox uses the Vignette provided by PPS.
For Motion Blur, LightingBox uses the Motion Blur provided by PPS.
For Chromatic Aberration, LightingBox uses the Chromatic Aberration provided by PPS.
For Screen Space Reflections, LightingBox uses the Screen Space Reflections provided by PPS.
For Stochastic Screen Space Reflections, LightingBox uses its own bundled implementation.
LightingBox's bundled Stochastic Screen Space Reflections script
The code originates from: https://github.com/cCharkes/StochasticScreenSpaceReflection
OK, the last bundled effect in LightingBox; by default it only supports the Deferred rendering path. The entry point is once more in LB_LightingBox.cs, where the parameters are set up, chiefly ResolutionMode, rayDistance, screenFadeSize, smoothnessRange, and DebugMode; then `helper.Update_StochasticSSR(mainCamera, ST_SSR_Enabled,resolutionMode,debugPass,rayDistance,screenFadeSize,smoothnessRange);` makes StochasticScreenSpaceReflection take effect. Update_StochasticSSR in LB_LightingBoxHelper.cs again does very little: it just attaches a StochasticSSR component to the camera.
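Since the effect reads the GBuffer and is therefore deferred-only, a guard like the following is a sensible sketch of the gating. The check, warning text, and structure are my own; only the StochasticSSR component name comes from the plugin:

```csharp
using UnityEngine;

public static class StochasticSsrSetupSketch
{
    public static void Enable(Camera cam)
    {
        // GBuffer textures (_CameraGBufferTexture0/1/2) only exist on the deferred path.
        if (cam.actualRenderingPath != RenderingPath.Deferred)
        {
            Debug.LogWarning("Stochastic SSR needs the Deferred rendering path (it reads the GBuffer).");
            return;
        }
        if (cam.GetComponent<StochasticSSR>() == null)
            cam.gameObject.AddComponent<StochasticSSR>();
    }
}
```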
Now let's read StochasticSSR.cs.
```csharp
private void Awake()
{
    noise = Resources.Load("tex_BlueNoise_1024x1024_UNI") as Texture2D;
    m_camera = GetComponent<Camera>();

    if (Application.isPlaying)
        m_camera.depthTextureMode |= DepthTextureMode.Depth | DepthTextureMode.MotionVectors; // needs a depth pass and a motion-vector pass
    else
        m_camera.depthTextureMode = DepthTextureMode.Depth;
}
```
```csharp
private void OnPreCull()
{
    jitterSample = GenerateRandomOffset();
}
```
```csharp
[ImageEffectOpaque] // a post effect applied to opaques only
private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    int width = m_camera.pixelWidth;
    int height = m_camera.pixelHeight;

    int rayWidth = width / (int)rayMode;
    int rayHeight = height / (int)rayMode;

    int resolveWidth = width / (int)resolveMode;
    int resolveHeight = height / (int)resolveMode;

    rendererMaterial.SetVector("_JitterSizeAndOffset",
        new Vector4((float)rayWidth / (float)noise.width,
                    (float)rayHeight / (float)noise.height,
                    jitterSample.x,
                    jitterSample.y));

    rendererMaterial.SetVector("_ScreenSize", new Vector2((float)width, (float)height));
    rendererMaterial.SetVector("_RayCastSize", new Vector2((float)rayWidth, (float)rayHeight));
    rendererMaterial.SetVector("_ResolveSize", new Vector2((float)resolveWidth, (float)resolveHeight));

    UpdatePrevMatrices(source, destination);
    UpdateRenderTargets(width, height);
    // Creates six RTs at the actual size: temporalBuffer, mainBuffer0, mainBuffer1, mipMapBuffer0, mipMapBuffer1, mipMapBuffer2.
    UpdateVariable();

    project = new Vector4(
        Mathf.Abs(m_camera.projectionMatrix.m00 * 0.5f),
        Mathf.Abs(m_camera.projectionMatrix.m11 * 0.5f),
        ((m_camera.farClipPlane * m_camera.nearClipPlane) / (m_camera.nearClipPlane - m_camera.farClipPlane)) * 0.5f,
        0.0f);
    rendererMaterial.SetVector("_Project", project);

    RenderTexture rayCast = CreateTempBuffer(rayWidth, rayHeight, 0, RenderTextureFormat.ARGBHalf);
    RenderTexture rayCastMask = CreateTempBuffer(rayWidth, rayHeight, 0, RenderTextureFormat.RHalf);
    RenderTexture depthBuffer = CreateTempBuffer(width / (int)depthMode, height / (int)depthMode, 0, RenderTextureFormat.RFloat);
    rayCast.filterMode = rayFilterMode;
    depthBuffer.filterMode = FilterMode.Point;

    rendererMaterial.SetTexture("_RayCast", rayCast);
    rendererMaterial.SetTexture("_RayCastMask", rayCastMask);
    rendererMaterial.SetTexture("_CameraDepthBuffer", depthBuffer);

    // Depth Buffer
    Graphics.SetRenderTarget(depthBuffer);
    rendererMaterial.SetPass(4); // Pass 4 of the Stochastic SSR shader: writes the depth texture _CameraDepthTexture into an RT
    DrawFullScreenQuad();        // draws a quad so the material above copies _CameraDepthTexture into depthBuffer

    switch (debugPass)
    {
        case SSRDebugPass.Reflection:
        case SSRDebugPass.Cubemap:
        case SSRDebugPass.CombineNoCubemap:
        case SSRDebugPass.RayCast:
        case SSRDebugPass.ReflectionAndCubemap:
        case SSRDebugPass.SSRMask:
        case SSRDebugPass.Jitter:
            Graphics.Blit(source, mainBuffer0, rendererMaterial, 1);
            // Pass 1: per texel, source - _CameraReflectionsTexture (a texture I hadn't seen before; it appears to
            // hold the deferred reflections from reflection probes or the skybox).
            break;
        case SSRDebugPass.Combine:
            if (Application.isPlaying)
                Graphics.Blit(mainBuffer1, mainBuffer0, rendererMaterial, 8);
                // Pass 8: use the motion vectors to compute last frame's UV from the current UV, then fetch that
                // position's value from mainBuffer1 (the previous frame).
            else
                Graphics.Blit(source, mainBuffer0, rendererMaterial, 1);
            break;
    }

    // Raycast pass
    renderBuffer[0] = rayCast.colorBuffer;
    renderBuffer[1] = rayCastMask.colorBuffer;
    Graphics.SetRenderTarget(renderBuffer, rayCast.depthBuffer);
    // Binds the color RTs and depth RT to write into; this needs MRT support, i.e. at least ES 3.0.

    //Graphics.Blit(null, rendererMaterial, 3);
    rendererMaterial.SetPass(3);
    // Pass 3 is the complex one. Roughly, per pixel: build the view direction from the camera to the pixel; perturb
    // the normal via ImportanceSampleGGX (driven by the roughness and a random value) and use it to derive a ray
    // direction; then ray-march against _CameraDepthTexture to find the first point along the ray whose depth is
    // smaller than the pixel's. The hit's screen-space coordinates go into the rgb of the first MRT target (the
    // alpha stores the PDF, the probability density of that sample); the second target stores a mask: 1 if a hit was
    // found along the ray, 0 otherwise. The color at the hit point becomes this pixel's reflection color.
    // Along the way it reads the GBuffer: _CameraGBufferTexture2 for the normal
    //   (// RT2: normal (rgb), --unused, very low precision-- (a)
    //    outGBuffer2 = half4(data.normalWorld * 0.5f + 0.5f, 1.0f);)
    // and _CameraGBufferTexture1 for the specular color and smoothness
    //   (// RT1: spec color (rgb), smoothness (a) - sRGB rendertarget
    //    outGBuffer1 = half4(data.specularColor, data.smoothness);)
    DrawFullScreenQuad(); // draws a quad; the results land in rayCast.colorBuffer and rayCastMask.colorBuffer

    ReleaseTempBuffer(depthBuffer);

    RenderTexture resolvePass = CreateTempBuffer(resolveWidth, resolveHeight, 0, RenderTextureFormat.DefaultHDR);

    if (useMipMap)
    {
        dirX[0] = new Vector2(width, 0.0f);
        dirX[1] = new Vector2(dirX[0].x / 4.0f, 0.0f);
        dirX[2] = new Vector2(dirX[1].x / 2.0f, 0.0f);
        dirX[3] = new Vector2(dirX[2].x / 2.0f, 0.0f);
        dirX[4] = new Vector2(dirX[3].x / 2.0f, 0.0f);

        dirY[0] = new Vector2(0.0f, height);
        dirY[1] = new Vector2(0.0f, dirY[0].y / 4.0f);
        dirY[2] = new Vector2(0.0f, dirY[1].y / 2.0f);
        dirY[3] = new Vector2(0.0f, dirY[2].y / 2.0f);
        dirY[4] = new Vector2(0.0f, dirY[3].y / 2.0f);

        rendererMaterial.SetInt("_MaxMipMap", maxMipMap);
        Graphics.Blit(mainBuffer0, mipMapBuffer0); // Copy the source frame buffer to the mip map buffer

        for (int i = 0; i < maxMipMap; i++)
        {
            rendererMaterial.SetVector("_GaussianDir", new Vector2(1.0f / dirX[i].x, 0.0f));
            rendererMaterial.SetInt("_MipMapCount", mipLevel[i]);
            Graphics.Blit(mipMapBuffer0, mipMapBuffer1, rendererMaterial, 6);
            // Pass 6: sample NumSamples texels horizontally from mip level _MipMapCount of mipMapBuffer0 and take
            // the weighted sum; if _Fireflies is on, add an extra blur on top.

            rendererMaterial.SetVector("_GaussianDir", new Vector2(0.0f, 1.0f / dirY[i].y));
            rendererMaterial.SetInt("_MipMapCount", mipLevel[i]);
            Graphics.Blit(mipMapBuffer1, mipMapBuffer0, rendererMaterial, 6); // same, but blending vertically

            Graphics.SetRenderTarget(mipMapBuffer2, i);
            DrawFullScreenQuad(); // ??
        }

        Graphics.Blit(mipMapBuffer2, resolvePass, rendererMaterial, 0); // Resolve pass using mip map buffer
        // Pass 0 is also complex. Roughly: based on the angle between the view direction and the normal and on the
        // roughness (a grazing view, a rough surface, or a hit point hitPacked far from the current pixel), pick a
        // blurrier mip as the reflection source. Sample the rayCast texture to get hitPacked's position, turn it
        // into a UV, and sample _MainTex there for the RGB; the alpha becomes borderDist/_EdgeFactor as hitPacked
        // nears the screen edge. If _Fireflies is on, blur; if _UseNormalization is on, compute a BRDF weight from
        // viewPos, hitViewPos, viewNormal and roughness and take a weighted sum. The result is the SSR color.
        //ReleaseTempBuffer(mainBuffer0);
    }
    else
    {
        Graphics.Blit(mainBuffer0, resolvePass, rendererMaterial, 0); // Resolve pass without mip map buffer
        //ReleaseTempBuffer(mainBuffer0);
    }

    rendererMaterial.SetTexture("_ReflectionBuffer", resolvePass);

    ReleaseTempBuffer(rayCast);
    ReleaseTempBuffer(rayCastMask);

    if (useTemporal && Application.isPlaying)
    {
        rendererMaterial.SetFloat("_TScale", scale);
        rendererMaterial.SetFloat("_TResponse", response);
        rendererMaterial.SetFloat("_TMinResponse", minResponse);
        rendererMaterial.SetFloat("_TMaxResponse", maxResponse);

        RenderTexture temporalBuffer0 = CreateTempBuffer(width, height, 0, RenderTextureFormat.DefaultHDR);

        rendererMaterial.SetTexture("_PreviousBuffer", temporalBuffer);

        Graphics.Blit(resolvePass, temporalBuffer0, rendererMaterial, 5); // Temporal pass
        // Pass 5: take the min, max, and current value of the 3x3 neighborhood around the pixel, fetch last frame's
        // color from _PreviousBuffer at the UV reconstructed via _CameraMotionVectorsTexture, derive a blend factor
        // from the motion-vector magnitude, and lerp to get the final color, written into temporalBuffer0.

        rendererMaterial.SetTexture("_ReflectionBuffer", temporalBuffer0);

        Graphics.Blit(temporalBuffer0, temporalBuffer);

        ReleaseTempBuffer(temporalBuffer0); // this looks wrong: _ReflectionBuffer is still used further down
    }

    switch (debugPass)
    {
        case SSRDebugPass.Reflection:
        case SSRDebugPass.Cubemap:
        case SSRDebugPass.CombineNoCubemap:
        case SSRDebugPass.RayCast:
        case SSRDebugPass.ReflectionAndCubemap:
        case SSRDebugPass.SSRMask:
        case SSRDebugPass.Jitter:
            Graphics.Blit(source, destination, rendererMaterial, 2);
            break;
        case SSRDebugPass.Combine:
            if (Application.isPlaying)
            {
                Graphics.Blit(source, mainBuffer1, rendererMaterial, 2);
                // Pass 2: output sceneColor.rgb = sceneColor.rgb - cubemap.rgb; read the diffuse and occlusion from
                // _CameraGBufferTexture0
                //   (// RT0: diffuse color (rgb), occlusion (a) - sRGB rendertarget
                //    outGBuffer0 = half4(data.diffuseColor, data.occlusion);),
                // use the temporal buffer as the indirect specular, run UNITY_BRDF_PBS to get reflection.rgb, then
                // blend sceneColor.rgb, the cubemap, and the reflection together with the temporal buffer's alpha as
                // the factor to produce the final color.
                Graphics.Blit(mainBuffer1, destination);
            }
            else
                Graphics.Blit(source, destination, rendererMaterial, 2);
            break;
    }

    ReleaseTempBuffer(resolvePass);

    prevViewProjectionMatrix = viewProjectionMatrix;
}
```
Overall: for each visible surface point, compute a ray direction from the view direction and a randomly perturbed normal, find the point that ray comes from, and blend that point's color into the current pixel. Concretely: start from last frame's result mainBuffer1; using the UV and the motion vector, fetch the color at the current pixel's previous-frame position and store it into mainBuffer0; blur and mip that into mipMapBuffer2. Then build an RT storing, per pixel, the coordinates of the ray-traced source point. Sampling mipMapBuffer2 with those per-pixel source coordinates produces a pure SSR reflection image, resolvePass. If temporal filtering is enabled, combine resolvePass's 3x3 neighborhood with the reprojected previous-frame sample, TAA-style. Finally blend that pure SSR reflection image with the original image for the final output.
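That temporal pass is essentially the standard neighborhood-clamp TAA trick. A hedged C# restatement of the per-pixel logic (the shader logic paraphrased on the CPU; names and structure are mine):

```csharp
using UnityEngine;

public static class TemporalClampSketch
{
    // current: this frame's SSR color; neighborhood: the 3x3 samples around it;
    // history: last frame's color fetched via the motion vector; response: blend factor.
    public static Color Resolve(Color current, Color[] neighborhood, Color history, float response)
    {
        Color lo = current, hi = current;
        foreach (Color n in neighborhood)
        {
            lo = new Color(Mathf.Min(lo.r, n.r), Mathf.Min(lo.g, n.g), Mathf.Min(lo.b, n.b));
            hi = new Color(Mathf.Max(hi.r, n.r), Mathf.Max(hi.g, n.g), Mathf.Max(hi.b, n.b));
        }

        // Clamp the history into the neighborhood's range to reject stale, ghosting samples...
        Color clamped = new Color(
            Mathf.Clamp(history.r, lo.r, hi.r),
            Mathf.Clamp(history.g, lo.g, hi.g),
            Mathf.Clamp(history.b, lo.b, hi.b));

        // ...then blend toward the current frame; larger motion pushes response toward 1.
        return Color.Lerp(clamped, current, response);
    }
}
```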
Possible optimizations:
This is an original technical article that took real effort to write. If you repost it, please credit the source: 电子设备中的画家 | 王烁 (Wang Shuo), published September 6, 2018; original link: http://geekfaner.com/unity/blog14_LightingBox.html