{"id":74,"date":"2018-01-11T21:27:26","date_gmt":"2018-01-11T13:27:26","guid":{"rendered":"http:\/\/blog.coolcoding.cn\/blog\/?p=74"},"modified":"2020-09-27T19:58:33","modified_gmt":"2020-09-27T11:58:33","slug":"dynamic-weather-effects","status":"publish","type":"post","link":"https:\/\/blog.coolcoding.cn\/?p=74","title":{"rendered":"\u52a8\u6001\u5929\u6c14\u6548\u679c\uff08Dynamic Weather Effects\uff09"},"content":{"rendered":"<h2><b>Introduction<\/b><\/h2>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Well-implemented weather effects such as rain, fog, and lightning greatly add to the realism of an outdoor scene. Rain and other types of precipitation are the most challenging to render convincingly in real time. This article presents an innovative method of rendering rain, snow, sleet<\/span><span style=\"font-weight: 400;\">, and hail that smoothly blends between any type and density of precipitation and is highly art-directable<\/span><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Prior approaches to rendering precipitation fall into one of two classes: particles or image space.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Using one particle to model each drop of precipitation is still the most common solution. These particle systems, however, have a high computational cost directly proportional to the number of particles in use. While particle rendering consumes GPU cycles, particle simulation either runs on the CPU or consumes additional GPU cycles [Tariq07].<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Image space techniques use layers of scrolling textures to model precipitation [Wang05] [Tatarchuk07]. These techniques only consume GPU cycles for simulation and rendering. 
They are, however, inherently 2D and thus suffer from artifacts when moving the camera, especially at high speeds, making them unsuitable for many games.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Neither of the above methods suits our needs for <\/span><i><span style=\"font-weight: 400;\">Project Gotham Racing 4<\/span><\/i><span style=\"font-weight: 400;\">: Particle systems are too costly for the desired number of particles, and an image space approach creates too many visual <\/span><b>artifacts<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p>&nbsp;<\/p>\n<h1><b>Particle Simulation and Rendering<\/b><\/h1>\n<p><span style=\"font-weight: 400;\">Our solution simulates and renders only a small subset of all particles in the world. In particular, we choose a world space axis-aligned cube near the origin that is 30m on a side and randomly populate it with a fixed number of particles, that is, 1000. We only simulate these 1000 particles. Particles that fall or blow out of this cube immediately wrap around to reenter the cube from the opposite side. Thus, this cube always contains the same number of particles throughout the simulation.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Because the particles in the cube always wrap in all directions, it is possible to tile the cube in any direction without visible boundaries, akin to the wrap texture addressing mode. In fact, arbitrarily placing any world space axis-aligned cube of the same dimensions into the world encloses the identical set of particles as contained in the original cube near the origin.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Our particle simulation simply offsets each particle by gravity and wind. The CPU computes both offsets every frame and passes the result as simple offset vectors to the vertex shader. 
The vertex shader thus computes a particle\u2019s current position every time it renders; that is, we only ever store the initial particle positions in a static vertex buffer. The fmod function ensures that each particle wraps back into the original cube:<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre><span style=\"font-weight: 400;\">float3 offsets = gravityOffset + windOffset;<\/span>\n\n<span style=\"font-weight: 400;\">float3 position = fmod(initialPosition + offsets, BoxSize);<\/span><\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Figure 5.1.2 A world space axis-aligned box at the origin contains a fixed number of particles and tiles over the entire world. Any world space axis-aligned box of the same dimensions contains the identical set of particles. We place one such box in front of the camera.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">When the CPU computes the current gravityOffset, it simply adds a delta value to the previous gravityOffset result. The delta value is user-controlled; larger deltas simulate heavier particles. This delta value may be time-varying to emulate changing weights of particles. Computing the windOffset is analogous. In particular, we use a Perlin noise function for the wind direction that gives the appearance of ever-changing wind currents. To avoid eventual overflows in these offset vectors, the CPU also needs to apply the fmod function. The gravityOffset and windOffset vectors are the same for all particles.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">To efficiently render this world of particles, we choose a cube directly in front of the camera and render the particles within it. 
In other words, all particles are additionally offset by the camera position and a forward offset that is the view direction scaled by the distance of the cube to the camera:<\/span><\/p>\n<pre>float3 offsets = gravityOffset + windOffset;\n\noffsets -= cameraPosition + forwardOffset - BoxSize\/2;\n\nfloat3 position = fmod(initialPosition + offsets, BoxSize);\n\nposition += cameraPosition + forwardOffset - BoxSize\/2;<\/pre>\n<p><span style=\"font-weight: 400;\">(The camera position and forward offset pull the coordinates back into the original box; after the wrap, the same translation moves them back into world space.)<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">We choose the original cube\u2019s dimensions (that is, 30m on a side, as above) and the cube\u2019s distance to the camera so that particles between the camera and the cube are clipped by the near plane or otherwise invisible, and particles beyond the far sides of the cube are too far away to be visible, or are convincingly represented as distance fog.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">To increase visual complexity we render the set of particles inside the cube multiple times. 
Each set of particles runs a different instance of our simulation and uses an additional random, but constant, 3D offset vector:<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre><span style=\"font-weight: 400;\">float3 offsets = gravityOffset[i] + windOffset[i] + randomOffset[i];<\/span>\n\n<span style=\"font-weight: 400;\">offsets -= cameraPosition + forwardOffset - BoxSize\/2;<\/span>\n\n<span style=\"font-weight: 400;\">float3 position = fmod(initialPosition + offsets, BoxSize);<\/span>\n\n<span style=\"font-weight: 400;\">position += cameraPosition + forwardOffset - BoxSize\/2;<\/span><\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">We find that 1 to 20 simulation instances, and thus particle set renderings, provide sufficient variation to look believable. More than 20 instances make the precipitation look too dense and incur too much performance overhead. The actual number of instances varies with the type of precipitation and is artist-controlled.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">The above pseudocode is incorrect due to a small, but important, implementation detail: the fmod function produces results in the range [-BoxSize, BoxSize]; that is, negative values wrap to [-BoxSize, 0], and positive values wrap to [0, BoxSize]. Yet we expect all values to wrap to the range [0, BoxSize].<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">To account for this behavior, we apply fmod to the sum of the offset vectors, which is guaranteed to yield a result in the range [-BoxSize, BoxSize], and then add BoxSize to shift it into [0, 2BoxSize]. We generate all initial particle positions in the range [0, BoxSize], so the sum of a position and its offsets is always positive and in the range [0, 3BoxSize]. 
Applying fmod to this sum thus creates a result in the desired range [0, BoxSize]:<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre><span style=\"font-weight: 400;\">float3 offsets = gravityOffset[i] + windOffset[i] + randomOffset[i];<\/span>\n\n<span style=\"font-weight: 400;\">offsets -= cameraPosition + forwardOffset - BoxSize\/2;<\/span>\n\n<span style=\"font-weight: 400;\">offsets = fmod(offsets, BoxSize) + BoxSize;<\/span>\n\n<span style=\"font-weight: 400;\">float3 position = fmod(initialPosition + offsets, BoxSize);<\/span>\n\n<span style=\"font-weight: 400;\">position += cameraPosition + forwardOffset - BoxSize\/2;<\/span><\/pre>\n<p>&nbsp;<\/p>\n<h1><b>Rendering Motion-Blurred Particles<\/b><\/h1>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Precipitation particles draw as an indexed triangle list or, if the GPU supports it, as a quad list from the static vertex buffer of initial positions. Each position in the static vertex buffer generates a quad via instancing with a stream of four pairs of UV coordinates. The quad\u2019s construction ensures it is always camera-facing.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">While real rain consists of fast-moving droplets of water, viewing these through a camera with finite exposure time results in motion-blurred streaks. To emulate motion blur we require two positions for every particle: its current position and its previous position. The current position is the result of our particle simulation. Because all particles in a simulation instance move identically, they all share the same velocity. A particle\u2019s previous position is thus equal to the current position minus a global, per-simulation-instance velocity. 
To directly control the length of the streaks, we scale this velocity by a user-controlled constant.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">The screen-space positions of the top and bottom vertices of each particle are thus the previous and current positions transformed by the previous and current view projection matrices, respectively.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Listing 5.1.1 Computing the screen-space positions of the bottom and top vertices of each motion-blurred particle<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre><span style=\"font-weight: 400;\">float4 worldPosPrev = worldPos + g_vVelocity * g_fHeightScale;<\/span>\n\n<span style=\"font-weight: 400;\">worldPosPrev.w = 1.0f;<\/span>\n\n<span style=\"font-weight: 400;\">float4 bottom = mul(worldPos, g_mViewProj);<\/span>\n\n<span style=\"font-weight: 400;\">float4 top = mul(worldPosPrev, g_mViewProjPrev);<\/span><\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Next we calculate how to offset a particle\u2019s vertices to the left and right so that the particle faces the camera and is rectangular. The difference between the top and bottom vertices in screen space is a 2D vector v = top - bottom. We compute a vector w as:<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre><span style=\"font-weight: 400;\">v = (x, y)        w = (-y, x)<\/span><\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">which is perpendicular to v. Offsetting the top and bottom positions of a particle by w and -w results in a camera-facing, rectangular billboard.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Listing 5.1.2 provides the code to construct per-particle, camera-facing quads. 
It uses a vertex\u2019s UV coordinates to decide how to offset each vertex.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Listing 5.1.2 Vertex shader code to construct camera-facing quads<\/span><\/p>\n<p>&nbsp;<\/p>\n<pre><span style=\"font-weight: 400;\">float4 ConstructBillboard(float4 position, float2 uv)<\/span>\n\n<span style=\"font-weight: 400;\">{<\/span>\n\n<span style=\"font-weight: 400;\">    float4 worldPos = position;<\/span>\n\n<span style=\"font-weight: 400;\">    float4 worldPosPrev = position + g_vVelocity * g_fHeightScale;<\/span>\n\n<span style=\"font-weight: 400;\">    worldPosPrev.w = 1.0f;<\/span>\n\n<span style=\"font-weight: 400;\">    float4 bottom = mul(worldPos, g_mViewProj);<\/span>\n\n<span style=\"font-weight: 400;\">    float4 topPrev = mul(worldPosPrev, g_mViewProjPrev);<\/span>\n\n<span style=\"font-weight: 400;\">    float2 dir = (topPrev.xy \/ topPrev.w) - (bottom.xy \/ bottom.w);<\/span>\n\n<span style=\"font-weight: 400;\">    float2 dirPerp = normalize(float2(-dir.y, dir.x));<\/span>\n\n<span style=\"font-weight: 400;\">    float4 projPos;<\/span>\n\n<span style=\"font-weight: 400;\">    projPos = lerp(topPrev, bottom, uv.y);<\/span>\n\n<span style=\"font-weight: 400;\">    projPos.xy += (0.5f - uv.x) * dirPerp * g_fWidthScale;<\/span>\n\n<span style=\"font-weight: 400;\">    return projPos;<\/span>\n\n<span style=\"font-weight: 400;\">}<\/span><\/pre>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Figure 5.1.3 The screen-space vectors v and w are perpendicular and span a camera-facing rectangle for every particle.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">The above implementation of motion blur, however, suffers from two visual problems. First, the faster the camera moves, the brighter the rain becomes. The cause of this effect is that all particles retain the same brightness regardless of their screen size or how much they stretch. 
We thus fade each particle proportionally to its screen-space velocity, as shown in Listing 5.1.3.<\/span><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400;\">Listing 5.1.3 Fading out fast-moving particles<\/span><\/p>\n<p><span style=\"font-weight: 400;\">&#8230;<\/span><\/p>\n<pre><span style=\"font-weight: 400;\">float4 bottom = mul(worldPos, g_mViewProj);<\/span>\n\n<span style=\"font-weight: 400;\">float4 top = mul(worldPosPrev, g_mViewProj);<\/span>\n\n<span style=\"font-weight: 400;\">float4 topPrev = mul(worldPosPrev, g_mViewProjPrev);<\/span>\n\n<span style=\"font-weight: 400;\">float2 dir = (top.xy \/ top.w) - (bottom.xy \/ bottom.w);<\/span>\n\n<span style=\"font-weight: 400;\">float2 dirPrev = (topPrev.xy \/ topPrev.w) - (bottom.xy \/ bottom.w);<\/span>\n\n<span style=\"font-weight: 400;\">float len = length(dir);<\/span>\n\n<span style=\"font-weight: 400;\">float lenPrev = length(dirPrev);<\/span>\n\n<span style=\"font-weight: 400;\">float lenColorScale = saturate(len \/ lenPrev);<\/span><\/pre>\n<p>&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction &nbsp; Well-implemented weather effects su 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"_links":{"self":[{"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=\/wp\/v2\/posts\/74"}],"collection":[{"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=74"}],"version-history":[{"count":6,"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=\/wp\/v2\/posts\/74\/revisions"}],"predecessor-version":[{"id":82,"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=\/wp\/v2\/posts\/74\/revisions\/82"}],"wp:attachment":[{"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=74"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=74"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.coolcoding.cn\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=74"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}