Dynamic Weather Effects

2018/01/11 21:01

Introduction

 

Well-implemented weather effects such as rain, fog, and lightning greatly add to the realism of an outdoor scene. Rain and other types of precipitation are the most challenging to render convincingly in real time. This article presents an innovative method of rendering rain, snow, sleet, and hail that smoothly blends between any type and density of precipitation and is highly art-directable.

 

Prior approaches to rendering precipitation fall into one of two classes: particles or image space.

 

Using one particle to model each drop of precipitation is still the most common solution. These particle systems, however, have a high computational cost directly proportional to the number of particles in use. While particle rendering consumes GPU cycles, particle simulation either runs on the CPU or consumes additional GPU cycles [Tariq07].

 

Image space techniques use layers of scrolling textures to model precipitation [Wang05] [Tatarchuk07]. These techniques only consume GPU cycles for simulation and rendering. They are, however, inherently 2D and thus suffer from artifacts when moving the camera, especially at high speeds, making them unsuitable for many games.

 

Neither of the above methods suits our needs for Project Gotham Racing 4: particle systems are too costly for the desired number of particles, and an image space approach creates too many visual artifacts.

 

Particle Simulation and Rendering

Our solution simulates and renders only a small subset of all particles in the world. In particular, we choose a world space axis-aligned cube near the origin that is 30 m on a side and randomly populate it with a fixed number of particles, that is, 1,000 particles. We simulate only these 1,000 particles. Particles that fall or blow out of this cube immediately wrap around to reenter the cube from the opposite side. Thus, this cube always contains the same number of particles throughout the simulation.
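
Filling the static vertex buffer is then a one-time setup step. A minimal sketch in the C-like style of the listings below, assuming a hypothetical randomFloat01 helper that returns uniform values in [0, 1):

// One-time setup: scatter 1,000 particles uniformly inside the
// [0, BoxSize]^3 cube and store them in the static vertex buffer.
// randomFloat01() is a hypothetical uniform generator over [0, 1).
for (int p = 0; p < 1000; ++p)
{
    initialPosition[p] = float3(randomFloat01(),
                                randomFloat01(),
                                randomFloat01()) * BoxSize;
}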

 

Because the particles in the cube always wrap in all directions, it is possible to tile the cube in any direction without visible boundaries, akin to the wrap texture addressing mode. In fact, arbitrarily placing any world space axis-aligned cube of the same dimensions into the world encloses the identical set of particles as contained in the original cube near the origin.

 

Our particle simulation simply offsets each particle by gravity and wind. The CPU computes both offsets every frame and passes the results as simple offset vectors to the vertex shader. The vertex shader thus computes a particle's current position every time it renders; that is, we only ever store the initial particle positions in a static vertex buffer. The fmod function ensures that each particle wraps back into the original cube:

 

float3 offsets = gravityOffset + windOffset;
float3 position = fmod(initialPosition + offsets, BoxSize);

 

Figure 5.1.2 A world space axis-aligned box at the origin contains a fixed number of particles and tiles over the entire world. Any world space axis-aligned box of the same dimensions contains the identical set of particles. We place one such box in front of the camera.

 

When the CPU computes the current gravityOffset, it simply adds a delta value to the previous gravityOffset result. The delta value is user-controlled; larger offsets simulate heavier particles. This delta value may be time-varying to emulate changing weights of particles. Computing the windOffset is analogous. In particular, we use a Perlin noise function for the wind direction that gives the appearance of ever-changing wind currents. To avoid eventual overflows in these offset vectors, the CPU also needs to apply the fmod function. The gravityOffset and windOffset vectors are the same for all particles.
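
A minimal sketch of this per-frame CPU update, in the same C-like style; gravityDelta, windStrength, and the Perlin noise helper perlinNoise3 are assumed names, not from the original:

// Per-frame accumulation of the global offsets (runs on the CPU).
gravityOffset += float3(0.0f, -gravityDelta, 0.0f); // larger delta = heavier particles
windOffset += perlinNoise3(time) * windStrength;    // ever-changing wind currents

// Wrap both accumulators to avoid eventual floating-point overflow; this
// does not change the wrapped particle positions, since fmod is periodic.
gravityOffset = fmod(gravityOffset, BoxSize);
windOffset = fmod(windOffset, BoxSize);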

 

To efficiently render this world of particles, we choose a cube directly in front of the camera and render the particles within it. In other words, all particles are additionally offset by the camera position and a forward offset, which is the view direction scaled by the distance of the cube to the camera:

float3 offsets = gravityOffset + windOffset;
offsets -= cameraPosition + forwardOffset - BoxSize/2;
float3 position = fmod(initialPosition + offsets, BoxSize);
position += cameraPosition + forwardOffset - BoxSize/2;

(The camera terms act as a scene offset: we first pull the coordinates back into the original cube, apply the wrap, and then translate the result back into world coordinates.)

 

We choose the original cube's dimensions (that is, 30 m on a side; see above) and the cube's distance to the camera such that particles between the camera and the cube are clipped by the near plane or otherwise invisible, and particles beyond the far sides of the cube are too far away to be visible, or are convincingly represented as distance fog.
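
As a rough sketch of one such placement (viewDirection and nearPlaneDistance are assumed names, and the exact distance is a per-game tuning choice):

// Center the render cube ahead of the camera so its near face sits
// roughly at the near clip plane; viewDirection is the normalized
// camera forward vector.
float3 forwardOffset = viewDirection * (BoxSize * 0.5f + nearPlaneDistance);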

 

To increase visual complexity, we render the set of particles inside the cube multiple times. Each set of particles runs a different instance of our simulation and uses an additional random, but constant, 3D offset vector:

 

float3 offsets = gravityOffset[i] + windOffset[i] + randomOffset[i];
offsets -= cameraPosition + forwardOffset - BoxSize/2;
float3 position = fmod(initialPosition + offsets, BoxSize);
position += cameraPosition + forwardOffset - BoxSize/2;

 

We find that 1 to 20 simulation instances, and thus particle set renderings, provide sufficient variation to look believable. More than 20 instances make the precipitation look too dense and have too much performance overhead. The actual number of instances varies with the type of precipitation and is artist-controlled.
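
The render loop is then a straightforward re-draw of the same static buffer, once per simulation instance. A hedged sketch, where SetParticleConstants and DrawParticleBuffer stand in for whatever engine helpers actually bind the constants and issue the draw:

// Draw the same 1,000-particle buffer once per simulation instance,
// each with its own gravity, wind, and random offsets.
for (int i = 0; i < numInstances; ++i) // numInstances: 1 to 20, artist-controlled
{
    SetParticleConstants(gravityOffset[i], windOffset[i], randomOffset[i]);
    DrawParticleBuffer(); // hypothetical: issues the instanced quad draw
}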

 

The above pseudocode is incorrect due to a small, but important, implementation detail: the fmod function produces results in the range [-BoxSize, BoxSize]; that is, negative values wrap to [-BoxSize, 0], and positive values wrap to [0, BoxSize]. Yet we expect all values to wrap to the range [0, BoxSize].
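
For concreteness, with BoxSize = 30 the intrinsic behaves as follows (a worked example, not from the original text):

float a = fmod(-7.0f, 30.0f); // a == -7.0f: negative inputs wrap to [-BoxSize, 0]
float b = fmod(37.0f, 30.0f); // b ==  7.0f: positive inputs wrap to [0, BoxSize]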

 

To account for this behavior, we first apply fmod to the sum of the offset vectors, which guarantees a result in the range [-BoxSize, BoxSize], and then add BoxSize to shift it into the range [0, 2BoxSize]. We generate all initial particle positions in the range [0, BoxSize]; the sum of an initial position and its offsets is thus always positive and in the range [0, 3BoxSize]. Applying fmod to this sum thus creates a result in the desired range [0, BoxSize]:

 

float3 offsets = gravityOffset[i] + windOffset[i] + randomOffset[i];
offsets -= cameraPosition + forwardOffset - BoxSize/2;
offsets = fmod(offsets, BoxSize) + BoxSize;
float3 position = fmod(initialPosition + offsets, BoxSize);
position += cameraPosition + forwardOffset - BoxSize/2;

 

Rendering Motion-Blurred Particles

 

Precipitation particles draw as an indexed triangle list or, if the GPU supports it, as a quad list from the static vertex buffer of initial positions. Each position in the static vertex buffer generates a quad via instancing with a stream of four pairs of UV coordinates. The quad's construction ensures it is always camera-facing.
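
The original does not spell out the corner layout, but a UV stream consistent with how Listing 5.1.2 consumes it (uv.y selects the top versus bottom edge, uv.x the left versus right side) might look like this:

// Hypothetical per-quad corner stream: uv.y = 0 selects the top
// (previous-position) edge and uv.y = 1 the bottom edge, while
// uv.x = 0/1 pushes the vertex to either side of the streak.
static const float2 QuadCorners[4] =
{
    float2(0.0f, 0.0f), // top-left
    float2(1.0f, 0.0f), // top-right
    float2(0.0f, 1.0f), // bottom-left
    float2(1.0f, 1.0f), // bottom-right
};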

 

While real rain consists of fast-moving droplets of water, viewing these through a camera with a finite exposure time results in motion-blurred streaks. To emulate motion blur, we require two positions for every particle: its current position and its previous position. The current position is the result of our particle simulation. Because all particles in a simulation instance move identically, they all share the same velocity. A particle's previous position is thus equal to the current position minus a global, per-simulation-instance velocity. To directly control the length of the streaks, we scale this velocity by a user-controlled constant.

 

The screen-space positions of the top and bottom vertices of each particle are thus the previous and current positions transformed by the previous and current view-projection matrices, respectively.

 

Listing 5.1.1 Computing the screen-space positions of the bottom and top vertices of each motion-blurred particle

 

float4 worldPosPrev = worldPos + g_vVelocity * g_fHeightScale;
worldPosPrev.w = 1.0f;
float4 bottom = mul(worldPos, g_mViewProj);
float4 top = mul(worldPosPrev, g_mViewProjPrev);

 

Next we calculate how to offset a particle's vertices to the left and right so that the particle faces the camera and is rectangular. The difference between the top and bottom vertices in screen space is a 2D vector v = top - bottom. We compute a vector w as:

 

v = (x, y),    w = (-y, x)

 

The vector w is perpendicular to v. Offsetting the top and bottom positions of a particle by w and -w results in a camera-facing, rectangular billboard.

 

Listing 5.1.2 provides the code to construct per-particle, camera-facing quads. It uses a vertex's UV coordinates to decide how to offset each vertex.

 

Listing 5.1.2 Vertex shader code to construct camera-facing quads

 

float4 ConstructBillboard(float4 position, float2 uv)
{
    float4 worldPos = position;
    float4 worldPosPrev = position + g_vVelocity * g_fHeightScale;
    worldPosPrev.w = 1.0f;

    float4 bottom = mul(worldPos, g_mViewProj);
    float4 topPrev = mul(worldPosPrev, g_mViewProjPrev);
    float2 dir = (topPrev.xy / topPrev.w) - (bottom.xy / bottom.w);
    float2 dirPerp = normalize(float2(-dir.y, dir.x));

    float4 projPos = lerp(topPrev, bottom, uv.y);
    projPos.xy += (0.5f - uv.x) * dirPerp * g_fWidthScale;
    return projPos;
}

 

Figure 5.1.3 The screen-space vectors v and w are perpendicular and span a camera-facing rectangle for every particle.

 

The above implementation of motion blur, however, suffers from two visual problems. First, the faster the camera moves, the brighter the rain becomes. The cause of this effect is that all particles retain the same brightness regardless of their screen size or how much they stretch. We thus fade each particle proportionally to its screen-space velocity, as shown in Listing 5.1.3.

 

Listing 5.1.3 Fading out fast-moving particles

float4 bottom = mul(worldPos, g_mViewProj);
float4 top = mul(worldPosPrev, g_mViewProj);
float4 topPrev = mul(worldPosPrev, g_mViewProjPrev);
float2 dir = (top.xy / top.w) - (bottom.xy / bottom.w);
float2 dirPrev = (topPrev.xy / topPrev.w) - (bottom.xy / bottom.w);

float len = length(dir);
float lenPrev = length(dirPrev);
float lenColorScale = saturate(len / lenPrev);