For this roadmap preview, we had a nice chat with Benjamin Block from the physics department about his role in the development of GPU particle support and our new experimental GPU fluid dynamics.
Could you please give us a brief introduction of yourself and your job?
I am Benjamin Block, and I work as a senior programmer in our physics department. Most of my recent time was devoted to the development of GPU particle support for our new particle system called Wavicle, which was introduced in CRYENGINE V.
Generally, we are looking for more tasks that can be migrated to the GPU using general-purpose compute shaders, so that we can make more use of the GPU when it is not busy doing graphics calculations. Due to the nature of my tasks, I work very closely with our rendering team.
The particle system was an obvious candidate, but there are other tasks, such as more advanced skinning, tiled shading, and our new cloud rendering technology that can be done on the GPU.
Luckily, I also found some time to research more experimental things such as on-GPU fluid dynamics.
In your words, what is this feature and how does it work?
For Wavicle, I developed a pipeline that allocates all the resources needed for a particle effect component on the GPU, so the particles are spawned, updated, and managed on the GPU with nearly zero CPU-side interference. Since the particle data is stored in GPU buffers anyway, it can be drawn directly from that memory.
With the GPU particle pipeline, it is possible to process a much larger number of particles than in the CPU pipeline, simply because the GPU typically has much more raw arithmetic computing power available. This makes complex procedural motion, such as the motion generated by our Simplex Noise and Curl Noise effectors, feasible for a large number of particles.
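The appeal of curl noise is that taking the curl of a noise potential yields a divergence-free velocity field, so particles swirl without bunching up or thinning out. Here is a minimal 2D NumPy sketch of that idea (illustrative only – the engine's effectors presumably run over 3D simplex noise in compute shaders, and the smoothed random field below is just a stand-in for a proper noise function):

```python
import numpy as np

def curl_noise_velocity(psi, h=1.0):
    """Velocity from the 2D 'curl' of a scalar noise potential psi:
    v = (d psi / dy, -d psi / dx).  Such a field is divergence-free,
    which gives curl noise its swirling, incompressible look."""
    dpsi_dy, dpsi_dx = np.gradient(psi, h)   # derivatives along rows (y), cols (x)
    return dpsi_dy, -dpsi_dx

# usage: turn a smoothed random field into a swirling velocity field
rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))
for _ in range(10):                          # crude smoothing as a noise stand-in
    psi = 0.25 * (np.roll(psi, 1, 0) + np.roll(psi, -1, 0)
                  + np.roll(psi, 1, 1) + np.roll(psi, -1, 1))
vx, vy = curl_noise_velocity(psi)
div = np.gradient(vx, axis=1) + np.gradient(vy, axis=0)   # ~0 everywhere
```

Because the two finite-difference operators act along different axes, the discrete divergence vanishes up to floating-point rounding, which is exactly the property that makes the resulting motion look like an incompressible flow.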
But having all the data on the GPU has the additional benefit that other GPU data is available to the particles as well (e.g. the depth buffer, which allows us to do Screen Space Collisions).
Since we had already gone that far, it is now easy to come up with more advanced things that can be done with the data on the GPU.
I had prototyped fluid simulations before the GPU particle system, but there wasn't a proper infrastructure in our engine to support it, and compute shaders were still an afterthought. If I recall correctly, this preliminary work ultimately sparked the development of the GPU particle system. The new particle system was already very fast and flexible on the CPU, so until then there was no pressing reason to build a GPU particle pipeline, but there are some things that are not only slow but simply impossible (without killing real-time performance completely) on the CPU.
The fluid simulation in its current form is an SPH (Smoothed Particle Hydrodynamics) simulation. I tried out a lot of different approaches and tweaks, but the basic idea behind it is that instead of simulating a flow field on a grid, you define the fluid by a number of flow particles, where each particle carries a "smeared out" contribution to the total mass of the fluid as well as a velocity. So when you want to calculate the flow velocity at a specific point in the fluid domain, you first need to figure out which particles are in the neighborhood of that point – which requires fast neighborhood queries, e.g. via an acceleration structure – and then sum over each particle's velocity, weighted with its mass contribution at that point.
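The velocity query described here can be written down in a few lines. Below is a NumPy sketch of the idea; the poly6 kernel and the density normalization are common textbook SPH choices, not necessarily what the engine's shaders use, and the brute-force distance computation stands in for the acceleration structure:

```python
import numpy as np

def w_poly6(r, h):
    """Poly6 smoothing kernel (a standard SPH choice): each particle's
    mass is 'smeared out' over a sphere of radius h around it."""
    q = np.clip(h * h - r * r, 0.0, None)
    return (315.0 / (64.0 * np.pi * h**9)) * q**3

def smoothed_velocity(x, positions, velocities, masses, h):
    """Flow velocity at point x: sum each neighbor's velocity weighted
    by its mass contribution at x, normalized by the total density."""
    r = np.linalg.norm(positions - x, axis=1)   # brute-force neighbor search
    w = masses * w_poly6(r, h)                  # mass-weighted kernel values
    wsum = w.sum()
    if wsum == 0.0:
        return np.zeros(3)                      # no particles within radius h
    return (w[:, None] * velocities).sum(axis=0) / wsum
```

With this normalization, a query point surrounded by particles that all move with the same velocity returns exactly that velocity, which is the behavior you want from a flow-field lookup.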
The fluid simulation is instantiated by adding a GPU Fluid Dynamics feature to a particle effect component. The fluid simulation will generate a flow field and particles spawned from that component will be influenced by the flow.
What are the advantages to this system?
SPH simulations originate from astrophysics, and interestingly, they are the only simulations that produce the spiral arms of galaxies. They have become very popular in computer graphics: because you have a discrete number of fluid particles, you can easily control the computational effort, and the simulation only takes place where fluid mass is actually distributed, instead of across a whole grid domain regardless of its fluid content. On the downside, there are a lot of adjustable parameters that can need a lot of tweaking to get nice results.
How long did you work on the development of the Fluid Dynamics?
It was kind of an on/off thing over several months. As I said, the main focus was the GPU particle pipeline and accompanying infrastructure itself.
How does the integration of DX12 impact your particle implementation?
When I started out with the implementation of the GPU particles, there was not much a priori support for compute shaders.
For DX12, we developed a completely new graphics pipeline with a much better abstraction of the rendering process where we have user-defined stages. Usually such a stage is used to draw some aspect of the scene geometry, or a full-screen pass, but there is also a compute stage, which can be used to do arbitrary computations on the GPU.
This new pipeline is used for DX11 as well with great benefit, so I dare say the performance gain that we see is due to the improvements of our own technology, and the need to rethink the rendering process to support modern (DX12 era) hardware.
What is even better is that we will soon be able to run most compute shaders asynchronously, so ideally, compute shader tasks can use all remaining GPU resources when the GPU is not busy with graphics calculations.
What’s the most complex type of simulation you can produce with this system?
The fluid dynamics feature is still an experimental feature in the general framework of our new particle system, which is incredibly flexible. The particle system supports modifiers for all properties that are available in the different particle component features.
This is more limited on the GPU currently, because everything has to work together with the rest of the infrastructure that we have.
What I find most useful is that CPU particles with arbitrarily complex behavior can spawn second-generation GPU particles, which together allow for super interesting effects. Since all the different features can be combined with one another, there is practically no limit on the complexity of what can be done with it.
For the Fluid Dynamics feature specifically, I am excited about what the designers will make out of it, but I can see all kinds of opportunities for smoke/fog or magic effects. The feature is still in early experimental stages, so I am not completely sure where it will end up and how it will be implemented in its final form.
Can you re-mesh a fluid?
No, because there is currently no clearly defined surface. The simulation generates a flow field which is used to drag particles along. The simulation itself is not drawn – only the particles that are affected by it.
The simulation can be tuned so that it looks like there is a phase boundary (e.g. a liquid surface) by adjusting the cohesion parameter. This makes boundary fluid particles try to stick together and minimize the surface, as a liquid does.
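The effect of a cohesion term can be illustrated with a toy pairwise force that pulls particles within the smoothing radius toward each other (the falloff function and constants here are made up for illustration – this is not CRYENGINE's actual formulation, and a real solver would reuse the neighborhood acceleration structure instead of the O(N²) loop):

```python
import numpy as np

def cohesion_forces(positions, masses, h, k=1.0):
    """Toy pairwise cohesion: particles closer than the smoothing radius h
    attract each other, so boundary particles try to stick together and
    minimize the surface.  Brute-force O(N^2) reference sketch."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        d = positions - positions[i]           # vectors pointing i -> j
        r = np.linalg.norm(d, axis=1)
        mask = (r > 1e-9) & (r < h)
        # falloff: zero at r=0 and r=h, strongest in between;
        # the extra 1/r turns d into unit direction vectors
        w = np.zeros(n)
        w[mask] = (r[mask] / h) * (1.0 - r[mask] / h) / r[mask]
        forces[i] = k * masses[i] * (masses * w) @ d
    return forces
```

For two particles half a radius apart, the resulting forces are equal and opposite and point toward each other, so clumps of boundary particles contract – the surface-minimizing behavior a stronger cohesion parameter produces.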
I want to look more into surface reconstruction methods in the future.
What’s next for this feature?
The fluid simulation itself is working reasonably well so far; I would like to research more options for better collision handling. Acceleration structures that are good for graphics are not necessarily suited for physics and AI, and vice versa. I think it would be important to come up with an infrastructure that supports all of these.
The main limitation of the fluid simulation itself is probably that it is still a heavy and complex effect. Luckily, it seems that in a lot of cases a fairly low number of fluid particles already creates a believable flow field. We still need to tweak it and identify suitable use cases for this technology.