Student Projects

Smoke Simulations Using Machine Learning

Project for Mathematical Structures of Complex Systems, Heidelberg University, WS 2019/2020

by Ulrich Prestel


The physical phenomena we observe in our day-to-day lives are governed by complex non-linear equations which are typically high-dimensional and hard to solve. In fluid dynamics especially, the continuous models that describe how a fluid evolves over time are often not solvable by a purely analytical approach. Thanks to substantial progress in numerical analysis and computational methods, powerful solvers have been developed that handle these equations accurately and efficiently, but their computational cost grows quickly with the scale of the problem.

In this project, we take a different view on this problem: instead of relying on analytic expressions or numerical simulations directly, we use a deep learning approach to infer physical functions from data. More specifically, we focus on the temporal evolution of complex functions that arise in the context of fluid flows. Because of their importance for engineering, physics and our environment overall, fluids are particularly interesting candidates for deep learning models.

Related work

The history of deep learning approaches to physics problems is a short one, simply because the field itself is young.
The most important paper for our project is the Deep Fluids paper [Kim et al., 2018], because it tackles a similar problem. Their approach uses an autoencoder architecture together with a second network that navigates the latent coordinates over time.
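The idea of advancing a simulation in latent space can be sketched as follows. This is a hypothetical, heavily simplified illustration, not the architecture from the paper: untrained linear maps stand in for the convolutional encoder, decoder and latent-stepping networks, and the grid size and latent dimension are arbitrary.

```python
import numpy as np

# Simplified sketch of the latent-space idea: an "encoder" compresses a
# flow field into latent coordinates c, a "stepper" advances c in time,
# and a "decoder" maps back to a field. All weights are untrained and
# the linear maps are placeholders for the networks of the paper.
rng = np.random.default_rng(0)
GRID, LATENT = 32 * 32 * 2, 16                # 32x32 field, 2 velocity components

W_enc = rng.normal(0, 0.01, (LATENT, GRID))   # placeholder encoder weights
W_dec = rng.normal(0, 0.01, (GRID, LATENT))   # placeholder decoder weights
W_step = np.eye(LATENT) + rng.normal(0, 0.01, (LATENT, LATENT))  # latent stepper

def encode(u):       # field -> latent coordinates
    return W_enc @ u.ravel()

def decode(c):       # latent coordinates -> field
    return (W_dec @ c).reshape(32, 32, 2)

def step_latent(c):  # advance one time step entirely in latent space
    return W_step @ c

u0 = rng.normal(size=(32, 32, 2))   # a velocity field u(t)
c = encode(u0)
u1 = decode(step_latent(c))         # predicted field u(t + dt)
```

The appeal is that `step_latent` operates on 16 numbers rather than the full grid, which is what makes this kind of surrogate cheaper than a classical solver step.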

Another approach is that of Raissi et al. [2018]. They use a fully connected network that returns the velocity vector for a given position and time. The most interesting part of this paper is the loss function, which incorporates terms derived from the Navier-Stokes equations. Since this setup does not match our goal of simulating fluids in a given geometry, we did not pursue this approach further.
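The flavour of such a physics-informed loss can be sketched as follows. This is an illustrative stand-in, not the loss from the paper: a toy function replaces the trained network, only the continuity equation du/dx + dv/dy = 0 is used as the physics term, and its derivatives are estimated by finite differences rather than automatic differentiation.

```python
import numpy as np

# Sketch of a physics-informed loss: a data term plus the squared
# residual of the continuity equation du/dx + dv/dy = 0, evaluated on
# the network output via central finite differences. `net` is a toy
# stand-in for a fully connected network (x, y, t) -> (u, v).
rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (2, 3))

def net(x, y, t):
    """Toy placeholder for the fully connected velocity network."""
    return np.tanh(W @ np.array([x, y, t]))

def continuity_residual(x, y, t, h=1e-4):
    du_dx = (net(x + h, y, t)[0] - net(x - h, y, t)[0]) / (2 * h)
    dv_dy = (net(x, y + h, t)[1] - net(x, y - h, t)[1]) / (2 * h)
    return du_dx + dv_dy

def loss(points, targets, lam=1.0):
    data = sum(np.sum((net(x, y, t) - uv) ** 2)
               for (x, y, t), uv in zip(points, targets)) / len(points)
    pde = sum(continuity_residual(x, y, t) ** 2
              for (x, y, t) in points) / len(points)
    return data + lam * pde  # training would minimise this by gradient descent

pts = [(0.1, 0.2, 0.0), (0.5, 0.5, 0.1)]
tgt = [np.zeros(2), np.zeros(2)]
total = loss(pts, tgt)
```

The key point is that the physics term penalises outputs that violate the governing equations even at points where no training data exists.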

Guo and Li [2016] try to predict the steady flow around an object in a wind tunnel. They encode the geometry, represented as a signed distance field (SDF, see 2.3.1), and decode the latent coordinates with two networks responsible for the x and y components of the velocity. They also mask the decoder output by setting every value inside the geometry to zero. While this paper does not address our exact problem, we adopt SDFs to model obstacles and smoke sources.
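The SDF representation and the masking step can be sketched in a few lines. The circular obstacle and grid size here are illustrative choices, and the random field stands in for a decoder output; the convention (negative inside, positive outside, zero on the boundary) is the standard one for signed distance fields.

```python
import numpy as np

# A signed distance field (SDF) describes an obstacle by the signed
# distance to its boundary: negative inside, positive outside. The
# decoder output is then masked so the velocity vanishes inside the
# body, as in Guo and Li. Obstacle shape and grid size are illustrative.
N = 64
ys, xs = np.mgrid[0:N, 0:N]
cx, cy, r = 32.0, 32.0, 10.0

# SDF of a circle: distance to the centre minus the radius.
sdf = np.hypot(xs - cx, ys - cy) - r

velocity = np.random.default_rng(0).normal(size=(2, N, N))  # stand-in decoder output
masked = np.where(sdf > 0, velocity, 0.0)  # zero out values inside the obstacle
```

Unlike a binary occupancy grid, the SDF also tells the network how far each cell is from the boundary, which is useful information near the obstacle surface.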

Data generation

A main focus of our project was generating sensible data from which our network can learn the dynamics of fluid flow. All of our simulations run in two dimensions to simplify both the learning and the data-generation process; however, our methods could easily be extended to three dimensions. We focused on simulating smoke, as it is visually more interesting.
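The core building block of the classical smoke solvers this kind of data generation rests on (in the tradition of Stam and Fedkiw et al.) is semi-Lagrangian advection: each grid cell is traced backwards along the velocity field and the smoke density is sampled at the source point. The sketch below is a deliberately crude version, using nearest-neighbour sampling instead of the usual bilinear interpolation, and is not our actual data-generation code.

```python
import numpy as np

# Minimal semi-Lagrangian advection sketch: trace each cell backwards
# along the velocity field (u, v) over one time step dt and copy the
# density from the source cell (nearest-neighbour sampling for brevity;
# real solvers interpolate bilinearly).
def advect(density, u, v, dt):
    n = density.shape[0]
    ys, xs = np.mgrid[0:n, 0:n].astype(float)
    # Backtrace: where did the material now in each cell come from?
    x_src = np.clip(np.rint(xs - dt * u), 0, n - 1).astype(int)
    y_src = np.clip(np.rint(ys - dt * v), 0, n - 1).astype(int)
    return density[y_src, x_src]

n = 16
density = np.zeros((n, n)); density[8, 4] = 1.0  # a blob of smoke
u = np.ones((n, n)); v = np.zeros((n, n))        # uniform flow to the right
density = advect(density, u, v, dt=2.0)          # blob moves 2 cells right
```

A full solver alternates such advection steps with external forces and a pressure projection that keeps the velocity field divergence-free; this sketch only shows the transport of the smoke density.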



References
R. Fedkiw, J. Stam, and H. W. Jensen. Visual simulation of smoke. 2001. URL http://graphics.ucsd.edu/~henrik/papers/smoke/smoke.pdf.

X. Guo and W. Li. Convolutional neural networks for steady flow approximation. 2016. URL https://www.autodeskresearch.com/publications/convolutional-neural-networks-steady-flow-approximation.

B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. H. Gross, and B. Solenthaler. Deep fluids: A generative network for parameterized fluid simulations. CoRR, abs/1806.02071, 2018. URL http://arxiv.org/abs/1806.02071.

M. Raissi, A. Yazdani, and G. E. Karniadakis. Hidden fluid mechanics: A navier-stokes informed deep learning framework for assimilating flow visualization data. CoRR, abs/1808.04327, 2018. URL http://arxiv.org/abs/1808.04327.

Y. Xie, E. Franz, M. Chu, and N. Thuerey. tempoGAN: A temporally coherent, volumetric GAN for super-resolution fluid flow. CoRR, abs/1801.09710, 2018. URL http://arxiv.org/abs/1801.09710.

A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015. URL https://arxiv.org/abs/1511.06434.

A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. CoRR, abs/1511.05644, 2015. URL https://arxiv.org/abs/1511.05644.

I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf. Wasserstein autoencoders. CoRR, abs/1711.01558, 2017. URL https://arxiv.org/abs/1711.01558.