Guest post by Chirag Modi, François Lanusse, Mustafa Mustafa, Uroš Seljak from the Berkeley Center for Cosmological Physics, the CosmoStat Laboratory, and the Lawrence Berkeley National Laboratory
Introduction
Numerical simulations [1] of the large scale structure of the Universe are fundamental tools used by cosmologists to make sense of the vast amount of data collected by cosmological surveys. These simulations are typically extremely computationally expensive and are usually run offline on massive supercomputers. But what if we could make these simulations extremely fast and integrate them with machine learning components in a single unified framework? This is what a new N-body cosmological simulation code, FlowPM [2], is designed to do. In this blog post, we will show you how to simulate your own tiny Universe in TensorFlow and explain why this is an exciting prospect for cosmologists.
Figure 1: (Blue) Structures observed in the Universe by the 2dFGRS survey. (Red) Corresponding structures generated in the Millennium N-body simulation.
N-body cosmological simulations
N-body simulations in cosmology evolve the Universe from its birth at the Big Bang to the present day. They do so by distributing a large number of particles in a box according to the initial distribution of matter, and then moving these particles forward in time under gravitational forces. The resulting final particle distribution reproduces the large scale structures of our Universe. For example, it reflects how galaxies and galaxy clusters are distributed today.
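To make the mechanics concrete, here is a toy direct-summation N-body step in plain Python/NumPy. It is purely illustrative: it uses pairwise Newtonian forces with unit masses and a made-up softening length, whereas production cosmological codes work in comoving coordinates and use far more efficient force solvers (FlowPM's particle-mesh scheme is described later in this post).

import numpy as np

def gravity_step(pos, vel, dt, G=1.0, softening=0.1):
    """Advance particles one step under mutual Newtonian gravity (unit masses)."""
    # Pairwise separation vectors: r[i, j] = pos[j] - pos[i]
    r = pos[None, :, :] - pos[:, None, :]
    # Softened distances cubed; the softening keeps the i == j terms finite (and zero).
    dist3 = (np.sum(r ** 2, axis=-1) + softening ** 2) ** 1.5
    acc = G * np.sum(r / dist3[..., None], axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Evolve 1,000 particles scattered uniformly in a unit box for a few steps.
pos = np.random.uniform(0.0, 1.0, size=(1000, 3))
vel = np.zeros_like(pos)
for _ in range(10):
    pos, vel = gravity_step(pos, vel, dt=0.01)

Note the cost of this naive approach: the pairwise force array has N² entries, which is exactly the scaling problem discussed later in this post.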
The formation and evolution of these structures depend on answers to some of the most fundamental questions about our Universe: How much matter does the Universe contain? How fast is the Universe expanding? What are the properties of the dark matter and dark energy that drive this expansion?
Modern cosmological surveys map out these structures with the most powerful telescopes, over the largest areas of the sky, going back billions of years in time. Matching the predictions of N-body simulations to the observations of these surveys helps us answer these fundamental questions, and hence improves our understanding of the birth and evolution of our Universe.
FlowPM, a TensorFlow cosmological N-body solver
N-body simulations have been the workhorse tools of the cosmology community for decades. To benefit from the recent advances in machine learning and statistical inference, we introduce FlowPM, a pure TensorFlow implementation of cosmological N-body simulations. We provide a Google Colab notebook to experiment with these simulations. Note: our code is currently written in TF1 (we will explore updating to TF2 in the future).
After setting the simulation parameters, such as the box length and grid size, the code snippet to execute the simulation is quite simple and is included in its entirety below. It generates the observed large scale structures, as shown in Figure 2.
import tensorflow as tf
import flowpm

# Assumes the setup step has defined: N (grid size), L (box length),
# ipklin (interpolated linear matter power spectrum), a0 (initial scale
# factor), stages (scale factors of the time steps) and batch (batch size).

# Generate Gaussian initial conditions for the matter distribution
initial_conditions = flowpm.linear_field(N, L, ipklin, batch_size=batch)
# Sample particles, i.e. generate the initial displacements and velocities
state = flowpm.lpt_init(initial_conditions, a0=a0)
# Evolve particles from the initial state down to the present time with the N-body solver
final_state = flowpm.nbody(state, stages, N)
# Visualize the final density field, i.e. interpolate the particles onto a grid
final_field = flowpm.cic_paint(tf.zeros_like(initial_conditions), final_state[0])

# Execute the graph!
with tf.Session() as sess:
    ic, istate, fstate, sim = sess.run([initial_conditions, state, final_state, final_field])
Figure 2: (Left) The initial distribution of matter in the Universe, at the beginning of the N-body simulation. (Right) The final distribution of matter, at the final snapshot of the simulation. The large scale structures, with collapsed halos, long filaments and empty voids, can clearly be seen.
So, what is the benefit of having these simulations in TensorFlow? The advantages of such a framework fall primarily into two broad categories:
1. Analysis & Inference: Simulations in TensorFlow have a unique capability that cosmologists did not have before -
differentiability. Such capability opens doors to new analytic tools for scientists such as developing efficient simulation-based inference techniques. It also allows us to quantify the response of the final observations with respect to the various input parameters. In our group at
Berkeley Center for Cosmological Physics (BCCP, UC Berkeley), we are interested in going backwards in time and reconstructing the
initial conditions4 of our Universe from the observations of large scale structures today. This involves solving a highly non-linear optimization problem in
millions of dimensions which is only feasible with a differentiable simulation such as FlowPM. An illustration of this reconstruction is shown below in GIF 1.
GIF 1: Reconstruction of the initial conditions of the Universe (left) from the large scale structures in the final dark matter field (right).
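As a rough sketch of how differentiability enables this kind of reconstruction, the snippet below treats the initial linear field as a trainable variable and optimizes it so that the simulated final field matches an observed one. This is a schematic illustration, not the actual method of reference [4]: the simple L2 loss, the optimizer settings, and the observed_field tensor are all placeholders, and N, a0, stages and batch come from the same setup as the earlier snippet.

import numpy as np
import tensorflow as tf
import flowpm

# Trial initial conditions, to be optimized (hypothetical starting point: zeros).
ic_var = tf.Variable(np.zeros([batch, N, N, N], dtype=np.float32))

# Forward model: the same differentiable simulation as before.
state = flowpm.lpt_init(ic_var, a0=a0)
final_state = flowpm.nbody(state, stages, N)
model_field = flowpm.cic_paint(tf.zeros_like(ic_var), final_state[0])

# Illustrative data term: mean squared error against an observed field.
loss = tf.reduce_mean((model_field - observed_field) ** 2)
# Gradients flow through the entire N-body evolution.
train_op = tf.train.AdamOptimizer(learning_rate=0.01).minimize(loss, var_list=[ic_var])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        sess.run(train_op)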
2. Hybrid Physical/Deep Learning Simulations: This framework allows one to develop hybrid forward models in which deep learning components form part of the N-body simulation. With current computational resources, it is impossible to simulate all the components of the Universe simultaneously with very high accuracy. For example, depending on their scientific goal, current simulations regularly trade off different elements: the wide range of length scales observed in the Universe, the range of masses of the observed galaxies, and the diverse physical processes involved in forming these galaxies. Now, depending on the scientific requirements, we can improve upon such trade-offs by using deep learning surrogate models to include these elements in a way that interfaces naturally with the underlying N-body simulation. An example of such a hybrid simulation [3], developed in our group at BCCP, is shown in Figure 3, with a schematic code sketch after the figure caption. Say we want to simulate the gas momentum density in the Universe. This observable is currently generated with expensive hydrodynamic simulations, but can now instead be produced with FlowPM end-to-end at 1000x lower cost.
Figure 3: An example of a hybrid simulation, where we supplement the dark matter output of the PM simulation (left) with a two-layer non-linear transformation (network) to simulate the gas momentum density in the Universe (center). Compare it against the truth, simulated with a 1000x more expensive hydrodynamic simulation (right).
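Here is a minimal sketch of what such a hybrid model can look like in TF1. The architecture below is hypothetical and only loosely mirrors the two-layer network of reference [3]: a small 3D convolutional network is applied to the dark matter field produced by the differentiable simulation, and the whole pipeline, N-body evolution plus network, remains differentiable end to end.

import tensorflow as tf

def gas_surrogate(dark_matter_field):
    """Map a [batch, N, N, N] dark matter field to a surrogate gas observable."""
    x = dark_matter_field[..., None]  # add a channel axis for the convolutions
    x = tf.layers.conv3d(x, filters=8, kernel_size=3, padding="same",
                         activation=tf.nn.relu)
    x = tf.layers.conv3d(x, filters=1, kernel_size=3, padding="same")
    return x[..., 0]

# `final_field` is the painted density field from the earlier snippet.
gas_field = gas_surrogate(final_field)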
All of this may appear simple in principle, but there is another challenge: our Universe is huge! To match the observations of current and future surveys accurately, we will need to evolve billions of particles at the same time. This makes N-body cosmological simulations challenging in two ways:
1. Continuously evolving billions of particles is computationally very expensive. Furthermore, to estimate the gravitational force between all particles, we would need to count every particle pair in the simulation. This scales as N², which makes such computations prohibitive. Fortunately, there are approximate schemes that make this tractable. The one we employ in FlowPM is the particle-mesh (PM) approach. In a PM approach, for the purpose of estimating the gravitational force, we discretise space on a regular mesh of size N_g, and then compute forces over the whole volume using highly optimized 3D Fast Fourier Transforms. This reduces the computational cost from N_g² to N_g log(N_g) (a minimal sketch of this Fourier-space trick follows after this list).
2. Despite these algorithmic optimizations, billions of particles also make these simulations very memory intensive. As a result, simulations of useful sizes, which require meshes of at least 1024x1024x1024, do not fit on a single GPU. Hence, we need a model parallelism framework to develop large-scale simulations, and that's where Mesh TensorFlow comes in.
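Before turning to Mesh TensorFlow, here is the Fourier-space trick from item 1 as a minimal NumPy sketch (an illustration of the idea, not FlowPM's exact kernels): on a periodic mesh, the Poisson equation for the gravitational potential becomes an algebraic division in Fourier space, phi(k) = -4*pi*G*delta(k)/k², so the potential (and, from its gradient, the forces) can be obtained with FFTs.

import numpy as np

def gravitational_potential(density, box_size, G=1.0):
    """Solve the periodic Poisson equation for a 3D density field with FFTs."""
    ng = density.shape[0]
    delta_k = np.fft.rfftn(density - density.mean())
    # Wave numbers along the full and half (real-FFT) axes.
    k = 2 * np.pi * np.fft.fftfreq(ng, d=box_size / ng)
    kz = 2 * np.pi * np.fft.rfftfreq(ng, d=box_size / ng)
    k2 = k[:, None, None] ** 2 + k[None, :, None] ** 2 + kz[None, None, :] ** 2
    k2[0, 0, 0] = 1.0  # avoid dividing by zero for the mean mode
    phi_k = -4.0 * np.pi * G * delta_k / k2
    phi_k[0, 0, 0] = 0.0  # the mean of the potential is arbitrary; set it to zero
    return np.fft.irfftn(phi_k, s=density.shape)

Each FFT costs N_g log(N_g) operations, which is where the improvement over the N_g² direct mesh computation comes from.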
The Mesh TensorFlow framework allows us to easily describe our simulation in terms of distributed tensors, keeping track behind the scenes of distributed gradients and of the communication between devices. By writing our N-body solver in Mesh TensorFlow, we can distribute these massive simulation volumes across many devices on supercomputers. In such a simulation, every process and mesh component evolves a different region of space at every time step. Using the same simulation code, we can simultaneously evolve 128 independent Universes on 128x128x128 grids on Cloud TPUs, or a single Universe on a 1024x1024x1024 grid on 64 GPUs at national computing facilities like NERSC. In addition to enabling large simulations, a model parallelism framework also allows us to speed up intermediate-size simulations by splitting the computation across multiple processors. This is demonstrated in Figure 4, where we show that, on average, FlowPM simulations are 40x faster than FastPM, the current differentiable Python implementation of PM simulations. A minimal Mesh TensorFlow sketch follows after the figure.
Figure 4: Time scaling with the number of processors for one step of a 256³ grid PM simulation, comparing FastPM (CPU-based Python code run on Cori Haswell cores) and FlowPM (GPU-based Mesh TensorFlow code run on Cori GPUs).
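To give a flavor of the programming model, here is a minimal Mesh TensorFlow sketch. The dimension names, the 2x2 processor grid, and the device list are all hypothetical, and this is not FlowPM's actual mesh setup: a 3D field is declared with named logical dimensions, two of which are split across four GPUs, and a global reduction transparently communicates across devices.

import tensorflow as tf
import mesh_tensorflow as mtf

graph = mtf.Graph()
mesh = mtf.Mesh(graph, "nbody_mesh")

# Named logical dimensions of the simulation volume.
nx = mtf.Dimension("nx", 128)
ny = mtf.Dimension("ny", 128)
nz = mtf.Dimension("nz", 128)

# A distributed tensor; each device only ever holds its own slice.
field = mtf.zeros(mesh, mtf.Shape([nx, ny, nz]))
total = mtf.reduce_sum(field)  # global reduction across all devices

# Map the nx and ny dimensions onto a 2x2 grid of processors.
mesh_impl = mtf.placement_mesh_impl.PlacementMeshImpl(
    mtf.convert_to_shape("row:2;col:2"),
    mtf.convert_to_layout_rules("nx:row;ny:col"),
    ["/gpu:0", "/gpu:1", "/gpu:2", "/gpu:3"])
lowering = mtf.Lowering(graph, {mesh: mesh_impl})
tf_total = lowering.export_to_tf_tensor(total)  # back to an ordinary tf.Tensor

The key design point is that the layout rules, not the model code, decide how tensors are split; the same simulation can therefore run on one GPU, 64 GPUs, or a TPU pod by changing only the mesh description.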
Outlook
Numerical simulations of our Universe have formed the backbone of large scale structure cosmology for more than three decades. With FlowPM, we are taking the first steps towards integrating these simulations with deep learning components in a single unified framework, while maintaining an exact physical understanding of the underlying phenomena. In cosmology, this combination opens doors to developing novel analytic tools and to pushing modeling into regimes that were hitherto intractable. These are areas of active research, made increasingly urgent by the next generation of cosmological surveys coming online at the turn of the decade, which will observe tens of millions of objects in the Universe. This confluence of physical modeling and machine learning has largely been made possible by the model parallelism framework of Mesh TensorFlow, and we hope that the analytic and computing tools developed with FlowPM will also benefit large scale scientific applications in disciplines beyond cosmology.
We would like to earnestly acknowledge the support of our colleagues at NERSC (Wahid Bhimji, Steve Farrell, Peter Harrington, Prabhat) and at Google (Niki Parmar, Thiru Palanisamy, Noam Shazeer, Youlong Cheng, Zak Stone), as well as others who have pointed us to relevant resources, actively discussed ways to optimize and improve these simulations, and provided useful feedback.
References:
1. FastPM (the underlying PM scheme for FlowPM): https://arxiv.org/abs/1603.00476
2. FlowPM code in TensorFlow: https://github.com/modichirag/flowpm
3. Dai et al.: https://drive.google.com/open?id=0B7_TnnOHCrvBcWxHR2tVUkR2N0xDbHo3TUxyN2hZemtZSUJn
4. Reconstruction of initial conditions with neural networks: https://arxiv.org/abs/1805.02247
5. Mesh TensorFlow: https://github.com/tensorflow/mesh, https://arxiv.org/abs/1811.02084
6. Parallel FlowPM code with Mesh TensorFlow: https://github.com/modichirag/flowpm/tree/mesh