A guest post by Anne Dattilo, a PhD student in Astronomy and Astrophysics at the University of California Santa Cruz.
Introduction
What is an exoplanet? How do we find them? Most importantly,
why do we want to find them? Exoplanets are planets outside of our Solar System - they orbit any star other than our Sun. We can find these exoplanets via a few methods: radial velocity, transits, direct imaging, and microlensing. The most popular method, and what I used to find planets, is the transit method.
Finding Exoplanets
The “transit method” of finding exoplanets was not the first method used to detect them, but it has been the most prolific, finding over 4,000 planets [1]. When we plot the brightness of a star over time, we call it a light curve. If a planet were to pass in front of the star from our point of view, the light we see would dim and there would be a dip in our light curve - a transit event. If this happens periodically, that is a sign that there may be a planet around that star.
Demonstration of a planetary transit. When a planet passes in front of its host star, the light we receive from the star dims in a characteristic shape.
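To make the idea concrete, here is a small, purely illustrative sketch (not taken from the original work) that simulates a light curve with a periodic box-shaped dip, the kind of signal the transit method looks for. The period, depth, and cadence values are made up for demonstration.

```python
import numpy as np

# Illustrative values only: a 3-day orbit, 2-hour transit, 1% deep.
period_days = 3.0
duration_days = 2.0 / 24.0
depth = 0.01

# "Observe" the star every 30 minutes for 80 days (roughly one K2 campaign).
time = np.arange(0.0, 80.0, 0.5 / 24.0)
flux = 1.0 + np.random.normal(0.0, 0.001, time.size)  # noisy, roughly constant baseline

# Dim the star whenever the planet is crossing the stellar disk.
in_transit = (time % period_days) < duration_days
flux[in_transit] -= depth

# Each dip below the baseline is a transit event; the regular spacing of
# those dips (every `period_days`) is what a periodic search hunts for.
```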
There have been many telescopes dedicated to finding planets using this method, both ground- and space-based. These include WASP [2], NASA’s Kepler Space Telescope, its second mission called K2 [3], and its successor TESS [4]. These telescopes point at parts of the sky continuously to observe thousands of stars at once. The longer they observe, the more planets they may find!
Because these telescopes look at so many stars, there can also be false-positive signals in the data, such as eclipsing binaries, other astrophysical phenomena, or instrumental noise. I specifically used K2 data for my neural network, and because the telescope became unstable, there was a lot of additional noise in my data that mimicked convincing planet candidates.
Kepler did not observe just a few stars in its second mission; it observed thousands. It is difficult enough for a human astronomer to go through a small set of data and consistently find planet candidates, but it is extremely difficult to go through 200,000 signals and be consistent, timely, and unbiased in separating planets from false-positive signals. This calls for an automatic, unbiased method of identifying planet candidates.
Neural Networks
My colleagues Chris Shallue and Andrew Vanderburg had already shown that planets could be found with a CNN, discovering multi-planet systems in the original Kepler data [5]. Their model was built with TensorFlow, and for someone who had never done any machine learning, it was easy to learn and build from the ground up.
I used CNNs to find planets in the K2 data. My CNN was based upon the work of Shallue and Vanderburg and adapted to work with my much noisier data. I used K2 campaigns 1-16, excluding campaigns 9 and 11 because they primarily focused on microlensing targets. The extracted light curves for these campaigns can be found here. These light curves were then searched for periodic events following the methods described by Vanderburg 2016 [6]. This process resulted in 51,711 signals, 31,575 of which were classified by hand into three categories in order to create the training set for the neural network.
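That search looks for repeating, transit-shaped dips in each light curve. As an illustration of this kind of periodic transit search (this is an assumption about the flavor of algorithm, not the exact pipeline from the paper), here is a minimal Box Least Squares (BLS) example using astropy, applied to the synthetic `time` and `flux` arrays from the earlier sketch.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

# `time` and `flux` come from the synthetic light-curve sketch above.
bls = BoxLeastSquares(time, flux)

# Scan a grid of trial periods, assuming a ~2-hour transit duration.
periodogram = bls.autopower(2.0 / 24.0)

# The period with the highest power is the best transit candidate.
best = np.argmax(periodogram.power)
print("Best period (days):", periodogram.period[best])
print("Estimated depth:", periodogram.depth[best])
```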
A CNN works best when the data are of similar shape and size, but phase-folded light curves can come in many shapes and sizes depending on the characteristics of the planetary system. The orbital periods of planets differ, as do the depths of their transits.
I processed my data into two image features: a local view and a global view. A ‘global view’ is the entire phase-folded light curve with the transit event in the center, binned so that each global view is the same length. A ‘local view’ is a zoomed-in look at the transit event, with only two transit durations on either side of the event instead of the entire period (also binned so that each local view is the same length). These features are normalized so that the transit depth is always -1. We now have two looks at the possible planets that are the same shape and size!
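As a rough sketch of what that preprocessing might look like (the helper names and bin counts below are my own illustration, not the code from the paper), one can phase-fold the light curve, median-bin it into fixed-length global and local views, and normalize so the deepest point of the transit sits at -1:

```python
import numpy as np

def bin_view(phase, flux, lo, hi, num_bins):
    """Median-bin the (phase, flux) points into num_bins equal-width bins."""
    edges = np.linspace(lo, hi, num_bins + 1)
    view = np.full(num_bins, np.nan)
    for i in range(num_bins):
        in_bin = (phase >= edges[i]) & (phase < edges[i + 1])
        if in_bin.any():
            view[i] = np.median(flux[in_bin])
    # Fill any empty bins so every view has exactly num_bins values.
    view[np.isnan(view)] = np.nanmedian(view)
    return view

def make_views(time, flux, period, t0, duration,
               global_bins=2001, local_bins=201):  # bin counts are illustrative
    # Phase-fold so the transit is centered at phase 0.
    phase = (time - t0 + 0.5 * period) % period - 0.5 * period

    # Global view: the entire folded orbit. Local view: only two transit
    # durations on either side of the event.
    global_view = bin_view(phase, flux, -0.5 * period, 0.5 * period, global_bins)
    local_view = bin_view(phase, flux, -2 * duration, 2 * duration, local_bins)

    # Normalize each view so its median is 0 and the transit depth is -1.
    def normalize(v):
        v = v - np.median(v)
        return v / max(abs(v.min()), 1e-9)

    return normalize(global_view), normalize(local_view)
```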
Figure taken from Dattilo et al. 2019 [7], demonstrating the difference between a global and local view.
I trained the CNN on 27,634 signals that were labeled as one of three categories:
- “E” for eclipsing binary
- “J” for junk/instrumental artifact
- “C” for planet candidate
The drop in the number of signals between the hand-classified set and the final training set was due to some signals “failing” in the preprocessing stage described above. We trained the CNN on 80% of the signals, leaving 10% for validation and 10% for our final test set. We trained using the Adam optimizer [8] and ran for 4,000 iterations.
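A minimal sketch of that split (the array names and the integer encoding of the “E”/“J”/“C” labels are placeholders of my own, with stand-in data so the snippet runs):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in arrays; in practice these come from the preprocessing step above.
# Integer labels: 0 = "E" eclipsing binary, 1 = "J" junk, 2 = "C" planet candidate.
num_signals = 1000
global_views = rng.normal(size=(num_signals, 2001))
local_views = rng.normal(size=(num_signals, 201))
labels = rng.integers(0, 3, size=num_signals)

# Shuffle once, then carve off 80% / 10% / 10% for train / validation / test.
order = rng.permutation(num_signals)
n_train = int(0.8 * num_signals)
n_val = int(0.1 * num_signals)
train_idx, val_idx, test_idx = np.split(order, [n_train, n_train + n_val])

train = (global_views[train_idx], local_views[train_idx], labels[train_idx])
val = (global_views[val_idx], local_views[val_idx], labels[val_idx])
test = (global_views[test_idx], local_views[test_idx], labels[test_idx])
```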
Architecture of the final CNN
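The exact architecture is described in the paper; purely to illustrate the two-branch idea (the layer sizes, filter counts, and learning rate below are placeholders, not the published values), a Keras sketch might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

GLOBAL_LEN, LOCAL_LEN = 2001, 201  # must match the bin counts used in preprocessing
NUM_CLASSES = 3                    # "E", "J", "C"

def build_model():
    # Global-view branch: 1-D convolutions over the full phase-folded curve.
    global_in = layers.Input(shape=(GLOBAL_LEN, 1), name="global_view")
    g = global_in
    for filters in (16, 32):
        g = layers.Conv1D(filters, 5, activation="relu", padding="same")(g)
        g = layers.MaxPooling1D(pool_size=5, strides=2)(g)
    g = layers.Flatten()(g)

    # Local-view branch: 1-D convolutions over the zoomed-in transit.
    local_in = layers.Input(shape=(LOCAL_LEN, 1), name="local_view")
    loc = local_in
    for filters in (16, 32):
        loc = layers.Conv1D(filters, 5, activation="relu", padding="same")(loc)
        loc = layers.MaxPooling1D(pool_size=5, strides=2)(loc)
    loc = layers.Flatten()(loc)

    # Merge the two views and classify into the three categories.
    merged = layers.concatenate([g, loc])
    merged = layers.Dense(512, activation="relu")(merged)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)
    return Model(inputs=[global_in, local_in], outputs=out)

model = build_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),  # learning rate is illustrative
    loss="sparse_categorical_crossentropy",                  # integer labels from the split above
    metrics=["accuracy"],
)

# Training would then run for a fixed number of steps/epochs, e.g.:
# model.fit([train[0][..., None], train[1][..., None]], train[2],
#           validation_data=([val[0][..., None], val[1][..., None]], val[2]),
#           epochs=25, batch_size=64)
```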
Conclusion
Once trained, we can use the held-out test set to compute metrics for how well the model has learned what a planet looks like, based on its predictions for each signal. The model can then be used on a new set of data to find new planets. My model was successful after training, achieving 98% accuracy on the test set. It was so successful that I was able to use it to identify and verify two new exoplanets!
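Continuing the hypothetical Keras sketches above, computing those test-set metrics and scoring new signals would look roughly like this:

```python
# Evaluate on the held-out test set (the trailing None adds the channel axis Conv1D expects).
g_test, l_test, y_test = test
test_loss, test_acc = model.evaluate([g_test[..., None], l_test[..., None]], y_test)
print(f"Test accuracy: {test_acc:.3f}")

# Score new signals: the softmax output gives a probability for each class;
# a high probability for class 2 ("C") flags a likely planet candidate.
probs = model.predict([g_test[..., None], l_test[..., None]])
candidate_score = probs[:, 2]
```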
In the introduction, I asked why we want to find exoplanets, and there are many reasons. We only discovered planets outside of our Solar System about 30 years ago, so the field is very young. We need to increase our sample size of planets to learn how planets form, how common they are, and how the details of planet populations differ. To find planets like our Earth, we need to increase the sensitivity of our discovery methods, and a technique like this one can help.
If you want to learn more, a complete write-up of my methods can be found here, and my code can be found here. Please note that this code was written using an older version of TensorFlow; if you’re learning this today, you should begin with version 2.0, which was released recently and is easier to use than before. You can find the tutorials here. If you would like to start testing your own neural network on planet data, a full run-through by Chris Shallue can be found here. To get started with all types of deep learning, check out the TensorFlow tutorials, just like I did!
Anne Dattilo is currently a PhD student in Astronomy and Astrophysics at the University of California Santa Cruz studying exoplanet demographics. This work was done as an undergrad at The University of Texas at Austin with Dr. Andrew Vanderburg and Chris Shallue and the financial support of the John W. Cox Foundation.
Notes