1. Mind the gap : the discrepancy between simulation and reality drives interpretations of the Galactic Center Excess
Sascha Caron, Christopher Eckner, Luc Hendriks, Gudlaugur Johannesson, Roberto Ruiz de Austri, Gabrijela Zaharijas, 2023, original scientific article
Keywords: gamma-ray astrophysics, dark matter, galactic center
Published in RUNG: 15.01.2024
2. AutoSourceID-Classifier : star-galaxy classification using a convolutional neural network with spatial information
F. Stoppa, Saptashwa Bhattacharyya, R. Ruiz de Austri, P. Vreeswijk, S. Caron, Gabrijela Zaharijas, S. Bloemen, G. Principe, D. Malyshev, Veronika Vodeb, 2023, original scientific article
Abstract:
Aims: Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification’s reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux solely drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images.
Methods: The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNNs) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. Subsequently, we employed a modified Platt scaling calibration for the output of the CNN, ensuring that the derived probabilities were effectively calibrated and delivered precise and reliable results.
Results: We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier that outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C’s direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding transient host identification and advancing deep-sky astronomy.
Keywords: astronomical databases, data analysis, statistics, image processing
Published in RUNG: 12.12.2023
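The abstract describes two technical ingredients: a CNN that combines the 32x32 cutout with the source's position in the full-field image, and a Platt-scaling-style calibration of the network output. The sketch below shows one way such a setup could look. The layer sizes, the build_classifier and platt_calibrate names, and the use of Keras and scikit-learn are illustrative assumptions, not the published ASID-C implementation.

```python
# Minimal sketch of a cutout + position classifier with Platt-style calibration.
# All layer sizes and names here are illustrative assumptions, not the ASID-C code.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

def build_classifier():
    """Small CNN over a 32x32 single-band cutout, concatenated with the
    source's normalised (x, y) position in the full-field image."""
    cutout = tf.keras.Input(shape=(32, 32, 1), name="cutout")
    position = tf.keras.Input(shape=(2,), name="field_position")

    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(cutout)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Concatenate()([x, position])
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    prob = tf.keras.layers.Dense(1, activation="sigmoid", name="p_galaxy")(x)

    model = tf.keras.Model(inputs=[cutout, position], outputs=prob)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def platt_calibrate(raw_probs, labels):
    """Fit a logistic regression on the network's logits (Platt scaling) so that
    probabilities reported on new data are calibrated."""
    logits = np.log(raw_probs / (1.0 - raw_probs + 1e-12) + 1e-12).reshape(-1, 1)
    calibrator = LogisticRegression()
    calibrator.fit(logits, labels)
    return calibrator  # calibrator.predict_proba(logits)[:, 1] gives calibrated p
```

In this kind of scheme the calibrator is fit on a held-out validation set, so the final probabilities can be read as calibrated star/galaxy probabilities rather than raw network scores.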
3. AutoSourceID-FeatureExtractor : optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation
F. Stoppa, R. Ruiz de Austri, P. Vreeswijk, Saptashwa Bhattacharyya, S. Caron, S. Bloemen, Gabrijela Zaharijas, G. Principe, Veronika Vodeb, P. J. Groot, E. Cator, G. Nelemans, 2023, original scientific article
Abstract:
Aims: In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximate locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or by other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data.
Methods: The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties, without the need for additional information such as the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment.
Results: We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, predicts more accurate features than similar codes such as SourceExtractor, and that the two-step method estimates well-calibrated uncertainties that are better behaved than those of similar methods based on deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
Keywords: data analysis, image processing, astronomical databases
Published in RUNG: 08.11.2023
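To make the two-step mean variance estimation idea concrete, the sketch below first fits a mean head with a plain MSE loss and then freezes it while a log-variance head is trained with a Gaussian negative log-likelihood. The tiny architecture, the three-dimensional output (flux plus two sub-pixel offsets), and the Keras usage are assumptions for illustration, not the ASID-FE code.

```python
# Sketch of a two-step mean variance estimation (MVE) setup: the feature (mean)
# head is trained first, then frozen while a variance head is fit with a
# Gaussian negative log-likelihood. Architecture details are assumptions.
import tensorflow as tf

def gaussian_nll(y_true, y_pred):
    """y_pred packs [means, log-variances]; returns the per-batch Gaussian NLL."""
    mean, log_var = tf.split(y_pred, num_or_size_splits=2, axis=-1)
    return tf.reduce_mean(0.5 * (log_var + tf.square(y_true - mean) / tf.exp(log_var)))

cutout = tf.keras.Input(shape=(32, 32, 1))
trunk = tf.keras.layers.Conv2D(32, 3, activation="relu")(cutout)
trunk = tf.keras.layers.GlobalAveragePooling2D()(trunk)

mean_head = tf.keras.layers.Dense(3, name="mean")(trunk)        # flux, dx, dy
log_var_head = tf.keras.layers.Dense(3, name="log_var")(trunk)  # their log-variances

# Step 1: fit the trunk and the mean head alone with a plain MSE loss.
mean_model = tf.keras.Model(cutout, mean_head)
mean_model.compile(optimizer="adam", loss="mse")
# mean_model.fit(train_cutouts, train_features, ...)

# Step 2: freeze everything trained in step 1, then fit the variance head
# by minimising the Gaussian NLL around the now-fixed means.
for layer in mean_model.layers:
    layer.trainable = False
joint = tf.keras.layers.Concatenate()([mean_head, log_var_head])
mve_model = tf.keras.Model(cutout, joint)
mve_model.compile(optimizer="adam", loss=gaussian_nll)
# mve_model.fit(train_cutouts, train_features, ...)
```

Splitting the training this way keeps the variance estimates from degrading the feature fit, which is the motivation usually given for two-step MVE over training both heads jointly from scratch.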
4. AutoSourceID-Light : fast optical source localization via U-Net and Laplacian of Gaussian
F. Stoppa, P. Vreeswijk, S. Bloemen, Saptashwa Bhattacharyya, S. Caron, G. Jóhannesson, R. Ruiz de Austri, C. Van den Oetelaar, Gabrijela Zaharijas, P. J. Groot, E. Cator, G. Nelemans, 2022, original scientific article
Abstract:
Aims: With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients while they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques to naturally handle large amounts of data and rapidly localize sources in optical images.
Methods: We show that the ASID-L algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter, provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter, which then estimates the exact location.
Results: Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used code, rapidly detecting more sources not only in low- and mid-density fields but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
Keywords: astronomical databases, data analysis, image processing
Published in RUNG: 23.01.2023
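The pipeline above hands the U-Net output to a Laplacian of Gaussian (LoG) filter for the final position estimate. Below is a minimal sketch of that post-processing step, assuming a scikit-image LoG blob detector and illustrative threshold and sigma values; it is not the released ASID-L code.

```python
# Sketch of the post-processing stage: a U-Net-style probability map is turned
# into source coordinates with a Laplacian of Gaussian (LoG) blob detector.
# The threshold and sigma values are assumptions.
import numpy as np
from skimage.feature import blob_log

def localise_sources(prob_map, detection_threshold=0.5):
    """Return (y, x) positions of sources found in a U-Net output map."""
    # Suppress pixels the network considers background before blob detection.
    masked = np.where(prob_map >= detection_threshold, prob_map, 0.0)
    # blob_log responds to bright, roughly Gaussian peaks; the sigma range
    # should bracket the expected PSF width in pixels.
    blobs = blob_log(masked, min_sigma=1.0, max_sigma=3.0, num_sigma=5, threshold=0.1)
    return blobs[:, :2]  # each row is (y, x, sigma); keep the coordinates

# Usage on a toy map with two injected point sources:
# from scipy.ndimage import gaussian_filter
# prob_map = np.zeros((64, 64)); prob_map[20, 20] = prob_map[40, 45] = 1.0
# prob_map = gaussian_filter(prob_map, sigma=1.5)
# print(localise_sources(prob_map / prob_map.max()))
```

The appeal of this split is that the network only needs to separate sources from artifacts, while the LoG step supplies the precise sub-image positions cheaply.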
5. Identification of point sources in gamma rays using U-shaped convolutional neural networks and a data challenge
Boris Panes, Christopher Eckner, Luc Hendriks, Sascha Caron, Klaas Dijkstra, Gudlaugur Johannesson, Roberto Ruiz de Austri, Gabrijela Zaharijas, 2021, original scientific article
Keywords: gamma rays, astroparticle physics, data analysis
Published in RUNG: 17.02.2022
7. Localisation and classification of gamma ray sources using neural networks
Chris van den Oetelaar, Saptashwa Bhattacharyya, Boris Panes, Sascha Caron, Gabrijela Zaharijas, Roberto Ruiz de Austri, Guðlaugur Jóhannesson, 2021, published scientific conference contribution
Abstract: With the limited statistics and spatial resolution of current detectors, accurately localising and separating gamma-ray point sources from the dominating interstellar emission in the GeV energy range is challenging. Motivated by the shortcomings of the traditional methods used for gamma-ray source detection, we demonstrate the application of deep-learning-based algorithms to automatically detect and classify point sources, which can be applied directly to binned Fermi-LAT data and potentially be generalised to other wavelengths. For the point-source detection task, we use the popular U-Net deep neural network structure together with image segmentation for precise localisation of sources; various clustering algorithms were tested on the segmented images. The training samples are based on the source properties of AGNs and PSRs from the latest Fermi-LAT source catalog, in addition to the background interstellar emission. Finally, we created a more complex but robust training data generation scheme that exploits the full detector potential by increasing the spatial resolution at the highest energies.
Keywords: gamma-rays, deep learning, computer vision
Published in RUNG: 01.10.2021
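As a rough illustration of the clustering step mentioned in this abstract, the sketch below groups the pixels that a U-Net-style segmentation flags as point-source-like with DBSCAN and reports cluster centroids as candidate positions. The threshold and DBSCAN parameters are assumptions, and DBSCAN stands in for the "various clustering algorithms" tested in the paper.

```python
# Sketch of the clustering step: pixels flagged as "point source" by a
# U-Net-style segmentation are grouped with DBSCAN, and each cluster centroid
# becomes a candidate source position. Parameter values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def candidate_sources(segmentation_map, threshold=0.5, eps=1.5, min_samples=3):
    """Turn a per-pixel source-probability map into (row, col) centroids."""
    ys, xs = np.where(segmentation_map >= threshold)
    if len(ys) == 0:
        return np.empty((0, 2))
    pixels = np.column_stack([ys, xs]).astype(float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pixels)
    # Label -1 marks noise pixels that belong to no cluster; skip them.
    centroids = [pixels[labels == k].mean(axis=0) for k in set(labels) if k != -1]
    return np.array(centroids)
```

Density-based clustering is a natural fit here because the number of sources in a segmented sky patch is not known in advance, unlike with k-means-style methods.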