1. AutoSourceID-Classifier : star-galaxy classification using a convolutional neural network with spatial information
F. Stoppa, Saptashwa Bhattacharyya, R. Ruiz de Austri, P. Vreeswijk, S. Caron, Gabrijela Zaharijas, S. Bloemen, G. Principe, D. Malyshev, Veronika Vodeb, 2023, original scientific article
Abstract:
Aims: Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies that can jeopardize the reliability of the classification. Certain galaxies, especially those that do not appear as extended sources, can be misclassified when shape parameters and flux alone drive the inference. We aim to create a robust and accurate classification network that identifies stars and galaxies directly from astronomical images.
Methods: The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNNs) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. We then applied a modified Platt scaling calibration to the output of the CNN, ensuring that the derived probabilities are well calibrated and deliver precise and reliable results.
Results: We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier that outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C's direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding transient host identification and advancing deep-sky astronomy.
Keywords: astronomical databases, data analysis, statistics, image processing
Published in RUNG: 12.12.2023
2. AutoSourceID-FeatureExtractor : optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation
F. Stoppa, R. Ruiz de Austri, P. Vreeswijk, Saptashwa Bhattacharyya, S. Caron, S. Bloemen, Gabrijela Zaharijas, G. Principe, Veronika Vodeb, P. J. Groot, E. Cator, G. Nelemans, 2023, original scientific article
Abstract:
Aims: In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximate locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or by other external catalogues. This work serves as a proof of concept, showing the potential of machine learning for estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data.
Methods: The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties, without the need for additional information such as the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment.
Results: We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, predicts more accurate features than similar codes such as SourceExtractor, and that the two-step method estimates well-calibrated uncertainties that are better behaved than those of similar methods using deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
Keywords: data analysis, image processing, astronomical databases
Published in RUNG: 08.11.2023
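The two-step mean variance estimation idea can be sketched in miniature: first fit the mean predictor alone (least squares), then freeze it and fit a separate log-variance model by minimising the Gaussian negative log-likelihood of the residuals. The toy below uses one-parameter linear models in plain NumPy rather than the CNNs of ASID-FE; the data, model forms, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heteroscedastic data: y = 2x + noise whose scale grows with x.
x = rng.uniform(0.0, 1.0, size=2000)
y = 2.0 * x + rng.normal(0.0, 0.1 + 0.5 * x)

# Step 1: fit the mean model alone (1-parameter linear model, least squares).
w = np.sum(x * y) / np.sum(x * x)
mu = w * x

# Step 2: freeze the mean; fit log-variance v(x) = c0 + c1*x by gradient
# descent on the Gaussian NLL  0.5 * (v + res2 * exp(-v))  (constants dropped).
c0, c1 = 0.0, 0.0
lr = 0.05
res2 = (y - mu) ** 2                       # squared residuals from step 1
for _ in range(20000):
    v = c0 + c1 * x                        # predicted log-variance
    g = 0.5 * (1.0 - res2 * np.exp(-v))    # dNLL/dv per sample
    c0 -= lr * np.mean(g)
    c1 -= lr * np.mean(g * x)

sigma = np.exp(0.5 * (c0 + c1 * x))        # predicted standard deviation
```

Training the variance only after the mean has converged avoids the well-known failure mode of joint MVE training, where the variance head inflates early to absorb large initial residuals.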
3. Investigating the VHE gamma-ray sources using deep neural networks
Veronika Vodeb, Saptashwa Bhattacharyya, G. Principe, Gabrijela Zaharijas, R. Austri, F. Stoppa, S. Caron, D. Malyshev, 2023, published scientific conference contribution
Abstract: The upcoming Cherenkov Telescope Array (CTA) will dramatically improve point-source sensitivity compared to the current Imaging Atmospheric Cherenkov Telescopes (IACTs). One of the key science projects of CTA will be a survey of the whole Galactic plane (GPS) using both the southern and northern observatories, with a specific focus on the inner Galactic region. We extend a deep learning-based image segmentation software pipeline (autosource-id), developed on Fermi-LAT data, to detect and classify extended sources for the simulated CTA GPS. Using updated instrument response functions for CTA (Prod5), we test this pipeline on simulated gamma-ray sources lying in the inner Galactic region (specifically 0∘
Keywords: deep neural network, cosmic-rays, CTA, classification
Published in RUNG: 31.08.2023
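The detection step of a segmentation pipeline like the one described above, turning a per-pixel network output map into discrete source detections, can be illustrated generically. This sketch is not the actual autosource-id code: the threshold value, 4-connectivity choice, and function names are illustrative; it only shows thresholding followed by connected-component labeling.

```python
import numpy as np
from collections import deque

def label_sources(prob_map, threshold=0.5):
    """Threshold a per-pixel probability map and label 4-connected
    components via BFS; returns (label_image, n_sources)."""
    mask = prob_map >= threshold
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                n += 1                      # start a new source region
                labels[i, j] = n
                q = deque([(i, j)])
                while q:                    # flood-fill the component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = n
                            q.append((ny, nx))
    return labels, n
```

Each labeled region can then be passed to a downstream classifier (point-like vs. extended), which is the classification task the abstract describes.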