Repository of University of Nova Gorica

Borderless aesthetics : the new ugly
Sandra Jovanovska, 2024, master's thesis

Abstract: Through the lens of ugliness, the purpose of this Master’s thesis is to explore a potential model of a new, unrestricted aesthetics. I hereby refer to an aesthetics beyond its canonical order, an individualistically driven scheme of standards, or perhaps no standards at all. All can be summed up with Eco’s quote on the opposition of the beautiful and the ugly: ’A beautiful nose shouldn’t be longer than that or shorter than that, on the contrary, an ugly nose can be as long as the one of Pinocchio, or as big as the trunk of an elephant, or like the beak of an eagle, and so ugliness is unpredictable, and offers an infinite range of possibility’. While the aesthetics of beauty has already established a framework of rules regarding proportion, symmetry, and harmony, the aesthetics of ugliness has no particular guidelines or limitations whatsoever. Unlike the beautiful, what we perceive as ugly has no lawfulness of its own, because for a long time in the history of art, ugliness was merely the opposite face of beauty. As a consequence, the ugly embodies a large category of undetermined standards in visual arts and culture, which leads to it becoming a vast unmapped territory of boundless autonomy. The ugly is, in that context, the key to facing and unleashing our phenomenological fears of bleak, dark, deformed realities that lie unchallenged and unaddressed on account of ugliness’s taboo status. Thus, once familiarised, I believe ugliness in art has a powerful impact, a quality that we have yet to begin to understand in order to get a full image of ourselves, for if we rely on beauty alone, as we did for such a long time in history, we deprive ourselves of a truly holistic proportion in art.
Keywords: art, man, ugliness, new, aesthetics, beauty, artist, time, image, Dada, history, context, different, body, personal, culture, transform, political, philosophy, standard, perspective.
Published in RUNG: 10.05.2024; Views: 276; Downloads: 5
.pdf Full text (2,86 MB)

AutoSourceID-Classifier : star-galaxy classification using a convolutional neural network with spatial information
F. Stoppa, Saptashwa Bhattacharyya, R. Ruiz de Austri, P. Vreeswijk, S. Caron, Gabrijela Zaharijas, S. Bloemen, G. Principe, D. Malyshev, Veronika Vodeb, 2023, original scientific article

Abstract: Aims: Traditional star-galaxy classification techniques often rely on feature estimation from catalogs, a process susceptible to introducing inaccuracies, thereby potentially jeopardizing the classification’s reliability. Certain galaxies, especially those not manifesting as extended sources, can be misclassified when their shape parameters and flux solely drive the inference. We aim to create a robust and accurate classification network for identifying stars and galaxies directly from astronomical images. Methods: The AutoSourceID-Classifier (ASID-C) algorithm developed for this work uses 32x32 pixel single filter band source cutouts generated by the previously developed AutoSourceID-Light (ASID-L) code. By leveraging convolutional neural networks (CNN) and additional information about the source position within the full-field image, ASID-C aims to accurately classify all stars and galaxies within a survey. Subsequently, we employed a modified Platt scaling calibration for the output of the CNN, ensuring that the derived probabilities were effectively calibrated, delivering precise and reliable results. Results: We show that ASID-C, trained on MeerLICHT telescope images and using the Dark Energy Camera Legacy Survey (DECaLS) morphological classification, is a robust classifier and outperforms similar codes such as SourceExtractor. To facilitate a rigorous comparison, we also trained an eXtreme Gradient Boosting (XGBoost) model on tabular features extracted by SourceExtractor. While this XGBoost model approaches ASID-C in performance metrics, it does not offer the computational efficiency and reduced error propagation inherent in ASID-C’s direct image-based classification approach. ASID-C excels in low signal-to-noise ratio and crowded scenarios, potentially aiding in transient host identification and advancing deep-sky astronomy.
Keywords: astronomical databases, data analysis, statistics, image processing
Published in RUNG: 12.12.2023; Views: 765; Downloads: 4
.pdf Full text (10,31 MB)

AutoSourceID-FeatureExtractor : optical image analysis using a two-step mean variance estimation network for feature estimation and uncertainty characterisation
F. Stoppa, R. Ruiz de Austri, P. Vreeswijk, Saptashwa Bhattacharyya, S. Caron, S. Bloemen, Gabrijela Zaharijas, G. Principe, Veronika Vodeb, P. J. Groot, E. Cator, G. Nelemans, 2023, original scientific article

Abstract: Aims: In astronomy, machine learning has been successful in various tasks such as source localisation, classification, anomaly detection, and segmentation. However, feature regression remains an area with room for improvement. We aim to design a network that can accurately estimate sources' features and their uncertainties from single-band image cutouts, given the approximated locations of the sources provided by the previously developed code AutoSourceID-Light (ASID-L) or other external catalogues. This work serves as a proof of concept, showing the potential of machine learning in estimating astronomical features when trained on meticulously crafted synthetic images and subsequently applied to real astronomical data. Methods: The algorithm presented here, AutoSourceID-FeatureExtractor (ASID-FE), uses single-band cutouts of 32x32 pixels around the localised sources to estimate flux, sub-pixel centre coordinates, and their uncertainties. ASID-FE employs a two-step mean variance estimation (TS-MVE) approach to first estimate the features and then their uncertainties without the need for additional information, for example the point spread function (PSF). For this proof of concept, we generated a synthetic dataset comprising only point sources directly derived from real images, ensuring a controlled yet authentic testing environment. Results: We show that ASID-FE, trained on synthetic images derived from the MeerLICHT telescope, can predict more accurate features with respect to similar codes such as SourceExtractor and that the two-step method can estimate well-calibrated uncertainties that are better behaved compared to similar methods that use deep ensembles of simple MVE networks. Finally, we evaluate the model on real images from the MeerLICHT telescope and the Zwicky Transient Facility (ZTF) to test its transfer learning abilities.
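The mean variance estimation (MVE) idea underlying the abstract rests on the heteroscedastic Gaussian negative log-likelihood: the network predicts both a mean and a variance per source, and the loss rewards variances that match the actual residuals. Below is a minimal sketch of that loss (constant term dropped); in the two-step scheme the mean head is first trained with MSE, then frozen while the variance head is trained with this NLL. This is a textbook formulation, not the paper's code.

```python
import numpy as np

def mve_nll(y, mu, log_var):
    """Heteroscedastic Gaussian negative log-likelihood for MVE networks.
    y: targets, mu: predicted means, log_var: predicted log-variances.
    Predicting log-variance keeps the variance positive by construction."""
    return np.mean(0.5 * log_var + 0.5 * (y - mu) ** 2 / np.exp(log_var))
```

The loss is minimised, for a fixed mean, exactly when the predicted variance equals the squared residual, which is what makes the second training step yield calibrated uncertainties.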
Keywords: data analysis, image processing, astronomical databases
Published in RUNG: 08.11.2023; Views: 737; Downloads: 7
URL Link to file

AutoSourceID-Light : Fast optical source localization via U-Net and Laplacian of Gaussian
F. Stoppa, P. Vreeswijk, S. Bloemen, Saptashwa Bhattacharyya, S. Caron, G. Jóhannesson, R. Ruiz de Austri, C. van den Oetelaar, Gabrijela Zaharijas, P. J. Groot, E. Cator, G. Nelemans, 2022, original scientific article

Abstract: Aims: With the ever-increasing survey speed of optical wide-field telescopes and the importance of discovering transients while they are still young, rapid and reliable source localization is paramount. We present AutoSourceID-Light (ASID-L), an innovative framework that uses computer vision techniques to naturally deal with large amounts of data and rapidly localize sources in optical images. Methods: We show that the ASID-L algorithm, based on U-shaped networks and enhanced with a Laplacian of Gaussian filter, provides outstanding performance in the localization of sources. A U-Net network discerns the sources in the images from many different artifacts and passes the result to a Laplacian of Gaussian filter, which then estimates the exact location. Results: Using ASID-L on the optical images of the MeerLICHT telescope demonstrates the great speed and localization power of the method. We compare the results with SExtractor and show that our method outperforms this more widely used method, rapidly detecting more sources not only in low- and mid-density fields, but particularly in areas with more than 150 sources per square arcminute. The training set and code used in this paper are publicly available.
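The localization step of the pipeline above can be sketched with a plain Laplacian of Gaussian (LoG) filter: the LoG response of a bright, blob-like source peaks at its centre, so thresholded local maxima of the (negated) response give candidate positions. This is a minimal illustration of the filtering idea only — in ASID-L the U-Net first suppresses artifacts before the LoG step — and `log_localise` with its `threshold` parameter is an invented name, not the public ASID-L interface.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def log_localise(image, sigma=2.0, threshold=0.05):
    """Localise point-like sources as peaks of the LoG response.
    The LoG of a bright blob is negative at its centre, so we negate
    the filter output before looking for local maxima."""
    response = -gaussian_laplace(image.astype(float), sigma=sigma)
    # A pixel is a peak if it equals the local maximum of its
    # 5x5 neighbourhood and exceeds the detection threshold.
    peaks = (response == maximum_filter(response, size=5)) & (response > threshold)
    return np.argwhere(peaks)  # array of (row, col) coordinates
```

Choosing `sigma` close to the width of the point spread function maximises the response at true source centres, which is why LoG filters are a common blob-detection choice in astronomical imaging.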
Keywords: astronomical databases, data analysis, image processing
Published in RUNG: 23.01.2023; Views: 1474; Downloads: 0

Photography and the narrative : How stories are told through the photographic medium
Tiziano Biagi, 2020, undergraduate thesis

Abstract: Some of the earliest pieces of evidence in art history show how people told stories with pictures and, throughout the centuries, this habit developed with the introduction of new techniques, themes, and tools. Given the value of authenticity that has often been ascribed to photography since its invention, this medium had to overcome criticism before its value as fine art was recognised. This diploma thesis analyses the ways in which photography is capable of carrying narratives. It also analyses the roles that the viewer, the photographer, and the image itself play in this process. To do so, this work examines notable theories on the topic, the intentions behind the photographers’ creative process, and the visual components of images. By focusing both on ideas introduced by scholars and on photographic works – including my diploma project Dune Mosse – the thesis underlines the importance that social and cultural contexts have in the narration of a story.
Keywords: Narrative, Storytelling, Interpretation, Context, Intention, Viewer, Image, Photographer, Photographic genres, Personal documentary photography
Published in RUNG: 13.10.2020; Views: 3745; Downloads: 134
.pdf Full text (16,30 MB)
