Description
Galaxy mergers can be used to probe galaxy evolution and to test cosmological models. Traditional high-redshift merger detection techniques, however, are resource-intensive: manual detection is both time-consuming and susceptible to human bias, while automated approaches require high-quality, space-based observations to measure parameters such as the Sérsic index, the Gini coefficient, and the visual separation between galaxies. Machine learning offers an alternative classification technique. Recent work used a convolutional neural network (CNN), a type of deep learning commonly applied to visual data, to identify high-redshift galaxy mergers in simulated images. These simulated images include both “pristine” data (no added noise) and more realistic “noisy” data (with noise added). We explore how these preliminary results can be improved using more complex network architectures such as ResNet and Inception. We also implement a domain-adversarial neural network (DANN), which learns features from one domain (pristine images) that remain relevant in another domain (noisy images). Conventional CNNs struggle with such multi-domain tasks: a CNN trained on pristine images, for example, achieves an accuracy of only 53% on noisy data. In contrast, our DANN achieves an accuracy of 70% on noisy data. This suggests that domain-adaptation algorithms may be a powerful tool for transferring knowledge learned from large-scale simulations to real observations from astronomical surveys.
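The abstract does not detail how the DANN achieves domain invariance, but in the standard formulation (Ganin et al.) the key mechanism is a gradient reversal layer: it acts as the identity during the forward pass, while in the backward pass it flips (and scales) the gradient flowing from a domain classifier into the feature extractor, pushing the learned features to be indistinguishable between domains. The following is a minimal illustrative sketch of that layer alone, not the authors' implementation; the class name and `lam` parameter are assumptions.

```python
import numpy as np

class GradientReversal:
    """Illustrative gradient reversal layer (Ganin et al.-style DANN).

    Forward pass: identity, so features reach the domain classifier
    unchanged. Backward pass: the incoming gradient is multiplied by
    -lam, so the feature extractor is trained to *confuse* the domain
    classifier, encouraging domain-invariant features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        # Identity: no change to the activations.
        return x

    def backward(self, grad_output):
        # Reverse and scale the gradient before it reaches
        # the shared feature extractor.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
fwd = grl.forward(features)            # unchanged activations
bwd = grl.backward(np.ones(3))         # flipped, scaled gradient
```

In a full DANN, this layer sits between the shared feature extractor and the domain classifier, while the merger/non-merger classifier branches off the same features without reversal.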