
Machine Learning for Gravitational-Wave Astronomy: Methods and Applications for High-Dimensional Laser Interferometry Data

Gravitational-wave astronomy is an emerging field of observational astrophysics concerned with the study of gravitational-wave signals predicted by Albert Einstein nearly a century ago but confirmed to exist only recently. Such signals were theorized to result from astronomical events such as the collisions of black holes, but they were long thought to be too faint to measure on Earth. In recent years, the construction of extremely sensitive detectors, including those of the Laser Interferometer Gravitational-Wave Observatory (LIGO), has enabled the first direct detections of gravitational waves, corroborating the theory of general relativity and heralding a new era of astrophysics research.

As a result of their extraordinary sensitivity, the instruments used to study gravitational waves are also subject to noise that can significantly limit their ability to detect the signals of interest with sufficient confidence. The detectors continuously record more than 200,000 time series of auxiliary data describing the state of a vast array of internal components and sensors, the environmental conditions in and around the detector, and other aspects of the instruments' operation. These data offer significant value for understanding the innumerable potential sources of noise and ultimately reducing or eliminating them, but it is clearly impossible to monitor, let alone understand, so much information manually. The field of machine learning offers a variety of techniques well-suited to problems of this nature.

In this thesis, we develop and present several machine learning–based approaches to automate the process of extracting insights from the vast, complex collection of data recorded by LIGO detectors. We introduce a novel problem formulation for transient noise detection and show for the first time how an efficient and interpretable machine learning method can accurately identify detector noise using all of these auxiliary data channels but without observing the noise itself. We present further work employing more sophisticated neural network–based models, demonstrating how they can reduce error rates by over 60% while also providing LIGO scientists with interpretable insights into the detector's behavior. We further illustrate the methods' utility by applying them to a specific, recurring type of transient noise, achieving a classification accuracy of over 97% while independently corroborating the results of previous manual investigations into the origins of this noise.

The methods and results presented in the following chapters are applicable not only to the specific gravitational-wave data considered but also to a broader family of machine learning problems involving prediction from similarly complex, high-dimensional data containing only a few relevant components in a sea of irrelevant information. We hope this work proves useful to astrophysicists and other machine learning practitioners seeking to better understand gravitational waves, extremely complex and precise engineered systems, or any of the innumerable extraordinary phenomena of our civilization and universe.

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/vqz7-1t18
Date: January 2022
Creators: Colgan, Robert Edward
Source Sets: Columbia University
Language: English
Type: Theses
