Space-time distortions – sounds like something right out of science fiction, doesn’t it? But gravitational lensing – the bending of light by the curvature of spacetime around massive objects – is more than a plot twist. Properly analysed, gravitational lenses let us see much further into the universe than we’ve ever seen before. And now, AI will help us speed this analysis up like never before.
What is it all about?
A neural network developed by researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of Stanford University and the SLAC National Accelerator Laboratory, can identify and analyse images of ‘gravitational lensing’. In this phenomenon, first predicted by Albert Einstein, giant masses, such as stars or entire galaxies, curve light around themselves, allowing us to look beyond what would normally be visible with a conventional telescope. When powerful instruments like the Hubble Space Telescope observe through these natural lenses, their effective range and resolution are greatly extended. On the other hand, unlike the clear images telescopes produce of unlensed objects, gravitational lenses usually distort the background objects into smeared rings and arcs, making them all the more difficult to identify and analyse.
And what does the AI do?
According to the developers, the AI is trained to identify images of gravitational lenses and carry out analysis on them much faster than previous methods – up to 10 million times faster. Yes, you read that right. Where a human analyst might take weeks, or even months, this neural network can finish in a fraction of a second.
After being shown almost half a million images of gravitational lenses, the neural network was able not only to identify new gravitational lenses, but also to determine how their mass was distributed and how strongly the background galaxy was magnified.
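To give a flavour of what such a network does, here is a deliberately tiny sketch of the kind of pipeline involved: a convolutional feature extractor followed by a linear readout that maps an image to a handful of numbers. Everything here is illustrative – the function names, the 16×16 image size, and the three output "lens parameters" are stand-ins invented for this example, not the architecture the KIPAC team actually used.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling to shrink the feature map."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def lens_regressor(image, kernel, weights, bias):
    """Toy forward pass: conv -> ReLU -> pool -> flatten -> linear readout.
    The three outputs stand in for hypothetical lens parameters
    (e.g. how mass is distributed, how magnified the background galaxy is)."""
    features = max_pool(relu(conv2d(image, kernel))).ravel()
    return weights @ features + bias

rng = np.random.default_rng(0)
image = rng.normal(size=(16, 16))            # stand-in for a lens image
kernel = rng.normal(size=(3, 3))             # one learned filter
feat_dim = max_pool(relu(conv2d(image, kernel))).size
weights = rng.normal(size=(3, feat_dim)) * 0.1
bias = np.zeros(3)

params = lens_regressor(image, kernel, weights, bias)
print(params.shape)                          # three predicted quantities
```

A real lensing network stacks many such convolution layers, uses far larger images, and is trained on hundreds of thousands of examples so that the readout produces physically meaningful parameters rather than random numbers; the point here is only the shape of the computation.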
Interestingly, although the KIPAC scientists tested the neural networks on the Sherlock high-performance computing cluster at the Stanford Research Computing Center, they said the analysis could have been done on a laptop or even on a cell phone. In fact, one of the neural networks they tested was designed to run on iPhones. Apparently, if you want to identify images of a faraway galaxy, there’s an app for that.
How useful is this?
The answer to that question would be – very useful, from an astronomical perspective. With powerful upcoming telescopes like the Large Synoptic Survey Telescope (whose 3.2-gigapixel camera is under construction at the SLAC National Accelerator Laboratory) expected to reveal thousands of gravitational lenses, this neural network could augment human researchers on the project to an unprecedented extent.