The title from AP is misleading. They are not using any "AI", just a simple principal component analysis and a Monte Carlo minimizer.
> In this approach, we apply principal components analysis (PCA) to a large library of high-fidelity, high-resolution general relativistic magnetohydrodynamic (GRMHD) simulations and obtain an orthogonal basis of image components. PRIMO then uses a Markov Chain Monte Carlo (MCMC) approach to sample the space of linear combinations of the Fourier transforms of a number of PCA components while minimizing a loss function that compares the resulting interferometric maps to the EHT data.
As an exercise in matching up MHD models with real data this can be an interesting study. Too bad the reporting is so off point, as usual. But I guess adding "AI" to your titles gets you more clicks these days.
I'm not sure what you mean, but the techniques they are using are not new and are not related to artificial intelligence. They use a 120-year-old numerical technique to "super-sample" their expensive simulations, which allows them to "fit" the simulations to the data in a robust way. They do the calculations on a computer because there are too many numbers to do on paper; that does not mean it has anything to do with AI.
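For concreteness, here is a minimal sketch of that idea (PCA via SVD on a made-up library of simulated images; none of the arrays or shapes here are from the actual PRIMO code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend library: 1000 simulated snapshot images, each 64x64 pixels, flattened.
library = rng.random((1000, 64 * 64))

# Center the library and take its leading principal components via SVD.
mean_image = library.mean(axis=0)
_, singular_values, components = np.linalg.svd(library - mean_image,
                                               full_matrices=False)

n_components = 20                     # keep a small orthogonal basis
basis = components[:n_components]     # shape (20, 4096)

# Any image close to the span of the library can now be approximated as
# mean_image + coefficients @ basis, i.e. a cheap linear combination instead
# of a fresh, expensive GRMHD run.
```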
My understanding is that PCA is one of the techniques employed to augment the input data; other techniques are then employed for the analysis of that data, and that is where the literal "artificial intelligence" happens.
--
Edit: see also, for example:
> In contrast, PCA finds correlations between different regions in Fourier space in the training data, which allows PRIMO to generate physically motivated inferences for the unobserved Fourier components
...inferences are drawn on the interpolated data. Automated model building.
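As a toy illustration of what "inferences for the unobserved Fourier components" can mean (everything below is invented for illustration, not the PRIMO pipeline): fit the PCA coefficients against the sparsely sampled components only, then let the same linear model predict everywhere else.

```python
import numpy as np

rng = np.random.default_rng(1)

n_points, n_comp = 4096, 20

# Pretend Fourier transforms of the mean image and of the PCA components.
mean_ft = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)
basis_ft = (rng.standard_normal((n_comp, n_points))
            + 1j * rng.standard_normal((n_comp, n_points)))

observed = rng.random(n_points) < 0.1          # sparse (u,v) coverage
true_coeffs = rng.standard_normal(n_comp)
data = mean_ft + true_coeffs @ basis_ft        # "measured" visibilities

# Least-squares fit of the coefficients, using only the observed components.
A = basis_ft[:, observed].T                    # (n_observed, n_comp)
b = (data - mean_ft)[observed]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

# The same coefficients now predict the unobserved Fourier components too.
reconstruction = mean_ft + coeffs @ basis_ft
```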
> We present a new reconstruction of the Event Horizon Telescope (EHT) image of the M87 black hole from the 2017 data set. We use PRIMO, a novel dictionary-learning-based algorithm that uses high-fidelity simulations of accreting black holes as a training set. By learning the correlations between the different regions of the space of interferometric data, this approach allows us to recover high-fidelity images even in the presence of sparse coverage and reach the nominal resolution of the EHT array. The black hole image comprises a thin bright ring with a diameter of 41.5 ± 0.6 μas and a fractional width that is at least a factor of 2 smaller than previously reported. This improvement has important implications for measuring the mass of the central black hole in M87 based on the EHT images
...«from the 2017 _data set_»
Edit, for more clarity:
> clean up
The image is reconstructed (not modified). Rebuilt. They used a different interpretation of the data.
Edit2: ...although there has been some amount of "gap-filling" ("best guessing").
I assume that the scientists are using the raw data directly; they probably aren't even bothering to convert it to a format that is visible on a monitor. In the same way, this AI touch-up is almost certainly just to give the public something to see.
The point of the Event Horizon Telescope (EHT) was actually to attempt to produce something like an image of the black hole at the center of the galaxy, so in a sense it was a pretty-picture mission. The point of this study was to attempt to link up the observations with models in "picture-space".
There are certainly other, less pretty, ways to look at the data, but producing something that can be intuitively understood, such as a 2D image, can also be helpful scientifically.
The significance of the original image was in verifying that our laws and models of black holes truly reflected reality.
This AI-enhanced image used simulations to guess at the missing data. Doesn't that defeat the purpose of the black hole image in the first place?
This situation reminded me of when Samsung used AI and professional moon photos to insert details into blurry photos of the moon taken on their camera [0]. Isn’t that image no longer a current, true representation?
PRIMO is a way to fit models to the data in image-space. Since the models are very expensive to run, it's not feasible to sample the model parameters directly in order to find a fit. Instead, they take a library of model outputs and parameters and do a principal component analysis in image-space. This allows them to very cheaply generate new samples without running the complete simulation for each new set of parameters.
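Very roughly, and with everything made up for illustration (a toy Metropolis walk over the PCA coefficients against a quadratic loss, not the actual PRIMO implementation), that "cheap sampling" can look like this:

```python
import numpy as np

rng = np.random.default_rng(2)

n_pixels, n_comp = 4096, 20
mean_image = rng.random(n_pixels)
basis = rng.standard_normal((n_comp, n_pixels))      # stored PCA components
noise = 0.5
data = (mean_image + rng.standard_normal(n_comp) @ basis
        + noise * rng.standard_normal(n_pixels))     # pretend observations

def loss(coeffs):
    # Evaluating a model is just a matrix product, not a new simulation.
    model = mean_image + coeffs @ basis
    return np.sum((model - data) ** 2) / (2 * noise ** 2)

coeffs = np.zeros(n_comp)
current = loss(coeffs)
samples = []
for _ in range(5000):
    proposal = coeffs + 0.05 * rng.standard_normal(n_comp)
    proposed = loss(proposal)
    if np.log(rng.random()) < current - proposed:    # Metropolis acceptance
        coeffs, current = proposal, proposed
    samples.append(coeffs.copy())
```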
Principal component analysis is an old-school numerical method and has nothing to do with AI.
> «The EHT is a very sparse array of telescopes. This is something we cannot do anything about because we need to put our telescopes on the tops of mountains and these mountains are few and far apart from each other [...] As a result, our telescope array has a lot of "holes" and we need to rely on algorithms that allow us to fill in the missing data»
This is not the "AI" part, though. The thing that the press is calling "AI" is the principal component analysis (PCA), which they use to sample their model without having to generate a full MHD simulation at each grid point.
PCA was invented in 1901. It hardly qualifies as AI.
> This is the first time we have used machine learning to fill in the gaps where we don't have data
After PCA, they used some gradient descent for the next steps.
--
Sorry, edit: there are more components of AI as I understood it (from a not fully careful reading of the two popular-science articles and a brief skim of the research article): right above, «machine learning» is said to be used to «fill in the gaps», and elsewhere it seems that a data interpreter is built to go from the observational data from the telescope to the results, qualitative and quantitative (e.g. shape and mass of the object, as you read in the other post).