So, there was quite a lot of talk at the workshop I just returned from about the use of machine learning for nonlinear physics (astrophysical, atmospheric, or ocean turbulence, for instance).
In the end, my take on this problem has not changed much and can be summed up as follows.
- It can do some rather good things (accelerating computations, or making them a bit more precise) on *interpolation* problems, i.e. inference on problems it was explicitly trained for, in physical regimes it was trained on using full-physics simulations. It may even help limit the use of full-scale HPC/numerical resources in this context IF USED SMARTLY AND REASONABLY (a big if, when I look at the way many astronomers are using it).
- It essentially remains untrustworthy, uncontrolled garbage for *extrapolation* problems, i.e. providing new results in physical regimes well beyond those it was trained on, and beyond those we can explicitly simulate with physics codes. It notably remains utterly terrible when there is no proper separation of scales in the problem, that is, for, well, most of the hard key problems we have to deal with in this field.
Most of the hype is about the latter, though. The progress on interpolation/fitting is real, but it is much more incremental and much less flamboyant than the hypothetical conceptual breakthroughs promised by the proponents of AI. That ML performs well on interpolation problems is not particularly surprising or controversial in itself either, as these things are essentially giant fitting factories.
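The "fitting factory" point can be made with a deliberately toy sketch (my own illustration, not from the workshop): fit a generic flexible model to a nonlinear signal, then compare its error inside and outside the training regime. Here the "model" is just a high-degree polynomial and the "physics" is a sine wave, both stand-ins I chose for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "physics": a nonlinear signal standing in for full-physics simulation data.
x_train = np.linspace(0.0, 2 * np.pi, 200)
y_train = np.sin(x_train) + 0.01 * rng.standard_normal(x_train.size)

# A generic "fitting factory": high-degree polynomial least squares.
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# Interpolation: queries inside the trained regime stay accurate.
x_in = np.linspace(0.5, 5.5, 100)
err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))

# Extrapolation: queries well beyond the trained regime blow up.
x_out = np.linspace(3 * np.pi, 4 * np.pi, 100)
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(f"max interpolation error: {err_in:.3g}")
print(f"max extrapolation error: {err_out:.3g}")
```

Any sufficiently flexible fit behaves this way: excellent in-distribution, arbitrarily wrong out-of-distribution, with no internal warning that it has left its domain of validity.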
All of this makes it very clear that it is critical to *define* what AI means, and the kind of problems one wants to apply ML or other techniques to, before any useful and reasonable conversation about their merits can be had. #AI #physics #machinelearning