Deep Learning to Discover Coordinates for Dynamics: Autoencoders & Physics Informed Machine Learning

Shared 2021-08-13
Joint work with Nathan Kutz: @nathankutzuw

Discovering physical laws and governing dynamical systems is often enabled by first learning a new coordinate system where the dynamics become simple. This is true for the heliocentric Copernican system, which enabled Kepler's laws and Newton's F=ma, for the Fourier transform, which diagonalizes the heat equation, and many others. In this video, we discuss how deep learning is being used to discover effective coordinate systems where simple dynamical systems models may be discovered.
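The Fourier example from the description can be checked numerically: in Fourier coordinates the heat equation decouples into independent ODEs, one per wavenumber, so each mode simply decays as exp(-αk²t). The sketch below (an illustration, not code from the video; the grid size, diffusivity, and initial condition are arbitrary choices) compares this diagonal update against ordinary finite-difference time stepping in physical coordinates.

```python
import numpy as np

# The 1-D periodic heat equation u_t = alpha * u_xx becomes diagonal in
# Fourier coordinates: each mode evolves as u_hat_k(t) = exp(-alpha k^2 t) u_hat_k(0).
n = 256
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
alpha, t = 0.1, 1.0
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi      # angular wavenumbers

u0 = np.exp(-10 * (x - np.pi) ** 2)             # Gaussian initial condition

# Evolution in the Fourier coordinate system: one decoupled scalar update per mode
u_fourier = np.real(np.fft.ifft(np.exp(-alpha * k**2 * t) * np.fft.fft(u0)))

# Reference: explicit finite-difference time stepping in physical coordinates,
# where the Laplacian couples neighboring grid points
dx, dt = L / n, 1e-4
u = u0.copy()
for _ in range(int(t / dt)):
    u = u + dt * alpha * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

print(np.max(np.abs(u - u_fourier)))  # small: both coordinate systems agree
```

The point mirrors the video's theme: a well-chosen coordinate transform (here the FFT) turns a coupled PDE into simple, independent dynamics, which is exactly what the autoencoder is asked to learn for systems where no such transform is known in closed form.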

Citable link for this video at: doi.org/10.52843/cassyni.4zpjhl
@eigensteve on Twitter
eigensteve.com
databookuw.com

Some useful papers:
www.pnas.org/content/116/45/22445 [SINDy + Autoencoders]
www.nature.com/articles/s41467-018-07210-0 [Koopman + Autoencoders]
arxiv.org/abs/2102.12086 [Koopman Review Paper]

This video was produced at the University of Washington

Comments (21)
  • Knowing a lot about autoencoders already, it is useful to see how they start to dissipate into other research areas, like physics (my favorite!). Great to see a good explanation of ML as a tool for further discovery. Thanks for this video!
  • YT algorithm does know where to take me, never thought i'd sit through a lecture in my leisure time fully engaged. Very well done!
  • Awesome work! Thanks for sharing in such a digestible way! I feel we cannot even start to imagine in how many different fields this approach could be used.
  • Incredible work your team is doing. So much to think about, with incredibly wide ranging applications
  • @gammaian
    Your channel is incredible Prof. Brunton, thank you for your work! There is so much value here
  • This is the most amazing stuff you guys have come up with so far!!! Awesome…great job.
  • Fantastic discussion! Love that you cover the complexities so in-depth.
  • Thank you for your videos, Steve! Also, your gesticulation eases the complexity of your talk significantly. Keep up with the good work!
  • Thank you for this vid. Really great content you are putting out for the community Steve.
  • I might just have found my research topic for my master's. Fascinating, thanks. Besides that, the quality of the video deserves remarks: Dark background which is good for eyes, persistently high quality graphics, and a narrator who does his best to create understanding with a decent use of English.
  • Awesome work. I can't believe I understood most of this topic. One of the best explanations I have seen so far.
  • @iestynne
    This was a super interesting one. Thank you very much for another engaging whirlwind tour through recent advances in computer science! :)
  • @__-op4qm
    Very kindly structured explanations like this make everyone feel welcome and interested. This is exactly why I subbed to this channel almost 2 years ago; all the videos are very inviting and welcoming, and by the end leave a calm sense of curiosity balanced with a pinch of reassurance, free of any unnecessary panic. In other places these types of subjects are often presented with a thick padding of jargon and dry math abstractions, but not here. Here the explanations are distilled into a sparse latent form without loss of generality and with a clear reminder of the real-life value of these methods.
  • @lablive
    I'm lucky to meet this work positioned between the 3rd and 4th science paradigms. As mentioned at the end of this video, I think the key to the interpretability is to take advantage of inductive biases described as existing models or algorithms for forward/inverse problems to design the encoder, decoder, and loss function.
  • @Ejnota
    How much I love these videos and the quality of the software they use.
  • This is a really good video. Really well explained and it let me see how your field was using this tech. Thanks for posting it. It sounds like you are doing a lot of interesting research. I'll keep an eye on your channel now that the algorithm recommended it to me.
  • thanks so much! this definitely helped me get into deep learning dynamical systems. I am working on a problem where I want to classify the state of a viral particle near a membrane. I transformed a lot of simulation frames into structural descriptors. I am at the point where I need to decide on an architecture and loss functions to learn. I have begun naively with a dense neural network. This however seems very interesting, not directly but it could be another input for the DNN. The Z could be describing certain constant dynamics surrounding the viral particle which could help classify the state. Anyway, thanks a lot!
  • Brought here by the YT algorithm while finishing my BS thesis on non-physics-informed autoencoders learning from the Shallow Water Equations. I will definitely dedicate further study to the lecture content. Thanks!
  • I've just been learning about how to use PCA to reduce dimensionality. Now I see one can go further and learn the meaning of the linear combination at the bottleneck. I don't really understand how one can use additional loss functions to find that meaning, but now I know it can be found. I'll need to think about it. Thank you.
  • Hi Steve, very interesting video. One remark on the slides that you use: I tend to watch videos with closed captions despite having average hearing, because it helps me keep track of what you're saying. I imagine people with hearing impairments will also do this, but sometimes elements on your slides overlap with YouTube's space for subtitles, like the derivative at 1:45. Perhaps this is something you could take into account, particularly for slides that do not contain many different elements and allow for scaling. Thanks again.