dualmindblade [he/him]

  • 9 Posts
  • 122 Comments
Joined 4 years ago
Cake day: September 21st, 2020

  • It really is. Another thing I find remarkable is that all the magic vectors (features) were produced automatically, without ever looking at the actual output of the model, only at activations in a middle layer of the network, and using a loss function that is purely geometric in nature: it has no idea what any of the features it is discovering mean.

    And the fact that this works seems to confirm, or at least nearly confirm, a non-trivial fact about how transformers do what they do. I always like to point out that we know more about the workings of the human brain than we do about the neural networks we have ourselves created. That's probably still true, but this makes me optimistic we'll at least clear that very low bar in the near future.
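
    To make the "purely geometric loss" point concrete, here's a minimal sketch of the kind of objective I mean, in the style of a sparse autoencoder trained on middle-layer activations. The dimensions, weights, and `l1_coeff` value are all made up for illustration; the point is just that the loss only sees activation vectors, never the model's output or its meaning.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    d_model, d_feat = 16, 64          # activation dim, overcomplete feature dim (illustrative)
    W_enc = rng.normal(0, 0.1, (d_feat, d_model))
    W_dec = rng.normal(0, 0.1, (d_model, d_feat))
    b_enc = np.zeros(d_feat)

    def sae_loss(acts, l1_coeff=1e-3):
        """Purely geometric loss on a batch of middle-layer activations.

        Reconstruction error plus an L1 sparsity penalty on the feature
        activations -- nothing here refers to the model's output tokens
        or to what any feature means.
        """
        feats = np.maximum(0.0, acts @ W_enc.T + b_enc)  # ReLU feature activations
        recon = feats @ W_dec.T                          # reconstruct the activations
        recon_err = np.mean(np.sum((acts - recon) ** 2, axis=-1))
        sparsity = np.mean(np.sum(np.abs(feats), axis=-1))
        return recon_err + l1_coeff * sparsity

    acts = rng.normal(size=(8, d_model))  # stand-in for real residual-stream activations
    print(sae_loss(acts))
    ```

    The decoder rows `W_dec[:, i]` are the candidate "magic vectors": directions in activation space that the training process finds on its own.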

  • Okay, just thinking out loud here: everything I've seen so far works as you described, with the training data taken either from reality or generated by a traditional solver. I'm not sure that's a fundamental limitation, though. You should be able to build a loss function that asks "how closely does the output satisfy the PDE?" rather than "how closely does the output match the data generated by my solver?". But in any case, you wouldn't need to beat the accuracy of the most accurate methods to get something useful: if the NN is super fast and has acceptable accuracy, you can use it to do the bulk of your optimization, then use a regular simulation and/or reality to check the result and possibly do some fine-tuning.
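
    A toy version of that "does the output satisfy the PDE?" loss, for a 1D Poisson problem with known exact solution (the grid, the choice of f, and the finite-difference stencil are all my own illustrative assumptions, not anyone's actual method):

    ```python
    import numpy as np

    # Grid for u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = -np.pi**2 * np.sin(np.pi * x)  # chosen so u(x) = sin(pi x) is the exact solution

    def pde_residual_loss(u):
        """Mean squared residual of the PDE itself -- no solver data needed.

        u: candidate solution values on the grid, e.g. a neural net's output.
        """
        u_xx = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # interior 2nd derivative
        interior = np.mean((u_xx - f[1:-1]) ** 2)      # how badly u'' = f is violated
        boundary = u[0] ** 2 + u[-1] ** 2              # penalize boundary violations
        return interior + boundary

    u_exact = np.sin(np.pi * x)
    print(pde_residual_loss(u_exact))  # small: only finite-difference error remains
    ```

    This is basically the idea behind physics-informed losses: the exact solution scores near zero, a wrong candidate scores high, and nothing in the loss came from a reference solver.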