Expanded from a post on X, which I felt didn’t do a good job expressing all of what I meant.

The past few years of “AI for life science” have been all about the models: AlphaFold 3, neural-network potentials, protein language models, binder generation, docking, co-folding, ADME/tox prediction, and so on. But Chai-2 (and lots of related work) shows us that the vibes are shifting. Models themselves are becoming just building blocks.







