
“That is far beyond what we could do pre-AI”

These are terrible times for skeptics. I was reminded of that again when I heard about the latest progress from the collaboration between AI heavyweight OpenAI and longevity biotech startup Retro Biosciences.

It wasn’t the headlines that caught my attention. Those focused on AI-designed versions of SOX2 and KLF4 - two of the famous Yamanaka factors - that delivered a 50-fold boost in reprogramming efficiency. That’s impressive in itself, since the original Yamanaka factors are notoriously inefficient and slower than a sloth. The new versions also appear more effective at reducing DNA damage, one of the core hallmarks of aging.

All great news! But not the real reason to be cheerful.

Until now, AI in biology mostly meant optimization: tweaking things we already knew, nudging drug leads, staying close to the comfort zone of existing data. What Retro and OpenAI showed with their customized model, GPT-4b micro, is different. The model stepped outside the territory evolution had explored and proposed designs that had never existed before - yet they still worked, and worked better.

I wouldn’t call that optimization. It’s more like that thing curmudgeons say AI can’t do: invent stuff.

And it changes how bold scientists can be. In the old world, experiments had to move cautiously. You could only alter a protein by a couple of “pearls” on its amino acid necklace at a time, because bigger leaps almost always made the whole thing break. Fifteen years of careful work produced versions barely distinguishable from nature’s originals.


Now that barrier is gone. OpenAI’s model can rewrite big stretches of these proteins and still land on designs that function - and often outperform the natural ones. It means researchers no longer have to inch forward. They can take bigger creative swings, explore regions they never would have dared touch, and still have a real chance of success.

As Aubrey de Grey put it on X:

“Rather than fine-tuning known proteins, it produced dramatically novel sequences, which were greatly superior in function to naturally evolved factors. That is far beyond what we could do pre-AI.”

And once you have that capability, you can start to manufacture the very data skeptics say we don’t have. The loop is straightforward: AI suggests new sequences → the lab tests them → the results, whether wins or failures, feed back into the model. Rinse and repeat. Each cycle produces fresh data, and the model becomes sharper with every turn.
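To make the loop concrete, here is a toy sketch in Python. Everything in it - the `ToyModel` class, the `run_assay` scoring, the sequence alphabet - is a hypothetical stand-in for illustration, not OpenAI’s or Retro’s actual system; it just shows the propose → test → feed-back cycle in code form.

```python
import random

def run_assay(seq):
    """Pretend wet-lab assay: score a sequence (here, just count 'K' residues)."""
    return seq.count("K")

class ToyModel:
    def __init__(self):
        self.seen = []  # every (sequence, score) result so far, wins and failures alike

    def propose(self, n, length=8):
        # Propose n random candidate sequences over a tiny amino-acid alphabet.
        return ["".join(random.choice("AKLS") for _ in range(length))
                for _ in range(n)]

    def update(self, results):
        # Feed every result back so the next round has more data to learn from.
        self.seen.extend(results)

def design_test_learn(model, cycles=3, batch=4):
    for _ in range(cycles):
        candidates = model.propose(batch)                  # AI suggests new sequences
        results = [(s, run_assay(s)) for s in candidates]  # the lab tests them
        model.update(results)                              # results feed back in
    return model.seen

model = ToyModel()
history = design_test_learn(model)
```

A real version would replace `run_assay` with actual wet-lab measurements and `propose` with a trained sequence model, but the shape of the loop - and the fact that each cycle manufactures fresh training data - is the same.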

That’s the real breakthrough here - not just a more efficient route to cellular reprogramming, but a glimpse of a new, greatly accelerated scientific workflow.
