

Poster in Workshop: 2nd Generative AI for Biology Workshop

Improving Genomic Models via Task-Specific Self-Pretraining

Sohan Mupparapu · Parameswari Krishnamurthy · Ratish Surendran Puduppully

Keywords: [ DNA language models ] [ low-resource learning ] [ self-supervised learning ]


Abstract:

Pretraining DNA language models (DNALMs) on the full human genome is resource-intensive, yet it is often considered necessary for strong downstream performance. Inspired by recent findings in NLP and long-context modeling, we explore an alternative: self-pretraining on task-specific, unlabeled data. Using the BEND benchmark, we show that DNALMs trained with self-pretraining match or exceed the performance of models trained from scratch under identical compute. While genome-scale pretraining may still offer higher absolute performance, task-specific self-pretraining provides a practical and compute-efficient strategy for building stronger supervised baselines. We will release code, pretrained models, and fine-tuned models to support reproducibility.
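The sketch below illustrates the two-phase recipe the abstract describes: a masked-token self-pretraining pass over the task's own unlabeled sequences, followed by supervised fine-tuning of the same encoder. The model architecture, tokenization, and hyperparameters here are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of task-specific self-pretraining, then fine-tuning.
# Architecture and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3, "[MASK]": 4, "[PAD]": 5}

def tokenize(seq, max_len=128):
    """Map a DNA string to a fixed-length tensor of token ids."""
    ids = [VOCAB.get(base, VOCAB["[PAD]"]) for base in seq[:max_len]]
    ids += [VOCAB["[PAD]"]] * (max_len - len(ids))
    return torch.tensor(ids)

class DNAEncoder(nn.Module):
    """Small transformer encoder with an MLM head (self-pretraining)
    and a classification head (downstream fine-tuning)."""
    def __init__(self, d_model=64, n_layers=2, n_heads=4, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(len(VOCAB), d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.mlm_head = nn.Linear(d_model, len(VOCAB))
        self.cls_head = nn.Linear(d_model, n_classes)

    def forward(self, x):
        return self.encoder(self.embed(x))

def self_pretrain_step(model, batch, optimizer, mask_prob=0.15):
    """One masked-token prediction step on unlabeled task sequences."""
    mask = torch.rand(batch.shape) < mask_prob
    corrupted = batch.clone()
    corrupted[mask] = VOCAB["[MASK]"]
    logits = model.mlm_head(model(corrupted))
    loss = nn.functional.cross_entropy(logits[mask], batch[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(model, batch, labels, optimizer):
    """One supervised step: mean-pool the encoder output and classify."""
    logits = model.cls_head(model(batch).mean(dim=1))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DNAEncoder()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    seqs = torch.stack([tokenize("ACGT" * 32), tokenize("GGCA" * 32)])
    # Phase 1: self-pretrain on the task's own unlabeled sequences.
    for _ in range(3):
        self_pretrain_step(model, seqs, opt)
    # Phase 2: fine-tune on the task's labels, reusing the same encoder.
    labels = torch.tensor([0, 1])
    for _ in range(3):
        finetune_step(model, seqs, labels, opt)
```

The key design choice is that both phases see only the task's own sequences and share one compute budget, so no genome-scale corpus is required before fine-tuning.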
