Poster in Workshop: 2nd Generative AI for Biology Workshop

Exploring Adversarial Robustness in Classification tasks using DNA Language Models

Hyunwoo Yoo · Haebin Shin · Kaidi Xu · Gail Rosen

Keywords: [ Adversarial Training ] [ Adversarial Robustness ] [ Antimicrobial Resistance ] [ Promoter Detection ] [ DNA Language Models ] [ Genomic Classification ] [ Backtranslation ] [ Bioinformatics ] [ Codon-level Perturbation ]


Abstract:

DNA language models such as GROVER, DNABERT2, and the Nucleotide Transformer operate on DNA sequences that inherently contain sequencing errors, mutations, and laboratory-induced noise, all of which may significantly impact model performance. Despite the importance of this issue, the robustness of DNA language models remains largely underexplored. In this paper, we comprehensively investigate their robustness in DNA classification by applying adversarial attack strategies at three levels: the character level (nucleotide substitutions), the word level (codon modifications), and the sentence level (back-translation-based transformations), systematically analyzing model vulnerabilities. Our results demonstrate that DNA language models are highly susceptible to adversarial attacks, leading to significant performance degradation. Furthermore, we explore adversarial training as a defense mechanism, which enhances both robustness and classification accuracy. This study highlights the limitations of DNA language models and underscores the necessity of robustness in bioinformatics.
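To make the character- and word-level perturbations concrete, here is a minimal Python sketch. It is illustrative only: the helper names (mutate_nucleotides, swap_synonymous_codons), the perturbation rates, and the partial codon table are assumptions for this example, not the paper's implementation.

    import random

    NUCLEOTIDES = "ACGT"

    def mutate_nucleotides(seq, rate=0.05, rng=random):
        """Character-level attack: substitute each base with a different
        nucleotide with probability `rate` (hypothetical rate)."""
        out = []
        for base in seq:
            if base in NUCLEOTIDES and rng.random() < rate:
                out.append(rng.choice([n for n in NUCLEOTIDES if n != base]))
            else:
                out.append(base)
        return "".join(out)

    # A few synonymous-codon groups from the standard genetic code
    # (partial table, for illustration only).
    SYNONYM_GROUPS = [
        ["GCT", "GCC", "GCA", "GCG"],                 # Alanine
        ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"],   # Arginine
        ["CTT", "CTC", "CTA", "CTG", "TTA", "TTG"],   # Leucine
    ]
    SYNONYMS = {c: [x for x in g if x != c] for g in SYNONYM_GROUPS for c in g}

    def swap_synonymous_codons(seq, rate=0.1, rng=random):
        """Word-level attack: replace codons with synonymous codons
        (same amino acid) with probability `rate`, preserving the protein."""
        codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
        for i, codon in enumerate(codons):
            if codon in SYNONYMS and rng.random() < rate:
                codons[i] = rng.choice(SYNONYMS[codon])
        return "".join(codons) + seq[len(codons) * 3:]

    if __name__ == "__main__":
        random.seed(0)
        seq = "GCTCGTCTTGCC"
        print(mutate_nucleotides(seq))            # point-mutated sequence
        print(swap_synonymous_codons(seq, 0.5))   # codon-level variant

In a typical adversarial-training recipe, sequences perturbed this way would be mixed into the training batches alongside the clean sequences; whether the paper's defense follows exactly this scheme is not stated in the abstract.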
