ChemRAG: Benchmarking Retrieval-Augmented Generation for Chemistry

University of Illinois Urbana-Champaign, National Institutes of Health

Abstract

Retrieval-augmented generation (RAG) has emerged as a powerful framework for enhancing large language models (LLMs) with external knowledge, particularly in scientific domains that demand specialized and dynamic information. Despite its promise, the application of RAG in the chemistry domain remains underexplored, primarily due to the lack of high-quality, domain-specific corpora and well-curated evaluation benchmarks. In this work, we introduce ChemRAG-Bench, a comprehensive benchmark designed to systematically assess the effectiveness of RAG across a diverse set of chemistry-related tasks. The accompanying chemistry corpus integrates heterogeneous knowledge sources, including scientific literature, the PubChem database, PubMed abstracts, textbooks, and Wikipedia entries. In addition, we present ChemRAG-Toolkit, a modular and extensible RAG toolkit that supports five retrieval algorithms and eight LLMs. Using ChemRAG-Toolkit, we demonstrate that RAG yields a substantial performance gain, achieving an average relative improvement of 17.4% over direct inference. We further conduct in-depth analyses of retriever architectures, corpus selection, and the number of retrieved passages, culminating in practical recommendations to guide future research and deployment of RAG systems in the chemistry domain.
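To make the evaluated pipeline concrete, the sketch below shows a generic retrieve-then-read RAG loop: embed a chemistry question, fetch the top-k most similar corpus passages, and prepend them to the prompt before LLM inference. This is an illustrative sketch only, not the ChemRAG-Toolkit API; the toy three-passage corpus, the all-MiniLM-L6-v2 encoder, and the prompt template are assumptions chosen for brevity.

# Minimal retrieve-then-read RAG sketch (illustrative; not the ChemRAG-Toolkit API).
# Assumes `pip install sentence-transformers numpy`.
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy stand-in for the chemistry corpus (PubChem records, PubMed abstracts, etc.).
corpus = [
    "Benzene is an aromatic hydrocarbon with the molecular formula C6H6.",
    "Sodium chloride is an ionic compound commonly known as table salt.",
    "Ethanol is a primary alcohol with the molecular formula C2H5OH.",
]

# Dense retriever: any sentence encoder works; all-MiniLM-L6-v2 is an assumption.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the top-k passages by cosine similarity (k is the knob the paper varies)."""
    q_emb = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_emb @ q_emb  # cosine similarity, since embeddings are normalized
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend the retrieved context to the question before LLM inference."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

question = "What is the molecular formula of benzene?"
prompt = build_prompt(question, retrieve(question))
print(prompt)  # pass this prompt to any LLM for the "read" step

Swapping the encoder, the corpus, or k reproduces, in miniature, the retriever, corpus-selection, and passage-count comparisons the paper reports.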

BibTeX

@misc{zhong2025chemrag,
      title={Benchmarking Retrieval-Augmented Generation for Chemistry}, 
      author={Xianrui Zhong and Bowen Jin and Siru Ouyang and Yanzhen Shen and Qiao Jin and Yin Fang and Zhiyong Lu and Jiawei Han},
      year={2025},
      eprint={2505.07671},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.07671}, 
}