Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
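
A minimal sketch of the setting described above, assuming a standard Jekyll _config.yml:

```yaml
# _config.yml (Jekyll site configuration)
# When false, posts with a date in the future are excluded from the build.
future: false
```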

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

SaProt: Protein language modeling with structure-aware vocabulary

Published in International Conference on Learning Representations (ICLR), 2024

Large-scale protein language models (PLMs), such as the ESM family, have achieved remarkable performance in various downstream tasks related to protein structure and function by undergoing unsupervised training on residue sequences. They have become essential tools for researchers and practitioners in biology. However, a limitation of vanilla PLMs is their lack of explicit consideration for protein structure information, which suggests the potential for further improvement. Motivated by this, we introduce the concept of a “structure-aware vocabulary” that integrates residue tokens with structure tokens. The structure tokens are derived by encoding the 3D structure of proteins using Foldseek. We then propose SaProt, a large-scale general-purpose PLM trained on an extensive dataset comprising approximately 40 million protein sequences and structures. Through extensive evaluation, our SaProt model surpasses well-established and renowned baselines across 10 significant downstream tasks, demonstrating its exceptional capacity and broad applicability.
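
As a concrete illustration, here is a minimal Python sketch of the structure-aware vocabulary idea in the spirit of the paper: each token pairs a residue letter with a Foldseek 3Di structure letter, and each track gets a mask symbol. The exact special tokens of the released model may differ.

```python
# Structure-aware vocabulary: the Cartesian product of residue letters
# and Foldseek 3Di structure letters, each extended with a mask "#".
AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY") + ["#"]   # "#" = masked residue
FOLDSEEK_3DI = list("acdefghiklmnpqrstvwy") + ["#"]  # "#" = masked structure

vocab = [aa + s for aa in AMINO_ACIDS for s in FOLDSEEK_3DI]
print(len(vocab))  # 441 combined tokens

def tokenize(sequence: str, structure_string: str) -> list:
    """Zip a residue sequence with its Foldseek 3Di string."""
    assert len(sequence) == len(structure_string)
    return [aa + s for aa, s in zip(sequence, structure_string.lower())]

print(tokenize("MKV", "dpv"))  # ['Md', 'Kp', 'Vv']
```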

Download Paper

Protein language models-assisted optimization of a uracil-N-glycosylase variant enables programmable T-to-G and T-to-C base editing

Published in Molecular Cell, 2024

Current base editors (BEs) use DNA deaminases, including cytidine deaminase in cytidine BE (CBE) or adenine deaminase in adenine BE (ABE), to facilitate transition nucleotide substitutions. Combining CBE or ABE with glycosylase enzymes can induce limited transversion mutations. Nonetheless, a critical demand remains for BEs capable of generating alternative mutation types, such as T>G corrections. In this study, we leveraged pre-trained protein language models to optimize a uracil-N-glycosylase (UNG) variant with altered specificity for thymines (eTDG). Notably, after two rounds of testing fewer than 50 top-ranking variants, more than 50% exhibited over 1.5-fold enhancement in enzymatic activity. When eTDG was fused with nCas9, it induced programmable T-to-S (G/C) substitutions and corrected the db/db diabetic mutation in mice (up to 55%). Our findings not only establish orthogonal strategies for developing novel BEs but also demonstrate the capacity of protein language models to optimize enzymes without extensive task-specific training data.
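
To make the language-model-guided ranking step concrete, here is a minimal sketch of the general technique (scoring point mutants by masked-token probabilities with ESM-2 via the fair-esm package). It illustrates the approach, not the authors' exact pipeline, and the example sequence is a placeholder.

```python
import torch
import esm  # pip install fair-esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
model.eval()
batch_converter = alphabet.get_batch_converter()

def mutation_score(seq: str, pos: int, mut_aa: str) -> float:
    """log P(mutant) - log P(wild type) at a masked position (0-indexed)."""
    masked = seq[:pos] + "<mask>" + seq[pos + 1:]
    _, _, tokens = batch_converter([("variant", masked)])
    with torch.no_grad():
        logits = model(tokens)["logits"]
    log_probs = torch.log_softmax(logits[0, pos + 1], dim=-1)  # +1 skips BOS
    return (log_probs[alphabet.get_idx(mut_aa)]
            - log_probs[alphabet.get_idx(seq[pos])]).item()

# Rank all substitutions at one position of a placeholder sequence.
wt = "MIGQKTLYSFF"
scores = {aa: mutation_score(wt, 3, aa) for aa in "ACDEFGHIKLMNPQRSTVWY"}
print(sorted(scores, key=scores.get, reverse=True)[:5])
```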

Download Paper

SaprotHub: Making Protein Modeling Accessible to All Biologists

Published in bioRxiv, 2024

Training and deploying deep learning models pose challenges for users without machine learning (ML) expertise. SaprotHub offers a user-friendly platform that democratizes the training, use, storage, and sharing of protein ML models, fostering collaboration within the biology community, all achievable with a few clicks and no ML background. At its core is SaProt, an advanced, near-universal protein language model. Through its ColabSaprot framework, it supports potentially hundreds of protein training and prediction applications, enabling the co-construction and co-sharing of these trained models. This enhances user engagement and drives community-wide innovation.

Download Paper

ProTrek: Navigating the Protein Universe through Tri-Modal Contrastive Learning

Published in bioRxiv, 2024

ProTrek redefines protein exploration by seamlessly fusing sequence, structure, and natural language function (SSF) into an advanced tri-modal language model. Through contrastive learning, ProTrek bridges the gap between protein data and human understanding, enabling fast searches across nine SSF pairwise modality combinations. Trained on vastly larger datasets than prior models, ProTrek delivers major performance gains: (1) elevating protein sequence-function interconversion by 30-60 fold; (2) surpassing current alignment tools (i.e., Foldseek and MMseqs2) in both speed (100-fold acceleration) and accuracy, identifying functionally similar proteins with diverse structures; and (3) outperforming ESM-2 in 9 of 11 downstream prediction tasks, setting new benchmarks in protein intelligence. These results suggest that ProTrek will become a core tool for protein search, understanding, and analysis.
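
For intuition, here is a minimal sketch of a tri-modal contrastive objective in the spirit of ProTrek; the symmetric InfoNCE form, the temperature, and the equal weighting of modality pairs are generic assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE between two batches of aligned embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.T / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

def tri_modal_loss(seq_emb, struct_emb, text_emb):
    # Pull matching (sequence, structure, text) triples together across
    # all three pairwise modality combinations.
    return (info_nce(seq_emb, struct_emb)
            + info_nce(seq_emb, text_emb)
            + info_nce(struct_emb, text_emb))
```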

Download Paper

Toward De Novo Protein Design from Natural Language

Published in bioRxiv, 2024

De novo protein design represents a fundamental pursuit in protein engineering, yet current deep learning approaches remain constrained by their narrow design scope. Here we present Pinal, a 16-billion-parameter frontier framework trained on 1.7 billion protein-text pairs that bridges natural language understanding with the protein design space, translating human design intent into novel protein sequences. Instead of straightforward end-to-end text-to-sequence generation, Pinal implements a two-stage process: it first generates protein structures from language instructions, then designs sequences conditioned on both the generated structure and the language input. This strategy effectively constrains the search space by operating in the more tractable structural domain. Through comprehensive experiments, we demonstrate that Pinal outperforms existing approaches, including the concurrent work ESM3, while generalizing robustly to novel protein structures beyond the PDB.
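
Purely as an illustration of the two-stage control flow described above (both stage functions below are hypothetical stubs standing in for Pinal's models, not a released API):

```python
def generate_structure(prompt: str) -> str:
    """Stage 1 stub: language instruction -> candidate structure."""
    return f"<structure for: {prompt}>"

def design_sequence(structure: str, prompt: str) -> str:
    """Stage 2 stub: sequence conditioned on BOTH structure and prompt."""
    return f"<sequence for: {structure}>"

def design_from_text(prompt: str, n_candidates: int = 4) -> list:
    # Searching in structure space first keeps the problem tractable;
    # each sampled structure is then translated into a sequence.
    return [design_sequence(generate_structure(prompt), prompt)
            for _ in range(n_candidates)]

print(design_from_text("a thermostable heme-binding beta-barrel")[0])
```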

Download Paper

Decoding the Molecular Language of Proteins with Evola

Published in bioRxiv, 2025

Proteins, nature's intricate molecular machines, are the products of billions of years of evolution and play fundamental roles in sustaining life. Yet, deciphering their molecular language (that is, understanding how protein sequences and structures encode and determine biological functions) remains a cornerstone challenge in modern biology. Here, we introduce Evola, an 80-billion-parameter frontier protein-language generative model designed to decode the molecular language of proteins. By integrating information from protein sequences, structures, and user queries, Evola generates precise and contextually nuanced insights into protein function. A key innovation of Evola lies in its training on an unprecedented AI-generated dataset: 546 million protein question-answer pairs and 150 billion word tokens, designed to reflect the immense complexity and functional diversity of proteins. Post-pretraining, Evola integrates Direct Preference Optimization (DPO) to refine the model based on preference signals and Retrieval-Augmented Generation (RAG) for external knowledge incorporation, improving response quality and relevance. To evaluate its performance, we propose a novel framework, Instructional Response Space (IRS), demonstrating that Evola delivers expert-level insights, advancing research in proteomics and functional genomics while shedding light on the molecular logic encoded in proteins.
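
For readers unfamiliar with DPO, here is a minimal sketch of the preference loss it optimizes; the beta value and per-sequence log-probability inputs are generic choices, not Evola's reported settings.

```python
import torch.nn.functional as F

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta: float = 0.1):
    """Log-sigmoid loss on the implicit reward margin between preferred
    and rejected answers, measured against a frozen reference model."""
    chosen_margin = beta * (policy_logp_chosen - ref_logp_chosen)
    rejected_margin = beta * (policy_logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()
```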

Download Paper

Talks

Introduction to Protein Language Models

Published:

Abstract: Large language models have demonstrated remarkable effectiveness in natural language processing. Because amino acid sequences resemble natural language in many ways, protein language models are playing an increasingly important role across biological fields. In this session, we compare natural language with proteins, introduce how large language models are applied to protein sequences, and discuss their applications in biology.

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.