Workshop: Machine Learning on HPC Systems (MLHPCS)

Winning The Next Pandemic War Through High-Performance Computing and Self-Supervised Learning

Abstract

Motivation: Natural Language Processing (NLP) continues to improve substantially through auto-regressive (AR) and auto-encoding (AE) Language Models (LMs). These LMs require expensive computing resources for self-supervised learning from huge unlabelled text corpora. The information learned is transferred through so-called embeddings to downstream prediction tasks. Computational biology and bioinformatics provide vast gold mines of structured and sequentially ordered text data, which has led to extraordinarily successful protein sequence LMs that promise new frontiers for generative and predictive tasks at low inference cost. As recent NLP advances link corpus size to model size and accuracy, we addressed two questions: (1) To what extent can High-Performance Computing (HPC) up-scale protein LMs to larger databases and larger models? (2) To what extent can LMs extract features from single proteins to get closer to the performance of methods using evolutionary information?

Methodology: Here, we trained two auto-regressive language models (Transformer-XL and XLNet) and four auto-encoder models (BERT, Albert, Electra, and T5) on up to 2.1 billion protein sequences taken from the Big Fantastic Database (BFD), today's largest set of protein sequences (corresponding to 22 and 112 times, respectively, the size of the entire English Wikipedia). The LMs were trained on the Summit supercomputer, using 936 nodes with 6 GPUs each (5616 GPUs in total), and on one TPU Pod with up to 1024 TPU V3 cores.
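
Once trained, such protein LMs can be queried at low inference cost to produce embeddings for new sequences. The sketch below shows one way this could look for a BERT-style protein model served through the Hugging Face transformers library; the checkpoint name, the space-separated amino-acid input format, and the pooling step are illustrative assumptions, not details taken from this abstract.

```python
# Minimal sketch (assumption): extracting embeddings from a pretrained
# BERT-style protein LM via Hugging Face transformers. The checkpoint name
# "Rostlab/prot_bert" and the preprocessing conventions are illustrative
# assumptions, not specifics from the abstract above.
import re
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Rostlab/prot_bert", do_lower_case=False)
model = BertModel.from_pretrained("Rostlab/prot_bert")
model.eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
# Assumed preprocessing: amino acids separated by spaces, rare residues mapped to X.
sequence = " ".join(re.sub(r"[UZOB]", "X", sequence))

inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Per-residue embeddings (dropping the special tokens) and a mean-pooled
# per-protein vector for protein-level downstream tasks.
residue_embeddings = outputs.last_hidden_state[0, 1:-1]
protein_embedding = residue_embeddings.mean(dim=0)
print(residue_embeddings.shape, protein_embedding.shape)
```

Per-residue embeddings feed residue-level tasks such as secondary structure prediction, while the pooled vector feeds protein-level tasks such as localization.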

Results: We validated the feasibility of training big LMs on proteins and the advantage of up-scaling LMs to larger models supported by more data. The latter was assessed by predicting secondary structure in three and eight states (Q3=81%-87%, Q8=70%-77%), localization for 10 cellular compartments (Q10=81%), and whether a protein is membrane-bound or water-soluble (Q2=91%). Dimensionality reduction revealed that the LM-embeddings learned from unlabelled data (only protein sequences) captured important biophysical properties of the protein alphabet, namely the amino acids, and their well-orchestrated interplay in governing the shape of proteins. By analogy to NLP, this implied having learned some of the grammar of the language of life realized in protein sequences. The successful up-scaling of protein LMs through HPC outperformed the state-of-the-art without using evolutionary information, thereby bypassing expensive database searches. Additionally, it opens the door to new deep learning methods that help us prepare for the next pandemic by improving our understanding of protein sequences.
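
To make the downstream evaluation and the dimensionality-reduction step concrete, here is a hedged sketch in which frozen per-protein embeddings are fed into a simple supervised classifier (e.g., membrane-bound vs. water-soluble) and projected to 2D for inspection. The random placeholder features, the two-class setup, and the scikit-learn pipeline are illustrative assumptions; they do not reproduce the reported Q2 or Q10 results.

```python
# Minimal sketch (assumption): using fixed per-protein LM embeddings as input
# features for a downstream classifier and for 2D visualization. The random
# features stand in for real embeddings; nothing here reproduces the reported
# Q2=91% or Q10=81% results.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1024))   # placeholder for 1024-d per-protein embeddings
y = rng.integers(0, 2, size=1000)   # placeholder labels: membrane-bound vs. soluble

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Lightweight supervised head on top of frozen embeddings (no fine-tuning).
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Dimensionality reduction of the embedding space for visual inspection,
# mirroring the kind of projection used to reveal biophysical structure.
coords_2d = PCA(n_components=2).fit_transform(X)
print("2D projection shape:", coords_2d.shape)
```

The design choice here, training only a light classifier on frozen embeddings, is what keeps inference cheap compared with methods that require evolutionary information from database searches.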

Speaker

Ahmed Elnaggar is a Ph.D. candidate at the Technical University of Munich. His main focus of research is self-supervised learning on various modalities (text, protein sequences, source code, images, and speech) using high-performance computing.