conST: an interpretable multi-modal contrastive learning framework for spatial transcriptomics
Published in bioRxiv (major revision at Nucleic Acids Research), 2022
Recommended citation: Zong Y, Yu T, Wang X, ... & Li Y. conST: an interpretable multi-modal contrastive learning framework for spatial transcriptomics. bioRxiv (major revision at Nucleic Acids Research, IF = 16.97), 2022. https://scholar.google.com/citations?view_op=view_citation&hl=zh-CN&user=1Cw8oZ4AAAAJ&citation_for_view=1Cw8oZ4AAAAJ:d1gkVwhDpl0C
Yongshuo Zong, Tingyang Yu, Xuesong Wang, Yixuan Wang, Zhihang Hu, Yu Li
Spatially resolved transcriptomics (SRT) has shown impressive power in yielding biological insights in neuroscience, disease studies, and even plant biology. We propose conST, a powerful and flexible SRT data analysis framework based on contrastive learning. conST learns low-dimensional embeddings by effectively integrating multi-modal SRT data, i.e., gene expression, spatial information, and morphology (if available). The learned embeddings can then be used for various downstream tasks, including clustering, trajectory and pseudotime inference, and cell-to-cell interaction (CCI) analysis. Our framework is interpretable in that it can identify the correlated spots that support the clustering, and these spots also match the detected CCI pairs, giving clinicians more confidence when making clinical decisions.
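To make the contrastive-learning idea concrete, here is a minimal sketch (not the paper's actual implementation) of an NT-Xent-style contrastive loss over two augmented views of spot embeddings, written in PyTorch. The encoder, the choice of augmentations, and the `temperature` value are illustrative assumptions, not details taken from conST.

```python
import torch
import torch.nn.functional as F

def ntxent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Generic NT-Xent contrastive loss between two views of the same spots.

    z1, z2: (n_spots, d) embeddings of the same spots under two augmentations
    (e.g., feature dropout vs. graph-edge masking -- hypothetical choices here).
    The positive pair for each spot is itself in the other view; every other
    spot in either view serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit-norm rows
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude each spot's self-pair
    # Row i in view 1 has its positive at index i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random tensors stand in for multi-modal spot embeddings.
z1, z2 = torch.randn(128, 32), torch.randn(128, 32)
print(ntxent_loss(z1, z2).item())
```

In a pipeline like the one described above, `z1` and `z2` would come from an encoder over the fused gene-expression, spatial, and morphology inputs; the resulting embeddings would then feed the downstream clustering and trajectory analyses.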