Abstract and 1. Introduction

2. Background

2.1 Amortized Stochastic Variational Bayesian GPLVM

2.2 Encoding Domain Knowledge through Kernels

3. Our Model and 3.1 Pre-Processing and Likelihood

3.2 Encoder

4. Results and Discussion and 4.1 Each Component is Crucial to Modified Model Performance

4.2 Modified Model Achieves Significant Improvements over Standard Bayesian GPLVM and is Comparable to scVI

4.3 Consistency of Latent Space with Biological Factors

5. Conclusion, Acknowledgement, and References

A. Baseline Models

B. Experiment Details

C. Latent Space Metrics

D. Detailed Metrics

4.3 Consistency of Latent Space with Biological Factors

An advantage of our model is its ability to incorporate biologically interpretable data to improve latent-space interpretability and overall performance. In particular, we compared our learned latent space with previous expert-labelled inferences on the innate-immunity dataset of Kumasaka et al. (2021). Pretraining on well-initialized latents and finetuning our model with a PerSE-ARD+Linear kernel allowed us to recover latents consistent with the biologically motivated inferences of Kotliar et al. (2019) (Figure 4, top row), while also separating cells by treatment condition (Figure 4, bottom row). Moreover, as the color gradations in the right two UMAP plots of the bottom row indicate, the learned latent space distinguishes immune-response pseudotime directions. This demonstrates how informed initialization can be applied to amortized BGPLVM encoder-decoder models.
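To make the recipe above concrete, here is a minimal sketch (not the authors' code) of the three ingredients described: an additive kernel, warm-starting latents from expert labels, and a UMAP consistency check. It assumes GPyTorch and umap-learn are available; the latent dimensionality, the `expert_latents` and `pseudotime` placeholders, and the reading of PerSE-ARD+Linear as a scaled SE-ARD kernel plus a scaled linear kernel are all illustrative assumptions, and the periodic ("Per") component is not reproduced.

```python
# Illustrative sketch only; variable names and kernel composition are assumptions.
import torch
import gpytorch
import umap
import matplotlib.pyplot as plt

latent_dim = 10   # illustrative latent dimensionality
n_cells = 5000    # illustrative number of cells

# (1) Additive kernel: an SE-ARD component plus a linear component, each scaled.
#     One plausible reading of "PerSE-ARD+Linear"; the periodic part is omitted here.
se_ard = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RBFKernel(ard_num_dims=latent_dim)
)
linear = gpytorch.kernels.ScaleKernel(gpytorch.kernels.LinearKernel())
kernel = se_ard + linear  # GPyTorch composes kernels additively via `+`

# (2) Warm start: seed the variational latent means with expert-derived latents
#     (e.g., pseudotime coordinates) before finetuning the full model.
expert_latents = torch.randn(n_cells, latent_dim)  # placeholder for real labels
latent_mean = torch.nn.Parameter(expert_latents.clone())

# (3) Consistency check: embed the learned latents with UMAP and color by a
#     biological covariate (treatment condition or pseudotime).
pseudotime = torch.rand(n_cells)  # placeholder covariate
embedding = umap.UMAP(n_components=2).fit_transform(latent_mean.detach().numpy())
plt.scatter(embedding[:, 0], embedding[:, 1], c=pseudotime.numpy(), s=2)
plt.xlabel("UMAP 1")
plt.ylabel("UMAP 2")
plt.show()
```

A gradient of the covariate across the embedding, as in the bottom-row plots of Figure 4, would indicate that the latent space tracks that biological factor.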

This paper is available on arxiv under CC BY-SA 4.0 DEED license.

Authors:

(1) Sarah Zhao, Department of Statistics, Stanford University ([email protected]);

(2) Aditya Ravuri, Department of Computer Science, University of Cambridge ([email protected]);

(3) Vidhi Lalchand, Eric and Wendy Schmidt Center, Broad Institute of MIT and Harvard ([email protected]);

(4) Neil D. Lawrence, Department of Computer Science, University of Cambridge ([email protected]).