Network Comparison with Interpretable Contrastive Network Representation Learning


  • Takanori Fujiwara University of California, Davis
  • Jian Zhao University of Waterloo
  • Francine Chen Toyota Research Institute
  • Yaoliang Yu University of Waterloo
  • Kwan-Liu Ma University of California, Davis



Keywords: contrastive learning, network representation learning, interpretability, network comparison, visualization


Identifying unique characteristics of a network by comparing it with another network is an essential network analysis task. For example, with networks of protein interactions obtained from normal and cancer tissues, we can discover types of interactions unique to cancer tissues. This task could be greatly assisted by contrastive learning, an emerging analysis approach for discovering patterns that are salient in one dataset relative to another. However, existing contrastive learning methods cannot be directly applied to networks, as they are designed only for high-dimensional data analysis. To address this problem, we introduce a new analysis approach called contrastive network representation learning (cNRL). By integrating two machine learning schemes, network representation learning and contrastive learning, cNRL embeds network nodes into a low-dimensional representation that reveals the uniqueness of one network relative to another. Within this approach, we also design a method, named i-cNRL, that offers interpretability of the learned results, making it possible to understand which specific patterns are found only in one network. We demonstrate the effectiveness of i-cNRL for network comparison with multiple network models and real-world datasets. Furthermore, we compare i-cNRL with other potential cNRL algorithm designs through quantitative and qualitative evaluations.
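The two-step idea described in the abstract, first learning per-node features via network representation learning and then applying a contrastive method to highlight what is salient in one network relative to another, can be sketched with contrastive PCA as the contrastive step. This is only an illustrative sketch, not the authors' exact algorithm: the function, its name, and the `alpha` contrast parameter are assumptions, and the node-feature matrices are taken as precomputed inputs.

```python
import numpy as np

def contrastive_embedding(target_feats, background_feats, alpha=1.0, n_components=2):
    """Illustrative contrastive-PCA step (an assumption, not the paper's exact method).

    target_feats, background_feats: (n_nodes, n_features) matrices of node
    features produced by some network representation learning method.
    Finds directions with high variance in the target network but low
    variance in the background network.
    """
    # Covariance matrices of the two feature sets (features as columns)
    cov_t = np.cov(target_feats, rowvar=False)
    cov_b = np.cov(background_feats, rowvar=False)
    # Eigen-decompose the contrast matrix C_t - alpha * C_b
    eigvals, eigvecs = np.linalg.eigh(cov_t - alpha * cov_b)
    # Keep the top directions (eigh returns eigenvalues in ascending order)
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]
    # Project the target network's nodes into the contrastive subspace
    return target_feats @ components
```

Here `alpha` trades off emphasizing target variance against suppressing background variance; the interpretability in i-cNRL would additionally relate the learned directions back to the node features, which this sketch omits.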




How to Cite

Fujiwara, T., Zhao, J., Chen, F., Yu, Y., & Ma, K.-L. (2022). Network Comparison with Interpretable Contrastive Network Representation Learning. Journal of Data Science, Statistics, and Visualisation, 2(5).