Abstract and 1. Introduction

  2. Related Work

  3. Preliminaries and Notations

  4. Differentiable Structural Information

    4.1. A New Formulation

    4.2. Properties

    4.3. Differentiability & Deep Graph Clustering

  5. LSEnet

    5.1. Embedding Leaf Nodes

    5.2. Learning Parent Nodes

    5.3. Hyperbolic Partitioning Tree

  6. Experiments

    6.1. Graph Clustering

    6.2. Discussion on Structural Entropy

  7. Conclusion, Broader Impact, and References

Appendix

A. Proofs

B. Hyperbolic Space

C. Technical Details

D. Additional Results

5.3. Hyperbolic Partitioning Tree

Following the principle of structural information minimization, the optimal partitioning tree is constructed to describe the self-organization of the graph (Li & Pan, 2016; Wu et al., 2023).
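To make the minimization objective concrete, the two-level structural entropy of Li & Pan (2016) can be computed directly for a hard partition. The sketch below is illustrative only and is not the paper's implementation; the function name and interface are our own:

```python
import numpy as np

def structural_entropy_2level(adj, assignment):
    """Two-level structural entropy of a graph under a hard partition
    (Li & Pan, 2016).  `adj` is a symmetric weighted adjacency matrix;
    `assignment[i]` is the cluster id of node i."""
    deg = adj.sum(axis=1)
    vol_G = deg.sum()                      # total volume, 2|E| for unweighted graphs
    H = 0.0
    for c in np.unique(assignment):
        members = np.where(assignment == c)[0]
        mask = np.isin(np.arange(len(deg)), members)
        vol_c = deg[members].sum()         # volume of module c
        g_c = adj[np.ix_(mask, ~mask)].sum()   # weight of edges leaving module c
        if vol_c > 0:
            # module-level term: -(g_c / vol(G)) * log2(vol_c / vol(G))
            H -= (g_c / vol_G) * np.log2(vol_c / vol_G)
        for i in members:
            if deg[i] > 0:
                # leaf-level term: -(deg_i / vol(G)) * log2(deg_i / vol_c)
                H -= (deg[i] / vol_G) * np.log2(deg[i] / vol_c)
    return H
```

On two triangles joined by a single edge, the natural two-cluster partition yields a lower structural entropy than placing all nodes in one module, which is exactly the signal the minimization exploits.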

The overall procedure of training LSEnet is summarized in Algorithm 1. Specifically, in Lines 1-9, we learn level-wise assignments and node embeddings in a bottom-up manner. In Lines 10-17, the optimal partitioning tree is given via a top-down process in which we avoid empty leaf nodes and fix single chains in the relaxed tree (case one and case two in Theorem 4.8). Further details of the procedure are shown in Algorithm 2 given in Appendix C.2.
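Since Algorithm 1 itself is given in the appendix, the top-down decoding step (Lines 10-17) can be sketched as follows. This is our hedged reading of the procedure, not the paper's code: hard level-wise assignments are composed into a tree, empty internal nodes are dropped, and single-child chains are collapsed, mirroring the two cases of Theorem 4.8:

```python
def decode_tree(level_assignments):
    """Decode hard level-wise assignments into a partitioning tree.

    `level_assignments[h][i]` gives the parent index at level h of
    node i at level h+1; level 0 is the root.  Empty internal nodes
    are dropped and single-child chains are collapsed (illustrative
    sketch, not the paper's Algorithm 2)."""
    children = {}
    for h, assign in enumerate(level_assignments):
        for child, parent in enumerate(assign):
            children.setdefault((h, parent), []).append((h + 1, child))
    depth = len(level_assignments)         # leaves live at this level

    def build(node):
        level, _ = node
        if level == depth:
            return node                    # genuine leaf: a graph node
        kids = [build(k) for k in children.get(node, [])]
        kids = [k for k in kids if k is not None]
        if not kids:
            return None                    # empty internal node: drop it
        if len(kids) == 1:
            return kids[0]                 # single chain: splice it out
        return kids

    return build((0, 0))
```

For example, with four level-1 clusters of which one receives no leaves and one receives a single leaf, the decoder removes the empty cluster and promotes the lone leaf to its grandparent, leaving a well-formed partitioning tree.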

In brief, LSEnet combines the advantages of both structural entropy and hyperbolic space, and uncovers clustering structures without a predefined cluster number K.

Authors:

(1) Li Sun, North China Electric Power University, Beijing 102206, China (ccesunli@ncepu.edu);

(2) Zhenhao Huang, North China Electric Power University, Beijing 102206, China;

(3) Hao Peng, Beihang University, Beijing 100191, China;

(4) Yujie Wang, North China Electric Power University, Beijing 102206, China;

(5) Chunyang Liu, Didi Chuxing, Beijing, China;

(6) Philip S. Yu, University of Illinois at Chicago, IL, USA.


This paper is available on arxiv under the CC BY-NC-SA 4.0 Deed (Attribution-NonCommercial-ShareAlike 4.0 International) license.