This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: -aRtHLuFwK5bG1lXEge-nYUkDRfjv652NZ3GtK4Yzf0

Unfair Generalization in Graph Neural Networks (GNNs)

Written by @computational | Published on 2024/10/22

TL;DR
The proof of Theorem 2 demonstrates that GNNs tend to generalize better on test groups closer to the training set in structural distance. Through linear approximations and bounding loss functions, it shows that the difference in generalization performance between groups can be attributed to structural disparities, highlighting potential fairness issues in GNN applications.
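Informally, the claim can be sketched as follows. The notation here is illustrative and not necessarily the paper's exact formulation: $V_i, V_j$ are test groups, $V_0$ is the training set, $d(\cdot,\cdot)$ is a structural (e.g. shortest-path) distance between groups, $\mathcal{L}$ is the expected loss on a group, and $C$ is a constant independent of the individual groups.

```latex
% Illustrative sketch (our notation, not the paper's):
% if group V_i is structurally closer to the training set V_0 than V_j,
% its generalization loss admits the tighter bound.
d(V_i, V_0) \le d(V_j, V_0)
\;\Longrightarrow\;
\mathcal{L}(V_i) \le \mathcal{L}(V_j) + C\,\bigl(d(V_j, V_0) - d(V_i, V_0)\bigr)
```

The structural distance term is what drives the disparity: groups far from the training set carry a looser loss bound, which is the fairness concern the theorem formalizes.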

Authors:

(1) Junwei Su, Department of Computer Science, the University of Hong Kong and jwsu@cs.hku.hk;

(2) Chuan Wu, Department of Computer Science, the University of Hong Kong and cwu@cs.hku.hk.

Abstract and 1 Introduction

2 Related Work

3 Framework

4 Main Results

5 A Case Study on Shortest-Path Distance

6 Conclusion and Discussion, and References

7 Proof of Theorem 1

8 Proof of Theorem 2

9 Procedure for Solving Eq. (6)

10 Additional Experiments Details and Results

11 Other Potential Applications

8 Proof of Theorem 2

Before diving into the detailed proof, we present an outline of its structure and prove a lemma used in the proof of the theorem.

Outline of the proof of the theorem:

1. Suppose we are given Vi and Vj , two test groups which satisfy the premise of the theorem;

2. Then, we can approximate and bound the loss of each vertex in these groups based on the nearest vertex in the training set by extending the result from Theorem 1;

3. Finally, by showing that the bounding constant is independent of the properties of each individual test group, we obtain the result of Theorem 2.
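Step 2 hinges on relating each test vertex to its nearest training vertex in structural distance. As a minimal sketch, assuming the structural distance is shortest-path (hop) distance on an unweighted graph (the function names and the adjacency-dict representation are ours, not the paper's), the relevant quantity can be computed with a multi-source BFS:

```python
from collections import deque

def bfs_distances(adj, sources):
    """Multi-source BFS: hop distance from every vertex to its
    nearest vertex in `sources` (here, the training set)."""
    dist = {v: float("inf") for v in adj}
    queue = deque()
    for s in sources:
        dist[s] = 0
        queue.append(s)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if dist[w] == float("inf"):  # first visit = shortest distance
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def group_distance(adj, group, train_set):
    """Average structural distance of a test group to the training set."""
    dist = bfs_distances(adj, train_set)
    return sum(dist[v] for v in group) / len(group)

# On a path graph 0-1-2-3-4 with training vertex {0}, the group {1, 2}
# is structurally closer to the training set than the group {3, 4},
# so (per Theorem 2) it would enjoy the tighter generalization bound.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

This is only an illustration of the quantity being bounded; the proof itself works with per-vertex losses and a constant that does not depend on the group, not with these averages directly.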

This paper is available on arxiv under CC BY 4.0 DEED license.




Tags: graph-neural-networks | topology-awareness | semi-supervised-learning | active-learning | computer-vision | deep-learning-spline-theory | gnn-limitations | k-shot-learning