This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: YORtkyOP5rY5t6T_4mj0j6apDpt6LxkV98X63XJ4vow

Medical Image Synthesis: S-CycleGAN for RUSS and Segmentation

Written by @instancing | Published on 2025/11/5

TL;DR
This article presents a novel Decision Boundary-Aware Distillation methodology for Instance-Incremental Learning that requires no access to old data.

Abstract and 1 Introduction

  2. Related works

  3. Problem setting

  4. Methodology

    4.1. Decision boundary-aware distillation

    4.2. Knowledge consolidation

  5. Experimental results and 5.1. Experiment Setup

    5.2. Comparison with SOTA methods

    5.3. Ablation study

  6. Conclusion and future work and References

Supplementary Material

  1. Details of the theoretical analysis on KCEMA mechanism in IIL
  2. Algorithm overview
  3. Dataset details
  4. Implementation details
  5. Visualization of dusted input images
  6. More experimental results

4. Methodology

As shown in Fig. 2 (a), concept drift in new observations produces outer samples on which the existing model fails. The new IIL model must broaden the decision boundary to cover these outer samples while avoiding catastrophic forgetting (CF) of the old boundary. Conventional knowledge distillation-based methods rely on preserved exemplars [22] or auxiliary data [33, 34] to resist CF. In the proposed IIL setting, however, we have no access to any old data other than the new observations, and distilling on these new observations conflicts with learning new knowledge if no new parameters are added to the model. To strike a balance between learning and not forgetting, we propose a decision boundary-aware distillation method that requires no old data. During learning, the new knowledge acquired by the student is intermittently consolidated back into the teacher model, which yields better generalization and is, to our knowledge, a pioneering attempt in this area.
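The learn-then-consolidate loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes the distillation term is a simple pull of the student toward the frozen teacher's parameters, and that consolidation is an exponential moving average (an assumption suggested by the "KCEMA mechanism" named in the supplementary material). Models are represented as flat lists of floats for brevity; `distill_step`, `consolidate`, and all constants are hypothetical names.

```python
def distill_step(student, teacher, task_grad, lr=0.1, distill_weight=0.5):
    """One student update: follow the gradient from new observations while a
    distillation term pulls the student back toward the (frozen) teacher."""
    return [
        s - lr * (g + distill_weight * (s - t))  # task gradient + distillation pull
        for s, t, g in zip(student, teacher, task_grad)
    ]

def consolidate(teacher, student, momentum=0.9):
    """Intermittently merge the student's newly learned parameters back into
    the teacher via an exponential moving average (EMA)."""
    return [momentum * t + (1 - momentum) * s for t, s in zip(teacher, student)]

# Toy usage: the student drifts toward the new data; the teacher, which is
# used for inference, absorbs that new knowledge slowly and stably.
teacher = [1.0, -2.0]
student = list(teacher)
for step in range(5):
    task_grad = [0.5, -0.3]      # stand-in gradient from new observations
    student = distill_step(student, teacher, task_grad)
    if step % 2 == 1:            # consolidate every other step
        teacher = consolidate(teacher, student)
```

The key design point mirrored here is that inference uses the slowly updated teacher rather than the student, so the deployed boundary expands gradually instead of being overwritten by each new batch.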

Figure 3. Comparison between (a) previous distillation-based methods, which perform inference with the student model (S), and (b) the proposed decision boundary-aware distillation (DBD) with knowledge consolidation (KC), which uses the teacher model (T) for inference.

Authors:

(1) Qiang Nie, Hong Kong University of Science and Technology (Guangzhou);

(2) Weifu Fu, Tencent Youtu Lab;

(3) Yuhuan Lin, Tencent Youtu Lab;

(4) Jialin Li, Tencent Youtu Lab;

(5) Yifeng Zhou, Tencent Youtu Lab;

(6) Yong Liu, Tencent Youtu Lab;

(7) Chengjie Wang, Tencent Youtu Lab.


This paper is available on arxiv under CC BY-NC-ND 4.0 Deed (Attribution-Noncommercial-Noderivs 4.0 International) license.




Tags: concept-drift | knowledge-distillation | image-synthesis | s-cyclegan | distillation | distillation-based-method | medical-image-synthesis | segmentation