This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: MlGyznAVcmJDSPVqixr_bZu5xEfRKAgZUSCP6ioKyo8

Extracting Solutions from SDP Relaxations via Rank-One Approximation

Written by @hyperbole | Published on 2026/1/15

TL;DR
Learn how to extract high-quality solutions from SDP relaxations using scaled eigendirections, Gaussian randomizations, and matrix column scaling to minimize the target loss function.

Abstract and 1. Introduction

  2. Related Works

  3. Convex Relaxation Techniques for Hyperbolic SVMs

    3.1 Preliminaries

    3.2 Original Formulation of the HSVM

    3.3 Semidefinite Formulation

    3.4 Moment-Sum-of-Squares Relaxation

  4. Experiments

    4.1 Synthetic Dataset

    4.2 Real Dataset

  5. Discussions, Acknowledgements, and References

A. Proofs

B. Solution Extraction in Relaxed Formulation

C. On Moment Sum-of-Squares Relaxation Hierarchy

D. Platt Scaling [31]

E. Detailed Experimental Results

F. Robust Hyperbolic Support Vector Machine

B Solution Extraction in Relaxed Formulation

B.1 Semidefinite Relaxation

Typically, the top eigendirection of the optimal SDP solution, scaled appropriately, is selected as the best rank-one candidate; Gaussian randomization can then be used to search for candidates with lower loss.
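As a minimal sketch of the two extraction heuristics named above (scaled top eigendirection and Gaussian randomization), the snippet below recovers a vector w from a PSD solution matrix X ≈ ww^T and keeps whichever candidate minimizes a user-supplied loss. The function name, the sampling count, and the loss interface are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def extract_candidates(X, loss, num_samples=100, seed=0):
    """Recover a vector w from an SDP solution X that approximates w w^T.

    Returns the candidate with the smallest value of `loss`, chosen among
    the scaled top eigendirection (both signs, since w and -w give the
    same rank-one matrix) and Gaussian randomization samples w ~ N(0, X).
    """
    rng = np.random.default_rng(seed)
    # Rank-one approximation: top eigenpair of the symmetric PSD matrix X,
    # with the eigenvector scaled by the square root of its eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(X)
    w_eig = np.sqrt(max(eigvals[-1], 0.0)) * eigvecs[:, -1]
    candidates = [w_eig, -w_eig]
    # Gaussian randomization: draw samples whose covariance matches X.
    samples = rng.multivariate_normal(np.zeros(X.shape[0]), X,
                                      size=num_samples)
    candidates.extend(samples)
    return min(candidates, key=loss)
```

When X is exactly rank one, the scaled top eigendirection recovers w up to sign, so the randomization samples only matter for higher-rank relaxation solutions.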

B.2 Moment-Sum-of-Squares Relaxation

In the moment-sum-of-squares relaxation, the decision variable is the truncated moment sequence (TMS) z, but the solution can be decoded from the moment matrix 𝑀𝜅[z] it generates. We extract the part of the TMS that corresponds to w = [𝑤0, 𝑤1, ..., 𝑤𝑑] by reading off the entries of the moment matrix associated with the degree-one monomials, which already yields a good solution.
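The read-off step can be sketched as follows. Assuming the standard graded monomial ordering, in which the first row and column of the moment matrix are indexed by the constant monomial 1 followed by the degree-one monomials, the first row directly contains the first-order moments that approximate w. The function name and normalization are illustrative assumptions:

```python
import numpy as np

def decode_first_order_moments(M, d):
    """Read w = [w_0, ..., w_d] off a moment matrix M_kappa[z].

    Assumes graded monomial ordering: row/column 0 of M corresponds to
    the constant monomial 1, and columns 1..d+1 to the degree-one
    monomials, so M[0, 1:d+2] holds the first-order moments.
    """
    m00 = M[0, 0]               # moment of the constant monomial, ideally 1
    return M[0, 1:d + 2] / m00  # normalize in case M[0, 0] deviates from 1
```

For a moment matrix generated by a point measure concentrated at w, this recovers w exactly; for relaxed solutions it gives the approximate solution described above.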

Authors:

(1) Sheng Yang, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA (shengyang@g.harvard.edu);

(2) Peihan Liu, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA (peihanliu@fas.harvard.edu);

(3) Cengiz Pehlevan, John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, Center for Brain Science, Harvard University, Cambridge, MA, and Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, Cambridge, MA (cpehlevan@seas.harvard.edu).


This paper is available on arXiv under the CC BY-SA 4.0 (Attribution-ShareAlike 4.0 International) license.

[3] a method mentioned in slide 14 of https://web.stanford.edu/class/ee364b/lectures/sdp-relax_slides.pdf
