I. Introduction

II. Maximum Entropy Tomography

III. Numerical Experiments

IV. Conclusion and Extensions

V. Acknowledgments and References

A. MENT

The MENT algorithm [19–22] uses a Gauss-Seidel relaxation method to optimize the Lagrange functions in Eq. (6). The distribution is initialized to the prior within the measurement boundaries, and the Lagrange functions are then updated one measurement at a time until the simulated projections reproduce the measured projections.
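
As a rough sketch of one relaxation epoch (not the authors' implementation; the function names, the damping convention, and the `simulate_projection` helper are assumptions), the standard multiplicative Gauss-Seidel step scales each multiplier function by the ratio of the measured projection to the projection simulated from the current distribution:

```python
import numpy as np

def gauss_seidel_epoch(h, measured, simulate_projection, omega=1.0, eps=1e-12):
    """One Gauss-Seidel sweep over the measurements.

    h -- list of 1D arrays: multiplier functions on each measurement grid.
    measured -- list of 1D arrays: measured projections on the same grids.
    simulate_projection(k, h) -- returns the model projection onto axis k
        for the current multipliers (by integration or particle tracking).
    """
    for k in range(len(h)):
        g_model = simulate_projection(k, h)
        ratio = measured[k] / np.clip(g_model, eps, None)
        # Damped multiplicative update; omega = 1 is the undamped step
        # h_k <- h_k * (measured / simulated).
        h[k] = h[k] * (1.0 + omega * (ratio - 1.0))
    return h
```

Each profile is updated using the most recently updated multipliers for the other profiles, which is what makes the sweep Gauss-Seidel rather than Jacobi.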

MENT maximizes entropy by design: fitting the data generates an exact solution to the constrained optimization problem. MENT is also efficient: it stores the exact number of parameters needed to define the maximum-entropy distribution and typically converges in a few epochs. Finally, MENT is essentially free of hyperparameters.
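
To make the storage claim concrete, a hypothetical count (the numbers below are assumptions, not values from the paper): in the standard MENT parameterization the stored quantities are just the multiplier values on the measurement grids.

```python
# Hypothetical configuration, not from the paper.
K = 12            # number of measured one-dimensional profiles
B = 100           # bins per profile
n_params = K * B  # one multiplier value per measured bin
print(n_params)   # 1200 numbers define the maximum-entropy distribution
```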

The MENT formulation above is valid for n:m tomography, but the integrals in Eq. (9) limit the value of n in practice. Ongoing work aims to demonstrate efficient implementations for n = 4 [19, 23]. Extension to n = 6 may be possible but has yet to be demonstrated, and the runtime would likely be quite long if there were many high-resolution measurements. Even if the algorithm converged, sampling particles from the posterior (Eq. (6)) would be a nontrivial extra step.
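
The scaling behind the integral bottleneck can be sketched with a toy calculation (the grid size and density below are placeholders, not values from the paper): evaluating a projection by quadrature requires visiting every cell of an n-dimensional grid, so the cost grows as G^n with G points per dimension.

```python
import numpy as np

G = 64                                  # assumed grid resolution per dimension
x = np.linspace(-4.0, 4.0, G)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")
rho = np.exp(-0.5 * (X1**2 + X2**2))    # placeholder 2D density
rho /= rho.sum() * dx * dx              # normalize on the grid
g = rho.sum(axis=1) * dx                # 1D projection: integrate out x2

# The same quadrature in n dimensions touches G**n grid cells per projection:
for n in (2, 4, 6):
    print(f"n = {n}: {G**n:.1e} cells")
```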

Authors:

(1) Austin Hoover, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37830, USA ([email protected]);

(2) Jonathan C. Wong, Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China.


This paper is available on arXiv under the CC BY 4.0 DEED license.