This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: STrrvV0gNPcAFmq_tgeU3PhzUGV87uR5GP3KNoZrJz0

How Approximate Anchored Value Iteration Handles Errors in Decision-Making Models

Written by @anchoring | Published on 2025/1/14

TL;DR
Approximate Anchored Value Iteration (Apx-Anc-VI) is shown to be robust against Bellman operator evaluation errors, offering performance comparable to standard Approximate VI.

Authors:

(1) Jongmin Lee, Department of Mathematical Sciences, Seoul National University;

(2) Ernest K. Ryu, Department of Mathematical Sciences, Seoul National University and Interdisciplinary Program in Artificial Intelligence, Seoul National University.

Abstract and 1 Introduction

1.1 Notations and preliminaries

1.2 Prior works

2 Anchored Value Iteration

2.1 Accelerated rate for Bellman consistency operator

2.2 Accelerated rate for Bellman optimality operator

3 Convergence when γ = 1

4 Complexity lower bound

5 Approximate Anchored Value Iteration

6 Gauss–Seidel Anchored Value Iteration

7 Conclusion, Acknowledgments and Disclosure of Funding and References

A Preliminaries

B Omitted proofs in Section 2

C Omitted proofs in Section 3

D Omitted proofs in Section 4

E Omitted proofs in Section 5

F Omitted proofs in Section 6

G Broader Impacts

H Limitations

5 Approximate Anchored Value Iteration

In this section, we show that the anchoring mechanism is as robust to evaluation errors of the Bellman operator as standard approximate VI: inexact Bellman updates degrade the guarantees of Apx-Anc-VI no more than they degrade those of the classical approximate method.
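To make the setup concrete, here is a minimal sketch of anchored value iteration with an inexact Bellman operator. The MDP, the noise model, and the anchor-weight schedule `beta_k = 1/(k+1)` are all illustrative assumptions, not taken from the paper (the paper derives a γ-dependent schedule); the sketch only shows the shape of the iteration: each iterate is a convex combination of the anchor point and a noisy Bellman update.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9

# Hypothetical 2-state, 2-action MDP (illustrative only, not from the paper).
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])            # R[s, a]
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.1, 0.9]]])          # P[a, s, s'], one row-stochastic matrix per action

def bellman(V, eps=0.0):
    """Bellman optimality operator; eps injects a bounded evaluation error."""
    Q = R + gamma * np.einsum('asp,p->sa', P, V)   # Q[s, a]
    TV = Q.max(axis=1)
    return TV + eps * rng.uniform(-1.0, 1.0, size=TV.shape)

def apx_anc_vi(V0, iters=500, eps=0.01):
    """Anchored VI with inexact Bellman evaluations.

    Each iterate mixes the anchor V0 with the (noisy) Bellman update.
    The weight schedule beta_k = 1/(k+1) is a simple stand-in for the
    gamma-dependent schedule analyzed in the paper.
    """
    V = V0.copy()
    for k in range(1, iters + 1):
        beta = 1.0 / (k + 1)
        V = beta * V0 + (1.0 - beta) * bellman(V, eps)
    return V

# Reference fixed point via exact (noise-free) value iteration.
V_star = np.zeros(2)
for _ in range(2000):
    V_star = bellman(V_star)

V_hat = apx_anc_vi(np.zeros(2), iters=500, eps=0.01)
print("sup-norm error:", np.max(np.abs(V_hat - V_star)))
```

With per-step error of magnitude ε, the final sup-norm error stays on the order of ε/(1 − γ), matching the kind of robustness guarantee the section establishes for Apx-Anc-VI.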

This paper is available on arxiv under CC BY 4.0 DEED license.

[story continues]



Topics and tags:
reinforcement-learning|dynamic-programming|nesterov-acceleration|machine-learning-optimization|value-iteration|value-iteration-convergence|bellman-error|fixed-point-iteration