This story on HackerNoon has a decentralized backup on Sia.
Transaction ID: JjHhrj2Fdw9YGzrXt6F7s3ADB3H9_7AyW9cI4B6Jdm8

Unpacking Key Proofs in Reinforcement Learning

Written by @anchoring | Published on 2025/1/16

TL;DR
This section simplifies the proofs for Theorems 3 and 4 in reinforcement learning, explaining the behavior and convergence of the Bellman operator in accessible terms for those new to the field.
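As a toy illustration of the Bellman operator's behavior (the MDP data below is made up for this sketch, not taken from the paper), plain value iteration applies the Bellman optimality operator repeatedly; because the operator is a γ-contraction in the sup-norm, successive Bellman errors shrink geometrically:

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP for illustration only.
# P[a, s, s'] = transition probability, R[a, s] = expected reward.
gamma = 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])

def bellman_optimality(V):
    """Apply the Bellman optimality operator T to a value vector V."""
    return np.max(R + gamma * P @ V, axis=0)

V = np.zeros(2)
errors = []
for _ in range(50):
    V_new = bellman_optimality(V)
    errors.append(np.max(np.abs(V_new - V)))  # sup-norm Bellman error
    V = V_new

# Contraction implies errors[k] <= gamma * errors[k-1] at every step.
print(errors[0], errors[-1])
```

The geometric O(γ^k) decay shown here is the classical rate that the paper's anchoring mechanism improves upon.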

Authors:

(1) Jongmin Lee, Department of Mathematical Sciences, Seoul National University;

(2) Ernest K. Ryu, Department of Mathematical Sciences, Seoul National University and Interdisciplinary Program in Artificial Intelligence, Seoul National University.

Abstract and 1 Introduction

1.1 Notations and preliminaries

1.2 Prior works

2 Anchored Value Iteration

2.1 Accelerated rate for Bellman consistency operator

2.2 Accelerated rate for Bellman optimality operator

3 Convergence when γ = 1

4 Complexity lower bound

5 Approximate Anchored Value Iteration

6 Gauss–Seidel Anchored Value Iteration

7 Conclusion, Acknowledgments and Disclosure of Funding and References

A Preliminaries

B Omitted proofs in Section 2

C Omitted proofs in Section 3

D Omitted proofs in Section 4

E Omitted proofs in Section 5

F Omitted proofs in Section 6

G Broader Impacts

H Limitations

C Omitted proofs in Section 3

First, we present the following lemma.

where the second inequality comes from nonexpansiveness of T.
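Nonexpansiveness here refers to the standard sup-norm bound for the Bellman operator: for any value vectors $U$ and $V$,

```latex
\|TU - TV\|_\infty \;\le\; \gamma \,\|U - V\|_\infty \;\le\; \|U - V\|_\infty,
\qquad 0 < \gamma \le 1,
```

so $T$ is a $\gamma$-contraction when $\gamma < 1$ and merely nonexpansive when $\gamma = 1$, which is the setting of Section 3.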

Now, we present the proof of Theorem 3.

Next, we prove Theorem 4.
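The anchoring mechanism behind these theorems mixes each Bellman iterate back toward the starting point V⁰. A minimal sketch, assuming the update V^k = β_k V⁰ + (1 − β_k) T V^{k−1} with the schedule β_k = (Σ_{i=0}^k γ^{−2i})^{−1} (readers should check this form against the paper's Section 2; the MDP data is random, for illustration only):

```python
import numpy as np

gamma = 0.9
# Random 3-state, 2-action MDP (hypothetical data for illustration).
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(3), size=(2, 3))   # P[a, s, :] sums to 1
R = rng.uniform(0.0, 1.0, size=(2, 3))

def T(V):
    """Bellman optimality operator for the toy MDP above."""
    return np.max(R + gamma * P @ V, axis=0)

# Anchored value iteration: each step pulls the Bellman iterate
# back toward the anchor V0 with a vanishing weight beta_k.
V0 = np.zeros(3)
V = V0.copy()
for k in range(1, 300):
    beta = 1.0 / sum(gamma ** (-2 * i) for i in range(k + 1))
    V = beta * V0 + (1 - beta) * T(V)

# Fixed-point residual (sup-norm Bellman error) after anchoring.
print(np.max(np.abs(T(V) - V)))
```

Since β_k → 0, the iteration behaves like plain value iteration in the limit while the early anchored steps give the accelerated Bellman-error rate the paper proves.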

This paper is available on arxiv under CC BY 4.0 DEED license.




Tags: reinforcement-learning, dynamic-programming, nesterov-acceleration, machine-learning-optimization, value-iteration, value-iteration-convergence, bellman-error, bellman-operator-proofs