Table of Links
- Preliminaries
- Selfish Mining Attack
C PROOF OF THEOREM 3.1
Proof. Before proving the theorem, we make several observations and introduce additional notation. The proof assumes familiarity with basic notions of Markov chains and MDPs, for which we refer the reader to [22, 26]. First, we observe that every strategy in the MDP M = (S, A, P, s₀) gives rise to an ergodic Markov chain. To see this, observe that the initial state s₀ is reached with positive probability from any other state in the MDP: with positive probability, honest miners mine and append new blocks to the main chain for k consecutive time steps, upon which the MDP returns to the initial state s₀.
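The ergodicity argument can be illustrated with a small simulation. The sketch below is not the paper's exact MDP; it is a toy model in which the state tracks the attacker's private-fork length, and the parameters `alpha` (attacker hashrate) and `k` (the run of consecutive honest blocks that resets the chain) are hypothetical choices made for illustration. The point it demonstrates is the one used in the proof: from any state, a run of k honest blocks occurs with positive probability, returning the chain to the initial state s₀.

```python
import random

def simulate_returns(alpha=0.3, k=3, steps=100_000, seed=0):
    """Toy illustration (not the paper's exact MDP): count returns
    to the initial state s0 under a simple mining process.

    alpha and k are hypothetical parameters for this sketch.
    """
    rng = random.Random(seed)
    fork_len = 0    # attacker's private-fork length (part of the state)
    honest_run = 0  # consecutive blocks mined by honest miners
    returns = 0     # visits back to the initial state s0
    for _ in range(steps):
        if rng.random() < alpha:
            # attacker mines a block and extends the private fork
            fork_len += 1
            honest_run = 0
        else:
            # honest miners mine a block on the main chain
            honest_run += 1
            if honest_run >= k:
                # k consecutive honest blocks: fork abandoned,
                # process returns to the initial state s0
                fork_len = 0
                honest_run = 0
                returns += 1
    return returns

# Each step, honest miners win with probability 1 - alpha > 0, so a
# run of k honest blocks (hence a return to s0) has probability at
# least (1 - alpha)**k > 0 from any state: the induced chain is ergodic.
```

Since every state reaches s₀ with positive probability, the induced Markov chain has a single recurrent class containing s₀, which is the ergodicity property the proof relies on.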
Next, we define two auxiliary reward functions in the MDP M:
We are now ready to prove the theorem. To prove the first part of the theorem, observe that for each β ∈ [0, 1] and for each strategy σ we have
Authors:
(1) Krishnendu Chatterjee, IST Austria, Austria ([email protected]);
(2) Amirali Ebrahimzadeh, Sharif University of Technology, Iran ([email protected]);
(3) Mehrdad Karrabi, IST Austria, Austria ([email protected]);
(4) Krzysztof Pietrzak, IST Austria, Austria ([email protected]);
(5) Michelle Yeo, National University of Singapore, Singapore ([email protected]);
(6) Đorđe Žikelić, Singapore Management University, Singapore ([email protected]).