Table of Links

- Contextuality for general probabilistic theories
- 3.2 Operational theory associated to a GPT system
- Hierarchy of contextuality and 4.1 Motivation and the resource theory
- 4.2 Contextuality of composite systems
- Discussion
- A Physicality of the Holevo projection
5.2 Relation with previous works on contextuality and GPTs
On the framework of Müller and Garner. On the technical level, our work makes use of the tools developed in [32]; for instance, the concept of univalent simulations of GPT systems plays an especially important role. Even though we use similar tools, our goal is quite different. Müller and Garner use univalent simulations to formulate a notion of generalized noncontextuality that applies to a generic effective theory and a corresponding fundamental theory. The effective theory is the one emerging from the operational statistics of the experiment, while the fundamental theory is a fine-graining of the effective one (and thus not necessarily the simplicial theory, as in the case of standard noncontextuality).
They use such noncontextuality as a plausibility criterion for testing the validity of the fundamental theory given the effective theory. This approach becomes particularly meaningful when the fundamental theory is assumed to be quantum theory, since it allows the authors to provide experimental tests of quantum theory. Related works that employ an alternative definition of noncontextuality are those of Gitton and Woods [48, 49]. For the ongoing debate on the merits of the different approaches, see [49] and [76].
On simplex embeddability. Other influential sources of inspiration for our work are [47, 51, 52]. As already mentioned in the introduction, [47] introduces the notion of simplex embeddability as a geometrical criterion for assessing whether a GPT system is noncontextual.[7] The criterion is extended to accessible GPT fragments in [51]; the latter are more general mathematical objects than GPTs and characterize generic prepare-and-measure experimental setups. The work [52] provides an algorithm for testing contextuality in any prepare-and-measure scenario. In particular, it returns an explicit noncontextual model for the scenario if one exists and, if not, it provides the minimum amount of (depolarizing) noise that must be added before a noncontextual model becomes possible. The authors call this measure of contextuality the robustness of nonclassicality. It is related to our notion of the error of a univalent simulation, but differs in that it requires a specific noise channel (the depolarizing channel), whereas the error of a univalent simulation does not. Moreover, the robustness of nonclassicality is not defined as a measure within a well-defined resource theory.
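To make the comparison concrete, the following is a minimal sketch of the kind of linear-programming feasibility test that underlies such noise-robustness measures. It is not the algorithm of [52]: it assumes that deterministic response functions suffice (i.e. there are no nontrivial operational equivalences among measurement effects), that all preparation equivalences are supplied explicitly as pairs of mixture coefficients, and it mixes the observed statistics with uniform noise as a simplified stand-in for the depolarizing channel. The function names and data layout are ours.

```python
# Hypothetical illustration (not the algorithm of [52]): test whether noisy
# prepare-and-measure statistics admit a preparation-noncontextual model,
# assuming deterministic response functions suffice.
import itertools
import numpy as np
from scipy.optimize import linprog

def noncontextual_model_exists(p, prep_equivalences=()):
    """p[i, m, k] = probability of outcome k for measurement m on preparation i.

    A model assigns weights mu[i, lam] over deterministic response functions
    lam (one outcome per measurement) that reproduce p and respect the given
    preparation equivalences (pairs of probability vectors alpha, beta with
    sum_i alpha_i P_i ~ sum_i beta_i P_i).
    """
    n_prep, n_meas, n_out = p.shape
    lambdas = list(itertools.product(range(n_out), repeat=n_meas))
    n_lam = len(lambdas)
    n_var = n_prep * n_lam  # variables mu[i, lam], flattened

    A_eq, b_eq = [], []
    # Reproduce the statistics: sum_lam [lam(m) == k] * mu[i, lam] = p[i, m, k].
    for i in range(n_prep):
        for m in range(n_meas):
            for k in range(n_out):
                row = np.zeros(n_var)
                for l, lam in enumerate(lambdas):
                    if lam[m] == k:
                        row[i * n_lam + l] = 1.0
                A_eq.append(row); b_eq.append(p[i, m, k])
    # Preparation noncontextuality: equivalent mixtures give equal distributions.
    for alpha, beta in prep_equivalences:
        for l in range(n_lam):
            row = np.zeros(n_var)
            for i in range(n_prep):
                row[i * n_lam + l] = alpha[i] - beta[i]
            A_eq.append(row); b_eq.append(0.0)

    res = linprog(c=np.zeros(n_var), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n_var, method="highs")
    return res.status == 0  # 0 = feasible optimum found

def noise_robustness(p, prep_equivalences=(), tol=1e-3):
    """Smallest uniform-noise level r for which a model exists (bisection)."""
    n_out = p.shape[2]
    uniform = np.full(p.shape, 1.0 / n_out)
    if noncontextual_model_exists(p, prep_equivalences):
        return 0.0
    lo, hi = 0.0, 1.0  # fully noisy statistics always admit a model
    while hi - lo > tol:
        r = (lo + hi) / 2
        if noncontextual_model_exists((1 - r) * p + r * uniform, prep_equivalences):
            hi = r
        else:
            lo = r
    return hi
```

Bisection over the noise parameter is justified because the constraints are linear in the model and affine in the data, so feasibility at some noise level implies feasibility at every higher level.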
On accessible GPT fragments. It has been argued in [51] that GPT systems are generally not appropriate for studying contextuality in realistic experiments. In particular, the following discrepancy arises: a GPT system obtained from an operational description of the experiment, as in Remark 9, provides the states and effects observable in practice in this experiment, but it does not delimit those that are implementable in principle. However, the preparation equivalences preserved by a preparation-noncontextual ontological model (see Section 2 and Definition 10) are operational equivalences that hold in principle.
On the resource theory of [44]. The free objects of [44] are the noncontextual behaviours, namely those that admit a generalized noncontextual model.
The operations of [44] are given by pre- and post-processings of the preparations, measurements and outcomes, and correspond to channel simulations. The free operations are those channel simulations for which all operational equivalences of the simulated system are images, under the simulation map, of operational equivalences of the simulating system.
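As a rough illustration (our own notation and parametrisation, not that of [44]), a behaviour can be stored as a table p[i, m, k] of outcome probabilities, and a pre-/post-processing simulation then acts on it by mixing preparations, stochastically choosing measurements, and classically relabelling outcomes:

```python
# Hypothetical sketch of a channel simulation by pre- and post-processing.
# mu, nu, xi are assumed row-stochastic: mu[i',:], nu[m',:] and xi[m',k,:]
# each sum to 1.
import numpy as np

def simulate(p, mu, nu, xi):
    """p:  (I, M, K)    original behaviour p[i, m, k]
       mu: (I', I)      pre-processing of preparations
       nu: (M', M)      pre-processing (choice) of measurements
       xi: (M', K, K')  post-processing of outcomes
       returns the simulated behaviour of shape (I', M', K')."""
    # p_new[i', m', k'] = sum_{i,m,k} mu[i',i] nu[m',m] p[i,m,k] xi[m',k,k']
    return np.einsum("ai,bm,imk,bkc->abc", mu, nu, p, xi)
```

A channel simulation of this form is free in the sense described above only if it additionally maps operational equivalences onto operational equivalences; that extra condition is not captured by the arithmetic here.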
To highlight the differences between the resource theory of [44] and the one presented in this work, let us consider an example where both resource theories can be applied.
An additional contribution of the present work is the following. In [44], the authors define a notion of composite of two behaviours which is equivalent to the minimal composite of Definition 5 in our framework. They then show that this composite is consistent with their resource theory, in the sense that the free objects are closed under it. In the present work we show that any consistent composite of GPT systems, not just the minimal composite of Definition 5, is compatible with the resource theory of contextuality.
Authors:
(1) Lorenzo Catani, International Iberian Nanotechnology Laboratory, Av. Mestre Jose Veiga s/n, 4715-330 Braga, Portugal ([email protected]);
(2) Thomas D. Galley, Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria and Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Vienna, Austria ([email protected]);
(3) Tomas Gonda, Institute for Theoretical Physics, University of Innsbruck, Austria ([email protected]).
[7] Another work closely related to [47] is that of Shahandeh [50]. However, the criterion of classicality introduced there additionally requires the simplex to have the same dimension as the GPT system. We do not adopt this approach. For example, it deems the rebit stabilizer theory to be nonclassical, even though, in the prepare-and-measure scenario, the rebit stabilizer theory admits a noncontextual ontological model given by the Spekkens toy theory [66], as has been shown in several works [77–79].