Abstract: The study of quantum correlation sets, initiated by Tsirelson in the 1980s and originally motivated by questions in the foundations of quantum mechanics, has more recently been tied to questions in quantum cryptography, complexity theory, operator space theory, group theory, and more. Synchronous correlation sets, introduced by Paulsen et al. [J. Funct. Anal. 270, 2188–2222 (2016)], are a subclass of correlations that has proven particularly useful to study and arises naturally in applications. We show that any correlation that is almost synchronous, in a natural ℓ1 sense, arises from a state and measurement operators that are well-approximated by a convex combination of projective measurements on a maximally entangled state. This extends a result of Paulsen et al. [J. Funct. Anal. 270, 2188–2222 (2016)] that applies to exactly synchronous correlations. Crucially, the quality of approximation is independent of the dimension of the Hilbert spaces or of the size of the correlation. Our result allows one to reduce the analysis of many classes of nonlocal games, including rigidity properties, to the case of strategies using maximally entangled states, which are generally easier to manipulate.
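The exactly synchronous case can be illustrated numerically. The sketch below (an illustration only, not the paper's almost-synchronous argument; the helper names and the use of numpy are my own) builds a correlation from projective measurements on a maximally entangled state, using the identity ⟨φ|A ⊗ B|φ⟩ = Tr(A Bᵀ)/d for |φ⟩ = (1/√d)Σᵢ|ii⟩ and taking Bob's projectors to be the transposes of Alice's; identical questions then always produce identical answers.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4          # local Hilbert-space dimension
n_inputs = 3   # number of measurement settings

def random_pvm(d, rng):
    """Projective measurement with d rank-1 outcomes: the eigenprojectors
    of a random Hermitian matrix."""
    h = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    _, vecs = np.linalg.eigh(h + h.conj().T)
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(d)]

# Alice's measurements A^x_a; Bob uses the transposes B^y_b = (A^y_b)^T.
A = [random_pvm(d, rng) for _ in range(n_inputs)]

def p(a, b, x, y):
    # On the maximally entangled state, <phi| A (x) B |phi> = Tr(A B^T)/d,
    # so with B^y_b = (A^y_b)^T the correlation is Tr(A^x_a A^y_b)/d.
    return np.real(np.trace(A[x][a] @ A[y][b])) / d

# Synchronicity: on identical questions, distinct answers never occur,
# since the projectors within one measurement are mutually orthogonal.
for x in range(n_inputs):
    for a in range(d):
        for b in range(d):
            if a != b:
                assert abs(p(a, b, x, x)) < 1e-10
```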

Publication: Journal of Mathematical Physics Vol.: 63 No.: 2 ISSN: 0022-2488

ID: CaltechAUTHORS:20211006-163212999


Abstract: The complexity class NP characterizes the collection of computational problems that have efficiently verifiable solutions. With the goal of classifying computational problems that seem to lie beyond NP, starting in the 1980s complexity theorists have considered extensions of the notion of efficient verification that allow for the use of randomness (the class MA), interaction (the class IP), and the possibility to interact with multiple proofs, or provers (the class MIP). The study of these extensions led to the celebrated PCP theorem and its applications to hardness of approximation and the design of cryptographic protocols. In this work, we study a fourth modification to the notion of efficient verification that originates in the study of quantum entanglement. We prove the surprising result that every problem that is recursively enumerable, including the Halting problem, can be efficiently verified by a classical probabilistic polynomial-time verifier interacting with two all-powerful but noncommunicating provers sharing entanglement. The result resolves long-standing open problems in the foundations of quantum mechanics (Tsirelson's problem) and operator algebras (Connes' embedding problem).

Publication: Communications of the ACM Vol.: 64 No.: 11 ISSN: 0001-0782

ID: CaltechAUTHORS:20200417-131646685


Abstract: Self-testing is a method to characterise an arbitrary quantum system based only on its classical input-output correlations, and plays an important role in device-independent quantum information processing as well as quantum complexity theory. Prior works on self-testing require the assumption that the system's state is shared among multiple parties that only perform local measurements and cannot communicate. Here, we replace the setting of multiple non-communicating parties, which is difficult to enforce in practice, by a single computationally bounded party. Specifically, we construct a protocol that allows a classical verifier to robustly certify that a single computationally bounded quantum device must have prepared a Bell pair and performed single-qubit measurements on it, up to a change of basis applied to both the device's state and measurements. This means that under computational assumptions, the verifier is able to certify the presence of entanglement, a property usually closely associated with two separated subsystems, inside a single quantum device. To achieve this, we build on techniques first introduced by Brakerski et al. (2018) and Mahadev (2018) which allow a classical verifier to constrain the actions of a quantum device assuming the device does not break post-quantum cryptography.

Publication: Quantum Vol.: 5 ISSN: 2521-327X

ID: CaltechAUTHORS:20200417-132557882


Abstract: We consider a new model for the testing of untrusted quantum devices, consisting of a single polynomial time bounded quantum device interacting with a classical polynomial time verifier. In this model, we propose solutions to two tasks—a protocol for efficient classical verification that the untrusted device is “truly quantum” and a protocol for producing certifiable randomness from a single untrusted quantum device. Our solution relies on the existence of a new cryptographic primitive for constraining the power of an untrusted quantum device: post-quantum secure trapdoor claw-free functions that must satisfy an adaptive hardcore bit property. We show how to construct this primitive based on the hardness of the learning with errors (LWE) problem.

Publication: Journal of the ACM Vol.: 68 No.: 5 ISSN: 0004-5411

ID: CaltechAUTHORS:20210921-144712064


Abstract: The generation of certifiable randomness is the most fundamental information-theoretic task that meaningfully separates quantum devices from their classical counterparts. We propose a protocol for exponential certified randomness expansion using a single quantum device. The protocol calls for the device to implement a simple quantum circuit of constant depth on a 2D lattice of qubits. The output of the circuit can be verified classically in linear time, and is guaranteed to contain a polynomial number of certified random bits assuming that the device used to generate the output operated using a (classical or quantum) circuit of sub-logarithmic depth. This assumption contrasts with the locality assumption used for randomness certification based on Bell inequality violation and more recent proposals for randomness certification based on computational assumptions. Furthermore, to demonstrate randomness generation it is sufficient for a device to sample from the ideal output distribution within constant statistical distance. Our procedure is inspired by recent work of Bravyi et al. (Science 362(6412):308–311, 2018), who introduced a relational problem that can be solved by a constant-depth quantum circuit, but provably cannot be solved by any classical circuit of sub-logarithmic depth. We develop the discovery of Bravyi et al. into a framework for robust randomness expansion. Our results lead to a new proposal for a demonstration of quantum advantage that has several advantages over existing proposals. First, our proposal does not rest on any complexity-theoretic conjectures, but relies on the physical assumption that the adversarial device being tested implements a circuit of sub-logarithmic depth. Second, success on our task can be easily verified in classical linear time.
Finally, our task is more noise-tolerant than most other existing proposals that can only tolerate multiplicative error, or require additional conjectures from complexity theory; in contrast, we are able to allow a small constant additive error in total variation distance between the sampled and ideal distributions.

Publication: Communications in Mathematical Physics Vol.: 382 No.: 1 ISSN: 0010-3616

ID: CaltechAUTHORS:20190320-100502117


Abstract: We introduce a three-player nonlocal game, with a finite number of classical questions and answers, such that the optimal success probability of 1 in the game can only be achieved in the limit of strategies using arbitrarily high-dimensional entangled states. Precisely, there exists a constant 0 < c ≤ 1 such that to succeed with probability 1 − ε in the game it is necessary to use an entangled state of at least Ω(ε^(−c)) qubits, and it is sufficient to use a state of at most O(ε^(−1)) qubits. The game is based on the coherent state exchange game of Leung et al. (CJTCS 2013). In our game, the task of the quantum verifier is delegated to a third player by a classical referee. Our results complement those of Slofstra (arXiv:1703.08618) and Dykema et al. (arXiv:1709.05032), who obtained two-player games with similar (though quantitatively weaker) properties based on the representation theory of finitely presented groups and C∗-algebras, respectively.

Publication: Quantum Vol.: 4 ISSN: 2521-327X

ID: CaltechAUTHORS:20190204-154622144


Abstract: We show that every language in QMA admits a classical-verifier, quantum-prover zero-knowledge argument system which is sound against quantum polynomial-time provers and zero-knowledge for classical (and quantum) polynomial-time verifiers. The protocol builds upon two recent results: a computational zero-knowledge proof system for languages in QMA, with a quantum verifier, introduced by Broadbent et al. (FOCS 2016), and an argument system for languages in QMA, with a classical verifier, introduced by Mahadev (FOCS 2018).

Publication: Quantum Vol.: 4 ISSN: 2521-327X

ID: CaltechAUTHORS:20190320-095213331


Abstract: Rapid technological advances point to a near future where engineered devices based on the laws of quantum mechanics are able to implement computations that can no longer be emulated on a classical computer. Once that stage is reached, will it be possible to verify the results of the quantum device? Recently, Mahadev introduced a solution to the following problem: Is it possible to delegate a quantum computation to a quantum device in a way that the final outcome of the computation can be verified on a classical computer, given that the device may be faulty or adversarial and given only the ability to generate classical instructions and obtain classical readout information in return? Mahadev's solution combines the framework of interactive proof systems from complexity theory with an ingenious use of classical cryptographic techniques to tie a "cryptographic leash" around the quantum device. In these notes I give a self-contained introduction to her elegant solution, explaining the required concepts from complexity, quantum computing, and cryptography, and how they are brought together in Mahadev's protocol for classical verification of quantum computations.

Publication: Bulletin of the American Mathematical Society Vol.: 57 No.: 1 ISSN: 0273-0979

ID: CaltechAUTHORS:20200316-150528835


Abstract: Quantum mechanics and the theory of operator algebras have been intertwined since their origin. In the 1930s [20] von Neumann laid the foundations for the theory of (what are now known as) von Neumann algebras with the explicit goal of establishing Heisenberg’s matrix mechanics on a rigorous footing (quoting from the preface, in the translation by Beyer: “The object of this book is to present the new quantum mechanics in a unified representation which, so far as it is possible and useful, is mathematically rigorous”). Following the initial explorations of Murray and von Neumann, the new theory took on a life of its own, eventually leading to multiple applications unrelated to quantum mechanics, such as to free probability or noncommutative geometry.

Publication: Notices of the American Mathematical Society Vol.: 66 No.: 10 ISSN: 0002-9920

ID: CaltechAUTHORS:20200728-152043230


Abstract: Quantum cryptography promises levels of security that are impossible to attain in a classical world. Can this security be guaranteed to classical users of a quantum protocol, who may not even trust the quantum devices used to implement the protocol? This central question dates back to the early 1990s when the challenge of achieving Device-Independent Quantum Key Distribution (DIQKD) was first formulated. We answer the challenge by rigorously proving the device-independent security of an entanglement-based protocol building on Ekert's original proposal for quantum key distribution. The proof of security builds on techniques from the classical theory of pseudo-randomness to achieve a new quantitative understanding of the non-local nature of quantum correlations.

Publication: Communications of the ACM Vol.: 62 No.: 4 ISSN: 0001-0782

ID: CaltechAUTHORS:20190321-152633091


Abstract: Device-independent security is the gold standard for quantum cryptography: not only is security based entirely on the laws of quantum mechanics, but it holds irrespective of any a priori assumptions on the quantum devices used in a protocol, making it particularly applicable in a quantum-wary environment. While the existence of device-independent protocols for tasks such as randomness expansion and quantum key distribution has recently been established, the underlying proofs of security remain very challenging, yield rather poor key rates, and demand very high quality quantum devices, thus making them all but impossible to implement in practice. We introduce a technique for the analysis of device-independent cryptographic protocols. We provide a flexible protocol and give a security proof that provides quantitative bounds that are asymptotically tight, even in the presence of general quantum adversaries. At a high level our approach amounts to establishing a reduction to the scenario in which the untrusted device operates in an identical and independent way in each round of the protocol. This is achieved by leveraging the sequential nature of the protocol and makes use of a newly developed tool, the “entropy accumulation theorem” of Dupuis, Fawzi, and Renner [Entropy Accumulation, preprint, 2016]. As concrete applications we give simple and modular security proofs for device-independent quantum key distribution and randomness expansion protocols based on the CHSH inequality. For both tasks, we establish essentially optimal asymptotic key rates and noise tolerance. In view of recent experimental progress, which has culminated in loophole-free Bell tests, it is likely that these protocols can be practically implemented in the near future.

Publication: SIAM Journal on Computing Vol.: 48 No.: 1 ISSN: 0097-5397

ID: CaltechAUTHORS:20190206-150209557


Abstract: We relate the amount of entanglement required to play linear system non-local games near-optimally to the hyperlinear profile of finitely presented groups. By calculating the hyperlinear profile of a certain group, we give an example of a finite non-local game for which the amount of entanglement required to play ϵ-optimally is at least Ω(1/ϵ^k), for some k > 0. Since this function approaches infinity as ϵ approaches zero, this provides a quantitative version of a theorem of the first author.

Publication: Annales Henri Poincaré Vol.: 19 No.: 10 ISSN: 1424-0637

ID: CaltechAUTHORS:20180926-132554192


Abstract: Bell-inequality violations establish that two systems share some quantum entanglement. We give a simple test to certify that two systems share an asymptotically large amount of entanglement, n EPR states. The test is efficient: unlike earlier tests that play many games, in sequence or in parallel, our test requires only one or two CHSH games. One system is directed to play a CHSH game on a random specified qubit i, and the other is told to play games on qubits {i,j}, without knowing which index is i. The test is robust: a success probability within δ of optimal guarantees distance O(n^(5/2)√δ) from n EPR states. However, the test does not tolerate constant δ; it breaks down for δ = Ω̃(1/√n). We give an adversarial strategy that succeeds within δ of the optimum probability using only Õ(δ^(−2)) EPR states.
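For reference, the CHSH game invoked by this test has classical value 3/4 and quantum value cos²(π/8) ≈ 0.854. A short brute-force check of the classical value (an illustration of the standard game, not part of the paper's protocol):

```python
import itertools, math

def chsh_win(a_strategy, b_strategy):
    """Average winning probability of deterministic strategies,
    each a map from question bit to answer bit given as a 2-tuple;
    the players win iff a XOR b == x AND y."""
    wins = 0
    for x, y in itertools.product([0, 1], repeat=2):
        if a_strategy[x] ^ b_strategy[y] == x & y:
            wins += 1
    return wins / 4

# Enumerate all 4 x 4 pairs of deterministic strategies.
classical_value = max(
    chsh_win(a, b)
    for a in itertools.product([0, 1], repeat=2)
    for b in itertools.product([0, 1], repeat=2)
)
quantum_value = math.cos(math.pi / 8) ** 2

print(classical_value)          # 0.75
print(round(quantum_value, 4))  # 0.8536
```

Shared randomness cannot help beyond the deterministic maximum, so the enumeration above does give the full classical value.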

Publication: Quantum Vol.: 2 ISSN: 2521-327X

ID: CaltechAUTHORS:20171108-142443122


Abstract: We show that for any ε > 0 there is an XOR game G = G(ε) with Θ(ε^(−1/5)) inputs for one player and Θ(ε^(−2/5)) inputs for the other player such that Ω(ε^(−1/5)) ebits are required for any strategy achieving bias that is at least a multiplicative factor (1−ε) from optimal. This gives an exponential improvement in both the number of inputs or outputs and the noise tolerance of any previously-known self-test for highly entangled states. Up to the exponent −1/5 the scaling of our bound with ε is tight: for any XOR game there is an ε-optimal strategy using ⌈ε^(−1)⌉ ebits, irrespective of the number of questions in the game.

Publication: Quantum Information and Computation Vol.: 18 No.: 7-8 ISSN: 1533-7146

ID: CaltechAUTHORS:20180926-101512002


Abstract: Device-independent cryptography goes beyond conventional quantum cryptography by providing security that holds independently of the quality of the underlying physical devices. Device-independent protocols are based on the quantum phenomena of non-locality and the violation of Bell inequalities. This high level of security could so far only be established under conditions which are not achievable experimentally. Here we present a property of entropy, termed “entropy accumulation”, which asserts that the total amount of entropy of a large system is the sum of its parts. We use this property to prove the security of cryptographic protocols, including device-independent quantum key distribution, while achieving essentially optimal parameters. Recent experimental progress, which enabled loophole-free Bell tests, suggests that the achieved parameters are technologically accessible. Our work hence provides the theoretical groundwork for experimental demonstrations of device-independent cryptography.

Publication: Nature Communications Vol.: 9 ISSN: 2041-1723

ID: CaltechAUTHORS:20180130-110708768


Abstract: The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017)]. The convergence proof, however, relies on “rigorous renormalization group” (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.

Publication: Physical Review B Vol.: 96 No.: 21 ISSN: 2469-9950

ID: CaltechAUTHORS:20170627-090122309


Abstract: One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^(O(log n)) algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(nM(n)), where M(n) is the time required to multiply two n × n matrices.

Publication: Communications in Mathematical Physics Vol.: 356 No.: 1 ISSN: 0010-3616

ID: CaltechAUTHORS:20160321-072746620


Abstract: In this work we consider the ground space connectivity problem for commuting local Hamiltonians. The ground space connectivity problem asks whether it is possible to go from one (efficiently preparable) state to another by applying a polynomial length sequence of 2-qubit unitaries while remaining at all times in a state with low energy for a given Hamiltonian H. It was shown in [Gharibian and Sikora, ICALP15] that this problem is QCMA-complete for general local Hamiltonians, where QCMA is defined as QMA with a classical witness and BQP verifier. Here we show that the commuting version of the problem is also QCMA-complete. This provides one of the first examples where commuting local Hamiltonians exhibit complexity theoretic hardness equivalent to general local Hamiltonians.

Publication: Quantum Vol.: 1 ISSN: 2521-327X

ID: CaltechAUTHORS:20171011-112512941


Abstract: The field of quantum information was born out of a sequence of surprising discoveries in the 1980s, all building on the same deep insight: the counter-intuitive quantum properties of particles such as photons or electrons can be put to task in order to accomplish certain computational, cryptographic, and information-theoretic tasks impossible to realize by purely classical means. A famous example is the cryptographic problem of key distribution, for which Bennett and Brassard devised the first quantum protocol in 1984 [6] and whose security relies on the no-cloning principle of quantum mechanics. Another example is the computational problem of factoring large numbers, for which Shor devised the first efficient quantum algorithm in 1994 [32] by exploiting the possibility for quantum systems to evolve in superpositions of exponentially many different states.

Publication: New Journal of Physics Vol.: 18 No.: 10 ISSN: 1367-2630

ID: CaltechAUTHORS:20161205-151744898


Abstract: We show that for any ε > 0 the problem of finding a factor (2 − ε) approximation to the entangled value of a three-player XOR game is NP-hard. Equivalently, the problem of approximating the largest possible quantum violation of a tripartite Bell correlation inequality to within any multiplicative constant is NP-hard. These results are the first constant-factor hardness of approximation results for entangled games or quantum violations of Bell inequalities shown under the sole assumption that P ≠ NP. They can be thought of as an extension of Håstad's optimal hardness of approximation results for MAX-E3-LIN2 [J. ACM, 48 (2001), pp. 798–859] to the entangled-player setting. The key technical component of our work is a soundness analysis of a plane-vs-point low-degree test against entangled players. This extends and simplifies the analysis of the multilinearity test by Ito and Vidick [Proceedings of the 53rd FOCS, IEEE, Piscataway, NJ, 2012, pp. 243–252]. Our results demonstrate the possibility of efficient reductions between entangled-player games, and our techniques may lead to further hardness of approximation results.
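A concrete member of the class of three-player XOR games considered here is the GHZ game (used purely as an illustration, not this paper's construction): questions (r, s, t) are drawn uniformly from {000, 011, 101, 110}, and the players win iff a XOR b XOR c = r OR s OR t. Its classical value is 3/4, while entangled players sharing a GHZ state can win with probability 1. A brute-force computation of the classical value:

```python
import itertools

QUESTIONS = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def value(fa, fb, fc):
    """Winning probability of deterministic strategies fa, fb, fc,
    each a map {0,1} -> {0,1} given as a 2-tuple."""
    wins = sum(
        (fa[r] ^ fb[s] ^ fc[t]) == (r | s | t)
        for r, s, t in QUESTIONS
    )
    return wins / len(QUESTIONS)

# Brute force over the 2^2 deterministic strategies per player.
strategies = list(itertools.product([0, 1], repeat=2))
classical_value = max(
    value(fa, fb, fc)
    for fa in strategies for fb in strategies for fc in strategies
)
print(classical_value)  # 0.75
```

That no deterministic strategy wins all four questions follows from a parity argument: XOR-ing the four winning constraints makes every answer bit appear twice on the left, yet the right-hand sides sum to 1 mod 2.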

Publication: SIAM Journal on Computing Vol.: 45 No.: 3 ISSN: 0097-5397

ID: CaltechAUTHORS:20161103-145636436


Abstract: The detectability lemma is a useful tool for probing the structure of gapped ground states of frustration-free Hamiltonians of lattice spin models. The lemma provides an estimate on the error incurred by approximating the ground space projector with a product of local projectors. We provide a simpler proof for the detectability lemma which applies to an arbitrary ordering of the local projectors, and show that it is tight up to a constant factor. As an application, we show how the lemma can be combined with a strong converse by Gao to obtain local spectral gap amplification: We show that by coarse graining a local frustration-free Hamiltonian with a spectral gap γ>0 to a length scale O(γ^(−1/2)), one gets a Hamiltonian with an Ω(1) spectral gap.

Publication: Physical Review B Vol.: 93 No.: 20 ISSN: 1098-0121

ID: CaltechAUTHORS:20160318-153303794


Abstract: In the context of multiplayer games, the parallel repetition problem can be phrased as follows: given a game G with optimal winning probability 1 - α and its repeated version G^n (in which n games are played together, in parallel), can the players use strategies that are substantially better than ones in which each game is played independently? This question is relevant in physics for the study of correlations and plays an important role in computer science in the context of complexity and cryptography. In this paper, the case of multiplayer non-signaling games is considered, i.e., the only restriction on the players is that they are not allowed to communicate during the game. For complete-support games (games where all possible combinations of questions have non-zero probability to be asked) with any number of players, we prove a threshold theorem stating that the probability that non-signaling players win more than a fraction 1-α+β of the n games is exponentially small in nβ^2 for every 0 ≤ β ≤ α. For games with incomplete support, we derive a similar statement for a slightly modified form of repetition. The result is proved using a new technique based on a recent de Finetti theorem, which allows us to avoid central technical difficulties that arise in standard proofs of parallel repetition theorems.

Publication: IEEE Transactions on Information Theory Vol.: 62 No.: 3 ISSN: 0018-9448

ID: CaltechAUTHORS:20160318-101440389


Abstract: This review article is concerned with a recently uncovered connection between operator spaces, a noncommutative extension of Banach spaces, and quantum nonlocality, a striking phenomenon which underlies many of the applications of quantum mechanics to information theory, cryptography, and algorithms. Using the framework of nonlocal games, we relate measures of the nonlocality of quantum mechanics to certain norms in the Banach and operator space categories. We survey recent results that exploit this connection to derive large violations of Bell inequalities, study the complexity of the classical and quantum values of games and their relation to Grothendieck inequalities, and quantify the nonlocality of different classes of entangled states.

Publication: Journal of Mathematical Physics Vol.: 57 No.: 1 ISSN: 0022-2488

ID: CaltechAUTHORS:20160225-142342994


Abstract: Quantum entanglement is known to provide a strong advantage in many two-party distributed tasks. We investigate the question of how much entanglement is needed to reach optimal performance. For the first time we show that there exists a purely classical scenario for which no finite amount of entanglement suffices. To this end we introduce a simple two-party nonlocal game H, inspired by Lucien Hardy’s paradox. In our game each player has only two possible questions and can provide bit strings of any finite length as answer. We exhibit a sequence of strategies which use entangled states in increasing dimension d and succeed with probability 1 − O(d^(−c)) for some c ≥ 0.13. On the other hand, we show that any strategy using an entangled state of local dimension d has success probability at most 1 − Ω(d^(−2)). In addition, we show that any strategy restricted to producing answers in a set of cardinality at most d has success probability at most 1 − Ω(d^(−2)). Finally, we generalize our construction to derive similar results starting from any game G with two questions per player and finite answer sets in which quantum strategies have an advantage.

Publication: Quantum Information and Computation Vol.: 15 No.: 15-16 ISSN: 1533-7146

ID: CaltechAUTHORS:20160818-080941623


Abstract: We introduce quantum XOR games, a model of two-player, one-round games that extends the model of XOR games by allowing the referee’s questions to the players to be quantum states. We give examples showing that quantum XOR games exhibit a wide range of behaviors that are known not to exist for standard XOR games, such as cases in which the use of entanglement leads to an arbitrarily large advantage over the use of no entanglement. By invoking two deep extensions of Grothendieck’s inequality, we present an efficient algorithm that gives a constant-factor approximation to the best performance that players can obtain in a given game, both in the case that they have no shared entanglement and that they share unlimited entanglement. As a byproduct of the algorithm, we prove some additional interesting properties of quantum XOR games, such as the fact that sharing a maximally entangled state of arbitrary dimension gives only a small advantage over having no entanglement at all.

Publication: ACM Transactions on Computation Theory Vol.: 7 No.: 4 ISSN: 1942-3454

ID: CaltechAUTHORS:20160321-083901879


Abstract: The density matrix renormalization group method has been extensively used to study the ground state of 1D many-body systems since its introduction two decades ago. In spite of its wide use, this heuristic method is known to fail in certain cases and no certifiably correct implementation is known, leaving researchers faced with an ever-growing toolbox of heuristics, none of which is guaranteed to succeed. Here we develop a polynomial time algorithm that provably finds the ground state of any 1D quantum system described by a gapped local Hamiltonian with constant ground-state energy. The algorithm is based on a framework that combines recently discovered structural features of gapped 1D systems with an efficient construction of a class of operators called approximate ground-state projections (AGSPs). The combination of these tools yields a method that is guaranteed to succeed in all 1D gapped systems. An AGSP-centric approach may help guide the search for algorithms for more general quantum systems, including for the central challenge of 2D systems, where even heuristic methods have had more limited success.

Publication: Nature Physics Vol.: 11 No.: 7 ISSN: 1745-2473

ID: CaltechAUTHORS:20150422-093309397


Abstract: We study the behavior of the entangled value of two-player one-round projection games under parallel repetition. We show that for any projection game G of entangled value 1 − ϵ < 1, the value of the k-fold repetition of G goes to zero as O((1 − ϵ^c)^k) for some universal constant c ≥ 1; if, furthermore, the constraint graph of G is expanding, we obtain the optimal c = 1. Previously, exponential decay of the entangled value under parallel repetition was only known for the case of XOR and unique games. To prove the theorem, we extend an analytical framework introduced by Dinur and Steurer for the study of the classical value of projection games under parallel repetition. Our proof, as theirs, relies on the introduction of a simple relaxation of the entangled value that is perfectly multiplicative. The main technical component of the proof consists in showing that the relaxed value remains tightly connected to the entangled value, thereby establishing the parallel repetition theorem. More generally, we obtain results on the behavior of the entangled value under products of arbitrary (not necessarily identical) projection games. Relating our relaxed value to the entangled value is done by giving an algorithm for converting a relaxed variant of quantum strategies that we call “vector quantum strategy” to a quantum strategy. The algorithm is considerably simpler in case the bipartite distribution of questions in the game has good expansion properties. When this is not the case, the algorithm relies on a quantum analogue of Holenstein’s correlated sampling lemma, which may be of independent interest.
Our “quantum correlated sampling lemma” generalizes results of van Dam and Hayden on universal embezzlement to the following approximate scenario: two non-communicating parties, given classical descriptions of bipartite states |ψ⟩,|φ⟩, respectively, such that |ψ⟩≈|φ⟩, are able to locally generate a joint entangled state |Ψ⟩≈|ψ⟩≈|φ⟩ using an initial entangled state that is independent of their inputs.

Publication: Computational Complexity Vol.: 24 No.: 2 ISSN: 1016-3328

ID: CaltechAUTHORS:20150615-140934465

]]>

Abstract: Quantum cryptography promises levels of security that are impossible to replicate in a classical world. Can this security be guaranteed even when the quantum devices on which the protocol relies are untrusted? This central question dates back to the early 1990s when the challenge of achieving device-independent quantum key distribution was first formulated. We answer this challenge by rigorously proving the device-independent security of a slight variant of Ekert's original entanglement-based protocol against the most general (coherent) attacks. The resulting protocol is robust: While assuming only that the devices can be modeled by the laws of quantum mechanics and are spatially isolated from each other and from any adversary's laboratory, it achieves a linear key rate and tolerates a constant noise rate in the devices. In particular, the devices may have quantum memory and share arbitrary quantum correlations with the eavesdropper. The proof of security is based on a new quantitative understanding of the monogamous nature of quantum correlations in the context of a multiparty protocol.

Publication: Physical Review Letters Vol.: 113 No.: 14 ISSN: 0031-9007

ID: CaltechAUTHORS:20150108-142044094

]]>

Abstract: The classical Grothendieck inequality has applications to the design of approximation algorithms for NP-hard optimization problems. We show that an algorithmic interpretation may also be given for a noncommutative generalization of the Grothendieck inequality due to Pisier and Haagerup. Our main result, an efficient rounding procedure for this inequality, leads to a polynomial-time constant-factor approximation algorithm for an optimization problem which generalizes the Cut Norm problem of Frieze and Kannan, and is shown here to have additional applications to robust principal component analysis and the orthogonal Procrustes problem.

Publication: Theory of Computing Vol.: 10 No.: 1 ISSN: 1557-2862

ID: CaltechAUTHORS:20200731-152129927

]]>

Abstract: We provide alternative proofs of two recent Grothendieck theorems for jointly completely bounded bilinear forms, originally due to Pisier and Shlyakhtenko (Grothendieck's theorem for operator spaces, Invent. Math. 150(2002), 185-217) and Haagerup and Musat (The Effros-Ruan conjecture for bilinear forms on C*-algebras, Invent. Math. 174(2008), 139-163). Our proofs are elementary and are inspired by the so-called embezzlement states in quantum information theory. Moreover, our proofs lead to quantitative estimates.

Publication: Journal of Operator Theory Vol.: 71 No.: 2 ISSN: 1841-7744

ID: CaltechAUTHORS:20160318-152323237

]]>

Abstract: The study of quantum-mechanical violations of Bell inequalities is motivated by the investigation, and the eventual demonstration, of the nonlocal properties of entanglement. In recent years, Bell inequalities have found a fruitful re-formulation using the language of multiplayer games originating from Computer Science. This paper studies the nonlocal properties of entanglement in the context of the simplest such games, called XOR games. When there are two players, it is well known that the maximum bias—the advantage over random play—of players using entanglement can be at most a constant times greater than that of classical players. Recently, Pérez-García et al. (Commun. Math. Phys. 279:455, 2008) showed that no such bound holds when there are three or more players: the use of entanglement can provide an unbounded advantage, one that scales with the number of questions in the game. Their proof relies on non-trivial results from operator space theory, and gives a non-explicit existence proof, leading to a game with a very large number of questions and only a loose control over the local dimension of the players’ shared entanglement. We give a new, simple and explicit (though still probabilistic) construction of a family of three-player XOR games which achieve a large quantum-classical gap (QC-gap). This QC-gap is exponentially larger than the one given by Pérez-García et al. in terms of the size of the game, achieving a QC-gap of order √N with N^2 questions per player. In terms of the dimension of the entangled state required, we achieve the same (optimal) QC-gap of √N for a state of local dimension N per player. Moreover, the optimal entangled strategy is very simple, involving observables defined by tensor products of the Pauli matrices. Additionally, we give the first upper bound on the maximal QC-gap in terms of the number of questions per player, showing that our construction is only quadratically off in that respect.
Our results rely on probabilistic estimates on the norm of random matrices and higher-order tensors which may be of independent interest.

Publication: Communications in Mathematical Physics Vol.: 321 No.: 1 ISSN: 0010-3616

ID: CaltechAUTHORS:20160318-154623344

]]>

Abstract: The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns providing fascinating vistas on the foundations of quantum mechanics and multipartite entanglement, topology and the so-called phenomenon of topological order, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status in this ongoing story, tailored to a general theory-of-CS audience.

Publication: ACM SIGACT News Vol.: 44 No.: 2 ISSN: 0163-5700

ID: CaltechAUTHORS:20140910-135821275

]]>

Abstract: We study multipartite entanglement in the context of XOR games. In particular, we study the ratio of the entangled and classical biases, which measure the maximum advantage of a quantum or classical strategy over a uniformly random strategy. For the case of two-player XOR games, Tsirelson proved that this ratio is upper bounded by the celebrated Grothendieck constant. In contrast, Pérez-García et al. proved the existence of entangled states that give quantum players an unbounded advantage over classical players in a three-player XOR game. We show that the multipartite entangled states that are most often seen in today’s literature can only lead to a bias that is a constant factor larger than the classical bias. These states include GHZ states, any state local-unitarily equivalent to combinations of GHZ and maximally entangled states shared between different subsets of the players (e.g., stabilizer states), as well as generalizations of GHZ states of the form ∑_i α_i |i⟩⋯|i⟩ for arbitrary amplitudes α_i. Our results have the following surprising consequence: classical three-player XOR games do not follow an XOR parallel repetition theorem, even a very weak one. Besides this, we discuss implications of our results for communication complexity and hardness of approximation. Our proofs are based on novel applications of extensions of Grothendieck’s inequality, due to Blei and Tonge, and Carne, generalizing Tsirelson’s use of Grothendieck’s inequality to bound the bias of two-player XOR games.

Publication: Quantum Information and Computation Vol.: 13 No.: 3-4 ISSN: 1533-7146

ID: CaltechAUTHORS:20140909-144447941

]]>

Abstract: Randomness extraction involves the processing of purely classical information and is therefore usually studied within the framework of classical probability theory. However, such a classical treatment is generally too restrictive for applications where side information about the values taken by classical random variables may be represented by the state of a quantum system. This is particularly relevant in the context of cryptography, where an adversary may make use of quantum devices. Here, we show that the well-known construction paradigm for extractors proposed by Trevisan is sound in the presence of quantum side information. We exploit the modularity of this paradigm to give several concrete extractor constructions, which, e.g., extract all the conditional (smooth) min-entropy of the source using a seed of length polylogarithmic in the input, or only require the seed to be weakly random.

Publication: SIAM Journal on Computing Vol.: 41 No.: 4 ISSN: 0097-5397

ID: CaltechAUTHORS:20160322-084353163

]]>

Abstract: We introduce a protocol through which a pair of quantum mechanical devices may be used to generate n random bits that are ε-close in statistical distance to n uniformly distributed bits, starting from a seed of O(log n log 1/ε) uniform bits. The bits generated are certifiably random, based only on a simple statistical test that can be performed by the user, and on the assumption that the devices obey the no-signalling principle. No other assumptions are placed on the devices' inner workings: it is not necessary to even assume the validity of quantum mechanics.

Publication: Philosophical Transactions A: Mathematical, Physical and Engineering Sciences Vol.: 370 No.: 1971 ISSN: 1364-503X

ID: CaltechAUTHORS:20200804-084834826

]]>

Abstract: Given two sets A, B ⊆ ℝ^n, a measure of their correlation is given by the expected squared inner product between random x ∈ A and y ∈ B. We prove an inequality showing that no two sets of large enough Gaussian measure (at least e^(−δn) for some constant δ > 0) can have correlation substantially lower than would two random sets of the same size. Our proof is based on a concentration inequality for the overlap of a random Gaussian vector on a large set. As an application, we show how our result can be combined with the partition bound of Jain and Klauck to give a simpler proof of a recent linear lower bound on the randomized communication complexity of the Gap-Hamming-Distance problem due to Chakrabarti and Regev.

Publication: Chicago Journal of Theoretical Computer Science Vol.: 18 No.: 1 ISSN: 1073-0486

ID: CaltechAUTHORS:20200804-133447851

]]>

Abstract: We prove that the Banach algebra formed by the space of compact operators on a Hilbert space endowed with the Schur product is a quotient of a uniform algebra (also known as a Q-algebra). Together with a similar result of Pérez-García for the trace class, this completes the answer to a long-standing question of Varopoulos.

Publication: Journal of Functional Analysis Vol.: 262 No.: 1 ISSN: 0022-1236

ID: CaltechAUTHORS:20200728-153958382

]]>

Abstract: A central question in our understanding of the physical world is how our knowledge of the whole relates to our knowledge of the individual parts. One aspect of this question is the following: to what extent does ignorance about a whole preclude knowledge of at least one of its parts? Relying purely on classical intuition, one would certainly be inclined to conjecture that a strong ignorance of the whole cannot come without significant ignorance of at least one of its parts. Indeed, we show that this reasoning holds in any noncontextual (NC) hidden-variable (HV) model. Curiously, however, such a conjecture is false in quantum theory: we provide an explicit example where a large ignorance about the whole can coexist with an almost perfect knowledge of each of its parts. More specifically, we provide a simple information-theoretic inequality satisfied in any NC HV model, but which can be arbitrarily violated by quantum mechanics.

Publication: Physical Review Letters Vol.: 107 No.: 3 ISSN: 0031-9007

ID: CaltechAUTHORS:20160318-151328788

]]>

Abstract: We establish the first hardness results for the problem of computing the value of one-round games played by a verifier and a team of provers who can share quantum entanglement. In particular, we show that it is NP-hard to approximate within an inverse polynomial the value of a one-round game with (i) a quantum verifier and two entangled provers or (ii) a classical verifier and three entangled provers. Previously it was not even known if computing the value exactly is NP-hard. We also describe a mathematical conjecture, which, if true, would imply hardness of approximation of entangled-prover games to within a constant. Using our techniques we also show that every language in PSPACE has a two-prover one-round interactive proof system with perfect completeness and soundness 1-1/poly even against entangled provers. We start our proof by describing two ways to modify classical multiprover games to make them resistant to entangled provers. We then show that a strategy for the modified game that uses entanglement can be “rounded” to one that does not. The results then follow from classical inapproximability bounds. Our work implies that, unless P=NP, the values of entangled-prover games cannot be computed by semidefinite programs that are polynomial in the size of the verifier's system, a method that has been successful for more restricted quantum games.

Publication: SIAM Journal on Computing Vol.: 40 No.: 3 ISSN: 0097-5397

ID: CaltechAUTHORS:20110713-155400829

]]>

Abstract: Recent numerical investigations [K. Pál and T. Vértesi, Phys. Rev. A 82, 022116 (2010)] suggest that the I3322 inequality, arguably the simplest extremal Bell inequality after the CHSH inequality, has a very rich structure in terms of the entangled states and measurements that maximally violate it. Here we show that for this inequality the maximally entangled state of any dimension achieves the same violation as just a single EPR pair. In contrast, stronger violations can be achieved using higher-dimensional states which are less entangled. This shows that the maximally entangled state is not the most nonlocal resource, even when one restricts attention to the simplest extremal Bell inequalities.

Publication: Physical Review A Vol.: 83 No.: 5 ISSN: 1050-2947

ID: CaltechAUTHORS:20160318-115742532

]]>

Abstract: The central question in quantum multi-prover interactive proof systems is whether or not entanglement shared among provers affects the verification power of the proof system. We study for the first time positive aspects of prior entanglement and show how it can be used to parallelize any multi-prover quantum interactive proof system to a one-round system with perfect completeness, soundness bounded away from one by an inverse-polynomial in the input size, and one extra prover. Alternatively, we can also parallelize to a three-turn system with the same number of provers, where the verifier only broadcasts the outcome of a coin flip. This “public-coin” property is somewhat surprising, since in the classical case public-coin multi-prover interactive proofs are equivalent to single-prover ones.

Publication: Computational Complexity Vol.: 18 No.: 2 ISSN: 1016-3328

ID: CaltechAUTHORS:20200805-150138118

]]>

Abstract: Geometric intuition suggests that the Néron–Tate height of Heegner points on a rational elliptic curve E should be asymptotically governed by the degree of its modular parametrisation. In this paper, we show that this geometric intuition asymptotically holds on average over a subset of discriminants. We also study the asymptotic behaviour of traces of Heegner points on average over a subset of discriminants and find a difference according to the rank of the elliptic curve. By the Gross–Zagier formulae, such heights are related to the special value at the critical point for either the derivative of the Rankin–Selberg convolution of E with a certain weight one theta series attached to the principal ideal class of an imaginary quadratic field or the twisted L-function of E by a quadratic Dirichlet character. Asymptotic formulae for the first moments associated with these L-series and L-functions are proved, and experimental results are discussed. The appendix contains some conjectural applications of our results to the problem of the discretisation of odd quadratic twists of elliptic curves.

Publication: Canadian Journal of Mathematics Vol.: 60 No.: 6 ISSN: 0008-414X

ID: CaltechAUTHORS:20190320-142201374

]]>

Abstract: The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity 2^(O(n)), which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS have ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the 2^(O(n)) complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is (4/3+ε)^n polynomial-time operations, and whose space requirement is (4/3+ε)^(n/2) polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.

Publication: Journal of Mathematical Cryptology Vol.: 2 No.: 2 ISSN: 1862-2976

ID: CaltechAUTHORS:20200804-103250325

]]>
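The exact-SVP subroutine discussed in the last abstract can be illustrated in its simplest nontrivial setting. The sketch below is not the AKS sieve itself (which is randomized and runs in exponential space); it solves SVP exactly in dimension 2 using classical Lagrange–Gauss basis reduction, and the basis in the usage example is chosen arbitrarily for illustration.

```python
# Lagrange-Gauss reduction: exact SVP in dimension 2.
# Illustrative only; the AKS sieve targets arbitrary dimension n
# with 2^(O(n)) time and space.

def gauss_reduce(u, v):
    """Return a shortest nonzero vector of the lattice spanned by u and v."""
    def norm2(w):
        # Squared Euclidean norm, to stay in exact integer arithmetic.
        return w[0] * w[0] + w[1] * w[1]

    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract from v the nearest integer multiple of u.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u
        u, v = v, u

# Usage: the basis (3, 4), (4, 5) has determinant -1, so it spans
# all of Z^2 and the shortest nonzero vector has squared norm 1.
shortest = gauss_reduce((3, 4), (4, 5))
```

Each loop iteration strictly decreases the norm of the shorter basis vector, so the procedure terminates, mirroring in miniature the norm-reduction idea behind sieve algorithms.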