Abstract: We study the problem of one-shot channel simulation of DMCs with unlimited shared randomness. For any fixed tolerance measured in total variation distance, we propose an achievability bound and a converse bound on the size of the code to simulate the channel. The achievability bound utilizes the convex split lemma, whereas the converse bound is the result of the relationships between smoothed max-divergences and the max-mutual information. The achievability proof does not rely on a "universal state" (in contrast to some previous related works), and provides a tighter bound. Using the two bounds, we also provide an alternative proof of the reverse Shannon theorem.

ID: CaltechAUTHORS:20220804-765722000


Abstract: Numerical codes that require arbitrary precision floating point (APFP) numbers for their core computation are dominated by elementary arithmetic operations due to the super-linear complexity of multiplication in the number of mantissa bits. APFP computations on conventional software-based architectures are made exceedingly expensive by the lack of native hardware support, requiring elementary operations to be emulated using instructions operating on machine-word-sized blocks. In this work, we show how APFP multiplication on compile-time fixed-precision operands can be implemented as deep FPGA pipelines with a recursively defined Karatsuba decomposition on top of native DSP multiplication. When comparing our design implemented on an Alveo U250 accelerator to a dual-socket 36-core Xeon node running the GNU Multiple Precision Floating-Point Reliable (MPFR) library, we achieve a 9.8× speedup at 4.8 GOp/s for 512-bit multiplication, and a 5.3× speedup at 1.2 GOp/s for 1024-bit multiplication, corresponding to the throughput of more than 351× and 191× CPU cores, respectively. We apply this architecture to general matrix-matrix multiplication, yielding a 10× speedup at 2.0 GOp/s over the Xeon node, equivalent to more than 375× CPU cores, effectively allowing a single FPGA to replace a small CPU cluster. Due to the significant dependence of some numerical codes on APFP, such as semidefinite program solvers, we expect these gains to translate into real-world speedups. Our configurable and flexible HLS-based code provides a high-level software interface for plug-and-play acceleration, published as an open source project.
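The arithmetic idea behind the FPGA pipeline can be sketched in software (a minimal illustration, not the paper's HLS design; the 32-bit base case stands in for a native DSP multiply): Karatsuba replaces four half-width multiplications with three, giving the recursion its sub-quadratic cost in the number of mantissa bits.

```python
# Illustrative recursive Karatsuba multiplication of fixed-width mantissas,
# represented here as Python integers. The FPGA design maps the same
# recursion onto native DSP multipliers instead of the base case below.

def karatsuba(x: int, y: int, bits: int) -> int:
    """Multiply two non-negative integers, splitting at width `bits`."""
    if bits <= 32:                    # base case: a "native" multiply
        return x * y
    half = bits // 2
    mask = (1 << half) - 1
    x_lo, x_hi = x & mask, x >> half
    y_lo, y_hi = y & mask, y >> half
    lo = karatsuba(x_lo, y_lo, half)
    hi = karatsuba(x_hi, y_hi, half)
    # One extra multiply replaces two: (x_lo+x_hi)(y_lo+y_hi) - lo - hi
    mid = karatsuba(x_lo + x_hi, y_lo + y_hi, half + 1) - lo - hi
    return lo + (mid << half) + (hi << (2 * half))

assert karatsuba(2**100 - 1, 2**100 + 1, 128) == 2**200 - 1
```

The recursion depth is logarithmic in the operand width, which is what makes a fully unrolled, deeply pipelined hardware instantiation feasible for compile-time fixed precision.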

ID: CaltechAUTHORS:20220614-222241000


Abstract: Stabilized cat qubits possess a biased noise channel, with bit-flip errors exponentially smaller than phase-flip errors. Together with a set of bias-preserving (BP) gates, cat qubits are a promising candidate for realizing hardware-efficient quantum error correction and fault-tolerant quantum computing. Compared to dissipatively stabilized cat qubits, the Kerr cat qubits can in principle support faster gate operations with higher gate fidelity, benefiting from the large energy gap that protects the code space. However, leakage of the Kerr cats can increase the minor type of errors and compromise the noise bias. Both the fast implementation of gates and the interaction with the environment can lead to such detrimental leakage if no sophisticated controls are applied. In this work, we introduce new fine-control techniques to overcome these obstacles for Kerr cat qubits. To suppress the gate leakage, we use the derivative-based transition suppression technique to design derivative-based controls for the Kerr BP gates. We show that the fine-controlled gates can simultaneously have high gate fidelity and high noise bias and, when applied to concatenated quantum error correction, can not only improve the logical error rate but also reduce the resource overhead. To suppress the environment-induced leakage, we introduce colored single-photon dissipation, which can continuously cool the Kerr cats and suppress the minor errors while not enhancing the major errors.
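The generic idea behind derivative-based transition suppression can be sketched as follows (a textbook DRAG-style envelope, not the paper's fine-controlled Kerr-cat pulses; the parameter `alpha` is a stand-in for an anharmonicity-like energy scale): a quadrature component proportional to the time derivative of the main envelope cancels the leading leakage transition.

```python
# Sketch of a derivative-based control envelope (DRAG-style): an in-phase
# Gaussian drive plus a quadrature correction proportional to its time
# derivative, which suppresses leakage out of the computational subspace.
import numpy as np

def drag_envelope(T=20.0, sigma=4.0, alpha=-0.3, n=1001):
    """Return time grid, in-phase envelope, and derivative-based quadrature."""
    t = np.linspace(0.0, T, n)
    omega = np.exp(-((t - T / 2) ** 2) / (2 * sigma ** 2))
    omega -= omega[0]                 # zero the envelope at the endpoints
    d_omega = np.gradient(omega, t)   # numerical time derivative
    quad = -d_omega / alpha           # quadrature ∝ dΩ/dt / anharmonicity
    return t, omega, quad

t, omega, quad = drag_envelope()
assert abs(omega[0]) < 1e-12 and abs(omega[-1]) < 1e-12
assert abs(quad[len(t) // 2]) < 1e-6  # derivative vanishes at the pulse peak
```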

No.: 12015
ID: CaltechAUTHORS:20220307-189714000


Abstract: A locally testable code is an error-correcting code that admits very efficient probabilistic tests of membership. Tensor codes provide a simple family of combinatorial constructions of locally testable codes that generalize the family of Reed-Muller codes. The natural test for tensor codes, the axis-parallel line vs. point test, plays an essential role in constructions of probabilistically checkable proofs. We analyze the axis-parallel line vs. point test as a two-prover game and show that the test is sound against quantum provers sharing entanglement. Our result implies the quantum-soundness of the low individual degree test, which is an essential component of the MIP* = RE theorem. Our proof also generalizes to the infinite-dimensional commuting-operator model of quantum provers.

Publication: arXiv
ID: CaltechAUTHORS:20220202-191902193


Abstract: Quantum state transfer involves two parties who use pre-shared entanglement and noiseless communication in order to transfer parts of a quantum state. In this work, we quantify the communication cost of one-shot state splitting in terms of the partially smoothed max-information. We then give an analysis of state splitting in the moderate deviation regime, where the error in the protocol goes sub-exponentially fast to zero as a function of the number of i.i.d. copies. The main technical tool we derive is a tight relation between the partially smoothed max-information and the hypothesis testing relative entropy, which allows us to obtain the expansion of the partially smoothed max-information for i.i.d. states in the moderate deviation regime. This then also establishes the moderate deviation analysis for other variants of state transfer such as state merging and source coding.

ID: CaltechAUTHORS:20220722-768863000


Abstract: We demonstrate the possibility of (sub)exponential quantum speedup via a quantum algorithm that follows an adiabatic path of a gapped Hamiltonian with no sign problem. The Hamiltonian that exhibits this speed-up comes from the adjacency matrix of an undirected graph whose vertices are labeled by n-bit strings, and we can view the adiabatic evolution as an efficient O(poly(n))-time quantum algorithm for finding a specific “EXIT” vertex in the graph given the “ENTRANCE” vertex. On the other hand we show that if the graph is given via an adjacency-list oracle, there is no classical algorithm that finds the “EXIT” with probability greater than exp(−n^δ) using at most exp(n^δ) queries for δ= 1/5 − o(1). Our construction of the graph is somewhat similar to the “welded-trees” construction of Childs et al., but uses additional ideas of Hastings for achieving a spectral gap and a short adiabatic path.

ID: CaltechAUTHORS:20220802-839191000


Abstract: We report the observation of topological phonon transport within a multiscale optomechanical crystal structure consisting of an array of over 800 cavity-optomechanical elements. Using sensitive, spatially resolved optical read-out we detect thermal phonons in a 0.325 − 0.34 GHz band traveling along a topological edge channel, with substantial reduction in backscattering. This represents an important step from the pioneering macroscopic mechanical systems work towards topological phononic systems at the nanoscale.

Publication: arXiv
ID: CaltechAUTHORS:20200915-092939162


Abstract: Many quantum algorithms critically rely on quantum walk search, or the use of quantum walks to speed up search problems on graphs. However, the main results on quantum walk search are scattered over different, incomparable frameworks, such as the hitting time framework, the MNRS framework, and the electric network framework. As a consequence, a number of pieces are currently missing. For example, recent work by Ambainis et al. (STOC'20) shows how quantum walks starting from the stationary distribution can always find elements quadratically faster. In contrast, the electric network framework allows quantum walks to start from an arbitrary initial state, but it only detects marked elements. We present a new quantum walk search framework that unifies and strengthens these frameworks, leading to a number of new results. For example, the new framework effectively finds marked elements in the electric network setting. The new framework also allows one to interpolate between the hitting time framework, minimizing the number of walk steps, and the MNRS framework, minimizing the number of times elements are checked for being marked. This allows for a more natural tradeoff between resources. In addition to quantum walks and phase estimation, our new algorithm makes use of quantum fast-forwarding, similar to the recent results by Ambainis et al. This perspective also enables us to derive more general complexity bounds on the quantum walk algorithms, e.g., based on Monte Carlo type bounds of the corresponding classical walk. As a final result, we show how in certain cases we can avoid the use of phase estimation and quantum fast-forwarding, answering an open question of Ambainis et al.

Publication: Schloss Dagstuhl - Leibniz-Zentrum für Informatik No.: 187
ID: CaltechAUTHORS:20210517-104446034


Abstract: Aaronson and Ambainis (2009) and Chailloux (2018) showed that fully symmetric (partial) functions do not admit exponential quantum query speedups. This raises a natural question: how symmetric must a function be before it cannot exhibit a large quantum speedup? In this work, we prove that hypergraph symmetries in the adjacency matrix model allow at most a polynomial separation between randomized and quantum query complexities. We also show that, remarkably, permutation groups constructed out of these symmetries are essentially the only permutation groups that prevent super-polynomial quantum speedups. We prove this by fully characterizing the primitive permutation groups that allow super-polynomial quantum speedups. In contrast, in the adjacency list model for bounded-degree graphs, where graph symmetry is manifested differently, we exhibit a property testing problem that shows an exponential quantum speedup. These results resolve open questions posed by Ambainis, Childs, and Liu (2010) and Montanaro and de Wolf (2013).

ID: CaltechAUTHORS:20210630-171353593


Abstract: Rare-earth ions embedded in crystals are promising optically addressable spin qubits. We demonstrate this potential by measuring the optical and spin transition properties of single ¹⁷¹Yb³⁺ ions coupled to nanophotonic resonators fabricated in YVO₄.

ID: CaltechAUTHORS:20210611-082710176


Abstract: For all n ≥ 1, we give an explicit construction of m × m matrices A_1,…,A_n with m = 2^([n/2]) such that for any d and d × d matrices A′_1,…,A′_n that satisfy ∥A′_i − A′_j∥_(S_1) ≤ ∥A_i − A_j∥_(S_1) ≤ (1+δ)∥A′_i − A′_j∥_(S_1) for all i,j ∈ {1,…,n} and small enough δ = O(n^(−c)), where c > 0 is a universal constant, it must be the case that d ≥ 2^([n/2]−1). This stands in contrast to the metric theory of commutative ℓ_p spaces, as it is known that for any p ≥ 1, any n points in ℓ_p embed exactly in ℓ^d_p for d = n(n−1)/2. Our proof is based on matrices derived from a representation of the Clifford algebra generated by n anti-commuting Hermitian matrices that square to identity, and borrows ideas from the analysis of nonlocal games in quantum information theory.
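One standard realization of such a family (a sketch of a common Jordan-Wigner-style Clifford-algebra construction from Pauli matrices; the paper's exact matrices may differ) produces n anti-commuting Hermitian matrices of size 2^⌈n/2⌉ that square to the identity:

```python
# Construct n anti-commuting Hermitian matrices of dimension 2^ceil(n/2)
# from tensor products of Pauli matrices (Jordan-Wigner-style chains).
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def clifford_generators(n: int):
    k = (n + 1) // 2                      # number of qubit tensor factors
    def chain(pos, op):                   # Z ... Z, op at `pos`, then I ... I
        mats = [Z] * pos + [op] + [I2] * (k - pos - 1)
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out
    gens = []
    for j in range(k):                    # two generators per qubit factor
        gens.append(chain(j, X))
        gens.append(chain(j, Y))
    return gens[:n]

A = clifford_generators(5)                # five 8x8 matrices, 8 = 2^ceil(5/2)
d = A[0].shape[0]
for i in range(len(A)):
    assert np.allclose(A[i] @ A[i], np.eye(d))            # squares to identity
    for j in range(i + 1, len(A)):
        assert np.allclose(A[i] @ A[j] + A[j] @ A[i], 0)  # anti-commutation
```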

No.: 2266
ID: CaltechAUTHORS:20190320-095834301


Abstract: We present a quasi-polynomial time classical algorithm that estimates the partition function of quantum many-body systems at temperatures above the thermal phase transition point. It is known that in the worst case, the same problem is NP-hard below this point. Together with our work, this shows that the transition in the phase of a quantum system is also accompanied by a transition in the hardness of approximation. We also show that in a system of n particles above the phase transition point, the correlation between two observables whose distance is at least Ω(log n) decays exponentially. We can improve the factor of log n to a constant when the Hamiltonian has commuting terms or is on a 1D chain. The key to our results is a characterization of the phase transition and the critical behavior of the system in terms of the complex zeros of the partition function. Our work extends a seminal work of Dobrushin and Shlosman on the equivalence between the decay of correlations and the analyticity of the free energy in classical spin models. On the algorithmic side, our result extends the scope of a recent approach due to Barvinok for solving classical counting problems to quantum many-body systems.

ID: CaltechAUTHORS:20210226-083215255


Abstract: We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang's breakthrough quantum-inspired algorithm for recommendation systems [STOC'19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC'19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results about dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ²-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
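The central primitive can be sketched concretely (an illustrative implementation of length-squared sampling with names of our own choosing; the paper's framework is more general and works from sampling-access data structures rather than dense arrays): sampling inner indices with probability proportional to squared column norms gives an unbiased estimate of a matrix product whose cost depends on the number of samples, not the inner dimension.

```python
# ℓ²-norm (length-squared) sampling estimate of a matrix product A @ B.
import numpy as np

def sampled_matmul(A, B, s, rng):
    """Unbiased estimator of A @ B from s sampled rank-1 terms."""
    norms = np.linalg.norm(A, axis=0) ** 2   # squared column norms of A
    p = norms / norms.sum()                  # ℓ²-sampling distribution
    idx = rng.choice(A.shape[1], size=s, p=p)
    # Each sampled term A[:, i] B[i, :] is reweighted by 1 / (s * p_i).
    return (A[:, idx] / (s * p[idx])) @ B[idx, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 500))
B = rng.standard_normal((500, 20))
approx = sampled_matmul(A, B, s=100_000, rng=rng)
rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
assert rel_err < 0.15                        # error decays as O(1/sqrt(s))
```

The expected squared Frobenius error is at most ∥A∥²_F ∥B∥²_F / s, which is the dimension-independence the abstract refers to.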

ID: CaltechAUTHORS:20210226-083945792


Abstract: A proof of quantumness is a method for provably demonstrating (to a classical verifier) that a quantum device can perform computational tasks that a classical device with comparable resources cannot. Providing a proof of quantumness is the first step towards constructing a useful quantum computer. There are currently three approaches for exhibiting proofs of quantumness: (i) Inverting a classically-hard one-way function (e.g. using Shor's algorithm). This seems technologically out of reach. (ii) Sampling from a classically-hard-to-sample distribution (e.g. BosonSampling). This may be within reach of near-term experiments, but for all such tasks known verification requires exponential time. (iii) Interactive protocols based on cryptographic assumptions. The use of a trapdoor scheme allows for efficient verification, and implementation seems to require far fewer resources than (i), yet still more than (ii). In this work we propose a significant simplification to approach (iii) by employing the random oracle heuristic. (We note that we do not apply the Fiat-Shamir paradigm.) We give a two-message (challenge-response) proof of quantumness based on any trapdoor claw-free function. In contrast to earlier proposals we do not need an adaptive hard-core bit property. This allows the use of smaller security parameters and more diverse computational assumptions (such as Ring Learning with Errors), significantly reducing the quantum computational effort required for a successful demonstration.

Publication: arXiv
ID: CaltechAUTHORS:20200728-144326318


Abstract: The AKLT spin chain is the prototypical example of a frustration-free quantum spin system with a spectral gap above its ground state. Affleck, Kennedy, Lieb, and Tasaki also conjectured that the two-dimensional version of their model on the hexagonal lattice exhibits a spectral gap. In this paper, we introduce a family of variants of the two-dimensional AKLT model depending on a positive integer n, which is defined by decorating the edges of the hexagonal lattice with one-dimensional AKLT spin chains of length n. We prove that these decorated models are gapped for all n ≥ 3.

Publication: Contemporary Mathematics No.: 741 ISSN: 0271-4132

ID: CaltechAUTHORS:20200922-071519332


Abstract: We introduce a protocol between a classical polynomial-time verifier and a quantum polynomial-time prover that allows the verifier to securely delegate to the prover the preparation of certain single-qubit quantum states. The prover is unaware of which state he received, and moreover the verifier can check with high confidence whether the preparation was successful. The delegated preparation of single-qubit states is an elementary building block in many quantum cryptographic protocols. We expect our implementation of "random remote state preparation with verification", a functionality first defined in (Dunjko and Kashefi 2014), to be useful for removing the need for quantum communication in such protocols while keeping their functionality. The main application that we detail is to a protocol for blind and verifiable delegated quantum computation (DQC) that builds on the work of (Fitzsimons and Kashefi 2018), who provided such a protocol with quantum communication. Recently, both blind and verifiable DQC were shown to be possible, under computational assumptions, with a classical polynomial-time client (Mahadev 2017, Mahadev 2018). Compared to the work of Mahadev, our protocol is more modular, applies to the measurement-based model of computation (instead of the Hamiltonian model) and is composable. Our proof of security builds on ideas introduced in (Brakerski et al. 2018).

ID: CaltechAUTHORS:20200109-143243905


Abstract: We study multiprover interactive proof systems. The power of classical multiprover interactive proof systems, in which the provers do not share entanglement, was characterized in a famous work by Babai, Fortnow, and Lund (Computational Complexity 1991), whose main result was the equality MIP = NEXP. The power of quantum multiprover interactive proof systems, in which the provers are allowed to share entanglement, has proven to be much more difficult to characterize. The best known lower bound on MIP* is NEXP ⊆ MIP*, due to Ito and Vidick (FOCS 2012). As for upper bounds, MIP* could be as large as RE, the class of recursively enumerable languages. The main result of this work is the inclusion NEEXP = NTIME[2^(2^(poly(n)))] ⊆ MIP*. This is an exponential improvement over the prior lower bound and shows that proof systems with entangled provers are at least exponentially more powerful than classical provers. In our protocol the verifier delegates a classical, exponentially large MIP protocol for NEEXP to two entangled provers: the provers obtain their exponentially large questions by measuring their shared state, and use a classical PCP to certify the correctness of their exponentially-long answers. For the soundness of our protocol, it is crucial that each player should not only sample its own question correctly but also avoid performing measurements that would reveal the other player's sampled question. We ensure this by commanding the players to perform a complementary measurement, relying on the Heisenberg uncertainty principle to prevent the forbidden measurements from being performed.

ID: CaltechAUTHORS:20200109-143243997


Abstract: We show that any language solvable in nondeterministic time exp( exp(⋯exp(n))), where the number of iterated exponentials is an arbitrary function R(n), can be decided by a multiprover interactive proof system with a classical polynomial-time verifier and a constant number of quantum entangled provers, with completeness 1 and soundness 1 − exp(−Cexp(⋯exp(n))), where the number of iterated exponentials is R(n)−1 and C>0 is a universal constant. The result was previously known for R=1 and R=2; we obtain it for any time-constructible function R. The result is based on a compression technique for interactive proof systems with entangled provers that significantly simplifies and strengthens a protocol compression result of Ji (STOC’17). As a separate consequence of this technique we obtain a different proof of Slofstra’s recent result on the uncomputability of the entangled value of multiprover games (Forum of Mathematics, Pi 2019). Finally, we show that even minor improvements to our compression result would yield remarkable consequences in computational complexity theory and the foundations of quantum mechanics: first, it would imply that the class MIP* contains all computable languages; second, it would provide a negative resolution to a multipartite version of Tsirelson’s problem on the relation between the commuting operator and tensor product models for quantum correlations.

Publication: arXiv
ID: CaltechAUTHORS:20190204-112657116


Abstract: In privacy amplification, two mutually trusted parties aim to amplify the secrecy of an initial shared secret X in order to establish a shared private key K by exchanging messages over an insecure communication channel. If the channel is authenticated the task can be solved in a single round of communication using a strong randomness extractor; choosing a quantum-proof extractor allows one to establish security against quantum adversaries. In the case that the channel is not authenticated, this simple solution is no longer secure. Nevertheless, Dodis and Wichs (STOC'09) showed that the problem can be solved in two rounds of communication using a non-malleable extractor, a stronger pseudo-random construction than a strong extractor. We give the first construction of a non-malleable extractor that is secure against quantum adversaries. The extractor is based on a construction by Li (FOCS'12), and is able to extract from sources of min-entropy rate larger than 1/2. Combining this construction with a quantum-proof variant of the reduction of Dodis and Wichs, due to Cohen and Vidick (unpublished), we obtain the first privacy amplification protocol secure against active quantum adversaries.

Publication: arXiv No.: 11477
ID: CaltechAUTHORS:20190320-102401828


Abstract: The problem of reliably certifying the outcome of a computation performed by a quantum device is rapidly gaining relevance. We present two protocols for a classical verifier to verifiably delegate a quantum computation to two non-communicating but entangled quantum provers. Our protocols have near-optimal complexity in terms of the total resources employed by the verifier and the honest provers, with the total number of operations of each party, including the number of entangled pairs of qubits required of the honest provers, scaling as O(g log g) for delegating a circuit of size g. This is in contrast to previous protocols, whose overhead in terms of resources employed, while polynomial, is far beyond what is feasible in practice. Our first protocol requires a number of rounds that is linear in the depth of the circuit being delegated, and is blind, meaning neither prover can learn the circuit or its input. The second protocol is not blind, but requires only a constant number of rounds of interaction. Our main technical innovation is an efficient rigidity theorem which allows a verifier to test that two entangled provers perform measurements specified by an arbitrary m-qubit tensor product of single-qubit Clifford observables on their respective halves of m shared EPR pairs, with a robustness that is independent of m. Our two-prover classical-verifier delegation protocols are obtained by combining this rigidity theorem with a single-prover quantum-verifier protocol for the verifiable delegation of a quantum computation, introduced by Broadbent.

Publication: arXiv No.: 11478
ID: CaltechAUTHORS:20190320-123759874


Abstract: In this chapter we address the topic of quantum thermodynamics in the presence of additional observables beyond the energy of the system. In particular we discuss the special role that the generalized Gibbs ensemble plays in this theory, and derive this state from the perspectives of a micro-canonical ensemble, dynamical typicality and a resource-theory formulation. A notable obstacle occurs when some of the observables do not commute, and so it is impossible for the observables to simultaneously take on sharp microscopic values. We show how this can be circumvented, discuss information-theoretic aspects of the setting, and explain how thermodynamic costs can be traded between the different observables. Finally, we discuss open problems and future directions for the topic.

Publication: arXiv No.: 195
ID: CaltechAUTHORS:20190213-103051707


Abstract: Thermodynamics can be formulated in either of two approaches, the phenomenological approach, which refers to the macroscopic properties of systems, and the statistical approach, which describes systems in terms of their microscopic constituents. We establish a connection between these two approaches by means of a new axiomatic framework that can take errors and imprecisions into account. This link extends to systems of arbitrary sizes including very small systems, for which the treatment of imprecisions is pertinent to any realistic situation. Based on this, we identify the quantities that characterise whether certain thermodynamic processes are possible with entropy measures from information theory. In the error-tolerant case, these entropies are so-called smooth min and max entropies. Our considerations further show that in an appropriate macroscopic limit there is a single entropy measure that characterises which state transformations are possible. In the case of many independent copies of a system (the so-called i.i.d. regime), the relevant quantity is the von Neumann entropy. Transformations among microcanonical states are characterised by the Boltzmann entropy.

Publication: arXiv No.: 195
ID: CaltechAUTHORS:20190211-151339114


Abstract: Quantum metrology has many important applications in science and technology, ranging from frequency spectroscopy to gravitational wave detection. Quantum mechanics imposes a fundamental limit on measurement precision, called the Heisenberg limit, which can be achieved for noiseless quantum systems, but is not achievable in general for systems subject to noise. Here we study how measurement precision can be enhanced through quantum error correction, a general method for protecting a quantum system from the damaging effects of noise. We find a necessary and sufficient condition for achieving the Heisenberg limit using quantum probes subject to Markovian noise, assuming that noiseless ancilla systems are available, and that fast, accurate quantum processing can be performed. When the sufficient condition is satisfied, the quantum error-correcting code achieving the best possible precision can be found by solving a semidefinite program. We also show that noiseless ancillas are not needed when the signal Hamiltonian and the error operators commute. Finally, we provide two explicit, archetypal examples of quantum sensors: qubits undergoing dephasing and a lossy bosonic mode.

No.: 10934
ID: CaltechAUTHORS:20190606-092443914


Abstract: Forthcoming exascale digital computers will further advance our knowledge of quantum chromodynamics, but formidable challenges will remain. In particular, Euclidean Monte Carlo methods are not well suited for studying real-time evolution in hadronic collisions, or the properties of hadronic matter at nonzero temperature and chemical potential. Digital computers may never be able to achieve accurate simulations of such phenomena in QCD and other strongly-coupled field theories; quantum computers will do so eventually, though I'm not sure when. Progress toward quantum simulation of quantum field theory will require the collaborative efforts of quantumists and field theorists, and though the physics payoff may still be far away, it's worthwhile to get started now. Today's research can hasten the arrival of a new era in which quantum simulation fuels rapid progress in fundamental physics.

ID: CaltechAUTHORS:20190122-113145453


Abstract: We give a protocol for producing certifiable randomness from a single untrusted quantum device that is polynomial-time bounded. The randomness is certified to be statistically close to uniform from the point of view of any computationally unbounded quantum adversary, that may share entanglement with the quantum device. The protocol relies on the existence of post-quantum secure trapdoor claw-free functions, and introduces a new primitive for constraining the power of an untrusted quantum device. We then show how to construct this primitive based on the hardness of the learning with errors (LWE) problem. The randomness protocol can also be used as the basis for an efficiently verifiable "quantum supremacy" proposal, thus answering an outstanding challenge in the field.

ID: CaltechAUTHORS:20190201-143229032


Abstract: We show that given an explicit description of a multiplayer game, with a classical verifier and a constant number of players, it is QMA-hard, under randomized reductions, to distinguish between the cases when the players have a strategy using entanglement that succeeds with probability 1 in the game, or when no such strategy succeeds with probability larger than 1/2. This proves the "games quantum PCP conjecture" of Fitzsimons and the second author (ITCS'15), albeit under randomized reductions. The core component in our reduction is a construction of a family of two-player games for testing n-qubit maximally entangled states. For any integer n ≥ 2, we give such a game in which questions from the verifier are O(log n) bits long, and answers are poly(log log n) bits long. We show that for any constant ε ≥ 0, any strategy that succeeds with probability at least 1 − ε in the test must use a state that is within distance δ(ε) = O(ε^c) from a state that is locally equivalent to a maximally entangled state on n qubits, for some universal constant c > 0. The construction is based on the classical plane-vs-point test for multivariate low-degree polynomials of Raz and Safra (STOC'97). We extend the classical test to the quantum regime by executing independent copies of the test in the generalized Pauli X and Z bases over F_q, where q is a sufficiently large prime power, and combine the two through a test for the Pauli twisted commutation relations. Our main complexity-theoretic result is obtained by combining this family of games with techniques from the classical PCP literature. More specifically, we use constructions of PCPs of proximity introduced by Ben-Sasson et al. (CCC'05), and crucially rely on a linear property of such PCPs. Another consequence of our results is a deterministic reduction from the games quantum PCP conjecture to a suitable formulation of the constraint satisfaction quantum PCP conjecture.

ID: CaltechAUTHORS:20190201-143229217


Abstract: We show that it is NP-hard to approximate, to within an additive constant, the maximum success probability of players sharing quantum entanglement in a two-player game with classical questions of logarithmic length and classical answers of constant length. As a corollary, the inclusion NEXP ⊆ MIP*, first shown by Ito and Vidick (FOCS'12) with three provers, holds with two provers only. The proof is based on a simpler, improved analysis of the low-degree test of Raz and Safra (STOC'97) against two entangled provers.

Publication: Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH, Wadern/Saarbrücken, Germany
ID: CaltechAUTHORS:20180822-141142977

]]>

Abstract: In a recent work, Moshkovitz [FOCS'14] presented a transformation on two-player games called "fortification", and gave an elementary proof of an (exponential decay) parallel repetition theorem for fortified two-player projection games. In this paper, we give an analytic reformulation of Moshkovitz's fortification framework, which was originally cast in combinatorial terms. This reformulation allows us to expand the scope of the fortification method to new settings. First, we show that any game (not just projection games) can be fortified, and give a simple proof of parallel repetition for general fortified games. Then, we prove parallel repetition and fortification theorems for games with players sharing quantum entanglement, as well as games with more than two players. This gives a new gap amplification method for general games in the quantum and multiplayer settings, which has recently received much interest. An important component of our work is a variant of the fortification transformation, called "ordered fortification", that preserves the entangled value of a game. The original fortification of Moshkovitz does not in general preserve the entangled value of a game, and this was a barrier to extending the fortification framework to the quantum setting.

No.: 67
ID: CaltechAUTHORS:20160321-071142064

]]>

Abstract: One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance by Landau et al. gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unresolved, including whether there exist rigorous efficient algorithms when the ground space is degenerate (and poly(n) dimensional), or for the poly(n) lowest energy states for 1D systems, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm for finding low energy states for 1D systems, based on a rigorously justified renormalization group (RG)-type transformation. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^(O(log n)) algorithm for the poly(n) lowest energy states for 1D systems (under a mild density condition). We note that for these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is O(nM(n)), where M(n) is the time required to multiply two n by n matrices.

Publication: Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik GmbH, Wadern/Saarbruecken, Germany
ID: CaltechAUTHORS:20200804-100730896

]]>

Abstract: We establish converse bounds on the private transmission capabilities of a quantum channel. The main conceptual development builds firmly on the notion of a private state, which is a powerful, uniquely quantum method for simplifying the tripartite picture of privacy involving local operations and public classical communication to a bipartite picture of quantum privacy involving local operations and classical communication. This approach has previously led to some of the strongest upper bounds on secret key rates, including the squashed entanglement and the relative entropy of entanglement. Here we use this approach along with a “privacy test” to establish a general meta-converse bound for private communication.

ID: CaltechAUTHORS:20170816-155206778

]]>

Abstract: We introduce a simple two-player test which certifies that the players apply tensor products of Pauli σ_X and σ_Z observables on the tensor product of n EPR pairs. The test has constant robustness: any strategy achieving success probability within an additive ε of the optimal must be poly(ε)-close, in the appropriate distance measure, to the honest n-qubit strategy. The test involves 2n-bit questions and 2-bit answers. The key technical ingredient is a quantum version of the classical linearity test of Blum, Luby, and Rubinfeld. As applications of our result we give (i) the first robust self-test for n EPR pairs; (ii) a quantum multiprover interactive proof system for the local Hamiltonian problem with a constant number of provers and classical questions and answers, and a constant completeness-soundness gap independent of system size; (iii) a robust protocol for verifiable delegated quantum computation with a constant number of quantum polynomial-time provers sharing entanglement.
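The classical linearity test of Blum, Luby, and Rubinfeld that underlies this result is easy to state directly. Below is a minimal, stdlib-only sketch of the classical (non-quantum) test; the example functions and trial count are illustrative choices, not taken from the paper.

```python
import random

def blr_test(f, n, trials=200, seed=0):
    """Classical BLR linearity test over GF(2)^n: sample random x, y and
    check f(x) XOR f(y) == f(x XOR y); reject on any violation."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        if f(x) ^ f(y) != f(x ^ y):
            return False
    return True

n = 8
parity = lambda x: bin(x & 0b10110001).count("1") % 2  # parity of a fixed mask: linear
quad = lambda x: (x & 1) & ((x >> 1) & 1)              # AND of two bits: 1/4-far from linear

print(blr_test(parity, n))  # True: a linear function always passes
print(blr_test(quad, n))    # False: this quadratic is rejected with probability 3/8 per trial
```

A far-from-linear function is caught with constant probability per trial, which is the robustness property the paper lifts to the quantum setting.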

ID: CaltechAUTHORS:20170710-154654821

]]>

Abstract: We study the parallel repetition of one-round games involving players that can use quantum entanglement. A major open question in this area is whether parallel repetition reduces the entangled value of a game at an exponential rate - in other words, does an analogue of Raz's parallel repetition theorem hold for games with players sharing quantum entanglement? Previous results only apply to special classes of games. We introduce a class of games we call anchored. We then introduce a simple transformation on games called anchoring, inspired in part by the Feige-Kilian transformation, that turns any (multiplayer) game into an anchored game. Unlike the Feige-Kilian transformation, our anchoring transformation is completeness preserving. We prove an exponential-decay parallel repetition theorem for anchored games that involve any number of entangled players. We also prove a threshold version of our parallel repetition theorem for anchored games. Together, our parallel repetition theorems and anchoring transformation provide the first hardness amplification techniques for general entangled games. We give an application to the games version of the Quantum PCP Conjecture.

ID: CaltechAUTHORS:20170710-152910604

]]>

Abstract: A Markov chain is a tripartite quantum state ρ_ABC for which there exists a recovery map R_{B→BC} such that ρ_ABC = R_{B→BC}(ρ_AB). More generally, an approximate Markov chain ρ_ABC is a state whose distance to the closest recovered state R_{B→BC}(ρ_AB) is small. Recently it has been shown that this distance can be bounded from above by the conditional mutual information I(A : C|B)_ρ of the state. We improve on this connection by deriving the first bound that is tight in the commutative case and features an explicit recovery map that depends only on the reduced state ρ_BC. The key tool in our proof is a multivariate extension of the Golden-Thompson inequality, which allows us to extend logarithmic trace inequalities from two to arbitrarily many matrices.
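In the commutative (classical) case the statement is easy to verify numerically: a tripartite distribution is a Markov chain A → B → C exactly when I(A : C|B) = 0, and then C can be regenerated from B alone. A stdlib-only sketch, with the distribution values chosen arbitrarily for illustration:

```python
import itertools, math

def cond_mutual_info(p):
    """I(A:C|B) in bits for a joint pmf given as p[(a, b, c)]."""
    pb, pab, pbc = {}, {}, {}
    for (a, b, c), w in p.items():
        pb[b] = pb.get(b, 0.0) + w
        pab[a, b] = pab.get((a, b), 0.0) + w
        pbc[b, c] = pbc.get((b, c), 0.0) + w
    return sum(w * math.log2(w * pb[b] / (pab[a, b] * pbc[b, c]))
               for (a, b, c), w in p.items() if w > 0)

# A classical Markov chain A -> B -> C: p(a, b, c) = p(a) p(b|a) p(c|b).
pa = [0.3, 0.7]
pba = [[0.6, 0.4], [0.2, 0.8]]
pcb = [[0.9, 0.1], [0.5, 0.5]]
markov = {(a, b, c): pa[a] * pba[a][b] * pcb[b][c]
          for a, b, c in itertools.product(range(2), repeat=3)}
print(abs(cond_mutual_info(markov)) < 1e-9)  # True: C is exactly recoverable from B
```

The quantum difficulty, which the paper addresses, is that small but nonzero I(A : C|B) must be related to an explicit recovery map rather than to an identity like the one checked here.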

ID: CaltechAUTHORS:20170816-174117187

]]>

Abstract: Cooling of nanomechanical resonators to their motional ground state [1, 2] has triggered recent achievements such as non-classical mechanical state preparation [3] and coherent optical-to-microwave photon conversion [4]. Implementations of such systems with optomechanical crystal (OMC) resonators use the co-localization of optical and acoustic modes in a periodically patterned device layer of a silicon-on-insulator (SOI) chip. Most quantum optomechanical operations require the mechanical resonator to be initialized to a low thermal occupation. Reaching small thermal mechanical mode occupations and mechanical Q-factors ≳ 10^6 requires pre-cooling to millikelvin temperatures, where even weak optical absorption induces unfavorable local heating [5]. Commonly used one-dimensional nanobeam OMC resonators have significantly smaller thermal connectivity to the cold environment, and hence reduced robustness against undesired heating, compared with their two-dimensional counterparts. On the other hand, the drawbacks of 2D OMCs have been their complex fabrication and the weaker interaction strengths of their acoustic and optical modes [6]. Here, we present a modified 2D OMC cavity that exhibits simulated coupling strengths comparable to previous optomechanical nanobeams while reducing the nanofabrication complexity compared to previous 2D OMCs.

ID: CaltechAUTHORS:20180622-124453312

]]>

Abstract: The integration of rare-earth ions in an on-chip photonic platform would enable quantum repeaters and scalable quantum networks. While ensemble-based quantum memories have been routinely realized, implementing a single rare-earth-ion qubit remains an outstanding challenge due to its weak photoluminescence. Here we demonstrate a nanophotonic platform consisting of yttrium vanadate (YVO) photonic crystal nanobeam resonators coupled to a spectrally dilute ensemble of Nd ions. The cavity acts as a memory when prepared with spectral hole burning, while it permits addressing of single ions when high-resolution spectroscopy is employed. For quantum memory, the atomic frequency comb (AFC) protocol was implemented in a 50 ppm Nd:YVO nanocavity cooled to 480 mK. High-fidelity quantum storage of time-bin qubits is demonstrated with an 80% efficient WSi superconducting nanowire single photon detector (SNSPD). The small mode volume of the cavity results in a peak atomic spectral density of <10 ions per homogeneous linewidth, suitable for probing single ions when detuned from the center of the inhomogeneous distribution. The high-cooperativity coupling of a single ion yields a strong signature (20%) in the cavity reflection spectrum, which could be detected by our efficient SNSPD. We estimate a signal-to-noise ratio exceeding 10 for addressing a single Nd ion with its 879.7 nm transition. This, combined with the AFC memory, constitutes a promising platform for the preparation, storage, and detection of rare-earth qubits on the same chip.

No.: 10118 ISSN: 0277786X

ID: CaltechAUTHORS:20170505-135335967

]]>

Abstract: An ideal system of n qubits has 2^n dimensions. This exponential scaling grants power, but also hinders characterizing the system's state and dynamics. We study a new problem: the qubits in a physical system might not be independent. They can "overlap," in the sense that an operation on one qubit slightly affects the others. We show that allowing for slight overlaps, n qubits can fit in just polynomially many dimensions. (Defined in a natural way, all pairwise overlaps can be ≤ ϵ in n^(O(1/ϵ^2)) dimensions.) Thus, even before considering issues like noise, a real system of n qubits might inherently lack any potential for exponential power. On the other hand, we also provide an efficient test to certify exponential dimensionality. Unfortunately, the test is sensitive to noise, so it remains important to devise more robust tests of the arrangement of qubits in quantum devices.

Publication: 8th Innovations in Theoretical Computer Science Conference (ITCS 2017) No.: 67
ID: CaltechAUTHORS:20171011-113818136

]]>

Abstract: Conventional statistical mechanics describes large systems and averages over many particles or over many trials. But work, heat, and entropy impact the small scales that experimentalists can increasingly control, e.g., in single-molecule experiments. The statistical mechanics of small scales has been quantified with two toolkits developed in quantum information theory: resource theories and one-shot information theory. The field has boomed recently, but the theorems amassed have hardly impacted experiments. Can thermodynamic resource theories be realized experimentally? Via what steps can we shift the theory toward physical realizations? Should we care? I present eleven opportunities in physically realizing thermodynamic resource theories.

ID: CaltechAUTHORS:20151215-103552216

]]>

Abstract: Can quantum computers solve optimization problems much more quickly than classical computers? One major piece of evidence for this proposition has been the fact that Quantum Annealing (QA) finds the minimum of some cost functions exponentially more quickly than classical Simulated Annealing (SA). One such cost function is the simple “Hamming weight with a spike” function in which the input is an n-bit string and the objective function is simply the Hamming weight, plus a tall thin barrier centered around Hamming weight n/4. While the global minimum of this cost function can be found by inspection, it is also a plausible toy model of the sort of local minima that arise in real-world optimization problems. It was shown by Farhi, Goldstone and Gutmann [1] that for this example SA takes exponential time and QA takes polynomial time, and the same result was generalized by Reichardt [2] to include barriers with width n^ζ and height n^α for ζ + α ≤ 1/2. This advantage could be explained in terms of quantum-mechanical “tunneling.” Our work considers a classical algorithm known as Simulated Quantum Annealing (SQA) which relates certain quantum systems to classical Markov chains. By proving that these chains mix rapidly, we show that SQA runs in polynomial time on the Hamming weight with spike problem in much of the parameter regime where QA achieves an exponential advantage over SA. While our analysis only covers this toy model, it can be seen as evidence against the prospect of exponential quantum speedup using tunneling. Our technical contributions include extending the canonical path method for analyzing Markov chains to cover the case when not all vertices can be connected by low-congestion paths. We also develop methods for taking advantage of warm starts and for relating the quantum state in QA to the probability distribution in SQA. These techniques may be of use in future studies of SQA or of rapidly mixing Markov chains in general.
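The "Hamming weight with a spike" cost function is easy to experiment with directly. The sketch below runs a fixed-temperature Metropolis walk — a crude stand-in for SA; the size n, barrier height, temperature, and width-1 spike are all illustrative choices, not parameters from the paper — and shows the walk settling just above the barrier instead of finding the true minimum at weight 0.

```python
import math, random

def spike_cost(x, n, height):
    """Hamming weight plus a tall thin barrier at weight n/4
    (a width-1 spike is assumed here for simplicity)."""
    w = bin(x).count("1")
    return w + (height if w == n // 4 else 0)

def metropolis(n=24, steps=20000, temp=0.5, height=12, seed=2):
    """Fixed-temperature Metropolis walk on the n-cube, a caricature of SA."""
    rng = random.Random(seed)
    x = rng.getrandbits(n)
    best = spike_cost(x, n, height)
    for _ in range(steps):
        y = x ^ (1 << rng.randrange(n))  # flip one random bit
        delta = spike_cost(y, n, height) - spike_cost(x, n, height)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = y
        best = min(best, spike_cost(x, n, height))
    return best

print(metropolis())  # typically n/4 + 1: stuck just above the spike, far from the minimum 0
```

Crossing the spike costs a single uphill move of size ≈ height, which the Metropolis rule accepts with probability e^(-height/temp), so the classical walk stalls; this is the barrier that QA tunnels through.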

ID: CaltechAUTHORS:20160622-152913947

]]>

Abstract: Rotational anisotropy nonlinear harmonic generation (RA-NHG) is an all-optical technique by which the crystallographic, magnetic, and electronic symmetries of the bulk, surfaces, and interfaces of crystalline materials may be examined. It also allows characterization of nanostructures and biological tissue, as well as imaging applications. In this chapter, we describe the principles behind RA-NHG, discuss current experimental approaches, and review key experimental findings since 2009.

ID: CaltechAUTHORS:20170711-110806843

]]>

Abstract: The relative entropy is the basic concept underlying various information measures like entropy, conditional entropy and mutual information. Here, we discuss how to make use of variational formulas for measured relative entropy and quantum relative entropy for understanding the additivity properties of various entropic quantities that appear in quantum information theory. In particular, we show that certain lower bounds on quantum conditional mutual information are superadditive.

ID: CaltechAUTHORS:20160824-093212627

]]>

Abstract: We present a coherent microwave-to-telecom signal converter based on the electro-optical effect, using a crystalline whispering-gallery-mode (WGM) resonator coupled to a 3D microwave cavity and achieving a high photon conversion efficiency of 0.1% with MHz bandwidth.

Publication: arXiv
ID: CaltechAUTHORS:20160406-090159955

]]>

Abstract: Quantum information and computation provide a fascinating twist on the notion of proofs in computational complexity theory. For instance, one may consider a quantum computational analogue of the complexity class NP, known as QMA, in which a quantum state plays the role of a proof (also called a certificate or witness), and is checked by a polynomial-time quantum computation. For some problems, the fact that a quantum proof state could be a superposition over exponentially many classical states appears to offer computational advantages over classical proof strings. In the interactive proof system setting, one may consider a verifier and one or more provers that exchange and process quantum information rather than classical information during an interaction for a given input string, giving rise to quantum complexity classes such as QIP, QSZK, and QMIP* that represent natural quantum analogues of IP, SZK, and MIP. While quantum interactive proof systems inherit some properties from their classical counterparts, they also possess distinct and uniquely quantum features that lead to an interesting landscape of complexity classes based on variants of this model. In this survey we provide an overview of many of the known results concerning quantum proofs, computational models based on this concept, and properties of the complexity classes they define. In particular, we discuss non-interactive proofs and the complexity class QMA, single-prover quantum interactive proof systems and the complexity class QIP, statistical zero-knowledge quantum interactive proof systems and the complexity class QSZK, and multiprover interactive proof systems and the complexity classes QMIP, QMIP*, and MIP*.

Publication: Foundations and Trends in Theoretical Computer Science Vol.: 11 No.: 1-2
ID: CaltechAUTHORS:20160622-144016671

]]>

Abstract: Rare-earth-ion doped crystals are state-of-the-art materials for optical quantum memories and quantum transducers between optical and microwave photons. Here we describe our progress towards a nanophotonic quantum memory based on a rare-earth (Neodymium) doped yttrium orthosilicate (YSO) photonic crystal resonator. The Purcell-enhanced coupling of the 883 nm transitions of Neodymium (Nd^(3+)) ions to the nano-resonator results in increased optical depth, which could in principle facilitate highly efficient photon storage via cavity impedance matching. The atomic frequency comb (AFC) memory protocol can be implemented in the Nd:YSO nano-resonator by efficient optical pumping into the long-lived Zeeman state. Coherent optical signals can be stored and retrieved from the AFC memory. We currently measure a storage efficiency on par with a bulk crystal Nd:YSO memory that is millimeters long. Our results will enable multiplexed on-chip quantum storage and thus quantum repeater devices using rare-earth-ions.

No.: 9762 ISSN: 0277786X

ID: CaltechAUTHORS:20160930-111458520

]]>

Abstract: Fully homomorphic encryption is an encryption method with the property that any computation on the plaintext can be performed by a party having access to the ciphertext only. Here, we formally define and give schemes for quantum homomorphic encryption, which is the encryption of quantum information such that quantum computations can be performed given the ciphertext only. Our schemes allow for arbitrary Clifford group gates, but become inefficient for circuits with large complexity, measured in terms of the non-Clifford portion of the circuit (we use the “π/8” non-Clifford group gate, also known as the T-gate). More specifically, two schemes are proposed: the first scheme has a decryption procedure whose complexity scales with the square of the number of T-gates (compared with a trivial scheme in which the complexity scales with the total number of gates); the second scheme uses a quantum evaluation key of length given by a polynomial of degree exponential in the circuit’s T-gate depth, yielding a homomorphic scheme for quantum circuits with constant T-depth. Both schemes build on a classical fully homomorphic encryption scheme. A further contribution of ours is to formally define the security of encryption schemes for quantum messages: we define quantum indistinguishability under chosen plaintext attacks in both the public- and private-key settings. In this context, we show the equivalence of several definitions. Our schemes are the first of their kind that are secure under modern cryptographic definitions, and can be seen as a quantum analogue of classical results establishing homomorphic encryption for circuits with a limited number of multiplication gates. Historically, such results appeared as precursors to the breakthrough result establishing classical fully homomorphic encryption.
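The Clifford portion of such schemes rests on a simple classical fact: conjugating a Pauli one-time pad X^a Z^b through a Clifford gate just permutes and XORs the key bits, so an evaluator can apply Clifford gates to the ciphertext and update the (classical) keys without ever touching the plaintext. Below is a minimal sketch of these standard key-update rules, tracked up to global phase; it illustrates the mechanism only and is not the paper's full scheme.

```python
def update_keys(gate, keys, *wires):
    """Update per-qubit Pauli one-time-pad keys (a, b) under a Clifford gate,
    up to global phase: H swaps the X/Z keys, the phase gate P maps
    b -> a XOR b, and CNOT propagates X keys forward and Z keys backward."""
    ks = list(keys)
    if gate == "H":
        (i,) = wires
        ks[i] = (ks[i][1], ks[i][0])
    elif gate == "P":
        (i,) = wires
        a, b = ks[i]
        ks[i] = (a, a ^ b)
    elif gate == "CNOT":
        c, t = wires
        (ac, bc), (at, bt) = ks[c], ks[t]
        ks[c], ks[t] = (ac, bc ^ bt), (ac ^ at, bt)
    else:
        raise ValueError(f"non-Clifford or unknown gate: {gate}")
    return ks

keys = [(1, 0), (0, 1)]           # X pad on qubit 0, Z pad on qubit 1
keys = update_keys("H", keys, 0)  # H X H = Z, so qubit 0 becomes a Z pad
keys = update_keys("CNOT", keys, 0, 1)
print(keys)  # [(0, 0), (0, 1)]: Z⊗Z conjugates through CNOT to I⊗Z
```

It is exactly the T-gate that breaks this bookkeeping (T X T† is not a Pauli), which is why the schemes' cost scales with the non-Clifford content of the circuit.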

No.: 9216
ID: CaltechAUTHORS:20150521-152945536

]]>

Abstract: We experimentally demonstrate that disorder can induce a topologically non-trivial phase. We implement this “Topological Anderson Insulator” in arrays of evanescently coupled waveguides and demonstrate its unique features.

ID: CaltechAUTHORS:20160325-092400946

]]>

Abstract: We present the basic ideas and techniques utilized in recent work on optomechanical crystals. Optomechanical crystals are nanofabricated cavity optomechanical systems where the confinement of light and motion is obtained by nanopatterning periodic structures in thin-films. In this chapter we start from a basic review of the properties of optical and elastic waves in nanostructures, before introducing the properties and design of periodic structures. After reviewing fabrication and characterization methods, experimental results in 1D and 2D systems are presented.

ID: CaltechAUTHORS:20140907-085235613

]]>

Abstract: We show that quantum-to-classical channels, i.e., quantum measurements, can be asymptotically simulated by an amount of classical communication equal to the quantum mutual information of the measurement, if sufficient shared randomness is available. This result generalizes Winter's measurement compression theorem for fixed independent and identically distributed inputs [Winter, CMP 244 (157), 2004] to arbitrary inputs, and more importantly, it identifies the quantum mutual information of a measurement as the information gained by performing it, independent of the input state on which it is performed. Our result is a generalization of the classical reverse Shannon theorem to quantum-to-classical channels. In this sense, it can be seen as a quantum reverse Shannon theorem for quantum-to-classical channels, but with the entanglement assistance and quantum communication replaced by shared randomness and classical communication, respectively. Our proof is based on quantum-proof randomness extractors and the post-selection technique for quantum channels [Christandl et al., PRL 102 (020504), 2009].

ID: CaltechAUTHORS:20150227-081211253

]]>

Abstract: Many constructions of randomness extractors are known to work in the presence of quantum side information, but there also exist extractors which do not [Gavinsky et al., STOC'07]. Here we find that spectral extractors with a bound on the second largest eigenvalue - considered as an operator on the Hilbert-Schmidt class - are quantum-proof. We then discuss fully quantum extractors and call constructions that also work in the presence of quantum correlations decoupling. As in the classical case we show that spectral extractors are decoupling. The drawback of classical and quantum spectral extractors is that they always have a long seed, whereas there exist classical extractors with exponentially smaller seed size. For the quantum case, we show that there exists an extractor with extremely short seed size d = O(log(1/ε)), where ε > 0 denotes the quality of the randomness. In contrast to the classical case this is independent of the input size and min-entropy and matches the simple lower bound d ≥ log(1/ε).
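For contrast with the quantum-proof and spectral constructions discussed here, the basic classical baseline is a seeded extractor built from a 2-universal hash family via the leftover hash lemma. A stdlib-only toy follows; the skewed source, parameters, and deviation threshold are all illustrative choices.

```python
import random

def hash_extract(x, a, b, p, m):
    """One 2-universal hash: h_{a,b}(x) = (a*x + b) mod p, truncated to m bits."""
    return ((a * x + b) % p) % (1 << m)

rng = random.Random(0)
p = (1 << 61) - 1  # a Mersenne prime, giving a clean universal hash family
samples, counts = 20000, [0] * 16
for _ in range(samples):
    x = rng.getrandbits(16) | 0x00FF               # skewed source: low byte stuck at 1s
    a, b = rng.randrange(1, p), rng.randrange(p)   # fresh uniform seed each sample
    counts[hash_extract(x, a, b, p, 4)] += 1

max_dev = max(abs(c / samples - 1 / 16) for c in counts)
print(max_dev < 0.02)  # True: the 4 extracted bits are nearly uniform
```

The seed here is far longer than the output, which is exactly the drawback the abstract attributes to spectral extractors and which the short-seed quantum construction d = O(log(1/ε)) improves on.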

ID: CaltechAUTHORS:20150227-083629007

]]>

Abstract: We propose a quantum theory of rotating light beams and study some of its properties. Such beams are polychromatic and have either a slowly rotating polarization or a slowly rotating transverse mode pattern. We show there are, for both cases, three different natural types of modes that qualify as rotating, one of which is a new type not previously considered. We discuss differences between these three types of rotating modes on the one hand and non-rotating modes as viewed from a rotating frame of reference on the other, thus resolving some paradoxes mentioned recently.

No.: 6905
ID: CaltechAUTHORS:20181109-123252947

]]>

Abstract: The k-Local Hamiltonian problem is a natural complete problem for the complexity class QMA, the quantum analog of NP. It is similar in spirit to MAX-k-SAT, which is NP-complete for k ≥ 2. It was known that the problem is QMA-complete for any k ≥ 3. On the other hand 1-Local Hamiltonian is in P, and hence not believed to be QMA-complete. The complexity of the 2-Local Hamiltonian problem has long been outstanding. Here we settle the question and show that it is QMA-complete. We provide two independent proofs; our first proof uses a powerful technique for analyzing the sum of two Hamiltonians; this technique is based on perturbation theory and we believe that it might prove useful elsewhere. The second proof uses elementary linear algebra only. Using our techniques we also show that adiabatic computation with two-local interactions on qubits is equivalent to standard quantum computation.

No.: 3328
ID: CaltechAUTHORS:20191011-072647725

]]>