##### 08:10 - 08:30 | **Welcome to QIP 2021**

08:10 | Welcome and Intro Videos

08:20 | Keynote by Robert König and Michael Wolf

### Session 1

Main stage (A)

Authors: Zhengfeng Ji, Anand Natarajan, **Thomas Vidick**, John Wright, Henry Yuen

**Abstract**

Boris Tsirelson in 1993 implicitly posed "Tsirelson's Problem", a question about the possible equivalence between two different ways of modeling locality, and hence entanglement, in quantum mechanics. Tsirelson's Problem gained prominence through work of Fritz, Navascués et al., and Ozawa a decade ago that establishes its equivalence to the famous "Connes' Embedding Problem" in the theory of von Neumann algebras.

Recently we gave a negative answer to Tsirelson's Problem and Connes' Embedding Problem by proving a seemingly stronger result in quantum complexity theory. This result is summarized in the equation MIP* = RE between two complexity classes.

In the talk I will present and motivate Tsirelson's problem, and outline its connection to Connes' Embedding Problem. I will then explain the connection to quantum complexity theory and show how ideas developed in the past two decades in the study of classical and quantum interactive proof systems led to the characterization (which I will explain) MIP* = RE and the negative resolution of Tsirelson's Problem.

Based on joint work with Ji, Natarajan, Wright and Yuen available at arXiv:2001.04383.

*Affiliations: Columbia University | Macquarie University | Google | Google Research | Google | University of Washington | Google*

**Abstract**

We show how to achieve the highest efficiency yet for simulations with arbitrary basis sets by using a representation of the Coulomb operator known as tensor hypercontraction (THC). We use THC to express the Coulomb operator in a non-orthogonal basis, which we are able to block encode by separately rotating each term with angles that are obtained via QROM. Our algorithm has the best complexity scaling for an arbitrary basis, as well as the best complexity for the specific case of FeMoCo. By optimising the surface code resources, we show that FeMoCo can be simulated using about 4 million physical qubits and 3.5 days of runtime, assuming 1 μs cycle times and physical gate error rates no worse than 0.1%.

##### 10:00 - 10:30 | Round tables with Thomas Vidick and Dominic Berry

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

### Session 2

Main stage (A)

*Affiliations: ENS Lyon | University of Cambridge | ENS Lyon*

**Abstract**

In the analysis of quantum information processing tasks, the choice of distance measure between states or channels often plays a crucial role. This submission introduces new quantum Rényi divergences for states and channels that are based on a convex optimization program involving the matrix geometric mean. These divergences have mathematical and computational properties that make them applicable to a wide variety of problems. We use these Rényi divergences to obtain semidefinite programming lower bounds on the key rates for device-independent cryptography, and in particular we find a new bound on the minimal detection efficiency required to perform device-independent quantum key distribution without additional noisy preprocessing. Furthermore, we give several applications to quantum Shannon theory, in particular proving that adaptive strategies do not help in the strong converse regime for quantum channel discrimination and obtaining improved bounds for quantum capacities.
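The matrix geometric mean at the heart of these convex programs can be illustrated directly. The sketch below is my illustration, not the authors' code (the divergences themselves involve an optimization over such means); it computes A # B for two positive definite matrices and checks its defining Riccati property.

```python
# Illustrative sketch of the matrix geometric mean A # B (not the submission's code).
import numpy as np
from scipy.linalg import sqrtm

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    rA = sqrtm(A)
    rAinv = np.linalg.inv(rA)
    return rA @ sqrtm(rAinv @ B @ rAinv) @ rA

# Two arbitrary positive definite matrices (values chosen for illustration).
A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean(A, B)

# Defining property: G is the unique positive solution of G A^{-1} G = B.
print(np.allclose(G @ np.linalg.inv(A) @ G, B))  # True
```

One can also check the sanity condition A # A = A, which follows since the inner square root collapses to the identity.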

*Affiliations: University of Ottawa | LIP6, CNRS/Sorbonne Université*

**Abstract**

We provide several advances to the understanding of the class of Quantum Merlin-Arthur proof systems (QMA), the quantum analogue of NP. Our central contribution is proving a longstanding conjecture that the Consistency of Local Density Matrices (CLDM) problem is QMA-hard under Karp reductions. The input of CLDM consists of local reduced density matrices on sets of at most k qubits, and the problem asks if there is an n-qubit global quantum state that is locally consistent with all of the k-qubit local density matrices. The containment of CLDM in QMA and the QMA-hardness under Turing reductions were proved by Liu [APPROX-RANDOM 2006]. Liu also conjectured that CLDM is QMA-hard under Karp reductions, which is desirable for applications, and we finally prove this conjecture. We establish this result using the techniques of simulatable codes of Grilo, Slofstra, and Yuen [FOCS 2019], simplifying their proofs and tailoring them to the context of QMA. In order to develop applications of CLDM, we propose a framework that we call locally simulatable proofs for QMA: this provides QMA proofs that can be efficiently verified by probing only k qubits and, furthermore, the reduced density matrix of any k-qubit subsystem of a good witness can be computed in polynomial time, independently of the witness. Within this framework, we show several advances in zero-knowledge in the quantum setting. We show for the first time a commit-and-open computational zero-knowledge proof system for all of QMA, as a quantum analogue of a "sigma" protocol. We then define a Proof of Quantum Knowledge, which guarantees that a prover is effectively in possession of a quantum witness in an interactive proof, and show that our zero-knowledge proof system satisfies this definition.

##### 12:00 - 12:30 | Round tables with Peter Brown and Alex Bredariol Grilo

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

##### 12:30 - 13:00 | **Breakfast Club - Networking**

Find out who else is here and discuss different serious (Quantum Computing: hype or hope?) and not-so-serious topics (What’s your favourite Bavarian dish?) via video chat in a casual atmosphere. Join the "Social activities" tab in Meetanyway. Topics change daily. You can join and leave tables as you wish.

##### 13:00 - 14:00 | **Break**

##### 13:00 on QIP TV | **Lab tour "Single atoms in crossed optical fiber cavities" with Dominik Niemietz**

Join the QIP TV stage for an insight into the work done by **Gerhard Rempe's group at the Max-Planck-Institute for Quantum Optics** in Garching near Munich.

##### 14:00 - 15:00 | **Mentoring Session #1**

Prior registration is required and only available for QIP 2021 participants. Specific times and dates are sent out via email.

### Session 3A

Main stage (A)

*Affiliations: IBM Quantum Research, IBM Research, MIT-IBM A.I. Lab | University of Sheffield*

**Abstract**

We introduce a framework for constructing a quantum error correcting code from *any* classical error correcting code. This includes CSS codes and goes beyond the stabilizer formalism to allow quantum codes to be constructed from classical codes that are not necessarily linear or self-orthogonal. We give an algorithm that explicitly constructs quantum codes with linear distance and constant rate from classical codes with linear distance and rate. As illustrations for small size codes, we obtain Steane's 7-qubit code uniquely from Hamming's [7,4,3] code, and obtain other error detecting quantum codes from other explicit classical codes of length 4 and 6. Motivated by quantum LDPC codes and the use of physics to protect quantum information, we introduce a new 2-local frustration-free quantum spin chain Hamiltonian whose ground space we analytically characterize completely. By mapping classical codewords to basis states of the ground space, we utilize our framework to demonstrate that the ground space contains explicit quantum codes with linear distance. This side-steps the Bravyi-Terhal no-go theorem because our work allows for more general quantum codes beyond the stabilizer and/or linear codes. This model may be called an example of *subspace* quantum LDPC codes with linear distance.
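As a concrete reminder of the classical ingredient named above, here is a minimal sketch (illustration only, not the submission's construction) that enumerates Hamming's [7,4,3] code and verifies its parameters; the CSS construction applied to this code yields Steane's 7-qubit code.

```python
# Illustrative sketch: enumerate the classical Hamming [7,4,3] code.
import itertools
import numpy as np

# Parity-check matrix of the [7,4,3] Hamming code: column j is j in binary, j = 1..7.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

# Codewords are the 7-bit strings annihilated by H over GF(2).
codewords = [c for c in itertools.product([0, 1], repeat=7)
             if not (H @ c % 2).any()]

# Minimum distance of a linear code = minimum weight of a nonzero codeword.
d = min(sum(c) for c in codewords if any(c))
print(len(codewords), d)  # 16 codewords, distance 3
```

Since this code contains its dual, using H as both the X-type and Z-type check matrix gives the [[7,1,3]] Steane code mentioned in the abstract.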

*Affiliations: AWS Center for Quantum Computing | University of York | University of Chicago*

**Abstract**

Quantum communication is an important branch of quantum information science, promising unconditional security to classical communication and providing the building block of a future large-scale quantum network. Noise in realistic quantum communication channels imposes fundamental limits on the communication rates of various quantum communication tasks. It is therefore crucial to identify or bound the quantum capacities of a quantum channel. Here, we consider Gaussian channels that model energy loss and thermal noise errors in realistic optical and microwave communication channels and study their various quantum capacities in the energy-constrained scenario. We provide improved lower bounds to various energy-constrained quantum capacities of these fundamental channels and show that higher communication rates can be attained than previously believed. Specifically, we show that one can boost the transmission rates of quantum information and private classical information by using a correlated multi-mode thermal state instead of the single-mode thermal state of the same energy.

*Affiliations: Freie Universität Berlin | ETH Zurich | National Institute of Standards and Technology, Gaithersburg, MD | ETH Zurich | FU Berlin | Caltech*

**Abstract**

Noise in quantum metrology reduces the sensitivity to which one can determine an unknown parameter in the evolution of a quantum state, such as time. Here, we consider a probe system prepared in a pure state that evolves according to a given Hamiltonian. We study the resulting local sensitivity of the probe to time after the application of a given noise channel. We show that the decrease in sensitivity due to the noise is equal to the sensitivity that the environment gains with respect to the energy of the probe. We obtain necessary and sufficient conditions for when the probe does not suffer any sensitivity loss; these conditions are analogous to, but weaker than, the Knill-Laflamme quantum error correction conditions. New upper bounds on the sensitivity of the noisy probe are obtained via our uncertainty relation, by applying known sensitivity lower bounds on the environment's system. Our time-energy uncertainty relation also generalizes to any two arbitrary parameters whose evolutions are generated by Hermitian operators. This uncertainty relation asserts a general trade-off between the sensitivities that two parties can achieve for any two respective parameters of a single quantum system, in terms of the commutator of the associated generators. We consider applications to strongly interacting many-body probes. We find probe states for general interaction graphs of Ising and Heisenberg interactions that are robust to any single located error. For a 1D spin chain with nearest-neighbor interactions subject to amplitude damping noise on each site, we verify numerically that our probe state does not lose any sensitivity to first order in the noise parameter.

##### 16:30 - 17:00 | **Round tables with Yingkai Ouyang, Kyungjoo Noh, and Philippe Faist**

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

### Session 3B

Stage B

*Affiliations: QuICS | University of Maryland | UC Berkeley | LIP6, CNRS/Sorbonne Université | QuICS | Caltech | Caltech // Tel Aviv University | Tel Aviv University*

**Abstract**

A non-interactive zero-knowledge (NIZK) proof system for a language L in NP allows a prover (who is provided with an instance x and a witness w) to compute a classical certificate for the claim that x is in L, with the following properties: 1) the protocol can be verified efficiently, and 2) the protocol does not reveal any information about w, besides the fact that it exists (i.e., that x is in L). While NIZKs are known to be impossible in the plain model (i.e., with no additional trusted resource), they are well studied in alternative models and have seen widespread application in classical cryptography. Given the importance of NIZKs, and more generally zero-knowledge protocols, in classical cryptography, there has been a recent effort to achieve such protocols for QMA, a natural quantum analog of NP. However, all previous results only achieved interactive protocols, limiting their cryptographic use. Moreover, they all rely on quantum communication between the prover and the verifier, which may be difficult to achieve. In this submission, we present two NIZK protocols for QMA in the Common Reference String (CRS) model, with additional offline setup. Both protocols are achieved through the homomorphic computation of classical NIZKs for NP, and rely on the hardness of the Learning With Errors problem. However, each of them then combines this core idea with different (seemingly incomparable) techniques: 1) our first protocol makes use of quantum teleportation and quantum communication in an offline setup phase, with a classical online phase; 2) our second protocol leverages techniques for classical verification of quantum computations, and is the only known NIZK for QMA to be completely classical, as well as reusable, meaning that a single setup allows one to prove many theorems. Security of the latter is in the Quantum Random Oracle model. // We construct the first constant-round zero-knowledge classical argument for NP secure against quantum attacks.
We assume the existence of Quantum Fully Homomorphic Encryption and other standard primitives, known based on the Learning with Errors Assumption for quantum algorithms. As a corollary, we also obtain the first constant-round zero-knowledge quantum argument for QMA. At the heart of our protocol is a new no-cloning non-black-box simulation technique.

*Affiliations: Weizmann Institute of Science | University of Toronto*

**Abstract**

We present a garbling scheme for quantum circuits, thus achieving a decomposable randomized encoding scheme for quantum computation. Specifically, we show how to compute an encoding of a given quantum circuit and quantum input, from which it is possible to derive the output of the computation and nothing else. In the classical setting, garbled circuits (and randomized encodings in general) are a versatile cryptographic tool with many applications such as secure multiparty computation, delegated computation, depth-reduction of cryptographic primitives, complexity lower-bounds, and more. However, a quantum analogue for garbling general circuits was not known prior to this work. We hope that our quantum randomized encoding scheme can similarly be useful for applications in quantum computing and cryptography. To illustrate the usefulness of quantum randomized encoding, we use it to design a conceptually simple zero-knowledge (ZK) proof system for the complexity class QMA. Our protocol has the so-called Σ format with a single-bit challenge, and allows the inputs to be delayed to the last round. The only previously known ZK Σ-protocol for QMA is due to Broadbent and Grilo (FOCS 2020), which does not have the aforementioned properties.

*Affiliations: QUICS, University of Maryland | QUICS, University of Maryland, and National Institute of Standards and Technology | University of Waterloo*

**Abstract**

When two spatially separated parties make measurements on an unknown entangled quantum state, what correlations can they achieve? How difficult is it to determine whether a given correlation is quantum? This question is central to problems in quantum communication and computation. Previous work has shown that the general membership problem for quantum correlations is computationally undecidable. In the current work we show something stronger: there is a family of constant-sized correlations --- that is, correlations for which the number of measurements and number of measurement outcomes are fixed --- such that solving the quantum membership problem for this family is computationally impossible. Intuitively, our result means that the undecidability that arises in understanding Bell experiments is innate, and is not dependent on varying the number of measurements in the experiment. This places strong constraints on the types of descriptions that can be given for quantum correlation sets. Our proof is based on a combination of techniques from quantum self-testing and from undecidability results of the third author for linear system nonlocal games.

##### 16:30 - 17:00 | **Round tables with Shih-Han Hung, Omri Shmueli, Zvika Brakerski, and Honghao Fu**

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

### Session 3C

Stage C

*Affiliations: Carnegie Mellon University*

**Abstract**

Entanglement lies at the root of quantum theory. It is a remarkable resource that is generally believed to diminish when entangled systems interact with their environment. On the contrary, we find that engaging a system with its environment increases its ability to retain entanglement. The maximum rate of retaining entanglement is given by the quantum capacity. We counter-intuitively boost the quantum capacity of a channel by leaking almost all quantum information to the channel's environment. This result also has surprising implications for quantum key distribution: maximum rates for key distribution are boosted by allowing leakage of information to the eavesdropping environment. These boosts exploit two-letter level non-additivity in the channel's coherent information. The resulting non-additivity has a far larger magnitude and a qualitatively wider extent than previously known. This wide extent is proven using a new insight into the von Neumann entropy which shows that logarithmic singularities in the entropy are a source of quantum capacity and non-additivity. These singularities can be used further to show a new type of non-additivity displayed by a zero quantum capacity qubit amplitude damping channel used in parallel with a simple qutrit channel.

*Affiliations: University of Copenhagen | University of Copenhagen | University of Nebraska-Lincoln*

**Abstract**

We consider correlations, $p_{n,x}$, arising from measuring a maximally entangled state using $n$ measurements with two outcomes each, constructed from $n$ projections that add up to $xI$. We show that the correlations $p_{n,x}$ robustly self-test the underlying states and measurements. To achieve this, we lift the group-theoretic Gowers-Hatami based approach for proving robust self-tests to a more natural algebraic framework. A key step is to obtain an analogue of the Gowers-Hatami theorem allowing one to perturb an "approximate" representation of the relevant algebra to an exact one. For $n=4$, the correlations $p_{n,x}$ self-test the maximally entangled state of every odd dimension as well as 2-outcome projective measurements of arbitrarily high rank. The only other family of constant-size self-tests for strategies of unbounded dimension is due to Fu (QIP 2020) who presents such self-tests for an infinite family of maximally entangled states with *even* local dimension. Therefore, we are the first to exhibit a constant-size self-test for measurements of unbounded dimension as well as all maximally entangled states with odd local dimension. In addition, correlations $p_{4,x}$ represent the first self-tests for measurements of rank higher than one.

*Affiliations: University of Illinois | University of Chicago*

**Abstract**

Trace inequalities are powerful techniques in studying quantum entropy. The physics of quantum field theory and holography nonetheless motivate entropy inequalities in von Neumann algebras that lack a useful notion of a trace. Haagerup and Kosaki L_p spaces enable re-expressing trace inequalities in scenarios that start with a non-tracial von Neumann algebra, and we show how the generalized Araki-Lieb-Thirring and Golden-Thompson inequalities from (Sutter, Berta & Tomamichel 2017) port to general von Neumann algebras. Using an approximation method of Haagerup, we prove the tightened recovery map correction to data processing for relative entropy in (potentially non-tracial) von Neumann algebras. We also prove a p-fidelity version. Furthermore, we prove that non-decrease of relative entropy is equivalent to the existence of an L_1-isometry implementing the channel on both input states.

##### 16:30 - 17:00 | **Round tables with Vikesh Siddhu, Jitendra Prakash, and Nicholas LaRacuente**

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

##### 17:00 - 18:15 | Break

##### 18:15 - 19:00 | Mingling

One-on-one networking session: a great way to meet new people and old friends, and to make valuable connections. Participants are randomly matched for one-on-one talks of 3 minutes. You can try it at the 2D conference space (MeetAnyway).

### Session 4A

Main stage (A)

*Affiliations: Google | Massachusetts Institute of Technology | not applicable | Harvard University*

**Abstract**

The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose algorithm for combinatorial optimization problems whose performance can only improve with the number of layers p. While QAOA holds promise as an algorithm that can be run on near-term quantum computers, its computational power has not been fully explored. In this work, we study the QAOA applied to the Sherrington-Kirkpatrick (SK) model, which can be understood as energy minimization of n spins with all-to-all random signed couplings. There is a recent classical algorithm by Montanari that, assuming a widely believed conjecture, can be tailored to efficiently find an approximate solution for a typical instance of the SK model to within (1-epsilon) times the ground state energy. We can only hope to match its performance with the QAOA. Our main result is a novel technique that allows us to evaluate the typical-instance energy of the QAOA applied to the SK model. We produce a formula for the expected value of the energy, as a function of the 2p QAOA parameters, in the infinite size limit that can be evaluated on a computer with O(16^p) complexity. We evaluate the formula up to p=12, and find that the QAOA at p=11 outperforms the standard semidefinite programming algorithm. Moreover, we show concentration: With probability tending to one as n goes to infinity, measurements of the QAOA will produce strings whose energies concentrate at our calculated value. As an algorithm running on a quantum computer, there is no need to search for optimal parameters on an instance-by-instance basis since we can determine them in advance. What we have here is a new framework for analyzing the QAOA, and our techniques can be of broad interest for evaluating its performance on more general problems where classical algorithms may fail.
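For readers unfamiliar with the model, the SK cost function itself is easy to write down. The following sketch is hypothetical code (couplings and sizes chosen by me for illustration, not the authors' implementation); it brute-forces the ground-state energy of a tiny instance, the value the QAOA is asked to approach.

```python
# Illustrative sketch: the Sherrington-Kirkpatrick cost function on n spins.
import numpy as np

rng = np.random.default_rng(0)
n = 8
# All-to-all random signed couplings J_ij = ±1, normalized by sqrt(n).
J = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1) / np.sqrt(n)

def sk_energy(spins):
    """Energy of a ±1 spin configuration under the SK Hamiltonian sum_{i<j} J_ij s_i s_j."""
    return spins @ J @ spins

# Brute-force the ground-state energy over all 2^n spin strings (feasible only for small n).
best = min(sk_energy(np.array([1 if b == "1" else -1 for b in f"{z:0{n}b}"]))
           for z in range(2 ** n))
print(best)
```

For large n this enumeration is of course hopeless, which is exactly why typical-instance analyses like the one in this submission are needed.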

*Affiliations: Tufts University | Cambridge Quantum Computing | Tufts University*

**Abstract**

We describe how contextuality may be used to advantage in variational quantum eigensolvers. Contextuality is a characteristic feature of quantum mechanics, and identifying contextuality in quantum algorithms provides a means for distinguishing them from their classical counterparts. We first describe how contextuality may be identified in variational quantum eigensolvers (VQEs), which are a leading algorithm for noisy intermediate-scale quantum computers. We then show how to construct a classical phase-space model for any noncontextual Hamiltonian, which provides a classical simulation algorithm for noncontextual VQE and allows us to prove that the noncontextual Hamiltonian problem is only NP-complete, rather than QMA-complete. Finally, we describe an approximation method called contextual subspace VQE that permits us to partition a general Hamiltonian into a noncontextual part and a contextual part, and estimate its ground state energy using a technique that combines classical simulation of the noncontextual part with quantum simulation of the contextual part. By using more quantum resources (in qubits and simulated terms of the Hamiltonian), we can increase the accuracy of the approximation. We tested contextual subspace VQE on electronic structure Hamiltonians, and found that to reach chemical accuracy in most cases it requires fewer qubits and simulated terms than standard VQE.

*Affiliations: UC Berkeley | UC Berkeley | University of Illinois, Urbana Champaign | Princeton University and NTT Research*

**Abstract**

We investigate the round complexity of maliciously-secure two-party quantum computation (2PQC) with setup, and obtain the following results:

- A three-message protocol (two-message if only one party receives output) in the common random string (CRS) model assuming classical two-message oblivious transfer (OT) with post-quantum malicious security. This round complexity is optimal for the sequential communication setting. Under the additional assumption of reusable malicious designated-verifier non-interactive zero-knowledge (MDV-NIZK) arguments for NP, our techniques give an MDV-NIZK for QMA. Each of the assumptions mentioned above is known from the quantum hardness of learning with errors (QLWE).
- A protocol with two simultaneous rounds of communication, in a quantum preprocessing model, assuming sub-exponential QLWE. In fact, we construct a three-round protocol in the CRS model with only two rounds of online communication, which implies the above result. Along the way, we develop a new delayed simulation technique that we call simulation via teleportation, which may be useful in other settings.

In addition, we perform a preliminary investigation into barriers and possible approaches for two-round 2PQC in the CRS model, including an impossibility result for a natural class of simulators, and a proof-of-concept construction from a strong form of quantum virtual black-box (VBB) obfuscation. Prior to our work, maliciously-secure 2PQC required round complexity linear in the size of the quantum circuit.

##### 20:30 - 21:00 | **Round tables with Leo Zhou, William Kirby, and James Bartusek**

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

### Session 4B

Stage B

*Affiliations: California Institute of Technology | Perimeter Institute for Theoretical Physics | AWS Center for Quantum Computing*

**Abstract**

We consider quantum circuits consisting of randomly chosen two-local gates and study the number of gates needed for the distribution over measurement outcomes for typical circuit instances to be anti-concentrated, roughly meaning that the probability mass is not too concentrated on a small number of measurement outcomes. Understanding the conditions for anti-concentration is important for determining which quantum circuits are difficult to simulate classically, as anti-concentration has been in some cases an ingredient of mathematical arguments that simulation is hard and in other cases a necessary condition for easy simulation. Our definition of anti-concentration is that the expected collision probability, that is, the probability that two independently drawn outcomes will agree, is only a constant factor larger than if the distribution were uniform. We show that when the 2-local gates are each drawn from the Haar measure (or any two-design), at least Ω(n log(n)) gates (and thus Ω(log(n)) circuit depth) are needed for this condition to be met on an n qudit circuit. In both the case where the gates are nearest-neighbor on a 1D ring and the case where gates are long-range, we show O(n log(n)) gates are also sufficient, and we precisely compute the optimal constant prefactor for the n log(n). The technique we employ relies upon a mapping from the expected collision probability to the partition function of an Ising-like classical statistical mechanical model, which we manage to bound using stochastic and combinatorial techniques.
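The collision-probability definition of anti-concentration can be checked numerically against the Haar baseline. The sketch below is my illustration (parameters arbitrary, not from the submission): it samples Haar-random states and confirms that their expected collision probability is about twice the uniform value 1/2^n, i.e., the constant-factor regime described above.

```python
# Illustrative sketch: collision probability of Haar-random states vs. the uniform distribution.
import numpy as np

rng = np.random.default_rng(1)
n = 10
dim = 2 ** n

def collision_probability(state):
    """Probability that two independent measurement outcomes agree: sum_z p(z)^2."""
    p = np.abs(state) ** 2
    return float(np.sum(p ** 2))

# A normalized complex Gaussian vector is distributed as a Haar-random state.
samples = []
for _ in range(50):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    v /= np.linalg.norm(v)
    samples.append(collision_probability(v))

mean_cp = np.mean(samples)
uniform_cp = 1 / dim        # collision probability of the uniform distribution
haar_cp = 2 / (dim + 1)     # exact Haar expectation (Porter-Thomas statistics)
print(mean_cp / uniform_cp) # close to 2
```

The factor of roughly 2 relative to uniform is exactly the "constant factor larger" benchmark that the circuit-depth bounds in the abstract are measured against.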

*Affiliations: UC Berkeley | University of Chicago | UC Berkeley | University of California, Berkeley*

**Abstract**

Understanding the power of random quantum circuit sampling experiments has emerged as one of the most pressing topics in the near-term quantum era. In this work we make progress toward bridging the major remaining gaps between theory and experiment, incorporating the effects of experimental imperfections into the theoretical hardness arguments. We do this first by proving that computing the output probability of an $m$-gate random quantum circuit to within additive imprecision $2^{-O(m^{1+\epsilon})}$ is #P-hard for any $\epsilon>0$, an exponential improvement over the prior hardness results of Bouland et al. and Movassagh which were resistant to imprecision $2^{-O(m^3)}$. This improvement very nearly reaches the threshold ($2^{-O(m)}/\text{poly}(m)$) sufficient to establish the hardness of sampling for constant-depth random quantum circuits. To prove this result we introduce new error reduction techniques for polynomial interpolation, as well as a new robust Berlekamp-Welch argument over the reals which may be of independent interest. Second we show that these results are still true in the presence of a constant rate of noise, so long as the noise rate is below the error detection threshold. That is, even though random circuits with a constant noise rate converge rapidly to the maximally mixed state, the (exponentially) small deviations in their output probabilities away from uniformity remain difficult to compute. Interestingly, we then show that our two main results are in tension with one another, and the latter result implies the former result is essentially optimal with respect to additive imprecision error, even with substantial generalizations of our techniques.

*Affiliations: University of Calgary | University of Calgary | University of Calgary*

**Abstract**

We consider the problem of sampling from a target distribution in a Markov chain P. The random walk starts in a state drawn from the stationary distribution of P, walks according to P, and eventually stops once some predefined conditions are satisfied, generating a desired target distribution tau. We present a quantum algorithm that generates the target distribution tau using quadratically fewer steps than the optimal random walk. Our algorithm uses a generalization of controlled quantum walks. We introduce a quantum analogue of the exit frequencies of a random walk and use it to prove a relationship between sampling in random and quantum walks. Our framework yields simple and natural proofs. Applications of the framework include: a quantum rejection sampling algorithm achieving a quadratic speedup; a proof that controlled quantum walks can emulate generalized quantum interpolated walks; and a proof that the extended hitting time is a measure of the complexity of a sampling problem. Efficient sampling is an important computational task used in, e.g., simulations based on stochastic processes. This is the first quantum algorithm that achieves a quadratic speed-up for sampling from general probability distributions over states of a Markov chain starting from the stationary distribution.

##### 20:30 - 21:00 | **Round tables with Alexander Dalzell, Yunchao Liu, and Dante Bencivenga**

At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

### Session 4C

Stage C

*Affiliations: Microsoft | AWS Center for Quantum Computing | Microsoft*

**Abstract**

Estimating and reducing the overhead of existing fault-tolerance schemes is a crucial step toward realizing scalable quantum computers. Many of the most promising schemes are based upon two-dimensional (2D) topological codes such as the surface and color codes. In these schemes, universal computation is typically achieved using readily implementable Clifford operations along with a less convenient and more costly implementation of the $T$ gate. In our work, we compare the cost of fault-tolerantly implementing the $T$ gate in 2D color codes using two leading approaches: state distillation and code switching to a 3D color code. We report that state distillation is more resource-efficient than code switching, in terms of both qubit overhead and space-time overhead. In particular, we find a $T$ gate threshold via code switching of $0.07(1)\%$ under circuit noise, almost an order of magnitude below that for distillation with 2D color codes. To arrive at this result, we provide and implement a simplified end-to-end recipe for code switching, detailing each step and providing important optimization considerations. We not only find numerical overhead estimates of this code switching protocol, but also lower bound various conceivable improvements. We also optimize the 2D color code for circuit noise, yielding its largest threshold to date, $0.37(1)\%$, and adapt and optimize the restriction decoder, finding a threshold of $0.80(5)\%$ for the 3D color code with perfect measurements under $Z$ noise. We foresee that this analysis will influence the choice of which fault-tolerant schemes and which scalable hardware designs should be pursued in future.


*Affiliations: Ghent University | IBM T.J. Watson Research Center | Ghent University | University of Ghent*

**Abstract**

We consider a two-dimensional quantum memory of qubits on a torus encoding an extended Fibonacci string-net model, and construct error correction strategies when those qubits are subjected to depolarizing noise. In the case of a fixed-rate sampling noise model, we find an error-correcting threshold of 4.75% with a clustering decoder. Using the concept of tube algebras, we construct a set of measurements and quantum gates which map arbitrary qubit errors to the Turaev-Viro subspace. Tensor network techniques then allow us to quantitatively study the action of Pauli noise on that subspace. We perform Monte Carlo simulations of the Fibonacci code and compare the performance of several decoders. To the best of our knowledge, this is the first time that a threshold has been calculated for a two-dimensional error-correcting code in which universal quantum computation can be performed in its code space.


*Affiliations: Microsoft Station Q / KITP @ UCSB | University of Maryland | Princeton University | California Institute of Technology | Stanford University*

**Abstract**

Fracton phases exhibit striking behavior which appears to render them beyond the standard topological quantum field theory (TQFT) paradigm for classifying gapped quantum matter. Here, we explore fracton phases from the perspective of defect TQFTs and show that topological defect networks (networks of topological defects embedded in stratified 3+1D TQFTs) provide a unified framework for describing various types of gapped fracton phases. In this picture, the sub-dimensional excitations characteristic of fractonic matter are a consequence of mobility restrictions imposed by the defect network. We conjecture that all gapped phases, including fracton phases, admit a topological defect network description and support this claim by explicitly providing such a construction for many well-known fracton models, including the X-Cube and Haah's B code. To highlight the generality of our framework, we also provide a defect network construction of a novel fracton phase hosting non-Abelian fractons. As a byproduct of this construction, we obtain a generalized membrane-net description for fractonic ground states as well as an argument that our conjecture implies no topological fracton phases exist in 2+1D gapped systems. Our work also sheds light on new techniques for constructing higher-order gapped boundaries of 3+1D TQFTs.

20:30 - 21:00 | **Round tables with Michael Beverland, Alexis Schotte, and Dominic Williamson** At the end of the session, you can meet with the speakers directly. Round tables take place in parallel, each speaker having their own meeting.

##### 21:00 - 23:00 | Poster Session A

**Posters in Rooms A.1 to A.4 are presenting.** Visit the 3D poster rooms and use the "Video chat" button under each poster to engage with the poster representative during this time. Each video chat is linked to the corresponding poster in MeetAnyway, so access is possible from either platform.

See the list of all posters presenting at QIP 2021.