The Toric Code


Abstract

The toric code is one of the first examples of a surface code \cite{Kitaev}, where a lattice consisting of qubits is deployed for the purposes of error correction. The toric code can be seen as a tool for realizing a quantum memory by exploiting certain topological properties that result from embedding the lattice on the surface of a torus. The model is best understood through the stabilizer formalism for quantum error correction. This post discusses the general properties of the toric code in this regard, and explains how Pauli errors on the encoded state can be corrected. Errors manifest themselves in the toric code as anyonic excitations. Although the theory of anyons provides an insightful interpretation to the dynamics of the toric code, the model will be presented here with minimal emphasis on the anyonic properties at play without sacrificing the essence of the toric code.




Preliminary remarks

Consider a $k \times k$ square lattice consisting of $k^2$ square faces, $k^2$ vertices, and $2k^2$ edges. By identifying the top of the lattice with the bottom, and the left side with the right side, the lattice can be thought of as being embedded on the surface of a genus-$1$ torus (hence the name 'toric' code). Alternatively, the $k \times k$ lattice can be thought of as having periodic boundary conditions, where a path leaving the lattice from the top side returns to the lattice from the corresponding point on the bottom side of the lattice. Likewise, a path leaving the lattice from either the left or right side returns from the opposite side. Instead of visualizing the lattice on the surface of the torus, this latter picture will be used throughout for convenience.

It is worth mentioning that the choice of a $k\times k$ square lattice is somewhat arbitrary in the sense that most of the essential features exhibited by the toric code also hold for $k\times k'$ lattices with $k\neq k'$. The consequences this has for the code in regards to error correction will be discussed later. Moreover, the choice of a square lattice can also be relaxed to include lattice configurations of arbitrary shape. In such cases however, the operators that act on the lattice may also need modification. For simplicity, this post will work with a $k\times k$ square lattice in order to exemplify the properties of the toric code in a more accessible manner.

In the toric code, a qubit is placed on each edge of the lattice so that there are $N:=2k^2$ qubits for a $k \times k$ lattice. Thus, the Hilbert space of the system under consideration is of the form
\[
\Hil^N=\TENSOR{j=1}{N}\Hil^2_j,
\]
where $\Hil^2_j$ is the two-dimensional Hilbert space of qubit $j$. For notational purposes, when some single qubit unitary operator $U$ is applied to only qubit $j$, write $U_j$ in order to specify the appropriate Hilbert space $\Hil^2_j$ that $U$ is meant to act on. That is,
\[
U_j=I\otimes\dots\otimes I\otimes U\otimes I\otimes\dots\otimes I
\]
will denote the unitary operator of the whole system $\Hil^N$ that only applies the single qubit unitary $U$ to the qubit of $\Hil^2_j$ and acts trivially on all other qubits (here, $I$ denotes the single qubit identity operator on $\Hil^2$). In this way, the action of the unitary $U$ on the space $\Hil^2_j\subset\Hil^N$ of a single qubit is extended to an operation that acts on the whole space $\Hil^N$. Therefore, if two single qubit unitaries $U $ and $V$ are to be applied to qubits $j_1$ and $j_2$, respectively, it is well defined to simply write this operation as the product $U_{j_1}V_{j_2}$.
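
As a rough illustration of this notation (not part of the original construction), the following Python sketch builds $U_j$ explicitly as a Kronecker product; NumPy and the zero-based qubit indexing are our own illustrative choices.

\begin{verbatim}
# A minimal sketch (assuming NumPy) of extending a single-qubit operator U
# to the full N-qubit space as U_j = I x ... x U x ... x I.
# Qubits are indexed from 0 here, unlike the 1-based labels used in the text.
import numpy as np

I2 = np.eye(2)

def embed(U, j, N):
    """Return the 2^N x 2^N operator acting as U on qubit j and trivially elsewhere."""
    op = np.array([[1.0]])
    for site in range(N):
        op = np.kron(op, U if site == j else I2)
    return op

# Example: sigma^x applied to qubit 2 of a 4-qubit system.
sx = np.array([[0, 1], [1, 0]])
X2 = embed(sx, 2, 4)          # shape (16, 16)
\end{verbatim}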


The Stabilizer Group

For the toric code, there are two basic types of operators that act on the qubits of the lattice. These operators will be used to construct an algebraic set of operators important for the purposes of error correction. Define the following two operators on $\Hil^N$ for each vertex $v$ and face $f$ of the lattice, where $\sigma^x$ and $\sigma^z$ are the standard single qubit Pauli operators,
\[
A_v=\PROD{j\in star(v)}{}\sigma^x_j \ \ \ \text{and} \ \ \ B_f=\PROD{j\in boundary(f)}{}\sigma^z_j .
\]
Here, $star(v)$ represents the set of $4$ edges that meet at vertex $v$ so that $A_v$ is the operator that applies $\sigma^x$ to each of the $4$ qubits adjacent to vertex $v$, and $boundary(f)$ represents the set of $4$ edges that border the face $f$ so that $B_f$ is the operator that applies $\sigma^z$ to each of the $4$ qubits that border the particular face $f$. These operators are illustrated in the figure below. Note that, since both $\sigma^x$ and $\sigma^z$ are Hermitian operators, so are the $A_v$ and $B_f$. Moreover, both $A_v$ and $B_f$ have eigenvalues $+1$ and $-1$.


The lattice in the toric code has qubits placed on its edges, depicted as black dots. The vertex operator $A_v$ applies $\px$ operators to the four edges around vertex $v$, and the face operator $B_f$ applies $\pz$ operators to the four edges bordering a face $f$.


The set $S$ consisting of all possible products of the operators $A_v$ and $B_f$ forms an abelian subgroup of the Pauli group $P_N$, which makes $S$ a stabilizer group (see Appendix). For this to be the case, each of the $A_v$ and $B_f$ must commute with one another. Since $A_v$ and $B_f$ are either products of only $\sigma^x$s or only $\sigma^z$s, it follows that $A_v$ and $A_{v'}$ commute for any vertices $v$ and $v'$, and that $B_f$ and $B_{f'}$ commute for any faces $f$ and $f'$, because $\sigma^x$ and $\sigma^z$ each commute with themselves. However, even though $\sigma^x$ and $\sigma^z$ anti-commute with one another, $A_v$ and $B_f$ do indeed commute for any vertex $v$ and face $f$. To see this, there are two cases to consider. First, suppose $v$ and $f$ are sufficiently far apart so that there are no qubits in common that are acted on by the operators $A_v$ and $B_f$. In this case, $A_v$ and $B_f$ trivially commute since the operators $\sigma^x_j$ and $\sigma^z_{j'}$, with $j\in star(v)$ and $j'\in boundary(f)$, act on different subspaces. The only other possibility to consider is when the vertex $v$ happens to be one of the corners of the face $f$. In this case, there are two distinct qubits in the intersection of $star(v)$ and $boundary(f)$ as shown in Figure:

Adjacent vertex and face operators commute because they have two edges in common.


Each of these two qubits is acted on by $\sigma^x$ and $\sigma^z$, but since $\sigma^x\sigma^z=-\sigma^z\sigma^x$ there are two minus signs that result from the action of $A_v$ and $B_f$ on each of the two qubits, which then cancel, implying that $A_vB_f=B_fA_v$. Thus, it has been shown that each of the $A_v$ and $B_f$ commute with one another so that the set $S$ generated by their products is indeed a stabilizer group by definition.
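
For readers who like to experiment, here is a small Python sketch (assuming NumPy) that builds the $A_v$ and $B_f$ operators on a tiny periodic lattice and checks the commutation relations argued above. The edge-indexing convention and helper names are illustrative choices, not part of the original model.

\begin{verbatim}
# A toy construction of the vertex and face operators on a small k x k periodic
# lattice; the final assertion verifies that every pair of generators commutes.
import numpy as np
from itertools import product

k = 2
N = 2 * k * k                                      # one qubit per edge
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sz = np.diag([1, -1])

def h(r, c): return (r % k) * k + (c % k)          # horizontal edge right of vertex (r, c)
def v(r, c): return k * k + (r % k) * k + (c % k)  # vertical edge below vertex (r, c)

def pauli_product(single, edges):
    """Tensor product applying `single` on the listed edges and the identity elsewhere."""
    op = np.array([[1.0]])
    for e in range(N):
        op = np.kron(op, single if e in edges else I2)
    return op

def A(r, c):   # star operator at vertex (r, c)
    return pauli_product(sx, {h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)})

def B(r, c):   # plaquette operator on the face whose top-left corner is vertex (r, c)
    return pauli_product(sz, {h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)})

gens = [A(r, c) for r, c in product(range(k), repeat=2)] + \
       [B(r, c) for r, c in product(range(k), repeat=2)]
assert all(np.allclose(P @ Q, Q @ P) for P in gens for Q in gens)
\end{verbatim}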

It is worthwhile to calculate the number of independent generators of $S$. A generating set of $S$ is a collection of elements of $S$ such that each element of $S$ can be expressed as some product of elements from the generating set. In addition, it is required that the elements of the generating set be independent, meaning that no element of the generating set can be expressed as a product of the other elements of the generating set. Since the elements of $S$ are all expressible by products of the operators $A_v$ and $B_f$, finding a minimal generating set of $S$ comes down to determining if any of the $A_v$s or $B_f$s can be expressed in terms of others.

In fact, it will now be shown that the following two relationships hold:
\[
\PROD{v}{}A_v=I \ \ \text{and} \ \ \PROD{f}{}B_f=I,
\]
where the products range over all vertices $v$ and faces $f$ of the lattice and $I$ is the identity operator on the whole space $\Hil^N$. This is easily seen by noting that every edge of the lattice belongs to exactly two stars and to exactly two boundaries. Hence, in the product of all $A_v$ (or all $B_f$), each qubit is acted on by exactly two $\sigma^x$ (or $\sigma^z$) operators, and since $\sigma^x\sigma^x=I=\sigma^z\sigma^z$ the total action on every qubit is trivial. The simultaneous action of every $A_v$ (or $B_f$) on the lattice therefore amounts to the identity $I$. This pattern is illustrated in the following two figures:


 The product of all face operators gives the identity $I$ since each edge of the lattice is acted on by two $\pz$ operators and ${\pz}^2=I$.

 The product of all vertex operators gives the identity $I$ since each edge of the lattice is acted on by two $\px$ operators and ${\px}^2=I$.

The relationships derived above imply that any single $A_{v'}$ (or $B_{f'}$) can be expressed as the product of all other $A_{v}$ (or $B_{f}$) with $v\neq v'$ (or $f\neq f'$). That is, since $A_v^2=I$ and $B_f^2=I$, each $A_v$ and $B_f$ is its own inverse. It then follows that
\[
\PROD{v\neq v'}{}A_v=A_{v'} \ \ \text{and} \ \ \PROD{f\neq f'}{}B_f=B_{f'}.
\]
Hence, since there are $k^2$ many $A_v$ operators and also $k^2$ many $B_f$ operators defined on the $k\times k$ lattice, there are only $k^2-1$ independent operators of each variety. This shows that a minimal generating set for the stabilizer group $S$ consists of $G:=2(k^2-1)$ independent elements.
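
The two identities and the count of independent generators can also be checked mechanically. The sketch below works in the GF(2) "support" picture, where multiplying same-type Pauli operators corresponds to taking the symmetric difference of their edge sets; the edge labelling is again our own illustrative convention.

\begin{verbatim}
# A small self-check of prod_v A_v = I, prod_f B_f = I, and of the generator
# count 2(k^2 - 1): a product of same-type Pauli operators is the identity
# exactly when every edge appears an even number of times.
k = 4
h = lambda r, c: (r % k) * k + (c % k)            # horizontal edge right of vertex (r, c)
v = lambda r, c: k * k + (r % k) * k + (c % k)    # vertical edge below vertex (r, c)
star  = lambda r, c: {h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)}
bound = lambda r, c: {h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)}

cells = [(r, c) for r in range(k) for c in range(k)]

prod_A, prod_B = set(), set()
for r, c in cells:
    prod_A ^= star(r, c)       # symmetric difference = multiplying supports mod 2
    prod_B ^= bound(r, c)
assert prod_A == set() and prod_B == set()    # prod_v A_v = I and prod_f B_f = I

def gf2_rank(rows):
    """Rank over GF(2) of bit vectors encoded as Python integers."""
    rows, rank = list(rows), 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

to_bits = lambda s: sum(1 << e for e in s)
rank_A = gf2_rank(to_bits(star(r, c)) for r, c in cells)
rank_B = gf2_rank(to_bits(bound(r, c)) for r, c in cells)
assert rank_A == rank_B == k * k - 1           # 2(k^2 - 1) independent generators in total
\end{verbatim}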

The size of the generating set of $S$ will be relevant when analyzing the code space of the toric code to be presented in the next section. The elements of $S$, and in particular the operators forming the generating set of $S$, will play a crucial role in the correction of errors by serving as check operators. Measuring the generators on the qubits of the lattice will (in the ideal case) yield information on whether or not errors have been inflicted on the system, and also serve as a means to correct the possible errors.


The Code Space

 The stabilizer group $S$ generated by the operators $A_v$ and $B_f$ consists of operators that all commute with each other. It is a general result in the theory of linear algebra that such a collection of operators can all be simultaneously diagonalized. This means that there exists a simultaneous eigenspace which is a subspace of $\Hil^N$ consisting of eigenvectors of every element of $S$. Thus, consider the space
 \[
 \Hil_S:=\{\ket{\psi}\in\Hil^N : A_v\ket{\psi}=\ket{\psi}, B_f\ket{\psi}=\ket{\psi} \ \text{for all} \ v \ \text{and} \ f\},
 \]
that is the simultaneous eigenspace of eigenvectors of each $A_v$ and $B_f$ having eigenvalue $+1$. Recall that such considerations are possible since the $A_v$ and $B_f$ have eigenvalues $+1$ and $-1$. The subspace $\Hil_S\subset\Hil^N$ is called the code space of the stabilizer group $S$. This space is to be thought of as encoding information that is to remain protected from errors. Each state $\ket{\psi}\in\Hil_S$ is a state of all $N$ physical qubits of the system, but it only effectively encodes some number $N_{c}<N$ of logical qubits, where $N_c=\log_2(\dim(\Hil_S))$. That is, the states $\ket{\psi}$ in $\Hil_S$ are $N$ qubit states constructed to protect the information contained in the state of only $N_c$ effective qubits through the means of redundancy. In the conventional error correcting nomenclature, the states $\ket{\psi}\in\Hil_S$ are often called code words.

Of course, now the natural question to ask is how many qubits $N_c$ are encoded by $\Hil_S$, or equivalently what is the dimension $\dim(\Hil_S)=2^{N_c}$ of the space. As a consequence of the stabilizer formalism, the dimension of the space $\Hil_S$ in general is given by $2^{N-G}$, where $N$ is the total number of qubits under consideration and $G$ is the number of independent stabilizer operators that generate $S$. In the toric code defined on a $k \times k$ lattice, $N=2k^2$ and $G=2(k^2-1)$ as calculated in the previous section. Then the dimension of the code space $\Hil_S$ is given by $2^{N-2(k^2-1)}=2^2$. Hence $\dim(\Hil_S)=4$ so that the code space $\Hil_S$ only encodes $N_c=2$ logical qubits. Intuitively, this result holds because each independent generator of $S$ can be thought of as halving the dimension of the global space $\Hil^N$. Note that this number $N_c$ does not depend on the characteristic length scale $k$ of the lattice. This implies that no matter how large the lattice is made in the toric code, the number of encoded qubits always remains the same. This will have interesting consequences when considering the error correcting abilities of the toric code for different lattice sizes $k$.
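
As a sanity check of this dimension count, one can build the projector onto the simultaneous $+1$ eigenspace for the smallest case $k=2$ and read off its trace; the following brute-force sketch (assuming NumPy, lattice conventions as in the earlier sketches) does exactly that.

\begin{verbatim}
# A brute-force check, for k = 2 only, that the simultaneous +1 eigenspace of all
# A_v and B_f is 4-dimensional, i.e. that exactly two logical qubits are encoded.
import numpy as np
from itertools import product

k, N = 2, 8
I2, sx, sz = np.eye(2), np.array([[0, 1], [1, 0]]), np.diag([1, -1])
h = lambda r, c: (r % k) * k + (c % k)
v = lambda r, c: k * k + (r % k) * k + (c % k)

def pauli_product(single, edges):
    op = np.array([[1.0]])
    for e in range(N):
        op = np.kron(op, single if e in edges else I2)
    return op

# Projector onto the joint +1 eigenspace: product of (I + A_v)/2 and (I + B_f)/2.
P = np.eye(2 ** N)
for r, c in product(range(k), repeat=2):
    A = pauli_product(sx, {h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)})
    B = pauli_product(sz, {h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)})
    P = P @ (np.eye(2 ** N) + A) / 2 @ (np.eye(2 ** N) + B) / 2

print(round(float(np.trace(P))))   # 4 = 2^2, so dim(H_S) = 4 and N_c = 2
\end{verbatim}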

After describing the properties of errors in the toric code and how the errors are to be corrected in the following sections, it will be interesting to revisit and analyze the code space $\Hil_S$ in order to understand the structure of the space more explicitly in a topological context. This will be done in a later section.


Errors and Anyons

Assume now that the qubits of the system $\Hil^N$ are prepared in an encoded state of $\Hil_S$ and no errors are present in the system. Then by construction, measuring the $A_v$ and $B_f$ operators will all yield eigenvalue $+1$ since $A_v\ket{\psi}=\ket{\psi}$ and $B_f\ket{\psi}=\ket{\psi}$ for $\ket{\psi}\in\Hil_S$. The violation of any of these conditions signals a possible error having occurred on the encoded state $\cw$. Thus, errors will be detected by performing syndrome measurements using the stabilizer generators $A_v$ and $B_f$.

In what follows, the schemes for detecting possible errors occurring on the encoded state $\ket{\psi}\in\Hil_S$ will initially be described on a case-by-case basis until enough intuition is gathered to describe a more general error correcting scheme. In particular, we will first analyze only $\pz$ errors, and then use this understanding to reason analogously about $\px$ errors. Further, some additional terminology will be introduced to ease the analysis and motivate further topological implications of the toric code. The new entities here will be quasiparticles that are commonly referred to in the literature as anyons. In essence, errors in the toric code manifest themselves as pairs of anyon excitations. Despite having a rich theory in themselves, the relevant details regarding anyon theory will not be presented here. The interested reader is invited to consult the following sources: \cite{Kitaev}, \cite{Rao}. Fortunately, because of the way the toric code model will be presented here, the unacquainted reader need not suffer from their ignorance.


The case of a single $\pz$ error

Suppose now that some $\sigma^z_j$ error occurs on qubit $j$ of the lattice (but the location is not known) so that the encoded state $\cw$ is transformed into the erred state $\ket{\xi}=\pz_j\cw$. In this case, any $B_f$ commutes with $\pz_j$ since $B_f$ is also expressed in terms of $\pz$ operators. Therefore, measuring any of the $B_f$ operators will also return a $+1$ eigenvalue since
\[
B_f\ket{\xi}=B_f\pz_j\cw=\pz_j B_f\cw=\pz_j\cw=\ket{\xi}.
\]

Any $A_v$ such that $v$ is not one of the two vertices at the ends of edge $j$ will trivially commute with $\pz_j$ by the mere fact that these operators act on different qubits of the lattice. However, there exist precisely two $A_v$ operators that will anti-commute with $\pz_j$: namely, the two $A_v$ that correspond to the two vertices at the ends of edge $j$. These anti-commute because the $A_v$ operators are expressed in terms of $\px$ operators and $\px$ and $\pz$ anti-commute. Hence, when the two $A_v$ operators that act on the qubit on edge $j$ are measured they will give an eigenvalue $-1$:
 \[
 A_v\ket{\xi}=A_v\pz_j\cw=-\pz_j A_v\cw=-\pz_j\cw=-\ket{\xi}.
 \]
The result of measuring these two $A_v$ operators gives information on exactly which qubit $j$ was inflicted with the $\pz$ error. Moreover, the error can be corrected by applying $\pz_j$ to the erred state which returns it back to the original state $\cw\in\Hil_S$.
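
In the parity picture, the syndrome computation for a single $\pz$ error amounts to counting overlaps between the error and each star; the toy sketch below (illustrative lattice conventions as in the earlier sketches) shows that exactly the two vertices bounding the erred edge are flagged.

\begin{verbatim}
# The expectation of A_v flips to -1 exactly when star(v) shares an odd number of
# edges with the error.
k = 4
h = lambda r, c: (r % k) * k + (c % k)
v = lambda r, c: k * k + (r % k) * k + (c % k)
star = lambda r, c: {h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)}

z_error = {h(1, 2)}                       # a single sigma^z error on one horizontal edge
flipped = {(r, c) for r in range(k) for c in range(k)
           if len(star(r, c) & z_error) % 2 == 1}
print(flipped)                            # {(1, 2), (1, 3)}: the two vertices bounding that edge
\end{verbatim}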

The pair of vertices at the ends of the edge $j$ corresponding to the locations of the $\pz_j$ error can be thought of as the locations of a pair of anyon excitations. We will now introduce the notation $z$ to represent a $z$-type anyon, and place two $z$ anyons at these two vertex locations as shown in the figure below. In this way, the syndrome measurements given by the $A_v$ operators can be thought of as detecting the presence of a $z$ anyon at a particular vertex as shown in the figure:

A pair of $z$-anyons are present on the two vertices that uniquely determine the location of the $\pz$ error.



The case of a single $\px$ error

Instead, suppose now that a single $\px_j$ error is inflicted on the qubit located on edge $j$, but the precise location of $j$ is not known. The detection and correction of such an error proceeds in a manner analogous to the $\pz_j$ error described in the previous section. This time, the encoded state is transformed into the erred state of the form $\ket{\xi}=\px_j\cw$. Since all the $A_v$ operators are also expressed in terms of $\px$, they all commute with $\px_j$. Thus measuring each $A_v$ on the lattice gives an eigenvalue $+1$, yielding no information pertaining to the location of the error since
\[
A_v\ket{\xi}=A_v\px_j\cw=\px_j A_v\cw=\px_j\cw=\ket{\xi}.
\]
On the contrary, every $B_f$ operator will commute with the $\px_j$ error with the exception of two: the two anti-commuting operators correspond to the two adjacent faces of the lattice that share the common edge $j$. Measuring these two operators leads to the detection of the $\px_j$ error due to a $-1$ eigenvalue:
\[
B_f\ket{\xi}=B_f\px_j\cw=-\px_j B_f\cw=-\px_j\cw=-\ket{\xi}.
\]
Since these two $B_f$ operators correspond to the only faces adjacent to the error, they uniquely determine the error's location, and the error can be corrected by simply applying another $\px_j$ operator to bring the erred state $\ket{\xi}$ back to an encoded state $\cw\in\Hil_S$. To signify the presence of this error, an $x$-type anyon denoted by $x$ will be introduced. This time, however, two $x$ anyons will be placed on the two faces adjacent to the location of the $\px_j$ error as shown in the figure below. Similar to how the $A_v$ operators are able to detect the presence of $z$ anyons representing $\pz$ errors, the presence of the two $x$ anyons representing $\px$ errors is detected instead by the $B_f$ operators.

A pair of $x$-anyons are present on the two faces that uniquely determine the location of the $\px$ error.



Strings of multiple $\pz$ errors

In the previous section, it was shown how a single $\pz$ and a single $\px$ error can be detected and corrected. This strategy will also work for correcting multiple $\pz$ and $\px$ errors provided that the errors act on qubits (edges) that are not adjacent. If there happen to be, say, multiple $\pz$ errors such that the locations of the errors form a chain as depicted in the figure shown below, then the error syndrome will be inherently ambiguous and more care must be taken in attempting to correct the errors.

In order to better reason about a chain of adjacent errors given by the path $\overline{p}$ on the lattice, introduce the following string operator
\[
S^z(\overline{p})=\PROD{j\in\overline{p}}{}\pz_j,
\]
which applies a $\pz_j$ along each edge $j$ that is a part of the path $\overline{p}$. The string $\St{z}{p}$ then represents the case where a sequence of adjacent $\pz$ errors have inflicted the qubits along the path $\overline{p}$ on the lattice as shown in the figure below. The case of a $\pz$ error occurring on a single qubit, as discussed in the previous section, is a special case of a string operator $\St{z}{p}$ where the path $\overline{p}$ simply consists of a single edge.

An error $\St{z}{p}$ represents $\pz$ errors applied to all edges along the path $\overline{p}$.


When the string $\St{z}{p}$ acts on an encoded state $\cw\in\Hil_S$ the erred state is of the form $\ket{\xi}=\St{z}{p}\cw$. It will now be shown that detecting the exact locations of all the errors induced by the string $\St{z}{p}$ using the stabilizer operators $A_v$ and $B_f$ is inherently ambiguous. Naturally, all $B_f$ commute with $\St{z}{p}$ since both consist only of $\pz$ operators, and so will not yield any useful syndrome information. One may be tempted to think that any $A_v$ operator whose vertex $v$ lies on any part of the path $\overline{p}$ will anti-commute with $\St{z}{p}$, thereby detecting the presence of all the $\pz$ errors, but this is not the case. Actually, any $A_v$ whose vertex $v$ lies on the path $\overline{p}$, with the exception of the two vertices at the endpoints of the path $\overline{p}$, will also commute with the string $\St{z}{p}$, because the path $\overline{p}$ will always pass through two edges in $star(v)$. For each of these two edges the $\px$ from $A_v$ anti-commutes with the error $\pz$ on that edge producing a $-1$, but since the $\px$ and $\pz$ acting on the other common edge also produce a $-1$ the effect of the two will cancel, yielding a trivial syndrome measurement. The only two $A_v$ that manage to detect an error produced by $\St{z}{p}$ via a $-1$ syndrome are the two $A_v$ corresponding to the endpoints of the path $\overline{p}$. These two $A_v$ anti-commute with $\St{z}{p}$ because $A_v$ and $\St{z}{p}$ only act on a single common edge in this case. Hence, there are two $z$ anyons that reside at the endpoints of $\overline{p}$ as shown in the figure below. This exemplifies the important property of the $z$ anyons that they will always appear in pairs whenever $\pz$ errors are present.

Two $z$-anyons are placed at the endpoints of the error string $\St{z}{p}$ to signify the vertex locations with nontrivial syndrome measurements.
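
The same parity computation applied to a whole string makes this explicit: only the endpoints of the path are flagged. A minimal sketch, with an arbitrarily chosen example path and the same illustrative lattice conventions as before, is shown below.

\begin{verbatim}
# A string of Z errors along a path only lights up the checks at its two endpoints,
# so the syndrome reveals the endpoints and nothing else.
k = 5
h = lambda r, c: (r % k) * k + (c % k)
v = lambda r, c: k * k + (r % k) * k + (c % k)
star = lambda r, c: {h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)}

# Z errors along the path of vertices (1,1) -> (1,2) -> (2,2) -> (2,3)
string = {h(1, 1), v(1, 2), h(2, 2)}
anyons = {(r, c) for r in range(k) for c in range(k)
          if len(star(r, c) & string) % 2 == 1}
print(anyons)      # {(1, 1), (2, 3)}: a pair of z anyons at the endpoints only
\end{verbatim}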


Since the syndrome measurements in the case of a string $\St{z}{p}$ of $\pz$ errors only gives information specifying the endpoints of the error string $\St{z}{p}$, how then are all the errors to be corrected? It would be ideal if the exact path defining the string $\St{z}{p}$ was known, because then the errors can be corrected by simply applying all the $\pz$ operators comprising the string. To understand how to overcome this obstacle consider the following.

 Since the exact form of the error string $\St{z}{p}$ is unknown, and only the endpoints of the string can be detected, the best one can do is to guess a path $\overline{p_g}$ that has the same endpoints as the actual error path $\overline{p}$. Thus, consider some string operator $\St{z}{p_g}$, where the path $\overline{p_g}$ has the same endpoints as the actual error path $\overline{p}$. Recall that these endpoints are just the locations at which the pair of $z$ anyons reside. The union of these two paths $\overline{p}$ and $\overline{p_g}$ (denoted by the concatenation $\overline{pp_g}$) forms a closed loop $\overline{L}=\overline{pp_g}$ on the lattice. The union of the two strings can then be written as $\St{z}{L}=\St{z}{p_g}\St{z}{p}$. If the string $\St{z}{p_g}$ is applied to the erred state $\ket{\xi}=\St{z}{p}\cw$, it is transformed to the state
\[
\ket{\xi'}=\St{z}{p_g}\ket{\xi}=\St{z}{p_g}\St{z}{p}\cw=\St{z}{L}\cw.
\]

  Depending on the structure of the loop $\overline{L}$ it may be the case that $\ket{\xi'}=\cw$, implying that the state is returned to its original encoded state. However, it can also be the case that $\ket{\xi'}=\ket{\psi'}\in\Hil_S$ where $\ket{\psi'}\neq\cw$. In this latter case, the erred state $\ket{\xi}$ is not returned to the original state $\cw\in\Hil_S$. Instead, it is transformed to some other encoded state $\ket{\psi'}\in\Hil_S$ and error recovery fails. When this occurs a logical operation has been performed on the encoded state---an undesired effect when the objective is to merely preserve the state $\cw$.


A loop detour on the torus
 This approach of guessing a string $\St{z}{p_g}$ in hopes of correcting some error string $\St{z}{p}$ by forming a loop $\overline{L}=\overline{pp_g}$ succeeds depending on the nature of the loop $\overline{L}$. Keeping in mind that the lattice under consideration is embedded on the surface of a torus, any loop on the lattice comes in one of two varieties. Whether or not the error is corrected depends on which type of loop is formed in $\St{z}{L}$.

 In general, a loop in a plane always partitions the planar region into two disjoint parts: an inside and an outside. Such loops can always be contracted to a point on the surface. A loop of this variety will be referred to as a trivial loop. On the surface of a torus, this property of a loop being able to contract to a point does not always hold. For instance, consider a closed loop that wraps around the hole of the torus. It is impossible to contract such a loop to a point on the surface of the torus. A non-contractible loop on the torus does not partition the surface of the torus into two disjoint regions, and thus does not have a well defined 'inside' or 'outside'. A loop that cannot be contracted to a single point will be called a nontrivial loop. For a torus, there are three such classes of nontrivial loops: ones that loop around the hole, ones that loop around the `equator' of the torus, and ones that loop around both the hole and the equator. Of course, nontrivial loops may loop multiple times around the torus in these various ways.

 In the lattice picture with periodic boundary conditions representing the surface of a torus, the nontrivial loops are ones that pass through the periodic boundary on any of the sides. A path on this lattice will form a trivial loop if it never passes through a boundary. If a path does pass through a boundary, it can still form a trivial loop provided that it passes back through that same boundary before joining itself.


Recovering from $\pz$ errors
 In regards to error recovery, consider some string loop $\St{z}{L}$ that has been constructed such that $\overline{L}$ is a trivial loop on the torus. In this case, the loop $\overline{L}$ forms the boundary of an inner region as shown in the figure below. In fact, this loop string $\St{z}{L}$ can be expressed completely in terms of certain $B_f$ operators as
 \[
 \St{z}{L}=\PROD{f\in inside(\overline{L})}{}B_f,
 \]
where the set $inside(\overline{L})$ consists of all the faces inside of the loop $\overline{L}$. This is true because in such a product of $B_f$ operators, any edge inside of the loop is acted on by two adjacent $B_f$ operators so that the action of the two $\pz$ operators on that edge is the identity map. The only edges in this product of $B_f$ operators that are acted on nontrivially are precisely those that comprise the loop $\overline{L}$. Now, since every $B_f$ is an element of the stabilizer $S$, the action of $\St{z}{L}$ on any state $\cw\in\Hil_S$ is trivial. Then if some erred state is of the form $\ket{\xi}=\St{z}{p}\cw$ after the encoded state $\cw$ is inflicted with the string $\St{z}{p}$, and another guessed string $\St{z}{p_g}$ is applied so that $\St{z}{L}$ (where $\overline{L}=\overline{pp_g}$) forms a trivial loop, the erred state $\ket{\xi}$ is transformed as
\[
\St{z}{p_g}\ket{\xi}=\St{z}{p_g}\St{z}{p}\cw=\St{z}{L}\cw=\PROD{f\in inside(\overline{L})}{}B_f\cw=\cw.
\]
This shows that error recovery is successful if the error string  is made into a trivial loop.


 A trivial loop can be expressed as the product of all face operators $B_f$ corresponding to the faces contained in the loop $\overline{L}$.
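
This cancellation of interior edges can also be verified directly in the support picture: multiplying the $B_f$ of all faces inside a region leaves exactly the edges on the region's boundary. A small sketch with an arbitrarily chosen rectangular region (same illustrative conventions as before) follows.

\begin{verbatim}
# Interior edges are shared by two faces inside the region and cancel in pairs,
# leaving only the Z string around the region's boundary.
k = 6
h = lambda r, c: (r % k) * k + (c % k)
v = lambda r, c: k * k + (r % k) * k + (c % k)
bound = lambda r, c: {h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)}

inside = [(r, c) for r in range(1, 4) for c in range(2, 5)]   # a 3 x 3 block of faces
support = set()
for r, c in inside:
    support ^= bound(r, c)       # symmetric difference = product of supports mod 2

print(len(support))              # 12: exactly the perimeter edges of the 3 x 3 region
\end{verbatim}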



On the other hand, in the presence of a $\St{z}{p}$ error string, suppose a string $\St{z}{p_g}$ is guessed so that the union of the two forms a loop string $\St{z}{L}$, such that $\overline{L}$ is a nontrivial loop. The loop $\overline{L}$ no longer partitions the surface into two disjoint regions. In this scenario, it is impossible to express $\St{z}{L}$ exclusively in terms of $B_f$ operators as was done in the case of a trivial loop. This means that $\St{z}{L}\notin S$, since it cannot be generated by elements of $S$. Yet, $\St{z}{L}$ still commutes with every element of $S$, because the loop $\overline{L}$ has no endpoints by definition. More explicitly, for any vertex $v$ on the loop, the loop will always pass through exactly two edges in $star(v)$, making $A_v$ commute with $\St{z}{L}$. Hence, no element of $S$ is able to detect any of the errors inflicted by $\St{z}{L}$. Therefore, despite attempting to correct the error on the state $\cw\in\Hil_S$, a logical operation is inadvertently applied to the state, transforming it to some other $\ket{\psi'}\in\Hil_S$, and error recovery fails.

An elegant interpretation of the error recovery procedure is provided in terms of anyons. When the qubits of the lattice encode some state $\cw\in\Hil_S$ which is error free, no anyons are present. As mentioned previously, the existence of an open error string $\St{z}{p}$ results in a pair of $z$ anyons at the string's endpoints. In creating a loop $\overline{L}=\overline{pp_g}$ by applying another string $\St{z}{p_g}$ that starts at one of the endpoints (where one of the $z$ anyons is located) and then joining the path to the other endpoint (where the other $z$ anyon is located), the anyon pair can be thought of as fusing or annihilating one another. In a sense, $\sigma^z$ errors manifest themselves as pairs of $z$ anyons that reside on the lattice's vertices, and can be made to move around the lattice by applying strings of $\pz$ operators. The objective of successful error recovery is not just to bring these pairs of $z$ anyons together so they annihilate and disappear, but to do so in such a way that no anyon has traversed a nontrivial loop around the torus, in order to prevent a logical operation from occurring on the encoded state.


The Dual Lattice

Up until now, the discussion has been mostly focused on $\pz$ errors and how to correct them. The focus will now shift to correcting $\px$ errors. Fortunately, the understanding and intuition developed in the previous sections naturally extend over to this case. This transition will be assisted by considering the dual lattice as an abstract aid to reason about $\px$ errors.

Relative to the actual square lattice under consideration, the dual lattice is the lattice that has vertices at the center of the faces of the main lattice, and has the center of its faces at the locations of the vertices of the main lattice. In the dual lattice, the qubits still reside on the edges and remain in the same location as the main lattice. Naturally, the dual of the dual lattice is just the main lattice again. The main lattice (in solid lines) and the dual lattice (in dashed lines) are depicted together in the  figures shown below.

The utility in considering the dual lattice comes from being able to reason about $\px$ errors analogously to the way $\pz$ errors were analyzed. Just as paths were considered on the main lattice, paths $\overline{p'}$ will be considered on the dual lattice and will be referred to as co-paths. When displaying figures in the rest of this post the dual lattice will not be depicted, and co-paths will be drawn as dashed lines as shown in the figure below. One useful consequence of considering the dual lattice is that the vertex operators $A_v$, represented in terms of $star(v)$ on the main lattice, can now be perceived as face operators on the dual lattice as shown in the figure:

The main lattice and the dual lattice are depicted with the dual lattice as dashed lines. A co-path $\overline{p'}$ of the dual lattice as shown. The vertex operators $A_v$ can be thought of as a face operator on the dual lattice.

A somewhat pointless, yet kind of neat, interpretation of how the lattice relates to its dual is the following. By rotating each edge about its center by an angle of $\pi/2$, the lattice can be transformed into its dual. With this in mind it becomes clearer how a vertex operator on the main lattice becomes a face operator on the dual lattice as animated in the figure below:

The square lattice can be transformed into its dual by rotating each edge about its center by an angle of $\pi/2$. This turns vertex operators on the main lattice into face operators on the dual lattice.




Correcting $\px$ Errors
To represent multiple $\px$ errors that are adjacent to each other, define the string operator
\[
\St{x}{p'}=\PROD{j\in\overline{p'}}{}\px_j,
\]
where the product ranges over the edges $j$ of the main lattice whose dual edges lie on the co-path $\overline{p'}$ of the dual lattice. For an open co-path $\overline{p'}$, a string $\St{x}{p'}$ manifests two $x$ anyons at its endpoints, which lie on the faces of the main lattice. Similar to the correction of $\pz$ errors via the fusion of $z$ anyons, the objective of error recovery for $\px$ errors will be to fuse pairs of $x$ anyons together so that they only form trivial co-loops on the dual lattice. If two $x$ anyons are fused after a nontrivial co-loop has been traversed, a logical operation will be performed on the encoded state instead.

Any trivial co-loop $\overline{L'}$ can be expressed in terms of $A_v$ operators as
\[
\St{x}{L'}=\PROD{v\in inside(L')}{}A_v,
\]
where now the product ranges over all vertices of the main lattice contained inside of the co-loop $\overline{L'}$. Thus, $\St{x}{L'}\in S$ and so acts trivially on any encoded state in $\Hil_S$. Moreover, any nontrivial co-loop $\overline{L'}$ lies outside of the stabilizer and so cannot be expressed in terms of the operators in $S$. However, for a nontrivial co-loop $\overline{L'}$ the operator $\St{x}{L'}$ still commutes with every element of the stabilizer $S$. Hence, an error of this form cannot be detected by any syndrome measurement. If nontrivial co-loops of $\px$ operators are inflicted on some encoded state to be protected, a logical operation will inevitably be applied and error recovery fails. The reason why this is all the case follows from arguments analogous to those described in the previous section.


General Errors

In the previous sections, $\pz$ and $\px$ errors were analyzed separately in special cases where only a single string of errors of either type was considered. There, we saw that strings of errors manifest themselves as pairs of anyons---$z$ anyons on the vertices of the lattice for $\pz$ errors and $x$ anyons on the faces of the lattice for $\px$ errors. In the most general setting, it is expected that multiple strings of errors of both types can occur on the lattice at any time. Moreover, a particular qubit of the lattice may suffer from both a $\pz$ and a $\px$ error, in which case a $\py$ error is produced. In this case, a $y$ anyon will be introduced and should be thought of as the quasiparticle associated to the pair of $z$ and $x$ anyons at that site.


When multiple strings of errors of the same type are present on the lattice, there is an inherent ambiguity in how the pairs of anyons should be fused in order to correct the errors, since the only information provided by a syndrome measurement is the locations of the anyons. Any such matching of the pairs will result in loops or co-loops (perhaps multiple) on the lattice or dual lattice. Actually, the choice made in this fusion process is somewhat arbitrary. For proper error recovery to take place, all that is necessary is for none of the anyons to traverse a nontrivial loop. However, ensuring that error recovery proceeds in this way is still difficult and there is no sure way to guarantee that all loops formed are trivial. A configuration of errors illustrating this general setting is shown in the figure below. In addition, two further figures are shown where guesses are made in an attempt to correct the errors. The lighter paths denote the actual strings of errors that were originally present on the lattice. The darker paths represent the paths that were guessed. In both cases, it can be seen that some of the loops formed are trivial loops, in which case those particular errors will be corrected, but there are also some nontrivial loops present which result in logical operations being applied to the encoded state.

 In the general case of multiple errors, all that is revealed from the syndrome measurements is the locations of anyon pairs.

 A possible correction attempt, where the lighter paths are the actual error strings, and the darker paths are the guessed paths. All errors in this case are corrected, except for the leftmost string of $\pz$ errors where a nontrivial loop has been created.

 Another possible correction attempt, where the lighter paths are the actual error strings, and the darker paths are the guessed paths. All $\px$ errors in this case are corrected, but the $\pz$ error strings have been formed into one nontrivial loop.




The problem of being able to correct multiple errors in the toric code thus reduces to another problem: that of matching anyon pairs so as to only form trivial loops. This is essentially a problem that requires additional post-processing utilizing the information given from the syndrome measurements. The most natural strategy is to guess strings of minimal length when fusing the anyon pairs. In the literature, such a strategy is referred to as minimum weight matching \cite{Dennis}. One reason why this strategy is justified is that if the probability of a single error occurring on a qubit is small, then it is unlikely that the error strings will be very long. It is more likely that, say, $m$ isolated errors occur at different locations of the lattice than it is for all $m$ errors to occur along a single string. If this is the case, then anyon pairs will tend to stay close to one another. Correcting the errors may then be achieved by fusing anyon pairs that are closest to each other.
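
A minimal sketch of this post-processing step is given below; it assumes the NetworkX library and pairs up a hypothetical set of anyon positions by a minimum weight matching, obtained here as a maximum weight matching on negated torus distances. It is only meant to illustrate the idea, not to serve as a production decoder.

\begin{verbatim}
# Pair up z anyons reported by the syndrome so that the total length of the
# correction strings is minimized (assuming NetworkX).
import networkx as nx

k = 8

def torus_dist(a, b):
    """Shortest path length between two vertices on the k x k periodic lattice."""
    dr, dc = abs(a[0] - b[0]), abs(a[1] - b[1])
    return min(dr, k - dr) + min(dc, k - dc)

# Suppose the syndrome measurements reported z anyons at these vertices.
anyons = [(0, 1), (0, 4), (5, 2), (6, 7)]

# Complete graph on the anyons with negated distances, so that a maximum weight
# matching of maximum cardinality is a minimum weight pairing.
G = nx.Graph()
for i in range(len(anyons)):
    for j in range(i + 1, len(anyons)):
        G.add_edge(i, j, weight=-torus_dist(anyons[i], anyons[j]))

pairing = nx.max_weight_matching(G, maxcardinality=True)
print([(anyons[i], anyons[j]) for i, j in pairing])
# Correction: apply a Z string along a shortest path joining each matched pair.
\end{verbatim}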

For some string of either $\pz$ or $\px$ errors, define the length of the string to be the number of edges in the path defining the string. Then for a $k\times k$ lattice, if some string were to form a nontrivial loop its length must be at least $k$. This number $k$ is the code distance of the toric code. This means that, at least in principle, any error string of length less than $k$ can be corrected provided that error recovery proceeds in an appropriate fashion. However, for errors of length $k$ or greater it may not be possible to recover from the error, and a logical operation being performed on the encoded state may be inevitable. Let $\lfloor c \rfloor$ denote the floor function defined as the largest integer less than or equal to $c$. If some error string has length at most $\lfloor \frac{k-1}{2}\rfloor$, then the error can always be corrected by joining the anyon pair through a minimal length path.

Despite the $k \times k$ toric code only ever being able to encode $2$ logical qubits in the code space regardless of the size $k$ of the lattice, the benefit of using a larger lattice is apparent. If the probability for an error happening on a single qubit is $p$, and different errors remain uncorrelated, then the probability of an error string of length $k$ occurring decreases exponentially in $k$. Thus, the larger $k$ is, the more unlikely it is for an error string to form a nontrivial loop.


Logical Operations

 In general error correcting schemes, a state $\cw\in\Hil_S$ of the code space is prepared and the main objective is to keep the state invariant. In the presence of errors the state can be mapped outside of the encoded space. This is not always the case however. It is possible that the original state gets mapped to another state $\ket{\psi'}\in\Hil_S$ such that $\cw\neq\ket{\psi'}$. In such an occurrence, it was said that the encoded state $\cw$ experiences a logical operation. In the toric code, these logical operations occurred when a nontrivial loop was executed on the lattice.

 To understand what kind of operations these logical operations on the encoded state correspond to, recall that there were two elementary types of nontrivial loops on the surface of the lattice. One of the nontrivial loops corresponds to a loop that wraps around the hole of the torus, and the other corresponds to a loop that traverses the `equator' of the torus. In the lattice picture, one of these loops passes through the 'north-south' boundary, and the other through the 'east-west' boundary. Likewise, there were two analogous nontrivial co-loops for the dual lattice.


  By representing the encoded state as $\cw=\ket{\overline{00}}\in\Hil_S$, where the label $\overline{00}$ has been used to emphasize that the state encodes two logical qubits, the nontrivial loops can be interpreted as logical $\pz$ and $\px$ operations. For a nontrivial string of $\pz$ operators looping in the `north-south' direction, write $\overline{Z_1}$ to represent a logical $\pz$ operation applied to the first logical qubit of $\cw$. The nontrivial loop in the 'east-west' direction then corresponds to a logical $\pz$ operation applied to the second logical qubit of $\cw$, denoted by $\overline{Z_2}$. Likewise, the two nontrivial co-loops of $\px$ errors on the dual lattice correspond to logical $\px$ operations denoted by $\overline{X_1}$ and $\overline{X_2}$. It does not matter how any of these nontrivial loops are formed, in the sense that the exact path described by a particular string operator forming the nontrivial loop is irrelevant. All that matters is which topological class the loop falls in. If a certain loop is traversed $n$ times, then the logical operation executed is one of $\overline{Z_i}^n$ or $\overline{X_i}^n$ depending on the direction and whether the loop was on the lattice or dual lattice, respectively. If a loop traverses multiple topologically distinct nontrivial loops, then the corresponding logical operation that is performed is the composition of the respective logical operations from each loop in the order that the loops were traversed.
 
 In terms of logical operations on the encoded qubits, all that can be performed are logical Pauli operations on the two encoded qubits. Hence, universal quantum computation cannot be performed using only these limited operations, since arbitrary two qubit gates cannot be realized. In particular, no two qubit entangling gate can be performed on the logical qubits, since only tensor products of Pauli gates can be executed by the logical operations $\overline{Z_i}$ and $\overline{X_i}$.
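
The defining algebraic properties of these logical operators are easy to verify in the parity picture: a noncontractible $Z$ loop and a noncontractible $X$ co-loop each commute with every stabilizer generator while anticommuting with each other. The sketch below checks this for one conjugate pair, using the same illustrative edge conventions as the earlier sketches.

\begin{verbatim}
# X-type and Z-type Pauli operators commute iff their supports overlap on an even
# number of edges; same-type operators always commute, so only the mixed-type
# overlaps need checking.
k = 5
h = lambda r, c: (r % k) * k + (c % k)
v = lambda r, c: k * k + (r % k) * k + (c % k)
star  = lambda r, c: {h(r, c), h(r, c - 1), v(r, c), v(r - 1, c)}
bound = lambda r, c: {h(r, c), h(r + 1, c), v(r, c), v(r, c + 1)}

logical_Z = {v(r, 0) for r in range(k)}   # Z string winding 'north-south'
logical_X = {v(0, c) for c in range(k)}   # X string winding 'east-west' on the dual lattice

cells = [(r, c) for r in range(k) for c in range(k)]
assert all(len(logical_Z & star(r, c)) % 2 == 0 for r, c in cells)    # commutes with every A_v
assert all(len(logical_X & bound(r, c)) % 2 == 0 for r, c in cells)   # commutes with every B_f
assert len(logical_Z & logical_X) % 2 == 1                            # the logical pair anticommutes
\end{verbatim}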


Generalizations and Conclusion
In summary, the $k\times k$ toric code uses $2k^2$ physical qubits placed on the edges of the square lattice to effectively encode $2$ logical qubits. Its main function is to serve as a quantum memory that protects the state of two qubits from Pauli errors. Errors on the lattice manifest themselves as pairs of anyon excitations. The objective of error correction is to fuse the anyons together while forming only trivial loops with the anyons in the process. If nontrivial loops are formed, then a logical operation is performed on the encoded state, corresponding to logical $\px$ and $\pz$ operations or combinations thereof. The distance of the code was shown to be $k$, which corresponds to the shortest length path needed to form a nontrivial loop on the lattice. The larger the lattice dimension $k$, the more unlikely it is to form a nontrivial loop, and thus the chances of proper error recovery increase.

It is not necessary that the two dimensions of the lattice be equal. In the case of a $k\times k'$ lattice, with $k\neq k'$, the code distance will be the minimum of $k$ and $k'$, since this corresponds to the shortest possible length an error string needs to have before it can form a nontrivial loop. The limitation of only being able to encode two logical qubits can also be lifted if the lattice is embedded on a higher genus surface. For a surface of genus $g$ the number of encoded qubits is $2g$ (so the code space has dimension $4^g$), since each additional handle contributes two new independent classes of nontrivial loops. It is worth mentioning that it is not absolutely essential to embed the lattice on a surface with nonzero genus. Remaining on the plane, the toric code can be generalized by redefining the boundaries to have appropriate properties. In this scenario, more logical qubits can be encoded by adding "holes" to the planar region by removing qubits and stabilizers from the region. The more holes are present, the more logical qubits can be encoded. Such generalizations are considered in \cite{Dennis}.

In the toric code model introduced here, the permissible logical operations are not rich enough to allow for universal quantum computation. The toric code model is captured more generally by the quantum double models defined for any finite group $G$. Abstractly, these use the mathematical theory of group representations of quasi-triangular Hopf algebras to define more general anyon dynamics. The toric code model presented here is just the special case of a quantum double over the cyclic group of order $2$. Generalizations in this regard have been made in the field, and can be found in \cite{Shor}. In order to achieve universal quantum computation the group under consideration must be non-abelian and non-solvable \cite{Mochon}. Even then the complexity of the scenario increases significantly, since the smallest group satisfying this property is the alternating group $A_5$, consisting of all even permutations on five letters, which has order $60$.

The Channel Capacity of the Quantum Erasure Channel

Abstract

The quantum channel capacity $Q(\Phi)$ of a quantum channel $\Phi$ is defined as the maximum asymptotic rate at which quantum information can be reliably sent through the channel. In this post, the channel capacity $Q(\Er_p)$ of the erasure channel $\Er_p$ is derived, where the erasure channel $\Er_p$ is defined as the quantum operation on a qubit which erases the state with probability $p$ and leaves it unchanged otherwise. It is proved that $Q(\Er_p)=1-2p$ for $0\leq p\leq 1/2$ and that $Q(\Er_p)=0$ for $1/2\leq p\leq 1$. The latter case is proved via a contradiction with the no-cloning theorem, and the former is proved through the existence of a quantum error correcting code in the stabilizer formalism which achieves the optimal rate.


Introduction



The theory of quantum information is ultimately concerned with defining and understanding the properties and essence of quantum systems, how they transform, and how they can be used as a resource in various contexts. Moreover, it is of great theoretical interest and practical importance to obtain quantitative measures of what can or cannot be accomplished in the quantum realm. Perhaps the most prevalent feature of quantum systems is their inherent fragility and high sensitivity to unwanted noise. Unlike the seemingly robust nature of the classical world, quantum systems are vulnerable to phenomena which seem to "destroy" their very own quantum mechanical nature.


With this in mind, consider the task of trying to send a quantum state to another party through a quantum channel capable of transmitting quantum information. Ideally, we would hope for perfect transmitting capabilities of such a channel, but it is more realistic to acknowledge some likelihood of error in which the quantum system is subjected to noise, that is, unwanted transformations of its state, before being received at the other end. Equivalently, we may be interested in merely preserving the state of a quantum system over time as opposed to spatially relocating the system. Even then it is possible that the system may experience undesired errors to its state. Thus, it is worthwhile to understand the precise conditions under which a quantum state can be reasonably recovered when transformed under a noisy quantum channel. The channel capacity of a quantum channel gives precise bounds on the rate at which quantum information can be reliably transferred through the channel and still be recovered with sufficiently high fidelity. In order to successfully transfer information through a noisy quantum channel it is beneficial, and often necessary, to encode the desired state into another quantum system, which is then subjected to errors and subsequently decoded back to the original state that was intended to be sent. This latter scheme is the domain of quantum error correcting codes. This reveals an intimate connection between channel capacities and error correction: the channel capacity sets fundamental limits on the existence of successful error correcting codes which attempt to correct errors induced by that channel.

In what follows, the notion of the channel capacity of a channel will be made more precise and formal. With this framework the channel capacity of the erasure channel will be rigorously calculated. In short, the erasure channel can simply be thought of as a channel which "erases" or "destroys" a quantum bit passing through it with some probability $p$, and leaves the quantum bit unchanged otherwise. The erasure channel, although relatively primitive and crude in its action, characterizes the natural phenomenon of information being lost in some process. This differs from the related depolarizing channel, which instead of completely destroying the state effectively randomizes it. One crucial difference worth mentioning is that in the context of the depolarizing channel, the sender or receiver may not know whether the system being transmitted has been randomized. In the case of the erasure channel, we will assume that at least the receiver of the system is aware of the existence of an erasure when it occurs.

The proof for obtaining the channel capacity will proceed in three steps. For an erasure channel with a probability of erasure given by $p>1/2$, it will be shown that the channel capacity must be $Q=0$ in order to avoid contradiction with a fundamental result of quantum theory concerning the inability to clone arbitrary quantum states---the no-cloning theorem. Then in the case $p\leq 1/2$, it will first be shown that the channel capacity must be bounded above by $Q\leq1-2p$ due to a property of channel capacities that prevents the total number of qubits transmitted from being super-additive when a noisy and a noiseless channel are used together \cite{Ben1}. Finally, it will be argued through the construction of random stabilizer codes \cite{Got} that error correcting codes do indeed exist which achieve the channel capacity $Q=1-2p$ for the case $p\leq 1/2$. Interestingly, our ability to obtain exact bounds for the erasure channel is in contrast with the situation for the depolarizing channel (and most other channels), for which only upper and lower bounds on the capacity are known.

As a note on the required background knowledge needed to follow this proof, it will be assumed that the reader is already aware of some basic concepts and axioms in the theory of quantum computation and information \cite{NC} and possesses a fair degree of mathematical maturity---namely, a working knowledge of linear algebra together with the basics of finite group theory will be assumed. Prior knowledge on the theory of error correction and the stabilizer formalism will be beneficial.


Notational Remarks

 For some Hilbert space $\Hil$, let $D(\Hil)$ denote the space of linear operators on $\Hil$ that are valid quantum density states (positive operators with unit trace). Moreover, let $C(\Hil_1,\Hil_2)$ denote the space of valid quantum channels (completely-positive and trace preserving maps) that transform states $\rho_1\in D(\Hil_1)$ to states $\rho_2\in D(\Hil_2)$. For convenience, define $C(\Hil):=C(\Hil,\Hil)$ for the case where the quantum channels map states from a particular space $\Hil$ to the same space $\Hil$.


Quantum Error Correcting Codes


 To formally define the channel capacity $Q(\Phi)$ of a quantum channel $\Phi$, we first need to formalize the notion of a quantum error correcting code. The objective of quantum error correction is to encode a quantum state $\rho$ into a particular subspace $\Hil_{code}\subset\Hil$ of a larger Hilbert space $\Hil$ by means of some quantum operation $U_c: \rho\mapsto \rho_c$, where $\rho_c\in D(\Hil_{code})$. The operation $U_c$ is called the \emph{encoding operation}, the state $U_c(\rho)=\rho_c$ is called the \emph{encoded state} of $\rho$, and the space $\Hil_{code}$ is often referred to as the \emph{code space} of a quantum error correcting code. States of $\Hil_{code}$ should have the desired property that they can be recovered when subjected to errors. More formally, let $\Phi\in C(\Hil)$ be some quantum channel of interest that represents the action of a possible error affecting the state $\rho_c$, and let $\Phi(\rho_c)=\rho_e$. Along with the existence of an encoding operation $U_c$, an error correcting code also has a \emph{decoding operation} given by $U_d$. This operation serves the purpose of decoding the possibly erred state $\rho_e$, and should ideally satisfy $U_d(\rho_e)=\rho$. An error correcting protocol as described here is summarized schematically in the figure below.

A circuit representing the general error correction scheme.


In a less idealistic scenario, when $U_d(\rho_e)=\rho'\neq\rho$, we may be satisfied with having the resulting state $\rho' $ being approximately equal to $\rho$ through some measure of closeness. One such useful measure is given by the \emph{fidelity} $F(\rho,\rho')$ of two states $\rho,\rho'\in D(\Hil)$ of some Hilbert space $\Hil$, where
\[
F(\rho,\rho')=Tr(\sqrt{\rho^{\frac{1}{2}}\rho'\rho^{\frac{1}{2}}}),
\]
and $Tr(\cdot)$ denotes the trace operation. The fidelity of two states satisfies $0\leq F(\rho,\rho')\leq1$, with larger values of $F(\rho,\rho')$ implying that the states $\rho$ and $\rho'$ are closer.
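
For concreteness, the fidelity can be evaluated numerically; the short sketch below (assuming NumPy and SciPy) applies the formula above to two simple single-qubit states chosen for illustration.

\begin{verbatim}
# Direct numerical evaluation of F(rho, sigma) = Tr sqrt( sqrt(rho) sigma sqrt(rho) ).
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s)))

rho   = np.diag([0.6, 0.4]).astype(complex)                 # a full-rank mixed qubit state
sigma = np.array([[0.5, 0.3], [0.3, 0.5]], dtype=complex)   # another mixed qubit state
print(fidelity(rho, rho))      # 1.0: identical states
print(fidelity(rho, sigma))    # strictly between 0 and 1 for distinct mixed states
\end{verbatim}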


Noisy Channel Coding and Channel Capacity

Let $\Phi\in C(\Hil)$ be some quantum channel, and consider its $n$-fold extension
\[
\Phi^{\otimes n}=\Phi\otimes\dots\otimes\Phi \in C(\Hil^{\otimes n}),
\]
which essentially represents the channel $\Phi$ acting individually on each of the  $n$ copies of $\Hil$. Now, suppose there is some error correcting code with code space $\Hil_{code}\subset\Hil$. For noisy channel coding \cite{Nielsen}, we consider the $n$-fold code space $\Hil_{code}^{\otimes n}\subset \Hil^{\otimes n}$, and are interested in source states $\rho\in D(\Hil)$ such that there exists an encoding operation $\overline{U}_c$ that takes a source state and maps it to an encoded state $\overline{U}_c(\rho)=\rho_c\in D(\Hil_{code}^{\otimes n})$, which is then acted on by $\Phi^{\otimes n}$ and subsequently decoded by a decoding operation $\overline{U}_d$ yielding a state $\rho'\in D(\Hil)$ close to the original source state $\rho$ in regards to the fidelity $F(\rho,\rho')$.

 Suppose that the dimension of the space $\Hil$ is $2^m$ so that $\rho\in D(\Hil)$ is a state of $m$ qubits. In this context, the error $\Phi$ can be thought of as operating on $n$ blocks, each consisting of $m$ qubits, as illustrated in the circuit below. Thus, this coding model can be thought of as using $n$ applications of the channel $\Phi$ to successfully send $m$ qubits through the channel, provided that encoding and decoding operations $\overline{U}_c$ and $\overline{U}_d$ exist which preserve the original source state $\rho$.

A circuit representing a particular noisy channel coding scheme, where an input state $\rho$ of some number $m$ of qubits is acted on by $4$ blocks consisting of $m$ qubits each.

  The quantity $R=m/n$ is defined as the rate of the code. In this way, the channel capacity $Q(\Phi)$ is defined as the maximum asymptotic rate at which information can be reliably sent through the channel $\Phi$. More precisely, the channel capacity is the largest value $Q(\Phi)$ such that for any $R\leq Q(\Phi)$ and $\epsilon>0$, there exists an error correcting code $\Hil_{code}^{\otimes n}$ with rate at least $R$ where for any $\rho$ with $\overline{U}_c(\rho)=\rho_c\in D(\Hil_{code}^{\otimes n})$, the resulting recovered state $\rho'$ satisfies $F(\rho,\rho')\geq 1-\epsilon$.



Modelling the Erasure Channel

 Let $\ket{0}, \ket{1} \in \Hil^2$ denote the standard computational basis states of a Hilbert space $\Hil^2$ of dimension $2$ representing a qubit. To model an erasure we will embed the states $\ket{0}$ and $\ket{1}$ into a higher dimensional space and introduce a third basis state $\ket{2}\in\Hil^3$ that is orthogonal to both $\ket{0}$ and $\ket{1}$ so that the three states together span $\Hil^3$. This third state $\ket{2}$ then serves the purpose of representing an erasure of the qubit state.

 Now consider appending to the state of a qubit an ancillary register belonging to some Hilbert space $\Hil_E$ that represents the ``environment" of some appropriate dimension (which may depend on the context). Then the action of the erasure channel, denoted by $\mathcal{E}_p$, on the basis states of the qubit is defined as
 \begin{align*}
 \ket{0}\otimes\ket{0}_E& \ \ \overset{\mathcal{E}_p}\mapsto \ \ \sqrt{1-p}\ket{0}\otimes\ket{0}_E+\sqrt{p}\ket{2}\otimes\ket{1}_E \\
  \ket{1}\otimes\ket{0}_E& \ \ \overset{\mathcal{E}_p}\mapsto \ \ \sqrt{1-p}\ket{1}\otimes\ket{0}_E+\sqrt{p}\ket{2}\otimes\ket{2}_E,
 \end{align*}
 where the states $\ket{0}_E,\ket{1}_E,\ket{2}_E\in\Hil_E$ are orthogonal states of the environment, and the map $\mathcal{E}_p$ has been parametrized by some real number $0\leq p\leq1$ that gives the probability that  the qubit will be erased.
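As a sanity check, the map above is an isometry from the qubit into the enlarged system and environment together. A minimal numpy sketch, where the environment is (arbitrarily) taken to be three-dimensional and $p=0.3$ is just an example value:

import numpy as np

p = 0.3  # example erasure probability

def ket(i, dim):
    v = np.zeros(dim)
    v[i] = 1.0
    return v

# Columns of V are the images of |0> and |1> under the map defined above,
# viewed as vectors in (qutrit system) x (three-dimensional environment).
V = np.zeros((9, 2))
V[:, 0] = np.sqrt(1 - p) * np.kron(ket(0, 3), ket(0, 3)) + np.sqrt(p) * np.kron(ket(2, 3), ket(1, 3))
V[:, 1] = np.sqrt(1 - p) * np.kron(ket(1, 3), ket(0, 3)) + np.sqrt(p) * np.kron(ket(2, 3), ket(2, 3))

print(np.allclose(V.conj().T @ V, np.eye(2)))  # True: the map preserves inner products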


The Channel Capacity of the Erasure Channel


 To calculate the channel capacity $Q(\mathcal{E}_p)$ of the erasure channel, we must consider the scenario described in the previous section on noisy channel coding, where now the channel of interest is $\Phi=\mathcal{E}_p$. In what follows we will prove the following theorem:


Theorem: The channel capacity of the erasure channel.
 For the erasure channel $\mathcal{E}_p$, which erases the qubit state with probability $p$ and leaves the state  intact with probability $1-p$, the channel capacity $Q(\mathcal{E}_p)$ is given by

$$ Q(\mathcal{E}_p)=\left\{
 \begin{array}{ll}
  1-2p, &   0\leq p\leq\frac{1}{2}\\
 0, &   \frac{1}{2}\leq p\leq 1
 \end{array}\right. $$


 The proof of this theorem will proceed in three parts. First, for the case $1/2\leq p \leq 1$, we will appeal to the no-cloning theorem \cite{NC} to show that no error correcting procedure can exist which allows recovery of states sent through the channel $\Er_p$ with arbitrarily high fidelity. Then for the case $0\leq p \leq 1/2$, the bound $Q(\Er_p)\leq 1-2p$ will be established by arguing that the number of qubits that can be transmitted jointly through a noisy channel and a perfectly noiseless channel is sub-additive \cite{Ben1}. Lastly, using the stabilizer formalism of quantum error correcting codes \cite{Got}, it will be shown that an error correcting code must exist which can actually achieve the rate $1-2p$.


The $p\geq 1/2$ Case



 We now argue that $Q(\Er_p)=0$ for $p\geq 1/2$, i.e. that it is impossible to reliably transmit quantum information through the erasure channel $\Er_p$ in this regime. In other words, there cannot exist any error correcting procedure that recovers the input state $\rho$ with fidelity arbitrarily close to $1$. To see why this is the case, we will appeal to the \emph{no-cloning} theorem, which states that it is impossible to have a general quantum operation which has the ability to ``clone", or make copies, of arbitrary quantum states \cite{NC}. Thus, we will argue that if there did exist an error correcting procedure with valid encoding and decoding operations which effectively preserved the initial state $\rho$, then such a procedure would also provide a way of cloning these quantum states and therefore contradict the no-cloning theorem.

 For the sake of argument, it is worth anthropomorphizing the situation where quantum states pass through an erasure channel. Consider three parties: Alice, Bob, and Charlie. The erasure channel can be effectively realized through a particular interaction of these parties as follows. Suppose Alice has $n$ qubits and would like to send these qubits to Bob. Moreover, suppose Charlie has the ability to intercept the qubits Alice is trying to send to Bob, and also the ability to send erased states $\ket{2}$ to Bob. If Charlie intercepts Alice's qubit with probability $p$, in which case he sends Bob the erased state, and with probability $1-p$ Charlie allows Alice's qubit to pass on to Bob, then for large $n$ Bob will have received approximately $(1-p)n$ qubits and Charlie will be in possession of $pn$ qubits. For $p\geq 1/2$, Charlie will end up with at least as many qubits as Bob, and if it were possible for Bob to recover the initial states of the qubits from some error correcting procedure then surely Charlie could as well using the same procedure. Therefore, considering both of their recovered states together would constitute two copies of the original state. However, by the no-cloning theorem, such a procedure cannot exist. Hence, no recovery procedure can exist in this case, which implies that the channel capacity $Q(\Er_p)$ for $p\geq 1/2$ must be $0$.


Upper bounding $Q(\Er_p)$ for $0\leq p \leq1/2$ 

 Consider some imperfect channel $\Phi$ with channel capacity $0<Q(\Phi)<1$ and a related perfect channel $\Phi_I$ with channel capacity $Q(\Phi_I)=1$. Then if the perfect channel is used $a$ times, $aQ(\Phi_I)=a$ qubits can be transmitted as expected, whereas if the imperfect channel is used $b$ times the number of qubits that can be transmitted is $bQ(\Phi)<b$. Now suppose the perfect and imperfect channels are used jointly, $a$ and $b$ times respectively, and let $c$ denote the number of qubits that can be transmitted when both channels are used. The joint channel is said to be additive if $c=a+bQ(\Phi)$, sub-additive if $c\leq a+bQ(\Phi)$, and super-additive if $c> a+bQ(\Phi)$.

It will now be shown that the number of qubits that can be jointly transmitted through a perfect and an imperfect channel, as described above, must be sub-additive. The proof proceeds in a manner similar to that presented in \cite{Ben1}. For the sake of contradiction, suppose that the number of jointly transmitted qubits is super-additive, so that $c>a+bQ(\Phi)$ when the perfect channel is used $a$ times and the imperfect channel is used $b$ times. Observe that the perfect channel $\Phi_I$ being used $a$ times can be simulated by its imperfect counterpart $\Phi$ with channel capacity $Q(\Phi)$ by using the imperfect channel $d$ times, where $dQ(\Phi)=a$. Thus, letting $Q'(\Phi)$ denote the rate achieved by this combined scheme, which uses only the channel $\Phi$, it follows that
\[
Q'(\Phi)=\frac{c}{d+b}>\frac{dQ(\Phi)+bQ(\Phi)}{d+b}=Q(\Phi),
\]
where the inequality follows from the assumption $c>a+bQ(\Phi)=dQ(\Phi)+bQ(\Phi)$. However, only the original channel $\Phi$ was used in this scheme, so its rate cannot exceed the capacity $Q(\Phi)$; the conclusion $Q'(\Phi)>Q(\Phi)$ is therefore a contradiction. It must then be the case that the number of qubits jointly transmitted through the perfect and noisy channels is sub-additive, implying that $c\leq a +bQ(\Phi)$.

 With this result in mind, consider $p_2\leq p_1$ and suppose $n$ qubits are to be sent, where it is known ahead of time that $n(1-p_2/p_1)$ of the qubits will arrive intact and each of the remaining $n(p_2/p_1)$ qubits will be erased with probability $p_1$. This scenario can be thought of as a perfect channel which transmits the $n(1-p_2/p_1)$ qubits and an imperfect erasure channel which is used $n(p_2/p_1)$ times with erasure probability $p_1$ and corresponding channel capacity $Q(\Er_{p_1})$. Then the joint channel will effectively transmit $c$ qubits where,
 \[
 c\leq n\left(1-\frac{p_2}{p_1}\right)+n\left(\frac{p_2}{p_1}\right)Q(\Er_{p_1}),
 \]
 due to the sub-additivity result just shown. Then the rate $R$ of this channel is given by
 \[
 R=\frac{c}{n}\leq 1-\frac{p_2}{p_1}+\frac{p_2}{p_1}Q(\Er_{p_1}).
 \]

 Now for sufficiently large $n$, the number of qubits erased in this scenario is approximately $n(p_2/p_1)\cdot p_1=np_2$, so the channel behaves like the erasure channel $\Er_{p_2}$ with erasure probability $p_2$. In this case, using the inequality just derived, it follows that the channel capacity $Q(\Er_{p_2})$ must satisfy
 \[
Q(\Er_{p_2})\leq 1-\frac{p_2}{p_1}+\frac{p_2}{p_1}Q(\Er_{p_1}),
 \]
 which holds for all $p_2\leq p_1$. It was shown in the previous section that $Q(\Er_{1/2})=0$ for the erasure channel with erasure probability $1/2$. Furthermore, it trivially holds that $Q(\Er_{0})=1$, since this corresponds to the special case of a perfect channel where no erasures take place. Thus, for all $p\leq p_1=1/2$, we arrive at
 \[
 Q(\Er_{p})\leq 1-\frac{p}{1/2}+\frac{p}{1/2}Q(\Er_{1/2})=1-2p,
 \]
 giving an upper bound on the channel capacity $Q(\Er_{p})$ in the case where $p\leq 1/2$.
 

The existence of a code achieving the upper bound on $Q(\Er_p)$

It has now been shown that the quantum channel capacity of the erasure channel satisfies $Q(\Er_p)\leq 1-2p$ for erasure probability $0\leq p \leq 1/2$. Therefore, if it can also be shown that an error correcting code exists that allows for successful recovery of the initial source state, it must necessarily follow that $Q(\Er_p)= 1-2p$ in the case where $0\leq p \leq 1/2$. Although an explicit code will not be constructed, it will now be shown that a stabilizer code does indeed exist which is able to recover the state through counting arguments that exploit the properties of the stabilizer formalism. The existence proof constructed here borrows ideas deployed by Gottesman for a proof of the same claim \cite{Got}. The unacquainted reader may feel free to consult this post for relevant properties pertaining to stabilizer codes needed for this analysis.

Suppose the error correcting protocol in this case takes place over a Hilbert space of $n$ qubits, which encodes $k$ qubits. Thus, consider a random stabilizer group $S\subset P_n$, where $P_n$ is the Pauli group on $n$ qubits and $S$ is specified by $n-k$ generators. The operators contained in the stabilizer $S$ then preserve the code space $\Hil_{code}$ encoding $k$ qubits. These $n-k$ generators of $S$ are chosen at random from the $4^n$ possible choices of Pauli operators in $P_n$ (up to sign), with the conditions that each new generator commutes with the previously chosen ones and is independent of them, so that together they form a minimal generating set.

Now, consider some state of $\Hil_{code}\subset \Hil^{2^n}$ consisting of $n$ qubits which is sent through the erasure channel with probability of erasure $p$. Then for large $n$, the approximate number of qubits that will be erased is $np$. Therefore, in order to attempt to correct these $np$ erased qubits, a syndrome measurement must be done by measuring the $n-k$ generators of $S$. Ideally, this syndrome will yield information for uniquely determining the operator $E\in P_n$ that represents the error that was inflicted on the qubits. By applying $E^\dagger\in P_n$, the error can be corrected, returning the corrupted state to its original state with high fidelity.

However, error recovery may fail if two distinct errors $E_1$ and $E_2$ have the same syndrome measurement. It is important to consider the likelihood $P_{fail}$ of failing to recover the state in this case. In general, for some non-trivial error $E\in P_n$, a randomly chosen stabilizer generator either commutes or anti-commutes with $E$, each occurring with probability $1/2$. Since there are $n-k$ generators, the probability of two operators in $P_n$ having the same syndrome is $(1/2)^{n-k}$. Moreover, since there are $4^{pn}$ elements of $P_n$ that have support on the $pn$ erased qubits, a union bound shows that the probability that two distinct errors have the same syndrome measurement is bounded by
\[
P_{fail}\leq4^{pn}\left(\frac{1}{2}\right)^{n-k}=2^{-n(1-2p-R)},
\]
where $R=k/n$ is the rate. Provided that the rate satisfies $R<1-2p$, the right-hand side converges to $0$ as $n$ gets arbitrarily large. This implies that the probability of failing to recover the state approaches zero in this limit, so that such random stabilizer codes succeed in correcting the erasures for any $0\leq p< 1/2$ at rates approaching $1-2p$. Hence, since this rate of $1-2p$ coincides with the previously derived upper bound on the channel capacity, it must be the case that  $Q(\Er_p)=1-2p$.
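To see how quickly the bound decays, the right-hand side $2^{-n(1-2p-R)}$ can be evaluated for a rate just below $1-2p$; the parameter values below are arbitrary illustrative choices.

def p_fail_bound(n, p, R):
    # Upper bound 2^{-n(1 - 2p - R)} on the failure probability of a random
    # stabilizer code of rate R used on the erasure channel with erasure probability p.
    return 2.0 ** (-n * (1 - 2 * p - R))

# Example: p = 0.1, so 1 - 2p = 0.8, and a rate R = 0.7 just below it.
for n in (10, 100, 1000):
    print(n, p_fail_bound(n, p=0.1, R=0.7))
# 10 -> 0.5, 100 -> ~9.8e-4, 1000 -> ~7.9e-31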


Conclusion

In this post it was proved that the quantum channel capacity $Q(\Er_p)$ for the erasure channel $\Er_p$ satisfies $Q(\Er_p)=1-2p$ for $0\leq p\leq 1/2$ and $Q(\Er_p)=0$ for $1/2\leq p \leq 1$. It was argued in the latter case that the channel capacity must be $0$ in order to prevent a violation of the no-cloning theorem. The former case first involved finding an upper bound of $1-2p$ on the channel capacity, and then it was argued through the existence of random stabilizer codes that it is always possible to recover the state with high fidelity at rates approaching this bound, which implies that the upper bound on the rate of the channel can actually be achieved. The context of the erasure channel provides a rare instance where the quantum channel capacity of a channel can be computed exactly. It seems that for most other channels of interest only upper and lower bounds have been found. There has been, and continues to be, much research on open questions pertaining to these matters, with many interesting qualitative and quantitative results.

The Stabilizer Formalism

Consider the single qubit unitary matrices commonly referred to as the Pauli matrices defined as:

$$I=\begin{pmatrix}1&0 \\ 0 &1\end{pmatrix},
X=\begin{pmatrix}0&1 \\ 1&0 \end{pmatrix},
Y=\begin{pmatrix}0&1 \\ -1&0 \end{pmatrix},
Z=\begin{pmatrix}1&0 \\ 0&-1 \end{pmatrix}.$$


 It is worth noting that the Pauli $Y$ operator is more commonly defined as the Hermitian operator $\sigma_Y=-iY$ (so that $Y=i\sigma_Y$), but the definition of $Y$ introduced here will be convenient for our purposes. These operators satisfy the following properties:
 \begin{align*}
 X^2&=Z^2=-Y^2=I, \\
YX&=-XY=Z, \\
ZY &=-YZ=X, \\
 ZX &=-XZ=Y.
\end{align*}
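Since these relations are fixed entirely by the matrix definitions above (with the convention $Y=i\sigma_Y$ used here), they can be verified directly; a quick numerical check:

import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, 1], [-1, 0]])   # the convention used in this post: Y = i * sigma_Y
Z = np.array([[1, 0], [0, -1]])

assert np.allclose(X @ X, I) and np.allclose(Z @ Z, I) and np.allclose(Y @ Y, -I)
assert np.allclose(Y @ X, Z) and np.allclose(X @ Y, -Z)
assert np.allclose(Z @ Y, X) and np.allclose(Y @ Z, -X)
assert np.allclose(Z @ X, Y) and np.allclose(X @ Z, -Y)
print("all multiplication relations verified")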

 These properties give the set $P=\{\pm I,\pm X, \pm Y, \pm Z\}$ a group structure under the usual matrix multiplication. Define $P_n:=\{U_1\otimes\dots\otimes U_n \ | \ U_j\in P, 1\leq j\leq n\}$ as the set of $n$-fold tensor products of Pauli operators from $P$. The set $P_n$ also forms a group under the natural multiplication and is called the Pauli group; its order is $|P_n|=2^{2n+1}$.

  An important property of the Pauli operators is that they span the space of operators acting on a single qubit. In particular, any single qubit unitary $U$ can be expressed as
 \[
 U=c_II+c_XX+c_YY+c_ZZ,
 \]
 where the vector $(c_I,c_X,c_Y,c_Z)$ consists of complex numbers and is of unit norm. Similarly, any unitary operator acting on an $n$-qubit Hilbert space can be expressed in terms of elements of the Pauli group $P_n$.
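As a concrete single qubit example, the coefficients can be read off with the Hilbert--Schmidt inner product, $c_P=\frac{1}{2}Tr(P^\dagger U)$. The sketch below decomposes the Hadamard gate (an arbitrary choice of unitary) and checks that the coefficient vector has unit norm.

import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, 1], [-1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_coefficients(U):
    # Coefficients (c_I, c_X, c_Y, c_Z) of U in the basis {I, X, Y, Z},
    # extracted via c_P = Tr(P^dagger U) / 2.
    return np.array([np.trace(P.conj().T @ U) / 2 for P in (I, X, Y, Z)])

H = (X + Z) / np.sqrt(2)      # the Hadamard gate
c = pauli_coefficients(H)
print(np.round(c, 6))         # [0, 0.707107, 0, 0.707107]
print(np.linalg.norm(c))      # 1.0, a unit-norm coefficient vector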

 Moreover, the Pauli Group $P_n$ also satisfies the following properties:

  1.  Every $M\in P_n$ is unitary: $M^\dagger=M^{-1}$.
  2.  Every $M\in P_n$ satisfies $M^2=\pm I^{\otimes n}$.
  3.  If $M^2=I^{\otimes n}$, then $M=M^\dagger$; if $M^2=-I^{\otimes n}$, then $M=-M^\dagger$.
  4.  For any $M, N \in P_n$, either $MN=NM$ (they commute) or $MN=-NM$ (they anti-commute), as checked numerically below.
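Property 4 in particular can be checked exhaustively for small $n$. The sketch below runs over every pair of two-qubit Pauli operators (ignoring overall signs, which do not affect whether two elements commute):

import numpy as np
from functools import reduce
from itertools import product

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, 1], [-1, 0]])
Z = np.array([[1, 0], [0, -1]])

def tensor(ops):
    return reduce(np.kron, ops)

# Check that every pair of 2-qubit Pauli operators either commutes or anti-commutes.
for a in product((I, X, Y, Z), repeat=2):
    for b in product((I, X, Y, Z), repeat=2):
        M, N = tensor(a), tensor(b)
        assert np.allclose(M @ N, N @ M) or np.allclose(M @ N, -(N @ M))
print("every pair of two-qubit Pauli operators commutes or anti-commutes")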
Consider some abelian subgroup $S\subset P_n$ that does not contain $-I^{\otimes n}$, consisting of elements that all commute with one another. Then all elements of $S$ can be simultaneously diagonalized. The subspace $\Hil_S\subset\Hil^{2^n}$ defined as
$$\Hil_S:=\{\ket{\psi}\in\Hil^{2^n} \ | \ M\ket{\psi}=\ket{\psi} \ \text{ for all } \ M\in S \}$$
 consists of the simultaneous eigenspace with eigenvalue $+1$ of elements of $S$. The space $\Hil_S$ is called the \emph{stabilizer code} associated with $S$, and $S$ is called the stabilizer of the code.

A generating set of $S$ is a collection of elements of $S$ such that each element of $S$ can be expressed as some product of elements from the generating set. In addition, it is required that the elements of the generating set be independent, meaning that no element of the generating set can be expressed as a product of the other elements of the generating set. It can be shown \cite{Got} that if $S$ has $n-k$ generators, then the code space $\Hil_S$ has dimension $2^k$, implying that it can effectively encode $k$ qubits.
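As a small illustration (a choice made here, not an example from the text), take $n=3$ with the two generators $Z_1Z_2$ and $Z_2Z_3$ of the familiar bit-flip code; the projector onto their simultaneous $+1$ eigenspace has rank $2^{k}=2$, so exactly one logical qubit is encoded.

import numpy as np
from functools import reduce

I = np.eye(2)
Z = np.array([[1, 0], [0, -1]])

def tensor(ops):
    return reduce(np.kron, ops)

# Stabilizer generators of the three-qubit bit-flip code: Z1Z2 and Z2Z3 (n = 3, n - k = 2).
M1 = tensor([Z, Z, I])
M2 = tensor([I, Z, Z])

# Projector onto the simultaneous +1 eigenspace of both generators.
P = (np.eye(8) + M1) @ (np.eye(8) + M2) / 4
print(int(round(np.trace(P).real)))   # 2 = 2^k, i.e. one encoded qubit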

 Index the elements of a generating set of a stabilizer $S$ as $\{M_1,\dots,M_{n-k}\}$. The utility of the stabilizer formalism for quantum error correction comes from the fact that the elements of $S$ serve as operators for diagnosing possible errors that may occur to an encoded state $\ket{\psi}\in\Hil_S$. In general, an error can be represented in terms of elements $E_a\in P_n$. Then since every $E_a$ either commutes or anti-commutes with each generator $M_j\in S$, the following two cases may occur.
  •  If $E_a$ anti-commutes with some $M_j$, then for $\ket{\psi}\in\Hil_S$,
    \[ M_jE_a\ket{\psi}=-E_aM_j\ket{\psi}=-E_a\ket{\psi},\] which implies that the error can be detected if the erred state $E_a\ket{\psi}$ is acted on by $M_j$.

  • If $E_a$ commutes with some $M_j$, then for $\ket{\psi}\in\Hil_S$,
    \[ M_jE_a\ket{\psi}=E_aM_j\ket{\psi}=E_a\ket{\psi}, \] and the error may go undetected when the erred state $E_a\ket{\psi}$ is acted on by $M_j$.
 A more thorough error syndrome can be obtained by measuring each of the $n-k$ stabilizer generators. That is, for a particular error $E_a$, consider the set of values $\{s_{a,j} \}$, where each $s_{a,j}\in\{0,1\}$ satisfies
 \[
 M_jE_a=(-1)^{s_{a,j}}E_aM_j.
 \]
 If for every $a\neq b$ there is some $j$ with $s_{a,j}\neq s_{b,j}$, so that distinct errors have distinct syndromes, then the code is considered to be non-degenerate and there will be no ambiguity about which error occurred, allowing the error to be corrected after measuring the $n-k$ generators of $S$.
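Continuing the bit-flip illustration introduced above, the syndrome bits $s_{a,j}$ of the single-qubit errors $X_1, X_2, X_3$ against the generators $Z_1Z_2$ and $Z_2Z_3$ can be computed by checking commutation; the three errors produce three distinct syndromes, so each can be identified and corrected.

import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def tensor(ops):
    return reduce(np.kron, ops)

generators = [tensor([Z, Z, I]), tensor([I, Z, Z])]   # bit-flip code generators

def syndrome(E):
    # Bits s_j defined by M_j E = (-1)^{s_j} E M_j.
    return [0 if np.allclose(M @ E, E @ M) else 1 for M in generators]

for label, E in [("X1", tensor([X, I, I])),
                 ("X2", tensor([I, X, I])),
                 ("X3", tensor([I, I, X]))]:
    print(label, syndrome(E))
# X1 -> [1, 0], X2 -> [1, 1], X3 -> [0, 1]: all distinct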

 Another condition which must be satisfied by the stabilizer $S$ in order to ensure complete recovery from a set of possible errors is that, for each pair of errors $E_a, E_b$ and any $\ket{\psi}\in\Hil_S$,
 \[
 \bra{\psi}E_a^\dagger E_b\ket{\psi}=C_{ab},
 \]
 such that the constants $C_{ab}$ are independent of $\ket{\psi}$. This condition can be shown to hold if, for each possible pair of errors $E_a$ and $E_b$, one of the following is satisfied:
  1.  $E_a^\dagger E_b\in S$,
  2.  There exists an $M\in S$ that anti-commutes with $E_a^\dagger E_b$.
In this way, error recovery may fail if both conditions are violated, that is, if there exists some $E_a^\dagger E_b$ that commutes with every element of $S$ and yet $E_a^\dagger E_b\not\in S$. In this circumstance, the operator $E_a^\dagger E_b$ preserves the code space $\Hil_S$ but still acts on it in a non-trivial way, implying that encoded information may be transformed. In addition, both $E_a$ and $E_b$ will have the same syndrome, leaving an inherent ambiguity in how either error should be corrected, and any mistake in diagnosis can apply a non-trivial transformation to the encoded space.
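In the bit-flip illustration, such a failure mode appears when $E_a=X_1$ and $E_b=X_2X_3$: both have the same syndrome, and $E_a^\dagger E_b=X_1X_2X_3$ commutes with every generator without belonging to the stabilizer, so it acts as a logical operator on the encoded qubit. A quick check:

import numpy as np
from functools import reduce

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def tensor(ops):
    return reduce(np.kron, ops)

generators = [tensor([Z, Z, I]), tensor([I, Z, Z])]
Ea, Eb = tensor([X, I, I]), tensor([I, X, X])   # the errors X1 and X2X3

L = Ea.conj().T @ Eb                            # equals X1X2X3
print(all(np.allclose(M @ L, L @ M) for M in generators))   # True: undetected by the syndrome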

An impossible operation: the transpose

Question: Is the transpose a valid quantum operation?

 To make things a little more rigorous, let an operation $\Lambda$ on qubits be defined as $\Lambda(\rho)=\rho^T$, where $\rho^T$ denotes the transpose of $\rho$.

Consider the one qubit  state $\ket{\psi_+}=\frac{1}{\sqrt{2}}(\ket{0}+i\ket{1})$ so that
\[ \begin{align*}
 \ket{\psi_+}\bra{\psi_+}&=\frac{1}{2}(\ket{0}+i\ket{1})(\bra{0}-i\bra{1}) \\
 &=\frac{1}{2}(\ket{0}\bra{0}-i\ket{0}\bra{1}+i\ket{1}\bra{0}+\ket{1}\bra{1}) \\
 &=\frac{1}{2}\begin{pmatrix} 1&-i \\ i&1\end{pmatrix}.
 \end{align*}\]
 Then
\[ \begin{align*}
 \Lambda(\ket{\psi_+}\bra{\psi_+})=\frac{1}{2}\begin{pmatrix} 1&-i \\ i&1\end{pmatrix}^T
&=\frac{1}{2}\begin{pmatrix} 1&i \\ -i&1\end{pmatrix} \\
&=\frac{1}{2}(\ket{0}\bra{0}+i\ket{0}\bra{1}-i\ket{1}\bra{0}+\ket{1}\bra{1}) \\
&=\frac{1}{2}(\ket{0}-i\ket{1})(\bra{0}+i\bra{1}) \\
&=\ket{\psi_-}\bra{\psi_-},
  \end{align*}\]
  where $\ket{\psi_-}=\frac{1}{\sqrt{2}}(\ket{0}-i\ket{1})$. Then the inner product of $\ket{\psi_-}$ and $\ket{\psi_+}$ is
\[  \begin{align*}
  \ip{\psi_-}{\psi_+}&=\frac{1}{2}(\bra{0}+i\bra{1})(\ket{0}+i\ket{1}) \\
  &=\frac{1}{2}(\ip{0}{0}+i\ip{0}{1}+i\ip{1}{0}-\ip{1}{1}) \\
  &=\frac{1}{2}(1+0+0-1) \\
  &=0.
  \end{align*}\]
  Thus, $\ket{\psi_+}$ and $\ket{\psi_-}$ are orthogonal pure states such that $\Lambda(\ket{\psi_+}\bra{\psi_+})=\ket{\psi_-}\bra{\psi_-}$.
 
 Now, it will be proven that there does not exist a unitary operation $U$ such that $\Lambda(\rho)=U\rho U^\dagger$ for all $\rho$.
 
  It suffices to show that such a unitary does not exist in the single qubit (two dimensional) case. For the sake of contradiction, suppose that such a unitary $U$ does exist. Moreover, consider the two states $\ket{0}$ and $\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$, whose density operators are given by
\[
  \ket{0}\bra{0}=\begin{pmatrix} 1&0 \\0&0 \end{pmatrix} \ \ \  \text{and} \ \ \ \ket{+}\bra{+}=\frac{1}{2}\begin{pmatrix} 1&1\\1&1 \end{pmatrix}.
\]
  Then
 \[ \begin{align*}
 \Lambda(\ket{0}\bra{0})&=\begin{pmatrix} 1&0 \\0&0 \end{pmatrix}^T=\begin{pmatrix} 1&0 \\0&0 \end{pmatrix}=\ket{0}\bra{0} \\
    \Lambda(\ket{+}\bra{+})&=\frac{1}{2}\begin{pmatrix} 1&1\\1&1 \end{pmatrix}^T=\frac{1}{2}\begin{pmatrix} 1&1\\1&1 \end{pmatrix}=\ket{+}\bra{+},
  \end{align*}\]
  which shows that these two states remain the same under the transposition operation $\Lambda$. Therefore, these two states must also be left unchanged by the action of the unitary $U$. That is, it must be the case that
  \[\begin{align*}
  U\ket{0}\bra{0}U^\dagger&=\ket{0}\bra{0},\\
  U\ket{+}\bra{+}U^\dagger&=\ket{+}\bra{+},
  \end{align*}\]
  or equivalently that $U\ket{0}=\ket{0}$ and $U\ket{+}=\ket{+}$, up to global phases. By thinking of the action of $U$ as a rotation on the Bloch sphere, this implies that both of these states remain fixed and must therefore lie on the axis of rotation of $U$. More explicitly, recall that a one-qubit unitary can be represented as
\[
  U=e^{i\alpha}\cos(\theta/2)I-ie^{i\alpha}\sin(\theta/2)\left(c_X X +c_Y Y + c_Z Z\right),
\]
  where $\alpha$ is just a phase factor, $\theta$ gives the angle of rotation, and $(c_X,c_Y,c_Z)$ is a unit vector.
 
    Then since the state $\ket{0}$ lies on the $z$-axis of the Bloch sphere, $U$ must be a rotation about the $z$-axis in order to keep $\ket{0}$ fixed, which implies $c_X=c_Y=0$. Furthermore, since the state $\ket{+}$ lies along the $x$-axis, $U$ must also be a rotation about the $x$-axis in order to keep $\ket{+}$ fixed, which implies $c_Y=c_Z=0$. These requirements are incompatible with $(c_X,c_Y,c_Z)$ being a unit vector unless $\sin(\theta/2)=0$, in which case $U=e^{i\alpha}I$ acts as the identity on every density operator. However, as seen above, the transpose operation $\Lambda$ does not act as the identity on all states. This is a contradiction, and therefore there cannot exist a unitary $U$ satisfying the desired conditions.

  Another, perhaps more general, way to see why such a unitary cannot exist is to note that any operation of the form $\Phi(\rho)=U\rho U^\dagger$ defined with some unitary $U$ always yields a valid quantum operation. This means that $\Phi$ is both trace preserving and completely positive by definition. On the other hand, although the transpose operation $\Lambda$ is trace preserving, it is not completely positive in general, and thus cannot define a valid quantum operation.
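The failure of complete positivity can be made concrete: applying the transpose to one half of a maximally entangled two-qubit state yields an operator with a negative eigenvalue, which witnesses that the transpose is not completely positive. A minimal numpy sketch:

import numpy as np

# Maximally entangled two-qubit state |Phi> = (|00> + |11>)/sqrt(2).
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# Transpose the second qubit only (partial transpose).
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

print(np.linalg.eigvalsh(rho_pt))   # eigenvalues include -1/2 < 0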

An impossible operation: mapping every state to an orthogonal state

Here, we'll question the existence of a quantum operation that maps every quantum state to a state orthogonal to its input. More specifically: is there a one-qubit unitary operation $U$ that maps each pure state $\ket{\psi}$ to some state $U\ket{\psi}=\ket{\psi'}$ such that $\ip{\psi}{\psi'}=0$?

I claim that there does not exist such a unitary!

Suppose, for the sake of contradiction, that such a unitary $U$ did exist. Consider an arbitrary one-qubit state $\ket{\psi}$. Let $U\ket{\psi}=\ket{\psi'}$ so that $\ip{\psi}{\psi'}=0$. Moreover, let $U\ket{\psi'}=\ket{\psi''}$ so that $\ip{\psi''}{\psi'}=0$ as well by the assumption of the existence of such a unitary. Now, consider the state $\ket{\phi}=\frac{1}{\sqrt{2}}(\ket{\psi}+\ket{\psi'})$, and let $U\ket{\phi}=\ket{\phi'}$ so that $\ip{\phi}{\phi'}=0$. Then it must also follow that
\[
U\ket{\phi}=\frac{1}{\sqrt{2}}(U\ket{\psi}+U\ket{\psi'})=\frac{1}{\sqrt{2}}(\ket{\psi'}+\ket{\psi''})=\ket{\phi'}.
\]
This implies that the inner product of $\ket{\phi}$ and $\ket{\phi'}$ must also satisfy
\[\begin{align*}
\ip{\phi}{\phi'}&=\frac{1}{2}(\bra{\psi}+\bra{\psi'})(\ket{\psi'}+\ket{\psi''}) \\
&=\frac{1}{2}(\ip{\psi}{\psi'}+\ip{\psi}{\psi''}+\ip{\psi'}{\psi'}+\ip{\psi'}{\psi''}) \\
&=\frac{1}{2}(0+\ip{\psi}{\psi''}+1+0) \\
&=\frac{1}{2}(1+\ip{\psi}{\psi''}).
\end{align*}\]
Since $\ip{\phi}{\phi'}=0$ by assumption, it must be the case that $\ip{\psi}{\psi''}=-1$, which for unit vectors forces $\ket{\psi''}=-\ket{\psi}$. Now repeat the argument with the state $\ket{\chi}=\frac{1}{\sqrt{2}}(\ket{\psi}+i\ket{\psi'})$, for which
\[\begin{align*}
U\ket{\chi}&=\frac{1}{\sqrt{2}}(\ket{\psi'}+i\ket{\psi''})=\frac{1}{\sqrt{2}}(\ket{\psi'}-i\ket{\psi}), \\
\ip{\chi}{U\chi}&=\frac{1}{2}(\bra{\psi}-i\bra{\psi'})(\ket{\psi'}-i\ket{\psi})=\frac{1}{2}(-i-i)=-i.
\end{align*}\]
But $\ip{\chi}{U\chi}=-i\neq0$, contradicting the assumption that $U$ maps every state to an orthogonal state. Therefore, no such unitary can exist.