Jul 30 2024
math.CO arXiv:2407.19344v1
The $m \times n$ king graph consists of all squares of an $m \times n$ chessboard, with edges given by the legal moves of a chess king. Let $P_{m \times n}(z)$ denote its domination polynomial, i.e., $\sum_{S \subseteq V} z^{|S|}$ where the sum is over all dominating sets $S$. We prove that $P_{m \times n}(-1) = (-1)^{\lceil m/2\rceil \lceil n/2\rceil}$. In particular, the number of dominating sets of even size and the number of odd size differ by $\pm 1$. (They cannot be equal, since the total number of dominating sets is always odd.) This property does not hold for king graphs on a cylinder or a torus, or for the grid graph. But it holds for $d$-dimensional king graphs, where $P_{n_1\times n_2\times\cdots\times n_d}(-1) = (-1)^{\lceil n_1/2\rceil \lceil n_2/2\rceil\cdots \lceil n_d/2\rceil}$.
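A brute-force check of the identity on tiny boards (a sketch with helper names of our own choosing; the enumeration is exponential in $mn$, so it only works for very small boards):

```python
# Verify P_{m x n}(-1) = (-1)^{ceil(m/2) ceil(n/2)} for tiny king graphs.
# A set S dominates if every square is in S or adjacent (by a king move) to S.
from itertools import combinations
from math import ceil

def closed_neighborhood(m, n, v):
    """Square v = (i, j) together with its king-move neighbors."""
    i, j = v
    return {(i + di, j + dj)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if 0 <= i + di < m and 0 <= j + dj < n}

def P_at_minus_one(m, n):
    """Sum of (-1)^{|S|} over all dominating sets S of the m x n king graph."""
    squares = [(i, j) for i in range(m) for j in range(n)]
    total = 0
    for k in range(1, len(squares) + 1):
        for S in combinations(squares, k):
            covered = set().union(*(closed_neighborhood(m, n, v) for v in S))
            if len(covered) == m * n:
                total += (-1) ** k
    return total

for m in range(1, 4):
    for n in range(1, 4):
        print(f"{m}x{n}: P(-1) = {P_at_minus_one(m, n)}, "
              f"predicted {(-1) ** (ceil(m / 2) * ceil(n / 2))}")
```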
Many problems in high-dimensional statistics appear to have a statistical-computational gap: a range of values of the signal-to-noise ratio where inference is information-theoretically possible, but (conjecturally) computationally intractable. A canonical such problem is Tensor PCA, where we observe a tensor $Y$ consisting of a rank-one signal plus Gaussian noise. Multiple lines of work suggest that Tensor PCA becomes computationally hard at a critical value of the signal's magnitude. In particular, below this transition, no low-degree polynomial algorithm can detect the signal with high probability; conversely, various spectral algorithms are known to succeed above this transition. We unify and extend this work by considering tensor networks, orthogonally invariant polynomials where multiple copies of $Y$ are "contracted" to produce scalars, vectors, matrices, or other tensors. We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariant polynomials of a given degree. This basis lets us unify and strengthen previous results on low-degree hardness, giving a combinatorial explanation of the hardness transition and of a continuum of subexponential-time algorithms that work below it, and proving tight lower bounds against low-degree polynomials for recovering rather than just detecting the signal. It also lets us analyze a new problem of distinguishing between different tensor ensembles, such as Wigner and Wishart tensors, establishing a sharp computational threshold and giving evidence of a new statistical-computational gap in the Central Limit Theorem for random tensors. Finally, we believe these cumulants are valuable mathematical objects in their own right: they generalize the free cumulants of free probability theory from matrices to tensors, and share many of their properties, including additivity under additive free convolution.
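As a toy illustration of the simplest tensor network, the sketch below contracts two copies of $Y$ with each other to obtain the invariant scalar $\langle Y, Y \rangle$, which already separates null from planted once the signal is strong enough. The scaling $Y = \lambda\, v^{\otimes 3} + G$ with unit $v$ and i.i.d. standard Gaussian noise is our own choice for illustration, not necessarily the paper's normalization:

```python
# The simplest tensor network: contract two copies of Y to the scalar <Y, Y>.
# Under the null, <Y, Y> concentrates around n^3; with a planted rank-one
# signal lambda * v^{(x)3} (unit v), around n^3 + lambda^2.
import numpy as np

rng = np.random.default_rng(0)
n, lam = 30, 40.0

v = rng.normal(size=n)
v /= np.linalg.norm(v)
signal = lam * np.einsum('i,j,k->ijk', v, v, v)

Y_null = rng.normal(size=(n, n, n))
Y_planted = signal + rng.normal(size=(n, n, n))

print("null    <Y,Y> =", np.sum(Y_null ** 2))
print("planted <Y,Y> =", np.sum(Y_planted ** 2))
print("n^3 =", n ** 3, "   n^3 + lam^2 =", n ** 3 + lam ** 2)
```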
PULSAR (personalized ultrafractionated stereotactic adaptive radiotherapy) is a form of radiotherapy in which a patient is given large doses, or pulses, of radiation spaced weeks apart rather than small daily doses. The tumor response is then monitored to determine when the subsequent pulse should be given. Pre-clinical trials have shown better tumor response in mice that received immunotherapy along with pulses spaced 10 days apart. However, this was not the case when the pulses were 1 day apart. Therefore, a synergistic effect between immunotherapy and PULSAR is observed when the pulses are spaced out by a certain number of days. In our study, we aimed to develop a mathematical model that captures this synergistic effect by means of a time-dependent weight function that takes the spacing between pulses into account. By determining feasible parameters and applying reasonable conditions, we use our model to simulate murine trials with varying sequencing of pulses. We demonstrate that our model is simple to implement and generates tumor volume data consistent with the pre-clinical trial data. Our model has the potential to aid in the development of clinical trials of PULSAR therapy.
In this review article, we discuss connections between the physics of disordered systems, phase transitions in inference problems, and computational hardness. We introduce two models representing the behavior of glassy systems, the spiked tensor model and the generalized linear model. We discuss the random (non-planted) versions of these problems as prototypical optimization problems, as well as the planted versions (with a hidden solution) as prototypical problems in statistical inference and learning. Based on ideas from physics, many of these problems have transitions where they are believed to jump from easy (solvable in polynomial time) to hard (requiring exponential time). We discuss several emerging ideas in theoretical computer science and statistics that provide rigorous evidence for hardness by proving that large classes of algorithms fail in the conjectured hard regime. This includes the overlap gap property, a particular mathematization of clustering or dynamical symmetry-breaking, which can be used to show that many algorithms that are local or robust to changes in their input fail. We also discuss the sum-of-squares hierarchy, which places bounds on proofs or algorithms that use low-degree polynomials such as standard spectral methods and semidefinite relaxations, including the Sherrington-Kirkpatrick model. Throughout the manuscript, we present connections to the physics of disordered systems and associated replica symmetry breaking properties.
Grigoriev (2001) and Laurent (2003) independently showed that the sum-of-squares hierarchy of semidefinite programs does not exactly represent the hypercube $\{\pm 1\}^n$ until degree at least $n$ of the hierarchy. Laurent also observed that the pseudomoment matrices her proof constructs appear to have surprisingly simple and recursively structured spectra as $n$ increases. While several new proofs of the Grigoriev-Laurent lower bound have since appeared, Laurent's observations have remained unproved. We give yet another, representation-theoretic proof of the lower bound, which also yields exact formulae for the eigenvalues of the Grigoriev-Laurent pseudomoments. Using these, we prove and elaborate on Laurent's observations. Our arguments have two features that may be of independent interest. First, we show that the Grigoriev-Laurent pseudomoments are a special case of a Gram matrix construction of pseudomoments proposed by Bandeira and Kunisky (2020). Second, we find a new realization of the irreducible representations of the symmetric group corresponding to Young diagrams with two rows, as spaces of multivariate polynomials that are multiharmonic with respect to an equilateral simplex.
Sep 30 2021
math.CA arXiv:2109.14036v1
Trigonometry is the study of circular functions, which are functions defined on the unit circle $x^2+y^2 =1$, where distances are measured using the Euclidean norm. When distances are measured using the $L_p$-norm, we get generalized trigonometric functions. These are parametrizations of the unit $p$-circle $|x|^p+|y|^p =1$. Investigating these new functions leads to interesting connections involving double angle formulas, norms induced by inner products, Stirling numbers, Bell polynomials, Lagrange inversion, gamma functions, and generalized $\pi$ values.
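One convention for these functions (there are several in the literature, so treat this as an assumption rather than the paper's definition) takes $(\cos_p t, \sin_p t)$ to be the solution of $x' = -\operatorname{sgn}(y)|y|^{p-1}$, $y' = \operatorname{sgn}(x)|x|^{p-1}$ with $(x,y)(0)=(1,0)$; differentiating shows this flow conserves $|x|^p + |y|^p$ exactly. A numerical sketch:

```python
# Parametrize the p-circle |x|^p + |y|^p = 1 by the flow
#   x' = -sgn(y)|y|^(p-1),  y' = sgn(x)|x|^(p-1),  (x, y)(0) = (1, 0).
# d/dt (|x|^p + |y|^p) = 0 along trajectories, so the invariant checks the ODE.
import numpy as np
from scipy.integrate import solve_ivp

p = 4.0

def flow(t, z):
    x, y = z
    return [-np.sign(y) * abs(y) ** (p - 1),
             np.sign(x) * abs(x) ** (p - 1)]

sol = solve_ivp(flow, (0.0, 10.0), [1.0, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
for t in np.linspace(0.0, 10.0, 5):
    x, y = sol.sol(t)
    print(f"t={t:5.2f}  cos_p={x:+.6f}  sin_p={y:+.6f}  "
          f"invariant={abs(x) ** p + abs(y) ** p:.9f}")
```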
Embedding graphs in a geographical or latent space, i.e.\ inferring locations for vertices in Euclidean space or on a smooth manifold or submanifold, is a common task in network analysis, statistical inference, and graph visualization. We consider the classic model of random geometric graphs where $n$ points are scattered uniformly in a square of area $n$, and two points have an edge between them if and only if their Euclidean distance is less than $r$. The reconstruction problem then consists of inferring the vertex positions, up to the symmetries of the square, given only the adjacency matrix of the resulting graph. We give an algorithm that, if $r=n^\alpha$ for any $\alpha > 0$, with high probability reconstructs the vertex positions with a maximum error of $O(n^\beta)$ where $\beta=1/2-(4/3)\alpha$, until $\alpha \ge 3/8$ where $\beta=0$ and the error becomes $O(\sqrt{\log n})$. This improves over earlier results, which were unable to reconstruct with error less than $r$. Our method estimates Euclidean distances using a hybrid of graph distances and short-range estimates based on the number of common neighbors. We extend our results to the surface of the sphere in $\mathbb{R}^3$ and to hypercubes in any fixed constant dimension. Additionally, we examine the extent to which reconstruction is still possible when the original adjacency lists have had a subset of the edges independently deleted at random.
A 2-coloring of a hypergraph is a mapping from its vertices to a set of two colors such that no edge is monochromatic. Let $H_k(n,m)$ be a random $k$-uniform hypergraph on $n$ vertices formed by picking $m$ edges uniformly, independently and with replacement. It is easy to show that if $r \geq r_c = 2^{k-1} \ln 2 - (\ln 2) /2$, then with high probability $H_k(n,m=rn)$ is not 2-colorable. We complement this observation by proving that if $r \leq r_c - 1$ then with high probability $H_k(n,m=rn)$ is 2-colorable.
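A tiny-$n$ illustration of the two statements; at these sizes the transition is heavily blurred, since the theorem is asymptotic in $n$ and $k$:

```python
# Sample H_k(n, m = rn) and test 2-colorability by brute force for k = 3,
# just below and just above r_c = 2^{k-1} ln 2 - (ln 2)/2.
import itertools
import random
from math import log

def sample_edges(n, m, k, rng):
    # m edges drawn uniformly and independently (with replacement)
    return [tuple(rng.sample(range(n), k)) for _ in range(m)]

def two_colorable(n, edges):
    for bits in itertools.product((0, 1), repeat=n):
        if all(any(bits[v] for v in e) and not all(bits[v] for v in e)
               for e in edges):
            return True
    return False

rng = random.Random(0)
k, n, trials = 3, 12, 20
r_c = 2 ** (k - 1) * log(2) - log(2) / 2
for r in (r_c - 1.0, r_c + 0.5):
    sat = sum(two_colorable(n, sample_edges(n, int(r * n), k, rng))
              for _ in range(trials))
    print(f"r = {r:.2f}: {sat}/{trials} samples 2-colorable")
```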
We study the problem of efficiently refuting the k-colorability of a graph, or equivalently certifying a lower bound on its chromatic number. We give formal evidence of average-case computational hardness for this problem in sparse random regular graphs, showing optimality of a simple spectral certificate. This evidence takes the form of a computationally-quiet planting: we construct a distribution of d-regular graphs that has significantly smaller chromatic number than a typical regular graph drawn uniformly at random, while providing evidence that these two distributions are indistinguishable by a large class of algorithms. We generalize our results to the more general problem of certifying an upper bound on the maximum k-cut. This quiet planting is achieved by minimizing the effect of the planted structure (e.g. colorings or cuts) on the graph spectrum. Specifically, the planted structure corresponds exactly to eigenvectors of the adjacency matrix. This avoids the pushout effect of random matrix theory, and delays the point at which the planting becomes visible in the spectrum or local statistics. To illustrate this further, we give similar results for a Gaussian analogue of this problem: a quiet version of the spiked model, where we plant an eigenspace rather than adding a generic low-rank perturbation. Our evidence for computational hardness of distinguishing two distributions is based on three different heuristics: stability of belief propagation, the local statistics hierarchy, and the low-degree likelihood ratio. Of independent interest, our results include general-purpose bounds on the low-degree likelihood ratio for multi-spiked matrix models, and an improved low-degree analysis of the stochastic block model.
We study the problem of recovering a planted matching in randomly weighted complete bipartite graphs $K_{n,n}$. For some unknown perfect matching $M^*$, the weight of an edge is drawn from one distribution $P$ if $e \in M^*$ and another distribution $Q$ if $e \notin M^*$. Our goal is to infer $M^*$, exactly or approximately, from the edge weights. In this paper we take $P=\exp(\lambda)$ and $Q=\exp(1/n)$, in which case the maximum-likelihood estimator of $M^*$ is the minimum-weight matching $M_{\text{min}}$. We obtain precise results on the overlap between $M^*$ and $M_{\text{min}}$, i.e., the fraction of edges they have in common. For $\lambda \ge 4$ we have almost perfect recovery, with overlap $1-o(1)$ with high probability. For $\lambda < 4$ the expected overlap is an explicit function $\alpha(\lambda) < 1$: we compute it by generalizing Aldous' celebrated proof of the $\zeta(2)$ conjecture for the un-planted model, using local weak convergence to relate $K_{n,n}$ to a type of weighted infinite tree, and then deriving a system of differential equations from a message-passing algorithm on this tree.
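A quick numerical version of this experiment (a sketch, not the paper's machinery), using the Hungarian algorithm to find the minimum-weight matching:

```python
# Plant a matching with exp(lambda) weights (mean 1/lambda), give the other
# edges exp(1/n) weights (mean n), and measure the overlap of the
# minimum-weight matching with the planted one.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n = 300
for lam in (1.0, 2.0, 4.0, 8.0):
    W = rng.exponential(scale=n, size=(n, n))        # off-matching: Q = exp(1/n)
    planted = rng.permutation(n)                     # M*: i is matched to planted[i]
    W[np.arange(n), planted] = rng.exponential(scale=1 / lam, size=n)  # P = exp(lambda)
    rows, cols = linear_sum_assignment(W)            # minimum-weight matching
    print(f"lambda = {lam:3.1f}:  overlap = {np.mean(cols == planted):.3f}")
```

For $\lambda \ge 4$ the measured overlap should approach 1 as $n$ grows, while for smaller $\lambda$ it stabilizes near the limiting value $\alpha(\lambda)$.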
For the tensor PCA (principal component analysis) problem, we propose a new hierarchy of increasingly powerful algorithms with increasing runtime. Our hierarchy is analogous to the sum-of-squares (SOS) hierarchy but is instead inspired by statistical physics and related algorithms such as belief propagation and AMP (approximate message passing). Our level-$\ell$ algorithm can be thought of as a linearized message-passing algorithm that keeps track of $\ell$-wise dependencies among the hidden variables. Specifically, our algorithms are spectral methods based on the Kikuchi Hessian, which generalizes the well-studied Bethe Hessian to the higher-order Kikuchi free energies. It is known that AMP, the flagship algorithm of statistical physics, has substantially worse performance than SOS for tensor PCA. In this work we 'redeem' the statistical physics approach by showing that our hierarchy gives a polynomial-time algorithm matching the performance of SOS. Our hierarchy also yields a continuum of subexponential-time algorithms, and we prove that these achieve the same (conjecturally optimal) tradeoff between runtime and statistical power as SOS. Our proofs are much simpler than prior work, and also apply to the related problem of refuting random $k$-XOR formulas. The results we present here apply to tensor PCA for tensors of all orders, and to $k$-XOR when $k$ is even. Our methods suggest a new avenue for systematically obtaining optimal algorithms for Bayesian inference problems, and our results constitute a step toward unifying the statistical physics and sum-of-squares approaches to algorithm design.
In 1969, Strassen shocked the world by showing that two n x n matrices could be multiplied in time asymptotically less than $n^3$. While the recursive construction in his algorithm is very clear, the key gain was made by showing that 2 x 2 matrix multiplication could be performed with only 7 multiplications instead of 8. The latter construction was arrived at by a process of elimination and appears to come out of thin air. Here, we give the simplest and most transparent proof of Strassen's algorithm that we are aware of, using only a simple unitary 2-design and a few easy lines of calculation. Moreover, using basic facts from the representation theory of finite groups, we use 2-designs coming from group orbits to generalize our construction to all n (although the resulting algorithms aren't optimal for n at least 3).
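For reference, here is the classical seven-multiplication scheme itself, checked numerically; this is the standard presentation rather than the 2-design derivation of the paper:

```python
# Strassen's 2x2 scheme: seven scalar multiplications p1..p7 instead of eight.
import numpy as np

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return np.array([[p5 + p4 - p2 + p6, p1 + p2],
                     [p3 + p4, p1 + p5 - p3 - p7]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
print(np.allclose(strassen_2x2(A, B), A @ B))  # True
```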
We use invasion percolation to compute numerical values for bond and site percolation thresholds $p_c$ (existence of an infinite cluster) and $p_u$ (uniqueness of the infinite cluster) of tessellations $\{P,Q\}$ of the hyperbolic plane, where $Q$ faces meet at each vertex and each face is a $P$-gon. Our values are accurate to six or seven decimal places, allowing us to explore their functional dependency on $P$ and $Q$ and to numerically compute critical exponents. We also prove rigorous upper and lower bounds for $p_c$ and $p_u$ that can be used to find the scaling of both thresholds as a function of $P$ and $Q$.
We derive upper and lower bounds on the degree $d$ for which the Lovász $\vartheta$ function, or equivalently sum-of-squares proofs with degree two, can refute the existence of a $k$-coloring in random regular graphs $G_{n,d}$. We show that this type of refutation fails well above the $k$-colorability transition, and in particular everywhere below the Kesten-Stigum threshold. This is consistent with the conjecture that refuting $k$-colorability, or distinguishing $G_{n,d}$ from the planted coloring model, is hard in this region. Our results also apply to the disassortative case of the stochastic block model, adding evidence to the conjecture that there is a regime where community detection is computationally hard even though it is information-theoretically possible. Using orthogonal polynomials, we also provide explicit upper bounds on $\vartheta(\overline{G})$ for regular graphs of a given girth, which may be of independent interest.
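Since $\vartheta(\overline{G}) \le \chi(G)$, computing the theta function of the complement graph certifies a lower bound on the chromatic number. A toy-sized sketch of this refutation, assuming the CVXPY and NetworkX libraries (the paper's results concern the asymptotic regime, not instances this small):

```python
# Lovasz theta of the complement as an SDP:
#   maximize sum_ij X_ij  s.t.  X psd, tr X = 1, X_ij = 0 on complement edges.
import cvxpy as cp
import networkx as nx
import numpy as np

G = nx.random_regular_graph(d=5, n=16, seed=0)
n = G.number_of_nodes()

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.trace(X) == 1]
for i in range(n):
    for j in range(i + 1, n):
        if not G.has_edge(i, j):          # edge of the complement graph
            constraints.append(X[i, j] == 0)

prob = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
prob.solve()
print("theta(complement) =", prob.value)
print("certified: chi(G) >=", int(np.ceil(prob.value - 1e-6)))
```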
Community detection in graphs is the problem of finding groups of vertices which are more densely connected than they are to the rest of the graph. This problem has a long history, but it is undergoing a resurgence of interest due to the need to analyze social and biological networks. While there are many ways to formalize it, one of the most popular is as an inference problem, where there is a "ground truth" community structure built into the graph somehow. The task is then to recover the ground truth knowing only the graph. Recently it was discovered, first heuristically in physics and then rigorously in probability and computer science, that this problem has a phase transition at which it suddenly becomes impossible. Namely, if the graph is too sparse, or the probabilistic process that generates it is too noisy, then no algorithm can find a partition that is correlated with the planted one---or even tell if there are communities, i.e., distinguish the graph from a purely random one with high probability. Above this information-theoretic threshold, there is a second threshold beyond which polynomial-time algorithms are known to succeed; in between, there is a regime in which community detection is possible, but conjectured to require exponential time. For computer scientists, this field offers a wealth of new ideas and open questions, with connections to probability and combinatorics, message-passing algorithms, and random matrix theory. Perhaps more importantly, it provides a window into the cultures of statistical physics and statistical inference, and how those cultures think about distributions of instances, landscapes of solutions, and hardness.
We show how to construct highly symmetric algorithms for matrix multiplication. In particular, we consider algorithms which decompose the matrix multiplication tensor into a sum of rank-1 tensors, where the decomposition itself consists of orbits under some finite group action. We show how to use the representation theory of the corresponding group to derive simple constraints on the decomposition, which we solve by hand for n=2,3,4,5, recovering Strassen's algorithm (in a particularly symmetric form) and new algorithms for larger n. While these new algorithms do not improve the known upper bounds on tensor rank or the matrix multiplication exponent, they are beautiful in their own right, and we point out modifications of this idea that could plausibly lead to further improvements. Our constructions also suggest further patterns that could be mined for new algorithms, including a tantalizing connection with lattices. In particular, using lattices we give the most transparent proof to date of Strassen's algorithm; the same proof works for all n, to yield a decomposition with $n^3 - n + 1$ terms.
We consider the problem of Gaussian mixture clustering in the high-dimensional limit where the data consists of $m$ points in $n$ dimensions, $n,m \rightarrow \infty$ and $\alpha = m/n$ stays finite. Using exact but non-rigorous methods from statistical physics, we determine the critical value of $\alpha$ and the distance between the clusters at which it becomes information-theoretically possible to reconstruct the membership into clusters better than chance. We also determine the accuracy achievable by the Bayes-optimal estimation algorithm. In particular, we find that when the number of clusters $r$ is sufficiently large, namely $r > 4 + 2 \sqrt{\alpha}$, there is a gap between the threshold for information-theoretically optimal performance and the threshold at which known algorithms succeed.
In the rendezvous problem, two parties with different labelings of the vertices of a complete graph are trying to meet at some vertex at the same time. It is well-known that if the parties have predetermined roles, then the strategy where one of them waits at one vertex, while the other visits all $n$ vertices in random order is optimal, taking at most $n$ steps and averaging about $n/2$. Anderson and Weber considered the symmetric rendezvous problem, where both parties must use the same randomized strategy. They analyzed strategies where the parties repeatedly play the optimal asymmetric strategy, determining their role independently each time by a biased coin-flip. By tuning the bias, Anderson and Weber achieved an expected meeting time of about $0.829 n$, which they conjectured to be asymptotically optimal. We change perspective slightly: instead of minimizing the expected meeting time, we seek to maximize the probability of meeting within a specified time $T$. The Anderson-Weber strategy, which fails with constant probability when $T= \Theta(n)$, is not asymptotically optimal for large $T$ in this setting. Specifically, we exhibit a symmetric strategy that succeeds with probability $1-o(1)$ in $T=4n$ steps. This is tight: for any $\alpha < 4$, any symmetric strategy with $T = \alpha n$ fails with constant probability. Our strategy uses a new combinatorial object that we dub a "rendezvous code," which may be of independent interest. When $T \le n$, we show that the probability of meeting within $T$ steps is indeed asymptotically maximized by the Anderson-Weber strategy. Our results imply new lower bounds, showing that the best symmetric strategy takes at least $0.638 n$ steps in expectation. We also present some partial results for the symmetric rendezvous problem on other vertex-transitive graphs.
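A simulation of the asymmetric wait-and-tour strategy, under the idealization that the seeker's tour is a uniformly random permutation of the vertices:

```python
# One party waits; the other visits the n vertices in random order.
# The meeting time is the tour position of the waiter's vertex, so it is
# uniform on {1, ..., n} with mean (n + 1)/2.
import random

def meeting_time(n, rng):
    tour = list(range(n))
    rng.shuffle(tour)
    waiter = rng.randrange(n)
    return tour.index(waiter) + 1

rng = random.Random(0)
n, trials = 1000, 20000
avg = sum(meeting_time(n, rng) for _ in range(trials)) / trials
print(f"average meeting time ~ {avg:.1f}   (theory: (n+1)/2 = {(n + 1) / 2})")
```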
We study the problem of detecting a structured, low-rank signal matrix corrupted with additive Gaussian noise. This includes clustering in a Gaussian mixture model, sparse PCA, and submatrix localization. Each of these problems is conjectured to exhibit a sharp information-theoretic threshold, below which the signal is too weak for any algorithm to detect. We derive upper and lower bounds on these thresholds by applying the first and second moment methods to the likelihood ratio between these "planted models" and null models where the signal matrix is zero. Our bounds differ by at most a factor of root two when the rank is large (in the clustering and submatrix localization problems, when the number of clusters or blocks is large) or the signal matrix is very sparse. Moreover, our upper bounds show that for each of these problems there is a significant regime where reliable detection is information-theoretically possible but where known algorithms such as PCA fail completely, since the spectrum of the observed matrix is uninformative. This regime is analogous to the conjectured 'hard but detectable' regime for community detection in sparse graphs.
We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, consider the symmetric stochastic block model with $q$ groups, average degree $d$, and connection probabilities $c_\text{in}/n$ and $c_\text{out}/n$ for within-group and between-group edges respectively; let $\lambda = (c_\text{in}-c_\text{out})/(qd)$. We show that, when $q$ is large, and $\lambda = O(1/q)$, the critical value of $d$ at which community detection becomes possible---in physical terms, the condensation threshold---is \[ d_\text{c} = \Theta\!\left( \frac{\log q}{q \lambda^2} \right), \] with tighter results in certain regimes. Above this threshold, we show that any partition of the nodes into $q$ groups which is as `good' as the planted one, in terms of the number of within- and between-group edges, is correlated with it. This gives an exponential-time algorithm that performs better than chance; specifically, community detection becomes possible below the Kesten-Stigum bound for $q \ge 5$ in the disassortative case $\lambda < 0$, and for $q \ge 11$ in the assortative case $\lambda >0$ (similar upper bounds were obtained independently by Abbe and Sandon). Conversely, below this threshold, we show that no algorithm can label the vertices better than chance, or even distinguish the block model from an Erdős-Rényi random graph with high probability. Our lower bound on $d_\text{c}$ uses Robinson and Wormald's small subgraph conditioning method, and we also give (less explicit) results for non-symmetric stochastic block models. In the symmetric case, we obtain explicit results by using bounds on certain functions of doubly stochastic matrices due to Achlioptas and Naor; indeed, our lower bound on $d_\text{c}$ is their second moment lower bound on the $q$-colorability threshold for random graphs with a certain effective degree.
We give upper and lower bounds on the information-theoretic threshold for community detection in the stochastic block model. Specifically, let $k$ be the number of groups, $d$ be the average degree, the probability of edges between vertices within and between groups be $c_\mathrm{in}/n$ and $c_\mathrm{out}/n$ respectively, and let $\lambda = (c_\mathrm{in}-c_\mathrm{out})/(kd)$. We show that, when $k$ is large, and $\lambda = O(1/k)$, the critical value of $d$ at which community detection becomes possible -- in physical terms, the condensation threshold -- is \[ d_c = \Theta\!\left( \frac{\log k}{k \lambda^2} \right), \] with tighter results in certain regimes. Above this threshold, we show that the only partitions of the nodes into $k$ groups that are as good as the planted one are correlated with the ground truth, giving an exponential-time algorithm that performs better than chance -- in particular, detection is possible for $k \ge 5$ in the disassortative case $\lambda < 0$ and for $k \ge 11$ in the assortative case $\lambda > 0$. (Similar upper bounds were obtained independently by Abbe and Sandon.) Below this threshold, we use recent results of Neeman and Netrapalli (who generalized arguments of Mossel, Neeman, and Sly) to show that no algorithm can label the vertices better than chance, or even distinguish the block model from an Erdős-Rényi random graph with high probability. We also rely on bounds on certain functions of doubly stochastic matrices due to Achlioptas and Naor; indeed, our lower bound on $d_c$ is the second moment lower bound on the $k$-colorability threshold for random graphs with a certain effective degree.
QPot is an R package for analyzing two-dimensional systems of stochastic differential equations. It provides users with a wide range of tools to simulate, analyze, and visualize the dynamics of these systems. One of QPot's key features is the computation of the quasi-potential, an important tool for studying stochastic systems. Quasi-potentials are particularly useful for comparing the relative stabilities of equilibria in systems with alternative stable states. This paper describes QPot's primary functions, and explains how quasi-potentials can yield insights about the dynamics of stochastic systems. Three worked examples guide users through the application of QPot's functions.
A $k$-uniform, $d$-regular instance of Exact Cover is a family of $m$ sets $F_{n,d,k} = \{ S_j \subseteq \{1,...,n\} \}$, where each subset has size $k$ and each $1 \le i \le n$ is contained in $d$ of the $S_j$. It is satisfiable if there is a subset $T \subseteq \{1,...,n\}$ such that $|T \cap S_j|=1$ for all $j$. Alternately, we can consider it a $d$-regular instance of Positive 1-in-$k$ SAT, i.e., a Boolean formula with $m$ clauses and $n$ variables where each clause contains $k$ variables and demands that exactly one of them is true. We determine the satisfiability threshold for random instances of this type with $k > 2$. Letting $d^\star = \frac{\ln k}{(k-1)(- \ln (1-1/k))} + 1$, we show that $F_{n,d,k}$ is satisfiable with high probability if $d < d^\star$ and unsatisfiable with high probability if $d > d^\star$. We do this with a simple application of the first and second moment methods, boosting the probability of satisfiability below $d^\star$ to $1-o(1)$ using the small subgraph conditioning method.
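Evaluating $d^\star$ for the first few clause sizes is straightforward arithmetic:

```python
# d* = ln k / ((k - 1) * (-ln(1 - 1/k))) + 1, per the formula above.
from math import log

for k in range(3, 9):
    d_star = log(k) / ((k - 1) * (-log(1 - 1 / k))) + 1
    print(f"k = {k}:  d* = {d_star:.4f}")
```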
Feb 24 2015
math.PR arXiv:1502.06136v1
We consider correlation decay in the hard-core model with fugacity $\lambda$ on a rooted tree $T$ in which the arity of each vertex is independently Poisson distributed with mean $d$. Specifically, we investigate the question of which parameter settings $(d, \lambda)$ result in strong spatial mixing, weak spatial mixing, or neither. (In our context, weak spatial mixing is equivalent to Gibbs uniqueness.) For finite fugacity, a zero-one law implies that these spatial mixing properties hold either almost surely or almost never, once we have conditioned on whether $T$ is finite or infinite. We provide a partial answer to this question, which implies in particular that:

1. As $d \to \infty$, weak spatial mixing on the Poisson tree occurs whenever $\lambda < f(d) - o(1)$ but not when $\lambda$ is slightly above $f(d)$, where $f(d)$ is the threshold for WSM (and SSM) on the $d$-regular tree. This suggests that, in most cases, Poisson trees have spatial mixing behavior similar to that of regular trees.

2. When $1 < d \le 1.179$, there is weak spatial mixing on the Poisson($d$) tree for all values of $\lambda$. However, strong spatial mixing does not hold for sufficiently large $\lambda$. This is in contrast to regular trees, for which strong spatial mixing and weak spatial mixing always coincide.

For infinite fugacity, SSM holds only when the tree is finite, and hence almost surely fails on the Poisson($d$) tree when $d>1$. We show that WSM almost surely holds on the Poisson($d$) tree for $d < \mathbf{e}^{1/\sqrt{2}}/\sqrt{2} = 1.434\ldots$, but that it fails with positive probability if $d>\mathbf{e}$.
We establish a precise relationship between spherical harmonics and Fourier basis functions over a hypercube randomly embedded in the sphere. In particular, we give a bound on the expected Boolean noise sensitivity of a randomly rotated function in terms of its "spherical sensitivity," which we define according to its evolution under the spherical heat equation. As an application, we prove an average case of the Gotsman-Linial conjecture, bounding the sensitivity of polynomial threshold functions subjected to a random rotation.
We show that there exists a family of groups $G_n$ and nontrivial irreducible representations $\rho_n$ such that, for any constant $t$, the average of $\rho_n$ over $t$ uniformly random elements $g_1, \ldots, g_t \in G_n$ has operator norm $1$ with probability approaching 1 as $n \rightarrow \infty$. More quantitatively, we show that there exist families of finite groups for which $\Omega(\log \log |G|)$ random elements are required to bound the norm of a typical representation below $1$. This settles a conjecture of A. Wigderson.
We propose a new conjecture on some exponential sums. These particular sums have not apparently been considered in the literature. Subject to the conjecture we obtain the first effective construction of asymptotically good tree codes. The available numerical evidence is consistent with the conjecture and is sufficient to certify codes for significant-length communications.
In analogy with epsilon-biased sets over Z_2^n, we construct explicit epsilon-biased sets over nonabelian finite groups G. That is, we find sets S subset G such that |Exp_{x in S} rho(x)| <= epsilon for any nontrivial irreducible representation rho. Equivalently, such sets make G's Cayley graph an expander with eigenvalue |lambda| <= epsilon. The Alon-Roichman theorem shows that random sets of size O(log |G| / epsilon^2) suffice. For groups of the form G = G_1 x ... x G_n, our construction has size poly(max_i |G_i|, n, epsilon^-1), and we show that a set S ⊂ G^n considered by Meka and Zuckerman that fools read-once branching programs over G is also epsilon-biased in this sense. For solvable groups whose abelian quotients have constant exponent, we obtain epsilon-biased sets of size (log |G|)^{1+o(1)} poly(epsilon^-1). Our techniques include derandomized squaring (in both the matrix product and tensor product senses) and a Chernoff-like bound on the expected norm of the product of independently random operators that may be of independent interest.
We consider Achlioptas processes for k-SAT formulas. We create a semi-random formula with n variables and m clauses, where each clause is a choice, made on-line, between two or more uniformly random clauses. Our goal is to delay the satisfiability/unsatisfiability transition, keeping the formula satisfiable up to densities m/n beyond the satisfiability threshold alpha_k for random k-SAT. We show that three choices suffice to raise the threshold for any k >= 3, and that two choices suffice for all 3 <= k <= 25. We also show that two choices suffice to lower the threshold for all k >= 3, making the formula unsatisfiable at a density below alpha_k.
The proliferation of models for networks raises challenging problems of model selection: the data are sparse and globally dependent, and models are typically high-dimensional and have large numbers of latent variables. Together, these issues mean that the usual model-selection criteria do not work properly for networks. We illustrate these challenges, and show one way to resolve them, by considering the key network-analysis problem of dividing a graph into communities or blocks of nodes with homogeneous patterns of links to the rest of the network. The standard tool for doing this is the stochastic block model, under which the probability of a link between two nodes is a function solely of the blocks to which they belong. This imposes a homogeneous degree distribution within each block; this can be unrealistic, so degree-corrected block models add a parameter for each node, modulating its over-all degree. The choice between ordinary and degree-corrected block models matters because they make very different inferences about communities. We present the first principled and tractable approach to model selection between standard and degree-corrected block models, based on new large-graph asymptotics for the distribution of log-likelihood ratios under the stochastic block model, finding substantial departures from classical results for sparse graphs. We also develop linear-time approximations for log-likelihoods under both the stochastic block model and the degree-corrected model, using belief propagation. Applications to simulated and real networks show excellent agreement with our approximations. Our results thus both solve the practical problem of deciding on degree correction, and point to a general approach to model selection in network analysis.
Subsets of F_2^n that are eps-biased, meaning that the parity of any set of bits is even or odd with probability eps-close to 1/2, are powerful tools for derandomization. A simple randomized construction shows that such sets exist of size O(n/eps^2), and known deterministic constructions achieve sets of size O(n/eps^3), O(n^2/eps^2), and O((n/eps^2)^{5/4}). Rather than derandomizing these sets completely in exchange for making them larger, we attempt a partial derandomization while keeping them small, constructing sets of size O(n/eps^2) with as few random bits as possible. The naive randomized construction requires O(n^2/eps^2) random bits. We give two constructions. The first uses Nisan's space-bounded pseudorandom generator to partly derandomize a folklore probabilistic construction of an error-correcting code, and requires O(n log (1/eps)) bits. Our second construction requires O(n log (n/eps)) bits, but is more elementary; it adds randomness to a Legendre symbol construction of Alon, Goldreich, Håstad, and Peralta, and uses Weil sums to bound high moments of the bias.
Chang's lemma is a useful tool in additive combinatorics and the analysis of Boolean functions. Here we give an elementary proof using entropy. The constant we obtain is tight, and we give a slight improvement in the case where the variables are highly biased.
If each edge (u,v) of a graph G=(V,E) is decorated with a permutation pi_u,v of k objects, we say that it has a permuted k-coloring if there is a coloring sigma from V to 1,...,k such that sigma(v) is different from pi_u,v(sigma(u)) for all (u,v) in E. Based on arguments from statistical physics, we conjecture that the threshold d_k for permuted k-colorability in random graphs G(n,m=dn/2), where the permutations on the edges are uniformly random, is equal to the threshold for standard graph k-colorability. The additional symmetry provided by random permutations makes it easier to prove bounds on d_k. By applying the second moment method with these additional symmetries, and applying the first moment method to a random variable that depends on the number of available colors at each vertex, we bound the threshold within an additive constant. Specifically, we show that for any constant epsilon > 0, for sufficiently large k we have 2 k ln k - ln k - 2 - epsilon < d_k < 2 k ln k - ln k - 1 + epsilon. In contrast, the best known bounds on d_k for standard k-colorability leave an additive gap of about ln k between the upper and lower bounds.
In the context of statistical physics, Chandrasekharan and Wiese recently introduced the \emph{fermionant} $\mathrm{Ferm}_k$, a determinant-like quantity where each permutation $\pi$ is weighted by $(-k)$ raised to the number of cycles in $\pi$. We show that computing $\mathrm{Ferm}_k$ is #P-hard under Turing reductions for any constant $k > 2$, and is $\oplus$P-hard for $k=2$, even for the adjacency matrices of planar graphs. As a consequence, unless the polynomial hierarchy collapses, it is impossible to compute the immanant $\mathrm{Imm}_\lambda\,A$ as a function of the Young diagram $\lambda$ in polynomial time, even if the width of $\lambda$ is restricted to be at most 2. In particular, if $\mathrm{Ferm}_2$ is in P, or if $\mathrm{Imm}_\lambda$ is in P for all $\lambda$ of width 2, then $\mathrm{NP} \subseteq \mathrm{RP}$ and there are randomized polynomial-time algorithms for NP-complete problems.
In many real-world networks, nodes have class labels, attributes, or variables that affect the network's topology. If the topology of the network is known but the labels of the nodes are hidden, we would like to select a small subset of nodes such that, if we knew their labels, we could accurately predict the labels of all the other nodes. We develop an active learning algorithm for this problem which uses information-theoretic techniques to choose which nodes to explore. We test our algorithm on networks from three different domains: a social network, a network of English words that appear adjacently in a novel, and a marine food web. Our algorithm makes no initial assumptions about how the groups connect, and performs well even when faced with quite general types of network structure. In particular, we do not assume that nodes of the same class are more likely to be connected to each other---only that they connect to the rest of the network in similar ways.
We prove new lower bounds on the likely size of a maximum independent set in a random graph with a given average degree. Our method is a weighted version of the second moment method, where we give each independent set a weight based on the total degree of its vertices.
As we add rigid bars between points in the plane, at what point is there a giant (linear-sized) rigid component, which can be rotated and translated, but which has no internal flexibility? If the points are generic, this depends only on the combinatorics of the graph formed by the bars. We show that if this graph is an Erdos-Renyi random graph G(n,c/n), then there exists a sharp threshold for a giant rigid component to emerge. For c < c_2, w.h.p. all rigid components span one, two, or three vertices, and when c > c_2, w.h.p. there is a giant rigid component. The constant c_2 ≈ 3.588 is the threshold for 2-orientability, discovered independently by Fernholz and Ramachandran and Cain, Sanders, and Wormald in SODA'07. We also give quantitative bounds on the size of the giant rigid component when it emerges, proving that it spans a (1-o(1))-fraction of the vertices in the (3+2)-core. Informally, the (3+2)-core is the maximal induced subgraph obtained by starting from the 3-core and then inductively adding vertices with 2 neighbors in the graph obtained so far.
Approximate algebraic structures play a defining role in arithmetic combinatorics and have found remarkable applications to basic questions in number theory and pseudorandomness. Here we study approximate representations of finite groups: functions f:G -> U_d such that Pr[f(xy) = f(x) f(y)] is large, or more generally Exp_{x,y} ||f(xy) - f(x)f(y)||^2 is small, where x and y are uniformly random elements of the group G and U_d denotes the unitary group of degree d. We bound these quantities in terms of the ratio d / d_min where d_min is the dimension of the smallest nontrivial representation of G. As an application, we bound the extent to which a function f : G -> H can be an approximate homomorphism where H is another finite group. We show that if H's representations are significantly smaller than G's, no such f can be much more homomorphic than a random function. We interpret these results as showing that if G is quasirandom, that is, if d_min is large, then G cannot be embedded in a small number of dimensions, or in a less-quasirandom group, without significant distortion of G's multiplicative structure. We also prove that our bounds are tight by showing that minors of genuine representations and their polar decompositions are essentially optimal approximate representations.
We show that there exists a family of irreducible representations R_i (of finite groups G_i) such that, for any constant t, the average of R_i over t uniformly random elements g_1, ..., g_t of G_i has operator norm 1 with probability approaching 1 as i limits to infinity. This settles a conjecture of Wigderson in the negative.
Quantum computers can break the RSA and El Gamal public-key cryptosystems, since they can factor integers and extract discrete logarithms. If we believe that quantum computers will someday become a reality, we would like to have \emph{post-quantum} cryptosystems which can be implemented today with classical computers, but which will remain secure even in the presence of quantum attacks. In this article we show that the McEliece cryptosystem over \emph{well-permuted, well-scrambled} linear codes resists precisely the attacks to which the RSA and El Gamal cryptosystems are vulnerable---namely, those based on generating and measuring coset states. This eliminates the approach of strong Fourier sampling on which almost all known exponential speedups by quantum algorithms are based. Specifically, we show that the natural case of the Hidden Subgroup Problem to which the McEliece cryptosystem reduces cannot be solved by strong Fourier sampling, or by any measurement of a coset state. We start with recent negative results on quantum algorithms for Graph Isomorphism, which are based on particular subgroups of size two, and extend them to subgroups of arbitrary structure, including the automorphism groups of linear codes. This allows us to obtain the first rigorous results on the security of the McEliece cryptosystem in the face of quantum adversaries, strengthening its candidacy for post-quantum cryptography.
Consider a group G such that there is no homomorphism f: G -> {+1,-1}. In that case, how close can we come to such a homomorphism? We show that if f has zero expectation, then the probability that f(xy) = f(x) f(y), where x, y are chosen uniformly and independently from G, is at most (1/2)(1 + 1/sqrt(d)), where d is the dimension of G's smallest nontrivial irreducible representation. For the alternating group A_n, for instance, d = n-1. On the other hand, A_n contains a subgroup isomorphic to S_{n-2}, whose parity function we can extend to obtain an f for which this probability is (1/2)(1 + 1/(n choose 2)). Thus the extent to which f can be "more homomorphic" than a random function from A_n to {+1,-1} lies between O(n^{-1/2}) and Omega(n^{-2}).
In many networks, vertices have hidden attributes, or types, that are correlated with the network's topology. If the topology is known but these attributes are not, and if learning the attributes is costly, we need a method for choosing which vertex to query in order to learn as much as possible about the attributes of the other vertices. We assume the network is generated by a stochastic block model, but we make no assumptions about its assortativity or disassortativity. We choose which vertex to query using two methods: 1) maximizing the mutual information between its attributes and those of the others (a well-known approach in active learning) and 2) maximizing the average agreement between two independent samples of the conditional Gibbs distribution. Experimental results show that both these methods do much better than simple heuristics. They also consistently identify certain vertices as important by querying them early on.
A version of group cohomology for locally compact groups and Polish modules has previously been developed using a bar resolution restricted to measurable cochains. That theory was shown to enjoy analogs of most of the standard algebraic properties of group cohomology, but various analytic features of those cohomology groups were only partially understood. This paper re-examines some of those issues. At its heart is a simple dimension-shifting argument which enables one to `regularize' measurable cocycles, leading to some simplifications in the description of the cohomology groups. A range of consequences are then derived from this argument. First, we prove that for target modules that are Fréchet spaces, the cohomology groups agree with those defined using continuous cocycles, and hence they vanish in positive degrees when the acting group is compact. Using this, we then show that for Fréchet, discrete or toral modules the cohomology groups are continuous under forming inverse limits of compact base groups, and also under forming direct limits of discrete target modules. Lastly, these results together enable us to establish various circumstances under which the measurable-cochains cohomology groups coincide with others defined using sheaves on a semi-simplicial space associated to the underlying group, or sheaves on a classifying space for that group. We also prove in some cases that the natural quotient topologies on the measurable-cochains cohomology groups are Hausdorff.
We present a simple, natural #P-complete problem. Let G be a directed graph, and let k be a positive integer. We define q(G;k) as follows. At each vertex v, we place a k-dimensional complex vector x_v. We take the product, over all edges (u,v), of the inner product <x_u,x_v>. Finally, q(G;k) is the expectation of this product, where the x_v are chosen uniformly and independently from all vectors of norm 1 (or, alternately, from the Gaussian distribution). We show that q(G;k) is proportional to G's cycle partition polynomial, and therefore that it is #P-complete for any k>1.
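A Monte Carlo sketch of q(G;k) for a directed 3-cycle, using the Gaussian variant of the definition (the normalization differs from unit vectors, so only proportionality is meaningful here):

```python
# Estimate E[ prod over edges (u,v) of <x_u, x_v> ] with complex Gaussian x_v.
# For the directed 3-cycle, Wick pairing gives 8k under this convention.
import numpy as np

rng = np.random.default_rng(0)
k = 2
edges = [(0, 1), (1, 2), (2, 0)]          # a directed 3-cycle
n_vertices, samples = 3, 100000

total = 0.0 + 0.0j
for _ in range(samples):
    x = rng.normal(size=(n_vertices, k)) + 1j * rng.normal(size=(n_vertices, k))
    prod = 1.0 + 0.0j
    for (u, v) in edges:
        prod *= np.vdot(x[u], x[v])       # inner product <x_u, x_v>
    total += prod
print("estimated q ~", total / samples, "  (prediction 8k =", 8 * k, ")")
```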
Celebrated work of Jerrum, Sinclair, and Vigoda has established that the permanent of a 0,1 matrix can be approximated in randomized polynomial time by using a rapidly mixing Markov chain. A separate strand of the literature has pursued the possibility of an alternate, purely algebraic, polynomial-time approximation scheme. These schemes work by replacing each 1 with a random element of an algebra A, and considering the determinant of the resulting matrix. When A is noncommutative, this determinant can be defined in several ways. We show that for estimators based on the conventional determinant, the critical ratio of the second moment to the square of the first--and therefore the number of trials we need to obtain a good estimate of the permanent--is (1 + O(1/d))^n when A is the algebra of d by d matrices. These results can be extended to group algebras, and semi-simple algebras in general. We also study the symmetrized determinant of Barvinok, showing that the resulting estimator has small variance when d is large enough. However, for constant d--the only case in which an efficient algorithm is known--we show that the critical ratio exceeds 2^n / n^O(d). Thus our results do not provide a new polynomial-time approximation scheme for the permanent. Indeed, they suggest that the algebraic approach to approximating the permanent faces significant obstacles. We obtain these results using diagrammatic techniques in which we express matrix products as contractions of tensor products. When these matrices are random, in either the Haar measure or the Gaussian measure, we can evaluate the trace of these products in terms of the cycle structure of a suitably random permutation. In the symmetrized case, our estimates are then derived by a connection with the character theory of the symmetric group.
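The scalar case, where the algebra A is just the real numbers with Gaussian entries, is the classical Godsil-Gutman-style estimator; this sketch shows the unbiasedness and the empirically large second-moment ratio that the abstract is concerned with:

```python
# Replace each 1 of a 0-1 matrix with an independent N(0, 1) entry;
# then det(B)^2 is an unbiased estimator of the permanent.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 5
A = (rng.random((n, n)) < 0.6).astype(float)

# Exact permanent by brute force (n! terms; fine for n = 5).
perm = sum(np.prod([A[i, s[i]] for i in range(n)])
           for s in permutations(range(n)))

trials = 100000
B = A * rng.normal(size=(trials, n, n))   # broadcast A over a stack of samples
ests = np.linalg.det(B) ** 2
ratio = (ests ** 2).mean() / ests.mean() ** 2
print(f"permanent = {perm:.0f}   estimator mean ~ {ests.mean():.2f}   "
      f"second-moment ratio ~ {ratio:.1f}")
```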
Sep 20 2008
math.AP arXiv:0809.3261v1
We show uniqueness of solutions to the two-phase Stefan problem which have signed measures as initial data.
We reduce a case of the hidden subgroup problem (HSP) in SL(2; q), PSL(2; q), and PGL(2; q), three related families of finite groups of Lie type, to efficiently solvable HSPs in the affine group AGL(1; q). These groups act on projective space in an almost 3-transitive way, and we use this fact in each group to distinguish conjugates of its Borel (upper triangular) subgroup, which is also the stabilizer subgroup of an element of projective space. Our observation is mainly group-theoretic, and as such breaks little new ground in quantum algorithms. Nonetheless, these appear to be the first positive results on the HSP in finite simple groups such as PSL(2; q).
It is known that any quantum algorithm for Graph Isomorphism that works within the framework of the hidden subgroup problem (HSP) must perform highly entangled measurements across \Omega(n \log n) coset states. One of the only known models for how such a measurement could be carried out efficiently is Kuperberg's algorithm for the HSP in the dihedral group, in which quantum states are adaptively combined and measured according to the decomposition of tensor products into irreducible representations. This ``quantum sieve'' starts with coset states, and works its way down towards representations whose probabilities differ depending on, for example, whether the hidden subgroup is trivial or nontrivial. In this paper we show that no such approach can produce a polynomial-time quantum algorithm for Graph Isomorphism. Specifically, we consider the natural reduction of Graph Isomorphism to the HSP over the wreath product S_n ≀ Z_2. Using a recently proved bound on the irreducible characters of S_n, we show that no algorithm in this family can solve Graph Isomorphism in less than e^{\Omega(\sqrt{n})} time, no matter what adaptive rule it uses to select and combine quantum states. In particular, algorithms of this type can offer essentially no improvement over the best known classical algorithms, which run in time e^{O(\sqrt{n} \log n)}.
It is known that any quantum algorithm for Graph Isomorphism that works within the framework of the hidden subgroup problem (HSP) must perform highly entangled measurements across Omega(n log n) coset states. One of the only known models for how such a measurement could be carried out efficiently is Kuperberg's algorithm for the HSP in the dihedral group, in which quantum states are adaptively combined and measured according to the decomposition of tensor products into irreducible representations. This ``quantum sieve'' starts with coset states, and works its way down towards representations whose probabilities differ depending on, for example, whether the hidden subgroup is trivial or nontrivial. In this paper we give strong evidence that no such approach can succeed for Graph Isomorphism. Specifically, we consider the natural reduction of Graph Isomorphism to the HSP over the wreath product S_n ≀ Z_2. We show, modulo a group-theoretic conjecture regarding the asymptotic characters of the symmetric group, that no matter what rule we use to adaptively combine quantum states, there is a constant b > 0 such that no algorithm in this family can solve Graph Isomorphism in e^{b \sqrt{n}} time. In particular, such algorithms are essentially no better than the best known classical algorithms, whose running time is e^{O(\sqrt{n} \log n)}.
Since the discovery of the figure-8 orbit for the three-body problem [Moore 1993], a large number of periodic orbits of the n-body problem with equal masses and beautiful symmetries have been discovered. However, most of those that have appeared in the literature are either planar or are obtained from perturbations of planar orbits. Here we exhibit a number of new three-dimensional periodic n-body orbits with equal masses and cubic symmetry. We found these orbits numerically by minimizing the action as a function of the trajectories' Fourier coefficients. We also give numerical evidence that a planar 3-body orbit first found in [Hénon, 1976], rediscovered by [Moore 1993], and found to exist for different masses by [Nauenberg 2001], is dynamically stable. It is a pleasure to dedicate this paper to Philip Holmes.
Most current methods for identifying coherent structures in spatially-extended systems rely on prior information about the form which those structures take. Here we present two new approaches to automatically filter the changing configurations of spatial dynamical systems and extract coherent structures. One, local sensitivity filtering, is a modification of the local Lyapunov exponent approach suitable to cellular automata and other discrete spatial systems. The other, local statistical complexity filtering, calculates the amount of information needed for optimal prediction of the system's behavior in the vicinity of a given point. By examining the changing spatiotemporal distributions of these quantities, we can find the coherent structures in a variety of pattern-forming cellular automata, without needing to guess or postulate the form of that structure. We apply both filters to elementary and cyclical cellular automata (ECA and CCA) and find that they readily identify particles, domains and other more complicated structures. We compare the results from ECA with earlier ones based upon the theory of formal languages, and the results from CCA with a more traditional approach based on an order parameter and free energy. While sensitivity and statistical complexity are equally adept at uncovering structure, they are based on different system properties (dynamical and probabilistic, respectively), and provide complementary information.
Jun 02 2005
math.AP arXiv:math/0506022v1
We consider local solutions of the two-phase Stefan problem with a "mushy" region. We show that if a (distributional) solution u is locally square integrable then the temperature is continuous.
We compute the probability of satisfiability of a class of random Horn-SAT formulae, motivated by a connection with the nonemptiness problem of finite tree automata. In particular, when the maximum clause length is 3, this model displays a curve in its parameter space along which the probability of satisfiability is discontinuous, ending in a second-order phase transition where it becomes continuous. This is the first case in which a phase transition of this type has been rigorously established for a random constraint satisfaction problem.
Understanding the structure of the Internet graph is a crucial step for building accurate network models and designing efficient algorithms for Internet applications. Yet, obtaining its graph structure is a surprisingly difficult task, as edges cannot be explicitly queried. Instead, empirical studies rely on traceroutes to build what are essentially single-source, all-destinations, shortest-path trees. These trees only sample a fraction of the network's edges, and a recent paper by Lakhina et al. found empirically that the resulting sample is intrinsically biased. For instance, the observed degree distribution under traceroute sampling exhibits a power law even when the underlying degree distribution is Poisson. In this paper, we study the bias of traceroute sampling systematically, and, for a very general class of underlying degree distributions, calculate the likely observed distributions explicitly. To do this, we use a continuous-time realization of the process of exposing the BFS tree of a random graph with a given degree distribution, calculate the expected degree distribution of the tree, and show that it is sharply concentrated. As example applications of our machinery, we show how traceroute sampling finds power-law degree distributions in both delta-regular and Poisson-distributed random graphs. Thus, our work puts the observations of Lakhina et al. on a rigorous footing, and extends them to nearly arbitrary degree distributions.
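A sketch of the phenomenon using NetworkX, with a single BFS tree standing in for traceroute sampling and arbitrary toy parameters:

```python
# Degrees observed in a BFS tree of a sparse Poisson random graph are far
# more skewed than the underlying Poisson(8) degrees.
import networkx as nx

G = nx.fast_gnp_random_graph(20000, 8 / 20000, seed=0)   # Poisson(8) degrees
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
T = nx.bfs_tree(G, source=next(iter(G.nodes)))

for name, H in (("underlying graph", G), ("BFS tree", T)):
    degs = [d for _, d in H.degree()]
    frac_deg1 = sum(1 for d in degs if d == 1) / len(degs)
    print(f"{name}: max degree {max(degs)}, "
          f"fraction of degree-1 vertices {frac_deg1:.3f}")
```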
We study a Hele-Shaw problem with a mushy region obtained as a Mesa type limit of one phase Stefan problems in exterior domains. We study the convergence, determine some of the qualitative properties and regularity of the unique limiting solution, and prove regularity of the free boundary of this limit under very general conditions on the initial data. Indeed, our results handle changes in topology and multiple injection slots.
Given any integer d >= 3, let k be the smallest integer such that d < 2k log k. We prove that with high probability the chromatic number of a random d-regular graph is k, k+1, or k+2, and that if (2k-1) log k < d < 2k log k then the chromatic number is either k+1 or k+2.
Many NP-complete constraint satisfaction problems appear to undergo a "phase transition'' from solubility to insolubility when the constraint density passes through a critical threshold. In all such cases it is easy to derive upper bounds on the location of the threshold by showing that above a certain density the first moment (expectation) of the number of solutions tends to zero. We show that in the case of certain symmetric constraints, considering the second moment of the number of solutions yields nearly matching lower bounds for the location of the threshold. Specifically, we prove that the threshold for both random hypergraph 2-colorability (Property B) and random Not-All-Equal k-SAT is 2^{k-1} ln 2 - O(1). As a corollary, we establish that the threshold for random k-SAT is of order Theta(2^k), resolving a long-standing open problem.
We solve the problem of one-dimensional Peg Solitaire. In particular, we show that the set of configurations that can be reduced to a single peg forms a regular language, and that a linear-time algorithm exists for reducing any configuration to the minimum number of pegs. We then look at the impartial two-player game, proposed by Ravikumar, where two players take turns making peg moves, and whichever player is left without a move loses. We calculate some simple nim-values and discuss when the game separates into a disjunctive sum of smaller games. In the version where a series of hops can be made in a single move, we show that neither the P-positions nor the N-positions (i.e. wins for the previous or next player) are described by a regular or context-free language.
Jun 10 2000
math.CO arXiv:math/0006066v2
Using mostly elementary considerations, we find out who wins the game of Domineering on all rectangular boards of width 2, 3, 5, and 7. We obtain bounds on other boards as well, and prove the existence of polynomial-time strategies for playing on all boards of width 2, 3, 4, 5, 7, 9, and 11. We also comment briefly on toroidal and cylindrical boards.
We solve the problem of one-dimensional peg solitaire. In particular, we show that the set of configurations that can be reduced to a single peg forms a regular language, and that a linear-time algorithm exists for reducing any configuration to the minimum number of pegs.
Mar 07 2000
math.CO arXiv:math/0003039v1
It is well-known that the question of whether a given finite region can be tiled with a given set of tiles is NP-complete. We show that the same is true for the right tromino and square tetromino on the square lattice, or for the right tromino alone. In the process, we show that Monotone 1-in-3 Satisfiability is NP-complete for planar cubic graphs. In higher dimensions, we show NP-completeness for the domino and straight tromino for general regions on the cubic lattice, and for simply-connected regions on the four-dimensional hypercubic lattice.
May 04 1999
math.CO arXiv:math/9905012v1
We calculate the generating functions for the number of tilings of rectangles of various widths by the right tromino, the $L$ tetromino, and the $T$ tetromino. This allows us to place lower bounds on the entropy of tilings of the plane by each of these. For the $T$ tetromino, we also derive a lower bound from the solution of the Ising model in two dimensions.
We examine the one-humped map at the period-doubling transition to chaos, and ask whether its long-term memory is stack-like (last-in, first-out) or queue-like (first-in, first-out). We show that it can be recognized by a real-time automaton with one queue, or two stacks, and give several new grammatical characterizations of it. We argue that its memory has a queue-like character, since a single stack does not suffice. We also show that its dynamical zeta function, generating function and growth function are transcendental. The same results hold for any period-multiplying cascade. We suggest that transcendentality might be a sign of dynamical phase transitions in other systems as well.
We propose a definition of QNC, the quantum analog of the efficient parallel class NC. We exhibit several useful gadgets and prove that various classes of circuits can be parallelized to logarithmic depth, including circuits for encoding and decoding standard quantum error-correcting codes, or more generally any circuit consisting of controlled-not gates, controlled pi-shifts, and Hadamard gates. Finally, while we note the Quantum Fourier Transform can be parallelized to linear depth, we conjecture that an even simpler `staircase' circuit cannot be parallelized to less than linear depth, and might be used to prove that QNC < QP.