Papers updated in last 183 days (1725 results)

Last updated:  2024-10-24
$\Sigma$-Check: Compressed $\Sigma$-protocol Theory from Sum-check
Shang Gao, Chen Qian, Tianyu Zheng, Yu Guo, and Bin Xiao
The theory of compressed $\Sigma$-protocols [AC20, ACF21] provides a standardized framework for creating efficient $\Sigma$-protocols. This method involves two main phases: first, amortization, which combines multiple instances that satisfy a homomorphic relation into a single instance; and second, Bulletproofs compression [BBB+18], which minimizes communication overhead to a logarithmic scale during the verification of the combined instance. For high-degree polynomial (non-homomorphic) relations, a circuit-based linearization technique is used to transform each instance into a linear relation, resulting in a protocol with at least linear complexity. In this paper, we provide a direct method to extend the compressed $\Sigma$-protocol theory to polynomial relations, named $\Sigma$-Check. One major contribution of $\Sigma$-Check is to eliminate the linear cost associated with linearization in amortization. To achieve this, we employ a sum-check during this phase to ensure a logarithmic communication cost. To the best of our knowledge, $\Sigma$-Check is the first work to achieve a logarithmic amortization for polynomial relations. Nevertheless, without linearization, the amortized relation may not be linear, which hinders us from using Bulletproofs compression. To overcome this problem, we employ another sum-check during the compression phase to effectively manage high-degree relations. Additionally, we propose several variants of our techniques and adapt them for arithmetic circuit relations. We also demonstrate the practicality of our compressed $\Sigma$-protocol theory through applications such as binary proofs, range proofs, and partial knowledge proofs. Our basic protocols are initially based on the Discrete Logarithm (DL) assumption, and we have further extended these to incorporate the Strong-RSA assumption and the Generalized Discrete Logarithm Representation (GDLR) assumption. Our work expands the scope of compressed $\Sigma$-protocol theory and provides a robust foundation for real-world cryptographic applications.
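To make the central tool concrete, below is a minimal sum-check interaction in Python (our illustration, not the paper's protocol) for a multilinear polynomial given by its evaluation table over $\{0,1\}^n$ modulo a prime. It shows where the logarithmic communication comes from: each of the $n = \log N$ rounds costs the prover one linear polynomial, i.e., two field elements. The field size and all names are our choices.
```python
# Minimal sum-check (our illustration) for a multilinear polynomial f given by
# its evaluation table over {0,1}^n, modulo a prime P. Communication is
# logarithmic in the table size: in each round the prover sends one linear
# polynomial g_i, represented by the two field elements g_i(0), g_i(1).
import random

P = 2**61 - 1  # prime field modulus (toy choice)

def fold(table, r):
    """Fix the first variable of the multilinear extension to r."""
    half = len(table) // 2
    return [(table[i] + r * (table[half + i] - table[i])) % P for i in range(half)]

def sumcheck(table):
    claim = sum(table) % P                 # prover's claimed sum over {0,1}^n
    rounds = 0
    while len(table) > 1:
        half = len(table) // 2
        g0, g1 = sum(table[:half]) % P, sum(table[half:]) % P  # g_i(0), g_i(1)
        assert (g0 + g1) % P == claim      # verifier's round check
        r = random.randrange(P)            # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % P   # next claim: g_i(r), g_i is linear
        table = fold(table, r)
        rounds += 1
    assert table[0] == claim               # final check: one evaluation of f
    return rounds

print("rounds:", sumcheck([random.randrange(P) for _ in range(2**10)]))  # 10
```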
Last updated:  2024-10-24
Theoretical Approaches to Solving the Shortest Vector Problem in NP-Hard Lattice-Based Cryptography with Post-SUSY Theories of Quantum Gravity in Polynomial Time
Trevor Nestor
The Shortest Vector Problem (SVP) is a cornerstone of lattice-based cryptography, underpinning the security of numerous cryptographic schemes like NTRU. Given its NP-hardness, efficient solutions to SVP have profound implications for both cryptography and computational complexity theory. This paper presents an innovative framework that integrates concepts from quantum gravity, noncommutative geometry, spectral theory, and post-SUSY particle physics to address SVP. By mapping high-dimensional lattice points to spin foam networks and by means of Hamiltonian engineering, it is theoretically possible to devise new algorithms that leverage the interactions that topologically protected Majorana fermion particles have with the gravitational field, through the spectral action principle, to loop through these spin foam networks; SVP vectors could then be encoded onto the spectrum of the corresponding Dirac operators within the system. We establish a novel approach that leverages post-SUSY physics and theories of quantum gravity to achieve algorithmic speedups beyond those expected of conventional quantum computers. This interdisciplinary methodology not only proposes potential polynomial-time algorithms for SVP but also bridges gaps between theoretical physics and cryptographic applications, providing further insights into the Riemann Hypothesis (RH) and the Hilbert-Polya Conjecture.
Last updated:  2024-10-23
Convex Consensus with Asynchronous Fallback
Andrei Constantinescu, Diana Ghinea, Roger Wattenhofer, and Floris Westermann
Convex Consensus (CC) allows a set of parties to agree on a value $v$ inside the convex hull of their inputs with respect to a predefined abstract convexity notion, even in the presence of byzantine parties. In this work, we focus on achieving CC in the best-of-both-worlds paradigm, i.e., simultaneously tolerating at most $t_s$ corruptions if communication is synchronous, and at most $t_a \leq t_s$ corruptions if it is asynchronous. Our protocol is randomized, which is a requirement under asynchrony, and we prove that it achieves optimal resilience. In the process, we introduce communication primitives tailored to the network-agnostic model. These are a deterministic primitive allowing parties to obtain intersecting views (Gather), and a randomized primitive leading to identical views (Agreement on a Core-Set). Our primitives provide stronger guarantees than previous counterparts, making them of independent interest.
Last updated:  2024-10-23
Public-Key Cryptography through the Lens of Monoid Actions
Hart Montgomery and Sikhar Patranabis
We provide a novel view of public-key cryptography by showing full equivalences of certain primitives to "hard" monoid actions. More precisely, we show that key exchange and two-party computation are exactly equivalent to monoid actions with certain structural and hardness properties. To the best of our knowledge, this is the first "natural" characterization of the mathematical structure inherent to any key exchange or two-party computation protocol, and the first explicit proof of the necessity of mathematical structure for public-key cryptography. We then utilize these characterizations to show new black-box separation results. Concretely, we obtain the following results: - Two-Party Key Exchange: We show that any two-party noninteractive key exchange protocol is equivalent to the existence of an abelian monoid action equipped with a natural hardness property, namely (distributional) unpredictability. More generally, we show that any $k$-round (two-party) key exchange protocol is essentially equivalent to the existence of a (distributional) unpredictable monoid action with certain commutator-like properties. Rudich (Crypto '91) shows a black-box separation of $k$-round and $(k+1)$-round key exchange for any $k$; we use our generic primitive here to formalize this result and extend it to efficient key exchange protocols (where communication is poly($k$)). - Two-Party Computation: We show that any maliciously secure two-party computation protocol is also equivalent to a monoid action with commutator-like properties and certain hardness guarantees. We then use a generic version of this primitive to show a black-box separation between $k$-round semi-honest secure two-party computation and $(k+1)$-round maliciously secure two-party computation. This yields the first black-box separation (to our knowledge) between $k$-round and $(k+1)$-round maliciously secure two-party computation protocols.
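The abelian-monoid-action view can be made concrete with the classic instance (our toy example, not the paper's general construction): the multiplicative monoid of integers acts on $\mathbb{Z}_p^*$ by exponentiation, and commutativity of the action alone yields key agreement; this recovers textbook Diffie-Hellman, with the required hardness being exactly the (assumed) unpredictability of the action. Parameters below are toy-sized.
```python
# Toy abelian monoid action key exchange (our example): the multiplicative
# monoid of integers acts on Z_p^* by m . x = x^m mod p. Agreement of the two
# derived keys uses only commutativity, a.(b.x) = b.(a.x); security would rest
# on the (assumed) unpredictability of the action.
import secrets

p = 2**64 - 59                  # a prime (toy size, not secure)
x = 5                           # public base element of the action

def act(m, y):                  # the monoid action m . y
    return pow(y, m, p)

a = secrets.randbelow(p - 2) + 1     # Alice's secret monoid element
b = secrets.randbelow(p - 2) + 1     # Bob's secret monoid element

msg_a, msg_b = act(a, x), act(b, x)  # exchanged messages: a.x and b.x
k_alice, k_bob = act(a, msg_b), act(b, msg_a)
assert k_alice == k_bob              # a.(b.x) = b.(a.x): textbook Diffie-Hellman
```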
Last updated:  2024-10-23
Keeping Up with the KEMs: Stronger Security Notions for KEMs and automated analysis of KEM-based protocols
Cas Cremers, Alexander Dax, and Niklas Medinger
Key Encapsulation Mechanisms (KEMs) are a critical building block for hybrid encryption and modern security protocols, notably in the post-quantum setting. Given the asymmetric public key of a recipient, the primitive establishes a shared secret key between sender and recipient. In recent years, a large number of abstract designs and concrete implementations of KEMs have been proposed, e.g., in the context of the NIST process for post-quantum primitives. In this work, we (i) establish stronger security notions for KEMs, and (ii) develop a symbolic analysis method to analyze security protocols that use KEMs. First, we generalize existing security notions for KEMs in the computational setting, introduce several stronger security notions, and prove their relations. Our new properties formalize in which sense outputs of the KEM uniquely determine, i.e., bind, other values. Our new binding properties can be used, e.g., to prove the absence of attacks that were not captured by prior security notions, such as re-encapsulation attacks. Second, we develop a family of fine-grained symbolic models that correspond to our hierarchy of computational security notions, and are suitable for the automated analysis of KEM-based security protocols. We encode our models as a library in the framework of the Tamarin prover. Given a KEM-based protocol, our approach can automatically derive the minimal binding properties required from the KEM; or, if also given a concrete KEM, can analyze if the protocol meets its security goals. In case studies, Tamarin automatically discovers, e.g., that the key exchange protocol proposed in the original Kyber paper requires stronger properties from the KEM than were proven in the paper.
Last updated:  2024-10-22
$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan and Antigoni Polychroniadou
Our work aims to minimize interaction in secure computation due to the high cost and challenges associated with communication rounds, particularly in scenarios with many clients. In this work, we revisit the problem of secure aggregation in the single-server setting where a single evaluation server can securely aggregate client-held individual inputs. Our key contribution is the introduction of One-shot Private Aggregation ($\mathsf{OPA}$) where clients speak only once (or even choose not to speak) per aggregation evaluation. Since each client communicates only once per aggregation, this simplifies managing dropouts and dynamic participation, contrasting with multi-round protocols and aligning with plaintext secure aggregation, where clients interact only once. We construct $\mathsf{OPA}$ based on LWR, LWE, class groups, and DCR, and demonstrate applications to privacy-preserving Federated Learning (FL) where clients speak once. This is a sharp departure from prior multi-round FL protocols whose study was initiated by Bonawitz et al. (CCS, 2017). Moreover, unlike the YOSO (You Only Speak Once) model for general secure computation, $\mathsf{OPA}$ eliminates complex committee selection protocols to achieve adaptive security. Beyond asymptotic improvements, $\mathsf{OPA}$ is practical, outperforming state-of-the-art solutions. We benchmark logistic regression classifiers for two datasets, while also building an MLP classifier to train on MNIST, CIFAR-10, and CIFAR-100 datasets. We build two flavors of $\mathsf{OPA}$: (1) from (threshold) key homomorphic PRF and (2) from seed homomorphic PRG and secret sharing. The threshold key homomorphic PRF addresses shortcomings observed in previous works that relied on the DDH- and LWR-based PRFs of Boneh et al. (CRYPTO, 2013), making it an independent contribution of our work. Moreover, we also present new threshold key homomorphic PRFs based on class groups, DCR, or the LWR assumption.
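To see why a key-homomorphic PRF enables one-shot masking, here is a minimal sketch (ours, not the paper's scheme; the toy $F(k,t) = k \cdot H(t) \bmod p$ is key homomorphic but not a secure PRF, and the real protocol keeps the aggregation key secret-shared among committee members rather than held in the clear).
```python
# Toy one-shot aggregation from a key-homomorphic PRF (our sketch). The toy
# PRF F(k, t) = k * H(t) mod p satisfies F(k1, t) + F(k2, t) = F(k1 + k2, t),
# so a single evaluation under the summed key strips every client's mask at
# once; each client sends one message per round and never interacts again.
import hashlib, secrets

p = 2**127 - 1                        # toy prime modulus

def H(t):                             # hash a round label into Z_p
    return int.from_bytes(hashlib.sha256(t).digest(), "big") % p

def F(k, t):                          # key homomorphic, but NOT a secure PRF
    return (k * H(t)) % p

clients = [(secrets.randbelow(p), secrets.randbelow(100)) for _ in range(5)]
t = b"round-42"

masked = [(x + F(k, t)) % p for (k, x) in clients]   # each client speaks once
k_sum = sum(k for (k, _) in clients) % p             # aggregation key

total = (sum(masked) - F(k_sum, t)) % p
assert total == sum(x for (_, x) in clients)
print("aggregate:", total)
```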
Last updated:  2024-10-22
AQQUA: Augmenting Quisquis with Auditability
George Papadoulis, Danai Balla, Panagiotis Grontas, and Aris Pagourtzis
We propose AQQUA: a digital payment system that combines auditability and privacy. AQQUA extends Quisquis by adding two authorities: one for registration and one for auditing. These authorities do not intervene in the everyday transaction processing; as a consequence, the decentralized nature of the cryptocurrency is not disturbed. Our construction is account-based. An account consists of an updatable public key which functions as a cryptographically unlinkable pseudonym, and of commitments to the balance, the total amount of coins spent, and the total amount of coins received. In order to participate in the system, a user creates an initial account with the registration authority. To protect their privacy, whenever the user wants to transact they create unlinkable new accounts by updating their public key and the total number of accounts they own (maintained in committed form). The audit authority may request an audit at will. The user must prove in zero-knowledge that all their accounts are compliant with specific policies. We formally define a security model capturing the properties that a private and auditable digital payment system should possess and we analyze the security of AQQUA under this model.
Last updated:  2024-10-22
Dynamic Decentralized Functional Encryption from Pairings in the Standard Model
Duy Nguyen
Dynamic Decentralized Functional Encryption (DDFE), introduced by Chotard et al. (CRYPTO'20), represents a robust generalization of (Multi-Client) Functional Encryption. It allows users to dynamically join and contribute private inputs to individually controlled joint functions without requiring a trusted authority. Recently, Shi et al. (PKC'23) proposed the first Multi-Client Functional Encryption scheme for function-hiding inner products (FH-IP) without relying on random oracles. Unfortunately, their construction still requires a trusted key authority, leaving open the question of whether a full-fledged FH-IP-DDFE can exist in the standard model. In this work, we answer this question affirmatively by introducing Updatable Pseudorandom Zero Sharing, a novel concept that provides both the critical functionality and security properties needed to construct secure DDFE schemes in the standard model. Our second contribution is a novel proof strategy, which preserves adaptive security when transforming any functional encryption scheme for FH-IP into FH-IP-DDFE. Together, these two techniques enable a modular construction of FH-IP-DDFE that is secure against adaptive message and key queries in the standard model. Additionally, our pseudorandom zero-sharing scheme is highly versatile, enabling the first DDFE for attribute-weighted sums in the standard model, complementing the recent ROM-based construction by Agrawal et al. (CRYPTO'23).
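The non-updatable core of pseudorandom zero sharing is folklore and fits in a few lines (our sketch; the paper's updatable variant adds the update capability on top): each pair of parties shares a PRF key, and signed pairwise PRF values cancel, so shares look pseudorandom yet always sum to zero.
```python
# Folklore pseudorandom zero sharing (our sketch of the non-updatable core):
# parties i < j share a PRF key k_ij; for a label, party i adds +F(k_ij, label)
# for each j > i and -F(k_ij, label) for each j < i. Signs cancel pairwise, so
# the n shares always sum to zero while each share looks pseudorandom.
import hashlib, itertools, secrets

p, n = 2**61 - 1, 4
keys = {pair: secrets.token_bytes(16)
        for pair in itertools.combinations(range(n), 2)}

def F(k, label):                          # PRF instantiated with SHA-256 (toy)
    return int.from_bytes(hashlib.sha256(k + label).digest(), "big") % p

def share(i, label):
    s = 0
    for j in range(n):
        if j != i:
            v = F(keys[(min(i, j), max(i, j))], label)
            s += v if j > i else -v
    return s % p

shares = [share(i, b"epoch-7") for i in range(n)]
assert sum(shares) % p == 0               # a fresh zero sharing for this label
print(shares)
```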
Last updated:  2024-10-22
Basic Lattice Cryptography: The concepts behind Kyber (ML-KEM) and Dilithium (ML-DSA)
Vadim Lyubashevsky
This tutorial focuses on describing the fundamental mathematical concepts and design decisions used in the two ``main'' lattice schemes standardized by NIST and included in the CNSA 2.0 algorithmic suite. They are the KEM / encryption scheme CRYSTALS-Kyber (ML-KEM) and the signature scheme CRYSTALS-Dilithium (ML-DSA). In addition, we will also give the main ideas behind other lattice-based KEMs like Frodo and NTRU.
Last updated:  2024-10-22
Spec-o-Scope: Cache Probing at Cache Speed
Gal Horowitz, Eyal Ronen, and Yuval Yarom
Over the last two decades, microarchitectural side channels have been the focus of a large body of research on the development of new attack techniques, exploiting them to attack various classes of targets and designing mitigations. One line of work focuses on increasing the speed of the attacks, achieving higher levels of temporal resolution that can allow attackers to learn finer-grained information. The most recent addition to this line of work is Prime+Scope [CCS '21], which only requires a single access to the L1 cache to confirm the absence of victim activity in a cache set. While significantly faster than prior attacks, Prime+Scope is still an order of magnitude slower than cache access. In this work, we set out to close this gap. We draw on techniques from research into microarchitectural weird gates, software constructs that exploit transient execution to perform arbitrary computation on cache state. We design the Spec-o-Scope gate, a new weird gate that performs 10 cache probes in quick succession, and forms the basis for our eponymous attack. Our Spec-o-Scope attack achieves an order of magnitude improvement in temporal resolution compared to the previous state-of-the-art of Prime+Scope, reducing the measurement time from ~70 cycles to only 5 --- only one cycle more than an L1 cache access. We experimentally verify that our attack can detect timing differences in a 5 cycle resolution. Finally, using our Spec-o-Scope attack, we show the first microarchitectural side-channel attack on an unmodified AES S-box-based implementation, which uses generic CPU features and does not require manipulation of the operating system's scheduler.
Last updated:  2024-10-22
Conjunctive Searchable Symmetric Encryption from Hard Lattices
Debadrita Talapatra, Sikhar Patranabis, and Debdeep Mukhopadhyay
Searchable Symmetric Encryption (SSE) supports efficient keyword searches over encrypted outsourced document collections while minimizing information leakage. All practically efficient SSE schemes supporting conjunctive queries rely crucially on quantum-broken cryptographic assumptions (such as discrete-log hard groups) to achieve compact storage and fast query processing. On the other hand, quantum-safe SSE schemes based on purely symmetric-key crypto-primitives either do not support conjunctive searches, or are practically inefficient. In particular, there exists no quantum-safe yet practically efficient conjunctive SSE scheme from lattice-based hardness assumptions. We solve this open question by proposing Oblivious Post-Quantum Secure Cross Tags (OQXT) – the first lattice-based practically efficient and highly scalable conjunctive SSE scheme. The technical centerpiece of OQXT is a novel oblivious cross-tag generation protocol with provable security guarantees derived from lattice-based hardness assumptions. We prove the post-quantum simulation security of OQXT with respect to a rigorously defined and thoroughly analyzed leakage profile. We then present a prototype implementation of OQXT and experimentally validate its practical efficiency and scalability over extremely large real-world databases. Our experiments show that OQXT has competitive end-to-end search latency when compared with the best (quantum-broken) conjunctive SSE schemes.
Last updated:  2024-10-22
Relations among new CCA security notions for approximate FHE
Sébastien Canard, Caroline Fontaine, Duong Hieu Phan, David Pointcheval, Marc Renard, and Renaud Sirdey
In a recent Eurocrypt'24 paper, Manulis and Nguyen have proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered, as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve their work by defining and investigating a variant of their security notion which is suitable for the more general case where approximate FHE are included. As the passive security of approximate FHE schemes is more appropriately captured by CPA$^D$ rather than CPA security, we start from the former notion to define our new vCCA$^D$ security notion. Although we show that vCCA and vCCA$^D$ are equivalent when the correctness assumption holds, we establish that vCCA$^D$ security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPA$^D$ security of increasing strength. This allows us to clarify the relationship between vCCA security and CPA$^D$ security, and to reveal that the security notions landscape is much simpler for correct FHE than when approximate ones are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPA$^D$ and vCCA$^D$ security. Lastly, we also give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCA$^D$ security. As a result, vCCA$^D$ security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
Last updated:  2024-10-22
Compact Pseudorandom Functional Encryption from Evasive LWE
Shweta Agrawal, Simran Kumari, and Shota Yamada
We provide the first construction of compact Functional Encryption (FE) for pseudorandom functionalities from the evasive LWE and LWE assumptions. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for every input seen by the adversary. This yields the first compact FE for a nontrivial class of functions which does not rely on pairings. We demonstrate the power of our new tool by using it to achieve optimal parameters for both key-policy and ciphertext-policy Attribute Based Encryption (ABE) schemes for circuits of unbounded depth, from just the LWE and evasive LWE assumptions. This improves prior work along the twin axes of assumptions and performance. In more detail, this allows us to: (i) replace the assumption of circular evasive LWE used in the work of Hsieh, Lin and Luo (FOCS 2023) by plain evasive LWE, (ii) remove the need for the circular tensor LWE assumption in the work of Agrawal, Kumari and Yamada (CRYPTO, 2024), (iii) improve parameters obtained by both aforementioned works to achieve asymptotic optimality. Previously, optimal parameters for ABE schemes were only achieved using compact FE for P (Jain, Lin and Luo, Eurocrypt 2023) – we show that compact FE for a much weaker class (albeit with incomparable security) suffices. Thus we obtain the first optimal ABE schemes for unbounded depth circuits which can be conjectured post-quantum secure. Along the way, we define and construct a new primitive which we term laconic pseudorandom obfuscation from the same assumptions – this may be of independent interest.
Last updated:  2024-10-22
Maximizing the Utility of Cryptographic Setups: Secure PAKEs, with either functional RO or CRS
Yuting Xiao, Rui Zhang, and Hong-Sheng Zhou
For Password-Based Authenticated Key Exchange (PAKE), an idealized setup such as random oracle (RO) or a trusted setup such as common reference string (CRS) is a must in the universal composability (UC) framework (Canetti, FOCS 2001). Given the potential failure of a CRS or RO setup, it is natural to consider distributing trust among the two setups, resulting in a CRS-or-RO-setup (i.e., CoR-setup). However, the infeasibility highlighted by Katz et al. (PODC 2014) suggested that it is impossible to construct UC-secure PAKE protocols with a straightforward CoR-setup (i.e., either the CRS is functional but the RO is compromised, or the RO is functional but the CRS is compromised). To circumvent this impossibility result, we investigate how to design UC-secure PAKE protocols with a fine-grained CoR-setup, where either the CRS is functional but the RO is non-functional, or vice versa. Different from the straightforward CoR-setup, a fine-grained non-functional setup is not necessarily completely compromised and fully controlled by the adversary; instead, we consider that this non-functional setup may still offer certain security properties. Certainly, the non-functional setup alone should be useless for achieving UC-security. We present a UC-secure PAKE protocol under two conditions: either the CRS is functional while the RO is non-functional (falling back to a collision-resistant hash function), or the RO is functional while the CRS is non-functional (falling back to a global CRS). Before presenting our construction, we first prove that a global CRS setup alone is insufficient for achieving UC-secure PAKE. This impossibility result highlights the non-triviality of our approach. To obtain our construction, we introduce several techniques as follows: (1) We propose a new variant of Non-Interactive Key Exchange (NIKE), called homomorphic NIKE with associated functions, which captures key properties of existing RO-based PAKE protocols. This new primitive serves as an important component in our construction. (2) We develop a ``Brute Force'' extraction strategy which allows us to provide a security analysis for our UC-secure PAKE with a fine-grained CoR-setup for polynomial-sized password spaces. (3) We introduce a novel password space extension technique that enables the expansion of PAKE protocols from polynomial-sized to arbitrary-sized password spaces. (4) Finally, to ensure provable security for our password space extension in UC-secure PAKEs, we modify existing PAKE functionalities to prevent responses that reveal the correctness of password guesses. This is a reasonable adjustment, as our protocol provides only implicit authentication. We further present a PAKE protocol in the BPR framework (Bellare, Pointcheval, Rogaway, EuroCrypt 2000), assuming either the CRS is functional while the RO falls back to a collision-resistant hash function, or the RO is functional but the CRS trapdoor is allowed to be learned by the adversary.
Last updated:  2024-10-22
Scalable Mixnets from Two-Party Mercurial Signatures on Randomizable Ciphertexts
Masayuki Abe, Masaya Nanri, Miyako Ohkubo, Octavio Perez Kempner, Daniel Slamanig, and Mehdi Tibouchi
A mixnet developed by Hébant et al. (PKC '20) employs certified ciphertexts that carry homomorphic signatures from an authority, reducing the complexity of the shuffling proof, and thereby enabling efficient large-scale deployment. However, their privacy relies on trusting the authority, making it unsuitable for voting, the primary application of mixnets. Building on the prior work, we leverage recent advances in equivalence class signatures by replacing homomorphic signatures with newly developed two-party mercurial signatures on randomizable ciphertexts. This allows users and the authority to jointly sign ciphertexts and randomize keys, ciphertexts, and signatures, all while preserving the embedded messages. We demonstrate that our mixnet is suitable for receipt-free voting without requiring trust in the signing authority for privacy. To assess scalability, we compare our approach to other scalable mixnet solutions, implement our protocols, and provide concrete performance benchmarks. Our results show that our mixnet significantly outperforms existing alternatives in both computation and communication efficiency. Specifically, verifying the mixing process for 50,000 ciphertexts takes just 135 seconds on a commodity laptop using ten mixers, illustrating the practical viability of our approach.
Last updated:  2024-10-21
FOLEAGE: $\mathbb{F}_4$OLE-Based Multi-Party Computation for Boolean Circuits
Maxime Bombar, Dung Bui, Geoffroy Couteau, Alain Couvreur, Clément Ducros, and Sacha Servan-Schreiber
Secure Multi-party Computation (MPC) allows two or more parties to compute any public function over their privately-held inputs, without revealing any information beyond the result of the computation. Modern protocols for MPC generate a large amount of input-independent preprocessing material called multiplication triples, in an offline phase. This preprocessing can later be used by the parties to efficiently instantiate an input-dependent online phase computing the function. To date, the state-of-the-art secure multi-party computation protocols in the preprocessing model are tailored to secure computation of arithmetic circuits over large fields and require little communication in the preprocessing phase, typically $O(N \cdot m)$ to generate $m$ triples among $N$ parties. In contrast, when it comes to computing preprocessing for computations that are naturally represented as Boolean circuits, the state-of-the-art techniques have not evolved since the 1980s, and in particular, require every pair of parties to execute a large number of oblivious transfers before interacting to convert them to $N$-party triples, which induces an $\Omega(N^2 \cdot m)$ communication overhead. In this paper, we introduce FOLEAGE, which addresses this gap by introducing an efficient preprocessing protocol tailored to Boolean circuits. FOLEAGE exhibits excellent performance: It generates $m$ multiplication triples over $\mathbb{F}_2$ using only $N \cdot m + O(N^2 \cdot \log m)$ bits of communication for $N$ parties, and can concretely produce over 12 million triples per second in the 2-party setting on one core of a commodity machine. Our result builds upon an efficient Pseudorandom Correlation Generator (PCG) for multiplication triples over the field $\mathbb{F}_4$. Roughly speaking, a PCG enables parties to stretch a short seed into a large number of pseudorandom correlations non-interactively, which greatly improves the efficiency of the offline phase in MPC protocols. Our construction significantly outperforms the state-of-the-art, which we demonstrate via a prototype implementation. This is achieved by introducing a number of protocol-level, algorithmic-level, and implementation-level optimizations on the recent PCG construction of Bombar et al. (Crypto 2023) from the Quasi-Abelian Syndrome Decoding assumption.
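For context, the online phase that consumes such triples is standard Beaver multiplication over $\mathbb{F}_2$ (our sketch of the classical technique, not part of FOLEAGE itself): holding XOR-shares of inputs $x, y$ and of a random triple $(a, b, c)$ with $c = ab$, the parties open only the masked values $d = x \oplus a$ and $e = y \oplus b$ and finish locally.
```python
# Standard Beaver-triple multiplication over F_2 (our sketch): the online phase
# that consumes the triples FOLEAGE preprocesses. Parties hold XOR-shares of
# x, y and of a triple (a, b, c) with c = a & b; only d = x ^ a and e = y ^ b
# are opened, and x & y = c ^ (d & b) ^ (e & a) ^ (d & e) is computed on shares.
import secrets

def xor_share(bit):                      # split a bit into two XOR shares
    r = secrets.randbelow(2)
    return r, bit ^ r

for x in (0, 1):
    for y in (0, 1):
        a, b = secrets.randbelow(2), secrets.randbelow(2)
        c = a & b                        # preprocessed multiplication triple
        xs, ys, as_, bs, cs = map(xor_share, (x, y, a, b, c))
        d = xs[0] ^ as_[0] ^ xs[1] ^ as_[1]   # opened: d = x ^ a
        e = ys[0] ^ bs[0] ^ ys[1] ^ bs[1]     # opened: e = y ^ b
        z0 = cs[0] ^ (d & bs[0]) ^ (e & as_[0]) ^ (d & e)  # party 0 adds d & e
        z1 = cs[1] ^ (d & bs[1]) ^ (e & as_[1])            # party 1
        assert z0 ^ z1 == x & y
print("Beaver multiplication over F_2: OK")
```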
Last updated:  2024-10-21
UC Non-Interactive, Proactive, Threshold ECDSA with Identifiable Aborts
Ran Canetti, Rosario Gennaro, Steven Goldfeder, Nikolaos Makriyannis, and Udi Peled
We present a distributed ECDSA protocol, for any number of signatories. The protocol improves on that of the authors (CCS'20), which in turn builds on the Gennaro & Goldfeder and Lindell & Nof protocols (CCS '18). Specifically: ** Only the last round of the protocol requires knowledge of the message, and the other rounds can take place in a preprocessing stage, yielding a non-interactive threshold ECDSA protocol. ** The protocol withstands adaptive corruption of signatories. Furthermore, it includes a periodic refresh mechanism and guarantees proactive security. ** The protocol achieves accountability by identifying corrupted signatories in case of failure to generate a valid signature (identifiable abort). Furthermore, we formulate a distributed signature ideal functionality within the UC framework that guarantees unforgeability, proactive security, and identifiable abort, and show that the protocol realizes this functionality in the global random oracle model, assuming Strong RSA, DDH, semantic security of the Paillier encryption, and a somewhat enhanced variant of existential unforgeability of ECDSA. This combination of properties (low latency, compatibility with cold-wallet architectures, proactive security, identifiable abort and universally composable security) makes our protocol a good fit for threshold wallets for ECDSA-based cryptocurrencies.
Last updated:  2024-10-21
Computational Analysis of Plausibly Post-Quantum-Secure Recursive Arguments of Knowledge
Dustin Ray and Paulo L. Barreto
With the recent standardization of post-quantum cryptographic algorithms, research efforts have largely remained centered on public key exchange and encryption schemes. Argument systems, which allow a party to efficiently argue the correctness of a computation, have received comparatively little attention regarding their quantum-resilient design. These computational integrity frameworks often rely on cryptographic assumptions, such as pairings or group operations, which are vulnerable to quantum attacks. In this work, we present a fully implemented post-quantum secure argument system that compresses unbounded computation into a constant-sized space: the prover can argue the truth of a computation of any size, and the verifier can verify correctness in constant time. This work shows an extension of utility for computational integrity statements into the quantum domain. We provide real-world performance metrics demonstrating that post-quantum secure argument systems not only exist but can outperform classical systems in both efficiency and scalability, making such systems an attractive choice for practical deployment.
Last updated:  2024-10-21
SEC: Symmetric Encrypted Computation via Fast Look-ups
Debadrita Talapatra, Nimish Mishra, Arnab Bag, Sikhar Patranabis, and Debdeep Mukhopadhyay
Encrypted computation allows a client to securely outsource the storage and processing of sensitive private data to an untrusted third party cloud server. Fully homomorphic encryption (FHE) allows computing arbitrary functions over encrypted data, but incurs huge overheads and does not practically scale to large databases. In contrast, slightly weaker yet efficient constructions, such as Searchable Symmetric Encryption (SSE), support lookup-based evaluations of a restricted class of Boolean circuits over symmetrically encrypted data. In this paper, we investigate the use of SSE to efficiently perform arbitrary Boolean circuit evaluations over symmetrically encrypted data via look-ups. To this end, we propose Symmetric Encrypted Computation (SEC): the first practically efficient and provably secure lookup-based construction, analogous to traditional FHE, that supports evaluation of arbitrary Boolean circuits over symmetrically encrypted data. SEC relies on purely symmetric-key cryptoprimitives and achieves flexible performance versus leakage trade-offs. SEC extends and generalizes the functional capabilities of SSE, while inheriting its data privacy guarantees and desirable performance benefits. We provide a concrete construction of SEC and analyze its security with respect to a rigorously defined and thoroughly analyzed leakage profile. We also present a prototype implementation of SEC and experimentally validate its practical efficiency. Our experiments show that SEC outperforms state-of-the-art FHE schemes (such as Torus FHE) substantially, with around 1000× speed-up in basic Boolean gate evaluations. We further showcase the scalability of SEC for functions with multi-bit inputs via experiments performing encrypted evaluation of the entire AES-128 circuit, as well as three max-pooling layers of the AlexNet architecture. For both sets of experiments, SEC outperforms state-of-the-art and accelerated FHE implementations by 1000× in terms of processing time, while incurring 250× lower storage.
Last updated:  2024-10-21
A zero-trust swarm security architecture and protocols
Alex Shafarenko
This report presents the security protocols and general trust architecture of the SMARTEDGE swarm computing platform. Part 1 describes the coordination protocols for use in a swarm production environment, e.g. a smart factory, and Part 2 deals with crowd-sensing scenarios characteristic of traffic-control swarms.
Last updated:  2024-10-21
SIGNITC: Supersingular Isogeny Graph Non-Interactive Timed Commitments
Knud Ahrens
Non-Interactive Timed Commitment schemes (NITC) allow anyone to open any commitment after a specified delay $t_{\mathrm{fd}}$. This is useful for sealed-bid auctions and as a primitive for more complex protocols. We present the first NITC without repeated squaring or theoretical black-box algorithms like NIZK proofs or one-way functions. It has fast verification, an almost arbitrary delay, and satisfies IND-CCA hiding and perfect binding. Our protocol is based on isogenies between supersingular elliptic curves, making it presumably quantum secure, and all algorithms have been implemented as part of SQISign or other well-known isogeny-based cryptosystems. Additionally, it needs no trusted setup and can use known primes for SIKE or SQISign.
Last updated:  2024-10-21
State of the art of HFE variants: Is it possible to repair HFE with appropriate perturbations?
Benoit Cogliati, Gilles Macario-Rat, Jacques Patarin, and Pierre Varjabedian
HFE (that stands for Hidden Field Equations) belongs to multivariate cryptography and was designed by Jacques Patarin in 1996 as a public key trapdoor suitable for encryption or signature. This original basic version is unfortunately known to have a super-polynomial attack, but, as imagined since the beginning, it comes with various variants, which one can describe as combinations of “modifiers”. In this work, we first present the state of the art of these HFE modifiers, along with their effect on the complexity of the main cryptanalysis techniques against HFE-based schemes. This then allows us to identify a combination of two modifiers that has not yet been explored and may still be secure with efficient parameters. Based on our analysis, we propose a new signature scheme that offers extremely short signature sizes, with reasonable public key sizes and performance. In particular, we rely on the classical Feistel-Patarin technique to reduce signature sizes below two times the security parameter.
Last updated:  2024-10-21
SoK: Descriptive Statistics Under Local Differential Privacy
René Raab, Pascal Berrang, Paul Gerhart, and Dominique Schröder
Local Differential Privacy (LDP) provides a formal guarantee of privacy that enables the collection and analysis of sensitive data without revealing any individual's data. While LDP methods have been extensively studied, there is a lack of a systematic and empirical comparison of LDP methods for descriptive statistics. In this paper, we first provide a systematization of LDP methods for descriptive statistics, comparing their properties and requirements. We demonstrate that several mean estimation methods based on sampling from a Bernoulli distribution are equivalent in the one-dimensional case and introduce methods for variance estimation. We then empirically compare methods for mean, variance, and frequency estimation. Finally, we provide recommendations for the use of LDP methods for descriptive statistics and discuss their limitations and open questions.
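One of the Bernoulli-based one-dimensional mean estimators the paper compares can be stated compactly (our sketch of the standard mechanism; the constants follow from the unbiasedness and $\varepsilon$-LDP constraints): a user with $x \in [-1, 1]$ reports $\pm B$ with a probability linear in $x$, and the reports average to an unbiased estimate of the mean.
```python
# Standard Bernoulli-based one-dimensional LDP mean estimation (our sketch).
# A user with x in [-1, 1] reports +B or -B, where B = (e^eps + 1)/(e^eps - 1)
# and Pr[+B] = 1/2 + x*(e^eps - 1)/(2*(e^eps + 1)). The report is unbiased
# (E[report] = x) and eps-LDP, so averaging reports estimates the mean.
import math, random

def ldp_report(x, eps):
    t = math.exp(eps)
    B = (t + 1) / (t - 1)
    p_plus = 0.5 + x * (t - 1) / (2 * (t + 1))
    return B if random.random() < p_plus else -B

random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(200_000)]
est = sum(ldp_report(x, eps=1.0) for x in xs) / len(xs)
print(f"true mean {sum(xs)/len(xs):+.4f}, LDP estimate {est:+.4f}")
```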
Last updated:  2024-10-21
Pseudorandom Multi-Input Functional Encryption and Applications
Shweta Agrawal, Simran Kumari, and Shota Yamada
We construct the first multi-input functional encryption (MIFE) and indistinguishability obfuscation (iO) schemes for pseudorandom functionalities, where the output of the functionality is pseudorandom for every input seen by the adversary. Our MIFE scheme relies on LWE and evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) for constant arity functions, and a strengthening of evasive LWE for polynomial arity. Thus, we obtain the first MIFE and iO schemes for a nontrivial functionality from conjectured post-quantum assumptions. Along the way, we identify subtle issues in the proof of witness encryption from evasive LWE by prior work and believe that a similar strengthening of evasive LWE should also be required for their proof, for the same reasons as ours. We demonstrate the power of our new tools via the following applications: 1. Multi Input Predicate Encryption for Constant Arity. Assuming evasive LWE and LWE, we construct a multi-input predicate encryption scheme (MIPE) for P, supporting constant arity. The only prior work to support MIPE for P with constant arity by Agrawal et al. (Crypto, 2023) relies on a strengthening of Tensor LWE in addition to LWE and evasive LWE. 2. Multi Input Predicate Encryption for Polynomial Arity. Assuming a stronger variant of evasive LWE and LWE, we construct MIPE for P for polynomial arity. MIPE for polynomial arity supporting P was not known before, to the best of our knowledge. 3. Two Party ID Based Key Exchange. Assuming a stronger variant of evasive LWE and LWE, along with Decision Bilinear Diffie-Hellman, we provide the first two-party ID-based Non-Interactive Key Exchange (ID-NIKE) scheme in the standard model. This leads to the first ID-NIKE in the standard model without using multilinear maps or indistinguishability obfuscation. 4. Instantiating the Random Oracle. We use our pseudorandom iO to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014) such as full-domain hash signature based on trapdoor permutations and more. Our tools of MIFE and iO for pseudorandom functionalities appear quite powerful and yield extremely simple constructions when used in applications. We believe they provide a new pathway for basing “extreme” cryptography, which has so far required full-fledged iO, on the presumably weaker evasive LWE in the post-quantum regime.
Last updated:  2024-10-21
Drifting Towards Better Error Probabilities in Fully Homomorphic Encryption Schemes
Olivier Bernard, Marc Joye, Nigel P. Smart, and Michael Walter
There are two security notions for FHE schemes: the traditional notion of IND-CPA, and a more stringent notion of IND-CPA$^D$. The notions are equivalent if the FHE schemes are perfectly correct; however, for schemes with negligible failure probability, the FHE parameters needed to obtain IND-CPA$^D$ security can be much larger than those needed to obtain IND-CPA security. This paper uses the notion of ciphertext drift in order to understand the practical difference between IND-CPA and IND-CPA$^D$ security in schemes such as FHEW, TFHE and FINAL. This notion allows us to define a modulus switching operation (the main culprit for the difference in parameters) such that one does not require adapting IND-CPA cryptographic parameters to meet the IND-CPA$^D$ security level. Further, the extra cost incurred by the new techniques has no noticeable performance impact in practical applications. The paper also formally defines a stronger version of IND-CPA$^D$ security called sIND-CPA$^D$, which is proved to be strictly separated from the IND-CPA$^D$ notion. A criterion for turning an IND-CPA$^D$ secure public-key encryption scheme into an sIND-CPA$^D$ one is also provided.
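The drift phenomenon itself is easy to reproduce numerically (a toy illustration of plain LWE modulus switching, ours, not the paper's corrected operation): switching a ciphertext from modulus $Q$ to $q$ rounds every coefficient, and the accumulated rounding term is exactly the ciphertext drift that IND-CPA$^D$ parameter choices must absorb.
```python
# Toy LWE modulus switching illustrating ciphertext drift (our sketch, not the
# paper's corrected operation). Switching (a, b) from modulus Q to q rounds
# each entry; the rounding errors, weighted by the secret, accumulate into the
# drift term that dominates the post-switch noise.
import random

Q, q, n = 2**32, 2**11, 512
s = [random.randrange(2) for _ in range(n)]              # binary secret key
m, e = 1, random.randrange(-8, 9)                        # message bit, small noise

a = [random.randrange(Q) for _ in range(n)]
b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (Q // 2)) % Q

def switch(v):                                           # round(q * v / Q) mod q
    return ((v * q + Q // 2) // Q) % q

a2, b2 = [switch(ai) for ai in a], switch(b)
phase = (b2 - sum(ai * si for ai, si in zip(a2, s))) % q
err = (phase - m * (q // 2)) % q
err = err if err < q // 2 else err - q                   # centered drift + noise
print("post-switch error (mostly drift):", err)
assert abs(err) < q // 4                                 # still decrypts to m
```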
Last updated:  2024-10-21
Free-XOR Gate Bootstrapping
Chunling Chen, Xianhui Lu, Ruida Wang, Zhihao Li, Xuan Shen, and Benqiang Wei
The FHEW-like gate bootstrapping framework operates in a 2-bit plaintext space, where logic gates such as NAND, XOR, and AND are implemented by adding two ciphertexts and extracting the most significant bit. However, each gate operation requires bootstrapping with a primary cost of one blind rotation, which is expensive when processing circuit operations for applications. We propose a novel Free-XOR gate bootstrapping framework based on a single-bit plaintext space, in which the XOR operation is realized by simply adding two ciphertexts, resulting in an almost free computational cost. To form a minimal complete set for logical operations, we design an algorithm for the AND gate within this framework. The AND gate cost of our Free-XOR gate bootstrapping involves two blind rotations. However, by utilizing a single-bit plaintext space to enhance noise tolerance and swapping some operations of the bootstrapping process, we can adopt a more compact parameter setting, which in turn accelerates the speed of blind rotation. We propose an instantiation of the NTRU-based AND gate operation, which requires two blind rotations. Despite the additional rotation, the overall computational cost is marginally lower than that of the state-of-the-art gate bootstrapping scheme LLW+ [TCHES24], which utilizes only a single blind rotation. In addition, our approach achieves a significant reduction in key size, reducing it by a factor of 3.3 compared to LLW+ [TCHES24].
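The encoding that makes XOR free can be demonstrated with a toy symmetric LWE scheme (our illustration of the single-bit plaintext space, not the paper's full construction): a bit $m$ is encoded as $m \cdot q/2$, so adding two ciphertexts adds plaintexts modulo 2, at the cost of additive noise growth that the framework's compact parameters must absorb.
```python
# Toy single-bit LWE scheme (ours) showing the Free-XOR encoding: bit m is
# encoded at m * q/2, so ciphertext addition adds plaintexts mod 2 and the XOR
# gate costs one vector addition, no bootstrapping. Noise grows additively;
# AND would still need bootstrapping, as in the paper's framework.
import random

q, n = 2**16, 256
s = [random.randrange(2) for _ in range(n)]

def enc(m):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randrange(-4, 5)
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def dec(ct):
    a, b = ct
    phase = (b - sum(x * y for x, y in zip(a, s))) % q
    return round(phase / (q / 2)) % 2

def xor_gate(ct1, ct2):                  # the whole XOR gate: one addition
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert dec(xor_gate(enc(x), enc(y))) == x ^ y
print("XOR via ciphertext addition: OK")
```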
Last updated:  2024-10-21
Quantum Security of a Compact Multi-Signature
Shaoquan Jiang
With the rapid advance in quantum computing, quantum security is now an indispensable property for any cryptographic system. In this paper, we study how to prove the security of a complex cryptographic system in the quantum random oracle model. We first give a variant of Zhandry's compressed quantum random oracle ($\mathbf{CStO}$), called compressed quantum random oracle with adaptive special points ($\mathbf{CStO}_s$). Then, we extend the on-line extraction technique of Don et al. (EUROCRYPT'22) from $\mathbf{CStO}$ to $\mathbf{CStO}_s$. We also extend the random experiment technique of Liu and Zhandry (CRYPTO'19) for extracting the $\mathbf{CStO}$ query that witnesses the future adversarial output. With these preparations, a systematic security proof in the quantum random oracle model can start with a random $\mathbf{CStO}$ experiment (that extracts the witness for the future adversarial output) and then convert this game to one involving $\mathbf{CStO}_s$. Next, the on-line extraction technique for $\mathbf{CStO}_s$ can be applied to extract the witness for any on-line commitment. With this strategy, we give a security proof of our recent compact multi-signature framework that is converted from any weakly secure linear ID scheme. We also prove the quantum security of our recent lattice realization of this linear ID scheme, by iteratively applying the weakly collapsing protocol technique of Liu and Zhandry (CRYPTO 2019). Combining these two results, we obtain the first quantum security proof for a compact multi-signature.
Last updated:  2024-10-21
Practical Asynchronous MPC from Lightweight Cryptography
Atsuki Momose
We present an asynchronous secure multi-party computation (MPC) protocol that is practically efficient. Our protocol can evaluate any arithmetic circuit with linear communication in the number of parties per multiplication gate, while relying solely on computationally lightweight cryptography such as hash function and symmetric encryption. Our protocol is optimally resilient and tolerates $t$ malicious parties among $n = 3t+1$ parties. At the technical level, we manage to apply the \emph{player-elimination} paradigm to asynchronous MPC. This framework enables the detection and eviction of cheating parties by repeatedly attempting to generate Beaver triples. Once all malicious parties are eliminated, honest parties can proceed with efficient Beaver triple generation. While this approach is standard in synchronous MPC, it presents several technical challenges when adopted in an asynchronous network, which we address in this work.
Last updated:  2024-10-20
Rate-1 Statistical Non-Interactive Zero-Knowledge
Pedro Branco, Nico Döttling, and Akshayaram Srinivasan
We give the first construction of a rate-1 statistical non-interactive zero-knowledge argument of knowledge. For the $\mathsf{circuitSAT}$ language, our construction achieves a proof length of $|w| + |w|^\epsilon \cdot \mathsf{poly}(\lambda)$ where $w$ denotes the witness, $\lambda$ is the security parameter, $\epsilon$ is a small constant less than 1, and $\mathsf{poly}(\cdot)$ is a fixed polynomial that is independent of the instance or the witness size. The soundness of our construction follows from either the LWE assumption, or the $O(1)$-$\mathsf{LIN}$ assumption on prime-order groups with efficiently computable bilinear maps, or the sub-exponential DDH assumption. Previously, Gentry et al. (Journal of Cryptology, 2015) achieved NIZKs with statistical soundness and computational zero-knowledge with the aforementioned proof length by relying on the Learning with Errors (LWE) assumption.
Last updated:  2024-10-20
OT-PCA: New Key-Recovery Plaintext-Checking Oracle Based Side-Channel Attacks on HQC with Offline Templates
Haiyue Dong and Qian Guo
In this paper, we introduce OT-PCA, a novel approach for conducting Plaintext-Checking (PC) oracle based side-channel attacks, specifically designed for Hamming Quasi-Cyclic (HQC). By calling the publicly accessible HQC decoder, we build offline templates that enable efficient extraction of soft information for hundreds of secret positions with just a single PC oracle call. Our method addresses critical challenges in optimizing key-related information extraction, including maximizing decryption output entropy and ensuring error pattern independence, through the use of genetic-style algorithms. Extensive simulations demonstrate that our new attack method significantly reduces the required number of oracle calls, achieving a 2.4-fold decrease for hqc-128 and even greater reductions for hqc-192 and hqc-256 compared to current state-of-the-art methods. Notably, the attack shows strong resilience against inaccuracy in the PC oracle—when the oracle accuracy decreases to 95%, the reduction factor in oracle call requirements increases to 7.6 for hqc-128. Lastly, a real-world evaluation conducted using power analysis on a platform with an ARM Cortex-M4 microcontroller validates the practical applicability and effectiveness of our approach.
Last updated:  2024-10-20
Updatable Privacy-Preserving Blueprints
Bernardo David, Felix Engelmann, Tore Frederiksen, Markulf Kohlweiss, Elena Pagnin, and Mikhail Volkhov
Privacy-preserving blueprint schemes (Kohlweiss et al., EUROCRYPT'23) offer a mechanism for safeguarding users' privacy while allowing for specific legitimate controls by a designated auditor agent. These schemes enable users to create escrows encrypting the result of evaluating a function $y=P(t,x)$, with $P$ being publicly known, $t$ a secret used during the auditor's key generation, and $x$ the user's private input. Crucially, escrows only disclose the blueprinting result $y=P(t,x)$ to the designated auditor, even in cases where the auditor is fully compromised. The original definition and construction only support the evaluation of functions $P$ on an input $x$ provided by a single user. We address this limitation by introducing updatable privacy-preserving blueprint schemes (UPPB), which enhance the original notion with the ability for multiple users to non-interactively update the private user input $x$ while blueprinting. Moreover, UPPBs contain a proof that $y$ is the result of a sequence of valid updates, while revealing nothing else about the private inputs $\{x_i\}$ of updates. As in the case of privacy-preserving blueprints, we first observe that UPPBs can be realized via a generic construction for arbitrary predicates $P$ based on FHE and NIZKs. Our main result is UBlu, an efficient instantiation for a specific predicate comparing the values $x$ and $t$, where $x$ is the cumulative sum of users' private inputs and $t$ is a fixed private value provided by the auditor in the setup phase. This rather specific setting already finds interesting applications such as privacy-preserving anti-money laundering and location tracking, and can be extended to support more generic predicates. From the technical perspective, we devise a novel technique to keep the escrow size concise, independent of the number of updates, and reasonable for practical applications. We achieve this via a novel characterization of malleability for the algebraic NIZK by Couteau and Hartmann (CRYPTO'20) that allows for an additive update function.
Last updated:  2024-10-20
Universally Composable Non-Interactive Zero-Knowledge from Sigma Protocols via a New Straight-line Compiler
Megan Chen, Pousali Dey, Chaya Ganesh, Pratyay Mukherjee, Pratik Sarkar, and Swagata Sasmal
Non-interactive zero-knowledge proofs (NIZK) are essential building blocks in threshold cryptosystems like multiparty signatures, distributed key generation, and verifiable secret sharing, allowing parties to prove correct behavior without revealing secrets. Furthermore, universally composable (UC) NIZKs enable seamless composition in the larger cryptosystems. A popular way to construct NIZKs is to compile interactive protocols using the Fiat-Shamir transform. Unfortunately, a Fiat-Shamir transformed NIZK requires rewinding the adversary and is not $\textit{straight-line extractable}$, making it at odds with UC. Using Fischlin's transform gives straight-line extractability, but at the expense of many repetitions of the underlying protocol leading to poor concrete efficiency and difficulty in setting parameters. In this work, we propose a simple new transform that compiles a Sigma protocol for an algebraic relation into a UC-NIZK protocol $\textit{without any overheads of repetition}$. - Given a Sigma protocol for proving m algebraic statements over n witnesses, we construct a compiler to transform it into a $\textit{straight-line extractable}$ protocol using an additively homomorphic encryption scheme AHE. Our prover executes the Sigma protocol's prover once and computes 2n encryptions. The verification process involves running the Sigma protocol verifier once and then computing n encryptions, which are homomorphically verified against the prover-generated encryptions. - We apply the Fiat-Shamir transform to the above straight-line extractable Sigma protocol to obtain a UC-NIZK. We instantiate AHE using class group-based encryption where the public key of the encryption scheme is obliviously sampled using a suitable hash function. This yields a UC-NIZK protocol in the random oracle model.
Last updated:  2024-10-20
Low-Communication Updatable PSI from Asymmetric PSI and PSU
Guowei Ling, Peng Tang, and Weidong Qiu
Private Set Intersection (PSI) allows two mutually untrusted parties to compute the intersection of their private sets without revealing additional information. In general, PSI operates in a static setting, where the computation is performed only once on the input sets of both parties. Badrinarayanan et al. (\textit{PoPETs} 2022) initiated the study of Updatable PSI (UPSI), which extends this capability to dynamically updating sets, enabling both parties to securely compute the intersection as their sets are modified while incurring significantly less overhead than re-executing a conventional PSI. However, existing UPSI protocols either do not support arbitrary deletion of elements or incur high computational and communication overhead. In this work, we combine asymmetric PSI with Private Set Union (PSU) to present a novel UPSI protocol. Our UPSI protocol supports arbitrary additions and deletions of elements, offering a flexible approach to update sets. Furthermore, our protocol enjoys extremely low communication overhead, scaling linearly with the size of the update set while remaining independent of the total set size. We implement our protocol and compare it against state-of-the-art conventional PSI and UPSI protocols. Experimental results demonstrate that our UPSI protocol incurs $587$ to $755$ times less communication overhead than the recently proposed UPSI protocol (\textit{AsiaCrypt} 2024) that supports arbitrary additions and deletions. Moreover, our UPSI protocol has a significant advantage in low-bandwidth environments due to the exceptionally low communication overhead. Specifically, with an input size of $2^{22}$ and the size of the addition/deletion set being $2^{10}$, the existing UPSI protocol requires approximately $1650.45$, $1789.5$, and $3458.1$ seconds at bandwidths of $200$ Mbps, $50$ Mbps, and $5$ Mbps, respectively, whereas our UPSI protocol only requires around $13.01$, $13.75$, and $22.53$ seconds under the same conditions. Our open-source implementation is available at: \href{https://github.com/ShallMate/upsi}{https://github.com/ShallMate/upsi}.
Last updated:  2024-10-19
Good things come to those who wait: Dishonest-Majority Coin-Flipping Requires Delay Functions
Joseph Bonneau, Benedikt Bünz, Miranda Christ, and Yuval Efron
We reconsider Cleve's famous 1986 impossibility result on coin-flipping without an honest majority. Recently proposed constructions have circumvented this limit by using cryptographic delay functions. We show that this is necessary: a (weak) notion of delay functions is in fact implied by the existence of a protocol circumventing Cleve's impossibility. However, such delay functions are weaker than those used in existing constructions. We complete our result by showing an equivalence: these weaker delay functions are also sufficient to construct not just fair dishonest-majority coin-flipping protocols, but also the stronger notion of a distributed randomness beacon. We also show that this is possible in a weaker communication model than previously considered, without the assumption of reliable broadcast or a public bulletin board.
Last updated:  2024-10-19
Volatile and Persistent Memory for zkSNARKs via Algebraic Interactive Proofs
Alex Ozdemir, Evan Laufer, and Dan Boneh
In verifiable outsourcing, an untrusted server runs an expensive computation and produces a succinct proof (called a SNARK) of the results. In many scenarios, the computation accesses a RAM that the server maintains a commitment to (persistent RAM) or that is initially zero (volatile RAM). But, SNARKs for such scenarios are limited by the high overheads associated with existing techniques for RAM checking. We develop new proofs about volatile, persistent, and sparse persistent RAM that reduce SNARK proving times. Our results include both asymptotic and concrete improvements---including a proving time reduction of up to 51.3$\times$ for persistent RAM. Along the way, we apply two tools that may be of independent interest. First, we generalize an existing construction to convert any algebraic interactive proof (AIP) into a SNARK. An AIP is a public-coin, non-succinct, interactive proof with a verifier that is an arithmetic circuit. Second, we apply Bézout's identity for polynomials to construct new AIPs for uniqueness and disjointness. These are useful for showing the independence of accesses to different addresses.
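For readers unfamiliar with the tool, the relevant classical fact (our restatement, not the paper's exact formulation) is Bézout's identity for polynomials: $f, g \in F[X]$ share no root over the algebraic closure of $F$ if and only if there exist $a, b \in F[X]$ with $$a(X)f(X) + b(X)g(X) = 1.$$ Encoding two address lists as the root sets of $f(X) = \prod_i (X - u_i)$ and $g(X) = \prod_j (X - v_j)$, exhibiting $(a, b)$ thus certifies disjointness, and an arithmetic-circuit verifier can check the identity at a random evaluation point $r$ via $a(r)f(r) + b(r)g(r) \stackrel{?}{=} 1$.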
Last updated:  2024-10-19
$\widetilde{\mbox{O}}$ptimal Adaptively Secure Hash-based Asynchronous Common Subset
Hanwen Feng, Zhenliang Lu, and Qiang Tang
Asynchronous multiparty computation (AMPC) requires an input agreement phase where all participants have a consistent view of the set of private inputs. While the input agreement problem can be precisely addressed by a Byzantine fault-tolerant consensus known as Asynchronous Common Subset (ACS), existing ACS constructions with potential post-quantum security have a large $\widetilde{\mathcal{O}}(n^3)$ communication complexity for a network of $n$ nodes. This poses a bottleneck for AMPC in the same setting. In contrast, ACS has optimal constructions with quadratic communication complexity based on bilinear map assumptions. In this paper, we bridge this gap by introducing a nearly optimal ACS, which solely relies on the black-box use of collision-resistant hash functions. It exhibits $\widetilde{\mathcal{O}}(n^2)$ communication complexity, expected constant round complexity, and security against adaptive adversaries who can corrupt up to $n/3$ nodes and perform ``after-the-fact removal'' attacks. At the core of our new ACS is the first nearly optimal hash-based Multi-valued Validated Byzantine Agreement (MVBA). To reduce cubic communication while avoiding heavy cryptographic tools, we introduce a new design paradigm, with several new components. We define and analyze our MVBA and components within the UC-framework, facilitating their modular use in broader applications, particularly in AMPC.
Last updated:  2024-10-19
Do Not Disturb a Sleeping Falcon: Floating-Point Error Sensitivity of the Falcon Sampler and Its Consequences
Xiuhan Lin, Mehdi Tibouchi, Yang Yu, and Shiduo Zhang
Falcon is one of the three post-quantum signature schemes already selected by NIST for standardization. It is the most compact among them, and offers excellent efficiency and security. However, it is based on a complex algorithm for lattice discrete Gaussian sampling which presents a number of implementation challenges. In particular, it relies on (possibly emulated) floating-point arithmetic, which is often regarded as a cause for concern, and has been leveraged in, e.g., side-channel analysis. The extent to which Falcon's use of floating-point arithmetic can cause security issues has yet to be thoroughly explored in the literature. In this paper, we contribute to filling this gap by identifying a way in which Falcon's lattice discrete Gaussian sampler, due to specific design choices, is singularly sensitive to floating-point errors. In the presence of small floating-point discrepancies (which can occur in various ways, including the use of the two almost but not quite equivalent signing procedures ``dynamic'' and ``tree'' exposed by the Falcon API), we find that, when called twice on the same input, the Falcon sampler has a small but significant chance (on the order of once in a few thousand calls) of outputting two different lattice points with a very structured difference that immediately reveals the secret key. This is in contrast to other lattice Gaussian sampling algorithms like Peikert's sampler and Prest's hybrid sampler, which are stable with respect to small floating-point errors. Correctly generated Falcon signatures include a salt that should in principle prevent the sampler from ever being called on the same input twice. In that sense, our observation has little impact on the security of Falcon signatures per se (beyond echoing warnings about the dangers of repeated randomness). On the other hand, it is critical for derandomized variants of Falcon, which have been proposed for use in numerous settings. One can mention in particular identity-based encryption, SNARK-friendly signatures, and sublinear signature aggregation. For all these settings, small floating-point discrepancies have a chance of resulting in full private key exposure, even when using the slower, integer-based emulated floating-point arithmetic of Falcon's reference implementation.
Last updated:  2024-10-18
Subliminal Encrypted Multi-Maps and Black-Box Leakage Absorption
Amine Bahi, Seny Kamara, Tarik Moataz, and Guevara Noubir
We propose a dynamic, low-latency encrypted multi-map (EMM) that operates in two modes: low-leakage mode, which reveals minimal information such as data size, expected response length, and query arrival rate; and subliminal mode, which reveals only the data size while hiding metadata including query and update times, the number of operations executed, and even whether an operation was executed at all---albeit at the cost of full correctness. We achieve this by exploiting a tradeoff between leakage and latency, a previously underexplored resource in EMM design. In low-leakage mode, our construction improves upon existing work both asymptotically and empirically: it achieves optimal server-side storage, as well as communication and computational complexity that is independent of the maximum response length. In subliminal mode, it is the first construction to hide metadata. To analyze the latency and client-side storage of our construction, we utilize queuing theory and introduce a new queuing model, which may be of independent interest. To examine its metadata-hiding properties, we extend standard security definitions to account for metadata and prove a surprising result: if a scheme is subliminal in that it hides the execution of its operations, then it absorbs the leakage of any scheme that makes black-box use of it without sending additional messages. In other words, if a scheme is subliminal, then any scheme that makes black-box use of it will also be subliminal. We implement and evaluate our construction, demonstrating that our empirical results align with our theoretical analysis and that the scheme achieves a median query latency below $10$ milliseconds, making it practical for some applications.
Last updated:  2024-10-18
PAC-Private Algorithms
Mayuri Sridhar, Hanshen Xiao, and Srinivas Devadas
Provable privacy typically requires involved analysis and is often associated with unacceptable accuracy loss. While many empirical verification or approximation methods, such as Membership Inference Attacks (MIA) and Differential Privacy Auditing (DPA), have been proposed, these do not offer rigorous privacy guarantees. In this paper, we apply recently proposed Probably Approximately Correct (PAC) Privacy to give formal, mechanized, simulation-based proofs for a range of practical, black-box algorithms: K-Means, Support Vector Machines (SVM), Principal Component Analysis (PCA) and Random Forests. To provide these proofs, we present a new simulation algorithm that efficiently determines the anisotropic noise perturbation required for any given level of privacy. We provide a proof of correctness for this algorithm and demonstrate that anisotropic noise has substantive benefits over isotropic noise. Stable algorithms are easier to privatize, and we demonstrate privacy amplification resulting from introducing regularization in these algorithms; meaningful privacy guarantees are obtained with small losses in accuracy. We propose new techniques to reduce instability in algorithmic output and convert intractable geometric stability verification into efficient deterministic stability verification. Thorough experiments are included, and we validate our provable adversarial inference hardness against state-of-the-art empirical attacks.
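As a rough illustration of simulation-based anisotropic noise calibration (a schematic Python sketch under our own simplifications, not the paper's algorithm; all names and parameters here are hypothetical), one can estimate the per-coordinate instability of a black-box algorithm over data resampling and scale Gaussian noise to it:
    # Schematic sketch (ours): coordinates of the output that are stable
    # across resamplings of the data receive proportionally less noise.
    import numpy as np

    def calibrate_anisotropic_noise(algo, data, n_sims=200, subsample=0.9, seed=0):
        rng = np.random.default_rng(seed)
        outputs = []
        for _ in range(n_sims):
            idx = rng.choice(len(data), size=int(subsample * len(data)), replace=False)
            outputs.append(np.asarray(algo(data[idx])))
        return np.stack(outputs).std(axis=0)     # per-coordinate instability

    def private_release(algo, data, noise_multiplier=1.0, seed=1):
        scale = calibrate_anisotropic_noise(algo, data)
        out = np.asarray(algo(data))
        rng = np.random.default_rng(seed)
        return out + rng.normal(0.0, noise_multiplier * scale, size=out.shape)

    # Example: a simple per-coordinate mean statistic on toy data.
    data = np.random.default_rng(2).normal(size=(1000, 4))
    print(private_release(lambda d: d.mean(axis=0), data))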
Last updated:  2024-10-18
CountCrypt: Quantum Cryptography between QCMA and PP
Eli Goldin, Tomoyuki Morimae, Saachi Mutreja, and Takashi Yamakawa
We construct a quantum oracle relative to which $\mathbf{BQP}=\mathbf{QCMA}$ but quantum-computation-classical-communication (QCCC) key exchange, QCCC commitments, and two-round quantum key distribution exist. We also construct an oracle relative to which $\mathbf{BQP}=\mathbf{QMA}$, but quantum lightning (a stronger variant of quantum money) exists. This extends previous work by Kretschmer [Kretschmer, TQC22], which showed that there is a quantum oracle relative to which $\mathbf{BQP}=\mathbf{QMA}$ but pseudorandom state generators (a quantum variant of pseudorandom generators) exist. We also show that QCCC key exchange, QCCC commitments, and two-round quantum key distribution can all be used to build one-way puzzles. One-way puzzles are a version of "quantum samplable" one-wayness and are an intermediate primitive between pseudorandom state generators and EFI pairs, the minimal quantum primitive. In particular, one-way puzzles cannot exist if $\mathbf{BQP}=\mathbf{PP}$. Our results together imply that aside from pseudorandom state generators, there is a large class of quantum cryptographic primitives which can exist even if $\mathbf{BQP} = \mathbf{QCMA}$, but are broken if $\mathbf{BQP} = \mathbf{PP}$. Furthermore, one-way puzzles are a minimal primitive for this class. We denote this class "CountCrypt".
Last updated:  2024-10-18
Computational Attestations of Polynomial Integrity Towards Verifiable Machine Learning
Dustin Ray and Caroline El Jazmi
Machine-learning systems continue to advance at a rapid pace, demonstrating remarkable utility in various fields and disciplines. As these systems continue to grow in size and complexity, a nascent industry is emerging which aims to bring machine-learning-as-a-service (MLaaS) to market. Outsourcing the operation and training of these systems to powerful hardware carries numerous advantages, but challenges arise when privacy and the correctness of work carried out must be ensured. Recent advancements in the field of zero-knowledge cryptography have led to a means of generating arguments of integrity for any computation, which in turn can be efficiently verified by any party, in any place, at any time. In this work we prove the correct training of a differentially-private (DP) linear regression over a dataset of 50,000 samples on a single machine in less than 6 minutes, verifying the entire computation in 0.17 seconds. To our knowledge, this result represents the fastest known instance in the literature of provable-DP over a dataset of this size. We believe this result constitutes a key stepping-stone towards end-to-end private MLaaS.
Last updated:  2024-10-18
Improved Polynomial Division in Cryptography
Kostas Kryptos Chalkias, Charanjit Jutla, Jonas Lindstrom, Varun Madathil, and Arnab Roy
Several cryptographic primitives, especially succinct proofs of various forms, transform the satisfaction of high-level properties into the existence of a polynomial quotient of a polynomial that interpolates a set of values by a cleverly arranged divisor. Some examples are SNARKs, like Groth16, and polynomial commitments, such as KZG. Such a polynomial division naively takes $O(n \log n)$ time with Fast Fourier Transforms, and is usually the asymptotic bottleneck for these computations. Several works have targeted specific constructions to optimize these computations and trade off one-time setup costs with faster online computation times. In this paper, we present a unified approach to polynomial-division-related computations for a diverse set of schemes. We show how our approach provides a common abstract lens which recasts and improves existing approaches. Additionally, we present benchmarks for the Groth16 and the KZG systems, illustrating the significant practical benefits of our approach in terms of speed, memory, and parallelizability. We get a speedup of $2\times$ over the state-of-the-art in computing all openings for KZG commitments and a speed-up of about $2-3\%$ for Groth16 proofs when compared against the Rust Arkworks implementation. Although our Groth16 speedup is modest, our approach supports twice the number of gates as Arkworks and SnarkJS, as it avoids computations at higher roots of unity. Conversely, this reduces the need for employing larger groups for bigger circuits. For example, our approach can support $2^{28}$ gates with BN254, as compared to $2^{27}$ for coset-based approaches, without sacrificing computational advantages. Our core technical contributions are novel conjugate representations and compositions of the derivative operator and point-wise division under the Discrete Fourier Transform. These allow us to leverage l'Hôpital's rule to efficiently compute polynomial division, where, in the evaluation basis, such divisions may be of the form $0/0$. Our techniques are generic with potential applicability to many existing protocols.
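The l'Hôpital trick can be seen in miniature (a toy Python sketch with floats and naive evaluation, not the paper's FFT-based pipeline): to compute $q = f/Z$ pointwise on a domain containing roots of $Z$, replace the $0/0$ evaluations $f(x)/Z(x)$ by $f'(x)/Z'(x)$:
    # A toy illustration (ours) of division in the evaluation basis
    # via l'Hopital's rule at the points where the divisor vanishes.
    import numpy as np

    Z = np.polynomial.Polynomial([-1, 0, 1])   # Z(X) = X^2 - 1, simple roots at +-1
    q = np.polynomial.Polynomial([5, 0, 1])    # the quotient we hope to recover
    f = Z * q                                  # f vanishes wherever Z does

    xs = np.array([-2.0, -1.0, 0.0, 1.0, 2.0]) # evaluation domain contains Z's roots
    fp, Zp = f.deriv(), Z.deriv()

    with np.errstate(divide="ignore", invalid="ignore"):
        vals = np.where(np.isclose(Z(xs), 0.0),
                        fp(xs) / Zp(xs),       # l'Hopital at the 0/0 points
                        f(xs) / Z(xs))         # ordinary pointwise division elsewhere
    assert np.allclose(vals, q(xs))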
Last updated:  2024-10-18
Sharp: Short Relaxed Range Proofs
Geoffroy Couteau, Dahmun Goudarzi, Michael Klooß, and Michael Reichle
We provide optimized range proofs, called $\mathsf{Sharp}$, in discrete logarithm and hidden order groups, based on square decomposition. In the former setting, we build on the paradigm of Couteau et al. (Eurocrypt '21) and optimize their range proof (from now on, CKLR) in several ways: (1) We introduce batching via vector commitments and an adapted $\Sigma$-protocol. (2) We introduce a new group switching strategy to reduce communication. (3) As repetitions are necessary to instantiate CKLR in standard groups, we provide a novel batch shortness test that allows for cheaper repetitions. The analysis of our test is nontrivial and forms a core technical contribution of our work. For example, for $\kappa = 128$ bit security and $B = 64$ bit ranges for $N = 1$ (resp. $N = 8$) proof(s), we reduce the proof size by $34\%$ (resp. $75\%$) in arbitrary groups, and by $66\%$ (resp. $88\%$) in groups of $256$-bit order, compared to CKLR. As $\mathsf{Sharp}$ and CKLR proofs satisfy a “relaxed” notion of security, we show how to enhance their security with one additional hidden order group element. In RSA groups, this reduces the size of state-of-the-art range proofs (Couteau et al., Eurocrypt '17) by $77\%$ ($\kappa = 128, B = 64, N = 1$). Finally, we implement our most optimized range proof. Compared to the state-of-the-art Bulletproofs (Bünz et al., S&P 2018), our benchmarks show a very significant runtime improvement. Lastly, we sketch some applications of our new range proofs.
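For intuition about the square-decomposition paradigm behind CKLR-style proofs (a numeric toy in Python; Sharp's actual relaxed relation and proof system differ in the details), note that exhibiting three squares summing to $1 + 4x(B - x)$ forces $4x(B - x) \geq -1$, hence $x \in [0, B]$ over the integers, and such a decomposition always exists because $1 + 4x(B - x) \equiv 1 \pmod 4$:
    # A numeric toy (ours) of the square-decomposition idea.
    def three_squares(m):
        # brute-force decomposition m = a^2 + b^2 + c^2 (fine for tiny m)
        for a in range(int(m ** 0.5) + 1):
            for b in range(a, int((m - a * a) ** 0.5) + 1):
                c2 = m - a * a - b * b
                c = int(c2 ** 0.5)
                if c * c == c2:
                    return a, b, c
        raise ValueError("no decomposition: m is of the form 4^a(8b+7)")

    B, x = 64, 23                                 # prover's claim: x lies in [0, B]
    a, b, c = three_squares(1 + 4 * x * (B - x))  # exists since 1+4x(B-x) = 1 mod 4
    # The verifier checks this identity; over the integers it forces
    # 4x(B-x) >= -1, hence x in [0, B].
    assert a * a + b * b + c * c == 1 + 4 * x * (B - x)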
Last updated:  2024-10-18
The module action for isogeny based cryptography
Damien Robert
We extend the usual ideal action on oriented elliptic curves to a (Hermitian) module action on oriented (polarised) abelian varieties. Oriented abelian varieties are naturally enriched in $R$-modules, and our module action comes from the canonical power object construction on categories enriched in a closed symmetric monoidal category. In particular our action is canonical and gives a fully fledged symmetric monoidal action. Furthermore, we give algorithms to compute this action in practice, generalising the usual algorithms in rank~$1$. The action allows us to unify in the same framework, on the one hand isogeny based cryptography based on ordinary or oriented elliptic curves, and on the other hand the one based on supersingular elliptic curves defined over $\mathbb{F}_{p^2}$. In particular, from our point of view, supersingular elliptic curves over $\mathbb{F}_p$ are given by a rank~$1$ module action, while (the Weil restriction) of those defined over $\mathbb{F}_{p^2}$ are given by a rank~$2$ module action. As a consequence, rank~$2$ module action inversion is at least as hard as the supersingular isogeny path problem. We thus propose to use Hermitian modules as an avatar of a cryptographic symmetric monoidal action framework. This generalizes the more standard cryptographic group action framework, and still allows for a NIKE (Non Interactive Key Exchange). The main advantage of our action is that, presumably, Kuperberg's algorithm does not apply. Compared to CSIDH, this allows for more compact keys and much better scaling properties. In practice, we propose the key exchange scheme $\otimes$-MIKE (Tensor Module Isogeny Key Exchange). Alice and Bob start from a supersingular elliptic curve $E_0/\mathbb{F}_p$ and both compute a $2^n$-isogeny over $\mathbb{F}_{p^2}$. They each send the $j$-invariant of their curve. Crucially, unlike SIDH, no torsion information at all is required. Their common secret, given by the module action, is then a dimension~$4$ principally polarised abelian variety. We obtain a very compact post-quantum NIKE: only 64B for NIST level~$1$ security.
Last updated:  2024-10-18
Non-Interactive Threshold BBS+ From Pseudorandom Correlations
Sebastian Faust, Carmit Hazay, David Kretzler, Leandro Rometsch, and Benjamin Schlosser
The BBS+ signature scheme is one of the most prominent solutions for realizing anonymous credentials. Its prominence is due to properties like selective disclosure and efficient protocols for creating and showing possession of credentials. Traditionally, a single credential issuer produces BBS+ signatures, which poses significant risks due to a single point of failure. In this work, we address this threat via a novel $t$-out-of-$n$ threshold BBS+ protocol. Our protocol supports an arbitrary security threshold $t \leq n$ and works in the so-called preprocessing setting. In this setting, we achieve non-interactive signing in the online phase and sublinear communication complexity in the number of signatures in the offline phase, which, as we show in this work, are important features from a practical point of view. As it stands today, none of the widely studied signature schemes, such as threshold ECDSA and threshold Schnorr, achieve both properties simultaneously. To this end, we design specifically tailored presignatures that can be directly computed from pseudorandom correlations and allow servers to create signature shares without additional cross-server communication. Both our offline and online protocols are actively secure in the Universal Composability model. Finally, we evaluate the concrete efficiency of our protocol, including an implementation of the online phase and the expansion algorithm of the pseudorandom correlation generator (PCG) used during the offline phase. The online protocol without network latency takes less than $15$ ms for $t \leq 30$ and credential sizes up to $10$. Further, our results indicate that the influence of $t$ on the online signing is insignificant, $<6\%$ for $t \leq 30$, and the overhead of the thresholdization occurs almost exclusively in the offline phase. Our implementation of the PCG expansion is the first considering correlations between more than $3$ parties and shows that even for a committee size of $10$ servers, each server can expand a correlation of up to $2^{16}$ presignatures in about $600$ ms per presignature.
Last updated:  2024-10-18
Dumbo-MPC: Efficient Fully Asynchronous MPC with Optimal Resilience
Yuan Su, Yuan Lu, Jiliang Li, Yuyi Wang, Chengyi Dong, and Qiang Tang
Fully asynchronous multi-party computation (AMPC) has superior robustness in realizing privacy and guaranteed output delivery (G.O.D.) against asynchronous adversaries that can arbitrarily delay communications. However, no existing AMPC protocols are truly practical, as they either have sub-optimal resilience, incur cumbersome communication cost, or suffer from an online phase with extra cryptographic overhead. The only attempted implementation---HoneyBadgerMPC (hbMPC)---merely ensures G.O.D. in some implausible optimistic cases due to a non-robust offline pre-processing phase. We propose Dumbo-MPC, a concretely efficient AMPC-as-a-service design with G.O.D. in all phases and optimal resilience against $t<n/3$ malicious parties (where $n$ is the total number of parties). Like hbMPC, Dumbo-MPC has a robust (almost) information-theoretic online phase that can efficiently perform online computations, given pre-processed multiplication triples. To achieve G.O.D. in all phases, we design a novel dual-mode offline protocol that can robustly pre-process multiplication triples in asynchrony. The offline phase features $O(n)$ per-triple communication in the optimistic case, followed by a fully asynchronous fallback to a pessimistic path to securely restore G.O.D. in the bad case. To efficiently implement the pessimistic path, we devise a concretely efficient zk-proof for the product relationship of secret shares over compact KZG polynomial commitments, which enables us to reduce the degree of two secret shares' product from $2t$ to $t$ and could be of independent interest. We also implement and extensively evaluate Dumbo-MPC (particularly its offline phase) in varying network settings with up to 31 AWS servers. To our knowledge, we provide the first implementation of AMPC with all-phase G.O.D. A recent asynchronous triple generation protocol from Groth and Shoup (GS23) is also implemented and experimentally compared. When $n = 31$, Dumbo-MPC generates 94 triples/sec (almost twice that of GS23) in the pessimistic case and 349 triples/sec (6X that of GS23) in the good case, such that 31 parties require only 2-8 min to prepare a private Vickrey auction of 100 bidders or 10-36 min for a mixing network of $2^{10}$ inputs.
Last updated:  2024-10-18
From One-Time to Two-Round Reusable Multi-Signatures without Nested Forking
Lior Rotem, Gil Segev, and Eylon Yogev
Multi-signature schemes are gaining significant interest due to their blockchain applications. Of particular interest are two-round schemes in the plain public-key model that offer key aggregation, and whose security is based on the hardness of the DLOG problem. Unfortunately, despite substantial recent progress, the security proofs of the proposed schemes provide rather insufficient concrete guarantees (especially for 256-bit groups). This frustrating situation has so far been approached either by relying on the security of seemingly stronger assumptions or by considering restricted classes of attackers (e.g., algebraic attackers, which are assumed to provide an algebraic justification of each group element that they produce). We present a complementary approach by constructing multi-signature schemes that satisfy two relaxed notions of security, whose applicability nevertheless ranges from serving as drop-in replacements to enabling expressive smart contract validation procedures. Our first notion, one-time unforgeability, extends the analogous single-signer notion by considering attackers that obtain a single signature for some message and set of signers of their choice. We construct a non-interactive one-time scheme based on any ring-homomorphic one-way function, admitting efficient instantiations based on the DLOG and RSA assumptions. Aggregated verification keys and signatures consist of two group elements and a single group element, respectively, and our security proof consists of a single application of the forking lemma (thus avoiding the substantial security loss exhibited by the proposed two-round schemes). Additionally, we demonstrate that our scheme naturally extends to a $t$-time scheme, where aggregated verification keys consist of $t+1$ group elements, while aggregated signatures still consist of a single group element. Our second notion, single-set unforgeability, considers attackers that obtain any polynomial number of signatures but are restricted to a single set of signers of their choice. We transform any non-interactive one-time scheme into a two-round single-set scheme via a novel forking-free construction that extends the seminal Naor-Yung tree-based approach to the multi-signer setting. Aggregated verification keys are essentially identical to those of the underlying one-time scheme, and the length of aggregated signatures is determined by that of the underlying scheme while scaling linearly with the length of messages (noting that long messages can always be hashed using a collision-resistant function). Instantiated with our one-time scheme, we obtain aggregated verification keys and signatures whose lengths are completely independent of the number of signers.
Last updated:  2024-10-18
Secure and efficient transciphering for FHE-based MPC
Diego F. Aranha, Antonio Guimarães, Clément Hoffmann, and Pierrick Méaux
Transciphering (or Hybrid-Homomorphic Encryption, HHE) is an established technique for avoiding ciphertext expansion in HE applications, saving communication and storage resources. Recently, it has also been shown to be a fundamental component in the practical construction of HE-based multi-party computation (MPC) protocols, being used both for input data and intermediary results (Smart, IMACC 2023). In these protocols, however, ciphers are used with keys that are jointly generated by multiple (possibly malicious) parties, which may require additional security assumptions that have been so far overlooked in the HHE literature. In this paper, we formalize this issue as a problem of security against related-key attacks (RKA) and provide efficient solutions for it. We start by presenting an efficient method for homomorphically evaluating Mixed-Filter-Permutator (MFP) ciphers in leveled mode, enabling speedups of up to thousands of times compared to previous literature. For the multi-party scenario, we focus specifically on the Margrethe cipher (Hoffmann et al., INDOCRYPT 2023). We show that, contrary to other commonly used HHE ciphers (e.g., FLIP), Margrethe is out-of-the-box secure for any protocols that allow malicious parties to learn up to two related key streams, enabling security for the vast majority of static MPC protocols. For other cases, we quantify the loss of security based on the number of related key streams (which often depends on the number of malicious parties and the specific protocol). Performance-wise, our implementation of Margrethe takes just 3.9 ms to transcipher 4-bit messages, being significantly faster than the state of the art in terms of latency.
Last updated:  2024-10-18
A New Fine Tuning Method for FHEW/TFHE Bootstrapping with IND-CPAD Security
Deokhwa Hong, Young-Sik Kim, Yongwoo Lee, and Eunyoung Seo
Fully homomorphic encryption (FHE) schemes enable computations on encrypted data, making them a crucial component of privacy-enhancing technologies. Ducas and Micciancio introduced the FHEW scheme (Eurocrypt '15), which was further enhanced by Chillotti et al. with TFHE (Asiacrypt '17). These schemes support low-latency homomorphic evaluations of binary (or larger) gates due to their small parameter size. However, the evaluation failure probability in these schemes is highly sensitive to the choice of parameters, resulting in a limited range of viable parameters and a trade-off between failure probability and runtime. Recently, Cheon et al. proposed a key recovery attack on the FHEW/TFHE schemes based on a novel security model for FHE, known as IND-CPA$^\text{D}$ security (CCS '24). Mitigating this attack requires achieving a negligible failure probability (e.g., $2^{-64}$). However, the limited range of parameter options in FHEW/TFHE necessitates the adoption of parameter sets with unnecessarily low failure probabilities, leading to inefficient runtime. We propose a new bootstrapping method for the FHEW/TFHE schemes that optimizes the trade-off between runtime and failure probability while maintaining ease of implementation. The proposed method allows selecting parameter sets that achieve the desired failure probabilities at various security levels, thereby maximizing runtime efficiency.
Last updated:  2024-10-18
Overlapped Bootstrapping for FHEW/TFHE and Its Application to SHA3
Deokhwa Hong, Youngjin Choi, Yongwoo Lee, and Young-Sik Kim
Homomorphic Encryption (HE) enables operations on encrypted data without requiring decryption, thus allowing for secure handling of confidential data within smart contracts. Among the known HE schemes, FHEW and TFHE are particularly notable for use in smart contracts due to their lightweight nature and support for arbitrary logical gates. In contrast, other HE schemes often require several gigabytes of keys and are limited to supporting only addition and multiplication. As a result, there has been significant work implementing smart contract functionalities over HE, broadening the potential applications of blockchain technology. However, a significant drawback of the FHEW/TFHE schemes is the need for bootstrapping after the execution of each binary gate. While bootstrapping reduces noise in the ciphertext, it also becomes a performance bottleneck due to its computational complexity. In this work, we propose an efficient new bootstrapping method for FHEW/TFHE that takes advantage of the flexible scaling factors of encrypted data. The proposed method is particularly beneficial in circuits with consecutive XOR gates. Moreover, we implement Keccak using FHEW/TFHE, as it is one of the most important functions in smart contracts. Our experimental results demonstrate that the proposed method reduces the runtime of Keccak over HE by 42%. Additionally, the proposed method does not require additional keys or parameter sets from the key-generating party and can be adopted by the computing party without need for any extra information.
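To see why consecutive XOR gates are the favorable case, recall a standard encoding fact (our notation; the paper's contribution lies in handling flexible scaling factors rather than a fixed $\Delta$): for bits encoded with scaling factor $\Delta = q/2$, LWE ciphertext addition already computes XOR,
\[
\mathsf{LWE}_{s}\!\big(\tfrac{q}{2}\,m_1\big) + \mathsf{LWE}_{s}\!\big(\tfrac{q}{2}\,m_2\big) = \mathsf{LWE}_{s}\!\big(\tfrac{q}{2}\,(m_1 \oplus m_2)\big) \pmod q, \qquad m_1, m_2 \in \{0,1\},
\]
since $\tfrac{q}{2}(m_1 + m_2) \equiv \tfrac{q}{2}(m_1 \oplus m_2) \pmod q$. A run of XOR gates therefore only accumulates additive noise, and with suitably tracked scaling factors it can be evaluated with a single bootstrapping at the end rather than one per gate.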
Last updated:  2024-10-18
Structure-Preserving Compressing Primitives: Vector Commitments, Accumulators and Applications
Stephan Krenn, Omid Mir, and Daniel Slamanig
Compressing primitives, such as accumulators and vector commitments, make it possible to represent large data sets with some compact, ideally constant-sized value. Moreover, they support operations like proving membership or non-membership with minimal, ideally also constant-sized, storage and communication overhead. In recent years, these primitives have found numerous practical applications, with many constructions based on various hardness assumptions. So far, however, it has been elusive to construct these primitives in a strictly structure-preserving setting, i.e., in a bilinear group in a way that messages, commitments and openings are all elements of the two source groups. Interestingly, backed by existing impossibility results, not even conventional commitments with such constraints are known in this setting. In this paper we investigate whether strictly structure-preserving compressing primitives can be realized. We close this gap by presenting the first strictly structure-preserving commitment that is shrinking (and in particular constant-size). We circumvent existing impossibility results by employing a more structured message space, i.e., a variant of the Diffie-Hellman message space. Our main results are constructions of structure-preserving vector commitments (SPVC) as well as accumulators. We first discuss generic constructions and then present concrete constructions under the Diffie-Hellman Exponent assumption. To demonstrate the usefulness of our constructions, we present various applications. Most notably, we present the first entirely practical constant-size ring signature scheme in bilinear groups (i.e., the discrete logarithm setting). Concretely, using the popular BLS12-381 pairing-friendly curve, our ring signatures achieve a size of roughly 6500 bits.
Last updated:  2024-10-18
Reduce and Prange: Revisiting Prange's ISD for Solving LPN/RSD over Large Fields
Jiseung Kim and Changmin Lee
The Learning Parity with Noise (LPN) problem and its specific form, LPN with regularity, are widely utilized in crafting cryptographic primitives. Recently, extending these problems to operate over larger fields has been considered to enhance applications. Despite the broad analysis available for traditional LPN approaches, the exploration of LPN over large fields remains underdeveloped. This gap in research has led to the development of improved attacks, suggesting that primitives based on LPN over large fields may not meet necessary security standards. We have developed an algorithm that enhances the efficiency of solving LPN over large fields. This method innovatively modifies the Gaussian elimination attack, traditionally known as Prange's information set decoding algorithm. Our key advancement involves the selective use of partial Gaussian elimination rather than employing the full Gaussian algorithm throughout, which we have termed the ``Reduce and Prange (RP) algorithm''. Additionally, we apply the RP algorithm to address the LPN problem over large fields with regular noise. Our RP algorithm highlights two key aspects: the vulnerability of existing schemes and the superiority of our approach compared to recent analyses. Our findings reveal that cryptographic applications, including Syndrome Decoding in the Head frameworks (Crypto'22, Asiacrypt'23, Eurocrypt'24) and Scalable Multiparty Garbling (CCS'23), are not secure enough. To be precise, the schemes fail to achieve their intended bit-security. Compared to the previous analysis by Liu et al. (Eurocrypt'24), we show that for LPN over large fields (e.g., 128-bit field size), the bit-security is reduced by 5-11 bits. Furthermore, our RP algorithm for solving LPN with regular noise outperforms recent results by Liu et al., Briaud, and Øygard (Eurocrypt'23) under certain parameter choices, leading to a reduction in bit-security by 5-20 bits for LPN with regular noise.
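For readers unfamiliar with Prange's information-set decoding, the following toy Python implementation (ours) shows the plain idea over $\mathbb{F}_2$ that the RP algorithm refines; the paper's algorithm performs only a partial Gaussian elimination and targets large fields:
    # A toy implementation (ours) of plain Prange decoding for LPN over F_2.
    import random

    def solve_gf2(A, b):
        # Gaussian elimination over F_2; returns x with A x = b, or None.
        n = len(A)
        M = [row[:] + [bb] for row, bb in zip(A, b)]
        for col in range(n):
            piv = next((r for r in range(col, n) if M[r][col]), None)
            if piv is None:
                return None                      # singular information set
            M[col], M[piv] = M[piv], M[col]
            for r in range(n):
                if r != col and M[r][col]:
                    M[r] = [x ^ y for x, y in zip(M[r], M[col])]
        return [M[r][n] for r in range(n)]

    def prange(A, b, max_weight, iters=10_000):
        m, k = len(A), len(A[0])
        for _ in range(iters):
            I = random.sample(range(m), k)       # guess k noise-free samples
            s = solve_gf2([A[i] for i in I], [b[i] for i in I])
            if s is None:
                continue
            residual = sum((sum(a * x for a, x in zip(row, s)) % 2) ^ bb
                           for row, bb in zip(A, b))
            if residual <= max_weight:           # consistent with the noise rate
                return s
        return None

    random.seed(0)
    k, m, tau = 16, 256, 0.03                    # tiny instance, ~3% Bernoulli noise
    s = [random.randint(0, 1) for _ in range(k)]
    A = [[random.randint(0, 1) for _ in range(k)] for _ in range(m)]
    b = [(sum(a * x for a, x in zip(row, s)) % 2) ^ (random.random() < tau)
         for row in A]
    assert prange(A, b, max_weight=int(2.5 * tau * m)) == s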
Last updated:  2024-10-18
Secure Computation with Parallel Calls to 2-ary Functions
Varun Narayanan, Shubham Vivek Pawar, and Akshayaram Srinivasan
Reductions are the workhorses of cryptography. They allow constructions of complex cryptographic primitives from simple building blocks. A prominent example is the non-interactive reduction from securely computing a ``complex" function $f$ to securely computing a ``simple" function $g$ via randomized encodings. Prior work equated simplicity with functions of small degree. In this work, we consider a different notion of simplicity where we require $g$ to only take inputs from a small number of parties. In other words, we want the arity of $g$ to be as small as possible. In more detail, we consider the problem of reducing secure computation of arbitrary functions to secure computation of functions with arity two (two is the minimal arity required to compute non-trivial functions). Specifically, we want to compute a function $f$ via a protocol that makes parallel calls to 2-ary functions. We want this protocol to be secure against malicious adversaries that could corrupt an arbitrary number of parties. We obtain the following results: - Negative Result: We show that there exists a degree-2 polynomial $p$ such that no protocol that makes parallel calls to 2-ary functions can compute $p$ with statistical security with abort. - Positive Results: We give two ways to bypass the above impossibility result. 1. Weakening the Security Notion. We show that every degree-2 polynomial can be computed with statistical privacy with knowledge of outputs (PwKO) by making parallel calls to 2-ary functions. Privacy with knowledge of outputs is weaker than security with abort. 2. Computational Security. We prove that for every function $f$, there exists a protocol for computing $f$ that makes parallel calls to 2-ary functions and achieves security with abort against computationally-bounded adversaries. The security of this protocol relies on the existence of semi-honest secure oblivious transfer. - Applications: We give connections between this problem and the task of reducing the encoding complexity of Multiparty Randomized Encodings (MPRE) (Applebaum, Brakerski, and Tsabary, TCC 2018). Specifically, we show that under standard computational assumptions, there exists an MPRE where the encoder can be implemented by an $\mathrm{NC}^0$ circuit with constant fan-out. - Extensions: We explore this problem in the honest majority setting and give similar results assuming one-way functions. We also show that if the parties have access to 3-ary functions then we can construct a computationally secure protocol in the dishonest majority setting assuming one-way functions.
Last updated:  2024-10-18
Does quantum lattice sieving require quantum RAM?
Beomgeun Cho, Minki Hhan, Taehyun Kim, Jeonghoon Lee, and Yixin Shen
In this paper, we study the requirement for quantum random access memory (QRAM) in quantum lattice sieving, a fundamental algorithm for lattice-based cryptanalysis. First, we obtain a lower bound on the cost of quantum lattice sieving with a bounded size QRAM. We do so in a new query model encompassing a wide range of lattice sieving algorithms similar to those in the classical sieving lower bound by Kirshanova and Laarhoven [CRYPTO 21]. This implies that, under reasonable assumptions, quantum speedups in lattice sieving require the use of QRAM. In particular, no quantum speedup is possible without QRAM. Second, we investigate the trade-off between the size of QRAM and the quantum speedup. We obtain a new interpolation between classical and quantum lattice sieving. Moreover, we show that further improvements require a novel way to use the QRAM by proving the optimality of some subroutines. An important caveat is that this trade-off requires a strong assumption on the efficient replacement of QRAM data, indicating that even speedups with a small QRAM are already challenging. Finally, we provide a circuit for quantum lattice sieving without using QRAM. Our circuit has a better depth complexity than the best classical algorithms but requires an exponential amount of qubits. To the best of our knowledge, this is the first quantum speedup for lattice sieving without QRAM in the standard quantum circuit model. We explain why this circuit does not contradict our lower bound, which considers the query complexity.
Last updated:  2024-10-18
Concretely Efficient Asynchronous MPC from Lightweight Cryptography
Akhil Bandarupalli, Xiaoyu Ji, Aniket Kate, Chen-Da Liu-Zhang, and Yifan Song
We consider the setting of asynchronous multi-party computation (AMPC) with optimal resilience $n=3t+1$ and linear communication complexity, and employ only ``lightweight'' cryptographic primitives, such as random oracle hash. In this model, we introduce two concretely efficient AMPC protocols for a circuit with $|C|$ multiplication gates: a protocol achieving fairness with $\mathcal{O}(|C|\cdot n + n^3)$ field elements of communication, and a protocol achieving guaranteed output delivery with $\mathcal{O}(|C|\cdot n + n^5)$ field elements. These protocols significantly improve upon the best prior AMPC protocol in this regime communicating $\mathcal{O}(|C|\cdot n + n^{14})$ elements. To achieve this, we introduce novel variants of asynchronous complete secret sharing (ACSS) protocols with linear communication in the number of sharings, providing different abort properties.
Last updated:  2024-10-18
HADES: Range-Filtered Private Aggregation on Public Data
Xiaoyuan Liu, Ni Trieu, Trinabh Gupta, Ishtiyaque Ahmad, and Dawn Song
In aggregation queries, predicate parameters often reveal user intent. Protecting these parameters is critical for user privacy, regardless of whether the database is public or private. While most existing works focus on private data settings, we address a public data setting where the server has access to the database. Current solutions for this setting either require additional setups (e.g., non-colluding servers, hardware enclaves) or are inefficient for practical workloads. Furthermore, they often do not support range predicates or boolean combinations commonly seen in real-world use cases. To address these limitations, we built HADES, a fully homomorphic encryption (FHE) based private aggregation system for public data that supports point predicates, range predicates, and boolean combinations. Our one-round HADES protocol efficiently generates predicate indicators by leveraging the plaintext form of public data records. It introduces a novel elementwise-mapping operation and an optimized reduction algorithm, achieving latency efficiency within a limited noise budget. Our highly scalable, multi-threaded implementation improves performance over previous one-round FHE solutions by 204x to 6574x on end-to-end TPC-H queries, reducing aggregation time on 1M records from 15 hours to 38 seconds.
Last updated:  2024-10-17
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
Stanislav Peceny, Srinivasan Raghuraman, Peter Rindal, and Harshal Shah
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more. We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only three rounds. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds & $\sim2\ell n$ bits of communication, a respective $40\times$ & $1.1\times$ improvement on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The overhead of the latter protocol has larger hidden constants, and therefore is more efficient only when long strings are permuted, e.g. in graph algorithms. Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first-class primitive in the MPC practitioner's toolbox.
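A minimal Python sketch (our notation; the paper additionally covers generating the correlation itself) of how such a correlation is derandomized: given $(A, B)$ held by $P_0$ and $(C, \pi)$ held by $P_1$ with $\pi(A) = B + C$, the parties apply a permutation $\sigma$ chosen by $P_1$ to a secret-shared vector $x = x_0 + x_1$ with one message in each direction:
    # Toy derandomization sketch (ours) over a small prime field.
    import random

    P, n = 2**61 - 1, 8
    rng = random.Random(0)

    def perm(pi, v):                   # (pi(v))[i] = v[pi[i]]
        return [v[pi[i]] for i in range(n)]

    def add(u, v): return [(a + b) % P for a, b in zip(u, v)]
    def sub(u, v): return [(a - b) % P for a, b in zip(u, v)]

    # Random correlation, as produced by one of the paper's generators.
    A  = [rng.randrange(P) for _ in range(n)]
    B  = [rng.randrange(P) for _ in range(n)]
    pi = rng.sample(range(n), n)
    C  = sub(perm(pi, A), B)           # ensures pi(A) = B + C

    # Derandomization: P1 wants to apply its chosen sigma to [x].
    x  = [rng.randrange(P) for _ in range(n)]
    x0 = [rng.randrange(P) for _ in range(n)]       # P0's share
    x1 = sub(x, x0)                                 # P1's share
    sigma = rng.sample(range(n), n)

    inv_pi = [0] * n
    for i in range(n):
        inv_pi[pi[i]] = i
    rho = [sigma[inv_pi[j]] for j in range(n)]      # P1 -> P0: uniform, hides sigma

    u  = sub(perm(rho, x0), A)                      # P0 -> P1: masked by random A
    y0 = B                                          # P0's fresh share
    y1 = add(add(perm(pi, u), C), perm(sigma, x1))  # P1's fresh share

    assert add(y0, y1) == perm(sigma, x)            # shares of sigma(x)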
Last updated:  2024-10-17
Fake It till You Make It: Enhancing Security of Bluetooth Secure Connections via Deferrable Authentication
Marc Fischlin and Olga Sanina
The Bluetooth protocol for wireless connection between devices comes with several security measures to protect confidentiality and integrity of data. At the heart of these security protocols lies the Secure Simple Pairing, wherewith the devices can negotiate a shared key before communicating sensitive data. Despite the good intentions, the Bluetooth security protocol has repeatedly been shown to be vulnerable, especially with regard to active attacks on the Secure Simple Pairing. We propose here a mechanism to limit active attacks on the Secure Connections protocol (the more secure version of the Secure Simple Pairing protocol), without infringing on the current Bluetooth protocol stack specification. The idea is to run an authentication protocol, like a classical challenge-response step for certified keys, within the existing infrastructure, even at a later, more convenient point in time. We prove that not only does this authentication step ensure freshness of future encryption keys, but an interesting feature is that it—a posteriori—also guarantees security of previously derived encryption keys. We next argue that this approach indeed prevents a large set of known attacks on the Bluetooth protocol.
Last updated:  2024-10-17
Logstar: Efficient Linear* Time Secure Merge
Suvradip Chakraborty, Stanislav Peceny, Srinivasan Raghuraman, and Peter Rindal
Secure merge considers the problem of combining two sorted lists into a single sorted secret-shared list. Merge is a fundamental building block for many real-world applications. For example, secure merge can implement a large number of SQL-like database joins, which are essential for almost any data processing task such as privacy-preserving fraud detection, ad conversion rates, data deduplication, and many more. We present two constructions with communication bandwidth and rounds tradeoff. Logstar, our bandwidth-optimized construction, takes inspiration from Falk and Ostrovsky (ITC, 2021) and runs in $O(n\log^*n)$ time and communication with $O(\log n)$ rounds. In particular, for all conceivable $n$, the $\log^*n$ factor will be equal to the constant $2$, and therefore we achieve a near-linear running time. Median, our rounds-optimized construction, builds on the classic parallel medians-based insecure merge approach of Valiant (SIAM J. Comput., 1975), later explored in the secure setting by Blunk et al. (2022), and requires $O(n \log^c n)$, $c \approx 1.71$, communication with $O(\log \log n)$ rounds. We introduce two additional constructions that merge input lists of different sizes. SquareRootMerge merges lists of sizes $n^{\frac{1}{2}}$ and $n$ and runs in $O(n)$ time and communication with $O(\log n)$ rounds. CubeRootMerge is closely inspired by Blunk et al.'s (2022) construction and merges lists of sizes $n^{\frac{1}{3}}$ and $n$. It runs in $O(n)$ time and communication with $O(1)$ rounds. We optimize our constructions for concrete efficiency. Today, concretely efficient secure merge protocols rely on standard techniques such as Batcher's merging network or generic sorting. These approaches require an $O(n \log n)$ size circuit of $O(\log n)$ depth. Despite significant research thrust, no work has been able to reduce their concrete costs. Our constructions are the first to be more efficient by improving their asymptotics and maintaining small constants. We analytically benchmark against these constructions and show that Logstar reduces bandwidth costs $\approx2.0\times$ and Median reduces rounds $\approx1.5\times$.
Last updated:  2024-10-17
Fast ORAM with Server-aided Preprocessing and Pragmatic Privacy-Efficiency Trade-off
Vladimir Kolesnikov, Stanislav Peceny, Ni Trieu, and Xiao Wang
Data-dependent accesses to memory are necessary for many real-world applications, but their cost remains prohibitive in secure computation. Prior work either focused on minimizing the need for data-dependent access in these applications, or reduced its cost by improving oblivious RAM for secure computation (SC-ORAM). Despite extensive efforts to improve SC-ORAM, the most concretely efficient solutions still require $\approx0.7$s per access to arrays of $2^{30}$ entries. This plainly precludes using MPC in a number of settings. In this work, we take a pragmatic approach, exploring how concretely cheap MPC RAM access could be made if we are willing to allow one of the participants to learn the access pattern. We design a highly efficient Shared-Output Client-Server ORAM (SOCS-ORAM) that has constant overhead, uses one round trip of interaction per access, and whose access cost is independent of array size. SOCS-ORAM is useful in settings with hard performance constraints, where one party in the computation is more trustworthy and is allowed to learn the RAM access pattern. Our SOCS-ORAM is assisted by a third helper party that helps initialize (and reinitialize, as needed) the protocol and is designed for the honest-majority semi-honest corruption model. We implement our construction in C++ and report its performance. For an array of length $2^{30}$ with $4$B entries, we communicate $13$B per access and incur essentially no overhead beyond network latency.
Last updated:  2024-10-17
A Tight Lower Bound on the TdScrypt Trapdoor Memory-Hard Function
Jeremiah Blocki and Seunghoon Lee
A trapdoor Memory-Hard Function is a function that is memory-hard to evaluate for any party who does not have a trapdoor, but is substantially less expensive to evaluate with the trapdoor. Biryukov and Perrin (ASIACRYPT 2017) introduced the first candidate trapdoor Memory-Hard Function, called Diodon, which modifies a Memory-Hard Function called Scrypt by replacing a hash chain with repeated squaring modulo a composite number $N=pq$. The trapdoor, which consists of the prime factors $p$ and $q$, allows one to compute the function with significantly reduced cumulative memory cost (CMC) $O( n \log n \log^2 N )$, where $n$ denotes the running time parameter, e.g., the length of the hash chain or repeated squaring chain. By contrast, the best-known algorithm to compute Diodon without the trapdoor has CMC $O(n^2\log N)$. Auerbach et al. (EUROCRYPT 2024) provided the first provable lower bound on the CMC of TdScrypt --- a specific instantiation of Diodon. In particular, in idealized models, they proved that the CMC of TdScrypt is $\Omega(\frac{n^2}{\log n}\log N)$, which almost matches the upper bound $O(n^2\log N)$ but is off by a multiplicative $\log n$ factor. In this work, we show how to tighten the analysis of Auerbach et al. (EUROCRYPT 2024) and eliminate the gap. In particular, our results imply that TdScrypt has CMC at least $\Omega(n^2\log N)$.
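The trapdoor speedup rests on a generic fact about repeated squaring modulo $N = pq$, sketched below in Python (a toy example omitting Diodon's Scrypt-like memory-hard structure): without the factors, computing $x^{2^T} \bmod N$ appears to require $T$ sequential squarings, whereas knowing $p$ and $q$ lets one first reduce the exponent modulo Carmichael's $\lambda(N)$:
    # A toy sketch (ours) of the generic trapdoor shortcut for repeated
    # squaring mod N = p*q.
    from math import gcd

    p, q = 104729, 1299709            # toy primes; a real N has thousands of bits
    N = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael lambda(N) = lcm

    T, x = 10**6, 42                  # x must be coprime to N

    y_slow = x                        # no trapdoor: T sequential squarings
    for _ in range(T):
        y_slow = (y_slow * y_slow) % N

    e = pow(2, T, lam)                # trapdoor: reduce the exponent first
    y_fast = pow(x, e, N)
    assert y_slow == y_fast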
Last updated:  2024-10-17
On pairing-friendly 2-cycles and SNARK-friendly 2-chains of elliptic curves containing a curve from a prime-order family
Tomáš Novotný
Cryptographic protocols such as zkSNARKs use 2-cycles of elliptic curves for efficiency, often relying on pairing computations. However, 2-cycles of pairing-friendly curves are hard to find, and the only known cases consist of an MNT4 and an MNT6 curve. In this work, we prove that a 2-cycle containing an MNT3 curve cannot be pairing-friendly. For other curve families, we have a similar result for cryptographically attractive field sizes. Thus we cannot hope to find new pairing-friendly 2-cycles using the current methods. Furthermore, we show that there are no SNARK-friendly 2-chains of elliptic curves from combinations of MNT, Freeman and BN curves of reasonable size, except for the (MNT4, MNT6) chains.
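For context, recall the standard definition (in our notation): a 2-cycle of elliptic curves consists of $E_1/\mathbb{F}_p$ and $E_2/\mathbb{F}_q$, with $p$ and $q$ prime, whose group orders swap the two base fields,
\[
\#E_1(\mathbb{F}_p) = q \qquad \text{and} \qquad \#E_2(\mathbb{F}_q) = p,
\]
and the cycle is pairing-friendly when both curves additionally have small embedding degree, as in the known (MNT4, MNT6) cycles.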
Last updated:  2024-10-17
On the vector subspaces of $\mathbb{F}_{2^n}$ over which the multiplicative inverse function sums to zero
Claude Carlet
We study the behavior of the multiplicative inverse function (which plays an important role in cryptography and in the study of finite fields) with respect to a recently introduced generalization of almost perfect nonlinearity (APNness), called $k$th-order sum-freedom, which extends a classic characterization of APN functions and also has some relationship with integral attacks. This generalization corresponds to the fact that a vectorial function $F:\mathbb F_2^n\mapsto \mathbb F_2^m$ sums to a nonzero value over every $k$-dimensional affine subspace of $\mathbb F_2^n$, for some $k\leq n$ (APNness corresponds to $k=2$). The sum of the values of the inverse function $x\in \mathbb F_{2^n}\mapsto x^{2^n-2}\in \mathbb F_{2^n}$ over any affine subspace $A$ of $\mathbb{F}_{2^n}$ not containing 0 ({\em i.e.}, not a vector space) has been addressed, thanks to a simple expression for this sum, which shows that it never vanishes. We study in the present paper the case of vector (i.e., linear) subspaces, which is much less simple to handle. The sum depends on a coefficient in subspace polynomials. We study for which values of $k$ the multiplicative inverse function can sum to nonzero values over all $k$-dimensional vector subspaces. We show that, for every $k$ not co-prime with $n$, it sums to zero over at least one $k$-dimensional $\mathbb{F}_2$-subspace of $\mathbb{F}_{2^n}$. We study the behavior of the inverse function over direct sums of vector spaces and we deduce that the inverse function is $k$th-order sum-free if and only if it is $(n-k)$th-order sum-free. We derive several other results and we show that the set of values $k$ such that the inverse function is not $k$th-order sum-free is stable when adding two values of $k$ whose product is smaller than $n$ (and when subtracting two values under some conditions). We clarify the case of dimension at most 4 (equivalently, of co-dimension at most 4) and this allows us to address, for every $n$, all small enough values of $k$ of the form $3a+4b$.
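The phenomenon is easy to observe by brute force in the smallest interesting case $n = 4$, $k = 2$, where $\gcd(k, n) = 2 > 1$; the Python sketch below (ours) enumerates all $35$ two-dimensional $\mathbb{F}_2$-subspaces of $\mathbb{F}_{2^4}$ and finds those on which the inverse function sums to zero, among them the subfield $\mathbb{F}_4$:
    # Brute-force check (ours) on F_{2^4} with reduction polynomial X^4+X+1:
    # since gcd(2, 4) = 2 > 1, the inverse function x -> x^(2^4 - 2) must sum
    # to zero over at least one 2-dimensional F_2-subspace.
    n, MOD = 4, 0b10011

    def gf_mul(a, b):
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & (1 << n):
                a ^= MOD
            b >>= 1
        return r

    def gf_pow(a, e):
        r = 1
        while e:
            if e & 1:
                r = gf_mul(r, a)
            a = gf_mul(a, a)
            e >>= 1
        return r

    def inverse(x):
        return gf_pow(x, 2 ** n - 2)   # x^14, mapping 0 to 0

    # All 2-dimensional subspaces arise as {0, u, v, u^v} for independent u, v.
    subspaces = {frozenset({0, u, v, u ^ v})
                 for u in range(1, 2 ** n)
                 for v in range(1, 2 ** n) if v != u}

    def inv_sum(S):                    # addition in F_{2^n} is XOR
        acc = 0
        for x in S:
            acc ^= inverse(x)
        return acc

    zero_sum = [S for S in subspaces if inv_sum(S) == 0]
    print(len(zero_sum), "of", len(subspaces), "2-dim subspaces have zero sum")
    assert zero_sum                    # e.g. the subfield F_4 = {0, 1, 6, 7}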
Last updated:  2024-10-17
Revisiting the Robustness of (R/M)LWR under Polynomial Moduli with Applications to Lattice-Based Compact SO-CCA Security
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, Yang Yu, and Dawu Gu
This work conducts a comprehensive investigation on determining the entropic hardness of (R/M)LWR under polynomial modulus. Particularly, we establish the hardness of (M)LWR for general entropic secret distributions from (Module) LWE assumptions based on a new conceptually simple framework called rounding lossiness. By combining this hardness result and a trapdoor inversion algorithm with asymptotically the most compact parameters, we obtain a compact lossy trapdoor function (LTF) with improved efficiency. Extending our LTF with other techniques, we can derive a compact all-but-many LTF and PKE scheme against selective opening and chosen ciphertext attacks, solely based on (Module) LWE assumptions within a polynomial modulus. Additionally, we show a search-to-decision reduction for RLWR with Gaussian secrets from a new R\'enyi Divergence-based analysis.
Last updated:  2024-10-17
Discrete Gaussians Modulo Sub-Lattices: New Leftover Hash Lemmas for Discrete Gaussians
Haoxiang Jin, Feng-Hao Liu, Zhedong Wang, and Dawu Gu
The Leftover Hash Lemma (LHL) is a powerful tool for extracting randomness from an entropic distribution, with numerous applications in cryptography. LHLs for discrete Gaussians have been explored in both integer settings by Gentry et al. (GPV, STOC'08) and algebraic ring settings by Lyubashevsky et al. (LPR, Eurocrypt'13). However, the existing LHLs for discrete Gaussians have two main limitations: they require the Gaussian parameter to be larger than certain smoothing parameters, and they cannot handle cases where fixed and arbitrary information is leaked. In this work, we present new LHLs for discrete Gaussians in both integer and ring settings. Our results show that the Gaussian parameter can be improved by a factor of $\omega(\sqrt{\log\lambda})$ and $O(\sqrt{N})$ compared to the regularity lemmas of GPV and LPR, respectively, under similar parameter choices such as the dimension and ring. Furthermore, our new LHLs can be applied to leaked discrete Gaussians, and the result can be used to establish asymptotic hardness of the extended MLWE assumptions, addressing an open question in recent works by Lyubashevsky et al. (LNP, Crypto'22). Our central techniques involve new fine-grained analyses of the min-entropy in discrete Gaussians modulo sublattices and should be of interest.
Last updated:  2024-10-17
Security Guidelines for Implementing Homomorphic Encryption
Jean-Philippe Bossuat, Rosario Cammarota, Ilaria Chillotti, Benjamin R. Curtis, Wei Dai, Huijing Gong, Erin Hales, Duhyeong Kim, Bryan Kumara, Changmin Lee, Xianhui Lu, Carsten Maple, Alberto Pedrouzo-Ulloa, Rachel Player, Yuriy Polyakov, Luis Antonio Ruiz Lopez, Yongsoo Song, and Donggeon Yhee
Fully Homomorphic Encryption (FHE) is a cryptographic primitive that allows performing arbitrary operations on encrypted data. Since the conception of the idea in [RAD78], it was considered a holy grail of cryptography. After the first construction in 2009 [Gen09], it has evolved to become a practical primitive with strong security guarantees. Most modern constructions are based on well-known lattice problems such as Learning with Errors (LWE). Besides its academic appeal, in recent years FHE has also attracted significant attention from industry, thanks to its applicability to a considerable number of real-world use-cases. An upcoming standardization effort by ISO/IEC aims to support the wider adoption of these techniques. However, one of the main challenges that standards bodies, developers, and end users usually encounter is establishing parameters. This is particularly hard in the case of FHE because the parameters are not only related to the security level of the system, but also to the type of operations that the system is able to handle. In this paper, we provide examples of parameter sets for LWE targeting particular security levels that can be used in the context of FHE constructions. We also give examples of complete FHE parameter sets, including the parameters relevant for correctness and performance, alongside those relevant for security. As an additional contribution, we survey the parameter selection support offered in open-source FHE libraries.
Last updated:  2024-10-17
Full Key-Recovery Cubic-Time Template Attack on Classic McEliece Decapsulation
Vlad-Florin Drăgoi, Brice Colombier, Nicolas Vallet, Pierre-Louis Cayrel, and Vincent Grosso
Classic McEliece is one of the three code-based candidates in the fourth round of the NIST post-quantum cryptography standardization process in the Key Encapsulation Mechanism category. As such, its decapsulation algorithm is used to recover the session key associated with a ciphertext using the private key. In this article, we propose a new side-channel attack on the syndrome computation in the decapsulation algorithm that recovers the private key, which consists of the private Goppa polynomial $g$ and the permuted support $\mathcal{L}$. The attack relies on both practical aspects and theoretical contributions, namely that the side-channel distinguisher can accurately discriminate elements of the permuted support $\mathcal{L}$, while relying only on a standard noisy Hamming weight leakage assumption and that there exists a cubic-time algorithm that uses this information to recover the private Goppa polynomial $g$. Compared with previous work targeting the Classic McEliece private key, this drastically improves both on the assumptions made in the attacker model and on the overall efficiency of the key-recovery algorithm. We have carried out the attack in practice on a microcontroller target running the reference implementation of Classic McEliece, and make the full attack source code available.
Last updated:  2024-10-17
Large-Scale MPC: Scaling Private Iris Code Uniqueness Checks to Millions of Users
Remco Bloemen, Bryan Gillespie, Daniel Kales, Philipp Sippl, and Roman Walch
In this work, we tackle privacy concerns in biometric verification systems that typically require server-side processing of sensitive data (e.g., fingerprints and Iris Codes). Concretely, we design a solution that allows us to query whether a given Iris Code is similar to one contained in a given database, while all queries and datasets are protected using secure multiparty computation (MPC). Addressing the substantial performance demands of operational systems like World ID and aid distribution by the Red Cross, we propose new protocols to improve performance by more than three orders of magnitude compared to the recent state-of-the-art system Janus (S&P 24). Our final protocol can achieve a throughput of over 690 thousand Iris Code comparisons per second on a single CPU core, while protecting the privacy of both the query and database Iris Codes. Furthermore, using Nvidia NCCL we implement the whole protocol on GPUs while letting GPUs directly access the network interface. We thus avoid the costly data transfer between GPUs and CPUs, allowing us to achieve a throughput of 4.29 billion Iris Code comparisons per second in a 3-party MPC setting, where each party has access to 8 H100 GPUs. This GPU implementation achieves the performance requirements set by the Worldcoin foundation and will thus be used in their deployed World ID infrastructure.
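To make the underlying computation concrete, here is a plaintext sketch of the masked fractional-Hamming-distance comparison that such systems evaluate under MPC. The code length, threshold, and noise rate below are illustrative assumptions, not values taken from the paper.

```python
import random

N_BITS = 12_800      # assumed iris-code length (illustrative)
THRESHOLD = 0.36     # assumed match threshold (illustrative)

def fractional_hamming_distance(code_a, mask_a, code_b, mask_b):
    """Compare two iris codes on the bits that both masks declare valid."""
    valid = mask_a & mask_b
    n_valid = bin(valid).count("1")
    if n_valid == 0:
        return 1.0  # nothing comparable: treat as maximally distant
    return bin((code_a ^ code_b) & valid).count("1") / n_valid

random.seed(0)
db_code = random.getrandbits(N_BITS)
db_mask = random.getrandbits(N_BITS)

# A noisy re-capture of the same iris: flip roughly 12.5% of the bits.
noise = (random.getrandbits(N_BITS) & random.getrandbits(N_BITS)
         & random.getrandbits(N_BITS))
query_code = db_code ^ noise
query_mask = random.getrandbits(N_BITS)

d = fractional_hamming_distance(query_code, query_mask, db_code, db_mask)
print(f"distance = {d:.3f}  ->  match: {d < THRESHOLD}")
```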
Last updated:  2024-10-17
A Deep Study of The Impossible Boomerang Distinguishers: New Construction Theory and Automatic Search Methods
Xichao Hu, Lin Jiao, Dengguo Feng, Yonglin Hao, Xinxin Gong, and Yongqiang Li
The impossible boomerang attack (IBA) is a combination of the impossible differential attack and the boomerang attack, which has demonstrated remarkable power in the security evaluation of AES and other block ciphers. However, this method has not received sufficient attention in the field of symmetric cipher analysis. The only existing search method for impossible boomerang distinguishers (IBD), the core of IBAs, is the $\mathcal{UB}\text{-method}$, but it is considered rather rudimentary given current technological advancements and may result in missed opportunities for effective attacks. Therefore, this paper delves into a comprehensive study of the construction theory and automatic search methods for IBDs. Theoretically, we propose 5 IBD constructions aligned with the techniques of arbitrary S-boxes, boomerang distinguishers, the Boomerang Connectivity Table, U/L/EBCT, and mixed tables for differential propagation for SPN-network block ciphers, and 2 IBD constructions accompanied by state propagation for block ciphers with any structure. Furthermore, we investigate the relationship among these IBD constructions and demonstrate that the most superior IBD aligns precisely with the original definition. Technically, we develop a general SAT-based automatic search tool for IBDs by introducing optimized search strategies of the composite model method and the mixed model method. This tool not only considers the details of each operation but also takes into account the impact of the key schedule in a single-key setting. As applications, we first acquire 59584 4-round 1-active-word truncated IBDs for AES-128, 192 of which cannot be detected by the $\mathcal{UB}\text{-method}$. For Midori64, we demonstrate the non-existence of $7$-round $1$-active-word truncated IBDs and obtain $7296$ $6$-round $1$-active-word truncated IBDs, which complements the finding that no $6$-round $1$-active-word truncated IDs exist. For PRESENT-80, we obtain the first 6-round IBDs that cannot be detected by the $\mathcal{UB}\text{-method}$. These results indicate that our method outperforms the $\mathcal{UB}\text{-method}$ and that IBDs offer an advantage over IDs. We believe that our work can bring new insights to symmetric cipher analysis.
Last updated:  2024-10-17
A notion on S-boxes for a partial resistance to some integral attacks
Claude Carlet
In two recent papers, we introduced and studied the notion of $k$th-order sum-freedom of a vectorial function $F:\mathbb F_2^n\to \mathbb F_2^m$. This notion generalizes that of almost perfect nonlinearity (which corresponds to $k=2$) and has some relation with the resistance to integral attacks of those block ciphers using $F$ as a substitution box (S-box), by preventing the propagation of the division property of $k$-dimensional affine spaces. In the present paper, we show that this notion, which is rarely satisfied by vectorial functions, can be weakened while retaining the property that the S-boxes do not propagate the division property of $k$-dimensional affine spaces. This leads us to the property that we name $k$th-order $t$-degree-sum-freedom, whose strength decreases when $t$ increases, and which coincides with $k$th-order sum-freedom when $t=1$. The condition for $k$th-order $t$-degree-sum-freedom is that, for every $k$-dimensional affine space $A$, there exists a non-negative integer $j$ of 2-weight at most $t$ such that $\sum_{x\in A}(F(x))^j\neq 0$. We show, for a general $k$th-order $t$-degree-sum-free function $F$, that $t$ can always be taken smaller than or equal to $\min(k,m)$ under some reasonable condition on $F$, and that it is larger than or equal to $\frac k{\deg(F)}$, where $\deg(F)$ is the algebraic degree of $F$. We study examples for $k=2$ (case in which $t=1$ corresponds to APNness) showing that finding $j$ of 2-weight 2 can be challenging, and we begin the study of power functions, and in particular, of the multiplicative inverse function (used as S-box in the AES), for which we extend to $k$th-order $t$-degree-sum-freedom the result that it is $k$th-order sum-free if and only if it is $(n-k)$th-order sum-free. We begin the study of the cases of $k\in \{2,3,n-3,n-2,n-1,n\}$.
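The defining condition is directly checkable on small examples. Below is a minimal Python sketch under our own illustrative choices (the field $\mathbb F_{2^4}$ with modulus $x^4+x+1$, and $F$ the multiplicative inverse map, as in the AES S-box core), which scans all $2$-dimensional affine spaces and reports the smallest $t$ that suffices for every space; these choices are assumptions for the demo, not taken from the paper.

```python
N = 4
MOD = 0b10011  # reduction polynomial x^4 + x + 1 (assumed field choice)

def gf_mul(a, b):
    """Multiply in GF(2^N) via shift-and-xor reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# F = multiplicative inverse with F(0) = 0.
F = [0] + [gf_pow(x, 2**N - 2) for x in range(1, 2**N)]

def power_sum(space, j):
    """sum_{x in A} F(x)^j over GF(2^N); xor is field addition."""
    s = 0
    for x in space:
        s ^= gf_pow(F[x], j)
    return s

# k = 2: scan every 2-dimensional affine space {a, a+u, a+v, a+u+v} and
# record the least 2-weight of an exponent j giving a nonzero power sum.
worst = 0
for u in range(1, 2**N):
    for v in range(u + 1, 2**N):
        for a in range(2**N):
            space = (a, a ^ u, a ^ v, a ^ u ^ v)
            t = min(bin(j).count("1") for j in range(1, 2**N)
                    if power_sum(space, j) != 0)
            worst = max(worst, t)

print(f"inverse map on GF(2^{N}) is 2nd-order t-degree-sum-free for t = {worst}")
```

Since the inverse map is not APN for even $n$, some spaces here need an exponent of 2-weight 2, illustrating how $t > 1$ can arise.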
Last updated:  2024-10-17
On the practicality of quantum sieving algorithms for the shortest vector problem
Joao F. Doriguello, George Giapitzakis, Alessandro Luongo, and Aditya Morolia
One of the main candidates of post-quantum cryptography is lattice-based cryptography. Its cryptographic security against quantum attackers is based on the worst-case hardness of lattice problems like the shortest vector problem (SVP), which asks to find the shortest non-zero vector in an integer lattice. Asymptotic quantum speedups for solving SVP are known and rely on Grover's search. However, to assess the security of lattice-based cryptography against these Grover-like quantum speedups, it is necessary to carry out a precise resource estimation beyond asymptotic scalings. In this work, we perform a careful analysis of the resources required to implement several sieving algorithms aided by Grover's search for dimensions of cryptographic interest. To this end, we take into account fixed-point quantum arithmetic operations, non-asymptotic Grover's search, the cost of using quantum random access memory (QRAM), different physical architectures, and quantum error correction. We find that even under very optimistic assumptions like circuit-level noise of $10^{-5}$, code cycles of 100 ns, reaction time of 1 $\mu$s, and using state-of-the-art arithmetic circuits and quantum error-correction protocols, the best sieving algorithms require $\approx 10^{13}$ physical qubits and $\approx 10^{31}$ years to solve SVP on a lattice of dimension 400, which is roughly the dimension for minimally secure post-quantum cryptographic standards currently being proposed by NIST. We estimate that a 6-GHz-clock-rate single-core classical computer would take roughly the same amount of time to solve the same problem. We conclude that there is currently little to no quantum speedup in the dimensions of cryptographic interest and that realising a considerable quantum speedup using quantum sieving algorithms would require significant breakthroughs in theoretical protocols and hardware development.
Last updated:  2024-10-17
A Framework for Group Action-Based Multi-Signatures and Applications to LESS, MEDS, and ALTEQ
Giuseppe D'Alconzo, Andrea Flamini, Alessio Meneghetti, and Edoardo Signorini
A multi-signature scheme allows a list of signers to sign a common message. They are widely used in scenarios where the same message must be signed and transmitted by $N$ users, and, instead of concatenating $N$ individual signatures, employing a multi-signature can reduce the data to be sent. In recent years, there have been numerous practical proposals in the discrete logarithm setting, such as MuSig2 (CRYPTO'21) for the Schnorr signature. Recently, these attempts have been extended to post-quantum assumptions, with lattice-based proposals such as MuSig-L (CRYPTO'22). Given the growth of group action-based signatures, a natural question is whether a multi-signature can be built on the same models. In this work, we present the first construction of such a primitive relying on group action assumptions. We obtain a 3-round scheme achieving concurrent security in the ROM. Moreover, we instantiate it using the three candidates in NIST's additional post-quantum call, namely LESS, MEDS, and ALTEQ, obtaining a good compression rate for different parameter sets.
Last updated:  2024-10-17
Behemoth: transparent polynomial commitment scheme with constant opening proof size and verifier time
István András Seres and Péter Burcsi
Polynomial commitment schemes are fundamental building blocks in numerous cryptographic protocols such as verifiable secret sharing, zero-knowledge succinct non-interactive arguments, and many more. The most efficient polynomial commitment schemes rely on a trusted setup, which is undesirable in trust-minimized applications, e.g., cryptocurrencies. However, transparent polynomial commitment schemes are inefficient (polylogarithmic opening proofs and/or verification time) compared to their trusted counterparts. It has been an open problem to devise a transparent, succinct polynomial commitment scheme or prove an impossibility result in the transparent setting. In this work, for the first time, we create a transparent, constant-size polynomial commitment scheme called Behemoth with constant-size opening proofs and a constant-time verifier. The downside of Behemoth is that its prover runs in time cubic in the degree of the committed polynomial. We prove the security of our scheme in the generic group model and discuss parameter settings in which it remains practical even for the prover.
Last updated:  2024-10-17
A Note on Security Definitions for Secret Sharing with Certified Deletion
Dominique Bazin and Ryo Nishimaki
Bartusek and Raizes (CRYPTO 2024) proposed two security definitions for secret sharing, no-signaling certified deletion and adaptive certified deletion. We prove that adaptive certified deletion does not imply no-signaling certified deletion.
Last updated:  2024-10-17
Verifiable random function from the Deuring correspondence and higher dimensional isogenies
Antonin Leroux
In this paper, we introduce $\mathsf{DeuringVUF}$, a new Verifiable Unpredictable Function (VUF) protocol based on isogenies between supersingular curves. The most interesting application of this VUF is $\mathsf{DeuringVRF}$, a post-quantum Verifiable Random Function (VRF). The main advantage of this new scheme is its compactness, with combined public key and proof size of roughly 450 bytes, which is orders of magnitude smaller than other general-purpose post-quantum VRF constructions. This scheme is also the first post-quantum VRF satisfying unconditional uniqueness. We show that this scheme is practical by providing a first non-optimized C implementation that runs in roughly 20ms for verification and 175ms for evaluation. The function at the heart of our construction is the one that computes the codomain of an isogeny of big prime degree from its kernel. The evaluation can be performed efficiently with the knowledge of the endomorphism ring using a new ideal-to-isogeny algorithm introduced recently by Basso, Dartois, De Feo, Leroux, Maino, Pope, Robert and Wesolowski that uses computation of dimension $2$ isogenies between elliptic products to compute effectively the translation through the Deuring correspondence of any ideal. On the other hand, without the knowledge of the endomorphism ring, this computation appears to be hard. The security of our $\mathsf{DeuringVUF}$ holds under a new assumption called the one-more isogeny problem (OMIP). Another application of $\mathsf{DeuringVUF}$ is the first hash-and-sign signature based on isogenies in the standard model. While we don't expect the signature in itself to outperform the recent variants of SQIsign, it remains very competitive in both compactness and efficiency while providing a new framework for building isogeny-based signatures that could lead to new interesting applications. We also introduce several new algorithms for the effective Deuring correspondence. In particular, we introduce an algorithm to translate an ideal of norm a big power of a small prime $\ell$ into the corresponding isogeny of dimension $1$ using isogenies between abelian varieties of dimension $2$ as a tool. This algorithm can be used to improve the SQIsign signature scheme.
Last updated:  2024-10-17
Toward Key Independent Encryption based on Q-Problem
Abdelkader Laouid, Mostefa Kara, Mohammad Hammoudeh, and Abdullah T. Al-Essa
This paper defines a post-quantum encryption scheme based on discussion cryptography by introducing a new post-quantum hard problem called Q-Problem. The idea behind this scheme is to hide the keys of each entity, and the encryption process is based on secret message holders using only random private keys.
Last updated:  2024-10-17
Homomorphic Encryption with Authority
Joohee Lee and Joon-Woo Lee
Fully homomorphic encryption enables computations over encrypted data, which allows privacy-preserving services to be held between a server and a client. However, real-world applications demand practical considerations, especially concerning public safety and legal investigations. Existing FHE schemes focus solely on privacy, neglecting the societal risks posed by criminal activities utilizing privacy-preserving services. This paper introduces Homomorphic Encryption with Authority (HEwA), a novel framework that balances data privacy with public safety by incorporating an "authority" party. The proposed HEwA system operates in two phases: a normal phase, where client data privacy is protected, and an investigative phase, where the authority, a legally authorized entity such as a government agency, exerts the right to recover a suspicious client's data. We formalize the security model for HEwA, ensuring that client privacy is protected during the normal phase while enabling authorities to recover encrypted data in the investigative phase. As a concrete example, we design an efficient HEwA system based solely on the CKKS homomorphic encryption scheme, which supports approximate computations over real-number data, making it highly suitable for applications in AI such as secure genomic analysis. We further provide rigorous security proofs. This new approach addresses the tension between privacy and public safety in cloud services, paving the way for responsible use of homomorphic encryption in practice.
Last updated:  2024-10-17
Revisiting Products of the Form $X$ Times a Linearized Polynomial $L(X)$
Christof Beierle
For a $q$-polynomial $L$ over a finite field $\mathbb{F}_{q^n}$, we characterize the differential spectrum of the function $f_L\colon \mathbb{F}_{q^n} \rightarrow \mathbb{F}_{q^n}, x \mapsto x \cdot L(x)$ and show that, for $n \leq 5$, it is completely determined by the image of the rational function $r_L \colon \mathbb{F}_{q^n}^* \rightarrow \mathbb{F}_{q^n}, x \mapsto L(x)/x$. This result follows from the classification of the pairs $(L,M)$ of $q$-polynomials in $\mathbb{F}_{q^n}[X]$, $n \leq 5$, for which $r_L$ and $r_M$ have the same image, obtained in [B. Csajbok, G. Marino, and O. Polverino. A Carlitz type result for linearized polynomials. Ars Math. Contemp., 16(2):585–608, 2019]. For the case of $n>5$, we pose an open question on the dimensions of the kernels of $x \mapsto L(x) - ax$ for $a \in \mathbb{F}_{q^n}$. We further present a link between functions $f_L$ of differential uniformity bounded above by $q$ and scattered $q$-polynomials and show that, for odd values of $q$, we can construct CCZ-inequivalent functions $f_M$ with bounded differential uniformity from a given function $f_L$ fulfilling certain properties.
Last updated:  2024-10-17
Modular Reduction in CKKS
Jaehyung Kim and Taeyeong Noh
The Cheon-Kim-Kim-Song (CKKS) scheme is renowned for its efficiency in encrypted computing over real numbers. However, it lacks an important functionality that most exact schemes have: an efficient modular reduction. This derives from the fundamental difference in encoding structure. The CKKS scheme encodes messages in the least significant bits, while the other schemes encode them in the most significant bits (or in an equivalent manner). As a result, CKKS enjoys an efficient rescaling but inherently lacks the ability to perform modular reduction. Our key observation is that at the very bottom modulus, plaintexts encoded in the least significant bits can still enjoy the inherent modular reduction of RLWE. We suggest incorporating modular reduction as a primary operation for CKKS and exploring its impact on efficiency. We construct a novel homomorphic modular reduction algorithm using the discrete bootstrapping from Bae et al. [Asiacrypt'24] and a new discretization algorithm from modulus switching. One of the key advantages of our modular reduction is that its computational complexity grows sublinearly ($O(\log k)$) as we increase the input range $[0,k)$, which is asymptotically better than the state-of-the-art with $\geq O(k)$. We validated our algorithms with concrete experiments. Notably, our modulo 1 function for input range $[0, 2^{20})$ takes only 44.9 seconds with 13.3 bits of (mean) precision, on a single-threaded CPU. Recall that modular reduction over such a large range was almost infeasible in previous works, as they need to evaluate a polynomial of degree $> 2^{20}$ (or equivalent). As an application of our method, we compared a bit decomposition based on our framework with the state-of-the-art method from Drucker et al. [J.Cryptol'24]. Our method is $7.1 \times$ faster while reducing the failure probability by more than two orders of magnitude.
Last updated:  2024-10-17
Rondo: Scalable and Reconfiguration-Friendly Randomness Beacon
Xuanji Meng, Xiao Sui, Zhaoxin Yang, Kang Rong, Wenbo Xu, Shenglong Chen, Ying Yan, and Sisi Duan
We present Rondo, a scalable and reconfiguration-friendly distributed randomness beacon (DRB) protocol in the partially synchronous model. Rondo is the first DRB protocol that is built from batched asynchronous verifiable secret sharing (bAVSS) while avoiding the high $O(n^3)$ message cost, where $n$ is the number of nodes. Our key contribution lies in the introduction of a new variant of bAVSS called batched asynchronous verifiable secret sharing with partial output (bAVSS-PO). bAVSS-PO is a weaker primitive than bAVSS but allows us to build a secure and more scalable DRB protocol. We propose a bAVSS-PO protocol, Breeze. Breeze achieves the optimal $O(n)$ messages for the sharing stage and allows Rondo to offer better scalability than prior DRB protocols. Additionally, to support reconfiguration, we introduce Rondo-BFT, a dynamic and partially synchronous Byzantine fault-tolerant protocol inspired by Dyno (S\&P 2022). Unlike Dyno, Rondo-BFT provides a communication pattern that generates randomness beacon output periodically, making it well-suited for DRB applications. We implement our protocols and evaluate their performance on Amazon EC2 using up to 91 instances. Our evaluation results show that Rondo achieves higher throughput than existing works while offering better scalability, where the performance does not degrade as significantly as $n$ grows.
Last updated:  2024-10-17
Revocable Encryption, Programs, and More: The Case of Multi-Copy Security
Prabhanjan Ananth, Saachi Mutreja, and Alexander Poremba
Fundamental principles of quantum mechanics have inspired many new research directions, particularly in quantum cryptography. One such principle is quantum no-cloning, which has led to the emerging field of revocable cryptography. Roughly speaking, in a revocable cryptographic primitive, a cryptographic object (such as a ciphertext or program) is represented as a quantum state in such a way that surrendering it effectively translates into losing the capability to use this cryptographic object. All of the revocable cryptographic systems studied so far have a major drawback: the recipient only receives one copy of the quantum state. Worse yet, the schemes become completely insecure if the recipient receives many identical copies of the same quantum state---even though handing out multiple copies is clearly much more desirable in practice. While multi-copy security has been extensively studied for a number of other quantum cryptographic primitives, it has so far received little treatment in the context of unclonable primitives. Our work, for the first time, shows the feasibility of revocable primitives, such as revocable encryption and revocable programs, which satisfy multi-copy security in oracle models. This suggests that the stronger notion of multi-copy security is within reach in unclonable cryptography more generally, and therefore could lead to a new research direction in the field.
Last updated:  2024-10-16
Distributed PIR: Scaling Private Messaging via the Users' Machines
Elkana Tovey, Jonathan Weiss, and Yossi Gilad
This paper presents a new architecture for metadata-private messaging that counters scalability challenges by offloading most computations to the clients. At the core of our design is a distributed private information retrieval (PIR) protocol, where the responder delegates its work to alleviate PIR's computational bottleneck and catches misbehaving delegates by efficiently verifying their results. We introduce DPIR, a messaging system that uses distributed PIR to let a server storing messages delegate the work to the system's clients, such that each client contributes processing proportional to the number of messages it reads. The server removes clients returning invalid results, which DPIR leverages to integrate an incentive mechanism for honest client behavior by conditioning messaging through DPIR on correctly processing PIR requests from other users. The result is a metadata-private messaging system that asymptotically improves scalability over prior work with the same threat model. We show through experiments on a prototype implementation that DPIR concretely improves performance by $3.25\times$ and $4.31\times$ over prior work and that the performance gap grows with the user base size.
Last updated:  2024-10-16
Computationally Secure Aggregation and Private Information Retrieval in the Shuffle Model
Adrià Gascón, Yuval Ishai, Mahimna Kelkar, Baiyu Li, Yiping Ma, and Mariana Raykova
The shuffle model has recently emerged as a popular setting for differential privacy, where clients can communicate with a central server using anonymous channels or an intermediate message shuffler. This model was also explored in the context of cryptographic tasks such as secure aggregation and private information retrieval (PIR). However, this study was almost entirely restricted to the stringent notion of information-theoretic security. In this work, we study computationally secure aggregation protocols and PIR in the shuffle model. Our starting point is the insight that the previous technique of shuffling additive shares can be improved in the computational setting. We show that this indeed holds under the standard learning parity with noise (LPN) assumption, but even better efficiency follows from plausible conjectures about the multi-disjoint syndrome decoding (MDSD) problem that we introduce and study in this work. We leverage the above towards improving the efficiency of secure aggregation and PIR in the shuffle model. For secure aggregation of long vectors, our protocols require $9\times$-$25\times$ less communication than the previous information-theoretic solutions. Our PIR protocols enjoy the simplicity and concrete efficiency benefits of multi-server PIR while only requiring a single server to store the database. Under the MDSD assumption, they improve over recent single-server PIR constructions by up to two orders of magnitude.
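For intuition, here is a toy of the "shuffling additive shares" baseline the authors start from: each client splits its input into random additive shares, the shuffler destroys the link between shares and clients, and the server learns only the sum. All parameters below are illustrative assumptions, not values from the paper.

```python
import random

P = 2**61 - 1          # modulus (assumed)
N_CLIENTS = 100
SHARES_PER_CLIENT = 3  # each client splits its input into 3 random shares

random.seed(1)
inputs = [random.randrange(P) for _ in range(N_CLIENTS)]

pool = []
for x in inputs:
    shares = [random.randrange(P) for _ in range(SHARES_PER_CLIENT - 1)]
    shares.append((x - sum(shares)) % P)   # shares sum to x mod P
    pool.extend(shares)

random.shuffle(pool)  # the anonymous shuffler destroys share-client links

# The server only sees the shuffled pool, yet the aggregate is preserved:
assert sum(pool) % P == sum(inputs) % P
print("aggregate:", sum(pool) % P)
```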
Last updated:  2024-10-16
Circular Insecure Encryption: from Long Cycles to Short Cycles
Zehou Wu
We prove that the existence of a CPA-secure encryption scheme that is insecure in the presence of key cycles of length $n$ implies the existence of such a scheme for key cycles of any length less than $n$. Equivalently, if every encryption scheme in a class is $n$-circular secure and this class is closed under our construction, then every encryption scheme in this class is $n'$-circular secure for $n' > n$.
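For readers unfamiliar with the object in question, the sketch below constructs a key cycle of length $n$ with a toy stand-in cipher; it illustrates only what an $n$-cycle is, not the paper's construction, and the XOR "cipher" is deliberately insecure.

```python
import secrets

def enc(key, msg):
    """Toy stand-in for Enc_k(m); any CPA-secure scheme could be used."""
    return bytes(k ^ m for k, m in zip(key, msg))

n = 4
keys = [secrets.token_bytes(16) for _ in range(n)]

# Key cycle of length n: Enc_{k_1}(k_2), Enc_{k_2}(k_3), ..., Enc_{k_n}(k_1).
cycle = [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]

# n-circular security asks that this list reveal nothing about the keys;
# the paper shows that insecurity for cycles of length n implies
# insecurity for every shorter cycle length as well.
print([c.hex()[:16] for c in cycle])
```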
Last updated:  2024-10-16
GAPP: Generic Aggregation of Polynomial Protocols
Chaya Ganesh, Sikhar Patranabis, Shubh Prakash, and Nitin Singh
We propose a generic framework called GAPP for aggregation of polynomial protocols. This allows proving $n$ instances of a polynomial protocol using a single aggregate proof that has $O(\log n)$ size, and can be verified using $O(\log^2 n)$ operations. The satisfiability of several univariate polynomial identities over a domain is reduced to the satisfiability of a single bivariate polynomial identity over a related domain, where the bivariate polynomials interpolate a batch of univariate polynomials over the domain. We construct an information-theoretic protocol for proving the satisfiability of the bivariate polynomial identity, which is then compiled using any bivariate polynomial commitment scheme (PCS) to yield an argument of knowledge for the aggregation relation. GAPP can be applied in a black-box way to several popular SNARKs over bilinear groups that are modeled as polynomial protocols. We present a new bivariate polynomial commitment scheme, bPCLB, with succinct verification that yields an efficient instantiation of GAPP. In addition, the prover performs only a sublinear number of cryptographic operations in the evaluation proof. Towards constructing bPCLB, we show a new folding technique that we call Lagrangian folding. The bivariate PCS bPCLB and the Lagrangian folding scheme are of independent interest. We implement bPCLB and experimentally validate the practical efficiency of our GAPP instantiation. For the popular PLONK proof system, we achieve $25$-$30\%$ faster proof generation than the naive baseline of generating $n$ separate PLONK proofs. Compared to all existing aggregation schemes that incur additional prover overheads on top of the baseline, we achieve significantly more efficient proving, while retaining succinct verification. We demonstrate the versatility of our GAPP framework by outlining applications of practical interest: tuple lookups that significantly outperform existing lookup arguments in terms of prover overheads, and proofs for non-uniform computation with à la carte prover cost.
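The batching step admits a short worked equation (in our notation, which may differ from the paper's): interpolating the $n$ univariate polynomials along a second domain turns $n$ identities into one bivariate identity.

```latex
% Sketch of the batching step (our notation): n univariate identities
% p_i(X) = 0 over a domain H become one bivariate identity over H x K.
\[
  P(X,Y) \;=\; \sum_{i=1}^{n} p_i(X)\,L_i(Y),
  \qquad
  L_i(Y) \;=\; \prod_{j\neq i}\frac{Y-y_j}{y_i-y_j},
  \qquad K=\{y_1,\dots,y_n\},
\]
\[
  P(X,y_i)=p_i(X) \text{ for all } i
  \quad\Longrightarrow\quad
  \bigl(P\equiv 0 \text{ on } H\times K
  \;\iff\; p_i\equiv 0 \text{ on } H \text{ for every } i\bigr).
\]
```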
Last updated:  2024-10-16
Oblivious Turing Machine
Sofiane Azogagh, Victor Delfour, and Marc-Olivier Killijian
In the ever-evolving landscape of Information Technologies, private decentralized computing on an honest-but-curious server has emerged as a prominent paradigm. While numerous schemes exist to safeguard data during computation, the focus has primarily been on protecting the confidentiality of the data itself, often overlooking the potential information leakage arising from the function evaluated by the server. Recognizing this gap, this article aims to address the issue by presenting and implementing an innovative solution for ensuring the privacy of both the data and the program. We introduce a novel approach that combines the power of Fully Homomorphic Encryption with the concept of the Turing Machine model, resulting in the first fully secure, practical, non-interactive oblivious Turing Machine. Our Oblivious Turing Machine construction is based on only three hypotheses: the hardness of the Ring Learning With Errors problem, the ability to homomorphically evaluate non-linear functions, and the capacity to blindly rotate elements of a data structure. Based only on these three assumptions, we propose an implementation of an Oblivious Turing Machine relying on the TFHE cryptosystem and present some implementation results.
Last updated:  2024-10-16
Blind zkSNARKs for Private Proof Delegation and Verifiable Computation over Encrypted Data
Mariana Gama, Emad Heydari Beni, Jiayi Kang, Jannik Spiessens, and Frederik Vercauteren
In this paper, we show for the first time that it is practical to privately delegate proof generation of zkSNARKs proving up to $2^{20}$ R1CS constraints to a single server. We achieve this by homomorphically computing zkSNARK proof generation, an approach we call blind zkSNARKs. We formalize the concept of blind proofs, analyze their cryptographic properties and show that the resulting blind zkSNARKs remain sound when compiled using BCS compilation. Garg et al. gave a similar framework at CRYPTO 2024, but no practical instantiation for proving non-trivial computations was known. By delegating proof generation, we are able to reduce client computation time from 10 minutes to mere seconds, while server computation time remains limited to 20 minutes. We also propose a practical construction for vCOED supporting constraint sizes four orders of magnitude larger than the current state-of-the-art verifiable FHE-based approaches. These results are achieved by optimizing Fractal for the GBFV homomorphic encryption scheme, e.g., by designing specialized homomorphic circuits with two-dimensional NTTs. Furthermore, we make the proofs publicly verifiable by appending a zero-knowledge Proof of Decryption (PoD). We propose a new construction for PoDs, optimized for low proof generation time, exploiting modulus and ring switching in GBFV; these techniques might be of independent interest. Finally, we implement the latter protocol in C and report on execution time and proof sizes.
Last updated:  2024-10-16
Unclonable Functional Encryption
Arthur Mehta and Anne Müller
In a functional encryption (FE) scheme, a user that holds a ciphertext and a function-key can learn the result of applying the function to the plaintext message. Security requires that the user does not learn anything beyond the function evaluation. On the other hand, unclonable encryption (UE) is a uniquely quantum primitive, which ensures that an adversary cannot duplicate a ciphertext to decrypt the same message multiple times. In this work, we introduce unclonable quantum functional encryption (UFE), which extends the notion of FE to the quantum setting and possesses the unclonable security of UE. We give a construction for UFE that supports arbitrary quantum messages and polynomially sized circuits, and achieves unclonable-indistinguishable security for dependently sampled function keys. In particular, our UFE guarantees that two parties cannot simultaneously recover the correct function outputs using two independently sampled function keys. Our construction combines quantum garbled circuits [BY22] and quantum-key unclonable encryption [AKY24], and leverages techniques from the plaintext expansion arguments in [Hir+23]. As an application, we give the first construction for public-key UE with variable decryption keys. Lastly, we establish a connection between quantum indistinguishability obfuscation (qiO) and quantum functional encryption (QFE), showing that any multi-input indistinguishability-secure quantum functional encryption scheme unconditionally implies the existence of qiO.
Last updated:  2024-10-16
High-Throughput Secure Multiparty Computation with an Honest Majority in Various Network Settings
Christopher Harth-Kitzerow, Ajith Suresh, Yongqin Wang, Hossein Yalame, Georg Carle, and Murali Annavaram
In this work, we present novel protocols over rings for semi-honest secure three-party computation (3PC) and malicious four-party computation (4PC) with one corruption. While most existing works focus on improving total communication complexity, challenges such as network heterogeneity and computational complexity, which impact MPC performance in practice, remain underexplored. Our protocols address these issues by tolerating multiple arbitrarily weak network links between parties without any substantial decrease in performance. Additionally, they significantly reduce computational complexity by requiring up to half the number of basic instructions per gate compared to related work. These improvements lead to up to twice the throughput of state-of-the-art protocols in homogeneous network settings and up to eight times higher throughput in real-world heterogeneous settings. These advantages come at no additional cost: Our protocols maintain the best-known total communication complexity per multiplication, requiring 3 elements for 3PC and 5 elements for 4PC. We implemented our protocols alongside several state-of-the-art protocols (Replicated 3PC, ASTRA, Fantastic Four, Tetrad) in a novel open-source C++ framework optimized for high throughput. Five out of six implemented 3PC and 4PC protocols achieve more than one billion 32-bit multiplications or over 32 billion AND gates per second using our implementation in a 25 Gbit/s LAN environment. This represents the highest throughput achieved in 3PC and 4PC so far, outperforming existing frameworks like MP-SPDZ, ABY3, MPyC, and MOTION by two to three orders of magnitude.
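As background, the following sketch shows the classic semi-honest replicated-sharing 3PC multiplication over $\mathbb{Z}_{2^{64}}$, i.e., the "Replicated 3PC" baseline implemented alongside the new protocols; resharing and all communication are elided, and this is not the authors' protocol itself.

```python
import random

MASK = (1 << 64) - 1  # arithmetic modulo 2^64

def share(x):
    """Split x into 3 additive shares; party i holds (s[i], s[(i+1)%3])."""
    s = [random.getrandbits(64) for _ in range(2)]
    s.append((x - s[0] - s[1]) & MASK)
    return s

random.seed(2)
x, y = random.getrandbits(64), random.getrandbits(64)
xs, ys = share(x), share(y)

# Each party i, holding shares i and i+1 of both inputs, locally computes
# one additive share of the product (re-randomization omitted here):
z = [(xs[i] * ys[i] + xs[i] * ys[(i + 1) % 3] + xs[(i + 1) % 3] * ys[i]) & MASK
     for i in range(3)]

assert sum(z) & MASK == (x * y) & MASK
print("product recovered from shares:", hex(sum(z) & MASK))
```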
Last updated:  2024-10-16
Toward Optimal-Complexity Hash-Based Asynchronous MVBA with Optimal Resilience
Jovan Komatovic, Joachim Neu, and Tim Roughgarden
Multi-valued validated Byzantine agreement (MVBA), a fundamental primitive of distributed computing, enables $n$ processes to agree on a valid $\ell$-bit value, despite $t$ faulty processes behaving arbitrarily. Among hash-based protocols for the asynchronous setting with adaptive faults, the state-of-the-art HMVBA protocol has optimal $O(1)$ time complexity and near-optimal $O(n \ell + n^2 \kappa \log n)$ bit complexity, but tolerates only $t < n/5$ faults. We present REDUCER, an MVBA protocol that matches HMVBA's time and bit complexity and improves resilience to $t < n/4$. Like HMVBA, REDUCER relies solely on collision-resistant hash functions. Toward optimal one-third resilience, we also propose REDUCER++, an MVBA protocol with further improved $t < (1/3 - \epsilon)n$ resilience, for any fixed $\epsilon > 0$, assuming hash functions modeled as random oracles. Time and bit complexity of REDUCER++ remain constant and quasi-quadratic, respectively, with constants depending on $\epsilon$.
Last updated:  2024-10-16
Another L makes it better? Lagrange meets LLL and may improve BKZ pre-processing
Sebastien Balny, Claire Delaplace, and Gilles Dequen
We present a new variant of the LLL lattice reduction algorithm, inspired by Lagrange's notion of pair-wise reduction, called L4. Similar to LLL, our algorithm is polynomial in the dimension of the input lattice, as well as in $\log M$, where $M$ is an upper-bound on the norm of the longest vector of the input basis. We experimentally compared the norm of the first basis vector obtained with LLL and L4 up to dimension 200. On average, we obtain vectors that are up to $16\%$ shorter. We also used our algorithm as a pre-processing step for the BKZ lattice reduction algorithm with blocksize 24. In practice, up to dimension 140, this allows us to reduce the norm of the shortest basis vector on average by $3\%$, while the runtime does not significantly increase. In $10\%$ of our tests, the whole process was even faster.
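For context, the pair-wise reduction that inspires L4 is the classical Lagrange (Gauss) two-vector procedure; a minimal sketch follows. This is the textbook step, not the authors' full L4 algorithm.

```python
def lagrange_reduce(u, v):
    """Lagrange/Gauss pair-wise reduction of a rank-2 lattice basis:
    returns a basis with |u| <= |v| and |<u, v>| <= |u|^2 / 2."""
    def norm2(w):
        return sum(c * c for c in w)
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # subtract the nearest-integer multiple of the shorter vector
        m = round(sum(a * b for a, b in zip(u, v)) / norm2(u))
        v = [b - m * a for a, b in zip(u, v)]
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

# Example: the skewed basis (1, 1), (2, 3) of Z^2 reduces to unit vectors.
print(lagrange_reduce([1, 1], [2, 3]))   # -> ([0, 1], [1, 0])
```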
Last updated:  2024-10-16
Sunfish: Reading Ledgers with Sparse Nodes
Giulia Scaffino, Karl Wüst, Deepak Maram, Alberto Sonnino, and Lefteris Kokoris-Kogias
The increased throughput offered by modern blockchains, such as Sui, Aptos, and Solana, enables processing thousands of transactions per second, but it also introduces higher costs for decentralized application (dApp) developers who need to track and verify changes in the state of their application. This is true because dApp developers run full nodes, which download and re-execute every transaction to track the global state of the chain. However, this becomes prohibitively expensive for high-throughput chains due to high bandwidth, computational, and storage requirements. A common alternative is to use light nodes. However, light nodes only verify the inclusion of a set of transactions and have no guarantees that the set is complete, i.e., that it includes all relevant transactions. Under a dishonest majority, light nodes can also be tricked into accepting invalid transactions. To bridge the gap between full and light nodes, we propose and formalize a new type of blockchain node: the sparse node. A sparse node tracks only a subset of the blockchain's state: it verifies that the received set of transactions touching the substate is complete, and re-executes those transactions to assess their validity. A sparse node retains important security properties even under adversarial majorities, and requires an amount of resources proportional to the number of transactions in the substate and to the size of the substate itself. We further present Sunfish, an instantiation of a sparse node protocol. Our analysis and evaluation show that Sunfish reduces the bandwidth consumption of real blockchain applications by several orders of magnitude when compared to a full node.
Last updated:  2024-10-16
Information Set Decoding for Ring-Linear Codes
Giulia Cavicchioni, Alessio Meneghetti, and Giovanni Tognolini
Information set decoding (ISD) algorithms currently offer the most powerful tool to solve the two archetypal problems of coding theory, namely the Codeword Finding Problem and the Syndrome Decoding Problem. Traditionally, ISD algorithms have primarily been studied for linear codes over finite fields, equipped with the Hamming metric. Recently, however, other possibilities have also been explored. These algorithms have been adapted to different ambient spaces and metrics, such as the rank metric or the Lee metric over $\mathbb Z_m$. In this paper, we show that it is possible to leverage the ring structure to construct more efficient decoding algorithms than those obtained by simply adapting ISD. In particular, we describe a framework that can be applied to any additive metric, including Hamming and Lee, and that can be adapted to the case of the rank metric, providing algorithms to solve the two aforementioned problems, along with their average computational costs.
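As a point of reference, here is a minimal Prange-style ISD sketch over $\mathbb F_2$ in the Hamming metric, the classical setting the paper generalizes beyond; the instance sizes are toy values chosen so the search succeeds quickly, and this is not the authors' ring-linear algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def prange(H, s, w, tries=10_000):
    """Search for e with H @ e = s (mod 2) and Hamming weight <= w."""
    r, n = H.shape
    for _ in range(tries):
        perm = rng.permutation(n)
        # Work on [H_perm | s]; try to reduce the left r x r block to I.
        M = np.concatenate([H[:, perm], s.reshape(-1, 1)], axis=1).astype(np.int64)
        singular = False
        for i in range(r):
            pivots = np.flatnonzero(M[i:, i])
            if pivots.size == 0:
                singular = True      # this permutation yields no pivot here
                break
            p = i + pivots[0]
            if p != i:
                M[[i, p]] = M[[p, i]]
            others = np.flatnonzero(M[:, i])
            others = others[others != i]
            M[others] ^= M[i]        # eliminate mod 2
        if singular:
            continue
        s_prime = M[:, -1]
        if s_prime.sum() <= w:       # error fits entirely on these r columns
            e = np.zeros(n, dtype=np.int64)
            e[perm[:r]] = s_prime
            return e
    return None

# Toy instance: plant an error of weight 3 and recover one of equal weight.
r, n, w = 12, 24, 3
H = rng.integers(0, 2, size=(r, n))
e_true = np.zeros(n, dtype=np.int64)
e_true[rng.choice(n, size=w, replace=False)] = 1
s = H @ e_true % 2

e_found = prange(H, s, w)
assert e_found is not None and e_found.sum() <= w
assert np.array_equal(H @ e_found % 2, s)
print("found error of weight", int(e_found.sum()))
```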
Last updated:  2024-10-16
Willow: Secure Aggregation with One-Shot Clients
James Bell-Clark, Adrià Gascón, Baiyu Li, Mariana Raykova, and Phillipp Schoppmann
A common drawback of secure vector summation protocols in the single-server model is that they impose at least one synchronization point between all clients contributing to the aggregation. This results in clients waiting on each other to advance through the rounds of the protocol, leading to large latency (or failures due to too many dropouts) even if the protocol is computationally efficient. In this paper we propose protocols in the single-server model where clients contributing data to the aggregation (i) send a single message to the server and (ii) can join aggregation sessions dynamically whenever they have resources, i.e., without the need for synchronizing their reporting time with any other clients. Our approach is based on a committee of parties that aid in the computation by running a setup phase before data collection starts, and a verification/decryption phase once it ends. Unlike existing committee-based protocols such as Flamingo (S\&P 2023), the cost for committee members can be made sub-linear in the number of clients, and does not depend on the size of the input client vectors. Our experimental evaluation shows that our protocol, even while allowing dynamic client participation, is competitive with the state of the art protocols that do not have that feature in both computation and communication.
Last updated:  2024-10-16
Commutative Cryptanalysis as a Generalization of Differential Cryptanalysis
Jules Baudrin, Christof Beierle, Patrick Felke, Gregor Leander, Patrick Neumann, Léo Perrin, and Lukas Stennes
Recently, Baudrin et al. analyzed a special case of Wagner's commutative diagram cryptanalysis, referred to as commutative cryptanalysis. For a family $(E_k)_k$ of permutations on a finite vector space $G$, commutative cryptanalysis exploits the existence of affine permutations $A,B \colon G \rightarrow G$, $I \notin \{A,B\}$ such that $E_k \circ A (x) = B \circ E_k(x)$ holds with high probability, taken over inputs $x$, for a significantly large set of weak keys $k$. Several attacks against symmetric cryptographic primitives can be formulated within the framework of commutative cryptanalysis, most importantly differential attacks, as well as rotational and rotational-differential attacks. Besides, the notion of $c$-differentials on S-boxes can be analyzed as a special case within this framework. We discuss the relations between a general notion of commutative cryptanalysis, with $A$ and $B$ being arbitrary functions over a finite Abelian group, and differential cryptanalysis, both from the view of conducting an attack on a symmetric cryptographic primitive, as well as from the view of a theoretical study of cryptographic S-boxes.
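The defining probability is easy to estimate empirically. The sketch below measures $\Pr_x[E_k(A(x)) = B(E_k(x))]$ for a toy 8-bit keyed map with $A = B$ a rotation, i.e., rotational cryptanalysis as a special case of the framework; the map and keys are our illustrative choices, not examples from the paper.

```python
def rol(x, r, n=8):
    """Rotate an n-bit word left by r."""
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

def E(k, x):
    """Toy keyed 8-bit map: xor key, rotate, add key (illustrative only)."""
    x ^= k
    x = rol(x, 3)
    return (x + k) & 0xFF

A = B = lambda x: rol(x, 1)   # rotational cryptanalysis: A = B = ROL_1

for k in (0x00, 0x5A, 0xFF):
    hits = sum(E(k, A(x)) == B(E(k, x)) for x in range(256))
    print(f"key {k:#04x}: Pr[E(A(x)) = B(E(x))] = {hits}/256")
# For k = 0 this toy map commutes with probability 1 -- a "weak key" in
# the framework's sense; other keys commute with lower probability.
```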
Last updated:  2024-10-16
Designing a General-Purpose 8-bit (T)FHE Processor Abstraction
Daphné Trama, Pierre-Emmanuel Clet, Aymen Boudguiga, and Renaud Sirdey
Making the most of TFHE programmable bootstrapping to evaluate functions or operators otherwise challenging to perform with only the native addition and multiplication of the scheme is a very active line of research. In this paper, we systematize this approach and apply it to build an 8-bit FHE processor abstraction, i.e., a software entity that works over FHE-encrypted 8-bit data and presents itself to the programmer by means of a conventional-looking assembly instruction set. In doing so, we provide several homomorphic LUT dereferencing operators based on variants of the tree-based method and show that they are the most efficient option for manipulating encryptions of 8-bit data (optimally represented as two base-16 digits). We then systematically apply this approach over a set of around 50 instructions, including, notably, conditional assignments, divisions, and fixed-point arithmetic operations. We then test the approach on several simple algorithms, including the execution of a neuron with a sigmoid activation function at 16-bit precision. We conclude the paper by comparing our work to the FHE compilers available in the state of the art. Finally, this work reveals that a very limited set of functional bootstrapping patterns is versatile and efficient enough to achieve general-purpose FHE computations beyond the boolean circuit approach. As such, these patterns may be an appropriate target for further works on advanced software optimizations or hardware implementations.
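In plaintext terms, dereferencing an 8-bit LUT on a value held as two base-16 digits has the following two-level shape; under TFHE, each selection level would be performed by programmable bootstrapping. The table below is an assumed example function, not one from the paper, and the sketch shows only the data flow.

```python
# Plaintext analogue of a tree-based LUT dereference on an 8-bit value
# held as two base-16 digits (hi, lo).
TABLE = [(3 * v + 7) & 0xFF for v in range(256)]   # assumed example function

def lut_8bit(hi, lo):
    # Level 1: use the high digit to select one of 16 sub-tables ...
    sub_table = TABLE[16 * hi : 16 * hi + 16]
    # Level 2: ... then use the low digit to select the entry inside it.
    return sub_table[lo]

v = 0xB6
hi, lo = v >> 4, v & 0xF
assert lut_8bit(hi, lo) == TABLE[v]
print(f"LUT[{v:#04x}] = {lut_8bit(hi, lo):#04x}")
```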
Last updated:  2024-10-16
Towards Practical Oblivious Map
Xinle Cao, Weiqi Feng, Jian Liu, Jinjin Zhou, Wenjing Fang, Lei Wang, Quanqing Xu, Chuanhui Yang, and Kui Ren
Oblivious map (OMAP) is an important component in encrypted databases, utilized to safeguard against the server inferring sensitive information about a client's encrypted key-value store based on \emph{access patterns}. Despite its widespread usage and importance, existing OMAP solutions face practical challenges, including the need for a large number of interaction rounds between the client and server, as well as substantial communication bandwidth requirements. For example, the state-of-the-art protocol named OMIX++ in VLDB 2024 still requires $O(\log{n})$ interaction rounds and $O(\log^2{n})$ communication bandwidth per access, where $n$ denotes the total number of key-value pairs stored. In this work, we introduce more practical and efficient OMAP constructions. Consistent with all prior OMAPs, our proposed constructions also adapt only the \emph{tree-based Oblivious RAM} (ORAM) to achieve OMAP for enhanced practicality. In terms of complexity, our approach needs only $O(\log{n}/\log{\log{n}})$ interaction rounds and $O(\log^2{n}/\log{\log{n}})$ communication bandwidth per data access, achieving, to the best of our knowledge, the lowest communication volume. This improvement results from our two main contributions. First, unlike prior works that rely solely on search trees, we design a novel framework for OMAP that combines a hash table with search trees. Second, we propose a more efficient tree-based ORAM named DAORAM, which is of significant independent interest. This newly developed ORAM noticeably accelerates our constructions. We implement both our proposed constructions and prior methods to experimentally demonstrate that our constructions substantially outperform prior methods in terms of efficiency.
Last updated:  2024-10-16
Batch Range Proof: How to Make Threshold ECDSA More Efficient
Guofeng Tang, Shuai Han, Li Lin, Changzheng Wei, and Ying Yan
With the demand of cryptocurrencies, threshold ECDSA recently regained popularity. So far, several methods have been proposed to construct threshold ECDSA, including the usage of OT and homomorphic encryption (HE). Due to the mismatch between the plaintext space and the signature space, threshold ECDSA based on HE schemes such as Paillier and Joye-Libert (JL) encryption always requires zero-knowledge range proofs. However, the overhead of range proofs constitutes a major portion of the total cost. In this paper, we propose efficient batch range proofs to improve the efficiency of threshold ECDSA. At the heart of our efficiency improvement is a new technical tool called the Multi-Dimension Forking Lemma, a generalization of the well-known general forking lemma [Bellare and Neven, CCS 2006]. Based on our new tool, we construct efficient batch range proofs for Paillier and JL encryptions, and use them to give batch multiplication-to-addition (MtA) protocols, which are crucial to most threshold ECDSA protocols. Our constructions improve the prior Paillier-based MtA by a factor of 2 and the prior JL-based MtA by a factor of 3, in both computation and bandwidth, in an amortized sense. Our batch MtA can be used to improve the efficiency of most Paillier- and JL-based threshold ECDSA protocols. As three typical examples, our benchmarking results show: -- We improve the Paillier-based CGGMP20 [Canetti et al., CCS 2020] in bandwidth by a factor of 2.1 to 2.4, and in computation by a factor of 1.5 to 1.7. -- By implementing threshold ECDSA with the batch JL MtA of XAL+23 [Xue et al., CCS 2023] and our batch JL MtA, respectively, our batch construction improves theirs in bandwidth by a factor of 2.0 to 2.29, and in computation by a factor of 1.88 to 2.09. -- When replacing the OT-based MtA in DKLs24 [Doerner et al., S$\&$P 2024] with our Paillier-based batch MtA, we improve the bandwidth efficiency by $7.8\times$ at the cost of $5.7\times$ slower computation.
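To see why range proofs arise at all, here is a toy Paillier-based MtA exchange, the building block whose proofs the paper batches. The parameters are deliberately tiny and insecure (Mersenne primes and a small stand-in group order), chosen only to make the arithmetic runnable; this is the textbook MtA pattern, not the paper's batched protocol.

```python
from math import gcd
import random

# Mersenne primes: certainly prime, but far too small for real security.
P_PRIME = (1 << 31) - 1
Q_PRIME = (1 << 61) - 1
n = P_PRIME * Q_PRIME
n2 = n * n
lam = (P_PRIME - 1) * (Q_PRIME - 1)      # simplified lambda = phi(n)
mu = pow(lam, -1, n)

def enc(m):
    """Textbook Paillier encryption with g = n + 1."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

Q = 65_537                   # stand-in for the ECDSA group order (assumed)
random.seed(4)
a, b = random.randrange(Q), random.randrange(Q)   # multiplicative shares

c = enc(a)                                   # P1 -> P2: Enc(a)
beta_prime = random.randrange(Q)
d = (pow(c, b, n2) * enc(beta_prime)) % n2   # P2 -> P1: Enc(a*b + beta')
alpha = dec(d) % Q                           # P1's additive share
beta = -beta_prime % Q                       # P2's additive share

assert (alpha + beta) % Q == (a * b) % Q
# In a real protocol, zero-knowledge range proofs -- the objects this
# paper batches -- guarantee that a*b + beta' cannot wrap around mod n.
print("MtA consistent:", (alpha + beta) % Q == (a * b) % Q)
```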
Last updated:  2024-10-16
Secure Transformer Inference
Mu Yuan, Lan Zhang, Guoliang Xing, and Xiang-Yang Li
Security of model parameters and user data is critical for Transformer-based services, such as ChatGPT. While recent strides in secure two-party protocols have successfully addressed security concerns in serving Transformer models, their adoption is practically infeasible due to the prohibitive cryptographic overheads involved. Drawing insights from our hands-on experience in developing two real-world Transformer-based services, we identify the inherent efficiency bottleneck in the two-party assumption. To overcome this limitation, we propose a novel three-party threat model that consists of the model developer, the model server, and the data owner. Based on this framework, we design a semi-symmetric permutation-based protection scheme and present STIP, the first secure Transformer inference protocol without any inference accuracy loss. We analyze STIP's resistance to brute-force, known-plaintext, and social engineering attacks and prove the privacy leakage upper bound using distance correlation. We also propose a method to integrate a trusted execution environment with STIP to make model parameters resistant to model extraction and fine-tuning attacks. Experiments on six representative series of Transformer models, with up to 70 billion parameters, in real systems show that STIP has strong security and no loss of accuracy. For auto-regressive token generation, STIP achieves 31.7 ms latency for the LLaMA2-7b model, significantly reducing the 5-minute overhead of the state-of-the-art two-party protocols.
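The permutation idea can be illustrated on a single linear layer: because permutations commute through matrix products, a server holding only permuted inputs and permuted weights still produces the (permuted) correct output. This is a minimal sketch of the principle under our own toy dimensions, not STIP's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 8, 4
x = rng.standard_normal((1, d))          # data owner's activation
W = rng.standard_normal((d, m))          # model developer's weights

P = np.eye(d)[rng.permutation(d)]        # secret input permutation
Q = np.eye(m)[rng.permutation(m)]        # secret output permutation

x_perm = x @ P                           # what the data owner sends
W_perm = P.T @ W @ Q                     # what the model developer deploys

y_perm = x_perm @ W_perm                 # all the server computes / sees
y = y_perm @ Q.T                         # owner undoes Q locally

assert np.allclose(y, x @ W)             # x P P^T W Q Q^T = x W
print("permuted inference matches plaintext inference")
```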
Last updated:  2024-10-16
Perceived Information Revisited II: Information-Theoretical Analysis of Deep-Learning Based Side-Channel Attacks
Akira Ito, Rei Ueno, and Naofumi Homma
Previous studies on deep-learning-based side-channel attacks (DL-SCAs) have shown that traditional performance evaluation metrics commonly used in DL, like accuracy and F1 score, are not effective in evaluating DL-SCA performance. Therefore, some previous studies have proposed new alternative metrics for evaluating the performance of DL-SCAs. Notably, perceived information (PI) and effective perceived information (EPI) are major metrics based on information theory. While it has been experimentally confirmed that these metrics can give the attack success rate (SR) for DL-SCAs, their theoretical validity remains unclear. In this paper, we propose a new theoretically valid performance evaluation metric called latent perceived information (LPI), which serves as an alternative to the existing metrics. LPI is defined as the mutual information between the output of the feature extractor of a neural network (NN) model and the intermediate value, representing the potential attack performance of the trained model. First, we prove that LPI provides an upper bound on the SR of a DL-SCA by modeling and formulating DL-SCA as a communication channel. Additionally, we clarify the conditions under which PI and EPI theoretically provide an upper bound on the SR from the perspective of LPI. For practical computation of LPI, we present two methods. One utilizes the Kraskov (KSG) estimator, a common mutual information estimator, and the other is based on logistic regression. While the KSG estimator is computationally intensive, it yields accurate LPI values. In contrast, the logistic-regression method is faster but provides a lower bound for LPI. Through experimental attacks on AES software and hardware implementations with masking countermeasures, we demonstrate that the LPI values estimated by these two methods are significantly similar, indicating the reliability and soundness of our proposed estimation techniques. Furthermore, we show that, by using logistic regression as a classifier, we can significantly improve the attack performance of the trained model when the difference between the SR upper bound given by the LPI and the actual SR is large. This indicates that LPI represents the potential for performance improvement in the trained model. Therefore, our study contributes to optimizing the distinguisher for attack performance using the trained model.
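The logistic-regression route to such bounds can be sketched in a few lines: a classifier's held-out cross-entropy upper-bounds $H(Z \mid L)$, so $H(Z) - \mathrm{CE}$ lower-bounds the mutual information $I(Z; L)$. The Hamming-weight leakage model and all parameters below are illustrative assumptions (a 4-bit target for speed, rather than the full 8-bit targets discussed in the paper).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(6)
N, SIGMA = 20_000, 1.0

Z = rng.integers(0, 16, size=N)                 # uniform 4-bit target
hw = np.array([bin(z).count("1") for z in Z])   # Hamming-weight leakage model
L = (hw + SIGMA * rng.standard_normal(N)).reshape(-1, 1)  # noisy traces

half = N // 2
clf = LogisticRegression(max_iter=1000).fit(L[:half], Z[:half])
ce = log_loss(Z[half:], clf.predict_proba(L[half:]),
              labels=np.arange(16)) / np.log(2)  # cross-entropy in bits

# H(Z) = 4 bits for a uniform 4-bit target; the bound is H(Z) - CE.
print(f"lower bound on I(Z; L): {4.0 - ce:.3f} bits")
```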