




Juan A. Garay, Ph.D.

Important: Starting Sept. 2, 2008, I'll be joining AT&T Labs - Research.
E-mail: juan.a.garay@gmail.com


Member of Technical Staff
Member of the Security Research Department in the Enabling Computing Technologies Domain, and former member of the Computing Sciences Research Center (Center 1127) of Bell Labs.




Research Interests

Professional Activities

Brief Bio

Research Collaborators

Selected Publications

Some Research Highlights

Recent Projects and Industrial Impact


Research Interests

Main: Cryptographic protocols, secure multi-party computation, privacy-preserving computation, provable security, cryptographic functions and primitives, distributed consistency and decision problems, broadcast encryption, secure multicast, network security.

Other areas: Distributed computing, algorithms, fault tolerance, networks.


Professional Activities

Conferences I'm Currently (or Have Recently Been) Involved In

Editor


Brief Bio


Research Collaborators

Include:

Amotz Bar-Noy (Brooklyn College), Mihir Bellare (UCSD), Piotr Berman (Penn State), Jose Brustoloni (University of Pittsburgh), Harry Buhrman (CWI), Bob Cahn (AT&T), Ran Canetti (IBM Research), Pau-Chen Cheng (IBM Research), Reza Curtmola (Johns Hopkins), Xiaotie Deng (City University of Hong Kong), Matthias Fitzi (ETH Zurich), Matt Franklin (UC Davis), Rosario Gennaro (IBM Research), Inder Gopal (Reefedge), Shyamnath Gollakota (MIT), Els van Herreweghen (IBM Research), Amir Herzberg (Bar-Ilan University), Jaap-Henk Hoepman (University of Twente), Gene Itkis (Boston University), Arun Iyengar (IBM Research), Markus Jakobsson (RSA Security), Charanjit Jutla (IBM Research), Seny Kamara (Johns Hopkins), Tiko Kameda (Simon Fraser University), Jonathan Katz (University of Maryland), Martin Kienzle (IBM Research), Hugo Krawczyk (IBM Research), Chiu-Yuen Koo (Google), Dave Kristol, Shay Kutten (Technion), Lorenz Huelsbergen (Google), Phil MacKenzie (Google), Yishai Mansour (Tel-Aviv University), Ueli Maurer (ETH Zurich), Daniele Micciancio (UCSD), Mark Moir (Sun Microsystems), Yoram Moses (Technion), Moni Naor (Weizmann Institute of Science), Seffi Naor (Technion), Rafail Ostrovsky (UCLA), David Peleg (Weizmann Institute of Science), Ken Perry, Benny Pinkas (HP Labs), Carl Pomerance (Dartmouth College), Manoj Prabhakaran (University of Illinois Urbana-Champaign), Tal Rabin (IBM Research), C. Pandu Rangan (IIT Madras), Berry Schoenmakers (Technical University Eindhoven), Kannan Srinathan (International Institute of Information Technology, Hyderabad), Jessica Staddon (PARC), Michael Steiner (IBM Research), Bill Tetzlaff (IBM Research), John Tromp (CWI), Gene Tsudik (UC Irvine), S. Harsha Vardhan (CMU), Jose Villegas Bautista (Technical University Eindhoven), Paul Vitanyi (CWI), Michael Waidner (IBM Research), Avishai Wool (Tel-Aviv University), Ke Yang (Google), Bulent Yener (RPI), Moti Yung (Columbia University and Google).


Some Research Highlights

Secure Multi-Party Computation

In secure multi-party computation (MPC), n parties attempt to compute multi-party functionalities in the presence of an adversary who may corrupt up to t of them. MPC has been extensively studied since the statement of the problem in [Yao82]. For the standard model with secure pairwise channels between the parties, the first general solution to the problem was given by Goldreich, Micali and Wigderson [GMW87] with respect to computational security. Ben-Or, Goldwasser and Wigderson [BGW88] and Chaum, Crepeau and Damgard [CCD88] constructed the first general protocols with unconditional security. For the model where, in addition to the pairwise secure channels, a global broadcast channel is available, Rabin and Ben-Or [RB89] constructed an unconditional protocol that tolerates less than half of the parties being actively corrupted.

Traditionally, the models for MPC have assumed that certain communication "resources" (primitives) are available to the parties (e.g., point-to-point secure channels, broadcast). In the first paper below we initiate the study of the minimal cryptographic primitives needed to implement MPC among two or more players. The issue of complete primitives for the case of two players had been thoroughly studied. However, in the multi-party setting, when there are n > 2 players and t of them are corrupted, the question of what the simplest complete primitives are remained open for t >= n/3. (A primitive is called complete if any computation can be carried out by the players having access only to the primitive and local computation.) We consider this question, and introduce complete primitives of minimal cardinality for secure multi-party computation. The cardinality issue (the number of players accessing the primitive) is essential in settings where primitives are implemented by some other means, since the simpler the primitive, the easier it is to realize. We show that our primitives are complete and of the minimal cardinality possible for most cases.

In the papers below, we study the problem of constructing MPC protocols that are completely fair --- meaning that either all the parties learn the output of the function, or nobody does --- even when a majority of the parties are corrupted. In this series of works, we first propose a framework for fair multi-party computation, within which we formulate a definition of secure and fair protocols. The definition follows the standard simulation paradigm, but is modified to allow the protocol to depend on the running time of the adversary. In this way, we avoid a well-known impossibility result for fair MPC with corrupted majority (due to [Cleve'86]); in particular, our definition admits constructions that tolerate up to (n-1) corruptions, where n is the total number of parties. Next, we define a ``commit-prove-fair-open'' functionality and construct an efficient protocol that realizes it, using a new variant of a cryptographic primitive known as ``time-lines.'' With this functionality, we show that some of the existing secure MPC protocols can be easily transformed into fair protocols while preserving their security.
Putting these results together, we construct efficient, secure MPC protocols that are completely fair even in the presence of corrupted majorities. Furthermore, these protocols remain secure when arbitrarily composed with any protocols, which means, in particular, that they are concurrently-composable and non-malleable. Finally, as an example of our results, we show a very efficient protocol that fairly and securely solves the socialist millionaires' problem.
In the first paper below, the above is achieved by assuming that the protocol is aware of (an upper bound on) the running time of the adversary. In the second paper we get rid of this assumption.

Verifiable Secret Sharing

Secret sharing [Blakley'79,Shamir'79] is one of the most important primitives used for the construction of secure multi-party protocols. In secret sharing, a "dealer" wants to share a secret s among a set of n players such that no set of t players will be able to reconstruct the secret, while any set of t+1 or more players will be able to reconstruct the secret by combining their shares. Verifiable secret sharing [CGMA'85] (VSS) extends ordinary secret sharing for use in the presence of active corruption, where an adversary may corrupt up to t players in an arbitrary way. In VSS, it is required that no t players get any information about the secret, whereas the n players together can reliably reconstruct the secret even if t of them deliver wrong information.
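The threshold mechanism can be illustrated with Shamir's polynomial scheme, the basis of most VSS protocols: the dealer hides the secret in the constant term of a random degree-t polynomial and hands out point evaluations as shares. A minimal sketch over a toy prime field (the field size and interface are illustrative choices, not taken from the papers below):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; the field must be larger than n and the secret

def share(secret, n, t, rng=random):
    """Split `secret` into n shares; any t+1 recover it, any t reveal nothing."""
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t)]
    def f(x):  # Horner evaluation of the degree-t polynomial
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the shares provided."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            num = num * (-xj) % PRIME
            den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

Any t+1 shares determine the degree-t polynomial uniquely, hence the secret; any t shares are consistent with every possible secret, which gives the perfect secrecy property.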
The round complexity of a VSS protocol is defined as the number of rounds performed in the sharing phase. Gennaro, Ishai, Kushilevitz and Rabin [GIKR'01] showed that three rounds are necessary and sufficient when n > 3t. Sufficiency, however, was only demonstrated by means of an inefficient (i.e., exponential-time) protocol, and the construction of an efficient three-round protocol was left as an open problem.
In the paper below, we present an efficient three-round protocol for VSS. The solution is based on a three-round solution of so-called weak verifiable secret sharing (WSS), for which we also prove that three rounds is a lower bound. Furthermore, we also demonstrate that one round is sufficient for WSS when n > 4t, and that VSS can be achieved in 1+epsilon amortized rounds (for any epsilon > 0) when n > 3t.

Oblivious Transfer

Say Alice has two bits b_1 and b_2 and Bob has one bit e. An Oblivious Transfer (OT) protocol [Rabin'81] allows Bob to retrieve b_e, while not revealing anything else: Alice does not learn which of the two bits Bob received, while Bob does not learn b_(1-e). OT is a fundamental cryptographic primitive that has theoretical as well as practical applications (e.g., secure two-party function evaluation for the former; coin flipping over the telephone, contract signing, and password authentication for the latter).
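As a concrete illustration, here is a sketch of a classic OT construction in the style of Even, Goldreich and Lempel, built from RSA. The parameters are toy-sized and the construction shown is the textbook one, not the protocol from the papers below:

```python
import random

# Toy RSA parameters -- far too small for real security, illustrative only
p, q, e = 1009, 1013, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # Alice's private exponent

def ot(b1, b2, choice, rng=random):
    """One 1-out-of-2 OT run in the style of Even-Goldreich-Lempel."""
    # Alice -> Bob: two random elements, one per transferable bit
    x = [rng.randrange(N), rng.randrange(N)]
    # Bob: blinds the element matching his choice bit with a random k
    k = rng.randrange(N)
    v = (x[choice] + pow(k, e, N)) % N
    # Alice: unblinds v against both x's; one result is k, the other is garbage
    k0 = pow((v - x[0]) % N, d, N)
    k1 = pow((v - x[1]) % N, d, N)
    # Alice masks each bit with the parity of the corresponding derived value
    m0, m1 = b1 ^ (k0 & 1), b2 ^ (k1 & 1)
    # Bob can unmask only the message matching his choice, since k_choice == k
    return (m0 if choice == 0 else m1) ^ (k & 1)
```

Alice never learns which x Bob blinded (both blinded values are uniformly distributed), and Bob cannot invert RSA on the other derived value, so he learns nothing about the other bit.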
The first paper below investigates how to make an OT protocol work in an "Internet setting," where many OT protocols may be running concurrently in an asynchronous fashion. The second, more recent paper below focuses on the version of OT where each party has committed to his/her bits, and how in this case each party's actions can be verified. As an application, the paper also presents efficient protocols for secure two- and multi-party computation based on the Decision Diffie-Hellman problem.

Timed-Release Cryptography

While probably the most important pillar of cryptography is the believed existence of hard problems, the notion of moderately hard problems also bears promise. A moderately hard problem is one which is not computationally infeasible to solve, but also not easy. Moderately hard problems allow for so-called "timed-release cryptography," where the goal is for a party to transform (e.g., encrypt) a piece of information, so that it cannot be retrieved by anyone, until a pre-determined amount of time has passed.
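The best-known instantiation of this idea is the time-lock puzzle of Rivest, Shamir and Wagner, where recovering the hidden value requires T inherently sequential modular squarings; the puzzle setter, knowing the factorization of the modulus, can take a shortcut. A toy sketch (parameters far too small for real use):

```python
# Toy time-lock puzzle in the style of Rivest-Shamir-Wagner: the hidden value
# is blinded with 2^(2^T) mod N, recoverable only by T sequential squarings.
p, q = 1000003, 1000033      # illustrative primes; real use needs a large RSA modulus
N = p * q
T = 100000                   # number of sequential squarings ~ the time delay

def lock(message_key):
    # Knowing phi(N), the setter computes 2^(2^T) mod N with a fast shortcut
    phi = (p - 1) * (q - 1)
    e = pow(2, T, phi)
    blind = pow(2, e, N)
    return message_key ^ blind   # blind the value (here: a simple XOR mask)

def unlock(puzzle):
    # Without phi(N), the solver must perform the T squarings one by one
    b = 2
    for _ in range(T):
        b = b * b % N
    return puzzle ^ b
```

The squarings cannot be parallelized, which is what turns the work into a predictable wall-clock delay.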

In the two papers below we investigate the timed release and timed fair exchange of standard digital signatures. Such signatures, once released and/or exchanged, cannot be distinguished from signatures of the same type obtained without a timed release, making it transparent to an observer of the end result. While previous work has allowed timed release and exchange of signatures, these have not been standard, but special-purpose signatures. For the timed release, we build on recent work by Boneh and Naor [Crypto '00] on timed commitments, and introduce the notion of a reusable time-line, which, besides allowing the release of standard signatures, lowers the session costs of existing timed applications.

Our construction for the timed fair exchange of signatures follows the gradual release paradigm, and works on a new ``time'' structure that we call a mirrored time-line. Using this structure, we design a protocol for the timed fair exchange by two parties of arbitrary values (values lying on their respective mirrored time-lines). By applying the blinding techniques developed for the timed release application, we turn this protocol into a protocol for the timed fair exchange of standard signatures. The length of these mirrored time-lines makes another problem apparent, which is making sure that the underlying sequence has a period large enough so that cycling is not observed. We also show how to construct these structures so that, under reasonable assumptions, this is indeed the case.

Zero Knowledge

Recently there has been an interest in zero-knowledge protocols with stronger properties, such as concurrency, unbounded simulation soundness, non-malleability, and universal composability. In the paper below we show a new technique that uses a signature scheme that is existentially unforgeable against adaptive chosen message attacks to construct zero-knowledge protocols with these stronger properties in the common reference string model. For instance, using our technique we transform any so-called "Sigma-protocol" (which is honest-verifier zero-knowledge) into an unbounded simulation sound concurrent zero-knowledge protocol. We also introduce a variant of Sigma-protocols for which our technique further achieves the properties of non-malleability and/or universal composability. In addition to its conceptual simplicity, a main advantage of this new technique over previous ones is that it allows for very efficient instantiation based on the security of some efficient signature schemes and standard number-theoretic assumptions. For instance, one instantiation of our technique yields an unbounded simulation sound zero-knowledge protocol under the Strong RSA assumption, incurring an overhead of a small constant number of exponentiations, plus the generation of two signatures.
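Schnorr's identification protocol is the canonical example of a Sigma-protocol of the kind the transformation applies to: a three-move (commitment, challenge, response) proof of knowledge of a discrete logarithm that is honest-verifier zero-knowledge. A toy single-round sketch (group parameters illustrative only):

```python
import random

# Schnorr's Sigma-protocol for knowledge of a discrete log.
# Toy parameters: p = 2q+1 a safe prime, g a generator of the order-q subgroup.
p, q, g = 2039, 1019, 4

def schnorr_round(x, rng=random):
    """One (commit, challenge, respond) round proving knowledge of x in h = g^x."""
    h = pow(g, x, p)
    r = rng.randrange(q)
    a = pow(g, r, p)           # prover's commitment
    c = rng.randrange(q)       # verifier's random challenge
    z = (r + c * x) % q        # prover's response
    # verifier's check: g^z == a * h^c (mod p)
    return pow(g, z, p) == (a * pow(h, c, p)) % p
```

Honest-verifier zero knowledge follows because a transcript (a, c, z) can be simulated by picking c and z first and setting a = g^z * h^(-c); the technique in the paper upgrades exactly this kind of protocol to the stronger notions.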

Contract Signing

In this work we consider the problem of fair exchange of digital signatures, and how two (or more) parties can sign a contract over a network in a way that is "fair" for both (all) of them. Typically, fair means (in the two-party case) that Alice should not be able to obtain from Bob a signature on a contract without Bob being able to get the same signature from Alice, and vice-versa. We extend this definition so that Alice should not even be able to convince anyone that she has the power to obtain a signature from Bob, unless Bob also has the power to obtain Alice's signature. We call this property abuse freeness. The papers below present efficient two-party and multi-party contract signing protocols that have this property.

Secure Multicast, Broadcast Encryption

Multicast communication is becoming the basis for a growing number of applications. It is therefore critical to provide well-founded security mechanisms for multicast communication. Yet, existing security protocols for multicast offer only very partial solutions. That was the motivation for the highly cited paper below, where we first present a taxonomy of multicast scenarios on the Internet and point out the relevant security concerns. We then address two major security problems of multicast communication: source authentication and key revocation.

Maintaining authenticity in multicast protocols is a much more complex problem than for unicast; in particular, known solutions are prohibitively inefficient in many cases. We present a solution that is reasonable for a range of scenarios. Our approach can be regarded as a `midpoint' between traditional Message Authentication Codes and digital signatures. We also present an improved and very efficient solution to another prevailing problem for multicast protocols, namely the key revocation problem.

In a broadcast encryption scheme, digital content is encrypted to ensure that only privileged users can recover the content from the encrypted broadcast. Key material is usually held in a ``tamper-resistant,'' replaceable, smartcard. A coalition of users may attack such a system by breaking their smartcards open, extracting the keys, and building ``pirate decoders'' based on the decryption keys they extract. In the paper below we suggest the notion of long-lived broadcast encryption as a way of adapting broadcast encryption to the presence of pirate decoders and maintaining the security of broadcasts to privileged users while rendering all pirate decoders useless.

When a pirate decoder is detected in a long-lived encryption scheme, the keys it contains are viewed as compromised and are no longer used for encrypting content. We regard re-carding users (issuing replacement smartcards) as an expensive process, and thus only do so when the number of active keys on a user's card falls below a threshold level. This approach differs from that of most previously known revocation schemes, because such schemes aim to preserve the ability to revoke a certain number of users at all times, and hence must do a substantial amount of key replacement after each revocation. In addition, long-lived broadcast encryption schemes are a more comprehensive solution to piracy than traitor-tracing schemes, because the latter only seek to identify the makers of pirate decoders and do not address how to maintain secure broadcasts once keys have been compromised. Through a stochastic analysis, we show there is a long-lived broadcast encryption scheme that achieves a steady state in which only a small fraction of cards needs to be replaced in each epoch.

Batch Verification

Let R(.) be a polynomial-time computable boolean relation. Suppose we are given a sequence ins_1,...,ins_n of instances and asked whether it is the case that R(ins_i)=1 for all i=1,...,n. The naive way to figure out the answer is to compute R(ins_i) for each i and check that we get 1 each time. But this takes n computations of R. Can one do any better?

The above is the "batch verification" problem. We initiate a broad investigation of it. We look at the possibility of designing probabilistic batch verifiers, or tests, for basic mathematical relations R. Our main results are for modular exponentiation, an expensive operation in terms of number of multiplications: here g is some fixed element of a group G and R(x,y)=1 iff g^x=y. We find surprisingly fast batch verifiers for this relation. We also find efficient batch verifiers for the degrees of polynomials.
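One of the simplest verifiers in this line of work is the random-subset test: to check g^{x_i} = y_i for all i, flip a coin s_i for each instance and verify the single equation g^{sum s_i x_i} = prod_{s_i=1} y_i. A bad batch survives one run with probability at most 1/2, so the test is repeated. A toy sketch (group parameters illustrative; the papers below give considerably faster tests):

```python
import random

# Random-subset batch verifier for the relation R(x, y): g^x == y.
# One exponentiation plus a subset product checks all n instances at once.
p, q, g = 2039, 1019, 4   # toy group: g generates the order-q subgroup of Z_p^*

def batch_verify(pairs, reps=40, rng=random):
    """pairs: list of (x, y) claims that g^x == y (mod p)."""
    for _ in range(reps):
        bits = [rng.randint(0, 1) for _ in pairs]       # random subset
        x_sum = sum(b * x for b, (x, _) in zip(bits, pairs)) % q
        y_prod = 1
        for b, (_, y) in zip(bits, pairs):
            if b:
                y_prod = y_prod * y % p
        if pow(g, x_sum, p) != y_prod:
            return False                                # some instance is bad
    return True
```

A correct batch always passes, since g^{sum s_i x_i} = prod (g^{x_i})^{s_i}; with 40 repetitions a bad batch is accepted with probability at most 2^-40.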

The first application is to cryptography, where modular exponentiation is a common component of a large number of protocols, including digital signatures, bit commitment, and zero knowledge. Similarly, the problem of verifying the degrees of polynomials underlies (verifiable) secret sharing, which in turn underlies many secure distributed protocols.

The second application is to program checking. We can use batch verification to provide faster batch checkers, in the sense of [Rubinfeld92], for modular exponentiation. These checkers also have stronger properties than standard ones, and illustrate how batch verification can not only speed up how we do old things, but also enable us to do new things.

The "batch" papers:

Byzantine Agreement

The Byzantine agreement (BA) problem, formulated by Pease, Shostak and Lamport in the late 70's, involves n processors (agents, players), each of which holds an initial value. At most t of the processors may be faulty and ignore any protocol (even behaving maliciously), yet it is required that the non-faulty processors eventually agree on a value that was initially held by one of them. This is a central problem in fault-tolerant distributed computing and secure multi-party computation, and it abstracts out a variety of situations where conflicting parties must coordinate their decisions and cooperate towards achieving a common goal.
Typical quality measures for a BA protocol are the total number of processors, the number of communication rounds, and its communication complexity, given alternatively by the maximal message size or the total number of transmitted bits. The known lower bounds are, respectively, n > 3t, t+1, 1, and n(t+1)/4.
Below is a series of papers that optimize different aspects of BA protocols for the standard model, as well as for different settings, such as amortized agreement, agreement in networks of bounded degree, agreement in hybrid failure models, and under mobile faults. Of special significance is the paper with Yoram Moses (Technion), where we present an efficient (i.e., polynomial-time) protocol for reaching BA with the minimal number of processors and in the minimal number of rounds. Achieving these bounds simultaneously with polynomial communication and computation had been open since the formulation of the problem.
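As a concrete illustration of round-based BA, here is a simulation of a simple "phase king"-style protocol. This is the textbook variant requiring n > 4t (the papers above achieve the optimal n > 3t efficiently); the adversary model in the sketch, faulty nodes sending independent random bits, is one illustrative choice:

```python
import random

def phase_king(n, t, inputs, faulty, rng):
    """Binary consensus in t+1 phases; this simple variant needs n > 4t."""
    assert n > 4 * t and len(faulty) <= t
    pref = dict(inputs)                    # preferences of the correct nodes
    for king in range(t + 1):              # t+1 kings: at least one is correct
        # Round 1: everyone broadcasts its preference; Byzantine nodes may
        # send different bits to different receivers (modeled as random bits)
        counts = {}
        for i in pref:
            c = [0, 0]
            for j in range(n):
                b = rng.randint(0, 1) if j in faulty else pref[j]
                c[b] += 1
            counts[i] = c
        # Round 2: the king broadcasts its majority value as a tie-breaker
        if king in faulty:
            king_val = {i: rng.randint(0, 1) for i in pref}  # equivocates
        else:
            kc = counts[king]
            king_val = {i: int(kc[1] >= kc[0]) for i in pref}
        for i in pref:
            c = counts[i]
            maj = int(c[1] >= c[0])
            # keep own majority only if it is overwhelming; else trust the king
            pref[i] = maj if c[maj] > n / 2 + t else king_val[i]
    return pref
```

Once a phase with a correct king completes, all correct nodes hold the same value, and the n/2 + t threshold guarantees that an established common value can never be dislodged in later phases.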

Some Distributed Computing Highlights

Minimum Spanning Trees

In this work we consider the question of identifying the parameters governing the behavior of fundamental global network problems. Many papers on distributed network algorithms regard the task of optimizing the running time as successfully completed when an O(n) bound is achieved on an n-vertex network. We propose that a more sensitive parameter is the network's diameter Diam. This is demonstrated in the paper by providing a distributed minimum-weight spanning tree algorithm whose time complexity is sub-linear in n, but linear in Diam. Our result is achieved through the application of graph decomposition and edge elimination techniques that may be of independent interest.
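Distributed MST algorithms, including the one above, build on Boruvka-style phases: in each phase every component selects its minimum-weight outgoing edge, components merge, and so only O(log n) phases are needed. A sequential sketch of that phase structure (assuming distinct edge weights; this is not the distributed algorithm itself):

```python
def boruvka_mst(n, edges):
    """MST of an n-vertex graph via Boruvka phases; edges are (u, v, weight)
    triples with distinct weights (distinctness guarantees a unique MST)."""
    parent = list(range(n))            # union-find over components
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    mst = []
    while True:
        # Each component picks its minimum-weight outgoing edge
        best = {}
        for u, v, w in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in best or w < best[r][2]:
                    best[r] = (u, v, w)
        if not best:                   # one component left: done
            break
        # Merge along the selected edges
        for u, v, w in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, w))
    return mst
```

Each phase at least halves the number of components, which is the source of the logarithmic phase bound that the distributed algorithms then trade off against the diameter.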

Renaming

Algorithms Highlights

On-line Call Control

Adaptability

In this work we introduce adaptability, a new measure for the quality of algorithms. We study the general problem of designing algorithms in situations where there is some information about the typical conditions that are encountered when the respective problem is solved. The basic goal is to assure efficient performance in the typical case, while satisfying the correctness requirements in every case. The new notion generalizes competitive analysis and related frameworks, and applies to sequential, parallel and distributed algorithms alike. In a nutshell, a ``hint'' function conveys certain information about the environment in which the algorithm operates. Adaptability compares the performance of the algorithm against that of the ``specialist'' - an algorithm specifically tuned to the particular hint value. From this perspective, finding that no single algorithm can adapt to all possible hint values is not necessarily a negative result, provided that a family of specialists can be constructed.
Our case studies - scheduling of jobs on parallel machines and distributed consensus - provide examples of both kinds.

Mutual Search

Say two agents land on a space of n sites and have to locate each other by sending queries of the form ``Anybody at site i?'' How many queries would it take them? Intuition might suggest that any (deterministic) algorithm would require a total of n-1 queries in the worst case. In this work we formulate this problem and derive lower and upper bounds for several variants. In particular, for the case posed above, we show that 0.586n total queries suffice.

Recent Projects and Industrial Impact

Software Integrity Protection

In this work, we present a software scheme for protecting the integrity of computing platforms using Timed Executable Agent Systems (TEAS). A trusted challenger issues an authenticated challenge to a possibly corrupt responder. What is new is that the issued challenge is an executable program that can potentially compute any function on the responder. The responder must not only compute the correct value implied by the agent, but must also complete this computation within time bounds prescribed by the challenger. Software-based attestation schemes have been proposed before; the new capabilities introduced in TEAS provide means to mitigate the shortcomings of such techniques. TEAS are general and can be adapted to many applications for which system integrity is to be tested.

Two types of adversaries to TEAS are considered. First, we address attacks by "off-line" adversaries that attempt to discern agents' functions statically by analyzing their program texts. We then consider "on-line" adversaries, which operate while the agent runs. For off-line adversaries, we show how complexity results from programming-language analysis, as well as undecidability considerations, can be used to thwart such analysis by making it impossible for the adversary to correctly decipher all potential agents and reply in a timely fashion. In the on-line scenario, adversaries are difficult to stop in general. We do, however, present strategies that make it difficult for these adversaries to, for example, interpret an agent in a virtual machine and thereby redirect its actions.

We also address the problem of creating large libraries of useful and complicated (and hence difficult to analyze) agents through a new technique of program blinding - we hide critical functionality inside randomly generated machine-language programs. We implemented a virtual machine that allows experimentation with this approach. Experiments reveal that blinded agents whose execution conveys important integrity information can be efficiently generated in abundance.

TEAS paper. In Proc. ACM Symposium on Information, Computer and Communications Security (ASIACCS'06), pp. 189-200, Taiwan, March 2006.

Efficient Authentication and Key Distribution in Wireless IP Networks

Emerging technologies such as 802.11 are facilitating the introduction of wireless IP services to an increasing number of users. Market forecasts suggest that a new class of Wireless Internet Service Providers (WISPs) will deploy more and more wireless IP networks based on these new technologies. To offer uninterrupted IP service, these networks will need to be integrated with other wireless technologies, such as the ones used by third-generation wide-area wireless networks, e.g., 3G CDMA and UMTS. End-users will be able to roam across different WISP domains, and across networks that employ different access technologies. To provide seamless mobility to customers, authentication, session key establishment, mobility management and billing must be integrated across these different wireless IP networks. Therefore, efficient authentication and dynamic key exchange protocols that support homogeneous domains as well as networks with roaming agreements across trust boundaries are key to the success of these networks. In this work we describe a simple network model that accounts for heterogeneity in network providers, and formalize the requirements of any authentication and key exchange protocol that has to operate in such a model, in terms of network efficiency, security and fraud prevention. We then introduce a new authentication and key exchange protocol, called Wireless Shared Key Exchange (W-SKE). We characterize properties and limitations of W-SKE against the requirements formalized earlier. We also describe an instantiation of W-SKE in the context of 802.11/802.1x networks, called EAP-SKE. Finally, we contrast EAP-SKE against other well-known and emerging approaches.
Joint work with Milind Buddhikot, Scott Miller, Sarvar Patel, and Luca Salgarelli (Bell Labs).

Paper. IEEE Wireless Communication, December 2003.
Internet draft (pdf).

MicroISPs

An architecture for providing ubiquitous low-cost Internet access. Overcomes the conventional ISP architecture's shortcomings of high access-link costs and poor mobility support. Novel billing; accommodates a variety of offline and online payment methods.
Work done at Bell Labs, with Jose Brustoloni.

Paper (pdf file; Proc. 9th International World Wide Web Conference, Amsterdam, May 2000).

IPSec/NAT Interoperation

IP Security (IPSec) has often been considered incompatible with Network Address Translation (NAT). We proposed a simple DHCP extension that allows client IPSec implementations to interoperate correctly with NAT. This extension is part of EASE (for EAsy, Secure and Economical Internet Access), an architecture that exploits the higher bandwidths of cable or xDSL to make Internet connectivity easy and economical for SOHO or ``road warrior'' users, without compromising fairness or security. EASE allows for end-to-end security in an application-independent fashion.
Work done at Bell Labs, with Jose Brustoloni.

EASE paper (Proc. IFIP Networking 2000, Paris, May 2000).

The e-Vault

Project leader for IBM Research's electronic vault project. The project is based on the Secure Storage and Retrieval of Information technique (SSRI), co-invented with Rosario Gennaro, Charanjit Jutla and Tal Rabin from IBM Watson. The technique guarantees the secure (including confidentiality) storage and retrieval of data in a distributed storage system, while
  1. interacting with a single server,
  2. tolerating malicious server failures, and
  3. maintaining an (asymptotically) optimal storage blow-up.

SSRI paper (Theoretical Computer Science, September 2000).
Design and implementation paper (IFIP Security 1998).

Variety Cash

A new electronic money system striking a balance between functionality, security, and user privacy. The system can encompass both network and stored-value card based payment mechanisms, with transferability between them, hence the name.
Work done at IBM, with Mihir Bellare, Charanjit Jutla, and Moti Yung.

Variety Cash paper (Proc. 3rd Usenix Workshop on Electronic Commerce, Boston, MA, August-Sept. 1998).

iKP -- A Family of Secure Electronic Payment Protocols

I'm one of the designers of ``iKP - A Family of Secure Electronic Payment Protocols,'' the basis for SET, the credit card-based electronic payment system being deployed by Visa and Mastercard.
The protocols implement on-line credit card-based transactions between customers and merchants, while the existing financial network is used for payment clearing and authorization. The protocols can be applied to any account-based payment model, such as debit cards and electronic checks. They are based on careful and minimal use of public-key cryptography and can be implemented in either software or hardware. Individual protocols differ in both complexity and degree of security. In addition to being a precursor and a direct ancestor of the SET standard being deployed by MasterCard and VISA, iKP-based payment systems have been in continuous operation on the Internet since mid-1996. This longevity - as well as the security and relative simplicity of the underlying mechanisms - makes the iKP experience unique. For this reason, the design and implementation paper below also reports on a number of practical issues arising in the course of implementation and real-world deployment of a secure payment system.

iKP Usenix paper (Proc. 1st USENIX Workshop on Electronic Commerce, New York, NY, July 1995).
Design, implementation and deployment paper (JSAC, special issue on Network Security, Vol. 18, No. 4, April 2000).

A Security Architecture for the Internet Protocol

Security architecture for protecting the secrecy and integrity of Internet traffic at the IP layer, allowing for the establishment of a secure channel between any two communicating systems over the Internet. The design includes a key management protocol, called MKMP, for establishing shared secrets between communicating parties, and meta-information prescribed by the security policy. This technology is a component of IBM's Firewall product, and is being ported to other IBM platforms.
Work done at IBM with Pau-Chen Cheng, Amir Herzberg, and Hugo Krawczyk.

Paper (IBM Systems Journal 37, No. 1 (1998), special issue on the Internet).


Selected Publications

(Mail me for any other publication not listed here.)

Cryptography and Network Security

Distributed Computing

Algorithms (on-line; network; new quality measures)




Copyright © 1996 Lucent Technologies. All rights reserved.