gnunetbib

Bibliography (BibTeX, based on AnonBib)

commit b7e2fa4fba8ce83d0cd6230a5f754347e7d569be
parent 412898a76d84455d432638eabd9c31b382addeaf
Author: ng0 <ng0@n0.is>
Date:   Fri,  5 Jan 2018 19:21:27 +0000

braces yourselves

Diffstat:
M gnunetbib.bib | 42 ++++++++++++++++++++----------------------
1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/gnunetbib.bib b/gnunetbib.bib
@@ -15248,7 +15248,7 @@ This exposition presents a model to formally study such algorithms. This model,
 pages = {110{\textendash}123},
 publisher = {ACM},
 address = {New York, NY, USA},
- abstract = {{Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental delete or overwrite. This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files. Users name previous file versions by combining a traditional pathname with a time when the desired version of a file or directory existed. Storage in Elephant is managed by the system using filegrain user-specified retention policies. This approach contrasts with checkpointing file systems such as Plan-9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system. Elephant is implemented as a new Virtual File System in the FreeBSD kernel.},
+ abstract = {{Modern file systems associate the deletion of a file with the immediate release of storage, and file writes with the irrevocable change of file contents. We argue that this behavior is a relic of the past, when disk storage was a scarce resource. Today, large cheap disks make it possible for the file system to protect valuable data from accidental delete or overwrite. This paper describes the design, implementation, and performance of the Elephant file system, which automatically retains all important versions of user files. Users name previous file versions by combining a traditional pathname with a time when the desired version of a file or directory existed. Storage in Elephant is managed by the system using filegrain user-specified retention policies. This approach contrasts with checkpointing file systems such as Plan-9, AFS, and WAFL that periodically generate efficient checkpoints of entire file systems and thus restrict retention to be guided by a single policy for all files within that file system. Elephant is implemented as a new Virtual File System in the FreeBSD kernel.}},
 keywords = {file systems, storage},
 issn = {0163-5980},
 doi = {10.1145/319344.319159},
@@ -15261,7 +15261,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {PhD},
 year = {1999},
 school = {University of Edinburgh},
- abstract = {{This report describes an algorithm which if executed by a group of interconnected nodes will provide a robust key-indexed information storage and retrieval system with no element of central control or administration. It allows information to be made available to a large group of people in a similar manner to the "World Wide Web". Improvements over this existing system include: - No central control or administration required - Anonymous information publication and retrieval - Dynamic duplication of popular information - Transfer of information location depending upon demand There is also potential for this system to be used in a modified form as an information publication system within a large organisation which may wish to utilise unused storage space which is distributed across the organisation. The system{\textquoteright}s reliability is not guaranteed, nor is its efficiency, however the intention is that the efficiency and reliability will be sufficient to make the system useful, and demonstrate that...},
+ abstract = {{This report describes an algorithm which if executed by a group of interconnected nodes will provide a robust key-indexed information storage and retrieval system with no element of central control or administration. It allows information to be made available to a large group of people in a similar manner to the "World Wide Web". Improvements over this existing system include: - No central control or administration required - Anonymous information publication and retrieval - Dynamic duplication of popular information - Transfer of information location depending upon demand There is also potential for this system to be used in a modified form as an information publication system within a large organisation which may wish to utilise unused storage space which is distributed across the organisation. The system{\textquoteright}s reliability is not guaranteed, nor is its efficiency, however the intention is that the efficiency and reliability will be sufficient to make the system useful, and demonstrate that...}},
 url = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.32.3665\&rep=rep1\&type=pdf},
 author = {Ian Clarke}
 }
@@ -15284,8 +15284,7 @@ This exposition presents a model to formally study such algorithms. This model,
 year = {1999},
 month = {1999},
 pages = {814{\textendash}833},
- abstract = {{We introduce the concept of a group principal and present a number of different classes of group principals, including threshold-group-principals. These appear to naturally useful concepts for looking at security. We provide an associated epistemic language and logic and use it to reason about anonymity protocols and anonymity services, where protection properties are formulated from the intruder{\textquoteright}s knowledge of group principals. Using our language, we give an epistemic characterization of anonymity properties. We also present a specification of a simple anonymizing system using our theory.
-},
+ abstract = {{We introduce the concept of a group principal and present a number of different classes of group principals, including threshold-group-principals. These appear to naturally useful concepts for looking at security. We provide an associated epistemic language and logic and use it to reason about anonymity protocols and anonymity services, where protection properties are formulated from the intruder{\textquoteright}s knowledge of group principals. Using our language, we give an epistemic characterization of anonymity properties. We also present a specification of a simple anonymizing system using our theory.}},
 keywords = {anonymity service},
 isbn = {3-540-66587-0},
 doi = {10.1007/3-540-48119-2},
@@ -15301,7 +15300,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {Springer-Verlag},
 organization = {Springer-Verlag},
 address = {London, UK},
- abstract = {{We will introduce a new class of erasure codes built from irregular bipartite graphs that have linear time encoding and decoding algorithms and can transmit over an erasure channel at rates arbitrarily close to the channel capacity. We also show that these codes are close to optimal with respect to the trade-off between the proximity to the channel capacity and the running time of the recovery algorithm.},
+ abstract = {{We will introduce a new class of erasure codes built from irregular bipartite graphs that have linear time encoding and decoding algorithms and can transmit over an erasure channel at rates arbitrarily close to the channel capacity. We also show that these codes are close to optimal with respect to the trade-off between the proximity to the channel capacity and the running time of the recovery algorithm.}},
 keywords = {coding theory, irregular bipartite graphs, recovery algorithm},
 isbn = {3-540-66723-7},
 url = {http://portal.acm.org/citation.cfm?id=758535\&dl=GUIDE\&coll=GUIDE\&CFID=102355791\&CFTOKEN=32605420$\#$},
@@ -15316,7 +15315,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {ACM},
 organization = {ACM},
 address = {New York, NY, USA},
- abstract = {{Networked sensors -- those that coordinate amongst themselves to achieve a larger sensing task -- will revolutionize information gathering and processing both in urban environments and in inhospitable terrain. The sheer numbers of these sensors and the expected dynamics in these environments present unique challenges in the design of unattended autonomous sensor networks. These challenges lead us to hypothesize that sensor network coordination applications may need to be structured differently from traditional network applications. In particular, we believe that localized algorithms (in which simple local node behavior achieves a desired global objective) may be necessary for sensor network coordination. In this paper, we describe localized algorithms, and then discuss directed diffusion, a simple communication model for describing localized algorithms.},
+ abstract = {{Networked sensors -- those that coordinate amongst themselves to achieve a larger sensing task -- will revolutionize information gathering and processing both in urban environments and in inhospitable terrain. The sheer numbers of these sensors and the expected dynamics in these environments present unique challenges in the design of unattended autonomous sensor networks. These challenges lead us to hypothesize that sensor network coordination applications may need to be structured differently from traditional network applications. In particular, we believe that localized algorithms (in which simple local node behavior achieves a desired global objective) may be necessary for sensor network coordination. In this paper, we describe localized algorithms, and then discuss directed diffusion, a simple communication model for describing localized algorithms.}},
 keywords = {sensor networks},
 isbn = {1-58113-142-9},
 doi = {10.1145/313451.313556},
@@ -15330,7 +15329,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {42},
 year = {1999},
 pages = {39{\textendash}41},
- abstract = {{this article{\textquoteright}s publication, the prototype network is processing more than 1 million Web connections per month from more than six thousand IP addresses in twenty countries and in all six main top level domains. [7] Onion Routing operates by dynamically building anonymous connections within a network of real-time Chaum Mixes [3]. A Mix is a store and forward device that accepts a number of fixed-length messages from numerous sources, performs cryptographic transformations on the messages, and then forwards the messages to the next destination in a random order. A single Mix makes tracking of a particular message either by specific bit-pattern, size, or ordering with respect to other messages difficult. By routing through numerous Mixes in the network, determining who is talking to whom becomes even more difficult. Onion Routing{\textquoteright}s network of core onion-routers (Mixes) is distributed, faulttolerant, and under the control of multiple administrative domains, so no single onion-router can bring down the network or compromise a user{\textquoteright}s privacy, and cooperation between compromised onion-routers is thereby confounded.},
+ abstract = {{this article{\textquoteright}s publication, the prototype network is processing more than 1 million Web connections per month from more than six thousand IP addresses in twenty countries and in all six main top level domains. [7] Onion Routing operates by dynamically building anonymous connections within a network of real-time Chaum Mixes [3]. A Mix is a store and forward device that accepts a number of fixed-length messages from numerous sources, performs cryptographic transformations on the messages, and then forwards the messages to the next destination in a random order. A single Mix makes tracking of a particular message either by specific bit-pattern, size, or ordering with respect to other messages difficult. By routing through numerous Mixes in the network, determining who is talking to whom becomes even more difficult. Onion Routing{\textquoteright}s network of core onion-routers (Mixes) is distributed, faulttolerant, and under the control of multiple administrative domains, so no single onion-router can bring down the network or compromise a user{\textquoteright}s privacy, and cooperation between compromised onion-routers is thereby confounded.}},
 url = { http://www.onion-router.net/Publications/CACM-1999 },
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/onionrouting.pdf},
 author = {David Goldschlag and Michael Reed and Paul Syverson}
@@ -15343,8 +15342,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {USENIX Association},
 organization = {USENIX Association},
 address = {Berkeley, CA, USA},
- abstract = {{In this paper we describe a technique called operation-based update propagation for efficiently transmitting updates to large files that have been modified on a weakly connected client of a distributed file system. In this technique, modifications are captured above the file-system layer at the client, shipped to a surrogate client that is strongly connected to a server, re-executed at the surrogate, and the resulting files transmitted from the surrogate to the server. If re-execution fails to produce a file identical to the original, the system falls back to shipping the file from the client over the slow network. We have implemented a prototype of this mechanism in the Coda File System on Linux, and demonstrated performance improvements ranging from 40 percents to nearly three orders of magnitude in reduced network traffic and elapsed time. We also found a novel use of forward error correction in this context.
-},
+ abstract = {{In this paper we describe a technique called operation-based update propagation for efficiently transmitting updates to large files that have been modified on a weakly connected client of a distributed file system. In this technique, modifications are captured above the file-system layer at the client, shipped to a surrogate client that is strongly connected to a server, re-executed at the surrogate, and the resulting files transmitted from the surrogate to the server. If re-execution fails to produce a file identical to the original, the system falls back to shipping the file from the client over the slow network. We have implemented a prototype of this mechanism in the Coda File System on Linux, and demonstrated performance improvements ranging from 40 percents to nearly three orders of magnitude in reduced network traffic and elapsed time. We also found a novel use of forward error correction in this context.}},
 url = {http://portal.acm.org/citation.cfm?id=1268712$\#$},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/lee.pdf},
 author = {Lee, Yui-Wah and Leung, Kwong-Sak and Satyanarayanan, Mahadev}
@@ -15356,7 +15354,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {Springer-Verlag},
 organization = {Springer-Verlag},
 address = {Berlin, Heidelberg},
- abstract = {{This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes : a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetics, are provably secure under appropriate assumptions in the standard model.},
+ abstract = {{This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes : a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetics, are provably secure under appropriate assumptions in the standard model.}},
 isbn = {3-540-65889-0},
 url = {http://dl.acm.org/citation.cfm?id=1756123.1756146},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/PublicKeyCryptoSystems1999Paillier.pdf},
@@ -15370,7 +15368,7 @@ This exposition presents a model to formally study such algorithms. This model,
 year = {1999},
 month = {01/1999},
 pages = {3-21},
- abstract = {{This article presents information on principal-agent models in which outcomes conditional on the agent{\textquoteright}s action are uncertain, and the agent{\textquoteright}s behavior therefore unobservable. For a model with bounded agent{\textquoteright}s utility, conditions are given under which the first-best equilibrium can be approximated arbitrarily closely by contracts relating payment to observable outcomes. For general models, it is shown that the solution may not always be obtained by using the agent{\textquoteright}s first-order conditions as constraint. General conditions of Lagrangean type are given for problems in which contracts are finite-dimensional.},
+ abstract = {{This article presents information on principal-agent models in which outcomes conditional on the agent{\textquoteright}s action are uncertain, and the agent{\textquoteright}s behavior therefore unobservable. For a model with bounded agent{\textquoteright}s utility, conditions are given under which the first-best equilibrium can be approximated arbitrarily closely by contracts relating payment to observable outcomes. For general models, it is shown that the solution may not always be obtained by using the agent{\textquoteright}s first-order conditions as constraint. General conditions of Lagrangean type are given for problems in which contracts are finite-dimensional.}},
 keywords = {contracts, Lagrangean conditions, unobservability},
 url = {http://econpapers.repec.org/RePEc:bla:restud:v:66:y:1999:i:1:p:3-21},
 author = {Mirrlees, James A.}
@@ -15383,7 +15381,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {Society for Industrial and Applied Mathematics},
 organization = {Society for Industrial and Applied Mathematics},
 address = {Philadelphia, PA, USA},
- abstract = {{We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formula using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above. 1 Introduction We introduce a new set of probabilistic analysis tools related to the amplification method introduced by [12] and further developed and used in [13, 5]. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including the random loss-resilient codes introduced ...},
+ abstract = {{We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formula using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above. 1 Introduction We introduce a new set of probabilistic analysis tools related to the amplification method introduced by [12] and further developed and used in [13, 5]. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including the random loss-resilient codes introduced ...}},
 keywords = {And-Or trees, coding theory},
 isbn = {0-89871-410-9},
 url = {http://portal.acm.org/citation.cfm?id=314722\&dl=GUIDE\&coll=GUIDE\&CFID=102355791\&CFTOKEN=32605420$\#$},
@@ -15396,7 +15394,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {16},
 year = {1998},
 pages = {482{\textendash}494},
- abstract = {{Onion Routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Onion routing{\textquoteright}s anonymous connections are bidirectional and near realtime, and can be used anywhere a socket connection can be used. Any identifying information must be in the data stream carried over an anonymous connection. An onion is a data structure that is treated as the destination address by onion routers; thus, it is used to establish an anonymous connection. Onions themselves appear differently to each onion router as well as to network observers. The same goes for data carried over the connections they establish. Proxy aware applications, such as web browsing and e-mail, require no modification to use onion routing, and do so through a series of proxies. A prototype onion routing network is running between our lab and other sites. This paper describes anonymous connections and their imple...},
+ abstract = {{Onion Routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Onion routing{\textquoteright}s anonymous connections are bidirectional and near realtime, and can be used anywhere a socket connection can be used. Any identifying information must be in the data stream carried over an anonymous connection. An onion is a data structure that is treated as the destination address by onion routers; thus, it is used to establish an anonymous connection. Onions themselves appear differently to each onion router as well as to network observers. The same goes for data carried over the connections they establish. Proxy aware applications, such as web browsing and e-mail, require no modification to use onion routing, and do so through a series of proxies. A prototype onion routing network is running between our lab and other sites. This paper describes anonymous connections and their imple...}},
 keywords = {anonymity, onion routing},
 url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.2362},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/10.1.1.65.8267.pdf},
@@ -15408,7 +15406,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {1},
 year = {1998},
 pages = {66{\textendash}92},
- abstract = {{Crowds is a system that allows anonymous web-surfing. For each host, a random static path through the crowd is formed that then acts as a sequence of proxies, indirecting replies and responses. Vulnerable when facing adversaries that can perform traffic analysis at the local node and without responder anonymity. But highly scalable and efficient.},
+ abstract = {{Crowds is a system that allows anonymous web-surfing. For each host, a random static path through the crowd is formed that then acts as a sequence of proxies, indirecting replies and responses. Vulnerable when facing adversaries that can perform traffic analysis at the local node and without responder anonymity. But highly scalable and efficient.}},
 keywords = {anonymous web browsing, Crowds},
 url = {http://avirubin.com/crowds.pdf},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/crowds.pdf},
@@ -15421,7 +15419,7 @@ This exposition presents a model to formally study such algorithms. This model,
 month = {November},
 publisher = {ACM Press},
 organization = {ACM Press},
- abstract = {{Attacks on servers that provide anonymity generally fall into two categories: attempts to expose anonymous users and attempts to silence them. Much existing work concentrates on withstanding the former, but the threat of the latter is equally real. One particularly e$\#$ective attack against anonymous servers is to abuse them and stir up enough trouble that they must shut down. This paper describes the design, implementation, and operation of nym.alias.net, a server providing untraceable email aliases. We enumerate many kinds of abuse the system has weathered during two years of operation, and explain the measures we enacted in response. From our experiences, we distill several principles by which one can protect anonymous servers from similar attacks.},
+ abstract = {{Attacks on servers that provide anonymity generally fall into two categories: attempts to expose anonymous users and attempts to silence them. Much existing work concentrates on withstanding the former, but the threat of the latter is equally real. One particularly e$\#$ective attack against anonymous servers is to abuse them and stir up enough trouble that they must shut down. This paper describes the design, implementation, and operation of nym.alias.net, a server providing untraceable email aliases. We enumerate many kinds of abuse the system has weathered during two years of operation, and explain the measures we enacted in response. From our experiences, we distill several principles by which one can protect anonymous servers from similar attacks.}},
 isbn = {1-58113-007-4},
 doi = {10.1145/288090.288098},
 url = {http://portal.acm.org/citation.cfm?id=288098},
@@ -15437,7 +15435,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {ACM},
 organization = {ACM},
 address = {Vancouver, Canada},
- abstract = {{The proliferation of applications that must reliably distribute bulk data to a large number of autonomous clients motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous clients to acquire bulk data with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates.We develop a protocol that closely approximates a digital fountain using a new class of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation and performance of an experimental system.},
+ abstract = {{The proliferation of applications that must reliably distribute bulk data to a large number of autonomous clients motivates the design of new multicast and broadcast protocols. We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous clients to acquire bulk data with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates.We develop a protocol that closely approximates a digital fountain using a new class of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation and performance of an experimental system.}},
 keywords = {coding theory, multicast},
 isbn = {1-58113-003-1},
 doi = {10.1145/285237.285258},
@@ -15451,7 +15449,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {45},
 year = {1998},
 pages = {1817{\textendash}1826},
- abstract = {{We reveal an equivalence relation between the construction of a new class of low density MDS array codes, that we call B-Code, and a combinatorial problem known as perfect onefactorization of complete graphs. We use known perfect one-factors of complete graphs to create constructions and decoding algorithms for both B-Code and its dual code. B-Code and its dual are optimal in the sense that (i) they are MDS, (ii) they have an optimal encoding property, i.e., the number of the parity bits that are affected by change of a single information bit is minimal and (iii) they have optimal length. The existence of perfect one-factorizations for every complete graph with an even number of nodes is a 35 years long conjecture in graph theory. The construction of B-codes of arbitrary odd length will provide an affirmative answer to the conjecture},
+ abstract = {{We reveal an equivalence relation between the construction of a new class of low density MDS array codes, that we call B-Code, and a combinatorial problem known as perfect onefactorization of complete graphs. We use known perfect one-factors of complete graphs to create constructions and decoding algorithms for both B-Code and its dual code. B-Code and its dual are optimal in the sense that (i) they are MDS, (ii) they have an optimal encoding property, i.e., the number of the parity bits that are affected by change of a single information bit is minimal and (iii) they have optimal length. The existence of perfect one-factorizations for every complete graph with an even number of nodes is a 35 years long conjecture in graph theory. The construction of B-codes of arbitrary odd length will provide an affirmative answer to the conjecture}},
 keywords = {array codes, low density, MDS Codes, update complexity},
 url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.8899},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/10.1.1.42.8899.pdf},
@@ -15494,7 +15492,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {Springer-Verlag},
 organization = {Springer-Verlag},
 address = {London, UK},
- abstract = {{Private information retrieval (PIR) schemes provide a user with information from a database while keeping his query secret from the database manager. We propose a new model for PIR, utilizing auxiliary random servers providing privacy services for database access. The principal database initially engages in a preprocessing setup computation with the random servers, followed by the on-line stage with the users. Using this model we achieve the first PIR information theoretic solutions in which the database does not need to give away its data to be replicated, and with minimal on-line computation cost for the database. This solves privacy and efficiency problems inherent to all previous solutions. Specifically, in all previously existing PIR schemes the database on-line computation for one query is at least linear in the size of the data, and all previous information theoretic schemes require multiple replications of the database which are not allowed to communicate with each other.This poses a privacy problem for the database manager, who is required to hand his data to multiple foreign entities, and to the user, who is supposed to trust the multiple copies of the database not to communicate. In contrast, in our solutions no replication is needed, and the database manager only needs to perform O(1) amount of computation to answer questions of users, while all the extra computations required on line for privacy are done by the auxiliary random servers, who contain no information about the data.},
+ abstract = {{Private information retrieval (PIR) schemes provide a user with information from a database while keeping his query secret from the database manager. We propose a new model for PIR, utilizing auxiliary random servers providing privacy services for database access. The principal database initially engages in a preprocessing setup computation with the random servers, followed by the on-line stage with the users. Using this model we achieve the first PIR information theoretic solutions in which the database does not need to give away its data to be replicated, and with minimal on-line computation cost for the database. This solves privacy and efficiency problems inherent to all previous solutions. Specifically, in all previously existing PIR schemes the database on-line computation for one query is at least linear in the size of the data, and all previous information theoretic schemes require multiple replications of the database which are not allowed to communicate with each other.This poses a privacy problem for the database manager, who is required to hand his data to multiple foreign entities, and to the user, who is supposed to trust the multiple copies of the database not to communicate. In contrast, in our solutions no replication is needed, and the database manager only needs to perform O(1) amount of computation to answer questions of users, while all the extra computations required on line for privacy are done by the auxiliary random servers, who contain no information about the data.}},
 keywords = {anonymity, privacy, private information retrieval},
 isbn = {3-540-65142-X},
 url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.18.6742},
@@ -15508,7 +15506,7 @@ This exposition presents a model to formally study such algorithms. This model,
 number = {4},
 year = {1998},
 pages = {495 - 509 },
- abstract = {{We present techniques for efficient anonymous communication with real-time constraints as necessary for services like telephony, where a continuous data stream has to be transmitted. For concreteness, we present the detailed protocols for the narrow-band ISDN (integrated services digital network), although the heart of our techniques-anonymous channels-can also be applied to other networks. For ISDN, we achieve the same data rate as without anonymity, using the same subscriber lines and without any significant modifications to the long-distance network. A precise performance analysis is given. Our techniques are based on mixes, a method for anonymous communication for e-mail-like services introduced by D. Chaum (1981)},
+ abstract = {{We present techniques for efficient anonymous communication with real-time constraints as necessary for services like telephony, where a continuous data stream has to be transmitted. For concreteness, we present the detailed protocols for the narrow-band ISDN (integrated services digital network), although the heart of our techniques-anonymous channels-can also be applied to other networks. For ISDN, we achieve the same data rate as without anonymity, using the same subscriber lines and without any significant modifications to the long-distance network. A precise performance analysis is given. Our techniques are based on mixes, a method for anonymous communication for e-mail-like services introduced by D. Chaum (1981)}},
 keywords = {anonymity, performance analysis},
 issn = {0733-8716 },
 doi = {10.1109/49.668973 },
@@ -15531,7 +15529,7 @@ This exposition presents a model to formally study such algorithms. This model,
 year = {1998},
 publisher = {Springer-Verlag, LNCS 1525},
 organization = {Springer-Verlag, LNCS 1525},
- abstract = {{Currently known basic anonymity techniques depend on identity verification. If verification of user identities is not possible due to the related management overhead or a general lack of information (e.g. on the Internet), an adversary can participate several times in a communication relationship and observe the honest users. In this paper we focus on the problem of providing anonymity without identity verification. The notion of probabilistic anonymity is introduced. Probabilistic anonymity is based on a publicly known security parameter, which determines the security of the protocol. For probabilistic anonymity the insecurity, expressed as the probability of having only one honest participant, approaches 0 at an exponential rate as the security parameter is changed linearly. Based on our security model we propose a new MIX variant called {\textquotedblleft}Stop-and-Go-MIX{\textquotedblright} (SG-MIX) which provides anonymity without identity verification, and prove that it is probabilistically secure.},
+ abstract = {{Currently known basic anonymity techniques depend on identity verification. If verification of user identities is not possible due to the related management overhead or a general lack of information (e.g. on the Internet), an adversary can participate several times in a communication relationship and observe the honest users. In this paper we focus on the problem of providing anonymity without identity verification. The notion of probabilistic anonymity is introduced. Probabilistic anonymity is based on a publicly known security parameter, which determines the security of the protocol. For probabilistic anonymity the insecurity, expressed as the probability of having only one honest participant, approaches 0 at an exponential rate as the security parameter is changed linearly. Based on our security model we propose a new MIX variant called {\textquotedblleft}Stop-and-Go-MIX{\textquotedblright} (SG-MIX) which provides anonymity without identity verification, and prove that it is probabilistically secure.}},
 keywords = {anonymity, identity verification, security parameter},
 isbn = {978-3-540-65386-8},
 doi = {10.1007/3-540-49380-8_7},
@@ -15545,7 +15543,7 @@ This exposition presents a model to formally study such algorithms. This model,
 year = {1998},
 publisher = {Springer-Verlag, LNCS 1403},
 organization = {Springer-Verlag, LNCS 1403},
- abstract = {{In this paper we construct a universally verifiable Mix-net where the amount of work done by a verifier is independent of the number of mix-servers. Furthermore, the computational task of each mix-server is constant against the number of mix-servers except for some negligible tasks like addition. The scheme is robust, too.},
+ abstract = {{In this paper we construct a universally verifiable Mix-net where the amount of work done by a verifier is independent of the number of mix-servers. Furthermore, the computational task of each mix-server is constant against the number of mix-servers except for some negligible tasks like addition. The scheme is robust, too.}},
 keywords = {electronic voting, mix, universal verifiability},
 isbn = {978-3-540-64518-4},
 doi = {10.1007/BFb0054144},
@@ -15560,7 +15558,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {ACM Press},
 organization = {ACM Press},
 address = {El Paso, TX, United States},
- abstract = {{Private information retrieval (PIR) schemes enable a user to access k replicated copies of a database (k 2), and privately retrieve one of the n bits of data stored in the databases. This means that the queries give each individual database no partial information (in the information theoretic sense) on the identity of the item retrieved by the user. Today, the best two database scheme (k = 2) has communication complexity O(n 1=3 ), while for any constant number, k, the best k database scheme has communication complexity O(n 1=(2k\Gamma1) ). The motivation for the present work is the question whether this complexity can be reduced if one is willing to achieve computational privacy, rather than information theoretic privacy. (This means that privacy is guaranteed only with respect to databases that are restricted to polynomial time computations.) We answer this question affirmatively, and Computer Science Dept., Technion, Haifa, Israel. },
+ abstract = {{Private information retrieval (PIR) schemes enable a user to access k replicated copies of a database (k 2), and privately retrieve one of the n bits of data stored in the databases. This means that the queries give each individual database no partial information (in the information theoretic sense) on the identity of the item retrieved by the user. Today, the best two database scheme (k = 2) has communication complexity O(n 1=3 ), while for any constant number, k, the best k database scheme has communication complexity O(n 1=(2k\Gamma1) ). The motivation for the present work is the question whether this complexity can be reduced if one is willing to achieve computational privacy, rather than information theoretic privacy. (This means that privacy is guaranteed only with respect to databases that are restricted to polynomial time computations.) We answer this question affirmatively, and Computer Science Dept., Technion, Haifa, Israel. }},
 keywords = {communication complexity, private information retrieval},
 isbn = {0-89791-888-6},
 doi = {http://doi.acm.org/10.1145/258533.258609},
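
The change is the same in every hunk: each abstract field opens its value with two braces but closed it with only one, and two entries (the group-principals and Coda ones, the 8-line hunks above) additionally carried a stray line break before the closing brace. A minimal before/after sketch of the fix, using a hypothetical entry rather than one taken from gnunetbib.bib:

    @article{example99,
      title    = {A Hypothetical Entry},
      abstract = {{An abstract wrapped in double braces.},
    }

    @article{example99,
      title    = {A Hypothetical Entry},
      abstract = {{An abstract wrapped in double braces.}},
    }

In the first form the braces never balance: the lone } only closes the inner group, so BibTeX keeps reading every following field as part of the abstract and the entry is left unterminated. The second form closes {{ with }}, and the field and the entry parse cleanly.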