gnunetbib

Bibliography (BibTeX, based on AnonBib)
Log | Files | Refs | README | LICENSE

commit afe88adb7247c72f393bfe0b90d23bd5af934402
parent a1cc37eaccd71bd5552dcc7e4596f5c3ffe487c9
Author: Nils Gillmann <ng0@n0.is>
Date:   Sun,  7 Oct 2018 10:58:37 +0000

format fix

Signed-off-by: Nils Gillmann <ng0@n0.is>

Diffstat:
M gnunetbib.bib | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/gnunetbib.bib b/gnunetbib.bib
@@ -154,7 +154,7 @@
 month = dec,
 pages = {0--69},
 school = {Technische Universitaet Muenchen},
- type = {Master{\textquoteright}s},
+ type = {Master},
 address = {Muenchen},
 abstract = {Byzantine consensus is a fundamental and well-studied problem in the area of distributed system. It requires a group of peers to reach agreement on some value, even if a fraction of the peers is controlled by an adversary. This thesis proposes set union consensus, an efficient generalization of Byzantine consensus from single elements to sets. This is practically motivated by Secure Multiparty Computation protocols such as electronic voting, where a large set of elements must be collected and agreed upon. Existing practical implementations of Byzantine consensus are typically based on state machine replication and not well-suited for agreement on sets, since they must process individual agreements on all set elements in sequence. We describe and evaluate our implementation of set union consensus in GNUnet, which is based on a composition of Eppstein set reconciliation protocol with the simple gradecast consensus prococol described by Ben-Or},
 keywords = {byzantine consensus, GNUnet, secure multiparty computation, set reconciliation, voting},
@@ -7451,7 +7451,7 @@ collaborative forecasting; (3) we demonstrate that our protocols are not only se
 booktitle = {Proceedings of the 22nd International Conference on Data Engineering Workshops},
 series = {ICDEW {\textquoteright}06},
 year = {2006},
- pages = {32--,
+ pages = {0--32},
 publisher = {IEEE Computer Society},
 organization = {IEEE Computer Society},
 address = {Washington, DC, USA},
@@ -7820,7 +7820,7 @@ We show that applying encoding based on universal re-encryption can solve many o
 pages = {62--76},
 publisher = {Springer Berlin / Heidelberg},
 organization = {Springer Berlin / Heidelberg},
- abstract = {\textquotedblleft}Censorship resistant{\textquotedblright} systems attempt to prevent censors from imposing a particular distribution of content across a system. In this paper, we introduce a variation of censorship resistance (CR) that is resistant to selective filtering even by a censor who is able to inspect (but not alter) the internal contents and computations of each data server, excluding only the server{\textquoteright}s private signature key. This models a service provided by operators who do not hide their identities from censors. Even with such a strong adversarial model, our definition states that CR is only achieved if the censor must disable the entire system to filter selected content. We show that existing censorship resistant systems fail to meet this definition; that Private Information Retrieval (PIR) is necessary, though not sufficient, to achieve our definition of CR; and that CR is achieved through a modification of PIR for which known implementations exist},
+ abstract = {{\textquotedblleft}Censorship resistant{\textquotedblright} systems attempt to prevent censors from imposing a particular distribution of content across a system. In this paper, we introduce a variation of censorship resistance (CR) that is resistant to selective filtering even by a censor who is able to inspect (but not alter) the internal contents and computations of each data server, excluding only the server{\textquoteright}s private signature key. This models a service provided by operators who do not hide their identities from censors. Even with such a strong adversarial model, our definition states that CR is only achieved if the censor must disable the entire system to filter selected content. We show that existing censorship resistant systems fail to meet this definition; that Private Information Retrieval (PIR) is necessary, though not sufficient, to achieve our definition of CR; and that CR is achieved through a modification of PIR for which known implementations exist},
 keywords = {censorship resistance, private information retrieval},
 isbn = {978-3-540-29039-1},
 doi = {10.1007/11558859},
@@ -9496,8 +9496,8 @@ In this paper we propose an {\textquotedblleft}onion-like{\textquotedblright} en
 }
 @conference {Cramer04Bootstrapping,
 title = {Bootstrapping Locality-Aware P2P Networks},
- booktitle = {Proceedings of the IEEE International Conference on Networks (ICON 2004)}
- address = {Singapore},
+ booktitle = {Proceedings of the IEEE International Conference on Networks (ICON 2004)},
+ address = {Singapore},
 volume = {1},
 year = {2004},
 pages = {357--361},
@@ -15244,7 +15244,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {PhD},
 year = {1999},
 school = {University of Edinburgh},
- abstract = {This report describes an algorithm which if executed by a group of interconnected nodes will provide a robust key-indexed information storage and retrieval system with no element of central control or administration. It allows information to be made available to a large group of people in a similar manner to the "World Wide Web". Improvements over this existing system include:--No central control or administration required--Anonymous information publication and retrieval--Dynamic duplication of popular information--Transfer of information location depending upon demand There is also potential for this system to be used in a modified form as an information publication system within a large organisation which may wish to utilise unused storage space which is distributed across the organisation. The system{\textquoteright}s reliability is not guaranteed, nor is its efficiency, however the intention is that the efficiency and reliability will be sufficient to make the system useful, and demonstrate that}
+ abstract = {This report describes an algorithm which if executed by a group of interconnected nodes will provide a robust key-indexed information storage and retrieval system with no element of central control or administration. It allows information to be made available to a large group of people in a similar manner to the "World Wide Web". Improvements over this existing system include:--No central control or administration required--Anonymous information publication and retrieval--Dynamic duplication of popular information--Transfer of information location depending upon demand There is also potential for this system to be used in a modified form as an information publication system within a large organisation which may wish to utilise unused storage space which is distributed across the organisation. The system{\textquoteright}s reliability is not guaranteed, nor is its efficiency, however the intention is that the efficiency and reliability will be sufficient to make the system useful, and demonstrate that},
 url = {http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.32.3665\&rep=rep1\&type=pdf},
 author = {Ian Clarke}
 }
@@ -15364,7 +15364,7 @@ This exposition presents a model to formally study such algorithms. This model,
 publisher = {Society for Industrial and Applied Mathematics},
 organization = {Society for Industrial and Applied Mathematics},
 address = {Philadelphia, PA, USA},
- abstract = {We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formula using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above. 1 Introduction We introduce a new set of probabilistic analysis tools related to the amplification method introduced by [12] and further developed and used in [13, 5]. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including the random loss-resilient codes introduced }
+ abstract = {We introduce a new set of probabilistic analysis tools based on the analysis of And-Or trees with random inputs. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including random loss-resilient codes, solving random k-SAT formula using the pure literal rule, and the greedy algorithm for matchings in random graphs. In addition, these tools allow generalizations of these problems not previously analyzed to be analyzed in a straightforward manner. We illustrate our methodology on the three problems listed above. 1 Introduction We introduce a new set of probabilistic analysis tools related to the amplification method introduced by [12] and further developed and used in [13, 5]. These tools provide a unifying, intuitive, and powerful framework for carrying out the analysis of several previously studied random processes of interest, including the random loss-resilient codes introduced},
 keywords = {And-Or trees, coding theory},
 isbn = {0-89871-410-9},
 www_section = {http://portal.acm.org/citation.cfm?id=314722\&dl=GUIDE\&coll=GUIDE\&CFID=102355791\&CFTOKEN=32605420$\#$},
@@ -15377,7 +15377,7 @@ This exposition presents a model to formally study such algorithms. This model,
 volume = {16},
 year = {1998},
 pages = {482--494},
- abstract = {Onion Routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Onion routing{\textquoteright}s anonymous connections are bidirectional and near realtime, and can be used anywhere a socket connection can be used. Any identifying information must be in the data stream carried over an anonymous connection. An onion is a data structure that is treated as the destination address by onion routers; thus, it is used to establish an anonymous connection. Onions themselves appear differently to each onion router as well as to network observers. The same goes for data carried over the connections they establish. Proxy aware applications, such as web browsing and e-mail, require no modification to use onion routing, and do so through a series of proxies. A prototype onion routing network is running between our lab and other sites. This paper describes anonymous connections and their imple}
+ abstract = {Onion Routing is an infrastructure for private communication over a public network. It provides anonymous connections that are strongly resistant to both eavesdropping and traffic analysis. Onion routing{\textquoteright}s anonymous connections are bidirectional and near realtime, and can be used anywhere a socket connection can be used. Any identifying information must be in the data stream carried over an anonymous connection. An onion is a data structure that is treated as the destination address by onion routers; thus, it is used to establish an anonymous connection. Onions themselves appear differently to each onion router as well as to network observers. The same goes for data carried over the connections they establish. Proxy aware applications, such as web browsing and e-mail, require no modification to use onion routing, and do so through a series of proxies. A prototype onion routing network is running between our lab and other sites. This paper describes anonymous connections and their imple},
 keywords = {anonymity, onion routing},
 www_section = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.35.2362},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/10.1.1.65.8267.pdf},
@@ -15738,7 +15738,7 @@ in the communication chain. This implies that neither the respondent nor his pro
 @booklet {Stemm96reducingpower,
 title = {Reducing Power Consumption of Network Interfaces in Hand-Held Devices (Extended Abstract)},
 year = {1996},
- abstract = {An important issue to be addressed for the next generation of wirelessly-connected hand-held devices is battery longevity. In this paper we examine this issue from the point of view of the Network Interface (NI). In particular, we measure the power usage of two PDAs, the Apple Newton Messagepad and Sony Magic Link, and four NIs, the Metricom Ricochet Wireless Modem, the AT\&T Wavelan operating at 915 MHz and 2.4 GHz, and the IBM Infrared Wireless LAN Adapter. These measurements clearly indicate that the power drained by the network interface constitutes a large fraction of the total power used by the PDA. We also conduct trace-driven simulation experiments and show that by using applicationspecific policies it is possible to }
+ abstract = {An important issue to be addressed for the next generation of wirelessly-connected hand-held devices is battery longevity. In this paper we examine this issue from the point of view of the Network Interface (NI). In particular, we measure the power usage of two PDAs, the Apple Newton Messagepad and Sony Magic Link, and four NIs, the Metricom Ricochet Wireless Modem, the AT\&T Wavelan operating at 915 MHz and 2.4 GHz, and the IBM Infrared Wireless LAN Adapter. These measurements clearly indicate that the power drained by the network interface constitutes a large fraction of the total power used by the PDA. We also conduct trace-driven simulation experiments and show that by using applicationspecific policies it is possible to },
 url = {http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.39.8384},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/10.1.1.39.8384.pdf},
 author = {Mark Stemm and Paul Gauthier and Daishi Harada and Katz, Randy H.}
@@ -15860,7 +15860,7 @@ This paper describes WAFL (Write Anywhere File Layout), which is a file system d
 year = {1994},
 publisher = {University of Tennessee},
 address = {Knoxville, TN, USA},
- abstract = {Checkpointing is a simple technique for rollback recovery: the state of an executing program is periodically saved to a disk file from which it can be recovered after a failure. While recent research has developed a collection of powerful techniques for minimizing the overhead of writing checkpoint files, checkpointing remains unavailable to most application developers. In this paper we describe libckpt, a portable checkpointing tool for Unix that implements all applicable performance optimizations which are reported in the literature. While libckpt can be used in a mode which is almost totally transparent to the programmer, it also supports the incorporation of user directives into the creation of checkpoints. This user-directed checkpointing is an innovation which is unique to our work. 1 Introduction Consider a programmer who has developed an application which will take a long time to execute, say five days. Two days into the computation, the processor on which the application is}
+ abstract = {Checkpointing is a simple technique for rollback recovery: the state of an executing program is periodically saved to a disk file from which it can be recovered after a failure. While recent research has developed a collection of powerful techniques for minimizing the overhead of writing checkpoint files, checkpointing remains unavailable to most application developers. In this paper we describe libckpt, a portable checkpointing tool for Unix that implements all applicable performance optimizations which are reported in the literature. While libckpt can be used in a mode which is almost totally transparent to the programmer, it also supports the incorporation of user directives into the creation of checkpoints. This user-directed checkpointing is an innovation which is unique to our work. 1 Introduction Consider a programmer who has developed an application which will take a long time to execute, say five days. Two days into the computation, the processor on which the application is},
 keywords = {checkpointing, performance analysis},
 www_section = {http://portal.acm.org/citation.cfm?id=898770$\#$},
 www_pdf_url = {https://gnunet.org/git/bibliography.git/tree/docs/10.1.1.55.257.pdf},
@@ -16338,7 +16338,7 @@ The technique can also be used to form rosters of untraceable digital pseudonyms
 title = {Non-Discretionary Access Control for Decentralized Computing Systems},
 number = {MIT/LCS/TR-179},
 year = {1977},
- month = {May},
+ month = {may},
 school = {Laboratory for Computer Science, Massachusetts Institute of Technology},
 type = {S. M. \& E. E. thesis},
 address = {Cambridge, MA},