commit f9368db2bf8ce9eb6a9e951891142309c9c898d4
parent e3757e7b62260e679a7f470a0abeeab054d21ebf
Author: Julius Bünger <buenger@mytum.de>
Date: Thu, 29 Feb 2024 17:56:21 +0100
cong: put notes into more understandable words
Diffstat:
1 file changed, 49 insertions(+), 10 deletions(-)
diff --git a/developers/apis/cong.rst b/developers/apis/cong.rst
@@ -42,9 +42,6 @@ Peer IDs
Peer ids cease to be unique for the lifetime of a peer; instead they change
each time a peer's addresses change. This includes gaining or losing an
address.
-..
- TODO ^^^ gaining/losing makes sense?
-
It is important to note that this design choice only increases the cost of
network location tracking and does not fully prevent it. To fully prevent
tracking, onion routing on top of CADET is envisioned.
@@ -67,10 +64,14 @@ places can be easily tracked by everyone just by recording the different
addresses that are tied to that peer id over time.
..
- TODO why does this prevent tracking?
+ TODO
+ - why does this prevent tracking? (partially answered in the attacker
+   model section below)
- tracking hellos of peer: visible
- reverse mapping (address to peer?) not possible - there's no functionality
for that
+ - gaining/losing as criterion to change peer id makes sense?
+
.. _Attacker-Model:
@@ -251,6 +252,8 @@ Transport
..
- transport creates and signs hello (w peer id)
+ - all other parts of gnunet that simply read the peer id from a file and
+ assume its continuity
.. _Details-on-how:
@@ -313,19 +316,55 @@ TODO
Open Design Questions
---------------------
- - terminology of peer ids
+In this section we list design questions that have not been decided yet.
+
+..
- scope of peer ids and other such elements (gns/identity, ...)
+ - terminology of peer ids
+ - resolve gns to pid on lower layer? (cadet or even core)
+ - convenience api: cadet connection to domain name
+ - not one id per set of addresses, but per single address?
+ - multiple peer ids per peer
+ - tracking even harder
+
+
+.. _Peer-ID-changes-and-connectivity:
+
+Peer ID changes and connectivity
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+..
- smooth handling of address/pid change possible?/signaling change of address
- should be possible at least when we gain one address
- construct that allows an additional peer id that is pre-computed (not
based on addresses) and announced ahead of address change?
- other terminology: provide tracking capability selectively
- use peer ids as long as there are open connections with them?
- - resolve gns to pid on lower layer? (cadet or even core)
- - convenience api: cadet connection to domain name
- - not one id per set of addresses, but per single address?
- - multiple peer ids per peer
- - tracking even harder
+
+In case a peer's addresses change, it gets a new peer id and therefore needs
+to reconnect. The challenge is to reconnect as fast as possible. The main
+problem is that a peer cannot know its next peer id in all cases. Connections
+that have dedicated peers at their endpoints will probably look up the new
+peer id of the other peer in a higher-layer service, most probably gns.
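+
+To make the problem concrete, here is a minimal C sketch of such a
+reconnect. All names (``resolve_peer_id``, ``connect_to``, the example
+name ``alice.example``) are made up for illustration and are not actual
+GNUnet API:
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <stdio.h>
+   #include <string.h>
+
+   struct peer_id
+   {
+     uint8_t key[32];
+   };
+
+   /* Stand-in for a lookup in a higher-layer service, e.g. gns. */
+   static int
+   resolve_peer_id (const char *name, struct peer_id *pid)
+   {
+     (void) name;
+     memset (pid->key, 0x42, sizeof pid->key); /* dummy result */
+     return 0;
+   }
+
+   /* Stand-in for opening a connection to the given peer id. */
+   static int
+   connect_to (const struct peer_id *pid)
+   {
+     (void) pid;
+     return 0;
+   }
+
+   /* A connection broke because the remote peer's addresses, and thus
+      its peer id, changed: look up the new id and reconnect. */
+   static void
+   reconnect (const char *name)
+   {
+     struct peer_id next;
+
+     if (0 != resolve_peer_id (name, &next))
+     {
+       fprintf (stderr, "cannot resolve %s\n", name);
+       return;
+     }
+     if (0 != connect_to (&next))
+       fprintf (stderr, "reconnect to %s failed\n", name);
+   }
+
+   int
+   main (void)
+   {
+     reconnect ("alice.example");
+     return 0;
+   }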
+
+In the case in which a peer just gains an additional address, that peer can
+pre-calculate its next peer id, signal it via the still open connections on
+the old peer id and finally switch to using the new peer id.
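+
+The following sketches how such a pre-announcement could look; the
+message layout, the message type and the helper names are assumptions,
+not an actual GNUnet protocol definition:
+
+.. code-block:: c
+
+   #include <stddef.h>
+   #include <stdint.h>
+
+   struct peer_id
+   {
+     uint8_t key[32];
+   };
+
+   /* Hypothetical wire message announcing the upcoming peer id. */
+   struct next_peer_id_msg
+   {
+     uint16_t type; /* hypothetical MSG_NEXT_PEER_ID */
+     uint16_t size;
+     struct peer_id next_id;
+   };
+
+   struct connection
+   {
+     int fd; /* placeholder for a real channel */
+   };
+
+   /* Stand-in for sending a message on one open connection. */
+   static void
+   send_msg (struct connection *c, const struct next_peer_id_msg *m)
+   {
+     (void) c;
+     (void) m;
+   }
+
+   /* On gaining an address: pre-compute the next peer id, tell every
+      currently connected peer, then switch to the new id. */
+   static void
+   announce_next_peer_id (struct connection *conns, size_t n,
+                          const struct peer_id *next)
+   {
+     struct next_peer_id_msg m;
+
+     m.type = 1; /* hypothetical message type */
+     m.size = (uint16_t) sizeof m;
+     m.next_id = *next;
+     for (size_t i = 0; i < n; i++)
+       send_msg (&conns[i], &m);
+     /* ...then re-announce addresses under the new id and retire the
+        old one. */
+   }
+
+   int
+   main (void)
+   {
+     struct connection conns[2] = { { 3 }, { 4 } };
+     struct peer_id next = { { 0 } };
+
+     announce_next_peer_id (conns, 2, &next);
+     return 0;
+   }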
+
+Other, more elaborate ideas involve using multiple peer ids per peer. One
+option is an additional address-independent peer id that 'survives' address
+changes and serves as a means to link to the new address-based peer id after
+a change. It would only be sent to connected peers and be reset once all
+connections have been re-established. Alternatively (or maybe in addition),
+peers could use multiple address-based peer ids, one per address. Some peer
+ids might then stay unchanged while others go offline.
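+
+A sketch of the per-address variant, assuming a simple (made-up) table
+that maps each address to its own peer id; losing an address then only
+invalidates the id derived from it:
+
+.. code-block:: c
+
+   #include <stddef.h>
+   #include <stdint.h>
+   #include <string.h>
+
+   struct peer_id
+   {
+     uint8_t key[32];
+   };
+
+   /* One peer id per address. */
+   struct addr_entry
+   {
+     char addr[64];
+     struct peer_id pid;
+     int online;
+   };
+
+   /* Losing one address takes only its peer id offline; the other
+      entries stay valid. */
+   static void
+   address_lost (struct addr_entry *tab, size_t n, const char *addr)
+   {
+     for (size_t i = 0; i < n; i++)
+       if (0 == strcmp (tab[i].addr, addr))
+         tab[i].online = 0;
+   }
+
+   int
+   main (void)
+   {
+     struct addr_entry tab[2] = {
+       { "ipv4:198.51.100.1", { { 1 } }, 1 },
+       { "ipv6:[2001:db8::1]", { { 2 } }, 1 },
+     };
+
+     address_lost (tab, 2, "ipv4:198.51.100.1");
+     return 0;
+   }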
+
+Another idea to address this challenge is to keep using peer ids on
+connections which are still active, but to stop publishing those ids.
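+
+A sketch of this 'pin, but stop publishing' idea, again with made-up
+types; an id stays usable while connections reference it, but is no
+longer advertised:
+
+.. code-block:: c
+
+   #include <stdint.h>
+
+   struct peer_id
+   {
+     uint8_t key[32];
+   };
+
+   struct id_state
+   {
+     struct peer_id pid;
+     unsigned int refs; /* open connections still using this id */
+     int advertised;    /* still published in hellos? */
+   };
+
+   /* Addresses changed: stop publishing the old id immediately, but
+      keep it alive for the connections that still use it. */
+   static void
+   retire_id (struct id_state *s)
+   {
+     s->advertised = 0;
+   }
+
+   static void
+   connection_closed (struct id_state *s)
+   {
+     if ((0 == --s->refs) && (0 == s->advertised))
+     {
+       /* last user gone: the old id can now be discarded */
+     }
+   }
+
+   int
+   main (void)
+   {
+     struct id_state s = { { { 0 } }, 2, 1 };
+
+     retire_id (&s);
+     connection_closed (&s);
+     connection_closed (&s);
+     return 0;
+   }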
+
+Terminology-wise we might add another perspective and say that we selectively
+and deliberately provide tracking capability to peers with which we want to
+stay in touch.
.. _Requirements: