LSD0003: Set Union

commit ba6fa51c57e308cd7855f053d5f954dd192d3101
parent 05c485fc93d79c5474a0fdae741b21f400d9dcd5
Author: Elias Summermatter <elias.summermatter@seccom.ch>
Date:   Tue, 15 Jun 2021 19:04:34 +0200

Added comments

Diffstat:
M draft-summermatter-set-union.xml | 47 +++++++++++++++++++++++++++++++++++------------
1 file changed, 35 insertions(+), 12 deletions(-)

diff --git a/draft-summermatter-set-union.xml b/draft-summermatter-set-union.xml
@@ -1648,6 +1648,10 @@ hashSum | 0x0101 | 0x5151 | 0x5050 | 0x0000 |
           <dd>
             is SETU_P2P_DONE as registered in <xref target="gana" format="title" /> in network byte order.
           </dd>
+          <dt>FINAL CHECKSUM</dt>
+          <dd>
+            a 512-bit SHA-512 hash of the full set after synchronization. This ensures that the sets are identical in the end.
+          </dd>
         </dl>
       </section>
     </section>
@@ -1672,8 +1676,10 @@ hashSum | 0x0101 | 0x5151 | 0x5050 | 0x0000 |
         <artwork name="" type="" align="left" alt=""><![CDATA[
 0     8     16    24    32    40    48    56
 +-----+-----+-----+-----+-----+-----+-----+-----+
-| MSG SIZE  | MSG TYPE  |
-+-----+-----+-----+-----+-----+-----+-----+-----+
+| MSG SIZE  | MSG TYPE  |     FINAL CHECKSUM
++-----+-----+-----+-----+
+/                                               /
+/                                               /
 ]]></artwork>
       </figure>
       <t>where:</t>
@@ -1686,6 +1692,10 @@ hashSum | 0x0101 | 0x5151 | 0x5050 | 0x0000 |
           <dd>
             the type of SETU_P2P_FULL_DONE as registered in <xref target="gana" format="title" /> in network byte order.
           </dd>
+          <dt>FINAL CHECKSUM</dt>
+          <dd>
+            a 512-bit SHA-512 hash of the full set after synchronization. This ensures that the sets are identical in the end.
+          </dd>
         </dl>
       </section>
     </section>
@@ -2853,9 +2863,12 @@ END FUNCTION
           <!-- FIXME: I don't see how the next sentence makes sense. If we
           got a FULL_DONE, and we still have differing sets, something is
           broken and re-doing it hardly makes sense, right? @Christian im not sure about that it could be that for example
-          the set size changes (from application or other sync) while synchronisation is in progress....-->
-          If the sets differ, a resynchronisation is required. The number of possible
-          resynchronisation MUST be limited, to prevent resource exhaustion attacks.
+          the set size changes (from application or other sync) while synchronisation is in progress.... something went
+          wrong (HW failures). Should never occur! Failed! Final checksum in done/full done is SHA-512 -->
+
+          If the sets differ (the FINAL CHECKSUM field in the <xref target="messages_full_done" format="title" />
+          does not match the SHA-512 hash of the local set), the operation has failed. This is a strong indicator
+          that something went horribly wrong (e.g. a hardware bug); this should never happen.
         </t>
       </dd>
     </dl>
@@ -2888,7 +2901,12 @@ END FUNCTION
         all of the other fragments/parts of the IBF first and that the
         parameters are thus consistent apply.
         @Christian So we would have to transmit the number of IBF slices that will be transmitted first
-        to do this check right?
+        to do this check right? Receive in monotonic order and check that
+        it was the last one. Sizes always the same?
+        Size plausibility check:
+          - tie the initial byzantine upper bound to the set size difference
+          - when repeated, it can only double
+          - check exactly via offers/demands
      -->
     </dl>
   </section>
@@ -2900,19 +2918,24 @@ END FUNCTION
          generating and transmitting an unlimited number of IBFs that all
          do not decode, or to generate an IBF constructed to send the peers in an endless loop.
          To prevent an endless loop in decoding, loop detection MUST be implemented.
-         The simplest solution is to prevent decoding of more than a given number of elements.
+         The first solution is to prevent decoding of more than a given number of elements.
          <!-- FIXME: this description is awkward. Needs to be discussed.
          I think you also do not mean 'hashes' but 'element IDs'.
          @Christian just omit the details i guess anybody can freely decide how to handle loops its just important that e protection is
-         in place. Right?-->
+         in place. Right?
+           - remove the salt and save this in the hashmap
+           - stored beforehand.
+           - do both
+           - never more than MIN(number of buckets, total set sizes)
+         -->
-         A more robust solution is to implement a algorithm that detects a loop by
+         A more robust solution is to implement an algorithm that detects a loop by
          analyzing past partially decoded IBFs. This can be achieved
-         by saving the element IDs of all prior partly decoded IBFs hashes in a hashmap and check
-         for every inserted hash, if it is already in the hashmap.
+         by saving the element IDs of all prior partly decoded IBFs in a hashmap and checking
+         for every inserted element ID, if it is already in the hashmap.
        </t>
        <t>
          If the IBF decodes more elements than are plausible, the
-         operation MUST be terminated.Furthermore, if the IBF
+         operation MUST be terminated. Furthermore, if the IBF
          decoding successfully terminates and fewer elements were
          decoded than plausible, the operation MUST also be
          terminated. The upper thresholds for decoded elements from the IBF is the
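
The loop-detection change above boils down to two guards: remember every element ID ever decoded from a partially decoded IBF, and cap the total number of decodes at a plausible bound. The following is a minimal, self-contained sketch in C of that idea; the names, the 64-bit element IDs, and the fixed bounds are illustrative assumptions, not part of the draft or of this commit.

    /*
     * Sketch of the loop detection described in the diff above, assuming
     * 64-bit element IDs and a fixed upper bound on plausible decodes.
     * All identifiers here are hypothetical, not from the draft.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SEEN_BUCKETS 4096   /* power of two, > MAX_DECODES             */
    #define MAX_DECODES  1024   /* e.g. MIN(IBF buckets, total set sizes)  */

    struct SeenSet
    {
      uint64_t id[SEEN_BUCKETS];
      uint8_t used[SEEN_BUCKETS];
    };

    /* Record 'eid'; return 1 if it was already decoded before (a loop). */
    static int
    seen_insert (struct SeenSet *s, uint64_t eid)
    {
      size_t i = (size_t) (eid & (SEEN_BUCKETS - 1));

      while (s->used[i])
      {
        if (s->id[i] == eid)
          return 1;             /* same element ID again: loop detected */
        i = (i + 1) & (SEEN_BUCKETS - 1);  /* linear probing */
      }
      s->used[i] = 1;
      s->id[i] = eid;
      return 0;
    }

    int
    main (void)
    {
      struct SeenSet seen;
      unsigned int decodes = 0;
      /* Element IDs as they come out of successive partly decoded IBFs;
         the repeated 42 stands in for an attacker-induced decode loop. */
      uint64_t stream[] = { 11, 42, 7, 42 };

      memset (&seen, 0, sizeof (seen));
      for (size_t i = 0; i < sizeof (stream) / sizeof (stream[0]); i++)
      {
        if (++decodes > MAX_DECODES)
        {
          fprintf (stderr, "more decodes than plausible: terminate operation\n");
          return 1;
        }
        if (seen_insert (&seen, stream[i]))
        {
          fprintf (stderr, "element ID %llu decoded twice: terminate operation\n",
                   (unsigned long long) stream[i]);
          return 1;
        }
      }
      printf ("decode finished without loops\n");
      return 0;
    }

A real implementation would presumably size SEEN_BUCKETS from the negotiated IBF size and derive MAX_DECODES from MIN(number of buckets, total set sizes), as the commit's comments suggest.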