[consensus/marshal] Integrate commonware-coding into marshal #1680
Conversation
- Adds type definitions for erasure coding.
- Adds a new wrapper around the `buffered::Mailbox` that shards messages and recovers blocks upon retrieval of enough shards from the network (a rough sketch of the idea follows this list).
- Integrates the new `ShardedMailbox` into marshal, allowing for the broadcast and reconstruction of `CodedBlock`s.
- Defines a new interface for applications that employ erasure coding.
- Introduces a wrapper around the new `Application` trait that minimizes the boilerplate required to implement applications that utilize sharded broadcast. This wrapper is possible because, when using erasure coding, the verify and broadcast steps are identical for all applications.
- [consensus/marshal] Append `CodingConfig` to consensus commitment:
  - Broadcasts the local shard on verification, rather than on every notarization vote received from consensus.
  - Reduces pressure on the marshal control loop.
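The commits above shard a block for broadcast; the following is a minimal sketch of that encode step. The `CodedBlock` shape and `encode_block` helper are hypothetical, and the `reed-solomon-erasure` crate stands in for `commonware-coding`, whose actual API this PR defines and may differ:

```rust
use reed_solomon_erasure::galois_8::ReedSolomon;

/// Hypothetical container pairing a block's shards with its coding
/// parameters; the real `CodedBlock` may also carry a commitment and
/// per-shard proofs.
struct CodedBlock {
    data_shards: usize,   // k: shards required to reconstruct the block
    parity_shards: usize, // m: extra shards tolerating missing peers
    shards: Vec<Vec<u8>>, // k + m equally sized shards
}

/// Split a serialized block into k data shards and m parity shards.
fn encode_block(bytes: &[u8], k: usize, m: usize) -> CodedBlock {
    assert!(!bytes.is_empty() && k > 0 && m > 0);
    let shard_len = bytes.len().div_ceil(k);

    // Chunk the block, padding the final chunk to a uniform length.
    let mut shards: Vec<Vec<u8>> = bytes
        .chunks(shard_len)
        .map(|c| {
            let mut s = c.to_vec();
            s.resize(shard_len, 0);
            s
        })
        .collect();
    // Allocate zeroed slots for any missing data shards plus the parity shards.
    shards.resize(k + m, vec![0u8; shard_len]);

    // Fill the parity slots in place.
    let rs = ReedSolomon::new(k, m).expect("valid coding parameters");
    rs.encode(&mut shards).expect("equal-length shards");

    CodedBlock { data_shards: k, parity_shards: m, shards }
}
```

Because Reed-Solomon is an MDS code, any k of the resulting k + m shards are sufficient to rebuild the padded block.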
    
Codecov Report

❌ Patch coverage is […]

@@            Coverage Diff             @@
##             main    #1680      +/-   ##
==========================================
- Coverage   92.26%   90.55%   -1.71%     
==========================================
  Files         309      312       +3     
  Lines       81477    81733     +256     
==========================================
- Hits        75171    74013    -1158     
- Misses       6306     7720    +1414     
... and 20 files with indirect coverage changes. Continue to review full report in Codecov by Sentry.
    
Overview
Warning: WIP; open for early review, but not feature complete. See TODOs below.
Integrates `commonware-coding` as a default within `commonware-consensus`'s `marshal` by adding a new shim between consensus and the `marshal::Actor`. This layer is responsible for broadcasting the erasure-coded chunks of `Block`s sent by the proposer as well as reconstructing them as they arrive (a rough sketch of this flow follows the TODO list below).

Example usage of the new API in alto: commonwarexyz/alto#149

TODO

- The `test_finalize_good_links` test seems to be pseudo-passing; all validators are eventually reconstructing the proposed blocks and receiving the same finalized chain, but it's failing the determinism check. Need to investigate that.
- `ShardLayer`: prune when the `Orchestrator` reports.
- `ShardLayer`: […] the `Automaton` needs this in order to verify individual chunks prior to sending notarization votes.
- If the `ShardMailbox` had a way of being notified when its `buffered::Mailbox` received shards, it could register a listener when `Automaton::verify` initially verifies the local shard for the notarization vote. This would (hopefully) mean that by the time a proposer wanted to build on the notarized parent, the block is already present for them.
- The `CodingAdapter` will need to ensure that it never asks the application to build on a "bad" parent; the application should not need to know this detail about the consensus chain. (Edit: or, a more elegant solution: #1767, "[consensus] Allow for a second, post-notarization `verify(...)` hook to the `Automaton`, called `certify(...)`".)
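For orientation, here is a rough sketch of the control flow the shim aims for: verify the locally assigned shard when consensus asks for verification, broadcast it once (rather than on every notarization vote), buffer incoming shards, and deliver a reconstructed block to the marshal once enough valid shards arrive. Every name below (`ShardLayer`, `BlockSink`, `Shard`, `verify_shard_proof`, `broadcast_to_peers`, `try_reconstruct`) is a hypothetical placeholder, not an actual commonware type or trait:

```rust
/// Stand-in for the downstream marshal mailbox that receives whole blocks.
trait BlockSink {
    fn deliver(&mut self, block: Vec<u8>);
}

/// A single erasure-coded chunk plus its inclusion proof.
struct Shard {
    index: usize,
    bytes: Vec<u8>,
    proof: Vec<u8>,
}

struct ShardLayer<M> {
    marshal: M,           // downstream sink for reconstructed blocks
    threshold: usize,     // k: shards required to reconstruct
    received: Vec<Shard>, // valid shards buffered for the current proposal
}

impl<M: BlockSink> ShardLayer<M> {
    /// Invoked when consensus verifies a proposal: check the local shard
    /// against the commitment and broadcast it immediately, instead of
    /// rebroadcasting on every notarization vote.
    fn verify_and_broadcast(&mut self, local: Shard, commitment: &[u8]) -> bool {
        if !verify_shard_proof(&local, commitment) {
            return false; // refuse to vote on an invalid shard
        }
        broadcast_to_peers(&local);
        self.on_shard(local, commitment);
        true
    }

    /// Invoked for every shard arriving off the wire; once `threshold`
    /// valid shards are buffered, reconstruct and forward the block.
    fn on_shard(&mut self, shard: Shard, commitment: &[u8]) {
        if !verify_shard_proof(&shard, commitment) {
            return;
        }
        self.received.push(shard);
        if self.received.len() >= self.threshold {
            if let Some(block) = try_reconstruct(&self.received, commitment) {
                self.marshal.deliver(block);
            }
        }
    }
}

// Stubs standing in for the proof check, gossip, and decoder plumbing.
fn verify_shard_proof(_shard: &Shard, _commitment: &[u8]) -> bool { true }
fn broadcast_to_peers(_shard: &Shard) {}
fn try_reconstruct(_shards: &[Shard], _commitment: &[u8]) -> Option<Vec<u8>> { None }
```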
Viz

```mermaid
sequenceDiagram
    participant P as Proposer / App
    participant E as Erasure Encoder
    participant N1 as Participant 1
    participant N2 as Participant 2
    participant N3 as Participant 3
    participant Nr as Participant r
    participant V as Consensus
    participant S as Shard Layer
    participant M as Marshal
    Note over P,M: Block Proposal & Encoding Phase
    P->>E: Original block data
    E->>E: Split into k chunks + generate r parity chunks
    E->>P: Return k+r encoded chunks
    Note over P,M: Distribution Phase
    P->>N1: Send chunk 1 + merkle proof
    P->>N2: Send chunk 2 + merkle proof
    P->>N3: Send chunk 3 + merkle proof
    P->>Nr: Send chunk k+r + merkle proof
    Note over P,M: Notarization Phase
    N1->>N1: Validate chunk integrity
    N2->>N2: Validate chunk integrity
    N3->>N3: Validate chunk integrity
    Nr->>Nr: Validate chunk integrity
    N1->>V: Attest chunk validity + vote
    N2->>V: Attest chunk validity + vote
    N3->>V: Attest chunk validity + vote
    Nr->>V: Attest chunk validity + vote
    V->>V: Notarization
    Note over P,M: Reconstruction Phase (parallel w/ finalization)
    S->>S: Collect ≥k valid chunks
    S->>S: Reconstruct original block
    S->>S: Verify commitment
    S->>M: Send block
    Note over P,M: Finalization Phase (parallel w/ reconstruction)
    N1->>V: Vote
    N2->>V: Vote
    N3->>V: Vote
    Nr->>V: Vote
    V->>V: Finalization
    Note over P,M: Reporting Phase
    M->>P: Send finalized block
```

[Figure: honest validator shard distribution]
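The Reconstruction Phase in the diagram ("collect ≥k valid chunks, reconstruct, verify commitment") could look roughly like the following. As before, the `reed-solomon-erasure` crate stands in for `commonware-coding`, a plain SHA-256 digest stands in for the real block commitment, and `reconstruct_block`, its parameters, and the padding convention are assumptions for illustration:

```rust
use reed_solomon_erasure::galois_8::ReedSolomon;
use sha2::{Digest, Sha256};

/// Attempt to rebuild the original block from whichever shards arrived.
/// `received[i]` is `Some(bytes)` if shard `i` was delivered, `None` otherwise.
/// Returns the block only if the recovered bytes match the commitment.
fn reconstruct_block(
    mut received: Vec<Option<Vec<u8>>>,
    k: usize,
    m: usize,
    block_len: usize,      // original length, needed to strip padding
    commitment: &[u8; 32], // stand-in commitment: SHA-256 of the block
) -> Option<Vec<u8>> {
    let rs = ReedSolomon::new(k, m).ok()?;
    // Fails if fewer than k shards are present.
    rs.reconstruct(&mut received).ok()?;

    // Concatenate the k data shards and strip the encoder's padding.
    let mut block: Vec<u8> = received
        .into_iter()
        .take(k)
        .flat_map(|s| s.expect("filled by reconstruct"))
        .collect();
    block.truncate(block_len);

    // Only hand the block onward if it matches the commitment.
    if Sha256::digest(&block).as_slice() == commitment.as_slice() {
        Some(block)
    } else {
        None
    }
}
```

In the PR, the equivalent of this step would sit behind the `ShardedMailbox`, which only hands fully reconstructed and verified blocks to the marshal actor.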

Meta
closes #1520