DO NOT MERGE, only for rebase testing #14391
Closed
* Add Support For Discovery Of Column Subnets * Lint for SubnetsPerNode * Manu's Review * Change to a better name
* Add Data Column Subscriber * Add Data Column Validator * Wire all Handlers In * Fix Build * Fix Test * Fix IP in Test * Fix IP in Test
* Add RPC Handler * Add Column Requests * Update beacon-chain/db/filesystem/blob.go Co-authored-by: Manu NALEPA <[email protected]> * Update beacon-chain/p2p/rpc_topic_mappings.go Co-authored-by: Manu NALEPA <[email protected]> * Manu's Review * Manu's Review * Interface Fixes * mock manager --------- Co-authored-by: Manu NALEPA <[email protected]>
* Bump `c-kzg-4844` lib to the `das` branch. * Implement `MerkleProofKZGCommitments`. * Implement `das-core.md`. * Use `peerdas.CustodyColumnSubnets` and `peerdas.CustodyColumns`. * `CustodyColumnSubnets`: Include `i` in the for loop. * Remove `computeSubscribedColumnSubnet`. * Remove `peerdas.CustodyColumns` out of the for loop.
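For context, the custody assignment in `das-core.md` boils down to deterministically mapping a node ID to a set of column subnets, then expanding those subnets into column indices. A minimal Go sketch of that idea follows; the constants, hashing details, and helper names are illustrative, not Prysm's actual `peerdas` API:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"sort"
)

const (
	dataColumnSidecarSubnetCount = 32  // illustrative; the real value comes from the spec config
	numberOfColumns              = 128 // illustrative
)

// custodyColumnSubnets hashes successive node IDs until enough distinct
// subnet indices have been selected. This mirrors the spirit of
// `peerdas.CustodyColumnSubnets`; the exact hashing scheme is defined by
// the consensus spec, not by this sketch.
func custodyColumnSubnets(nodeID uint64, custodySubnetCount uint64) map[uint64]bool {
	subnets := make(map[uint64]bool, custodySubnetCount)
	for i := nodeID; uint64(len(subnets)) < custodySubnetCount; i++ {
		var b [8]byte
		binary.LittleEndian.PutUint64(b[:], i)
		h := sha256.Sum256(b[:])
		subnet := binary.LittleEndian.Uint64(h[:8]) % dataColumnSidecarSubnetCount
		subnets[subnet] = true
	}
	return subnets
}

// custodyColumns expands the custodied subnets into concrete column indices,
// one column per subnet per "row" of columns, sorted ascending.
func custodyColumns(subnets map[uint64]bool) []uint64 {
	columnsPerSubnet := numberOfColumns / dataColumnSidecarSubnetCount
	columns := make([]uint64, 0, len(subnets)*columnsPerSubnet)
	for i := 0; i < columnsPerSubnet; i++ {
		for subnet := range subnets {
			columns = append(columns, uint64(i)*dataColumnSidecarSubnetCount+subnet)
		}
	}
	sort.Slice(columns, func(a, b int) bool { return columns[a] < columns[b] })
	return columns
}

func main() {
	subnets := custodyColumnSubnets(42, 4)
	fmt.Println(custodyColumns(subnets))
}
```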
* Remove capital letter from error messages. * `[4]byte` => `[fieldparams.VersionLength]byte`. * Prometheus: Remove extra `committee`. They are probably due to a bad copy/paste. Note: The name of the probe itself remains unchanged, to ensure backward compatibility. * Implement Proposer RPC for data columns. * Fix TestProposer_ProposeBlock_OK test. * Remove default peerDAS activation. * `validateDataColumn`: Workaround to return a `VerifiedRODataColumn`
* Add new DA check * Exit early in the event no commitments exist. * Gazelle * Fix Mock Broadcaster * Fix Test Setup * Update beacon-chain/blockchain/process_block.go Co-authored-by: Manu NALEPA <[email protected]> * Manu's Review * Fix Build --------- Co-authored-by: Manu NALEPA <[email protected]>
* Update `consensus_spec_version` to `v1.5.0-alpha.1`. * `CustodyColumns`: Fix and implement spec tests. * Make deepsource happy. * `^uint64(0)` => `math.MaxUint64`. * Fix `TestLoadConfigFile` test.
…ob`. (#13957) * `SendDataColumnSidecarByRoot`: Return `RODataColumn` instead of `ROBlob`. * Make deepsource happier.
* Upgrade c-kzg-4844 package * Upgrade bazel deps
* Enable E2E And Add Fixes * Register Same Topic For Data Columns * Initialize Capacity Of Slice * Fix Initialization of Data Column Receiver * Remove Mix In From Merkle Proof * E2E: Subscribe to all subnets. * Remove Index Check * Remaining Bug Fixes to Get It Working * Change Evaluator to Allow Test to Finish * Fix Build * Add Data Column Verification * Fix LoopVar Bug * Do Not Allocate Memory * Update beacon-chain/blockchain/process_block.go Co-authored-by: Manu NALEPA <[email protected]> * Update beacon-chain/core/peerdas/helpers.go Co-authored-by: Manu NALEPA <[email protected]> * Update beacon-chain/core/peerdas/helpers.go Co-authored-by: Manu NALEPA <[email protected]> * Gofmt * Fix It Again * Fix Test Setup * Fix Build * Fix Trusted Setup panic * Fix Trusted Setup panic * Use New Test --------- Co-authored-by: Manu NALEPA <[email protected]>
* Add Data Structure for New Request Type * Add Data Column By Range Handler * Add Data Column Request Methods * Add new validation for columns by range requests * Fix Build * Allow Prysm Node To Fetch Data Columns * Allow Prysm Node To Fetch Data Columns And Sync * Bug Fixes For Interop * GoFmt * Use different var * Manu's Review
* PeerDAS: Implement sampling. * `TestNewRateLimiter`: Fix with the new number of expected registered topics.
* Set Custody Count Correctly * Fix Discovery Count
* Adding error wrapping * Fix `CustodyColumnSubnets` tests.
* Support Data Columns For By Root Requests * Revert Config Changes * Fix Panic * Fix Process Block * Fix Flags * Lint * Support Checkpoint Sync * Manu's Review * Add Support For Columns in Remaining Methods * Fix Incorrect Unmarshalling
* Hack E2E * Fix it For Real * Gofmt * Remove
* Wrap errors, add logs. * `missingColumnRequest`: Fix blobs <-> data columns mix. * `ColumnIndices`: Return `map[uint64]bool` instead of `[fieldparams.NumberOfColumns]bool`. * `DataColumnSidecars`: `interfaces.SignedBeaconBlock` ==> `interfaces.ReadOnlySignedBeaconBlock`. We don't need any of the non read-only methods. * Fix comments. * `handleUnblidedBlock` ==> `handleUnblindedBlock`. * `SaveDataColumn`: Move log from debug to trace. Previously, attempting to save an already existing data column sidecar printed a debug log, and this case could be quite common now with data column reconstruction enabled. * `sampling_data_columns.go` --> `data_columns_sampling.go`. * Reconstruct data columns.
* Remove some `_` identifiers. * Blob storage: Implement a notifier system for data columns. * `dataColumnSidecarByRootRPCHandler`: Remove ugly `time.Sleep(100 * time.Millisecond)`. * Address Nishant's comment.
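A channel-based notifier is one way to replace the `time.Sleep` polling in the by-root handler: the handler waits on a channel that the storage layer closes once the column is persisted. A minimal sketch, with names that are illustrative rather than Prysm's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// columnNotifier lets an RPC handler block until a data column has been
// written to blob storage, instead of polling with time.Sleep.
type columnNotifier struct {
	mu      sync.Mutex
	waiters map[uint64][]chan struct{} // column index -> waiting channels
}

func newColumnNotifier() *columnNotifier {
	return &columnNotifier{waiters: make(map[uint64][]chan struct{})}
}

// waitFor returns a channel that is closed once the given column is saved.
func (n *columnNotifier) waitFor(column uint64) <-chan struct{} {
	n.mu.Lock()
	defer n.mu.Unlock()
	ch := make(chan struct{})
	n.waiters[column] = append(n.waiters[column], ch)
	return ch
}

// notify is called by the storage layer after a column sidecar is persisted.
func (n *columnNotifier) notify(column uint64) {
	n.mu.Lock()
	defer n.mu.Unlock()
	for _, ch := range n.waiters[column] {
		close(ch)
	}
	delete(n.waiters, column)
}

func main() {
	n := newColumnNotifier()
	done := n.waitFor(3)
	go n.notify(3)
	<-done
	fmt.Println("column 3 available")
}
```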
* Introduce hidden flag `data-columns-withhold-count`. * Address Nishant's comment.
…ock` case. (#14066) * `recoverBlobs`: Cover the `0 < blobsCount < fieldparams.MaxBlobsPerBlock` case. * Fix Nishant's comment.
* PeerDAS: Broadcast reconstructed data columns that were not seen via gossip. * Address Nishant's comment.
* `privKey`: Improve logs. * peerDAS: Move functions in file. Add documentation. * PeerDAS: Remove unused `ComputeExtendedMatrix` and `RecoverMatrix` functions. * PeerDAS: Stop generating new P2P private key at start. * Fix Sammy's comment.
* [PeerDAS] Minor fixes and tests for gossiping out data columns * Fix metrics
…14103) * [PeerDAS] add data column related protos and fix data column by root bug * Add more tests
* `ConvertPeerIDToNodeID`: Add tests. * Remove `extractNodeID` and use `ConvertPeerIDToNodeID` instead. * Implement IncrementalDAS. * `DataColumnSamplingLoop` ==> `DataColumnSamplingRoutine`. * HypergeomCDF: Add test. * `GetValidCustodyPeers`: Optimize and add tests. * Remove blank identifiers. * Implement `CustodyCountFromRecord`. * Implement `TestP2P.CustodyCountFromRemotePeer`. * `NewTestP2P`: Add `swarmt.Option` parameters. * `incrementalDAS`: Rework and add tests. * Remove useless warning.
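IncrementalDAS sizes its extra sampling rounds with a hypergeometric tail probability: when drawing n columns without replacement from N, of which K are unavailable, P(X <= k) = sum_{i=0..k} C(K, i) * C(N-K, n-i) / C(N, n). A hedged Go sketch of that CDF, not necessarily the signature of Prysm's `HypergeomCDF`:

```go
package main

import (
	"fmt"
	"math/big"
)

// hypergeomCDF returns P(X <= k) for a hypergeometric distribution with
// population size N, K "success" items, and n draws without replacement.
// Plain big.Rat arithmetic for illustration; assumes K <= N and n <= N.
func hypergeomCDF(k, N, K, n uint64) *big.Rat {
	sum := new(big.Rat)
	denom := new(big.Int).Binomial(int64(N), int64(n))
	for i := uint64(0); i <= k && i <= n; i++ {
		if n-i > N-K {
			continue // C(N-K, n-i) is zero in this case
		}
		num := new(big.Int).Mul(
			new(big.Int).Binomial(int64(K), int64(i)),
			new(big.Int).Binomial(int64(N-K), int64(n-i)),
		)
		sum.Add(sum, new(big.Rat).SetFrac(num, denom))
	}
	return sum
}

func main() {
	// Probability of drawing no unavailable column when sampling 16 columns
	// out of 128, of which 64 are unavailable.
	p := hypergeomCDF(0, 128, 64, 16)
	f, _ := p.Float64()
	fmt.Printf("%.6f\n", f)
}
```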
…kage (#14136) * chore: move all ckzg related functionality into kzg package * refactor code to match * run: bazel run //:gazelle -- fix * chore: add some docs and stop copying large objects when converting between types * fixes * manually add kzg.go dep to BUILD.bazel * move kzg methods to kzg.go * chore: add RecoverCellsAndProofs method * bazel run //:gazelle -- fix * use BytesPerBlob constant * chore: fix some deepsource issues * one declaration for commitments and blobs
* change recoverBlobs to recoverCellsAndProofs * modify code to take in the cells and proofs for a particular blob instead of the blob itself * add CellsAndProofs structure * modify recoverCellsAndProofs to return `cellsAndProofs` structure * modify `DataColumnSidecarsForReconstruct` to accept the `cellsAndKZGProofs` structure * bazel run //:gazelle -- fix * use kzg abstraction for kzg method * move CellsAndProofs to kzg.go
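The `CellsAndProofs` structure referenced above is essentially a per-blob bundle of extended cells and their KZG proofs. A sketch of the shape, with illustrative sizes; the real constants and types live in the spec config and Prysm's kzg package:

```go
package main

import "fmt"

// Illustrative sizes only.
const (
	cellsPerExtBlob = 128  // cells in one extended blob
	bytesPerCell    = 2048 // bytes per cell
	bytesPerProof   = 48   // bytes per KZG proof
)

// CellsAndProofs groups the cells of one extended blob with the KZG proof
// for each cell, so reconstruction helpers can pass both around together.
// This is a sketch of the shape, not Prysm's exact definition.
type CellsAndProofs struct {
	Cells  [cellsPerExtBlob][bytesPerCell]byte
	Proofs [cellsPerExtBlob][bytesPerProof]byte
}

func main() {
	var cp CellsAndProofs
	fmt.Println(len(cp.Cells), len(cp.Proofs))
}
```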
* Save All the Current Changes * Add check for data sampling * Fix Test * Gazelle * Manu's Review * Fix Test
Reason: If a peer does not expose its `csc` field in its ENR, then there is nothing we can do.
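Reading the custody subnet count from a peer's ENR and defaulting when the `csc` entry is absent can be sketched with go-ethereum's `enr` package; the entry type and default value below are assumptions for illustration:

```go
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/p2p/enr"
)

// csc wraps the custody-subnet-count ENR entry. The key name "csc" follows
// the peerDAS networking spec; the uint64 encoding here is an assumption.
type csc uint64

func (csc) ENRKey() string { return "csc" }

// custodyCountFromRecord reads the peer's advertised custody subnet count,
// falling back to a default when the entry is missing, since in that case
// there is nothing better we can do.
func custodyCountFromRecord(r *enr.Record, defaultCount uint64) uint64 {
	var count csc
	if err := r.Load(&count); err != nil {
		return defaultCount // entry absent or undecodable
	}
	return uint64(count)
}

func main() {
	var r enr.Record
	fmt.Println(custodyCountFromRecord(&r, 1)) // prints the default: 1
}
```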
* chore: move all ckzg related functionality into kzg package * refactor code to match * run: bazel run //:gazelle -- fix * chore: add some docs and stop copying large objects when converting between types * fixes * manually add kzg.go dep to BUILD.bazel * move kzg methods to kzg.go * chore: add RecoverCellsAndProofs method * bazel run //:gazelle -- fix * make Cells a flattened sequence of bytes * chore: add test for flattening roundtrip * chore: remove code that was doing the flattening outside of the kzg package * fix merge * fix * remove now un-needed conversion * use pointers for Cell parameters * linter * rename cell conversion methods (this only applies to the old version of c-kzg)
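The flattening change stores a blob's cells as one contiguous byte slice; the roundtrip the added test exercises looks roughly like this (cell size illustrative):

```go
package main

import (
	"bytes"
	"fmt"
)

const bytesPerCell = 2048 // illustrative cell size

// flattenCells concatenates fixed-size cells into one contiguous byte slice.
func flattenCells(cells [][bytesPerCell]byte) []byte {
	out := make([]byte, 0, len(cells)*bytesPerCell)
	for i := range cells {
		out = append(out, cells[i][:]...)
	}
	return out
}

// unflattenCells splits a flattened sequence back into fixed-size cells.
func unflattenCells(flat []byte) [][bytesPerCell]byte {
	cells := make([][bytesPerCell]byte, len(flat)/bytesPerCell)
	for i := range cells {
		copy(cells[i][:], flat[i*bytesPerCell:(i+1)*bytesPerCell])
	}
	return cells
}

func main() {
	cells := make([][bytesPerCell]byte, 4)
	cells[2][0] = 0xff
	round := unflattenCells(flattenCells(cells))
	fmt.Println(bytes.Equal(round[2][:], cells[2][:])) // true
}
```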
* `validateDataColumn`: Add comments and remove debug computation. * `sampleDataColumnsFromPeer`: Add KZG verification * `VerifyKZGInclusionProofColumn`: Add unit test. * Make deepsource happy. * Address Nishant's comment. * Address Nishant's comment.
* Trigger At Deneb * Fix Rate Limits
…-> `CellsToBlob` -> `ComputeCellsAndKZGProofs` (#14183) * use recoverCellsAndKZGProofs * make recoverAllCells and CellsToBlob private * chore: all methods now return CellsAndProof struct * chore: update code
* PeerDAS: parallelizing sample queries * PeerDAS: select samples from non-custodied columns * Finish rebase * Add more test cases
* Update ckzg4844 to latest version * Run go mod tidy * Remove unnecessary tests & run goimports * Remove fieldparams from blockchain/kzg * Add back blank line * Avoid large copies * Run gazelle * Use trusted setup from the specs & fix issue with struct * Run goimports * Fix mistake in makeCellsAndProofs --------- Co-authored-by: Manu NALEPA <[email protected]>
* PeerDAS: Run reconstruction in parallel. * `isDataAvailableDataColumns` --> `isDataColumnsAvailable` * `isDataColumnsAvailable`: Return `nil` as soon as half of the columns are received. * Make deepsource happy.
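Returning early once half of the columns are in works because the column extension is erasure-coded: any 50% of the columns suffices to reconstruct the rest. A hedged sketch of that availability check, with illustrative names and constants:

```go
package main

import (
	"errors"
	"fmt"
)

const numberOfColumns = 128 // illustrative; defined by the spec config

// isDataColumnsAvailableSketch returns nil as soon as at least half of the
// columns for a block have been received, since the missing half can be
// reconstructed from the erasure-coded extension.
func isDataColumnsAvailableSketch(received map[uint64]bool) error {
	if len(received) >= numberOfColumns/2 {
		return nil
	}
	return errors.New("not enough data columns received to guarantee availability yet")
}

func main() {
	received := make(map[uint64]bool)
	for i := uint64(0); i < 64; i++ {
		received[i] = true
	}
	fmt.Println(isDataColumnsAvailableSketch(received)) // <nil>
}
```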
* DeepSource: Pass heavy objects by pointers. * `removeBlockFromQueue`: Remove redundant error checking. * `fetchBlobsFromPeer`: Use same variable for `append`. * Remove unused arguments. * Combine types. * `Persist`: Add documentation. * Remove unused receiver. * Remove duplicated import. * Stop using both pointer and value receiver at the same time. * `verifyAndPopulateColumns`: Remove unused parameter. * Stop using an empty slice literal to declare a variable.
* `SendDataColumnsByRangeRequest`: Add some new fields in logs. * `BlobStorageSummary`: Implement `HasDataColumnIndex` and `AllDataColumnsAvailable`. * Implement `fetchDataColumnsFromPeers`. * `fetchBlobsFromPeer`: Return only one error.
* Fix the obvious... * Data columns sampling: Modify logging. * `waitForChainStart`: Make it threadsafe - Only wait once. * Sampling: Wait for chain start before running the sampling. Reason: `newDataColumnSampler1D` needs `s.ctxMap`. `s.ctxMap` is only set when the chain is started. Previously `waitForChainStart` was only called in `s.registerHandlers`, itself called in a go-routine. ==> We had a race condition here: Sometimes `newDataColumnSampler1D` was called after `s.ctxMap` was set, sometimes not. * Address Nishant's comments. * Sampling: Improve logging. * `waitForChainStart`: Remove `chainIsStarted` check.
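Guarding the chain-start-dependent state with `sync.Once` is the standard way to make such a wait threadsafe and one-shot: every caller blocks until the first call has set the state, so `newDataColumnSampler1D` can no longer observe an unset `s.ctxMap`. A minimal sketch, not Prysm's actual service:

```go
package main

import (
	"fmt"
	"sync"
)

// service sketches guarding start-dependent state (here ctxMap) so that it
// is initialized exactly once, and every caller of waitForChainStart blocks
// until that initialization has happened.
type service struct {
	startOnce  sync.Once
	chainStart <-chan struct{} // fired by the blockchain service at chain start
	ctxMap     map[string]int  // stands in for the fork-digest context map
}

// waitForChainStart blocks until chain start has been observed and ctxMap is
// set. sync.Once both deduplicates the initialization and blocks concurrent
// callers until the first call returns.
func (s *service) waitForChainStart() {
	s.startOnce.Do(func() {
		<-s.chainStart
		s.ctxMap = map[string]int{"genesis": 0}
	})
}

func main() {
	chainStart := make(chan struct{})
	s := &service{chainStart: chainStart}

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ { // e.g. the sampler and registerHandlers goroutines
		wg.Add(1)
		go func() { defer wg.Done(); s.waitForChainStart() }()
	}
	close(chainStart)
	wg.Wait()
	fmt.Println(len(s.ctxMap)) // 1
}
```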
* `sendPingRequest`: Add some comments. * `sendPingRequest`: Replace `stream.Conn().RemotePeer()` by `peerID`. * `pingHandler`: Add comments. * `sendMetaDataRequest`: Add comments and implement a unique test. * Gather `SchemaVersion`s in the same `const` definition. * Define `SchemaVersionV3`. * `MetaDataV1`: Fix comment. * Proto: Define `MetaDataV2`. * `MetaDataV2`: Generate SSZ. * `newColumnSubnetIDs`: Use smaller lines. * `metaDataHandler` and `sendMetaDataRequest`: Manage `MetaDataV2`. * `RefreshPersistentSubnets`: Refactor tests (no functional change). * `RefreshPersistentSubnets`: Refactor and add comments (no functional change). * `RefreshPersistentSubnets`: Compare cache with both ENR & metadata. * `RefreshPersistentSubnets`: Manage peerDAS. * `registerRPCHandlersPeerDAS`: Register `RPCMetaDataTopicV3`. * `CustodyCountFromRemotePeer`: Retrieve the count from metadata, then fall back to the ENR, then to the default value. * Update beacon-chain/sync/rpc_metadata.go Co-authored-by: Nishant Das <[email protected]> * Fix duplicate case. * Remove version testing. * `debug.proto`: Stop breaking ordering. --------- Co-authored-by: Nishant Das <[email protected]>
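The resulting custody-count resolution order is: the peer's metadata (now that `MetaDataV2` carries a custody subnet count), then its ENR, then the default. A hedged Go sketch with illustrative field names, not the generated proto types:

```go
package main

import "fmt"

const defaultCustodySubnetCount = 1 // illustrative spec minimum

// metadataV2 sketches the V2 metadata shape: V1's fields plus a custody
// subnet count. Field names are illustrative.
type metadataV2 struct {
	SeqNumber          uint64
	Attnets            []byte
	Syncnets           []byte
	CustodySubnetCount uint64
}

// custodyCountFromRemotePeer prefers the count from the peer's metadata,
// then falls back to its ENR, then to the default value.
func custodyCountFromRemotePeer(md *metadataV2, enrCount *uint64) uint64 {
	if md != nil && md.CustodySubnetCount > 0 {
		return md.CustodySubnetCount
	}
	if enrCount != nil {
		return *enrCount
	}
	return defaultCustodySubnetCount
}

func main() {
	fromENR := uint64(4)
	fmt.Println(custodyCountFromRemotePeer(nil, &fromENR))                                // 4: ENR fallback
	fmt.Println(custodyCountFromRemotePeer(&metadataV2{CustodySubnetCount: 8}, &fromENR)) // 8: metadata wins
	fmt.Println(custodyCountFromRemotePeer(nil, nil))                                     // 1: default
}
```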
* Persist All Changes * Fix All Tests * Fix Build * Fix Build * Fix Build * Fix Test Again * Add missing verification * Add Test Cases for Data Column Validation * Fix comments for methods * Fix comments for methods * Fix Test * Manu's Review
) * `parseIndices`: `O(n**2)` ==> `O(n)`. * PeerDAS: Implement `/eth/v1/beacon/blob_sidecars/{block_id}`. * Update beacon-chain/core/peerdas/helpers.go Co-authored-by: Sammy Rosso <[email protected]> * Rename some functions. * `Blobs`: Fix empty slice. * `recoverCellsAndProofs` --> Move function in `beacon-chain/core/peerdas`. * peerDAS helpers: Add missing tests. * Implement `CustodyColumnCount`. * `RecoverCellsAndProofs`: Remove useless argument `columnsCount`. * Tests: Add cleanups. * `blobsFromStoredDataColumns`: Reconstruct if needed. * Make deepsource happy. * Beacon API: Use provided indices. * Make deepsource happier. --------- Co-authored-by: Sammy Rosso <[email protected]>
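The `parseIndices` change replaces a rescan of the accepted slice for every parsed value with a seen-map, turning duplicate detection into O(1) per element. A hedged sketch of the O(n) version, with bounds and error handling simplified:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseIndicesSketch parses and deduplicates query indices in O(n) by
// tracking seen values in a map instead of rescanning the output slice
// for every element.
func parseIndicesSketch(raw []string, max uint64) ([]uint64, error) {
	seen := make(map[uint64]bool, len(raw))
	out := make([]uint64, 0, len(raw))
	for _, s := range raw {
		v, err := strconv.ParseUint(s, 10, 64)
		if err != nil {
			return nil, fmt.Errorf("invalid index %q: %w", s, err)
		}
		if v >= max {
			return nil, fmt.Errorf("index %d out of range", v)
		}
		if seen[v] {
			continue // O(1) duplicate check instead of scanning the slice
		}
		seen[v] = true
		out = append(out, v)
	}
	return out, nil
}

func main() {
	idx, err := parseIndicesSketch([]string{"0", "2", "2", "5"}, 6)
	fmt.Println(idx, err) // [0 2 5] <nil>
}
```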
* Update go.yml * Disable mnd * Update .golangci.yml * Update go.yml * Update go.yml * Update .golangci.yml * Update go.yml * Fix Lint Issues * Remove comment * Update .golangci.yml
* Update values * Update Spec To v1.5.0-alpha.5 * Fix Discovery Tests * Hardcode Subnet Count For Tests * Fix All Initial Sync Tests * Gazelle * Less Chaotic Service Initialization * Gazelle
* Use Data Column Validation Everywhere * Fix Build * Fix Lint * Fix Clock Synchronizer * Fix Panic
* Add Changes for Uint8 Csc * Fix Build * Fix Build for Sync * Fix Discovery Test