feat: Add scraping of Sealevel transactions into E2E tests #4850
base: main
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@ Coverage Diff @@
##            main   #4850   +/- ##
=======================================
  Coverage   74.60%  74.60%
=======================================
  Files          103     103
  Lines         1516    1516
  Branches       195     195
=======================================
  Hits          1131    1131
  Misses         364     364
  Partials        21      21
"block which should contain {} transaction does not contain any after filtering", | ||
self.transaction_description, | ||
)))?; | ||
|
||
// Construct log index which will be increasing relative to block |
fwiw I view us as being able to define this however we want per VM, and don't think we necessarily need to replicate the EVM behavior here, but happy as it is
"block which should contain {} transaction does not contain any after filtering", | ||
self.transaction_description, | ||
)))?; | ||
|
||
// Construct log index which will be increasing relative to block | ||
let log_index = U256::from((transaction_index << 8) + (program_index as usize)); |
Noting that because the log index is used to determine "uniqueness" of an IGP payment, changing this may have some consequences. If we re-index a payment and its log index differs, we'll treat it as two separate payments:
- scraper: https://github.com/hyperlane-xyz/hyperlane-monorepo/blob/main/rust/main/agents/scraper/src/db/payment.rs#L58
- other agents: https://github.com/hyperlane-xyz/hyperlane-monorepo/blob/main/rust/main/hyperlane-base/src/db/rocks/hyperlane_db.rs#L169-L181
What I'm not sure about is whether we'd expect to actually double-process any gas payments. On the relayer/validator side I'm pretty sure this won't happen, but I'm unsure about the scraper.
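To make the concern concrete, here is a minimal standalone sketch (the helper name log_index and the main function are illustrative, not code from this PR) showing that the value quoted above depends only on the transaction's and program's positions within the block. Re-indexing the same block therefore reproduces the same log index; only a change to the formula itself would make an already-stored payment look "new" to the uniqueness check.

    // Minimal sketch, not PR code: the Sealevel log index is derived purely
    // from positions within the block, mirroring the quoted formula above.
    fn log_index(transaction_index: usize, program_index: u8) -> u64 {
        // Upper bits: position of the transaction in the block.
        // Lower 8 bits: position of the emitting program within the transaction.
        ((transaction_index as u64) << 8) + program_index as u64
    }

    fn main() {
        // Re-indexing the same block yields the same positions, hence the same value.
        assert_eq!(log_index(3, 2), log_index(3, 2));
        // Transaction 3, program 2 -> 0x0302.
        assert_eq!(log_index(3, 2), 0x0302);
    }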
@@ -448,19 +448,19 @@ fn main() -> ExitCode {
Should we add some invariants to the e2e test to make sure that these Sealevel events / txs are being indexed, similar to what we do for the EVM ones?
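One rough sketch of what such an invariant could look like, assuming the e2e harness can reach the scraper's Postgres instance. The connection string, the gas_payment table and domain column, and the helper name are assumptions for illustration, not the scraper's actual schema or existing test helpers.

    // Rough sketch of a possible invariant check; table/column names and the
    // connection string are assumptions, not the scraper's actual schema.
    use postgres::{Client, NoTls};

    fn assert_sealevel_payments_scraped(expected: i64, sealevel_domain: i32) {
        let mut client = Client::connect("postgres://postgres@localhost:5432/postgres", NoTls)
            .expect("scraper database should be reachable from the e2e harness");
        let row = client
            .query_one(
                "SELECT COUNT(*) FROM gas_payment WHERE domain = $1",
                &[&sealevel_domain],
            )
            .expect("count query should succeed");
        let scraped: i64 = row.get(0);
        assert!(
            scraped >= expected,
            "expected at least {expected} scraped Sealevel gas payments, found {scraped}"
        );
    }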
@@ -462,6 +462,22 @@ const DOMAINS: &[RawDomain] = &[
    is_test_net: true,
    is_deprecated: false,
},
Something I'm wondering - is the scraper indexing the SVM IGP payments and deliveries in a sequence-aware fashion, or using the rate-limited / watermark approach?
The relayer indexes SVM IGP payments in a sequence-aware manner, but other event types with the rate-limited / watermark approach. We'll want to do the same in the scraper, otherwise I'd expect bugs to occur.
For reference, this is how we do it in the relayer:
We create ContractSyncs here
let interchain_gas_payment_syncs = settings
hyperlane-monorepo/rust/main/hyperlane-base/src/settings/base.rs
Lines 205 to 250 in 5db46bd
    pub async fn contract_syncs<T, D>(
        &self,
        domains: impl Iterator<Item = &HyperlaneDomain>,
        metrics: &CoreMetrics,
        sync_metrics: &ContractSyncMetrics,
        dbs: HashMap<HyperlaneDomain, Arc<D>>,
    ) -> Result<HashMap<HyperlaneDomain, Arc<dyn ContractSyncer<T>>>>
    where
        T: Indexable + Debug + Send + Sync + Clone + Eq + Hash + 'static,
        SequenceIndexer<T>: TryFromWithMetrics<ChainConf>,
        D: HyperlaneLogStore<T>
            + HyperlaneSequenceAwareIndexerStoreReader<T>
            + HyperlaneWatermarkedLogStore<T>
            + 'static,
    {
        // TODO: parallelize these calls again
        let mut syncs = vec![];
        for domain in domains {
            let sync = match T::indexing_cursor(domain.domain_protocol()) {
                CursorType::SequenceAware => self
                    .sequenced_contract_sync(
                        domain,
                        metrics,
                        sync_metrics,
                        dbs.get(domain).unwrap().clone(),
                    )
                    .await
                    .map(|r| r as Arc<dyn ContractSyncer<T>>)?,
                CursorType::RateLimited => self
                    .watermark_contract_sync(
                        domain,
                        metrics,
                        sync_metrics,
                        dbs.get(domain).unwrap().clone(),
                    )
                    .await
                    .map(|r| r as Arc<dyn ContractSyncer<T>>)?,
            };
            syncs.push(sync);
        }
        syncs
            .into_iter()
            .map(|i| Ok((i.domain().clone(), i)))
            .collect()
    }
This calls sequenced_contract_sync if the CursorType is SequenceAware (the case for SVM IGP payments), and a watermark_contract_sync otherwise.
The db used by the sequence-aware cursors is also required to impl HyperlaneSequenceAwareIndexerStoreReader:
hyperlane-monorepo/rust/main/hyperlane-base/src/contract_sync/cursors/sequence_aware/forward.rs
Line 31 in 5db46bd
db: Arc<dyn HyperlaneSequenceAwareIndexerStoreReader<T>>,

impl HyperlaneSequenceAwareIndexerStoreReader<HyperlaneMessage> for HyperlaneSqlDb {
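The impl quoted above covers HyperlaneMessage. To illustrate what the scraper's store has to provide for IGP payments as well, here is a self-contained sketch: the trait below is a simplified, synchronous stand-in for hyperlane-base's HyperlaneSequenceAwareIndexerStoreReader (the real trait is async and returns Results), and the in-memory store stands in for HyperlaneSqlDb. The essential requirement is that the store can answer which log sits at sequence N and in which block, so a sequence-aware cursor can resume without gaps or duplicates.

    // Self-contained sketch; the trait is a simplified local stand-in for
    // HyperlaneSequenceAwareIndexerStoreReader, not the real hyperlane-base trait.
    use std::collections::BTreeMap;

    trait SequenceAwareStoreReader<T> {
        fn retrieve_by_sequence(&self, sequence: u32) -> Option<T>;
        fn retrieve_log_block_number_by_sequence(&self, sequence: u32) -> Option<u64>;
    }

    // Toy in-memory store standing in for the scraper's SQL-backed store;
    // maps sequence -> (payment amount, block number).
    struct InMemoryPaymentStore {
        payments: BTreeMap<u32, (u64, u64)>,
    }

    impl SequenceAwareStoreReader<u64> for InMemoryPaymentStore {
        fn retrieve_by_sequence(&self, sequence: u32) -> Option<u64> {
            self.payments.get(&sequence).map(|(amount, _)| *amount)
        }
        fn retrieve_log_block_number_by_sequence(&self, sequence: u32) -> Option<u64> {
            self.payments.get(&sequence).map(|(_, block)| *block)
        }
    }

    fn main() {
        let store = InMemoryPaymentStore {
            payments: BTreeMap::from([(0, (1_000, 42)), (1, (2_500, 43))]),
        };
        // A sequence-aware cursor resumes from the highest stored sequence and its block.
        assert_eq!(store.retrieve_log_block_number_by_sequence(1), Some(43));
        // Sequence 2 has not been indexed yet, so the cursor knows to fetch it next.
        assert_eq!(store.retrieve_by_sequence(2), None);
    }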
Description
Add scraping of Sealevel transactions into E2E tests
Drive-by changes
Related issues
Backward compatibility
Yes
Testing
Manual run of Scraper
E2E tests for Ethereum and Sealevel