Project Idea: Static Analysis to predict access list #26

Open
pipermerriam opened this issue Sep 24, 2021 · 5 comments
Comments

@pipermerriam (Contributor) commented Sep 24, 2021

When a client on the Portal Network executes something like eth_call or eth_estimateGas, it will typically be doing so without the actual state data, and thus will need to retrieve state on demand from the network.

The simple initial implementation would pause EVM execution each time state data is accessed, retrieve that data from the network, and then proceed with execution. The drawback of this approach is that all of the data must be retrieved serially, one piece at a time. We can do better.

Given a transaction payload and some collection of known state data (which might be empty), we should be able to predict a significant portion of the accounts and storage slots that will be accessed.

Let's pretend we have such a function:

def divine_access_list(transaction, known_state) -> AccessList:
    ...

At the very beginning of the transaction we can "predict" that the tx.sender and tx.to accounts will be needed, along with the bytecode for the tx.to account. Once these values have been retrieved, we can call the function again, but this time with known_state containing this account information and bytecode.
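As a minimal sketch of that base case (the Transaction shape and the (kind, address) tuple encoding here are illustrative assumptions, not a settled interface):

from dataclasses import dataclass

@dataclass
class Transaction:
    sender: str  # hex-encoded address
    to: str      # hex-encoded address

def initial_access_list(tx: Transaction) -> set:
    # Before any bytecode analysis, we already know the sender and
    # recipient accounts will be touched, along with the recipient's code.
    return {
        ("account", tx.sender),
        ("account", tx.to),
        ("code", tx.to),
    }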

Now, suppose that the bytecode was something like:

PUSH1 0x00 SLOAD ...

From this, we can perform some very simple static analysis to determine that we will need the value at storage slot 0.
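A first pass at that analysis could be as simple as scanning the bytecode for a PUSH of a constant immediately followed by SLOAD (a minimal sketch; the opcode values are from the EVM spec, everything else is illustrative):

PUSH1, PUSH32, SLOAD = 0x60, 0x7F, 0x54

def constant_sload_slots(code: bytes) -> set:
    # Collect storage slots referenced by the pattern: PUSHn <const> SLOAD
    slots = set()
    pc = 0
    while pc < len(code):
        op = code[pc]
        if PUSH1 <= op <= PUSH32:
            width = op - PUSH1 + 1
            value = int.from_bytes(code[pc + 1 : pc + 1 + width], "big")
            if pc + 1 + width < len(code) and code[pc + 1 + width] == SLOAD:
                slots.add(value)
            pc += 1 + width
        else:
            pc += 1
    return slots

For the bytecode above, constant_sload_slots(bytes.fromhex("600054")) returns {0}.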

This logic could then be run in a loop that looks something like this:

# start with no known state
known_state = {}

while True:
    # figure out which state we're able to determine that we need
    state_data_to_retrieve = divine_access_list(tx, known_state)

    # if we cannot determine any additional state needs from static
    # analysis then maybe we're at a dead end and can only determine
    # the remaining needs via execution
    if not state_data_to_retrieve:
        break

    # fetch the state from wherever we're getting it from and
    # merge it in with the state we may already have
    new_state_data = retrieve_state(state_data_to_retrieve)
    known_state.update(new_state_data)

This could be run alongside EVM execution: one process passively determines and retrieves the values we are going to need, while the other actually performs the EVM execution.
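For instance, the prefetcher could run in a background thread that shares known_state with the executing EVM (a toy sketch reusing the hypothetical divine_access_list and retrieve_state from above; the locking scheme is an assumption):

import threading

def prefetch_worker(tx, known_state, lock, stop_event):
    # Passively predict and fetch state while the EVM executes elsewhere.
    while not stop_event.is_set():
        with lock:
            needed = divine_access_list(tx, known_state)
        if not needed:
            break  # static analysis is stuck; execution takes over
        fetched = retrieve_state(needed)
        with lock:
            known_state.update(fetched)

# The EVM runs in the main thread, reading known_state under the same
# lock and falling back to on-demand retrieval on a cache miss.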

So the project would be to write a version of this predictive engine, and then to run some experiments with real transactions on mainnet to measure how effective it can be at predicting what state will be needed during execution.

@pipermerriam (Contributor, Author)

In theory, this type of predictive engine could actually be used to speed up EVM execution in mainnet execution clients, since it could warm the state cache, reducing the number of times that EVM execution has to hit disk to retrieve state.

@norswap commented Sep 27, 2021

(Disclaimer: I'm from cohort-zero, I'm not going to tackle this project, but I'd be happy to talk to you / help you if you do. Here are just some thoughts I had about this idea.)

Piper, tell me if I'm getting this straight: in normal execution, you would end up doing mostly the same disk accesses, but you would have to wait for execution in between two disk accesses. Using the diviner, you can "look ahead" (because you don't need to wait for a disk access to complete and execution to resume before knowing the next disk access) and build up a queue of disk accesses. Therefore, you maximize disk utilization by avoiding the downtime that you would get in normal execution between the end of one disk access and the start of the next.

And if you have multiple disks, you can use this to prefetch in parallel.

Note that go-ethereum has a prefetcher component. I don't know how it works, but it might be worth looking into.

Given the description of the state prefetcher, I wonder if it isn't achieving the same thing, but only in the case where there are multiple blocks to process:

// statePrefetcher is a basic Prefetcher, which blindly executes a block on top
// of an arbitrary state with the goal of prefetching potentially useful state
// data from disk before the main block processor start executing.

The trie prefetcher, in particular, is about pulling the internal Merkle trie nodes from disk (you only need to pull the leaves to execute the block).

Regarding the static analysis part, you probably want to start by looking at storage-access opcodes (e.g. SLOAD) that are used with constants. Then you might want to perform a few transformations on the opcode sequence to increase the number of such constant accesses. A big one is constant folding: if a constant is added to another constant, the result is a constant, and if that result is used with SLOAD, that's also a storage location you can prefetch.
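A toy sketch of that folding pass over a disassembled opcode sequence (the tuple encoding of ops is an illustrative assumption): track a symbolic stack where entries are either known constants or None, fold ADD of two constants, and collect SLOADs whose key is constant.

def fold_and_collect(ops):
    # ops: list like [("PUSH", 0), ("PUSH", 1), ("ADD",), ("SLOAD",)]
    stack, slots = [], set()
    for op, *args in ops:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b = stack.pop() if stack else None
            a = stack.pop() if stack else None
            # fold two known constants (EVM arithmetic is mod 2**256)
            stack.append((a + b) % 2**256 if a is not None and b is not None else None)
        elif op == "SLOAD":
            key = stack.pop() if stack else None
            if key is not None:
                slots.add(key)
            stack.append(None)  # the loaded value is statically unknown
        else:
            stack.clear()  # unmodeled opcode: give up on stack precision
    return slots

Here fold_and_collect([("PUSH", 0), ("PUSH", 1), ("ADD",), ("SLOAD",)]) returns {1}.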

@pipermerriam (Contributor, Author)

Your assessment/description appears correct, though the thing I'm actually targeting is the Portal Network because, if you swap "disk access" for "retrieval from the DHT network", all of a sudden your latency numbers go up 1-3 orders of magnitude. Similarly, since network retrievals can happen concurrently, we also get big gains when we can retrieve multiple things in parallel.

Using this in an actual execution client has the potential to boost execution speed, but clients are already pretty heavily optimized. Using it in "beam sync", however, would very likely also be beneficial.

@alexchenzl (Contributor)

@pipermerriam, I'd like to take this project. It sounds very interesting. I'm planning to implement it based on the Geth codebase and then do the analysis.

@carver commented Oct 9, 2021

I suspect we'll want this analysis to be expansive, rather than trying to identify a "most likely" access pattern. For example, if there's a conditional branch based on the absence or presence of a storage slot, follow both code branches and look up the possible accesses either way. Obviously this starts to fall apart when you load an address out of a storage slot and do something with it.
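A toy sketch of that branch-following pass (assumptions: only constant jump targets are followed, via the common pattern PUSHn <dest> JUMPI; the opcode values are from the EVM spec):

JUMPI, SLOAD, STOP = 0x57, 0x54, 0x00
PUSH1, PUSH32 = 0x60, 0x7F

def explore_both_branches(code: bytes) -> set:
    # Walk the bytecode with a worklist; at each JUMPI with a statically
    # known destination, explore both the jump target and the fall-through,
    # collecting constant SLOAD keys along every path visited.
    slots, seen, worklist = set(), set(), [0]
    while worklist:
        pc, last_push = worklist.pop(), None
        while pc < len(code) and pc not in seen:
            seen.add(pc)
            op = code[pc]
            if PUSH1 <= op <= PUSH32:
                width = op - PUSH1 + 1
                last_push = int.from_bytes(code[pc + 1 : pc + 1 + width], "big")
                pc += 1 + width
                continue
            if op == SLOAD and last_push is not None:
                slots.add(last_push)
            elif op == JUMPI and last_push is not None:
                worklist.append(last_push)  # queue the taken branch
            elif op == STOP:
                break
            last_push = None
            pc += 1
    return slots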

I think it would also be cool to have "active" analysis that might load a real storage slot from the network in order to continue analyzing possible code paths.
