
Fork choice fixes 5 #1381

Merged - 5 commits merged into devel from fork-choice-fixes-5 on Jul 28, 2020
Conversation

arnetheduck (Member)

No description provided.

With fork choice updated, the attestation pool only needs to keep track
of attestations that will eventually end up in blocks - we can thus
limit the horizon of attestations that we keep more aggressively.

To get here, we expose getEpochRef, which returns the EpochRef metadata for a
particular epoch, and make sure to populate it when a block is added
- this ensures that state rewinds during block addition are minimized.

In addition, we'll use the target root/epoch when validating
attestations - this helps minimize the number of different states that
we need to rewind to, in general.
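
To make the first paragraph above concrete, here is a minimal sketch of the kind of horizon-based pruning it describes. The pool layout, Slot type and pruneBefore name are simplified stand-ins, not the PR's code; the SLOTS_PER_EPOCH window follows the phase 0 rule that an attestation can only be included in a block within an epoch of its slot.

```nim
import std/tables

const SLOTS_PER_EPOCH = 32'u64

type
  Slot = uint64
  AttestationPool = object
    bySlot: Table[Slot, seq[string]]  # slot -> attestations (placeholder payload)

proc pruneBefore(pool: var AttestationPool, wallSlot: Slot) =
  ## Drop attestations that can no longer end up in any future block:
  ## once the wall clock is more than SLOTS_PER_EPOCH past an attestation's
  ## slot, no block may include it, so the pool can forget it.
  let horizon =
    if wallSlot > SLOTS_PER_EPOCH: wallSlot - SLOTS_PER_EPOCH
    else: Slot(0)
  var stale: seq[Slot]
  for slot in pool.bySlot.keys:
    if slot < horizon:
      stale.add slot
  for slot in stale:
    pool.bySlot.del slot
```
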
arnetheduck requested a review from tersec - July 28, 2020 06:39
@@ -60,7 +60,7 @@ func parent*(bs: BlockSlot): BlockSlot =
     slot: bs.slot - 1
   )
 
-func populateEpochCache(state: BeaconState): EpochRef =
+func init*(T: type EpochRef, state: BeaconState): T =
Contributor:

There might be some value in having the more easily grepped populateEpochCache(), but init()'s more consistent with how to use it. So, no opinion, I suppose.
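
For context, a tiny illustration of the init(T: type ...) constructor convention the comment refers to; the EpochRef body here is just a placeholder.

```nim
type EpochRef = ref object
  epoch: uint64

func init(T: type EpochRef, epoch: uint64): T =
  ## Constructor convention: called as EpochRef.init(...), mirroring how
  ## other types in the codebase are constructed.
  T(epoch: epoch)

let cache = EpochRef.init(3)
echo cache.epoch
```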

@@ -55,8 +57,11 @@ proc addResolvedBlock(
   let
     blockRoot = signedBlock.root
     blockRef = BlockRef.init(blockRoot, signedBlock.message)
-  blockRef.epochsInfo = filterIt(parent.epochsInfo,
-    it.epoch + 1 >= state.data.get_current_epoch())
+  if parent.slot.compute_epoch_at_slot() == blockRef.slot.compute_epoch_at_slot:
Contributor:

This seems a bit more aggressive in discarding information -- is that in response to a particular bug, or otherwise motivated?

arnetheduck (Member, author):

Well, it's both less and more aggressive - if parent.epoch == state.data.get_current_epoch, parent.epochsInfo[0] would get filtered out (by the +1, if I understand it correctly), which seems wasteful.

The other epochsInfo entries seemed more... fragile: if we add blockRef to the history, the empty-slot epoch transition on the next epoch boundary might change, so the empty slots of the parent are not necessarily the same as those of the child. Murky.

tersec (Contributor) - Jul 28, 2020:

Seems reasonable. Better to lose some potential performance (which in practice will be only a once-per-epoch thing, because once it gets to a future epoch, it'll get filled in again), and ensure correctness.
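
A standalone sketch of the condition settled on above, using hypothetical simplified types rather than the PR's code: the parent's epoch cache is only carried over when parent and child fall in the same epoch, since the cache may diverge across an epoch boundary when empty slots are involved.

```nim
const SLOTS_PER_EPOCH = 32'u64

type
  Slot = uint64
  Epoch = uint64
  EpochRef = ref object
    epoch: Epoch
  BlockRef = ref object
    slot: Slot
    epochsInfo: seq[EpochRef]

func compute_epoch_at_slot(slot: Slot): Epoch =
  slot div SLOTS_PER_EPOCH

proc inheritEpochInfo(child, parent: BlockRef) =
  ## Reuse the parent's epoch cache only within the same epoch; otherwise
  ## start fresh and let the cache be rebuilt (a once-per-epoch cost).
  if compute_epoch_at_slot(parent.slot) == compute_epoch_at_slot(child.slot):
    child.epochsInfo = parent.epochsInfo
```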

tersec (Contributor) left a comment:

I have questions about a couple of the EpochRef-related changes, but nothing too serious.

mratsim (Contributor) left a comment:

LGTM, small cosmetic comments that can be done in a subsequent PR.

-proc addAttestation*(pool: var AttestationPool, attestation: Attestation) =
+proc addAttestation*(pool: var AttestationPool,
+                     attestation: Attestation,
+                     wallSlot: Slot) =
Contributor:

Have yet to read the rest, but should wallSlot be a distinct type from Slot, if mixing both may lead to subtle bugs?

arnetheduck (Member, author):

Well... slots are basically like minutes; I don't think a specific type is motivated. This code is a little wrong though: it should be a range of slots to handle clock disparity, but that is for a different PR.
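
For illustration, the distinct-type suggestion from the comment above might look like the following; the names are hypothetical and this is not part of the PR.

```nim
type
  Slot = uint64
  WallSlot = distinct Slot   # wall-clock slot, not interchangeable with Slot

func toSlot(w: WallSlot): Slot = Slot(w)  # conversions must be explicit

proc addAttestationSketch(attestationSlot: Slot, wallSlot: WallSlot) =
  # Passing a plain Slot where a WallSlot is expected (or vice versa) is now
  # a compile-time error; any mixing has to go through an explicit conversion.
  if attestationSlot <= wallSlot.toSlot:
    discard
```

As the author notes, the eventual fix here is a range of slots that accounts for clock disparity rather than a new type.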

-  justifiedEpoch = shortLog(dag.headState.data.data.current_justified_checkpoint.epoch),
-  finalizedEpoch = shortLog(dag.headState.data.data.finalized_checkpoint.epoch)
+  justified = shortLog(dag.headState.data.data.current_justified_checkpoint),
+  finalized = shortLog(dag.headState.data.data.finalized_checkpoint)
Contributor:

I think we added chronicles.formatIt for all those types

arnetheduck (Member, author):

Er, that's one piece of magic too much for my taste, especially since the rest of the code is not relying on it... I prefer the explicitness, and so does most of the code, it seems - formatIt relies on the right imports being there, which is fragile - this is auditable at least (ping @zah).

Contributor:

Ultimately, relying on formatIt is better, because it will affect how these values are displayed even when they appear as fields within other objects. The support for this is coming soon with the following PR:
status-im/nim-chronicles#82
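
A minimal sketch of the formatIt approach under discussion, assuming chronicles' formatIt(Type): body form where `it` is the value being logged; the Checkpoint type and shortLog helper below are simplified stand-ins for the real ones.

```nim
import chronicles

type
  Checkpoint = object
    epoch: uint64
    root: string

func shortLog(c: Checkpoint): string =
  $c.epoch & ":" & c.root

# Register a custom formatter: any log statement that includes a Checkpoint
# would then use shortLog automatically, and per nim-chronicles#82 this would
# eventually also apply when a Checkpoint appears as a field of another object.
chronicles.formatIt(Checkpoint): shortLog(it)

proc demo() =
  let justified = Checkpoint(epoch: 3, root: "0xabcd")
  info "Fork choice updated", justified = justified

demo()
```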

tersec merged commit 157ddd2 into devel - Jul 28, 2020
tersec deleted the fork-choice-fixes-5 branch - July 28, 2020 13:54