Prod Release 25/06/24 #832

Merged: 10 commits into stable on Jun 24, 2024

Conversation

morgsmccauley and others added 10 commits June 20, 2024 08:35
This PR updates Coordinator to handle Data Layer provisioning, removing
the implicit step from Runner. Provisioning itself is still completed
within Runner, but Coordinator will trigger and monitor it.
Functionally, provisioning is the same, but there are some subtle
differences around how it is handled:
- Provisioning will now happen as soon as the Indexer is registered,
rather than when the first matched block is executed.
- Block Streams/Executors will not be started until provisioning succeeds;
neither will start while provisioning is pending or after it has failed.

A `provisioned_state` enum has been added to the Indexer state within
Redis. This is used to persist which stage of provisioning the Data Layer
is at, and to ensure we only provision once.
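A minimal sketch of how that state might be modelled on the Coordinator side; the enum name follows the description above, but the variants, fields, and serde serialization are assumptions:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical shape of the `provisioned_state` persisted to Redis; variant
// names are assumptions based on the stages described above.
#[derive(Debug, Clone, Copy, PartialEq, Serialize, Deserialize)]
pub enum ProvisionedState {
    /// A provisioning task has been triggered but has not yet completed.
    Provisioning,
    /// The Data Layer is ready; Block Streams/Executors may be started.
    Provisioned,
    /// Provisioning failed; nothing should be started.
    Failed,
}

// Hypothetical Indexer state record, also an assumption.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IndexerState {
    pub account_id: String,
    pub function_name: String,
    pub provisioned_state: ProvisionedState,
}
```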

## Concerns with current implementation
Overall, I'm not happy with the implementation here, but feel it is the
best we can do given the current structure of Coordinator. So as not to
block this feature, I decided to go forward with this approach, and will
create a ticket to refactor/update later.

Provisioning is triggered within the "new" handler, and then polled
within the "existing" handler, which seems a little awkward. The
separated handling is necessary because no operation within the control
loop (i.e. `Synchroniser`) should block, as that would stall
synchronisation for all other Indexers. So we need to trigger
provisioning initially, and then poll for completion on each subsequent
pass of the control loop, as sketched below.
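A rough sketch of that split, reusing the hypothetical `IndexerState`/`ProvisionedState` types above; the client trait and method names are stand-ins, not Coordinator's real API:

```rust
// Hypothetical stand-ins for Coordinator's Data Layer client.
#[derive(Clone, Copy, PartialEq)]
pub enum TaskStatus { Pending, Complete, Failed }

pub trait DataLayerClient {
    fn start_provisioning_task(&self, account_id: &str, function_name: &str);
    fn get_task_status(&self, account_id: &str, function_name: &str) -> TaskStatus;
}

// "new" handler: trigger once, never wait, so the shared loop stays unblocked.
pub fn handle_new(state: &mut IndexerState, client: &impl DataLayerClient) {
    client.start_provisioning_task(&state.account_id, &state.function_name);
    state.provisioned_state = ProvisionedState::Provisioning;
}

// "existing" handler: poll for completion on each pass of the control loop.
pub fn handle_existing(state: &mut IndexerState, client: &impl DataLayerClient) {
    if state.provisioned_state == ProvisionedState::Provisioning {
        match client.get_task_status(&state.account_id, &state.function_name) {
            TaskStatus::Complete => state.provisioned_state = ProvisionedState::Provisioned,
            TaskStatus::Failed => state.provisioned_state = ProvisionedState::Failed,
            TaskStatus::Pending => {} // still in flight; check again next loop
        }
    }
    // Block Streams/Executors are only started once `Provisioned`.
}
```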

I feel we have outgrown the current control loop, and am planning to
refactor it later. Rather than have a single control loop for _all_
Indexers, I'm thinking we can have dedicated loops for each of them: we
could spawn a new task for each Indexer, which then manages its own
lifecycle. Each Indexer is then free to wait for as long as it wants
without impacting other Indexers, which would allow us to handle the
blocking provisioning step much more elegantly.
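A sketch of that proposed shape, assuming the hypothetical types above plus Tokio; each Indexer owns its own spawned task, so a blocking step only stalls that Indexer:

```rust
use std::time::Duration;
use tokio::time::sleep;

// Proposed, not current, structure: one spawned lifecycle task per Indexer.
pub fn spawn_indexer_lifecycle<C>(mut state: IndexerState, client: C)
where
    C: DataLayerClient + Send + 'static,
{
    tokio::spawn(async move {
        handle_new(&mut state, &client);
        loop {
            handle_existing(&mut state, &client);
            if state.provisioned_state == ProvisionedState::Failed {
                break; // give up on this Indexer; all others are unaffected
            }
            // Waiting here only delays this Indexer's own loop.
            sleep(Duration::from_secs(1)).await;
        }
    });
}
```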
…817)

- Need to write the state to move it out of "new" and into "existing"
- Added some logs to `DataLayerService`
This PR removes Data Layer resources on Indexer Delete. To achieve this,
the following has been added:
- `Provisioner.deprovision()` method which removes: schema, cron jobs,
and if necessary, Hasura source, database, and role
- `DataLayerService.StartDeprovisioningTask` gRPC method
- Calling the above from Coordinator within the delete lifecycle hook (see
the sketch below)
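A minimal sketch of the Coordinator-side hook; all names here are hypothetical, and at this point tasks were still keyed by `accountId`/`functionName` (changed in a later commit below):

```rust
// Hypothetical client trait for the new gRPC method; names are illustrative.
pub trait DeprovisionClient {
    fn start_deprovisioning_task(&self, account_id: &str, function_name: &str);
}

// Called from the Indexer delete lifecycle hook in Coordinator. Runner's
// `Provisioner.deprovision()` then removes the schema, cron jobs, and, if
// necessary, the Hasura source, database, and role.
pub fn on_indexer_deleted(
    client: &impl DeprovisionClient,
    account_id: &str,
    function_name: &str,
) {
    client.start_deprovisioning_task(account_id, function_name);
}
```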

In addition to the above, I've slightly refactored DataLayerService to
make the addition of de-provisioning more accommodating:
- `StartDeprovisioningTask` and `StartProvisioningTask` now return
opaque IDs rather than using `accountId`/`functionName`, to avoid
conflicts with each other
- There is a single `GetTaskStatus` method used for both task kinds
(sketched below); previously it was specific to provisioning
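Roughly, the reworked surface might look like this; the opaque `TaskId` and the unified status method follow the bullets above, while the exact signatures are assumptions:

```rust
// Opaque handle returned to the caller instead of accountId/functionName.
pub struct TaskId(pub String);

pub enum DataLayerTaskStatus { Pending, Complete, Failed }

// Hypothetical client-side view of the refactored DataLayerService.
pub trait DataLayerTasks {
    fn start_provisioning_task(&mut self, account_id: &str, function_name: &str) -> TaskId;
    fn start_deprovisioning_task(&mut self, account_id: &str, function_name: &str) -> TaskId;
    // One status method shared by both task kinds; keying by the opaque id
    // means provision and de-provision tasks cannot conflict with each other.
    fn get_task_status(&self, task_id: &TaskId) -> DataLayerTaskStatus;
}
```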

As mentioned in #805, the Coordinator implementation is a little awkward
due to the shared, non-blocking control loop. I'll look to refactor this
later and hopefully improve on it.
Provisioning a recently de-provisioned Data Layer would silently fail.
This is because de/provisioning tasks are stored in-memory, keyed by a
hash of the config, so if a provisioning task had recently completed,
attempting to re-provision would return that same task.

This PR keys tasks by random UUIDs instead of hashes, so we can trigger
multiple provisioning jobs for a given Indexer/Data Layer, allowing for
the Provision > De-provision > Provision flow (sketched below). To protect
against re-provisioning an _existing_ Data Layer, we only start the task
after verifying it doesn't already exist. The cache has also been removed
from `Provisioner` to ensure we get accurate results.
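A sketch of the keying fix; the real tasks live in Runner's DataLayerService, so the registry below and its names are purely illustrative (using the `uuid` crate):

```rust
use std::collections::HashMap;
use uuid::Uuid;

pub enum ProvisioningTaskStatus { Pending, Complete, Failed }

// Tasks keyed by a fresh UUID rather than a hash of the config, so each
// Provision > De-provision > Provision cycle gets its own distinct entry.
pub struct TaskRegistry {
    tasks: HashMap<Uuid, ProvisioningTaskStatus>,
}

impl TaskRegistry {
    // `data_layer_exists` stands in for the check against real resources.
    pub fn start_provisioning(&mut self, data_layer_exists: bool) -> Result<Uuid, String> {
        // Guard against re-provisioning an existing Data Layer: only start
        // the task after verifying the resources are not already there.
        if data_layer_exists {
            return Err("Data Layer is already provisioned".to_string());
        }
        let task_id = Uuid::new_v4();
        self.tasks.insert(task_id, ProvisioningTaskStatus::Pending);
        Ok(task_id)
    }
}
```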
Fixed logic to handle editable contract filter
No functional changes. Created scripts for Prettier and lint in
package.json, reinstalled node modules/lockfile, and ran Prettier on all files.
@morgsmccauley morgsmccauley requested a review from a team as a code owner June 24, 2024 21:11
@morgsmccauley morgsmccauley merged commit b382782 into stable Jun 24, 2024
17 checks passed