Propose bug triage process + nightly build mgmt process #3238
base: main
Conversation
…ability, which implies some sort of dataset tier structure) + write down nightly build management process.
> When they don't pass, we need to fix them.
>
> Every morning, someone from inframundo will check the #pudl-deployments slack
@catalyst-cooperative/inframundo is it ok to sign us up for this? I think it's mostly been @zaneselvans and @bendnorman checking in the past. I've been doing it in the new year. We could set up a rotation if we want.
I think this is a realistic description of this team's responsibilities, even if we wind up delegating the fix to someone outside of inframundo.
> 2. Track the GitHub issue and the build status in the above spreadsheet.
> 3. Look in the logs and determine whether it was an "infrastructure failure," i.e. something went wrong with the code that *runs* the nightly build, or a "PUDL failure," i.e. something went wrong with the PUDL ETL itself.
> 4. Investigate the source of the issue & explore ways to fix it. Get help from the folks whose PRs broke the build.
Do you think we need more guidance here? TBH I sort of ran out of doc-writing steam this afternoon. But I also think this is probably good enough?
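If we do want more guidance for step 3, one cheap addition would be a first-pass heuristic over the logs before a human digs in. A minimal sketch, assuming made-up log patterns (none of these reflect the actual build log format):

```python
import re

# Illustrative patterns only -- these are assumptions, not the real
# strings the nightly build logs emit.
INFRA_PATTERNS = [
    r"docker.*(pull|build).*fail",        # container plumbing problems
    r"Connection reset by peer",          # flaky network/storage
    r"quota exceeded",                    # cloud resource limits
]
PUDL_PATTERNS = [
    r"Traceback \(most recent call last\)",  # Python exception in the ETL
    r"AssertionError|ValidationError",       # data checks failing
]

def classify_failure(log_text: str) -> str:
    """Take a first guess at whether a failed build is an infrastructure
    failure or a PUDL failure, per step 3 above."""
    if any(re.search(p, log_text) for p in INFRA_PATTERNS):
        return "infrastructure failure"
    if any(re.search(p, log_text) for p in PUDL_PATTERNS):
        return "PUDL failure"
    return "unknown - needs human triage"
```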
and "tier 2" groups below. | ||
|
||
**Tier 1 datasets** | ||
* FERC 1 schedules XYZ |
This is just some placeholder stuff. I think the main thrust of this proposal is:
1. We should decide which datasets are "important," and thus warrant firefighting action, vs. "unimportant," and thus get slotted in with the rest of the work.
2. We should also decide what it means for something to be "broken" - i.e. X% of data missing/incorrect, new data unincorporated after X time, etc. I absolutely need help actually defining these; see the sketch after this list for one way to make them checkable.
3. If we have 1. and 2., then it will be much easier for us to make prioritization decisions about random things that blow up!
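To make the "broken" thresholds concrete enough to argue about, here's a hypothetical sketch. The 5% / 1-month numbers and the column names are placeholders I made up, not proposed values:

```python
from datetime import date, timedelta

import pandas as pd

# Placeholder thresholds -- the actual values are exactly what needs defining.
MAX_MISSING_FRACTION = 0.05          # "broken" if >5% of a key column is null
MAX_STALENESS = timedelta(days=30)   # "broken" if newest record is >1 month old

def is_broken(df: pd.DataFrame, key_column: str, date_column: str) -> bool:
    """Check a tier 1 table against the placeholder thresholds above.

    Assumes ``date_column`` is a datetime column marking record publication.
    """
    missing_fraction = df[key_column].isna().mean()
    staleness = date.today() - df[date_column].max().date()
    return missing_fraction > MAX_MISSING_FRACTION or staleness > MAX_STALENESS
```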
The issue here is that the most integrated data has the most testing infrastructure written into it and is the most likely to fail. I think it would be more useful here for the first step of triage to be scoping out the issue and writing some proposed solutions. Then we take the explicit step of deciding whether we want to:

- fix it in the most minimal way (relax the restriction, xfail the test; see the example below),
- actually fix the core issue, or
- implement a more extensive design change.

The pause between "here's the problem and what needs to be done" and "which version of this fix should we implement now" is probably the thing we most often fail to do, and it would help us prioritize.
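For the minimal-fix option, pytest's xfail marker is the standard mechanism. A small example; the validation and the reason string are made up:

```python
import pytest

def ferc1_totals_reconcile() -> bool:
    """Stand-in for a real data validation that is currently failing."""
    return False

# "Minimal fix" option: mark the known failure as expected so the build
# goes green again, and link back to the triage issue for the real fix.
@pytest.mark.xfail(
    reason="known failure after latest data update; see triage issue",
    strict=False,  # don't error if the check unexpectedly starts passing
)
def test_ferc1_totals_reconcile():
    assert ferc1_totals_reconcile()
```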
Left some questions and suggestions.
> For *tier 1* tables:
>
> - latest source data is incorporated into PUDL within 1 month of publication
This is faster than our actual funding levels allow for at present; we're running on a quarterly integration calendar for our sub-annual datasets for now.
and "tier 2" groups below. | ||
|
||
**Tier 1 datasets** | ||
* FERC 1 schedules XYZ |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
The issue here is that the most integrated data has the most testing infrastructure written into it and is the most likely to fail. I think more useful here would be that the first step of triage is actually to scope out the issue and write some proposed solutions. Then we make the step of deciding whether we want to fix it in the most minimal way (relax the restriction, xfail the test), actually fix the core issue, or implement a more extensive design change. I think the pause between "here's the problem and what needs to be done" and "which version of this fix should we implement now" is probably the thing we most often fail to do and would help us prioritize.
> **High (prioritize in the upcoming sprint planning)**
>
> - missing/incorrect data in Tier 1 tables
> - new Tier 1 source data available
New data availability won't show up in our nightly builds, so I'm not 100% following the connection here?
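One way to surface new source data without relying on the nightly builds would be a separate scheduled check. A minimal sketch, polling source pages and diffing the HTTP Last-Modified header against what was recorded at the last integration; the URL is a placeholder, and many agencies don't set this header reliably, so this is illustrative only:

```python
import requests

# Hypothetical watchlist of tier 1 source pages (placeholder URL).
SOURCES = {
    "ferc1": "https://example.com/ferc/form-1/latest",
}

def last_modified(url: str) -> str | None:
    """Return the server's Last-Modified header for a source page, if any."""
    resp = requests.head(url, timeout=30, allow_redirects=True)
    resp.raise_for_status()
    return resp.headers.get("Last-Modified")

# A change vs. the value stored at the last integration means new (or
# re-published) source data that should enter the triage queue.
```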
Overview
Sometimes things go wrong! We need to make decisions about how urgently they need to be fixed! If we don't make those decisions, we'll treat everything as urgent and sink hours and hours into stuff that we don't actually care about!
It might be worth writing down some guidelines - but maybe we can just use "discuss as a team how urgent this is" as a process for a little longer. In any case, I tried to sketch out what some written guidelines could look like.
Separately, we have a bit more of an actual process around the nightly build failures now, so I wrote that down.
Testing
Well, other people need to read it and give feedback ;)
To-do list