Fail gracefully when Socrata is down #213
The CRAN release was removed because it has been failing tests randomly and creating overhead for the CRAN team. The CRAN maintainers are no longer allowing any tolerance for occasional transgressions, and RSocrata will be removed after a single false-positive test failure.

The underlying problem is that over the years we've written between 50 and 100 tests to address specific issues with specific datasets. The tests pass when everything is working normally, but they can fail under abnormal circumstances: a brief but unplanned outage, planned API maintenance, a dataset getting removed, or changes in datasets that invalidate our tests. Those external factors are unexpected, unusual, and generally impossible to predict. It's also hard to make exceptions for the tests. If we said "when this test fails, ignore it", then what's the purpose of the test? And I'm not sure we can reliably catch outages and timeouts, because an outage isn't a reproducible condition. Some ideas:
That's not a comprehensive list of possible approaches, and each has its own considerations. In my opinion, we need a test API where we can reproduce the specific conditions that we are testing if we're ever going to have more reliable interactions with the API. The tests use several API endpoints; all of the tests are here: https://github.com/Chicago/RSocrata/blob/master/tests/testthat/test-all.R. It's unsustainable to rely on this disparate collection of random datasets, because you never know when the conditions will change and invalidate the tests.
After going back and forth with ChatGPT I had another idea: take a two-step approach. Within read.socrata, wrap the request in tryCatch so an outage produces a warning and a NULL result instead of an error:

```r
read.socrata <- function(url, ...) {
  response <- tryCatch({
    httr::GET(url)
    ## plus rest of read.socrata code, which builds `result`
    return(result)
  }, error = function(e) {
    warning(paste("Website non-responsive for URL:", url, "Skipping request."))
    return(NULL)
  })
}
```

Then write a second wrapper for the tests. This is a bit of a work in progress and wouldn't solve for the cases where datasets change or other unexpected errors occur, but it would at least be foolproof (hopefully) for outages. Also, I noticed that httr2 is out, and that could provide some mechanisms for more robust handling.
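For illustration, here is one hedged guess at what that second, test-only wrapper could look like. The name read.socrata_test() is hypothetical, and turning an outage (a NULL result) into a testthat skip rather than a failure is an assumption about the intent:

```r
# Hypothetical test-side wrapper: calls read.socrata() and, if the request
# was skipped because the site did not respond (NULL result), skips the
# current test instead of letting it fail.
read.socrata_test <- function(url, ...) {
  result <- read.socrata(url, ...)
  if (is.null(result)) {
    testthat::skip(paste("Socrata did not respond for", url, "- skipping test"))
  }
  result
}

# Usage inside a test (URL is illustrative):
# df <- read.socrata_test("https://data.cityofchicago.org/resource/example.json")
```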
For future reference, there may be something useful in the mocking functions documented here: https://r-pkgs.org/testing-advanced.html#mocking. I'm not sure at a glance.
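If that mocking approach pans out, here is a rough sketch of how it might be used for this issue, assuming testthat >= 3.1.7 and, crucially, that RSocrata routed its HTTP calls through a small internal wrapper (socrata_get() below is hypothetical), since local_mocked_bindings() only replaces bindings in the package's own namespace. The dataset URL is illustrative only.

```r
library(testthat)  # normally already attached under tests/testthat

# Hypothetical internal wrapper inside RSocrata. Wrapping httr::GET() in a
# function we own is the general pattern for mocking functions from other
# packages, because the mock is installed on the wrapper's binding.
socrata_get <- function(url, ...) {
  httr::GET(url, ...)
}

# Simulate a Socrata outage without touching the network.
test_that("read.socrata fails gracefully when Socrata is unreachable", {
  local_mocked_bindings(
    socrata_get = function(url, ...) stop("simulated outage: could not resolve host")
  )
  expect_warning(result <- read.socrata("https://data.cityofchicago.org/resource/example.json"))
  expect_null(result)
})
```

This would only exercise the outage path if read.socrata actually went through the wrapper, so it is a sketch of the mocking mechanism rather than a drop-in test for the package as it stands.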
We received notice from CRAN that RSocrata is failing with some regularity, which causes headaches for the CRAN administrators. We believe this happens because Socrata is occasionally briefly down for maintenance.
In order to "fail gracefully" @nicklucius and I discussed checking the site to see if it's up before running the tests.
For now I plan to use something like
attr(curlGetHeaders("https://data.cityofchicago.org/"), "status")
to check for 200 at the start of the tests, and abort testing if that fails. Error handling will be necessary because curlGetHeaders() throws an error if the site is down.
Example error for a non-existent website:
Example error for a timeout:
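A minimal sketch of that check, with the error handling mentioned above; the helper name socrata_is_up() is hypothetical:

```r
# Hypothetical helper for the test setup: returns TRUE only when the Socrata
# portal answers with HTTP 200. Errors from curlGetHeaders() (DNS failure,
# timeout, etc.) are caught and treated as "not up".
socrata_is_up <- function(url = "https://data.cityofchicago.org/") {
  status <- tryCatch(
    attr(curlGetHeaders(url), "status"),
    error = function(e) NA_integer_
  )
  isTRUE(status == 200L)
}

# Each live-API test could then begin with something like:
# test_that("reads a Socrata dataset", {
#   if (!socrata_is_up()) skip("Socrata appears to be down")
#   ...
# })
```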
I would also like to take this opportunity to at least try splitting the tests (#103), and this should also take care of #131, which is to fail gracefully on write.
I think it would be a mistake to resolve this issue by introducing new error handling into read.socrata and write.socrata. The existing errors are descriptive. For example, if you plug in a bad URL you get a 404. Also, modifying the tests is the least invasive way to make changes. Adding unexpected error handling would be a significant change and could have unintended consequences. For example, I would not want a process that updates a dataset nightly to fail without the expected error.