
Initial fix #420

Merged: 4 commits into slackapi:master on Aug 21, 2017

Conversation

@chapmanc (Contributor) commented May 25, 2017

  • I've read and understood the Contributing guidelines and have done my best effort to follow them.
  • I've read and agree to the Code of Conduct.
  • I've been mindful about doing atomic commits, adding documentation to my changes, not refactoring too much.
  • I have a descriptive title and added any useful information for the reviewer. Where appropriate, I've attached a screenshot and/or screencast (gif preferably).
  • I've written tests to cover the new code and functionality included in this PR.
  • I've read, agree to, and signed the Contributor License Agreement (CLA).

PR Summary

Currently the hubot slack adapter makes a call to users.list every time we call robot.brain.set, since that fires the 'loaded' event. If a caller updates several properties separately, you get multiple calls to the Slack API to load users.

Related Issues

For #419

Test strategy

Currently this is manually tested; after a couple of hours of trying to figure out the test framework I had to cut my losses. If someone else can write a test for this, or can help me write one, that would be appreciated. I'm used to using sinon, where I can clearly see the calls made to functions.

@codecov (bot) commented May 25, 2017

Codecov Report

Merging #420 into master will decrease coverage by 0.79%.
The diff coverage is 0%.


@@           Coverage Diff            @@
##           master    #420     +/-   ##
========================================
- Coverage   79.29%   78.5%   -0.8%     
========================================
  Files           5       5             
  Lines         198     200      +2     
  Branches       46      47      +1     
========================================
  Hits          157     157             
- Misses         30      32      +2     
  Partials       11      11
Impacted Files      Coverage Δ
src/client.coffee   82.22% <ø> (ø) ⬆️
src/bot.coffee      71.05% <0%> (-1.27%) ⬇️


Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e76cf7f...5a1753a. Read the comment docs.

@chapmanc (Contributor, Author)

@aoberoi if you can give me some help on the tests, it would be good to get this in. It's a bad performance hit and really unnecessary hammering on the Slack API.

@aoberoi (Contributor) commented Jun 6, 2017

@chapmanc thanks for investigating and the PR. i'll be able to review this by mid-week. if everything looks 🆗 , i hope to ship it before the end of the week.

@aoberoi (Contributor) commented Jun 16, 2017

now that i've gotten some time to look at this (sorry!)...

i think we can go a bit further here. rather than letting the adapter refresh the whole user list every hour, we only really want to do it once. the individual user updates will happen in user_change events, so i don't think there's any value in complete reloads by calling users.list() again.

@chapmanc if you'd like to adjust this PR to work in that way, i'd be happy to help review and merge it. if you don't have the time, let me know and i'll make sure it's a P1 and we can get the fix out soon.

@seansabour

Is there any update on this? It is impacting our team heavily, as we are leveraging the brain quite a bit.

@chapmanc (Contributor, Author)

I can update it but won't be for a while. Just on holidays now but will look into it when I'm back.

@aoberoi (Contributor) commented Aug 2, 2017

@chapmanc if you have a moment, can you make me a collaborator on this PR and i'll finish up the changes?

@chapmanc (Contributor, Author) commented Aug 2, 2017

Hey @aoberoi, "Allow edits" is checked already 👍

@chapmanc (Contributor, Author) commented Aug 6, 2017

Hey @aoberoi

I have added code to stop it loading after the initial load, but since the test stubs fake the brain as an event emitter, there isn't much to test here, so the code coverage is below. I could stub out the event emitter, but at that point I don't know what we are really testing.

Let me know if there is something else you would rather do.

@aoberoi aoberoi merged commit ea6ed1f into slackapi:master Aug 21, 2017
@dodgepong

Just wanted to swing by and say thanks for fixing this in v4.4.0. For the sake of future Googlers: the message that appears in the log is info: Rate limited, will retry 10 seconds (or some other number of seconds, but usually 10). It appeared repeatedly in my log due to repeated calls to robot.brain.set, and upgrading to 4.4.0 fixed the slow/non-existent responses for me.

@perplexes

This is still happening for us, so I've had to set DISABLE_USER_SYNC. Perhaps it's because we have a lot of users, plus it seems like, at least on Heroku, hubot has to reconnect to Slack every minute, on the minute.
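For anyone else needing this workaround, the variable is set like any other environment variable. This is a sketch for Heroku and a local run; the exact truthy value the adapter checks is an assumption here, so check the adapter's README.

```shell
# Workaround: disable the adapter's user sync entirely.
# On Heroku:
heroku config:set DISABLE_USER_SYNC=true

# Local run (bin/hubot is the standard hubot launcher):
DISABLE_USER_SYNC=true bin/hubot --adapter slack
```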

@aoberoi (Contributor) commented Oct 23, 2018

@perplexes thanks for reporting. i'd like to ask for some clarifications. are you also getting the Rate limited,... output on your console? can you turn on debugging using the instructions here so we can try to understand what is going on?

it's possible that issue #479 is related. it's also possible that #524 introduced a regression.

@aoberoi (Contributor) commented Oct 23, 2018

@perplexes one more question: can you take a look at your Heroku app metrics to see if your memory usage is spiking and Heroku is potentially terminating your app because of that? that's one theory for why you're experiencing a reconnect at a regular interval.

@perplexes commented Oct 25, 2018

No memory usage spike. It still reconnects every minute, even though I've disabled user sync. And I've enabled debugging, but it doesn't help much:

Oct 25 15:54:44 hi-hubot app[web] DEBUG [Thu Oct 25 2018 21:54:44 GMT+0000 (UTC)] DEBUG Received text message in channel: C98M2PQRW, from: U3RE46AF2 (human)
Oct 25 15:54:44 hi-hubot app[web] DEBUG [Thu Oct 25 2018 21:54:44 GMT+0000 (UTC)] DEBUG No listeners executed; falling back to catch-all
Oct 25 15:54:46 hi-hubot app[web] DEBUG [Thu Oct 25 2018 21:54:46 GMT+0000 (UTC)] DEBUG Received text message in channel: C98M2PQRW, from: U3RE46AF2 (human)
Oct 25 15:54:46 hi-hubot app[web] DEBUG [Thu Oct 25 2018 21:54:46 GMT+0000 (UTC)] DEBUG No listeners executed; falling back to catch-all
Oct 25 15:54:57 hi-hubot app[web] warn warn: Reconnecting, on attempt 1
Oct 25 15:54:57 hi-hubot app[web] info info: Rate limited, will retry 44 seconds
Oct 25 15:55:01 hi-hubot app[heroku-redis] info source=REDIS addon=redis-contoured-67092 sample#active-connections=2 sample#load-avg-1m=0.005 sample#load-avg-5m=0.035 sample#load-avg-15m=0.04 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664336kB sample#memory-free=12729124kB sample#memory-cached=1285164kB sample#memory-redis=528664bytes sample#hit-rate=0.95455 sample#evicted-keys=0
Oct 25 15:55:41 hi-hubot app[web] DEBUG [Thu Oct 25 2018 21:55:41 GMT+0000 (UTC)] DEBUG SlackBot#authenticated() Found self in RTM start data
Oct 25 15:55:41 hi-hubot app[web] INFO [Thu Oct 25 2018 21:55:41 GMT+0000 (UTC)] INFO Logged in as @hibot in workspace Human Interest
Oct 25 15:55:41 hi-hubot app[web] INFO [Thu Oct 25 2018 21:55:41 GMT+0000 (UTC)] INFO Connected to Slack RTM
Oct 25 15:55:57 hi-hubot app[web] warn warn: Reconnecting, on attempt 1
Oct 25 15:55:57 hi-hubot app[web] info info: Rate limited, will retry 45 seconds
Oct 25 15:56:12 hi-hubot app[heroku-redis] info source=REDIS addon=redis-contoured-67092 sample#active-connections=2 sample#load-avg-1m=0.065 sample#load-avg-5m=0.04 sample#load-avg-15m=0.04 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664336kB sample#memory-free=12729824kB sample#memory-cached=1285164kB sample#memory-redis=528664bytes sample#hit-rate=0.95455 sample#evicted-keys=0
Oct 25 15:56:42 hi-hubot app[web] DEBUG [Thu Oct 25 2018 21:56:42 GMT+0000 (UTC)] DEBUG SlackBot#authenticated() Found self in RTM start data
Oct 25 15:56:42 hi-hubot app[web] INFO [Thu Oct 25 2018 21:56:42 GMT+0000 (UTC)] INFO Logged in as @hibot in workspace Human Interest
Oct 25 15:56:42 hi-hubot app[web] INFO [Thu Oct 25 2018 21:56:42 GMT+0000 (UTC)] INFO Connected to Slack RTM
Oct 25 15:56:57 hi-hubot app[web] warn warn: Reconnecting, on attempt 1
Oct 25 15:56:57 hi-hubot app[web] info info: Rate limited, will retry 46 seconds
Oct 25 15:57:23 hi-hubot app[heroku-redis] info source=REDIS addon=redis-contoured-67092 sample#active-connections=2 sample#load-avg-1m=0.14 sample#load-avg-5m=0.07 sample#load-avg-15m=0.05 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664336kB sample#memory-free=12729684kB sample#memory-cached=1285168kB sample#memory-redis=528664bytes sample#hit-rate=0.95455 sample#evicted-keys=0
Oct 25 15:57:43 hi-hubot app[web] INFO [Thu Oct 25 2018 21:57:43 GMT+0000 (UTC)] INFO Logged in as @hibot in workspace Human Interest
Oct 25 15:57:43 hi-hubot app[web] INFO [Thu Oct 25 2018 21:57:43 GMT+0000 (UTC)] INFO Connected to Slack RTM
Oct 25 15:57:46 hi-hubot app[web] INFO [Thu Oct 25 2018 21:57:45 GMT+0000 (UTC)] INFO keepalive ping
Oct 25 15:57:46 hi-hubot app[web] INFO [Thu Oct 25 2018 21:57:46 GMT+0000 (UTC)] INFO keepalive pong: 200 OK
Oct 25 15:57:46 hi-hubot heroku[router] info at=info method=POST path="/heroku/keepalive" host=hi-hubot.herokuapp.com request_id=a8cd31e4-6e56-49ed-9bc9-c6e34ebd1e29 fwd="107.22.61.64" dyno=web.1 connect=1ms service=10ms status=200 bytes=205 protocol=https
Oct 25 15:57:57 hi-hubot app[web] warn warn: Reconnecting, on attempt 1
Oct 25 15:57:57 hi-hubot app[web] info info: Rate limited, will retry 46 seconds
Oct 25 15:58:51 hi-hubot app[heroku-redis] info source=REDIS addon=redis-contoured-67092 sample#active-connections=2 sample#load-avg-1m=0.08 sample#load-avg-5m=0.065 sample#load-avg-15m=0.05 sample#read-iops=0 sample#write-iops=0 sample#memory-total=15664336kB sample#memory-free=12729972kB sample#memory-cached=1285168kB sample#memory-redis=528664bytes sample#hit-rate=0.95455 sample#evicted-keys=0
Oct 25 15:58:57 hi-hubot app[web] warn warn: Reconnecting, on attempt 1
Oct 25 15:58:57 hi-hubot app[web] info info: Rate limited, will retry 47 seconds

@perplexes

@aoberoi any ideas?

@aoberoi (Contributor) commented Oct 31, 2018

@perplexes sorry, this fell off my radar. let's move this to a new issue so that it isn't hidden inside something that appears closed. would you mind creating an issue and mentioning this issue (by number) in it?

I'm looking at the logs now and i'll try to leave any thoughts I have about next steps in the new issue.
