Memcache issues on production #869
Yeah, these have always been the same. @craigspaeth had said that we could probably easily switch to e.g. Redis.
Saw this again this morning. It was fixed by restarting the app, meaning the problem was not on Memcachier's side and is most likely in MemJS's error handling. I expect the appropriate fix would be to make sure MemJS reinstantiates (i.e. fully closes the socket and opens a new one) when it gets these errors, or even to just bail out and let the Heroku dyno process supervisor restart the app.
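The reinstantiate-on-error idea could be sketched roughly like this. This is illustrative only: `createClient` stands in for something like `memjs.Client.create()`, and the method names and retry policy here are assumptions, not the real MemJS API.

```javascript
// Sketch: wrap a cache client so errors trigger a full reinstantiation
// (close the dead socket, open a new one) instead of reusing a broken
// connection. After too many consecutive failures, bail out so the
// process supervisor (e.g. the Heroku dyno manager) restarts the app.
function makeResilientCache(createClient, maxRetries = 3) {
  let client = createClient();
  let retries = 0;
  return {
    async get(key) {
      try {
        const value = await client.get(key);
        retries = 0; // success resets the failure counter
        return value;
      } catch (err) {
        if (retries++ >= maxRetries) {
          // Bail out and let the supervisor restart the process, e.g.:
          // process.exit(1);
          throw err;
        }
        if (client.close) client.close(); // drop the dead socket
        client = createClient();          // fully reinstantiate
        return this.get(key);             // retry on the fresh client
      }
    },
  };
}
```

The key design point is that a failed request never retries on the same socket; whether to retry in-process at all, or exit immediately and lean on the supervisor, is a policy choice.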
@alloy did update MemJS to
@orta I don't think we should conflate this with other memory issues.
@alloy so right now there are a couple of things lined up for us in Platform, but once we update to Kubernetes 1.9 and add additional instances to plan for Metaphysics' memory usage, I think we can move forward on this. You will then be able to use AWS ElastiCache for Memcached or Redis, accessed via a private network, and so can use the leading drivers.
@izakp Awesome! 👌
I took a look at the production logs; it was mostly memcache errors ATM: