This repository has been archived by the owner on Oct 31, 2018. It is now read-only.

Implement a control mechanism to prevent two macrotasks from loading or computing the same block at the same time. #14

Open
ccanel opened this issue Mar 7, 2015 · 0 comments

Comments


ccanel commented Mar 7, 2015

Before the new disk scheduler was integrated into the rest of the Spark code, the CacheManager was responsible for ensuring that two tasks did not load or compute the same block at the same time (and thus do duplicate work). Now that the CacheManager has been removed, this functionality needs to be reimplemented elsewhere.
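One possible shape for this, sketched below in Scala, is the loading-set pattern the old CacheManager used: a shared registry of in-flight block IDs guarded by a monitor, so the first task to claim a block computes it while later tasks wait for it to finish. Everything here (the `BlockLoadCoordinator` name and its methods) is a hypothetical illustration for discussion, not code from this repository.

```scala
import scala.collection.mutable

// Hypothetical sketch, modeled on the loading-set pattern from the old
// CacheManager. Not actual code from this repository.
class BlockLoadCoordinator {
  // Block IDs currently being loaded or computed by some task.
  private val loading = new mutable.HashSet[String]

  /**
   * Returns true if the caller won the right to compute `blockId`,
   * or false if another task finished computing it while we waited.
   */
  def acquireOrWait(blockId: String): Boolean = loading.synchronized {
    if (!loading.contains(blockId)) {
      loading += blockId
      true // caller must compute the block and call release() when done
    } else {
      // Another task is computing this block; wait until it finishes.
      // The while loop re-checks in case yet another task claimed it
      // again after a spurious wakeup.
      while (loading.contains(blockId)) {
        loading.wait()
      }
      false // the block should now be available from the block store
    }
  }

  /** Called by the winning task once the block is stored (or the load failed). */
  def release(blockId: String): Unit = loading.synchronized {
    loading -= blockId
    loading.notifyAll()
  }
}
```

A macrotask would call `acquireOrWait(blockId)`: on `true` it computes and stores the block, calling `release` in a `finally` block so waiters are never stranded on failure; on `false` it simply reads the block that the winning task just cached.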
