Replies: 1 comment 1 reply
-
Concurrency is always tricky. If you can demonstrate a specific issue with this implementation, that would certainly be worth looking at. In the meantime, here's the argument as to why it's OK in its current form: DispatchQueues are cheap, and contrary to some implications in the docs (below), they do not consume threads simply by existing. In fact, they never create threads at all: any queue you create ultimately dumps its blocks into one of the system concurrent queues for execution. The system queues will certainly create new threads if they have enough parallel work to do, however.

The concurrent queue in the box code is really a serial queue in disguise. The only thing you can do that isn't barriered is read the resolved value from the box. That should always be fast and nonblocking.

Apple seems to emit a lot of sturm und drang about concurrent queues, for reasons that aren't entirely clear to me. For example, the DispatchQueue doc says "Another way that apps consume too many threads is by creating too many private concurrent dispatch queues. Because each dispatch queue consumes thread resources, creating additional concurrent dispatch queues exacerbates the thread consumption problem." But again, my understanding is that this statement is BS when read literally. The problem isn't the existence of concurrent queues, it's the abuse of concurrency. To run into problems, you have to actually consume threads. That is, you either have to a) try to actually run too many blocks in parallel, or b) submit concurrent blocks that suspend in the kernel.
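To illustrate what I mean by "a serial queue in disguise", here's a minimal sketch of the barrier pattern (names and details are illustrative, not the library's actual `EmptyBox` source): writes go through a barrier block and therefore run exclusively, while reads skip the barrier and never block one another.

```swift
import Foundation

// Illustrative sketch only, not the actual EmptyBox implementation.
// A value guarded by a private concurrent queue: writes are barriered
// (so they behave as if on a serial queue), reads are not.
final class SketchBox<Value> {
    private let queue = DispatchQueue(label: "sketch.box", attributes: .concurrent)
    private var value: Value?

    // Fast, nonblocking read: no barrier, so overlapping reads are fine.
    var resolvedValue: Value? {
        queue.sync { value }
    }

    // Barriered write: waits for in-flight reads, then runs alone.
    func resolve(with newValue: Value) {
        queue.async(flags: .barrier) {
            if self.value == nil {
                self.value = newValue
            }
        }
    }
}
```

Since every block submitted this way is either a barrier or a trivially short read, the queue never hands the system much genuinely parallel work, which is the sense in which it shouldn't drive up the thread count.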
-
Hi,
thanks for making this neat and useful library!
While working on improving my understanding of the impact dispatch queues can have on an application, I have read up on the GCD documentation and revisited WWDC videos such as "Modernizing Grand Central Dispatch Usage" and "Building Responsive and Efficient Apps with GCD". To use GCD efficiently, Apple recommends restricting the number of queues (or queue hierarchies) to roughly one per subsystem of a program, so that one does not inadvertently spawn too many threads.
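Concretely, my understanding of that guidance is something like the following (the queue labels here are made up purely for illustration): give each subsystem a single serial root queue and have the subsystem's other queues target it, so they share one execution context instead of fanning out threads.

```swift
import Foundation

// Illustration of the "one queue hierarchy per subsystem" guidance;
// labels are made up. Both worker queues funnel into one serial root,
// so the subsystem as a whole draws on a single execution context.
let networkingRoot = DispatchQueue(label: "app.networking")
let requestQueue = DispatchQueue(label: "app.networking.requests", target: networkingRoot)
let parsingQueue = DispatchQueue(label: "app.networking.parsing", target: networkingRoot)
```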
After poking around a bit in the code base, I came across the mechanism for synchronizing access to the underlying storage of `Promise`, i.e., `EmptyBox`:

I'm curious whether you have assessed the impact of having a private concurrent dispatch queue per object as in the case above. As several promises may exist simultaneously, have you seen cases where multiple instances of `EmptyBox` could lead to spawning more threads than necessary? Also, I'm keen to hear if you've considered implementing the synchronization with other primitives, such as a read/write lock.
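For concreteness, here is roughly the kind of read/write-lock-based box I have in mind; this is purely an illustrative sketch, not a proposal for the library's actual API.

```swift
import Foundation

// Illustrative sketch of a read/write-lock-based box, just to show the
// alternative I'm asking about; not a drop-in replacement for EmptyBox.
final class RWLockBox<Value> {
    private let lock = UnsafeMutablePointer<pthread_rwlock_t>.allocate(capacity: 1)
    private var value: Value?

    init() {
        pthread_rwlock_init(lock, nil)
    }

    deinit {
        pthread_rwlock_destroy(lock)
        lock.deallocate()
    }

    // Multiple readers can hold the lock simultaneously.
    var resolvedValue: Value? {
        pthread_rwlock_rdlock(lock)
        defer { pthread_rwlock_unlock(lock) }
        return value
    }

    // Writers take the lock exclusively.
    func resolve(with newValue: Value) {
        pthread_rwlock_wrlock(lock)
        defer { pthread_rwlock_unlock(lock) }
        if value == nil {
            value = newValue
        }
    }
}
```

The tradeoff I'm wondering about is whether a lock like this would end up lighter weight than a per-object queue, or whether the queue-based version has advantages I'm missing.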