Safepoints #18
Nice, I tried something like this a while back and it worked well, but I didn't think of the `&mut` API to prevent epoch-advancement issues. This seems like a pretty simple branch to get right anyway with the new scope guard; unless others here are really worried about misuse, I think we should put it in for the upcoming release I saw discussion of.

Are we sure that the relaxed store is OK? My prior says no, since `safepoint` is basically a clean wrapper over re-pinning. If we take some worst-case scenario where safepoint advancements to `cur_epoch + 2` are ordered before loads from `cur_epoch`, wouldn't that expose the code to a situation where the collector could free memory while the thread reads it? I suspect a release store (not a fence!) would work and remain cheaper (free on x86) than a seq-cst fence.

Finally, a simple addition that would be really nice is a `safepoint_weak` API which advances the counter but doesn't safepoint. This would be ideal for something reacting to input quickly, like:

```rust
loop {
    if poll_more_data() {
        process_data();
        // This is a trite use of `if`, but one could imagine more
        // complex conditions leading to some boolean.
        safepoint_weak(false);
    } else {
        safepoint();
    }
}
```

Or maybe more generally, a `safepoint_at(num_call)` which safepoints if the counter is greater than `num_call`.
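A minimal sketch of what these counter-based variants might behave like. Everything here is hypothetical (the `ScopeCounter` struct, the no-argument `safepoint_weak`, and `safepoint_at` are illustrations of the proposal, not crossbeam APIs); the `repins` field stands in for actually unpinning and re-pinning the thread:

```rust
// Hypothetical sketch of the proposed counter-based safepoint variants.
struct ScopeCounter {
    calls: usize,
    repins: usize, // stand-in for how many times we actually re-pinned
}

impl ScopeCounter {
    fn new() -> Self {
        ScopeCounter { calls: 0, repins: 0 }
    }

    /// Advance the counter without re-pinning: very cheap, suitable
    /// for latency-sensitive paths that must react to input quickly.
    fn safepoint_weak(&mut self) {
        self.calls += 1;
    }

    /// Re-pin only once more than `num_call` calls have accumulated,
    /// then reset the counter.
    fn safepoint_at(&mut self, num_call: usize) {
        self.calls += 1;
        if self.calls > num_call {
            self.calls = 0;
            self.repins += 1; // stand-in for unpin-then-repin
        }
    }
}

fn main() {
    let mut c = ScopeCounter::new();
    for _ in 0..10 {
        c.safepoint_weak(); // counter only, never pays for re-pinning
    }
    for _ in 0..100 {
        c.safepoint_at(50); // re-pins roughly every 50 calls
    }
    println!("repins = {}", c.repins);
}
```

The point of the split is that the caller, not the library, decides where the cheap path is acceptable.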
Nope, you're absolutely right. I've edited the comment - can you take a look again? It should be enough to simply execute a fence if the epoch changed.
Easy and configurable. I like it. :)
I think a release store may be good enough, since it ensures all memory accesses (not just stores) have completed before the store completes. That would limit the leakage to loads from epoch N+1 going into epoch N, which I believe is safe. This keeps safepoint essentially free on x86 and fairly fast on architectures with optimized release fences. Something else that might be really good is having safepoint prefetch the global epoch, so the chance of throwing an extra cache miss into the pipeline decreases.
### Correctness

I think the store should be

```rust
let g = GLOBAL_EPOCH.load(Relaxed);
let l = self.local_epoch.load(Relaxed);
if l != g {
    self.local_epoch.store(g, Release);
    fence(SeqCst);
}
```

As @schets said, the release store ensures all prior memory accesses complete before the store itself. Note that we don't need to issue the `SeqCst` fence when the local epoch is unchanged, which keeps the common fast path cheap.

A note on signature: I think it should take `&mut self`.

### API

Background: these days I'm implementing wait-free objects in Crossbeam. Wait-free objects typically require "helping": when a thread is slow, other fast threads should be able to "help" the slow thread. In order to do that, we need to maintain the list of all the threads that work on the object. The list should be safely managed by some memory reclamation scheme, too. Naturally, I want to manage the helper thread list with the epoch GC.

Problem: A (fast) thread should iterate over all the threads, looking for a thread to help. The entire iteration should be inside a single critical section (think: the code region inside `epoch::pin`).

Proposal: So I began to think of a more low-level API to "pin" the thread: a pair of functions that explicitly begin and end a pinned scope.
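A self-contained version of that snippet, for reference. The `GLOBAL_EPOCH` static and the `Local` struct are stand-ins invented for this sketch; the real crossbeam internals are organized differently:

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering::{Relaxed, Release, SeqCst}};

// Stand-in for the collector's global epoch counter.
static GLOBAL_EPOCH: AtomicUsize = AtomicUsize::new(0);

struct Local {
    local_epoch: AtomicUsize,
}

impl Local {
    /// Catch up with the global epoch, as proposed above: two relaxed
    /// loads on the fast path, and a release store plus a seq-cst
    /// fence only when the epoch actually changed.
    fn safepoint(&self) -> bool {
        let g = GLOBAL_EPOCH.load(Relaxed);
        let l = self.local_epoch.load(Relaxed);
        if l != g {
            self.local_epoch.store(g, Release);
            fence(SeqCst);
            true // we advanced, and paid for the fence
        } else {
            false // fast path: no fence at all
        }
    }
}

fn main() {
    let local = Local { local_epoch: AtomicUsize::new(0) };
    assert!(!local.safepoint()); // epochs equal: nothing to do
    GLOBAL_EPOCH.fetch_add(1, Relaxed); // "collector" advances the epoch
    assert!(local.safepoint()); // now we catch up and fence
    assert_eq!(local.local_epoch.load(Relaxed), GLOBAL_EPOCH.load(Relaxed));
}
```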
I see now. Agreed.

But are you saying that an operation iterates over a few threads, stores the pointer where it left off, then exits the pinned scope, and continues later from the same pointer?

Would these functions have the `unsafe` marker?
Release should work with increments of one, but I'm not sure about increments of two. Wouldn't it be possible to see loads of objects unlinked in epoch N+2 happen before the epoch update to N+2 if we move `local_epoch` by two or more? If it's a movement of only one, I believe there are non-seq-cst fence alternatives which may still work. As long as we can avoid that, we'll get code that is between really and decently fast on many architectures. Correctness is most important here, of course.
On the safety of the proposed functions:
No, this code will not compile: it is attempting to borrow the scope mutably while it is already borrowed.
Can you just wrap the whole code within an `unsafe` block?
Ah, now I understand why that doesn't work.

### On helping

Suppose there's a wait-free queue:

```rust
let handle = queue.handle();
...
handle.push(3);
...
let v = handle.try_pop();
```

In order to implement helping, all handles should be connected, e.g. in a linked list. Also, a handle should point to another handle in the list. The pointer represents the next thread to help. We need to store the pointer because we need to be fair in helping other threads in order to achieve wait-freedom. Also, for safety, the pointer should be valid all the time. So the existence of a handle should imply that a GC scope is somehow pinned. However, I failed to provide such a guarantee with the proposed API.

I think the easiest way to guarantee that a GC scope is pinned whenever we have a queue handle is using a QSBR-flavored handle.

For the separation of concerns, I'm writing a reusable library for helping: https://github.com/jeehoonkang/crossbeam-queue/blob/master/src/helpers.rs
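A rough sketch of the invariant described above: the mere existence of a handle keeps the scope pinned, and dropping the handle unpins. The `PINNED` counter and the `Handle` type here are hypothetical stand-ins for the real epoch GC state, not crossbeam code:

```rust
use std::sync::atomic::{AtomicUsize, Ordering::SeqCst};

// Stand-in for "how many GC scopes are currently pinned"; the real
// epoch GC tracks pinning per thread.
static PINNED: AtomicUsize = AtomicUsize::new(0);

struct Handle;

impl Handle {
    fn new() -> Handle {
        PINNED.fetch_add(1, SeqCst); // creating a handle pins a scope
        Handle
    }

    fn push(&self, _v: i32) {
        // While `self` exists, a scope is guaranteed to be pinned,
        // so pointers to other handles in the helper list stay valid.
        assert!(PINNED.load(SeqCst) > 0);
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        PINNED.fetch_sub(1, SeqCst); // dropping the handle unpins
    }
}

fn main() {
    {
        let handle = Handle::new();
        handle.push(3);
        assert!(PINNED.load(SeqCst) > 0);
    }
    // After the handle is dropped, the scope is no longer pinned.
    assert_eq!(PINNED.load(SeqCst), 0);
}
```

Tying pinning to the handle's lifetime (RAII) is what makes the "pointer is valid all the time" requirement enforceable by the type system rather than by convention.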
Closing since
The problem is that in some special situations pinning might be prohibitively slow. For example, suppose you have the following code:

If `do_something_that_pins` performs a quick operation and pins the current thread every time, then the cost of pinning might take a large percentage of the total running time. A way to alleviate the problem would be to enclose the whole loop with `epoch::pin`:

But the new problem is that the GC will not be able to make any progress during the whole loop, which might take a long time! Let's try fixing that as well:

Okay, that's a bit better, but it's ugly, error-prone, and cumbersome to write.
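The manual fix described above can be sketched like this. The `pin` function here is a counting stand-in for `epoch::pin`, and `do_something` stands in for the hypothetical `do_something_that_pins` body; the point is only to show the chunked re-pinning pattern:

```rust
// Stand-in for epoch::pin: runs the closure inside a "pinned" scope
// and counts the pinnings so the cost is visible.
fn pin<F: FnOnce()>(pin_count: &mut usize, f: F) {
    *pin_count += 1;
    f();
}

/// Process items in chunks, pinning once per chunk of 50 instead of
/// once per item. Between chunks the thread is unpinned, so the GC
/// can make progress; within a chunk we avoid per-item pinning cost.
fn process(items: &[u64]) -> (u64, usize) {
    let mut sum = 0u64;
    let mut pin_count = 0;
    for chunk in items.chunks(50) {
        pin(&mut pin_count, || {
            for &item in chunk {
                sum += item; // stand-in for do_something_that_pins
            }
        });
    }
    (sum, pin_count)
}

fn main() {
    let items: Vec<u64> = (0..1000).collect();
    let (sum, pins) = process(&items);
    println!("sum = {}, pins = {}", sum, pins);
    assert_eq!(pins, 20); // 1000 items / 50 per chunk
}
```

This is exactly the boilerplate the safepoint proposal below the fold is trying to eliminate: the chunk size and the restructured loop leak into every caller.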
I'm proposing to introduce safepoints. The idea is to add a new method `Scope::safepoint` and change `epoch::pin` so that it takes a closure that accepts a `&mut Scope` instead of a `&Scope`:

Now we can change the original code like this:

Method `safepoint` increments an internal counter and every, say, 50th step simply unpins and re-pins the current thread. But, of course, only if this scope is the outermost one (the one that actually pinned the thread). This way we have essentially implemented something very much like QSBR (quiescent-state based reclamation).

Note that the `safepoint` method is pretty fast. We can implement it like this:

Maybe we don't even need to count steps to occasionally re-pin (the `&& self.increment()` part). Perhaps it's just fine to re-pin on every call.

The reason why the method is called `safepoint` is because it reminds me of safepoints in garbage-collected languages (e.g. see this and this).

cc @hniksic
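Based on the description above, `safepoint` could be sketched roughly as follows. This is an assumption-laden illustration: the `is_outermost` flag, the threshold of 50, and the `repins` counter (a stand-in for actually unpinning and re-pinning) are all invented here, not taken from crossbeam:

```rust
// Hypothetical sketch of the proposed Scope::safepoint.
struct Scope {
    is_outermost: bool,
    counter: usize,
    repins: usize, // stand-in for actual unpin-then-repin calls
}

impl Scope {
    const STEP: usize = 50;

    /// Returns true roughly every 50th call.
    fn increment(&mut self) -> bool {
        self.counter += 1;
        if self.counter >= Self::STEP {
            self.counter = 0;
            true
        } else {
            false
        }
    }

    /// Re-pin only if this is the outermost scope and the counter
    /// says it's time (the `&& self.increment()` part; one could
    /// also drop the counter and re-pin on every call).
    fn safepoint(&mut self) {
        if self.is_outermost && self.increment() {
            // unpin + repin of the current thread would go here
            self.repins += 1;
        }
    }
}

fn main() {
    let mut scope = Scope { is_outermost: true, counter: 0, repins: 0 };
    for _ in 0..200 {
        scope.safepoint();
    }
    assert_eq!(scope.repins, 4); // 200 calls / 50 per re-pin
}
```

Taking `&mut Scope` in the `epoch::pin` closure is what makes this safe to expose: the exclusive borrow prevents the user from holding a `Shared` reference across the point where the scope re-pins.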