high cpu usage #38
Thanks for the report @m040601! I was able to reproduce the issue locally.
If there's a single entry in the collection of matches, our Huffman implementation will return an empty label. That results in an infinite loop when rendering because we attempt to match empty input to an empty label. Resolve this by handling the single-match case separately. Fixes #38
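For illustration, here is a minimal sketch in Go of handling the single-match case separately; the assignLabels helper and its alphabet parameter are hypothetical names for this example, not tmux-fastcopy's actual API:

```go
package main

// assignLabels is a hypothetical sketch, not tmux-fastcopy's actual API.
// A Huffman-style labeler assigns each match the shortest unique label,
// but with exactly one match the tree is a single leaf and the label
// comes out empty; the renderer then tries to match empty input against
// that empty label forever. Short-circuiting the single-match case
// avoids that.
func assignLabels(alphabet []rune, matches []string) map[string]string {
	labels := make(map[string]string, len(matches))
	if len(matches) == 1 {
		// One match: hand out a one-letter label explicitly instead
		// of the empty label Huffman coding would produce.
		labels[matches[0]] = string(alphabet[0])
		return labels
	}
	// General case: build an n-ary Huffman tree over the matches
	// (elided here).
	return labels
}
```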
This releases v0.5.0 with a fix for #38.
Fix released in v0.5.0. Thanks again @m040601.
Nice. I'm not even a programmer, so I wasn't even sure this was some coding problem. The only thing I think could be improved later is the high CPU usage. PS: By the way, good thing that you provide binaries for different platforms.
Thanks for the short and actionable bug report! It made it easy to track the issue down. Ah, I thought the high CPU usage was related to the empty match, but I didn't confirm that. I'll dig into that separately. I suspect it's the render loop.
Never mind, it wasn't the render loop. I found the root cause; it had to do with logging. Fix incoming. |
The way log tailing was implemented, we were effectively spin-looping when the source had no bytes to produce but also had not yet hit EOF:

    while !eof { read(f) }

Fix this by spinning only while the source has bytes to produce (indicating that there may be more), and once it begins producing zero bytes, using the same delay we would use for EOF:

    do { n = read(f) } while (n > 0 && !eof)
    if eof || n == 0 { sleep; retry }

This resolves 100% CPU usage when idling. Fixes #38
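As a rough Go sketch of that shape (the tail function, pollInterval, and process callback here are assumptions for illustration, not the project's actual code):

```go
package main

import (
	"io"
	"time"
)

// tail drains r, sleeping instead of spin-looping whenever the source
// currently has no bytes to produce. pollInterval stands in for the same
// delay the EOF path already used.
func tail(r io.Reader, pollInterval time.Duration, process func([]byte)) error {
	buf := make([]byte, 4096)
	for {
		n, err := r.Read(buf)
		if n > 0 {
			process(buf[:n])
			continue // still producing; read again immediately
		}
		switch err {
		case nil, io.EOF:
			// Zero bytes right now (or EOF): back off with the
			// delay instead of busy-waiting at 100% CPU.
			time.Sleep(pollInterval)
		default:
			return err
		}
	}
}
```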
This should be fixed in v0.5.1. Thanks again @m040601.
Yeap. Tested and confirmed on my system too. I can't hear my fans spinning like crazy, and I can't see anything abnormal in htop anymore. Great job!
First of all, thank you for your work on this clever tool.
I've been following it and testing it since the first releases.
I've been comparing and testing it against others that provide similar "hinting" to copy to the tmux buffer.
Everything works fine.
Except when there's nothing to find, or no regex to match (CASE 1).
That is, for example, if I open a new tmux window, with a fresh empty buffer, with nothing being displayed except my prompt.
When I fire tmux-fastcopy, it simply greys out everything, and the CPU usage shoots up.
I have to abort it with C-c. Nothing gets copied to the buffer.
If there's a little more to be found (CASE 2), then it works as usual.
https://imgur.com/a/zNUKall
I have to say that I use the precompiled release binary, on Arch Linux.
I simply have this in my .tmux.conf:

    bind-key F run-shell -b ~/.tmux.d/experiments/tmux-fastcopy