fix: protect concurrent map access in port-forward #303
Conversation
Add synchronization around shared maps used for port forwarding. Introduced a read/write mutex for the recvChans map in portforward.go and for the mutexAck map in portforward_tcp.go. All map reads and writes are now wrapped with sync.RWMutex locks to prevent data races and intermittent panics under concurrent usage.

Changelog: None
Ticket: None
Signed-off-by: Quentin Dufournet <[email protected]>
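To make the description concrete, here is a minimal, self-contained sketch of the pattern the patch applies: every map write takes the write lock and every lookup takes the read lock. The `forwarder` struct and method names below are invented for illustration; they mirror the PR's `recvChans` map but are not mender-cli's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// forwarder is a hypothetical stand-in for the type that owns recvChans.
type forwarder struct {
	recvChansLock sync.RWMutex
	recvChans     map[string]chan []byte
}

// addChan mutates the map, so it takes the exclusive write lock.
func (f *forwarder) addChan(id string) chan []byte {
	c := make(chan []byte, 1)
	f.recvChansLock.Lock()
	f.recvChans[id] = c
	f.recvChansLock.Unlock()
	return c
}

// getChan only reads the map, so concurrent readers may share the RLock.
func (f *forwarder) getChan(id string) (chan []byte, bool) {
	f.recvChansLock.RLock()
	c, ok := f.recvChans[id]
	f.recvChansLock.RUnlock()
	return c, ok
}

func main() {
	f := &forwarder{recvChans: make(map[string]chan []byte)}
	var wg sync.WaitGroup
	// Without the locks, concurrent writes like these are exactly the
	// "concurrent map read and map write" panic the PR fixes.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			id := fmt.Sprintf("conn-%d", n)
			f.addChan(id)
			f.getChan(id)
		}(i)
	}
	wg.Wait()
	fmt.Println(len(f.recvChans)) // 100
}
```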
Thanks for your contribution! We don't have capacity to take care of it at the moment, so I only labeled it and I hope someone from the community can help with this.
Thank you @mcxiv very much for the contribution. I consider it a valid fix of a critical bug. It will be part of the upcoming mender-cli release.
Thank you for your interest in and continued use of Mender. Before I approve and we run all the tests, I think we should create a test case for the error scenario you just fixed. Would you consider creating one, or would you like me to write one?
And last minor nit(s): would you mind signing the commits with your key? And we have to do something about the gocyclo warning:
cmd/portforward_tcp.go:106:1: cyclomatic complexity 21 of func (*TCPPortForwarder).handleRequest is high (> 20) (gocyclo)
Thank you! And I am very sorry I was not able to comment earlier and that you waited so long.
```go
p.mutexAckLock.RLock()
if mutex, ok := p.mutexAck[connectionID]; ok {
	mutex.Unlock()
}
p.mutexAckLock.RUnlock()
```
Would you consider replacing:

```go
p.mutexAckLock.RLock()
if mutex, ok := p.mutexAck[connectionID]; ok {
	mutex.Unlock()
}
p.mutexAckLock.RUnlock()
```

with:

```go
p.mutexAckLock.RLock()
mutex, ok := p.mutexAck[connectionID]
p.mutexAckLock.RUnlock()
if ok && mutex != nil {
	mutex.Unlock()
}
```
you know, I do panic when I see mutexes in the critical section ;-)
Hi everyone,
I rely on the port-forward feature daily at work to proxy hundreds of HTTP requests to our devices. Until now, “concurrent map read and map write” panics caused the CLI to crash dozens of times per day.
Here's my typical stack trace: mender_portforward_stacktrace.txt
This patch adds sync.RWMutex around both the recvChans and mutexAck maps to eliminate data races on map accesses. During my tests, crashes have dropped from hundreds per day to almost zero under normal load. I'm not going to lie, I'm stuck now, and would appreciate any help as it surpasses my skills. But at least I think it's a good start.
Sorry for the comments and the auto-formatted newlines; I'll fix them and squash my final commit if this gets any attention.
Thanks,
Quentin