Extract goroutine management and clean up into a separate package #1362
base: main
Conversation
# Conflicts:
#	packages/orchestrator/internal/sandbox/network/pool.go
closed     bool
doneCh     chan taskResult
wg         sync.WaitGroup
cancelFunc context.CancelFunc
Not a fan of having this context on the struct. Also, the cancellation is incorrect and differs drastically from the initial implementation.
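For illustration, a minimal sketch of the alternative being suggested: derive the cancellable context inside Run and keep the CancelFunc off the struct, so cancellation stays scoped to the call. The Task interface, taskResult shape, and error handling below are assumptions drawn from the visible diff, not the PR's actual code.

```go
package supervisor

import (
	"context"
	"sync"
)

// Assumed shapes for illustration; the PR's real Task and taskResult may differ.
type Task interface {
	Run(ctx context.Context) error
}

type taskResult struct{ err error }

type Runner struct {
	tasks  []Task
	wg     sync.WaitGroup
	doneCh chan taskResult
}

// Run derives the cancellable context locally, so the CancelFunc never lives
// on the struct and cancellation stays tied to this call's lifetime.
func (s *Runner) Run(ctx context.Context) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	for _, t := range s.tasks {
		t := t
		s.wg.Add(1)
		go func() {
			defer s.wg.Done()
			if err := t.Run(ctx); err != nil {
				// Report the first failure without blocking if one was already sent.
				select {
				case s.doneCh <- taskResult{err: err}:
				default:
				}
			}
		}()
	}

	select {
	case <-ctx.Done():
		s.wg.Wait()
		return ctx.Err()
	case res := <-s.doneCh:
		cancel() // a task exited early; stop the rest before returning
		s.wg.Wait()
		return res.err
	}
}
```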
func (s *Runner) Close(ctx context.Context) error {
	// Idempotent: if Run already performed cleanup, nothing to do
	s.mu.Lock()
	if s.cleanupsRun {
This can be done with an atomic boolean.
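A rough sketch of that approach with sync/atomic (Go 1.19+); the field and helper names here are illustrative, not the PR's actual code.

```go
package supervisor

import (
	"context"
	"sync/atomic"
)

type Runner struct {
	cleanupsRun atomic.Bool // replaces the bool guarded by mu
	// ... other fields unchanged
}

// Close stays idempotent without taking the mutex: CompareAndSwap flips
// cleanupsRun exactly once, so only the first caller runs the cleanups.
func (s *Runner) Close(ctx context.Context) error {
	if !s.cleanupsRun.CompareAndSwap(false, true) {
		return nil
	}
	return s.runCleanups(ctx)
}

// runCleanups is a placeholder standing in for the existing cleanup body.
func (s *Runner) runCleanups(ctx context.Context) error {
	return nil
}
```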
mu      sync.Mutex
started bool
closed  bool
doneCh  chan taskResult
How about using setonce instead? It could then handle multiple variables on this struct.
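If setonce here means a set-once style helper, a rough equivalent using the standard library's sync.Once might look like the sketch below; the field names and error handling are assumptions, not the PR's code.

```go
package supervisor

import (
	"context"
	"errors"
	"sync"
)

type taskResult struct{ err error }

// startOnce/closeOnce replace the started/closed booleans and the mutex that
// guarded them: each lifecycle transition can happen at most once.
type Runner struct {
	startOnce sync.Once
	closeOnce sync.Once
	doneCh    chan taskResult
}

func (s *Runner) Run(ctx context.Context) error {
	started := false
	s.startOnce.Do(func() { started = true })
	if !started {
		return errors.New("supervisor: Run called more than once")
	}
	// ... start tasks and wait for ctx or doneCh as before ...
	return nil
}

func (s *Runner) Close(ctx context.Context) error {
	s.closeOnce.Do(func() {
		// ... run cleanups exactly once ...
	})
	return nil
}
```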
tasks  []Task
logger *zap.Logger

mu sync.Mutex
Do we even need the mutex?
This allows us to test and standardize how we start, monitor, cancel, and clean up our goroutines. This PR only adds it to orchestrator, but we should add this to api, client-proxy, and the rest as well.
Note
Introduces a shared supervisor for task lifecycle and refactors orchestrator to use it for starting, monitoring, and cleanup; removes legacy goroutine/closer logic and tweaks network pool Close.
- `packages/shared/pkg/supervisor`: adds `Runner` with `AddTask`, `Run`, and `Close` for task lifecycle management, plus `TaskExitedError` and tests covering cancellation and early task exit.
- `packages/orchestrator/main.go`: replaces errgroup/signal/closers/`startService` with `supervisor.Runner` tasks for all services (sandbox proxy, device pool, network pool, hyperloop server, cmux, http, grpc, etc.), adds a `cleanupLogger` helper, registers loggers/clients/batchers/observers as `Cleanup` tasks, and calls `runner.Run(ctx)` then `runner.Close(ctx)`; removes `serviceDoneError`, the `closer` type, and related logic.
- `internal/sandbox/network/pool.go`: tweaks `Close`; functional behavior unchanged.

Written by Cursor Bugbot for commit 8ecda4d. This will update automatically on new commits.
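For illustration, a hedged sketch of how a caller might register tasks and drive the lifecycle described in this summary; the import path, constructor, and `AddTask` signature are guesses based on the summary, not the actual supervisor API.

```go
package main

import (
	"context"
	"log"
	"net/http"

	// Import path assumed from the repo layout; it may differ.
	"github.com/e2b-dev/infra/packages/shared/pkg/supervisor"
)

func main() {
	ctx := context.Background()
	srv := &http.Server{Addr: ":8080"}

	// supervisor.New and the AddTask signature are assumptions based on the
	// summary (AddTask / Run / Close); the real API may differ.
	runner := supervisor.New()

	runner.AddTask("http", func(ctx context.Context) error {
		return srv.ListenAndServe()
	})
	runner.AddTask("http-shutdown", func(ctx context.Context) error {
		<-ctx.Done()
		return srv.Shutdown(context.Background())
	})

	// Run blocks until a task exits or ctx is cancelled; Close then runs the
	// registered cleanup tasks.
	if err := runner.Run(ctx); err != nil {
		log.Printf("runner exited: %v", err)
	}
	if err := runner.Close(context.Background()); err != nil {
		log.Printf("close: %v", err)
	}
}
```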