add async support to tls_fetcher/proxy and qos #524
Conversation
// allow clean exit
std::thread::sleep(Duration::from_millis(50));
}
I'm surprised `child.kill()` doesn't do this for us. What does the sleep do?
Unfortunate that `send_signal` is nightly-only: https://doc.rust-lang.org/stable/std/os/unix/process/trait.ChildExt.html#tymethod.send_signal
It's just to ensure the drops are run on the running child. If we just SIGKILL, the children won't clean up.
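For reference, a minimal sketch of this shutdown pattern, assuming a `std::process::Child` and the `libc` crate for signalling (`shutdown_child` is a hypothetical helper, not code from this PR):

```rust
use std::{process::Child, thread, time::Duration};

// Hypothetical helper: ask the child to exit cleanly before forcing it.
// SIGKILL can't be caught, so the child's Drop impls would never run;
// SIGINT lets it unwind and clean up first.
fn shutdown_child(child: &mut Child) -> std::io::Result<()> {
    // `ChildExt::send_signal` is nightly-only, so go through libc.
    unsafe {
        libc::kill(child.id() as libc::pid_t, libc::SIGINT);
    }
    // Give the child a moment to run its drops before reaping it.
    thread::sleep(Duration::from_millis(50));
    // Fall back to SIGKILL if it's still running.
    if child.try_wait()?.is_none() {
        child.kill()?;
    }
    child.wait()?;
    Ok(())
}
```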
test_only_init_phase_override,
);
SocketServer::listen(addr, processor).unwrap();
tokio::runtime::Builder::new_current_thread()
We talked about this on our call: we will spawn this as a new task, and remove the std thread and the new tokio runtime.
I think I'll do this in a follow-up PR along with the Reaper refactor.
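The suggested shape, as a self-contained sketch (assuming tokio; `serve` is a placeholder for an async version of `SocketServer::listen`, which is the follow-up work mentioned above):

```rust
use std::net::SocketAddr;

// Placeholder for the server loop; stands in for a hypothetical
// async counterpart to `SocketServer::listen`.
async fn serve(addr: SocketAddr) {
    let _ = addr; // ... accept connections and dispatch to the processor ...
}

#[tokio::main]
async fn main() {
    let addr: SocketAddr = "127.0.0.1:3000".parse().unwrap();

    // Instead of std::thread::spawn + a fresh current-thread runtime,
    // spawn the listener as a task on the runtime we're already inside.
    let handle = tokio::spawn(serve(addr));

    // ... other async work; abort (or await) the listener on shutdown ...
    handle.abort();
}
```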
What is the goal of adding this example? Shouldn't this be almost the same as the boot e2e test?
It's for local testing, so you can spin up a full local enclave that's properly booted and then feed it whatever. That makes it easier than having to write a specific test, and it's also "live", so easier to debug.
);

// do a bunch of timing-out requests at the same time; this still needs to fit within the pool size
for _ in 0..3 {
What would happen if we did more than the pool size (>5)?
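A toy model of that question, assuming the pool behaves like a semaphore over its connections (the real `AsyncPool` internals may differ): requests beyond the pool size don't fail outright, they queue until a connection frees up, so they see extra latency or hit their own timeout while waiting.

```rust
use std::{sync::Arc, time::Duration};
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    // 5 permits stand in for the 5 pooled connections.
    let pool = Arc::new(Semaphore::new(5));
    let mut tasks = Vec::new();
    for i in 0..8 {
        let pool = Arc::clone(&pool);
        tasks.push(tokio::spawn(async move {
            // Requests 5..8 park here until one of the first
            // five returns its permit.
            let _permit = pool.acquire().await.unwrap();
            tokio::time::sleep(Duration::from_millis(100)).await;
            println!("request {i} done");
        }));
    }
    for t in tasks {
        t.await.unwrap();
    }
}
```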
None,
);
// start reaper in a thread so we can terminate on ctrl+c properly
std::thread::spawn(move || {
We can put this into a tokio task, right?
Yes, once Reaper is refactored.
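Once that refactor lands, the ctrl+c handling could look roughly like this (a sketch, not the PR's code; `Reaper::execute_async` is the method this PR adds, but wiring it into `select!` is the assumed follow-up):

```rust
#[tokio::main]
async fn main() {
    // Run the reaper as a task instead of a dedicated std thread.
    let reaper = tokio::spawn(async {
        // Reaper::execute_async(...).await
    });

    // Terminate cleanly on ctrl+c (SIGINT) or when the reaper exits.
    tokio::select! {
        _ = reaper => {}
        _ = tokio::signal::ctrl_c() => {
            // graceful shutdown path
        }
    }
}
```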
First read-through! Overall no major concerns; the only "big" thing I'd like to push for is getting --pool-size out of the pivot args now rather than later. I think it'll be easier to do before this lands and gets deployed in mono. My other comments are all small stuff!
// find the u32 value of the --pool-size argument passed to the pivot, if present
fn extract_pool_size_arg(args: &[String]) -> Option<u32> {
    let i = args.iter().position(|a| a == "--pool-size")?;
    args.get(i + 1)?.parse::<u32>().ok()
}
Relates to my earlier comment about the awkwardness of this being a pivot arg... let's make this a proper manifest field and this can go away! (I think it's worth it: it's easy enough to change now, but if this ships as-is I think the change becomes more daunting.)
I'm mostly concerned about keeping this PR alive longer. If we make this change we have to retest the full stack again, and keeping both this and the mono PRs up to date is becoming really hard due to all the related changes from main.
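For concreteness, the manifest-field direction being discussed might look something like this (field name and placement are assumptions, not the actual qos manifest definition):

```rust
// Hypothetical shape: pool size as a first-class manifest field
// instead of a value smuggled through the pivot's CLI args.
pub struct PivotConfig {
    /// Arguments passed to the pivot binary.
    pub args: Vec<String>,
    /// Size of the enclave's async connection pool, if overridden.
    pub pool_size: Option<u32>,
}
```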
}

/// Expands the underlaying `AsyncPool` to given `pool_size`
(typo) underlying
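For context, this doc comment sits on a method that grows the pool; a rough sketch of the idea (struct layout and method body are assumptions, not the PR's implementation):

```rust
use std::net::SocketAddr;
use tokio::net::TcpStream;

// Illustrative stand-in for the PR's AsyncPool.
struct AsyncPool {
    addr: SocketAddr,
    streams: Vec<TcpStream>,
}

impl AsyncPool {
    /// Expands the underlying pool to the given `pool_size` by opening
    /// additional connections; an already-larger pool is left alone.
    async fn expand(&mut self, pool_size: u32) -> std::io::Result<()> {
        while self.streams.len() < pool_size as usize {
            self.streams.push(TcpStream::connect(self.addr).await?);
        }
        Ok(())
    }
}
```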
Some follow-ups left to address, but I think this PR is in a great spot already! It'll be easier to review and address the rest in separate, smaller PRs. Let's get you out of rebase hell! 🚀
Summary & Motivation (Problem vs. Solution)
Implement async qos_net::AsyncProxy. This includes:
- Reaper::execute_async [qos_core]
- Stream [qos_core]
- AsyncPool/Stream [qos_core]
- Listener [qos_core]
- RequestProcessor [qos_core]
- ProxyConnection [qos_net]
- Proxy [qos_net]
- ProxyStream [qos_net]
- HostServer [qos_host]
as well as fixing qos_next's qos_core "vm" feature dependency issue by only requiring it if qos_net is requested with it, adding new integration tests, and fixing qos_test_primitives shutdown handling on unix (SIGINT).

New dependencies:
- [email protected]
- [email protected]
- [email protected] -> was added in some places, but we already used it before

How I Tested These Changes
Locally and in preprod.
Pre merge check list