Further testing with timeouts in event based processing revealed that our current shutdown handling in the connection pool was not clear enough. Graceful shutdowns can only happen inside a multi handle and it was confusing to track in the code which situation actually applies. It seems better to split the shutdown handling off and have that code always be part of a multi handle.

Add `cshutdn.[ch]` with its own struct to maintain connections being shut down. A `cshutdn` always belongs to a multi handle and uses it for socket/timeout monitoring. The `cpool`, which can be part of a multi or a share, either passes connections to a `cshutdn` or terminates them with a one-time, best-effort attempt.

Add an `admin` easy handle to each multi and share. It is used to perform all maintenance operations where no "real" easy handle is available. This solves the problem that the multi admin handle requires some additional initialisation (e.g. the timeout list). The share needs its admin handle because it is often cleaned up when no other transfer or multi handle exists any more, yet almost every call needs a `data`.

Fix file:// handling of errors when adding a new connection to the pool.

Changes in `curl` itself:

- for parallel transfers, do not set a connection pool in the share; rely on the multi's connection pool instead. While not a requirement for the new `cshutdn` to work, this a) helps in testing to trigger graceful shutdowns and b) gives broader code coverage of libcurl via the curl tool
- on test_event with uv, clean up the multi handle before returning from parallel_event(). The uv struct is on the stack, and cleaning up the multi later would crash when it tries to register sockets. This is an "eat your own dogfood" related fix.

Closes #16508
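The test case below ("HTTP with shared connection cache") checks, in `<datacheck>`, the mutex lock/unlock lines for the SHARE and CONNECT data, presumably printed by the lock/unlock callbacks the test tool installs on its share handle. As a minimal sketch of that kind of setup, assuming hypothetical callback names and log formatting rather than the actual `lib%TESTNUMBER` source, an application sharing its connection cache looks roughly like this:

```c
#include <stdio.h>
#include <curl/curl.h>

/* Hypothetical logging callbacks: print lines in the same style as the
 * "-> Mutex lock ..." / "<- Mutex unlock ..." entries expected in
 * <datacheck>. Only the SHARE and CONNECT lock data appear in this test,
 * so everything else is collapsed to "SHARE" for brevity. */
static void test_lock(CURL *handle, curl_lock_data data,
                      curl_lock_access access, void *userptr)
{
  (void)handle; (void)access; (void)userptr;
  printf("-> Mutex lock %s\n",
         (data == CURL_LOCK_DATA_CONNECT) ? "CONNECT" : "SHARE");
}

static void test_unlock(CURL *handle, curl_lock_data data, void *userptr)
{
  (void)handle; (void)userptr;
  printf("<- Mutex unlock %s\n",
         (data == CURL_LOCK_DATA_CONNECT) ? "CONNECT" : "SHARE");
}

int main(void)
{
  CURLSH *share;
  CURL *easy;

  curl_global_init(CURL_GLOBAL_ALL);

  /* share the connection cache and install the logging callbacks */
  share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_CONNECT);
  curl_share_setopt(share, CURLSHOPT_LOCKFUNC, test_lock);
  curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, test_unlock);

  /* any easy handle attached to the share uses the shared pool */
  easy = curl_easy_init();
  curl_easy_setopt(easy, CURLOPT_URL, "http://example.invalid/");
  curl_easy_setopt(easy, CURLOPT_SHARE, share);
  curl_easy_perform(easy);

  curl_easy_cleanup(easy);
  curl_share_cleanup(share);
  curl_global_cleanup();
  return 0;
}
```

With `CURL_LOCK_DATA_CONNECT` shared, transfers performed through handles attached to the share draw connections from the shared pool, which is what the repeated CONNECT lock/unlock pairs in the expected output below reflect.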
<testcase>
<info>
<keywords>
HTTP
HTTP GET
shared connections
</keywords>
</info>

# Server-side
<reply>
<data>
HTTP/1.1 200 OK
Date: Tue, 09 Nov 2010 14:49:00 GMT
Server: test-server/fake
Content-Type: text/html
Content-Length: 29

run 1: foobar and so on fun!
</data>
<datacheck>
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
run 1: foobar and so on fun!
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
run 1: foobar and so on fun!
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
run 1: foobar and so on fun!
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock SHARE
<- Mutex unlock SHARE
</datacheck>
</reply>

# Client-side
<client>
<server>
http
</server>
<name>
HTTP with shared connection cache
</name>
<tool>
lib%TESTNUMBER
</tool>
<command>
http://%HOSTIP:%HTTPPORT/%TESTNUMBER
</command>
</client>

# Verify data after the test has been "shot"
<verify>
</verify>
</testcase>