Mirror of https://github.com/curl/curl.git (synced 2026-04-10 23:51:42 +08:00)
stop using the word 'just'
Everywhere. In documentation and code comments. It is almost never a good
word and almost always a filler that should be avoided.

Closes #20793
parent 4b583b7585
commit b4dba346cd
1 .github/scripts/badwords.txt (vendored)
@@ -82,6 +82,7 @@ file names\b:filenames
 \b([02-9]|[1-9][0-9]+) bit\b: NN-bit
 [0-9]+-bits:NN bits or NN-bit
 \bvery\b:rephrase using an alternative word
+\bjust\b:rephrase using an alternative word
 \bCurl\b=curl
 \bcURL\b=curl
 \bLibcurl\b=libcurl
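As an aside on the rule format above: each badwords line is either `regex:advice` (asking the author to rephrase) or `regex=replacement` (demanding an exact substitute). A minimal sketch of how such rules could be applied to a text, using a few of the patterns from this hunk - a hypothetical illustration, not the project's actual CI script:

```python
import re

# A few rules taken from the badwords list above, in (pattern, advice) form.
# The real curl check script may parse and apply them differently.
RULES = [
    (r"\bvery\b", "rephrase using an alternative word"),
    (r"\bjust\b", "rephrase using an alternative word"),
    (r"\bCurl\b", "curl"),
]

def check(text):
    """Return (line number, pattern, advice) for every rule that matches."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pattern, advice in RULES:
            if re.search(pattern, line):
                hits.append((lineno, pattern, advice))
    return hits
```

The `\b` word boundaries are what keep a rule like `\bjust\b` from flagging words such as "adjustment".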
@@ -10,8 +10,8 @@ Read our [Vulnerability Disclosure Policy](docs/VULN-DISCLOSURE-POLICY.md).
 
 ## Reporting a Vulnerability
 
-If you have found or just suspect a security problem somewhere in curl or
-libcurl, [report it](https://curl.se/dev/vuln-disclosure.html)!
+If you have found or suspect a security problem somewhere in curl or libcurl,
+[report it](https://curl.se/dev/vuln-disclosure.html)!
 
 We treat security issues with confidentiality until controlled and disclosed
 responsibly.
25 docs/BUGS.md
@@ -91,10 +91,10 @@ Showing us a real source code example repeating your problem is the best way
 to get our attention and it greatly increases our chances to understand your
 problem and to work on a fix (if we agree it truly is a problem).
 
-Lots of problems that appear to be libcurl problems are actually just abuses
-of the libcurl API or other malfunctions in your applications. It is advised
-that you run your problematic program using a memory debug tool like valgrind
-or similar before you post memory-related or "crashing" problems to us.
+Lots of problems that appear to be libcurl problems are instead abuses of the
+libcurl API or other malfunctions in your applications. It is advised that you
+run your problematic program using a memory debug tool like valgrind or
+similar before you post memory-related or "crashing" problems to us.
 
 ## Who fixes the problems
 
@@ -106,11 +106,11 @@ All developers that take on reported bugs do this on a voluntary basis. We do
 it out of an ambition to keep curl and libcurl excellent products and out of
 pride.
 
-Please do not assume that you can just lump over something to us and it then
+Please do not assume that you can lump over something to us and it then
 magically gets fixed after some given time. Most often we need feedback and
-help to understand what you have experienced and how to repeat a problem.
-Then we may only be able to assist YOU to debug the problem and to track down
-the proper fix.
+help to understand what you have experienced and how to repeat a problem. Then
+we may only be able to assist YOU to debug the problem and to track down the
+proper fix.
 
 We get reports from many people every month and each report can take a
 considerable amount of time to really go to the bottom with.
@@ -165,11 +165,10 @@ Even if you cannot immediately upgrade your application/system to run the
 latest curl version, you can most often at least run a test version or
 experimental build or similar, to get this confirmed or not.
 
-At times people insist that they cannot upgrade to a modern curl version, but
-instead, they "just want the bug fixed". That is fine, just do not count on us
-spending many cycles on trying to identify which single commit, if that is
-even possible, that at some point in the past fixed the problem you are now
-experiencing.
+At times people insist that they cannot upgrade to a modern curl version, they
+only "want the bug fixed". That is fine, but do not count on us spending many
+cycles on trying to identify which single commit, if that is even possible,
+that at some point in the past fixed the problem you are now experiencing.
 
 Security wise, it is almost always a bad idea to lag behind the current curl
 versions by a lot. We keep discovering and reporting security problems
@@ -56,7 +56,7 @@ Source code, the man pages, the [INTERNALS
 document](https://curl.se/dev/internals.html),
 [TODO](https://curl.se/docs/todo.html),
 [KNOWN_BUGS](https://curl.se/docs/knownbugs.html) and the [most recent
-changes](https://curl.se/dev/sourceactivity.html) in git. Just lurking on the
+changes](https://curl.se/dev/sourceactivity.html) in git. Lurking on the
 [curl-library mailing list](https://curl.se/mail/list.cgi?list=curl-library)
 gives you a lot of insights on what's going on right now. Asking there is a
 good idea too.
@@ -145,8 +145,8 @@ then come on GitHub.
 
 Your changes be reviewed and discussed and you are expected to correct flaws
 pointed out and update accordingly, or the change risks stalling and
-eventually just getting deleted without action. As a submitter of a change,
-you are the owner of that change until it has been merged.
+eventually getting deleted without action. As a submitter of a change, you are
+the owner of that change until it has been merged.
 
 Respond on the list or on GitHub about the change and answer questions and/or
 fix nits/flaws. This is important. We take lack of replies as a sign that you
@@ -169,8 +169,8 @@ ways. [See the CI document for more
 information](https://github.com/curl/curl/blob/master/docs/tests/CI.md).
 
 Sometimes the tests fail due to a dependency service temporarily being offline
-or otherwise unavailable, e.g. package downloads. In this case you can just
-try to update your pull requests to rerun the tests later as described below.
+or otherwise unavailable, e.g. package downloads. In this case you can try to
+update your pull requests to rerun the tests later as described below.
 
 You can update your pull requests by pushing new commits or force-pushing
 changes to existing commits. Force-pushing an amended commit without any
@@ -285,8 +285,9 @@ If you are a frequent contributor, you may be given push access to the git
 repository and then you are able to push your changes straight into the git
 repository instead of sending changes as pull requests or by mail as patches.
 
-Just ask if this is what you would want. You are required to have posted
-several high quality patches first, before you can be granted push access.
+Feel free to ask for this, if this is what you want. You are required to have
+posted several high quality patches first, before you can be granted push
+access.
 
 ## Useful resources
 
@@ -320,13 +321,13 @@ You must also double-check the findings carefully before reporting them to us
 to validate that the issues are indeed existing and working exactly as the AI
 says. AI-based tools frequently generate inaccurate or fabricated results.
 
-Further: it is *rarely* a good idea to just copy and paste an AI generated
-report to the project. Those generated reports typically are too wordy and
-rarely to the point (in addition to the common fabricated details). If you
-actually find a problem with an AI and you have verified it yourself to be
-true: write the report yourself and explain the problem as you have learned
-it. This makes sure the AI-generated inaccuracies and invented issues are
-filtered out early before they waste more people's time.
+Further: it is *rarely* a good idea to copy and paste an AI generated report
+to the project. Those generated reports typically are too wordy and rarely to
+the point (in addition to the common fabricated details). If you actually find
+a problem with an AI and you have verified it yourself to be true: write the
+report yourself and explain the problem as you have learned it. This makes
+sure the AI-generated inaccuracies and invented issues are filtered out early
+before they waste more people's time.
 
 As we take security reports seriously, we investigate each report with
 priority. This work is both time and energy consuming and pulls us away from
@@ -119,7 +119,7 @@ syntax:
 ~~~
 
 Quoted source code should start with `~~~c` and end with `~~~` while regular
-quotes can start with `~~~` or just be indented with 4 spaces.
+quotes can start with `~~~` or be indented with 4 spaces.
 
 Headers at top-level `#` get converted to `.SH`.
 
@@ -134,8 +134,7 @@ Write italics like:
 This is *italics*.
 
 Due to how man pages do not support backticks especially formatted, such
-occurrences in the source are instead just using italics in the generated
-output:
+occurrences in the source are instead using italics in the generated output:
 
 This `word` appears in italics.
 
@@ -27,7 +27,7 @@ in the git master branch.
 An early patch release means that we ship a new, complete and full release
 called `major.minor.patch` where the `patch` part is increased by one since
 the previous release. A curl release is a curl release. There is no small or
-big and we never release just a patch. There is only "release".
+big and we never ship stand-alone separate patches. There is only "release".
 
 ## Questions to ask
 
20 docs/ECH.md
@@ -410,9 +410,9 @@ for ECH when DoH is not used by curl - if a system stub resolver supports DoT
 or DoH, then, considering only ECH and the network threat model, it would make
 sense for curl to support ECH without curl itself using DoH. The author for
 example uses a combination of stubby+unbound as the system resolver listening
-on localhost:53, so would fit this use-case. That said, it is unclear if
-this is a niche that is worth trying to address. (The author is just as happy to
-let curl use DoH to talk to the same public recursive that stubby might use:-)
+on localhost:53, so would fit this use-case. That said, it is unclear if this
+is a niche that is worth trying to address. (The author is happy to let curl
+use DoH to talk to the same public recursive that stubby might use:-)
 
 Assuming for the moment this is a use-case we would like to support, then if
 DoH is not being used by curl, it is not clear at this time how to provide
@@ -432,14 +432,6 @@ Our current conclusion is that doing the above is likely best left until we
 have some experience with the "using DoH" approach, so we are going to punt on
 this for now.
 
-### Debugging
-
-Just a note to self as remembering this is a nuisance:
-
-```sh
-LD_LIBRARY_PATH=$HOME/code/openssl:./lib/.libs gdb ./src/.libs/curl
-```
-
 ### Localhost testing
 
 It can be useful to be able to run against a localhost OpenSSL ``s_server``
@@ -467,9 +459,9 @@ cd $HOME/code/curl/
 ### Automated use of ``retry_configs`` not supported so far...
 
 As of now we have not added support for using ``retry_config`` handling in the
-application - for a command line tool, one can just use ``dig`` (or ``kdig``)
-to get the HTTPS RR and pass the ECHConfigList from that on the command line,
-if needed, or one can access the value from command line output in verbose more
+application - for a command line tool, one can use ``dig`` (or ``kdig``) to
+get the HTTPS RR and pass the ECHConfigList from that on the command line, if
+needed, or one can access the value from command line output in verbose more
 and then reuse that in another invocation.
 
 Both our OpenSSL fork and BoringSSL/AWS-LC have APIs for both controlling GREASE
35 docs/FAQ.md
@@ -425,8 +425,8 @@ about bindings on the curl-library list too, but be prepared that people on
 that list may not know anything about bindings.
 
 In December 2025 there were around **60** different [interfaces
-available](https://curl.se/libcurl/bindings.html) for just about all the
-languages you can imagine.
+available](https://curl.se/libcurl/bindings.html) for almost any language you
+can imagine.
 
 ## What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP?
 
@@ -435,8 +435,8 @@ protocol that is built on top of HTTP. Protocols such as SOAP, WebDAV and
 XML-RPC are all such ones. You can use `-X` to set custom requests and -H to
 set custom headers (or replace internally generated ones).
 
-Using libcurl is of course just as good and you would just use the proper
-library options to do the same.
+Using libcurl of course also works and you would use the proper library
+options to do the same.
 
 ## How do I POST with a different Content-Type?
 
@@ -488,14 +488,13 @@ individuals have ever tried.
 ## Does curl support JavaScript or PAC (automated proxy config)?
 
 Many webpages do magic stuff using embedded JavaScript. curl and libcurl have
-no built-in support for that, so it will be treated just like any other
-contents.
+no built-in support for that, so it is treated like any other contents.
 
 `.pac` files are a Netscape invention and are sometimes used by organizations
-to allow them to differentiate which proxies to use. The `.pac` contents is
-just a JavaScript program that gets invoked by the browser and that returns
-the name of the proxy to connect to. Since curl does not support JavaScript,
-it cannot support .pac proxy configuration either.
+to allow them to differentiate which proxies to use. The `.pac` contents is a
+JavaScript program that gets invoked by the browser and that returns the name
+of the proxy to connect to. Since curl does not support JavaScript, it cannot
+support .pac proxy configuration either.
 
 Some workarounds usually suggested to overcome this JavaScript dependency:
 
@@ -601,7 +600,7 @@ URL syntax which for SFTP might look similar to:
 
     curl -O -u user:password sftp://example.com/~/file.txt
 
-and for SCP it is just a different protocol prefix:
+and for SCP it is a different protocol prefix:
 
     curl -O -u user:password scp://example.com/~/file.txt
 
@@ -624,7 +623,7 @@ the protocol part with a space as in `" https://example.com/"`.
 In normal circumstances, `-X` should hardly ever be used.
 
 By default you use curl without explicitly saying which request method to use
-when the URL identifies an HTTP transfer. If you just pass in a URL like `curl
+when the URL identifies an HTTP transfer. If you pass in a URL like `curl
 https://example.com` it will use GET. If you use `-d` or `-F`, curl will use
 POST, `-I` will cause a HEAD and `-T` will make it a PUT.
 
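The default-method rules in this FAQ entry (bare URL means GET, `-d`/`-F` cause POST, `-I` causes HEAD, `-T` makes a PUT) can be sketched as a small selector. This is an illustrative model of the described behavior, not curl's code, and the precedence between conflicting flags is an assumption:

```python
def http_method(flags):
    """Model the FAQ's description of curl's default request methods:
    -I causes a HEAD, -T makes it a PUT, -d or -F cause a POST, and a
    bare URL gets a GET. The precedence order here is illustrative only."""
    if "-I" in flags:
        return "HEAD"
    if "-T" in flags:
        return "PUT"
    if "-d" in flags or "-F" in flags:
        return "POST"
    return "GET"
```

For example, `http_method([])` models `curl https://example.com` and `http_method(["-d"])` models a form post.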
@@ -929,7 +928,7 @@ In either case, curl should now be looking for the correct file.
 
 Unplugging a cable is not an error situation. The TCP/IP protocol stack was
 designed to be fault tolerant, so even though there may be a physical break
-somewhere the connection should not be affected, just possibly delayed.
+somewhere the connection should not be affected, but possibly delayed.
 Eventually, the physical break will be fixed or the data will be re-routed
 around the physical problem through another path.
 
@@ -1033,7 +1032,7 @@ WriteMemoryCallback(void *ptr, size_t size, size_t nmemb, void *data)
 
 ## How do I fetch multiple files with libcurl?
 
-libcurl has excellent support for transferring multiple files. You should just
+libcurl has excellent support for transferring multiple files. You should
 repeatedly set new URLs with `curl_easy_setopt()` and then transfer it with
 `curl_easy_perform()`. The handle you get from curl_easy_init() is not only
 reusable, but you are even encouraged to reuse it if you can, as that will
@@ -1274,8 +1273,8 @@ never exposed to the outside.
 # License
 
 curl and libcurl are released under an MIT/X derivative license. The license
-is liberal and should not impose a problem for your project. This section is
-just a brief summary for the cases we get the most questions.
+is liberal and should not impose a problem for your project. This section is a
+brief summary for the cases we get the most questions.
 
 We are not lawyers and this is not legal advice. You should probably consult
 one if you want true and accurate legal insights without our prejudice. Note
@@ -1384,8 +1383,8 @@ PHP/CURL was initially written by Sterling Hughes.
 
 Yes.
 
-After a transfer, you just set new options in the handle and make another
-transfer. This will make libcurl reuse the same connection if it can.
+After a transfer, you set new options in the handle and make another transfer.
+This will make libcurl reuse the same connection if it can.
 
 ## Does PHP/CURL have dependencies?
 
@@ -21,7 +21,7 @@ what the project and the general user population wants and expects from us.
 
 ## Legal entity
 
-There is no legal entity. The curl project is just a bunch of people scattered
+There is no legal entity. The curl project is a bunch of people scattered
 around the globe with the common goal to produce source code that creates
 great products. We are not part of any umbrella organization and we are not
 located in any specific country. We are totally independent.
@@ -110,7 +110,7 @@ developers familiar with the curl project.
 The security team works best when it consists of a small set of active
 persons. We invite new members when the team seems to need it, and we also
 expect to retire security team members as they "drift off" from the project or
-just find themselves unable to perform their duties there.
+find themselves unable to perform their duties there.
 
 ## Core team
 
@@ -8,9 +8,9 @@ SPDX-License-Identifier: curl
 
 Towards the end of 1996, Daniel Stenberg was spending time writing an IRC bot
 for an Amiga related channel on EFnet. He then came up with the idea to make
-currency-exchange calculations available to Internet Relay Chat (IRC)
-users. All the necessary data were published on the Web; he just needed to
-automate their retrieval.
+currency-exchange calculations available to Internet Relay Chat (IRC) users.
+All the necessary data were published on the Web; he only needed to automate
+their retrieval.
 
 ## 1996
 
@@ -18,9 +18,9 @@ On November 11, 1996 the Brazilian developer Rafael Sagula wrote and released
 HttpGet version 0.1.
 
 Daniel extended this existing command-line open-source tool. After a few minor
-adjustments, it did just what he needed. The first release with Daniel's
-additions was 0.2, released on December 17, 1996. Daniel quickly became the
-new maintainer of the project.
+adjustments, it did what he needed. The first release with Daniel's additions
+was 0.2, released on December 17, 1996. Daniel quickly became the new
+maintainer of the project.
 
 ## 1997
 
@@ -309,7 +309,7 @@ June: support for multiplexing with HTTP/2
 August: support for HTTP/2 server push
 
 September: started "everything curl". A separate stand-alone book documenting
-curl and related info in perhaps a more tutorial style rather than just a
+curl and related info in perhaps a more tutorial style rather than a
 reference,
 
 December: Public Suffix List
@@ -98,8 +98,8 @@ Field number, what type and example data and the meaning of it:
 
 ## Cookies with curl the command line tool
 
-curl has a full cookie "engine" built in. If you just activate it, you can
-have curl receive and send cookies exactly as mandated in the specs.
+curl has a full cookie "engine" built in. If you activate it, you can have
+curl receive and send cookies exactly as mandated in the specs.
 
 Command line options:
 
@@ -29,7 +29,7 @@ HTTP/3 support in curl is considered **EXPERIMENTAL** until further notice
 when built to use *quiche*. Only the *ngtcp2* backend is not experimental.
 
 Further development and tweaking of the HTTP/3 support in curl happens in the
-master branch using pull-requests, just like ordinary changes.
+master branch using pull-requests like ordinary changes.
 
 To fix before we remove the experimental label:
 
@@ -305,9 +305,9 @@ handshake or time out.
 
 Note that all this happens in addition to IP version happy eyeballing. If the
 name resolution for the server gives more than one IP address, curl tries all
-those until one succeeds - just as with all other protocols. If those IP
-addresses contain both IPv6 and IPv4, those attempts happen, delayed, in
-parallel (the actual eyeballing).
+those until one succeeds - as with all other protocols. If those IP addresses
+contain both IPv6 and IPv4, those attempts happen, delayed, in parallel (the
+actual eyeballing).
 
 ## Known Bugs
 
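The per-address fallback described in that hunk (try every resolved address until one succeeds) can be modeled sequentially. A simplified sketch that deliberately leaves out the delayed parallel IPv6/IPv4 racing that real happy eyeballing adds; `connect` is a caller-supplied callable, not a curl API:

```python
def connect_any(addresses, connect):
    """Try each address in order and return the first successful
    connection; re-raise the last failure if every attempt fails.
    This models only the 'tries all those until one succeeds' part,
    not the parallel, delayed eyeballing between address families."""
    last_error = None
    for addr in addresses:
        try:
            return connect(addr)
        except OSError as exc:
            last_error = exc
    raise last_error or OSError("no addresses to try")
```

With both an IPv4 and an IPv6 address resolved, a failure on the first address simply moves the attempt on to the next one.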
@@ -322,8 +322,7 @@ development and experimenting.
 
 An existing local HTTP/1.1 server that hosts files. Preferably also a few huge
 ones. You can easily create huge local files like `truncate -s=8G 8GB` - they
-are huge but do not occupy that much space on disk since they are just big
-holes.
+are huge but do not occupy that much space on disk since they are big holes.
 
 In a Debian setup you can install apache2. It runs on port 80 and has a
 document root in `/var/www/html`. Download the 8GB file from apache with `curl
@@ -350,8 +349,8 @@ Get, build and install nghttp2:
     % make && make install
 
 Run the local h3 server on port 9443, make it proxy all traffic through to
-HTTP/1 on localhost port 80. For local toying, we can just use the test cert
-that exists in curl's test dir.
+HTTP/1 on localhost port 80. For local toying, we can use the test cert that
+exists in curl's test dir.
 
     % CERT=/path/to/stunnel.pem
     % $HOME/bin/nghttpx $CERT $CERT --backend=localhost,80 \
@@ -155,8 +155,8 @@ assumes that CMake generates `Makefile`:
 
 # CMake usage
 
-Just as curl can be built and installed using CMake, it can also be used from
-CMake.
+This section describes how to locate and use curl/libcurl from CMake-based
+projects.
 
 ## Using `find_package`
 
@@ -75,7 +75,7 @@ curl ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi
 ```
 
 With the IPFS protocol way of asking a file, curl still needs to know the
-gateway. curl essentially just rewrites the IPFS based URL to a gateway URL.
+gateway. curl essentially rewrites the IPFS based URL to a gateway URL.
 
 ### IPFS_GATEWAY environment variable
 
@@ -26,8 +26,8 @@ to you from untrusted sources.
 curl can do a lot of things, and you should only ask it do things you want and
 deem correct.
 
-Even just accepting just the URL part without careful vetting might make curl
-do things you do not like. Like accessing internal hosts, like connecting to
+Even accepting only the URL part without careful vetting might make curl do
+things you do not like. Like accessing internal hosts, like connecting to
 rogue servers that redirect to even weirder places, like using ports or
 protocols that play tricks on you.
 
@@ -39,8 +39,8 @@ way to read the reply, but to ask the one person the question. The one person
 consequently gets overloaded with mail.
 
 If you really want to contact an individual and perhaps pay for his or her
-services, by all means go ahead, but if it is just another curl question, take
-it to a suitable list instead.
+services, by all means go ahead, but if it is another curl question, take it
+to a suitable list instead.
 
 ### Subscription Required
 
@@ -150,8 +150,8 @@ individuals. There is no way to undo a sent email.
 
 When sending emails to a curl mailing list, do not include sensitive
 information such as usernames and passwords; use fake ones, temporary ones or
-just remove them completely from the mail. Note that this includes base64
-encoded HTTP Basic auth headers.
+remove them completely from the mail. Note that this includes base64 encoded
+HTTP Basic auth headers.
 
 This public nature of the curl mailing lists makes automatically inserted mail
 footers about mails being "private" or "only meant for the recipient" or
@@ -167,14 +167,14 @@ the lists.
 
 Many mail programs and web archivers use information within mails to keep them
 together as "threads", as collections of posts that discuss a certain subject.
-If you do not intend to reply on the same or similar subject, do not just hit
-reply on an existing mail and change the subject, create a new mail.
+If you do not intend to reply on the same or similar subject, do not hit reply
+on an existing mail and change the subject, create a new mail.
 
 ### Reply to the List
 
 When replying to a message from the list, make sure that you do "group reply"
-or "reply to all", and not just reply to the author of the single mail you
-reply to.
+or "reply to all", and not reply to the author of the single mail you reply
+to.
 
 We are actively discouraging replying to the single person by setting the
 correct field in outgoing mails back asking for replies to get sent to the
@@ -222,8 +222,8 @@ mails to your friends. We speak plain text mails.
 
 ### Quoting
 
-Quote as little as possible. Just enough to provide the context you cannot
-leave out.
+Quote as little as possible. Enough to provide the context you cannot leave
+out.
 
 ### Digest
 
@@ -96,8 +96,8 @@ or specify them with the `-u` flag like
 
 ### FTPS
 
-It is just like for FTP, but you may also want to specify and use SSL-specific
-options for certificates etc.
+It is like FTP, but you may also want to specify and use SSL-specific options
+for certificates etc.
 
 Note that using `FTPS://` as prefix is the *implicit* way as described in the
 standards while the recommended *explicit* way is done by using `FTP://` and
@@ -660,7 +660,7 @@ incoming connections.
     curl ftp.example.com
 
 If the server, for example, is behind a firewall that does not allow
-connections on ports other than 21 (or if it just does not support the `PASV`
+connections on ports other than 21 (or if it does not support the `PASV`
 command), the other way to do it is to use the `PORT` command and instruct the
 server to connect to the client on the given IP number and port (as parameters
 to the PORT command).
@@ -855,8 +855,8 @@ therefore most Unix programs do not read this file unless it is only readable
 by yourself (curl does not care though).
 
 curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
-`--netrc-optional` options). This is not restricted to just FTP, so curl can
-use it for all protocols where authentication is used.
+`--netrc-optional` options). This is not restricted to FTP, so curl can use it
+for all protocols where authentication is used.
 
 A simple `.netrc` file could look something like:
 
@@ -936,8 +936,8 @@ are persistent.
 
 As is mentioned above, you can download multiple files with one command line
 by simply adding more URLs. If you want those to get saved to a local file
-instead of just printed to stdout, you need to add one save option for each
-URL you specify. Note that this also goes for the `-O` option (but not
+instead of printed to stdout, you need to add one save option for each URL you
+specify. Note that this also goes for the `-O` option (but not
 `--remote-name-all`).
 
 For example: get two files and use `-O` for the first and a custom file
@@ -50,9 +50,9 @@ generated automatically using those files.
 
 ## Document format
 
-The easy way is to start with a recent previously published advisory and just
-blank out old texts and save it using a new name. Save the subtitles and
-general layout.
+The easy way is to start with a recent previously published advisory and blank
+out old texts and save it using a new name. Save the subtitles and general
+layout.
 
 Some details and metadata are extracted from this document so it is important
 to stick to the existing format.
14 docs/TODO.md
@@ -283,8 +283,8 @@ See [curl issue 1508](https://github.com/curl/curl/issues/1508)
 ## Provide the error body from a CONNECT response
 
 When curl receives a body response from a CONNECT request to a proxy, it
-always just reads and ignores it. It would make some users happy if curl
-instead optionally would be able to make that responsible available. Via a new
+always reads and ignores it. It would make some users happy if curl instead
+optionally would be able to make that responsible available. Via a new
 callback? Through some other means?
 
 See [curl issue 9513](https://github.com/curl/curl/issues/9513)
@@ -454,7 +454,7 @@ Currently the SMB authentication uses NTLMv1.
 
 ## Create remote directories
 
 Support for creating remote directories when uploading a file to a directory
-that does not exist on the server, just like `--ftp-create-dirs`.
+that does not exist on the server, like `--ftp-create-dirs`.
 
 # FILE
 
@@ -662,8 +662,8 @@ the new transfer to the existing one.
 The SFTP code in libcurl checks the file size *before* a transfer starts and
 then proceeds to transfer exactly that amount of data. If the remote file
 grows while the transfer is in progress libcurl does not notice and does not
-adapt. The OpenSSH SFTP command line tool does and libcurl could also just
-attempt to download more to see if there is more to get...
+adapt. The OpenSSH SFTP command line tool does and libcurl could also attempt
+to download more to see if there is more to get...
 
 [curl issue 4344](https://github.com/curl/curl/issues/4344)
 
@@ -958,8 +958,8 @@ test tools built with either OpenSSL or GnuTLS
 
 ## more protocols supported
 
-Extend the test suite to include more protocols. The telnet could just do FTP
-or http operations (for which we have test servers).
+Extend the test suite to include more protocols. The telnet could do FTP or
+http operations (for which we have test servers).
 
 ## more platforms supported
 
@ -61,9 +61,9 @@ receives. Use it like this:

## See the Timing

Many times you may wonder what exactly is taking all the time, or you just
want to know the amount of milliseconds between two points in a transfer. For
those, and other similar situations, the
Many times you may wonder what exactly is taking all the time, or you want to
know the amount of milliseconds between two points in a transfer. For those,
and other similar situations, the
[`--trace-time`](https://curl.se/docs/manpage.html#--trace-time) option is
what you need. It prepends the time to each trace output line:

@ -145,9 +145,9 @@ to use forms and cookies instead.

## Path part

The path part is just sent off to the server to request that it sends back
the associated response. The path is what is to the right side of the slash
that follows the hostname and possibly port number.
The path part is sent off to the server to request that it sends back the
associated response. The path is what is to the right side of the slash that
follows the hostname and possibly port number.

# Fetch a page

@ -182,9 +182,8 @@ actual body in the HEAD response.
## Multiple URLs in a single command line

A single curl command line may involve one or many URLs. The most common case
is probably to just use one, but you can specify any amount of URLs. Yes any.
No limits. You then get requests repeated over and over for all the given
URLs.
is probably to use one, but you can specify any amount of URLs. Yes any. No
limits. You then get requests repeated over and over for all the given URLs.

Example, send two GET requests:

@ -232,7 +231,7 @@ entered address on a map or using the info as a login-prompt verifying that
the user is allowed to see what it is about to see.

Of course there has to be some kind of program on the server end to receive
the data you send. You cannot just invent something out of the air.
the data you send. You cannot invent something out of the air.

## GET

@ -257,8 +256,7 @@ the second page you get becomes

Most search engines work this way.

To make curl do the GET form post for you, just enter the expected created
URL:
To make curl do the GET form post for you, enter the expected created URL:

    curl "https://www.example.com/when/junk.cgi?birthyear=1905&press=OK"

@ -328,8 +326,8 @@ To post to a form like this with curl, you enter a command line like:

A common way for HTML based applications to pass state information between
pages is to add hidden fields to the forms. Hidden fields are already filled
in, they are not displayed to the user and they get passed along just as all
the other fields.
in, they are not displayed to the user and they get passed along as all the
other fields.

A similar example form with one visible field, one hidden field and one
submit button could look like:

@ -498,11 +496,11 @@ JavaScript to do it.

## Cookie Basics

The way the web browsers do "client side state control" is by using
cookies. Cookies are just names with associated contents. The cookies are
sent to the client by the server. The server tells the client for what path
and hostname it wants the cookie sent back, and it also sends an expiration
date and a few more properties.
The way the web browsers do "client side state control" is by using cookies.
Cookies are names with associated contents. The cookies are sent to the client
by the server. The server tells the client for what path and hostname it wants
the cookie sent back, and it also sends an expiration date and a few more
properties.

When a client communicates with a server with a name and path as previously
specified in a received cookie, the client sends back the cookies and their

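The cookie flow described in this hunk can be sketched with a pair of HTTP
messages (hostname, path and values here are illustrative, not taken from the
document):

    HTTP/1.1 200 OK
    Set-Cookie: session=abc123; Path=/account; Domain=example.com; Expires=Wed, 01 Jul 2026 10:00:00 GMT

    GET /account/settings HTTP/1.1
    Host: example.com
    Cookie: session=abc123

The client sends the `Cookie:` header back only because the request matches
the path and hostname named in `Set-Cookie:`, and only until the expiration
date passes.
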
@ -646,9 +644,9 @@ body etc.

## Some login tricks

While not strictly just HTTP related, it still causes a lot of people
problems so here's the executive run-down of how the vast majority of all
login forms work and how to login to them using curl.
While not strictly HTTP related, it still causes a lot of people problems so
here's the executive run-down of how the vast majority of all login forms work
and how to login to them using curl.

It can also be noted that to do this properly in an automated fashion, you
most certainly need to script things and do multiple curl invokes etc.

@ -201,8 +201,8 @@ This is an incomplete list of issues that are not considered vulnerabilities.
We do not consider a small memory leak a security problem; even if the amount
of allocated memory grows by a small amount every now and then. Long-living
applications and services already need to have countermeasures and deal with
growing memory usage, be it leaks or just increased use. A small memory or
resource leak is then expected to *not* cause a security problem.
growing memory usage, be it leaks or increased use. A small memory or resource
leak is then expected to *not* cause a security problem.

Of course there can be a discussion if a leak is small or not. A large leak
can be considered a security problem due to the DOS risk. If leaked memory

@ -293,9 +293,8 @@ same directory where curl is directed to save files.
A creative, misleading or funny looking command line is not a security
problem. The curl command line tool takes options and URLs on the command line
and if an attacker can trick the user to run a specifically crafted curl
command line, all bets are off. Such an attacker can just as well have the
user run a much worse command that can do something fatal (like
`sudo rm -rf /`).
command line, all bets are off. Such an attacker can already have the user run
a much worse command that can do something fatal (like `sudo rm -rf /`).

## Terminal output and escape sequences

@ -414,9 +413,9 @@ roles:
It is likely that our [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life) occupies
one of these roles, though this plan does not depend on it.

A declaration may also contain more detailed information but as we honor embargoes
and vulnerability disclosure throughout this process, it may also just contain
brief notification that a **major incident** is occurring.
A declaration may also contain more detailed information but as we honor
embargoes and vulnerability disclosure throughout this process, it may also
contain a brief notification that a **major incident** is occurring.

## Major incident ongoing

@ -21,7 +21,7 @@ Enable the alt-svc parser. If the filename points to an existing alt-svc cache
file, that gets used. After a completed transfer, the cache is saved to the
filename again if it has been modified.

Specify a "" filename (zero length) to avoid loading/saving and make curl just
Specify a "" filename (zero length) to avoid loading/saving and make curl
handle the cache in memory.

You may want to restrict your umask to prevent other users on the same system

@ -18,4 +18,4 @@ Example:

# `--data-ascii`

This option is just an alias for --data.
This option is an alias for --data.

@ -28,9 +28,9 @@ a separator and a content specification. The \<data\> part can be passed to
curl using one of the following syntaxes:

## content
URL-encode the content and pass that on. Just be careful so that the content
does not contain any `=` or `@` symbols, as that makes the syntax match one of
the other cases below.
URL-encode the content and pass that on. Be careful so that the content does
not contain any `=` or `@` symbols, as that makes the syntax match one of the
other cases below.

## =content
URL-encode the content and pass that on. The preceding `=` symbol is not

@ -28,11 +28,11 @@ For SMTP and IMAP protocols, this composes a multipart mail message to
transmit.

This enables uploading of binary files etc. To force the 'content' part to be
a file, prefix the filename with an @ sign. To just get the content part from
a file, prefix the filename with the symbol \<. The difference between @ and
\< is then that @ makes a file get attached in the post as a file upload,
while the \< makes a text field and just gets the contents for that text field
from a file.
a file, prefix the filename with an @ sign. To get the content part from a
file, prefix the filename with the symbol \<. The difference between @ and \<
is then that @ makes a file get attached in the post as a file upload, while
the \< makes a text field and gets the contents for that text field from a
file.

Read content from stdin instead of a file by using a single "-" as filename.
This goes for both @ and \< constructs. When stdin is used, the contents is

@ -25,7 +25,7 @@ in the HSTS cache, it upgrades the transfer to use HTTPS. Each HSTS cache
entry has an individual lifetime after which the upgrade is no longer
performed.

Specify a "" filename (zero length) to avoid loading/saving and make curl just
Specify a "" filename (zero length) to avoid loading/saving and make curl
handle HSTS in memory.

You may want to restrict your umask to prevent other users on the same system

@ -29,7 +29,7 @@ include subdirectories and symbolic links.
When listing an SFTP directory, this switch forces a name-only view, one per
line. This is especially useful if the user wants to machine-parse the
contents of an SFTP directory since the normal directory view provides more
information than just filenames.
information than filenames.

When retrieving a specific email from POP3, this switch forces a LIST command
to be performed instead of RETR. This is particularly useful if the user wants

@ -40,7 +40,7 @@ this:

    curl -o aa example.com -o bb example.net

and the order of the -o options and the URLs does not matter, just that the
and the order of the -o options and the URLs does not matter, only that the
first -o is for the first URL and so on, so the above command line can also be
written as

@ -18,13 +18,13 @@ Example:
# `--quote`

Send an arbitrary command to the remote FTP or SFTP server. Quote commands are
sent BEFORE the transfer takes place (just after the initial **PWD** command
in an FTP transfer, to be exact). To make commands take place after a
sent BEFORE the transfer takes place (immediately after the initial **PWD**
command in an FTP transfer, to be exact). To make commands take place after a
successful transfer, prefix them with a dash '-'.

(FTP only) To make commands be sent after curl has changed the working
directory, just before the file transfer command(s), prefix the command with a
'+'.
directory, immediately before the file transfer command(s), prefix the command
with a '+'.

You may specify any number of commands.

@ -18,5 +18,5 @@ Example:

If there is a local file present when a download is requested, the operation
is skipped. Note that curl cannot know if the local file was previously
downloaded fine, or if it is incomplete etc, it just knows if there is a
filename present in the file system or not and it skips the transfer if it is.
downloaded fine, or if it is incomplete etc, it knows if there is a filename
present in the file system or not and it skips the transfer if it is.

@ -25,9 +25,8 @@ from stdin you write "@-".

The variables present in the output format are substituted by the value or
text that curl thinks fit, as described below. All variables are specified as
%{variable_name} and to output a normal % you just write them as %%. You can
output a newline by using \n, a carriage return with \r and a tab space with
\t.
%{variable_name} and to output a normal % you write them as %%. You can output
a newline by using \n, a carriage return with \r and a tab space with \t.

The output is by default written to standard output, but can be changed with
%{stderr} and %output{}.

@ -249,9 +248,9 @@ The time, in seconds, it took from the start until the last byte is sent
by libcurl. (Added in 8.10.0)

## `time_pretransfer`
The time, in seconds, it took from the start until the file transfer was just
about to begin. This includes all pre-transfer commands and negotiations that
are specific to the particular protocol(s) involved.
The time, in seconds, it took from the start until immediately before the file
transfer was about to begin. This includes all pre-transfer commands and
negotiations that are specific to the particular protocol(s) involved.

## `time_queue`
The time, in seconds, the transfer was queued during its run. This adds

@ -16,8 +16,7 @@ them for submission in future packages and on the website.
## Building

The `Makefile.example` is an example Makefile that could be used to build
these examples. Just edit the file according to your system and requirements
first.
these examples. Edit the file according to your system and requirements first.

Most examples should build fine using a command line like this:

@ -36,7 +36,7 @@ for my $f (@ARGV) {
  while(<F>) {
    my $l = $_;
    if($l =~ /\/* $docroot/) {
      # just ignore preciously added refs
      # ignore preciously added refs
    }
    elsif($l =~ /^( *).*curl_easy_setopt\([^,]*, *([^ ,]*) *,/) {
      my ($prefix, $anchor) = ($1, $2);

@ -164,7 +164,7 @@ int main(void)

  /* second try: retrieve page using cacerts' certificate -> succeeds to
   * load the certificate by installing a function doing the necessary
   * "modifications" to the SSL CONTEXT just before link init
   * "modifications" to the SSL CONTEXT before link init
   */
  curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_function);
  result = curl_easy_perform(curl);

@ -22,8 +22,7 @@
 *
 ***************************************************************************/
/* <DESC>
 * Performs an FTP upload and renames the file just after a successful
 * transfer.
 * Performs an FTP upload and renames the file after a successful transfer.
 * </DESC>
 */
#ifdef _MSC_VER

@ -177,7 +177,7 @@ static int update_timeout_cb(CURLM *multi, long timeout_ms, void *userp)
          timeout_ms, timeout.tv_sec, timeout.tv_usec);

  /*
   * if timeout_ms is -1, just delete the timer
   * if timeout_ms is -1, delete the timer
   *
   * For other values of timeout_ms, this should set or *update* the timer to
   * the new value

@ -52,7 +52,7 @@ int main(void)
  /* example.com is redirected, so we tell libcurl to follow redirection */
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

  /* this example just ignores the content */
  /* this example ignores the content */
  curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);

  /* Perform the request, result gets the return code */

@ -153,7 +153,7 @@ static int multi_timer_cb(CURLM *multi, long timeout_ms, struct GlobalInfo *g)
  fprintf(MSG_OUT, "multi_timer_cb: Setting timeout to %ld ms\n", timeout_ms);

  /*
   * if timeout_ms is -1, just delete the timer
   * if timeout_ms is -1, delete the timer
   *
   * For all other values of timeout_ms, this should set or *update* the timer
   * to the new value

@ -43,8 +43,7 @@ int main(void)
  curl = curl_easy_init();
  if(curl) {
    /* First set the URL that is about to receive our POST. This URL can
       just as well be an https:// URL if that is what should receive the
       data. */
       be an https:// URL if that is what should receive the data. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://postit.example.com/moo.cgi");
    /* Now specify the POST data */
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel&project=curl");

@ -89,9 +89,8 @@ int main(int argc, const char **argv)
  file = argv[1];
  url = argv[2];

  /* get a FILE * of the same file, could also be made with
     fdopen() from the previous descriptor, but hey this is just
     an example! */
  /* get a FILE * of the same file, could also be made with fdopen() from the
     previous descriptor, but hey this is an example! */
  hd_src = fopen(file, "rb");
  if(!hd_src)
    return 2;

@ -105,8 +105,8 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/Sent");

  /* In this case, we are using a callback function to specify the data. You
   * could just use the CURLOPT_READDATA option to specify a FILE pointer to
   * read from. */
   * could use the CURLOPT_READDATA option to specify a FILE pointer to read
   * from. */
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

  /* Set the CREATE command specifying the new folder name */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

  /* Set the DELETE command specifying the existing folder */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

  /* Set the EXAMINE command specifying the mailbox folder */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

  /* Set the LSUB command. Note the syntax is similar to that of a LIST

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com");

  /* Set the NOOP command */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com");

  /* Set the NOOP command */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com");

  /* Set the STAT command */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com");

  /* Set the TOP command for message 1 to only include the headers */

@ -49,7 +49,7 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_USERNAME, "user");
  curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");

  /* This is just the server URL */
  /* This is the server URL */
  curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com");

  /* Set the UIDL command */

@ -46,7 +46,7 @@

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int j = 0;
static gint num_urls = 9; /* Just make sure this is less than urls[] */
static gint num_urls = 9; /* make sure this is less than urls[] */
static const char * const urls[] = {
  "90022",
  "90023",

@ -129,8 +129,8 @@ int main(void)
  recipients = curl_slist_append(recipients, TO_ADDR);
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

  /* We are using a callback function to specify the payload (the headers and
   * body of the message). You could just use the CURLOPT_READDATA option to
  /* We are using a callback function to specify the payload (the headers
   * and body of the message). You can use the CURLOPT_READDATA option to
   * specify a FILE pointer to read from. */
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);

@ -117,8 +117,8 @@ int main(void)
  recipients = curl_slist_append(recipients, CC_ADDR);
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

  /* We are using a callback function to specify the payload (the headers and
   * body of the message). You could just use the CURLOPT_READDATA option to
  /* We are using a callback function to specify the payload (the headers
   * and body of the message). You can use the CURLOPT_READDATA option to
   * specify a FILE pointer to read from. */
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);

@ -116,8 +116,8 @@ int main(void)
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

  /* We are using a callback function to specify the payload (the headers
   * and body of the message). You could just use the CURLOPT_READDATA
   * option to specify a FILE pointer to read from. */
   * and body of the message). You can use the CURLOPT_READDATA option to
   * specify a FILE pointer to read from. */
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);
  curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

@ -139,8 +139,8 @@ int main(void)
  recipients = curl_slist_append(recipients, CC_MAIL);
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

  /* We are using a callback function to specify the payload (the headers and
   * body of the message). You could just use the CURLOPT_READDATA option to
  /* We are using a callback function to specify the payload (the headers
   * and body of the message). You can use the CURLOPT_READDATA option to
   * specify a FILE pointer to read from. */
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);

@ -141,8 +141,8 @@ int main(void)
  recipients = curl_slist_append(recipients, CC_MAIL);
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients);

  /* We are using a callback function to specify the payload (the headers and
   * body of the message). You could just use the CURLOPT_READDATA option to
  /* We are using a callback function to specify the payload (the headers
   * and body of the message). You can use the CURLOPT_READDATA option to
   * specify a FILE pointer to read from. */
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx);

@ -172,8 +172,8 @@ int main(void)
    printf("*** transfer failed ***\n");

  /* second try: retrieve page using user certificate and key -> succeeds to
   * load the certificate and key by installing a function doing
   * the necessary "modifications" to the SSL CONTEXT just before link init
   * load the certificate and key by installing a function doing the
   * necessary "modifications" to the SSL CONTEXT before link init
   */
  curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_function);
  result = curl_easy_perform(curl);

@ -172,8 +172,8 @@ This ignores the warning for overly long lines until it is re-enabled with:
If the enabling is not performed before the end of the file, it is enabled
again automatically for the next file.

You can also opt to ignore just N violations so that if you have a single long
line you just cannot shorten and is agreed to be fine anyway:
You can also opt to ignore N violations so that if you have a single long line
you cannot shorten and is agreed to be fine anyway:

    /* !checksrc! disable LONGLINE 1 */

@ -100,7 +100,7 @@ typedef enum {
If a writer for phase `PROTOCOL` is added to the chain, it is always added
*after* any `RAW` or `TRANSFER_DECODE` and *before* any `CONTENT_DECODE` and
`CLIENT` phase writer. If there is already a writer for the same phase
present, the new writer is inserted just before that one.
present, the new writer is inserted before that one.

All transfers have a chain of 3 writers by default. A specific protocol
handler may alter that by adding additional writers. The 3 standard writers

@ -19,9 +19,9 @@ Our C code has a few style rules. Most of them are verified and upheld by the
by the build system when built after `./configure --enable-debug` has been
used.

It is normally not a problem for anyone to follow the guidelines, as you just
need to copy the style already used in the source code and there are no
particularly unusual rules in our set of rules.
It is normally not a problem for anyone to follow the guidelines, simply copy
the style already used in the source code and there are no particularly
unusual rules in our set of rules.

We also work hard on writing code that are warning-free on all the major
platforms and in general on as many platforms as possible. Code that causes

@ -39,7 +39,7 @@ understand it when debugging.

Try using a non-confusing naming scheme for your new functions and variable
names. It does not necessarily have to mean that you should use the same as in
other places of the code, just that the names should be logical,
other places of the code, only that the names should be logical,
understandable and be named according to what they are used for. File-local
functions should be made static. We like lower case names.

@ -41,8 +41,7 @@ Curl_write(data, buffer)

While connection filters all do different things, they look the same from the
"outside". The code in `data` and `conn` does not really know **which**
filters are installed. `conn` just writes into the first filter, whatever that
is.
filters are installed. `conn` writes into the first filter, whatever that is.

Same is true for filters. Each filter has a pointer to the `next` filter. When
SSL has encrypted the data, it does not write to a socket, it writes to the

@ -135,7 +134,7 @@ do *not* use an http proxy, or socks, or https is lower.

As to transfer efficiency, writing and reading through a filter comes at near
zero cost *if the filter does not transform the data*. An http proxy or socks
filter, once it is connected, just passes the calls through. Those filters
filter, once it is connected, passes the calls through. Those filters
implementations look like this:

```c

@ -277,7 +276,7 @@ connect (in time), it is torn down and another one is created for the next
address. This keeps the `TCP` filter simple.

The `HAPPY-EYEBALLS` on the other hand stays focused on its side of the
problem. We can use it also to make other type of connection by just giving it
problem. We can use it also to make other type of connection by giving it
another filter type to try to have happy eyeballing for QUIC:

```

@ -20,8 +20,8 @@ of `llist.c`). Use the functions.
initialized with a call to `Curl_llist_init()` before it can be used

To clean up a list, call `Curl_llist_destroy()`. Since the linked lists
themselves do not allocate memory, it can also be fine to just *not* clean up
the list.
themselves do not allocate memory, it can also be fine to *not* clean up the
list.

## Add a node

@ -117,10 +117,10 @@ those).

### And Come Again

While transfer and connection identifier are practically unique in a
libcurl application, sockets are not. Operating systems are keen on reusing
their resources, and the next socket may get the same identifier as
one just having been closed with high likelihood.
While transfer and connection identifiers are practically unique in a libcurl
application, sockets are not. Operating systems are keen on reusing their
resources, and the next socket may get the same identifier as a recently
closed one with high likelihood.

This means that multi event handling needs to be informed *before* a close,
clean up all its tracking and be ready to see that same socket identifier

@ -57,9 +57,9 @@ There should be a documented URL format. If there is an RFC for it there is no
question about it but the syntax does not have to be a published RFC. It could
be enough if it is already in use by other implementations.

If you make up the syntax just in order to be able to propose it to curl, then
you are in a bad place. URLs are designed and defined for interoperability.
There should at least be a good chance that other clients and servers can be
If you make up the syntax in order to be able to propose it to curl, then you
are in a bad place. URLs are designed and defined for interoperability. There
should at least be a good chance that other clients and servers can be
implemented supporting the same URL syntax and work the same or similar way.

URLs work on registered 'schemes'. There is a register of [all officially

@ -91,8 +91,8 @@ to curl and immediately once the code had been merged, the originator vanished
from the face of the earth. That is fine, but we need to take the necessary
precautions so when it happens we are still fine.

Our test infrastructure is powerful enough to test just about every possible
protocol - but it might require a bit of an effort to make it happen.
Our test infrastructure is powerful enough to test almost every protocol - but
it might require a bit of an effort to make it happen.

## Documentation

@ -43,7 +43,7 @@ curl> python3 tests/http/scorecard.py -h
|
||||
|
||||
Apart from `-d/--downloads` there is `-u/--uploads` and `-r/--requests`. These
|
||||
are run with a variation of resource sizes and parallelism by default. You can
|
||||
specify these in some way if you are just interested in a particular case.
|
||||
specify these in some way if you are interested in a particular case.
|
||||
|
||||
For example, to run downloads of a 1 MB resource only, 100 times with at max 6
|
||||
parallel transfers, use:
|
||||
|
||||
@ -161,7 +161,7 @@ string.
|
||||
int curlx_str_number(char **linep, curl_size_t *nump, size_t max);
|
||||
~~~
|
||||
|
||||
Get an unsigned decimal number not larger than `max`. Leading zeroes are just
|
||||
Get an unsigned decimal number not larger than `max`. Leading zeroes are
|
||||
swallowed. Return non-zero on error. Returns error if there was not a single
|
||||
digit.
|
||||
|
||||
@ -181,8 +181,8 @@ int curlx_str_hex(char **linep, curl_size_t *nump, size_t max);
|
||||
~~~
|
||||
|
||||
Get an unsigned hexadecimal number not larger than `max`. Leading zeroes are
|
||||
just swallowed. Return non-zero on error. Returns error if there was not a
|
||||
single digit. Does *not* handled `0x` prefix.
|
||||
swallowed. Return non-zero on error. Returns error if there was not a single
|
||||
digit. Does *not* handled `0x` prefix.
|
||||
|
||||
## `curlx_str_octal`
|
||||
|
||||
@ -190,7 +190,7 @@ single digit. Does *not* handled `0x` prefix.
|
||||
int curlx_str_octal(char **linep, curl_size_t *nump, size_t max);
|
||||
~~~
|
||||
|
||||
Get an unsigned octal number not larger than `max`. Leading zeroes are just
|
||||
Get an unsigned octal number not larger than `max`. Leading zeroes are
|
||||
swallowed. Return non-zero on error. Returns error if there was not a single
|
||||
digit.
|
||||
|
||||
|
||||
@ -52,8 +52,8 @@ Examples:
same as the previous, except it is configured to use TLSv1.2 as
min and max versions.

Different configurations produce different keys which is just what
curl needs when handling SSL session tickets.
Different configurations produce different keys which is what curl needs when
handling SSL session tickets.

One important thing: peer keys do not contain confidential information. If you
configure a client certificate or SRP authentication with username/password,

@ -121,8 +121,8 @@ concurrent connections do not reuse the same ticket.
#### Privacy and Security

As mentioned above, ssl peer keys are not intended for storage in a file
system. They clearly show which hosts the user talked to. This maybe "just"
privacy relevant, but has security implications as an attacker might find
system. They clearly show which hosts the user talked to. This is not only
privacy relevant, but also has security implications as an attacker might find
worthy targets among your peer keys.

Also, we do not recommend to persist TLSv1.2 tickets.

@ -138,11 +138,11 @@ The salt is generated randomly for each peer key on export. The SHA256 makes
sure that the peer key cannot be reversed and that a slightly different key
still produces a different result.

This means an attacker cannot just "grep" a session file for a particular
entry, e.g. if they want to know if you accessed a specific host. They *can*
however compute the SHA256 hashes for all salts in the file and find a
specific entry. They *cannot* find a hostname they do not know. They would
have to brute force by guessing.
This means an attacker cannot "grep" a session file for a particular entry,
e.g. if they want to know if you accessed a specific host. They *can* however
compute the SHA256 hashes for all salts in the file and find a specific entry.
They *cannot* find a hostname they do not know. They would have to brute force
by guessing.

#### Import


@ -15,7 +15,7 @@ sizes/defines and more.
## Upgrades

A libcurl upgrade does not break the ABI or change established and documented
behavior. Your application can remain using libcurl just as before, only with
behavior. Your application can remain using libcurl like before, only with
fewer bugs and possibly with added new features.

## Version Numbers

@ -52,7 +52,7 @@ to the function is encoded correctly.
# URLs

URLs are by definition *URL encoded*. To create a proper URL from a set of
components that may not be URL encoded already, you cannot just URL encode the
components that may not be URL encoded already, you cannot URL encode the
entire URL string with curl_easy_escape(3), because it then also converts
colons, slashes and other symbols that you probably want untouched.


@ -196,16 +196,15 @@ In microseconds. (Added in 8.10.0) See CURLINFO_POSTTRANSFER_TIME_T(3)

## CURLINFO_PRETRANSFER_TIME

The time it took from the start until the file transfer is just about to
begin. This includes all pre-transfer commands and negotiations that are
specific to the particular protocol(s) involved. See
CURLINFO_PRETRANSFER_TIME(3)
The time it took from the start until the file transfer is about to begin.
This includes all pre-transfer commands and negotiations that are specific to
the particular protocol(s) involved. See CURLINFO_PRETRANSFER_TIME(3)

## CURLINFO_PRETRANSFER_TIME_T

The time it took from the start until the file transfer is just about to
begin. This includes all pre-transfer commands and negotiations that are
specific to the particular protocol(s) involved. In microseconds. See
The time it took from the start until the file transfer is about to begin.
This includes all pre-transfer commands and negotiations that are specific to
the particular protocol(s) involved. In microseconds. See
CURLINFO_PRETRANSFER_TIME_T(3)

## CURLINFO_PRIMARY_IP

@ -30,7 +30,7 @@ void curl_easy_reset(CURL *handle);

Re-initializes all options previously set on a specified curl handle to the
default values. This puts back the handle to the same state as it was in when
it was just created with curl_easy_init(3).
it was created with curl_easy_init(3).

It does not change the following information kept in the handle: live
connections, the Session ID cache, the DNS cache, the cookies, the shares or

@ -708,7 +708,7 @@ How to act on redirects after POST. See CURLOPT_POSTREDIR(3)

## CURLOPT_PREQUOTE

Commands to run just before transfer. See CURLOPT_PREQUOTE(3)
Commands to run immediately before transfer. See CURLOPT_PREQUOTE(3)

## CURLOPT_PREREQDATA


@ -83,8 +83,7 @@ a cryptographic hash of the salt and **session_key**. The salt is generated
for every session individually. Storing **shmac** is recommended when
placing session tickets in a file, for example.

A third party may brute-force known hostnames, but cannot just "grep" for
them.
A third party may brute-force known hostnames, but cannot "grep" for them.

## Session Data


@ -37,11 +37,11 @@ curl_version_info(3) has the CURL_VERSION_THREADSAFE feature bit set
(most platforms).

If this is not thread-safe, you must not call this function when any other
thread in the program (i.e. a thread sharing the same memory) is running.
This does not just mean no other thread that is using libcurl. Because
curl_global_cleanup(3) calls functions of other libraries that are
similarly thread-unsafe, it could conflict with any other thread that uses
these other libraries.
thread in the program (i.e. a thread sharing the same memory) is running. This
does not only mean other threads that use libcurl. Because
curl_global_cleanup(3) calls functions of other libraries that are similarly
thread-unsafe, it could conflict with any other thread that uses these other
libraries.

See the description in libcurl(3) of global environment requirements for
details of how to use this function.

@ -50,10 +50,10 @@ the `threadsafe` feature set (added in 7.84.0).

If this is not thread-safe (the bit mentioned above is not set), you must not
call this function when any other thread in the program (i.e. a thread sharing
the same memory) is running. This does not just mean no other thread that is
using libcurl. Because curl_global_init(3) calls functions of other libraries
that are similarly thread-unsafe, it could conflict with any other thread that
uses these other libraries.
the same memory) is running. This does not only mean other threads that use
libcurl. Because curl_global_init(3) calls functions of other libraries that
are similarly thread-unsafe, it could conflict with any other thread that uses
these other libraries.

If you are initializing libcurl from a Windows DLL you should not initialize
it from *DllMain* or a static initializer because Windows holds the loader

@ -62,7 +62,7 @@ curl_version_info(3) has the CURL_VERSION_THREADSAFE feature bit set

If this is not thread-safe, you must not call this function when any other
thread in the program (i.e. a thread sharing the same memory) is running.
This does not just mean no other thread that is using libcurl.
This does not only mean no other thread that is using libcurl.

# Names

@ -72,7 +72,7 @@ Schannel, wolfSSL
The name "OpenSSL" is used for all versions of OpenSSL and its associated
forks/flavors in this function. OpenSSL, BoringSSL, LibreSSL, quictls and
AmiSSL are all supported by libcurl, but in the eyes of curl_global_sslset(3)
they are all just "OpenSSL". They all mostly provide the same API.
they are all called "OpenSSL". They all mostly provide the same API.
curl_version_info(3) can return more specific info about the exact OpenSSL
flavor and version number in use.


@ -41,7 +41,7 @@ the CURL_VERSION_THREADSAFE feature bit set (most platforms).

If this is not thread-safe, you must not call this function when any other
thread in the program (i.e. a thread sharing the same memory) is running. This
does not just mean no other thread that is using libcurl. Because
does not only mean no other thread that is using libcurl. Because
curl_global_init(3) may call functions of other libraries that are similarly
thread-unsafe, it could conflict with any other thread that uses these other
libraries.

@ -159,14 +159,14 @@ An optional precision in the form of a period ('.') followed by an optional
decimal digit string. Instead of a decimal digit string one may write "*" or
"*m$" (for some decimal integer m) to specify that the precision is given in
the next argument, or in the *m-th* argument, respectively, which must be of
type int. If the precision is given as just '.', the precision is taken to be
zero. A negative precision is taken as if the precision were omitted. This
gives the minimum number of digits to appear for **d**, **i**, **o**,
**u**, **x**, and **X** conversions, the number of digits to appear
after the radix character for **a**, **A**, **e**, **E**, **f**, and
**F** conversions, the maximum number of significant digits for **g** and
**G** conversions, or the maximum number of characters to be printed from a
string for **s** and **S** conversions.
type int. If the precision is given as a single '.', the precision is taken to
be zero. A negative precision is taken as if the precision were omitted. This
gives the minimum number of digits to appear for **d**, **i**, **o**, **u**,
**x**, and **X** conversions, the number of digits to appear after the radix
character for **a**, **A**, **e**, **E**, **f**, and **F** conversions, the
maximum number of significant digits for **g** and **G** conversions, or the
maximum number of characters to be printed from a string for **s** and **S**
conversions.

# Length modifier


@ -41,11 +41,6 @@ libcurl only keeps one single pointer associated with a socket, so calling
this function several times for the same socket makes the last set pointer get
used.

The idea here being that this association (socket to private pointer) is
something that just about every application that uses this API needs and then
libcurl can just as well do it since it already has the necessary
functionality.

It is acceptable to call this function from your multi callback functions.

# %PROTOCOLS%

@ -27,10 +27,10 @@ CURLMsg *curl_multi_info_read(CURLM *multi_handle, int *msgs_in_queue);

# DESCRIPTION

Ask the multi handle if there are any messages from the individual
transfers. Messages may include information such as an error code from the
transfer or just the fact that a transfer is completed. More details on these
should be written down as well.
Ask the multi handle if there are any messages from the individual transfers.
Messages may include information such as an error code from the transfer or
the fact that a transfer is completed. More details on these should be written
down as well.

Repeated calls to this function returns a new struct each time, until a NULL
is returned as a signal that there is no more to get at this point. The

@ -63,7 +63,7 @@ struct CURLMsg {
~~~
When **msg** is *CURLMSG_DONE*, the message identifies a transfer that
is done, and then **result** contains the return code for the easy handle
that just completed.
that completed.

At this point, there are no other **msg** types defined.


@ -40,8 +40,8 @@ or a timeout has elapsed, the application should call this function to
read/write whatever there is to read or write right now etc.
curl_multi_perform(3) returns as soon as the reads/writes are done. This
function does not require that there actually is any data available for
reading or that data can be written, it can be called just in case. It stores
the number of handles that still transfer data in the second argument's
reading or that data can be written, it can be called as a precaution. It
stores the number of handles that still transfer data in the second argument's
integer-pointer.

If the amount of *running_handles* is changed from the previous call (or

@ -37,7 +37,7 @@ Removing an easy handle while being in use is perfectly legal and effectively
halts the transfer in progress involving that easy handle. All other easy
handles and transfers remain unaffected.

It is fine to remove a handle at any time during a transfer, just not from
It is fine to remove a handle at any time during a transfer, but not from
within any libcurl callback function.

Removing an easy handle from the multi handle before the corresponding

@ -38,8 +38,8 @@ still running easy handles within the multi handle. When this number reaches
zero, all transfers are complete/done.

Force libcurl to (re-)check all its internal sockets and transfers instead of
just a single one by calling curl_multi_socket_all(3). Note that there should
not be any reason to use this function.
a single one by calling curl_multi_socket_all(3). Note that there should not
be any reason to use this function.

# %PROTOCOLS%


@ -45,9 +45,9 @@ An application that uses the *multi_socket* API should not use this function.
It should instead use the CURLMOPT_TIMERFUNCTION(3) option for proper and
desired behavior.

Note: if libcurl returns a -1 timeout here, it just means that libcurl
currently has no stored timeout value. You must not wait too long (more than a
few seconds perhaps) before you call curl_multi_perform(3) again.
Note: if libcurl returns a -1 timeout here, it means that libcurl currently
has no stored timeout value. You must not wait too long (more than a few
seconds perhaps) before you call curl_multi_perform(3) again.

# %PROTOCOLS%


@ -48,7 +48,7 @@ If the *fd_count* argument is not a null pointer, it points to a variable
that on return specifies the number of descriptors used by the multi_handle to
be checked for being ready to read or write.

The client code can pass *size* equal to zero just to get the number of the
The client code can pass *size* equal to zero to get the number of the
descriptors and allocate appropriate storage for them to be used in a
subsequent function call. In this case, *fd_count* receives a number greater
than or equal to the number of descriptors.

@ -187,8 +187,8 @@ If the hostname is a numeric IPv6 address, this field might also be set.

## CURLUPART_PORT

A port cannot be URL decoded on get. This number is returned in a string just
like all other parts. That string is guaranteed to hold a valid port number in
A port cannot be URL decoded on get. This number is returned in a string like
all other parts. That string is guaranteed to hold a valid port number in
ASCII using base 10.

## CURLUPART_PATH

@ -116,7 +116,7 @@ new the libcurl you are using is. You are however guaranteed to get a struct
that you have a matching struct for in the header, as you tell libcurl your
"age" with the input argument.

*version* is just an ASCII string for the libcurl version.
*version* is an ASCII string for the libcurl version.

*version_num* is a 24-bit number created like this: \<8 bits major number\> |
\<8 bits minor number\> | \<8 bits patch number\>. Version 7.9.8 is therefore

@ -29,11 +29,10 @@ Why they occur and possibly what you can do to fix the problem are also included

# CURLcode

Almost all "easy" interface functions return a CURLcode error code. No matter
what, using the curl_easy_setopt(3) option CURLOPT_ERRORBUFFER(3)
is a good idea as it gives you a human readable error string that may offer
more details about the cause of the error than just the error code.
curl_easy_strerror(3) can be called to get an error string from a given
CURLcode number.
what, using the curl_easy_setopt(3) option CURLOPT_ERRORBUFFER(3) is a good
idea as it gives you a human readable error string that may offer more details
about the cause of the error than the error code alone. curl_easy_strerror(3)
can be called to get an error string from a given CURLcode number.

CURLcode is one of the following:

@ -45,8 +44,7 @@ All fine. Proceed as usual.

The URL you passed to libcurl used a protocol that this libcurl does not
support. The support might be a compile-time option that you did not use, it
can be a misspelled protocol string or just a protocol libcurl has no code
for.
can be a misspelled protocol string or a protocol libcurl has no code for.

## CURLE_FAILED_INIT (2)


@ -130,13 +130,12 @@ using large numbers of simultaneous connections.
curl_multi_socket_action(3) is then used instead of
curl_multi_perform(3).

When using this API, you add easy handles to the multi handle just as with the
When using this API, you add easy handles to the multi handle like with the
normal multi interface. Then you also set two callbacks with the
CURLMOPT_SOCKETFUNCTION(3) and CURLMOPT_TIMERFUNCTION(3) options
to curl_multi_setopt(3). They are two callback functions that libcurl
calls with information about what sockets to wait for, and for what activity,
and what the current timeout time is - if that expires libcurl should be
notified.
CURLMOPT_SOCKETFUNCTION(3) and CURLMOPT_TIMERFUNCTION(3) options to
curl_multi_setopt(3). They are two callback functions that libcurl calls with
information about what sockets to wait for, and for what activity, and what
the current timeout time is - if that expires libcurl should be notified.

The multi_socket API is designed to inform your application about which
sockets libcurl is currently using and for what activities (read and/or write)

@ -64,8 +64,8 @@ plain text anywhere.

Many of the protocols libcurl supports send name and password unencrypted as
clear text (HTTP Basic authentication, FTP, TELNET etc). It is easy for anyone
on your network or a network nearby yours to just fire up a network analyzer
tool and eavesdrop on your passwords. Do not let the fact that HTTP Basic uses
on your network or a network nearby yours to fire up a network analyzer tool
and eavesdrop on your passwords. Do not let the fact that HTTP Basic uses
base64 encoded passwords fool you. They may not look readable at a first
glance, but they are easily "deciphered" by anyone within seconds.

@ -118,11 +118,11 @@ transfers require a new connection with validation performed again.

# Redirects

The CURLOPT_FOLLOWLOCATION(3) option automatically follows HTTP
redirects sent by a remote server. These redirects can refer to any kind of
URL, not just HTTP. libcurl restricts the protocols allowed to be used in
redirects for security reasons: only HTTP, HTTPS, FTP and FTPS are
enabled by default. Applications may opt to restrict that set further.
The CURLOPT_FOLLOWLOCATION(3) option automatically follows HTTP redirects sent
by a remote server. These redirects can refer to any kind of URL, not only
HTTP. libcurl restricts the protocols allowed to be used in redirects for
security reasons: only HTTP, HTTPS, FTP and FTPS are enabled by default.
Applications may opt to restrict that set further.

A redirect to a file: URL would cause the libcurl to read (or write) arbitrary
files from the local file system. If the application returns the data back to

@ -131,8 +131,8 @@ leverage this to read otherwise forbidden data (e.g.
**file://localhost/etc/passwd**).

If authentication credentials are stored in the ~/.netrc file, or Kerberos is
in use, any other URL type (not just file:) that requires authentication is
also at risk. A redirect such as **ftp://some-internal-server/private-file** would
in use, any other URL type (except file:) that requires authentication is also
at risk. A redirect such as **ftp://some-internal-server/private-file** would
then return data even when the server is password protected.

In the same way, if an unencrypted SSH private key has been configured for the

@ -178,7 +178,7 @@ of a server behind a firewall, such as 127.0.0.1 or 10.1.2.3. Applications can
mitigate against this by setting a CURLOPT_OPENSOCKETFUNCTION(3) or
CURLOPT_PREREQFUNCTION(3) and checking the address before a connection.

All the malicious scenarios regarding redirected URLs apply just as well to
All the malicious scenarios regarding redirected URLs apply equally to
non-redirected URLs, if the user is allowed to specify an arbitrary URL that
could point to a private resource. For example, a web app providing a
translation service might happily translate **file://localhost/etc/passwd**

@ -211,15 +211,15 @@ or a mix of decimal, octal or hexadecimal encoding.

# IPv6 Addresses

libcurl handles IPv6 addresses transparently and just as easily as IPv4
addresses. That means that a sanitizing function that filters out addresses
like 127.0.0.1 is not sufficient - the equivalent IPv6 addresses **::1**,
**::**, **0:00::0:1**, **::127.0.0.1** and **::ffff:7f00:1** supplied
somehow by an attacker would all bypass a naive filter and could allow access
to undesired local resources. IPv6 also has special address blocks like
link-local and site-local that generally should not be accessed by a
server-side libcurl-using application. A poorly configured firewall installed
in a data center, organization or server may also be configured to limit IPv4
libcurl handles IPv6 addresses transparently and as easily as IPv4 addresses.
That means that a sanitizing function that filters out addresses like
127.0.0.1 is not sufficient - the equivalent IPv6 addresses **::1**, **::**,
**0:00::0:1**, **::127.0.0.1** and **::ffff:7f00:1** supplied somehow by an
attacker would all bypass a naive filter and could allow access to undesired
local resources. IPv6 also has special address blocks like link-local and
site-local that generally should not be accessed by a server-side
libcurl-using application. A poorly configured firewall installed in a data
center, organization or server may also be configured to limit IPv4
connections but leave IPv6 connections wide open. In some cases, setting
CURLOPT_IPRESOLVE(3) to CURL_IPRESOLVE_V4 can be used to limit resolved
addresses to IPv4 only and bypass these issues.

@ -294,7 +294,7 @@ system.
|
||||
|
||||
The conclusion we have come to is that this is a weakness or feature in the
|
||||
Windows operating system itself, that we as an application cannot safely
|
||||
protect users against. It would just be a whack-a-mole race we do not want to
|
||||
protect users against. It would make a whack-a-mole race we do not want to
|
||||
participate in. There are too many ways to do it and there is no knob we can
|
||||
use to turn off the practice.
|
||||
|
||||
@ -333,8 +333,8 @@ libcurl programs can use CURLOPT_PROTOCOLS_STR(3) to limit what URL schemes it a
|
||||
|
||||
## consider not allowing the user to set the full URL
|
||||
|
||||
Maybe just let the user provide data for parts of it? Or maybe filter input to
|
||||
only allow specific choices? Remember that the naive approach of appending a
|
||||
Maybe let the user provide data for parts of it? Or maybe filter input to only
|
||||
allow specific choices? Remember that the naive approach of appending a
|
||||
user-specified string to a base URL could still allow unexpected results
|
||||
through use of characters like ../ or ? or Unicode characters or hiding
|
||||
characters using various escaping means.
|
||||
@ -396,10 +396,10 @@ using a SOCKS or HTTP proxy in between curl and the target server.
|
||||
# Denial of Service
|
||||
|
||||
A malicious server could cause libcurl to effectively hang by sending data
|
||||
slowly, or even no data at all but just keeping the TCP connection open. This
|
||||
could effectively result in a denial-of-service attack. The
|
||||
CURLOPT_TIMEOUT(3) and/or CURLOPT_LOW_SPEED_LIMIT(3) options can
|
||||
be used to mitigate against this.
|
||||
slowly, or even no data at all but keeping the TCP connection open. This could
|
||||
effectively result in a denial-of-service attack. The CURLOPT_TIMEOUT(3)
|
||||
and/or CURLOPT_LOW_SPEED_LIMIT(3) options can be used to mitigate against
|
||||
this.
|
||||
|
||||
A malicious server could cause libcurl to download an infinite amount of data,
|
||||
potentially causing system resources to be exhausted resulting in a system or
|
||||
@ -455,8 +455,8 @@ passwords, things like URLs, cookies or even filenames could also hold
|
||||
sensitive data.
|
||||
|
||||
To avoid this problem, you must of course use your common sense. Often, you
|
||||
can just edit out the sensitive data or just search/replace your true
|
||||
information with faked data.
|
||||
can edit out the sensitive data or search/replace your true information with
|
||||
faked data.
|
||||
|
||||
# setuid applications using libcurl
|
||||
|
||||
@ -515,6 +515,6 @@ cookies.
|
||||
|
||||
# Report Security Problems
|
||||
|
||||
Should you detect or just suspect a security problem in libcurl or curl,
|
||||
contact the project curl security team immediately. See
|
||||
Should you detect or suspect a security problem in libcurl or curl, contact
|
||||
the project curl security team immediately. See
|
||||
https://curl.se/dev/secprocess.html for details.
|
||||
|
||||
@ -92,9 +92,9 @@ The people behind libcurl have put a considerable effort to make libcurl work
on a large amount of different operating systems and environments.

You program libcurl the same way on all platforms that libcurl runs on. There
are only a few minor details that differ. If you just make sure to write your
code portable enough, you can create a portable program. libcurl should not
stop you from that.
are only a few minor details that differ. If you make sure to write your code
portable enough, you can create a portable program. libcurl should not stop
you from that.

# Global Preparation

@ -171,7 +171,7 @@ Get an easy handle with
handle = curl_easy_init();
~~~
It returns an easy handle. Using that you proceed to the next step: setting
up your preferred actions. A handle is just a logic entity for the upcoming
up your preferred actions. A handle is a logic entity for the upcoming
transfer or series of transfers.

You set properties and options for this handle using
@ -311,8 +311,8 @@ uploading to a remote FTP site is similar to uploading data to an HTTP server
with a PUT request.

Of course, first you either create an easy handle or you reuse one existing
one. Then you set the URL to operate on just like before. This is the remote
URL, that we now upload.
one. Then you set the URL to operate on like before. This is the remote URL,
that we now upload.

Since we write an application, we most likely want libcurl to get the upload
data by asking us for it. To make it do that, we set the read callback and the
@ -620,15 +620,17 @@ handle:
~~~

Since all options on an easy handle are "sticky", they remain the same until
changed even if you do call curl_easy_perform(3), you may need to tell
curl to go back to a plain GET request if you intend to do one as your next
request. You force an easy handle to go back to GET by using the
CURLOPT_HTTPGET(3) option:
changed even if you do call curl_easy_perform(3), you may need to tell curl to
go back to a plain GET request if you intend to do one as your next request.
You force an easy handle to go back to GET by using the CURLOPT_HTTPGET(3)
option:

~~~c
curl_easy_setopt(handle, CURLOPT_HTTPGET, 1L);
~~~
Just setting CURLOPT_POSTFIELDS(3) to "" or NULL does *not* stop libcurl
from doing a POST. It just makes it POST without any data to send!

Setting CURLOPT_POSTFIELDS(3) to "" or NULL does *not* stop libcurl from doing
a POST. It makes it POST without any data to send!

# Converting from deprecated form API to MIME API

@ -956,10 +958,10 @@ Mozilla JavaScript engine in the past.
Re-cycling the same easy handle several times when doing multiple requests is
the way to go.

After each single curl_easy_perform(3) operation, libcurl keeps the
connection alive and open. A subsequent request using the same easy handle to
the same host might just be able to use the already open connection! This
reduces network impact a lot.
After each single curl_easy_perform(3) operation, libcurl keeps the connection
alive and open. A subsequent request using the same easy handle to the same
host might be able to reuse the already open connection! This reduces network
impact a lot.

Even if the connection is dropped, all connections involving SSL to the same
host again, benefit from libcurl's session ID cache that drastically reduces
@ -978,9 +980,9 @@ may also be added in the future.

Each easy handle attempts to keep the last few connections alive for a while
in case they are to be used again. You can set the size of this "cache" with
the CURLOPT_MAXCONNECTS(3) option. Default is 5. There is rarely any
point in changing this value, and if you think of changing this it is often
just a matter of thinking again.
the CURLOPT_MAXCONNECTS(3) option. Default is 5. There is rarely any point in
changing this value, and if you think of changing this it is often a reason to
think again.

To force your upcoming request to not use an already existing connection, you
can do that by setting CURLOPT_FRESH_CONNECT(3) to 1. In a similar
@ -1025,9 +1027,9 @@ libcurl is your friend here too.

## CURLOPT_CUSTOMREQUEST

If just changing the actual HTTP request keyword is what you want, like when
GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST(3)
is there for you. It is simple to use:
If changing the actual HTTP request keyword is what you want, like when GET,
HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST(3) is there for
you. It is simple to use:

~~~c
curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST");
@ -1152,8 +1154,8 @@ content transfer is performed.
## FTP Custom CURLOPT_CUSTOMREQUEST

If you do want to list the contents of an FTP directory using your own defined
FTP command, CURLOPT_CUSTOMREQUEST(3) does just that. "NLST" is the default
one for listing directories but you are free to pass in your idea of a good
FTP command, CURLOPT_CUSTOMREQUEST(3) does that. "NLST" is the default one for
listing directories but you are free to pass in your idea of a good
alternative.

# Cookies Without Chocolate Chips
@ -1170,8 +1172,8 @@ update them. Server use cookies to "track" users and to keep "sessions".
Cookies are sent from server to clients with the header Set-Cookie: and
they are sent from clients to servers with the Cookie: header.

To just send whatever cookie you want to a server, you can use
CURLOPT_COOKIE(3) to set a cookie string like this:
To send whatever cookie you want to a server, you can use CURLOPT_COOKIE(3) to
set a cookie string like this:

~~~c
curl_easy_setopt(handle, CURLOPT_COOKIE, "name1=var1; name2=var2;");
@ -1186,16 +1188,15 @@ when you make a request, you tell libcurl to read the previous headers to
figure out which cookies to use. Set the header file to read cookies from with
CURLOPT_COOKIEFILE(3).

The CURLOPT_COOKIEFILE(3) option also automatically enables the cookie
parser in libcurl. Until the cookie parser is enabled, libcurl does not parse
or understand incoming cookies and they are just be ignored. However, when the
The CURLOPT_COOKIEFILE(3) option also automatically enables the cookie parser
in libcurl. Until the cookie parser is enabled, libcurl does not parse or
understand incoming cookies and they are instead ignored. However, when the
parser is enabled the cookies are understood and the cookies are kept in
memory and used properly in subsequent requests when the same handle is
used. Many times this is enough, and you may not have to save the cookies to
disk at all. Note that the file you specify to CURLOPT_COOKIEFILE(3)
does not have to exist to enable the parser, so a common way to just enable
the parser and not read any cookies is to use the name of a file you know does
not exist.
memory and used properly in subsequent requests when the same handle is used.
Many times this is enough, and you may not have to save the cookies to disk at
all. Note that the file you specify to CURLOPT_COOKIEFILE(3) does not have to
exist to enable the parser, so a common way to enable the parser and not read
any cookies is to use the name of a file you know does not exist.

If you would rather use existing cookies that you have previously received
with your Netscape or Mozilla browsers, you can make libcurl use that cookie
@ -1370,9 +1371,9 @@ multiple transfers at the same time by adding up multiple easy handles into
a "multi stack".

You create the easy handles you want, one for each concurrent transfer, and
you set all the options just like you learned above, and then you create a
multi handle with curl_multi_init(3) and add all those easy handles to
that multi handle with curl_multi_add_handle(3).
you set all the options like you learned above, and then you create a multi
handle with curl_multi_init(3) and add all those easy handles to that multi
handle with curl_multi_add_handle(3).

When you have added the handles you have for the moment (you can still add new
ones at any time), you start the transfers by calling

@ -45,7 +45,7 @@ When done with it, clean it up with curl_url_cleanup(3)

# DUPLICATE

When you need a copy of a handle, just duplicate it with curl_url_dup(3):
When you need a copy of a handle, duplicate it with curl_url_dup(3):
~~~c
CURLU *nh = curl_url_dup(h);
~~~

@ -219,7 +219,7 @@ AC_DEFUN([LIBCURL_CHECK_CONFIG],

if test -z "$_libcurl_protocols"; then

# We do not have --protocols, so just assume that all
# We do not have --protocols; assume that all
# protocols are available
_libcurl_protocols="HTTP FTP FILE TELNET LDAP DICT TFTP"

@ -192,8 +192,8 @@ libcurl at all. Call curl_global_cleanup(3) immediately before the
program exits, when the program is again only one thread and after its last
use of libcurl.

It is not actually required that the functions be called at the beginning
and end of the program -- that is just usually the easiest way to do it.
It is not actually required that the functions be called at the beginning and
end of the program -- that is usually the easiest way to do it.

You can call both of these multiple times, as long as all calls meet
these requirements and the number of calls to each is the same.
@ -205,13 +205,13 @@ other parts of the program -- it does not know whether they use libcurl or
not. Its code does not necessarily run at the start and end of the whole
program.

A module like this must have global constant functions of its own, just like
curl_global_init(3) and curl_global_cleanup(3). The module thus
has control at the beginning and end of the program and has a place to call
the libcurl functions. If multiple modules in the program use libcurl, they
all separately call the libcurl functions, and that is OK because only the
first curl_global_init(3) and the last curl_global_cleanup(3) in a
program change anything. (libcurl uses a reference count in static memory).
A module like this must have global constant functions of its own, like
curl_global_init(3) and curl_global_cleanup(3). The module thus has control at
the beginning and end of the program and has a place to call the libcurl
functions. If multiple modules in the program use libcurl, they all separately
call the libcurl functions, and that is OK because only the first
curl_global_init(3) and the last curl_global_cleanup(3) in a program change
anything. (libcurl uses a reference count in static memory).

In a C++ module, it is common to deal with the global constant situation by
defining a special class that represents the global constant environment of

@ -30,7 +30,7 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_PRETRANSFER_TIME,
# DESCRIPTION

Pass a pointer to a double to receive the time, in seconds, it took from the
start until the file transfer is just about to begin.
start until the file transfer is about to begin.

This time-stamp includes all pre-transfer commands and negotiations that are
specific to the particular protocol(s) involved. It includes the sending of

@ -30,7 +30,7 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_PRETRANSFER_TIME_T,
# DESCRIPTION

Pass a pointer to a curl_off_t to receive the time, in microseconds, it took
from the start until the file transfer is just about to begin.
from the start until the file transfer is about to begin.

This time-stamp includes all pre-transfer commands and negotiations that are
specific to the particular protocol(s) involved. It includes the sending of

@ -54,7 +54,7 @@ The *backend* struct member is one of these defines: CURLSSLBACKEND_NONE (when
built without TLS support), CURLSSLBACKEND_WOLFSSL,
CURLSSLBACKEND_SECURETRANSPORT, CURLSSLBACKEND_GNUTLS, CURLSSLBACKEND_MBEDTLS,
CURLSSLBACKEND_NSS, CURLSSLBACKEND_OPENSSL or CURLSSLBACKEND_SCHANNEL. (Note
that the OpenSSL forks are all reported as just OpenSSL here.)
that the OpenSSL forks are all reported as OpenSSL here.)

The *internals* struct member points to a TLS library specific pointer for
the active ("in use") SSL connection, with the following underlying types:

@ -63,11 +63,11 @@ usual.
If the callback returns CURL_PUSH_OK, the new easy handle is added to the
multi handle, the callback must not do that by itself.

The callback can access PUSH_PROMISE headers with two accessor
functions. These functions can only be used from within this callback and they
can only access the PUSH_PROMISE headers: curl_pushheader_byname(3) and
curl_pushheader_bynum(3). The normal response headers are passed to the
header callback for pushed streams just as for normal streams.
The callback can access PUSH_PROMISE headers with two accessor functions.
These functions can only be used from within this callback and they can only
access the PUSH_PROMISE headers: curl_pushheader_byname(3) and
curl_pushheader_bynum(3). The normal response headers are passed to the header
callback for pushed streams like for normal streams.

The header fields can also be accessed with curl_easy_header(3),
introduced in later libcurl versions.

@ -52,8 +52,8 @@ Set CURLOPT_ACCEPT_ENCODING(3) to NULL to explicitly disable it, which makes
libcurl not send an Accept-Encoding: header and not decompress received
contents automatically.

You can also opt to just include the Accept-Encoding: header in your request
with CURLOPT_HTTPHEADER(3) but then there is no automatic decompressing when
You can also opt to include the `Accept-Encoding:` header in your request with
CURLOPT_HTTPHEADER(3) but then there is no automatic decompressing when
receiving data.

Setting this option is a request, not an order; the server may or may not do

@ -62,8 +62,8 @@ Example with "Test:Try", when curl uses the algorithm, it generates
for "date", **"test4_request"** for "request type",
**"SignedHeaders=content-type;host;x-try-date"** for "signed headers"

If you use just "test", instead of "test:try", test is used for every
generated string.
If you use "test", instead of "test:try", test is used for every generated
string.

Setting CURLOPT_HTTPAUTH(3) with the CURLAUTH_AWS_SIGV4 bit set is the same as
setting this option with a **"aws:amz"** parameter.

Some files were not shown because too many files have changed in this diff