tidy-up: miscellaneous

- whitespace, indent, comments, clang-format.
- openssl: move feature guards within function blocks.
- tunit: drop redundant blocks.

Closes #20361
commit 814b54d83e (parent 2c6f13093e)
Author: Viktor Szakats
Date: 2026-01-16 18:13:44 +01:00
GPG Key ID: B5ABD165E2AEF201 (no known key found for this signature in database)
45 changed files with 1521 additions and 1527 deletions


@@ -10,9 +10,9 @@ How to contribute to curl
Join the community
------------------
1. Click 'watch' on the GitHub repo
2. Subscribe to the suitable [mailing lists](https://curl.se/mail/)
Read [CONTRIBUTE](/docs/CONTRIBUTE.md)
---------------------------------------
@@ -20,10 +20,10 @@ Read [CONTRIBUTE](/docs/CONTRIBUTE.md)
Send your suggestions using one of these methods:
-------------------------------------------------
1. in a mail to the mailing list
2. as a [pull request](https://github.com/curl/curl/pulls)
3. as an [issue](https://github.com/curl/curl/issues)
/ The curl team


@@ -331,7 +331,7 @@ dnl we would like an httpd as test server
dnl
HTTPD_ENABLED="maybe"
AC_ARG_WITH(test-httpd, [AS_HELP_STRING([--with-test-httpd=PATH],
  [where to find httpd/apache2 for testing])],
  [request_httpd=$withval], [request_httpd=check])
if test "x$request_httpd" = "xcheck" || test "x$request_httpd" = "xyes"; then
  if test -x "/usr/sbin/apache2"; then
@@ -411,7 +411,7 @@ dnl we would like a sshd as test server
dnl
SSHD_ENABLED="maybe"
AC_ARG_WITH(test-sshd, [AS_HELP_STRING([--with-test-sshd=PATH],
  [where to find sshd for testing])],
  [request_sshd=$withval], [request_sshd=check])
if test "x$request_sshd" = "xcheck" || test "x$request_sshd" = "xyes"; then
  if test -x "/usr/sbin/sshd"; then


@@ -7,14 +7,14 @@ SPDX-License-Identifier: curl
libcurl bindings
================
Creative people have written bindings or interfaces for various environments
and programming languages. Using one of these allows you to take advantage of
curl powers from within your favourite language or system.
This is a list of all known interfaces as of this writing.
The bindings listed below are not part of the curl/libcurl distribution
archives, but must be downloaded and installed separately.
<!-- markdown-link-check-disable -->
@@ -23,8 +23,8 @@ libcurl bindings
[Basic](https://scriptbasic.com/) ScriptBasic bindings written by Peter Verhas
C++: [curlpp](https://github.com/jpbarrette/curlpp) Written by Jean-Philippe Barrette-LaPierre,
[curlcpp](https://github.com/JosephP91/curlcpp) by Giuseppe Persico and
[C++ Requests](https://github.com/libcpr/cpr) by Huu Nguyen
[Ch](https://chcurl.sourceforge.net/) Written by Stephen Nestinger and Jonathan Rogado


@@ -8,263 +8,263 @@ SPDX-License-Identifier: curl
## There are still bugs
curl and libcurl keep being developed. Adding features and changing code
means that bugs sneak in, no matter how hard we try to keep them out.
Of course there are lots of bugs left. Not to mention misfeatures.
To help us make curl the stable and solid product we want it to be, we need
bug reports and bug fixes.
## Where to report
If you cannot fix a bug yourself and submit a fix for it, try to file as
detailed a report as possible to a curl mailing list to allow one of us to
have a go at a solution. You can optionally also submit your problem in
[curl's bug tracking system](https://github.com/curl/curl/issues).
Please read the rest of this document below first before doing that.
If you feel you need to ask around first, find a suitable
[mailing list](https://curl.se/mail/) and post your questions there.
## Security bugs
If you find a bug or problem in curl or libcurl that you think has a security
impact, for example a bug that can put users in danger or make them
vulnerable if the bug becomes public knowledge, then please report that bug
using our security development process.
Security related bugs, or bugs that are suspected to have a security impact,
should be reported on the
[curl security tracker at HackerOne](https://hackerone.com/curl).
This ensures that the report reaches the curl security team so that they
first can deal with the report away from the public to minimize the harm and
impact it has on existing users out there who might be using the vulnerable
versions.
The curl project's process for handling security related issues is
[documented separately](https://curl.se/dev/secprocess.html).
## What to report
When reporting a bug, you should include all information to help us
understand what is wrong, what you expected to happen and how to repeat the
bad behavior. You therefore need to tell us:
- your operating system's name and version number
- what version of curl you are using (`curl -V` is fine)
- versions of the used libraries that libcurl is built to use
- what URL you were working with (if possible), at least which protocol
and anything and everything else you think matters. Tell us what you expected
to happen, tell us what did happen, tell us how you could make it work
another way. Dig around, try out, test. Then include all the tiny bits and
pieces in your report. You benefit from this yourself, as it enables us to
help you quicker and more accurately.
Since curl deals with networks, it often helps us if you include a protocol
debug dump with your bug report. The output you get by using the `-v` or
`--trace` options.
If curl crashed, causing a core dump (in Unix), there is hardly any use to
send that huge file to anyone of us. Unless we have the same system setup as
you, we cannot do much with it. Instead, we ask you to get a stack trace and
send that (much smaller) output to us instead.
The address and how to subscribe to the mailing lists are detailed in the
`MANUAL.md` file.
## libcurl problems
When you have written your own application with libcurl to perform transfers,
it is even more important to be specific and detailed when reporting bugs.
Tell us the libcurl version and your operating system. Tell us the name and
version of all relevant sub-components like for example the SSL library
you are using and what name resolving your libcurl uses. If you use SFTP or
SCP, the libssh2 version is relevant etc.
Showing us a real source code example repeating your problem is the best way
to get our attention and it greatly increases our chances to understand your
problem and to work on a fix (if we agree it truly is a problem).
Lots of problems that appear to be libcurl problems are actually just abuses
of the libcurl API or other malfunctions in your applications. It is advised
that you run your problematic program using a memory debug tool like valgrind
or similar before you post memory-related or "crashing" problems to us.
## Who fixes the problems
If the problems or bugs you describe are considered to be bugs, we want to
have the problems fixed.
There are no developers in the curl project that are paid to work on bugs.
All developers that take on reported bugs do this on a voluntary basis. We do
it out of an ambition to keep curl and libcurl excellent products and out of
pride.
Please do not assume that you can just lump over something to us and it then
magically gets fixed after some given time. Most often we need feedback and
help to understand what you have experienced and how to repeat a problem.
Then we may only be able to assist YOU to debug the problem and to track down
the proper fix.
We get reports from many people every month and each report can take a
considerable amount of time to really get to the bottom of.
## How to get a stack trace
First, you must make sure that you compile all sources with `-g` and that you
do not 'strip' the final executable. Try to avoid optimizing the code as well;
remove `-O`, `-O2` etc from the compiler options.
Run the program until it cores.
Run your debugger on the core file, like `<debugger> curl core`. `<debugger>`
should be replaced with the name of your debugger, in most cases that is
`gdb`, but `dbx` and others also occur.
When the debugger has finished loading the core file and presents you a
prompt, enter `where` (without quotes) and press return.
The list that is presented is the stack trace. If everything worked, it is
supposed to contain the chain of functions that were called when curl
crashed. Include the stack trace with your detailed bug report; it helps a
lot.
## Bugs in libcurl bindings
There are of course bugs in libcurl bindings. You should then primarily
approach the team that works on that particular binding and see what you can
do to help them fix the problem.
If you suspect that the problem exists in the underlying libcurl, then please
convert your program over to plain C and follow the steps outlined above.
## Bugs in old versions
The curl project typically releases new versions every other month, and we
fix several hundred bugs per year. For a huge table of releases, number of
bug fixes and more, see: https://curl.se/docs/releases.html
The developers in the curl project do not have bandwidth or energy enough to
maintain several branches or to spend much time on hunting down problems in
old versions when chances are we already fixed them or at least that they have
changed nature and appearance in later versions.
When you experience a problem and want to report it, you really SHOULD
include the version number of the curl you are using when you experience the
issue. If that version number shows us that you are using an out-of-date curl,
you should also try out a modern curl version to see if the problem persists
or how/if it has changed in appearance.
Even if you cannot immediately upgrade your application/system to run the
latest curl version, you can most often at least run a test version or
experimental build or similar, to get this confirmed or not.
At times people insist that they cannot upgrade to a modern curl version, but
instead, they "just want the bug fixed". That is fine, just do not count on us
spending many cycles on trying to identify which single commit, if that is
even possible, that at some point in the past fixed the problem you are now
experiencing.
Security wise, it is almost always a bad idea to lag behind the current curl
versions by a lot. We keep discovering and reporting security problems
over time, as you can see in
[this table](https://curl.se/docs/vulnerabilities.html).
# Bug fixing procedure
## What happens on first filing
When a new issue is posted in the issue tracker or on the mailing list, the
team of developers first needs to see the report. Maybe they took the day off,
maybe they are off in the woods hunting. Have patience. Allow at least a few
days before expecting someone to have responded.
In the issue tracker, you can expect that some labels are set on the issue to
help categorize it.
## First response
If your issue/bug report was not perfect at once (and few are), chances are
that someone asks follow-up questions. Which version did you use? Which
options did you use? How often does the problem occur? How can we reproduce
this problem? Which protocols does it involve? Or perhaps much more specific
and deep diving questions. It all depends on your specific issue.
You should then respond to these follow-up questions and provide more info
about the problem, so that we can help you figure it out. Or maybe you can
help us figure it out. An active back-and-forth communication is important
and the key for finding a cure and landing a fix.
## Not reproducible
We may require further work from you who actually see or experience the
problem if we cannot reproduce it and cannot understand it even after having
gotten all the info we need and having studied the source code over again.
## Unresponsive
If the problem has not been understood or reproduced, and there is nobody
responding to follow-up questions or questions asking for clarifications or
for discussing possible ways to move forward with the task, we take that as a
strong suggestion that the bug is unimportant.
Unimportant issues are closed as inactive sooner or later as they cannot be
fixed. The inactivity period (waiting for responses) should not be shorter
than two weeks but may extend months.
## Lack of time/interest
Bugs that are filed and are understood can unfortunately end up in the
"nobody cares enough about it to work on it" category. Such bugs are
perfectly valid problems that *should* get fixed but apparently are not. We
try to mark such bugs as `KNOWN_BUGS material` after a time of inactivity and
if no activity is noticed after yet some time those bugs are added to the
`KNOWN_BUGS` document and are closed in the issue tracker.
## `KNOWN_BUGS`
This is a list of known bugs. Bugs we know exist and that have been pointed
out but that have not yet been fixed. The reasons for why they have not been
fixed can involve anything really, but the primary reason is that nobody has
considered these problems to be important enough to spend the necessary time
and effort to have them fixed.
The `KNOWN_BUGS` items are always up for grabs and we love the ones who bring
one of them back to life and offer solutions to them.
The `KNOWN_BUGS` document has a sibling document known as `TODO`.
## `TODO`
Issues that are filed or reported that are not really bugs but more missing
features or ideas for future improvements and so on are marked as
*enhancement* or *feature-request* and get added to the `TODO` document and
the issues are closed. We do not keep TODO items open in the issue tracker.
The `TODO` document is full of ideas and suggestions of what we can add or
fix one day. You are always encouraged and free to grab one of those items and
take up a discussion with the curl development team on how that could be
implemented or provided in the project so that you can work on ticking it off
that document.
If an issue is rather a bug and not a missing feature or functionality, it is
listed in `KNOWN_BUGS` instead.
## Closing off stalled bugs
The [issue and pull request trackers](https://github.com/curl/curl) only hold
"active" entries open (using a non-precise definition of what active actually
is, but they are at least not completely dead). Those that are abandoned or
in other ways dormant are closed and sometimes added to `TODO` and
`KNOWN_BUGS` instead.
This way, we only have "active" issues open on GitHub. Irrelevant issues and
pull requests do not distract developers or casual visitors.


@@ -20,11 +20,11 @@ In March 2026, we drop support for all c-ares versions before 1.16.0.
RTMP in curl is powered by the 3rd party library librtmp.
- RTMP is barely used by curl users (2.2% in the 2025 survey)
- librtmp has no test cases, makes no proper releases and has not had a single
  commit within the last year
- librtmp parses the URL itself and requires non-compliant URLs for this
- we have no RTMP tests
Support for RTMP in libcurl gets removed in April 2026.
@@ -36,24 +36,24 @@ CMake 3.18 was released on 2020-07-15.
## Past removals
- axTLS (removed in 7.63.0)
- Pipelining (removed in 7.65.0)
- PolarSSL (removed in 7.69.0)
- NPN (removed in 7.86.0)
- Support for systems without 64-bit data types (removed in 8.0.0)
- NSS (removed in 8.3.0)
- gskit (removed in 8.3.0)
- MinGW v1 (removed in 8.4.0)
- NTLM_WB (removed in 8.8.0)
- space-separated `NOPROXY` patterns (removed in 8.9.0)
- hyper (removed in 8.12.0)
- Support for Visual Studio 2005 and older (removed in 8.13.0)
- Secure Transport (removed in 8.15.0)
- BearSSL (removed in 8.15.0)
- msh3 (removed in 8.16.0)
- winbuild build system (removed in 8.17.0)
- Windows CE (removed in 8.18.0)
- Support for Visual Studio 2008 (removed in 8.18.0)
- OpenSSL 1.1.1 and older (removed in 8.18.0)
- Support for Windows XP (removed in 8.19.0)
- OpenSSL-QUIC (removed in 8.19.0)


@@ -31,10 +31,10 @@ big and we never release just a patch. There is only "release".
## Questions to ask
- Is there a security advisory rated high or critical?
- Is there a data corruption bug?
- Did the bug cause an API/ABI breakage?
- Does the problem annoy a significant share of the user population?
If the answer is yes to one or more of the above, an early release might be
warranted.
@@ -42,25 +42,25 @@ warranted.
More questions to ask ourselves when doing the assessment if the answers to
the questions above are all 'no'.
- Does the bug cause curl to prematurely terminate?
- How common is the affected buggy option/feature/protocol/platform to get
  used?
- How large is the estimated impacted user base?
- Does the bug block something crucial for applications or other adoption of
  curl "out there"?
- Does the bug cause problems for curl developers or others on "the curl
  team"?
- Is the bug limited to the curl tool only? That might have a smaller impact
  than a bug also present in libcurl.
- Is there a (decent) workaround?
- Is it a regression? Is the bug introduced in this release?
- Can the bug be fixed "easily" by applying a patch?
- Does the bug break the build? Most users do not build curl themselves.
- How long is it until the already scheduled next release?
- Can affected users instead safely revert to a former release until the next
  scheduled release?
- Is it a performance regression with no functionality side-effects? If so it
  has to be substantial.
## If an early release is deemed necessary


@ -8,242 +8,242 @@ SPDX-License-Identifier: curl
## curl tool
- config file support
- multiple URLs in a single command line
- range "globbing" support: [0-13], {one,two,three}
- multiple file upload on a single command line
- redirect stderr
- parallel transfers
- config file support
- multiple URLs in a single command line
- range "globbing" support: [0-13], {one,two,three}
- multiple file upload on a single command line
- redirect stderr
- parallel transfers
## libcurl
- URL RFC 3986 syntax
- custom maximum download time
- custom lowest download speed acceptable
- custom output result after completion
- guesses protocol from hostname unless specified
- supports .netrc
- progress bar with time statistics while downloading
- standard proxy environment variables support
- have run on 101 operating systems and 28 CPU architectures
- selectable network interface for outgoing traffic
- IPv6 support on Unix and Windows
- happy eyeballs dual-stack IPv4 + IPv6 connects
- persistent connections
- SOCKS 4 + 5 support, with or without local name resolving
- *pre-proxy* support, for *proxy chaining*
- supports username and password in proxy environment variables
- operations through HTTP proxy "tunnel" (using CONNECT)
- replaceable memory functions (malloc, free, realloc, etc)
- asynchronous name resolving
- both a push and a pull style interface
- international domain names (IDN)
- transfer rate limiting
- stable API and ABI
- TCP keep alive
- TCP Fast Open
- DNS cache (that can be shared between transfers)
- non-blocking single-threaded parallel transfers
- Unix domain sockets to server or proxy
- DNS-over-HTTPS
- uses non-blocking name resolves
- selectable name resolver backend
## URL API
- parses RFC 3986 URLs
- generates URLs from individual components
- manages "redirects"
## Header API
- easy access to HTTP response headers, from all contexts
- named headers
- iterate over headers
## TLS
- selectable TLS backend(s)
- TLS False Start
- TLS version control
- TLS session resumption
- key pinning
- mutual authentication
- Use dedicated CA cert bundle
- Use OS-provided CA store
- separate TLS options for HTTPS proxy
## HTTP
- HTTP/0.9 responses are optionally accepted
- HTTP/1.0
- HTTP/1.1
- HTTP/2, including multiplexing and server push
- GET
- PUT
- HEAD
- POST
- multipart formpost (RFC 1867-style)
- authentication: Basic, Digest, NTLM (9) and Negotiate (SPNEGO)
to server and proxy
- resume transfers
- follow redirects
- maximum amount of redirects to follow
- custom HTTP request
- cookie get/send fully parsed
- reads/writes the Netscape cookie file format
- custom headers (replace/remove internally generated headers)
- custom user-agent string
- custom referrer string
- range
- proxy authentication
- time conditions
- via HTTP proxy, HTTPS proxy or SOCKS proxy
- HTTP/2 or HTTP/1.1 to HTTPS proxy
- retrieve file modification date
- Content-Encoding support for deflate, gzip, brotli and zstd
- "Transfer-Encoding: chunked" support in uploads
- HSTS
- alt-svc
- ETags
- HTTP/1.1 trailers, both sending and getting
## HTTPS
- HTTP/3
- using client certificates
- verify server certificate
- via HTTP proxy, HTTPS proxy or SOCKS proxy
- select desired encryption
- select usage of a specific TLS version
- ECH
## FTP
- download
- authentication
- Kerberos 5
- active/passive using PORT, EPRT, PASV or EPSV
- single file size information (compare to HTTP HEAD)
- 'type=' URL support
- directory listing
- directory listing names-only
- upload
- upload append
- upload via http-proxy as HTTP PUT
- download resume
- upload resume
- custom ftp commands (before and/or after the transfer)
- simple "range" support
- via HTTP proxy, HTTPS proxy or SOCKS proxy
- all operations can be tunneled through proxy
- customizable to retrieve file modification date
- no directory depth limit
## FTPS
- implicit `ftps://` support that uses SSL on both connections
- explicit "AUTH TLS" and "AUTH SSL" usage to "upgrade" plain `ftp://`
connection to use SSL for both or one of the connections
## SSH (both SCP and SFTP)
- selectable SSH backend
- known hosts support
- public key fingerprinting
- both password and public key auth
## SFTP
- both password and public key auth
- with custom commands sent before/after the transfer
- directory listing
## TFTP
- download
- upload
## TELNET
- connection negotiation
- custom telnet options
- stdin/stdout I/O
## LDAP
- full LDAP URL support
## DICT
- extended DICT URL support
## FILE
- URL support
- upload
- resume
## SMB
- SMBv1 over TCP and SSL
- download
- upload
- authentication with NTLMv1
## SMTP
- authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM, Kerberos 5 and
External
- send emails
- mail from support
- mail size support
- mail auth support for trusted server-to-server relaying
- multiple recipients
- via http-proxy
## SMTPS
- implicit `smtps://` support
- explicit "STARTTLS" usage to "upgrade" plain `smtp://` connections to use SSL
- via http-proxy
## POP3
- authentication: Clear Text, APOP and SASL
- SASL based authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM,
Kerberos 5 and External
- list emails
- retrieve emails
- enhanced command support for: CAPA, DELE, TOP, STAT, UIDL and NOOP via
custom requests
- via http-proxy
## POP3S
- implicit `pop3s://` support
- explicit `STLS` usage to "upgrade" plain `pop3://` connections to use SSL
- via http-proxy
## IMAP
- authentication: Clear Text and SASL
- SASL based authentication: Plain, Login, CRAM-MD5, Digest-MD5, NTLM,
Kerberos 5 and External
- list the folders of a mailbox
- select a mailbox with support for verifying the `UIDVALIDITY`
- fetch emails with support for specifying the UID and SECTION
- upload emails via the append command
- enhanced command support for: EXAMINE, CREATE, DELETE, RENAME, STATUS,
STORE, COPY and UID via custom requests
- via http-proxy
## IMAPS
- implicit `imaps://` support
- explicit "STARTTLS" usage to "upgrade" plain `imap://` connections to use SSL
- via http-proxy
## MQTT
- Subscribe to and publish topics using URL scheme `mqtt://broker/topic`


@@ -246,7 +246,7 @@ November:
Known libcurl bindings: 37
Contributors: 683
145,000 unique visitors. >100 GB downloaded.
2009
----
@@ -282,7 +282,7 @@ August:
Known libcurl bindings: 39
Contributors: 808
Gopher support added (re-added actually, see January 2006)
2011
----
@@ -294,146 +294,146 @@ April: added the cyassl backend (later renamed to wolfSSL)
2012
----
July: Added support for Schannel (native Windows TLS backend) and Darwin SSL
(Native Mac OS X and iOS TLS backend).
Supports Metalink
October: SSH-agent support.
2013
----
February: Cleaned up internals to always use the "multi" non-blocking
approach internally and only expose the blocking API with a wrapper.
September: First small steps on supporting HTTP/2 with nghttp2.
October: Removed krb4 support.
December: Happy eyeballs.
2014
----
March: first real release supporting HTTP/2
September: Website had 245,000 unique visitors and served 236GB data
SMB and SMBS support
2015
----
June: support for multiplexing with HTTP/2
August: support for HTTP/2 server push
September: started "everything curl". A separate stand-alone book documenting
curl and related info in perhaps a more tutorial style rather than just a
reference.
December: Public Suffix List
2016
----
January: the curl tool defaults to HTTP/2 for HTTPS URLs
December: curl 7.52.0 introduced support for HTTPS-proxy
First TLS 1.3 support
2017
----
May: Fastly starts hosting the curl website
July: OSS-Fuzz started fuzzing libcurl
September: Added MultiSSL support
The website serves 3100 GB/month
Public curl releases: 169
Command line options: 211
curl_easy_setopt() options: 249
Public functions in libcurl: 74
Contributors: 1609
October: SSLKEYLOGFILE support, new MIME API
October: Daniel received the Polhem Prize for his work on curl
November: brotli
2018
----
January: new SSH backend powered by libssh
March: starting with the 1803 release of Windows 10, curl is shipped bundled
with Microsoft's operating system.
July: curl shows headers using bold type face
October: added DNS-over-HTTPS (DoH) and the URL API
MesaLink is a new supported TLS backend
libcurl now does HTTP/2 (and multiplexing) by default on HTTPS URLs
curl and libcurl are installed in an estimated 5 *billion* instances
world-wide.
October 31: curl and libcurl 7.62.0
Public curl releases: 177
Command line options: 219
curl_easy_setopt() options: 261
Public functions in libcurl: 80
Contributors: 1808
December: removed axTLS support
2019
----
January: Daniel started working full-time on curl, employed by wolfSSL
March: added experimental alt-svc support
August: the first HTTP/3 requests with curl.
September: 7.66.0 is released and the tool offers parallel downloads
2020
----
curl and libcurl are installed in an estimated 10 *billion* instances
world-wide.
January: added BearSSL support
March: removed support for PolarSSL, added wolfSSH support. Created the first
dashboard on the website.
April: experimental MQTT support
August: zstd support
November: the website moves to curl.se. The website serves 10TB data monthly.
December: alt-svc support
2021
----
February 3: curl 7.75.0 ships with support for Hyper as an HTTP backend
March 31: curl 7.76.0 ships with support for Rustls
July: HSTS is supported
2022
----
@@ -446,8 +446,8 @@ March: added --json, removed mesalink support
Public functions in libcurl: 86
Contributors: 2601
The curl.se website serves 16,500 GB/month over 462M requests, the
official docker image has been pulled 4,098,015,431 times.
April: added support for msh3 as another HTTP/3 backend


@@ -23,7 +23,7 @@ HTTP-only requests to a hostname present in the cache gets internally
- `CURLOPT_HSTS_CTRL` - enable HSTS for this easy handle
- `CURLOPT_HSTS` - specify filename where to store the HSTS cache on close
(and possibly read from at startup)
## curl command line options


@@ -38,14 +38,16 @@ To fix before we remove the experimental label:
# ngtcp2 version
Building curl with ngtcp2 involves 3 components: `ngtcp2` itself, `nghttp3`
and a QUIC supporting TLS library. The supported TLS libraries are covered
below.
While any version of `ngtcp2` and `nghttp3` from v1.0.0 on are expected to
work, using the latest versions often brings functional and performance
improvements.
The build examples use `$NGHTTP3_VERSION` and `$NGTCP2_VERSION` as
placeholders for the version you build.
## Build with OpenSSL
@@ -224,7 +226,9 @@ Build curl:
quiche support is **EXPERIMENTAL**
Since the quiche build manages its dependencies, curl can be built against the
latest version. You are *probably* able to build against their main branch,
but in case of problems, we recommend their latest release tag.
## Build
@@ -247,8 +251,8 @@ Build curl:
% make
% make install
If `make install` results in `Permission denied` error, you need to prepend
it with `sudo`.
# `--http3`
@@ -284,16 +288,17 @@ or HTTP/1.1. At half of that value - currently - is the **soft** timeout. The
soft timeout fires, when there has been **no data at all** seen from the
server on the HTTP/3 connection.
So, without you specifying anything, the hard timeout is 200ms and the soft is
100ms:
* Ideally, the whole QUIC handshake happens and curl has an HTTP/3 connection
in less than 100ms.
* When QUIC is not supported (or UDP does not work for this network path), no
reply is seen and the HTTP/2 TLS+TCP connection starts 100ms later.
* In the worst case, UDP replies start before 100ms, but drag on. This starts
the TLS+TCP connection after 200ms.
* When the QUIC handshake fails, the TLS+TCP connection is attempted right
away. For example, when the QUIC server presents the wrong certificate.
The whole transfer only fails, when **both** QUIC and TLS+TCP fail to
handshake or time out.
@@ -354,8 +359,8 @@ that exists in curl's test dir.
### Caddy
[Install Caddy](https://caddyserver.com/docs/install). For easiest use, the
binary should be either in your PATH or your current directory.
Create a `Caddyfile` with the following content:
~~~
@@ -368,7 +373,9 @@ Then run Caddy:
% ./caddy start
Making requests to `https://localhost:7443` should tell you which protocol is
being used.
You can change the hard-coded response to something more useful by replacing
`respond` with `reverse_proxy` or `file_server`, for example: `reverse_proxy
localhost:80`


@@ -79,18 +79,18 @@ generate the project. After the project is generated, you can run make.
CMake also comes with a Qt based GUI called `cmake-gui`. To configure with
`cmake-gui`, you run `cmake-gui` and follow these steps:
1. Fill in the "Where is the source code" combo box with the path to
the curl source tree.
2. Fill in the "Where to build the binaries" combo box with the path to
the directory for your build tree, ideally this should not be the same
as the source tree, but a parallel directory called curl-build or
something similar.
3. Once the source and binary directories are specified, press the
"Configure" button.
4. Select the native build tool that you want to use.
5. At this point you can change any of the options presented in the GUI.
Once you have selected all the options you want, click the "Generate"
button.
# Building
@@ -345,7 +345,7 @@ target_link_libraries(my_target PRIVATE CURL::libcurl)
- `CMAKE_INSTALL_PREFIX` (see CMake)
- `CMAKE_STATIC_LIBRARY_SUFFIX` (see CMake)
- `CMAKE_UNITY_BUILD_BATCH_SIZE`: Set the number of sources in a "unity" unit. Default: `0` (all)
- `CMAKE_UNITY_BUILD`: Enable "unity" (aka "jumbo") builds. Default: `OFF`
Details via CMake
[variables](https://cmake.org/cmake/help/latest/manual/cmake-variables.7.html) and


@@ -270,8 +270,8 @@ Download the latest version of the `cygwin` packages required (*and suggested*)
Once all the packages have been installed, begin the process of installing curl from the source code:
<details>
<summary>configure_options</summary>

```
--with-gnutls
@@ -282,7 +282,7 @@
--without-ssl
```

</details>
1. `sh configure <configure_options>`
2. `make`


@@ -33,11 +33,11 @@ pipe character (`|`).
The eleven fields for each CVE in `vuln.pm` are, in order:
HTML page name, first vulnerable version, last vulnerable version, name of
the issue, CVE Id, announce date (`YYYYMMDD`), report to the project date
(`YYYYMMDD`), CWE, awarded reward amount (USD), area (single word), C-issue
(`-` if not a C issue at all, `OVERFLOW`, `OVERREAD`, `DOUBLE_FREE`,
`USE_AFTER_FREE`, `NULL_MISTAKE`, `UNINIT`)
### `Makefile`



@@ -7,60 +7,60 @@ SPDX-License-Identifier: curl
Version Numbers and Releases
============================
The command line tool curl and the library libcurl are individually
versioned, but they usually follow each other closely.
The version numbering is always built up using the same system:
X.Y.Z
- X is main version number
- Y is release number
- Z is patch number
## Bumping numbers
One of these numbers gets bumped in each new release. The numbers to the right
of a bumped number are reset to zero.
The main version number is bumped when *really* big, world colliding changes
are made. The release number is bumped when changes are performed or
things/features are added. The patch number is bumped when the changes are
mere bugfixes.
It means that after release 1.2.3, we can release 2.0.0 if something really
big has been made, 1.3.0 if not that big changes were made or 1.2.4 if only
bugs were fixed.
Bumping, as in increasing a number by 1, only ever affects one of the numbers
(except the ones to the right of it, which may be reset to zero). 1 becomes 2,
3 becomes 4, 9 becomes 10, 88 becomes 89 and 99 becomes 100. So, after 1.2.9
comes 1.2.10. After 3.99.3, 3.100.0 might come.
All original curl source release archives are named according to the libcurl
version (not according to the curl client version that, as said before, might
differ).
As a service to any application that might want to support new libcurl
features while still being able to build with older versions, all releases
have the libcurl version stored in the `curl/curlver.h` file using a static
numbering scheme that can be used for comparison. The version number is
defined as:
```c
#define LIBCURL_VERSION_NUM 0xXXYYZZ
```
Where `XX`, `YY` and `ZZ` are the main version, release and patch numbers in
hexadecimal. All three number fields are always represented using two digits
(eight bits each). 1.2 would appear as `0x010200` while version 9.11.7
appears as `0x090b07`.
This 6-digit hexadecimal number is always a greater number in a more recent
release. It makes comparisons with greater than and less than work.
This number is also available as three separate defines:
`LIBCURL_VERSION_MAJOR`, `LIBCURL_VERSION_MINOR` and `LIBCURL_VERSION_PATCH`.
## Past releases


@@ -32,31 +32,30 @@
* but this uses epoll and timerfd instead of libevent.
*
* Written by Jeff Pohlmeyer, converted to use epoll by Josh Bialkowski
*
* Requires a Linux system with epoll
*
* When running, the program creates the named pipe "hiper.fifo"
*
* Whenever there is input into the fifo, the program reads the input as a list
* of URL's and creates some new easy handles to fetch each URL via the
* curl_multi "hiper" API.
*
* Thus, you can try a single URL:
* % echo http://www.yahoo.com > hiper.fifo
*
* Or a whole bunch of them:
* % cat my-url-list > hiper.fifo
*
* The fifo buffer is handled almost instantly, so you can even add more URL's
* while the previous requests are still being downloaded.
*
* Note:
* For the sake of simplicity, URL length is limited to 1023 chars.
*
* This is purely a demo app, all retrieved data is simply discarded by
* the write callback.
*/
#include <errno.h>
#include <fcntl.h>
#include <signal.h>


@@ -32,34 +32,33 @@
* but this uses libev instead of libevent.
*
* Written by Jeff Pohlmeyer, converted to use libev by Markus Koetter
*
* Requires libev and a (POSIX?) system that has mkfifo().
*
* This is an adaptation of libcurl's "hipev.c" and libevent's "event-test.c"
* sample programs.
*
* When running, the program creates the named pipe "hiper.fifo"
*
* Whenever there is input into the fifo, the program reads the input as a list
* of URL's and creates some new easy handles to fetch each URL via the
* curl_multi "hiper" API.
*
* Thus, you can try a single URL:
* % echo http://www.yahoo.com > hiper.fifo
*
* Or a whole bunch of them:
* % cat my-url-list > hiper.fifo
*
* The fifo buffer is handled almost instantly, so you can even add more URL's
* while the previous requests are still being downloaded.
*
* Note:
* For the sake of simplicity, URL length is limited to 1023 chars.
*
* This is purely a demo app, all retrieved data is simply discarded by
* the write callback.
*/
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>


@ -22,38 +22,38 @@
*
***************************************************************************/
/* <DESC>
* multi socket interface with glib2
* </DESC>
*/
/* Example application source code using the multi socket interface to
* download many files at once.
*
* Written by Jeff Pohlmeyer
*
* Requires glib-2.x and a (POSIX?) system that has mkfifo().
*
* This is an adaptation of libcurl's "hipev.c" and libevent's "event-test.c"
* sample programs, adapted to use glib's g_io_channel in place of libevent.
*
* When running, the program creates the named pipe "hiper.fifo"
*
* Whenever there is input into the fifo, the program reads the input as a list
* of URL's and creates some new easy handles to fetch each URL via the
* curl_multi "hiper" API.
*
* Thus, you can try a single URL:
* % echo http://www.yahoo.com > hiper.fifo
*
* Or a whole bunch of them:
* % cat my-url-list > hiper.fifo
*
* The fifo buffer is handled almost instantly, so you can even add more URL's
* while the previous requests are still being downloaded.
*
* This is purely a demo app, all retrieved data is simply discarded by
* the write callback.
*
*/
#include <glib.h>
#include <unistd.h>


@ -22,41 +22,40 @@
*
***************************************************************************/
/* <DESC>
* multi socket interface with libevent 2
* </DESC>
*/
/* Example application source code using the multi socket interface to
* download many files at once.
*
* Written by Jeff Pohlmeyer
*
* Requires libevent version 2 and a (POSIX?) system that has mkfifo().
*
* This is an adaptation of libcurl's "hipev.c" and libevent's "event-test.c"
* sample programs.
*
* When running, the program creates the named pipe "hiper.fifo"
*
* Whenever there is input into the fifo, the program reads the input as a list
* of URL's and creates some new easy handles to fetch each URL via the
* curl_multi "hiper" API.
*
* Thus, you can try a single URL:
* % echo http://www.yahoo.com > hiper.fifo
*
* Or a whole bunch of them:
* % cat my-url-list > hiper.fifo
*
* The fifo buffer is handled almost instantly, so you can even add more URL's
* while the previous requests are still being downloaded.
*
* Note:
* For the sake of simplicity, URL length is limited to 1023 chars.
*
* This is purely a demo app, all retrieved data is simply discarded by
* the write callback.
*/
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>


@ -26,12 +26,12 @@
* </DESC>
*/
/* Use the socket_action interface to download multiple files in parallel,
* powered by libuv.
*
* Requires libuv and (of course) libcurl.
*
* See https://docs.libuv.org/en/v1.x/index.html libuv API documentation
*/
/* Requires: USE_LIBUV */


@ -38,17 +38,17 @@ WebSocket with libcurl can be done two ways.
The new options to `curl_easy_setopt()`:
`CURLOPT_WS_OPTIONS` - to control specific behavior. `CURLWS_RAW_MODE` makes
libcurl provide all WebSocket traffic raw in the callback. `CURLWS_NOAUTOPONG`
disables automatic `PONG` replies.
The new function calls:
`curl_ws_recv()` - receive a WebSocket frame
`curl_ws_send()` - send a WebSocket frame
`curl_ws_meta()` - return WebSocket metadata within a write callback
## Max frame size


@ -7,62 +7,62 @@ SPDX-License-Identifier: curl
ABI - Application Binary Interface
==================================
"ABI" describes the low-level interface between an application program and a
library. Calling conventions, function arguments, return values, struct
sizes/defines and more.
[Wikipedia has a longer description](https://en.wikipedia.org/wiki/Application_binary_interface)
## Upgrades
A libcurl upgrade does not break the ABI or change established and documented
behavior. Your application can remain using libcurl just as before, only with
fewer bugs and possibly with added new features.
## Version Numbers
In libcurl land, you cannot tell by the libcurl version number if that
libcurl is binary compatible or not with another libcurl version. As a rule,
we do not break the ABI so you can *always* upgrade to a later version without
any loss or change in functionality.
## SONAME Bumps
Whenever changes are made to the library that cause an ABI breakage,
that may require your application to get attention or possibly be changed to
adhere to new things, we bump the SONAME. Then the library gets a different
output name and thus can in fact be installed in parallel with an older
installed lib (on most systems). Thus, old applications built against the
previous ABI version remain working and using the older lib, while newer
applications build and use the newer one.
During the first seven years of libcurl releases, there have only been four
ABI breakages.
We are determined to bump the SONAME as rarely as possible. Ideally, we never
do it again.
## Downgrades
Going to an older libcurl version from one you are currently using can be a
tricky thing. Mostly we add features and options to newer libcurls as that
does not break ABI or hamper existing applications. This has the implication
that going backwards may get you in a situation where you pick a libcurl that
does not support the options your application needs. Or possibly you even
downgrade so far that you cross an ABI break border and thus a different
SONAME, and then your application may need to adapt to the modified ABI.
## History
The previous major library SONAME number bumps (breaking backwards
compatibility) happened the following times:
0 - libcurl 7.1, August 2000
1 - libcurl 7.5 December 2000
2 - libcurl 7.7 March 2001
3 - libcurl 7.12.0 June 2004
4 - libcurl 7.16.0 October 2006


@ -398,11 +398,11 @@ An overview of the time values available from curl_easy_getinfo(3)
|--|--|--|--|--|--|--|--TOTAL
|--|--|--|--|--|--|--|--REDIRECT
CURLINFO_QUEUE_TIME_T(3), CURLINFO_NAMELOOKUP_TIME_T(3),
CURLINFO_CONNECT_TIME_T(3), CURLINFO_APPCONNECT_TIME_T(3),
CURLINFO_PRETRANSFER_TIME_T(3), CURLINFO_POSTTRANSFER_TIME_T(3),
CURLINFO_STARTTRANSFER_TIME_T(3), CURLINFO_TOTAL_TIME_T(3),
CURLINFO_REDIRECT_TIME_T(3)
# %PROTOCOLS%


@ -46,7 +46,7 @@ Your compiler needs to know where the libcurl headers are located. Therefore
you must set your compiler's include path to point to the directory where you
installed them. The 'curl-config'[3] tool can be used to get this information:
~~~c
$ curl-config --cflags
~~~
## Linking the Program with libcurl
@ -58,7 +58,7 @@ OpenSSL libraries, but even some standard OS libraries may be needed on the
command line. To figure out which flags to use, once again the 'curl-config'
tool comes to the rescue:
~~~c
$ curl-config --libs
~~~
## SSL or Not
@ -71,7 +71,7 @@ installed libcurl has been built with SSL support enabled, use *curl-config*
like this:
~~~c
$ curl-config --feature
~~~
If SSL is supported, the keyword *SSL* is written to stdout, possibly together
@ -102,7 +102,7 @@ The program must initialize some of the libcurl functionality globally. That
means it should be done exactly once, no matter how many times you intend to
use the library. Once for your program's entire life time. This is done using
~~~c
curl_global_init()
~~~
and it takes one parameter which is a bit pattern that tells libcurl what to
initialize. Using *CURL_GLOBAL_ALL* makes it initialize all known internal
@ -168,7 +168,7 @@ must never share the same handle in multiple threads.
Get an easy handle with
~~~c
handle = curl_easy_init();
~~~
It returns an easy handle. Using that you proceed to the next step: setting
up your preferred actions. A handle is just a logic entity for the upcoming
@ -194,7 +194,7 @@ One of the most basic properties to set in the handle is the URL. You set your
preferred URL to transfer with CURLOPT_URL(3) in a manner similar to:
~~~c
curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
~~~
Let's assume for a while that you want to receive data as the URL identifies a
@ -203,17 +203,17 @@ that needs this transfer, I assume that you would like to get the data passed
to you directly instead of simply getting it passed to stdout. So, you write
your own function that matches this prototype:
~~~c
size_t write_data(void *buffer, size_t size, size_t nmemb, void *userp);
~~~
You tell libcurl to pass all data to this function by issuing a function
similar to this:
~~~c
curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, write_data);
~~~
You can control what data your callback function gets in the fourth argument
by setting another property:
~~~c
curl_easy_setopt(handle, CURLOPT_WRITEDATA, &internal_struct);
~~~
Using that property, you can easily pass local data between your application
and the function that gets invoked by libcurl. libcurl itself does not touch
@ -243,7 +243,7 @@ There are of course many more options you can set, and we get back to a few of
them later. Let's instead continue to the actual transfer:
~~~c
success = curl_easy_perform(handle);
~~~
curl_easy_perform(3) connects to the remote site, does the necessary commands
@ -319,7 +319,7 @@ data by asking us for it. To make it do that, we set the read callback and the
custom pointer libcurl passes to our read callback. The read callback should
have a prototype similar to:
~~~c
size_t function(char *bufptr, size_t size, size_t nitems, void *userp);
~~~
Where *bufptr* is the pointer to a buffer we fill in with data to upload
and *nitems* is the size of the buffer and therefore also the maximum
@ -327,21 +327,21 @@ amount of data we can return to libcurl in this call. The *userp* pointer
is the custom pointer we set to point to a struct of ours to pass private data
between the application and the callback.
~~~c
curl_easy_setopt(handle, CURLOPT_READFUNCTION, read_function);
curl_easy_setopt(handle, CURLOPT_READDATA, &filedata);
~~~
Tell libcurl that we want to upload:
~~~c
curl_easy_setopt(handle, CURLOPT_UPLOAD, 1L);
~~~
A few protocols do not behave properly when uploads are done without any prior
knowledge of the expected file size. So, set the upload file size using the
CURLOPT_INFILESIZE_LARGE(3) for all known file sizes like this[1]:
~~~c
/* in this example, file_size must be a curl_off_t variable */
curl_easy_setopt(handle, CURLOPT_INFILESIZE_LARGE, file_size);
~~~
When you call curl_easy_perform(3) this time, it performs all the
@ -361,7 +361,7 @@ Most protocols support that you specify the name and password in the URL
itself. libcurl detects this and uses them accordingly. This is written like
this:
~~~c
protocol://user:password@example.com/path/
~~~
If you need any odd letters in your username or password, you should enter
them URL encoded, as %XX where XX is a two-digit hexadecimal number.
@ -372,7 +372,7 @@ CURLOPT_USERPWD(3) option. The argument passed to libcurl should be a
char * to a string in the format "user:password". In a manner like this:
~~~c
curl_easy_setopt(handle, CURLOPT_USERPWD, "myname:thesecret");
~~~
Another case where name and password might be needed at times, is for those
@ -381,7 +381,7 @@ another option for this, the CURLOPT_PROXYUSERPWD(3). It is used quite similar
to the CURLOPT_USERPWD(3) option like this:
~~~c
curl_easy_setopt(handle, CURLOPT_PROXYUSERPWD, "myname:thesecret");
~~~
There is a long time Unix "standard" way of storing FTP usernames and
@ -396,15 +396,15 @@ non-FTP protocols such as HTTP. To make curl use this file, use the
CURLOPT_NETRC(3) option:
~~~c
curl_easy_setopt(handle, CURLOPT_NETRC, 1L);
~~~
A basic example of what such a .netrc file may look like:
~~~c
machine myhost.mydomain.com
login userlogin
password secretword
~~~
All these examples have been cases where the password has been optional, or
@ -414,7 +414,7 @@ you are using an SSL private key for secure transfers.
To pass the known private key password to libcurl:
~~~c
curl_easy_setopt(handle, CURLOPT_KEYPASSWD, "keypassword");
~~~
# HTTP Authentication
@ -431,15 +431,14 @@ Negotiate (SPNEGO). You can tell libcurl which one to use with
CURLOPT_HTTPAUTH(3) as in:
~~~c
curl_easy_setopt(handle, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST);
~~~
When you send authentication to a proxy, you can also set authentication type
the same way but instead with CURLOPT_PROXYAUTH(3):
~~~c
curl_easy_setopt(handle, CURLOPT_PROXYAUTH, CURLAUTH_NTLM);
~~~
Both these options allow you to set multiple types (by ORing them together),
@ -448,7 +447,7 @@ claims to support. This method does however add a round-trip since libcurl
must first ask the server what it supports:
~~~c
curl_easy_setopt(handle, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST | CURLAUTH_BASIC);
~~~
For convenience, you can use the *CURLAUTH_ANY* define (instead of a list with
@ -487,21 +486,21 @@ done in a generic way, by building a list of our own headers and then passing
that list to libcurl.
~~~c
struct curl_slist *headers = NULL;
headers = curl_slist_append(headers, "Content-Type: text/xml");
/* post binary data */
curl_easy_setopt(handle, CURLOPT_POSTFIELDS, binaryptr);
/* set the size of the postfields data */
curl_easy_setopt(handle, CURLOPT_POSTFIELDSIZE, 23L);
/* pass our list of custom made headers */
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
curl_easy_perform(handle); /* post away! */
curl_slist_free_all(headers); /* free the header list */
~~~
While the simple examples above cover the majority of all cases where HTTP
@ -531,24 +530,24 @@ The following example sets two simple text parts with plain textual contents,
and then a file with binary contents and uploads the whole thing.
~~~c
curl_mime *multipart = curl_mime_init(handle);
curl_mimepart *part = curl_mime_addpart(multipart);
curl_mime_name(part, "name");
curl_mime_data(part, "daniel", CURL_ZERO_TERMINATED);
part = curl_mime_addpart(multipart);
curl_mime_name(part, "project");
curl_mime_data(part, "curl", CURL_ZERO_TERMINATED);
part = curl_mime_addpart(multipart);
curl_mime_name(part, "logotype-image");
curl_mime_filedata(part, "curl.png");
/* Set the form info */
curl_easy_setopt(handle, CURLOPT_MIMEPOST, multipart);
curl_easy_perform(handle); /* post away! */
/* free the post data again */
curl_mime_free(multipart);
~~~
To post multiple files for a single form field, you must supply each file in
@ -559,8 +558,8 @@ multiple files posting is deprecated by RFC 7578, chapter 4.3.
To set the data source from an already opened FILE pointer, use:
~~~c
curl_mime_data_cb(part, filesize, (curl_read_callback)fread,
(curl_seek_callback)fseek, NULL, filepointer);
~~~
A deprecated curl_formadd(3) function is still supported in libcurl.
@ -574,25 +573,25 @@ parts, you post the whole form.
The MIME API example above is expressed as follows using this function:
~~~c
struct curl_httppost *post = NULL;
struct curl_httppost *last = NULL;
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "name",
CURLFORM_COPYCONTENTS, "daniel", CURLFORM_END);
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "project",
CURLFORM_COPYCONTENTS, "curl", CURLFORM_END);
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "logotype-image",
CURLFORM_FILECONTENT, "curl.png", CURLFORM_END);
/* Set the form info */
curl_easy_setopt(handle, CURLOPT_HTTPPOST, post);
curl_easy_perform(handle); /* post away! */
/* free the post data again */
curl_formfree(post);
~~~
Multipart formposts are chains of parts using MIME-style separators and
@ -605,19 +604,19 @@ shows how you set headers to one specific part when you add that to the post
handle:
~~~c
struct curl_slist *headers = NULL;
headers = curl_slist_append(headers, "Content-Type: text/xml");
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "logotype-image",
CURLFORM_FILECONTENT, "curl.xml",
CURLFORM_CONTENTHEADER, headers,
CURLFORM_END);
curl_easy_perform(handle); /* post away! */
curl_formfree(post); /* free post */
curl_slist_free_all(headers); /* free custom header list */
~~~
Since all options on an easy handle are "sticky", they remain the same until
@ -626,7 +625,7 @@ curl to go back to a plain GET request if you intend to do one as your next
request. You force an easy handle to go back to GET by using the
CURLOPT_HTTPGET(3) option:
~~~c
curl_easy_setopt(handle, CURLOPT_HTTPGET, 1L);
~~~
Just setting CURLOPT_POSTFIELDS(3) to "" or NULL does *not* stop libcurl
from doing a POST. It just makes it POST without any data to send!
@ -647,18 +646,18 @@ CURLOPT_MIMEPOST(3) instead of CURLOPT_HTTPPOST(3).
Here are some example of *curl_formadd* calls to MIME API sequences:
~~~c
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "id",
CURLFORM_COPYCONTENTS, "daniel", CURLFORM_END);
CURLFORM_CONTENTHEADER, headers,
CURLFORM_END);
~~~
becomes:
~~~c
part = curl_mime_addpart(multipart);
curl_mime_name(part, "id");
curl_mime_data(part, "daniel", CURL_ZERO_TERMINATED);
curl_mime_headers(part, headers, FALSE);
~~~
Setting the last curl_mime_headers(3) argument to TRUE would have caused
@ -666,16 +665,16 @@ the headers to be automatically released upon destruction of the multi-part, thus
saving a clean-up call to curl_slist_free_all(3).
~~~c
curl_formadd(&post, &last,
CURLFORM_PTRNAME, "logotype-image",
CURLFORM_FILECONTENT, "-",
CURLFORM_END);
~~~
becomes:
~~~c
part = curl_mime_addpart(multipart);
curl_mime_name(part, "logotype-image");
curl_mime_data_cb(part, (curl_off_t)-1, fread, fseek, NULL, stdin);
~~~
curl_mime_name(3) always copies the field name. The special filename "-" is
@ -684,79 +683,79 @@ source using fread(). The transfer is chunk-encoded since the data size is
unknown.
~~~c
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "datafile[]",
CURLFORM_FILE, "file1",
CURLFORM_FILE, "file2",
CURLFORM_END);
~~~
becomes:
~~~c
part = curl_mime_addpart(multipart);
curl_mime_name(part, "datafile[]");
curl_mime_filedata(part, "file1");
part = curl_mime_addpart(multipart);
curl_mime_name(part, "datafile[]");
curl_mime_filedata(part, "file2");
~~~
The deprecated multipart/mixed implementation of multiple files field is
translated to two distinct parts with the same name.
~~~c
curl_easy_setopt(handle, CURLOPT_READFUNCTION, myreadfunc);
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "stream",
CURLFORM_STREAM, arg,
CURLFORM_CONTENTLEN, (curl_off_t)datasize,
CURLFORM_FILENAME, "archive.zip",
CURLFORM_CONTENTTYPE, "application/zip",
CURLFORM_END);
~~~
becomes:
~~~c
part = curl_mime_addpart(multipart);
curl_mime_name(part, "stream");
curl_mime_data_cb(part, (curl_off_t)datasize,
myreadfunc, NULL, NULL, arg);
curl_mime_filename(part, "archive.zip");
curl_mime_type(part, "application/zip");
~~~
CURLOPT_READFUNCTION(3) callback is not used: it is replaced by directly
setting the part source data from the callback read function.
~~~c
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "memfile",
CURLFORM_BUFFER, "memfile.bin",
CURLFORM_BUFFERPTR, databuffer,
CURLFORM_BUFFERLENGTH, (long)sizeof(databuffer),
CURLFORM_END);
~~~
becomes:
~~~c
part = curl_mime_addpart(multipart);
curl_mime_name(part, "memfile");
curl_mime_data(part, databuffer, (curl_off_t)sizeof(databuffer));
curl_mime_filename(part, "memfile.bin");
~~~
curl_mime_data(3) always copies the initial data: the data buffer is thus
free for immediate reuse.
~~~c
curl_formadd(&post, &last,
CURLFORM_COPYNAME, "message",
CURLFORM_FILECONTENT, "msg.txt",
CURLFORM_END);
~~~
becomes:
~~~c
part = curl_mime_addpart(multipart);
curl_mime_name(part, "message");
curl_mime_filedata(part, "msg.txt");
curl_mime_filename(part, NULL);
~~~
Use of curl_mime_filedata(3) sets the remote filename as a side effect: it is
@ -780,11 +779,11 @@ Set the progress callback by using CURLOPT_PROGRESSFUNCTION(3). Pass a pointer
to a function that matches this prototype:
~~~c
int progress_callback(void *clientp,
double dltotal,
double dlnow,
double ultotal,
double ulnow);
~~~
If any of the input arguments is unknown, a 0 is provided. The first argument,
@ -801,13 +800,13 @@ The callbacks CANNOT be non-static class member functions
Example C++ code:
~~~c
class AClass {
static size_t write_data(void *ptr, size_t size, size_t nmemb,
void *ourpointer)
{
/* do what you want with the data */
}
}
~~~
# Proxies
@ -840,12 +839,12 @@ commands or even proper FTP directory listings.
To tell libcurl to use a proxy at a given port number:
~~~c
curl_easy_setopt(handle, CURLOPT_PROXY, "proxy-host.com:8080");
~~~
Some proxies require user authentication before allowing a request, and you
pass that information similar to this:
~~~c
curl_easy_setopt(handle, CURLOPT_PROXYUSERPWD, "user:password");
~~~
If you want to, you can specify the hostname only in the
CURLOPT_PROXY(3) option, and set the port number separately with
@ -854,7 +853,7 @@ CURLOPT_PROXYPORT(3).
Tell libcurl what kind of proxy it is with CURLOPT_PROXYTYPE(3) (if not,
it defaults to assuming an HTTP proxy):
~~~c
curl_easy_setopt(handle, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
curl_easy_setopt(handle, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS4);
~~~
## Environment Variables
@@ -921,7 +920,7 @@ rarely allowed.
Tell libcurl to use proxy tunneling like this:
~~~c
curl_easy_setopt(handle, CURLOPT_HTTPPROXYTUNNEL, 1L);
curl_easy_setopt(handle, CURLOPT_HTTPPROXYTUNNEL, 1L);
~~~
In fact, there might even be times when you want to do plain HTTP operations
using a tunnel like this, as it then enables you to operate on the remote
@@ -1031,7 +1030,7 @@ GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST(3)
is there for you. It is simple to use:
~~~c
curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST");
curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST");
~~~
When using the custom request, you change the request keyword of the actual
@@ -1046,17 +1045,17 @@ request, and you are free to pass any amount of extra headers that you
think fit. Adding headers is this easy:
~~~c
struct curl_slist *headers = NULL; /* init to NULL is important */
struct curl_slist *headers = NULL; /* init to NULL is important */
headers = curl_slist_append(headers, "Hey-server-hey: how are you?");
headers = curl_slist_append(headers, "X-silly-content: yes");
headers = curl_slist_append(headers, "Hey-server-hey: how are you?");
headers = curl_slist_append(headers, "X-silly-content: yes");
/* pass our list of custom made headers */
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
/* pass our list of custom made headers */
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
curl_easy_perform(handle); /* transfer http */
curl_easy_perform(handle); /* transfer http */
curl_slist_free_all(headers); /* free the header list */
curl_slist_free_all(headers); /* free the header list */
~~~
... and if you think some of the internally generated headers, such as Accept:
@@ -1064,8 +1063,8 @@ or Host: do not contain the data you want them to contain, you can replace
them by simply setting them too:
~~~c
headers = curl_slist_append(headers, "Accept: Agent-007");
headers = curl_slist_append(headers, "Host: munged.host.line");
headers = curl_slist_append(headers, "Accept: Agent-007");
headers = curl_slist_append(headers, "Host: munged.host.line");
~~~
## Delete Headers
@@ -1075,7 +1074,9 @@ header from being sent. For instance, if you want to completely prevent the
"Accept:" header from being sent, you can disable it with code similar to
this:
headers = curl_slist_append(headers, "Accept:");
~~~c
headers = curl_slist_append(headers, "Accept:");
~~~
Both replacing and canceling internal headers should be done with careful
consideration and you should be aware that you may violate the HTTP protocol
@@ -1096,7 +1097,9 @@ we support. libcurl speaks HTTP 1.1 by default. Some old servers do not like
getting 1.1-requests and when dealing with stubborn old things like that, you
can tell libcurl to use 1.0 instead by doing something like this:
curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
~~~c
curl_easy_setopt(handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
~~~
## FTP Custom Commands
@@ -1116,14 +1119,14 @@ correct remote directory.
A little example that deletes a given file before an operation:
~~~c
headers = curl_slist_append(headers, "DELE file-to-remove");
headers = curl_slist_append(headers, "DELE file-to-remove");
/* pass the list of custom commands to the handle */
curl_easy_setopt(handle, CURLOPT_QUOTE, headers);
/* pass the list of custom commands to the handle */
curl_easy_setopt(handle, CURLOPT_QUOTE, headers);
curl_easy_perform(handle); /* transfer ftp data! */
curl_easy_perform(handle); /* transfer ftp data! */
curl_slist_free_all(headers); /* free the header list */
curl_slist_free_all(headers); /* free the header list */
~~~
If you would instead want this operation (or chain of operations) to happen
@@ -1171,7 +1174,7 @@ To just send whatever cookie you want to a server, you can use
CURLOPT_COOKIE(3) to set a cookie string like this:
~~~c
curl_easy_setopt(handle, CURLOPT_COOKIE, "name1=var1; name2=var2;");
curl_easy_setopt(handle, CURLOPT_COOKIE, "name1=var1; name2=var2;");
~~~
In many cases, that is not enough. You might want to dynamically save whatever
@@ -1271,43 +1274,43 @@ Here is an example building an email message with an inline plain/html text
alternative and a file attachment encoded in base64:
~~~c
curl_mime *message = curl_mime_init(handle);
curl_mime *message = curl_mime_init(handle);
/* The inline part is an alternative proposing the html and the text
versions of the email. */
curl_mime *alt = curl_mime_init(handle);
/* The inline part is an alternative proposing the html and the text
versions of the email. */
curl_mime *alt = curl_mime_init(handle);
/* HTML message. */
curl_mimepart *part = curl_mime_addpart(alt);
curl_mime_data(part, "<html><body><p>This is HTML</p></body></html>",
CURL_ZERO_TERMINATED);
curl_mime_type(part, "text/html");
/* HTML message. */
curl_mimepart *part = curl_mime_addpart(alt);
curl_mime_data(part, "<html><body><p>This is HTML</p></body></html>",
CURL_ZERO_TERMINATED);
curl_mime_type(part, "text/html");
/* Text message. */
part = curl_mime_addpart(alt);
curl_mime_data(part, "This is plain text message",
CURL_ZERO_TERMINATED);
/* Text message. */
part = curl_mime_addpart(alt);
curl_mime_data(part, "This is plain text message",
CURL_ZERO_TERMINATED);
/* Create the inline part. */
part = curl_mime_addpart(message);
curl_mime_subparts(part, alt);
curl_mime_type(part, "multipart/alternative");
struct curl_slist *headers = curl_slist_append(NULL,
"Content-Disposition: inline");
curl_mime_headers(part, headers, TRUE);
/* Create the inline part. */
part = curl_mime_addpart(message);
curl_mime_subparts(part, alt);
curl_mime_type(part, "multipart/alternative");
struct curl_slist *headers = curl_slist_append(NULL,
"Content-Disposition: inline");
curl_mime_headers(part, headers, TRUE);
/* Add the attachment. */
part = curl_mime_addpart(message);
curl_mime_filedata(part, "manual.pdf");
curl_mime_encoder(part, "base64");
/* Add the attachment. */
part = curl_mime_addpart(message);
curl_mime_filedata(part, "manual.pdf");
curl_mime_encoder(part, "base64");
/* Build the mail headers. */
headers = curl_slist_append(NULL, "From: me@example.com");
headers = curl_slist_append(headers, "To: you@example.com");
/* Build the mail headers. */
headers = curl_slist_append(NULL, "From: me@example.com");
headers = curl_slist_append(headers, "To: you@example.com");
/* Set these into the easy handle. */
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(handle, CURLOPT_MIMEPOST, message);
/* Set these into the easy handle. */
curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
curl_easy_setopt(handle, CURLOPT_MIMEPOST, message);
~~~
It should be noted that appending a message to an IMAP directory requires
@@ -1412,7 +1415,7 @@ to figure out success on each individual transfer.
# SSL, Certificates and Other Tricks
[ seeding, passwords, keys, certificates, ENGINE, ca certs ]
[ seeding, passwords, keys, certificates, ENGINE, ca certs ]
# Sharing Data Between Easy Handles

@@ -23,43 +23,43 @@
###########################################################################
# Shared between CMakeLists.txt and Makefile.am
LIB_CURLX_CFILES = \
curlx/base64.c \
curlx/dynbuf.c \
curlx/fopen.c \
curlx/inet_ntop.c \
curlx/inet_pton.c \
curlx/multibyte.c \
curlx/nonblock.c \
curlx/strcopy.c \
curlx/strerr.c \
curlx/strparse.c \
curlx/timediff.c \
curlx/timeval.c \
LIB_CURLX_CFILES = \
curlx/base64.c \
curlx/dynbuf.c \
curlx/fopen.c \
curlx/inet_ntop.c \
curlx/inet_pton.c \
curlx/multibyte.c \
curlx/nonblock.c \
curlx/strcopy.c \
curlx/strerr.c \
curlx/strparse.c \
curlx/timediff.c \
curlx/timeval.c \
curlx/version_win32.c \
curlx/wait.c \
curlx/warnless.c \
curlx/wait.c \
curlx/warnless.c \
curlx/winapi.c
LIB_CURLX_HFILES = \
curlx/binmode.h \
curlx/base64.h \
curlx/curlx.h \
curlx/dynbuf.h \
curlx/fopen.h \
curlx/inet_ntop.h \
curlx/inet_pton.h \
curlx/multibyte.h \
curlx/nonblock.h \
curlx/snprintf.h \
curlx/strcopy.h \
curlx/strerr.h \
curlx/strparse.h \
curlx/timediff.h \
curlx/timeval.h \
LIB_CURLX_HFILES = \
curlx/binmode.h \
curlx/base64.h \
curlx/curlx.h \
curlx/dynbuf.h \
curlx/fopen.h \
curlx/inet_ntop.h \
curlx/inet_pton.h \
curlx/multibyte.h \
curlx/nonblock.h \
curlx/snprintf.h \
curlx/strcopy.h \
curlx/strerr.h \
curlx/strparse.h \
curlx/timediff.h \
curlx/timeval.h \
curlx/version_win32.h \
curlx/wait.h \
curlx/warnless.h \
curlx/wait.h \
curlx/warnless.h \
curlx/winapi.h
LIB_VAUTH_CFILES = \
@@ -116,22 +116,22 @@ LIB_VTLS_HFILES = \
vtls/wolfssl.h \
vtls/x509asn1.h
LIB_VQUIC_CFILES = \
LIB_VQUIC_CFILES = \
vquic/curl_ngtcp2.c \
vquic/curl_quiche.c \
vquic/vquic.c \
vquic/vquic.c \
vquic/vquic-tls.c
LIB_VQUIC_HFILES = \
LIB_VQUIC_HFILES = \
vquic/curl_ngtcp2.h \
vquic/curl_quiche.h \
vquic/vquic.h \
vquic/vquic_int.h \
vquic/vquic.h \
vquic/vquic_int.h \
vquic/vquic-tls.h
LIB_VSSH_CFILES = \
vssh/libssh.c \
vssh/libssh2.c \
LIB_VSSH_CFILES = \
vssh/libssh.c \
vssh/libssh2.c \
vssh/vssh.c
LIB_VSSH_HFILES = \

@@ -626,12 +626,12 @@
#cmakedefine CURL_OS ${CURL_OS}
/*
Note: SIZEOF_* variables are fetched with CMake through check_type_size().
As per CMake documentation on CheckTypeSize, C preprocessor code is
generated by CMake into SIZEOF_*_CODE. This is what we use in the
following statements.
Note: SIZEOF_* variables are fetched with CMake through check_type_size().
As per CMake documentation on CheckTypeSize, C preprocessor code is
generated by CMake into SIZEOF_*_CODE. This is what we use in the
following statements.
Reference: https://cmake.org/cmake/help/latest/module/CheckTypeSize.html
Reference: https://cmake.org/cmake/help/latest/module/CheckTypeSize.html
*/
/* The size of `int', as computed by sizeof. */

@@ -53,8 +53,7 @@
* sizeof(int) < 4. sizeof(int) > 4 is fine; all the world's not a VAX.
*/
/* int
* inet_pton4(src, dst)
/* int inet_pton4(src, dst)
* like inet_aton() but without all the hexadecimal and shorthand.
* return:
* 1 if `src' is a valid dotted quad, else 0.
@@ -102,8 +101,7 @@ static int inet_pton4(const char *src, unsigned char *dst)
return 1;
}
/* int
* inet_pton6(src, dst)
/* int inet_pton6(src, dst)
* convert presentation level address to network order binary form.
* return:
* 1 if `src' is a valid [RFC1884 2.2] address, else 0.
@@ -192,8 +190,7 @@ static int inet_pton6(const char *src, unsigned char *dst)
return 1;
}
/* int
* inet_pton(af, src, dst)
/* int inet_pton(af, src, dst)
* convert from presentation format (which usually means ASCII printable)
* to network format (which is usually some kind of binary format).
* return:

@@ -170,7 +170,7 @@ static void MD4_Final(unsigned char *result, MD4_CTX *ctx)
* MD4 Message-Digest Algorithm (RFC 1320).
*
* Homepage:
https://openwall.info/wiki/people/solar/software/public-domain-source-code/md4
* https://openwall.info/wiki/people/solar/software/public-domain-source-code/md4
*
* Author:
* Alexander Peslyak, better known as Solar Designer <solar at openwall.com>
@@ -179,8 +179,8 @@ static void MD4_Final(unsigned char *result, MD4_CTX *ctx)
* claimed, and the software is hereby placed in the public domain. In case
* this attempt to disclaim copyright and place the software in the public
* domain is deemed null and void, then the software is Copyright (c) 2001
* Alexander Peslyak and it is hereby released to the general public under the
* following terms:
* Alexander Peslyak and it is hereby released to the general public under
* the following terms:
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted.

@@ -256,17 +256,17 @@ static void my_md5_final(unsigned char *digest, void *in)
* MD5 Message-Digest Algorithm (RFC 1321).
*
* Homepage:
https://openwall.info/wiki/people/solar/software/public-domain-source-code/md5
* https://openwall.info/wiki/people/solar/software/public-domain-source-code/md5
*
* Author:
* Alexander Peslyak, better known as Solar Designer <solar at openwall.com>
*
* This software was written by Alexander Peslyak in 2001. No copyright is
* claimed, and the software is hereby placed in the public domain.
* In case this attempt to disclaim copyright and place the software in the
* public domain is deemed null and void, then the software is
* Copyright (c) 2001 Alexander Peslyak and it is hereby released to the
* general public under the following terms:
* claimed, and the software is hereby placed in the public domain. In case
* this attempt to disclaim copyright and place the software in the public
* domain is deemed null and void, then the software is Copyright (c) 2001
* Alexander Peslyak and it is hereby released to the general public under
* the following terms:
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted.

@@ -1022,8 +1022,8 @@ static int enginecheck(struct Curl_easy *data,
SSL_CTX* ctx,
const char *key_file,
const char *key_passwd)
#ifdef USE_OPENSSL_ENGINE
{
#ifdef USE_OPENSSL_ENGINE
EVP_PKEY *priv_key = NULL;
/* Implicitly use pkcs11 engine if none was provided and the
@@ -1066,22 +1066,20 @@ static int enginecheck(struct Curl_easy *data,
return 0;
}
return 1;
}
#else
{
(void)ctx;
(void)key_file;
(void)key_passwd;
failf(data, "SSL_FILETYPE_ENGINE not supported for private key");
return 0;
}
#endif
}
static int providercheck(struct Curl_easy *data,
SSL_CTX* ctx,
const char *key_file)
#ifdef OPENSSL_HAS_PROVIDERS
{
#ifdef OPENSSL_HAS_PROVIDERS
char error_buffer[256];
/* Implicitly use pkcs11 provider if none was provided and the
* key_file is a PKCS#11 URI */
@@ -1155,22 +1153,20 @@ static int providercheck(struct Curl_easy *data,
return 0;
}
return 1;
}
#else
{
(void)ctx;
(void)key_file;
failf(data, "SSL_FILETYPE_PROVIDER not supported for private key");
return 0;
}
#endif
}
static int engineload(struct Curl_easy *data,
SSL_CTX* ctx,
const char *cert_file)
{
/* ENGINE_CTRL_GET_CMD_FROM_NAME supported by OpenSSL, LibreSSL <=3.8.3 */
#if defined(USE_OPENSSL_ENGINE) && defined(ENGINE_CTRL_GET_CMD_FROM_NAME)
{
char error_buffer[256];
/* Implicitly use pkcs11 engine if none was provided and the
* cert_file is a PKCS#11 URI */
@@ -1228,21 +1224,19 @@ static int engineload(struct Curl_easy *data,
return 0;
}
return 1;
}
#else
{
(void)ctx;
(void)cert_file;
failf(data, "SSL_FILETYPE_ENGINE not supported for certificate");
return 0;
}
#endif
}
static int providerload(struct Curl_easy *data,
SSL_CTX* ctx,
const char *cert_file)
#ifdef OPENSSL_HAS_PROVIDERS
{
#ifdef OPENSSL_HAS_PROVIDERS
char error_buffer[256];
/* Implicitly use pkcs11 provider if none was provided and the
* cert_file is a PKCS#11 URI */
@@ -1306,15 +1300,13 @@ static int providerload(struct Curl_easy *data,
return 0;
}
return 1;
}
#else
{
(void)ctx;
(void)cert_file;
failf(data, "SSL_FILETYPE_PROVIDER not supported for certificate");
return 0;
}
#endif
}
static int pkcs12load(struct Curl_easy *data,
SSL_CTX* ctx,

@@ -7,7 +7,7 @@
History:
9-MAR-2004, Created this readme. file. Marty Kuhrt (MSK).
09-MAR-2004, Created this readme. file. Marty Kuhrt (MSK).
15-MAR-2004, MSK, Updated to reflect the new files in this directory.
14-FEB-2005, MSK, removed config-vms.h_with* file comments
10-FEB-2010, SMS. General update.

@@ -381,10 +381,9 @@ static const struct LongShort aliases[]= {
*
* Unit test 1394
*/
UNITTEST
ParameterError parse_cert_parameter(const char *cert_parameter,
char **certname,
char **passphrase)
UNITTEST ParameterError parse_cert_parameter(const char *cert_parameter,
char **certname,
char **passphrase)
{
size_t param_length = strlen(cert_parameter);
size_t span;

@@ -372,9 +372,9 @@ ParameterError getparameter(const char *flag, const char *nextarg,
int max_recursive);
#ifdef UNITTESTS
ParameterError parse_cert_parameter(const char *cert_parameter,
char **certname,
char **passphrase);
UNITTEST ParameterError parse_cert_parameter(const char *cert_parameter,
char **certname,
char **passphrase);
UNITTEST ParameterError GetSizeParameter(const char *arg, curl_off_t *out);
#endif

@@ -50,7 +50,7 @@ lib%TESTNUMBER
CURLOPT_PROXYHEADER is ignored CURLHEADER_UNIFIED
</name>
<command>
http://the.old.moo.%TESTNUMBER:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
http://the.old.moo.%TESTNUMBER:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
</command>
<features>
proxy

@@ -52,7 +52,7 @@ lib%TESTNUMBER
CURLOPT_PROXYHEADER: separate host/proxy headers
</name>
<command>
http://the.old.moo.%TESTNUMBER:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
http://the.old.moo.%TESTNUMBER:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
</command>
<features>
proxy

@@ -51,7 +51,7 @@ lib%TESTNUMBER
Same headers with CURLOPT_HEADEROPT == CURLHEADER_UNIFIED
</name>
<command>
http://the.old.moo.%TESTNUMBER:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
http://the.old.moo.%TESTNUMBER:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
</command>
<features>
proxy

@@ -42,7 +42,7 @@ lib%TESTNUMBER
Separately specified proxy/server headers sent in a proxy GET
</name>
<command>
http://the.old.moo:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
http://the.old.moo:%HTTPPORT/%TESTNUMBER %HOSTIP:%PROXYPORT
</command>
<features>
proxy

@@ -30,7 +30,7 @@ lib%TESTNUMBER
HTTP request-injection in URL sent over proxy
</name>
<command>
"http://the.old.moo:%HTTPPORT/%TESTNUMBER" %HOSTIP:%PROXYPORT
"http://the.old.moo:%HTTPPORT/%TESTNUMBER" %HOSTIP:%PROXYPORT
</command>
<features>
proxy

@@ -12,12 +12,12 @@ HTTP Digest auth
<!--
Explanation for the duplicate 400 requests:
Explanation for the duplicate 400 requests:
libcurl does not detect that a given Digest password is wrong already on the
first 401 response (as the data400 gives). libcurl will instead consider the
new response just as a duplicate and it sends another and detects the auth
problem on the second 401 response!
libcurl does not detect that a given Digest password is wrong already on the
first 401 response (as the data400 gives). libcurl will instead consider the
new response just as a duplicate and it sends another and detects the auth
problem on the second 401 response!
-->

@@ -17,12 +17,12 @@ ensure that the order does not matter. -->
<!--
Explanation for the duplicate 400 requests:
Explanation for the duplicate 400 requests:
libcurl does not detect that a given Digest password is wrong already on the
first 401 response (as the data400 gives). libcurl will instead consider the
new response just as a duplicate and it sends another and detects the auth
problem on the second 401 response!
libcurl does not detect that a given Digest password is wrong already on the
first 401 response (as the data400 gives). libcurl will instead consider the
new response just as a duplicate and it sends another and detects the auth
problem on the second 401 response!
-->

@@ -30,7 +30,7 @@ IMAP custom FETCH with larger literal response (~7KB)
</name>
# The quoted string contains {50} which must not be parsed as a literal
<command>
imap://%HOSTIP:%IMAPPORT/%TESTNUMBER/ -u user:secret -X 'FETCH 456 ("fake {50}" BODY[TEXT])'
imap://%HOSTIP:%IMAPPORT/%TESTNUMBER/ -u user:secret -X 'FETCH 456 ("fake {50}" BODY[TEXT])'
</command>
</client>

@@ -36,7 +36,7 @@ http
<name>
--remove-on-error with --no-clobber and an added number
</name>
<command option="no-output">
<command option="no-output">
http://%HOSTIP:%HTTPPORT/%TESTNUMBER -o %LOGDIR/save --remove-on-error --no-clobber
</command>
</client>

View File

@@ -37,7 +37,7 @@ imap
IMAP custom request does not check continuation data
</name>
<command>
imap://%HOSTIP:%IMAPPORT/%TESTNUMBER/ -u user:secret -X 'FETCH 123 BODY[1]'
imap://%HOSTIP:%IMAPPORT/%TESTNUMBER/ -u user:secret -X 'FETCH 123 BODY[1]'
</command>
</client>

View File

@@ -28,48 +28,48 @@
static CURLcode test_tool1622(const char *arg)
{
UNITTEST_BEGIN_SIMPLE
{
char buffer[9];
curl_off_t secs;
int i;
static const curl_off_t check[] = {
/* bytes to check */
131072,
12645826,
1073741824,
12938588979,
1099445657078333,
0 /* end of list */
};
puts("time2str");
for(i = 0, secs = 0; i < 63; i++) {
time2str(buffer, sizeof(buffer), secs);
curl_mprintf("%20" FMT_OFF_T " - %s\n", secs, buffer);
if(strlen(buffer) != 8) {
curl_mprintf("^^ was too long!\n");
}
secs *= 2;
secs++;
char buffer[9];
curl_off_t secs;
int i;
static const curl_off_t check[] = {
/* bytes to check */
131072,
12645826,
1073741824,
12938588979,
1099445657078333,
0 /* end of list */
};
puts("time2str");
for(i = 0, secs = 0; i < 63; i++) {
time2str(buffer, sizeof(buffer), secs);
curl_mprintf("%20" FMT_OFF_T " - %s\n", secs, buffer);
if(strlen(buffer) != 8) {
curl_mprintf("^^ was too long!\n");
}
puts("max5data");
for(i = 0, secs = 0; i < 63; i++) {
max5data(secs, buffer, sizeof(buffer));
curl_mprintf("%20" FMT_OFF_T " - %s\n", secs, buffer);
if(strlen(buffer) != 5) {
curl_mprintf("^^ was too long!\n");
}
secs *= 2;
secs++;
secs *= 2;
secs++;
}
puts("max5data");
for(i = 0, secs = 0; i < 63; i++) {
max5data(secs, buffer, sizeof(buffer));
curl_mprintf("%20" FMT_OFF_T " - %s\n", secs, buffer);
if(strlen(buffer) != 5) {
curl_mprintf("^^ was too long!\n");
}
for(i = 0; check[i]; i++) {
secs = check[i];
max5data(secs, buffer, sizeof(buffer));
curl_mprintf("%20" FMT_OFF_T " - %s\n", secs, buffer);
if(strlen(buffer) != 5) {
curl_mprintf("^^ was too long!\n");
}
secs *= 2;
secs++;
}
for(i = 0; check[i]; i++) {
secs = check[i];
max5data(secs, buffer, sizeof(buffer));
curl_mprintf("%20" FMT_OFF_T " - %s\n", secs, buffer);
if(strlen(buffer) != 5) {
curl_mprintf("^^ was too long!\n");
}
}
UNITTEST_END_SIMPLE
}

@@ -34,94 +34,93 @@ struct check1623 {
static CURLcode test_tool1623(const char *arg)
{
UNITTEST_BEGIN_SIMPLE
{
int i;
static const struct check1623 check[] = {
{ "0", 0, PARAM_OK},
{ "00", 0, PARAM_OK},
{ "000", 0, PARAM_OK},
{ "1", 1, PARAM_OK},
{ "1b", 1, PARAM_OK},
{ "99B", 99, PARAM_OK},
{ "2", 2, PARAM_OK},
{ "3", 3, PARAM_OK},
{ "4", 4, PARAM_OK},
{ "5", 5, PARAM_OK},
{ "6", 6, PARAM_OK},
{ "7", 7, PARAM_OK},
{ "77", 77, PARAM_OK},
{ "8", 8, PARAM_OK},
{ "9", 9, PARAM_OK},
{ "10", 10, PARAM_OK},
{ "010", 10, PARAM_OK},
{ "000000000000000000000000000000000010", 10, PARAM_OK},
{ "1k", 1024, PARAM_OK},
{ "2K", 2048, PARAM_OK},
{ "3k", 3072, PARAM_OK},
{ "4K", 4096, PARAM_OK},
{ "5k", 5120, PARAM_OK},
{ "6K", 6144, PARAM_OK},
{ "7k", 7168, PARAM_OK},
{ "8K", 8192, PARAM_OK},
{ "9k", 9216, PARAM_OK},
{ "10K", 10240, PARAM_OK},
{ "20M", 20971520, PARAM_OK},
{ "30G", 32212254720, PARAM_OK},
{ "40T", 43980465111040, PARAM_OK},
{ "50P", 56294995342131200, PARAM_OK},
{ "1.1k", 1126, PARAM_OK},
{ "1.01k", 1034, PARAM_OK},
{ "1.001k", 1025, PARAM_OK},
{ "1.0001k", 1024, PARAM_OK},
{ "22.1m", 23173529, PARAM_OK},
{ "22.01m", 23079157, PARAM_OK},
{ "22.001m", 23069720, PARAM_OK},
{ "22.0001m", 23068776, PARAM_OK},
{ "22.00001m", 23068682, PARAM_OK},
{ "22.000001m", 23068673, PARAM_OK},
{ "22.0000001m", 23068672, PARAM_OK},
{ "22.000000001m", 23068672, PARAM_OK},
{ "3.4", 0, PARAM_BAD_USE},
{ "3.14b", 0, PARAM_BAD_USE},
{ "5000.9P", 5630512844129278361, PARAM_OK},
{ "5000.99P", 5630614175120894197, PARAM_OK},
{ "5000.999P", 5630624308220055781, PARAM_OK},
{ "5000.9999P", 5630625321529969316, PARAM_OK},
{ "8191P", 9222246136947933184, PARAM_OK},
{ "8191.9999999P", 9223372036735343194, PARAM_OK},
{ "8192P", 0, PARAM_NUMBER_TOO_LARGE},
{ "9223372036854775807", 9223372036854775807, PARAM_OK},
{ "9223372036854775808", 0, PARAM_NUMBER_TOO_LARGE},
{ "a", 0, PARAM_BAD_NUMERIC},
{ "-2", 0, PARAM_BAD_NUMERIC},
{ "+2", 0, PARAM_BAD_NUMERIC},
{ "2,2k", 0, PARAM_BAD_USE},
{ NULL, 0, PARAM_OK } /* end of list */
};
for(i = 0; check[i].input; i++) {
bool ok = FALSE;
curl_off_t output = 0;
ParameterError err =
GetSizeParameter(check[i].input, &output);
if(err != check[i].err)
curl_mprintf("'%s' unexpectedly returned %d \n",
check[i].input, err);
else if(check[i].amount != output)
curl_mprintf("'%s' unexpectedly gave %" FMT_OFF_T "\n",
check[i].input, output);
else {
int i;
static const struct check1623 check[] = {
{ "0", 0, PARAM_OK },
{ "00", 0, PARAM_OK },
{ "000", 0, PARAM_OK },
{ "1", 1, PARAM_OK },
{ "1b", 1, PARAM_OK },
{ "99B", 99, PARAM_OK },
{ "2", 2, PARAM_OK },
{ "3", 3, PARAM_OK },
{ "4", 4, PARAM_OK },
{ "5", 5, PARAM_OK },
{ "6", 6, PARAM_OK },
{ "7", 7, PARAM_OK },
{ "77", 77, PARAM_OK },
{ "8", 8, PARAM_OK },
{ "9", 9, PARAM_OK },
{ "10", 10, PARAM_OK },
{ "010", 10, PARAM_OK },
{ "000000000000000000000000000000000010", 10, PARAM_OK },
{ "1k", 1024, PARAM_OK },
{ "2K", 2048, PARAM_OK },
{ "3k", 3072, PARAM_OK },
{ "4K", 4096, PARAM_OK },
{ "5k", 5120, PARAM_OK },
{ "6K", 6144, PARAM_OK },
{ "7k", 7168, PARAM_OK },
{ "8K", 8192, PARAM_OK },
{ "9k", 9216, PARAM_OK },
{ "10K", 10240, PARAM_OK },
{ "20M", 20971520, PARAM_OK },
{ "30G", 32212254720, PARAM_OK },
{ "40T", 43980465111040, PARAM_OK },
{ "50P", 56294995342131200, PARAM_OK },
{ "1.1k", 1126, PARAM_OK },
{ "1.01k", 1034, PARAM_OK },
{ "1.001k", 1025, PARAM_OK },
{ "1.0001k", 1024, PARAM_OK },
{ "22.1m", 23173529, PARAM_OK },
{ "22.01m", 23079157, PARAM_OK },
{ "22.001m", 23069720, PARAM_OK },
{ "22.0001m", 23068776, PARAM_OK },
{ "22.00001m", 23068682, PARAM_OK },
{ "22.000001m", 23068673, PARAM_OK },
{ "22.0000001m", 23068672, PARAM_OK },
{ "22.000000001m", 23068672, PARAM_OK },
{ "3.4", 0, PARAM_BAD_USE },
{ "3.14b", 0, PARAM_BAD_USE },
{ "5000.9P", 5630512844129278361, PARAM_OK },
{ "5000.99P", 5630614175120894197, PARAM_OK },
{ "5000.999P", 5630624308220055781, PARAM_OK },
{ "5000.9999P", 5630625321529969316, PARAM_OK },
{ "8191P", 9222246136947933184, PARAM_OK },
{ "8191.9999999P", 9223372036735343194, PARAM_OK },
{ "8192P", 0, PARAM_NUMBER_TOO_LARGE },
{ "9223372036854775807", 9223372036854775807, PARAM_OK },
{ "9223372036854775808", 0, PARAM_NUMBER_TOO_LARGE },
{ "a", 0, PARAM_BAD_NUMERIC },
{ "-2", 0, PARAM_BAD_NUMERIC },
{ "+2", 0, PARAM_BAD_NUMERIC },
{ "2,2k", 0, PARAM_BAD_USE },
{ NULL, 0, PARAM_OK } /* end of list */
};
for(i = 0; check[i].input; i++) {
bool ok = FALSE;
curl_off_t output = 0;
ParameterError err = GetSizeParameter(check[i].input, &output);
if(err != check[i].err)
curl_mprintf("'%s' unexpectedly returned %d \n",
check[i].input, err);
else if(check[i].amount != output)
curl_mprintf("'%s' unexpectedly gave %" FMT_OFF_T "\n",
check[i].input, output);
else {
#if 0 /* enable for debugging */
if(err)
curl_mprintf("'%s' returned %d\n", check[i].input, err);
else
curl_mprintf("'%s' == %" FMT_OFF_T "\n", check[i].input, output);
if(err)
curl_mprintf("'%s' returned %d\n", check[i].input, err);
else
curl_mprintf("'%s' == %" FMT_OFF_T "\n", check[i].input, output);
#endif
ok = TRUE;
}
if(!ok)
unitfail++;
ok = TRUE;
}
if(!ok)
unitfail++;
}
UNITTEST_END_SIMPLE
}