badwords: rework exceptions, fix many of them

Also:
- support per-directory and per-upper-directory whitelist entries.
- convert badlist input grep tweak into the above format.
  (except for 'And' which had just a few hits.)
- fix many code exceptions, but do not enforce.
  (there also remain about 350 'will' uses in lib)
- fix badwords in example code, drop exceptions.
- badwords-all: convert to Perl.
  To make it usable from CMake.
- FAQ: reword to not use 'will'. Drop exception.

Closes #20886
Viktor Szakats 2026-03-11 10:17:10 +01:00
parent 11c14b5ca5
commit 435eabeac8
89 changed files with 367 additions and 344 deletions

@ -86,7 +86,7 @@ curl is not a program for a single operating system. curl exists, compiles,
builds and runs under a wide range of operating systems, including all modern
Unixes (and a bunch of older ones too), Windows, Amiga, OS/2, macOS, QNX etc.
## When will you make curl do ... ?
## When would you make curl do ... ?
We love suggestions of what to change in order to make curl and libcurl
better. We do however believe in a few rules when it comes to the future of
@ -102,7 +102,7 @@ redirected to another file for the next program to interpret.
We focus on protocol related issues and improvements. If you want to do more
with the supported protocols than curl currently does, chances are good we
will agree. If you want to add more protocols, we may agree.
would agree. If you want to add more protocols, we may agree.
If you want someone else to do all the work while you wait for us to implement
it for you, that is not a friendly attitude. We spend a considerable time
@ -111,7 +111,7 @@ you should consider trading in some of your time and effort in return. Go to
the [GitHub repository](https://github.com/curl/curl), fork the project,
and create pull requests with your proposed changes.
If you write the code, chances are better that it will get into curl faster.
If you write the code, chances are better that it gets into curl faster.
## Who makes curl?
@ -135,7 +135,7 @@ not controlled by nor supervised in any way by the curl project.
We get help from companies. Haxx provides website, bandwidth, mailing lists
etc, GitHub hosts [the primary git repository](https://github.com/curl/curl)
and other services like the bug tracker. Also again, some companies have
sponsored certain parts of the development in the past and I hope some will
sponsored certain parts of the development in the past and I hope some
continue to do so in the future.
If you want to [support our project](https://curl.se/sponsors.html), consider
@ -153,7 +153,7 @@ Our project name curl has been in effective use since 1998. We were not the
first computer related project to use the name *curl* and do not claim any
rights to the name.
We recognize that we will be living in parallel with curl.com and wish them
We recognize that we are living in parallel with curl.com and wish them
every success.
## I have a problem, who do I mail?
@ -248,8 +248,8 @@ An incomprehensible description of the two numbers above is available on
We strongly encourage you to submit changes and improvements directly as [pull
requests on GitHub](https://github.com/curl/curl/pulls).
If you for any reason cannot or will not deal with GitHub, send your patch to
the curl-library mailing list. We are many subscribers there and there are
If you cannot or choose not to engage with GitHub, send your patch
to the curl-library mailing list. We are many subscribers there and there are
lots of people who can review patches, comment on them and receive them
properly.
@ -304,7 +304,7 @@ library comparison](https://curl.se/docs/ssl-compared.html).
The curl tool that is shipped as an integrated component of Windows 10 and
Windows 11 is managed by Microsoft. If you were to delete the file or replace
it with a newer version downloaded from [the curl
website](https://curl.se/windows/), then Windows Update will cease to work on
website](https://curl.se/windows/), then Windows Update ceases to work on
your system.
There is no way to independently force an upgrade of the curl.exe that is part
@ -346,7 +346,7 @@ option.
You cannot arbitrarily use `-F` or `-d`, the choice between `-F` or `-d`
depends on the HTTP operation you need curl to do and what the web server that
will receive your post expects.
receives your post expects.
If the form you are trying to submit uses the type 'multipart/form-data',
then and only then you must use the -F type. In all the most common cases,
@ -448,7 +448,7 @@ To make a simple HTTP POST with `text/xml` as content-type, do something like:
## Why do FTP-specific features over HTTP proxy fail?
Because when you use an HTTP proxy, the protocol spoken on the network will be
Because when you use an HTTP proxy, the protocol spoken on the network is
HTTP, even if you specify an FTP URL. This effectively means that you normally
cannot use FTP-specific features such as FTP upload and FTP quote etc.
@ -480,8 +480,8 @@ single quotes. To escape inner double quotes seems to require a
backslash-backtick escape sequence and the outer quotes as double quotes.
Please study the documentation for your particular environment. Examples in
the curl docs will use a mix of both of these as shown above. You must adjust
them to work in your environment.
the curl docs use a mix of both of these as shown above. You must adjust them
to work in your environment.
Remember that curl works and runs on more operating systems than most single
individuals have ever tried.
@ -585,7 +585,7 @@ but use the target IP address in the URL:
curl --header "Host: www.example.com" https://somewhere.example/
You can also opt to add faked hostname entries to curl with the --resolve
option. That has the added benefit that things like redirects will also work
option. That has the added benefit that things like redirects also work
properly. The above operation would instead be done as:
curl --resolve www.example.com:80:127.0.0.1 https://www.example.com/
@ -615,8 +615,8 @@ how to speak that protocol) or if it was explicitly disabled. curl can be
built to only support a given set of protocols, and the rest would then be
disabled or not supported.
Note that this error will also occur if you pass a wrongly spelled protocol
part as in `htpts://example.com` or as in the less evident case if you prefix
Note that this error also occurs if you pass a wrongly spelled protocol part
as in `htpts://example.com` or as in the less evident case if you prefix
the protocol part with a space as in `" https://example.com/"`.
## curl `-X` gives me HTTP problems
@ -625,8 +625,8 @@ In normal circumstances, `-X` should hardly ever be used.
By default you use curl without explicitly saying which request method to use
when the URL identifies an HTTP transfer. If you pass in a URL like `curl
https://example.com` it will use GET. If you use `-d` or `-F`, curl will use
POST, `-I` will cause a HEAD and `-T` will make it a PUT.
https://example.com` it uses GET. If you use `-d` or `-F`, curl uses POST,
`-I` causes a HEAD and `-T` makes it a PUT.
If for whatever reason you are not happy with these default choices that curl
does for you, you can override those request methods by specifying `-X
@ -643,8 +643,7 @@ the actual string sent in the request, but that may of course trigger a
different set of events.
Accordingly, by using `-XPOST` on a command line that for example would follow
a 303 redirect, you will effectively prevent curl from behaving correctly. Be
aware.
a 303 redirect, you effectively prevent curl from behaving correctly. Be aware.
# Running
@ -685,8 +684,7 @@ them for the curl URL *globbing* system), use the `-g`/`--globoff` option:
curl asks remote servers for the page you specify. If the page does not exist
at the server, the HTTP protocol defines how the server should respond and
that means that headers and a page will be returned. That is how HTTP
works.
that means that headers and a page get returned. That is how HTTP works.
By using the `--fail` option you can tell curl explicitly to not get any data
if the HTTP return code does not say success.
@ -708,7 +706,7 @@ The request requires user authentication.
### 403 Forbidden
The server understood the request, but is refusing to fulfill it.
Authorization will not help and the request SHOULD NOT be repeated.
Authorization cannot help and the request SHOULD NOT be repeated.
### 404 Not Found
@ -749,9 +747,8 @@ This problem has two sides:
The first part is to avoid having clear-text passwords in the command line so
that they do not appear in *ps* outputs and similar. That is easily avoided by
using the `-K` option to tell curl to read parameters from a file or stdin to
which you can pass the secret info. curl itself will also attempt to hide the
given password by blanking out the option - this does not work on all
platforms.
which you can pass the secret info. curl itself also attempts to hide the given
password by blanking out the option - this does not work on all platforms.
To keep the passwords in your account secret from the rest of the world is
not a task that curl addresses. You could of course encrypt them somehow to
@ -842,7 +839,7 @@ curl supports HTTP redirects well (see a previous question above). Browsers
generally support at least two other ways to perform redirects that curl does
not:
Meta tags. You can write an HTML tag that will cause the browser to redirect
Meta tags. You can write an HTML tag that causes the browser to redirect
to another given URL after a certain time.
JavaScript. You can write a JavaScript program embedded in an HTML page that
@ -858,13 +855,13 @@ curl supports FTPS (sometimes known as FTP-SSL) both implicit and explicit
mode.
When a URL is used that starts with `FTPS://`, curl assumes implicit SSL on
the control connection and will therefore immediately connect and try to speak
the control connection and therefore immediately connects and tries to speak
SSL. `FTPS://` connections default to port 990.
To use explicit FTPS, you use an `FTP://` URL and the `--ssl-reqd` option (or
one of its related flavors). This is the most common method, and the one
mandated by RFC 4217. This kind of connection will then of course use the
standard FTP port 21 by default.
mandated by RFC 4217. This kind of connection then of course uses the standard
FTP port 21 by default.
## My HTTP POST or PUT requests are slow
@ -874,7 +871,7 @@ server to deny the operation early so that libcurl can bail out before having
to send any data. This is useful in authentication cases and others.
Many servers do not implement the `Expect:` stuff properly and if the server
does not respond (positively) within 1 second libcurl will continue and send
does not respond (positively) within 1 second libcurl continues and sends
off the data anyway.
You can disable libcurl's use of the `Expect:` header the same way you disable
@ -883,8 +880,8 @@ any header, using `-H` / `CURLOPT_HTTPHEADER`, or by forcing it to use HTTP
## Non-functional connect timeouts
In most Windows setups having a timeout longer than 21 seconds make no
difference, as it will only send 3 TCP SYN packets and no more. The second
In most Windows setups having a timeout longer than 21 seconds makes no
difference, as it only sends 3 TCP SYN packets and no more. The second
packet is sent three seconds after the first and the third six seconds after
the second. No more than three packets are sent, no matter how long the
timeout is set.
@ -894,8 +891,8 @@ page](https://support.microsoft.com/topic/hotfix-enables-the-configuration-of-th
Also, even on non-Windows systems there may run a firewall or anti-virus
software or similar that accepts the connection but does not actually do
anything else. This will make (lib)curl to consider the connection connected
and thus the connect timeout will not trigger.
anything else. This makes (lib)curl consider the connection connected
and thus the connect timeout does not trigger.
## file:// URLs containing drive letters (Windows, NetWare)
@ -904,15 +901,15 @@ format:
file://D:/blah.txt
you will find that even if `D:\blah.txt` does exist, curl returns a 'file not
you find that even if `D:\blah.txt` does exist, curl returns a 'file not
found' error.
According to [RFC 1738](https://datatracker.ietf.org/doc/html/rfc1738),
`file://` URLs must contain a host component, but it is ignored by most
implementations. In the above example, `D:` is treated as the host component,
and is taken away. Thus, curl tries to open `/blah.txt`. If your system is
installed to drive C:, that will resolve to `C:\blah.txt`, and if that does
not exist you will get the not found error.
installed to drive C:, that resolves to `C:\blah.txt`, and if that does
not exist you get the not found error.
To fix this problem, use `file://` URLs with *three* leading slashes:
@ -930,8 +927,8 @@ In either case, curl should now be looking for the correct file.
Unplugging a cable is not an error situation. The TCP/IP protocol stack was
designed to be fault tolerant, so even though there may be a physical break
somewhere the connection should not be affected, but possibly delayed.
Eventually, the physical break will be fixed or the data will be re-routed
around the physical problem through another path.
Eventually, the physical break gets fixed or the data re-routed around
the physical problem through another path.
In such cases, the TCP/IP stack is responsible for detecting when the network
connection is irrevocably lost. Since with some protocols it is perfectly
@ -942,12 +939,12 @@ in the TCP/IP stack which makes it periodically probe the connection to make
sure it is still available to send data. That should reliably detect any
TCP/IP network failure.
TCP keep alive will not detect the network going down before the TCP/IP
TCP keep alive does not detect the network going down before the TCP/IP
connection is established (e.g. during a DNS lookup) or using protocols that
do not use TCP. To handle those situations, curl offers a number of timeouts
on its own. `--speed-limit`/`--speed-time` will abort if the data transfer
rate falls too low, and `--connect-timeout` and `--max-time` can be used to
put an overall timeout on the connection phase or the entire transfer.
on its own. `--speed-limit`/`--speed-time` aborts if the data transfer rate
falls too low, and `--connect-timeout` and `--max-time` can be used to put
an overall timeout on the connection phase or the entire transfer.
A libcurl-using application running in a known physical environment (e.g. an
embedded device with only a single network connection) may want to act
@ -959,8 +956,8 @@ OS-specific mechanism, then signaling libcurl to abort.
Correct. Unless you use `-f` (`--fail`) or `--fail-with-body`.
When doing HTTP transfers, curl will perform exactly what you are asking it to
do and if successful it will not return an error. You can use curl to test
When doing HTTP transfers, curl performs exactly what you are asking it to
do and if successful it does not return an error. You can use curl to test
your web server's "file not found" page (that gets 404 back), you can use it
to check your authentication protected webpages (that gets a 401 back) and so
on.
@ -986,9 +983,9 @@ extract the exact response code that was returned in the response.
Yes.
We have written the libcurl code specifically adjusted for multi-threaded
programs. libcurl will use thread-safe functions instead of non-safe ones if
your system has such. Note that you must never share the same handle in
multiple threads.
programs. libcurl uses thread-safe functions instead of non-safe ones if your
system has such. Note that you must never share the same handle in multiple
threads.
There may be some exceptions to thread safety depending on how libcurl was
built. Please review [the guidelines for thread
@ -1004,7 +1001,7 @@ whatever you want. You do not have to write the received data to a file.
One solution to this problem could be to have a pointer to a struct that you
pass to the callback function. You set the pointer using the CURLOPT_WRITEDATA
option. Then that pointer will be passed to the callback instead of a FILE *
option. Then that pointer is passed to the callback instead of a FILE *
to a file:
~~~c
@ -1036,8 +1033,8 @@ WriteMemoryCallback(void *ptr, size_t size, size_t nmemb, void *data)
libcurl has excellent support for transferring multiple files. You should
repeatedly set new URLs with `curl_easy_setopt()` and then transfer it with
`curl_easy_perform()`. The handle you get from curl_easy_init() is not only
reusable, but you are even encouraged to reuse it if you can, as that will
enable libcurl to use persistent connections.
reusable, but you are even encouraged to reuse it if you can, as that
enables libcurl to use persistent connections.
## Does libcurl do Winsock initialization on Win32 systems?
@ -1055,15 +1052,15 @@ all it does is write the data to the specified FILE *. Similarly, if you use
## What about Keep-Alive or persistent connections?
curl and libcurl have excellent support for persistent connections when
transferring several files from the same server. curl will attempt to reuse
transferring several files from the same server. curl attempts to reuse
connections for all URLs specified on the same command line/config file, and
libcurl will reuse connections for all transfers that are made using the same
libcurl reuses connections for all transfers that are made using the same
libcurl handle.
When you use the easy interface the connection cache is kept within the easy
handle. If you instead use the multi interface, the connection cache will be
kept within the multi handle and will be shared among all the easy handles
that are used within the same multi handle.
handle. If you instead use the multi interface, the connection cache is kept
within the multi handle and shared among all the easy handles that are used
within the same multi handle.
## Link errors when building libcurl on Windows
@ -1076,7 +1073,7 @@ options to the command line compiler. `/MD` (linking against `MSVCRT.dll`)
seems to be the most commonly used option.
When building an application that uses the static libcurl library, you must
add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker will look for
add `-DCURL_STATICLIB` to your `CFLAGS`. Otherwise the linker looks for
dynamic import symbols. If you are using Visual Studio, you need to instead
add `CURL_STATICLIB` in the "Preprocessor Definitions" section.
@ -1110,14 +1107,14 @@ They are usually:
* Adjust the system's config to check for libs in the directory where you have
put the library (like Linux's `/etc/ld.so.conf`)
`man ld.so` and `man ld` will tell you more details
`man ld.so` and `man ld` tell you more details
## How does libcurl resolve hostnames?
libcurl supports a large number of name resolve functions. One of them is
picked at build-time and will be used unconditionally. Thus, if you want to
change name resolver function you must rebuild libcurl and tell it to use a
different function.
picked at build-time and used unconditionally. Thus, if you want to change
name resolver function you must rebuild libcurl and tell it to use
a different function.
### The non-IPv6 resolver
@ -1151,7 +1148,7 @@ set `CURLOPT_WRITEDATA` to a different FILE * handle.
## How do I make libcurl not receive the whole HTTP response?
You make the write callback (or progress callback) return an error and libcurl
will then abort the transfer.
then aborts the transfer.
## Can I make libcurl fake or hide my real IP address?
@ -1160,21 +1157,21 @@ imply sending IP packets with a made-up source address, and then you normally
get a problem with receiving the packet sent back as they would then not be
routed to you.
If you use a proxy to access remote sites, the sites will not see your local
If you use a proxy to access remote sites, the sites do not see your local
IP address but instead the address of the proxy.
Also note that on many networks NATs or other IP-munging techniques are used
that makes you see and use a different IP address locally than what the remote
server will see you coming from. You may also consider using
server sees you coming from. You may also consider using
[Tor](https://www.torproject.org/).
## How do I stop an ongoing transfer?
With the easy interface you make sure to return the correct error code from
one of the callbacks, but none of them are instant. There is no function you
can call from another thread or similar that will stop it immediately.
can call from another thread or similar that stops it immediately.
Instead, you need to make sure that one of the callbacks you use returns an
appropriate value that will stop the transfer. Suitable callbacks that you can
appropriate value that stops the transfer. Suitable callbacks that you can
do this with include the progress callback, the read callback and the write
callback.
@ -1204,7 +1201,7 @@ curl_easy_setopt(hcurl, CURLOPT_WRITEDATA, this);
## How do I get an FTP directory listing?
If you end the FTP URL you request with a slash, libcurl will provide you with
If you end the FTP URL you request with a slash, libcurl provides you with
a directory listing of that given directory. You can also set
`CURLOPT_CUSTOMREQUEST` to alter what exact listing command libcurl would use
to list the files.
@ -1212,7 +1209,7 @@ to list the files.
The follow-up question tends to be how is a program supposed to parse the
directory listing. How does it know what's a file and what's a directory and
what's a symlink etc. If the FTP server supports the `MLSD` command then it
will return data in a machine-readable format that can be parsed for type. The
returns data in a machine-readable format that can be parsed for type. The
types are specified by RFC 3659 section 7.5.1. If `MLSD` is not supported then
you have to work with what you are given. The `LIST` output format is entirely
at the server's own liking and the `NLST` output does not reveal any types and
@ -1259,17 +1256,17 @@ proven for many years. There is no need for you to reinvent them.
## Does libcurl use threads?
No, libcurl will execute in the same thread you call it in. All
callbacks will be called in the same thread as the one you call libcurl in.
No, libcurl executes in the same thread you call it in. All callbacks are
called in the same thread as the one you call libcurl in.
If you want to avoid your thread to be blocked by the libcurl call, you make
sure you use the non-blocking multi API which will do transfers
sure you use the non-blocking multi API which does transfers
asynchronously - still in the same single thread.
libcurl will potentially internally use threads for name resolving, if it was
built to work like that, but in those cases it will create the child threads
by itself and they will only be used and then killed internally by libcurl and
never exposed to the outside.
libcurl potentially uses threads internally for name resolving, if it was
built to work like that, but in those cases it creates the child threads by
itself and they are only used and then killed internally by libcurl and never
exposed to the outside.
# License
@ -1385,7 +1382,7 @@ PHP/CURL was initially written by Sterling Hughes.
Yes.
After a transfer, you set new options in the handle and make another transfer.
This will make libcurl reuse the same connection if it can.
This makes libcurl reuse the same connection if it can.
## Does PHP/CURL have dependencies?
@ -1407,12 +1404,12 @@ long time even necessary to make things work on otherwise considered modern
platforms such as Windows. Today, we do not really know how many users that
still require the use of a C89 compiler.
We will continue to use C89 for as long as nobody brings up a strong enough
reason for us to change our minds. The core developers of the project do not
feel restricted by this and we are not convinced that going C99 will offer us
enough of a benefit to warrant the risk of cutting off a share of users.
We continue to use C89 for as long as nobody brings up a strong enough reason
for us to change our minds. The core developers of the project do not feel
restricted by this and we are not convinced that going C99 offers us enough
of a benefit to warrant the risk of cutting off a share of users.
## Will curl be rewritten?
## Would curl be rewritten?
In one go: no. Little by little over time? Sure.
@ -1424,7 +1421,7 @@ Some of the most important properties in curl are maintaining the API and ABI for
libcurl and keeping the behavior for the command line tool. As long as we can
do that, everything else is up for discussion. To maintain the ABI, we
probably have to maintain a certain amount of code in C, and to remain rock
stable, we will never risk anything by rewriting a lot of things in one go.
stable, we never risk anything by rewriting a lot of things in one go.
That said, we can certainly offer more and more optional backends written in
other languages, as long as those backends can be plugged in at build-time.
Backends can be written in any language, but should probably provide APIs

@ -47,7 +47,7 @@ foreach(_target IN LISTS COMPLICATED_MAY_BUILD check_PROGRAMS _all) # keep 'COM
# CMake generates a static library for the OBJECT target. Silence these 'lib.exe' warnings:
# warning LNK4006: main already defined in ....obj; second definition ignored
# warning LNK4221: This object file does not define any previously undefined public symbols,
# so it will not be used by any link operation that consumes this library
# [...] not be used by any link operation that consumes this library
if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.13)
set_target_properties(${_target_name} PROPERTIES STATIC_LIBRARY_OPTIONS "-ignore:4006;-ignore:4221")
else()

@ -127,7 +127,7 @@ static struct ip *ip_list_append(struct ip *list, const char *data)
return NULL;
}
/* determine the number of bits that this IP will match against */
/* determine the number of bits that this IP matches against */
cidr = strchr(ip->str, '/');
if(cidr) {
ip->maskbits = atoi(cidr + 1);

@ -205,7 +205,7 @@ static size_t read_cb(char *ptr, size_t size, size_t nmemb, void *userp)
static int setup(struct input *t, int num, const char *upload)
{
char url[256];
char upload_url[256];
char filename[128];
struct stat file_info;
curl_off_t uploadsize;
@ -217,17 +217,18 @@ static int setup(struct input *t, int num, const char *upload)
snprintf(filename, sizeof(filename), "dl-%d", num);
t->out = fopen(filename, "wb");
if(!t->out) {
fprintf(stderr, "error: could not open file %s for writing: %s\n",
upload, strerror(errno));
fprintf(stderr, "error: could not open file %s for writing: %s\n", upload,
strerror(errno));
return 1;
}
snprintf(url, sizeof(url), "https://localhost:8443/upload-%d", num);
snprintf(upload_url, sizeof(upload_url), "https://localhost:8443/upload-%d",
num);
t->in = fopen(upload, "rb");
if(!t->in) {
fprintf(stderr, "error: could not open file %s for reading: %s\n",
upload, strerror(errno));
fprintf(stderr, "error: could not open file %s for reading: %s\n", upload,
strerror(errno));
fclose(t->out);
t->out = NULL;
return 1;
@ -257,7 +258,7 @@ static int setup(struct input *t, int num, const char *upload)
curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, uploadsize);
/* send in the URL to store the upload as */
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_URL, upload_url);
/* upload please */
curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);

@ -32,7 +32,7 @@
static const char olivertwist[] =
"Among other public buildings in a certain town, which for many reasons "
"it will be prudent to refrain from mentioning, and to which I will assign "
"it is prudent to refrain from mentioning, and to which I assign "
"no fictitious name, there is one anciently common to most towns, great or "
"small: to ___, a workhouse; and in this workhouse was born; on a day and "
"date which I need not trouble myself to repeat, inasmuch as it can be of "

@ -73,7 +73,7 @@ retry:
result = curl_ws_recv(curl, buffer, sizeof(buffer), &rlen, &meta);
if(result == CURLE_OK) {
/* on small PING content, this example assumes the complete
* PONG content arrives in one go. Larger frames will arrive
* PONG content arrives in one go. Larger frames arrive
* in chunks, however. */
if(meta->flags & CURLWS_PONG) {
int same = 0;

@ -30,9 +30,8 @@ VERSIONDEL=8
# libtool version:
VERSIONINFO=-version-info $(VERSIONCHANGE):$(VERSIONADD):$(VERSIONDEL)
# This flag accepts an argument of the form current[:revision[:age]]. So,
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to
# 1.
# This flag accepts an argument of the form current[:revision[:age]]. It means
# passing -version-info 3:12:1 sets current to 3, revision to 12, and age to 1.
#
# Here's the simplified rule guide on how to change -version-info:
# (current version is C:R:A)

@ -26,9 +26,9 @@
#ifdef CURLRES_ARES
/***********************************************************************
* Only for ares-enabled builds
* And only for functions that fulfill the asynch resolver backend API
* as defined in asyn.h, nothing else belongs in this file!
* Only for ares-enabled builds and only for functions that fulfill
* the asynch resolver backend API as defined in asyn.h,
* nothing else belongs in this file!
**********************************************************************/
#ifdef HAVE_NETINET_IN_H
@ -549,16 +549,16 @@ static void async_ares_hostbyname_cb(void *user_data,
it is also possible that the other request could always take longer
because it needs more time or only the second DNS server can fulfill it
successfully. But, to align with the philosophy of Happy Eyeballs, we
successfully. Yet, to align with the philosophy of Happy Eyeballs, we
do not want to wait _too_ long or users will think requests are slow
when IPv6 lookups do not actually work (but IPv4 ones do).
So, now that we have a usable answer (some IPv4 addresses, some IPv6
Now that we have a usable answer (some IPv4 addresses, some IPv6
addresses, or "no such domain"), we start a timeout for the remaining
pending responses. Even though it is typical that this resolved
request came back quickly, that need not be the case. It might be that
this completing request did not get a result from the first DNS
server or even the first round of the whole DNS server pool. So it
server or even the first round of the whole DNS server pool. This
could already be a long time after we issued the DNS queries in
the first place. Without modifying c-ares, we cannot know exactly
where in its retry cycle we are. We could guess based on how much

@ -132,8 +132,8 @@ void Curl_bufq_initp(struct bufq *q, struct bufc_pool *pool,
size_t max_chunks, int opts);
/**
* Reset the buffer queue to be empty. Will keep any allocated buffer
* chunks around.
* Reset the buffer queue to be empty. Keep any allocated buffer chunks
* around.
*/
void Curl_bufq_reset(struct bufq *q);
@ -243,7 +243,7 @@ CURLcode Curl_bufq_sipn(struct bufq *q, size_t max_len,
/**
* Write buf to the end of the buffer queue.
* Will write bufq content or passed `buf` directly using the `writer`
* Write bufq content or passed `buf` directly using the `writer`
* callback when it sees fit. 'buf' might get passed directly
* on or is placed into the buffer, depending on `len` and current
* amount buffered, chunk size, etc.

@ -171,7 +171,7 @@ static void h1_tunnel_go_state(struct Curl_cfilter *cf,
proxy */
/* If a proxy-authorization header was used for the proxy, then we should
make sure that it is not accidentally used for the document request
after we have connected. So let's free and clear it here. */
after we have connected. Let's thus free and clear it here. */
Curl_safefree(data->state.aptr.proxyuserpwd);
break;
}

@ -157,7 +157,7 @@ static void h2_tunnel_go_state(struct Curl_cfilter *cf,
ts->state = new_state;
/* If a proxy-authorization header was used for the proxy, then we should
make sure that it is not accidentally used for the document request
after we have connected. So let's free and clear it here. */
after we have connected. Thus, let's free and clear it here. */
Curl_safefree(data->state.aptr.proxyuserpwd);
break;
}

@ -688,8 +688,8 @@ static CURLcode bindlocal(struct Curl_easy *data, struct connectdata *conn,
if(scope_ptr) {
/* The "myhost" string either comes from Curl_if2ip or from
Curl_printable_address. The latter returns only numeric scope
IDs and the former returns none at all. So the scope ID, if
present, is known to be numeric */
IDs and the former returns none at all. Thus the scope ID,
if present, is known to be numeric */
curl_off_t scope_id;
if(curlx_str_number((const char **)CURL_UNCONST(&scope_ptr),
&scope_id, UINT_MAX))

@ -429,7 +429,7 @@ void Curl_conn_close(struct Curl_easy *data, int sockindex);
/**
* Shutdown the connection at `sockindex` non-blocking, using timeout
* from `data->set.shutdowntimeout`, default DEFAULT_SHUTDOWN_TIMEOUT_MS.
* Will return CURLE_OK and *done == FALSE if not finished.
* Return CURLE_OK and *done == FALSE if not finished.
*/
CURLcode Curl_conn_shutdown(struct Curl_easy *data, int sockindex, bool *done);
@ -604,14 +604,14 @@ int Curl_conn_sockindex(struct Curl_easy *data, curl_socket_t sockfd);
/*
* Receive data on the connection, using FIRSTSOCKET/SECONDARYSOCKET.
* Will return CURLE_AGAIN iff blocked on receiving.
* Return CURLE_AGAIN iff blocked on receiving.
*/
CURLcode Curl_conn_recv(struct Curl_easy *data, int sockindex,
char *buf, size_t len, size_t *pnread);
/*
* Send data on the connection, using FIRSTSOCKET/SECONDARYSOCKET.
* Will return CURLE_AGAIN iff blocked on sending.
* Return CURLE_AGAIN iff blocked on sending.
*/
CURLcode Curl_conn_send(struct Curl_easy *data, int sockindex,
const void *buf, size_t len, bool eos,

@ -84,8 +84,8 @@ CURLcode Curl_cpool_add(struct Curl_easy *data,
/**
* Return if the pool has reached its configured limits for adding
* the given connection. Will try to discard the oldest, idle
* connections to make space.
* the given connection. Try to discard the oldest, idle connections
* to make space.
*/
#define CPOOL_LIMIT_OK 0
#define CPOOL_LIMIT_DEST 1

@ -347,7 +347,7 @@ static bool bad_domain(const char *domain, size_t len)
cookie-octet = %x21 / %x23-2B / %x2D-3A / %x3C-5B / %x5D-7E
But Firefox and Chrome as of June 2022 accept space, comma and double-quotes
Yet, Firefox and Chrome as of June 2022 accept space, comma and double-quotes
fine. The prime reason for filtering out control bytes is that some HTTP
servers return 400 for requests that contain such.
*/

@ -33,7 +33,7 @@ struct Curl_share;
struct Curl_sigpipe_ctx;
/* Run the shutdown of the connection once.
* Will shortly attach/detach `data` to `conn` while doing so.
* Briefly attach/detach `data` to `conn` while doing so.
* `done` will be set TRUE if any error was encountered or if
* the connection was shut down completely. */
void Curl_cshutdn_run_once(struct Curl_easy *data,
@ -78,7 +78,7 @@ size_t Curl_cshutdn_dest_count(struct Curl_easy *data,
bool Curl_cshutdn_close_oldest(struct Curl_easy *data,
const char *destination);
/* Add a connection to have it shut down. Will terminate the oldest
/* Add a connection to have it shut down. Terminate the oldest
* connection when total connection limit of multi is being reached. */
void Curl_cshutdn_add(struct cshutdn *cshutdn,
struct connectdata *conn,

@ -114,8 +114,8 @@ CURLSHcode curl_share_setopt(CURLSH *sh, CURLSHoption option, ...)
/* There is no way (yet) for the application to configure the
* session cache size, shared between many transfers. As for curl
* itself, a high session count will impact startup time. Also, the
* scache is not optimized for several hundreds of peers. So,
* keep it at a reasonable level. */
* scache is not optimized for several hundreds of peers.
* Keep it at a reasonable level. */
if(Curl_ssl_scache_create(25, 2, &share->ssl_scache))
res = CURLSHE_NOMEM;
}

@ -134,7 +134,7 @@ void curlx_dyn_reset(struct dynbuf *s)
/*
* Specify the size of the tail to keep (number of bytes from the end of the
* buffer). The rest will be dropped.
* buffer). The rest is dropped.
*/
CURLcode curlx_dyn_tail(struct dynbuf *s, size_t trail)
{

@ -94,7 +94,7 @@ WINBASEAPI DWORD WINAPI GetFullPathNameW(LPCWSTR, DWORD, LPWSTR, LPWSTR *);
* longer than MAX_PATH then setting 'out' to "\\?\" prefix + that full path.
*
* For example 'in' filename255chars in current directory C:\foo\bar is
* fixed as \\?\C:\foo\bar\filename255chars for 'out' which will tell Windows
* fixed as \\?\C:\foo\bar\filename255chars for 'out' which tells Windows
* it is ok to access that filename even though the actual full path is longer
* than 260 chars.
*
@ -439,7 +439,7 @@ int curlx_win32_stat(const char *path, curlx_struct_stat *buffer)
#if !defined(CURL_DISABLE_HTTP) || !defined(CURL_DISABLE_COOKIES) || \
!defined(CURL_DISABLE_ALTSVC)
/* rename() on Windows does not overwrite, so we cannot use it here.
MoveFileEx() will overwrite and is usually atomic but fails when there are
MoveFileEx() does overwrite and is usually atomic but fails when there are
open handles to the file. */
int curlx_win32_rename(const char *oldpath, const char *newpath)
{

@ -170,7 +170,7 @@ static int inet_pton6(const char *src, unsigned char *dst)
if(colonp) {
/*
* Since some memmove()'s erroneously fail to handle
* overlapping regions, we will do the shift by hand.
* overlapping regions, we do the shift by hand.
*/
const ssize_t n = tp - colonp;
ssize_t i;

@ -30,10 +30,10 @@
*
* Provide the target buffer @dest and size of the target buffer @dsize, If
* the source string @src with its *string length* @slen fits in the target
* buffer it will be copied there - including storing a null terminator.
* buffer it is copied there - including storing a null terminator.
*
* If the target buffer is too small, the copy is not performed but if the
* target buffer has a non-zero size it will get a null terminator stored.
* target buffer has a non-zero size it gets a null terminator stored.
*/
void curlx_strcopy(char *dest, /* destination buffer */
size_t dsize, /* size of target buffer */

@ -46,7 +46,7 @@
* wait on, being used to delay execution. Winsock select() and poll() timeout
* mechanisms need a valid socket descriptor in a not null file descriptor set
* to work. Waiting indefinitely with this function is not allowed, a zero or
* negative timeout value will return immediately. Timeout resolution,
negative timeout value makes it return immediately. Timeout resolution,
* accuracy, as well as maximum supported value is system dependent, neither
* factor is a critical issue for the intended use of this function in the
* library.

@ -44,7 +44,7 @@ const char *curlx_get_winapi_error(DWORD err, char *buf, size_t buflen)
return NULL;
/* We return the local codepage version of the error string because if it is
output to the user's terminal it will likely be with functions which
output to the user's terminal, it is likely done with functions which
expect the local codepage (eg fprintf, failf, infof). */
if(!FormatMessageA((FORMAT_MESSAGE_FROM_SYSTEM |
FORMAT_MESSAGE_IGNORE_INSERTS), NULL, err,

@ -350,8 +350,8 @@ static CURLcode cw_out_append(struct cw_out_ctx *ctx,
}
/* if we do not have a buffer, or it is of another type, make a new one.
* And for CW_OUT_HDS always make a new one, so we "replay" headers
* exactly as they came in */
* For CW_OUT_HDS always make a new one, so we "replay" headers exactly
* as they came in */
if(!ctx->buf || (ctx->buf->type != otype) || (otype == CW_OUT_HDS)) {
struct cw_out_buf *cwbuf = cw_out_buf_create(otype);
if(!cwbuf)

@ -1049,7 +1049,7 @@ UNITTEST void de_cleanup(struct dohentry *d)
* https://datatracker.ietf.org/doc/html/rfc1035#section-3.1
*
* The input buffer pointer will be modified so it points to after the end of
* the DNS name encoding on output. (And that is why it is an "unsigned char
the DNS name encoding on output. (That is why it is an "unsigned char
* **" :-)
*/
static CURLcode doh_decode_rdata_name(const unsigned char **buf,

@ -167,7 +167,7 @@ CURLcode Curl_dynhds_h1_add_line(struct dynhds *dynhds,
/**
* Add the headers to the given `dynbuf` in HTTP/1.1 format with
* cr+lf line endings. Will NOT output a last empty line.
* CR+LF line endings. Does NOT output a last empty line.
*/
CURLcode Curl_dynhds_h1_dprint(struct dynhds *dynhds, struct dynbuf *dbuf);

@ -938,7 +938,7 @@ static CURLcode dupset(struct Curl_easy *dst, struct Curl_easy *src)
static void dupeasy_meta_freeentry(void *p)
{
(void)p;
/* Will always be FALSE. Cannot use a 0 assert here since compilers
/* Always FALSE. Cannot use a 0 assert here since compilers
* are not in agreement if they then want a NORETURN attribute or
* not. *sigh* */
DEBUGASSERT(p == NULL);

@ -209,7 +209,7 @@ static CURLcode ftp_parse_url_path(struct Curl_easy *data,
const char *slashPos = NULL;
const char *fileName = NULL;
CURLcode result = CURLE_OK;
const char *rawPath = NULL; /* url-decoded "raw" path */
const char *rawPath = NULL; /* URL-decoded "raw" path */
size_t pathLen = 0;
ftpc->ctl_valid = FALSE;
@ -217,7 +217,7 @@ static CURLcode ftp_parse_url_path(struct Curl_easy *data,
if(ftpc->rawpath)
freedirs(ftpc);
/* url-decode ftp path before further evaluation */
/* URL-decode ftp path before further evaluation */
result = Curl_urldecode(ftp->path, 0, &ftpc->rawpath, &pathLen, REJECT_CTRL);
if(result) {
failf(data, "path contains control characters");
@ -232,8 +232,8 @@ static CURLcode ftp_parse_url_path(struct Curl_easy *data,
fileName = rawPath; /* this is a full file path */
/*
else: ftpc->file is not used anywhere other than for operations on
a file. In other words, never for directory operations.
So we can safely leave filename as NULL here and use it as a
a file. In other words, never for directory operations,
so we can safely leave filename as NULL here and use it as an
argument in dir/file decisions.
*/
break;
@ -677,7 +677,7 @@ static CURLcode getftpresponse(struct Curl_easy *data,
* A caution here is that the ftp_readresp() function has a cache that may
* contain pieces of a response from the previous invoke and we need to
* make sure we do not wait for input while there is unhandled data in
* that cache. But also, if the cache is there, we call ftp_readresp() and
* that cache. Also, if the cache is there, we call ftp_readresp() and if
* the cache was not good enough to continue, we must not busy-loop around
* this function.
*
@ -1559,7 +1559,7 @@ static CURLcode ftp_state_list(struct Curl_easy *data,
char *cmd;
if((data->set.ftp_filemethod == FTPFILE_NOCWD) && ftp->path) {
/* url-decode before evaluation: e.g. paths starting/ending with %2f */
/* URL-decode before evaluation: e.g. paths starting/ending with %2f */
const char *rawPath = ftpc->rawpath;
const char *slashPos = strrchr(rawPath, '/');
if(slashPos) {
@ -1688,7 +1688,7 @@ static CURLcode ftp_state_ul_setup(struct Curl_easy *data,
which may not exist in the server! The SIZE command is not in
RFC959. */
/* 2. This used to set REST. But since we can do append, we issue no
/* 2. This used to set REST, but since we can do append, we do not issue
another ftp command. Skip the source file offset and APPEND the rest on
the file instead */
@ -1942,8 +1942,8 @@ static CURLcode ftp_state_quote(struct Curl_easy *data,
behavior.
In addition: asking for the size for 'TYPE A' transfers is not
constructive since servers do not report the converted size. So
skip it.
constructive since servers do not report the converted size.
Thus, skip it.
*/
result = Curl_pp_sendf(data, &ftpc->pp, "RETR %s", ftpc->file);
if(!result)
@ -2382,10 +2382,10 @@ static CURLcode ftp_do_more(struct Curl_easy *data, int *completep)
else if((data->state.list_only || !ftpc->file) &&
!(data->set.prequote)) {
/* The specified path ends with a slash, and therefore we think this
is a directory that is requested, use LIST. But before that we
is a directory that is requested, use LIST. Before that, we also
need to set ASCII transfer mode. */
/* But only if a body transfer was requested. */
/* Only if a body transfer was requested. */
if(ftp->transfer == PPTRANSFER_BODY) {
result = ftp_nb_type(data, ftpc, ftp, TRUE, FTP_LIST_TYPE);
if(result)
@ -3709,7 +3709,7 @@ static CURLcode ftp_done(struct Curl_easy *data, CURLcode status,
if(data->set.ftp_filemethod == FTPFILE_NOCWD)
pathLen = 0; /* relative path => working directory is FTP home */
else
/* file is url-decoded */
/* file is URL-decoded */
pathLen -= ftpc->file ? strlen(ftpc->file) : 0;
ftpc->prevpath = curlx_memdup0(rawPath, pathLen);
}

@ -38,7 +38,7 @@ struct Curl_header_store {
/*
* Initialize header collecting for a transfer.
* Will add a client writer that catches CLIENTWRITE_HEADER writes.
* Add a client writer that catches CLIENTWRITE_HEADER writes.
*/
CURLcode Curl_headers_init(struct Curl_easy *data);

@ -505,8 +505,8 @@ static bool http_should_fail(struct Curl_easy *data, int httpcode)
/*
** Examine the current authentication state to see if this is an error. The
** idea is for this function to get called after processing all the headers
** in a response message. So, if we have been to asked to authenticate a
** particular stage, and we have done it, we are OK. If we are already
** in a response message. If we have been asked to authenticate
** a particular stage, and we have done it, we are OK. If we are already
** completely authenticated, it is not OK to get another 401 or 407.
**
** It is possible for authentication to go stale such that the client needs
@ -1973,7 +1973,7 @@ void Curl_http_method(struct Curl_easy *data,
static CURLcode http_useragent(struct Curl_easy *data)
{
/* The User-Agent string might have been allocated in url.c already, because
/* The User-Agent string might have been allocated already, because
it might have been used in the proxy connect, but if we have got a header
with the user-agent string specified, we erase the previously made string
here. */
@ -2097,7 +2097,7 @@ static CURLcode http_target(struct Curl_easy *data,
if(conn->bits.httpproxy && !conn->bits.tunnel_proxy) {
/* Using a proxy but does not tunnel through it */
/* The path sent to the proxy is in fact the entire URL. But if the remote
/* The path sent to the proxy is in fact the entire URL, but if the remote
host is a IDN-name, we must make sure that the request we produce only
uses the encoded hostname! */
@ -4136,7 +4136,7 @@ static CURLcode http_on_response(struct Curl_easy *data,
k->download_done = TRUE;
/* If max download size is *zero* (nothing) we already have
nothing and can safely return ok now! But for HTTP/2, we would
nothing and can safely return ok now! For HTTP/2, we would
like to call http2_handle_stream_close to properly close a
stream. In order to do this, we keep reading until we
close the stream. */
@ -4545,7 +4545,7 @@ CURLcode Curl_http_write_resp_hd(struct Curl_easy *data,
}
/*
* HTTP protocol `write_resp` implementation. Will parse headers
* HTTP protocol `write_resp` implementation. Parse headers
* when not done yet and otherwise return without consuming data.
*/
CURLcode Curl_http_write_resp_hds(struct Curl_easy *data,

@ -125,7 +125,7 @@ CURLcode Curl_output_digest(struct Curl_easy *data,
return CURLE_OK;
}
/* So IE browsers < v7 cut off the URI part at the query part when they
/* IE browsers < v7 cut off the URI part at the query part when they
evaluate the MD5 and some (IIS?) servers work with them so we may need to
do the Digest IE-style. Note that the different ways cause different MD5
sums to get sent.

@ -73,7 +73,7 @@ size_t Curl_llist_count(struct Curl_llist *list);
void *Curl_node_elem(struct Curl_llist_node *n);
/* Remove the node from the list and return the custom data
* from a Curl_llist_node. Will NOT invoke a registered `dtor`. */
* from a Curl_llist_node. Does NOT invoke a registered `dtor`. */
void *Curl_node_take_elem(struct Curl_llist_node *e);
/* Curl_node_next() returns the next element in a list from a given

@ -207,7 +207,7 @@ static void mstate(struct Curl_easy *data, CURLMstate state
static void ph_freeentry(void *p)
{
(void)p;
/* Will always be FALSE. Cannot use a 0 assert here since compilers
/* Always FALSE. Cannot use a 0 assert here since compilers
* are not in agreement if they then want a NORETURN attribute or
* not. *sigh* */
DEBUGASSERT(p == NULL);
@ -1867,7 +1867,7 @@ static void multi_posttransfer(struct Curl_easy *data)
* multi_follow() handles the URL redirect magic. Pass in the 'newurl' string
* as given by the remote server and set up the new URL to request.
*
* This function DOES NOT FREE the given url.
* This function DOES NOT FREE the given URL.
*/
static CURLcode multi_follow(struct Curl_easy *data,
const struct Curl_scheme *handler,

@ -1230,7 +1230,7 @@ static CURLcode pop3_state_command_resp(struct Curl_easy *data,
when there is no body to return. */
pop3c->eob = 2;
/* But since this initial CR LF pair is not part of the actual body, we set
/* Since this initial CR LF pair is not part of the actual body, we set
the strip counter here so that these bytes will not be delivered. */
pop3c->strip = 2;

@ -109,8 +109,8 @@ struct SingleRequest {
BIT(eos_sent); /* iff EOS has been sent to the server */
BIT(rewind_read); /* iff reader needs rewind at next start */
BIT(upload_done); /* set to TRUE when all request data has been sent */
BIT(upload_aborted); /* set to TRUE when upload was aborted. Will also
* show `upload_done` as TRUE. */
BIT(upload_aborted); /* set to TRUE when upload was aborted. Also
* shows `upload_done` as TRUE. */
BIT(ignorebody); /* we read a response-body but we ignore it! */
BIT(http_bodyless); /* HTTP response status code is between 100 and 199,
204 or 304 */
@ -207,13 +207,13 @@ bool Curl_req_sendbuf_empty(struct Curl_easy *data);
/**
* Stop sending any more request data to the server.
* Will clear the send buffer and mark request sending as done.
* Clear the send buffer and mark request sending as done.
*/
CURLcode Curl_req_abort_sending(struct Curl_easy *data);
/**
* Stop sending and receiving any more request data.
* Will abort sending if not done.
* Abort sending if not done.
*/
CURLcode Curl_req_stop_send_recv(struct Curl_easy *data);

@ -433,7 +433,7 @@ static CURLcode rtsp_do(struct Curl_easy *data, bool *done)
}
}
/* The User-Agent string might have been allocated in url.c already, because
/* The User-Agent string might have been allocated already, because
it might have been used in the proxy connect, but if we have got a header
with the user-agent string specified, we erase the previously made string
here. */
@ -1011,7 +1011,7 @@ CURLcode Curl_rtsp_parseheader(struct Curl_easy *data, const char *header)
*
* Allow any non whitespace content, up to the field separator or end of
* line. RFC 2326 is not 100% clear on the session ID and for example
* gstreamer does url-encoded session ID's not covered by the standard.
* gstreamer does URL-encoded session IDs not covered by the standard.
*/
end = start;
while((*end > ' ') && (*end != ';'))

@ -75,8 +75,8 @@ static int our_select(curl_socket_t maxfd, /* highest socket number */
#ifdef USE_WINSOCK
/* Winsock select() must not be called with an fd_set that contains zero
fd flags, or it will return WSAEINVAL. But, it also cannot be called
with no fd_sets at all! From the documentation:
fd flags, or it will return WSAEINVAL. It also cannot be called with
no fd_sets at all! From the documentation:
Any two of the parameters, readfds, writefds, or exceptfds, can be
given as null. At least one must be non-null, and any non-null
@ -84,7 +84,7 @@ static int our_select(curl_socket_t maxfd, /* highest socket number */
It is unclear why Winsock does not handle this for us instead of
calling this an error. Luckily, with Winsock, we can _also_ ask how
many bits are set on an fd_set. So, let's check it beforehand.
many bits are set on an fd_set. Therefore, let's check it beforehand.
*/
return select((int)maxfd + 1,
fds_read && fds_read->fd_count ? fds_read : NULL,

@ -175,7 +175,7 @@ static CURLcode tftp_set_timeouts(struct tftp_conn *state)
/* Average reposting an ACK after 5 seconds */
state->retry_max = (int)timeout / 5;
/* But bound the total number */
/* Bound the total number */
if(state->retry_max < 3)
state->retry_max = 3;
@ -370,9 +370,9 @@ static CURLcode tftp_tx(struct tftp_conn *state, tftp_event_t event)
int rblock = getrpacketblock(&state->rpacket);
if(rblock != state->block &&
/* There is a bug in tftpd-hpa that causes it to send us an ack for
* 65535 when the block number wraps to 0. So when we are expecting
* 0, also accept 65535. See
/* There is a bug in tftpd-hpa that causes it to send us an ACK for
* 65535 when the block number wraps to 0. To handle it, when we are
* expecting 0, also accept 65535. See
* https://www.syslinux.org/archives/2010-September/015612.html
* */
!(state->block == 0 && rblock == 65535)) {
@ -418,7 +418,7 @@ static CURLcode tftp_tx(struct tftp_conn *state, tftp_event_t event)
return CURLE_OK;
}
/* TFTP considers data block size < 512 bytes as an end of session. So
/* TFTP considers data block size < 512 bytes as an end of session, so
* in some cases we must wait for additional data to build full (512 bytes)
* data block.
* */

@ -39,8 +39,8 @@ bool Curl_meets_timecondition(struct Curl_easy *data, time_t timeofdoc);
/**
* Write the transfer raw response bytes, as received from the connection.
* Will handle all passed bytes or return an error. By default, this will
* write the bytes as BODY to the client. Protocols may provide a
* Handle all passed bytes or return an error. By default, this writes
* the bytes as BODY to the client. Protocols may provide a
* "write_resp" callback in their handler to add specific treatment. E.g.
* HTTP parses response headers and passes them differently to the client.
* @param data the transfer
@ -112,7 +112,7 @@ CURLcode Curl_xfer_flush(struct Curl_easy *data);
/**
* Send data on the socket/connection filter designated
* for transfer's outgoing data.
* Will return CURLE_OK on blocking with (*pnwritten == 0).
* Return CURLE_OK on blocking with (*pnwritten == 0).
*/
CURLcode Curl_xfer_send(struct Curl_easy *data,
const void *buf, size_t blen, bool eos,
@ -121,7 +121,7 @@ CURLcode Curl_xfer_send(struct Curl_easy *data,
/**
* Receive data on the socket/connection filter designated
* for transfer's incoming data.
* Will return CURLE_AGAIN on blocking with (*pnrcvd == 0).
* Return CURLE_AGAIN on blocking with (*pnrcvd == 0).
*/
CURLcode Curl_xfer_recv(struct Curl_easy *data,
char *buf, size_t blen,

@ -100,7 +100,7 @@
#include "curlx/strerr.h"
#include "curlx/strparse.h"
/* And now for the protocols */
/* Now for the protocols */
#include "ftp.h"
#include "dict.h"
#include "telnet.h"
@ -458,7 +458,7 @@ void Curl_init_userdefined(struct Curl_easy *data)
static void easy_meta_freeentry(void *p)
{
(void)p;
/* Will always be FALSE. Cannot use a 0 assert here since compilers
/* Always FALSE. Cannot use a 0 assert here since compilers
* are not in agreement if they then want a NORETURN attribute or
* not. *sigh* */
DEBUGASSERT(p == NULL);
@ -2083,7 +2083,7 @@ static CURLcode parse_proxy(struct Curl_easy *data,
proxyinfo = sockstype ? &conn->socks_proxy : &conn->http_proxy;
proxyinfo->proxytype = (unsigned char)proxytype;
/* Is there a username and password given in this proxy url? */
/* Is there a username and password given in this proxy URL? */
uc = curl_url_get(uhp, CURLUPART_USER, &proxyuser, CURLU_URLDECODE);
if(uc && (uc != CURLUE_NO_USER)) {
result = Curl_uc_to_curlcode(uc);
@ -3204,7 +3204,7 @@ static void url_conn_reuse_adjust(struct Curl_easy *data,
static void conn_meta_freeentry(void *p)
{
(void)p;
/* Will always be FALSE. Cannot use a 0 assert here since compilers
/* Always FALSE. Cannot use a 0 assert here since compilers
* are not in agreement if they then want a NORETURN attribute or
* not. *sigh* */
DEBUGASSERT(p == NULL);

@ -26,7 +26,7 @@
#include "curl_setup.h"
/*
* Prototypes for library-wide functions provided by url.c
* Prototypes for library-wide functions
*/
CURLcode Curl_init_do(struct Curl_easy *data, struct connectdata *conn);

@ -255,13 +255,13 @@ static void auth_digest_get_qop_values(const char *options, int *value)
* Parameters:
*
* chlgref [in] - The challenge message.
* nonce [in/out] - The buffer where the nonce will be stored.
* nonce [in/out] - The buffer where the nonce is stored.
* nlen [in] - The length of the nonce buffer.
* realm [in/out] - The buffer where the realm will be stored.
* realm [in/out] - The buffer where the realm is stored.
* rlen [in] - The length of the realm buffer.
* alg [in/out] - The buffer where the algorithm will be stored.
* alg [in/out] - The buffer where the algorithm is stored.
* alen [in] - The length of the algorithm buffer.
* qop [in/out] - The buffer where the qop-options will be stored.
* qop [in/out] - The buffer where the qop-options are stored.
* qlen [in] - The length of the qop buffer.
*
* Returns CURLE_OK on success.
@ -384,7 +384,7 @@ CURLcode Curl_auth_create_digest_md5_message(struct Curl_easy *data,
if(result)
return result;
/* So far so good, now calculate A1 and H(A1) according to RFC 2831 */
/* Good so far, now calculate A1 and H(A1) according to RFC 2831 */
ctxt = Curl_MD5_init(&Curl_DIGEST_MD5);
if(!ctxt)
return CURLE_OUT_OF_MEMORY;
@ -669,7 +669,7 @@ CURLcode Curl_auth_decode_digest_http_message(const char *chlg,
* uripath [in] - The path of the HTTP uri.
* digest [in/out] - The digest data struct being used and modified.
* outptr [in/out] - The address where a pointer to newly allocated memory
* holding the result will be stored upon completion.
* holding the result is stored upon completion.
* outlen [out] - The length of the output message.
*
* Returns CURLE_OK on success.
@ -853,8 +853,8 @@ static CURLcode auth_create_digest_http_message(
nonce="1053604145", uri="/64", response="c55f7f30d83d774a3d2dcacf725abaca"
Digest parameters are all quoted strings. Username which is provided by
the user will need double quotes and backslashes within it escaped.
realm, nonce, and opaque will need backslashes as well as they were
the user needs double quotes and backslashes within it escaped.
realm, nonce, and opaque need backslashes as well, as they were
de-escaped when copied from request header. cnonce is generated with
web-safe characters. uri is already percent encoded. nc is 8 hex
characters. algorithm and qop with standard values only contain web-safe
@ -977,7 +977,7 @@ oom:
* uripath [in] - The path of the HTTP uri.
* digest [in/out] - The digest data struct being used and modified.
* outptr [in/out] - The address where a pointer to newly allocated memory
* holding the result will be stored upon completion.
* holding the result is stored upon completion.
* outlen [out] - The length of the output message.
*
* Returns CURLE_OK on success.

@ -375,7 +375,7 @@ CURLcode Curl_auth_decode_digest_http_message(const char *chlg,
* uripath [in] - The path of the HTTP uri.
* digest [in/out] - The digest data struct being used and modified.
* outptr [in/out] - The address where a pointer to newly allocated memory
* holding the result will be stored upon completion.
* holding the result is stored upon completion.
* outlen [out] - The length of the output message.
*
* Returns CURLE_OK on success.

@ -152,7 +152,7 @@
/* Indicates that 128-bit encryption is supported. */
#define NTLMFLAG_NEGOTIATE_KEY_EXCHANGE (1 << 30)
/* Indicates that the client will provide an encrypted master key in
/* Indicates that the client provides an encrypted master key in
the "Session Key" field of the Type 3 message. */
#define NTLMFLAG_NEGOTIATE_56 (1 << 31)

@ -255,7 +255,7 @@ CURLcode Curl_auth_create_ntlm_type3_message(struct Curl_easy *data,
/* ssl context comes from schannel.
* When extended protection is used in IIS server,
* we have to pass a second SecBuffer to the SecBufferDesc
* otherwise IIS will not pass the authentication (401 response).
* otherwise IIS does not pass the authentication (401 response).
* Minimum supported version is Windows 7.
* https://learn.microsoft.com/security-updates/SecurityAdvisories/2009/973811
*/

@ -211,7 +211,7 @@ CURLcode Curl_auth_decode_spnego_message(struct Curl_easy *data,
* data [in] - The session handle.
* nego [in/out] - The Negotiate data struct being used and modified.
* outptr [in/out] - The address where a pointer to newly allocated memory
* holding the result will be stored upon completion.
* holding the result is stored upon completion.
* outlen [out] - The length of the output message.
*
* Returns CURLE_OK on success.

@ -192,7 +192,7 @@ CURLcode Curl_auth_decode_spnego_message(struct Curl_easy *data,
/* ssl context comes from Schannel.
* When extended protection is used in IIS server,
* we have to pass a second SecBuffer to the SecBufferDesc
* otherwise IIS will not pass the authentication (401 response).
* otherwise IIS does not pass the authentication (401 response).
* Minimum supported version is Windows 7.
* https://learn.microsoft.com/security-updates/SecurityAdvisories/2009/973811
*/
@ -278,7 +278,7 @@ CURLcode Curl_auth_decode_spnego_message(struct Curl_easy *data,
* data [in] - The session handle.
* nego [in/out] - The Negotiate data struct being used and modified.
* outptr [in/out] - The address where a pointer to newly allocated memory
* holding the result will be stored upon completion.
* holding the result is stored upon completion.
* outlen [out] - The length of the output message.
*
* Returns CURLE_OK on success.

@ -183,7 +183,7 @@ static void cf_ngtcp2_setup_keep_alive(struct Curl_cfilter *cf,
struct cf_ngtcp2_ctx *ctx = cf->ctx;
const ngtcp2_transport_params *rp;
/* Peer should have sent us its transport parameters. If it
* announces a positive `max_idle_timeout` it will close the
* announces a positive `max_idle_timeout` it closes the
* connection when it does not hear from us for that time.
*
* Some servers use this as a keep-alive timer at a rather low
@ -2018,7 +2018,7 @@ static CURLcode cf_progress_egress(struct Curl_cfilter *cf,
* This is called PMTUD (Path Maximum Transmission Unit Discovery).
* Since a PMTUD might be rejected right on send, we do not want it
* be followed by other packets of lesser size. Because those would
* also fail then. So, if we detect a PMTUD while buffering, we flush.
* also fail then. If we detect a PMTUD while buffering, we flush.
*/
max_payload_size = ngtcp2_conn_get_max_tx_udp_payload_size(ctx->qconn);
path_max_payload_size =
@ -2043,7 +2043,7 @@ static CURLcode cf_progress_egress(struct Curl_cfilter *cf,
++pktcnt;
if(pktcnt == 1) {
/* first packet in buffer. This is either of a known, "good"
* payload size or it is a PMTUD. We will see. */
* payload size or it is a PMTUD. We shall see. */
gsolen = nread;
}
else if(nread > gsolen ||
@ -2257,7 +2257,7 @@ static CURLcode cf_ngtcp2_shutdown(struct Curl_cfilter *cf,
if(Curl_bufq_is_empty(&ctx->q.sendbuf)) {
/* Sent everything off. ngtcp2 seems to have no support for graceful
* shutdowns. So, we are done. */
* shutdowns. We are done. */
CURL_TRC_CF(data, cf, "shutdown completely sent off, done");
*done = TRUE;
result = CURLE_OK;
@ -2859,7 +2859,7 @@ static bool cf_ngtcp2_conn_is_alive(struct Curl_cfilter *cf,
goto out;
/* We do not announce a max idle timeout, but when the peer does
* it will close the connection when it expires. */
* it closes the connection when it expires. */
rp = ngtcp2_conn_get_remote_transport_params(ctx->qconn);
if(rp && rp->max_idle_timeout) {
timediff_t idletime_ms =

View File

@ -481,7 +481,7 @@ static void cf_quiche_recv_body(struct Curl_cfilter *cf,
return;
/* Even when the transfer has already errored, we need to receive
- * the data from quiche, as quiche will otherwise get stuck and
+ * the data from quiche, as quiche otherwise gets stuck and
* raise events to receive over and over again. */
cb_ctx.cf = cf;
cb_ctx.data = data;
@ -779,7 +779,7 @@ static CURLcode cf_flush_egress(struct Curl_cfilter *cf,
else
failf(data, "connection closed by server");
/* Connection timed out, expire all transfers belonging to it
- * as will not get any more POLL events here. */
+ * as it does not get any more POLL events here. */
cf_quiche_expire_conn_closed(cf, data);
return CURLE_SEND_ERROR;
}
@ -939,8 +939,8 @@ static CURLcode cf_quiche_send_body(struct Curl_cfilter *cf,
rv = quiche_h3_send_body(ctx->h3c, ctx->qconn, stream->id,
(uint8_t *)CURL_UNCONST(buf), len, eos);
if(rv == QUICHE_H3_ERR_DONE || (rv == 0 && len > 0)) {
-/* Blocked on flow control and should HOLD sending. But when do we open
-* again? */
+/* Blocked on flow control and should HOLD sending.
+When do we open again? */
if(!quiche_conn_stream_writable(ctx->qconn, stream->id, len)) {
CURL_TRC_CF(data, cf, "[%" PRIu64 "] send_body(len=%zu) "
"-> window exhausted", stream->id, len);

View File

@ -72,7 +72,7 @@ typedef CURLcode Curl_vquic_session_reuse_cb(struct Curl_cfilter *cf,
* @param ctx the TLS context to initialize
* @param cf the connection filter involved
* @param data the transfer involved
- * @param peer the peer that will be connected to
+ * @param peer the peer to be connected to
* @param alpns the ALPN specifications to negotiate, may be NULL
* @param cb_setup optional callback for early TLS config
* @param cb_user_data user_data param for callback

View File

@ -1306,8 +1306,8 @@ static int myssh_in_SFTP_REALPATH(struct Curl_easy *data,
/* This is the last step in the SFTP connect phase. Do note that while
we get the homedir here, we get the "workingpath" in the DO action
-since the homedir will remain the same between request but the
-working path will not. */
+since the homedir remains the same between requests but the
+working path does not. */
CURL_TRC_SSH(data, "CONNECT phase done");
myssh_to(data, sshc, SSH_STOP);
return SSH_NO_ERROR;
@ -1372,8 +1372,8 @@ static int myssh_in_SFTP_QUOTE(struct Curl_easy *data,
sshc->acceptfail = FALSE;
/* if a command starts with an asterisk, which a legal SFTP command never
-can, the command will be allowed to fail without it causing any
-aborts or cancels etc. It will cause libcurl to act as if the command
+can, the command is allowed to fail without it causing any
+aborts or cancels etc. It causes libcurl to act as if the command
is successful, whatever the server responds. */
if(cmd[0] == '*') {
@ -1583,8 +1583,8 @@ static int myssh_in_SFTP_QUOTE_STAT(struct Curl_easy *data,
sshc->acceptfail = FALSE;
/* if a command starts with an asterisk, which a legal SFTP command never
-can, the command will be allowed to fail without it causing any
-aborts or cancels etc. It will cause libcurl to act as if the command
+can, the command is allowed to fail without it causing any
+aborts or cancels etc. It causes libcurl to act as if the command
is successful, whatever the server responds. */
if(cmd[0] == '*') {
@ -1844,7 +1844,7 @@ static void sshc_cleanup(struct ssh_conn *sshc)
/*
* ssh_statemach_act() runs the SSH state machine as far as it can without
* blocking and without reaching the end. The data the pointer 'block' points
- * to will be set to TRUE if the libssh function returns SSH_AGAIN
+ * to is set to TRUE if the libssh function returns SSH_AGAIN
* meaning it wants to be called again when the socket is ready
*/
static CURLcode myssh_statemach_act(struct Curl_easy *data,
@ -2637,7 +2637,7 @@ static CURLcode scp_send(struct Curl_easy *data, int sockindex,
#if 0
/* The following code is misleading, mostly added as wishful thinking
- * that libssh at some point will implement non-blocking ssh_scp_write/read.
+ * that libssh at some point would implement non-blocking ssh_scp_write/read.
* Currently rc can only be number of bytes read or SSH_ERROR. */
myssh_block2waitfor(conn, sshc, (rc == SSH_AGAIN));
@ -2671,7 +2671,7 @@ static CURLcode scp_recv(struct Curl_easy *data, int sockindex,
return CURLE_SSH;
#if 0
/* The following code is misleading, mostly added as wishful thinking
- * that libssh at some point will implement non-blocking ssh_scp_write/read.
+ * that libssh at some point would implement non-blocking ssh_scp_write/read.
* Currently rc can only be SSH_OK or SSH_ERROR. */
myssh_block2waitfor(conn, sshc, (nread == SSH_AGAIN));

View File

@ -405,7 +405,7 @@ static CURLcode ssh_knownhost(struct Curl_easy *data,
rc = CURLKHSTAT_REJECT;
switch(rc) {
-default: /* unknown return codes will equal reject */
+default: /* unknown return codes are treated as reject */
case CURLKHSTAT_REJECT:
myssh_to(data, sshc, SSH_SESSION_FREE);
FALLTHROUGH();
@ -545,8 +545,8 @@ static CURLcode ssh_check_fingerprint(struct Curl_easy *data,
infof(data, "SSH MD5 fingerprint: %s", md5buffer);
}
-/* This does NOT verify the length of 'pubkey_md5' separately, which will
-make the comparison below fail unless it is exactly 32 characters */
+/* This does NOT verify the length of 'pubkey_md5' separately, which
+makes the comparison below fail unless it is exactly 32 characters */
if(!fingerprint || !curl_strequal(md5buffer, pubkey_md5)) {
if(fingerprint) {
failf(data,
@ -600,7 +600,7 @@ static CURLcode ssh_check_fingerprint(struct Curl_easy *data,
}
/*
- * ssh_force_knownhost_key_type() will check the known hosts file and try to
+ * ssh_force_knownhost_key_type() checks the known hosts file and tries to
* force a specific public key type from the server if an entry is found.
*/
static CURLcode ssh_force_knownhost_key_type(struct Curl_easy *data,
@ -624,7 +624,7 @@ static CURLcode ssh_force_knownhost_key_type(struct Curl_easy *data,
struct connectdata *conn = data->conn;
/* lets try to find our host in the known hosts file */
while(!libssh2_knownhost_get(sshc->kh, &store, store)) {
-/* For non-standard ports, the name will be enclosed in */
+/* For non-standard ports, the name is enclosed in */
/* square brackets, followed by a colon and the port */
if(store) {
if(store->name) {
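The comment above describes the libssh2 known-hosts convention: for non-standard ports the stored name is "[host]:port" rather than the bare hostname. A minimal sketch of producing such a lookup name (hypothetical helper, not curl code):

```c
#include <stdio.h>
#include <string.h>

/* Format a known_hosts lookup name: plain host for the default SSH port,
   "[host]:port" for any other port, matching the convention described. */
static void knownhost_name(char *buf, size_t len, const char *host, int port)
{
  if(port == 22)
    snprintf(buf, len, "%s", host);
  else
    snprintf(buf, len, "[%s]:%d", host, port);
}
```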
@ -741,8 +741,8 @@ static CURLcode sftp_quote(struct Curl_easy *data,
sshc->acceptfail = FALSE;
/* if a command starts with an asterisk, which a legal SFTP command never
-can, the command will be allowed to fail without it causing any
-aborts or cancels etc. It will cause libcurl to act as if the command
+can, the command is allowed to fail without it causing any
+aborts or cancels etc. It causes libcurl to act as if the command
is successful, whatever the server responds. */
if(cmd[0] == '*') {
@ -894,7 +894,7 @@ static CURLcode sftp_upload_init(struct Curl_easy *data,
/*
* NOTE!!! libssh2 requires that the destination path is a full path
* that includes the destination file and name OR ends in a "/"
- * If this is not done the destination file will be named the
+ * If this is not done the destination file is named the
* same name as the last directory in the path.
*/
@ -1169,8 +1169,8 @@ static CURLcode sftp_quote_stat(struct Curl_easy *data,
sshc->acceptfail = FALSE;
/* if a command starts with an asterisk, which a legal SFTP command never
-can, the command will be allowed to fail without it causing any aborts or
-cancels etc. It will cause libcurl to act as if the command is
+can, the command is allowed to fail without it causing any aborts or
+cancels etc. It causes libcurl to act as if the command is
successful, whatever the server responds. */
if(cmd[0] == '*') {
@ -1296,7 +1296,7 @@ static CURLcode sftp_download_stat(struct Curl_easy *data,
data->req.size = -1;
data->req.maxdownload = -1;
Curl_pgrsSetDownloadSize(data, -1);
-attrs.filesize = 0; /* might be uninitialized but will be read below */
+attrs.filesize = 0; /* might be uninitialized but is read below */
}
else {
curl_off_t size = attrs.filesize;
@ -1489,9 +1489,9 @@ static CURLcode ssh_state_authlist(struct Curl_easy *data,
* must never change it later. Thus, always specify the correct username
* here, even though the libssh2 docs kind of indicate that it should be
* possible to get a 'generic' list (not user-specific) of authentication
- * methods, presumably with a blank username. That will not work in my
+ * methods, presumably with a blank username. That does not work in my
* experience.
- * So always specify it here.
+ * Therefore always specify it here.
*/
struct connectdata *conn = data->conn;
sshc->authlist = libssh2_userauth_list(sshc->ssh_session,
@ -1835,8 +1835,7 @@ static CURLcode ssh_state_sftp_realpath(struct Curl_easy *data,
/* This is the last step in the SFTP connect phase. Do note that while we
get the homedir here, we get the "workingpath" in the DO action since the
-homedir will remain the same between request but the working path will
-not. */
+homedir remains the same between requests but the working path does not. */
CURL_TRC_SSH(data, "CONNECT phase done");
return CURLE_OK;
}
@ -2235,7 +2234,7 @@ static CURLcode ssh_state_scp_download_init(struct Curl_easy *data,
curl_off_t bytecount;
/*
-* We must check the remote file; if it is a directory no values will
-* be set in sb
+* We must check the remote file; if it is a directory no values are
+* set in sb
*/
@ -2380,7 +2379,7 @@ static CURLcode ssh_state_scp_upload_init(struct Curl_easy *data,
/*
* libssh2 requires that the destination path is a full path that
* includes the destination file and name OR ends in a "/" . If this is
- * not done the destination file will be named the same name as the last
+ * not done the destination file is named the same name as the last
* directory in the path.
*/
sshc->ssh_channel =
@ -2560,7 +2559,7 @@ static CURLcode sshc_cleanup(struct ssh_conn *sshc, struct Curl_easy *data,
/*
* ssh_statemachine() runs the SSH state machine as far as it can without
* blocking and without reaching the end. The data the pointer 'block' points
- * to will be set to TRUE if the libssh2 function returns LIBSSH2_ERROR_EAGAIN
+ * to is set to TRUE if the libssh2 function returns LIBSSH2_ERROR_EAGAIN
* meaning it wants to be called again when the socket is ready
*/
static CURLcode ssh_statemachine(struct Curl_easy *data,

View File

@ -130,7 +130,7 @@ CURLcode Curl_vtls_apple_verify(struct Curl_cfilter *cf,
struct ssl_config_data *ssl_config = Curl_ssl_cf_get_config(cf, data);
if(!ssl_config->no_revoke) {
if(__builtin_available(macOS 10.9, iOS 7, tvOS 9, watchOS 2, *)) {
-/* Even without this set, validation will seemingly-unavoidably fail
+/* Even without this set, validation seemingly-unavoidably fails
* for certificates that trustd already knows to be revoked.
* This policy further allows trustd to consult CRLs and OCSP data
* to determine revocation status (which it may then cache). */
@ -142,7 +142,7 @@ CURLcode Curl_vtls_apple_verify(struct Curl_cfilter *cf,
* of a cert being NOT REVOKED. Which not in general available for
* certificates on the Internet.
* It seems that applications using this policy are expected to PIN
- * their certificate public keys or verification will fail.
+ * their certificate public keys or verification fails.
* This does not seem to be what we want here. */
if(!ssl_config->revoke_best_effort) {
revocation_flags |= kSecRevocationRequirePositiveResponse;

View File

@ -304,7 +304,7 @@ static gnutls_x509_crt_fmt_t gnutls_do_file_type(const char *type)
#define GNUTLS_CIPHERS "NORMAL:%PROFILE_MEDIUM:-ARCFOUR-128:" \
"-CTYPE-ALL:+CTYPE-X509"
-/* If GnuTLS was compiled without support for SRP it will error out if SRP is
+/* If GnuTLS was compiled without support for SRP it errors out if SRP is
requested in the priority string, so treat it specially
*/
#define GNUTLS_SRP "+SRP"
@ -710,7 +710,7 @@ CURLcode Curl_gtls_cache_session(struct Curl_cfilter *cf,
/* get the session ID data size */
gnutls_session_get_data(session, NULL, &sdata_len);
-if(!sdata_len) /* gnutls does this for some version combinations */
+if(!sdata_len) /* GnuTLS does this for some version combinations */
return CURLE_OK;
sdata = curlx_malloc(sdata_len); /* get a buffer for it */
@ -818,7 +818,7 @@ static CURLcode gtls_set_priority(struct Curl_cfilter *cf,
#ifdef USE_GNUTLS_SRP
if(conn_config->username) {
/* Only add SRP to the cipher list if SRP is requested. Otherwise
- * GnuTLS will disable TLS 1.3 support. */
+ * GnuTLS disables TLS 1.3 support. */
result = curlx_dyn_add(&buf, priority);
if(!result)
result = curlx_dyn_add(&buf, ":" GNUTLS_SRP);
@ -1116,7 +1116,7 @@ CURLcode Curl_gtls_ctx_init(struct gtls_ctx *gctx,
Curl_alpn_copy(&alpns, alpns_requested);
/* This might be a reconnect, so we check for a session ID in the cache
-to speed up things. We need to do this before constructing the gnutls
+to speed up things. We need to do this before constructing the GnuTLS
session since we need to set flags depending on the kind of reuse. */
if(conn_config->cache_session && !conn_config->verifystatus) {
result = Curl_ssl_scache_take(cf, data, peer->scache_key, &scs);
@ -1178,7 +1178,7 @@ CURLcode Curl_gtls_ctx_init(struct gtls_ctx *gctx,
#endif
/* convert the ALPN string from our arguments to a list of strings that
- * gnutls wants and will convert internally back to this string for sending
+ * GnuTLS wants and converts internally back to this string for sending
* to the server. nice. */
if(!gtls_alpns_count && alpns.count) {
size_t i;
@ -1579,7 +1579,7 @@ CURLcode Curl_gtls_verifyserver(struct Curl_cfilter *cf,
long * const certverifyresult = &ssl_config->certverifyresult;
(void)cf;
-/* This function will return the peer's raw certificate (chain) as sent by
+/* This function returns the peer's raw certificate (chain) as sent by
the peer. These certificates are in raw format (DER encoded for
X.509). In case of a X.509 then a certificate list may be present. The
first certificate in the list is the peer's certificate, following the
@ -1637,7 +1637,7 @@ CURLcode Curl_gtls_verifyserver(struct Curl_cfilter *cf,
if(config->verifypeer) {
bool verified = FALSE;
unsigned int verify_status = 0;
-/* This function will try to verify the peer's certificate and return
+/* This function tries to verify the peer's certificate and return
its status (trusted, invalid etc.). The value of status should be
one or more of the gnutls_certificate_status_t enumerated elements
bitwise or'd. To avoid denial of service attacks some default
@ -1693,7 +1693,7 @@ CURLcode Curl_gtls_verifyserver(struct Curl_cfilter *cf,
/* initialize an X.509 certificate structure. */
if(gnutls_x509_crt_init(&x509_cert)) {
-failf(data, "failed to init gnutls x509_crt");
+failf(data, "failed to init GnuTLS x509_crt");
*certverifyresult = GNUTLS_E_NO_CERTIFICATE_FOUND;
result = CURLE_SSL_CONNECT_ERROR;
goto out;
@ -1777,7 +1777,7 @@ CURLcode Curl_gtls_verifyserver(struct Curl_cfilter *cf,
if(config->issuercert) {
gnutls_datum_t issuerp;
if(gnutls_x509_crt_init(&x509_issuer)) {
-failf(data, "failed to init gnutls x509_crt for issuer");
+failf(data, "failed to init GnuTLS x509_crt for issuer");
result = CURLE_SSL_ISSUER_ERROR;
goto out;
}
@ -1796,7 +1796,7 @@ CURLcode Curl_gtls_verifyserver(struct Curl_cfilter *cf,
config->issuercert ? config->issuercert : "none");
}
-/* This function will check if the given certificate's subject matches the
+/* This function checks if the given certificate's subject matches the
given hostname. This is a basic implementation of the matching described
in RFC2818 (HTTPS), which takes into account wildcards, and the subject
alternative name PKIX extension. Returns non zero on success, and zero on
@ -1890,7 +1890,7 @@ static CURLcode gtls_send_earlydata(struct Curl_cfilter *cf,
goto out;
}
else if(!n) {
-/* gnutls is buggy, it *SHOULD* return the amount of bytes it took in.
+/* GnuTLS is buggy, it *SHOULD* return the amount of bytes it took in.
* Instead it returns 0 if everything was written. */
n = (ssize_t)blen;
}

View File

@ -518,7 +518,7 @@ static CURLcode mbed_load_cacert(struct Curl_cfilter *cf,
}
#else
/* DER encoded certs do not need to be null terminated because it is a
-binary format. So if we are not compiling with PEM_PARSE we can avoid
+binary format. Thus, if we are not compiling with PEM_PARSE we can avoid
the extra memory copies altogether. */
ret = mbedtls_x509_crt_parse_der(&backend->cacert, ca_info_blob->data,
ca_info_blob->len);
@ -631,7 +631,7 @@ static CURLcode mbed_load_clicert(struct Curl_cfilter *cf,
}
#else
/* DER encoded certs do not need to be null terminated because it is a
-binary format. So if we are not compiling with PEM_PARSE we can avoid
+binary format. Thus, if we are not compiling with PEM_PARSE we can avoid
the extra memory copies altogether. */
ret = mbedtls_x509_crt_parse_der(&backend->clicert, ssl_cert_blob->data,
ssl_cert_blob->len);
@ -932,8 +932,8 @@ static CURLcode mbed_configure_ssl(struct Curl_cfilter *cf,
if(mbedtls_ssl_set_hostname(&backend->ssl, connssl->peer.sni ?
connssl->peer.sni : connssl->peer.hostname)) {
/* mbedtls_ssl_set_hostname() sets the name to use in CN/SAN checks and
-the name to set in the SNI extension. So even if curl connects to a
-host specified as an IP address, this function must be used. */
+the name to set in the SNI extension. Thus even if curl connects to
+a host specified as an IP address, this function must be used. */
failf(data, "Failed to set SNI");
return CURLE_SSL_CONNECT_ERROR;
}
@ -1210,7 +1210,7 @@ static CURLcode mbed_send(struct Curl_cfilter *cf, struct Curl_easy *data,
connssl->io_need = CURL_SSL_IO_NEED_NONE;
/* mbedTLS is picky when a mbedtls_ssl_write() was previously blocked.
* It requires to be called with the same amount of bytes again, or it
- * will lose bytes, e.g. reporting all was sent but they were not.
+ * loses bytes, e.g. reporting all was sent but they were not.
* Remember the blocked length and use that when set. */
if(backend->send_blocked) {
DEBUGASSERT(backend->send_blocked_len <= len);
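The comment explains that mbedTLS must be called again with the same byte count after a blocked write, or bytes can be silently lost. A hedged sketch of that bookkeeping, with hypothetical names (`send_state` is not the real backend struct):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of "retry a blocked write with the same length":
   remember the length that blocked and clamp the next attempt to it. */
struct send_state {
  int blocked;        /* previous write returned WANT_WRITE */
  size_t blocked_len; /* length that must be repeated */
};

static size_t next_write_len(struct send_state *st, size_t want)
{
  if(st->blocked) {
    /* must repeat the exact length of the blocked attempt */
    assert(st->blocked_len <= want);
    return st->blocked_len;
  }
  return want;
}

static void mark_blocked(struct send_state *st, size_t len)
{
  st->blocked = 1;
  st->blocked_len = len;
}
```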

View File

@ -35,7 +35,7 @@
* <winldap.h>, <iphlpapi.h>, or something else, <wincrypt.h> does this:
* #define X509_NAME ((LPCSTR)7)
*
- * And in BoringSSL/AWS-LC's <openssl/base.h> there is:
+ * In BoringSSL/AWS-LC's <openssl/base.h> there is:
* typedef struct X509_name_st X509_NAME;
* etc.
*
@ -661,7 +661,7 @@ static int ossl_bio_cf_in_read(BIO *bio, char *buf, int blen)
}
/* Before returning server replies to the SSL instance, we need
- * to have setup the x509 store or verification will fail. */
+ * to have setup the x509 store or verification fails. */
if(!octx->x509_store_setup) {
r2 = Curl_ssl_setup_x509_store(cf, data, octx);
if(r2) {
@ -2141,9 +2141,9 @@ static CURLcode ossl_verifyhost(struct Curl_easy *data,
case GEN_DNS: /* name/pattern comparison */
/* The OpenSSL man page explicitly says: "In general it cannot be
assumed that the data returned by ASN1_STRING_data() is null
-terminated or does not contain embedded nulls." But also that
-"The actual format of the data will depend on the actual string
-type itself: for example for an IA5String the data will be ASCII"
+terminated or does not contain embedded nulls.", but also that
+"The actual format of the data depends on the actual string
+type itself: for example for an IA5String the data is ASCII"
It has been however verified that in 0.9.6 and 0.9.7, IA5String
is always null-terminated.
@ -2201,7 +2201,7 @@ static CURLcode ossl_verifyhost(struct Curl_easy *data,
i = j;
}
-/* we have the name entry and we will now convert this to a string
+/* we have the name entry and we now convert this to a string
that we can use for comparison. Doing this we support BMPstring,
UTF8, etc. */
@ -2586,7 +2586,7 @@ static void ossl_trace(int direction, int ssl_ver, int content_type,
ssl_ver >>= 8; /* check the upper 8 bits only below */
/* SSLv2 does not seem to have TLS record-type headers, so OpenSSL
- * always pass-up content-type as 0. But the interesting message-type
+ * always pass-up content-type as 0, but the interesting message-type
* is at 'buf[0]'.
*/
if(ssl_ver == SSL3_VERSION_MAJOR && content_type)
@ -2681,7 +2681,7 @@ static CURLcode ossl_set_ssl_version_min_max(struct Curl_cfilter *cf,
case CURL_SSLVERSION_MAX_DEFAULT: /* max selected */
default:
/* SSL_CTX_set_max_proto_version states that: setting the maximum to 0
-will enable protocol versions up to the highest version supported by
+enables protocol versions up to the highest version supported by
the library */
ossl_ssl_version_max = 0;
break;
@ -2865,7 +2865,7 @@ static CURLcode ossl_win_load_store(struct Curl_easy *data,
hStore = CertOpenSystemStoreA(0, win_store);
if(hStore) {
PCCERT_CONTEXT pContext = NULL;
-/* The array of enhanced key usage OIDs will vary per certificate and
+/* The array of enhanced key usage OIDs varies per certificate and
is declared outside of the loop so that rather than malloc/free each
iteration we can grow it with realloc, when necessary. */
CERT_ENHKEY_USAGE *enhkey_usage = NULL;
@ -3129,7 +3129,7 @@ static CURLcode ossl_load_trust_anchors(struct Curl_cfilter *cf,
#ifdef CURL_CA_FALLBACK
if(octx->store_is_empty) {
-/* verifying the peer without any CA certificates will not
+/* verifying the peer without any CA certificates does not
work so use OpenSSL's built-in default as fallback */
X509_STORE_set_default_paths(store);
infof(data, " OpenSSL default paths (fallback)");
@ -3675,7 +3675,7 @@ static CURLcode ossl_init_method(struct Curl_cfilter *cf,
case CURL_SSLVERSION_TLSv1_1:
case CURL_SSLVERSION_TLSv1_2:
case CURL_SSLVERSION_TLSv1_3:
-/* it will be handled later with the context options */
+/* it is handled later with the context options */
*pmethod = TLS_client_method();
break;
case CURL_SSLVERSION_SSLv2:
@ -4080,7 +4080,7 @@ static CURLcode ossl_connect_step1(struct Curl_cfilter *cf,
#ifdef HAVE_SSL_SET0_WBIO
/* with OpenSSL v1.1.1 we get an alternative to SSL_set_bio() that works
* without backward compat quirks. Every call takes one reference, so we
- * up it and pass. SSL* then owns it and will free.
+ * up it and pass. SSL* then owns and frees it.
* We check on the function in configure, since LibreSSL and friends
* each have their own versions to add support for this. */
BIO_up_ref(bio);
@ -4440,7 +4440,7 @@ static CURLcode ossl_pkp_pin_peer_pubkey(struct Curl_easy *data, X509 *cert,
/*
* These checks are verifying we got back the same values as when we
* sized the buffer. it is pretty weak since they should always be the
- * same. But it gives us something to test.
+ * same, but it gives us something to test.
*/
if((len1 != len2) || !temp || ((temp - buff1) != len1))
break; /* failed */
@ -5088,7 +5088,7 @@ static CURLcode ossl_send(struct Curl_cfilter *cf,
memlen = (len > (size_t)INT_MAX) ? INT_MAX : (int)len;
if(octx->blocked_ssl_write_len && (octx->blocked_ssl_write_len != memlen)) {
/* The previous SSL_write() call was blocked, using that length.
- * We need to use that again or OpenSSL will freak out. A shorter
+ * We need to use that again or OpenSSL freaks out. A shorter
* length should not happen and is a bug in libcurl. */
if(octx->blocked_ssl_write_len > memlen) {
DEBUGASSERT(0);
@ -5240,7 +5240,7 @@ static CURLcode ossl_recv(struct Curl_cfilter *cf,
result = CURLE_RECV_ERROR;
}
else {
-/* We should no longer get here nowadays. But handle
+/* We should no longer get here nowadays, but handle
* the error in case of some weirdness in the OSSL stack */
int sockerr = SOCKERRNO;
if(sockerr)

View File

@ -121,7 +121,7 @@ extern const struct Curl_ssl Curl_ssl_openssl;
/**
* Setup the OpenSSL X509_STORE in `ssl_ctx` for the cfilter `cf` and
- * easy handle `data`. Will allow reuse of a shared cache if suitable
+ * easy handle `data`. Allows reuse of a shared cache if suitable
* and configured.
*/
CURLcode Curl_ssl_setup_x509_store(struct Curl_cfilter *cf,

View File

@ -297,8 +297,8 @@ static CURLcode cr_flush_out(struct Curl_cfilter *cf, struct Curl_easy *data,
* we get either an error or EAGAIN/EWOULDBLOCK.
*
* it is okay to call this function with plainbuf == NULL and plainlen == 0.
- * In that case, it will not read anything into Rustls' plaintext input buffer.
- * It will only drain Rustls' plaintext output buffer into the socket.
+ * In that case, it does not read anything into Rustls' plaintext input buffer.
+ * It only drains Rustls' plaintext output buffer into the socket.
*/
static CURLcode cr_send(struct Curl_cfilter *cf, struct Curl_easy *data,
const void *plainbuf, size_t plainlen,
@ -1120,7 +1120,7 @@ static void cr_set_negotiated_alpn(struct Curl_cfilter *cf,
/* Given an established network connection, do a TLS handshake.
*
- * This function will set `*done` to true once the handshake is complete.
+ * This function sets `*done` to true once the handshake is complete.
* This function never reads the value of `*done*`.
*/
static CURLcode cr_connect(struct Curl_cfilter *cf, struct Curl_easy *data,

View File

@ -636,7 +636,7 @@ static CURLcode acquire_sspi_handle(struct Curl_cfilter *cf,
}
else {
/* Pre-Windows 10 1809 or the user set a legacy algorithm list.
-Schannel will not negotiate TLS 1.3 when SCHANNEL_CRED is used. */
+Schannel does not negotiate TLS 1.3 when SCHANNEL_CRED is used. */
ALG_ID algIds[NUM_CIPHERS];
char *ciphers = conn_config->cipher_list;
SCHANNEL_CRED schannel_cred = { 0 };
@ -914,18 +914,18 @@ static CURLcode schannel_connect_step1(struct Curl_cfilter *cf,
unsigned short *list_len = NULL;
struct alpn_proto_buf proto;
-/* The first four bytes will be an unsigned int indicating number
+/* The first four bytes are an unsigned int indicating number
of bytes of data in the rest of the buffer. */
extension_len = (unsigned int *)(void *)(&alpn_buffer[cur]);
cur += (int)sizeof(unsigned int);
-/* The next four bytes are an indicator that this buffer will contain
+/* The next four bytes are an indicator that this buffer contains
ALPN data, as opposed to NPN, for example. */
*(unsigned int *)(void *)&alpn_buffer[cur] =
SecApplicationProtocolNegotiationExt_ALPN;
cur += (int)sizeof(unsigned int);
-/* The next two bytes will be an unsigned short indicating the number
+/* The next two bytes are an unsigned short indicating the number
of bytes used to list the preferred protocols. */
list_len = (unsigned short *)(void *)(&alpn_buffer[cur]);
cur += (int)sizeof(unsigned short);
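The comments above walk through the buffer layout Schannel expects for the ALPN extension: a 4-byte total length, a 4-byte extension-type indicator, a 2-byte protocol-list length, then the length-prefixed protocol names. A simplified sketch of that layout for a single protocol; `EXT_ALPN` is an illustrative stand-in, not the real `SecApplicationProtocolNegotiationExt_ALPN` value:

```c
#include <string.h>

#define EXT_ALPN 2u /* illustrative stand-in value */

/* Write the layout described above into buf; returns bytes written.
   Sizes depend on the platform's unsigned int/short widths. */
static int build_alpn_buffer(unsigned char *buf, const char *proto)
{
  int cur = 0;
  unsigned int ext_len;
  unsigned int ext_type = EXT_ALPN;
  unsigned short list_len = (unsigned short)(strlen(proto) + 1);

  /* extension length covers the type, the list length and the list */
  ext_len = (unsigned int)(sizeof(ext_type) + sizeof(list_len) + list_len);

  memcpy(&buf[cur], &ext_len, sizeof(ext_len));
  cur += (int)sizeof(ext_len);
  memcpy(&buf[cur], &ext_type, sizeof(ext_type));
  cur += (int)sizeof(ext_type);
  memcpy(&buf[cur], &list_len, sizeof(list_len));
  cur += (int)sizeof(list_len);
  buf[cur++] = (unsigned char)strlen(proto); /* protocol length prefix */
  memcpy(&buf[cur], proto, strlen(proto));
  cur += (int)strlen(proto);
  return cur;
}
```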
@ -1394,7 +1394,7 @@ static CURLcode schannel_connect_step2(struct Curl_cfilter *cf,
case SEC_I_INCOMPLETE_CREDENTIALS:
if(!(backend->req_flags & ISC_REQ_USE_SUPPLIED_CREDS)) {
/* If the server has requested a client certificate, attempt to
-continue the handshake without one. This will allow connections to
+continue the handshake without one. This allows connections to
servers which request a client certificate but do not require
it. */
backend->req_flags |= ISC_REQ_USE_SUPPLIED_CREDS;
@ -1475,7 +1475,7 @@ static CURLcode schannel_connect_step2(struct Curl_cfilter *cf,
}
/* Verify the hostname manually when certificate verification is disabled,
-because in that case Schannel will not verify it. */
+because in that case Schannel does not verify it. */
if(!conn_config->verifypeer && conn_config->verifyhost)
return Curl_verify_host(cf, data);
@ -2028,7 +2028,7 @@ static CURLcode schannel_send(struct Curl_cfilter *cf, struct Curl_easy *data,
sent. The unwritten encrypted bytes would be the first bytes to
send on the next invocation.
Here's the catch with this - if we tell the client that all the
-bytes have been sent, will the client call this method again to
+bytes have been sent, does the client call this method again to
send the buffered data? Looking at who calls this function, it
seems the answer is NO.
*/

View File

@ -262,7 +262,7 @@ static CURLcode add_certs_file_to_store(HCERTSTORE trust_store,
/*
* Read the CA file completely into memory before parsing it. This
- * optimizes for the common case where the CA file will be relatively
+ * optimizes for the common case where the CA file is relatively
* small ( < 1 MiB ).
*/
ca_file_handle = curlx_CreateFile(ca_file,
@ -366,7 +366,7 @@ static DWORD cert_get_name_string(struct Curl_easy *data,
#ifndef CERT_NAME_SEARCH_ALL_NAMES_FLAG
#define CERT_NAME_SEARCH_ALL_NAMES_FLAG 0x2
#endif
-/* CertGetNameString will provide the 8-bit character string without
+/* CertGetNameString provides the 8-bit character string without
* any decoding */
DWORD name_flags =
CERT_NAME_DISABLE_IE4_UTF8_FLAG | CERT_NAME_SEARCH_ALL_NAMES_FLAG;
@ -572,7 +572,7 @@ CURLcode Curl_verify_host(struct Curl_cfilter *cf, struct Curl_easy *data)
goto cleanup;
}
-/* CertGetNameString guarantees that the returned name will not contain
+/* CertGetNameString guarantees that the returned name does not contain
* embedded null bytes. This appears to be undocumented behavior.
*/
cert_hostname_buff = (LPTSTR)curlx_malloc(len * sizeof(TCHAR));
@ -763,7 +763,7 @@ CURLcode Curl_verify_certificate(struct Curl_cfilter *cf,
else
engine_config.cbSize = sizeof(struct cert_chain_engine_config_win7);
-/* CertCreateCertificateChainEngine will check the expected size of the
+/* CertCreateCertificateChainEngine checks the expected size of the
* CERT_CHAIN_ENGINE_CONFIG structure and fail if the specified size
* does not match the expected size. When this occurs, it indicates that
* CAINFO is not supported on the version of Windows in use.

View File

@ -869,7 +869,7 @@ CURLcode Curl_pin_peer_pubkey(struct Curl_easy *data,
}
/*
- * Otherwise we will assume it is PEM and try to decode it after placing
+ * Otherwise we assume it is PEM and try to decode it after placing
* null-terminator
*/
pem_read = pubkey_pem_to_der(curlx_dyn_ptr(&buf), &pem_ptr, &pem_len);

View File

@ -140,7 +140,7 @@ bool Curl_ssl_conn_config_match(struct Curl_easy *data,
bool proxy);
/* Update certain connection SSL config flags after they have
- * been changed on the easy handle. Will work for `verifypeer`,
+ * been changed on the easy handle. Works for `verifypeer`,
* `verifyhost` and `verifystatus`. */
void Curl_ssl_conn_config_update(struct Curl_easy *data, bool for_proxy);

View File

@ -52,7 +52,7 @@ void Curl_ssl_scache_destroy(struct Curl_ssl_scache *scache);
/* Create a key from peer and TLS configuration information that is
* unique for how the connection filter wants to establish a TLS
* connection to the peer.
- * If the filter is a TLS proxy filter, it will use the proxy relevant
+ * If the filter is a TLS proxy filter, it uses the proxy relevant
* information.
* @param cf the connection filter wanting to use it
* @param peer the peer the filter wants to talk to

View File

@ -112,7 +112,7 @@
* Availability note:
* The TLS 1.3 secret callback (wolfSSL_set_tls13_secret_cb) was added in
* wolfSSL 4.4.0, but requires the -DHAVE_SECRET_CALLBACK build option. If that
- * option is not set, then TLS 1.3 will not be logged.
+ * option is not set, then TLS 1.3 is not logged.
* For TLS 1.2 and before, we use wolfSSL_get_keys().
* wolfSSL_get_client_random and wolfSSL_get_keys require OPENSSL_EXTRA
* (--enable-opensslextra or --enable-all).
@ -1733,7 +1733,7 @@ static CURLcode wssl_handshake(struct Curl_cfilter *cf, struct Curl_easy *data)
}
else if(DOMAIN_NAME_MISMATCH == detail) {
/* There is no easy way to override only the CN matching.
- * This will enable the override of both mismatching SubjectAltNames
+ * This enables the override of both mismatching SubjectAltNames
* as also mismatching CN fields */
failf(data, " subject alt name(s) or common name do not match \"%s\"",
connssl->peer.dispname);
@ -2145,8 +2145,8 @@ static CURLcode wssl_connect(struct Curl_cfilter *cf,
}
if(ssl_connect_3 == connssl->connecting_state) {
-/* Once the handshake has errored, it stays in that state and will
-* error again on every call. */
+/* Once the handshake has errored, it stays in that state and
+* errors again on every call. */
if(wssl->hs_result) {
result = wssl->hs_result;
goto out;

View File

@ -1426,7 +1426,7 @@ CURLcode Curl_ws_accept(struct Curl_easy *data,
k->keepon |= KEEP_SEND;
}
-/* And pass any additional data to the writers */
+/* Then pass any additional data to the writers */
if(nread) {
result = Curl_client_write(data, CLIENTWRITE_BODY, mem, nread);
if(result)

View File

@ -17,6 +17,8 @@
use strict;
use warnings;
use File::Basename;
my @whitelist = (
# ignore what looks like URLs
'(^|\W)((https|http|ftp):\/\/[a-z0-9\-._~%:\/?\#\[\]\@!\$&\'\(\)*+,;=]+)',
@ -99,16 +101,32 @@ sub highlight {
my ($p, $w, $in, $f, $l, $lookup) = @_;
my $c = length($p)+1;
my $ch = "$f:$l:$w";
my $ch;
my $dir = dirname($f);
$ch = $dir . "/" . "::" . $w;
if($wl{$ch}) {
# whitelisted filename + line + word
# whitelisted dirname + word
return;
}
my $updir = dirname($dir);
if($dir ne $updir) {
$ch = $updir . "/" . "::" . $w;
if($wl{$ch}) {
# whitelisted upper dirname + word
return;
}
}
$ch = $f . "::" . $w;
if($wl{$ch}) {
# whitelisted filename + word
return;
}
$ch = "$f:$l:$w";
if($wl{$ch}) {
# whitelisted filename + line + word
return;
}
print STDERR "$f:$l:$c: error: found bad word \"$w\"\n";
printf STDERR " %4d | %s\n", $l, $in;


@ -1,12 +1,14 @@
#!/bin/sh
#!/usr/bin/env perl
# Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
#
# SPDX-License-Identifier: curl
set -eu
use strict;
use warnings;
cd "$(dirname "${0}")"/..
use File::Basename;
# we allow some extra in source code
grep -Ev '^(will:|But=|So=|And=| url=)' scripts/badwords.txt | scripts/badwords -a src lib include docs/examples
scripts/badwords -w scripts/badwords.ok '**.md' projects/OS400/README.OS400 < scripts/badwords.txt
chdir dirname(__FILE__) . "/..";
system("scripts/badwords -a -w scripts/badwords.ok src lib include docs/examples < scripts/badwords.txt");
system("scripts/badwords -w scripts/badwords.ok '**.md' projects/OS400/README.OS400 < scripts/badwords.txt");
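The converted script calls system() without checking its return value, while the shell original ran under `set -eu`. A sketch of how exit-code propagation could be kept, using a hypothetical `run` helper that is not part of this commit:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Hypothetical helper: propagate child failures as a die(), mirroring the
# 'set -eu' behavior of the shell version this script replaces.
sub run {
    my ($cmd) = @_;
    system($cmd) == 0 or die "command failed: $cmd\n";
}

run(q{perl -e 'exit 0'});           # succeeds silently
eval { run(q{perl -e 'exit 7'}) };  # non-zero exit turns into a die()
print "caught: $@" if $@;
```

With a wrapper like this, a failing badwords pass would make the whole script exit non-zero, which CMake can then detect.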


@ -4,5 +4,12 @@
#
# whitelisted uses of bad words
# file:[line]:rule
docs/FAQ.md::will
docs/FAQ.md::Will
lib/urldata.h:: url
include/curl/::will
lib/::But
lib/::So
lib/::will
lib/::Will
lib/::WILL
src/::will
src/::Will
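The entries above are matched by scripts/badwords in the order shown earlier: per-directory, per-upper-directory, per-file, then file:line. A standalone sketch of that lookup (hypothetical helper with sample entries; the real script builds these keys inline inside highlight()):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use File::Basename;

# Sample entries in the badwords.ok format: "dir/::word" or "file::word".
my %wl = map { $_ => 1 } ('lib/::will', 'docs/FAQ.md::will');

# Return true if word $w at $f:$l is whitelisted, checking the dir,
# parent dir, file, then file:line keys -- same order as highlight().
sub whitelisted {
    my ($f, $l, $w) = @_;
    my $dir = dirname($f);
    return 1 if $wl{$dir . "/::" . $w};
    my $updir = dirname($dir);
    return 1 if $dir ne $updir && $wl{$updir . "/::" . $w};
    return 1 if $wl{$f . "::" . $w};
    return 1 if $wl{"$f:$l:$w"};
    return 0;
}

# "lib/vtls/openssl.c" matches via its parent directory, "lib/::will".
print whitelisted("lib/vtls/openssl.c", 10, "will") ? "skip\n" : "flag\n";
```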


@ -66,7 +66,7 @@ if(CURL_CA_EMBED_SET)
list(APPEND _curl_cfiles_gen "tool_ca_embed.c")
list(APPEND _curl_definitions "CURL_CA_EMBED")
else()
message(WARNING "Perl not found. Will not embed the CA bundle.")
message(WARNING "Perl not found. Cannot embed the CA bundle.")
endif()
endif()


@ -845,8 +845,8 @@ CURLcode config2setopts(struct OperationConfig *config,
#ifndef DEBUGBUILD
/* On most modern OSes, exiting works thoroughly,
we will clean everything up via exit(), so do not bother with
slow cleanups. Crappy ones might need to skip this.
we clean everything up via exit(), so do not bother with slow
cleanups. Crappy ones might need to skip this.
Note: avoid having this setopt added to the --libcurl source
output. */
result = curl_easy_setopt(curl, CURLOPT_QUICK_EXIT, 1L);


@ -38,7 +38,7 @@ struct slist_wc {
*
* DESCRIPTION
*
* Appends a string to a linked list. If no list exists, it will be created
* Appends a string to a linked list. If no list exists, it is created
* first. Returns the new list, after appending.
*/
struct slist_wc *slist_wc_append(struct slist_wc *list, const char *data);


@ -37,7 +37,7 @@
/*
* get_terminal_columns() returns the number of columns in the current
* terminal. It will return 79 on failure. Also, the number can be big.
* terminal. It returns 79 on failure. Also, the number can be big.
*/
unsigned int get_terminal_columns(void)
{
@ -71,7 +71,7 @@ unsigned int get_terminal_columns(void)
GetConsoleScreenBufferInfo(stderr_hnd, &console_info)) {
/*
* Do not use +1 to get the true screen-width since writing a
* character at the right edge will cause a line wrap.
* character at the right edge causes a line wrap.
*/
cols = (int)(console_info.srWindow.Right - console_info.srWindow.Left);
}


@ -78,7 +78,7 @@ int tool_seek_cb(void *userdata, curl_off_t offset, int whence)
if(curl_lseek(per->infd, offset, whence) == LSEEK_ERROR)
/* could not rewind, the reason is in errno but errno is not portable
enough and we do not actually care that much why we failed. We will let
enough and we do not actually care that much why we failed. We let
libcurl know that it may try other means if it wants to. */
return CURL_SEEKFUNC_CANTSEEK;


@ -214,7 +214,7 @@ struct OperationConfig {
by using the default behavior for -o, -O, and -J.
If those options would have overwritten files, like
-o and -O would, then overwrite them. In the case of
-J, this will not overwrite any files. */
-J, this does not overwrite any files. */
CLOBBER_NEVER, /* If the file exists, always fail */
CLOBBER_ALWAYS /* If the file exists, always overwrite it */
} file_clobber_mode;


@ -54,7 +54,7 @@ static void show_dir_errno(const char *name)
#endif
#ifdef ENOSPC
case ENOSPC:
errorf("No space left on the file system that will "
errorf("No space left on the file system that would "
"contain the directory %s", name);
break;
#endif


@ -65,7 +65,7 @@ char **__crt0_glob_function(char *arg)
#endif
/*
* Test if truncating a path to a file will leave at least a single character
* Test if truncating a path to a file leaves at least a single character
* in the filename. Filenames suffixed by an alternate data stream cannot be
* truncated. This performs a dry run, nothing is modified.
*
@ -599,7 +599,7 @@ struct curl_slist *GetLoadedModulePaths(void)
#ifdef UNICODE
/* sizeof(mod.szExePath) is the max total bytes of wchars. the max total
bytes of multibyte chars will not be more than twice that. */
bytes of multibyte chars is not more than twice that. */
char buffer[sizeof(mod.szExePath) * 2];
if(!WideCharToMultiByte(CP_ACP, 0, mod.szExePath, -1,
buffer, sizeof(buffer), NULL, NULL))
@ -783,8 +783,8 @@ curl_socket_t win32_stdin_read_thread(void)
errorf("curlx_calloc() error");
break;
}
/* Create the listening socket for the thread. When it starts, it will
* accept our connection and begin writing STDIN data to the connection. */
/* Create the listening socket for the thread. When it starts, it accepts
* our connection and begins writing STDIN data to the connection. */
tdata->socket_l = CURL_SOCKET(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if(tdata->socket_l == CURL_SOCKET_BAD) {
errorf("socket() error: %d", SOCKERRNO);
@ -822,7 +822,7 @@ curl_socket_t win32_stdin_read_thread(void)
/* Start up the thread. We do not bother keeping a reference to it
because it runs until program termination. From here on out all reads
from the stdin handle or file descriptor 0 will be reading from the
from the stdin handle or file descriptor 0 come from the
socket that is fed by the thread. */
stdin_thread = CreateThread(NULL, 0, win_stdin_thread_func,
tdata, 0, NULL);


@ -690,14 +690,14 @@ static int get_param_part(char endchar,
* 'name=foo;headers=@headerfile' or why not
* 'name=@filemame;headers=@headerfile'
*
* To upload a file, but to fake the filename that will be included in the
* To upload a file, but to fake the filename that is included in the
* formpost, do like this:
*
* 'name=@filename;filename=/dev/null' or quote the faked filename like:
* 'name=@filename;filename="play, play, and play.txt"'
*
* If filename/path contains ',' or ';', it must be quoted by double-quotes,
* else curl will fail to figure out the correct filename. if the filename
* else curl fails to figure out the correct filename. If the filename
* to be quoted contains '"' or '\', '"' and '\' must be escaped by backslash.
*
***************************************************************************/
@ -716,8 +716,8 @@ int formparse(const char *input,
struct tool_mime **mimecurrent,
bool literal_value)
{
/* input MUST be a string in the format 'name=contents' and we will
build a linked list with the info */
/* input MUST be a string in the format 'name=contents' and we build
a linked list with the info */
char *name = NULL;
char *contents = NULL;
char *contp;


@ -1439,8 +1439,8 @@ static ParameterError parse_range(struct OperationConfig *config,
}
if(!curlx_str_number(&nextarg, &value, CURL_OFF_T_MAX) &&
curlx_str_single(&nextarg, '-')) {
/* Specifying a range WITHOUT A DASH will create an illegal HTTP range
(and will not actually be range by definition). The man page previously
/* Specifying a range WITHOUT A DASH creates an illegal HTTP range
(and is not actually a range by definition). The man page previously
claimed that to be a good way, why this code is added to work-around
it. */
char buffer[32];
@ -1529,7 +1529,7 @@ static ParameterError parse_verbose(bool toggle)
if(!global->trace_set && set_trace_config("-all"))
return PARAM_NO_MEM;
}
/* the '%' thing here will cause the trace get sent to stderr */
/* the '%' thing here causes the trace to get sent to stderr */
switch(global->verbosity) {
case 0:
global->verbosity = 1;
@ -2789,7 +2789,7 @@ static ParameterError opt_string(struct OperationConfig *config,
case C_FTP_PORT: /* --ftp-port */
/* This makes the FTP sessions use PORT instead of PASV */
/* use <eth0> or <192.168.10.10> style addresses. Anything except
this will make us try to get the "default" address.
this makes us try to get the "default" address.
NOTE: this is a changed behavior since the released 4.1!
*/
err = getstr(&config->ftpport, nextarg, DENY_BLANK);


@ -30,7 +30,7 @@
also found in one of the standard headers. */
/*
* Returning NULL will abort the continued operation!
* Returning NULL aborts the continued operation!
*/
char *getpass_r(const char *prompt, char *buffer, size_t buflen);
#endif


@ -328,7 +328,7 @@ void tool_version_info(void)
/* we have ipfs and ipns support if libcurl has http support */
for(builtin = built_in_protos; *builtin; ++builtin) {
if(insert) {
/* update insertion so ipfs will be printed in alphabetical order */
/* update insertion so ipfs is printed in alphabetical order */
if(strcmp(*builtin, "ipfs") < 0)
insert = *builtin;
else


@ -127,7 +127,7 @@ CURLcode ipfs_url_rewrite(CURLU *uh, const char *protocol, char **url,
goto clean;
/* We might have a --ipfs-gateway argument. Check it first and use it. Error
* if we do have something but if it is an invalid url.
* if we do have something but it is an invalid URL.
*/
if(config->ipfs_gateway) {
if(!curl_url_set(gatewayurl, CURLUPART_URL, config->ipfs_gateway,


@ -78,7 +78,7 @@ int _CRT_glob = 0;
* Ensure that file descriptors 0, 1 and 2 (stdin, stdout, stderr) are
* open before starting to run. Otherwise, the first three network
* sockets opened by curl could be used for input sources, downloaded data
* or error logs as they will effectively be stdin, stdout and/or stderr.
* or error logs as they are effectively stdin, stdout and/or stderr.
*
* fcntl's F_GETFD instruction returns -1 if the file descriptor is closed,
* otherwise it returns "the file descriptor flags (which typically can only
@ -111,7 +111,7 @@ static void memory_tracking_init(void)
curl_free(env);
curl_dbg_memdebug(fname);
/* this weird stuff here is to make curl_free() get called before
curl_dbg_memdebug() as otherwise memory tracking will log a curlx_free()
curl_dbg_memdebug() as otherwise memory tracking logs a curlx_free()
without an alloc! */
}
/* if CURL_MEMLIMIT is set, this enables fail-on-alloc-number-N feature */


@ -134,7 +134,7 @@ static bool is_pkcs11_uri(const char *string)
* For fixed files, find out the size of the EOF block and adjust.
*
* For all others, have to read the entire file in, discarding the contents.
* Most posted text files will be small, and binary files like zlib archives
* Most posted text files are small, and binary files like zlib archives
* and CD/DVD images should be either a STREAM_LF format or a fixed format.
*
*/
@ -248,7 +248,7 @@ static CURLcode pre_transfer(struct per_transfer *per)
/* VMS Note:
*
* Reading binary from files can be a problem... Only FIXED, VAR
* etc WITHOUT implied CC will work. Others need a \n appended to
* etc WITHOUT implied CC work. Others need a \n appended to
* a line
*
* - Stat gives a size but this is UNRELIABLE in VMS. E.g.
@ -487,7 +487,7 @@ static CURLcode retrycheck(struct OperationConfig *config,
if(!sleeptime)
sleeptime = per->retry_sleep;
warnf("Problem %s. "
"Will retry in %ld%s%.*ld second%s. "
"Retrying in %ld%s%.*ld second%s. "
"%ld retr%s left.",
m[retry], sleeptime / 1000L,
(sleeptime % 1000L ? "." : ""),
@ -554,7 +554,7 @@ static CURLcode retrycheck(struct OperationConfig *config,
/* truncate file at the position where we started appending */
#if defined(HAVE_FTRUNCATE) && !defined(__DJGPP__) && !defined(__AMIGA__)
if(ftruncate(fileno(outs->stream), outs->init)) {
/* when truncate fails, we cannot append as then we will
/* when truncate fails, we cannot append as we would then
create something strange, bail out */
errorf("Failed to truncate file");
return CURLE_WRITE_ERROR;
@ -1141,7 +1141,7 @@ static void check_stdin_upload(struct OperationConfig *config,
if(!strcmp(per->uploadfile, ".")) {
#if defined(USE_WINSOCK) && !defined(CURL_WINDOWS_UWP)
/* non-blocking stdin behavior on Windows is challenging
Spawn a new thread that will read from stdin and write
Spawn a new thread that reads from stdin and writes
out to a socket */
curl_socket_t f = win32_stdin_read_thread();
@ -1344,7 +1344,7 @@ static CURLcode create_single(struct OperationConfig *config,
}
if(config->resume_from_current)
config->resume_from = -1; /* -1 will then force get-it-yourself */
config->resume_from = -1; /* -1 then forces get-it-yourself */
}
if(!outs->out_null && output_expected(per->url, per->uploadfile) &&
@ -1489,7 +1489,7 @@ static CURLcode add_parallel_transfers(CURLM *multi, CURLSH *share,
return result;
/* parallel connect means that we do not set PIPEWAIT since pipewait
will make libcurl prefer multiplexing */
makes libcurl prefer multiplexing */
(void)curl_easy_setopt(per->curl, CURLOPT_PIPEWAIT,
global->parallel_connect ? 0L : 1L);
(void)curl_easy_setopt(per->curl, CURLOPT_PRIVATE, per);
@ -1746,7 +1746,7 @@ static CURLcode parallel_event(struct parastate *s)
}
/* We need to cleanup the multi here, since the uv context lives on the
* stack and will be gone. multi_cleanup can trigger events! */
* stack and is about to be gone. multi_cleanup can trigger events! */
curl_multi_cleanup(s->multi);
#if DEBUG_UV


@ -75,7 +75,7 @@ struct per_transfer {
BIT(added); /* set TRUE when added to the multi handle */
BIT(abort); /* when doing parallel transfers and this is TRUE then a critical
error (eg --fail-early) has occurred in another transfer and
this transfer will be aborted in the progress callback */
this transfer gets aborted in the progress callback */
BIT(skip); /* considered already done */
};


@ -79,7 +79,7 @@ CURLcode urlerr_cvt(CURLUcode ucode)
/*
* Adds the filename to the URL if it does not already have one.
* URL will be freed before return if the returned pointer is different
* URL is freed before return if the returned pointer is different
*/
CURLcode add_file_name_to_url(CURL *curl, char **inurlp, const char *filename)
{


@ -42,7 +42,7 @@
* 'regular_file' member is TRUE when output goes to a regular file, this also
* implies that output is 'seekable' and 'appendable' and also that member
* 'filename' points to filename's string. For any standard stream member
* 'regular_file' will be FALSE.
* 'regular_file' is FALSE.
*
* 'fopened' member is TRUE when output goes to a regular file and it
* has been fopen'ed, requiring it to be closed later on. In any other


@ -33,7 +33,7 @@
* _LARGE_FILES in order to support files larger than 2 GB. On platforms
* where this happens it is mandatory that these macros are defined before
* any system header file is included, otherwise file handling function
* prototypes will be misdeclared and curl tool may not build properly;
* prototypes are misdeclared and curl tool may not build properly;
* therefore we must include curl_setup.h before curl.h when building curl.
*/


@ -47,7 +47,7 @@ void tool_set_stderr_file(const char *filename)
}
/* precheck that filename is accessible to lessen the chance that the
subsequent freopen will fail. */
subsequent freopen fails. */
fp = curlx_fopen(filename, FOPEN_WRITETEXT);
if(!fp) {
warnf("Warning: Failed to open %s", filename);


@ -122,7 +122,7 @@ static CURLcode glob_set(struct URLGlob *glob, const char **patternp,
goto error;
}
/* add 1 to size since it will be incremented below */
/* add 1 to size since it is to be incremented below */
if(multiply(amount, size + 1)) {
result = globerror(glob, "range overflow", 0, CURLE_URL_MALFORMAT);
goto error;